I still owe the obligatory post-DebConf7 post... These past weeks have been intense...
So, first of all, thanks to the whole DC team for another wonderful conference. Next year I'll get to feel first-hand how one of the two things that everybody likes, but nobody wants to see being made, actually gets made. Hint: not sausages.
This was also the first time I attended DebCamp, and it was very useful. Not that I got any real work done (very few people manage that during DebConf), but I did what I had planned: talk with many of the people working on debian-cd, debian-installer and custom CDDs.
I've already uploaded my pictures to a temporary place, without captions, until I decide how I will replace the stinko called gallery. There you can see that I was one of the few who actually wore a kilt during the event (not a proper Debian kilt, but hey...).
After DebConf, and after a year of hard work, I took a seven-day vacation in the UK. So I finally got to see Edinburgh (during the conference I preferred hanging out with Debian people to doing tourism, so until then I only knew the way to the supermarket). I also climbed Arthur's Seat, which was beautiful and only about an hour's walk from the hostel to the top. Then I paid a short visit to Stirling and concluded that Scottish castles are not that interesting (but do go to the Smith Museum!).
Then I went to London for a couple of days (dang, that was expensive!!), visited Marx's tomb, rode near the car bombs, got to know all the nice stuff that the British stole from around the world at the British Museum (it's really great, and free), and spent a day in Cambridge (and went punting; I even did the punting myself!). Finally, I spent my last day walking around that beautiful city, in the Soho and South Bank areas. I don't want to forget to thank Nattie, Kai and Ben for hosting me; you guys rock!!
This last week has been very productive. I was working on my GSoC project, trying to make up for lost time and staying up late. It was worth it: I have finished settling the architecture and overall design, and wrote the main program that will coordinate all the tests to be run. It includes dependency-based scheduling and parallelism from the beginning, so I hope it will scale well on SMP machines. I also wrote a draft specification for the plugins and a description of the running plan on the wiki page.
I've also asked for the creation of an Alioth project, so I can start publishing the code and get feedback from the people who will use it.
PS: do MIND THE GAP, please.
Hi. Like everybody else, I'm blogging about the status of my GSoC project, which now has a name: Pancután.
Pancután is a lintian-like tool that checks ISO images generated by debian-cd and friends. It aims to support CDDs, and even Debian-Live CDs. In case you're wondering, the name is a stupid pun about avoiding (skin) burns and unusable Debian CDs (Pancután is a vintage lotion for burns in Argentina, and I think most youngsters won't even recognise the name). In any case, it sounds good :-).
Now that it has a name, it also has an Alioth project page, an SVN repository and a mailing list.
If you want to try it out, just check out the repository and run:
$ ./pancutan <list of iso files>
If you set the environment variable DEBUG to a number greater than zero, you get debugging info.
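For example (some-image.iso being whatever ISO you want to check):
$ DEBUG=1 ./pancutan some-image.iso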
Its dependencies are currently libyaml-perl (or libyaml-syck-perl for faster operation), fuseiso9660, and a user with FUSE permissions.
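On Debian that usually means being a member of the fuse group, i.e. something along these lines (youruser being, of course, your own user name):
# adduser youruser fuse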
Of course, since I've been working on the architecture, there aren't many tests yet. But you can try the powerpc businesscard ISO, for which pancutan currently detects that it's too big for a businesscard CD:
$ ./pancutan /media/IOMega/mirror/isos/debian-40r0-powerpc-businesscard.iso
E: image-too-big - The ISO file is too big for the target media + 73207808
About the architecture.
The general idea is that a directory is scanned for modules; the main program "requires" them and extracts the metadata embedded in them (I'm still thinking this could be separated, so that modules are only compiled when they're needed, but I haven't decided yet). From that metadata, the task and tag (error and warning) definitions are loaded. A lot of internal sanity checking is also performed.
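To give an idea of what such a module could look like, here is a minimal sketch of a hypothetical plugin. The package name, the metadata layout and the 50 MB limit are my own guesses for illustration, not the actual spec from the wiki draft:

# Hypothetical plugin sketch; names, layout and the limit are invented.
package Pancutan::Plugin::SizeLimit;

use strict;
use warnings;

# Metadata the core would extract after require-ing the module: the task
# name, its dependencies, and the tags (errors/warnings) it can emit.
our %METADATA = (
    task    => 'size-limit',
    depends => ['iso-info'],
    tags    => {
        'image-too-big' => {
            type        => 'E',
            description => 'The ISO file is too big for the target media',
        },
    },
);

# Businesscard CDs hold roughly 50 MB; the exact limit here is made up.
my $BUSINESSCARD_LIMIT = 50 * 1024 * 1024;

# The task itself: given the path to the ISO under test, return the list
# of tags to report (plus extra data, as in the output shown above).
sub run {
    my ($iso) = @_;
    my $size = -s $iso;
    return $size > $BUSINESSCARD_LIMIT
        ? [ [ 'image-too-big', $size ] ]
        : [];
}

1;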
There isn't any fixed order of execution; it is built from the dependencies the tasks declare. It has also been designed from the start with parallelisation in mind, although that functionality hasn't been written yet (I'll do it during this week).
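To make that concrete, here is a small stand-alone sketch (my own illustration, not Pancután's actual scheduler; the task names and dependencies are invented) of how an execution order falls out of the declared dependencies, with each batch holding tasks that could run in parallel:

#!/usr/bin/perl
# Toy dependency-based scheduler, for illustration only.
use strict;
use warnings;

my %depends = (
    'iso-info'   => [],
    'size-limit' => ['iso-info'],
    'md5-check'  => ['iso-info'],
);

my %done;
while (keys %done < keys %depends) {
    # A task is ready when all of its dependencies have already run.
    my @ready = grep {
        my $task = $_;
        !$done{$task} && !grep { !$done{$_} } @{ $depends{$task} }
    } sort keys %depends;
    die "circular dependency among tasks\n" unless @ready;

    # Tasks within one batch are independent of each other, so this is
    # where parallel workers (e.g. one per CPU) could be used.
    print "batch: @ready\n";
    $done{$_} = 1 for @ready;
}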
I have written a draft spec for the plug-ins in the wiki. I still have to document it properly, but you can get an idea from it.
At this point, I'd like other, more experienced people to take a look at it, so feedback is very welcome, especially from the people who will use the tool!