usb-creator

How to create a debian installer on a usb pen drive ? There are many ways, from a simple sudo sh -c 'zcat boot.img.gz > /dev/sdb' to unetbootin.

Last night I discovered a very nice tool from ubuntu (usb-creator) that apparently is not yet in the debian archives. There is a bit of documentation here if you don't know it already.

Installing it on debian is a breeze (I used the debs from lucid) and it works pretty well. The reason I tried this one is that unetbootin sometimes failed for me, producing an unusable usb key. The old school method of course works, but sometimes, late at night, I feel the need for a friendly gtk interface to help my sleep.



SNCF on its way to drupal !

I've just learned from this article (in french) that the SNCF, the french railway company, is migrating all its infrastructure to an open source based solution. Apparently the organization has already migrated a large part of its servers to IBM hardware running linux, with apache tomcat for the applications and drupal as the content management system.

Pretty soon voyages-sncf will also migrate to drupal. Hopefully it will work better than the current website, which from time to time gives me a really hard time when booking tickets.

Hurray for SNCF (at least regarding this move :) )


More on Xen 4.0 setup on squeeze

After the upgrade of last week, I didn't have any major problems : xen 4 seems pretty stable and does its job well. One problem I encountered the other day was with dom0 ballooning. By default, xen sets dom0_min_mem to 196MB and leaves ballooning enabled. This is all well and good until you try to give too much memory to your VMs, squeezing dom0 down to its minimum amount of memory and causing all sorts of problems. On the xen wiki, the recommended best practice is to reserve a minimum of 512MB to dom0 for its operations. This is done by setting dom0_mem=512M on the grub command line and then adjusting enable-dom0-ballooning to no and dom0-min-mem according to the amount of memory you chose.

On debian, you can set the grub command line once and for all just by adding this variable to /etc/default/grub :

GRUB_CMDLINE_XEN="dom0_mem=512M"
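
After editing /etc/default/grub, remember to run update-grub to regenerate the boot configuration. The ballooning settings live on the xend side; a minimal sketch, assuming the stock file location on squeeze :

# /etc/xen/xend-config.sxp
(enable-dom0-ballooning no)
(dom0-min-mem 512)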

Another small problem is related to the reboot sequence. Since I'm using LVM on AoE (ATA over Ethernet), the default shutdown sequence (network down first, LVM later) is not going to work for me. As I have a few LVM volumes on AoE and others on the physical disk, the proper solution is a custom shutdown script for the AoE LVM volumes that runs before the network interfaces are deconfigured (a sketch follows below). In the meantime, to avoid the kernel hanging there forever, I've added these lines to /etc/sysctl.d/panic.conf

# Reboot 5 seconds after panic
kernel.panic = 5

# Panic if a hung task was found
kernel.hung_task_panic = 1

# Set the hung task timeout to 120 seconds
kernel.hung_task_timeout_secs = 120

This will instruct the kernel to panic, and then reboot, if a task does not respond for more than 120 seconds.
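
As for the proper fix, the sketch promised above could be as simple as an init script that deactivates the AoE-backed volume group while the network is still up (vg_aoe is a hypothetical name, use your own) :

#!/bin/sh
# hypothetical /etc/init.d/aoe-lvm-shutdown : deactivate LVM volumes
# living on AoE devices before the network interfaces go down
vgchange -an vg_aoe

Hook it into the shutdown runlevels so it runs before the networking script.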


easy cudf parsing in python

With the fourth run of Misc live, you might wonder how you can quickly write a parser for a cudf document. If you are writing your solver in C / C++, I advise you either to grab the legacy ocaml parser and use its C bindings, or to reuse a parser written by other competitors (all frontends have a FOSS-compatible licence).

If you want to write a quick and dirty frontend in python, maybe the following few lines might help you:

from itertools import groupby

# fields whose values are encoded in conjunctive normal form (CNF)
cnf_fields = ['conflict', 'depends', 'provides', 'recommends']

def cnf(k, s):
    # split a CNF value into its disjunctions,
    # e.g. "a, b | c" -> [['a'], ['b ', ' c']]
    if k in cnf_fields:
        l = s.split(',')
        ll = map(lambda s: s.split('|'), l)
        return ll
    else:
        return s

records = []
# stanzas are separated by blank lines, so we group on line.isspace()
for empty, record in groupby(open("universe.cudf"), key=str.isspace):
    if not empty:
        l = map(lambda s: s.split(': '), record)
        # we ignore the preamble here ...
        if 'preamble' not in l[0]:
            pairs = ([k, cnf(k, v.strip())] for k, v in l)
            records.append(dict(pairs))

for i in records:
    print i

We use the function groupby from itertools to create a list of stanzas and then we just transform each of them into a dictionary that should be pretty easy to manipulate. We ignore the preamble, but adding support for it should be straightforward... I got the idea from this forum post.

The result :

$ python cudf.py
{'recommends': [['perl-modules '], [' libio-socket-inet6-perl']], 'package': '2ping', 'replaces': '', 'number': '1.0-1', 'sourceversion': '1.0-1', 'source': '2ping', 'depends': [['perl']], 'version': '4806', 'architecture': 'all', 'conflicts': '2ping'}
{'replaces': '', 'package': 'abook', 'number': '0.5.6-7+b1', 'sourceversion': '0.5.6-7', 'source': 'abook', 'depends': [['libc6 >= 9022 '], [' libncursesw5 >= 12348 '], [' libreadline5 >= 12239 '], [' debconf >= 1510 ', ' debconf-2.0--virtual ', ' debconf-2.0']], 'version': '1712', 'architecture': 'amd64', 'conflicts': 'abook', 'recommends': [['true!']]}
...

Update

Maybe a small example of the input file would help :)

package: m4
version: 3
depends: libc6 >= 8

package: openssl
version: 11
depends: libc6 >= 18, libssl0.9.8 >= 8, zlib1g >= 1
conflicts: ssleay < 1
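
Once you have the records list from the script above, querying it is easy. A toy example (the libc6 test below is just an illustration, not part of the original script) :

# list all packages whose dependencies mention libc6
for r in records:
    for disjunction in r.get('depends', []):
        if any(atom.strip().startswith('libc6') for atom in disjunction):
            print r['package']
            break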


xen 4 on debian squeeze

It's time to upgrade my xen servers to squeeze. I've already put this off for too long and now I have the task of going from etch to squeeze in one long step. In order to avoid problems I just did a first upgrade etch -> lenny and then one to squeeze. However, since so much has changed in the meantime, and so much tweaking of essential components (such as Xen !) is needed, I guess I could have gone directly from etch to squeeze in one go and fixed everything in the process... Anyway, too late for this kind of consideration.

The xen debian wiki is full of invaluable information. Kudos to the xen team for their hard work. To get you started on squeeze, you need to install the xen hypervisor and a dom0 kernel; everything is provided by a couple of packages:
aptitude install xen-linux-system-2.6-xen-amd64 xen-hypervisor-4.0-amd64

This will pull in the latest linux kernel and the xen hypervisor to run dom0.

By default the hypervisor is probably not going to be the default boot entry. If you want to change this, you should edit the grub default values :

vi /etc/default/grub

to make sure that the default kernel on dom0 is the xen hypervisor. This is tricky, because grub lets you define the default w.r.t. the list of available kernels : if you install a new kernel, you have to change the default according to the list of kernels in /boot/grub/grub.cfg. It would be nice if I could define the default kernel with a label instead of a number... ( ref #505517 )
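
As a sketch, if the xen entry happens to be third in the generated menu (the index below is hypothetical; count the menuentry lines in your own grub.cfg) :

# /etc/default/grub
GRUB_DEFAULT=2

Then run update-grub to regenerate the configuration.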

Alternatively, as suggested in the wiki, you can just move the regular linux kernels down the list, so that the xen kernel comes first ...

mv -i /etc/grub.d/10_linux /etc/grub.d/50_linux

When installing xen related tools, aptitude will probably also install rinse and xenwatch by default. The first one is used to bootstrap redhat machines, and maybe you don't need it. The second one is a GUI and will pull in a lot of X-related dependencies. If your needs are similar to mine, you can just remove what is not needed...

aptitude purge rinse rpm rpm-common
aptitude purge xenwatch

Something that is new is the naming scheme for virtual devices. All VMs will now see /dev/xvda1 instead of /dev/sda1 as before. This needs to be changed in the domU (/etc/fstab) as well as in the xen config files (/etc/xen/vm.cfg).
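
In practice this means a disk stanza along these lines (the volume names are made up) :

# /etc/xen/vm.cfg -- note the xvd* names on the domU side
disk = ['phy:/dev/vg0/vm-root,xvda1,w',
        'phy:/dev/vg0/vm-swap,xvda2,w']

together with matching xvda* entries in the domU's /etc/fstab.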

One fantastic piece of news is that xen 4 now uses pyGrub. It is not mandatory (so if you want, you can stick with the old configuration style), but with pygrub you can install whatever kernel you want on the domU. Finally, your users have complete freedom to pick and choose their kernels !
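
Enabling it is a one-line change in the VM config file; the path below is where the squeeze packages ship pygrub, adjust it if your layout differs :

# /etc/xen/vm.cfg
bootloader = '/usr/lib/xen-4.0/bin/pygrub'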

There was a small detail I didn't notice on the debian wiki : if you try to use grub2 in squeeze, it will fail when probing the devices (#601974). The workaround described in the wiki is to use xvd{a,b,c,...} as device names (and not xvd{1,2,3,...}) to make grub happy. Once you have changed the naming scheme, grub will be able to see the disks and install the bootloader. Another solution is to install os-prober from unstable / experimental. It seems a patch is in the works.

On newly created images, you can also pass the --scsi parameter to xen-create-image to avoid this problem altogether... I'm not sure if this has other implications...

The console name also changes, from tty1 to hvc0. To get the console back you should add this line to the inittab of all your VMs :
vc:2345:respawn:/sbin/getty 38400 hvc0
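
No reboot is needed for this change; just ask init to reload its configuration :

telinit q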

A last note is about the upstream merge of the xen patches !! \o/ yeiiii !!


Package Managers Comparison

The Mancoosi Team has recently published the details of a study we conducted analyzing the different package managers available in debian. The goal of this study was to compare MPM (the mancoosi package manager) to other legacy solvers and to get a big picture of the state of the art. A similar study was conducted during EDOS and the results are still available here.

As I wrote a few months ago, MPM is a proof of concept that we wrote to test, in a real world scenario, the behavior of a number of solvers that have been developed for the MISC competition.

These results do not show anything new w.r.t. the experience of a lot of people dueling daily with their machines in order to install a new piece of software. In a nutshell, we have shown that apt-get, aptitude, smart and cupt perform pretty well when used with only one baseline (for example a stable release) : this conforms to the experience of the majority of users of debian based systems. However, problems start to arise when a user starts mixing more than one baseline, putting a lot of stress on the solver to find a satisfying solution. This solution of course exists, but it is cleverly hidden in the dependency structure of more than 40K packages...

MPM is not as fast as other package solvers (say, 10 seconds for MPM, while apt-get is able to find a solution in 3 seconds), but it is remarkably stable. It is always able to find a satisfying solution, even in the harder cases where all the others failed. In these experiments MPM uses the Potsdam solver aspcud. This solver uses only GPL components and would be a good candidate for inclusion in debian (there are actually a couple of ITPs already filed for clasp and gringo).

The results (with a lot of details) are published on the mancoosi website (a more detailed report is in the works). Enjoy !

During fosdem the Mancoosi team that authored this work (Roberto, Ralf, Zack and me) will be around, so please stop us for a chat ! And don't miss Ralf's talk !!


The results of the 3rd Misc live competition are online !

In December we published the results of the third run of the Misc live competition on the mancoosi website. I left this post in my drafts folder for quite a while now; I'll publish it for posterity...

The main difference from the last Misc competition is the introduction of a new track, the user track, where we want to answer the user request and look for an optimal solution according to an optimization criterion provided by the user.

The initial assessment of the results is quite positive. We received 6 submissions for the trendy and paranoid tracks and 4 for the user track. These are very interesting results: we don't have a clear winner on all tracks as in the Misc 2010 competition. cudf2msu4trendy-1.0 from INESC wins the trendy and user1 tracks, aspcud-paranoid-1.3 from the University of Potsdam is the best on the paranoid track, while cudf2pbo4user-1.0 is the winner of the user2 track.

The paranoid and trendy tracks are the same as in Misc 2010. We actually used the same problems as in Misc 2010 plus 4 new categories (sarge...sid), collections of problems featuring the same installation request but with an increasingly large number of packages and versions per package.

A few words on the new user track. Since in this track the optimization function was given to the solver as an additional input, we decided to try out different criteria. The first one, in user1, is what I called the "paranoid upgrade" criterion; all problems used in this track are (real) upgrade problems. In cudf, the upgrade semantics allows a package to effectively not be upgraded at all (in the solution its version must only be greater than or equal to the currently installed one). This does not go very well with the paranoid criterion, as the best solution for an upgrade would then always be the one that changes nothing. For this reason we defined the new criterion as '-notuptodate,-removed,-changed', where solutions that privilege new (upgraded) packages are preferred to solutions that do not change anything at all.


I'm going to fosdem !!

And don't miss the talk from Ralf at the crossdistro devroom [1] ...

Sat 05/02 18:00 - 19:00: Mancoosi tools for the analysis and quality assurance of FOSS distributions (Ralf Treinen)

[1]http://fosdem.org/2011/preview-saturday#crossdistro_devroom


debian git packaging with git upstream

Update

There is an easier method to do all this using gbp-clone as described here. Ah !

Then, to build the package, you just need to tell git-buildpackage where to find the upstream branch :

git-buildpackage --git-upstream-branch=upstream/master

or you could simply describe the layout (as suggested) in debian/gbp.conf.
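
A minimal gbp.conf matching that layout might look like this (a sketch with just the two relevant keys) :

# debian/gbp.conf
[DEFAULT]
upstream-branch = upstream/master
debian-branch = master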

Easy !!!


I've found a lot of different recipes and howtos about debian git packaging, but I failed to find one simple recipe for creating a debian package from scratch when upstream uses git. Of course, the following is a big patchwork from many different sources.

First we need to do a bit of administrative work to set up the repository :

mkdir yourpackage
cd yourpackage
git init --shared

Then, since I'm interested in tracking the upstream development branch, I'm going to add a remote for it to my repo:

git remote add upstream git://the.url/here.git

At this point I need to fetch upstream and create a local branch for it.

git fetch upstream
git checkout -b upstream upstream/master

Now in my repo I have a master branch and an upstream branch. So far, so good. Let's add the debian branch based on master:

git checkout master
git checkout -b debian master

The debian branch is where I'm going to keep the debian-related files. I'm finally ready for hacking: git add / git commit / git rm ...

When I'm done, I can switch to master, merge the debian branch into it and use git-buildpackage to build the package.

git checkout master
git branch
  debian
* master
  upstream

git merge debian
git-buildpackage

Suppose I want to put everything on gitorious, for example. I'll create an account, set up my ssh public key and then add an origin ref to my .git/config . Something like :

[remote "origin"]
       url = git@gitorious.org:debian-stuff/yourpackage.git
       fetch = +refs/heads/*:refs/remotes/origin/*
[branch "master"]
       remote = origin
       merge = refs/heads/master

The only thing left to do is to push everything to gitorious. The --all flag is important.

git push --all

People willing to pull your work from gitorious can follow this script :

$ git clone git@gitorious.org:debian-stuff/yourpackage.git
$ cd yourpackage
$ git branch -a
* master
  remotes/origin/HEAD -> origin/master
  remotes/origin/debian
  remotes/origin/master
  remotes/origin/upstream
$ git checkout -t origin/debian
$ git checkout -t origin/upstream
$ git branch -a
  debian
  master
* upstream
  remotes/origin/HEAD -> origin/master
  remotes/origin/debian
  remotes/origin/master
  remotes/origin/upstream
$ git checkout master
$ git-buildpackage

Maybe there is an easier way to pull all remote branches at once, but I'm not aware of it. Any better way ?
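
For what it's worth, a shell loop like this one (an untested sketch) would create a local tracking branch for every remote branch in one go :

# create tracking branches for all remote branches (already existing
# ones, like master, will just produce a harmless error)
for b in $(git branch -r | grep -v HEAD | sed 's|origin/||'); do
    git branch --track "$b" "origin/$b"
done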


the best privacy policy EVER !

And this is all true !!! Well, for the 5 people reading this blog, I assure you: I'm not selling your data or tracking you in any way :)


At COMPANY _______ we value your privacy a great deal. Almost as much as we value the ability to take the data you give us and slice, dice, julienne, mash, puree and serve it to our business partners, which may include third-party advertising networks, data brokers, networks of affiliate sites, parent companies, subsidiaries, and other entities, none of which we’ll bother to list here because they can change from week to week and, besides, we know you’re not really paying attention.

We’ll also share all of this information with the government. We’re just suckers for guys with crew cuts carrying subpoenas.

Remember, when you visit our Web site, our Web site is also visiting you. And we’ve brought a dozen or more friends with us, depending on how many ad networks and third-party data services we use. We’re not going to tell which ones, though you could probably figure this out by carefully watching the different URLs that flash across the bottom of your browser as each page loads or when you mouse over various bits. It’s not like you’ve got better things to do.

Each of these sites may leave behind a little gift known as a cookie -- a text file filled with inscrutable gibberish that allows various computers around the globe to identify you, including your preferences, browser settings, which parts of the site you visited, which ads you clicked on, and whether you actually purchased something.

Those same cookies may let our advertising and data broker partners track you across every other site you visit, then dump all of your information into a huge database attached to a unique ID number, which they may sell ad infinitum without ever notifying you or asking for permission.

Also: We collect your IP address, which might change every time you log on but probably doesn’t. At the very least, your IP address tells us the name of your ISP and the city where you live; with a legal court order, it can also give us your name and billing address (see guys with crew cuts and subpoenas, above).

Besides your IP, we record some specifics about your operating system and browser. Amazingly, this information (known as your user agent string) can be enough to narrow you down to one of a few hundred people on the Webbernets, all by its lonesome. Isn’t technology wonderful?

The data we collect is strictly anonymous, unless you’ve been kind enough to give us your name, email address, or other identifying information. And even if you have been that kind, we promise we won’t sell that information to anyone else, unless of course our impossibly obtuse privacy policy says otherwise and/or we change our minds tomorrow.

We store this information an indefinite amount of time for reasons even we don’t fully understand. And when we do eventually get around to deleting it, you can bet it’s still kicking around on some network backup drives in somebody’s closet. So once we have it, there’s really no getting it back. Hell, we can’t even find our keys half the time -- how do you expect us to keep track of this stuff?

Not to worry, though, because we use the very bestest security measures to protect your data against hackers and identity thieves, though no one has actually ever bothered to verify this. You’ll pretty much just have to take our word for it.

So just to recap: Your information is extremely valuable to us. Our business model would totally collapse without it. No IPO, no stock options; all those 80-hour weeks and bupkis to show for it. So we’ll do our very best to use it in as many potentially profitable ways as we can conjure, over and over, while attempting to convince you there’s nothing to worry about.

(Hey, Did somebody hold a gun to your head and force you to visit this site? No, they did not. Did you run into a pay wall on the home page demanding your Visa number? No, you did not. You think we just give all this stuff away because we’re nice guys? Bet you also think every roomful of manure has a pony buried inside.)

This privacy policy may change at any time. In fact, it’s changed three times since we first started typing this. Good luck figuring out how, because we’re sure as hell not going to tell you. But then, you probably stopped reading after paragraph three.


(Source : http://www.itworld.com/print/129778 )
