Recently we did a bit of cleanup of our git repositories, and now, thanks to Roberto's efforts, we have a shiny new git repository on the INRIA forge and two mailing lists to discuss development and user questions.
If you are a user, or interested in dose development, please sign up to these mailing lists:
If you already have a clone of the old git repository, you can point it at the new upstream by issuing the following command:
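Assuming your existing remote is called origin, something along these lines should do it:

git remote set-url origin git://gforge.inria.fr/dose/dose.git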
If you are curious, you can clone dose with
git clone git://gforge.inria.fr/dose/dose.git
and let us know what you think about it.
The API documentation is available here. The man pages of the various applications developed on top of the dose library are available here. We are still actively working on the documentation and any contribution is very much welcome. I'm working on a nice homepage...
And now we are even social! Follow us on identi.ca: http://identi.ca/group/dose3
The dose3 code base is getting larger and larger, and waiting 40 seconds every time I want to recompile the library is not acceptable anymore. Recently I tried to understand why the OCaml compiler was so slow.
To compile dose, I use a combination of ocamlbuild, ocamlfind and camlp4, with a Makefile to drive everything.
The first important problem was related to my naive use of camlp4. I extensively use the Camlp4MacroParser module for the conditional compilation of certain features, and I actually use camlp4 to process all my source code. The bottom line here is that using the bytecode version of camlp4o is definitely a bad idea.
To use the native binary you need to change the preprocessing directive in the _tags file:
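Something along these lines (the file pattern and any -D macro flags are just examples; older ocamlbuild versions may need the equivalent flag declared in myocamlbuild.ml):

<**/*.ml> or <**/*.mli>: pp(camlp4o.opt Camlp4MacroParser.cmxs)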
This will use the opt version of camlp4 and the .cmxs instead of the .cma. The next problem is that Debian, for some reason (I guess lack of time, more than anything else), does not ship the cmxs library of the Camlp4MacroParser. In order to make it available, I added a simple compilation rule to my Makefile to create the file Camlp4MacroParser.cmxs in the _build directory.
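The recipe boils down to building the plugin from the objects Debian does ship; a sketch, assuming ocamlfind can locate camlp4 and that the Camlp4Parsers objects are installed:

# build the native plugin from the installed Camlp4MacroParser objects
mkdir -p _build
ocamlopt -shared \
  "$(ocamlfind query camlp4)/Camlp4Parsers/Camlp4MacroParser.cmx" \
  -o _build/Camlp4MacroParser.cmxs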
The second BIG problem I solved yesterday was related to the traverse tag. Since I use git as my VCS, every invocation of ocamlbuild was triggering a recursive traversal of the entire .git directory, wasting a considerable amount of time.
To solve this problem, I've reverted the ocamlbuild default and explicitly added only those directories that I'm really interested in traversing.
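In _tags this amounts to switching traversal off globally and re-enabling it only where needed; the directory names below are placeholders:

true: -traverse
<common> or <common/**> or <algo> or <algo/**>: traverse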
This is actually needed not to compile the code itself, but to generate the documentation with ocamldoc.
One year ago, we (the Mancoosi team) published a comparative study of the state of the art of dependency solving in Debian. As a few people noticed, the data presented had a few glitches that I promised to fix. So we've repeated our tests using exactly the same data we used one year ago, but with the latest versions of all package managers available in Debian unstable.
During the last year, three of the four solvers we evaluated released a major upgrade, so I expected many improvements in performance and accuracy.
Mpm, our test-bench for new technologies, changed quite a bit under the hood as a consequence of the evolution of apt-cudf and the recent work done in apt-get to integrate external CUDF dependency solvers.
Overall, the results of our study have not changed. All solvers but mpm, which is based on aspcud, fail to scale as the number of packages (and alternatives) grows. It seems that Smart is the solver that never gives up, hitting the timeout (fixed at 60 seconds) most of the time. Aptitude is the solver that always tries to give you a solution, no matter what, and as a result often provides solutions that do not satisfy the user request for one reason or another. Apt-get does surprisingly well, but it gives up pretty often, showing the incomplete nature of its internal solver. Cupt sometimes times out, sometimes gives up, but when it is able to provide an answer it is usually optimal, and it is very fast... Mpm consistently finds an optimal solution, but sometimes it takes a really long time to do it. Since mpm is written in Python and not optimized for speed, this is not a big problem for us. The technology used by mpm is now integrated in apt-get, and I hope this will alleviate the problem.
All the details of our study can be found on the Mancoosi website, as usual. For example, here you can find the results when mixing four major releases: sarge-etch-lenny-squeeze.
Comments are more than welcome.
For quite a while I had a problem with low volume while listening to music or watching videos. The default ALSA mixer controls only give you Master and PCM, but even with everything set to 100%, the volume is still annoyingly low. For quite a while I suspected this was a software problem: for example, VLC is able to boost the volume level to 140%...
A while ago I found an interesting post (that I can't find anymore) on this subject. The author suggested adding this snippet to your ~/.asoundrc.
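The usual softvol pre-amp configuration looks roughly like this (card 0 and routing through dmix are assumptions):

pcm.!default {
    type plug
    slave.pcm "preamp"
}

pcm.preamp {
    type softvol
    slave.pcm "dmix"
    control {
        name "Pre-Amp"
        card 0
    }
    min_dB -5.0
    max_dB 20.0
    resolution 6
}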
This adds a new control to your mixer, named "Pre-Amp", that allows you to boost the sound considerably in some situations. I've noticed that with this new control the sound can get a bit distorted. Anyhow, better than not hearing anything.
It's true, sometimes the titles of my posts try to be as search-engine-friendly as possible...
Yesterday I was very unhappy with the new behavior of awesome. All of a sudden windows were not receiving focus automatically on screen change, forcing me to either use the mouse or perform some other keyboard action. So for the first time I decided to have a look at the awesome config file. At first, I have to admit, it's a bit intimidating: I don't know a word of Lua and its syntax is not really intuitive.
First, a word about my setup. I use xfce4 + awesome. I don't want thunar or xfdesktop running, and I don't want the wibox bar hiding somewhere...
I'll highlight a few chunks of the file. If you are interested, the whole file is attached to this post. I wrote only a small function; everything else is bits and pieces that I found on the net.
A bit of debugging code that is actually very handy when trying to understand what is going on:
A small function to raise one window on each screen when I move left or right. This is actually not what I want, but it is better than nothing. What I'd really like is to raise the window under the mouse pointer. However, the function awful.mouse.client_under_pointer() does not seem to work as expected... so this is something I want to fix sometime in the future.
And then the rules I use to adjust the different types of programs I currently use. To get the class of a window you can use xprop.
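For example, running the following in a terminal and then clicking on a window prints its class:

xprop WM_CLASS
# e.g. WM_CLASS(STRING) = "Navigator", "Iceweasel"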
And this small bit is to remove the wibox:
All in all, Lua is a nice language and awesome is a really flexible WM. Being able to script its configuration file gives you unlimited room to experiment and automate boring and repetitive actions. Very nice :)
If you are using a laptop as your main computing device, having reliable and up-to-date backups should be a high priority. A laptop can be stolen, can fall, or, like any other computer, die in a cloud of smoke (like my last laptop did $^%#).
For quite some time I've used duply, which is a nice and intuitive frontend to duplicity. Duply allows you to have backups that are:
This is basically everything I want from a backup solution. There is a nice article about duply here. Since I don't want to have to remember to run duply every now and then, I want a daemon to take care of it for me. If you are using a laptop, cron is not your friend, as your laptop might be sleeping or simply not connected to the network.
A solution that I've known about for years is anacron. Today I set it up following these instructions. I've created four different profiles to back up different parts of my home directory (home, pictures, media and projects) and instructed anacron to run the backup for home and projects daily, and the rest only weekly.
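The anacron entries end up looking more or less like this (period in days, delay in minutes, job identifier, command):

1   10   backup.home      duply home backup
1   15   backup.projects  duply projects backup
7   20   backup.pictures  duply pictures backup
7   25   backup.media     duply media backup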
Since the backup is unattended, I've used the duply option that takes care of creating a new full backup each month (2 months for pictures and music). To do this you need to call duply as
duply <profile> backup
and specify the option MAX_FULLBKP_AGE=1M in the conf file of your profile.
I could not sleep better... somebody/something else is looking after my data.
Something else I might add in the next few days is a script to detect the type of network I'm connected to and launch the backup accordingly. Something like:
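A minimal sketch, assuming a wireless connection and that "myhomenetwork" is a placeholder for the SSID of the network where the backups should run:

#!/bin/sh
# run the backup only when connected to the home network
ssid=$(iwgetid -r)
if [ "$ssid" = "myhomenetwork" ]; then
    duply home backup
fi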
For a while I've had a Tenvis mini319W network camera. This camera is the usual consumer Chinese product: built on the cheap, badly packaged and sold in low-end computer shops. Anyway, it was a cheap and dirty solution for a project of mine, and despite everything it does the job OK.
The Tenvis mini319W is actually the same as the Foscam FI8918W, and when I say the same, I mean a different casing but the exact same board. It's just a re-branding. For some reason there are a lot of clones of this model. To stress even more the fact that they are actually the same product, I even downloaded the firmware from Foscam and flashed my Tenvis without problems. The correct firmware to use is lr_cmos_11_37_2_46.bin, which is in the zip file you can download from the Foscam website. I assume that the Foscam firmware is more up to date, and for the moment it is working fine. The two relevant links:
Here is a long list of cameras that are compatible with the same firmware. I spent all this time on this gadget just because the Tenvis model is not included in this list...
After a fair amount of time spent learning how to unpack the firmware, I found this blog, which has all the information and tools to turn this camera into an open camera. There is also a project called openipcam that focuses on this family of products. There you can find all the information that I managed to figure out in the meantime. I think this wiki is the main source of information about Foscam hardware and software.
This is a walk-through of all the things I've learned this morning about this camera.
The easiest thing you can do when approaching a binary you don't know is to look at it. After a few tries with hexdump I quickly realized that the format wasn't anything I had seen before. Since I didn't have enough info to duckduck it, I ended up asking for help on IRC. Shortly after, I stumbled on the excellent binwalk, a nice tool to unpack firmware images.
This tool definitely got me started. Analyzing the output, it is easy to guess that the first part is a compressed kernel, while the second, more interesting part is the romfs image. I wanted to know whether it is possible to get a shell on the device, so I focused immediately on the filesystem image. To extract and mount the image, you just need to carve it out of the binary blob with dd and then mount it.
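Something like the following, where OFFSET is a placeholder for the romfs offset that binwalk reports:

binwalk lr_cmos_11_37_2_46.bin
# carve out the romfs image starting at the reported offset
dd if=lr_cmos_11_37_2_46.bin of=romfs.img bs=1 skip=OFFSET
sudo mkdir -p /mnt/romfs
sudo mount -o loop -t romfs romfs.img /mnt/romfs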
So far, so good. This is what we get in the image:
So there is definitely no way to access the camera remotely... However, at this point I started to suspect that there were a few too many similarities with the Foscam firmware... and once you have the right keyword, you can find almost everything on the internet.
From here on, it was easy: somebody else had already done all the hard work. Unpacking the firmware with fostar is a breeze:
Now I know the architecture and, in theory, with this info I could recreate the build environment to add more binaries to the ROM. The web interface is in a different binary blob. To extract it:
fostar also has a built-in capability to repack the image, making it possible to fix, hack and modify the interface (or the ROM image, if you want to go through the hassle of cross-compiling your tools).
These cameras can also be accessed via a JTAG (serial cable) interface. I have a small JTAG-to-USB cable and I want to try this next...
Here are a few other interesting links related to firmware analysis in different contexts that I found and read along the way.
If you use iceweasel/firefox and privoxy, you might want to let privoxy do its job more effectively and remove all ads from your webpages. In the past I've used the adblock firefox extension, but I have a gut feeling that letting privoxy handle ad removal might be more effective.
There is a very nice script here to convert Adblock Plus rules to privoxy rules. Using it is very easy:
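Without guessing the script's exact invocation, the privoxy side is just a matter of dropping the generated files in place and referencing them in the configuration (the file names below are assumptions; use whatever the script produces):

sudo cp adblock.action adblock.filter /etc/privoxy/
# then, in /etc/privoxy/config, add:
#   actionsfile adblock.action
#   filterfile  adblock.filter
sudo /etc/init.d/privoxy restart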
The only thing left to do is to point your web browser at privoxy (by default it listens on 127.0.0.1, port 8118) and enjoy your new, clean internet.
Now that my infrastructure is up and running, I'd love to have a command to create a small cluster of VMs for a specific purpose. Up until now, to create my VMs I've used a simple script calling the following command line:
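Something along these lines (disk size, memory, node and instance names are placeholders):

gnt-instance add -t drbd --disk 0:size=10G -B memory=512M,vcpus=1 \
  -o debootstrap+default -n node1.example.org:node2.example.org \
  vm1.example.org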
Then, reading the gnt-instance man page, I discovered the batch-create command.
After writing the script I realized I had done all this just to satisfy my need to use a JSON snippet. But I guess my script is not any shorter or more understandable than before... Ah well. This is a bit from the Dept. of Useless Optimizations... Anyway, there you go. The gnt-instance man page is a bit short on details about the syntax to use. By looking at the queue directory (/var/lib/ganeti/queue) you can see the JSON generated by the command line above. From there it's easy to figure out the various options you can use.
Once you've got this bit, you can easily script the creation of multiple instances:
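A sketch, with placeholder instance names and using the hail allocator to pick the nodes (you can equally specify nodes explicitly):

cat > instances.json <<'EOF'
{
  "vm1.example.org": {
    "template": "drbd",
    "os": "debootstrap+default",
    "disk_size": ["10G"],
    "iallocator": "hail"
  },
  "vm2.example.org": {
    "template": "drbd",
    "os": "debootstrap+default",
    "disk_size": ["10G"],
    "iallocator": "hail"
  }
}
EOF
gnt-instance batch-create instances.json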
Note that the JSON snippet above can include a number of other parameters specific to the hypervisor you are using. In my setup, I've configured the hypervisor once and for all with the following options:
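Something along these lines; the kvm parameters shown are only examples of what can be set cluster-wide:

gnt-cluster modify -H kvm:kernel_path=,vnc_bind_address=0.0.0.0,serial_console=true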
Last year we invited David to work with us for a few days to add a generic interface to apt for calling external solvers. After a few iterations, this patch finally landed in master and recently (about 3 months ago) in Debian unstable.
From the apt changelog:

[ David Kalnischkies ]
* [ABI-Break] Implement EDSP in libapt-pkg so that all front-ends which use the internal resolver can now be used also with external ones as the usage is hidden in between the old API
* provide two edsp solvers in apt-utils:
  - 'dump' to quickly output a complete scenario and
  - 'apt' to use the internal as an external resolver

Today the new version of apt-cudf was uploaded to unstable, and with it the latest bug fixes that make it ready for daily use. I've used it quite a lot myself to upgrade my machine and it seems to be working pretty well so far... The most important difference from the old version is the support for multi-arch enabled machines.
This marks an important milestone in our efforts to integrate external solvers, built using different technologies, directly into apt. From a user perspective, this means having the possibility to check whether there exists a better (best?) solution to an installation problem than the one proposed by the internal apt solver. Moreover, even if apt-get gives very satisfactory answers, there are occasions where it fails miserably, leaving the user wondering how to unravel the complex web of dependencies to accomplish their task. The CUDF solvers available in Debian at the moment are aspcud, mccs and packup.
From an architectural point of view, this is accomplished by abstracting the installation problem via a simple textual protocol (EDSP) and using an external tool to do the heavy-duty translation. Since none of the solvers now available in Debian are Debian-specific, using them involves a two-step translation. The EDSP protocol specification is for the moment "hidden" in the apt documentation. I hope to find a better place for it soon: it would be cool if other package managers such as smart or cupt could add an implementation of EDSP to their code so as to automatically benefit from this technology.
To make this happen, apt first creates an EDSP file that is then handed to apt-cudf, which takes care of the translation to CUDF and back into EDSP, which is then read back by apt. Apt-cudf is the bridge between EDSP and the external solvers, taking care of the bookkeeping and of selecting the right optimization criterion.
Roberto recently wrote a very nice article explaining how to use apt-get with an external solver.
In a nutshell, if you want to try this out you just need to install apt-cudf and one external solver like aspcud from the University of Potsdam, and then call apt-get using the --solver option (which is not yet documented, #67442).
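For example, something like this (the exact command form is a sketch; only the --solver option and the aspcud solver name come from the text above, and the optimization criterion applied to the install action is the one configured in /etc/apt-cudf.conf, as described below):

apt-get install gnome --solver aspcud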
This will install gnome while using an optimization criterion that tries to minimize the changes to the system. Various other optimization criteria for all the default apt-get actions can be specified in the apt-cudf configuration file /etc/apt-cudf.conf.
I hope the new release of apt-cudf makes it into testing before the freeze. Time to test!!!