Recently I had to deploy a couple of new virtual servers and, since we are in the 21st century, I decided to configure puppet from the beginning. For the first two machines, on the internal network, the job was easy. Puppet is well packaged in Debian and the default configuration worked like a charm.
The problem (and solution) I'm describing here is about tunneling puppet through an ssh jump from the DMZ to the internal network, so that a client in the DMZ can reach the puppet master on the internal network. I'm aware that this solution is hackish, but sometimes it is easier not to ask (and pierce a firewall) than to open a ticket and go through the notorious French bureaucracy. Puppet works with ssl certificates and all traffic is encrypted, so in my opinion it would be much better to add an exception to the firewall rules than to tunnel it. But anyway, here we go.
After duck-ducking a bit, I found on the web a nice suggestion to use autossh to establish a tunnel (master <-> client) and to use it to access my puppet master.
So on the puppet master I simply run :
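A sketch of the command, assuming the DMZ machine is reachable as user@dmz-client (host and user names are my own placeholders):

```shell
# -f: go to background, -N: no remote command,
# -M 20000: monitoring port autossh uses to test the tunnel,
# -R 8140:localhost:8140: the DMZ client's port 8140 is forwarded
#   back to port 8140 on this machine (the puppet master).
autossh -f -M 20000 -N -R 8140:localhost:8140 user@dmz-client
```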
This will make sure to open an ssh connection and monitor it. Autossh is a very nice tool that I didn't know about.
Once this is done, the client can connect to the puppet master via localhost:8140. The next step in the book is to sign its key and allow it to connect and retrieve its catalog. In order to do this, I need to specify the new server name in the puppet.conf file. Simply specifying localhost will immediately give you trouble ...
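The trouble is a certificate-name mismatch: the master's ssl certificate was not issued for "localhost". One way around it, assuming the master's certificate name is puppet.internal.example (a hypothetical name), is to alias that name to the tunnel endpoint on the client and point puppet.conf at it:

```
# /etc/hosts on the DMZ client (hostname is an assumption)
127.0.0.1   puppet.internal.example

# /etc/puppet/puppet.conf on the client
[agent]
server = puppet.internal.example
masterport = 8140
```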
Once the connection is established you can proceed with the authentication :
client side :
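A plausible invocation (the exact agent options are an assumption, based on puppet 2.x syntax):

```shell
# Ask the master (through the tunnel) for a certificate and wait
# up to 60 seconds for it to be signed.
puppet agent --test --waitforcert 60
```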
server side :
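Something along these lines (the client certname is a placeholder; check the pending list for the real one):

```shell
# List pending certificate requests, then sign the client's.
puppet cert list
puppet cert sign dmz-client.example.org
```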
While writing the graphml printer for ocamlgraph (which, btw, has just been committed to the ocamlgraph svn and is going to be part of the next release of ocamlgraph...), I stumbled into the following pitfall (this is really ocaml for dummies :D). Imagine you want to associate a unique id to each value of a data type, but you also want equal values to have equal ids. The first (wrong !) idea that might come to mind is to use the Hashtbl.hash function to compute this integer uid. Even if this approach might work for simple data structures, it is doomed to fail for more complex data structures whose depth is greater than 10. The explanation for this lies in the documentation for the function Hashtbl.hash .
For example consider a simple list. If the list has fewer than 10 elements, then the hash function will return a unique id and equal lists will be associated with the same id. But if the list contains more than 10 elements, you get the following behavior :
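A sketch (exact hash values vary across OCaml versions, but the collision is forced by the default limit of 10 meaningful nodes):

```ocaml
(* Two int lists that differ only in the 11th element.
   Hashtbl.hash inspects at most 10 meaningful nodes, so it
   cannot see the difference and the two hashes collide. *)
let l1 = [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11]
let l2 = [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 42]

let () =
  assert (l1 <> l2);
  assert (Hashtbl.hash l1 = Hashtbl.hash l2)
```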
This is fully in line with the Hashtbl.hash function specification. Using the function Hashtbl.hash_param we can easily see that the problem here is related to the number of elements that are used to generate the hash code. If we ask the function to consider more meaningful nodes, we get different results ...
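For example, raising the limit with Hashtbl.hash_param (here 30 meaningful nodes out of a total budget of 100) lets the hash see the differing tail; in practice the two hashes now differ (equality of hashes is of course never guaranteed to fail, but it does here):

```ocaml
(* Same two lists as above: with a higher meaningful-nodes limit
   the differing 11th element is taken into account. *)
let l1 = [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11]
let l2 = [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 42]

let () =
  assert (Hashtbl.hash_param 30 100 l1 <> Hashtbl.hash_param 30 100 l2)
```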
Now this is of course a toy example, and guessing a value for hash_param is not a real option. If we really want to hash a list we need something more reliable than guessing a value for hash_param.
What we can do is to encapsulate our values in a new data type "Unique" that will make sure that each value is associated with a truly unique id. The practice is easy :
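A minimal sketch of the idea (the module name comes from the post; the payload type, fixed here to int list for illustration, and the function names are my own — the post's version additionally uses a variant wrapper):

```ocaml
(* Memoize a fresh uid per distinct value in a Map keyed by the
   values themselves: equal values get equal ids, distinct values
   get fresh ones. *)
module Unique = struct
  module M = Map.Make (struct
    type t = int list
    let compare = compare
  end)

  let table = ref M.empty   (* value -> uid *)
  let counter = ref 0       (* last uid handed out *)

  let uid v =
    match M.find_opt v !table with
    | Some id -> id
    | None ->
        incr counter;
        table := M.add v !counter !table;
        !counter
end

let () =
  assert (Unique.uid [1; 2; 3] = Unique.uid [1; 2; 3]);
  assert (Unique.uid [1; 2; 3] <> Unique.uid [1; 2])
```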
In this module I use a variant data type and a Map to keep track of the uids. There is really nothing deep about it. Another idea would be to use a phantom type instead; in this version I have to query the map each time I want to associate an uid with a value.
This is a very simple way of getting a unique id for any custom data structure. Another way would be to wrap my data structures in an object and then use Oo.id . I haven't tried this approach yet.
The problem with using git to write latex documents is that there is no built-in way to clearly see the differences between two commits (SO discussion here). Git was originally written for code, and the standard diff utilities it uses are line oriented. Using diff --color-words can partially mitigate the problem, but the solution is still unsatisfactory to me.
One excellent project is latexdiff . Latexdiff is a Perl script for visual mark up and revision of significant differences between two latex files. The problem is that, if you split your latex document into more than one file, you will struggle a bit to use latexdiff, as it accepts only one latex file as input.
Another project that I discovered recently is rcs-latexdiff. Rcs-latexdiff is a simple tool to generate a diff of a LaTeX file contained in a Revision Control System like git or svn. The other feature of rcs-latexdiff is the ability to concatenate latex documents split in multiple files into one and pass the result to latexdiff.
The "integration" with git is also straightforward. You just need to add this alias to your .gitconfig file :
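For instance (the alias name and the assumption that rcs-latexdiff is on your PATH are mine):

```
[alias]
    latexdiff = "!rcs-latexdiff"
```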
Then calling git as :
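a plausible invocation (file name and argument order are assumptions; check the rcs-latexdiff README for the exact syntax):

```shell
# Diff main.tex between the previous and the current commit.
git latexdiff main.tex HEAD~1 HEAD
```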
will generate a file diff.tex that you can compile and display as usual. You can easily add a micro functionality to rcs-latexdiff to compile and display the latex document for you. This can either be done using a wrapper script on top of rcs-latexdiff or by hacking the script itself.
This week I had to create a plot using two different scales in the same graph to show the evolution of two related, but not directly comparable, variables. This operation is described in this FAQ on the matplotlib website. Nonetheless I'd like to give a small step by step example...
Consider my input data of the form date release total broken outdated .
I want to create one graph containing three sub graphs, each one containing data for unstable and wheezy. For the sub graph plotting the total number of packages, since the data is fairly uniform, the plot is pretty and self explanatory. The problem arises if we compare the non installable packages in unstable and wheezy, since the data from unstable will squash the plot for wheezy, making it useless.
Below I've added the commented python code and the resulting graph. You can get the full source of this example here.
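The full source is linked above; the core of the two-scales trick, reduced to a single plot with made-up numbers, looks roughly like this (Axes.twinx gives a second y axis sharing the same x axis):

```python
# Minimal two-scales sketch; the data values are invented for
# illustration, not the real unstable/wheezy numbers.
import matplotlib
matplotlib.use("Agg")                    # render off-screen
import matplotlib.pyplot as plt

dates = range(6)
unstable_broken = [900, 850, 800, 780, 760, 700]   # large scale
wheezy_broken = [30, 28, 25, 20, 15, 10]           # small scale

fig, ax_left = plt.subplots()
ax_left.plot(dates, unstable_broken, "b-", label="unstable")
ax_left.set_ylabel("broken (unstable)", color="b")

ax_right = ax_left.twinx()               # second y axis, same x axis
ax_right.plot(dates, wheezy_broken, "r-", label="wheezy")
ax_right.set_ylabel("broken (wheezy)", color="r")

fig.savefig("broken.png")
```

Without twinx the wheezy line would be a flat smudge at the bottom of the unstable scale; with it, each series gets a readable axis.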
I just finished addressing the awesome debian crowd at the Mini DebConf in Paris. My presentation was about a few challenges we have ahead to bootstrap debian on a new architecture. Johannes Schauer and Wookey did a lot of work in the last few months, particularly focusing on Linaro/Ubuntu. After Wheezy I think it is important to catch up with their work and integrate it into debian.
The two main take away messages from my presentation :
Build-Depends: foo (>=0.1) [amd64] <!stage1 bootstrap> | bar.
My slides are attached.
The signature is minimal. Since in GraphML all attributes are typed, we only need two functions to describe the name, type and default value for the attributes of each vertex and edge, and two functions to map the value of each vertex and edge to a key / value list.
To give a small example, we build a simple graph with three vertices and two edges. In this case we only print the id of the node.
Use ocamlbuild to compile the lot.
The result looks like this. I agree the formatting is not perfect ...
Using GraphTool, you can easily access a zillion more algorithms from the boost graph library. To be fair, GraphTool also accepts graphs in dot and gml format, but graphml is its default format as it contains precise type information.
A refined version of the module Graphml is going to be included in the next release of ocamlgraph ! The tgz is attached to this message.
This entry is not really about computers, technology, or other work-related topics, but more about a hard-hack that I wanted to try for a while. How to make a kilt !!!
After a bit of duck-ducking, I decided to follow this excellent tutorial. At first sight the entire process seems a bit long, but you will realize after the first read that everything boils down to 3 steps: measure; fold and pin; sew.
For the measure part, I have the impression that the formula given in the instructable (waist/3*8+1) is a bit short for my comfort and taste. This is the size for the internal apron, the folded part that goes all around your left hip, back and right hip, and the front apron. My suggestion would be to make the inner apron a bit longer than the front apron. This way the kilt will, in my opinion, feel more comfortable and it will envelop your body completely.
For the fold and pin part, you just need a bit of patience and a ruler. Put the pins parallel to the folding as in the instructable and not perpendicular. This will help you later when sewing everything.
The sewing ... If you know how to use a sewing machine, this is going to be a piece of cake. Otherwise, well, I spent more time troubleshooting the machine than sewing the kilt. I broke a few needles in the process and learned how to thread the machine with my eyes blindfolded. Not to mention that you have to learn how to disassemble these machines into a thousand parts to understand how the thread got stuck. It was fun. A lesson I've learned is that a sewing machine works much better in the morning than late at night when you are tired and sleepy. Really !
Other than that, it was a fun experience. Maybe I'll make another one to commit this skill to memory. Maybe I'll run a kilt making workshop at the next debconf :)
So, what happens if your root partition is full and you reboot your machine ? If it is really full, and in particular there is no space to write anywhere, you might be stuck with a "no space left on device" error.
To avoid this problem, there exists a script /etc/init.d/mountoverflowtmp that checks whether there is a minimum acceptable amount of free space on /tmp and, if there is not, mounts a small tmpfs called overflow on top of it. It also checks for unneeded overflow tmpfs mounts on /tmp and removes them when appropriate (src).
But if you do not reboot again, you might be stuck with a mini /tmp directory of 1 MB. This is a neat trick, but you need to know about it to avoid unpleasant and unexpected surprises.
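Once you have freed space on the root partition, you can check for and remove the overflow mount by hand instead of rebooting; a sketch (needs root):

```shell
# An "overflow" line here means the 1 MB tmpfs is still mounted:
df -h /tmp
# Unmount it to get the real /tmp back:
umount /tmp
```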
I've recently added a hook to my git repository to send all my commits to identi.ca. There are a plethora of methods to do so; I've chosen bti, a small program available in Debian with OAuth support.
I set up bti following the instructions I found here.
Once you have bti working, you can add the following script as the post-receive hook in the hooks directory of your repository :
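A minimal sketch of such a hook (the format string and the !dose tag are my own choice; bti reads the message on stdin):

```shell
#!/bin/sh
# hooks/post-receive: post the newest commit's subject and short hash.
# `git rev-list --pretty=format:...` prints a "commit <sha>" header
# line first, so keep only the formatted line with tail.
git rev-list --max-count=1 --pretty=format:"%s %h !dose" HEAD \
    | tail -n 1 | bti
```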
Your next commit should then appear on identi.ca, formatted according to the format string passed to git rev-list. Have a look at the man page of git rev-list if you want to add other info. Notice that the url gets shortened automatically and that we use the !bang syntax to post to the group dose.
For twitter it is basically the same thing.
The new release of dose, apart from a few bug fixes, ships a new and improved version of dose-builddebcheck (man page). All the improvements done to dose-builddebcheck are from a set of patches submitted by Johannes Schauer in the context of the Bootstrap GSoC. We are still actively working on this. I invite you to read josh's recent blog post on the topic. For more background and discussion, you can have a look at the archives of this mailing list.
Dose-builddebcheck (or buildcheck for short) is similar in intent to dose-debcheck, but for source packages. Buildcheck allows you to check, just by looking at the Sources and Packages files, whether all the build dependencies of a source package can be satisfied and installed, all this in a matter of seconds (less than 30 seconds to test all the source packages in sid on my laptop).
A simple example :
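For instance, with uncompressed apt index files (the option spelling follows the dose-builddebcheck man page; double-check on your version):

```shell
# Check the build dependencies of every source package in Sources
# against the binary packages in Packages, for amd64.
dose-builddebcheck --deb-native-arch=amd64 Packages Sources
```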
This will check, for all source packages in the Sources file, whether their build dependencies can be satisfied on amd64 given the Packages binary file. This is nice but nothing new: a script based on the old edos tools has been around for quite a long time. We want more !
The new exciting feature brought by Johannes is the capability of checking source packages for cross-compilation :
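Something along these lines (--checkonly to restrict the check to one source package is an assumption; see the man page):

```shell
# Can picolisp be cross compiled for armel on a native amd64 system?
dose-builddebcheck --deb-native-arch=amd64 --deb-host-arch=armel \
    --checkonly picolisp Packages Sources
```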
This will check whether the source package 'picolisp' in the Sources file can be cross compiled for 'armel' on the native architecture 'amd64', given the list of binary packages in the Packages file. The generated report is, as for dose-debcheck, encoded in yaml and can be parsed using an off-the-shelf library.
Apt is the canonical tool that can be used to check if a package can be cross compiled. Josh found a few discrepancies between dose and apt results. This was a very good test for both tools: Bug #683786 is a very interesting read...