Xfce terminal and middle mouse button

Just a quickie. If you use the Xfce terminal and like to have more than one terminal tab open, you might have noticed that the middle mouse button closes a tab without confirmation... I just discovered that this annoying feature can be turned off.

Following the freedesktop spec, you can just edit the file .config/Terminal/terminalrc and set the option MiscTabCloseMiddleClick to False.
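
For reference, the relevant bit of ~/.config/Terminal/terminalrc would look something like this (assuming the usual [Configuration] section header is already present; the boolean is typically spelled in upper case there):

[Configuration]
MiscTabCloseMiddleClick=FALSE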

There are many other settings. The full list is here: http://docs.xfce.org/apps/terminal/advanced


Mancoosi FOSDEM talks

A small (and probably incomplete) list of videos associated with the Mancoosi project @ FOSDEM.

  • QA tools for FOSS distributions (FOSDEM 2012) - Pietro Abate (video)
  • Mancoosi tools for the analysis and quality assurance of FOSS (FOSDEM 2011) - Ralf Treinen (video)
  • Improving Rollback in Linux via DSL approach & distributing (FOSDEM 2011) - John Thomson (video)
  • Cross distro dependency resolution reusing solvers among distros (FOSDEM 2010) - Stefano Zacchiroli (video)
  • The MANCOOSI project (FOSDEM 2008) - Ralf Treinen (video)

Finding all the elementary circuits of a directed graph

Below you can find an OCaml implementation of the algorithm described in this paper by D. B. Johnson:

  Finding all the elementary circuits of a directed graph.
  D. B. Johnson, SIAM Journal on Computing 4, no. 1, 77-84, 1975.
  http://dx.doi.org/10.1137/0204007
The idea of this algorithm is to enumerate all cycles in a strongly connected component. The next step will be to compute a "feedback arc set" of this connected component, so as to find the optimal way to break these loops and partition the connected component into smaller DAGs.

This is only the main function; you can find everything else in the git repo given below. If I have time I'll also take a shot at implementing Tarjan's algorithm and compare the two... Johnson's is supposed to be better complexity-wise, but I'm curious to check the difference on my domain-specific input.

let find_all_cycles_johnson g =
  if not G.is_directed then
    assert false;

  (*  stack of nodes in current path *)
  let path = Stack.create () in

  (* main function. return (true, acc) if in the last iteration we find a new cycle *)
  let rec circuit t result thisnode startnode component =

    (* add the node to the candidate path and block it *)
    Stack.push thisnode path;
    block t thisnode;

    let (closed,result) =
      G.fold_succ (fun nextnode (c,r) ->
        if G.V.equal nextnode startnode then
         (* this is a cycle ! *)
          (true,(Stack.copy path)::r)
        else begin
          if not(is_bloked t nextnode) then
            circuit t r nextnode startnode component
          else
            (c,r)
        end
      ) component thisnode (false,result)
    in
    if closed then
      (* this node is part of a cycle, but we don't want to exclude it from subsequent
          iterations : longer cycles can contain smaller cycles *)

      unblock t thisnode
    else
      (* since it is not part of a cycle, it must belong to a portion of the graph that
          does not contain any elementary circuits *)

      G.iter_succ (fun nextnode ->
        let l = get_notelem t nextnode in
        if List.mem thisnode l then
          Hashtbl.replace t.notelem nextnode (thisnode::l)
      ) component thisnode;

    ignore(Stack.pop path);
    (closed,result)
  in

  (* Johnson's algorithm requires some ordering of the nodes.
      We use the order induced by the compare function associated with the
      vertex type. See ocamlgraph Pack.Digraph  *)

  let vertex_set = G.fold_vertex SV.add g SV.empty in
 
  SV.fold (fun s result ->
    (* Build the subgraph induced by s and following nodes in the ordering *)
    let subset = SV.add s (partition vertex_set s) in
    let subgraph = extract_subgraph g subset in

    if G.nb_edges subgraph > 0 then begin
      (* Find the strongly connected component in the subgraph
       * that contains the least node according to the ordering *)

      let scc = G.Components.scc_list subgraph in
      let minnode = SV.min_elt subset in
      let mincomp = List.find (fun l -> List.mem minnode l) scc in
     
      (* smallest node in the component according to the ordering *)
      let startnode = minnode in
      let component = extract_subgraph subgraph (to_set mincomp) in

      (* init the block table for this component *)
      let t = init_block component in

      snd (circuit t result startnode startnode component)
    end else
      result
  ) vertex_set []
;;

You can get the code here: git clone http://mancoosi.org/~abate/repos/cycles.git . There are two versions of the algorithm: one is a direct OCaml translation of the imperative algorithm; the other, presented above, adds a bit of functional flavour...
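
To give an idea of how the function is meant to be called, here is a small usage sketch. It assumes G is something like ocamlgraph's Pack.Digraph and that find_all_cycles_johnson (plus the helpers from the repo) is in scope; the names are illustrative and may differ slightly from the actual repository.

open Graph
module G = Pack.Digraph

let () =
  let g = G.create () in
  let v = Array.init 4 G.V.create in
  (* two overlapping elementary circuits: 0 -> 1 -> 2 -> 0 and 1 -> 3 -> 1 *)
  List.iter (fun (a,b) -> G.add_edge g v.(a) v.(b))
    [ (0,1); (1,2); (2,0); (1,3); (3,1) ];
  (* each element of the result is a stack holding the nodes of one circuit *)
  let cycles = find_all_cycles_johnson g in
  Printf.printf "found %d elementary circuits\n" (List.length cycles)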

Update 1

The code repository now also contains two implementations of approximate algorithms for the feedback arc set problem.

Update 2

Since the number of cycles in a graph can be very large, I've added a lazy version of the algorithm using ExtLib enums (but it can easily be done without them). It would be cool to avoid using a global data structure to keep track of which nodes I've already visited, perhaps with something like Filliâtre's persistent arrays... The code could of course be further optimized, as many operations on lists and sets could be avoided with a smarter data structure to build the connected component of the subgraph at each iteration... Complete code in the git repo above.

let find_all_cycles_enum g =
  let rec circuit path t thisnode startnode component =                                        
     let rec aux = function                                                                    
       |[] -> Enum.empty ()                                                                    
       |nextnode :: rest ->
         if G.V.equal nextnode startnode then begin                                            
           let e = aux rest in                                                                
           unblock t thisnode;
           Enum.push e (List.rev (thisnode::path));                                            
           e
         end else
           if not(is_bloked t nextnode) then begin
             let e = circuit (thisnode::path) t nextnode startnode component in                
             Enum.append e (aux rest)                                                          
           end else begin                                                                      
             aux rest                                                                          
           end                                                                                
     in
     block t thisnode;
     let e = aux (G.succ component thisnode) in                                                
     G.iter_succ (fun nextnode ->
       let l = get_notelem t nextnode in                                                      
       if List.mem thisnode l then
         Hashtbl.replace t.notelem nextnode (thisnode::l)                                      
     ) component thisnode;                                                                    
     e                                                                                        
  in                                                                                          
 
  let vertex_set = G.fold_vertex SV.add g SV.empty in                                          
  let rec aux = function                                                                      
    |[] -> Enum.empty ()                                                                      
    |s :: rest ->  
      let subset = SV.add s (partition vertex_set s) in                                        
      let subgraph = extract_subgraph g subset in                                              
      (* I need only one... not all scc *)
      (* actually I need only the scc that contains the min *)                                
      let scc = G.Components.scc_list subgraph in                                              
      let minnode = SV.min_elt subset in
      let mincomp = List.find (fun l -> List.mem minnode l) scc in                            
      let startnode = minnode in
      let component = extract_subgraph subgraph (to_set mincomp) in                            
      let t = init_block component in
      let partial = circuit [] t startnode startnode component in                              
      Enum.append partial (aux rest)                                                          
  in
  aux (SV.elements vertex_set)                                                                
;;


Nice menus in Drupal 7?

I wanted to add a nice dropdown menu in Drupal 7, and browsing the net I realized that there are so many Drupal consultants making soooo many (fantastic) videos that it's kind of difficult to understand which version of Drupal they are using and whether things have changed in the meantime.

And I know this blog post is going to be obsolete in a short while...

Anyway, if you are using Drupal 7.12 and Nice Menus 7.x-2.0, creating a dropdown menu does not require any templating foo (unless you want to change the default Nice Menus appearance). Just create a menu (I used the main menu) with a hierarchical structure. Then in the theme settings (I'm using Bartik) disable the main and secondary menus. At this point you need to go to the block configuration panel and add the Nice Menus block to the "Features" region. Save, rinse, and it's done!

It's actually very easy, but the doco and videos around gave me a different impression. I think that's mostly because Drupal evolves so fast, and developers listen a lot to users, so that simple things are actually simple (and hard things are possible!).

My 0.02 Drupal cents for the day.


Dependency order matters!

Recently I discovered a subtle way in which the order dependencies are specified in Debian actually matters. While refactoring the code of dose3, I changed the order in which dependencies are considered by our SAT solver (of edos fame), and I witnessed a twofold performance loss just by randomizing how variables were presented to the solver. This highlights, on one hand, how strongly our solver depends on the structure of the problem and, on the other hand, the standard practice of Debian maintainers of giving an implicit priority to disjunctive dependencies, where the first alternative is the most preferred package (and maybe the most tested, at least dependency-wise).

The basic idea of distcheck is to encode the dependency information contained in a Packages file in CNF and then feed it to a SAT solver to find out whether a package has broken dependencies, or whether its dependencies are such that, no matter what, it would be impossible to install the package on a user machine.

Conflicts are encoded as binary clauses: if package A conflicts with package B, I add a constraint that says "not (A and B)", that is, A and B cannot be installed together. The dependency encoding associates to each element of the Depends field a clause of the form "foo implies one of the listed alternatives". So if a package foo depends on "A, B | C, D", I add the clauses "foo implies A", "foo implies B or C" and "foo implies D". This encoding is pretty standard and easy to understand.
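
To make the encoding concrete, here is a toy sketch in OCaml of the clause generation described above. This is not the dose3 code: packages are just propositional variables named by strings, and the helper names are made up for illustration.

(* a literal is a package variable, possibly negated *)
type lit = Pos of string | Neg of string

(* a clause is a disjunction of literals; the repository is a conjunction of clauses *)
type clause = lit list

(* "A conflicts with B"  ~  not (A and B)  ~  (not A or not B) *)
let conflict a b : clause = [ Neg a; Neg b ]

(* each element of the Depends field yields one clause:
   "foo implies (p1 or ... or pk)"  ~  (not foo or p1 or ... or pk) *)
let depends pkg alternatives : clause list =
  List.map (fun disj -> Neg pkg :: List.map (fun p -> Pos p) disj) alternatives

(* foo Depends: A, B | C, D   plus a conflict between A and B *)
let _example : clause list =
  depends "foo" [ ["A"]; ["B"; "C"]; ["D"] ] @ [ conflict "A" "B" ]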

The problem is how the SAT solver searches for a solution to the question "is it possible to install package foo in an empty repository?". The solver we use is very efficient and can easily deal with 100K packages or more, but in general it is not very good at dealing with random CNF instances. The reason edos-debcheck is so efficient lies in the way it exploits the structure of the SAT problems.

The goal of a SAT solver is to find a model (that is, a variable assignment) that is compatible with the given set of constraints. So if my encoding of the Debian repository is a set of constraints R, the installability problem boils down to adding to R an additional constraint imposing that the variable associated with the package foo must be true, and then asking the solver to find a model. This installation, in SAT terms, is just the set of variables that must be true in order to satisfy the given constraints.

If you look at the logic problem as a truth table, the idea is to find a satisfying row in this table: that row is the solution of your problem. Brute force is of course not an option, and modern SAT solvers use a number of strategies and heuristics to guide the search as intelligently as possible. Some of them try to learn from previous attempts; some, when they are lost, pick a random variable to proceed.

On problems that have a lot of structure, award-winning SAT solvers do not backtrack very much. By exploiting the structure of the problem, their algorithms considerably narrow down the search to only those variables that really matter for finding a solution.

All this long introduction was to talk about the solver currently used in edos-debcheck and in distcheck (which is a rewrite of edos-debcheck).

So why does dependency order matter? Even though the policy does not specify any order for dependencies, it is common practice to write disjunctive dependencies with the most probable and most tested alternative first, and the other, more esoteric choices later. Moreover, real packages are usually listed *before* virtual packages. Since every developer seems to be doing the same, some kind of structure might be hidden in the order in which dependencies are specified.

Part of the efficiency of the solver used in our tools is actually due to the fact that its search strategy strongly depends on the order in which literals appear in each clause. Saying that package foo depends on A and B is "different" from saying it depends on B and A, even if the two are semantically equivalent.

In my tests, I found about a twofold performance loss when the order of literals is either randomized or inverted. This is clearly specific to our solver; other solvers might not be susceptible to such small structural changes. SAT competitions often employ some form of obfuscation of well-known problems with well-known structures, precisely to make it useless to hard-code a search strategy that exploits the specific structure of a problem.

Since we're not trying to win a SAT competition here, but to provide a tool that solves a specific problem, we are of course very happy to exploit this structure.


QA tools for FOSS distributions

I'm going to deliver this talk at FOSDEM 2012, room H.1301 (Cross-Distribution Devroom), at 16:30 on Saturday. If you are interested, please come by. In particular I'd like to talk with all the developers out there who are using our work (of edos fame) and discuss with them future plans to migrate their programs to the new generation of Mancoosi-powered QA tools. Scroll down to get the slides.

FOSDEM link: http://fosdem.org/2012/schedule/event/distro_qa_tools

Abstract

FOSS distributions are under increasing pressure to deliver stable releases that include the most up-to-date upstream software. Time-based release strategies have exacerbated this problem, putting even more pressure on QA teams. The recently concluded Mancoosi project has developed a number of tools to automatically analyse large package collections for a range of problems, from installability checks to speculative analysis of software repositories. In this talk I'll present four command-line tools to identify and correct potential problems as soon as possible during the release cycle.

In particular :

  • Debcheck: This tool helps identify all broken packages within a repository and provides a detailed explanation of the problem. It can be used to prevent shipping releases that contain packages that cannot be installed because of missing or malformed dependencies.
  • Buildcheck: Given a Sources file and a set of binary repositories, this tool identifies those source packages that cannot be compiled because their build dependencies cannot be satisfied.
  • Outdated: This tool identifies those broken packages that need special attention because of outdated meta-data.
  • Challenged: This tool performs a speculative analysis of the repository to identify those packages that, if upgraded to a specific version, would break a large number of other packages in the repository. This tool would be particularly useful during the upgrade of a specific component to evaluate its impact on the software archive.

Most of our tools support both rpm (version 4 and 5) and deb based distributions.

The Mancoosi team.

Attachment: main.pdf (504.46 KB)

xfce4 and awesome

If you are tracking unstable and you were using GNOME 2, it's futile to resist moving to GNOME 3. A lot has been written about GNOME 3: some people love it, others hate it, and others bury their head in the sand by using the GNOME 3 fallback mode. I'm in the "hate" category. I used the fallback mode for a while, then switched to the full-blown gnome-shell and tried to use it for one month. I have to admit it is nice looking, intuitive and accessible. However, it lacks so many things I had in GNOME 2 (gnome-shell plug-ins are nice, but we'll still have to wait quite a while to get back all the fantastic GNOME 2 plug-ins) that the new shiny look is not enough to keep me using it.

Moreover, apart from the very subjective reasons I gave above (I'm sure others had different experiences and have a different pain threshold than I do), what I missed the most was the integration with awesome. I started using a tiling window manager last year and my productivity skyrocketed. Going back to manual window placement, overlapping, hiding, and the desperate continuous use of the "expose" functionality of gnome-shell was driving me mad.

So, since I was starting fresh with the new laptop, I looked around for alternatives. Going KDE is not an option: many people say it's nice and works very well, but it's not my cup of tea. Going back to GNOME 2 was not really an option either. What I knew is that I wanted a desktop environment that is compatible with the freedesktop standards, modular, and that would allow me to use my window manager of choice.

The almost natural solution was to try xfce4. It seems to me a very nice desktop environment: lightweight, extensible and with all the goodies I was looking for. The feeling is very much like GNOME 2. All components can work independently, and it plays very well with awesome.

Since I wanted a minimal subset of components, I started by installing only the xfce4 panel and awesome. This worked OK, but a lot of functionality was missing: plug-ins, notifications, automounting, integration with consolekit, etc.

So after fighting for a while, I installed the full xfce4 stack. Running awesome instead of the standard window manager is just a matter of creating a custom session in the user preferences. On the awesome side, you need to disable the awesome panel and the awesome menu. This is all pretty easy, and it was pretty much the same configuration I used to have with GNOME 2.

I also tried to use slim as display manager. I have to say it works well, but it fails to integrate with xfce4 and consolekit, leaving me without the correct permissions. Looking for a replacement, I tried lightdm. This one is more widely used than slim and integrates perfectly with consolekit, solving all my problems.

One very nice application that comes with xfce4 is Thunar, its file manager, together with its integration with Ristretto, the image viewer. It always struck me how badly eog and nautilus work together...

And since I was at it, I also dumped rhythmbox for listen, and f-spot for shotwell. I like these two applications: they do their job well, they are stable (so far) and they have all the functionality I need. Bonus: I finally got rid of mono!


new thinkpad x220

This summer my beloved ThinkPad x301 died in a cloud of smoke. It was exactly 3 years and 20 days old, while my warranty was valid only for 3 years. Now, don't tell me this is a coincidence. Anyway. After about 5 months, I finally managed to convince my employer to get me a new ThinkPad, the x220. My specs include a 128 GB SSD, 4 GB of RAM, a 2.4 GHz processor, a camera and a fingerprint reader.

It's a pity that the x300 series is not in production anymore: they were light, with a solid battery and a large screen. The X1 just doesn't cut it. Even though it can be considered the successor of the x300, its battery life is just not enough for my needs. On the other hand, the x220 is a very nice machine: the screen is a bit smaller than the x300's, but it is light, with a very powerful processor and a good battery, and it feels very solid. In my opinion Lenovo should have packed the new hardware of the x220 into the chassis of the x300, maybe with a small compromise on battery life (I got the big battery and I can squeeze almost 7 hours out of a single charge), but clearly this was not a good business choice...

Installing Debian on this laptop is not straightforward, because none of the official Debian installers ship with a recent kernel (as in the 3.x series). Since with the official Debian installer I could get neither the driver for the Ethernet card nor the driver for the wireless card, I opted for a custom installer built by kmuto (http://kmuto.jp/debian/d-i/). Using this installer the Ethernet card is recognized immediately and it's easy to proceed with the installation as usual. Another option would have been to add the binary blob for the wireless chip, but apparently the Debian installer supports only WEP authentication, while all my access points are WPA. I didn't spend too much time on the wireless setup, so it might well be that it is indeed possible to install using a WPA access point.

Last time I installed a laptop, I used the automatic partitioning option to have LVM on top of a LUKS-encrypted partition, only to discover later that the default partition sizes were a bit too small for me. For example, by default the boot partition is only 256 MB. This is plenty if you only ever want one kernel image installed at a given time, but if you want more than one kernel, memtest and, for example, a grml rescue ISO, it's easy to run out of space.

So I proceeded to manually partition the disk, creating a boot partition of 512 MB and using the rest as a LUKS-encrypted device with LVM on top and 3 logical volumes: sys (15 GB), swap (4 GB) and home (the rest). For my daily use, having 15 GB on my laptop for /usr, /var, etc. should be more than enough...

The next step was to install the system. Since I recently got extremely pissed off with GNOME 3, I decided to dump it completely and go back to awesome. But since awesome all by itself is a bit sad, I paired it up with Xfce. Everything works except automounting, and I'm still trying to figure out how to make it work; apparently it's a consolekit problem... I'll write another post about the xfce4 + awesome setup soon...

Today I also started playing with the fingerprint reader. It seems to work, but I haven't yet managed to use it in conjunction with PAM for authentication... more to come.

And... one last closing remark: during the last 5 months I used a Dell Latitude e6410... Gosh, I feel I'm on another planet. The keyboard of a ThinkPad gives you pleasure, not pain; going from 2 to 4 GB of RAM is a big jump; and from a conventional HD to an SSD... well... it seems I'm flyinggggg :) I have the impression my productivity just went up 50%!!!

If you work on your laptop every day, get a good laptop. It is well worth the investment...

Update

Now Debian can be installed on this model using the stock installation images.

Managing the complexity of component based systems

I've recently given this talk at RSISE/NICTA at the Australian National University in Canberra.

http://cecs.anu.edu.au/seminars/more/SID/2990

If somebody at linux.conf.au is interested in having a chat about these topics, I should be around the entire week. Please drop me a note or look for me on IRC.

Abstract


Free and Open Source Software (FOSS) distributions are rather peculiar instances of component-based software platforms. They are developed rapidly and without tight central coordination, they are huge (tens to thousands components per platform), and their importance in the Internet computing infrastructure is growing.

Both the construction of a coherent collection of components and the maintenance of installations based on these raise difficult problems for distribution maintainers and system administrators. Distributions evolve rapidly by releasing new component versions and strive for increasingly high Quality Assurance (QA) requirements on their component collections. System upgrades may proceed on different paths depending on the current state of the system and the available components, and system administrators are faced with difficult choices of upgrade paths and with frequent upgrade failures.

The now concluded project MANCOOSI (Managing the Complexity of the Open Source Infrastructure) aims to solve some of these problems. I will describe current and past work done in the context of MANCOOSI and some future directions.


ocamlbuild stubs and dynamic libraries

The other day I wrote about my experience setting up the build system for a simple library. However, since parmap includes only two simple syscall stubs, I didn't have the chance to explain how to convince ocamlbuild to build stubs that depend on an external dynamic library.

I'm sure that, facing this problem, you have had a look at this example: http://brion.inria.fr/gallium/index.php/Ocamlbuild_example_with_C_stubs

Despite providing all the elements, the page is a bit scarce on details and explanations (at least for me...).

So, suppose you want to build OCaml stubs for a C library called toto. Good practice tells us to put our stubs in a file called toto_stubs.c and then to add the .o file to a clib file (libtoto_stubs.clib) that ocamlbuild is going to use to build our dynamic library.
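
For completeness, the clib file is simply the list of object files that make up the stub library; with the single stub file above, libtoto_stubs.clib would just contain:

toto_stubs.o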

So far so good. Now we need to add a tag, say use_toto, that we will use to compile our binaries and libraries. Our _tags file will look like:

 <libtoto_stubs.*>: use_toto

Here I use only one tag; the ocamlbuild example uses two, one for compiling and one for linking.

At this point we need to explain to ocamlbuild how to build our library. First we add a compile flag saying that whenever we compile a C object that uses toto, we must also add its include directory:

       flag ["c"; "use_toto"; "compile"] & S[
         A"-ccopt"; A"-I/usr/include/toto";
       ];

Second, we add a link flag so that the dynamic library carries all the information it needs to load other external libraries. This is important, as we don't want to add any other flags anywhere else: when we use -ltoto_stubs, we want all other dependencies automatically satisfied by the linker. Note that the libtoto.so referred to by -ltoto is the C library for which we are writing these bindings, and it sits in /usr/lib/.

       flag ["c"; "use_toto"; "ocamlmklib"] & S[
         A"-ltoto";
       ];

Finally, we add a flag so that whenever we build our OCaml library (say toto.cma), it carries all the information necessary to load its dynamic component at run time:

       flag ["ocaml"; "use_toto"; "link"; "library"; "byte"] & S[
         A"-dllib"; A"-ltoto_stubs";
       ];

Using ocamlobjinfo we can easily check whether the cma indeed contains this information. The output should look something like this:

$ ocamlobjinfo _build/toto.cma
File _build/toto.cma
Force custom: no
Extra C object files:
Extra C options:
Extra dynamically-loaded libraries: -ltoto_stubs
Unit name: Toto
Force link: no
...

In order to generate the cmxa and .a objects, we need to specify a few other flags and dependencies, like:

       dep ["link"; "ocaml"; "link_toto"] ["libtoto_stubs.a"]
       flag ["ocaml"; "use_toto"; "link"; "library"; "native"] & S[
         A"-cclib"; A"-ltoto_stubs";
       ];
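
All these flag and dep declarations live in myocamlbuild.ml, inside the After_rules hook of the plugin dispatcher. A possible skeleton putting together the pieces of this post (just a sketch, reusing the example names above) would be:

open Ocamlbuild_plugin

let () =
  dispatch begin function
    | After_rules ->
        (* find toto's headers when compiling the C stubs *)
        flag ["c"; "use_toto"; "compile"] & S[A"-ccopt"; A"-I/usr/include/toto"];
        (* link the stub library against libtoto.so *)
        flag ["c"; "use_toto"; "ocamlmklib"] & S[A"-ltoto"];
        (* record the dll to load at run time in the bytecode library *)
        flag ["ocaml"; "use_toto"; "link"; "library"; "byte"] & S[A"-dllib"; A"-ltoto_stubs"];
        (* native code: depend on and link against the static stub library *)
        dep ["link"; "ocaml"; "link_toto"] ["libtoto_stubs.a"];
        flag ["ocaml"; "use_toto"; "link"; "library"; "native"] & S[A"-cclib"; A"-ltoto_stubs"]
    | _ -> ()
  end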

As always, if I'm missing something, please drop a note in the comment page.
