For quite a while I had this problem of low volume while listening to music or watching videos. The default ALSA mixer controls only give you Master and PCM, but even with everything set to 100%, the volume is still annoyingly low. For a long time I suspected this was a software problem. For example, VLC is able to boost the volume level to 140%...
A while ago I found an interesting post (that I can't find anymore) on this subject. The author suggested adding this snippet to your ~/.asoundrc.
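The snippet itself didn't survive here, but a typical softvol pre-amp configuration looks like the following. The control name matches the "Pre-Amp" control described in this post; the slave device ("dmix") and the 20 dB gain limit are assumptions you may need to adapt to your hardware:

```
# ~/.asoundrc — softvol pre-amp sketch (slave device and max_dB are assumptions)
pcm.!default {
    type plug
    slave.pcm "preamp"
}

pcm.preamp {
    type softvol
    slave.pcm "dmix"
    control {
        name "Pre-Amp"
        card 0
    }
    max_dB 20.0
}
```

Note that the new control only shows up in alsamixer after the device has been opened at least once (e.g. by playing any sound).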
This will add a new control to your mixer, named "Pre-Amp", that allows you to boost the sound considerably in some situations. I've noticed that with this new control the sound can get a bit distorted. Anyhow, better than not hearing anything.
It's true, sometimes the titles of my posts try to be as search-engine-friendly as possible...
Yesterday I was very unhappy with the new behavior of awesome. All of a sudden windows were not receiving focus automatically on screen change, forcing me to either use the mouse or perform some other keyboard action. So, for the first time, I decided to have a look at the awesome config file. At first, I have to admit, it's a bit intimidating. I don't know a word of lua and its syntax is not really intuitive.
First, a word about my setup. I use xfce4 + awesome. I don't want thunar or xfdesktop running, and I don't want the wibox bar hiding somewhere...
I'll highlight a few chunks of the file. If you are interested, the whole file is attached to this post. I wrote only one small function; everything else is bits and pieces that I found on the net.
A bit of debugging code that is actually very handy when trying to understand what is going on:
A small function to raise one window on each screen when I move left or right. This is actually not quite what I want, but it is better than nothing. What I'd really like is to raise the window under the mouse pointer. However, the function awful.mouse.client_under_pointer() does not seem to work as expected... so this is something I want to fix sometime in the future.
And then the rules I use to adjust the different types of programs I currently use. To get the class of a window you can use xprop.
And this small bit is to remove the wibox:
All in all, lua is a nice language and awesome is a really flexible WM. Having the possibility to script its configuration file gives you unlimited room to experiment and to automate boring and repetitive actions. Very nice :)
If you are using a laptop as your main computing device, having reliable and up-to-date backups should be a strong priority. A laptop can be stolen, can fall, or, like any other computer, can die in a cloud of smoke (like my last laptop did $^%# ).
For quite some time I've used duply, a nice and intuitive frontend to duplicity. Duply allows you to have backups that are:
This is basically everything I want from a backup solution. There is a nice article about duply here. Since I don't want to remember to run duply every now and then, I want a daemon to take care of it for me. If you are using a laptop, cron is not your friend, as your laptop might be sleeping or simply not connected to the network.
A solution that I have known about for years is anacron. Today I set it up following these instructions. I've created 4 different profiles to back up different parts of my home directory (home, pictures, media and projects) and instructed anacron to run the backup for home and projects daily, and the rest only weekly.
Since the backup is unattended, I've used the duply options that take care of creating a new full backup each month (every 2 months for pictures and music). To do this you need to call duply as
duply <profile> backup
and specify the option MAX_FULLBKP_AGE=1M in the conf file of your profile.
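The anacron side of the setup above can be sketched like this; the job names and delays are made up, while the periods follow the daily/weekly split described above:

```
# /etc/anacrontab (sketch; job names and delays are hypothetical)
# period(days)  delay(min)  job-id          command
1               10          duply-home      duply home backup
1               15          duply-projects  duply projects backup
7               20          duply-pictures  duply pictures backup
7               25          duply-media     duply media backup
```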
Now I can sleep better... somebody/something else is looking after my data.
Something else I might add in the next few days is a script to detect the type of network I'm connected to, and to launch the backup accordingly. Something like:
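The post stops here, but a minimal sketch of such a check could look like this. The SSID names and the duply profile are assumptions, and `iwgetid` is just one of several ways to get the current wireless network:

```shell
#!/bin/sh
# Sketch: run the backup only when connected to a trusted (non-metered)
# network. SSID names and the duply profile below are hypothetical.
should_backup() {
    case "$1" in
        home-wifi|office-lan) return 0 ;;   # trusted networks: backup OK
        *) return 1 ;;                      # unknown or metered: skip
    esac
}

ssid=$(iwgetid -r 2>/dev/null || true)
if should_backup "$ssid"; then
    duply home backup
fi
```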
For a while I've had a Tenvis mini319W network camera. This camera is your usual consumer Chinese product: built on the cheap, badly packaged, and sold in low-end computer shops. Anyway, it was a cheap and dirty solution for a project of mine, and despite everything it does the job OK.
The Tenvis mini319W is actually the same as the Foscam FI8918W, and when I say the same, I mean different casing but the exact same board. It's just a re-branding. For some reason there are a lot of clones of this model. To stress even more the fact that they are actually the same product, I even downloaded the firmware from Foscam and flashed my Tenvis without problems. The correct firmware to use is lr_cmos_11_37_2_46.bin, from the zip file you can download from the Foscam website. I assume that the Foscam firmware is more up-to-date, and for the moment it is working fine. The two relevant links:
Here is a long list of cameras that are compatible with the same firmware. I spent all this time on this gadget just because the Tenvis model is not included in this list...
After a fair amount of time learning how to unpack the firmware, I found this blog, which has all the information and tools to turn this camera into an open camera. There is also a project called openipcam that focuses on this family of products. You can find there all the information that I managed to figure out in the meantime. I think this wiki is the main source of information about Foscam hw/sw.
This is a walk-through of all the things I've learned this morning about this camera.
The easiest thing you can do when approaching a binary you don't know is to look at it. After a few tries with hexdump I quickly realized that the format wasn't anything I had seen before. Since I didn't have enough info to duckduck, I ended up asking for help on IRC. Shortly after, I stumbled on the excellent binwalk, a nice tool to unpack firmware images.
This tool definitely got me started. Analyzing the output, it is easy to guess that the first part is a zipped kernel and the second, more interesting, part is the romfs image. I wanted to know if it is possible to have a shell console on the device, so I focused immediately on the fs image. To extract and mount the image, you just need to carve it out of the binary blob with dd and then mount it.
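The carving step itself is just arithmetic on the offset binwalk reports. A self-contained toy version (the offset and contents here are made up; with the real image you would use the romfs offset printed by binwalk):

```shell
#!/bin/sh
# Toy demo of carving a sub-image out of a blob with dd.
# The 12-byte offset is fake; take the real one from binwalk's output.
printf 'KERNELKERNEL' >  firmware.bin      # stand-in for the zipped kernel
printf 'ROMFSDATA'    >> firmware.bin      # stand-in for the romfs image
dd if=firmware.bin of=romfs.img bs=1 skip=12 2>/dev/null
cat romfs.img                              # prints ROMFSDATA
# with the real image, the last step is:
#   mount -o loop -t romfs romfs.img /mnt/romfs
```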
So far, so good. This is what we get in the image:
So there is definitely no way to remotely access the camera... However, at this point I started to suspect that there were a few too many similarities with the Foscam firmware... and once you have the right keyword, you can find almost everything on the internet.
From here on it was easy: somebody else had already done all the hard work. Unpacking the firmware with fostar is a breeze:
Now I know the architecture, and in theory with this info I could recreate the build environment to add more binaries to the rom. The web interface is in a different binary blob. To extract it:
fostar also has the built-in capability to repack the image, giving you the possibility to fix, hack and modify the interface (or the rom image, if you want to go through the hassle of cross-compiling your tools).
These cameras can also be connected via a jtag (serial cable) interface. I have a small jtag-to-usb cable and I want to try this next...
Here are a few other interesting links related to firmware analysis in different contexts that I found and read along the way.
If you use iceweasel/firefox and privoxy, you might want to let privoxy do its job more effectively and remove all ads from your web pages. In the past I've used the adblock firefox extension, but I have a gut feeling that letting privoxy handle ad removal might be more effective.
There is a very nice script here to convert Adblock Plus rules to privoxy rules. Using it is very easy:
The only thing left to do is to point your web browser to privoxy and have fun on your new, clean internet.
Now that my infrastructure is up and running, I'd love to have a command to create a small cluster of VMs for a specific purpose. Up until now, to create my VMs I've used a simple script calling the following command line.
Then, reading the gnt-instance man page, I discovered the batch-create command.
After writing the script I realized I had done all this just to satisfy my need to use a json snippet. But I guess my script is no longer and no less understandable than before... Ah well. This is a bit from the Dept. of useless optimizations... Anyway, there you go. The gnt-instance man page is a bit low on details about the syntax to use, but by looking at the queue directory (/var/lib/ganeti/queue) you can see the json generated by the command line above. From there it's easy to figure out the various options you can use.
Once you've got this bit, we can easily script the creation of multiple instances:
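As a sketch, a batch-create input file could look something like this. The instance names, disk and memory sizes, and even the field names are illustrative guesses, and should be double-checked against the json found in /var/lib/ganeti/queue as described above:

```shell
#!/bin/sh
# Generate a hypothetical batch-create input file (all values are guesses;
# verify the exact keys against the json in /var/lib/ganeti/queue).
cat > cluster.json <<'EOF'
{
  "vm1.example.org": {"template": "drbd", "os": "debootstrap+default",
                      "disk_size": ["10g"], "backend": {"memory": "512m"}},
  "vm2.example.org": {"template": "drbd", "os": "debootstrap+default",
                      "disk_size": ["10g"], "backend": {"memory": "512m"}}
}
EOF
# then: gnt-instance batch-create cluster.json
echo "wrote cluster.json"
```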
Note that the json snippet above can include a number of other parameters specific to the hypervisor you are using. In my setup, I've configured the hypervisor once and for all with the following options:
Last year we invited David to work with us for a few days to add a generic interface to apt to call external solvers. After a few iterations, this patch finally landed in master and recently (about 3 months ago), in debian unstable.
[ David Kalnischkies ]
* [ABI-Break] Implement EDSP in libapt-pkg so that all front-ends which use the internal resolver can now be used also with external ones as the usage is hidden in between the old API
* provide two edsp solvers in apt-utils:
- 'dump' to quickly output a complete scenario and
- 'apt' to use the internal as an external resolver

Today the new version of apt-cudf was uploaded to unstable, and with it the latest bug fixes that make it ready for daily use. I've used it quite a lot myself to upgrade my machine and it seems to be working pretty well so far... The most important difference from the old version is the support for multi-arch enabled machines.
This marks an important milestone in our efforts to integrate external solvers, built using different technologies, directly into apt. From a user perspective, this means having the possibility to check whether there exists a better (best?) solution to an installation problem than the one proposed by the internal apt solver. Moreover, even if apt-get gives very satisfactory answers, there are occasions where it fails miserably, leaving the user wondering how to unravel the complex web of dependencies to accomplish the task. The cudf solvers available in debian at the moment are: aspcud, mccs and packup.
From an architectural point of view, this is accomplished by abstracting the installation problem via a simple textual protocol (EDSP) and using an external tool to do the heavy-duty translation. Since the solvers now available in debian are not meant to be debian-specific, using them involves a two-step translation. The EDSP protocol specification is for the moment "hidden" in the apt documentation. I hope to find a better place for it soon: it would be cool if other package managers such as smart or cupt could add an implementation of EDSP in their code so as to automatically benefit from this technology.
To make it happen, apt first creates an EDSP file that is then handed to apt-cudf, which takes care of the translation to cudf and back into EDSP, which in turn is read back by apt. Apt-cudf is the bridge between EDSP and the external solvers; it takes care of the bookkeeping and of selecting the right optimization criterion.
Roberto recently wrote a very nice article explaining how to use apt-get with an external solver.
In a nutshell, if you want to try this out, you just need to install apt-cudf and one external solver, like aspcud from the University of Potsdam, and then call apt-get using the --solver option (which is not yet documented #67442 ). For example:
apt-get install gnome --solver aspcud
This will install gnome using an optimization criterion that tries to minimize the changes to the system. Various other optimization criteria for all the apt-get default actions can be specified in the apt-cudf configuration file /etc/apt-cudf.conf.
I hope the new release of apt-cudf makes it into testing before the freeze. Time to test!!!
This situation pisses me off big time. And I don't understand what's wrong! I have a 9-cell battery, but I'm not able to squeeze more than 1h30 out of it. And this is ridiculous considering that other people report 6 hours and more with an x220 and a 9-cell battery.
Well, I'm putting various info here for reference... maybe I'll find the culprit.
This info was obtained with this script: thinkpad-smapi.sh
BATTERY 0 INFORMATION
=====================
Battery slot:
Battery present: yes
Battery state: discharging
Embedded info:
  FRU P/N: 42T4940
  Barcoding: 1ZJRM1BHCDR
  Serial number: 13521
  OEM Manufacturer: SANYO
  Chemistry: LION
  Manufacture date: 2011-11-17
  Design capacity & voltage: 93240 mWh, 11100 mV
Battery health:
  First use date: 2012-01-23
  Cycle count: 134
  Last full capacity: 93180 mWh
  Average current / power (past 1 minute): -1079 mA, -12077 mW
Battery status:
  Remaining capacity: 78420 mWh (84 %)
  Remaining running time: 441 min
  Running current & power: -1054 mA, -11797 mW
  Temperature: 33400 mC
  Voltage: 11193 mV
  Remaining charging time: [not charging]
Battery charging control:
  Start charging at: [unavailable] %
  Stop charging at: 85 %
  Prevent charging for: 0 min
  Force battery discharge: [unavailable]

Here I print the output of ibam:
And this is what upsets me the most: one moment the battery is at 74%, the next at 6%!!! I really don't understand...
Managing the puppet manifests using a vcs is a best practice and there is a lot of material about it on the web. The easiest way to do it is to use git directly in the directory /etc/puppet, with a simple synchronization strategy against an external repo, either to publish your work or simply to keep a backup somewhere.
Things are a bit more complicated when you want to co-administer the machine with multiple people. Setting up user accounts, permissions and everything else can be a pain in the neck. Moreover, working from your desktop is always more comfortable than logging in as root on a remote system and making changes there...
The solution I've chosen to make my life a bit easier is gitolite, a simple git gateway that uses ssh public keys for authentication and does not require the creation of local users on the server machine. Gitolite is available in debian and installing it is super easy:
apt-get install gitolite
If you already use puppet, you might be tempted to use it to manage your gitolite installation. This is all good, but I'd advise against modules like this one http://forge.puppetlabs.com/gwmngilfen/gitolite/1.0.0 as it installs gitolite from source, which is not necessary on debian... For my purposes I didn't find it necessary to manage gitolite with puppet, as all the default config options were good enough for me.
Once the debian package is installed, in order to initialize your repos you just need to give gitolite the admin public key, that is, your .ssh/id_rsa.pub key, and then run this command:
This will create the admin and testing repos in /var/lib/gitolite/repositories and set up a few other things. At this point you are ready to test your gitolite installation by cloning the admin repo:
git clone gitolite@<server>:gitolite-admin
Gitolite is engineered so that only the gitolite user manages all your repositories. To add more repositories and users, you should have a look at the documentation and then edit the file conf/gitolite.conf to add your new puppet repository.
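As a sketch, the relevant stanza in conf/gitolite.conf could look like this (the user names are made up):

```
# conf/gitolite.conf (sketch; user names are hypothetical)
repo gitolite-admin
    RW+     =   admin

repo puppet
    RW+     =   alice bob
```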
At this point you can go two ways. If you already use git to manage your puppet directory, you can just make a copy of it somewhere and then add gitolite as a remote.
If you didn't use git before, you can just copy the manifests into your new git repository, make a first commit and push it to the server.
Every authorized user can now use git to clone your puppet repository, hack, commit, push...
One last step is to add a small post-receive hook on the server to synchronize the gitolite repository with the puppet directory in /etc. This will sync your main puppet directory and trigger changes on the nodes at the next puppetd run. First I created a small shell script in /usr/local/bin/puppet-post-receive-hook.sh :
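The script itself didn't survive on this page; in essence it boils down to a `git pull` in the checkout that puppet reads from. The demo below is a reconstruction of that idea using throwaway directories (stand-ins for the real gitolite repo and /etc/puppet) so it can run anywhere; the real script would simply cd into /etc/puppet and pull:

```shell
#!/bin/sh
# Reconstruction of the idea behind puppet-post-receive-hook.sh.
# Throwaway directories stand in for /var/lib/gitolite/... and /etc/puppet.
set -e
tmp=$(mktemp -d)
git init -q --bare -b master "$tmp/puppet.git"            # gitolite-side repo
git clone -q "$tmp/puppet.git" "$tmp/etc-puppet" 2>/dev/null  # /etc/puppet clone
( cd "$tmp/etc-puppet" &&
  git -c user.name=admin -c user.email=admin@example.org \
      commit -q --allow-empty -m 'first manifest' &&
  git push -q origin master )
# the hook itself is essentially just this pull in the puppet checkout:
( cd "$tmp/etc-puppet" && git pull -q origin master && echo synced )
```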
This script presupposes that the git repo in /etc/puppet has the gitolite repo as origin... Then I added a simple hook in the gitolite git repo that calls this script using sudo:
And while you are at it, you should also add a pre-commit hook to check the manifest syntax. This will save you a lot of useless commits.
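A possible sketch of such a hook (.git/hooks/pre-commit on the client side); `puppet parser validate` is the stock syntax check in current puppet releases, and is an assumption here (older versions used `puppet --parseonly`):

```shell
#!/bin/sh
# Sketch of a pre-commit hook: validate every staged puppet manifest
# before allowing the commit. Assumes `puppet parser validate` exists.
check_manifests() {
    rc=0
    for f in $(git diff --cached --name-only --diff-filter=ACM 2>/dev/null \
                 | grep '\.pp$'); do
        puppet parser validate "$f" || rc=1
    done
    return $rc
}
check_manifests
```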
If you have a more complicated puppet setup using environments (I'm not there yet, and I don't think my setup will evolve in that direction in the near future), you can use puppet-sync, which seems to be a neat script for the job.
For the moment this setup works pretty well. I'm tempted to explore mcollective to trigger puppet runs on my nodes, but I'm not there yet...
If you have an Intel sound card and you are concerned about battery life, you have probably seen this in the powertop output...
100.0% Device Audio codec hwC0D0: Conexant

After much duck-duck-ing around, I found this enlightening comment on the lesswatts mailing list. It turns out that to enable power management on this device, the device must be opened first. Something like
echo -n | aplay
in your rc.local script should do the trick.
Hopefully this will give me a few more minutes of battery time. Still no clue about the other device:
100.0% Device Audio codec hwC0D3: Intel

This page has plenty of good tips, most of them already integrated into the laptop-mode package in debian...
I added a couple of lines to my /etc/sysfs.conf file:
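The exact lines didn't survive here; the usual snd_hda_intel power-saving knobs in /etc/sysfs.conf look like this (a standard recipe, not necessarily the exact lines used):

```
# /etc/sysfs.conf — standard snd_hda_intel power-saving parameters (sketch)
module/snd_hda_intel/parameters/power_save = 1
module/snd_hda_intel/parameters/power_save_controller = Y
```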
but the Audio codec hwC0D3: Intel is still hanging there at 100% :(