This situation really pisses me off, and I don't understand what's wrong! I have a 9-cell battery, but I'm not able to squeeze more than about an hour and a half out of it. This is ridiculous considering that other people report 6 hours and more with an X220 and a 9-cell battery.
Well, I'm putting various info here for reference... maybe I'll find the culprit.
This info was obtained with this script: thinkpad-smapi.sh
BATTERY 0 INFORMATION
=====================
Battery slot:
Battery present:            yes
Battery state:              discharging

Embedded info:
FRU P/N:                    42T4940
Barcoding:                  1ZJRM1BHCDR
Serial number:              13521
OEM Manufacturer:           SANYO
Chemistry:                  LION
Manufacture date:           2011-11-17
Design capacity & voltage:  93240 mWh, 11100 mV

Battery health:
First use date:             2012-01-23
Cycle count:                134
Last full capacity:         93180 mWh
Average current / power (past 1 minute): -1079 mA, -12077 mW

Battery status:
Remaining capacity:         78420 mWh (84 %)
Remaining running time:     441 min
Running current & power:    -1054 mA, -11797 mW
Temperature:                33400 mC
Voltage:                    11193 mV
Remaining charging time:    [not charging]

Battery charging control:
Start charging at:          [unavailable] %
Stop charging at:           85 %
Prevent charging for:       0 min
Force battery discharge:    [unavailable]

Here I print out the output of ibam
And this is what upsets me the most: one moment the battery is at 74%, the next at 6%! I really don't understand...
Managing your puppet manifests with a vcs is a best practice, and there is a lot of material about it on the web. The easiest way to do it is to use git directly in the directory /etc/puppet with a simple synchronization strategy against an external repo, either to publish your work or simply to keep a backup somewhere.
Things get a bit more complicated when you want to co-administer the machine with several people. Setting up user accounts, permissions and everything else can be a pain in the neck. Moreover, working from your desktop is always more comfortable than logging in as root on a remote system and making changes there...
The solution I've chosen to make my life a bit easier is gitolite, a simple git gateway that uses ssh public keys for authentication and does not require creating local users on the server machine. Gitolite is available in debian and installing it is super easy :
apt-get install gitolite
If you already use puppet, you might be tempted to manage your gitolite installation with puppet too. That is all good, but I don't advise using modules like this one http://forge.puppetlabs.com/gwmngilfen/gitolite/1.0.0 as it installs gitolite from source, which is not necessary on debian... For my purposes I didn't find it necessary to manage gitolite with puppet, as all the default config options were good enough for me.
Once the debian package is installed, in order to initialize your repo you just need to pass gitolite the admin public key, that is your .ssh/id_rsa.pub, and then run this command:
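The command itself went missing from this post; with the gitolite 2.x debian package, which ships the gl-setup command, the initialization looks something like this (the server name and key path are placeholders):

```shell
# copy the admin public key to the server...
scp ~/.ssh/id_rsa.pub server.example.org:/tmp/admin.pub
# ...and run gl-setup with it as the gitolite user
ssh root@server.example.org "su - gitolite -c 'gl-setup /tmp/admin.pub'"
```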
This will create the admin and testing repos in /var/lib/gitolite/repositories and set up a few other things. At this point you are ready to test your gitolite installation by cloning the admin repo :
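A sketch of the test, assuming the server is reachable as server.example.org:

```shell
# if the key was accepted, this clone succeeds without a password prompt
git clone gitolite@server.example.org:gitolite-admin
```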
Gitolite is engineered so that a single gitolite user manages all your repositories. To add more repositories and users, have a look at the documentation and then edit the file conf/gitolite.conf to add your new puppet repository.
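For example, a minimal conf/gitolite.conf stanza for a puppet repository shared between two users (the user names are placeholders) could look like this:

```
repo puppet
    RW+ = alice bob
```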
At this point you can go two ways. If you already use git to manage your puppet directory, you can just make a copy of it somewhere and add gitolite as a remote.
If you didn't use git before, you can just copy the manifests into your new git repository, make a first commit and push it to the server.
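A sketch of the second case (the server name is a placeholder; in the first case you would skip the init/commit steps and only add the remote):

```shell
cd /etc/puppet
git init
git add .
git commit -m "initial import of the puppet manifests"
# gitolite serves the repo we declared in conf/gitolite.conf
git remote add origin gitolite@server.example.org:puppet
git push origin master
```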
Every authorized user can now use git to clone your puppet repository, hack, commit, push...
One last step is to add a small post-receive hook on the server to synchronize your gitolite repository with the puppet directory in /etc. This will sync your main puppet directory and trigger changes on the nodes at the next puppetd run. First I created a small shell script in /usr/local/bin/puppet-post-receive-hook.sh :
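The script is missing from this post; a minimal sketch of what it must do is:

```shell
#!/bin/sh
# /usr/local/bin/puppet-post-receive-hook.sh
# sync /etc/puppet with the gitolite repository
cd /etc/puppet || exit 1
# git sets GIT_DIR when running hooks; it would confuse the pull below
unset GIT_DIR
git pull -q origin master
```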
This script presupposes that your git repo in /etc/puppet has the gitolite repo as origin... Then I added a simple hook in the gitolite git repo that calls this script using sudo :
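A sketch of the hook, dropped in hooks/post-receive of the bare puppet repo on the server; it assumes a matching sudoers entry for the gitolite user:

```shell
#!/bin/sh
# hooks/post-receive of the puppet repo on the gitolite server
# requires a sudoers line such as:
#   gitolite ALL=(root) NOPASSWD: /usr/local/bin/puppet-post-receive-hook.sh
sudo /usr/local/bin/puppet-post-receive-hook.sh
```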
And while you are at it, you should also add a pre-commit hook to check the manifest syntax. This will save you a lot of useless commits.
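A possible sketch for the client side, using puppet parser validate (on older puppet versions the equivalent was puppet --parseonly):

```shell
#!/bin/sh
# .git/hooks/pre-commit -- refuse to commit manifests with syntax errors
for f in $(git diff --cached --name-only | grep '\.pp$'); do
    puppet parser validate "$f" || exit 1
done
```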
If you use a more complicated puppet setup with environments (I'm not there yet, and I don't think my setup will evolve in that direction in the near future), you can use puppet-sync, which seems a neat script for the job.
For the moment this setup works pretty well. I'm tempted to explore mcollective to trigger puppet runs on my nodes, but I'm not there yet...
If you have an intel sound card and you are concerned about battery life, you have probably seen this in the powertop output...
100.0% Device Audio codec hwC0D0: Conexant
After much duck-duck-ing around I found an enlightening comment on the lesswatts mailing list. It turns out that to enable power management on this device, the device must be opened first. Something like
echo -n | aplay
in your rc.local script should do the trick.
Hopefully this will give me a few more minutes of battery time. Still no clue for the other device :
100.0% Device Audio codec hwC0D3: Intel
This page has plenty of good tips, most of them already integrated in the laptop-mode package in debian...
I added a couple of lines to my /etc/sysfs.conf file :
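The lines themselves are missing from this post; the usual audio power-saving knobs for the snd_hda_intel module, in /etc/sysfs.conf syntax, would be something like this (an assumption, adapt to your hardware):

```
module/snd_hda_intel/parameters/power_save = 1
module/snd_hda_intel/parameters/power_save_controller = Y
```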
but the Audio codec hwC0D3: Intel is still hanging there at 100% :(
Developing software while working at a university always and invariably puts you in an uncomfortable position. On one side, academia is one of the driving forces behind good software development practices. We study, analyze, test, defend and sometimes attack different development methodologies and, more importantly, we teach students what they should do once outside academia. On the other hand, precisely because our primary job is to do all of the above, it is sometimes difficult to follow these best practices while developing software ourselves. Sometimes it is a matter of mindset, sometimes a matter of resources and time.
Today I invested a bit of time to install virtualbox and run a jenkins instance on it. I prefer not to litter my laptop with jenkins, as I know I won't run it all the time and I don't want to leave hundreds of MBs of unused dependencies around.
Installing virtualbox is pretty easy: it's in the debian repos, just one apt-get away. Once installed, you need to create a virtual machine. For this purpose I simply downloaded the netinstall CD and used it in the VB GUI as the installation CD. Everything went smoothly and my guest was up and running in no time.
By default VB sets up a NAT network on the first adapter (eth0). This is nice and easy if you want a machine that does not need to be reached from the outside world. On the other hand, if you want to connect to this machine you need to do a bit more work. To this end I added a host-only network between the guest VM and the host. The catch is that you first need to create a host adapter on the host machine: simply go to File -> Preferences -> Network and create a new interface. This is the interface that will appear on your host. On the guest side, configure the second adapter (eth1) as a host-only network and select the interface you just created.
The first time you run the VM, the NAT connection should work straight away, while the second interface will not. To fix this you need to edit the file /etc/network/interfaces and set eth1 to auto-configure using dhcp.
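The relevant stanza in the guest's /etc/network/interfaces is the standard debian one:

```
auto eth1
iface eth1 inet dhcp
```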
Once this is all done, we need to install jenkins. This is pretty easy as well; the jenkins wiki gives all the explanations you need :
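At the time of writing, the wiki instructions boiled down to adding the jenkins apt repository and installing the package (run as root in the guest; treat this as a sketch of the historical instructions):

```shell
# add the jenkins signing key and repository, then install
wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | apt-key add -
echo "deb http://pkg.jenkins-ci.org/debian binary/" > /etc/apt/sources.list.d/jenkins.list
apt-get update
apt-get install jenkins
```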
Once this is done, on your host go to :8080 and voila! You can start playing with jenkins!
Puppet has built-in functionality to serve small files to its clients. However, for internal use I sometimes find it easier to create a custom debian package to install a specific component than to write a puppet recipe and copy files around.
To create a local debian repository I use the package reprepro. This is a simple tool that creates and manages apt repositories; it is easy to configure and so far it has fully lived up to my expectations.
First of all you need to create a configuration file describing your distribution. In this case I chose /var/www/debian/conf/distributions and added the following content :
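The file is missing from this post; a minimal example in the reprepro distributions format (codename, names and the gpg keyid are placeholders to adapt):

```
Origin: example.org
Label: example.org
Codename: squeeze
Architectures: i386 amd64 source
Components: main
Description: local package repository
SignWith: ABCD1234
```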
Notice that since reprepro wants to sign your repository, you need to provide a gpg keyid for it.
Adding a package to the repository is straightforward :
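For example, assuming the layout above and a hypothetical package file:

```shell
# -b points reprepro at the repository base directory
reprepro -b /var/www/debian includedeb squeeze mypackage_1.0-1_amd64.deb
```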
As I said, since the repository is signed, we need a way to add the key to the known keys of the target machine. To achieve this, we add the following puppet recipe :
First we copy the key that we stored in the puppet file bucket into the root directory of the client, then we exec the apt-key command. Note that since puppet does not apply resources in a fixed order by default, we must specify an execution order using the attributes subscribe and notify. Similarly, as soon as the file /etc/apt/sources.list.d/puppet.list is added to the machine, we run apt-get update to refresh apt's cache.
The last stanza simply installs the package that we added to the local repository.
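A sketch of the recipe described above (file names, the repository url and the package name are placeholders):

```puppet
file { "/root/repo.gpg":
    source => "puppet:///files/repo.gpg",
}

exec { "apt-key add /root/repo.gpg":
    path        => "/usr/bin:/bin",
    subscribe   => File["/root/repo.gpg"],
    refreshonly => true,
}

file { "/etc/apt/sources.list.d/puppet.list":
    content => "deb http://server.example.org/debian squeeze main\n",
    notify  => Exec["apt-get update"],
}

exec { "apt-get update":
    path        => "/usr/bin:/bin",
    refreshonly => true,
}

package { "mypackage":
    ensure  => installed,
    require => [ File["/etc/apt/sources.list.d/puppet.list"],
                 Exec["apt-key add /root/repo.gpg"] ],
}
```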
Ganeti-debootstrap-instance contains a nice set of scripts to create a debian (or derivative) image using debootstrap. Images can be configured and customized by writing simple hook scripts that modify various aspects of the default installation. However, writing these scripts is not really fun, and pushing it too far can lead to long, messy scripts, losing the overall benefit of automatic configuration.
Puppet is my configuration management tool of choice, but installing puppet on a new machine requires a few magic incantations that the user has to perform manually, or in semi-automatic mode (autosign=true). My goal is to install puppet automatically on the newly created instance, so that it will run and configure the new instance at first boot. From that moment on I can forget about ganeti and configure all the remaining services of my new VM using puppet.
To do so, we need to install puppet (and apt-get update/upgrade...), create the ssl certificates for the client, and enable the puppet daemon on the client. We add another hook in /etc/ganeti/instance-debootstrap/hooks/ :
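The hook itself is missing from this post; a sketch of what it has to do follows. $TARGET and $INSTANCE_NAME come from the instance-debootstrap hook environment; $DOMAIN, the ssl paths and the START=yes toggle are assumptions to adapt to your setup:

```shell
#!/bin/sh
# /etc/ganeti/instance-debootstrap/hooks/puppet
# install puppet in the new instance and pre-sign its certificate
set -e

DOMAIN=example.org
FQDN="$INSTANCE_NAME.$DOMAIN"

# install puppet inside the chroot
chroot "$TARGET" apt-get update
chroot "$TARGET" apt-get install -y puppet

# generate and sign the client certificate on the puppetmaster,
# which runs on this ganeti node
puppetca --generate "$FQDN"

# copy the key and certificates into the instance
SSL="$TARGET/var/lib/puppet/ssl"
mkdir -p "$SSL/private_keys" "$SSL/certs"
cp "/var/lib/puppet/ssl/private_keys/$FQDN.pem" "$SSL/private_keys/"
cp "/var/lib/puppet/ssl/certs/$FQDN.pem" "$SSL/certs/"
cp /var/lib/puppet/ssl/certs/ca.pem "$SSL/certs/"

# enable the puppet daemon at boot
sed -i 's/START=no/START=yes/' "$TARGET/etc/default/puppet"
```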
This script uses puppetca to create the client key on the puppet (and ganeti) server, sign it, and then copy it to the target machine. Notice that we create the certificate for the fqdn $INSTANCE_NAME.$DOMAIN, otherwise puppet will complain loudly. This is not strictly needed, but if you want to do otherwise you'll need to fiddle a bit more with the puppet configuration. The procedure to create a puppet certificate server-side is well documented on the puppet website, so if you are curious about the details, duck-duck it.
Second post about ganeti. This time I'll talk about adding a swap partition to an image created with ganeti-debootstrap-instance. Browsing the web, it seems that an old version of the ganeti debootstrap script allowed the creation of a swap partition from the command line. The current version in sid does not, so if you want to add a swap partition, you need to write a small hook in /etc/ganeti/instance-debootstrap/hooks/.
Part of the code below is taken from the instance-debootstrap script.
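The code itself is missing from this post; a sketch of such a hook, using the variables exported by the ganeti OS interface (DISK_COUNT, DISK_1_PATH, TARGET), would be:

```shell
#!/bin/sh
# /etc/ganeti/instance-debootstrap/hooks/swap
# if a second disk was passed to gnt-instance add, format it as swap
# and add an fstab entry in the new instance
set -e

if [ "$DISK_COUNT" -lt 2 ] || [ -z "$DISK_1_PATH" ]; then
    exit 0   # no second disk, nothing to do
fi

mkswap -f "$DISK_1_PATH"

# use the volume UUID so the entry survives device renaming
UUID=$(blkid -s UUID -o value "$DISK_1_PATH")
echo "UUID=$UUID none swap sw 0 0" >> "$TARGET/etc/fstab"
```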
This script does two things: first, it checks whether the user passed a second disk argument to the gnt-instance add call (I arbitrarily decided that the second disk is used as the swap disk). Second, it figures out the vol-id of this disk, creates the swap partition and writes an entry in the fstab. All in all it's a straightforward procedure, but I love it when I can cut and paste easy scripts :)
The call to create the instance is as follows, using a 5G disk for the system and a 1G disk for the swap.
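A sketch of the call (instance name and OS variant are placeholders; depending on your cluster you may also need -n to pick the target node):

```shell
gnt-instance add -t plain -o debootstrap+default \
    --disk 0:size=5G --disk 1:size=1G \
    node1.example.org
```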
I'll start here a small series of posts about ganeti, xen and puppet. For my work I run a few servers on xen, and it has always been a bit of a pain to create a new instance and keep it up to date. Up to now I've used the excellent xen-create-image tool to create my VMs, but I wanted to try something new and more sexy... Last week I finally found some time (and a spare box to run my experiments on) to learn how to use ganeti. Ganeti is the only tool I tried, but it seems to fit the bill for my use, and it looks like a polished and mature project to me... Moreover, I've seen a presentation about it at every FLOSS conference I've attended in the last few years, and I thought it was time to give it a try.
Installing and configuring ganeti is fairly easy and there is a lot of documentation available, so this post is not going to be about installing it, but rather about how to create a new bare instance with ganeti-debootstrap-instance. There is also a way to create a new instance from an image, but I haven't gone that way yet.
This first post is about the first problem I encountered: how to automatically assign a network address and a name to each new instance created by gnt-instance add. Since all my instances should be able to communicate on the same subnet, I've decided to configure xen to create a NATted private network and to add each new instance to this network.
The first step is to create an interface in /etc/network/interfaces :
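The stanza is missing from this post; a sketch of a bridge with no physical ports, holding the gateway address of the private subnet (requires the bridge-utils package; addresses are the ones used later in this post):

```
auto xen-br0
iface xen-br0 inet static
    address 10.0.0.1
    netmask 255.255.255.0
    bridge_ports none
    bridge_stp off
```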
This is the standard debian way, but since xen uses a different naming convention (here I'm using the ganeti convention, xen-br0 rather than xenbr0), I need to tell xen what I intend to do by adding these lines in /etc/xen/xend.config :
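The lines are missing from this post; since the bridge is created by debian's networking scripts rather than by xen, something along these lines should work (the exact option names depend on your xen version, so treat this as a sketch):

```
(network-script network-dummy)
(vif-script 'vif-bridge bridge=xen-br0')
```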
Next I have to connect my real network interface to the private network using a few iptables rules in /etc/rc.local (there is probably a better place to do this...) :
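The rules amount to enabling ip forwarding and masquerading the private subnet; eth0 as the outgoing interface is an assumption:

```shell
# enable forwarding and NAT the 10.0.0.0/24 subnet behind eth0
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE
iptables -A FORWARD -s 10.0.0.0/24 -j ACCEPT
iptables -A FORWARD -d 10.0.0.0/24 -m state --state RELATED,ESTABLISHED -j ACCEPT
```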
The xen setup is now complete and every new image should have a vif connected to the subnet 10.0.0.0. The xen setup corresponds to the physical wiring of the network; the next step is to configure each instance so they can communicate on this subnet. Since I build my VMs with ganeti-debootstrap-instance, and by default debootstrap does not configure the network, we need to add a new hook in the directory /etc/ganeti/instance-debootstrap/hooks.
This hook does two things. First, it configures the interfaces of the new instance to use dhcp; second, it adds an entry to the dnsmasq configuration to make the instance known to the world. This basically boils down to adding a file in /etc/dnsmasq.d/ with the mac address of the new instance and its designated name. Dnsmasq will then provide an ip address for this instance and add it to the dns.
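The hook itself is missing from this post; a sketch, using the NIC_0_MAC, INSTANCE_NAME and TARGET variables exported by the ganeti OS interface (the dnsmasq restart at the end is an assumption):

```shell
#!/bin/sh
# /etc/ganeti/instance-debootstrap/hooks/network
# configure dhcp in the instance and register its mac with dnsmasq
set -e

cat > "$TARGET/etc/network/interfaces" <<EOF
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp
EOF

# make the instance known to dnsmasq (dhcp + dns)
echo "dhcp-host=$NIC_0_MAC,$INSTANCE_NAME" \
    > "/etc/dnsmasq.d/$INSTANCE_NAME"
/etc/init.d/dnsmasq restart
```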
Configuring dnsmasq is pretty easy as well. First, I want it to answer dhcp queries only on the internal network; second, I want to configure my clients with 10.0.0.1 as nameserver and gateway. You can just add the following lines in /etc/dnsmasq.d/general to get it going.
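The lines are missing from this post; a sketch matching the description above (the dhcp range is an arbitrary choice within the subnet):

```
# answer dhcp queries only on the private bridge
interface=xen-br0
dhcp-range=10.0.0.10,10.0.0.100,12h
# hand out 10.0.0.1 as default gateway and nameserver
dhcp-option=option:router,10.0.0.1
dhcp-option=option:dns-server,10.0.0.1
```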
To create your new instance you can just run the following command :
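The command is missing from this post; a sketch consistent with the options discussed below (node1 as instance name, one 5G system disk):

```shell
gnt-instance add -t plain -o debootstrap+unstable \
    --disk 0:size=5G \
    --no-ip-check --no-name-check \
    node1
```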
If you are running your dom0 on debian squeeze, before running this command you should configure ganeti to pass the right xen parameters to the newly created instance :
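A sketch using gnt-cluster modify; the kernel and initrd paths below are squeeze's xen kernel and are assumptions to adapt to your dom0:

```shell
gnt-cluster modify -H \
  xen-pvm:kernel_path=/boot/vmlinuz-2.6.32-5-xen-amd64,initrd_path=/boot/initrd.img-2.6.32-5-xen-amd64
```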
I use --no-ip-check and --no-name-check to skip the ip and dns checks performed by ganeti and to avoid a sort of chicken-and-egg problem: the name and address of the new instance are not yet known to dnsmasq, and node1 is the name that the hook will use to add an entry in the dnsmasq configuration. debootstrap+unstable is a variant of the default configuration; you need to add it to the list of variants used by ganeti-debootstrap-instance.
That should be it. The new instance should come up with a dynamically assigned ip address, able to talk to the outside world and automatically known by all the other machines on the subnet via dns.
The next post will be about how to add a swap hook for ganeti-debootstrap-instance.
I'm blogging about this small configuration issue because it took me some time to figure out how to configure cupt and smart to solve it. The reason I'm playing with cupt and smartpm is that I'm working to compare, once again, a number of debian package managers against state-of-the-art cudf solvers using mpm, and I'm struggling quite a bit to configure my virtual environment. Last year I promised to revise and fix our results. I haven't forgotten my promise, but it is taking longer than expected.
Anyway, back to the main topic. The release problem arises because the key used to sign sarge (which I'm using as a baseline for my experiments) expired long ago. If you try to retrieve sarge from archive.debian.org you will find that it is signed with the key A70DAF536070D3A1, and apt-get will complain loudly if you try to use an archive signed with an expired key.
For cupt this is documented in the man page, and there are a number of options to add either to /etc/apt/apt.conf or to cupt's own conf file. Then cupt will happily accept the sarge Packages file and let you run update.
For smart I could not find this information anywhere except by reading the source code (thank god it's python!). To cut it short, you need to set the keyring to an empty value for the specific channel. On the command line you get something like :
smart channel --set aptsync-614482cb2c7e08d5722af3498232ba52 keyring= --config-file=/root/var/lib/smart/config
where aptsync-614482cb2c7e08d5722af3498232ba52 is the channel name corresponding to sarge in my conf. Since I'm using a simulated environment, I save the result of this option in a non-default config file in my chroot.
Today I installed unburden-home-dir and I'm very pleased with it. It's a simple script that moves your temporary files outside your home directory. The main reason I installed it is to minimize the number of reads and writes done by iceweasel. My favorite browser is apparently the culprit of 80% of the read/write operations on disk, even when it is idle... By moving the cache to tmpfs, I hope to reduce the IO on disk and extend my battery life. Using an SSD I haven't noticed any remarkable performance benefit, but I hope I'll manage to squeeze a bit more out of my battery.
Installing and configuring unburden-home-dir is straightforward. It is packaged for debian (in experimental at the time of writing) and very easy to configure. Remember that if you want your cache on tmpfs, you need to either mount a tmpfs file system somewhere or enable RAMTMP=yes in /etc/default/rcS (the default in wheezy).
After installing unburden-home-dir iotop shows a delightful page full of zeros :)
ps: if you use duplicity, remember to specify the option --archive-dir to move the duplicity cache somewhere else...