Qemu 0.8.1 (with kqemu 1.3.0pre7)
While I was testing out the “single cut and paste” Linux VNC remote desktop sharing script and x11vnc binary, I spent a fair amount of time booting live CDs in qemu to test various distributions and ages of Linux setups to see how compatible things were. I hadn’t checked in at the qemu site in quite a while (a few months), but there was a new version out. In fact, I think 0.7.1 is what I was running previously, so I missed 0.7.2 and 0.8.0. Anyway, I’m running 0.8.1 now and I compiled kqemu as well (now at 1.3.0pre7). Wow, qemu has made great strides (with kqemu) since 0.7.1.
I remember my first post about qemu here, with the kernel accelerator module, was fairly glowing. I was impressed with the speed. It was still a bit more sluggish than other commercial virtualizers, but I thought it might be close enough, and I have used it. In fact, I used it for testing through the whole WMF exploit episode between Christmas and New Year’s, but it became obvious that at times it was painfully sluggish. (I know, I know, never satisfied.) For occasional light testing it was fine; for heavier use it was annoyingly slow.
Anyway, I tried (and failed) to get vmplayer running (it bombed out with signal 11). I thought perhaps I had hardware issues, or operating system (Mandrake 10.1) issues. Since then, I’ve moved up to Mandriva 2006 and a new amd64 system (although I’m running the 32-bit release of Mandriva), but still signal 11. I thought kernel modules (like kqemu) might be causing the problem, but removing them didn’t seem to solve it. I’ve found a few others with similar problems in the forums, but nothing that looks really promising. I DID get it running on my P3 laptop, though, and it seemed fairly snappy there (relative to qemu’s performance on that machine).
Anyway, this is about qemu and the strides it’s made. First, let me give a bit of info on compiling it. Yes, I compiled it from source, the whole thing, and it worked quite well.
After downloading and unpacking (tar -xzvf qemu*gz), the following got me a successful compilation.
./configure --disable-user --cc=/usr/bin/gcc-3.3.6
make
make install
In all honesty, I started with just ./configure, and it nagged about gcc-4 being the compiler and wanted gcc 3.3, so I did urpmi gcc3.3 and after that install ran ‘ls /usr/bin/gcc*’ to find the correct file to point the configure script to. On the first configure try after installing gcc 3.3, I used /usr/bin/gcc3.3-version and it still warned about gcc-4, so I changed things and tried again. Like I said, I probably could have left off the --disable-user; I didn’t really think I needed the user-mode emulation at this point anyway.
After the make install I turned to kqemu. Now, I had a dkms-kqemu package that I had previously installed (I thought it might have been a src.rpm that I compiled, but I’m not sure now), so first: urpme dkms-kqemu. Then, in the kqemu directory after unpacking (by the way, it doesn’t need to be unpacked in any special way, just its own directory, tar -xzvf anywhere; I seem to recall a previous version needing access to the qemu source, but that’s no longer necessary):
./configure
make
make install
VERY easy. Then: modprobe kqemu (I think I may have also changed the ownership of the kqemu device node, which is root.root by default).
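For reference, the module-load and permissions step above might look like this. This is a sketch of a typical setup run as root; the /dev/kqemu device node name and the chmod approach are my assumptions, not something confirmed above:

```shell
# Load the kqemu accelerator module (run as root)
modprobe kqemu

# The device node is owned root.root by default; to let a normal user
# run qemu with acceleration, loosen the permissions
# (assuming the node is /dev/kqemu):
chmod 666 /dev/kqemu
```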
I started up qemu with a boot image and instantly noticed a difference. Then I discovered there’s a switch to allow kernel mode acceleration: -kernel-kqemu (documentation here). With this switch it’s considered “full virtualization mode,” and both guest OS kernel code and user code are executed directly on the host CPU. (Without the switch, only user code is executed on the host CPU itself.)
Anyway, it seems to be a VERY dramatic improvement in speed over the last version I used. By the way, I did see someone else comment that it brings virtualization speed up to that of vmware. I’m not certain about that, since I can’t do a benchmark comparison on this system, but it IS a noticeable improvement (and since February the switch has become -kernel-kqemu, which is backwards from what the linked article says).
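As an illustration, invoking qemu with the acceleration switch might look like the following. A sketch only: the disk image name and memory size are placeholders, and the only flag taken from the text above is -kernel-kqemu (the 0.8.1 spelling):

```shell
# Boot a guest in "full virtualization mode": with -kernel-kqemu both
# guest kernel and user code run directly on the host CPU
qemu -kernel-kqemu -m 256 -hda disk.img
```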