QEMU - Deep Dive

So I wanted to touch on a really powerful tool that I have mentioned before but haven't delved far into: the QEMU build that Xilinx provides for ARM development on Zynq.

I touched on how to download and build QEMU previously, but I didn't go much further than that.

Note:  I am using Ubuntu 12.04 LTS for this blog post.

First, let's make sure we have the latest version of the QEMU environment from Xilinx.  Download it here:

http://wiki.xilinx.com/zynq-qemu

There is a link within the text to download the tar archive.  Once you have downloaded it, decompress it into your home directory with the tar command:

    zynqgeek@sue:~$ tar -zxvf zynq_linux.tar.gz

Once you have extracted the archive, let's take a look at what is included from Xilinx.

    zynqgeek@sue:~/zynq_linux$ ls -al
    total 36
    drwxr-xr-x  6 zynqgeek zynqgeek 4096 Aug 15 19:55 .
    drwxr-xr-x 27 zynqgeek zynqgeek 4096 Oct  1 19:29 ..
    drwxr-xr-x  2 zynqgeek zynqgeek 4096 Jun  4 17:45 arm-softmmu
    drwxr-xr-x  2 zynqgeek zynqgeek 4096 Jun  4 18:17 filesystem
    drwxr-xr-x  2 zynqgeek zynqgeek 4096 Aug 15 17:07 kernel
    drwxr-xr-x  3 zynqgeek zynqgeek 4096 Apr 19 19:31 pc-bios
    -rwxr-xr-x  1 zynqgeek zynqgeek  349 Aug 15 17:29 start_qemu.sh
    -rw-r--r--  1 zynqgeek zynqgeek 2846 Aug 15 19:53 xilinx_zynq.dtb
    -rw-r--r--  1 zynqgeek zynqgeek 2982 Aug 15 19:37 xilinx_zynq.dts

So there are a few files and folders included in the archive.  Let's take them one at a time.

arm-softmmu:
This folder holds the compiled binary of the QEMU emulation software, qemu-system-arm.  This is the executable that is actually run to create the virtual ARM hardware within Zynq.

filesystem:
This folder contains a single file: ramdisk.img.  This is the image of the ramdisk that is mounted when the Linux kernel boots.  In a previous post we went into the details of how to modify your ramdisk for your Zedboard.  Let's come back to this folder, as it is an important one.

kernel:
This is another folder that has but a single file: zImage.  This is the compiled Linux kernel image.  Xilinx has a wiki page on how to compile the kernel if you need further reference.  For our emulation environment there really isn't much reason to recompile the kernel, so we will run with this pre-compiled binary for this post.

pc-bios:
This folder contains another folder that holds various keyboard mappings.  These allow QEMU to decode input from the host machine correctly when routing stdio.  They are all plain text, so go ahead and open a few up and take a look.  They are simply mappings between key names and keycodes that describe the different keyboard layouts.

start_qemu.sh:
This is our launching-point script.  Let's take a look at its contents:


    #!/bin/sh

    ./arm-softmmu/qemu-system-arm -M xilinx-zynq-a9 -m 1024 -serial null -serial mon:stdio -kern-dtb ./xilinx_zynq.dtb -smp 2 -nographic -kernel kernel/zImage -initrd filesystem/ramdisk.img -net nic,model=cadence_gem -net user -tftp ~/ -redir tcp:10023::23 -redir tcp:10080::80 -redir tcp:10022::22 -redir tcp:10021::21 -redir tcp:1234::1234

Whoa ... ok, a lot of information here.  Let's take it a small bit at a time.

First, all this script is doing is starting the QEMU virtual environment with a specific configuration that allows the Linux kernel to boot correctly and interface correctly with the host/outside world.

Let's take each switch one at a time.  I am using the QEMU invocation documentation as my reference.

-M xilinx-zynq-a9
The -M option selects the type of machine that you would like to emulate.  The options for this are compiled into QEMU itself.  To get a listing of all the machines available for emulation, set your machine type to '?':

    zynqgeek@sue:~/zynq_linux/arm-softmmu$ ./qemu-system-arm -M ?

-m 1024
The -m switch (note this is lowercase, while the previous switch was uppercase) sets the amount of memory the virtual machine will have.  In this case we are setting it to 1024 megabytes, or 1GB, of memory.  Note: the Zedboard only has 512MB (0.5GB) of memory, whereas the ZC702 board has 1GB.  If you plan to run your code on the Zedboard, you may want to edit your launch script to reflect the Zedboard's available memory.
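If you do make that change, a quick way is with sed (a sketch; run it from the zynq_linux directory, and it assumes the launch line contains "-m 1024" exactly as shown earlier):

```shell
# Swap the VM's memory from 1GB down to the Zedboard's 512MB
sed -i 's/-m 1024/-m 512/' start_qemu.sh

# Quick check that the edit took
grep -- '-m 512' start_qemu.sh
```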

-serial null
-serial mon:stdio
The -serial switch creates a serial port within the virtual machine.  It can be used up to four times to create four different serial ports.  The first occurrence creates a serial port that is redirected to the /dev/null device.  The second, mon:stdio, connects a serial port to the standard input and output of the terminal QEMU runs in (multiplexed with the QEMU monitor).  That is, every time something is printed to the screen it is sent out the serial port, and every input from the serial port is forwarded to the kernel.


-kern-dtb ./xilinx_zynq.dtb
The way that QEMU learns about hardware peripherals (SPI, I2C, CAN, Ethernet, USB, etc.) is via a device tree blob (.dtb) file.  In this case we are given a .dtb file that describes the hard-silicon peripherals available within Zynq: xilinx_zynq.dtb.

-smp 2
SMP, in this instance, stands for Symmetric Multi-Processing.  This is the number of symmetric virtual cores that our VM will have.  In this case we set it to 2 because our Zynq device has two ARM A9 cores (booooom!).

-nographic
We don't have any graphics devices within the Zynq device, and we haven't gone down the road of simulating custom peripherals with QEMU yet, so we will turn off graphics for our VM.

-kernel kernel/zImage
We need to tell QEMU what code to execute once it is past POST (the power-on self test).  In this case we point it at the Linux kernel that we have compiled (or had compiled for us): the zImage file within our kernel directory.

-initrd filesystem/ramdisk.img
Once the kernel boots we want a ramdisk so we can execute code and configure our system.  This initial ram disk needs to be defined, and that is done with the ramdisk.img file.  The -initrd (initial ram disk) switch loads the ramdisk image into memory.

-net nic,model=cadence_gem
We need to create a network adapter that the Linux kernel understands.  In our case we tell QEMU to emulate the "cadence_gem" model, which mimics the Gigabit Ethernet MAC hardware within the Zynq device and is supported by a driver compiled into our kernel.

-net user -tftp ~/
Note:  This is a legacy switch, but it is still supported.  I believe this may have been used to get code onto the VM during testing, perhaps?  If someone can shed some light on this, I'll update the post.  (See the comments below for the answer: -net user enables QEMU's user-mode network stack, and -tftp ~/ exposes your home directory via a built-in TFTP server.)

-redir tcp:10023::23
-redir tcp:10080::80
-redir tcp:10022::22
-redir tcp:10021::21
-redir tcp:1234::1234
I am going to bundle these together as they all do the same thing: redirect a port on the host machine to a port inside the VM.  So if you connect to your host machine on port 10023, you will actually be connecting to port 23 in the VM.  Here we are forwarding host ports 10023, 10080, 10022, 10021, and 1234 to guest ports 23, 80, 22, 21, and 1234 respectively.

Ok, so that's that!  That is the overview of what is happening when you launch the start_qemu.sh script.

Back to the files ...

xilinx_zynq.dtb:
This file is the compiled description of where the different pieces of hardware live within the ARM subsystem and how to communicate with them.  It is created using scripts within the Linux kernel source code.

xilinx_zynq.dts:
This is the file that describes the hardware of the Zynq IC, and it is used to create the xilinx_zynq.dtb file using the scripts found in the Linux kernel tree.  If you open this file you can see the various devices that will be present when you launch your VM, along with their configuration.  Here is a list:

gic - Interrupt controller
pl310 - Cache controller
uart - UART (remember, the first of these is set to null in our launch script)
uart - UART
timer - Timer hardware
swdt - Watchdog timer
eth - Gigabit Ethernet controller
sdhci - SD host controller
usb - The USB OTG slave controller
gpio - The GPIO pins on the ARM subsystem
devcfg - The configuration port for the Programmable Logic (PL) subsystem.

Well, that's it for file and folder definitions.  Hopefully you now have a better idea of what is going on with your QEMU virtual machine and how it gets launched.

Next, let's go back and talk about the ramdisk.img file.  Within the ./filesystem folder is the ramdisk.img file.  This is the initial ramdisk that is mounted after you boot your QEMU VM.  Let's mount it and take a look at what is inside.

    zynqgeek@sue:~/zynq_linux$ cd filesystem/
    zynqgeek@sue:~/zynq_linux/filesystem$ ls
    ramdisk.img
    zynqgeek@sue:~/zynq_linux/filesystem$ mkdir ramdisk
    zynqgeek@sue:~/zynq_linux/filesystem$ sudo mount -o loop ramdisk.img ./ramdisk
    zynqgeek@sue:~/zynq_linux/filesystem$ cd ramdisk/
    zynqgeek@sue:~/zynq_linux/filesystem/ramdisk$ ls -al
    total 35
    drwxr-xr-x 17 6146  2223  1024 Jun  4 18:17 .
    drwxr-xr-x  3 zynqgeek  zynqgeek 4096 Oct  1 20:15 ..
    drwxr-sr-x  2 6146 10195  2048 Jun  4 17:54 bin
    drwxr-sr-x  2 6146 10195  1024 Nov 24  2010 dev
    drwxr-sr-x  4 6146 10195  1024 Apr 11 19:56 etc
    drwxr-sr-x  2 6146 10195  2048 Jun  4 17:50 lib
    drwxr-sr-x 11 6146 10195  1024 May 31 16:22 licenses
    lrwxrwxrwx  1 6146 10195    11 Jun  4 17:54 linuxrc -> bin/busybox
    drwx------  2 root root  12288 Jun  4 17:50 lost+found
    drwxr-sr-x  2 6146 10195  1024 Aug 21  2010 mnt
    drwxr-sr-x  2 6146 10195  1024 Aug 21  2010 opt
    drwxr-sr-x  2 6146 10195  1024 Aug  6  2010 proc
    -rw-r--r--  1 6146 10195   256 Jun  2  2011 README
    drwxr-sr-x  2 6146 10195  1024 Aug 21  2010 root
    drwxr-sr-x  2 6146 10195  1024 Jun  4 17:55 sbin
    drwxr-sr-x  2 6146 10195  1024 Aug  6  2010 sys
    drwxr-sr-x  2 6146 10195  1024 Aug  6  2010 tmp
    -rwxr--r--  1 6146 10195   481 Dec  2  2010 update_qspi.sh
    drwxr-sr-x  5 6146 10195  1024 Jun  4 17:54 usr
    drwxr-sr-x  4 6146 10195  1024 Oct 25  2010 var
    zynqgeek@sue:~/zynq_linux/filesystem/ramdisk$

Cool, now we can modify this if we want, and the changes will be reflected in our VM after we boot.

I am going to reference a previous blog post and write myself a quick program to launch after boot.  Get your ARM development environment all set up.  The code for my program looks like this:


    #include <stdio.h>

    int main()
    {
            printf("Hello Zynqgeek, how are you today?  Good I hope.\n");
            printf("Well anyway, here is a prompt so you can actually do some work:\n");

            return 0;
    }

And here is the command I used to compile it, and copy it to my mounted ramdisk.img file.

    zynqgeek@sue:~/arm-devel$ cat hello.c
    #include <stdio.h>

    int main()
    {
            printf("Hello Zynqgeek, how are you today?  Good I hope.\n");
            printf("Well anyway, here is a prompt so you can actually do some work:\n");
    
            return 0;
    }
    zynqgeek@sue:~/arm-devel$ arm-linux-gnueabi-gcc -o hello hello.c
    zynqgeek@sue:~/arm-devel$ ls
    hello  hello.c
    zynqgeek@sue:~/arm-devel$ sudo cp ./hello ../zynq_linux/filesystem/ramdisk/root/
    zynqgeek@sue:~/arm-devel$

Ok, now we have our hello executable within our ramdisk image.  Next, we need to set it to launch automatically once the kernel has finished booting.  The script that runs when the kernel is done is called rcS, and it lives in the ./zynq_linux/filesystem/ramdisk/etc/init.d/ directory.

I edited my file to launch my executable.  The last line in my script *was*:

    echo "rcS Complete"

I changed it to this:

    clear
    ./root/hello

Great!  Once you are done editing the file you can un-mount the ramdisk image and we can boot our VM!

    zynqgeek@sue:~/zynq_linux/filesystem/ramdisk/etc/init.d$ sudo vi rcS
    zynqgeek@sue:~/zynq_linux/filesystem/ramdisk/etc/init.d$ cd ..
    zynqgeek@sue:~/zynq_linux/filesystem/ramdisk/etc$ cd ..
    zynqgeek@sue:~/zynq_linux/filesystem/ramdisk$ cd ..
    zynqgeek@sue:~/zynq_linux/filesystem$ sudo umount ./ramdisk/
    zynqgeek@sue:~/zynq_linux/filesystem$

Cool, now let's boot this bad boy.  Simply execute your start_qemu.sh script:

    zynqgeek@sue:~/zynq_linux$ ./start_qemu.sh
    ram size=40000000
    error reading QSPI block device
    error no mtd drive for nand flash
    a0mpcore_priv: smp_priv_base f8f00000
    error no sd drive for sdhci controller (0)
    error no sd drive for sdhci controller (1)
    Number of configured NICs 0x1
    ram_size 40000000, board_id d32, loader_start 0
    Uncompressing Linux... done, booting the kernel.

... lots of kernel messages ...

    Hello Zynqgeek, how are you today?  Good I hope.
    Well anyway, here is a prompt so you can actually do some work:
    zynq>

Woohoo!  Isn't this stuff so much fun?  Well ... I think it is, at least :D.

The next thing you can take advantage of is the port forwarding we set up in our launch script.  If you open up another terminal window on your Ubuntu 12.04 LTS box, you will be able to SSH into the VM on port 10022 (which forwards to the guest's port 22):

    zynqgeek@sue:~$ ssh -p 10022 root@localhost
    root@localhost's password:
    zynq>

In this case the username and password are both root.
Well there you have it!  Enjoy!

Comments

Hi, good post, thanks very much.

I have two questions :

(1) QEMU Xilinx ARM emulate NEON Instruction ?
(2) QEMU Xilinx ARM emulate double precision FPU ?

secureasm

I have answered your question on the Zedboard forum here:

http://zedboard.org/content/qemu-deep-dive

Hope that helps! I'll try and get a benchmark on double-precision floating point this evening.

Hi,

Did you ever manage to do some floating point testing?

I'm trying to build code with the hardware floating point enabled using the Code Sourcery tools, and am not having much success.

From the speed, I think the default is software floating point (unless the HW emulation in QEMU is terrible)

On your question about what the "-net user -tftp ~/" options to QEMU do, they're documented on the page you referenced:

http://wiki.qemu.org/download/qemu-doc.html#Invocation (scroll down to the networking options)

‘-net user[,option][,option][,...]’
Use the user mode network stack which requires no administrator privilege to run. Valid options are:

and

‘tftp=dir’
When using the user mode network stack, activate a built-in TFTP server. The files in dir will be exposed as the root of a TFTP server. The TFTP client on the guest must be configured in binary mode (use the command bin of the Unix TFTP client).

I presume that "-net user -tftp ~/" is equivalent to "-net user,tftp=~/"

I'm so glad you are finding the material interesting and useful! Oh, and I am out of the United States :D

The problem with QEMU is that it's NOT cycle-accurate, so you can't use it to benchmark performance. I used it before I got the ZC702 board, but my primary objective was to measure the execution time of an algorithm, so it's useless for that kind of measurement.

You are correct, Fregona. Cycle-accurate simulators are usually really expensive; QEMU is simply a hardware emulator, so for cycle-accurate work QEMU isn't going to be your solution.

Were you able to get a Zedboard or a ZC702 board that you could run your code on?

Hello everybody!

I was wondering if it is possible to replace the zImage, xilinx_zynq.dtb, etc. files so that I can run a modified version of the system. For example, here: http://digilentinc.com/Products/Detail.cfm?NavPath=2,400,1028&Prod=ZEDBOARD it is possible to download a working Zedboard configuration, and I was thinking of using its files (zImage, ramdisk, etc.) with QEMU. I think I at least have to modify start_qemu.sh..
Is that possible?

Thanks in advance,

sticken

Where can I find more information about running an ELF in QEMU?

Port 1234 is the default port for gdb when you start qemu with the -s option. I don't see why you would want to redirect 1234 inside the VM to that port outside. Is there some useful magic going on?

Thanks