Creating Linux based Embedded System Images with Yocto
NOTE: This page is a work in progress. It is not complete and is subject to (significant) change
This page describes how to build Linux-based embedded system images with bladeRF support, using Yocto. It is not intended to be a full Yocto tutorial, but rather a quick start or summary.
Per its website, "The Yocto Project is an open source collaboration project that provides templates, tools and methods to help you create custom Linux-based systems for embedded products regardless of the hardware architecture."
Yocto heavily leverages the build system originally developed by OpenEmbedded. You will find that there is significant common ground between these projects.
The Yocto Project provides *excellent* documentation. Perhaps one of the most handy documents is the beloved "Mega-Manual" that consists of all the documentation concatenated together into a single web page, for your searching pleasure. In particular, this guide is handy for quickly searching for terms and various names.
This section of the manual, these slides, and this blog post provide some helpful clarifications on various terminology. Have these handy as you read the manual.
Lastly, there is plenty of information on the Yocto Project Wiki.
Covered on this page:
Wishlist: We'd love to see the following hardware tested and its respective build process with Yocto documented.
Available to test: The associated user has this platform available to test and develop with.
- i.MX28 EVK (jynik)
Problems have been experienced with the following platforms. Additional investigation and work are required.
- BeagleBone Black
- The musb driver was crashing with kernel OOPses. It may be related to the use of the USB reset request when libbladeRF opens a device. (jynik)
- Update: it's been reported that some kernel patches have resolved this, but I haven't yet verified it. (jynik)
The tools used on this page have a significant learning curve. As such, you will need to practice independence and due diligence in reading manuals, mailing list archives, and searching the web.
The bladeRF developers and the maintainers of the layers used in this guide simply will not have enough time in the day to walk everyone through this process and debug the mistakes that one is bound to make while learning the tools. While you're certainly free to ask questions on the bladeRF IRC channel and forums, please do your homework first when encountering issues.
It is very important to understand and be aware of the limitations of the platform you are targeting.
Many of the popular low-cost platforms simply may not have the computational capability to execute the signal processing code at the desired sample rate. As such, it is advised that you do some initial benchmarking to better understand this.
Bear in mind that the FPGA on the bladeRF provides excellent opportunities for offloading DSP operations. Consider whether or not it makes sense in your application to have the embedded platform simply perform control and configuration, while custom FPGA blocks perform the intensive processing.
To achieve a reasonable level of signal quality, the platform must be able to sustain a sample rate of 2 Msps, i.e. 2 Msps * 2 (I and Q) * sizeof(int16_t) = 8 MB/s, since the lowest front-end LPF setting is 1.5 MHz. (Sampling slightly above this ensures the filter reaches full rejection before the Nyquist limit.)
Some platforms may not be able to supply sufficient power to the bladeRF via their USB ports. Therefore, it is recommended that you first consider powering the bladeRF externally.
In general, it is not recommended that you perform your initial software development and testing on the embedded target.
Instead, consider performing your initial development, testing, and benchmarking on a desktop machine. This will often ensure you can debug and benchmark code effectively, in a comfortable environment. (As a side effect, this requires that you develop reasonably portable code.)
At this stage you can also identify potential bottlenecks that may prove problematic on the embedded target, as well as develop your benchmarking strategies.
Next, you could try building, testing, and benchmarking on the embedded target. At this point, you should have an understanding of expected behavior, and debugging will consist of identifying the causes for any differences in operation you observe. If you find that you are exceeding the limits of your platform, you will need to make optimizations in either your code or through FPGA development.
Rinse and repeat!
First, read and review the Yocto Quick Start Guide. This page will assume you're familiar with the terms presented there, and may not make sense if you skip this.
It is highly recommended that you work through the Quick Start Guide, including the parts where you build and "boot" images for a QEMU x86 target.
These builds can become quite large. Some vendors providing Yocto-based BSPs recommend a minimum of 50 GiB of available space, so you may want to perform these builds in a directory on a large disk.
These instructions will assume the user's home directory is on a sufficiently large disk. We first create a location to perform builds; change ~/software/yocto as you'd prefer.
$ mkdir -p ~/software/yocto/builds/sdr
Lots of software will be downloaded during this process. It's handy to store it all in one common place, so we'll now make a directory for it.
$ mkdir ~/software/yocto/downloads
Now we'll clone Yocto from the project's git repository. See the Yocto downloads page for more information and any new releases.
$ cd ~/software/yocto
First, we check out Yocto's core codebase (Poky):
$ git clone -b jethro git://git.yoctoproject.org/poky.git poky-jethro-2.0
Next, we'll check out the layers we'll need for our SDR-related items:
$ git clone git://github.com/balister/meta-sdr.git
$ git clone -b jethro git://github.com/openembedded/meta-openembedded.git
For Freescale ARM platforms:
$ git clone -b jethro git://git.yoctoproject.org/meta-fsl-arm
$ git clone -b jethro git@github.com:Freescale/meta-fsl-arm-extra.git

# This is optional, but includes some cool demo images
$ git clone -b jethro git@github.com:Freescale/meta-fsl-demos.git
For the Raspberry Pi 2:
$ git clone -b jethro http://git.yoctoproject.org/git/meta-raspberrypi
You must always enter your build directory via the oe-init-build-env script, to ensure your environment is set up. The argument here is the target build directory, which we created earlier:
$ source ~/software/yocto/poky-jethro-2.0/oe-init-build-env ~/software/yocto/builds/sdr
First, we'll open and edit the conf/local.conf file. Search for MACHINE ?= ... and change it to one of the following, per your target platform:
MACHINE = "wandboard"
MACHINE = "raspberrypi2"
Next, set DL_DIR to the downloads directory we created earlier, adjusting the path for your user:
DL_DIR = "/home/jon/software/yocto/downloads"
For Freescale platforms using the meta-fsl-arm layer, you must review the EULA supplied in that layer and add the following line to conf/local.conf to accept it. (This will have no effect for other platforms.)
ACCEPT_FSL_EULA = "1"
Review the rest of this file, as the template describes additional options. Also consider using the following parameters, if they are of interest to you:
- Parallel build parameters
  - By default, all cores will be used. This may slow your machine down, so you may wish to limit the build to half of your cores (see the example after this list).
- You can get a list of all packages and their versions installed into your final image by adding the line:
BUILDHISTORY_COMMIT ?= "0"
- See BUILDHISTORY_COMMIT for more information. (Note that this will slow your build down!)
- If you wish to use a package manager, consider selecting the desired package type via PACKAGE_CLASSES.
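For reference, below is a sketch of how these options might look in conf/local.conf. The specific values are examples rather than recommendations; tune the thread counts to your host machine, and note that recording build history also requires the buildhistory class to be inherited.

# Limit bitbake task parallelism and make jobs (example values; adjust to your host)
BB_NUMBER_THREADS ?= "4"
PARALLEL_MAKE ?= "-j 4"

# Record the packages/versions installed into each image (enables the buildhistory class)
INHERIT += "buildhistory"
BUILDHISTORY_COMMIT ?= "0"

# Select a package format if you plan to use a package manager on the target
PACKAGE_CLASSES ?= "package_ipk"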
Next, we need to inform the build system of where it can find recipes. This is done via the conf/bblayers.conf file.
Edit this file to add the meta-openembedded and meta-sdr layers.
Also, add any of the layers you fetched for your specific hardware platform. For example, for the Wandboard Quad, your BBLAYERS definition would look similar to this:
BBLAYERS ?= " \
  /home/jon/software/yocto/poky-jethro-2.0/meta \
  /home/jon/software/yocto/poky-jethro-2.0/meta-yocto \
  /home/jon/software/yocto/poky-jethro-2.0/meta-yocto-bsp \
  /home/jon/software/yocto/meta-openembedded/meta-oe \
  /home/jon/software/yocto/meta-openembedded/meta-filesystems \
  /home/jon/software/yocto/meta-openembedded/meta-networking \
  /home/jon/software/yocto/meta-openembedded/meta-python \
  /home/jon/software/yocto/meta-sdr \
  /home/jon/software/yocto/meta-fsl-arm \
  /home/jon/software/yocto/meta-fsl-arm-extra \
  "
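For comparison, a Raspberry Pi 2 build would drop the Freescale layers and add meta-raspberrypi instead. The sketch below assumes the checkout locations used on this page; adjust the paths to match your own layout.

BBLAYERS ?= " \
  /home/jon/software/yocto/poky-jethro-2.0/meta \
  /home/jon/software/yocto/poky-jethro-2.0/meta-yocto \
  /home/jon/software/yocto/poky-jethro-2.0/meta-yocto-bsp \
  /home/jon/software/yocto/meta-openembedded/meta-oe \
  /home/jon/software/yocto/meta-openembedded/meta-filesystems \
  /home/jon/software/yocto/meta-openembedded/meta-networking \
  /home/jon/software/yocto/meta-openembedded/meta-python \
  /home/jon/software/yocto/meta-sdr \
  /home/jon/software/yocto/meta-raspberrypi \
  "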
As indicated by its name, the core-image-minimal image is a minimal build that does little more than allow the platform to boot Linux with a barebones rootfs. This is very useful for ensuring that you have the build environment set up correctly, and for verifying that the hardware is operating correctly. See this Yocto wiki page for more information.
Assuming you're still in the builds/sdr directory (from executing oe-init-build-env), invoke bitbake to build this image:
bitbake core-image-minimal
The very first build will take a long time, as all of the required software is being downloaded and built (including toolchains). Successive builds will be much quicker, as only the components affected by changes will be re-built.
When bitbake fails, it generally does a great job of articulating the reason for the failure. Using its error messages in a search engine query will usually lead you to mailing list discussions involving people encountering the same issue.
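bitbake also prints the path to the log of the failed task, which contains the full output of that step. These logs live under the build's work directory; the exact path varies by recipe, version, and target architecture, so the following is only an illustrative example for the libbladerf recipe:

$ less tmp/work/*/libbladerf/*/temp/log.do_compile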
Below are some common issues:
- Host machine does not have required package(s). If you do not have certain tools installed, bitbake will error out and request that you install a specific program.
  - For example, Ubuntu users will need to install the chrpath package, as it is not installed by default (see the example after this list).
- Typos in the config files
  - Check for missing quotes around variable assignments
  - Check for missing or extra line continuation characters (\ at the end of a line)
- Forgot to include a layer in conf/bblayers.conf
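If bitbake reports missing host tools on Ubuntu, installing them is typically a one-line operation. The package list below is an example; consult the Yocto Quick Start Guide for the authoritative list for your release.

$ sudo apt-get install gawk wget git-core diffstat unzip texinfo build-essential chrpath socat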
Once the image has built, you may find the build artifacts in tmp/deploy/images/$MACHINE/. This directory will contain all previous build artifacts, with symlinks to the most recent items.
This generally will include U-Boot, device tree files, the kernel, and a rootfs. Platforms that boot from SD cards will generally have an SD card image, which includes all of the aforementioned items properly staged.
Assuming an SD card is attached as /dev/sdd, you can write the image to the card as follows:
$ sudo dd if=tmp/deploy/images/wandboard/core-image-minimal-wandboard.sdcard of=/dev/sdd bs=1M
$ sync # Wait a few seconds after this completes
- Insert the card into the SD card slot on the top EDM module (that contains the processor).
- Connect a serial cable to the UART and open your favorite serial console program (an example invocation follows this list).
- Configure your serial console for 115200 8N1, with no hardware flow control
- Optionally, connect an Ethernet cable
- Power on the platform. You should see output during the boot process.
- Upon booting, you will be presented with a login.
- Log in as root with no password.
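As a concrete example of the serial console step above, screen can be used as follows, assuming your USB-serial adapter enumerates as /dev/ttyUSB0 (this device name is an assumption; check dmesg on your host). The same settings apply to the Raspberry Pi 2 steps below.

$ sudo screen /dev/ttyUSB0 115200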
Assuming an SD card is attached as /dev/sdd, you can write the image to the card as follows:
$ sudo dd if=tmp/deploy/images/raspberrypi2/core-image-minimal-raspberrypi2.rpi-sdimg of=/dev/sdd
$ sync # Wait a few seconds after this completes
- Insert the card into the SD card slot.
- Connect a UART (pinout) and open your favorite serial console program.
- Configure your serial console for 115200 8N1, with no hardware flow control
- Optionally, connect an Ethernet cable
- Power on the platform. You should see output during the boot process.
- Upon booting, you will be presented with a login.
- Log in as root with no password.
If you wish to add a few items to the image for quick testing, one way to do this is by defining an IMAGE_INSTALL_append variable in your conf/local.conf.
To add bladeRF items and an SSH server, add the following. Note that root login is permitted when "debug-tweaks" is enabled in conf/local.conf.
IMAGE_INSTALL_append = " \
    libbladerf-bin \
    bladerf-fpga \
    dropbear \
"
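Once an image containing these packages has booted on the target, a quick sanity check is to probe for the bladeRF from the target's shell (this assumes the bladeRF-cli utility is provided by the libbladerf-bin package):

# bladeRF-cli -p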
If you are working on a project that requires you to develop and maintain firmware releases, IMAGE_INSTALL_append is not an appropriate means of adding items to the build.
Instead, you should develop a recipe (bitbake file) that describes the image. You'll likely want to place this in a layer that you create and maintain. However, this is outside the scope of this wiki page. See the following examples as references:
The gnuradio-demo-image provided by meta-sdr includes GNU Radio (with bladeRF support), X11, Matchbox, and a number of development and testing tools. This can be built via:
bitbake gnuradio-demo-image
It has been reported that some recipes result in QA errors, such as the following:
ERROR: QA Issue: File '/usr/lib/libqwt.so.6.0.1' from qwt was already stripped, this will prevent future debugging! [already-stripped]
ERROR: QA Issue: File '/usr/lib/libqwtmathml.so.6.0.1' from qwt was already stripped, this will prevent future debugging! [already-stripped]
ERROR: QA Issue: File '/usr/lib/qt4/plugins/designer/libqwt_designer_plugin.so' from qwt was already stripped, this will prevent future debugging! [already-stripped]
If you experience errors such as these, add INSANE_SKIP_${PN}_append = "already-stripped" to your local.conf file, where ${PN} is the associated package name. For the above errors, this would be:
INSANE_SKIP_qwt_append = "already-stripped"
The libbladeRF_test_* programs are currently not installed by the recipes in meta-sdr.
If you'd like to run these, you can copy them out of the build directory and over to the embedded platform via the filesystem on the SD card, ssh, etc.
These are found in:
tmp/work/$ARCH-poky-linux-gnueabi/libbladerf/1.2.1-r0/build/host/output/

# Example: Wandboard Quad
tmp/work/cortexa9hf-vfp-neon-poky-linux-gnueabi/libbladerf/1.2.1-r0/build/host/output/

# Example: Raspberry Pi 2
tmp/work/cortexa7hf-vfp-vfpv4-neon-poky-linux-gnueabi/libbladerf/1.2.1-r0/build/host/output/
In particular, the libbladeRF_test_rx_discont program is handy for determining the RX sample rate at which the platform begins dropping samples.
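For example, one way to get a test binary onto a running target is over SSH. The target IP address below is a placeholder, and the work-directory path should be replaced with the one matching your architecture from the list above:

$ scp tmp/work/cortexa9hf-vfp-neon-poky-linux-gnueabi/libbladerf/1.2.1-r0/build/host/output/libbladeRF_test_rx_discont root@192.168.1.100:/home/root/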
After they're built, toolchains are stored in tmp/sysroots/x86_64-linux. As you're developing code, you can use these to build programs for your target platform, until you're ready to create a recipe for your software.
See this Makefile for an example of how to use these toolchains to cross-compile this simple program.
To follow the aforementioned example, copy bladeRF-versions.mk and bladeRF-versions.c to your machine, and rename bladeRF-versions.mk to Makefile.
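For orientation, below is a rough sketch of what such a Makefile does; the linked bladeRF-versions.mk remains the authoritative example. The toolchain triplet, sysroot locations, and default values are assumptions based on the jethro layout and the paths used on this page, and machine-specific compiler flags (e.g. float ABI and FPU options) may also be required for your target.

# Overridable from the command line, as in the invocations below
YOCTO_BUILD_DIR ?= $(HOME)/software/yocto/builds/sdr
PLATFORM        ?= wandboard

# Assumed locations of the cross toolchain and target sysroot within the build tree
TOOLCHAIN := $(YOCTO_BUILD_DIR)/tmp/sysroots/x86_64-linux/usr/bin/arm-poky-linux-gnueabi
SYSROOT   := $(YOCTO_BUILD_DIR)/tmp/sysroots/$(PLATFORM)
CC        := $(TOOLCHAIN)/arm-poky-linux-gnueabi-gcc --sysroot=$(SYSROOT)

CFLAGS := -Wall -Wextra -O2
LDLIBS := -lbladeRF

all: bladeRF-versions

bladeRF-versions: bladeRF-versions.c
	$(CC) $(CFLAGS) -o $@ $< $(LDLIBS)

clean:
	rm -f bladeRF-versions

.PHONY: all clean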
Below are example invocations, using the paths used throughout this page:
For a Wandboard Quad:
$ PLATFORM=wandboard YOCTO_BUILD_DIR=~/software/yocto/builds/sdr make clean all
For a Raspberry Pi 2:
$ PLATFORM=raspberrypi2 YOCTO_BUILD_DIR=~/software/yocto/builds/sdr make clean all
Note that you can also create a toolchain installer for the cross toolchain you've built. This is handy when you're working with other developers who may not want to set up and use Yocto.
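A minimal sketch of generating such an installer, assuming the same build directory used throughout this page: the populate_sdk task produces a self-extracting SDK installer under tmp/deploy/sdk/.

$ source ~/software/yocto/poky-jethro-2.0/oe-init-build-env ~/software/yocto/builds/sdr
$ bitbake core-image-minimal -c populate_sdk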