Chapter 3. Working with Buildroot

This section explains how you can customize Buildroot to fit your needs.

3.1. Details on Buildroot configuration

All the configuration options in make *config have a help text providing details about the option. However, a number of topics require additional explanations that cannot easily be covered in the help text; they are therefore covered in the following sections.

3.1.1. Cross-compilation toolchain

A compilation toolchain is the set of tools that allows you to compile code for your system. It consists of a compiler (in our case, gcc), binary utils like assembler and linker (in our case, binutils) and a C standard library (for example GNU Libc, uClibc).

The system installed on your development station certainly already has a compilation toolchain that you can use to compile an application that runs on your system. If you’re using a PC, your compilation toolchain runs on an x86 processor and generates code for an x86 processor. Under most Linux systems, the compilation toolchain uses the GNU libc (glibc) as the C standard library. This compilation toolchain is called the "host compilation toolchain". The machine on which it is running, and on which you’re working, is called the "host system" [3].

The compilation toolchain is provided by your distribution, and Buildroot has nothing to do with it (other than using it to build a cross-compilation toolchain and other tools that are run on the development host).

As said above, the compilation toolchain that comes with your system runs on and generates code for the processor in your host system. As your embedded system has a different processor, you need a cross-compilation toolchain - a compilation toolchain that runs on your host system but generates code for your target system (and target processor). For example, if your host system uses x86 and your target system uses ARM, the regular compilation toolchain on your host runs on x86 and generates code for x86, while the cross-compilation toolchain runs on x86 and generates code for ARM.
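In practice, the cross tools are the same programs as the host tools, but prefixed with a tuple naming the target. A quick sketch of this convention (the arm-linux- prefix is a placeholder tuple, not an installed toolchain):

```shell
# A cross-compilation toolchain is a coherent set of tools sharing a
# common target prefix. "arm-linux-" is an illustrative placeholder.
CROSS_COMPILE=arm-linux-
CC="${CROSS_COMPILE}gcc"    # cross compiler
LD="${CROSS_COMPILE}ld"     # cross linker
AS="${CROSS_COMPILE}as"     # cross assembler
echo "$CC $LD $AS"
```

This is the same CROSS_COMPILE convention used by the Linux kernel build system.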

Buildroot provides different solutions to build, or use existing cross-compilation toolchains:

  • The internal toolchain backend, called Buildroot toolchain in the configuration interface.
  • The external toolchain backend, called External toolchain in the configuration interface.
  • The Crosstool-NG toolchain backend, called Crosstool-NG toolchain in the configuration interface.

The choice between these three solutions is made using the Toolchain Type option in the Toolchain menu. Once one solution has been chosen, a number of configuration options appear; they are detailed in the following sections.

Internal toolchain backend

The internal toolchain backend is the backend where Buildroot builds a cross-compilation toolchain by itself, before building the userspace applications and libraries for your target embedded system.

This backend is the historical backend of Buildroot, and has been limited for a long time to the usage of the uClibc C library. Support for the eglibc C library has been added in 2013 and is at this point considered experimental. See the External toolchain backend and Crosstool-NG toolchain backend for other solutions to use glibc or eglibc.

Once you have selected this backend, a number of options appear. The most important ones allow you to:

  • Change the version of the Linux kernel headers used to build the toolchain. This item deserves a few explanations. In the process of building a cross-compilation toolchain, the C library is being built. This library provides the interface between userspace applications and the Linux kernel. In order to know how to "talk" to the Linux kernel, the C library needs access to the Linux kernel headers (i.e., the .h files from the kernel), which define the interface between userspace and the kernel (system calls, data structures, etc.). Since this interface is backward compatible, the version of the Linux kernel headers used to build your toolchain does not need to match exactly the version of the Linux kernel you intend to run on your embedded system. It only needs to be equal to or older than that kernel version. If you use kernel headers that are more recent than the Linux kernel you run on your embedded system, the C library might use interfaces that are not provided by your Linux kernel.
  • Change the version and the configuration of the uClibc C library (if uClibc is selected). The default options are usually fine. However, if you really need to specifically customize the configuration of your uClibc C library, you can pass a specific configuration file here. Or alternatively, you can run the make uclibc-menuconfig command to get access to uClibc’s configuration interface. Note that all packages in Buildroot are tested against the default uClibc configuration bundled in Buildroot: if you deviate from this configuration by removing features from uClibc, some packages may no longer build.
  • Change the version of the GCC compiler and binutils.
  • Select a number of toolchain options (uClibc only): whether the toolchain should have largefile support (i.e. support for files larger than 2 GB on 32-bit systems), IPv6 support, RPC support (used mainly for NFS), wide-char support, locale support (for internationalization), C++ support, or thread support. Depending on which options you choose, the number of userspace applications and libraries visible in the Buildroot menus will change: many applications and libraries require certain toolchain options to be enabled. Most packages show a comment when a certain toolchain option is required to be able to enable them.

It is worth noting that whenever one of those options is modified, the entire toolchain and system must be rebuilt. See Section 3.5.1, “Understanding when a full rebuild is necessary”.
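The kernel headers compatibility rule described above can be illustrated numerically: Linux encodes a version x.y.z as (x<<16) + (y<<8) + z, and headers are safe when their version code is less than or equal to that of the running kernel. A minimal sketch (the helper names are made up for this illustration):

```shell
# LINUX_VERSION_CODE encodes version x.y.z as (x<<16) + (y<<8) + z.
version_code() {
    IFS=. read -r major minor patch <<EOF
$1
EOF
    echo $(( (major << 16) + (minor << 8) + ${patch:-0} ))
}

# Headers are compatible when their version code is <= the kernel's.
headers_ok() {
    [ "$(version_code "$1")" -le "$(version_code "$2")" ] && echo yes || echo no
}

headers_ok 3.8 3.10    # older headers, newer kernel: compatible
headers_ok 3.12 3.10   # newer headers, older kernel: not safe
```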

Advantages of this backend:

  • Well integrated with Buildroot
  • Fast, only builds what’s necessary

Drawbacks of this backend:

  • Rebuilding the toolchain is needed when doing make clean, which takes time. If you’re trying to reduce your build time, consider using the External toolchain backend.

External toolchain backend

The external toolchain backend allows you to use existing pre-built cross-compilation toolchains. Buildroot knows about a number of well-known cross-compilation toolchains (from Linaro for ARM; Sourcery CodeBench for ARM, x86, x86-64, PowerPC, MIPS and SuperH; Blackfin toolchains from ADI; Xilinx toolchains for Microblaze; etc.) and is capable of downloading them automatically, or it can be pointed to a custom toolchain, either available for download or installed locally.

You then have three ways to use an external toolchain:

  • Use a predefined external toolchain profile, and let Buildroot download, extract and install the toolchain. Buildroot already knows about a few CodeSourcery, Linaro, Blackfin and Xilinx toolchains. Just select the toolchain profile in Toolchain from the available ones. This is definitely the easiest solution.
  • Use a predefined external toolchain profile, but instead of having Buildroot download and extract the toolchain, you can tell Buildroot where your toolchain is already installed on your system. Just select the toolchain profile in Toolchain from the available ones, unselect Download toolchain automatically, and fill the Toolchain path text entry with the path to your cross-compiling toolchain.
  • Use a completely custom external toolchain. This is particularly useful for toolchains generated using crosstool-NG. To do this, select the Custom toolchain solution in the Toolchain list. You need to fill in the Toolchain path, Toolchain prefix and External toolchain C library options. Then, you have to tell Buildroot what your external toolchain supports. If your external toolchain uses the glibc library, you only have to tell whether your toolchain supports C++ and whether it has built-in RPC support. If your external toolchain uses the uClibc library, then you have to tell Buildroot whether it supports largefile, IPv6, RPC, wide-char, locale, program invocation, threads and C++. At the beginning of the execution, Buildroot will tell you if the selected options do not match the toolchain configuration.
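For the third case, the resulting configuration might look like the following defconfig fragment (a sketch: the path and prefix values are examples, and the exact option names should be checked against the Config.in of your Buildroot version):

```text
BR2_TOOLCHAIN_EXTERNAL=y
BR2_TOOLCHAIN_EXTERNAL_CUSTOM=y
BR2_TOOLCHAIN_EXTERNAL_PATH="/opt/arm-toolchain"
BR2_TOOLCHAIN_EXTERNAL_CUSTOM_PREFIX="arm-none-linux-gnueabi"
```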

Our external toolchain support has been tested with toolchains from CodeSourcery and Linaro, toolchains generated by crosstool-NG, and toolchains generated by Buildroot itself. In general, all toolchains that support the sysroot feature should work. If not, do not hesitate to contact the developers.

We do not support toolchains from Denx's ELDK, for two reasons:

  • The ELDK does not contain a pure toolchain (i.e. just the compiler, binutils, the C and C++ libraries), but a toolchain that comes with a very large set of pre-compiled libraries and programs. Therefore, Buildroot cannot import the sysroot of the toolchain, as it would contain hundreds of megabytes of pre-compiled libraries that are normally built by Buildroot.
  • The ELDK toolchains have a completely non-standard custom mechanism to handle multiple library variants. Instead of using the standard GCC multilib mechanism, the ARM ELDK uses different symbolic links to the compiler to differentiate between library variants (for ARM soft-float and ARM VFP), and the PowerPC ELDK compiler uses a CROSS_COMPILE environment variable. This non-standard behaviour makes it difficult to support ELDK in Buildroot.

We also do not support using the distribution toolchain (i.e. the gcc/binutils/C library installed by your distribution) as the toolchain to build software for the target. This is because your distribution toolchain is not a "pure" toolchain (i.e. one with only the C/C++ library), so we cannot import it properly into the Buildroot build environment. So even if you are building a system for an x86 or x86_64 target, you have to generate a cross-compilation toolchain with Buildroot or crosstool-NG.

If you want to generate a custom toolchain for your project that can be used as an external toolchain in Buildroot, our recommendation is definitely to build it with crosstool-NG. We recommend building the toolchain separately from Buildroot, and then importing it into Buildroot using the external toolchain backend.

Advantages of this backend:

  • Allows you to use well-known and well-tested cross-compilation toolchains.
  • Avoids the build time of the cross-compilation toolchain, which is often very significant in the overall build time of an embedded Linux system.
  • Not limited to uClibc: glibc and eglibc toolchains are supported.

Drawbacks of this backend:

  • If your pre-built external toolchain has a bug, it may be hard to get a fix from the toolchain vendor, unless you built your external toolchain yourself using crosstool-NG.

Crosstool-NG toolchain backend

The Crosstool-NG toolchain backend integrates the Crosstool-NG project with Buildroot. Crosstool-NG is a highly-configurable, versatile and well-maintained tool to build cross-compilation toolchains.

If you select the Crosstool-NG toolchain option in Toolchain Type, you will be offered the following choices:

  • Choose which C library you want to use. Crosstool-NG supports the three most important C libraries used in Linux systems: glibc, eglibc and uClibc.
  • Choose a custom Crosstool-NG configuration file. Buildroot has its own default configuration file (one per C library choice), but you can provide your own. Another option is to run make ctng-menuconfig to get access to the Crosstool-NG configuration interface. However, note that all Buildroot packages have only been tested with the default Crosstool-NG configurations.
  • Choose a number of toolchain options (rather limited if glibc or eglibc is used, more numerous if uClibc is used).

When you start the Buildroot build process, Buildroot will download and install the Crosstool-NG tool, build and install its required dependencies, and then run Crosstool-NG with the provided configuration.

Advantages of this backend:

  • Not limited to uClibc: glibc and eglibc are supported.
  • Vast possibilities of toolchain configuration.

Drawbacks of this backend:

  • Crosstool-NG is not perfectly integrated with Buildroot. For example, Crosstool-NG has its own download infrastructure, which is not integrated with the one in Buildroot (so a Buildroot make source will not download all the source code tarballs needed by Crosstool-NG).
  • The toolchain is completely rebuilt from scratch if you do a make clean.

3.1.2. /dev management

On a Linux system, the /dev directory contains special files, called device files, that allow userspace applications to access the hardware devices managed by the Linux kernel. Without these device files, your userspace applications would not be able to use the hardware devices, even if they are properly recognized by the Linux kernel.

Under System configuration, /dev management, Buildroot offers four different solutions to handle the /dev directory:

  • The first solution is Static using device table. This is the old classical way of handling device files in Linux. With this method, the device files are persistently stored in the root filesystem (i.e. they persist across reboots), and nothing automatically creates or removes those device files when hardware devices are added to or removed from the system. Buildroot therefore creates a standard set of device files using a device table, the default one being stored in system/device_table_dev.txt in the Buildroot source code. This file is processed when Buildroot generates the final root filesystem image, so the device files are not visible in the output/target directory. The BR2_ROOTFS_STATIC_DEVICE_TABLE option allows you to change the default device table used by Buildroot, or to add an additional device table, so that additional device files are created by Buildroot during the build. So, if you use this method and a device file is missing in your system, you can for example create a board/<yourcompany>/<yourproject>/device_table_dev.txt file that contains the description of your additional device files, and then set BR2_ROOTFS_STATIC_DEVICE_TABLE to system/device_table_dev.txt board/<yourcompany>/<yourproject>/device_table_dev.txt. For more details about the format of the device table file, see Section 11.1, “Makedev syntax documentation”.
  • The second solution is Dynamic using devtmpfs only. devtmpfs is a virtual filesystem inside the Linux kernel that was introduced in kernel 2.6.32 (if you use an older kernel, it is not possible to use this option). When mounted in /dev, this virtual filesystem automatically makes device files appear and disappear as hardware devices are added to and removed from the system. This filesystem is not persistent across reboots: it is filled dynamically by the kernel. Using devtmpfs requires the following kernel configuration options to be enabled: CONFIG_DEVTMPFS and CONFIG_DEVTMPFS_MOUNT. When Buildroot is in charge of building the Linux kernel for your embedded device, it makes sure that those two options are enabled. However, if you build your Linux kernel outside of Buildroot, then it is your responsibility to enable those two options (if you fail to do so, your Buildroot system will not boot).
  • The third solution is Dynamic using mdev. This method also relies on the devtmpfs virtual filesystem detailed above (so the requirement to have CONFIG_DEVTMPFS and CONFIG_DEVTMPFS_MOUNT enabled in the kernel configuration still applies), but adds the mdev userspace utility on top of it. mdev is a program that is part of BusyBox and that the kernel calls every time a device is added or removed. Thanks to the /etc/mdev.conf configuration file, mdev can be configured to, for example, set specific permissions or ownership on a device file, or call a script or application whenever a device appears or disappears. Basically, it allows userspace to react to device addition and removal events. mdev can for example be used to automatically load kernel modules when devices appear on the system. mdev is also important if you have devices that require firmware, as it will be responsible for pushing the firmware contents to the kernel. mdev is a lightweight implementation (with fewer features) of udev. For more details about mdev and the syntax of its configuration file, see the documentation in the BusyBox sources (docs/mdev.txt).
  • The fourth solution is Dynamic using udev. This method also relies on the devtmpfs virtual filesystem detailed above, but adds the udev userspace daemon on top of it. udev is a daemon that runs in the background, and gets called by the kernel when a device is added to or removed from the system. It is a more heavyweight solution than mdev, but provides higher flexibility and is sometimes mandatory for some system components (systemd, for example). udev is the mechanism used in most desktop Linux distributions. For more details about udev, see the udev project documentation.
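As an illustration of the first solution, each line of a device table describes one device node in the makedev syntax detailed in Section 11.1, “Makedev syntax documentation”. A hypothetical additional serial device could be declared as (device name and major/minor numbers are examples):

```text
# <name>      <type> <mode> <uid> <gid> <major> <minor> <start> <inc> <count>
/dev/ttyXYZ0  c      666    0     0     204     64      -       -     -
```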

The Buildroot developers' recommendation is to start with the Dynamic using devtmpfs only solution, until you need userspace to be notified when devices are added or removed, or need firmware loading, in which case Dynamic using mdev is usually a good solution.
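If you do move to mdev, a minimal /etc/mdev.conf could look like the following sketch (the entries are illustrative, and the group names must exist on the target; see docs/mdev.txt in the BusyBox sources for the full syntax):

```text
# <device regex> <uid>:<gid> <mode>
ttyS[0-9]*       root:root   660
sd[a-z][0-9]*    root:disk   660
```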

3.1.3. init system

The init program is the first userspace program started by the kernel (it carries the PID number 1), and is responsible for starting the userspace services and programs (for example: web server, graphical applications, other network servers, etc.).

Buildroot allows you to use three different types of init systems, which can be chosen from System configuration, Init system:

  • The first solution is Busybox. Amongst many programs, Busybox has an implementation of a basic init program, which is sufficient for most embedded systems. Enabling the BR2_INIT_BUSYBOX option ensures that Busybox builds and installs its init program. This is the default solution in Buildroot. The Busybox init program reads the /etc/inittab file at boot to know what to do. The syntax of this file is documented in the Busybox sources, in examples/inittab (note that Busybox inittab syntax is special: do not use a random inittab documentation from the Internet to learn about Busybox inittab). The default inittab in Buildroot is stored in system/skeleton/etc/inittab. Apart from mounting a few important filesystems, the main job of the default inittab is to start the /etc/init.d/rcS shell script and start a getty program (which provides a login prompt).
  • The second solution is systemV. This solution uses the old traditional sysvinit program, packaged in Buildroot in package/sysvinit. This was the solution used in most desktop Linux distributions, until they switched to more recent alternatives such as Upstart or systemd. sysvinit also works with an inittab file (which has a slightly different syntax from the Busybox one). The default inittab installed with this init solution is located in package/sysvinit/inittab.
  • The third solution is systemd. systemd is the new generation init system for Linux. It does far more than traditional init programs: it has aggressive parallelization capabilities, uses socket and D-Bus activation for starting services, offers on-demand starting of daemons, keeps track of processes using Linux control groups, supports snapshotting and restoring of the system state, etc. systemd will be useful on relatively complex embedded systems, for example the ones requiring D-Bus and services communicating with each other. It is worth noting that systemd brings a fairly big number of large dependencies: dbus, glib and more. For more details about systemd, see the project's documentation.

The solution recommended by Buildroot developers is to use the Busybox init as it is sufficient for most embedded systems. systemd can be used for more complex situations.
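For reference, a stripped-down Busybox-style inittab in the spirit of Buildroot's default one could look like this sketch (the console device and baud rate are examples; the id field is the tty device name without /dev/):

```text
# <id>::<action>:<process>
::sysinit:/etc/init.d/rcS
ttyS0::respawn:/sbin/getty -L ttyS0 115200 vt100
::ctrlaltdel:/sbin/reboot
::shutdown:/bin/umount -a -r
```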

3.2. make tips

This is a collection of tips that help you make the most of Buildroot.

Configuration searches: The make *config commands offer a search tool. Read the help message in the different frontend menus to know how to use it:

  • in menuconfig, the search tool is called by pressing /;
  • in xconfig, the search tool is called by pressing Ctrl + f.

The result of the search shows the help message of the matching items.

Display all commands executed by make: 

 $ make V=1 <target>

Display all available targets: 

 $ make help

Not all targets are always available, some settings in the .config file may hide some targets:

  • linux-menuconfig and linux-savedefconfig only work when linux is enabled;
  • uclibc-menuconfig is only available when the Buildroot internal toolchain backend is used;
  • ctng-menuconfig is only available when the crosstool-NG backend is used;
  • barebox-menuconfig and barebox-savedefconfig only work when the barebox bootloader is enabled.

Cleaning: Explicit cleaning is required when any of the architecture or toolchain configuration options are changed.

To delete all build products (including build directories, host, staging and target trees, the images and the toolchain):

 $ make clean

Generating the manual: The present manual sources are located in the docs/manual directory. To generate the manual:

 $ make manual-clean
 $ make manual

The manual outputs will be generated in output/docs/manual.


Resetting Buildroot for a new target: To delete all build products as well as the configuration:

 $ make distclean

Note: if ccache is enabled, running make clean or make distclean does not empty the compiler cache used by Buildroot. To delete it, refer to Section 5.2.2, “Using ccache in Buildroot”.

3.3. Customization

3.3.1. Customizing the generated target filesystem

Besides changing one configuration option or another through make *config, there are a few ways to customize the resulting target filesystem.

  • Customize the target filesystem directly and rebuild the image. The target filesystem is available under output/target/. You can simply make your changes here and run make afterwards - this will rebuild the target filesystem image. This method allows you to do anything to the target filesystem, but if you decide to completely rebuild your toolchain and tools, these changes will be lost. This solution is therefore only useful for quick tests: changes do not survive the make clean command. Once you have validated your changes, you should make sure they persist after a make clean by using one of the following methods.
  • Create a filesystem overlay: a tree of files that are copied directly over the target filesystem after it has been built. Set BR2_ROOTFS_OVERLAY to the top of the tree. .git, .svn and .hg directories, .empty files and files ending with ~ are excluded. Among the first three methods, this one should be preferred.
  • In the Buildroot configuration, you can specify the paths to one or more post-build scripts. These scripts are called in the given order, after Buildroot builds all the selected software, but before the rootfs images are assembled. The BR2_ROOTFS_POST_BUILD_SCRIPT option allows you to specify the location of your post-build scripts. This option can be found in the System configuration menu. The destination root filesystem folder is given as the first argument to these scripts, and these scripts can then be used to remove or modify any file in your target filesystem. You should, however, use this feature with care. Whenever you find that a certain package generates wrong or unneeded files, you should fix that package rather than work around it with some post-build cleanup scripts. You may also use these variables in your post-build script:

    • BUILDROOT_CONFIG: the path to the Buildroot .config file
    • HOST_DIR, STAGING_DIR, TARGET_DIR: see the section called “generic-package Reference”
    • BINARIES_DIR: the place where all binary files (aka images) are stored
    • BASE_DIR: the base output directory
  • Create your own target skeleton. You can start with the default skeleton available under system/skeleton and then customize it to suit your needs. The BR2_ROOTFS_SKELETON_CUSTOM and BR2_ROOTFS_SKELETON_CUSTOM_PATH options allow you to specify the location of your custom skeleton. These options can be found in the System configuration menu. At build time, the contents of the skeleton are copied to output/target before any package installation. Note that this method is not recommended, as it duplicates the entire skeleton, which prevents you from taking advantage of the fixes and improvements brought to the default Buildroot skeleton. The recommended method is to use the post-build scripts mechanism described in the previous item.
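As a sketch of the overlay method (the directory names and the file are examples, not conventions you must follow):

```shell
# Create a minimal filesystem overlay tree; everything under the overlay
# root is copied verbatim on top of the target filesystem.
OVERLAY=board/mycompany/myproject/rootfs-overlay
mkdir -p "$OVERLAY/etc"
echo 'Welcome to my board' > "$OVERLAY/etc/motd"
# Then, in the Buildroot configuration, point the overlay option at it:
#   BR2_ROOTFS_OVERLAY="board/mycompany/myproject/rootfs-overlay"
```

After the next make, /etc/motd will be present in the generated root filesystem images.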

Note also that you can use post-image scripts if you want to perform specific actions after all filesystem images have been created (for example, to automatically extract your root filesystem tarball in a location exported by your NFS server, to create a special firmware image that bundles your root filesystem and kernel image, or any other custom action). To do so, specify a space-separated list of scripts in the BR2_ROOTFS_POST_IMAGE_SCRIPT configuration option. This option can be found in the System configuration menu as well.

Each of those scripts will be called with the path to the images output directory as first argument, and will be executed with the main Buildroot source directory as the current directory. Those scripts will be executed as the user that executes Buildroot, which should normally not be the root user. Therefore, any action requiring root permissions in one of these post-image scripts will require special handling (usage of fakeroot or sudo), which is left to the script developer.

Just like for the post-build scripts mentioned above, you also have access to the following environment variables from your post-image scripts: BUILDROOT_CONFIG, HOST_DIR, STAGING_DIR, TARGET_DIR, BINARIES_DIR and BASE_DIR.

Additionally, each of the BR2_ROOTFS_POST_BUILD_SCRIPT and BR2_ROOTFS_POST_IMAGE_SCRIPT scripts will be passed the arguments specified in BR2_ROOTFS_POST_SCRIPT_ARGS (if that is not empty). All the scripts will be passed the exact same set of arguments; it is not possible to pass different sets of arguments to each script.
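A post-build script following the conventions above could look like the following sketch (the recorded file and the fallback directory are invented for illustration; Buildroot passes the real target directory as the first argument):

```shell
#!/bin/sh
# Sketch of a BR2_ROOTFS_POST_BUILD_SCRIPT: $1 is the target rootfs.
# Fall back to a demo directory so the sketch can be exercised standalone.
set -e
TARGET_ROOTFS="${1:-./demo-rootfs}"
mkdir -p "$TARGET_ROOTFS/etc"
# Example action: record the build date inside the rootfs.
date -u '+%Y-%m-%d' > "$TARGET_ROOTFS/etc/build-date"
echo "stamped $TARGET_ROOTFS/etc/build-date"
```

Remember that such a script runs as the user invoking Buildroot, so any action requiring root permissions needs fakeroot or sudo handling.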

3.3.2. Customizing the Busybox configuration

Busybox is very configurable, and you may want to customize it. You can follow these simple steps to do so. This method isn’t optimal, but it’s simple, and it works:

  • Do an initial compilation of Buildroot, with busybox, without trying to customize it.
  • Invoke make busybox-menuconfig. The nice configuration tool appears, and you can customize everything.
  • Run the compilation of Buildroot again.

Otherwise, you can simply change the package/busybox/busybox-<version>.config file, if you know the options you want to change, without using the configuration tool.

If you want to use an existing config file for busybox, then see Section 3.5.5, “Environment variables”.

3.3.3. Customizing the uClibc configuration

Just like BusyBox (see Section 3.3.2, “Customizing the Busybox configuration”), uClibc offers a lot of configuration options. They allow you to select various functionalities depending on your needs and limitations.

The easiest way to modify the configuration of uClibc is to follow these steps:

  • Do an initial compilation of Buildroot without trying to customize uClibc.
  • Invoke make uclibc-menuconfig. The nice configuration assistant, similar to the one used in the Linux kernel or Buildroot, appears. Make your configuration changes as appropriate.
  • Copy the $(O)/build/uClibc-VERSION/.config file to a different place (e.g. board/MANUFACTURER/BOARDNAME/uClibc.config) and adjust the uClibc configuration file option BR2_UCLIBC_CONFIG to refer to this configuration instead of the default one.
  • Run the compilation of Buildroot again.

Otherwise, you can simply change package/uclibc/uClibc-VERSION.config, without running the configuration assistant.

If you want to use an existing config file for uClibc, then see Section 3.5.5, “Environment variables”.

3.3.4. Customizing the Linux kernel configuration

The Linux kernel configuration can be customized just like the BusyBox (Section 3.3.2, “Customizing the Busybox configuration”) and uClibc (Section 3.3.3, “Customizing the uClibc configuration”) configurations, using make linux-menuconfig. Make sure you have enabled the kernel build in make menuconfig first. Once done, run make to (re)build everything.

If you want to use an existing config file for Linux, then see Section 3.5.5, “Environment variables”.

3.3.5. Customizing the toolchain

There are three distinct types of toolchain backend supported in Buildroot, available under the Toolchain menu when invoking make menuconfig.

Using the external toolchain backend

There is no way of tuning an external toolchain, since Buildroot does not generate it.

You also need to set the Buildroot toolchain settings to match those of the external toolchain (see the section called “External toolchain backend”).

Using the internal Buildroot toolchain backend

The internal Buildroot toolchain backend only allows generating uClibc-based toolchains.

However, it allows you to tune major settings, such as the version of the Linux kernel headers, the uClibc version and configuration, the gcc and binutils versions, and various uClibc-related toolchain options (see the section called “Internal toolchain backend”).

These settings are available after selecting the Buildroot toolchain type in the Toolchain menu.

Using the Crosstool-NG backend

The crosstool-NG toolchain backend enables a rather limited set of settings under the Buildroot Toolchain menu:

  • The crosstool-NG configuration file
  • Gdb and some toolchain options

Then, the toolchain can be fine-tuned by invoking make ctng-menuconfig.

3.4. Storing the configuration

When you have a buildroot configuration that you are satisfied with and you want to share it with others, put it under revision control or move on to a different buildroot project, you need to store the configuration so it can be rebuilt later. The configuration that needs to be stored consists of the buildroot configuration, the configuration files for packages that you use (kernel, busybox, uClibc, …), and your rootfs modifications.

3.4.1. Basics for storing the configuration

Buildroot configuration

For storing the buildroot configuration itself, buildroot offers the following command: make savedefconfig.

This strips the buildroot configuration down by removing configuration options that are at their default value. The result is stored in a file called defconfig. If you want to save it in another place, change the BR2_DEFCONFIG option, or call make with make savedefconfig BR2_DEFCONFIG=<path-to-defconfig>. The usual place is configs/<boardname>_defconfig. The configuration can then be rebuilt by running make <boardname>_defconfig.

Alternatively, you can copy the file to any other place and rebuild with make defconfig BR2_DEFCONFIG=<path-to-defconfig-file>.

Other package configuration

The configuration files for busybox, the linux kernel, barebox, uClibc and crosstool-NG should be stored as well if changed. For each of these, a buildroot configuration option exists to point to an input configuration file, e.g. BR2_LINUX_KERNEL_CUSTOM_CONFIG_FILE. To save their configuration, set those configuration options to a path outside your output directory, e.g. board/<manufacturer>/<boardname>/linux.config. Then, copy the configuration files to that path.

Make sure that you create a configuration file before changing the BR2_LINUX_KERNEL_CUSTOM_CONFIG_FILE etc. options. Otherwise, buildroot will try to access this config file, which doesn’t exist yet, and will fail. You can create the configuration file by running make linux-menuconfig etc.
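Put together, the relevant part of a stored board configuration could look like the following defconfig fragment (the paths are examples; the options follow the pattern described above):

```text
BR2_LINUX_KERNEL_CUSTOM_CONFIG_FILE="board/mymanufacturer/myboard/linux.config"
BR2_PACKAGE_BUSYBOX_CONFIG="board/mymanufacturer/myboard/busybox.config"
BR2_UCLIBC_CONFIG="board/mymanufacturer/myboard/uClibc.config"
```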

Buildroot provides a few helper targets to make the saving of configuration files easier.

  • make linux-update-defconfig saves the linux configuration to the path specified by BR2_LINUX_KERNEL_CUSTOM_CONFIG_FILE. It simplifies the config file by removing default values. However, this only works with kernels starting from 2.6.33. For earlier kernels, use make linux-update-config.
  • make busybox-update-config saves the busybox configuration to the path specified by BR2_PACKAGE_BUSYBOX_CONFIG.
  • make uclibc-update-config saves the uClibc configuration to the path specified by BR2_UCLIBC_CONFIG.
  • make barebox-update-defconfig saves the barebox configuration to the path specified by BR2_TARGET_BAREBOX_CUSTOM_CONFIG_FILE.
  • For crosstool-NG and at91bootstrap3, no helper exists, so you have to copy the configuration file manually to the path specified by BR2_TOOLCHAIN_CTNG_CONFIG and BR2_TARGET_AT91BOOTSTRAP3_CUSTOM_CONFIG_FILE, respectively.

3.4.2. Creating your own board support

Creating your own board support in Buildroot allows users of a particular hardware platform to easily build a system that is known to work.

To do so, you need to create a normal Buildroot configuration that builds a basic system for the hardware: toolchain, kernel, bootloader, filesystem and a simple Busybox-only userspace. No specific package should be selected: the configuration should be as minimal as possible, and should only build a working basic Busybox system for the target platform. You can of course use more complicated configurations for your internal projects, but the Buildroot project will only integrate basic board configurations. This is because package selections are highly application-specific.

Once you have a known working configuration, run make savedefconfig. This will generate a minimal defconfig file at the root of the Buildroot source tree. Move this file into the configs/ directory, and rename it <boardname>_defconfig.

It is recommended to use upstream versions of the Linux kernel and bootloaders as much as possible, and likewise to use the default kernel and bootloader configurations as much as possible. If they are incorrect for your board, or if no default exists, we encourage you to send fixes to the corresponding upstream projects.

However, in the meantime, you may want to store kernel or bootloader configurations or patches specific to your target platform. To do so, create the directory board/<manufacturer> and the subdirectory board/<manufacturer>/<boardname>. You can then store your patches and configurations in these directories, and reference them from the main Buildroot configuration.

3.4.3. Step-by-step instructions for storing configuration

To store the configuration for a specific product, device or application, it is advisable to use the same conventions as for the board support: put the Buildroot defconfig in the configs/ directory, and any other files in a subdirectory of the board/ directory. This section gives step-by-step instructions on how to do that. Of course, you can skip the steps that are not relevant for your use case.

  1. make menuconfig to configure toolchain, packages and kernel.
  2. make linux-menuconfig to update the kernel config; do similarly for the other configurations.
  3. mkdir -p board/<manufacturer>/<boardname>
  4. Set the following options to board/<manufacturer>/<boardname>/<package>.config (as far as they are relevant):

    • BR2_LINUX_KERNEL_CUSTOM_CONFIG_FILE
    • BR2_PACKAGE_BUSYBOX_CONFIG
    • BR2_UCLIBC_CONFIG
    • BR2_TOOLCHAIN_CTNG_CONFIG
    • BR2_TARGET_AT91BOOTSTRAP3_CUSTOM_CONFIG_FILE
    • BR2_TARGET_BAREBOX_CUSTOM_CONFIG_FILE

  5. Write the configuration files:

    • make linux-update-defconfig
    • make busybox-update-config
    • cp <output>/build/build-toolchain/.config board/<manufacturer>/<boardname>/ctng.config
    • make uclibc-update-config
    • cp <output>/build/at91bootstrap3-*/.config board/<manufacturer>/<boardname>/at91bootstrap3.config
    • make barebox-update-defconfig
  6. Create board/<manufacturer>/<boardname>/fs-overlay/ and fill it with additional files you need on your rootfs, e.g. board/<manufacturer>/<boardname>/fs-overlay/etc/inittab. Set BR2_ROOTFS_OVERLAY to board/<manufacturer>/<boardname>/fs-overlay.
  7. Create a post-build script under board/<manufacturer>/<boardname>/ and set BR2_ROOTFS_POST_BUILD_SCRIPT to the path of that script.
  8. If additional setuid permissions have to be set or device nodes have to be created, create board/<manufacturer>/<boardname>/device_table.txt and add that path to BR2_ROOTFS_DEVICE_TABLE.
  9. make savedefconfig to save the buildroot configuration.
  10. cp defconfig configs/<boardname>_defconfig
  11. To add patches to the linux build, set BR2_LINUX_KERNEL_PATCH to board/<manufacturer>/<boardname>/patches/linux/ and add your patches in that directory. Each patch should be called linux-<num>-<description>.patch. Do the same for U-Boot, barebox, at91bootstrap and at91bootstrap3.
  12. If you need modifications to other packages, or if you need to add packages, do that directly in the packages/ directory, following the instructions in Section 6.2, “Adding new packages to Buildroot”.
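Taken together, the steps above result in a directory layout like the following, sketched here for a hypothetical board "widget" by manufacturer "acme" (all file and directory names, including post_build.sh, are examples, not required names):

```shell
# Recreate the skeleton produced by the steps above (hypothetical names).
mkdir -p board/acme/widget/fs-overlay/etc
mkdir -p board/acme/widget/patches/linux
touch board/acme/widget/linux.config
touch board/acme/widget/busybox.config
touch board/acme/widget/post_build.sh     # hypothetical post-build script
touch board/acme/widget/device_table.txt
mkdir -p configs
touch configs/widget_defconfig
find board configs -type f | sort
```

Everything except the defconfig lives under board/<manufacturer>/<boardname>/, so the whole board support can be versioned and shipped as a unit.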

3.4.4. Customizing packages

It is sometimes useful to apply extra patches to packages - over and above those provided in Buildroot. This might be used to support custom features in a project, for example, or when working on a new architecture.

The BR2_GLOBAL_PATCH_DIR configuration option can be used to specify a directory containing global package patches.

For a specific version <packageversion> of a specific package <packagename>, patches are applied as follows.

First, the default Buildroot patch set for the package is applied.

If the directory $(BR2_GLOBAL_PATCH_DIR)/<packagename>/<packageversion> exists, then all *.patch files in the directory will be applied.

Otherwise, if the directory $(BR2_GLOBAL_PATCH_DIR)/<packagename> exists, then all *.patch files in the directory will be applied.
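As an illustration of this lookup order, the following sketch simulates it in shell, using a hypothetical patch directory and package version (Buildroot performs this selection internally; the names here are examples):

```shell
# Hypothetical global patch dir containing both a versioned and an
# unversioned directory for package "strace" version 4.8.
BR2_GLOBAL_PATCH_DIR=global-patches
mkdir -p "$BR2_GLOBAL_PATCH_DIR/strace/4.8"
touch "$BR2_GLOBAL_PATCH_DIR/strace/4.8/0001-example.patch"
touch "$BR2_GLOBAL_PATCH_DIR/strace/0001-fallback.patch"

# The versioned directory takes precedence; the unversioned one is
# used only when no versioned directory exists.
if [ -d "$BR2_GLOBAL_PATCH_DIR/strace/4.8" ]; then
    patchdir="$BR2_GLOBAL_PATCH_DIR/strace/4.8"
else
    patchdir="$BR2_GLOBAL_PATCH_DIR/strace"
fi
ls "$patchdir"/*.patch
```

Note that the two locations are alternatives, not cumulative: once the versioned directory exists, patches in the unversioned directory are ignored.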

3.5. Daily use

3.5.1. Understanding when a full rebuild is necessary

A full rebuild is achieved by running:

$ make clean all

In some cases, a full rebuild is mandatory:

  • each time the toolchain properties are changed; this includes:

    • after changing any toolchain option under the Toolchain menu (if the internal Buildroot backend is used);
    • after running make ctng-menuconfig (if the crosstool-NG backend is used);
    • after running make uclibc-menuconfig.
  • after removing some libraries from the package selection.

In some cases, a full rebuild is recommended:

  • after adding some libraries to the package selection (otherwise, packages that can be optionally linked against those libraries won’t be rebuilt, so they won’t support those new available features).

In other cases, it is up to you to decide if you should run a full rebuild, but you should know what is impacted and understand what you are doing anyway.

3.5.2. Understanding how to rebuild packages

One of the most common questions asked by Buildroot users is how to rebuild a given package or how to remove a package without rebuilding everything from scratch.

Buildroot does not support removing a package without rebuilding from scratch. This is because Buildroot doesn’t keep track of which package installs what files in the output/staging and output/target directories, or which package would be compiled differently depending on the availability of another package.

The easiest way to rebuild a single package from scratch is to remove its build directory in output/build. Buildroot will then re-extract, re-configure, re-compile and re-install this package from scratch. You can ask Buildroot to do this with the make <package>-dirclean command.

For convenience, the special make targets <package>-reconfigure and <package>-rebuild re-run the configure step and the build step, respectively.

However, if you don’t want to rebuild the package completely from scratch, a better understanding of the Buildroot internals is needed. Internally, to keep track of which steps have been done and which steps remain to be done, Buildroot maintains stamp files (empty files that just tell whether this or that action has been done):

  • output/build/<package>-<version>/.stamp_configured. If removed, Buildroot will trigger the recompilation of the package from the configuration step (execution of ./configure).
  • output/build/<package>-<version>/.stamp_built. If removed, Buildroot will trigger the recompilation of the package from the compilation step (execution of make).

Note: toolchain packages use custom makefiles. Their stamp files are named differently.
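As a sketch, removing a stamp file by hand is roughly what the convenience targets do. The package name, version and paths below are hypothetical, and the simulation only touches files rather than running a real build:

```shell
# Simulate a package build directory with its stamp files.
mkdir -p output/build/foo-1.0
touch output/build/foo-1.0/.stamp_configured
touch output/build/foo-1.0/.stamp_built

# Roughly what `make foo-rebuild` does: drop the build stamp so the
# next `make` re-runs the compilation step, while the configure step
# (whose stamp is still present) is not repeated.
rm -f output/build/foo-1.0/.stamp_built
ls -A output/build/foo-1.0
```

Removing .stamp_configured instead would restart the package from the configure step, as described above.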

Further details about package special make targets are explained in Section 5.2.4, “Package-specific make targets”.

3.5.3. Offline builds

If you intend to do an offline build and just want to download all sources that you previously selected in the configurator (menuconfig, xconfig or gconfig), then issue:

 $ make source

You can now disconnect, or copy the content of your dl directory to the build host.

3.5.4. Building out-of-tree

By default, everything built by Buildroot is stored in the directory output in the Buildroot tree.

Buildroot also supports building out of tree with a syntax similar to the Linux kernel. To use it, add O=<directory> to the make command line:

 $ make O=/tmp/build

Or:

 $ cd /tmp/build; make O=$PWD -C path/to/buildroot

All the output files will be located under /tmp/build.

When using out-of-tree builds, the Buildroot .config and temporary files are also stored in the output directory. This means that you can safely run multiple builds in parallel using the same source tree as long as they use unique output directories.

For ease of use, Buildroot generates a Makefile wrapper in the output directory - so after the first run, you no longer need to pass O=.. and -C .., simply run (in the output directory):

 $ make <target>

3.5.5. Environment variables

Buildroot also honors some environment variables, when they are passed to make or set in the environment:

  • HOSTCXX, the host C++ compiler to use
  • HOSTCC, the host C compiler to use
  • UCLIBC_CONFIG_FILE=<path/to/.config>, path to the uClibc configuration file, used to compile uClibc, if an internal toolchain is being built. Note that the uClibc configuration file can also be set from the configuration interface, so through the Buildroot .config file; this is the recommended way of setting it.
  • BUSYBOX_CONFIG_FILE=<path/to/.config>, path to the Busybox configuration file. Note that the Busybox configuration file can also be set from the configuration interface, so through the Buildroot .config file; this is the recommended way of setting it.
  • BUILDROOT_DL_DIR to override the directory in which Buildroot stores/retrieves downloaded files. Note that the Buildroot download directory can also be set from the configuration interface, so through the Buildroot .config file; this is the recommended way of setting it.

An example that uses config files located in the toplevel directory and in your $HOME:

 $ make UCLIBC_CONFIG_FILE=uClibc.config BUSYBOX_CONFIG_FILE=$HOME/bb.config

If you want to use a compiler other than the default gcc or g++ for building helper binaries on your host, then do

 $ make HOSTCXX=g++-4.3-HEAD HOSTCC=gcc-4.3-HEAD

3.6. Integration with Eclipse

While some embedded Linux developers prefer classical text editors like Vim or Emacs and command-line based interfaces, a number of others prefer richer graphical interfaces for their development work. Eclipse being one of the most popular Integrated Development Environments, Buildroot integrates with Eclipse in order to ease the development work of Eclipse users.

Our integration with Eclipse simplifies the compilation, remote execution and remote debugging of applications and libraries that are built on top of a Buildroot system. It does not integrate the Buildroot configuration and build processes themselves with Eclipse. Therefore, the typical usage model of our Eclipse integration would be:

  • Configure your Buildroot system with make menuconfig, make xconfig or any other configuration interface provided with Buildroot.
  • Build your Buildroot system by running make.
  • Start Eclipse to develop, execute and debug your own custom applications and libraries, that will rely on the libraries built and installed by Buildroot.

The Buildroot Eclipse integration installation process and usage is described in detail at

3.7. Hacking Buildroot

If Buildroot does not yet fit all your requirements, you may be interested in hacking it to add:

[3] This terminology differs from what is used by GNU configure, where the host is the machine on which the application will run (which is usually the same as the target).