Architecture Overview


BalenaOS is an operating system optimised for running Docker containers on embedded devices, with an emphasis on reliability over long periods of operation, as well as a productive developer workflow inspired by the lessons learned while building balena.

The core insight behind balenaOS is that Linux Containers offer, for the first time, a practical path to using virtualisation on embedded devices. VMs and hypervisors have led to huge leaps in productivity and automation for cloud deployments, but their abstraction of hardware, as well as their resource overhead and lack of hardware support, means that they are out of the question for embedded scenarios. With OS-level virtualisation as implemented for Linux Containers, both those objections are lifted for Linux devices, of which there are many in the Internet of Things.

BalenaOS is an operating system built for easy portability to multiple device types (via the Yocto framework) and optimised for Linux Containers, and Docker in particular. There are many decisions, large and small, we have made to enable that vision, which are present throughout our architecture.

The first version of balenaOS was developed as part of balena, and has run on thousands of embedded devices, deployed in many different contexts for several years. BalenaOS v2 represents the combination of the learnings we extracted over those years, as well as our determination to make balenaOS a first-class open source project, able to run as an independent operating system, for any context where embedded devices and containers intersect.

We look forward to working with the community to grow and mature balenaOS into an operating system with even broader device support and a broader operating envelope, one that, as always, takes advantage of the most modern developments in security and reliability.

OS Composition

The OS is composed of multiple Yocto layers. The Yocto Project build system uses these layers to compile balenaOS for the various supported platforms. This document will not go into a detailed explanation of how the Yocto Project works, but it assumes a basic understanding of its internals and its release versioning and codenames.

Codename   Yocto Project Version   Release Date   Current Version   Support Level   Poky Version   BitBake branch
Pyro       2.3                     Apr 2017       -                 Development     -              -
Morty      2.2                     Oct 2016       2.2.1             Stable          16.0           1.32
Krogoth    2.1                     Apr 2016       2.1.2             Stable          15.0           1.30
Jethro     2.0                     Nov 2015       2.0.3             Community       14.0           1.28
Fido       1.8                     Apr 2015       1.8.2             Community       13.0           1.26
Dizzy      1.7                     Oct 2014       1.7.3             Community       12.0           1.24
Daisy      1.6                     Apr 2014       1.6.3             Community       11.0           1.22
Dora       1.5                     Oct 2013       1.5.4             Community       10.0           1.20

We will start looking at balenaOS's composition from the core of the Yocto Project, i.e. poky. Poky has had many releases, and supporting all of them is out of scope for our OS, though we do try to support its recent versions. This might sound contradictory given that we do not currently support poky's latest version (i.e. 2.1/Krogoth), but that is only because we have not needed this version yet. We tend to add poky versions based on what our supported boards require, and we also do a yearly update to the latest poky version for all the boards that can run it. Currently we support three poky versions: 2.0/Jethro, 1.8/Fido and 1.6/Daisy.

On top of poky we add the collection of packages from meta-openembedded. With the build system in place, the next ingredient is a Board Support Package (BSP) layer. BSP layers provide board-specific configuration and packages (e.g. the bootloader and kernel), making it possible to build for physical hardware rather than emulators. These are the layers to look for when adding support for a new board: if a BSP layer for your board already exists, the job should be fairly straightforward; if not, you may be in for a considerable amount of work.

At this point we have all the bits and pieces in place to build an OS. The core code of balenaOS resides in meta-balena. This layer handles a lot of functionality, but the main thing to remember for now is that it contains the balenaOS recipe. It also needs a poky version-specific layer to combine with (e.g. meta-balena-jethro); together, these two layers provide the necessary framework for generating an abstract balenaOS.

The final piece of the puzzle is the board-specific meta-balena configuration layer, which goes hand in hand with a BSP layer. For example, the Raspberry Pi family (i.e. rpi0, rpi1, rpi2, rpi3) is supported by the meta-raspberrypi BSP, so we provide a meta-balena-raspberrypi layer that configures meta-balena to the Raspberry Pi's needs.

Below is a representative example from the Raspberry Pi family, which helps explain meta-balena-raspberrypi/conf/samples/bblayers.conf.sample.

Layer Name                          Description
meta-balena                         Enables building balenaOS for various devices
meta-balena-jethro                  Enables building balenaOS for jethro-supported BSPs
meta-balena-raspberrypi             Enables building balenaOS for chosen meta-raspberrypi machines
meta-raspberrypi                    General hardware-specific BSP overlay for the Raspberry Pi device family
meta-openembedded                   Collection of OpenEmbedded layers
meta-openembedded/meta-python       The home of Python modules for OpenEmbedded
meta-openembedded/meta-networking   Central point for networking-related packages and configuration
oe-meta-go                          OpenEmbedded layer for the Go programming language
poky/meta                           Core functionality and configuration of the Yocto Project
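To make the layering concrete, here is a sketch of what a bblayers.conf.sample combining these layers might contain. The ${TOPDIR}-relative paths and the layer ordering are illustrative only; consult the actual file in meta-balena-raspberrypi for the exact layout.

```
LCONF_VERSION = "6"

BBPATH = "${TOPDIR}"

# Illustrative layer list -- paths depend on how the repositories are checked out
BBLAYERS = " \
  ${TOPDIR}/../../layers/poky/meta \
  ${TOPDIR}/../../layers/meta-openembedded/meta-oe \
  ${TOPDIR}/../../layers/meta-openembedded/meta-python \
  ${TOPDIR}/../../layers/meta-openembedded/meta-networking \
  ${TOPDIR}/../../layers/oe-meta-go \
  ${TOPDIR}/../../layers/meta-raspberrypi \
  ${TOPDIR}/../../layers/meta-balena \
  ${TOPDIR}/../../layers/meta-balena-jethro \
  ${TOPDIR}/../../layers/meta-balena-raspberrypi \
  "
```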

Userspace Components

The balenaOS userspace tries to package only the bare essentials for running containers while still offering a lot of flexibility. The philosophy is that software and services always default to being in a container, unless they are generically useful to all containers or they absolutely can’t live in a container. The userspace consists of many open source components, but in this section we will just highlight some of the important ones.


We use systemd as the init system for balenaOS and it is responsible for launching and managing all the other services. We leverage many of the great features of systemd, such as adjusting OOM scores for critical services and running services in separate mount namespaces. Systemd also allows us to easily manage service dependencies.
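As an illustration of those features, a unit file for a hypothetical critical service (the service name and binary path below are made up, not actual balenaOS units) could look like this:

```
# example-critical.service -- illustrative sketch only
[Unit]
Description=Example critical service
After=network-online.target

[Service]
ExecStart=/usr/bin/example-critical
Restart=always
# Make the kernel OOM killer much less likely to pick this service
OOMScoreAdjust=-800
# Give the service its own mount namespace
MountFlags=slave

[Install]
WantedBy=multi-user.target
```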


The balena engine is a lightweight container runtime that allows us to build and run Linux containers on balenaOS. BalenaOS has been optimized to run Docker containers and is set up to use the journald log driver and DNSmasq for container DNS resolution. We use AUFS as the default underlying storage driver, since it is arguably the most production-tested storage driver in the Docker ecosystem. It also allows us to more easily support devices with older kernel versions, and additionally gives us the ability to run on devices with unmanaged NAND flash. For device types with a 4.4 kernel or above we now use overlayfs as the storage driver, and in the future we will be transitioning all devices to this driver.
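In stock Docker terms, the storage and logging setup described above corresponds to daemon configuration like the following. BalenaOS bakes these settings into the image rather than shipping this exact file, so the snippet is only illustrative:

```
{
    "storage-driver": "aufs",
    "log-driver": "journald"
}
```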


The supervisor is a container that is responsible for starting and managing the state of all the other containers on the device. When balenaOS is run without a management backend like balenaCloud or openBalena, it runs in a state called "localMode". In localMode the supervisor will load any preloaded images on boot and ensure they keep running. It also offers a useful set of endpoints via the local Supervisor API that containers on the device can make use of.

NetworkManager and ModemManager

BalenaOS uses NetworkManager, accompanied by ModemManager, to deliver a stable and reliable connection to the internet, be it via ethernet, WiFi or a cellular modem. Additionally, to make headless configuration of the device's network easy, we have added a system-connections folder in the boot partition, which is copied into /etc/NetworkManager/system-connections. So any valid NetworkManager connection file can simply be dropped into the boot partition before device commissioning.
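For example, a standard NetworkManager keyfile for a WPA-protected WiFi network, placed in the boot partition's system-connections folder, would look like this (the SSID and passphrase are placeholders):

```
[connection]
id=my-wifi
type=wifi

[wifi]
# Replace with your network's SSID
ssid=MyNetwork
mode=infrastructure

[wifi-security]
key-mgmt=wpa-psk
# Replace with your network's passphrase
psk=super-secret-passphrase

[ipv4]
method=auto

[ipv6]
method=auto
```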


DNSmasq manages the nameservers that NetworkManager provides for balenaOS. NetworkManager discovers the nameservers that can be used, and a binary called resolvconf writes them to a tmpfs location, from where DNSmasq takes over and manages them to give the user the fastest, most responsive DNS resolution.


In order to improve the development experience of balenaOS, there is an Avahi daemon that starts advertising the device as balena.local or <hostname>.local on boot if the image is a development image.


BalenaOS also provides an OpenVPN server that users may make use of. It is worth noting that this server is disabled by default, and manual interaction from the user is needed to activate and configure it to their needs.

Image Partition Layout

The first partition, resin-boot, holds the boot-critical bits for each board (e.g. the kernel image and bootloader image). It also holds a very important file that you will find mentioned elsewhere in this document: config.json. The config.json file is the central point for configuring balenaOS and defining its behaviour; for example, you can set the hostname, allow persistent logging, etc.

Resin-rootA is the partition that holds our read-only root filesystem; it holds almost everything that balenaOS is. Resin-rootB is an empty partition that is only used when the root filesystem is to be updated, following a blue-green deployment strategy. Essentially, we have one active partition that is the OS's current rootfs and one dormant partition that is empty. We download the new rootfs to the dormant partition and try to switch them: if the switch is successful, the dormant partition becomes the new rootfs; if not, we roll back to the old active partition.

Resin-state is the partition that holds persistent data, as explained in Stateless and Read-Only rootfs. Resin-data is the partition that holds downloaded Docker images; generally, any container data will be found here.
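As a small illustration, a config.json that sets a hostname and enables persistent logging could contain keys like these (the values are placeholders; consult the balenaOS configuration documentation for the full schema):

```
{
    "hostname": "mydevice",
    "persistentLogging": true
}
```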

Stateless and Read-Only rootfs

BalenaOS comes with a read-only root filesystem, so we can ensure the host OS is stateless, but we still need some data to persist over system reboots. We achieve this with a very simple mechanism: bind mounts. BalenaOS contains a partition named resin-state that is meant to hold all this persistent data; inside it we replicate the Linux Filesystem Hierarchy Standard layout for the rootfs paths that we require to be persistent. After this partition is populated, we are ready to bind mount the respective rootfs paths to this read-write location, thus allowing different components (e.g. journald) to write data to disk. A mechanism to purge this partition is provided, allowing users to roll back to an unconfigured balenaOS image.
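The idea can be sketched with fstab-style entries. The /mnt/state mount point and the overlay paths below are illustrative, not the exact paths balenaOS uses:

```
# Mount the resin-state partition read-write
/dev/disk/by-label/resin-state  /mnt/state  ext4  defaults  0  2

# Bind mount persistent locations back into the read-only rootfs
/mnt/state/root-overlay/var/log/journal  /var/log/journal  none  bind  0  0
/mnt/state/root-overlay/etc/NetworkManager/system-connections  /etc/NetworkManager/system-connections  none  bind  0  0
```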

A diagram of our read-only rootfs can be seen below:

Dev vs. Prod images

BalenaOS comes in two flavours, namely Development (dev) and Production (prod). The Development images are recommended while getting started with balenaOS and building a system. The dev images enable a number of useful features while developing, namely:

  • Passwordless SSH to balenaOS
  • The device broadcasts as balena.local or <hostname>.local on the network for easy access.
  • Docker socket exposed via port 2377
  • Getty console attached to tty1 and serial

Note: Raspberry Pi devices don’t have Getty attached to serial.

The production images have all of the above functionality disabled by default. In both forms of the OS we write logs to an 8 MB journald RAM buffer in order to avoid wear on the flash storage used by most of the supported boards. However, persistent logging can be enabled by setting the "persistentLogging": true key in the config.json file in the boot partition of the device.

If you wish to add SSH access to your prod images, you can add your SSH key to the config.json of the image before flashing it, as described here.

OS Tools

Base Images

To help you get started with containers on embedded systems, balenaOS comes with a full complement of over 500 Docker base images. We currently have base images for the Debian, Fedora and Alpine Linux distributions, as well as Node.js, Python, Go and Java language base images. For a more in-depth look at all the available base images, head over to the balena base images wiki or the Docker Hub repository.
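As a quick example, a Node.js container for a Raspberry Pi 3 might start from one of these base images. The image name below follows the balenalib naming scheme, but check the base images wiki for the exact names and tags available:

```
# Illustrative Dockerfile using a balena base image
FROM balenalib/raspberrypi3-node:12

WORKDIR /usr/src/app

# Install dependencies first so Docker layer caching works in our favour
COPY package.json ./
RUN npm install --production

COPY . ./
CMD ["node", "index.js"]
```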

Balena Command Line Tool

The balena CLI is a set of useful tools that help with setting up and developing containers with a balenaOS device. The goal of the CLI is to provide a simple and intuitive developer experience. We love it when you report bugs; you can report them here:


Currently the CLI is a Node.js based command-line tool which requires that your system has the following dependencies installed and in your path:

Once you have those set up, you can install the CLI using npm:

$ npm install --global --production --unsafe-perm balena-cli



balena local configure allows you to configure or reconfigure a balenaOS system image or SD card. Currently, this allows for configuration of WiFi settings, hostname and enablement of persistent journald logs.

$ balena help local configure
Usage: local configure <target>

Use this command to configure or reconfigure a balenaOS drive or image.


	$ balena local configure /dev/sdc
	$ balena local configure path/to/image.img

The balena local flash command helps you easily and safely flash a balenaOS system image to an SD card or USB drive.

Note: Currently, balena local flash doesn't work with the Intel Edison board.

$ balena help local flash
Usage: local flash <image>

Use this command to flash a balenaOS image to a drive.


	$ balena local flash path/to/balenaos.img
	$ balena local flash path/to/balenaos.img --drive /dev/disk2
	$ balena local flash path/to/balenaos.img --drive /dev/disk2 --yes


    --yes, -y                           confirm non-interactively
    --drive, -d <drive>                 drive

The balena push command can be used to start an image build on the remote balenaCloud build servers, or on a local-mode balena device.

When building on the balenaCloud servers, the given source directory will be sent to the remote server. This can be used as a drop-in replacement for the "git push" deployment method.

When building on a local-mode device, the given source directory will be built on the device, and the resulting containers will be run on the device. Logs will be streamed back from the device as part of the same invocation.

The --registry-secrets option specifies a JSON or YAML file containing private Docker registry usernames and passwords to be used when pulling base images. Sample registry-secrets YAML file:

	'https://idx.docker.io/v1/':
		username: ann
		password: hunter2
	'':  # Use the empty string to refer to the Docker Hub
		username: mike
		password: cze14
	'eu.gcr.io':  # Google Container Registry
		username: '_json_key'
		password: '{escaped contents of the GCR keyfile.json file}'


	$ balena push
	$ balena push --source <source directory>
	$ balena push -s <source directory>

--source, -s <source>

The source that should be sent to the balena builder to be built (defaults to the current directory)

--nocache, -c

Don't use cache when building this project

--registry-secrets, -R <secrets.yml|.json>

Path to a local YAML or JSON file containing Docker registry passwords used to pull base images


balena local ssh discovers balenaOS devices on the local network and allows you to drop an SSH session into any of the containers running on a device. It also enables you to drop into the underlying host OS with balena local ssh --host, although you can of course always just do ssh root@balena.local -p22222.

$ balena help local ssh
Usage: local ssh [deviceIp]

Warning: 'balena local ssh' requires an openssh-compatible client to be correctly
installed in your shell environment. For more information (including Windows
support) please check the README here:

Use this command to get a shell into the running application container of
your device.

The '--host' option will get you a shell into the Host OS of the balenaOS device.
With no options, it will return a list of containers to enter; you can also explicitly
select one by passing its name to the --container option.


	$ balena local ssh
	$ balena local ssh --host
	$ balena local ssh --container chaotic_water
	$ balena local ssh --container chaotic_water --port 22222
	$ balena local ssh --verbose


    --verbose, -v                       increase verbosity
    --host, -s                          get a shell into the host OS
    --container, -c <container>         name of container to access
    --port, -p <port>                   ssh port number (default: 22222)

balena local logs allows the fetching of logs from any of the running containers on the device.

$ balena help local logs
Usage: local logs [deviceIp]


	$ balena local logs
	$ balena local logs -f
	$ balena local logs -f --app-name myapp


    --follow, -f                        follow log
    --app-name, -a <name>               name of container to get logs from