
Define a container

Balena uses Docker containers to manage deployment and updates. You can use one or more containers to package your services with whichever environments and tools they need to run.

To ensure a service has everything it needs, you'll want to create a list of instructions for building a container image. Whether the build process is done on your device, on your workstation, or on the balena builders, the end result is a read-only image that ends up on your device. This image is used by the container engine (balena or Docker, depending on the balenaOS version) to kick off a running container.

Note: For additional information on working with Dockerfiles on balena, see the services masterclass.


The instructions for building a container image are written in a Dockerfile - this is similar to a Makefile in that it contains a recipe or set of instructions to build our container.

The syntax of Dockerfiles is fairly simple. At its core, a Dockerfile contains two kinds of entries: comments, which start with # as in shell scripts, and instructions of the format INSTRUCTION arguments.

Typically you will only need a handful of instructions: FROM, RUN, ADD or COPY, and CMD:

  • FROM has to be the first instruction in any valid Dockerfile and defines the base image for the container you're building.

  • RUN simply executes commands in the container. It accepts either a single command line, e.g. RUN apt-get -y update, which is run via /bin/sh -c, or a JSON array of the form [ "executable", "param1", "param2", ... ], which is executed directly.

  • ADD copies files from the build context into the container, e.g. ADD <src> <dest>. Note that if <dest> doesn't exist, it will be created for you, e.g. if you specify a folder. If <src> is a local tar archive, it will be unpacked for you. <src> may also be a URL, but remote URLs are not unpacked.

  • COPY is very similar to ADD, but without the tar-extraction and URL functionality. According to the Dockerfile best practices, you should always use COPY unless the auto-extraction capability of ADD is needed.

  • CMD provides the default command for a running container. This command is executed when the container starts up on your device, whereas RUN instructions are executed on our build servers. In a balena service, CMD is typically used to execute a start script or entrypoint for the user's service. CMD should always be the last instruction in your Dockerfile. The only processes that run inside the container are the CMD command and any processes it spawns.

For details on other instructions, consult the official Dockerfile documentation.
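Putting these instructions together, a minimal Dockerfile for a Debian-based service might look like the sketch below. The base image, packages, and start.sh script are illustrative placeholders, not part of any particular project:

```dockerfile
# Illustrative base image; pick one that matches your device and stack
FROM debian:bullseye

# RUN executes at build time, on the build server
RUN apt-get update && apt-get install -y python3

# COPY brings files from the build context into the image
COPY start.sh /usr/src/app/start.sh

# CMD runs when the container starts on the device
CMD ["/bin/bash", "/usr/src/app/start.sh"]
```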

Using Dockerfiles with balena

To deploy a single-container release to balena, simply place a Dockerfile at the root of your repository. A docker-compose.yml file will be automatically generated, ensuring your container has host networking, is privileged, and has /lib/modules, /lib/firmware, and /run/dbus bind mounted into the container. The default docker-compose.yml will look something like this:

Note: If you have devices in your app that have a supervisor version lower than 7.22.0, then you should use the io.resin.features. form of the labels to ensure that all devices obey the label. Earlier supervisor versions will not understand the io.balena.features label.

version: '2.1'
networks: {}
volumes:
  resin-data: {}
services:
  main:
    build:
      context: .
    privileged: true
    restart: always
    network_mode: host
    volumes:
      - 'resin-data:/data'
    labels:
      io.balena.features.kernel-modules: '1'
      io.balena.features.firmware: '1'
      io.balena.features.dbus: '1'
      io.balena.features.supervisor-api: '1'
      io.balena.features.balena-api: '1'

Releases with multiple services should include a Dockerfile or package.json in each service directory. A docker-compose.yml file will need to be defined at the root of the repository, as discussed in our multicontainer documentation.
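As a sketch of that layout (the service names and paths here are hypothetical, not a prescribed structure), a multi-service repository might pair per-service Dockerfiles with a root docker-compose.yml like this:

```yaml
version: '2.1'
services:
  frontend:            # hypothetical service, built from ./frontend/Dockerfile
    build: ./frontend
    ports:
      - "80:80"
  worker:              # hypothetical service, built from ./worker/Dockerfile
    build: ./worker
    restart: always
```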

You can also include a .dockerignore file with your project if you wish the builder to ignore certain files.
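For example, a .dockerignore file uses the standard Docker ignore syntax; the entries below are illustrative:

```
# .dockerignore (illustrative entries)
node_modules
*.log
build/
```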

Note: You don't need to worry about ignoring .git as the builders already do this by default.

Dockerfile templates

One of the goals of balena is code portability and ease of use, so you can easily manage and deploy a whole fleet of different devices. This is why Docker containers were such a natural choice. However, there are cases where Dockerfiles fall short and can't easily target multiple different device architectures.

To allow our builders to build containers for multiple architectures from one code repository, we implemented simple Dockerfile templates.

It is now possible to define a Dockerfile.template file that looks like this:

FROM balenalib/%%BALENA_MACHINE_NAME%%-node

COPY package.json /package.json
RUN npm install

COPY src/ /usr/src/app
CMD ["node", "/usr/src/app/main.js"]

This template will build and deploy a Node.js project for any of the devices supported by balena, regardless of whether the device architecture is ARM or x86. In this example, you can see the build variable %%BALENA_MACHINE_NAME%%. This will be replaced by the machine name (e.g. raspberry-pi) at build time. See below for a list of machine names.

The machine name is inferred from the device type of the fleet you are deploying on. So if you have a NanoPi Neo Air fleet, the machine name will be nanopi-neo-air and an armv7hf base image will be used.
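For instance, with the template above, a NanoPi Neo Air fleet would have its FROM line rendered at build time as:

```dockerfile
FROM balenalib/nanopi-neo-air-node
```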

Note: You need to ensure that your dependencies and Node.js modules are also multi-architecture, otherwise you will have a bad time.

Currently our builder supports the following build variables:

Variable Name        Description
BALENA_APP_NAME      The name of the fleet.
BALENA_ARCH          The instruction set architecture of the base images associated with this device.
BALENA_MACHINE_NAME  The name of the Yocto machine this board is based on. It is the name you will see in most of the balena Docker base images, and it helps identify a specific BSP.
BALENA_RELEASE_HASH  The hash corresponding to the release.
BALENA_SERVICE_NAME  The name of the service as defined in the docker-compose.yml file.

Note: If your fleet contains devices of different types, the %%BALENA_MACHINE_NAME%% build variable will not evaluate correctly for all devices. Your fleet services are built once for all devices, and the %%BALENA_MACHINE_NAME%% variable will pull from the device type associated with the fleet, rather than the target device. In this scenario, you can use %%BALENA_ARCH%% to pull a base image that matches the shared architecture of the devices in your fleet.
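In that case, a template can select the base image by architecture instead. As a sketch (assuming an architecture-named balenalib Node.js image exists for your stack):

```dockerfile
FROM balenalib/%%BALENA_ARCH%%-node
```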

If you want to see an example of build variables in action, have a look at this basic openssh example.

Here are the supported devices, listed as device name, machine name, and architecture:

AM571X-EVM am571x-evm armv7hf Link
Aetina N510 TX2 n510-tx2 aarch64 Link
Asus Tinker Board asus-tinker-board armv7hf Link
Asus Tinker Board S asus-tinker-board-s armv7hf Link
Auvidea JN30B Nano jn30b-nano aarch64 Link
BalenaFin fincm3 armv7hf Link
BananaPi-M1+ bananapi-m1-plus armv7hf Link
BeagleBoard-XM beagleboard-xm armv7hf Link
BeagleBone Black beaglebone-black armv7hf Link
BeagleBone Green beaglebone-green armv7hf Link
BeagleBone Green Wifi beaglebone-green-wifi armv7hf Link
CTI Orbitty TX2 orbitty-tx2 aarch64 Link
CTI Spacely TX2 spacely-tx2 aarch64 Link
Compulab iMX8 cl-som-imx8 aarch64 Link
Coral Dev Board coral-dev aarch64 Link
Cybertan ze250 cybertan-ze250 i386-nlp Link
Dart imx6ul-var-dart armv7hf Link
Generic AARCH64 generic-aarch64 aarch64 Link
Generic ARMv7-a HF generic-armv7ahf armv7hf Link
Hummingboard hummingboard armv7hf Link
Intel Edison intel-edison i386 Link
Intel NUC intel-nuc amd64 Link
Microsoft Surface Go surface-go amd64 Link
Microsoft Surface Pro 6 surface-pro-6 amd64 Link
NPE X500 M3 npe-x500-m3 armv7hf Link
NanoPC-T4 nanopc-t4 aarch64 Link
Nanopi Neo Air nanopi-neo-air armv7hf Link
Nitrogen 6X Quad 2GB nitrogen6xq2g armv7hf Link
Nitrogen 6x nitrogen6x armv7hf Link
Nitrogen8M Mini SBC nitrogen8mm aarch64 Link
Nvidia D3 TX2 srd3-tx2 aarch64 Link
Nvidia Jetson Nano jetson-nano aarch64 Link
Nvidia Jetson TX1 jetson-tx1 aarch64 Link
Nvidia Jetson TX2 jetson-tx2 aarch64 Link
Nvidia Jetson Xavier jetson-xavier aarch64 Link
Nvidia blackboard TX2 blackboard-tx2 aarch64 Link
ODroid C1 odroid-c1 armv7hf Link
ODroid XU4 odroid-xu4 armv7hf Link
Orange Pi Lite orange-pi-lite armv7hf Link
Orange Pi One orange-pi-one armv7hf Link
Orange Pi Plus2 orangepi-plus2 armv7hf Link
Orange Pi Zero orange-pi-zero armv7hf Link
Parallella parallella armv7hf Link
PocketBeagle beaglebone-pocket armv7hf Link
QEMU x86 qemux86 i386 Link
QEMU x86-64 qemux86-64 amd64 Link
Raspberry Pi (1, Zero, Zero W) raspberry-pi rpi Link
Raspberry Pi 2 raspberry-pi2 armv7hf Link
Raspberry Pi 3 raspberrypi3 armv7hf Link
Raspberry Pi 3 64bits raspberrypi3-64 aarch64 Link
Raspberry Pi 4 (using 64bit OS) raspberrypi4-64 aarch64 Link
Revolution Pi Core 3 revpi-core-3 armv7hf Link
RushUp Kitra 520 kitra520 armv7hf Link
RushUp Kitra 710 kitra710 aarch64 Link
SKX2 skx2 aarch64 Link
Samsung Artik 10 artik10 armv7hf Link
Samsung Artik 5 artik5 armv7hf Link
Samsung Artik 530 artik530 armv7hf Link
Samsung Artik 530s 1G artik533s armv7hf Link
Samsung Artik 710 artik710 aarch64 Link
Siemens IOT2000 iot2000 i386-nlp Link
Technologic TS-4900 ts4900 armv7hf Link
Toradex Apalis apalis-imx6q armv7hf Link
Toradex Colibri colibri-imx6dl armv7hf Link
UP Board up-board amd64 Link
UP Core up-core amd64 Link
UP Core Plus up-core-plus amd64 Link
UP Squared up-squared amd64 Link
VIA vab820 via-vab820-quad armv7hf Link
Variscite DART-MX8M imx8m-var-dart aarch64 Link
Variscite DART-MX8M Mini imx8mm-var-dart aarch64 Link
Variscite VAR-SOM-MX6 var-som-mx6 armv7hf Link
Variscite VAR-SOM-MX7 imx7-var-som armv7hf Link

Multiple Dockerfiles

There are cases when you would need a higher granularity of control when specifying build instructions for different devices and architectures than a single Dockerfile template can provide. An example of this would be when different configuration or installation files are required for each architecture or device.

When creating a release, the balenaCloud build servers or the balena CLI tool (depending on the deployment method used) look at all available Dockerfiles and build the appropriate image using the following order of preference:

  • Dockerfile.<device-type>
  • Dockerfile.<arch>
  • Dockerfile.template

As an example, let's say you have two Dockerfiles available, Dockerfile.raspberrypi3 and Dockerfile.template. Whenever you push the application to balenaCloud, if the fleet device type is Raspberry Pi 3, Dockerfile.raspberrypi3 will be selected as an exact match; for all other device types the builder will automatically select Dockerfile.template.

Note that this feature works with the following commands: git push, balena push, balena build, and balena deploy.
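The selection order above can be sketched as a small shell function. This is an illustrative model only, not balena's actual builder code, and the demo project files are created purely for the example:

```shell
#!/bin/sh
# Sketch of the Dockerfile preference order described above.
# pick_dockerfile <device-type> <arch> echoes the first matching file.
pick_dockerfile() {
  device_type="$1"
  arch="$2"
  for candidate in "Dockerfile.$device_type" "Dockerfile.$arch" "Dockerfile.template"; do
    if [ -f "$candidate" ]; then
      echo "$candidate"
      return 0
    fi
  done
  return 1  # no matching Dockerfile found
}

# Demo: a project containing a device-specific Dockerfile and a template
mkdir -p /tmp/dockerfile-demo
cd /tmp/dockerfile-demo
touch Dockerfile.raspberrypi3 Dockerfile.template

pick_dockerfile raspberrypi3 armv7hf   # prints Dockerfile.raspberrypi3 (exact device match)
pick_dockerfile jetson-nano aarch64    # prints Dockerfile.template (fallback)
```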

Node applications

Balena supports Node.js natively, using the package.json file located in the root of the repository to determine how to build and execute Node.js applications.

When you push your code to your fleet, the build server generates a container for the environment your device operates in, deploys your code to it and runs npm install to resolve npm dependencies, reporting progress to your terminal as it goes.

If the build executes successfully, the release is deployed to your device, where the supervisor runs it in place of any previously running containers, using npm start to execute your code. (Note that if no start script is specified, it defaults to running node server.js.)
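If your entry point is not server.js, you can set an explicit start script in package.json using the standard npm scripts field; the file name below is illustrative:

```json
{
  "scripts": {
    "start": "node app.js"
  }
}
```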

Node.js Example

A good example of this is the text-to-speech application. Here's its package.json file:

  "name": "text2speech",
  "description": "Simple balena app that uses Google's TTS endpoint",
  "repository": {
    "type": "git",
    "url": ""
  "scripts": {
    "preinstall": "bash"
  "version": "0.0.3",
  "dependencies": {
    "speaker": "~0.0.10",
    "request": "~2.22.0",
    "lame": "~1.0.2"
  "engines": {
      "node": "0.10.22"

Note: We don't specify a start script here, which means node will default to running server.js. The bash script in the preinstall step is executed before npm install tries to satisfy the code's dependencies. Let's have a look at that:

apt-get install -y alsa-utils libasound2-dev
mv sound_start /usr/bin/sound_start

These are shell commands that are run within the container on the build server. The build server is configured so that dependencies are resolved for the target architecture, not the build server's own. This can be very useful for deploying non-JavaScript code or fulfilling package dependencies that your Node.js code might require.

We use Raspbian as our contained operating system, so this script uses apt-get to install native packages before moving a script for our Node.js code to use into /usr/bin (the install script runs with root privileges within the container).

Note: With a plain Node.js project, our build server will detect compatible Node.js versions from the package.json and build the container using a Docker image that satisfies the version requirement. If no version is specified, the default Node.js version, 0.10.22, is used. If the specified Node.js version is not in our registry, the build will fail with an error; you can either try another version or contact us to have it supported. More details about the Docker node images in our registry can be found here.