As your applications grow more complex, you may find significant benefit in running some services in separate containers. Splitting your application into multiple containers allows you to better isolate and maintain key services, providing a more modular and secure approach to application management. Each service can be packaged with the operating environment and tools it specifically needs to run, and each service can be limited to the minimum system resources necessary to perform its task. The benefits of multicontainer applications compound as the complexity of the application grows. Because each service can be updated independently, larger applications can be developed and maintained by separate teams, each free to work in a way that best supports their service.
This guide will cover the considerations you need to take into account when running multiple containers, including `docker-compose.yml` configuration and some important balena-specific settings.
Note: Multicontainer functionality requires balenaOS v2.12.0 or higher, and it is only available to microservices and starter application types. If you are creating an application and do not see microservices or starter as available application types, a multicontainer compatible OS version has not yet been released for the selected device type.
The multicontainer functionality provided by balena is built around the Docker Compose file format. The balena device supervisor implements a subset of the Compose v2.1 feature set. You can find a full list of supported and known unsupported features in our device supervisor reference docs.
At the root of your multicontainer application, you'll use a
docker-compose.yml file to specify the configuration of your containers. The
docker-compose.yml defines the services you'll be building, as well as how the services interact with each other and the host OS.
Here's an example
docker-compose.yml for a simple multicontainer application, composed of a static site server, a websocket server, and a proxy:
```yaml
version: '2'
services:
  frontend:
    build: ./frontend
    expose:
      - "80"
  proxy:
    build: ./haproxy
    depends_on:
      - frontend
      - data
    ports:
      - "80:80"
  data:
    build: ./data
    expose:
      - "8080"
```
Each service can either be built from a directory containing a `Dockerfile`, as shown here, or can use a Docker image that has already been built, by replacing `build:` with `image:`. If your containers need to be started in a specific order, make sure to use the `depends_on:` setting, as in the `proxy` service above.
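For example, a service that uses a prebuilt image instead of a local build directory might look like this (the `balenalib/raspberrypi3-node` image name here is illustrative):

```yaml
frontend:
  image: balenalib/raspberrypi3-node
  expose:
    - "80"
```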
Unlike single container applications, multicontainer applications do not run containers in privileged mode by default. If you want to make use of hardware, you will either need to mark some services as privileged, using `privileged: true`, or use the `devices` setting to map the required hardware access into the container.
As an example, here the
gpio service is set up to use i2c and serial uart sensors:
```yaml
gpio:
  build: ./gpio
  devices:
    - "/dev/i2c-1:/dev/i2c-1"
    - "/dev/mem:/dev/mem"
    - "/dev/ttyACM0:/dev/ttyACM0"
  cap_add:
    - SYS_RAWIO
```
There are a few settings and considerations specific to balena that need to be taken into account when building multicontainer applications.
The `INITSYSTEM=on` setting in the `Dockerfile` of a service is only supported if the container is run as privileged, as systemd does not run correctly in unprivileged containers. In addition, if you want to ensure your container is always kept running, set the following in your service configuration:
```yaml
privileged: true
restart: always
```
Setting `network_mode` to `host` allows the container to share the same network namespace as the host OS. When this is set, any ports exposed on the container will be exposed locally on the device. This is necessary for features such as bluetooth.
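A minimal sketch of a service sharing the host network namespace (the `bluetooth` service name and build directory are illustrative):

```yaml
bluetooth:
  build: ./bluetooth
  network_mode: host
```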
With multicontainer applications, balena supports the use of named volumes, a feature that expands on the persistent storage functionality used by older versions of balenaOS. Named volumes can be given arbitrary names and can be linked to a directory in one or more containers. As long as every release of the application includes a
docker-compose.yml and the volume name does not change, the data in the volume will persist across updates.
Use the `volumes` field of the service to link a directory in your container to your named volume. The named volume should also be specified at the top level of the `docker-compose.yml` file:
```yaml
version: '2'
volumes:
  resin-data:
services:
  example:
    build: ./example
    volumes:
      - 'resin-data:/data'
```
For devices upgraded from older versions of balenaOS to v2.12.0 or higher, a link will automatically be created from the
/data directory of the container to the
resin-data named volume (similar to above). This ensures application behavior will remain consistent across host OS versions. One notable difference is that accessing this data via the host OS is done at
/var/lib/docker/volumes/<APP ID>_resin-data/_data, rather than the
/mnt/data/resin-data/<APP ID> location used with earlier host OS versions.
In addition to the settings above, there are some balena-specific labels that can be defined in the `docker-compose.yml` file. These provide access to certain bind mounts and environment variables without requiring you to run the container as privileged:
Note: If you have devices in your app with a supervisor version lower than 7.22.0, you should use the `io.resin.features.` form of the labels to ensure that all devices obey the label. Earlier supervisor versions will not understand the `io.balena.features.` form.
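For example, to support devices running older supervisors, a feature can be enabled with the legacy prefix (the `example` service name is illustrative):

```yaml
example:
  build: ./example
  labels:
    io.resin.features.kernel-modules: '1'
```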
| Label | Default | Description |
|-------|---------|-------------|
| `io.balena.features.balena-socket` | false | Bind mounts the balena container engine socket into the container |
| `io.balena.features.dbus` | false | Bind mounts the host OS dbus into the container |
| `io.balena.features.kernel-modules` | false | Bind mounts the host OS kernel modules (`/lib/modules`) into the container |
| `io.balena.features.firmware` | false | Bind mounts the host OS firmware (`/lib/firmware`) into the container |
| `io.balena.features.balena-api` | false | When enabled, it will make sure that the `BALENA_API_KEY` environment variable is available in the container |
| `io.balena.update.strategy` | download-then-kill | Set the application update strategy |
| `io.balena.update.handover-timeout` | 60000 | Time, in milliseconds, before an old container is automatically killed. Only used with the `hand-over` update strategy |
These labels are applied to a specific service with the `labels` setting:
```yaml
labels:
  io.balena.features.kernel-modules: '1'
  io.balena.features.firmware: '1'
  io.balena.features.dbus: '1'
  io.balena.features.supervisor-api: '1'
  io.balena.features.balena-api: '1'
  io.balena.update.strategy: download-then-kill
  io.balena.update.handover-timeout: ''
```