How to classify socks using a Raspberry Pi, Edge Impulse, and balena

Deploy an image classifier application on a Raspberry Pi with a Pi Camera, Edge Impulse embedded machine learning, and balena. It’s AI to track your socks.

We’re going to create a project on a Raspberry Pi with a camera that distinguishes between socks, using the Edge Impulse machine learning system running on balena. We all know that matching socks is a nightmare, so an affordable, intelligent system that tells you whether one sock matches another is the best weekend project ever.

Use AI on a Raspberry Pi with Edge Impulse and balena to determine matching socks

There are a lot of machine learning solutions for Internet of Things projects nowadays (in the case of this project, we start with classifying socks). What makes this project interesting is how we’ll use Edge Impulse and some pretty affordable gear to help you get started with AI. Beyond that, it’s up to you as a product builder to see how it can fit your needs.


Before you start

What’s Edge Impulse?

Edge Impulse is a service that enables you to generate trained machine learning models in the cloud and deploy them on microcontrollers (e.g. Arduino and STM32) or single board computers like the Raspberry Pi. No GPU or TPU is needed on the device, because all the machine learning and neural network training is done beforehand in the cloud with advanced methods. Edge Impulse generates a trained model that is deployed onto the device and enables it to classify images (or sound, motion, and more) at the edge without any special hardware requirements.

This project deploys an image classifier that runs on the stream captured by the Raspberry Pi camera. Images are classified using a model trained on Edge Impulse’s neural network infrastructure, based on the transfer learning technique for images and, in this case, a dataset of pictures taken with a mobile phone camera.
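To make the idea concrete, here is a minimal sketch of frame-by-frame classification on a Pi. Note that this post’s project actually runs a WebAssembly build of the model inside a container (set up later in the tutorial); the sketch below instead uses the separate Edge Impulse Linux Python SDK with a hypothetical modelfile.eim export, purely to illustrate what “classifying at the edge” looks like in code.

```python
# Illustrative sketch only: classify camera frames on-device with the
# Edge Impulse Linux Python SDK (pip install edge_impulse_linux opencv-python).
# The tutorial's actual project uses a WebAssembly model inside a container;
# "modelfile.eim" here is a hypothetical model export.
import cv2
from edge_impulse_linux.image import ImageImpulseRunner

with ImageImpulseRunner("modelfile.eim") as runner:
    model_info = runner.init()  # loads the model and returns its metadata
    print("Labels:", model_info["model_parameters"]["labels"])

    cap = cv2.VideoCapture(0)  # the Pi Camera exposed as /dev/video0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        features, cropped = runner.get_features_from_image(img)
        res = runner.classify(features)
        # res["result"]["classification"] maps each label to a 0..1 score
        print(res["result"]["classification"])
```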

We’ll also show you how to generate a machine learning model using Edge Impulse and deploy an image classification system running on a Raspberry Pi with balena. By the end of this project, you’ll be able to reunite all of your unpaired socks (and try out other edge AI use cases)!

Hardware required

To build this project, you’ll need the following:

  • A Raspberry Pi 4 (or another Raspberry Pi with camera support)
  • A Raspberry Pi Camera module
  • A microSD card
  • A power supply for the Raspberry Pi

Software setup

For the software, you’ll need:

  • A free Edge Impulse studio account (sign up here)
  • A free balenaCloud account (sign up here)
  • Software to flash an SD card e.g. balenaEtcher
  • Optional: the balena CLI if you want to manually push code or work on your device locally

Tutorial

Create the Edge Impulse project

Go to the Edge Impulse Studio website and create an account.

Sign up for Edge Impulse

Click on the menu, select Create new project, and enter a name for your new project. In this case, we’ll create a project that classifies socks and tells us whether or not they match. Obviously a useful tool that everybody should have in their home.

Train the project’s model

Connect a device and start taking pictures

Once the project is created, select the project and start collecting the data to train the Machine Learning model.

Start training your Edge Impulse model

Navigate to Devices on the main menu and then click Connect a new device at the top right. A modal will pop up to connect a new device. For this project, we’ll use our mobile phone to take the pictures that train the model.

Click Use your mobile phone and then scan the QR code generated by the website with your phone. The QR code opens a website on your mobile phone where, once you grant permission to use the camera, you can capture pictures of paired and unpaired socks.

At that point, click on Label and write pair, and start taking pictures of paired socks. Once you have taken 40-50 pictures of different paired socks (depending on your jungle of socks), change the label to unpair and take pictures of your unpaired socks. You can split the pictures automatically (80/20) into training and testing sets, or do it manually.

Now, if you go to Data Acquisition in Edge Impulse Studio, you will see all the Training Data and Test Data. In this case we have more than 250 items of Training Data and more than 90 items of Test Data; we selected the automatic split (80/20).

See all your training data

Create the project’s impulse

While uploading the pictures from your phone, you may have seen an error saying that no impulse was detected on the project. Let’s create one now.

On the main menu, go to Create impulse. For this project we are using Image data at a resolution of 96×96 pixels.

Create the impulse for the model

Click Add a processing block and add Image.

Add a processing block

Click Add a learning block and add Transfer Learning (Images), which is designed for image datasets.

Add an Edge Impulse learning block

Transfer learning is used to build an image classifier quickly. Building a good computer vision system from scratch is hard, usually because it takes a large number of images and lots of GPU time to train a model. Transfer learning starts from a model that has already been well trained, and retrains only the upper layers of the neural network, producing a model in a fraction of the time and working on much smaller datasets.
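To see what this means in code, here is a minimal transfer-learning sketch in Keras. This is not Edge Impulse’s actual training pipeline, just an illustration of the technique: a base network pre-trained on a large dataset is frozen, and only a small new classification head is trained.

```python
# A minimal transfer-learning sketch (illustrative; Edge Impulse's own
# training pipeline differs in its details).
import tensorflow as tf

# Start from a network already trained on ImageNet, minus its top layers.
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights="imagenet")
base.trainable = False  # keep the well-trained lower layers frozen

# Only this small head is trained, which is fast and works on small datasets.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.1),
    tf.keras.layers.Dense(3, activation="softmax"),  # pair, unpair, unknown
])
```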

Once the learning block has been created, it should detect the output features: pair, unpair, and unknown. The final block’s Output features should say 3 (pair, unpair, and unknown). For the unknown label, we took pictures of random objects as well. Now it’s time to click Save Impulse.

Now there are more options available

There are now more menu entries below Create impulse on the main menu. Click Image, then click Save Parameters with the RGB color depth selected. This takes you to Generate features, which creates a 3D visualization of the captured dataset.

See 3D visualization of your model

In the 3D visualization generated from the Training Data captured with your mobile phone, you can see how well the different classes separate from one another.
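Conceptually, the Image block turns each picture into a plain feature vector before anything is learned from it. Here is a rough sketch of the idea (illustrative only; Edge Impulse’s DSP block has its own implementation and feature packing, and the filename is hypothetical):

```python
# Illustrative sketch of what the Image processing block conceptually does:
# resize each picture to 96x96 RGB and expose the pixel values as the
# feature vector handed to the learning block.
import numpy as np
from PIL import Image

def image_to_features(path: str) -> np.ndarray:
    img = Image.open(path).convert("RGB").resize((96, 96))
    # Normalize pixels to [0, 1] and flatten into a single feature vector.
    return (np.asarray(img, dtype=np.float32) / 255.0).reshape(-1)

features = image_to_features("paired_socks_01.jpg")  # hypothetical filename
print(features.shape)  # (27648,) == 96 * 96 * 3
```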

The data is processed; now we have to train a neural network to recognize the patterns in it. Neural networks are algorithms designed to recognize patterns. In this case, the network will be trained with image data as input, and it will try to map each image into one of the categories: paired socks, unpaired socks, or unknown (for which we only took 3 pictures).

To train the neural network we’re going to use these parameters:

  • Number of training cycles: 100
  • Learning rate: 0.0075
  • Data augmentation: enabled
  • Minimum confidence rating: 0.8

Click Start training and the neural network will process all the images and train until it produces the machine learning model. Once it finishes, you’ll see accuracy numbers, a confusion matrix, and predicted on-device performance at the bottom. You have now trained your model. (For intuition, a rough Keras equivalent of these settings is sketched below.)

Training your model
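Here is roughly what those settings correspond to, continuing the Keras sketch from the transfer learning section above. Again, this is illustrative, with hypothetical train_ds and val_ds datasets; Edge Impulse handles all of this for you in the cloud.

```python
# Rough Keras equivalent of the training settings above (illustrative;
# continues the `model` from the transfer learning sketch, with
# hypothetical train_ds/val_ds tf.data datasets of 96x96 RGB images).
import tensorflow as tf

augment = tf.keras.Sequential([          # data augmentation: enabled
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomZoom(0.1),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.0075),  # learning rate
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
model.fit(train_ds.map(lambda x, y: (augment(x, training=True), y)),
          validation_data=val_ds,
          epochs=100)                    # number of training cycles
```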

Test the model generated

Since a test set of pictures was set aside while you were capturing images with your mobile phone, let’s test the model with those pictures. Click Model testing on the main menu, select all the pictures, and click Classify selected.

Time to test your model

With our model, we get more than 89% accuracy. Great!

Now it’s time to deploy the model on a Raspberry Pi and apply it to the real world.

Deploy the Edge Impulse ML model

Click Deployment on the main menu and select WebAssembly, then scroll down and click Analyze optimizations in the Available optimizations for Transfer Learning table.

Set up for application deployment

Click Build to build the Quantized (int8) model. This builds and downloads a WebAssembly (WASM) model. However, you won’t need this file: once the project is running on balenaCloud, it downloads the model automatically using your Edge Impulse API KEY and PROJECT ID, as you’ll see below.
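For reference, here is a sketch of what that automatic download amounts to: fetching the built WASM deployment from the Edge Impulse API with your project ID and API key. The endpoint and parameters below are our reading of the Edge Impulse API; the project’s edgeimpulse-inference/app/downloadWasm.sh script is the authoritative version.

```python
# Sketch of the container's download step: fetch the built WASM deployment
# from the Edge Impulse API using the project ID and API key. (Endpoint and
# parameters as we understand the Edge Impulse API; the project's
# downloadWasm.sh script is authoritative.)
import os
import requests

project_id = os.environ["EI_PROJECT_ID"]
api_key = os.environ["EI_API_KEY"]

url = f"https://studio.edgeimpulse.com/v1/api/{project_id}/deployment/download"
resp = requests.get(url, params={"type": "wasm"},
                    headers={"x-api-key": api_key})
resp.raise_for_status()

with open("model.zip", "wb") as f:  # the build is delivered as a zip archive
    f.write(resp.content)
```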

Create the balena application

Go to the Edge Impulse balenaCam project and click the Deploy with balena button to automatically deploy the application. If you use this one-click approach, you can skip the manual step of adding device environment values later, because they’ll be pre-configured for you.

Select your board as the device type (Raspberry Pi 4 in this case) and click the ‘Create and deploy’ button.

Alternatively, if you want to learn the ins and outs of balena, you can download the project repo and push it to your balenaCloud account using the balena CLI.

Once the application is deployed on your balenaCloud account, go to Edge Impulse and copy your PROJECT ID and the API KEY so we can set them as Application Service variables in balenaCloud.

For the PROJECT ID, go to the Dashboard on Edge Impulse; you’ll find it at the bottom right.

Connect your ML project to your device

For the API KEY, select Keys from the top menu (next to Project Info) and generate a new API key for balenaCloud.

Add API key

Copy it and go to balenaCloud to create the Service variables EI_API_KEY and EI_PROJECT_ID for the edgeimpulse-inference container.

Add the API key to your balenaCloud Service Variables

Once your application has been created, you can add a device to it by clicking the Add device button. You can also set your WiFi SSID and password here if you’re going to use WiFi.

Add a device and give it wi-fi credentials if applicable

This process creates a customized balenaOS image configured for your application and device type, and includes your network settings if you specified them. Once the balenaOS image has been downloaded, it’s time to flash your SD card (if you’re using a Raspberry Pi).

You can use balenaEtcher for this. If the downloaded image file has a .zip extension, there’s no need to uncompress it before using balenaEtcher.

Use Etcher to flash the OS onto your SD card

Once the flashing process has completed, insert your SD card into the Raspberry Pi and connect the power supply.

Insert SD card into the Raspberry Pi

When the device boots for the first time, it connects to your network automatically and then to the balenaCloud dashboard. After a few moments, you’ll see the newly provisioned device listed as online.

Once the device appears online in the dashboard, it will automatically start downloading the Edge Impulse balenaCam application. After a few minutes, your device information screen in the dashboard should look something like this, showing the device with the two container services running, ready to classify images through the Pi Camera attached to your Raspberry Pi 4.

Toggle Public Device URL to enable remote access to the camera.

Click into your newly-added device

Test your image classification application

Open a browser and enter the Public Device URL or the device’s local IP address.

The Pi camera stream should be displayed on the website. If you experience any problems, check the troubleshooting section below.

If the camera is streaming properly, try to move different objects in front of the camera and see how well the classifier works! Predictions are displayed for all labels with values between 0 and 1, with 1 being a perfect prediction.

A likely pair

According to Edge Impulse, there’s a 99% chance that these socks match.

These don't match

…and according to Edge Impulse, there’s a 99% chance that these socks don’t match.

Not a sock

Edge Impulse can also determine what is likely not a sock, aka “unknown.”
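Here is a small sketch of how those scores can be interpreted, applying the minimum confidence rating (0.8) we set during training. The result dictionary is illustrative, mirroring the matching-pair example above, not the app’s exact output format.

```python
# Illustrative: pick the top-scoring label and accept it only if it clears
# the minimum confidence rating set during training. The `result` dict
# mirrors the matching-pair example above, not the app's exact output.
MIN_CONFIDENCE = 0.8

result = {"pair": 0.99, "unpair": 0.01, "unknown": 0.00}

label, score = max(result.items(), key=lambda kv: kv[1])
if score >= MIN_CONFIDENCE:
    print(f"Detected '{label}' with confidence {score:.2f}")
else:
    print("No confident prediction for this frame")
```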

Enjoy training machine learning models with Edge Impulse and deploying them on your fleet of connected devices with balena.

Troubleshooting

  • This project uses WebRTC (a real-time communication protocol). In some cases a direct WebRTC connection fails.

  • The current version falls back to MJPEG streaming when the WebRTC connection fails.

  • Chrome hides the local IP address from WebRTC, which can make the page appear with no camera view. To resolve this, try the following:

    • Navigate to chrome://flags/#enable-webrtc-hide-local-ips-with-mdns and set it to Disabled

    • Relaunch Chrome after altering the setting

  • Firefox may also hide the local IP address from WebRTC; confirm the following in about:config:

    • media.peerconnection.enabled: true

    • media.peerconnection.ice.obfuscate_host_addresses: false

If you wish to test the app in balena local mode, you’ll need to clone the repository, add your Edge Impulse Project ID and API Key in edgeimpulse-inference/app/downloadWasm.sh, and uncomment lines 5 and 6. This will enable your project to download the Edge Impulse ML models locally on your computer.

If you run into more issues, check the balenaCam project or the advanced options available in this guide.


Until next time

We’d love to hear from you and see what you’re classifying with balena and Edge Impulse. We’re always interested in how the community puts these projects to work.

Get in touch with us on our Forums, Twitter, and Instagram to show off your work or to ask questions. We’re more than happy to help.

Acknowledgements

This project is made possible by the great work of Aurelien Lequertier from Edge Impulse and the balenaCam project developers.

