This is an extremely basic way to use the Akida on-chip learning functionality. The demo lets you learn new classes of objects to recognise in the camera feed. This application is built solely to demonstrate how easy it is to use Akida's unique one-shot/few-shot learning abilities. Instead of text labels, the system uses RGB LEDs to represent the predicted class.
In native learning mode, event domain neurons learn quickly through a biological process known as Spike-Timing-Dependent Plasticity (STDP), in which synapses that match an activation pattern are reinforced. BrainChip is utilizing a naturally homeostatic form of STDP learning in which neurons don't saturate or switch off completely.
STDP is possible because of the event-based processing method used by the Akida processor, and can be applied to incremental learning and one-shot or multi-shot learning.
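As a toy illustration of the STDP idea described above (this is not BrainChip's implementation, just a sketch of the principle): synapses that coincide with active inputs are reinforced, and the weight vector is renormalised each step so the neuron stays homeostatic instead of saturating. All function names here are made up for illustration.

```python
# Toy STDP-style reinforcement with homeostatic normalisation.
# NOT the Akida hardware algorithm -- just a sketch of the idea that
# synapses matching an activation pattern are strengthened, while the
# total synaptic weight is held constant so the neuron never saturates.

def stdp_step(weights, pattern, rate=0.5):
    """Reinforce synapses whose input was active, then renormalise."""
    updated = [w + rate * x for w, x in zip(weights, pattern)]
    total = sum(updated)
    # Homeostasis: rescale so the summed weight stays at 1.0.
    return [w / total for w in updated]

def response(weights, pattern):
    """Dot product: how strongly the neuron fires for this input."""
    return sum(w * x for w, x in zip(weights, pattern))

# Start from uniform weights and repeatedly present one pattern.
weights = [0.25, 0.25, 0.25, 0.25]
learned = [1, 0, 1, 0]   # the pattern this neuron should latch onto
other = [0, 1, 0, 1]     # a different pattern for comparison

for _ in range(5):
    weights = stdp_step(weights, learned)

# After a few presentations the neuron responds far more strongly to
# the learned pattern than to a different one.
print(response(weights, learned) > response(weights, other))  # True
```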
Read more:
What Is the Akida Event Domain Neural Processor?
- Raspberry Pi Compute Module 4 with an IO Board
- PCI-e Akida Neuromorphic Processor link
- Raspberry Pi Camera Module
- WS2812 compatible RGB LEDs link
- Python 3.8 or higher
- Connect the Raspberry Pi Camera to the Raspberry Pi Compute Module 4.
- Ensure the Akida Neuromorphic Processor is correctly installed in the PCI-e slot on the IO Board and the drivers are installed. link to instructions
- Connect the WS2812 compatible RGB LEDs to the 5V, GND and GPIO 18 pins
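WS2812 driver libraries such as `rpi_ws281x` address each pixel with a packed 24-bit integer colour. The snippet below re-implements that packing in plain Python so you can see the format; it mirrors the `(r << 16) | (g << 8) | b` convention of the library's `Color()` helper, but is a standalone sketch rather than the project's actual LED code.

```python
# Pack an (R, G, B) triple into the 24-bit integer format used by
# WS2812 driver libraries (e.g. rpi_ws281x's Color() helper).
def pack_colour(red, green, blue):
    for channel in (red, green, blue):
        if not 0 <= channel <= 255:
            raise ValueError("channel out of range 0-255")
    return (red << 16) | (green << 8) | blue

print(hex(pack_colour(255, 0, 0)))  # 0xff0000 (full red)
```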
- Create a virtual environment with access to system packages (required for the `picamera2` module):

  ```
  python3 -m venv venv --system-site-packages
  source venv/bin/activate
  ```
- Clone the repository:

  ```
  git clone https://github.com/stdp/akida-camera.git
  cd akida-camera
  ```
- Install the required Python modules:

  ```
  pip install -r requirements.txt
  ```
To start the inference system, ensure your virtual environment is activated and follow the steps below. Python must be run with sudo for the LEDs to function:
- Get your virtualenv Python path:

  ```
  which python
  # example output: /home/neuro/projects/akida-camera/venv/bin/python
  ```
- Copy the output and run the following command, replacing `<python path>` with the output from the previous step:

  ```
  sudo <python path> neurocam.py
  # example command: sudo /home/neuro/projects/akida-camera/venv/bin/python akida_camera.py
  ```
- Press `1` to `0` on your keyboard to learn a new class; numbers 1 through 7 each map to an RGB colour, and 0 is black (off)
- Press `s` to save the newly learnt classes into your model (delete the model file to re-initialise with a blank slate)
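As a sketch of how the class-to-LED mapping could look, classes 1 through 7 each get a distinct RGB value and class 0 turns the LEDs off. The source only states that much, so the specific colours and names below are assumptions for illustration, not the actual mapping in the project's code.

```python
# Hypothetical class-index -> (R, G, B) mapping for the WS2812 LEDs.
# Classes 1-7 each get an RGB colour; class 0 is black/off. These
# particular colours are illustrative, not taken from the project.
CLASS_COLOURS = {
    0: (0, 0, 0),        # off / black
    1: (255, 0, 0),      # red
    2: (0, 255, 0),      # green
    3: (0, 0, 255),      # blue
    4: (255, 255, 0),    # yellow
    5: (0, 255, 255),    # cyan
    6: (255, 0, 255),    # magenta
    7: (255, 255, 255),  # white
}

def colour_for_class(index):
    """Return the LED colour for a predicted class, defaulting to off."""
    return CLASS_COLOURS.get(index, (0, 0, 0))

print(colour_for_class(1))  # (255, 0, 0)
print(colour_for_class(0))  # (0, 0, 0)
```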
Essentially, this is a homemade version of a demonstration that BrainChip has built. You can view it in action here:
View more one-shot/few-shot learning demonstration videos: User Community Platform