Powerful visual processing

with a tiny energy budget

The new DYNAP™CNN development kit

Capabilities

Prototype vision-processing applications for edge deployment. Powered by SynSense DYNAP™CNN cores, the kit brings the flexibility of convolutional vision processing to milliwatt energy budgets.

Real-time presence detection, gesture recognition, and object classification, all with milliwatt average energy use.

Develop in Python, test in simulation, then deploy and test in hardware with live event-based camera input.

Technical specifications

Supports event-based vision applications with direct camera input, or with input streamed from a desktop.

Develop using our open-source Python library SINABS, deploy on the DYNAP™CNN development kit with one line of code.
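As a rough illustration of this workflow, the sketch below defines a small convolutional network in standard PyTorch, the format SINABS converts from. The network shape, sizes, and the commented conversion/deployment calls are illustrative assumptions, not taken from the SynSense documentation:

```python
# Hedged sketch: a small PyTorch CNN of the kind SINABS can convert.
# All layer sizes here are illustrative, not a reference design.
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(2, 8, kernel_size=3, stride=2, bias=False),   # 2 event polarities in
    nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3, stride=2, bias=False),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 31 * 31, 10, bias=False),                # 10 output classes
)

# Sanity-check the network on a dummy 128x128 event frame.
out = cnn(torch.zeros(1, 2, 128, 128))

# With SINABS installed, conversion to a spiking model is then along
# the lines of (assumed API, check the SINABS docs):
#   from sinabs.from_torch import from_model
#   spiking_model = from_model(cnn, input_shape=(2, 128, 128)).spiking_model
# and deployment to the development kit is the advertised single call.
```

The convolution and linear layers map onto the DYNAP™CNN cores after conversion; the ReLU activations are what the converter replaces with spiking neurons.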

Deploy convolutional networks of up to nine layers in real time.

USB-C interface, bus powered.

Research sponsorship grants

SynSense is offering sponsorships to research groups that would like access to a DYNAP™CNN development kit.

Submit a research application for a development kit. Applications will be reviewed on a rolling basis until December 2020.

Development boards will be available in early 2021.

For more information, see SynSense Research Sponsorships.