Unleash the full potential of AI in intelligent cars
AI toolkit for neural network performance optimization and high BPU utilization

Flexible
Supports mainstream frameworks including TensorFlow, PyTorch, and MXNet (a minimal export sketch follows this list).
Comprehensive
Supports state-of-the-art operators used in modern neural networks.
High performance
The automatic optimizing compiler ensures the most efficient utilization of the BPU engine.
Easy to Use
Abundant documentation and design examples to accelerate development.
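As a non-authoritative illustration of a typical first porting step, the sketch below exports a PyTorch model with a fixed input shape. The use of ONNX as the interchange format, the model choice, and the file name are illustrative assumptions, not documented OpenExplorer behavior.

```python
# Minimal sketch: export a framework model for porting.
# Assumption: the toolchain accepts a model exported from one of the
# supported frameworks; ONNX is used here only as an example format.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None)  # stand-in for your own network
model.eval()

dummy = torch.randn(1, 3, 224, 224)  # embedded targets usually prefer static shapes
torch.onnx.export(
    model, dummy, "resnet18.onnx",
    opset_version=11,
    input_names=["input"],
    output_names=["logits"],
)
```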
Key metrics: typical time to port, typical BPU MAC utilization, quantization, typical performance improvements
Powerful optimizing compiler and example models at your fingertips
High parallelism
Workloads are broken down at the kernel or feature-map level so they can be processed efficiently by the BPU.
Pipelining
BPU pipelining is automatically optimized by the compiler.
Layer fusion
Performs a global analysis of the model's computation, followed by vertical and horizontal layer fusion (see the sketch after this list).
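As a generic illustration of vertical layer fusion (not the compiler's actual implementation; all names here are illustrative), the sketch below folds a BatchNorm layer into the preceding convolution so the pair executes as a single layer. Horizontal fusion analogously merges parallel layers that consume the same input.

```python
import torch
import torch.nn as nn

def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold BatchNorm statistics into the preceding convolution (vertical fusion)."""
    fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                      conv.stride, conv.padding, conv.dilation, conv.groups, bias=True)
    with torch.no_grad():
        # Per-output-channel scale: gamma / sqrt(running_var + eps)
        scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
        fused.weight.copy_(conv.weight * scale.reshape(-1, 1, 1, 1))
        bias = conv.bias if conv.bias is not None else torch.zeros(conv.out_channels)
        fused.bias.copy_((bias - bn.running_mean) * scale + bn.bias)
    return fused

# Quick check: the fused convolution matches conv -> bn in inference mode.
conv, bn = nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8)
bn.eval()
x = torch.randn(1, 3, 32, 32)
assert torch.allclose(bn(conv(x)), fuse_conv_bn(conv, bn)(x), atol=1e-4)
```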
Model Zoo
Pre-trained AI algorithms to accelerate your development
Minimum effort. Maximum model performance

Solutions Served
OpenExplorer lets you quickly and easily port and optimize your software for the Journey processor's deep learning engine.