As engineers, we’re always learning and developing new skills and knowledge. That’s why we at NXP
have been working diligently to bring you new eIQ® machine learning (ML) software enablement
solutions and new ML training materials for your next generation of smart, aware embedded
products. We encourage you to explore the robust set of training modules available in the new
Machine Learning Training Academy.
As system engineers, Manish and I often get asked how developers can enable artificial
intelligence (AI) and machine learning on their existing or new products. ML is widely documented
online, but there is a gap in knowledge when it comes to the details needed for deploying an
embedded solution. Fortunately, with NXP's AI/ML hardware and software, we have the solutions
available to turn ideas into reality. To accelerate product design and utilize the latest in AI/ML
advancements, NXP’s new
Machine Learning Training Academy
has been designed to provide a thorough introduction to incorporating AI/ML in embedded systems.
Machine learning is moving to the edge thanks to new enablement like TensorFlow™ Lite, ONNX and
Glow, but many embedded developers may not yet have much experience leveraging the recent
breakthroughs in this rapidly evolving field.
One area that many embedded developers struggle with when first jumping into the world of AI/ML is
creating a new model for a specific use case, because the process can quickly become complex and confusing.
NXP’s new
eIQ Toolkit
is a machine learning workflow tool designed to make it easier to create your own models through a
simple GUI for vision-based AI/ML tasks.
As systems engineers dedicated to supporting our customers' needs, we've seen firsthand how the
eIQ Toolkit streamlines every step of the process, from data management to ML model creation,
training, validation and deployment, with a simple click-through GUI for building ML models that
can be deployed directly onto the hardware.
For those with ML experience, NXP’s ML Training Academy can take you further in your design with
video modules that teach you how to leverage NXP MCUs and applications processors to bring your
neural network models to the edge using a variety of supported inference engines.
NXP’s ML Training Academy provides more than 20 on-demand video modules that cover a wide range of
embedded ML enablement offered on NXP devices. For i.MX RT MCUs, learn how to use TensorFlow Lite
for Microcontrollers, the Glow neural network compiler and NXP's new DeepViewRT™ inference engine,
and then turn your ideas into reality: eIQ ML software is available for many devices in the
i.MX RT family of crossover MCUs, and with the hands-on labs and presentations you can quickly
learn how to take an existing neural network model and run it on NXP embedded devices.
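To give a flavor of what those labs cover, here is a minimal sketch of running an already-converted model with TensorFlow Lite for Microcontrollers in C++. It is illustrative only: the model array g_model, the arena size and the operator list are placeholders you would replace with the values for your own network, and the exact interpreter API can differ slightly between eIQ and TFLite Micro releases.

```cpp
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Placeholder: the flatbuffer produced by converting your trained model,
// typically embedded in flash as a C array.
extern const unsigned char g_model[];

// Scratch memory for the interpreter's tensors; the size depends on your model.
constexpr int kTensorArenaSize = 64 * 1024;
static uint8_t tensor_arena[kTensorArenaSize];

int RunInference() {
  // Map the flatbuffer into a model the interpreter can use.
  const tflite::Model* model = tflite::GetModel(g_model);

  // Register only the operators your network needs to keep code size small.
  static tflite::MicroMutableOpResolver<4> resolver;
  resolver.AddConv2D();
  resolver.AddDepthwiseConv2D();
  resolver.AddFullyConnected();
  resolver.AddSoftmax();

  // Build the interpreter and allocate tensors from the arena.
  static tflite::MicroInterpreter interpreter(model, resolver, tensor_arena,
                                              kTensorArenaSize);
  if (interpreter.AllocateTensors() != kTfLiteOk) {
    return -1;
  }

  // Fill the input tensor (for example, with camera or sensor data) ...
  TfLiteTensor* input = interpreter.input(0);
  (void)input;

  // ... then run the network and read back the results.
  if (interpreter.Invoke() != kTfLiteOk) {
    return -1;
  }
  TfLiteTensor* output = interpreter.output(0);
  (void)output;
  return 0;
}
```

The hands-on labs walk through the full workflow behind a snippet like this, including converting the trained model and integrating it into an eIQ example project.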
Similarly, for the i.MX applications processors, developers can learn how to use various open
source inference engines like TensorFlow Lite and ONNX Runtime, or NXP’s proprietary inference
engine, DeepViewRT. We have designed the training material to help beginners understand typical ML
workflows with easy-to-follow examples and demos, like how to bring your own data (BYOD) or bring
your own model (BYOM).
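For the BYOM flow on an applications processor running Linux, a minimal sketch with the ONNX Runtime C++ API might look like the following. The model file name, tensor names and input shape are hypothetical placeholders for illustration; the training modules show how to inspect your own model and deploy it on i.MX devices.

```cpp
#include <onnxruntime_cxx_api.h>
#include <vector>

int main() {
  // Create the runtime environment and load a model from disk.
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "eiq-demo");
  Ort::SessionOptions options;
  Ort::Session session(env, "model.onnx", options);  // hypothetical model file

  // Hypothetical input: one 224x224 RGB image in NHWC layout.
  std::vector<int64_t> shape = {1, 224, 224, 3};
  std::vector<float> input_data(1 * 224 * 224 * 3, 0.0f);

  Ort::MemoryInfo mem_info =
      Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
  Ort::Value input_tensor = Ort::Value::CreateTensor<float>(
      mem_info, input_data.data(), input_data.size(),
      shape.data(), shape.size());

  // Tensor names are placeholders; query your model for the real ones.
  const char* input_names[] = {"input"};
  const char* output_names[] = {"output"};

  // Run one inference on the CPU and read back the predictions.
  auto outputs = session.Run(Ort::RunOptions{nullptr}, input_names,
                             &input_tensor, 1, output_names, 1);
  float* scores = outputs[0].GetTensorMutableData<float>();
  (void)scores;
  return 0;
}
```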
As product support engineers, we’ve spent significant time developing the content for this ML
Training Academy, and we hope that it helps your development journey, too. Check out all the
modules available, each focused on a specific subset of ML enablement, and let us know your
thoughts, questions and requests for additional training topics in the NXP Community.
Happy (machine) learning!
To get started, please visit
nxp.com/mltraining.
To learn more about the eIQ ML Software Development platform, please visit
nxp.com/eiq.
Co-contributor Manish Bajaj – Senior Systems Engineer, NXP
Manish Bajaj received his MS in Information Technology with specialization in Intelligent Systems
from the Indian Institute of Information Technology, Allahabad. He’s been the systems engineering
lead for general-purpose MPU applications processors and has extensive experience with
Linux®, video, graphics and display devices. Currently, he leads the applications
engineering team for AI/ML and ISP.