10 min

ONNX and Intel® nGraph API Deliver AI Framework Flexibility – Intel® Chip Chat Episode 611

    • Technology

Prasanth Pulavarthi, Principal Program Manager for AI Infrastructure at Microsoft, and Padma Apparao, Principal Engineer and Lead Technical Architect for AI at Intel, discuss a collaboration that enables developers to switch from one deep learning operating environment to another regardless of software stack or hardware configuration.

ONNX is an open format that unties developers from specific machine learning frameworks so they can move easily between software stacks. It also reduces ramp-up time by sparing them from learning new tools. Many hardware and software companies have joined the ONNX community over the last year and added ONNX support to their products. Microsoft has enabled ONNX in Windows and Azure and has released the ONNX Runtime, which provides a full implementation of the ONNX-ML spec.

With the nGraph API, developed by Intel, developers can optimize their deep learning software without having to learn the specific intricacies of the underlying hardware. It enables portability across Intel® Xeon® Scalable processors, Intel® FPGAs, and Intel® Nervana™ Neural Network Processors (Intel® Nervana™ NNPs). Intel is integrating the nGraph API into the ONNX Runtime to provide developers with accelerated performance on a variety of hardware.

For information about ONNX, as well as tutorials and ways to get involved in the ONNX community, visit https://onnx.ai/.

To learn more about the ONNX Runtime, visit https://azure.microsoft.com/en-us/blog/onnx-runtime-for-inferencing-machine-learning-models-now-in-preview/.

To learn more about the Intel nGraph API, visit https://ai.intel.com/ngraph-a-new-open-source-compiler-for-deep-learning-systems/.

Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No computer system can be absolutely secure. Check with your system manufacturer or retailer or learn more at https://ai.intel.com/.

Intel, the Intel logo, Intel® Xeon® Scalable processors, Intel® FPGAs, and Intel® Nervana™ Neural Network Processors (Intel® Nervana™ NNPs) are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries.

*Other names and brands may be claimed as the property of others.

© Intel Corporation

