AWS isn’t exactly known as an open-source powerhouse, but maybe change is in the air. Amazon’s cloud computing unit today announced the launch of Neo-AI, a new open-source project under the Apache Software License. The new tool takes some of the technologies that the company developed and used for its SageMaker Neo machine learning service and brings them (back) to the open source ecosystem.
The main goal here is to make it easier to optimize models for deployments on multiple platforms — and in the AWS context, that’s mostly machines that will run these models at the edge.
“Ordinarily, optimizing a machine learning model for multiple hardware platforms is difficult because developers need to tune models manually for each platform’s hardware and software configuration,” AWS’s Sukwon Kim and Vin Sharma write in today’s announcement. “This is especially challenging for edge devices, which tend to be constrained in compute power and storage.”
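To make the kind of optimization at stake concrete: one common graph-level transformation that compilers in this space apply is operator fusion, where runs of adjacent element-wise operations are merged into a single kernel so intermediate tensors never round-trip through memory. The sketch below is purely illustrative (the op names and data structures are made up, not Neo-AI's actual internals):

```python
# Illustrative sketch of operator fusion, one optimization that ML
# compilers apply. A model graph is a flat list of op names here;
# adjacent element-wise ops get collapsed into a single fused node.

ELEMENTWISE = {"add", "mul", "relu"}  # cheap ops that are safe to fuse

def fuse_elementwise(graph):
    """Collapse runs of adjacent element-wise ops into fused nodes."""
    fused, run = [], []
    for op in graph:
        if op in ELEMENTWISE:
            run.append(op)  # extend the current fusable run
        else:
            if run:  # flush the pending run as one fused kernel
                fused.append("fused(" + "+".join(run) + ")")
                run = []
            fused.append(op)
    if run:  # flush any trailing run
        fused.append("fused(" + "+".join(run) + ")")
    return fused

print(fuse_elementwise(["conv2d", "add", "relu", "conv2d", "mul"]))
# -> ['conv2d', 'fused(add+relu)', 'conv2d', 'fused(mul)']
```

On constrained edge hardware, avoiding those intermediate memory round-trips is exactly where much of the speedup comes from.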
Neo-AI can take TensorFlow, MXNet, PyTorch, ONNX, and XGBoost models and optimize them. AWS says Neo-AI can often double these models' speed without any loss of accuracy. As for hardware, the tool supports Intel, Nvidia, and ARM chips, with support for Xilinx, Cadence, and Qualcomm coming soon. All of these companies, except for Nvidia, will also contribute to the project.
“To derive value from AI, we must ensure that deep learning models can be deployed just as easily in the data center and in the cloud as on devices at the edge,” said Naveen Rao, General Manager of the Artificial Intelligence Products Group at Intel. “Intel is pleased to expand the initiative that it started with nGraph by contributing those efforts to Neo-AI. Using Neo, device makers and system vendors can get better performance for models developed in almost any framework on platforms based on all Intel compute platforms.”
In addition to optimizing the models, the tool converts them into a common format to sidestep compatibility issues; a compact runtime on the device where the model is deployed then handles execution.
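A rough mental model of that compile-once, run-anywhere flow, sketched in pure Python (the artifact format and runtime API here are invented for illustration; the real project builds on compiled TVM artifacts, not JSON):

```python
import json

# Hypothetical illustration: the compiler emits the model in one
# device-neutral format (plain JSON here); a tiny on-device runtime
# only needs to interpret that format -- it never has to link against
# TensorFlow, PyTorch, or whichever framework trained the model.

def compile_model(ops):
    """'Compile' a model into a portable artifact (illustrative only)."""
    return json.dumps({"format_version": 1, "ops": ops})

def run(artifact, x):
    """Minimal on-device runtime: execute a portable artifact on input x."""
    kernels = {"double": lambda v: v * 2, "inc": lambda v: v + 1}
    result = x
    for op in json.loads(artifact)["ops"]:
        result = kernels[op](result)
    return result

artifact = compile_model(["double", "inc"])  # produced once, on any host
print(run(artifact, 3))                      # runs on the edge device -> 7
```

Keeping the runtime small and framework-agnostic is what makes this workable on storage- and compute-constrained edge devices.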
AWS notes that some of the work on the Neo-AI compiler started at the University of Washington (specifically the TVM and Treelite projects). “Today’s release of AWS code back to open source through the Neo-AI project allows any developer to innovate on the production-grade Neo compiler and runtime.” AWS has something of a reputation for taking open-source projects and using them in its cloud services, so it’s good to see the company starting to contribute back a bit more now.
In the context of Amazon’s open source efforts, it’s also worth noting that the company’s Firecracker hypervisor now supports the OpenStack Foundation’s Kata Containers project. Firecracker itself is open source, too, and I wouldn’t be surprised if Firecracker ended up as the first open source project that AWS brings under the umbrella of the OpenStack Foundation.
Written by Frederic Lardinois
This news first appeared on https://techcrunch.com/2019/01/24/aws-launches-neo-ai-an-open-source-tool-for-tuning-ml-models/ under the title “AWS launches Neo-AI, an open-source tool for tuning ML models”.