ARM and Nvidia have announced a partnership that will bring deep learning to billions of mobile, consumer electronics, and Internet of Things (IoT) devices.
According to details provided by both companies, the partnership will integrate the open-source Nvidia Deep Learning Accelerator (NVDLA) architecture into ARM’s Project Trillium platform for machine learning. This will allow IoT chip companies to integrate AI into their designs and deliver intelligent, affordable products to billions of consumers worldwide.
“Accelerating AI at the edge is critical in enabling Arm’s vision of connecting a trillion IoT devices,” said Rene Haas, EVP and president of the IP Group at Arm. “Today we are one step closer to that vision by incorporating NVDLA into the Arm Project Trillium platform, as our entire ecosystem will immediately benefit from the expertise and capabilities our two companies bring in AI and IoT.”
The Nvidia Deep Learning Accelerator (NVDLA) is based on the company’s Xavier SoC. It is a free, open architecture designed to promote a standard way of designing deep learning inference accelerators. According to Nvidia, it brings several benefits that should speed the adoption of deep learning inference, and it is supported by the company’s suite of developer tools, including an upcoming version of TensorRT, Nvidia’s programmable inference optimizer.
Nvidia was keen to note that integrating NVDLA with Project Trillium will give deep learning developers the highest level of performance, along with ARM’s flexibility and scalability across a wide range of IoT devices.
ARM announced Project Trillium, a series of scalable processors designed for machine learning and neural networks, back in February.