At a gathering of industry influencers, Intel announced new products designed to accelerate AI system development and deployment from cloud to edge. Intel demonstrated its Intel Nervana Neural Network Processors (NNP) for training (NNP-T1000) and inference (NNP-I1000) — Intel’s first purpose-built ASICs for complex deep learning, offering scale and efficiency for cloud and data center customers. Intel also revealed its next-generation Intel Movidius Vision Processing Unit (VPU) for edge media, computer vision and inference applications.
“With this next phase of AI, we’re reaching a breaking point in terms of computational hardware and memory. Purpose-built hardware like Intel Nervana NNPs and Movidius VPUs are necessary to continue the incredible progress in AI. Using more advanced forms of system-level AI will help us move from the conversion of data into information toward the transformation of information into knowledge,” said Naveen Rao, Intel Corporate Vice President and General Manager, Intel Artificial Intelligence Products Group.
These products further strengthen Intel’s portfolio of AI solutions, which is expected to generate more than $3.5 billion in revenue in 2019. Now in production and being delivered to customers, the new Intel Nervana NNPs are part of a systems-level AI approach offering a full software stack developed with open components and deep learning framework integration.
“We are excited to be working with Intel to deploy faster and more efficient inference compute with the Intel Nervana Neural Network Processor for inference and to extend support for our state-of-the-art deep learning compiler, Glow, to the NNP-I,” said Misha Smelyanskiy, Director, AI System Co-Design at Facebook.
Additionally, Intel’s next-generation Intel Movidius VPU, scheduled to be available in the first half of 2020, incorporates unique architectural advances that are expected to deliver leading performance — more than 10 times the inference performance of the previous generation — with up to six times the power efficiency of competing processors.
Intel also announced its new Intel DevCloud for the Edge, which along with the Intel Distribution of OpenVINO toolkit, addresses a key pain point for developers — allowing them to try, prototype and test AI solutions on a broad range of Intel processors before they buy hardware.
Complex data, models and techniques are required to advance deep learning reasoning and context, bringing about a need to think differently about architectures.
With most of the world running some part of its AI on Intel Xeon Scalable processors, Intel continues to improve this platform with features like Intel Deep Learning Boost with Vector Neural Network Instructions (VNNI), which bring enhanced AI inference performance across data center and edge deployments.
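At the heart of the inference speedups VNNI targets is quantized arithmetic: multiplying int8 activations and weights while accumulating into a wide integer sum. As a rough illustration only — a toy sketch of the arithmetic pattern, not Intel's implementation and not how production software uses the instruction — the idea can be shown in plain Python:

```python
# Toy sketch of the int8 multiply-accumulate pattern that VNNI-style
# instructions accelerate in hardware. For illustration only; real
# deployments rely on optimized libraries and the hardware instruction.

def quantize(values, scale):
    """Map floats to int8 range with a simple symmetric scheme."""
    return [max(-128, min(127, round(v / scale))) for v in values]

def int8_dot(a_q, b_q):
    """Sum int8 products into a wide accumulator, as a fused
    multiply-accumulate instruction would in one step."""
    return sum(x * y for x, y in zip(a_q, b_q))

def dequantize(acc, scale_a, scale_b):
    """Convert the integer accumulator back to a float result."""
    return acc * scale_a * scale_b

# Hand-picked toy inputs and scale (assumptions for this example).
a = [0.5, -1.0, 2.0]
b = [1.5, 0.25, -0.75]
scale = 0.05

a_q, b_q = quantize(a, scale), quantize(b, scale)
approx = dequantize(int8_dot(a_q, b_q), scale, scale)
exact = sum(x * y for x, y in zip(a, b))
```

Because the heavy lifting happens in narrow integer math, the hardware can process many more operands per cycle than with 32-bit floats, which is where the inference gains come from.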
While that platform will continue to serve as a strong AI foundation for years, the most advanced deep learning training needs of Intel customers call for performance to double every 3.5 months, and breakthroughs of that kind will only happen with a portfolio of AI solutions like Intel’s. Intel is equipped to look at the full picture of compute, memory, storage, interconnect, packaging and software — maximizing efficiency and programmability while ensuring the critical ability to scale deep learning across thousands of nodes and, in turn, scale the knowledge revolution.
Mitesh Ganatra is CTO at HostNamaste.com. He shares his web hosting insights on the HostNamaste blog, where he mostly writes about the latest web hosting business news and trends, WordPress, storage technologies, Windows and Linux hosting platforms, and control panels.