Super Chips, the Beating Heart of AI
We are officially in the era of artificial intelligence (AI), and at the beating heart of AI capability is none other than super chips.
AI is an older term that is gradually overtaking 'big data analysis' as the headline outcome of deep learning and machine learning.
While cloud computing made it possible to acquire enormous amounts of data cheaply, next-generation AI chips will supercharge machine learning and deep learning on this data via neural networks running on graphics processing units (GPUs).
Enormous amounts of computing power are required to crunch large databases (of validated examples) to train systems in a particular domain such as image analysis, speech recognition, translation and more. These trained systems are the brains behind AI.
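To make the training process above concrete, here is a deliberately tiny sketch of what "crunching validated examples" means in practice. It trains a small neural network by gradient descent on the classic XOR task; everything in it is illustrative (the network size, learning rate and data are arbitrary choices, not any vendor's stack). Real AI workloads do the same arithmetic at vastly larger scale, which is why GPUs and TPUs that accelerate matrix multiplication matter so much.

```python
import numpy as np

# Toy "validated examples": the XOR function, a task a linear
# model cannot learn but a small neural network can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 1.0, (2, 16))  # input -> hidden weights
b1 = np.zeros(16)
W2 = rng.normal(0.0, 1.0, (16, 1))  # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10_000):
    # Forward pass: mostly matrix multiplications -- exactly the
    # operation that GPU/TPU hardware accelerates at scale.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the binary cross-entropy loss.
    dp = (p - y) / len(X)
    dW2 = h.T @ dp
    db2 = dp.sum(axis=0)
    dh = (dp @ W2.T) * (1.0 - h**2)
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)

    # Gradient-descent update of the "trained system".
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

preds = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print(preds.ravel())  # the learned XOR predictions
```

The same loop, scaled from four examples to millions and from a 16-unit hidden layer to billions of parameters, is what the data-center chips discussed below are built to run.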
Today, the pursuit of AI mastery is invigorating the semiconductor industry, and we observe a chip 'arms race' happening all over the world.
Most recently capturing the imagination is Huawei's Kunpeng 920 chip, unveiled just a few days ago. According to Huawei, its ARM-based Kunpeng 920 "…significantly improves processor performance – outstripping the industry benchmark by 25% and lowering power consumption by about 30 per cent compared to competitors."
Although the new Huawei super chip is based on the architecture of British chip design firm ARM, which is owned by Japanese conglomerate SoftBank Group, it is an important part of China's strategy to become a global leader in AI, automation and next-generation mobile networks. The Kunpeng 920 will power immense corporate cloud-ready data centers via Huawei's new flagship TaiShan servers.
In 2018, as computing performance clearly shifted away from the CPU, Nvidia's business grew astronomically on the back of demand for its GPU chips to handle high-performance computing workloads. IBM, together with Nvidia, is developing a dedicated AI processor for the high-speed data throughput that AI and ML require.
Another huge super chip announcement was Google's Tensor Processing Units (TPUs), custom-developed application-specific integrated circuits (ASICs) used to accelerate machine learning workloads. They are available for AI applications on the Google Cloud Platform (for data centers). *Note: Data centers are crucial for AI via deep learning because they handle much of the heavy lifting of training the models behind machine learning processes such as image recognition.
Just last November, AWS introduced its machine learning inference chip, 'Inferentia', to support deep learning frameworks with high throughput at low cost; reportedly, this is mainly for its increasingly popular Alexa home assistant.
Intel's purchase of Movidius is aimed specifically at developing a chip for image-processing AI, while Apple's 'Neural Engine' AI processor will power Siri and FaceID.
Apparently, Alibaba, which holds about 4% of the cloud infrastructure services market, is now also making its own custom AI chips to compete better against AWS, Google and Microsoft.
Even non-traditional tech companies like Tesla are throwing their wagers onto the table. In the middle of last year, Elon Musk shared that Tesla would be building its own silicon for Hardware 3, the onboard computer in Tesla cars that does all the data crunching behind their self-driving capabilities.
For Tesla, this essentially means moving away from Nvidia chips that handled about 200 frames per second to its own silicon handling ten times more, 2,000 frames per second, "with full redundancy and failover."
AI-Chip Startups
The macro view of the growth in AI-driven super chips needs to include what has been an exciting line of AI-hardware startups. According to The New York Times, 45 startup companies dedicated to AI 'silicon' chips were launched last year, and the number is growing.
UK semiconductor designer Graphcore recently got into the limelight for raising USD200 million from investors including BMW AG and Microsoft. Graphcore's objective? To design a better class of AI chips than the ones sold by Intel and Nvidia. The startup, valued at USD1.7 billion, already counts Dell Technologies and Bosch Venture Capital among its existing investors.
Finally, special notice must be given to Cerebras Systems, touted as a shining example of what seems to be a new breed of AI super chips: specialist processors, rather than generalist parts like Intel's CPUs and Nvidia's GPUs.
In 2019, as AI extends its far-reaching arm into the crevices of daily business and life, super chips will power every aspect of operations in this AI era.