Graphcore, a UK-based startup that designs processors specifically for artificial intelligence (AI) applications, has just raised $150 million, bringing the company's valuation to just shy of $2 billion.
This mega funding round and valuation acknowledge an important reality: the level and scale of compute required for AI is not comparable to traditional computing. In fact, according to research from OpenAI, the computing power needed to train AI models is now growing roughly seven times faster than the historical, Moore's Law-era pace.
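As a back-of-the-envelope check of that "seven times faster" figure, the sketch below assumes the numbers from OpenAI's "AI and Compute" analysis: training compute doubled roughly every two years in the Moore's Law era, versus roughly every 3.4 months since 2012. (The specific constants here are taken from that analysis, not from Graphcore.)

```python
# Compare the two doubling rates reported in OpenAI's "AI and Compute":
# ~24 months per doubling before 2012 vs ~3.4 months per doubling since.
moore_doubling_months = 24
ai_doubling_months = 3.4

# Ratio of the rates: how many times faster compute demand now grows.
speedup = moore_doubling_months / ai_doubling_months
print(f"Doubling-rate ratio: {speedup:.1f}x")  # ≈ 7.1x

# Growth over a single year at each rate.
yearly_growth_old = 2 ** (12 / moore_doubling_months)
yearly_growth_new = 2 ** (12 / ai_doubling_months)
print(f"Per-year growth: {yearly_growth_old:.1f}x vs {yearly_growth_new:.1f}x")
```

In other words, a two-year doubling compounds to about 1.4x more compute per year, while a 3.4-month doubling compounds to more than 11x per year, which is why general-purpose hardware roadmaps can't keep up.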
Today, with the proliferation of devices, computing paradigms, and software (edge devices, machine learning software, deep learning software, etc.), users expect this computation to be handled in a variety of ways that reduce latency and increase efficiency. According to G2's VP of technology research, Tom Pringle, “The sheer volume of use cases simply makes generalized processing units inefficient.”
Enter Graphcore. With $300 million in cash, the startup is looking to bring its AI-specific chips to a broader global market and continue its research and development efforts. “Deep learning has only really existed since 2012,” Nigel Toon, founder and CEO, recently told TechCrunch. “When we started Graphcore, what we heard from innovators was that hardware was holding them back.”
We’ve seen two key approaches as to how leading companies are bolstering their core AI software offering with the appropriate hardware:
- Building their own AI-specific chips (e.g., Google’s Cloud TPU)
- Partnering with chip manufacturers (e.g., Microsoft, which has partnered with Graphcore)
Microsoft's NLP announcement last week, along with other large-scale AI projects, shows that the compute requirements for training very large models are only going up.
It’s not only the hardware that is key. It’s also about the software that ensures the hardware is running efficiently. Graphcore, for example, has developed its Poplar software specifically for the kind of simultaneous, intensive calculations demanded of AI applications.
With software like this driving ever more powerful hardware, data scientists and other data experts are able to work smarter, faster, and more efficiently. This will allow for the training of even larger models for building algorithms (e.g., machine learning software, natural language processing (NLP) software, etc.) and will, hopefully, lead to even more accurate AI in general. In turn, data scientists will say, “I’ve got 99 problems, but powerful AI ain’t one.”