Hardware And Software Limitations For The AI Boom

by · Forbes

Sometimes, in our enthusiasm about the power of large language models and neural networks to do new things, we overlook the basics of how we got to this point.

In our recent lectures and events, though, you can hear people talking about this sort of context – what really makes AI possible.

In a way, there are two main components. The first one is the obvious one – that we uncovered the innate power of LLMs to learn and to produce dynamic results. We are astounded by the ability of these technologies to pass all kinds of Turing tests, and we have watched with amazement as something like artificial general intelligence starts to emerge.

The second one is a lot less obvious to most people, but it makes sense to those who have been reporting on technology for a couple of decades.


Simply put, the idea is that advances in hardware acceleration made AI possible.

If you don’t have the compute – if you don’t have the resources, if you don’t have the efficiency of modern microchips, you’re not going to be able to do all of these things that AI is able to do. As one speaker suggested recently, these technologies came along at the right time.

Try a little science fiction experiment – imagine if we had pushed further into large language model research in the 1940s, as the world struggled through war, horror, and chaos.


It might have changed the outcome of the 20th century’s geopolitical reality, but on the other hand, it probably wouldn’t have been possible with those big mainframe computers that looked like washers and dryers.



It’s Moore’s law, in that sense, that made all of this possible – the shrinking of components and the proliferation of transistors on a circuit. In fact, when we talk about these new technologies, we’re talking about dynamic gating mechanisms that go beyond your traditional logic structures. I’ve talked about these in detail elsewhere – because we’re hearing so much about how this innovation works, and why it’s important.
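To make the Moore's law point concrete, here is a minimal back-of-the-envelope sketch. It assumes the classic formulation (transistor counts doubling roughly every two years) and uses the Intel 4004's well-known starting figure of about 2,300 transistors in 1971; the function name and parameters are illustrative, not from any source in this article.

```python
def projected_transistors(start_count: int, start_year: int,
                          year: int, doubling_period: float = 2.0) -> int:
    """Project a transistor count assuming it doubles every `doubling_period` years."""
    doublings = (year - start_year) / doubling_period
    return int(start_count * 2 ** doublings)

# Intel 4004 (1971): ~2,300 transistors. Projected 50 years forward:
print(projected_transistors(2300, 1971, 2021))  # ~77 billion, in line with modern GPUs
```

Fifty years of doubling turns a few thousand transistors into tens of billions – which is roughly where today's AI accelerators sit, and why training a large model was simply out of reach for earlier hardware.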

But the point is that we needed the hardware to unlock the natural power of the algorithms. Most people agree it couldn’t have happened otherwise – that we couldn’t have leapfrogged ahead and built these algorithms on more primitive hardware.

“As computer hardware continued to improve at an exponential rate, AI researchers were finally able to start building systems that could begin to approach human levels of intelligence,” wrote Jacob Stoner at Unite.AI last year. “This breakthrough led to the rapid expansion of machine learning, a subset of AI that led to the development of many successful applications such as self-driving cars and digital assistants.

“Moore's law is often cited as one of the key reasons why AI has seen such rapid progress in recent years. This trend will likely continue, leading to even more amazing advances in AI technology.”

Others would say, though, that we could’ve strung together thousands of mainframe machines and come up with data centers that could handle at least the beginning of AI/ML research.

There’s also a related idea: that modern mainframes can be useful in hosting LLMs and related systems.

“Mainframes have a very specific role to play in this modern ecosystem,” says Chirag Dekate, analyst at Gartner, in a piece released yesterday at CIO.com. “Many enterprise core data assets in financial services, manufacturing, healthcare, and retail rely on mainframes quite extensively. IBM is enabling enterprises to leverage the crown jewels that are managed using mainframes as a first-class citizen in the AI journey.”

In any case, the question is moot, because the AI revolution is happening right now – not in the 1980s, not in the 1990s, and not in the early years of the millennium.

And things are changing quickly - so get ready!