Welcome to AI Collision 💥,
In today’s collision between AI and our world:
Up, down, nobody knows…
Groq vs Grok
Which AI have you used?
If that’s enough to get the Groqs (or Groks) fighting, read on…
AI Collision 💥: how do you tell if an AI stock is worth picking or not?
My morning routine usually starts somewhere between 5.30am and 6.00am.
And no, this isn’t one of those “life hack” posts where I tell you I’ve done six workouts and eaten 2kg of steak before 7am.
My day starts that early because at least one of my kids is standing beside my bed, like a silent serial killer, waiting for one eye to open so they can go and play.
With no other choice, I’m up, bleary-eyed: they’re smashing some milk and a banana while I check in to see what’s been going down in the markets overnight.
Hence my usual first port of call is a data aggregation site like Yahoo Finance, where I can catch up on closing prices and post-market trading in the US and see what’s been moving in Australia and Asia, all while mainlining coffee right into my system.
As I did that yesterday morning, I checked in on a few “darling” stocks of the market.
One of those being Super Micro Computer Inc. (NASDAQ:SMCI).
When I checked in, it wasn’t the price action that caught my attention right away; it was some of the news feeds about SMCI that took my half-awake state into the realms of dreamland.
Here’s the first thing I saw…
$1,300 per share would be 71% higher.
Then I saw this…
60% higher would be $1,212.
Then this…
And this came off the back of a slide from an intraday high of $1,077 on Friday 16 February to an intraday low of $692 on Tuesday 20 February. That’s a swing of roughly 36% lower.
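For the back-of-the-envelope crowd, here’s a quick check of that swing using only the intraday figures quoted above (a minimal sketch; nothing here comes from SMCI’s filings):

```python
# Quick check of the SMCI swing from the intraday figures quoted above.
intraday_high = 1077  # Friday 16 February, USD
intraday_low = 692    # Tuesday 20 February, USD

swing = (intraday_high - intraday_low) / intraday_high
print(f"High-to-low swing: {swing:.1%}")  # ~35.7%, i.e. roughly 36%
```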
So, what is it? What does SMCI do next?
Well, the short answer is, short term, nobody bloody knows!
Will it retest over $1,000 in the next week, or will it sail 50% lower towards prices it was trading at only a month ago?
Every trader, broker and analyst will have a take on it. So how do you know who to listen to, who to trust…?
That includes me by the way!
I think the approach to take here is one that, over time, has proven most effective for me. That is to simply form a distinctive world view about what our world looks like in a decade, or two decades, and work backwards from there.
If you think that technology like AI will only get better, that we’ll lean on more computationally intensive, heavier hardware, and that we’ll be more connected, faster, with greater access to AI, then you just need to ask: what are the smart plays that build that world?
And then ask: is SMCI the kind of company that fits into that thesis?
But if you think AI is all hyped up, that an unforeseen competitor will come along and scoop up the market, or that the technology fundamentally isn’t going to have a big impact on our world, then you’ll find your answers there pretty quickly too.
I believe this is how you at least start figuring out which AI stocks are worth tipping capital into. Once you’ve built a thesis, or gathered some expert information and guidance to shape your view (like the exceptionally great content here at AI Collision 💥), you then look at those companies and see whether they actually make money from the tech they’ve got going.
This is a tried and tested blueprint for picking AI stocks. Some call it qualitative, top-down investing. But for me, I think it’s just a reverse jigsaw. When you know what the end puzzle looks like, then you start to figure out what the pieces are that make it so.
It’s not always right, but you don’t need to be right all the time. And when you see the pace and explosion of AI, and of the companies involved in it, like SMCI, I think it’s a puzzle worth figuring out.
AI gone wild 🤪
First, we had central processing units (CPUs).
Then we had graphics processing units (GPUs).
Now we’ve got language processing units (LPUs).
Well, we don’t “have” LPUs per se, but we might. And that’s enough to have sent the AI world into a bit of a tizz these last few days.
Have you ever heard of Groq?
No, that’s not a spelling mistake. We’re not talking about Grok, Elon Musk’s AI that he’s rolling out through X.com – the Grok that we wrote about in November last year.
I mean Groq, the same Groq that apparently existed well before Elon Musk’s ever did.
Here’s what Groq said when Elon made a song and dance about Grok:
Now that we’ve got the whole “whose Groq (or Grok) came first” debate out of the way – clearly it was Groq – the question remains: why is this important today?
Well, it comes back to the CPU, GPU and now LPU evolution.
Groq is claiming that it’s developed an “AI chip” that’s exponentially faster than anything out there in the market right now – more specifically, than the AI chips Nvidia is making and that companies like Microsoft are using.
As Groq explains:
An LPU Inference Engine, with LPU standing for Language Processing Unit™, is a new type of end-to-end processing unit system that provides the fastest inference for computationally intensive applications with a sequential component to them, such as AI language applications (LLMs).
In practice, there are claims that when using LPUs, computation runs as much as 13 times faster.
In a recent Gizmodo article the speed comparison was put to the test, finding:
Groq produces 247 tokens/second compared to Microsoft’s 18 tokens/second, according to a third-party test from Artificial Analysis published last week. That means ChatGPT could run more than 13x as fast if it was running on Groq’s chips.
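To put those throughput numbers in some context, here’s a rough sketch of what they imply for a single response. The 500-token answer length is purely my own illustrative assumption, not something from the Artificial Analysis test:

```python
# Rough arithmetic on the throughput figures quoted above.
groq_tps = 247        # tokens per second, Groq (Artificial Analysis test)
microsoft_tps = 18    # tokens per second, Microsoft (same test)

print(f"Implied speedup: {groq_tps / microsoft_tps:.1f}x")  # ~13.7x, the "more than 13x"

# Illustrative only: assume a 500-token answer to see what that feels like in seconds.
response_tokens = 500
print(f"Groq:      {response_tokens / groq_tps:.1f}s")       # ~2.0s
print(f"Microsoft: {response_tokens / microsoft_tps:.1f}s")  # ~27.8s
```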
I also decided to do my own real-world test between Groq and OpenAI’s ChatGPT.
With ChatGPT (using GPT-4) I asked the question, “Who invented the TPU?”
26 seconds later, I got the following answer.
The Tensor Processing Unit (TPU) was invented by Google. TPUs were designed specifically for accelerating machine learning tasks. They are custom-designed chips optimized to run TensorFlow, Google's open-source machine learning framework, and are intended to deliver higher performance and efficiency than conventional hardware such as CPUs and GPUs for certain types of computations common in deep learning algorithms. Google announced TPUs in 2016 as part of their efforts to accelerate machine learning workloads and improve the performance of their artificial intelligence services.
I then went on to Groq, using the Llama 2 70B-4K model, and asked the same question, “Who invented the TPU?”
1.46 seconds later I got the following answer:
The Tensor Processing Unit (TPU) was invented by Google. The first TPU was developed in 2013 by the Google Brain team, led by Andrew Ng, a prominent researcher and entrepreneur in the field of artificial intelligence. The team recognized the need for specialized hardware to accelerate the training of deep neural networks, which were becoming increasingly complex and computationally intensive.
Google developed the TPU specifically to address the challenges of training deep neural networks, which were taking weeks or even months to train on traditional CPUs and GPUs. The company designed the TPU to optimize the performance of deep neural network computations, and the first TPU was deployed in Google's data centres in 2015.
Since then, TPUs have become widely adopted in various industries, and Google has continued to develop and improve the technology. In 2018, Google announced the Cloud TPU, which allowed customers to rent TPU resources in the cloud, making it easier for organizations to access the technology without having to invest in their own hardware.
In summary, the Tensor Processing Unit (TPU) was invented by the Google Brain team, led by Andrew Ng, to address the need for specialized hardware to accelerate the training of deep neural networks.
Groq gives a more in-depth answer whilst also doing it 17.8 times faster.
That’s quite something.
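If you want to repeat the side-by-side test yourself, here’s a minimal sketch of how you might time it through the APIs rather than the web chats I used. It assumes the official openai and groq Python clients with API keys set in your environment; the model IDs shown (gpt-4, and a Groq-hosted Llama 2 70B) are placeholders to swap for whatever is currently available:

```python
import time

from groq import Groq
from openai import OpenAI

QUESTION = "Who invented the TPU?"


def timed_answer(client, model: str) -> float:
    """Send QUESTION to the given chat client/model and return the elapsed seconds."""
    start = time.perf_counter()
    client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": QUESTION}],
    )
    return time.perf_counter() - start


# Assumes OPENAI_API_KEY and GROQ_API_KEY are set in the environment.
openai_seconds = timed_answer(OpenAI(), "gpt-4")
groq_seconds = timed_answer(Groq(), "llama2-70b-4096")  # placeholder ID for Groq's Llama 2 70B

print(f"OpenAI: {openai_seconds:.2f}s  Groq: {groq_seconds:.2f}s")
print(f"Speedup: {openai_seconds / groq_seconds:.1f}x")
```

Bear in mind that wall-clock timings like this fold in network latency and queueing as well as raw inference speed, so treat any single run as indicative rather than definitive.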
Now, for me, there’s still a big question that remains.
Are these new LPUs the real deal? Are we talking about a true game-changing piece of hardware here, or is this a lot of hype, with the reality being that it’s the models, not the hardware, making the difference?
And amidst all of it, what’s the investible angle?
Right now, I’m not sure. I’m trying to figure that out. Ideally, if Groq were investible then it’d be something worth considering, a big chunk of asymmetric risk on the table…
But it’s not. So, what impact does this have short term? My initial feeling is that it’s another platform from which the wider industry leaps higher.
That it’s further confirmation that 2024 is going to be the year of AI hardware. That big tech, mid-tech and little-tech are all scrambling to develop, launch and get game-changing AI hardware to market.
The demand for all this is not only strong but growing at an insatiable pace. I suspect we’ll be hearing a lot more about Groq (with a Q, remember) in the near future as the idea of LPUs starts to grab hold of the market.
Ed note: by the way, I mentioned on Tuesday that my new AI briefing was coming soon. It’s actually going live next Tuesday. If you want to check it out, you’ll need to pre-register for it.
Again, to see what it’s all about, just hit the button below and it’ll take you where you need to go to find out more.
Boomers & Busters 💰
AI and AI-related stocks moving and shaking up the markets this week. (All performance data below is over the rolling week.)
Boom 📈
Appen (ASX:APX) up 35%
Brainchip (ASX:BRN) up 29%
Cyngn (NASDAQ:CYN) up 18%
Bust 📉
Vicarious Surgical (NASDAQ:RBOT) down 12%
C3.ai (NASDAQ:AI) down 5%
Nvidia (NASDAQ:NVDA) down 3%
From the hive mind 🧠
There are a lot of elections happening in 2024 around the world. And it’s shaping up that AI is going to be the hottest story about how elections are run; how the voters are influenced; and when losers lose, how these elections were “corrupted”. AI and elections are something we’re going to hear a lot more about in 2024.
While on this topic of AI and deepfakery, another “godfather” of AI has published an open letter about the need for greater regulation on deepfakes. While AI and elections will be a hot topic, so will the regulation of AI.
Often at home when I’m looking for something (that happens to be right in front of me) and can’t find it, my wife refers to it as me doing a “boy look”. I rebut that as it’s just how I’m wired. And I’m now hoping that AI is going to prove my point exactly.
Artificial Polltelligence 🗳️
This week’s poll is about using AI models. Mainly: have you used them, and if so, which ones…
If your answer happens to be “other” drop us a comment to tell us which one(s) so we can dive in ourselves to find out more.
Weirdest AI image of the day
Communist countries’ propaganda posters starring famous fictional characters – r/Weirddallee
ChatGPT’s random quote of the day
"Technology is best when it brings people together, but it is at its peak when it makes us forget it's even there, seamlessly enhancing our lives without our conscious recognition." – Tatsuo Horiuchi
Thanks for reading, and don’t forget to leave comments and questions below,