You’d have to be pretty brave to bet that applying more computing power and data to machine learning—a recipe that birthed ChatGPT—won’t lead to further advances of some kind in artificial intelligence. You’d be braver still, though, to bet that the combo will produce specific breakthroughs on a specific timeline, no matter how desirable.
A report issued last weekend by the investment bank Morgan Stanley predicts that a supercomputer called Dojo, which Tesla is building to boost its work on autonomous driving, could add $500 billion to the company’s value by giving it a huge advantage in carmaking, in operating robotaxis, and in selling software to other businesses.
The report juiced Tesla’s stock price, adding more than 6 percent, or $70 billion—roughly the value of BMW, and considerably more than Elon Musk paid for Twitter—to the EV maker’s market cap as of September 13.
The 66-page Morgan Stanley report is an interesting read. It makes an impassioned case for why Dojo, the custom processors Tesla has developed to run machine learning algorithms, and the huge amount of driving data the company collects from its vehicles on the road could pay huge dividends in the future. Morgan Stanley’s analysts say that Dojo will provide breakthroughs that give Tesla an “asymmetric” advantage over other carmakers in autonomous driving and product development. The report even claims the supercomputer will help Tesla branch into other industries where computer vision is critical, including health care, security, and aviation.
There are good reasons to be cautious about those grandiose claims. Still, you can see why, at this particular moment of AI mania, Tesla’s strategy might seem so enthralling. The mind-bending abilities of ChatGPT came from a remarkable leap in the capabilities of its underlying algorithms, and that leap can be traced back to a simple equation: more compute × more data = more clever.
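For readers who want that slogan in more precise form, machine-learning researchers have reported empirical “scaling laws” in which a model’s test loss falls smoothly and predictably as its parameter count and training data grow. The formula below is an illustrative sketch of that published form; the constants are placeholders fitted per model family, not figures tied to ChatGPT or Tesla.

```latex
% Illustrative neural scaling law: test loss L falls as model size N (parameters)
% and dataset size D (training tokens) grow. E, A, B, alpha, beta are constants
% fitted empirically for a given model family -- placeholders here, not real values.
\[
  L(N, D) \;=\; E \;+\; \frac{A}{N^{\alpha}} \;+\; \frac{B}{D^{\beta}},
  \qquad \alpha, \beta > 0
\]
```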
The wizards at OpenAI were early adherents to this mantra of moar, betting their reputations and their investors’ millions on the idea that supersizing the engineering infrastructure for artificial neural networks would lead to big breakthroughs, including in language models like those that power ChatGPT. In the years before OpenAI was founded, the same pattern had been seen in image recognition, with larger datasets and more powerful computers leading to a remarkable leap in the ability of computers to recognize—albeit at a superficial level—what an image shows.
Walter Isaacson’s new biography of Musk, which has been excerpted liberally over the past week, describes how the latest version of Tesla’s optimistically branded Full Self Driving (FSD) software, which guides its vehicles along busy streets, relies less on hard-coded rules and more on a neural network trained to imitate good human driving. This sounds similar to how ChatGPT learns to write by ingesting endless examples of text written by humans. Musk has said in interviews that he expects Tesla to have a “ChatGPT moment” with FSD in the next year or so.
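To make the imitation idea concrete, here is a minimal sketch of behavior cloning, the general technique that description points to: a network learns to reproduce logged human control inputs from perception features. Every detail here (the feature size, the two-number control output, the synthetic data) is an illustrative assumption, not a description of Tesla’s actual FSD stack.

```python
# Minimal behavior-cloning sketch (illustrative assumptions, not Tesla's system):
# a small network learns to map perception features to the control inputs a
# human driver produced in the same situation.
import torch
import torch.nn as nn

# Stand-in "logged driving" data: 10,000 frames of 256-dim perception features,
# each paired with the human's [steering, throttle] at that moment.
features = torch.randn(10_000, 256)
human_controls = torch.randn(10_000, 2)

policy = nn.Sequential(
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 2),  # predicted [steering, throttle]
)

optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # penalize deviation from what the human actually did

for epoch in range(5):
    for i in range(0, len(features), 256):
        batch_x = features[i:i + 256]
        batch_y = human_controls[i:i + 256]
        loss = loss_fn(policy(batch_x), batch_y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: imitation loss {loss.item():.4f}")
```

A production system would presumably learn from raw video at vastly larger scale, which is exactly the kind of workload a purpose-built training supercomputer like Dojo is meant to absorb.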
Musk has made big promises about breakthroughs in autonomous driving many times before, including a prediction that there would be a million Tesla robotaxis by the end of 2020. So let’s consider this one carefully.
By developing its own machine learning chips and building Dojo, Tesla could certainly save money on training the AI systems behind FSD. That may well help it wring more improvement out of its driving algorithms using the real-world driving data it collects from its cars, data that competitors lack. But whether those improvements will push Tesla past an inflection point in autonomous driving, or in computer vision more generally, seems virtually impossible to predict.