In this issue: Google’s new models, XR and quantum, OpenAI does video, GM dumps Cruise… and why big companies are ‘slow’ to adopt AI
My work
AI eats the world
Every year, I produce a big presentation exploring macro and strategic trends in the tech industry. For 2025, ‘AI eats the world’ – now with video. LINK
Podcast: AI eats the world
For the past decade, Benedict has given an annual presentation on the state of technology, and he did the latest at Slush in Helsinki last month. In this episode we discuss some of the challenges and issues that he tried to cover. LINK
News
Google’s week of shipping
Google’s AI teams had a big week, releasing version 2 of its flagship LLM suite Gemini, showing ‘Mariner’, its version of the ‘LLM controlling your web browser’ concept that OpenAI and Anthropic have also shown, and launching ‘Agentspace’ to help enterprises that use Google Cloud (not many) to build multi-step AI workflows. Feeds and speeds. GEMINI 2, MARINER, AGENTSPACE
Google also announced progress in its quantum computing project. LINK
Google returns to XR
Google has returned to VR and AR a couple of years after leaving the market, announcing Android XR, a combined developer platform for both VR and AR. There are the obvious partnerships with Qualcomm and Samsung, with the first hardware shipping next year, and Google is showing a bunch of content and UI mockups, all of which look almost identical to Apple’s Vision OS, except that Google also talks a lot about integrated AI that can do, say, live translation or explain what you’re seeing.
This whole space is in a ‘winter’ at the moment. The concept of AI-powered overlays onto the real world might be really cool, but we’re nowhere close to glasses hardware that can actually deliver that at a reasonable size and price. Meanwhile, Apple and Meta both have VR-ish devices you wear at home, but neither has got to the display quality / weight / price needed, and even without that they both still struggle to work out any use cases beyond niches (industrial & medical, small-scale gaming and fitness). Now Google is offering OEMs another platform, but consumers aren’t buying XR much, and those who do aren’t using it much, so how much enthusiasm will OEMs have before consumer traction? This seems like a marker for the future. ANDROID XR, VIDEO, DEMO
OpenAI launches its video model
OpenAI showed off a video generation model called Sora earlier this year, and now it’s finally opened up access. There are already a couple of video-generating models from startups in the market, plus Adobe, but what are they for? There are at least half a dozen new production companies in LA thinking about how to use them creatively, and at the other extreme Amazon is offering one to D2C brands to generate video ads and marketing material. In the short term, though, the most interesting thing about Sora is that there’s an actual GUI, and it’s not all driven by a prompt. Natural language is a trap. SORA, HOLLYWOOD
Broadcom, and Apple
Broadcom’s stock rose over 20%, taking it to a market cap above $1tr, after it forecast that the hyperscalers will spend $60-90bn on custom AI chips in 2027. AI is a very big deal… but this is a frothy market. LINK
Meanwhile, The Information reported that Apple is also working with Broadcom on a custom AI data centre chip. The ‘Private Cloud Compute’ that’s part of Apple Intelligence, rolling out now, uses Apple’s existing high-end Mac chips, but dedicated silicon would be better, and Apple doesn’t want to use Nvidia (though it has been training its models on Hyperscaler platforms). LINK
GM gives up on Cruise
GM has invested more than $9bn in Cruise, hoping for $50bn of robotaxi revenue by 2030, and attracted further investment from SoftBank, Honda and Microsoft, amongst others. It halted testing after a nasty (and mis-handled) accident last year and was about to restart: now GM has decided to give up, abandoning robotaxis and focusing its autonomy work on its own cars. That leaves the only significant US players as Waymo (now doing 300k trips per month in California) and perhaps Amazon’s Zoox. (Meanwhile Tesla promises that it will have its bottom-up approach working ‘in a year or two’, as it has been saying for close to a decade.)
When Uber gave up on autonomy a few years ago, the best reaction was that its investment was ‘both too much and not enough’ – Uber’s investment was too much for it to afford, and yet even so still short of the amount that would ultimately be needed, as it became clear that machine learning had solved the first 90% of the problem but the last 10% would be 90% of the work. GM seems to have concluded the same (there are also a bunch of unanswered questions about how a robotaxi business would actually work – who owns the cars?). Meanwhile it may be that the new wave of LLM-based AI will get us another leap toward a working system, but even then, that’s not GM’s expertise. It may be right to save all of its capital to handle the coming wave of Chinese EVs. COVERAGE, RELEASE
The week in AI
Klarna became one of the first viral AI case studies last year when it claimed that it was replacing a bunch of its internal systems with ChatGPT. Now it says it’s cut headcount by 20% as a result (mostly through attrition). Reaction amongst its peers can best be described as skeptical. LINK
Apple launched the next batch of ‘Apple Intelligence’ features, with ChatGPT and Google Lens integration. The most interesting feature announced at WWDC in the summer has yet to ship, though – a new Siri with a systematic index of what you do on your phone, so that you can ask ‘where am I meeting my brother for lunch next week?’ and it can work out the answer. LINK
Apparently, Google is asking the FTC to intervene in Microsoft’s exclusive enterprise hosting deal with OpenAI – Google wants to be able to sell ChatGPT on Google Cloud. (I don’t think anyone knows how the incoming Trump administration will handle any of this, though). LINK
OpenAI posted a long blog post with a bunch of screenshots of messages to and from Elon Musk, saying, in the context of his lawsuit, that he wanted OpenAI to be a for-profit company controlled by him, whereas he’s now saying that he wanted a non-profit and didn’t want control. LINK
Amazon does cars?
An interesting experiment: Amazon has launched a partnership with Hyundai to sell cars directly online and hopes to add more OEMs next year (subject to the byzantine US regulations that prohibit this in a bunch of states). LINK
Australia’s link tax
Like a dog eating its own vomit, Australia is revisiting its attempt to tax Google and Meta every time anyone saw a link to a website owned by a small number of media tycoons. This project was always a subsidy pretending to be competition regulation, in which Google and Meta were supposed to ‘buy’ something that had no commercial value, and while Google gave in, Meta walked away, blocking links to news sites entirely instead (as it also did in Canada). Now the government is proposing a compulsory ‘charge’ for companies that don’t pay the link tax, but this doesn’t solve the problem. As I and many others wrote at the time, if you want to tax one industry to subsidise another, you should be honest and argue for that, not invent dishonest and economically illiterate ways to pretend you’re doing something else. LINK
What matters in tech? What’s going on, what might it mean, and what will happen next?
I’ve spent 20 years analysing mobile, media and technology, and worked in equity research, strategy, consulting and venture capital. I’m now an independent analyst. Mostly, that means trying to work out what questions to ask.
Ideas
Ilya Sutskever’s presentation at NeurIPS argues that “Pre-training will end” – making better models simply by combining more compute with more data won’t keep scaling, because compute is growing but data is not (“Data is the fossil fuel of AI”). LINK
The Verge has a long interview with Mustafa Suleyman, co-founder of DeepMind and now head of Microsoft’s AI division. Many things covered, including optimism on synthetic data as a path to scaling. LINK
The SemiAnalysis team have a lot of deeply technical arguments about how scaling might continue. LINK
The WSJ on the ways that digital, and now AI, are overturning the old ad agency and holding company model entirely (even as Omnicom and Interpublic merge). LINK
Ev Williams is back with an app to connect when you and your friends are travelling. Remember Dopplr? LINK
One path to generative AI deployment – enterprise software turns it into building blocks (here, SnapLogic). LINK
Outside interests
Data
GroupM’s annual advertising forecasts: about $1tr in 2024, over 70% digital. LINK
Interesting e-commerce data from Marketplace Pulse – especially on China. LINK
OpenAI’s privacy statement lists some of its third-party software stack. LINK
A market map of AI ‘agents’. LINK
New Consumer’s annual trends deck. LINK
YouTube’s latest consumption stats. 1bn hours streamed a day, 400m hours of podcasts/month on living room devices. LINK
Pew on US teenager use of social media. LINK
ETO AGORA – a database of AI regulations. LINK
Column
It’ll take a while
‘Nothing is ever done until everyone is convinced that it ought to be done, and has been convinced for so long that it is now time to do something else’ – F. M. Cornford
As I talk to people inside and outside tech about generative AI, one of the more interesting disconnects I encounter is between people who think that every big company is going to deploy this Right NOW, and people who have some experience of how enterprise software works, and think that this will take years. This is pretty much what all the available data shows so far – in my latest presentation I cited data from Morgan Stanley and Bain that says perhaps a quarter of CIOs have deployed any generative AI project so far, and another quarter only expect to do so in 2026 or later.
This kind of timing isn’t specific to AI. Generative AI started working about two years ago now, and that’s pretty much as long as it takes to build enterprise software. But then it takes 9 or 12 or 18 months for a big company to decide to buy the software… and then there’s the time to actually buy it and deploy it, and that’s just for one piece of software.
If you work in a field where the software is the product and the customers can all leave tomorrow, moving at this speed sounds suicidal, but if you’re a supermarket, or an airline, or a cement company, then software is not the point of leverage, at least not on that time frame. Software is a support function, and so you need to persuade quite a lot of people that this is worth having and worth buying, you need to find the budget and weigh the opportunity cost, and you need to be comfortable that you can deploy it without breaking anything (CrowdStrike should have reminded us all why people who run large complex infrastructure that lots of people depend on don’t like making hasty changes).
Of course, sometimes big companies are just slow, sometimes they underfund or mismanage their IT, and sometimes their priorities are just elsewhere. I once met the CEO of a large water company who listened intently to my presentation and, when it was over, declared that perhaps ‘innovation’ should be one of their top five priorities next year. This sounds insane until you remember that he was, after all, running a water company. If infrastructure works, and changing it will not bring some fundamental ‘hair on fire’ business benefit, there are always other things to spend money on. This can get very corporate indeed sometimes: one of the issues in moving from on-prem to cloud (still only 25-30% of enterprise workloads, after 25 years!) is that on-prem is capitalised and SaaS subscriptions might be opex, so making a change that saves the company money might need a write-down and hurt your EPS.
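To make that accounting point concrete, here is a minimal sketch with purely hypothetical numbers (none of these figures come from the companies or data mentioned above): a cheaper SaaS subscription can still produce an ugly year of earnings if the half-depreciated on-prem asset has to be written off.

```python
# Hypothetical illustration of the capex-vs-opex point above.
# All figures are invented for illustration only.

# On-prem: $10m of hardware and licences, capitalised and depreciated over 5 years.
onprem_capex = 10_000_000
depreciation_years = 5
annual_depreciation = onprem_capex / depreciation_years  # $2m hits the P&L each year

# SaaS alternative: $1.5m a year of subscription, booked as opex.
saas_annual_opex = 1_500_000

# Suppose the company switches after year 2: the remaining book value of the
# on-prem asset has to be written off in one go.
years_elapsed = 2
remaining_book_value = onprem_capex - annual_depreciation * years_elapsed  # $6m write-down

# The SaaS option is cheaper per year than the depreciation charge ($1.5m vs $2m),
# but the year of the switch takes a $6m one-off hit to earnings.
print(f"Annual depreciation:  ${annual_depreciation:,.0f}")
print(f"SaaS opex per year:   ${saas_annual_opex:,.0f}")
print(f"Write-down on switch: ${remaining_book_value:,.0f}")
```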
On the other hand, after 25 years of cloud, the typical big company today has 400-500 SaaS applications deployed. One of the reasons SaaS works so well is that it’s easier for a big company to deploy – it’s ‘just’ a website and you don’t need to get onto the enterprise datacentre roadmap. But each of those applications is doing something very specific, and that’s the other side of the problem: each of them solves a problem that an entrepreneur had to realise existed, build a product around, build a sales force around that product, and then work out how to persuade the right people in the right companies that they needed it, one product, one company, and one team at a time.
Conversely, the great fallacy of enterprise software in the last decade or two was that SaaS meant you could bypass sales and go direct. Build a better mousetrap and people will come to you! But the reality was that this got you 5-10% of the market and then you ran out of people able, willing, and empowered to look for new tools. Most people who got sold a SaaS app were already doing that thing in Excel, email, Salesforce, SAP, or whatever and had never realised that it could be done better in a dedicated app, or even that they were doing that specific thing at all. It’s the job of the entrepreneur and the software company to think how software could do something new. And just finding this problem takes time, even before you build the product and persuade people to buy it.
All of this applies a thousandfold to LLMs, because ChatGPT isn’t solving any problem in particular for most people. For software developers or marketers, and a few other fields, this is immediately useful, and so too for that 5-10% of people who will always look for a better way and have problems where this works. But for everyone else, we need better models, or we need products that map those models into ideas for problems, or both. And that takes time.
