AI in the creative industry, Google I/O and AI everywhere, the difference between AI agents and Agentic AI, and a linear thinker, a design thinker and a systems thinker walk into a bar…
This week’s provocation: Creative Risks, radical change – Hollywood’s transition to ‘talkies’ and AI-driven transformation
AI has created a real inflection point for every sector that involves, uses, or hires creative thinking and talent. For many creative businesses it can easily feel as though they are facing an unprecedented level of uncertainty, or even an existential threat. I’m doing a talk at an upcoming event on AI for creatives and production folk, and I’m theming it around transformational thinking, because I think AI is going to require just that. As part of the research I’ve fallen down lots of rabbit holes looking at examples from the film and production industry where innovators thought differently and reimagined fundamental assumptions as the industry underwent major technological or structural change. One of the biggest of these inflection points was the end of the silent film era and the dawn of ‘talkies’, and stories from that time can tell us a lot about how to think differently about the huge changes that AI is already bringing.
Embracing creative risk
When Warner Bros released The Jazz Singer in 1927 it was the first feature-length movie that synchronised music, sound effects and spoken dialogue with the film’s action, marking the dawn of an entirely new era. At the time many studios were not convinced that audiences actually wanted sound in cinema. It was complex, potentially costly, and unknown.
Warner Bros took a big creative bet on an innovative sound-on-disc technology, but in many ways The Jazz Singer was the perfect film to demonstrate the new capabilities of synchronised sound with filmed action. Cultural sensibilities had shifted towards energetic jazz music, urban sophistication, and an appetite for realism. There was music, and singing, and actor Al Jolson, whose unique voice, spontaneous dialogue and improvised banter (lines like ‘Wait a minute, wait a minute, you ain’t heard nothin’ yet!’) created a dynamic realism previously impossible in silent cinema. There was now no need for the exaggerated acting that had been used to convey emotion in the silent era. It was the sound that helped create that emotional realism and connection, and audiences’ relationship with movies was changed forever.
Lesson 1: It’s the creative risk-takers that move the industry forwards. Technology achieves its greatest impact when it is intimately tied to storytelling, allows for spontaneity and authenticity, and is aligned with broader cultural movements.
Reimagining possibilities
Many of the earliest ‘talkies’ regressed into theatrical staging so that they were essentially like filmed plays. The sound recording equipment was bulky and static, and synchronised sound was often treated by directors as a limitation. Ironically it was a theatre director making his film debut who showed what could be done.
Applause, the story of an aging burlesque queen who makes sacrifices to protect her convent-raised daughter from her own low-down life and abusive lover, was released in 1929. Director Rouben Mamoulian broke free from the restrictions of cumbersome sound technology, shooting on location around Manhattan using mobile microphones, early boom rigs, and innovative sound editing that allowed the camera to move freely. He integrated not only dialogue but also layered ambient city sounds into scenes for realism (anticipating modern sound design). The result combined the visual dynamism of silent cinema with the rich, newly expressive possibilities of sound, creating a truly immersive cinematic experience.
Lesson 2: True progress comes not from replicating old workflows with new tools, but from reimagining what the tools make possible.
Enriching creativity
Also released in 1929, King Vidor’s Hallelujah! was groundbreaking in a number of ways, not least because it was the first studio film with an all-Black cast. Metro-Goldwyn-Mayer had given Vidor permission to make the film on the condition that it would be a low-budget ‘experiment’.
Rather than see this as a constraint, the director seized the opportunity to shoot on location in Tennessee and Arkansas and used synchronised sound not just for dialogue, but to immerse audiences in spirituals, gospel, and field calls from the southern black community. Eschewing the traditional studio shoot, he battled technical challenges to capture real-world sound atmospheres which enabled him to create a deeply emotional, culturally rich film that felt alive and modern. Sound was not just used to capture dialogue but to add authenticity, and to construct an immersive audio world.
Similarly, in the musical film The Love Parade from the same year, Ernst Lubitsch used synchronised sound to integrate music and dialogue into a flowing narrative in a way that had never been done before. Most early filmed musicals were clumsily done, with abrupt transitions between spoken scenes and musical numbers. Lubitsch saw the arrival of sound as an invitation to fuse music and narrative, choreographing not just the actors but the entire soundtrack – dialogue, ambient sound, and orchestrations – to flow rhythmically with the storytelling. He shifted between diegetic (sound that can be heard by the characters) and non-diegetic elements, experimenting with overlapping sound and editing, and pioneering the idea of the ‘invisible orchestra’ where music and action feel unified. He was a pioneer of cinema as a fully orchestrated audiovisual experience, his work laying the foundation for sound editing as a storytelling device.
Lesson 3: New technologies (like AI) are not just about efficiency, but about enriching creative possibilities.
Tying new technology to storytelling, visionary innovators who embrace what’s newly possible, and enriching creativity. It’s easy to forget in the hype surrounding AI that almost every industry has undergone paradigm shifts driven by technology before. We lose the lessons from those shifts at our peril.
Rewind and catch up:
Innovating employee experience in the age of AI
On AI model collapse and the era of experience
Image: Illustrator unknown. Distributed by Paramount Pictures, Public domain, via Wikimedia Commons
Image: Al Hirschfeld, Public domain, via Wikimedia Commons
If you do one thing this week…
This week at the Google I/O event the company unveiled a blizzard of new announcements which marks something of a watershed – this really goes beyond the usual model upgrades to set out an ambition for how Gemini can be integrated seamlessly into multiple areas of our lives. Key highlights included:
Model enhancements: Big developments to the core models. Gemini is now in Chrome (US only for now), which will significantly reduce friction for AI use. Gemini 2.5 Pro now has a new ‘Deep Think’ mode for enhanced reasoning, and their lauded video generation model Veo 3 can now be paired with voice and music generation (demo here). Apropos of my post above, there’s also a new AI-powered filmmaking tool called Flow which combines Imagen 4 and Veo 3 to create and edit scenes and characters with greater consistency.
Project Astra: Further development of its vision for a universal AI assistant that can understand and interact with the world around it, with its capabilities being integrated into Gemini Live (interaction with AI using your camera and screen). This points to a future of more context-aware and proactive AI (demo here)
Search: AI Mode is being rolled out in the US for Google Search, offering a full AI-powered search experience with the ability to handle longer, more complex queries and follow-up questions, aiming to make search more intelligent and conversational. More AI-driven shopping experiences, including a virtual try-on mode that uses a photo of you to see how clothes would fit (demo of that here)
Extended reality: There are updates on Android XR, its platform for extended reality, and they showcased a live demo of Android XR Glasses featuring live language translation and Gemini integration (demo here). There’s also Google Beam, their AI-first video communication platform that uses AI to transform 2D video streams into realistic 3D experiences for more immersive and natural conversations (remote working will never be the same – demo here)
Agentic AI: Project Mariner is a plan to build AI agents capable of autonomously carrying out web-based tasks for users, with an initial tool expected later this year.
We used to talk about the ‘thinternet’ as a way of describing how digital is integrated as a layer across everything we do. It feels as though this is now happening with AI.
Links of the week
- Crikey. OpenAI is acquiring io, Jony Ive’s company, for $6.5bn in stock – the announcement reading like a mission statement for building ‘a new family of products’. The first AI hardware product is reported to be ‘pocket-sized, contextually aware, screen-free, and isn’t a pair of smart glasses’.
- This was the week that Sam Altman said: ‘Young people don’t really make life decisions without asking ChatGPT what they should do… It has the full context on every person in their life and what they’ve talked about.’ There’s a half-hour interview with him here in which he talks about building the ‘core AI subscription’ for your life. Good or bad, they are not thinking small.
- Meanwhile ChatGPT is improving shopping results, meaning that users can see shopping links based on their searches. The announcement post contained details of how it chooses products to present, which seems very similar to how Google does it, but it’s another big step forwards in the march towards more conversational commerce (HT Dan Calladine)
- AI agents and Agentic AI are not the same thing. Ross Dawson had a useful high-level summary of the differences and applications based on this study. TL;DR: AI agents perform narrowly defined tasks and are not thinkers; Agentic AI involves systems of multiple agents that collaborate.
- When it comes to AI and education I’m somewhat more optimistic than this article, which features accounts of cognitive outsourcing at scale already happening in the US education system. Perhaps it will be the catalyst for some much-needed change. Related – if you know anyone who is a student, Google is also offering UK university students a free Gemini upgrade for 15 months.
- The Chicago Sun-Times recently published a list of ‘Summer Reads 2025’, but eleven of the books don’t even exist. The journalist involved admitted that the list was AI-generated and that he was ‘embarrassed’ (HT John Willshire)
- I chatted to Houda Boulahbel this week and she mentioned a piece she’d written which is a brilliant articulation, in one post, of the fundamental differences between linear, design and systems thinking, and of why each one is necessary. Loved it.
Quote of the week
John Steinbeck on the challenge of capturing life in writing, from Cannery Row
And finally…
After I featured rabbitholes.ai in last week’s episode subscriber Rafa Jiménez got in touch to tell me about Seenapse, a tool that he’s building focused on idea generation beyond the normal distribution of LLMs (which is an application idea I can get behind). You can check out a short preview video here.
Weeknotes
Bearing in mind how tough it is out there for strategy-related freelancers right now, last week I offered up ten diary slots for a chat to see if I could help (I’m not a specialist coach in this area, but I do have 16 years of independent experience). The demand took me totally by surprise and I was somewhat inundated (which I think says a lot about the state of the market), so I spent a good two days this week doing nothing but speaking to other independents. It reminded me of two things: first, how wonderful it is to speak to smart, interesting strategists and, second, what an amazingly rich seam of talent there is in the freelance market right now. Outside of that it’s been a week of prep for more workshops out in the Middle East with my banking client, and revamping the IPA AI course which I’ll be delivering in early June.
Thanks for subscribing to and reading Only Dead Fish. It means a lot. This newsletter is 100% free to read so if you liked this episode please do like, share and pass it on.
If you’d like more from me my blog is over here and my personal site is here, and do get in touch if you’d like me to give a talk to your team or talk about working together.
My favourite quote captures what I try to do every day, and it’s from renowned Creative Director Paul Arden: ‘Do not covet your ideas. Give away all you know, and more will come back to you’.
And remember – only dead fish go with the flow.
