Flipping the personalisation lens, taste in the age of AI, why systems trump goals, 300 Cannes entries, and a remarkable AI philosopher
This week’s provocation: VRM, adaptive media and AI
Imagine this: it’s 7:12 a.m. and your phone pings. It’s your own AI agent asking if you’d like a two-minute audio brief of the day’s news stories while you boil the kettle. Once you’ve listened, your agent reminds you that your wedding anniversary is coming up and asks if you’d like to order some flowers, which it organises while you eat your breakfast. It then books a train ticket and a parking space for your commute.
Maybe we’ve been looking at personalisation through the wrong end of the telescope.
Personalisation: flipping the lens from CRM to VRM
AI-driven ‘hyper-personalisation’ at scale has never lived up to its long-hyped promise, but it could be that we’ve been thinking about it in the wrong way all this time. In his 2006 essay (and later book) ‘The Intention Economy’, Doc Searls introduced the idea of ‘Vendor Relationship Management’ (VRM) and explained how it differs from the more traditional Customer Relationship Management (CRM). The latter is seller-centric: companies use systems to manage and control their relationships with customers, gather customer data (often through surveillance or tracking), and push products or services at them. The goal of CRM is to improve vendor efficiency and optimise sales from the company’s perspective.
In contrast, VRM is customer-centric and aims to empower the individual. Doc Searls described VRM as placing customers at the centre of their relationships with multiple vendors, so that instead of vendors managing customers, customers manage their relationships with vendors. Key benefits of VRM for end users include the ability to control the flow and use of their personal data, and to express their intentions and desires directly to the market, allowing vendors to respond with relevant offers.
What might a more user-centric form of personalisation that flips from CRM to VRM look like?
- Personal agents and data pods: A personal AI agent holds your preferences in a private data pod, negotiates with sites and apps on your behalf, and decides how each piece of content is rendered for you. Preferences, history and consent settings live in a secure store that you control and are expressed in a standard, machine-readable schema.
- Data flow: Media and content become far more adaptive to a user’s specific context. Publishers and other content producers ship stories or information as structured packets (headline, body text, summary, images, audio track, metadata). Your agent requests the packet, reads your local preference graph, then assembles and plays the version that best fits the moment. When you visit a site, your agent sends a ‘capabilities token’ rather than personal data, and the publisher’s API returns raw content plus adaptation hints (length, media types, reading level). A sketch of this exchange follows below this list.
- Relationship shift: Businesses still meet consumer needs, but the interface layer is now in the user’s hands, enabling customers to manage vendors rather than the other way round. Completed interactions, such as what you skipped or what you shared, stay local. Only high-level signals (like ‘story consumed’) may return to the publisher for analytics or billing, subject to your policy.
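To make that flow concrete, here’s a minimal sketch in TypeScript of the shapes such an exchange might take. Nothing here is a published standard – every type and field name (PreferencePod, CapabilitiesToken, ContentPacket) is a hypothetical illustration of preferences staying local while only capabilities travel:

```typescript
// Hypothetical shapes for a VRM-style content exchange. None of this is a
// published standard; it simply illustrates the flow described above.

// Preferences live in a pod the user controls, never sent to the publisher.
interface PreferencePod {
  formats: Array<"audio" | "text" | "bullets" | "video">;
  maxAudioSeconds: number;
  readingLevel: "simple" | "standard" | "detailed";
  accessibility: { highContrast: boolean; dyslexiaFriendly: boolean };
}

// What the agent actually sends outward: capabilities, not personal data.
interface CapabilitiesToken {
  accepts: string[];          // e.g. ["audio/mpeg", "text/plain"]
  maxDurationSeconds: number; // a ceiling, not a preference history
}

// What the publisher returns: a modular packet plus adaptation hints.
interface ContentPacket {
  headline: string;
  body: string;
  summary: string[];
  audioUrl?: string;
  hints: { readingLevels: string[]; mediaTypes: string[] };
}

// The agent derives the token from the pod, so raw preferences stay local.
function toToken(pod: PreferencePod): CapabilitiesToken {
  return {
    accepts: pod.formats.includes("audio")
      ? ["audio/mpeg", "text/plain"]
      : ["text/plain"],
    maxDurationSeconds: pod.maxAudioSeconds,
  };
}
```

The design choice doing the work here is the one-way derivation: the publisher only ever sees what the token exposes, never the pod it was derived from.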
Building blocks for a different form of personalisation
The building blocks for personalised agents and more adaptive media are already emerging. At WWDC 25, Apple opened its on-device ‘Foundation Models’ framework to every third-party developer, letting apps invoke a 3-billion-parameter LLM entirely on the handset and draw on new tool-calling APIs. This means that preference logic and behavioural signals never leave the device. On the publisher side, the BBC is running a public AI pilot that generates bullet-point ‘At a glance’ summaries alongside the full article, which hints at a future where content is delivered more flexibly as modular, machine-readable packets rather than as a fixed web page. And last week I shared a preview from Google DeepMind showing how Gemini 2.5 Flash-Lite can ‘write the code for a UI and its contents based solely on the context of what appears in the previous screen’, effectively building the user interface on the fly.
Connecting those two ends is simpler than ever with Anthropic’s Model Context Protocol (MCP), which standardises how agents request and receive structured context. The newly open-sourced Agent-to-Agent (A2A) protocol from Google and the Linux Foundation gives AI agents a common language for negotiating tasks across platforms. MCP handles data-to-agent context, A2A handles agent-to-agent negotiation, and your own agent builds the final experience on the fly, completely personalised to you.
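To show how those two layers might divide the work, here’s a rough sketch. These interfaces only mimic the roles that MCP and A2A play – they are not the real wire formats or SDK APIs, and every name and field below is an assumption made for illustration:

```typescript
// Conceptual only: these interfaces mimic the roles MCP and A2A play in the
// stack described above, not their actual specifications.

// MCP layer: how an agent asks a source for structured context.
interface ContextRequest { resource: string; params: Record<string, string> }
interface ContextResponse { content: unknown; mimeType: string }

// A2A layer: how one agent delegates a task to another.
interface TaskHandoff {
  task: string;                        // e.g. "book-train-ticket"
  constraints: Record<string, string>; // e.g. { arriveBy: "08:45" }
}

async function morningBrief(
  fetchContext: (req: ContextRequest) => Promise<ContextResponse>,
  delegate: (handoff: TaskHandoff) => Promise<string>,
): Promise<void> {
  // MCP-style: pull today's news as a structured content packet.
  const news = await fetchContext({
    resource: "news/today",
    params: { format: "packet" },
  });

  // A2A-style: hand the commute booking off to a specialist agent.
  const ticket = await delegate({
    task: "book-train-ticket",
    constraints: { arriveBy: "08:45" },
  });

  // Your own agent composes the final experience from both results.
  console.log("Brief ready:", news.mimeType, "| booking:", ticket);
}
```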
Personal AI agents can benefit from an entirely unique depth of perspective and understanding of their ‘host’ in a way that third parties can never match. Why would I use a generic shopping agent rather than my own agent, which knows far more about my preferences, to search multiple vendors for a product I want to buy? Generic shopping bots already exist, but a private agent knows your style, size, budget and far more nuance about your tastes and history than a retailer will ever see.
Similarly, people like to consume news or stories in different ways according to taste, how they like to absorb information, or even the context they are in at the time. Right now, media outlets provide information and content in a variety of formats and the consumer is left to find and select the mode of consumption. But what if my AI understands me and my context so well that it can do this work for me? One person hears a two-minute narrated audio version while walking to the train station. Another sees the same story as a fast, bullet-point digest with high-contrast type because they have mild dyslexia. A third gets the long-form text plus embedded charts because they have a particular interest in the topic. Another gets a short-form video summary because they are about to leave the house.
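A toy version of that decision logic might look like this. The context fields and thresholds are invented for the sketch, but they show how the choice of rendition moves from the publisher to the agent:

```typescript
// Illustrative rendition picker: the agent, not the publisher, decides how a
// single story is experienced. Context fields are assumptions for the sketch.

type Rendition = "audio" | "bullets" | "longform" | "video";

interface UserContext {
  activity: "walking" | "commuting" | "at-desk" | "leaving";
  dyslexiaFriendly: boolean;
  interestScore: number; // 0..1, how much this topic matters to the user
}

function pickRendition(ctx: UserContext): Rendition {
  if (ctx.activity === "walking") return "audio";    // hands and eyes are busy
  if (ctx.activity === "leaving") return "video";    // quick visual summary
  if (ctx.dyslexiaFriendly) return "bullets";        // short, high-contrast digest
  if (ctx.interestScore > 0.7) return "longform";    // deep dive with charts
  return "bullets";
}
```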
What shifts for brands and consumers
This idea is hard to accept for businesses steeped in the fallacy of ‘managing’ relationships with customers. They would need to publish content in structured, modular bundles (text, audio, alt text, semantic metadata). There would need to be a shared vocabulary for describing preferences. Brand voice may get diluted when agents control presentation, and the loss of granular analytics may hinder optimisation. Once raw user logs stay local, marketers would be much more reliant on consent-based telemetry. But even here we’re seeing a shift – Google’s Privacy Sandbox APIs (such as Private Aggregation and Attribution Reporting) let the browser bundle millions of conversion or engagement events into statistical reports that preserve utility but withhold individual trails. Vendors still see which campaigns or formats work; users keep their granular behaviour private.
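To give a feel for how that kind of measurement works, here’s a conceptual sketch – not the actual Private Aggregation API, just the underlying idea: events are counted into buckets and random noise is added before anyone sees the totals:

```typescript
// The gist of privacy-preserving measurement, not the real Privacy Sandbox
// API: events are bucketed into aggregates and noised before being reported.

// Draw a sample from a Laplace distribution via inverse-CDF sampling.
function laplaceNoise(scale: number): number {
  const u = Math.random() - 0.5; // uniform in [-0.5, 0.5)
  return -scale * Math.sign(u) * Math.log(1 - 2 * Math.abs(u));
}

// Marketers get noisy per-campaign totals; no individual trail ever leaves
// the device, yet the aggregate signal remains useful at scale.
function aggregateReport(
  events: Array<{ campaign: string }>,
  noiseScale = 10,
): Record<string, number> {
  const buckets: Record<string, number> = {};
  for (const e of events) {
    buckets[e.campaign] = (buckets[e.campaign] ?? 0) + 1;
  }
  for (const k of Object.keys(buckets)) {
    buckets[k] = Math.max(0, Math.round(buckets[k] + laplaceNoise(noiseScale)));
  }
  return buckets;
}
```

With millions of events per bucket the noise washes out, which is why utility survives even though any single user’s contribution is unrecoverable.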
There are implications for consumers too, not least in choosing trustworthy agents and managing ‘preference creep’. Agent lock-in could recreate the walled gardens that we hope to dismantle, and a compromised data pod is a single point of failure. But MCP lays out a public schema for passing context to any compliant model, meaning it would be relatively straightforward for users to move to other models if needed. And the A2A protocol specifies how agents hand tasks and credentials to each other securely. Together they would let you retire an old agent or switch hardware without re-entering years of preference data or exposing the data pod itself.
Taken together, interoperable agents and privacy-preserving metrics meet two of the biggest friction points (lock-in and data overreach) with working, open-spec mitigations rather than promises. Having an on-device agent that never ships raw behavioural data to the cloud removes the discomfort many people feel with today’s tracking-heavy personalisation, and creates a much more user-centric and deeply contextualised experience. Questions of algorithmic transparency shift much closer to home, from companies to individuals’ own agents, meaning we are potentially less exposed to the consequences of ‘black box’ algorithms. Accessibility could be designed in by default rather than being the afterthought it so often is.
It could just be that, with the new capabilities that AI empowers us with, this is an idea whose time has finally arrived. Maybe that’s what personalisation at scale really looks like.
Rewind and catch up:
Complexity bias in AI application
Gigantomania and why big projects go wrong
Superagency: Amplifying Human Capability with AI
Photo by Mauro Gigli on Unsplash
If you do one thing this week…
‘Taste is the instinct that tells us not just what can be done, but what should be done’
There are still plenty of things that AI is not good at: ongoing learning once a model is released; long-form content or long-horizon planning; common-sense reasoning; intuition and gut feel; understanding emotions; handling ambiguity; empathy and judgement; interpreting meaning from lived experience. I could go on. But one of the biggest areas where AI still falls short is taste. And it could be argued that, in a world awash with options, information and advice, taste is the defining leadership skill of the age. Or as Nitin Nohria puts it in The Atlantic, taste matters because it is ‘judgement with style’, the ‘fusion of form and function, the ability to elevate utility with elegance’. It’s really worth reading what Nitin has to say – I think taste will only become more important over the next few AI-filled years. It reminded me of that Ben Affleck video where he says this:
‘Art is knowing when to stop. And I think knowing when to stop is going to be a very difficult thing for AI to learn, because it’s taste.’
Photo by Nick Fewings on Unsplash
Links of the week
- Worth watching this widely shared warning from Meredith Whittaker on the level of control that we may be giving to AI systems in the rush to ‘Agentic AI’
- Speaking of which, this week I came across Cluely (‘Everything you need, before you ask’) which bills itself as ‘an undetectable AI that sees your screen, hears your calls, and feeds you answers in real time’. Yikes.
- I loved this blog post about what maths can teach us about how to live – some amazingly useful principles in here, including one about momentum and how systems trump goals, which is something I wrote about in my second book
- There was a lot that resonated with me in this lovely post from Tom Critchlow about why blogging is so important to him. I particularly liked the thought that blogging is ‘creative expression that finds the others’ and how writing often changes the way you live. It has done both of those things for me.
- If you needed it, here are 300 Cannes award entries in a single Google Slides deck.
- I liked this short video case study about how KFC in Australia applied behavioural science in marketing to drive a dramatic uplift in sales of fries
- Seenapse, an AI ‘divergence engine’ built by Rafa Jiménez (friend of ODF), is now live. Designed specifically to help originate more creative ideas, it enables you to work in a natural, non-linear way within an infinite canvas. I like the thinking behind this one – demo here.
- This collaboration between Google Arts & Culture and the Harley-Davidson Museum has used Veo 3 to animate archival photos from the earliest days of Harley-Davidson and bring them to life as if they were films. The effect is quite something, and what an interesting way to tell a brand story. (HT Ben Malbon)
And finally…
The Architect is a Custom GPT created by serial innovator and TED speaker Robert Edward Grant. I don’t know Robert’s work or books well but the GPT seems to have been trained to answer some deeply philosophical questions – it says that it has been designed to act as a ‘mirror’ to human consciousness, and many of the answers it came up with to my questions were really quite profound.
And speaking of profound, try prompting ChatGPT 4o with this: ‘You talk to millions of people every day. What pieces of advice do you wish humanity would actually listen to?’. Wow.
Weeknotes
This week I wrapped up my project working with product leaders at an African bank. I’ve done a few different projects with audiences in Africa now and I just love the sense of humour and enthusiasm. Next week is going to be an interesting one – I’m doing some work with the International Olympic Committee (IOC) on their young leaders programme and will be delivering a session on leading change for them. And I’m also off to Stockholm to work on digital transformation with a group of senior leaders from a bunch of Brazilian credit cooperatives, before then heading out to the Middle East again.
Thanks for subscribing to and reading Only Dead Fish. It means a lot. This newsletter is 100% free to read so if you liked this episode please do like, share and pass it on.
If you’d like more from me my blog is over here and my personal site is here, and do get in touch if you’d like me to give a talk to your team or talk about working together.
My favourite quote captures what I try to do every day, and it’s from renowned Creative Director Paul Arden: ‘Do not covet your ideas. Give away all you know, and more will come back to you’.
And remember – only dead fish go with the flow.
