This is very interesting, and I imagine this is a challenge for *all* LLM tracking tools.
If every tracked prompt is (potentially?) modified to include "I am based in Germany," or something similar for a logged-out account, it's going to produce different results than a logged-in account whose context, memory, or location detection is tied to Germany.
I think seeing responses in LLM trackers that consistently state "if you're in Germany" confirms that, compared to regular conversations in the interface, where the location is implied rather than explicitly stated at the top of many responses (63%!).
It’s a slight difference, but it’s another example of how what the LLM trackers see and what users see are likely going to continue to diverge over time, due to personalization.
Also, my Gemini and AI Mode responses are consistently personalized now: I ask generic questions and get answers with local information related to Brooklyn (where I live).
I've said various times that I still think directional data from LLM trackers is better than no data, but that data seems to be becoming increasingly directional over time.
I’m concerned about the accuracy of location-based prompt tracking in Profound 🤨
I am tracking prompts with region: Germany and platform: ChatGPT for a German client. But I’m puzzled by the responses I’m seeing.
138 of 219 responses (63%) mention "Germany" in the first paragraph. For example (translated to English):
– If you are in Germany, then…
– … specifically for users in Germany …
– If you want to do X in Germany, then…
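For anyone wanting to run the same skew check on their own tracker exports, a minimal sketch (the response strings and the helper name are hypothetical placeholders, not real Profound output):

```python
# Hypothetical sketch: count how many responses mention the region
# in their first paragraph, as a rough measure of location injection.
def first_paragraph_mentions(responses, term="Germany"):
    """Return (hit count, percentage) of responses whose first
    paragraph contains `term`, case-insensitively."""
    hits = sum(
        1 for r in responses
        if term.lower() in r.split("\n\n")[0].lower()
    )
    return hits, round(100 * hits / len(responses))

# Placeholder examples mirroring the patterns quoted above.
responses = [
    "If you are in Germany, then you should...",
    "Here are some general options to consider...",
    "These providers work specifically for users in Germany...",
]
print(first_paragraph_mentions(responses))  # → (2, 67)
```

Against the numbers in the post, 138 of 219 gives the quoted 63%.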
As a user based in Germany, I never get these responses from ChatGPT. How ChatGPT tailors its responses to my location is usually implicit; it doesn't remind me of my location in every response.
(Note that none of the prompts mention Germany in any way.)
Profound says it does location tracking via proxy servers in the target region, but this looks more like the location is being injected into the prompt somehow.
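To make the distinction concrete, here is a hypothetical illustration of the two approaches (this is not Profound's actual implementation, and the request shapes are invented for the sketch). A proxy-based approach leaves the prompt untouched and only changes the network egress, while prompt injection rewrites the prompt text itself:

```python
# Hypothetical sketch contrasting proxy-based geolocation with
# prompt-level location injection. Field names are illustrative.

PROMPT = "What are the best savings accounts?"

def via_proxy(prompt: str, region: str) -> dict:
    # Route the unchanged prompt through an exit node in `region`;
    # the model infers location (if at all) from the request's IP.
    return {"prompt": prompt, "proxy_region": region}

def via_injection(prompt: str, region: str) -> dict:
    # Prepend an explicit location statement. This would plausibly
    # explain responses that open with "If you are in Germany, ...".
    return {"prompt": f"I am based in {region}. {prompt}"}

print(via_proxy(PROMPT, "Germany")["prompt"])      # prompt unchanged
print(via_injection(PROMPT, "Germany")["prompt"])  # location prepended
```

Only the second approach puts "Germany" into the text the model actually reads, which would account for the location being echoed back at the top of so many responses.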
I'm aware we can never get the "universal response" that a "real user" would see, because of all the personalization in ChatGPT. But if we accept that prompt tracking is still useful, I'm concerned that this is skewing the results in a meaningful way.
Anybody got thoughts on this?