Technology

This Shocking New ChatGPT Trend Lets Users Spy on Locations from Photos!

2025-04-17

Author: Kai

A troubling new trend is sweeping across the internet: users are employing ChatGPT to identify locations from images. What could possibly go wrong?

Just this week, OpenAI launched its latest AI models, o3 and o4-mini, which come equipped with remarkable image-analyzing abilities. These models can crop, rotate, and even enhance blurry or distorted photos, making them an incredible tool for pinpointing locations.

But it's the combination of these features with the AI's web-searching capabilities that really raises eyebrows. Users on X (formerly Twitter) have discovered that o3 excels at deducing cities, landmarks, and even specific eateries from subtle visual clues in photos.

What's particularly shocking is that these models don't appear to rely on memories from past conversations or on EXIF data, the embedded metadata that usually reveals where a photo was taken. Instead, they seem to work entirely from visual reasoning and analysis.
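To see why that matters, it helps to know what EXIF normally gives away: cameras embed GPS coordinates as degrees, minutes, and seconds alongside a hemisphere reference. The sketch below (a hypothetical helper, not tied to any particular library) shows how that raw metadata converts to an exact map position, which is precisely the shortcut these models seem to be locating photos without.

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style degrees/minutes/seconds GPS values
    to signed decimal degrees usable on a map."""
    value = degrees + minutes / 60 + seconds / 3600
    # Southern and western hemispheres are negative in decimal notation.
    return -value if ref in ("S", "W") else value

# Illustrative GPS tags as a camera might embed them
# (GPSLatitude / GPSLatitudeRef and the longitude equivalents):
lat = dms_to_decimal(40, 42, 46.0, "N")  # roughly 40.7128
lon = dms_to_decimal(74, 0, 21.6, "W")   # roughly -74.0060
```

Stripping that metadata before sharing a photo used to be a reasonable privacy measure; the unsettling point of this trend is that o3 can often reach a similar answer from pixels alone.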

Social media is awash with examples of users challenging ChatGPT with everything from restaurant menus to selfies, asking it to play 'GeoGuessr'—the popular game where players guess locations based on Street View images.

However, this could easily lead to privacy violations. Imagine someone grabbing a screenshot of your Instagram Story and using ChatGPT to uncover your location, a clear cause for concern.

Interestingly, TechCrunch recently tested o3 against an older model, GPT-4o, which lacks the new image-reasoning capabilities. Surprisingly, GPT-4o often returned the same correct answers as o3, but in less time. Yet, there were instances where o3 succeeded in identifying places that the older model could not. For example, when shown a photo of a quirky mounted rhino in a dimly lit bar, o3 correctly identified it as being from a Williamsburg speakeasy, while GPT-4o mistakenly suggested it was from a U.K. pub.

But don’t be fooled: o3 isn’t perfect. Several attempts at location deduction fell flat, with the AI getting stuck in loops or landing on the wrong place entirely. Users have also noted that accuracy is hit or miss.

This trend serves as a stark reminder of the emerging risks associated with advanced AI models. Currently, there seems to be a lack of safeguards to curb this new 'reverse location search' trend, and it hasn't been highlighted in OpenAI's safety reports for its latest models.

We've reached out to OpenAI for their take on this concerning issue and will provide updates if they respond. In the meantime, users need to tread carefully with this powerful tool!