
ChatGPT and “reverse location”: a new viral trend that sends chills down the spine
Reverse location: the new viral (and dangerous) ChatGPT feature © Shutterstock
What if an innocuous photo allowed an artificial intelligence to pinpoint exactly where you were? That is the dangerous game many Internet users have been playing since the arrival of OpenAI’s o3 and o4-mini models. A viral trend that worries as much as it fascinates, coming shortly after the tidal wave of Ghibli-style images.
Since OpenAI unveiled its new o3 and o4-mini models, with their advanced understanding of images, Internet users have been exploring their capabilities with sometimes unsettling enthusiasm. These generative AIs do not just recognize what is in a photo: they deduce what is not immediately visible. Camera angle, architectural style, typographic details… everything is used to trace the image back to a specific place.
Blurry, tilted, or partial images pose no major problem for them. Better (or worse) still, these models do not need EXIF data (the metadata embedded in images) to hit the bull’s eye. A bar storefront, a street corner, or even a menu is enough to point the algorithm in the right direction.
Some have fun turning ChatGPT into a GeoGuessr champion, the game that consists of guessing a location from Street View images. And it works: in several cases, the model guessed the right neighborhood… even the right establishment.
o3 is insane
I asked a friend of mine to give me a random photo
They gave me a random photo they took in a library
o3 knows it in 20 seconds and it’s right pic.twitter.com/0k8dxifkoy — Yumi (@izyuuumi) April 17, 2025
But behind the fun gadget lies a much more serious reality. The possibility of doing a “reverse location lookup” opens the door to abusive uses. Nothing prevents a stranger from submitting a photo from an Instagram story, or a portrait, to try to identify the place — and, by extension, the person. An automated form of doxxing, made accessible to the general public.
OpenAI claims to have put safeguards in place to prevent the identification of private individuals or sensitive locations… but caution remains in order. Because even without malicious intent, these new capabilities bring a burning question back to the fore: how far can we (or should we) go with contextual image recognition by AI?