Software engineer Vishnu Mohandas decided he would quit Google in more ways than one when he learned the tech giant had briefly helped the US military develop AI to study drone footage. In 2020, he left his job working on Google Assistant and also stopped backing up all of his images to Google Photos. He feared that his content could be used to train AI systems, even systems with no connection to the Pentagon project. “I don’t control any of the future outcomes that this will enable,” Mohandas thought. “So now, shouldn’t I be more responsible?”
Mohandas, who taught himself programming and is based in Bengaluru, India, decided he wanted to develop an alternative service for storing and sharing photos that is open source and end-to-end encrypted. Something “more private, wholesome, and trustworthy,” he says. The paid service he designed, Ente, is profitable and says it has over 100,000 users, many of whom are already part of the privacy-obsessed crowd. But Mohandas struggled to articulate to wider audiences why they should reconsider relying on Google Photos, despite all the conveniences it offers.
Then one weekend in May, an intern at Ente came up with an idea: Give people a sense of what some of Google’s AI models can learn from studying images. Last month, Ente launched https://Theyseeyourphotos.com, a website and marketing stunt designed to turn Google’s technology against itself. People can upload any photo they want to the website, which is then sent to a Google Cloud computer vision program that writes a startlingly thorough three-paragraph description of it. (Ente prompts the AI model to document small details in the uploaded images.)
One of the first photos Mohandas tried uploading was a selfie with his wife and daughter in front of a temple in Indonesia. Google’s analysis was exhaustive, even documenting the specific watch model that his wife was wearing, a Casio F-91W. But then, Mohandas says, the AI did something strange: It noted that Casio F-91W watches are commonly associated with Islamic extremists. “We had to tweak the prompts to make it slightly more wholesome but still spooky,” Mohandas says. Ente started asking the model to produce short, objective outputs—nothing dark.
The same family photo uploaded to Theyseeyourphotos now returns a more generic result that includes the name of the temple and the “partly cloudy sky and lush greenery” surrounding it. But the AI still makes a number of assumptions about Mohandas and his family, like that their faces are expressing “joint contentment” and the “parents are likely of South Asian descent, middle class.” It judges their clothing (“appropriate for sightseeing”) and notes that “the woman’s watch displays a time as approximately 2 pm, which corroborates with the image metadata.”
Google spokesperson Colin Smith declined to comment directly on Ente’s project. He directed WIRED to support pages that state uploads to Google Photos are only used to train generative AI models that help people manage their image libraries, like those that analyze the age and location of photo subjects. The company says it doesn’t sell the content stored in Google Photos to third parties or use it for advertising purposes. Users can turn off some of the analysis features in Photos, but they can’t prevent Google from accessing their images entirely, because the data are not end-to-end encrypted.