Yet Google and its hardware partners argue that privacy and security are a major focus of the Android AI approach. Justin Choi, vice president and head of the security team at Samsung Electronics' Mobile eXperience business, says its hybrid AI offers users “control over their data and uncompromising privacy.”
Choi describes how features processed in the cloud are protected by servers governed by strict policies. “Our on-device AI features provide another element of security by performing tasks locally on the device with no reliance on cloud servers, neither storing data on the device nor uploading it to the cloud,” Choi says.
Google says its data centers are designed with robust security measures, including physical security, access controls, and data encryption. When processing AI requests in the cloud, the company says, data stays within secure Google data center architecture and the firm is not sending your information to third parties.
Meanwhile, Galaxy’s AI engines are not trained with user data from on-device features, says Choi. Samsung “clearly indicates” which AI functions run on the device with its Galaxy AI symbol, and the smartphone maker adds a watermark to show when content has used generative AI.
The firm has also introduced a new security and privacy option called Advanced Intelligence settings to give users the choice to disable cloud-based AI capabilities.
Google says it “has a long history of protecting user data privacy,” adding that this applies to its AI features powered on-device and in the cloud. “We utilize on-device models, where data never leaves the phone, for sensitive cases such as screening phone calls,” Suzanne Frey, vice president of product trust at Google, tells WIRED.
Frey describes how Google products rely on its cloud-based models, which she says ensure that “consumer’s information, like sensitive information that you want to summarize, is never sent to a third party for processing.”
“We’ve remained committed to building AI-powered features that people can trust because they are secure by default and private by design, and most importantly, follow Google’s responsible AI principles that were first to be championed in the industry,” Frey says.
Apple Changes the Conversation
Rather than simply matching the “hybrid” approach to data processing, experts say Apple’s AI strategy has changed the nature of the conversation. “Everyone expected this on-device, privacy-first push, but what Apple actually did was say, it doesn’t matter what you do in AI—or where—it’s how you do it,” Doffman says. He thinks this “will likely define best practice across the smartphone AI space.”
Even so, Apple hasn’t won the AI privacy battle just yet: The deal with OpenAI—which sees Apple uncharacteristically opening up its iOS ecosystem to an outside vendor—could put a dent in its privacy claims.
Apple rejects Musk’s claims that the OpenAI partnership compromises iPhone security, saying there are “privacy protections built in for users who access ChatGPT.” The company says you will be asked for permission before your query is shared with ChatGPT, while IP addresses are obscured and OpenAI will not store requests—but ChatGPT’s data use policies still apply.
Partnering with another company is a “strange move” for Apple, but the decision “would not have been taken lightly,” says Jake Moore, global cybersecurity adviser at security firm ESET. While the exact privacy implications are not yet clear, he concedes that “some personal data may be collected on both sides and potentially analyzed by OpenAI.”