VB Transform 2024 returns this July! Over 400 enterprise leaders will gather in San Francisco from July 9-11 to dive into the advancement of GenAI strategies and engage in thought-provoking discussions within the community. Find out how you can attend here.
Elon Musk, the billionaire entrepreneur behind Tesla and SpaceX, declared on Monday that he would ban Apple devices from his companies if the iPhone maker integrates OpenAI’s artificial intelligence technology at the operating system level. The threat, posted on Musk’s social media platform X.com, formerly known as Twitter, came hours after Apple unveiled a sweeping partnership with OpenAI at its annual Worldwide Developers Conference.
“That is an unacceptable security violation,” Musk wrote in an X post, referring to Apple’s plans to weave OpenAI’s powerful language models and other AI capabilities into the core of its iOS, iPadOS and macOS operating systems. “And visitors will have to check their Apple devices at the door, where they will be stored in a Faraday cage,” he added, apparently referring to a shielded enclosure that blocks electromagnetic signals.
Escalating rivalry among tech giants
Musk’s broadside against Apple and OpenAI underscores the escalating rivalry and tensions among tech giants as they race for dominance in the booming market for generative AI. The Tesla CEO has been an outspoken critic of OpenAI, a company he helped found as a non-profit in 2015 before an acrimonious split, and is now positioning his own AI startup xAI as a direct competitor to Apple, OpenAI, and other major players.
But Musk is not alone in expressing concerns about the security implications of Apple’s tight integration with OpenAI’s technology, which will allow developers across the iOS ecosystem to tap the startup’s powerful language models for applications like natural language processing, image generation and more. Pliny the Prompter, a pseudonymous but widely respected cybersecurity researcher known for jailbreaking OpenAI’s ChatGPT model, called the move a “bold” but potentially risky step given the current state of AI security.
Security concerns loom large
“Time will tell! Bold move integrating to this extent, given the current state of llm security,” Pliny posted on X, using the acronym for large language models like OpenAI’s GPT series. In recent months, Pliny and other researchers have demonstrated the ability to bypass the safeguards on ChatGPT and other AI models, prompting them to generate harmful content or disclose confidential information used in their training data.
The tech industry has struggled in recent years with data breaches, cyberattacks and the theft of sensitive user information, raising the stakes for Apple as it opens its operating systems to a third-party AI. While Apple has long championed user privacy and insists OpenAI will respect its strict data protection policies, some security experts worry the partnership could create new vulnerabilities for bad actors to exploit.
From our perspective, Apple is essentially installing a black box into the heart of its operating system, and trusting that OpenAI’s systems and security are robust enough to keep users safe. But even the most advanced AI models today are prone to errors, biases and potential misuse. It’s a calculated risk on Apple’s part.
Apple and OpenAI both insist that the AI systems integrated into iOS will run entirely on users’ devices by default, rather than transmitting sensitive data to the cloud, and that developers leveraging Apple Intelligence tools will be subject to strict guidelines to prevent abuse. But details remain scarce, and some worry the allure of user data from Apple’s 1.5 billion active devices could create temptations for OpenAI to bend its own rules.
Musk’s tumultuous history with OpenAI
Musk’s history with OpenAI has been tumultuous. He was an early backer of the company and served as chairman of its board before departing in 2018 over disagreements about its direction. Musk has since criticized OpenAI for transforming from a non-profit research lab to a for-profit juggernaut and accused it of abandoning its original mission of developing safe and beneficial AI for humanity.
Now, with his xAI startup riding a wave of hype and a recent $6 billion fundraising round, Musk seems eager to fuel the narrative of an epic AI battle for the ages. By threatening to ban Apple devices from his companies’ offices, factories and facilities worldwide, the tech magnate is signaling he views the looming competition as no-holds-barred and zero-sum.
Whether Musk follows through with a wholesale Apple ban at Tesla, SpaceX and his other firms remains to be seen. As Meta’s chief AI scientist recently pointed out, Musk often makes “blatantly false predictions” in the press. The logistical and security challenges alone of enforcing such a policy among tens of thousands of employees would be enormous. Some also question whether Musk truly has the legal right as a chief executive to unilaterally ban workers’ personal devices.
But the episode highlights the strange alliances and enmities taking shape in Silicon Valley’s AI gold rush, where yesterday’s partners can quickly become today’s rivals and vice versa. With tech superpowers like Apple, Microsoft, Google and Amazon all now deeply in bed with OpenAI or developing their own advanced AI in-house, the battle lines are being drawn for a showdown over the future of computing.
As the stakes rise and the saber rattling intensifies, cybersecurity researchers like Pliny the Prompter will be watching and probing for any signs of vulnerabilities that could harm consumers caught in the middle. “We are going to have some fun Pliny!” quipped Comed, another prominent AI security tester, in a playful but ominous X exchange on Monday. Fun, it seems, is one word for it.