A major security flaw has been discovered that could expose data processed on Apple GPUs during AI workloads. While some of the affected chips have already been patched, many remain at risk.
AI processing is demanding, so the raw power of GPUs is often enlisted to handle it. When generative AI, like that found in text generation and prediction, is used, it runs a large language model (LLM) that analyses data incredibly quickly to produce responses. Unfortunately, because this works in a way many GPUs weren't traditionally designed for, it has some unintended consequences.
GPUs keep local memory that's easy for the processor to access. That memory tends not to be safeguarded, leaving it open to exploits like this one. As shown in the original LeftoverLocals report from Trail of Bits, if a bad actor can run code on your device, a program of just ten lines can read that local memory. In the case of generative AI, that means an attacker can reconstruct answers produced by the LLM.
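To make the failure mode concrete, here is a small Python sketch, not the actual exploit, that simulates what the report describes: GPU local memory is reused between kernel launches without being cleared, so a later "listener" kernel can read whatever the previous workload left behind. The names `FakeGPU`, `victim_kernel`, and `listener_kernel` are invented for illustration and do not come from the Trail of Bits code.

```python
# Toy simulation (assumed model, not real GPU code): local memory is a
# shared scratch block that is NOT zeroed between kernel dispatches.

LOCAL_MEM_WORDS = 16

class FakeGPU:
    """Models one block of GPU local memory reused across kernels."""
    def __init__(self):
        # Never cleared between dispatches -- the core of the flaw.
        self.local_mem = [0] * LOCAL_MEM_WORDS

    def dispatch(self, kernel):
        return kernel(self.local_mem)

def victim_kernel(local_mem):
    # An LLM-style workload stages sensitive data in local memory...
    secret = b"token:42"
    for i, byte in enumerate(secret):
        local_mem[i] = byte
    # ...and returns without wiping it.

def listener_kernel(local_mem):
    # The attacker's short "listener": dump whatever was left behind.
    return bytes(b for b in local_mem if b)

gpu = FakeGPU()
gpu.dispatch(victim_kernel)
leaked = gpu.dispatch(listener_kernel)
print(leaked)  # recovers the victim's leftover data
```

On real hardware the listener is a GPU compute kernel reading uninitialised local (threadgroup) memory, but the principle is the same: the fix is for the driver or hardware to clear that memory between workloads.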
This currently affects Apple, Qualcomm, AMD, and Imagination GPUs but not Nvidia and ARM, as confirmed by Trail of Bits.
Which Apple devices are safe?
According to the report, some devices, like the 3rd generation iPad Air, have been patched, but the issue is still present in the M2 Apple MacBook Air. The latest Apple devices like the iPhone 15 line and M3 devices don’t appear to have this problem, but it seems like many iPhones, iPads, and Macs before this point are still vulnerable.
The fact that the 3rd gen iPad Air could be patched to solve this problem is likely a good sign, but Trail of Bits has not reported on any updates to the rest of Apple’s lineup.
In a comment made to iMore, an Apple spokesperson thanked researchers for their hard work in spotting this problem and confirmed fixes have been made for M3 and A17 Apple silicon chip devices, but no such confirmation was made for older Apple products.
James is a staff writer and general jack of all trades at iMore. With news, features, reviews, and guides under his belt, he has always liked Apple for its unique branding and distinctive style. Originally buying a MacBook for music and video production, he has since gone on to join the Apple ecosystem with as many devices as he can fit on his person.
With a degree in Law and Media and being a little too young to move onto the next step of his law career, James started writing from his bedroom about games, movies, tech, and anything else he could think of. Within months, this turned into a fully-fledged career as a freelance journalist. Before joining iMore, he was a staff writer at Gfinity and saw himself published at sites like TechRadar, NME, and Eurogamer.
As his extensive portfolio implies, James was predominantly a games journalist before joining iMore and brings with him a unique perspective on Apple itself. When not working, he is trying to catch up with the movies and albums of the year, as well as finally finishing the Yakuza series. If you like Midwest emo music or pretentious indie games that will make you cry, he’ll talk your ear off.
Wotchered: Question is, which apps are using AI and maybe putting us at risk? And the same question about browsers?

Reply: The simple answer is pretty much all apps. All apps use the GPU, if only to display an output. This vulnerability has much more nuance, though; for one, it requires executing code locally on the device.