Update: Apple has provided iMore with the following comment on the XARA exploits:
The XARA exploits, recently disclosed to the public in a paper titled Unauthorized cross-app resource access on Mac OS X and iOS, target the OS X Keychain and Bundle IDs, HTML 5 WebSockets, and iOS URL schemes. While they absolutely need to be fixed, like most security exploits, they have also been needlessly conflated and overly sensationalized by some in the media. So, what's really going on?
What is XARA?
Simply put, XARA is the name being used to lump together a group of exploits that use a malicious app to gain access to the secure information transited by, or stored in, a legitimate app. They do this by placing themselves in the middle of a communications chain or sandbox.
What does XARA target exactly?
On OS X, XARA targets the Keychain database where credentials are stored and exchanged; WebSockets, a communication channel between apps and associated services; and Bundle IDs, which uniquely identify sandboxed apps, and can be used to target data containers.
On iOS, XARA targets URL schemes, which are used to move people and data between apps.
Wait, URL scheme hijacking? That sounds familiar...
Yes, URL scheme hijacking isn't new. It's why security-conscious developers will either avoid passing sensitive data via URL schemes, or at the very least take steps to mitigate the risks that arise when choosing to do so. Unfortunately, it appears that not all developers, including some of the biggest, are doing that.
So, technically, URL scheme hijacking is not an OS vulnerability so much as a poor development practice. It persists because no official, secure mechanism is in place to accomplish the desired functionality.
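Why hijacking is possible at all is easy to model: any app can claim any URL scheme, and when two apps claim the same one, the developer has no say in which app receives the open request. The following is a hypothetical toy simulation of that behavior (plain Python, not Apple's actual registration code), where a later registration silently displaces the earlier one:

```python
# Hypothetical model of a URL-scheme registry with no identity check:
# any app may claim any scheme, and a later claim displaces the earlier one.
registry = {}

def register_scheme(scheme, app_name):
    """Claim a URL scheme for an app. No verification is performed."""
    registry[scheme] = app_name

def open_url(url):
    """Route a URL to whichever app currently owns its scheme."""
    scheme = url.split("://", 1)[0]
    return registry.get(scheme)

register_scheme("bank", "LegitBankApp")
register_scheme("bank", "MaliciousApp")   # hijack: same scheme, later claim

# The sensitive callback now lands in the attacker's app.
print(open_url("bank://auth?token=secret"))  # -> MaliciousApp
```

The mitigation developers reach for is exactly what the model suggests: never treat data arriving via a URL scheme as coming from a trusted sender.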
What about WebSockets and iOS?
WebSockets is technically an HTML5 issue and affects OS X, iOS, and other platforms including Windows. While the paper gives an example of how WebSockets can be attacked on OS X, it doesn't give any such example for iOS.
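The WebSocket weakness is simpler than it sounds: the app and its helper talk over a numbered local port, and whichever process binds that port first gets the traffic. A minimal sketch with plain TCP sockets (the port here is arbitrary; real apps listen on their own fixed ports):

```python
import socket

# Whichever process binds a local port first receives the connections
# intended for it; the OS refuses a second bind on the same port.
attacker = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
attacker.bind(("127.0.0.1", 0))          # attacker claims a free port first
port = attacker.getsockname()[1]
attacker.listen(1)

victim = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    victim.bind(("127.0.0.1", port))     # legitimate app arrives second
except OSError as err:
    print("Legitimate app cannot bind:", err)

# Any client connecting to 127.0.0.1:port now talks to the attacker,
# and nothing in the protocol authenticates who is listening.
attacker.close()
victim.close()
```

Nothing platform-specific is involved, which is why this particular issue reaches beyond OS X.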
So XARA exploits primarily affect OS X, not iOS?
Since "XARA" lumps together several different exploits under one label, and the iOS exposure seems much more limited, then yes, that appears to be the case.
How are the exploits being distributed?
In the examples given by the researchers, malicious apps were created and released to the Mac App Store and iOS App Store. (The apps, especially on OS X, could obviously be distributed via the web as well.)
So were the App Stores or app review tricked into letting these malicious apps in?
The iOS App Store was not. Any app can register a URL scheme. There's nothing unusual about that, and hence nothing to be "caught" by the App Store review.
For the App Stores in general, much of the review process relies on identifying known bad behavior. If any or all of the XARA exploits can be reliably detected through static analysis or manual inspection, it's likely those checks will be added to the review process to prevent the same exploits from getting through in the future.
So what do these malicious apps do if they're downloaded?
Broadly speaking, they insert themselves into the communications chain or sandbox of (ideally popular) apps, and then wait and hope you either start using the app (if you don't already) or start passing data back and forth in a way they can intercept.
For OS X Keychains, it includes pre-registering or deleting and re-registering items. For WebSockets, it includes preemptively claiming a port. For Bundle IDs, it includes getting malicious sub-targets added to the access control lists (ACLs) of legitimate apps.
For iOS, it includes hijacking the URL scheme of a legitimate app.
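The Keychain trick deserves a closer look, because it hinges on one detail: whoever creates an item controls its access list. A hypothetical toy model (deliberately not Apple's Keychain API) of why merely pre-registering an item is enough:

```python
# Toy model of a keychain where the creator of an item sets its ACL,
# and later writers to an existing item inherit that ACL unchanged.
keychain = {}

def create_item(service, creator):
    """The first creator of an item decides who may read it."""
    keychain.setdefault(service, {"acl": {creator}, "secret": None})

def store_secret(service, app, secret):
    """A legitimate app stores its password; an existing item is reused."""
    create_item(service, app)            # no-op if the item already exists
    keychain[service]["secret"] = secret
    keychain[service]["acl"].add(app)

def read_secret(service, app):
    item = keychain[service]
    return item["secret"] if app in item["acl"] else None

create_item("com.example.icloud", "MaliciousApp")   # attacker registers first
store_secret("com.example.icloud", "LegitApp", "hunter2")

print(read_secret("com.example.icloud", "MaliciousApp"))  # -> hunter2
```

The legitimate app writes its secret into an item it never created, and the attacker reads it back because it was on the access list from the start.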
What sort of data is at risk from XARA?
The examples show Keychain, WebSockets, and URL scheme data being snooped as it's transited, and Sandbox containers being mined for data.
What could be done to prevent XARA?
While not pretending to understand the intricacies involved in implementing it, a way for apps to securely authenticate any and all communications would seem to be ideal.
Deleting Keychain items sounds like it has to be a bug, but pre-registering one seems like something authentication could protect against. It's non-trivial, since new versions of an app will want to, and should be able to, access the Keychain items of older versions, but solving non-trivial problems is what Apple does.
Since Keychain is an established system, however, any changes made would almost certainly require updates from developers as well as Apple.
Sandboxing just sounds like it needs to be better secured against ACL additions.
Arguably, absent a secure, authenticated communications system, developers shouldn't be sending data through WebSockets or URL Schemes at all. That would, however, greatly impact the functionality they provide. So, we get the traditional battle between security and convenience.
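To give a sense of what "authenticated communications" could mean in practice, here's a generic sketch using a shared secret and an HMAC; this is a standard construction, not anything Apple has announced, and how the two apps would securely obtain the shared key is exactly the hard part being hand-waved away:

```python
import hmac
import hashlib

# Generic sketch: two apps sharing a secret can authenticate messages
# crossing an untrusted channel. The key provisioning step is assumed.
SHARED_KEY = b"provisioned-out-of-band"

def seal(message: bytes) -> bytes:
    """Prefix the message with a SHA-256 HMAC tag."""
    tag = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    return tag + message

def open_sealed(blob: bytes) -> bytes:
    """Verify the tag before trusting the message; reject forgeries."""
    tag, message = blob[:32], blob[32:]
    expected = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("message not from the expected app")
    return message

blob = seal(b"token=secret")
assert open_sealed(blob) == b"token=secret"   # legitimate message passes

forged = b"\x00" * 32 + b"token=forged"
try:
    open_sealed(forged)                        # interloper's forgery fails
except ValueError as err:
    print(err)
```

An interloper sitting on the channel could still see authenticated-but-unencrypted traffic, so a real fix would pair authentication with encryption; the sketch only shows why forged messages become detectable.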
Is there any way to know if any of my data is being intercepted?
The researchers propose that malicious apps wouldn't just take the data, but would record it and then pass it on to the legitimate recipient, so the victim wouldn't notice.
On iOS, if URL schemes are really being intercepted, the intercepting app would launch rather than the real app. Unless it convincingly duplicates the expected interface and behavior of the app it's intercepting, the user might notice.
Why was XARA disclosed to the public, and why hasn't Apple fixed it already?
The researchers say they reported XARA to Apple six months ago, and that Apple asked for that much time to fix it. Once that time had elapsed, the researchers went public.
Strangely, the researchers also claim to have seen attempts by Apple to fix the exploits, but that those attempts were still subject to attack. That makes it sound, at least on the surface, as though Apple was working on fixing what was initially disclosed, ways to circumvent those fixes were found, but the clock wasn't reset. If that's an accurate read, saying six months have passed is a little disingenuous.
Apple, for its part, has fixed numerous other exploits over the last few months, many of which were arguably greater threats than XARA, so there's absolutely no case to be made that Apple is uncaring or inactive when it comes to security.
What priorities they have, how difficult this is to fix, what the ramifications are, how much changes, what additional exploits and vectors are discovered along the way, and how long it takes to test are all factors that need to be carefully considered.
At the same time, the researchers know the vulnerabilities and may have strong feelings about the potential that others have found them and may use them for malicious purposes. So, they have to weigh the potential damage of keeping the information private versus making it public.
So what should we do?
There are many ways to get sensitive information from any computer system, including phishing, spoofing, and social engineering attacks, but XARA is a serious group of exploits and they need to be fixed (or systems need to be put in place to secure against them).
No one needs to panic, but anyone using a Mac, iPhone, or iPad should be informed. Until Apple hardens OS X and iOS against the range of XARA exploits, the best practices for avoiding attack are the same as they've always been — don't download software from developers you don't know and trust.
Where can I get more information?
Our security editor, Nick Arnott, has provided a deeper dive into the XARA exploits. It's a must-read:
Nick Arnott contributed to this article. Updated June 19 with comment from Apple.
Rene Ritchie is one of the most respected Apple analysts in the business, reaching a combined audience of over 40 million readers a month. His YouTube channel, Vector, has over 90 thousand subscribers and 14 million views and his podcasts, including Debug, have been downloaded over 20 million times. He also regularly co-hosts MacBreak Weekly for the TWiT network and co-hosted CES Live! and Talk Mobile. Based in Montreal, Rene is a former director of product marketing, web developer, and graphic designer. He's authored several books and appeared on numerous television and radio segments to discuss Apple and the technology industry. When not working, he likes to cook, grapple, and spend time with his friends and family.
Thanks Rene for clarifying the issue !
Thanks for looking into this. Apple should have commented on this as it's blowing up; they should be seen as transparent and working hard to solve it instead of letting it blow up online. Thanks again. I'm sure you had to talk to a few people to get all this info.
Sorry, but I have to disagree with much of this article's take. I've read the actual research paper, and what it boils down to is that most of OS X's and iOS's basic security features are fundamentally flawed. I feel this summary is minimizing how broken things are.

At the same time, this is not likely to be a widespread problem with Mac App Store or iOS App Store apps (it's a different story for OS X apps off the web, or possible malware installed locally on a Mac or iOS device). Apple can always pull an app or revoke a developer if they're caught. But it's the catching them that's currently the tricky part.

The conclusion is true: "don't install software you don't trust." But it's much more complicated, because users have been conditioned to trust software on the Mac and iOS App Stores, and that trust is entirely misplaced based on the flaws this research uncovers. As it stands now, apps that pass the vetting process can do very bad things easily. A user can think that great new free app is safe and secure, and on the surface it is. But under the hood, it's harvesting your keychain or private data from a sensitive app or service (Dropbox, Endnote, WeChat, Facebook, Pinterest, Keychain, password manager, etc.).

The worrisome part about all these flaws is that the malware can be crafted to keep itself invisible to the user. Their proof-of-concept apps provided end-to-end hacking via these methods, and especially on OS X it would currently be very hard to know a background process with no UI isn't misbehaving. iOS is inherently more secure, as the inter-app interactions are much more limited, but even there, advanced attacks that call up and pass along a Facebook or Google login through an unrelated app (Pinterest is their example) can be made transparent to the user.

Two other key points. First, this isn't limited to Apple devices; some of these vulnerabilities are platform-agnostic, like WebSockets, or have already been seen in similar form on another platform.
And second, an app doesn't have to be "evil" to do this (to some extent) on any platform, regardless of security measures. Uber on Android harvested contacts, SMS, local WiFi and router info, etc., and Twitter on iOS harvested the list of other installed apps to serve targeted ads. We live in an age where even established and trusted companies look to invade our privacy in big and small ways all the time.

Lastly, the research paper has solid suggestions on how to mitigate these threats. Unfortunately, most of them require a major retooling of the basic security structures of OS X and iOS (and creating some new ones for developers to then use securely). This is a major set of problems, and it's going to take some major work on the part of Apple and developers to solve. The good news is it looks like OS X 10.11 does take some steps to improve things. But I'm not sure it goes far enough, and I'm not sure how this gets ported back to earlier OS X versions. And iOS will be left with some major regression of inter-app interactions unless a new, secure scheme is created.
FWIW, Nick and I both read the paper multiple times, and spoke with several developers and security people familiar with the systems in question. Also, your comment is pretty much in line with what we wrote. I do think people trust App Stores more than the web, but they also trust Macs more than Windows. The reason is the same — generally, they've been less targeted and more secure over time. My guess is we'll see specific fixes for this set of unrelated exploits, grouped together by name, as soon as is engineeringly possible. :)
Fair enough. There has been plenty of hyperventilation by some of the more mainstream press, and I understand the purpose of the article is to help correct that. To be fair, even just a few years ago there were much easier ways to do much bigger damage to pretty much every platform. So it's certainly commendable that Apple has been able to ship a (mostly) functional sandbox model on two platforms for several years. In many respects, they've led the way in this paradigm shift. But flaws like these show how much more work there is to do, and it's frustrating to think about how much work has already been put into adapting to this more restrictive security model that in the end has been shown to be fundamentally broken.

Also, at times Apple's response to such flaws has been somewhat lackadaisical (to be kind). In this instance, I don't think it's wrong of the researchers to chide Apple for its only response over the six months. Solely implementing a random ID to try to mitigate the iCloud attack is not much of an attempt. Again, it does seem like some 10.11 features may also be targeting these issues (and they likely didn't/couldn't share such code before WWDC), but I'm not sure what good that does for everyone now. I'm not sure we'll see much of a response before that, and I don't hold out much hope Apple will port many such features back to, say, 10.9.x and 10.10.x. Upgrading the App Store vetting process on OS X and iOS and releasing a background task that monitors for malicious activity on OS X are probably the only realistic responses before OS X 10.11 and iOS 9, and I'm not sure why that couldn't have been done months ago.
Uh, no. Apple can't say much. If they do, it will be blown way out of proportion, like when they admitted there were a few graphics glitches in their initial Maps app. That app is arguably better than Google Maps now, but literally everyone (technical or not) has heard the fandroid insults that were only made possible by Apple's admission. If Apple comments, it will really blow up, regardless of the fact that there are no known exploits, or the fact that the method is rampant (and completely unprotected) on Android.
How is Apple Maps now better than Google Maps? Apple Maps doesn't have street view yet. Posted via the iMore App for Android
Glad you asked. I find Street View to be pretty well useless compared with Flyover mode in Apple Maps. While Street View is cute, allowing you to virtually walk, Apple's Flyover mode lets you see anything from any angle and allows you to virtually fly over the area, which is much more useful.
"That makes it sound, at least on the surface, that Apple was working on fixing what was initially disclosed, ways to circumvent those fixes were found, but the clock wasn't reset. If that's an accurate read, saying 6 months has passed is a little disingenuous." - Why on earth would you reset the clock? Apple responded as they usually do with security threats: badly. Nothing new to see.
This. ATM I can’t even remember what software I’ve purchased and where from. Not sure what risk I run and Apple say nothing. Mind you the community will come up with something so that’s a silver lining.
If you read the "article" it starts by stating up front that these issues are well known and ignored on both Android and Windows, and that the "researchers" thought they'd try similar techniques targeting "more secure" Apple systems. And it reads like a high school term paper, using bad English like "lessons learnt". Indiana University should be very embarrassed.
'Learnt' or 'learned'? - Oxford Dictionaries
Both are acceptable, but learned is often used in both British English and American English, while learnt is much more common in British English than in American English. We learned the news at about three o'clock.
They learnt the train times by heart. There are a number of other verbs which follow the same pattern in forming the past tense and past participle: I burned/burnt the toast by mistake.
He dreamed/dreamt about his holiday.
Luke kneeled/knelt down to find his contact lens.
Tanya spoiled/spoilt her dinner.
She spelled/spelt her surname an unusual way.
Yeah, I know it's "acceptable" English to say "learnt" (checked it myself). They also say "securer" a lot. e.g. MAC (sic) OSX is "securer" than Windows or Android. My point is it's a difficult read, not only for the technical nature of it but also due to its extreme awkwardness. When a writer or "researcher" can't even spell "Mac" (that they are writing about!) it undermines their credibility, and by extension, that of IU.
It's not only acceptable, it's the more common way of doing it in "Received" British and International English. Similar spellings include "spelt" instead of "spelled" and "dreamt" instead of "dreamed." As to why they (inconsistently) used "MAC OS X" instead of "Mac OS X," I am at a loss. But it's not a misspelling. Do I smell a Purdue grad? ;-)
It's a minor thing I suppose, but the paper wasn't written in the UK, and the company that produces the software isn't in the UK either. And no, you don't smell a Purdue grad (but you could probably smell one from some distance). I do consider 'MAC' a misspelling (and it's commonly referred to as such). I didn't see a single properly capitalized "Mac" in the entire paper, but I don't want to look at it again. I've come to expect platform haters to use 'MAC' as if it were an abbreviation; in fact, it is in other contexts. It's kind of a red flag that the writer has no experience with the Macintosh. The paper is full of unsubstantiated assumptions, such as that "MAC" is "less studied" than other platforms. I call bollocks :-) on that, too. What could be more 'studied' than the platforms that have been most copied? What is Windows but a rehash of Mac? Ditto with iOS and Android. Google studied iOS plenty; they had direct access to it as a partner. It's amazing it took them as long as it did to reverse engineer it, considering the head start they had. Of course, MSFT took 10 years to come up with a credible work-alike to the Mac, while also having access to pre-release systems, but their users really couldn't care less, were plenty happy with DOS, so why should they bother? :-)
Not much explanatory meat from Apple. It's been six months with never a whisper as to personal data at risk. This is a bad Apple. My security app points to MITM attacks every day and blocks them. I knew three months ago something was wrong when iStat was reporting activity that should not have been happening. That's when I added the security monitor for protection.
That explanation is crap. Much like most scripted answers from Apple.
Can you add a bit more meat to "…when iStat was reporting activity that should not have been happening", please?
Seems like a list of the offending apps is called for.
It's not an OS flaw it's the developers? Are you interviewing with Apple?
I'm sure if this were Windows the sky would be falling and it'd be yet another reason to switch to an iPhone, lol.
Nice apology on behalf of Apple, Rene. Bad things will happen in any business environment. The difference between the good companies and the bad ones is how they react to what happened. Six months and no fix? This is a serious security issue. You'd think they would have had their best and brightest working on it. Oh wait, maybe they're trying to fix iOS 8.whatever.
Yikes, they're just fixing a problem that was reported six months ago? Have they fixed the iMessage security flaw too? The one that allows anyone to crash any iPhone.
Wow if this was Android Rene's tone would have been so much different. Posted via the iMore App for Android