Jekyll apps: How they attack iOS security and what you need to know about them

While not new, and nothing that should cause a panic, research into Jekyll apps could help Apple better secure iOS and keep our data safer.

Today researchers Tielei Wang, Kangjie Lu, Long Lu, Simon Chung, and Wenke Lee from Georgia Tech gave a talk at the 22nd USENIX Security Symposium and revealed the details of how they got a so-called "Jekyll app" through the App Store approval process and into a position where it could perform malicious tasks. Their methods highlight several challenges to the effectiveness of Apple's App Store review process, as well as to security in iOS. The researchers immediately pulled their app from the App Store after downloading it to their test devices, but demonstrated techniques that others could use to sneak malware past Apple's reviewers.

The details of Apple's app review process are not publicly known, but aside from a few notable exceptions it has been largely successful in keeping malware away from iOS devices. The basic premise of a Jekyll app is to submit a seemingly harmless app to Apple for approval that, once published to the App Store, can be exploited to exhibit malicious behavior. The concept is fairly straightforward, but let's dig into the details.

The App Store review process

When a developer submits their app to Apple for review, the app is already compiled, meaning that Apple does not have the ability to view the actual source code. It is believed that the two primary components of Apple's review process are a hands-on review of the app and static analysis of the application binary. The hands-on review consists of Apple actually putting the app on a device and using it to make sure that it meets the App Review Guidelines and does not violate any of Apple's policies. The static analysis portion is likely an automated process that looks for any indication of linking to private frameworks or use of private APIs in the compiled code. Apple has a number of private frameworks and APIs that are necessary for the functionality of iOS and are used by system apps and functions, but for one reason or another are not permitted for use by developers. If an app links to a private framework or calls a private API, the static analysis will usually detect this and the app will be rejected from the App Store.

A Jekyll app begins like any normal app that you can find in the App Store. In this particular case, the researchers used an open source Hacker News app as their starting point. Under normal conditions, this app connects to a remote server, downloads news articles, and displays them to the user. This is exactly the functionality that Apple would see during the review process: a functioning app that meets their guidelines, static analysis would reveal no use of private frameworks or APIs, and the app would likely be approved for the App Store. Once a Jekyll app has been approved and released into the App Store, that's when things take a devious turn.

Inside the Jekyll app, the researchers planted vulnerabilities in their code, providing an intentional backdoor. After the app had made it onto the App Store and they were able to download it to their test devices, the researchers placed specially crafted data on their news server for the app to download, data designed to exploit the vulnerabilities they had planted. By exploiting a buffer overflow vulnerability in the app, the researchers were able to alter the execution of the app's logic. One of the ways they used this was to load numerous "gadgets" spread throughout their code. Each gadget is a small piece of code that performs some task. With the ability to alter the flow of execution, the researchers could chain multiple gadgets together, causing the app to perform tasks it could not perform originally. But in order to locate these gadgets and call the desired pieces of code, the researchers needed to be able to reliably determine the memory locations of those pieces of code. To do that, they would need to know the layout of their app's memory on a given device.
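
To make that concrete, here is a hypothetical sketch of the kind of planted flaw being described: an unchecked copy of server-supplied data into a fixed-size buffer. The function and names are invented for illustration (and written in Swift rather than the Objective-C the original app would have used); they are not taken from the researchers' app.

```swift
import Foundation

// Hypothetical sketch only: names and structure are illustrative, not taken
// from the researchers' app. The bug is a classic unchecked copy of
// server-supplied data into a fixed-size buffer.
func handleArticlePayload(_ payload: Data) {
    let bufferSize = 64
    let buffer = UnsafeMutablePointer<UInt8>.allocate(capacity: bufferSize)
    defer { buffer.deallocate() }

    payload.withUnsafeBytes { (bytes: UnsafeRawBufferPointer) in
        // The copy trusts the payload's own length instead of clamping it to
        // bufferSize, so an over-long payload from the server overwrites
        // adjacent memory. Corrupting the right control data lets an attacker
        // redirect execution to "gadgets" already compiled into the app.
        memcpy(buffer, bytes.baseAddress, payload.count)
        // A safe version would copy min(payload.count, bufferSize) bytes.
    }
}
```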

iOS employs two notable security mechanisms to hamper buffer overflow attacks: Address Space Layout Randomization (ASLR) and Data Execution Prevention (DEP). ASLR works by randomizing the allocation of memory for processes and their various components. By randomizing where those components are loaded into memory, it becomes very difficult for an attacker to reliably predict the memory address of any given piece of code they might want to call. DEP strengthens the protection against buffer overflow attacks by ensuring that regions of memory that can be written to and regions of memory that can be executed remain separate. This means that if an attacker is able to write to a piece of memory, for instance to insert a malicious piece of their own code, they should never be able to execute it. And if they are able to execute what's in a particular piece of memory, that memory will be memory they are not permitted to write to.

The researchers noted a weakness in the iOS implementation of ASLR: iOS only enforces module-level randomization. This means that each executable module (the app binary, a library, and so on) is assigned a random location in memory in which to operate. Within each of these modules, however, the memory layout stays the same, making it predictable. As a result, if you can get the memory address of a single piece of code, you can infer the memory layout of the entire module, allowing you to call any other piece of code within it. To acquire such an address, the researchers planted information-disclosure vulnerabilities in their app that leak memory information about the app's modules. That information is sent back to the server, which can work out the memory layout of the entire app, determine the address of any piece of code it is interested in running, and execute it arbitrarily.
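
A quick back-of-the-envelope sketch shows why a single leaked address defeats module-level ASLR. All of the numbers below are invented; in the real attack the fixed offsets come from the attacker's own copy of the binary, and the runtime address comes from the planted information leak.

```swift
// Why module-level ASLR is weak once one address leaks (all values invented).
typealias Address = UInt

// Offsets within the module are constant for a given build of the binary.
let leakedSymbolOffset: Address = 0x1A2B0   // offset of the leaked function
let targetGadgetOffset: Address = 0x3C4D0   // offset of the code the attacker wants

// Address leaked at runtime by the planted information-disclosure bug.
let leakedRuntimeAddress: Address = 0x0000000102A4A2B0

// Only the module's base is randomized, so recover the base once and every
// other address in the module follows.
let moduleBase = leakedRuntimeAddress - leakedSymbolOffset
let targetGadgetAddress = moduleBase + targetGadgetOffset
```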

As for DEP, it is generally intended to prevent an attacker from exploiting a buffer overflow in an app over which they have limited control. A Jekyll app is a much different scenario, in that the attacker is also the developer of the app being exploited. In this situation, they don't need to write a payload into memory and then execute it. Any payload or malicious code that an attacker would normally need to write to memory as part of a buffer overflow exploit, a Jekyll app developer can simply include in the code of the original app. They can then use the buffer overflow to alter the flow of execution and load the gadgets they want. DEP on other systems has been shown to be susceptible to what is called return-oriented programming, in which an attacker bypasses DEP by reusing code that already exists in memory. In a Jekyll app, the developer can plant whatever code they will later need, effectively bypassing DEP by reusing their own code that they've put in place.

At this point, the researchers have an app in which they have embedded a number of code gadgets which they can now call and chain together at will, and they are able to alter the flow of the app's logic without the user's knowledge. They could use this to perform behavior that would normally get an app rejected from the App Store, such as uploading a user's address book to their server (after first convincing the user to grant access to their contacts). But iOS restricts apps to their own sandbox, and Apple won't allow apps that use private APIs, so the impact of a Jekyll app is still fairly limited, right?

Private parts

As mentioned previously, Apple will generally reject any app that links to private frameworks or calls private APIs. Due to the lack of transparency we can only guess at how exactly Apple goes about detecting these, but the most likely answer is that Apple uses static analysis tools to detect any private frameworks that have been linked to, or any private methods that have explicitly been used in the code. But with a Jekyll app, we've seen that the researchers have the ability to dynamically alter code, so how does that affect private APIs?

There are two private APIs of particular interest here: dlopen() and dlsym(). dlopen() allows you to load and link a dynamic library given just its file path. It just so happens that private frameworks always reside in the same location on a device, so those paths are easy enough to figure out. dlsym() allows you to look up the memory address of a specified function in a framework loaded by dlopen(), which is exactly what you would need in order to call a private method in a Jekyll app. So if the researchers can manage to locate dlopen() and dlsym(), they can use those two functions to easily load any other private API on the device.
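
As a rough illustration of why those two functions matter, here is what loading a framework by path and resolving a symbol from it looks like in Swift; the framework path and function name below are placeholders, not actual private APIs.

```swift
import Darwin

// Illustrative sketch only: the framework path and symbol name are made up.
if let handle = dlopen("/System/Library/PrivateFrameworks/Example.framework/Example", RTLD_NOW) {
    defer { _ = dlclose(handle) }
    if let symbol = dlsym(handle, "SomePrivateFunction") {
        // Cast the raw symbol address to a callable C function type and invoke it.
        typealias PrivateFn = @convention(c) () -> Void
        let privateFunction = unsafeBitCast(symbol, to: PrivateFn.self)
        privateFunction()
    }
}
```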

Fortunately for the researchers, these two APIs are commonly used by public frameworks, which call them through what are known as trampoline functions. Using a debugger, the researchers were able to identify the offsets of these trampoline functions relative to the beginning of some public frameworks. Using the information-disclosure vulnerabilities discussed above, which let them leak the memory layout of any given module, the researchers can use these known offsets to point to the trampoline functions for dlopen() and dlsym() within their app. With pointers to those functions, the researchers can now dynamically load any private framework and call any private API in their app. And remember, none of this happens while Apple is reviewing the app. It only gets triggered after the app has been approved.
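
And here, roughly, is how a leaked framework base plus a pre-measured trampoline offset turns into a callable function without dlopen() ever being named in the app's compiled code. The addresses, offset, and path are again invented for illustration.

```swift
import Darwin

// Illustrative only: the base comes from the info leak, the offset from
// measurements made in a debugger ahead of time; these values are placeholders.
typealias DlopenFn = @convention(c) (UnsafePointer<CChar>?, Int32) -> UnsafeMutableRawPointer?

let leakedFrameworkBase: UInt = 0x0000000184000000   // leaked at runtime
let dlopenTrampolineOffset: UInt = 0x2F10            // known in advance

if let trampoline = UnsafeRawPointer(bitPattern: leakedFrameworkBase + dlopenTrampolineOffset) {
    let dlopenViaTrampoline = unsafeBitCast(trampoline, to: DlopenFn.self)
    // From here the app can load private frameworks exactly as in the previous
    // sketch, even though its own binary never references dlopen() directly.
    "/System/Library/PrivateFrameworks/Example.framework/Example".withCString { path in
        _ = dlopenViaTrampoline(path, RTLD_NOW)
    }
}
```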

The attack

Now that we see how the researchers can alter the flow of their app and call private APIs, let's see what that amounts to in terms of malicious behavior in a Jekyll app.

The researchers noted a number of different attack possibilities for both iOS 5 and 6 (though the list should not be taken as a complete catalog of possible attacks). In iOS 5 they were able to send SMS and email without any user interaction or notification. By using private APIs to hand SMS and emails directly to the iOS processes responsible for actually sending those messages from the device, the Jekyll app was able to send them without showing anything to the user. Fortunately, the way these operations work has since changed, and these attacks do not work as of iOS 6.

In iOS 5 and 6, the researchers were able to access private APIs for posting tweets, accessing the camera, dialing phone numbers, manipulating Bluetooth, and stealing device information, all without user intervention. While posting unauthorized tweets may not be the end of the world, the others are cause for a little more concern. Access to your camera would mean an attacker could covertly take photos and send them back to their server. Dialing phone numbers without the user's knowledge could be used to make toll calls, or even to set up call forwarding so that all of a victim's incoming phone calls are forwarded to another number. Clearly, when an app can access private methods, things can get creepy, and it's apparent why Apple restricts access to these functions.

Addressing the problem

Unfortunately, Apple's current review process isn't set up to detect this type of behavior. Apple only reviews the app's behavior as it is at the time of review. If its behavior is altered once it is live in the App Store, Apple is not equipped to detect the change or to monitor the real-time behavior of apps after they have gone live. Apple could require developers to submit their source code as well, but it would be infeasible for Apple to inspect the source code of every application submitted to the App Store. Even if they could inspect every line of code, either manually (not even close to possible) or with automated tools, bugs are often not easy to spot visually in code, especially when a malicious developer is determined to hide them intentionally. The researchers did say that Apple responded to their findings with appreciation, but the researchers do not know what, if anything, Apple plans to do about the issues. It's also worth noting that these challenges are not unique to Apple.

There also isn't much that users can do for themselves in this case. While you could proxy your device's traffic to try and see what it's doing, a developer intent on hiding what they're up to could easily encrypt the app's traffic. They could also use certificate pinning to ensure that nobody is able to perform a man-in-the-middle attack to decrypt the traffic. If a user had a jailbroken device, it's possible that they could perform real-time debugging while the app is running to determine what it's doing, but this is well beyond the capabilities of most users. A Jekyll app could also be set up to only attack certain users, so even if a person knowledgeable enough to perform such debugging installed the app on their device, there would still be no guarantee that they could easily get it to exhibit the malicious behavior.
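
Certificate pinning itself is an ordinary, documented technique rather than anything exotic. A minimal sketch with URLSession might look like the following, assuming a copy of the server's certificate is bundled with the app as "pinned.cer" (the file name is made up).

```swift
import Foundation
import Security

// Minimal pinning sketch: only trust the connection when the certificate the
// server presents is byte-for-byte identical to the copy bundled in the app.
final class PinningDelegate: NSObject, URLSessionDelegate {
    func urlSession(_ session: URLSession,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {
        guard let trust = challenge.protectionSpace.serverTrust,
              let serverCert = SecTrustGetCertificateAtIndex(trust, 0),
              let pinnedURL = Bundle.main.url(forResource: "pinned", withExtension: "cer"),
              let pinnedData = try? Data(contentsOf: pinnedURL) else {
            completionHandler(.cancelAuthenticationChallenge, nil)
            return
        }
        let serverData = SecCertificateCopyData(serverCert) as Data
        // A forged certificate from a man-in-the-middle proxy won't match the pin.
        if serverData == pinnedData {
            completionHandler(.useCredential, URLCredential(trust: trust))
        } else {
            completionHandler(.cancelAuthenticationChallenge, nil)
        }
    }
}
```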

iOS 7 and what's left to do

One piece of information the researchers were able to share with iMore is that many of the attacks they placed in their Jekyll app did not work on iOS 7. While we don't know specifically which ones still worked and which didn't, it's possible that Apple mitigated some of the threats in a similar fashion to how they broke the ability to send SMS and email without user interaction in iOS 6. While this doesn't directly address underlying issues in iOS that allow for dynamic code execution, it's not entirely clear if that's something Apple could, or even should do.

Altering the behavior of an app based on responses from a server is nothing new; it's just usually not employed with malicious intent. Many perfectly legitimate apps in the App Store make use of remote configuration files to determine how they should behave. As an example, a TV network might make an app that behaves differently during the slow summer months than it does in the fall, when everybody's favorite shows are starting back up. It would be reasonable and perfectly legitimate for the app to periodically check with the server to find out whether it should be in summer or fall mode, so it knows what content to display and how.
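
A benign version of that pattern is only a few lines. This sketch assumes a made-up config URL and JSON key; the same structure, pointed at a different flag, is what makes a server-controlled Jekyll-style app so hard to catch in a one-time review.

```swift
import Foundation

// Sketch of an ordinary remote configuration check (URL and keys are invented).
struct AppConfig: Decodable {
    let seasonMode: String   // e.g. "summer" or "fall"
}

func fetchConfig(completion: @escaping (AppConfig?) -> Void) {
    let url = URL(string: "https://example.com/config.json")!
    URLSession.shared.dataTask(with: url) { data, _, _ in
        let config = data.flatMap { try? JSONDecoder().decode(AppConfig.self, from: $0) }
        completion(config)
    }.resume()
}

// The app periodically asks the server which mode it should be in.
fetchConfig { config in
    if config?.seasonMode == "fall" {
        // show the fall lineup
    } else {
        // show summer reruns
    }
}
```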

There are also legitimate reasons for apps to obfuscate and discreetly hide pieces of code. A developer of a news app might embed authentication credentials in the app to allow it to authenticate with their server, but obfuscate that information to make it difficult for somebody to retrieve the credentials by analyzing the app.
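
A toy sketch of that idea, with placeholder bytes and mask: the credential is stored XOR-ed and only reassembled at runtime, so it never appears as a readable string in the binary. This deters casual inspection rather than providing real protection.

```swift
import Foundation

// Illustrative only: the bytes and mask below are placeholders, not a real key.
let obfuscatedKey: [UInt8] = [0x3A, 0x0F, 0x1D, 0x08, 0x1B, 0x0D, 0x52, 0x57]
let xorMask: UInt8 = 0x6B

// Reassemble the secret only at runtime, right before it is needed.
let apiKey = String(bytes: obfuscatedKey.map { $0 ^ xorMask }, encoding: .utf8)
```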

The bottom line

The team at Georgia Tech has provided some very interesting research. In evaluating Apple's security mechanisms in iOS and practices in their App Store review process, they were able to uncover weaknesses that could be exploited to get malicious apps onto users' devices. However, the same result can be accomplished through simpler means.

A malicious developer could obfuscate calls to private APIs by breaking them up across multiple variables that would later be combined together into a single string of text that could call the API. The developer could use a value in a simple configuration hosted on their server to tell the app whether or not to run that code. With the flag disabled during the review process, the malicious behavior would go undetected by Apple and once approved, the attacker could change the flag on the server and the app could begin its assault.
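
A sketch of what that simpler approach could look like: a method name is assembled at runtime from innocuous fragments, and the call only happens when a server-controlled flag allows it. The selector and flag names here are invented for illustration.

```swift
import Foundation

// Illustrative only: "secretTask" stands in for whatever call is being hidden.
let part1 = "sec"
let part2 = "ret"
let part3 = "Task"
let hiddenSelector = NSSelectorFromString(part1 + part2 + part3)

// Static analysis of the binary sees only harmless string fragments; the full
// method name never appears, and nothing runs unless the server says so.
func runIfEnabled(serverFlag: Bool, target: NSObject) {
    guard serverFlag else { return }        // flag stays false during review
    if target.responds(to: hiddenSelector) {
        _ = target.perform(hiddenSelector)
    }
}
```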

These types of attacks are definitely possible on iOS and have been for some time. So why don't we see them exploited in the wild more often? There's likely a multitude of reasons:

  • Even legitimate developers with great apps struggle to get noticed. - With over 900,000 apps in the App Store, it's easy to have your apps go unnoticed by users. Legitimate developers who put their heart and soul into developing apps they believe will be truly delightful to use often struggle to get any significant number of people to download their app. A Jekyll app could be used to target particular individuals that you might be able to convince to install the app, but getting any significant portion of Apple's user base to install or even notice your app is no small undertaking.
  • There's much lower hanging fruit out there. - The Google Play store has struggled with keeping malware out since its debut as the Android Market in 2008. You also have unofficial app stores used by jailbreakers as well as pirates that don't have the same review process as Apple, where it would be much easier to get a malicious app hosted. The bottom line is, there are many places other than the App Store to spread malware that could do far more damage while requiring much less effort. To keep your house safe from burglars, it doesn't need to be completely secure; it just has to be more secure than your neighbor's house.
  • Apple can easily pull apps from the App Store at any time and revoke developer accounts. - As we've seen on numerous occasions, if an app that doesn't conform to Apple's guidelines manages to sneak through their gates, it quickly gets removed from the App Store once Apple realizes the mistake. Additionally, for larger offenses, Apple can and has terminated developer accounts. A developer could sign up for another developer account with different information, but they would have to pay another $99 each time.
  • Once malware makes it past the gate, it's still playing in a sandbox. - Apple has employed multiple layers of security in iOS; there is no single point of failure that renders all the other security mechanisms broken. One of the security measures iOS employs is sandboxing, which restricts every app to its own area on the system. Even an app run amok is very constrained in how it can interact with other apps and their data. Some apps allow other apps to interact with them through custom URL schemes, but this communication is very limited, and many apps don't offer them. With each app restricted to its own sandbox, its ability to carry out malicious tasks is quite limited.

This certainly isn't an exhaustive list, but it shows some of the reasons that, while it's technically possible to distribute malicious iOS apps, we don't see a more rampant problem with malware on iOS. This is not to say that Apple should shrug and do nothing, of course. As mentioned earlier, Apple is aware of the research that has been done here and is likely looking at its options for mitigating the threat. In the meantime, users should try not to worry too much. It is extremely unlikely that this research will lead to an outbreak of malware for iOS.

Source: Jekyll on iOS: When Benign Apps Become Evil (PDF)

Nick Arnott