Android

Android OS: Malicious apps can steal permissions

It's common knowledge that Android has issues with permission leaks, albeit minor ones. However, Michael Kassner finds that this issue with third-party apps still has the potential to do a lot of harm.

Android's permissions-based security system is an elegant idea: before a third-party app is installed, we are told which phone capabilities -- permissions -- the software wants, and we get the chance to halt the installation if we don't like what the new app claims it needs.

But...

What if the Android permission process is flawed? So much so that an untrusted application could send SMS messages, record conversations, or erase data from the smartphone -- all without the user's consent or knowledge.

William Francis -- fellow TechRepublic writer -- and I explored the idea in Android's permission system: Does it really work? William built a demonstration app for Android Market that could use a phone's GPS, even though the app did not have permission to do so.

Recently, our suspicions were confirmed by Professor Xuxian Jiang -- a prolific Android bug hunter and my advisor for Bad apps: Avoid them. He and members of his North Carolina State University research team -- Michael Grace, Yajin Zhou, and Zhi Wang -- discussed the same problem in their paper, Systematic Detection of Capability Leaks in Stock Android Smartphones.

How permissions work

My understanding of Android permissions is a bit hazy -- not a good idea, since it's my article. So I asked William for help.

Kassner: Can you explain how permissions work, so even I understand? Francis: To start, you need to understand how the inter-process communication (IPC) model works on Android. Here is an example.

Say I wrote an app to display PDF files on the phone. I don't want to make someone open my app every time they wish to view a PDF. So I expose a public intent that names the app. That way developers do not have to include code to open PDF files in their apps. They simply scan the phone for my app to make sure it's installed, call the correct intent, and finally send a request to open the PDF.
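In manifest terms, exposing a public intent might look like the following sketch. This is an illustrative fragment only; the package and action names are hypothetical, not taken from any real app mentioned in the article.

```xml
<!-- Hypothetical excerpt from the PDF viewer's AndroidManifest.xml. -->
<!-- Declaring an <intent-filter> on a component makes it publicly -->
<!-- reachable: any other app on the device can fire an Intent with -->
<!-- the matching action string and invoke this activity. -->
<activity android:name=".ViewPdfActivity"
          android:exported="true">
    <intent-filter>
        <!-- Other apps call startActivity() with this action to open a PDF -->
        <action android:name="com.example.pdfviewer.VIEW_PDF" />
        <category android:name="android.intent.category.DEFAULT" />
    </intent-filter>
</activity>
```

Note that on the Android versions discussed here, a component with an intent-filter was exported by default even without `android:exported="true"` -- which is exactly how interfaces end up publicly reachable without the developer thinking it through.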

What if -- instead of an app for PDF files -- I write one that uses the phone's GPS? And, in the same manner as my PDF app, I expose the GPS capability via a public intent. The result would be the same: other apps can reach the GPS function.

This is how I exploited the GPS function in our article. The power widget had a public intent for controlling the GPS function. My app didn't have permission. So, I leveraged the exposed interface in the power widget and let it toggle the GPS on my app's behalf.

In that particular case, it was the worst possible scenario for the end user, because the power widget came preinstalled in the phone's Read-Only Memory (ROM). This means Google or the device manufacturer agreed to let the power widget toggle the GPS, and because it was installed in ROM, the user neither knew the leak existed nor had the option of removing it.

Preloaded apps

Remember how William's app stole permissions from a preinstalled application? At the time, I'm not sure either of us understood the significance of exploiting a preloaded app. But we do now.

Preinstalled apps, as part of the "development inner circle," are trusted and given more rights. The screenshot below displays the permissions automatically given to a preinstalled application:

If the preinstalled app with these permissions works as William described, a malicious third-party app could invoke any one of the above permissions. And there's a lot to choose from.

Those in the digital badlands understand the implication -- hence the research team's focus on preinstalled applications.

Capability leaks

The research team named what we've been discussing: they call it a "capability leak." The researchers further divided capability leaks into:

  • Explicit: Allows an app to access certain permissions by exploiting publicly-accessible interfaces or services without requesting the permissions itself.
  • Implicit: Allows access to certain permissions by permitting an app to acquire or "inherit" permissions from another app with the same signing key (presumably by the same author).

So far, we have only been dealing with "explicit capability leaks." I wanted to make sure of the difference, so I asked Dr. Jiang.

Kassner: Would you please explain how the two types of leaks differ? Jiang: Sure! The explicit one exposes a service to any (untrusted) app so that a dangerous operation can be exercised without asking for the related permission.

The implicit one allows an app to inherit the permission from another app that is signed by the same developer certificate. In other words, the implicit one involves two apps from the same developer. The explicit one does not have this requirement.
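The implicit case hinges on Android's `sharedUserId` mechanism. A rough manifest sketch of two such apps follows; package names and the permission choice are hypothetical, used only to illustrate how permission "inheritance" works.

```xml
<!-- Hypothetical manifests for two apps by the same developer. -->
<!-- Because both declare the same sharedUserId AND are signed with the -->
<!-- same certificate, Android runs them as one Linux user, so they -->
<!-- effectively pool every permission either one was granted. -->

<!-- App A: requests SEND_SMS at install time -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.appA"
          android:sharedUserId="com.example.shared">
    <uses-permission android:name="android.permission.SEND_SMS" />
</manifest>

<!-- App B: requests nothing, yet can send SMS once App A is installed -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.appB"
          android:sharedUserId="com.example.shared">
</manifest>
```

This is why the researchers flag apps that declare a `sharedUserId` but do not themselves request a dangerous permission: the permission may still be reachable through a sibling app.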

Kassner: I was curious as to what advantages one leak had over the other, from the perspective of a bad guy. Francis: As an attack vector, each capability leak has its place. Here are some pros and cons of each:

Explicit capability leak advantages:

  • Your app never has to ask for the permission you wish to exploit.
  • If the interface you are exploiting exists on the ROM, the user can't easily get rid of the security hole.
  • If the interface you are exploiting was included in the baseline release of Android, the hole likely exists on every device running that version of the OS.

Explicit capability leak disadvantages:

  • You have to find the exposed intents in other apps to get your "foot in the door" so to speak.
  • You don't control when the "door" is closed or even if a newer version of the component you are exploiting changes the behavior of your exploit entirely.

Implicit capability leak advantages:

  • You know exactly how to call the exposed intent to exploit the permission you are looking for (since you wrote both apps).
  • Because the attack relies solely on your software, you can potentially run it on any version of Android, provided you get the user to install your apps.

Implicit capability leak disadvantages:

  • More than one of your apps has to be on the same device simultaneously.
  • You must get the user to agree to at least one of your apps having the permission you wish to exploit.
  • Users can almost always easily stop this type of vulnerability by simply uninstalling one or more of your components.

Woodpecker

Dr. Jiang's team decided to automate the process of finding applications that allow another app to use their granted permissions. I forgot to ask why, but the team called the tool Woodpecker:

"Woodpecker systematically analyzes each app on the phone to explore the reachability of a dangerous permission from a public, unguarded interface."

Kassner: I wasn't clear about something in the definition, so I asked Dr. Jiang: what does "reachability of a dangerous permission from a public, unguarded interface" mean? Jiang: Woodpecker analyzes each pre-loaded app and checks whether it has the permission (or capability) to exercise certain "dangerous" operations in Android (for example, sending an SMS message or deleting an app).

If such a pre-loaded app is identified, Woodpecker verifies whether the app defines a service that is not "guarded" and can be freely invoked by any untrusted app to exercise the "leaked" dangerous operation. If that is the case, we consider the permission to exercise the dangerous operation is leaked from the pre-loaded app to any untrusted app.
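Woodpecker's core check can be modeled in a few lines of plain Java. This is a toy simulation of the idea, not the team's actual tool: flag any pre-loaded app that both holds a dangerous permission and exposes a public interface that performs no caller-permission check. The app names and metadata below are invented for illustration.

```java
import java.util.List;
import java.util.Set;

public class LeakCheck {
    // Minimal model of a pre-loaded app's security-relevant metadata.
    public record App(String name, Set<String> permissions,
                      boolean hasUnguardedPublicInterface) {}

    // A small sample of the "dangerous" operations the paper examines.
    public static final Set<String> DANGEROUS =
            Set.of("SEND_SMS", "RECORD_AUDIO", "MASTER_CLEAR");

    // An explicit capability leak: a dangerous permission is reachable
    // through a public interface that never checks who is calling.
    public static boolean leaksExplicitly(App app) {
        return app.hasUnguardedPublicInterface()
                && app.permissions().stream().anyMatch(DANGEROUS::contains);
    }

    public static void main(String[] args) {
        List<App> preloaded = List.of(
                new App("Messaging", Set.of("SEND_SMS"), true),     // leaky
                new App("Clock", Set.of("VIBRATE"), true),          // public, harmless
                new App("Recorder", Set.of("RECORD_AUDIO"), false)  // dangerous, guarded
        );
        preloaded.stream()
                 .filter(LeakCheck::leaksExplicitly)
                 .forEach(a -> System.out.println("Explicit leak: " + a.name()));
    }
}
```

Only the hypothetical "Messaging" app trips the check: it holds a dangerous permission and leaves its interface unguarded, which is precisely the combination Woodpecker hunts for.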

Kassner: Another question Dr. Jiang. How does Woodpecker differentiate between explicit and implicit capability leaks? Jiang: When detecting explicit capability leaks, we focus on those apps that request permissions of interest in their manifest files. If an app has a sharedUserId in its manifest but does not request a certain (dangerous) permission, we also need to investigate the possibility of an implicit capability leak.

To detect implicit capability leaks, we employ a similar algorithm, but with changes to reflect a fundamental difference in focus. Specifically, explicit capability leak detection assumes the caller of an app's exposed API is malicious, while implicit capability leak detection assumes the app itself might be malicious.

Accordingly, instead of starting only from the well-defined entry points used in explicit leak detection, there is a need to broaden the search to include the app's initialization.

Permissions checked

Dr. Jiang mentioned "permissions of interest." The following is the list he referred to:

  • Access coarse location: Non-GPS radio
  • Access fine location: GPS
  • Call phone: Initiate a phone call (no user confirmation)
  • Call privileged: Same as Call phone, but including emergency phone numbers
  • Camera: Access camera device
  • Delete packages: Remove apps
  • Install packages: Install apps
  • Master clear: Remove user data (factory reset)
  • Read phone state: Read phone-identifying info (IMEI)
  • Reboot: Reboot the device
  • Record audio: Access microphones
  • Send SMS: Send SMS messages
  • Shutdown: Power off the device

As you can see, gaining access to any of these features would be advantageous to those wanting to do digital harm.

Test results

Table 2 provides information on the vendor, phone model, Android version, and number of installed apps tested on each model of phone:

Table 3 displays the phone model, permissions tested, and the results from the explicit and implicit leak tests:

Here's what the paper said about the test results:

"We believe these results demonstrate that capability leaks constitute a tangible security weakness for many Android smartphones in the market today. Particularly, smartphones with more pre-loaded apps tend to be more likely to have explicit capability leaks."

The research team also notified the affected phone vendors:

"As of this writing, Motorola and Google have confirmed the reported vulnerabilities in the affected phones. HTC and Samsung have been slow in responding to our reports/inquiries.

Though the uncovered capabilities leaks on the HTC and Samsung phones have not been confirmed by their respective vendors, we have developed a test app to exercise and confirm all the discovered (explicit) capability leaks on the affected phones."

I wanted to mention that the research team created a video demonstrating their capability leak research.

Possible solutions

Always interested in potential solutions, I asked Dr. Jiang a few more questions.

Kassner: I see "implicit leaks" being fixed by better inspection in Android Market. I do not see a way to solve the "explicit leak" issue you have discovered. Do you have any ideas? Jiang: Agreed. The best way to address the explicit leaks is from the vendors (e.g., by releasing a patch). To be honest, I don't see a good way for users to protect themselves, except not downloading and installing untrusted apps... Kassner: William Francis and I have been discussing the research team's findings. William had this to say:

"I don't know the iOS platform that well, but my understanding is that if I want to share a file between my app and yours, the two apps can't talk directly. I have to export the file (I'm speaking about in code, not something the user must do) to a common area.

Then iOS provides the API for another app to come along behind and import the file so it can be displayed. The result is that app 1 never talks directly to app 2 like it does on the Android platform. The advantage being that Apple gets their fingers on each request and can do some sanity checking for security exploits."

Is that something Android should consider?

Jiang: I don't think so. The reason is that it is not consistent with the open principle behind the Android design.

William's thoughts

I asked William what he thought could be done to rectify the two leaks. I used William's opinion on explicit capability leaks earlier, where he made a comparison to iOS. As for the implicit capability leak, he offered some interesting advice:

"Implicit capability leaks aren't that worrisome. If they were, Google could easily fix the issue by changing their code-signing procedure from the developer to the app.

Meaning, as a developer I get one 'digital signature' and apply it to all my apps. Thus all those apps share a set of permissions. If Google required me to create a new digital thumbprint for each app, then each of my apps would have its own signature and isolated permissions."

Final thoughts

Capability leaks have the potential to be extremely harmful. But, as with most digital technology, being informed of what's possible and using caution when installing third-party apps should keep you safe.

Thanks to Dr. Jiang and the research team for shedding light on a major weakness in Android. And thank you, William, for explaining the inner workings of Android permissions.

About

Information is my field...Writing is my passion...Coupling the two is my mission.

25 comments
Dinesh M

This article is really helpful for my final-year project. Thank you!

JCitizen

I love that name! It likens it to the proverbial bird that pecks the wood to see what bug comes out; and if not, drills down until it finds the culprit! Another prescient article Michael!! I love these articles because I don't support phones or tablets yet, and don't own any, so I need to keep up with the latest poop!!

Jacdeb6009

Reading through the article it would appear that part of the problem (not all) results from the software that is preinstalled on the various phones. Applications that the user installs can also create problems, but assuming the user takes the necessary care when accepting the permissions required, and ensures that the software is from a trusted source, these problems can be minimised. The question is whether installing a "plain" or "vanilla" android version on your phone avoids the first problem, that is the one created by preinstalled apps. So, for example, would installing the Cyanogen Mod Rom avoid this problem or does this have its own associated "can of worms" ? Would be interesting to hear whether this has the same or worse problems.

nwallette

I always enjoy your articles. This is well written, thoroughly researched, informative, and has excellent sources. And despite the nature of the content, it's not in the least sensationalist, nor does it cite platform wars. True journalism. Things are always more complicated than it seems from the surface, but some of the design decisions here seem almost negligent. I think maybe I misunderstood the part about developer signatures... Is it really the case that you can write one "legit" app, and be awarded all sorts of access, then any future app shares the same permissions? It seems to make implicit exploits a moot point if App A can (e.g.) automatically access the contact list just because App B (by the same author) was previously given that permission. Or does the implicit exploit just get around having to ask the user first?

arewaah

Would it not be simple if the calling activity must have at least all the permissions that a called intent has?

amdolon

I don't know much about all this, but I try to read to keep up. Does the AVG Mobilation APP Scanner complete this function of making sure there aren't explicit/implicit leaks? I couldn't find it on their site. Thanks!

Craig_B

It almost sounds like you may need an ACL (Access Control List) between App D (downloaded/installed) and App E (embedded/preinstalled), or even something between App E and outside-the-phone access.

Michael Kassner

You mention helps. We hope to reach out, but one never knows.

authorwjf

First, it is important to note that there are a few phones available from carriers that come with "vanilla" Android installs. The Nexus One, Nexus S, and Galaxy Nexus all fall into this category. When friends and family ask me what phones I recommend, I always point them toward phones that ship with stock Android. That said, sometimes explicit security leaks will slip into even the base build of the platform. The power widget which leaked GPS, WIFI, SYNC, and LCD backlight permissions was a part of the base build. The difference is that when this issue was fixed, pure Google phones received an OTA update that patched the hole, while a number of high-end phones are still shipping at this very moment with custom carrier builds that have not bothered to apply the patch. I am a fan of custom ROMs, and in particular CyanogenMod. These builds, however, are not immune to explicit security leaks either. The difference is that with a custom ROM (and rooted phones in general), if any app is leaking permissions you can just remove it from the phone. Obviously there are some core apps that, if removed, would make the phone cease to operate (I'm thinking of things like the dialer app). Luckily, to my knowledge anyhow, we've yet to see any notable leaks in the phone's core components.

Michael Kassner

And, it's best answered by William. I'm betting he will see it, but just in case; I will let him know. It will be interesting to see if rooting the phone to avoid the preinstalled apps is less of a threat than not getting updates because the phone is rooted.

Michael Kassner

I do have excellent sources; Dr. Jiang and his team are more than amazing when it comes to sorting out Android exploits. And William, what can I say? He holds my hand through each and every one of these articles. I'm betting William will respond, but my take is that yes, once a developer has an app on board, all others have the same permissions. I asked William the same question. Getting the app installed is not a sure thing, and all it takes is for the user to remove it. I think bad guys want better odds than that.

Julie9009

I think that's a great idea, and would definitely solve the problem for implicit and explicit permissions leaks. I'll admit now that I don't understand the permissions system at an application level, so I may be wrong, but I do see a potential difficulty. The difficulty would be that each intent would have to somehow indicate the permissions that it uses, or else a calling app would have to ask for extra permissions it might not need. Consider this scenario: I have an app that scans barcodes and displays the decoded string. That's all it does, so my app would only need permission to access the camera. However, I want to use an intent from FancyCameraXYZ, because it has good low-light capability. Unfortunately, FancyCameraXYZ also has some permissions for features that I don't need: It tags photos with GPS coordinates, and it can send images as MMS to contacts. So FancyCameraXYZ quite validly asks for those permissions. If we force calling apps to have the same permissions as the called app, then I would need to ask for MMS, GPS and Contacts permissions, which would make my users (quite rightly) very nervous! In order for this system to work, there would have to be a Take_Photo_Only intent on FancyCameraXYZ which only allows the Camera permission. I imagine that this might be quite complex to implement, both from an Android system perspective, and from the perspective of the application that exposes the intent. Also, in the example above, how does Android know that the Take_Photo_Only intent does in fact only take a photo? How do we know that the developer of FancyCameraXYZ doesn't use the GPS or MMS functionality when a calling app uses the Take_Photo_Only intent (either for its own purpose, or as a vector to deliberately allow other apps access to those permissions)?

authorwjf

I think a permission group is a valid approach. It's just not how the system is designed at present. Hopefully, as the platform matures, we will see these types of precautions taken when an activity calls an intent. In the meantime, however, the model leaves it up to one app to police incoming requests from another, and that just doesn't happen with any sort of consistency.

Michael Kassner

William thought using different digital signatures would work. The problem as I see it is that Android is so fragmented, how will a common fix work on all variants?

Neon Samurai

The third-party band-aid approach, like we've seen with add-on AV software, probably isn't the way to go. It takes yet more of the limited system resources to implement a detection method that is reactive at best and increasingly ineffective. I like the idea of a manual scanner like the project team provided. It sits dormant and runs only when you click it to see if anything is leaky; maybe after a new app install. I just don't like the idea of adding more memory-resident code. Really, these seem like flaws in the OS and/or management of the software repositories. These are things that should be fixed at the problem source rather than by layering on more convolution. The risk of a malicious developer may come down to per-app vetting instead of developer vetting. I think Google could provide far better oversight in the repositories without going to the extreme of Apple's broken submission process. As the article highlights, though, the greater risk is exploiting a vulnerable app. That comes back to vetting of submitted apps, and maybe a manually run scan utility if the vulnerabilities are common across programs. Since this comes down to "how programs advertise available functions within the OS," it's back to Google yet again. If I go Android with my next phone, it'll have to be a Nexus so I can flash the Google stock firmware and get future updates directly; no vendor child distribution, no vendor using software updates to drive future hardware sales.

Michael Kassner

I'll ask William, but my thought is that the code is not considered malicious. Asking for permissions and using capabilities of other apps is how Android works.

Michael Kassner

I think William was going that way with individual digital signatures.

Jacdeb6009

Thanks Michael and William! Unfortunately out here (Vietnam) there are no "vanilla" phones available. While we don't get phones with all the "bloatware" installed by typical network operators elsewhere, the phones here still come preinstalled with whatever HTC, Samsung or the like decided to put on the unit. For now (besides being careful) it seems that a Cyanogen Mod ROM is a good option in addition to keeping my eyes and ears open (and making sure that I keep up to date with your articles!)

authorwjf

Implicit permission leaks work because all apps signed with the same developer signature get the same permission set. It is designed that way because of the sandboxing model, and it has definite legitimate uses. For example, if I am writing a modular mobile office suite, it's important that my word processor, spreadsheet, and presentation apps can all easily exchange info, files, preferences, etc. Part of the problem is simply how the permissions are presented to the user. In reality, when you agree to install an app I've written, you are agreeing that I as the developer, not my app, warrant your trust. So any future apps of mine that you install are granted that same level of trust. Honestly, I think just wording some of the permission dialogs differently would go a long way toward making the process clearer to users.

authorwjf

These are all well thought out and valid concerns. The real solution, as it exists now, falls on the developer of an exposed intent. According to the Google documentation: "Arbitrarily fine-grained permissions can be enforced at any call into a service. This is accomplished with the Context.checkCallingPermission() method. Call with a desired permission string and it will return an integer indicating whether that permission has been granted to the current calling process." In other words, if I expose a public interface, I should be responsible for checking the permissions on the caller and throwing an exception if they are not there. Which is fine and dandy, and I know a number of developers adhere to these docs, myself among them, but permission enforcement needs to happen at the OS level, not at the application level. As it exists now, mistakes get made, programmers get lazy, or someone with less than honorable intentions takes advantage of this well-known platform shortcoming.
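The guard pattern William describes can be sketched in plain, runnable Java. This is a toy model of the idea behind Context.checkCallingPermission(), not the Android API itself; the package names and the TOGGLE_GPS permission string are invented for illustration.

```java
import java.util.Map;
import java.util.Set;

public class GuardedService {
    // Granted-permission table standing in for the Android package manager.
    public static final Map<String, Set<String>> GRANTS = Map.of(
            "com.example.trusted",   Set.of("TOGGLE_GPS"),
            "com.example.untrusted", Set.of()
    );

    // The check a well-behaved exposed service should perform before acting
    // on behalf of a caller: verify the caller holds the permission itself,
    // and refuse with a SecurityException if it does not.
    public static void toggleGps(String callerPackage) {
        if (!GRANTS.getOrDefault(callerPackage, Set.of()).contains("TOGGLE_GPS")) {
            throw new SecurityException(callerPackage + " lacks TOGGLE_GPS");
        }
        System.out.println("GPS toggled for " + callerPackage);
    }

    public static void main(String[] args) {
        toggleGps("com.example.trusted");       // allowed: caller holds the permission
        try {
            toggleGps("com.example.untrusted"); // denied: exception thrown
        } catch (SecurityException e) {
            System.out.println("Blocked: " + e.getMessage());
        }
    }
}
```

A capability leak is exactly what happens when an exposed service skips this check: the dangerous operation then runs with the service's own permissions, regardless of what the caller was granted.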

Neon Samurai

I don't know that fragmentation is that much of an issue in this case (though I am vocal about fragging the distro in general). Something like this seems pretty low level and in place from an older version. Hopefully the same patch or module replacement can be implemented across the whole version history. With the nature of *nix, it should just be a kernel mod replacement or similar commodity part. That's on the phone side. Validating the app rather than the app developer may be the way to go. On the developer side, it adds some update workload, but they should already be maintaining whatever apps they posted. Ship another update with the new certificate signing and you're done. Part of what the dev is being paid for is maintaining the apps. Free apps that don't have the developer's attention should be culled out anyhow. The biggest challenge is probably update delivery more than OS compatibility with the update patch. Manufacturers are in the business of selling new phone hardware, not shipping software updates for sold phone hardware.

authorwjf

That's exactly right, Michael. A security leak in itself is not actually considered malicious.

Michael Kassner

It sounds like you are well aware of what's going on. We are working on one about QR codes. Stay tuned

Neon Samurai

The vendor-specific variants are what I refer to as the child forks. When Mandriva creates a derivative work based on Red Hat, that is a child fork. When HTC creates a derivative work based on Android, that is a child fork. BlackXP, though not distributed legally, is a child fork of the WindowsXP distribution. Breakage may be what I'm not fully understanding.

Variants within a major version: The vendor need only manage their variant, and HTC should be perfectly capable of implementing a patch in HTC Android. If the patch affects something common to Google Android and HTC Android, then there is no excuse for not inheriting that patch. Maybe the vendor should have stuck with stock Android so they can benefit from Google's patching process.

Various major versions: So we're looking at something like Android 2.1 through 4.x now in public use. I'm not sure of the exact issue, but something came up recently and Nexus owners had the patch the day it was available. Not all Nexus devices run version 4.x, so that means Google did provide a patch for older major versions.

The broken part: Here we're talking about the security settings for running programs. This should have been standardized early in the major versions, and should be something low level enough that vendors are not mucking with it to somehow differentiate product. If vendors are mucking with something that far out of the user's view, then they really need to rethink why they are doing so. A custom UI, sure. A custom driver for hardware only on this handset, sure. A custom security framework; wait... why?

It is possible I'm missing the breakage angle or some other detail. With my current understanding of Android, carriers, and business in general, I'm feeling the profit motive on this one. Even breakage points me back to profit: pay to maintain our one-off customization at patch parity with the parent distro, or tell the customer to give us money for a newer device.

Michael Kassner

To make sure we are talking about the same thing, I am referring to all the OS variants that exist. Telcos are not sending out updates as you said, but it might be a breakage issue more so than financial.
