Here’s the deal. Bad guys know more about smartphone security than we do. And, they like it that way.

It’s time to change that. To help, I enlisted smartphone-app developer and TechRepublic writer William Francis, and Symantec VP and Fellow, Carey Nachenberg.

Nachenberg wrote A Window into Mobile Device Security. In it, Symantec takes a hard look at the inherent security of Android and iOS, with special emphasis on the following:

“Complicating the security picture is the fact that virtually all today’s mobile devices operate in an ecosystem, much of it not controlled by the enterprise; they connect and synchronize out-of-the-box with third-party cloud services and computers whose security posture is potentially unknown and outside of the enterprise’s control.”

Don’t be turned off by “enterprise.” What’s in the paper applies to everyone.

What’s the plan?

Normally, I would just consolidate what Nachenberg said in the paper. But having Bill on board allows me to add real-world experience to the mix. Fortunately, I haven’t worn out my welcome since we worked together on another article about phone apps and malware.

We decided to follow the outline of the paper, with Bill providing his thoughts on what Nachenberg calls the “five pillars of security.”

The pillars

To begin with, the report points out:

“The latest mobile platforms were designed with security in mind; both teams of engineers attempted to build security features directly into the operating system to limit attacks from the outset.”

Next, Nachenberg tests both Android and iOS on how well they deal with the following:

  • Traditional access control: techniques such as passwords and idle-time screen locking.
  • Permissions-based access control: granting each application a set of permissions.
  • Application provenance: stamping each application with the identity of its author and making it tamper resistant (using a digital signature).
  • Encryption: concealing data at rest on the portable device.
  • Isolation: limiting an application’s ability to access the sensitive data or systems on a device.

How they did

Bill and I thought it best for him to give his opinion first on how well Android and iOS did relative to each pillar. Then I’ll follow, paraphrasing what Nachenberg said in the paper.

Pillar One: Traditional access control

Francis: In terms of traditional access control, it is my experience that both iPhone and Android do a satisfactory job.

However, fingerprint smudges left on the touch screen make getting past that first line of defense easier. In my opinion, not enough app developers implement an option for locking individual applications separately from the OS lock/unlock screen.

On the Android platform, I make use of a tool from Carrot App called App Protector Pro that allows me to add an additional layer of security to applications like Gmail, Exchange, and Facebook. That way if someone ever did get physical access to my phone and past the OS lock screen, I’d have some additional time to go change passwords on those services.

To my knowledge, there is no similar application available for iOS. This, I suspect, is because iOS has a more restrictive sandbox model.

Kassner: Nachenberg concludes that the access control features provided by iOS offer reasonable security if the device is lost or stolen. In this regard, he considers iOS to be equivalent to Windows-based desktops.

The paper is less kind to Android. It notes that Android sufficiently protects against casual attacks, but it does not encrypt data stored on the removable SD memory card. Physical access to the memory card negates any password control on the device. (Convenience versus security yet again.)

Pillar Two: Permissions-based access control

Francis: My knowledge of the permissions system on iOS is pretty limited. I understand it has far fewer permissions than the Android platform, and that certain permissions require the user to agree only once, while access to other protected subsystems prompts the user every time the application requires the resource.

In contrast, I have a lot of familiarity with the permissions system on Android. My opinion is that it’s theoretically a success, but largely ineffective in the real world because it relies heavily on the user being technically savvy.

While it may have gained its start as the platform preferred by 4 out of 5 hackers, Android has gone mainstream, and typical users don’t have the know-how to choose which permissions are acceptable and which are not on a per-application basis.

Nor should they have to. When I visit my dentist, I don’t expect her to ask me which instruments she can use to poke around my teeth. I pay for a service, and I rely on her to use the resources she needs to get the job done.

Kassner: I have heard many opinions about the permission system used by iOS. Nachenberg’s paper solidifies my understanding:

“There are four system resources that apps may access that first require permission from the user. All other access to system services or data is either explicitly allowed or blocked by iOS’s built-in isolation policy. Here are the permissions that an app may request:

  • Access location data from the device’s global positioning system.
  • Receive remote notification alerts from the Internet.
  • Initiate an outgoing phone call.
  • Send an outgoing SMS or email message.

If an app attempts to use any of these features, the user will first be prompted for permission before the activity is allowed. If the user grants permission to either the GPS system or the notification alert system, then the app is permanently granted access to these systems. In contrast, the user is prompted every time an app attempts to initiate an outgoing call or send an SMS message.”

Android uses a vastly different approach. It is based on “all or nothing.” I’ll let Nachenberg explain:

“Each Android app contains an embedded list of permissions that it needs in order to function properly. This list of requests is presented to the user in non-technical language at the time an app is installed on the device, and the user can decide whether or not to allow the app to be installed based on their tolerance for risk.

If the user chooses to proceed with the installation, the app is granted permission to access all of the requested subsystems. On the other hand, if the user chooses to abort the installation, then the app is completely blocked from running. Android offers no middle ground.”
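Nachenberg’s “all or nothing” model starts with the app’s manifest. Here is a minimal sketch of what such a request list looks like (the package name is hypothetical; the android.permission identifiers are standard ones Android defines):

```xml
<!-- AndroidManifest.xml: every permission the app will ever use is
     declared here, and all are granted together at install time -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.notesapp">
    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.READ_CONTACTS" />
    <uses-permission android:name="android.permission.SEND_SMS" />
</manifest>
```

At install time, Android translates these identifiers into the non-technical descriptions the user sees; declining any one of them means declining the entire installation.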

Pillar Three: Application provenance

Francis: The process of identifying provenance and determining authenticity differs significantly between Android and iOS. People can argue all they want about the merits of either approach, but at the end of the day you see considerably more malware for Android than for iOS.

I don’t believe there is any single point where Google has failed; rather, an accumulation of failures makes the Android platform more susceptible to security threats. There are too few barriers for developers getting started on Android, especially those releasing free or ad-supported apps.

Google has no review process before apps are published. There is no requirement to prove you are who you say you are, and no central signing authority. There are multiple app distribution channels. The list goes on. On top of the holes in the process, from a technical perspective it’s not terribly difficult for determined hackers to take a legitimate Android app, reverse engineer it, insert some malicious code, repackage the app, and put it back in the store.

While it’s not unthinkable that someone could reverse engineer an iPhone app, the nature of the iPhone’s primary development language, Objective-C, makes this more complex than disassembling the Java (sorry, Dalvik) bytecode running on Google’s platform.

Kassner: There is no sense in belaboring the point. Symantec’s Nachenberg agrees with Bill. iOS is doing a better job when it comes to vetting apps.
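Both platforms anchor provenance in the same primitive: a digital signature over the packaged app. Here is a rough sketch of the idea using Java’s standard java.security API; the class name and sample bytes are mine for illustration, not either platform’s actual signing toolchain.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;

public class Provenance {

    // The author signs the packaged app bytes with a private key only they hold.
    static byte[] sign(byte[] appPackage, PrivateKey authorKey) throws Exception {
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(authorKey);
        signer.update(appPackage);
        return signer.sign();
    }

    // Anyone with the matching public key can check authorship and integrity.
    static boolean verify(byte[] appPackage, byte[] sig, PublicKey authorKey) throws Exception {
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(authorKey);
        verifier.update(appPackage);
        return verifier.verify(sig);
    }

    public static void main(String[] args) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair author = kpg.generateKeyPair();

        byte[] appPackage = "bytes of the packaged app".getBytes();
        byte[] sig = sign(appPackage, author.getPrivate());

        System.out.println("intact:   " + verify(appPackage, sig, author.getPublic())); // true

        // A repackaged app (here, one flipped byte) no longer verifies.
        appPackage[0] ^= 1;
        System.out.println("tampered: " + verify(appPackage, sig, author.getPublic())); // false
    }
}
```

The signature proves the package has not changed since signing. Whether the signer’s identity is itself verified (Apple’s centrally issued certificates) or merely self-asserted (Android’s developer-generated certificates) is exactly where the two models diverge.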

Pillar Four: Encryption

Francis: I once had the opportunity of working on a multi-platform mobile project that had some significant privacy requirements and ultimately required a code review by an independent panel of security engineers.

What I discovered early on in my development was that while iOS user settings are stored encrypted in the keychain mechanism by default, the Android platform leaves the encryption of that data up to the individual application.

That does not mean sensitive data on Android devices can’t be encrypted or that the encryption techniques employed are less secure than those on iOS. It just means in a number of cases the burden of encryption shifts from the operating system to the application. This can be both good and bad.

When application developers on the Android platform apply due diligence, your data is not only as safe as it is on the iPhone but in some ways safer, because a hacker would have to break each application’s encryption separately.

However, you as the user have no guarantee that an application you download employs any encryption at all. That presents a particularly troubling issue on Android devices, since most have an SD card that can simply be removed from the phone and examined later on any desktop computer with a USB port.
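To make Bill’s point concrete, here is a rough sketch of the kind of app-level encryption an Android developer would have to supply themselves, using the standard javax.crypto API. The class and method names are mine for illustration; a real app would also have to store the key securely rather than generate it in place.

```java
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;

public class AppCrypto {

    // Generate a fresh 128-bit AES key (a real app must keep this safe).
    static SecretKey newKey() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        return kg.generateKey();
    }

    // AES-CBC with a random IV; the IV is prepended to the ciphertext.
    static byte[] encrypt(byte[] plain, SecretKey key) throws Exception {
        byte[] iv = new byte[16];
        new SecureRandom().nextBytes(iv);
        Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
        c.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
        byte[] ct = c.doFinal(plain);
        byte[] out = new byte[iv.length + ct.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ct, 0, out, iv.length, ct.length);
        return out;
    }

    static byte[] decrypt(byte[] blob, SecretKey key) throws Exception {
        Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
        c.init(Cipher.DECRYPT_MODE, key,
               new IvParameterSpec(Arrays.copyOfRange(blob, 0, 16)));
        return c.doFinal(Arrays.copyOfRange(blob, 16, blob.length));
    }

    public static void main(String[] args) throws Exception {
        SecretKey key = newKey();
        byte[] sealed = encrypt("user settings worth protecting".getBytes(), key);
        System.out.println(new String(decrypt(sealed, key))); // round-trips to the original text
    }
}
```

Without the key, the same bytes pulled off a removable SD card are opaque, which is exactly the guarantee an app that skips this step gives up.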

Kassner: Regarding encryption, Bill will not get any argument from Nachenberg. That said, I’d like to point out something I learned about each platform.

First, iOS uses encryption, but there is a gotcha. Many apps run in the background (even when the user is not logged in) and need access to stored data. For that to work, iOS keeps a copy of the decryption key on the device. That means an attacker with jailbroken access can read most of the stored data without needing the device’s master passcode.

As Bill mentioned, all versions of Android other than Android 3.0 do not encrypt any data. That means anyone who roots the device or otherwise obtains admin-level access can have his way with virtually all the data stored on it.

Pillar Five: Isolation

Francis: I personally believe both Apple’s and Google’s sandboxed models of isolation have proven themselves to be robust. In my mind, the mechanism employed by the Android OS is slightly more complex, but this adds flexibility and is probably necessary. Unlike iOS, Android deals with true multi-tasking of its apps.

As a developer, I see an additional advantage to these isolation models that users do not. Quite simply, the challenges these models present during the engineering phases of app development get us mobile developers thinking in terms of tightly coupled security right out of the gate, whereas on more traditional platforms, like the desktop, security is often an afterthought.

Kassner: Nachenberg and Bill are yet again on the same page. Because both platforms isolate individual apps, a compromised app is kept from attacking other apps or the operating system.

It comes down to

Me, again. I’m impressed with iOS and Android. But I wasn’t ready for the common weak link. Both succumb to that “four-letter” word: Vulnerability. Can you believe it? It’s still the bad guy’s way in.

Nachenberg mentions that, at the time of publishing, iOS had only a few severe vulnerabilities, and those were mainly used for jailbreaking. There is no known malware exploiting them.

Android also has only a few severe vulnerabilities, but one concerns Nachenberg: it allows malicious third-party apps to obtain admin-level control of the device. And malware leveraging the vulnerability is already in the wild, appropriately named Android.Rootcager.

Android.Rootcager introduced a bit of irony and made things difficult for Google. Nachenberg explains:

“Even more interestingly (and controversially), Google’s fix tool for Android.Rootcager also had to exploit the same vulnerability in order to circumvent Android’s isolation system to remove parts of the threat from the device.”

Final thoughts

There you have it: two heavyweights, each subscribing to a different security philosophy. My goal has been to point out those differences. It’s now up to you.

I also want to mention that the Symantec paper is well worth the time to read, particularly since it mirrors the opinion of someone I trust. Thanks for your help, Bill.