
Recently, I posted a piece about AppBugs' claims against a number of Android apps, claims that, on the surface, seemed like little more than FUD.
I tested those claims and did my own research… but in the end, the company called me out, because it didn't think all of my conclusions were accurate. When I received that missive, I reached back out to some of the apps in question a second time. This time, one app maker stood up, confirmed the claims were accurate, and said it was already beta testing a fix for the vulnerability.
After speaking at length with one of the AppBugs staff, I came to a few conclusions that I want to share.
FUD
In my original piece, I claimed that AppBugs (like so many other companies) was doing more to spread FUD than to solve the issue at hand. Well, it turns out that the company does actually have Android’s best interest in mind. In fact, the gentleman I spoke with (Stan) went so far as to say:
As a business, we’ve decided to help Android become a safer platform and put aside the revenue goal. We’re providing FREE vulnerability information to smartphone users, FREE information to developers as soon as we find a vulnerability in their app, and we even offer fix guidelines for those developers who don’t know how to fix the issue. We do all of that with the only goal to genuinely help mobile users and developers.
I have to say, in today’s world, that’s refreshing to hear. And yes, AppBugs does provide vulnerability information to smartphone users for free… although the free information is limited to stating that the app in question is vulnerable; it gives zero details (unless the user pays the in-app purchase fee for the premium edition). When pressed on this issue, my contact responded with:
I’m sorry if there was any confusion about what we offer. You can see below what we offer for free and what we charge for. We feel what we charge is reasonable, based on the amount of work we do for it.
What is free?
- For mobile users, the app tells the user which apps on his/her device have security flaws and the fix status of the flaws.
- For app developers, we send the vulnerability details to them without charging anything. We also retest those bugs for free to check whether they have fixed the issues.
What do we charge for?
- For mobile users, if they want to see the bug details, our suggested actions to reduce risks of being attacked, and enjoy other advanced features, they need to purchase our PRO service through an in-app payment.
- For app developers, if they want us to provide fix guidelines, we charge $29.99 per bug.
As to the actual issue itself (and why it exists), here’s AppBugs’ explanation:
For your question about why those apps have issues, it’s because app developers often need to validate SSL certificates in their own code or they use some libraries (such as ad libraries, social login libraries) in which SSL certificate validation is handled by the libraries themselves. The app developers and the authors of those libraries are not security experts, and they often make security errors and allow a man-in-the-middle attacker to use a self-signed certificate to decrypt traffic between the app and web servers. You mentioned that Google applied a patch to WebView to fix the problem. According to a VentureBeat article, Google did not make a fix, but they added the capability that allows developers to write their own code to validate the certificate. Therefore, if the developers did not do that or did it wrong, the apps are still vulnerable.
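To make the mistake AppBugs describes concrete, here's a minimal sketch (in Java, since that's the typical Android stack) of the kind of "trust everything" SSL code that developers and third-party libraries sometimes ship. The class and method names are my own illustration, not code from ASTRO or any other app on the list; the point is that once something like this is installed, a man-in-the-middle holding a self-signed certificate can read the app's HTTPS traffic.

```java
import java.security.GeneralSecurityException;
import java.security.cert.X509Certificate;
import javax.net.ssl.HostnameVerifier;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSession;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;

/**
 * Illustrative only: the generic "roll your own SSL validation" anti-pattern
 * that makes HTTPS traffic readable to a man-in-the-middle attacker.
 */
public final class TrustAllExample {

    // A TrustManager that never throws, so ANY certificate chain is accepted.
    private static final TrustManager[] TRUST_EVERYTHING = new TrustManager[] {
        new X509TrustManager() {
            @Override
            public void checkClientTrusted(X509Certificate[] chain, String authType) {
                // No validation: client certificates are accepted blindly.
            }

            @Override
            public void checkServerTrusted(X509Certificate[] chain, String authType) {
                // No validation: an attacker's self-signed certificate passes
                // just as easily as the real server's certificate.
            }

            @Override
            public X509Certificate[] getAcceptedIssuers() {
                return new X509Certificate[0];
            }
        }
    };

    // A HostnameVerifier that accepts any hostname, defeating the other half
    // of certificate validation.
    private static final HostnameVerifier VERIFY_NOTHING = new HostnameVerifier() {
        @Override
        public boolean verify(String hostname, SSLSession session) {
            return true;
        }
    };

    private TrustAllExample() {}

    /**
     * Installing these defaults disables certificate checking for every
     * HttpsURLConnection the process opens afterward. Libraries sometimes do
     * this quietly, which is how an app developer can ship the flaw without
     * ever writing it themselves.
     */
    public static void disableCertificateValidation() throws GeneralSecurityException {
        SSLContext context = SSLContext.getInstance("TLS");
        context.init(null, TRUST_EVERYTHING, null);
        HttpsURLConnection.setDefaultSSLSocketFactory(context.getSocketFactory());
        HttpsURLConnection.setDefaultHostnameVerifier(VERIFY_NOTHING);
    }
}
```

The safe path is usually the boring one: don't install a custom TrustManager or HostnameVerifier at all and let the platform's default validation do its job, or go a step further and pin the server's certificate.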
So, AppBugs claims that Google did not actually fix the WebView vulnerability; instead, they placed the problem in the hands of developers. What I find interesting about this is that, as of Android 5.0, WebView has been broken out of the Android stack to become its own application, which means timely updates can be pushed to it without an entire firmware upgrade. So, if Google didn't bother to fix the vulnerability, why did they split WebView off?
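Whatever the answer, the practical effect AppBugs describes is that the SSL decision inside WebView sits with the app developer. When certificate validation fails, Android hands control to the app through WebViewClient's onReceivedSslError() callback; a developer who calls handler.proceed() there unconditionally quietly re-opens the man-in-the-middle window. The sketch below is my own illustration of that pattern (the class name is hypothetical), not any particular app's code or Google's patch:

```java
import android.net.http.SslError;
import android.webkit.SslErrorHandler;
import android.webkit.WebView;
import android.webkit.WebViewClient;

/**
 * Illustrative WebViewClient showing where the SSL decision lands on the
 * developer when WebView reports a certificate error.
 */
public class SslAwareWebViewClient extends WebViewClient {

    @Override
    public void onReceivedSslError(WebView view, SslErrorHandler handler, SslError error) {
        // The common bug: calling handler.proceed() here unconditionally,
        // which silently accepts self-signed or otherwise invalid certificates.
        //
        // handler.proceed();   <-- the vulnerable pattern

        // The conservative choice: abandon the page load when validation fails.
        handler.cancel();
    }
}
```

Cancelling the load is the conservative default; anything smarter, such as prompting the user or pinning a known certificate, has to be written (and written correctly) by the developer.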
Even more interesting is that none of the developers (from any of the vulnerable apps) reached out to AppBugs after the findings were posted. Honestly, where is the disconnect?
I understand that there are certain apps out there that are developed and then forgotten. But in the case of, say, ASTRO, you have an app used by hundreds of thousands of people, with a known vulnerability, and you don’t respond to a company reporting that vulnerability?
I was lucky enough to get the developers of ASTRO to respond to me with:
They are correct on this one. There is a way someone can spoof an SSL certificate, and we don’t always catch it. We have fixed the issue, and it is in beta testing right now. We sent a copy of the APK to AppBugs to verify the fix. (We also are working with someone from a university in California who has verified the fix.) The fix should be in the Google Play Store early next week.
They followed up with:
Apparently, they did not get our upload, so we are uploading again. We have also verified the fix with another group.
Going forward
Over the last few quarters, the FUD against Android has been merciless. It has reached such a boiling point that it’s sometimes hard to sift through the detritus to find the truth. That’s why it’s so easy to brush off the claims made by AppBugs as FUD. This is a problem (one that I clearly fell victim to, in this case), and it doesn’t just affect writers, marketers, developers, and companies. It affects users and consumers, and the latter group is the most important, because the overwhelming majority don’t know enough to recognize FUD, or to tell the difference between something that might actually put their security at risk and something that’s just trying to scam them out of a buck. That’s why users are such easy prey for ransomware attacks like CryptoLocker. And with the number of mobile users growing exponentially (to the point of overtaking desktop users), it’s becoming more and more important that they understand what is and what is not a scam.
Security is and will continue to be an issue for IT pros, developers, and end users. As long as there are those who want your information (or your money), your devices aren’t 100% safe. And even when you think they’re safe, they probably aren’t. Why? Because you may have a weak password, you may not be using two-step authentication, or that app you rely on every day may have a vulnerability that can be exploited.
Does that mean it will be exploited? Well, in the case of ASTRO File Manager with Cloud Support, the only way you could have been affected (before AppBugs flagged it as vulnerable and the company fixed the issue) would have been to use the cloud feature while someone spoofed an SSL certificate on your connection. It can happen. Does that mean it did happen to you? Unless those two variables came into play in just the right way, probably not.
I believe there needs to be a system of checks and balances here. In this case, the AppBugs claims were true, and Google should have verified those claims and then taken action against the apps in question. How? They could have easily pulled every app listed as vulnerable and kept it out of the Play Store until the vulnerability was patched. That would have forced the hands of all the developers to actually respond to the claims and patch their code. And if the claim is true that Google hasn’t actually patched WebView (and has foisted that task onto app developers), then they need to take responsibility for this and fix the problem immediately.
This issue is not going to go away. There will be more (and maybe worse) vulnerabilities found. How the developers respond to such claims will be telling. How do you think such claims should be handled? Should Google pull apps that are known to be vulnerable? Share your thoughts in the discussion thread below.