Ethical Hacking is one of the most intriguing and exciting
elements of our work at Pivot Point Security. A recent engagement for an
International Bank took us a bit by surprise as the level of security provided
by an Application Service Provider to protect the identities of the bank’s clients
and hundreds of millions of dollars was notably less than one would expect. I’ll
show you the techniques that we used and how our efforts turned from hacking
their critical application, to hacking the Application Service Provider, to
hacking another bank’s hosted network.
A call to arms
On a Monday morning in the not-too-distant past, we received
a call from an Information Security engineer at a major international bank, who
we will refer to as Bank Client (BC) from this point forward. An industry
colleague who frequently worked with us (and vice versa) on network and security
architecture projects had referred them to us. This was not
a typical introductory call to vet our capabilities; this was a call to engage.
“We have a few concerns regarding the security of an
application that is hosted by a third party on our behalf. How soon can you
come on site and perform an Ethical Hack against the application?” he queried. Still
surprised by the directness of the call, I offered, “I think we could get
resources on site early next week.”
He replied, “We were really hoping that we could get this done no later than the
end of the week,” reinforcing the urgency of the call.
“If it’s that important I think we can move some
personnel around and get there Thursday,” I said quietly as I prayed that
I wouldn’t take too much grief from our project manager for reallocating his
resources, but it’s not every day that an opportunity this intriguing rears its head.
“OK, let me confirm everything with our management,” he said.
“We’ll be in touch, shortly.”
On Tuesday morning a signed purchase order rolled off our fax machine.
On Thursday afternoon we were on site for a project kick-off
meeting in a conference room with carpeting so deep I dropped my pencil and
decided not to bother looking for it.
Dinner and dessert
Judging from some of the titles of the individuals at the
kick-off meeting (Chief Information Security Officer, Chief Information
Officer, Sr. VP Auditing), we quickly surmised that their concerns were of a
significant nature. Interestingly, they would not detail any specific concerns
and we spent the better part of the two-hour meeting discussing their business
environment and the critical role of the application under review. After
understanding that the application we were looking at processed billions of
dollars of transactions on a daily basis, our interest in kicking things off intensified.
Since the application and supporting systems included
interfaces to Federal Reserve Banks, we were advised that we could not begin
Penetration Testing until after 6:30 PM. We gladly accepted an invitation to
grab dinner with the CISO and some of the other key team members.
At 6:45 PM we were back from dinner.
At 6:55 PM we owned the hosted network. That is, we were the
Domain Administrator for all of the hosted devices that encompassed the ASP-hosted
solution (including redundant database servers, application servers, Domain
Controllers, and gateway router).
At 7:00 PM we owned the application and the database.
At 7:01 PM we jointly realized that we could transfer $100M+
between accounts with the level of privilege we had achieved. The BC Security
Administrator monitoring our activities immediately halted our testing.
At 7:10 PM we were on a conference call with BC’s executives
to discuss next steps.
I would like to tell you that our rapid success in this
engagement was a reflection of the brilliance of our Ethical Hacking team, but
that wouldn’t be the truth. Unfortunately, our success was a reflection of the
egregiously poor security the ASP had provided for the client’s data. The
next section outlines our techniques.
The means to an end (game)
A Terminal Services login screen that fronted the
application greeted us after we connected across the VPN. The ASP intended for
users to authenticate to the Domain, but had actually left them the option of
logging on locally to the system by selecting it in the
drop-down box. Selecting this option, under the assumption that local system
security is usually weaker than domain security, we attempted to log in using
the account “administrator” with a password of “password”. Our
second attempt was the combination “administrator”/”ASP”
(where ASP was the name of the Application Service Provider). Sadly,
we were local administrators on the box. To our sheer disbelief, we proceeded
to find that every device on the hosted segment was using the same password.
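That first pass amounts to iterating over a short list of factory and vendor defaults. A minimal sketch of the idea, where `try_login` is a hypothetical stand-in for the actual authentication attempt (Terminal Services, SMB, or whatever the target exposes), not a real API:

```python
# Sketch of a first-pass default-credential check. try_login is a
# hypothetical callback that attempts one authentication and returns
# True on success; it is not a real library function.

DEFAULT_CREDENTIALS = [
    ("administrator", "password"),  # the classic factory default
    ("administrator", "ASP"),       # the vendor's own name as password
]

def find_default_login(host, try_login, candidates=DEFAULT_CREDENTIALS):
    """Return the first (user, password) pair that authenticates, else None."""
    for user, password in candidates:
        if try_login(host, user, password):
            return (user, password)
    return None
```

In the engagement above, the second pair on the list succeeded, and the same pair was in use on every device in the hosted segment.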
In many applications, hacking the application and network
devices are only a means to reach the ultimate goal, which is often the
database. Therefore, we set our sights on the Microsoft SQL Server. Noting the
incredibly low security we had encountered to date, we joked that it would
probably still be running with the default administrator account (user “sa”
with a blank password). Our level of astonishment continued to rise when we
found we were indeed correct. Perusing the database, we noted that the bank’s
clients included Republics, Corporations, and Royalty.
Before you bail out of the article at this point to click on
the “Add your comment” link to blast this as an absolutely
unbelievable story, consider our intention. Detailing our compromise of a
system that a secretary could have hacked does not provide us any true benefit.
If we wanted to tout our capabilities I would regale you with a VoIP hack inside
a governmental agency or some other more compelling tale. This article is
intended to raise the awareness of those readers whose business critical data
is in the hands of strategic business partners. Unfortunately, the level of
security detailed in this narrative is 100% accurate.
Two for the price of one
The most compelling question asked during our conference
call with management was, “Can another BC-client compromise our infrastructure,
access our client data, and gain the ability to transfer funds, in the same way
that you did?”
We jointly agreed that testing this scenario was critical,
but were uncertain of the legal implications of the effort. We quickly convened
a second conference call, which would include BC counsel and our counsel.
After considerable discussion with our respective attorneys,
we reached a consensus that we would continue the ethical hack to ascertain
whether another bank could potentially take the same actions that we did, but
that we would make every effort practical to ensure that we did not breach
another ASP client’s confidentiality.
With our foot already planted within the ASP infrastructure,
we set out on behalf of BC to see if their data was at risk to another hosted
bank. The ASP had done a good job of segregating their clients from each other.
Via ICMP ping sweeps we could confirm the existence of duplicate
infrastructures for dozens of other banks. We attempted to enumerate other
clients’ hosted servers on the ASP network but to our disappointment, all we
could do was ping them.
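A sweep of that kind can be sketched by shelling out to the system ping utility for each address in a block; the helper below separates address enumeration from the actual probe. The subnet in the docstring is a placeholder, not the ASP’s network.

```python
# Minimal ICMP ping-sweep sketch. hosts_in() enumerates candidate
# addresses; sweep() shells out to the system "ping" utility once per
# address. Ping flags vary slightly across platforms, so only the
# packet count is set and a subprocess timeout bounds each probe.
import ipaddress
import platform
import subprocess

def hosts_in(cidr):
    """List the host addresses in a CIDR block, e.g. "10.0.0.0/28"."""
    return [str(h) for h in ipaddress.ip_network(cidr).hosts()]

def sweep(cidr):
    """Return the addresses in `cidr` that answer a single ping."""
    count_flag = "-n" if platform.system() == "Windows" else "-c"
    alive = []
    for host in hosts_in(cidr):
        try:
            result = subprocess.run(
                ["ping", count_flag, "1", host],
                stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
                timeout=2,
            )
        except subprocess.TimeoutExpired:
            continue
        if result.returncode == 0:
            alive.append(host)
    return alive
```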
Fortunately, one of our Test Team members came up with the clever
idea of writing and deploying a quick script that would feed periodic netstat
output back to the console we were sitting at. Netstat
is a Windows utility that displays active TCP connections and the ports a
computer is listening on. We had noted several “interesting” ports
that multiple systems were listening on, and our hope was that we might catch a
connection in progress.
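The quick script the team deployed can be sketched along these lines: poll `netstat -an` on an interval, parse out the remote ends of established TCP connections, and report any address not seen before. The parsing assumes the Windows `netstat -an` column layout, and the 60-second interval is an arbitrary choice.

```python
# Sketch of a periodic netstat watcher: poll "netstat -an", extract the
# remote addresses of ESTABLISHED TCP connections, and print any address
# not previously observed. Assumes the Windows netstat column layout
# (Proto, Local Address, Foreign Address, State).
import subprocess
import time

def established_peers(netstat_output):
    """Extract remote IP addresses from ESTABLISHED lines of netstat -an."""
    peers = set()
    for line in netstat_output.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[0] == "TCP" and fields[-1] == "ESTABLISHED":
            remote = fields[2]                   # e.g. 10.9.9.9:51022
            peers.add(remote.rsplit(":", 1)[0])  # strip the port
    return peers

def watch(interval=60):
    """Loop forever, reporting each newly seen remote address."""
    seen = set()
    while True:
        output = subprocess.run(["netstat", "-an"],
                                capture_output=True, text=True).stdout
        for peer in established_peers(output) - seen:
            print("new connection from", peer)
            seen.add(peer)
        time.sleep(interval)
```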
After an hour or so, we observed a connection to one of the
boxes we were watching, originating from a network we had not previously seen. Fingers
crossed, we attempted to telnet to the newfound IP address, without success. Our
second attempt to establish a secure shell connection (SSH) to the box was more
promising as we were challenged for a user name/password combination. As you
likely guessed, “administrator”/”ASP” put us on the box with root access.
It was a Linux system running Little Brother, an open source
network monitoring tool that was monitoring all of the ASP’s clients. We SSH’d from the Little Brother box into another hosted bank’s
network and were not surprised to find that the
“administrator”/”ASP” combination was in use on their
hosted domain as well. In short order, we had confirmed that a malicious
individual at any one of the dozens of banks hosted by the ASP could connect
into another bank’s fund transfer system and move hundreds of millions of dollars
across banks and accounts.
The technical briefing that closed the engagement with the
client yielded one last surprise. The ASP had provided the client with a
“clean” SAS-70 Type II Audit Report issued by a prestigious CPA firm.
A SAS-70 is a widely recognized independent auditing standard that includes an in-depth
audit of a service provider’s control activities, which include controls over
information technology and related processes. Accordingly, BC had felt
confident that their clients’ data would be well protected by the ASP.
Ultimately, even in the case of an ASP or Business Partner citing
independent validation of their security practices, the onus lies with you (the
client/partner) to perform due diligence and due care to corroborate that the
validation is accurate and relevant to your
security requirements. Current regulatory requirements, including HIPAA,
Sarbanes-Oxley, and SB-1386, mandate this due diligence and due care.
In most cases where we have been engaged to evaluate the
effectiveness of a business partner’s level of security, we have found it to be
notably below that required by the client. Of recent note was a marketing firm
that carried its database on its balance sheet as a $55M asset.
We found that the data-mining company it had engaged to improve its
penetration into an emerging market sector had security practices so poor
that we were provided an unencrypted copy of the client’s database with little
more than a spoofed e-mail.
Once again, the 2,000+ year old maxim holds true. “Caveat
emptor” — let the buyer beware.