When we think
about big data, we think about breaking old processing models in IT, but some
traditional practices remain quite useful. One of these practices is
asset management, along with the reasoning that gets applied when it's time to factor
variables into the decision making that precedes budget proposals.
Unlike transactional
servers in the data center, big data servers are not good virtualization
candidates because they must parallel-process data. Big data servers are always
at work, and it is much more difficult to share resources in this environment. If
companies also want near-real-time analytics from their big data, going to a
big data cloud analytics provider isn't very feasible either, because of
the latency and security issues that can come into play.
All of this
points to a need to maintain “physical” server resources for big data
in the data center. In turn, acquiring physical servers invokes traditional
budgetary approaches to hardware, such as asset life cycles and amortization, weighed
against the return on investment (ROI) that the CFO is likely to expect.
What’s different
about big data in this very familiar process is that you can’t expect to
develop an ROI the same way you have in the past for transaction servers.
ROI for transaction servers was often predicated on speed per transaction, which
was then extrapolated into how many more transactions (and resulting revenues)
the business was projected to capture. Investment returns on big data don’t work
this way.
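To make the contrast concrete, here is a minimal sketch of the traditional transaction-server ROI math described above: a faster server handles more transactions, and the extra transactions are extrapolated into revenue. Every figure is a made-up assumption, not a benchmark.

```python
# Hypothetical transaction-server ROI extrapolation. All figures are assumptions
# chosen only to illustrate the arithmetic, not real pricing or throughput.

current_tps = 400                        # transactions per second, current server
new_tps = 550                            # transactions per second, proposed server
revenue_per_txn = 0.001                  # revenue attributed to one transaction ($)
busy_seconds_per_year = 6 * 3600 * 250   # 6 busy hours/day, 250 business days
server_cost = 120_000                    # purchase price of the new server ($)

added_txns = (new_tps - current_tps) * busy_seconds_per_year
added_revenue = added_txns * revenue_per_txn
roi = (added_revenue - server_cost) / server_cost

print(f"Projected added revenue: ${added_revenue:,.0f}")
print(f"Simple first-year ROI:   {roi:.0%}")
```

The point is that the whole model hinges on a per-transaction revenue figure, which has no equivalent for an analytics job.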
You are better
served to estimate the number of actionable analytics outcomes your processing
is likely to deliver, agree on the business cases those results will be applied
to, and then measure whether you are enhancing revenue opportunities, product
time to market, or other goals you set. From the standpoint of a big data
server, results are likely to be measured in elapsed time per job, how close to
real time the analytics outputs are, and how many concurrent jobs you can service
on the server in a given period of time.
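A rough sketch of those three server-side measures, computed from a hypothetical job log, might look like the following. The timestamps, field layout, and one-hour window are all assumptions for illustration.

```python
# Minimal sketch of the big data server metrics named above, computed from a
# hypothetical job log: (submitted, started, finished) timestamps per job.
from datetime import datetime, timedelta

jobs = [
    (datetime(2013, 5, 1, 9, 0),  datetime(2013, 5, 1, 9, 2),  datetime(2013, 5, 1, 9, 40)),
    (datetime(2013, 5, 1, 9, 5),  datetime(2013, 5, 1, 9, 6),  datetime(2013, 5, 1, 10, 15)),
    (datetime(2013, 5, 1, 9, 30), datetime(2013, 5, 1, 9, 31), datetime(2013, 5, 1, 10, 5)),
]

# Elapsed time per job: how long each analytics run takes end to end.
elapsed = [fin - start for _, start, fin in jobs]
avg_elapsed = sum(elapsed, timedelta()) / len(elapsed)

# Closeness to real time: lag between data arrival (submission) and the result.
worst_lag = max(fin - sub for sub, _, fin in jobs)

# Concurrency: jobs completed inside a given window (here, one business hour).
window_start = datetime(2013, 5, 1, 9, 0)
window_end = window_start + timedelta(hours=1)
completed_in_window = sum(1 for _, _, fin in jobs if window_start <= fin < window_end)

print(f"Average elapsed time per job: {avg_elapsed}")
print(f"Worst lag behind real time:   {worst_lag}")
print(f"Jobs completed in the hour:   {completed_in_window}")
```

These are throughput and timeliness measures rather than revenue measures, which is why they have to be tied back to agreed business cases before an ROI claim means anything.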
The other
element of big data asset management is making a smart decision when choosing a
platform. Reduced instruction set computing (RISC) servers have held a processing-speed
advantage over x86 servers for big data because RISC can process twice as many
concurrent threads of data, but solid state drives are now making a real performance
difference in x86-class servers.
The question for
the IT decision maker becomes: what platform do I invest in for a long-term big
data strategy that will protect my investment? To answer it, the IT
leader should have very direct discussions with vendors to see where they are
headed long term, where they plan to invest, and how willing they are to
offer investment protection alternatives if they change direction.
It might also be
time to consider cloud-sourcing as a big data strategy. Cloud-based big data
offerings are maturing, and if your need is not for immediate real-time analytics,
you could be well served by a cloud-based big data provider that takes on all
of the asset management. By going to the
cloud, you also save on energy costs and infrastructure investment in your own data
center, and you lower your long-term facility and asset risk
because you are with the cloud provider purely on a subscription basis, with no
hardware to buy. At the end of this process, the IT decision maker (and the CFO
heading the budget process) is looking at several asset scenarios for big data:
- Do you invest in the data center, or do you go to the cloud? (See the cost sketch after these questions.)
- If you go to the data center, which computing platform do you run your big data on?
- What are the financial, security, operational, and strategic risks?
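To put the first of these questions in rough budget terms, a sketch like the one below compares owning big data servers (hardware amortized over an asset life cycle, plus power, facilities, and operations) against a cloud subscription with no hardware to buy. All figures are hypothetical assumptions, not quotes.

```python
# Rough, hypothetical data-center-versus-cloud comparison over a planning horizon.
years = 3
hardware_cost = 500_000            # up-front purchase of big data servers ($)
asset_life_years = 3               # amortization period used in the budget model
annual_power_and_facility = 60_000 # energy and facility costs per year ($)
annual_ops_staff = 90_000          # operations staffing per year ($)

monthly_cloud_subscription = 18_000  # assumed all-in cloud provider fee ($/month)

on_prem_total = (hardware_cost / asset_life_years
                 + annual_power_and_facility
                 + annual_ops_staff) * years
cloud_total = monthly_cloud_subscription * 12 * years

print(f"Three-year on-premises cost: ${on_prem_total:,.0f}")
print(f"Three-year cloud cost:       ${cloud_total:,.0f}")
```

The real decision also has to weigh the latency, security, and strategic risks in the other two questions, which a cost model alone won't capture.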
For questions of
asset management and acquisition, when to outsource and when to insource, and
even how to develop a workable ROI, the traditional playbook on IT asset management
continues to deliver value for a new technology.