
IT's new holy grail: Break out of the 70% maintenance loop

There's a new favorite number among IT vendors and CIOs: the 70/30 split. See what it means and why everyone wants to break out of it.

If you've been to any IT conferences or trade shows in the past 18 months, then you've probably heard the same story repeated by IT leaders and tech vendors: IT departments spend roughly 70% of their budgets maintaining existing infrastructure and only about 30% on new innovation.

This has been a huge topic at EMC World 2010 in Boston, with nearly every speaker referencing it. EMC CEO Joe Tucci kicked off his Monday morning keynote by citing it as the top factor driving businesses to the cloud (see slide below).

On Tuesday morning, Hewlett-Packard sounded a similar note in launching a new program called Break IT Innovation Gridlock.

Thomas E. Hogan, executive vice president of Sales, Marketing and Strategy for HP Enterprise Business, said:

"In this era of constant change, breaking the innovation gridlock can mean the difference between being a market maker or a follower. With HP, CIOs can capitalize on change by reclaiming funds locked in operations to drive new innovation projects."

To make matters worse, the estimate of 30% of IT budgets being spent on innovation is actually pretty inflated, as Tucci correctly pointed out in his presentation at EMC World. That's because a lot of that 30% typically goes into new technologies bought to help deal with the burden of legacy code and old systems.

EMC, HP, and CIOs agree that this is a recipe for disaster. Enterprises with a heavy load of old technologies to maintain are on a collision course with IT overspending and inflexibility if they don't adjust. Why? As Tucci put it, "We're on the cusp of another information explosion."

Tucci quoted an EMC-sponsored report that predicts digital data will grow 44x between 2009 and 2020, fueled by more and more systems converting their data to digital formats, as illustrated in Tucci's slides below.
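For perspective, 44x growth over the 11 years from 2009 to 2020 implies a compound annual growth rate of roughly 41%, a quick back-of-the-envelope check:

```python
# Implied compound annual growth rate (CAGR) for 44x growth over 2009-2020.
growth_factor = 44
years = 2020 - 2009  # 11 years

cagr = growth_factor ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~41.1% per year
```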

The bottom line is that with all of this additional data to manage on the way, IT departments have to become more nimble and flexible or they will get completely bogged down in maintaining existing systems. EMC and HP offer different ways to do it, but they're both trying to attack the same problem - the 70/30 maintenance loop - before it gets worse.

About

Jason Hiner is Editor in Chief of TechRepublic and Long Form Editor of ZDNet. He writes about the people, products, and ideas changing how we live and work in the 21st century. He's co-author of the upcoming book, Follow the Geeks (bit.ly/ftgeeks).

48 comments
Saurondor

Only recently have we seen the focus in application development turn from quick deployment to quick refactoring. Change control was a secondary priority to time to market. I find it no surprise that the industry is now bogged down by a lot of tools that were rushed out quickly.

mandrake64

We face similar challenges in automation support, where we are basically measured by how much we minimise basic system maintenance and increase the proportion of our time spent on improvements to applications. Everyone wants a return on their investment, particularly managers of the maintenance staff. It also improves job security.

JasonKB

This is so easy. If IBM and HP etc. expect us to shift "Innovation" to a larger wedge of the pie, they can just provide free on-site, lifetime support for all of their products. Then we won't have to take care of any of the previous innovation, and we can run out and buy new stuff every week and innovate until we pass out from the endorphin rush of "new computer smell" we get from buying the newest toys on the market. Remember the gospel according to IT sales and marketing (S&M): today's innovation is tomorrow's implementation project and next week's legacy boat anchor.

geldernick

Paul Tedesco addresses these issues directly, showing new approaches and a methodology overlay to your current methodology, in his 2006 book Common Sense in Project Management. Get it from Amazon.com, Borders.com or BarnesAndNoble.com. We have now moved from "Do what works" and "Fix what is broken" to "Build what you will use." I was managing editor of the book and wrote the foreword. Donald Geldernick geldernick@aol.com

kimaterry

I recall that when I worked at a very large insurance company in the mid-80s, our CIO engaged a consulting company that specialized in comparing the productivity of IT organizations against like companies. Lo and behold, everyone seemed to spend 80% of their time on maintenance of existing systems. Things have not changed much in all this time... However, I would like to see a breakout between overhead maintenance (patches, fixes, bugs) and modifications needed because of business changes. Those numbers are harder to come by, but they would be more useful. Kim Terry www.terrosatech.com

rodsmail

Maybe these monolithic giants (IBM, HP, EMC, HDS) could spend some of that 70% we give them on innovation instead of continuing to buy the "buzzword of the year" companies... They all seem to have abandoned innovation in favor of acquisition!

renodogs

New Holy Grail? BALONEY. I think anyone who thinks cloud computing is the way to go is a fool. The whole idea of ridding ourselves of the mainframe was to eliminate the inevitable catastrophic failure of the centralized data center, the expense thereof, and the lack of control over the accumulated intelligence within. It is apparent to me that if you do this, a whole company could be vulnerable to espionage, vandalism, natural disaster, data theft, or perhaps extortion/conspiracy, because the very thing it needs to survive (its information) has been handed over to strangers.

Gee whiz, does anyone really believe in the wholesome goodness of strangers enough to hand over the keys to their entire fortune? If you do, please put your money where your mouth is: send me your personal data so I can "take care of it for you." I promise I won't sell it to the Russian mob, the Mexican drug dealers/coyotes, telemarketers, or your business competition. Do you really want to risk a data Pearl Harbor?

The whole idea of decentralizing computing power was to reduce the catastrophic effects of a data breach, failure, or outright outage due to an attack, be it physical or over the T1-T3 lines. Server farms controlled locally, with tasks divided amongst the servers, do exactly that. A small company can't afford large server farms, and frankly, it doesn't need them. But one can whistle past the graveyard and muse that "we'll never be attacked," right? Ask the Federal Reserve why they maintain 12 Federal Districts with computing/communications in each district.

All this talk about cloud computing is nothing more than the CFO of ANY corporation targeting the wrong crowd. CFOs want to slash operating expenses, and they see the on-site IT department as expensive. Well, not compared to the data retention expenses of the past. Remember when all those paper records were stored in steel cabinets, and an army of secretaries and custodians was needed to maintain those files? And the hidden cost of the time needed to find those files for review?

david.tracy

The article does not reflect the real reason why IT maintenance is usually high. The real issue is not OLD vs. NEW or even outsourcing (AKA shifting the problems/costs elsewhere). A very large portion of "IT maintenance" is usually spent performing day-to-day configuration, troubleshooting, data manipulation and data management activities, regardless of how new the technology is or where it is hosted (hosting within the cloud can actually add a layer of bureaucratic delays).

Often, significant cost reductions can be made by using simple software scripts that automate recurring and labour-intensive tasks wherever practical. Automated maintenance procedures, supplemented with exception alarms that warn of problems early on and, if possible, even remedy them, will work equally well on new and legacy solutions regardless of their location. Note that while most newer software solutions do provide added/improved business efficiencies, they initially consume considerably more labour to support than legacy systems that have stable and (where practical) automated maintenance procedures already in place.

COTS software vendors could help a lot by including maintenance automation tools and best-practice guidance for streamlining maintenance activities of their applications. In my view, COTS and in-house software should always include automation tools for data management, fault tolerance, fault auto-repair, fault (email) alarms, performance monitoring and deployment services. Until this happens, we can continue to roll our own.
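As a minimal sketch of the kind of automated check with an exception alarm this comment describes (the mount point, threshold, and addresses are hypothetical, and sending assumes a mail relay on localhost):

```python
# Sketch: automated disk-usage check that emails an "exception alarm".
# Mount point, threshold, and addresses are hypothetical examples.
import shutil
import smtplib
from email.message import EmailMessage
from typing import Optional

MOUNT_POINT = "/var/data"  # hypothetical volume to watch
THRESHOLD = 0.90           # alarm when the volume is over 90% full

def check_disk(mount: str, threshold: float) -> Optional[float]:
    """Return the usage fraction if it exceeds the threshold, else None."""
    usage = shutil.disk_usage(mount)
    fraction = usage.used / usage.total
    return fraction if fraction > threshold else None

def send_alarm(mount: str, fraction: float) -> None:
    """Email a warning so a human is only involved on exceptions."""
    msg = EmailMessage()
    msg["Subject"] = f"ALARM: {mount} is {fraction:.0%} full"
    msg["From"] = "monitor@example.com"
    msg["To"] = "ops@example.com"
    msg.set_content(f"Volume {mount} has exceeded the usage threshold.")
    with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
        smtp.send_message(msg)

if __name__ == "__main__":
    fraction = check_disk(MOUNT_POINT, THRESHOLD)
    if fraction is not None:
        send_alarm(MOUNT_POINT, fraction)
```

Run from cron, a handful of checks like this turn daily manual inspection into an exception-driven task, which is the labour saving being described.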

FlNightWizard

I speak from 10+ years of personal work experience inside a certain company mentioned in this article. It is a nice sales and marketing tool to tell others what they "should" do. The physical IT infrastructure there is getting to be state of the art; it's their own software slice of the pie that lacks modernization. As with any company: practice what you sell. How do you expect others to believe your hype if you don't believe it yourself internally?

vova1981

Easier said than done. The problem with many companies is that they rely heavily on old infrastructure and are scared to move to new horizons, uncertain whether new systems will make them or break them. Innovation would then mean completely redesigning their IT infrastructures from the ground up; only then does the idea make sense.

Dr.C

Hmmm.... 30 percent of my budget goes on new kit, including licences. The other 70 percent goes on:

- Consumables (10%), mostly printing
- Repair materials (10%): PSUs, screens, laptop batteries
- Support technicians (50%), our user-service front line

Won't change much while users need our help with new software.

Tony Hopkinson

The standard practice in IT: 1) Buy something. 2) Broggle, twoddle, bodge and limp to a point where the replacement cost is cheaper than the next absolutely-definitely-needed change. 3) Go to 1. That applies to consumers and producers alike. Don't even get me started on the things that affect the length of the purchase-to-destroy cycle. Going to the cloud or outsourcing won't solve anything. My recommendation would be pulling "your" head out of "your" arse... Change is a given.

CG IT

In other words, cut the lifecycle of hardware and software from an average of 3 years down to 6 months by not maintaining existing hardware but replacing it with the latest and greatest gadget. Depreciation is so quick as to be written down in months, not years. Heck, buy a cell phone today and in 2 months it's obsolete. Buy a printer or PC today and in 6 months it's obsolete and worthless.

I can't wait until the cloud vendors have everyone paying a monthly fee to access any application they want online using a cheap, throwaway device. The only computer manufacturers that stand to remain in the manufacturing business are the display makers. Who wants to squint to watch a movie on a puny 4" or even 6" screen? Like laptop docking stations that connect to a large screen, the throwaways will connect to your LCD TV and you'll simply use your wireless keyboard and mouse.

The PC industry is set to change dramatically, to the point where consumers and businesses no longer buy PCs, the operating systems for them, or the applications employees use. Smart devices are cheap to make and don't require a full operating system, and a user simply accesses applications online as part of the monthly fee they pay to the cloud provider. That's the future of IT. If the IT manufacturers didn't do this, then the telecoms would have with their ever more powerful cell phones.

geldernick

The industry creates the need for expensive or impossible upgrades to apps because of power changes in new versions of infrastructure. Often the conversion is too expensive and just not worth it, OR a new app does a better job; in both cases we throw the baby out with the bathwater and start afresh. There might be some version of VMware and *.rtf that would make connectivity universal and perpetual.

matthew.lunsford

...and those are the issues I have with this whole "cloud" thing. I view it as the modern version of the "thin client" idea of the 1980s that was supposed to be the next big thing. Except the cloud is worse, not better, in that you pass management of your data to someone else. Maybe I have my head in the sand and we're not going to get burned, but I just don't see everyone racing to put their data into the cloud.

SaintGeorge

Or swindling, lack of ethics, misrepresentation, hypocrisy, or just plain ol' business practice. Fake doctors selling potions out of wagons.

wizeppi

I have found over the years that 100% commitment to any one paradigm, technology or concept is not usually possible, and if it were possible it would not be optimal. I believe the key is to find a balance within this 70/30 ratio. That figure is probably an average that varies with many factors for every individual company; for some it may be 20/80, for some 90/10. The point is that finding a proper balance between maintaining old technology and adopting new technology is not a trivial task.

As an example, consider relational databases: when they first came out, they were in a sense perceived as the panacea for all database-related problems. Well, as it turned out, we are still using hierarchical databases and other types of databases, even in newer technologies. This is because the solution to "changing times" is not to abandon one technology for another but to know the strengths of each technology and apply it appropriately. That requires skill, experience and an ability to ride lightly on the waves of existing technologies to discover their strengths, while at the same time being acutely aware of the business needs, requirements and opportunities for applying the different technologies.

A simple example is a previous company I worked for: a medium-sized company that used an old Unix programming language to run its distribution, order point and accounting systems, but at the same time used bridges (technology, software) that allowed it to talk back and forth with newer, state-of-the-art Windows technologies. There was a mix of old file-server database technologies and client-server database technologies.

JulesLt

The other issue is that it's not just infrastructure - a lot of big blue-chip businesses made massive (6 to 7 figure) investments in software back in the 80s / early 90s. A lot of their business logic is locked up in these systems, often written in obsolete, and possibly dead, technology. On the other hand, it works, and even if you allow for a 10x improvement in development productivity since then, you're still looking at a large investment just to modernise. And given that this will only pay off over maybe a 5-10 year period, and is always a business risk, it's generally a lot easier to keep paying operational costs / keep trying to squeeze the operational costs than to actually simplify the underlying system. The tendency is to create new systems by bridging existing systems with middleware, web services, etc. (which preserves the life of things we'd once have replaced). I also think there's something a bit misleading about the data explosion chart - I don't see it affecting many legacy IT systems at all.

AnsuGisalas

Ultimately I'd be more impressed with a change in the size of the pie than with a redistribution of the slices. Who wants to buy new stuff unless they're experiencing growth and had a too-tightly-fitting environment to begin with? 70/30 sounds like a good ratio, if you can keep the obsolescence monster at bay. New systems require a lot of work (that's maintenance too), while old systems after a while remain pretty steady. So if you're constantly adding 30% at one end, tying up another 30% on "maintaining" that new stuff into working order, then spending 30% on the real maintenance of keeping the old stuff trimmed, and 10% on handling the removal of the unmaintainable at the oldest end - that sounds like a pretty ideal scenario, in this snafu world we live in. Better would be if new stuff worked with less effort, and old stuff could be trimmed more easily, but that would simply be shrinking the pie, not changing its looks. All this is for a steady-state business with an already adequate machine pool; rapid growth is different.

MyopicOne

David.tracy is largely correct, in that new software is not self-maintaining. Perhaps more important is that many project management techniques AS IMPLEMENTED (as opposed to the theory) ignore maintenance and operations as a cost. The project is done - wipe our hands and move on to the next one. Never mind that:

1) There is work still outstanding that was part of the project; completing it incurs costs that are not associated with the project itself.

2) Maintenance and operations were not budgeted or planned for by the units that will actually be doing the work, because a) they didn't know about the project, or b) the true ops and maintenance requirements were understated by an order of magnitude - thus a finite set of resources is now spread more thinly.

And the ops and maintenance staff will continue to be beaten until morale improves...

rhowelljr

The holy grail of reducing costs and managing increasing data demands may not be as motivating for process change as it is for personnel change. From what I've seen, there's often not much leadership when it comes to managing the redundancy of expended effort. Reinventing the wheel is the standard practice.

Infrastructural software and database mechanisms can and do grow in diversity and complexity. This can and does lead to issues that naturally thwart automated management. When something comes up, the hero steps in and rattles off a bunch of commands from a keyboard and saves the day, but that effort is instantly lost to the ether. Two minutes later, the same thing happens again. I've rarely seen a shop, small or large, where that wasn't considered completely normal and not the least cause for concern.

I've tried to lead bottom-up reform by trying to introduce uniformity and apply automated systems to manage complex infrastructure, but have been relentlessly, although not deliberately, counteracted by the free-for-all introduction of "innovations" from contributors with little to no oversight when it comes to implementation. Schedule is the only concern. Anything goes. Similar problems get solved with wildly different and independent implementations. Complexity grows unchecked.

When the wheels start to come off, the "solution" often comes from a software vendor with a sales staff that's always three steps ahead of the management team. Money gets spent and [another] prolonged "transition period" begins as the old free-for-all continues while more people and effort are piled on to bring the new solution into the flow. I've seen shops where poor results and failed projects erode confidence - paralysis sets in. Management doesn't want to touch anything. The old MRP system that was kept running, through the various attempts to supplant it, goes on. (And shops continue to pay thousands/millions of dollars for unused software licenses because no one wants to admit failure. I've even seen teams set up that spend years "planning" how to use expensive database software so the ongoing expenses can eventually be justified.)

But, in the end, personnel change can win out over process change. Keep doing the same thing, just find other people willing to do it for less money. Without process change, there really can't be a positive shift in efficiency. It's sad, but finding a like-minded management team that's willing to fundamentally change how it deals with growing data demands is the true Holy Grail. So if you're in a shop that knows how to work smarter, not harder - count yourself as very fortunate.

davecryerbze

Not utterly convinced that shifting data to the cloud, and responsibility to a vendor, reducing my own IT department's control of key systems, provides a real solution. Surely the vendor now has the issue of maintaining equipment and ensuring seamless backups and transitions to redundant systems when things go wrong. That vendor now faces the horrible reality of constant maintenance with no innovation. So reliance on something as wispy and ethereal as a cloud, or as historically unattainable as a grail, is perhaps not such a good idea :)

SeasonedsysDBA

Excuse me, but didn't the source of this information create the complex infrastructure that feeds them a constant revenue stream of maintenance fees, leaving their customers "so little time to innovate"?

prezbedard

OK, I learned nothing from that. It seems more and more of these are just sales pitches and nothing more. Come on, TechRepublic, get your head on straight...

codepoke

All those servers in the cloud won't appear magically and they won't maintain themselves, so someone's going to have to purchase/upgrade/*maintain* cloud hardware and middleware. All we're really talking about are economies of scale where an IIS admin in the cloud maintains servers for multiple companies. That expense will be passed on, so we end up talking about a marginal cost difference. So 70/30 becomes 68/32?
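A back-of-the-envelope sketch of that marginal-cost argument, with entirely hypothetical figures (the admin cost, sharing factor, and provider margin are all made up):

```python
# Hypothetical figures: how much does a shared cloud admin move the 70/30 split?
budget = 100.0        # total IT budget (arbitrary units)
maintenance = 70.0    # maintenance share, per the article
admin_cost = 10.0     # in-house admin labour inside that 70

shared_cost = admin_cost / 5                     # one cloud admin serves 5 companies
provider_margin = 1.5                            # provider marks the service up 50%
new_admin_cost = shared_cost * provider_margin   # 3.0

new_maintenance = maintenance - admin_cost + new_admin_cost   # 63.0
print(f"maintenance share: {new_maintenance / budget:.0%}")   # 63%
```

Under these made-up numbers 70/30 becomes 63/37; shrink the shared slice or raise the margin and you land near the 68/32 the comment jokes about.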

olddognewtricks

I suppose, besides the consumables mentioned above, the 70/30 breakdown is calculated based on what percentage of IT personnel's time is devoted to "maintenance" and what is devoted to "innovation" ("new" innovation, by the way? As opposed to "old" innovation?). Is time spent testing and applying patches "maintenance" or "innovation"? What about new releases of installed software? Service packs? Adding a batch of new servers to your racks to handle increased data loads? Depending on where you put the numbers, I'd say IT should maybe strive to get the balance to 80/20.

disasterboy.info

The promise of IT is eroded by the profit of IT. The companies that sold us the old infrastructure and software solutions are again telling us we need to spend more money on new stuff? Purported maintenance savings with new architectures don't seem to have materialised in the past, so why should they now? Be realistic about the new stuff: it may provide better flexibility or functions, but I'm suspicious about dollar savings on maintenance. Usually, to be marketable it needs enough proprietary content to make it inflexible and lacking in adaptability. The maintenance costs will hit somewhere in the future part of the software life cycle.

monicabower

...but it sounds more prophetic than what you hear in a keynote, too. There are lots of pieces and parts in the average data center now, even the midsize computer room or the small business server closet, for that matter, and those places don't get much mention in keynotes either. But the value of the pitch is where you can save in software and hardware maintenance by reducing the number of vendors you have to pay for it, and reducing the amount you have to pay overall. OEMs aren't the only ones who can support their hardware, software, and OS now, and you can go from having 30 OEMs supporting your stuff to a single independent maintenance provider supporting all of it. But the keynotes and conferences aren't going to talk about that, they prefer to suggest that buying new hardware and new software to run on it will save money rather than increase expenditures.

jasonhiner

This is a concept that lots of CIOs and IT vendors are talking about so we summed it up and threw it out there for TechRepublic members to discuss.

uberg33k50

Now that Apple is taking a little heat for their "Heil Steve Jobs" attitude, we have to push some other ridiculous rhetoric! It is a shame; TechRepublic used to be written by people who actually worked in IT and didn't just attend shows and conferences. Now we get the latest propaganda from all the big IT businesses... gee, I wonder what their interest is in trying to tell us that we should invest more money in new tech???

dnevill1

And now all that sensitive information that you've kept onsite for so long is floating around on someone else's hardware. My level of trust of other people with my information hasn't reached the cloud yet. Not just on a security level either.

SaintGeorge

I have Windows XP SP3 deployed in about 500 PCs. I'm guessing "new innovation" would be migrating them to Windows 7, while "old innovation" would be going to Windows Vista. Well, of course that would also be "stupid innovation" ...

SaintGeorge

It's old and it's true. Of course vendors want companies to increase their budget for innovation: that 70% remains in the company, and they don't see a dime of it.

Runaway progress is not good for anyone but the guys who charge you for the new goodies and the endless retraining of your personnel. Customers and users are dead in the water with help desks that never seem able to catch up with new technologies, most of which are nothing more than redressings of older ones. Error-ridden systems, released by vendors in haste just to stay ahead, are replaced by barely tested new ones that address a few of the problems while introducing a host of new ones.

This is not limited to IT; it's the way of our brave new world. The spill in the Gulf shows technologies whose problems have no developed remedies. The pressure to change cell phones, computers, cars, everything, in ever faster cycles threatens to drown us in garbage. Do you think I'm digressing? Read the book Made in Japan, by Sony founder Akio Morita. He explains how Japanese industry steamrolled America by accelerating the innovation cycle with trivial change.

I'm a technician, engineering-oriented, not a buyer. Every time I see vendors meeting with deciders, I can see the latter's wide eyes saying "Uhhhh, shiny!!" I know I have trouble coming...

monicabower

There is a credibility gap when the ones complaining about the aging infrastructure are the ones who sold it, and who are now selling the new hardware to replace it. Using an independent maintenance provider for hardware makes a lot of sense, especially if you already have a multivendor or multi-lifecycle environment. The solution probably isn't simply to buy more hardware so that you're now spending 50% on new technology but the same amount as always on maintenance. You don't fix a 30%/70% split by spending more on the 30%; you fix it by spending LESS on the 70%. And EMC can only support EMC stuff, which is only a small part of a data center.

Robert Przybylowicz

Correct me if I am wrong, but I am guessing you like open source? I just get that feeling reading your reply for the first time. Why would anyone want to buy a new car if the old one works just fine? I am not trying to make enemies, I am just pointing out that fact. If what you are saying is what you really believe, then the old GUIs or monitors are just fine. I could be way off - sometimes I am - but to say they are just for profit is not giving the designers credit. I am not against open source, but you have to upgrade and maintain that too, for a profit.

SaintGeorge

Just to dispel the notion that it's pushing the same idea...

AnsuGisalas

After all, this is practically a bootstrap for overseas outsourcing. Small companies don't outsource overseas much, but they could be convinced to move to cloud services, which means they de facto outsource to a company that then outsources overseas. Of course there's a bulk hardware gain in having huge cloud systems under one owner with many customers, but there's also the wage undercut. Or isn't there?

monicabower

But if Tech Republic didn't give a rundown it would be a bizarre omission. Simply restating the main keynote slides - albeit without the quirky videos - implies no agreement and is essential for those of you who didn't attend to be able to discuss them. It was a HUGE show, for what it was, but even 6,000 attendees can't represent more than a percent or two at most of the businesses with high level EMC implementations. As for how Tech Republic makes its money, they sell ads and are part of ZDNET. No need to stretch for a conspiracy, there are sponsored links covering 1/3 to 1/2 of every page on the site.

jasonhiner

There aren't many IT conferences and trade shows left, but TechRepublic still attends the big ones, gets a feel for what IT leaders are talking about, and reports back to the IT audience. This isn't the only thing we do (not by a long shot), but it's important for everyone who works in IT to know what CIOs and vendors are talking about at these events.

vince

I thought I was the only one who saw TechRepublic as a marketing tool for vendors. The article fosters the old, failed model of the IT industry pushing implementation (a bubble) rather than organizations seeking out a solution for an identified need. Just because it exists doesn't mean it is needed; this is very similar to many personal items in our lives. So if we start splitting up the finite pie differently (60/40), will maintenance suffer because more money is going into innovation? The pie must grow in order to increase the amount of investment in innovation.

Tony Hopkinson

If you can't access the control and then set SelectedIndex or SelectedItem, it's going to be a lot of googling, probably for something so obvious that no one mentioned it...

Tony Hopkinson

...development, anyway. What makes the cloud pay off as a platform is variable demand. Staggered e-campaigns, for instance: as one starts ramping up, the other is tailing off, so instead of having to buy double the resources to cover both, or, just as bad, choke one of them, you can do both by twiddling the amount of resource you allocate. For a straight 9-to-5 operational business model, I can't see it being worth the effort of switching over, which, by the way, could carry a substantial development overhead. You need a lot of decoupling and modularity to make the cloud really pay off; without it, your stuff will run like a dog.
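A toy model of that staggered-demand point, with hypothetical numbers: fixed infrastructure has to be sized for the sum of each campaign's peak, while a pooled, elastic platform only needs to cover the peak of the combined load:

```python
# Toy numbers: hourly resource demand for two staggered e-campaigns.
campaign_a = [8, 6, 4, 2, 1, 1, 1, 1]  # tailing off
campaign_b = [1, 1, 1, 2, 4, 6, 8, 8]  # ramping up

# Fixed provisioning: buy enough for each campaign's own peak.
fixed = max(campaign_a) + max(campaign_b)                     # 16

# Elastic provisioning: pooled capacity covers the combined peak.
elastic = max(a + b for a, b in zip(campaign_a, campaign_b))  # 9

print(f"fixed: {fixed}, elastic: {elastic}")
```

The gap between the two numbers is capacity you would otherwise buy and leave idle; with a flat 9-to-5 load the numbers converge and, as the comment says, the switch stops paying for itself.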

monicabower

...you're saving because you've virtualized to a resource-on-demand model. Their cloud is MUCH more about uber-virtualization than about access-anywhere. In fact, they had a segment and a video in a keynote speech - not just a breakout, but a keynote - that took up the difficult task of breaking it to the storage managers in the audience that most of them could probably be fired if this methodology were rigorously employed. They had better become "unified computing engineers" or some other more or less made-up term, because with the new Vmax/Vplex storage from EMC there was literally nothing left for them to do if all they did was manage storage. So clearly EMC is also pushing a savings in payroll costs apart from any expense of the data center itself.

I definitely like their take on private vs. public cloud, and I completely drink the unified computing Kool-Aid as far as that goes - looking at storage apart from servers, networking, and for that matter power and cooling, is a mistake. But I take issue with the original premise, which is that 30% investment in new technology is somehow insufficient. Imagine if you ran your desktop with the same philosophy, adding 3 new applications for every 7 you update? And isn't it logical that the potential for saving money is much greater if you can get the 70% portion under control - i.e., the costs inherent in configuration and maintenance, which you're completely correct in pointing out? Whatever is new today will be legacy eventually, and new for the sake of new is still a message that sounds a lot better to OEMs than it does to people who actually manage data centers.

codepoke

I still don't understand, Monica. The privately owned cloud satisfies my need for security, but I don't see where the value-add is hiding. The promise is that I will be able to go from 30% new investment to some vague but larger figure. How do I save $millions by bringing in a private cloud? The cloud adds a layer of abstraction, but it's not a $free layer, so the ROI has to be pretty substantial. Where is it? My developers no longer need to hard-code a share name, and other gains of that magnitude are readily apparent, but that's .01% even if I'm still lifecycling servers every 3 years. When I lifecycle a puff within a cloud, the new server that replaces it still needs to be configured, so Infrastructure ends up paying back the .01% Development saved. Whether EMC maintains my cloud or I train up my own staff, the roads must roll.

monicabower

They stomped out the idea of hosting your stuff in the public cloud from minute #1 of EMC World. They're all about leveraging that technology inside the data center, so you're provisioning based on resources and properties rather than continuously, manually assigning physical machines. The EMC pitch makes no sense if you think of the cloud the way everyone has mostly been thinking of it. It's all still your stuff in your data center; you just portion it out using a cloud model - virtualized, basically - instead of just adding more and more storage and hardware.

codepoke

And I'm in Orlando, too, dnevill1. We should touch base.
