Servers

Do cloud computing and parallel processing lack use cases?

In response to a post on ZDNet by Larry Dignan, Justin James asserts that there are no killer apps for cloud computing or parallel processing, and explains why neither has become a bigger trend.

 

Several weeks ago, Larry Dignan wrote a post on ZDNet entitled, "Yes folks, the cloud and parallel processing need killer apps." Normally, I tend not to pay terribly close attention to industry pundits when it comes to the programming industry, because I find that the "latest and greatest" these folks are interested in writing about simply is not the reality for your typical developer. These pundits are a great resource if you want to know what might be mainstream five years down the road, but most developers are too busy to worry about anything other than the here and now. In this case, though, Larry really seems to have his finger on the pulse of the industry, and I wanted to expand on his thoughts. (In the interest of full disclosure, I should note that Larry Dignan is Editor in Chief of ZDNet and Editorial Director of TechRepublic.)

First and foremost, "the cloud" and parallel computing are both only about three years into being viable for mainstream use. In the Spring of 2005, the first Intel dual-core CPUs (the Smithfield models of the Pentium D) hit the market. In mid-2006, the Core 2 Duo series drove the price/performance curve in a fantastic direction, delivering unbeatable dual-core performance at prices that had been considered cheap for single-core performance only a few months earlier. Cloud computing became broadly available in the same window: in the Spring of 2006, Amazon's S3 service heralded the availability of cloud resources from mainstream vendors.

AJAX as a technique had killer apps like Google Maps and Outlook Web Access to show developers that learning it would be useful. Java and .NET both had provable, obvious value that developers saw, driving their adoption. At this point, hardware vendors cannot seem to crank single-core clock speeds significantly higher without severe heat problems, which is why dual-core (and now quad-core) architectures are continuing Moore's Law on the motherboard. WAN speeds and reliability are now at the point where vendors feel that there is opportunity in cloud computing.

But as Larry asks, where are the killer apps? Heck, where are any apps, killer or not?

Games still are not making heavy use of parallel computing (although the AI in them could definitely use it, in my opinion). Graphics, video, and audio editing programs such as Photoshop and Premiere make heavy use of multithreading. Modern compilers are getting better at using parallel processing to speed build times. Of course, operating systems and network services like Web servers, database servers, and e-mail servers have always had to do a lot of multithreading. See a trend here?

While these are applications that nearly every user touches at some level, they are applications that only a small segment of elite developers work on. And those developers were writing code to take advantage of multi-core machines long before such machines became mainstream, because many of those applications were already running on high-end hardware years ago. Multimedia editors have been using SMP workstations for well over a decade. SMP servers (even x86 ones) have been in server rooms for more than a decade (in fact, I just shut down a dual-processor Pentium II server from circa 1996). So the folks writing these high-end applications had motivation to write their code to be multithreaded. More importantly, that means the techniques for writing code like this are established and documented, although they are not widely known.

Cloud computing, on the other hand, is a relatively new idea. Frankly, it is going nowhere fast. Wikipedia's list of "Notable uses" of Amazon's S3 is... well... not very notable. Amazon's S3 service gets more press from being down than it does for signing big clients. The other cloud computing vendors that I looked at also seemed to have equally unimpressive track records and customer bases.

So what is going on here, and what kind of killer apps could give these two trends some traction? It is a cinch to tell you why parallel computing hasn't been a bigger trend: There is simply no need for it in the vast majority of applications. Regular readers of the Programming and Development blog know how much I like to write about parallel computing, but I recognize the reality, too. Very few applications bring a CPU more than a few percentage points off of idle. If they do use a lot of CPU time, it is going to be in a database request, at which point it is out of the application developer's hands. Modern CPUs are simply too fast to justify using parallel processing in many cases.

If you do happen to see an application use multithreading, it is usually an asynchronous pattern, like downloading an item with the option of cancelling it, and it is done for usability reasons rather than performance reasons. Another place you see multithreading in typical applications is in third-party components, like a graph-rendering component. In a nutshell, the typical business developer makes use of resources that may kill the CPU but does not directly write any code that could or should be converted to parallel processing.
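For illustration, here is a rough sketch of that cancellable-download pattern (in Python; the URL and file names are just placeholders): the transfer runs on a background thread so the interface stays responsive, and a shared flag lets the user cancel it.

```python
import threading
import urllib.request

# A hypothetical cancellable download. The point is usability, not speed:
# the work happens on a background thread so the UI (or main loop) stays
# responsive, and setting the event cancels the transfer mid-stream.
cancel_requested = threading.Event()

def download(url, dest, chunk_size=64 * 1024):
    with urllib.request.urlopen(url) as response, open(dest, "wb") as out:
        while not cancel_requested.is_set():
            chunk = response.read(chunk_size)
            if not chunk:          # end of stream
                print("Download finished.")
                return
            out.write(chunk)
    print("Download cancelled.")

worker = threading.Thread(
    target=download,
    args=("https://example.com/", "example.html"),  # placeholder URL and file
    daemon=True,
)
worker.start()
# In a real UI, a "Cancel" button would simply call: cancel_requested.set()
worker.join()
```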

Additionally, mainstream OSs juggle requests for CPU time very sanely. I was recently doing some experimentation with parallel computation of Fibonacci sequences. I had my Core 2 Duo processor pegged at 100% on both cores. The MP3 I was playing did not exhibit any problems, and the computer was still perfectly usable. On top of that, the perception is that writing multithreaded applications is very difficult, even though it is getting easier and easier.
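A rough sketch of that kind of experiment, written here in Python purely for illustration (it is not the exact code I used), looks something like this: spawn one CPU-bound worker process per core and watch every core sit at 100% while the machine stays responsive.

```python
import multiprocessing as mp

def fib(n):
    """Deliberately naive, CPU-bound recursive Fibonacci."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

if __name__ == "__main__":
    cores = mp.cpu_count()          # 2 on a Core 2 Duo
    with mp.Pool(processes=cores) as pool:
        # One heavy job per core; both cores hit 100% until the jobs finish,
        # yet the OS keeps the rest of the system perfectly usable.
        print(pool.map(fib, [34] * cores))
```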

Cloud computing's lack of success is even easier to explain: trust. Do you trust that your Internet connection is 100% perfect? Much less than you trust your internal switches. Do you trust that your company's data is safe in the hands of people you have never met? Much less than you trust your DBA and your system administrator down the hall; after all, you know where they live, and your HR department ran background checks on them. Do you trust that the third-party vendor is really doing the nightly backups you are paying for? Much less than you trust your in-house nightly backups; you see them take the tapes offsite once a week, after all. Until the cloud computing vendors build up a long-term reputation for being reliable and trustworthy, cloud computing is dead in the water.

I am not saying that cloud computing or parallel processing do not have a place. I think that cloud computing is a good idea for consumer-oriented applications that either act as a redundant copy of data you already have locally (such as using Flickr to publish some photos on your hard drive) or provide services and store data that you do not absolutely need constant access to (like Skype). The cloud vendors do have better uptime and backup procedures than the typical consumer, and the typical consumer is less likely to have information whose loss would be catastrophic. Likewise, where you will see a lot of parallel computing is in minor functionality; think of applications that get a lot more graphical and a lot more real-time (e.g., Microsoft Photosynth). But it is very unlikely that we will see applications that are both business oriented and 80% or 90% cloud computed or parallel processed, or that show off either of these ideas extensively.

So, Larry, in response to your question, "Where are the killer apps?" I think the answer is: There are no killer apps for either cloud computing or parallel processing. At best, there may be killer widgets. While we may remember the handful of applications that are fully AJAXed, the reality is that most sites employing AJAX use only a widget or two where it makes sense. Likewise, while we may remember some super-neat ray tracer or an application that magically retains your data wherever you are, most applications will incorporate these techniques as a side dish, not the main entrée.

Eventually both cloud computing and parallel processing will enter the average developer's bag of tricks, but don't hold your breath.


J.Ja

Disclosure of Justin's industry affiliations: Justin James has a working arrangement with Microsoft to write an article for MSDN Magazine. He also has a contract with Spiceworks to write product buying guides.

About

Justin James is the Lead Architect for Conigent.

77 comments
atsanos

Most programmers don't use a modern design model or have a formal education in software design, and they think of a software solution as a single flow of instructions, not a galaxy of autonomous objects that cooperate to reach a solution.

mikifinaz1

People spent the last half of the past century cutting the cord to the mainframe. Why would anyone go back to that via the cloud?

JohnOfStony

Parallel processing comes in many forms, the simplest of which and the one with the greatest potential is where every processor runs the same program but with different data. Examples of possible applications using this technique include payroll, air traffic control, meteorology, graphic processing, to name a few. The program can be relatively simple which means reliable and easy to debug - extremely important in cases such as air traffic control. The general trend of programming seems to be to get more and more complex both in applications and operating systems which leads to unreliability and difficulty in debugging - we seem to have forgotten KISS - Keep It Simple, Stupid! So in my view, parallel processing enables rapid processing of huge volumes of data with simple programs and relatively low processor speeds. I worked on an array processor simulator back in the mid '80s but the company developing the idea went bust - but not because it was a bad idea!
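(An illustrative sketch of the same-program, different-data pattern described above, using Python's multiprocessing; the payroll records and tax rate are hypothetical.)

```python
from multiprocessing import Pool

# Hypothetical payroll records: every worker runs the same simple program,
# only the data differs.
employees = [
    {"name": "Alice", "hours": 160, "rate": 32.50},
    {"name": "Bob",   "hours": 152, "rate": 28.00},
    {"name": "Carol", "hours": 168, "rate": 41.25},
]

def compute_pay(record):
    """The 'simple program' each processor runs."""
    gross = record["hours"] * record["rate"]
    tax = gross * 0.20                    # assumed flat rate, illustrative only
    return record["name"], round(gross - tax, 2)

if __name__ == "__main__":
    with Pool() as pool:                  # one worker per available core
        for name, net in pool.map(compute_pay, employees):
            print(name, net)
```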

Jaqui

Nope, I don't trust it to be always available, for one simple reason: I have been offline with a dead modem for two days. Since hardware can, and does, fail, trusting in 100% availability is not really possible.

SObaldrick

Where do the use cases come into this article? I was interested in the title, but could find nothing in the article about use cases. :-( Les.

Tony Hopkinson

Always has been. Giving away that much power over your business function, in conjunction with divesting yourself of the resources you use to do it now (and in the future), just doesn't make sense from a business point of view. I'm sure there are a few things it can be used for that you could get away with, and maybe if that happens we'll build up some confidence in the ability of vendors to meet the need. The thing about the model is that the more successful it becomes, the less profit there is for the third party in meeting any customer's specific needs without substantially adding to the price... People like to sell it as new and gee-whiz, but it's no different to renting space on a mainframe, buying web space, even outsourcing. Short-term gains for the bean counters maybe, but a long-term shortage of viable options in the future. Anyone who starts talking about competition I will laugh at. If the idea takes off you are going to end up with very few vendors; scaling will cut their costs big style. Parallel computing: like you, I think it's a very small niche from a practical point of view. There will be very few applications that couldn't benefit somewhat from parallelism, but whether enough to justify the expense of building and maintaining it, I have my doubts.

Tony Hopkinson

While there are all sorts of possibilities for multi-agent, parallelization, threading, etc., at some point everything goes sequential at both the micro and the macro level. Certainly education in better software design is a problem, but seeing as academia hasn't mastered meaningful variable names yet, I wouldn't hold your breath on a big practical change in the near future.... Galaxy of autonomous objects... sheesh. How about a bit less meaningless waffle and more practical uses? Why should anyone who sits down doing basic CRUD applications all day invest in learning this sort of technique? Are they going to use it? No. Will they get paid for having it? No. We need mainstream OSes that are capable of true parallelisation without having to micro-engineer code to get any meaningful result. We need languages to code in these environments which don't require us to spend 80% of the development budget doing swim diagrams. And we need all this to be mainstream, i.e. the sort of thing some newbie VB6 type can use. Don't hold your breath for any of that either; it's cheaper to throw more hardware at the problems. Much cheaper.

justinco

Those are among the most pressing reasons why. Increased security, automation, and improved utilisation are three more. The cloud does not equal the mainframe either; clouds can be built on x86, Unix, and mainframe infrastructure, though they might resemble the mainframe from a functional perspective.

Justin James

... hey, the trend towards VMs in the server room is a *very* mainframe idea; it came from IBM's VM operating system on the mainframe, after all. :) In fact, more and more, the trend in server rooms is to use commodity x86/x64 hardware and turn it into something that conceptually is not terribly different from a mainframe. The big difference is that instead of having 1 huge box with hundreds of CPUs on daughterboards and 1 OS installation which can handle hotswapping of RAM, CPUs, etc., we have hundreds of 1U or blade servers in a cluster configuration, where the servers themselves get hotswapped. Intel says "tomato", IBM says "tomatoh", but it is all effectively the mainframe concept. :) J.Ja

DukeCylk

The cloud is about high-speed access to data, leveraging the computing power of the internet, and making local hardware cheaper... parallel processing is more about displays, rocket science, high-speed data collection for testing... so the latter is like a burst of high technology, like a drag race car, where cloud computing is the whole interstate system.

Tony Hopkinson

Is it a generally practical one though? I can think of a lot of places where I could have used it over my career, but could I have justified the extra expense in hardware and software in terms of the benefits? No, afraid not.... I agree completely on the complexity issue and try to reduce it wherever possible. Parallel processing, though, is not always (read: hardly ever) an applicable method at a gross level for the bulk of applications. Most of the problems we solve have a bottleneck at input and output to the user. Short of seriously large processing burdens, which are almost always taken care of by a few server applications, parallelising at the client end would cost more than it saves.

chris

We have servers die in house too. Do we cry about how computers are no good and we should've outsourced that function? No, we buy and configure another one, or, if we have enough extra cash lying around, we grab our spare and install it. That is a weak argument against it. NOTE: I am not arguing for it, but we gotta get beyond saying things that can only scare business people into agreeing with us. IT people need to think through things.

Justin James

Here in Columbia, SC we have had within recent memory week-long power outages (no laws mandating that the utility company trim trees near the poles, one good ice storm, lots of trees sag and take out lots of power lines). I would be cautious about moving to a cloud computing model without a full disaster recovery plan that involves *at least* two geographically separated sites (multiple hundreds of miles between them, preferably in different regions of the country completely), and a provider with the same. Amazon might have such a setup, but many (if not most) smaller vendors can't afford it, especially if they are startups. J.Ja

Justin James

Les - That was the point. Looking at the possibilities for using these techniques, it is hard to find any use cases for them. Do I formally lay things out PMP-style? No, I think that would be pretty boring to read. But I did take a good look at the possible scenarios in which to use these techniques. :) J.Ja

mattohare

So many things seemed a bust, then all of a sudden they seem to be everywhere. I remember OOP being this way. It was great fodder for filling pages in software engineering magazines, but nothing more than an idea that sounded good and went unexercised. Suddenly, it seemed everything was programming to class objects, methods, and properties. Where I could not imagine life with OOP, now I can't imagine life without it. Same with internet applications. I'm not just talking about AJAX either. When internet connections were measured in 10s of Kbytes and we paid by the hour, the idea of an office application or webmail seemed laughable. Now we're buying travel, books, and even shoes over the internet. Travel web applications are becoming some of the most intense when you get your air, hotel, and car hire in one go. Cloud and parallel are coming. They'll be here very suddenly, I think, too. I just hope it's a wave I can stay on long enough to make a few bucks (or quid) in the process. Right, Justin?

CharlieSpencer

In order for a business to consider cloud computing a viable alternative, it has to reach the reliability levels of other third-party utilities: electricity, water, telephone, etc. Businesses don't bother providing these for themselves except on a disaster-recovery basis. It's difficult to plan DR for cloud computing; you can rent processor resources, but how do you access your data? Until cloud computing can achieve the uptime of the local electrical utility, it will remain useful only for personal, non-essential applications.

Tony Hopkinson

Security: all your data is in the hands of a third party, and you have to go out of your own network, across multiple nodes out of your control. Automation: of what? Services like that will be unique to a business; every one will need to be costed and maintained, and you can expect a bill for that. Improved utilisation: of what? Yes, this could be offered, but will it be, and for how much? For any sensible service in a cloud that a business can make money from, profit is going to be leveraged by scaling. As for social responsibility, well, you could claim server farms are greener (through scaling), but obviously they are going to work it to employ fewer people, and social responsibility is far more than claiming to be green and making a big fat profit for a couple of rich boogers. Try and think all of the consequences through.

mattohare

cheap commodity PCs they got from anyone tossing them.

mattohare

Sounds like you used to play with a full DEC?

Tony Hopkinson

I'm an earthling myself. It isn't ever going to be faster than getting it off your own network is it, given any vaguely sensible set up? Speed and power, sheesh.

Jaqui

Can you trust "mission critical" data to a system you have ZERO control over keeping online? No sane CEO would.

DukeCylk

My argument is that we as developers should not be proclaiming it a bust, but putting it to the test... I think JJ is making a rhetorical point: where's the killer app? Get to work and make it, or else the opportunity of a cool new tech may die on the vine. If it can't deliver, then it's a bust.

Jaqui

unless whoever is offering the service has such redundancy, they could be taken offline by external factors they cannot control, costing businesses. I would think that triple or better redundancy would be the target. Natural disasters and hardware problems being unpredictable, having two centers taken out is possible, even if unlikely; three is even more unlikely.

Tony Hopkinson

option from a DR standpoint. Economies of scale would force mergers and buyouts, leaving very few providers. They might not even be domestic! What's the point of the internet if you can effectively make it useless by taking out a handful of nodes on it? It's not just a question of one or two businesses not being able to function; it could be half the damn country.....

Justin James

I think that when some quality cloud vendors get here, and they get some maturity, and build a LOT of trust, that cloud computing will become more popular, especially amongst Web startups. Twitter or MySpace type companies would do well using it. I think that the barriers to cloud computing are cultural, business, and legal, not technical. Parallel computing, on the other hand... most developers just are not doing anything where it makes sense. I hate saying it, because I love the topic, but until more developers are working on "personal computing" as opposed to "data processing", it won't get much traction amongst typical developers. J.Ja

Tony Hopkinson

OK, robustness, failover, DR, etc. could be improved, but when is this not the case? The only time real money is spent on them is just after you needed it, or when some auditor type is going to down-check you. It's the business problem that concerns me. When there's a lot of competition in the market, one of the key ways of ensuring and maximising your cut as a provider will be scaling. Obviously if you go down that route, short of some serious regulatory imposition, you are going to eat up the competition. Now there is none; who are you gonna call? It's no accident that the big boys like MS want this to go forward. Think your nuts are in a vice now? If this takes off, practice hammering 'em flat with a mallet; increase your pain threshold.

chris

It may simply be a matter of someone coming in and packaging services in an offering. On the backend, you don't know which company is doing what, but the company you signed with is managing all that. You want something and this guy can provide it for you.

Tony Hopkinson

Not too far from me they regularly get brown tap water. One day they'll get round to fixing that; it's only happened six times (over weeks!) over the last four years. Of course they get paid anyway, and it's not as if you can have another pipe laid to your house, so...... If it did become viable, how long would it be before they aggregated into one or two huge 'geographically' restricted providers and we'd have to take whatever they deigned to give us? Four years max, I'd say. There's a reason the big boys all like this idea, and it isn't to save us money or provide a much better service. Going down this route would be like volunteering for the wrong side of a firing squad.

justinco

Hi, sorry, I did not post this response in the context of my view of cloud computing. I agree with you that cloud computing will not be useful for most businesses; a few small players might find it convenient for non-critical data, but overall not many businesses will. Cloud, I believe, is more suited to individuals utilising the low-cost storage and some useful apps. Please read my original response to the first post on privately owned clouds; perhaps then my comments in this post will make more sense. Let's see.

santeewelding

Still. So far, I'm on your coattails. Caution does suggest thinking through consequences. Or, the reverse, consequences breed caution. Either or both, cloud as mimic or cloud to be it, the thing goes well beyond just business. Caution.

Justin James

At my last job, a bunch of "us" would do stuff like that too. It started with the CEO; when he'd be drinking with the salespeople, if they left their BlackBerries alone while in the bathroom, he would use them to fire off emails to their bosses. Our President would do the same with unlocked PCs. Me? I would be gentle about it, just bring up Word and type a message like, "The invisible hacker was here... your PC was unlocked so I did not have to work too hard. I wonder what I did with your computer while you were away?" People would go nuts, and the lock rate would soar. Believe it or not, it had a purpose. Our business revolved around private data like SSNs, DOBs, arrest records, etc. If there were to be theft or alteration of our data, we would be out of business in a heartbeat and people's lives could be wrecked. PC security was not something we took lightly. While it may have been a juvenile gag, it put out a lesson that was much harder to forget than boring emails from the IT department! J.Ja

mattohare

Some would log on to terminals to reserve them, and leave themselves logged on. Now, I'd never lower myself to this level, but some had a .com file ready for just such a situation. The vacant user's login.com would be renamed. A new login.com (that file held in readiness) would be copied to their root directory. Next time they logged in, the new login.com would say "you really shouldn't leave yourself logged in. Someone could make it so you delete all your files, like this:" and then list all their files w/o actually deleting them. Oh, and the new login.com would turn off any form of aborts. The last thing it would do is delete itself, rename the original login.com back, and then stop the process w/o any logout message. Once the person recovered from the shock and logged back in, they found everything as they left it. Well, maybe not their heart rate. LOL I never did this, mind.

Justin James

... when you can just get to class early, log in to yours, fire up a text editor, then swap the keyboard cable with the person next to you? This way, they are now typing their credentials into your text editor without noticing... then quickly plug your cable back in, log in as them, and change their password. Not saying *I* personally did these kinds of things... *I* preferred to stick with much more mature things, like a batch file that sent system messages to all of the PCs on the Vines network, which would make the user hit CTRL+ENTER to acknowledge it, and without doing so, all other keyboard input was ignored... :) J.Ja

mattohare

We had some 8-bit computers at my high school (around 1979). They were cool and all, my favourite being a HeathKit with OS and Basic interpreter integrated. I still think back fondly on some of the things I put into a login.com file.

Justin James

I was around VAXes; my father went to them after replacing their Wangs (that sounds so wrong...). I suspect that I learned to code on a VAX or an old-school *Nix system; it is really hard to figure out which it was, since I didn't get to do anything in the OS other than start the text editor and compile/run my code, and the code was BASIC in year 1 and COBOL in year 2. So you can't even figure out what kind of system it was from the tools... "it's all green screen to me!" J.Ja

Tony Hopkinson

Did another two years on it at the job before this one.

michael.brodock

According to HP a few years back, VMS was what 70% of the world's finances ran on. I don't know what that number is now, but it probably isn't much different. I know the US Post Office had about 80 VMS clusters at the turn of the century (Y2K, for those thinking 1900), which ran pretty much their whole business. I love VMS. Too bad there aren't more people using it.

Tony Hopkinson

Like watching custard set when I work from home doing hairy things like resynching code bases. It's not connectivity I have a problem with; the bit I have a problem with is relinquishing control to a third party. It's like outsourcing through a firm of consultants to buy space on a server to run an off-the-shelf suite. I know you are steering clear of suggesting such foolishness, but some believe the functionality you are talking about scales up to a full-on, rich-GUI, 24/7, does-what-the-business-needs-it-to solution. It doesn't, didn't, and never will. The more customisable the service, in any terms, the more it will cost to provide. We are talking things like: you must use .NET, and IE, and this encryption mechanism, and this Office version, and, and, and. It can't be anything but a limited set of choices, because to provide them all would cost too much.

DukeCylk

That wasn't my point. My point is, how about if you aren't physically connected to your own network, and VPN isn't available (which IMHO is a pain in the arse to maintain anyhow)... we're not all brick and mortar... how about if your users are exclusively remote, like sales or emergency first responders?

chris

but the site was up for others. So to me, the customer, it was down, but according to the host, it was not.

mattohare

Fifty-five minute music hours. Turns out they 'meant' only five minutes of commercials each hour. News and 'DJ chatter' counted as 'music'. Pardon me, but some of those Seattle/Tacoma DJs were FAR from musical in their 'chatter'.

Justin James

Yeah, under those circumstances I'll promise 5 nines too... with the additional disclaimer that it will be measured by an independent party, also with 3 testing centers on 3 continents, not by the customer. :) J.Ja

Jaqui

with three centers, all replicating each other, I would promise 5 nines, as long as each center was on a different continent. :D Otherwise, only 99% promised.

Justin James

Yeah, DR is a real nightmare. I remember seeing contracts with customers that promised "5 nines of uptime", and I went to the sales person and said, "you do realize that is around 5 minutes of downtime a year, right? And that we average 10 hours of *unexpected* downtime a year now with our other customers, not to mention scheduled downtime?" The salesperson was quite stunned... "but everyone promises five nines, I thought we could too!" I explained to him at that point that everyone "promises" five nines, but (unlike him), few were dumb enough to put it into the contract... J.Ja

mattohare

They can all be open and working. But, if the village shop is burned down, we're still out of luck.

adam.howard500

For 100% uptime guarantees, you also have to have a network that can handle not only your users' traffic, but is also capable of sending EVERY data update to multiple databases in different regions of the country (or the globe). This could get real expensive real quick for the provider with any kind of high-volume updates. Real expensive for the provider = real expensive to use the service. Oh, and what happens when their internal data transfer network has a failure at some point? Do they have a reliable method for synching everything up?

Justin James

As you say, the Internet is redundant... but unfortunately, while a path to the "last mile" may always be available, it is in the "last mile" where there is no redundancy. The only way to mitigate this is to have multiple nodes providing the same service, which is extremely expensive and difficult to set up, configure, and develop applications to work well with! J.Ja

Tony Hopkinson

So if a large proportion of "everybody's" services are only on one node (there might be several physically, but they can be considered as one, given that said provider can interrupt them at will), you become a hostage. Redundancy in the internet means maybe you can come on to TR and whinge about the cost of getting your ability to do business back. That's of course if TR are still paying said provider.

gypkap

It was designed that way from the start, as it was originally a prototype for a computer communication system that wouldn't go down when one or more nodes went down. There were many loops in the system so that data packets could be routed around failures. However, that might not be the case anymore as the redundancy might not be available for individual nodes (like nodes created by your cable provider, which are probably not redundant).

mattohare

One thing I liked on my current and previous major project is that I was able to encapsulate things into objects tight enough not to break things while going forward. As a maintenance programmer, I really had to fight to get the previous application into current (last year's latest and greatest) technology. Mind, I felt like I'd solved world peace after a few of these changes.

mattohare

These are developers that read all the glossy Flash(r) pages about what a new technology can do, and all "without writing a single line of code". Some of the trendsetters will take that at face value without any investigation, and make promises to the sales staff based on it. They may do this over drinks after work when the sane people can't bring them down to earth. Then we end up with stuff put into print and all the trendsetters have to live up to the promise.

santeewelding

Like having a baby. Divided about it, are you?

Justin James

Yesterday, I talked to Jeff Hansen, GM of Windows Live Services (aka, "Live Mesh"), in the PDC wrap-up (post coming out next week). After I talked to him, I thought a lot about this very same thing. Developers working at new places, or on new projects completely divorced from the old code, have the luxury of trying things like the .Net 3.0 technologies, cloud computing, RIAs, and other "what's new now" technologies. People working with legacy code simply do not have that opportunity. And you don't even need to be doing maintenance! If my company has a database that we use, and my brand new project needs to access that data, even though it is a new project, I need to use the existing data access objects, or I spend huge amounts of time re-implementing them in "the latest and greatest" system (an ORM system, LINQ, whatever...), possibly creating bugs or inconsistencies in the process. Right there, I'm limited to a system/language that can use the existing libraries, and possibly stuck with a lot of other legacy stuff too. If you look at the results and comments on some of the polls that I ran this month and last month, that divide is pretty clear. The people who can't keep up aren't really saying, "this new stuff stinks!" They are (mostly) saying, "I don't have time to learn it because I'm stuck working with legacy stuff, and even if I knew it, my project is saddled with legacy stuff." Sad that a group of people who (mostly) are super-into the new and cool are (mostly) stuck with last year's survivors that we call "best practices." J.Ja

DukeCylk

I think what you are also talking about, which frustrates IT maintenance, is marketing people needing to get a product to market before it's ready; Silverlight 1.0 (sorry, MS is such an easy target) for example, and 2.0 is supposed to have gotten it right... maybe it has all the features finally, but is it still fully functional? IT maintenance will be the first to know, and trendsetters will be happy with the features, but unless they fully test, well...

mattohare

Maintenance and Trendsetters. Maintenance are so stuck in what they do that they have neither the time, energy, nor desire to use the new facilities. Trendsetters move from job to job, never maintain anything, seem to know the latest and greatest, and forget the past. It's a tough divide. Maintenance developers eventually have the new stuff thrust on them. Once they do, and get used to the shock, they tend to do more stable apps than the trendsetters. That's probably how we get better products past the Version One stages. And I'm not just talking about Microsoft here. Some organisations' first websites are awful, but later upgrades of them are great.