Leadership

Can Microsoft win over enough developers to change the paradigm?

Justin James poses this question: Can Microsoft create enough demand in the minds of developers so that its vision becomes a self-fulfilling prophecy?

Over the last few months, I have written about some of the changes occurring in the development landscape. Some of the other writers in TechRepublic's Programming and Development space have also been approaching the same topic with hands-on articles, such as Tony Patton's recent piece on Silverlight 2.0, and Peter Mikhalenko's continuing articles on various Java APIs. Today, I want to touch on the concept of reflexivity.

Reflexivity, in a nutshell, is the idea that when many people believe something is going to happen, they unwittingly cause it to happen. The idea originated in economics, and I first stumbled across it in a piece about stock prices. In the stock market, when companies fear that a crash may be coming, they take certain measures (such as cutting back orders or laying off employees) that themselves play a significant role in creating the crash.

We all know who the 800 lb. gorillas in IT are: IBM, Oracle, Microsoft, Sun Microsystems (which is slimming down to be a 500 lb. gorilla), and so on. Now, how many of these gorillas are really involved in what programmers do? Or, think about where the tools and languages you use originate. It's a pretty short list. If you are a Java developer, your language comes from Sun, and your tools are Eclipse (originally an IBM AlphaWorks project), JDeveloper (Oracle), or NetBeans (Sun). PHP developers, by and large, seem to stick with text editors of one variety or another. That leaves nearly 100% of Windows desktop application developers, and probably 30% to 60% of Web developers, using one tool: Microsoft Visual Studio. Talk about an 800 lb. gorilla!

Microsoft conclusively proved through The Great ActiveX Debacle that it cannot ram anything down developers' throats that they have no use for. Even UAC isn't enough to get programmers to start following the guidance that Microsoft has been giving for more than five years. A huge number of apps are still being maintained in classic ASP and VB6, despite the huge advantages of moving to .NET; heck, .NET 1.1 seems to be the dominant version. Windows Vista is still struggling against the success of Windows XP. So, clearly, just because Microsoft thinks something should happen does not mean that it will.

Microsoft is not completely powerless either. When Microsoft decided that ODBC was the way to connect to databases, it happened. When Microsoft got serious about entering Web development, it went from a provable 0% market share to probably 30% to 60%, depending upon who you ask. When Microsoft released VB3 around the same time Borland released Delphi 1, it managed to turn what should have and could have been Borland's ultimate triumph into the beginning of the end for Borland as a development tools company. Microsoft was an early AJAX pioneer (Outlook Web Access is still quite slick), it pushed hard for Web services, and we live in a world of AJAX and Web services today.

I believe that when Microsoft thinks things should change in a particular way, it happens, provided that none of the alternatives are exceedingly better. In other words, people find it so much easier to go along with Microsoft's vision that the vision needs to be pretty murky before they will resist it. Largely, Microsoft's vision has been good, despite frequent botches on the implementation end.

Right now, Microsoft has a particular vision. There are others with it as well, but Microsoft is the only player pushing this combination of changes. And what is this vision? It can be broken down into the following pieces.

Multithreading: Microsoft's analysis of the CPU situation is that clock speeds on a per-core basis are not going up very much, but the core count per physical CPU is continuing to rise to keep pace with Moore's Law. Microsoft is placing huge bets on development tools for multithreaded development -- some in the C/C++ world and some in the .NET arena. When mobile devices get multi-core CPUs, these tools (especially the .NET tools) should transition seamlessly to those platforms as well. Microsoft has been on a massive hiring spree, managing to snag lifelong academics with specialties in parallel computing and the mainframe world and getting them working on research.

Functional languages: F# is just the beginning. LINQ brings many trademark techniques of functional programming into VB.NET and C#. IronPython and IronRuby are both efforts to productize research and development of dynamic languages with heavy functional tendencies. The push for functional languages and their techniques ties closely to the multithreading angle, since functional languages lend themselves especially well to multithreading. Look for a big push on F# in the near future; its lazy evaluation makes it a perfect target for automagical parallel processing without programmer intervention. Microsoft has been hiring heavily in this area as well.

Mobile computing: Microsoft has nearly wiped Palm off the map. Microsoft has also signaled that it envisions a world in which a significant portion of "real work" occurs on mobile computing devices. Looking at Microsoft's research into Human/Computer Interaction (HCI) shows that the majority of the company's curiosity is around finding newer and better ways of interacting with alternative form factors -- primarily mobile devices, but also non-mobile devices such as Microsoft Surface, the giant table computer. Microsoft has been hiring here too.
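The claim that functional techniques lend themselves to multithreading comes down to this: a side-effect-free function can be applied to many inputs on many threads without locks, because no call depends on any other. A minimal sketch, using Java's java.util.concurrent as a stand-in (the article's C# and F# tooling is not shown here, and the class name is invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSquares {
    // A pure function: no shared state, so it is safe to call from any thread.
    static long square(long n) { return n * n; }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());

        // Each task is independent -- the essence of "functional parallelism".
        List<Callable<Long>> tasks = new ArrayList<>();
        for (long i = 1; i <= 100; i++) {
            final long n = i;
            tasks.add(() -> square(n));
        }

        // invokeAll runs the tasks across the pool and preserves order.
        long sum = 0;
        for (Future<Long> f : pool.invokeAll(tasks)) {
            sum += f.get();
        }
        pool.shutdown();

        System.out.println(sum);  // sum of squares 1..100 = 338350
    }
}
```

Because `square` touches no shared state, the runtime (or, in the F# vision, the compiler) is free to schedule the calls on however many cores exist; the programmer never writes a lock.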

When we think of major bets that Microsoft is making, we think of things like the aborted Yahoo! acquisition, Windows Vista, MSN vs. Google, and so on. The reality is, those are just sideshows. The real game is being played for the hearts and minds of developers, same as it always was. The x86 dominated due to the software that ran on it, not its own technical strength. Ditto for the Windows operating system on the client side, and for Windows Server and *Nix in the server rooms.

Remember "embrace and extend"? That refers to programming, not networking or hardware. What about the famous Steve Ballmer "monkey boy" dance? He was chanting "developers, developers, developers!" not "networking engineers and system administrators!" Let's get some perspective, folks. We're the programmers. At the end of the day, the entire IT industry must be focused around catering to us, and we ultimately (and hopefully) serve the end user. It's simple, really -- we're the only ones trying to directly meet the end users' goals. Everything else simply supports that. A PC without software is useless. A network without bits flowing over it is useless. But we can write software for any platform that can communicate over any network and, for a while, we didn't even need networks. It is new software that drives the deployment of new hardware and new networks, not the other way around. The user doesn't care whether you are using a PowerPC or an x86 CPU -- they care about running the apps they want. Ditto for networking technologies.

Microsoft's really big bets are in development tools, not the OS or the hardware. What bets is Microsoft making on the OS front? The company's only bet is, "Users will stick with Windows." In hardware, the company still sells mice and keyboards, and it will work with any hardware vendor if there is money in it. But development? Microsoft has been doing so much hiring that it is pushing for the H-1B visa pool to be expanded. Take a look at Microsoft Research's Web site, and you will get an idea of what the company is working on. Very little of it has to do with traditional desktop computing or even traditional client/server computing. A lot of it is about mobile computing, multithreading, and functional programming languages. Microsoft is betting that developers are going to transition away from both Web applications and traditional client/server applications in favor of RIAs. At that point, Microsoft expects the paradigm to be, "major parallel processing on the server, traditional desktop app loads (possibly with parallel processing) on the client side, in a mobile computing form factor." It is not that Microsoft has given up on the desktop; I think the company now takes its domination in that arena for granted.

Bill Gates is just about out. Jim Allchin is out. Paul Allen is out. Steve Ballmer could very well decide he's through at any moment too. Ray Ozzie and Craig Mundie are carrying the torch, by and large. The question is not, "Do they know the vision?" but rather "Does Microsoft carry enough weight to make this happen?" And the answer to that question is, "quite possibly." While Microsoft has lost an awful lot of credibility lately, particularly with regards to Windows Vista, it is the largest employer of truly brilliant people in the industry, and the only player in the industry left with any kind of substantial R&D budget. If any innovation is going to happen -- that is, if it happens at a mature, established company and not a startup -- it will most likely happen at Microsoft.

Microsoft is trying very hard to be the cause and beneficiary of reflexivity, but can the company create enough demand in the minds of developers so that its vision becomes a self-fulfilling prophecy? Only time will tell.

J.Ja

Disclosure of Justin's industry affiliations: Justin James has a working arrangement with Microsoft to write an article for MSDN Magazine.

About

Justin James is the Lead Architect for Conigent.

83 comments
globaldefenses

Why "MS has lost weight over Vista?" Vista has quirks but is fast & stable. I have been using it for nearly 2 mos now and am running every kind of application conceivable... never crashed. This forum is all bullshit. You people need to experience reality so you can acquire a value system that makes your opinions worth reading.

mikifinaz1

They only want to do this so that they can put everyone in a strangle hold.

wyattbiker

You left out the 1,000 LB gorilla Google. MS is already acting like a panicky little pussy just at the mention of Google. MS has never innovated anything of substance without stealing it. Start with the OS (DOS wasn't even theirs, Windows concepts stolen from OS/2 and Apple), languages (Basic --> Waterloo), (IE --> Netscape), IDEs (IDEs preexisted MS), virtual machines and OOP (Smalltalk, Java), databases (SQL --> Sybase, bought FoxPro and destroyed it, created crappy Access). Now they try to buy Yahoo for searching? ha ha ha. I can go on for days. Would you believe most of the concepts of computing were invented and implemented in the 60's and early seventies? Bill and Ballmer have made MS successful because they know how to use scare tactics, FUD and fake patents (e.g. Linux FUD). Now let Google (a real tech and innovative company) FUD them. By the way, thanks for disclosing you are writing for MSDN. I don't blame you because we all have to suck up or use the MS monopoly to make a living (I know I do).

SnoopDougEDoug

Maybe I missed an argument, but don't most developers develop on Windows? And don't almost all Windows developers use Visual Studio? Why would Microsoft want to "change the paradigm"? To what? A non-Microsoft development world? That would be a sea change. It would make more sense to me to write an article titled "What does Microsoft need to do to keep developers?" Exactly what they have always done--incredible number of SDKs, massive doc/sample/tool offerings, developer conferences with free swag. Name one other company that churns out SDKs like Microsoft. Of course we bemoan that the offerings are not perfect, but come on, just the breadth is spectacular.

jean-simon.s.larochelle

...win a lot of programmers over. However, I think and also hope that MS never wins over too many of the developers. The reason for this is that diversity is absolutely needed in the development ecosystem, and it would be jeopardized by a too-dominant MS. MS is not the only company working on some of the items you have mentioned. SUN has been improving multiprocessor support in Java since version 1.5 through improvements to the java.util.concurrent package and will have more specific multiprocessor support in 1.7. The Ruby language and other such efforts have been a big influence, and I believe (and hope) that they will remain a big influence. I believe that innovation is not something that is easily tamed and put in a box (M$ in this case). I also believe many developers have a rebel side that will preserve them from total assimilation. And if you look at the influence that Java and other players have had on M$ in recent years, it would not be in MS's best interest to become too dominant. Who would have thought that a JIT and associated technology would become the heart of the M$ development ecosystem? JS

mikifinaz1

I for one don't want to trade trinkets for chains.

asandefer

Microsoft has a long, storied history of getting to the table last when it comes to true innovation. It took the massive growth of Google emerging from nothing to what it is today to force them to embrace the reality that users won't be tied to PC fat clients for applications and that the future is web apps. Walk down memory lane with me and I'll show you how Microsoft isn't deciding for the rest of us but rather waiting to make some company's trick their treat and then outmarket them. And with turds like Windows ME and Vista; releasing a server OS (Longhorn) which could not support Exchange 2003 or SQL Server 2005 - now there are 2008 versions of those products so that they can run on Longhorn; outmarketing true innovators is going to get tougher if they don't get their act together.
- ODBC: how much of this came from Microsoft and how much of it came from the SQL Access Group and Simba Technologies?
- SQL Server: halfway stolen from Sybase.
- Silverlight: developed a decade after Flash because Visual Studio developers were tired of not being able to make pretty stuff on top of their dull gray VB apps.
- ASP: bought from HotLava, who probably developed the idea of writing Basic for web pages after seeing Allaire's ColdFusion product.
- .NET: born from the concept of Java - and I truly think that when you have an entire decade to analyze what SUN did, you've got to do better than the first release of .NET. Bill Joy reportedly laughed when he saw the first release of .NET.
- C#: architected by the mastermind who created Delphi for Borland.
- Windows 95: kind of looked a bit like MacOS to me.
- Killing Palm? I think that we both know that the credit for that truly goes to RIM's BlackBerry.
- Microsoft Dynamics (formerly known as Microsoft Business Solutions) featured not one ERP or accounting system developed by Microsoft - all products were flat-out bought through acquisitions.
So tell us again about how Microsoft has always led us developers down the correct path and will continue to do so? Face it: Microsoft is way better at business and marketing than at innovation or developing anything fresh at all.

jck

You are demented...clearly. I challenge you to do a dual-boot config, Vista for Business on one partition, Windows XP Pro on the other. Benchmark both. See which is faster. I even turned off all the effects, and my XP partition on the laptop ran faster than Vista 32-bit home premium. Plus, I've had Vista crash a program about every 4-8 days. I do not have anything crash under XP. So, I can't say Vista is absolutely unstable, but it has nowhere near the stability of XP.

Tony Hopkinson

Every conceivable app, yeah right. I've been using Vista a lot longer than you, at work and at home. It's crashed on me maybe four times, which is quite good for MS, I must admit. Speed-wise, who can tell? My work PC, after being set up by IS, is ridiculously slow. My quad at home is so fast sometimes I have to click on things twice, because I'm not sure they happened. Sort yourself out; if you want a flame war, at least put some effort into it, instead of sounding like an MS fanboy.

Justin James

I don't think they'd put developers in a "strangle hold", but definitely systems that didn't participate in the "Microsoft ecosystem". J.Ja

Justin James

Google may be a 1,000 lb. gorilla... but not for developers. Outside of "mashups" involving new uses for GMail (fairly limited in potential) or mixing a dataset that contains geographic data with Google Maps, Google has no mindshare in the development community. And frankly, Microsoft isn't terribly panicky about Google; they don't even compete except in a few markets that Microsoft has never made much money in (although it would like to). In other words, Google isn't eating Microsoft's lunch, but Microsoft would like to eat Google's. J.Ja

alaniane

between IBM and Microsoft. Besides, it was Xerox and not Apple that developed the windowing concepts. Both Apple and Microsoft built upon Xerox's ideas. In reality, there hasn't been an original idea without a previous basis in centuries. All of our ideas are built upon previous ideas. Isaac Newton didn't develop calculus out of thin air; it was developed from previous mathematical concepts. Our knowledge and innovations are cumulative. Name one invention or concept that was developed without needing to reference any previously acquired technology or concepts.

Justin James

Doug - Good question. You're right, most developers are developing for Windows, and most of them are (probably) using a Microsoft technology to do so. I think Microsoft's play here is to get their app servers more traction. Right now, plain vanilla ASP.Net is pretty comparable to J2EE, especially for shops that don't tightly integrate their ASP.Net to Microsoft-specific technologies. What they *do* get out of Silverlight is a way to let desktop developers transition to XAML in a "sexy" way, which then helps them stop using WinForms and start using WPF for their desktop apps. It also encourages a "write once, run on any Windows platform" approach to development, which helps push Windows Mobile significantly. I think Microsoft's big fear is twofold: 1) They are afraid that a substantial number of users will do much of (or even all of) their computing from mobile devices, and unless some killer apps come out for Windows Mobile, those people will be using their PCs basically for Word and one or two other office suite type apps, and a BlackBerry for everything else. Making it much easier to write something that runs on Windows Mobile AND a desktop AND the Web is a big win in that area. 2) Protect and expand ASP.Net. Right now, developers ask themselves two questions when writing a Web app (unless policy gives them no choice): "Is this project going to require more power than PHP provides?" And for a great many apps, the answer to that is "no". If the answer is "yes", they ask, "Do I want to use J2EE or ASP.Net?" By going this route, they (hopefully) make ASP.Net (as the app server, talking to a RIA/Silverlight app) a much more attractive choice, particularly when developers are saying, "Hmm, I could cobble together a huge pile of AJAX, or I could just use Silverlight." Personally, I'd take developing for a RIA (Flash, Silverlight, etc.) system over trying to write AJAX any day of the week.
So you're right, this isn't "can they get more developers" as much as it is "can they retain developers?" Do they need to retain them? I don't think they're bleeding them. But this is an insurance marker; if the market turns toward this vision, they've got a good offering in place, rather than trying to play catch-up. So it makes sense for them to try to move the market in this direction before their competitors have good offerings too. J.Ja

miguel.tronix

the threading model in .NET is pretty crappy. And M$ never implemented Pthreads even though NT had a POSIX layer. M$ are the reason processors need all that clock speed, and I doubt they will EVER utilize resources well. .NET has a good garbage collector, but just yesterday I had to debug a piece of code that was calling an out-of-scope object which the GC hadn't got to. Hell, threading in .NET isn't even REAL threading, so I very much doubt the MP stuff will be any better. Why have I never heard of M$ talking about OpenMP?

SnoopDougEDoug

I'll put up a year's salary that the net income from Web apps will never reach that of PC apps in the next 10 years. You on? Thought not. Microsoft makes tons of $$$ from business. Business is not about to put any proprietary data in someone else's data center. All this brouhaha about "the future is Web apps" is a red herring. Google makes its money selling ads on search pages. Nothing more, nothing less. Microsoft wants a piece of that pie. Nothing more, nothing less. Microsoft is/will offer Web dev solutions just to cover their a$$, just in case there is a business case they hadn't thought of. And if you want to go after Microsoft for copying instead of innovating, why not do the same for Google? They sure as heck did not invent search. Their ENTIRE business case is based on improving ideas that were around for what, 5 years+? Google is successful for the same reason Microsoft is successful--see a business opportunity, embrace it, extend it, improve it, make it indispensable to those who use it.

LBiege

Yawn... The "thin-end Web apps beat fat-end desktop" theory again. Once I reached that part, I knew there was no need to waste time reading the rest. Just remember this, pal: at the end of the day, users want a fat-end, rich experience. Don't fight that or even try to. Do customers prefer a thin-end, text-based cellphone or a fat, rich iPhone? I think the market has the answer already, ain't it. My PC has a multi-core CPU, an advanced video card, 4 GB of memory, and a 22" screen, among other hardware. Whoever delivers apps fully utilizing my fat resources is the one to win. Google or whatever Web companies can talk about thin clients, cloud computing, search, or what not all they want. If HTML/JavaScript/CSS are all they can provide, I'll just say get ready to be run over by M$ and Apple; those two guys clearly understand what users really want. Finally, a word on winning over developers: again, the tool providing the better development experience is the one to win, and Visual Studio happens to fit that prototype. Don't fight that.

Justin James

"So tell us again about how Microsoft has always led us developers down the correct path and will continue to do so?" I never really said that they develop lots of great things or lead us down the "correct path". But if you look at most of those technologies you mention, they all have significant market share, regardless of their origin (point of clarification: they didn't "steal" SQL Server from Sybase, they licensed it, just as NT and OS/2 both came from a shared codebase in an IBM/Microsoft collaboration). Have they ended up doing a lot of acquisitions? Yup. Some of them a success (Hotmail, Word, Excel, SQL Server, Microsoft Dynamics), some of them not (too many to count). Do they often lag on the "innovation" front? Most certainly. But the market share speaks for itself. Regardless of where the products come from, regardless of the level of innovation (or lack thereof), the simple fact is, as you mention, that Microsoft is good at "business and marketing". Which means that, one way or another, an awful lot of developers are using their products, or targeting their products. So, to repeat the point of my original post (in a more condensed form), the question really isn't whether their vision is innovative or technically sound. The question is "can Microsoft get traction with it?" I believe they can, regardless of how much "innovation" or "technical superiority" they do or do not achieve. The vision itself is decent (regardless of where it originated), in line with current trends, and so long as their offerings allow developers to do what they need to do "good enough", it will likely be adopted. J.Ja

Justin James

I cannot remember the last time my Vista machine crashed on me, at least my personal desktop. And I abuse it, considering that I write things like multithreaded image processing apps in .Net on it. ;) My work laptop periodically tried to install an update that I *know* makes it unstable (removing the update restores it to stability); outside of that, it has never once crashed on me. One thing I *have* found though... XP doesn't know how to use more than 1 GB of RAM. Period, end of story. Vista does. Not only that, but Vista's memory management model randomizes where in physical memory things get stored, as a security mechanism against buffer overflows and similar attacks. As a result, you are a gazillion times more likely to hit a bad piece of RAM on Vista than on XP. I know this from personal experience. When I first installed Vista (brand new machine), it crashed all of the time, every few minutes up to every few days. No rhyme or reason. I put XP on it, ran like a champ. Vista again, it couldn't stay up. 32 bit, 64 bit, didn't matter. On a whim, I did a RAM test. Bingo, there was the problem. The section of RAM around 800 MB was bad. XP was never using more than 600 MB of physical RAM (it also likes the swap file a lot more than Vista does), so it never hit the bad piece of memory. Vista, between being much more aggressive in its usage of physical RAM (my experience has been that it always uses up to 80% of your RAM, regardless of how much you put in) and the randomization of memory locations, was frequently hitting the higher areas of the DIMMs. Replaced the DIMMs, got a rock solid system. If Vista is truly unstable, I would be inclined to suspect hardware or drivers. Everyone I know with Vista has not had crashing issues with it. Now, the speed on the other hand... it's slow. :) J.Ja

jslarochelle

..DOS-extended (Phar Lap 286) applications. It could even handle the DSP driver. Get a GP fault? No problem, just restart the application. No need to reboot like on a plain DOS machine. I would have liked OS/2 (the "Warp" version) to stick around as an alternative to Windows, but IBM screwed up when it didn't renew the license for the Windows source code (and had to face the inevitable degradation in Win32 compatibility). Again, here the words are diversity and choices. JS

ssadler

Boy, after reading all these posts I'm sure glad I do development for embedded devices. Much easier!! A little C/C++, a few RTOSes and lots of threading seems a lot less convoluted than programming to the Microsoft APIs, frameworks, etc. lol!

miguel.tronix

Seems I need to exculpate myself for some earlier, not entirely well-considered rants. PureMPI.net is a managed-code .NET implementation of the Message Passing Interface for multi-processor parallelization. http://www.purempi.net/default.aspx?pg=6ccf2c3a-a641-4ee5-b010-ec291dc91e9a There are some great libraries in .NET (java 2) but I still stand by the notion that what I want as a developer is REAL HONEST OPTIONS/COMPETITION... maybe it's just the industry/geographical location I'm in, but EVERYONE uses Visual Studio and they are almost entirely M$ shops (Flash would have already been replaced by Silverlight if my managers were in charge of the grphx design teams...), and I still say that NOT EVERYTHING SHOULD BE DONE THROUGH A SINGLE VENDOR'S OFFERINGS!!! Oh, and the M$ marketing crap always tells the managers/sales reps how f***ing aw3some their product is - and to trust them and not your developers: after all, developers come and go but M$ lock-in is forever ;)

Justin James

Personally, I've never had a problem with the .Net threading model, other than it being OO-style. In my mind, threading is a "procedure", so procedural-style code (like fork/join) makes a lot more logical sense to me than creating thread objects and passing delegates to functions, etc. I am not sure why you say that .Net threading isn't "real", though. It certainly is real, even if you are not a fan of the model it follows. The reason you haven't heard them talking about OpenMP is that it's something for the C++ folks, and Microsoft has been pretty quiet about its C++ tools for the last 5+ years. OpenMP is good, but I've seen even more impressive stuff from Intel recently: the Ct libraries, which make OpenMP look like a ton of work in comparison. In terms of your garbage collector issue, the fact that something was calling an out-of-scope object isn't Microsoft's fault, nor their garbage collector's fault. The GC doesn't kick in until it needs to (someone's trying to put something on the heap and there is limited room). If the GC ran "whenever", people would complain that it used too many resources unnecessarily. If someone's calling an out-of-scope object, they've made a fairly common mistake, and the fact that it compiled correctly indicates that it was indirectly referenced at that, which is well known to be a dangerous coding practice for just that reason. J.Ja
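The OO-style vs. procedural fork/join distinction above can be sketched in a few lines. Java is used here as a neutral stand-in for the .NET model being discussed, and the class and method names are invented for illustration: the first half wraps the work in an object handed to a thread, while the second half reads like "fork two calls, then join on their results."

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ThreadStyles {
    // Trivial stand-in for real work (hypothetical example).
    static int add(int a, int b) { return a + b; }

    public static void main(String[] args) throws Exception {
        // OO style: the unit of work is an object handed to a Thread.
        Thread t = new Thread(() -> System.out.println("object-style thread"));
        t.start();
        t.join();

        // Closer to procedural fork/join: "fork" two calls, "join" on results.
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Future<Integer> left  = pool.submit(() -> add(1, 2));   // fork
        Future<Integer> right = pool.submit(() -> add(3, 4));   // fork
        System.out.println(left.get() + right.get());           // join; prints 10
        pool.shutdown();
    }
}
```

Both halves do the same kind of thing; the difference is purely which shape the programmer writes, which is the point of the complaint above.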

jean-simon.s.larochelle

...twist of fate, it turns out I'm going to have to use the thing. I need to do the following: Java -> JNI DLL -> C++/CLI -> C#. Of course, the C# stuff needs to have a thread running. ...I know, it's insane, but I don't have any choice. Fortunately, this is for a prototype, and eventually I should be able to throw away the C++/CLI and C# stuff and stick with Java and C++ (possibly just Java). JS

Justin James

Doug - That's something that I've always agreed with: businesses don't want the data outside of the firewall, period. They want to see the tape that it was backed up onto. They want to know the SSN of the person with the root password. And so on. I can't blame them, and I agree with them too. That's why, for all of the noise that Salesforce.com gets in the press, their market share is still in the single digits. Where the "SaaS" folks fail is that they rely upon holding the data themselves. This is where I see an opportunity for the RIAs, if they use the right business model (so far, they are). If someone says, "hey, I wrote this app that uses a central server LIKE a Web app, but you can deploy it yourself on your own servers" then they will do very well selling to businesses. Basically, the traditional client/server sales and deployment model with a more Web app or SaaS configuration and local installation profile, really the best of both worlds. J.Ja

Justin James

That's the funny thing about these RIAs, like AIR and Silverlight. They follow the thin client deployment model (single install on a central server, server-based storage, user is very constrained in terms of how they modify the app, etc.). Yet the resource usage profile, other than data storage, is the fat client model. Basically, instead of having an application server generating HTML from the data as a presentation layer (the Web app model), they handle the creation of the presentation layer themselves, and use a fairly raw connection (a direct connection, or a Web service) to the data source. So I'd describe them as "really fat thin clients". Personally, I am attracted to the idea because it gets me a lot closer again to desktop app development, when I didn't have to worry so much about HTTP, HTML, JavaScript, and all of the bizarre concurrency, session, and other issues that go along with Web apps, so long as the data layer with a Web service is done right. It also lets me leverage a full CPU to get "Real Work" done, instead of being mindful that hundreds of other requests may be hitting that Web server at the same time. Finally, with the UI options available again, I can work on apps with much richer interfaces, without getting bogged down in JavaScript, AJAX, etc. While the RIAs are not fully baked yet (particularly working with Silverlight in Visual Studio), I really am looking forward to them. J.Ja

steven.taylor

You brag about your wonderful machine that can handle any fat client you throw at it. The bigger the better. Well, the reality is, in most businesses, we can't afford dual core, 4GB RAM, big HDs, humongous screens. As fat clients get bigger, and they do, our 1.8 GHz, 1 GB machines are choking. And that's a big machine for most SMBs. 1.5 GHz, 512 MB RAM, 40GB hard drive is more the norm that I've seen. In this setting, economics wins. Thin clients can get the job done. IT people seem to have such big egos. Many forget who keeps them in business: users.

Jaqui

Don't try to wake up the marketing sucked-in; they all think that dumb-terminal wireless connections to mainframe-powered web apps are the wave of the future.

jck

Yeah, it's slow... on my game machine, I use XP Pro x64... and it utilizes my 8GB of memory... I heard that 32-bit OSes only use 3.25GB max... but why would it only utilize the 1GB limit? That's weird. It's not like I get a BSOD 3 times a day with Vista, but it's more prone to apps failing... much more than my XP. Probably because of internal operative changes in the OS? BTW... my Kubuntu Linux... has never died once, lol.

Justin James

I tried OS/2 Warp for a while (about 6 months); the final straw was some incompatible hardware, but I thought it was great. Super fast, and stable. I also really loved BeOS, too. :( J.Ja

alaniane

I only write Assembly programs as a hobby. Professionally, I develop custom database apps. I mainly work on the backend, writing stored procs for front-end apps to hook into, although sometimes I get rooked into working on the front end. I particularly hate it when I have to work on a web front end.

Justin James

... that is exactly my thought on embedded programming. My uncle does it off and on (he used to work on satellites for Sperry/Unisys; now he does it for the boat alarm company that he owns), and listening to him talk about it sounds like pure mystery to me. Likewise with a friend of mine working on firmware at IBM. And they look at me like *I'm* the wizard. C/C++ embedded? Maybe. But Assembly? I really don't know if I could think at that level, particularly after I've been working in high-level languages for so long. J.Ja

ssadler

I haven't had to do assembly coding in years (and even then it was just a few subroutines for optimization purposes). The embedded C/C++ compilers have become extremely efficient in recent years. Even the DSP work I've done has been in C.

alaniane

as it seems. Many of those commenting against the platform have never programmed on it. As with any extensive API, there can be a few caveats. Personally, I prefer programming at a lower level than the .Net platform, but I don't mind the platform. Besides, it would be an impossible sell to convince the company I work for that I should develop everything exclusively in Assembly, and I would probably grow to hate Assembly if I had to develop exclusively in it.

Justin James

So true. That's one thing I like about .Net: having a bit of language flexibility (I really wish IronPython & IronRuby had more "fit and finish", though). .Net and J2EE are both more than good enough for 95% of applications; while they may not be "the best", it is easier to write the whole shebang in them than to try to get different things to interoperate. J.Ja

jean-simon.s.larochelle

I will check those out. Java 1.5 already includes a FutureTask class (implementing the Future interface), and the one in the MS library looks similar. Since I already use this in Java, I will probably feel at home using the one in the MS library. Using those higher-level constructs is really nice. This should make my C# expedition less painful. JS
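For reference, the Java FutureTask usage being described looks roughly like this sketch (class and variable names are mine): the task is started eagerly, and get() blocks only when the result is actually needed.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.FutureTask;

public class FutureDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        // FutureTask wraps a Callable; it starts when the executor runs it.
        FutureTask<Integer> task = new FutureTask<>(() -> 6 * 7);
        pool.execute(task);
        // ... other work could happen here while the task runs ...
        // get() blocks until the computation has finished.
        System.out.println("result = " + task.get());
        pool.shutdown();
    }
}
```

This is the same "start now, collect later" shape the .NET Future class in the Parallel Extensions CTP is described as offering.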

Tony Hopkinson

stuff again. If you've already bought a big slice of the pie, you might as well humbly eat the rest of it. Unless it was already there for some historical reason, if you have something in .Net, or Java, or XXX, and it can do Y, then it would be daft to do it in Z just because you could. Even doing things like writing all the server side in X and all the client in Y, unless you have a vast separation of development teams, is cost-ineffective and often counterproductive in the long run. How about collecting data on a PLC from custom-built intelligent sensors, picking it up on a VAX Alpha with Fortran code, writing it across to a MySQL database (using a C++ library wrapped in Fortran calls) on a Linux box, and then presenting it through a VB6 app on a winder's box? Things can get silly if you are not careful. Good job I'm not an expert.... :p

Justin James

One thing you might want to check out is the Parallel Extensions Library, currently available on the MSDN site (http://www.microsoft.com/downloads/details.aspx?FamilyID=e848dc1d-5be3-4941-8705-024bc7f180ba). It is currently a CTP, unfortunately, but unlike some of their CTPs, they are taking it extraordinarily seriously; I expect it to be finished by October. This library contains some really good methods of working with threads. Instead of completely re-writing the threading model, what it does (as far as I can tell) is re-expose the threading model in more task-specific manners. My two favorites are a parallel Do() item (pass it a block of statements, and Do() blocks until they all finish executing) and a Future class (it runs the command at its discretion, but blocks when you request the result until it finishes; great for when you have all of the parameters to do something long before you need the result). It also includes PLINQ (a parallel version of LINQ). My biggest disappointment with it (other than it not being complete!) is that I thought it was supposed to include fork/join, but it does not have it, at least not in the December 2007 CTP. Hopefully, fork/join will get there soon. Sadly, it does not seem to include any equivalent code to assist with data concurrency, which is where 75% of the truly hard work in parallel processing is anyway, but having built-in patterns for performing the threading itself is a huge help to the folks who only vaguely "get" MT/PP work. Thanks for the MPI link; I am checking it out now. Message passing is the third "trick" in MT/PP code, and most developers (right or wrong) handle it by using a shared data structure to pass messages, instead of a proper semaphore/signalling system. Heck, that's how I do it, since it's easier to do it wrong than right. J.Ja
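The Do() pattern described above - run several blocks in parallel and block until all of them finish - maps fairly closely onto Java's ExecutorService.invokeAll, which is one way to see the shape of the idea outside of .NET. A minimal sketch (class and task names are mine):

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParallelDo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        // Each Callable stands in for one "block of statements" passed to Do().
        List<Callable<Void>> blocks = List.of(
            () -> { System.out.println("block A done"); return null; },
            () -> { System.out.println("block B done"); return null; }
        );
        // invokeAll blocks until every task has completed, like Do().
        pool.invokeAll(blocks);
        System.out.println("all blocks finished");
        pool.shutdown();
    }
}
```

Note that, as with Do(), the blocks may run in any order relative to each other; only the join point at the end is guaranteed.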

jean-simon.s.larochelle

...of variables is also a good general practice, which again shows how basic good practices go a long way. JS

Justin James

Tony - Pretend I used hdsTemp before the loop, then, and it's a great example. ;) In all seriousness, you are right about using a local scope for those purposes. VB.Net, however, does not lend itself very well to that at all. Indeed, I am not even sure if it is POSSIBLE to arbitrarily declare a sub-scope block (the equivalent of simply throwing out a curly brace pair in C#/Java/Perl/C/C++/etc.) in VB.Net, so that's not really a great solution. That might explain where I picked up the habit, though; I do recall seeing a lot of VB.Net code that set things to Nothing to force them to be eligible for collection, but I don't see much of that in C#... and I only shifted from VB.Net to C# about 6 months ago. :) J.Ja

Tony Hopkinson

Shouldn't create huge if it doesn't use it. So the real solution is not to null the reference but to localise the allocation so it will go out of scope earlier and/or by itself. The real trick with the GC, I've found, is to treat resources as though they were unmanaged and scarce: i.e., create them just in time, get them out of scope ASAP, even to the point of taking the hit of creating/initialising them again. One of the things you have to be careful of is aggregates (a place where I do set references to null :p). Then, in the accessor method, if the internal reference is null, I get them again. Sort of controlled deferred instantiation. Try and avoid it, though; introducing it for a laugh is a bad idea.
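The "localise the allocation" advice above can be shown in a few lines of Java (the array is just a stand-in for any large object; class and method names are mine):

```java
public class ScopeDemo {
    static int sumOfBigBuffer() {
        int sum = 0;
        {   // Narrow scope: once this block ends, 'big' is out of scope,
            // so the GC is free to reclaim it - no explicit null needed.
            int[] big = new int[1_000_000];   // stand-in for a huge structure
            for (int i = 0; i < big.length; i++) {
                big[i] = 1;
                sum += big[i];
            }
        }
        // Long-running work down here no longer keeps the buffer reachable.
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumOfBigBuffer());
    }
}
```

The same shape works with a plain curly-brace pair in C#, which is exactly the construct the thread notes is awkward to get in VB.Net.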

Justin James

From Tony: "Forcing references to null, aside from as you suggest causing your code to blow chunks, I'm not sure that will do anything for the GC. In fact, the need to do that sort of thing suggests either a scope/lifetime problem, or that, like many (including me), you are still struggling to write your stuff in a way that fits in nicely with the GC; possibly both. I sometimes get the impression I'd have fewer problems if I'd never made some vain attempts to manage lifetimes in pre-.Net environments." This is *precisely* my problem. I spent enough time around unmanaged code to still have the instinct for it. Something inside of me is dying to call an explicit destructor or deallocator whenever I am finished with something. It's like cleaning up my toys when I'm done. From JS: "I have read that unless it is actually required... setting references to null is not a good idea. The reason is that it can screw up the CPU cache logic. Data that might have been swapped out of the cache ends up staying there longer because of being updated." There was a GREAT series of articles on the GC in MSDN Magazine ages ago; sadly, it did not (as far as I recall) discuss this topic one way or the other. From what I can tell, "typical practice" (I hesitate to call anything "best practice") seems to be to let stuff fall out of scope on its own and let the GC take care of it. My practice is to worry about the following scenario: boolean SomeFunction() { HugeDataStructure hdsTemp = new HugeDataStructure(); //Something that uses, say, 200 MB of RAM int iCounter; for (iCounter = 0; iCounter < 1000000; iCounter++) { //long-running work that never touches hdsTemp } } - where hdsTemp stays rooted for the entire loop unless you null it out first. J.Ja

jean-simon.s.larochelle

...setting references to null is not a good idea. The reason is that it can screw up the CPU cache logic: data that might have been swapped out of the cache ends up staying there longer because of being updated. In Java, I avoid doing that and leave the GC alone. In Java, more and more, the recommendation is to leave the GC alone, because most attempts at low-level "optimization" will actually hurt performance. Now, I'm just starting with .NET, so we will see (I am not at the benchmark point yet). JS

Tony Hopkinson

If it's unmanaged, the GC won't touch it. So while implementing IDisposable does free up memory, it's got nothing to do with the GC; it simply avoids having any unmanaged resources left lying about when the managed wrapper instance that you were using to access them is burgered off. Forcing references to null, aside from (as you suggest) causing your code to blow chunks, I'm not sure will do anything for the GC. In fact, the need to do that sort of thing suggests either a scope/lifetime problem, or that, like many (including me), you are still struggling to write your stuff in a way that fits in nicely with the GC; possibly both. I sometimes get the impression I'd have fewer problems if I'd never made some vain attempts to manage lifetimes in pre-.Net environments. :p

Justin James

IDisposable does cough up memory faster... when the unmanaged resources are memory hogs! If you don't believe me, wrap a Microsoft Office object as a COM component in a .Net app, and then watch the resource meter when you Dispose() the object that owns it. ;) If someone really wanted to, they could add code to Dispose() that first dereferenced the pointers (putting the objects in the GC's path) and then explicitly called the GC to do a sweep, but that would be much worse for performance in most cases than just letting the GC do its job. Personally, I like to give "hints" to the GC by setting objects equal to null (or Nothing in VB) when I am done with them; it also keeps me from accidentally referring to them later. I've found that by explicitly dereferencing the object, the GC seems to work a touch more efficiently, but that could just be bad intuition on my part too. J.Ja
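Java's try-with-resources gives a rough analogue of the Dispose() pattern under discussion: deterministic release of a resource at the end of a block, independent of when the GC runs. A toy sketch (the Handle class is invented for illustration; a real use would wrap a file, socket, or native handle):

```java
public class DisposeDemo {
    // Stand-in for a wrapper around an unmanaged resource; close() plays
    // the role of Dispose(): release happens deterministically, GC or not.
    static class Handle implements AutoCloseable {
        void use() { System.out.println("using handle"); }
        @Override public void close() { System.out.println("handle released"); }
    }

    public static void main(String[] args) {
        try (Handle h = new Handle()) {  // close() runs when the block exits
            h.use();
        }
        System.out.println("done");
    }
}
```

This mirrors C#'s using statement, which is the idiomatic way to guarantee Dispose() gets called without relying on finalizers.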

Justin James

A lot of people missed this, but a huge amount of the .Net source code has been released by Microsoft, if you're curious to see what it looks like. :) J.Ja

Tony Hopkinson

cough up memory quicker. The GC frees up memory when it starts running low, or when the coder forces a run. IDisposable is for freeing up unmanaged resources. All forcing a destructor does is de-reference some pointers so the GC, on its next pass, can free up some of the heap. I agree about the lazy bit, and personally I think the garbage collector is aptly named. But I suspect you are spending more time fighting it than you are making friends. HTH

miguel.tronix

I say .NET threads aren't real only because it's not guaranteed by the CLR that a thread will be executed when you (the code) thought it would be. It's not like invoking a PThread. I know I've vented spleen in my earlier posts, and no, the threading model is not that bad, but it's counter-intuitive to what one is really trying to achieve. I also doubt its parallelisation routines are anything to boast about, but that's pure speculation... As for the GC, well, I'm just trying to make the point that IT isn't perfect either - many of the comments had been about how freeing up memory pointers is the worst thing a programmer could possibly be asked to do. I think GC has problems too, and poor programming practice causes these kinds of issues regardless of the compiler/translator. It's just that .NET makes the programmer a lot lazier (Java does too, in that respect). And I still implement IDisposable and call the destructor on things like linked list nodes, just to make the GC cough up memory a bit quicker - MOST .net devs would never dream of it! "Why? Just get more RAM and let the GC take care of it." I think if we could peer inside M$ code, we'd see that it's more of the 1000 monkeys at 1000 typewriters thing than 10 absolutely unbeatable, world-class, cannot-be-replaced-or-reproduced-anywhere-else Project Managers and Engineers -- once again pure speculation, but I do remember all those NT source files that were leaked a few years ago.

Vladas Saulis

It's like sitting on two chairs simultaneously. If we supposed that the future belongs to RIAs, what would be the sole purpose of the whole underlying platform in that case? What REAL functions would be left, for example, for the Windows OS? On the other hand, modern Web apps lack one and only one feature - the backward TCP socket to establish a true stateful protocol. I think it's not a big deal to implement it in the near future. As for Javascript and AJAX - it's only a matter of taste and competence to use them at full power. Javascript is the same LISP, just inside-out and humanized. XML is the same LISP, just without functions. Etc... :) Everything is still spinning around technologies of the '50s and '60s.

jck

about $2400. I've done about... $1200 of upgrades in under 2 years. As it stands now: Athlon64x2 5600, 8GB PC2-6400, built-in Dolby 7.1, built-in 802.11g, dual 1Gb/s network jacks, 10 USB (2 used wireless), 2 eSATA jacks, 2 1394s, 6 SATA 3.0Gs, 4 SATA 1.5Gs, 3 250GB WD HDs in a RAID config, 2 NEC DVD+/-RW drives, OCZ GameXStream 700W supply, 2 MSI 8800GTS 320MB video cards, 2 20" Sceptre NagaII gaming LCD monitors, Logitech 5.1 surround speakers (used to have CL 7.1, but the headphone thing went out :( ), keyboard, optic 5-button mouse, boom-mic headset, 5 blue LED case fans, 3 drive cooler fans, Arctic Cooling Pro64 CPU fan with Arctic Silver 5, 3.5" floppy drive (yes, I'm nostalgic lol), and a temp monitoring panel with fan controls and all. I built a machine to do anything... from writing letters, to writing web apps, to playing 10 sessions of Shadowbane at once :)

Tony Hopkinson

set me back £750; it would be a pity to treat yourself to something like that and then just mangle a few Word docs really slowly off some server in Kenya. 2.8 quad, 1300 FSB, two network cards, 8 USBs, 8 SATAs...... I want obese clients... :p

jck

I have a monster machine...8GB, 750GB disk in RAID, 2 8800GTS video in SLi, etc etc... I spent way more than $400...but, I don't just do spreadsheets and write memos. lol

alaniane

that I would never need anything more than 32K of memory. Of course, I was only a teenager programming as a hobby. Now, I can't even get a Hello, World program to compile under 30K. Unless I use Assembly; then it's only 300-odd bytes.

Justin James

If you simply look at the resources needed to run a Web app on the client side, it's not "thin". The Web browser is the fattest app on most PCs, with the exception of an office suite. The thin client model has good points: a single install point, management, security, etc. And RIAs get all of that too. But RIAs are even fatter than Web apps. I don't mind fat apps; I hate to say it, but we have the hardware to support it. What I mind are people with deluded ideas who conflate Web apps with the adjective "thin". A true "thin client" would be an X Terminal, or a Wyse green screen, not a Windows Vista Ultimate PC running IE 7 to view Google Maps. That takes more resources than running a local copy of MapPoint. :) J.Ja

Tony Hopkinson

into the argument. Stop it, you'll just confuse people. :p A .NET client of any description is fat. Just as with fashions in the female form, fat is a relative term. I personally don't find sticks with bumps on that attractive; more of a Rubens type myself. :D My problem with the thin client model has always been that it is restrictive. Effectively, it must turn your PC into a console; if I wanted a console, I'd buy one - they do what they do better. Paying for a PC, and then having to subscribe to functionality to make any use of it, does absolutely nothing for me. So, leaving aside many fundamental problems in actually achieving it, particularly over HTTP, I don't want a thin client.

Vladas Saulis

I just wonder how many people can't get that so-called 'thin' clients are, in fact, even more 'fat'! Here is a brief schema of the abstraction layers in use in today's Microsoft OS: Hardware -> Low-level drivers -> HAL -> NT (VMS) microkernel -> subsystems (POSIX, Win32, OS/2, ...) -> .NET -> GUI (connected to both Win32 and .NET). For the 'thin' application we have, in addition: Browser App -> Rendering -> Box model -> DOM -> HTML/JS/CSS. Microsoft is a "Big Integrator" of technologies. This is not bad in itself. On the other hand, it is a "Big Disintegrator" for standards and for groups of developers. All Microsoft did in the past 20 years was build newer and newer layers of abstraction up (and wide), so we finally got up to 8 levels, which often duplicate each other at each new level. With every new abstraction level, developers are told that from now on they will be able to develop more simply and quickly. All these promises, as we know, are only marketing. For example, take a simple file read operation in C#. It seems that only .NET is involved in this operation. But wait: when we operate on the file, all the underlying abstractions _repeat_ all file operations at their level. So in the end we get a file existence check at the .NET, Win32, POSIX, Kernel, and HAL levels! Every level has its own (and different) error handling protocols, and we are supposed to handle ALL of them in our applications! In a 'thin' application these things are doubly worsened, because we get up to 4 additional abstraction layers. So, what if we could eliminate some redundant abstraction layers in the future? I think it would be the only way! What I'd like to see could be as follows: Hardware -> Drivers -> HAL -> Microkernel -> DOM -> etc. The DOM would become the new programming paradigm, and a language in itself. At the next level could be highly dynamic languages like Javascript, Erlang, Ruby, or even LISP.
Binary executables should be isolated into a separate VM, so the system could be fully written in dynamic languages. This can only be achieved with a new OS. And, as I understand it, Microsoft isn't on this path yet. And only in this scenario would thin clients be really thin!

Vladas Saulis

Just as Bill.G said that we'll never need more than 640K of RAM.

Justin James

Do indeed need the local resources. "Back in the day" of mainframes, they "got" this concept, and certain users who worked with apps like that got "workstations". Today, a "workstation" just means a desktop with RAID support or maybe dual-monitor support out of the factory, but historically it meant that you had local processing capabilities, not just a green-screen dumb terminal. One thing I'd love to see in an RIA system is a way of dynamically determining client capabilities and, "under the covers", shifting processing duties from the server to the client as possible. That would allow an app like CAD/CAM/video editing, etc., to run on a thin client (at the cost of server time), or to run locally (except for storage) on a workstation-class client. J.Ja

Forum Surfer

Thin clients will never work in the CAD/CAM or GIS communities. We can actually use that monster dual-core processor and 4 gigs of RAM. On the same note, if I could come up with a thin client solution to run our day-to-day lower-end PC tasks, I would. I'm all about saving money so I can get cooler toys on the big-$$ side of things. :)

steven.taylor

You ask, "Since when do you need 4GB RAM... for normal biz use?" Well, you don't. Speaking of refurbs, the Dell I have at home and have been using for 2 years is a refurb (2GHz Pentium, 1GB RAM). I just bought an HP refurb (2GHz dual-core, 2GB RAM, 320GB SATA HD, 15-in-1 card reader, CD-DVD/RW with LightScribe, all for $419.00). While it has Vista Home Premium, I own an extra copy of XP if I don't like the Vista performance. Cheers...

Tony Hopkinson

Thin client is economically more palatable than fat. A fat client is, from a user's perception, far more palatable than a thin WEB client. While it's stateless, take the dream of everybody on thin clients and stick it on the shelf, next to the 7 or 8 times this sort of thing has come up in the past, to die a complete and total death in the marketplace. Not just because users don't want it, either; what do you think PC manufacturers think of it?

jck

Me? Big ego? Nah. And I have been doing puters for over 15 years. MS really is driving the fat client... with Silverlight... which is what is weird. Just a few years ago, they were all about server-centric processing. Now, it's "take the load off your server and let it handle more users". BTW, I have to ask you, Steven: since when do you need 4GB of RAM or super-large-capacity drives for normal business use? Dell sells $500 dual-core base model PCs all day long that run Office just fine with 1GB of RAM in them. If you want a real bargain, go to their outlet site and get a refurb or open-box model. Of course, I'd suggest if you're going to buy PCs... make Dell give you Windows XP. Vista will slow you down more than anything.
