
HTTP and HTML: The paradox of dominance

Justin James says that the paradox of HTML and HTTP's dominance is that nothing better suited to the Web development task is likely to emerge, even as HTML and HTTP struggle to adapt to that task.

The saying, "When all you have is a hammer, every problem looks like a nail," makes me think of the mess that we're in when it comes to the dominance of HTML and HTTP.

I tend to be down on the concept of Web applications -- especially those that make use of AJAX -- but they exist for a number of valid reasons. Nonetheless, the paradox of HTML and HTTP's dominance is that nothing better suited to the Web development task is likely to emerge, even while HTML and HTTP struggle to adapt to that task.

To get a better understanding of the problem, think back to the era between 1990 and about 1997/1998. By 1990, the client/server revolution was in full swing. Novell NetWare dominated the non-UNIX server world, and UNIX was in the process of pushing mainframes out of the way. In the middle of all of this, some developers wanted a better way to publish documents, and HTML and the corresponding HTTP protocol were born. While the two are not inseparably linked (you can view HTML that is not transmitted over HTTP), HTTP was custom-tailored to meet the needs of HTML at the time, in a way that FTP (and other existing protocols) could not -- they were a bit too heavy-handed or simply inadequate. HTML had some basic GUI widgets, and the CGI system was cobbled together to handle those widgets. Most Web sites were static HTML with the occasional CGI program mixed in.
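
As a rough illustration (my own sketch, not anything from the original article), a CGI handler of that era amounted to a small script the server ran once per request; the form field name and greeting below are made up for the example:

#!/usr/bin/env python3
# Minimal sketch of the early CGI model: the web server launches this script
# for each request, hands it the submitted form widgets, and sends whatever
# the script prints on stdout back to the browser as the HTTP response.
# (Python's cgi module is deprecated in modern releases, fittingly.)
import cgi

form = cgi.FieldStorage()                 # parse the HTML form fields
name = form.getfirst("name", "world")     # "name" is an illustrative field

print("Content-Type: text/html")          # CGI headers first...
print()                                   # ...then a blank line...
print(f"<html><body>Hello, {name}!</body></html>")  # ...then the document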

At the time, there were some needs that the client/server architecture did not handle very well. For instance, the architecture required a level of connection quality and reliability that was not available to most users who were not on a LAN. It also had high administrative overhead, since the applications were installed on the clients and then configured to communicate with a central server. It was very painful to troubleshoot those connections; it involved trapping and analyzing packets.

Sometime around 1997 or 1998, developers all over the world realized that a lot more could be done with HTML's widgets than the shopping carts and BBS replacements that had passed for Web applications until then. Netscape added JavaScript to the browser, which let developers respond to the user's actions quickly enough that HTML no longer felt like a pure document format. Even more importantly, Netscape added cookies to the browser, allowing developers to compensate for HTTP's connectionless, stateless nature.
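
As a rough illustration of that trick -- my own toy example, not anything Netscape shipped -- here is a tiny Python server that uses a cookie to carry a session across otherwise stateless requests:

# Minimal sketch of faking continuity over stateless HTTP with a cookie.
# The session id and in-memory store are illustrative only.
from http.server import BaseHTTPRequestHandler, HTTPServer
from http.cookies import SimpleCookie
import uuid

SESSIONS = {}  # session id -> per-user state (a real app would persist this)

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        cookie = SimpleCookie(self.headers.get("Cookie", ""))
        sid = cookie["sid"].value if "sid" in cookie else None
        if sid not in SESSIONS:                  # first visit: mint a session
            sid = uuid.uuid4().hex
            SESSIONS[sid] = {"visits": 0}
        SESSIONS[sid]["visits"] += 1             # state survives across requests

        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Set-Cookie", f"sid={sid}")  # the browser replays this
        self.end_headers()
        self.wfile.write(f"Visit #{SESSIONS[sid]['visits']}".encode())

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), Handler).serve_forever()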

Let's fast-forward 10 years to today, when Web applications dominate mindshare -- even though they have barely penetrated the market. Look at the buzz about Google Docs and Office Live compared to the sales numbers of Microsoft Office. Office gets zero buzz, yet for every user of a Web office suite, there are probably 100,000 Office users (I'm just throwing out a number). The assumption that everything will be a Web application comes from the fact that no one sees any other way to fill the needs that HTML and HTTP address -- even when they do it poorly.

At this point, coming up with a better alternative would be a pretty tough sell. I've described my ideal alternative, and every programmer I have talked to (as well as many networking and systems engineers) envisions something similar. This is hardly a scientific sample, but even in discussions in this Programming and Development blog, it is pretty rare to find someone who thinks that Web applications are really what we need. We all agree that the needs Web applications try to address are real, and that Web applications actually meet them rather poorly.

I feel that something like the X Window System would be a much better way to fill the distributed, single-point-of-storage-and-installation gap that Web applications are currently filling. Other people I have talked to suggested something like Windows Terminal Services, Citrix, VNC, or other remote display technologies.

Network engineers now design their entire networks around the HTTP protocol carrying (primarily) HTML traffic. After all, HTTP and e-mail are the main attack vectors for viruses, spyware, and the like. In addition, the bulk of Internet traffic is HTML (and resources referred to by HTML documents) being fetched over HTTP. Even for remote application and data uses, HTTP carrying XML (which is close enough to HTML to be handled the same way by many programmers) is now the favored vehicle. As much as systems engineers and network engineers hate having to stay on top of HTTP traffic, they are even less likely to let a new (or old) protocol through for remote applications. For one thing, as dense as some of that HTML (and JSON and XML) can be, it is fairly easy to detect viruses and inappropriate usage in it. With a protocol that carries a true remote terminal session, the IT team loses its monitoring and logging abilities. It is pretty easy to keep a list of URLs visited; it is a lot harder to store every mouse click and screen update that a remote terminal system would put through.

Now we have an IT environment that has become a victim of its own success. It is highly unlikely that IT departments will be willing to let anything more complex than HTTP through now that it looks like "anything can be done with HTML and HTTP." Meanwhile, the HTTP protocol has not changed in far too long, and HTML is struggling to adapt to how people are actually using it.

It is a pretty sad state of affairs, and one that no one ever intended. Back when HTML 4 was standardized, the talk was all about the Semantic Web, not the "Programmatic Web." It was clearly a push to get HTML closer to being a document standard and to allow applications other than Web browsers to consume it. HTML 5 is a push to incorporate the best efforts of Web developers into the standards, so that at least the problems with Web browser incompatibility can be resolved.

While I disagree with this as a direction, I completely understand the motivation behind it. Programmers -- not just of Web applications, but also of tools, browsers, and so on -- all need these techniques to be standard somewhere. We all agree on that; we just disagree on where. A lot of programmers choose HTTP and HTML as that place simply because HTTP is the protocol that gets through the firewall. Other programmers default to HTTP and HTML out of industry inertia.

It seems the existing success of the HTTP/HTML combination will continue, with it all but pushing traditional client/server techniques out of the way. This replacement will occur regardless of the technical merits of client/server methods, the shortcomings of HTTP/HTML, or the existence or future creation of better alternatives. In other words, the stunning success of HTTP and HTML is guaranteeing mediocrity in the future of application development.

J.Ja

About

Justin James is the Lead Architect for Conigent.

44 comments
Mark Miller

Mediocrity fits the economic models for the IT projects we're talking about. The priorities center on what we can get away with for the lowest initial cost. The reason is that there is little sense of R&D at this level in companies that develop IT solutions, either for themselves or for others, and there is no sense of long-term costs. Some have gone so far as to say that IT in general is an "exploitative" market, both in terms of how the employees who work in it are treated and in how customers are treated. I think the answer is for customers to get wiser about technology choices and realize that the cheapest solution is not always the one that will serve their best interests. By the same token, expensive solutions are not always the best either. Technology needs to be evaluated at a deeper level than that; technical specs matter, but it's not enough to look at things just on their specs. What I'm increasingly realizing is that we need to evaluate technology based on its [i]architecture[/i]--how well it scales--not just in an IS sense, but in how effectively things can be built in it that are greater than itself.

Tony Hopkinson

It should be noted, though, that the continual attempts to leverage the success of HTML over HTTP to provide web apps are the real problem. The biggest real benefit of web apps is distributing them: the theoretical no-install, no-configuration, runs-on-any-platform promise. We all know that's a rib-cracking joke as soon as running alien code on your machine is taken into account. Every step forward results in an exponential increase in complexity, and it's still crap compared to a client-based executable. The entire idea is flawed; the so-called benefits were totted up when ActiveX wasn't considered a security problem. Secure web services to pre-installed widgets is the way to go. Sending mouse messages across the world is possible, but it will never be practical. Any bandwidth made available to do it would be used up before it could make use of it. It was a stupid idea; let's just bin it and do something that will work.

Jaqui

The CGI system is meant to give a lot of the functionality required for a thin client environment. With the Apache Software Foundation's HTTPD server and its Tomcat server, you have a system for application serving over HTTP [as long as your server has huge resources to handle the bloatware Java-based Tomcat and the Java apps it can serve]. The real issue is the lack of security in the HTTP protocol. Any distributed application over the web would need to have its data streams secured, which currently means HTTPS. No SANE person would trust their confidential data to a non-secure stream; that would be asking to have your business screwed over.

MadestroITSolutions

That's what we get for using HTTP and HTML for something they were not designed for, and for continuing to build on our stupidity. Granted, I don't think anyone ever imagined the "Internet" would explode like it did.

TJ111

There are some promising new approaches being developed to handle these issues, mainly Comet. The problem is that the HTTP protocol is grandfathered into the internet for the time being, and there's not much we can do about it. For a new protocol to be worth developing for, it would have to be natively available to the majority of internet users. For that to happen, browser/client developers would have to design and build the browser/client to handle the protocol, and then market it to the masses. They won't start developing until a protocol is nearly finalized, and then even after the browser/client is released it'd be a good 3-4 years before it had the market share to be worth developing for. Even if some new awesome protocol became available today, it'd be 5-10 years before you would see it actually being used (outside of niche uses), and by then it'd be old news anyway. It's a nasty cycle, and one we'll be stuck with for a while. See: True XHTML, Ogg/FLAC media, etc.

mattohare

At the start of the article, Justin, you mention HTML and HTTP as separate things in your history segment. I think it a good idea to keep them separate as the article continues. As I see it, HTML is simply a document markup language. The documents could travel on any protocol, even on an installation disk. HTTP is one of the real challenges here. I think if we gave it a native ability to manage state, most of the other issues would fall away. You mentioned web applications operating like a Remote Desktop application, and tracking every mouse click. I believe this is overkill to the point of being beyond most projects' scopes. That leads me to the other real challenge, as I see it. We could use a different client, besides a browser. One that handles the state, and safely handles the more complex GUI interfaces. I see it as being something like a Java framework without the ability to interact with the local computer system (without permission). Browsers are here to stay, and for brochureware sites, that's dead on. Same for library-style resources like Wikipedia.

Deadly Ernest

I think this is really a case of putting the cart before the horse in many cases and not looking at the true business case. You have to consider whether you're looking at a remote service for use on an intranet or the Internet, and all the ramifications of those differences. Many web services such as Google Docs do have a place in the world, but an extremely limited place, one much more limited than many of its supporters are prepared to admit. It would be a major dereliction of duty for ANY business to use a web based service to prepare their important or confidential documents except on a system and network over which they have full control. The same applies to the government and any volunteer organisation that handles any data that comes under any privacy concerns. If you use a web based application for your documents and any part of that system is outside of your control, i.e. going over the Internet or stored at Google, then it's extremely vulnerable. If it's all on a system you have total control of, such as on an internal network, then you can do what you like; but this is already happening with file servers and thin client systems - so why bother trying something new and different that may be more vulnerable to interception and improper redirection. At present, if I use a thin client system within my business it's not easy to redirect any of that output to travel over the Internet to another location so you can spy on my business, mainly because of the differences in protocols and operation settings. But switch to a web based system like Google Docs and it's much easier to intercept and redirect.

Once you take business and the government out of this loop for web based applications like Google Docs etc., there's not much left. And definitely not enough to justify the major changes to the way the Internet operates that some people seem to think will be needed for a decent set of web based applications. Most of the private individuals who would be interested in using Google Docs would do so because they can't afford to pay out the dollars to buy MS Office and the like, but with more and more people becoming aware of OpenOffice and its availability for Windows, why should they give up their confidentiality and security to a web based application?

I think the whole underlying concern you're dealing with here needs to be re-evaluated from the start with regards to the real business needs of those involved and who is truly likely to be involved. As to the reasons why HTML and HTTP are still used, yes, some of it is inertia, but a lot of it is due to the fact that it's fairly simple and is proven to work well, so why fix what isn't broken?

rpislacker

Assumption: this article was written to comment on the web experience's shortcomings over service provider networks. That said, it really doesn't seem feasible to have something like Citrix or X-Windows running over the Internet at this point in time. HTTP and HTML certainly lack in capability, but they are lightweight and represent the lowest common denominator amongst browsers. Until the Internet and ISPs support a standard hierarchy of QoS implementation (which might lead to more bandwidth requirements), how can the web experience increase productivity, capability, and usability simultaneously? Now I'll change the assumption: this article was geared towards the web experience on enterprise networks. Here's a place where more server-end computation for a thin client a la X or Citrix can be possible. However, what happens when client PCs, servers, or networks crash? This seems to be about balance: what should be performed and cached on the client, and what should be performed and cached on the server? Will network latency be small enough to provide adequate response? The next generation of the Internet will revolve around the standardization of this balance, and until this is achieved we will still be stuck in HTML/HTTP land.

Justin James

I think that for many companies, IT is the most visible major capital cost, so the "low investment, we need ROI within 2 quarters" mentality hits us the hardest. But I also think that what you've identified is happening across the board in business. I stopped seeing true innovation (forget inventiveness) a while ago, in favor of (barely) incremental improvements. Not to get too far off topic (yeah, right), but I believe the fact that 2/3rds of the US economy is consumer spending has a lot to do with it. Under the circumstances, it is much easier to make money by repackaging and advertising an existing product ("same great taste, great new look!") than to actually make a new product. Consciously or unconsciously, companies no longer want to invest in R&D, as you point out. Bell Labs became Lucent, which nearly went out of business (and nearly took the entire IT sector with it!). Judging by the number of patents filed, Microsoft, IBM, Apple, and (oddly enough) Nintendo are the only ones putting any kind of effort into software/hardware invention. Everyone else is making incremental improvements on ideas that one of the "big guys" took the risk to explore. Even medicine has gone this route; every body or mental condition outside of the standard bell curve is declared to be a "disorder" or "syndrome" and a "medicine" is crafted to treat the symptoms (almost never the problem itself, though). Meanwhile, cancer is still incurable. Talk about mediocrity! I can get a "treatment" for my "restless leg syndrome", but many people all over the world are doomed to a painful death by a mere mosquito bite (malaria). "What I'm increasingly realizing is that we need to evaluate technology based on its architecture--how well it scales--not just in an IS sense, but how effectively things can be built in it that are greater than itself." This could not be more true. Indeed, you've summarized here in one sentence my worldview overall, not just for IT/IS. J.Ja

Justin James

"The cgi system is meant to give a lot of the functionality required for a thin client environment." Sure, but it's painful. It involves using HTML as a "presentation serialization format" (ugh) and constantly building up and tearing down HTTP connections, which is quite costly, especially once you add .htaccess and/or SSL into the mix. Much better would be to open an SSH session and pass screen updates and input back and forth over a constant connection. J.Ja

Justin James

You are precisely right about the "grandfathered" nature of HTTP. It will be literally decades before it ever gets replaced (if it ever is), so the only realistic alternative is to mutate the standard itself into what we need. Sadly, to mutate HTTP that much would require breaking it. So it's a no-win situation. :( J.Ja

Justin James

"The documents could travel on any protocol, even on an installation disk." Part of my point, which I definitely did not enunciate well, is that what you say here is no longer true. It used to be (as the "history lesson" points out). But now? Well, a great many "Web pages" (HTML documents) not only cease to function as intended if an HTTP connection did not carry them (or cannot be made after the document is rendered), but they cannot even be transported to another storage area. In other words, somewhere along the line, HTML stopped being a "document format" and became a "presentation serialization format". That's a world of difference.

"HTTP is one of the real challenges here. I think if we gave it native ability to manage state, most of the other issues would fall away." *I agree completely.* What we're seeing, though, is HTTP's shortcomings being worked around by tool makers with kludges, and now by the W3C at the HTML level. It's a wreck.

"We could use a different client, besides a browser." I agree 100% on this too. Indeed, I would imagine that even AJAX and/or Flash pieces improve significantly in usability simply by being hosted in a GUI window that does not look like a Web browser. Why? Because the user won't be trying to do the things they associate with Web browsing (Back button, bookmarks, etc.), even if you replicated the precise functionality with a slightly different look/feel. For example, re-label the "back" button to be "previous screen" and reposition it. When you try to host an application within an application, users get quite confused.

"I see it as being something like a Java framework without the ability to interact with the local computer system (without permission)." That's one vision of it that works well in principle (BTW, I believe that the .Net "click once" and "Web install" systems do this), but we have yet to see it implemented in a way that gets widespread adoption. That's actually why I favor an X-like system; instead of dealing with trying to keep the code secure, it is simply a means to request that the user's system render windows and receive the input. This gives the user a lot of control over the look/feel, their needs, preferences, etc. while keeping bandwidth low. Indeed, there are a zillion different approaches we could take here, and my experience has been that most smart people *who actually work with this stuff* (and I've always put you in that bin!) seem to feel that the browser paradigm isn't really what we want to be using for this "single install, no client code, easy access application framework".

"Browsers are here to stay, and for brochureware sites, that's dead on. Same for library-style resources like Wikipedia." Yup, for perusing documents, the Web browser is well suited to the task, which makes sense, since that was its sole goal to begin with. Part of the problem is that it is a lot easier to put together an "application" and sell ads on it than to assemble actual "content" and sell ads on it. There were 2 phases of the first dot-com boom: content sites (About.com style sites, search engines to cope with the content, etc.) and online retailers (Amazon, Pets.com, etc.). The current dot-com boom is about applications. Someone discovered that it is better to write an app in your basement than to pay 200 writers to create content. :) J.Ja

Mark Miller

Re: Lack of R&D

There have been periods like this before. The person that came to mind as you were talking about this was Joseph Schumpeter. He's the one who came up with the theory of "creative destruction", where smaller players who "think outside the box" outmaneuver and defeat established ones. He says where this ends up though is right back where we started: large established institutions. They're just different from the old ones. But the cycle will repeat, as unlikely as it seems now. I can remember that back when I was a teenager IBM looked big and undefeatable. I've even heard Bill Gates talk about this in an interview, about Microsoft's early days. Steve Ballmer used to describe Microsoft's situation as "riding the bear", the "bear" being IBM. You had to "stay on top of the bear", or else you were "under the bear", and that was not where you wanted to be.

I think what makes innovation so slow on the web platform is industry dominance by big players, who establish de facto standards, and the notion of industry standards (ANSI, ISO, etc.). Standards bring stability to the platform, but they stifle new ideas. If your idea is not approved by either of these institutional forces, by and large people won't use it. So people have to pick what they want: stability or innovation. Or find a balance between them.

As far as the societal de-emphasis on R&D, I don't know what's driving that. It could be the same thing: a desire for stability rather than the chaos innovation brings. I've heard some say that increasingly R&D is taking place in Asia, rather than here. There are more technical PhDs over there (trained here), and (I assume) they're cheaper.

What's disappointing to me right now is I'm increasingly realizing that the "PC revolution" is dying away here. In fact it seems apparent to me that we're going back to an updated version of the old mainframe model. It's not punch cards and green screen terminals anymore. There are no more "priests" being the gatekeepers of computer access. That's democratized and open now, just as the "PC revolutionaries" wanted it. According to what I've been told for years, that was the motivation for people to buy microcomputers in the late 70s and early 80s: to get access to computing resources anytime they wanted it. Now that's happening on a distributed basis. It's no longer necessary to have that just on your own machine. I wrote about this on a guest blog post on Paul Murphy's blog, on ZDNet, on Monday (at http://blogs.zdnet.com/Murphy/?p=1088). I feel like a significant piece of the original vision for the PC was lost, right from the beginning of it all.

Joining you off-topic :): I agree with you on the medical front. Personally I'm sick and tired of the long medication ads you see on TV all the time. I feel like saying, "Umm, could you take your love life/sexual problems somewhere else??" I feel like I'm watching a prelude to a porn flick when I'm trying to watch something else... I also remember there being a time when "It's Prilosec time!" was on the tube about every 15 minutes. I had no idea what the drug did (because the ad didn't tell me) until a relative was prescribed it for an acid reflux problem.

Re: medicating those outside the norm

Something that's bothered me for years is hearing about how kids are being medicated for ADHD, and now bipolar disorder. Back when I was going to college my mom and I suspected that I had ADD, and we looked into treatment for me.
Back in those days it was necessary to go through a thorough evaluation that would take a couple of hours before I would be prescribed any drugs. It was described as comprehensive. There would be follow-up evaluations and such. I didn't go through with it. It sounded expensive. There may have been concerns about it negatively affecting your mental state, too. I don't remember. Ten years ago I was hearing stories about school nurses and even teachers recommending that students be put on these medications, and I'd hear about parents who gave their consent just on that. Maybe I misunderstood, but apparently they had these drugs right in the schools, so parents didn't even have to take their kids to a psychiatrist (psychologist?) to get them. Who do these people think they are?? From what I hear now there are a lot of kids who are on some kind of medication for "behavior disorders". It's scary. Apparently there are treatments for some cancers, as with AIDS, where people can now at least "live with" the disease, as opposed to suffering an untimely death from it. There's still no cure for lung, pancreatic, breast cancer, and some others. As for malaria in developing countries, that gets into a whole other topic that we may not want to discuss. From what I understand, malaria was being kept under control many years ago, but some political forces killed off the effort for their own purposes. It's hard for me to say if it was done with malicious intent though. I'm intrigued by your closing statement. I was just referring to technology when I talked about architecture that scales, but you think it applies outside of that. I don't have a clue. Could you elaborate?

mattohare

Now, let's sit down, hammer out a spec, and roll it out. *chuckle* In addition to what we've been discussing, I've been coming up with ideas on how even the reference-document side of things can work better.

Justin James

I know that the jumps from 16 to 32 bits, and then to 64, were not the direct causes of the changes in driver models... they were just the times when the driver models happened to change. :) J.Ja

Deadly Ernest

commands. The 16/32/64-bit size is relevant to what's running around within the hardware and the operating system; but when you start sending data down the buses to the peripherals and other hardware, the command set instructions don't care if they come from an 8-bit or a 128-bit system - the signal that says 'on' is the same regardless of the bit size, ditto the 'off' signal and the other commands. Back in the early 1990s a set of standard industry command sets was established so that all hardware of a particular type would use the same signals, and the operating systems were to give the same signals. Thus plug'n'play came to be, and it worked in Win 95 and Win 98. Why MS didn't use the industry commands in Win NT, and has been changing them around since, is solely to make more money by selling people the command set code to write drivers for their hardware.

I have a 64-bit system and my son has a 32-bit system; I also have a couple of old 16-bit systems connected to my home network. They all talk to the switch OK, and they all send printer commands to the printer OK, and that printer is almost ten years old. The only problem I have with the printer is that I need to load a special printer driver to make my son's 32-bit Windows XP Pro talk to it. My 64-bit Kubuntu does it without any printer-specific driver by using the industry standard command set, my old 386 16-bit system with Win 95 does it as it recognises the printer as a plug'n'play standard printer, my old 16-bit 486 with Red Hat 6 does it no trouble, ditto my 32-bit P3 running Mepis Linux - Windows XP is the only one that needs a driver. I must really take time to standardise my operating systems now that I've decided to stay with Kubuntu.

The issue you have with hardware NOT working with Vista is due to MS changing the command set instructions coming from the OS. Now, depending upon what the hardware is, it may run with industry command sets within Linux or it may not, depending on whether it was originally designed and programmed to run with a specific MS OS command set (as some were, to encourage more sales by immediate plug'n'play with that version of Windows) or not. To make it work with Vista, a driver to convert the commands from the Win Vista command set to the one used by the hardware has to be put on the system, and MS aren't doing that, and not all hardware companies are paying to have the drivers made either. This is also a problem for Linux users, as some hardware companies deliberately favour MS and make their hardware to use the latest MS OS command set. Some are now reviewing this policy as they're seeing more and more sales go by them, with people complaining about hardware NOT being Linux compatible and not being universally MS Windows compatible either.

Justin James

I think that you are 100% spot on about the Windows driver situation. First, there was the shift to 32-bit drivers circa '95 - '98. Then, the shift to the Windows NT driver model around the same era, in parallel. Then, the XP driver model. Then the 64-bit driver model, which ran concurrently with the new Vista/2003 driver model, which is the driver system du jour. It's ridiculous. There are good reasons behind it, matters of stability and security primarily, but I wish they'd just find one that's "good enough" and stick with it for the next 20 or so years... I have equipment that I bought less than 2 years ago that won't work with Vista! J.Ja

Deadly Ernest

You also have to take into account the instructions Microsoft sends people with the details on how to write code for use with Windows. One place I worked at, they were working on a major in-house application to work with Microsoft Windows 2000 Pro, as that's what the organisation (a government dept) had on the desktops. They'd paid for the information needed from MS to make the system work. I got a look at the instructions and the directive - note I said directive and not advice, as the MS notes said the coder MUST do certain things or it won't work properly with Windows; the directive was to place the DLLs in the 'System' folder for the Windows operating system on that system, which should be 'C:\Windows\System', though some installations place it elsewhere if instructed to. It also said the system always required a reboot for the changes to take effect. Now I have no doubt that the programs would work with the DLLs elsewhere and didn't always need a reboot, but the instructions from MS themselves were telling people to do that. It's very likely this is due to poor coding and poor coding instructions by the MS people to begin with, and many coders never tried any alternatives - but that's how things go. Also, you're right about the differences between what's technically possible and what users perceive as possible because of the way things are implemented. I also know a lot of the DLLs and drivers would NOT have been needed if Windows used the common command codes instead of switching to special MS ones specific to that version of Windows. The bottom line for me (after two decades of using Windows) is that the incompatibility of Windows, and the excessively anal-retentive processes they now apply to make it work, mean it's NOT economically viable to use Windows XP or Vista unless you have broadband and can afford to have it set for constant upgrades. The way MS keeps changing the command sets means that they force you to replace equipment before it's worn out, as you can't always make hardware work with any version of Windows except the one current at the time of its manufacture - and that's wrong. I should be able to continue to use peripherals and other hardware until it wears out, even if that takes ten years.

Justin James

"only relevant in a MS Windows network. DLL hell doesn't affect Unix or Linux nor does the upgrade reboot requirement." This is 100% untrue. DLL Hell is quite possible on *nix. It's just slightly better managed than Windows did it. I may also add that DLL Hell was caused more by sloppy programmers sticking their DLLs into C:\Windows\System; if they had put them into the local install directory, at the expense of storage space, DLL Hell would not have been a problem. Regarding the reboots, that is also positively not true. The reason why every installer requests a reboot is that it was easier for the person writing the setup script to check "this requires a reboot" than to investigate whether or not a reboot was truly required. The only time a reboot is ever *really* required is when the core OS is modified, or the system is caching something and updating the underlying item does not poison the cache. I cannot tell you how many apps have demanded reboots from me, but worked just fine without the reboot. I know you're speaking from experience, but your experience here is from the user's perspective (ironically, that's the one that truly counts the most!). From the user's perspective, the point of commonality is Windows, therefore this is a Windows problem. In reality, the common denominator is bad programmers who happen to be working on Windows. :( J.Ja

Deadly Ernest

only relevant in a MS Windows network. DLL hell doesn't affect Unix or Linux, nor does the upgrade reboot requirement. Sure, if you're upgrading an application you have to close and restart that application to allow the upgrade to take effect, but you don't have to restart the computer in a Unix or Linux system - this issue is strictly a Windows issue due to the poor way they write the OS. On my son's Win XP box, when he does an upgrade of his video driver or Firefox, he has to reboot the whole system. When I do the same on my Kubuntu Linux box I just have to restart that particular service, and I can continue doing other things while that happens - FF restarts while I check mail in Thunderbird, no issue and done in seconds, no productivity loss. There's no technical reason why MS can't do this; they choose not to.

Mark Miller

[i]Mark, I think we may agree on the basics, but not quite agree on how we got there. When I was referring to the pre-standards era I was talking about the mainframes of the 1960s and 1970s and the early days of the micro-computer (what we now call the PC). I agree we need to establish the infrastructure before we look at the app and the how. But I think it's a very bad mistake to design a business app that's based on browser technology as it just makes it so simple to use a browser to access it improperly. One principle of security is that if the app or its data is NOT able to be carried over the Internet as normal, then it's less likely to be stolen that way. Using VPNs to connect nodes is OK as you're not using the Internet protocols then, so you can forget the browser and Internet based restrictions. The fact that some business management is asking for browser usable apps for internal use shows extreme stupidity by them or their advisors.[/i]

I agree managers/advisors are extending the browser's reach to absurd levels, but as far as I can tell the people who are doing it are in the majority. Several years ago I was reading that managers who were directly in charge of new software development inside businesses had established company policies that disallowed thick client (GUI) apps. They wanted browser apps. almost exclusively. From what I've been reading this hasn't changed, though as Justin has discussed I think mobile apps. on PDAs and cell phones will increasingly come into play as well. What media they'll use (GUI or web) is still a big question, I assume. I think the web browser idea has some promise. What I was trying to suggest was a way to make it less stupid. With the design I was talking about, based off of a concept Alan Kay talked about 11 years ago, it would have the potential to make the browser into a more generalized platform than it is now, one that is more customizable. I see this as a possibility because I've seen situations in the consumer realm where the browser makes going from a content-oriented metaphor to a functionality-oriented metaphor seamless, and it enables this to be implemented more easily than in a thick client app. You see this on e-commerce sites, for example. The browser has a very simple and easy to learn operational interface. So I see things that are attractive about it. What I don't like about browser apps. is their statelessness, and the fact that you have to use a single language on the client (Javascript), but have more flexibility on the server end. With thick client apps. you don't have to think about maintaining state, because it just happens. I agree with you about the security issues with browser apps. It created a standard way of interfacing with any web server, which makes it an easier target for hackers. In the days of client/server I think this was harder, because there was more of an inclination to create proprietary communication protocols between client and server, and they were usually binary, rather than text-based.

[i]I think we need to look at how we want the business apps and network to operate. As I see it we have three basic models and should choose one and then work with that. 1. Centralised processing and data storage - basically going back to the 1960s style mainframe with dumb terminals working off them. 2. Totally decentralised processing and data storage - basically everyone using a PC and using a peer-to-peer data sharing process. 3. Centralised data storage and decentralised processing.
Basically the file server type setup where the processing is done at the local PC and the data is sucked from the server and stored back there. Personally I think option 3 is the best as it gives the best management control. Going with web apps you are really going for a version of option 1, and the X-terminal process is very close to option 1 as well. We all know option 2 doesn't really work in a large organisation.[/i]

Option 1 is not too attractive to me, if only because the mainframe architecture insists on a batch processing model--basically back to statelessness. At first glance I agree with you about option 2. If the client architecture was better than a PC is now, I think it could work if a mesh network was used. Option 3 is your basic client/server setup. I don't think the idea was inherently flawed the way Paul Murphy does, but its implementation by and large in the industry was terrible. I worked at a company that used client/server in its product line, for 4 years. We did not use COM, and so didn't run into the "DLL hell" issues I discussed earlier. Instead we built our apps. in C for Windows using the windowing API. We still had installation problems on clients that needed to be resolved on a case by case basis. I think that's what large deployments are trying to avoid. Keeping client versions up to date is a problem because of the early-bound software architectures that are still being used. When we updated client software, whoever received an update had to stop what they were doing and reboot their machine. Not a smart way to go. That's something that web apps. have the potential to avoid. It doesn't always work out that way. A lot of times the web app. still has to be shut down while it's being updated. One reason for this is that the data model in the database (on the server) can change on a software update, making it very difficult to seamlessly transition from one version to the next. If the interaction model of the app. changes at all, that's another reason to shut down access during an update, because otherwise state can be lost if people are using it during the update. I think what would make client/server work better is a late-bound software architecture. It wouldn't get rid of all the version update issues I describe, but it would make them easier to deal with.

[i]Most LAN games operate in a similar process, and it's this basic process that I think is behind the concept of thin client - I say 'think' as I've never used a thin client myself so I'm not sure what happens there.[/i]

Thin client is basically the web browser architecture. What you describe is what's called a thick client, because you have the simulator running on each workstation, so quite a bit of processing is happening there. I know what you're talking about. I worked on a video game package with a friend 4 years ago, and he used the architecture you describe. The idea of thin client is you have a runtime (a browser, a VM, what have you) already installed on the workstation, and the app. is "streamed" onto it, and run on it. When the app. is shut down, it disappears from the client system, or some of it is cached for loading later when it's rerun. If you've used a browser app. or a Flash app. in a browser, that's what thin client looks like. Thin client is just another name for the mainframe architecture. The bulk of the processing takes place on the server.
X-terminals are kind of another version of thin client in terms of how the user accesses the UI, but I think of them as a hybrid between the client/server architecture and thin client. Processing takes place on a server, but you have the same level of interaction you can achieve with a thick client app. Plus, state is managed the same way as in a thick client.

[i]There's no reason why we can't use a similar process with the design of business apps, and that's what the browser based apps are trying to do, and I think that's what you want to do.[/i]

I agree there's no reason it can't be done. Just understand that what you're talking about is essentially client/server. If this were a large installation, think about how software updates would be propagated.

[i]When I go to use any data the first burst of data is along the lines of a set format that tells my system how the rest of the data is formatted and sets it up, then along comes the data. I think we actually do a lot of this already, as the start of any file has the info as to what type of file it is so we don't get weird sound when we view an image file etc - you just want to extend it a bit further. And that's OK, I just think it's silly to use a browser and Internet processes for that. Although you may be able to get around that by having the data transfer done by a non-Internet protocol. However, why not design a business app browser that's not an Internet browser and then design your apps to use that - most apps go outside the Internet regime anyway, so why not accept that and step out to a special one of your own, or look to have a new industry standard created for internal-use browsers only. It's all very much theoretical anyway, as whatever is sorted out will be ignored by MS (unless they create it) and their huge marketing budget will force their variant down the throats of much of the ignorant management market.[/i]

Sounds like a good idea. I have had that idea in my head from time to time. I think the current browser architecture is unsustainable for business apps. in the long run. Rich Internet Application architectures are the newest attempt at an improvement of this model, but they're not the best fit, because they operate counter to the browser operational model. They're basically thin clients running as if they're thick clients. I can imagine users getting confused trying to use the Back and Forward browser buttons with them...

Deadly Ernest

Mark, I think we may agree on the basics, but not quite agree on how we got there. When I was referring to the pre-standards era I was talking about the mainframes of the 1960s and 1970s and the early days of the micro-computer (what we now call the PC). I agree we need to establish the infrastructure before we look at the app and the how. But I think it's a very bad mistake to design a business app that's based on browser technology as it just makes it so simple to use a browser to access it improperly. One principle of security is that if the app or its data is NOT able to be carried over the Internet as normal, then it's less likely to be stolen that way. Using VPNs to connect nodes is OK as you're not using the Internet protocols then, so you can forget the browser and Internet based restrictions. The fact that some business management is asking for browser usable apps for internal use shows extreme stupidity by them or their advisors.

I think we need to look at how we want the business apps and network to operate. As I see it we have three basic models and should choose one and then work with that. 1. Centralised processing and data storage - basically going back to the 1960s style mainframe with dumb terminals working off them. 2. Totally decentralised processing and data storage - basically everyone using a PC and using a peer-to-peer data sharing process. 3. Centralised data storage and decentralised processing - basically the file server type setup where the processing is done at the local PC and the data is sucked from the server and stored back there. Personally I think option 3 is the best as it gives the best management control. Going with web apps you are really going for a version of option 1, and the X-terminal process is very close to option 1 as well. We all know option 2 doesn't really work in a large organisation. In some cases I support the X-terminal situation and in others I object to it. You have to choose for the situation.

The very best answer to this problem I see has actually been solved by the gaming community and used by them for ages. Over a decade ago I was privileged to see a major military simulator program working using X-terminals. The terminals were full blown computers and had the software program running on them. The server was powerful and had a variant of the software running on it (a server version). The parameters for the exercise were loaded onto the main server, another server approved login access and allowed connection from the terminals to the simulator server, and it then uploaded the specifics for the current simulation to the X-terminals as they logged in. After that the terminals sent updates from their operators to the server, which then sent those updates to the other terminals that were entitled to that info. The part of the simulation that was accessible to a specific terminal was actually processed and run on that terminal and altered via the updates it was allowed to receive, while the server worked as a data router and a store of all the changes. This meant the bulk of the processing was done on the local unit while the data was stored at the server. Most LAN games operate in a similar process, and it's this basic process that I think is behind the concept of thin client - I say 'think' as I've never used a thin client myself so I'm not sure what happens there. In effect, when I play a LAN game like Starcraft, I run the basic data on my system to prepare to receive the data from the server, and the server sends me the map and basic set-up info.
After that I get data updates from the server and all the processing is done by my system. There's no reason why we can't use a similar process in the design of business apps, and that's what the browser based apps are trying to do, and I think that's what you want to do. I have my PC with basic software, including an office package. When I go to use any data, the first burst of data is along the lines of a set format that tells my system how the rest of the data is formatted and sets it up, then along comes the data. I think we actually do a lot of this already, as the start of any file has the info as to what type of file it is so we don't get weird sound when we view an image file etc - you just want to extend it a bit further. And that's OK, I just think it's silly to use a browser and Internet processes for that. Although you may be able to get around that by having the data transfer done by a non-Internet protocol. However, why not design a business app browser that's not an Internet browser and then design your apps to use that - most apps go outside the Internet regime anyway, so why not accept that and step out to a special one of your own, or look to have a new industry standard created for internal-use browsers only. It's all very much theoretical anyway, as whatever is sorted out will be ignored by MS (unless they create it) and their huge marketing budget will force their variant down the throats of much of the ignorant management market.
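
The routing pattern described above -- the server acting as a store-and-forward hub while the terminals do the heavy processing -- can be sketched in a few lines of Python. Everything here (the class, the channel names, the toy callbacks) is illustrative, not taken from the simulator Ernest describes:

# Illustrative sketch of the simulator/LAN-game pattern: the server keeps the
# master record of changes and routes each update only to the clients entitled
# to it; all real processing happens on the clients themselves.
class UpdateRouter:
    def __init__(self):
        self.clients = {}      # client_id -> callback that delivers an update
        self.entitled = {}     # client_id -> set of channels it may receive
        self.history = []      # master store of every change, kept server-side

    def register(self, client_id, channels, deliver):
        self.clients[client_id] = deliver
        self.entitled[client_id] = set(channels)

    def submit(self, sender_id, channel, update):
        self.history.append((sender_id, channel, update))
        for cid, deliver in self.clients.items():
            if cid != sender_id and channel in self.entitled[cid]:
                deliver(update)          # the client's own code applies it

# Toy usage: two terminals, one entitled to the "alpha" channel only.
router = UpdateRouter()
router.register("t1", {"alpha"}, lambda u: print("t1 got", u))
router.register("t2", {"alpha", "bravo"}, lambda u: print("t2 got", u))
router.submit("t2", "alpha", {"unit": 7, "pos": (3, 4)})   # reaches t1
router.submit("t1", "bravo", {"unit": 2, "pos": (0, 1)})   # reaches t2 only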

Mark Miller

Ernest- I agree with what you say here, but management is reticent to go back to the client/server architecture. I assume that's what you're talking about. I would like people (managers, customers, developers--including myself) to think more about the architecture we use in our projects. The fact that we don't -- we just slap stuff together at first, say "That'll work," and then everyone else copies it because "it's the new thing" -- creates a lot of problems that are avoidable. I wasn't talking about business apps. on the internet (and neither has Justin talked about this). We're talking about business web apps. that are on a LAN. I know it probably sounds strange, but that's what's been developed over the last several years. Part of the reason client/server was abandoned was the "DLL hell" experience. That was Microsoft's fault. Their COM architecture sucked for anything but creating something akin to a kiosk. They're trying to come back with WPF/Silverlight. Silverlight is at least creating interest. From a traditional perspective the problem was application version management. .Net deals with this better than even Java does now.

I've been talking to Justin (in a thread above this one) about a solution another blogger, named Paul Murphy, advocates, which is going to a distributed Unix architecture using X-terminals. This would bring a more traditional thick client architecture back into play, and there would be fewer problems of the Windows variety, with no need for a browser. You would get the same ease of installation as with web apps., because all apps. could be centrally installed and accessed. I like the idea. I think it would be a move in the right direction for large deployments, but looking to the future it would be nice if we could move beyond Unix/Linux to a more powerful and expressive architecture that's secure. You know, Unix is "so 1970s". IMO Linux is a knock-off. I've already talked about solutions that have been created in the past, but haven't become widely used, in comments on other posts Justin has done. So I won't repeat them here.

Re: standards. I can remember another time that's similar to what you're talking about. In the 1980s the PC business was the same way (maybe you were referring to the same time). There was no standard, except for ASCII. The fact of the matter is that widely accepted standards do get in the way of new ideas coming into the space. In the 1980s there were lots of tries at creating a PC; all sorts of different models and capabilities. Like I said earlier, it was chaos. For someone like me it was exciting, because there was innovation happening all over the place. For businesses it was a struggle to adopt PCs for the same reason. What I think Kay was arguing for was the idea that software architecture could act as a substitute for standard formats. Instead of having a standard format, have a standard interface to a venue for data (like a web browser). Let there be an intermediary between the endpoint software (the browser) and the data which acts as a "data driver" (like a device driver). The "driver" is downloaded (I'd also cache it) along with the data. The "driver" handles the reading and writing of that data. The endpoint software doesn't have to worry about the format. The code and data could be packaged together in a zip file or some other medium that allows them to be combined. That's all I was talking about. I thought it was a neat idea.
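
A rough sketch of that "data driver" idea, under my own assumptions about the packaging (a zip holding the payload plus the name of the driver that understands it); this is only an illustration of the concept, not Kay's actual design:

# Illustrative sketch of the "data driver" idea: the endpoint only knows a
# fixed read/write interface, and the bundle names the driver that understands
# its payload. A real system would ship and sandbox the driver code itself.
import io, json, zipfile

class JsonDriver:
    def read(self, raw: bytes) -> dict:
        return json.loads(raw)
    def write(self, doc: dict) -> bytes:
        return json.dumps(doc).encode()

DRIVERS = {"json": JsonDriver()}          # registry standing in for shipped code

def open_bundle(bundle: bytes) -> dict:
    with zipfile.ZipFile(io.BytesIO(bundle)) as z:
        driver = DRIVERS[z.read("driver.txt").decode().strip()]
        return driver.read(z.read("payload"))

# Toy usage: build a bundle, then open it without the endpoint knowing the format.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("driver.txt", "json")
    z.writestr("payload", JsonDriver().write({"title": "hello"}))
print(open_bundle(buf.getvalue()))        # -> {'title': 'hello'}
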
What I've tried to say "in so many posts" here is that in most cases I don't care that there is a standard, and that it's stifling innovation in some parts of a system because I don't dig into those parts. If that's what's happening, fine. The only reason I care about software standards that affect application development is because that's the space I work in. Since several years ago the browser has had increasing influence over what we as developers do, and we've had little control over that, because managers/customers want browser apps. no matter what. They don't evaluate the architecture for whether it creates good software solutions that are reliable and secure. What they've been looking at with web apps. is the cost of maintenance, accessibility (whether the app. can be accessed outside the facility through VPN, say), and standards (de facto or by committee). The big attraction to web apps. was there wasn't a need to install anything on the client. The browser came with the OS, and it's been assumed that's all that's needed. This was "the answer" to "DLL hell" and the other (Microsoft) problems of client/server. I think the main attraction to this, as opposed to X/Windows, was that the browser became free, along with the HTML format and Javascript. Back in the 90s, at least, X clients were commercial, at least on the popular PC platforms. So the browser became the chosen option.

Deadly Ernest

It's all about giving MS and a few others more direct control over our computers and bigger bank accounts. But while many people just accept their word that it's needed, it's more likely to happen. I actually have a book out on one very probable end to this, available at www.dpdotcom.com, called A New Computing World, if you want to read a computing horror story - it's almost a straight extrapolation of trends over the last ten years. However, another probability that's increasing more with each month at the moment is that Wintel will get their Trusted Computing put in place and we will find we have two totally incompatible Internet systems - one that's Trusted Computing and refusing to talk to anything else, and one that's not Trusted Computing. The Trusted Computing version is very likely to be limited to various USA-based businesses and government departments, and thus they'll be unable to access the main web around the world. This will be due to the TC systems refusing connections with non-TC systems, the EU banning the use of TC in Europe, and many countries banning the use of systems that won't work with their Linux/Unix based government systems. That's when we'll start to see the brown stuff flying within the Wintel empires.

Deadly Ernest

Mark, what you say about databases and the like is true, but we've had the answer to those problems for decades in regards to making them work properly on local area networks or across VPNs. In such cases you don't have a stateless situation. But the Internet itself is stateless, and only a fool would be trying to work multi-user databases across such a situation. Also, only a fool would be trying to work business apps across totally uncontrolled and unsecured connections like the Internet for such data management. No one has yet explained why we need so many proprietary systems that do what is already done by the normal browsers anyway, and why use a mashup; mashups are for people who are too lazy to put a proper web page together, so they just steal bits from other people. The main problem, as I see it, is people are trying to move things into the web and browser sphere that DON'T belong there and then get pissed when it doesn't work easily. It's kind of like trying to use your sports car to move house and getting upset when the fridge won't fit, instead of just going to use a truck for the job. You don't need complex apps to do things on the Internet, but you do need them for normal business activities that shouldn't be anywhere near the Internet, for many reasons like privacy and commercial security.

Re standards: we had no standards in the early days of computing and everything got fragmented, with billions of dollars being wasted and no systems interconnectable without having each connection specially made. The introduction of standards solved many huge issues that situation caused. If a standard TOTALLY stifles any chance of innovation it may need to be reviewed, but we don't want to go the way of no standards; that's where MS is headed, and look at all the trouble that's causing with software incompatibility and special drivers for each OS and not being able to read data more than a couple of years old. We need standards for business to work; what we need is more regular reviews of them and some expansion of their capabilities.

Tony Hopkinson

In fact I know we don't. I'm sure the big boys want it, or at least have been led to believe so. So I'm trying to make the best of a bad go, as the big boys are very good at manufacturing a need. The writing is on the wall, we are going to get it. If it takes off, certain things we want we will have to do without, or accept strangers furkling about with our kit, using our resources to provide us with what it is in their best interests for us to have. It sets my teeth on edge, and up to press I've been able to avoid doing corporate theft as an earner, but if that's all there is, well, there's food to put on the table.

Tony Hopkinson

In order to use this site in all its glory I must allow javascript to run, for instance. Now things are better than they were, in that it's sandboxed under IE7, but that still isn't as secure as specifically allowing this site through NoScript on FireFox. If I was still back in the day of IE5 et al, I would have javascript off, and my use of this site would be much less because of that. I am totally uninterested in any mechanism that reduces the level of control over what can happen on my PC in order to work. So my concern with some of the mechanisms that are coming down the pipe isn't that they are ActiveX, but that the driver for their existence is ActiveX's, ie (no pun intended) usability, wow factor, gee whizzery and lower TCO. In other words, nothing to do with what I want. Of course I trust you TR, no really, well in this sandbox anyway, as a low privileged user and all my anti software running...... Nothing personal you understand; I once stopped being paranoid and then I got paranoid about why.

Justin James

Mark - I agree that the centralized systems definitely have a much lower TCO, particularly since getting someone back up when they're down is just swapping the unit, instead of trying to recover locally installed software and settings. But it is a *really* tough sell on the business end of things; they see (rightly) a device that is technically less capable than a desktop PC, and costs as much (if not more). Even the savviest of CIOs (or CTOs) would simply ask, "wouldn't it be cheaper to buy cheap PCs and use Windows Terminal Services if I want to do something like this?" The major, major problem is that Sun is a company run by engineers. They tend to have really great technology that no one asked for or wants. The Niagara CPUs are a good example. While it is a stunning architecture, and for certain scenarios they make sense, no one wants 32 threads running at 800 MHz, they want 4 threads running at 2.8 GHz, even though you and I both know that 800 MHz is more than enough for 90% of server tasks, and the thread multiplexing should be a huge help. But regardless of the technical advantages, it is a commercial disaster. Solaris itself... AMAZING tech, and no one wants it. Ever try installing it? I did, a few months ago. It was difficult just to figure out how to accept a selection. With that kind of attitude, it is easy to see why Sun is struggling, despite the overall quality of the offerings. At the end of the day, higher ROI almost always loses to a lower sticker price if an MBA is making the choice, *particularly* if they've been burnt by IT projects failing to deliver promised ROI in the past (*cough*ERP and CRM*cough*). :( J.Ja

Mark Miller

Ernest - For simple apps. which will not grow beyond their simple designs, then yes, the platforms you listed are adequate. The problem is systems like what Justin has been describing for the past year or so. For example, a widely used web app. that's used by employees, which needs to lock database records while they're using it, but unlock them if they navigate away from specific pages in the app, or close the browser. How do you solve this problem when you're dealing with a stateless model like a web app.? The answer is: not very easily. How do you create a web app. that has features of GUI programs? The answer is something like AJAX, Flash, or Silverlight. What if you want to create a mashup between this sort of app. and a site that has its own AJAX/Flash/Silverlight or HTML elements? The answer is: not very easily. So you see, for the simple apps., yes, you can get away with using HTML, PHP, Perl, etc. If you're building a complex system these technologies get increasingly problematic, and come to resemble "steaming piles of manure" after a while, all because we're trying to work within an architecture that is ill suited to the task.

Tony - Re: insecure systems. The reason your systems are insecure is because you chose ones that were insecure. There are systems, along with complementary architecture, that are more secure, if that's what you desire.

Coming full circle, the topic was that innovation gets stifled by standards. I think the point was well made that some standards support innovation, and I agree with that. It just depends on what level of the machine you care about. I, for example, don't care too much about the OS on my PC. There are other people who do. What I care about is writing software that will run on my computer and that of others. I don't need to care about the OS to do that. So I'm happy with what I have. Justin and I are software developers. We care when standards get in the way of innovation in software development. So that's why he and I got to ranting a bit about it on here. It's very frustrating having to deal with complex problems within a standards-based architecture that we have to adhere to because it's what everybody wants, when just using a better architecture would make the job a lot easier, less costly, and less error-prone. I think some of the arguments that have been made here amply champion the position that innovation isn't worth it because it's too much trouble, and/or too much risk. So both sides have had their say here. :)
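[Editor's note: to make the record-locking problem above concrete, here is a minimal sketch of the usual workaround: a lease-based lock that the open page keeps alive with a periodic heartbeat request, and that simply expires if the browser is closed or the user wanders off. All names here (LeaseLockManager, the 30-second TTL) are invented for illustration and are not from any particular framework.]

```typescript
// Hypothetical server-side lease manager; the edit page would call renew()
// from a timer (XMLHttpRequest/fetch) every few seconds while it stays open.
type Lease = { owner: string; expiresAt: number };

class LeaseLockManager {
  private leases = new Map<string, Lease>();        // recordId -> current lease
  constructor(private ttlMs: number = 30_000) {}    // lock dies 30s after the last heartbeat

  // Called when a user opens a record for editing.
  acquire(recordId: string, userId: string): boolean {
    const now = Date.now();
    const lease = this.leases.get(recordId);
    if (lease && lease.owner !== userId && lease.expiresAt > now) {
      return false;                                  // someone else holds a live lock
    }
    this.leases.set(recordId, { owner: userId, expiresAt: now + this.ttlMs });
    return true;
  }

  // Heartbeat from the open page; same rules as acquire().
  renew(recordId: string, userId: string): boolean {
    return this.acquire(recordId, userId);
  }

  // Explicit release when the user saves or uses the app's own navigation.
  // If the browser just closes, there is no event: the lease times out instead.
  release(recordId: string, userId: string): void {
    const lease = this.leases.get(recordId);
    if (lease && lease.owner === userId) this.leases.delete(recordId);
  }
}
```

The point of the sketch is not that it is elegant; it is that a stateless protocol gives you no "the browser was closed" event, so you end up approximating it with timeouts and heartbeats, which is exactly the jury-rigging being complained about in this thread.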

Mark Miller

Take a look at Tony Hopkinson's comments below in the discussion about Alan Kay's ideas for the web. He talks about how paralyzed IT is when it comes to installing any new software on PCs, because of security considerations. This is what Paul writes about all the time. The reason he talks about Sun Rays (or X-terminals) is they reduce IT headaches in large installations. Yes, you could install PCs as X-terminals, but you'd still have some of the same problems as you have currently with PCs. There would still be the risk of attack vectors, because of compromised PCs. If security patches are applied you might need to reboot systems. This can also happen with Unix. The difference is that since processing is distributed on servers, people can just run apps. on a different server, or a different file server, if another one is down, without having to move to a different desk. In small settings, like small businesses, I think PCs do fine. In large installations though, I imagine that small problems multiply to become huge ones, simply because you have the potential of every single "terminal" having the same problems, or worse, different ones. Just saying that by thinking "these cost more than PCs with X software on them" you might be falling into the trap of thinking about short-term cost savings, which leads to greater costs in the long run. As we've discussed before, sometimes making a good investment now leads to cost savings in the long term.

Deadly Ernest

It's my computer and I want to say what goes on it. We already have standards and built-in systems that work within browsers to display video files and play music, so why do companies have to create new proprietary ones? Answer: so they can own and control it and make you pay for it. Trusted computing really means 'give Microsoft total control of your computer.' Something they tried to push through over a decade ago, and keep raising in different formats, and since Win XP have been trying to slip in the back door. The thing that really worries me is that MS have said four times since Win98 was released that they've rewritten the Windows kernel - yet each new variation has exactly the same security holes and back doors that people have been exploiting for a decade. This means one of two things:

1. The people at Microsoft in management and coding are so stupid that they can't find a front door without it being marked with huge foot-high signs. A valid possibility. Or,

2. They WANT people to make viruses and exploit the operating system, so people will get so upset with this sort of behaviour that they'll agree to anything to make it stop.

If the world accepts the trusted computing system that MS has been pushing for years, it means no system can connect to the Internet unless it has a current licence of the latest MS OS installed and all fees paid up, because nothing except such a machine will communicate with it. Sorry people, but I DON'T trust MS that much.

Now back to the main issue. There are three major types of Internet web site activities:

1. I have information for you. Perfectly done by HTML and existing file types that the browsers handle without anything extra.

2. Send me information; we already have a whole range of methods for doing that, like PHP, Perl, etc.

3. Let's exchange information as we go; again, already handled by PHP, Perl, etc.

What more do you need? BTW: A very large number of people in many developed nations still run older PCs and also still use dial-up connection services as they don't have broadband capability - so we still need to cater to those needs.

Tony Hopkinson

unconvinced. The fact that some clever arse can propagate a virus in a Word doc, or exploit a video file, makes me seriously nervous about running what amounts to an exe. Security considerations have put a massive crimp on what can be done, and the trusted computing model is some sort of bad joke. On top of that, while today's extra bandwidth and processor speed makes it possible, it's still a finite resource; more importantly, it's mine, not Acme Corp's. I like the Silverlight idea, but aside from being proprietary, it would take a lot more education to confidently secure a PC than exists in the places where something like that would be targeted most often. I mentioned ActiveX not as a specific tech but as a specific example of how the desire for a 'good' user experience became a user nightmare.

Mark Miller

I wasn't talking about specific technology use, and neither did Kay, though I did mention Java and Flash, given the current state of things. You're right about the JRE issue. I'm with you on that. There are better VMs out there (I'm talking besides JVMs). Another obstacle to the sort of vision that Kay outlined was that in the early days, when the internet became popular and commercialized, most people couldn't afford broadband. They used dial-up. Businesses could afford it, and brought that in. Even so, it might've been possible to have very small plug-ins for simple content, so it would've been in the realm of feasibility. Secondly, most computers weren't that fast, and doing everything on a VM probably would've slowed things down to a crawl. So I figured the browser model, as it was, was the only practical solution given the constraints. However, if you think about what's possible now, then this sort of thing becomes more feasible.

Mark Miller

I read through the responses here and it seems a couple of you are getting hung up on the idea of format, and that's not what I meant. I didn't even mean scripting, though that's a possible solution as well. Remember I did mention a VM... What I envisioned from Kay's comments was more along the lines of a browser API with hooks in it. The browser would basically be a container for content, but would know nothing about the format of incoming data. A late-bound module could plug into it, and it would conform to the interface of what's needed by the browser. In essence everything would become a plug-in. The producer of the content could create whatever format they wanted. So long as the code that came with the file (however you package it; think of a zip file, maybe) conformed to the API, then the browser would just have to make the calls to render and carry out interaction actions. The plug-in would take care of parsing, and interacting with the content, if need be.

What I envision out of this is a very simple, broad API for the plug-in hooks (i.e. not focused on narrow features). A plug-in element would contain routines for rendering its content, and interacting with it. The browser would need to have some callback routines as well, because the plug-in wouldn't know the specific graphics system of its host. So it would need to expose a predictable interface for drawing things. Perhaps there could be an extension interface as well, so that if the content producer wanted to add specific routines (sandboxed and namespaced by content producer) that are platform-specific to the browser's API for rendering special content, it could do that as well. I would think as well that the lifetime of such extensions would only last the lifetime of the session, after which they'd disappear, so as to not clutter the system with them. One aspect that would be preserved is the idea that if there was some API incompatibility, it would not bring the relationship between the browser and plug-in to a screeching halt. Like with current HTML rendering, if a part of the content doesn't make sense, it's just skipped, and processing continues at the next logical point. It would make weird things happen, but at least something would happen. This is a rough outline. I'm sure others could poke holes in it, but this was the basic idea I had in mind when I said this stuff.

In conjunction with his criticism of the browser, Kay talked about a system he saw that was used in the Air Force in 1960. Perhaps this will provide some context. He said when he was first brought into the AF they would get tapes that were all in various formats to be read by their computers. So code had to be written and used that could parse the formats. At one point, though, an enlisted man (Kay would dearly like to know who) came up with an ingenious scheme. Instead of the tape just containing data, the first part of it contained binary routines. They were loaded into standard slots in the computer's memory, so they could always be called in a predictable, standard way. They handled parsing the data on the rest of the tape. The people and programs accessing the data did not need to know a thing about the format. The routines handled all of that. This enabled the format to change, and vary depending on who created it, just as it was before, but without the chaos of having to adapt to yet another format that came from some new source.
The ease with which data was accessed was as if they had been using a standard format the whole time, but they weren't. Instead they were using what in essence was a standard API for accessing data. Unfortunately this was not to last. Eventually the AF standardized on COBOL, and this scheme went away. Instead they chose to adopt standard formats, written and read by COBOL code. This is the analogy he made to how web browsers operate.
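[Editor's note: to pin down the shape of the idea above, here is a rough sketch of the kind of plug-in contract Mark is describing. Every name in it (ContentPlugin, RenderSurface, and so on) is invented for illustration; this is not an API from any real browser.]

```typescript
// Callbacks the host browser exposes: the plug-in knows nothing about the
// host's graphics system, so it draws only through this surface.
interface RenderSurface {
  drawText(x: number, y: number, text: string): void;
  drawImage(x: number, y: number, bytes: Uint8Array): void;
  requestRedraw(): void;
}

interface Interaction {
  kind: "click" | "key" | "scroll";
  x?: number;
  y?: number;
  key?: string;
}

// The broad, content-agnostic interface every late-bound module conforms to.
// Parsing the payload is entirely the plug-in's job; the browser never sees
// the format.
interface ContentPlugin {
  canHandle(typeHint: string): boolean;
  load(payload: Uint8Array): void;        // parse whatever format it ships with
  render(surface: RenderSurface): void;   // draw via the host's callbacks
  handle(event: Interaction): void;       // respond to input, then requestRedraw()
}

// Host side: content arrives as an opaque payload plus the code to interpret it,
// much like the Air Force tapes that carried their own parsing routines.
function display(payload: Uint8Array, plugin: ContentPlugin, surface: RenderSurface): void {
  plugin.load(payload);
  plugin.render(surface);
}
```

Note that the data format never appears in the host's code; the graceful degradation Mark mentions would live inside load() and render(), which would skip whatever they don't understand rather than halting.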

Tony Hopkinson

Not to mention Jacqui's oft-mentioned resource stealing. In some ways I quite like the idea of SilverLight, in that you can use config files client side to constrain what can be done. Course the tools to do that are currently far from brill, and of course it relies on the designer degrading gracefully. Tony watches himself turn purple; can you hold your breath for a decade? This is always going to be a compromise between security, functionality and optimisation. My personal choice is something that functions in the way I want, where the security is under my control and it's optimised as best as is practical. Not a popular point of view in certain circles.

Justin James

... yes. Or something along the lines of a format that allows code of some sort to be embedded in it, in a standardized way. Not just for HTML either. Imagine OLE where the EXE to edit the embedded object was part of the embed, and that's kind of the picture. Quite icky from a security standpoint, but really quite mandatory if you want to have a standard that we all adhere to, and encourage and enable innovation & inventiveness at the same time. J.Ja

Deadly Ernest

OK, let's go back a decade or so and see what and why certain basic industry standards came in. In the early days everyone used their own way of doing things and none of it was compatible with anyone else's, so a few basic industry standards were set up. I won't go into the arguments about which were chosen or why, but two major sets were set up to make it easy for everyone to work within the seven layers of the OSI model. We already had a couple in a third layer that gave a trio of standards to do everything needed.

Hardware - IDE was the first of the hardware industry standards, where a set of common command codes was established and EVERY device was to be manufactured so that receipt of that code always did the same thing. The device had to have the on-board smarts to convert those codes to the machine language of the device, and all software and operating systems were to use these common code sets.

Network and Transport - We already had TCP/IP and the other transport and communications protocols sorted out as standards, and the command sets for these were well set. These codes were required to be used by all software, hardware and operating systems, so everyone could now talk to each other.

Software - A set of command codes was established that all operating systems and applications were to use, to allow all the software to be nice to each other. What this meant was that a file could be created on one platform and sent to another and open properly. Also, the applications all had the same signals to send to the comms system of the hardware to have anything done at that level. These included some basic file and document formats, and more lately they've been extended to other formats. The command to save a file was the same for all software and hardware, ditto with all the universal functions. Files of a particular type were formatted in a similar manner. It made writing the software a lot easier, as most of the instructions were already written.

It's because of this that Plug'n'play devices worked, that some software worked on multiple versions of OSs, and a few worked on multiple platforms fairly well; the install detected which OS and changed a few handshake commands to suit the OS. For a few good years everything worked well, then Microsoft decided that they didn't want to play ball any more and stopped using the industry command sets; that's why you need special device drivers for Win XP different to Win NT, to Win 98, to Win Vista, despite three of these using the exact same kernel code for most of the OS. It's because of this that some service packs have broken the ability of applications to work. To make an application compatible with a particular Windows OS you have to write the commands to suit that specific OS, as it doesn't use the industry standards; the same with MS document formats. MS have so messed up their formats that they don't read older formats, as they're far too different. Much of the difficulties being discussed could be resolved if everyone used the existing standards, as then you'd have a clear framework to establish good object coding and simplify the handling of the data objects. At this point we're starting to move out of my area of expertise, so I'll leave it there for James and a few others to carry forward.

Tony Hopkinson

Is he talking about ActiveX, a web service, or some unholy conjoining of the two? To me all that is doing is moving the problem; now you need a standard for how formats are described and how they can be extended. Otherwise it is ActiveX simply coded in Java, with everybody using the same JRE (pause for belly laugh). Go too far down the pass-by-code route and you end up with effectively downloading a program which you then compile and run. We have that capability now: SilverLight, Flash and Java.

Justin James

Glad that all made sense. What Murphy suggests (the Sun Rays) are precisely X Terminals in concept, but with a different format. I think he is 100% right on many levels that this is the way to go. There are just a few problems with his arguments.

1) Sun Rays are as expensive, if not more expensive, than a full Windows PC. It is actually cheaper to buy a crummy Windows PC and install Exceed on it and turn it into a dumb terminal talking to a *Nix X system (in other words, a Wintel X-Terminal) than it is to buy Sun Rays. Sun needs to drop the price to about $100 to make them make sense. After all, if it is cheaper to buy Wintel machines, and they have a local processor which therefore allows you to buy less-beefy servers... you can see where this is going. They would have to *give away* Sun Rays. No, they would need to *pay the customer* to take Sun Rays, because every Sun Ray on your network is load on your server. Every Windows PC is only load on your server if it is running client/server apps which communicate with a server process. For local computing apps like Office, Web browsing, etc., they are zero load on the server.

2) Paul has *never* adequately explained (in my opinion) how the Sun Ray solution is any different from a mainframe. If anything, it is conceptually identical.

That being said, I agree completely that something like the Sun Rays makes perfect sense. The reduced ownership/support costs (just not paying for the electricity to a full Core 2 CPU matters) give it good indirect ROI, and should make up for the increased demands on the server. More to the point, it provides a locked down, secure, and controlled computing environment with no chance of data walking out via an unauthorized vector, guarantees that all files get stored where they will get backed up, etc. In other words, I think Paul has the solution 95% right, and his 5% wrong is in the Sun Ray platform itself. :) This is for business IT, of course. Even consumers could rent the terminal (put it in with Ethernet to the ISP, bang, a lot of headaches just got eliminated for the ISP *and* the consumer, and the ISP gets a big piece of hardware they can charge for). But since the ROI is, at best, indirect, it is a tough sell to management, and since it is a Sun proprietary item, it is an impossible sell to management. J.Ja

Justin James

That's the fundamental philosophy underneath UNIX, the idea that *everything* is a file handle, even devices. It's not the precise same idea, but it is the precise same concept. Because of this, once you know how to do certain things on a UNIX OS, the underlying device (screen, remote terminal, disk drive, another process, network socket, etc.) is completely irrelevant, and any file handle that can parse your output can interpret your output as it sees fit. The problem is, as you say, when you want to express something that cannot be well expressed in this metaphor. In the case of UNIX, it is miserable for anything that is not streaming data, preferably in a text format. Certain concepts just do not lend themselves well to the file handle concept, particularly when you start needing to do some jumping around within the data.

Another lesson that we can draw from UNIX is that standards are positively disastrous when the original spec stinks. Too often, someone wrote a bit of garbage code, and by the time they felt like changing it to actually have useful or easy-to-work-with output, someone else had already started using it. So now, those inconsistent command line arguments or the quirky output has to stay, and someone else who wants to write a better version has to emulate what you've done... needless to say, I believe that a major component of the open source drive on *Nix platforms is due to the fact that to improve anything, you need to refer to the existing code to find out all of its historic quirks.

The Microsoft Office file formats are the same way. The Excel format, for example, contains as its very first item a flag indicating when the "epoch" should start for that file. Why? Because of a difference between Excel and Lotus 1-2-3, in the 1980's. So anyone coding against that file format today needs to account for a historic quirk in an application which died well over a decade ago. So of course, even with full and complete documentation of the file format, the only application that can be written to follow the spec 100% is going to be a feature-by-feature clone of Excel.

So you are in this dilemma. Either your standard includes methods for application developers to extend it, within the standard itself, in a manner that allows the format to contain instructions (in a standard format) that indicate how these extensions get used. Or you etch your standard in stone.

So let's imagine the following ideal standard for, say, a spreadsheet format. The standard itself declares, "there is this language, Chai, and any application that handles me must contain a fully conformant implementation of a Chai parser. I will define myself to be a simple XML document; each element contains an X-Coordinate attribute, a Y-Coordinate attribute, and a data type attribute (string, integer, or floating point numeric), or they use a GUID for the data type. This GUID type is the hitch. When someone uses a GUID for the data type, they add an element to this document which contains a CDATA block; within that CDATA block is Chai bytecode (or source code to be interpreted) that gets executed in order to establish an object in memory that implements the IDataType interface." Yes, this could work. It's a bloody mess. That's the only way (that I can conceive) to make this work: have the format standard provide a framework for what a code block needs to do to extend the format, and provide a mechanism for embedding the code to handle itself within the file itself.
So right there, you need a language that is dynamic enough to do that (maybe an eval() would be sufficient; .Net's run-time assembly loading would definitely be good enough, certainly Smalltalk too). And then you need to pray that everyone who decides to implement their own types does it perfectly, and that your run-time interpreter (or bytecode interpreter) is amazingly robust and sandboxed to handle all of this code safely. So now, your simple spreadsheet standard has essentially re-implemented a significant portion of the Java or .Net run-time environment specs. Ouch. So yes, we either have innovation-stifling standards, or we have people breaking standards. :( J.Ja
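[Editor's note: to make Justin's hypothetical a bit more concrete, here is one way the loading side of such a format could be wired up. "Chai" and IDataType are his invented names; everything else below (the cell shape, the interpreter signature) is also purely illustrative, not a real standard or library.]

```typescript
// Illustrative sketch of the hypothetical extensible spreadsheet format.
interface IDataType {
  parse(raw: string): unknown;           // turn the cell's stored text into a value
  display(value: unknown): string;       // render it back for the UI
}

interface Cell {
  x: number;                             // X-Coordinate attribute
  y: number;                             // Y-Coordinate attribute
  type: "string" | "integer" | "float" | string;  // built-in type, or a GUID
  raw: string;                           // the cell's stored content
}

// Stand-in for the Chai interpreter the standard would mandate: given the code
// found in a GUID type's CDATA block, it hands back an IDataType implementation.
type ChaiInterpreter = (embeddedCode: string) => IDataType;

function loadSpreadsheet(
  cells: Cell[],
  embeddedTypes: Map<string, string>,    // GUID -> Chai code carried in the file
  chai: ChaiInterpreter
): Map<string, unknown> {
  const builtIn: Record<string, IDataType> = {
    string:  { parse: s => s,               display: v => String(v) },
    integer: { parse: s => parseInt(s, 10), display: v => String(v) },
    float:   { parse: s => parseFloat(s),   display: v => String(v) },
  };

  const values = new Map<string, unknown>();
  for (const cell of cells) {
    // Built-in types are handled directly; GUID types are handled by whatever
    // code the file itself carried. This is exactly where the sandboxing and
    // robustness worries come in.
    const handler = builtIn[cell.type] ?? chai(embeddedTypes.get(cell.type) ?? "");
    values.set(`${cell.x},${cell.y}`, handler.parse(cell.raw));
  }
  return values;
}
```

Everything hanging off chai(...) is where the "bloody mess" lives: the interpreter has to be sandboxed, robust, and implemented identically by every application, which is how a "simple" format ends up re-growing a noticeable chunk of the Java or .Net runtime.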

Mark Miller

If you read my later responses to Justin, you'll see more of what I was talking about. I thought what you said about IDE and drivers was interesting, because it's a similar approach to what I was talking about with data. Alan Kay had an interesting argument to make about the web browser back in '97. He said he would've preferred to see web data tied to code, so that when you got data, there would be a set of routines with it. If you used that to access the data, rather than directly trying to access the data yourself, then you don't have to worry about the format. You just interact with the routines. This is like a device driver. Your software is not directly reading the tracks on the disk. It just says "I need some data here", or, "I want to write some data here". The OS and drivers take care of those actions. He was arguing that our data access (re: format) should be the same way. What standard formats do is take away the need for an intermediary. Since you always know what the format is going to be, you can just code the format into your software, and access it directly. The problem comes in where someone needs to express something that's more complex than what the format was designed for. In that case they have to do a lot of work to get what they want done. They basically have to create a hack somehow that conforms to the format rules, to jury-rig the system to do what they want. With the way Kay talked about it, they could create an extension to the format, and they could modify the code that came with the formatted data to deal with it, or treat the data differently. Then they could distribute their content and/or revised semantics, and everyone would still be able to view it.

Deadly Ernest

but I agree that we may have too many. I'm sure there are some other dinosaurs out there who remember the early days when we had no standards except those that came across with the telephone communications systems. Every danged piece of equipment had to be set up differently and it took ages to get a system working. Then the first standards on equipment came in and they were all required to use the same command sets from the software, and adding new equipment was dead easy. That is, it was until MS threw all the standards out the Window and sent Plug 'n' play out the window with them. When IDE first came in, any IDE device would attach to any IDE compatible computer and would work with any software using the IDE standards. MS tossed that away after Win 95/98 and went to their own standards, thus the need for separate drivers for each MS OS now. I remember the days when each software company had their own way of doing the output for documents and NOTHING could be read by any other software unless it was in one of the few industry standards of .txt or .rtf - sadly that's still very much the case, although some software writers are making their software capable of reading other people's files - that means almost everyone except MS. You're right about people getting so big that they push their way of doing things on to everyone else as a de facto standard. Also we have many standards where they aren't really needed, but they are needed for the core areas to allow an easy interchange between the various platforms and OSs and systems. The lack of standards was the biggest bugbear of the mainframes; it was almost impossible to get an IBM to talk to a Prime or an Amdahl or any other system at all. Some standards on interchanging information solved most of that. But we shouldn't let the standards become total straitjackets that stop any innovation either, as we can always move on to a new standard provided it gives us an improvement.

Mark Miller

[i]it is really telling that the "great innovator" Google never enters a market until someone else has proved that money can be made in that market.[/i]

Well, isn't that what Microsoft has been doing for years? It seems to work from a business perspective. I think what Google did initially was innovative. I can still remember it. The thing that was unique about it is it tended to do a better job of finding the results you were looking for. There were many search engines back then, and they all pretty much sucked equally. I remember it being really frustrating sometimes to try to search for something and get a bunch of unrelated results. Google cut down on that. The only other things they've done that I've found of any use at all are their translation service and their blog search.

[i]Yes, yes, YES. It is so refreshing to see someone else who "gets" this. Standards are great, and form a framework for people to relate the discussion. But they limit the discussion.[/i]

Until I started listening to Alan Kay I didn't get this either. He made a really interesting argument more than 10 years ago [i]against[/i] the web browser. Not against the web per se, but against having a standardized format that everyone had to adhere to. He said it was "the worst thing since MS-DOS". He meant that analogy deliberately. Once again we were tying ourselves down to a single platform template, compatible with a single format. Perhaps you saw the speech. He had a kind of pie-in-the-sky idea that what would be better would be to take an OO approach: tie code to data, and provide a standard interface to the code. That way the client doesn't have to care what format the data is in. The question is how do you do that? I imagined a VM methodology where everyone would have/download a VM that could execute this object code, and run it, regardless of hardware platform. That would basically mean Java or Flash at this point. Had this sort of solution come early on in the web's evolution, perhaps a different VM would've been preferred. It wasn't until the internet became a dangerous place that people got squeamish about placing "non-standard" software on machines.

[i]We *have* to go back to the mainframe model for business.[/i]

Paul Murphy would disagree with you here. You're right that he argues against PCs, but he proposes a Unix-based model of distributed, shared processing, in an infrastructure that scales well. I've told you about this before, that he's been pushing the idea of Sun Rays for a long time. He says it gives you the PC experience without the management headaches, because the Rays are just displays. They don't run anything. They display output and take input (they have a keyboard and mouse), but all processing takes place on a server. Users can run desktop apps. in safety, because the security environment is centrally managed. He says it allows users to customize their environment, and the Unix platform is more open to collaborative activities than is a mainframe. I haven't seen the exact setup he's talking about, so I don't know how it works. The best I can relate it to is X-terminals. I used to use those in school. They worked on the same principle. IMO the X/Windows environment is not as nice as using a Windows or Mac machine in terms of ease of use, but it's utilitarian. The UI architecture in X has not been praised either. Murphy has spoken in the past about running a Java desktop in this environment. I'm not sure if he'd still endorse that. He may have even said one can run Windows apps. on it, too.
I can't remember. Maybe he was referring to an emulation environment. Those have existed on Unix for many years.

Re: treatment vs. cure

I agree with you, but this has been the state of affairs for a long time. My mom's complained about it since she was young. She always used to tell me that conventional medicine doesn't get to the root causes of problems. More often than not they just cover over the symptoms. She turned to alternative medicine and mental/spiritual therapy for cures for what ailed her and myself. It's seemed to work. It does require that you educate yourself on the alternative medicine stuff. In some areas there are knowledgeable people who can help, but otherwise you just have to get interested and read up on it. I have turned to conventional medicine sometimes when I have a serious condition that requires immediate help. There have been a few times in my life where I've gotten really bad eye infections. Sometimes conventional medicine is just better, and that's fine. Use what works. Sometimes the natural remedies you can get at the store don't work too well, but it could be I'm just going for the wrong solution. It goes back to education about how your body works. I've gotten most of my knowledge about this stuff from my mom. I haven't been as into it myself.

[i]In summary, though, what was under control at one point was the mosquito population itself, not the malaria disease.[/i]

Yeah, that's what I meant.

[i]If I am not simultaneously evaluating something within the greater context, and looking to add value in a way in which the whole is more than the sum of the parts, I need to be. And not just with IT. I hope this makes sense, it is a topic which I am used to thinking about but not used to verbalizing.[/i]

Mmm...yeah. It makes sense. :)

Justin James

"But the cycle will repeat, as unlikely as it seems now." I agree, but I also think that we spend most of our time in "innovation off" mode. Mainframes ruled for decades, and the client/server revolution was only a few years old when in devolved into "Mainframe 2.0" in terms of rick aversion. The Web revolution stopped taking risks ages ago; it is really telling that the "great innovator" Google never enters a market until someone else has proved that money can be made in that market. "Standards bring stability to the platform, but they stifle new ideas." Yes, yes, YES. It is so refreshing to see someone else who "gets" this. Standards are great, and form a framework for people to relate the discussion. But they limit the discussion. For example, if I invent a word processing document that only has paragraphs, bullet lists, and fonts, an application adhering to that standard is limited to implementing only those features... any inventiveness or innovation much violate the standard, unless the standard defines a way to define new feature (like CSS allows HTMl writers to define how new classes look). When the push on Web browser vendors became "standards compliance", what passed for "innovation" was tabbed browsing. 10 years into the "Standard Compliance" push, and tabbed browsing is the most "innovation" we can have in our browsers while still being compliant? "As far as the societal de-emphasis on R&D, I don't know what's driving that. It could be the same thing: a desire for stability rather than the chaos innovation brings." This is the principle that drives most of the decisions in business today. Risk aversion. "In fact it seems apparent to me that we're going back to an updated version of the old mainframe model." We *have* to go back to the mainframe model for business. As Paul Murphy said recently, PCs are basically the most expensive, least reliable green screens out there. If you want the security and reliability that modern businesses need, you can't allow PCs on the network unfettered access to the Internet or allow users to install programs or even touch the local hard drive at all; you may even consider re-imaging the PCs weekly automatically; you will force all storage to happen on the network where you can guarantee a virus scan and backups and RAID and who accesses it and who is allowed to access it and so on. And you know what? The "Web revolution" does more to accomplish this than anything else. It is a lot easier to centralize storage, eliminate installs, etc. when the entire application is on the server, and only the presentation layer is sent to the client, and with a declarative system at that. "Something that's bothered me for years is hearing about how kids are being medicated for ADHD, and now bipolar disorder." To keep from getting *too* off topic (in public), I highly recommend the book "Toxic Psychiatry" on the topic for more details on this. It has some great information in it. My experences are ismilar to yours. It actually seems to be common within the IT industry. "Apparently there are treatments for some cancers, like with AIDS, where people can now at least "live with" the disease, as opposed from suffering an untimely death from it." Yup, same situation with the psychiatric stuff too. They can treat the depression (schizophrenia, etc.), but not cure it. The problem is, "treatments" handle the consequences, and with minimized consequences comes less motivation to cure the problem. 
It seems like far too much of our medical research is focused on "treatments". I try not to be "conspiracy theory" about it, but "treatments" provide a perpetual revenue stream while "cures" do not...

"As for malaria in developing countries, that gets into a whole other topic that we may not want to discuss."

We probably don't want to discuss it, at least not here. :) In summary, though, what was under control at one point was the mosquito population itself, not the malaria disease.

"I'm intrigued by your closing statement. I was just referring to technology when I talked about architecture that scales, but you think it applies outside of that. I don't have a clue. Could you elaborate?"

Sure can. :) I believe that, just as every development project fits into the architecture of the computer system and the ecology of the other programs (Office and the various server techs associated with it immediately come to mind), the same can be said of my actions. Nearly everything I do has a spiritual/moral/ethical principle that it can be measured against (which relates to its properties on a spiritual plane, how it affects my personal conception of doing the right thing, and how it affects others, in that order). If what I am doing does not work towards the enlargement and enrichment of my spiritual life, does not agree with what I consider to be right, and does not affect you in a positive (or neutral at the very least) way, I should not be doing it. For too long, it was easy for me to be rather self-centered, and take the stance of, "it benefits me and doesn't overtly harm anyone else, so why shouldn't I?" as a guideline to action. Somewhere along the way, I realized how narrow this vision was, and it had to change. It basically meant the adoption of one form of Kant's Categorical Imperative for me (always treat others as ends unto themselves, not merely as means). And that makes it extraordinarily similar to what you said about technology. If I am not simultaneously evaluating something within the greater context, and looking to add value in a way in which the whole is more than the sum of the parts, I need to be. And not just with IT. I hope this makes sense; it is a topic which I am used to thinking about but not used to verbalizing. J.Ja
