What To Do About The Sorry State Of Web Development

A commenter on a previous article (The Sorry State Of Web Development) made a good point: I put out a lot of negativity without offering anything constructive in return. Well, I’m going to rectify that mistake.

Here is what I think needs to be done to improve the Web, as far as programming goes. I admit, much of it is rather unrealistic considering how much inertia the current way of doing things already has. But just as Microsoft (eventually) threw off the anchor of the 640 KB barrier for legacy code, we need to throw off the albatrosses around the neck of Web development.

HTTP

HTTP is fine, but there needs to be a helper (or replacement) protocol. When HTTP was designed, no one imagined needing anything but a connectionless, stateless protocol. Too many people are layering stateful systems that need to maintain concurrency or two-way conversations on top of HTTP. This is madness. These applications (particularly AJAX applications) would be much better served by something along the lines of telnet, which is designed to maintain a single, authenticated connection over the course of a two-way conversation.
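As a minimal sketch of the kind of persistent, two-way channel I mean, here is a stateful TCP service in TypeScript on Node (the port, the token, and the wire "protocol" are all hypothetical illustrations, not a proposal for the actual replacement):

    import * as net from "net";

    // One long-lived connection per client: the server keeps the client's
    // state for the life of the socket, with no cookies or session IDs.
    const server = net.createServer((socket) => {
        let authenticated = false; // per-connection state, held server-side

        socket.on("data", (chunk) => {
            const message = chunk.toString().trim();
            if (!authenticated) {
                // Treat the first message as a (hypothetical) credential.
                authenticated = message === "secret-token";
                socket.write(authenticated ? "WELCOME\n" : "DENIED\n");
                return;
            }
            // From here on, both sides can talk whenever they like.
            socket.write(`ECHO ${message}\n`);
        });
    });

    server.listen(9000);

Contrast that with HTTP, where every request has to re-establish who you are and where the conversation left off.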

HTML

HTML is a decent standard, but unfortunately, its implementation is rarely standard. Yes, I know Firefox is great at it, but its penetration still "isn’t there" yet. More importantly, while Firefox is extremely standards-compliant itself, it is still just as tolerant of non-standard code as Internet Explorer is. If Internet Explorer and Firefox simply rejected non-standard HTML, there would be no way for a web developer to put out this junk code, because their customer or boss would not even be able to look at it. Why am I so big on HTML compliance? Because the less compliant HTML code is, the more difficult it is to write systems that consume it. Innovation is difficult when, instead of being able to rely upon a standard, you need to take into account a thousand potential permutations of that standard. This is my major beef with RSS: it allows all sorts of shenanigans on the content producer’s end to make things "easy" for the code writers, which makes it extraordinarily difficult to consume in a reliable way.

When developers are allowed to write code that adheres to no standard, or a very loose one, the content loses all meaning. An RSS feed (or HTML page) that is poorly formed has no context, and therefore no meaning. All the client software can do is parse it like loose HTML and hope for the best.
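To see what strictness buys you, consider this hedged browser-side sketch: the same malformed markup that a tag-soup HTML parser silently "repairs" is flatly rejected when parsed as strict XML:

    // Runs in a modern browser; the markup is deliberately malformed.
    const junk = "<p>unclosed paragraph<b>mismatched</p></b>";

    // Parsed as HTML, the parser guesses and always "succeeds".
    const lenient = new DOMParser().parseFromString(junk, "text/html");
    console.log(lenient.body.innerHTML); // some repaired guess at intent

    // Parsed as strict XML, non-well-formed input yields an error document.
    const strict = new DOMParser().parseFromString(junk, "application/xhtml+xml");
    console.log(strict.getElementsByTagName("parsererror").length > 0); // true

A consumer facing the strict parser knows exactly what it is getting; a consumer facing the lenient one is reduced to guessing right along with it.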

JavaScript

This dog has got to go. ActiveX components and Java applets were a good idea, but they were predicated on clunky browser plug-ins, slow virtual machines, and technological issues that made them (ActiveX, at least) inherently insecure. The problems with JavaScript are many, ranging from the interpreters themselves (incompatible across browsers, poorly optimized, slow) to the language itself (weakly typed, pseudo-object-oriented, lacking standard libraries) to the tools for writing it (poor debugging, primarily). JavaScript needs to be replaced by a better language; since the list of quality interpreted languages is pretty slim, I am forced to recommend Perl, if for nothing else than its maturity in both the interpreter and the tooling. Sadly, Perl code can quickly devolve into nightmare code, thanks to those implicit variables. They make writing code a snap, but debugging is a headache at best, when $_ and @_ mean something different on each and every line, based on what the previous line was. Still, properly written Perl code is no harder to read and fix than JavaScript, and Perl already has a fantastic code base out there.

Additionally, the replacement for JavaScript needs to be properly event-driven if it is ever to work well in a web page. Having a zillion HTML tags running around with "onMouseOver()" baked into the tag itself is much more difficult to fix (as well as completely smashing the separation of logic and presentation, which I hold to be the best way of writing code) than having TagId_onMouseOver() in a central script block, as sketched below.
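Here is a rough sketch of that separation in TypeScript (the element id and the handler's behavior are hypothetical): the markup stays pure presentation, and the logic attaches itself from one place:

    // Instead of <img id="logo" onMouseOver="..."> baked into the tag,
    // the behavior is defined and wired up in one central script block.
    function logo_onMouseOver(event: MouseEvent): void {
        (event.target as HTMLElement).style.opacity = "0.5";
    }

    document.getElementById("logo")?.addEventListener("mouseover", logo_onMouseOver);

When the behavior needs to change, there is exactly one place to change it, and the designer never has to touch it.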

Application Servers

The current crop of application servers stink, plain and simple. CGI/Perl is downright painful to program in. Any of the "pre-processing" languages like ASP/ASP.Net, JSP, and PHP mix code and presentation in difficult-to-write and difficult-to-debug ways. Java and .Net (as well as Perl, and the Perl-esque PHP) are perfectly acceptable languages on the backend, but the way they incorporate themselves into the client-to-server-to-client round trip is currently unacceptable. There is way too much overhead. Event-driven programming is nearly impossible. Ideally, software could be written with as much of the processing as possible done on the client, with the server only being accessed for data retrieval and updates.
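As a sketch of that division of labor (the /api/orders endpoint and the Order shape are hypothetical), the server hands over nothing but data, and all the processing and rendering happen on the client:

    interface Order { id: number; customer: string; total: number; }

    async function showOrders(): Promise<void> {
        // The round trip carries only data: no layout, no markup.
        const response = await fetch("/api/orders");
        const orders: Order[] = await response.json();

        // Filtering, sorting, and rendering are all client-side work.
        const rows = orders
            .filter((o) => o.total > 100)
            .sort((a, b) => b.total - a.total)
            .map((o) => `<li>${o.customer}: $${o.total}</li>`)
            .join("");

        document.getElementById("orders")!.innerHTML = `<ul>${rows}</ul>`;
    }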

The application server should also be able to record extremely granular information about the user’s session, for usability purposes (What path did the user follow through the site? Are users using the drop-down menu or the static links to navigate? Are users doing a lot of paging through long data sets? And so on). Furthermore, the application server needs to have SNMP communications built right into it. You can throw all the errors you want into a log, but it would be a lot better if, when a particular function kept failing, someone were notified immediately. Any exception that occurs more than, say, 10% of the time needs to be immediately flagged, and should maybe even cause an automatic rollback (see below) to a previous version so that the users can keep working while the development team fixes the problem.
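Here is a toy sketch of that 10% rule (the threshold, the notify hook standing in for an SNMP trap, and the rollback hook are all hypothetical):

    class ExceptionMonitor {
        private calls = 0;
        private failures = 0;

        constructor(
            private threshold: number,             // e.g. 0.10 for the 10% rule
            private notify: (msg: string) => void, // stand-in for an SNMP trap
            private rollback: () => void           // stand-in for auto-rollback
        ) {}

        record(failed: boolean): void {
            this.calls++;
            if (failed) this.failures++;
            // Wait for a minimum sample so one early failure does not trip it.
            if (this.calls >= 20 && this.failures / this.calls > this.threshold) {
                this.notify(`failure rate ${((100 * this.failures) / this.calls).toFixed(1)}%`);
                this.rollback(); // fall back to the last known-good version
            }
        }
    }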

Presentation Layer

The presentation layer needs to be much more flexible. AJAX is headed in the right direction with the idea of updating only a small portion of the page with each user input. Let’s have HTML where the page itself gets downloaded once, with all of the attendant overall layout, images, etc., and have only the critical areas update when needed. ASP.Net 2.0 implements this idea completely server-side with the "Master Page" system; unfortunately, it is only a server-side hack (and miserable to work with as well, since the "Master Page" cannot communicate with the internal controls without doing a .FindControl), and updates to the page still cause postbacks. I would like to see the presentation layer have much of the smarts of AJAX built in; this is predicated on JavaScript interpreters (or better yet, their replacements) getting significantly faster and better at processing the page model. Try iterating through a few thousand HTML elements in JavaScript, and you will see what I mean.
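The pattern I want built in looks roughly like this (the fragment URL and the target id are hypothetical): the page shell downloads once, and only the region that changed ever crosses the wire again:

    async function refreshRegion(targetId: string, url: string): Promise<void> {
        // Only the fragment travels; the layout and images never re-download.
        const html = await (await fetch(url)).text();
        document.getElementById(targetId)!.innerHTML = html;
    }

    // e.g. update the headlines without a full postback:
    refreshRegion("news-panel", "/fragment/news");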

The presentation layer also needs to do a lot of what Flash does, and make it native: vector graphics processing, for example. It also needs a sandboxed, local storage mechanism where data can be cached (for example, the values of drop-down boxes, or "quick saves" of works in progress). This sandbox has to be understood by the OS to never contain anything executable or trusted, for security, and only the web browser (and a few select system utilities) should be allowed to read from or write to it.
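As a rough sketch of the cache half of that idea, using the browser’s localStorage as a stand-in for the sandboxed store I am describing:

    // Data only: nothing in this store is ever executed or trusted.
    function quickSave(draftId: string, text: string): void {
        localStorage.setItem(`draft:${draftId}`, text);
    }

    function quickRestore(draftId: string): string | null {
        return localStorage.getItem(`draft:${draftId}`);
    }

    // Save as the user types; restore if the session dies mid-edit.
    quickSave("article-42", "Here is what I think needs to be done...");
    console.log(quickRestore("article-42"));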

Tableless CSS design (or something similar) needs to become the norm. That way, client-side code can determine which layout system to use based upon the intended display target (standard computer, mobile device, printer, file, etc.). In other words, the client should be getting two different items: the content itself, and a template or guide for displaying it based upon how it is intended to be used. Heck, this could wipe out RSS as a separate standard; the consuming software would just display the content however it sees fit, based upon the application’s needs. This would also greatly assist search engines in accurately understanding your website. The difference (to a search engine) between static and dynamic content needs to be eradicated.
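One hedged sketch of picking the template from the display target (the stylesheet paths are hypothetical); the content never changes, only the guide for displaying it:

    // Same content, different template: choose a layout per display target.
    function selectLayout(): void {
        const link = document.createElement("link");
        link.rel = "stylesheet";
        link.href = window.matchMedia("(max-width: 480px)").matches
            ? "/layouts/mobile.css"  // narrow devices: single-column template
            : "/layouts/screen.css"; // everything else: the full layout
        document.head.appendChild(link);
    }

    selectLayout();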

URLs need to be cleaned up so that bookmarks and search results return the same thing to everyone. It is way too frustrating to get a link from someone that produces a "session timeout" error or a "you need to login first" message, and that significantly hurts a website’s usability. I actually like the way Ruby on Rails handles this end of things; it works well, from what I can see.
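The Rails idea, sketched with Express in TypeScript (the route and the response are hypothetical): the URL names the resource itself, not a session, so the same link works for everyone:

    import express from "express";

    const app = express();

    // /articles/42 means article 42 for every visitor, bookmarked or not.
    // No session state is encoded in the URL, so links never go stale.
    app.get("/articles/:id", (req, res) => {
        res.send(`Article ${req.params.id}`); // real code would fetch and render
    });

    app.listen(3000);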

Development Tools

The development tools need to work better with the application servers and design tools. Graphics designers need to see how feasible their vision will be to implement in code. They should also be able to see how their ideas and designs affect the way the site handles; if they can see, up front, that the banner they want at the top may look great on their monitor but not on a wider or narrower display, things will get better. All too often, I see a design that simply does not work well at a different resolution than the one it was aimed at (particularly a fixed-width page that wastes half the screen when your resolution is higher than 800x600).

Hopefully, these tools will also be able to make design recommendations based upon usability engineering. It would be even sweeter if you could pick a "school" of design thought (for example, the "Jakob Nielsen engine" would always get on your case for small fonts or grey-on-black text).

These design tools would be completely integrated with the development process, so that as the designer updates the layout, the coder sees the updates. Right now, the way things are done, with a graphic designer doing things in Illustrator or Photoshop, slicing it up, and passing it to a developer who attempts to transform it into HTML that resembles what the designer did, is just ridiculous. The tools need to come together and be at one with each other. Even the current "integrated tools" like Dreamweaver are total junk. It is sad that after ten years of "progress", most web development is still being done in Notepad, vi, emacs, and so forth. That is a gross indictment of the quality of the tools out there.

Publishing

The development tools need a better connection to the application server. FTP, NFS, SMB, etc. just do not cut it. The application server needs things like version control baked in. Currently, when a system seems to work well in the test lab and then problems crop up when it is pushed to production, rolling back is a nightmare. It does not have to be this way. Windows lets me roll back with a System Restore, or uninstall a hot-fix or patch. The Web deployment process needs to work the same way. It can even use FTP or whatever as the way you connect to it, as long as the server invisibly re-interprets the upload and puts it into the system. Heck, it could display "files" (actually the output of the dynamic system) and let you upload and download them invisibly, the same way a document management system does. This system would, of course, automatically add the updated content to the search index, site map, etc. In an ideal world, the publishing system could examine existing code and recode it to the new system. For example, it would see that 90% of the HTML code is the same for every static page (the layout), with only the text in a certain area changing, and it would take those text portions, put them in the database as content, and strip away the layout. This would rock my world.
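A toy sketch of that versioned-deploy idea (every name here is hypothetical): each push becomes an immutable version, and rolling back is nothing more than re-pointing "live" at an older one:

    interface Deployment { version: number; files: Map<string, string>; }

    class PublishingServer {
        private history: Deployment[] = [];
        private current = -1;

        // Every upload becomes a new immutable version, like a VCS commit.
        deploy(files: Map<string, string>): number {
            this.history.push({ version: this.history.length, files: new Map(files) });
            this.current = this.history.length - 1;
            return this.current;
        }

        // Rolling back is instant: point "live" at an earlier version.
        rollback(version: number): void {
            if (version < 0 || version >= this.history.length) {
                throw new Error("no such version");
            }
            this.current = version;
        }

        serve(path: string): string | undefined {
            return this.history[this.current]?.files.get(path);
        }
    }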

Conclusion

What does all of this add up to? It adds up to a complete revolution in how we do things on the Web. It takes the best ideas from AJAX, Ruby on Rails, the .Net Framework, content management systems, WebDAV, version control, document management, groupware, and IDEs and combines them into one glorious package. A lot of the groundwork is almost there, and it can be laid on top of the existing technology, albeit in a hackish and kludged way. There is no reason, for example, why SNMP monitoring could not be built into the application server today, or version control, or document management. The system that I describe would almost entirely eliminate CMSs as a piece of add-on functionality. The design/develop/test/deploy/evaluate cycle would be slashed by a significant amount of time. And the users would suffer much less punishment.

So why can’t we do this, aside from entrenched ideas and existing investment in existing systems? I have no idea.

J.Ja

About

Justin James is the Lead Architect for Conigent.
