I want to send my thanks out to all of the readers and commenters who have spent so much of their time posting comments, debate, criticism, and more on my recent string of blog posts. You guys are the greatest, and I think it is really awesome that you are putting as much time and effort into discussing this blog as I spend writing it, if not more. My recent post regarding the relationship between users and developers started a great thread of comments. One comment, from gbentley, provoked quite a bit of thought in my mind:
"One of the respondents made the comment that software only exists to make tasks easier. Disagree vehemently. There are whole ranges of applications that have only become possible due to cheap fast computers. They weren't done at all before. Simulation, various kinds of graphics, deep analysis of historic data, etc." - gbentley
This quote contains two commonly accepted ideas. The first is that "software only exists to make tasks easier." The second idea is "[t]here are whole ranges of applications that have only become possible due to cheap fast computers."
Are these two ideas actually in opposition? Is either of them even true?
"Software only exists to make tasks easier."
I will call this the Assistant Argument. It is so close to being a truth that requires no questioning, but it has one really big problem: the word "only". Without "only", this is a "duh" statement; of course software exists to make tasks easier. But to claim that software's only purpose is to simplify tasks, without enabling new ones? That is a pretty bold claim. Still, it is easy to see why people hold this idea. Email replicates the postal system of letters and packages, spreadsheets replicate ledger books, word processors replicate typewriters (which themselves merely replicate pen and paper), and so forth. Indeed, with the exception of software that is directly related to computers, there are very few pieces of software which do not replicate an existing task that can be done without a computer (the qualifier refers to things like development tools or backup software; without computers there would be no code to write and nothing to back up).
"There are whole ranges of applications that have only become possible due to cheap fast computers."
I call this idea the Enabler Argument. This claim seems much more reasonable than the first one. It does not deny that software simplifies many tasks, but simply states that software and computers allow work to be done that could never be done before. The claim has two possible variations. The literal variation boils down to this: there are certain things that simply cannot happen without a computer. That is pretty hard to prove; even the examples given by gbentley are all things that someone with amazing patience and/or dexterity, or a huge team of people, could do with nothing more than paper and pencil. From there, we have the pragmatic variation: there are certain tasks that, while they could be performed without a computer, it is unrealistic to expect to be done without one. gbentley's examples, I believe, fall firmly within this variation, and quite nicely at that.
It is actually impossible to evaluate either of these claims without an understanding of what a computer is.
For example, I could state that a ledger book is a form of computer, albeit a mechanical one; it is a mechanical database where all C/R/U/D (Create/Read/Update/Delete) operations are performed by the user with a pencil, and all calculations must be performed by the user, but it is a database all the same. With that statement, we have essentially said that even a checkbook is a computer, and that, more or less, even the most basic accounting is impossible without either a computer or superhuman memory and math skills.
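To make the analogy concrete, here is a minimal sketch (purely illustrative; the class and field names are my own invention, not from any real accounting system) of a checkbook modeled as a tiny CRUD "database" where the user's pencil plays the part of the database engine:

```python
# A paper checkbook modeled as a CRUD "database".
# Every operation here is something a ledger user does by hand.

class Checkbook:
    def __init__(self):
        self.entries = {}   # entry id -> (payee, amount)
        self.next_id = 1

    def create(self, payee, amount):
        entry_id = self.next_id
        self.entries[entry_id] = (payee, amount)
        self.next_id += 1
        return entry_id

    def read(self, entry_id):
        return self.entries.get(entry_id)

    def update(self, entry_id, payee, amount):
        self.entries[entry_id] = (payee, amount)

    def delete(self, entry_id):
        del self.entries[entry_id]

    def balance(self, starting=0.0):
        # The calculation the ledger user must perform mentally:
        # subtract every recorded amount from the starting balance.
        return starting - sum(amount for _, amount in self.entries.values())
```

The point is not that this code is useful; it is that every one of these operations exists, unchanged, in a paper checkbook.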
Or one could interpret computer to mean any electronic device using a binary storage and logic system to represent arbitrary data. This seems more appropriate; anything that we accept as a computer is a binary device. Or is it? Scientists are working very hard to develop computers that make use of quantum states, where the answer set is expanded beyond a simple on and off. Other scientists are working on computers incorporating organic matter, which requires an analog interface. Even a hard drive is hardly a binary device. It records magnetic states, and magnetism isn't an on-or-off property; it is a relative one. A hard drive is really reading the relative strength of tiny magnetic fields, and if a field has more than a certain amount of strength, it is considered on. So it is hard to say that the use of binary storage or processing is what qualifies a device as a computer. We could also try a derivative argument, that a computer needs to have transistors, but again, there are established computing devices without transistors, with more on the way.
Luckily for us, the Enabler Argument makes it clear what kind of computer it means: fast and cheap. That rules out anything older than, say, a four-year-old PC. A checkbook, an abacus, and the like should be unable to perform any task faster than a piece of modern computer hardware, given the right software. And therein lies a major problem: the software.
This takes us back to where the whole argument began in the first place: the relationship between software users and software developers. We can safely ignore the folks who think we are on the verge of having the two groups suddenly overlap. I've been told countless times that Application XYZ is going to let users create their own code without a developer, with a simple drag-and-drop interface (or plain English queries, or natural language queries, or a GUI, or a WYSIWYG interface, or whatever)! Luckily for me and my career, this magic application has never been written, nor do I see it on the near horizon.
For one thing, it would require an entirely new way of writing and maintaining computer code. I have been thinking about this quite a bit lately, but I am unable to share my thoughts at the moment for a variety of reasons. Hopefully I will be able to share them in the near future.
Another problem is the way developers think about how users perform their tasks. This also relates to the problem of user interfaces. When writing code, developers ask users certain key questions to help them translate the user's business needs into Boolean logic and procedural code (call it what you will, but even event-driven and OOP code is actually procedural code once the event is fired or the method executed). Developers ask questions like:
If this value is higher than the threshold you have laid out, should I reduce it to its maximum allowed value, or produce an error message?
Is this data going to be in a list or a spreadsheet format (when a developer says spreadsheet to a user, they really mean database, but users understand the idea of spreadsheets better than databases)?
Under what circumstances do I need to highlight this data in the output stream?
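The first of those questions translates directly into code. Here is a sketch of what the answer becomes; the threshold value and the strategy names are my own assumptions, which is exactly why the developer has to ask the question in the first place:

```python
# The "reduce it or raise an error?" question, turned into code.
# THRESHOLD and the strategy names are invented for illustration;
# only the user knows which behavior their business actually needs.

THRESHOLD = 100

def apply_threshold(value, strategy="clamp"):
    if value <= THRESHOLD:
        return value
    if strategy == "clamp":
        return THRESHOLD  # silently reduce to the maximum allowed value
    # strategy == "error": refuse the value and tell the user
    raise ValueError(f"value {value} exceeds threshold {THRESHOLD}")
```

Either branch is trivial to write; knowing which one the user needs is the entire job.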
These questions need to be asked to code business logic, and there is nothing inherently wrong with them. Where this model of specification creation fails is in not understanding what the user's actual needs are. We are finding out how they do their job, but not why. A great example of this is the Microsoft Office Macro Recorder. Sure, the end result may be a bunch of VBA statements, so you could claim that the MOMR turns ordinary users into developers. But look at the code it produces; it creates code that replicates every stray mouse click, key press, and so on. It replicates how the user does their job. This is why MOMR code is so useless for all but the most trivial tasks: it has no understanding of the user's intentions. If the user selected a particular column and then made it bold, the macro simply says "select column C and then make it bold." It isn't even clever enough to reduce that to "make column C bold" and eliminate the selection step! Running that macro while the user has another column selected loses the user's prior selection. A developer will rewrite the code appropriately. But even then, the software has no idea why the user made column C bold. The code will make column C bold every time it is run, even though column C may only have been the correct column at the moment the macro was recorded. If that decision was based upon the contents of the column, or perhaps the nearby columns, the macro does not accomplish the user's goals in the least.
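The contrast is easy to see in code. Below is a sketch over a toy spreadsheet model (a plain dictionary; no real Office API is involved) of what the recorder captures versus what the user meant:

```python
# Toy spreadsheet: a dict with a current "selection", a set of
# bold columns, and the data in each column. Hypothetical model,
# not the real VBA object model.

def recorded_macro(sheet):
    # What the recorder replays: the literal clicks.
    sheet["selection"] = "C"            # clobbers whatever was selected
    sheet["bold"].add(sheet["selection"])

def intent_driven(sheet, is_poor_sales):
    # What the user meant: bold any column with poor numbers,
    # without disturbing the current selection.
    for col, values in sheet["columns"].items():
        if is_poor_sales(values):
            sheet["bold"].add(col)
```

The recorded version is frozen at the moment of recording; the intent-driven version keeps working when the poorly performing column is B next month instead of C.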
This is where the developer fits in. The developer sits down with the user and asks questions like, "Under what circumstances do we make column C bold?" Even better, the developer will ask, "Under what circumstances do we make any particular column bold?" The best question to ask, though, is, "Why are we making columns bold?" That is when we stop programming to meet the how and start programming to meet the why. An amazing thing happens at this point in the conversation: the developer may learn that the user's why is not best served by the how that was requested in the project specifications. The user may say, for example, "Well, we want the column bold, because any bold column indicates poor sales numbers." The developer could turn around and say, "It sounds like what you need out of this is a way to identify underperforming accounts. Would it be better if we created a second tab on the spreadsheet that contained only the underperforming accounts, along with the relevant information to help you see why they are not doing well?" All of a sudden, our job just became a lot more difficult, but at the end of the day, we have a much happier customer.
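The why-driven version of that request might look something like this sketch; the cutoff value and field names are assumptions standing in for whatever the user's real definition of "underperforming" turns out to be:

```python
# Instead of bolding a column, answer the actual question:
# which accounts are underperforming, and by how much?
# The 50.0 cutoff is a placeholder for the user's real criterion.

def underperforming_report(accounts, cutoff=50.0):
    report = []
    for name, sales in accounts.items():
        avg = sum(sales) / len(sales)
        if avg < cutoff:
            report.append({"account": name, "average": avg, "sales": sales})
    # Worst performers first, so the user sees the biggest problems on top.
    return sorted(report, key=lambda row: row["average"])
```

Notice that nothing here mentions bold text at all; the formatting was only ever a proxy for the question the user wanted answered.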
It is at the point when software ceases to be written to replicate the how and is instead written to fulfill the why that we can actually answer the questions that started this article.
Until relatively recently, software creation was merely a how-oriented task. You could throw as many new languages and pieces of hardware as you wanted at the situation, but that is where we were. Only recently has software started becoming a why-oriented task. When software is written only to meet the how, we end up with software that only makes existing tasks easier. When we write software to accomplish the why, we are writing software that allows things to be done that could not have been done without software (meeting either the literal or the pragmatic variation). The user's why is not to slightly alter the color of some pixels; that's the how. The user's why is to remove the red eye from a digital photograph.
When we begin to address the why, we write great software. Bad red eye removal software will turn any small red dot in an image into a black dot after the picture has been taken. Good red eye removal software will hunt only for red dots in areas that are potentially eyes and turn them into black dots. Great software will first find a face-like shape, then search for eye-like shapes, and only then turn the red dots into black dots. The best red eye removal software wouldn't even exist on a computer; it would be in the camera! The camera would alter the flash (length, brightness, maybe a few different flashes using different spectrums) to use as little flash as possible, and maybe even do something to prevent the red eye from ever reaching the lens, let alone the image file.
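The difference between the bad and the good removers can be sketched over a toy image of labeled pixels. This is purely illustrative; the eye regions are handed in as a given, whereas real software would have to detect them, which is the genuinely hard part:

```python
# Toy "image": a grid of pixel labels. Real images are RGB arrays;
# this sketch only shows the difference in decision-making.

def naive_remove(image):
    # Bad: every red pixel anywhere becomes black,
    # including red shirt buttons and red cars.
    return [["black" if px == "red" else px for px in row]
            for row in image]

def eye_aware_remove(image, eye_regions):
    # Good: only red pixels inside a known eye region are changed.
    fixed = [row[:] for row in image]
    for (r, c) in eye_regions:
        if fixed[r][c] == "red":
            fixed[r][c] = "black"
    return fixed
```

The naive version implements the how ("make red dots black"); the eye-aware version is one step closer to the why ("fix eyes, and only eyes").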
That's just one (probably not great) example of what I mean. And this is where we are seeing the user/developer gap. The developers are simply too far away from the users, in so many aspects, that they have a very hard time understanding the why of the software. I am certainly not claiming that the developer needs to know how to do the user's job (though it certainly helps!) or that we should find a way to make tools that allow users to generate software (impossible to do well at this point). I am saying that developers need to work to determine the user's why as well as the how, and figure out how to code to the why as opposed to the how. Some developers (including myself) are lucky enough to be writing software that will only have a handful of users. We are able to have these conversations with our users. Imagine trying to write a word processor that meets the potential why of every potential user! Some users use word processors as ad hoc databases for things like grocery lists. Others use them to replicate typewriters. Others don't even create content; they use a word processor to review and edit content. And so on. From what I have read about Office 12, I think Microsoft has the right basic idea: make the interface present only the options that are contextually relevant to what the user is trying to accomplish. But it still isn't anywhere near where it needs to be. It is still working with the how.
File systems are another great example of a system that meets the how but not the why. When computers were fairly new, the directory tree made sense. The information contained within the directory names and file names themselves was a form of metadata: Documents/Timecards/July 2005.xls is obviously a spreadsheet from July 2005 that is a timecard document. This met the how, which was the filing cabinet metaphor; that is how these things were done for the hundred or so years prior to the computer. The why is identifying and finding my data, which the directory tree structure does not accomplish very well. In reality, the user would be much better served by a system that assigns properties such as Timecard, Document, Spreadsheet, pertinent to July 2005, and so on to a file object. All of a sudden, we can do away with the Timecard directory and have a Timecard group that contains all documents marked as timecards! That is a very powerful change indeed.
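A tag-based lookup like that is simple to sketch; the file names and tags below are invented for illustration, and a real system would of course store this metadata alongside the files themselves:

```python
# Property-based alternative to the directory tree: files carry tags,
# and a "Timecard group" is just a query over those tags.

files = {
    "July 2005.xls": {"timecard", "spreadsheet", "july-2005"},
    "Budget.xls": {"spreadsheet", "budget"},
    "Resume.doc": {"document"},
}

def group(tag):
    # Every file marked with the tag, regardless of where it "lives".
    return sorted(name for name, tags in files.items() if tag in tags)
```

The same file can now belong to the Timecard group, the Spreadsheet group, and the July 2005 group at once, which a single directory path can never express.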
Again, all too often our design and development process gets sidetracked by how and not why. How many times have we as users said something like, "I think a drop down box for that information would be great"? We think a drop down box is best because that's how we saw it done somewhere else for something similar, not knowing that there may be a much better interface widget that we simply were not aware of. As developers, how many times have you heard a customer dictate architecture details to you to the point where you wonder if it would be easier to just teach them a programming language and let them do it themselves? This is because the user/developer conversation is focused on how. As a developer, in the most metaphoric sense possible, it is my job to translate your why into a software how. That is it: writing software that accomplishes what you need to do better than how you are doing it now. And that is the crux of the problem. If the user's current methods are so great, why are they asking me to write software at all? When we begin to address the why instead of the how, we are no longer replicating what may be an inefficient process (or taking what is a great process when performed manually, but inefficient as software), but helping to create a whole new process appropriate for the medium.
A large part of the problem, from my viewpoint, is that programming languages are still how oriented. There is no why in writing software. Some languages are a bit better than others. Perl, for example, with its "do what you mean, not what you wrote" attitude, is a better one. C is a very bad one: C will be more than happy to let you alter a pointer itself instead of the bits it points to, when anyone reading the code would understand that you wanted to alter the data being pointed to. This is at the heart of my complaint with AJAX. People are using AJAX to replicate rich client functionality with tools that were simply never designed to do that. Of course it will be sloppy. I won't go into the technical details at this time; I have been over them a number of times already. But AJAX, and many other Web applications, are replicating what is frequently a poor rich client how to begin with, and compounding it by inventing a why that does not exist. The user does not use a word processor because they need to be able to type from any computer with an Internet connection. Thus, a Web based word processor does not address any actual why, but creates a bad one, then replicates a horrendous how in the process, using bad tools to boot.
I have a lot more thoughts on this topic, and what changes need to occur for developers to become empowered to address the why while not sweating the details of how. Stay tuned.
Justin James is the Lead Architect for Conigent.