Software Development

My Language Choice Conundrum Deepens


At the suggestion of a zillion people and Web sites, I have finally started reading Code Complete by Steve McConnell. The book is really good, and it touches on so many of the topics that I have been writing about lately and that people have been talking about in the forums. One of the interesting items I found in the book was a chart comparing code efficiency relative to C, measured in "statements…

About

Justin James is the Lead Architect for Conigent.

SeoNashir

Ashraful Haque, CEO of http://www.hisoftltd.com, is a fan of your book. He implements code and exercises based on your coding ideas, and he hopes to publish this kind of book.

a_chadili

You cannot measure a programming language's effectiveness only by elegance or the number of lines it takes to write code. I remember APL was a great language for writing elegant, concise code, but good luck trying to understand the code later (the joke back then was that APL is a write-only language :-). What matters for me when picking a language is: First, it should have access to all the machine resources required to design a solution (graphics, IO, etc...). A second criterion of effectiveness is its conciseness and readability (how fast can you understand what the code does?). The third criterion is the code's maintainability (how much code is impacted when a change is required, how nested is it, and how easy is it to find the impacted pieces?). Finally, a language's effectiveness can be measured by its robustness. By this I mean how prone to bugs and accidents is the code? A language can incorporate in its constructs features that help create bug-free code and find bugs easily (e.g., Java can detect certain type-related bugs at compile time). Me, I would replace 20 star programmers (or hackers :-) with a couple of professional developers who create efficient, maintainable, bug-free code.

Mark Miller

I can sympathize with your analysis. Alan Kay has said many times that the way we do software engineering today is like the way the Egyptians built pyramids: thousands of "slaves" or "drones" who do simple tasks, but who end up building huge structures that basically work. It's not efficient but it gets the job done. He compared the pyramids with the building of the Empire State Building to contrast modern engineering with that of the ancient Egyptians. The Empire State Building was built in less than a year, using fewer workers (not the hundreds of thousands necessary to build the pyramids), whereas a pyramid took about 20-30 years to build. Even with a language like Smalltalk he said that while it's more advanced in many ways than the popular languages of today, in the scheme of things (projecting into the future), it's like the invention of the arch. It's more sophisticated, and allows you to build large structures with less building material. It's still ancient compared to modern structural/civil engineering methods though. We've got a long way to go.

In terms of readability, I agree the more verbose languages document more, but in effect they document the "processor", not the process, of a business function. This is the reason the code is nicer to read if the programmer puts comments in the code explaining what the process is. The code doesn't document that too well, usually. I use the term "processor" as an analogy. What's documented is the data flow of the process, the transition of data from one type to another. My own opinion is that the reason this works for many programmers is that they are trained in "This is how the computer/VM works". So everything has to be translated into "How would the computer do this?"

Programming in a dynamic language takes a different mindset. You focus more on "What is the process?" and "What am I really doing?" Often you have to build up some layers to get to this point, but it's much more possible. Dynamic languages are readable in a different way. If used properly they self-document the process of the business function, and hide the technical details of the data flow under the covers. Not to say this technical detail isn't important. I find myself documenting the data flow of types through comments and variable names, but I like the fact that the [i]intent[/i] of the business process shines through. I don't have to comment it much, if at all. It speaks for itself.

You might want to make more of an effort to look at F# or Ruby. I haven't used Perl (read up on it a bit years ago), but I've heard from several people that it's a "write-only" language (i.e., easy to understand at the time it's written, but hard to understand later). Even though I've heard people make the comparison between "dynamic languages" and "scripting languages" I think there is a difference. Modern dynamic programming languages make more of an effort to make some things clear. IMO scripting languages like Bourne shell, AWK, Perl, etc. are even more terse and obtuse. Not all of these languages are created equal in terms of readability. Maybe you know this, but Perl originally became popular because system administrators used to use it as a powerful scripting language for managing processes. I was listening to a podcast today on PowerShell. One of the designers was being interviewed. He had a lot of experience with Unix shell scripting languages.
He said what typically would happen with Unix shell scripting is people would "graduate" up to more powerful scripting languages the more power they needed. They'd start with sh. Then go up to AWK if they needed to, and then finally Perl if they needed something really powerful. My experience with Smalltalk is that it's easy to write readable code. You get enough visual cues that you can see what's going on as long as you understand the language. Ruby is much the same way. As I've said before Microsoft is introducing dynamic language features into C# and VB.Net, particularly with regard to data access, with the Orcas release, set for sometime this year or 2008. So it'll create a mixture of programming with dynamic and static types.

ramacd

FDR said 'the only thing we have to fear is fear itself.' The fear of a collision with a bus has done more damage than the bus ever could. As one who has been "hit by a bus" (I suffered a stroke in '02; as a matter of fact, the first thing my business partner said to me in the hospital was exactly that), the collateral damage was much less than was feared. Working in APL, which I've always figured has at least a 25:1 code reduction ratio (my Code Complete _is_ in storage, so I'm not sure if McConnell is silent on APL or not), I could support clients from my hospital bed, dictating changes over the phone. Our systems were even open enough to allow end-user tweaks, something they would not have dared with VB or even VB.Net. As an aside, I'm amused to see a current thread on the Yahoo extremeprogramming group, something about Conway's Life in Ruby, a one-liner at most.

onbliss

In my experience, I find that maintaining coding conventions and discipline, and having decent design reviews and code reviews, helps the developers and the company in the long run. Choice of language is a little lower on the totem pole.

rmrf

I absolutely abhor the term self-documenting. Source code is a perfect record of what you did, but not what you intended to do. The difference between the two is a defect, and finding those in someone else's code is made more difficult when they drank the self-documenting kool-aid. It's been years since I read Code Complete, but I believe one of his bits of advice is write comments *first*, then write the code. That's sound advice.
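A rough C# sketch of the comments-first habit rmrf describes: the comments were written as the plan, then each was turned into code beneath it. The invoice type and fee logic are made up purely for illustration.

using System;
using System.Collections.Generic;

public class Invoice
{
    public DateTime DueDate { get; set; }
    public decimal Balance { get; set; }
}

public static class LateFees
{
    // Hypothetical example: the three comments inside the loop were written
    // first, as the plan, and the statements were filled in under them later.
    public static void ApplyLateFees(List<Invoice> invoices, DateTime today, decimal feeRate)
    {
        foreach (var invoice in invoices)
        {
            // Skip anything that is not yet past due.
            if (invoice.DueDate >= today) continue;

            // Compute a fee proportional to the outstanding balance.
            var fee = invoice.Balance * feeRate;

            // Record the new balance on the invoice.
            invoice.Balance += fee;
        }
    }
}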

RexWorld

The conclusion at the end of your piece seems to contradict your headline. It sounds as though you've already made a language decision, or are much closer to a decision. It certainly sounds like you're leaning towards one of those "verbose" languages :-) I've spent a lot of my career as the maintenance programmer struggling with other people's code and architecture. As a result I definitely agree with your idea that a more verbose language usually makes it easier to maintain and extend. Those times I had to muck around with Perl it took me forever to figure out what needed to be done. In the end I probably did write a lot fewer lines than if the code had been Java, but it took me much longer to decide what each of those lines of Perl should be.

joe

This is an interesting little post. The bottom line is that highly readable code is better, on average and in "normal" situations, than the greatest hacks. This is also why COBOL is still the most dominant language for business. There's more new COBOL code being written than in any other language outside of C. And before anyone gets all defensive, you should know that Object Oriented COBOL is more OOP than C++, not to mention its power of data manipulation, which is unsurpassed in any other language. Now, it's true that you can write horrible spaghetti code in COBOL. But it's much easier to write readable code in it than in C or Java or especially Perl.

Jaqui

I have to say that I like readable code, where even a newcomer to the workforce can read it AND understand it easily. I just insist on the language being used having fast execution at runtime, which is why Java, VB, VB.Net, C#, and J# all don't make the grade for me. I'm fully aware that C, Objective-C, and C++ take more lines of code for readability than the other languages, but the performance gain of these languages is worth the lower readability in my opinion. A programmer who knows these languages can read most code for any app, unless it was highly optimised by the person who wrote it [reducing the lines of code using operator overloading techniques, which is something I do not agree with]. Even with the extra lines in C, Objective-C, and C++ for readability, they still outperform Java, VB, VB.Net, C#, and J# in speed of execution. This makes the USER more effective at work, which makes the application better.
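To illustrate the operator-overloading trade-off Jaqui raises, here is a small hedged C# sketch (the Money type is hypothetical): the overloaded operator saves a few characters, while the named method spells out what is happening, which is the readability question at issue.

public readonly struct Money
{
    public decimal Amount { get; }
    public Money(decimal amount) => Amount = amount;

    // Terse form: fine for an obvious type like money, opaque for exotic ones.
    public static Money operator +(Money a, Money b) => new Money(a.Amount + b.Amount);

    // Verbose form: the operation is named explicitly.
    public Money Add(Money other) => new Money(Amount + other.Amount);
}

public static class MoneyDemo
{
    public static void Run()
    {
        var subtotal = new Money(100m);
        var tax = new Money(8m);

        var total1 = subtotal + tax;      // fewer characters
        var total2 = subtotal.Add(tax);   // more readable to a newcomer
    }
}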

Tony Hopkinson

Now a report that I attempted to look at an adult/explicit site is logged with the IT nazis. Thanks! :D

rclark

It was patching COBOL programs. We left a whole 200 bytes in the Working Storage Section to patch with. In order to patch the program, you dumped your executable out to binary and started counting bits. When you got to the code section you wanted to patch, you overlaid a BUN (Branch Unconditional) instruction to the Working Storage Section. There, you coded an external call to a subroutine that made the needed changes and, afterwards, did a BUN back to the instruction after your original branch point. Total time to dump, count, branch, code, and return was usually a couple of very tedious hours with very sharp pencils. It worked. It saved lives and dollars, but it is not for the faint-hearted, and you had to test everything to destruction.

Since then, I've learned that you really do get what you pay for. If you pay for a rockstar hacker, you get a rockstar hacker. If you pay for a design team, you get a design team. The rockstar may be able to make one machine sing. The process may be the most efficient possible. But you will never, ever get economy of scale or standards compliance out of a rockstar. And most of the world runs on standards and on economies of scale and just-in-time processes. The best (most accurate) process ever designed for processing transactions does so because it was designed by a team that knows the business. They process one check correctly, verifiably, in the most efficient manner possible. Then they duplicate the process millions of times each day. That is possible because of shake-and-bake programmers, documentation of code, standards, and team players.

Rockstars have their place. You guys keep the rest of us on our toes and push the envelope for radical design theory. But the money and civilization depend on the drudge coders who can translate business logic into reliable code that can be maintained over decades. Most of the rockstar programs have half-lives of fireflies. That is not bad; it encourages change, it encourages innovation, and it has certainly brought about leaps in both hardware and software. But it doesn't process many transactions per second, every second, of every day, of every year for decades at a time. Sadly, even rockstars age and gain experience. Now the community has lost a rockstar and, dare we say it, gained a mature systems analyst. Congrats Justin, you may make it into management at a Fortune 500 company yet.

Mark Miller

Too bad it didn't show a hex keyboard instead. It would've been more realistic. :) I used to know kids in jr. high school who programmed Apple IIs in hex--machine code. Simply amazing. It turned me off to ever programming at the machine level, until I was forced to do it in college. I was amazed. I took a course in assembly language, and while it didn't disappoint me in how much code it took to get anything done (lots), I was amazed at how few bugs my code contained. I expected to produce tons of them, because I thought with all that freedom I was sure to do something wrong with nothing to point it out to me. The revelation for me was that the commands were simple and precise, and this led to fewer bugs, because at each step along the way I could tell exactly what I was doing. Nothing was black box or obscure. Nevertheless I wouldn't choose to write software this way. Thanks for the funny. :)

Justin James

Mark - I see where you're headed with this, and I like it. I believe that in the future, we probably will not be directly writing code as such, but that XAML, Windows Workflow, etc. are where things are headed. Even if it is just an advanced IDE that translates a "wizard" or GUI interface into actual code, it will be much better. For example, the IDE would have a "routine builder" that lets you define the parameters and return values, their min/max values (building a contract and assertions), build logic in a tree or grid as opposed to if/then or switch/case statements, etc. J.Ja PS - Just found your blog a few nights ago... good stuff on there! Congrats, you are the only RSS feed that I subscribe to. :)

Tony Hopkinson

So we should stick with the really fast, terse, only-maintainable-by-one-person code on the grounds that they could get lucky and survive being hit by the bus? Help me out here, I'm missing something. Glad you got through the stroke, but you've got to be aware that it could have been much worse.

Tony Hopkinson

Coding standards are the biggest boon you can have in a development team. They should of course be under constant review by the development team themselves. Nothing is worse than some external set of standards set by some twit who never coded in anger. "You will indent by three spaces" and "You will use Hungarian notation" are usually the first two; you hardly ever see "You will not rely on side effects" or "You will keep global variables to a minimum." Bad standards are worse than no standards.

Tony Hopkinson

I abhor code where comments substitute for readability. As long as the language can use meaningful names and has a decent structure, you should make as much use of that as you can. Self-documenting should be the goal; relying on it wholly, just like relying on comments, is guaranteed to give you the odd funny. I always take "comments, then code" to mean turn the comments into code. If your code can be made readable, comments can and should be minimised. Anyone who does int i; // the number of invoices should be canned. void InvoiceListClass.LoadFromDatabase(CustomerClass Customer) is intent, is it not?
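A small C# sketch of the contrast Tony is drawing; the types are hypothetical, and the point is that the second version needs no comment because the identifiers carry the intent.

public class Customer { }

public class InvoiceList
{
    // The signature alone states what happens and to whom.
    public void LoadFromDatabase(Customer customer) { /* ... */ }
}

public class NamingDemo
{
    public void Run()
    {
        // A comment rescuing a meaningless name:
        int i = 0; // the number of invoices

        // The name documenting itself:
        int invoiceCount = 0;
    }
}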

Justin James

That's the conundrum... I love the "clever" languages, but see too much business sense in the more verbose languages. Luckily, I do not have to use any languages at work, and I can use whatever I want at home. :) J.Ja

Justin James

COBOL was my second language, right after BASIC. I still think it was excellent. It never once stumped me. Boring? Sure. Not many people write books about "super cool COBOL apps". Then again, you won't see books like that about SQL, either. Data processing *is* boring. But as you point out, the world runs on it, and for what it does, it is excellent. J.Ja

onbliss

Though I have not done any benchmark tests, I have read that C/C++ programs execute faster at runtime. In some cases (real-time applications?) it could be a matter of life or death. In the world of application software, how noticeable is the slowness of a Java/.Net application? How much slowness does an end user perceive?

Justin James

Remember the furor over the Barbie doll that would say, "math is tough!" about 10 years ago? For some reason, that's how I feel most people think about pointers. Personally, I have no problem with them, but a large number of people seem to be perplexed and baffled by pointers, which impacts readability. I will say, using pointers adds a layer of thinking to the code ("is it manipulating the pointer, or what the pointer references?") which many folks have a tough time with. But overall, I do agree with you in that the speed of C/C++ is important, because at the end of the day, users are the ones who actually run the software, it isn't like it gets written, compiled, and then sits there for fun unused... J.Ja

Mark Miller

"Real Programmers Don't Use Pascal" http://www.pbm.com/~lindahl/real.programmers.html Quoting from it, all in good fun :) : Some of the concepts in these Xerox editors have been incorporated into editors running on more reasonably named operating systems-- EMACS and VI being two. The problem with these editors is that Real Programmers consider "what you see is what you get" to be just as bad a concept in Text Editors as it is in Women. No, the Real Programmer wants a "you asked for it, you got it" text editor-- complicated, cryptic, powerful, unforgiving, dangerous. TECO, to be precise. It has been observed that a TECO command sequence more closely resembles transmission line noise than readable text[4]. One of the more entertaining games to play with TECO is to type your name in as a command line and try to guess what it does. Just about any possible typing error while talking with TECO will probably destroy your program, or even worse-- introduce subtle and mysterious bugs in a once working subroutine. For this reason, Real Programmers are reluctant to actually edit a program that is close to working. They find it much easier to just patch the binary object code directly, using a wonderful program called SUPERZAP (or its equivalent on non-IBM machines). This works so well that many working programs on IBM systems bear no relation to the original Fortran code. In many cases, the original source code is no longer available. When it comes time to fix a program like this, no manager would even think of sending anything less than a Real Programmer to do the job-- no Quiche Eating structured programmer would even know where to start. This is called "job security".

Jaqui

I was one of those kids in high school programming Apple IIs in hex, and in real mode :D Heck, we used to create fonts as shape-table files [binary code] for fun. I do agree that a hex keyboard would be better, but when my friend sends me an email with the binary one, I'm not going to complain. Glad you knew I was teasing Justin with it. :D (edited for a missing space)

Mark Miller

[i]I believe that in the future, we probably will not be directly writing code as such, but that XAML, Windows Workflow, etc. are where things are headed.[/i] I think so, too. One of the other things Kay said was that he's predicted for a while that languages will eventually advance to "the policy level", something like that. An article you might be interested in is about a project that Kay and some associates have received NSF funding for: "Steps Toward the Reinvention of Programming", at http://irbseminars.intel-research.net/AlanKayNSF.pdf They describe what they've worked on in the past, and the research they'll be working on to improve on it. One thing that was mentioned as a goal was creating a language that can reasonably model specifications. The paper asks the question, "Instead of translating the spec into code, why not just ship the spec?" Quite a tall order! In a podcast I listened to recently on .Net Framework 3.0, the people interviewed predicted a similar scenario in the future. They were down about it, saying that "the days of coding are coming to an end", and predicted basically the same thing: that someday the spec will map directly to a programming language, and you'll be able to just execute the spec. I'm not so down about that, because I think that with the success rate of software projects being so abysmal, it's been holding back the promise of computing. I think moves in that direction would only improve things, leading to greater confidence in software production. I think that can only be a good thing for developers. I guess it's intimidating to some because it moves everything up into the system analyst/architect realm, and we're used to seeing fewer of them than there are programmers. It could have negative effects on tech employment. I don't know. I think in the long run it will help tech employment, because it will allow companies to imagine implementing much larger systems that would've been unthinkable before, just because of the number of people that would've been required to build them. To make an analogy, instead of a huge workforce being used to build one grand edifice (think "pyramids"), that same workforce could instead be used to build an entire city. That's what efficiency in engineering ultimately brings. Even so, what's being described is still programming, in essence. It's not as if the specs will be able to be written in plain English and they'll just run. It'll still be code, because it will have to be formalized, but it'll be at a more abstract level. The picture that comes to mind is that it might be like BNF (Backus-Naur Form), which is used for specifying a language grammar. You're still coding, using symbols and setting rules, but that gets translated into a finished product that does what you specified. [i]Just found your blog a few nights ago... good stuff on there! Congrats, you are the only RSS feed that I subscribe to.[/i] Thanks. I'm flattered. :)

ramacd

If I wanted to build code maintainable by only one person, I would be using Perl. A benefit of using APL is that end-users do not see the danger of a high-powered tool, if it is even there. They just see the problems getting solved, by the end-users themselves. Given that the problems can be dealt with over the phone, there is no time for even a rock star to add gold-plating.

onbliss

More important than coding standards are architectural and design standards. Oh well, they have a fancy name for them - Design Patterns :-) Productivity takes a hit when a developer unfamiliar with them has to understand and decipher them; once the patterns are digested, they just increase the developer's productivity. I have Martin Fowler's refactoring book, and though he uses Java examples, I have learned a lot from them. Some of the principles I have used in VB6 code, and some in Access VBA code modules. And of course with .Net I could find more uses for what he is attempting to tell us.

onbliss

All said and done, comments should add value to the work. If not, it is better that they are not present.

Mark Miller

Most of the language was not bad. I didn't mind its verbosity. It was clear. I programmed in COBOL a bit in college. I was probably using COBOL 85. The thing that tripped me up time and time again and made working with it very painful was the fact that even though you could define records, ALL of the fields within each record were [i]global[/i]. There was very little scoping, and it reminded me a lot of programming in old style BASIC--the kind with the line numbers. I had programmed in BASIC for about 7 years, but I had become spoiled by the structured programming languages I learned in college, like Pascal and C. They were all about scoping variables. I could have local variables and variables inside records/structs that were unique because the name of the record structure was put into consideration, even if the field names were repeated in different structures. You could use the record names to load records in COBOL, but the fields, when referenced, were considered to be somehow separate entities from the record structures themselves. It didn't matter where a field was defined, it had to be named differently from all the other field names in all the other records, otherwise I'd get a "variable redefined" error. It drove me nuts. Once I figured out what was going on, I eventually developed a naming convention for field names, which cleared things up.

Justin James

Java used to be miserable. As in, "in the time this app took to start, I could have re-written it in Lisp" slow. It has gotten better. So has .Net. Both of them take a big hit on app start, since they need to load the VM and a ton of libs. After that, they are more than acceptable... until you get to a heavy resource usage situation. Cranking 50% CPU in either one gets you a lot less bang for your buck than native code. And a lot of calculations, even when you are not doing enough of them to really spike the CPU, are just SLOW. For example, a while ago, I translated the CPAN Soundex module from Perl to .Net. It ran many times slower... I have written simple text transformation apps in VB.Net and then compared them to Perl, and the Perl would be vastly faster than the .Net code (a few seconds to process 50 MB of data vs. a few minutes). In other words, for basic shoving variables around and gluing libs together, either one is fine, but for heavy duty stuff, native code is noticeably better. Where .Net often beats Java is that it is at least acting as a wrapper for the Windows API, which is native code, as opposed to Java, which suffers (or used to, it may no longer be the case) due to its insistence on doing everything in Java. For example, Java *finally* got double buffering on window painting, to eliminate the "grey box" effect that Java apps are famous for. J.Ja

Tony Hopkinson

First you have JIT compiling; you can get round that with a utility called ngen, which will give you a precompiled image. Of course you lose the benefits of JIT at that point, and you have to re-run ngen on each change. Then you have the intrinsic overhead of a built-in, all-things-to-all-men class library, what Jaqui correctly though somewhat cruelly calls bloat. :D The next overhead is the garbage collector, specifically invented to tidy up after messy developers. Personally I think there should be more assistance in the framework to go self-managed; not holding my breath though. Then there's the cost of OO itself: just instantiating the objects, auto-initialising the variables, setting up the method tables, late binding to them. All lovely things, but they aren't free. You do have in .NET the option of using structs (similar to C), which are a value type but with a lot of OO support making them look like classes, and they are much lighter. And of course you can always risk unmanaged C++ code. Not so much of a problem for those of us familiar with lifetime management, but the thought of someone who only has .NET messing about with that gives me the shudders. What sort of performance hit / gain you get, as always, depends.... One of the most noticeable hits in .Net is start-up, when the JIT compiler gets hammered with all the stuff it needs just to start. In a big app, trying to spread out / defer or thread that is definitely recommended. If you want to see an example of how not to do it, fire up SQL 2005 Studio on a 2.8 with 1 gig of RAM when you have Outlook, IE et al going. Bloody 'orrible it is.
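A minimal C# sketch of the struct-versus-class trade-off Tony mentions; Point3 and the loop are hypothetical. A small struct is a value type that lives inline with no per-instance garbage-collected allocation, while the class version puts one object on the managed heap on every pass.

// Value type: allocated inline, no GC work per instance.
public struct Point3
{
    public double X, Y, Z;
    public Point3(double x, double y, double z) { X = x; Y = y; Z = z; }
}

// Reference type: one heap allocation per instance, collected later.
public class Point3Class
{
    public double X, Y, Z;
    public Point3Class(double x, double y, double z) { X = x; Y = y; Z = z; }
}

public static class Hotspot
{
    public static double SumLengths(int n)
    {
        double total = 0;
        for (int i = 0; i < n; i++)
        {
            var p = new Point3(i, i, i);          // no heap allocation here
            // var q = new Point3Class(i, i, i);  // would allocate on every pass
            total += System.Math.Sqrt(p.X * p.X + p.Y * p.Y + p.Z * p.Z);
        }
        return total;
    }
}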

Jaqui

The worst of it is switching windows / tabs in an app; I've seen even menus take 3 to 10 seconds to display, when in a C/C++ app it's nearly instantaneous.

dbmercer

In a similar vein, it seems a lot of people dislike C because memory management has to be programmed. I never saw what the problem was. Even from the standpoint of documentation, if you are allocating memory and it is not obvious where it is getting freed, a simple comment saying where it gets freed is enough to point a support programmer in the right direction. To me, the attraction of garbage collection is not that it obviates the need to program memory management, but that the possibility exists that someone somewhere will one day create a compiler that is so good at memory management that it produces code better than hand-coded memory management. This is kind of similar to the fact that, in the old days, compiled code was considered less efficient than hand-coded assembly, but in time, compilers have developed to the point where they can optimize code better than an assembly language programmer. The big disadvantage, in my mind, to garbage collection is not the inefficiency (which is still a disadvantage), but that programmers no longer understand memory management, and I think it is important. You can, after all, have memory leaks in garbage-collected languages, yet many junior programmers don't really understand what a memory leak is. This is a conundrum, however. I personally like to write Perl, but I hate to read it, so I would never recommend it for large-scale development. The C family of languages (C, C++, Java, Javascript, etc.) are really quite readable, in my opinion, so I would probably advocate the use of one of those. Personally, I'd say if execution speed is a priority, use C or C++. If not, use Javascript (detached from the browser, in Windows Script Host). Their syntaxes are all pretty similar.
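dbmercer's point that garbage-collected languages still leak is worth a concrete sketch. In C# (all types here are hypothetical), the collector only reclaims unreachable objects, so a long-lived event or static cache that is never unsubscribed keeps "temporary" objects alive indefinitely:

using System;

public class Publisher
{
    public event EventHandler Updated;
    public void RaiseUpdated() => Updated?.Invoke(this, EventArgs.Empty);
}

public class Subscriber
{
    private readonly byte[] _buffer = new byte[1_000_000]; // something sizable

    public Subscriber(Publisher publisher)
    {
        // Nothing ever unsubscribes, so the publisher holds a reference to this
        // subscriber (and its buffer) for as long as the publisher itself lives.
        publisher.Updated += OnUpdated;
    }

    private void OnUpdated(object sender, EventArgs e) { }
}

public static class LeakDemo
{
    private static readonly Publisher LongLivedPublisher = new Publisher();

    public static void CreateManySubscribers()
    {
        for (int i = 0; i < 1000; i++)
        {
            // Each Subscriber looks temporary, but every one stays reachable
            // through LongLivedPublisher.Updated, so the GC never reclaims it.
            var leaked = new Subscriber(LongLivedPublisher);
        }
    }
}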

Jaqui

I never heard a thing about that. Pointers are frequently overused. You don't need to have every function in the app writing to the same few bits of data; this is what will cause bad results. Use a pointer only when you are working with files [opening and closing, writing and reading], or when the particular variable is going to have to be altered.

Mark Miller

I would think with today's modern computers the numbers would whiz by so fast no one would be able to keep up with it. Nevertheless, it would look cool. You could impress your friends with the "blinky lights". :)

rclark

On the System/38 front panel, you got to watch the core load in hex LEDs. The System/34 had the same thing, but IBM had the sense to put it behind a side panel. They understood they had a problem when the AS/400 replacement for the System/38 was slow to gain acceptance because, you guessed it, it had no front-panel hex display.

dawgit

:^0 ...and maybe a little wiser. -d

Justin James

I've always liked that piece. I re-read it once a year or so to remind myself of "what could have been" had I been born 20 years earlier. :) J.Ja

Justin James

That cartoon is always a winner in my book. :) J.Ja

melvyn_ingram

I agree with what you say. I have noticed over the years that programs/software have changed from being easy to operate to being loaded with extra add-ons, unnecessary controls, and technical mumbo jumbo. And (as you say), features of the immediate and usually discounted past ways of programming.

Mark Miller

Yes, this is always a problem, and it always gets us in trouble. I think I've been fairly good at not taking management literally. The worst story I can remember was when management took me literally. I worked at a small company where the hierarchy was President, VP of Engineering, and then there was me. The VP did all of the requirements analysis and most of the design. He was a former programmer, so it was easy to communicate with him. He was the bridge between the President's and the customer's desires for what we had to produce, and our understanding of the problem. It was great. Then he left that job, and he was replaced with someone with a title that went something like "Manager of Operations" (can't remember exactly). He claimed he had prior programming experience decades ago. When I used to deal with the VP, I could suggest ideas and he would check me. He would sometimes say no, and he was usually right. What scared me with the new guy is I would act the same way, and he would just sit there and say, "Okay. Let's do that." He assumed I had already done the analysis that the VP used to do. The VP used to care about the details. The new guy didn't. All he cared about was whether we were meeting our schedule and our budget. He used to drive me up the wall talking about costs. I had a modest upbringing, so when he'd throw around figures like $10,000 for something it made me sweat bullets. I didn't want to think about that stuff. I wanted to focus on my design and implementation. I've since adjusted and feel more comfortable talking about the business end of things, even the costs, but it was quite an adjustment.

Tony Hopkinson

Good developers and good business types are very literal. It's the assumption that we are literate in the same language that regularly knocks us both on our ass. First rule of all rules: if a business type tells you how to implement something, ask them what they really want. If a tech type tells you what you want, ask them how you are going to sell it.

Justin James

Mark, that is exactly what my job is, and they hired me because the project was bogged down in the exact problems you describe! In a nutshell, I now spend about 1/3 of my time talking to the business people in their language, 1/3 of my time doing that architecture stuff, documentation, and other ways of turning those business discussions into specifications, diagrams, etc. that technical people can use, and the other 1/3 of the time working with the technical guys directly. It is very interesting to see just how little the business folks understand the technical stuff, and vice versa. A lack of common definitions is such a problem. For example, last week a business person told me that "by law we need to store that data in a separate 'file'". This raised a flag immediately, since the data was going to a database. It took 30 minutes of discussion for us to have a mutual understanding of the issue. What they really meant was, "that data needs to not be displayed to just anyone with access to the record as a whole," and the way they do this on paper is by putting that information in a separate file folder in a cabinet that only certain people have access to. I am fairly sure that if that requirement had been sent to our developers, they would have made a separate database table or instance that used a different data file, just for that data, and still have showed it to everyone. I have come to realise that our developers take everything literally; the whole project is filled with it. Our business folks speak their own language, and intermix words constantly (one day "career portal" means the job site external users see, the next day it means the internal employee-only intranet site, as an example). I can't blame either group. It is who they are and how they think. As a result, getting our developers in direct contact with the business people leads to disaster. When I was doing development roles, I always liked to get on the phone with the user directly to find out what they needed, because the people I had doing this "translation" were business people who knew a few tech words, and they usually made the problem worse. J.Ja

Mark Miller

I don't want to take blame away from the management side of things. I think there have been too many software project management screw-ups to count. I used to blame project failures on this bad management, but I'm starting to see that the technology end has been partly at fault as well. I've kept wishing that business managers and IT teams could understand each other at some basic level. I used to suggest that people training for business degrees take a few low level CS courses, and that people taking CS take a few business courses, just so there was some basis for understanding between them. This is probably wishful thinking though. I don't think it would have any meaning unless it was required in the curriculum. I figure most CS students wouldn't even think of taking a course on marketing, product development, or capital management if left to their own devices. I didn't. I'm realizing that current popular technology requires more translation to machine specs than most organizations can handle competently. There are some successes, but the failures far outnumber them, and frankly I don't know if business managers are really going to budge in becoming more fluent in how software projects currently work. There is room for improvement in the technology end of things though. We as an industry could take the approach of rather than expecting management to come closer to where technology is at, bring the technology of programming closer to where they are at. I understand that COBOL was an attempt to do that, but IMO it's outdated. It has its uses. It does C/R/U/D stuff and report generation well, but I don't think it provides enough flexibility to allow the programmer to describe complex processes in the most effective way.

Justin James

Mark - You are right on the money. Here are some facts:
* Software projects are nearly always over budget and late.
* Methodologies (waterfall, CMMI, agile, etc.) change, but the results do not. Agile just takes out the long-range deadlines that are blown by years, and replaces them with micro deadlines that are missed by days or weeks.
* The frameworks get larger and more massive (from stdio.h to the .Net Framework), each one delivering even more abstraction, first away from the hardware, and now from the OS. But the results are the same.
* Most programmers now use a garbage collected language, reducing seg faults and bad pointer references; memory leaks are still rather pervasive, as someone who will leave an orphaned reference in C will do it in Java or .Net. Still, tons of programs blow up on the memory management end, even with the GC.
* We are currently swapping execution speed for development speed. Yet we still constantly come in over budget and late far too often.
* IT is the only area that I can think of that consistently over-promises and under-delivers, yet still claims to produce ROI. Even "better", IT's ROI is very often (probably usually) impossible to directly measure; instead, we rely upon circumstantial "efficiencies" to measure ROI.
* IT's "efficiencies" often become "inefficiencies". Email became spam. IMs are now distractions. The Web became YouTube and a security hole. Don't even get me started on conferencing tools.
* 2/3rds of ERP and CRM licenses go unused.
* IT is probably the only industry where the cost of installing and maintaining the product typically far outstrips the purchase price.

In other words, IT as an industry is a miserable disappointment. Most "successful" IT projects simply meet the lowered expectations out there. Our development habits, particularly the languages, have really not changed too much. Look at C; C, without including any libraries, cannot do much more than push some primitives around. Java, VB.Net, and C# are the same way. Our common business languages have not improved one whit at their core. The one thing that has not changed is that we are still creating software by generating (via keyboard or tools) plain text files that get turned into bytecode to walk a general purpose CPU through the basic steps of execution. Anyone familiar with debugging will tell you that if everything has changed except one factor, and the results are still buggy, that factor is the root cause. Conclusion? The concept of source code as we know it is fundamentally flawed. The differences between languages are like the differences between hammer types; sure, they all have a special purpose, but for simple nail banging, they are all just as good. But after 50+ years, we are still not sure if a tin can is better at driving nails than a cinder block. Something is very, very wrong with this picture. We need a nail gun. J.Ja

Justin James

That piece by Alan Kay is excellent; I am about halfway through reading it, and I am looking forward to finishing it. It is indeed very similar to what I have been thinking as well. Thanks for the link! J.Ja

Mark Miller

While I said that "in the future" things will be this way I didn't say how far. It may be another 20 years before we get to that point, but I think things are moving in that direction. Legacy software will always slow down this progress, that and hardware design. I've been getting this feeling lately that's like what you're talking about--that those who strive to understand process and translate it into a machine process end up becoming analysts and architects who don't code. It may be economics, perception, and office politics that drives this more than anything else. Analysts and designers are more expensive than developers and since they work on more abstract concepts they are not held as accountable as the developers for mistakes. If they actually had to codify their ideas, that would threaten their position because the computer would demonstrate their shortcomings. Business leaders probably think it's more cost effective to have the analysts create their abstract specs, and let the less expensive workers do what they can to implement it, rather than have the analysts spend time trying to debug their own stuff. I think you missed some of what I was talking about though. I think part of the problem is the way programmers are trained. You said it yourself. The tools are there, but give it to a programmer and they don't know what to do with it. As Dr. Dijkstra used to say, it's all in how the computer is perceived. Is the developer supposed to adapt the idea to the way the computer works, or is the computer supposed to execute the programmer's idea? He believed in teaching students to express their idea to the computer using formal logic, and let the computer figure out how to execute it. Most schools teach students to adapt the idea to the way the computer works. Therefore when you give them a tool that allows them to just express the idea in a formal way, they get confused: "What is this?" And therein lies the conundrum. As you've said you have to take the hand you are dealt--which is developers who don't understand how to program this way. The ones who do are employed as analysts and designers who are not given the opportunity to code. The problem with this whole setup is it's inefficient. People complain about how 75% of all software projects fail, but they continue using methods that perpetuate this trend. If people want to continue down this path that creates resentment in the minds of management towards technology then so be it. We deserve the end result we get. Justifying it by saying "this is always what happens" doesn't make it any better. Secondly, I don't know if you're concerned about getting students educated in computer science, but enrollment in that field has hit the bottom of the barrel since 2001. It's gradually eking its way up again, but the numbers are not impressive. The reason people keep putting these ideas out there is there are alternatives to the current way of doing things. It takes people in positions of responsibility to do something about exploring those alternatives and finding the ones that get the job done well. We should not be satisfied with a way of doing things that produces failure most of the time. I'll be humble and say that what I'm suggesting may not be the answer, but a lot of the failures I hear about have to do with the domain expert saying, "This is what I want," and then the dev. team comes up with something that's not even in the ballpark. Wouldn't it be better if the point of process execution was closer to where the domain expert is at?

rclark

It never ceases to amaze me when otherwise knowledgeable people forecast change. You get more accuracy with a monkey and a dartboard. We have been having this discussion for the last 30 years. And probably will for the next 30. But the one thing that hasn't changed in all that time is the human element. I have mentored many people over the years and I really believe that we have a disconnect between machines and humans. Only a few can bridge that gap. When they do, over time they become analysts. If they don't, they stop at some level below that, or they go on to sales and marketing. 3rd, 4th, and 5th generation languages are an attempt by people who have made the leap to bootstrap people who haven't across the divide. And they are very powerful, but the people who have made the leap don't need them, and the people who do need them don't know what to do with them when they are across in the promised land. It all comes down to critical thinking skills. If you have them, and the wetware to use them, then you can design, program, and implement on any platform, in any language. The real work is in the analysis. Not in the coding, not in the wizards. So bringing out another generation of tools is great, but the ones who will use them to actually produce more than a Crystal Report are the same ones that are using the current tools to solve problems. They help reduce experience differences, but don't do a thing for differences in native ability. So all those people who have jobs now in design and coding will still have jobs in design and coding. Maybe in a higher level language, maybe in a new paradigm, but the one inescapable component will be the human mind.

Tony Hopkinson

I'd use C, or Pascal, or VB, or Python.... Hell, let's use C# and do the job properly. I'm not a user, and neither is JJ; we see the dangers of a bucket load of business-critical code that we can't make head nor tail of without a lot of work. How quickly could you hand over what you do, what's on the cards to do, and what is in progress to another APL developer who is unfamiliar with your organisation? If, as usual, the answer is "not very", one way of addressing this issue and many others in team or even interrupted development is easier-to-read code. A clear, well commented piece of assembly is easier to understand than a bad piece of C#. C# gives you more scope with a lower necessity for annotation, that's all.

Justin James

As you say, standardize things like "minimize side effects" and "always call the initializer the same thing". My code style is so standardized that I can often copy/paste a function from a program written years before into a current one, and it requires zero modification to work correctly. But I'm weird like that. Especially since I use a different style for every language. That is one reason why I hate using Oracle DBs in .Net... Visual Studio names its data access code in a very un-Oracle fashion, and I always get neurotic about which style to use... J.Ja

Tony Hopkinson

I don't know so much about design patterns, but architectural standards are always useful. Just something as simple as the routine on a form that initialises the components from a class always having the same name. Not putting two and a half pages of code in an event handler... It's the little things that give someone new to the code confidence. Introduction to Design Patterns in C# by James W. Cooper is a good start.

Justin James

That is similar to my recollection. My biggest issue with COBOL was those record definitions. I had a small typo in them once, and as a result, I spent months (working 40 minutes per day on a high school project) trying to find the bug. Then again, I learned a great lesson: mysterious, unpredictable behavior can and will be caused by mis-defined variables! J.Ja

Jaqui

Compiled code is faster than interpreted, usually. Perl is a prime example of an exception; it's interpreted and fast. While Java and .Net are slow loading, a large C or C++ app isn't a lot faster on loading; they also have to load a lot of libs and create the app image. [Mozilla, Firefox, and Opera all show this. Load times are similar, yet SeaMonkey and Firefox are native code and Opera is, I think, Java.] But the more complex the app, the slower the load, no matter what language it's written in. It's the action afterwards where the native code apps will kick butt.

Tony Hopkinson

Course it's all relative. Doing in-house stuff at a manufacturing firm (only two years ago), a PII 333 running Win95 with 512M was my power PC. Just about everywhere I've worked I've had a bigger screen, more RAM, etc. Not eye-poppingly better, but usefully so. The really top kit goes to web developers. :D So it looks like they are good when they wodge a 1 MB Flash presentation on the home page.

Justin James

... was never, EVER using XML to store a lot of data. .Net's text parsing is really, really, really slow. I had one app that loaded a 2+ MB XML database on startup (the project is so nerdy and geeky, it is dangerous to mention it... OK, I give up... it was related to deck construction for Magic: the Gathering). Turning off all of the constraint checking with the BeginLoad() method helped a LOT, but even then it was slow. It would have been much faster to use MSDE, even with its huge hit in performance. J.Ja
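For what it's worth, here is a guess at the kind of constraint suspension Justin is describing, assuming the ADO.NET DataSet APIs of that era ("BeginLoad()" is presumably DataTable.BeginLoadData or the DataSet.EnforceConstraints switch); the file path and usage are hypothetical.

using System.Data;

public static class DeckDatabase
{
    public static DataSet Load(string path)
    {
        var ds = new DataSet();

        // Suspend constraint checking while the XML streams in...
        ds.EnforceConstraints = false;
        ds.ReadXml(path);

        // ...then re-enable it once, instead of validating per row.
        ds.EnforceConstraints = true;
        return ds;
    }
}

// Hypothetical usage:
// var decks = DeckDatabase.Load("decks.xml");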

Justin James

"The crowning turd in the water pipe is of course us developer types need and have much higher spec machines than the guys who are going to be using what we make." Tony, you've finally written something I diagree with. Until I built my monster machines at home a few months ago, my experience had been that developers get the same garbage Dell Optiplex as the rest of the folks in the company. I never once worked somewhere that gave developers special computers or computers outside the specs of everyone else in the company. Indeed, at my last job, I had the 2nd slowest computer in the company. J.Ja

Tony Hopkinson

Start-up time has always been something to deal with, and there are some tried and tested methods to alleviate it. JIT compiling just exacerbated it: an, erm, less-than-responsive start-up regime simply got worse. Better still, to cut down on the garbage collector hit you should instantiate the stuff with the longest lifetimes first, which gives you another trade-off. Unfortunately, with class-based environments it's very easy to unknowingly touch a dependency chain and end up having to compile 3/4s of your base application before you see the mouse pointer go to busy. The crowning turd in the water pipe is of course that us developer types need and have much higher spec machines than the guys who are going to be using what we make.

Mark Miller

The most frustrating thing when I started out with .Net 1.0 WinForms was the start-up time. Once the app got going the performance was fine, but there was a significant lag between the time the app was started and when it showed up onscreen. Part of this was that my app accessed a database right at startup and there was no way around it. A few months later I found a webcast from Microsoft that talked about how to optimize WinForms. The suggestions were good and were all about improving "apparent performance". They don't necessarily make the app go faster, but they make it look responsive to the user. One was throwing up a splash screen first thing while the rest of the app initializes. This lets the user know "I heard you. I'm getting the app up now," to prevent the user from getting frustrated and trying to run the app again. Another was putting any lengthy operations in threads. I had some lag between the first screen and the next screen that the app goes to, because the next screen accessed the database to bring up a list. When I put the database access into a thread, the next screen showed up nice and snappy, with the list display being a bit delayed. This was on a 1.4 GHz CPU. I would think with a faster CPU programmers wouldn't have to be as accommodating.
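A rough C# sketch of the second "apparent performance" trick Mark describes, using the WinForms APIs of that era; the form, the list, and LoadCustomerNames are hypothetical stand-ins. The form paints immediately, the slow query runs on a worker thread, and the result is marshalled back to the UI thread with Invoke.

using System;
using System.Collections.Generic;
using System.Threading;
using System.Windows.Forms;

public class CustomerListForm : Form
{
    private readonly ListBox _list = new ListBox { Dock = DockStyle.Fill };

    public CustomerListForm()
    {
        Controls.Add(_list);
        Load += delegate
        {
            // Show the form right away; run the slow database call elsewhere.
            new Thread(LoadListInBackground) { IsBackground = true }.Start();
        };
    }

    private void LoadListInBackground()
    {
        List<string> names = LoadCustomerNames(); // the slow part

        // Controls must only be touched on the UI thread, so marshal back.
        Invoke(new Action(delegate
        {
            foreach (var name in names)
                _list.Items.Add(name);
        }));
    }

    private static List<string> LoadCustomerNames()
    {
        Thread.Sleep(2000); // stand-in for the real database query
        return new List<string> { "Aardvark Ltd", "Borealis Inc" };
    }
}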

onbliss

Well, that is a sizable difference. But if the difference is less than one second, then I do not see any apparent wait time on the part of the user, and it is fine by me.

alaniane

What I don't like about garbage collection is the inability to know when it's going to collect. The documentation will tell you that you can force the garbage collector to collect an object and release its memory immediately by setting the reference to null; however, the profiler tells me that the memory still is not released until the garbage collector decides to release it.
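A short C# sketch of the behaviour alaniane describes, with hypothetical sizes: nulling the reference only makes the array eligible for collection, and the memory typically shows up as reclaimed only after the collector actually runs (forced explicitly here just for the demonstration, which is rarely advisable in production code).

using System;

public static class GcTiming
{
    public static void Demo()
    {
        var buffer = new byte[50_000_000];
        Console.WriteLine("Allocated {0:N0} bytes; heap ~{1:N0}", buffer.Length, GC.GetTotalMemory(false));

        buffer = null;
        // Eligible for collection now, but a profiler will usually still show it.
        Console.WriteLine("Nulled; heap ~{0:N0}", GC.GetTotalMemory(false));

        GC.Collect();
        GC.WaitForPendingFinalizers();
        Console.WriteLine("Collected; heap ~{0:N0}", GC.GetTotalMemory(false));
    }
}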

Tony Hopkinson

The world obfuscation championships are in C. Perl obfuscates in a very understandable way, to those who are capable of obfuscating with it :D

jslarochelle

You don't need to use pointers as much in C++ as you used to with C. You can now pass arguments by reference. If you build your classes properly (copy constructors, assignment operator, ...) you can pass arguments by value. Of course you have to keep in mind the performance hit if you abuse copy and pass-by-value (also the possibility of stack overflow if you get really ridiculous). However, the problem is that many programmers don't take the time to learn to properly use the C++ features that make it a better C. JS

Justin James

Using standard libs for the low-level stuff where pointers are more common helps significantly as well. The "math is tough" thing was a big to-do because the public impression was that the Barbie doll was reinforcing the idea that females are not good at math... a silly reference on my part, caused by posting 10 minutes after I woke up... J.Ja
