Software Development

Is SLOC a valid measure of quality or efficiency?


Yesterday I sat down to start work on a small project. Now, when I say small, I generally mean fewer than 2,000 source lines of code (SLOC). For some people, that is huge -- for other people, that is barely on the radar.

I find it interesting that, since I shifted from being primarily a Perl developer to being primarily a VB.Net developer, my perception of a small project has dramatically changed. When I was working in Perl, 200 SLOC was a small project. My experience has been that it takes between 500 and 1,000 SLOC of VB.Net to approximate 200 SLOC of Perl in terms of overall functionality. The standard response to this (and the usual takeaway) is, "Perl is more efficient than VB.Net in terms of SLOC." I generally agree with that statement, and there is research out there that supports it.

Here's the real question: Is SLOC the best measure of a piece of code?

Analyzing language syntax 

Let's look at two code snippets and ask which one is really better.

Code 1

Dim Obj As MemoryHog = New MemoryHog()
Dim Val As String

Obj.FillWithTonsOfData()
Val = Obj.Property

Console.WriteLine("Val is: " & Val)
Console.WriteLine("Val + 1 is: " & Val + 1)
Console.WriteLine("Val * 1000 is: " & Val * 1000)

Code 2

Dim Obj As MemoryHog = New MemoryHog()

Obj.FillWithTonsOfData()
Console.WriteLine("Val is: " & Obj.Property & vbNewLine & "Val + 1 is: " & Obj.Property + 1 & vbNewLine & "Val * 1000 is: " & Obj.Property * 1000)

By the SLOC measure, Code 2 is significantly "better" than Code 1. But take a closer look under the hood, and Code 1 has a number of advantages. Yes, it uses a bit more memory than Code 2 because of the extra variable Val, but it is easier on the CPU, since it does not keep dereferencing Obj.Property -- introducing that local variable is a common refactoring. The last portion of Code 1 does make three separate Console.WriteLine calls where Code 2 makes only one, but Code 2's single line pays for that with a long chain of string concatenations -- enough of them, in fact, to make the StringBuilder class a viable option. That single line is also a more difficult (although still not "difficult") line of code to understand than the equivalent three lines from Code 1. As you can see, SLOC "efficiency" is hardly applicable, even within the same language.
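To make the StringBuilder point concrete, here is a minimal sketch of that refactoring. It assumes the same hypothetical MemoryHog class used above and, like the original snippets, leans on Option Strict Off for the string/number conversions:

Dim Obj As MemoryHog = New MemoryHog()
Obj.FillWithTonsOfData()

' Read the property once, then build the output without a long chain of & operators.
Dim Val As String = Obj.Property
Dim Output As New System.Text.StringBuilder()
Output.Append("Val is: ").Append(Val).Append(vbNewLine)
Output.Append("Val + 1 is: ").Append(Val + 1).Append(vbNewLine)
Output.Append("Val * 1000 is: ").Append(Val * 1000)
Console.WriteLine(Output.ToString())

It is a few lines longer than Code 2, which again shows how little a raw SLOC count says about the trade-offs involved.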

Clever code

There is also the issue of the "cleverness" of code. I will be honest -- I adore elegant code. An example is some of the tricks you can pull with UNIX pipes. I've also always liked the C convention that anything non-zero counts as "true," which can save a ton of typing (compare the C-esque if (String.Length) to the VB.Net'y If Not String = String.Empty or If Not String = "" or If String.Length > 0).
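Spelled out as runnable code, the comparison looks like this (a small sketch; s is just a hypothetical string variable, and the C-esque line is shown as a comment so the snippet stays VB.Net):

Dim s As String = "hello"

' C-esque shorthand, leaning on the non-zero-means-true convention:
'     if (s.length) { puts("not empty"); }

' The more verbose VB.Net spellings of the same test:
If s.Length > 0 Then Console.WriteLine("not empty")
If Not s = String.Empty Then Console.WriteLine("not empty")
If Not s = "" Then Console.WriteLine("not empty")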

The efficient language folks have an argument that goes like this:

  • Each line of code is a potential point of failure.
  • Each line of code takes time to plan, type, review, and debug.
  • Fewer lines of code reduce failures and increase coding speed.

Well, yes and no. I agree with the two premises. But they also miss a few other premises that are demonstrated in my above examples, including:

  • Clever code takes longer to plan, review, and debug than verbose code.
  • Programmers who can work well with clever code and who can write clever code well are much rarer and more expensive to hire (and probably harder to retain) than more verbose programmers.
  • Clever code can have more points of failure per line of code.
  • The real points of failure are variables, operators, and statements that make up a SLOC -- reducing those is what truly reduces points of failure. One SLOC with the same number of operations and variables as 10 lines of code has just as many potential points of failure.

So, in many ways, it's a wash. I prefer the clever code much of the time, but I also recognize that I am in a small (but vocal) minority that is mainly populated at this point by old-time UNIX programmers. I think that the folks who enjoy and are good at reducing SLOC also happen to be an extremely experienced and talented group of people. This is going to throw off your comparisons. Give someone who is good at lowering SLOC the same project as a more verbose programmer and tell the SLOC person to be verbose, and they will probably still be better. I have not met anyone in this industry who lacks experience but screams "SLOC." It truly seems to be relegated to the Old Guard.

Becoming a more efficient programmer

At the end of the day, whether code is clever and reduces SLOC is fairly irrelevant in my book. Refactor the code to increase readability, reduce variable, statement, and operator count wherever possible and reasonable, and you are well on your way to becoming a higher quality, more efficient programmer.

J.Ja

About

Justin James is the Lead Architect for Conigent.

90 comments
Miquel Gantzer

As in many other fields and issues, the context here really matters a lot. Having an agreed set of standards, and following it, can work miracles. Put together a team of analysts and programmers who have agreed to code to a reasonable set of standards (naming, writing comments, guidelines for choosing between different looping or conditional statements, and of course recommendations about style such as the ones discussed here...), and suddenly SLOC is a great and reliable metric. In many other circumstances, the best you can say about SLOC is that it's really easy to measure. And do not underestimate that! Being able to objectively measure something is almost always a good thing.

C_Tharp

I know what you mean by cleverness, but to me, that is just tricks. Yes, it required a clever person to do the work in that manner, but it does not achieve anything significant. I define cleverness differently. A clever programmer writes the user documentation first so that the look, feel, function, and behavior are well understood before the coding starts. The goal is clear and effort goes into what is needed. Clever. A clever programmer builds the error handling up front. It will handle any error, even the unexpected ones. Thus, it functions as a development tool even before it serves as a production element. No extra effort or testing is required for it. Clever. A clever programmer writes code that documents itself so that anyone can understand it. Clever. A clever programmer always takes care of the else condition. Clever. A clever programmer builds modules that can be independently tested and proven to cover all possibilities. These modules are tested with drivers that allow any input and stubs that reveal the output and side effects. They can be tested in isolation or in aggregate. It also allows modules to be built in any order, not just from the bottom up. This creates confidence at any level and allows problems to be easily isolated and resolved. Clever. A clever programmer abstracts modules so that they can be reused by future projects and uses the modules from previous projects because they were built by clever programmers who knew the cost benefit of reuse. Clever. A clever programmer anticipates change even when change is not expected or in ways that are not expected. Where change is possible, it is allowed. Thus the pieces are built so that they can be rearranged or replaced for future needs with minimal effort. Clever. A clever programmer creates quality and efficiency on many levels without any extra effort because of how the work is performed. It will be reflected in patterns of source lines of code, but not in simple counts. I strive to be a clever programmer, with my definition.

mark

I remember about 10 years ago the senior programmer where I was working then was designing a report writer for end-users of our application. I passed his office one afternoon for the second time that day and both times he was staring at his screen - which was uncharacteristic for him. I knocked and asked if he was alright, at which time he waved at his screen and commented that he was trying to figure out what a large block of code was doing. After looking at the code on his screen, I noted there were zero comments and dozens of very short variable names, and I asked him who wrote it - maybe he could ask them - to which he replied that HE had written it about a month earlier, but now the application was throwing an error in this section and he couldn't remember what he was trying to accomplish with it. He was a very talented "clever" programmer - so "clever" he couldn't debug his own code. Today, the best code in my book is that which is easily understood by the people that come after me and must maintain it.

kpthottam

Let us stop for a moment and think of the days when we were all writing procedural code. The first problem encountered in maintaining and comprehending procedural code was that there was too much to read in a single place. Hence this was solved by allowing code to be split into different files. I could go on, but with each evolution, the common factor that drove the next step was "ease of comprehension and maintenance". Ease of comprehension and maintenance often comes at the cost of greater SLOC, so I am strongly in the camp that ignores SLOC when measuring quality.

mikes

Many eons ago when I was first studying computer science formally (after having programmed for 10 years previously), my very wise professor presented us with the 4 Cs of good programming: good coding is clear, concise, correct, and compatible (i.e., portable/reusable). Some of this depends on skill, and some of this depends on the toolset. Any programmer that says they are *just* a Java programmer, or *just* a VB.Net programmer, or whatever, is automatically scored lower in my book because they haven't taken the time to choose the best tool for the job. I use a general purpose language called Euphoria (www.rapideuphoria.com). It's an interpreted language, but can be translated to C via an included tool, or bound directly with the interpreter to form a single executable. It's not always the best tool for the job, and I will use C or Java or PHP (or another language) when the need arises. The features of the language reduce the number of errors that can appear just from missed syntax (using one equals instead of two in C, for example). The originator and maintainer of the language has made it his mission to keep any feature with the potential for spaghetti code out. You can use C-like elegance throughout a program if you choose, but it's not required. In fact, I often try not to use such shortcuts, because the addition of =0 or !=0 can often make the code block more clear, does not greatly detract from its conciseness, and makes it easier for someone who may not be familiar with that particular language to know exactly what the code is trying to do. In Euphoria, I'm not limited to keeping my entire statement on one line, and string concatenation is fairly cheap as opposed to function calls, so I would use the Code 2 style of puts, line-breaking and indenting after each ampersand. Just because it takes up more space on the screen doesn't mean it's less concise. The same thing goes for adding comments. Conciseness means how many bytes of RAM the *code* (not data) is going to use when executed.

metalpro2005

Many developers (especially 'clever' coders) forget the fact that there are two main audiences for the code: the compiler (or interpreter) AND colleague developers who need to alter the code. Both members of the audience need to understand the code and know the reasons WHY a specific solution was chosen. And the implementation needs to be alterable quickly (refactoring). In a time when the most costly factor is the human element, focus needs to shift from satisfying the compiler to satisfying the (maybe not so smart) colleague developer.

Justin James

... in some languages, where there really are only one or two ways to perform any given task, SLOC does map quite nicely to actual functionality implemented. COBOL (probably where SLOC got started, if I had to guess) really does not allow for much wiggle room. Even VB.Net, in many, many circumstances, does not have too many reasonable ways to approach a particular issue. In fact, much of the SLOC "wiggle room" in the average programmer's day is more in the refactoring department than the "elegant" or "clever" department; it is just the nature of the languages. Even if you wanted to, it is very tough to be "elegant" or "clever" in OO languages at the code level. At the OO architecture level, though, it is a different story... J.Ja

Justin James

I wish more programmers met your definition of "clever"! Instead, too many seem to meet "elegant to the point of chaos", which (as you know) is a good summary of my version of "clever". :) J.Ja

mark

I agree completely, especially your comments about documenting your code. For example, I was just reviewing some invoicing code I had written in 2002 before making a change. I had forgotten that I had about 100 lines of comments at the top of the code that document the entire invoicing process, along with smaller blocks of comments throughout the code. By reviewing my comments written 5 years ago, I quickly remembered how the code worked, and can confidently make a change to the code without fear of breaking something else - the dreaded "side effect". I always strive to document my code with an eye towards the person that comes behind me to maintain / enhance code I have written. Many times that person is me, and the comments I leave at the beginning are a great help to me. I use SLOC for estimating conversion projects I am involved in, and the metric I look at most is the ratio of lines of CODE to lines of COMMENTS - the higher that ratio is (meaning the fewer the comments), the higher my estimate goes, and the longer time I allow for the conversion.

Tony Hopkinson

except for the myth of abstraction and code re-use. The more abstract a piece of code, the less it does. The more re-usable, the more it does. It can blow up in your face very quickly. You end up with vast kludges like VBChart, with an inner core that will fall over if you so much as sniff near it. It's a nice idea and can be a real boon, but you need a really good idea of all future potential uses, and that just doesn't happen for anything above a certain level of complexity.

Tavis

I would have thought that cleverness and elegance are rather separate concepts in programming. I imagine that if you look at elegant code, it appears just the way it should be, on reflection, and doesn't suggest how it was arrived at. As an ASP.NET developer, it often seems best to avoid writing procedural code at all (and in .NET you have a choice of twenty or so programming languages to avoid using). There's a lot you can do with declarative code (markup), which is supposed to also help with optimization, as another column has discussed. Declarative markup can be validated, not just debugged; associated with semantic structures; is particularly easy to bind to graphic user interfaces; and is straightforwardly transformable into procedural code or script as required. For example, you can generate screeds of ECMAScript from XML for various client-side applications. So you may end up with lots of LOC, but they are not represented in the source. However, there are limitations with this approach too, and I'm not a proper programmer so I'm sure I'm not even aware of most of them. Nevertheless, some of the best solutions I've come across don't involve extra lines of code, but often ideas that negate their use, perhaps with an understanding of a mathematical method or coming at a problem from another viewpoint, say a graphical one. Here you might use or reuse an inbuilt feature or object property in an innovative way, and do away with lines of calculations.

Justin James

You are quite right, OO is rather verbose by design. It does make things more understandable when everything you are doing is spelled out in such specific detail. With procedural code, it is much more mysterious (and undocumented!) how a function applies to one type compared to another. J.Ja

etkinsd

Most of the companies I have worked for really didn't do much with their metrics -- but their management would say otherwise ;).

Justin James

Unfortunately, compiler/interpreter optimizations can only go so far, so programmers are constantly choosing between machine and human concepts of efficiency. :( Sometimes, though, we get lucky and they are the same thing. J.Ja

Justin James

Tony will beat you up a bit over this, but I am also in agreement here that really "well written code" (totally subjective item, of course) is extremely readable and pretty obvious as to what it is doing. I've looked at code I wrote 6, 7 years ago and I understand it just fine, despite not having intensely worked with that language in ages. In fact, I took a look last weekend at the source for a BASIC program I wrote *15 years ago* and it still makes perfect sense, despite having only a few useless comments like "this is the routine to put the image on the screen"! But the teachers I had even then were very big on the self-commenting code end of things. That being said, comments in code have their place, of course. But I tend to only put comments where the code gets a touch odd, typically if I am doing something "elegant" or "too clever by half". J.Ja

Tony Hopkinson

That is scary. Personally, two lines of comments in a modern language makes me think I've f'ed up.

C_Tharp

It's a nice statistic. But, as someone once said, "There are lies, d*** lies, and statistics." Your metric has to be tempered by the quality of those comments. It is standard practice to comment out old code when changes occur, possibly with comments explaining why it is commented out. These comments skew the ratio to look more favorable than it is. Some programmers use comment lines as white space. If anyone has ever been rated by these measures, they quickly learn how to inflate them. The ratio can have value but only in conjunction with other measures.

Justin James

I have found that "reusable code" is quite elusive. Outside of core components like UI widgets, or data access objects within a particular database design, the reusability bit is often more work than it can possibly be worth. Indeed, "reusable" is to OO as "elegant" is to procedural. "Reusable" OO tends to be completely overengineered and gold plated to the point of either being useless, or wasteful. Not that it cannot be done, but reuse rarely goes beyond the boundaries of a few related projects while being domain specific. At the end of the day, the problem is that business rules are not generic, so any code that implements them must be either very specific, or implement an extremely abstract interface to the concept of "business rule." J.Ja

C_Tharp

Yes, the level of complexity matters. I use abstraction and reuse mostly at leaf nodes, the bottom rung of the call chain. Reuse for aggregates goes down with the amount of aggregation (complexity). It has been useful for me in controlling hardware, which has a propensity to change more often than I liked. By abstracting the basic hardware controls and encapsulating the specific hardware, I was able to make changes reliably and quickly. It was useful when I converted a date dependent system for Y2K. I am currently using the concept for routine loading of a database. As with everything, there has to be balance. The mark of a pro is knowing where to strike that balance. It's a goal I may never achieve, but I keep trying.

Justin James

As soon as you need to loop or have a case statement... in fact, the moment you need things to happen in a particular order, OO code becomes procedural code in some fashion. There is really no way to get around it. Well... not entirely true. You could get around it by having something like a loop object that you add callbacks to in the order you wish them to be executed, and then call the "Execute" method of that loop object, as well as set its "EndCondition" property... but honestly, making a language that is purely OO would create a beast no one wants to see or use! J.Ja
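Just to make that beast concrete, here is a rough sketch of what such a loop object might look like in VB.Net (every name here -- LoopObject, EndCondition, Execute -- is invented for illustration):

Imports System.Collections.Generic

Public Class LoopObject
    ' Callbacks run in the order they were added.
    Private ReadOnly Callbacks As New List(Of Action)()

    ' The loop keeps going until this returns True.
    Public EndCondition As Func(Of Boolean)

    Public Sub Add(ByVal callback As Action)
        Callbacks.Add(callback)
    End Sub

    Public Sub Execute()
        Do Until EndCondition()
            For Each cb As Action In Callbacks
                cb()
            Next
        Loop
    End Sub
End Class

Which rather proves the point: the procedural loop has not gone away, it has just been buried inside Execute().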

Tony Hopkinson

Clever design makes you slap your forehead wondering why you didn't think of it before. Clever code makes you want to slap the forehead of the git who wrote it. It's always been true, but especially in OO: get the data structures right and the coding is simple. Clever code is for re-inflating badly designed dartboards and getting more than one pot of tea from chocolate kettles.

jslarochelle

Because I'm in charge of a large module (the server module) I use metrics to monitor how the code evolves. Of course it is not sufficient to use metrics. You also have to do design and code reviews. However, metrics are a good complement. Management does not use metrics because they think it has to do with Celsius/Fahrenheit conversion. And it is probably better left as a tool for us developers. JS

Tell It Like I See It

Modern management tends to equate managing and measuring -- as in measuring is the only way to manage something. Since software development metrics all have flaws in them, or you have to use different ones in different situations, modern management hates dealing with it. They see it as it can't be measured and therefore it cannot be managed.

Tony Hopkinson

Comments for quirks, for todos, memos etc, very nice. Why, I can live with. How, most definitely not.

public void PrintInvoices(InvoiceList ainvoices)
{
    foreach (Invoice inv in ainvoices)
    {
        inv.print();
    }
}

I'm one of those annoying verbose types, who will break things up just to make them readable. In the old days this would have an impact, but with modern optimising compilers (inlines, loop rollouts, etc.) I always go for readable first. Some find it annoying (about as annoying as I find comment-every-line), but that's for aesthetic reasons, not because it was obfuscated.

Locrian_Lyric

I had to revise some code that was done by a sloppy programmer, but in one place his 'sloppy code' actually had a very pragmatic reason behind it... it was dealing with a quirk in our system. When I 'fixed' his code, the application blew up. Now, I will pick one tiny little nit about commenting in general. I think I can say with little fear of contradiction that TR has attracted some of the best and brightest on their forums. While you, I, Tony, et cetera, may be able to read self-documenting code, I think it may be a bit optimistic to assume the ability is universal. Then again, as I found out at one job, being too thorough in your commenting can amount to free training for your (cheaper/outsourced) successor.... Ah, self-documenting vs commenting.... the debate rages on eh? :D I added a comment at that point...

Tell It Like I See It

I once had a conversation with my manager that uncannily mirrors your previous post. The conversation was about whether we could reuse something another programmer wrote for some new situation. Ultimately, it wouldn't work. I like reusable OO code when I can get it. But as I pointed out to her, there comes a point in time where there are just too many differences between two applications. To have one code base cover both applications would require more code than having two separate code bases. Unfortunately, more code means more points of failure. The areas I've gotten the most reuse are screen gizmos, data access (and validation) and the occasional utility object. By utility object, I mean things like reading ini files or other configuration-type files, getting a logged-in user name (prior to .NET), etc. I agree that the pattern is that "primitive" code can be more easily reused. More "advanced" code cannot be easily reused.

Tony Hopkinson

I'm not arguing against re-use from a desirable technical point of view. But there is a commercial drive for re-use that leaves us with totally unmaintainable designs. Reuse is meant to be for generic functionality, what it seems to have turned into is common functionality. As for the measurement questions, sort of, a bit, here and there, maybe, somewhat. At our current level of programming technology pattern recognition is impossible. An amoeba is better at it than our best supercomputer. All we do is get our programs to identify patterns we've recognised as programmers. Could we sit down and identify good code, agree why it was good, identify it as the best way to do it? Even if we got why, what, where and who would have a major impact. The only measure of code quality worth paying attention to in commercial IT, is how much does it cost to change the code without losing the quality. It could cost less because changing one piece of code adds / fixes functionality right across the application(s), or it could cost a lot more because it does the same with something undesirable... If you go down the reuse for the sake of it route, the latter outcome is guaranteed. All else is pseudo-scientific mumbo-jumbo, promoted by panacea merchants trying to sell us their version of 'the right way'.

Justin James

I agree completely, business rules do tend to occur in multiple places. I tend to look at "reusable code" more at the library level than the individual function, procedure, or object level, as a result (see my post a few levels down that I just put up). The flip side, of course, is that when the library gets to a certain undetermined point, it *is* the application, and by definition cannot be reused. At the end of the day, a lot of what you can reuse depends on how much business logic you put in the database/stored procs vs. DAL vs. BOs. I know, it's a whole religious war in and of itself where to put that logic, so I am going to avoid it for the time being (I'll write about it if I want a post with 200 "I hate you!" comments ;) ), but the reusability of the application and/or library code is heavily dependent upon that decision. J.Ja

Justin James

As far as I can see, at least. None of us like over-engineered OO architecture. :) Tony and I believe that a major cause of it is improper encapsulation, and the result is "one size fits all" methods or objects. You know the kind, where properties can't have default values because they can be used in 600 different ways, or methods have 312 overloads with radically different logic, each overload being 90% the same as the others (the copy/paste problem). To me, "reusable code" is code that is specific to a particular task, but it is a task that many projects use. Some examples: * CGI.pm, the Perl CGI module that handled sessions, URL encoding, etc. * stdio, the standard C I/O library. * Much of the .Net Framework. * ODBC, JDBC, and other similar methods of standardizing database connectivity. * ImageMagick, an image manipulation library. And so on. I think that's really telling, actually. If you find yourself copy/pasting code from one project to another project that is functionally different but works with the same type of items (the same DB structure counts, BTW), you've got reusable code. If you find yourself beefing up some code because you *know* you'll be using the added functionality down the road, it is probably reusable code. If you find yourself writing an interface but only seeing one possible implementation, for the sake of "what if?", you are probably wasting your time. :) J.Ja

C_Tharp

How can you overuse reuse? If the function can be used in many applications, use it in all of them. That does not imply that the function is constructed with many optional parameters that are each rarely used. A business rule may have many prerequisites, not all of which apply in every case. Each value is set by a method independent of the others, so the methods that are needed are called and the others are not. They are developed as the need arises and do not impose any unnecessary development work. The update, which encapsulates the business rule, validates the prerequisites before proceeding and throws an error if something is not satisfied; otherwise it performs the update. The update can be modified as new conditions arise. The callers need to be modified only when they are directly affected by the modifications. This usually requires calls to new methods rather than modification to existing ones. If one is missed, an error is reported in the normal execution of the job. Methods are kept as simple as possible. Yes, that means there may be many of them. Abstraction to put and get may be adequate for some. That will reduce the number. Others will need to be more specific because they do more. It's OO. It can get too complicated when an object is built from an object of an object ... When business rules, and other processes or things which are represented in code, change a lot, they need to be encapsulated so that they only change in one place in the code. Otherwise, maintenance gets out of hand. My experience has been that they do change a lot.

I felt that I had to answer your questions, but it was not my intent to spin off the discussion in this direction. Let me see if I can tie it back. Perhaps some questions should be asked about the usefulness of measurements in this kind of code. Can measurement detect when the construction techniques are troublesome or headed for trouble? Is there a limit to the number of functions that are useful? Is there a limit to the number of arguments that are practical? Can overused cut and paste development be detected? Can overblown OO be detected? Can useful patterns be seen in a graphical or mathematical representation of the call tree? This kind of pattern detection is not so simple as measuring the number of source lines of code. That does not mean that it is not possible to make useful measurements. Perhaps it does not matter, because management does not use them.
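To make the shape of that concrete, here is a minimal sketch of one such encapsulated rule in VB.Net (the rule itself -- CreditLimitRule -- and all of its members are hypothetical; the point is the independent prerequisite setters and the validated update):

Public Class CreditLimitRule
    Private CustomerId As Integer
    Private RequestedLimit As Decimal
    Private HasCustomer As Boolean = False
    Private HasLimit As Boolean = False

    ' Callers set only the prerequisites that apply to their case.
    Public Sub SetCustomer(ByVal id As Integer)
        CustomerId = id
        HasCustomer = True
    End Sub

    Public Sub SetRequestedLimit(ByVal limit As Decimal)
        RequestedLimit = limit
        HasLimit = True
    End Sub

    ' The update validates the prerequisites before proceeding and
    ' throws an error if something is not satisfied.
    Public Sub Apply()
        If Not HasCustomer Then Throw New InvalidOperationException("Customer not set")
        If Not HasLimit Then Throw New InvalidOperationException("Requested limit not set")
        If RequestedLimit < 0 Then Throw New InvalidOperationException("Limit cannot be negative")

        ' Perform the actual update here (omitted in this sketch).
    End Sub
End Class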

Tony Hopkinson

you've never seen re-use overused? You know, a method call with twenty parameters, 18 of which aren't used, and the first thing it does is run out of indents while it's figuring out which ones are set? Extensibility is great if you are going to extend a lot; otherwise it sucks. Go too far with it, or down the wrong road, and you have OO's equivalent of a global variable.

C_Tharp

A business rule tends to be a policy set for a business that says for a given change certain prerequisites must be met and a defined set of changes will occur. I have found that, unless the system being created is exceedingly simple, more than one piece of code will need to apply the business rule. It makes a great deal of sense to encapsulate the business rule and abstract the interface so that it can be applied by any piece of code ("reused"). This does not require "gold plating" or a great deal of crystal ball study. Any experienced programmer can recognize such pieces and build them without wasting any time or effort. The payoff comes in reduced development, reduced testing, and increased extensibility.

Tony Hopkinson

Makes no difference. As soon as you have to change a piece of code to make it reusable, think. You might be right ... or maybe not.

Tavis

If you broaden the concept of coding to encompass structured graphic design, you can see examples of reuse in objects like symbols, which may have many variable properties which instances inherit, and optionally can override. Symbols can include other symbols, and so on. Because the symbols are referenced rather than instantiated and embedded, this makes updating all instances as easy as updating the master (perhaps with options to reset overrides). And symbols libraries can be externalized from single documents to support any number. In SVG and Flash this can result in spectacular gains in efficiency, effectiveness and elegance. Now, it may be possible to achieve these effects by procedural lines of code, such as drawing point-by-point, but it wouldn't seem sensible to do this. Similarly, if you wanted to vary the visual style of the graphic components, you could abstract the formatting and effects to an external stylesheet. SVG supports CSS, and it's easy to switch on sets of styles using classes. I don't see a lot of difference between this and object-oriented coding; perhaps if the objects aren't literally visual ones, a programmer should be able to visualize (internally represent) them, as well as document them as models, of course.

Tony Hopkinson

that has been bastardised. Propeller heads saw it as a common file save dialogue. Bean counters saw it as putting an entire application in there and then calling its File Save dialogue. It still makes me roll about the floor when people say OO is easier than procedural, when in fact in many situations it's simply better, if you get it close to correct. I've lost count of the times someone with enough knowledge to be dangerous has done too much with abstraction or inheritance. You nearly always find that sort of muppet forgot another cornerstone of OO, encapsulation.

Justin James

... a few years ago when I needed to "pivot" a SQL table, it involved 106 LEFT JOIN statements that differed only by one number in the WHERE clause (which incremented by one for each JOIN) and the alias assigned to the sub-query. Excel wrote it for me, error free, in about 90 seconds' worth of work. That sure beat copy/paste and then possibly making mistakes on the manual changes! I do this kind of thing fairly frequently. I also think it highlights a few shortcomings in our toolboxes. Visual Studio, as an example, is just now catching up to automating tasks that we've been repetitively re-coding by hand for 20 years now: basic data access. The fact that we are kludging together our own hacks (out of Microsoft Office products, of all things) just speaks very poorly, in my opinion, of the state of the art. J.Ja

Tell It Like I See It

We must be from very similar schools or something. I built an Access application that would take input about a SQL database to connect to, login information, etc. It then goes out and gathers all the data on the structure. From there you can select any table in the database and have this Access application create code for what I called a "data access class" (DAC). I then used this class whenever I need to access data from the table. With it, I was able to cut my time on some projects by as much as 75%. That time savings really got noticed at least a couple of times.

Justin James

Tavis - Yes, what you say is absolutely right, but I was talking specifically about how OO code always ends up (eventually) being procedural code. It is actually where I think OO goes from being "great" to "miserable". OO as an architecture is awesome. OO in terms of having to code with it is a royal pain sometimes. :( It is so bad, I actually recommend on occasion custom writing a dinky code generator to crank out OO code based on a declarative input like what you are talking about. I have been known to dump input into Excel and use "Auto Fill" to generate code, particularly SQL, for that matter. J.Ja

Tavis

In the case of XSLT, you write declarative code to match patterns from an XML document, and can apply templates, conditions and for-each loops. These don't have all the power and flexibility of procedural code, I believe, but they can cope with a variety of requirements, such as recursion. In one sample application I wrote, an XML document held unit of measurement pairs and the mathematical formulae to convert between them (this could be of general use). Then an XSLT was used to create a web page form which looped through the conversion pairs and created a user interface for inputting values and displaying the conversions to the other unit of measurement. The XSLT generated the required ECMAScript functions to process and display the converted values. I'm not saying this approach is generally applicable, but in web development you can store a lot of behaviour in markup (say, in user defined ASP.NET controls, or Flash movieclips).

Tell It Like I See It

Well, maybe the government might decide to apply the ultimate government option: Why build one when, for three times the cost, you can build three! Of course, that's when the bureaucracy kicks in and compares the various sections of the three programs and tells a fourth team to pull the "best" sections from each of the programs and merge them together. Then the committee that decided on the "best" sections wonders why the fourth version simply doesn't work. :) Sorry, I couldn't resist.

Justin James

... answering the question, "what's my baseline?" It is impossible to say that code runs "fast" or "slow" without having similarly featured code to compare it to! Yes, it may feel "slow" from the customer's view, but for all they know, you used a mathematically provable ideal algorithm that cannot be improved upon one bit. I know, I am arguing against myself here, but you raise some excellent and valid points about measurability, which led me to the baseline question. It's not like you are going to have 3 teams write the same project separately and give the customer the best one... J.Ja

Justin James

"BTW, in addition to "ignorance of proper procedures" I'll add that in some situations you may not HAVE proper procedures outlined, particularly in a situation that has never come up before." What you said here just triggered in my mind a perfect summarization of the difference between "manager" and "leader". A "manager" keeps people on track with established guidelines. The "leader" establishes those guidelines, or keeps people on track in their absence. Just a totally off-topic thought. You are right that numbers are just one view. It's like that horrid movie "Summer School", where everyone says that the teacher stunk because the students still flunked the test (the numbers) while one parent says that the teacher succeeded, because the kids stayed out of trouble and became interested in school (the unmeasurable). No idea where the analogy came from or why I thought of that film just now, but youa re right, the things you cannot measure are often just as important (if not more imporatant) as the measurables. J.Ja

Justin James

You are 100% right. At that point, it becomes a "fast, cheap, or right?" conversation with the bean counters. The nifty thing I have found about that conversation is that by putting it into those terms, even a bean counter can understand it at that point, and as long as you deliver the measurables they requested, you've met the goal, albeit oft'times at the expense of some professional pride. J.Ja

Locrian_Lyric

such as... Is it fast AND easy to maintain? Is it a resource hog BUT secure? et cetera

Tell It Like I See It

Yes, you need to measure to manage. I lump counting (as in the number of outstanding work items) into measuring as well. Like you, I see where the "manager" simply wants something they can plug into Project or something similar to that. In my mind, that makes them a glorified clerk, not a project manager. The term I've heard for this is "Management by the Numbers". I've yet to see it work. The problem with it is when managers think that the numbers are reality. The truth is that the numbers are one view of certain aspects of reality. There are always other aspects that could be considered -- things you aren't measuring or that cannot be measured effectively. More important than the numbers themselves, in my mind anyway, is the underlying reason for the numbers. You touched on this in your second paragraph -- (paraphrasing) what caused the defect -- and dealing with that issue, if necessary. BTW, in addition to "ignorance of proper procedures" I'll add that in some situations you may not HAVE proper procedures outlined, particularly in a situation that has never come up before. Entering an "exact percentage of task completion for a multi-month task" is the job of a data entry clerk. Hmm, there's that pesky clerk aspect again. As you said, that's neither management nor leadership. To me, the numbers are a tool, nothing more. They need to stay a tool and not become religion.

Tell It Like I See It

Yes, you can measure a project (any project) at least in terms of meeting dates or tracking what is still outstanding versus what was completed. But with regards to the code (not the overall project, just the code), there's not a really good way to measure it. What is efficient in one situation would be considered inefficient in another situation or with a different measuring standard. That's the problem. Now, I would agree that perhaps you agree something needs to be relatively fast, so you measure how long it takes to do it. Then you try a different approach and see how long that takes. Then maybe you try a few more approaches so you can use the fastest one. The problem is that trying these different approaches takes time. Time is money, so where do you draw the line on this tinkering? Even when you decide on a fast approach to some situation, you send it to the client. Guess what, they feel it takes too many clicks to make it happen. In essence, they are using a different metric than was given to the development staff. Worse yet, if you reverse the situation, you may find the user is happy that you made it easy for them to use, but is complaining about how slow it is. My point is that you could apply different metrics to code (or the finished program). Management hates that this is the case. Managers want it black or white. Success or Failure. But coding is a very gray world and doesn't neatly fit into many managers' monochromatic (digital) world.

Justin James

My personal belief is that it is extremely difficult to manage without measuring (and improvement is nearly impossible), but measuring does not equal management! Not in the slightest! Too many managers, sadly, think that having 1,001 dashboard displays with real time thermometers and gas gauges, and spreadsheets everywhere equals management. It really doesn't. Now that being said, there are ways to manage coding projects. Things like defects found, whether or not those defects were caused by sloppiness or ignorance of proper procedures, similarity of the final product to the requirements, whether or not the project was delivered on time or within budget, etc. Where the disconnect is, too many managers equate "management" with "meeting customer demands." So what you get is management attempting to drive metrics to go into Microsoft Project, like exact percentages of task completion for a multi-month task. And that's neither "management" nor "leadership". J.Ja

Justin James

I think you can put a few other metrics on code, like, "how fast does it run?" and "how many system resources does it consume?" and "is it vulnerable to standard code attacks?" Indeed, *not* asking these questions is how so much garbage-ware gets out there. J.Ja

Locrian_Lyric

anything past 1) Does it run? 2) Does it break other things? 3) Can anyone other than its creator maintain it? is all a matter of judgement.