No matter how carefully you craft your code, and regardless of your chosen
language, every sophisticated application is bound to have bugs. Debugging is
simply part of the tried-and-true process of application development. But while
bugs cannot be avoided, the cost of debugging, in time and energy, can be
mitigated with proper techniques.

In his book, Find the Bug: A Book of Incorrect Programs, Adam Barr shows you
how to efficiently search your source code for bugs using a technique dubbed
inspeculation. Chapter 2 of that book, Tips on Walking Through Code, is
available from TechRepublic Downloads. In the following interview, Adam Barr
discusses debugging and the role it plays in enterprise-wide application
development.

Interview

[TechRepublic]
Programming bugs are inevitable, especially in this age of large
enterprise-wide applications. Do you think modern debugging tools have dulled the
debugging prowess of the average application developer?

[Adam Barr]
People can reminisce about the ‘good old days’ when you didn’t have
source-level debugging and you were reconstructing function call stacks by
hand. But a lot of the debugger improvements simply eliminate frustrating,
mechanical work on the part of the programmer.

I think back to debugging the first version of the Windows
NT kernel, when you couldn’t even reconnect to the debugger if the serial cable
got disconnected…it really was like what cavemen used to debug fire. We would
spend time pawing through raw memory to figure out the contents of variables;
now you can use a live variable watch window that understands your object
structure.

Did the ‘old way’ make you a better debugger? No,
it just took longer and made it harder to apply the real skill that programmers
can bring to debugging, which is understanding what the code does and where the
bug is. That hasn’t changed. It’s the same for a feature like ‘Edit and
Continue’. I view it as a timesaver, not a crutch. Or source-level
debugging in general: I think programmers should understand assembly language
so they can understand the machine, but I don’t think you should say that everyone
should always debug in assembly just to improve their skills for the rare bug
that your source debugger can’t help with.

[TechRepublic]
Does the scale of enterprise application systems change your basic
debugging principles, given the sheer size and dispersal of the various
parts of the system?

[Adam Barr] I
think the dispersal affects it more than the size. Once you have a system that
involves more than one computer, you start to have distributed bugs,
timing-related bugs, hard-to-reproduce bugs. These are the kinds of bugs where
you can’t walk through the code repeatedly in the debugger until the bug
becomes obvious. If you hit a networking bug and start stepping through one
machine in the debugger, the other machine times out and destroys
your repro scenario.

Thinking back to the first question, a debugger that could
track multiple systems at once, following the flow of data and
‘freezing’ the other computers, would be very helpful. Today you wind
up debugging forensically, so having code that does good logging, as an
example, becomes much more important. In cases like that, you usually get some
sketchy information about the state of the system, and then you have to think
through what the code would do in that situation. That’s the skill that
‘Find the Bug’ is trying to teach.
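
[Editor's note] As an illustration of the kind of logging Barr describes, here
is a minimal sketch, not taken from the book, assuming Python; the component
name "server" and the handle_request function are made up for the example.
Each log line carries a timestamp, a component name, and a per-request
correlation id, so the flow of a request can be reconstructed from the logs
after the fact.

    # Minimal sketch of logging that supports forensic debugging (hypothetical example).
    import logging
    import uuid

    logging.basicConfig(
        format="%(asctime)s %(name)s %(levelname)s %(message)s",
        level=logging.DEBUG,
    )
    log = logging.getLogger("server")  # hypothetical component name

    def handle_request(payload):
        request_id = uuid.uuid4().hex  # correlates all log lines for one request
        log.debug("request %s received: %r", request_id, payload)
        try:
            result = payload.upper()  # stand-in for the real work
            log.debug("request %s completed: %r", request_id, result)
            return result
        except Exception:
            # log.exception records the traceback along with the message
            log.exception("request %s failed", request_id)
            raise

    if __name__ == "__main__":
        handle_request("ping")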

[TechRepublic]
Many applications are released as complete, only to reveal bugs later once they
are in the hands of end users. Do you believe the tolerance for programming bugs
has increased over the past few years? Why?

[Adam Barr] No, I
think the tolerance for programming bugs has decreased dramatically in the past
few years, because a single bug can paralyze an entire company. But the reason
a single bug can paralyze an entire company is because software is so
ubiquitous and embedded into the day-to-day processes of a company. The sheer
amount of software has increased so much that even if each individual component
is more reliable, the overall system may be less reliable. And of course, just
because the tolerance for bugs has gone down does not mean that the actual
number of bugs has gone down.

[TechRepublic]
Your book asks readers to find bugs in these programming languages: C, Python, Java, Perl, and x86
Assembly Language. How did you choose these languages? Would the principles
outlined in your book also apply to C#, VB.NET, COBOL, etc.?

[Adam Barr] The
principles in the book would certainly apply to any language; that was one of
the reasons I spread the programs out over five languages, to demonstrate that
the concepts are universal. And I include a quick primer on each language, so
any programmer can follow the code and learn about the languages also.

I had no formal process for choosing the languages. I
started with C because I was most familiar with it. Then I wanted an
object-oriented language, and I chose Java over C++ because it was more
different from C. I wanted at least one scripting language, and I chose both Perl
and Python to avoid complaints from supporters if I left one of them out
(instead they can complain about my categorizing their languages as
‘scripting’ languages). Then I chose assembly language because I do
think it is useful to understand how the machine works at a more basic level. I
thought about using a .NET language, but I was contemplating a possible sequel
and decided I could devote that one entirely to .NET languages.