Static vs. dynamic languages: Why choose one over the other

A discussion with Chip Camden and Justin James about static and dynamic languages leads to talk of Haskell, refactoring, code contracts, and more.

We asked TechRepublic contributors Chip Camden and Justin James to sound off on when they would choose a static language over a dynamic one, and vice versa. Read what the developers had to say about static vs. dynamic (to clarify, by a "static language," they mean "statically typed language") in this back-and-forth email discussion, and then weigh in on this topic by posting to our forums.

Chip Camden: Both static and dynamic typing have their place. Where specific types are required or known ahead of time, static typing makes sense and catches a lot of programming errors. Of course, some kinds of programming become excessively difficult when every parameter you pass has to inherit from some class or implement some known interface. "Duck typing" is nearly impossible to achieve in statically typed languages like C#. You end up creating a separate interface for each method, and even then the classes you pass as parameters must implement that method _within that interface_, so they must know ahead of time about their future use as that parameter.
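
To make the pain concrete, here is a minimal C# sketch of the pattern Chip describes (the names are hypothetical): to accept "anything with a Speak method," you must declare an interface, and every class has to opt in to it explicitly.

```csharp
using System;

public interface ISpeaker
{
    void Speak();
}

public class Dog : ISpeaker
{
    public void Speak() { Console.WriteLine("Woof"); }
}

// Robot must know in advance that it will be used as an ISpeaker.
public class Robot : ISpeaker
{
    public void Speak() { Console.WriteLine("Beep"); }
}

public static class Greeter
{
    // A class with an identical Speak method that never declared
    // ISpeaker cannot be passed here -- that's the duck typing we lose.
    public static void Greet(ISpeaker speaker)
    {
        speaker.Speak();
    }
}
```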

On the other hand, in languages like Ruby that do no type checking of parameters and fail only when you attempt to call a method that wasn't implemented, you can have bugs that lurk unsuspected for years, or code that appears to work but is actually doing something different from what you think it is.

What I really want is a language that provides type checking when I want it, but allows me to be more dynamic when I need to. Lately, I've been thinking that Haskell fits that bill pretty well. It's actually a statically typed language, but it has really smart type inference. If a parameter's type is unspecified, the Haskell compiler infers any type restrictions from how the parameter is used within the function. That means I can pass parameters of two completely unrelated types to the same function, as long as both types meet whatever criteria the function needs, and that's all checked at compile time.
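
The nearest C# analogue, as a sketch for contrast, is a generic method with an explicit constraint; the key difference is that C# makes you name the IComparable<T> constraint up front, whereas the Haskell compiler would infer an equivalent restriction from how the parameter is used.

```csharp
using System;

public static class Widest
{
    // Works for completely unrelated types, as long as each one meets
    // the declared criteria -- and it's all checked at compile time.
    public static T Max<T>(T a, T b) where T : IComparable<T>
    {
        return a.CompareTo(b) >= 0 ? a : b;
    }
}

// Widest.Max(3, 5);         // int    -> 5
// Widest.Max("abc", "abd"); // string -> "abd"
```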

In practice, I find that I can get a lot more done more quickly with Ruby, though I'll find more bugs later than I would with either C# or Haskell. With C#, there's a whole phase of development dedicated to resolving type errors, most of which have no bearing on correctness. Ruby just runs, but it might be doing something other than what you meant. With Haskell, solving type problems can sometimes be even more time-consuming than in C#, but those problems almost always represent real logic errors. Once you get a Haskell program to compile, it's got a good chance of being bug-free.

I get the feeling that C# (and Java before it) are like a kindergarten teacher who says, "If we all march in single file and stay on the sidewalk, nobody will get their shoes muddy." Haskell is more like "Take any path you want, as long as you don't mess up your shoes." Ruby is all "You want to strap rockets on your roller skates? No problem."

Justin James: Something I've learned time and time again is that type validity is only a small piece of the puzzle when it comes to writing code. If a function returns the right type but the logic that creates the value is wrong, does it really matter? Static type checking eliminates one important yet ultimately minor form of error. How often do you see type errors in Ruby or Python code? How often do you see null reference exceptions in C#? Probably a lot more often! Boundary checking, data validity, etc. are all things that can bite you pretty hard, regardless of the language used.
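
A contrived one-function C# illustration of that point: the code below compiles cleanly and returns the right type, yet computes the wrong value, and no type checker will object.

```csharp
public static class Stats
{
    public static decimal Average(decimal a, decimal b)
    {
        return a + b / 2;   // bug: operator precedence; should be (a + b) / 2
    }
}
```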

Statically typed languages usually make these mistakes pretty obvious, under the "fail fast, fail hard" principle. Null references, out-of-bounds array access, use of uninitialized variables, and so on all conspire to make life tough if you didn't write the necessary checking code. The flip side is, you need to write a LOT of checking code! In most situations, you really want to treat a NULL as some safe default, like 0 for a number or "" for a string. Dynamically typed languages are almost always glad to combine auto initialization with auto casting and boxing so that null values are completely safe to use and provide sane defaults. And while that makes the programming sweet and easy, it also makes your troubleshooting a lot more difficult. I know that a lot of the errors I've seen over the years were due to an auto initialization occurring silently.
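
A small C# sketch of both sides of that trade-off: the unguarded version fails fast and hard on a null, while the guarded version writes the "checking code" once, folding null into the safe default a dynamic language might supply automatically.

```csharp
using System;

public static class NullDemo
{
    // Fail fast, fail hard: the first member access on a null throws.
    public static int UnsafeLength(string s)
    {
        return s.Length;    // NullReferenceException when s is null
    }

    // The checking code, treating NULL as a safe default ("").
    public static int SafeLength(string s)
    {
        return (s ?? "").Length;
    }

    public static void Main()
    {
        Console.WriteLine(SafeLength(null));      // prints 0
        // Console.WriteLine(UnsafeLength(null)); // would throw at runtime
    }
}
```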

Something that I think needs a lot more attention is NOT the testing stuff. Quite frankly, I am sick of hearing about it. Too many development methodologies treat testing as the goal itself. I hate to break it to the TDD folks, for example, but when code changes, it's usually because the requirements changed, not because of refactoring. And for the folks who treat refactoring as a goal, I am sorry, but it isn't and shouldn't be. Too many refactorings are a sign of poor planning or lack of skill. I see tools like ReSharper and JustCode and many of the features they offer, and I say, "If you are extracting an interface from a class so often that you need a tool to do it, something is wrong." So, if you know what you are doing, the endless refactoring gets cut. And without the endless refactoring, your need for testing goes down quite a bit, and you no longer need to make it the focus of your development work.

At the same time, though, there *are* tests that really need to be done but aren't, because they are hard or tedious. That's where tools like Pex come in, along with things like code contracts. Pex rips through your code, looks for potential edge cases, and tests them. Code contracts let you declare what your code is doing in an easily tested manner. Combine the two ideas in either type of language, and you eliminate all sorts of issues.
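
As a minimal sketch of the .NET Code Contracts idea Justin mentions: pre- and postconditions are declared right in the code, where a tool such as Pex can hunt for inputs that violate them (actual enforcement requires the contracts rewriter to be enabled).

```csharp
using System.Diagnostics.Contracts;

public static class Arithmetic
{
    public static int Divide(int dividend, int divisor)
    {
        Contract.Requires(divisor != 0);   // precondition
        // Postcondition: the standard integer-division identity holds.
        Contract.Ensures(Contract.Result<int>() * divisor + dividend % divisor == dividend);
        return dividend / divisor;
    }
}
```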

Chip Camden: Re: massive refactoring as a sign of poor design. Specifically, it's a symptom of not keeping components small, modular, and focused on doing one thing well. If the definition of what a class or function does is concise and not burdened with other concerns, then refactoring becomes rare.

That same principle makes test coverage easier, too. Not only that, but avoiding side effects and state-based behavior means that the code can be provably correct without having to test all possible inputs (and the number of "inputs" is considerably smaller if you consider state as a type of input).
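
A sketch of the "state as input" point in C# terms: the stateful version's result depends on its call history, so every prior call is effectively another input to test, while the pure version depends only on its arguments.

```csharp
public class Counter
{
    private int count;                       // hidden state = an extra input
    public int Next() { return ++count; }   // result depends on call history
}

public static class PureCounter
{
    public static int Next(int count)        // state passed in explicitly
    {
        return count + 1;                    // same inputs, same output, always
    }
}
```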

Thanks to Chip and Justin for sharing their thoughts on this topic and to Justin for this topic idea.


Mary Weilage is a Senior Editor for CBS Interactive. She has worked for TechRepublic since 1999.
