Software Development

Static vs. dynamic languages: Why choose one over the other

A discussion with Chip Camden and Justin James about static and dynamic languages leads to talk of Haskell, refactoring, code contracts, and more.

We asked TechRepublic contributors Chip Camden and Justin James to sound off on why they would choose static languages or dynamic languages over the other. Read what the developers had to say about static vs. dynamic (and to clarify, by a "static language," they mean "statically typed language") in this back-and-forth email discussion, and then weigh in on this topic by posting to our forums.

Chip Camden: Both static and dynamic typing have their place. Where specific types are required or known ahead of time, static typing makes sense and catches a lot of programming errors. Of course, there are some types of programming that become excessively difficult when the parameter being passed has to inherit some class or implement some known interface. "Duck typing" is nearly impossible to achieve in statically typed languages like C#. You end up creating a separate interface for each method, and even then that means that the classes you pass as parameters must implement that method _within that interface_, so they must have knowledge of their future use as that parameter.
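
A minimal C# sketch of the pattern Chip describes (the types are invented for illustration): to pass "anything that can quack," every class must implement an interface declared ahead of time.

  using System;

  interface IQuackable { void Quack(); }
  class Duck : IQuackable { public void Quack() { Console.WriteLine("Quack"); } }
  class Robot : IQuackable { public void Quack() { Console.WriteLine("Beep"); } }

  class Demo
  {
      static void MakeItQuack(IQuackable q) { q.Quack(); }

      static void Main()
      {
          MakeItQuack(new Duck());
          MakeItQuack(new Robot());
          // A class that merely has a Quack() method but never declared
          // IQuackable cannot be passed here -- the duck typing is lost.
      }
  }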

On the other hand, in languages like Ruby that do no type checking of parameters and only fail when attempting to call a method that wasn't implemented, you can have bugs that lurk unsuspected for years, or things that appear to work but are actually doing something different than you think they are.
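
The C# 4 dynamic type, which comes up later in the comments, reproduces the same failure mode; a small illustrative sketch:

  using System;

  class Demo
  {
      static void Main()
      {
          dynamic s = "hello";
          Console.WriteLine(s.Length);  // fine: string has a Length property
          s.Quack();                    // compiles, but throws at run time:
          // Microsoft.CSharp.RuntimeBinder.RuntimeBinderException:
          // 'string' does not contain a definition for 'Quack'
      }
  }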

What I really want is a language that provides type checking when I want it, but allows me to be more dynamic when I need to. Lately, I've been thinking that Haskell fills that bill pretty well. It's actually a statically typed language, but it has really smart type inferencing. If a type for a parameter is unspecified, the Haskell compiler can infer any type restrictions based on how it's used within the function. That means that I can pass parameters having two completely unrelated types to the same function, as long as both types meet whatever criteria the function needs -- and that's all checked at compile time.
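
Haskell gets this via inference, with no declarations needed. The closest C# analogue (a loose sketch, with invented types) is a generic method with a constraint: two unrelated types are accepted because both meet the stated criteria, checked at compile time.

  using System;

  interface IHasArea { double Area(); }
  class Circle : IHasArea { public double R; public double Area() { return Math.PI * R * R; } }
  class Square : IHasArea { public double Side; public double Area() { return Side * Side; } }

  class Demo
  {
      // Two completely unrelated types are accepted, because both meet
      // the criteria the function needs -- verified by the compiler.
      static void Describe<T>(T shape) where T : IHasArea
      {
          Console.WriteLine(shape.Area());
      }

      static void Main()
      {
          Describe(new Circle { R = 1.0 });
          Describe(new Square { Side = 2.0 });
      }
  }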

In practice, I find that I can get a lot more done more quickly with Ruby, though I'll find more bugs later than with either C# or Haskell. With C#, there's a whole phase of development dedicated to satisfying type errors, most of which have no bearing on correctness. Ruby just runs, but it might be doing something other than what you meant. With Haskell, solving type problems can sometimes be even more time-consuming than it is for C#, but they almost always represent real logic errors. Once you get a Haskell program to compile, it's got a good chance of being bug-free.

I get the feeling that C# (and Java before it) are like a kindergarten teacher who says, "If we all march in single file and stay on the sidewalk, nobody will get their shoes muddy." Haskell is more like "Take any path you want, as long as you don't mess up your shoes." Ruby is all "You want to strap rockets on your roller skates? No problem."

Justin James: Something I've learned time and time and time again is that type validity is only a small piece of the puzzle when it comes to writing code. If a function returns the right type but the logic to create the value is wrong, does it really matter? Static type checking eliminates one important yet minor form of error. How often do you see type errors in Ruby or Python code? How often do you see null reference exceptions in C#? Probably a lot more often! Boundary checking, data validity, etc. are all things that can bite you pretty hard, regardless of the language used.

Statically typed languages usually make these mistakes pretty obvious, under the "fail fast, fail hard" principle. Null reference, array out of bounds, use of uninitialized variables, etc. all conspire to make life tough if you didn't write the necessary checking code. The flip side is, you need to write a LOT of checking code! In most situations, you really want to treat a NULL as some safe default, like 0 for a number or "" for a string. Dynamically typed languages almost always are glad to combine auto initialization with auto casting and boxing so that null values are completely safe to use and provide sane defaults. And while that makes the programming sweet and easy, it also makes your troubleshooting a lot more difficult. I know that a lot of the errors I've seen over the years were due to an auto initialization occurring.
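
In C# terms, that "checking code" often amounts to explicit defaulting; a small sketch using the null-coalescing operator:

  using System;

  class Demo
  {
      static void Main()
      {
          string maybeName = null;
          int? maybeCount = null;

          // The statically typed language makes the defaulting explicit...
          string name = maybeName ?? "";  // treat null as ""
          int count = maybeCount ?? 0;    // treat null as 0

          // ...where a dynamic language might silently auto-initialize,
          // which is convenient to write and harder to troubleshoot.
          Console.WriteLine("'{0}' {1}", name, count);
      }
  }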

Something that I think needs a lot more attention is NOT the testing stuff. Quite frankly, I am sick of hearing about it. Too many development methodologies treat testing as the goal itself. I hate to break it to the TDD folks, for example, but when code changes, it's usually because the requirements changed, not because of refactoring. And for the folks who treat refactoring as a goal, I am sorry, but it isn't and shouldn't be. Too many refactorings are the sign of poor planning or lack of skill. I see tools like ReSharper and JustCode and many of the features they offer, and I say, "if you are extracting an interface from a class so often that you need a tool to do it, something is wrong." So, if you know what you are doing, the endless refactoring gets cut. And without the endless refactoring, your need for testing goes down quite a bit, and you no longer need to make it the focus of your development work.

At the same time, though, there *are* tests that really need to be done, but aren't because they are hard or tedious. That's where tools like Pex come in, and things like code contracts. Pex rips through your code, looks for potential edge cases, and tests them. Code contracts let you declare what your code is doing in an easily tested manner. Combine the two ideas in either type of language, and you eliminate all sorts of issues.
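
A minimal sketch of the idea, using .NET 4's System.Diagnostics.Contracts (the Account example is illustrative, not from the discussion): the contract states what the code promises, and a tool like Pex can then hunt for inputs that break the promise.

  using System.Diagnostics.Contracts;

  class Account
  {
      public decimal Balance { get; private set; }

      public void Withdraw(decimal amount)
      {
          // Preconditions: what callers must guarantee.
          Contract.Requires(amount > 0);
          Contract.Requires(amount <= Balance);
          // Postcondition: what this method guarantees in return.
          Contract.Ensures(Balance == Contract.OldValue(Balance) - amount);

          Balance -= amount;
      }

      static void Main()
      {
          // Without the contract-rewriter tooling enabled, Requires/Ensures
          // are ignored at run time; with it, violations throw.
          new Account().Withdraw(0); // would violate Requires(amount > 0)
      }
  }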

Chip Camden: Re: massive refactoring as a sign of poor design -- specifically, it's a symptom of not keeping components small, modular, and doing one thing well. If the definition of what a class or function does is concise and not burdened with other concerns, then refactoring becomes rare.

That same principle makes test coverage easier, too. Not only that, but avoiding side-effects and state-based behavior means that the code can be provably correct without having to test all possible inputs (and the number of "inputs" is considerably smaller, too, if you consider state as a type of input).

Thanks to Chip and Justin for sharing their thoughts on this topic and to Justin for this topic idea.

About

Mary Weilage is a Senior Editor for CBS Interactive. She has worked for TechRepublic since 1999.

160 comments
Tony Hopkinson

First, thanks to Mr Schuster for giving me the right question for Google. I've been playing with the new .NET 4 type, dynamic. Basically it persuades the compiler to use late (as in run-time) binding on all variables of this type, so you can, say, define a List of dynamic. With a handful of lines of code, no interfaces, and no casting, I could add dates, strings, doubles, booleans etc. and get them out again. As in: Bag.Add("SomeDate", new DateTime(2010,6,23)); and Bag.Add("SomeFormattedDate", Bag["SomeDate"].ToString("yyyyMMdd")); are both OK, but string dateValue = Bag["SomeDate"]; will fail at run time, and DateTime dateValue = Bag["SomeFormattedDate"]; will as well. Indeed, the biggest problem I see would be a need to beef up testing to exercise the code and make sure the value we are expecting has the right type. More experimenting tomorrow; got to find a way round some of C#'s "I know what he meant" help. As in: Double d = 0; Bag.Add("SomeDouble", d); is fine, but Bag.Add("SomeDouble", 0); will add an int.... So Bag["SomeDouble"] + 30f; may give you a run-time surprise depending on whether the value went in through a declared double, or as a literal that could be inferred to be another type. How many of us write Double d = 0f; :(
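
Pulling those fragments together into one compilable sketch (assuming the Bag is simply a Dictionary<string, dynamic>; that's an assumption, not something Tony specified):

  using System;
  using System.Collections.Generic;

  class DynamicBagDemo
  {
      static void Main()
      {
          var bag = new Dictionary<string, dynamic>();
          bag.Add("SomeDate", new DateTime(2010, 6, 23));
          bag.Add("SomeFormattedDate", bag["SomeDate"].ToString("yyyyMMdd"));

          DateTime ok = bag["SomeDate"];  // fine: the value really is a DateTime
          Console.WriteLine(ok);

          // Each of these compiles, but throws RuntimeBinderException when run:
          // string s = bag["SomeDate"];             // a DateTime is not a string
          // DateTime d = bag["SomeFormattedDate"];  // a string is not a DateTime
      }
  }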

Tony Hopkinson

but it requires an initial investment, it introduces a delay before the first deliverable arrives, and it requires good practice. All these things are anathema to the "good enough" mantra that most of us work to. Management don't see TDD as a productivity enhancer (it is, in many, many cases); they see it as an optional extra on the quality front, and they certainly don't want to have to pay for maintaining the tests in order to develop something. Doesn't matter how much effort we can show would have been saved if we'd had tests in place first, because "if we had written it correctly in the first place it wouldn't have been a problem." :( :( :( Therefore TDD is for incompetents.... :p

Mark Miller

...not merely a cast. For example, it might work if you said: Bag["SomeDouble"].ToDouble() + 30f even if it's already a double. I know that in Smalltalk, even strings have an "asString" conversion method (which just returns "self") so that you can do stuff like this without worrying so much about what's in there. It seems to me you're just getting used to the idea. Dynamic typing does put more of the responsibility on the programmer. There's less hand-holding by the compiler. There's less predictability in code like this, because as you say, the keys don't have to accurately describe the type of thing that's in the "bag." In fact, I'd say it's really a misuse of something like the "bag" to put types in the keys, because there's no enforceable relationship between the two. In one of Sterling's recent blog posts I alluded to the idea that programming in a dynamic language is rather like programming in assembly language, just with a different set of assumptions, but you have the ability to think more abstractly. The system gives you the ability to query what you have, an ability you would not have in assembly, because there's metadata associated with data. Since the system can lack explicit, enforceable type information, it's up to the programmer to check it. Getting to OOP specifically, in a good system, it should be possible to imbue objects with some "intelligence" so that they can make things more sane. For example, if you wanted to constrain the Bag container to numeric types, you should be able to do that by overriding the Add(), and Update() methods (just speculating on the names). These methods can do the querying for you so you don't have to do it throughout your code. A different strategy you could use is filter the "bag" on a type criteria, using a lambda, which will create another collection that contains only numeric types. This way, no matter what was in the original "bag," you can be assured the filtered set is constrained. As I said at the outset of the discussion, I don't think dynamically typed systems are the total solution. What they give you is a more malleable system that is easier to change in the early stages of development. Once you have something that has been tested, and you can see works well, then I think static types are justifiable, because you want to lock down features at that point. What Richard Gabriel was saying (in the article I cited) is that enforcing type strictness too early makes a system that's much more clumsy to deal with. This tends to encourage project teams to implement work-arounds and kludges, rather than solving the problem.
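
A brief C# sketch of the two strategies Mark outlines (all names invented): a container that checks types on the way in by overriding Add, and a lambda filter that extracts a single type on the way out.

  using System;
  using System.Collections.Generic;
  using System.Linq;

  // Strategy 1: the container itself enforces the constraint.
  class NumericBag
  {
      private readonly Dictionary<string, dynamic> items =
          new Dictionary<string, dynamic>();

      public void Add(string key, dynamic value)
      {
          if (!(value is int || value is long || value is double || value is decimal))
              throw new ArgumentException("NumericBag accepts numeric values only");
          items[key] = value;
      }

      public IEnumerable<dynamic> Values { get { return items.Values; } }
  }

  class Demo
  {
      static void Main()
      {
          // Strategy 2: filter an unconstrained collection after the fact.
          var mixed = new List<dynamic> { 1, "two", 3.0, DateTime.Now };
          var doublesOnly = mixed.OfType<double>().ToList(); // contains just 3.0
          Console.WriteLine(doublesOnly.Count);
      }
  }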

Mark Miller

I used to see the same thing with documentation. Not that software developers liked writing documentation, but usually upper management didn't see document maintenance, or adding documentation to source code, as necessary. It didn't contribute to "making the box do something," and so was seen as an unnecessary cost.

apotheon

I've actually seen a book or two about test-driving GUI development. I must admit, I'm pretty curious about how they pull that off.

Tony Hopkinson

If you do: Double? nd = 50.67; dynamic dy = nd; Double d = dy.Value * -1; dy = d; then dy is now holding a double of -50.67, not a Double? with a value of -50.67... Going to test out the various scenarios tonight, but I suspect it's me holding on to the nullable types that is causing the problem, as they're superfluous while using dynamic. Paradigm change, hardest thing in coding.

apotheon

> What Richard Gabriel was saying (in the article I cited) is that enforcing type strictness too early makes a system that's much more clumsy to deal with. This tends to encourage project teams to implement work-arounds and kludges, rather than solving the problem. This ties in nicely with something I said higher up in the discussion: Dynamic type systems offer a trade-off between type errors and problem domain solution errors.

Tony Hopkinson

A class with a set of properties. New ones added, old ones removed, and some existing ones twiddled with. Classic stuff. A bit of is-ing and as-ing and a simple factory pattern. Instead of defining discrete properties TotalCosts, TaxableAmount etc., they are a PropertyBag, as in they are now this["TotalCosts"] and this["TaxableAmount"]. Some of the values are allowed to be not present, and because they used to be discrete properties, the traditional way to do this was Nullable. That sort of carried over while I was reworking the code; I gave myself a wee slap after realising that "not in the bag" could be treated the same as "in the bag but null".... So I was shooting myself in the foot trying to replicate a consequence of having a discrete property in a dynamic bag. I've found a couple of places where there's no choice but to cast; the compiler not being able to dispatch lambdas correctly is one, and the other might be the same thing in a different guise. Compile-time error on this: dynamic somevalue = "Fred"; String[] names = {"Fred","Wilma","Betty","Barney"}; if (names.Contains(somevalue)) { ... } I can deal with those, though, as I can assume somevalue is a string, because I want to see if it's in names... Loading and saving the bag to XML is going to require a bit of type checking, though, as it's casting to and from string, with specific formats for currencies and dates. A few ways to go with that. Need to look at performance as well, as the overhead of late binding adds up. I've already thrown away six classes that would have needed a factory, and I'm getting to where all my calculate/transform routines operate on the PropertyBag type, so I'm throwing yards of code away, and you can see the design as it's no longer obscured by extreme amounts of scaffolding. Not saying we should all move to dynamic, but for this particular need it's definitely better suited. And I'm learning stuff.
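
A minimal sketch of the PropertyBag shape Tony describes (hypothetical; his real class surely differs), in which "not in the bag" and "in the bag but null" deliberately read the same:

  using System.Collections.Generic;

  class PropertyBag
  {
      private readonly Dictionary<string, dynamic> values =
          new Dictionary<string, dynamic>();

      public dynamic this[string name]
      {
          // Absent keys read back as null, so "missing" and "null"
          // are treated alike -- no Nullable<T> needed.
          get { dynamic v; return values.TryGetValue(name, out v) ? v : null; }
          set { values[name] = value; }
      }
  }

With that shape, a read of this["TaxableAmount"] simply yields null whether the key was never added or was added as null, which is exactly the collapse described above.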

Tony Hopkinson

So you found that film of the management team, three bi-sexual trannies and a small dog then, in that motel in Utah then? Should be good for TDD as well.

Tony Hopkinson

Mechanics aside, how often is a GUI defined and steady enough for TDD to be worthwhile? Even storyboarding doesn't cut it when non-standard behaviours are required. A lot of messing about for very little reward, as far as I can see. Do as little as possible in the UI, so you can test as much as possible with easier-to-use stuff, seems to be the only way to go at the moment.

apotheon

It's not even suggestive, as far as I've determined. Maybe it's a boot trademark thing. Let's see if this comment vanishes. . . . edit: Evidently not, even with a word that combines those three letters in it.

Sterling chip Camden

According to AnsuGisalas, Palmetto discovered that any comment that contains the four letters s u g g (together, without spaces) gets dropped. Why, I have no idea. Urban dictionary treats that word pretty tamely.

Sterling chip Camden

I've often used that same "defensive" approach. Then after a dozen times of copying the text before posting it without a hitch, I start forgetting to... until it fails again.

Mark Miller

One thing I notice is that we've tried to add a lot of replies to this one thread ("The language doesn't help, either"). That may be a reason comments are disappearing. In any case, I used to put up with this on comment systems, where comments would get lost. The "defensive posting" technique I developed just through "hard knocks" was taking a quick copy of the entire text of my note before submitting. I had to get dinged once, though, to remind me to do this, since I didn't make it a habit. If the comment didn't take, I'd go back and paste from the clipboard, and try again. The second try would usually work. I know we shouldn't put up with this, but until this is replaced with something better...

apotheon

After all of that, making my commentary shorter and less informative, then experimenting with editing and splitting up into multiple comments and so on, I finally got my point across. I have no idea why it's so difficult to post comments sometimes. (Maybe it was the parentheses.)

apotheon

I got ignored, of course -- but reddit's system solves every major problem Mark Miller addressed. The current system at reddit is less good than it was back then, but it still solves all those problems. Of course, good ideas that were ignored by CNET are even more likely to be ignored by CBS.

apotheon

Back before CBS bought the company, when the developers solicited ideas and feedback for future site upgrades, I pointed out reddit as an example of a well-designed discussion system.

apotheon

Check out reddit's discussion interface. It's much better.

apotheon

A better discussion system is desperately needed. As evidence of that fact, I'm forced to try to "trick" the TR discussion system into letting me say something meaningful, because it is trying to eat anything that says something other than an indictment of the brokenness of the comment system. Even when I successfully post something stupid and meaningless, then edit that comment to contain useful material, the comment then gets eaten.

apotheon

Hopefully, I'll be able to post another comment following this one that says something meaningful. This is me, trying to circumvent TR's tendency to eat discussion comments.

AnsuGisalas

The way I was taught maths, by my late father, it was always a thing of beauty, not to mention a game to play, like chess. And that was in school, 7th to 9th grades. Some things build on other things, so that the prerequisites have to be laid down... but the way to look at things, at what the language of maths can say, at how the truth can be obfuscated and rediscovered, at how things can be mutated and remutated - that never ought change. Any damn fool can make advanced mathematics interesting to people who have the prerequisites (not that all damn fools necessarily succeed), but it ought to be something everybody gets to experience, even if they don't pursue maths to the better end.

Sterling chip Camden

I like the way Alan Kay compares Smalltalk to a minor Greek play. Chris Crawford's point is well taken, but he's wrong about the personal interest in Egyptian literature. There's a lot of very interesting historical, wisdom, and poetical literature available. A good sample can be found in The Ancient Near East, 2 vols., ed. James B. Pritchard. There is a difference between Greek and Egyptian literature, but it isn't caused by who's writing. The Greeks thought differently than other ancient peoples. They were the oddballs of the ancient world, and they only seem more human to us because we have inherited their ideas.

Sterling chip Camden

... and you can adjust the maximum depth up to 10 levels. Of course, even 10 levels wouldn't suffice for these kinds of discussions.

Mark Miller

The analogy to ancient Egyptian ways of doing things seems pretty appropriate. Sometimes an analogy to Medieval times seems appropriate, too. Alan Kay talked about some ancient Greek and Egyptian analogies in the interview I cited above with Stuart Feldman (I quote from it below). I cited this interview often on my blog. There is a lot of meaning in what he said. One of the things he said was that our software today is like the ancient pyramids, with a similar work model: big piles of stuff piled on top of each other, constructed with the exertions of "slaves." He said, though, that the pyramids had "no structural integrity." I think that's where the analogy falls apart, actually, because there were some really well-built ones that have lasted for a few thousand years, despite being built in a region that has earthquakes from time to time. The pyramids at Giza are the only ancient wonders of the world (as identified by the people of the Mediterranean region at the time) left standing. Kay contrasted the pyramids to the cathedral at Chartres in France, which was able to achieve "bigness" and structural integrity while using a lot less building material, and at the same time allowing people inside a lot more space.

AK: In the last 25 years or so, we actually got something like a pop culture, similar to what happened when television came on the scene and some of its inventors thought it would be a way of getting Shakespeare to the masses. But they forgot that you have to be more sophisticated and have more perspective to understand Shakespeare. What television was able to do was to capture people as they were. So I think the lack of a real computer science today, and the lack of real software engineering today, is partly due to this pop culture.

SF: So Smalltalk is to Shakespeare as Excel is to car crashes in the TV culture?

AK: No, if you look at it really historically, Smalltalk counts as a minor Greek play that was miles ahead of what most other cultures were doing, but nowhere near what Shakespeare was able to do. If you look at software today, through the lens of the history of engineering, it's certainly engineering of a sort, but it's the kind of engineering that people without the concept of the arch did. Most software today is very much like an Egyptian pyramid with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves.

SF: The analogy is even better because there are the hidden chambers that nobody can understand.

AK: I would compare the Smalltalk stuff that we did in the '70s with something like a Gothic cathedral. We had two ideas, really. One of them we got from Lisp: late binding. The other one was the idea of objects. Those gave us something a little bit like the arch, so we were able to make complex, seemingly large structures out of very little material, but I wouldn't put us much past the engineering of 1,000 years ago. If you look at [Doug] Engelbart's demo [a live online hypermedia demonstration of the pioneering work that Engelbart's group had been doing at Stanford Research Institute, presented at the 1968 Fall Joint Computer Conference], then you see many more ideas about how to boost the collective IQ of groups and help them to work together than you see in the commercial systems today. I think there's this very long lag between what you might call the best practice in computing research over the years and what is able to leak out and be adapted in the much more expedient and deadline-conscious outside world. It's not that people are completely stupid, but if there's a big idea and you have deadlines and you have expedience and you have competitors, very likely what you'll do is take a low-pass filter on that idea and implement one part of it and miss what has to be done next. This happens over and over again. If you're using early-binding languages as most people do, rather than late-binding languages, then you really start getting locked in to stuff that you've already done. You can't reformulate things that easily.

Going even deeper into this analogy, I really like what Chris Crawford had to say about it, though keep in mind this was written in the mid '90s. He's been rewriting this stuff, so the old links to it are all broken. What he communicates is that there's more at stake than understanding general principles. It seems like from his view that's all background for a new way of thinking. The lead-up concept for this argument is that we need a new alphabet:

Remember, writing was invented long before the Greeks, but it was so difficult to learn that its use was restricted to an elite class of scribes who had nothing interesting to say. And we have exactly the same situation today. Programming is confined to an elite class of programmers. Just like the scribes, they are highly paid. Just like the scribes, they exercise great control over all the ancillary uses of their craft. Just like the scribes, they are the object of some disdain -- after all, if programming were really that noble, would you admit to being unable to program? And just like the scribes, they don't have a damn thing to say to the world -- they want only to piddle around with their medium and make it do cute things.

My analogy runs deep. I have always been disturbed by the realization that the Egyptian scribes practiced their art for several thousand years without ever writing down anything really interesting. Amid all the mountains of hieroglyphics we have retrieved from that era, with literally gigabytes of information about gods, goddesses, pharaohs, conquests, taxes, and so forth, there is almost nothing of personal interest from the scribes themselves. No gripes about the lousy pay, no office jokes, no mentions of family or loved ones, and certainly no discussions of philosophy, mathematics, art, drama, or any of the other things that the Greeks blathered away about endlessly. Compare the hieroglyphics of the Egyptians with the writings of the Greeks and the difference that leaps out at you is humanity.

Mark Miller

I've seen a bunch of comment thread systems over the years, and they all have this deficiency where they think it's sufficient to indent replies. They all eventually run into their "indent limit," and you end up with what we have here. I've seen worse like over at ZDNet, where it only indents a couple levels, or the blog hoster I use, Wordpress, which doesn't recognize discussion threads at all, much less let you edit your comments after posting them (I appreciate that feature very much!). Come to think of it, I've seen worse than this, like at Blogger.com and Facebook... The issue, to me, is how to accurately represent discussions so that people can follow them without getting confused, paying attention to what they want, while ignoring what they don't care about. I haven't seen anyone come up with an adequate solution.

apotheon

Many programmers don't even want to know the principles, and think knowing those principles is irrelevant to their lives and jobs.

Sterling chip Camden

... has been most interesting to read. It's unfortunate that when the comments reach this depth, we can't reply to them directly -- and even more unfortunate if any of those replies get lost.

Sterling chip Camden

... who knew that a triangle with sides of 3, 4, and 5 units made a right angle, but apparently did not know the Pythagorean theorem. They knew what worked, and they could sometimes creatively apply it to new problems, but they weren't aware of the general principles.

apotheon

I don't really have much to add to your responses to me. I found them interesting to read, and appreciate your contributions to the discussion; I just don't have anything to contribute following them, at this point. I'm saying this so you don't think I'm ignoring you (or that TR is eating my comments). I've been running into the problem of comments that simply vanish when I click the "Submit Reply" button more and more often lately. There are some threads where the vanishment continues to occur no matter how many times I try again, so I just stop responding there because TR won't let me.

Mark Miller

Alright, that seemed to work... Trying again. That's an interesting take on it. The challenge for me felt more like understanding the mechanics, and then seeing what could be done with it. It's only been more recently that I've started to understand the philosophy. I wasn't horrified when I encountered Lisp. It was just really confusing. What horrified me was I had finally met a language that had stumped me! My problem was what Tony said, which was that I was so used to one style of programming that when I was confronted with something completely different, I felt utterly disabled. I didn't know where to begin. Everything in Lisp had this look of "sameness" about it. There were no hard markers to distinguish data from functionality, as I had seen in all the other languages I had used. The execution model was also totally different. I was used to languages where a program counter would go to each instruction, and then advance down the line. You changed state by changing variable values. Not so in FP. I think it was some of the other math and CS classes I had taken, after my exposure to Lisp, that helped it make sense in retrospect.

Mark Miller

I tried posting a response earlier, but it's apparently gone into a void. So trying just a quick thingy here.

Mark Miller

I reflected recently on an interview with Alan Kay that I read back in '06 where he talked briefly about C++, and your comment brought it to mind again. It seems like he used to have a visceral reaction against it. There's a famous quote of his where he said, "I coined the term object-oriented programming, and C++ isn't what I had in mind." Here's what he said in the interview:

You have to be a different kind of person to love C++. It is a really interesting example of how a well-meant idea went wrong, because [C++ creator] Bjarne Stroustrup was not trying to do what he has been criticized for. His idea was that first, it might be useful if you did to C what Simula did to Algol, which is basically act as a preprocessor for a different kind of architectural template for programming. It was basically for super-good programmers who are supposed to subclass everything, including the storage allocator, before they did anything serious. The result, of course, was that most programmers did not subclass much. So the people I know who like C++ and have done good things in C++ have been serious iron-men who have basically taken it for what it is, which is a kind of macroprocessor. I grew up with macro systems in the early '60s, and you have to do a lot of work to make them work for you -- otherwise, they kill you.

I also reflected on some other articles I read talking about how C++'s template system is itself a Turing-complete language, and how one could use it as a code generator. I don't know if this is what you're talking about, but it seems like the tools are there, in the sense of a kind of "primordial soup," to use C++ in a bootstrap process to create a language more suited to the problem you're trying to solve, whatever that may be. It's not that you'd be getting out of the C++ environment, but you wouldn't be working with native C++ code as most C++ programmers understand it.

Re. the way math is taught in college: I agree that the way math is taught at the undergraduate level sucks. From my experience, talking to others who managed to slog through it, it gets a lot more interesting in the higher-level courses at some point. I remember talking to a fellow undergrad once about Numerical Analysis (I believe it was), and he was all excited about it, because they were actually proving all those arithmetic rules we learned in primary school. From what I've heard, math education gets a lot better in graduate school and beyond, because they're much more focused on it as a thing of beauty, and really understanding it, as opposed to just telling you, "Do this. Don't ask why. Just do it." A book I've read, called "The Art of Mathematics," by Jerry King (a math professor at Lehigh University), explains why undergrad math has been the way it is. He said when he got into graduate school, one of his math professors told him, "It's amazing any of us got this far. The only reason we have is we understood ourselves how beautiful it is." What he was talking about was just what you were complaining about: that what gets presented as math at the undergrad level is really bad. It doesn't communicate what math really is, and the primary reason for it (and he can say this because he's been on the inside where mathematicians talk about this) is that it's a common belief among mathematicians and non-mathematicians that "You're just born with it." Either you get it, or you don't. Most don't believe that it's possible to teach what mathematics really is to those who "don't get it."
But it's seen as a requirement for some to be educated (like scientists and engineers), and so they present it in a bastardized way just so it can be applied to real world problems. Once you get into grad school, then it's assumed that you actually "get it," and so they deal more in what it really is. It's an unfortunate state of affairs, and King is all for trying to change people's attitudes about it, but it's a very ingrained belief in the system that's difficult to dislodge.

Tony Hopkinson

to program from day one. Take a well understood problem, which isn't going to change, and describe it in this language. Abstraction gets a vague mention and is intrinsically unimportant to achieving a result. Start off like that, and very quickly you start understanding problems by programming a solution, and in your chosen language as well. So you are only looking at it with the provided tool. Institutionalise a tool (or type of tool) and soon everything's a nail and we bang out another guess.... It's why I always recommend programmers use more than one language, and preferably more than one type of language; it's the fastest way for us detail-oriented, micro-focused types to see that there is more to the picture.

apotheon

The really advanced math stuff, despite being centuries and entire orders of abstraction away from its distant roots in Greece and the like, tends to be a heady space to live (said the guy who is largely familiar with that advanced stuff only vicariously) -- because it forces the practitioner to live in the same place as those early mathematicians, in the realm of philosophy. That's where the actual critical thinking, synthetic reasoning, principle-oriented analysis of emergent patterns, and heavy abstraction in general occurs. That's the math people tend to fear -- and it's not the math so much as the meaningful philosophy behind it, I think. The rest of it is just hard work, but easily applied hard work over time (if that makes any sense).

This is one reason bureaucratic languages do not scare most daycoders the way languages based on less common principles -- languages that provide the programmer with powerful tools for abstraction -- can terrify them on some primal level. These languages require a bit of philosophizing to really grok them and wield them well. Ruby, I think, lands in a bit of a hybrid zone between bureaucratic languages and the highly abstracting languages, where one can ignore the philosophical aspects of it and still get a lot of work done, but one can also dive into the philosophical underpinnings of it to do some very interesting things. C++, believe it or not, seems to offer a context that blurs the same line, but does so in a very different manner that requires a lot more dedication to reach one's reward, and punishes errors far more grievously.

It occurs to me that this very state of affairs might be what makes some computer scientists value C++ so much: it provides the benefits of a deeply philosophical language to someone willing to go to the extreme lengths required to get there in C++, but makes you work so hard at it, and offers so many hazards along the way, that only the most precise and exacting programmers can get there without basically blowing themselves up, thus ensuring that one not only feels like one has earned it, but also that those people are such a rarity that they get to feel like gods among men. Of course, a lot of C++ devotees only get halfway there (if that) but think they're among those same elite thinkers.

I'm lazier than that. I like the extremely easy access to the nifty philosophical value of Ruby. It still requires some advanced abstract reasoning, but you can get to the point where the abstract reasoning begins in a few days if you're really focused, whereas it takes years with C++ to do so without pretty much setting yourself on fire in the process. I guess it's a bit like the difference between taking a class in symbolic logic on one hand, and on the other taking three years of calculus, eventually absorbing the same principles as symbolic logic in the process. I've also gotten into symbolic logic in college, but never really got into the advanced math track (I started down that road, but took a detour because of the awful, counterproductive, bureaucratic indoctrination approach taken by most college math instructors).

Mark Miller

I think that's definitely an influence. A lot of the IT software projects I worked on were for bureaucracies. I think that's the major market for IT, which would partly explain why languages like C# and Java prevail in the marketplace. Bureaucracy is algorithmic, which would explain why these languages emphasize that approach.

Another part of it is computer technology was created in the midst of the industrial era, when it was thought that a machine is an instrument for carrying out work, to automate it. This gets a little into what Paul Murphy discussed over at ZDNet (he doesn't seem to be active over there anymore). So the point is to operate it, to tell it what to do, and let it do the work. There is a strong emphasis in the popular languages on "commanding data." There is an operational framework that only embodies actions that need to be taken. This operates on "source material" (data)--data processing--to produce an end product. Since then the industry has discovered data abstraction, which can help make the complexity of the relationships in data easier to deal with--if you know how to handle that. So the culture understands the idea of "telling a machine what to do," and it understands how following a recipe produces an end result. So that's often how programming has been presented.

I don't know if this is the whole story, though. Paul Graham told the story of how he started one of the first web application companies in the mid-90s, using Lisp. He beat out all of the competitors in his market space (who were using C++ and Perl). Ultimately his company got bought by Yahoo, and they turned it into their own product. The epilogue to the story is that Yahoo ended up rewriting most of his service in C++ and Perl. Why? They said they couldn't find enough programmers skilled in Lisp to maintain it. He thought that was a red herring. He was able to manage it just fine with just him and his business partners. That was one of his advantages. He didn't have to spend a lot of money on staff to add improvements to the service, and keep ahead of his competitors. He couldn't explain, though, why Yahoo made the switch beyond what they said.

It seems to me that most programmers just don't understand what these languages represent. They can't get their head around it. I can sympathize. I was once like that. When I first encountered Lisp in college, it gave me a headache. The people who get it tend to have some skill in mathematics. I've been discovering that as a culture we Americans almost abhor mathematics. I don't mean arithmetic, manipulating numbers with the operators we use. I mean thinking about abstract concepts, and their relationships. I think that's a larger reason why these languages haven't succeeded. We've written most of the software that exists, and so even in the rest of the world, where these languages might have a better chance, they tend to use what we use.

Sterling chip Camden

I'm reminded of Tracy Kidder's book The Soul of a New Machine. In one scene, Tom West of Data General goes to a newly installed VAX site and starts pulling out boards to learn about its architecture. He concludes that it suffers from the same bureaucracy that hampers DEC's corporate organization.

@Mark Miller: I "got" Ruby before I "got" Lisp, but the latter was still an amazingly enlightening experience. When I "got" Haskell, that was yet another step. The funny thing is, it's possible to express the enlightening principles of any of these languages in both of the others, but the design of the specific language is what leads you to them (OK, in the case of Haskell, it drags you there, while beating you mercilessly with immutability).

apotheon

The problem, in cases like C# and Java, might just be that they are languages for people working within a bureaucratic organization. It seems that the most successful tools (that is, those that have the most success in becoming popularized and respected) are those whose designs reflect the organization of their intended userbase. In the case of programming languages, that means languages that have a form choked by bureaucratic red tape -- languages like C# and Java. The principle is about like Conway's Law: if you have four groups working on a compiler, you'll get a four-pass compiler. The difference is that this principle turns the chicken and egg relationship inside out. I'd love to see the organizational model that matches the form of Lisp (in principle). Maybe it's actually the Common Lisp community: quirky, lazily academic, cryptic, condescending, and claiming guardianship of the Great Secrets of the Universe.

Mark Miller

Just to add a little more emphasis to this, as I said as well, the language can obscure the concepts. When I saw this in C# 2.0, for example:

  // Create a handler for a click event
  button1.Click += delegate(System.Object o, System.EventArgs e)
      { System.Windows.Forms.MessageBox.Show("Click!"); };

I thought, "Okay, I can see the use in that. When else would I use it?" I didn't get why MS put in anonymous methods. I figured that was a lot of work to put it in just to be able to do that. They must've had some other reasons they enabled it. It wasn't until I looked at Smalltalk and saw something like this:

  numbers select: [:num | num even]

(filtering to create a collection of even numbers out of a collection of numbers, using a lambda) that it dawned on me how powerful the idea was. I was able to connect that, "Oh, a lambda is like an anonymous method." Part of it was the examples I saw illustrated stuff like this. They used the language differently. A good part of it as well was that the language took out a lot of the cruft that existed in the popular languages. It took a kind of "less is more" approach. To kind of illustrate this, if I were to translate the above C# into Smalltalk I would write something like:

  button1 subscribe: [:o :e | MessageBox show: 'Click!']

(sans namespaces) and be done with it. It's less wordy. It says in a few characters what takes several in C#. It gets the idea across. Now, once you learn this, though, going back to C# can feel frustrating, because you'll realize you don't have quite the freedom you did in the language where you learned this stuff. That's the downside. You realize the craphole you've been working in for a while.

I remember I was just getting acquainted with the idea of creating my own language constructs in Smalltalk. I added a method to the Array class to allow me to test a bunch of conditionals, to test if any were true, and if so, to execute a piece of code:

  {a < b. b >= c. m = d} ifAnyAreTrue: ['do something']

I liked this better than stringing them together with a bunch of OR operators in a traditional if-statement. It expressed what I was doing more clearly. I tried this in some version of C# (probably 3.0 or what have you). I was able to get this to work, but once I looked at it, I thought, "Why would I do this?" It required wrangling with types, and so it took much more code to pull off. An if-statement looked so much more succinct. The language designers didn't think of programmers using it like that. That was obvious.

So not all of the knowledge you'll gain translates so well. Even so, my point is that you'll get more ideas for what you can do that would probably make your work a little more pleasant, making the language work for you, rather than you working for it. I'll close with a quote from Eric Raymond. He talks about using Lisp, but it would apply to learning any higher-functioning language (taken from Paul Graham's article "Beating the Averages"): Lisp is worth learning for the profound enlightenment experience you will have when you finally get it; that experience will make you a better programmer for the rest of your days, even if you never actually use Lisp itself a lot.
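
For comparison, the Smalltalk select: example above translates almost one-for-one into the LINQ syntax C# 3.0 later added; a small sketch:

  using System.Collections.Generic;
  using System.Linq;

  class Demo
  {
      static void Main()
      {
          var numbers = new List<int> { 1, 2, 3, 4, 5, 6 };
          // numbers select: [:num | num even]  -- the Smalltalk version
          var evens = numbers.Where(num => num % 2 == 0).ToList(); // 2, 4, 6
      }
  }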

Sterling chip Camden

I find that to be a chronic problem with the MSDN documentation. It takes more of a "cookbook" approach, and doesn't deal with concepts or abstractions very well at all.

Tony Hopkinson

of getting my head round the paradigm shift. Ruby on Rails is on the horizon, and I have a plan to see if we can do more with IronPython than we do now. But I'm not going to get to move away from C# anytime soon as far as this job goes, and the dynamic type is just too damn useful for this scenario to ditch. So it's going to be the somewhat unsettling mix of some dynamic stuff in a static environment. I'll get there...

Mark Miller

Reading through your comments, it kind of seems like you're seeing nullable types and dynamic typing as synonymous. They are not. The point of a nullable type is that it can either have a value of a particular type, or have no value at all (null). You can probably see it as a struct with two variables: a value property, and a null property (which would be true or false). You can think of a variable labeled "dynamic" as a placeholder for a value, whatever type it is. A dynamic variable can take a value of any type: Double, Int, String, etc. Nullable or not, it doesn't care. A Double? variable can only be a double value, or null. A dynamic variable gives your code something to hold onto -- a typeless alias to a value -- even though it doesn't know what it has in its grasp. Technically, the way .NET implements it, it may be more like Variant in VB, where it's a container for a type.

I noticed your use of the Value property on the dynamic variable. It looks like it was able to take the value from the nullable Double, which is likely just a double value that is no different from what a Double variable would hold, and assign it to the Double variable. I haven't tried this, but what I mean is if you had: Double? nd = 50.67; Double d = (Double)nd; the cast would work, but: Double? nd = null; Double d = (Double)nd; would fail, regardless of whether you use a dynamic variable in between.

Something I found really helped me grasp concepts of dynamic typing and lambdas is to learn a language where these are first-class concepts that are used all the time. There are plenty of them to choose from. I found that staying in the realm of the popular languages that are used in software development never let me experience the true power of these concepts, because the language tended to obscure them, as did the descriptive documentation for these language features that you can find on MSDN and related blog sites. It's not that they implemented or talked about them wrong. It's that they see them in a utilitarian way, like, "You can use this for this sort of thing." That might help you solve a problem, but it doesn't really clear the cobwebs and let you in on what this stuff really is. I'm not saying get totally away from C# and never use it again, but that if you saw a clearer model of these concepts, they would be easier to understand. Once you realize what's going on, you can more effectively use them in C#.
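
The distinction, as a compilable sketch (illustrative values only):

  using System;

  class Demo
  {
      static void Main()
      {
          double? nd = 50.67;     // a double value or null -- nothing else, ever
          double d = (double)nd;  // needs a cast (or nd.Value); throws if nd is null

          dynamic dy = nd;        // a placeholder for a value of any type
          double d2 = dy * -1;    // bound at run time; works because a double is inside
          dy = "now a string";    // legal: dynamic doesn't care what it holds
          Console.WriteLine(d2);
      }
  }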

AnsuGisalas

but they kept shouting at the trannies to spend more time renting a dog, and less time writing up the specs.

Mark Miller

I got the idea from what I'd heard about the SAGE air defense system, constructed in the 1950s. They knew stuff would burn out on it, including instructions which would no longer work. So they programmed in some redundancy, where if an instruction failed, they put in several other strategies it could use to accomplish the same task, using alternate instructions. Alan Kay said the system "crashed very slowly." The triage approach was not as efficient, but the system would remain operational for a while longer after a partial system failure. If it was left in a dilapidated state long enough, it would eventually fail, as more stuff on it burned out, but it would "fight on" until that point. I would think the method for what I talked about would need to be improved for it to be effective. I wouldn't expect the Smalltalk Method Finder approach to be workable, because programmers would get damned tired of putting in lengthy "triage" explanations for what they wanted. After I posted that, I was reminded of a Star Trek: The Next Generation episode called "Darmok" where the alien species only spoke in metaphors. :) The basic point I was getting across is I think we will need software systems that are able to work in an environment of bounded nondeterminism. The way the internet works, with TCP/IP, is a great example of that.
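
The "crash very slowly" idea can be sketched in a few lines of C# (purely illustrative): try alternate strategies in order, degrading rather than failing outright.

  using System;

  class Triage
  {
      // Try each strategy in turn; fall through to the next on failure.
      static T FirstThatWorks<T>(params Func<T>[] strategies)
      {
          foreach (var strategy in strategies)
          {
              try { return strategy(); }
              catch { /* this path "burned out" -- try the next one */ }
          }
          throw new InvalidOperationException("All strategies failed");
      }

      static void Main()
      {
          int result = FirstThatWorks(
              () => int.Parse("not a number"),  // fails...
              () => 42);                        // ...so this one answers
          Console.WriteLine(result);
      }
  }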

Tony Hopkinson

MethodFinder: doable, I should think, though quite messy, and damn slow unless it was the 2nd or 3rd phase of a triage approach. But there will always be a no-can-do point. At the moment it's "no such function Foo with this signature for type T". Moving that further on is a cost/benefit/risk assessment as far as "Cloud" service providers are concerned, so I suspect instead of your all-singing, all-dancing approach we'll get a quick, cheap and nasty short-term solution that will be yet another long-term problem IT failed to address....

Mark Miller

This is a problem I see with software in the future, as software development moves more into decentralized processing (for ex. parallel processing and "the cloud"). Our current development systems insist on simple determinism, and this isn't going to scale up to the large, distributed systems that I think will be developed in the future. In answer to the question, "What would you replace this with," I think one thing that needs to be done is to come up with computational processes such that software is able to find functionality based on criteria. Smalltalk has had a rudimentary system for doing just that, though it looks experimental, called "Method Finder" (it has a class library that goes with it. So presumably software could actually use this), where you can give it an expression like "3. #(1 2 3). #(4 5 6)", which is asking, "I'm looking for a method that will take two inputs, and produce the specified output (the 3rd argument)," which in effect is a scalar addition over a collection. Now, imagine for a moment a programmer wrote up some code to try to do this scalar addition, using some message, and included code similar to this if it failed. It's not looking to compute that exact result, but it's using the expression to find the operator it needs to do the computation it actually wants to do. I tried this in Method Finder, and it came up with: 3 + #(1 2 3) --> #(4 5 6). So the "+" method will actually do the job. This is the sort of thing I think will need to be incorporated somehow (probably in a more succinct form than this) into future programming systems, particularly if systems are going to distribute processing over the internet, because I have a feeling there will be situations like this where functionality on a server that another service was depending on goes away. Rather than have systems that just fail when this happens, we can have systems that have some adaptive capability, that will have some ability to find what they need on the internet, kind of like we're able to do.
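
A rough C# analogue of the Method Finder idea (an illustrative sketch only; Smalltalk's tool is far more general): given a receiver and the output you want, probe its methods by reflection and report any that produce it.

  using System;
  using System.Linq;
  using System.Reflection;

  class MethodProbe
  {
      static void Main()
      {
          object receiver = "hello";
          object wanted = "HELLO";

          // Probe every public zero-argument instance method, invoke it,
          // and keep the ones whose result matches the desired output.
          var hits = receiver.GetType()
              .GetMethods(BindingFlags.Public | BindingFlags.Instance)
              .Where(m => m.GetParameters().Length == 0 && m.ReturnType != typeof(void))
              .Where(m => { try { return wanted.Equals(m.Invoke(receiver, null)); }
                            catch { return false; } });

          foreach (var m in hits)
              Console.WriteLine(m.Name); // e.g. ToUpper, ToUpperInvariant
      }
  }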

Mark Miller

Sorry about that. I was just kind of realizing some things as I was writing it. That's always rewarding for me, but perhaps onerous on the reader. I did talk about all of the cases in my long comment:

  '1' + 2 yields 3
  1 + '2' yields 3
  '1' + '2' yields '3'

In the 2nd paragraph from the bottom I talked about String's concatenation operator. It's "," (comma). So to concatenate '1' and '2' I would say: '1', '2' which would yield the string '12'.

apotheon

> I meant neither explicitly declares a type, people see that as untyped as opposed to sort of deferred. Some statically typed languages require no explicit type declarations, either. Consider OCaml, for instance. > Wasn't picking on you or anything, mainly I was responding to Mark's point that weak typing might be valuable, which I strenuously disagree with not because he's wrong, but because I'll have to fix the garbage that results from it. Maybe I've gotten lost in the discussion somewhere along the way, but I think any suggestions that weak typing is valuable (Was that pun intentional?) was a mistaken switch from static/dynamic to strong/weak, and not an actual intentional statement that weak typing itself is valuable. > Been there done that, no desire to do it ever again, damn VB and variants... I certainly have no sympathy for VB and its variants.

Tony Hopkinson

:D When I said "looked like," I meant neither explicitly declares a type; people see that as untyped, as opposed to sort of deferred. Wasn't picking on you or anything; mainly I was responding to Mark's point that weak typing might be valuable, which I strenuously disagree with -- not because he's wrong, but because I'll have to fix the garbage that results from it. Been there, done that, no desire to do it ever again. Damn VB and variants...

apotheon

Ugh. Hopefully this won't be redundant. I wrote up a comment to cover this stuff already, and it seems TR's discussion software just threw it away in its entirety when I hit the Submit Reply button. I'm trying again.

Tony Hopkinson: I really don't understand what you meant when you said "many see dynamic as weak because code wise it looks the same." I thought you meant that with both weak and dynamic type systems you'd write code that looked like "11" + 1 and get a useful response, so I pointed out that would not be the case in at least some instances (as in Ruby, thus my code example). Then, you went on to react as though I had assumed you said something stupid when in fact you hadn't meant that at all; you said "Which bit of my '11' + 1 didn't look you like your example?" Okay, so I'm lost now. What the heck did you mean about the code looking the same in a weakly typed system and a dynamically (but presumably strongly) typed system, when the result in both Ruby and a statically typed system will be the type system puking? You also said "Course only a complete berk would write this even in a weakly typed language." The fact, though, is that some weakly typed systems would not have any problem with that at all, and would just yield a result of either 12 or 111 without blinking, while strongly typed systems in both dynamic and static cases would often puke when they ran into that code, absent some interesting computational models like that described by Mark Miller in his explanation of the "11" object reacting to the message by passing off responsibility for resolving type mismatches to the argument object, the 1 integer. So . . . maybe we can rewind a moment and you can tell me what you meant by "many see dynamic as weak because code wise it looks the same."

Mark Miller: The way you described Smalltalk handling "11" + 1 makes a lot of sense with the assumption of the language's computational model as I understand it, and of it being a strongly and dynamically typed language. When you described "1" + "2" and "1" + 2, you left out what seems (to me) to be the obvious test -- 1 + "2". Of course, I expect the same result, given the "1" + "2" result. I take it Smalltalk doesn't use + as a concatenation operator, then. What does it do for concatenation?

(It occurs to me that in my original attempt to write this up, I misread something in Mark Miller's code and totally hosed up this part of the response anyway, so maybe it's for the best that TR's discussion software screwed up and threw away my comment.)

Tony Hopkinson

there is no + operator for integer that takes a string for the other operand, or, assuming + is the concatenation operator for strings, none where the other operand can be an int. I don't want it to cope with that; if I did, I'd use some crap weakly typed environment and save myself a load of work, and give my customer flakey crap. I see very little intelligence in int + Int.Parse(somevalue). If my input was a mixture of ints and int.ToString()s, I'd convert the strings to ints in one operation, and fail there when it decided that "won and read" is not a number. Not six million cycles further in, somewhere completely different....
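
Tony's "convert in one operation and fail there" approach, as a small illustrative C# sketch:

  using System;
  using System.Linq;

  class Demo
  {
      static void Main()
      {
          var raw = new[] { "1", "2", "won and read", "4" };
          // Fail right here, at the conversion boundary...
          var numbers = raw.Select(int.Parse).ToList(); // FormatException on "won and read"
          // ...not six million cycles later somewhere completely different.
          Console.WriteLine(numbers.Count);
      }
  }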

Mark Miller

> "1" + 11 = "111"; or 11 + "1" = 12; in a strongly typed dynamic language both should be run time errors.

I disagree. It depends on the system. In a language like Common Lisp, this would be considered a runtime error. Contrary to popular belief, Common Lisp is strongly typed, just not statically typed. As I pointed out in an earlier comment, however, if you give Smalltalk an expression like 1 + '2', it will come back with 3. It's not that it's weakly typed. It's that the "native" objects are designed to be accommodating to other "native" objects, and so they infer some meaning from expressions (though the meaning they infer may not agree with what you meant). This is part of "the Smalltalk way."

One of Alan Kay's principles of objects is that an object is a computer, and so it can emulate any other computer. In Smalltalk, 1 is most definitely a SmallInteger. It will never be anything but that. Same with '2'. It's a String. The difference is that these objects don't assume things about each other. They're not designed to act like, for example, "I'm a String, and I only interact with other Strings, doing String things." Any object can potentially act like any other object.

Like I was saying earlier, objects of different types are able to "negotiate" with each other. That's kind of a disingenuous way of describing it, because there's no real back and forth going on. It's more like what Alan Kay described as "goals." SmallInteger received "+" and some object it hasn't identified yet. It delegates the operation ("+") to the other object and basically "hopes" that it can carry it out. There's no guarantee that it can, or that, if it attempts to do so, the value it comes up with will be compatible with the SmallInteger's value. The apparent "negotiation" is in the fact that SmallInteger is basically saying, "Hi. I'm a kind of Integer. Can you do this with me?" The other object, however, is a String, and it says, "Well, I'm not going to match your exact type. I'll act like a Number with you," which is compatible. So the system is conscious of types, but as I said earlier, this only applies to interfaces. The way I see things being implemented, the objects act like, "I'll tell you what I am, and give you some information I have. You just do whatever you can to make this computation work in a way that makes sense. Get back to me if you need anything."

Interestingly, I tried this the other day: '1' + '2'. Given my other experience with MFC, Java, and .Net, I expected to get '12' as a result, a concatenation of the two strings, but Smalltalk interpreted this as a kind of arithmetic operation, giving me '3'. Again, it's not that it took the strings for numbers as far as what classes they were. The first String object inferred a meaning from "+" and delegated computation to the other object (a String as well), saying, "I'm a string. Can you do this with me?" The second String converted its value to a Number, requested that the first String convert itself to a Number, and then carried out the operation. As an added "bonus," I guess you could say, the second String also inferred that since they were both strings, the result should be put back into a String. I also tried doing '1' + 2, and it came back with 3. So apparently the classes were designed to prefer numbers in arithmetic operations. Makes sense. In case you're wondering (I just remembered this), String recognizes a different concatenation operator than what most are used to: "," (comma).
I guess a short way to describe this is that Smalltalk treats a string as a kind of container, and it's able to apply operations over containers, or extract values out of them in order to apply operators.
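
Ruby's numeric coerce protocol gives a rough (and much more limited) flavor of that kind of delegation: the receiver hands the operation off to the argument when it doesn't recognize it. A minimal sketch, with a Meters class invented purely for illustration:

# When Integer#+ receives an object it doesn't understand, it asks that
# object to coerce both operands into compatible types.
class Meters
  attr_reader :value

  def initialize(value)
    @value = value
  end

  def coerce(other)
    [Meters.new(other), self]
  end

  def +(other)
    other = Meters.new(other) unless other.is_a?(Meters)
    Meters.new(value + other.value)
  end

  def to_s
    "#{value}m"
  end
end

puts Meters.new(2) + 3  # 5m -- Meters#+ wraps the Integer itself
puts 3 + Meters.new(2)  # 5m -- Integer#+ delegates via Meters#coerce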

Mark Miller

I was getting confused about the terminology. What Sterling said really cleared it up. I said "strong" when I meant "static." What you said got me thinking about this thing called traits in Smalltalk. You could say that Smalltalk is a strongly typed but dynamic language, because you are able to associate a class with its interface. Other classes can have the same or a similar interface, but they typically don't unless they are somehow related (like integers and floats being numeric). Traits seem to turn Smalltalk into a weakly typed language, rather like C, because they disassociate the interface from the class. So you can have any number of classes with the same "traits," but they can also have their own unique methods. So it can be like the native types in C, where they're all the same thing under the covers, and the only real difference between them is their bit-width. Interesting.
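
Ruby's modules are a loose analogue -- an interface detached from any single class. A hypothetical sketch:

# Unrelated classes can mix in the same "trait" while keeping their own
# unique methods, much like the traits arrangement described above.
module Quacks
  def quack
    "quack from #{self.class}"
  end
end

class Duck
  include Quacks
end

class Robot
  include Quacks

  def beep
    "beep"
  end
end

puts Duck.new.quack   # quack from Duck
puts Robot.new.quack  # quack from Robot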

Mark Miller

it meets a real user, and you look like an idiot who doesn't know that "ten" isn't a number... I know what you mean about the academic attitude. I saw the difference between the two first-hand. When I got into the junior- and senior-level courses, they did want us to error-check more, though.

What you describe is not unique to dynamic languages. I ran into the same issue using statically typed languages. You have to check the input no matter what. Explicit types won't help you if you set up an integer variable to receive the input and the user types in "ten." The program is still going to blow up. What I was thinking, though -- and I think there's a Smalltalk library that actually does this -- is that if you make the String class a little more "intelligent," with a numeric vocabulary, you could have it convert "ten" to 10 for you, so you wouldn't have to bug the user with, "Hey, idiot. Yeah, you! You were supposed to enter a decimal!" On the other hand, the program would need to bug the user if they enter some junk like "WTF?", saying, "Please enter a number. For example, 10."

I think you misinterpreted my earlier comment. What I meant was, if you were given some crappy code, I imagine it would be easier to modify in a dynamically typed system than in a statically typed one. In my comments I haven't meant to imply that dynamic languages will automatically create better programmers. Far from it. If anything, industry experience has shown that the more leeway the compiler gives the programmer to not think of everything that's happening (like memory management), the more companies hire bad programmers, because they feel like they have more leeway to do so. That's been the downside to creating better programming systems: they've been getting abused. That was almost one of the virtues of C. It would weed out the bad programmers in industry quickly, because it made it impossible for them to get anything to run correctly, except for "hello world." Of course, as Justin has pointed out, it didn't prevent some reasonably smart but hubristic programmers from creating disastrously insecure systems. I should've pointed out in that discussion that a similar thing happened to Unix in the early days of the internet, though not for the same reasons.
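
A hypothetical Ruby analogue of that "numeric vocabulary" idea -- the to_number method and the word table are invented for illustration, not a real library:

# Recognize digit strings and simple number words; return nil for junk
# like "WTF?" so the caller knows to re-prompt the user.
NUMBER_WORDS = {
  "zero" => 0, "one" => 1, "two" => 2, "three" => 3, "four" => 4,
  "five" => 5, "six" => 6, "seven" => 7, "eight" => 8, "nine" => 9,
  "ten" => 10
}.freeze

class String
  def to_number
    return to_i if match?(/\A-?\d+\z/)
    NUMBER_WORDS[downcase.strip]
  end
end

puts "ten".to_number  # 10
puts "10".to_number   # 10
p "WTF?".to_number    # nil -- time for "Please enter a number. For example, 10."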

Tony Hopkinson

Which bit of my "11" + 1 didn't look to you like your example? Course, only a complete berk would write this even in a weakly typed language. It's even better when they use Hungarian notation to show what they "thought" it would do...

apotheon

I often find that ambiguity enhances the humor of a joke -- but only for those sufficiently intellectually inclined to appreciate it. The trick is making the ambiguity self-referential, or dry, or giving it some other positively affecting characteristic, rather than leaving it merely obscure.

apotheon

Quoth Sterling "chip" Camden:

> Those who only think in terms of strong, static typing tend to think of dynamic as equivalent to weak.

This is particularly troublesome considering that some developers think that a particular popular language is strongly/statically typed, when in fact it is weakly/statically typed. The degree of the problem varies, of course, but a case has been made on a number of occasions that C and Java are weakly, statically typed languages. Of course, you brought up one of those examples yourself, in pointing out (correctly) that C is actually more weakly typed than Ruby, the latter of which is one of the most dynamic languages in existence.

Quoth Tony Hopkinson:

> Definitely a point to emphasise though, as many see dynamic as weak because code wise it looks the same.

That's not always the case. In fact, it's not even usually the case, in my experience. From what I've seen, the only cases where a dynamically typed language allows the code sample you provided ("1" + 11 = "111";) to work are cases where the language is also weakly typed, or cases where the language's type system is distinctly nontraditional, so that a quoted string literal and an unquoted integer literal are not, in fact, primitive types of the language. For instance, in Ruby:

$ irb
>> "1" + 11
TypeError: Coercion error: 11.to_str => String failed
    from Rubinius::Type.coerce_to at kernel/common/type.rb:23
    from Kernel(String)#StringValue at kernel/common/kernel.rb:110
    from String#+ at kernel/common/string.rb:95
    from { } in Object#irb_binding at (irb):1
    from Rubinius::BlockEnvironment#call_on_instance at kernel/common/block_environment.rb:72
Caused by: NoMethodError: undefined method `to_str' on 11:Fixnum.
    from Kernel(Fixnum)#to_str (method_missing) at kernel/delta/kernel.rb:79
    from Rubinius::Type.coerce_to at kernel/common/type.rb:21
    from Kernel(String)#StringValue at kernel/common/kernel.rb:110
    from String#+ at kernel/common/string.rb:95
    from { } in Object#irb_binding at (irb):1
    from Rubinius::BlockEnvironment#call_on_instance at kernel/common/block_environment.rb:72
>> "1".to_i + 11
=> 12
>> "1" + 11.to_s
=> "111"

Of course, this matches up exactly with what you said next:

> in a strongly typed dynamic language both should be run time errors.

. . . but that, to me, looks like it disagrees with your statement that "code wise it looks the same" for a comparison of weak and dynamic type systems.

Tony Hopkinson

Definitely a point to emphasise though, as many see dynamic as weak because code wise it looks the same. "1" + 11 = "111"; or 11 + "1" = 12; in a strongly typed dynamic language both should be run time errors.

Sterling chip Camden

Those who only think in terms of strong, static typing tend to think of dynamic as equivalent to weak. "Dynamic" means that you can defer type resolution until runtime. "Weak" means that types aren't enforced. In that regard, I think C is weaker than Ruby, because you can cast anything to anything else in C and it will use it as such (usually followed immediately by a segmentation fault). In Ruby, class X will never execute a method of class Y unless Y is an ancestor. However, what type checking C does do is all at compile-time, so it's certainly more "static" than Ruby.
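
The Ruby half of that comparison is easy to demonstrate; X and Y here are hypothetical, unrelated classes:

class X; end

class Y
  def who
    "a Y"
  end
end

x = X.new

# Ordinary dispatch refuses outright:
x.who rescue puts $!  # NoMethodError: undefined method `who'

# Even trying to force Y's method onto an X, the way a C cast would
# reinterpret memory, gets rejected:
begin
  Y.instance_method(:who).bind(x)
rescue TypeError => e
  puts e.message  # bind argument must be an instance of Y
end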

Sterling chip Camden

If you just type the <sarcasm> tag into a comment without escaping the angle brackets, "View source" is the only place you'll see it. I actually had to go look to verify that you were joking.

apotheon

I thought we were talking about static vs. dynamic -- not strong vs. weak. You two must be aware these are not the same things.

Tony Hopkinson

and in academia for learning, but one I wholeheartedly disagree with. While it's true that weak typing lets you get to the meat of the algorithm without having to worry about typing as an implementation detail, it encourages a slapdash, undisciplined coding style that will never survive out in the real world. It lets you bash together something that looks like it will work -- worse still, something that will work if you constrain your test data. Then it meets a real user, and you look like an idiot who doesn't know that "ten" isn't a number...

Mark Miller

Hmm. Maybe a strong typing system in that scenario actually gets in the way.

Tony Hopkinson

They'll still want quality, and they still won't want to pay for it. Until the people in charge realise that maintaining acceptable product quality (as opposed to achieving it) in a cost-effective manner requires quality practices, processes, and people, they and we are f'ed. So we are where we are now: the best of us spending most of our time coping with no documentation, flawed or dated designs, and code that looks like it was written by a village idiot's thick relative. At that point the typing system matters not at all. :(

Mark Miller

I remember, shortly after I got my degree in CS, watching a graduate course on project management. This was in 1993. People in that class were complaining about the same goddamn problems I would later encounter during my time in the IT services sector -- and they were in a different line of IT work.

In most cases software companies end up in two internal camps that are kind of at war with each other: management vs. engineering. The exceptions to this are probably in places like Microsoft, etc., though they have their own dysfunctions. I've already talked about the basis of it elsewhere, and I kind of doubt it's going to get resolved anytime soon. The main reason this exists is that most software customers (that is, those who are paying the money for it, not their IT staff) have an understanding of the technology that's at least as bad as management's. The reason the managers are where they are is that they can relate to the customers. Both have a difficult time relating to their own engineering groups, which is the reason, when they've had the opportunity, they've said, "We need to focus on our core mission" -- which does not include IT -- and sequestered it into specialty firms as quickly as they can. It's probably for the best.

apotheon

I'm kinda proud of the metasarcasm in that comment.

apotheon

I almost feel sorry for those idiots in upper management.

Mark Miller

No...nothing like that. :) I would typically add documentation during down time between projects. The guy directly above me didn't mind me working on that at all; it was whoever was above him that didn't care for it during scheduled time. During scheduled projects they wanted us writing code, and nothing but. They'd even get irritated at us if we worked on requirements/design specs for "too long." Sheesh! Fortunately our customers typically wanted at least an initial design spec before they'd allow us to go forward. Of course, the people at the top should've been thankful we even bothered to do that for any length of time. No telling the disasters we avoided by doing it! There was one place I worked for 4 years. When I gave my notice, I offered to "spill my brains" in documentation before I left, which the upper management accepted. I stayed for another *2-1/2 months* doing nothing but documenting the past project knowledge I had stored in my head. I got most of it out. I had still more I could've documented, but again, upper management started getting irritated, asking me to "fish or cut bait." So I cut bait...

Sterling chip Camden

This one, and apotheon's suggestion to look for the sarcasm tag in the page source, both had me rolling.

Sterling chip Camden

... were back in the 80s on Unix, using input redirection. We didn't have to worry about mouse coordinates then, but we had the same problem with keeping tests updated. We had a mechanism for recording the input, so the tester could re-record the whole session -- but they had to choose between that and editing the input stream manually for small changes. Providing a useful means for accepting new results is a more significant problem than it appears on the face of it. It means, first of all, that your tool must be able to intelligently 'diff' the results, not just report a failure and give up.
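
One way such a record-and-accept mechanism might look, sketched in Ruby (the command and golden-file name are made up, and this assumes the golden file doesn't exist on the first run):

require "open3"

# Compare a command's output against a recorded "golden" file; with
# accept: true, adopting the new results is a single action.
def golden_check(command, golden_path, accept: false)
  actual, _status = Open3.capture2(command)
  expected = File.exist?(golden_path) ? File.read(golden_path) : nil

  if actual == expected
    puts "PASS #{command}"
  elsif accept
    File.write(golden_path, actual)
    puts "ACCEPTED new golden output for #{command}"
  else
    puts "FAIL #{command} -- output differs from #{golden_path}"
  end
end

golden_check("echo hello", "hello.golden")                # FAIL (nothing recorded yet)
golden_check("echo hello", "hello.golden", accept: true)  # record the output
golden_check("echo hello", "hello.golden")                # PASS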

apotheon

I blame Microsoft for that one.

Mark Miller

I think that's what it was called. This was in the 1990s. I didn't use it, but somebody told me about it. It was used for testing GUI apps. I can't remember if it allowed the tester to point and click and then repeat the steps itself. All I remember is that the testing software only remembered screen coordinates for mouse clicks, and something about the tester entering those coordinates *by hand*! Something doesn't seem right about that... Anyway, a co-worker was telling me that anytime they made a GUI change, the old test wouldn't work, so the tester would have to go in and update the coordinates somehow. Tedious!

Tony Hopkinson

We had an absolute classic with one of our automation tests. It ran under XP (lowest common denominator) and passed fine. Then one of our QA people looked at it under an XP VM and logged a fault that the document window was all grey -- some OS funny in terms of the Windows message pump; it hadn't processed the paint message at the expected point.... It worked fine on Vista and Win 7... No one else saw it, and the automation software found the components, checked their content, and passed it A-OK. Needless to say, the credibility of the automation tests dropped just a tad....

apotheon

I suppose you could just write your tests to account for variations in some portion of expected return values. That'd be a heckuva lotta work, though.
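
A sketch of one way to do that in Ruby -- masking the volatile portions before comparing, with hypothetical patterns for window ids and coordinates:

# Normalize the bits expected to vary (ids, positions) so cosmetic GUI
# changes don't fail the comparison.
VOLATILE = [
  [/window id=\d+/, "window id=<ID>"],
  [/at \(\d+,\s*\d+\)/, "at (<X>,<Y>)"]
].freeze

def normalize(output)
  VOLATILE.reduce(output) { |text, (pattern, mask)| text.gsub(pattern, mask) }
end

expected = "button 'OK' window id=<ID> at (<X>,<Y>)"
actual   = "button 'OK' window id=42 at (118, 240)"

puts normalize(actual) == expected ? "PASS" : "FAIL"  # PASS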

Sterling chip Camden

My clients use various automated testing tools, some better than others. None of them are perfect, but they're a lot farther on now than they were ten years ago. It does involve a lot more updating of tests, though, because any GUI testing framework that's worth its salt will report details such as window ids, class names, and positions. So whenever those change, the test has to be adjusted accordingly. The good tools provide a way to update the test with the new results by a single action.

apotheon

I think I saw something about test-driving GUI application development with Ruby, published by The Pragmatic Programmers. While Ruby is almost certainly not the language you'll be doing your GUI development in, books from The Pragmatic Programmers have very high quality content in my experience, and there may be principles of GUI testing in there that are transferable. If you happen to see it in a store, it might be worth flipping through to see if it could help. I thought about getting it when I saw it, but I haven't done a whole lot of GUI development (outside the browser, that is), so I didn't have a specific need for it at the time.