
Why programmers should study the art of programming

Chip Camden encourages programmers to cultivate a broad and deep understanding of the trade by accumulating a knowledge of its history and keeping an eye on recent developments.

To the average programmer in the trenches, debating the theory of computation is like discussing the chemical properties of saltpeter while in a gunfight: it may all be correct, but it doesn't apply directly to the problem in front of them. Why waste time imagining the outcome of a deathmatch between Haskell Curry and Alan Kay when we've got a deadline to slap a new web UI over our legacy application? Why should we care whether we're using a monad or an exception to return an error state? What the heck does "orthogonal" mean? Don't give me a research paper, just give me code that works. And so runs the "get it done yesterday" logic.

From many a project manager's point of view, programmers who dabble in computing theory pose an even greater danger than wasting time: they threaten to poison the project with new ways of doing things. This attitude is not entirely unjustified. Every new idea that promises to revolutionize software development seems to do quite a bit of damage to the industry before the hype wears off and it finds its proper place within the programmer's arsenal of available tools. Converts to these theories try to squeeze every problem into the new mold whether it fits or not. Just as an appalling misapplication of the Theory of Evolution provided a justification for Nazi genocide, so has the misapplication of Object Orientation, for example, led to all manner of programming ugliness in the name of purism (may Godwin have mercy on my soul).

The history of programming is dotted with false starts and pendulum swings. I remember, in the wake of Dijkstra's famous "Go To Statement Considered Harmful," myriads of programmers creating all sorts of grotesque code contortions in order to avoid using a GOTO, even where one was badly needed. The slavish obedience to this and other rules of Structured Programming became so detrimental that once, when the objection "It isn't structured" was raised to an idea before the ANSI subcommittee on which I served, I replied "I don't believe in Structured Programming." The silent shock on every face made me expect to be instantly removed from membership. I might have been, too, if my friend and mentor Ken Lidster hadn't spoken up and explained that what I meant was that I objected to a legalistic interpretation of Structured Programming principles. He was right, though at the time I felt that he had watered down my bold assertion.

Nevertheless, the pursuit of programming theory does do us some good. If it weren't for all the brainpower that's been applied towards improving the practice of software development, we'd all be punching machine instructions on paper tape or Visual Basic on Windows. Thank goodness someone said, "There has to be a better way!" Thank goodness someone keeps on saying it.

Programming language developers aren't the only ones whose work improves under this consideration. Despite the risk of fanaticism, journeyman programmers can also enhance their contributions (and hopefully, their careers as well) by exercising their skills in areas that lie beyond their strictly defined responsibilities. Especially where it lies just beyond their horizon, the question "How can we improve software development as a discipline?" may yield a significant payoff. The danger lies in half-learning and poor self-evaluation. As Alexander Pope famously said in his Essay on Criticism:

A little learning is a dangerous thing;

Drink deep, or taste not the Pierian spring.

The professional programmer should drink deeply. By cultivating a broad and deep understanding of our trade, accumulating a knowledge of its history, and keeping an eye on recent developments, he or she will develop the wisdom to distinguish hype from substance and to apply each tool to the purpose for which it is best suited.

Note: This post was originally published on April 24, 2011.



43 comments
harischandrav

Along with studying the art of programming, somewhere about halfway through one should study the art of testing. Understanding testing automatically teaches a programmer the techniques of isolation and of developing discrete functionality. Much of programming is the repeated use of these techniques.

premiertechnologist

At the opposite end of the spectrum, it is supremely useful to know how things are implemented, in order to understand how the theories work and to gain a better methodology for using them. I would suppose we all know that programming is made up of basically four things: data, instructions and operations, comparisons, and branching. That's it. You take data, like 1, add 2 to it, and if the result is 3, go somewhere to do something; if not, go do something else. As one who comes from a background of IBM Mainframe Assembly Language (please don't panic, I'm now working with ASP.NET 4), I understand this concept very well. If I had known only the manipulations of COBOL starting out, I would not have a good grasp of what the machine was doing; because I do, I can use that knowledge to implement the theories of programming and produce faster, leaner, and more effective programs. Learning about such things as Functional Programming and the earlier Object Oriented Programming (and how about OODBs? Relational is so last millennium) expands our toolbox.

Ah, but here's the rub: politics. The old "the end justifies the means" programming pressures constantly intruding on thought and reflection in our drudge programming factories have produced the ethic of "we never have time to do it right, just to do it over... and over... and over." Enter Agile Programming, created and posited by those fun and very successful folk over at Sun Microsystems. Throw out ITIL! We don't have time for that! Make our users feel good about the code we're producing by making them complicit in our conspiracies, ensuring they waste untold months as we change this and change that on the fly while the prototypes of our models keep changing without ever settling into stability. At least they can't blame us for being late on a project, nor can they blame us when it bloody doesn't work right. And good luck having any documentation -- even an F1 help key -- since everything changes so fast.

So let's look at the short-sighted fun guys at Sun Microsystems to see how successful they were. Oopsy! They were part of the whole enterprise going out of business and being bought out by Oracle. Now all those projects which were open... well, we might be paying BIG bucks for them. Or not. No one has planned that far ahead.

Do I know what I'm talking about? Let's just say that I worked for a local governmental agency supporting the IBM Mainframe running Payroll / Personnel and Budget / Finance: the core of the entire business. Along came two yahoos who became the Development Manager and the Operations Manager, controlling three quarters of the budget and managing 85% of the people... and they are married to each other. They moved the entire operation from stability based on ITIL principles to Agile. Millions of dollars flowed away from the core business and went to the Development Manager's pet project in the Sheriff's Department. Now the entire agency is at the mercy of these two, as the Payroll / Personnel and Budget / Finance systems -- which have had a GUI front end slapped on over the last 18 years, at over a million dollars per year -- grind on, and the Sheriff's system is being redeveloped at a million dollars a year for the next seven years. Meanwhile, the entire agency is looking to get off the IBM Mainframe in the next 5 years -- no, make that 7 -- without a clue as to how to do it.

As I left for retirement, the IT Director said to me about the Payroll / Personnel and Budget / Finance system -- and I quote exactly -- "I don't know what I am doing." And so, while it may seem all so very wise to learn the theories behind programming, it just seems to me that it is a waste of time in the larger view of things. You'd be a lot better off learning how to make politics work for you.
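For what it's worth, the "four things" at the top of the previous comment reduce to a few lines of Python (the values and messages here are purely illustrative):

    x = 1            # data
    x = x + 2        # instruction / operation
    if x == 3:       # comparison
        print("go somewhere to do something")   # one branch
    else:
        print("go do something else")           # the other branch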

Tony Hopkinson

Because it's challenging, fun, and interesting; on occasion, you can even get paid for it. Got to put some sparkle in yet another CRUD app. :(

anomalophobe

You've expounded on the "why" but what about the "how?" I've abandoned my career of twenty years in network engineering to pursue "the art and science of programming." When I ask my fellow programmers about how to approach an issue, all I get is "Just do this because it works - don't worry about why for now," which just doesn't work for me (because I'm a Big Picture person, and I need to know why).

Sterling chip Camden

... is Functional Programming. Of course, that approach has been around for a very long time, but it finally seems to be gaining popularity. I'm getting a lot out of it, and many newer languages are based on it. How will it become perverted?

premiertechnologist

I always become the first user of the product I develop. By doing this, I often avoid doing things that clients and users find awkward. In being the first user, I test every function I know of to make certain it produces expected results. As you all know, one seemingly tiny little thing that shows up randomly once is probably not going to go away; in fact, it may start showing up millions of times, depending on circumstances. That is why I relentlessly pursue bugs when I test. It's so much better to find them before they escape into production -- and, in fact, with very few exceptions, when my products hit production, they just keep running without problems for years.

Of course, you can use the other approach, especially if you have bad management which guarantees tight deadlines and no resources: let the users find the problems and fix them later. By the time they have found the bugs, you may have revised them out of existence, particularly if you are forced to upgrade incessantly without ever having a stable product because your management is too stupid to understand how expensive such a proposition is. It also helps to have an active, competent help desk in such situations, but in most organizations, good luck with that.

As for the larger picture, you can probably just forget about researching new methods on the job. Do it at home on your own time. If you can. Which can be problematic if you work on, say, an IBM Mainframe (wherein there is endless opportunity to catch up on the technology, since no one can know it all, and by the time you get close, they will be RIFfing you because they're going to replace it SOON).

Tony Hopkinson

Especially not for an audience full of programmers. In general, even if a programmer does have the knack, they still won't test their own code very well. Testing a colleague's can be iffy as well. This is why test-build-test and unit tests should have a far higher priority than they get from the business... Once code has gone legacy, as in past version one with no unit tests, in general business-wise you are f'ed, and it's down to those guys who automatically try to enter ABC in a number box.

Tony Hopkinson

It's better to do it right than to do it again. A long time ago, learning the art of programming gave me a few ways of putting flex in my designs, so when I do have to do it over, it's not wholly painful.

Sterling chip Camden

Yep -- I've had a lot of my "unpaid learning" turn into new, unexpected business over the years. It turns out that what you focus on is what you get good at is what you end up getting paid for -- so focus on what you'd like to do.

rstanley

Chip: I agree with anomalophobe. What books or websites should be studied to better study the art of programming in the 21st century? I haven't programmed professionally for many years, but I would like to get back into it on a personal level, and/or contribute to open source projects. When I think I don't need to learn more about programming is when I turn off my notebook forever! ;^)

DaemonSlayer

to try to figure out the why and the how of something being done. I think understanding it can be a big plus in using it correctly. While we may not need to know the how or why to make something work, knowing can help us put it to its best uses and find something better for other uses. A sledgehammer works to crack a nut open, but if I'm not careful, I'll destroy what I wanted inside too. A nutcracker is the better tool, as I'm less likely to pulverize the meat of the nut. The sledgehammer is the better tool for knocking out a wall.

javabuddy

You already told us how today's projects work: we want this thing done by tomorrow, the day after tomorrow, or next week, so where is the chance to discuss computing history and research papers? It seems impractical to me that an application developer must read those. Let's put it this way: if you want to understand what you are currently doing and improve upon it, you need to make an extra effort to continuously improve your knowledge, but that's not something you can easily do at the workplace. Javin

Mark Miller

That is a good question. I saw a comment made recently by a CS professor at CMU saying that "Our students need to learn functional programming for the future," because of the need for parallelism, and, "Object-oriented programming is ill-suited to this future since it lacks modularity and parallelism." I was really shocked at this. "OOP lacks modularity? Are you kidding me?" I could think of one form of OOP that implemented parallelism well, Erlang. But it went to show the degree to which the whole concept of OOP had been perverted over the years. Basically the professor was basing his comment on the experience with C++ and Java, not something like Smalltalk. It also brought to mind that the reason FP is attractive for parallelism is because it's largely been insulated from industry use. It hasn't been watered down, and so it's maintained its power. No doubt, though, if it's to become popular, there will be people who will try to water it down for one reason or another, performance, ease of understanding, etc. The thing is, I doubt you can water it down too much before you lose the reason it's attractive for parallelism.

Justin James

I see people claiming that LINQ and other declarative programming techniques are "functional programming", when the truth is, they take some ideas that are big in FP (like lambdas) and go do their own thing with it (and not always for the good). J.Ja

Sohail Anwar

Hi, I just read some of the discussion above about product testing and management behavior. I like the approach of being the first user of your own product, so that you discover the things users will find awkward. It's one of the finest ways I know of to catch your mistakes. And of course I recognize the second approach too: I have faced tight deadlines with no resources, though not bad management. Because of those problems I faced a lot of difficulties; at the time I was doing mainframe testing. Anyhow, it was a great experience.

Tony Hopkinson

You implement them in a particular language or environment; that's the best way to learn them, but in the main, all or some aspects of them can be applied in your day-to-day work. Design patterns, anti-patterns, coding standards, functional programming, SaaS, threading, parallelism, orthogonality, decoupling mechanisms: not a waste of time; they are widely applicable. Management too often confuses programming with a language or framework or architecture; we shouldn't help them, because it isn't. Incrementally improve your processes and practices. If we sat there and waited for the bean counters to approve some new tech before we could have a go at it, we'd still be on paper tape. Tell the boss you want to rework the legacy monolithic rush-to-market bodge to make it modular, and they can't sell it to management even if they agree. Tell them you can shave three seconds off the loading time of the application, however, and maybe...

apotheon

1. Learn a programming language pretty thoroughly. Spend at least two years, maybe five, using it every time you get the chance.

2. Once you've got a good handle on that first language, learn a new language every year. Use the hell out of it. Still maintain things in the languages you've already used that you like -- but really get into the new language so you can pick up the interesting differences thoroughly. Don't try to learn similar languages; try to learn very different languages. The biggest reason to learn a new language is to learn something new.

3. Find some open source projects on sites like Bitbucket that interest you and fall within your skill range; contribute to them.

That pretty much sums it up. Use whatever tools will be the most helpful toward these ends -- books, online documentation, REPLs, Web servers as deployment environments, whatever.

Tony Hopkinson

Me, I need something real to do. Then I'll pick some new thing to use in the design; it could be a new language, an extension to an old one, or a design pattern; it doesn't matter. Then I read up about it, have a go, read the book again, this time with some real-world 'experience', then try to do it better. Rinse and repeat. As you say, you never really stop learning how to program; if you have, you aren't. One thing I would say when you go down the self-teaching route: get an overview first, especially in a new language or framework. You should at least know the aim of the tool. I learnt more about C# from reading the .NET Framework books than I did from looking at the syntax to initialise an array.

Sterling chip Camden

1. I keep my ear to the ground for new technologies by following tech blogs, etc. 2. I dedicate a percentage of my time to learning something new. I pick something and concentrate on it until I feel like I have a thorough enough understanding of it to write real applications, then move on to something else. The web provides almost all of my material, though occasionally I'll buy a book (e.g., The Ruby Way was my best investment in learning Ruby). It all depends on what you're learning, and sometimes it can take a while to find the best resource.

Tony Hopkinson

Programmers program. We aren't really talking academia here; an improved version of the Vienna Development Method is something we need like an extra hole in the arse. All you have to do each day is realise yesterday could have gone better, and why; then maybe today will go better. I say "all you have to do," and no, it's not easy, but if you are a programmer, it should come as naturally as breathing. Why would you do our job if you weren't interested and didn't find it enjoyable in and of itself? The money? :( That which is never attempted is always impossible. Programmers do study the art of programming; otherwise they are mere cookie cutters.

Sterling chip Camden

That's an interesting point. Languages like Ruby, Python, and Perl provide the ability to do FP but don't enforce it at all. Nevertheless, I don't see them as "watering it down," because they don't present any roadblocks to doing FP (except, perhaps, in their side-effect-laden interfaces to the outside world). OTOH, a language like Java makes FP downright hard because it forces everything into the OOP model. At the opposite extreme we have Haskell, where even necessary side-effects (like I/O) are recast into a monadic model that keeps the rest of the code pure. But is that really appropriate?
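A minimal sketch of that "available but unenforced" point, in Python (one of the languages named above; the function names are invented for illustration): the pure version depends only on its arguments, while the equally legal stateful version depends on call history.

    from functools import reduce

    # Pure, FP-style: output depends only on the inputs.
    def total(prices, tax_rate):
        return reduce(lambda acc, p: acc + p * (1 + tax_rate), prices, 0.0)

    # Equally legal Python: mutates shared state, so the result depends on history.
    running_total = 0.0
    def add_sale(price, tax_rate):
        global running_total
        running_total += price * (1 + tax_rate)
        return running_total

    print(total([1.00, 2.50], 0.10))   # always the same result (~3.85)
    print(add_sale(1.00, 0.10))        # changes with every prior call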

Sterling chip Camden

There's a lot in common between "declarative" and "functional", but they aren't the same thing. I, too, have to raise an eyebrow whenever someone says that LINQ is "functional programming".

Tony Hopkinson

difficult. :p Using some of the concepts, such as closures, if the chosen environment implements them well enough: doable. Being in .NET, I keep meaning to have a better look at F# / IronRuby, to see if some of the functionality we need would be better expressed in it; I've not found the time or excuse yet. Certainly IronPython helped this way.
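For readers unfamiliar with the closure idea mentioned here, a tiny Python sketch (the names are illustrative): the inner function captures and carries its enclosing variable.

    def make_counter(start=0):
        count = start
        def next_value():
            nonlocal count          # the inner function closes over 'count'
            count += 1
            return count
        return next_value           # the captured environment travels with it

    tick = make_counter()
    print(tick(), tick(), tick())   # 1 2 3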

Sterling chip Camden

Small functions that have no side-effects can be provably correct. You can enumerate all inputs and the expected outputs, and create tests for those. Heck, in a language like Haskell, even the tests are somewhat redundant, because that's the way the code reads. But given that provability, you can then build larger pieces by putting those proven pieces together, prove the correctness of that, and continue.
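A minimal Python sketch of that strategy, with hypothetical functions and a hand-enumerated test table: each side-effect-free piece is checked against its input/output table, then the composition built from the proven piece is checked the same way.

    # Two tiny functions with no side-effects.
    def clamp(x, lo, hi):
        return max(lo, min(x, hi))

    def percent(x):                 # built from the proven piece
        return clamp(x, 0, 100)

    # Enumerate inputs and expected outputs for the small piece...
    for args, expected in [((5, 0, 10), 5), ((-3, 0, 10), 0), ((42, 0, 10), 10)]:
        assert clamp(*args) == expected

    # ...then test the composition the same way.
    for x, expected in [(-1, 0), (50, 50), (150, 100)]:
        assert percent(x) == expected

    print("all enumerated cases pass")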

Professor8

Different approaches are optimal for different individuals. I prefer to read multiple books & docs on a topic, then take a class and ask lots of questions. There are always holes in the coverage of books and lectures. Then I can get to the point where I can effectively work everything else out by experimentation, and that's when I need something real to do to drive it all home. A friend of mine believes in re-re-re-re-reading a single text. That's what works for him, but would be excruciatingly slow for me. Another is dyslexic, so reading more than snippets is a no-go. He needs something real to do, which means he has to do a lot of searching for specific details relevant to what he's trying to do, and lots of experimentation.

Sterling chip Camden

I agree about frameworks, but I would caution that you should learn a language before you learn a framework that uses the language. Many people take the opposite approach with Rails, for instance, and I think they do themselves an injustice because they only learn the Rails way.

sysop-dr

I learn a new programming language at least once a year. I get lots of opportunities where I am to apply the new languages, as I have become a resource to other programmers and teams. Not every project is best done in one particular language. They are tools, and if you have more tools in your tool set, then you can use the best one for what you want to do. And learn about frameworks. Basing your application on someone's framework allows you to use the bits that cover the parts you need but that someone else already does better than you can. Let the framework do the mundane stuff, and concentrate on the parts you know or need to create out of business logic. And read everything. Digest what you can and rewrite it as you see it. A blog of "hey, look what I just learned" is good for reinforcing what you learned and is also a handy way to keep your links to references. And if someone else reads it, you get feedback and learn more. Thirty years into the business and there is so much more to learn, but it's a fun road of discovery.

Tony Hopkinson

Learning is not making the mistake you just made through ignorance again. Learning is never finished. After thirty-four years I'm aware of my vast ignorance; I enjoy finding it and beating the crap out of it. The biggest problem with self-teaching is that you tend to miss the big picture and dive straight into the deep end to solve a particular problem with familiar concepts. I learnt not to do that ages ago; no instructor taught me it. Most just parrot up to the point they stopped learning and started teaching. You don't want an instructor; you want a mentor. You'll learn more here, contributing to these sorts of discussions, than you will off some academic twit waffling on about predicate calculus. Do pass my lack of regard to your Chinese friend...

Sterling chip Camden

If you're diligent about hunting up and comparing the best resources on the web and in books, then you'll probably find instructors at least as qualified as those at the average college or university. Then you must also commit yourself to doing the work, instead of skimming -- but the same is true of college courses.

cnoevil

A self-taught student has a fool for a teacher. Believe me, I know; I've been battling my own ignorance for a long time now, and I can't remember all the times I have lamented not having an instructor to turn to.

Sterling chip Camden

... could be considered academic, as I have no foreseeable plans to apply it to the problem of making money.

Tony Hopkinson

A few courses paid for by work, some correspondence courses off my own bat, but the bulk is self-teaching. Mind you, I am a bit eccentric (English for "loon"); I program as a hobby.

Justin James

... a third arm growing out of that area could prove to be useful too, especially when working on the car or doing woodworking projects. :D J.Ja

Realvdude

Tony, I think you hit on something common. I've been stuck in "job mode" for some years, and while I have watched new things develop, I had not really taken hold of anything. Recently I decided I needed to get back into "career mode" by getting back to academia, even though what I am learning has little impact at the job. What I am getting from it is the satisfaction of gaining new knowledge and broadening my skills.

Tony Hopkinson

You need an improved Vienna Development Method less than an extra.... :p

Justin James

... but that extra hole you mentioned sounds like it could double my efficiency in some areas. J.Ja

Mark Miller

You're close. I don't remember him saying "method-oriented programming." I don't remember if Kay said this, but he's at least implied that perhaps he should've called it message-oriented programming instead of object-oriented programming, because the term he chose got everyone focused on "objects," not on what goes on between them. He said it's much more important to look at relationships and what's going on "in between," interstitially. He said something like, "The abstraction is in the messages, not in the objects." He's explained that the only reason he came up with objects was in answer to the question, "Send messages between what?" So messages were the real point.

What's special about objects, in his conception of them, is that each one, in effect, is a computer. He said that a characteristic of a computer is that it can emulate any other computer. This is a more sophisticated concept of "programming to the interface," because it tells you *why* you'd want to do it. Different objects can have different implementations, but can offer the same interface. He suggested, once the web became the rage in the 90s, that every object should have its own URL, and that object pointers should be abstracted into network-aware references. It shouldn't matter where objects are. He suggested that the web protocol, if it had been thought through more, could've been formulated to allow this sense of message passing. You can kind of see it in CGI. Think of the web server and domain name as an "object locator," and the part that comes after the "?" as the message.

I did that once, just as a little project in understanding DNU (the "Does Not Understand" handler) in Smalltalk. I set up a proxy class, and I was able to instantiate objects for web servers and send messages to them, just like any other object in the system. The proxy objects translated Smalltalk messages into CGI calls. The mapping was very good! The only thing was, what I'd always get back was this big, ugly HTML mess, of course. If I had contacted a web service, I would've presumably gotten back XML, but that would've presented similar problems. He went further to say that the conception of the web as it was then (and still is, for the most part) is pretty bad. I think I've explained that before on here.

He's explained that he formulated his conception of objects partly because he realized that data structures are not scalable. His "better idea" was that code and content should be packaged together. It's in line with the idea that "code is data," and you can see this in any Smalltalk implementation. His conception of objects, though, was that all objects should have a "membrane" that only allows messages through. They would be able to send and receive messages to each other, but their internals were supposed to be totally insulated. Objects were supposed to be in control of their own environment. They were at liberty to decide how they would respond to messages, if they'd even respond at all. All other objects could see was the object's interface (called its "protocol" in Smalltalk). Further, objects had metadata, which would give information about them. For example, messages are their own objects, and an object that receives one can look at where the message came from and respond differently depending on which object sent it, or on the object's "pedigree" (its class). The internals of each object would be "protected," in the C++ sense of the term.

From what I've seen, none of the so-called OOP languages of today fully implement this idea. What I meant when I said that I am still trying to understand the power of objects is that while I understand these principles, and I can see where they can go, I haven't experienced their power in the sense of designing something that means something, seeing it in action, and understanding why it all works and how I can use it for my own purposes. I have a better sense of that with FP currently, because I've gotten partway through SICP, and it has some beautiful examples of what FP can do. What's also been interesting is that it talks about message passing, and shows some OOP done with functions. It's given me a better sense of what OOP *should* be about.

Incidentally, Alan Kay has been able to implement his vision somewhat for what he would like the web to be like in his STEPS project. He and the people he works with have been working on a project they call "Frank," which is essentially his conception of an OS with an office suite. The goal has been to get the entire thing working (OS and apps) in under 20 KLOC. Everything we'd call a file, including web pages, is an object (or a "virtual computer," as he calls them). It includes a web browser. You can read about it here.

I saw you mentioned Excel earlier. Interestingly, his team has incorporated the idea of the "spreadsheet cell" into their UI, but they've made it more universal. It doesn't have to be part of a spreadsheet app, for one. It's just a UI element. Instead of dealing only with numbers, it can deal with any content (graphics, text, sound, etc.). All the cell does is either contain a literal, or seek information and compute something from it. This goes back to defense-related work he did in the 1970s, in fact, where he came up with a similar, Smalltalk-based system in which each cell contained an entire workspace and incorporated multimedia. He told me recently, "I put in quite a bit of effort to get Microsoft to do Excel along these lines, without success."
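The DNU experiment described above translates roughly into Python's __getattr__ hook. A hedged sketch, not the commenter's actual code: the host, the "msg" parameter name, and the lookup message are all invented for illustration, and the real network request is left commented out.

    from urllib.parse import urlencode
    # from urllib.request import urlopen   # uncomment to actually send the message

    class WebObject:
        """Proxy that turns any unknown method call into a CGI-style request."""
        def __init__(self, base_url):
            self.base_url = base_url

        def __getattr__(self, message):
            # Runs for any message this object doesn't define itself,
            # much like Smalltalk's doesNotUnderstand: handler.
            def send(**arguments):
                url = self.base_url + "?" + urlencode({"msg": message, **arguments})
                return url          # or: urlopen(url).read()
            return send

    server = WebObject("http://example.com/cgi-bin/app")
    print(server.lookup(user="alice"))
    # -> http://example.com/cgi-bin/app?msg=lookup&user=alice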

apotheon

The power of objects lies not in the objects, but in the methods. Alan Kay once said something like "I should have called it method-oriented programming," because people have a tendency to translate the term "object oriented programming" into the ludicrous notion that there can never be enough objects in your program. The result is stuff like the Kingdom of Nouns in Java. C++, C#, and Java are great examples of how not to learn the real power of object oriented programming.

Mark Miller

Well, purity is the main reason FP is attractive at all to some people in the field right now, because without side-effects you can actually have a reasonable environment to program in for parallel processing. The thing I don't like about this orientation is that it faddishly rediscovers an old idea for a new purpose. The origins of FP lie in mathematics, and then it was pretty much segregated into AI for decades, at least in academia, though FP had a foothold in industry as well, in computer graphics, for about 15 years, and it has continued that foothold with travel itinerary planning.

The point I was making is that you don't have to go to FP for parallel processing, which is why I was complaining about what the CMU professor said. If you want to use OOP for parallelism, you can use Erlang. I'm not trying to make an OOP-fanboy point here. It's just that FP is turning into "the new thing" because of the interest in parallelism, and all I'm saying is that FP is not the only way to accomplish that. Maybe there's something besides FP or OOP that's better at it.

The main priorities of software development in the computer industry are functionality, speed, and a programming model that the typical programmer can understand. The mathematical underpinnings of FP tend to lose typical programmers, just as a lot of typical programmers miss the point of Smalltalk when they're exposed to it. That's the reason I made the point about how FP will probably be watered down if it becomes popular. The real deal will be too difficult for most programmers to understand, at least not without some significant remedial work to prep them for it.

Yes, functional features have existed in OO for a while, beginning with Smalltalk. I doubt anyone would be thinking about putting functions into their so-called OOP languages if they hadn't existed in Smalltalk. I can't speak to FP in Python and Perl, since I haven't used them. My own experience is that I didn't really get a sense of what you could accomplish with functions until I programmed in languages like SML, Lisp, and Scheme, where functions are really a first-class thing. I'm still trying to get an idea of what can be accomplished with objects, because while I learned "OOP" with C++, Java, and C#, I got a sense years later that these languages "have no idea what they're talking about."
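As a rough, hedged illustration of that Erlang-style point (share-nothing processes that communicate only by messages), here is a small Python sketch using a thread and queues. It imitates the message-passing discipline, not Erlang's actual semantics; the messages and counter are invented for the example.

    import threading, queue

    def counter_process(inbox):
        # All state is private to this thread; the only way in is a message.
        count = 0
        while True:
            message, reply_to = inbox.get()
            if message == "increment":
                count += 1
            elif message == "get":
                reply_to.put(count)
            elif message == "stop":
                break

    inbox = queue.Queue()
    threading.Thread(target=counter_process, args=(inbox,), daemon=True).start()

    inbox.put(("increment", None))
    inbox.put(("increment", None))
    reply = queue.Queue()
    inbox.put(("get", reply))
    print(reply.get())              # -> 2
    inbox.put(("stop", None))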

Jaqui

Oh yeah, the dBASE database engine becoming a programming language. :/ With dBASE IV it was a programming language rather than a database engine.

RudHud

...to this paper, published by Microsoft in 2003: http://research.microsoft.com/en-us/um/people/simonpj/papers/excel/excel.pdf Haven't bothered to read it, as I don't really care. Still, the idea has been placed out there by actual experts, and not just some troll. As to whether a spreadsheet is a language of any sort: a spreadsheet can be massaged into producing the sort of complex, interface-driven result that we associate only with programs. I think it's a really rotten idea, but it can be done. This means it passes the duck test.

apotheon

There was recently a bit of a blow-up on the ruby-talk mailing list because some troll decided to pick a fight over whether or not MS Excel is a functional programming language. That's right, you "heard" me -- MS Excel. By comparison, LINQ is only infinitesimally different from functional programming. Any time someone pointed out that MS Excel is basically defined by side effects, and that it's furthermore not rationally described as a language except by bending the definition of "language" so badly out of shape that it's unrecognizable, the couple of people who kept defending MS Excel as a functional programming language that can "do anything" (and maybe should do everything) would start acting like people were just biased ivory-tower types who revel in perverse difficulty and like to insult people who use MS Excel -- which was, of course, about as far from the truth as one could get. I think misappropriation of terms is probably the most common threat to good ideas in programming, as it is in politics, economics, and any other field where specificity of terms for clear communication is of critical importance. It seems, in fact, like those are the circumstances under which people are most likely to repurpose terms for destructive ends.