Open source developers must examine the past to invent the future

Millennial developers would be wise to learn the key lesson that UNIX taught us long ago: open-source code is nice, but portability of code is critical.

 

[Image: Dennis Ritchie (standing) and Ken Thompson with a PDP-11, circa 1972]

Years ago, I was shocked to discover a young developer friend had never heard of BEA, JBoss, or app servers at all. Growing up with virtual machines and distributed systems, he couldn’t be bothered to look back 10 years to understand how we reached this “cloudy” present. But understanding the basics of our computing past helps us appreciate our present.

Similarly, many developers are so immersed in open source that they have no conception of how we got here or what made it possible. In overlooking this past, they may misunderstand how best to invent the future, as articles from open-source luminaries Eric Raymond and Tim O’Reilly indicate.

Learning from UNIX

As Raymond posits, UNIX was important not so much because it gave us the first non-assembler language really suitable for systems programming, but because of its role in “creat[ing] the modern idea of software systems that are cleanly layered and built from replaceable parts, and of re-targetable development tools.”

Otherwise stated, the UNIX hackers gave us the concept of modular source code, the principle upon which Linux, the web, and so much else that we take for granted today are built.

Presented with a choice between open software and modular systems, the latter takes clear precedence. After all, in the early days of computing, “Not only was portability across architectures a near-impossible dream, you often couldn’t port between instances of the same machine without serious effort.” Software was hard-coded to the hardware on which it was intended to run. UNIX changed all that:

"Without the UNIX-spawned framework of concepts and technologies, having source code simply didn’t help very much. This is hard for younger hackers to realize, because they have no experience of the software world before retargetable compilers and code portability became relatively common."

This isn’t really a matter of history, however. It introduces a fundamental principle for building successful ecosystems.

The architecture of open

As Tim O’Reilly comments, “The reason open source culture grew from Unix was not political, it was architectural.” In other words, rather than get caught up in somewhat pointless declarations of rights, we should focus on more fruitful architectures of openness.

Speaking of one particular industry trend, O’Reilly goes on to argue:

"Will the Internet of Things be proprietary or open? It seems to me that the best way to ensure that the answer to that question is 'open' is not to wave banners saying 'open data' or to try to create open versions of successful proprietary products but to work assiduously to find ways in which open data and cooperating systems create more value than closed, proprietary data."

This is absolutely correct. Too often we make a fetish of openness for openness’ sake. Open-source software is not interesting just because it’s open. It’s interesting because it’s portable, as Raymond said, or interoperable, as O’Reilly argues in extending the point.

In other words, it’s not the source that matters -- it’s the ability of that source to move between systems that counts.

Opening up the future

This is why I’m bullish on Google long term. Yes, Android is open source -- and yes, that sort of matters. But the real value of Google is how much of its software runs on others’ platforms.

Dismissing King Digital Entertainment (maker of the popular Candy Crush game) as a “one-hit wonder,” Michael Cusumano, a professor at the M.I.T. Sloan School of Management, argues that “Typically, companies will have...one big product, and...unless they manage to become the center of an ecosystem, over time they tend to weaken and disappear.”

To thrive, products must foster ecosystems around them.

By definition, a company or product can’t be the center of an ecosystem that it controls top-to-bottom. Ecosystems grow up around products and companies that foster third-party innovation that interoperates with a core. A company that must control everything is like early software: heavily proprietary and hard-coded to a specific type of hardware, or even to a specific computer.

Apple is like this in some ways: it likes to control the overall experience for consumers, from hardware to software to cloud services. But Apple has also opened up to a considerable degree, fostering a vibrant developer platform on its iOS devices.

Still, over the long term I’d bet on Google, because Google doesn’t hard-code its software to run solely, or even to run better, on its own hardware and software platforms.

Those developers looking to build a successful product have much to learn from both Apple and Google, of course. But the key lesson is the one UNIX taught us so long ago: open-source code is nice, but portability of code is critical. Otherwise stated, developers who foster open, permeable ecosystems that encourage modular design and interoperable systems have a far better chance of succeeding than those who insist upon end-to-end control.

Just ask the original UNIX hackers.

 

 

About

Matt Asay is a veteran technology columnist who has written for CNET, ReadWrite, and other tech media. He is currently VP of Mobile at Adobe. Previous positions include VP of business development and marketing at MongoDB and COO at Canonical, the Ubuntu Linux company.

7 comments
knuthf

I just wish you had stated the obvious: that Windows "believers" / "MCSEs" should study the past to understand the problems with the "technology" they promote.


There is a lot of reinvention going on, and gross misunderstanding, in the Open Source community. Maybe those of us who participated in "OSF", "X/Open" and "POSIX" can make the evolution easier to understand; we may have been too focused on "getting the job done". It is difficult to explain that Unix was documented in POSIX, and that Linux was made by starting from POSIX and implementing it, whereas e.g. "HP-UX" (in my case NDiX) was made with AT&T by adapting their Unix code to proprietary hardware and adding extensions - and only ended up being "POSIX compliant".

What most forget is that X11 was developed into the window managers we have now. There is so much discussion that addresses the aesthetic impression and skips everything about the "technology". Google, with Android, is the first to make an alternative window manager. I just wonder when someone will come up with "Windows95" on Linux and implement, feature by feature, the ".Net" framework - and make that software available. Few are aware that this would make Windows applications "future proof" - Linux with POSIX can have the kernel completely rewritten, and everything will still work. There is too much discussion about corners and "shades of grey".

karikatsu

By learning the problems of the past, you can solve the problems of the present - just like the programmers and developers who built our storage systems, programming languages and operating systems.


Open source is nice because you can build a new application or program from running code, which makes the development of technology fast. The problem is that if it is not your code, you need to analyse and understand it before you can use it.

bobc4012

I was also shocked that the writer of this article was never aware of early computer history. As some of the other commenters pointed out, there were other language compilers - "non-assembler". Some, like PL/I, date from the mid-'60s, before Unix (IBM evolved it internally into a functionally "C-like" compiler for internal use). You also had languages like RPG, LISP, FORTH, SNOBOL, APL, PL/M (by Gary Kildall, for micros, before C) and a ton of others. A quick look at the history of computer languages would have given insight. A good starting point would be http://en.wikipedia.org/wiki/Timeline_of_programming_languages . Of course, Thompson and Ritchie developed "B" before "C".


As for Open Source, it is somewhat tied to the H/W architecture. Granted, I can write a "C" program that could run on both the "8080" and its descendants and the 68000 and its descendants, and only have to recompile using the right libraries. However, Open Source also implies anyone can see the source code, take it and modify it for their own use or for others. Again, prior to the 70s, much of the S/W (if not most) produced by the computer manufacturers was freely available, and one could view and modify it.

The point the author didn't make was that those computer manufacturers wrote in the assembler language for their H/W rather than in a suitable compiler language back then. Even companies that evolved their own "functionally C-like" language tended to keep it proprietary so as not to give their competitors a similar advantage. It wasn't until the 70s that those manufacturers started selling proprietary, closed S/W - due to court rulings (unbundling S/W from the H/W) and seeing that it was becoming a profitable market.

freitsma

One aspect is missing from this article: the impact of the changed hardware profiles offered today, and the effect this has on the design and use of applications!

jm-commercial

"As Raymond posits, UNIX was important not so much because of its invention of the first non-assembler language -- but rather, …"


Oops!  

You must be referring to the C language. It was not by any means the first non-assembler language. That credit usually goes to John Backus' FORTRAN, with the second being Admiral Grace Hopper's COBOL. Both were conceived in the early 1950s, and compilers were delivered in the late fifties.


Of course, Raymond did not write what you attribute to him.  What he wrote about C (on the page to which you link) is this: "the first non-assembler language really suitable for systems programming." 


Actually, the first such language may have been the version of ALGOL used to write the operating system for the Burroughs B5000 series of computers.


Cordially,


Joaquin 

just.a.guy

I would add that your article is written strictly from a mini-computer perspective. There was a much larger, other computing environment that existed at the time, and had existed for 10 to 20 years.


Modular programming had existed before UNIX was invented, as had non-assembler languages. It was not until the principles of structured programming were promulgated that modular programming really had a major impact on producing more reliable software.



knuthf

@just.a.guy - and C was considered a "semi-assembly language" that promoted a most unstructured approach to coding. The structured approach was Algol (Danish), Pascal (German) and Simula (Norwegian), which XEROX decided to use in Smalltalk. There were several computer manufacturers that made their own languages that were close adaptations of e.g. PL/1 and Pascal, and Pascal would have been the language had the GNU C compiler not been placed in the public domain. The first OO variant of C was "Objective-C" - used now by Apple. C++ was developed on the side to implement the mechanisms in Simula, but without some of the constraints. Little did they know about the reason for the "construction modes" in Simula, nor did they bother to ask. The result is a mess, and UML did nothing to address this.


The developers of Simula (at the Norwegian Computing Centre) designed a programming language for this kind of programming, where the underlying hardware can be reached, and called it "Beta". Smalltalk contains a lot of "Beta", and the purpose was that compilers and operating systems should be coded in this language. Beta also contained primitives for parallel execution, and the research in Europe became "distant" from the American research, where the problem was to produce code to feed the mainframes. Now we have "multi-core" architectures, and it is suddenly important to keep all the CPUs running, the memory streams separated and the cache hit rate high. Now it is time to return - we have to study and question the past.
