
bITs and blogs

By apotheon
Tags: Off Topic

from the beginning

by apotheon In reply to bITs and blogs

<div style="text-align: justify">

<p>I'm going to do Something Different here. I'm going to try to be
informative. I'll probably be mostly informative about stuff relating
to Linux in some way. Since a lot of people, even among IT
professionals, don't know a lot of the fundamentals that comprise a
good basis for understanding Linux-related stuff, I figure I should
probably start with some of those fundamentals. See how helpful I am?</p>

<p>First off, Linux is unix, but not UNIX, or even Unix. At least,
that's how I refer to the various states of unix-compatibility. See,
UNIX is a trademark that is assigned to a unix when it passes certain
qualifications and when someone pays for the privilege of using the
UNIX name. Unix is basically a non-term that I usually don't use, but
when I do differentiate between Unix and either UNIX or unix, I mean a
member of a family of related OSes that are all descended from the same
ancestor and contain (some of) the same code as that ancestor, but that
isn't necessarily UNIX because nobody bothered to get it certified. The
various *BSD unices
(that's plural for unix) qualify as Unix by this standard, because
almost every single unix in existence has a core of *BSD-based code in
it. Even the original UNIX line, developed initially at Bell Labs,
contains a lot of *BSD code because *BSD code is open source but not
copyleft, meaning that anyone can see the source code but it can be
incorporated in closed-source software without any legal issues. I'll
address the various terms of software licensing in a moment, but first
I'll finally mention what makes an OS qualify as unix.</p>

<p>Linux (pronounced "linn-ucks" or "leen-ooks") is a unix. It is not,
however, a Unix or a UNIX. It contains original code unrelated to the
core *BSD code, and though I'm pretty sure it would qualify, nobody has
ever gotten it certified as UNIX. It is a unix, however, because it
looks like a duck, quacks like a duck, and even smells like a duck. It
conforms to POSIX standards, it does everything UNIX and/or Unix does
that makes them unices as well, and it is very nearly indistinguishable
from other unices to the casual user. Linux was written entirely from
scratch, essentially by observing the behavior of other unices and
figuring out how to write code that does the same stuff.</p>

<p>Linux and *BSD are both "open source" OS families. Such proprietary
unices as SysV, Solaris, AIX, HP-UX, and so on, are not. Linux is
"copyleft", whereas *BSD is not. Here's why:</p>

<p>Linux is licensed under the GPL (the GNU General Public License). The
GPL requires that when you distribute binaries (compiled, executable
programs), you provide the source code as well. It requires that you not
restrict others from further distributing and modifying that code. It
also requires that later modifications and distributions of that code be
released under the GPL as well.</p>

<p>The various *BSD kernels are licensed under the BSD license (thus
the name). This includes FreeBSD, OpenBSD, and NetBSD, among other
(less well-known) BSD OSes. The BSD license allows you to redistribute
both binaries and source code as you see fit, and it does not allow you
to restrict others from further distributing and modifying the original
code. It does not require distribution of source code, though it does
(as already noted) allow it. Nor does it require that later
modifications and distributions of that code be released under the BSD
license.</p>

<p>Software such as the proprietary UNIXes and Microsoft Windows is
released under standard copyright, as modified by EULAs (End User
License Agreements). Copyrighted software that is not released under
other licenses is restricted from being copied or distributed in any
form at all except in accordance with "fair use" provisions, which
pretty much state that if it's useless to you without duplicating it
you're allowed to duplicate it, but only for purposes of such use.
Other than that, everything's restricted, by and large.</p>

<p>Before I touch on one more licensing scheme, I'll explain how the
various open source buzzwords fit into all this. First, there's
"copyleft": if an open source license is "copyleft", that means that it
is automatically inherited by derivative works and copies. This means
that if you modify and redistribute something issued under a copyleft
license, that modified version is also distributed under the same
copyleft license.</p>

<p>The term "open source" refers to software for which the source code
is open for viewing, modifiable, and redistributable. A similarly
applied term, "free software", refers to software for which the source
code must be made available to anyone who has access to the binaries.
There's another, far less used term for software that is all about
allowing you things without requiring anything: it permits distribution
and modification of binaries and source code without requiring either.
This term is "software libre". There's some dispute over what these terms actually
mean, but the general consensus and understanding of the terms seems to
be precisely what I've relayed here. Both "software libre" and "free
software" are "open source software", but "software libre" is not "free
software", and "free software" is not "software libre".</p>

<p>The GPL is a free software license, an open source software license,
and it <b>is</b> copyleft. The BSD license is a software libre license,
an open source license, and <b>not</b> copyleft. The CCD CopyWrite is a
software libre license, an open source license, and it <b>is</b>
copyleft.</p>

<p>CCD CopyWrite is a license I created specifically because I saw the
need for a true software libre license that was also copyleft. In
essence, "software libre" is the state of licensing that replicates the
conditions of the public domain (absent outside influences). You can do
anything you like with your libre licensed software, and so can anyone
to whom you give it: there are no legal restrictions on modification
and distribution of the content. Only laws relating to tangential
matters apply to software libre, such as laws relating to fraud (no
lying about the performance or attribution characteristics of a piece
of software). By creating a copyleft libre license, I've set aside a
"protected public domain", wherein licensed works can be treated as
though they are within the public domain, but unlike the actual status
of public domain works they cannot be re-copyrighted and "removed" from
the public domain after modification to produce a derivative work.</p>

<p>There you have it. I've made some generalized statements about what
UNIX, Unix, and unix are, how Linux and *BSD fit the picture, what the
various open source software categorization buzzwords are, and some
licensing examples to fit the different categories.</p>

<p>Note 1: Despite spurious claims to the contrary, no version of
Windows was ever really POSIX compliant. Some components of the Windows
NT system have, in various versions, been POSIX compliant to one degree
or another, but it has never been a POSIX compliant OS. NTFS was at one
time POSIX compliant; whether it still is, I'm not really sure. NTFS has
undergone so many changes over
the years that it's almost unrecognizable as being related to the
filesystem that originally bore that name.</p>

<p>Note 2: I stated that Unixes come from a common ancestor. I did not
identify that ancestor, though I hinted at both AT&T UNIX and BSD
Unix. I made reference to BSD code in UNIXes, but did not specify how
much of the OS is traceable to BSD. This was intentional. While the
facts I've related are essentially indisputable, the opinions that can
be derived from those facts are often in reasonable dispute. Since
my purpose here isn't to address that dispute, I avoided it.</p>
</div>

<p><a href="http://www.unix.org/">UNIX</a></p>
<p><a href="http://www.linux.org/">Linux</a></p>
<p><a href="http://www.bsd.org/">*BSD</a></p>
<p><a href="http://www.gnu.org/copyleft/gpl.html">GNU General Public License</a></p>
<p><a href="http://www.opensource.org/licenses/bsd-license.php">BSD License</a></p>
<p><a href="http://ccd.apotheon.org">CCD CopyWrite license</a></p>


hacker (n.): one who hacks

by apotheon In reply to bITs and blogs

<div style="text-align: justify">
<p>There's a term being bandied about in the media, and being used
improperly, with dismaying regularity. This term is one that relates to
IT professionals and enthusiasts and their shared culture. It is a term
that helps to set us apart from the rest of the world's population by
our appreciation of a certain ethic, a certain aesthetic, and a certain
metasociety that cannot be understood without exposure to, and (perhaps
more importantly) enjoyment of, the computer geek's world.</p>

<p>The term I'm talking about, of course, is "hacker". In the news
media, in the press releases of corporations like Microsoft, and in
mainstream cinema, the term "hacker" is divested of its real meaning
and granted instead only the sinister characteristics of the computer
criminal. This has, I think, come to pass because those outside of
hacker culture probably never bother to notice any hacking going on
around them unless it affects them directly and, once in a while, that
hacking might consist of someone testing and even penetrating the
security of computers and computer networks. To assign the term
"hacking" only to such activities, though, is the same as assigning the
term "pilot" only to terrorists who fly jumbo jets into skyscrapers,
"golfer" only to those who cheat at the game of golf, "driver" only to
those who drive while intoxicated and end up killing pedestrians, or
"parent" only to those who molest their children.</p>

<p>It's worse than that, actually. Not all child molesters are parents,
not all killers are drivers, not all cheaters are golfers, not all
terrorists are pilots, and not all who crack security on computers and
computer networks are hackers. Many, in fact, are script kiddies whose
closest brush with actual hackers is using a network security auditing
script some hacker wrote eight years ago. Remember that little problem
with Newsweek inaccurately reporting the contents of an FBI memo,
sparking a riot that killed 16 people? They're just as wrong, and far
more often, in the way they report computer crime.</p>

<p>The term "hacker" is used at times to refer to people outside of
computer system enthusiasts, and that's fine. I've yet to see a
non-computer-person misuse the term when referring to what they do.
I've even seen people refer to themselves as hackers of "reality",
meaning of course that they're screwing with the common perceptions of
the dominant paradigm. Good for them. Let's comfort the disturbed and
disturb the comfortable, and call ourselves saints and hackers for
having done so. It's pretty difficult to find any true hacker culture
outside of enthusiastic computer users, though.</p>

<p>The term arose with the Tech Model Railroad Club (TMRC) at MIT in
the 1960s, particularly amongst a group of members of the club who were
also involved in the goings-on of the MIT AI (artificial intelligence)
lab. From there, it began to be applied to other computing enthusiasts
unrelated to TMRC, and a vast culture of hacking arose, including its
own jargon, ethics, value system, and worldviews. As <a href="http://www.faqs.org/rfcs/rfc1392.html">RFC 1392</a>,
the Internet Users' Glossary, defines it, a hacker is "A person who
delights in having an intimate understanding of the internal workings
of a system, computers and computer networks in particular. The term is
often misused in a pejorative context, where 'cracker' would be the
correct term." There's also a reference to the term "cracker" in RFC
1392, not to be confused with the racist insult usage of the term, nor
with that usage of the term that denotes a snack food.</p>

<p>Early hacker history is loaded with the stories of giants who walked
the earth. Somewhere in the middle, there was a distinct paradigm shift
coinciding with the move from OSes and computers that were wedded to
each other to unix, the first really modular, portable separation of
the OS from the hardware, or at least the first one that really
caught on. This can be blamed, of course, on the concurrent creation,
or <i>synchrogenesis</i> (to coin a term), of the C programming
language and the Unix operating system. While the Internet was already
underway before unix began to play a substantial role in it, it was
unix that gave it the first major push toward being a public
environment. The various unices have been the primary OS of choice for
hackers in general ever since. There are those few true hackers that
simply don't use the unix environment, of course, but they are an
exceedingly rare breed. Most people that work with computers outside of
the realm of unix are professionals or end-users without the real
essence of the hacker, or are strictly hardware hackers, a strange
breed indeed. Even those hackers that have created their own OSes along
the way generally came from unix and eventually came back to it, too.</p>

<p>In the late '70s and early '80s, the growth of the PC industry began
to see the independent and convergent evolution of a new class of
computer users. They weren't a culture, yet, though. They had terribly
underpowered little "toys" that didn't even have the ability to
effectively communicate with each other over the Internet. This is one
reason many people don't realize just how old the Internet is: if they
know anything about the history of computer networking with PCs, they
probably think back to the bad ol' days of dial-in BBSes before PCs
could touch the Internet. It was the ISPs like Prodigy and AOL that
ultimately brought the Internet to the masses (thank goodness we've
moved on to better options now), by giving PCs something to dial into
that would then connect them to all the wide world of the Internet, and
it was the web browser and email that made it something worth doing.
Then, in the early '90s, just before the release of Windows 3.11,
hacker culture met the scattered PC enthusiasts, and that convergent
evolution finally came to its merging point. Linux and BSD for the 386
were created, almost simultaneously. Both were made open source, as
well, which suited the hacker ethic perfectly. The hacker's home OS was
born, and it was twins.</p>

<p>Generally, one does not decide to become a hacker and pursue any set
of required tasks to get there. It's not a profession with certifying
authorities, though there is a certain amount of semi-official
recognition that cements one's place in the culture. It's not a skill
set that one acquires at school or on the job, though one is never a
hacker without skill. It's not an attitude, though without the right
attitude all you'll ever be is a programmer, or a script kiddie, or a
network administrator, or an end user, or a wannabe, or perhaps worst
of all a <i>suit</i>. Hacker culture is something of a meritocracy,
but mere ability isn't everything: there's also the ethic and the
aesthetic sense, for instance. It's all something you can't just study
and understand. You have to grok it.</p>

<p>That's not to say that hackers never disagree. They not only
disagree, but can do so very noisily, obstinately, and at great length.
They even disagree regularly on subjects as fundamental as what exactly
it is to be a hacker. Find two hackers and ask them what being a hacker
means: if they don't just quote RFC 1392 or the Jargon File at you,
you'll get two different answers. You might even get three. Put them in
a room together, and they may argue it to death, and they may both end
up with different opinions than those they had when they started, but
they'll still probably disagree on some fundamental points. If both are
real hackers, though, they'll surely recognize each other as such by
the time a truce has been called and the dust has settled.</p>

<p>For my part, I've been called a hacker by several people who know
what the term really means, independently and without prompting. These
are people who recognize that I have some skill, and that I grok the
hacker life, and I really do understand it on that visceral level.
It's a commonly accepted (if usually unspoken) tradition in hacker
culture that it's better to be identified as a hacker by someone else,
someone who knows what he or she is talking about. Among my credentials
is recognition by a bona fide rocket scientist who's been as much a real
hacker as anyone I've met for longer than I've known there was such a
thing as Linux (and she has been using Slackware since version 1.x).
Guess what: I
dispute their claims. I'm not sure I qualify. It's that pesky skill
thing, you see. I have the enthusiasm and the interest and all the rest
of it, but somehow I've just never really gotten immersed enough in
certain key activities (programming, foremost among them) to develop
more skill than that of a dabbler in hacking. I mean, really, there's
an assumption in the term "hacker" that, to be one, you have to "hack".
I've had some close brushes with activities that carry that name, and
I'm even the recent founder of a very small hacking club, of sorts, but
as for real experience in hackish activity, well, it's a little sparse.</p>

<p>Some of these people who have thus granted me the title certainly know
more than I do about the matter. Perhaps I should defer to wiser heads
than mine. I know I don't want to be the wannabe who self-identifies
without proper justification, though. I'm not comfortable accepting
that appellation at this time. I may never be.</p>

<p>I know I get annoyed when some idiot reporter or Microsoft marketing
executive uses the term to describe something lower than the scum on
the soles of my 14-hole Doc Marten boots, though.</p>
</div>


hacker (n.): one who hacks

by Jay Garmon Contributor In reply to hacker (n.): one who hack ...

You know you're a real hacker if you don't want to be called one. Anyone claiming to be a hacker is a script kiddie with delusions of grandeur.


hacker (n.): one who hacks

by jmgarvin In reply to hacker (n.): one who hack ...

It drives me crazy when I teach my Hacking class and my students think
it is all instant gratification kind of stuff. I have to explain
to them OVER AND OVER that you really have to examine your target
carefully, poke and prod it a bit, and THEN attack.<br />
<br />
I also have a hard time getting some of them past the script kiddie
mentality. My best trick is to show them how to forge email (a
party trick to be sure) and explain that it is the tip of the iceberg.<br />


living history

by apotheon In reply to bITs and blogs

<div style="text-align: justify">
<p>I've seen him, and probably even exchanged words with him, a couple
of times before last night. I just never realized who he was until
conversation over a table at Denny's.</p>

<p><a href="http://en.wikipedia.org/wiki/Paul_Kunz">Paul Kunz</a> is
awesome. The first webserver to be set up outside of Europe was his
fault, at the Stanford Linear Accelerator Center where he spent much of
his time in 19**. He came back from meeting with Tim Berners-Lee of
CERN, the creator of the World Wide Web, and along with a couple of
colleagues got North America into the Web for the first time.</p>

<p>I'm an avid user of Linux. I use Debian GNU/Linux whenever I can get
away with doing so. When I have to use another distro (or, heaven
forfend, Windows) for work-related activities, I simply have less fun.
Debian makes me feel good. It's not just Debian, though: my favored
computing environment uses the GNUstep framework with the WindowMaker
window manager. It turns out that Paul Kunz was the guy that kicked off
the GNUstep project in the early '90s, too.</p>

<p>Last night, we sorta introduced ourselves to each other and
conversed, along with about a dozen other Linux enthusiasts at a LUG
meeting in Denny's. I couldn't believe my luck when I realized who he is
and what he's done. I even put together sort of a programming
enthusiasts' study group a few weeks ago in part so that I'd have an excuse to work
with others on learning Objective-C, the programming language in which
GNUstep is primarily implemented. Yes, I like GNUstep that much.</p>

<p>These days, I make more money doing web programming than anything
else. I'm using technologies that this man pioneered on this continent
about a decade and a half ago.</p>

<p>I guess, in a way, he's sort of a hero of mine. I certainly admire the man.</p>

<p>. . . and last night, I met him. He's a nice guy, very personable. I
wonder what his reaction will be when he finds out I started a
Wikipedia article about him.</p>
</div>


living history

by Jay Garmon Contributor In reply to living history

<p>THAT. IS. SO. COOL.</p>
<p>Hey, if this LUG gets off the ground, let me know. We may want to talk to you about our TechRepublic Chapters project.</p>


T3H: programming for fun and profit

by apotheon In reply to bITs and blogs

<div style="text-align: justify">
<p>Here it is, Friday morning. Every Thursday evening now, I attend T3H
meetings. I get together with some friends that have some similar
interests, and we . . . learn stuff. It is, so far, sort of a
programming study group, with the primary goal of helping its members
learn stuff. I put it together in the first place (yes, I'm the person
who organized it) from members of a local LUG as an excuse to create an
environment where I wasn't learning new programming skills alone and in
a vacuum. I wanted more people to play off of, in person, where we'd be
there specifically to help each other with new concepts and new
implementations.</p>

<p>It all started with Objective-C, but as of last night that has
changed. The group member who knows the most about the language was
running into a wall, in part because of a lack of the necessary skills
to be leading the charge alone and in part because he doesn't actually
know the syntax of the language well enough to be teaching it. As such,
the whole group hashed out the needs, wants, and skills of the group as
a whole. From this, we developed a new, more useful plan for the future.</p>

<p>This, of course, perfectly suits one of the first rules of good
software design: plan to throw the first one away, for any given value
of "one".</p>

<p>So, Objective-C is on the back burner (for those of us who plan to
ever revisit it). We're going in new directions. The new directions, in
this case, number three, with three different languages that we'll be
studying as a group, simultaneously.</p>

<p>Basically, we're going to be attacking essential concepts of
programming, in each case first in theory (by discussing the concepts),
then in usage and implementation (by discussing how they're used and,
possibly, by working up pseudocode), and finally in practice (by using
the three example languages we've chosen to give these concepts useful
form). Yes, three languages.</p>

<p>See, there are three different basic types of languages that we have
decided we need to learn in order to get the full run of important
theories of program design. Each of us knows at least one programming
language (and by "programming language", I mean "turing complete
language", and by "knows", I mean "has at least a functional familiarity with"), and the number of languages known with any real facility
varies between us from "one" to "half a dozen or more", but nobody has
as complete a knowledge of all three of these types of programming
language methodologies as we'd like. The three types are procedural,
functional, and object-oriented*.</p>
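
<p>To make the distinction concrete before getting to the languages
themselves, here is a rough sketch, not part of the original discussion
and written only in C, of the same tiny task done once in a procedural
style and once with a functional flavor. The object-oriented version
would bundle the data together with the functions that operate on it,
which Ruby does natively and which C can only approximate with structs
of function pointers.</p>

<pre>
/* Hypothetical illustration: summing the squares of some numbers in two
 * of the three styles, both in plain C. */
#include <stdio.h>

/* Procedural style: step through the data and mutate an accumulator. */
static int sum_of_squares_proc(const int *xs, int n)
{
    int total = 0;
    for (int i = 0; i < n; i++)
        total += xs[i] * xs[i];
    return total;
}

/* Functional flavor: no mutation, no loop, just recursion over the rest
 * of the list. Scheme would express this far more naturally. */
static int sum_of_squares_func(const int *xs, int n)
{
    return n == 0 ? 0 : xs[0] * xs[0] + sum_of_squares_func(xs + 1, n - 1);
}

int main(void)
{
    int xs[] = { 1, 2, 3, 4 };
    printf("%d %d\n", sum_of_squares_proc(xs, 4), sum_of_squares_func(xs, 4));
    return 0; /* prints "30 30" */
}
</pre>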

<p>The representatives of each language type are C for procedural
programming, Scheme for functional programming, and Ruby for
object-oriented programming. In addition to procedural, functional, and
object-oriented programming methodologies, however, these languages
also represent a second trinity, this time of programming aesthetics,
though this aspect of the reasons to choose the languages went largely
unremarked: the "shiny new thing", the "obscure, eccentric genius", and
the "venerable workhorse".</p>

<p>In Ruby, everything is an object; it is a "true", or "pure",
object-oriented language. It's the useful scripting language child of
Smalltalk and Perl, in essence. It has a reputation for power,
flexibility, rapid development, intuitive ease of learning and use, and
a whole bunch of other wonderful (if improbable, by some measures)
characteristics. It's an example of a "shiny new thing" language, as
well, fitting it into the trinity of programming aesthetics.</p>

<p>In Scheme, everything (or at least nearly everything) is a function.
I'm less familiar with what makes Scheme what it is than I am with
Ruby, though from what I understand it is about as close to a "true" or
"pure" functional language as you're going to get without resorting to
simply writing programs in Lambda calculus (which would be akin to
trying to communicate using English without nouns). Scheme is one of
the Lisp family of languages, which as a whole comprises one of the
canonical examples of the "obscure, eccentric genius" programming
aesthetic. In fact, the two primary approaches to Lisp seem to be
embodied in Common Lisp (a large, sprawling language) and Scheme (a
more succinct, terse language that grew from the same base). Granted, I
could be mistaken. This will be my first-ever foray into the land of
Lisp and its imitators.</p>

<p>Finally, of course, there's C. C is the procedural representative,
though it is not the most "true" or "pure" procedural language out
there. It is, however, procedural enough to get the point across by a
fairly wide margin, and it's a language that every serious code hacker
should know. Period. It's the language to which all modern languages
owe their compilers and interpreters, most of them owe their syntax,
and all owe their running environments. It's the language in which
operating systems are written, because you don't want to do everything
in kernel development with automatic garbage collection or object
oriented design. It's the only language developed specifically because
a high-level tool for operating system design didn't already exist. It
is, truly, the practical, pragmatic, ubiquitous, venerable old
workhorse of programming languages. Unix and C are each the other's
mother, in a strange sort of incestuous, mutually procreative,
synchrogenetic relationship. It's going to be the example language
representing procedural programming, and that's that.</p>

<p>Conveniently enough, most of what makes Objective-C what it is will
be covered by these languages, particularly C and Ruby. Objective-C
is a true superset of C, unlike that OOP (object-oriented programming)
kluge C++, which means that if you learn all of C you'll know all of
Objective-C except the OOP structure. Ruby gets its OOP structure (its
OOP semantic elements) from Smalltalk, which is precisely the source
of Objective-C's OOP structures. Between C and Ruby, then, Objective-C
will be only a step away. Well, good.</p>

<p>I've lucked out. I get to learn Ruby and Scheme from the ground up,
and I frankly haven't done much with C either. I get to learn a whole
lot of stuff here. For the small price to pay of organizing and
coordinating all this, I get to learn everything without having to
shoulder the responsibility of contributing superior expertise. I get
to go back to just being a student again. Because I have experience
with other languages (like PHP, Perl, and Object Pascal, for instance),
and because I enjoy reading books about programming theory**, I also
have free rein to comment and rattle on and not be perceived as a
prejudicial idiot. This should be great fun.</p>

<p>Did I mention that I get to learn? That's such a barrel of fun that
I don't know what to do with myself. Happy times are ahead.</p>
</div>

<p><i>* = Objectional? Hah. That's funny.</i></p>

<p><i>** = I quite strongly recommend both Eric Raymond's <u>The Art Of
Unix Programming</u> and <u>The Pragmatic Programmer</u> by Andrew Hunt
and David Thomas. They're thoroughly excellent books on programming
theory.</i></p>


T3H: programming for fun and profit

by jmgarvin In reply to T3H: programming for fun ...

Let me add <u>Beginning Linux Programming</u> by Stones and
Matthew. I also quite like <u>Advanced Programming in the
Unix Environment</u> by Stevens.<br />
<br />
A good book for picking up C is <u>A Book on C</u> by Kelley and Pohl.<br />


T3H: programming for fun and profit

by agilbertson In reply to T3H: programming for fun ...

Actually, there are three elements to Scheme: Functions, atoms, and
lists.  An atom is just a piece of data (character string, numeric
data, pile of bytes); a list is a group of atoms or lists; and
functions are...well, functions.<br />
<br />
At least, that's the way I remember it.  While it may not be
totally correct, thinking of it that way lets me actually write
programs in Scheme.  You should probably check out the <a href="http://www.plt-scheme.org/">PLT Scheme</a>
website; they've got some resources on learning Scheme and a fairly
decent IDE/interpreter called DrScheme.  (I used this when I was
taking Matthias Felleisen's courses at Northeastern.)<br />


Understanding OSes: Booting

by apotheon In reply to bITs and blogs

<div style="text-align: justify">
<p>This is part one of the Understanding OSes series. Find more at the <a href="http://techrepublic.com.com/5254-6257-0.html?forumID=99&threadID=173898&messageID=1822678&id=3923716">Table of Contents</a>.</p>
<p>When a computer first boots, it starts up the BIOS (Basic
Input-Output System), a sort of rudimentary operating system designed
to work (hopefully) without user intervention. Its purpose is to find
and activate the bare minimum hardware needed to allow the discovery
and activation of an operating system that is actually designed to
interact with the user. The BIOS provides a central sort of
abstraction that accepts input from various devices and offers output
interfaces from which devices can receive data. This allows your
computer to trade information between components so that the processor
can access an operating system on your boot media (hard drive, floppy
disk, et cetera), load necessary bits of it into memory, and ultimately
begin running software so that it interacts with humans in the real
world in some way.</p>

<p>This leads to booting the operating system. Interestingly enough,
the operating system itself then has to redo everything the BIOS did,
but it has to do it in its own (generally much more complex) fashion so
that more than just the bare minimum hardware is detected and
activated, and so that more complex operations can be attempted. This
entire process is referred to as "booting", an evolution of the term
"bootstrapping", which in turn arose from the quaint image of pulling
oneself up by one's bootstraps. The idea is that the OS must create its
own foundation for operation in order to run effectively. It would be
very much like trying to lift oneself off the ground by tugging on
one's bootstraps if the BIOS weren't there to provide a point of
leverage. In the case of large-kernel OSes (which is to say: anything
you're likely to encounter in a comparative discussion of modern OSes),
there is actually an intermediary step called the "bootloader", because
the BIOS is in fact too limited to access enough system resources to
load the entire kernel all by itself. Instead, it fires up the
bootloader, that intermediary pseudo-OS, which then in turn does for
the operating system proper what the BIOS did for it.</p>
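
<p>As a toy illustration of that handoff (my own sketch, not something
from the boot process itself): one of the checks firmware performs
before passing control to a bootloader is looking for the two-byte
signature 0x55 0xAA at the end of the first 512-byte sector of the boot
media. The snippet below fakes that check against a disk image file from
an ordinary hosted C program; real BIOS code does the same thing from
ROM, in real mode, with no C library to lean on, and the file name
"disk.img" is just a placeholder.</p>

<pre>
/* Hypothetical sketch of a BIOS-style bootability check. */
#include <stdio.h>

int main(int argc, char **argv)
{
    unsigned char sector[512];
    FILE *disk = fopen(argc > 1 ? argv[1] : "disk.img", "rb");

    if (!disk || fread(sector, 1, sizeof sector, disk) != sizeof sector) {
        fprintf(stderr, "could not read a full boot sector\n");
        return 1;
    }
    fclose(disk);

    /* Offsets 510 and 511 of the first sector hold the boot signature. */
    if (sector[510] == 0x55 && sector[511] == 0xAA)
        printf("boot signature found: would load and jump to this code\n");
    else
        printf("no boot signature: would try the next boot device\n");

    return 0;
}
</pre>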

<p>The most fundamental part of the operating system is the kernel. In
the process of discovering hardware, activating it, and
creating software interfaces with hardware (usually through the HAL, or
Hardware Abstraction Layer, in modern personal computer OSes), your OS
probably does some autoprobing. What this means is that the kernel's
boot process has a set of data describing hardware types, and it looks
around at what is available to see if anything matches the hardware
definitions in that set of data. It thus probes for hardware and keeps
track of what it finds. It looks for this data at I/O
ports, which are "addresses" of data buses. A data bus is basically a
pathway along which data can travel between hardware components, and
I/O ports are bus addresses that relate to data input and output
functionality.</p>
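
<p>Here is a hypothetical sketch of that probe-and-match idea boiled down
to a few lines of C: a table of hardware the kernel knows about, a
handful of I/O port addresses to poke, and a loop that records whatever
answers match. The port numbers, the ID values, and the
read_id_from_port() helper are all invented for illustration; a real
kernel reads vendor and device identification off the bus instead of
calling a stub.</p>

<pre>
/* Hypothetical sketch of hardware autoprobing. */
#include <stdio.h>

struct known_device {
    unsigned short id;      /* identification value the hardware reports */
    const char    *driver;  /* driver to associate with it               */
};

static const struct known_device known[] = {
    { 0x10EC, "example-ethernet" },
    { 0x8086, "example-ide"      },
    { 0x1013, "example-video"    },
};

/* Stand-in for actually poking an I/O port and reading a response. */
static unsigned short read_id_from_port(unsigned short port)
{
    return port == 0x1F0 ? 0x8086 : 0;   /* pretend one device answers */
}

int main(void)
{
    const unsigned short ports[] = { 0x1F0, 0x2F8, 0x3C0 };

    for (size_t p = 0; p < sizeof ports / sizeof ports[0]; p++) {
        unsigned short id = read_id_from_port(ports[p]);
        for (size_t k = 0; k < sizeof known / sizeof known[0]; k++)
            if (id && id == known[k].id)
                printf("port 0x%X: found %s\n", (unsigned) ports[p],
                       known[k].driver);
    }
    return 0;
}
</pre>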

<p>If you're using an operating system that displays the results of
hardware autoprobing, you'll see a series of text messages on the
monitor of your computer (or hear it on a speech-only system, or
whatever suits your particular setup; for all I know, some of you might
operate systems by smell, though that would certainly require a very
customized setup). As it recognizes hardware at these I/O ports, your
OS kernel in some manner loads drivers: software components that
provide a means of communication between the hardware components and
the internal workings of the OS. In most modern OSes, there is a
particular part of the kernel whose job it is to gather input from
hardware and provide it to the rest of the kernel, and to pass kernel
output on to the I/O ports. That part of the kernel is the HAL, and it
provides a means for most of the kernel to be the same no matter what
hardware platform is being used to run the OS. Only the drivers and HAL
need to change between hardware platforms, providing for a great deal
of OS portability so that the hardware doesn't always have to be
exactly the same.</p>
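
<p>Here is a minimal sketch, again in C and entirely invented for this
article (struct hal_console and the vga_* and serial_* stubs are not
taken from any real kernel), of why that arrangement buys portability:
the generic code talks only to a small table of function pointers, and
only the code behind that table has to change from one hardware platform
to the next.</p>

<pre>
/* Hypothetical sketch of a HAL-style indirection layer. */
#include <stdio.h>

struct hal_console {
    void (*putc)(char c);   /* write one character to the device */
    int  (*ready)(void);    /* is the device ready for output?   */
};

/* One possible back end: a VGA text console (stubbed with stdio here). */
static void vga_putc(char c) { putchar(c); }
static int  vga_ready(void)  { return 1; }

/* Another back end: a serial console (also stubbed). */
static void serial_putc(char c) { fputc(c, stderr); }
static int  serial_ready(void)  { return 1; }

/* Generic "kernel" code: knows nothing about VGA or serial hardware. */
static void kernel_log(const struct hal_console *con, const char *msg)
{
    while (*msg && con->ready())
        con->putc(*msg++);
}

int main(void)
{
    struct hal_console vga    = { vga_putc,    vga_ready    };
    struct hal_console serial = { serial_putc, serial_ready };

    kernel_log(&vga,    "booting on platform A\n");
    kernel_log(&serial, "booting on platform B\n");
    return 0;
}
</pre>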

<p>This all leads us to the first two major sticking points in
comparing the major personal computer OSes with which we're familiar.
How autoprobing is handled and how the HAL is designed create
significant differences in how the OS behaves under certain
circumstances. DOS and Windows (which inherits its autoprobing
capability almost unchanged from DOS, even after all this time) use a
very rigid, inflexible mechanism for hardware autoprobing. This
mechanism is bad at identifying hardware because it doesn't listen very
well. What it does, in general, is ask each I/O port "Are you this?"
over and over again for many values of "this", where each "this" is
some piece of hardware the OS has been told to expect. If you have
installed drivers tailored for a given piece of hardware at a given I/O
port, the kernel will default to that driver first and ask the hardware
"Are you this, for which I have a driver?" When there are no specific
drivers, the kernel must try only the very limited set of options given
to it at the "factory", one at a time, until it finds something that
looks like a match.</p>

<p>If you happen to have more than one set of drivers installed that a
particular type of hardware's answer might resemble, you might get a
hardware conflict situation. A piece of hardware that lurks beyond a
given I/O port will be asked "Are you this?" and will respond "I fit
that description", but if the driver is actually for another piece of
hardware that is simply similar enough that the OS isn't sure how to
differentiate, you get the wrong driver associated with a piece of
hardware. The autoprobe then fails. Microsoft tried to solve much of
this by introducing the PnP (Plug and Play) system, which essentially
consists of a huge database of drivers pre-installed but only activated
when hardware matches up with them, and by increasing the number of
characteristics the kernel uses to describe a hardware expectation when
probing at a given I/O port. This creates a trade-off: the more
characteristics the kernel checks, the longer autoprobing takes, but the
more likely it becomes that the right drivers will be identified for a
given piece of hardware.</p>

<p>The various unices tend to differ somewhat in how good they are at
autoprobing. This difference largely depends on how much developer time
has gone into hardware interaction performance for a given unix OS.
Some unices have placed more focus on security utilities, some on
number-crunching, and so on. Linux has, from the beginning, had a lot
of attention lavished on the performance of the kernel in relation to
hardware. One benefit of this has been a lot of attention on
autoprobing, and as a result the Linux kernel is very, very good at
autoprobing. It basically listens better than most other OSes, and is
designed to be very good at handling the data it gets from I/O ports
during autoprobing to choose driver modules. The term "driver modules"
will come up again later, by the way, in another of these articles.</p>

<p>There are those who theorize that it is the cleverness of the Linux
kernel's autoprobing that allowed it to be as rapidly successful among
developers (who are also users) as it has been. By being good at
autoprobing, it was easier to install than other unices: unices in
general have lacked "user friendly" installers for years, with the
exception of those proprietary unices that have been closely wedded to
proprietary hardware platforms so that autoprobing is largely
unnecessary. This means that the user who wishes to install a unix on a
given computer must do so with a fairly rudimentary installation
interface, if there is a cohesive installer at all. With the original
Linux installations, this actually had to be accomplished with no
installer whatsoever. Instead, a boot floppy was used to get started,
and part of the process involved compiling a kernel from source on the
machine on which you planned to run it. For those who know this process,
at least in theory, no further explanation is needed. For those who
don't, suffice to say that it is a long, involved process, and largely
irrelevant for this exposition on the OS boot process.</p>

<p>If it weren't for the fact that it is as good at autoprobing as it
is, Linux would have taken much longer to move beyond that stage and
further advance in market share and mindshare. It attracted developers
because it is a unix, with all the power and flexibility that implies,
and because it was much easier to install than its brethren. Being an
open source project, the Linux kernel's popularity among developers
also ensured that it got more development, not only of its already
clever autoprobing capability but also of everything else it
did. As such, it broke away from the pack early and gained popularity,
performance, and functionality more quickly than it would otherwise
have done. That's the theory, anyway, and that's why it's now looking
at the situation of being a real contender for market niches previously
thought to be the sole province of Microsoft and Apple, with
bit-players like Amiga, BeOS, and NeXTstep momentarily hovering at the
fringes as technically superior, but undermarketed, alternatives.</p>

<p>Once hardware is identified, and drivers are in place, the first
stage of the boot process is complete. Your computer has reached what
unix hackers often call "run level 1". The next step involves running
an initiating process, often called "init". This process checks the
storage media and starts up other processes, such as unix daemons and
Windows services, which run in the background. These are programs like
print spoolers, incoming mail listeners, and local web servers, which
are always listening for possible incoming instructions, whether
from the network or from user applications that might call on them.
Once these background processes are running, the initiating process
will start up your interface (at least, that's the order of things in
theory, though Windows often violates that and unices can be made to do
so with startup scripts). In unices, getty (or equivalent) is started
to watch consoles for command line input at the shell. In Windows, a
GUI environment is started immediately, and in modern Windows versions
the console input processes aren't started at all unless the user or
some user application accesses the command line.</p>
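
<p>A hypothetical sketch of the init idea, following the boot order just
described: start some background processes, start a getty-style console
watcher, then sit in a loop collecting whatever children terminate. The
program paths are placeholders, and a real init also juggles run levels,
respawning, and configuration files, none of which appears here.</p>

<pre>
/* Hypothetical sketch of an init-like process. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static pid_t spawn(const char *path)
{
    pid_t pid = fork();
    if (pid == 0) {                     /* child: become the new program */
        execl(path, path, (char *)NULL);
        perror("execl");                /* only reached if exec failed   */
        _exit(127);
    }
    return pid;                         /* parent: remember the child    */
}

int main(void)
{
    /* Background processes first: print spooler, mail listener, and so
       on (placeholder paths). */
    spawn("/usr/sbin/cupsd");
    spawn("/usr/sbin/exim4");

    /* Then the interface: something watching the console for logins.
       Higher-level servers (SSH, FTP) would come after this point. */
    spawn("/sbin/getty");

    /* init never exits: it keeps reaping children as they terminate. */
    for (;;) {
        int status;
        if (wait(&status) < 0)
            sleep(1);                   /* nothing to reap right now */
    }
}
</pre>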

<p>Then, finally, user applications and high-level daemons and
background services are started. While your network may be initiated
early in the init process in Linux, for instance, networking servers
(such as an SSH server, an FTP server, and so on) are started after the
basic interface processes are begun. The same holds true for Windows.
This is because such processes actually use the interface as part of
their operation. Where the default interface is the GUI (Graphical User
Interface), and any CLI (Command Line Interface) is actually an
application running inside the GUI, this imposes quite a lot of
resource overhead on the operation of such services. This is part of
the reason that so many services in Windows have been incorporated into
the ever-more bloated kernel, and that those that haven't been
incorporated into the kernel often make use of services that behave in
unsafe ways such as by inappropriate use of RPCs (Remote Procedure
Calls), in an attempt to recapture performance otherwise lost to the
GUI's RAM and CPU demands. Such tricks to bypass the security enforced
by a strict separation of system processes from user processes are not
necessary with unices because the GUI isn't integral
to the OS, and thus doesn't impose the same resource inefficiency on
the higher level services.</p>

<p>I don't mean to suggest, of course, that these are the only reasons
such things happen, or that these are the only consequences that
proceed from those causes, but they are the most relevant to this topic.</p>

<p>This concludes part one of the Understanding OSes series. Part two is <a href="http://techrepublic.com.com/5254-6257-0.html?forumID=99&threadID=173898&messageID=1809200&id=3923716">Understanding OSes: Kernel Modularity</a></p>
</div>
