General discussion


bITs and blogs

By apotheon ·
Tags: Off Topic

This conversation is currently closed to new comments.

385 total posts (Page 5 of 39)



the life of a professional enthusiast

by apotheon In reply to the life of a professiona ...



Understanding OSes: Kernel Modularity

by apotheon In reply to bITs and blogs

<div style="text-align: justify">
<p>This is part two of the Understanding OSes series. Part one was <a href="http://techrepublic.com.com/5254-6257-0.html?forumID=99&threadID=173898&messageID=1783490&id=3923716">Understanding OSes: Booting</a>. Familiarity with the subject matter of part one is recommended before reading part two. Find collected links for this series at the <a href="http://techrepublic.com.com/5254-6257-0.html?forumID=99&threadID=173898&messageID=1822678&id=3923716">Table of Contents</a>.</p>

<p>At the center of every modern operating system is the kernel. The
kernel is, in essence, the parent program of all other programs that
make up the totality of your OS and computing environment as a whole.
The kernel, however, is not one homogeneous process, at
least in theory, and the degree to which it is separable into distinct
parts is called "modularity". What modularity means changes from one
kernel design to another: in the Linux kernel, a module might be the
device driver for your wireless networking card, while in the Hurd
kernel, a module might be even more fundamental, such as the operating
system's interrupt request handling functionality. In the course of
this article, I'll address kernel modularity by describing the
modularity characteristics of several major types of kernel design:
highly modular microkernel, two-part microkernel, monolithic kernel,
and megalithic kernel. I'll do this primarily by examples, though in
some cases my examples will explain the types by contrast as much as by
demonstration of the type.</p>

<p>The most modular OS design philosophy going in theoretical OS design
circles utilizes what is known as a "microkernel" with sort of a cloud
of modular parts floating around it, ready to be attached to and
detached from the microkernel itself to provide functionality if, and
only when, they are needed. This is meant to provide increased
security, because no more kernel is present than is needed to be in
operation at any time, thus reducing the amount of possibly compromised
code. This is meant to provide increased stability, because the
complexity of the current running system is reduced when no more than
exactly what is needed is actually running. This is meant to provide
increased performance, as there is less kernel to be loaded into memory
and to demand the attention of the processor than if everything is
loaded all the time. All of this is how a microkernel is supposed to
make your life better, in theory. As we all should know, however, in
theory there is no difference between theory and practice, and in
practice there is a difference. Make a note of that principle of theory
and practice: I use it a lot.</p>

<p>Unless and until microkernel development makes sudden leaps forward
in perfection of its principles of implementation, we're going to tend
to find many of the intended effects of microkernel design being
reversed. They perform like dogs because interfaces between the modular
chunks of the kernel (the microkernel and its cloud of extensions)
require communication: by analogy, compare being able to pass thoughts
around in your brain with being able to discuss those thoughts with
another human being. Their stability is good, as far as the microkernel
itself is concerned, for exactly the reasons it's theorized it should
be good, but the stability for the whole system is reduced by virtue of
the fact that increased complexity of communication across software
interfaces creates more opportunity for errors and failures to occur,
possibly causing big chunks of the system to fail if something goes
wrong. As far as anyone is aware (or so I'm given to understand),
security operates pretty much as advertised, though there are those who
are skeptical of the claim of increased security as well. I suppose, in
the end, that if the "perfect" microkernel OS were written, it would be
more stable, secure, and quick on the uptake than everything else
going, but if the "perfect" monolithic kernel were written as well, it
too would be perfectly secure, stable, and fast, so one begins to
wonder where the escalation of hypotheticals might end.</p>

<p>There is exactly one commonly available true microkernel OS in
current development, so far as I'm aware. That OS is GNU/Hurd. It's
still not, as of this writing, in ready-for-release form, but it's
operational. From what I understand, it's slow as molasses in
comparison with GNU/Linux system performance, but operational. I'll be
keeping an eye on its development to see where it's going.</p>

<p>There is also a commonly available pseudo-microkernel in current
development that might surprise you. That kernel is called Darwin, and
it's the heart of MacOS X. It is a very nearly "true" microkernel, but
that of course is shot down the tubes as far as microkernel theory is
concerned by the way Apple makes use of the Darwin kernel for the basis
of MacOS X. You see, MacOS X is built as a layered-on monolithic
kernel, with Darwin simply acting as the core of it. Ultimately, MacOS
X seems to be designed as a bilithic kernel, if I'll be allowed the
neologism, which operates pretty much indistinguishably from the way a
monolithic kernel operates.</p>

<p>The next step in our odyssey from microkernel to megalithic kernel
is the modular monolithic kernel. This is a somewhat effective attempt
to wed some of the better facets of microkernel and monolithic kernel
design into one single kernel design standard. The current canonical
example of this is the Linux kernel. The way the Linux kernel works is
largely indistinguishable from the way a normal monolithic kernel
works, as viewed from outside, and thus it enjoys the performance
benefits of a mostly unified operating system. Essentially, a true
monolithic kernel makes everything that is strictly part of the
necessary OS of a computer system part of the kernel, which does not
include the user interface. This is normally only possible by knowing
before you compile your kernel exactly what hardware you'll be using,
and designing the kernel to suit that hardware, then compiling it.
There are ways around this, and I don't pretend to really know anything
about them excepting the way Linux does things, so I won't comment
except on the Linux modular monolithic kernel design.</p>

<p>Within the Linux
kernel, there is a set of module support interfaces that allow for
kernel modules to be loaded and unloaded as needed. Linux notably uses
kernel modules for device drivers, among other purposes. Because of the
tight integration of modules with the kernel, there is minimal
performance loss in the communication between modules and kernel,
though when you compile your kernel you need to "deactivate" support
for modules you know you won't use if you want greater performance
benefits possible through absence of those modules. For those who are
familiar with Linux distributions, this is (at least theoretically) a
performance benefit of using Linux From Scratch, Debian From Scratch,
Gentoo, or really any custom-compiled kernel, assuming you are willing
(and knowledgeable enough) to configure the kernel to suit. All else
being equal, then, the combination of features of modularity and
monolithic kernel design contributes greatly to the performance of
Linux, and helps to explain how a Pentium II 366MHz Thinkpad running
Linux that I own consistently outperforms an Athlon XP 1600+ desktop
system running Windows XP Pro that I also own.</p>

<p>This brings us to the red-headed stepchild of kernel modularity,
Windows. There have, over the years, been aspects of Windows kernel
design that make arguments possible claiming that it is a microkernel
system, a monolithic kernel system, a bilithic kernel system, or a
megalithic kernel system (the last being a system that includes even
such normally application-level software as the user interface in the
kernel itself). Ultimately, I suppose the way to describe the somewhat
schizophrenic history of the Windows kernel is as a modular megalithic
kernel. From top to bottom, the OS itself is made up of separate, but
indivisible, parts. For instance, the user interface of Windows XP is
indeed a separate set of code from the rest of the kernel (and is,
itself, several separate programs interoperating intimately), including
the legendary OS-integrated Explorer rendering engine used to make much
of the user interface possible, but the OS itself literally will not
start and operate properly without all major modular parts of the
entire vertically integrated stack of parts running. If the UI goes
down, the entire operating system is essentially inoperable unless and
until the UI starts working again.</p>

<p>Technically speaking, of course, the
higher-level OS components are not part of the kernel, but there is
such an indistinct dividing line between parts of the operating system
that sometimes it can be a little hard to tell where the kernel ends
and everything else begins. Despite initiatives at Microsoft over the
years intended to implement good design principles in kernel design,
they have largely been sabotaged by the necessities of interfacing the
kernel with the rest of the operating system. The part of the OS that
is officially identified as the kernel by Microsoft, its central parent
process, is actually not a bad design, at all, in theory. In practice,
it is the axle on which the Windows wheel turns, and a stable axle
won't smooth the ride without some rubber between the wheel and the
pavement. The reasons for the evolution of the Windows system
architecture are manifold, and will be the subject of a later addition
to the Understanding OSes series.</p>
</div>


Understanding OSes: Kernel Modularity

by jmgarvin In reply to Understanding OSes: Kerne ...

I'm looking forward to more exokernels on the market.  While I
think they are a little on the kludgy side, I also think they bring
the best of modularity and the best of the efficient structure of
current monolithic kernels together.<br />
<br />
What are your thoughts, apotheon?<br />


Windows->Linux: Introduction

by apotheon In reply to bITs and blogs

<div style="text-align: justify">
<p>I keep seeing people asking for advice relating to getting started
in Linux, and the advice they get is often quite disorganized, in large
part because of where they're asking. For instance: jumping into a
distro-nonspecific discussion forum such as TR, and asking for what's
the "best" distribution of Linux to use, is likely to elicit sixteen
answers from a dozen people, followed by a whole lot of debate,
sometimes highly technical and other times full of crap. Similar
problems arise with other pieces of advice for which a putative
penguinista might ask, and really it helps to find a clear, coherent
discussion of the options, along with advice on how to accomplish what
you're aiming to do that is similarly clear and coherent.</p>
<p>That's what I aim to do with this. Before I get anywhere near the
end of my Understanding OSes series (and I may never finish; I'll
probably come up with ever-more stuff about which to write as I keep
learning more), I'm declaring the intent to begin a new series. Welcome
to the Windows->Linux series.</p>
<p>I'll probably start with some cautionary statements about if/when
you should make the leap from Windows to Linux, what to expect, and how
to get help when you need it. That's even more fundamentally important
than your choice of distro, since it'll help you out with everything
you do in your transition from one OS to another.</p>
<p>Table of Contents:</p>
<ol>
<li><a href="http://techrepublic.com.com/5254-6257-0.html?forumID=99&threadID=173898&messageID=1822599&id=3923716">Windows->Linux: To Migrate or Not To Migrate</a></li>
</ol>
</div>


Windows->Linux: Introduction

by CharlieSpencer In reply to Windows->Linux: Introduct ...

I look forward to reading your posts on this topic.  I've found
your previous posts and comments to be well reasoned and equally well
expressed.


Windows->Linux: Introduction

by p_jones79 In reply to Windows->Linux: Introduct ...

I too have been trying to find answers to many Linux vs. Windows
questions for some time, and have found forums and web sites similar to
what you have described.<br />
I look forward to reading more on the subject. As a student of MCSA/E
who has just completed the CompTIA A+, I am trying to get a good grip
on what skills are needed/wanted to succeed in this industry.<br />
I'm sure if I had more time I could come to my own conclusions, but
that's the point of sites like these, I suppose, and I love you all for
it :)<br />


The Future of IT

by apotheon In reply to bITs and blogs

<div style="text-align: justify">
<p>[The text of this post was copied from an article of the same name at <a href="http://www.apotheon.com"><b>@political</b></a>.]</p>

<p>In the here and now, the domestic US information technology industry is in trouble. Sales aren't as brisk as they once were, and the employment market is in the toilet. There are moments of optimism here and there, but when you turn back around you always find a dozen IT industry professionals out of work and wondering where the industry went. Meanwhile, <a href="http://www.computerworld.com/governmenttopics/government/policy/story/0,10801,101601,00.html?from=story_picks">Microsoft is pressing for more H-1B work visas</a> so they can hire more foreign nationals domestically, claiming there just aren't enough qualified IT professionals available.</p>

<p>The commoditization of software is what inevitably led to this state of affairs. Microsoft essentially spearheaded the commoditization of software in the US, turning OSes into prepackaged "products" sold by unit. Microsoft is often credited with "revolutionizing" the industry in this manner, making it into a powerhouse by virtue of the commoditization of software, but the truth of the matter is that technology advances as it's needed, and it would have happened whether the software was sold as product units or not. In any case, this commodity profit model was only possible by strictly enforcing copyright law in a manner advantageous for maintaining Microsoft's revenue streams, and continued to be profitable only by lobbying for ever-stricter copyright law in Microsoft's favor. It's not only Microsoft that has done this, of course: every large proprietary software vendor has had a hand in it, but it's Microsoft that led the charge, and continues to lead it.</p>

<p>By making it a product rather than treating development as a service to the customer base (which would of course make Microsoft in current form entirely nonviable), these corporations have created a situation wherein software development costs can be kept to a minimum to provide greater profit margins in a "finished product" format. When it's a sealed-up "product", a program can be 98% old code and sold as something entirely new, it can be assembled from "parts" that are developed by anyone anywhere that simply comply with certain external behavioral standards, and can be made functional and featureful without the actual code being of any particular level of quality. As a result, close control over code quality isn't needed, and offshoring becomes entirely viable as a development model.</p>

<p>The copyrighted code is sold in executable form on a CD, and you are required by law to not make any copies of it except as possible "backups" in case the original is damaged. You are not allowed by law to install it on more than one computer at a time, and depending on circumstances you may not be legally allowed to install it on more than one computer ever. This maximizes profits, as the applications and OSes sold cost effectively nothing to reproduce in bulk quantities for sale, and with ever-cheaper offshoring development available, and the cost of transmitting finished code electronically being effectively zero, overhead for corporations like Microsoft gets smaller and smaller. As surmised in the linked article, the reason H-1B visas are desirable to Microsoft has nothing to do with not being able to find engineers. One of the major reasons for it is that Microsoft doesn't need to pay visa workers as much as domestic workers, so they can hire people here on a work visa to do the few things that simply cannot be done overseas and transmitted electronically, such as local project management.</p>

<p>All of this comes together to create a situation wherein computer scientists are in decreasing demand in the domestic marketplace, all while software vendors are telling the media that offshoring is reversing and domestic hiring will increase so that they can keep their customer base active. There's the first problem that rears its ugly head for software vendors: as the technical crowd is put out of work, they stop spending money on software. The very people they're putting out of work are a significant percentage of their customer base. In essence, the domestic software market is cannibalizing itself because of the consequences of a commoditized software profit model.</p>

<p>Linux has the ability to be the spearhead for the cure to these ills, to mix a metaphor. Open source software development, with Linux as its current poster child, short-circuits profit models that rely on software as a commodity. It emphasizes the value of development, and de-emphasizes the value of corporate bureaucracy, packaging, and marketing dollars. In short, it puts potential revenue streams back in the hands of individuals, and takes them out of the metaphorical hands of corporations. It emphasizes the value of the developer and the software support provider. It favors many small companies rather than a few multinational corporations, and demands local access to development and support talent.</p>

<p>If the law doesn't shift to exclude the growing influence of free/libre/open source software, FLOSS may just become the revitalizing influence needed by the domestic IT industry. Of course, corporations have strong lobbies, and FLOSS has almost no lobby at all, so there's no guarantee the law won't change tomorrow to make a domestic industry revitalization of that sort effectively impossible. We'll see.</p>

<p>As for me, I composed this article on a Linux machine.</p>
</div>


The Future of IT

by jmgarvin In reply to The Future of IT

Part of the problem with the IT industry is the doom and gloom. 
HR doesn't understand IT (and never will), and corporate culture wants
IT to be something it is not and never can be.<br />
<br />
What IT needs is an image facelift.  Forget the certs, forget the
education, forget the experience; just focus on WHY the typical IT
worker looks bad to corporate America: <br />
1) We are practical.  We don't drive a Porsche, even though we might make $100k/year.<br />
2) We aren't socially adept.  OK, some of us are, but many don't
have the soft skills to function the way corporations want us to.<br />
3) We don't explain IT.  It is all a black box...<br />
4) Technology should just work.  There shouldn't be any of this
"there will be downtime from 8am - 5pm for system upgrades" crap that
is floating around... If a system outage is needed, explain to your
users WHY you need to do it and WHY it will take so long.<br />
<br />
Much of IT is considered a service, but that isn't quite right... it is something of kung fu mixed with technical knowledge.<br />


The Future of IT

by Jay Garmon Contributor In reply to The Future of IT

A lot of this is fiscal. Development requires paying talent, and talent
costs money. Finding ways to distribute and amortize those costs over
the widest possible consumer base is what drives the whole wagon. That's
why software development companies rather than in-house software
development is the norm, and that's why DRM is such a touchy subject,
because any crack in tight copy control dilutes the amortization curve.<br />
<br />
It's also why outsourcing is so huge, because it drops the talent price, plain and simple.<br />
<br />
Also, to jmgarvin's point, the typical IT consumer (read: non-techie)
wants to use information technology the same way he drives a car--with
minimal understanding of the basic drive mechanism, minimal maintenance
responsibilities, and a high degree of certainty that the product will
operate reliably 99.999% of the time.<br />
<br />
To me, the answer here is to lean into all this, rather than fight it.
Loath though I am to ever say IBM is insightful, their notion of
relying on IT services rather than products is a wise one. Enterprise
consumers want a specified service level at a specified cost, and they
don't truly care what hardware or software is needed to run it, any
more than I care what servers Google uses to build their search engine.
I want good search results, and whatever Google does to deliver that
quality of service is OK with me. <br />
<br />
Embracing utility IT is inevitable, especially when it will place the
quality of service at the forefront of the IT equation, and it will
break the software megaliths' stranglehold on feature bundling. I for
one can't wait.<br />


following up

by apotheon In reply to bITs and blogs

<div style="text-align: justify">
<p>As I've said in an early post in this blog, I'm not planning to
respond to any comments posted under my entries directly. That doesn't
mean nobody should post their comments, however: the only reason I'm
not posting replies as comments of my own is that I figure I should let
my words stand for themselves, and if something really warrants a
statement from me it's probably worthy of its own blog entry.</p>
<p>This is one of those entries.</p>
<br />
<br />
<p><b>exokernels:</b></p>
<p>First of all, TR member <a href="http://techrepublic.com.com/5213-6257-0.html?id=4093177">jmgarvin</a>
expressed some curiosity about my thoughts on exokernel design. For
those of you who are not familiar with the concept of the exokernel,
it's essentially an OS design architecture where all the traditional
operating system functionality outside of "hardware multiplexing"
(read: hardware abstraction and management) is delegated to the
application level. In practice, where exokernel systems
are being implemented, this means that program library frameworks that
perform the interface functionality normally associated with the OS are
used as a basis for higher-level applications, external to the exokernel itself. For instance, in the
<a href="http://pdos.csail.mit.edu/exo.html">MIT model of an exokernel OS</a>,
there's a set of unixy environment libraries called ExOS, on top of
which Emacs runs. This allows both increased system modularity for
greater stability and greater customizability of the user interface
environment for applications. In theory, it sounds like a good idea.</p>
<p>Unfortunately, this is similar in many ways to the theoretical basis
for the microkernel, taken to an absurd degree. The potential arises
for a great many problems with resource consumption, conflicting
resource management at the application level, reduced performance due
to increased API activity, and so on. This sort of OS architecture is
probably better suited to the lab than real-world application. It may
some day mature enough to be useful, but if it were completed and
released tomorrow it would probably be looked at as largely useless
unless you wanted to run a Windows-like system with greater kernel
stability and security. One might conceivably do something similar to
colinux (side-by-side OS installations, simultaneously running), but
the useful niche for such a thing is vanishingly small at present. Then
again, I'm no expert when it comes to exokernel concepts.</p>
<br />
<br />
<p><b>IT image:</b></p>
<p>In an earlier comment, jmgarvin (again) observed that the IT
industry has an image problem; or, more to the point, several image
problems, as follows:</p>
<ol>
<li>IT professionals take a utilitarian approach to things that
corporate culture views as being status-related. We don't, for
instance, tend to run right out and get a Porsche to declare status
when we start making $100k per year, and we tend to be prone to
affectations like ponytails and pocket protectors.</li>
<li>IT professionals typically don't interact with colleagues the same
way business grads do. We don't operate with the slick veneer of
salesmen, for instance, and schmooze with the VPs at office parties. IT
pros are often most comfortable when left the heck alone and allowed to
work long, hard hours in peace.</li>
<li>Information Technology is a mystery to "outsiders", including the
people that pay the IT bills. They don't want to know the details, only
that it works, and at the same time they feel left out of the loop when
IT professionals don't tell them the details (which is generally as
close to "always" as possible). Tell me about the last time you had to
explain why a routing table was lost and how much you enjoyed trying to
translate it into terms your nitwit pointy-haired boss could understand.</li>
<li>When IT resources go down, the lack of willingness to understand it
leads to frustration. IT workers get blamed immediately, even when it's
someone else's policies that led to the downtime. Necessary Windows
system restarts become the IT manager's fault at the next performance
review. Computers, to these non-IT workers, should "just work", and if
they don't, these workers need someone to blame.</li>
</ol>
<p>The above is a paraphrase, with a bit of my own interpretation and
explanation stirred in. Don't blame jmgarvin for any disagreement you
may have with my rewrite.</p>
<br />
<br />
<p><b>utility IT:</b></p>
<p>The explanation <a href="http://techrepublic.com.com/5213-6257-0.html?id=1383826">The Trivia Geek</a>
offered of how "utility IT" is the inevitable future of IT makes some
very good points. Ultimately, businesses are going to want their IT
infrastructure to "just work", and won't care so much about what is
being implemented to achieve that. As he noted, IBM is definitely
taking this direction, throwing out product names largely as marketing
strategy. IBM's movers and shakers are fully cognizant of the fact
that, ultimately, business owners don't generally care about whether
you're using Windows, Linux, or OS/400, as long as the business is as
profitable as it can be. The IT industry service-based profit model
looms large on our horizon.</p>
<p>That sounds good to me.</p>
<p>The Trivia Geek mentioned another matter, tangential to that. He
mentioned outsourcing as a means to save money. Open source software is
the answer to much of that: when your software development is free, you
don't need to pay someone to do it for you, even at severely reduced
rates. Such software will need implementation expertise,
modification/customization, and so on, however. Thus, the development-
and support-as-a-profit-model evolves.</p>
</div>
