Security

The truth about viruses


Once every couple of months or so, I find myself explaining to someone that the flood of viruses everyone has come to expect is not an unavoidable side effect of an increasingly networked world. Usually this comes up in response to the all-too-common "security through obscurity" argument that Linux systems would suffer the same frequency of virus problems as Microsoft Windows if they were as popular as Windows is now. Such a comment ignores several factors that make up the vulnerability profile of Windows with regard to viruses.

The most obvious, for those who recognized the term "security through obscurity" that I used above, is that Linux-based systems and other open source OSes (such as FreeBSD and OpenSolaris) actually benefit greatly from the security through visibility approach taken by popular open source software projects. There's another factor that's much more important to virus vulnerability in particular, however, that even most open source software advocates don't consider. It's really quite simple.

Microsoft doesn't fix virus vulnerabilities.

A virus is malicious code carried from one computer to another by some kind of medium -- often an "infected" file. Once on a computer, it's executed when that file is "opened" in some meaningful way by software on that system. When it executes, it does something unwanted. This often involves, among other things, causing software on the host system to send more copies of infected files to other computers over the network, infecting more files, and so on. In other words, a virus typically maximizes its likelihood of being passed on, making itself contagious.

All of this relies on security vulnerabilities that exist in software running on the host system. For example, some of the most common viruses of the last decade or so have taken advantage of security vulnerabilities in Microsoft Office macro capabilities. Infected files that were opened in a text editor such as Notepad would not then execute their virus payload, but when opened in Office, with its macro execution capabilities, they would tend to infect other files and perhaps even send copies of themselves to other computers via Outlook. Something as simple as opening a macro-virus-infected file in WordPad instead of Microsoft Word, or translating .doc format files into .rtf files so that macros are disabled, was a common protective measure in many offices for a while.
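To illustrate, here's a rough sketch of that conversion as a batch job, in Python via Word automation. This assumes pywin32 and an installed copy of Word; the folder path is a placeholder, and AutomationSecurity is set so that opening a file for conversion can't itself trigger its macros.

    import glob
    import win32com.client  # pywin32; assumes MS Word is installed

    WD_FORMAT_RTF = 6         # Word's SaveAs constant for RTF
    FORCE_DISABLE_MACROS = 3  # msoAutomationSecurityForceDisable

    word = win32com.client.Dispatch("Word.Application")
    word.Visible = False
    word.AutomationSecurity = FORCE_DISABLE_MACROS  # never run document macros
    try:
        for path in glob.glob(r"C:\incoming\*.doc"):  # placeholder folder
            doc = word.Documents.Open(path)
            # Save as RTF, which carries no macros along with it.
            doc.SaveAs(path[:-4] + ".rtf", FileFormat=WD_FORMAT_RTF)
            doc.Close()
    finally:
        word.Quit()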

Macro viruses are just the tip of the iceberg, however, and are no longer among the most common virus types. Many viruses take advantage of Trident, for instance -- the rendering engine behind Internet Explorer and Windows Explorer, which is also used to one degree or another by almost every piece of Microsoft software available. Windows viruses often take advantage of image-rendering libraries, SQL Server's underlying database engine, and other components of a complete Windows operating system environment as well.

Viruses in the Windows world are typically addressed by antivirus software vendors. These vendors produce virus definitions used by their antivirus software to recognize viruses on the system. Once a specific virus is identified, the software attempts to quarantine or remove the virus -- or at least inform the user of the infection so that some kind of response may be made to protect the system from the virus.
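To make that model concrete, here's a minimal sketch of what definition-based detection boils down to. The "definitions database" here is a made-up table of file digests; real engines use far more sophisticated signatures, but the shape of the logic is the same.

    import hashlib
    import shutil
    from pathlib import Path

    # Made-up "definitions": digests of known-bad files mapped to names.
    SAMPLE_PAYLOAD = b"stand-in for a known virus body"
    DEFINITIONS = {hashlib.sha256(SAMPLE_PAYLOAD).hexdigest(): "Example.Virus.A"}
    QUARANTINE = Path("quarantine")  # assumed location for this demo

    def scan(path: Path) -> bool:
        """Quarantine the file and return True if it matches a definition."""
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        match = DEFINITIONS.get(digest)
        if match is None:
            return False                      # unknown file: assumed clean
        QUARANTINE.mkdir(exist_ok=True)
        shutil.move(str(path), QUARANTINE / path.name)
        print(f"{path}: matched {match}, quarantined")
        return True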

This method of protection relies on knowledge of the existence of a virus, however, which means that most of the time a virus against which you are protected has, by definition, already infected someone else's computer and done its damage. The question you should be asking yourself at this point is how long it will be until you are the lucky soul who gets to be the discoverer of a new virus by way of getting infected by it.

It's worse than that, though. Each virus exploits a vulnerability -- but they don't all have to exploit different vulnerabilities. In fact, it's common for hundreds or even thousands of viruses to be circulating "in the wild" that, between them, only exploit a handful of vulnerabilities. This is because the vulnerabilities exist in the software and are not addressed by virus definitions produced by antivirus software vendors.

These antivirus software vendors' definitions match the signature of a given virus -- and if they're really well-designed might even match similar, but slightly altered, variations on the virus design. Sufficiently modified viruses that exploit the same vulnerability are safe from recognition through the use of virus definitions, however. You can have a photo of a known bank robber on the cork bulletin board at the bank so your tellers will be able to recognize him if he comes in -- but that won't change the fact that if his modus operandi is effective, others can use the same tactics to steal a lot of money.
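That limitation is easy to demonstrate with the toy scanner above: change one byte of a payload and the existing definition no longer matches, even though the underlying "exploit" is identical. (Production engines match on smarter patterns than whole-file hashes, but the principle holds for any scheme that recognizes the form of a virus rather than fixing the vulnerability beneath it.)

    import hashlib

    original = b"payload exploiting vulnerability V, variant 1"
    variant = original.replace(b"variant 1", b"variant 2")  # trivially altered

    definitions = {hashlib.sha256(original).hexdigest()}
    print(hashlib.sha256(original).hexdigest() in definitions)  # True: detected
    print(hashlib.sha256(variant).hexdigest() in definitions)   # False: missed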

By the same principle, another virus can exploit the same vulnerability without being recognized by a virus definition, as long as the vulnerability itself isn't addressed by the vendor of the vulnerable software. This is a key difference between open source operating system projects and Microsoft Windows: Microsoft leaves dealing with viruses to the antivirus software vendors, but open source operating system projects generally fix such vulnerabilities immediately when they're discovered.

Thus, the main reason you don't tend to need antivirus software on an open source system, unless running a mail server or other software that relays potentially virus-laden files between other systems, isn't that nobody's targeting your open source OS; it's that any time someone targets it, chances are good that the vulnerability the virus attempts to exploit has been closed up -- even if it's a brand-new virus that nobody has ever seen before. Any half-baked script-kiddie has the potential to produce a new virus that will slip past antivirus software vendor virus definitions, but in the open source software world one tends to need to discover a whole new vulnerability to exploit before the "good guys" discover and patch it.

Viruses need not simply be a "fact of life" for anyone using a computer. Antivirus software is basically just a dirty hack used to fill a gap in your system's defenses left by the negligence of software vendors who are unwilling to invest the resources to correct certain classes of security vulnerabilities.

The truth about viruses is simple, but it's not pleasant. The truth is that you're being taken to the cleaners -- and until enough software users realize this, and do something about it, the software vendors will continue to leave you in this vulnerable state where additional money must be paid regularly to achieve what protection you can get from a dirty hack that simply isn't as effective as solving the problem at the source would be.

About

Chad Perrin is an IT consultant, developer, and freelance professional writer. He holds both Microsoft and CompTIA certifications and is a graduate of two IT industry trade schools.

257 comments
ginnerbuffalo

It's absolutely shocking how easy it is for individuals with little to no understanding of complex technological topics to pass themselves off as technology experts. The author of this piece is a solid example. The core claim of this article is that computer viruses are a phenomenon completely bestowed upon us due to the fact that the software vendor whose operating system most of us choose to use leaves virus protection up to 3rd party antivirus providers. Building on this claim, we're led to believe that open source software systems are protected from viruses in a superior fashion due to the fact that their source is managed publicly; banking on the notion we should take to the bank that some good Samaritan will identify all virus exploitable security vulnerabilities in any piece of open source software long before the world's growing population of would-be software exploiters can develop a virus to exploit said vulnerabilities.

The critical flaw in this author's premise is in the assertion that if Microsoft were to own the responsibility to protect its software against viruses, the way the author believes the world's open source community does with tireless benevolence toward its own software projects, Microsoft software would be similarly impervious. The author's flawed logic lies in their lack of understanding of the different roles that software vulnerabilities and antivirus software systems play in determining whether or not a computer can be infected. The fact of the matter is that a virus requires first, a means for infection, and secondly the ability for execution and transmission. Most commonly, a virus must have either a gullible user through which to Trojan itself onto the target system, and/or a vulnerable piece of software on the target to exploit in order to successfully land on the target. To protect against this, users must be educated for their own protection, and vulnerable software must be patched against vulnerabilities. Antivirus software systems are almost entirely focused on removing viruses after the fact of this infection, and thus they play no role whatsoever in the author's "open source protects better than Microsoft because Microsoft delegates this responsibility to 3rd party antivirus providers" argument scenario.

So, essentially, the crux of the author's claim here should come down to the question of who we feel has the best likelihood of responding to software vulnerabilities that pose the highest virus threat to users: the best intentions of the world's open source communities, through whatever time each contributor can find to help deliver us a security patch in their free time away from their full time livelihood, or should we instead look to the software vendors that provide us software for a fee that guarantees us dedicated support and the right to legislation in the case where we feel we've not been adequately protected?

seanferd

I have heard frequently through the years that the reason behind the "user-unfriendliness" of software interfaces is that software is designed by software engineers. I can understand the reasoning behind this, but I wonder how much of the problem is currently caused by marketing and design staff trying to "fix" prior perceived issues, do things "their way", make something "hip/ new/ slick/ cool", or simply do a mashup (poorly) of ideas/ feedback from product testing or "what would you like to have in a GUI" discussions. I have absolutely no idea what drives design decisions at, e.g., Microsoft, today: techs, marketing, designers (non-tech), or the top level executives. Does anyone have some facts on this? I'm curious.

MarkFilipak.trash

I used to be a Product Line Architect for Intel. All the current problems regarding exploits, rootkits, etc. were predicted long ago, but in the rush to make money no one listened. Microsoft is responsible for the vast majority (all?) of the exploits by (1) exposing NetBEUI and SMB to the Internet (i.e., raw sockets) through services listening behind ports 137, 138, 139, and 445; (2) exposing significant parts of the Windows API to JScript; and (3) exposing ring 0 of the CPU protection model, together with the kernel-mode software running in ring 0, to installable drivers such as ActiveX controls, in an ill-conceived attempt to give direct hardware access to video and other services in order to improve the performance of Windows as an entertainment vehicle. Windows is so ubiquitous, and Microsoft has been so irresponsible, as to constitute a threat to national security. If you think that's an extreme statement, consider what could/would happen to the Internet (and to commerce) if the United States went to war with an adversary that could marshal an army of bots.
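(As an aside, it's easy to check whether a machine is actually exposing those legacy ports. A minimal Python sketch -- it probes only the TCP NetBIOS/SMB ports, 139 and 445, since 137 and 138 are UDP, and the host address is a placeholder for your own machine:)

    import socket

    HOST = "127.0.0.1"  # placeholder; point this at your own machine
    TCP_PORTS = {139: "NetBIOS session service", 445: "SMB over TCP"}

    for port, name in TCP_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(1.0)
            # connect_ex returns 0 if something is listening on the port
            status = "OPEN" if s.connect_ex((HOST, port)) == 0 else "closed"
            print(f"port {port} ({name}): {status}")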

PhilippeV

Think about it: OSes are still built today to be software enablers rather than software blockers. They are built specifically to be allowed to run all kinds of pieces of software, including viruses. It's their default policy, because still today users want OSes to behave this way: install all this support and make it easily accessible, so that you won't need more configuration and permission requests each time you want to install and run new software. But look at the problem:

* ALL OSes are built to offer many services with LOTS of APIs, even the many that you won't ever configure, or know why they are installed, or whether you really need them.
* ALL these services are preinstalled and run in the background, and users don't know that they can constitute a potential risk if left in their default state, or don't know how to disable them until really needed.
* ALL OSes boot while enabling ALL existing devices, including their drivers, built by many sources. Today, too many drivers come with too many services bundled with them (update assistants, toolbars, and so on) that have not even been scrutinized by the OS builder.

It's time to think about rebuilding OSes so that your PC won't boot by enabling all these services. These should be enabled only when you really need them, and otherwise made completely inactive, as long as there's no user interaction requiring them. With this design, not only will your PC get much better performance (think about the many services that currently run at each boot and permanently), but it will also leave far fewer open doors for attacks by outside malware. BUT: PC makers won't like this approach, because it would mean that you won't need to upgrade your PC to support more services in more modern versions of OSes.

It's time to rethink OS infrastructure: isolate each service completely, and make each service separately manageable and configurable, and disabled by default, until the user makes an explicit agreement to allow an interaction between two services, in a very specific context (don't reuse an existing user agreement for an unrelated context of use). What this means:

* Each service on a PC should be treated as if it were a separate user, installed and running in its own user security realm.
* Forget the concept of "system administrator". No service (including drivers) should run in this context; instead each should run in its own realm.
* Forget the simple binary security association. The security realms where access is granted should allow identifying the context of use, by configuring lists of conditions (like the list of currently running applications, or application functions that are demanding access to a service) that must be matched to create the security identity asking for access. If some condition has changed, that creates another context.

In other words, it's not because you have allowed your preferred email manager program access to your local email storage files that any other software, even operated by you, should immediately be allowed to access those files. The security identity creating a context of use should be more than just a simple user account; it should include the state of the running applications and application functions.

What this means is that each service running in an OS should become an isolated black box, with a very limited API, and should be pluggable/unpluggable and not running by default (services will start running only after access requests have been granted, after verifying the context of use; the OS should take care of verifying these contexts of use). Then your PC will behave as if it were a collection of separate Internet domains, each running a simple service for a specific use, each one with its own collection of granted accesses. These black boxes will behave as if they were remote services, each one with its own local administrator. The system administrator account should run only the core kernel, offering nearly no service except those needed to start the security account manager. The security account manager will have its own local administrator to protect its database, but it will provide NO service for applications and device drivers itself. Each application will install with its own local administrator and its own local security database for storing allowed access grants. (Applications won't perform the actual verification, but will use the service of the security account manager to exchange secure keys about their own local identity and the system on which they run, and will pass on the context of use, i.e. the effective user asking for access and its local context of use.) As much as possible, everything in a piece of software that is not required to support another piece of software should not be accessible to other software, but remain in the black box: this is isolation of APIs, and it also means the end of the famous problem of conflicting shared libraries. To summarize: OSes need to be componentized.

Note that this design will allow a component to be implemented/deployed locally or remotely (through local proxies performing no service themselves). This will also ease the deployment of applications (if there is a sensitive service that is best managed centrally or in a safer place, it will be easily transferred to run in that place). Applications will also not expose their own data to others: their own files will not be directly usable by other software. This means changing the way we perceive PCs and the filesystem in general: instead of having one general filesystem, we have a local filesystem for each component. (The storage place of this component-specific filesystem will not necessarily be local, but should be transferable transparently, in some security contexts, to safer remote places.)

Now, what happens when a security hole is discovered? It will affect a single component, not the system as a whole. The component will be unpluggable at any time without making the OS completely unusable. If a local implementation of the component fails, it becomes repairable by replacement from a remote safe backup. You don't need to reinstall everything. And because each service will not run constantly by default, when a service is shut down it's as if the only small door allowing secure interaction with it is closed: its local filesystem is closed, and no application can even know where it resides (locally or remotely); only the security manager (part of the most secure place of the OS) knows that information, in its internal database (residing in its own black box). How will this work? Suppose you just need to get information about the current time: the OS will start the time service only when you need it, will let it run for just a few seconds, and will shut it down when the service is no longer requested.

The time a service is allowed to run should remain short (just long enough to keep the system responsive, if needed, for performance with very frequent accesses to the same service). The OS should monitor the frequency of use and adapt the time the service is left running. When a service is shut down automatically by the security manager, it no longer resides in memory (the caches are cleaned, the memory is wiped, and nothing remains that would allow inspection by an after-death debugger, unless a secure dump of the service has been explicitly allowed by the user, generally a developer of the application itself). Application memory dumps (in case of a crash, after the dump has been explicitly allowed) will contain data usable only in the same security context as the one that permitted the dump (dumps are not transferable by default, except to designated entities).

The concept behind all this: instead of building complex aggregates of components interacting in an unknown and unstable manner, with unknown and unpredictable hierarchical preconditions and interactions, divide the problem into many separate problems that are simple to solve. Each time you create an aggregate, limit it to the interaction of very few components (ideally only two, but a third, optional component with different priority will be needed to allow creating graphs instead of just hierarchical trees of interactions). This aggregate will create its own local identity, with its own locally stored database of allowed interactions with other remote entities, and this database will be under supervision by the security manager (meaning that this aggregate must have its own identity registered with the security manager, by a security access request for creating its local administration account).
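(For what it's worth, the on-demand part of this is an old idea -- inetd worked this way. Here's a minimal Python sketch of the model, with `cat` standing in for a service and an arbitrary demo port: the service process exists only while a client is connected, and nothing of it stays resident afterward.)

    import socket
    import subprocess

    # inetd-style supervisor (Unix): the "service" (here just `cat`, which
    # echoes its input) is started only when a client connects, wired
    # directly to the socket, and ceases to exist when the connection ends.
    LISTEN_ADDR = ("127.0.0.1", 7070)  # arbitrary port for the demo
    SERVICE_CMD = ["cat"]              # stand-in service

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as listener:
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind(LISTEN_ADDR)
        listener.listen(1)
        while True:
            conn, _ = listener.accept()        # nothing is running yet
            with conn:
                proc = subprocess.Popen(SERVICE_CMD,
                                        stdin=conn.fileno(),
                                        stdout=conn.fileno())
                proc.wait()                    # reap; the service is now gone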

ballantr

Fixing the OS vulnerabilities should happen, but in the meantime... the Internet is delivered to me through one outlet at a time -- my ISP. Why do ISPs deliver known viruses? If a big pipe down the road is delivering a bad smell (or a real virus), should we put filters on all the houses and gas masks on all the people, or should we filter the pipe?

apotheon

> the notion we should take to the bank that some good Samaritan will identify all virus exploitable security vulnerabilities in any piece of open source software long before the world's growing population of would-be software exploiters can develop a virus to exploit said vulnerabilities.

Did you fundamentally misunderstand the article? The point made was that when a vulnerability is found in a major open source software project -- whether by "good guys" or "bad guys" -- it gets fixed. This is not mere speculation: it is proven in practice. Meanwhile, Microsoft does not fix such vulnerabilities when they are found. It leaves that up to third party AV software vendors. This, too, is not mere speculation: it, too, is proven in practice. There is no technical difference between the two development models that makes this a necessary fact. There are certainly social factors that encourage an open source project to take the higher road here, and that provide opportunity for a closed source project like MS Windows to get away with doing the opposite. That's not a matter of technical requirement, though. It's a choice. Sure, if you choose to address the vulnerabilities that make viruses possible as what they are -- vulnerabilities -- the ability to leverage the efforts of a bunch of outside developers thanks to an open source development model can help. In theory, you could do as well given enough money to pay developers (though even Microsoft can't really afford to pay enough developers to achieve the same effects as those enjoyed by the Linux kernel project), but that doesn't matter if you never make the decision to deal with vulnerabilities as vulnerabilities, and just treat them as a force of nature for AV software vendors to handle instead.

> The fact of the matter is that a virus requires first, a means for infection, and secondly the ability for execution and transmission.

This is true, as far as it goes. Those means of infection and abilities for execution and transmission are effectively null and void if the vulnerabilities viruses exploit are fixed, however.

> To protect against this, users must be educated for their own protection, and vulnerable software must be patched against vulnerabilities.

This is also true, as far as it goes. You might want to note that second part in particular -- because it specifically ties into the content of the article, and reinforces it. An end user does not have to be as gullible, however, to be taken for a ride when the system is more vulnerable -- not just by accident, but by design, as in the case of MS Windows auto-execution features. In any case, the first thing an educated user can do to protect himself or herself better is to choose software that is less prone to vulnerability.

> Antivirus software systems are almost entirely focused on removing viruses after the fact of this infection, and thus they play no role whatsoever in the author's "open source protects better than Microsoft because Microsoft delegates this responsibility to 3rd party antivirus providers" argument scenario.

You clearly either misunderstood or ignored parts of the article. Antivirus software is treated as the solution to the virus problem, leaving MS Windows developers free to ignore an entire class of vulnerabilities. As a result, the vulnerabilities remain, and while antivirus software then helps protect against specific viruses and (to some extent) closely related viruses, sufficiently different viruses can exploit the same vulnerabilities again.

Now that I've summarized something like half the article for you, maybe it'll clear up some of your misunderstanding.

> So, essentially, the crux of the author's claim here should come down to the question of who we feel has the best likelihood of responding to software vulnerabilities that pose the highest virus threat to users: the best intentions of the world's open source communities, through whatever time each contributor can find to help deliver us a security patch in their free time away from their full time livelihood, or should we instead look to the software vendors that provide us software for a fee that guarantees us dedicated support and the right to legislation in the case where we feel we've not been adequately protected?

No, the crux of it should come down to this: Would you rather trust the project that has a demonstrated track record of addressing these vulnerabilities and strong encouragement to do so as a result of the social effects of its development model, or would you rather trust the vendor that has a demonstrated track record of ignoring these vulnerabilities and plenty of opportunity to get away with continuing to ignore them thanks to the social effects of its development model?

Neon Samurai

Some developers' day job is to write open source software. With the transparency and peer review possible, most developers are motivated to write good code rather than sloppy "whatever meets the deliverable date" code that never gets cleaned up afterward. Also, if patches were only available when time permits some good Samaritan, why are patch times much shorter on average with FOSS development?

"Most commonly, a virus must have either a gullible user through which to Trojan itself onto the target system, and/or a vulnerable piece of software on the target to exploit in order to successfully land on the target."

Since AV is focused on the latter -- technical vulnerabilities -- let's focus on that rather than dragging social engineering ("gullible users") into this.

"To protect against this, users must be educated for their own protection, and vulnerable software must be patched against vulnerabilities."

We can ignore this, since it applies to the social engineering point discarded above.

"Antivirus software systems are almost entirely focused on removing viruses after the fact of this infection,"

Which is why my AV program halts processes and presents messages like "virus XYZ detected. Do you wish to halt the program, attempt to clean it or continue as is" . . . because it's not detecting and providing the opportunity to stop an infection attempt in progress . . .

"and thus they play no role whatsoever in the author's 'open source protects better than Microsoft because Microsoft delegates this responsibility to 3rd party antivirus providers' argument scenario."

That assumes AV actually plays no part in protecting against the initial virus penetration attempt. (My guess is that anyone in such a situation needs to look for a better AV vendor, since looking for a better maintained OS may not be an option due to specialty software needs.)

Tony Hopkinson

in fact make it twenty.. Based on your argument, please explain Sony's little rootkit manoeuvre. Or MS silently forcing an extension for their own known-to-be-insecure stuff into Firefox. Explain why JavaScript could access your hardware, and now can't. Feel free to use any currency symbol you like in your argument for....

seanferd

OK, no, it isn't shocking at all. See it all the time. See all sorts of strawmen and heaps of other logical fallacies, as well as calls to facts not in evidence.

"vendors that provide us software for a fee that guarantees us dedicated support and the right to legislation in the case where we feel we've not been adequately protected?"

Read the EULAs again. You have no legal recourse at all.

BCJr

I didn't scan all of the posts, just a couple, so this observation may have already been made. BUT... This particular post could be an example of social engineering. How many people clicked on the link in the post? What criteria did they follow before clicking on it? For all anyone knows, this link is just a honeypot for the unsuspecting. How would the end users know? How could they tell the difference?

Absolutely

But in general, both in and out of computing, I've noticed that noise and persistence frequently have disproportionate influence in comparison to accuracy and honesty, especially where accountability is diffuse. To your question, whether the traditional problems caused by "software engineers" have been replaced by problems caused by "marketing and design staff", Census Bureau data on the most popular college majors would tend to support the hypothesis that currently there is not enough technical expertise to keep purely aesthetic considerations in a reasonable perspective.

Scottieoo

Those are the 3 words that make a good design: ease of use.
Self-user education
Intuitive
Neat
Flawless
Most appropriate information first, followed by a button at the bottom called "More Information" or "Advanced."
ERROR MESSAGES THAT ACTUALLY TELL YOU HOW TO FIX AN ERROR!!!

rclark

But I know I can't find anything on the ribbons. It is really hard to center text for a title in Word. Used to be on my toolbar, and easy as pie. Now I have to dig through several menus and drop-downs. I realise that it is only once per document, but please.... I'm probably not holding my mouth right or something......

rclark

I saw a recent Science Channel interview with one of the people who keeps the internet running. His view is that someone who knew which nodes to take down could shut off all credit card and banking in the U.S. with as few as four nodes taken out. On future tech, I saw an interview that said DOD has three backup systems since their operations are dependent on the internet to such a large extent and they are worried it will be a prime terror target. I'm sure we won't be invited to use theirs if the primary goes down. So how am I to buy food, fuel, and pay the bills if my paycheck can't get to the bank, my bank won't let me have cash, and my credit cards won't work?

seanferd

1) I am not a hardcore tech by any stretch of the imagination.
2) I have always wondered why certain code must be loaded or threads must run when not really in use, and also why certain files are built like legislation: a core bill with a bunch of riders. (e.g.: Win98, not connected to a net, sendmail.dll is loaded, and the only svc you use from sendmail is "send to desktop - create shortcut".) Why does the webcheck window have to run? Why do I have low-level drivers in IOSUBSYS auto-load when there is nothing for them to drive?
3) I do understand the programming API / machine efficiency argument, but could the implementation be better? (see RING 0 post).
4) re: Linux -- I have heard similar arguments regarding the monolithic Linux kernel. (Nothing against Linux, I know it is different regarding other issues.)
5) PhilippeV: Have you checked out Minix and the micro-kernel philosophy behind it? It is not a mature OS, but Minix 3 has moved beyond the "example only" stage to a "general use" OS.

rclark

But. We did things like this way back in the beginning. Don't want to go back there. We do things the way we do because it is more efficient. All that overhead, just to reduce security problems. Caches and onboard memory and single-copy API loading: all of it is to reduce the number of calls to hardware. Your OS might be more secure, but it would not run very well. You would also have a great amount of duplication of software, as each vendor provided their own service modules. Kernels are core services because they provide service to all hardware/software. Perhaps when we have non-material hardware, we can do things this way. Until then, we have to work within the limits of the physical, and so will need to save microseconds where we can.

NickHurley

It's a good concept, though not entirely new. Not that I don't agree with it; it's just that most of the "holes" weren't created out of malice (hopefully). All this protecting of trade secrets set up an odd relationship between software vendors and hardware vendors at that. As in: they will give you enough info to get this thing to work without giving away their secrets. Throw a couple of tired programmers in the mix and boom, you've got security/application flaws, errors, whatever. Not saying that this is an excuse; it's just one of the realities of the job of any kind of software development. What you are describing would be perfect, though as history has shown us, things that don't favor monopolies and fat bottom lines get shelved, or the inventors get shot (or poisoned, in the case of the hydrogen car maker Meyer).

RT (Panzer Time!)

That seems like a bad idea to me: 1st, it would suck up huge bandwidth, 2nd, everybody has the right to be connected to the Net unprotected.

Neon Samurai

It plugs the poster's website while having no direct relation to the discussion. Hovering over it confirmed the domain. I have no personal reason to click on it and, if I was curious, could do so through a VM or an environment unfriendly to Win32/64-based malware. One could also check the domain on-site. I had to run a search for how to clean a bit of malware, and most of the results were other companies trying to sell a cleaner for it or other malware sites offering a "fix". I picked the link that led to a trustable domain rather than the odd domains. That may be an experience thing, though. If I wanted, I could then check a whois on this one and see where the domain is registered, then whois the IP to confirm it still led back to the domain registrar. The best recommendation is still to not click OK for everything that pops up or every link that appears. If it's not what one is looking for, or appears out of the blue: no touchy. Practice safe hex and be aware of what you are doing.

Neon Samurai

But then, they are a hardware and design company. I remember back in the modem days when you had a problem, Win95 would basically say "the modem doesn't work and if you can't figure out why, I'm sure not going to tell you." On my friend's Apple, the error would be closer to "uh, sorry, the modem isn't connected or has an issue with . would you like me to try and fix it for you?" Now, well into my Linux days, when I get an error, I simply copy it and paste it into the Google search field to get eight or more postings about how to fix it; some explained better than others, of course. Actually, an honest question you may be especially suited to answer: what do you think of the OLPC's interface, designed for use by people who've never pushed a mouse or looked at a computer screen before?

NaughtyMonkey

But my center function is on the home ribbon with the same icon as always. I don't believe I did anything special. They did waste a lot of space with the styles section on the home ribbon though. I think customizable ribbons would have been a good idea. That way everyone could customize their home ribbon for the functions they use most. edit because I can't speel

seanferd

Maybe you're supposed to use the H Pos adjustment on your monitor.

jackie40d

I also have 100 ounces of silver plus some cash, just in case of . . such a problem. Plus I have money spread around here and there so I can get something from somewhere . .

apotheon

Keep cash on hand. Keep easily liquidated assets with real, intrinsic value on hand. Vote against anyone who wants to eliminate tangible currency. That's how.

apotheon

MS Windows is part of a strong tradition of making three hundred mostly unrelated "features" part of the same single piece of software such that none of them can be run without all the rest of them -- even if there's no logical reason they should depend on one another. The Linux kernel is indeed "monolithic", to some degree, but parts of it can be moved out to kernel modules so that they can be unloaded (or simply not be loaded in the first place) after booting up the system if you don't want their functionality. Whether this functionality is indivisible from the kernel or supplied via modules depends on how you configure the kernel when compiling it (which means that, for most people, it depends on what the maintainers of their chosen Linux distribution choose to do when compiling the kernel that will be provided with the installer). It's generally true that one doesn't want things permanently attached if they're not going to be used. If they're only [b]sometimes[/b] not going to be used, on the other hand, you may want to compile a Linux kernel that includes them as a permanent part of the monolithic whole rather than provided via modules -- for performance reasons. It depends on your needs. The problem is that modules can cut down on efficiency, because modules have to communicate with the kernel via an interface rather than simply being a part of the kernel. If your performance needs are great enough during those times when you would have those specific modules loaded, or if you for some reason find it inconvenient to load and unload modules as needed for a given system, it may make more sense to compile that functionality into the kernel rather than simply using separate modules. Linux isn't the only OS that provides this sort of flexibility in how you configure your kernel, of course. For instance, with FreeBSD many drivers can be either compiled into the kernel or provided as separate modules to be loaded during the boot process, in much the same way as for Linux (except that I find the kernel configuration process for FreeBSD simpler and more straightforward).
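(If you're curious which parts of your own running kernel arrived as loadable modules, here's a quick Python sketch -- Linux-only, it just reads the standard name/size/refcount fields from /proc/modules:)

    from pathlib import Path

    # List currently loaded kernel modules, largest first (Linux-only).
    # /proc/modules lines look like: "name size refcount deps state offset"
    def loaded_modules():
        for line in Path("/proc/modules").read_text().splitlines():
            name, size, refcount, *_ = line.split()
            yield name, int(size), int(refcount)

    for name, size, refs in sorted(loaded_modules(), key=lambda m: -m[1])[:10]:
        print(f"{name:24} {size:>9} bytes  used by {refs}")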

rclark

What right would it violate to do on the ISP what an AV program does on the desktop? The only ones who benefit from allowing malware on the desktop are the people who write them and the people who combat them. Nobody else has a dog in that fight. For everyone else, it's a straight win on lots of fronts. Put an appliance inline and it doesn't have to throttle bandwidth at all. Make it fast enough and it won't have to delay throughput either.

Scottieoo

Because I didn't mean that the only reason for security flaws was that if more people use the software, they will be more likely to find flaws. But you have to agree that the more people that use the software, the greater the chance of flaws being found. However, the more functionality and usability a piece of software has, the greater the possibility of flaws. So should you make a tight, locked program that no one can really use effectively, or should you make a great program based on functionality and let the user choose how much security they want to implement whilst using that software?

Neon Samurai

I should probably check out the latest hardware writeups on it before speaking, but why break with tradition; here I go with another gonzo post. I had that same unfamiliar feeling with the appliance when I poked at it. It felt more like the interface I'd expect to find on a child's toy, with its few limited functions. That passed pretty quickly though, and I can see how it would be an easy start for children. The hardware I've not had a chance to see, though I should really make a point of getting to the next computer conference in TO, being that it's within subway transit distance. I've used chiclet keyboards on old WinCE clamshells and could type well enough to keep up with notes in classes, but it's not the same as a proper tactile keyboard. I know they have to seal it for weather, but hopefully they'll be able to fine-tune the key weight and movement distance. It would just suck to have a barely usable keyboard on such a potentially powerful education tool.

Neon Samurai

I just checked back and noticed the thread after TR finally started displaying my updated subscriptions properly again. One of the first things I did during my honeymoon phase with VMware Server was grab a copy of the OLPC VM appliance to check it out. It runs slow due to the layer of hardware emulation between Sugar and the physical hardware, but this is only in the appliance meant for demos. I figured you'd either tried it or seen the screenshots, but that was my own assumption without thinking it all through. The only place you can see the OLPC around these parts is at trade shows, unless you look over the screenshots or the VM appliance (I think it's no longer distributed as a VMware file though). If you can track it down and get it working, you may have fun for a half hour poking around and pretending to have never seen a computer before. As an experienced user who grew up in front of machines, I found Sugar to be a little restrictive. It only presents a few popup menus and programs. For someone who's learning a GUI interface for the first time, it makes far more sense though.

apotheon

. . . and what does that have to do with what I said?

Scottieoo

I've always learnt that nothing is 100% safe from exploits or people with malicious intent. Most of the things you say are correct. You can only make software with a certain level of security, and the rest is reliant on trust. If you trust someone, you will install their software on your computer and give that software full access to your files.

apotheon

"[i]If more children start to use Linux they will soon find exploits and security vulnerabilities, the same thing that happend to Microsoft.[/i]" That's just the old "security through obscurity" fallacy -- arguing that Linux is more secure only (or primarily) because it's more obscure. There are a great many reasons that isn't strictly true. You can educate yourself on some of them by reading this informational article I wrote for TechRepublic about a year and a half ago: [url=http://articles.techrepublic.com.com/5100-10877-6064734.html][b]Security through visibility: The secrets of open source security[/b][/url]

apotheon

The interface for the OLPC's GUI environment was pretty foreign to me at first, and didn't initially give me a whole lot of hints on how to get out of the most immediate common-case uses of the thing. It's pretty easy to pick up how to use it, though, and I imagine that no more than about fifteen minutes to get a relatively bright child familiarized with it would give him/her all he/she needs to figure out the interesting stuff on his/her own. Less swift children might need half an hour or so of initial familiarization, maybe as much as an hour. Of course, I imagine that class time set aside for initial familiarization would probably be blocked out in hour-long increments anyway -- I remember the painful slowness of group instruction environments. The GUI environment provided on the OLPC project's XO is like nothing I've seen before. It's a whole new interface to me, and that's saying a lot for someone who has encountered everything from Vista, Compiz Fusion, and Aqua, through WindowMaker, Enlightenment, and Sawfish, on down to AHWM, TWM, and even wmii. In a way, the XO interface reminded me more of the interface for PalmOS on my Handspring Visor Edge a few years back than of anything I've seen on a fully fledged laptop system. It had that sort of idiot-proof feel to it, simple and highly usable. As for the keyboard, it was essentially unusable for me personally, because I need something with which I can touch-type, and the crappy feel of the spacebar pretty much eliminated any possibility of that. It requires one to press the spacebar with real authority to get it to register.

Scottieoo

I have never actually seen the OLPC interface, as I live in Australia and Microsoft gives (free) relatively "cheap" software to schools so they can "get 'em while they're young." However, based on the November 2006 release (on Wikipedia), it uses the following:
A pared-down version of Fedora Core Linux as the operating system, with students receiving root access.
A simple custom web browser based upon the Gecko engine used by Mozilla Firefox.
A word processor based on AbiWord.
Email through the web-based Gmail service.
Online chat and VoIP programs.
Several interpreted programming languages, including Forth, Logo, JavaScript, Python, Csound, and the eToys version of Squeak.
A music sequencer with digital instruments: Jean Piché's TamTam.
Audio and video player software: Totem or Helix.
I am just amazed that they can fit so much software on such a tiny hardware profile (unlike Microsoft's Vista, made for a hardware monster). Security isn't much of an issue, as it uses flash memory and simple software with minimal security leaks (i.e. Fedora, Mozilla, Gmail, AbiWord).
---------------------------------------------
Now, I've used a lot of Linux distros dual-booted with Windows, and they both now seem to be fairly object-oriented. Once you can master a mouse, it is fairly easy to start clicking on things and seeing what they do, hence the whole concept of object-orientation. With Unix and command lines you need a "side manual", and this was pretty much due to the fact that the hardware in the past didn't have enough power for a built-in and intuitive interface. A good design is about "learning on the fly", so that productivity is not slowed down and you don't have to waste money on teaching. OLPC pretty much covers the basics as an all-in-one neat little package, and once you get going, how easy is it to master Firefox? With images, signage (symbols), and the now famous "point and click" approach to computing, how can children afford to not learn how to use a computer? The only worrying thing that needs to be addressed is content filtering, so that children can keep their boundaries and family values and don't wind up on the wrong path by taking advantage of people for greed. If more children start to use Linux they will soon find exploits and security vulnerabilities, the same thing that happened to Microsoft.

rclark

I have an X61T and use it for taking notes. It has a lot of floating menus, and there was this annoying one that floated in and out of the Aero Glass. (Appears, then fades out.) I finally tried to figure out what the blasted thing was, to get rid of it. It was always between my stylus and the text, and that was getting on my nerves. Turns out it was the formatting toolbar. Don't know why they didn't anchor it. Then the light dawned. They wanted it to be available for each piece of text typed, so I could change formats on the fly while typing. Only I wasn't typing, I was writing, and the computer was translating to type. So it was always in the way. Now that I KNOW, I can use it the way they meant. Learning is such hard work sometimes. It gets harder as the years go by. I can feel brain cells turning to concrete with every new application.

rclark

Unless the price of silver goes way up with the loss of credit cards, 100 oz wouldn't last me long with the girls helping me spend it. We would have to drastically reduce the outgo until the income started back....

rclark

They are cash cards. Accepted anywhere Visa is. Fill them up and, being "Safer than Cash", they can be reissued if lost or stolen. I don't know about everyone else, but I've never lost or had a card stolen from me. I've had several that were compromised at the credit card company, but none of them ever gave me a problem about it. My point is that if the wheels come off of the bus, those cards will be worth about $1 as guitar picks, if you have a sharp knife and know how to carve them up. The only thing that will work then is the green stuff, and if it gets bad, the hard currencies of gold and silver. The problem with both of them is that when you need them, it's too late to get them.

seanferd

The kernel codes themselves, and the philosophies behind them. Narrowly construed, this meant the Minix versus Linux debate. Viewed inclusively, it means all the other *ix-ness: BSD, Unix (no, not SCO!), SysV, whatever. Add in interesting bits that still live on in different forms: Amiga, NeXT, SunOS incarnations, what-the-hell, VAX -- anywhere there are grains of quality code or ideas. Combine that with whatever potential there is for a complete re-write of the very concept of code, operating systems, and computing. Maybe something good, maybe something bad. Yeah, corporations do stuff for growth & survival that tends to diminish their original goals, assuming that the original goals were set by innovators and not just the more "ruthless" type of business addicts. Following current trends, here's the sci-ish fi-ish vision: insanely powerful quantum holographic processing units are available to the general public, and here we are trying to get the latest version of My Live Active Windos 3.x to run on the damn thing well enough for Little Timmy to send a text message into last week. I like Windows because I like the challenge.

nentech

If it is corporations, expect the same as we have now. Each will push their own designs and try to offer what they think the customer will buy, then beat us into submission with a flood of ads.
(Please note I wrote "offer what they think the customer will buy")
(Not "what the customer wants")
(Or "what the customer needs")
No OS could withstand this sort of treatment and remain secure. So if you like the OS you now use, just hope the corps leave it alone.
Col

seanferd

I agree entirely with your point re the Linux & BSD kernels. I can also see where PhilippeV is coming from. I like Linux just fine as it is, but Andrew S. Tanenbaum apparently would beg to differ with Linus Torvalds and ourselves. I just hope all the competition in kernel design theory brings about more robust operating systems for everyone.

Absolutely

[i]We all know some things should be stopped. How we define those things is the problem.[/i] I suggest that the concept "right to property" is the solution to that problem. [i]Now here is the punch: Get everybody to agree with your definitions.[/i] Not necessary! We [b]pay[/b] for Internet service. In a [b]free[/b] market, I don't need [u]anybody[/u] to agree with how I spend [b]my[/b] money. So, to be as free as possible, we should only resort to "democracy" in circumstances in which "capitalism" isn't possible. Such circumstances are actually quite few, despite the noises of politicians.

Neon Samurai

I'd not promote violence, but if breaking the fingers of each intentional malware writer found would have any real effect, it might be worth considering. In reality, it wouldn't cure the problem or have any real lasting effect. Hardening the OS is the next closest point I can think of, and some developers consider malware proof of concept for flaws that should be corrected. With others, development budgets get allocated to other tasks, and we see the effect of that daily; the importance is on selling units, not producing quality. In general, AV sucks, but it's the best the end users have, in combination with education. You're right though, it shouldn't be a requirement in the first place, and user education should be much greater; reality is far different from ideal. It seems to be the difference between reality and the ideal situation that criminals take advantage of. Discussion, debate; sometimes it's hard to tell the difference. :) I can easily fall into the category of perfectionist. I get to develop best practices in my own humble sandbox. There can be quite a contrast between that and the real world also; I've plenty of diplomacy to redevelop in that area. With wifi specifically, there are some very easy steps that any owner of a two-year-old or newer router can take but often doesn't. In that case, it really should be the vendors who provide the hardware with better default settings. That's a whole other discussion though. Either way, this thread is long enough already. I'm sure all these topics will come up in other forums; they seem to regularly. ;) Cheers,

nentech

The idea of stopping malware and other unwanted pests is a good one. Where and how is the thing to discuss -- not debate or argue about. I just think most of the effort should go into stopping it at the source, where it enters the Internet. I don't think our computers or networks should have to be fortresses (most homes are not). That hasn't always worked in the real world.
I like to think of the anti-(virus, adware, etc.) software as the security guards. In most cases, most of the time, the guards do a good job. Sometimes they make a mistake. Disasters do happen even with the best protection. Recovery from (a crash, infection, etc.), and making it as easy as possible, should be most important. I think MS has a long way to go in that department. As for Linux, BSD and all the other OS types, I will leave that to others to debate.
Just for fun, you may like to play the game "Spot the perfectionist". There are a few who visit TR, mostly found in discussions about Security or Windows versus Linux. Found some in a discussion about wireless security not long ago. Just as well the real world isn't perfect; most/all of us would be fired when we made our first blunder.
Col

Neon Samurai

In these cases though, I was looking for debate more than a fight. Admittedly, I was coming out of another forum that ended up having all valid discussion destroyed by the lowly Pickleman troll. It's the differences of opinion and levels of knowledge I come to these forums for. In my experience, viruses, trojans, spam, adware, spyware and advertisements all seem to have clear definitions. It starts from the idea that if it's installed on my machine without my approval then it is unauthorized; after that, it's just classifying the type of unauthorized access it falls into. I remember when there were only the virus and the trojan to choose from, in the days before spyware and other malware had need of a subclassification in the greater malware family. Digital restrictions management (DRM) is a relatively new thing in terms of limiting the end user's ability to use their own machine legally. In the past it has been, and still is, often used to validate software to be run on hardware or alongside other software (medical equipment being one place, the Linux kernel being another), but in both cases the intent is the benefit rather than the restriction of the end user. In that case, I'd consider WGA unwanted DRM in that it limits the licensed end user, and bloat in that it is unwanted by the end user and does not perform a critical function in the operation of the OS. I do love debate though. It's my own limitation when my writing reads more like an attack or a lust for the fight. Perhaps it's also originally being a small town boy surrounded by very few other technically savvy people when growing up. Things like viruses seemed very clear, and fiction like Cyberpunk provides a great (oddly, very accurate as tech evolves) framework for such ideas. I should really learn to use more smilies in my writing. When I'm looking for a fight, I'm not at all subtle, but I can see how I come across hot and heavy without meaning to also. It's the downside of an emotionally neutral medium. In the BBS days there were the same messes, caused by one person writing and another reading their own emotional interpretation into the text.

rclark

I don't have a problem defining malware. The payload is immaterial to me. Anyone who loads software on my machine without my permission is stealing from me. The same advertisement that would be OK if I clicked on it is objectionable if the promoters download software that forces it on me when I don't click on it. It's OK if the website I choose to visit has it on the website. It's not OK if, by visiting the website, I get infected with recurring iterations of the ad. For all the rest -- trojans, worms, viruses, rootkits and such like -- most AV programs catch most of them. Most AV programs agree on what is and is not malware for 99.9% of what is in the wild. Corporate digs that include competitors' software as malware aside, most virus signatures are known. Just block those that everyone agrees are malware. Let everything else come through. After that, you can debate everything else.

nentech

I am not trying to define porn or spam (not in a can) or viruses. That will start too many arguments, which is the point I am trying to make.
I could think of WGA as a virus. I don't want it. Not because I am a pirate, but because of the problems it may cause. So it is unwanted software on my computer. It infects other computers. It came through automatic updates (unknown to the people who use the PCs it has infected). Just some of the virus-like features of WGA. Microsoft will not agree. For me, that is unwanted software that should be blocked. Other people will disagree.
By the way, back in the old days there were a few harmless viruses floating around: "Greetings from xxxxx, today is the day to calibrate xxxxx."
If you want to discuss the definition of porn, please ask the publishers of the magazines. Or go to your local church. Then ask some of the people on the street on the way back. Not all ads are unwanted by all people. There are also test viruses. Some people don't care what companies know about them.
rclark, you wrote "I wouldn't use it to block porn or adverts of any type". Other people would. You still haven't told me what AV they should use yet.
Neon Samurai, very brave of you to define porn. In case you missed it, I would like to see some things blocked from entering the Internet. The problem is what to block and how to define what to block. How do you decide if it is information or an ad? Is info about how to use a product an ad? It has the product and company name, so is it an ad for those names? What is the line between it being an ad and a manual? Like you said, the definitions change. Were they wrong 50 years ago, are we right now, will our values stay the same?
It makes very little difference anyway. Blocking anything from the Internet will have to go through the same discussions about freedom of information that everything else has. I did say it can lead to censorship. Not that it will lead to censorship.
Thanks apotheon, at least someone got the point I was trying to make. I have to wonder if they were just bored. Neon, you do seem to enjoy a fight. Sometimes maybe too much. rclark, don't look for offence when there is none offered.
Col

apotheon

The examples of pornography and spam are at the extreme social end of the spectrum of things difficult to define, but the same problem applies to superficially cut-and-dried matters like viruses and other malware, too. What would you call the Sony rootkit of 2005? On one hand, it's obviously a rootkit. On the other hand, it's part of a DRM system that Sony only agreed to cease distributing because A) it's a security vulnerability (which is not necessarily the same as being a rootkit) and B) end-users were really not happy with Sony for that. In fact, if it weren't for B, I'm pretty sure A wouldn't have prompted Sony to stop distributing it. So far, Sony and I disagree on the definition of malware.

Now let's take that to the intermediary -- the ISP. Let's say there's some piece of software that is automatically downloaded by every DRM-laden CD offered by a given corporation (like Sony). The downloaded software is necessary in part because it contains a "key" used to "unlock" the DRM "protected" music on the CD when you want to play it on your computer. Let's say that "key" has to be updated regularly to access that music on an Internet-connected computer. Now let's say that, if you connect to the Internet with that computer, it calls back to the corporate mothership to determine whether it needs to download a new "key" to maintain "rights" to continue playing music from that CD. Suddenly, we realize that this DRM software meets some of the definition terms of spyware. In strict terms, it [b]is[/b] spyware.

Should the ISP block it? In this case, both the corporation selling this DRM-laden music and many of the end-users will probably object to that traffic being blocked. The corporation would object because the ISP is interfering with the operation of the DRM. The end-user would object because once the computer connects to the Internet, the DRM software may well decide that if it can't get a "key", that means you are a "pirate", and suddenly you won't be able to listen to that music any longer. Now you've got the ISP between the RIAA and its customers, both demanding it change its handling of malware. Well, that's easy. So it changes.

Now . . . what if the DRM software doesn't automatically disallow playing the music if it can't contact the mothership? The situation has suddenly changed such that on one hand you've got some of the end-users demanding that the DRM network traffic be blocked, and on the other you've got the RIAA demanding that the ISP support its member corporations' DRM software operations. There's a conflict of opinion on what constitutes "malware" in this case -- just as there might be for pornography or spam.

The only reason malware seems like an issue that isn't saddled with social issues the way pornography and spam are is that we're technically literate IT professionals, and some of us haven't been brainwashed by market-dominating corporate interests. The further you get from that specific class of computer users, the more things become apparently debatable. Even if there's a clear-cut definition of malware, [b]most people will disagree with that definition[/b], whether out of dishonesty or ignorance. The problem of getting ISPs to block malware isn't that there isn't a reasonable way to handle it: it's that "reasonable" usually isn't the way things get handled when you hand off management of a problem to large bureaucratic organizations.
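
[Editor's note: apotheon's hypothetical call-home DRM is worth making concrete, because the behavior that makes it spyware is only a few lines of logic. Below is a rough sketch of the check he describes; the endpoint URL, key file, lifetime policy, and all other names are invented for illustration, not any vendor's actual code.]

[code]
# Sketch of the hypothetical call-home DRM behavior described above.
# Every name here (URL, key path, policy) is invented for illustration.
import json
import time
import urllib.request

KEY_FILE = "drm_key.json"                   # hypothetical local "key" cache
MOTHERSHIP = "https://drm.example.com/key"  # hypothetical vendor endpoint
KEY_LIFETIME = 7 * 24 * 3600                # pretend keys expire weekly

def load_key():
    try:
        with open(KEY_FILE) as f:
            return json.load(f)
    except (OSError, ValueError):
        return None

def refresh_key():
    """Phone home for a new key -- and, as a side effect, tell the
    vendor which machine is playing which disc. That side effect is
    what makes the scheme meet the definition of spyware."""
    with urllib.request.urlopen(MOTHERSHIP, timeout=10) as resp:
        key = json.load(resp)
    with open(KEY_FILE, "w") as f:
        json.dump(key, f)
    return key

def may_play():
    key = load_key()
    if key and time.time() - key["issued"] < KEY_LIFETIME:
        return True  # cached key still valid; no need to phone home
    try:
        refresh_key()
        return True
    except OSError:
        # This is the policy fork in the scenario above: a "strict"
        # scheme treats an unreachable mothership as piracy and refuses
        # to play; a "lenient" one would fall back to the cached key.
        # Which branch the vendor picks decides which side demands the
        # ISP block the traffic.
        return False
[/code]

[Note that from the network's side, the strict and lenient policies are indistinguishable -- the traffic is identical, and only the client's failure behavior differs -- which is part of why the ISP can't win.]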

apotheon

Shutting off all BitTorrent traffic to try to stop "piracy" is asinine. The most important use of BitTorrent isn't "piracy" -- it's distributing ISO images of free/libre/open source operating system installers.

Neon Samurai

There is a distinct difference between social and cultural issues like pornographic media or acceptable public appearance, and computer software designed specifically to harm individuals and damage systems. Porn is generally defined as depicting what the majority of the population considers a social taboo. Fifty years ago in the US, it was showing two naked people together, or someone wearing leather and holding a whip. Today, that same level of taboo has to depict penetration and the whip being used, along with some creative uses of Boy Scout knots. Who gets to say what is taboo now in any culture is really a whole other debate; this isn't the place for it.

On the other hand, viruses, malware, and similar software have a very clear definition. It is not based on the opinions of the self-righteous. A virus is, by definition, a program which copies itself into the programming code of another executable to propagate. It may or may not even intend to destroy information. Spam is, by definition, a canned meat :) (sorry, I couldn't resist), or a flood of unsolicited email. By advert I'm guessing you mean advertisement, which is pretty obvious to anyone who's been near a radio, TV, website, or billboard in the last hundred years.

There is a distinct difference between a social opinion of what is taboo, which may or may not cause any real harm (regardless of what the Bible Belt folks claim), and programming code which is specifically designed to damage and cause harm. Comparing the two in trying to make a point is really reaching for an argument. You may as well claim that a specific keyboard is crappy hardware because a printer in your house is painted yellow.
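
[Editor's note: that definition -- propagation by modifying another executable's code -- is also what makes classic file infectors mechanically detectable: any change to an executable's bytes changes its hash. Rather than sketch the infector itself, here is the flip side, a minimal hash-based integrity checker; the baseline filename and command-line interface are hypothetical.]

[code]
# Sketch of hash-based integrity checking, the flip side of the virus
# definition above: a file infector must alter another executable's
# bytes, and altered bytes change the file's hash.
# The baseline filename and CLI are hypothetical.
import hashlib
import json
import os
import sys

BASELINE = "hashes.json"  # hypothetical baseline, recorded on a clean system

def sha256(path):
    """Hash a file in chunks so large binaries don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def record(root):
    """Hash every file under root on a known-clean system."""
    hashes = {}
    for dirpath, _dirs, files in os.walk(root):
        for fname in files:
            path = os.path.join(dirpath, fname)
            hashes[path] = sha256(path)
    with open(BASELINE, "w") as f:
        json.dump(hashes, f, indent=2)

def verify():
    """Report baselined files whose contents changed or disappeared."""
    with open(BASELINE) as f:
        hashes = json.load(f)
    for path, old in hashes.items():
        if not os.path.exists(path) or sha256(path) != old:
            print(f"MODIFIED OR MISSING: {path}")

if __name__ == "__main__":
    if len(sys.argv) > 2 and sys.argv[1] == "record":
        record(sys.argv[2])
    else:
        verify()
[/code]

[This is essentially the approach tools like Tripwire take: it can't name the virus, but by the definition above it also can't miss an infection of a file it has baselined.]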

rclark

Both are protected forms of expression, even if they are both annoying to a majority of users. I would only block true malware (viruses, trojans, worms, etc.). The adverts, porn, and age-appropriateness of information are a societal issue. I realize that "malware" has been expanded to include advertisers who push unwanted data; I have never subscribed to that view. People who spam or pop up ads are annoying. People who infect machines to spam or pop up ads are criminals. I would include Microsoft in that, when they update the system with patches without permission and when they pull information without authorization.

nentech

We all know some things should be stopped. How we define those things is the problem.

Define porn. Is it people having sex? Is it nudity? Is it someone in a swimsuit? Is it someone not covering their face? (I know that's not, but in some cultures it is just as bad.) Is it male or female? Is it child or adult? Is it heterosexual or homosexual? Is it 1, 2, 3, 4, or more people?

Define an advert. Define a virus. Define spam.

Now here is the punch: get everybody to agree with your definitions. Hell, the people in this discussion can't even agree about the definition of the word "hacker."

Edit to add this, and this may be funny: tell me which AV they should use. It will be interesting to see what replies you get.

Col

RT (Panzer Time!)

I believe Comcast recently blocked all BitTorrent traffic on most of their networks. So, BitTorrent might be the vector for some illegal copyright infringement, but not all BitTorrent is bad. They have no right to control what I see. Yeah, I can switch ISPs, but the "market force" of millions of average computer-illiterate Internet users will just spread a shoddy AV system across most portions of the Web available to end users. We'll never escape; hackers will just learn how to circumvent the AV, or use it against us, the same as they do now.

rclark

"First shoot all the lawyers". No more slippery slopes. If we continue delaying or killing ideas because they could bite us, we are going to be stuck where we are. If they censor too much, switch ISP's. I think AOL is in that position now. And they don't even do a good job of it. I think the radical right and the ID card nonsense, and the radical left and the censorship nonsense are really going to have to get over it. Both think they are ultra defenders of their faiths, but are being marginalized by their stances. They need to chill a bit and let the market work. We might get burned, but better to have loved and lost, than never to have loved at all.

nentech

Now we stop the virus. Next we stop the spam. Next we stop the adverts. Next we stop the kiddy porn. Next we stop all porn. Next we stop the holiday snaps at the beach, because those people are not dressed properly. Next we stop everything from this country, because they are evil. And so on, till we stop everything. Then we are safe from everything.

Col
