General discussion
May 17, 2005 at 8:36 am #2188375
Wanted: Effective and FAST XP Defragger
by pailr · about 18 years, 11 months ago
Whatever happened to the good old days when Norton Speed Disk could defrag your 40 gig hard drive in less than 10 minutes (rather than closer to an hour), compress all your free space and would link into Disk Doctor to fix any allocation errors? Why is it that Microshaft has no clue when it comes to what DEFRAGMENTATION really means? How is it that they cannot comprehend that not making all your unused disk space contiguous while supposedly defragmenting your hard drive actually promotes fragmentation when storing new files???
Topic is locked
May 17, 2005 at 8:42 am #3236830
because
by jaqui · about 18 years, 11 months ago
In reply to Wanted: Effective and FAST XP Defragger
microsoft has a storage policy:
throw that data at the hard drive like an egg at a wall, where it sticks is where it should be. They have never had a decent disk I/O system.
May 18, 2005 at 5:46 am #3237586
have you tried…
by husp1 · about 18 years, 11 months ago
In reply to Wanted: Effective and FAST XP Defragger
Have you tried “Diskeeper 9”? I have been using the Diskeeper Lite version on my home system and the speed is great!!! It defragged an 80 gig external on a USB connection in under 15 min.
May 20, 2005 at 8:30 am #3238577
Well, I think it’s not quite that simple
by jaykleg · about 18 years, 11 months ago
In reply to Wanted: Effective and FAST XP Defragger
I don’t think that the Microsoft folks are quite as dumb as you seem to be indicating. And the file systems used by most other “modern” PC operating systems share many general characteristics with those used by Microsoft. They all have to deal with software files, data files, temporary files, caching, virtual memory, metadata, etc.
A controller / drive only spends a tiny portion of its time actually attempting to perform contiguous whole data file reads and writes. In Windows systems with single partitions on single hard drives holding the OS/software/data, the heads dance around among several locations — for instance the Master File Table (MFT), regular data locations, software file locations, pagefile, registry and temporary file locations for the currently active user profile, just to name a few.
Placing all of the files at one “end” of the partition and all of the free space at the other “end” of the partition will result in roughly the same rate of fragmentation during new writes as will the scheme of having defragmented files sprinkled liberally around the drive. And it may actually increase the amount of time required for many file system operations.
This is why it is a waste of time to create a separate partition on the same hard drive for the pagefile. The pagefile gets touched FREQUENTLY and almost never in big chunks. Placing it in another location means the heads have to thrash back and forth between the separate pagefile location and the location where everything else is stored. This actually makes the system slower — a LOT slower in some cases I’ve seen. (Putting a pagefile on a separate physical drive on a separate channel is quite another matter, but it still doesn’t offer a huge advantage for most systems. It just doesn’t result in the performance penalty exacted by using a separate partition on the same drive.)
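The head-thrashing argument above can be sketched with a toy model. This is purely illustrative, not a disk simulator: the region boundaries, the 1-in-3 pagefile-touch ratio, and the assumption that seek cost scales with block distance are all invented for the sake of the sketch.

```python
# Toy head-travel model, NOT a real disk simulator: positions are
# block numbers, and seek cost is assumed proportional to distance.
# The region sizes and the 1-in-3 pagefile-touch ratio are invented.
import random

random.seed(1)
DATA_REGION = (0, 200_000)      # where OS, software and data live
FAR_PAGEFILE = 900_000          # pagefile in a separate partition
NEAR_PAGEFILE = 100_000         # pagefile amid the working set

def avg_travel(pagefile_pos, n=10_000):
    """Average head movement for a mix of data and pagefile I/Os."""
    pos, total = 0, 0
    for _ in range(n):
        if random.random() < 1 / 3:         # assumed pagefile ratio
            target = pagefile_pos
        else:
            target = random.randint(*DATA_REGION)
        total += abs(target - pos)
        pos = target
    return total / n

far = avg_travel(FAR_PAGEFILE)
near = avg_travel(NEAR_PAGEFILE)
print(f"pagefile in far partition : ~{far:,.0f} blocks per seek")
print(f"pagefile amid working set : ~{near:,.0f} blocks per seek")
```

With these made-up numbers, the far-partition layout shows several times the average head travel of the in-region layout, which is consistent with the “a LOT slower” observation above.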
These days the space available on hard drives is relatively large in comparison to average file sizes. (Unless you’re talking about systems that deal almost exclusively in BIG files like video files.) Having a bunch of non-contiguous spaces in which to write files works about as well as (or better than) just having one large space for that purpose. A Windows XP or Windows Server 2003 system that gets defragged once a week (or once a month, depending on use patterns) takes only a few minutes to defragment the file system each time. At least that’s my experience with the servers and workstations I look after.
If a system’s data consists strictly of static files which are never edited (in other words, they remain the same size), then access to them might be improved by stacking them all tightly together in one contiguous area. But if you edit files and change their sizes, then what happens when the edited files are written back to disk? The primary difference between the scenario where all the files are packed into a single contiguous space and the scenario where they are sprinkled across the drive is that the heads actually have to travel farther to find free space for writing in the first scenario.
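The edit-and-grow scenario above can be sketched with a toy block allocator. The block counts and file sizes are arbitrary; the point is only that tightly packed files fragment as soon as they grow, because their new tails must land far away.

```python
# Toy block allocator: the "disk" is 1,000 blocks, files are lists of
# block indices. All numbers are arbitrary, purely for illustration.
DISK_BLOCKS = 1000

def allocate(free, n):
    """Grab the first n blocks from a sorted free list (first-fit)."""
    grabbed, free[:] = free[:n], free[n:]
    return grabbed

# Pack 50 ten-block files tightly at the front of the disk, leaving
# all free space in one contiguous run at the back.
free = list(range(DISK_BLOCKS))
files = [allocate(free, 10) for _ in range(50)]

# Now "edit" every file so it grows by 3 blocks. The new tail has to
# land in the free region behind the packed files, so each grown
# file becomes non-contiguous.
for f in files:
    f.extend(allocate(free, 3))

def is_contiguous(blocks):
    return all(b == blocks[0] + i for i, b in enumerate(blocks))

fragmented = sum(not is_contiguous(f) for f in files)
print(f"{fragmented} of {len(files)} edited files are fragmented")
# -> 50 of 50 edited files are fragmented
```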
A great many of the small files on an NTFS partition are actually stored WITHIN the MFT. And contiguous reads and writes of any size don’t occur within the pagefile. There are just lots and lots of reasons why the idea of cramming everything down at one end of the partition doesn’t work as well as a person might think from a superficial examination of the defragger’s graphical display. The defragmentation schemes in most defraggers were created to find generally acceptable compromises among several needs which are somewhat at odds with each other.
I know this explanation is not satisfying for people obsessed with “neat” solutions.* But this is the way I think it actually works. At least that’s my story. And I’m stickin’ to it.
😉
Footnote:
*I know because I am one of those people. Back before there were hard drives I used to defrag all of my floppies. Sad, eh?
May 20, 2005 at 11:21 am #3238331
It may not be as simple as all that….
by pailr · about 18 years, 11 months ago
In reply to Well, I think it’s not quite that simple
BUT, in order to bypass the problems presented by lack of a decent XP defragger, I set our systems at home up on a dual-boot with Win98 and the file system specified for our XP OS partitions is set to FAT32. That way, we can access data on all drives from either operating system AND have the advantage of using Norton Speed Disk from a boot into Win98.
Until Microshaft allows someone to make a decent defragger that will do everything that NSD does, I will not be changing this config. I am NOT!!! going to allow Uncle Bill to tell me I must take hours to defrag a drive simply due to his company’s arrogance and disregard for the masses.
May 20, 2005 at 12:43 pm #3236204
Reply To: Wanted: Effective and FAST XP Defragger
by jaykleg · about 18 years, 11 months ago
In reply to It may not be as simple as all that….
If you had used NTFS for the Windows XP installations I think you would see that the built-in defragger works pretty darned well (and quickly) on that file system. MS wants people to stop using FAT32, and for darned good reason. It’s an awful file system. And I wouldn’t be surprised to hear that the built-in defragger doesn’t work well with FAT32.
Why would you dual boot Win98 and WinXP? Are you saying that you do that solely so you can use NSD to defrag your WinXP partition?
It sounds like you hate Microsoft. Computers and operating systems aren’t religious ideologies. So if you hate what you’re using currently you should just switch to something else. Really.
But if you actually want a solution for a defragger that works just fine for WinXP I think the built-in defragger (on NTFS) is it. There are lots of third party options, too — Norton being the last one I would think of using. There’s O&O, PerfectDisk, Diskeeper (designed by the same people who designed the built-in defragger), etc. In any case, if you want to use WinXP I don’t know of any expert who would suggest using it with FAT32 — except under dire need.
May 23, 2005 at 12:02 am #3338860
My chief complaint…
by pailr · about 18 years, 11 months ago
In reply to Reply To: Wanted: Effective and FAST XP Defragger
…is the issue of Microshaft’s [Diskeeper] defragger not allowing certain files to be moved, thereby preventing contiguous free space. The argument that was put forth above in the thread concerning why there is little advantage in having all your free space contiguous is inadequate, at best. Seldom will you ever see a PC that is used solely for creation of new files or only for editing existing ones. But, for those who regularly do create new files on their system, the contiguous free space is a critical defense against fragmentation. Yes, they will still have the problem of increased space requirements of edited files, but half the battle, at least, is won if the hard drive doesn’t have to seek out new bits and pieces of free space on which to store newly created files.
As to your suggestion that Windoze’s native defragger is adequate to the task while using NTFS — due to installing a new, larger hard drive this weekend, I decided to put that concept to the test. The results were… minimally adequate time-wise, and totally lacking, of course, in actually defragmenting the drive (i.e., creating contiguous free space). If the free space is fragmented, then the drive is NOT!!!! defragged. It’s as simple as that. It makes no difference whether all the currently resident files are in one piece. That does not constitute the totality of the concept of defragmentation. It is only a part.
You mentioned O&O and PerfectDisk — do they override Microshaft’s insistence that some files are immovable or do they kowtow to the rich Uncle? (B. G.) If the latter, then they are no more effective than MS’s internal lackey (in effect) [Diskeeper].
As to my regard (or extreme lack thereof) for Microshaft, I would think that would be more than obvious. The unfortunate facts are, though, that EVERY other operating system is crawling through its infancy, with software developers loath to create quality software for anything other than Windoze. I understand the economics of the issue from the developers’ point of view — fewer users means less potential income for the products you develop. But, until they realize that supporting other OSes with high quality offerings will help to diversify the customer base, with (hopefully) eventual parity between Microsoft and other OS providers, we will continue to labor under Bill’s heel, suffering his lack of regard for those who have made him rich beyond any man’s wildest expectation — the consumers.
May 23, 2005 at 3:42 am #3236591
You make many assertions…
by jaykleg · about 18 years, 11 months ago
In reply to My chief complaint…
which you back up with intuitive arguments, but I’m saying that I think you are mistaken in some of those assertions. I could be wrong. One way for you to be sure would be to do some solid research on file systems and their actual needs with respect to defragmentation. In anyone’s particular case the only way to be certain is to actually do benchmarking of various file system performance characteristics on systems which have been defragged using Microsoft’s own defragger and the defraggers provided by third parties. Most of the third party defraggers do enable you to move files during the early portion of the boot process. There’s also a freeware defragger called Contig.exe available from Sysinternals.com which can, if I remember correctly, be scripted to run at boot time to make non-contiguous files contiguous. The fact that you’re not aware of these facts indicates that you may not yet have enough information to be sure of your ground in this argument.
I’ve done benchmarking on various types of RAID arrays and single disk storage before and after defragging. The effects of defragging on these systems vary widely, of course, with the type of use to which they are put. I saw significant differences on heavily thrashed single disk and RAID systems on which massive numbers of file system changes were made on a regular basis. (Even then the differences were not always great, depending mostly upon average created file size compared to storage area size.) I NEVER saw a significant difference between pre- and post-early-boot defragmentation of those few “stubborn” files which can’t be moved when the OS’ GUI is up and running.
These results can differ for different types of equipment and for different use patterns. That’s why I say you can only be sure if you do your own benchmarking on your own systems. (And not just one set of benchmarks, either.) As for me, I defrag the systems I use most on either a weekly or monthly basis, and that works quite well.
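The do-your-own-benchmarking advice above can be as simple as timing a sequential read of a large file before and after a defrag. A minimal sketch follows; the scratch file name and size are arbitrary, and on a real system the OS read cache must be cleared between runs (e.g. by rebooting), or the second run just measures RAM.

```python
# Minimal sequential-read timing sketch. Run before and after a
# defrag and compare. File name and size are arbitrary assumptions.
import os
import time

TEST_FILE = "bench.bin"   # hypothetical scratch file name
SIZE_MB = 16              # kept small so the demo runs quickly

# Create the scratch file in 1 MB chunks (avoids holding it in RAM).
if not os.path.exists(TEST_FILE):
    with open(TEST_FILE, "wb") as f:
        for _ in range(SIZE_MB):
            f.write(os.urandom(1024 * 1024))

# Time one sequential pass over the whole file.
start = time.perf_counter()
with open(TEST_FILE, "rb") as f:
    while f.read(1024 * 1024):
        pass
elapsed = time.perf_counter() - start
print(f"read {SIZE_MB} MB in {elapsed:.3f} s")
```

A fair comparison averages several runs with an identical workload before and after each defragger, which is exactly the “not just one set of benchmarks” caveat above.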
May 27, 2005 at 10:25 am #3181215
From what I have read of RAID…
by pailr · about 18 years, 11 months ago
In reply to You make many assertions…
you would be correct. But, RAID is an extreme luxury which I cannot afford.
Thanks for your input, though. It may be informative to others as well. I will check into the pre-boot defraggers. My experience has shown in the past that my system experiences a VERY great difference when the files are not defragged every few days. That is, of course, in the absence of RAID.