
Take control of the Windows XP pagefile

The Windows XP pagefile can become fragmented and hinder the computer's performance. Here are a few techniques you can use to better manage this file.
By Dr. Thomas Shinder

This article was originally published on TechRepublic March 25, 2002.

The Microsoft Windows XP pagefile is used to extend the amount of computer memory available to applications and services so that they're not limited by the amount of physical RAM installed on the computer. Because the pagefile represents an extension of your physical RAM, anything you do to optimize the performance of the pagefile will have an impact on the overall performance of the computer.

After you carry out my recommendations on pagefile fragmentation, pagefile location and performance, and pagefile security and command-line management, you should find that your Windows XP box runs faster than ever.


Pagefile fragmentation

It takes longer to read a fragmented file. Files tend to fragment when they're added or changed in a low disk-free space environment. Also, files fragment more frequently on compressed NTFS volumes. Like any other file on a computer hard disk, the pagefile can become fragmented. However, the pagefile is immune to compressed NTFS volume fragmentation because it cannot be compressed.

The pagefile will fragment if there isn't enough contiguous hard disk space to hold the entire pagefile. This typically isn't the case when the operating system is first installed. But, by default, the operating system configures the pagefile to allocate space for itself dynamically. During the dynamic resizing, the file can end up fragmented because of a lack of contiguous disk space.

You can determine the level of pagefile fragmentation with the built-in Disk Defragmenter. Perform the following steps to see how many fragments the pagefile has been split into:

  • Click Start | Run.
  • In the Run dialog box, type dfrg.msc in the Open text box and click OK.
  • Click the Analyze button in the Disk Defragmenter application (Figure A).

Figure A

  • When the Disk Defragmenter dialog box appears (Figure B), click View Report.

Figure B

  • In the Analysis Report dialog box (Figure C), scroll through the Volume Information box and find the Pagefile fragmentation section.

Figure C

In this example, the pagefile is in three fragments.
  • Click Close to close the Analysis Report dialog box.
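
If you prefer the command line, Windows XP's built-in defrag.exe can produce the same analysis report. This is a sketch, assuming the pagefile lives on drive C:

```shell
:: Analyze drive C: without defragmenting (-a); -v prints the full
:: verbose report, including the "Pagefile fragmentation" section.
defrag c: -a -v
```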

You could improve performance on this machine by removing the fragmented pagefile and re-creating it so that it doesn't fragment again. You'd do this by creating a static pagefile, one whose minimum and maximum sizes are the same. This prevents the operating system from dynamically resizing the pagefile -- it's the dynamic resizing that causes an unfragmented pagefile to become fragmented.
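
The Pagefileconfig script covered later in this article can also make the pagefile static from the command line. A sketch, assuming a hypothetical 768 MB pagefile on drive C: (choose values suited to your RAM):

```shell
:: Setting initial (/i) and maximum (/m) to the same value, in MB,
:: creates a static pagefile the OS cannot resize. /vo names the volume.
cscript %windir%\system32\pagefileconfig.vbs /change /i 768 /m 768 /vo c:
```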

You can accomplish this task in two ways:

  • Temporarily move the pagefile to another disk/partition.
  • Use a third-party product such as Diskeeper.

Diskeeper review

Diskeeper can perform an "offline" defragmentation of the pagefile after the system is restarted; it cannot defragment the pagefile while the system is running normally. If you don't have Diskeeper, you'll need to move the pagefile off its current partition and re-create it.

Perform the following steps to remove and re-create the pagefile:

  • Click Start and then right-click the My Computer object. Click on the Properties command.
  • In the System Properties dialog box, click on the Advanced tab.
  • On the Advanced tab, click on the Settings button in the Performance frame (Figure D).

Figure D

  • On the Performance Options dialog box, click on the Advanced tab.
  • On the Advanced tab (Figure E), click the Change button in the Virtual Memory frame.

Figure E

  • In the Virtual Memory dialog box, click on the drive that presently holds the pagefile. Click on the Custom Size option button and change the Initial Size and Maximum Size values to 0 (Figure F). Click the Set button.

Figure F

Zero out the current pagefile.
  • Click on another drive. Select the Custom Size option. In the Initial Size and Maximum Size text boxes, type in a value equal to the amount of RAM in the computer. Click Set and then OK (Figure G). You'll see a dialog box informing you that you must restart the computer for the changes to take effect. Click OK three times to complete the process.

Figure G

Create the temporary pagefile.

  • A System Settings Change dialog box will appear and ask if you want to restart the computer. Close all programs and click Yes.
  • After the machine reboots, click Start | Run.
  • In the Run dialog box, type dfrg.msc in the Open text box and click OK.
  • Click the Defragment button in the Disk Defragmenter application. The purpose of this defragmentation run is to create enough contiguous free space to fit your new pagefile. After defragmentation is complete, click the Close button. Then close the Disk Defragmenter application.
  • Go back to the Virtual Memory dialog box and put the pagefile back on the original drive. Remove the pagefile from the temporary drive (Figure H). Click OK and then click OK again in the dialog box that tells you that the system needs to be restarted. Click OK two more times and click Yes to restart the computer.

Figure H

Re-create another pagefile and remove the temporary pagefile.

You can now run an analysis again using the Disk Defragmenter application to confirm that the pagefile is no longer fragmented.

Pagefile location and performance

If you want to max out your computer's memory-handling performance, you must do more to optimize your pagefile. Factors to consider when optimizing the pagefile include:

  • Disk type and location
  • Gauging performance

Disk type and location

There is some debate on where to place the pagefile. If you look in Microsoft TechNet, you'll see several references on this subject. From my analysis of the available literature, I submit the following recommendations:

  • Place the pagefile on a dedicated disk.
  • Do not place the pagefile on a RAID volume.
  • Do not place the pagefile on a volume that shares the same physical disk with a busy partition, such as the operating system partition.

To get the best performance out of the pagefile, you should place it on a dedicated disk. This is especially the case on high-end systems with a large amount of RAM. If your budget doesn't allow you to do this, you can place the pagefile on a disk that contains files that are occasionally read and written to, such as archive files that you create once a month.

You should not place the pagefile on a RAID volume because RAID volumes typically require extra write time. While RAID 5 volumes have significant advantages in read time, their write time is actually worse than a non-RAID disk's because each write also requires a parity calculation. Since the pagefile is subject to frequent reads and writes, you want to make sure it isn't placed on a RAID volume.

Gauging performance

Various formulas are available for how to optimally size the pagefile. Most are based on some percentage of the amount of RAM installed on the computer. These RAM-based pagefile size recommendations are just estimates. None of them will accurately reflect the best pagefile size for your computer.

The best way to determine the appropriate pagefile size is to use the Performance Monitor, which has two counters that you can use to determine your pagefile's optimal size:

  • % Usage
  • % Usage Peak

The % Usage counter tells you in real time what percentage of the pagefile is currently in use. The % Usage Peak counter tells you what percentage of the pagefile was in use during its peak usage (that is, the pagefile usage when the system made the greatest demand on the pagefile). The latter value is most useful in determining the best pagefile size.

Start by creating a pagefile that is 1.5 times the size of your physical RAM. Then, perform the following steps:

  1. Click Start | Run.
  2. In the Run dialog box, type perfmon.msc in the Open text box and click OK.
  3. In the Performance console, click the plus button (+) in the toolbar.
  4. In the Add Counters dialog box (Figure I), click the Down Arrow in the Performance Object drop-down list box and select the Paging File object. Select the All Counters option. Select the Select Instances from List option and select the pagefile location. Click Add and then click Close.

Figure I

  5. Let the counters run for a day or two. To reduce the system resources devoted to monitoring, you might want to set the sample interval to 15 seconds or longer (click the Properties button in the toolbar and adjust the Update Automatically field). Then change to the Report View and examine the usage statistics (Figure J). If the pagefile's % Usage Peak is under 90 percent, you're in good shape and don't need to resize the file. If the peak is over 90 percent, do a detailed review of the session to see how often usage crosses that threshold. If it does so frequently, consider resizing the pagefile and running the monitoring session again.

Figure J

Analyze the pagefile counter statistics in Report View.
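
If you'd rather collect these counters without the Performance console, Windows XP's built-in typeperf utility can log them to a CSV file for later review. A sketch, assuming the default _Total instance and a 15-second sample interval:

```shell
:: Sample both pagefile counters every 15 seconds (-si); 5760 samples
:: covers 24 hours (-sc). Results are written to a CSV file (-o).
typeperf "\Paging File(_Total)\% Usage" "\Paging File(_Total)\% Usage Peak" -si 15 -sc 5760 -o pagefile-usage.csv
```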

Pagefile security and command-line management

You can take a couple more actions if you want to take command of your pagefile:

  • Delete pagefile on shutdown.
  • Use the Pagefileconfig command-line utility.

These are optional configurations, but you might want to implement them in special circumstances.

Delete pagefile on shutdown

You can clear the contents of the pagefile at system shutdown with the Local Security Policy console (click Start | Run, type secpol.msc, and click OK). Select Security Settings | Local Policies | Security Options, double-click the Shutdown: Clear Virtual Memory Pagefile entry, and select the Enabled option (Figure K).

Figure K

I recommend that you enable this option only if you have multiple operating systems on the same machine. It's possible to read the contents of the pagefile if you boot into another operating system. However, if you have only a single operating system, the pagefile will be locked and not readable. With a single operating system, you shouldn't wipe the contents of the pagefile; those contents may be helpful to you if you ever need to run a forensic analysis of the machine.
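
Behind the scenes, this policy toggles the ClearPageFileAtShutdown registry value, so the same change can be scripted with XP's built-in reg.exe. A sketch:

```shell
:: Set ClearPageFileAtShutdown to 1 so the pagefile is wiped at the
:: next shutdown. Use /d 0 to turn the behavior back off.
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v ClearPageFileAtShutdown /t REG_DWORD /d 1 /f
```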

Pagefileconfig command-line utility

Windows XP includes a command-line utility, Pagefileconfig, that allows you to:

  • Change the current pagefile settings.
  • Add pagefiles to the system.
  • Delete pagefiles from the system.
  • Display the current pagefile settings.

When you first try to run the Pagefileconfig utility from the command prompt, you'll see the following dialog box (Figure L):

Figure L

Run the cscript //H:CScript //S command at the command prompt to make the command-line script host the default (and save that setting), and then reissue the Pagefileconfig command without switches. You'll see the screen shown in Figure M.

Figure M

Pagefileconfig displays the current pagefile status.

You can run the Pagefileconfig /? command to learn more about the switches that let you manipulate the pagefile.
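
As a sketch, here are the four operations Pagefileconfig supports, using hypothetical sizes (in MB) and drive letters; adjust them for your system:

```shell
:: Display the current pagefile settings.
pagefileconfig /query

:: Change the existing pagefile on C: to a static 768 MB.
pagefileconfig /change /i 768 /m 768 /vo c:

:: Add a second 512 MB pagefile on D:.
pagefileconfig /create /i 512 /m 512 /vo d:

:: Delete the pagefile on D:.
pagefileconfig /delete /vo d:
```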

Conclusion

The Windows XP operating system automatically installs and configures a pagefile during system setup. While the default configuration of the pagefile does a reasonably good job, you can make several improvements on the default configuration. By following the recommendations I've given here, you'll be able to take command of your pagefile and improve your system's performance.


62 comments
ashu_meshram

it is very good to learn all these things

jnijkerk

It's obvious that the pagefile will fragment. There's one simple method to avoid fragmentation. 1. Leave the minimum size of the pagefile (64 Mb) on the disk where the operating system resides. 2. There are two possibilities: A. Locate the rest of the the pagefile exclusively on a second (or third or whatever) harddisk of it's own. or B. Locate the pagefile on a partition of it's own on a (again second or whatever) harddisk that is seldom used (that's to say, a disk with practically no read/write actions). For this I have a harddisk in use that is functioning as a 'parking lot' for files that need temporarily storage before being copied to other disks (for archiving or modification). 3. It is obvious that the partition for the pagefile must be bigger than the pagefile itself, 1,5 times bigger than the theoretical maximum size will be big enough. 4. After this, don't change the other default settings for the pagefile, leave everything to the operating system. Additional notes: The faster the disk where the pagefile resides, the faster the virtual memory. You will experience that the pagefile never gets fragmented and maybe (depends on the application) you will experience a sometimes dramatical increase of the speed of the virtual memory. And: even on advanced raid systems ;-)) I use a single disk for the pagefile. So get happy... ;-) Jur

Eskander51

Thank you very much,it is splendid.

roman70816

I loved the aricle, im a p.c. tech and i just learned how to tweak and defrag my pagefile on my H.D.D. with this article. Thank you!! Thank you!! guys for this wonderful article on optimizing your pageflie. You guys are the best. Keep up the good work and the good tips,tweaks, and articles guys. your the best!!!!!!!

Ron_007

The first commenter asked my question, what about Vista and Win7? One point I would dispute though. The pagefile will fragment if there isn't enough contiguous hard disk space to hold the entire pagefile. This typically isn't the case when the operating system is first installed. Whenever I've looked at a disk immediately after an install it IS hugely fragmented. And the pagefile has typically been more than a little fragmented. AFIK the OS still assigns space to all files on a "first come, first served" basis, no matter how small the space on the disk or how large the file is. Any opinions on using SSD for swap file. Although it has fast read, I suspect the slow write could become an issue. How about the question of "wear" of the SSD? They have a limited life span compared to 'normal' hd.

rakib_cool

a very good post. Thanks to author for such publishing.

ps.techrep

The only reason in XP for a pagefile on the system partition is that Windows uses the space for dumping the memory in the event of some kinds of crashes. If you must have the pagefile on the system partition, to defrag it, remove it, defrag the system partition in Safe Mode, then restore it. "Install" Mark Ruscovich's pagedfrg and reboot once. Yes Kurt, It's possible to have a 100% defraged pagefile that requires no maintenance, by putting it on a dedicated _partition_, and it need not be a primary partition. A separate disk isn't needed.

knudson

Most of us have lots of old disks around. I have several 18 gig HDs, I install it and move the pagefile to it. Fixed size pagefile, usually 1.5 -> 2 x ram. Then use some of the remainder, usually in a second partition, for archive storage. Things I just want to save, but rarely access. Added performance, don't know, but it seems logical.

Jack Flash

What's interesting is that in spite of XP's stability it has strange phenomena of real memory frangmentation so that even if you have 3GB mem and 1.5GB of it is free it still starves for memory...the same issues do NOT exist in Vista/Win7. Jack IT Professional - You are not Alone Anymore Join the party here: http://budurl.com/ITMM +Free Full IT Course as you signup to the Blog Mailing list

Rob C

If I have 3 GB of ram, and am running XP Pro, can I just switch the pagefile off ?

jeslurkin

When I read 'defrag the pagefile', I was going to comment: 'STUPID' Now I see that he recommends what I have done for years. I _do_ adopt the 'Nix policy of putting it in a 'dedicated' partition. (It shares this partition with the rubb... er, Recycling Bin, and the hiberfil.sys.) I don't worry about performance improvement, I just don't want them (nor any non-system files) to cause fragmentation/ refragmentation of the system partition.

tomtanin

Sysinternals has a free tool called PageDefrag that shows you how many pagefile file fragments are on the system and also lets you configure the system to perform a pagefile defrag on the next boot.

Gis Bun

Half of this article assumes a multiple partition or drive system. Standard recommendation is the pagefile should be 1.5 times the amount of memory. But if the system lacks RAM [less than 512MB, I'd boost that to 2 times the amountof RAM]. If the system has plenty of RAM [3GB or greater], consider less than 1.5 times. A system that could actually dual boot 2 XP systems [it's rare] can use the same pagefile.

Neon Samurai

Sysinternal's Pagefiledefrag is a great utility that consolidates your page file, hibernate, registry hives and other normally unmovable system related blobs. I like to set it to defrag every boot; if no need to defrag, it'll just flash past without adding noticeable boot time. I also read several years ago that the guy behind sysinternals and Microsoft both recommend system managed swap files. The question was posed in an interview; how does one get best swap performance for say, gaming? First, Windows manages swap sizes better than a human and the standard ram~ram*2 size rule. Second, put a swap file on C and also on a second physical drive. Windows will make use of the faster swap location as applicable. (wish I could find the original article to offer) I'm not sure if the second point here has since changed or not though. It would be interesting to hear from the sysinternals folk or a people that know Windows swap management right down to the source code level.

Jeff7181

"To get the best performance out of the pagefile, you should place it on a dedicated disk. This is especially the case on high-end systems with a large amount of RAM." Seems like it would especially be the case on a low-end system with a small amount of RAM. A large amount of RAM means it'll be hitting the page file less frequently, right? So if you have a small amount of RAM it'll be hitting the page file on the hard disk more often, so having a page file that performs well would be more important for overall system performance than on a system with a large amount of RAM.

Mark W. Kaelin

If you are still using Windows XP, the pagefile is an important performance consideration. When was the last time you checked your pagefile settings?

keithc

If you put the pagefile in a separate partition, every time the OS needs to access it, the disk read/write head has to move to that partition. Of course, with multiple read heads, this matters less, but the rule we used to use for OS/2 was always put the page file in the most used partition on the least used disk. Putting it on the least used disk means that the chances are that the read/write head is unlikely to be busy somewhere else on the disk when the OS needs to swap memory. Putting it in the most used partition means the head usually needs to move a shorter distance, as it is likely in that area anyway.

jnijkerk

Nice to reply your own post. ;-))) There's one other thing, the FAT32 file system is faster than NTFS. So I suggest to format the partition or disk where the pagefile resides (the part wich is above 64 Mb off course) as a FAT32 partition. On my systems it works fine...

jeslurkin

If a system actually needs to use a swap file, the constant re-writes could 'wear out' flash mem pretty quickly.

alexisgarcia72

Yes, you can create for example a 8GB dedicated partition just with the pagefile. In this way this partition only contains such file and is not defragmented at all or you can easily defrag with RAXCO or any other Highend disk defrag (I avoid MS defrag)

Doug Vitale

On PCs with 2GBs or more of RAM, I disable the page file (choose "No Page File" in Figure G). Windows XP should be more than content with 1 GB of RAM, so if you have 2 GB or more you can remove the page file and force Windows to use RAM, which is much faster.

jeslurkin

I no longer put the recycling bin in 'Swap' partition. It has now its own 'Trash' part. set to 1% of total system drive size. This _does_ require that one go to every other partition and 'zero-out' the RB files in those other parts. I would find it incomprehensible that MS duplicates the RB in every part., and sets the default to 10%,... except that I realize that MS wants the avg. user to clog things up prematurely and buy a new machine and OS.

Bruce Epper

On multi-boot systems you can share the pagefile between all of your Windows installations from 2000 on. I had a triple-boot system with 2K, XP & Vista all sharing the same paging file and they didn't give a rip.

Ron_007

Yes, a default mass marketed PC still ship with single C: partition for OS and general use and sometimes with a "Recovery" partition Download one of the free partition managers and you can resize partitions and you can put in a separate swap file partition. Actually, the last time I had to re-install Vista on my laptop I tried the default partition tool. I was pleasantly surprised. It did everything I needed, resize, move and define partitions.

Brainstorms

I now place the pagefile on its own partition, per recommendations I've read. Disk space is cheap, and having watched what Windows picks when it chooses, I now make a 3 or 5 GB partition and put a 2 or 4 GB (custom, static sized) pagefile there. (The rest of that partition space can be used for scratch.) On dual-boot systems (WinXP & Win7 Upgrade) that are tight on disk space, I let them share the pagefile -- they don't care; whoever boots, uses it without complaint. If there's enough space, Disk 1 has a WinXP OS partition + a Win7 pagefile partition; Disk 2 has a Win7 OS partition & a WinXP pagefile partition. Size-wise the latter dual-boot scheme has the two Windows mirroring each other's disk usage in the first 'x' GB of a pair of drives. The rest of the drives can then be paired up as RAID-1 partitions for a (robust) Linux install. I may not see much Windows pagefile use, but then there's always a partition that can be "borrowed" by Linux in a pinch. Regarding Linux, I've only seen it use swap space on a machine with 256 MB of RAM running Xubuntu, or a 512 MB machine running Ubuntu while running many apps at once. With 512 MB or more, it doesn't seem to need it (much). Less than that and Linux does *noticeably* slow down during installation and/or LiveCD use if there's no swap partition for it to find/use.

mr.vedatozkan

"To get the best performance out of the pagefile, you should place it on a dedicated disk." you can place your pagefile on a RAID system!!! but the raid configuration must NOT be RAID 1,3,5 but it must BE RAID 0 (stripped set). as you know stripped set RAID has faster read/write performance than the other RAID configurations, so it will boost your pagefile. you must not place it on RAID 1 configured disk system because it will dublicate the pagefile and you will get a very slow read/write performance,doubled pagefile and expensive pagefile disk system. also RAID 3 and 5 disk system won't worth placing pagefile on them.

MightyMouse58

Hi, maybe slightly off-thread but can this same principal be applied to Vista Ultimate? Thanks

vindasel

I've placed my page file in a dedicated 8 GB partition on a secondary physical HDD and set it to be Windows managed. My desktop has several large HDDs, so 8 GB disk space is easily expendable. I've read several suggestions on managing pagefiles such as setting it to 1.5xRAM size, disabling it all together, etc etc. But in my experience, there's no performance benefit with any of these implementations. So to be on the safe side, I just give it a reasonable sized partition to play in and let Windows manage it. On my laptop with only one unpartitioned physical HDD, I simply let Windows manage it. In case it fragments in either of the systems, I can easily defrag it with Diskeeper Pro that I have installed on both machines. It defrags the page file as well as the MFT if set to run a boot-time defrag.

oldbaritone

I still cannot understand why Windows insists on needing a pagefile. For crying out loud, my new system has 4GB of RAM! HOW MUCH MEMORY does it really need? "Back in the old days" (Win 3.1) the way to make a Windows machine screaming-fast was to load it to the max with RAM, and disable paging completely. After all, RAM speed is measured in nanoseconds; HDD (and therefore pagefile) speed is measured in MILLIseconds. Windows insists that it MUST use technology that is ONE MILLION TIMES SLOWER because "that's the way it works." When Windows-95 came out, I tried disabling paging with a system that had 8x the minimum RAM. In fact, the system had more physical RAM than another system in the office had "total" - including the pagefile. (One machine had 64M, the other had 16M physical with 32M pagefile.) Do the math - the system with 48M total ran just fine, but when I disabled paging on the 64M machine, IT WOULD NOT EVEN BOOT. 'Scuse me? Runs fine in 48M but won't boot in 64M? Hmmm... It has been that way ever since, even though the average RAM in high-performance systems is hundreds or thousands of times what it used to be. But we haven't been able to disable paging for 15 years, and I don't foresee Microsoft changing that decision, ever.

SubgeniusD

During recent Win 7 install I set up a separate swap partition for the paging file. Read a debate somewhere on TechNet (concerning Vista/Win7) which ranged from not bothering with a pagefile at all to placing on separate physical disk. One skeptical engineer felt the separate disk theory would only yield minor performance increase (if any) and then only if the second disk was faster then the system disk. I keep a performance monitor gadget on my desktop and have yet to see Ram usage go beyond 60% or so with 4 Gigs of DDR2. So I doubt I'll see any actual performance gains at all at this point. In the Linux world swap partitions are also debated although most installers routinely setup a swap partition by default. Running Linux on this machine for months I've never seen swap used at all. Doe Mark have any handy links to Win 7 pagefile info that could clarify this?

bjorn.goddijn

The things I'm missing is: why not a fixed size for the pagefile. And 2 other tweaks in the registry: DisablePagingExecutive=1. Furthermore in the registry, if you have 1GB or more RAM: LargeSystemCache=1 (both keys are under: HKLM\System\CurrentControlSet\Control\Session Manager\Memory Mangement\) All these and assiging the pagefile to a dedicated partition (on an other drive than your OS when possible) increase stability enormous.

Bruce Epper

Putting your paging file on another partition on the same drive as your system partition will impair your performance as you are still running it through the same controller channel onto the same spindle with an overall increase in actuator movement. So there is absolutely nothing to be gained from that practice.

Neon Samurai

LiveCDs are loading there image into ram and reading from the disk reader as needed so they tend to be slower in my experience. Great to test a distribution or when you need an OS separate from the hard drive but not the ideal approach to ongoing use. One approach I used in the past was a liveCD for a thinclient (work notebook I couldn't install on). I'd boot the liveCD to get a local desktop then run the apps from my desktop in another room; over the local network, you wouldn't realize that the programs on your screen where actually running elsewhere. It does depend on network speed though as it's noticeable over an internet connection though doable. Install I wouldn't worry about so much though I've never been able to really measure that. Do you mean distribution install or when installing large numbers of packages afterward? Either way, it's much less during general use. I rgularily have multiple VMs booted along side my host OS, Firefox and several other programs and still can't max out my 4 gig of ram enough to touch the swap. It's only been some runaway program once or twice in the last two years. When Conky shows my swap usage going up, I have a look and kill the offending process then watch it drop back down. Placing your /tmp directory on a separate partition has the same purpose. Sometimes something goes badly and will start eating temp space but won't be able to DoS your whole system since it can't go past the /tmp partition. (Apache on Mandriva 2007.1 but it's EOL with Mandriva 2010 being current now) Interesting approach to your Windows swap. The last time i was booting two versions on the same hardware I didn't have any consideration for swap so creating a partition let alone sharing it between them didn't enter my mind. I'm not sure what I'd do with it now as I'd be more likely to put the latter version on metal and previous into a VM.

alexisgarcia72

Best practice will be to put the page file in a raid 0 configuration, but I see raid5 setups with better performance than raid0 setups. I.e: raid5 4 hd 15,000rpm hardware raid with adaptec accelerator.

AKHandyman

I've used Diskeeper for years and insist that it goes into every new rig I build. I have also found that with some high-end rigs, gaming ones in particular, a dedicated second HDD ,i.e. 36GB WD Raptor, makes for a perfect pagefile. I know most folks don't go to the extremes that I have, but this is just my two cents. I have had exceptional performance with the separate HDD as a pagefile and never have had any complaints otherwise.

Michaelss

I have been running Vista Ultimate 64bit with 8GB of RAM with it completely off for 2 years now, never had a problem. I am now running Windows 7 Ult 64bit the same way, completely OFF, no problems. I am also running on two SSD drives and have also turned off the Superfetch, and my machine runs wicked fast and with out a glitch.

Who Am I Really

that's how they keep the current & future RAM development in motion; if you could trick windows into using in all RAM and function reasonably well then people wouldn't always be clamoring for a system with more RAM, I ran Win3.1 on many different 80386 & 80486 systems and no matter what my config. was, it always asked for a huge Swap File, (and yes Swap File is correct for 3.1x & 9x and Page File is for NT) I have Win3.1 on 3 different 80486 with 32MB RAM and it still asked for a 100MB Swap File on all of them, that's over 3x the RAM, I got tired of it wasting my Disk Space so I changed to temporary Swap Files and it never goes over 22MB, why ask for a 100MB permanent then? That's what fuels the consumers desire for larger disks ... and on and on it goes, why not just run an OS on hardware that's released 5 years after the OS and you aren't bothered by the Page File as the system is fast enough that you don't have to wait around for things to happen ie. - win2k on systems from 2005 & up - XP on systems from 2007 & up - vista on systems from 2012 & up* - win7 on systems from 2015 & up* *(oh ya were not there yet)

gjkool

For some 5 years already, I use XP with the pagefile completely off, and 1 GB of RAM. Worked great, no problems at all! It even was the only way to get video capturing to work reliable and flawlessly. With pagefile on, the video capturing would every time hesitate or even freeze. Now, with the pagefile OFF, video editing and encoding also works great, photo editing too, even with batch editing. Better, faster than with pagefile! Since december I have got an other, faster computer with 2GB RAM. Tried W7, it works surely much better than vista, but finally went back to XP and NO pagefile. I do not regret the downgrade: XP without pagefile is noticably faster and more reliable than W7.

JCitizen

You might have been exceeding them, and the OS went blind. I can't remember the old limits anymore, but XP SP1 can only see 3.5GB in the best possible scenario. XP x64 Edition can see more, of course.

cc99

Right on.... I have used it for a long time on every XP system I have tuned up. It works very well, it's simple, and very fast to install and use. Users can be trained to run it once a month.

Neon Samurai

It used to be the old RAM*3 rule back in the 128~512MB days. These days, I just give it 2GB and carry on. It's more of a warning: if I see swap usage beyond the 1% or so average, then it's time to look at what's eating physical RAM, as it's likely a hung process or a memory leak.

With separate physical disks, I'd be on the side for rather than against. Unless your second drive is slower than the primary drive, you're going to get some benefit. Even if it's two equal-speed drives, you're still getting swap reads and writes through a separate channel and platter head than if the swap was on the first drive, mixed in with your OS install. Ideally it should be on a separate branch: rather than have it on IDE1 with slave and master both using the same traffic channel, put your second hard drive on IDE2 master. That's just an example, of course, since it's more likely you're looking at sata0 and sata1 these days. That would be my thinking, anyhow.
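The "swap usage beyond 1% is a warning sign" rule of thumb above is easy to express in code. A minimal sketch (the function names and sample numbers are my own, purely illustrative):

```python
def swap_usage_percent(swap_used_kb: int, swap_total_kb: int) -> float:
    """Return swap usage as a percentage of total swap space."""
    if swap_total_kb == 0:
        return 0.0  # no swap configured at all
    return 100.0 * swap_used_kb / swap_total_kb

def needs_attention(swap_used_kb: int, swap_total_kb: int,
                    threshold_pct: float = 1.0) -> bool:
    """Flag sustained swap usage above the threshold as a possible
    hung process or memory leak, per the heuristic described above."""
    return swap_usage_percent(swap_used_kb, swap_total_kb) > threshold_pct

# Example: 512 MiB in use out of a 2 GiB swap file is 25%,
# well past the ~1% warning line.
print(needs_attention(512 * 1024, 2 * 1024 * 1024))  # True
```

The point of the threshold is not that swapping is forbidden, but that sustained usage above the noise floor is a cue to inspect what is consuming physical RAM.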

Bruce Epper

The only thing you will really lose by disabling the pagefile is the ability to get crash dumps if your new device driver BSODs the system or an app crashes. If this doesn't concern you, it is not a big deal to disable the file.

alexisgarcia72

I know a lot of articles do not recommend disabling the pagefile, but yesterday I started an experiment. On a 3GB XP computer, I disabled the pagefile. The PC is stable, running Photoshop, Paint Shop Pro, Beyond Compare, Format Factory, and one VMware 6.5 XP virtual machine. I don't see any problem with not having the pagefile.

Rob C

I tried the latest Ubuntu LiveCD and was lamenting that it cannot store settings and data from your sessions. (If it can, please let me know how.) I then tripped over Puppy Linux, which can store your settings, data, etc. It also fully loads into RAM, which makes it extremely fast and thus also frees up your CD drive for other uses during your session. I love it. Do any of the other LiveCDs save settings, data, etc.? (They don't have to load fully into RAM, as I can live with slow responses, provided the other features are inviting.)

JCitizen

I wasn't sure I liked RPM when I got my first distro. Mandriva 7.0 - man that seems like the dark ages now!

Neon Samurai

Handy having a light distro to grab when needed. I can understand your grief with Red Hat; if anything is going to turn you off RPM, that'd be it. With Mandriva, they actually used a different package utility: rpm always made a mess of my systems, but urpmi managed dependencies the way rpm should have (a bit confusing with the utility named exactly like the package format, also).

But my first five minutes with apt-get and then aptitude fully clarified why people like .deb so much. I can use deborphan to clean up packages left behind. I can easily reinstall a package, or identify damaged packages that need to be reinstalled. There just doesn't seem to be anything as clean for managing RPM once the system goes to hell. dpkg-reconfigure: what a dream that little utility is. And I can easily take a tarball and install it as a .deb thanks to checkinstall. I don't know if I can go back to an .rpm-based distro. For the life of me, I can't figure out why the Maemo/Moblin merger is going to .rpm instead of sticking with .deb, as Maemo always has.

Brainstorms

I got started on RedHat EL 4 (at work), and that experience turned me off RPM-based Linux... Perhaps I'm just naturally a Debian guy. ::shrug:: I've heard so many positive comments about Mint that I guess I'll have to try it out, although I don't *mind* running a few scripts to install "the missing elements" that Ubuntu can't redistribute. (Thereafter they fetch updates automatically, so what's the big deal? It doubles the install time, sure...)

I don't do LiveCD installs any more either. I boot the LiveCD to make sure everything's kosher, then partition the hard drives and reboot. Then I do the actual install from the Alternate CD, mainly because I've got so many hard drives that I can pair them up in each machine I've built. Now everything is installed on RAID 1 md's for reliability. (And that's saved me once already in the last 3 months!) My /home partitions are now LVM volumes, which makes migration and expansion a breeze.

Lately I've experimented with lightweight distros: Xubuntu, AntiX, Puppy, DSL, Tiny Core... Aside from being able to boot off pendrives (if the BIOS supports it), they don't need swap partitions and they run fast, which makes them good for prep and rescue. I'm keen to try out Lubuntu when it's released. Lightweight Linux sure does breathe new life into old, limited hardware. And not needing a swap file matters when your laptop's HD is smaller than the thumb drive hanging around your neck!

Neon Samurai

Mandriva 2008.1 was my general LiveCD when I needed an ad hoc OS, so when presented with an old notebook, I grabbed my Mandriva 2009.1 expecting it to run just as happily; KDE4 is not kind to limited RAM. I dropped in my 2008.1 disk and the machine was as usable as it needed to be. I do keep a list of LiveCDs on hand for various needs, though I have been using Backtrack more often than not unless it's an ad hoc OS for someone else to use.

I also got turned off using LiveCD installs. They are great for average users because they simply stamp the LiveCD image to the drive and can do so in the background while the user continues working. If I'm doing a hard drive install, though, it's Mandriva Free or similar that provides a true full install instead of a stamped image (Debian these days, personally). With Ubuntu, I'd need to remove and replace various applications, so it becomes equal to a Windows install, needing me to clean out the defaults or do some chopping and slipstreaming ahead of time. Something like Mint or Backtrack, where I'm going to simply use the factory default image, would be perfect for me. Thank goodness there are choices for different user needs, though, as many people seem very happy with their chosen LiveCD-written images.

Brainstorms

Yes, using a LiveCD is definitely slower, mainly because it can't (or won't) read the entire squash FS into its ramdisk to run, and fetching and uncompressing files from a CD is slow... Before I install on a system that hasn't tasted Linux before, I like to run the LiveCD session to test the recognition and operation of the hardware components. (I also like to lay out and format my partitions this way, before starting the installer.) That's when I notice the "need" for a swap partition on an attached hard drive.

Installing with a text-based installer, such as one of the Ubuntu Alternate Install CDs, probably isn't affected much, but running or installing from a LiveCD GUI session *is*, IF the system is short on RAM. I've gotten Ubuntu 9.10 to run a LiveCD session and install on a machine with 256MB of RAM, but I don't recommend it! VERY slow. 384MB would be a better minimum, and with 512+ MB, a swap partition isn't even needed. (Doesn't seem to be needed once the OS is installed, either, as you note.) But if you (defrag Windows, then) boot DSL or Puppy from a pendrive or CD and use gParted to carve out a small (256MB) swap partition, then performance picks up and running off a CD, etc. is fine. Most of my machines have 2-4 GB of RAM, so a swap partition is almost a formality; older machines with RAM limits are what benefit. The only time I've noted that Ubuntu used a swap partition was running Xubuntu on an old ThinkPad with 256MB RAM: running FF and a few apps had it using 200MB of RAM plus 100MB of swap within 10 minutes.

/tmp on a separate partition is a good idea, as would be /var if your machine is accessible from the WAN (mail or script kiddies can also backfill your FS). I keep XP native to make Win7 Upgrade happy, and install those on bare metal for gaming, until my VMs (in Sun's VirtualBox, not the OSE version) have decent accelerated 3D hardware support. It's not there yet, but for the rest of my (minimal) Windows needs, a VM is "the way to go".

Who Am I Really

I work with audio files, and every time I would save a .wav file, the real-time defrag of Diskeeper would eat the file, and the file would end up severely damaged: scrambled into exactly 4KB chunks of audio, all positioned in the wrong place in the file. Kind of like the following. If the file was supposed to be:

- Notify me via e-mail when new posts are added to this thread

I would end up with something like:

- thread posts Notify e-mail this when new are me to via added

So I had to uninstall Diskeeper.

tim.stephens

I always build my own machines, install a decent amount of RAM (hey, it's cheap!), and always set the page file to 0. I run VMs and lots of apps and have NO problems with running out of RAM (within sensible limits), and the system flies. Check out the Linux swap file some time: on most systems with a decent amount of RAM, it is never used.
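The "never used" claim above is easy to verify on a Linux box by reading the SwapTotal and SwapFree lines of /proc/meminfo. A minimal sketch, parsing a sample string in that file's format (the sample values are made up):

```python
def parse_swap(meminfo: str) -> dict:
    """Extract SwapTotal and SwapFree (in kB) from /proc/meminfo-style text."""
    swap = {}
    for line in meminfo.splitlines():
        if line.startswith(("SwapTotal:", "SwapFree:")):
            key, value = line.split(":")
            swap[key] = int(value.strip().split()[0])  # value looks like "2097148 kB"
    return swap

# Sample /proc/meminfo excerpt; on a real system you'd read the file itself.
sample = """MemTotal:       8167848 kB
SwapTotal:      2097148 kB
SwapFree:       2097148 kB"""

info = parse_swap(sample)
swap_used_kb = info["SwapTotal"] - info["SwapFree"]
print(swap_used_kb)  # 0: with plenty of RAM, the swap space sits untouched
```

When SwapFree equals SwapTotal, the kernel has not paged anything out, which is exactly the behavior described for systems with ample RAM.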

JCitizen

that was a trip down memory lane!! :p I remember a lot of floppy diagnostic disks took advantage of ramdisk.

jeslurkin

The earliest W95 didn't know how to use more than 11 megs of actual RAM. ISTR that OSR-2 could use up to 16MB. Versions of 98 could use from 24 up to 64MB. ME was never able to use more than 128MB, in my experience. One 'trick' of which I have heard (and haven't had a chance to try) is to make a ramdisk in the unused RAM and put the swapfile in the RD. (FURD19i is alleged to be able to make a RD as large as 4GB.)

Who Am I Really

My only beef with PageDefrag is that occasionally, on certain system configurations (i.e. systems with low RAM and a smaller disk), it will move the pagefile all the way to the end of the drive, making system boot and application startup an obnoxiously noisy, clickety-clackety event, because every access to the pagefile requires a full sweep across the platters and then back to wherever it was loading the application from.

JCitizen

I read somewhere that the big HDD makers are modifying the way they handle sector formatting and controllers on the new huge HDDs: http://hothardware.com/Articles/WDs-1TB-Caviar-Green-w-Advanced-Format-Windows-XP-Users-Pay-Attention/ by using drive geometry, better controllers, and 4K sector sizes. I don't know why, but it has been so long since I figured a setup for a drive using the OEM low-level formatting that I forgot it wasn't already this way?! Maybe I'm thinking of 4K clusters? I'm losing it again, I guess. :P However, this could bring new interest to folks who couldn't afford an SSD, but maybe a new Caviar?!?

Neon Samurai

Mind you, VMs were the primary target for this system build, with games for graphics processing. Previous to VMs, games were my primary build benchmark. I should hope I have to open several VMs to consume the system resources. Even then, it looks like load times are the first slowdown I hit, as processor and RAM still have room to go. If I really wanted to, I'd start splitting the VMs among separate SATA drives. Dragon Age comes closest for games, as it shows a clear difference between idle and in-game RAM use. I'm hitting heat and GPU limitations now, though, rather than processor or memory.

If I really wanted to get technical, we could discuss location on the platter for swap partitions. The middle is preferred because the drive head is never more than half the drive away. If swap read times are your key target, then put it at the front of the drive, where you get far more data in a single cylinder (normally where one would put their database partition in an industrial setup).

Ah, who am I kidding; it's time to go SSD, if they'd just increase the size and drop the price a little more. Something as simple as running off SSD has already proven to cut brute-force cracking times in half or more. I'm getting tempted to stuff one in this machine for the OS and game partition, leaving the platter drives for large-file and slow-read storage.
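The "never more than half the drive away" argument above can be made concrete with a little arithmetic. In an idealized model where the head position is uniformly distributed over the platter span [0, 1], the expected seek distance to a partition at position p works out to p^2 - p + 1/2, which is minimized at the middle:

```python
def expected_seek(p: float) -> float:
    """Expected distance from a uniformly random head position in [0, 1]
    to a fixed partition location p. Closed form of E|X - p| for
    X ~ Uniform[0, 1]; a deliberately simplified single-platter model."""
    return p * p - p + 0.5

print(expected_seek(0.5))  # 0.25 -> middle of the platter: shortest average seek
print(expected_seek(0.0))  # 0.5  -> front (or end) of the platter: twice as far on average
```

So a mid-platter swap partition halves the average head travel versus one at either edge, which is the trade-off against the higher linear density available at the outer tracks.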

JCitizen

And, like you said, it just makes sense. Why have a write head whacking back and forth, trying to read/write to paging while also trying to serve the OS and apps? It seems quite logical to let the other controller do it; it isn't doing anything anyway. It may even extend drive life. However, as already pointed out, with the HUGE RAM we have nowadays, disk usage is more of an alarm than a need; the RAM should handle it all. I suppose it depends on whether your application is behaving badly or not. I read on forums that video producers have trouble with paging because the application causes paging problems. I myself really wouldn't know, as I don't use my outdated equipment anymore. I do know that Nero or Sonic did seem to be writing to disk pretty often under XP, but then I only had 384MB of RAM at the time. Now I'm on 64-bit with 6GB of RAM, but no money for production yet. Hey! The economy is looking up! It could happen?!