Do you run your operating system with the pagefile disabled, set to a static amount, or automatic?

  • Disabled

    Votes: 9 (25.7%)
  • Static

    Votes: 12 (34.3%)
  • Automatic

    Votes: 14 (40.0%)

  • Total voters: 35

Hellbovine

Well-Known Member
This is a place to explore the Windows pagefile, and I am opening up the conversation with a poll. Please discuss the reasons behind your choice, benchmarks, testing, white papers, and other data, but refrain from linking to regurgitated articles or disreputable sources, and be prepared to support any claims you make with evidence.

Below is some official documentation to get us started:
- Microsoft Pagefile White Paper (link)
- Mark Russinovich Pagefile Blog (link)
 
I haven't had a pagefile in years, even with just 4GB of RAM.
Running this online system with 16GB.
8GB is enough for an offline workstation.
Not a gamer, audio and video are my thing.
 
Static, but with different minimum and maximum sizes.

Min = 128 MB

Max = 2 times the RAM (or more)

On this long-time install of 7 x64 / 8 GB, the page file rarely grew in the past, but now often does. I think it's Firefox's appetite for RAM lately.

Note: On reboot it returns to 128 MB.
 
I think it's Firefox's appetite for RAM lately.
Since Mozilla dropped a certain tweak, FF has been a beast on RAM use; watch a YouTube livestream at 1080p and FF gobbles memory like Pac-Man.
Careful use of the cleanram tweak I recently posted helps :cool:
Look for an FF tweaker, there are a few around.
 
Why don't we ask Mark Russinovich (Microsoft EVP, CTO and father of SysInternals)?

Pushing the Limits of Windows: Virtual Memory

How Big Should I Make the Paging File?

Perhaps one of the most commonly asked questions related to virtual memory is, how big should I make the paging file? There’s no end of ridiculous advice out on the web and in the newsstand magazines that cover Windows, and even Microsoft has published misleading recommendations. Almost all the suggestions are based on multiplying RAM size by some factor, with common values being 1.2, 1.5 and 2. Now that you understand the role that the paging file plays in defining a system’s commit limit and how processes contribute to the commit charge, you’re well positioned to see how useless such formulas truly are.

Since the commit limit sets an upper bound on how much private and pagefile-backed virtual memory can be allocated concurrently by running processes, the only way to reasonably size the paging file is to know the maximum total commit charge for the programs you like to have running at the same time. If the commit limit is smaller than that number, your programs won’t be able to allocate the virtual memory they want and will fail to run properly.

So how do you know how much commit charge your workloads require? You might have noticed in the screenshots that Windows tracks that number and Process Explorer shows it: Peak Commit Charge. To optimally size your paging file you should start all the applications you run at the same time, load typical data sets, and then note the commit charge peak (or look at this value after a period of time where you know maximum load was attained). Set the paging file minimum to be that value minus the amount of RAM in your system (if the value is negative, pick a minimum size to permit the kind of crash dump you are configured for). If you want to have some breathing room for potentially large commit demands, set the maximum to double that number.

Some feel having no paging file results in better performance, but in general, having a paging file means Windows can write pages on the modified list (which represent pages that aren’t being accessed actively but have not been saved to disk) out to the paging file, thus making that memory available for more useful purposes (processes or file cache). So while there may be some workloads that perform better with no paging file, in general having one will mean more usable memory being available to the system (never mind that Windows won’t be able to write kernel crash dumps without a paging file sized large enough to hold them).


Paging file configuration is in the System properties, which you can get to by typing “sysdm.cpl” into the Run dialog, clicking on the Advanced tab, clicking on the Performance Options button, clicking on the Advanced tab (this is really advanced), and then clicking on the Change button:

You’ll notice that the default configuration is for Windows to automatically manage the page file size. When that option is set on Windows XP and Server 2003, Windows creates a single paging file whose minimum size is 1.5 times RAM if RAM is less than 1GB, and equal to RAM if it's greater than 1GB, and whose maximum size is three times RAM. On Windows Vista and Server 2008, the minimum is intended to be large enough to hold a kernel-memory crash dump and is RAM plus 300MB or 1GB, whichever is larger. The maximum is either three times the size of RAM or 4GB, whichever is larger. That explains why the peak commit on my 8GB 64-bit system that's visible in one of the screenshots is 32GB. I guess whoever wrote that code got their guidance from one of those magazines I mentioned!

A couple of final limits related to virtual memory are the maximum size and number of paging files supported by Windows. 32-bit Windows has a maximum paging file size of 16TB (4GB if you for some reason run in non-PAE mode) and 64-bit Windows can have paging files that are up to 16TB in size on x64 and 32TB on IA64. Windows 8 ARM's maximum paging file size is 4GB. For all versions, Windows supports up to 16 paging files, where each must be on a separate volume.
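If you want to eyeball those commit numbers yourself without opening Process Explorer, here is a rough Python sketch. It assumes the documented GetPerformanceInfo layout in psapi.dll (all the SIZE_T counts come back in pages), and the min/max math at the end is simply Russinovich's rule of thumb applied literally, not an official formula:

```python
# Rough sketch (Windows only): read the commit charge / commit limit numbers
# described above via GetPerformanceInfo from psapi.dll. All the SIZE_T
# fields come back in pages, so multiply by PageSize to get bytes.
import ctypes
from ctypes import wintypes

class PERFORMANCE_INFORMATION(ctypes.Structure):
    _fields_ = [
        ("cb", wintypes.DWORD),
        ("CommitTotal", ctypes.c_size_t),
        ("CommitLimit", ctypes.c_size_t),
        ("CommitPeak", ctypes.c_size_t),
        ("PhysicalTotal", ctypes.c_size_t),
        ("PhysicalAvailable", ctypes.c_size_t),
        ("SystemCache", ctypes.c_size_t),
        ("KernelTotal", ctypes.c_size_t),
        ("KernelPaged", ctypes.c_size_t),
        ("KernelNonpaged", ctypes.c_size_t),
        ("PageSize", ctypes.c_size_t),
        ("HandleCount", wintypes.DWORD),
        ("ProcessCount", wintypes.DWORD),
        ("ThreadCount", wintypes.DWORD),
    ]

pi = PERFORMANCE_INFORMATION()
pi.cb = ctypes.sizeof(pi)
if not ctypes.windll.psapi.GetPerformanceInfo(ctypes.byref(pi), pi.cb):
    raise ctypes.WinError()

gb = 1024 ** 3
ram_gb = pi.PhysicalTotal * pi.PageSize / gb
peak_commit_gb = pi.CommitPeak * pi.PageSize / gb  # run your full workload first

# Rule of thumb from the quote: minimum = peak commit minus RAM (if that is
# negative, size for the crash dump you want instead); maximum = double the
# minimum for breathing room.
suggested_min = max(peak_commit_gb - ram_gb, 0)
print(f"RAM: {ram_gb:.1f} GB, peak commit: {peak_commit_gb:.1f} GB")
print(f"Suggested pagefile: min {suggested_min:.1f} GB, max {2 * suggested_min:.1f} GB")
```

Run it after your heaviest typical workload has been going for a while, since CommitPeak is only meaningful once maximum load has actually been reached.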
 
I have disabled the pagefile ever since Windows XP, but I am interested in this discussion since it is good to revisit popular topics, learn more, and verify whether we are still making the best choices, as things change over time. Here are my conclusions after spending 30 years on computers:

REASON #1 TO ENABLE
The pagefile is far less complicated than people make it out to be. Its main purpose is to provide a safety net for systems that do not have enough real memory (RAM): it lets the system use disk space as a stand-in for RAM to prevent crashing.

Memory was very expensive for the longest time, and with limitations of 32-bit systems there was not always enough RAM available, which is the reason for the pagefile existing. In today's systems there is no reason to skimp on memory with how cheap it is, and no Windows 10/11 computer should be running on less than 8 gigabytes of RAM, as it requires around 1.5 gigabytes just to idle at the desktop.

If you have enough RAM, you can go without a pagefile and never crash, but having a pagefile is the right option on systems with low RAM; otherwise memory-hungry applications may crash to the desktop.

REASON #2 TO ENABLE
On rare occasions, you may come across a program that explicitly wants a pagefile in order to launch, and in that situation you may have to enable the pagefile to bypass this faux restriction. This is considered bad practice, since a pagefile is not actually mandatory for these programs to function normally, nor is it mandatory for the operating system; it is purely the developer's choice.

In cases like these, you may be able to find a shortcut target line property that can be added in order to stop the application from checking for a pagefile, and then it will launch successfully and continue to run normally. In many cases, the developers pushing for a pagefile know there is a memory leak or other poorly coded aspect of their software which causes it to consume memory unnecessarily.

A great example of this is Escape From Tarkov, a newer game that was heavily hyped. In a Twitter post, one of the developers asked people to enable a 20-30 gigabyte pagefile to work around a memory leak. Basically, this is a fix for bad coding in most circumstances.

REASON #3 TO ENABLE
Having a big enough pagefile allows the system to create a crash dump when the situation arises. This is not a compelling reason to use a pagefile because it is usually obvious when things crash to the desktop due to memory issues, and you still get a blue stop screen with troubleshooting information if something crashes the operating system while the pagefile is disabled.

REASON #1 TO DISABLE
The pagefile is not free: it costs CPU cycles, memory, and disk activity. There are additional costs when Windows expands or shrinks the pagefile under automatic sizing, and resizing also creates disk fragmentation. Whenever the pagefile is being accessed, it produces hard page faults, which can cause spikes in DPC latency. Disabling the pagefile reduces overhead, and with enough RAM it will not cause problems.
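If anyone wants to sanity-check the page fault side of this on their own machine, here is a quick sketch using the stock typeperf counter tool; run it once with the pagefile enabled and once with it disabled under the same workload and compare. The counter names are an assumption that you are on an English-language install:

```python
# Sketch: sample the page fault counters for 30 seconds via typeperf, so a
# pagefile-enabled run can be compared against a pagefile-disabled run.
# Counter names assume an English-language Windows install.
import subprocess

counters = [
    r"\Memory\Pages Input/sec",  # hard faults that required a disk read
    r"\Memory\Page Faults/sec",  # all faults, soft and hard combined
]
# -si 1 = sample every second, -sc 30 = take 30 samples
subprocess.run(["typeperf", *counters, "-si", "1", "-sc", "30"], check=True)
```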

REASON #2 TO DISABLE
Windows may try to use the pagefile even when it does not need to, by swapping memory out of RAM and into the pagefile, if it thinks that data is not going to be used again soon. Some people view this as a positive thing, but for users with plenty of RAM available it is often seen as a negative since the pagefile is slower than RAM. Disabling the pagefile prevents this predictive behavior.

REASON #3 TO DISABLE
The pagefile reserves disk space on a drive in order to have a place to swap data in and out of. This is how it creates that "fake memory": by taking data out of RAM and putting it onto a disk instead. If the pagefile is set to automatic sizing it can grow to be many gigabytes, until you no longer need extra memory or it runs out of disk space. If it is set to a custom size, it reserves that amount of space at all times. Disabling the pagefile is an easy way to reclaim reserved disk space.

OPTIMIZING THE PAGEFILE
From an optimization point of view, the best choices are disabled or static, and never automatic sizing due to the extra overhead and disk fragmentation costs. If using a static size, moving the pagefile to a separate drive is better than having it on the Windows drive. All the advice about how the pagefile should be set to a multiple of your RAM is based on outdated and misinterpreted information from the XP era, and was never intended to be a formula for consumers to use.

A modern recommendation for Windows 10/11 would be to have 16 gigabytes of memory available in total, split between the RAM and pagefile. In other words, if you have 8 gigabytes of RAM installed, then adding a static pagefile of 8 gigabytes would give you 16 gigabytes of memory in total, and that would be ideal for almost every user's situation. For devices with small disk drives, aim for 8 gigabytes total instead, such as 4 gigabytes of RAM and 4 gigabytes of pagefile.

This approach is how it was always intended to be: the premise is just to make sure there is enough total memory available (in any form) so that nothing crashes due to insufficient memory. A related point is the claim that SSDs are not affected by fragmentation; this is untrue, as the Windows Storage team has debunked that myth. SSDs still need to be defragmented, which is why Windows intentionally tries to defragment them once every 30 days.

In conclusion, if you have an abundant amount of RAM (16 gigabytes or more) for Windows 10/11, it is safe to disable the pagefile and there should not be any ramifications. However, if you have less RAM installed, then you should use a static pagefile to prevent crashing.
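For anyone who would rather script the static setting than click through sysdm.cpl, here is a rough sketch of reading (and, commented out, writing) the PagingFiles registry value that Windows reads at boot. The D: drive letter and the 8192 MB min/max are example values only, not a recommendation for your specific system, and any change needs admin rights plus a reboot to take effect:

```python
# Rough sketch (Windows only, run elevated to write): the static pagefile
# setting lives in the PagingFiles multi-string under the Memory Management
# key; an entry like "C:\pagefile.sys 8192 8192" means min/max sizes in MB.
import winreg

KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as k:
    current, _ = winreg.QueryValueEx(k, "PagingFiles")
    print("Current PagingFiles entries:", current)

# Example only: pin an 8 GB static pagefile on a second drive. Adjust the
# drive letter and sizes to your own system before uncommenting.
#
# new_value = [r"D:\pagefile.sys 8192 8192"]
# with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY, 0, winreg.KEY_SET_VALUE) as k:
#     winreg.SetValueEx(k, "PagingFiles", 0, winreg.REG_MULTI_SZ, new_value)
```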
 
All from personal experience.

4GB of RAM and no pagefile is doable for an offline system/media player/watching DVD movies. Good for basic audio editing/light DAW work.
Not recommended for running VMs. Might struggle playing Blu-ray movies/4K video.

8GB of RAM and no pagefile is OK for an offline system/media player and just about enough to run VMs. Will struggle with online work (firewalls, A/V, and browser plugins) and watching 1080p livestreams.

16GB of RAM and no pagefile is recommended for online work (firewalls, A/V, and browser plugins) and watching 1080p livestreams.
 
Some games/apps require a pagefile to be in place, regardless of whether you have 512GB+ of RAM. Removing it can cause problems in that regard, but Windows will always keep a small reserve behind the scenes anyway, should one choose to disable it.

It's fruitless to even bother with it, as no gains (at least none you will notice) can be had from messing with it. The only good thing about manually setting it to a fixed size (which is what I do) is maybe reduced drive wear, since on automatic the pagefile will expand and contract as it needs, which may be bad for SSD-type drives.

I just set a static size based on what's recommended for my RAM for both max and min, and leave it be, on the fastest drive.
 
The pagefile never contracts (much like a database). To make it smaller, you have to disable it, reboot and create a smaller replacement file.
Fixed at the default 1.5x is the best way, since if you run out, Windows would have serious performance issues if it had to auto-grow the file.

Better to run out of memory than to sit there for 12-15 minutes waiting for Windows to barely get back to your prompt.
 
...I just set a static size based on what's recommended for my RAM for both max and min...
Fixed at the default 1.5x is the best way
Both of these comments are based on Windows XP. The formula changed again with Vista, and likely with every Windows after that, because these were never formulas meant for public consumption; rather, they are part of the algorithm the operating system uses to calculate how the automatic setting should initially configure the pagefile. This has been thoroughly documented, but end users keep ignoring it.

The modern support article from Microsoft (the first link I posted originally) no longer has such recommendations, as the general concept of the pagefile is that the more memory you have installed, the smaller the pagefile can be, and vice versa. This is backed up by testing as well: if you have 16 gigabytes of memory installed, the automatic pagefile will start out at about 2 gigabytes, not some magic multiple of your RAM.

Mark Russinovich says (second link I posted): "There’s no end of ridiculous advice out on the web and in the newsstand magazines that cover Windows, and even Microsoft has published misleading recommendations. Almost all the suggestions are based on multiplying RAM size by some factor, with common values being 1.2, 1.5 and 2. Now that you understand the role that the paging file plays in defining a system’s commit limit and how processes contribute to the commit charge, you’re well positioned to see how useless such formulas truly are."

If people want a modern formula for end users on Windows 10 and 11, I would say make your pagefile a static size that gives you at least 16 gigabytes of total memory split between your RAM and pagefile. In other words, if you have 4 gigabytes of RAM installed, then set the pagefile to a static 12 gigabytes. This formula makes sense because it targets the real-world scenarios that both Microsoft resources talk about testing for. The reality, though, is that we are on 64-bit now, RAM is very cheap, and adding more of it is objectively the best way to address the situation.
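Just to make that arithmetic concrete, here is a tiny sketch of the "top up to 16 gigabytes total" rule; the 16 GB target is my own suggestion and can be dropped to 8 for small-disk devices:

```python
# Tiny sketch of the "top up to 16 GB total" rule: given installed RAM,
# return the static pagefile size (in GB) needed to reach the target total.
def static_pagefile_gb(installed_ram_gb: float, target_total_gb: float = 16.0) -> float:
    return max(target_total_gb - installed_ram_gb, 0.0)

print(static_pagefile_gb(4))   # 12.0 -> matches the 4 GB RAM example above
print(static_pagefile_gb(8))   # 8.0
print(static_pagefile_gb(16))  # 0.0 -> enough RAM that disabling is an option
```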
 
...Windows will always keep a small reserve behind the scenes anyway, should one choose to disable it.
Source for this?

The pagefile never contracts (much like a database). To make it smaller, you have to disable it, reboot and create a smaller replacement file...
MacVap's post disagrees, so I went and tested it myself. I used Mark Russinovich's tool "NotMyFault", which creates an intentional memory leak. I had the pagefile set to automatic and it started out in the 2 gigabyte range, then I started the leak. I have 16 gigabytes of memory installed, and as soon as total consumption hit about 15 gigabytes the pagefile started growing, which is what Microsoft said would happen at 90% usage. I let the pagefile grow to 8 gigabytes before I stopped the leak, and the pagefile immediately reset to the original 2 gigabyte size without a reboot.

This also means the Microsoft pagefile white paper has not updated the information at the very end of the document, because it explicitly states the file can only grow up to a maximum of 4 gigabytes, but I just proved that wrong. Mark Russinovich also talks about the 4 gigabyte limit, and he wrote those articles during an XP-dominant era too, which explains why both sources mention this figure, but it's incorrect for today's Windows.
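If anyone wants to reproduce the grow-and-shrink behavior from the test above, here is a rough monitoring sketch that polls the stock Win32_PageFileUsage WMI class through PowerShell once a second while NotMyFault runs; all sizes are reported in megabytes, and the 5-minute loop length is arbitrary:

```python
# Sketch: poll pagefile allocation once a second (for ~5 minutes) while
# running NotMyFault, to watch it grow past its starting size and shrink
# back after the leak stops. Uses the stock Win32_PageFileUsage WMI class
# via PowerShell; all sizes are in MB.
import subprocess
import time

QUERY = ("Get-CimInstance Win32_PageFileUsage | ForEach-Object { "
         "'{0}  allocated={1}MB  used={2}MB  peak={3}MB' "
         "-f $_.Name, $_.AllocatedBaseSize, $_.CurrentUsage, $_.PeakUsage }")

for _ in range(300):
    subprocess.run(["powershell", "-NoProfile", "-Command", QUERY], check=True)
    time.sleep(1)
```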
 
A place to discuss the Windows pagefile. I'm opening up the discussion with a poll. Things that would be great to talk about are why people chose what they did, benchmarks, and white papers. Here is some official documentation to get us started:

https://learn.microsoft.com/en-us/windows/client-management/introduction-page-file

https://learn.microsoft.com/en-us/a.../pushing-the-limits-of-windows-virtual-memory

Please refrain from linking to regurgitated articles and/or places that aren't reputable. Let's keep links limited to people that have actually done testing or are considered authorities on the topic, such as Mark Russinovich in the second link I posted above.
Simple: don't disable the pagefile, it's bad advice. 99.9% of configurations need a pagefile. It's best to set it to a static value and leave it on the fastest drive.
 