Windows 10 LTSC 1809 - Optimize Gaming/Poweruser/Runtime Profile, retaining maximum compatibility. For x64/UEFI systems.

Status
Not open for further replies.

MT_

Active Member
Computer\HKEY_CURRENT_USER\System\GameConfigStore entries are a pain to set from a .reg file - importing them under either HKLM or HKCU doesn't stick.

But I found a workaround.
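For anyone hitting the same wall: one workaround (a sketch of a common approach, not necessarily the exact solution referred to above) is to load the Default user hive offline and write the values into it, so every newly created profile inherits them; for existing profiles the import still has to run in that user's own context. GameDVR_Enabled is just an illustrative value name:

```
reg load HKU\DefaultTemplate "C:\Users\Default\NTUSER.DAT"
reg add "HKU\DefaultTemplate\System\GameConfigStore" /v GameDVR_Enabled /t REG_DWORD /d 0 /f
reg unload HKU\DefaultTemplate
```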
 
Last edited:

Clanger

Well-Known Member
If you see any errors in my files let me know, I can redo them. Going to start converting my files to .bat tonight, that's hard on the eyes :(
 
  • Like
Reactions: MT_

MT_

Active Member
I was under the impression that you wanted to disable NTFS compression/encryption/last-access timestamps?

;LTSC-DISABLE-NTFS-COMPRESSION.reg
[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\FileSystem]
"NtfsDisableCompression"=dword:00000000

;LTSC-DISABLE-NTFS-ENCRYPTION.reg
[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\FileSystem]
"NtfsDisableEncryption"=dword:00000000

;LTSC-DISABLE-NTFS-LAST-ACCESS-TIME-STAMP.reg
[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\FileSystem]
"NtfsDisableLastAccessUpdate"=dword:00000000

I think you'll have to change them to dword:00000001 - the NtfsDisable* values disable the feature when set to 1, so 0 leaves it enabled.
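For reference, this is what the corrected file would look like (with the .reg header included so it imports cleanly):

```
Windows Registry Editor Version 5.00

;LTSC-DISABLE-NTFS-COMPRESSION.reg
[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\FileSystem]
"NtfsDisableCompression"=dword:00000001
```

The encryption and last-access files need the same 0 -> 1 flip.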
 

Clanger

Well-Known Member
tweaks not working where they once did, services restarting themselves :mad:, Defender wiping files it doesn't like, file association bugs, grrr :mad:
This is mighty appealing right now.
 
Last edited:

MT_

Active Member
Not having the problems here that you're describing. But if I could, I would still be on the earliest 150x LTSB. Sadly I have a newer Nvidia card, and AFAIK it's impossible to run it on older builds.

Either it's an Nvidia lockdown, or WDDM is no longer backward compatible.

No matter how appealing old Windows is :)
 

MT_

Active Member
I'd be tempted to create a build just for laughs, to see how neutered I can make it. But how far do you want to go? :p
 

XanderBaatz

New Member
Hey, nice what you're doing here. It's really solid.
You should set up a Discord server for your project, easier to get feedback and talk about general tweaks, optimizations, commands etc.

Also, wanted to mention something:

LTSC-DISABLE-NTFS-ENCRYPTION.reg
[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\FileSystem]
"NtfsDisableEncryption"=dword:00000000

This breaks certain Store applications, meaning they won't install.
Examples: Asphalt 9 and Forza Horizon 4.

I guess they need the encryption somehow?
I suggest you remove it, in case someone in the future wants to download bigger apps through the Store. Personally, I haven't seen any performance impact with NTFS encryption enabled. It's not like it's BitLocker. ;)
 

Clanger

Well-Known Member
but how far do you want to go? :p

My inner maniac says "to the bone baby", the realist says don't bother - stick with the configuration you've got and work with that; your project gets a lot of traffic as it is :). I was on XP beforehand and went straight to 7, so it was a helpful exercise to learn W7 and the modern Windows OS, and that has served me well.
 
Last edited:
  • Like
Reactions: MT_

MT_

Active Member
Hey, nice what you're doing here. It's really solid.
You should set up a Discord server for your project, easier to get feedback and talk about general tweaks, optimizations, commands etc.

Also, wanted to mention something:

LTSC-DISABLE-NTFS-ENCRYPTION.reg
[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\FileSystem]
"NtfsDisableEncryption"=dword:00000000

This breaks certain Store applications, meaning they won't install.
Examples: Asphalt 9 and Forza Horizon 4.

I guess they need the encryption somehow?
I suggest you remove it, in case someone in the future wants to download bigger apps through the Store. Personally, I haven't seen any performance impact with NTFS encryption enabled. It's not like it's BitLocker. ;)

Thanks! I only played one big game, SoT, a year back, and indeed it automatically encrypted the folder (super annoying).

I kind of assumed (and hoped) Windows was smart enough to accept a non-encrypted installation state :-(
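As an aside: a folder that EFS auto-encrypted can usually be decrypted in place with the built-in cipher tool, assuming you have write access to it (the path here is just an example):

```
cipher /d /s:"C:\Games\SomeGame"
```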
 

evolve

Member
Will do, and yes I've seen that video!

Mostly interested in his document. I must say it was merely to test, but HPET is slow and on later Intel boards even detrimental (like on my Z270G). So I tested his invariant TSC settings, and it resulted in way better mouse polling and brought back the old W7 0.500/1.000 timer values. From my former testing way back, W7 behaved best with in-engine FPS limiters, which also showed 0.500/1.000 ms (the FPS limiters fluctuated much less).

Everything feels incredibly snappy. I've turned off my ISA bus completely; pretty sure HPET falls under that too, though.
Hi MT, not sure if you've checked this guide or not. I can't confirm it myself, since he didn't talk about exact values, but I'll copy-paste it here:
https://github.com/CHEF-KOCH/GamingTweaks/blob/master/Myths/Known Myths.md
This is the section about HPET:
---------------------------------------------------------------------------------------
Disable High Precision Event Timer (HPET)

This is not needed after the Windows April Update (Build 1803); you can check the current status via bcdedit /enum. Changing the values (especially on newer Intel CPUs like the 9900K) can result in worse performance.
--------------------------------------------------------------------------------------
 
  • Like
Reactions: MT_

MT_

Active Member
Yeah, I know this guy's GitHub; it has a lot of useful things. Some might be hard to confirm, and there are other things I don't entirely agree with either.

My stuff in the profile is definitely not hard evidence, and I'm still in the process of testing a lot of it more in-depth.

But one thing is clear: timers affect in-game FPS limiters. And you'll mostly want to use those limiters for reasons way beyond the scope of this thread! Forcing HPET, on the other hand, can have detrimental effects on system-wide performance.

The fact that 180x+ builds now all use a 10 MHz synthetic QPC, with the claim that nothing has to (or can) be changed, doesn't mean the clocksource implementation behind it is no longer relevant. That's probably a flawed assumption - if I were to force HPET right now, behind this synthetic timer, with the HPET Intel platform quirk, I can bet things would quickly go downhill :p

An interesting read, perhaps.
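For anyone following along, the timer-related boot entries being discussed can be inspected and toggled with bcdedit from an elevated prompt (/deletevalue restores the default):

```
bcdedit /enum {current}
bcdedit /set useplatformtick yes
bcdedit /deletevalue useplatformclock
```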
 
Last edited:

evolve

Member
Yeah, I know this guy's GitHub; it has a lot of useful things. Some might be hard to confirm, and there are other things I don't entirely agree with either.

My stuff in the profile is definitely not hard evidence, and I'm still in the process of testing a lot of it more in-depth.

But one thing is clear: timers affect in-game FPS limiters. And you'll mostly want to use those limiters for reasons way beyond the scope of this thread! Forcing HPET, on the other hand, can have detrimental effects on system-wide performance.

The fact that 180x+ builds now all use a 10 MHz synthetic QPC, with the claim that nothing has to (or can) be changed, doesn't mean the clocksource implementation behind it is no longer relevant. That's probably a flawed assumption - if I were to force HPET right now, behind this synthetic timer, with the HPET Intel platform quirk, I can bet things would quickly go downhill :p

An interesting read, perhaps.
Yeah, I saw this before: https://www.overclockers.at/number-crunching/the-hpet-bug-what-it-is-and-what-it-isnt_251222/page_2
But I was too lazy to read it carefully. I think all he's saying is that the synthetic timer in newer builds doesn't mean worse performance, right?
--------------------------------------------------------------------------------------------------------------------------
- Power plan tweak: Set to 'Lock interrupt routing'. Really recommend using the Interrupt affinity tool to balance IRQs per device for the lowest jitter.
Do you know which devices would be better on which core with the Interrupt affinity tool, for example the GPU or the mouse?
-------------------------------------------------------------------------------------------------------------------------------------
I also saw this here: "Timer (Default is 0.496~, should bring back more stable frames with in-engine fps limiters, but I'll have to test)".
I tested some games; all of them changed it to 0.500 automatically while the game was running. Do I still need to set it manually?!
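As a side note on checking those numbers yourself: a quick, cross-platform way to see the effective timer granularity your process gets is to time many short sleeps (a rough sketch; on Windows, tools like Sysinternals ClockRes report the exact resolution):

```python
import time

def measure_tick(samples=200):
    """Estimate the effective sleep granularity by timing many short
    sleeps and averaging the observed wake-up interval."""
    deltas = []
    for _ in range(samples):
        t0 = time.perf_counter()
        time.sleep(0.001)  # request ~1 ms; the actual wake-up depends on the timer resolution
        deltas.append(time.perf_counter() - t0)
    return sum(deltas) / len(deltas)

if __name__ == "__main__":
    avg_s = measure_tick()
    print(f"average observed sleep for a 1 ms request: {avg_s * 1000:.3f} ms")
```

If the average comes back close to 1.0 ms, your process is getting a ~1 ms timer; noticeably larger averages suggest a coarser tick.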
 
Last edited:

MT_

Active Member
Well, I never felt a noticeable decrease in performance vs 1607.

As for the 0.500/1.000: yes, those values appear when using useplatformtick, and they seem to be more stable compared to the non-rounded numbers like 0.496, which are off. More like Windows 7 - that's the only OS where I got rounded numbers and had the least jitter.

Whether it's 0.500 or 1.000 or even 2.000 doesn't matter, as long as they show proper rounded values (ending in 0).

I can't make a recommendation for IRQ assignment, but maybe set priorities for yourself. I think mouse polling is really important, but the first core always has the highest jitter due to the kernel running on core 0, so I put usb on core 1 without sharing.

That's just my idea; let me know if you have better theories :p
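For context, the per-device assignment described above corresponds to the documented Affinity Policy registry values that the Interrupt Affinity tool writes under a device's instance key. A sketch with a hypothetical device path (DevicePolicy 4 = IrqPolicySpecifiedProcessors; AssignmentSetOverride is a CPU bitmask, so 02 = CPU 1 only):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\PCI\<device-instance-path>\Device Parameters\Interrupt Management\Affinity Policy]
"DevicePolicy"=dword:00000004
"AssignmentSetOverride"=hex:02
```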
 
Last edited:

evolve

Member
Well, I never felt a noticeable decrease in performance vs 1607.

As for the 0.500/1.000: yes, those values appear when using useplatformtick, and they seem to be more stable compared to the non-rounded numbers like 0.496, which are off. More like Windows 7 - that's the only OS where I got rounded numbers and had the least jitter.

Whether it's 0.500 or 1.000 or even 2.000 doesn't matter, as long as they show proper rounded values (ending in 0).

I can't make a recommendation for IRQ assignment, but maybe set priorities for yourself. I think mouse polling is really important, but the first core always has the highest jitter due to the kernel running on core 0, so I put usb on core 1 without sharing.

That's just my idea; let me know if you have better theories :p
Thanks, buddy.
Whether it's 0.500 or 1.000 or even 2.000 doesn't matter, as long as they show proper rounded values (ending in 0).
When I set useplatformtick to no, I got a strange number; if I remember right, the exact value was 0.5005 :D
I put usb on core 1 without sharing.
I don't know what "without sharing" means - does it mean core 1 only handles mouse interrupts, or that mouse interrupts only trigger on core 1?
highest jitter due to kernel running on core 0
Where does this jitter come from? Is it due to the multiple interrupts happening from the kernel on core 0, or because core 0 has more utilization?
 
Last edited:

MT_

Active Member
Hi!

The bcdedit entries may or may not work for you - you might have to experiment ;-)
Especially useplatformclock/tick; it could be highly platform dependent.

By "without sharing" I mean that you serve all USB controller interrupts on CPU 1 and all other hardware on CPUs 0, 2, 3 (the more cores, the more options possible).

CPU 0 probably has the most jitter because the kernel/drivers run on it, so also making it serve interrupts can really impair polling. A core can only do one thing at a time synchronously, so the more jobs, the more jitter/interference.

Red Hat also has some nice docs on the web about reducing latency/jitter; although they're Linux-based, the basics are similar.
 
Last edited:

evolve

Member
Hi!

The bcdedit entries may or may not work for you - you might have to experiment ;-)
Especially useplatformclock/tick; it could be highly platform dependent.

By "without sharing" I mean that you serve all USB controller interrupts on CPU 1 and all other hardware on CPUs 0, 2, 3 (the more cores, the more options possible).

CPU 0 probably has the most jitter because the kernel/drivers run on it, so also making it serve interrupts can really impair polling. A core can only do one thing at a time synchronously, so the more jobs, the more jitter/interference.

Red Hat also has some nice docs on the web about reducing latency/jitter; although they're Linux-based, the basics are similar.
Especially useplatformclock/tick; it could be highly platform dependent.
lol, I saw a guide that said to delete this value, and I did that :p
CPU 0 probably has the most jitter because the kernel/drivers run on it, so also making it serve interrupts can really impair polling. A core can only do one thing at a time synchronously, so the more jobs, the more jitter/interference.
Since you asked about ideas, I want to add something (maybe it's not the case here, btw):
I'm not sure whether this jitter increases with higher core utilization, but I've seen many benchmarks where mid- to high-end CPUs see a lot of core utilization in games - even a CPU with 12 logical cores can hit around 70% with the frame rate uncapped - and the cores get utilized by the game more or less randomly. In that case, pinning interrupts to a specific core might not be beneficial when that core also has to handle other work for the game at the same time; I mean, it might affect those graphs in MouseTester in this situation? (Sorry if this is wrong.)

Also, I found this guy's article before. Although it's a bit old, it has a section titled "Fast timers waste performance"; I'm not sure what he means by it, but for games I saw a video where reducing the Windows timer interval increased performance in games.

 
Last edited:

MT_

Active Member
Dynamic timers vs static fixed-Hz timers both have benefits and drawbacks. In Linux land, a dynamic-tick kernel has higher throughput but higher latency/jitter vs, say, a 1000 Hz tick kernel. It may well be true that disabling dynamic ticks also reduces FPS throughput a tiny bit, but my tests showed this to be negligible (pretty much within margin of error), while frametimes and the 0.1%/1% lows slightly improved.

With a dynamic-tick kernel, the kernel waits until it has to do something and often coalesces various tasks at once, but this can also mean increased load spikes at those intervals; a fixed-interval tick kernel just does whatever has to be done as soon as possible, so the work spreads out more evenly.

At least, that's my theory on it.
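That throughput-vs-latency trade-off can be illustrated with a toy simulation (entirely a sketch of the idea, not a model of any real scheduler): work items arrive at random ticks, a "fixed tick" kernel drains the queue every tick, and a "coalescing" kernel batches service points:

```python
import random
import statistics

def simulate(coalesce, ticks=10000, seed=1):
    """Toy model: work items arrive at random ticks; the 'kernel' services
    the queue every `coalesce` ticks. coalesce=1 models a fixed-tick kernel,
    larger values model a coalescing (dynamic-tick-like) kernel.
    Returns (mean wait, max wait) in ticks per item."""
    rng = random.Random(seed)
    pending, waits = [], []
    for t in range(ticks):
        if rng.random() < 0.3:       # a new work item arrives this tick
            pending.append(t)
        if t % coalesce == 0:        # service point: drain the queue
            waits.extend(t - arrival for arrival in pending)
            pending.clear()
    return statistics.mean(waits), max(waits)

fixed = simulate(coalesce=1)     # serviced in the same tick it arrives
batched = simulate(coalesce=8)   # fewer wakeups, but items queue up
print("fixed tick (mean, max wait):", fixed)
print("coalescing (mean, max wait):", batched)
```

With coalesce=1 every item is serviced in the tick it arrives, so waiting time is zero; with coalesce=8 wakeups are rarer but items can sit in the queue for up to 7 ticks, which is the "higher latency/jitter for higher throughput" shape described above.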
 
Last edited: