Discussion: NVIDIA Display Driver (nvlddmkm.sys) DPC Latency

I remember old discussions/threads about disabling USB on XP, which used about 20% of system resources.
I remember tweaking my Win98 box back in the day.

I never worry about mouse latency. I remember a small registry tweak that cleaned up my mouse movements more than any mouse truly could. I'll have to dig it up.
 
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Input\Settings\ControllerProcessor\CursorSpeed

Change the value there to this:

"CursorUpdateInterval"=dword:00000001

If it helps even one person, then I did my job. For the rest of you it didn't help, oh well, haha.
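If anyone wants to apply that as a .reg file instead of editing by hand (back up the key first; this assumes the key path exists exactly as quoted above), the file would look like:

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Input\Settings\ControllerProcessor\CursorSpeed]
"CursorUpdateInterval"=dword:00000001
```

Double-clicking the file merges it; the change will likely need a reboot or re-login to take effect.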
 
Hey all,
I just found this thread, as I had DPC issues with the NVIDIA driver.
I was finally able to figure it out after trying the different tips and tricks mentioned here.

Long story short, what fixed it for me:

DISABLE the DisplayPort SOUND INTERFACES.

I'm using a USB sound interface (where the sound pops occurred), and every monitor connected via DisplayPort shows up as an additional separate device in the list of sound interfaces.

As soon as I disabled those additional "monitor" interfaces, all the latency from the NVIDIA driver was gone.

It seems the NVIDIA driver tries to push sound to them even when the sound routing in Windows suggests they are not used at all.

This change took the average nvlddmkm.sys latency from 3 ms down to 0.3 ms on my machine.

Hope this helps some other person figure this out.
 
DISABLE the DisplayPort SOUND INTERFACES.
It sounds promising, but it didn't work for me. A lot of NVIDIA users don't install the audio components during the graphics driver installation, so this fix wouldn't help those people either. I'm not saying it didn't fix your issue, but I don't think it's the culprit for this thread.

I've always had my BIOS HDMI video/sound setting on video only, too, so the sound capability is disabled at the BIOS level (I use an HDMI connection to my graphics card). There are a lot of different ways to disable GPU sound, though, so the method might matter. I went ahead and disabled the NVIDIA audio through Device Manager, rebooted, and the issue still persisted.

Something I did find interesting, though: I also went into the NVIDIA Control Panel, and under "Set up digital audio" it lists "HDMI" and shows my monitor (which doesn't have built-in speakers). When I change that setting to "Turn off audio" and click "Apply", it doesn't actually set any registry keys like it's supposed to. It pretends to make the change, then you reboot and it comes right back; it constantly forces the audio back on. This looks like another bug that also needs fixing.
 
I only use HDMI audio because it goes through my Denon receiver for both audio and video. With that working I don't have any issues, so one solution works for some and not for others.

It's just like with people: we're all different, and something that works for one may not work for others. But that's what posting is for, and we can only hope it helps a few.
 
Hey everyone, I just wanted to post this here to help you guys fix your issue.
I was very determined to fix these DPC latency spikes in the NVIDIA drivers, and I can safely say that it is possible to do so.
With that said, I wanted to explain it in a way that doesn't sound quite like what everyone else has been saying.
Note: I did not use any driver-trimming software. This is the latest driver with all the bloatware installed.
The main issue is this:

The NVIDIA driver is just extremely sensitive to latency at any point in the chain. So the way you reduce its DPC latency spikes is to reduce the latency spikes of everything else underneath it that is reporting.
What I did was disable the NVIDIA driver under Display adapters in Device Manager, then run LatencyMon to find all the other, smaller drivers underneath that were having issues.

The main thing I found worked best is to move literally everything off one single core and let only the display drivers have that core.
Then take the SSD, interrupt controller, USB hubs, BIOS drivers, and so on: for ANYTHING that shows up in Device Manager, find a way to spread the workload across the remaining cores, as evenly as possible.

Once you have done that, rerun LatencyMon and see if your latency spikes are now higher on one specific core.
If they are, try moving the devices and drivers around until you find a combination that works.
Once that's done, THEN RE-ENABLE the display driver and try again. Basically, find out how the driver interacts with each core and with the other drivers in the system: which ones don't like to be together, and which ones like to be far apart.


For me, putting my Wi-Fi, SSD, interrupt controller, or USB on any shared core would cause nvlddmkm to spike into the hundreds of microseconds.
But once I started slowly chipping away at the beast, I eventually found a combination that worked, and these were the results: DPC latency under 100 µs, and as I go further along I'm starting to see only 20-40 µs rather than 70-80.
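The core-spreading described above comes down to building an affinity bitmask per device, where bit N set means "allowed on core N". Here's a small sketch in plain Python (the device names and core assignments are just an illustrative plan, not a recommendation) that turns such a plan into the hex masks that interrupt-affinity tools typically ask for:

```python
# Build interrupt-affinity bitmasks: bit N set => device may run on core N.
def affinity_mask(cores):
    """Combine a list of core indices into a single bitmask."""
    mask = 0
    for core in cores:
        mask |= 1 << core
    return mask

# Hypothetical plan: reserve core 0 for the display driver,
# spread the other devices over the remaining cores.
plan = {
    "nvlddmkm (GPU)":     [0],
    "NVMe SSD":           [2],
    "USB controller":     [3],
    "Ethernet/Wi-Fi NIC": [4, 5],
}

for device, cores in plan.items():
    print(f"{device:20s} -> mask 0x{affinity_mask(cores):X}")
```

The printed hex values are what you would enter as the mask for each device.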
 

Attachments

  • DPC WITH DRIVER.PNG (31 KB)
The next thing to note is this screenshot of the CPU section. Notice how my DPC counts are now spread fairly evenly across most of my cores, and the latency on each core is very small. This is the indication that most of my devices like where they are, and the only ones that still complain are most likely the largest objects in the entire chain, like:
The Windows kernel
The storage/SSD drivers
The Ethernet/network drivers
 

Attachments

  • DPC PER CORE.PNG (25.1 KB)
Here is a 30-minute run I captured as well, along with the measured averages. Notice how my average is 0.4 µs, which is around optimal.
 

Attachments

  • PER CORE RUN AMAZING.PNG (34.2 KB)
  • 30 MINUTE RUN.PNG (36.7 KB)
  • STATS PAGE OF GREAT RUN.PNG (78.4 KB)
Hey, how do I go about setting drivers to run on specific CPU cores?
 
Hey, how do I go about setting drivers to run on specific CPU cores?
Microsoft's very own Interrupt Affinity Policy configuration tool.
You want to set a mask and then choose a core. You can choose to restart the device then and there, or you can reboot.
Don't worry about the errors that pop up; they're just going to happen on almost any device.
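For reference, tools like this generally work by writing the documented interrupt-affinity values under the device's instance key in the registry. A sketch of what that looks like (the instance path here is a placeholder; your device's real path will differ, and editing this by hand carries the usual registry-editing risks):

```reg
Windows Registry Editor Version 5.00

; Placeholder instance path - find your device's real one in Device Manager
; (Details tab -> "Device instance path").
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\PCI\VEN_XXXX&DEV_XXXX\0000\Device Parameters\Interrupt Management\Affinity Policy]
; 4 = IrqPolicySpecifiedProcessors (use only the cores in the mask below)
"DevicePolicy"=dword:00000004
; Bitmask of allowed cores; 02 = core 1 only
"AssignmentSetOverride"=hex:02
```

The mask works the same way as the tool's UI: bit N corresponds to core N.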

You can verify it's enabled by running LatencyMon and watching the counts.

I posted a video about it on my personal channel if you want a guide with more verbal examples.

Mod note - added link for Interrupt affinity policy configuration tool on TechPowerUp
 
I'm watching your video now. Can I reach out to you here again if I have any follow-up questions? I'd appreciate any further insight if I find myself needing it.
 
One important thing I noticed is that nvlddmkm will still pop up on core 0 even though I have moved my entire Device Manager off core 0.
This made me wonder whether there are other parts of the driver that are splitting up and being fragmented, or whether this is the IRQ sharing everyone was mentioning.
 
I got to thinking: we have tried tweaking Windows, and a lot of people still have issues, so if it isn't Windows it must be something else. Computer parts. Now hear me out: everyone has a power supply, and power delivery differs a lot from computer to computer.

What I found out after some research is VRM frequency. The voltage regulator module's switching frequency varies a lot between motherboards; it can range from 200 kHz to 600 kHz (maybe even higher). In simple terms, VRMs are composed of "phases", and each phase is composed of a capacitor, a choke, and a MOSFET.

Capacitors are used to store small amounts of electricity, while a choke is used to filter out, or "choke", certain frequencies. More phases means your VRM has more steps with which to clean and regulate power before it's delivered to your CPU, which has a direct impact on the CPU's ability to maintain stable high clocks.

Even if you aren't pushing for overclocks, the VRM can still affect your motherboard's ability to run at its regular rated speeds. Raising the VRM switching frequency lowers efficiency and creates a bit more heat, but improves power delivery overall.

Before this gets to be a big post: Buildzoid went into depth on this if you want more info. Now, getting to the gritty of it: I increased mine to 400 kHz (the max for my system) and it has improved my system overall. I checked LatencyMon three times and it improved my numbers too (not by a lot, since mine were already lower than most, but still an improvement).

Will this help anyone else? I hope so!
 
I've been struggling with these nvlddmkm spikes since 2018.
My system is heavily tweaked and has 5-8 µs average latency with the NVIDIA GPU device disabled.
With the GPU driver enabled I have 10-14 µs, with spikes up to 274 µs when the GPU switches power states from idle (opening a video, Steam, a game, starting a recording, etc.). I have tried everything written in this topic and in the other big topics about this problem. Unfortunately, nothing helped.
I've tried over 20 driver versions, including the famous 441.14 with DisableWriteCombining (actually the worst: spikes up to 890 µs).
I've tried MSI mode, the interrupt tool from Microsoft, different Windows builds and different hardware (an Intel i7-8700K platform, an AMD 5600X platform, a GTX 1060 3 GB and an RTX 2060 6 GB), different settings and overclocks in the BIOS, including disabling all the features for CPU and GPU.
Nothing helped; it's just a driver-related issue. NVIDIA messed this one up.
I even tried tweaking a friend's rig with a 5800X3D and an RTX 4080 on the latest drivers: the same 200-300 µs spike when changing P-state.
Locking the P2 state in Profile Inspector won't help either.

In my opinion there are only two possible ways to solve this issue; otherwise just ignore it, since the spike mostly happens while alt-tabbing or launching a game or other software:

1: A modified driver (I have no idea who would do that).
2: Locking the GPU P-state (someone somewhere told me it can be done by setting "Prefer maximum performance" in NVCP on dwm.exe and explorer.exe individually).

I had a chat with someone who tweaks competitive e-sports rigs at tournaments, and he said that a spike of up to 400 µs is completely normal, since it can't even be considered a stutter or freeze; it's just one skipped frame, and moreover the probability of it happening in-game is negligible.

So if you have spikes like I do (up to 280 µs), just forget about it. There is nothing we can do; it's clearly an NVIDIA problem.
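To put that "just one skipped frame" argument in numbers, here is a quick back-of-the-envelope check in plain Python, comparing a 280 µs spike against a single frame's time budget at common refresh rates:

```python
# Compare a DPC spike against per-frame time budgets at common refresh rates.
SPIKE_US = 280  # worst-case spike from the post, in microseconds

for hz in (60, 144, 240):
    frame_budget_us = 1_000_000 / hz    # one frame's budget in microseconds
    share = SPIKE_US / frame_budget_us  # fraction of the frame consumed
    print(f"{hz:3d} Hz: frame budget {frame_budget_us:7.1f} us, "
          f"spike uses {share:.1%} of one frame")
```

Even at 240 Hz the spike eats well under a tenth of a single frame's budget, which is why it shows up as at most one skipped frame rather than a visible stutter.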
 
I'm sure it's not an NVIDIA problem anymore, since people have the same driver and some are perfectly fine whereas some are not. This is why I came to the conclusion that it comes down to power delivery.

The only way I could really test this, besides my own conclusions, would be a duplicate system with the exact same specs, everything identical except a different power supply.
 