When "GlassWire Control Service" is running during a fast download, the DPC audio latency skyrockets and makes the PC nearly unusable, to the point that the entire system, including the mouse cursor, freezes (incognito mode doesn't help)

@Ken_GlassWire
The approach taken in this topic needs to be more disciplined to correctly diagnose the issue on @thinking’s system.

High DPC times are a performance issue where a CPU is unable to process interrupts in a timely manner.
Therefore you can’t be sure of diagnosing or duplicating the cause without, at minimum, the relevant CPU configuration details and performance stats. Personally, I’d also want to know the details for the system board / chipset and NIC.
This all becomes more important the more the system and its components are loaded. For example, is the NIC using interrupt moderation or, more importantly, is it not?

@Thinking, you’d help by providing the GlassWire team with the monitoring reports from LatencyMon or whatever else you are using.

While GlassWire may be implicated, it is not a good idea to ignore an identified issue. ndis.sys has high DPC time, so that should be diagnosed; Realtek drivers, for example, are notorious for problems. ndis.sys is the network driver, so why would you not look at it when you are stress-testing network throughput? See this example of high DPC times for ndis.sys.

1 Like

For anyone who is interested in these issues of interrupt processing delays (ISR, DPC latency), there is an easy to read article from the NT Insider, a journal about Windows system programming: Windows and Real-Time

1 Like

I run LatencyMon occasionally to test my system for real-time audio. DPCs and ISRs can affect any program running on the system, not just audio, but real-time audio is the most sensitive. Most other applications can buffer data and slow down, and otherwise weather typical delays fine.

So far I have seen no indication on my system of the effects described by the OP in response to the GW Control Service. With his system freezing up, there is apparently a serious conflict going on somewhere in the PC ecosystem.

LatencyMon provides a nice summary report of the DPC and ISR usage for all drivers observed during the monitoring period. Anything with excessively high usage needs to be investigated.

Here is a summary from the software developer that explains a few details (just replace audio program with a generic placeholder, and don’t get hung up on audio):

https://www.resplendence.com/latencymon

About DPCs and ISRs

The Windows thread dispatcher (also known as the scheduler), which is part of the kernel, executes threads based on a priority scheme. Threads with higher priority will be given a longer execution time (also known as a quantum or time slice) than threads with a lower priority. However, the kernel also knows other types of units of execution known as interrupt service routines (ISRs). Devices connected to the system may interrupt on a connected CPU and cause their interrupt service routines to execute. An interrupt can occur on the same processor that an audio program is running on. Any thread that was running on the processor on which an interrupt occurred will be temporarily halted regardless of its priority. The interrupt service routine (ISR) is executed and may schedule a DPC (Deferred Procedure Call) to offload an amount of work. The DPC will most likely run immediately on the same processor, which means the audio application will halt until both the ISR and the DPC routines have finished execution. That is because ISRs and DPCs run at elevated IRQL, which means they cannot be preempted by the thread dispatcher (scheduler). Therefore, to guarantee responsiveness of the system, ISR and DPC routines should execute as fast as possible. Guidelines say that they should not spend more than 100 µs of execution time; however, this target is often not met due to hardware factors beyond the control of the driver developer. If execution time gets too high, the audio program may be unable to deliver audio buffers to the hardware in a timely manner.
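To make the arithmetic in that quote concrete, here is a minimal sketch. The buffer size and sample rate are my own illustrative assumptions (a typical low-latency audio setting), not values from LatencyMon; the two DPC durations are the highest ones reported later in this thread:

```python
# Rough sketch: how a long DPC stall can starve a real-time audio buffer.
# Buffer/sample-rate values are illustrative assumptions, not LatencyMon data.

def buffer_headroom_us(buffer_frames: int, sample_rate_hz: int) -> float:
    """Time the audio hardware can play from one buffer before a refill is due."""
    return buffer_frames / sample_rate_hz * 1_000_000

def will_underrun(dpc_latency_us: float, buffer_frames: int, sample_rate_hz: int) -> bool:
    """If the CPU is stalled in ISRs/DPCs longer than the buffer lasts, audio drops out."""
    return dpc_latency_us >= buffer_headroom_us(buffer_frames, sample_rate_hz)

# A small 64-frame buffer at 48 kHz lasts ~1333 µs.
print(round(buffer_headroom_us(64, 48000)))   # → 1333
print(will_underrun(473.7, 64, 48000))        # 473.7 µs DPC fits inside it → False
print(will_underrun(1583.5, 64, 48000))       # 1583.5 µs DPC overruns it → True
```

This is why the 100 µs guideline exists: it leaves headroom even for very small audio buffers.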

2 Likes

To show the reporting that is available, I ran Resplendence Software’s LatencyMon version 6.71 to monitor a speed test on my 5 year old gaming laptop. I found no concern over GlassWire but I will also disable GlassWire and run the speed test for comparison in the next post.

Incidentally, I’ve used several Resplendence tools and they’ve all been very good to excellent.

My network speed test

I ran my ISP’s version of OOKLA SpeedTest Custom over my home Gigabit LAN. The maximum theoretical throughput on this cable (HFC) plan is 900/100 Mbps and normally we’d max out at 700/100. This test was during the middle of a business day so download speed was only 500 Mbps. My ISP rate limits the upload speed to 100 Mbps which is why it barely ever changes on speed tests.
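As a side note on the units (standard conversions only, nothing GlassWire-specific): link speeds are quoted in megabits per second while file sizes are usually in bytes, so a quick sketch helps translate figures like the 500 Mbps above:

```python
# Standard unit conversions for quoted link speeds (decimal units: 1 GB = 8000 Mbit).

def mbps_to_mb_per_s(mbps: float) -> float:
    """Megabits per second to megabytes per second (8 bits per byte)."""
    return mbps / 8

def download_seconds(size_gb: float, mbps: float) -> float:
    """Ideal transfer time for a file of size_gb gigabytes at a given link speed."""
    return size_gb * 8000 / mbps

print(mbps_to_mb_per_s(500))     # → 62.5 MB/s at the measured 500 Mbps
print(download_seconds(1, 500))  # → 16.0 s for a 1 GB file
```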

LatencyMon report tabs

Main

Stats

LatencyMon indicates that real-time audio would probably drop out during the network speed test.
Note the warning that the CPU wasn’t running full throttle. Any improvement in CPU throughput should lower the DPC latency.

The NDIS driver is servicing the network speed test so it is no surprise to see it have the highest ISR and DPC times.

Firefox is running the speed test so it is no surprise to see it generates the most hard page faults:

CPU 0 is the only CPU of concern because it is running the network driver. See the CPUs tab below for more detail.

Processes

Hard page faults occur when data expected to be in memory is not there, so it has to be retrieved from the HDD or SSD. As expected, they occurred for Firefox running the speed test and for LatencyMon doing the monitoring.
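As a toy illustration of that mechanism (my own example, not taken from the report): the first touch of a memory-mapped file has to be resolved by reading the page in from disk, which is exactly a hard page fault when the page isn't already cached:

```python
# Touching a memory-mapped file for the first time triggers a page fault;
# if the page isn't in the OS cache, it is a *hard* fault served from disk.
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096 * 4)   # four pages backed by a file on disk

with open(path, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
    first_byte = m[0]             # first touch: the OS pages the data in here
    print(first_byte)             # → 0
```

While the fault is being resolved, the faulting thread is blocked, which is why LatencyMon flags hard page faults in audio-producing processes.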

Drivers

The drivers stats are sorted by the highest reported DPC execution time. Note that this report displays in milliseconds instead of the microseconds used in the Main tab.

The GlassWire driver stats show that DPC latency is not being directly produced by this driver:

CPUs

CPU 0 is the CPU being loaded by the network speed test. It has more than 5 times the CPU time of all the other CPUs.

GlassWire

The GlassWire UI was not running during the speed test. The screenshot mainly shows the speed test throughput.

1 Like

Remah,

You made me jealous, would be great to have these speeds…

1 Like

I repeated the speed test without the GlassWire driver running. As expected, when Windows is not gathering data for GlassWire, performance is much better, just as the original post reported.

The question is whether it is possible to get better performance from network monitoring. So there are further scenarios I want to compare for my own satisfaction. I don’t plan to provide all the screenshots unless I find something significant:

  • GlassWire incognito mode.
  • GlassWire running but the various threat protection drivers disabled (Windows Defender Firewall, Windows Defender Antivirus, Windows Defender Threat Protection)
  • GlassWire disabled but running another network monitor.

Speed Test with GlassWire disabled

The speed test throughput is much higher. 682 Mbps versus 532 Mbps with GlassWire.

LOL, I don’t even need that speed but my sons like it for their gaming. :racing_car:

LatencyMon with GlassWire disabled

Main

No DPC latency issues without GlassWire running.

Stats




Processes

Drivers

CPUs

1 Like

I tried two other network monitoring options without GlassWire, and repeated the GlassWire running and not-running scenarios since higher throughput was now being achieved:

  1. GlassWire disabled and Windows Resource Monitor running
    :arrow_right: 738/101
  2. GlassWire disabled and Windows Message Analyzer running
    :arrow_right: 797/101
  3. GlassWire disabled
    :arrow_right: 735/100
  4. GlassWire enabled
    :arrow_right: 607/100

The only scenario that clearly slows throughput is when GlassWire is running.
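The four download figures above can be normalized against the plain "GlassWire disabled" run; a small sketch using the numbers exactly as listed:

```python
# Download throughput from the four scenarios above, compared to the
# "GlassWire disabled" baseline (scenario 3).
results = {
    "Resource Monitor running": 738,
    "Message Analyzer running": 797,
    "GlassWire disabled": 735,
    "GlassWire enabled": 607,
}
baseline = results["GlassWire disabled"]
for name, mbps in results.items():
    delta = (mbps - baseline) / baseline * 100
    print(f"{name}: {mbps} Mbps ({delta:+.1f}%)")
# Only the "GlassWire enabled" run comes out clearly below baseline (about -17%);
# the other monitors land at or above it.
```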

Apologies for the long post. :blush:

Scenario 1 - Speed test with GlassWire disabled and Windows Resource Monitor running

Speed test

Higher throughput, maybe because lots of parents have got off their computers and gone to get their kids from school!?

Windows Resource Monitor running

LatencyMon

Text report

CONCLUSION


Your system appears to be suitable for handling real-time audio and other tasks without dropouts.
LatencyMon has been analyzing your system for 0:00:50 (h:mm:ss) on all processors.


SYSTEM INFORMATION


Computer name: ME08
OS version: Windows 10 , 10.0, version 1903, build: 18362 (x64)
Hardware: Alienware 17, Alienware, 0MPYM4
CPU: GenuineIntel Intel® Core™ i7-4710MQ CPU @ 2.50GHz
Logical processors: 8
Processor groups: 1
RAM: 16265 MB total


CPU SPEED


Reported CPU speed: 2494 MHz

Note: reported execution times may be calculated based on a fixed reported CPU speed. Disable variable speed settings like Intel Speed Step and AMD Cool N Quiet in the BIOS setup for more accurate results.

WARNING: the CPU speed that was measured is only a fraction of the CPU speed reported. Your CPUs may be throttled back due to variable speed settings and thermal issues. It is suggested that you run a utility which reports your actual CPU frequency and temperature.


MEASURED INTERRUPT TO USER PROCESS LATENCIES


The interrupt to process latency reflects the measured interval that a usermode process needed to respond to a hardware request from the moment the interrupt service routine started execution. This includes the scheduling and execution of a DPC routine, the signaling of an event and the waking up of a usermode thread from an idle wait state in response to that event.

Highest measured interrupt to process latency (µs): 298.0
Average measured interrupt to process latency (µs): 18.499961

Highest measured interrupt to DPC latency (µs): 273.60
Average measured interrupt to DPC latency (µs): 9.510706


REPORTED ISRs


Interrupt service routines are routines installed by the OS and device drivers that execute in response to a hardware interrupt signal.

Highest ISR routine execution time (µs): 94.274258
Driver with highest ISR routine execution time: ndis.sys - Network Driver Interface Specification (NDIS), Microsoft Corporation

Highest reported total ISR routine time (%): 0.451270
Driver with highest ISR total time: ndis.sys - Network Driver Interface Specification (NDIS), Microsoft Corporation

Total time spent in ISRs (%) 0.484532

ISR count (execution time <250 µs): 529091
ISR count (execution time 250-500 µs): 0
ISR count (execution time 500-999 µs): 0
ISR count (execution time 1000-1999 µs): 0
ISR count (execution time 2000-3999 µs): 0
ISR count (execution time >=4000 µs): 0


REPORTED DPCs


DPC routines are part of the interrupt servicing dispatch mechanism and disable the possibility for a process to utilize the CPU while it is interrupted until the DPC has finished execution.

Highest DPC routine execution time (µs): 473.726945
Driver with highest DPC routine execution time: ndis.sys - Network Driver Interface Specification (NDIS), Microsoft Corporation

Highest reported total DPC routine time (%): 2.498758
Driver with highest DPC total execution time: ndis.sys - Network Driver Interface Specification (NDIS), Microsoft Corporation

Total time spent in DPCs (%) 2.567489

DPC count (execution time <250 µs): 473790
DPC count (execution time 250-500 µs): 0
DPC count (execution time 500-999 µs): 22
DPC count (execution time 1000-1999 µs): 0
DPC count (execution time 2000-3999 µs): 0
DPC count (execution time >=4000 µs): 0


REPORTED HARD PAGEFAULTS


Hard pagefaults are events that get triggered by making use of virtual memory that is not resident in RAM but backed by a memory mapped file on disk. The process of resolving the hard pagefault requires reading in the memory from disk while the process is interrupted and blocked from execution.

NOTE: some processes were hit by hard pagefaults. If these were programs producing audio, they are likely to interrupt the audio stream resulting in dropouts, clicks and pops. Check the Processes tab to see which programs were hit.

Process with highest pagefault count: firefox.exe

Total number of hard pagefaults 8
Hard pagefault count of hardest hit process: 8
Number of processes hit: 1


PER CPU DATA


CPU 0 Interrupt cycle time (s): 13.256916
CPU 0 ISR highest execution time (µs): 87.053729
CPU 0 ISR total execution time (s): 1.935567
CPU 0 ISR count: 528699
CPU 0 DPC highest execution time (µs): 473.726945
CPU 0 DPC total execution time (s): 10.162841
CPU 0 DPC count: 459548


CPU 1 Interrupt cycle time (s): 0.464824
CPU 1 ISR highest execution time (µs): 14.535686
CPU 1 ISR total execution time (s): 0.000033
CPU 1 ISR count: 6
CPU 1 DPC highest execution time (µs): 215.061748
CPU 1 DPC total execution time (s): 0.036412
CPU 1 DPC count: 3134


CPU 2 Interrupt cycle time (s): 0.296266
CPU 2 ISR highest execution time (µs): 94.274258
CPU 2 ISR total execution time (s): 0.003011
CPU 2 ISR count: 199
CPU 2 DPC highest execution time (µs): 167.391740
CPU 2 DPC total execution time (s): 0.036337
CPU 2 DPC count: 5096


CPU 3 Interrupt cycle time (s): 0.333218
CPU 3 ISR highest execution time (µs): 51.784282
CPU 3 ISR total execution time (s): 0.001322
CPU 3 ISR count: 180
CPU 3 DPC highest execution time (µs): 110.810746
CPU 3 DPC total execution time (s): 0.006164
CPU 3 DPC count: 973


CPU 4 Interrupt cycle time (s): 0.244039
CPU 4 ISR highest execution time (µs): 3.528468
CPU 4 ISR total execution time (s): 0.000004
CPU 4 ISR count: 1
CPU 4 DPC highest execution time (µs): 90.807137
CPU 4 DPC total execution time (s): 0.01470
CPU 4 DPC count: 2046


CPU 5 Interrupt cycle time (s): 0.180139
CPU 5 ISR highest execution time (µs): 0.0
CPU 5 ISR total execution time (s): 0.0
CPU 5 ISR count: 0
CPU 5 DPC highest execution time (µs): 68.181636
CPU 5 DPC total execution time (s): 0.004782
CPU 5 DPC count: 604


CPU 6 Interrupt cycle time (s): 0.222445
CPU 6 ISR highest execution time (µs): 4.977145
CPU 6 ISR total execution time (s): 0.000012
CPU 6 ISR count: 6
CPU 6 DPC highest execution time (µs): 92.115878
CPU 6 DPC total execution time (s): 0.012226
CPU 6 DPC count: 1589


CPU 7 Interrupt cycle time (s): 0.183152
CPU 7 ISR highest execution time (µs): 0.0
CPU 7 ISR total execution time (s): 0.0
CPU 7 ISR count: 0
CPU 7 DPC highest execution time (µs): 68.854050
CPU 7 DPC total execution time (s): 0.006148
CPU 7 DPC count: 822


Drivers

Scenario 2 - Speed test with GlassWire disabled and Windows Message Analyzer running

Installed 64-bit version 1.4 build 4.0.8112.0
No optimization for data capture.
Run as administrator.
Select File | Favorite scenarios | Local network interfaces

Windows Message Analyzer

Speed test

LatencyMon

Main

Text report

CONCLUSION


Your system seems to be having difficulty handling real-time audio and other tasks. You may experience drop outs, clicks or pops due to buffer underruns. One or more DPC routines that belong to a driver running in your system appear to be executing for too long. One problem may be related to power management, disable CPU throttling settings in Control Panel and BIOS setup. Check for BIOS updates.
LatencyMon has been analyzing your system for 0:00:53 (h:mm:ss) on all processors.


SYSTEM INFORMATION


Computer name: ME08
OS version: Windows 10 , 10.0, version 1903, build: 18362 (x64)
Hardware: Alienware 17, Alienware, 0MPYM4
CPU: GenuineIntel Intel® Core™ i7-4710MQ CPU @ 2.50GHz
Logical processors: 8
Processor groups: 1
RAM: 16265 MB total


CPU SPEED


Reported CPU speed: 2494 MHz

Note: reported execution times may be calculated based on a fixed reported CPU speed. Disable variable speed settings like Intel Speed Step and AMD Cool N Quiet in the BIOS setup for more accurate results.

WARNING: the CPU speed that was measured is only a fraction of the CPU speed reported. Your CPUs may be throttled back due to variable speed settings and thermal issues. It is suggested that you run a utility which reports your actual CPU frequency and temperature.


MEASURED INTERRUPT TO USER PROCESS LATENCIES


The interrupt to process latency reflects the measured interval that a usermode process needed to respond to a hardware request from the moment the interrupt service routine started execution. This includes the scheduling and execution of a DPC routine, the signaling of an event and the waking up of a usermode thread from an idle wait state in response to that event.

Highest measured interrupt to process latency (µs): 1504.90
Average measured interrupt to process latency (µs): 19.472324

Highest measured interrupt to DPC latency (µs): 1494.90
Average measured interrupt to DPC latency (µs): 13.693991


REPORTED ISRs


Interrupt service routines are routines installed by the OS and device drivers that execute in response to a hardware interrupt signal.

Highest ISR routine execution time (µs): 113.959904
Driver with highest ISR routine execution time: ndis.sys - Network Driver Interface Specification (NDIS), Microsoft Corporation

Highest reported total ISR routine time (%): 0.306606
Driver with highest ISR total time: ndis.sys - Network Driver Interface Specification (NDIS), Microsoft Corporation

Total time spent in ISRs (%) 0.331768

ISR count (execution time <250 µs): 400533
ISR count (execution time 250-500 µs): 0
ISR count (execution time 500-999 µs): 0
ISR count (execution time 1000-1999 µs): 0
ISR count (execution time 2000-3999 µs): 0
ISR count (execution time >=4000 µs): 0


REPORTED DPCs


DPC routines are part of the interrupt servicing dispatch mechanism and disable the possibility for a process to utilize the CPU while it is interrupted until the DPC has finished execution.

Highest DPC routine execution time (µs): 1583.478749
Driver with highest DPC routine execution time: ndis.sys - Network Driver Interface Specification (NDIS), Microsoft Corporation

Highest reported total DPC routine time (%): 2.705536
Driver with highest DPC total execution time: ndis.sys - Network Driver Interface Specification (NDIS), Microsoft Corporation

Total time spent in DPCs (%) 2.807986

DPC count (execution time <250 µs): 391774
DPC count (execution time 250-500 µs): 0
DPC count (execution time 500-999 µs): 275
DPC count (execution time 1000-1999 µs): 8
DPC count (execution time 2000-3999 µs): 0
DPC count (execution time >=4000 µs): 0


REPORTED HARD PAGEFAULTS


Hard pagefaults are events that get triggered by making use of virtual memory that is not resident in RAM but backed by a memory mapped file on disk. The process of resolving the hard pagefault requires reading in the memory from disk while the process is interrupted and blocked from execution.

NOTE: some processes were hit by hard pagefaults. If these were programs producing audio, they are likely to interrupt the audio stream resulting in dropouts, clicks and pops. Check the Processes tab to see which programs were hit.

Process with highest pagefault count: messageanalyzer.exe

Total number of hard pagefaults 440
Hard pagefault count of hardest hit process: 409
Number of processes hit: 5


PER CPU DATA


CPU 0 Interrupt cycle time (s): 14.782769
CPU 0 ISR highest execution time (µs): 113.959904
CPU 0 ISR total execution time (s): 1.399446
CPU 0 ISR count: 399788
CPU 0 DPC highest execution time (µs): 1583.478749
CPU 0 DPC total execution time (s): 11.606116
CPU 0 DPC count: 350258


CPU 1 Interrupt cycle time (s): 0.544429
CPU 1 ISR highest execution time (µs): 77.085004
CPU 1 ISR total execution time (s): 0.002034
CPU 1 ISR count: 149
CPU 1 DPC highest execution time (µs): 105.149960
CPU 1 DPC total execution time (s): 0.104512
CPU 1 DPC count: 12789


CPU 2 Interrupt cycle time (s): 0.473912
CPU 2 ISR highest execution time (µs): 88.546913
CPU 2 ISR total execution time (s): 0.002221
CPU 2 ISR count: 174
CPU 2 DPC highest execution time (µs): 134.056937
CPU 2 DPC total execution time (s): 0.109732
CPU 2 DPC count: 16226


CPU 3 Interrupt cycle time (s): 0.257482
CPU 3 ISR highest execution time (µs): 54.097835
CPU 3 ISR total execution time (s): 0.003411
CPU 3 ISR count: 306
CPU 3 DPC highest execution time (µs): 138.938252
CPU 3 DPC total execution time (s): 0.012819
CPU 3 DPC count: 2072

Processes

Drivers

Scenario 0 - Speed test with GlassWire running

Speed test

Compared with GlassWire not running

LatencyMon

Main

Stats

Summary

CONCLUSION


Your system appears to be having trouble handling real-time audio and other tasks. You are likely to experience buffer underruns appearing as drop outs, clicks or pops. One or more DPC routines that belong to a driver running in your system appear to be executing for too long. One problem may be related to power management, disable CPU throttling settings in Control Panel and BIOS setup. Check for BIOS updates.
LatencyMon has been analyzing your system for 0:01:23 (h:mm:ss) on all processors.


SYSTEM INFORMATION


Computer name: ME08
OS version: Windows 10 , 10.0, version 1903, build: 18362 (x64)
Hardware: Alienware 17, Alienware, 0MPYM4
CPU: GenuineIntel Intel® Core™ i7-4710MQ CPU @ 2.50GHz
Logical processors: 8
Processor groups: 1
RAM: 16265 MB total


CPU SPEED


Reported CPU speed: 2494 MHz

Note: reported execution times may be calculated based on a fixed reported CPU speed. Disable variable speed settings like Intel Speed Step and AMD Cool N Quiet in the BIOS setup for more accurate results.

WARNING: the CPU speed that was measured is only a fraction of the CPU speed reported. Your CPUs may be throttled back due to variable speed settings and thermal issues. It is suggested that you run a utility which reports your actual CPU frequency and temperature.


MEASURED INTERRUPT TO USER PROCESS LATENCIES


The interrupt to process latency reflects the measured interval that a usermode process needed to respond to a hardware request from the moment the interrupt service routine started execution. This includes the scheduling and execution of a DPC routine, the signaling of an event and the waking up of a usermode thread from an idle wait state in response to that event.

Highest measured interrupt to process latency (µs): 2098.40
Average measured interrupt to process latency (µs): 17.063021

Highest measured interrupt to DPC latency (µs): 2093.50
Average measured interrupt to DPC latency (µs): 8.979410


REPORTED ISRs


Interrupt service routines are routines installed by the OS and device drivers that execute in response to a hardware interrupt signal.

Highest ISR routine execution time (µs): 125.466319
Driver with highest ISR routine execution time: ndis.sys - Network Driver Interface Specification (NDIS), Microsoft Corporation

Highest reported total ISR routine time (%): 0.147704
Driver with highest ISR total time: ndis.sys - Network Driver Interface Specification (NDIS), Microsoft Corporation

Total time spent in ISRs (%) 0.168421

ISR count (execution time <250 µs): 332388
ISR count (execution time 250-500 µs): 0
ISR count (execution time 500-999 µs): 0
ISR count (execution time 1000-1999 µs): 0
ISR count (execution time 2000-3999 µs): 0
ISR count (execution time >=4000 µs): 0


REPORTED DPCs


DPC routines are part of the interrupt servicing dispatch mechanism and disable the possibility for a process to utilize the CPU while it is interrupted until the DPC has finished execution.

Highest DPC routine execution time (µs): 1153.143545
Driver with highest DPC routine execution time: ndis.sys - Network Driver Interface Specification (NDIS), Microsoft Corporation

Highest reported total DPC routine time (%): 0.033806
Driver with highest DPC total execution time: ndis.sys - Network Driver Interface Specification (NDIS), Microsoft Corporation

Total time spent in DPCs (%) 0.116231

DPC count (execution time <250 µs): 295912
DPC count (execution time 250-500 µs): 0
DPC count (execution time 500-999 µs): 27
DPC count (execution time 1000-1999 µs): 1
DPC count (execution time 2000-3999 µs): 0
DPC count (execution time >=4000 µs): 0


REPORTED HARD PAGEFAULTS


Hard pagefaults are events that get triggered by making use of virtual memory that is not resident in RAM but backed by a memory mapped file on disk. The process of resolving the hard pagefault requires reading in the memory from disk while the process is interrupted and blocked from execution.

NOTE: some processes were hit by hard pagefaults. If these were programs producing audio, they are likely to interrupt the audio stream resulting in dropouts, clicks and pops. Check the Processes tab to see which programs were hit.

Process with highest pagefault count: firefox.exe

Total number of hard pagefaults 11
Hard pagefault count of hardest hit process: 6
Number of processes hit: 6


PER CPU DATA


CPU 0 Interrupt cycle time (s): 3.252233
CPU 0 ISR highest execution time (µs): 125.466319
CPU 0 ISR total execution time (s): 1.108349
CPU 0 ISR count: 330361
CPU 0 DPC highest execution time (µs): 1153.143545
CPU 0 DPC total execution time (s): 0.609303
CPU 0 DPC count: 276858


CPU 1 Interrupt cycle time (s): 0.546852
CPU 1 ISR highest execution time (µs): 63.308741
CPU 1 ISR total execution time (s): 0.010848
CPU 1 ISR count: 1863
CPU 1 DPC highest execution time (µs): 245.713713
CPU 1 DPC total execution time (s): 0.053786
CPU 1 DPC count: 5832


CPU 2 Interrupt cycle time (s): 0.604561
CPU 2 ISR highest execution time (µs): 15.289094
CPU 2 ISR total execution time (s): 0.000249
CPU 2 ISR count: 66
CPU 2 DPC highest execution time (µs): 866.514836
CPU 2 DPC total execution time (s): 0.039609
CPU 2 DPC count: 4187


CPU 3 Interrupt cycle time (s): 0.384163
CPU 3 ISR highest execution time (µs): 5.357658
CPU 3 ISR total execution time (s): 0.000057
CPU 3 ISR count: 21
CPU 3 DPC highest execution time (µs): 78.032077
CPU 3 DPC total execution time (s): 0.007104
CPU 3 DPC count: 1042

Processes

Drivers

1 Like

Thanks for doing this detailed testing @Remah! We will test on our machines/network also.

1 Like

I ran some more tests with Microsoft Message Analyzer to see if there was any difference if I used a broader range of scenarios than only Local Network Interfaces.

The result was essentially the same as in the above posts for the four scenarios I tried. The speed test was faster when the GlassWire service wasn’t running (682-803 Mbps) than when it was (528-571 Mbps).
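A quick sketch of those two ranges (numbers copied from the paragraph above) shows the runs don't even overlap:

```python
# Throughput ranges from the repeated runs: (slowest, fastest) in Mbps.
stopped = (682, 803)   # GlassWire service not running
running = (528, 571)   # GlassWire service running

# Even the slowest run without the service beat the fastest run with it.
print(stopped[0] > running[1])   # → True

# Minimum gap between the two ranges, as a percentage of the slower baseline run.
gap_pct = (stopped[0] - running[1]) / stopped[0] * 100
print(round(gap_pct, 1))         # → 16.3
```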

I used the Wired Local Area Network favorite scenario, which can be revealed by editing the favorites. It had the worst impact on my system, but with a sample size of one this may be misleading. Still, it got me thinking: maybe filtering by connection is an issue for GlassWire?

I also created three new scenarios:

  • Microsoft-Windows-WFP as shown in the screenshot below
  • Microsoft-Windows-Base-Filtering-Engine-Connections
  • Microsoft-Windows-Kernel-Network

Steps to create scenario

New Session button
Add Data Source button
Live Trace button
Add Providers button
Add System Providers in drop-down list
Select provider from Available System Providers list
Add to Selected Providers
OK button
Start button

Incidentally, Microsoft Message Analyzer (MMA) is very useful for kernel trace logging if you don’t need to use Event Tracing for Windows (ETW) and Event Trace Logs (ETL) directly, e.g. to log Windows startup. MMA will view/parse ETL files but also appears to do a better job of decoding event info than Windows Event Viewer. I haven’t used WEV with Windows 10, but MMA certainly seemed easier to read.

1 Like

It’s been two weeks, is there any update on this (or just an official acknowledgement of the issue)? This is a pretty important issue for people with fast internet.

Thanks for your super in depth analysis Remah!

Thanks for your detailed report @Remah. Our network is not as fast as yours, that’s cool!

I will say that we have many Fortune 500 companies using our software on their servers on fast networks, and I don’t ever recall a report from an ISP or company saying we slowed down anything related to the network.

We’ll see if we can recreate this on our end.

Meanwhile we’re in the final month or so of the backend rewrite that we have been working on for a while. The update will use fewer resources and will allow us to move forward with our macOS version, hopefully before the year is out.

Unfortunately this still seems to be an issue with GlassWire 2.2; above 200-400 Mbps it starts to stutter for me.

If making GlassWire’s monitoring asynchronous is not an option, some way to completely ignore an application, such as Steam when I am downloading, would be great. Using GlassWire’s incognito mode doesn’t seem to help; only stopping the service entirely fixed the issue.

1 Like

We’re going to release an optional “lite” version of GlassWire for slower PCs. It doesn’t log hosts, and it will be out in the next few months.

1 Like

At those download speeds I get DPC issues, but I don’t get audio stuttering because I use Process Lasso in ProBalance mode and this maintains responsiveness. Running LatencyMon clearly shows that my system has issues which would likely cause stuttering if I didn’t run a process optimizer.

LatencyMon report

CONCLUSION


Your system appears to be having trouble handling real-time audio and other tasks. You are likely to experience buffer underruns appearing as drop outs, clicks or pops. One or more DPC routines that belong to a driver running in your system appear to be executing for too long. At least one detected problem appears to be network related. In case you are using a WLAN adapter, try disabling it to get better results. One problem may be related to power management, disable CPU throttling settings in Control Panel and BIOS setup. Check for BIOS updates.
LatencyMon has been analyzing your system for 0:16:05 (h:mm:ss) on all processors.


SYSTEM INFORMATION


Computer name: MA08
OS version: Windows 10 , 10.0, version 1903, build: 18362 (x64)
Hardware: Alienware 17, Alienware, 0MPYM4
CPU: GenuineIntel Intel® Core™ i7-4710MQ CPU @ 2.50GHz
Logical processors: 8
Processor groups: 1
RAM: 16265 MB total


CPU SPEED


Reported CPU speed: 2494 MHz

Note: reported execution times may be calculated based on a fixed reported CPU speed. Disable variable speed settings like Intel Speed Step and AMD Cool N Quiet in the BIOS setup for more accurate results.

WARNING: the CPU speed that was measured is only a fraction of the CPU speed reported. Your CPUs may be throttled back due to variable speed settings and thermal issues. It is suggested that you run a utility which reports your actual CPU frequency and temperature.


MEASURED INTERRUPT TO USER PROCESS LATENCIES


The interrupt to process latency reflects the measured interval that a usermode process needed to respond to a hardware request from the moment the interrupt service routine started execution. This includes the scheduling and execution of a DPC routine, the signaling of an event and the waking up of a usermode thread from an idle wait state in response to that event.

Highest measured interrupt to process latency (µs): 33134.50
Average measured interrupt to process latency (µs): 24.952987

Highest measured interrupt to DPC latency (µs): 33122.30
Average measured interrupt to DPC latency (µs): 14.024317


REPORTED ISRs


Interrupt service routines are routines installed by the OS and device drivers that execute in response to a hardware interrupt signal.

Highest ISR routine execution time (µs): 379.355253
Driver with highest ISR routine execution time: ndis.sys - Network Driver Interface Specification (NDIS), Microsoft Corporation

Highest reported total ISR routine time (%): 0.013757
Driver with highest ISR total time: ndis.sys - Network Driver Interface Specification (NDIS), Microsoft Corporation

Total time spent in ISRs (%) 0.02630

ISR count (execution time <250 µs): 255625
ISR count (execution time 250-500 µs): 0
ISR count (execution time 500-999 µs): 1
ISR count (execution time 1000-1999 µs): 0
ISR count (execution time 2000-3999 µs): 0
ISR count (execution time >=4000 µs): 0


REPORTED DPCs


DPC routines are part of the interrupt servicing dispatch mechanism and disable the possibility for a process to utilize the CPU while it is interrupted until the DPC has finished execution.

Highest DPC routine execution time (µs): 33138.747795
Driver with highest DPC routine execution time: ndis.sys - Network Driver Interface Specification (NDIS), Microsoft Corporation

Highest reported total DPC routine time (%): 0.117690
Driver with highest DPC total execution time: ndis.sys - Network Driver Interface Specification (NDIS), Microsoft Corporation

Total time spent in DPCs (%) 0.402451

DPC count (execution time <250 µs): 2184774
DPC count (execution time 250-500 µs): 0
DPC count (execution time 500-999 µs): 14430
DPC count (execution time 1000-1999 µs): 415
DPC count (execution time 2000-3999 µs): 12
DPC count (execution time >=4000 µs): 0


REPORTED HARD PAGEFAULTS


Hard pagefaults are events that get triggered by making use of virtual memory that is not resident in RAM but backed by a memory mapped file on disk. The process of resolving the hard pagefault requires reading in the memory from disk while the process is interrupted and blocked from execution.

NOTE: some processes were hit by hard pagefaults. If these were programs producing audio, they are likely to interrupt the audio stream resulting in dropouts, clicks and pops. Check the Processes tab to see which programs were hit.

Process with highest pagefault count: outlook.exe

Total number of hard pagefaults 9559
Hard pagefault count of hardest hit process: 2189
Number of processes hit: 83


PER CPU DATA


CPU 0 Interrupt cycle time (s): 86.892270
CPU 0 ISR highest execution time (µs): 199.724539
CPU 0 ISR total execution time (s): 1.745509
CPU 0 ISR count: 226063
CPU 0 DPC highest execution time (µs): 33138.747795
CPU 0 DPC total execution time (s): 27.985715
CPU 0 DPC count: 2007986


CPU 1 Interrupt cycle time (s): 30.560836
CPU 1 ISR highest execution time (µs): 379.355253
CPU 1 ISR total execution time (s): 0.240752
CPU 1 ISR count: 25320
CPU 1 DPC highest execution time (µs): 516.539695
CPU 1 DPC total execution time (s): 1.079728
CPU 1 DPC count: 38925


CPU 2 Interrupt cycle time (s): 31.782135
CPU 2 ISR highest execution time (µs): 90.211307
CPU 2 ISR total execution time (s): 0.043322
CPU 2 ISR count: 4171
CPU 2 DPC highest execution time (µs): 366.143945
CPU 2 DPC total execution time (s): 0.346120
CPU 2 DPC count: 28557


CPU 3 Interrupt cycle time (s): 27.106567
CPU 3 ISR highest execution time (µs): 44.371692
CPU 3 ISR total execution time (s): 0.000818
CPU 3 ISR count: 50
CPU 3 DPC highest execution time (µs): 186.843625
CPU 3 DPC total execution time (s): 0.174699
CPU 3 DPC count: 14799


CPU 4 Interrupt cycle time (s): 29.599547
CPU 4 ISR highest execution time (µs): 15.311949
CPU 4 ISR total execution time (s): 0.000217
CPU 4 ISR count: 22
CPU 4 DPC highest execution time (µs): 493.920209
CPU 4 DPC total execution time (s): 0.404097
CPU 4 DPC count: 30961


CPU 5 Interrupt cycle time (s): 30.071397
CPU 5 ISR highest execution time (µs): 0.0
CPU 5 ISR total execution time (s): 0.0
CPU 5 ISR count: 0
CPU 5 DPC highest execution time (µs): 798.451083
CPU 5 DPC total execution time (s): 0.342023
CPU 5 DPC count: 25920


CPU 6 Interrupt cycle time (s): 28.088837
CPU 6 ISR highest execution time (µs): 0.0
CPU 6 ISR total execution time (s): 0.0
CPU 6 ISR count: 0
CPU 6 DPC highest execution time (µs): 171.381315
CPU 6 DPC total execution time (s): 0.397360
CPU 6 DPC count: 28390


CPU 7 Interrupt cycle time (s): 26.082160
CPU 7 ISR highest execution time (µs): 0.0
CPU 7 ISR total execution time (s): 0.0
CPU 7 ISR count: 0
CPU 7 DPC highest execution time (µs): 794.942662
CPU 7 DPC total execution time (s): 0.343011
CPU 7 DPC count: 24100
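The headline numbers in a report like the one above can also be pulled out programmatically. Below is a small, hypothetical Python helper (not part of LatencyMon; the function name and regexes are my own) that extracts the worst DPC execution time and the driver LatencyMon blames for it. A worst-case DPC in the tens of milliseconds, as ndis.sys shows here, is orders of magnitude beyond what real-time audio can tolerate.

```python
import re

def worst_dpc(report_text):
    """Extract the highest DPC execution time (µs) and the driver
    blamed for it from a plain-text LatencyMon report."""
    time_us = None
    driver = None
    m = re.search(r"Highest DPC routine execution time \(µs\):\s+([\d.]+)", report_text)
    if m:
        time_us = float(m.group(1))
    m = re.search(r"Driver with highest DPC routine execution time:\s+(\S+)", report_text)
    if m:
        driver = m.group(1)
    return driver, time_us

# Two lines copied from the report above serve as a quick check:
sample = """Highest DPC routine execution time (µs): 33138.747795
Driver with highest DPC routine execution time: ndis.sys - Network Driver Interface Specification (NDIS), Microsoft Corporation"""
driver, t = worst_dpc(sample)
print(driver, t)  # ndis.sys 33138.747795
```

As a rough rule of thumb, sustained DPCs above roughly 1000 µs are already enough to cause audio dropouts, so a 33 ms spike pins the blame squarely on the network stack.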



Do you use any special settings? I installed Process Lasso and, with the default settings, it doesn’t seem to do much: it detects that system responsiveness is absolutely terrible, but it doesn’t seem to actually fix it.

AFAIK, the only change I have made is for one process. I exported the configuration to find the specific change:
DefaultPriorities=firefox.exe,above normal

I can think of several reasons for the difference in our experience. The most likely are:

  • The network driver runs on one CPU core so the difference might be due to your computer running the process on a less capable core.
  • You might be testing with more hosts than me which will increase GlassWire’s load.
  • As above, I’ve increased the priority for Firefox which is running the speed test and playing a video.

From a quick and dirty test, I think that giving Firefox a higher priority may be the main difference. It largely stops my problems with audio artefacts during video playback. Without Process Lasso I always get popping, but no stuttering. With Process Lasso the popping is largely eliminated. I could get better performance with a more rigid rule, but I prefer ProBalance because I don’t want to set up more than one rule.
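For anyone curious what the "above normal" rule amounts to under the hood, it boils down to a single Win32 call, SetPriorityClass. The sketch below is a hypothetical Python wrapper (the constants and API names are the real Win32 ones, but the `raise_priority` helper is my own); it is Windows-only and deliberately a no-op elsewhere:

```python
import ctypes
import sys

ABOVE_NORMAL_PRIORITY_CLASS = 0x00008000  # Win32 priority class constant
PROCESS_SET_INFORMATION = 0x0200          # access right SetPriorityClass requires

def raise_priority(pid):
    """Set a process to 'above normal' priority, mirroring the Process
    Lasso rule DefaultPriorities=firefox.exe,above normal.
    Returns True on success; returns False on failure or on
    non-Windows platforms, where it does nothing."""
    if sys.platform != "win32":
        return False
    kernel32 = ctypes.windll.kernel32
    handle = kernel32.OpenProcess(PROCESS_SET_INFORMATION, False, pid)
    if not handle:
        return False
    try:
        return bool(kernel32.SetPriorityClass(handle, ABOVE_NORMAL_PRIORITY_CLASS))
    finally:
        kernel32.CloseHandle(handle)
```

Note that Process Lasso itself does considerably more than this (ProBalance dynamically throttles background processes); this only mirrors the static per-process rule.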

Hello there,

I’m currently evaluating GlassWire 2.2.201 and I notice network “slowdowns” at high speeds. For the testing below, I am running Windows 10 1809 with 1 Gbps Ethernet, copying a file from one PC on the network to another over Ethernet.

GlassWire OFF (this will serve as our baseline)
Transfer speed range around 90-100 MB/s.

GlassWire ON (test subject)
Transfer speed jumps up and down between 65 and 90 MB/s.

The transfer speed with GlassWire OFF is steady, while with GlassWire ON it fluctuates.
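Taking the midpoints of the two reported ranges gives a rough sense of the overhead (a back-of-the-envelope estimate only, since the ranges overlap and the sample is small):

```python
baseline = (90 + 100) / 2        # MB/s with GlassWire off
with_glasswire = (65 + 90) / 2   # MB/s with GlassWire on
slowdown = (baseline - with_glasswire) / baseline
print(f"~{slowdown:.0%} average slowdown")  # ~18% average slowdown
```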

@Will

Thanks for your report.

We will try to reproduce this. GlassWire just uses a Windows network-monitoring API that should not impact network performance in any way.

Is your firewall set to “on” or off? Which mode? Do you have anything blocked with the firewall at all? Are you a paid or free user?

Also, it looks like you are moving files locally on your HD rather than over the network? Or am I mistaken about that? If it is another PC, does that PC also have GlassWire installed?

I just tried on my own PC and could not reproduce this but I’ll ask our QA to do so. Also it looks like you are using two different file sizes. I tested using the same file.

Hi @Ken_GlassWire,

  1. My firewall is ON, but I don’t have anything BLOCKED.

  2. I am a free user evaluating the product in TRIAL MODE.

  3. I am moving a file from the local PC [GlassWire INSTALLED] to another PC [GlassWire NOT INSTALLED] on the network, so data is transferred over 1 Gbps Ethernet.


Thanks! We will investigate and try to reproduce this.