Worried about trashing SSD

no, I’m not using GlassWire right now; I’ll install it in 9 days, and that’s why I’m asking before I fire it up

but I do work a lot with networks, so my traffic hovers at ~5.3TB over two months



I found this older article from 2012.

It shows that even with 10GiB of writes per day your SSD would last 11-70 years depending on the type (if I understand it correctly). SSDs have improved a lot since 2012, so current drives are probably even better.
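The usual back-of-the-envelope endurance estimate behind numbers like those is capacity × rated P/E cycles ÷ (write amplification × daily writes). A minimal sketch; the 120GB / 1000-cycle figures are illustrative assumptions, not numbers from the article:

```python
# Back-of-the-envelope SSD endurance estimate (assumed formula, not the article's):
# lifetime (years) = capacity * P/E cycles / (write amplification * daily writes * 365)
def lifetime_years(capacity_gb, pe_cycles, daily_writes_gb, write_amp=1.0):
    total_writable_gb = capacity_gb * pe_cycles / write_amp
    return total_writable_gb / (daily_writes_gb * 365)

# e.g. a hypothetical 120 GB drive rated for 1000 P/E cycles, written 10 GiB/day:
print(round(lifetime_years(120, 1000, 10), 1))  # ~32.9 years
```

That lands inside the article's 11-70 year range; a higher write amplification or fewer rated cycles pulls the estimate down.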

well, these articles are kinda’ theoretical, at best, since no one has actually taken the time to kill the drive with day-to-day use, including power-on cycles

I will post the configuration here on how to move the GlassWire database once I get it back from the dev team. Sorry for the delay.

thanks, no need to hurry

Actually, that is what they try to do. Even if they don’t have enough time to kill all the SSDs, enough fail that they can predict the likely life of the average SSD with reasonable accuracy.

If you are interested you can read about the theory in articles like this:

The underlying technologies have also seen a lot of use in other products, which provides failure rates with some relevance too.

Most hard drives don’t fail in computers before the computer is replaced so it is unlikely that we will see higher failure rates from more reliable SSDs.


On reflection, I should have added that actual studies of real life failures in use indicate that write exhaustion is not a significant cause of SSD failures. Other issues are a lot more relevant including the following:

  • power cycling
  • power cycling during writes
  • component failures e.g. capacitors
  • firmware failures

If you search for real life SSD reliability statistics and stress test results you should find more info, e.g.:


I did read a lot of reliability tests and all of them induce forced failure; as you noted, power cycling in electronics is a huge factor in failure rate

as for SSDs to be more reliable than HDDs … hmm

  • SSD capacity is much smaller than that of an HDD at the same price
  • when an SSD “goes”, it goes all at once; an HDD gives you an indication of impending failure via its S.M.A.R.T. attributes and you have time to react

You’re totally right that catastrophic SSD failure is more common but it still happens a lot with HDDs too. The majority of SSD failures are still preceded by a SMART warning. It’s just that most users don’t take immediate action, or even any action, before the SSD stops for good.

Anyway, the type of failure and the relative price of capacity don’t negate the fact that SSDs are more reliable than HDDs.

Back to your concern about SSD thrashing reducing its life. I’d recommend buying an SSD with the best electronics and the best firmware - how the controller handles wear levelling, garbage collection, and the like does have an impact on both the SSD performance and life. The better SSDs can last twice as long as other reputable SSDs as shown by reports like The SSD Endurance Experiment: They’re all dead. So I hope that if you did buy your SSD purely on price that you got one of the good ones. At 0.7TB per month, even the worst performing SSD in that article will probably give you 100 months or 8 years of life.
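The arithmetic behind that lifetime estimate is simple enough to sketch; the 70 TB endurance figure below is an illustrative assumption, not a number from the article:

```python
# Lifetime estimate from total write endurance (TBW) and monthly write volume.
def lifetime_months(endurance_tb, tb_written_per_month):
    return endurance_tb / tb_written_per_month

# e.g. an assumed 70 TB of endurance at 0.7 TB/month of writes:
print(lifetime_months(70, 0.7))  # → 100.0 months, i.e. just over 8 years
```

Any drive with a higher endurance rating simply scales the result up proportionally.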

I’d be surprised if GlassWire is a heavy user of disk because my web browsers all use much more disk than GlassWire does.


they kinda’ aren’t
Google published a paper a while back detailing SSD reliability in their data centers
one of their findings is that when SSDs go, they go without warning

I can’t post links here, yet … google this: The FAST 2016 paper Flash Reliability in Production: The Expected and the Unexpected
page 67 (78 in the reader)


  • Ignore Uncorrectable Bit Error Rate (UBER) specs. A meaningless number.
  • Good news: Raw Bit Error Rate (RBER) increases slower than expected from wearout and is not correlated with UBER or other failures.
  • High-end SLC drives are no more reliable than MLC drives.
  • Bad news: SSDs fail at a lower rate than disks, but UBER rate is higher.
  • SSD age, not usage, affects reliability.
  • Bad blocks in new SSDs are common, and drives with a large number of bad blocks are much more likely to lose hundreds of other blocks, most likely due to die or chip failure.
  • 30-80 percent of SSDs develop at least one bad block and 2-7 percent develop at least one bad chip in the first four years of deployment.

An obvious question is how flash reliability compares to that of hard disk drives (HDDs), their main competitor. We find that when it comes to replacement rates, flash drives win. The annual replacement rates of hard disk drives have previously been reported to be 2-9%, which is high compared to the 4-10% of flash drives we see being replaced in a 4 year period. However, flash drives are less attractive when it comes to their error rates. More than 20% of flash drives develop uncorrectable errors in a four year period, 30-80% develop bad blocks and 2-7% of them develop bad chips. In comparison, previous work on HDDs reports that only 3.5% of disks in a large population developed bad sectors in a 32 months period – a low number when taking into account that the number of sectors on a hard disk is orders of magnitudes larger than the number of either blocks or chips on a solid state drive, and that sectors are smaller than blocks, so a failure is less severe.
In summary, we find that the flash drives in our study experience significantly lower replacement rates (within their rated lifetime) than hard disk drives. On the downside, they experience significantly higher rates of uncorrectable errors than hard disk drives.


For anyone else who’s interested, here’s the report download page https://www.usenix.org/conference/fast16/technical-sessions/presentation/schroeder

SSDs are more reliable overall
I agree with your “kinda aren’t” that some reliability statistics favor HDDs.

Even so, SSDs are more reliable than HDDs overall. This applies in particular to the context of your original concern about “trashing the SSD with the data recording” and the SSD dying: SSDs are far less likely to need replacing than HDDs.

Uncorrectable errors are significant but we already live with them
As you point out, uncorrectable errors are a significant issue but one that we don’t usually worry about, at least on our personal computers. In part this is because the consequences on a home computer are quite different from those on a corporate server.

If uncorrectable errors were a major issue for us then we would have ECC RAM (error correcting memory). We would also use storage systems that provide a higher level of error correction to ensure that such ECC errors on the drive can be corrected. See ZFS, Btrfs and Microsoft ReFS as the main examples. RAID is another but I think it is increasingly less relevant and less useful.

Uncorrectable errors are known to predict HDD failure
Uncorrectable errors are reported by SMART attribute 187 and are already a known predictor of HDD failure, so it is no surprise that they do much the same on SSDs.

For those who don’t know, uncorrectable errors are hardware ECC failures that would result in a bad sector on an HDD. The level of uncorrectable errors in the report is 2-6 per 1000 SSD drive-days, which works out to about 1-2 per year for a single SSD. The report says 26-60% of SSDs have uncorrectable errors versus an average 3.5% for HDDs, which means 3-12 times as many impacted drives. What makes this potentially worse is that a bad block on an SSD is much larger, so it can have much more impact than a bad sector on an HDD.
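The per-year figure is just a unit conversion from the report’s per-1000-drive-days rate; a quick sketch:

```python
# Convert an error rate given per 1000 drive-days into errors per drive-year.
def errors_per_drive_year(errors_per_1000_drive_days):
    return errors_per_1000_drive_days / 1000 * 365

low, high = errors_per_drive_year(2), errors_per_drive_year(6)
print(round(low, 2), round(high, 2))  # ~0.73 and ~2.19, i.e. roughly 1-2 per year
```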

To give you an idea of their impact, an HDD with more than 30 uncorrectable errors is almost certain to fail within a year, e.g.
http://www.extremetech.com/computing/194059-using-smart-to-accurately-predict-when-a-hard-drive-is-about-to-die. As I said previously, these sorts of errors are commonly ignored by most users, who are not even aware such errors exist. That is, unless they cause a crash (Windows blue screen or application failure) or they notice corrupted content in a document.

Uncorrectable errors are predictable
In the Summary (page 14), the report says that various errors (nine are listed in the graph on slide 19; some are SMART errors) allow the prediction of uncorrectable errors, so warnings of imminent failure are actually very common:

Previous errors of various types are predictive of later uncorrectable errors. (In fact, we have work in progress showing that standard machine learning techniques can predict uncorrectable errors based on age and prior errors with an interesting accuracy.)



To move your database please create a “glasswire.conf” file in Notepad containing only this configuration line (replace <new_path> with your target folder):
# db_file_path=<new_path>\glasswire.db

This file should be copied to the C:\ProgramData\GlassWire\service folder.
In the sample file the database path is set to the D:\glasswire\ folder, but you can change the path to something else.

You should restart the GlassWire service when the file is copied.
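Putting that together, an example glasswire.conf pointing the database at the D:\glasswire\ folder from the sample might look like this (the comment line and exact path are illustrative assumptions; adjust the path to your drive):

```ini
# glasswire.conf - custom GlassWire database location (example)
db_file_path=D:\glasswire\glasswire.db
```

Save it in C:\ProgramData\GlassWire\service and restart the GlassWire service for it to take effect.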


@Ken_GlassWire awesome, thanks! :kissing_heart:


And thanks to ZMe_Ul and Remah for an interesting discussion. :slight_smile:


I did something that may be a solution for this issue …

I moved the GlassWire folder located in the hidden ProgramData folder to an HDD and created a symlink … so all the writes go to the HDD, and as far as I can see no data is being written to my SSD when I check Resource Monitor and CrystalDiskInfo.
Am I doing it right??
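To illustrate, the trick boils down to this: move the folder, then leave a symlink at the old path so writes land on the other drive. A minimal sketch in Python with illustrative stand-in paths (on Windows the actual step is `mklink /D` from an elevated prompt, with the real folder being C:\ProgramData\GlassWire):

```python
import os
import shutil
import tempfile

# Illustrative stand-ins for C:\ProgramData\GlassWire (SSD) and D:\GlassWire (HDD).
root = tempfile.mkdtemp()
old_path = os.path.join(root, "ssd_GlassWire")
new_path = os.path.join(root, "hdd_GlassWire")

os.makedirs(old_path)
with open(os.path.join(old_path, "glasswire.db"), "w") as f:
    f.write("data")

shutil.move(old_path, new_path)   # 1. move the folder to the other drive
os.symlink(new_path, old_path)    # 2. leave a symlink at the old location

# The application still opens the old path, but writes land in new_path.
with open(os.path.join(old_path, "glasswire.db"), "a") as f:
    f.write("+more")

print(os.path.realpath(old_path) == os.path.realpath(new_path))  # True
```

One caveat: an uninstaller or updater won’t know the folder is a symlink, so it may replace or remove it.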


I think this method may be undone when GlassWire is updated

Your solution will work fine, but I would always prefer the GlassWire solution. The configuration information is stored with the GlassWire application, so if you uninstall GlassWire it should remove the database file, whereas GlassWire will not know to remove the symlink you have created.


I don’t think I will ever uninstall GlassWire :stuck_out_tongue: , anyway thank you guys :wink:


I don’t think this is easy, because I tried it with my SSD. I recovered my data and wanted to transfer it to a FireCuda SSHD but got some errors. After that I read many forums and official documents looking for a solution. Can anyone suggest an easy way to do this?