puzzling raw read error rate graph

clif9710
Posts: 2
Joined: 2020.11.03. 05:11

puzzling raw read error rate graph

Post by clif9710 »

Please see the attached screen grab showing information for my Seagate 1TB HDD. Scroll down to see the raw read error rate graph. Note the sudden, rather wild variation that extends over about a 6 month period. My first question would be: how often is the RRER checked? My second question is: what could explain this sudden burst after 5 years of no significant RRER, followed by a return to "normal" that seems to be holding?

I checked the log and found nothing. Performance and health are both 100% but power on time is 2084 days. Am I living on borrowed time?
Attachments
errorRate.jpg (429.17 KiB)
hdsentinel
Site Admin
Posts: 3128
Joined: 2008.07.27. 17:00
Location: Hungary

Re: puzzling raw read error rate graph

Post by hdsentinel »

Everything you see is completely normal and expected. Let me explain generally and answer your questions.

As described numerous times on this forum, changes in the Raw Read Error Rate of Seagate hard disks are completely normal.
The high value does not represent a number of errors - this is why the attribute is called an "error rate" rather than an error count - so even when the value jumps very high, it does not indicate a problem.
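
As a side note, if you wonder how the raw value can look so large while containing no real errors: a commonly cited (but not vendor-documented) interpretation is that Seagate packs an error count into the upper bits of the 48-bit raw value and the total number of read operations into the lower 32 bits. A minimal, purely illustrative Python sketch, assuming that interpretation:

Code:
# Hypothetical decoding of a Seagate-style 48-bit SMART raw value.
# Assumption (not vendor-documented): upper 16 bits = error count,
# lower 32 bits = total number of read operations.
def decode_seagate_rate(raw_value):
    errors = (raw_value >> 32) & 0xFFFF      # upper 16 bits
    operations = raw_value & 0xFFFFFFFF      # lower 32 bits
    return errors, operations

# Example: a large-looking raw value that contains zero actual errors
errors, ops = decode_seagate_rate(0x00000BCD1234)
print(errors, ops)   # 0 errors out of ~198 million read operations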

This is why Hard Disk Sentinel shows 100% health and the text description shows no problems.

Of course I can confirm that if there were a real error, Hard Disk Sentinel would surely
- report it in the text description
- show a degradation in the health value
- detect it with Disk -> Short self test or Disk -> Extended self test
- and the Disk -> Surface test -> Read test could not complete without problems

If you are interested in further information about this, please check this forum topic:
https://www.hdsentinel.com/forum/viewto ... p=973#p973


The graph is designed to automatically show the very first value ever detected and then focus on the most recent values recorded. This is why you may see an "empty" area, but it does not mean the value did not change there (it most likely did, and would probably show similar fluctuation).
If you double-click on the graph, further data points are displayed, but in general the software is designed to focus on the most recent months, showing the very first value ever detected/recorded for reference.
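
To illustrate the idea (this is only a conceptual sketch, not Hard Disk Sentinel's actual logic): keep the very first recorded point as a reference and otherwise plot only a recent window of points.

Code:
from datetime import timedelta

def values_to_plot(history, window_days=180):
    """history: list of (timestamp, value) pairs in chronological order."""
    if not history:
        return []
    cutoff = history[-1][0] - timedelta(days=window_days)
    recent = [point for point in history if point[0] >= cutoff]
    if history[0][0] < cutoff:
        return [history[0]] + recent   # keep the first-ever value as reference
    return recent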


> My first question would be how often is the RRER checked?

The complete status (all self-monitoring attributes, including this one) is detected once every 5 minutes by default.
This can be controlled on the Configuration -> Advanced Options page; the "detection frequency" slider there can be used to adjust it.
However, not all values are logged, as some of these changes (like the one you see) are completely normal and do not indicate problems.

For some other, critical values (e.g. bad sectors, weak sectors, spin up problems, etc.), any change is noticed and saved to the Log page.
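
For reference, a similar periodic check can be reproduced outside the software, for example on Linux with smartmontools. This is only an illustrative sketch (Hard Disk Sentinel does not work this way internally), and the 300-second interval simply mirrors the default detection frequency mentioned above; /dev/sda is an assumption you would adjust to your own disk.

Code:
# Illustrative sketch: poll SMART attributes every 5 minutes with smartctl
# (smartmontools) and print the Raw_Read_Error_Rate line.
import subprocess
import time

DEVICE = "/dev/sda"          # assumption: adjust to your disk
INTERVAL_SECONDS = 300       # 5 minutes, like the default detection frequency

while True:
    output = subprocess.run(
        ["smartctl", "-A", DEVICE],
        capture_output=True, text=True
    ).stdout
    for line in output.splitlines():
        if "Raw_Read_Error_Rate" in line:
            print(time.strftime("%Y-%m-%d %H:%M:%S"), line.strip())
    time.sleep(INTERVAL_SECONDS)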


> My second question is what could explain this sudden burst after 5 years of no significant RRER
> followed by a return to "normal" that seems to be holding?

Please see above. During those 5 years the graph would have shown similar fluctuation - those values are simply no longer recorded/displayed.


> I checked the log and found nothing. Performance and health are both 100% but power on time is 2084 days.

You can perform tests at any time to confirm whether the disk drive is working perfectly or to reveal any (even minor) issue:
https://www.hdsentinel.com/faq.php#tests

but as this change does not indicate any issues, it is completely normal and expected to have 100% health.
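
If you prefer a command-line check on Linux, a short SMART self test can also be started with smartmontools. This is just an alternative sketch (assuming the drive is /dev/sda), not a replacement for the tests linked above.

Code:
# Alternative sketch (Python + smartmontools on Linux): start a short SMART
# self test and show the self-test log afterwards. /dev/sda is an assumption.
import subprocess, time

DEVICE = "/dev/sda"
subprocess.run(["smartctl", "-t", "short", DEVICE])     # start the short self test
time.sleep(150)                                         # short test usually takes ~2 minutes
subprocess.run(["smartctl", "-l", "selftest", DEVICE])  # print the self-test results log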

> Am I living on borrowed time?

Apart from the above, the drive has reached the end of its designed lifetime, and after that the chances of a sudden failure increase. So it may keep working for a long time (even for many more years), but according to experience it may fail "suddenly", without a prior decrease in the health value.
The "estimated remaining lifetime = More than 100 days" on the Overview page indicates this: in a mission-critical environment it may be better to consider replacement. For non-critical, secondary storage it may keep working for a very long time.