Server Fault Asked by MSF004 on December 23, 2021
I have a software RAID 1 array on RHEL. I am getting this error emailed to me each morning: Device: /dev/sda [SAT], 1 Currently unreadable (pending) sectors
When I run a test on sda (or sdb) everything appears to pass. Am I missing something?
mdstat shows the RAID array is fine:
# cat /proc/mdstat
Personalities : [raid1]
md5 : active raid1 sdb5[1] sda5[0]
108026816 blocks [2/2] [UU]
md1 : active raid1 sdb1[1] sda1[0]
511936 blocks [2/2] [UU]
md2 : active raid1 sda2[0] sdb2[1]
805306304 blocks [2/2] [UU]
md3 : active raid1 sda3[0] sdb3[1]
62914496 blocks [2/2] [UU]
unused devices: <none>
I have also tried running: # smartctl -t long /dev/sda
Here is the output of: # smartctl -q noserial -a /dev/sda
smartctl 5.43 2012-06-30 r3573 [x86_64-linux-2.6.32-279.9.1.el6.x86_64] (local build)
Copyright (C) 2002-12 by Bruce Allen, http://smartmontools.sourceforge.net
=== START OF INFORMATION SECTION ===
Model Family: Hitachi Ultrastar A7K1000
Device Model: Hitachi HUA721010KLA330
Firmware Version: GKAOAB0A
User Capacity: 1,000,204,886,016 bytes [1.00 TB]
Sector Size: 512 bytes logical/physical
Device is: In smartctl database [for details use: -P show]
ATA Version is: 7
ATA Standard is: ATA/ATAPI-7 T13 1532D revision 1
Local Time is: Sun May 21 17:51:42 2017 CDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x84) Offline data collection activity
was suspended by an interrupting command from host.
Auto Offline Data Collection: Enabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: (15354) seconds.
Offline data collection
capabilities: (0x5b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
No Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 1) minutes.
Extended self-test routine
recommended polling time: ( 255) minutes.
SCT capabilities: (0x003f) SCT Status supported.
SCT Error Recovery Control supported.
SCT Feature Control supported.
SCT Data Table supported.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x000b 098 098 016 Pre-fail Always - 4
2 Throughput_Performance 0x0005 100 100 054 Pre-fail Offline - 0
3 Spin_Up_Time 0x0007 122 122 024 Pre-fail Always - 550 (Average 591)
4 Start_Stop_Count 0x0012 100 100 000 Old_age Always - 68
5 Reallocated_Sector_Ct 0x0033 100 100 005 Pre-fail Always - 0
7 Seek_Error_Rate 0x000b 100 100 067 Pre-fail Always - 0
8 Seek_Time_Performance 0x0005 100 100 020 Pre-fail Offline - 0
9 Power_On_Hours 0x0012 094 094 000 Old_age Always - 43202
10 Spin_Retry_Count 0x0013 100 100 060 Pre-fail Always - 0
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 68
192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 751
193 Load_Cycle_Count 0x0012 100 100 000 Old_age Always - 751
194 Temperature_Celsius 0x0002 090 090 000 Old_age Always - 66 (Min/Max 17/72)
196 Reallocated_Event_Count 0x0032 100 100 000 Old_age Always - 0
197 Current_Pending_Sector 0x0022 100 100 000 Old_age Always - 1
198 Offline_Uncorrectable 0x0008 100 100 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x000a 200 200 000 Old_age Always - 0
SMART Error Log Version: 1
No Errors Logged
SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Short offline Completed without error 00% 43186 -
# 2 Extended offline Completed without error 00% 43170 -
# 3 Short offline Completed without error 00% 43162 -
# 4 Short offline Completed without error 00% 43138 -
# 5 Short offline Completed without error 00% 43114 -
# 6 Short offline Completed without error 00% 43090 -
# 7 Short offline Completed without error 00% 43066 -
# 8 Short offline Completed without error 00% 43042 -
# 9 Extended offline Completed without error 00% 43031 -
#10 Short offline Completed without error 00% 43024 -
#11 Short offline Completed without error 00% 43018 -
#12 Extended offline Completed without error 00% 43002 -
#13 Short offline Completed without error 00% 42994 -
#14 Short offline Completed without error 00% 42970 -
#15 Short offline Completed without error 00% 42946 -
#16 Short offline Completed without error 00% 42922 -
#17 Short offline Completed without error 00% 42898 -
#18 Short offline Completed without error 00% 42874 -
#19 Short offline Completed without error 00% 42850 -
#20 Extended offline Completed without error 00% 42833 -
#21 Short offline Completed without error 00% 42826 -
SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
FYI, in my case I got the error about 8 unreadable sectors, and it persisted for a while.
Then I saw this message in my syslog:
Nov 13 06:57:35 monster smartd[3359]: Device: /dev/sdf [SAT], No more Currently unreadable (pending) sectors, warning condition reset after 1 email
So I guess it can fix itself; if it doesn't, the drive should probably be replaced ASAP.
The first message I can find in my syslog happened on Nov 12 at 06:57:35
Nov 12 06:57:35 monster smartd[3359]: Device: /dev/sdf [SAT], 8 Currently unreadable (pending) sectors
So it looks like it repeated the message for a day, then checked the drive again and no longer found the errors.
I'm still thinking I'll get a new hard drive. I can get a 12 TB drive and have 3 drives in my RAID, so if one dies I still have 2. Not having a backup is far more trouble.
My existing drive's highest temperature is 48 °C, which is relatively good. Also, the drive has only run for 9,599 hours (~400 days), so it is still very young.
Answered by Alexis Wilke on December 23, 2021
This just means that you have one bad sector on one drive in your RAID array. There is nothing to worry about at the moment, unless that disk starts accumulating many more bad sectors. You should NOT try to fix the error manually: that is done automatically by the raid-check command, which /etc/cron.d/raid-check runs every Sunday at 1:00 AM. You can check that command and run it manually to reallocate the bad sector on the disk immediately:
[root@server]# more /etc/cron.d/raid-check
0 1 * * Sun root /usr/sbin/raid-check
[root@server]# /usr/sbin/raid-check
This will force mdraid to rewrite the bad block with the good copy from the other disk in the RAID array, which allows the drive firmware to remap the bad sector.
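Under the hood, raid-check is a small script around md's sysfs scrub interface. A minimal sketch of the same idea, assuming raid-check's usual approach of writing "check" to each array's sync_action node (the SYSFS variable is parameterized only so the sketch can be exercised without real hardware; on a live system it is just /sys):

```shell
SYSFS=${SYSFS:-/sys}

# Trigger a scrub of one md array and report how many blocks mismatched.
scrub_array() {
    local md="$1"                                   # e.g. md5 from the mdstat output above
    echo check > "$SYSFS/block/$md/md/sync_action"  # start the scrub
    # Progress shows up in /proc/mdstat; once sync_action reads "idle" again:
    cat "$SYSFS/block/$md/md/mismatch_cnt"          # non-zero = inconsistencies found
}
# Usage (as root): scrub_array md5
```

For raid1, a non-zero mismatch_cnt means the scrub found blocks that differ between the mirrors; writing "repair" instead of "check" also rewrites them.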
Answered by CpnCrunch on December 23, 2021
I believe the "197 Current_Pending_Sector" value is a count of sectors that have failed to read. This does imply that the drive may be starting to fail, but it doesn't necessarily mean the drive is bad. If those sectors are rewritten, the drive firmware can remap them away and the drive could be fine.
Searching around also finds discussions suggesting that some SSD models regularly toggle this value to 1 and back, and that it might be a mostly harmless firmware bug in the SMART reporting.
So you could just ignore it, and provided your filesystem can handle the occasional bad block read (i.e., it has some kind of robust RAID/redundancy under the hood), it might slowly clear itself as the filesystem overwrites those blocks. If your filesystem cannot handle bad block reads, you could get I/O errors every time something tries to read that block. You might still be able to recover by finding and deleting the affected file, after which the filesystem will eventually rewrite that sector.
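One way to find which file owns the bad sector, sketched here for ext4; the LBA, partition start, and block size are hypothetical values you would read from the self-test log, fdisk -l, and tune2fs -l respectively:

```shell
LBA=12345            # hypothetical LBA from the SMART self-test log
PART_START=2048      # partition start sector, from: fdisk -l /dev/sda
FS_BLOCK_SIZE=4096   # filesystem block size, from: tune2fs -l /dev/sdaN

# Translate the drive LBA into a filesystem block number.
FS_BLOCK=$(( (LBA - PART_START) * 512 / FS_BLOCK_SIZE ))
echo "$FS_BLOCK"

# Then ask ext4 which inode (and path) owns that block:
# debugfs -R "icheck $FS_BLOCK" /dev/sdaN    # block -> inode
# debugfs -R "ncheck <inode>" /dev/sdaN      # inode -> path
```

Deleting that file leaves the bad block in free space, where a later write can trigger the remap.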
You can also clear the Current_Pending_Sector count by explicitly overwriting those sectors. THIS WILL DESTROY DATA ON THE DISK! It could corrupt the filesystem in ways that lose all the data on the disk, not just what was in the bad sector. So only do this if you can afford to lose all the data on the disk.
You can find the bad sector by running a SMART long test:
# smartctl -t long /dev/sda
You can then check the status of the long test to see whether it's finished, and the LBA of the first error it encountered:
# smartctl -l selftest /dev/sda
smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.7.0-1-amd64] (local build)
Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Extended offline Completed: read failure 00% 17711 12345
# 2 Short offline Completed without error 00% 17709 -
# 3 Short captive Interrupted (host reset) 10% 450 -
# 4 Short captive Interrupted (host reset) 10% 228 -
You can then overwrite the bad sector using dd:
# dd if=/dev/zero of=/dev/sda seek=12345 count=1
Note, though, that dd's default bs=512 block size might be smaller than the physical sector size on the drive, so you may need to write up to count=8 blocks to wipe it completely. You can then rinse-and-repeat the test/overwrite cycle until all the bad sectors are overwritten. Finally, you can check that the count has been zeroed:
$ sudo smartctl -A /dev/sdc | grep Current_Pending_Sector
197 Current_Pending_Sector 0x0012 200 200 000 Old_age Always - 0
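If the drive reports 4096-byte physical sectors behind 512-byte logical ones (a 512e drive; the Hitachi above is 512-native), you can compute the aligned dd parameters explicitly rather than guessing at count=8. The values here are illustrative:

```shell
LBA=12345        # hypothetical LBA from the self-test log
LOGICAL=512      # logical sector size (the smartctl "Sector Size" line)
PHYSICAL=4096    # physical sector size on a 512e drive

COUNT=$((PHYSICAL / LOGICAL))    # 512-byte writes per physical sector
START=$((LBA / COUNT * COUNT))   # align down to the physical-sector boundary
echo "dd if=/dev/zero of=/dev/sda bs=$LOGICAL seek=$START count=$COUNT"
```

This prints the command instead of running it, so you can sanity-check the offsets before destroying anything.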
I experienced this on an old WD 200 GB HDD that had some kind of hiccup, possibly caused by poor hotplug connections, which left it with a Current_Pending_Sector count of 26. When I ran the long test, it passed without finding any bad sectors, but the count stayed at 26. I was able to zero the counter by zeroing out the whole drive with dd:
dd if=/dev/zero of=/dev/sda bs=1M status=progress
Note that bs=1M makes it go much faster; since it is not the 512-byte sector size, there will be a partial block at the end. Subsequent smartctl long and short tests have all reported the drive as fine.
Answered by Donovan Baarda on December 23, 2021
You now have a Current Pending Sector on the sda hard disk, which is used in a software RAID. This is the start of the drive dying, and you need to replace it.
This disk is designed for a desktop PC, so it tries to recover bad sectors by itself (not via the RAID). It re-reads a bad sector every few seconds until the sector's checksum comes out good (while the disk's performance drops dramatically), and then writes the recovered data to a reserved spare sector. But data rescued this way is often wrong: a CRC32 checksum can only detect an error, it cannot be used to reconstruct the data (unlike, for example, the XOR parity on RAID-5, which can). So when the RAID later reads the rescued sector from the bad drive, it gets different data that nevertheless reads back OK. If the bad block holds important system data, this can produce a kernel panic. That is why you need to replace the bad disk.
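The point about parity versus checksums can be shown in two lines of arithmetic: RAID-5 parity is the XOR of the data blocks, so any one lost block is the XOR of the survivors, whereas a checksum can only tell you that a block is wrong. A toy illustration with two arbitrary byte values:

```shell
d1=$((0xA5)); d2=$((0x3C))       # two data "blocks" (arbitrary bytes)
parity=$((d1 ^ d2))              # RAID-5 stores this alongside the data
recovered=$((parity ^ d2))       # lose d1? XOR the survivors to rebuild it
printf 'parity=%#x recovered_d1=%#x\n' "$parity" "$recovered"
```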
Answered by Mikhail Khirgiy on December 23, 2021
Your SMART Current Pending Sector attribute has a value of 1. This means there is a bad sector on the disk that the drive firmware has not yet been able to reallocate (reallocation happens when the sector is rewritten). Since your Reallocated_Sector_Ct is still zero, it is probably recoverable, even though your drive has been running for 5 years in a not-very-healthy environment, with temperatures up to 72 °C.
You can try to find this bad sector with dd if=/dev/sda of=/dev/null
and then you'll need to remap the sector by overwriting it.
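A slightly more informative version of that dd scan, sketched as a function (the name scan_dev and the 1 MiB chunk size are illustrative choices, not a standard tool; badblocks -sv /dev/sda is the ready-made utility for the same job):

```shell
# Read a device in 1 MiB chunks and report roughly where the first
# unreadable sector sits, so you know which LBA to overwrite.
scan_dev() {
    local dev="$1"
    local chunk=$((1024 * 1024))
    local size
    size=$(blockdev --getsize64 "$dev" 2>/dev/null) || size=$(stat -c %s "$dev")
    local i=0
    while [ $((i * chunk)) -lt "$size" ]; do
        if ! dd if="$dev" of=/dev/null bs="$chunk" skip="$i" count=1 2>/dev/null; then
            echo "read error near LBA $((i * chunk / 512))"
            return 1
        fi
        i=$((i + 1))
    done
    echo "no read errors"
}
# Usage: scan_dev /dev/sda
```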
Answered by AlexD on December 23, 2021