#3 - 27th May 2013
Septic (New User - Join Date: Mar 2010, Posts: 9)

Ah, forgot about tmpfs, good call!
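For reference, a tmpfs target like the one used in the test below can be set up on FreeBSD along these lines (the mount point matches the rsync command later in this post; everything else is a generic sketch, not taken from the thread):

```shell
# Create the mount point and mount a tmpfs there.
# With no size option, FreeBSD's tmpfs sizes itself from available memory.
mkdir -p /tmp/tmpfs
mount -t tmpfs tmpfs /tmp/tmpfs
```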

After enabling soft updates and noatime, performance was a little better - it would copy 4-6 files at 'normal' speed, then stall, rinse and repeat (faster than before, though, so already a plus).
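For anyone following along, those two tweaks would typically be applied on a FreeBSD UFS filesystem like this (the device/partition name here is an assumption for illustration, not taken from my setup):

```shell
# Soft updates are toggled with tunefs; the filesystem must be
# unmounted (or mounted read-only) first. Partition name is hypothetical.
umount /mnt
tunefs -n enable /dev/ada0p2

# Remount with atime updates disabled to avoid a metadata write per read.
mount -o noatime /dev/ada0p2 /mnt
```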

I thought I'd run a test on something not yet backed up with lots of small files - I had the Linux kernel 2.6.31 sources, so used that:

Code:
# time rsync -turlgxvh --progress /share/Programming/C/linux-2.6.31/ /tmp/tmpfs/

...

sent 343.97M bytes  received 559.85K bytes  76.56M bytes/sec
total size is 342.20M  speedup is 0.99
2.335u 2.950s 0:03.75 140.8%    475+2250k 22+0io 0pf+0w
Clearly the zpool performs nicely - nearly 350MB in under 4 seconds. Back onto the newly remounted backup drive, though, the same copy took over 15 minutes, with obvious I/O stalls:

Code:
# time rsync -turlgxvh --progress /share/Programming/C/linux-2.6.31/ /mnt/Programming/C/linux-2.6.31/

...

sent 343.97M bytes  received 559.87K bytes  379.64K bytes/sec
total size is 342.20M  speedup is 0.99
5.690u 8.376s 15:07.90 1.5%     477+2259k 386+432io 0pf+0w
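To put the two runs side by side, a quick sanity check of the slowdown using the rates rsync reported above:

```python
# Transfer rates reported by the two rsync runs (bytes/sec).
zpool_to_tmpfs = 76.56e6    # zpool -> tmpfs: 76.56M bytes/sec
to_backup_disk = 379.64e3   # zpool -> backup drive: 379.64K bytes/sec

slowdown = zpool_to_tmpfs / to_backup_disk
print(f"backup drive is ~{slowdown:.0f}x slower")  # ~202x
```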
As it's the destination disk, I thought I'd check its SMART details:

Code:
# smartctl -a /dev/ada0
smartctl 6.1 2013-03-16 r3800 [FreeBSD 9.1-RELEASE amd64] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Caviar Green
Device Model:     WDC WD20EADS-00R6B0
Serial Number:    WD-WCAVY1950152
LU WWN Device Id: 5 0014ee 2592e4d98
Firmware Version: 01.00A01
User Capacity:    2,000,398,934,016 bytes [2.00 TB]
Sector Size:      512 bytes logical/physical
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ATA8-ACS (minor revision not indicated)
SATA Version is:  SATA 2.6, 3.0 Gb/s
Local Time is:    Mon May 27 11:00:33 2013 BST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

...

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   193   193   051    Pre-fail  Always       -       1778345
  3 Spin_Up_Time            0x0027   182   147   021    Pre-fail  Always       -       7866
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       553
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   069   069   000    Old_age   Always       -       23140
 10 Spin_Retry_Count        0x0032   100   100   000    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   100   253   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       63
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       37
193 Load_Cycle_Count        0x0032   187   187   000    Old_age   Always       -       41393
194 Temperature_Celsius     0x0022   118   108   000    Old_age   Always       -       34
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   198   198   000    Old_age   Always       -       754
198 Offline_Uncorrectable   0x0030   200   200   000    Old_age   Offline      -       308
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       1
200 Multi_Zone_Error_Rate   0x0008   001   001   000    Old_age   Offline      -       261345

SMART Error Log Version: 1
No Errors Logged

...
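The non-zero Current_Pending_Sector and Offline_Uncorrectable counts above are the sort of thing worth scanning for automatically. A hypothetical helper (not from this thread - the function name, watched-ID set, and "any non-zero raw value" rule are my assumptions, not smartmontools policy) that picks those rows out of `smartctl -A` output:

```python
# IDs of the commonly-watched reallocation/pending-sector attributes
# (an assumption for this sketch, not an official list).
WATCHED = {5, 196, 197, 198}

def flag_smart_attributes(smartctl_output: str):
    """Return [(id, name, raw_value)] for watched attributes with raw > 0."""
    flagged = []
    for line in smartctl_output.splitlines():
        fields = line.split()
        # Attribute rows start with a numeric ID; the raw value is last.
        if len(fields) >= 10 and fields[0].isdigit():
            attr_id = int(fields[0])
            if attr_id in WATCHED:
                raw = int(fields[-1])
                if raw > 0:
                    flagged.append((attr_id, fields[1], raw))
    return flagged

# Sample rows taken from the smartctl output above.
sample = """\
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0032   198   198   000    Old_age   Always       -       754
198 Offline_Uncorrectable   0x0030   200   200   000    Old_age   Offline      -       308
"""

print(flag_smart_attributes(sample))
# -> [(197, 'Current_Pending_Sector', 754), (198, 'Offline_Uncorrectable', 308)]
```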
While I hadn't realised the drive had clocked up nearly 3 years of total power-on time, the raw read error rate is a bit concerning - could it be as simple as the drive itself giving grief? Its usage before now never showed any stalling or other issues, which is why I hadn't considered it.

Thanks for the help so far