Slow rsync backup from zfs pool with 99% of files

I have a zfs pool that I want to back up offsite once a month, storing the backup on a separate disk that I attach locally.

When I run the rsync, the majority of new files each sit for 1-8 seconds before they start moving; once the transfer actually starts, the speed is perfectly fine, usually over 50MB/s. It's the sitting there on the filename, with no other progress, that is causing the delay.

My rsync command (with or without -h or --progress makes no difference):

Code:
# rsync -turlgxvh --progress /share/ /mnt/
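If it helps narrow things down, my plan is to try a dry run with itemised output first; if the same per-file pauses show up with no data being written, that would point at the scan/metadata side rather than the copy itself. This is just a sketch of what I have in mind, not something I've run yet:

Code:
# rsync --dry-run --itemize-changes -turlgx /share/ /mnt/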
The system is running an Intel Xeon E3-1220 and has 16GB of ECC RAM (CPU usage < 1%, RAM around 50% wired).
The zfs pool consists of 4x 3TB Western Digital Red disks in raidz2:

Code:
# zpool status share
  pool: share
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        share       ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            ada5    ONLINE       0     0     0
            ada3    ONLINE       0     0     0
            ada4    ONLINE       0     0     0
            ada6    ONLINE       0     0     0

errors: No known data errors
The copy is being done to a 2TB Western Digital Caviar Green that was set up as follows:

Code:
# gpart create -s gpt ada0
# gpart add -t freebsd-ufs ada0
# newfs /dev/ada0p1
# mount /dev/ada0p1 /mnt/
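To rule out the destination disk on its own, I'm also thinking of a simple sequential write test onto the UFS filesystem, something along these lines (the test file path is just a placeholder):

Code:
# dd if=/dev/zero of=/mnt/ddtest bs=1m count=4096
# rm /mnt/ddtest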
The pool bandwidth itself doesn't look like a problem (apart from the obvious lack of transfer):

Code:
# zpool iostat share 2
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
share       2.56T  8.31T     10      0   229K      0
share       2.56T  8.31T      0      0      0      0
share       2.56T  8.31T      0     37      0  68.2K
share       2.56T  8.31T      0      0      0      0
share       2.56T  8.31T      0      0      0      0
share       2.56T  8.31T      0      0      0      0
share       2.56T  8.31T      0      0      0      0
share       2.56T  8.31T     12      6  1.46M  17.1K
share       2.56T  8.31T      0      0      0      0
share       2.56T  8.31T      0      0  12.5K      0
share       2.56T  8.31T      0      0      0      0
share       2.56T  8.31T      2     34  8.24K  58.7K
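If it's useful, I could also watch both sides of the copy at once with gstat while rsync runs, to see whether the stall is on the pool disks or on ada0; something like this (the device filter is just my guess at the right regex):

Code:
# gstat -f 'ada0|ada3|ada4|ada5|ada6'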
Note that this is the initial creation of the offsite disk; in future the file differences will be much smaller and I could live with this speed. For 1.3TB of data, though, it has now been running the whole weekend and has only done 678GB.

Code:
# df -h share
Filesystem    Size    Used   Avail Capacity  Mounted on
share         5.3T    1.3T    4.1T    24%    /share
Is there anything I can try to see where the bottleneck is (i.e. whether it's UFS or ZFS)? I've tried watching the files it stalls on, but they're all of varying sizes, types and compression. The current folder (extracted files from a Server 2003 ISO) has been going for about 3 hours now!
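One idea I had is to attach truss to the receiving rsync during one of the pauses to see which syscall it's sitting in, roughly like this (the PID is a placeholder for whatever ps reports):

Code:
# ps ax | grep rsync
# truss -p 12345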

In the future I'll probably just gzip each folder in the share root and transfer them individually.
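That would be roughly one archive per top-level folder, something along these lines (the folder name is just an example):

Code:
# tar -czf /mnt/foldername.tar.gz -C /share foldername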