Posted by rocket357 on 14th June 2011

Quote:
Originally Posted by sharris
You can mark this thread solved when somebody proves it, then says so.
Just out of curiosity, what does it matter whether the data is stored at the outer edge of the platters or the inner edge? I know you're talking about speed, and I get that. What I'm asking is: as long as you have a model of the speed at various *logical* locations on disk, why do the terms "inner" or "outer" matter?

For instance, you could use a little bit of "magic" that Gregory Smith wrote about in "PostgreSQL 9.0 High Performance" (Chapter 3, Database Hardware Benchmarking): zcav, part of the bonnie++ suite of tools.

http://www.coker.com.au/bonnie++/zcav/

The terms "first", "outer", etc... are pretty meaningless since what you're talking about is not a standard, per se, but rather a convention. It's entirely possible for a hard drive manufacturer to use the physical "inner" section of the platters as the "first" sectors of the drive, though by convention it's usually the opposite since the physical "outer" section of the platters is denser and reads faster. (I'd bet that even though it's "convention", most if not all hard drive manufacturers would use the "outer" sections as "first" given the performance characteristics.)

I ran into the same "abstraction" issue when I asked the question "if a CPU instruction is a string of 1s and 0s, what physically happens when those 1s and 0s are run on a given processor? In other words, what circuits are turned on and off by a given instruction?" There is no answer, because the question is asked at a logical level and not at an implementation level (and it's unlikely that Intel is going to hand over the docs explaining how their CPUs work at that level of detail, and it's unlikely I'd understand them even if they did). The same question, researched in terms of MIPS, got the answer "it's implementation-dependent". Well, yeah, obviously...the same goes for hard drives.

So there's only one thing left to do...test it on a logical level (all of this done on my OpenBSD-CURRENT workstation):

Code:
# pkg_add bonnie++ gnuplot

# df -h
Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/wd0a     1005M    105M    850M    11%    /
/dev/wd0m      412G   93.6G    298G    24%    /home
/dev/wd0d      3.9G   54.0K    3.7G     0%    /tmp
/dev/wd0f      2.0G    565M    1.3G    30%    /usr
/dev/wd0g     1005M    180M    775M    19%    /usr/X11R6
/dev/wd0h      9.8G    3.4G    6.0G    36%    /usr/local
/dev/wd0j      2.0G    4.1M    1.9G     0%    /usr/obj
/dev/wd0k      9.8G    315M    9.0G     3%    /usr/ports
/dev/wd0i      2.0G    811M    1.1G    42%    /usr/src
/dev/wd0l      3.9G    518M    3.2G    14%    /usr/xenocara
/dev/wd0e     10.8G   25.3M   10.2G     0%    /var

# for slice in a d e f g h i j k l m c; do zcav /dev/rwd0$slice > rwd0$slice.zcav; done
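Once the runs finish, you can get a quick per-slice average without plotting anything. This is just a rough sketch that assumes zcav's usual output format (comment lines starting with '#', with throughput in the second column, in whatever units your zcav build reports):

Code:
# quick per-slice average once the zcav runs finish (skips the '#' header
# lines; the second column is the throughput zcav reports)
for f in rwd0?.zcav; do
    awk '!/^#/ && NF >= 2 { s += $2; n++ }
         END { if (n) printf "%s: %.1f avg\n", FILENAME, s / n }' "$f"
done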
Then use something like this for gnuplot:

Code:
unset autoscale x
set autoscale xmax
unset autoscale y
set autoscale ymax
set xlabel "Position GB"
set ylabel "KB/s"
set key right bottom
set title "Seagate Barracuda ST3500418AS"
plot "rwd0c.zcav" title "rwd0c"
replot "rwd0a.zcav" title "rwd0a"
replot "rwd0d.zcav" title "rwd0d"
replot "rwd0e.zcav" title "rwd0e"
replot "rwd0f.zcav" title "rwd0f"
replot "rwd0g.zcav" title "rwd0g"
replot "rwd0h.zcav" title "rwd0h"
replot "rwd0i.zcav" title "rwd0i"
replot "rwd0j.zcav" title "rwd0j"
replot "rwd0k.zcav" title "rwd0k"
replot "rwd0l.zcav" title "rwd0l"
# the png terminal is set just before the final replot, so the complete
# graph (all slices) ends up in the output file
set terminal png
set output "rwd0c-zcav.png"
replot "rwd0m.zcav" title "rwd0m"
This is a 500 GB disk, and everything but /home is contained in the "first" 10-15% of the disk, which is likely within a single "zone". Accordingly, only /home will see performance degradation. Looking at the raw numbers, I'm seeing ~120 MB/sec across all slices (except /home and rwd0c, neither of which has finished yet, though /home is starting to show ~95 MB/sec, so it's far enough along to see the degradation). This makes it difficult to translate disk sections to slices here, because only one slice is realistically big enough to traverse zones...but that likely isn't the case for you, since you have multiple partitions spanning large sections of the disk.

I'd be interested to see if you could correlate the partitions to drive zones.
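One way to attempt that correlation: the disklabel offsets tell you where each slice begins and ends in logical blocks, which you can line up against the zcav curve. A rough sketch, assuming the usual OpenBSD disklabel output with size and offset (in 512-byte sectors) in the second and third columns:

Code:
# disklabel wd0 | awk '/^ *[a-p]:/ { printf "%s start %.1f GB, size %.1f GB\n", $1, $3 * 512 / 1e9, $2 * 512 / 1e9 }'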

Edit - I tried this with a RAID 1 of 73 GB 15k SAS drives with a default Ubuntu install on them, and it was very easy to see that the sdc5 swap partition sat on the slowest part, at the "end" of the disk.

__________________
Linux/Network-Security Engineer by Profession. OpenBSD user by choice.

Last edited by rocket357; 14th June 2011 at 06:37 PM.