Quote:
Originally Posted by vermaden
Yes, but your results are from a virtual image on disk within virtualization. You will ALWAYS get "strange" results about disk performance when you use a disk-in-a-file for your virtualized OS; I also got strange and relatively faster results than the drive can do, even in slower QEMU.
I had a look at the code and TBH it's even more naive than I expected.
Check it out for yourself.
The fantastic scores I was getting were bogus. I think it's down to two factors:
1. The virtual disk was not preallocated, so there is some kind of COW mechanism going on here, though I don't know the details. This layer of abstraction was, I believe, causing the skewed results (see the example just below this list).
2. The disk is too small, so the full/quarter/half stroke readings have no meaning here.
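In case it helps, here is one way to create a sparse vs. a fully preallocated raw image on the host. The file names and the 8G size are just placeholders, not my actual setup, and qcow2/VDI images have their own preallocation options:
Code:
# sparse image: apparent size is 8G, but blocks only get allocated
# on first write -- this is the layer that skewed my first results
truncate -s 8G disk-sparse.img

# fully preallocated image: every block is written out up front
dd if=/dev/zero of=disk-prealloc.img bs=1m count=8192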
Here is the output with a preallocated disk; this is more in line with what one would expect:
Code:
/dev/ad1
        512                     # sectorsize
        8589934592              # mediasize in bytes (8.0G)
        16777216                # mediasize in sectors
        17753                   # Cylinders according to firmware.
        15                      # Heads according to firmware.
        63                      # Sectors according to firmware.
        ad:01000000000000000001 # Disk ident.

I/O command overhead:
        time to read 10MB block      0.192098 sec  =  0.009 msec/sector
        time to read 20480 sectors   2.492608 sec  =  0.122 msec/sector
        calculated command overhead                =  0.112 msec/sector

Seek times:
        Full stroke:      250 iter in   2.628599 sec =   10.514 msec
        Half stroke:      250 iter in   2.394040 sec =    9.576 msec
        Quarter stroke:   500 iter in   3.194460 sec =    6.389 msec
        Short forward:    400 iter in   3.233777 sec =    8.084 msec
        Short backward:   400 iter in   3.047335 sec =    7.618 msec
        Seq outer:       2048 iter in   0.335767 sec =    0.164 msec
        Seq inner:       2048 iter in   0.335223 sec =    0.164 msec

Transfer rates:
        outside:       102400 kbytes in   1.474514 sec =    69447 kbytes/sec
        middle:        102400 kbytes in   1.627657 sec =    62913 kbytes/sec
        inside:        102400 kbytes in   1.558496 sec =    65704 kbytes/sec
The full/half/quarter stroke readings may look overoptimistic, but they are not; it has to do with how the benchmark calculates these values. If the whole disk had been used for this virtual disk, I would expect the numbers to be worse than 25 ms.
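To put a rough number on that (assuming the stroke tests seek across the advertised media size, which is my reading of it rather than something I traced through the source): the whole "full stroke" here only spans the 8 GB image file, i.e. a small slice of the host platter, so the heads barely move compared to a real edge-to-edge seek.
Code:
# the entire "full stroke" span on this virtual disk, in GB
echo $((16777216 * 512 / 1024 / 1024 / 1024))   # prints 8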
The transfer rates are more or less the same, as expected.
Also, I would expect the COW results to degenerate to the above values once the disk fills up.
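If you want to watch that happen, comparing the apparent size of the image with the blocks actually allocated on the host should show the sparse file growing toward its full size as the guest writes (file name is hypothetical):
Code:
# apparent size vs. blocks actually allocated on the host;
# du creeps up toward ls -lh as the guest fills the disk
ls -lh disk-sparse.img
du -h  disk-sparse.img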