DaemonForums  

ZFS Performance monitoring

#1  14th November 2009
replaysMike (New User; Join Date: Nov 2009; Posts: 2)

Probably not the best title for my thread, but here goes...

I've built a ZFS-backed web server to stream videos using lighttpd, and I seem to be hitting some performance bottlenecks, but I'm not quite sure where to look. I can't get more than 300 Mbit/s out of the box, and I think it should be able to handle quite a bit more. There are 14 SATA drives plus 2 flash drives (for cache), set up as a single pool mirrored in pairs.
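For reference, a pool laid out like that would have been created along these lines. This is only a sketch; the exact command wasn't posted, and the device names are assumed from the zpool iostat output further down:

Code:
# sketch only - device names (da0..da15) and pool name assumed from the
# zpool iostat output below: seven mirrored pairs of SATA drives plus the
# two flash drives as L2ARC cache devices
zpool create content1 \
    mirror da0 da8   mirror da1 da9   mirror da2 da10 \
    mirror da3 da11  mirror da5 da13  mirror da6 da14 \
    mirror da7 da15 \
    cache da4 da12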

Here's what the box is up to right now:

Code:
vnstat -l output:
tx: 32330.48 KiB/s  12000 p/s
Code:
zpool iostat -v output (summary):
          operations    bandwidth
           read  write   read  write
pool      2.16K     14   269M  63.6K
cache      1.2K      2   127M   512K
Code:
netstat -na | grep ESTABLISHED | wc -l = 22236
kern.openfiles = 48790
Now, obviously I'm servicing an assload of connections. Lighttpd is configured for 500 processes with 100 connections each, which I know is a ridiculous amount, but the box stays responsive when accepting new connections. I would imagine the drive system is random-reading like a muthafucker (excuse my language), but I don't know how to get stats on that. top shows most lighttpd processes in the "zfs" or "zio" state. Since I'm streaming video, each connection is long-running and almost always in a read state, so I'm thinking my bottleneck is read IOPS rather than throughput. Caching on the flash drives doesn't seem to be used as heavily as I expected. Load averages on this box are running 35 35 35 or higher.
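For per-disk read rates and service times, the stock FreeBSD tools below would give that view. This is just a sketch; the flags and interval are examples, not output from this box:

Code:
# per-GEOM-provider reads/s, writes/s, KB/s, ms per operation and %busy,
# refreshed live
gstat
# extended per-device statistics every second; lots of reads/s with small
# transfers and long service times points at random-read saturation
iostat -x -w 1
# which processes are actually generating the disk I/O
top -m io -o total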

This box is obviously heavily loaded, but I can't quite pinpoint what is being hit the hardest. Methinks I've maxed out the I/O system: the sheer number of connections being served must be taxing the crap out of it with random read requests. Am I correct, and what is the best way to determine this?
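One specific thing worth checking, given that the flash-drive cache looks underused, is whether the L2ARC is actually getting hits. On FreeBSD the ARC/L2ARC counters are exported as sysctls; a sketch follows (the counters are cumulative since boot, so take two samples and diff them to get rates):

Code:
# ARC hit/miss counters (cumulative)
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses
# L2ARC (cache device) hit/miss counters and how much data it currently holds
sysctl kstat.zfs.misc.arcstats.l2_hits kstat.zfs.misc.arcstats.l2_misses
sysctl kstat.zfs.misc.arcstats.l2_size
# upper bound on ARC size in RAM
sysctl vfs.zfs.arc_max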

Thanks! (Hey phoenix, this is a Vancouver project; are you available for a consult?)

#2  14th November 2009
replaysMike (New User; Join Date: Nov 2009; Posts: 2)

Full zpool iostat -v output:

Code:
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
content1     790G  5.57T  2.15K     13   269M  62.2K
  mirror     113G   815G    316      2  38.6M  9.34K
    da0         -      -    156      0  19.5M  9.41K
    da8         -      -    152      0  19.0M  9.41K
  mirror     113G   815G    320      2  39.1M  9.16K
    da1         -      -    152      0  18.9M  9.22K
    da9         -      -    161      0  20.2M  9.22K
  mirror     113G   815G    323      1  39.4M  8.79K
    da2         -      -    166      0  20.8M  8.84K
    da10        -      -    150      0  18.7M  8.84K
  mirror     113G   815G    315      1  38.5M  8.67K
    da3         -      -    144      0  17.9M  8.72K
    da11        -      -    165      0  20.6M  8.72K
  mirror     113G   815G    307      1  37.4M  9.03K
    da5         -      -    151      0  18.9M  9.08K
    da13        -      -    148      0  18.5M  9.08K
  mirror     113G   815G    309      1  37.6M  8.56K
    da6         -      -    143      0  17.9M  8.62K
    da14        -      -    158      0  19.8M  8.62K
  mirror     113G   815G    312      1  38.1M  8.64K
    da7         -      -    153      0  19.1M  8.71K
    da15        -      -    152      0  19.0M  8.71K
cache           -      -      -      -      -      -
  da4       2.26G  72.2G    686      1  65.5M   228K
  da12      2.25G  72.2G    691      1  65.5M   227K
----------  -----  -----  -----  -----  -----  -----

Full top output:
Code:
last pid: 12617;  load averages: 63.44, 41.37, 36.95                                                        up 0+02:54:37  02:31:28
530 processes: 3 running, 527 sleeping
CPU:  0.3% user,  0.0% nice, 28.0% system,  3.3% interrupt, 68.4% idle
Mem: 652M Active, 12G Inact, 2216M Wired, 9700K Cache, 851M Buf, 876M Free
Swap: 32G Total, 44K Used, 32G Free

vmstat
Code:
 procs      memory      page                    disks     faults         cpu
 r b w     avm    fre   flt  re  pi  po    fr  sr ad14 ad16   in   sy   cs us sy id
12 0 0   3429M   672M   208   0   0   0 66726 3697  61   0 32533 5279 173191  0 37 63
vnstat
Code:
Monitoring igb0...    (press CTRL-C to stop)

   rx:    1063.46 KiB/s 22569 p/s          tx:   32381.85 KiB/s 11929 p/s
