Re: Stable tag slipped

Board: DFBSD_kernel · Posted: 2005/04/05 07:01 · 0 pushes, 0 comments, 0 participants · Post 6/9 in thread
Matthew Dillon wrote:
> :I've been running 3 loops of mirroring wget -m on the apache manual,
> :with the fetched page deleted in between, MaxClients=256,
> :MaxKeepAliveRequests=0. One of the loops is run locally.
> :
> :There are more than 4000 connections in TIME_WAIT, more than 4000 sockets.
> :
> :The server responds very well and there are no delays.
> :
> :Tomorrow I will try setting MaxKeepAliveRequests to an impossibly high
> :number to generate long-running connections and see how it copes.
> :
> :I welcome suggestions on what tests to run, as I am an expert in neither
> :the OS nor networking.
> :
> :Raphael
>
>     Have you adjusted the portrange?  Do these:
>
>     sysctl net.inet.ip.portrange
>     sysctl net.inet.tcp.msl
>     sysctl kern.ipc.maxsockets
>     sysctl net.inet.tcp.recvspace
>     sysctl net.inet.tcp.sendspace
>
>     You may also have to lower the MSL on the originating machines to reduce
>     the number of sockets being held in a TIME_WAIT state.
>
>     (default is 30000ms)
>     sysctl net.inet.tcp.msl=15000

Currently, with the same tests still running:

dragonfly# netstat -tn | wc -l
    3170
dragonfly# netstat -tn | fgrep TIME_WAIT | wc -l
    3105
dragonfly# netstat -m
197/551/18176 mbufs in use (current/peak/max):
        150 mbufs allocated to data
        47 mbufs allocated to packet headers
114/246/4544 mbuf clusters in use (current/peak/max)
629 Kbytes allocated to network (4% of mb_map in use)
0 requests for memory denied
0 requests for memory delayed
0 calls to protocol drain routines
dragonfly# uptime
 0:45  up 1:31, 2 users, load averages: 0,19 0,28 0,30
dragonfly# uptime
 0:45  up 1:31, 2 users, load averages: 0,18 0,27 0,30
dragonfly# sysctl net.inet.ip.portrange
net.inet.ip.portrange.lowfirst: 1023
net.inet.ip.portrange.lowlast: 600
net.inet.ip.portrange.first: 1024
net.inet.ip.portrange.last: 5000
net.inet.ip.portrange.hifirst: 49152
net.inet.ip.portrange.hilast: 65535
dragonfly# sysctl net.inet.tcp.msl
net.inet.tcp.msl: 30000
dragonfly# sysctl kern.ipc.maxsockets
kern.ipc.maxsockets: 8104
dragonfly# sysctl net.inet.tcp.recvspace
net.inet.tcp.recvspace: 57344
dragonfly# sysctl net.inet.tcp.sendspace
net.inet.tcp.sendspace: 32768
dragonfly#

The machine doing the hammering is a Mac running MacOSX.

[pomme:~] raphael% sysctl net.inet.tcp.msl
net.inet.tcp.msl: 600
[pomme:~] raphael% sysctl kern.ipc.maxsockets
kern.ipc.maxsockets: 512

Just keep in mind there is one loop running locally on the dfbsd machine
being tested.

Why should the portrange be adjusted?

Raphael
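For context on Matthew's question: every outgoing connection consumes one ephemeral port on the originating side, and with the default range of first=1024 to last=5000 shown above, only about 4000 ports are available. Since a closed connection lingers in TIME_WAIT for 2*MSL (60 seconds at the default msl of 30000 ms), a sustained test can exhaust the range even when the server itself keeps up. A minimal sketch of the tuning being suggested, using the net.inet.ip.portrange and net.inet.tcp.msl sysctls shown in the output above (the widened upper bound of 40000 is illustrative, not a value from this thread):

    # widen the ephemeral port range used for outgoing connections
    # (the upper bound here is illustrative)
    sysctl net.inet.ip.portrange.first=1024
    sysctl net.inet.ip.portrange.last=40000
    # halve the MSL so TIME_WAIT sockets are recycled sooner (default 30000 ms)
    sysctl net.inet.tcp.msl=15000

Roughly: with ~39000 usable ports and a 30-second TIME_WAIT, a client could open on the order of 1300 new connections per second before running out, versus about 65 per second with the defaults.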
Article ID (AID): #12KSSp00 (DFBSD_kernel)