8.2: iSCSI: istgt a bit slow, I think
hi,
I'm testing the maximum throughput over iSCSI, but I've reached only
~50MB/s (dd if=/dev/zero of=/dev/da13 bs=1M count=2048) with a 1Gb/s
crossover cable and a raw disk as backing store. Both machines run
FreeBSD 8.2-STABLE, with istgt as the target and the onboard iSCSI
initiator.
With ZFS as the backing store we lose roughly another 8-10MB/s.
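Before blaming istgt I still want to rule out the raw TCP path itself.
A minimal check, assuming iperf from benchmarks/iperf is installed on
both boxes (the address is a placeholder for the target's crossover IP):

# on the target box
iperf -s
# on the initiator box
iperf -c 192.168.100.1 -t 30

If that already tops out well below wire speed, the problem isn't istgt.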
istgt.conf
======================
[global]
Timeout 30
NopInInterval 20
DiscoveryAuthMethod Auto
MaxSessions 32
MaxConnections 8
#FirstBurstLength 65536
MaxBurstLength 1048576
MaxRecvDataSegmentLength 262144
# maximum number of sending R2T in each connection
# actual number is limited to QueueDepth and MaxCmdSN and ExpCmdSN
# 0=disabled, 1-256=improves large writing
MaxR2T 32
# iSCSI initial parameters negotiate with initiators
# NOTE: incorrect values might crash
MaxOutstandingR2T 16
DefaultTime2Wait 2
DefaultTime2Retain 60
MaxBurstLength 1048576
[....]
[LogicalUnit4]
Comment "40GB Disk (iqn.san.foo:40gb)"
TargetName 40gb
TargetAlias "Data 40GB"
Mapping PortalGroup1 InitiatorGroup1
#AuthMethod Auto
#AuthGroup AuthGroup2
UnitType Disk
UnitInquiry "FreeBSD" "iSCSI Disk" "01234" "10000004"
QueueDepth 32
LUN0 Storage /failover/bigPool/disk40gb 40960MB
[LogicalUnit5]
Comment "2TB Disk (iqn.san.foo:2tb)"
TargetName 2tb
TargetAlias "Data 2TB"
Mapping PortalGroup1 InitiatorGroup1
#AuthMethod Auto
#AuthGroup AuthGroup2
UnitType Disk
UnitInquiry "FreeBSD" "iSCSI Disk" "01235" "10000005"
QueueDepth 32
LUN0 Storage /dev/da12 200480MB
=====================
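One more thing I plan to try on both hosts is raising the TCP socket
buffer limits; the values below are only illustrative starting points
for FreeBSD 8.x sysctls, not measured recommendations:

# /etc/sysctl.conf (illustrative values)
kern.ipc.maxsockbuf=2097152
net.inet.tcp.sendbuf_max=2097152
net.inet.tcp.recvbuf_max=2097152
net.inet.tcp.sendspace=262144
net.inet.tcp.recvspace=262144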
The raw disks themselves reach 150-200MB/s locally, with or without ZFS
(raidz2).
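(A quick local comparison run would be something like
dd if=/dev/zero of=/failover/bigPool/testfile bs=1M count=2048
with testfile being just a scratch path; keep in mind /dev/zero is
compressible, so with compression enabled on the dataset that number
would be optimistic.)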
We have 4GB RAM and 4 x 3GHz Xeon CPUs on board.
I expected we should reach 80-100MB/s, so istgt or the initiator seems
a bit slow to me.
I've just tested with the Ubuntu 10.10 initiator over a regular switched
network and got roughly 70MB/s, or a constant 80MB/s without ZFS.
Is this the limit of what we can reach, because of TCP and iSCSI overhead?
What we can't do is enable jumbo frames on the switched network: our
Cisco Catalyst switches (WS-X4515) don't support them.
I've tested jumbo frames (9k) over the crossover link, but the
performance was worse: roughly 20MB/s ....
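For reference, setting the 9k MTU on the crossover is just an ifconfig
change on both ends (em0 and the address are placeholders for our actual
interface and subnet):

ifconfig em0 mtu 9000
# or persistently in /etc/rc.conf:
ifconfig_em0="inet 192.168.100.1 netmask 255.255.255.0 mtu 9000"

Both ends of the link of course need to agree on the MTU.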
So, does anyone have some hints for me? :-)
cu denny