Re: RFC: GEOM MULTIPATH rewrite
On Jan 20, 2012, at 2:31 PM, Alexander Motin wrote:
> On 01/20/12 14:13, Nikolay Denev wrote:
>> On Jan 20, 2012, at 1:30 PM, Alexander Motin wrote:
>>> On 01/20/12 13:08, Nikolay Denev wrote:
>>>>> On 20.01.2012, at 12:51, Alexander Motin <mav@freebsd.org> wrote:
>>>>
>>>>> On 01/20/12 10:09, Nikolay Denev wrote:
>>>>>> Another thing I've observed is that active/active probably only makes sense if you are accessing a single LUN.
>>>>>> In my tests, where I have 24 LUNs that form 4 vdevs in a single zpool, the highest performance was achieved when I split the active paths among the controllers installed in the server importing the pool (basically "gmultipath rotate $LUN" in rc.local for half of the paths).
>>>>>> Using active/active in this situation resulted in fluctuating performance.
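(For illustration, such an rc.local snippet could look roughly like the following; the multipath device names are placeholders for whatever the LUNs were actually labeled as:)

for LUN in LUN1 LUN3 LUN5 LUN7 LUN9 LUN11; do
        # rotate the active path of every other LUN to the next available one,
        # so half of the LUNs prefer the other controller
        gmultipath rotate $LUN
done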
>>>>>
>>>>> How big was the fluctuation? Between the speed of one path and of all paths?
>>>>>
>>>>> Several active/active devices without knowledge of each other will, with some probability, send part of their requests via the same links, while ZFS itself already does some balancing between vdevs.
>>>>
>>>> I will test in a bit and post results.
>>>>
>>>> P.S.: Is there a way to enable/disable active-active on the fly? I'm currently re-labeling to achieve that.
>>>
>>> No, there is not right now. But for experiments you may achieve the same result by manually marking all paths except one as failed. It is not dangerous: if that remaining link fails, all the others will resurrect automatically.
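(As a concrete sketch, assuming a device labeled LUN1 whose alternate provider is da24; both names are placeholders:)

:~# gmultipath fail LUN1 da24      # mark the alternate provider as failed, leaving one active path
:~# gmultipath status LUN1         # verify which provider is active now
:~# gmultipath restore LUN1 da24   # bring the path back after the experiment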
>>
>> I had to destroy and relabel anyway, since I was not currently using active-active. Here's what I did (maybe a little too verbose):
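(Roughly, per LUN that boils down to something like the following; the device and provider names are placeholders, and -A is the Active/Active flag in the rewritten gmultipath:)

:~# gmultipath destroy LUN1             # tear down the existing multipath device
:~# gmultipath label -A LUN1 da0 da24   # re-label it with Active/Active enabled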
>>
>> And now a very naive benchmark:
>>
>> :~# dd if=/dev/zero of=/tank/TEST bs=1M count=512
>> 512+0 records in
>> 512+0 records out
>> 536870912 bytes transferred in 7.282780 secs (73717855 bytes/sec)
>> :~# dd if=/dev/zero of=/tank/TEST bs=1M count=512
>> 512+0 records in
>> 512+0 records out
>> 536870912 bytes transferred in 38.422724 secs (13972745 bytes/sec)
>> :~# dd if=/dev/zero of=/tank/TEST bs=1M count=512
>> 512+0 records in
>> 512+0 records out
>> 536870912 bytes transferred in 10.810989 secs (49659740 bytes/sec)
>>
>> Now deactivate the alternative paths (by marking them as failed):
>> And the benchmark again:
>>
>> :~# dd if=/dev/zero of=/tank/TEST bs=1M count=512
>> 512+0 records in
>> 512+0 records out
>> 536870912 bytes transferred in 1.083226 secs (495622270 bytes/sec)
>> :~# dd if=/dev/zero of=/tank/TEST bs=1M count=512
>> 512+0 records in
>> 512+0 records out
>> 536870912 bytes transferred in 1.409975 secs (380766249 bytes/sec)
>> :~# dd if=/dev/zero of=/tank/TEST bs=1M count=512
>> 512+0 records in
>> 512+0 records out
>> 536870912 bytes transferred in 1.136110 secs (472551848 bytes/sec)
>>
>> P.S.: The server is running 8.2-STABLE with a dual-port isp(4) card and is directly connected to a 4Gbps Xyratex dual-controller (active-active) storage array.
>> All 24 SAS drives are set up as single-disk RAID0 LUNs.
>
> This difference is too big to explain by inefficient path utilization alone. Could this storage have some per-LUN port/controller affinity that penalizes concurrent access to the same LUN from different paths? Could it be active/active at the port level, but active/passive for each specific LUN? If there really are two controllers inside, they may need to synchronize their caches or bounce requests between them, and that may be expensive.
>
> --
> Alexander Motin
Yes, I think that's what's happening. There are two controllers, each with its own CPU and cache, and cache synchronization is enabled.
I will try to test multipath with both paths connected to the same controller (there are two ports on each controller), but that will require remote hands and take some time.
In the meantime I've disabled the write-back cache on the array (which also disables the cache synchronization), and here are the results:
ACTIVE-ACTIVE:
:~# dd if=/dev/zero of=/tank/TEST bs=1M count=512
512+0 records in
512+0 records out
536870912 bytes transferred in 2.497415 secs (214970639 bytes/sec)
:~# dd if=/dev/zero of=/tank/TEST bs=1M count=512
512+0 records in
512+0 records out
536870912 bytes transferred in 1.076070 secs (498918172 bytes/sec)
:~# dd if=/dev/zero of=/tank/TEST bs=1M count=512
512+0 records in
512+0 records out
536870912 bytes transferred in 1.908101 secs (281363979 bytes/sec)
ACTIVE-PASSIVE (half of the paths failed the same way as in the previous email):
:~# dd if=/dev/zero of=/tank/TEST bs=1M count=512
512+0 records in
512+0 records out
536870912 bytes transferred in 0.324483 secs (1654542913 bytes/sec)
:~# dd if=/dev/zero of=/tank/TEST bs=1M count=512
512+0 records in
512+0 records out
536870912 bytes transferred in 0.795685 secs (674727909 bytes/sec)
:~# dd if=/dev/zero of=/tank/TEST bs=1M count=512
512+0 records in
512+0 records out
536870912 bytes transferred in 0.233859 secs (2295702835 bytes/sec)
This increased performance in both cases, probably because write-back caching does little for large sequential writes.
Anyway, ACTIVE-ACTIVE is still slower here, but not by that much.