Re: Blogbench RAID benchmarks

Board: DFBSD_kernel, posted 2011/07/22 07:01, post 20/23 in the thread
On Thu, Jul 21, 2011 at 3:35 PM, Freddie Cash <fjwcash@gmail.com> wrote:
> On Mon, Jul 18, 2011 at 7:06 PM, Matthew Dillon
> <dillon@apollo.backplane.com> wrote:
>
>>    Ok, well this is interesting.  Basically it comes down to whether we
>>    want to starve read operations or whether we want to starve write
>>    operations.
>>
>>    The FreeBSD results starve read operations, while the DragonFly results
>>    starve write operations.  That's the entirety of the difference between
>>    the two tests.
>
> Would using the disk schedulers in FBSD/DFly help with this at all?
>
> FreeBSD includes a geom_sched class for enabling pluggable disk
> schedulers (currently only a round-robin algorithm is implemented).
> http://info.iet.unipi.it/~luigi/geom_sched/

Page 39 of the presentation on GEOM_SCHED shows the following, indicating
that it should make a big difference in the blogbench results (note the
second result, with one greedy reader and one greedy writer):

  Some preliminary results on the scheduler's performance in some easy
  cases (the focus here is on the framework). Measurement uses multiple
  dd instances on a filesystem; all speeds in MiB/s.

  two greedy readers, throughput improvement
  NORMAL: 6.8 + 6.8 ; GSCHED RR: 27.0 + 27.0

  one greedy reader, one greedy writer, capture effect
  NORMAL: R: 0.234 W: 72.3 ; GSCHED RR: R: 12.0 W: 40.0

  multiple greedy writers, only small loss of throughput
  NORMAL: 16 + 16 ; RR: 15.5 + 15.5

  one sequential reader, one random reader (fio)
  NORMAL: Seq: 4.2 Rand: 4.2 ; RR: Seq: 30 Rand: 4.4

> And I believe DFly has dsched?
>
>>    This is all with swapcache turned off.  The only way to test in a
>>    fair manner with swapcache turned on (with a SSD) is if the FreeBSD
>>    test used a similar setup w/ZFS.
>
> ZFS includes its own disk scheduler, so geom_sched wouldn't help in that
> case.  Would be interesting to see a comparison of HAMMER+swapcache and
> ZFS+L2ARC, though.
>
> --
> Freddie Cash
> fjwcash@gmail.com

--
Freddie Cash
fjwcash@gmail.com
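P.S. For anyone who wants to try this, the round-robin scheduler can be
inserted transparently on an existing provider. A minimal sketch, assuming
a FreeBSD system with the geom_sched framework and gsched_rr module
available and a disk at /dev/ada0 (the device name is illustrative; see
gsched(8) for the exact node names on your version):

  # load the scheduling framework and the round-robin algorithm
  kldload geom_sched
  kldload gsched_rr

  # transparently insert an RR scheduler into ada0's geom chain
  gsched insert -a rr ada0

  # ... run blogbench or the dd tests against the disk as usual ...

  # tear the scheduler back down afterwards (node name per gsched(8))
  gsched destroy -v ada0.sched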
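The "greedy reader / greedy writer" capture-effect numbers above are also
easy to reproduce with plain dd. A sketch, assuming two large files under
a test mount at /mnt/test (paths and sizes are illustrative):

  # greedy reader: sequential read of a large pre-created file
  dd if=/mnt/test/f1 of=/dev/null bs=1m &

  # greedy writer: sequential write running alongside it
  dd if=/dev/zero of=/mnt/test/f2 bs=1m count=4096 &

  # wait for both, then compare the MiB/s figures dd prints
  wait

With the default elevator the writer tends to capture the disk (R: 0.234
vs W: 72.3 in the slide above); with GSCHED RR both keep making progress.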
Article ID (AID): #1EAA-rLG (DFBSD_kernel)