Re: kernel leaking memory somewhere
2009/12/16 Matthew Dillon <dillon@apollo.backplane.com>
> :Hi guys,
> :
> :This is TOP output on FBSD 7 box that runs the same software as the DF
> :box. Actually the FBSD box also runs MySQL and Ruby on Rails!
> :
> :...
> :As you can see the active memory here is 294MB.
> :
> :49 processes: 49 running
> :CPU states: 0.0% user, 0.0% nice, 0.6% system, 0.0% interrupt, 99.4% idle
> :Memory: 1213M Active, 1836M Inact, 320M Wired, 25M Cache, 199M Buf, 114M Free
> :Swap: 4096M Total, 2692K Used, 4093M Free
> :...
> :
> :The difference is clear. The two servers have the same amount of physical memory.
> :
> :I'm going to try the memory program, but I'm telling you, all I really need
> :to do is start some heavy I/O and get those postgres processes going
> :(they usually only use about 30MB RES memory) and the box starts swapping.
> :
> :
> :Petr
>
> This has nothing to do with actual memory pressure. The core pageout
> code is similar but the code that manages the memory pressure is
> completely different between FreeBSD and DragonFly and that controls
> the balance between the inactive and active queues.
>
> Also, and this is important... DragonFly maintains the active/inactive
> state for VM pages cached by the filesystem via the buffer cache,
> particularly if a page goes from wired to unwired or vice versa.
> I have no idea if FreeBSD did any similar work.
>
> I also did work a few years ago related to properly treating the
> entire available VM as (memory+swap) instead of just (swap), which
> changes how memory pressure related to dirty pages is accounted for.
> i.e. in DFly if you do not configure swap space the system will run all
> the way to the point where 90% of the VM pages in the system are dirty.
>
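(Aside: the active/inactive balancing described above can be sketched as a simple two-queue page scheme. This is a rough illustration only, not DragonFly's actual pageout code; the capacity, deactivation rule, and class names are made up for the example.)

```python
from collections import deque

class TwoQueuePager:
    """Toy model of active/inactive page queues (illustration only).

    Touched pages go on the active queue; under memory pressure a scan
    demotes pages that were not referenced recently to the inactive
    queue, and inactive pages are the first reclaim victims.
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self.active = deque()     # recently used pages
        self.inactive = deque()   # reclaim candidates
        self.referenced = set()   # pages touched since the last scan

    def touch(self, page):
        self.referenced.add(page)
        if page in self.inactive:
            self.inactive.remove(page)   # reuse re-activates a page
        if page not in self.active:
            self.active.append(page)
        if len(self.active) + len(self.inactive) > self.capacity:
            self._scan()

    def _scan(self):
        # Demote active pages that were not referenced since last scan.
        for page in list(self.active):
            if page not in self.referenced:
                self.active.remove(page)
                self.inactive.append(page)
        self.referenced.clear()
        # Reclaim oldest inactive pages first until we fit again.
        while len(self.active) + len(self.inactive) > self.capacity:
            victim_q = self.inactive if self.inactive else self.active
            victim_q.popleft()

pager = TwoQueuePager(capacity=4)
for p in ["a", "b", "c", "d", "e", "a", "f"]:
    pager.touch(p)
print(list(pager.active), sorted(pager.inactive))  # ['f'] ['a', 'd', 'e']
```

The point of the two queues is exactly the balance Dillon mentions: how aggressively pages move from active to inactive determines how much of RAM is effectively reclaimable, and FreeBSD and DragonFly tune that balance differently.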
How hard would it be to change those settings at runtime, especially how much
memory is available for disk caching and the total available memory space?
In virtualized environments (e.g. kvm or vkernel), it could make great sense
to "return" memory pages to the host. Think about vkernels that don't need a
fixed-size memory image but then use most of their space for disk caching,
which probably makes little sense in a vkernel anyway (as it would
double-cache those blocks, once in the host and a second time in the
vkernel).

Just my thoughts, as I recently stumbled across the virtio-balloon driver :)
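(For anyone unfamiliar with the balloon idea: the guest driver allocates and pins pages it doesn't currently need and reports them to the host, which can then reuse the backing memory; deflating hands the pages back to the guest. The sketch below is a hypothetical toy model of that accounting, not the actual virtio-balloon code.)

```python
class Guest:
    """Toy model of memory ballooning (illustration only)."""
    def __init__(self, total_pages):
        self.total_pages = total_pages
        self.ballooned = set()   # pages pinned by the balloon driver

    def usable_pages(self):
        # Pages the guest can still use for processes and disk cache.
        return self.total_pages - len(self.ballooned)

    def inflate(self, n):
        """Pin n free pages; the host may now reuse their backing memory."""
        start = len(self.ballooned)
        for page in range(start, start + n):
            self.ballooned.add(page)
        return n

    def deflate(self, n):
        """Return up to n ballooned pages to the guest."""
        n = min(n, len(self.ballooned))
        for _ in range(n):
            self.ballooned.pop()
        return n

guest = Guest(total_pages=1024)
guest.inflate(256)           # host pressure: give back 256 pages
print(guest.usable_pages())  # 768
guest.deflate(128)           # guest needs memory again
print(guest.usable_pages())  # 896
```

Inflating the balloon is exactly the "return memory to the host" operation above: the guest voluntarily shrinks its usable memory, which in turn shrinks how much it will spend on double-cached disk blocks.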
Regards,
Michael