Re: svn commit: r239598 - head/etc/rc.d

Board: FB_security — posted 13 years ago (2012/09/08 14:01), 0 upvotes
0 comments, 0 participants, thread 27/29 (same-subject series)
On 2012-Sep-05 02:12:48 +0100, RW <rwmaillists@googlemail.com> wrote:
>All of the low-grade entropy should go through sha256.

Overall, I like the idea of feeding the high-volume, mixed-quality "entropy" through SHA-256 or similar.

>Anything written into /dev/random is passed by random_yarrow_write() 16
>bytes at a time into random_harvest_internal(), which copies it into a
>buffer and queues it up. If there are 256 buffers queued,
>random_harvest_internal() simply returns without doing anything.

This would seem to open up a denial-of-entropy attack on random(4): all entropy sources feed into Yarrow via random_harvest_internal(), which queues the input into a single queue - harvestfifo. When this queue is full, further input is discarded. If I run "dd if=/dev/zero of=/dev/random" then harvestfifo will be kept full of NULs, resulting in other entropy events (particularly from within the kernel) being discarded. There would still be a small amount of entropy from the get_cyclecount() calls, but this is minimal.

Is it worth splitting harvestfifo into multiple queues to prevent this? At least a separate queue for RANDOM_WRITE, and potentially separate queues for each entropy source.

-- 
Peter Jeremy
Article code (AID): #1GIjyp4D (FB_security)