Re: ZFS corruption due to lack of space?
On 2012-Oct-31 17:25:09 -0000, Steven Hartland <steven@multiplay.co.uk> wrote:
>Been running some tests on new hardware here to verify all
>is good. One of the tests was to fill the zfs array which
>seems like it's totally corrupted the tank.
I've accidentally "filled" a pool, and had multiple processes try to
write to the full pool, without either emptying the free space reserve
(so I could still delete the offending files) or corrupting the pool.
Had you tried to read/write the raw disks before you tried the
ZFS testing? Do you have compression and/or dedupe enabled on
the pool?
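(If you're not sure, and assuming the pool really is named "tank", something
like

  zfs get compression,dedup tank

will show the current settings for both.)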
>1. Given the information it seems like the multiple writes filling
>the disk may have caused metadata corruption?
I don't recall seeing this reported before.
>2. Is there any way to stop the scrub?
Other than freeing up some space, I don't think so. If this is a test
pool that you don't need, you could try destroying it and re-creating
it - that may be quicker and easier than recovering the existing pool.
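(Again assuming the pool is named "tank", that would be roughly

  zpool destroy tank
  zpool create tank <your original vdev layout>

zpool destroy shouldn't need any free space inside the pool, so it ought to
work even while the pool is wedged full.)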
>3. Surely low space should never prevent stopping a scrub?
As Artem noted, ZFS is a copy-on-write filesystem. It is supposed to
reserve some free space so that metadata updates (stopping scrubs,
deleting files, etc.) still work even when the pool is "full", but I have
seen reports of this not working correctly in the past. A
truncate-in-place may work.
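(By truncate-in-place I mean zeroing out an existing file rather than
unlinking it, e.g.

  truncate -s 0 /tank/some_large_file

or ": > /tank/some_large_file" from sh, where /tank/some_large_file is one
of the files from your fill test.  That sometimes succeeds when rm(1) fails
with ENOSPC.)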
You could also try asking on zfs-discuss@opensolaris.org
-- 
Peter Jeremy