Re: Change block size on ZFS pool
On Mon, 12 May 2014 14:43+0200, Matthias Fechner wrote:
> Hi,
>
> I have now upgraded a FreeBSD 9 system to version 10.
> Now my zpool says:
>   pool: zroot
>  state: ONLINE
> status: One or more devices are configured to use a non-native block size.
>         Expect reduced performance.
> action: Replace affected devices with devices that support the
>         configured block size, or migrate data to a properly
>         configured pool.
>   scan: scrub repaired 0 in 42h48m with 0 errors on Mon May  5 06:36:10 2014
> config:
>
>   NAME                                            STATE   READ WRITE CKSUM
>   zroot                                           ONLINE     0     0     0
>     mirror-0                                      ONLINE     0     0     0
>       gptid/504acf1f-5487-11e1-b3f1-001b217b3468  ONLINE     0     0     0  block size: 512B configured, 4096B native
>       gpt/disk1                                   ONLINE     0     0     0  block size: 512B configured, 4096B native
>
> My partitions are aligned to 4k:
> =>        34  3907029101  ada2  GPT  (1.8T)
>           34           6        - free -  (3.0K)
>           40         128     1  freebsd-boot  (64K)
>          168     8388608     2  freebsd-swap  (4.0G)
>      8388776  3898640352     3  freebsd-zfs  (1.8T)
>   3907029128           7        - free -  (3.5K)
>
> But it seems that the ZFS pool is not aligned correctly.
>
> Is there a possibility to correct that online without taking the pool
> offline?
No.
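(If you care to confirm the diagnosis: the pool was originally created
with ashift=9, i.e. 512 byte sectors, and that value is recorded in the
vdev labels. Untested from memory, but something like
zdb -l /dev/gpt/disk1 | grep ashift
should report ashift: 9 today; 9 means 512B sectors, 12 means 4096B.)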
I can think of one rather dangerous approach, using gpt/disk1 as the
victim. However, the real victim is your precious pool and its (then)
sole member, gptid/504acf1f-5487-11e1-b3f1-001b217b3468.
Mind you, what I propose is dangerous, and untested, and it leaves you
with absolutely NO redundancy while performing the steps below.
If your zroot pool contains important data, you should consider buying
a pair of new hard drives, or at least one new hard drive. Partition
the new drives similarly to the existing ones, create a new mirrored
4K pool using the gnop trick as shown below, and transfer your
precious data using a recursive snapshot and the zfs send/receive
commands.
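A rough, untested sketch of that safer route, assuming the new drives
end up labelled gpt/newdisk0 and gpt/newdisk1 (placeholder names) and
are partitioned like the existing ones, could look something like:
gnop create -S 4096 /dev/gpt/newdisk0
zpool create newpool mirror gpt/newdisk0.nop gpt/newdisk1
zpool export newpool
gnop destroy /dev/gpt/newdisk0.nop
zpool import -d /dev/gpt newpool
zfs snapshot -r zroot@transfer
zfs send -R zroot@transfer | zfs receive -duv newpool
A single 4K gnop member is enough to make zpool create pick ashift=12
for the whole mirror. Remember to install boot code on the new drives
and set the bootfs property on newpool before trying to boot from it.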
You have been warned!
What follows is a potentially dangerous and untested procedure off the
top of my head:
1. Detach one half of the mirror, say gpt/disk1, using:
zpool detach zroot gpt/disk1
2. Clear all ZFS labels on gpt/disk1:
zpool labelclear gpt/disk1
3. Create a gnop(8) device emulating 4K disk blocks:
gnop create -S 4096 /dev/gpt/disk1
4. Create a new single-disk zpool named zroot1, using the gnop device
as the vdev:
zpool create zroot1 gpt/disk1.nop
5. Export the zroot1 pool:
zpool export zroot1
6. Destroy the gnop device:
gnop destroy /dev/gpt/disk1.nop
7. Reimport the zroot1 pool, searching for vdevs in /dev/gpt:
zpool import -d /dev/gpt zroot1
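(Untested, but at this point the zdb -l check mentioned above should
report ashift: 12 for gpt/disk1, confirming the new pool really uses
4K blocks.)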
8. Create a recursive snapshot on zroot:
zfs snapshot -r zroot@transfer
9. Transfer the recursive snapshots from zroot to zroot1, preserving
every detail, without mounting the destination filesystems:
zfs send -R zroot@transfer | zfs receive -duv zroot1
10. Verify that zroot1 has indeed received all datasets:
zfs list -r -t all zroot1
11. Verify and, if necessary, adjust the bootfs property on zroot1:
zpool get bootfs zroot1
(If necessary: zpool set bootfs=zroot1/blah/blah/blah zroot1)
12. Reboot the computer into single-user mode, making sure to boot
from the zroot1 pool. If this is not possible, you might need to
physically swap the hard drives.
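(One possible way, untested: escape to the loader prompt from the boot
menu and point the root mount at the new pool before booting
single-user, e.g.:
set vfs.root.mountfrom=zfs:zroot1
boot -s
Replace zroot1 with the actual bootfs value you verified in step 11.)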
13. Don't perform any zfs mount operations while in single-user mode,
as you don't want to deal with conflicting filesystems from the zroot1
pool and the original zroot pool.
14. Destroy what remains of the original zroot pool:
zpool destroy zroot
15. Simply attach gptid/504acf1f-5487-11e1-b3f1-001b217b3468 or, if it
exists, gpt/disk0 to the zroot1 pool, using gpt/disk1 as a guide:
zpool attach zroot1 gpt/disk1 gptid/504acf1f-5487-11e1-b3f1-001b217b3468
OR
zpool attach zroot1 gpt/disk1 gpt/disk0
The latter alternative only works if a GPT label (gpt/disk0) has
actually been set on the partition currently known as
gptid/504acf1f-5487-11e1-b3f1-001b217b3468.
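(If in doubt, you can see which labels actually exist with, e.g.:
glabel status
or with gpart show -l adaX for the disk in question.)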
16. Wait patiently while the newly attached mirror resilvers
completely. You may want to check on the progress by issuing:
zpool status -v
17. You might want to rid yourself of the @transfer snapshot:
zfs destroy -r zroot1@transfer
18. If you want to rename the zroot1 pool back to zroot, you need to
do so from a stable/10 snapshot CD or memstick capable of using all
the enabled zpool features:
zpool import -fN zroot1 zroot
Reboot WITHOUT exporting the zroot pool!
If you depend on the /boot/zfs/zpool.cache file, you might want to
update that file by doing these commands instead:
zpool import -fN -o cachefile=/tmp/zpool.cache zroot1 zroot
(import any other pools using the -fN -o cachefile=/tmp/zpool.cache
options)
mkdir /tmp/zroot
mount -t zfs zroot /tmp/zroot
cp -p /tmp/zpool.cache /tmp/zroot/boot/zfs/zpool.cache
Be sure to mount the right dataset, i.e. your bootfs.
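Afterwards, presumably:
umount /tmp/zroot
and then reboot, again WITHOUT exporting the zroot pool.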
19. If you swapped the hard drives in step 12, you might want to
rearrange them back into the right order.
Think very carefully about the steps in this laundry list of mine; I
might have missed something vital. If possible, first do some
experiments on an expendable VM to verify my claims.
Creating a new 4K zpool and transferring your data is by far the
safer route.
I hope someone more knowledgeable on ZFS will chime in if what I
propose is clearly mistaken.
Be very careful!
-- 
+-------------------------------+------------------------------------+
| Vennlig hilsen,               | Best regards,                      |
| Trond Endrestøl,              | Trond Endrestøl,                   |
| IT-ansvarlig,                 | System administrator,              |
| Fagskolen Innlandet,          | Gjøvik Technical College, Norway,  |
| tlf. mob. 952 62 567,         | Cellular...: +47 952 62 567,       |
| sentralbord 61 14 54 00.      | Switchboard: +47 61 14 54 00.      |
+-------------------------------+------------------------------------+