Re: Musings on ZFS Backup strategies

Board: FB_stable · posted 12 years ago (2013/04/27 13:33) · pushes: 0 (0 up, 0 down, 0 neutral)
Comments: 0, participants: 0, post 36/38 in thread
> The "recommended" approach is to do zfs send | zfs recv and store a
> replica of your pool (with whatever level of RAID that meets your
> needs). This way, you immediately detect an error in the send stream
> and can repeat the send. You then use scrub to verify (and recover)
> the replica.

I do zfs send | zfs recv from several machines to a backup server in a
different building. Each day an incremental send is done using the
previous day's incremental send as the base. One reason for this
approach is to minimize the amount of bandwidth required, since one of
the machines is across a T1.

This technique requires keeping a record of the current base snapshot
for each filesystem, and a system in place to keep from destroying the
base snapshot. I learned the latter the hard way when a machine went
down for several days; when it came back up, the script that destroys
out-of-date snapshots deleted the incremental base snapshot.

I'm running 9.1-stable with zpool features on my machines, and with
this upgrade came zfs hold and zfs release. These allow you to lock a
snapshot so it can't be destroyed until it's released. With this
feature, I do the following for each filesystem:

  zfs send -i yesterdays_snapshot todays_snapshot | ssh backup_server zfs recv

  on success:
    zfs hold todays_snapshot
    zfs release yesterdays_snapshot
    ssh backup_server zfs hold todays_snapshot
    ssh backup_server zfs release yesterdays_snapshot
    update zfs_send_dates file with filesystem and snapshot name

John Theus
TheUsGroup.com
_______________________________________________
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org"
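The sequence above can be sketched as a small /bin/sh script. This is a minimal, hedged sketch, not John's actual script: the filesystem name (tank/data), the host name backup_server, the hold tag "backup", the snapshot names, and the zfs_send_dates path are all hypothetical placeholders. The function prints the command plan instead of executing it, so the ordering (send first, then hold today's snapshot on both ends, then release yesterday's) can be inspected without a live pool; in real use each echo would be the command itself. Note that zfs hold and zfs release take a tag argument (zfs hold <tag> <snapshot>).

```shell
#!/bin/sh
# Hypothetical names -- replace with your own filesystem, host, and state file.
FS="tank/data"
BACKUP="backup_server"
BASE="${FS}@2013-04-26"    # yesterday's snapshot; in practice read from zfs_send_dates
TODAY="${FS}@2013-04-27"   # today's snapshot

daily_send() {
    # Incremental stream from BASE to TODAY into the replica.
    echo "zfs send -i $BASE $TODAY | ssh $BACKUP zfs recv -F $FS"
    # On success only: lock today's snapshot on both ends so the
    # snapshot-cleanup script can't destroy the incremental base,
    # then unlock yesterday's. 'backup' is the hold tag.
    echo "zfs hold backup $TODAY"
    echo "zfs release backup $BASE"
    echo "ssh $BACKUP zfs hold backup $TODAY"
    echo "ssh $BACKUP zfs release backup $BASE"
    # Record the new base snapshot for tomorrow's run.
    echo "update zfs_send_dates: $FS $TODAY"
}

daily_send
```

A real script would run these commands, check the exit status of the send/recv pipeline before issuing any hold or release, and loop over the filesystems listed in the state file.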
Article ID (AID): #1HUsCKHs (FB_stable)