Using ZFS for File System Replication
/usr/lib/libc/libc_hwcap1.so.1
                       4.6G   3.7G   886M    82%    /lib/libc.so.1
fd                       0K     0K     0K     0%    /dev/fd
swap                   1.4G    40K   1.4G     1%    /tmp
swap                   1.4G    28K   1.4G     1%    /var/run
/dev/dsk/c0d0s7         26G   913M    25G     4%    /export/home
scratchpool             16G    24K    16G     1%    /scratchpool
The MySQL data is stored in a directory on /scratchpool. To help demonstrate some of the basic replication functionality, other items are also stored in /scratchpool:
total 17
drwxr-xr-x 31 root bin 50 Jul 21 07:32 DTT/
drwxr-xr-x 4 root bin 5 Jul 21 07:32 SUNWmlib/
drwxr-xr-x 14 root sys 16 Nov 5 09:56 SUNWspro/
drwxrwxrwx 19 1000 1000 40 Nov 6 19:16 emacs-22.1/
To create a snapshot of the file system, use zfs snapshot, specifying the pool and the snapshot name:
root-shell> zfs snapshot scratchpool@snap1
To list the snapshots already taken:
root-shell> zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
scratchpool@snap1 0 - 24.5K -
scratchpool@snap2 0 - 24.5K -
The snapshots themselves are stored within the file system metadata, and the space required to keep them varies as time goes on because of the way they are created. Creating a snapshot is very quick because, instead of taking a complete copy of the data and metadata that the snapshot covers, ZFS records only the point in time at which the snapshot was taken; the snapshot then continues to reference the existing blocks.
As more changes are made to the original file system, the size of a snapshot increases, because more space is required to keep the record of the old blocks. If you create many snapshots, say one per day, and then delete the snapshots from earlier in the week, the size of the newer snapshots might also increase, because the changes that make up the newer state must now be held in the more recent snapshots rather than being spread over the seven snapshots that made up the week.
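To reclaim the space held by an old snapshot, delete it with zfs destroy. For example, assuming the snapshots shown above, the following sketch removes the first snapshot; any blocks that only it was retaining are freed, or are absorbed into the remaining snapshots:
root-shell> zfs destroy scratchpool@snap1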
You cannot back up the snapshots directly because they exist within the file system metadata rather than as regular files. To get a snapshot into a format that you can copy to another file system, tape, and so on, use the zfs send command to create a stream version of the snapshot.
For example, to write the snapshot out to a file:
root-shell> zfs send scratchpool@snap1 >/backup/scratchpool-snap1
Or tape:
root-shell> zfs send scratchpool@snap1 >/dev/rmt/0
You can also write out the incremental changes between two snapshots using zfs send with the -i option:
root-shell> zfs send -i scratchpool@snap1 scratchpool@snap2 >/backup/scratchpool-changes
To recover a snapshot, use zfs recv, which applies the snapshot information either to a new file system or to an existing one.
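For example, assuming the stream file written out earlier, the following sketch recovers it into a new file system; the name scratchpool/restored is only an illustration, and it must not already exist:
root-shell> zfs recv scratchpool/restored </backup/scratchpool-snap1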
15.5.1. Using ZFS for File System Replication
Because zfs send and zfs recv use streams to exchange data, you can use them to replicate information from one system to another by combining zfs send, ssh, and zfs recv.
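For example, the following sketch sends a snapshot to another machine, where zfs recv recreates it; the remote login and the destination pool and file system names are placeholders, and the remote account must have sufficient privileges to run zfs recv:
root-shell> zfs send scratchpool@snap1 | ssh id@backup-host zfs recv backuppool/scratchpool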