export http_proxy=cache.local:3128

The pkg-get utility (/opt/csw/bin) is already set up to use the proxy server (the config is pkg-get.conf in /opt/csw/etc or /usr/local/etc; check which location is actually used). Use it to fetch precompiled Solaris GNU packages and their dependencies easily (gcc, lsof, amanda, libthisorthat).
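As an illustration of typical usage (the package name below is an example; actual CSW package names may differ, so consult the OpenCSW catalog):

```shell
# Refresh the package catalog through the proxy, then install a package.
# 'lsof' is a placeholder name; the real CSW catalog name may differ.
export http_proxy=cache.local:3128
/opt/csw/bin/pkg-get -U
/opt/csw/bin/pkg-get -i lsof
```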
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c4t0d0s0  ONLINE       0     0     0
            c4t4d0s0  ONLINE       0     0     0
zpool create pool1 \
  raidz2 c0t0d0 c1t0d0 c3t0d0 c5t0d0 c6t0d0 c0t1d0 c1t1d0 c3t1d0 c5t1d0 \
  raidz2 c6t1d0 c4t1d0 c0t2d0 c1t2d0 c3t2d0 c4t2d0 c5t2d0 c6t2d0 c0t3d0 \
  raidz2 c1t3d0 c3t3d0 c4t3d0 c5t3d0 c6t3d0 c0t4d0 c1t4d0 c3t4d0 c5t4d0 \
  raidz2 c6t4d0 c0t5d0 c1t5d0 c3t5d0 c4t5d0 c5t5d0 c6t5d0 c0t6d0 c1t6d0 \
  raidz2 c3t6d0 c4t6d0 c5t6d0 c6t6d0 c0t7d0 c1t7d0 c3t7d0 c4t7d0 c5t7d0
zpool add pool1 spare c6t7d0

Note that we skipped the root zpool disks.

Alternative: the configuration above gives up 10 disks to parity across 5 raidz2 vdevs. This may also be an option:
zpool create pool1 \
  raidz1 c0t0d0 c1t0d0 c3t0d0 c5t0d0 c6t0d0 \
  raidz1 c0t1d0 c1t1d0 c3t1d0 c4t1d0 c5t1d0 c6t1d0 \
  raidz1 c0t2d0 c1t2d0 c3t2d0 c4t2d0 c5t2d0 c6t2d0 \
  raidz1 c0t3d0 c1t3d0 c3t3d0 c4t3d0 c5t3d0 c6t3d0 \
  raidz1 c0t4d0 c1t4d0 c3t4d0 c5t4d0 c6t4d0 \
  raidz1 c0t5d0 c1t5d0 c3t5d0 c4t5d0 c5t5d0 c6t5d0 \
  raidz1 c0t6d0 c1t6d0 c3t6d0 c4t6d0 c5t6d0 c6t6d0 \
  raidz1 c0t7d0 c1t7d0 c3t7d0 c4t7d0 c5t7d0
zpool add pool1 spare c6t7d0

We know this configuration can sustain writes of 500-600 MB/s from many threads and reads of 900 MB/s (our tests showed a very flat, consistent rate even as we increased the number of reading or writing threads to 12). The raidz2 version would likely not do quite as well, though it might still be enough. Our tests did not measure simultaneous reading and writing; however, to keep the LTO-4 drives streaming we only need to read at 160 MB/s. Amanda should be able to write to the holding cache and dump to tape at the same time, so it seems reasonable that more than 10 backup processes can saturate the write capacity while still leaving enough read bandwidth to feed the tape drives without interrupting the stream. Benchmarks soon...

If real-world performance is not sufficient to keep data streaming to the 2 LTO-4 drives, there is still the option of creating a raid-0 zpool for the Amanda disk cache. In practice we'll only be using one drive at a time, so assuming the need to stream to 2 is a generous estimate (I hope to fill at most one 800 GB tape per night).
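As a back-of-envelope sanity check on the parity trade-off between the two layouts (disk counts taken from the commands above; the 120 MB/s figure is the standard LTO-4 native transfer rate, not one of our measurements):

```shell
# Parity arithmetic for the two layouts above. 45 disks go into the
# data pool in both cases (root disks and the c6t7d0 spare excluded).
total=45

# raidz2 layout: 5 vdevs x 2 parity disks each
raidz2_data=$((total - 5 * 2))

# raidz1 layout: 8 vdevs x 1 parity disk each
raidz1_data=$((total - 8 * 1))

echo "raidz2: $raidz2_data data disks"   # 35
echo "raidz1: $raidz1_data data disks"   # 37

# Rough tape-feed check: one 800 GB LTO-4 tape at its ~120 MB/s native
# rate takes on the order of 800000 / 120 / 60 minutes to fill, well
# within a nightly window as long as reads can sustain the drive.
echo "minutes to fill one tape: $((800000 / 120 / 60))"
```

So the raidz1 layout buys back 2 data disks at the cost of tolerating only one failure per vdev.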
zfs create pool1/amholding
zfs set quota=5T pool1/amholding
zfs set reservation=5T pool1/amholding
zfs create pool1/mysql-backup
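To confirm the limits took effect, the standard ZFS property query can be used (a verification sketch; output formatting varies by Solaris release):

```shell
# Show the holding-disk limits we just set.
zfs get quota,reservation pool1/amholding

# And check pool-wide space accounting for all datasets.
zfs list -r pool1
```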
/opt/download/src/amanda-3.2.2# ./configure --with-bsdtcp-security --with-bsdudp-security --with-rsh-security --with-ssh-security \
  --prefix=/usr --libexecdir=/usr --sysconfdir=/etc --with-group=sys --with-gnutar=/usr/sbin/gtar-wrapper.pl --localstatedir=/var --with-fqdn \
  --with-config=umatlas --with-smbclient=/opt/csw/bin/smbclient
Other notes:
define changer sl500 {
    tpchanger "chg-robot:/dev/scsi/changer/c7t500104F000B87A69d0"
    property "tape-device" "0=tape:/dev/rmt/0" "1=tape:/dev/rmt/1"
    property "eject-before-unload" "yes"
    # property "use-slots" "1-5,11-20"
}
tpchanger "sl500"
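Once the changer definition is in place, Amanda's own changer interface can be used to check that the robot responds (a sketch; run as the amanda user, and note that amtape subcommand behavior varies a little between Amanda releases):

```shell
# Ask the changer for its state through Amanda itself.
su amanda -c "amtape umatlas inventory"   # list slots and barcode labels
su amanda -c "amtape umatlas show"        # load each slot and read the tape label
```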
/opt/csw/etc/amanda/label-changer-barcode:

#!/bin/bash
# this needs to be fixed up to deal with non-sequential slots
echo "Before running this be sure tapes are in sequential order in library from slot 1 to slot X with no gaps"
voltaglist=`/opt/csw/sbin/mtx -f /dev/scsi/changer/c7t500104F000B87A69d0 status | ggrep "Storage Element" | awk '{ print $4 }' | ggrep -o L[0-9][0-9][0-9][0-9][0-9]`
# NOTE: $slots is collected but not yet used; it is needed for the
# non-sequential-slot fix noted above.
slots=`/opt/csw/sbin/mtx -f /dev/scsi/changer/c7t500104F000B87A69d0 status | ggrep "Storage Element [0-9].*\:Full" | awk '{ print $3 }' | ggrep -o "[0-9][0-9]*"`
slot=1
for voltag in $voltaglist
do
    echo -e "Labeling tape in slot $slot with $voltag\n"
    su amanda -c "amlabel umatlas $voltag slot $slot"
    slot=$(($slot+1))
done

-- BenMeekhof - 14 Dec 2009