
No ZFS Support for EMC Replication Manager

As I originally blogged, I was hoping to use EMC snapshots to perform server-less/network-less backups. EMC provides two main tools for managing snapshots in this type of situation:

  • EMC Replication Manager
  • EMC PowerSnap Networker Module

The PowerSnap Module supposedly automates taking snapshots for the purpose of backups, while Replication Manager supposedly provides a much more robust package.

With Replication Manager you might create a policy to take a snapshot every five minutes, keep the last 10, and use those for backups whenever necessary.

To make a long story short, Replication Manager is useless for LUNs with ZFS. According to EMC, this won’t change in the near future. PowerSnap also has no support for taking snapshots of LUNs with ZFS on them, so basically EMC has no server-less backup offering for Solaris with ZFS.

As an IT guy in general, I think ZFS is the best thing that has happened to file systems in the last 10 years, and it is only getting better. ZFS is already available in FreeBSD and has been ported to NetBSD. Linux supports ZFS only over FUSE due to license issues, but I’m confident those will be solved. The file system is platform independent, meaning you can move the data transparently between Intel and SPARC architectures. Deduplication has just been added to the feature set, and disk encryption is on its way.

As a Solaris admin, I really can’t figure out why EMC would decide to cut off their own foot like this. It is clear that UFS will remain for legacy and backwards compatibility but ZFS is the future. Not planning to support ZFS is like not planning to support Solaris.

The only explanation I can see is that EMC sees Sun, Solaris, and ZFS as enough of a threat that it is strategically trying to limit them. For operations local to a server, ZFS has largely replaced the need for heavy hardware like EMC on the SAN. Some would argue that ZFS RAID + JBOD is better than ZFS + RAID on EMC. You can do the snapshots without the EMC. On a simple level, you can send snapshots asynchronously to another system, similar to MirrorView, without the EMC. You can do deduplication without the EMC. Now, with Sun’s Flash Cache technology, which integrates with ZFS, you can get the performance without the EMC. Along the same lines, you see Sun changing the rules of the storage/database game with solutions like Exadata V2. The integration of Zones with ZFS may be challenging VMware on the virtualization front, especially given the serious consolidation advantage of Sun’s CoolThreads servers.

That said, I still prefer to offload this work to dedicated storage hardware, now and probably in the future. If EMC chooses not to support ZFS, they will only force us not to buy EMC arrays; we will stop buying disks, stop buying tools, etc.

Instead, they should be providing better support for ZFS, integrating with it for better performance, and providing tools which make EMC the preferred disk array behind a ZFS filesystem.

EMC Replication Manager in Solaris

UPDATE: No ZFS Support for Replication Manager in the near future

Storage-level snapshots can be used to run backups without directly requiring resources from the original host.

EMC Replication Manager coordinates the creation of application-consistent snapshots across all the hosts in your network. It handles scheduling the creation/expiration of snapshots, mounting and unmounting them on backup servers, etc., all from a single console.

Although it is not as tightly integrated into EMC NetWorker as the similar NetWorker PowerSnap module, it can be used to start a backup process after taking a new snapshot, and it can manage snapshots unrelated to backups from a GUI.

While the data sheet claims support for Solaris, there are several caveats which I have run into.

  1. There is no mention of ZFS support in the data sheet and apparently, there is no support in the software either. One would expect this to be a non-question since ZFS has been part of Solaris since 2006.
  2. The data sheet is missing the word “SPARC” next to the word Solaris. There is no support for x86.

Honestly, this has put a dent in my plans, since my backup server is an x86 box. I’m hoping the lack of ZFS support will work out, as long as we can script any FS-specific magic we need. I don’t have the option of running something like Linux on it (just to get the software working) because then I wouldn’t be able to even mount the ZFS filesystems, let alone back them up.

In the meantime, I’ll have to move my backups to a SPARC server, and considering the lack of low-end SPARC machines, I’ll have to allocate something way too expensive to be a backup server.

Listing ZFS Clones using the origin property

Recently I created my first ZFS clones but quickly realized that there was no simple way to tell the clones apart from the regular filesystems. My first instinct was to run ‘zfs list -t clone’, similar to ‘zfs list -t snapshot’, but this didn’t work (maybe it does in newer versions of ZFS).

After some poking around, I found the ‘origin’ property, which sets the clones apart, so running something like this:

zfs list -o origin,name,used,avail,refer,mountpoint | \
    grep -v '^-' | awk '{print $2"\t"$3"\t"$4"\t"$5}'

will get you what you are looking for.
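The grep works because non-clones show ‘-’ in the origin column. An equivalent filter, as a minimal sketch assuming your ZFS version supports the -H (no header) flag, keeps all the logic in awk:

zfs list -H -o origin,name,used,avail,refer | \
    awk '$1 != "-" {print $2"\t"$3"\t"$4"\t"$5}'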

If you haven’t played with ZFS clones yet, basically they are writable snapshots of a file system.

They are great if you want to copy a lot of data to the side, modify it, and possibly replace the original data, without taking a lot of time or disk space. The ZFS clones take seconds to create, since they don’t actually copy any data, and they will only store the blocks which have changed since their creation. If you want to replace the original data, you can then transparently promote the clone to be the master filesystem and turn the master into a clone.
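As a minimal sketch, with hypothetical pool and filesystem names, the whole clone-and-promote cycle looks like this:

    zfs snapshot tank/data@base             # read-only, point-in-time snapshot
    zfs clone tank/data@base tank/scratch   # writable clone; instant, no data copied
    # ...modify files under tank/scratch...
    zfs promote tank/scratch                # roles swap: tank/scratch becomes the
                                            # master and tank/data becomes the clone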

The downside of clones is that they are always dependent on the snapshot from which they were created. You cannot destroy a snapshot on which a clone is based without destroying the clone.

For the sake of simplicity, and since I don’t usually have disk space issues, I usually prefer to make full copies using ZFS send/receive, but I have definite plans to make more use of ZFS clones in the future.
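For reference, a full, independent copy via send/receive looks something like this (again with hypothetical names):

    zfs snapshot tank/data@copy
    zfs send tank/data@copy | zfs receive tank/data-copy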

Howto resize or shrink UFS partitions

A friend of mine asked me the other day if there was such a thing as Partition Magic for Solaris. Apparently, someone had installed a system on a single slice, and their security team was requiring a separate partition for the DB.

Here are the givens:

  • Sunfire V210
  • Solaris 8 (otherwise we’d be using ZFS)
  • 2x 73GB disks
  • 1 slice on disk 1
  • Disk 2 is supposed to be a mirror of disk 1, but it isn’t in use yet
  • Downtime is allowed
  • Reinstalling is not an option

I personally don’t know of any tool that lets you shrink UFS partitions but that doesn’t mean that we can’t perform some Partition Magic of our own.

NOTE:
I have not tested this procedure. I think it is logical and should work, and it should do no harm, as the first disk remains fully intact.

  1. Go into single-user mode.
  2. Partition the second disk as required (commands for steps 2-4 are sketched after this list).
  3. newfs the partitions on the second disk.
  4. Mount the second disk’s partitions.
  5. Use ufsdump/ufsrestore to copy each filesystem into its smaller home:
    ufsdump 0f - / | ( cd /mnt/newroot ; ufsrestore xvf - )

  6. When all the partitions are done, use installboot to make the second disk bootable.
    installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c1t2d0s0

  7. Shutdown the system, physically swap the disks, and do a reconfiguration reboot.
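As a minimal sketch of steps 2-4 (device names are illustrative; adjust them to your layout):

    format                                  # select c1t2d0 and create the smaller slices
    newfs /dev/rdsk/c1t2d0s0                # build a UFS filesystem on the new root slice
    mkdir -p /mnt/newroot
    mount /dev/dsk/c1t2d0s0 /mnt/newroot    # mount it as the target for ufsdump/ufsrestore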

If rebooting goes smoothly, test your new system thoroughly and then build your mirrors.

Cloning zones in Solaris 10 6/06

I’m in the process of setting up a machine to host several SAMP (Solaris-Apache-MySQL-PHP) containers. I decided that it would be very efficient to create a generic zone and clone it over and over again. From reading up on the subject it seemed more than possible, after all, what is a zone besides a config file and a filesystem?

I googled for “Cloning Solaris Zones” and found lots of documentation on the zoneadm clone feature. I started to follow the howtos and hit a brick wall: my zoneadm doesn’t know how to clone. Deeper digging showed that the documentation on Sun’s site was for Solaris Express, Sun’s bleeding-edge version of OpenSolaris. Can I say “How useless!”

I continued to google; after all, I was very close: I had the configuration and the filesystem, I just needed to connect the two. I found the zoneadm attach/detach commands. This sounded perfect, but alas, my zoneadm doesn’t support attach/detach either. Apparently, this feature is only available from Solaris 10 11/06. Can someone tell me when Sun started releasing new OS versions every 6 months!
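For reference, on a release that does support it, the detach/attach flow would look roughly like this (a sketch, not something I could run here):

    zoneadm -z master halt
    zoneadm -z master detach                # frees the zone for migration
    # ...move or copy the zonepath to the target host and configure the zone there...
    zoneadm -z master attach                # registers the zonepath as an installed zone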

I had no intention of giving up and here is the process which evolved:

  1. Set up the “Gold Master” zone, including all the services, users, passwords, etc. (I’m assuming that your zonepath is a ZFS filesystem; this has its pluses and minuses, so don’t take my word on it.)
  2. Halt the Master zone and export the config file to your zone template file:
    zoneadm -z master halt
    zonecfg -z master export -f /root/template

  3. The template should look something like this; edit it with the values for the new zone:
    create -b
    set zonepath=/zfszones/zoneclone
    set autoboot=true
    set pool=work1-pool
    add inherit-pkg-dir
    set dir=/lib
    end
    add inherit-pkg-dir
    set dir=/platform
    end
    add inherit-pkg-dir
    set dir=/sbin
    end
    add inherit-pkg-dir
    set dir=/usr
    end
    add net
    set address=192.168.0.2
    set physical=bge0
    end
    add rctl
    set name=zone.cpu-shares
    add value (priv=privileged,limit=10,action=none)
    end

  4. Configure a new zone using the edited template (saved here as zoneclone.cfg):
    zonecfg -z zoneclone -f zoneclone.cfg

  5. Create a ZFS snapshot of the master zone:
    zfs snapshot zfspool/master@040207

  6. Clone the ZFS snapshot
    zfs clone zfspool/master@040207 zfspool/zoneclone

  7. Mount the new ZFS filesystem at the correct zonepath:
    zfs set mountpoint=/zfszones/zoneclone zfspool/zoneclone

  8. Change the zone state to “installed”. WARNING: I have no idea if editing this file directly is a good idea, but it seems to work.
    vi /etc/zones/index

    Find a line that looks like:
    zoneclone:configured:/zfszones/zoneclone:0000003c-ffbf-f825-ffbf-f80001000000

    Replace it with:
    zoneclone:installed:/zfszones/zoneclone:0000003c-ffbf-f825-ffbf-f80001000000

  9. Boot the new zone:
    zoneadm -z zoneclone boot
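
Not part of the original procedure, but as a quick sanity check you can list all zones and their states afterwards:

    zoneadm list -cv                        # the clone should now show as “running”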