Tag: Intel

No ZFS Support for EMC Replication Manager

As I originally blogged, I was hoping to use EMC snapshots to perform server-less/network-less backups. EMC provides two main tools for managing snapshots in this type of situation:

  • EMC Replication Manager
  • EMC PowerSnap Networker Module

The PowerSnap Module supposedly automates taking snapshots for the purpose of backups, while Replication Manager supposedly provides a much more robust package.

With Replication Manager you might create a policy to take a snapshot every five minutes, keep the last 10, and use those for backups whenever necessary.

To make a long story short, Replication Manager is useless for LUNs with ZFS, and according to EMC, this won’t change in the near future. PowerSnap also has no support for taking snapshots of LUNs with ZFS on them, so basically EMC has no server-less backup offering for Solaris with ZFS.

Speaking as an IT guy in general, ZFS is the best thing that has happened to file systems in the last 10 years, and it is only getting better. ZFS is already standard in FreeBSD and NetBSD. Linux supports ZFS only over FUSE due to license issues, but I’m confident those will be solved. The file system is platform independent, meaning you can move the data transparently between Intel and SPARC architectures. Deduplication has just been added to the feature set, and disk encryption is on its way.

As a Solaris admin, I really can’t figure out why EMC would decide to cut off their own foot like this. It is clear that UFS will remain for legacy and backwards compatibility but ZFS is the future. Not planning to support ZFS is like not planning to support Solaris.

The only possibility I can see is that EMC views Sun, Solaris, and ZFS as enough of a threat that they are strategically trying to limit options. For operations local to a server, ZFS has largely replaced the need for heavy hardware like EMC on the SAN. Some would argue that ZFS RAID on JBOD is better than ZFS on top of RAID on an EMC array. You can do the snapshots without the EMC. On a simple level, you can send snapshots asynchronously to another system, similar to MirrorView, without the EMC. You can do deduplication without the EMC. Now, with Sun’s Flash Cache technology, which integrates with ZFS, you can get the performance without the EMC. Along the same lines, you see Sun changing the rules of the storage/database game with solutions like Exadata V2. The integration of Zones with ZFS may be challenging VMware on the virtualization front, especially given the serious advantage Sun’s CoolThreads servers have in terms of consolidation.
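To make the snapshot/replication point concrete, here is a minimal sketch of the kind of thing ZFS gives you out of the box. The pool name tank, the filesystem data, the snapshot names, and the host backuphost are all hypothetical:

# take a timestamped snapshot (run from cron every five minutes, for example)
zfs snapshot tank/data@`date +%Y%m%d%H%M`

# ship a snapshot incrementally to another box, MirrorView style
# (earlier/later stand for two existing snapshot names)
zfs send -i tank/data@earlier tank/data@later | ssh backuphost zfs receive tank/data

# turn on deduplication for the filesystem
zfs set dedup=on tank/data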

That said, I still prefer to offload this work to dedicated storage hardware, for the time being and probably in the future as well. If EMC chooses not to support ZFS, they will only push us to stop buying EMC arrays. We will stop buying disks, stop buying tools, etc.

Instead, they should be providing better support for ZFS, integrating with ZFS to get better performance, and providing tools which make EMC the preferred disk array behind a ZFS filesystem.

Sun’s Predicament

I’ve been working with Unix for a fairly long time now- about 13 years.

I’ll admit that I started with Linux and thought it was light years ahead of SunOS 4.x running on those old SPARC machines- I mean, who had heard of SPARC processors? I remember my boss trying to explain to me that even an older SPARC processor was more powerful than a newer Intel Pentium processor. I didn’t really believe him. In time, I convinced the company to get rid of most of its SPARC/Solaris machines in favor of the hip, free, and cheap Intel/Linux combination.

Now I see that I couldn’t have been more wrong. I realize that SunOS 4.x probably still has features which I don’t know how to use properly. When I look at Solaris 10, ZFS, Zones, LDOMs, DTrace, etc., I’m not really sure you could pay me to work with Linux (that would be so depressing). That isn’t even mentioning the SPARC hardware it runs on- can any Intel server compare to a T5140???

That’s why the current situation with Sun absolutely SUCKS (pardon my French)! I’m sure there are a lot of admins out there who feel the same way. If this Oracle deal doesn’t go through and Sun disappears because of it, it will be our loss. We’ll be stuck with mediocre operating systems and commodity hardware, and I really hope it doesn’t happen.

That said, I’d like to say thanks to all the people at Sun who are still turning out crazy cool technologies despite the problems.

Solaris 10 doesn’t find network card

I recently installed Solaris 10 06/06 x86 on my desktop machine, a Compaq Evo with an onboard Intel 10/100 network card.

At first, the Solaris installation seemed to hang while trying to get a network configuration from a non-existent RPC boot server. In retrospect, I think the problem was that Solaris didn’t find an appropriate driver for the card, but after waiting a long time, the installation continued and skipped the network configuration.

Running prtconf -pv shows the PCI identification details for the Ethernet card:

model: 'Ethernet controller'
power-consumption: 00000001.00000001
fast-back-to-back:
devsel-speed: 00000001
interrupts: 00000001
max-latency: 00000038
min-grant: 00000008
subsystem-vendor-id: 00000e11
subsystem-id: 00000012
unit-address: '8'
class-code: 00020000
revision-id: 00000081
vendor-id: 00008086
device-id: 0000103b
name: 'pcie11,12'

Looking up the identification information (vendor-id 8086, device-id 103b) in the PCI ID repository tells me I’m dealing with an 82801DB PRO/100 VM (LOM) Ethernet Controller.

Looking at /boot/solaris/devicedb/master, I found the following similar drivers:

bash-3.00# grep 82801DB /boot/solaris/devicedb/master
pci8086,1039 pci8086,1039 net pci iprb.bef "Intel 82801DB Ethernet 82562ET/EZ PHY"
pci8086,103d pci8086,103d net pci iprb.bef "Intel 82801DB PRO/100 VE Ethernet"

Both cards use the iprb driver, so I add the identifier for my card to /etc/driver_aliases:

iprb "pci8086,1038"
iprb "pci8086,1039"
iprb "pci8086,103b"
iprb "pci8086,103d"
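As a side note of my own (not something I did at the time), the alias for this particular card could presumably also be added with update_drv instead of editing the file by hand:

# register the new PCI id as an alias for the iprb driver
update_drv -a -i '"pci8086,103b"' iprb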

Load the driver with the modload command and plumb the interface:

modload /kernel/drv/iprb
ifconfig iprb0 plumb

If that works, create the /etc/hostname.iprb0 file. I wanted to use DHCP, so I did the following:

touch /etc/dhcp.iprb0
touch /etc/hostname.iprb0

Then do a reconfigure reboot.
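For completeness, one common way to do the reconfigure reboot (assuming you can reboot the machine right away):

# flag the next boot as a reconfiguration boot, then reboot
touch /reconfigure
init 6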