SUMMARY: SAN Storage best practice

From: Vic Engle <sunmanager_at_summerseas.com>
Date: Tue Aug 05 2003 - 11:11:03 EDT
Sun Managers,

Thanks for all the excellent answers. This list is one of the most
valuable technical resources available.

Everyone agreed that doing host-side mirroring of fault-tolerant SAN
storage is overkill. Several people also pointed out that Volume Manager
would still be very useful in a SAN environment for other host-side
volume management requirements that might exceed what is possible with
DiskSuite. VxVM would also provide DMP, which is usually a requirement
in a SAN environment.

Thanks,
Vic


The original question and all of the replies follow:

###############################################################
Scott Nichols Wrote:

If your storage array is set up to hand out fault-tolerant LUNs, then
it is unnecessary to use Veritas for RAID-type LUNs. There are other
very good reasons to use a volume manager:

        1) Creating large LUNs on the storage device and using VxVM to
slice them into the smaller LUNs needed for smaller filesystems. We use
2G LUNs for Sybase servers. If we have a 200G Sybase server, we would
slice 200G from the storage array and then use VxVM to slice that down
to 2G raw devices for Sybase. Creating 100 2G LUNs on the array would
affect performance from the array's perspective. This also helps if you
have LUN-count limits on your storage array.

        2) VxVM gives you more portability. Assigning LUN names rather
than device names helps tremendously when you want to move LUNs to a
different host. Let's say you have a Sybase server fail and you want to
take advantage of the SAN to move the Sybase LUNs from Host A to Host
B. All you have to do is export the disk group from Host A, then import
it on Host B. SVM with soft partitioning will accomplish the same task,
but you are stuck dealing with "d" numbers rather than meaningful
names. This isn't a big deal in a small environment, but it becomes
quite a hassle in a large one.
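Both points above can be sketched with standard VxVM commands; the disk
group, volume, and device names here are hypothetical:

```shell
# 1) Carve one large LUN into 2G raw volumes instead of creating
#    100 small LUNs on the array (names/devices are made up):
vxdg init sybasedg sybdisk01=c2t0d0s2
i=1
while [ "$i" -le 100 ]; do
    vxassist -g sybasedg make sybvol$i 2g
    i=$((i + 1))
done

# 2) Move the whole disk group from a failed Host A to Host B:
vxdg deport sybasedg            # run on Host A (if it is still up)
vxdg import sybasedg            # run on Host B
vxvol -g sybasedg startall      # start the volumes on Host B
```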


###############################################################


Thomas Jones Wrote:


Overkill.

###############################################################

Bill Welliver Wrote:

Hi Vic,

We don't use VM on top of our SAN, for the following reasons:

- we weren't already doing "plaiding" (that is, software RAID across
  multiple hardware RAID boxes)
- the number of disks used up would be incredible (four disks == one
  disk's worth of storage when mirroring already-mirrored LUNs)

Now, the interesting thing is that our SAN vendor (HP/Compaq) has a
feature that will replicate across two sets of controller pairs (two
different SANs), which would effectively give us the same thing you're
suggesting without the hassle of VM. That's actually something we're
considering.

If you're considering doing the VM + SAN thing, you need to think about
whether you actually gain what you think you're gaining (example: if
you mirror on top of the SAN, but the two sides of the mirror are in
the same disk group on the SAN, you're probably not getting any
data-security benefit).

Hope this helps (do let me know if I've created more questions)!

bill

###############################################################

Hans Jacobsen Wrote:

Different people require or desire different levels of protection...

--I've been in an organization where double mirroring was seriously
considered (mirroring mirrors - 4-way mirroring, in essence)
--I've personally considered doing RAID-5 + mirroring, but in the end
DID decide it was overkill.

The thing I kept in mind was the idea of a hot spare -- hot sparing can
often be very complicated in complex setups. Examples of problems
include being unable to have a hot spare at all because of a complex
setup (with RAID controlled at the host level, lots of different hosts
cannot easily tell that a drive is spare in order to use it
automatically when needed), or the infamous "grab of the wrong spare"
problem, where a hot spare is grabbed out of a different storage array
by Veritas...

What kind of fault tolerance is already in place in your SAN, and what
are your uptime requirements? If for any reason it is hard to get folks
to work on the SAN, more fault tolerance can be attractive.

-hej


###############################################################

Chris Dantos Wrote:
Hi Vic:

To me, further RAID is overkill.  I do use Solstice DiskSuite to
concatenate the LUNs together.
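A minimal sketch of that concatenation with DiskSuite's metainit; the
device and mount-point names are hypothetical:

```shell
# Concatenate two SAN LUNs into a single metadevice d10.
# "2 1 ... 1 ..." means two stripes of one slice each, i.e. a concat.
metainit d10 2 1 c2t0d0s0 1 c2t1d0s0

# Put a filesystem on the metadevice and mount it:
newfs /dev/md/rdsk/d10
mount /dev/md/dsk/d10 /export/data
```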

###############################################################

Boe Franklin Wrote:

For storage that is already fault tolerant, we do not use software
mirroring (Veritas); hardware is much better at it. We use A3500FC
arrays with a spare disk per tray. If the application user wishes it,
we would do it...they do pay for the storage they use (as raw gigs).

###############################################################

Aaron Hirsch Wrote:

You'd have to review the overall layout of the LUNs being provided to
you; otherwise you may put the Veritas mirrors on the same spindle,
which would defeat the extra "protection" and potentially cause hot
spots. We use Veritas for striping the filesystems but leave the
mirroring to our SAN box. This also frees up some system resources,
since you will not have the extra overhead of software mirroring to
deal with.

We've had two disks fail in 1 year and it did not affect our uptime and
deliveries at all...
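A sketch of both steps, assuming VxVM; the disk group and volume names
are hypothetical, the commands are standard VxVM:

```shell
# Review which enclosure/array each disk comes from before placing
# stripes, so mirrors don't land on the same spindles:
vxdisk -o alldgs list
vxdmpadm listenclosure all

# Stripe a filesystem volume across four SAN LUNs, leaving the
# mirroring to the array:
vxassist -g appdg make appvol 50g layout=stripe ncol=4 stripeunit=128k
```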

###############################################################

Kevin Smith Wrote:

I have worked with both directly connected storage and EMC SAN storage.
It is simply not worth bothering with host-side striping and/or
mirroring if you are using correctly set-up arrays.

###############################################################

Scott Mickey Wrote:

Vic, 
 
Get the August 2003 copy of SysAdmin magazine.  It has an 
article titled "Configuring SAN Storage in Solaris". 
It goes into a lot of detail about required Solaris patches, 
HBA hardware configuration, and Veritas VxVM DMP setup. 
You need to have multiple paths from your machines to the 
SAN and Dynamic MultiPathing (DMP) handled by Veritas 
Volume Manager is likely the way you want to go. 
http://www.sysadminmag.com/

###############################################################

Kelly McDonald Wrote:

We still use Veritas on some of our servers for volume management and
DMP for HBA failover. I do stripe volumes with VxVM for performance but
do not do any additional RAID.

###############################################################

Darren Dunham Wrote:

What do you want to happen if you lose a controller card?  If someone
trips and breaks a fiber connection?  If a GBIC dies?

VxVM with DMP can make those components redundant.
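Once DMP is in place, that path redundancy can be checked with standard
VxVM reporting commands:

```shell
# List the controllers (HBAs) DMP is multipathing across:
vxdmpadm listctlr all

# Show the subpaths behind each DMP device; a dead GBIC or broken
# fiber shows up as a disabled path while I/O continues on the other:
vxdisk path
```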

###############################################################

Marc Johnson Wrote:

Vic,

Without getting too deep into semantics, you need to turn the paradigm
you are using now upside down. What I mean is that your design
specifics should be based upon the level of availability that the
"business unit" (in your case, an academic department) requires and is
able to support. For instance, 99.999% availability allows only about
five minutes of unplanned downtime per year. Think of it as an
availability continuum running from reliable (99.5%) through highly
available (99.99%) to continuously available (fault tolerant -
99.999%). There are many different layers involved in making a "system"
highly available: shared storage, database, integration (middleware or
EAI), application, web, security, and edge (load balancing, content
switching, caching, etc.).

You have a great start by using a volume manager that can also be
deployed across all of the platforms you mentioned (Windows, Solaris,
and Linux). Best practices for availability and storage management
encourage the use of a common volume manager coupled with storage
management software to carve up the LUNs, secure access, and keep
different zones separate (Windows from Solaris/Linux). In conjunction
with a Storage Area Network (SAN), the added complexity makes storage
management software even more necessary. Best practices in storage
management encourage the use of common resource management, which a
common volume manager plays right into. This resource management
software also manages the zones and the different fabrics set up within
the SAN. However, resource management is part of a larger picture and
is useless without proper processes, procedures, and policies. These
are the basics.
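The downtime budgets behind those availability percentages are easy to
work out; a quick sketch with awk:

```shell
# Minutes of allowed downtime per year for each availability target.
# 99.999% works out to about 5.3 minutes/year.
for nines in 99.5 99.9 99.99 99.999; do
    awk -v n="$nines" 'BEGIN {
        printf "%s%% -> %.1f minutes/year\n", n, 365.25 * 24 * 60 * (1 - n/100)
    }'
done
```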

###############################################################

Morton Hichael Wrote:

the raid in the san is enough.

we have a crappy emc box (good hardware/horrible support).

the luns were created with the hardware raid resources and presented to
the os.  veritas was set up to take the luns and present them to the os
again (with no additional raid).  from my perspective, this is a waste
of time and money and introduces a higher level of complexity into the
server.  a complexity that is not needed.  but then, the salesman was a
friend of the owner and the owner wouldn't listen to the it staff.

when we migrate to solaris 9 or 10, we will strip veritas out of the
mix.

###############################################################

Reggie Beavers Wrote:

Vic,
IMHO, there's no need to take an additional performance hit by using
VxVM unless you wish to mirror across arrays for additional protection.
It all depends on how much you trust your array (does it have single
points of failure?) versus cost. I would definitely not recommend
additional mirroring within the same array.



###############################################################

Original question:

On Mon, 2003-08-04 at 12:43, Vic Engle wrote:
> Good Afternoon,
> 
> We are in the beginning stages of deploying a SAN for our Solaris,
> Windows and Linux environment. We currently use all direct attached
> storage on our Suns and we use veritas vm to build fault tolerant
> volumes from the direct attached storage. 
> 
> When we implement the SAN we will be able to present luns to the Sun
> boxes which are already fault tolerant in the SAN array. In this case is
> it still advisable to use veritas vm for additional fault tolerance or
> would that be overkill? Just wanted to get a feel for what most people
> do.
> 
> Thanks,
> Vic Engle
> Unix Support
> Duke Clinical Research
> _______________________________________________
> sunmanagers mailing list
> sunmanagers@sunmanagers.org
> http://www.sunmanagers.org/mailman/listinfo/sunmanagers
Received on Tue Aug 5 11:10:56 2003

This archive was generated by hypermail 2.1.8 : Thu Mar 03 2016 - 06:43:17 EST