
STORAGE
Managing the information that drives the enterprise

also inside:
5 biggest backup woes—and how to fix them 28
State government cleans up its storage act 36
Less painful data migrations 44

salaries rise but the jobs get tougher
"It has exploded," says ING Direct's Elijah Golden, whose team manages more than half a petabyte. 20

Vol. 7 No. 10 November 2008


inside | november 2008

features

20 COVER STORY: ECONOMY DOWN, SALARIES UP
Our sixth annual Storage Salary Survey shows storage salaries are rising overall, and climbing even higher as the number of terabytes managed increases. Experienced storage pros remain in demand, but many respondents say that heavier workloads, smaller staffs, longer hours and tighter budgets are all contributing to stress and making the job of managing storage even tougher. ellen o'brien

28 FIVE THINGS THAT MESS UP YOUR BACKUPS
Data backups are still job No. 1—and problem No. 1—for most storage managers. In this article, backup guru W. Curtis Preston describes the five most prevalent backup system problems and explains what you can do to prevent or remedy them. w. curtis preston

36 SHOW-ME STATE SHOWS HOW TO CONSOLIDATE STORAGE
Whether it's the result of a merger or just good housekeeping, at some point in time storage managers will have a storage consolidation project. The Missouri state government embarked on a major storage consolidation project that included numerous political and technical hurdles. alan radding

44 DATA MIGRATION TIPS
Data moves. Or, it has to be moved when you're refreshing array technology, merging storage resources with an acquired company or shifting data around to more economical tiers. Data migration is a common task, but it's often a difficult one. We describe some technologies and tools to ease the pain of data migrations. robert l. scheier

"The salary wasn't the primary motivation for me. It was more of a challenge, a chance to lead the group."

“Economy down, salaries up,” p. 20


online
SELF-HEALING DISK SYSTEMS EXPLAINED: http://searchstorage.com/selfhealexp
ONLINE EMAIL ARCHIVING FAQ: http://searchstorage.com/emailarchfaq
BREAKING NEWS: http://searchstorage.com/news

The so-called cloud represents the fastest way for any firm to get onboard with the "green IT" movement. 18

Cover photograph by Ryan Donnell


trends

6 TRENDS
Microsoft tries to breathe new life into DAS ... How about a little less talk and a little more action? ... Data classification still a work in progress ... Storage spending steady ... Deduping primary storage ... Retaining data for a long, long time.

64 SNAPSHOT
Too busy to archive your email?

Analysis

18 STORAGE BIN 2.0: Outsourcing clouds the green issue
If you think shipping everything offsite is the best way to go green, you should think again. steve duplessie

52 HOT SPOTS: A turning point for storage networking
By 2010 to 2011, most data centers should be onboard with converged networks. Will you be ready? If not, here's why you need to be. bob laliberte

56 BEST PRACTICES: It's time to pay attention to storage power use
Power and cooling isn't just a problem for the data center. According to Gartner Inc., storage managers place power consumption in a three-way tie for last place in terms of their concerns, a clear example of organizational misalignment. james damoulakis

Departments
4 Editorial: The year of iSCSI storage—finally!

63 Advertising index/Sales masthead


Vice President of Editorial: Mark Schlack
Editorial Director: Rich Castagna
Senior Managing Editor: Kim Hefner
Senior Editors: Rich Friedman, Ellen O'Brien
Associate Editor: Christine Cignoli
Art Director: Mary Beth Cadwell
Production Manager: Pat Volpe
Contributing Editors: James Damoulakis, Steve Duplessie, Stephen Foskett, Phil Goodwin, Jacob Gsoedl, W. Curtis Preston

Site Editor: Peter Bochner
Senior News Director: Dave Raffo
Senior News Writer: Beth Pariseau
Features Writer: Carol Sliwa
Managing Editor: Maryann Tripp
Assistant Editor: Matt Perkins
Assistant Editor: Rachel Kanner
Site Editor: Andrew Burton
Associate Site Editor: Heather Darcy
Assistant Site Editor: Chris Griffin
Features Writer: Todd Erickson

TechTarget Conferences
Sr. Editorial Events Manager: Lindsay Mullen
Editorial Events Associate: Nicole Tierney

Storage magazine, 117 Kendrick Street, Suite 800, Needham, MA
[email protected]
www.SearchStorage.com

STORAGE

TechTarget Storage Media Group

In our August 2008 Trends article "The debate continues: Disk vs. tape," we didn't intend to suggest that the study described in the story or the study's conclusions were in any way manipulated by the LTO Consortium or any of its members.

De-boxed in

When it comes to data de-duplication, most companies only offer one kind of solution. But with Quantum, you're in control. Our new DXi7500 offers policy-based de-duplication to let you choose the right de-duplication method for each of your backup jobs. We provide data de-duplication that scales from small sites to the enterprise, all based on a common technology so they can be linked by replication. And our de-duplication solutions integrate easily with tape and encryption to give you everything you need for secure backup and retention. It's this dedication to our customers' range of needs that makes us the smart choice for short-term and long-term data protection. After all, it's your data, and you should get to choose how you protect it.

Find out what Quantum can do for you. Get a free de-duplication white paper at www.quantum.com/de-boxedin.

© 2008 Quantum Corporation. All rights reserved.

Photograph by Emily Nathan

editorial | rich castagna

The year of iSCSI storage—finally!

It has been said that imitation is the sincerest form of flattery—but in the IT world, acquisition trumps imitation when it comes to flattery. And acquisitions also offer a speedy shortcut to new technology without all the bother and expense of R&D. First, Dell sent ripples through the storage market by scooping up iSCSI vendor EqualLogic, and now Hewlett-Packard (HP) has followed suit by snagging its own iSCSI storage purveyor, LeftHand Networks.

Of course, there's a lot more to this than flattery. It's about Dell's and HP's perception that they had gaping holes in their product portfolios and that iSCSI was just the right fit for those gaps. Dell shelled out a whopping $1.4 billion for EqualLogic while (compared to that lofty figure) HP "stole" LeftHand for a mere $360 million.

In Dell's case, it likely meant a heckuva lot more than an ATM withdrawal, but mighty HP probably only had to rummage through the corporate sofa cushions to scrape up enough change for its iSCSI trophy.

For those of us who write about storage, these moves clearly enhance the credibility of iSCSI storage and offer sound validation of the iSCSI market in general. Two big vendors buying into iSCSI also makes us feel a little less queasy about having already declared it "the year of iSCSI" more times than we care to remember. (In the December 2006 issue of Storage magazine we declared that iSCSI storage would be a hot technology in 2007. We weren't that far off ...)

For Dell and HP, whether they overpaid or underpaid is up for debate. But I have a feeling that a few years down the road both companies will look back on 2008 as the year they hopped on the iSCSI bandwagon just in time.

When iSCSI systems first arrived there was a lot of speculation about how they were going to steamroll NAS and Fibre Channel. After all, iSCSI systems were cheaper, offered lots of capacity and could easily tap into existing networks. The last point was often the strongest argument for iSCSI domination. That level of speculation was, kindly stated, a bit overzealous. In reality, iSCSI has done more creeping into storage environments than steamrolling.

Our Purchasing Intentions surveys bear this out. We haven't seen a wild swing toward iSCSI or a steep climb in the deployment curve. Instead, we've seen incremental acceptance of the technology, with implementations growing at an analogous rate. And we never hear that a key reason for installing iSCSI is that it's IP-based and can use installed network gear. But it's almost as if we were watching iSCSI too closely from survey to survey and year to year, because over a relatively short two or three years, it has made significant inroads. In our last survey, about 40% of all firms planned to implement iSCSI in 2008. No dramatic spikes, but 40% is a pretty impressive number.

A lot of pundits have also played the company-size card, saying that iSCSI was OK for smaller businesses but the enterprise market would show little interest. Wrong on that count, too. The 40% figure was consistent across all company sizes—and it's big companies that have shown the most growth in deployment plans over the last two years.

Finally, there's the rap that iSCSI storage couldn't handle a company's key apps; let's rethink that, too. We've seen the number of critical apps being deployed on iSCSI also grow.

There's no question that Dell's and HP's deep pockets will help validate the viability of iSCSI storage. And in the next year or so, when 10Gig Ethernet and SAS-II drives become widely available, iSCSI will become an even more attractive alternative. Wow, it looks like 2009 is shaping up as ... the year of iSCSI.

IT'S BIG COMPANIES THAT HAVE SHOWN THE MOST GROWTH IN iSCSI DEPLOYMENT PLANS OVER THE LAST TWO YEARS.

Rich Castagna ([email protected]) is Editorial Director of the Storage Media Group.


ALTERNATIVE THINKING ABOUT SYSTEM POTENTIAL:

See eye to eye with your budget—without limiting your vision. Compromising is fine. For other people. But now you can watch your bottom line, while still getting a look into the future. The HP portfolio of solutions erases the gap between cost and innovation, while delivering reliable ProLiant technology, all at prices that require a second look. So, while others try to think outside the box—we're rethinking what goes on inside it.

Technology for better business outcomes.

HP ProLiant DL120 G5 Server: $849 (Save $339), lease for just $21/mo. [PN: 470064-763]
• Powered by the Intel® Core™ 2 Duo Processor • One 250GB SATA hard drive • 2GB memory • 1-year limited warranty

HP StorageWorks DAT160 USB Tape Drive: $729 (Save $80), lease for just $18/mo. [PN: Q1580SB]
• One-Button Disaster Recovery feature easily restores lost files, applications • Store up to 160GB on a single cartridge, while backing up to 50GB/hr.

HP BladeSystem c3000 Enclosure: $3,959 (Save $1,863), lease for just $110/mo. [PN: 481658-001]
• Supports up to 8 server blade devices in a 6U enclosure • 3-year limited warranty

HP ProLiant BL260c Server: $899 (Save $631), lease for just $22/mo. [PN: 480965-B21]
• Powered by the Intel® Xeon® Processor • 2GB memory • 1-year limited warranty

See additional HP models which feature small form factor, high-performance SAS hard drives.

To learn more, call 1-866-625-1019 or visit hp.com/servers/rethink18

Prices shown are HP Direct prices; reseller and retail prices may vary. Prices shown are subject to change and do not include applicable state and local taxes or shipping to recipient's address. Offers cannot be combined with any other offer or discount and are good while supplies last. All featured offers available in U.S. only. Savings based on HP published list price of configure-to-order equivalent (Enclosure: $5,822 – $1,863 instant savings = SmartBuy price of $3,959; Blade Server: $1,530 – $631 instant savings = SmartBuy price of $899; Rack Server: $1,188 – $339 instant savings = SmartBuy price of $849; Tape Drive: $809 – $80 instant savings = SmartBuy price of $729). Intel, the Intel logo, Xeon and Xeon Inside are trademarks of Intel Corporation in the U.S. and other countries. © 2008 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.


trends

Illustrations by Calef Brown

OUR VIEW: OVERSTATING THE OBVIOUS

Almost every vendor briefing I've had lately seems to start with some variation of this statement: "Worldwide data growth is unprecedented, and it's not stopping anytime soon."

So I'm proposing a shift: Vendors, cut down on the time spent telling us about the data growth explosion. It's obvious to everyone that data is exploding, especially to the storage managers and admins who are watching it fill up disks and slow down backups. Instead, let's get down to the business of storing it intelligently. It's not enough to create a bunch of fanfare around releasing a bigger disk. Why does only one vendor (NetApp) include dedupe in its storage OS? It's a smart technology that saves a ton of time and space for businesses.

The time we'll save is better spent deleting, deduplicating, compressing, virtualizing and otherwise getting this pile of information down to a manageable size. It's certainly not getting any smaller, and vendors are the ones with the resources to develop the technology that's needed. —CC

Microsoft wants to resurrect DAS

Microsoft breathed some new life into an old-school storage technology when it recently recommended that users consider using DAS with Exchange as a way to cut costs. Exchange 2007, the newest version of the popular email server, features easier replication, and the pairing of DAS and Exchange can be an effective way to handle inflated mailbox sizes.

"When we were doing the planning for Exchange 2007, one of the things that became clear when you talked to customers was that mailbox quotas at most enterprises hadn't kept up with user demand," says Microsoft's Jon Orton, senior technical product manager of the Exchange Server team. "Users end up spending a lot of time just managing their mailbox."

Having dedicated DAS servers for Exchange can be very appealing to storage managers. Matt Lavallee, director of technology at Shrewsbury, MA-based MLS Property Information Network Inc., which serves about 30,000 members, recently implemented DAS to run Exchange 2007 and SQL Server. "It's the performance, really, when it comes right down to it," he says of choosing that setup. He considered putting it on iSCSI, but "to get the volume that Exchange and SQL want, you have to spend four times as much on the infrastructure just because of that app," he says. "That means you're also spending four times as much for every other node you want to have networked because of that one outlier."

But is DAS viable for both small and large businesses? According to Matt Baker, storage specialist at Dell, "DAS-oriented solutions really service two poles of possible deployment: small customers who haven't yet adopted SANs and don't

Story by Christine Cignoli (CC) continued on page 8

Cypress: The world's most efficient storage appliance

The Cypress storage appliance, based on the ZFS+ file operating system, is the industry's first general purpose filer designed with storage efficiency as the primary objective. Increasingly, efficiency in today's data centers is measured not only by power consumption but also capacity, space, reliability and economy of acquisition. The Cypress brings together a combination of highly innovative software technology and a world class server platform to address the efficiency crisis facing today's IT operations. Cypress further combines the tools to set up, provision and monitor storage on a web-based application that greatly simplifies administration.

A ground-breaking combination of features:
• On-line de-duplication across the entire storage pool
• Real-time block level compression
• Intelligent power management at the file system level
• Industry leading storage density

Incredible physical density: An amazing 11.5 RAW TB per rack unit (U) including the server, Cypress offers twice the raw storage capacity of its competition. With effective storage densities utilizing compression and on-line de-duplication conservatively estimated at 22.5 TB/U, Cypress defines a new category of ultra dense storage platforms.

One fifth the power consumption: With a maximum power utilization of 1200 watts, Cypress leads the industry at 26 Watts/TB. By enabling Intelligent Power Management, power utilization falls to as low as 700 Watts. Combined with our other storage optimizations, power consumption is an effective 7.7 Watts/TB, or 5X better than the industry average.

On-line de-duplication of NFS, CIFS and iSCSI: Offering NFS, iSCSI and superior CIFS connectivity, Cypress is a paragon of open connectivity. On-line de-duplication across the entire storage pool provides the world's only de-duplicated, compressed, and thin-provisioned iSCSI host. Imagine the powerful implications this has for storage of Virtual Machine boot images and thin-client desktops. Cypress is the solution for VM storage consolidation.

Storage management: no Ph.D. in NAS required. Set up your system in under 5 minutes. Cypress WebAdmin includes everything you need to deploy storage to thousands of users: file systems, storage pool, disk management, NDMP, snapshots, performance and CDP (replication). Even more impressive is the level of automation within the Cypress operating system. Once you have connected to your MS AD or LDAP server, all user file system creation and provisioning is automatic.

Reducing the carbon footprint of information. SUN X4540 and ZFS+®. www.green-bytes.com
ZFS+ is a registered trademark of greenBytes Inc. Opensolaris is a trademark of SUN Microsystems. Windows is a trademark of Microsoft Corporation.


trends

Continued from page 6

necessarily want to for Exchange, and customers looking to deploy atypically large mailboxes." He also mentions large businesses with a dedicated Exchange environment using DAS, where staffers might manage both servers and storage.

Using Exchange with DAS can leave ownership of that storage with the Exchange team. "DAS is pretty simple to manage," says Microsoft's Orton. "We're finding that many Exchange administrators are capable of upkeep and, once it's deployed, it's pretty manageable."

Part of DAS' renewed appeal for Exchange is Exchange 2007's new storage-friendly feature: Cluster Continuous Replication (CCR). "Each node in a cluster has its own independent copy of the data, and the data replication is handled by Exchange," says Orton.

Dell's Baker points out that there are advantages with the new method, but that with recovery, "there are two sides to every story. There's a total recovery thing to keep in mind, which is that once you've failed over, you have to repair the other side," he says. "Repair isn't necessarily any faster when you have two separate full copies of data."

Lee Johns, Hewlett-Packard's director of marketing, entry storage and storage blades, thinks Exchange 2007 is an example of applications getting more storage-smart. "More and more applications are building in storage services like replication or clustering," he says. "That can lend itself to DAS implementations." He says DAS has become more acceptable as it's become more capable.

Baker says he doesn't see a mass exodus from SANs anytime soon, and that virtualization may actually drive people away from DAS. "It's sort of this creative tension in the marketplace," he says. "One trend is making applications intelligent enough to use DAS architecture and the other is really wanting to put storage in a central place to facilitate things like mobility."

Using DAS for selected apps like Exchange might be the best way for this abiding technology to live on. "iSCSI and Fibre are both viable, but then you're creating all kinds of infrastructure for something that has extreme throughput requirements," says MLS' Lavallee. "Why weigh down the entire infrastructure or pay all kinds of money for infrastructure for one specific application?" —CC

"WHY PAY ALL KINDS OF MONEY FOR INFRASTRUCTURE FOR ONE SPECIFIC APPLICATION?"


the real deal

Hard drive prices down across the board

Hard drive prices are down this month among all of the categories we track. The biggest drop is for the 750GB SATA drive, down 10% since last month to $180, part of its continued six-month slide. Among tape drives, the SDLT-320 dropped, but SDLT-600 rose 4% and is up overall since the summer. The cost of an LTO-4 drive keeps going down, paring a little more than 6% off its price this month. Media prices haven't budged much over the past few months, although LTO-3 and LTO-4 registered drops of approximately 5% and 3%, respectively.

[Charts: average cost per unit for hard disk drives (73GB SCSI 15K, 146GB SCSI 10K, 300GB SAS, 750GB SATA, 1TB SATA), tape drives (SDLT 320, SDLT 600, DLT-S4, LTO-2, LTO-3, LTO-4) and media (LTO-1 through LTO-4, SuperDLTtape I and II, Quantum DLTtape S4).]


Data classification still mostly a piecemeal approach

Travis McCulloch, systems architect at Hilton Grand Vacations Co. in Orlando, FL, knows the value of data classification. But like so many users, he hasn't yet figured out the best way to categorize, assign and relegate every piece of data in his 20TB environment.

For now, McCulloch is using the data classification technology in CommVault's Simpana software (Data Classification Enabler) to shorten backup windows that once stretched on for hours. "For some of our huge file servers, we're seeing a backup that takes only 20 minutes—it was taking seven or eight hours before," he says. The product is an agent plug-in, explains McCulloch. "It creates an index on the system itself. It also keeps another index of what's changed," he says.
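The "index of what's changed" McCulloch describes is the general idea behind incremental scanning: catalog every file once, then only touch what has changed since the last pass. The sketch below is a generic illustration of that idea using a local JSON catalog of file size and modification time; it is an assumption-laden toy, not how CommVault's Data Classification Enabler is actually implemented.

```python
# Generic "what's changed" index sketch (illustrative, not CommVault's design):
# keep a catalog of (size, mtime) per file and report only files whose entries
# differ since the last run, so a backup job can skip everything else.
import json
import os
from pathlib import Path

INDEX = Path("change_index.json")  # assumed location of the local catalog

def scan(root: str) -> dict:
    """Record (size, mtime) for every file under root."""
    snapshot = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            snapshot[path] = [st.st_size, st.st_mtime]
    return snapshot

def changed_since_last_run(root: str) -> list:
    """Compare the current scan against the saved catalog, then refresh it."""
    previous = json.loads(INDEX.read_text()) if INDEX.exists() else {}
    current = scan(root)
    INDEX.write_text(json.dumps(current))
    return [path for path, sig in current.items() if previous.get(path) != sig]

# A nightly backup job would copy only changed_since_last_run("/srv/files")
# instead of re-reading the whole file server.
```

Walking only the changed subset is what turns a seven-hour crawl of a large file server into the 20-minute pass McCulloch describes.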

Brian Babineau, an analyst at the Milford, MA-based Enterprise Strategy Group, says backup is a great place to start using data classification technology. "It's not just about creating a copy anymore," he says. "Now you can know what's in the copy."

McCulloch isn't yet using any data classification technology outside of backups. "It's on the roadmap," he says. "We haven't got it to the point where I'm comfortable rolling the thing up." CommVault offers data classification outside of backups, and McCulloch says it's on the list of vendors he might consider.

But McCulloch doesn't want to start by choosing a vendor—he wants a strategy. "There are too many solutions to try to pick one and figure it out," he says. "We're in the middle of trying to figure out what we need. You have to talk to the legal folks, talk to the developer folks, talk to the marketing folks; there are a lot of pieces."

Analysts say McCulloch is representative of many users when it comes to data classification. There's a lot of interest and a growing need, and the technology is improving. But outside of law firms, which are driving data classification purchases for ediscovery, many IT shops are just getting their heads around widespread data classification.

"Adoption has been rather limited, finding its niche in vertical areas such as legal compliance," says Greg Schulz, founder of StorageIO Group in Stillwater, MN. "One of the challenges with data classification is that to be effective on a broader basis, solutions need to scale in a stable and predictive manner."

Last year, 74% of Storage readers who completed our monthly survey said they had deployed or would deploy data classification tools. But 71% cited the development of policies as their greatest challenge.

Many users have configured their own workarounds. Sunil Nemade, CIO in the Seventeenth Judicial Circuit in Broward County, FL, created a "mini-data warehouse" that collects data from various databases and then runs Crystal reporting tools against it.

John Wooley, director of IT at Nielsen Mobile in San Francisco, says the concept of data classification doesn't "seem that interesting to me." He'd like a traditional storage resource management (SRM) tool to label his data by categories such as owner, age and last access. For now, he's writing scripts or using the free SpaceMonger tool for one-off discoveries.
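For a rough sense of the kind of homegrown script Wooley is talking about, the sketch below walks a directory tree and labels each file by owner, age and last access. The CSV layout and the Unix-only owner lookup are assumptions for illustration, not anything Wooley or Nielsen Mobile actually runs.

```python
# One-off discovery script sketch: label files by owner, age and last access.
# Assumes a Unix-like host (pwd for owner lookup); Windows would need another API.
import csv
import os
import pwd
import sys
import time

def label_files(root: str, out_path: str = "file_labels.csv") -> None:
    """Write one row per file: path, owner, age in days, days since last access, size."""
    now = time.time()
    with open(out_path, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["path", "owner", "age_days", "last_access_days", "size_bytes"])
        for dirpath, _, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                st = os.stat(path)
                writer.writerow([path,
                                 pwd.getpwuid(st.st_uid).pw_name,   # file owner
                                 round((now - st.st_mtime) / 86400), # age since last change
                                 round((now - st.st_atime) / 86400), # days since last access
                                 st.st_size])

if __name__ == "__main__":
    label_files(sys.argv[1] if len(sys.argv) > 1 else ".")
```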

Babineau says it's helpful for users to remember that data classification tools fall into four categories: traditional SRM, ediscovery, archiving and data loss prevention.

"There isn't a separate market for the data classification guys," says Babineau. "The vendors have fallen into place on where they're going to participate." Kazeon Systems and StoredIQ, for example, are focused on ediscovery and compliance. Data classification technology goes beyond archiving, says Babineau, although some archiving products offer classification features. "But some of these [data classification] solutions have a richer index for searching and provide taxonomies about the data." The best way to make a classification tool work for you is to test it, says Babineau. "Point it at a data source and let it go." —Ellen O'Brien


trends

storage 101: FCoE

The protocol lets Fibre Channel (FC) and Ethernet/IP traffic share the same cables. Currently, there's no way to transport both kinds of traffic on one network without dropped packets. Other options include Fibre Channel over IP (FCIP), which uses switch-to-switch connections to transport data, and the Internet Fibre Channel Protocol (iFCP), which carries FC data over IP networks using SCSI protocols. But neither ensures performance.

FCoE, developed by Nuova Systems (owned by Cisco), will probably be ratified by the end of 2008, with early adoption possible this year or in early 2009. Cisco competitor Brocade says 2011 is a more realistic goal for mainstream use. Vendors have released FCoE-compatible switches and begun introducing converged network adapters (CNAs) that could replace a host bus adapter and an Ethernet NIC with a single card.

FCoE adoption is tied to 10Gb Ethernet, which will provide enough bandwidth to run FC and IP traffic at speeds comparable to current FC products. Data center Ethernet (DCE), or converged enhanced Ethernet (CEE), is emerging to provide fast Ethernet for data centers to converge protocols. —CC


"YOU HAVE TO TALK TO THE LEGAL FOLKS, TALK TO THE DEVELOPER FOLKS, TALK TO THE MARKETING FOLKS; THERE ARE A LOT OF PIECES."

The more you have to store, the more reliability matters.
Fujitsu ETERNUS® Storage Systems: Uncompromising reliability for your most demanding applications.

To help enterprises manage the flood of mission-critical data, Fujitsu ETERNUS Storage Systems deliver the reliability and availability data centers require. For continuous data access and easier maintenance, major components are redundant and hot-swappable. The controller modules' software can also be upgraded without shutting down or rebooting. A built-in statistical failover mechanism ensures stable operation by disabling components exhibiting intermittent failures. Furthermore, disk data encryption using 128-bit AES provides security against data theft. The ETERNUS Storage Systems range from the new, low-cost ETERNUS 2000, designed for small and medium businesses, to the ETERNUS 8000, designed for large enterprise applications and available with up to 2 PB of storage. Go to us.fujitsu.com/computers/reliablestorage for more information.

DISASTER RECOVERY—Cost-effective WAN optimization, secure remote data replication over iSCSI with IPsec data encryption
DATA PROTECTION—Online, efficient disk-to-disk backup using tiered storage

© 2008 Fujitsu Computer Systems Corporation. All rights reserved. Fujitsu, the Fujitsu logo and ETERNUS are registered trademarks of Fujitsu Limited. All other trademarks mentioned herein are the property of their respective owners.


Storage software sales still brisk

The tough economy doesn't appear to be dampening the demand for storage software. The worldwide market cracked the $3 billion milestone for the first time in a single quarter during this year's April-to-June stretch, according to recent research from Framingham, MA-based IDC.

"Going over $3 billion is a nontrivial event and an indication of how important storage software is to the various tools that people have at their disposal," says James Baker, a storage software research manager at IDC.

The $3.1 billion second quarter marked a 6% increase over the first quarter and an impressive 14% jump over the same timeframe a year ago. It also signaled the 19th consecutive quarter of growth, comparing revenue totals for the same quarters on a year-to-year basis.

EMC continued to outpace all rivals, banking $745 million in the second quarter for a 24% share of the market. Symantec made headway at No. 2, with $588 million for a 19% share—and nearly 27% over its Q2 2007 revenue. Rounding out the leaders were IBM, NetApp, Hewlett-Packard and CA.

Either EMC or Symantec topped six of the seven major storage software categories IDC tracks. Symantec commanded 35% of the most lucrative market segment—data protection and recovery software. "What has happened is that the benefits of having data protection are just as important for the small- and medium-sized businesses," says Baker, "maybe even more so, because they can't withstand interruptions at all."

Overall, the storage replication market was the only one to dip slightly from the typically sluggish first quarter of 2008 to the second quarter of this year, despite an overall growth of 7% vs. the second quarter of last year. Baker says some users may be buying less general-purpose replication software because they get the feature bundled in with other products. IDC views the decrease as a "bit of a blip" and continues to project double-digit growth for the long term.

Every other storage software segment grew at least 5% from the first to the second quarter, including two in the double digits: file-system software, led by Symantec's $102 million; and storage management, topped by EMC's $101 million. EMC also dominated storage device management software with a 64% share of total second-quarter revenue.

But Symantec overtook EMC in storage infrastructure software, taking a 29% share to EMC's 26%. IBM topped the archiving space, with 27% of the quarter's $305 million.

—Carol Sliwa


News from SearchStorage.com. For the full text, go to http://searchstorage.com/news.

News in short


Dell unveils VMware snapshot manager
Dell has released an EqualLogic PS Series storage array that packs three times the amount of disk as its previous highest capacity array. The PS5500E array includes a VMware Auto-Snapshot Manager, but lacks features that are becoming more common on SANs, such as drive spin-down, data deduplication, solid-state drives and support for 10Gb Ethernet. Auto-Snapshot Manager/VMware Edition joins the Auto-Snapshot Manager/Microsoft Edition that EqualLogic brought out last year before being acquired by Dell. These Auto-Snapshot products (both are included on the array) place a software agent on the physical server, and also provide visibility into the snapshot schedule and backup job status associated with apps through the server interface.

trends


survey says: Fibre Channel favored for virtual servers

Although iSCSI storage is often touted as a good match for virtualized server environments, the majority of respondents to our recent fall Purchasing Intentions survey think Fibre Channel (FC) SANs and virtualized servers make the best pairing. Fifty-two percent of respondents to the survey use FC for their virtualized servers, far ahead of those who choose iSCSI SANs with virtualized servers (12%). And 20% of those surveyed haven't even implemented virtualization on their servers. NAS has won over 9% of respondents for their virtualized servers, while DAS takes last place.

WHAT'S THE MAIN TYPE OF STORAGE THAT YOU'RE USING FOR YOUR VIRTUALIZED SERVERS?
Fibre Channel SAN: 52%
We haven't virtualized our servers: 20%
iSCSI SAN: 12%
NAS: 9%
DAS: 7%

0: Number of storage companies that have gone public this year after a busy 2007. Nexsan filed the first step of an IPO in April, but hasn't made further moves.

How safe is your critical data? If you have to ask, you may want to call CDW. (Guess who's coming to the rescue.)

We're there with the storage solutions you need. There are plenty of threats to your data, and each one has the potential to do more than give you a headache. That's why it's essential to have a disaster-recovery plan in place to protect your data and business from the unexpected. At CDW, we're there with a wide range of products to help you design a disaster-recovery solution. We have the expertise to answer questions and get you a custom configured solution before you need it. So call CDW today. Because being prepared for the worst is probably the best thing you can do.

CA XOsoft™ High Availability r12
• Continuity of operations and disaster recovery solution with multiple layers of protection—replication, continuous data protection and automated failover
• Provides cost-effective protection for Microsoft® Exchange, SQL, IIS, Oracle, file servers and other applications on 32- and 64-bit Windows servers
Call CDW for pricing

Overland® ARCvault™ 48 LTO-4 Library SAS
• Storage capacity: up to 38.40TB / 76.80TB
• 48 cartridge slots in 4U form factor, supports up to four LTO-4 tape drives
• SCSI, Fibre Channel and SAS connectivity options
• LTO-3 configurations are available
• Includes partitioning, remote management, barcode reader
Call CDW for pricing (CDW 1504743)

Overland® ULTAMUS™ RAID 1200
• Up to 12TB of capacity, fully expands to 60TB
• Cost-effective Fibre Channel SAN RAID storage for Windows®, Linux® and other server platforms
• 2U rack-mount enclosure, expands to 45TB, dual 4Gbps controllers
• Supports RAID 0, 1, 5, 10, 50 and the new RAID 6; SAS drives optional
Call CDW for pricing (CDW 1531058)

CDW.com 800.399.4CDW

Offer subject to CDW's standard terms and conditions of sale, available at CDW.com. ©2008 CDW Corporation


News from SearchStorage.com. For the full text, go to http://searchstorage.com/news.

trends

News in short


Brocade releases encryption switch
Brocade is increasing its security options by rolling out an encryption Fibre Channel (FC) switch and an encryption blade for data at rest. Encryption for data at rest secures information stored on tape and disk for backups. Brocade's Encryption Switch is a 32-port, 8Gb/sec FC switch. The FS8-18 Encryption Blade is a 16-port blade that plugs into Brocade's DCX Backbone switches. The switch and the blade both scale up to 96Gb/sec of encryption processing power. Brocade positions its encryption switch and encryption blade as higher performing and higher scaling devices than those that vendors such as Decru, Kasten Chase and NeoScale brought out a few years back. However, none of those products gained wide acceptance. Data encryption is supported as a plug-in service with Brocade's new Data Center Fabric Manager.

Deduplication now focusing on primary storage

IT managers have become as obsessed with reducing the amount of redundant data in their storage as Americans are with reducing their waistlines. But this trend has been focused mainly on secondary storage—backup and archiving apps, where most of the redundant data lives in storage infrastructures.

A handful of vendors are trying to take duplicate data out of primary storage even though there's a lot less redundant data in primary (tier 1) storage than in secondary storage. So data reduction ratios in primary storage will be much lower than the 15:1 or 20:1 ratios common when deduping secondary storage. "But you'll be getting a lot more bang for the buck because tier 1 disk is more expensive," says Eric Burgener, senior analyst and consultant at Taneja Group in Hopkinton, MA.

But as the use of virtualization increases, more and more virtual machines are running on one physical server. This creates multiple instances of OSes and apps, which in turn will increase the level of redundant data on expensive primary storage.

The next question is: When data reduction is performed on primary storage, is it still dedupe or something else (usually compression)? One could claim that, at the file level, Microsoft Office offers some kind of generic dedupe functionality, according to John Matze, VP of business development at Hifn, which makes card-level data reduction accelerators. But "that's a partial dedupe that exists in Microsoft's file system," which he calls "poor man's data deduplication."
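For readers weighing that dedupe-or-compression question, here is a minimal sketch of what block-level deduplication does: fingerprint fixed-size blocks and keep one physical copy per fingerprint, so identical blocks shared across files (say, dozens of VM images carrying the same OS) collapse to a single copy. The 4KB block size, SHA-256 fingerprints and toy "VM images" are illustrative assumptions, not any vendor's engine.

```python
# Block-level dedupe sketch (illustrative only): identical blocks across
# streams are stored once; each stream keeps fingerprints, not copies.
import hashlib

BLOCK = 4096  # assumed fixed block size

def store_blocks(store: dict, data: bytes) -> list:
    """Split data into fixed-size blocks and keep one copy per unique fingerprint."""
    refs = []
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        digest = hashlib.sha256(block).hexdigest()  # fingerprint used to spot duplicates
        store.setdefault(digest, block)             # physical copy kept only the first time
        refs.append(digest)
    return refs

# Two toy "VM images" that share the same 100 OS blocks but differ in app data.
base_os = b"".join(bytes([i]) * BLOCK for i in range(100))
vm1 = base_os + b"app-one" * 600
vm2 = base_os + b"app-two" * 600

store = {}
logical = len(store_blocks(store, vm1)) + len(store_blocks(store, vm2))
print(f"{logical} logical blocks stored as {len(store)} unique blocks "
      f"(~{logical / len(store):.1f}:1 reduction)")
```

Generic compression, by contrast, works within a single stream; it won't notice that two separate VM images hold the same operating system blocks, which is why virtualization tilts the primary-storage math toward dedupe.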

"Deduplication is well-suited for static, redundant data, but it's not well-suited for primary storage," says Peter Smails, VP of worldwide marketing at Storwize, which began shipping a primary storage data reduction appliance in 2005.

Ocarina Networks brought out the second major release of its storage optimization product in September. Ocarina's Extract, Correlate and Optimize (ECO) System combines compression, dedupe and more than 100 file-specific information extraction algorithms. But even though ECO makes some use of dedupe, "their optimizer doesn't work like a dedupe engine at all," says Burgener. "And they get some of the highest reduction ratios in primary storage."

According to Carter George, VP of products at Ocarina, primary storage data reduction is

DATA REDUCTION RATIOS IN PRIMARY STORAGE WILL BE MUCH LOWER THAN THE 15:1 OR 20:1 RATIOS COMMON WHEN DEDUPING SECONDARY STORAGE.

23%: The amount by which the price of LTO-4 tape dropped from its first month of shipment (May 2007) to one year later.


by the numbers: DR planning
75%: Respondents who have experienced a hardware or software system failure
34%: Say natural disasters prompted them to create a DR strategy and plan
24%: Say that a full DR test has failed because they discovered the plan was out of date
17%: Don't have a DR site
From Symantec's annual Disaster Recovery Research Report conducted by Applied Research-West Inc.

all about shrinking the size of files. "The file types driving storage growth are already compressed," he says. "You can't compress the same file twice with generic algorithms."

Ocarina offers what it calls content-aware compression. "It's easy to see the advantage of shrinking a file, but what about performance?" asks the firm's George. Application performance is far more critical in primary storage, since backup and archiving tend not to be performance oriented. George defines primary storage performance as "time to first byte," and says it differs by market and user. "You might be able to take 30 seconds to open Word, but in HPC, 1 [millisecond] latency might be death," he says.

According to Taneja Group's Burgener, anyone trying to figure out if it makes sense to do data reduction in primary storage has to answer two questions: What am I paying in terms of dollars/GB on the primary side? And how much less primary storage will I have to buy over time? "If you're buying EMC and paying $20/GB to $25/GB, you have 200TB of data and you can get a 10:1 reduction level, then it's simple to figure out if it will be worth it," he says.
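Burgener's two questions really do reduce to back-of-the-envelope arithmetic. The sketch below uses the figures he quotes (200TB at $20/GB to $25/GB, a 10:1 reduction); the decimal 1TB = 1,000GB conversion and the break-even framing are our simplifying assumptions, not part of his comment.

```python
# Rough savings estimate for primary storage data reduction, per Burgener's
# example figures; ignores the reduction product's own cost and overhead.
def primary_reduction_savings(capacity_tb: float, cost_per_gb: float, ratio: float) -> float:
    """Gross tier 1 capacity cost avoided by an N:1 reduction."""
    raw_cost = capacity_tb * 1_000 * cost_per_gb   # assumes 1TB = 1,000GB for simplicity
    return raw_cost - raw_cost / ratio             # cost of capacity you no longer need

for price in (20, 25):
    saved = primary_reduction_savings(200, price, 10)
    print(f"At ${price}/GB: roughly ${saved / 1e6:.1f}M in avoided primary capacity")
# -> about $3.6M at $20/GB and $4.5M at $25/GB; a reduction product (and its
#    performance hit) costing less than that would be "worth it."
```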

George notes that shrinking files changes other things in the storage equation. "The first wave of users will be people avoiding disk purchases," he says. "The second wave will be people storing things they never thought of archiving before, like transferring seismic archives from tape to disk."

Right now, the group of vendors offering capacity optimization for primary storage is small. In addition to the three vendors mentioned, NetApp bundles optimization into its Ontap GX OS; GreenBytes offers an appliance that combines the Sun Fire X4540 server and the ZFS file system; and Riverbed has announced a box that sits in the WAN pipeline. But Burgener thinks all of the major storage vendors will offer data reduction for primary storage in some way in a few years. Yet array vendors may be caught in a bit of a vise: "If they have it in their arrays," he says, "that means you buy less storage."

—Peter Bochner, with additional reporting by Rachel Kanner


QUESTION OF THE MONTH: What software tools do you use to manage your storage systems?

"We're currently using HP Essentials and Command View 8.0 (for all storage types and vendors), and ECC 6.0 for our EMC storage. We rely heavily on Cisco Fabric Manager to manage the SAN and topology." —Paul M. Macht, senior enterprise architect, IT, Duke Medicine, Duke University Health System, Durham, NC

"Currently, I just use the software that came with our storage [IBM] and VMware Virtual Infrastructure; all of our SAN is dedicated to a single VMFS volume." —Chad S. Mawson, IT manager, Woods & Aitken LLP, Lincoln, NE

"We don't use any glossy sales brochure-oriented 'enterprise management blah blah blah,' just operating system tools. We use Command View EVA, which comes with our HP systems, which you have to use. We make heavy use of the SSSU [Storage System Scripting Utility] command line facility. It's nice that HP allows multiple arrays to be managed from one system, but most unfortunate that it has to be Microsoft Windows. And I've written a bunch of programs on AIX and OpenVMS platforms, and some reporting programs using XSLT templates to process XML output." —Tom O'Toole, San Diego


News in short

trends

News from SearchStorage.com. For the full text, go to http://searchstorage.com/news.

VMware sets sights on storage
VMware previewed several storage-related products due out next year, including storage resource management within VirtualCenter (now named vCenter), thin provisioning for virtual machine file system (VMFS) volumes, hot expansion of virtual disks, iSCSI performance enhancements, extensible multipathing support, enhancements to Storage VMotion, and DataRecovery, a new backup app. These are part of VMware's Virtual Datacenter OS initiative.

Cisco focuses on FC and Ethernet
Cisco Systems is launching new products and taking steps toward a converged Ethernet-Fibre Channel (FC) network. Cisco rolled out a software-based virtual Ethernet switch that company officials say lays the groundwork for setting granular policies over Ethernet networks, including iSCSI SANs and WANs.

Forging a long-term retention plan

Is long-term archiving the Y2K problem for the 21st century? The Storage Networking Industry Association (SNIA) and others in the industry hope to bring attention to the archiving compatibility problem early in this century rather than at the end. The specific problem is how to make sure archived data will be readable down the road after format changes in hardware and software.

A SNIA survey of 267 organizations found that 80% have information they must keep for more than 50 years because of legal and regulatory rules; 68% must keep information for more than 100 years; and more than 40% keep email for at least 10 years.

SNIA formed a 100-Year Archive Task Force, and among the things that can be done, according to task force member Michael Peterson of Strategic Research Corp., Santa Barbara, CA, is to put the term "archiving" on ice.

"We need to abandon the term 'archive' and replace it with retention and preservation," says Peterson. "The term archiving denotes a dungeon into which I put things and never look at them again. Thinking of archival as a long-term problem turns out to be wrong thinking because of the concept of legal compliance."

Peterson says ediscovery and legal requirements are what will get people's attention. He also says long-term compatibility problems can be solved by properly handling data as it is stored instead of waiting until retrieval.

Most of the technologies and products to do this already exist, says Peterson, including self-healing storage arrays; federated repositories that support tape, disk and optical media; and data dedupe and other migration methods. And standards are emerging for tools like eXtensible Access Method (XAM).

"It's not a technology problem; it's an operating practices problem," he says. "We call this process information-centric management. If you don't start the process, nothing will work on the back end."

IT consulting firm MindTree Ltd. has developed a set of best practices based on SNIA research. Rama Narayanaswamy, MindTree's VP who oversees its storage practice, breaks his recommendations into three areas: physical media, data and application levels.

His recommended best practices for media include storing all data on networked storage media because it makes it easier to read, manage and protect. He says all networked storage media can be uniquely identified using either a WWN (for FC) or a MAC address (for Ethernet) supplied by vendors, and all data can be migrated to new media. Narayanaswamy recommends storing all data on block-based rather than file-based storage to ensure successful migration.

On the application layer, he says information should be segregated based on longevity. Long-living information should be stored in text format because that format is most likely to survive. Narayanaswamy says disk is a better medium than tape for long-term retention, but it's not without problems.

"Disk storage on a SAN makes it easier to migrate seamlessly," he says. "But let's say we're talking about Hitachi [Data Systems] or EMC. Twenty years from now, will they support that same type of migration? Standards have to be implemented now to make it work in the long run." —Dave Raffo
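To make the "segregate by longevity, keep the long-lived stuff in text" advice concrete, here is a small sketch that tags records with a retention period and writes the long-lived ones out as plain text (CSV). The record set, the 50-year threshold and the field names are illustrative assumptions, not MindTree's actual practice.

```python
# Retention-tier sketch: route records that must outlive today's formats to a
# plain-text copy, per the application-layer recommendation above (illustrative).
import csv
from datetime import date

RECORDS = [  # hypothetical records and retention periods
    {"id": "PO-1001", "type": "purchase_order", "retain_years": 7},
    {"id": "HR-0042", "type": "pension_record", "retain_years": 75},
    {"id": "LOG-9",   "type": "web_log",        "retain_years": 1},
]
LONG_TERM = 50  # assumed threshold (years) for the preservation tier

long_lived = [r for r in RECORDS if r["retain_years"] >= LONG_TERM]

# The long-lived subset goes out as text/CSV, the format most likely to still be
# readable after today's arrays, file systems and backup formats are gone.
with open(f"preservation_{date.today():%Y}.csv", "w", newline="") as out:
    writer = csv.DictWriter(out, fieldnames=["id", "type", "retain_years"])
    writer.writeheader()
    writer.writerows(long_lived)
```

Short-lived records stay wherever they already live; only the preservation tier pays the cost of format-proofing.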


Mid-2009: When the SCSI Trade Association says storage systems using 6Gb SAS drives will ship.

Introducing a server system that grows with you. The Nexlink® StableFlex Modular Server built on Intel® Multi-Flex Technology can handle whatever the future brings. One system integrating storage, servers and networking—a Business-in-a-Box.

Simplicity: Point-and-click management to configure storage, servers and networking from anywhere in the world.
Flexibility: Diskless servers and an integrated SAN provide unprecedented flexibility.
Value: Enterprise-class features priced specifically for the small and midsize market.

Order your fully configured solution online today. Visit our Web site for full details and pricing.
Visit www.nexlink.com/Products/Servers/special.aspx or call (877) 450-7808


Illustration by Eric Palma/Photograph by Kathleen Dooher

storage bin 2.0 | steve duplessie

Outsourcing clouds the green issue
Think shipping everything offsite is a good way to go green? Think again.

According to research from Enterprise Strategy Group, folks leveraging cloud-based options are twice as likely to work at companies with green initiatives than at companies without any storage-as-a-service (SaaS) offerings in place. Why is that, you ask? I have several theories.

The so-called cloud represents the fastest way for any firm to get onboard with the "green IT" movement. What could be greener than getting rid of your power-hungry, cooling-crazed, constantly expanding infrastructure?

What you likely won't hear is that by doing this you're in danger of pushing your green responsibility onto those providing the services you use. That's because companies don't always look for reality; they look for an ability to follow a mandate. If someone tells you to cut data center power costs by 50% in the next two years, it's only natural that you might look to solve the problem in the same way IT has historically looked to solve many such issues: by pushing the responsibility onto others.

Outsourcing isn't new, whether it's writing code or building shiny toys. Sometimes we do it because it's cheaper, faster or whatever; but many times it's so we can blame someone else if things go wrong. Many firms that outsourced manufacturing overseas did so for sound business reasons such as cheaper labor and international distribution sites. But plenty of them were so intent on meeting their mandate they didn't stop to consider that some outsourcers might be using illegal (or at least immoral) child labor—until overseas labor became a PR issue. Then they abruptly put a code-of-conduct doctrine in place for their suppliers to (hopefully) prevent such practices in the future.

I don't expect companies offering cloud infrastructure services to employ 12-year-olds working 20-hour shifts, but other things could go wrong when tying outsourcing to green initiatives. If you're using the cloud to become "green" fast, you should know that while there's a good chance your provider is better at squeezing the green out of its operations than you might be, it's not guaranteed. Your motivation might be to save your company power and cooling expenses, but your provider's model could use more power and cooling to house your infrastructure and data than if you kept it yourself.

Perhaps a better example has to do with capacity in the cloud. We solve our own capacity problems by letting others take on the load. Everything is great until they go down for a day and you can't get to anything you've given them. They'll respond by saying "Sorry, but the fine print says that might happen" and there isn't too much you can do about it. So while you may have solved your capacity issue (well, until this problem showed up), you didn't consider all of the angles: What if they go down? What realistic assurances do I have that my data isn't being viewed/used by someone else? Even things like basic privacy assumptions should be questioned. For example, when you use a free email (or any other service) and store sensitive data on that firm's infrastructure, are they selling or reading or moving information about you? Where's the data stored? How is it protected? Do they have their own green initiatives?

Vendors aren't benevolent; sometimes they give you something because they can monetize it in other ways. And that's what you need to check out before taking the seemingly easy route of passing the buck.

Steve Duplessie is founder and senior analyst at Enterprise Strategy Group. You can see his blog at http://esgblogs.typepad.com/steves_it_rants/.

Server virtualization? How about data center virtualization?
THOUGHTS ON THE EVOLUTION OF THE DATA CENTER

BROCADE: THE FIRST STEP IN DATA CENTER VIRTUALIZATION. How do you reap the benefits of virtualization without abandoning your existing technology? The Brocade Data Center Fabric (DCF) architecture. This strategic framework gives you the performance, scalability, and reliability to embrace technologies like server virtualization today and a virtualized data center tomorrow—leveraging the hardware and software you already own. Learn how Brocade can power your next-generation data center at www.brocade.com/virtualization

Download the white paper at brocade.com/virtualization

© 2008 Brocade Communications Systems, Inc. All rights reserved. Brocade is a registered trademark, and the B-wing symbol is a trademark of Brocade Communications Systems, Inc.

economy down, salaries up


salary survey

SALARIES and the demand for experienced storage professionals continue to rise even as many shops struggle to do more with fewer staff and tighter budgets, according to our 2008 annual Storage magazine Salary Survey. In a year that will be remembered for Wall Street's implosion, storage managers faced a data explosion, and were forced to manage exponential growth at a time when CEOs wanted every IT infrastructure project to count toward the bottom line.

Raises jumped an average of 6% over 2007, but workloads and stress also increased, according to our exclusive annual Salary Survey. By Ellen O'Brien

Photography by Ryan Donnell


"THE SALARY WASN'T THE PRIMARY MOTIVATION FOR ME. IT WAS MORE OF A CHALLENGE, A CHANCE TO LEAD THE GROUP." —ELIJAH GOLDEN, TEAM LEAD FOR THE DEDICATED STORAGE GROUP AT ING DIRECT


The average salary for the 759 respondents who completed our survey and spent some or all of their time working on storage climbed to $86,573, a 6% increase vs. the salaries they reported earning in 2007. This has respondents earning an average of 3% more than the 250 storage pros who completed last year's survey. In addition, our 2008 annual Salary Survey respondents predict that their 2009 salaries will jump by 4.5% (to an average of $90,483).

As the number of terabytes managed grows, so do storage salaries. Storage pros managing 10TB to 99TB earn an average of $84,597, while those managing 100TB to 500TB earn an average of $91,735; those managing more than 500TB earn an average of $102,595 (see "Average 2008 salary by TBs managed," p. 24).

But as workloads increase so does stress, according to our survey, which contributed to complaints of job dissatisfaction. Of those surveyed, approximately 97% report an increase in storage capacity vs. 2007. Almost half watched their storage capacity grow between 11% and 30%, and a little more than 15% reported storage growth greater than 50%.

"It has exploded," says Elijah Golden, team lead for the dedicated storage group at ING Direct, an Internet bank. "It has doubled to more than half a petabyte since I first got here," adds Golden, who arrived at the Wilmington, DE, bank a little more than two years ago.

The Mid-Atlantic region (where Golden works) had

WHAT STORAGE PROS LOVE/LOATHE ABOUT THEIR JOBS

LOVE
CULTURE: "Likes to promote from within. Puts employee needs ahead of politics."
FLEXIBILITY: "Able to work from home once a week. Self-managed."
CHALLENGING WORK: "Work on multiple storage and host platforms. Diversity of tasks rounds out my skill set."
COWORKERS: "People I work with are great."
AUTONOMY: "Able to work at my own pace without someone looking over my shoulder all the time."
ORGANIZATIONAL IMPACT: "The ability to be creative and bring new solutions to business problems."
HOURS: "I'm not on call for storage."
PRIDE: "Self-satisfaction from making storage and backup retention work more efficiently."
RESPECT: "Senior management has much faith in the decisions/recommendations I make."
COMMUTE: "Close to where I live."

LOATHE
STRESS: "Stressful, hectic workloads."
NO BONUS: "The company I worked for last year was acquired and our new company doesn't have a bonus structure. Talk about a pay cut!"
HOURS: "A 24/7 obligation for support."
POLITICS: "Politics and the process can really slow down action."
THE ECONOMY: "Pressure to always have to prove my worth in my job."
ATTITUDES: "Storage isn't a priority within IT because it's not understood."
TECHNOLOGY: "Unreliable hardware that causes outages."
WORKLOAD: "There are too many new storage projects and not enough dedicated staff time to complete them."
CAREER PATH: "Lack of advancement opportunities."
BUDGETS: "The training budget is constricted so everything is from manuals and online searches."

the second highest salaries among respondents, withan average of $95,597; the leader was the Pacific re-gion where the average is $96,141 (see “Average2008 salary by region,” p. 24). The Mid-Atlantic re-gion topped the charts when it came to salary in-creases in 2008 vs. 2007, averaging more than 8%;that’s 3% higher than the average raise reported byall survey participants.

Wanted: New challengesA competitive salary was cited as the most importantfactor in choosing a job by 62.5% of respondents. Thiswas followed by career advancement and job respon-

sibilities, which were cited almost equally when de-ciding to take a new job or stay put; the opportunityto work on innovative projects also heavily influencedjob satisfaction and career choices.

ING Direct offered Golden his first chance to leada storage team. “The salary wasn’t the primary moti-vation for me,” he says. “It was moreof a challenge, a chance to lead thegroup,” he says. “And I knew I wouldbe part of a group where we discussedstorage at a global level.”

Seventy-two percent of our respon-dents report managing teams with fiveor fewer people (Golden manages ateam of three people). The promise of new job responsibilities and the expectation that he would interactwith senior management were sellingpoints for 44-year-old Golden. “One ofmy goals is to do what we call a SANhealth check and present the findingsto senior management,” he says, and toconduct an in-depth analysis of storageresource management (SRM) tools.

Additional benefits, such as ING Direct's 401(k) matching contribution of 6%, a large onsite gym that offers exercise classes, and a 45-minute commute all factored into Golden's decision to work for ING. However, according to our survey, benefits ranked last among respondents (behind company location) when considering a new job. And, perhaps as a sign of stricter corporate budgets, more than 18% of those surveyed say their benefits packages were reduced this year vs. 2007; however, 14% report improved benefits, while 67% saw no change.

Last year, Matt Milone was looking for a new challenge and accepted the position of senior engineer, SAN architecture at First Financial Bank in Middletown, OH. Milone, who had previously worked in Ireland for Dell Inc., says he was inspired by the possibility of building a new storage system. "They had a partially implemented storage infrastructure, but they needed someone to come in and design a storage environment," he says. "That's a very rare opportunity."

Like so many storage pros these days, Milone was tasked with saving money while implementing new projects. For example: "When we were bringing the [EMC Corp.] Clariion storage online, [EMC] added some blocks [in the contract] to provision storage and set it up. I said, 'No, we'll have a block in there for you to certify, but I'll do the rest of the work.' They were probably not too happy about it, but the bank was happy." That's the sort of thing that helps him earn bonuses, says Milone.

When comparing annual bonuses for storage professionals in various industries, financial services led the pack. Storage professionals in this sector say they anticipate an average 2008 bonus of $13,324. Of course, those estimates were provided before the credit crisis on Wall Street turned catastrophic, casting a pall over the nation's lending institutions and leading to massive buyouts and bailouts (see "Most storage jobs safe after fallout," p. 27). In 2008, however, storage professionals working in financial services earned an average of $91,881. That lagged behind independent contractors ($126,667), media/publishing ($96,333) and IT services ($93,964) (see "Average 2008 salary by industry," this page).

Culture is key

Mark Sadler has worked as a storage administrator for the last 18 years at Dillard's Inc., a chain of department stores. Based in Little Rock, AR, Sadler says he never expected to stay so long at the job. Early on, he started specializing in IBM products and liked that the IT department was ambitious.

There's a growing realization that experienced storage administrators are very valuable to their company. "Some people say I'm a tape specialist," says Sadler, "which is funny—because they think chimpanzees can run this—until someone's data is lost, and lawyers show up for ediscovery and you're expected to produce something."

Sadler is part of a dedicated storage team, and he says that shows him Dillard's understands the relevance of storage in the big picture. The likelihood of having a dedicated storage team increases when a company's revenue exceeds $500 million. For companies with revenue of less than $500 million, an average of 28% of respondents report dedicated storage teams. That number rose to 37% for companies with revenue of $501 million to $1 billion, and shot up to 47% for companies with sales of $1.1 billion to $10 billion. Overall, more than 40% of those surveyed work at companies with a dedicated storage group, while nearly 40% work in storage jobs that are part of a systems group. Another 14% work in companies where storage is organized within the networking group.

[Chart: Average 2008 salary based on TBs managed]
[Chart: Average 2008 salary as it relates to company revenue]
[Chart: Average 2008 salary by region]
[Chart: Average 2008 salary by industry]

This year, as in years past, our Salary Survey reveals the value of time served on the storage front lines. The average salary for respondents with six years to 10 years of dedicated storage experience was an impressive $94,941. In comparison, the average salary for respondents with six years to 10 years general IT experience was $76,119.

Sadler, who works on a seven-member dedicated storage team at Dillard's, says his coworkers keep him intellectually engaged. "They are the best at what they do," he says. "It's cool working with people who are smarter than you. Being past middle-aged and still being able to learn things on a daily basis is a good thing."

We asked storage pros what they like most about their jobs and the majority said flexibility (such as working from home one day a week) and technically challenging work using cutting-edge technology (see "What storage pros love/loathe about their jobs," p. 22). Additional reasons respondents cited for feeling good about their jobs are respect for coworkers, autonomy and the ability to make decisions without having to navigate lots of red tape.

EXPERIENCE COUNTS

Experience still trumps education when it comes to affecting IT salaries, which IT workers know has long been true. But this year, unlike last year, our survey shows that employees with a college degree did receive a salary boost vs. those with little or no college experience. On average, respondents without undergraduate degrees earned $80,908; those with degrees earned $87,259 and advanced degrees lifted it up to $91,731 (see "Average 2008 salary by education"). The killer combination, according to our survey, is having more than 10 years dedicated storage experience and an advanced degree. Those storage pros were rewarded with average annual salaries of $122,975. Fifty percent of storage pros who completed our survey have undergraduate degrees and 15% have graduate degrees.

Hands-on SAN skills have the potential to add somewhere between 13% and 18% to base salaries, says David Foote, CEO at Foote Partners LLC in Vero Beach, FL. "That's a significant number," says Foote, adding that "you might not be able to get 16%, but you're going to get a bump for SAN skills."

Storage certifications from vendors such as Brocade, EMC Corp., Cisco Systems Inc. and IBM Corp. show less impact on salary increases than experience. The majority of our respondents (65%) hold no vendor certifications. Among those who have certifications, 21% say it "definitely" helped their career, more than 50% say it "somewhat" helped their career and 25% say certifications haven't helped at all. Among respondents to our survey, those with no certifications had a higher average salary than those with five or more certifications; however, having three certifications gave respondents a slight bump of less than $3,000 (see "Average 2008 salary by number of certifications").

[Chart: Average 2008 salary by education]
[Chart: Average 2008 salary by number of certifications]

For those who are frustrated or dissatisfied in their current jobs, complaints focused on upper management, budgets squeezed too tight and staffs stretched too thin. More than one respondent commented that it was difficult to specialize in any one storage skill when wearing so many hats. A lack of understanding of storage issues within the executive ranks also contributed to job dissatisfaction.

Jim Lekas, an IT systems administrator at Marlboro, MA-based Hologic, says his job is made more enjoyable because his boss is a "storage guy. The good thing is that I'm able to go to my boss and explain what we might need."

Stock options were one of the main reasons he took the position nine years ago, says Lekas. "I kind of caught the down curve when there were lots of jobs available and there were lots of people giving stock out," he says. Today, only 13% of those surveyed say they receive stock options.

At 44 and married with three kids, Lekas says he appreciates the stability of the company and its competitive health benefits. In 2008, says Lekas, he heard from lots of recruiters "because I work a lot with NetApp [products]. But I don't want to go to New York City." Like 53% of our respondents, Lekas says he envisions a career path focused on storage. The remaining 47% say they plan to leverage their storage experience to move into another area of IT.

Budgets and bosses

Jeffrey McMorran has worked for 14 years at NMG, a Newmarket, ON-based management company with annual revenue of approximately $75 million. "The salary is very, very good," says McMorran, who earns $136,000 as the IS director. And NMG lacks the bureaucracy McMorran knows he would likely have to deal with at a larger company.

NMG has about 53 employees, says McMorran, who's part of a small group managing approximately 1.3TB. "We have appreciation days—golf days; it's a great place to work," he says. McMorran acts as a CIO, reporting to the VP of finance and administration; it's a role he enjoys, although it means he's also a team leader, something he'd rather not be. "Computers are easy, people are difficult," he notes.

McMorran's salary is notable for a smaller company. Our survey shows salaries rising in step with company revenue, starting at $73,208 in companies with less than $50 million in revenue and stretching to $97,555 in companies with $5.1 billion to $10 billion in revenue (see "Average 2008 salary as it relates to company revenue," p. 24).


Scott Keister, senior manager, enterprise systems at World Kitchen LLC in Corning, NY, has a lot of opportunities to compare his upstate salary to those in New York City, which are sometimes higher but always accompanied by a higher cost of living.

"For this area, I think my salary is very competitive," says Keister, 41, who has worked for the dinnerware manufacturer for the last 12 years. He attributes his longevity to a corporate mission to continue technology investments to improve business processes.

"Our EMC sales rep comes in and says for the size of your company, the things that they have allowed you to get is just amazing," says Keister. He attributes this, in part, to a CIO who was promoted from within the ranks and was the former director of enterprise services. "He has been a good speaker for us—to challenge the business to make sure they provide us with the technology we need to make sure the business runs as well as it does," says Keister.

When it comes to spending on storage, 53% of those surveyed estimate that their company spent less than 15% of their 2008 IT budget on storage. Another 13% estimate between 16% and 20% of total IT dollars went toward storage. Another 14% figure storage received more than 20% of the total IT budget. (Twenty percent of respondents said they didn't know.)

Keister says he's lucky his job allows for (or requires) a lot of strategizing. Storage professionals are dividing their time almost evenly between storage design, primary storage operations and backup, according to our survey. And another 15% of their time is spent on maintenance.

"I would say we spend an enormous amount of time each year strategizing and planning improvement," says Keister. "It helps us develop a real three-year strategy and it's amazing—it works." There's one good way to say for sure that a company cares about its technology investments, notes Keister. "When budget season comes around and you ask for things and you get it. That's when you know they get it," he says.

Ellen O’Brien is a senior editor at Storage.

MOST STORAGE JOBS SAFE AFTER FALLOUT

The full impact of this fall's U.S. financial crisis, and its impact on IT jobs, won't be known for some time. For now, industry experts predict that only the highest paying IT jobs will take a hit.

"I don't see a massive contraction of their compensation," says Kaushik Roy, an analyst at Pacific Growth Equities in San Francisco. When British banking giant Barclay's paid $1.7 billion to bail out Lehman Brothers' North American operations, Lehman's data center operations were key to the deal, says Roy, adding that "their data center asset management is one of their crown jewels. I don't see them making a change there."

In general, he says, IT folks in New York City should be hardest hit and may have to settle for a salary reduction to stay in the city. "There could be some shifts here and there," says Roy, "but these people are very much in demand."

David Foote, CEO at Foote Partners LLC in Vero Beach, FL, an IT skills and salary research firm, agrees: "There are some regulated industries I wouldn't want to be in right now; I think everyone at Lehman or Merrill Lynch would be nervous." But most of the jobs lost will likely be sales and marketing, he predicts. "To tell you the truth, I don't think IT is going to be that inconvenienced—only to the degree where there are redundant systems [and mergers]."


backup best practices

5 things that mess up your backups

Backup is still the greatest pain point for storage managers. The following five vexing backup problems can become less onerous if you use these simple procedures to improve your backup performance and reliability. By W. Curtis Preston

1. UNHAPPY TAPE DRIVES

Unhappy tape drives cause more backup and restore issues than any other problem. The most common thing to fail in most backup environments is a tape or tape drive. Tape error may frequently masquerade as another problem. (For example, one backup software product often reflects a drive failure as a network timeout.) And because most environments achieve less than half of the available throughput of their drives, corporate IT buys more and more drives to meet the throughput demands of the backup system.

Modern tape drives are designed to operate at their advertised speeds, and operating them at lower speeds is what causes them to fail more often; there's a minimum speed at which the tape must move past the head to achieve a good signal-to-noise ratio. Even variable speed tape drives have a minimum speed at which they can write data. LTO-4, for example, has a minimum native transfer rate of 23MB/sec. And while few users experience the 2:1 compression ratio advertised by drive manufacturers, whatever compression rate they do experience must be multiplied by the minimum transfer rate of the drive. For example, data that experiences a 1.5:1 compression ratio being sent to a tape drive with a minimum speed of 23MB/sec makes that drive's minimum transfer rate 34.5MB/sec (23 x 1.5).
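To make that arithmetic easy to reuse, here is a minimal sketch in Python (my own illustration, not something from the article or any vendor tool) that computes the slowest rate you can feed a drive before it stops streaming:

def min_feed_rate_mb_s(min_native_mb_s, compression_ratio):
    # Effective minimum rate the backup host must sustain to keep the drive
    # streaming: the drive's minimum native speed times the compression
    # ratio you actually observe on your data.
    return min_native_mb_s * compression_ratio

# The article's example: LTO-4 with a 23MB/sec minimum native rate and
# data that compresses at 1.5:1 must be fed at least 34.5MB/sec.
for ratio in (1.0, 1.5, 2.0):
    print(f"{ratio}:1 compression -> feed at least {min_feed_rate_mb_s(23, ratio):.1f} MB/sec")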

Depending on which backup software you use, you can increase the speed of backups that go directly to tape with the following: LAN-free backups, multiplexing and not using additional tape drives until you've given the initially used tape drives enough throughput. The second (and simpler) solution is to stop using tape as your primary target for backups and instead back up directly to disk. Using disk as an intermediary staging device usually gets the initial backup done much faster, and then the local (LAN-free) movement of data from disk to tape can go much faster. These backup methods will keep the tape drives much happier, they'll fail less often and you can reduce the number of tape drives you'll need to buy to get the job done.



2. MISSING DATA

The second big problem in today's backup systems is the data that backups miss. This isn't data you tried to back up and failed to do; it's the data the backup system never attempted to back up because it simply wasn't told to. Missed backups don't generate error messages, but they can (at some point) cause an RPE—a resume processing event. If this problem isn't addressed, you can be sure that someday someone will ask you to restore something that hasn't been backed up.

Consider the following two real-life stories: One day a backup administrator was asked to restore a set of files on server hpdbsvk. According to the firm's naming convention, this meant HP-UX database server "k." The backup administrator also knew that because servers were named in alphabetical order, there were also database servers hpdbsva through hpdbsvj, and he was only backing up servers hpdbsva through hpdbsvj. Immediately, he knew he had some work to do, but soon afterward someone walked into his office and asked him to restore a database on hpdbsvk. While the data was never restored, the administrator didn't lose his job and didn't even get in trouble. How is that possible?

Real-life story No. 2: One day an administrator was asked to restore some code sitting in /tmp on an HP-UX system. The file system had disappeared upon reboot because it was a RAM file system. The customer requesting the data was furious when he found out that the backup system didn't back up /tmp. Again, the administrator didn't lose their job or get in trouble. Why not?

In both cases, the reason the backup administrator didn't lose their job was the same: documentation. Back in the days before the Web, the backup system in question used a paper-based request form users had to fill out if they wanted a system backed up. The form included a line that read "Do not consider this request accepted until you receive a copy of it in your in-box signed by someone on the backup team."

In the case of the customer who requested a restore from hpdbsvk and started fuming because it wasn't being backed up, the backup administrator asked to see the form with his signature on it. The customer didn't have the form, so the issue became what I like to call a "YP not MP"—Your Problem, not My Problem—as far as the backup administrator was concerned. As for the /tmp situation, it was excluded from backups, and the exclusion had been approved by upper management and well-advertised. (After all, the "T" in tmp stands for temporary, so why would you back up temporary things?)

Applying the paper backup request system to today's Web-based world is simple. Create a backup system request Web page that notifies the user who requested the backup that the backup is being performed. If you're using a data protection management tool, the user who requests the backup can even be notified every time the backup succeeds or fails. How's that for customer service? The Web page should also list standard backup configurations, including things like what gets backed up (or not backed up) by default.
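As a rough sketch of that workflow (the names below are hypothetical, not any product's API), a request isn't treated as covered until someone on the backup team confirms it, and the requester is notified of results:

from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class BackupRequest:
    # Web-era version of the old paper form: a system isn't considered
    # covered until someone on the backup team has confirmed the request.
    host: str
    requester_email: str
    confirmed_by: Optional[str] = None
    confirmed_at: Optional[datetime] = None

    @property
    def covered(self) -> bool:
        return self.confirmed_by is not None

    def confirm(self, team_member: str) -> None:
        self.confirmed_by = team_member
        self.confirmed_at = datetime.now()

def notify(request: BackupRequest, job_status: str) -> None:
    # Stand-in for email or ticket integration: tell the requester every
    # time the backup for their host succeeds or fails.
    print(f"To {request.requester_email}: backup of {request.host} {job_status}")

req = BackupRequest(host="hpdbsvk", requester_email="dba@example.com")
print(req.covered)          # False -- no confirmation from the backup team yet
req.confirm("backup-team")
notify(req, "succeeded")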

It's also important to use your backup software's ability to automatically discover and back up all file systems or databases on a given machine. If your backup software has this feature, use it; don't attempt to manually list all file systems. You're just asking for trouble and an RPE when you discover that you forgot to add the F: drive on a particular server. If your backup app doesn't have this feature, get a new one.
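For instance, rather than hard-coding drive letters or mount points, a helper can ask the operating system what is actually mounted; this sketch leans on the third-party psutil library, which is my choice for illustration rather than anything the article prescribes:

import psutil  # third-party: pip install psutil

def discover_filesystems():
    # Return every currently mounted file system so newly added volumes
    # (that forgotten F: drive) are picked up automatically instead of
    # being silently missed by a hand-maintained include list.
    return [part.mountpoint for part in psutil.disk_partitions(all=False)]

for mount in discover_filesystems():
    print("include in backup:", mount)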


Virtual server backup tips

There are a lot of questions buzzing around VMware backups, but there aren't a lot of problems. Most people can back up their virtual machines (VMs) as if they were physical machines, and everything works just fine. Most major backup packages have changed their pricing so that you only pay for one license for the VMware server, regardless of how many guests you're backing up.

The big challenge some storage environments face is resource contention, especially if they're doing a lot of full backups. The first thing you can do to solve this problem is to better stagger the full and differential backups across the week and month to minimize the number of backups that could occur at any one time. You should also check out the ability of your backup software to limit the number of concurrent backups on the VMware host. Finally, you should investigate your backup software's ability to do incremental forever inside the VM using features like Synthetic Full Backups from CommVault, Saveset Consolidation from EMC Corp.'s NetWorker, Progressive Incrementals from IBM Corp.'s Tivoli Storage Manager and Synthetic Backups from Symantec Corp.'s Veritas NetBackup.

If, after using these techniques, you still have resource-contention issues inside the virtual server when you're backing up its guests, you should consider more advanced methods such as VMware Consolidated Backup (VCB), esXpress from PHD Technologies Inc., esxRanger from Vizioncore Inc. or using a snapshot-based filer that's VMware-aware.
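One simple way to spread full backups across the week, as the sidebar suggests, is to assign each VM a day deterministically; this small Python sketch (my own, assuming your backup tool lets you schedule fulls per client) hashes the VM name to a weekday:

import hashlib

WEEKDAYS = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

def full_backup_day(vm_name):
    # Deterministically spread full backups across the week so they don't
    # all land on the same night and contend for host resources.
    digest = hashlib.md5(vm_name.encode()).hexdigest()
    return WEEKDAYS[int(digest, 16) % len(WEEKDAYS)]

for vm in ("web01", "web02", "db01", "exch01"):
    print(vm, "-> full backup on", full_backup_day(vm))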

3. UNNOTICED TRENDS

Backup administrators spend most of their time looking at last night's backups. They want to know what failed last night and the escalation procedure for that server. Can they rerun the backup? If so, what do they do if the rerun backup continues into prime business hours? Must they notify someone?

As a result, backup administrators often don't notice if a given server, file system or database doesn't successfully back up for multiple days. Some environments where I've performed backup assessments have had servers that have gone several days—even as much as a month—without a successful full or incremental backup; and the larger the environment, the greater the problem. At one customer's site where they back up 10,000 systems, more than 1,000 systems went four days or more without a successful backup of any kind.

Servers that go several days without a backup are obviously at greater risk than others. If a backup administrator was aware of such a trend, they might do a number of things, such as cancel less important backups so that the server that hasn't backed up for several days can be given more resources. At a minimum, the storage admin may set the priorities on the backup system so that a server that hasn't backed up for several days is more important than other servers.

Here are some examples of other trends that are important to detect:

• Servers backing up significantly more data than they used to back up
• Tape libraries/disk devices approaching capacity
• Tape and disk system throughput numbers

Most backup products don't provide the kind of tools necessary in their base product to see this kind of information. The solution is a relatively simple one, but not an inexpensive one: Buy a data protection management tool. There's a reason a whole industry has grown around such tools, and it's difficult to properly manage a backup system without one.
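Even before buying such a tool, the basic trend check is easy to script if your backup software can export job history; this sketch assumes a hypothetical CSV export with client, status and end_time columns and flags clients with no recent success:

import csv
from datetime import datetime, timedelta

def stale_clients(job_history_csv, max_days=4):
    # Flag clients whose most recent *successful* backup is older than
    # max_days. (Clients with no successful run at all would need to be
    # checked against the full client list separately.)
    last_success = {}
    with open(job_history_csv, newline="") as f:
        for row in csv.DictReader(f):
            if row["status"].lower() != "success":
                continue
            ended = datetime.fromisoformat(row["end_time"])
            prev = last_success.get(row["client"])
            if prev is None or ended > prev:
                last_success[row["client"]] = ended
    cutoff = datetime.now() - timedelta(days=max_days)
    return sorted(c for c, t in last_success.items() if t < cutoff)

# Example: print(stale_clients("jobs.csv", max_days=4))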


What role does deduplication play?

Without deduplication, the use of disk in the backup system is relegated to storing only one or two nights' worth of backups in a process known as disk staging, as backups are staged to disk before they go to tape. This helps backups but doesn't help restores, as most restores will still come from tape.

Dedupe allows you to store several weeks or months of backups on the same disk that was previously storing only one or two days' worth of backup. Keeping more data on disk allows for much faster restores for all data, not just the backups made in the last few days.

Deduplication can also help you get data offsite without shipping tapes. Because the dedupe system stores only the new, unique blocks every night, backups can be replicated offsite, allowing you to have onsite and offsite backups without touching a tape.

4. OVERUSE OF CUSTOM SCRIPTS

Customization comes in a variety of flavors and can be a good thing. It can make your backup system do something it wasn't originally designed to do, allowing you to work around limitations. But customizing your backup process can also create extra work and make things much more complex.

Backup administrators good at shell or batch scripting can create programs that help them automate certain tasks. One customer I visited had 150 custom scripts written around their backup system. The problem with this kind of customization is that it's hard to maintain and even harder to pass on to the next backup administrator. Administrators who create too many scripts may find themselves stuck as "the backup person" because no one wants to take on and maintain all of those custom scripts.

Another way customization manifests itself is in unique backup configurations. Instead of having a standard backup configuration for everyone, some environments create custom backup configurations for each customer that requests one. For example, "For this server, we're going to back up only the F: drive and we'll do it only on Thursday nights from 3:00 am to 4:00 am." Besides making things much more complex, this kind of customization also goes against the way most backup software is designed. Backup software is designed to share resources and automatically send things to the right resource as it becomes available and as priorities dictate. Unique backup configurations drastically reduce the overall utilization of all resources by not allowing the backup software to do its job.

Overcoming this problem is relatively simple: Create standard backup configurations and stick with them. The following is an example of a standard for file-system backups:

• All systems back up all drives
• *Temporary Internet Files*, C:\Temp, *.mp3 files are always excluded
• All systems receive a full once a month
• All systems receive a differential/cumulative incremental/level 1 once a week
• All systems receive an incremental once a day
• Fulls and differentials will be distributed across the week/month as dictated by the system load
• All backups occur between 6:00 pm and 6:00 am

Deviations from this standard must be justified by business reasons and approved by a business unit manager who will receive a chargeback for the extra cost involved in such customizations.

Regarding custom scripts, the best thing to do is to consult the forums and mailing lists for the backup software you're using to find out if anyone has discovered another way to meet your requirement without custom scripting. Software updates often fix such problems found in earlier versions, but people continue to use their old ways because it's what they know.

Finally, if the software you're using can't be made to do what you want it to do without all of those custom scripts, perhaps it's not the right backup software for you and another backup application would do what you need it to do out of the box. Although changing backup software packages should be considered a last resort, it may actually be the best thing in some cases.

5. UNENCRYPTED DATA

News reports of lost or stolen tapes have become more frequent. Most states now require public notification of such a loss. Regarding personal data, however, there's a moral obligation to keep it safe that goes beyond the risk of public exposure. According to CBSnews.com, someone steals a person's identity every 79 seconds, and then opens an account in that name and goes on a buying spree. And a Gartner Group study reveals that 1 in 50 people have suffered from some type of identity theft. Given the incredible popularity of this crime and the huge impact it has on those targeted (you could be the next victim), do you want it to be your backup tape that helps some identity thief?

There are two solutions to this problem. First and foremost, encrypt your backups. There are a number of ways to encrypt data, such as using backup software encryption and encryption engines built into fabric switches, tape libraries and disk drives. The second solution is to not ship tapes offsite but to use a disk-based deduplication backup system that replicates your backups offsite. If you still want to make tapes, make them at your offsite location.
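If none of your existing layers can encrypt, even a wrapper script keeps cleartext off outbound tapes; this sketch uses the third-party cryptography package (my choice of library for illustration), and in practice key management matters as much as the cipher:

from cryptography.fernet import Fernet  # third-party: pip install cryptography

def encrypt_backup(plaintext_path, encrypted_path, key):
    # Encrypt a backup image before it is written to tape or shipped offsite.
    # Whoever holds the key can restore; lose the key and you lose the backup.
    fernet = Fernet(key)
    with open(plaintext_path, "rb") as src, open(encrypted_path, "wb") as dst:
        dst.write(fernet.encrypt(src.read()))  # fine for a sketch; chunk large images in practice

key = Fernet.generate_key()  # store this in a real key-management system, not beside the tapes
encrypt_backup("nightly.dump", "nightly.dump.enc", key)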

In my opinion, anyone in management who refuses to fund the security of backups should be relieved of their duties, and very well could be if things go wrong. Make sure that person isn't you. If your company is shipping unencrypted backup tapes with personal information on them, you should immediately notify your superiors in writing of the seriousness of this problem and request a project to solve it. Document your request and the response, especially if it's a negative one. Continue to make yourself a pain until they solve the problem or give you another job; you don't want the job of enabling identity thieves.

In sum, while some of these solutions may be simpler than others, a lot of what you can do to make your backups better comes down to understanding the limitations of what you're using and knowing how to document and improve your backup processes. Sometimes it pays to spend money on specialized backup tools that provide a clearer view of your backup environment.

W. Curtis Preston is vice president, data protection services at GlassHouse Technologies, Framingham, MA. He's also the author of Using SANs and NAS, Unix Backup and Recovery and the Storage Security Handbook.




SHOW-ME STATE SHOWS HOW TO CONSOLIDATE STORAGE

The Missouri state government embarked on a major storage consolidation project that included numerous technical and political hurdles. By Alan Radding

When Matt Blunt ran for the Missouri governorship in 2004, he made IT consolidation a major piece of his platform for saving money. Dan Ross, a career Missouri bureaucrat, saw candidate Blunt make that promise on the news and turned to his wife, remarking: "What a monster of a task for some poor fool!"

A few weeks after Blunt became governor, he called Ross and offered him the consolidated CIO job. "At first I thought he had misspoken and meant to offer me another job," recalls Ross, who had no serious IT experience. "I wasn't even what you would call a 'power user,'" he adds. It didn't matter, the governor told him; the CIO job needed someone with Ross' top administrative skills. He took the job.

The monster job consisted of consolidating the IT operations of 16 state cabinet agencies—each with independent IT budgets, staffs and decision making—into the state's central mainframe data center, which was wrapping up a mainframe consolidation effort. At the time, Ross calculated his consolidated IT budget would be $205 million in 2005 once he painstakingly pulled "well-camouflaged money into a visible pile," he says.


Today, the consolidated IT budget stands at $218 million. It should be closer to $250 million, but legislators have repeatedly dipped into the budget to redirect anticipated consolidation-related savings to other projects. "That's just too much money not to attract legislator interest," says Ross, who spends much of his time riding herd on this aggregated budget drawn from 121 separate funding sources and 161 appropriations, many of which are earmarked for specific programs—in short, an accounting nightmare.

The actual technical consolidation fell to the central data center staff. Howard Carter, the data center director, had a mixed reaction when he heard then-candidate Blunt's promise of big savings from IT consolidation. "I thought it could be good for state government, but I knew it would be a big increase in our workload," he says.

Multiple data centers

Nobody truly knows how much storage the 16 agencies had at the outset. Even now, some agencies may keep storage tucked in servers or as small SANs hidden from sight. At the start of the project, the data center managed 12TB of open-systems storage, reports Carter. Today, three years later, it hosts more than 200TB of open-systems storage, 10 times more than for mainframe operations.

To achieve the promised savings, the data center team has to squeeze every possible dollar out of its storage acquisitions. Two rules determine every choice: an agency can have whatever it wants for storage if it can pay for it, and dollars drive every decision. To date, "the consolidation activities have or will result in a total savings and cost avoidance of $52.3 million," reports Ross.

[Photo: Data center director Howard Carter led the expansion of open-systems storage into the main data center, while CIO Dan Ross concentrated on budget items.]

The Missouri state data center itself is the result of a consolidation effort that began years before Blunt made IT consolidation a priority. The initial consolidation involved only the state's biggest mainframe users. A second mainframe consolidation in 1996 pulled in the last of the state's mainframes. The current mainframe is IBM Corp.'s biggest—the System z10 configured for almost 6,000 general-purpose MIPS and another 3,000 MIPS from various assist processors.

Open-systems storage growth

By the time the open-systems consolidation began, following Governor Blunt's election in 2004, "we had just a smattering of open systems in the data center, maybe a couple of dozen," says Carter. That amounted to approximately 12TB of storage.

The initial couple of dozen servers have grown to more than 500 consolidated servers, approximately 150 of which are virtual. When the consolidation began, the CIO's office had identified 1,400 servers. Many consolidated servers run as IBM server blades in the central data center, while others run as rack-mounted Dell Inc. servers. The standard server is a quad-core, dual-processor, dual-socket machine, although a few four-processor, quad-core servers have been added.

For the open systems, the data center brought in IBM's SAN Volume Controller (SVC) and set up a SAN behind SVC. Storage currently consists of two IBM DS8000 arrays and five IBM DS4700 arrays, reports David Kassabaum, the data center's internal consultant (which is a state-employee position).

The DS8000 arrays are used for mainframe and open-systems storage with one DS8000 used exclusively for open systems. One DS8000, with 25TB of Fibre Channel (FC) storage consisting of 146GB drives, handles mainly mainframe storage with only 1.5TB of open-systems storage. The other DS8000, with 50TB of FC disk, handles only open-systems data. The rest of the storage is divided among the five DS4700 arrays, three designated for SATA only and two for FC. The five DS4700 arrays are used exclusively for open-systems storage.

With the consolidation, data center storage has been growing at a compound annual growth rate of 25%. The growth comes mainly from the open systems, which run Windows. "We haven't added new mainframe applications, although the existing applications continue to grow," says Carter.

To accommodate the open-systems storage traffic, the data center boosted the network fabric. It brought in two Cisco Systems Inc. 9513 switches, increasing the port count to 300 ports, notes Kassabaum.

Implementing the consolidation

Over the last three years, budgets have been consolidated and staff transferred to the CIO's office. Each agency has also determined what it needs for servers and storage. Some servers are now completely administered through the consolidated data center, while others reside in computer rooms across the state and are managed locally.

Old equipment is classified as surplus and sold whenever possible. Each agency's workload was put on new servers and storage. Surprisingly, very little storage and data was actually transferred to the data center. A big piece was moving and consolidating the Exchange email application and data; that migration was handled by an outside service company. In other cases, the agency's IT staff simply copied over the data.

At the data center, storage resides behind SVC, which allocates the storage as Vdisks, which are equivalent to LUNs. "The storage is allocated based on the agency's request and budget," says Kassabaum. "The agency pays for its storage."

Agencies have a choice of 15,000 rpm FC disk drives or 7,200 rpm SATA drives. The state's standard FC disk is 300GB and SATA drives are 750GB. Both drive types are configured for RAID 5.

The five DS4700 arrays are either FC or SATA. "We tried to mix FC and SATA, but it didn't work," says Kassabaum. The problem revolved around incompatibilities in the microcode. It was faster and easier just to segregate the different drive types.

Lessons learned

q Surround yourself with the right people. Designate outstanding people to handle critical planning, strategy, financial and technology tasks.
q Consolidate as fast as possible. Changes in leadership mean there's a chance consolidation may never be completed or that it might get rolled back.
q Preserve budget flexibility. Avoid getting locked into single-purpose budget allocations.
q Expect rapid fabric growth. Avoid running out of ports or excessive hop counts.
q Guard your consolidation savings. People will grab any available money they can identify.
q Pay attention to politics. Technology is only one part, often the smaller part, of the consolidation challenge.
q Consider skill sets. Jobs and responsibilities will change, often requiring training.

Although the data center had been a longtime IBM mainframe shop, IBM still had to compete for the consolidated storage. "We looked at the other vendors where we had a state contract. It came down to the DS4700 or the EMC Clariion," says Kassabaum. SVC actually limited the choices, so whatever they chose had to work with SVC.

The capacity of each array was determined by pricing trends. "Everything is cost driven. We aim to get the best price per terabyte," says Kassabaum. As a result, the storage team often finds itself buying more storage than its immediate need. "But we always end up using it," he adds.

The data center ran an informal negotiation, not a formal bid process, which gave EMC multiple chances to compete against IBM and come up with a lower bid. Each time EMC was more expensive. Both machines did what the state wanted; other features were superfluous bells and whistles that didn't impact the selection. "Dollars drove every decision," says Kassabaum.

SVC proved instrumental. The staff relies on SVC to stripe the data across multiple arrays, not just multiple disks. SVC also includes a large upfront cache. Between the multi-array striping and the large cache, SVC allows the staff to coax better performance out of the SATA disks, reports Kassabaum.

The staff also relies on IBM's TotalStorage Productivity Center (TSPC) to do whatever storage management they perform. With TSPC, they can see how the storage is allocated among agencies and the size of the Vdisks. "If it gets really busy, we might move storage around to reduce bottlenecks," says Kassabaum. Otherwise, they don't optimize for storage performance.

Allocating storage

"The only tiering we do is tiering by dollars," says Kassabaum. If an agency doesn't pay for FC, they get SATA. Occasionally, Kassabaum and Patty Washburn, computer information technology specialist, will make a recommendation, but the IT staff assigned to the agency usually knows what it needs and, more importantly, what it's willing to pay.

Kassabaum and Washburn don't actively manage the storage beyond the basic allocation. They set up the client agency requests and leave the actual management of the storage to the assigned IT staff. "After we allocate the storage, we don't know how it's used," says Kassabaum. An agency might have 10TB allocated, but whether they're storing 1MB of data or 9.9TB, Kassabaum and Washburn won't know. As a result, they allocate storage at a very high utilization rate, typically 90% to 95%, but have no idea how much is actually being used.

"We're always adding storage. We've gone from zero to 200TB of open-systems storage in just a few years. Just yesterday I had a request for 2TB," says Washburn, adding that "our biggest challenge is keeping ahead of agency requests." Many requests are unexpected, leaving the team scrambling to come up with capacity.

The data center staff also relies on SVC to move data around. "When a customer has really sensitive data, we use SVC to move it to an enterprise [DS8000] box," says Kassabaum. To ensure security, the storage teams map each LUN or Vdisk to a single host and zone the fabric so a designated host can see only its LUN.

The data center relies on IBM's Tivoli Storage Manager (TSM) for backup and recovery. Backups are made daily to tape and shipped offsite. Ross recently signed an outsourcing contract with IBM for a remote hot site for the recovery of the mainframe and those open-system servers running mission-critical apps.

The biggest technical challenge turned out to be not the servers or storage but the network. "We didn't have the luxury of planning the fabric," says Kassabaum. The state wanted to capture the savings from consolidation as fast as possible, so the data center team began with the older switches they had, 34-port McData (now owned by Brocade) switches.

They didn't get very far along when the data center began running out of ports. "We couldn't just add more switches because we would encounter hop counts that were too large," explains Kassabaum. That's when they turned to the big Cisco switches.

Playing politics

Technical challenges, however, pale in comparison to the financial and political challenges. The politics of IT consolidation in any organization can be fierce, with jobs and budgets at stake. The Missouri IT consolidation began as a political campaign promise. When Ross saw it as a monster, he was thinking about the politics involved, not the technical changes.


"It would be very hard for a CIO from the outside to come in and do this. My strength lay in being a long-term bureaucrat who had worked for both parties," says Ross. "This [consolidation] doesn't have a lot to do with technology," he adds; it has more to do with budgets, funding, appropriations and accounting.

Ross immediately brought in three top deputies to help him. One was a strategic planner, the second was the financial wizard and the third, Chris Wilkerson, was the true technologist.

Resistance to consolidation was expected from the start. "Yes, you lose autonomy, but there's no loss of [IT] service," says Ross. Resistance was about control, not service delivery. Ross made sure that every agency's service-level agreement was honored or they negotiated changes. Unlike past CIOs, who could make recommendations for change but had no power, Ross came to the task with the full power of the Governor's office behind him. He could transfer budgets and staff as required.

Jobs were another concern, as civil service employees are protected. Ross opened all top IT positions for anyone to apply. Some staffers chose to retire. "Nobody would lose their job due to the consolidation. No one was laid off," says Ross.

However, people would be asked to change and many would have to learn new IT skills. "We had one person who wouldn't take new training," recalls Ross. Without training, they couldn't do the job and were gone. "If they wanted job security they had to be prepared to learn new skills and change," he says.

The IT consolidation drove the demand for and growth of IT. But the consolidated IT organization is now 51 positions smaller than when Ross started. "We were able to leverage technology to keep staffing down," he says, which is good because Governor Blunt capped overall Missouri government at 60,000 employees. The consolidated IT organization has 1,186 employees.

Agencies also played politics with the savings they would see from IT consolidation. "We automated a process that saved one agency $500,000 in postage expenses," says Ross. But the agency had to spend those savings before legislators grabbed it for use elsewhere (see "Lessons learned," p. 39).

Future direction

Ross and Carter are looking ahead to the looming retirement of a number of veteran data center staff. Ross has gone so far as to set up an IT recruitment storefront on Second Life, the 3D virtual world. "I'm 58 years old and I never thought I'd have an avatar, but I do," says Ross, whose avatar, Second Life's slim, muscular male default avatar, along with avatars for other top IT executives, is busy recruiting technology people. They've already hired one techie and as the retirements start hitting, they'll be pounding the virtual pavements of Second Life and elsewhere for more hires.

By the end of summer 2008, a few of the agencies in the Missouri IT consolidation project had yet to be combined. With an election looming this month and Governor Blunt choosing not to run, the consolidation project will lose its biggest and most powerful champion. There's a possibility the new administration won't continue the consolidation or that some agencies may take it as an opportunity to bolt.

"But it will be hard to go back," says Carter. Ross, who has been reciting the mantra of budget savings and cost avoidance, expresses confidence that it won't be rolled back. Still, it's all politics, so anything can happen.

Alan Radding is a frequent contributor to Storage.

Colorado's consolidation plan

In 2008, the Colorado legislature passed a bill to consolidate the IT functions of 13 executive agencies to produce an IT group of 1,100 people with a $250 million budget. The consolidation, intended to roll out over four years, will be a logical consolidation leaving much of the IT physically dispersed. Led by state CIO Mike Locatis and deputy CIO John Conley, the team began extensive planning and preparation even before the bill became effective. Initial planning steps included the following:

q Peer organizations. Spoke with counterparts in Missouri and other states.
q IT employee input. Received input from 950 of the 1,100 employees to date, often at town meeting-styled gatherings and through a Wiki.
q Attorney general. Clarified employment issues.
q Agency heads. Discussed service-level agreements, as well as IT budgeting and billing.
q Vendors. Sent new procurement procedures and expectations to product vendors in an effort to build committed relationships.
q Asset inventory. Identified technology on hand, end-of-life state, lease expiration and refresh plans.

"If I were doing this again, I would take even more time to listen to employees," says Conley. "This is a massive change for employees."


Data migration tips

Data migrations can be complicated, time consuming and happen all too frequently. Here's how to simplify the process. By Robert L. Scheier

Whatever storage media your data sat on a year or two ago, chances are it's moved since then and will likely move again soon. There are plenty of reasons why that data may have to move: maybe the lease is up on an old Fibre Channel (FC) SAN and you're upgrading to new hardware, you're moving to a new data center or you need to move older files to less expensive storage to keep up with soaring data demands.

Data migration may be a common chore, but that doesn't mean it's easy. Disk (and tape) drives are linked to applications and business processes through servers, routers, switches, and storage and data networks, not to mention access control policies and other layers of security. The more complex your environment, and the more data you're managing, the less likely you'll be able to use simple copy functions built into operating systems or arrays to pull off your required migrations.

Migrating data involves a lot more than just ripping out one storage cabinet and plugging in another. The following tips will help make your data migrations go more smoothly.


1. Understand your mapping.

Before migrating any data to new storage arrays, be sure you understand how servers are currently mapped to storage so you can re-create those mappings in the new environment. Otherwise, servers may not reboot correctly after the migration.

To avoid unplanned outages, administrators should "understand the true end-to-end relationships among the platforms you're moving across," says Lou Berger, senior director of products and applied technologies at EMC Corp. This is especially important if, for redundancy purposes, your storage infrastructure is a multipathing environment where hosts may boot from alternate arrays if the primary array is down. If administrators fail to check the parameters on the host HBAs to ensure the pathing software is set up correctly, he says, the host may not reboot properly.

Administrators also need to be sure the host will discover storage resources in the proper order after a migration. "Some applications and databases are sensitive to the order in which they discover volumes," says Berger, because an application boot sequence might be on one LUN and its data on another.

Administrators may not even know a server exists until it fails to reboot after a migration, "because oftentimes people install them and forget them," says Ashish Nadkarni, a principal consultant at GlassHouse Technologies Inc., a Framingham, MA-based consulting and services firm. While storage discovery and auditing tools are valuable, he says, none of them can capture 100% of the misconfigurations that can cause a problem.
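A low-tech safeguard is to snapshot each host's view of its storage before the migration and compare it afterward; this sketch shells out to the Linux lsblk utility, which is an assumption about the environment (Windows hosts or multipath setups need their own commands):

import subprocess
from pathlib import Path

LSBLK = ["lsblk", "-o", "NAME,WWN,SIZE,MOUNTPOINT"]

def snapshot_mappings(outfile):
    # Record this host's block devices, WWNs, sizes and mount points so the
    # post-migration view can be compared against what was there before.
    listing = subprocess.run(LSBLK, capture_output=True, text=True, check=True).stdout
    Path(outfile).write_text(listing)

def diff_against(snapshot_file):
    # Return True if the current device layout differs from the snapshot;
    # any difference is something to explain before declaring victory.
    current = subprocess.run(LSBLK, capture_output=True, text=True, check=True).stdout
    return current != Path(snapshot_file).read_text()

# Before the migration: snapshot_mappings("/var/tmp/premigration-lsblk.txt")
# After the migration:  diff_against("/var/tmp/premigration-lsblk.txt")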

2. Gather metrics.

Jalil Falsafi, director of information technology at electronic components distributor Future Electronics Inc. in Montreal, had to migrate data from IBM Corp. DS4100 and DS4300 entry-level arrays to Hewlett-Packard (HP) Co. StorageWorks XP24000 arrays during intervals of relatively slow network traffic over a period of six weeks. That required an in-depth understanding of the capacity of his SAN and when other functions, such as a database backup, would increase network loads.


"You have to scope how many LUNs, or logical disks, you're going to migrate. You have to know their size; you have to know the speed of your array; you have to know the speed of your switch as well as 'hot spots' when traffic loads are very heavy," says Future Electronics' Falsafi. "You need to take the worst-case scenario into consideration, not the average or the minimum."
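The worst-case arithmetic behind that advice is simple enough to script; the numbers below are illustrative, not Falsafi's:

import math

def migration_windows(total_tb, worst_case_mb_s, window_hours):
    # Number of off-peak windows needed to copy total_tb at the worst-case
    # sustained rate -- not the average and not the best case.
    total_mb = total_tb * 1024 * 1024            # TB -> MB
    seconds_needed = total_mb / worst_case_mb_s
    return math.ceil(seconds_needed / (window_hours * 3600))

# Example: 20TB of LUNs at a worst-case 80MB/sec in 6-hour nightly windows
print(migration_windows(20, 80, 6))  # -> 13 nights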

Falsafi used monitoring tools available in FalconStor Software Inc.'s IPStor network storage server, as well as host- and array-based utilities, to gather those metrics.

"Migration can have a severe impact on overall system performance," says Chris McCall, product marketing director at LeftHand Networks Inc. (which is being acquired by HP). "It becomes a fairly nasty issue [with questions such as] 'Is my controller performance maxed out already or close to maxed?'" He warns that overloading a storage or data network with migration traffic can reduce the availability or performance of not only the data being migrated, but all of the data on the network.

Measuring network bandwidth needs before performing a migration is a chore that can be easily overlooked, says Greg Schulz, founder and senior analyst at StorageIO Group, Stillwater, MN. "Unless you know for sure, go out and doublecheck to see what the impact is going to be," he says. Once an administrator is sure how much bandwidth should be allocated to the migration and when it will be available, the bandwidth can be managed with tools such as optimization technologies, replication optimizers and traffic shapers, he adds.

3. Downtime isn't so bad.

Some vendors claim they can migrate data without causing any downtime for applications. But some observers, such as Gary Fox, director of national services, data center and storage solutions at Dimension Data, recommend building in some downtime because it's tricky to migrate data and ensure its consistency while doing a migration during regular production hours. If possible, he suggests, do migrations during non-business hours "so you're not under so much pressure" in case something goes wrong.

“I’m kind of old school in this regard,” he adds.

4. Watch for security leaks.

When migrating data among arrays from various vendors, permissions and security settings can be left behind, making the data vulnerable to theft, corruption or misuse.

Managing migration in virtual environments

SERVER VIRTUALIZATION can make data migrations more of a challenge, with the prospect of migrating virtual machines (VMs) among physical servers, as well as migrating the data the VMs use. The system images, and the data needed by applications, may also have to be converted into new formats for use in the virtual environment.

Both virtualization vendors and third parties provide tools to perform such functions. VMware Inc.'s Virtual Machine File System (VMFS), for example, gives multiple VMs shared access to pools of clustered storage and, says the company, provides the foundation for live migration of virtual machines and virtual disk files.

VMware's VMotion also allows customers to perform live migrations of multiple VMs among physical servers with no downtime. It also provides management capabilities such as the ability to prioritize migrations to ensure that the most important VMs have the computing and network resources they need.

VMware's Storage VMotion allows customers to migrate the data used by VMs among arrays with no downtime, but requires more manual work than is now required with VMotion, says Jon Bock, senior manager of product marketing at VMware. Either VMware, or storage vendors writing to VMware's API, will provide more automation tools for Storage VMotion in the future, he says.

Microsoft Corp. doesn't offer live migration of virtual machines in the initial release of its Hyper-V server virtualization technology, but says that capability will be included in the next release, which isn't expected until next year.

VMware recently announced vStorage, which includes new APIs designed to enable storage vendors to give their storage management tools better visibility into and integration with the VMware virtual environment, and to provide better visibility through the VMware vCenter Server management interface into how VMs are using storage.

VMware recently unveiled updates for Storage VMotion such as the ability to migrate volumes from thick- to thin-provisioned devices, to migrate data from Raw Device Mapping volumes to Virtual Machine Disk Format (VMDK) volumes, and to better integrate storage management into the VMware vCenter Server management interface.

The latest release of Symantec Corp.'s Backup Exec provides support for heterogeneous migration and replication of data in both VMware ESX and Microsoft's Hyper-V environments, says the company. Other third-party offerings include DataCore Software Corp.'s "Transporter" option for new licenses of its SANmelody and SANsymphony software that allows administrators to migrate disk images and workloads among different operating systems, VMs and storage subsystems.

Among other updates to its backup software, Hewlett-Packard Co. recently extended the Zero Downtime Backup and Instant Recovery features of its Data Protector software for VMware VMs, giving customers "zero impact backup of mission critical application data residing on virtual machines," according to the company.

tems—say, from NTFS to NFS—can result in a lossof permission and security settings, says GlassHouseTechnologies’ Nadkarni. “If you’re moving … fromWindows to Unix or Unix to Windows, you have tobe very, very cautious because more often than not theuser permissions are completely destroyed,” he says.

The easiest way to avoid security issues is to do a block-level rather than a file-level migration. That way, the migration is performed at “a level below the file system, so the host doesn’t even see the difference” in the data, says Nadkarni.

It’s possible to maintain security settings in a file-based migration, he notes, if the source and target systems lie within the same authentication or authorization domain in a service such as Microsoft’s Active Directory. Some file-based migration tools also have the intelligence required to maintain such security settings, he adds.

Digging into the details of how a file copy utility works is important, says StorageIO’s Schulz. “What does it copy? How does it copy? Does it simply copy the file, or copy the file as well as all other attributes, metadata and associated information? Those could be the real gotchas if you haven’t brought along all of the extra permissions and access information. Dig into the documentation, talk to the vendor or service provider, and understand what type of data is being moved, and how it is to be moved.”
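One concrete way to act on Schulz’s questions is to spot-check attributes after a file-level copy. The sketch below compares file modes and ownership between a source and a target tree; the mount points are hypothetical, and the rsync flags mentioned in the comments (-a for permissions, owners and timestamps, -A for POSIX ACLs, -X for extended attributes) are simply one widely used way to carry attributes along, not a tool recommendation from the article.

# Spot-check that permissions and ownership survived a file-level migration.
# Paths are hypothetical; run after copying with an attribute-preserving tool
# (for example "rsync -aAX source/ target/").
import os
import stat

def attr_summary(path):
    st = os.stat(path, follow_symlinks=False)
    return (stat.filemode(st.st_mode), st.st_uid, st.st_gid)

def compare_trees(src_root, dst_root):
    mismatches = []
    for dirpath, dirnames, filenames in os.walk(src_root):
        for name in dirnames + filenames:
            src = os.path.join(dirpath, name)
            dst = os.path.join(dst_root, os.path.relpath(src, src_root))
            if not os.path.lexists(dst):
                mismatches.append((src, "missing on target"))
            elif attr_summary(src) != attr_summary(dst):
                mismatches.append((src, "mode/owner differs"))
    return mismatches

if __name__ == "__main__":
    for path, problem in compare_trees("/mnt/source_share", "/mnt/target_share"):
        print(f"{problem}: {path}")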

5. Virtualize carefully.
Host-based storage virtualization, which is available from a number of vendors, is a fairly reliable way to accomplish such cross-vendor migration. Future Electronics’ Falsafi says the host-based virtualization provided by the FalconStor software made the actual migration painless. “We zoned the XP with a Fibre Channel switch so [it] came up as another set of hard disks to the IPStor. We created a mirrored LUN on the HP StorageWorks XP24000 array and did synchronization. Once the primary array and the backup LUNs were synchronized … all we did was flip the switch from the primary to the backup, and the backup became the primary,” he says.

But not all virtualization is created alike. Some virtualization appliances can add to the work administrators have to do, or cause application outages while administrators update drivers or the volume managers used to manage the storage, says Nadkarni. For example, he says, a virtualization appliance can cause problems by changing the SCSI Inquiry String used to identify a specific array. If the appliance changes the inquiry string, the volume manager used to manage the storage must be reconfigured to recognize the new string, he says, or applications that depend on that volume may not run properly. Storage admins should ask virtualization vendors whether their products are “completely transparent,” says Nadkarni, or whether their installation will require changes to servers or other components that could cause application outages.

Nadkarni also suggests staying away from virtualization appliances that require an array or entire storage network to be taken out of service to virtualize (or unvirtualize) storage resources. Some appliances “may require you to take an outage to reconfigure your network or to take an outage on the entire storage array, to insert the appliance,” he says. They can also require the administrator “to change things on the host” such as drivers, multipathing software or volume managers.

Migration toolkit

MIGRATION CAN be done on the host or on the network, at either the block or file level, or on the array itself at the block level. Users can choose from hundreds of tools ranging from simple utilities supplied with storage arrays (most useful for migrating data among the same vendor’s arrays) to open-source software or complex suites that could cost thousands of dollars.

Host-based software tools are often effective at migrating data without downtime. Some support only Windows file systems, while others support multiple operating systems at either the file or block level. Among the host-based file-level tools is the open-source rsync, which synchronizes files across Unix systems. Many operating systems already include host-based, block-level migration tools.

Among the network-based, file-level migration tools are virtualization appliances such as EMC Corp.’s Rainfinity. Network-based, block-level migration tools include Brocade’s Data Migration Manager, an application that runs on Brocade’s DCX Backbone high-end switch and can migrate as many as 128 LUNs in parallel at speeds of up to 5TB per hour, according to the vendor.

Among the relatively few players in the array-based block-level migration tools is Hitachi Data Systems’ Universal Replicator software, which can migrate data among Hitachi arrays and those from other vendors.

Many vendors use file systems to mask the complexity of moving data among multiple platforms. Among them is Ibrix Inc.’s Ibrix Fusion FileMigrator, which adds data tiering capabilities to its Ibrix Fusion 4.2 file system. FileMigrator, says the company, allows IT administrators to set policies and move data according to usage patterns. FileMigrator “addresses a huge pain point” by performing data migration “as a background process under the covers based on policies,” says Terri McClure, an analyst at Enterprise Strategy Group, Milford, MA.

6. Thin provisioning.
Thin provisioning helps preserve storage space by only taking up space on a disk when data is actually written to it, not when the volume is first set aside for use by an application or user. This eliminates waste when the application or user doesn’t wind up needing the disk space. However, many data migration tools write “from block zero through to the very last block” of a volume on the target system regardless of which blocks are actually being used, nullifying the benefits of the thin provisioning a user had applied on the source array, says Sean Derrington, director of storage management and high availability at Symantec Corp.

File-system utilities or host-based volume managers “that are intelligent enough to figure out if the block is being accessed or not” before deciding to write to it can help circumvent this problem, says GlassHouse Technologies’ Nadkarni. Block-level migration techniques that are good for preserving the security around data aren’t good for preserving thin provisioning, he says, “because they write to the entire volume.”
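Here is a rough sketch of the “is this block actually in use” test Nadkarni describes. It simply treats all-zero blocks as unallocated and seeks past them instead of writing them, which keeps the target thin; real thin-friendly utilities typically consult file-system or volume-manager allocation maps rather than scanning for zeros, and the device paths and 1MB block size below are assumptions.

# Thin-friendly block copy sketch: skip writes for all-zero blocks so the
# target never allocates space for them. Device paths and block size are
# illustrative only.
BLOCK = 1024 * 1024
ZERO = bytes(BLOCK)

def thin_copy(src_path, dst_path):
    written = skipped = 0
    with open(src_path, "rb") as src, open(dst_path, "r+b") as dst:
        while True:
            chunk = src.read(BLOCK)
            if not chunk:
                break
            if chunk == ZERO[:len(chunk)]:
                dst.seek(len(chunk), 1)   # leave the hole unwritten
                skipped += 1
            else:
                dst.write(chunk)
                written += 1
    # (for a regular-file target you would also truncate it to the source size)
    print(f"blocks written: {written}, zero blocks skipped: {skipped}")

# thin_copy("/dev/mapper/vg_src-lv_data", "/dev/mapper/vg_dst-lv_data")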

7. The devil is in the (software) details.
Something as simple as different patch levels applied to software in the old and new environments can cause server crashes after a migration. Nadkarni says migrating among storage arrays also requires uninstalling the previous vendor’s software from servers and installing the new vendor’s. Not only does this require time, but it could cause instability if components left behind by the incomplete uninstall of older software conflict with other applications.

8. Build in enough learning time.
If there’s a common theme to these tips, it’s that storage migration is complex and full of “gotchas” that can compromise application uptime, reliability or security. “The key to a successful data migration is not having any unknowns in your environment,” says Nadkarni. “The more unknowns,” he adds, “the bigger the risk.” Storage administrators often underestimate the time required to learn their new storage environment and what it takes to migrate data to it successfully.

Besides the technical challenges involved in each data migration, it’s also important to clearly understand the business objectives for the migration, says Terri McClure, an analyst at Enterprise Strategy Group in Milford, MA. For example, what’s the ROI of the data migration? Is the aim to migrate seldom-used data to less expensive media to reduce disk and power costs, to decrease the data’s RTO or both? If so, it may be possible to create automated storage policies to avoid an endless round of manual migrations, she says.

“To do anything successfully and seamlessly you have to do a lot of preparation, thorough preparation,” says Future Electronics’ Falsafi. “That means analysis, data gathering, trend analysis. For me, it’s very vital you get this information and know exactly how your systems behave before you do anything. The cost of an unsuccessful data migration—interrupted business operations, and a loss of revenue and credibility—far outweighs the additional amount of time it may take to thoroughly understand your source or target environments.”

Robert L. Scheier is a freelance technology writer based in Boylston, MA. He can be reached at [email protected].

Migration vs. replication

WHILE “MIGRATION” AND “REPLICATION” are often used interchangeably, their textbook definitions—and the tools required to perform them—are quite different.

Migration means moving data from one platform to another, without leaving the original data in place. It’s used when upgrading hardware, moving to a new site, creating a test database, or moving a virtual machine to a new physical server with more processing or network resources.

Replication means creating a second set of data and synchronizing any changes made between the original and the copy so that either set can be used at any time. It’s often used for backup and recovery, for continuous data protection (CDP) or in high-availability architectures.

Users may only need their replication tools to support a single vendor’s storage arrays. But multivendor support is usually more important for migration tools because the data is often being moved to a different vendor’s storage platform.


hot spots | bob laliberte

A turning point for storage networking
Storage pros will need to learn more about the network than ever before.

Data centers are being transformed. Companies are consolidating geographically dispersed data centers into centralized ones to reduce footprints and costs, and to improve performance. One of the most visible technologies enabling this change is virtualization, particularly server virtualization. But despite all of the attention virtualization has received, probably less than 10% of available servers have been virtualized, leaving a lot of room for future growth.

Another significant part of this transformation is the expanding role of the network. To support all of the features and functionality of server virtualization, a networked storage environment is required. Research from Enterprise Strategy Group indicates that 86% of server virtualization shops leverage a networked storage environment. While vendors will argue the merits of various types of networks, the most common one is still Fibre Channel (FC), chosen for performance reasons. However, it’s not used exclusively and many firms will deploy multiple storage networks based on performance needs, internal skills and budgets.

FCoE’s role in the network
Just as data centers are transforming, the most popular storage networking technology is also evolving. While many companies were content to follow the FC roadmap—upgrading from 1Gb to 2Gb, then to 4Gb and now 8Gb—new technologies like Fibre Channel over Ethernet (FCoE) have given users something to think about before blindly progressing to 16Gb FC. Why is that? And why should the storage team pay attention?

There’s the potential for much higher throughput/performance.
• FCoE leverages 10Gb Ethernet (10GbE). To be more specific, it leverages an enhanced version of the Ethernet standard referred to as Converged Enhanced Ethernet (CEE). The changes are mostly related to eliminating dropped packets and relieving congestion.
• The roadmap for FCoE mirrors Ethernet. This means the next leap is four times the throughput (up to 40Gb), which will quickly surpass the FC roadmap.

Additional savings can be realized through convergence.
• Every IT organization is under constant pressure to reduce costs. FCoE provides the opportunity to reduce the number of cards and cables required, at least at the rack level (a rough sketch of this card-and-cable arithmetic follows this list). This could also have an impact on power and cooling requirements.
• List prices for 10GbE ports are already less than $500 per port and will continue to decline as sales volumes increase.

Major vendors have made significant investments in this space.
• They understand the benefits of convergence and are building hardware and software portfolios to provide solutions to enable this transition. Some of the more notable acquisitions include Cisco Systems Inc. bringing in Nuova Systems Inc. and Brocade’s acquisition of Foundry Networks Inc. Other firms like Emulex Corp. and QLogic Corp. have developed their own technology to deliver converged network adapters to replace host bus adapters and NIC cards.
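As referenced in the list above, here is a rough sketch of the rack-level card-and-cable arithmetic behind that convergence claim. The per-server port counts are assumptions, not vendor figures.

# Rack-level convergence arithmetic (assumed counts): separate FC HBA and
# Ethernet NIC ports today versus a pair of converged network adapter (CNA)
# ports per server with FCoE.
servers_per_rack = 20
fc_ports_per_server = 2      # dual-fabric FC HBA
nic_ports_per_server = 2     # redundant Ethernet NICs
cna_ports_per_server = 2     # converged adapter ports after FCoE

before = servers_per_rack * (fc_ports_per_server + nic_ports_per_server)
after = servers_per_rack * cna_ports_per_server
print(f"ports/cables per rack before convergence: {before}")
print(f"ports/cables per rack after convergence:  {after}")
print(f"reduction: {before - after} ({100 * (before - after) / before:.0f}%)")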

Why is this important to the storage team? As data centers and the networks that power them continue to change, the line between data networks and storage networks will blur. Server virtualization and data mobility are forcing IT to rethink the traditional, siloed approach to data center technologies. For example, before Cisco announced its Nexus 1000 virtual switches at VMworld 2008, server admins controlled VMware virtual switches embedded in the ESX hypervisor through a VMware interface. Now, if users choose to deploy the new Cisco Nexus 1000 in VMware environments, network admins can regain control of the switching environment and leverage Cisco’s NX-OS to manage the virtual, as well as the physical, Ethernet switches.

What the transformation means to storage teams
It’s important to understand where your company is when considering this transformation process. Has it implemented server virtualization? Is it in production? How is it connecting the virtualized server environment to the storage? What technologies are being considered? Take this opportunity to become more relevant to the business. Think in terms of how changes in the IT environment can positively impact the company’s bottom line, not just enhance the visibility of your particular domain. More specifically, the following must be considered:

Bottlenecks. Once server virtualization technologies have been deployed correctly, bringing on a new application can require only a few minutes. But how long will it take to provision the storage to support it? If the answer is measured in days or weeks, the process needs to be reviewed and new storage technologies may be required.

Who controls the newly deployed network? Typically, storage teams dictate the type of network supporting the storage environment. Looking ahead, it’s easy to imagine that changing. If FCoE takes off, will the deployment of Ethernet switches be controlled by storage or networking companies? Will the storage team or the networking group have the responsibility and budget? Will companies need hybrid IT groups with members from both of these teams?

Do FCoE products need certification from storage vendors? Traditionally, FC switch sales are influenced by the specific vendor but controlled by storage companies. Ethernet switches, sold into the network groups, haven’t needed any approvals for NAS and iSCSI implementations. With FCoE, however, the game has changed. For now, all FCoE products are undergoing testing and so-called certification by major vendors. But will this trend continue? If you’ve been purchasing Ethernet switches for years without requiring storage vendor approval, why start now? And questions remain as to whether or not storage vendors will even have the time or desire to test all of the solutions. Will storage vendors retain final approval or will network vendors convince customers that it’s an unnecessary step? It will be important to keep an eye on this potential shift in power.

FC won’t disappear overnight. Remember when open systems were going to eliminate mainframes? There are still plenty of mainframes around, and you could argue that the concept of virtualization is simply open systems recognizing a great benefit of logical partitions in the mainframe world and adopting it. FC will be around for a while, but like ESCON and FICON, it may not be the fastest growing or most exciting segment to work in five to 10 years down the road.

Chart your career for change
Athletes cross train to break up the monotony of their routine and to increase their overall strength and endurance. IT shops should do the same. Begin to explore and educate yourself, but don’t limit your studies to just your current responsibilities. Think about adjacent domains and always consider how a new technology will drive higher levels of service to the business.

Vendors, especially those with a convergence message, now offer classes and certification programs to become better educated on these new technologies and virtualization products. Many are offered online and don’t require travel. Take advantage of any company-sponsored training to expand your knowledge base and position yourself for future growth.

In addition, go to the certifying bodies themselves, like the IEEE and T11, to learn more about the status of CEE and FCoE. Other helpful sites include the FCoE home page.

Look for integration points
Top-of-rack switches are one of those areas where FCoE makes sense. They reduce the number of cables and cards deployed, but don’t require a full rip and replace because they can direct traffic into an FC SAN or Ethernet LAN. You’ll need to be ready to implement these new technologies when the opportunity arises.

Cisco is driving convergence with products and software, like the Nexus product line and NX-OS operating system, which combines storage networking SAN-OS and IOS into a single interface. The company’s recently announced Nexus 1000 virtual switch resides in the hypervisor and replaces the VMware virtual switch. It’s also controlled by NX-OS. You can expect Brocade, with its acquisition of Foundry Networks, to follow suit with a combined OS and single console to manage the storage and data network.

The big picture
Convergence is coming, so be supportive of these efforts and try to become part of the planning and testing teams. Remember, the two largest FC vendors made some major investments to solidify their portfolios this fall. Become a proponent of mixed silos. Many companies have already begun to embrace the shift on a project-by-project basis. Network convergence in virtualized environments is still a relatively new model. Yes, there are products available and there’s some testing taking place, but it’s certainly not too late to get up to speed. This year, dedicate time to learning about FCoE and meeting with vendors that supply this technology. Think of 2009 as the year to kick the tires. More advanced companies may start limited production deployments and by 2010-2011 most data centers should be onboard with converged networks. The question, however, is: Will you be?

Bob Laliberte is an analyst with the Enterprise Strategy Group, Milford, MA.


best practices | james damoulakis

It’s time to pay attention to storage power use
Power and cooling isn’t just a data center problem. Try these eight tips for reducing consumption.

As IT organizations continue to grow in size and complexity, an inevitable challenge is keeping various parts of the company from working at cross purposes. Keeping groups in synch and aligned means having common goals and metrics. This becomes an even bigger challenge when reaching beyond IT.

Such is the case with issues relating to the data center. Much has been made of data center power and cooling consumption and limitations that have only been exacerbated with the rise in energy prices. Organizations such as the Uptime Institute talk about a crisis in the data center, while research firm Gartner Inc. reports that data center managers rank power and cooling as their top priorities. However, for many organizations there’s a wide gap between the chief concerns of data center managers and those most important to IT directors. Each one is guided by different metrics and, for the most part, tends to march to a different drummer.

Therefore, it’s not surprising to find that while data center managers see power and cooling as a major concern, IT infrastructure managers tend to rank it low in importance. Again, according to Gartner, storage managers place power consumption in a three-way tie for last place in terms of their concerns, a clear example of organizational misalignment. This contradiction is understandable, since for years the metric by which IT has been “taxed” for data center usage is floor space, not power consumption. As a result, vendors have met their customers’ demands by providing more densely packaged servers and storage that occupies less floor space. However, these products also had the unintended consequence of increased power and cooling requirements.

Organizations are becoming aware of the data center crisis and are taking steps to bring IT shops and facility infrastructures into synch. Some organizations, perhaps most notably Microsoft Corp., have undertaken initiatives to make data center cost allocation a function of power. The primary targets in these initiatives—what we call the low-hanging fruit—have been servers. However, with the increased adoption of virtualization and more efficient physical server designs, the focus will inevitably shift to other areas of the data center, including storage.

This isn’t as ominous as it may sound. Projects that many shops have started or completed to better manage data on storage devices can help reduce power and cooling costs. Here’s a starter list of things that can be done to help get storage energy usage under control.

1. AVOID OVERSPENDING AND OVERPROVISIONING. It’s pretty obvious that spinning disk is the source of most storage power consumption, and unused spinning disk represents wasted energy. But there are several challenges in growing storage incrementally. The first relates to the organization’s ability to accurately forecast storage capacity needs. Beyond capacity planning, it requires a relationship with a vendor who can support the incremental storage growth. Finally, the technology must be such that storage can be expanded easily and with minimal disruption.

2. REVISIT YOUR TIERING STRATEGY. Small, fast disks demand more energy than large, slow disks, so the distribution of data across various tiers of storage can have a big impact on your power consumption. An EMC Corp. study indicates that storing a terabyte of data on a 7,200 rpm 1TB SATA drive is 94% more efficient than storing it on a 15,000 rpm 73GB Fibre Channel drive.

This provides an added incentive to ensure that storage service levels and their associated tiers are properly aligned based on application and data value. There are many situations where tiered storage allocations are far from ideally distributed. Understanding this distribution and developing a clearly defined set of service-level requirements to apply to new applications and existing apps can lead to substantial savings in equipment cost and energy use.
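The gap EMC describes falls out of simple drive-count arithmetic. The sketch below uses assumed, era-typical per-drive wattages, not figures from the EMC study, to show how watts per terabyte compare across the two drive types.

# Why big, slow drives win on energy per terabyte (illustrative wattages).
def watts_per_tb(capacity_gb, watts_per_drive):
    drives_per_tb = 1000 / capacity_gb      # drives needed to hold 1TB
    return drives_per_tb * watts_per_drive

sata_1tb_7200 = watts_per_tb(1000, 12.0)    # one 1TB SATA drive at ~12W
fc_73gb_15k = watts_per_tb(73, 18.0)        # ~14 x 73GB 15K FC drives at ~18W each
print(f"1TB 7,200 rpm SATA tier: {sata_1tb_7200:.0f} W per TB")
print(f"73GB 15K rpm FC tier:    {fc_73gb_15k:.0f} W per TB")
print(f"SATA uses {100 * (1 - sata_1tb_7200 / fc_73gb_15k):.0f}% less power per TB")

With these assumed wattages the result lands in the same neighborhood as the efficiency figure cited above.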

3. REVISIT RAID POLICIES. Another facet of a tiering strategy is the RAID protection policy applied to a given tier of storage. It’s not so prevalent these days, but overprovisioning of RAID 1 or RAID 10 increases the number of spindles and power consumption. When additional performance or availability isn’t required, drive count can be reduced.
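To see how the RAID choice translates into spindles and watts, here is a small sketch comparing RAID 10 with a 7+1 RAID 5 layout for the same usable capacity. The drive size, group width and roughly 15W-per-drive figure are assumptions for illustration only.

# Spindle count (and rough wattage) for the same usable capacity under
# different RAID levels -- assumes 300GB drives at ~15W each.
import math

def drives_needed(usable_tb, drive_gb, raid):
    data_drives = math.ceil(usable_tb * 1000 / drive_gb)
    if raid == "RAID 10":
        return data_drives * 2                # mirror every data drive
    if raid == "RAID 5 (7+1)":
        groups = math.ceil(data_drives / 7)   # one parity drive per 7 data drives
        return data_drives + groups
    raise ValueError(raid)

for level in ("RAID 10", "RAID 5 (7+1)"):
    count = drives_needed(usable_tb=20, drive_gb=300, raid=level)
    print(f"{level}: {count} drives, roughly {count * 15} W")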

4. CONSOLIDATE STORAGE ARRAYS. The more frames in a data center, the more power that needs to be reserved for them. In addition, older arrays tend to be less efficient than the latest models. Reducing the number of storage systems is an obvious option, just as consolidating physical servers through virtualization is.

Speaking of virtualization, newer arrays may offer enhanced functionality such as thin provisioning to improve utilization and further reduce energy consumption.

5. ESTABLISH POWER USAGE METRICS. Historically, power specifications entered into a storage infrastructure discussion only during installation planning. Even within organizations with relatively sophisticated cost modeling and chargeback systems, specific metrics relating to power were nowhere to be found. However, to truly align data center and IT infrastructure objectives, this will need to occur.

Data center managers focus on kilowatts and BTUs, with consideration given for related factors such as peak load times. Organizations like The Green Grid promote standardized data center efficiency metrics. It will also become necessary to establish metrics such as GB/kW or IOPS/kW, and to then determine these rates for each tier of storage as well as usage for the storage infrastructure in total.
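Deriving GB/kW and IOPS/kW is straightforward once capacity, delivered IOPS and measured power draw are known for each tier. The sketch below shows the calculation with placeholder tier figures, not measurements.

# Per-tier efficiency metrics of the GB/kW and IOPS/kW variety.
tiers = {
    # tier: (usable capacity in GB, delivered IOPS, measured power draw in watts)
    "tier 1 (15K FC)":  (20_000, 50_000, 4_500),
    "tier 2 (SATA)":    (80_000, 12_000, 3_000),
    "tier 3 (archive)": (200_000, 2_000, 1_800),
}

total_gb = total_w = 0
for name, (gb, iops, watts) in tiers.items():
    kw = watts / 1000
    print(f"{name}: {gb / kw:,.0f} GB/kW, {iops / kw:,.0f} IOPS/kW")
    total_gb += gb
    total_w += watts

print(f"whole storage infrastructure: {total_gb / (total_w / 1000):,.0f} GB/kW")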

6. CONSIDER SOLID-STATE DRIVES (SSDs). While SSDs are still a new phenomenon for many, they can play a role in replacing power-hungry, high-speed, low-capacity disks. In addition to offering higher performance, EMC reports that on a per-IOPS basis, flash disks require 98% less energy. That efficiency comes at a substantially higher cost per gigabyte, however, so it’s imperative that organizations do their homework and understand actual performance requirements before making any substantial investment in this technology.

7. CONSIDER MASSIVE ARRAY OF IDLE DISKS (MAID). At the other end of the performance spectrum is MAID technology. We know a significant amount of data currently stored on spinning disk is accessed infrequently. From an energy-consumption standpoint, the most efficient disks are the ones that aren’t spinning at all, and that’s the rationale behind MAID. For archival data that still requires accessibility, this technology represents an attractive alternative to conventional, continuously spinning, nearline storage.

8. MAKE ENERGY USAGE A BUYING CONSIDERATION. As the “green” demand grows, vendors have realized that energy efficiency can be a competitive differentiator. It makes sense to factor this into equipment purchasing criteria, and to consider energy impact when architecting new storage infrastructures. Oftentimes, the focus is on capital expenditures (CAPEX) and insufficient attention is paid to operational expenditures (OPEX). However, evidence is mounting in the server world that these lifecycle costs, including power and cooling, can actually overshadow CAPEX. This isn’t yet the case with storage, but OPEX, including power, is certainly a significant component of total cost of ownership.
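One way to make energy part of the purchasing decision is to put CAPEX and power-and-cooling OPEX on the same lifecycle ledger. The prices, power draws, electricity rate and cooling overhead factor below are assumptions purely to show the comparison.

# Simple lifecycle-cost comparison: purchase price (CAPEX) plus power and
# cooling (OPEX) over the service life of an array. All figures are assumed.
def lifecycle_cost(capex, avg_kw, years=4, dollars_per_kwh=0.10, cooling_factor=1.8):
    # cooling_factor approximates the facility power spent removing the heat
    hours = years * 365 * 24
    opex = avg_kw * cooling_factor * hours * dollars_per_kwh
    return capex, opex

for label, capex, kw in [("efficient array", 120_000, 3.0),
                         ("older, denser array", 110_000, 6.0)]:
    c, o = lifecycle_cost(capex, kw)
    print(f"{label}: CAPEX ${c:,.0f}, 4-yr power/cooling ${o:,.0f}, total ${c + o:,.0f}")

Under these assumed numbers the less-efficient array’s power and cooling bill erases its purchase-price advantage, which is exactly the kind of tradeoff worth surfacing before buying.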

Effectively managing the power and cooling demands in the data center is a sum of many parts. Within each part, a series of improvements will combine to provide important dividends. For many, there may not yet be a sense of urgency surrounding power and cooling, particularly with regard to storage. But data centers around the country are beginning to feel the pinch. It’s not simply an issue of rising costs. It’s a matter of data centers and utility providers reaching production limits. When planning for future storage environments, it’s an issue that will no longer be ignored.

Jim Damoulakis is CTO at GlassHouse Technologies, a leading independent provider of storage and infrastructure services. He can be reached at [email protected].


snapshot

Too busy to archive your email?
The volume of emails just keeps growing but, according to our recent survey, they’re not all getting neatly filed. The percentage of respondents archiving email since we surveyed readers a year ago has dropped from 52% to 45%, even as compliance requirements and regulations have increased archiving awareness. However, 61% of those not archiving email say it’s on their to-do list vs. 58% listing that as their top reason last year. Thirty-two percent of those surveyed don’t do it because their company doesn’t require it, while 28% say they don’t have the budget to archive (note: respondents could choose more than one option). Of those who are archiving, 68% apply it to all email, while 23% archive based on end user. The primary reason for respondents to archive their email is to be prepared for any legal issues (35%), followed closely by the need to better manage storage capacity (30%).
—Christine Cignoli

“We have an email policy that we don’t store or archive emails longer than 60 days from origination/receipt. This limits exposure and risk.”

—Survey respondent

[Survey charts] Do you archive your company’s email? Yes: 45%; No: 55%. How is email archived? Respondents rely on their email application’s built-in utilities, a third-party email archiving tool or an external archiving service, with a small share citing other methods. Top three email archiving challenges (multiple responses allowed): managing the volume of archived emails (63%), searching for archived email (41%) and setting up archiving policies (37%). Why aren’t you currently archiving email? (multiple responses allowed): email archiving is on our to-do list (61%), our company doesn’t require us to archive email (32%) and we don’t have the budget to purchase an archiving application (28%); smaller percentages cited low mail volume, not wanting another application to manage, doubts that an archiving product would adequately protect their email, or other reasons.
