Tom Petrocelli's take on technology. Tom is the author of the book "Data Protection and Information Lifecycle Management" and a natural technology curmudgeon. This blog represents only my own views and not those of my employer, Enterprise Strategy Group. Frankly, mine are more amusing.


Friday, March 11, 2011

And I Yawn Again at NTAP

There have been so many blogs written in just one day about the Network Appliance deal to buy the chunk of LSI called Engenio. Between Steve Duplessie at ESG, Greg Schulz of StorageIO, and Andrew Reichman of Forrester, I figured the deal was pretty much covered. This morning on Twitter I saw another half dozen links to blogs about it. With all this coverage you would think this was a game changer in the industry. It’s as if Google bought Microsoft.
I find myself bored by it.
While I like to be a contrarian sometimes, that is not the case here. I don’t think it’s a bad deal. All the points have been made as to why it’s a good deal and I can’t dispute any of them. Controlling one’s technology is a good thing. Increasing gross margins on products is also a good thing. And you can’t complain about adding incremental revenue. All very good.
But not very great. Not very bold, not very exciting, not game changing, and certainly not transformative. It’s a boring move not a bold one.
Look, there are only a few types of sustainable models for companies. They are:
  • Emerging – what every startup is. They don’t know what they will be when they grow up but that’s okay. This is where our most exciting technology comes from. Eventually, however, they will have to grow up and become something else or get eaten by someone else.
  • Parts Supplier – you make parts for other people. Like headlights or NICs. Great work, especially if you spread it out amongst a lot of companies. The goal of a Xyratex, a QLogic, or an Atto for that matter is to be a supplier to as many people as possible and build something of an aftermarket for your components. Think Cummins (they make engines).
  • R&D Shop – companies that produce only intellectual property. You see this in Pharma and semiconductors but most computer tech companies want to control their R&D. If a big company sees something it likes in another company, they just buy it.
  • Outsourced Services – who doesn’t outsource call centers and manufacturing these days? Most of the services industry falls into this category. It’s the business process equivalent of a parts supplier.
  • Specialty Supplier – the big companies can’t make everything. High performance or special purpose products can’t be produced in enough volume for the big companies to be interested. We used to have more of these in the hardware industry. SGI was one and sort of still is. Alienware certainly was but was bought. Software is rife with specialty companies. Software companies can get away with it because they have almost no recurring costs. It’s all R&D and no inventory. What is important is that these companies have something that is very important to a small number of people but enough people to sustain the company. They are unique but have demand.
  • Conglomerate – a set of loosely related companies. Some are completely unrelated like GE (aircraft engines and broadcast media?). EMC looks more like a conglomerate than anything else. You can try and put a “Data Management” wrapper around them but RSA, EMC storage, Documentum, and VMware are only loosely connected in the marketplace. Conglomerates manage companies or divisions like a portfolio. They diversify to guard against downturns. Storage is down? That’s okay because security is up and so on. Google is looking more like a conglomerate every day. And when they need growth they buy some other company in a different space than where they are now.
  • Solution Supplier – soup to nuts in a particular market. HP can provide you everything from mobile devices to laptops to storage to servers. Oracle and IBM can also provide you with almost a whole solution. For these companies, it’s a matter of defining your boundaries. Is it business hardware to business software like Oracle? Maybe it’s all software from infrastructure to desktops to game systems like Microsoft. It’s about delivering a complete end-user solution. When you need more growth, you push out the boundaries.
Then there are the odd ducks. The folks who sell directly but in a narrow non-unique space. They are too big and too old to be Emerging but sell direct rather than to other companies in their market. Not diverse enough to be a conglomerate, they don’t supply enough of the whole system solution to be considered a real solution supplier. And they don’t do anything special enough to be a specialty supplier. This is where I see NTAP now. Basically a one trick pony in a whole herd of mustangs.
NTAP was the specialty supplier when NAS was new. Now, all the solution suppliers and some conglomerates have NAS in their bag of tricks. It’s just not that special anymore. They might have superior technology (don’t know really) but they clearly are at a disadvantage when someone wants to buy a whole system. If I’m putting in a new application, I can buy most of my parts from Oracle, IBM, or HP. No one has everything (well, maybe IBM does) but their services divisions can help me to get whatever I need. Heck, even Dell is better positioned for the IT business.                             
For me to buy from NTAP, I have to only want storage. Just storage. Not servers, not infrastructure software, not desktops, and not mobile devices. If all I need is storage then they have to compete against the conglomerates and all the storage products that the conglomerates and systems suppliers have too. That’s not to say that NTAP is doomed. I believe all those other smart people that say they are a good company and this deal will help them. They can compete effectively in their niche. But their niche is not special enough anymore to drive people to them and them alone. It’s always a bake off for NTAP. They are no longer a specialty supplier but it’s not clear what they are anymore. What I don’t see with this deal is a growth plan. Incremental revenue is not about moving forward. It’s running in place. There is nothing in this deal that will really drive meaningful revenue growth or make them an HP or even an EMC.
LSI was smart here. They know where they are in the food chain. They supply parts. What they do is the computer tech equivalent of making headlights. A good solid business but not one where Engenio fit. They got money for it and can focus on making more of the type of parts they make best. Good move LSI.
What is unclear is what NTAP wants to become. If they stay where they are things will only get harder. If they keep patching the cracks with spackle the house won’t get any bigger or better. Maybe they should buy Brocade or merge with/get bought by Cisco. Doubling down in storage isn’t going to do the trick. To get meaningful growth they will need to do something a bit more risky and bold.

Friday, August 27, 2010

What the…?

Okay, the 3Par bidding has hit the $2B level. That’s nearly twice the opening bid of $1.15B. The first bid seemed high but the offers have now gone into the exosphere. For those who don’t remember their 8th grade science, that’s perilously close to being in orbit. It’s a place where there is practically no air. Get that? No air to breathe.
Considering that 3Par had revenue of US$168M in fiscal 2010 (resulting in a net loss by the way), HP is bidding almost 12 times last year’s revenue! That’s insanely high. Even at an accelerated growth rate, HP won’t make that back before I’m a grandfather. Trust me, that is a long time from now (or better be – you kids listening?). This bidding war has generated a lot of analysis, including mine. Theories range from Dell and HP vying for the number 2 spot in storage behind EMC (I like that one) to HP’s Dave Donatelli being on a mission from god. Okay, not a mission from god per se but the storage business equivalent. At these numbers, none of the theories including mine make any sense.
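For the skeptical, the multiple is a one-liner to check. Here is a quick sketch using the figures above (in US$M); the Python is mine, the numbers are from the post:

```python
bid = 2000.0     # roughly where the bidding stands, US$M
revenue = 168.0  # 3Par fiscal 2010 revenue, US$M
print(round(bid / revenue, 1))  # 11.9, i.e. almost 12x trailing revenue
```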
Look, 3Par is a great company with a lot going for it. It is also a company that would have eventually topped out like Brocade. It might have hit the $500M level but not much more. Selling out is the best thing that they can do for their stockholders. David Scott, 3Par’s CEO, has really done his job.
As it stands, it is hard to imagine HP or Dell seeing much return on this investment. The numbers are just too high. Here’s my theory: like at an auction, these companies are now just into the bidding. The original reasons for doing the deal are now tangential. It’s about winning. It’s about the emotion.
My advice to Dell: let HP take it. Let them spend huge amounts of money that could otherwise have gone into a new storage product. Use the money you have set aside for this purchase to buy someone else. It’s not like there aren’t a bunch of companies out there. Isilon, Compellent, Xiotech, they’re all independent right now. Heck, buy Data Robotics. They’re private and everyone seems to love their stuff. I know it’s not “enterprise” but you could probably sell a boatload of those Drobo arrays to the SOHO market and walk them up to the mid-range. You would certainly see your way to a ROI much faster.
These numbers just don’t make sense anymore.

Monday, August 23, 2010

Computer Industry Goes Zoom Zoom

You would think that last week’s announcement that Dell was acquiring 3Par for US$1.15B was news enough. Ha! Intel then raised eyebrows by announcing the acquisition of McAfee for US$7.6B. Now, comes Monday morning and HP raises the stakes against Dell by sending in their own, bigger bid for 3Par. It’s nice to be loved. Somewhere in all this, Hitachi Data Systems announced that they had acquired the intellectual property and core engineering team of Parascale, a cloud software company. Too bad for them. What should have been a sweet announcement was lost in all the noise.
So, what the heck is going on here? On the one hand, this is actually not that surprising. Computer tech companies tend to throw off lots of cash so they have a lot sitting around for acquisitions. Most of these big companies can thus afford to buy expertise or market share. This is especially true when you are coming out from the bottom of the market. Best to build up the arsenal before the economy really picks up.
This is an industry with a tradition of letting smaller companies blaze the trail on new technology and markets, then get their payoff from a big company. In the long run this is cheaper and less risky for big companies but profitable for small ones. More unusual are the Googles and Microsofts who start in a garage and end up behemoths. That’s the myth of computer tech but not the reality. What is not a myth is that deal making gives folks like me something to talk about. So here’s the talking about part.
Intel-McAfee Makes for Secure Communications
The Intel-McAfee deal has a lot of pundits scratching their heads. It’s a lot of money for a company with a big consumer business. McAfee’s revenue would barely be a rounding error for Intel. In 2009 Intel’s revenue was 18.5 times McAfee’s (~US$35B vs. US$1.9B). $1.9B is nothing to sneeze at but it will be a long time before a McAfee revenue stream makes up for the money Intel paid for it. What McAfee has going for it is lots of core security technology. More importantly, it’s spread across all aspects of the digital world – web, mobile, desktop, and server. Combine that with Intel hardware and chips and you have a much higher revenue generating business than McAfee alone. It’s like having your cereal with fruit and milk. It’s part of a complete breakfast. It also positions Intel well for the long term. This is an example of the Gestalt principle – the whole is way better than the sum of the parts. Besides, people said similar things about EMC’s RSA acquisition and that has worked out well for them, right?
3Par Bid Up by HP
I wasn’t that thrilled about Dell’s acquisition of 3Par, except insofar as it worked well for the 3Par folks (nice folks). I’m both more and less thrilled about the HP bid along the same lines. It’s better for 3Par financially, so I’m more thrilled. It makes less sense for HP though. Unlike Dell, they have a coherent storage story, a reputation and brand going back decades, as well as an extensive product line. Do they need 3Par? At least with Dell, 3Par would be a prominent part of the line up. They might have even kept their name, like EqualLogic did. With HP, they will be absorbed. It’s hard to see what this deal adds to the HP product mix that they can’t get or build more cheaply. I doubt they need 3Par’s customer base really. Perhaps it’s just a way to keep Dell from becoming a serious competitor in storage. Perhaps. Generally, I don’t like this for HP but do for 3Par investors. It will be interesting to see how high this one gets bid up. There could be crazy amounts of money tossed around here.
HDS Goes Parascaling Up In The Clouds
The cloud is about software. It sells hardware but doesn’t exist without software.  Parascale provides software that makes storage and servers into clouds. I don’t know enough about Parascale to say if it worked or was particularly good software. Assuming it worked just fine, then this is the kind of technology play that I like. It adds immediate value, helps move hardware, has broad, future potential in an emerging market, and is a deal that is easy to do. It’s kind of conservative but conservative often pays the bills.
Bye Bye to OpenSolaris
There were also a bunch of other, smaller announcements too. One that is significant was that Oracle will be dropping support for the OpenSolaris project. This is sad since there was a vibrant community around OpenSolaris. It was not, however, unexpected. Oracle has nothing to gain by supporting an open Unix product. In the end, this will be good for the Open Source community. There are already too many Linux and Unix projects and variants diluting the talent pool. Do we really need OpenSolaris and FreeBSD and OpenBSD and NetBSD and Darwin and so on and so on? Not really. So, while I understand how this bothers some people and generates a lot of “what else will Oracle kill?” questions (don’t worry, it won’t be Java or MySQL. They generate revenue) it’s really for the better. Time to move on.
I must admit, all this activity is exciting. It’s rare that this industry gets a week like this. Deals are usually more evenly spaced out. It’s like NASCAR for computer geeks.

Wednesday, August 18, 2010

Piling on the Dell/3Par News

Whenever some news comes out about an acquisition, everyone chimes in. It’s like kids playing little league football. Someone tackles the kid with the ball and all the other kids pile on.  I promised myself I wouldn’t do that. I lied. Hey, if you can’t lie to yourself, who can you lie to?
But really, I follow the storage segment but don’t claim in-depth technical knowledge anymore. I’m too interested in technology and business strategy to dive into the deep technical details. I can make a thin provisioning joke but that doesn’t mean I have the kind of encyclopedic knowledge of the segment that folks like Marc Farley (of 3Par – ready to buy that boat?) or Chuck Hollis of EMC have. Sticking to what I know, here are some thoughts.
Why it’s a good thing (in list form):
  1. 3Par would have eventually hit the wall. The hardware industry is a game of numbers. Big volume plus low cost equals great margins. You need market share and manufacturing prowess for that. A company the size of 3Par would have eventually gotten eaten alive by the big boys. Or faded into irrelevance. That would have been the slow death.
  2. The deal provides a nice Return on Investment for 3Par investors. I like it when people make money in startups. It provides fuel for more startups and gives hope to the rest of us entrepreneurs. Now, if you all want to swing some of that cash my way…
  3. I bet Dell really wants 3Par. 3Par could have gotten bought up by someone who just wanted them out of the way.  That would have been sad for the industry. There is a better chance that some of what makes 3Par unique will continue to live on at Dell. It’s nice to be loved.
  4. 3Par employees can get great deals on Alienware computers. I’m just speculating but wouldn’t that be cool? Those babies are hot! If that’s not in the term sheet then amend that puppy now.
Why it’s not a good thing (also in list form):
  1. US$1.15B is a lot of money. Dell is going to have to sell a lot of storage to make that back. That’s especially hard to do when the 3Par message has often been how you could buy less storage at a cheaper price to get the same functionality. I get the “less is more” messaging for a startup but you all have to make back a big pile of money now.
  2. Dell’s bought a lot of storage companies but still doesn’t have a cohesive storage message. This is actually a good-not good thing. On the one hand, you don’t think of Dell as being in storage the way you do, say, HP or EMC. They’ve bought up a boatload of storage companies but it’s like Yahtzee - all tossed in an incomprehensible pile. On the other hand, the scrappy 3Par people are really good at new marketing. If they stick around (and Dell should make it worth their while to stick around) they could have a positive effect on Dell’s overall storage marketing. If they’re allowed to, which brings us to…
  3. They can’t use what makes 3Par special. People think that companies like 3Par are about technology. Not really. They are about ideas. The simple audacity of 3Par is part of what makes it successful. That rarely translates well in a big company. Just because Dell wants 3Par doesn’t mean they know what to do with them.  The impact of the creative folks that have been driving the company will be diluted once they are just a cog in the Dell machinery. 
  4. On some level, this has to annoy EMC, Dell’s big storage partner. The more meat Dell adds to the storage stew, the less tasty it is for EMC. I keep wondering how long EMC will put up with this. Dell clearly wants to create a business that competes with EMC. An ugly breakup would be bad for Dell since EMC could probably crush them in the enterprise storage segment. My guess is that the only reason this has yet to happen is that Dell has not gotten its act together enough to really get in EMC’s way. Maybe this is what EMC needs to go buy a server company and finally become the full service provider that they should be. Some of those Taiwanese computer companies have good SOHO servers that would fit in well with Iomega and Mozy. Just sayin’…
Ultimately, this is very good for 3Par, its investors, and many of its employees. Making honest money always is. Whether Dell gets its $1.15B out of the deal remains to be seen. They need to develop a simplified but cohesive product line. Better storage marketing would also help. 3Par people can help but will they be allowed to? Wish I knew.

Tuesday, March 02, 2010

Tiers of a Clown

I've been following the debate about automated storage tiering with amused interest. The various marketing operatives of data storage companies (and a few C-Level folks to boot) are all lining up into one of two camps – tiering is necessary or tiering is unnecessary. There have been dueling animations (very clever) from The Storage Anarchist and 3Par's Marc Farley as well as commentary from a host of industry bigwigs. I love the animations but then again, I always loved cartoons.


Automated storage tiering or automated tiered storage (or data lifecycle management, or whatever else it used to be described as) is using different types of physical storage for different classes of data, mostly to save money and maintain performance. The promise of storage tiering is that you can move less important, unchanging, or less frequently accessed data to cheaper, slower storage. You can keep the most important, frequently changing, and most accessed data in a really expensive array that combines high performance with heavy duty data protection features. For data that you don't need quite so often and that doesn't change, you can move it to something slower and not as rigorous. And so on, until you finally move it to an archive system or delete it. This has been the bread and butter of folks like Compellent and has been picked up by most of the bigger storage companies since. The ultimate goal is high levels of efficiency in your data storage systems. The more important the data is the more resources it can consume. Less important data consumes fewer resources and balance in the universe is maintained.

A great example of where one might use tiered storage is with a check image. For a short while a check image has to be available to online customers and tellers immediately. Then it has to be stored for seven years and only moderately available. Then it is deleted. Chances are good that after 90 days you won't care to see the actual image so moving it to slower storage is not much of a burden but it saves money.
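To make that concrete, here is a minimal sketch of what the check image policy might look like if you wrote it down as code. The tier names, thresholds, and the place() helper are all hypothetical; real arrays bake this logic into their management software.

```python
from datetime import date, timedelta

# Hypothetical tiering policy for check images, following the example above:
# fast storage for 90 days, cheap archive until year seven, then deletion.
POLICY = [
    (timedelta(days=90), "tier1-high-performance"),
    (timedelta(days=365 * 7), "tier3-archive"),
]

def place(created: date, today: date) -> str:
    """Return the tier a check image belongs on, or 'delete' past retention."""
    age = today - created
    for max_age, tier in POLICY:
        if age <= max_age:
            return tier
    return "delete"

print(place(date(2010, 1, 2), date(2010, 3, 2)))  # tier1-high-performance
print(place(date(2002, 6, 1), date(2010, 3, 2)))  # delete
```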

There are three things about tiered storage that are important to consider, and they are what fuel the debate. First, automating it is tough. You have to get the software right or you lose data and have diminished efficiency. The second consideration is the ever dropping cost of storage. As data storage continues to become even more stupid cheap, it raises the question of whether you need to be all that efficient in the first place. If a high performance array is inexpensive then everything can have high performance storage without moving data around to a bunch of arrays. Finally, it's hard to decide what data belongs on what resource. Do I base it on age? Class of data? How do I decide what data is what class? These are not technical problems. They are business problems, which are much harder to overcome. Wrangling with your organization is hard work. You have to put a lot of effort into deciding what goes where and hope that your vendor supports your criteria.

To me, the problem of storage tiering is that it is a good idea that can be tough to execute. It's like the old joke about teenage sex – everyone talks about it, no one really does it, those that do it don't do it well. I'm sure that lots of folks will say that they have products that allow folks to do this well. However, technology doesn't solve the organizational problem, which makes it hard for folks to want to implement it. That doesn't affect the bread and butter customers of top tier storage companies (sorry – couldn't resist), who tend to be huge companies. They have the business process resources to pull it off. It might explain why automated storage tiering is not generating a huge following in mid-sized and smaller companies. They have other things to do with their limited resources than try to squeeze a bit more efficiency out of their storage system. The ROI for them is simply not big enough. Heck, many are still struggling with the blocking and tackling of doing backups and security.

So, where do I weigh in on this debate? I agree with both sides. If that sounds a bit weasel-like then sorry. For some companies there are mission critical applications that would benefit from an automated tiered storage system. For others, it's hard to see how there would be benefit enough to warrant the time and effort. For me, the debate is a non-debate. It's not about whether automated storage tiering is beneficial or not. What matters is whether it's beneficial to you. If you think in terms of customers, instead of products and technology, it becomes clear. What applications do you have that need this approach? Does your organization need it at all? Can you decelerate the pace of your storage buying enough to justify the costs and time involved in implementing this? Will you be able to decide what data should go where and when?

In the end, it's a feature like all other features. If it has value for you then it's a winner. If it doesn't then find something that does. But watch the debate. It's quite entertaining.

Thursday, September 03, 2009

EMC Pacmans Kazeon. Mmmm. Good.

More news on the storage M&A front and once again it's about EMC. EMC announced that they are acquiring Kazeon, a self-avowed “leader in eDiscovery” tools. Stripped of all the eDiscovery hoopla, Kazeon makes a decent indexing and classification rules engine. In that space, it is a very useful thing indeed.

What Kazeon does is catalog and index your data and run the information past a rules engine to generate additional metadata. Again, that's good but only as good as the rules. They also layer on some analytics which is all the rage in eDiscovery. Analytics are also only as good as the data and metadata and, in my opinion, overemphasized in the eDiscovery market. But that's just me...
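As a toy illustration of that idea (not Kazeon's actual engine, which I haven't seen up close), a rules pass over indexed content might look something like this; the patterns and tags are invented:

```python
import re

# Invented classification rules: each one inspects a document's text and,
# on a match, contributes an extra metadata tag. Real engines are far richer.
RULES = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "contains-ssn"),
    (re.compile(r"\bconfidential\b", re.I), "restricted"),
    (re.compile(r"\binvoice\b", re.I), "finance-record"),
]

def classify(text):
    """Return the metadata tags the rules generate for a document."""
    return {tag for pattern, tag in RULES if pattern.search(text)}

print(classify("CONFIDENTIAL: invoice #114 for services rendered"))
# -> {'restricted', 'finance-record'}
```

And, as I said, the output is only as good as the rules you feed it.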

Kazeon is a hand in glove fit for EMC. For many years now EMC has looked to get past selling just hardware and has wanted to sell systems to store and manage data. That's a great strategy since it creates new value but sticks to their knitting. Kazeon is another value added tool that EMC can add to their toolbox.

The Kazeon acquisition also gives them some eDiscovery street cred. They have been trying to play in the eDiscovery sandbox for years, mostly through their Documentum offerings. Nothing wrong with that since the majority of eDiscovery work is about workflows anyway. However, automated tools for tracking data also are important not only to discovery during litigation but also to ensure compliance with in-house data retention rules. And by retention I mean destruction. But you knew that didn't you...

The dirty secret about data retention is that no one can really comply with their own internal rules without knowing where and what all their data is. Knowing where all your data is is a really hard problem to solve. That's where Kazeon comes in. They create catalogs of vast amounts of data that allow you to better comply with discovery rules, preservation and legal holds, and internal data retention policies.

So, Kazeon is an obviously good thing but why is it good for EMC? Actually, there are two (probably more) reasons why this works so well. First, it adds value. If I buy tons of EMC storage, the Kazeon/EMC products will help me to know what I have on it. Second, those catalogs of information and metadata need, you guessed it, more storage. It's the reason Documentum was such a good deal for EMC. It lets you get more value from your stored data and makes you store more data. A twofer for EMC.

EMC will now be able to deliver, by itself, one of the most comprehensive information management systems available. By combining Documentum, Kazeon, and all the other tools EMC has at its disposal, plus hardware, they will be able to deliver an information management solution that will make lawyers squeal with delight.

That's not to say it's perfect. Kazeon can't help you if someone dumps their files onto a flash drive or emails a smoking gun document to their Gmail account. Smartphones and PDAs are also a challenge that Kazeon will not help with. Still, they hit all the high notes and do better than what most companies do - which is nothing!

As an aside, Kazeon also has an intellectual property (IP) management component to their systems. IP management and eDiscovery are very similar problems – you need to know what data you have where, in order to comply with rules and regulations. EMC has often touted Documentum as an IP management tool. They haven't gotten too far with that since it takes so much effort to set up Documentum to do IP management. Unless you are already committed to Documentum across the company, there are better out of the box IP solutions. Kazeon will give them some more heft in that space. It will allow EMC to automate many of the sticky tracking and classification tasks associated with IP management, especially in preventing leakage. It's not there yet, but getting better.

I don't know if EMC is full yet after eating up so many companies. Kazeon is quite a tasty and healthy morsel for them though. It makes good, strategic sense. I wonder if they left room for dessert.

Thursday, August 27, 2009

Cloudy Skies This Week

Recent blog posts and comments I made on Twitter might give some people the impression that I'm against cloud computing. I bet I've given some people the impression that I hate cloud computing. Despise it! Want to see it die! Nothing could be further from the truth. I love the idea of cloud computing. It's the cloud computing marketing that I take issue with.

Overall, what's not to like about the cloud idea? The promise of cloud computing (notice I say promise, not reality) is the ability to only buy what you need with the option to buy more later if you want to. In that respect, it deals with one of the key problems in computing: coarse granularity in systems. If I need 10 percent of a server, I might have to buy a whole server. Someday I might need that whole server but not right at the moment. Then again, maybe never. We have wonderful terms for buying more than you need such as underutilization. The best term is “a waste of money”. So, buying only what I need when I need it is a great way to manage my budget. Same goes for software. I no longer have to buy a software package designed for fifty people for just three people to use. It's efficient and cost effective. It also makes it easier to quantify the cost of running an application.
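The arithmetic behind that complaint is simple enough to sketch. Every number below is made up for illustration:

```python
# Back-of-the-envelope version of the granularity argument.
server_cost = 6000.0        # buy a whole server up front (hypothetical price)
cloud_rate = 0.10           # $/hour for a slice sized to the 10% we need
hours_per_year = 24 * 365

owned = server_cost                   # pay for 100% of a box, use 10% of it
rented = cloud_rate * hours_per_year  # pay only for the slice, as we go
print(f"buy: ${owned:,.0f} up front, rent: ${rented:,.0f} for the first year")
```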

Cloud computing is also evolution not revolution. We have been doing limited purpose cloud computing for years. It's called web hosting. And email hosting. Oh. And application hosting. Do I notice when my hosting provider adds new resources in order to add more customers? Not really. I pay ten bucks and get a chunk of resources adequate for running my simple web site and that's how I like it.

So what's not to like? Well a couple of things really. Security of a cloud is no better than security in a non-cloud data center. You still have the problems of internal espionage, external break-ins, and other Dick Tracy stuff.

There is also a migration problem. When the day comes that your application needs to move to a dedicated system (don't kid yourself – it will happen), you might have a heck of a time moving it. Unlike moving up to a bigger piece of iron, applications may have to be rebuilt to live in a different type of environment. In that way, I suppose, it is different. It's worse... and nobody wants that.

This is especially true of clouds built around service frameworks like Amazon's. At some point the application might get big enough that it makes sense to bring it in house. Worse yet, you could find yourself dissatisfied with the service provider (like that never happens!) and forced into an acrimonious divorce. This is an especially nasty problem because they have you by the data stores if you get my meaning.

These are not reasons to forgo the cloud. They are reasons to be careful. Figure these issues out ahead of time and make good choices up front. And ignore the hype. If someone slaps “cloud” on something that seems not so cloudy, be suspicious.

Remember, cloud computing is a strategy and maybe an architecture. It's not a product no matter how many times the corporate talking head says so.

Thursday, June 25, 2009

FCoE is Rubbish

I'm beginning to understand the debate about the Fibre Channel over Ethernet (FCoE) concept and I don't like it. At first it made little sense to me. iSCSI delivered most of what we needed in terms of cheap SANs. It leveraged the IP infrastructure already in place in just about every organization in the world. iSCSI also made use of 40 years of experience, knowledge, and training. For high performance you went with Fibre Channel since iSCSI couldn't meet the very high loads of some systems. Otherwise, iSCSI was good enough for a lot of applications.

So why FCoE? The cost structure is likely to be higher than iSCSI or at best the same. The performance wouldn't be the same as pure FC. The idea that you need a gateway to the IP network doesn't really make sense to me either. Who really does that anyway? A few folks doing long distance SANs perhaps but there are tools for that and cost is not a big problem in those environments. You can always gateway to iSCSI if needed and the hardware for that already exists.

Then it occurred to me. It's marketing, stupid! When networking vendors such as Cisco go out and sell their SAN products, they are generally on the same footing as the FC players like Brocade. Brocade has more experience and knowledge about SANs which translates into an advantage for them. They also know the storage folks inside large companies that network vendors don't. Those storage guys like having their own flavor of network technology. It keeps the network admins out of their shorts.

Now, FCoE starts to make sense. You can sell IT on convergence or native integration or unified platforms or whatever marketing babble you choose. It is hoped that management will rally around the idea of having one networking platform even for two different types of network applications. Don't kid yourself, SANs and LANs are very different network applications with very different technology needs.

Best of all, as a networking vendor you have the upper hand in the sale. You can insert your champions (the core networking folks) into the process. You can sell expertise that the FC guy doesn't have. The worst case scenario has you on equal footing to the FC vendor where you can sell on the merits of the products. Of course, with 30 more years of experience in Ethernet, you will have a few tricks up your sleeve that the FC guy doesn't. Nice position to be in.

FCoE as a convergence/integration/unified platform play is rubbish. No one is going to run SAN traffic and LAN traffic over the same Ethernet network. It will still be two pools of equipment, much of it specialized to FCoE. Most of the real networking expertise in a company is in the IP space so no real advantage there. Once you start to install the specialized FCoE switches and NICs (or brand new unified platforms using a forklift I imagine) the costs won't be that much different.

iSCSI makes sense in so far as it provides a low cost SAN option for low to mid-performance SANs. Old fashioned FC makes sense because it provides a proven high performance storage networking capability for intense applications. FCoE does neither. All it does is give networking vendors a leg up against existing FC vendors.

Great marketing I must admit. Not convincing technology but a good way to position SANs as just another network flavor. If there is a technology advantage here it doesn't seem to create much of a business advantage for the IT folks.

I'm sure I'll get a bunch of hate mail telling me all the minor advantages of FCoE, many from corporate mouthpieces. Instead of wasting your time on that, tell me why someone will pay money for FCoE rather than iSCSI or FC. Tell me why we need to gateway SANs from FC to Ethernet, which can't be routed and hence isn't good for wide area applications like remote backup.

Don't go down the road of the unified platforms either. Saying you need FCoE to create unified networks is not true and, at best, self-serving. Unified platforms happen because of software, not Ethernet.

Just don't say convergence, okay?

Thursday, April 16, 2009

Tweeted by Well Meaning EMC People

The problem with expressing oneself in 140 characters or less is that it doesn't provide an opportunity for clarity. This is why Twitter can be frustrating at times. Folks can't quite get your message. Not because there is something wrong with them but because of the limitations of the medium. You have to learn how to operate within the restrictions of the form and still get across what you mean. It's like Haiku. You have to learn how to do it and I'm still learning.

So it was when I commented on the announcements for EMC's newest Symmetrix V-Max. I received a number of replies to my Tweets that defended the new architecture. Looking back, I'm sure I was not clear, nor could we have had a meaningful dialog about the technical aspects of V-Max. It was my bad for trying to make this type of point in the Twitter medium.

See, my problem wasn't with V-Max at all but the way it was announced. I complained about the video of the EMC executives, I complained about the web site description, I complained about the datasheet, and I complained about the blogs that pretty much reiterated what was on the other three. Here's why.

They said almost nothing of value.

I get that the V-Max architecture is somehow great for virtual server environments. The name alone told me that (good name by the way – huzzah to product management). So what? Lots of folks claim that. How does it do that? Even if you could tell me how (which is what most of the EMCers want to do), it still doesn't tell me why I should care. I'm left to ponder the Why.

That is, in a nutshell, the core problem. All products and marketing have to pass the “Who Cares?” test. What serious problem do I have that this solves? What is so compelling about this product that I need to run out and buy it? Given my ever shrinking budget and staff, why does this doohickey deserve a precious slice of my time and money? I don't get that from the EMC marketing.

The other thing marketing has to do is grab my attention so that I want to listen to the “Who cares?” message. This is not a gentle tap on the shoulder but a grabbing by the ears and shaking kind of thing. It doesn't have to be cute or loud or include music by nearly dead old rock stars. It has to be compelling. Even if you want to turn away, somehow you can't. Like an accident on the highway.

The product marketing for the launch of V-Max fails in this capacity. It is the same old formula. Have the executives get up there and make grand but vague statements followed up by bland and vague marketing literature. This doesn't even work for Steve Jobs anymore and he is as close to a rock star as this industry has. The EMC video, for example, almost looks like a parody of a computer commercial. The white background with the executive talking head is too much like a Mac vs. PC commercial but without the Mac. Actually, if they had made a parody, like the Microsoft parody of the Volkswagen commercial some time back, it would have been much more effective.

Check out the data sheet. It is a nine page white paper full of more hyperbole than this blog. Other than the claim to be able to scale to petabytes (an old claim that everyone makes) and that it supports a variety of disk types (like everyone), it's hard to pick out anything concrete from the first few pages.

I am sure that none of this matters to the EMC faithful. Current customers will bypass all this and get the message direct from their sales rep. EMC sales reps do a great job of connecting technology and features to real world problems. Potential customers on the other hand will see little here that makes them pick up the phone and call EMC. They will say to themselves “So what?” not “Holy storage problems Batman, we need one of those for the Batcave computer!”

So, before all the technical folks tweet me to death, it's not about the technology, it's about the marketing. It's tired. I know you don't want to hear that, especially when your best customers are going out of business, but you need to hear it. You can choose to chalk it up to one cranky, uninformed blogger. On the other hand, you can see it as a wake up call to find better ways to market your products in tough times. Hopefully to find new customers to replace the ones that have evaporated in the recession.

Start with “The new V-Max will allow you to cut costs and operate with reduced staff by....” You can finish off that line. Then I will care.




Tuesday, April 22, 2008

Stop, Drop, and Rock and Roll

Online personal storage has been around for years. Yahoo has had its Briefcase for eons. XDrive, a service of AOL, has also been around awhile. EMC just recently bought yet another on-line storage and backup company called Mozy. So you would think that another entry in this rather crowded field would be a cause for overwhelming ennui.

Happily, that is not the case with Dropbox, one of the latest entries into the on-line personal storage market. It's not so much what they do - all the other companies in the market perform similar functions - but the way they do it. Like XDrive and Mozy, Dropbox has a client interface for your desktop that allows the on-line storage to appear as a desktop folder or drive. Unlike XDrive and Mozy, Dropbox also supports this functionality for Macintosh computers without having to add additional software such as Adobe Air. Like XDrive and Briefcase, Dropbox has a web interface that allows you to access these files from any browser. Unlike those products, Dropbox has a crisp, clean interface. It's sort of the Sauvignon Blanc of interfaces - refreshing and somewhat tart. Briefcase in particular is in need of a face lift. It looks positively pre-2000. It's the Bordeaux that is long past its peak.

Unlike any of the above, Dropbox is fast or at least appears to be. It is very good at caching files so that they seem to be local in terms of performance. Even when you upload a lot of files at once, it is incredibly fast compared to other offerings.

Sharing files with others is very easy too. In fact, this is the strength of Dropbox. Let's say you have a big file that you want to send to someone. You can e-mail it and risk it choking as it pushes through your and the recipient's e-mail gateways. With Dropbox, you can simply upload the file and make it available through a browser link. If they also use Dropbox, you can share it with them and it looks like a local or network drive, no matter whether they are on a Windows machine or a Mac. If you change or delete the file, those changes are made immediately available, just as if you changed them on a file server.

In the end, that's what Dropbox really is - an Internet file server. You can do all the things one normally does with shared and private drives on a local file server. It even supports drag and drop well. Compared to Dropbox, all the others seem slow and clunky.

About the only thing that XDrive and Mozy have that Dropbox doesn't is an automatic backup client built into the software. Since this "feature" does not work particularly well for either of these services, that's not much of a lack. Your usual tools including SyncToy work just fine with Dropbox.

Overall, Dropbox is an excellent on-line, personal storage service. I only hope they can figure out how to make money at it. Maybe Yahoo would like to buy it... if they aren't gobbled up themselves.

Thursday, March 27, 2008

What am I missing?

Maybe I'm not as smart as the rocket scientists at EMC. On the other hand, maybe they are just pursuing a bad strategy. My own ego requires I think the latter is the case so...

What I don't get is EMC's pursuit of Iomega. If this is a downmarket move a la Cisco and Linksys, then it's not a good one. Iomega was once a great brand and technology leader. They invented the Zip and Jaz drives and pioneered portable hard drives. They were to the 90's what SanDisk, Fabrik, and Maxtor are today - a premium purveyor of personal mobile storage.

Sadly, Iomega has fallen on hard times. First cheap CD-ROMs and later flash memory eroded the need for their supercharged floppy disk products. In comparison to these alternatives, the products that Iomega was selling became expensive, bulky, and low capacity. Lately, they have tried to produce personal storage devices such as CD and DVD writers as well as small, USB portable hard drives and tape cartridge devices. In other words, the commodity stuff that lots of companies are making.

Recently they were planning to sell out to a Chinese outfit called ExcelStor. It was presented to the public that Iomega was buying ExcelStor but it's really the other way around. This makes EMC's move even more mystifying. They are making an unwanted, unsolicited bid for a ho-hum consumer storage products company whose brand no longer has any cachet or even geek appeal. Ask your average teenager who Iomega is and they look at you like you're talking about Calvin Coolidge.

If EMC wanted to expand into consumer markets they could have purchased a startup on the upswing like SimpleTech instead of letting Fabrik grab them. Heck, they could have gone for LaCie or even Maxtor if they wanted to attack the consumer market. Perhaps it would have cost more (though Fabrik only paid $43M for SimpleTech) but you get what you pay for. Iomega is last century enough to be almost steampunk. On top of all that, EMC is getting a bunch of products, like the tape cartridge products, that they can't possibly think have any legs in the market.

So, I don't get it. I usually find EMC's moves wise. RSA gave them a strong security portfolio and no one can argue with the VMware move. The market has proven that one. But Iomega? I think the rocket scientists might have misfired this time.

Friday, February 22, 2008

Personal Storage is now Stupid Cheap

I mean stupid cheap. Like Jack Benny cheap. As in so cheap, I have sneakers that cost more. You get the picture...

A 500GB internal SATA drive goes for around US$100. I just bought a SimpleTech 2.5" 120GB USB drive for my laptop that uses no external power supply for around US$75. At this rate I'll be buying multi-Terabyte drives for my home network soon.

I can't figure out the economics here. At these prices, can the drive manufacturers still be making money? The USB cable alone is worth a few bucks. Add in packaging, support, the cut the store gets, distribution and - oh yeah - the drive itself and what's left? $1.98? I exaggerate a little here but only a little.

I'm just old enough to remember the day when I had to remove programs from my hard drive to make room for more programs. Now, I just plug a new drive into my mini-NAS devices and - voila! - more than enough storage. The bigger problem is starting to be managing the data. I constantly misplace files and programs. I have software on my system that I no longer know the purpose of. Which is probably a clue that I don't need it. I have software for Mac OS 7 sitting on my network. I don't even own a Mac anymore!

Data storage has come a long way in 10 years. What used to be a valuable resource is turning into a cheap commodity. Who needs to clean off a hard drive anymore? Even with music and video files, it's hard to fill up these big drives. And, if I should happen to get above 60% utilization, I can plug in another massive drive for next to nothing. I'll get bored with what's on the drive long before that happens. The bottom line: for even a power user the growth in data is far below the growth in drive size.

So, there you have the state of personal storage. My first computer had a couple of 5 1/4" floppy drives, my first hard drive was 20MB, and I am currently sitting on a total of nearly a terabyte at home. I am finally at the point where I can say that I will probably never fill up all these drives. Now that is something I never thought I'd hear myself say.

On an entirely different note, I have stumbled upon what has to be the single best personal technology blog ever! It's from Britain and called Dork Talk. It's written by the brilliant actor (yes actor!) Stephen Fry, best known in the US for his portrayal of Jeeves opposite Hugh Laurie's Bertie Wooster. It is one of the funniest, most insightful, and downright entertaining takes on gadgets and personal tech. Go read it NOW! Oh, and I rarely plug other blogs so you know this one is good.


Wednesday, December 12, 2007

Santa Skips the NeoScale House

It looks like the Grinch came early for the folks at NeoScale. They are gone, finished, kaput! That's not only sad for the people involved (and they were good people) but for the storage security market as well.

Oh wait! That was the problem. There was no storage security market! In the end, that's what killed them. They had a product that quickly became a feature. They had a market that was not really a market unto itself anymore. Somewhere on the Autobahn of technology, they found themselves driving a Model T while everyone else zipped by in a Porsche.

If you are looking for the moment in time when NeoScale jumped the shark, you would have to look to EMC's RSA acquisition. At that point everyone knew (except NeoScale perhaps) that storage security was being folded into the overall security market. Decru clearly saw that train coming down the track. They were smart and sold themselves to NetApp. NetApp got some key competitive features and technology, and Decru's investors got out while the getting was good.

But NeoScale. Ah, poor NeoScale. They held onto their dream a little too long. Life just passed them by. And now they're gone into the dust heap of data storage history.

So this holiday season, let's all remember the poor folks at NeoScale and, in the future, remember the lesson that they bring this season. Expand or sell, or you die. A product does not a company make. A feature even less so.

Tuesday, October 23, 2007

SNW, a Lovely Time of Year

It is that time of year again. Geese are flying south, the leaves are turning colors, and herds of "storage people" have gone to Storage Networking World. This year's show was not like other SNWs for me. For the first time, I was there as a vendor (someone trying to sell something to someone else) but also as a consumer. Since the company I work for (the wonderful IP.com) buys data center equipment including storage, I had a different mindset. Boy, did that change the way I looked at the show.

On a positive note, the show appeared well attended. My talk was on the last day at 8:30am, the trade show equivalent of the graveyard shift. In the past, that severely limited my audience size to say the least. Not this year! Instead, I had a full, practically overflowing, room. That's great because it tells me that people are attending the show for the right reasons. It's no longer a junket. It's a real learning experience. This puts pressure on speakers. If people are serious about the sessions, you can't throw together a talk at the last minute. That's an excellent turn of events. Nothing vendors like better than attendees that are there for serious reasons and not just trick-or-treating at the expo booths.

In terms of what was being offered from a product perspective, there was little earth shattering. There was a bunch of noise about Fibre Channel over Ethernet, which I don't really get. I don't mean I don't get it technically. I just don't get why anyone would care. People who want to install Ethernet SANs are happy with iSCSI. Those who need more than iSCSI can deliver are willing to go with real Fibre Channel. FCoE looks like someone is trying to slice the baloney too thin and find a middle path between the two. Okay. I still don't get it.

There was a lot of marketing around XAM. It didn't appear like a lot of people cared too much about that either. I'm guessing that it is more important to other vendors than IT people. What that boils down to is that the people actually attending the show (IT people) won't care much about it past the cute buttons being handed out. The XAM marketing looks like navel gazing to me.

Other than that, there were still too many array vendors, ILM has all but disappeared, and there were a lot of storage management vendors selling tools that should be bundled in the first place. In other words, not much has changed since last year.

In an interesting aside, while at the ARMA conference the week before SNW I noticed something strange. ARMA is all about records management and there was a lot said about ILM, especially mapping ILM to records management processes and terminology. It struck me as unusual that the records management folks still seem to care about ILM and the storage folks (who started the ILM train rolling) don't. Odd!

Tuesday, August 07, 2007

Save Me The Schadenfreude

For those who don't know, schadenfreude is when you take pleasure in someone else's misery. There is probably a lot of that going on in the storage industry today after the conviction of Greg Reyes, former CEO of Brocade, in federal court. Reyes was not always liked and not just because he was successful.

I don't feel any pleasure in his conviction though. For whatever faults he may have had, to be looking at possible decades in jail for something like options backdating is crazy. People commit rape and murder and don't spend as much time in the hoosegow.

What is even more nuts is that he didn't make any money on it himself. That's right. Reyes backdated options for other people but not himself. That suggests to me that he really didn't think it was illegal. Did he think it was scummy? That's between him and his confessor. You don't put people in jail because they do lousy things, only illegal ones.

I'm sure this is scaring the heck out of others who sit on corporate boards or are corporate executives in public companies. It's one thing to make an accounting mistake - or even to bend the rules a bit - and have to pay fines to make it go away. It is something else entirely to find yourself a character in HBO's OZ. Okay, make them resign in disgrace if they were caught trying to game the system. Make them pay back the ill gotten gains. But send them up the river? That's not "sending a message that corporate malfeasance won't be tolerated!" It's going after rich people because they're rich and sending them to reeducation camps. Where's the Gang of Four when you need them?

So, if any of you are feeling smug and thinking that Greg got his comeuppance, don't tell me about it. He doesn't deserve this and everyone knows it. This should be a civil, not a criminal, issue. If he was a bad boy and made an accounting boo-boo, then take away his piggy bank. Don't lock him up in the slammer. That's not justice. It's vengeance and we should be better than that.

Monday, March 19, 2007

Greetings from the Data Protection Summit

I've been going to fewer and fewer trade shows, conferences, and expos. The reason: I see mostly the same folks (nice though they are) and the same gear. I have the same conversations about mostly the same stuff. Storage conferences in particular are dull with endless rows of storage systems, mostly disk systems, that all look the same. Usually someone tries to convince me that their new disk system is faster than anything ever known. Okay, sure. Whatever...

However, I decided to go to the Data Protection Summit in Irvine even though it's a new event. I like focused conferences especially about the very thing I write about the most. As can be expected with a new conference, turnout was not what everyone wanted but the sessions were high quality. On the vendor side, there weren't any of the giant three-letter-acronym companies. Instead, there were mostly smaller companies that focused on one or two innovative (or so they hope) products.

So here are my impressions of what the conference had to offer:

  1. Gobs of storage security “platforms”. More concentration of storage security than I've ever seen before. There were more encryption devices displayed than people in the sessions. The problem with these devices is that they all look and act the same. I think it would be hard to decide on a vendor given the sameness of the products.

    Note to vendors: People don't buy based on small differentiators especially when they can buy from a large full service vendor instead.

  2. Where was everyone else? With the emphasis on encryption so strong, there was little room for other forms of data protection. Hardly anyone was showing good old-fashioned data protection solutions, tape or disk. ILM, traditional backup and restore, CDP, etc. were talked about in the sessions but no one was showing them. It is a strange disconnect between what was discussed and what was shown.

    This tells me two things. First, that in the minds of small vendors at least, all the other forms of data protection have jumped the shark. No matter that backup systems are what people are buying. It's just not interesting enough to get the VCs to invest in you. Second, old-fashioned backup is still interesting enough to talk about and (more importantly) buy. It's just not as much fun to see. Everyone talks about the new Fall Out Boy album but secretly listens to Bruce Springsteen.

  3. Compliance Rules! If security didn't get your attention then regulatory compliance sure did. No matter what the subject, it always came up. Vendors, analysts, and IT folks all had something to say about it (me included). It's so worrisome even though it only represents a small part of what is wrong with data protection. It just goes to show that lawyers can scare the snot out of anyone, even seasoned IT people. Most of us would rather face an angry mob of end-users whose data was dumped down the drain than be deposed by an attorney.

    We can all imagine the opposition lawyer, dressed completely in black and hissing “Where are the tapes?” like Darth Vader. Sends a shudder down your spine, doesn't it?

  4. Your own private DR site. I saw something really cool from a company called VaultStor. It was a standalone, self-contained D2D system in a fireproof and waterproof vault. It's a DR center that fits in a closet. It's a great solution for small and remote offices where it is terribly inconvenient to build a protected data center. Often in those cases the devices are left vulnerable to physical disaster, or the data is left vulnerable to any kind of disaster. VaultStor is a great idea. My thanks to Tom Rauscher of Zetera for pointing them out to me.

  5. From The “All Talk and No Action” Department. There was a lot of gabbing about protecting mobile data but few solutions. Everyone admits it's a problem yet there didn't seem to be too many good ways to address it. Most of the solutions were geared toward protecting mobile data from prying eyes but not from total loss.

    There seems to be this fantasy land where laptops hold no useful information that isn't backed up on a corporate server. In this wonderful place, the CEO has never dropped his laptop down a flight of stairs and a salesperson never accidentally erased the customer proposal that was written in the hotel the night before. Lovely place. I want to live there.

And there you have it. The first Data Protection Summit has come and gone. It yielded, for me anyway, some interesting insights into where the industry is heading. So, despite the low turnout, there was a lot of value in it.


Wednesday, February 21, 2007

Managing Metadata

Everywhere I turn, I hear more about metadata. It seems that everyone is jumping on the metadata bandwagon. For those of you who have never heard the term, it is data about data. More precisely, metadata describes data, providing the context that allows us to make it useful.

Bringing Structure to Chaos

Organizing data is something we do in our heads but that computers are pretty poor at. It is natural for a human being to develop schema and categories for the endless streams of data that invade our consciousness every moment that we are awake. We can file things away for later use, delete them as unimportant, and connect them with other data to form relationships. It is an innate human capability.

Not so with computers. They are literal in nature and driven by the commands that we humans give them. No matter how smart we think computers are, compared to us organics, they are as dumb as rocks.

Metadata is an attempt to give computers a brain boost. By describing data, we are able to automate the categorization and presentation of data in order to make it more meaningful. In other words, we can build schema out of unstructured data. Databases do this by imposing a rigid structure on the data. This works fine for data that is naturally organized into neat little arrangements. For sloppy situations, say 90% of the data in our lives, databases are not so useful.

Metadata Is All Around Us

We are already swimming in metadata. All those music files clogging up our hard drives have important metadata associated with them. That's why your iPod can display the name, artist and other important information when you play a song and iTunes can build playlists automatically. Your digital camera places metadata into all of those pictures of your kids. Because of metadata, you can attach titles and other information to them and have them be available to all kinds of software. Internet services use metadata extensively to provide those cool tag clouds, relevant search responses, and social networking links.
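
As a concrete illustration, here is a minimal sketch of reading the metadata embedded in a music file, using the open-source mutagen Python library. The filename and tag values are hypothetical, but this is exactly the kind of metadata an iPod or iTunes reads.

```python
# Reading the ID3 metadata embedded in an MP3 file with mutagen
# (pip install mutagen). The filename and values are hypothetical.
from mutagen.easyid3 import EasyID3

tags = EasyID3("some_song.mp3")  # dict-like view of the file's ID3 tags
print(tags.get("title"))   # e.g. ['Some Song']
print(tags.get("artist"))  # e.g. ['Some Band']
```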

Businesses have a keen need for metadata. With so many word processor documents, presentations, graphics, and spreadsheets strewn about corporate servers, there needs to be a good way to organize and manage them. Information Lifecycle Management assumes the ability to generate and use metadata. Advanced backup and recovery also uses metadata. Companies are trying to make sense out of the vast stores of unstructured data in their clutches. Whether it's to help find, manage, or protect data, organizations are increasingly turning to metadata approaches to do so.

Dragged Down By The Boat

So, we turn to metadata to keep us from drowning in data. Unfortunately, we are starting to find ourselves drowning in metadata too. A lot of metadata is unmanaged. Managing metadata sounds a lot like watching the watchers. If we don't start to do a better job of managing metadata, we are going to find out an ugly truth about it – it can quickly become meaningless. Just check out the tag cloud on an on-line service such as Technorati or Flickr. The clouds are so huge that they're practically useless. I'm a big fan of tag clouds when they are done right. The ability to associate well thought out words and phrases with a piece of data makes it much easier to find what you want and attach meaning to whatever the data represents.

The important phrase here is “well thought out”. A lot of metadata is impulsive. Like a three-year-old with a tendency to say whatever silly thought comes into their head, a lot of tags are meaningless and transient. Whereas the purpose of metadata is to impart some extended meaning to the data, a lot of metadata does the opposite. It creates a confused jumble of words that shines no light on the meaning of the data.

The solution is to start to manage the metadata. That means (and I know this is heresy that I speak) rules. Rules about what words can be used in what circumstances. Rules about the number of tags associated with any piece of data. Rules about the rules basically. It makes my stomach hurt but it is necessary.
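
As a thought experiment, here is a minimal sketch of what such rules might look like in code. The controlled vocabulary and the tag limit are entirely hypothetical stand-ins for whatever your own guidelines would specify.

```python
# A toy "metadata rules" checker: a controlled vocabulary plus a cap
# on the number of tags per item. Both are hypothetical examples.
ALLOWED_TAGS = {"finance", "hr", "engineering", "contract",
                "proposal", "archive", "confidential"}
MAX_TAGS = 5

def validate_tags(tags):
    """Return (ok, problems) for a proposed list of tags."""
    problems = []
    if len(tags) > MAX_TAGS:
        problems.append(f"too many tags ({len(tags)} > {MAX_TAGS})")
    for tag in tags:
        if tag.lower() not in ALLOWED_TAGS:
            problems.append(f"'{tag}' is not in the controlled vocabulary")
    return (not problems, problems)

ok, problems = validate_tags(["Finance", "contract", "stuff"])
print(ok, problems)  # False ["'stuff' is not in the controlled vocabulary"]
```

The bureaucrat with the horrible personality, in other words, can be partly automated.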

I don't expect this discipline from Internet services. It runs counter to the “one happy and equal family” attitude that draws people to these services. For companies, though, this is necessary as they implement metadata-driven solutions. Unfortunately, it means guidelines (tagged as “information”, “guidelines”, “metadata”, or something) and some person with a horrible, bureaucratic personality to enforce them, as in the sketch above. Think of it as a necessary evil, like lawmen in the old West.

For the most part, companies will probably start to manage metadata when it is already too late, when they are already drowning in the stuff. There is an opportunity to still avoid the bad scenario though. Metadata-based management of unstructured data is still pretty new. Set up the rules and guidelines now. Enforce the rules and review the tags and how they are used regularly. Eventually, there will be metadata analysis software to assist you but in the meantime, put the framework in place. The opportunity is there to do it right from the beginning and avoid making a big mistake. Use metadata to create value from your data rather than confuse things further.


Friday, January 05, 2007

Looking Forward in 2007

At this time of year, there is a tendency to look back on the year that just passed. Consequently, the industry press and blogosphere are chock-full of articles with titles like "The Best of 2006 in..." or "Looking Back at 2006 in...". I thought about doing that too. For about ten seconds. Who wants a recap of stuff we already know? That's fun for cultural ruminations about music or movies but not for technology, especially a subspecies like data storage. We are more interested in where we are going than where we've been.

One of the reasons I was tempted to do a retrospective is because the upcoming year seems to be shaping up to be, well, a bit dull. The last few years have been, technologically speaking, quite exciting. We have seen the mainstreaming of ILM, CDP, disk-to-disk backup, and WAFS. Highly scalable NAS devices and file virtualization, on the edge just a few years ago, are now commonplace.

With all this new technology having previously been introduced, a slowdown is inevitable. Once technology is proven to the point where managers can feel it is safe, they deploy it. That's where the focus will be in 2007. Getting all of this useful technology into the data center without something going horribly wrong. The good news is that after all the data disasters of the past year, there is renewed interest in data protection at the highest levels of the corporation. That means there will be budget for these projects.

For vendors, this is good news indeed. Rather than evaluating new technology, IT managers will actually be buying it. It's a salesperson's dream - something new to talk about that works and that customers might actually buy.

Also on the vendor side, consolidation will continue. The big companies will continue to gobble up the small ones that are still left. Their focus will be on integrating these newer technologies into their overall product lines. To that end, CDP is fast becoming a feature of disk-based backup and WAFS another network service.

Since everyone will be spending their time digesting acquisitions and technology, don't expect radical new technology to hit the streets. Think incremental changes to products rather than revolutionary new technology. Most folks will be too worried about deployment to care about something disruptive.

For sure, some areas will continue to show innovation. Information management tools such as classification or information tracking are still in their infancy. A lot can happen in this arena in the next year. Application specific storage also has some ways to go and will be a hot area in 2007.

A lot will be happening on the consumer side as well. While more a packaging project than a technology one, making advanced technology available to consumers will be an interesting path for some companies. Seagate and Iomega are gearing up to attack this market head on. As more and more households store and share lots of digital photos, music, and videos, they will need better storage options than a PC hard drive. These products will be well received once the prices come down a bit.

In a nutshell, 2007 will be a bit boring unless you are making or spending money. In that case, you will be very busy.


Friday, November 10, 2006

Time May Change Me, But I Can't Trace Time

The title of this entry is from the David Bowie song "Changes" off the album "Hunky Dory". While I know he didn't have computer data in mind when he sang these words, it sure applies. How do we trace time? Can we track changes in data and adjust resources based on change? The answer lies in another quote from Maestro Bowie "It Ain't Easy".

I've been researching a paper around the idea of just-in-time data protection. It pulls together some concepts that I have been batting around for quite some time, including the idea of Service-Oriented Architectures and dynamic data protection. I've written about both of these topics before. In looking at how to make the data protection environment more responsive, I started to realize that resource allocation needs to be adjusted according to how quickly the data is changing. The rate of change of the data should then drive one of the major data protection metrics, Recovery Point Objective or RPO. RPO basically says how far back in time you are committed to recover some piece of data. My thinking goes like this: Why spend money to provide high-performance backup to data that isn't changing? Conversely, rapidly changing data justifies a short RPO and more resources.

As I went about talking with vendors and IT people I quickly discovered that there was no good and easy way to determine the rate of change of data. We simply don't track data that way. There are some indirect measures, especially the rate of growth of disk storage. For homogeneous data stores, such as a single database on a disk, this works well, assuming your unit of measure is the entire database. It doesn't work well at all for unstructured data, especially file data. We might be able to look at how much space is used in a volume but that's a pretty gross measure. Databases have some tools to show how fast data is changing but that does not translate to the disk block level and does nothing for file systems.

What we need to understand is how often individual files are changing and then adjust their data protection accordingly. If a file is changing an awful lot, it might justify a very short RPO. If it's not changing at all, perhaps we don't need to back it up so long as a version exists in an archive. In other words, we need to assign resources that match the metrics, and the rate of change drives the metrics. This is complicated because how often data changes is itself variable. It might follow a predictable lifecycle, but then again, it might be more erratic than that. The only way to know is to actually measure the rate of change of data, especially file data.

The simple solution is a system that tracks changes in the file system and calculates the rate of change for individual files. This information would then be used to calculate an appropriate RPO and assign data protection resources that meet the metrics. The best system would do this on the fly and dynamically reassign data protection resources. A system like this would be cost effective while providing high levels of data protection.
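
To make the idea concrete, here is a back-of-the-envelope sketch of the measurement half: snapshot file modification times periodically, see which files changed between snapshots, and map the observed change rate to an RPO tier. The directory, tiers, and thresholds are all hypothetical.

```python
# Sketch: measure per-file change rates and suggest an RPO tier.
# The directory, tiers, and thresholds are hypothetical.
import os

def snapshot(root):
    """Map each file under root to its last-modified time."""
    mtimes = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mtimes[path] = os.path.getmtime(path)
            except OSError:
                pass  # file vanished between walk and stat
    return mtimes

def suggest_rpo(changes_per_day):
    """Translate a change rate into an RPO in hours (hypothetical tiers)."""
    if changes_per_day >= 24:
        return 1       # hot data: near-continuous protection
    if changes_per_day >= 1:
        return 24      # warm data: a daily backup will do
    return None        # cold data: an archive copy suffices, no backup RPO

# In practice, a scheduled job would persist yesterday's snapshot
# (to JSON, say) and compare it with today's; shown inline for brevity.
yesterday = snapshot("/data")
today = snapshot("/data")
for path, mtime in today.items():
    rate = 1 if yesterday.get(path) != mtime else 0  # changes seen today
    print(path, "changes/day:", rate, "-> RPO (hours):", suggest_rpo(rate))
```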

No one has this yet. I'm thinking it is two to five years before this idea really takes hold in products. That's okay. It's usually a long time before something goes from a good idea to a usable product. We have to start with the good idea.

Thursday, November 02, 2006

More Fun for You at SNW

After last year, I had pretty low expectations of SNW this year. There was so little new and interesting that it was discouraging. So, going into this year's Expo I have to admit that I had a bad attitude. I am happy to report that I was mistaken and actually found some interesting trends and technology. That is not to say all is rosy. There is still a lot of silliness out there but that is easily balanced by intelligent responses to real problems.

The Most Interesting Announcement

The announcement that caught my attention right away was EMC's acquisition of Avamar. Avamar was an early entry into disk-to-disk backup. What is most interesting about Avamar is their software; the hardware is beside the point. They are leaders in making backup more efficient by storing only the data that hasn't been seen before, a technique called de-duping. They have also done a great job of presenting backed-up data so that it is easy to find and retrieve.
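
For the uninitiated, here is a toy illustration of the principle behind de-duping: carve the data into chunks, fingerprint each chunk with a hash, and store a chunk only the first time it appears. Real products such as Avamar use far more sophisticated chunking and indexing; this sketch only shows the core idea.

```python
# Toy block-level de-duplication: hash fixed-size chunks and store
# each unique chunk only once. The chunk size is an arbitrary example.
import hashlib

CHUNK_SIZE = 4096
store = {}  # chunk hash -> chunk bytes (the "backup repository")

def backup(data):
    """Return the list of chunk hashes that represents this data."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:     # only new, unique chunks are stored
            store[digest] = chunk
        recipe.append(digest)
    return recipe

def restore(recipe):
    """Reassemble the original data from its chunk hashes."""
    return b"".join(store[d] for d in recipe)

original = b"A" * 10000 + b"B" * 10000
recipe = backup(original)
assert restore(recipe) == original
print(len(recipe), "chunks referenced,", len(store), "chunks stored")  # 5 vs. 4
```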

This fills a significant hole in EMC's lineup. They now have traditional backup (from Legato), CDP, disk-to-disk backup, and remote copy - pretty much a full-spectrum data protection product line. Nice.

Application Specific Storage

There are way too many storage array vendors in the marketplace. They can't all survive. However, there is a nice trend emerging, one that I think may have legs - Application Specific Storage. By this I mean storage systems tailored for specific types of applications. Until now, we have mostly taken general purpose arrays and tweaked them for specific purposes. In some cases, vendors have built specialized arrays and software for certain industries such as graphics, media, or the large files typical of oil and gas exploration.

The newest classes of storage are similar in concept - build arrays that fit certain application criteria. These arrays are married to specialized file systems, network operating systems, and other hardware to make networked storage that can meet the performance and management needs of common applications. This is a trend to watch.

Annoying Techspeak

Okay, there are lots of silly acronyms and marketing speak in the computer technology industry. What I really hate is when it is downright misleading. I saw the term "adaptive data protection" tossed around on some booths. That attracted me like a moth to a lightbulb, of course. Unfortunately, there was nothing adaptive about it. What I found was configurable (read: manually configurable) CDP. Aw, come on! Adaptive means that it changes when the environment changes. It does not mean that I can change it when I notice that something is different.

ILM In A Narrow Band

There is much less talk about ILM than last year, or even the year before. What talk there is now centers on more focused ILM products: lots of advanced classification software and search-and-index engines. This is good. It shows the maturation of the ILM space.

Oh You Pretty Things!

Design has finally come to storage. I don't mean engineering design, functional but unattractive. Instead, I mean design in terms of form and attractiveness. Let's face it, a lot of storage gear is downright ugly. Some of it is so ugly that you need to put a paper bag over it before you put it in your data center. Now we have curves and glowing logos. You actually want to look at the stuff.

Gimme Shelter

Yes! More secure products: secure software, secure storage, and security appliances. Not only is there critical mass in security products, but more and more security is integrated into other components. Let the black hats penetrate the network and server perimeters. They'll hit the wall with storage.

Give Me Some Relief

And what do IT professionals want? Relief. Relief from regulators and lawyers. Relief from high costs. And relief from the crying and wailing over lost data. They want to keep what they want while ditching what the lawyers want to ditch.

Perhaps this is a sign that things are changing in the storage industry. Innovation still exists and is growing. That's very good news.