Saturday, May 24, 2008

EMC World 2008

Well folks, I just got back from EMC World 2008 in Las Vegas. It was a fun trip, but man am I tired. There was a lot of walking at the conference, as well as a lot of late nights having fun after the conference sessions were over each day.

I'll have some more detailed postings later on what I think about some of the technology I saw at EMC World. Right now I just want to talk a bit about the general trends and feelings I got from the convention.

First and foremost, EMC has finally awakened to the fact that people want deduplicating products, and they want them now. EMC has really been behind the eight ball when it comes to dedupe. I don't know if it was because of their close relationship with FalconStor in the past, or what, but they really didn't have much of a story to tell when it came to dedupe, and start-ups like Data Domain were definitely eating EMC's lunch in that market. But the EMC giant has definitely awakened from its slumber and introduced some interesting new products.

Basically, the new products fall into two categories. The first is a software addition to the existing DL400 line which provides deduplication. The second is a new line of deduplication engines that provide much the same capabilities as Data Domain does. The main differences are that EMC's appliances give users a choice between in-line deduplication, post-processing, or no deduplication at all, and that they have a well-designed VTL feature, which is an area Data Domain has been struggling in.
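To make the in-line versus post-processing distinction concrete, here's a minimal sketch of the chunk-fingerprinting idea that dedupe appliances are built on. This is purely my own toy illustration in Python; the DedupeStore class, the fixed chunk size, and everything else here is made up for illustration, not taken from EMC's or Data Domain's actual implementations.

    import hashlib

    CHUNK_SIZE = 8 * 1024  # fixed-size chunks for simplicity; real products often use variable-size chunking

    class DedupeStore:
        """Toy content-addressed store: keep exactly one copy of each unique chunk."""
        def __init__(self):
            self.chunks = {}  # fingerprint -> chunk bytes

        def write(self, data):
            """Return a 'recipe' of fingerprints; only never-before-seen chunks consume space."""
            recipe = []
            for i in range(0, len(data), CHUNK_SIZE):
                chunk = data[i:i + CHUNK_SIZE]
                fp = hashlib.sha256(chunk).hexdigest()
                if fp not in self.chunks:  # in-line dedupe: the duplicate is caught before it hits disk
                    self.chunks[fp] = chunk
                recipe.append(fp)
            return recipe

        def read(self, recipe):
            return b"".join(self.chunks[fp] for fp in recipe)

    store = DedupeStore()
    data = (b"A" * CHUNK_SIZE) * 3 + (b"B" * CHUNK_SIZE) * 3  # six logical chunks, only two distinct
    recipe = store.write(data)
    print(len(store.chunks), "unique chunks stored for", len(recipe), "logical chunks")  # 2 for 6

Post-processing runs essentially the same loop, just later and against data that has already landed on disk, so the trade-off is write-path latency (in-line) versus temporary landing capacity (post-process). That's why having a choice between the two modes is a nice touch.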

The other area that EMC was emphasizing was "green computing". A lot of this was nothing more than marketing hype and spin on existing products. However, they did mention a feature coming soon that really was "green computing": spinning down drives that aren't currently in use. Now, while EMC didn't introduce any specific products yet, they did suggest that we would see this technology first in the VTLs, but that it could make an appearance in the overall CLARiiON line in the not-too-distant future.
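Since EMC didn't show any product details, here's nothing more than a back-of-the-napkin sketch of the idle-timer idea behind drive spin-down. The names and the 30-minute threshold are entirely my own invention, just to show the shape of the policy.

    import time

    IDLE_TIMEOUT = 30 * 60  # spin down after 30 idle minutes; a made-up policy value

    class Drive:
        def __init__(self, name):
            self.name = name
            self.spun_up = True
            self.last_io = time.monotonic()

        def io(self):
            if not self.spun_up:
                print(self.name, "spinning up (the first I/O after a spin-down pays a latency penalty)")
                self.spun_up = True
            self.last_io = time.monotonic()

    def spin_down_idle(drives):
        """Background policy loop: park any drive that hasn't seen I/O lately."""
        now = time.monotonic()
        for d in drives:
            if d.spun_up and now - d.last_io > IDLE_TIMEOUT:
                print(d.name, "spinning down to save power")
                d.spun_up = False

It's easy to see why a VTL would be the natural first home for something like this: backup data sits untouched for long stretches, so whole shelves of drives could be parked without anyone noticing the spin-up delay.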

Overall, a lot of EMC marketing around "green", but some new technology and a good opportunity to talk with the folks at EMC about where they are going with some of their products. I got to spend a little time talking with the folks who work on StorageScope about reporting, and about support for AIX VIO in Control Center in general.

Finally, I took my wife along so she could have some fun as well, and I think she ended up having more fun than I did. Las Vegas is a great place for shopping, hanging out in the spa, and generally having a good time, all of which she did while she was there. We also went to see Phantom of the Opera, which was great. Overall, a good trip for both of us. More details in a later posting.

--joerg

Tuesday, May 13, 2008

Storage Sea Change

Folks,

After re-reading yesterday's posting, I had one of those "well DUH" moments. It seems obvious now, but it hit me like a ton of bricks. Block Storage Virtualization (BSV) is creating a sea change for how people are going to buy their storage.

Once we are in a virtual world, we no longer need "intelligent storage". All we really want is cheap storage; the intelligence will be elsewhere (i.e. in the virtualization engine). Of course, this is the reason that so many vendors (NetApp and HDS spring directly to mind) have put virtualization right into their arrays. They are really just trying to hold back the tide.

But it really is holding back the tide. Why would I want to commit to a vendor like HDS as my front end virtualization engine? Why wouldn't I want a completely independent engine? Well, at least one reason springs to mind. It might be easier to get there from here. What I mean by that is if I have an existing vendor's product already in house, have processes built around it, have people trained, etc. then it makes some sense to be able to leverage all of that knowledge and all those processes. However, if I'm not a NetApp or HDS shop, then why would I bring them in just to virtualize my existing storage? It's no easier from a training/process perspective to do that than to go with something that's a pure virtualization play like SVC, Invista, or Yadda Yadda.

The difficulties involved in virtualizing your existing storage/application are something you should seriously consider. Picking a virtualization engine that will allow you to "encapsulate" your existing LUNs, for example, might make the process of rolling out the virtualization engine a lot less painful for your users than allocating all net new storage that's been "virtualized" and then copying your data to the new "virtualized" LUNs.
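Here's a rough sketch of what I mean by that difference. Everything in it, the class names, the extent map, the helper functions, is invented by me for illustration and doesn't come from any vendor's product.

    class VirtualLUN:
        """Toy virtual LUN: an extent map pointing at storage on the backend arrays."""
        def __init__(self, name):
            self.name = name
            self.extents = []  # list of (backend_array, backend_lun, offset, length)

    def encapsulate(existing_lun, array):
        """Wrap an existing backend LUN one-for-one: no data copy, the host just re-points at the virtual LUN."""
        vlun = VirtualLUN("v-" + existing_lun["name"])
        vlun.extents.append((array, existing_lun["name"], 0, existing_lun["size"]))
        return vlun

    def migrate(existing_lun, source_array, target_array):
        """Allocate net-new virtualized storage and copy everything: more disruptive, but frees the old array."""
        vlun = VirtualLUN("v-" + existing_lun["name"])
        vlun.extents.append((target_array, "new-" + existing_lun["name"], 0, existing_lun["size"]))
        # ...a block-by-block copy from source_array to target_array would run here...
        return vlun

    old = {"name": "lun7", "size": 500 * 2**30}  # an existing 500GB LUN on the old array
    passthrough = encapsulate(old, "array-A")    # keep the data where it is; no copy window for the users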

So what does all this lead to? I suspect that what we will see from the storage vendors are more "dumb" array products and increased sales of arrays like EMC's CLARiiON AX line of storage. Why pay for all of that expensive smarts in something like a Symmetrix when all you really need is something that can serve up LUNs that perform well? So the sea change I predict is not the complete demise of the storage "big iron". No, it's more like they will go the way of the mainframe: there will still be a business there, it just won't be as big a business as it once was. Sure, the vendors will fight against it, just like IBM did with the mainframe, but in the end I think the results will be the same, and storage "big iron" will get marginalized.


--joerg

Monday, May 12, 2008

Block Storage Virtualization

For my first posting I want to talk about block storage virtualization. I really think that 2008 will be the year that people start to roll this out in production in a serious way. Why? It's the money, stupid!

Yes, that's right: with the economy getting tight, I suspect that IT budgets, even those for storage, are going to get slashed. So, how are storage managers going to do more with less? You don't think that with the budget cuts there will also be a reduction in the growth of storage/data, do you? Of course not! The business will simply expect the storage team to do more with less, that's all. Simple really, don't you think?

What this will mean is that storage managers are going to be looking for a way to drive the per-GB cost of storage down even more. For many I think that the answer will be block storage virtualization.

Why? Well, I think that there are a couple of answers to that. First off, one direct way to reduce CAPEX will be to drive down the cost of the arrays themselves. How? Easy: more competition. If I virtualize the storage, then the array becomes even more of a commodity than it is today, thus driving down the price. It's basic economics, really. The more vendors I allow to bid on my next 100TB storage purchase, the lower the price per GB should be, right?

Also, if the real "smarts" is in the virtualization controller, then I don't need it in the disk array, so I can save money on licensing the software in the array. I no longer need to buy replication software from each storage vendor; I have a single replication mechanism, which is probably in the virtualization controller itself. More on this in a later post; I think it's going to have a huge impact on the storage vendors going forward.
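To illustrate what I mean by the smarts living above the arrays, here's a toy sketch, again my own invention rather than any real product's interface, of a virtualization controller that owns the mapping and the replication while the arrays underneath just serve blocks.

    class BackendArray:
        """'Dumb' capacity: just serves blocks. Which vendor built it doesn't matter to the layer above."""
        def __init__(self, vendor):
            self.vendor = vendor
            self.blocks = {}

        def write(self, lba, data):
            self.blocks[lba] = data

    class VirtualizationController:
        """The 'smarts': the mapping and the replication live here, not in any one vendor's array."""
        def __init__(self, primary, replica):
            self.primary = primary
            self.replica = replica

        def write(self, lba, data):
            self.primary.write(lba, data)  # the mapping could just as easily stripe across many cheap arrays
            self.replica.write(lba, data)  # one replication mechanism, whatever hardware sits below it

    ctrl = VirtualizationController(BackendArray("vendor A"), BackendArray("vendor B"))
    ctrl.write(42, b"some block")  # the host never knows or cares whose spindles the block landed on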

I also think that I can achieve some OPEX savings by having more efficient operations and fewer outages. Think about it: if all of my storage admins work with a single tool for provisioning, replication, etc., then I have more people with the same skill set, all working in the same interface. That's got to be more efficient and less error prone than having a couple of folks who know the HDS stuff well, and a couple more that know the EMC stuff well, etc.

Of course, you had this option before by just buying all of your storage from a single vendor. The trouble with that approach was vendor "lock-in": the vendor knew they had you by the short hairs. Where this really showed up was not in the per-GB price of your storage, or your storage software. I mean, anyone with two brain cells to rub together knows that if you are going to get everything from a single vendor, you had better lock in your discount up front, and it had better be big. But trust me, the vendors made up for those big discounts via the things you didn't have them locked in on. Professional services, for example. At any rate, virtualization gets me out from under all of that, and makes provisioning something that anyone on the team can do at any time, following the exact same processes and procedures. You have to believe that will have a positive effect on your OPEX costs.

So, if 2008 is the year of block storage virtualization, what about file virtualization? We all still do NAS right? More on that next time.

--joerg

Welcome!

Welcome to my storage blog!

I've been thinking about doing something like this for some time, but never got around to it with all of the pressing things going on at work and home, etc. But I just really need to get some things off my chest when it comes to this topic, so here I am!

What I plan to write about here is simple, it's what I know best, computers and storage. I work in the storage business, but I grew up in the Systems Administration side of things. I think that this gives me a bit of a bias, although I prefer to call it a perspective.

I've actually worked on both sides of the equation, so to speak. I've worked for manufacturers such as CDC (yes, you need to be old like me in order to recognise that they actually built computers back in the day) and EMC. I've also worked on implementing systems and storage at companies in the healthcare industry as well as at media companies.

This has been a long time coming, and I have a lot of pent-up topics to cover, so I suspect that the entries will come really quickly at first.

Climb in, strap in, and hang on, 'cause here we go!

--joerg