Wednesday, April 23, 2014

OpenStack Icehouse release, a first look

On April 17, the OpenStack Foundation announced the availability of the ninth release of OpenStack, codenamed Icehouse. The release boasts 350 new features, 2,902 bug fixes, and contributions from more than 1,200 contributors.

Icehouse focuses on maturity and stability, as can be seen in its attention to continuous integration (CI): 53 third-party hardware and software systems were tested against OpenStack Icehouse during the release cycle.

The hallmark of the Icehouse release is its support for rolling upgrades in OpenStack Compute (Nova). With rolling upgrades, VMs no longer need to be shut down in order to install upgrades. Icehouse "enables deployers to upgrade controller infrastructure first, and subsequently upgrade individual compute nodes without requiring downtime of the entire cloud to complete." As a result, upgrades can be completed with decreased system downtime, making OpenStack significantly more appealing to enterprise customers. There are also added functions for KVM, Hyper-V, VMware, and XenServer that are too numerous to go into here; see the OpenStack Icehouse release notes for more details.

Icehouse also features a "discoverability" enhancement to OpenStack Swift that allows admins to obtain data about which features are supported in a specific cluster by means of an API call. Swift now also supports system-level metadata on accounts and containers, which provides a means to store internal custom metadata with associated Swift resources safely and securely, without having to plumb that metadata through the core Swift servers. The new gatekeeper middleware prevents this system metadata from leaking into the request or being set by a client.
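
For example, a cluster's capabilities can be fetched with a single GET against the Swift proxy's /info endpoint. Here's a minimal sketch using the Python requests library (the proxy address is a placeholder, not a real endpoint):

    import requests

    # The Swift proxy exposes cluster capabilities at /info.
    # 'proxy.example.com' is a placeholder for your own proxy server.
    resp = requests.get("http://proxy.example.com:8080/info")
    resp.raise_for_status()
    capabilities = resp.json()

    # The top-level 'swift' section lists core constraints; other keys
    # describe which middleware (e.g. tempurl, slo) is enabled.
    print(capabilities["swift"]["max_object_size"])
    print(sorted(capabilities.keys()))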

On the networking front, OpenStack now contains new drivers and support for the IBM SDN-VE, Nuage, OneConvergence, and OpenDaylight software-defined networking platforms. It also supports new load-balancing-as-a-service drivers from Embrane, NetScaler, and Radware, as well as a new VPN driver that supports the Cisco CSR.

Meanwhile, OpenStack Keystone identity management lets users leverage federated authentication across "multiple identity providers," so customers can now use the same authentication credentials for public and private OpenStack clouds. The assignment backend (the source of authorization data) has now been completely separated from the identity backend (the source of authentication data). This means that you can, for example, back your deployment's identity data with LDAP and your authorization data with SQL.
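
One nice consequence of the split is that it is invisible to clients. Here's a sketch of a standard Keystone v3 password-based token request (endpoint, user, and password are all placeholders); the same call works no matter which backend combination the deployment uses:

    import requests

    # Placeholder endpoint and credentials for illustration only.
    KEYSTONE_URL = "https://keystone.example.com:5000/v3/auth/tokens"

    payload = {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": "demo",
                        "domain": {"name": "Default"},
                        "password": "secret",
                    }
                },
            }
        }
    }

    # Keystone checks the password against the identity backend
    # (e.g. LDAP); role assignments for scoped tokens come from the
    # separate assignment backend (e.g. SQL). The client never sees
    # the split.
    resp = requests.post(KEYSTONE_URL, json=payload)
    resp.raise_for_status()
    token = resp.headers["X-Subject-Token"]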

The OpenStack Dashboard (Horizon) adds support for managing a number of new features.

Horizon Nova support now includes:

  • Live migration support
  • Hyper-V console support
  • Disk config option support
  • Improved support for managing host aggregates and availability zones
  • Support for easily setting flavor extra specs

Horizon Cinder support now includes:

  • Role-based access support for Cinder views
  • v2 API support
  • Support for extending volumes

Horizon Neutron support now includes:

  • Router rules support: displays router rules on routers when returned by Neutron

Horizon Swift support now includes:

  • Support for creating public containers and providing links to those containers
  • Support for explicit creation of pseudo-directories

Horizon Heat support now includes:

  • Ability to update an existing stack
  • Template validation
  • Support for adding environment files

Horizon Ceilometer support now includes:

  • Administrators can now view daily usage reports per project across services.


In total, Icehouse is an impressive release that focuses on improving existing functionality rather than shipping a slew of beta-level features. OpenStack's press release claims "the voice of the user" is reflected in Icehouse, but the real defining feature of this release is tighter integration of OpenStack's compute, storage, networking, identity, and orchestration functionality.

Saturday, April 12, 2014

SimpliVity vs. Nutanix



At a high level, both of these products provide the same services for the user. Certainly the two "leap-frog" each other in terms of features, but at this point in time they are very close. Both are "hyper-converged VMware appliances," though Nutanix is also able to support other hypervisors such as Hyper-V and KVM. SimpliVity will allow large customers to use their own hardware; however, the customer must buy the SimpliVity software as well as the OmniCube Accelerator card for each server, since the card is what performs all of the writes in the SimpliVity architecture.

From an architectural perspective, both systems provide a "hyper-converged" solution made up of x86 servers with internal storage that are networked and clustered together. You grow the overall system by simply adding nodes to the cluster. As of this writing, Nutanix offers more node options, giving the user more flexibility in how the cluster is grown. Both systems provide multiple tiers of storage, including SSDs and HDDs, and will automatically move hot data between tiers. It should be noted that Nutanix offers an interesting feature that SimpliVity does not: "data locality." With data locality, when you vMotion a VM to a different node in the cluster, Nutanix will move that VM's data to the same node (assuming there is space). This movement is done in the background, over time, so as not to impact performance. A toy model of the idea follows.
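
Conceptually, it works something like this sketch (my own toy model, not Nutanix code): writes always land on the VM's current host, a vMotion moves only the VM, and a throttled background task gradually re-homes the VM's data:

    class Cluster:
        def __init__(self):
            self.vm_host = {}       # vm -> node currently running it
            self.vm_extents = {}    # vm -> {extent id: node holding it}

        def write(self, vm, extent):
            # New writes always land on the VM's current host.
            self.vm_extents.setdefault(vm, {})[extent] = self.vm_host[vm]

        def vmotion(self, vm, node):
            # The VM moves immediately; its data does not.
            self.vm_host[vm] = node

        def background_localize(self, vm, batch=2):
            # Throttled background pass: migrate a few remote extents
            # to the VM's current host so reads become node-local,
            # without a big foreground copy hurting performance.
            host = self.vm_host[vm]
            remote = [e for e, n in self.vm_extents[vm].items() if n != host]
            for extent in remote[:batch]:
                self.vm_extents[vm][extent] = host

    c = Cluster()
    c.vm_host["vm1"] = "node-a"
    for e in range(5):
        c.write("vm1", e)
    c.vmotion("vm1", "node-b")
    c.background_localize("vm1")    # 2 of 5 extents now live on node-b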

As of their latest versions, both systems provide deduplication natively built into the system. There is some debate about which method of deduplication is "better"; in the end, I believe both will give the user good deduplication results. Both systems also compress data at the lower tiers.
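
As a rough illustration of how block-level, content-addressed deduplication works in general (a toy model, not either vendor's implementation): each block is fingerprinted by hashing its contents, and a block whose fingerprint has been seen before is stored only once.

    import hashlib

    BLOCK_SIZE = 4096  # toy fixed-size blocks; real systems vary

    class DedupStore:
        """Toy content-addressed store: identical blocks are kept once."""

        def __init__(self):
            self.blocks = {}   # fingerprint -> block contents
            self.files = {}    # file name -> ordered list of fingerprints

        def write(self, name, data):
            fingerprints = []
            for i in range(0, len(data), BLOCK_SIZE):
                block = data[i:i + BLOCK_SIZE]
                fp = hashlib.sha256(block).hexdigest()
                self.blocks.setdefault(fp, block)   # payload stored only once
                fingerprints.append(fp)
            self.files[name] = fingerprints

        def read(self, name):
            return b"".join(self.blocks[fp] for fp in self.files[name])

    store = DedupStore()
    store.write("vm1.img", b"x" * 8192)
    store.write("vm2.img", b"x" * 8192)     # identical content, new name
    print(len(store.blocks))                # 1: only one unique block kept
    assert store.read("vm2.img") == b"x" * 8192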

In regard to backups, replication, DR, and so on, both systems provide very similar features. Both allow replication of deduplicated and compressed data, thus providing "WAN optimization"; both provide snapshots; and both replicate data within the cluster for durability. SimpliVity is able to provide one feature that Nutanix currently cannot: replication to the "cloud." Specifically, SimpliVity provides its software as a VM image running in AWS, which can be federated with an OmniCube running in the user's data center.

In regard to management, both systems provide a GUI that allows the user to manage the entire footprint from a single pane of glass, though how this is implemented differs. Nutanix provides a fairly traditional HTML5-based management GUI. SimpliVity takes a different approach, using a vCenter plug-in to manage the SimpliVity OmniCube. This ties SimpliVity to VMware and will make it more difficult to support other hypervisors.

In conclusion, I believe the two products would provide effectively the same capabilities for most customers, with the single exception of the AWS support that SimpliVity provides. That support gives customers the ability to create a hybrid cloud infrastructure spanning the customer's private cloud and the AWS public cloud.

Wednesday, April 2, 2014

Is 2014 the Year of Object Based Storage?

Object-based storage has actually been around for a long time. Implementations started to appear as early as 1996, and different vendors have offered the technology ever since. However, it has never experienced the "explosion" in usage that some predicted.

At least until now.

IDC said the OBS market is still in its infancy but it offers a promising future for organizations trying to balance scale, complexity, and costs. The leaders include Quantum, Amplidata, Cleversafe, Data Direct Networks, EMC, and Scality, with other notables such as Caringo, Cloudian, Hitachi Data Systems, NetApp, Basho, Huawei, NEC, and Tarmin.

Last year, OBS solutions were expected to account for nearly 37% of file-and-OBS (FOBS) market revenues, with the overall FOBS market projected to be worth $23 billion and to reach $38 billion in 2017, according to IDC. At a compound annual growth rate (CAGR) of 24.5% from 2012 to 2017, scale-out FOBS, delivered as software, virtual storage appliances, hardware appliances, or self-built systems for cloud-based offerings, is taking advantage of storage's evolution toward being software-based.

IDC predicts that scale-up solutions, including unitary file servers and scale-up appliances and gateways, will fall on hard times throughout the forecast period, experiencing sluggish growth through 2016 before beginning to decline in 2017.

IDC said emerging OBS technologies include Compustorage (hyperconverged), the Seagate Open Storage platform, and Intel's efforts with OpenStack. The combined revenue of all OBS vendors is relatively small right now (though expected to grow rapidly), with a total addressable market (TAM) expected to be in the billions. "Vendors like EMC and NetApp have not ignored this market – if anything they have laid the groundwork for it," noted Ashish Nadkarni, Research Director, Storage Systems, IDC.

One of the challenges IT continues to confront is the growth of unstructured data. This growth creates challenges around data protection, as well as for users trying to find their data. Object-based storage addresses both issues. Technologies like erasure codes allow OBS to store data in a way that is both highly durable and geographically distributed. This eliminates the need to keep multiple full copies of the data in multiple locations, as you would with traditional NAS arrays. So, rather than provisioning storage equal to 300% of your actual data size (three full copies), you can get comparable durability with as little as 50% of overhead.
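
To see why the overhead drops so dramatically, consider the simplest possible erasure code: a single XOR parity block protecting k data blocks. Any one lost block can be rebuilt from the survivors at an overhead of only 1/k, versus 200% overhead for three-way replication. Real OBS systems use Reed-Solomon-style codes that survive multiple simultaneous failures, but the idea is the same. A toy sketch, not any vendor's implementation:

    def xor_blocks(blocks):
        # XOR equal-length blocks together byte by byte.
        out = bytearray(len(blocks[0]))
        for b in blocks:
            for i, byte in enumerate(b):
                out[i] ^= byte
        return bytes(out)

    def encode(data, k=4):
        # Split data into k blocks and append one XOR parity block.
        size = -(-len(data) // k)                       # ceiling division
        blocks = [data[i * size:(i + 1) * size].ljust(size, b"\0")
                  for i in range(k)]
        return blocks + [xor_blocks(blocks)]            # k data + 1 parity

    def recover(stripe, lost_index):
        # XOR of all surviving blocks reconstructs the missing one.
        survivors = [b for i, b in enumerate(stripe) if i != lost_index]
        return xor_blocks(survivors)

    stripe = encode(b"unstructured data, durably stored", k=4)
    rebuilt = recover(stripe, lost_index=2)
    assert rebuilt == stripe[2]     # lost block recovered from survivors

With k=4 the capacity overhead is just 25%, which is the basic trade behind the 300%-versus-150% comparison above.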

In addition, because many object storage systems are software solutions that run on low-cost server hardware with high-capacity disk drives, they can cost significantly less than proprietary NAS systems. Throw in better data protection, features that improve search performance, and efficient data tiering, and it's easy to see why OBS is catching on.


So, what's the downside? There are a couple. First is performance. OBS typically cannot match the performance of traditional NAS arrays; with object retrieval latency in the 30-50 ms range, applications that require high performance are going to have a problem with OBS. This is one of the reasons AWS recommends putting data on Elastic Block Store rather than S3 if you need good performance. The other challenge is that applications today are often not written to access data on OBS, so either the applications must be changed or the OBS storage must be accessed through a NAS gateway. Introducing a NAS gateway, however, eliminates the flat namespace, as well as the ability to attach meaningful metadata to your files/objects, which reduces the utility of OBS significantly. Still, NAS gateways as an interim solution may simply be a necessity if OBS is to take over the NAS space.
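
To make the application-change point concrete: an application written against a POSIX filesystem has to be reworked to speak an object API instead. Here's a sketch using the Python boto library against an S3-style endpoint (the bucket and key names are made up); it also shows the custom object metadata that a NAS gateway would hide:

    import boto

    # Connect using credentials from the environment or boto config.
    conn = boto.connect_s3()
    bucket = conn.get_bucket('example-reports')         # hypothetical bucket

    # Instead of open()/write(), the application PUTs whole objects,
    # and can attach searchable custom metadata to each one.
    key = bucket.new_key('2014/q1-results.csv')
    key.set_metadata('department', 'finance')
    key.set_contents_from_filename('q1-results.csv')

    # Reads are whole-object GETs, in the 30-50 ms latency class
    # discussed above, rather than low-latency block or file I/O.
    data = bucket.get_key('2014/q1-results.csv').get_contents_as_string()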