Thursday, August 25, 2016

Are we living in a cloud computing bubble?

Talking with our customers, I keep hearing the same thing: "IT management has decided on a Cloud First strategy."

Are we working in a cloud computing bubble?  That is, have business leaders (and perhaps some IT leaders) been lulled into assuming that moving or deploying to cloud is as easy as 1-2-3, without any heavy lifting required? So, have at it, spend the money, get cloud for cloud’s sake?

Listening to the cloud vendor messages, one can be forgiven for thinking that cloud deployments are a snap, and will quickly put a business on the path to digital nirvana. However, the history of enterprise software tells a different story. There have been many cases in recent decades of companies slapping expensive technology solutions on top of calcified processes and even more calcified business models, and expecting overnight success — but getting none.

Technology is essential to keep up, and the faster an organization can move to digital, the better. But like the tires on a race car, technology only makes the ride smoother, but is not the reason for success. A scan of the Forbes Global 2000 List of the World’s Largest Public Companies between 2006 and 2016 shows other forces at work that determine business success. The top companies in 2006 were Citigroup, GE, Bank of America, AIG, and HSBC — four US-based, the fifth in the UK.  The top companies this year are ICBC, China Construction Bank, Agricultural Bank of China, Berkshire Hathaway, and JPMorgan Chase — the top three based in China, the next two in the United States.  The point here is that no amount of advanced technology would have necessarily helped the 2006 leaders hold their leads, as they were knocked off their perches by players that emerged from other parts of the global economy with different models and markets.

Successful organizations understand that underlying corporate culture and business models, led by forward-thinking managers or leaders who nurture a spirit of innovation among all levels of employees, are what matter in the long run. They also understand that technology supports this in a big way, but cannot replace any deficiencies.

This is a point to keep in mind when considering the fact that enterprises may be investing hundreds of thousands or millions of dollars, euros, pounds or rupees in cloud computing solutions every year, yet are still uncertain about how and where this technology is best applied. IDC recently released estimates that up to $96.5 billion will be spent on cloud services worldwide this year alone, a number that will reach $195 billion by 2020. Gartner adds a few billion to the equation, forecasting that the worldwide public cloud services market will reach $204 billion this year.

Is this money being well spent? A recent survey of more than 400 enterprise IT executives, conducted by Wakefield Research and sponsored by LogicWorks, confirms a general uncertainty among IT and business leaders about how best to leverage the cloud to drive growth and efficiency across their organizations, as well as the need for more thoughtful planning for both cloud migration and ongoing cloud maintenance. Eight in 10 executives believe that their company’s leadership team underestimates the time and cost required to maintain resources in the cloud.

There are issues with staffing for the new cloud and digital realities as well. The need for technically proficient people does not go away when things are shifted to the cloud. Even as the demand for enterprise-level, cloud-based services expands, the LogicWorks survey finds that 43 percent of respondents believe their organization’s IT workforce is not completely prepared to address the challenges of managing their cloud resources over the next five years. It is a problem compounded by the high demand for, and relatively low supply of, workers skilled in cloud, security, DevOps engineering and other IT positions.

Organizational preparation needs to be the most essential piece of cloud and digital engagements, and according to the LogicWorks survey, not enough is being done. In a compelling read at TechTarget, Mark Tonsetic, IT practice leader at CEB, outlines four ways to tell the “cloud story,” to ensure that the money and time spent on technology are matched by an appropriate transformation of the business:

Cloud and digital transformation are one and the same. Cloud may offer compelling cost savings, but this is only one small piece of its value proposition, Tonsetic says. Information technology should be seen for what it is becoming: “an ingredient in corporate growth.” He urges cloud proponents to “stress speed and flexibility gains that can enable enterprise digital strategy,” and elevate this discussion to the board and investor level. Enterprise digitization and growth is today’s corporate holy grail.

Cloud and digital elevate the roles of IT leaders. IT leaders need to shift their focus from governance and policy to advice and education on the technology options available to their businesses, and serve as partners to their business users. As Tonsetic puts it: "The cloud story that IT leaders need to tell their business partners should be about how IT will build new partnerships with the business to explore and exploit cloud opportunities. Relay the case for cloud computing in the language of new business opportunities, new models for business engagement and collaboration, and new opportunities for technology careers."

Cloud and digital transform IT departments into competitive service brokers. Corporate IT no longer needs to operate as the owners and operators of email, CRM or even ERP systems, Tonsetic relates. These departments are now in a different kind of business — consulting and working with corporate leaders to define and execute transformational digital strategies.

Cloud and digital mean new types of career opportunities. As mentioned in the LogicWorks survey, more than four in 10 executives say they don’t have the available skills to move forward with cloud. At the same time, there are highly capable IT staff members still involved in legacy or on-premises systems development, maintenance or operations. This pool of talent shouldn’t go to waste. “Forward-thinking IT organizations are careful to send teams the message that cloud and other technology developments present an opportunity for experimentation, innovation, and growth across IT — a refreshing change of pace for teams that have labored mostly to the tune of efficiency,” Tonsetic relates.

Tuesday, August 9, 2016

Modernizing backups, or as I like to call it, data protection

Nearly every enterprise IT organization out there is experiencing a significant increase in customer expectations around application performance and business continuity. Response times once measured in seconds are now measured in milliseconds, and downtime once measured in hours or days is now expected to be minutes, if that.

To keep pace with that change in expectations, many organizations are implementing application modernization programs that allow them to take advantage of new technologies such as cloud-enabled applications. These changes are also driving changes in the infrastructure that those new, modern applications run on. Modern network technologies, IaaS, even more virtualization, and containers are all being driven deep into the modern datacenter by these needs.

What hasn't been keeping up, however, are backups, or as I prefer to call it, data protection. The old full and incremental backups to some medium such as tape, or even disk, just aren't getting it done anymore. The entire concept of backups/data protection is being replaced with Business Continuance. The focus has shifted from a siloed attempt to protect the business's data to ensuring that the business can continue to run.

With organizations considering availability, it’s no longer about simply needing to restore a single file or folder. It’s about complex processes like recovering a multi-tiered application that spans multiple servers and bringing each server and the data it requires into a consistent state with the others. It’s far more involved than the restore jobs of yesteryear.
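To make that concrete, here is a minimal Python sketch of what bringing a multi-tiered application back into a consistent state implies in practice: restore the tiers in dependency order and verify each one before starting the next. The restore_vm() and wait_until_healthy() helpers are hypothetical stand-ins for whatever your backup or replication tooling actually provides.

# Minimal sketch: restore a multi-tier application in dependency order so each
# tier comes back up against a consistent state. restore_vm() and
# wait_until_healthy() are hypothetical placeholders for your tooling.

RESTORE_ORDER = [
    ("db-01", []),            # database tier first, no dependencies
    ("app-01", ["db-01"]),    # application tier depends on the database
    ("web-01", ["app-01"]),   # web tier depends on the application tier
]

def restore_vm(name):
    print(f"restoring {name} from the most recent consistent restore point")

def wait_until_healthy(name):
    print(f"waiting for {name} to pass its health checks")

def recover_application():
    healthy = set()
    for vm, deps in RESTORE_ORDER:
        missing = [d for d in deps if d not in healthy]
        if missing:
            raise RuntimeError(f"{vm} restored before its dependencies: {missing}")
        restore_vm(vm)
        wait_until_healthy(vm)
        healthy.add(vm)

if __name__ == "__main__":
    recover_application()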

In some ways, basic backup no longer has a place in the modern data center. As new technologies have come into use—the foremost being virtualization—you can now easily move workloads from one location to another. You can even perform maintenance during the day. The options around availability are much greater and more flexible than what backups alone provide.

Even so, the idea of a backup—that is, having copies of your data—is still viable. Now data centers have moved to advanced concepts like replicating data at the disk level or replicating entire virtual machines, whether from one store to another or even from one site to another. This provides both increased protection and faster recoverability.

Organizations today aren’t just looking at availability on a per-application or per-server basis. The goal is to make everything available in the event of an outage. So it’s not “we have our order processing back online, but e-mail is still down.” Now it’s essential to have the entire business back up and running, not just a few services.


Should you have an availability event, can you benefit from backup alone? Backups certainly still have a place. For example, if you’re replicating changes to a VM and the source VM is somehow corrupted, that corruption will simply get replicated. So having a backup of the critical data on that server can play a role in ensuring recoverability. However, backup as the only method is no longer an option for businesses focused on being available. 

As newer technologies have emerged, the interval between backups has also shortened. In previous years, your backup window simply couldn’t be anything less than nightly. These days, backups occur much more frequently—even during production hours. And with technology like instant VM recovery in place, the concept of restoring a backup job is somewhat obsolete.
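The practical effect of that shorter interval is a smaller window of data at risk between copies. A quick back-of-the-envelope sketch in Python, where the 200 GB/day change rate is purely an illustrative assumption:

# Worst-case data loss (RPO) for different backup/replication intervals.
# The change rate is an assumption for illustration only.

CHANGE_RATE_GB_PER_DAY = 200

def worst_case_loss_gb(interval_minutes):
    # Whatever was written since the last copy is at risk if the source fails now.
    return CHANGE_RATE_GB_PER_DAY * interval_minutes / (24 * 60)

for label, minutes in [("nightly", 24 * 60), ("hourly", 60), ("every 15 min", 15)]:
    print(f"{label:>13}: up to {worst_case_loss_gb(minutes):6.1f} GB at risk")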

If your organization is like most, you’ve already begun, or have completed, the investment in a modern data center. Despite your desire to simplify what you manage, it’s a complex mix of virtual machines, servers, storage and networking. Having made that investment to meet the demand to maintain operations, you'll find that traditional backups alone just won’t scale to meet the availability needs of the organization in such an advanced data center environment.

Your organization must have standards for what is and is not acceptable downtime. Comparing the business's required levels of availability against what’s currently attainable can help you create a service baseline from which to work towards availability. Begin with the business requirements around application and environment availability, instead of what your backups can do today. This will help IT look for ways to cost-effectively take advantage of current technologies or invest in new ones to make meeting availability requirements a reality.
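One simple way to start that baseline is to translate each availability target into an annual downtime budget and compare it with the recovery time your current tooling can actually deliver. A minimal sketch, where the targets and the four-hour recovery figure are placeholders for your own numbers:

# Turn availability targets into allowed downtime per year and compare them
# with an assumed current recovery time (240 minutes, i.e. a four-hour
# restore from nightly backups). All figures here are illustrative.

MINUTES_PER_YEAR = 365 * 24 * 60

def allowed_downtime_minutes(availability_pct):
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

targets = {"order processing": 99.99, "e-mail": 99.9, "reporting": 99.0}
current_recovery_minutes = 240

for app, target in targets.items():
    budget = allowed_downtime_minutes(target)
    # "GAP" means a single incident would blow the whole annual budget.
    status = "OK" if current_recovery_minutes <= budget else "GAP"
    print(f"{app:>16}: {target}% -> {budget:7.1f} min/yr allowed  [{status}]")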

Tuesday, December 15, 2015

Being a good listener is one of the keys to success, in any business

I'm in the sales business, and in my business learning to listen well is a major key to success. Actually hearing what people are saying, and not just waiting for your turn to speak, is critical if you want to learn what it is your customer is looking for in a solution. Knowing what your customer needs allows you to provide a solution that the customer will embrace, and increases your chances of a sale.

Unfortunately, all too often I've been in meetings where there is at least one person who is more interested in formulating what they are going to say next than they are in hearing what the person who's currently speaking has to say.  In my business (IT) I find this to be endemic. I suspect that has a lot to do with the passion that many IT professionals bring to their jobs.  That passion is a good thing, but it can also be a two-edged sword.  Not hearing and absorbing what the current speaker has to say means that they often come across as "preachy" or "professorial" and miss important points of view and information.

So, I would encourage everyone to become a better listener. But being a good listener does not come easy for some of us. It takes time, practice and dedication. What comes to your mind when you think about listening to a friend or co-worker? Do you find yourself thinking about what you want to say in response to what they have said or are you fully engaged with what they are talking about? When it comes to connecting with others, it’s all about consciously listening to them and the information that they are sharing with you.

1. Eye contact - When it comes to being a good listener, it’s important for you to have eye contact with the other person. It shows that you are paying attention and engaged with the conversation. When you don’t have eye contact with the other person, it shows that you don’t care and are not interested in what they have to say. Practice having eye contact with the next person you have a conversation with.

2. Find the “Why” and “What” - For you to be a good listener, you need to find out the “Why” and “What.” Why are they talking to you and what is the message they are trying to share with you? Being a good listener takes practice, and when you are able to practice finding out the “Why” and “What” of the other person, you will be much more engaged in the conversation.

3. Focus on the other person - It’s easy for us to think about what we want to say after the other person has stopped talking. This will not make you a better listener. If you are constantly thinking about your response, you will always miss out on carefully listening to the other person. Focus on what they have to say. Find out the “Why” and “What” and maintain eye contact. Once the other person stops talking, then think about your response. But while you are listening, you must be consciously listening with your ears. A lot of the time, when we listen to people, we are mentally composing what we want to say rather than opening our ears and purely listening to their message.

4. Limit distractions - We live in a society that is filled with so many distractions. We are constantly listening to so much noise that it’s a challenge to truly listen to another person. In order for you to be a good listener, you need to limit distractions during your conversation, whether it be the telephone, text messages coming in, emails arriving, or other interruptions. It takes a mental decision to limit distractions when you are listening to someone else. How can you possibly be a good listener if your phone continues to ring? It would be nearly impossible to be a good listener with these distractions. Limit as many interruptions as you can when you are listening to someone else. This not only shows them that you care, but also that you are practicing good social skills.

5. Engage - Engage yourself in the conversation. Being engaged is showing your attention towards the other person. Let the other person know that they have your attention and focus. When you are not engaged in the conversation, the other person will notice and will most likely not want to talk to you again. Show the other person that you care about them and are interested in what they have to say. One way you can show this is by responding with a short comment, such as  “Yes” or “I understand.” This expresses to the other person that you are truly listening. Make sure that you allow the other person to primarily do the talking while you are still engaged.

I believe that if you become a better listener, you'll be more successful both in business and in your personal relationships. It's not necessarily something that comes naturally to all of us, so keep in mind that practice makes perfect.

Folks,

Please note that I've changed the name of my blog.  The name change reflects the changes that I, the company I work for, and the industry as a whole are making.  Converged/hyperconverged infrastructure, the cloud, OpenStack, DevOps, etc. are all changing the IT landscape, and if you aren't changing with it, then you're going to get left behind.

So, I'm changing the name and focus of this blog since, hey, none of us wants to be called a dinosaur, do we?

--Joerg

Tuesday, May 12, 2015

Container Wars!

The container wars have started!

Containers have a huge amount of hype and momentum, and there are many spoils for whoever becomes dominant in the container ecosystem. The two major startups innovating in this space–CoreOS and Docker–have declared war on each other as part of gaining that control.

The Current Landscape

Recently, CoreOS announced Tectonic. Tectonic is a full solution for running containers, including CoreOS as the host OS, Docker or rkt as the container format, and Kubernetes for managing the cluster. It also uses a number of other CoreOS tools, such as etcd and fleet.
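For readers who haven't touched Kubernetes yet, the sketch below shows the kind of declarative cluster management it adds to the stack: you describe a desired state (three replicas of a container) and the orchestrator converges on it. This is only an illustration, assuming the official Kubernetes Python client and a reachable cluster; the deployment name and image are made up, not anything CoreOS ships with Tectonic.

# Declare a desired state and let Kubernetes converge on it. Assumes the
# official `kubernetes` Python client and a valid kubeconfig; the names and
# the image are illustrative only.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.25")]
            ),
        ),
    ),
)

# Kubernetes will now keep three replicas of this container running.
apps.create_namespaced_deployment(namespace="default", body=deployment)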

Despite Docker being an option on Tectonic, CoreOS’s messaging is not focused on Docker, and neither the Tectonic site nor the announcement included a single mention of Docker. It’s clear that CoreOS is moving in a different direction. CoreOS’ CEO Alex Polvi says that “Docker Platform will be a choice for companies that want vSphere for containers”, but that Rocket is the choice for “enterprises that already have an existing environment and want to add containers to it”. The latter is a far larger prize.

Companies will choose Docker Platform as an alternative to things like Cloud Foundry, while companies like Cloud Foundry will use things like Rocket to build Cloud Foundry.

Docker, meanwhile, doesn’t mention CoreOS or Kubernetes anywhere in its docs, and only mentions them in passing on its site. Docker CEO Solomon Hykes reacted fairly negatively to the announcement of rkt back in December. He has also started to use the phrase “docker-native” to differentiate tools that Docker Inc. builds from other tools in the ecosystem, implying that other tools are second class.

Right now, both companies provide different pieces with their respective stacks and platforms.

To run containers successfully on bare metal server infrastructure, you need:

  1. A Linux host OS (Windows support for containers is coming with the next release of Windows).
  2. The container runtime system to start, stop, and monitor the containers running on a host.
  3. Some sort of orchestration to manage all those containers.
Tectonic provides all of these, with CoreOS, Docker or rkt, and Kubernetes. However, it appears to omit a pre-defined way to build a container image, the equivalent of the Dockerfile. This is by design, and there are many ways to build images (Chef, Puppet, Bash, etc) that can be leveraged.
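To ground item 2 of the list above (the container runtime) and the image-build step that Tectonic leaves to you, here is a minimal sketch using the Docker SDK for Python: build an image from a Dockerfile, start a container, check on it, and stop it. It assumes pip install docker and a local Docker daemon; the path, tag, and port are illustrative only.

# Build an image and drive the container runtime through the Docker SDK for
# Python. The build context path, tag, and port mapping are made-up examples.
import docker

client = docker.from_env()

# Build an image from ./myapp/Dockerfile (however you choose to author it).
image, build_logs = client.images.build(path="./myapp", tag="myapp:demo")

# Start, inspect, and stop a container: the runtime half of the story.
container = client.containers.run("myapp:demo", detach=True,
                                  ports={"8080/tcp": 8080})
print(container.name, container.status)
container.stop()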

On the Docker side, things are less clear. Docker isn’t opinionated on the host OS, but also doesn’t provide much help there. Docker Machine abstracts over it for some infrastructure services, where you use whatever host OS exists already, but it doesn’t provide much help when you want to run the whole thing yourself. Docker Swarm and Docker Machine provide some parts of orchestration. There is also Docker Compose, which Docker has been recommending as part of this puzzle while simultaneously saying it’s not intended for production use. Of course there is the Dockerfile to build container images, though some indicate that this is immature for large teams.

The impression you get from Docker is that they want to own the entire stack. If you visit the Docker site, you could be forgiven for thinking that the only tools you use with Docker are Docker Inc’s tooling. However, Docker doesn’t really have good solutions for the host OS and orchestration components at present.

Similarly, Docker is pushing its own tools instead of alternatives, even when those tools aren’t really valid alternatives yet. Docker Compose, for example, is being pushed as an orchestration framework even though that capability is still on the roadmap.

The container landscape is fairly new, but Docker has a pretty clear lead in terms of mindshare. Both companies are trying to control the conversation: Docker talking about Docker-native and generally focusing its marketing around the term Docker, while others in the space – CoreOS and Google, for example – are focusing the conversation on “containers” rather than “Docker”.

This is made a little bit difficult by the head start that Docker has – they basically created the excitement around containers, and most people in the ecosystem talk about Docker rather than the container space. Docker has also done an incredibly good job of making Docker easy to use and to try out.

By contrast, CoreOS and Kubernetes are not tools for beginners. You don’t really need them until you get code in production and suffer from the problems they solve, while Docker is something you can play around with locally. Docker’s ease of use – everything from the command line to the well-thought-out docs to boot2docker – is also well ahead of rkt and CoreOS’s offerings, which are much harder to get started with.

How does this play out?

If you’re a consumer in this space, looking to deploy production containers soon, this isn’t a particularly helpful war. The ecosystem is very young, people shipping containers in production are few and far between, and a little bit of maturing of the components would have been useful before the war emerged. We are going to end up with a multitude of different ways to do things, and it’s clear we’re far from having one true way.

From a business perspective, it’s difficult for any of the players to capitulate on their directions. Docker is certainly focusing on building the Docker ecosystem, to the exclusion of everyone else. Unfortunately, they don’t have all the pieces yet.

Other companies who want to play in the ecosystem are unlikely to be pleased by Docker’s positioning. CoreOS certainly isn’t alone in their desire for a more open ecosystem.

Ironically, Docker came about due to a closed ecosystem with a single dominant player. Heroku dominates the PaaS ecosystem to the extent that there really isn’t a PaaS ecosystem, just Heroku. Dotcloud failed to make inroads, and so opened its platform up to disrupt Heroku’s position and move things in a different direction such that Heroku’s dominance didn’t matter. With Docker, they certainly appear to have succeeded at that. Now that Docker is the dominant player in the disruptive ecosystem, CoreOS and other players will want to unseat them and fight on a level playing field before things settle too much.

The risk for Docker is that on this trajectory, if they lose the war they risk losing everything. If nobody else can play in this space, all of the companies that are left outside will build their own ecosystem that leaves Docker on the outside. Given that Docker lacks considerable parts of the ecosystem (mature orchestration being an obvious one), their attempts at owning the ecosystem are unlikely to succeed in the near term.

Meanwhile, CoreOS will need to replicate the approachability of the Docker toolset to compete effectively, and will need to do so before Docker solves the missing parts of its puzzle.

All of the other companies are sitting neutral right now. Google, MS, VMware are all avoiding committing one way or the other. Their motivations are fairly clear, and it doesn’t benefit any of them to pick one or the other. The exception here is that the open ACI standard is likely to interest VMware at least, but I wouldn’t be surprised to see Google doing something in this space, too.

There is massive risk for all of the players in the ecosystem, depending on how this plays out. Existing players like Amazon, Google and Microsoft are providing differentiated services and tools around containers. The risk of not doing so, of owning no piece of the puzzle, is being sidelined and eventual commoditization. The one API that abstracts over the other tools is the one which wins.

Long story short – this is the start of a war that will probably be quite bloody, and that none of us is going to enjoy.

Monday, April 27, 2015

The Data Center of the Future, what does it look like?

Folks,

I've been spending a lot of time lately talking with customers about storage, flash, HDDs, hyper-converged infrastructure, cloud, etc.  What's become clear to me recently (yes, I'm a little slow) is that all of these technology changes are driving us toward sea changes in the enterprise data center. In this blog posting, I want to talk a little about how things are changing in regard to storage.  I'm going to talk a bit about flash vs. HDD technology and where I see each of them going in the next few years, and I'll finish up with a discussion of how that will affect the enterprise data center going forward, as well as the data center infrastructure industry in general.

I believe that the competition between flash and hard disk-based storage systems will continue to drive developments in both. Flash has the upper hand in performance and benefits from Moore's Law improvements in cost per bit, but has increasing limitations in lifecycle and reliability. Finding well-engineered solutions to these issues will define its progress. Hard disk storage, on the other hand, has cost and capacity on its side. Maintaining those advantages is the primary driver in its roadmap but I see limits to where that will take them.

Hard Disk Drives (HDDs)
So, let's start with a discussion of HDDs.  Hard disk developments continue to wring a mixture of increased capacity and either stable or increased performance at lower cost. For example, Seagate introduced a 6TB disk in early 2014 which finessed existing techniques, but subsequently announced an 8TB disk at the end of 2014 based on Shingled Magnetic Recording (SMR). This works by allowing tracks on the disk to overlap each other, eliminating the fallow area previously used to separate them. The greater density this allows is offset by the need to rewrite multiple tracks at once. This slows down some write operations, but for a 25 percent increase in capacity -- and with little need for expensive revamps in manufacturing techniques.

If SMR is commercially successful, then it will speed the adoption of another technique, Two-Dimensional Magnetic Recording (TDMR) signal processing. This becomes necessary when tracks are so thin and/or close together that the read head picks up noise and signals from adjacent tracks when trying to retrieve the wanted data. A number of techniques can solve this, including multiple heads that read portions of multiple tracks simultaneously to let the drive mathematically subtract inter-track interference signals.

A third major improvement in hard disk density is Heat-Assisted Magnetic Recording (HAMR). This uses drives with lasers strapped to their heads, heating up the track just before the data is recorded. This produces smaller, better-defined magnetized areas with less mutual interference. Seagate had promised HAMR drives this year, but now says that 2017 is more likely.

Meanwhile, Hitachi has improved capacity in its top-end drives by filling them with helium. The gas has a much lower viscosity than air, so platters can be packed closer together. This allows for greater density at the drive level.

All these techniques are being adopted as previous innovations -- perpendicular rather than longitudinal recording, for example, where bits are stacked up like biscuits in a packet instead of on a plate -- are running out of steam. By combining all of the above ideas, the hard disk industry expects to be able to produce around three or four years of continuous capacity growth while maintaining its price differential with flash. However, it should be noted that all of the innovation in HDDs is around capacity. I believe that HDDs will continue to dominate the large-capacity, archive type of workloads for the next 2 or 3 years. After that ... well, read the next section on flash.

Some argue that the cloud will be taking over this space. However, even if this is true, cloud providers will continue to need very cheap, high-capacity HDDs until flash is able to take over the high-capacity space as well on a $/GB basis.

Flash
Flash memory is changing rapidly, with many innovations moving from small-scale deployment into the mainstream. Companies such as Intel and Samsung are predicting major advances in 3D NAND, where the basic one-transistor-per-cell architecture of flash memory is stacked into three dimensional arrays within a chip.

Intel, in conjunction with its partner Micron, is predicting 48GB per die this year by combining 32-deep 3D NAND with multi-level cells (MLC) that double the storage per transistor. The company says this will create 1TB SSDs that will fit in mobile form factors and be much more competitive with consumer hard disk drives -- still around five times cheaper at that size -- and 10TB enterprise-class SSDs, by 2018. Moore's Law will continue to drive down the cost per TB of flash at the same time as these capacity increases occur, thus making flash a viable replacement for high-capacity HDDs in the next 3 to 5 years. Note that this assumes that SSDs will leverage technology such as deduplication in order to help reduce the footprint of data and drive down cost.
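The deduplication point is worth making concrete: what matters is the cost per usable TB after data reduction, not the raw price of the media. A minimal sketch, where the raw $/TB figures and reduction ratios are illustrative assumptions rather than quotes:

# Effective cost per usable TB of flash after data reduction. The raw prices
# and the reduction ratios are assumptions for illustration, not quotes.

RAW_FLASH_PER_TB = 400.0   # assumed raw flash price, $/TB
RAW_HDD_PER_TB = 30.0      # assumed raw capacity-HDD price, $/TB, for reference

for ratio in (1, 2, 4, 6):
    usable_cost = RAW_FLASH_PER_TB / ratio
    print(f"{ratio}:1 data reduction -> flash ${usable_cost:6.1f} per usable TB "
          f"(capacity HDD around ${RAW_HDD_PER_TB:.0f} raw)")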

The following is a chart from a Wikibon article on the future of flash:


As you can see from the graph above, by 2017 the four-year cost per TB of flash will be well below that of HDDs, and this trend will continue until 2020, when the four-year cost per TB of flash hits $9 per TB vs $74 per TB for HDDs. You can read the entire article here.
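To put those projected 2020 figures in perspective, here is the arithmetic for a hypothetical 1 PB estate; the estate size is my assumption, while the $/TB figures are the ones quoted above:

# Four-year cost comparison at the quoted 2020 projections ($9/TB flash vs
# $74/TB HDD). The 1 PB estate size is an assumption purely for scale.

FLASH_4YR_PER_TB = 9.0
HDD_4YR_PER_TB = 74.0
ESTATE_TB = 1000  # 1 PB

flash_total = FLASH_4YR_PER_TB * ESTATE_TB
hdd_total = HDD_4YR_PER_TB * ESTATE_TB
print(f"Flash: ${flash_total:,.0f} over four years")
print(f"HDD:   ${hdd_total:,.0f} over four years")
print(f"Flash comes out {hdd_total / flash_total:.1f}x cheaper on these projections")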

Conclusions
So, what does all this mean?  Among other things, it means that you can expect a shift to what the Wikibon article calls the "Electronic Data Center".  The Electronic Data Center is simply a data center where the mechanical disk drive has been replaced by something like flash, thus eliminating the last of the mechanical devices (they assume tape and tape robots are already gone in their scenario).  This will reduce the electricity and cooling needs, as well as the size/footprint of the data center of the future.

Let's assume for a moment that Wikibon is correct.  What does this mean to the data center infrastructure industry?

  1. Companies that build traditional storage arrays will need to shift their technology to "all flash", and they need to do it quickly.  You can see this already happening in companies such as EMC, with the acquisition of XtremIO in order to obtain all-flash technology.  Companies like NetApp, on the other hand, are developing their all-flash solutions in house. In both of those cases, however, all-flash solutions are facing internal battles against engineering organizations that have a vested interest in the status quo.  That means they could be slow to market with potentially inferior products. However, their sheer size in the market may protect them from complete failure.
  2. What about the raft of new startups producing all-flash arrays?  Might the above provide an opening for one or more of those startups to "go big" in the market?  What about the rest? My take is that, indeed, one or more might have the opportunity to "go big" due to the gap that might be created by the "big boys" moving too slowly or trying to shoe-horn old, existing technologies into the data center. Most of them, however, will either die off or be acquired by a larger competitor.
However, I think that there is an even larger risk to the "storage only" companies, both new and old. I believe that a couple of other market forces will put significant pressure on these "storage only" companies, including the new all-flash startups.

Specifically, the trends toward cloud computing and hyper-converged infrastructure, along with the growing emphasis on automation driven by other IT trends such as DevOps, will make standalone storage arrays less and less desirable to IT organizations.  This will force those companies to move beyond their roots into areas such as hyper-converged infrastructure, where they currently have little or no engineering expertise or management experience.

The companies who are able to embrace these kinds of moves will likely have a bright future in the data center of the future.  However, issues around "not invented here", and lack of engineering talent in the new areas of technology are going to make it a challenge for those very large storage companies going forward. Again, how they address these issues is going to be a determining factor in their future success.

To wrap it up, I firmly believe that not everything is "moving to the public cloud" in the enterprise space. What I do believe is that:

  1. Some workloads currently running in the enterprise data center will move to the public cloud, and be managed by IT.
  2. Some workloads will remain in "private" clouds owned and operated by IT. However, those private clouds must offer internal customers all of the same ease of use that the public cloud offers. Most likely, they will leverage web-scale architectures (hyper-converged) in order to make management and management automation easier.
  3. Hybrid cloud management software will be used to allow both management and automation to span the enterprise's private cloud and its public cloud(s).
  4. DevOps and similar initiatives will drive significant automation into the hybrid clouds I describe above, as well as significant change to IT organizations.
  5. These changes will all be highly disruptive, and those IT organizations that embrace change will have an easier time over the next few years than those that don't. Very large IT organizations will have the hardest time making the changes. Yes, it is hard to turn the aircraft carrier. However, internal customers are demanding it of IT, and will go outside the IT organization to get what they want/need if necessary.
In the end, the Data Center of the Future will look very different from the current enterprise data center. It will be a hybrid cloud that spans on-premises and public clouds. It will be an all-electronic data center that uses a significantly smaller footprint and less electricity than current data centers. And finally, this infrastructure will leverage significant automation and be managed by an IT organization that looks very different from today's.


Wednesday, April 22, 2015

Structured or Unstructured PaaS??

Words, labels, tags, etc. in our industry mean something – at least for a while – and then marketing organizations tend to get involved and use words and labels and tags to best align to their specific agenda. For example, things like “webscale” or “cloud native apps” were strongly associated with the largest web companies (Google, Amazon, Twitter, Facebook, etc.). But over time, those terms got usurped by other groups in an effort to link their technologies to hot trends in the broader markets.

Another one that seems to be shifting is PaaS, or Platform as a Service. It’s sort of a funny acronym to say out loud, and people are starting to wonder about its future. But we’re not an industry that likes to stand still, so let’s move things around a little bit. Maybe PaaS is the wrong term, and it really should be “Platform”, since everything in IT should eventually be consumed as a service. I'm already hearing about XaaS (X as a Service), which pretty much means anything as a service, or perhaps everything as a service.

But not everyone believes that a Platform (or PaaS) should be an entirely structured model. There is lots of VC money being pumped into less structured models for delivering a platform, such as Mesosphere, CoreOS, Docker, Hashicorp, Kismatic, Cloud66, Apache Brooklyn (project) and Engine Yard acquiring OpDemand.

I’m not sure if “Structured PaaS” and “Unstructured PaaS” are really the right terms to use for this divergence of thinking about how to deliver a Platform, but they work for me. The Unstructured approach seems to appeal more to the DIY-focused start-ups, while Structured PaaS (eg. Cloud Foundry, OpenShift) seem to appeal more towards Enterprise markets that expect a lot more “structure” in terms of built-in governance, monitoring/logging, and infrastructure services (eg. load-balancing, higher-availability, etc.). The unstructured approach can be built in a variety of configurations, aka “batteries included but removable“, whereas the structured model will incorporate more out-of-the-box elements in a more closely configured model.

Given the inherent application portability that comes with either a container-centric or a PaaS-centric model, both of these are areas that IT professionals and developers should be taking a close look at, especially if they believe in a Hybrid Cloud model – whether that’s Private/Public or Public/Public. It’s also an area that will drive quite a bit of change around the associated operational tools, which are beginning to overlap with the native platform tools for deployment or config management (eg. CF BOSH or Dockerfiles or Vagrant).

It’s difficult to tell at this point which approach will gain greater market share. The traditional money would tend to follow the more structured approach, which aligns to Enterprise buying centers. But the unstructured IaaS approach taken by AWS has given it a significant market-share lead among developers. Will that unstructured history be any indication of how the Platform market plays out? Or will too many of those companies struggle to find viable financial models after taking all that VC capital, and eventually just become a feature within a broader structured platform?  I want to hear what you think; all respectful comments are welcome.