Tuesday, December 15, 2015

Being a good listener is one of the keys to success in any business

I'm in the sales business, and in my business learning to listen well is a major key to success. Actually hearing what people are saying, rather than just waiting for your turn to speak, is critical if you want to learn what your customer is looking for in a solution. Knowing what your customer needs allows you to provide a solution that the customer will embrace, and increases your chances of a sale.

Unfortunately, all too often I've been in meetings where there is at least one person who is more interested in formulating what they are going to say next than in hearing what the person who's currently speaking has to say. In my business (IT) I find this to be endemic. I suspect that has a lot to do with the passion that many IT professionals bring to their jobs. That passion is a good thing, but it can also be a double-edged sword. People who don't hear and absorb what the current speaker has to say often come across as "preachy" or "professorial," and they miss important points of view and information.

So, I would encourage everyone to become a better listener. But being a good listener does not come easily for some of us. It takes time, practice, and dedication. What comes to mind when you think about listening to a friend or co-worker? Do you find yourself thinking about what you want to say in response, or are you fully engaged with what they are talking about? When it comes to connecting with others, it’s all about consciously listening to them and the information they are sharing with you.

1. Eye contact - When it comes to being a good listener, it’s important for you to have eye contact with the other person. It shows that you are paying attention and engaged with the conversation. When you don’t have eye contact with the other person, it shows that you don’t care and are not interested in what they have to say. Practice having eye contact with the next person you have a conversation with.

2. Find the “Why” and “What” - For you to be a good listener, you need to find out the “Why” and the “What.” Why are they talking to you, and what is the message they are trying to share with you? Being a good listener takes practice, and when you practice finding out the “Why” and “What” of the other person, you will be much more engaged in the conversation.

3. Focus on the other person - It’s easy to think about what we want to say while the other person is still talking. This will not make you a better listener. If you are constantly composing your response, you will miss out on carefully listening to the other person. Focus on what they have to say. Find out the “Why” and “What,” and maintain eye contact. Once the other person stops talking, then think about your response. Too often, when we listen to people, we are rehearsing in our heads what we want to say rather than opening our ears and purely listening to their message.

4. Limit distractions - We live in a society that is filled with distractions. We are surrounded by so much noise that it’s a challenge to truly listen to another person. To be a good listener, you need to limit distractions during your conversation, whether it be the telephone, incoming text messages, arriving emails, or other interruptions. It takes a conscious decision to limit distractions when you are listening to someone else. How can you possibly be a good listener if your phone keeps ringing? It would be nearly impossible. Limit as many interruptions as you can when you are listening to someone else. This not only shows the other person that you care, it is also good social practice.

5. Engage - Engage yourself in the conversation. Being engaged shows your attention to the other person. Let them know that they have your attention and focus. When you are not engaged in the conversation, the other person will notice, and will most likely not want to talk to you again. Show the other person that you care about them and are interested in what they have to say. One way to show this is by responding with a short comment, such as “Yes” or “I understand.” This expresses to the other person that you are truly listening. Make sure that you let the other person do most of the talking while you stay engaged.

I believe that if you become a better listener, you'll be more successful both in business and in your personal relationships. It's not necessarily something that comes naturally to all of us, so keep in mind that practice makes perfect.

Folks,

Please note that I've changed the name of my blog. The name change reflects the change that I, the company I work for, and the industry as a whole are making. Converged/hyperconverged infrastructure, the cloud, OpenStack, DevOps, etc. are all changing the IT landscape, and if you aren't changing with it, you're going to get left behind.

So, I'm changing the name and focus of this blog since, hey, none of us want to be called a dinosaur, do we?

--Joerg

Tuesday, May 12, 2015

Container Wars!

The container wars have started!

Containers have a huge amount of hype and momentum behind them, and there are many spoils for whoever becomes dominant in the container ecosystem. The two major startups innovating in this space–CoreOS and Docker–have declared war on each other in the fight for that control.

The Current Landscape

Recently, CoreOS announced Tectonic. Tectonic is a full solution for running containers, including CoreOS as the host OS, Docker or rkt as the container format, and Kubernetes for managing the cluster. It also uses a number of other CoreOS tools, such as etcd and fleet.
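To give a flavor of what that orchestration layer consumes, here is a minimal Kubernetes pod definition, the smallest unit Kubernetes schedules onto a host. This is only a sketch; the names and image are illustrative, and the exact API version varies with the Kubernetes release:

```yaml
# pod.yaml: one pod running a single nginx container
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx          # container image to run (illustrative)
      ports:
        - containerPort: 80 # port the container listens on
```

Submitting this with `kubectl create -f pod.yaml` hands the scheduling and monitoring of that container over to the cluster.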

Despite Docker being an option on Tectonic, CoreOS’s messaging is not focused on Docker: neither the Tectonic site nor the announcement included a single mention of it. It’s clear that CoreOS is moving in a different direction. CoreOS’s CEO Alex Polvi says that “Docker Platform will be a choice for companies that want vSphere for containers”, but that Rocket (now rkt) is the choice for “enterprises that already have an existing environment and want to add containers to it”. The latter is a far larger prize.

That is: companies will choose Docker Platform as an alternative to things like Cloud Foundry, while companies like Cloud Foundry will use things like Rocket to build their platforms.

Docker, meanwhile, doesn’t mention CoreOS or Kubernetes anywhere in its docs, and on its site only mentions them in passing. Docker CEO Solomon Hykes reacted fairly negatively to the announcement of rkt back in December. He has also started to use the phrase “docker-native” to differentiate the tools Docker Inc. builds from other tools in the ecosystem, implying that the other tools are second class.

Right now, both companies provide different pieces with their respective stacks and platforms.

To run containers successfully on bare metal server infrastructure, you need:

  1. A Linux host OS (Windows support for containers is coming with the next release of Windows).
  2. The container runtime system to start, stop, and monitor the containers running on a host.
  3. Some sort of orchestration to manage all those containers.

Tectonic provides all of these, with CoreOS, Docker or rkt, and Kubernetes. However, it appears to omit a pre-defined way to build a container image, the equivalent of the Dockerfile. This is by design: there are many existing ways to build images (Chef, Puppet, Bash, etc.) that can be leveraged.
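For anyone who hasn't seen one, a Dockerfile is just a short recipe for building an image. A minimal sketch, where the base image, package, and file names are purely illustrative:

```dockerfile
# Start from a public base image
FROM ubuntu:14.04

# Install the application's dependencies
RUN apt-get update && apt-get install -y python

# Copy the application code into the image
COPY app.py /opt/app/app.py

# What to run when a container starts from this image
CMD ["python", "/opt/app/app.py"]
```

Running `docker build -t myapp .` in the directory containing this file produces an image you can then start with `docker run myapp`.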

On the Docker side, things are less clear. Docker isn’t opinionated about the host OS, but it also doesn’t provide much help there. Docker Machine abstracts over it for some infrastructure services, letting you use whatever host OS already exists, but doesn’t help much when you want to run the whole stack yourself. Docker Swarm and Docker Machine provide some parts of orchestration. There is also Docker Compose, which Docker has been recommending as part of this puzzle while simultaneously saying it isn’t intended for production use. And of course there is the Dockerfile to build the containers, though some indicate that it is immature for large teams.
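To make the Compose point concrete, here is a minimal docker-compose.yml of the current style (a sketch; the service names and images are illustrative):

```yaml
# Two services: Compose builds, links, and starts them together
web:
  build: .          # build the web image from the local Dockerfile
  ports:
    - "8000:8000"   # publish the app's port on the host
  links:
    - db            # wire the web container to the database container
db:
  image: postgres:9.4  # stock image pulled from the Docker Hub registry
```

A single `docker-compose up` brings the whole group up on one host, which is exactly why it is so handy in development, and why its production story is still an open question.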

The impression you get from Docker is that they want to own the entire stack. If you visit the Docker site, you could be forgiven for thinking that the only tools you use with Docker are Docker Inc’s tooling. However, Docker doesn’t really have good solutions for the host OS and orchestration components at present.

Similarly, Docker is pushing its own tools over alternatives, even when those tools aren’t really valid alternatives yet. Docker Compose, for example, is being pushed as an orchestration framework even though that functionality is still on the roadmap.

The container landscape is fairly new, but Docker has a pretty clear lead in terms of mindshare. Both companies are trying to control the conversation: Docker talks about “docker-native” and generally focuses its marketing around the term Docker, while others in the space – CoreOS and Google, for example – are focusing the conversation on “containers” rather than “Docker”.

This is made a little difficult by the head start that Docker has: they essentially created the excitement around containers, and most people in the ecosystem talk about Docker rather than the container space. Docker has also done an incredibly good job of making Docker easy to use and to try out.

By contrast, CoreOS and Kubernetes are not tools for beginners. You don’t really need them until you have code in production and suffer from the problems they solve, whereas Docker is something you can play around with locally. Docker’s ease of use – everything from the command line to the well-thought-out docs to boot2docker – is also well ahead of rkt and CoreOS’s offering, which are much harder to get started with.
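To see why, consider the canonical first five minutes with Docker (assuming the daemon is already installed, e.g. via boot2docker on a Mac):

```sh
# Pull a public image and drop into an interactive shell inside a container
docker run -it ubuntu:14.04 /bin/bash

# Run a web server in the background, publishing port 80 on the host
docker run -d -p 80:80 nginx

# List the running containers
docker ps
```

Getting an equivalent "hello world" out of rkt plus CoreOS plus Kubernetes involves considerably more moving parts.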

How does this play out?

If you’re a consumer in this space, looking to deploy production containers soon, this isn’t a particularly helpful war. The ecosystem is very young, people shipping containers in production are few and far between, and a little bit of maturing of the components would have been useful before the war emerged. We are going to end up with a multitude of different ways to do things, and it’s clear we’re far from having one true way.

From a business perspective, it’s difficult for any of the players to back away from their chosen directions. Docker is certainly focused on building the Docker ecosystem, to the exclusion of everyone else. Unfortunately, they don’t have all the pieces yet.

Other companies who want to play in the ecosystem are unlikely to be pleased by Docker’s positioning. CoreOS certainly isn’t alone in their desire for a more open ecosystem.

Ironically, Docker itself came about because of a closed ecosystem with a single dominant player. Heroku dominated the PaaS ecosystem to the extent that there really wasn’t a PaaS ecosystem, just Heroku. dotCloud failed to make inroads, and so it opened its platform up to disrupt Heroku’s position and move things in a direction where Heroku’s dominance didn’t matter. With Docker, they certainly appear to have succeeded. Now that Docker is the dominant player in the ecosystem it disrupted, CoreOS and the other players want to unseat it and fight on a level playing field before things settle too much.

The risk for Docker is that, on this trajectory, losing the war means losing everything. If nobody else can play in this space, the companies left outside will build their own ecosystem, one that leaves Docker on the outside. Given that Docker still lacks considerable parts of the stack (mature orchestration being an obvious one), its attempt at owning the whole ecosystem is unlikely to succeed in the near term.

Meanwhile, CoreOS will need to replicate the approachability of the Docker toolset to compete effectively, and will need to do so before Docker solves the missing parts of its puzzle.

All of the other companies are sitting neutral right now. Google, Microsoft, and VMware are all avoiding committing one way or the other. Their motivations are clear enough, and it doesn’t benefit any of them to pick a side yet. The exception is that the open ACI (App Container) standard is likely to interest VMware at least, and I wouldn’t be surprised to see Google doing something in this space, too.

There is massive risk for all of the players in the ecosystem, depending on how this plays out. Existing players like Amazon, Google, and Microsoft are providing differentiated services and tools around containers. The risk of not doing so, of owning no piece of the puzzle, is being sidelined and eventually commoditized. The API that abstracts over the other tools is the one that wins.

Long story short – this is the start of a war that will probably be quite bloody, and that none of us is going to enjoy.

Monday, April 27, 2015

The Data Center of the Future: what does it look like?

Folks,

I've been spending a lot of time lately talking with customers about storage, flash, HDDs, hyper-converged, cloud, etc. What's become clear to me recently (yes, I'm a little slow) is that all of these technology changes are driving us toward sea changes in the enterprise data center. In this post, I want to talk a little about how things are changing with regard to storage. I'll talk a bit about flash vs. HDD technology and where I see each of them going in the next few years, and I'll finish with a discussion of how that will affect the enterprise data center going forward, as well as the data center infrastructure industry in general.

I believe that the competition between flash and hard disk-based storage systems will continue to drive developments in both. Flash has the upper hand in performance and benefits from Moore's Law improvements in cost per bit, but has increasing limitations in lifecycle and reliability. Finding well-engineered solutions to those issues will define its progress. Hard disk storage, on the other hand, has cost and capacity on its side. Maintaining those advantages is the primary driver of its roadmap, but I see limits to where that will take it.

Hard Disk Drives (HDDs)
So, let's start with a discussion of HDDs. Hard disk developments continue to wring out a mixture of increased capacity and stable-or-better performance at lower cost. For example, Seagate introduced a 6TB disk in early 2014 that finessed existing techniques, but subsequently announced an 8TB disk at the end of 2014 based on Shingled Magnetic Recording (SMR). SMR works by allowing tracks on the disk to overlap each other, eliminating the fallow area previously used to separate them. The greater density this allows is offset by the need to rewrite multiple tracks at once. This slows down some write operations, but in exchange for roughly 25 percent more capacity -- and with little need for expensive revamps of manufacturing techniques.

If SMR is commercially successful, then it will speed the adoption of another technique, Two-Dimensional Magnetic Recording (TDMR) signal processing. This becomes necessary when tracks are so thin and/or close together that the read head picks up noise and signals from adjacent tracks when trying to retrieve the wanted data. A number of techniques can solve this, including multiple heads that read portions of multiple tracks simultaneously to let the drive mathematically subtract inter-track interference signals.

A third major improvement in hard disk density is Heat-Assisted Magnetic Recording (HAMR). This uses drives with lasers strapped to their heads, heating up the track just before the data is recorded. This produces smaller, better-defined magnetized areas with less mutual interference. Seagate had promised HAMR drives this year, but now says that 2017 is more likely.

Meanwhile, Hitachi has improved capacity in its top-end drives by filling them with helium. The gas is far less dense than air, so platters can be packed closer together without turbulence. This allows for greater density at the drive level.

All these techniques are being adopted as previous innovations -- perpendicular rather than longitudinal recording, for example, where bits are stacked up like biscuits in a packet instead of lying flat on a plate -- run out of steam. By combining all of the above ideas, the hard disk industry expects to deliver another three or four years of continuous capacity growth while maintaining its price differential with flash. Note, however, that all of the innovation in HDDs is around capacity. I believe that HDDs will continue to dominate the large-capacity, archive type of workloads for the next two or three years. After that ... well, read the next section on flash.

Some argue that the cloud will take over this space. Even if that's true, however, cloud providers will continue to need very cheap, high-capacity HDDs until flash can compete in that space on $/GB as well.

Flash
Flash memory is changing rapidly, with many innovations moving from small-scale deployment into the mainstream. Companies such as Intel and Samsung are predicting major advances in 3D NAND, where the basic one-transistor-per-cell architecture of flash memory is stacked into three-dimensional arrays within a chip.

Intel, in conjunction with its partner Micron, is predicting 48GB per die this year by combining 32-layer 3D NAND with multi-level cells (MLC) that double the storage per transistor. The company says this will create 1TB SSDs that fit in mobile form factors and are much more competitive with consumer hard disk drives -- still around five times cheaper at that size -- and 10TB enterprise-class SSDs by 2018. Moore's Law will continue to drive down the cost per TB of flash at the same time as these capacity increases occur, making flash a viable replacement for high-capacity HDDs in the next three to five years. Note that this assumes SSDs will leverage technologies such as deduplication to reduce the footprint of the data and drive down the effective cost.
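To illustrate why deduplication matters to the economics, here is a toy sketch of block-level dedup (my own illustration, not any vendor's implementation): split the data into fixed-size blocks, fingerprint each block, and store each unique block only once.

```python
import hashlib

def dedup_ratio(data, block_size=4096):
    """Estimate the data-reduction ratio from block-level deduplication.

    Real arrays add variable-size chunking, compression, and persistent
    fingerprint indexes; this shows only the core idea.
    """
    # Split the data stream into fixed-size blocks
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    # Fingerprint each block; identical blocks hash to the same digest
    unique = {hashlib.sha256(b).digest() for b in blocks}
    # Logical blocks written vs. physical blocks actually stored
    return len(blocks) / len(unique)

# A workload with heavy duplication, e.g. many near-identical VM images
sample = b"A" * (4096 * 8) + b"B" * (4096 * 2)
print(dedup_ratio(sample))  # 5.0, i.e. a "5:1" reduction ratio
```

At a 5:1 reduction ratio, the effective cost per GB of flash is one fifth of its raw cost, which is how all-flash arrays can close the gap with HDDs well before raw NAND pricing alone would.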

The following is a chart from a Wikibon article on the future of flash:

[Chart: Wikibon's projection of the four-year cost per TB of flash vs. HDD storage through 2020]
As the chart shows, by 2017 the four-year cost per TB of flash will be well below that of HDDs, and the trend continues until 2020, when the four-year cost per TB hits $9 for flash vs. $74 for HDDs. You can read the entire article here.

Conclusions
So, what does all this mean? Among other things, it means you can expect a shift to what the Wikibon article calls the "Electronic Data Center": a data center where the mechanical disk drive has been replaced by something like flash, eliminating the last of the mechanical devices (their scenario assumes tape and tape robots are already gone). This will reduce the electricity and cooling needs, as well as the size and footprint, of the data center of the future.

Let's assume for a moment that Wikibon is correct.  What does this mean to the data center infrastructure industry?

  1. Companies that build traditional storage arrays will need to shift their technology to all-flash, and they need to do it quickly. You can already see this happening: EMC, for example, acquired XtremIO to obtain all-flash technology, while companies like NetApp are developing their all-flash solutions in house. In both cases, however, the all-flash solutions face internal battles against engineering organizations that are vested in the status quo. That means they could be slow to market with potentially inferior products, though their sheer size in the market may protect them from complete failure.
  2. What about the raft of new startups producing all-flash arrays? Might the above provide an opening for one or more of them to "go big" in the market? And what about the rest? My take is that one or more might indeed have the opportunity to "go big," thanks to the gap created by the "big boys" moving too slowly or trying to shoehorn old technologies into the new data center. Most of them, however, will either die off or be acquired by a larger competitor.
However, I think there is an even larger risk to the "storage only" companies, both new and old. I believe a couple of other market forces will put significant pressure on them, including the new all-flash startups.

Specifically, the trends toward cloud computing and hyper-convergence, along with the growing emphasis on automation driven by other IT trends such as DevOps, will make standalone storage arrays less and less desirable to IT organizations. This will force those companies to move beyond their roots into, for example, hyper-converged infrastructure, where they currently have little or no engineering expertise or management experience.

The companies that are able to embrace these kinds of moves will likely have a bright future in the data center of the future. However, "not invented here" attitudes and a lack of engineering talent in the new areas of technology are going to make that a challenge for the very large storage companies. How they address these issues will be a determining factor in their future success.

To wrap it up, I firmly believe that not everything in the enterprise space is "moving to the public cloud." What I do believe is that:

  1. Some workloads currently running in the enterprise data center will move to the public cloud, and be managed by IT.
  2. Some workloads will remain in "private" clouds owned and operated by IT. However, those private clouds must offer internal customers all of the same ease of use that the public cloud offers. Most likely, they will leverage web-scale (hyper-converged) architectures to make management and management automation easier.
  3. Hybrid cloud management software will be used to allow both management and automation to span the enterprise's private cloud and its public cloud(s).
  4. DevOps and similar initiatives will drive significant automation into the hybrid clouds I describe above, as well as significant change to IT organizations.
  5. These changes will all be highly disruptive, and the IT organizations that embrace change will have an easier time over the next few years than those that don't. Very large IT organizations will have the hardest time making the changes; yes, it is hard to turn the aircraft carrier. But internal customers are demanding it of IT, and will go outside the IT organization to get what they want and need if necessary.
In the end, the Data Center of the Future will look very different from the current enterprise data center. It will be a hybrid cloud that spans on-premises and public clouds. It will be an all-electronic data center that uses significantly less footprint and electricity than current data centers. And finally, this infrastructure will leverage significant automation and be managed by an IT organization that looks very different from today's.


Wednesday, April 22, 2015

Structured or Unstructured PaaS??

Words, labels, tags, etc. in our industry mean something – at least for a while – and then marketing organizations get involved and use those words, labels, and tags to best align with their specific agendas. For example, terms like “webscale” or “cloud native apps” were strongly associated with the largest web companies (Google, Amazon, Twitter, Facebook, etc.). But over time, those terms got usurped by other groups in an effort to link their technologies to hot trends in the broader markets.

Another term that seems to be shifting is PaaS, or Platform as a Service. It’s sort of a funny acronym to say out loud, and people are starting to wonder about its future. But we’re not an industry that likes to stand still, so let’s move things around a little. Maybe PaaS is the wrong term, and it really should just be “Platform”, since everything in IT should eventually be consumed as a service. I'm already hearing about XaaS (X as a Service), which pretty much means anything as a service, or perhaps everything as a service.

But not everyone believes that a Platform (or PaaS) should be an entirely structured model. There is lots of VC money being pumped into less structured models for delivering a platform: Mesosphere, CoreOS, Docker, HashiCorp, Kismatic, Cloud66, the Apache Brooklyn project, and Engine Yard's acquisition of OpDemand.

I’m not sure if “Structured PaaS” and “Unstructured PaaS” are really the right terms for this divergence in thinking about how to deliver a Platform, but they work for me. The unstructured approach seems to appeal more to DIY-focused start-ups, while structured PaaS (e.g. Cloud Foundry, OpenShift) appeals more to enterprise markets that expect a lot more “structure” in terms of built-in governance, monitoring/logging, and infrastructure services (e.g. load balancing, high availability, etc.). The unstructured approach can be assembled in a variety of configurations, aka “batteries included but removable”, whereas the structured model incorporates more out-of-the-box elements in a more tightly configured package.

Given the inherent application portability that comes with either a container-centric or a PaaS-centric model, both of these are areas that IT professionals and developers should be taking a close look at, especially if they believe in a Hybrid Cloud model – whether that’s Private/Public or Public/Public. It’s also an area that will drive quite a bit of change in the associated operational tools, which are beginning to overlap with the native platform tools for deployment and config management (e.g. CF BOSH, Dockerfiles, or Vagrant).

It’s difficult to tell at this point which approach is likely to gain greater market share. The traditional money would tend to follow the more structured approach, which aligns with enterprise buying centers. But AWS's unstructured IaaS approach has earned it a significant market-share lead among developers. Will that unstructured history be any indication of how the Platform market plays out? Or will too many of those companies struggle to find viable financial models after taking all that VC capital, and eventually end up as a feature within a broader structured platform? I want to hear what you think; all respectful comments are welcome.