
Tuesday, April 28, 2015

The 3 reasons why Docker got it right

Containers have been around for a while. So why did they gain their well-deserved popularity only with the rise of Docker? Was it just a matter of market maturity, or something else? Having worked at Joyent, I was lucky enough to be in the container business before Docker was even invented, and I'd like to give you my take on that.

> A brief history of containers


We hear this again and again in computer science: what we think was recently invented by some computing visionary typically has its roots decades in the past. It happened with hardware virtualisation (emulation), with the cloud's client-server decentralisation (mainframes) and, yes, also with containers.

If you started hacking with Unix back in the early 90s, you'll certainly remember chroot. How many times I used it to make sure my process wasn't messing around with the main OS environment! And you'll probably remember FreeBSD jails, which added the kernel-level isolation required to implement the very first OS-level virtualisation system.
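For those who never played with it, here is a minimal sketch of that decades-old primitive, written in Go for illustration (the jail path /tmp/jail is a made-up example; it needs root privileges and a jail directory populated with a shell to actually run):

```go
package main

import (
	"log"
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// Confine this process and its children to the jail: from now on,
	// "/" resolves to /tmp/jail and the rest of the filesystem is invisible.
	if err := syscall.Chroot("/tmp/jail"); err != nil {
		log.Fatal(err)
	}
	// Move the working directory inside the jail so nothing still points outside.
	if err := os.Chdir("/"); err != nil {
		log.Fatal(err)
	}
	// Any binary we exec now comes from inside the jail.
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}
```

Of course, chroot only restricts the filesystem view; it was jails that added the rest of the kernel-level isolation.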

Sun Microsystems also believed strongly in containers and developed what it called "zones", definitely the most powerful and well-thought-out container system. But although Sun believed in containers more than it did in hardware-level virtualisation, the market moved towards the latter, not because it was the right approach but simply because it allowed the guest OS to stay untouched. Unfortunately, Sun never got to see much of the results of zones, as nobody knows what really happened to them after the acquisition by Oracle. Luckily another company, Joyent, picked up the legacy of OpenSolaris with its SmartOS derivative. SmartOS is now the foundation of the Joyent Cloud, with an improved version of zones at its very core.

At the same time, yet another company, Parallels (now Odin), stewarded OpenVZ, an open-source project for OS-level virtualisation on Linux. Its commercial version was called Virtuozzo, which Parallels sold as their virtualisation system of choice.

Since the late 2000s, Joyent and Parallels have been pioneering the container revolution, yet nobody ever talked about them as much as everybody now talks about Docker. Let's try to understand why.

> Positioning of containers


The easy conclusion would be that the market just wasn't ready yet. We all know how important timing is when releasing something new, and I'm sure it played a role with containers too. However, in my view, that's not the main reason.

Let's look at how these two companies were selling their container technology. Joyent built its pitch around performance and transparency: if you use a container instead of a virtual machine (i.e. a hardware-virtualised one), you can get an order-of-magnitude performance increase, as well as total transparency and visibility into the underlying hardware. That's absolutely spot on and relevant. But apparently it wasn't enough.

Parallels built its pitch around density. Parallels' target market was hosting companies and VPS providers, those who sell a single server for something like four bucks a month. If you sell a container instead of a virtual machine, you can squeeze two or three times as many servers onto the same physical host, so you can keep your prices lower and attract more customers. Given that you're not reserving resources for a specific container, higher density is a real advantage that can be achieved without affecting performance too much. Absolutely true but, again, it did not resonate loudly enough.

> The need to lower the overhead


In the last few years, we also witnessed a desperate need to lower the overhead. Distributed systems caused server sprawl: thousands of under-utilised VMs running what we call microservices, each with heavy baggage to carry, a multi-process, multi-user full OS whose features are almost totally useless to them. Hence the research into lowering the overhead: from ZeroVM (acquired by Rackspace) to Cloudius Systems, which tried to rewrite the Linux kernel, chopping off the features that aren't really necessary for running single-process instances.

> And then came Docker


Docker started as the delivery model for the infrastructure behind the dotCloud PaaS: it was using containers to deliver something else, namely application environments, with the agility and flexibility required to deploy, scale and orchestrate them. When Docker spun off, it also added the ability to package those environments and ship them to a central repository. Bingo. It turned the container into a simple means to do something else. It wasn't the container per se, it was what containers unlocked: the ability to package, ship and run isolated application environments in a fraction of a second.
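To make that package-ship-run loop concrete, here is a minimal sketch that drives the docker CLI from Go (the image name and registry are hypothetical, and it assumes a local docker installation with push access to that registry):

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

// docker runs a docker CLI command, streaming its output to the console.
func docker(args ...string) {
	cmd := exec.Command("docker", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("docker %v: %v", args, err)
	}
}

func main() {
	img := "registry.example.com/myapp:1.0" // hypothetical image name
	docker("build", "-t", img, ".")         // package: snapshot the app environment
	docker("push", img)                     // ship: upload it to a central repository
	docker("run", "--rm", img)              // run: start an isolated instance in seconds
}
```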

And all of it ran on Linux. The most popular OS of all time.

> Why Docker got it right


All of this made me think that there are three main reasons behind the success of Docker.

1. It used containers to unlock a totally new use case

The use cases that containers unlocked according to Joyent, Parallels and Docker were all different: performance of a virtual server for Joyent, density of virtual servers for Parallels, and application delivery for Docker. They all make a lot of sense, but the first two were focused on delivering a virtual server; Docker moved on and used containers to deliver applications instead.

2. It did not try to compete against virtual machines

Joyent and Parallels tried to position containers against virtual machines: you could do the same things better by using a container instead of a virtual machine. And that was a tough sell. Trying to address the very use case that everybody already acknowledged as the job of a VM was hard. The pitch was right, but it would have needed much more time to establish itself.

Docker did not compete with VMs and, as a demonstration of that, most people actually run Docker inside VMs today... even if Bryan Cantrill (@bcantrill), CTO of Joyent, would have something to say about it! Docker runs either on bare metal or in a VM; it does not matter much when what you want to achieve is to build, package and run lightweight application environments for distributed systems.

3. It did not try to reinvent Unix but used Unix for what it was built for

Docker didn't try to rewrite the Linux kernel. Yet it fully achieved the objective of reducing overhead. A container can run a single process without the burden of carrying an entire OS, while the underlying host makes the best use of its multi-process capabilities to effectively manage hundreds of containers.
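You can see this from inside a container (a small sketch, again shelling out to the docker CLI from Go; it assumes a local docker daemon and the public alpine image):

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Inside a fresh container, "ps aux" typically reports itself as PID 1:
	// a single process, with no init system or background daemons underneath.
	cmd := exec.Command("docker", "run", "--rm", "alpine", "ps", "aux")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}
```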

Don't get me wrong. I absolutely believe in the superiority of containers over virtual machines, and I think both Joyent and Parallels did an amazing job spreading their benefits like no one else. However, I also credit Docker with the unique ability to have made containers shine much brighter than anyone had ever done before.

In conclusion, the reason behind Docker's success was co-opting the established worlds of virtual machines and Linux to exploit the largest possible reach, while adding fundamental value to them. At the same time, looking at containers from an orthogonal perspective, not as the goal but as a means to achieve something other than delivering a virtual server, is what put containers on everyone's lips.

Tuesday, December 23, 2014

If cloud can't wait, will you?

A few days ago I participated as a panelist in the webinar titled “Cloud Can’t Wait” alongside Michael Coté (@cote), analyst at 451 Research, Jared Stauffer (@jaredstauffer), CEO at Brinkster, and Jim Foley, SVP Market Development at Flexiant.

We debated the cloud opportunity. Sounds old? Maybe. However, surprisingly enough, the majority of IT infrastructure buyers haven’t adopted the cloud yet. Skepticism, natural resistance to change, staff self-preservation and other excuses are amongst the primary reasons for that. If you think about it, this is actually pretty normal for a technology that disrupts the status quo so much.

The title of the webinar, “Cloud Can’t Wait”, may sound like a way to build hype but, with regard to cloud, I think we all concur that the hype is well over by now, just as I’m sure we agree that, indeed, the cloud can’t wait. Those who've fully embraced it have demonstrated significant advantages over those who haven't, and these advantages directly affect their competitiveness and even their ability to stay in business.

> The opportunity is for everyone


We talked about the cloud focusing on the infrastructure side of it. We deliberately excluded SaaS consumption from the statistics and the debate, as it has a totally different adoption curve and, when put in the same basket, can easily mislead the conclusions. So, rule number one: treat SaaS numbers separately.

Michael Coté presented an interesting categorisation of cloud infrastructure services, segmented as follows:
  • Infrastructure-as-a-Service (IaaS): compute, storage and network “raw” infrastructure.
  • Platform-as-a-Service (PaaS): supporting developers and the middleware integration they require.
  • Infrastructure-Software-as-a-Service (ISaaS): the applications required to manage IT infrastructure, including backup, archiving, disaster recovery (DR), capacity planning and, more generically, IT management as a service.
Seeing ISaaS as a third category was pretty interesting to me: we all knew it existed, but we never managed to label it correctly. And as Michael stated later on, expertise in this specific category is what some service providers, mostly those coming from the managed services space, can actually offer as a value-add on top of raw infrastructure in order to win business in this space.

So what is this cloud opportunity we are referring to? Again, Michael explained it this way:
“[With a 29% year over year growth rate] there is the opportunity to get involved early and [as a vendor] participate in gathering lots of that cash. Instead, cloud buyers such as developers or enterprises are not interested in participating in this growth, but in the innovation that comes out of this cloud space; they want to use this innovation and efficiency to really differentiate themselves in their own business”
So the opportunity is there and it is a win-win for everyone.

> Why are people buying cloud, and who are they?


If you ask yourself why people are buying cloud and what they’re using it for, you may not find the answers easily. That’s where the work of 451 Research becomes really helpful. As Michael told us, from the conversations they have every day, it emerged that most organisations use the cloud because of "the agility that it brings, the speed you can deploy IT and [afterwards] that you can use IT as a differentiator. [Because cloud] speeds time to market”.

To that, I would add that cloud also speeds up the ability to deliver change, which translates into adaptability, essential for any chance of success in our rapidly transforming economy.

Michael continued on this topic:
“Over the past roughly 5 to 10 years much of the focus of IT has been on cost savings, keeping the lights on as cheaply as possible, but things are changing and qualitatively we see this in conversations we have all the time, companies are more interested in using IT to actually do something rather than just saving money, and cloud is perfectly shaped for offering that”
Great. This seems to be well understood by now. The days of explaining to organisations that there is more to the cloud than a simple shift from CAPEX to OPEX are gone.

Who is buying cloud infrastructure services today? My first answer went to:
“Developers. This word returns a lot whenever we talk about cloud. They’ve been the reason of the success of AWS, for sure. That’s because they just 'get it', they understand the advantages of the cloud around how they can transform infrastructure into code. For them, spinning a server is just like writing any other line of code for doing anything else. They managed to take advantage of the cloud from the very early days and they contributed to make cloud what it is today under many aspects”
With regard to enterprises, I also added:
“enterprises are [currently] investing in private clouds because that’s the most natural evolution of their traditional IT departments, but eventually, as they get to provide cloud, it’s gonna be extremely easy to get them to consume cloud [services] from third parties. That’s because cloud is more of a mindset than just a technology"

> How can you profit from the cloud opportunity?


So you’re a service provider and you want to participate in the cloud opportunity. How do you do that? Michael suggests using the “best execution venue” approach. That starts, as Michael explains, with understanding the type of workload or application that you want to address. Then ask yourself: what skills, capabilities and assets do you have that you can leverage to address that specific type of workload? This will tell you what value you can bring on top of raw infrastructure in order to compete and take advantage of this fast-growing, multi-billion market.

My comment on this was:
“Eventually service providers should not consider themselves just part of one of these [IaaS, PaaS or ISaaS, Ed.] segments. Eventually I think this type of segmentation will not be there anymore, and there will be another segmentation based more on use cases, where the service provider will specialise in something and will pick a few services to make the perfect portfolio to match a specific use case in a target market”
Yes, I’m a big fan of the use-case approach, as I’m a big fan of trying to understand what exactly the cloud is being used for. Even if the press tries to push the cloud as a heavily commoditised service, you should never stop asking yourself what your customers are doing with it, what applications they’re running and what else you can do to make their lives easier.

In any case, whether you decide to leverage your existing capabilities or try to learn what your customers want to do with your cloud, we all agreed on the following statement: it’s still very early days. As Michael again explained, there are still lots of options for getting involved, it’s a great time to get involved, and the doors are definitely not closed.

I’d say they’re absolutely wide open. And many have already crossed the doorway. How about you?


Thursday, December 11, 2014

Docker: not just containers. Thoughts from DockerCon Europe

Developers. Developers. Developers. I guarantee this was the most spoken word at DockerCon Europe 2014, the hottest software conference around, which took place in Amsterdam last week. I was lucky enough to get a ticket (it sold out in a couple of days!) and be part of this amazing event which, despite a few complaints about being too much of a “marketing love fest”, offered a lot towards understanding market directions, trends and opportunities for software vendors.

So what is Docker? A container technology? No. Well, yes, but there is more to Docker. Despite being known as a container technology, Docker is mainly a tool for packaging, shipping and running applications. A piece of infrastructure is now a simple means to do something else, and it requires no infrastructure skills to consume. With containers mainstream, the industry has completed a further step towards making developers the main drivers of IT infrastructure demand.

But at DockerCon, Docker employees presented the project as a “platform” with the goal of making it easy to build and run distributed applications: a platform made of different components that are “included, but removable”. In fact, during one of the keynote sessions, Solomon Hykes (@solomonstre), creator of the Docker project, announced three new components that are now available alongside the well-known Docker engine:
  • Docker Machine
  • Docker Swarm
  • Docker Compose
As the community demanded, these three components have not been incorporated into the same binary as the container engine. But with this launch, Docker is now officially stepping into orchestration, clustering and scheduling.

Apart from the keynote, many of the breakout sessions were run by Docker partners, showing lots of interesting projects and more building blocks for creative engineers. In other sessions, organizations like ING Bank, Société Générale and the BBC explained how they use Docker and what benefits it brings, including how Docker helps them build their continuous delivery pipelines. Continuous delivery was described not just as a technology stack to adopt, but also as a fundamental organizational change that companies eventually need to go through. To this point, my most popular tweet during the two days was a simple quote from Henk Kolk, Chief Architect at ING Bank Netherlands (@henkkolk):


Here’s my paraphrased version of Kolk’s session: break the silos, empower engineers, build small product development teams and ship decentralized microservices. Cultural and organizational change was described as being as important as the revolution in software architecture or cloud adoption. There can’t be one without the other. So you’d better be ready, educated and willing to embrace it.

> Docker Machine


The project that caught most of our attention at Flexiant was Docker Machine. It enables Docker to create machines in different clouds directly from the command line. My colleague Javi (@jpgriffo), author of krane.io, has been looking at it since it was a proposal and, during the announcement of Docker Machine, we managed to send the very first pull request for the inclusion of a driver for Flexiant Concerto into the project, ahead of VMware and GCE. If the Flexiant Concerto driver is merged over the next few days, Docker users will be able to go from “Zero to Docker” (as it was pitched by its author Ben Firshman – @bfirsh) in any cloud, with a single consistent driver. Exciting! We’re absolutely proud of this and we believe we have much more to give to the Docker community, given our expertise in cloud orchestration. Be prepared for more pull requests to follow.
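For a feel of what “Zero to Docker” looks like in practice, here is a rough sketch (continuing with Go shelling out to the CLI; the concerto driver name and the dev machine name are hypothetical, and the flags reflect the docker-machine binary as it was later released):

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

// machine runs a docker-machine CLI command, streaming its output.
func machine(args ...string) {
	cmd := exec.Command("docker-machine", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("docker-machine %v: %v", args, err)
	}
}

func main() {
	// Provision a cloud VM with the Docker engine already installed on it.
	// "concerto" is a hypothetical driver name; "dev" names the new machine.
	machine("create", "--driver", "concerto", "dev")
	// Print the environment variables that point a local docker client
	// at the newly created machine.
	machine("env", "dev")
}
```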

> The Risk


Docker has been blowing minds since the first days of the famous video (21 months ago!). It makes so much sense that it has been adopted with a speed we’ve never seen in any open source project before. Even those who do not understand it are trying to jump on the bandwagon just to leverage its brand and market traction. This doesn’t come without risks. With a large community, an ecosystem with important stakes and a commercial entity behind it (Docker, Inc.), there will be conflicts of interest, with “overstepping” onto the domain of the partners that helped make Docker what it is today. We’ve already seen this with CoreOS’s launch of Rocket a couple of days ago.

Docker, Inc. needs to drive revenue and, although I’ve seen Solomon Hykes put a lot of effort into keeping impartial and honest governance over his baby, I’m sure it’s not going to be a painless process. Good luck Solomon!

> The Opportunity


High risk usually means high potential return. The return here can be high, not just for Docker, Inc., but for the whole world of IT. Learning Docker and understanding its advantages can drive the development of applications in a totally different way. Not having to create a heavy, resource-wasting virtual machine (VM) for everything will boost the rise of microservices, distributed applications and, by extension, cloud adoption. With this come scalability, flexibility, adaptability, innovation and progress. I don’t know if Docker will still be such a protagonist in a year or two, but what I do know is that it will have fundamentally changed the way we build and deliver software.

This post originally appeared on Flexiant.com.