
Containers, virtualization and clusters

Last week was a bit hectic. We are waiting for a bunch of datacenters to provide us with test access to virtualised and hardware servers, which we need to benchmark to see how they perform.

At some point during the week we realised two things:

  • We aren't going to get decent numbers out of the machines on DigitalOcean
  • Apparently DigitalOcean is using some cheap virtualisation environment which massively underperforms compared to VMware

This realisation led us to start evaluating the dedicated hardware option instead of VMs. We are going to run Docker containers on these machines anyway, so there is not going to be any vendor or hardware lock-in. Here are a few notes on that:

  • Dedicated hardware is fast
  • Good virtualisation software adds little overhead on top of the hardware; bad virtualisation kills performance
  • Docker containers add very little overhead (should be way below 10% in our scenario) but help a lot with software compartmentalisation

Over the last week I was following in the footsteps of Pieter, learning from him and writing my first Docker containers. There are a few gotchas, but the entire concept is amazing! It is probably the best IT thing that has happened to me since beer. It solves the "works on my machine" syndrome in most cases, making it extremely easy to work with software both locally and in remote environments. The experience is a lot better than the Lokad.CQRS abstractions for file and Azure backends that I came up with earlier.
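To give a sense of what these first containers look like, here is a minimal Dockerfile sketch. The base image matches the Ubuntu 12.04 choice mentioned below, but the package list, paths and service name are hypothetical placeholders rather than our actual setup:

```
# Minimal sketch of a container image for one of our services.
# Base image matches our Ubuntu 12.04 plan; everything else is illustrative.
FROM ubuntu:12.04

# Install runtime dependencies (hypothetical list).
RUN apt-get update && apt-get install -y ca-certificates

# Copy a pre-built service binary into the image (hypothetical path).
ADD ./build/service /opt/service

# Port the service listens on (hypothetical).
EXPOSE 8080

CMD ["/opt/service"]
```

Building and running it locally is then just `docker build -t hp/service .` followed by `docker run -p 8080:8080 hp/service`, and the same image runs unchanged on a remote host.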

Eventually, while setting up the containers over and over again, we came to the conclusion that we want to automate the entire thing. Running a script to deploy new versions requires context switching, which Tomas and Pieter don't like (I have never been as productive in Linux as these guys, but I'm starting to feel it too). Hence, we are thinking about using either Drone or fleet to deploy and manage containers across the cluster.
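To make the fleet option concrete: a container deployment under fleet is just a systemd unit submitted to the cluster. A rough sketch, with the service and image names made up for illustration:

```
# web.service - hypothetical fleet unit for one of our containers
[Unit]
Description=HappyPancake web container
After=docker.service
Requires=docker.service

[Service]
# Remove any stale container, then run the new one in the foreground.
ExecStartPre=-/usr/bin/docker kill web
ExecStartPre=-/usr/bin/docker rm web
ExecStart=/usr/bin/docker run --name web -p 8080:8080 hp/web
ExecStop=/usr/bin/docker stop web
```

`fleetctl start web.service` then schedules it on one of the machines in the cluster, and `fleetctl list-units` shows where it ended up.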

We will probably be using Ubuntu 12.04 LTS as the base image for the containers (long-term support and stable code). Trying something like CoreOS for the host OS seems compelling because etcd (awesome) and fleet are native to it.
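If we go the CoreOS route, wiring up etcd and fleet on each host is mostly a matter of cloud-config. A minimal sketch, where the discovery token is a placeholder you generate per cluster:

```
#cloud-config
coreos:
  etcd:
    # Placeholder token - generate a real one at https://discovery.etcd.io/new
    discovery: https://discovery.etcd.io/<token>
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
```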

We'll see how it goes. This week is going to be about strengthening our continuous delivery story and getting more numbers from the stack in different configurations.

A few more highlights from the previous week:

  • We switched from Campfire to Slack (we use it for persistent team chat). The native client is awesome and works much better than Campfire and Skype group chats
  • wrk turned out to be an awesome tool for load testing while measuring throughput and latencies (see the example after this list)
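For reference, a typical wrk run looks like the one below. The thread, connection and duration settings and the endpoint are illustrative, not the exact parameters we used:

```
# 4 threads, 100 open connections, 30 seconds, with a latency distribution report
wrk -t4 -c100 -d30s --latency http://10.0.0.5:8080/
```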

Published: February 24, 2014.

Next post in the HappyPancake story: Benchmarking and tuning the stack
