In this article I'll describe how I use Vagrant in my daily tasks as an operations dude, as well as how I deployed it at one of our customers to help the developers focus on the coding part rather than the operations part.
Since the beginning of my career at Inuits I've been using Vagrant almost every day. If I got paid every time I spun up a box, I could have bought that Tesla some years ago already! But unfortunately I don't :)
In almost 99% of the cases where I use this nifty tool, it's related to Puppet. Writing modules, testing configuration changes on a virtual machine before pushing them to development, fighting with SELinux, ... these are all crucial tasks, but they require destroying and recreating boxes a lot.
Vagrant uses virtualization techniques to start, stop and destroy the different development machines. I've listed the ones I use on a daily basis or from time to time :)
Like most of us I started using Vagrant together with VirtualBox. In the beginning I had a lot of issues when updating VirtualBox: every time it was upgraded, my Vagrant environment failed to stay up and running.
Once I figured out that most of the Vagrant boxes I used were reliant on the extension pack, I never had any issue upgrading VirtualBox anymore. The only thing I had to do was upgrade the extension pack too.
It works great, but it's slow as hell when you spin up boxes again and again to test things out, since it has to boot a whole VM every time. So about a year ago I started looking at some alternatives.
But I struggled to configure the LXC part on my previous Fedora machine. That's one of the reasons I switched about a year ago: I installed Arch Linux on my machine and started reading the related wiki pages.
After following the configuration steps for the networking and DNS parts of LXC, I succeeded in creating my very first CentOS-based LXC container:
$ sudo lxc-create -t centos -n centos
Host CPE ID from /etc/os-release:
This is not a CentOS or Redhat host and release is missing, defaulting to 6 use -R|--release to specify release
Checking cache download in /var/cache/lxc/centos/x86_64/6/rootfs ...
Cache found. Updating...
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: be.mirror.eurid.eu
 * extras: be.mirror.eurid.eu
 * updates: be.mirror.eurid.eu
base   | 3.7 kB 00:00
extra  | 3.3 kB 00:00
update | 3.4 kB 00:00
Setting up Update Process
No Packages marked for Update
Loaded plugins: fastestmirror
Cleaning repos: base extras updates
0 package files removed
Update finished
Copy /var/cache/lxc/centos/x86_64/6/rootfs to /var/lib/lxc/centos/rootfs ...
Copying rootfs to /var/lib/lxc/centos/rootfs ...
sed: can't read /etc/init/tty.conf: No such file or directory
Storing root password in '/var/lib/lxc/centos/tmp_root_pass'
Expiring password for user root.
passwd: Success

Container rootfs and config have been created.
Edit the config file to check/enable networking setup.

The temporary root password is stored in:
        '/var/lib/lxc/centos/tmp_root_pass'

The root password is set up as expired and will require it to be changed
at first login, which you should do as soon as possible. If you lose the
root password or wish to change it without starting the container, you
can change it from the host by running the following command (which will
also reset the expired flag):

        chroot /var/lib/lxc/centos/rootfs passwd
$ sudo lxc-start -d -n centos
$ sudo lxc-ls -f
NAME    STATE    IPV4       IPV6  AUTOSTART
---------------------------------------------
centos  RUNNING  10.0.3.94  -     NO
$ sudo lxc-console -n centos

CentOS release 6.5 (Final)
Kernel 3.17.4-1-ARCH on an x86_64

centos login:
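For reference, the networking part of that configuration boiled down to a bridge definition along these lines. This is a sketch of my setup, not a universal recipe: the bridge name `lxcbr0` and the 10.0.3.0/24 range (which matches the container IP above) are assumptions that may differ on your distro.

```shell
# /etc/lxc/default.conf -- per-container network defaults (LXC 1.x syntax)
# Attach each new container to the lxcbr0 bridge with a veth pair:
#
#   lxc.network.type  = veth
#   lxc.network.link  = lxcbr0
#   lxc.network.flags = up
#
# The bridge itself (lxcbr0, 10.0.3.1/24 with dnsmasq for DHCP/DNS) is
# what the distro wiki's networking and DNS steps set up on the host.
$ cat /etc/lxc/default.conf
lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
```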
It was a great feeling to finally have that box up and running with internet access, so I could update the box and install software on it.
Next up was getting such an LXC container up and running through the Vagrant framework for my local development purposes.
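The Vagrant side of this is handled by the vagrant-lxc provider plugin. Once the plugin and an LXC-compatible box are in place, the workflow looks like the sketch below (the box name is the one I publish later in this post; substitute your own):

```shell
# Install the LXC provider plugin into Vagrant
$ vagrant plugin install vagrant-lxc
# Initialize a project with an LXC-compatible box (example box name)
$ vagrant init visibilityspots/centos-6.x-puppet-3.x
# Bring the machine up as an LXC container instead of a VirtualBox VM
$ vagrant up --provider=lxc
```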
Manually crafted box
So I first needed a minimal CentOS container with some Vagrant tweaks applied. While looking around for documentation on this part, I found a helpful blog post.
But every time I tried to get that freshly built custom box up and running, it failed. It was frustrating as hell! Once I figured out the logic behind the vagrant-lxc-base-boxes project, I finally got to a working setup and could close the issue myself.
So now I could configure a minimal CentOS LXC container for Vagrant usage. But there are a lot of manual steps to perform if I want to keep that box up to date.
Script-crafted box
When I found out about the vagrant-lxc-base-boxes project, I tried to get a CentOS box up and running by following its documentation.
But once again it failed big time. After some digging I came to the conclusion that the environment $PATHs of Arch Linux (the host) and CentOS (the guest) are not the same, and that this was the cause of the issue. After fixing that, together with adding the missing SSH server package, I got the script working.
So I got a step further: I could now create a CentOS LXC Vagrant box pretty easily myself.
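For those who want to try it, the build is script-driven: clone the project and run the make target for your distro. A sketch, assuming the upstream GitHub location and a `centos` target; check the project's README for the exact target names in your version:

```shell
# Clone the base-box build scripts (upstream location assumed)
$ git clone https://github.com/fgrehm/vagrant-lxc-base-boxes.git
$ cd vagrant-lxc-base-boxes
# Build a CentOS base box; the resulting .box lands in the output directory
$ make centos
```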
Script-crafted box with Puppet preinstalled
Since I develop with Puppet a lot, that minimal box needed at least the Puppet software itself. The code in vagrant-lxc-base-boxes could do that, but only for Debian-based guests. So I refactored that code to cover CentOS boxes too.
And it worked out great! So great that I'm sharing the box through Vagrant Cloud, so everyone can benefit from using vagrant-lxc.
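For CentOS guests, the Puppet preinstall step essentially amounts to enabling the Puppet Labs yum repository and installing the package. A sketch for a CentOS 6 guest; the repo release RPM shown is the el-6 one, so treat the URL as an assumption for your release:

```shell
# Inside the CentOS 6 guest: enable the Puppet Labs repo (el-6 release RPM assumed)
$ sudo rpm -ivh https://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm
# Install the Puppet agent from that repo
$ sudo yum install -y puppet
# Sanity check
$ puppet --version
```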
It felt great to get that VM up and running through Vagrant alone. It's like heaven: you can actually start coding in a vagrant-lxc box first, and then test your code on a so-called real-life production server simply by running vagrant up!
Since I had such good experiences with Vagrant myself while developing Puppet modules, the developers at the customer found out that Vagrant is a really helpful tool in the actual development part of a project. So we started with a tutorial on Vagrant together with VirtualBox.
In the initial stage of the Vagrant implementation we all used a Vagrant box provisioned from the internet. In the never-ending process of improving the infrastructure, we arrived at a setup where a custom base box is provisioned by the operations team.
This base box is deployed using the same Puppet code, and therefore the same configuration, as a production-like server.
Every now and then that base box is updated by one of the operations people and placed somewhere accessible to the developers.
The developers have Vagrant configured together with the vagrant-box-updater plugin. That way, every time they bring up a Vagrant project based on this base box, a check is performed to see whether they are using the latest provisioned base box.
I do know it looks a lot like the golden-image era. But I also believe both teams should focus on the right topic: writing code for developers, and managing infrastructure for operations.
Together with a well-thought-out deployment process (blog post coming soon), this works out really well.
VirtualBox custom base box
The base box is crafted from the vStone boxes and provisioned through Puppet.
$ vagrant init vStone/centos-6.x-puppet.3.x
$ vagrant up
Once the box is up and running, some manual tasks need to be done:
$ vagrant ssh
$ sudo -s
# yum upgrade
# yum groupinstall "Development Tools" -y
# setenforce 1
# vim /etc/sysconfig/selinux
Some tweaks are needed for the VirtualBox guest additions on CentOS 6.5:
# cd /usr/src/kernels/<kernel_release>/include/drm/
# ln -s /usr/include/drm/drm.h drm.h
# ln -s /usr/include/drm/drm_sarea.h drm_sarea.h
# ln -s /usr/include/drm/drm_mode.h drm_mode.h
# ln -s /usr/include/drm/drm_fourcc.h drm_fourcc.h
# exit
$ exit
Check the VirtualBox guest additions using the vagrant-vbguest plugin; that way the guest additions are updated automatically every time you bring up the box:
$ vagrant plugin install vagrant-vbguest
$ vagrant suspend
$ vagrant up
Package the existing box with some default Vagrant configuration:
$ vim Vagrantfile.pkg

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box_url = "http://path/to/custom.box"
  config.vm.hostname = "CUSTOMHOSTNAME"
  config.box_updater.autoupdate = true
end
Creation of the box:
$ vagrant package --output custom.box --vagrantfile Vagrantfile.pkg
The custom.box file is the actual box you need to place somewhere accessible to the development team.
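That "accessible place" can be as simple as a directory served over plain HTTP, which is what the box_url in the packaged Vagrantfile points at. A minimal sketch; the path and port are examples, not what we actually run:

```shell
# Put the box where a web server can reach it (example path)
$ mkdir -p /srv/boxes && cp custom.box /srv/boxes/
# Quick-and-dirty HTTP server for testing (Python 2 stdlib;
# on Python 3 use: python3 -m http.server 8000)
$ cd /srv/boxes && python -m SimpleHTTPServer 8000
```

The developers' box_url would then be http://yourserver:8000/custom.box; in a real setup you'd serve this from a proper web server instead.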
LXC custom base box
For the LXC part a custom base box can also be created. To get the whole process done automatically, I extended the vagrant-lxc-base-boxes project with an own_box feature.
That way you can easily create a Vagrant box from an actively running LXC container that you configured for your own needs.
$ git clone firstname.lastname@example.org:visibilityspots/vagrant-lxc-base-boxes.git
$ cd vagrant-lxc-base-boxes
$ ACTIVE_CONTAINER=lxc-container-name make own_box
To get the name of the running LXC container, you can use the lxc-ls command.
When I started using both virtualization platforms next to each other, I had created two identical Vagrantfiles, with the only difference being the box.
But when reading through the docs I found out that you can also override settings per provider.
So I combined those two Vagrantfiles into one. Depending on which provider you choose (VirtualBox by default), the right box and settings are configured.
That way you only have to manage one Vagrantfile!
# -*- mode: ruby -*-
# vi: set ft=ruby :

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.define "default" do |default|
    default.vm.hostname = "vagrant0"

    default.vm.provider :virtualbox do |virtualbox, override|
      override.vm.box = "vStone/centos-6.x-puppet.3.x"
      override.vm.network :forwarded_port, guest: 80, host: 8080
    end

    default.vm.provider :lxc do |lxc, override|
      override.vm.box = "visibilityspots/centos-6.x-puppet-3.x"
      lxc.container_name = 'vagrant-container'
    end
  end
end
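With that single Vagrantfile, switching platforms is just a matter of the --provider flag:

```shell
# Default provider: VirtualBox box with the forwarded port
$ vagrant up
# Same project, but as an LXC container with its own box and container name
$ vagrant up --provider=lxc
```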
I abuse the vagrant-yum-repo-server project to showcase the usage of Vagrant in my world.
As you can imagine, this opens up the gates for developers AND operations: since they are all using the same Puppet tree, you have control over the configuration of the box in all stages of the project.
Vagrant plugins I use

The plugins referred to throughout this post:

- vagrant-lxc
- vagrant-vbguest
- vagrant-box-updater
Some improvements I still need more time for:
- automating the actual box creation, for example with Jenkins
- auto-updating the base boxes, processed like a development box, using the Ansible orchestration flow
And of course the suggestions made by, who knows, you? Because this setup can always be improved; it will never be done...