LK4D4 Blog


Full Category Index

Posts in “Docker”

Developing Arduino with Docker

I’m running Gentoo, and using Arduino on Gentoo isn’t very easy: see Arduino on Gentoo Linux.

It is easy with Docker, though. Let’s see how we can upload our first program to an Arduino Uno without installing anything apart from Docker.

Kernel Modules

For the Arduino Uno I need to enable

Device Drivers -> USB support -> USB Modem (CDC ACM) support

as a module.

Then I compile and load it with

make modules && make modules_install && modprobe cdc-acm

in /usr/src/linux. Finally I connect the Arduino and see it as /dev/ttyACM0.

Installing ino

For this we just need an image from hub.docker.com:

docker pull coopermaa/ino

It’s slightly outdated, but I sent a PR to use a new base image, because that’s how we do it in the open source world. Anyway, it works great. Let’s create a script for calling ino through Docker. Add the following script to your $PATH

#!/bin/sh
docker run --rm --privileged --device=/dev/ttyACM0 -v "$(pwd)":/app coopermaa/ino "$@"

and call it ino. Don’t forget to make it executable:

chmod +x ino

Alternatively, you can use an alias in your .bashrc:

alias ino='docker run --privileged \
  --rm \
  --device=/dev/ttyACM0 \
  -v "$(pwd)":/app \
  coopermaa/ino'

but the script worked better with my vim setup.

Uploading program

Let’s create a program from a template and upload it to the board:

$ mkdir blink && cd blink
$ ino init -t blink
$ ino build && ino upload

Whoa! It’s alive!

Vim integration

I’m using the Vim plugin for ino; you can easily install it with any plugin manager for vim. You don’t need anything special, it’ll just work. You can compile and upload your sketch with <Leader>ad.

Known issues

To use ino serial you need to add -t to the docker run arguments in your script. It works pretty strangely, though: you need to kill the /usr/bin/python /usr/local/bin/ino serial process by hand every time, but it works and isn’t too bad.

Also, files created by ino init will belong to root, which isn’t very convenient.

That’s all!

Thank you for reading and special thanks to coopermaa for ino image.

30 days of hacking Docker

Prelude

Yesterday I finished my first 30-day streak on GitHub. Most of my contributions were to Docker – the biggest open source project written in Go. I learned a lot this month, and it was really cool. I think this is mostly because of the Go language. I had been programming in Python for five years and was never this excited about open source, because Python isn’t even half as fun as Go.

1. Tools

There are a lot of tools for Go; some of them are simply must-haves.

Goimports - like go fmt, but with cool import handling. I really think go fmt should be replaced with goimports in future Go versions.

Vet - analyzes code for suspicious constructs. With it you can find bad format strings, unreachable code, a mutex passed by value, and so on. See my PR about vet errors in Docker.

Golint - checks code against the Google style guide.

2. Editor

I love my awesome vim with the awesome vim-go plugin, which is integrated with the tools mentioned above. It formats code for me, adds needed imports, removes unused imports, shows documentation, supports tagbar, and more. And my favourite - go to definition. I really suffered without it :) With vim-go my development pace became faster than I could imagine. You can see my config in my dotfiles repo.

3. Race detector

This is one of the most important and one of the most underestimated things. It is very useful and very easy to use. You can find a description and examples here. I’ve found many race conditions with this tool (#1, #2, #3, #4, #5).

4. Docker specific

Docker has a very smart and friendly community. You can always ask for help about hacking in #docker-dev on Freenode. But I’ll describe some simple tasks that come up when you try to hack on Docker for the first time.

Tests

There are three kinds of tests in the Docker repo:

  • unit - unit tests (ah, we all know what unit tests are, right?). These tests are spread all over the repository and can be run with make test-unit. You can run tests for one directory by specifying it in the TESTDIRS variable. For example

    TESTDIRS="daemon" make test-unit

    will run tests only for the daemon directory.

  • integration-cli - integration tests that use external docker commands (for example docker build, docker run, etc.). It is very easy to write this kind of test, and you should do it if you think your changes can alter Docker’s behavior from the client’s point of view. These tests are located in the integration-cli directory and can be run with make test-integration-cli. You can run one or more specific tests by setting the TESTFLAGS variable. For example

    TESTFLAGS="-run TestBuild" make test-integration-cli

    will run all tests whose names start with TestBuild.

  • integration - integration tests that use internal docker data structures. They are deprecated now, so if you want to write tests you should prefer integration-cli or unit. These tests are located in the integration directory and can be run with make test-integration.

All tests can be run with make test.

Build and run tests on host

All make commands execute in a Docker container; it can be pretty annoying to build a container just to run unit tests, for example.

So, to run unit tests on the host machine you need a canonical Go workspace. When it’s ready you can just symlink the docker repo into src/github.com/dotcloud/docker. But we still need the right $GOPATH; here is the trick:

export GOPATH=<workspace>/src/github.com/dotcloud/docker/vendor:<workspace>

And then, for example, you can run:

go test github.com/dotcloud/docker/daemon/networkdriver/ipallocator

Some tests require external libraries, for example libdevmapper; you can disable them with the DOCKER_BUILDTAGS environment variable. For example:

export DOCKER_BUILDTAGS='exclude_graphdriver_devicemapper exclude_graphdriver_aufs'

To quickly build a dynamic binary you can use this snippet in the docker repo:

export AUTO_GOPATH=1
export DOCKER_BUILDTAGS='exclude_graphdriver_devicemapper exclude_graphdriver_aufs'
hack/make.sh dynbinary

I use those DOCKER_BUILDTAGS on my btrfs system, so if you use aufs or devicemapper you should adjust them for your driver.

Race detection

To enable race detection in docker I use this patch:

diff --git a/hack/make/binary b/hack/make/binary
index b97069a..74b202d 100755
--- a/hack/make/binary
+++ b/hack/make/binary
@@ -6,6 +6,7 @@ DEST=$1
 go build \
        -o "$DEST/docker-$VERSION" \
        "${BUILDFLAGS[@]}" \
+       -race \
        -ldflags "
                $LDFLAGS
                $LDFLAGS_STATIC_DOCKER

After that, all binaries will be built with race detection. Note that this will slow docker down a lot.

Docker-stress

There is an amazing docker-stress tool from Spotify for Docker load testing. Usage is pretty straightforward:

./docker-stress -c 50 -t 5

Here 50 clients try to run containers, each of which stays alive for five seconds. docker-stress uses only docker run jobs for testing, so I prefer to also run, in parallel, something like:

docker events
while true; do docker inspect $(docker ps -lq); done
while true; do docker build -t test test; done

and so on.

You definitely need to read Contributing to Docker and Setting Up a Dev Environment. I really don’t think anything else is needed to start hacking on Docker.

Conclusion

This is all I wanted to tell you about my first big open source experience. Also, just today the Docker folks launched some new projects, and I am very excited about them. So, I want to invite you all to the magical world of Go, open source and, of course, Docker.

Deploying blog with docker and hugo

Prelude

Recently I moved my Jabber server to a DigitalOcean VPS. Running Prosody in docker was so easy that I decided to create this blog. And of course to deploy it with docker!

Content

First, we need a container with the templates and content for blog generation. I used the following dockerfile:

FROM debian:jessie

RUN apt-get update && apt-get install --no-install-recommends -y ca-certificates git-core
RUN git clone http://github.com/LK4D4/lk4d4.darth.io.git /src
VOLUME ["/src"]
WORKDIR /src
ENTRYPOINT ["git"]
CMD ["pull"]

There is no magic here: we just clone the repo to /src (it will be used below) and update it on container start.

Build the image:

docker build -t blog/content .

Create the data container:

docker run --name blog_content blog/content

To update content and templates from GitHub we just need:

docker start blog_content

Hugo

Hugo is a very fast static site generator written in Go (so many cool things are written in Go, btw).

The idea is to run hugo in a docker container so that it reads content from one directory and writes the generated blog to another.

Hugo dockerfile:

FROM crosbymichael/golang

RUN apt-get update && apt-get install --no-install-recommends -y bzr

RUN go get github.com/spf13/hugo

VOLUME ["/var/www/blog"]

ENTRYPOINT ["hugo"]
CMD ["-w", "-s", "/src", "-d", "/var/www/blog"]

So here we go get hugo and use /src (remember this from the content container?) as its source directory and /var/www/blog as the destination.

Now build the image and run the container with hugo:

docker build -t blog/rendered .
docker run --name blog --volumes-from blog_content blog/rendered

Here is the trick with --volumes-from: we use /src from the blog_content container, and, yeah, we’re going to use /var/www/blog from the blog container.

Nginx

So now we have a container with the templates and content (blog_content) and a container with the ready-to-use blog (blog); it’s time to show this blog to the world.

I wrote a simple config for nginx:

server {
    listen 80;
    server_name lk4d4.darth.io;
    location / {
        root /var/www/blog;
    }
}

Put it in the sites-enabled directory, which is used in this pretty dockerfile:

FROM dockerfile/nginx

ADD sites-enabled/ /etc/nginx/sites-enabled

Build the image and run the container with nginx:

docker build -t nginx .
docker run -p 80:80 -d --name=nginx --volumes-from=blog nginx

That’s it: the blog is now running on lk4d4.darth.io and you can read it :) I can update it just with docker start blog_content.

Conclusions

It’s really fun to use docker. You don’t need to install and remove tons of crap on the host machine; docker can handle it all for you.
