Running a multi-arch cluster with Rancher/Docker

Just a heads-up — this no longer works! We’ll have to go back to the drawing board to update.

I just bought three Raspberry Pis to attempt to cheaply increase the capacity of my home cluster (I’m building something awesome 😃 )… This is a quick write-up on what I did and how I did it.

I currently run Rancher as my orchestration tool of choice. But first, I have to get Docker installed on my Pis.

I followed this guide by Hypriot to get started.

Once I got the Pi up and running, it was a bit of a challenge to get rancher-agent built for ARM. ARM support in Rancher is still experimental, so you can follow these instructions or download the pre-built images listed at the bottom.


If you don’t want to build all the required containers from scratch, you can just follow these simpler instructions on your ARM device:

docker pull withinboredom/agent-instance:v0.8.3
docker pull withinboredom/agent:v1.0.2
docker tag withinboredom/agent:v1.0.2 rancher/agent:v1.0.2

Now that you’ve gotten everything downloaded, go ahead and copy the command rancher gives you to add a custom host.
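The exact command comes from the Rancher UI (Infrastructure → Hosts → Add Host → Custom). For Rancher v1 it generally has the shape sketched below — the server address and registration token are placeholders here, so copy the real command from your own UI rather than typing this verbatim:

```shell
# Hypothetical shape of the custom-host registration command.
# <rancher-server> and <registration-token> come from your Rancher UI.
sudo docker run -d --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock \
  rancher/agent:v1.0.2 \
  http://<rancher-server>:8080/v1/scripts/<registration-token>
```

Because we tagged the ARM build as rancher/agent:v1.0.2 above, Docker uses the local image instead of pulling the x86 one.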

Within a few seconds you should see it appear in your Infrastructure tab, and you can deploy containers to it. When you see the network agent come online and then fail, jump back to your machine and run this:

docker tag withinboredom/agent-instance:v0.8.3 rancher/agent-instance:v0.8.3

And bam, everything should work — well, anything that is compiled for an ARM processor!
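If you want to sanity-check that an image on the Pi really is an ARM build, `docker inspect` can report the architecture it was built for — shown here against the agent image as an example; any locally present image works:

```shell
# Prints the image's build architecture: "arm" for the ARM builds,
# "amd64" for stock x86 images. The image must already be pulled locally.
docker inspect --format '{{.Architecture}}' rancher/agent:v1.0.2
```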

You may need to follow these instructions to be able to view logs and enter containers from the web:

docker exec -it rancher-agent bash
# Then inside the container, run this
cp /usr/bin/nsenter /var/lib/cattle/bin/nsenter
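The same fix can be applied without opening an interactive shell — a one-liner sketch, assuming the agent container is named rancher-agent as above:

```shell
# Copy nsenter to where the web console expects it, non-interactively.
docker exec rancher-agent cp /usr/bin/nsenter /var/lib/cattle/bin/nsenter
```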

cadvisor is still broken for the moment … still trying to figure that out.

Build from scratch

On your ARM device, we’ll need to build a few pieces from scratch. Depending on how powerful a device you have, this may take a while. First things first, you need to build s6-libs. This is pretty straightforward:

git clone
cd s6-builder
docker run -it rancher/s6-builder:v2.2.4.3_arm /opt/

Once you’ve built s6 for ARM, you now need to build agent-instance:

# We need to build agent-instance
git clone
cd agent-instance/
git remote add imikushin

# You can change the version number to reflect what you actually need
git checkout v0.8.3
git fetch imikushin
git merge imikushin/arm --no-ff
cd ..

# Now we need to build the actual agent
git clone
cd rancher/agent
git remote add imikushin
git fetch imikushin
git merge imikushin/multiarch-hosts
# Change the FROM line at the top of the Dockerfile to armhf/ubuntu:14.04

# Now tag the resulting images for release (You can skip this)
docker tag rancher/agent-instance:v0.8.3_arm withinboredom/agent-instance:v0.8.3
docker tag rancher/agent:v1.0.2_arm withinboredom/agent:v1.0.2
docker push withinboredom/agent-instance:v0.8.3
docker push withinboredom/agent:v1.0.2

# And tag for running
docker tag rancher/agent-instance:dev rancher/agent-instance:v0.8.3

# Now go execute the steps from the Solution section above
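The FROM-line edit mentioned in the build steps above can also be scripted instead of done in an editor. A sed one-liner sketch, assuming the cloned agent repo’s Dockerfile is in the current directory:

```shell
# Rewrite the FROM line to the ARM base image, keeping a .bak backup copy.
sed -i.bak 's|^FROM .*|FROM armhf/ubuntu:14.04|' Dockerfile
# Confirm the change took effect.
grep '^FROM' Dockerfile
```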


  • Looks like there was an update this morning for some containers. I’ll try to get an update to the images this evening.

    • Oh great, it’s broken.

      • Lars Martin

        Anything I can help you with? I would love to see a “stable” update process even if not yet officially supported.

        • Just got back from vacation a few hours ago. Going to try latest again and hope for the best

          • Andy Hume

            Hey Rob, wondering if you got anywhere with this?

          • I’ve had no luck getting it to run since the day I wrote this post … Just waiting for them to fix it, or if I get some time I’ll try fixing it myself.

          • Alexander Selishchev

Hi, thank you for your work, but could you please update the post with a “not working” label

          • Layne

            I got this to work. You can see my way too long blog post here: – but I think all that’s missing here is, combined with editing the file as described in the above comments, grabbing a cattle.jar release that has the multi-arch support and sticking it into the x86 server and bouncing. — Put it in /usr/share/cattle/cattle.jar inside the server container. This way, the Arm devices will download the proper arm binaries for everything, including cadvisor and various network tools, however cadvisor still doesn’t work. I haven’t locked that down yet. I did have to build my own agents as the provided agent images don’t work, but I am currently running a working hybrid environment after doing the above.

          • Thanks for this! I’ve added you to the “trusted user” list and un-spamified this amazing work. I’ve tried recompiling cadvisor for arm and using that binary in place of the one on the agent … but I didn’t know about USE_LOCAL_ARTIFACTS … that’s pretty awesome.

          • Layne

            Thanks for that. Disqus I guess is a little over-sensitive when it comes to spam. Also appreciate you laying this ground work for me. Wouldn’t have figured out the rest without this starting point. In fact, after looking through your other posts, I’m even working on converting my resource-heavy wordpress blog to Hugo and Caddy. So, just all around, thanks for an informative blog.

          • Layne

            I’ll try again, as I was marked as spam. I got this to work. The missing items are to download one of imikushin’s cattle.jar releases with multi-arch support and put it in the x86 server, also, start your x86 server with -e CATTLE_USE_LOCAL_ARTIFACTS=false as well. With that, and building the agents as above, with the edits to the run file, I have a working multi-arch environment. I wrote a tutorial, but you’ll have to google for it, links evidently mean spam.

          • Lars Martin

            I’ve successfully compiled latest changes from git. But the agent stops immediately:

            INFO: Running Agent Registration Process, CATTLE_URL=
            INFO: Checking for Docker version >=
            Please ensure Host Docker version is >= and container has r/w permissions to docker.sock

          • Andy Hume

            I had this working today on and off, though I’ve confused myself somewhat by really hacking around with things. E.g…

I fixed the docker version check that you mention above simply by commenting out the `verify_docker_client_server_version` call (line 454). I don’t think it does anything other than just dump you out if it fails and print a few bits and pieces. I also had to hardcode the var_lib_docker variable to `/var/lib/docker` on line 153 as it wasn’t able to parse this out inside `resolve_var_lib_docker`.

            With those changes I can run the container and it registers the host with rancher-server. But for some reason not every single time.

          • Lars Martin

Awesome. I’ve also fixed the version check by commenting it out. Not sure if there might be some side effects, since the Docker client version is 1.10.3 and the Docker server version is 1.11.1 (at least if you’re running HypriotOS 0.8), but this could be fixed by changing the download URL in the Dockerfile. With your hint to hardcode the var_lib_docker variable I’m able to register the host. Thanks. First test looks good, even if I don’t see any stats for that host in the Rancher UI.

          • I couldn’t get cadvisor to work — it looks like it downloads the binaries from the rancher server, which in my case are x86 … If you dropped the right binaries in the right place while it was running it would probably report stats. I saw it failing to run it repeatedly in `docker logs`.

          • Lars Martin

I managed to compile cadvisor for ARM – will try to use this binary now. But it’s not easy to find the place where the cadvisor download happens.

          • Try looking in /var/lib/cattle/bin/ on the running agent.

  • Cyril Coupel

Thanks for this help.
But as the agent was upgraded to 1.1.0, I am no longer able to create images for this release. Can anybody help me?

    • Are @laynefink:disqus’s steps still working (a link is in the featured comment)? I had a hurricane destroy all my infrastructure back in October … so I can’t really be much help at the moment…

      • Cyril Coupel

I’ll give it a try and keep you in touch, as it is for 0.8.3 and the latest release is 1.1.0.

Looks like in 1.2.2 there’s no single agent any longer.