Throughout my career, I’ve seen a couple of recurring patterns related to the tools I write: I write a lot of small CLI tools, and I like to share them with my coworkers (and, whenever possible, the rest of the world).
This has led to several iterations of solving the same problem: “How do I make this tool easy to run?” I don’t want to burden people with understanding the intricacies of all my tools’ dependencies, which tend to be Ruby, some number of gems, and possibly some other common Unix utilities. The solutions I’ve come up with have included a lengthy README with detailed instructions, Bundler with Rake tasks to do all the heavy lifting for non-Ruby things, fpm, and even “curl bash piping” (yes, I’m horrible).
Recently I decided to use Docker to solve this problem, since I’m using it so much anyway. Using Docker has some huge benefits for sharing applications of all types: the dependency list gets whittled down to just Docker, things work on more platforms, testing gets simpler, and it’s the new hotness, which makes people say “whoa,” and that’s fun. That said, the downsides can be frustrating: working with files on your machine gets messy, there’s more typing thanks to the Docker-related preamble, things are less straightforward, simple mistakes can leave lots of images and containers to clean up, and the executable gets significantly larger (since the Docker image is a whole, albeit lightweight, OS userland to run the app). After weighing these pros and cons, I’ve found that telling a coworker to `docker pull registry.url/my/app` and run it with `--help` is so much more convenient than the alternatives.
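In practice, that onboarding boils down to a couple of commands (using the placeholder registry path from this post):

```sh
docker pull registry.url/my/app
docker run --rm registry.url/my/app --help
```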
So what’s the big deal?
Well, Docker allows the developer to perfectly specify how their app’s dependencies should be installed, how they’re configured, and precisely which versions are in place. It also means that if it works anywhere, it will work everywhere. Complete control over the environment in which my tool runs means I can do it right and focus less on making install instructions and more on ease of use.
How is it done?
I only really need to add a few pretty simple components to my codebase:
- Create a Dockerfile that installs, sets up, and runs my tool.
  - If the tool works with files, I highly recommend creating a VOLUME to serve as a standard location for getting data into the tool. I tend to use `VOLUME /src` and then set `WORKDIR /src`. More on this later.
  - The ENTRYPOINT should be just a bare run of my tool, say `ENTRYPOINT ["app"]` where my tool is `app`.
  - The CMD should be some sane defaults for command-line flags or arguments, say `CMD ["--source", "."]` where I’d pass a flag called `--source` to my tool with an argument of `.` for the current directory.
  - Bake in good practices, like creating and using a non-root user rather than running as root.
- Set up some CI goodness to poll my repo (because a tool worth writing is worth version-controlling and testing) and have it publish to a Docker registry on success.
  - Mostly it just needs to run my RSpec tests and, if they pass, run `docker build -t registry.url/my/app .` to build my image (a sketch of this step follows the list).
- Write README instructions that explain how to get Docker and how to run my app.
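Here’s a minimal sketch of what that CI step could look like as a plain shell script. The exact commands depend on your CI system, and the registry push/login details are my assumptions rather than anything prescribed above:

```sh
#!/bin/sh
set -e

# Run the test suite first; stop here if anything fails
bundle exec rspec

# Build the image and publish it to the registry on success
docker build -t registry.url/my/app .
docker push registry.url/my/app
```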
If the `ENTRYPOINT` and `CMD` are configured as described above and I’m using `/src` for sharing local data with the container, then it can be pretty easy to share and run the tool. Part of the README should include a line encouraging users to do something like this:
```sh
alias app='docker run --rm -v ${PWD}:/src registry.url/my/app'
```
If the image is at `registry.url/my/app` and I want people to run my tool by calling the `app` command, this will be pretty clean for a user. If I write an app that requires a source directory to be passed in, just running `app` will mount my present working directory (`${PWD}`) on my machine as `/src` inside the Docker container. Since I specified `WORKDIR /src` in the Dockerfile, `/src` is the working directory when my app runs, so `.` on my machine becomes `.` in the container! Also note the `--rm` in the command: it tells Docker to remove the container after it exits, which keeps extra containers from piling up after each run of my tool.
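With that alias in place, anything I pass on the command line replaces the `CMD` defaults, so overriding the source directory works just like it would for a native command (the `docs` subdirectory here is purely hypothetical):

```sh
# Scan the current directory, using the CMD defaults
app

# Scan a specific subdirectory instead of the default "."
app --source docs
```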
Putting it all together
I think a quick example is in order. Let’s pretend I’m writing an app to find all the HTML files in a directory that include capital letters in the name (eww). Say I have my app in a directory, and it consists of just an executable and a Dockerfile for building the tool. Here’s a `tree` of such a directory:
```
app
├── Dockerfile
└── bin
    └── app
```
Here’s the contents of my app’s executable (`app/bin/app`):
```ruby
#!/usr/bin/env ruby

require 'optparse'

# CLI parsing
options = {}
OptionParser.new do |o|
  o.on('-s', '--source [DIRECTORY]', 'Source directory') do |s|
    options[:directory] = s || '.'
  end

  o.on('-h', '--help') do
    puts o
    exit
  end

  o.parse!
end

raise 'No directory specified!' unless options[:directory]
raise 'Invalid directory' unless File.directory?(options[:directory])

Dir.glob(File.join(options[:directory], '**', '*.*')).each do |file|
  if File.basename(file) =~ /.*[A-Z].*/ && file =~ /\.(html|HTML|htm|HTM)$/
    puts file
  end
end
```
Here’s my Dockerfile (`app/Dockerfile`):
```dockerfile
FROM ruby:2.3.4-alpine

RUN apk --no-cache upgrade

COPY . /app

RUN addgroup -S app \
 && adduser -S -g app app \
 && chown -R app:app /app \
 && chmod u+x /app/bin/app

VOLUME /src

USER app

WORKDIR /src

ENTRYPOINT ["/app/bin/app"]
CMD ["-s", "."]
```
Given the above files, build the Docker image:
```sh
cd app
docker build -t app .
```
Then run it:
```sh
docker run --rm -v ${PWD}:/src app --help
```
You should get this output:
```
Usage: app [options]
    -s, --source [DIRECTORY]         Source directory
    -h, --help
```
Now try making an alias and running it:
```sh
alias app='docker run --rm -v ${PWD}:/src app'
app --help
```
You should get the same output. Now navigate to a directory that has some HTML files in it (or in one of its subdirectories). Try running the command:
```sh
app
```
If none of your HTML files have capital letters in their names, you should just get returned to your prompt. If some do, you’ll get the relative paths to those files.
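For instance, if the directory happened to contain a file at `./pages/About.HTML` (a made-up name, purely for illustration), running `app` there would print:

```
./pages/About.HTML
```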
As you can see, the application looks like a regular system command, even though it’s buried inside a 150MB Docker image. Hopefully it’s clear why, from an end-user’s perspective, this is helpful.