I do not recommend using Docker for everything. In many cases a locally installed package or a full virtual machine is better suited for a given problem. Heavy use of Docker also means downloading dozens of images, each of them several hundred megabytes in size.
However, there is a sweet spot in between, where a quick Docker command allows you to effortlessly complete a task without littering your host system with a lot of dependencies.
Today I want to share some of the Docker invocations that I find myself using more and more often.
Preparations
You need Docker installed, e.g. on Ubuntu by typing
sudo apt install docker.io
Then you should be able to type `docker` in a terminal and get the following output:
$ docker
Usage: docker [OPTIONS] COMMAND
A self-sufficient runtime for containers
Options:
...
Now you’re ready. Before we start with the actual calls, I’d like to explain three command-line parameters that we will see over and over again:
- The `--rm` switch removes the container again after it has run. This is something that we dearly want in our set of one-off commands below. If we omit it, Docker will fill its local store with new containers on every call.
- The `-i` parameter tells Docker that we want to interact with the container, e.g. because we start a shell where we want to type something.
- The `-t` parameter allocates a pseudo-terminal, i.e. Docker will pass back and forth the special sequences that are needed for command-line editing.
If you call one of the commands below but nothing is shown, make sure that you have added the `-it` parameters before the name of the image.
Note: Some of the Docker images invoked below are not official builds from the respective projects, but third-party images from other developers. Take care when running arbitrary images and verify that they indeed do what they claim to do.
Testing a Linux Feature
If you want to test a command or some package in a Linux environment before you run it on your machine, or if you’re curious what exactly `rm -fr /` will do to a system (don’t try that on your computer!), you can tell Docker to start a Bash shell in a pristine Ubuntu installation:
docker run -it --rm ubuntu:latest bash
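If you reach for this often, you can wrap the call in a small shell function, e.g. in your `~/.bashrc` (a minimal sketch; the name `sandbox` is my own invention, and running it of course requires Docker):

```shell
# Hypothetical helper: open a throwaway Ubuntu container.
# With no arguments it starts bash; otherwise it runs the given command.
sandbox() {
    docker run -it --rm ubuntu:latest "${@:-bash}"
}
```

Then `sandbox` drops you into a fresh shell, and everything you break there disappears when you exit.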
Test Your Module in Cutting-Edge Python
Worried that your code might not work in the newest Python? Run it there without having to install that version on your machine or wrangle with PPAs:
docker run -it --rm \
--volume="$PWD:/code" \
--workdir /code \
python:3.9
This call has two more parameters that I should explain:
- Normally, Docker containers are isolated from their host. However, sometimes we do want to access files from the host in a container. `--volume` allows us to do this by mounting a local folder (in this case `$PWD`, the current directory) to a folder in the container (here, `/code`).
- `--workdir` tells the container that the command it starts should be run in this folder. The `python` image will automatically run `python`. By setting the workdir, we can run import statements just as we would locally.
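This pattern also makes it cheap to try a module against several interpreter versions in a row. A sketch, assuming a hypothetical test script `test.py` in the current directory (requires Docker and pulls each image on first use):

```shell
# Hypothetical helper: run a script against several Python versions.
# Usage: py_matrix test.py 3.7 3.8 3.9
py_matrix() {
    local script=$1
    shift
    local version
    for version in "$@"; do
        echo "--- Python $version ---"
        docker run --rm \
            --volume="$PWD:/code" \
            --workdir /code \
            "python:$version" python "$script"
    done
}
```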
Run a Node.js App without Node or JS
Say you have a specific Node version on your production system and want to run the same code locally in as similar an environment as possible. Docker allows you to specify exactly which version you need.
docker run --rm \
--volume="$PWD:/home/node/app" \
--workdir /home/node/app \
-p 8080:80 \
node:12.16.1 npm run start
Here we use another useful Docker parameter: `-p` allows us to map ports from the container to the host system. In this case, we map port `80` of the container (the standard HTTP port) to port `8080` on our host system. The application is then available in a browser under `http://localhost:8080`.
Note that we do not necessarily need the `-it` parameters here, since we do not expect to interact with the container.
Attention: If the call to `npm run start` ends in a daemon like `nodemon` or `pm2` being spun up, the script will exit, and with it the whole container, including the daemon. You need to make sure that whatever `npm run start` calls stays in the foreground.
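For illustration, compare these two hypothetical `package.json` start scripts (with `server.js` standing in for your actual entry point): the first keeps the Node process in the foreground and thus the container alive, while the second hands off to a daemon and returns, so the container exits immediately.

```json
{
  "scripts": {
    "start": "node server.js",
    "start-daemonized": "pm2 start server.js"
  }
}
```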
Build Your Static Jekyll Blog without Ruby
This is a quick one to build your blog with a static site generator like Jekyll without installing its dependencies.
export JEKYLL_VERSION=4.1.1
docker run --rm \
--volume="$PWD:/srv/jekyll" \
jekyll/jekyll:"$JEKYLL_VERSION" \
bundle update
Of course, you can do the same with Eleventy:
docker run --rm \
--volume="$PWD":/app \
femtopixel/eleventy --output=/app/_site
What is this strange “image name, then parameter” syntax? The image author decided to specify `CMD` and `ENTRYPOINT` in the `Dockerfile` in such a way that Docker always calls a specific program when `docker run` is invoked. In this case, it’s the `eleventy` program, and `--output=/app/_site` is its parameter. If you look closely, the Jekyll example works the same way: `bundle update` is the parameter to the `jekyll` command that Docker automatically calls for us.
Validate HTML, CSS and SVG Files with the Official W3C Validator
The W3C publishes the code for its HTML validator as a Docker image. This allows us to effortlessly validate local HTML files with a single command-line call and without installing Java on the host.
docker run -it --rm \
--volume "$PWD:/test" \
validator/validator:latest \
vnu-runtime-image/bin/vnu /test/index.html
The image’s default is to spin up a validator service listening on port 8888, but the above invocation skips this and simply validates the given file.
Note that since we mount the current folder under `/test` in the container, the call needs to reference the test files with the `/test` prefix. If we want to remedy that, we can restructure the call so that the container executes relative to its `/test` folder:
docker run -it --rm \
--volume "$PWD:/test" \
--workdir /test \
validator/validator:latest \
/vnu-runtime-image/bin/vnu index.html
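If you validate files regularly, a small wrapper function keeps the call short (a sketch; the name `vnu` is my own choice, and it requires Docker):

```shell
# Hypothetical wrapper: validate files in the current folder with the
# W3C validator image; all arguments are passed through to vnu.
vnu() {
    docker run -it --rm \
        --volume "$PWD:/test" \
        --workdir /test \
        validator/validator:latest \
        /vnu-runtime-image/bin/vnu "$@"
}
```

Then `vnu index.html` or `vnu --css assets/site.css` work from any folder.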
The validator can, by the way, also work with SVG and CSS files:
docker run -it --rm \
--volume "$PWD:/test" \
validator/validator:latest \
vnu-runtime-image/bin/vnu --svg /test/assets/colorize_icons.svg
docker run -it --rm \
--volume "$PWD:/test" \
validator/validator:latest \
vnu-runtime-image/bin/vnu --css /test/assets/site.css
Run LaTeX without Installation
DANTE e.V., the German TeX user group, publishes a Docker image of the full TeX Live distro.
This makes converting a `.tex` file to PDF a far less daunting task than if one had to set up TeX on the system beforehand:
docker run --rm -it -v "$PWD":/home danteev/texlive latexmk -pdf document.tex
# results in document.pdf in the same folder
Convert XML to PDF with XSL-FO
Sometimes you need to build high-quality PDFs from XML input. One way to do this is to use XSL-FO (Wikipedia). However, to use it you need an FO processor and an XSLT processor on your system, and that usually means Java, getting `CLASSPATH` right, and all of that. Can Docker make our lives easier? Why, yes:
docker run -it --rm \
--volume "$PWD":/src \
--volume /tmp:/dst \
fpco/docker-fop \
-c /src/fop.conf \
/src/input.xml \
-xsl /src/stylesheet.xsl \
/dst/output.pdf
We mount two local folders here: `$PWD` to `/src` and our local `/tmp` to `/dst` in the container. This way, the PDF is built directly into our local `/tmp` folder.
Download a Single Folder from a GitHub Repository
A famously difficult task is to download a single folder from a larger repository on GitHub. There is no web UI for that, and Git doesn’t help us either (apart from jumping through hoops with sparse checkouts and some other tricks that still won’t result in a single folder).
But GitHub also has a little-known Subversion interface, and we can use it to tell `svn` to download a single folder:
docker run --rm \
--volume "$PWD":/src \
jgsqware/svn-client \
export https://github.com/googlefonts/noto-source/trunk/src
This command downloads the `src` folder of the otherwise huge Google Noto fonts repository, and nothing more.
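The URL mapping behind this is simple: the default branch of a repository is exposed under `trunk/`, other branches under `branches/<name>/`. A small helper sketch (the function name is my own) that builds such URLs:

```shell
# Hypothetical helper: build the Subversion URL for a folder in a
# GitHub repository.
# Usage: gh_svn_url owner/repo path/in/repo [branch]
gh_svn_url() {
    local repo=$1 path=$2 branch=${3:-}
    if [ -z "$branch" ]; then
        # no branch given: use the default branch, exposed as "trunk"
        echo "https://github.com/$repo/trunk/$path"
    else
        echo "https://github.com/$repo/branches/$branch/$path"
    fi
}
```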
Run WPScan (Including Token and Database Upgrade) Against Your WordPress
WPScan is a WordPress security tool that scans a WordPress site for a huge number of possible attack vectors. If you register on their site, you receive an API token that you can use to check sites against their vulnerability database.
At the same time, WPScan stores some common information in a local store. If it deems the store too old, it will ask you whether it should update it. And when invoked again, it will ask again. And again.
We therefore want to store the data locally, so that the container can find it on repeated calls.
Store this script somewhere on your `$PATH` under the name `wpscan` (don’t forget to make it executable):
#!/bin/bash
set -euo pipefail
# Append /wpscan outside the fallback, so a custom XDG_CACHE_HOME
# also gets its own wpscan subfolder; -p creates missing parents.
CACHE_DIR="${XDG_CACHE_HOME:-$HOME/.cache}/wpscan"
mkdir -p "$CACHE_DIR"
docker run -it --rm \
--volume "$CACHE_DIR:/wpscan/.wpscan" \
wpscanteam/wpscan "$@"
Then you can call it just as if it were locally installed, API token and all:
$ wpscan --url example.com --api-token vushiezoonei7shiteeShiekohcoWohp
I hope I’ve shown you some useful Docker commands that you can use in your day-to-day work. Have your own? Tell me on Twitter!