Fun experiences using Wine in Docker
Posted on Wed 13 April 2016 in blog
Background
I sometimes work with a legacy codebase that targets both Windows and Linux; the build system is GNU Make-based, and builds on Linux. For the Windows components, the build system invokes NMAKE under Wine. Yes, it's messy; yes, I want to replace it; but no, there's no time budgeted for that right now.
Lately, I've been moving more and more of our build infrastructure to Docker. It makes it easier to keep the build environments up-to-date for developers, and simplifies the setup for Continuous Integration. Check out Scuba, my tool for using Docker to perform local builds, and GitLab CI.
You can see where this is going. I decided to convert our legacy build VM into a Docker image; Wine and NMAKE included. I didn't know what I was getting myself into.
VM to Docker Image
Of course, the right way to create a Docker image is to use a Dockerfile. However, this VM had experienced years of tweaks, potentially relying on subtle toolchain-version-specific quirks. I wasn't about to re-build it from scratch, so I decided to convert the VM filesystem directly to a Docker image.
The initial conversion turned out to be straightforward. First, I cloned the VM so I could work destructively. Next, I uninstalled everything that wasn't necessary for a Docker image (KDE, X11, the firewall, etc.). Then, I powered down the cloned VM and mounted its virtual disk under another VM running Docker. From there, it's as simple as using tar to create the Docker image:
# cd /mnt/buildvm
# tar -c * | docker import --change='CMD /bin/bash' - buildsys:1
This adds all of the directories from the mounted build VM disk to a tar stream, which is piped into docker import - (where - means "from standard input"). Note that I'm also setting the CMD to be /bin/bash; this way, the image can be run by simply using docker run -it buildsys:1, without having to specify /bin/bash on every run.
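For reference, here's a rough sketch of the disk-mounting step mentioned above. It assumes the cloned VM's disk is a qcow2 file exposed via qemu-nbd; the file name and partition number are placeholders, and your hypervisor's tooling may differ:

# Expose the cloned VM's virtual disk as a block device
modprobe nbd max_part=8
qemu-nbd --connect=/dev/nbd0 buildvm-clone.qcow2

# Mount the root partition where tar can reach it
mount /dev/nbd0p1 /mnt/buildvm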
After the initial conversion was done and I no longer needed to "boot" in the conventional way, I continued to run the image, removing more stuff:
# You don't need a kernel when running under Docker,
# but you don't want to remove other things that "depend" on it.
rpm -e --nodeps kernel-xxx

# Boot-time machinery is useless in a container
yum remove dracut grub plymouth

# Drop package caches, logs, and temp files to shrink the image
yum clean all && rm -rf /var/cache/yum
rm -rf /var/log/* /tmp/*
I definitely had to be careful not to remove things that Wine unexpectedly relied upon. As I did this, I occasionally ran the image through a docker export / docker import cycle to actually reduce the virtual size of the image.
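The cycle itself is quick. Here's a sketch (the container and tag names are mine); note that docker export / docker import discards image metadata, so any --change options have to be re-applied on each import:

# Run a throwaway container so there is something to export
docker run --name flatten buildsys:1 /bin/true

# Round-trip the filesystem into a fresh single-layer image;
# metadata like CMD is lost, so set it again here
docker export flatten | docker import --change='CMD /bin/bash' - buildsys:2
docker rm flatten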
Wine without X11
The first time I tried to run wine in a Docker container, I was met with the following warnings/errors:
Application tried to create a window, but no driver could be loaded.
Make sure that your X server is running and that $DISPLAY is set correctly.
Googling for the error turned up results from others crazy enough to try using Wine in Docker, like this SuperUser post and this GitHub project. It seemed that I would need some sort of X server after all, and that Xvfb (X Virtual FrameBuffer) was the solution.
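If Xvfb isn't already in your image, it's a small install. A sketch, assuming the RHEL-flavored base implied by the yum commands above; package names vary by distribution (on Debian/Ubuntu it's xvfb):

# Xvfb is a headless X server; the xvfb-run helper starts it,
# sets $DISPLAY, runs your command, and tears the server down
yum install -y xorg-x11-server-Xvfb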
You can simply run xvfb-run wine whatever.exe, and this avoids the "no $DISPLAY" problem. Great. However, I didn't want to change any of our build scripts just so they could run under Docker. Specifically, I didn't want to track down every invocation of wine and prefix it with xvfb-run; what if we're running on native X?
Instead, I came up with what I believe is a novel solution: specify the ENTRYPOINT in my Dockerfile to be xvfb-run.
This essentially prefixes the user's command with whatever is specified in ENTRYPOINT - just what we want to do with xvfb-run. So the last time I re-imported the tarball, I added --change='ENTRYPOINT xvfb-run'. There's probably a way to do this after it's been imported, but this was the most convenient at the time.
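For what it's worth, one after-the-fact option (a sketch, not what I actually did) is a tiny Dockerfile layered on top of the imported image; the exec form of ENTRYPOINT guarantees the user's command is appended as arguments to xvfb-run:

# Wrap every command in xvfb-run (exec form appends the user's command)
FROM buildsys:1
ENTRYPOINT ["xvfb-run"]

Building that with docker build -t buildsys:xvfb . gives the same wrapping behavior without another export/import round-trip (the buildsys:xvfb tag is just a name I made up).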
Now, when I run docker run --rm -it buildsys:1 /bin/bash, I can verify that $DISPLAY is set, and Wine is happy. For now.
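A quick smoke test, again as a sketch; buildsys:xvfb is the hypothetical tag built above, and its exec-form ENTRYPOINT is what wraps the command:

# Effectively runs: xvfb-run sh -c 'echo $DISPLAY; wine --version'
docker run --rm buildsys:xvfb sh -c 'echo $DISPLAY; wine --version'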
More to come...