Size does matter. Docker images can become quite large: every RUN instruction generates a new layer that remains part of the image, even if the files it creates are deleted later in the build. This wastes disk space and network bandwidth. The following are some steps for shrinking images in which builds have been performed; they work particularly well for images that contain Go executables.
In the Dockerfile
- Remove any build archives (downloaded source tarballs, package caches, and the like)
- Remove any packages that were installed only to build or compile and are not needed at runtime (see the sketch after this list)
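Because each RUN produces its own layer, the cleanup has to happen in the same RUN instruction as the install; deleting files in a later step leaves them sitting in an earlier layer anyway. Here is a minimal sketch for a Debian-based build; the package names, download URL, and paths are purely illustrative:

    # Install, build, and clean up in a SINGLE RUN so the build tooling and
    # source archives never land in a layer of the final image.
    RUN apt-get update && \
        apt-get install -y --no-install-recommends curl gcc make && \
        mkdir -p /tmp/src && \
        curl -sSL https://example.com/app.tar.gz | tar -xz -C /tmp/src && \
        make -C /tmp/src && \
        cp /tmp/src/app /usr/local/bin/ && \
        apt-get purge -y curl gcc make && \
        apt-get autoremove -y && \
        rm -rf /var/lib/apt/lists/* /tmp/src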
Compact the Image
If you perform a docker export of a container, it produces a tarball of the container’s filesystem with all of the image’s layers flattened into one. This can then be re-imported with docker import, often at a great reduction in size¹. For example, assuming the “big” image is named “consul-big” and the small one “consul-small”, executing the following command:
    docker export $(docker ps -a | grep big | awk '{print $1}') | docker import - consul-small
produces something like this:
    consul-big     latest   73780cdff089   10 minutes ago   297.4 MB
    consul-small   latest   b20aeaa4731c   16 minutes ago   35.9 MB
A reduction of over 250 MB. Not too shabby. I might be able to reduce it further, but I’m close to diminishing returns. A chunk of the remaining size is due to dependencies that progrium/consul has, primarily in some accessory shell scripts.
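Putting the whole flow together looks roughly like the sketch below. One caveat: the flattened image loses the original image’s metadata (CMD, ENTRYPOINT, EXPOSE, ENV and friends), so you either supply a command yourself at run time or, depending on your Docker version, re-add the metadata with --change flags on docker import. The container name and entrypoint path below are placeholders, not taken from progrium/consul:

    # Start a container from the original big image (container name is a placeholder)
    docker run -d --name consul-big-1 consul-big

    # Flatten: export the container filesystem and re-import it as a new image
    docker export consul-big-1 | docker import - consul-small

    # Compare the two images
    docker images | grep consul

    # The flattened image carries no CMD/ENTRYPOINT, so pass a command
    # explicitly (the path here is a placeholder)
    docker run -d consul-small /path/to/entrypoint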
May your Docker images grow ever smaller!
¹ Your mileage may vary. The tip is provided without warranty of any kind. No images were harmed in the making of this blog post.
4 comments
3 pings
Kyle C. Quest (@kcqon)
February 2, 2016 at 5:12 pm (UTC -5)
You can get even smaller images with DockerSlim [1]. It’ll keep only what your application needs, so you can use regular distros and you won’t have to use any Dockerfile optimizations or layer tricks.
[1] http://dockersl.im
Matt Williams
February 5, 2016 at 3:00 pm (UTC -5)
I’m curious: how do you tell what the application needs? I see that your app prefers HTTP services; do you have to test every path in order to be sure that the code is recognized as being needed? I’m thinking of the case where code is dynamically loaded, but only if it is needed. How are you handling that?
Thanks!
Rich Moyse
March 27, 2016 at 11:23 pm (UTC -5)
In short, images are generally larger than necessary, as Dockerfiles include build tooling and an application’s runtime bits within the resultant image. Therefore, composing an image of only runtime artifacts dramatically reduces its size and malware attack surface.
The bash script dkrcp [1] enables composing an image by simply copying files from other images, containers, streams, or the host file system. This ability supports transferring only the needed runtime bits from build images/containers to a resultant image dedicated to running the application. So, if a developer can identify the runtime bits, dkrcp can copy them to construct a minimal image.
[1] http://tinyurl.com/dkrcp-github
Matt Williams
March 30, 2016 at 12:33 pm (UTC -5)
Agreed; assuming you can identify all of the pieces required.
TechNewsLetter Vol:9 | Devops Enthusiast
April 4, 2015 at 4:48 am (UTC -5)
[…] Shrinking Docker Images. […]
Running OBIEE 11g inside Docker | Sasikanth Kotti
April 13, 2015 at 3:35 am (UTC -5)
[…] of the image can be reduced by following the blog post Shrinking Docker Images by Matt Williams (http://matthewkwilliams.com/index.php/2015/03/23/shrinking-docker-images/) 15. The name of the final image is […]
Minimal docker : run your NodeJS app in <25mb of an image | Dinesh Ram Kali.
August 7, 2015 at 4:54 pm (UTC -5)
[…] the end of the day, one thing is clear: we’d like to shrink images as much as possible. Turns out, the easiest solution is, as often, the simplest one: start […]