Podman: How do I write a container that will do a build?
This article is part of a series.
In yesterday’s notes I used a locally compiled binary with a very slim Containerfile. This can work out fine, but what comes recommended by Vapor and Hummingbird? Can I understand those files yet?
What I’ve figured out by now is that those files have a build section that will spin up a container to do the compile, no cross-compiling SDK needed. Potentially that means the app service will do that compile for me on the hardware the binary will run on? So I don’t even have to know what the underlying architecture is? Interesting… And then how does that get worked into the fee?
NOTE: One can use the static SDK in a Containerfile compile; it’s just that one doesn’t have to.
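For the record, compiling with the static SDK inside the build container would look roughly like this. It’s only a sketch; the SDK bundle URL and its matching checksum (both published on swift.org) are left out here.
# run inside a swift:6.1 container (or anywhere Swift 6 is installed)
swift sdk install <static-linux-sdk-bundle-url>   # plus the matching --checksum
swift build -c release --swift-sdk x86_64-swift-linux-musl
# the result is fully static and could be dropped into a scratch image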
But first steps first. Currently the project layout looks like:
|- Container
| |- binary
| | |- SmallestServer (binary)
| |- Containerfile
|
|- Sources
| |- main.swift
|
|- buildLocalNRun.sh
|- Package.swift
This is going to be simplified to:
|- Sources
| |- main.swift
|
|- buildContainerNRun.sh
|- Containerfile
|- Package.swift
The Container folder can be deleted because there is no intermediate build product to store in the repo now.
Update shell script
Update the script to remove any reference to building or compiling the package locally. The only thing that gets built now is the container.
#!/bin/sh
## change in script if want to change.
DEFAULT_PORT="8080"
APP_NAME="SmallestServer"
REPOSITORY="smallserver"
## can change at runtime
## 'zsh ./buildContainerNRun.sh debug' will run a debug build instead.
## (CONFIGURATION isn't wired into the container build yet; the Containerfile hardcodes release.)
DEFAULT_CONFIG_VALUE="release"
CONFIGURATION="${1:-$DEFAULT_CONFIG_VALUE}"
## creates a tag based on the day and local time, e.g. mon104642
TAG=`date +"%a%H%M%S" | tr '[:upper:]' '[:lower:]'`
podman build -t $REPOSITORY:$TAG .
# no -d because want to see errors inline
# this one will keep podman a little tidier.
# podman run --rm --rmi -p $DEFAULT_PORT:8080 $REPOSITORY:$TAG
podman run -p $DEFAULT_PORT:8080 $REPOSITORY:$TAG
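Since every run of the script builds a fresh dated tag and the --rm --rmi line is commented out, images pile up. None of this is in the script, but a little podman housekeeping helps:
# see what the script has built so far
podman images smallserver
# remove one specific dated tag
podman rmi smallserver:mon104642
# or remove every image no container is using
podman image prune -a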
A different Containerfile
So the two example Dockerfiles do a lot more than the tiny one. They change up the memory allocator (though how long it will stay jemalloc is TBD), make sure backtracing works, and move in resource files.
What they are still doing is ending up with a mostly statically linked executable, so the final container does not have to have Swift on it. It appears to still need a C library? Something else the ubuntu base image has that alpine doesn’t?
Let’s go through the Vapor example section by section.
The Build Section
Difference #1, there is a named build section based on an image that has Swift installed.
# ================================
# Build image
### CHECK OUT THAT NAME after the AS!!
FROM swift:6.1-noble AS build
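That name after the AS matters because later stages refer to it, and it also means podman can build just this stage on its own, which is handy for poking around inside it (the tag below is made up):
# build only the named "build" stage
podman build --target build -t smallserver:buildstage .
# open a shell in it to inspect /build and /staging
podman run --rm -it smallserver:buildstage bash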
Then the base image gets updated, jemalloc gets installed, and the apt lists get cleaned out.
# Install OS updates
RUN export DEBIAN_FRONTEND=noninteractive DEBCONF_NONINTERACTIVE_SEEN=true \
&& apt-get -q update \
&& apt-get -q dist-upgrade -y \
&& apt-get install -y libjemalloc-dev \
&& rm -rf /var/lib/apt/lists/*
Create a build area and move the project files into it, copying the package manifests first for a good reason: the resolved dependencies get their own cached layer.
# Set up a build area
WORKDIR /build
# First just resolve dependencies.
# This creates a cached layer that can be reused
# as long as your Package.swift/Package.resolved
# files do not change.
COPY ./Package.* ./
RUN swift package resolve \
$([ -f ./Package.resolved ] && echo "--force-resolved-versions" || true)
# Copy entire repo into container
COPY . .
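The payoff shows up on rebuilds. Here’s a sketch of how the layer cache behaves (the tag is a placeholder); COPY layers are keyed on file contents, so only the files that actually change invalidate anything.
podman build -t smallserver:cache-demo .   # first build: every layer runs
# edit only Sources/main.swift and rebuild:
# the Package.* copy and the resolve layer are reused from cache
podman build -t smallserver:cache-demo .
# edit Package.swift (say, add a dependency) and rebuild:
# the resolve layer is invalidated and dependencies are fetched again
podman build -t smallserver:cache-demo .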
Then the build gets run and the products of that get put in a new directory called “staging”. Any extra resources get moved there, too. The Vapor and Hummingbird Dockerfiles run almost exactly the same commands, but the Hummingbird template does each command as a separate RUN. Each RUN gets cached separately, so I can see the pros and cons to both.
- The --mount=type=cache,target=/build/.build line allows for incremental builds; it’s a Docker directive.
- It’s a release build.
- The {{name}} is a product of the GitHub template… FIND THEM ALL!
- The --static-swift-stdlib flag statically links the Swift runtime libraries for this platform. It still requires the same platform dependencies. For fully static there used to be a --static-executable, but that’s gone now?
- Tell the linker to use a different malloc (see links above about memory issues).
- Move the binary.
- Move the resources Swift Package Manager knows about.
## Skipped in hummingbird version because moves come after
## WORKDIR /staging
RUN mkdir /staging
# Build the application, with optimizations, with static linking, and using jemalloc
# N.B.: The static version of jemalloc is incompatible with the static Swift runtime.
RUN --mount=type=cache,target=/build/.build \
swift build -c release \
--product {{name}} \
--static-swift-stdlib \
-Xlinker -ljemalloc && \
# Copy main executable to staging area
cp "$(swift build -c release --show-bin-path)/{{name}}" /staging && \
# Copy resources bundled by SPM to staging area
find -L "$(swift build -c release --show-bin-path)" -regex '.*\.resources$' -exec cp -Ra {} /staging \;
Change the pwd to staging and copy over the swift-backtrace-static binary, to enable the backtracing library to work. That gets enabled in the second part. This would not be necessary if the base image for the deployment had Swift installed, but then it would be HUGE.
# Switch to the staging area
WORKDIR /staging
# Copy static swift backtracer binary to staging area
RUN cp "/usr/libexec/swift/linux/swift-backtrace-static" ./
Move files from the default Vapor folders. In Hummingbird, for example, the “Public” folder is “public”.
# Copy any resources from the public directory and views directory if the directories exist
# Ensure that by default, neither the directory nor any of its contents are writable.
RUN [ -d /build/Public ] && { mv /build/Public ./Public && chmod -R a-w ./Public; } || true
RUN [ -d /build/Resources ] && { mv /build/Resources ./Resources && chmod -R a-w ./Resources; } || true
The Deploy Section
About halfway down the file is another FROM call, so a second image is being made. It can refer to the first with --from= labels.
# ================================
# Run image
# ================================
FROM ubuntu:noble
This is a common base; “Noble Numbat” is the most recent Canonical LTS release. It’s much bigger than scratch or alpine.
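The size difference is easy to check locally; nothing here comes from the example files, it’s just plain podman:
podman pull ubuntu:noble
podman pull alpine:latest
podman images --format "{{.Repository}}:{{.Tag}}  {{.Size}}"
# scratch can't be pulled at all; it's just the empty starting point for a build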
Then again run updates and clear the cache
# Make sure all system packages are up to date, and install only essential packages.
RUN export DEBIAN_FRONTEND=noninteractive DEBCONF_NONINTERACTIVE_SEEN=true \
&& apt-get -q update \
&& apt-get -q dist-upgrade -y \
&& apt-get -q install -y \
libjemalloc2 \
ca-certificates \
tzdata \
# If your app or its dependencies import FoundationNetworking, also install `libcurl4`.
# libcurl4 \
# If your app or its dependencies import FoundationXML, also install `libxml2`.
# libxml2 \
&& rm -r /var/lib/apt/lists/*
Add a user and move everything from staging into their home directory
# Create a vapor user and group with /app as its home directory
RUN useradd --user-group --create-home --system --skel /dev/null --home-dir /app vapor
# Switch to the new home directory
WORKDIR /app
# Copy built executable and any staged resources from builder
COPY --from=build --chown=vapor:vapor /staging /app
Enable backtracing and put the tool’s location in the environment
# Provide configuration needed by the built-in crash reporter and some sensible default behaviors.
ENV SWIFT_BACKTRACE=enable=yes,sanitize=yes,threads=all,images=all,interactive=no,swift-backtrace=./swift-backtrace-static
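Because it’s just an ENV default, the backtracer can be dialed down at run time without rebuilding the image (the tag here is a placeholder):
podman run --rm -e SWIFT_BACKTRACE=enable=no -p 8080:8080 smallserver:test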
Switch to the user and run the program
# Ensure all further commands run as the vapor user
USER vapor:vapor
# Let Docker bind to port 8080
EXPOSE 8080
# Start the Vapor service when the image is run, default to listening on 8080 in production environment
ENTRYPOINT ["./{{name}}"]
CMD ["serve", "--env", "production", "--hostname", "0.0.0.0", "--port", "8080"]
The Redux
SOOOOOO…. How much of this can be stripped out?
What I left out
I wouldn’t leave these out in production or for a template, but for this specific example it seemed fine:
- The app is tiny, so the improved Linux allocator seemed worth trying to leave out.
- I also didn’t update… because isn’t that what using a :latest is for?
- Copying resource files, because this example doesn’t have any.
Leaving out the allocator and the updating was partially because I was going to try to use scratch or alpine for the deployment container. Scratch doesn’t have a package manager and alpine uses apk, so I would have had to translate the commands. Turns out I had to use FROM ubuntu:noble anyway to prevent a dynamic library loading error? (See below)
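One way to find out exactly what the binary still wants dynamically is to ask the build stage, since it has all the tooling. A sketch, with a made-up tag:
podman build --target build -t smallserver:buildstage .
podman run --rm smallserver:buildstage ldd /staging/SmallestServer
# whatever ldd lists (typically glibc and friends) has to exist in the run image,
# which is why scratch and musl-based alpine complained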
What’s new
What’s new from the previous section’s Containerfile:
- The whole build section, obviously!
- FROM ubuntu:noble. Originally I thought I’d be able to use alpine or scratch instead, but the deployment container complained about a missing dynamic library. Switching to ubuntu:noble made it go away… TODO… WHAT dynamic library was missing?
- Because there’s an OS in the container, adding a user came back.
- Backtrace support:
  - copying the backtrace tool
  - setting the env
I had to run it and re-run it a few times to shake out all the errors, so I got a good look at all the caching features, which help A LOT.
A Containerfile that Builds!
# ================================
# Build image
# ================================
FROM swift:6.1-noble AS build
# Install OS updates
RUN export DEBIAN_FRONTEND=noninteractive DEBCONF_NONINTERACTIVE_SEEN=true \
&& apt-get -q update \
&& apt-get -q dist-upgrade -y
# Set up a build area
WORKDIR /build
# First just resolve dependencies.
# This creates a cached layer that can be reused
# as long as your Package.swift/Package.resolved
# files do not change.
COPY ./Package.* ./
RUN swift package resolve \
$([ -f ./Package.resolved ] && echo "--force-resolved-versions" || true)
# Copy entire repo into container
COPY . .
RUN mkdir /staging
# Build the application, with optimizations, with static linking, and using jemalloc
# N.B.: The static version of jemalloc is incompatible with the static Swift runtime.
RUN --mount=type=cache,target=/build/.build \
swift build -c release \
--product SmallestServer \
--static-swift-stdlib && \
cp "$(swift build -c release --show-bin-path)/SmallestServer" /staging
# Switch to the staging area
WORKDIR /staging
# Copy static swift backtracer binary to staging area
RUN cp "/usr/libexec/swift/linux/swift-backtrace-static" ./
# ================================
# Run image
# ================================
# Most swift examples use Ubuntu.
FROM ubuntu:noble
RUN useradd \
--user-group \
--create-home \
--system \
--skel /dev/null \
--home-dir /app \
hummingbird
# Switch to the new home directory
WORKDIR /app
# give the binary to the hummingbird user
# COPY --from=build --chown=hbUser:hbGroup /staging /app/
COPY --from=build --chown=hummingbird:hummingbird /staging /app
# Provide configuration needed by the built-in crash reporter and some sensible default behaviors.
ENV SWIFT_BACKTRACE=enable=yes,sanitize=yes,threads=all,images=all,interactive=no,swift-backtrace=./swift-backtrace-static
# Ensure all further commands run as the hummingbird user
# USER hbUser:hbGroup
USER hummingbird:hummingbird
# Let Docker bind to port 8080
EXPOSE 8080
# Sanity check: list what actually got copied into /app
RUN ls
# Start the service when the image is run
ENTRYPOINT ["./SmallestServer"]
CMD [""]
Summary
There’s still a lot to learn about working with containers. I’ll be switching to Embedded Swift for a bit, and it’s possible I’ll use them for that. But I’ll keep linking to the series when something new comes up!