Podman: How do I deploy a Hummingbird App Server?
This article is part of a series.
What is Hummingbird?
Hummingbird is a very modular, Swift-based server framework built on top of SwiftNIO. It’s easy to use and it has a great community around it. (Why Hummingbird)
Vapor would also have been a great choice, but it comes fully bundled with more than I needed.
What’s it take to build a starter server for a container?
Not much! This is what it took for me to make the 00_smallest_server example.
Create A Start
mkdir -p 00_smallest_server
cd 00_smallest_server
swift package init --type executable --name SmallestServer
swift package add-dependency https://github.com/hummingbird-project/hummingbird --from 2.16.0
Update main.swift
In main.swift, add:
import Hummingbird

let hostname: String = "0.0.0.0"
let port: Int = 8080

let configuration: ApplicationConfiguration = .init(
    address: .hostname(hostname, port: port),
    serverName: "SmallestServer"
)

let router = Router()
    .get("hello") { request, _ -> String in
        return "Hello"
    }

// create application using router
let app = Application(router: router, configuration: configuration)

// run hummingbird application
try await app.runService()
The code is slightly modified from the example on the docs page to make it work in a container. When not running in a container, leaving the default address in place (127.0.0.1, the local loopback) is the better choice:
let app = Application(router: router)
Old internet heads will balk at passing 0.0.0.0 because it means “listen on everything!”, which could be a security risk. In a container that is apparently considered okay because the container is isolated and, these days, unlikely to have been given root access. The container will be given an address once running, which can be discovered with inspect:
podman inspect $CONTAINER_ID_OR_NAME | grep IPAddress
On Linux one can instead bind to the specific IP a container is given by listening on host.docker.internal (docs) or the more generic podman equivalent:
let hostname: String = "host.docker.internal"
let hostname: String = "host.containers.internal"
I have not seen this working myself. Something about containers, the fact that Macs run them in VMs, and that Linux has cgroups? On Linux there may be other options, like adding --network=host to the run command. (Interesting convo on the Hummingbird Discord! Link at bottom of page.) (Built fine, but core-dumped when running: ‘Address not available (errno: 99)’)
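If you want a single binary that behaves well in both situations, one pattern is reading the bind address from environment variables. A minimal sketch (SERVER_HOSTNAME and SERVER_PORT are names made up for this example, not a Hummingbird convention):

```swift
import Foundation
import Hummingbird

// Sketch: fall back to the loopback address for local runs; a container can
// override it with e.g. `podman run -e SERVER_HOSTNAME=0.0.0.0 ...`
let env = ProcessInfo.processInfo.environment
let hostname: String = env["SERVER_HOSTNAME"] ?? "127.0.0.1"
let port: Int = env["SERVER_PORT"].flatMap(Int.init) ?? 8080

let configuration: ApplicationConfiguration = .init(
    address: .hostname(hostname, port: port),
    serverName: "SmallestServer"
)
```

Hummingbird also has its own Environment type for this kind of lookup, if you’d rather not reach for Foundation directly.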
Update Package.swift
Update Package.swift to add the dependency to the executable target and require at least macOS 14.0:
import PackageDescription

let package = Package(
    name: "SmallestServer",
    platforms: [.macOS(.v14)],
    dependencies: [
        .package(url: "https://github.com/hummingbird-project/hummingbird", from: "2.16.0"),
    ],
    targets: [
        // Targets are the basic building blocks of a package, defining a module or a test suite.
        // Targets can depend on other targets in this package and products from dependencies.
        .executableTarget(
            name: "SmallestServer",
            dependencies: [.product(name: "Hummingbird", package: "hummingbird")]
        ),
    ]
)
Build and run in the background
A slightly fancy version of building the package, running the server, and opening the route in the browser:
swift build
./.build/arm64-apple-macosx/debug/SmallestServer & disown
open -a Safari http://127.0.0.1:8080/hello
## also
curl "http://127.0.0.1:8080/hello"
Usually one chooses to build and run a Swift package with swift run, and that works with Hummingbird apps too! However, I wanted to show launching the binary in the background and disowning it to highlight a problem with hand-deploying apps on a server. They need to be run in the background (the &) and not quit when the terminal gets shut (the disown). One might also see solutions with nohup, tmux, screen…
- https://www.baeldung.com/linux/detach-process-from-terminal
- https://superuser.com/questions/178587/how-do-i-detach-a-process-from-terminal-entirely
- https://www.networkworld.com/article/969269/how-to-keep-processes-running-after-logging-off-in-linux.html
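The shell also hands you the PID of the most recent background job in $!, which can be saved at launch instead of hunted for later. A small sketch, with sleep standing in for the server binary:

```shell
# launch in the background (sleep stands in for ./SmallestServer here)
sleep 100 &

# $! holds the PID of the most recent background job; stash it for later
echo $! > server.pid

# disown is a bash/zsh builtin; guard it so plain sh doesn't choke
disown 2>/dev/null || true

# later, from any terminal:
kill "$(cat server.pid)"
rm server.pid
```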
If one runs ps, though, one will see that the process isn’t listed there anymore. To get the PID to kill it, one will need to:
ps aux | grep SmallestServer
kill $PID # returned by ps aux
## or
pkill SmallestServer
Derived from a good Stack Exchange answer (but also available from reading man ps), the aux after the ps corresponds to:
a = show processes for all users
u = display the process's user/owner with a name not a #
x = also show processes not attached to a terminal
So we’re looking through all the processes being run by anyone and killing the one that has SmallestServer in the name.
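pgrep and pkill do that matching themselves, and if you do stick with grep, a bracket trick keeps grep from finding its own command line. A sketch, again with sleep standing in for SmallestServer:

```shell
# start a throwaway background process to search for
sleep 100 &

# pgrep prints the PIDs of processes whose names match
pgrep sleep

# plain grep would match itself (its command line contains "sleep");
# the [s] pattern breaks that self-match while still matching "sleep"
ps aux | grep '[s]leep'

# pkill sends SIGTERM to matches; -n limits it to the newest one
pkill -n sleep
```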
The thing is, a good sys admin will have systemd kill user processes on logout, even if the process had been disown’d. And even if they agree not to kill it, this sys admin will have to agree to configure the server to expose your experimental process to the internet. I mean, if it’s my server, I’m saying no.
And if it’s just your server, and you just have this one little app… who wants to knock about in a big echo-y machine all alone like that?
Get the Server into a Container
An easier way to get a simple app up on the internet is through a provider with some kind of Platform as a Service offering, which is a step up from the Infrastructure as a Service model that a VPS might be a component of. This business model probably only became viable because Containers exist!
There are several ways to get the app into a container that can be hosted on an app service, but this article will focus on what we’ve done before, cross compiling a statically linked binary that will be as small as possible.
Cross Compile Locally
For this, first cross compile just a hello world.
This section is essentially an evolution of the more general Wrapping a Binary section from a couple of days ago, but the Containerfile will be a little bit different.
Shell Script
So let’s take the steps from that last post and make them into a buildLocalNRun.sh script. The script below generates a tag based on the day and time and does a release build by default.
#!/bin/sh
## change in the script if you want a different default port.
DEFAULT_PORT="8080"
## can change at runtime
## 'zsh ./buildLocalNRun.sh debug' will run debug build instead.
DEFAULT_CONFIG_VALUE="release"
CONFIGURATION="${1:-$DEFAULT_CONFIG_VALUE}"
## make it if it doesn't exist. The enclosing folder will exist because
## The Containerfile is already there.
mkdir -p Container/binary/
## Do the statically linked build
swift build -c $CONFIGURATION --swift-sdk x86_64-swift-linux-musl
## Move it to the binary folder where the Containerfile thinks it is
cp .build/x86_64-swift-linux-musl/$CONFIGURATION/SmallestServer Container/binary/SmallestServer
## creates a tag based on the day and local time, e.g. mon104642
TAG=`date +"%a%H%M%S" | tr '[:upper:]' '[:lower:]'`
podman build -f Container/Containerfile -t smallserver:$TAG Container/
# no -d because want to see errors inline
# this one will keep podman a little tidier.
# podman run --rm --rmi -p $DEFAULT_PORT:8080 smallserver
podman run -p $DEFAULT_PORT:8080 smallserver:$TAG
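The backtick substitution on the TAG line works, but the nestable $( ) form is more common these days; either way the result is a lowercase day-plus-time stamp:

```shell
# produces something like mon104642: abbreviated day name, then HHMMSS
TAG=$(date +"%a%H%M%S" | tr '[:upper:]' '[:lower:]')
echo "$TAG"
```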
Containerfile
And for this example an even more stripped down Containerfile:
FROM scratch
# Make and change into app directory
WORKDIR /app
# The binary directory contains the compiled app
COPY ./binary/ /app/
# Document that the app listens on port 8080 (EXPOSE is informational only)
EXPOSE 8080
# Start the Hummingbird service when the image is run, default to listening on 8080 in production environment
ENTRYPOINT ["./SmallestServer"]
Notice the different base image. This time it’s scratch instead of alpine. The scratch container provides NOTHING to the app’s context, while the alpine container still provides some basic operating system tools. Our project needs nothing from an OS, so it’s safer not to give it anything that can be exploited. That said, scratch containers can’t be ssh’d into to troubleshoot, and other environment problems can crop up, so it isn’t the easiest base image to work with if size isn’t an issue. It’s not that I didn’t make a new user in the Containerfile because I was lazy. I didn’t make a new user in the Containerfile because I couldn’t.
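For contrast, here is a sketch of what the same Containerfile might look like on alpine, where a non-root user can actually be created (the alpine tag and the user/group names are illustrative, not from the original project):

```Dockerfile
FROM alpine:3.20

# alpine's busybox tools make an unprivileged user possible;
# scratch has no adduser (or anything else) to run
RUN addgroup -S app && adduser -S app -G app

WORKDIR /app
COPY ./binary/ /app/
USER app

EXPOSE 8080
ENTRYPOINT ["./SmallestServer"]
```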
Linker directive in Package.swift (Optional)
Like in the static sdk example, to make the resulting release build image even smaller, optionally add a linker directive to Package.swift that strips symbols from the object file.
targets: [
    .executableTarget(
        name: "SmallestServer",
        dependencies: [
            .product(name: "Hummingbird", package: "hummingbird"),
        ],
        linkerSettings: [
            // STRIP_STYLE = all
            .unsafeFlags(["-Xlinker", "-s"], .when(configuration: .release)),
        ]
    ),
]
Push to Registry
The current build script launches the Hummingbird app on the local computer. To hand the app off to an App Server, the image goes to a registry first.
Previously I pushed an image to Digital Ocean and Docker Hub.
Let’s add the push to Docker Hub to the bottom of the script:
## COMMENT OUT THE RUN!!
# podman run -p $DEFAULT_PORT:8080 smallserver:$TAG
## DON'T FORGET TO LOGIN to docker either using
## `docker login` or `podman login`
DOCKER_USER="someusername"
REMOTE_REPOSITORY="smalltest"
REMOTE_TAG="v1"
podman tag localhost/smallserver:$TAG docker.io/$DOCKER_USER/$REMOTE_REPOSITORY:$REMOTE_TAG
podman push docker.io/$DOCKER_USER/$REMOTE_REPOSITORY:$REMOTE_TAG
See the result here: https://hub.docker.com/repository/docker/carlynorama/smalltest/general
Make it ALIVE!
The next step is to sign up for a Digital Ocean App Server and link it to a repository in a registry.
Honestly, this was the least time consuming part of this whole process.
I changed nothing about the defaults except to move my service tier down from the $12/mo to the $5/mo option. And it worked! (Although I have a different Hummingbird server running there at the moment…)
Next Up…
Next is switching from the most minimal set up to one based on the Vapor and Hummingbird templates, which are more robust.