July 4, 2017


In this tutorial we will deploy a Scala sbt application to a Kubernetes cluster. Once you know how to deploy a Scala sbt application to Kubernetes, you should be able to apply the same basic steps to deploy Scala and Akka microservices to Kubernetes. This tutorial is essentially a "Hello World" example for Scala, sbt, and Kubernetes.


This tutorial assumes that you have a Kubernetes cluster running. I recommend minikube, which runs a single-node Kubernetes cluster inside a virtual machine on your workstation; the rest of this tutorial assumes you are using it. You will need kubectl installed on your workstation; kubectl is the command-line interface for interacting with Kubernetes clusters. You'll need git installed to clone the example application we'll be deploying. A Docker client is required to publish built Docker images to the Docker daemon running inside of minikube; see installing Docker for instructions for your platform. You'll need Scala and sbt installed as well; the example application uses Scala version 2.12.1. Finally, you'll need a text editor and a terminal emulator.

Setup the Docker deployment

I've prepared a basic web server written in Scala that uses akka-http for this tutorial. This web server responds to all requests with a configurable message. The first thing that we need to do is clone the example web server repository to a local working directory.

git clone https://github.com/sjking/hello-akka-http.git
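For context, the core of such a server looks roughly like the following sketch using the akka-http routing DSL (the object name and exact wiring here are illustrative, not necessarily identical to the code in the repository):

```scala
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._
import akka.stream.ActorMaterializer
import com.typesafe.config.ConfigFactory

object HelloServer extends App {
  implicit val system = ActorSystem("hello-akka-http")
  implicit val materializer = ActorMaterializer()

  // The port and message come from configuration, which is what
  // lets us change the response later without rebuilding the image.
  val config  = ConfigFactory.load()
  val port    = config.getInt("http.port")
  val message = config.getString("http.message")

  // Respond to every request with the configured message.
  val route = complete(message)

  Http().bindAndHandle(route, "0.0.0.0", port)
}
```

Because the message is read from configuration rather than hard-coded, we will be able to override it later from a Kubernetes ConfigMap.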

We will be using the sbt native packager plugin to integrate Docker image builds with our sbt build tasks. If you want to skip the walkthrough of setting up the Scala sbt application for Docker builds, you can go ahead and check out the docker branch: git checkout docker. Otherwise, start by creating the file project/plugins.sbt with the following line:

addSbtPlugin("com.typesafe.sbt" % "sbt-native-packager" % "1.0.4")

Next, open up build.sbt in your text editor and add the following lines to the top of the file:

import NativePackagerHelper._

enablePlugins(JavaAppPackaging, DockerPlugin)

The NativePackagerHelper package contains sbt tasks and settings that we will make use of to bundle our Docker image. We also enable two plugins: JavaAppPackaging packages our application and its library dependencies as jars and generates an executable script to run the application inside the Docker image, while DockerPlugin enables the sbt native packager's Docker support.

Next, add the following lines to the bottom of build.sbt:

javaOptions in Universal ++= Seq(
  "-Dconfig.file=/usr/local/etc/container.conf",
  "-Dlog4j.configuration=file:/usr/local/etc/log4j.properties"
)

packageName in Docker := packageName.value

version in Docker := version.value

dockerExposedPorts := List(8001)

dockerLabels := Map("maintainer" -> "NoReply@steveking.site")

dockerBaseImage := "openjdk"

dockerRepository := Some("sjking")

defaultLinuxInstallLocation in Docker := "/usr/local"

daemonUser in Docker := "daemon"

mappings in Universal ++= directory( baseDirectory.value / "src" / "main" / "resources" )

The javaOptions setting above adds Java runtime options. In our case, we add the location of a Kubernetes specific configuration file, and a log4j properties file. We will come back to this later when we deploy our application to the cluster. The remaining settings:

  • packageName uses the value of the name setting key in our build.sbt file to name the Docker image.
  • version is set to the value of the version setting key, which is used to tag our Docker image.
  • dockerExposedPorts exposes port 8001 in the Docker container.
  • dockerLabels adds a "maintainer" label to the Dockerfile.
  • dockerBaseImage defines the base Docker image that our Docker image is built on top of. More will be said about how Docker images are built in layers when we describe the Dockerfile generated by sbt native packager a little bit later.
  • dockerRepository sets a public Docker repository, assumed to be hosted at Docker Hub.
  • defaultLinuxInstallLocation sets the installation directory on our Docker container.
  • daemonUser sets the Linux user that will run our application as a daemon; we use the non-privileged daemon user.
  • mappings copies the contents of the src/main/resources directory, which contains application and logging configuration defaults, into our Docker image.

Stage the Docker deployment

The sbt native packager Docker plugin comes with a docker:stage task that creates a local directory under target/docker where all the deployment files will be written. These files include a Dockerfile, dependency jars, and an executable script used to launch the application. Go ahead and run sbt docker:stage. If you have tree installed, you can explore the directory structure by running tree target/docker/stage, which should output something like this:

├── Dockerfile
└── usr
    └── local
        ├── bin
        │   ├── hello-akka-http
        │   └── hello-akka-http.bat
        ├── conf
        │   └── application.ini
        ├── lib
        │   ├── com.typesafe.akka.akka-actor_2.12-2.5.3.jar
        │   ├── com.typesafe.akka.akka-http-core_2.12-10.0.9.jar
        │   ├── com.typesafe.akka.akka-http_2.12-10.0.9.jar
        │   ├── com.typesafe.akka.akka-parsing_2.12-10.0.9.jar
        │   ├── com.typesafe.akka.akka-slf4j_2.12-2.5.3.jar
        │   ├── com.typesafe.akka.akka-stream_2.12-2.5.3.jar
        │   ├── com.typesafe.config-1.3.1.jar
        │   ├── com.typesafe.ssl-config-core_2.12-0.2.1.jar
        │   ├── log4j.log4j-1.2.17.jar
        │   ├── org.reactivestreams.reactive-streams-1.0.0.jar
        │   ├── org.scala-lang.modules.scala-java8-compat_2.12-0.8.0.jar
        │   ├── org.scala-lang.modules.scala-parser-combinators_2.12-1.0.4.jar
        │   ├── org.scala-lang.scala-library-2.12.1.jar
        │   ├── org.slf4j.slf4j-api-1.7.25.jar
        │   ├── org.slf4j.slf4j-log4j12-1.7.25.jar
        │   └── site.steveking.hello-akka-http-1.0.jar
        └── resources
            ├── application.conf
            └── log4j.properties

6 directories, 22 files

Directory structure

Below is a description of the directory structure of the staged build:

  • /usr/local/lib: The location of all dependency jars.
  • /usr/local/bin: The location of the executable script used to launch the application.
  • /usr/local/conf/application.ini: Contains Java options used when launching the program. These are specified in the javaOptions setting in build.sbt. See sbt tasks basic definitions for more information.
  • /usr/local/resources: Application configuration files that are "baked-in" to the Docker image.
  • Dockerfile: The Dockerfile is used for building the docker image.


According to the Dockerfile reference, "[a] Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image." A Dockerfile is used to generate a Docker image, and a Docker image consists of layers: each line in the Dockerfile creates a new layer. If you change a line in the Dockerfile and rebuild the image, only the layer for the changed line and the layers after it are rebuilt. This is handy since some layers can take a while to build. Let's take a look at the Dockerfile that was generated when we ran docker:stage.

FROM openjdk
WORKDIR /usr/local
ADD usr /usr
RUN ["chown", "-R", "daemon:daemon", "."]
USER daemon
ENTRYPOINT ["bin/hello-akka-http"]
CMD []

The first line of the Dockerfile, FROM openjdk, initializes a new build stage and sets the base image for the Docker image. The openjdk base image provides OpenJDK, an open-source implementation of the Java Platform, Standard Edition. The second line sets the working directory of the Docker image being built. The third line, containing the ADD command, copies the staged usr directory into the image's filesystem. The fourth line recursively changes the ownership of the working directory, /usr/local, on the Docker image to the daemon user. The fifth line sets the user name to use when running the Docker image. The sixth line uses the ENTRYPOINT command to configure the executable that starts the application in the running Docker container. The final line, CMD [], passes no default arguments to the entrypoint.

Build the Docker image

The next step is to build the actual Docker image. First, make sure that you have a Docker client installed on your machine and that it is running. From the same terminal session that you're running sbt from, execute the following command:

eval $(minikube docker-env)

This command configures the Docker client in your current shell to talk to the Docker daemon running inside minikube, so images we build land there directly. This is great for local development because we don't need to push our Docker images to a remote Docker registry before deploying them to the minikube Kubernetes cluster.
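Under the hood, minikube docker-env simply prints a handful of environment variable exports for your shell to evaluate. The output looks something like the sketch below; the IP address, certificate path, and API version are placeholders for whatever your minikube VM actually reports:

```shell
# Placeholder values: run `minikube docker-env` to see your actual ones.
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="$HOME/.minikube/certs"
export DOCKER_API_VERSION="1.23"
```

With these variables set, any docker command in that shell talks to the daemon inside the minikube VM instead of the one on your host.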

Now, in the same terminal session, run the following command:

sbt docker:publishLocal

The above command will build the Docker image and publish it to the Docker daemon running inside of minikube. To view the list of built images, run docker images; you should see your newly minted Docker image. It should look something like this:

REPOSITORY               TAG                 IMAGE ID            CREATED             SIZE
sjking/hello-akka-http   1.0                 540406cf1ca6        2 days ago          650MB
openjdk                  latest              1fc03e5bdd37        12 days ago         609MB

Test the Docker image in minikube

The final step is to deploy our Docker image to minikube. Under the hello-akka-http repository, take a look at the kubernetes/hello-pod.yaml file.

apiVersion: v1
kind: Pod
metadata:
  name: hello
spec:
  containers:
  - image: sjking/hello-akka-http:1.0
    name: hello
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: hello-conf-volume
      mountPath: /usr/local/etc
    ports:
    - containerPort: 8001
      name: http
  volumes:
  - name: hello-conf-volume
    configMap:
      name: hello-conf

The hello-pod.yaml file describes a single Kubernetes Pod resource. As you can see, the spec.containers.image property is our built Docker image name with its version tag. Also of note is the spec.containers.imagePullPolicy property, which is set to IfNotPresent: Kubernetes will only pull the Docker image specified in spec.containers.image from a remote registry if that image is not already present in the Docker daemon running in minikube. You might be wondering why we're using port 8001 for spec.containers.ports.containerPort. Since we're running our application as a non-privileged user, we cannot bind to privileged ports below 1024, such as port 80. That is why we chose port 8001.

The kubernetes/configmap directory in the hello-akka-http repository contains two files: container.conf and log4j.properties. These files are equivalent to the ones located under the resources directory in our Scala application, but we will mount them into our Pod from a ConfigMap. ConfigMaps are Kubernetes resources used to supply runtime configuration files to a Pod. We will be making a minor change to the default configuration in application.conf. Let's take a look at the container.conf file:

include "application.conf"

http {
  port = 8001
  message = "Hello Scala!"
}

In container.conf we include application.conf at the top of the file (remember that we copied the contents of the resources folder into our Docker image). This allows us to override the settings in application.conf, while also retaining any of the settings in application.conf that are not overridden as defaults.
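The override behavior can be seen in isolation with Typesafe Config's withFallback, which is essentially what include gives us here. The snippet below is a standalone illustration, not code from the repository, and the http.host key is hypothetical, added just to show a default being retained:

```scala
import com.typesafe.config.{Config, ConfigFactory}

object ConfigOverrideDemo {
  // Stands in for the application.conf baked into the image.
  val defaults: Config = ConfigFactory.parseString(
    """http { host = "0.0.0.0", port = 8000, message = "Hello World!" }""")

  // Stands in for container.conf from the ConfigMap; it only
  // names the keys it wants to change.
  val overrides: Config = ConfigFactory.parseString(
    """http { port = 8001, message = "Hello Scala!" }""")

  // Overridden keys win; everything else falls through to the defaults,
  // which mirrors what include "application.conf" does in container.conf.
  val merged: Config = overrides.withFallback(defaults)

  def main(args: Array[String]): Unit = {
    println(merged.getString("http.message")) // Hello Scala!
    println(merged.getInt("http.port"))       // 8001
    println(merged.getString("http.host"))    // 0.0.0.0 (retained default)
  }
}
```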

We will now create the Kubernetes ConfigMap resource. Run the following command from the root of hello-akka-http:

kubectl create configmap hello-conf --from-file=kubernetes/configmap/
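You can verify the result with kubectl get configmap hello-conf -o yaml. Each file becomes a key in the ConfigMap's data section, so the output will look roughly like this (metadata trimmed and the log4j.properties contents abbreviated):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: hello-conf
data:
  container.conf: |
    include "application.conf"

    http {
      port = 8001
      message = "Hello Scala!"
    }
  log4j.properties: |
    ...
```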

After the ConfigMap is created, we can now create our Pod resource. Run the following command to do that:

kubectl create -f kubernetes/hello-pod.yaml

To test that our web server is now running, we can port-forward to the pod from our local machine, and make a request using curl. Run the following command to port-forward port 8001 from your machine to the same port on the pod:

kubectl port-forward hello 8001

Then, make a request using curl:

curl http://localhost:8001/
Hello Scala!


In this tutorial we deployed a simple Scala application to a Kubernetes cluster. The sbt native packager Docker plugin has more features than are shown here; for example, it is possible to fully customize the generated Dockerfile by removing commands and adding your own. I wrote this tutorial because I couldn't find a step-by-step guide on deploying Scala sbt applications to Kubernetes. Please let me know if you have any questions or feedback.