Introduction
Suppose you have a Ktor application named app and you want to deploy it to a remote server.
Fat JAR
Deploying a Ktor application involves packaging your code into a runnable format (usually a “Fat JAR”) and configuring a remote server to run it continuously while handling incoming web traffic.
Executable JAR file
To run the application on a server, you need to bundle your code and all its dependencies into a single, executable JAR file.
Open your terminal in the root of your project and run the build command:
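With the Ktor Gradle plugin (the same `buildFatJar` task the Dockerfile later in this guide uses), the build command is:

```shell
# Build the Fat JAR using the Ktor Gradle plugin
./gradlew buildFatJar
```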
Once the build succeeds, find the packaged application in build/libs/. With the Ktor Gradle plugin it typically carries the -all suffix, for example app-all.jar.
Server
Connect to your server via SSH and install the Java Runtime Environment (JRE) so it can execute your JAR file.
Update your package manager and install Java (replace 17 with 21 if your project uses Java 21):
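On Ubuntu, a typical sequence looks like this (the package name shown is for OpenJDK 17):

```shell
# Refresh the package index
sudo apt update
# Install the Java 17 runtime (swap 17 for 21 if your project targets Java 21)
sudo apt install -y openjdk-17-jre-headless
```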
Create a directory to hold your application:
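Matching the path used by the systemd unit later in this guide:

```shell
# /opt/app is the directory the service file below expects
sudo mkdir -p /opt/app
```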
Transfer the Application to the Server
You need to copy the Fat JAR from your local machine to the server.
Open a new terminal window on your local machine (do not close the SSH session) and use scp (Secure Copy Protocol):
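A sketch, assuming your fat JAR is named app-all.jar and using placeholders for your server's user and address:

```shell
# Run from your project root on your LOCAL machine;
# replace user@your_server_ip with your own credentials
scp build/libs/app-all.jar user@your_server_ip:/opt/app/app.jar
```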
Service
If you just run the JAR file in the terminal, it will stop the moment you close your SSH connection.
To keep it running in the background and ensure it automatically restarts if the server reboots, we will create a systemd service.
Back in your server’s SSH session, create a new service file:
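The unit file name app.service is our choice here; the editor is nano (the Ctrl+O/Ctrl+X shortcuts mentioned below assume it):

```shell
sudo nano /etc/systemd/system/app.service
```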
Paste the following configuration (adjust the User if you want to run it under a specific service account instead of root/default):
[Unit]
Description=App Application
After=network.target
[Service]
User=root
# The path to your application directory
WorkingDirectory=/opt/app
# The command to start the app
ExecStart=/usr/bin/java -jar /opt/app/app.jar
SuccessExitStatus=143
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
Save and exit (Ctrl+O, Enter, Ctrl+X).
Reload systemd to recognize the new service, then start and enable it:
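Assuming the unit file was saved as app.service:

```shell
# Make systemd aware of the new unit
sudo systemctl daemon-reload
# Start the app now
sudo systemctl start app
# Start it automatically on every boot
sudo systemctl enable app
```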
Check the status to ensure it’s running cleanly:
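```shell
sudo systemctl status app
```

Look for "active (running)" in the output; recent log lines from your app appear underneath.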
Reverse Proxy
By default, Ktor usually runs on port 8080. It is best practice not to expose this port directly to the web, but instead use a robust web server like Nginx to intercept standard HTTP/HTTPS traffic (ports 80 and 443) and forward it to your Ktor app.
Install Nginx on your server:
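```shell
sudo apt update
sudo apt install -y nginx
```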
Create a new Nginx configuration file for your app:
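The file name app under sites-available is our choice; any name works as long as you use it consistently in the symlink step below:

```shell
sudo nano /etc/nginx/sites-available/app
```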
Paste the following block (replace your_domain.com with your domain name or server IP):
server {
listen 80;
server_name your_domain.com;
location / {
proxy_pass http://localhost:8080; # Points to your Ktor port
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
Enable the site by creating a symlink:
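Assuming the configuration file was named app:

```shell
sudo ln -s /etc/nginx/sites-available/app /etc/nginx/sites-enabled/
```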
Test the Nginx configuration and restart the service:
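```shell
# Validate the configuration syntax, then restart Nginx
sudo nginx -t
sudo systemctl restart nginx
```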
You should now be able to access your Ktor application by navigating to http://your_domain.com (or your server’s IP) in your web browser!
Secure with HTTPS
If you attached a domain name to your server, you can secure it with a free SSL certificate from Let’s Encrypt.
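A typical Certbot flow on Ubuntu (replace your_domain.com with your actual domain):

```shell
# Install Certbot and its Nginx integration
sudo apt install -y certbot python3-certbot-nginx
# Request and install a certificate for your domain
sudo certbot --nginx -d your_domain.com
```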
Follow the prompts, and Certbot will automatically rewrite your Nginx configuration to support HTTPS.
Docker
Using Docker is a fantastic next step. It isolates your application, its dependencies, and its runtime environment into a single, portable container. This means if it runs on your machine, it is guaranteed to run exactly the same way on your server—no more “it works on my machine” headaches!
Here is how to transition your Ktor deployment to a Dockerized setup using a Multi-Stage Docker Build. This approach is great because it builds your Fat JAR inside a temporary container, meaning you don’t even need Java or Gradle installed on your server.
Gradle
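The buildFatJar task used by the Dockerfile below comes from the Ktor Gradle plugin. A minimal build.gradle.kts sketch, where the plugin versions and main class are illustrative and not taken from the original project:

```kotlin
plugins {
    application
    kotlin("jvm") version "1.9.22"
    id("io.ktor.plugin") version "2.3.7" // provides the buildFatJar task
}

application {
    // Replace with your application's actual main class
    mainClass.set("com.example.ApplicationKt")
}
```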
Dockerfile
In the root directory of your Ktor project (next to your build.gradle.kts), create a file simply named Dockerfile (no extension). Paste the following configuration:
# ==========================================
# Stage 1: Build the Fat JAR
# ==========================================
# Use an official Gradle image to build the app
FROM gradle:8.5-jdk17 AS build
# Copy your source files into the container
COPY --chown=gradle:gradle . /home/gradle/src
WORKDIR /home/gradle/src
# Run the build command (creates the Fat JAR)
RUN gradle buildFatJar --no-daemon
# ==========================================
# Stage 2: Run the Application
# ==========================================
# Use a lightweight Java Runtime image for the final container
FROM eclipse-temurin:17-jre-alpine
# Create a directory for the app
WORKDIR /app
# Copy ONLY the built JAR from the previous stage
COPY --from=build /home/gradle/src/build/libs/*-all.jar app.jar
# Expose the port your Ktor app runs on (usually 8080)
EXPOSE 8080
# Command to run when the container starts
ENTRYPOINT ["java", "-jar", "app.jar"]
Step 2: Create a docker-compose.yml (Recommended)
While you can run plain Docker commands, using Docker Compose is much better for managing server deployments. It allows you to define how your container should restart and run in the background.
In the same root directory, create a docker-compose.yml file:
version: '3.8'
services:
ktor-web:
build: .
container_name: ktor_backend
ports:
- "8080:8080" # Maps server port 8080 to container port 8080
restart: unless-stopped # Automatically restarts on crash or server reboot
Step 3: Prepare Your Server
If you previously set up the systemd service from our last tutorial, you’ll want to stop and disable it so it doesn’t clash with Docker on port 8080:
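Assuming the systemd unit from earlier was named app.service:

```shell
sudo systemctl stop app
sudo systemctl disable app
```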
Next, you need to install Docker and Docker Compose on your Ubuntu server. SSH into your server and run:
# Update packages
sudo apt update
# Install Docker
sudo apt install -y docker.io
# Install Docker Compose plugin (this package name assumes Ubuntu 22.04 or newer)
sudo apt install -y docker-compose-v2
# Ensure Docker starts on boot
sudo systemctl enable --now docker
Step 4: Deploy on the Server
Instead of manually copying JAR files using scp, the cleanest way to deploy Docker apps is to use version control (like Git).
- Push your project (including the new Dockerfile and docker-compose.yml) to a Git repository (GitHub, GitLab, etc.).
- On your server, clone the repository:
- Build and start your container in the background using Docker Compose:
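The steps above might look like this, with a placeholder repository URL you should substitute with your own:

```shell
# Clone your repository (URL is a placeholder)
git clone https://github.com/your-user/app.git
cd app
# Build the image and start the container in the background
docker compose up -d --build
```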
Note: The --build flag forces Docker to execute your multi-stage Dockerfile, compiling your code fresh. The -d flag runs it in “detached” mode (in the background).
Step 5: Check Your Work
To verify your Ktor container is running smoothly, you can view the live logs:
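Using the service name ktor-web defined in the compose file:

```shell
docker compose logs -f ktor-web
```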
(Press Ctrl+C to exit the log view).
What about Nginx?
If you set up Nginx as a reverse proxy in the previous tutorial, you don’t need to change anything! Nginx is already listening on port 80/443 and forwarding traffic to localhost:8080. Docker is now exposing your Ktor app on that exact same local port, so Nginx will seamlessly route external traffic right into your Docker container.
As a next step, you could set up a GitHub Actions workflow so that your server automatically pulls and rebuilds this Docker container every time you push new code.