Project Motivation

Why Golang?

Throughout my career at agencies and startups, rapid prototyping has been essential. Ruby tended to be the language of choice for many of these projects: it's easy to get something built quickly and, if you follow style guidelines, easy to read. But as these sites and apps began to scale, runtime speed, the need for concurrency, memory management, and throughput became common bottlenecks with Ruby. Note: I'm not language-bashing here--I still use Ruby on a daily basis--but there's never a single language that's right for all projects.

Go is a statically typed, compiled language. It isn't as easy as Ruby for jumping straight in and producing a working proof of concept, but the fundamentals of the language made sense. Once you get the Go concepts and syntax down (it is not an object-oriented language), it's easy to move forward quickly. And for a relatively young language, it has a high adoption rate, vast community support, and feature-complete packages.

Also, since Golang apps can be compiled into a single binary and executed with minimal OS resources/toolchains (See the Docker Build Notes static build section below), we can create a container with a very small footprint. As shown in the tutorial, Go can already compile for many different target platforms as well, which makes it easy to build binaries for, say, both x86 and Arm architectures.

Why Docker?

Breaking down application components into separate, single-purpose services lets the engineer work within confined environments. This is in contrast to a monolith, where chipping away at one section of the code can have negative cascading consequences elsewhere in the application due to internal dependencies. I often follow the 12 Factor App methodology, which does a great job of explaining the details, architecture, and reasoning behind the process. For a companion piece about how this project specifically relates to the 12 Factor App, see my article on DevGenius: Creating a compiled Golang binary for use in a minimal Docker container as defined by the 12 Factor App methodology.

Using Docker properly can ease you into creating 12 Factor applications by forcing the engineer to think not only about breaking an app down into microservices, but also about the application architecture involved. Also, by creating single-purpose containers, you'll often be guided towards more parity with the production landscape, e.g.: different services in the AWS and Google Cloud Platform stacks.

Rather than developing an app on a Linux system and using the OS as a shared resource (e.g.: running web and database services side by side), splitting these into separate container services accomplishes several things. Firstly, it declares and isolates the service dependencies: web service dependencies are different from database server dependencies. When you develop on a single machine, the dependencies for all pieces of the application are installed on, and shared through, the underlying OS. When running them in Docker containers, these dependencies are specific to the app service inside the container.

Why Kubernetes?

As mentioned in the above Docker section, microservice architecture can be essential in building highly-scalable applications. The Kubernetes section will give a high-level view of the components involved in a K8s cluster so that the Docker container can be replicated across nodes and accessed via a single endpoint.

What it is

This project is a high-level view of tying some basic Golang and Docker concepts together. It is meant to be a starting point: it highlights Go and Docker fundamentals, basic commands and tools, and how Go and Docker can work together to build fast, lightweight, and portable containers.

What it ain't

While I've provided some basic working ideas and a buildable, working application, you should already have some fundamental knowledge of Go and Docker.

Golang Application

This repository is a work in progress, but I'll do my best to keep the Main branch in a working state. Initially, this project was meant to create a boilerplate for containerizing Go binaries for use in a K8s cluster. For now, I'm just organizing my notes so that I can replicate the process from end to end. The idea is to keep this narrow and succinct and to be able to use it as a simple boilerplate for Go containers.

Project Topics

This project is in three distinct parts, each of which builds on the previous:

1) A simple but functional REST API app written in Go. This REST API incorporates:
  • The Fiber Monitor middleware (API endpoint: /api/v1/metrics).
  • Creating and serving API documentation (using swag init) based on Swagger specifications (API endpoint: /api/v1/docs/).
  • A YAML configuration pattern for setting app variables (see the config sketch after this list).
  • Basic Go endpoint tests via go test.
  • Building a binary of the app and embedding external files (both native compilation and cross-compilation for armv6 as an example) so that it is portable and self-contained.
  • Go Tools
    • File formatting for *.go files using gofmt.
    • Code linting for *.go files using golangci-lint.
    • Code documentation via godoc.
2) Using the app in a Docker container, covering:
  • Docker build concepts.
  • Docker run concepts.
  • Docker image versioning.
  • Ways to make use of bash scripts for repetitive tasks.
3) Using the Docker container in Kubernetes
  • This section is the most incomplete, but should be in a working state.
  • You should already have a working K8s cluster available for this section.
  • Does not provide much background; assumes some basic knowledge of kubectl.
  • This app will be deployed as a load-balanced Service across a Control Plane and 3 Worker nodes.
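
Since the list above mentions a YAML configuration pattern, here is a minimal sketch of that pattern, assuming gopkg.in/yaml.v3 and illustrative field names (not the repo's actual config schema):

package config

import (
	"os"

	"gopkg.in/yaml.v3"
)

// AppConfig mirrors the kind of settings this project keeps in YAML;
// the field names here are illustrative, not the repo's actual schema.
type AppConfig struct {
	ServerPort int  `yaml:"serverport"`
	Debug      bool `yaml:"debug"`
}

// Load reads a YAML file from disk and unmarshals it into an AppConfig.
func Load(path string) (*AppConfig, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var cfg AppConfig
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		return nil, err
	}
	return &cfg, nil
}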

Assumptions

  • IP Addresses: For the most part, disregard the hard-coded IP addresses in here (e.g.: my K8s cluster and VM IPs (192.168..)). You'll have to sub in your own for your particular environment. Right now, laziness!
  • Container vs. Pod: I'm noticing a few instances where I use container and pod to mean the same thing in the K8s section. Until I make them more consistent, assume they are interchangeable: a pod is basically a container in a K8s context. While a pod can technically have multiple containers, for this demonstration, assume a 1:1 relationship.
  • System: My system and architecture are below; you'll have to adjust your commands if you're departing from Linux/x86_64.

uname -a

Linux mjw-udoo-01 5.4.0-110-generic 124-Ubuntu SMP Thu Apr 14 19:46:19 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

Prerequisites

.env file:

The .env file contains the configuration for your app, and is used in the Docker build and run processes.

SERVERPORT=5000
DOCKERPORT=5000
DEBUG=false
DOCKERIMAGE=mattwiater/golangdocker

  • SERVERPORT: The port to open for the Golang app. Value: 5000
  • DOCKERPORT: The port to open for Docker to map to the port above. Value: 5000
  • DEBUG: Turn on debugging. Value: true/false
  • DOCKERIMAGE: The tag for your Docker image. Value: {your-docker-hub-account-username}/{your-docker-hub-image-name}

    Note

    The steps will refer to the docker image: mattwiater/golangdocker. You should change these steps to match your own image name in the .env file, e.g.: DOCKERIMAGE={your-docker-hub-account-username}/golangdocker

    Important

    If you want to tag the image differently, adjust the DOCKERIMAGE env variable to include an explicit tag in the format: {your-docker-hub-account-username}/{your-docker-hub-image-name}. For example, if it is a Version 1 release, you might tag it :v1, e.g.: mattwiater/golangdocker:v1

Required for Kubernetes integration:

Optional:

While the idea is to get this up and running quickly, it is not a deep dive into Go, Docker, or K8S. Basic knowledge of these technologies is required.

For example, we can peek into the container via the API endpoint api/v1/host and see the Docker-assigned hostname, "b189564db0c5", and verify that it is running a single process (procs: 1):

{
  hostInfo: {
    hostname: "b189564db0c5",
    uptime: 1238849,
    bootTime: 1667920883,
    procs: 1,
    os: "linux",
    platform: "",
    platformFamily: "",
    platformVersion: "",
    kernelVersion: "5.4.0-110-generic",
    kernelArch: "x86_64",
    virtualizationSystem: "docker",
    virtualizationRole: "guest",
    hostId: "12345678-1234-5678-90ab-cddeefaabbcc"
  }
}

Installation

The following programs will need to be installed:

My development environment

more /etc/os-release: Ubuntu 20.04.5 LTS

go version: go1.21 linux/amd64

docker -v: Docker version 24.0.6, build ed223bc

Simple Setup:

git clone git@github.com:mwiater/golangdocker.git
cd golangdocker
go mod tidy
go install github.com/swaggo/swag/cmd/swag@latest
go install golang.org/x/tools/cmd/godoc@latest
go install gotest.tools/gotestsum@latest
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(go env GOPATH)/bin v1.51.2

Preferred Setup: Anaconda

Once Anaconda is installed, you'll also need a compiler for your system, e.g., for Ubuntu: conda install gxx_linux-64

Create the environment: conda create -c conda-forge -n golangdocker go

Verify: conda info --envs

# conda environments:
#
base                     /home/matt/anaconda3
golangdocker             /home/matt/anaconda3/envs/golangdocker

Activate: conda activate golangdocker

git clone git@github.com:mwiater/golangdocker.git
cd golangdocker
go mod tidy
go install github.com/swaggo/swag/cmd/swag@latest
go install golang.org/x/tools/cmd/godoc@latest
go install gotest.tools/gotestsum@latest
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(go env GOPATH)/bin v1.51.2

When you're finished with the environment, you can deactivate it: conda deactivate

Or, remove it completely: conda env remove -n golangdocker

Makefile

There is a Makefile for convenience. At the moment, it's just acting as a script-runner. To view the executable targets, just type: make

Targets in this Makefile:

make docker-build
make docker-run
make golang-build
make golang-build-arm64
make golang-godoc
make golang-lint
make golang-run
make golang-test

For details on these commands, see the bash scripts in the scripts/ directory; the Makefile simply wraps them.

Note

Many of the bash scripts execute helpers before the main command, e.g.: swag init, gofmt, etc. There are exit status checks in place so that, for example, if gofmt fails prior to the build (usually because of a syntax error), the script will report the error and exit before trying to build the go binary--which would likely fail due to the error found via gofmt. Here is an example of the script pattern:

...
echo -e "${CYANBOLD}Building Swagger docs...${RESET}"
swag init
status=$?
if test $status -ne 0
then
	echo -e "${REDBOLD}...Error: 'swag init' command failed:${RESET}"
	echo ""
	exit 1
fi
echo -e "${GREENBOLD}...Complete.${RESET}"
echo ""

echo -e "${CYANBOLD}Formatting *.go files...${RESET}"
for i in *.go **/*.go ; do
	gofmt -w "$i"
	status=$?
	if test $status -ne 0
	then
		echo -e "${REDBOLD}...Error: 'gofmt' command failed!${RESET}"
		echo ""
		exit 1
	fi
	echo "Formatted: $i"
done;
echo -e "${GREENBOLD}...Complete${RESET}"
echo ""
...

Running the Application

While developing the app, you should run it natively (not in a Docker container) via:

go run main.go

Or, for convenience, run: make golang-run

The site will be available at: http://192.168.0.91:5000/api/v1 (substitute your own IP address)

Warning

This step should be completed before running via Docker, to ensure everything is working properly with the application itself. Any errors introduced at this point will simply be carried over when trying to run it in a Docker container.

Application Output

When running the app, you should see output similar to:

┌────────────────────────────────────────────────────┐
│                   Fiber v2.40.0                    │
│               http://127.0.0.1:5000                │ 
│       (bound on host 0.0.0.0 and port 5000)        │
│                                                    │
│ Handlers ............ 14  Processes ........... 1  │
│ Prefork ....... Disabled  PID ................. 1  │
└────────────────────────────────────────────────────┘

To get an understanding of each of the endpoints, explore them while the app is running:

/
/api/v1
/api/v1/docs/
/api/v1/metrics
/api/v1/resource/
/api/v1/resource/all
/api/v1/resource/cpu
/api/v1/resource/host
/api/v1/resource/load
/api/v1/resource/memory
/api/v1/resource/network

Swagger Docs

As an API documentation example, this app is bundled with Swagger UI Documentation, available at the /api/v1/docs/ API endpoint. Along with documented endpoints, there is a full REST interface to test out API calls through the browser, complete with curl examples and header information.
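
For context, swag generates those docs from comment annotations placed directly above the handlers. A minimal, hypothetical example of the annotation pattern (the handler name, route, and response shape here are illustrative, not necessarily the repo's):

package api

import "github.com/gofiber/fiber/v2"

// HostHandler godoc
// @Summary      Host information
// @Description  Returns host details as JSON for the machine or container running the app.
// @Tags         resource
// @Produce      json
// @Success      200 {object} map[string]interface{}
// @Router       /resource/host [get]
func HostHandler(c *fiber.Ctx) error {
	// swag init parses the comment block above and regenerates the docs package,
	// which is then served at the /api/v1/docs/ endpoint.
	return c.JSON(fiber.Map{"hostInfo": fiber.Map{"hostname": c.Hostname()}})
}

Running swag init re-parses these comments and regenerates the docs package that the Swagger UI serves.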

Docker

In this section, we will take our working application binary and wrap it in a bare-minimum Docker container.

For installation on your system, see the official documentation.

Building the Docker container

Note

The steps will refer to the docker image: mattwiater/golangdocker. You should change these steps to match your own image name in the .env file, e.g.: DOCKERIMAGE={your-docker-hub-account-username}/golangdocker

To build, run: make docker-build

Once you have built your image successfully, check the output of docker images

REPOSITORY                TAG       IMAGE ID       CREATED          SIZE
mattwiater/golangdocker   latest    053f21052659   10 minutes ago   26.4MB
...

You should see your tagged image in the list, similar to the output above.

Docker Build notes

Using a multi-stage build, we will use a very simple Dockerfile to containerize our app. Notes have been added here for context; the original Dockerfile is here.

# Stage 1: Use base Alpine image to prepare our binary, label it 'app'
FROM golang:alpine as app
# Add golangdocker user and group so that the Docker process in Scratch doesn't run as root
RUN addgroup -S golangdocker \
	&& adduser -S -u 10000 -g golangdocker golangdocker
# Change to the correct directory to hold our application source code
WORKDIR /go/src/app
# Copy all the files from the base of our repository to the current directory defined above
COPY . .
# Compile the application to a single statically-linked binary file
RUN CGO_ENABLED=0 go install -ldflags '-extldflags "-static"' -tags timetzdata

# Stage 2: Use the Docker Scratch image to copy our previous stage into
FROM scratch
# Grab necessary certificates as Scratch has none
COPY --from=alpine:latest /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
# Copy our binary to the root of the Scratch image (note: --from=app, the name we gave our first stage)
COPY --from=app /go/bin/golangdocker /golangdocker
# Copy the user that we created in the first stage so that we don't run the process as root
COPY --from=app /etc/passwd /etc/passwd
# Change to the non-root user
USER golangdocker
# Run our app
ENTRYPOINT ["/golangdocker"]

Note: Golang compilation flags

Note the last line of the top section: RUN CGO_ENABLED=0 go install -ldflags '-extldflags "-static"' -tags timetzdata. Here we are disabling CGO and using the -static flag, which lets Go build a statically compiled binary with no additional linked libraries and very few OS resources required to execute it.
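
As a small illustration of why the timetzdata tag matters: a scratch image has no /usr/share/zoneinfo, so timezone lookups fail at runtime unless the tzdata is embedded in the binary. A sketch you could build with CGO_ENABLED=0 go build -tags timetzdata:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Without the timetzdata build tag (or a time/tzdata import), this lookup
	// fails at runtime inside scratch because no zoneinfo files exist on disk.
	loc, err := time.LoadLocation("America/Los_Angeles")
	if err != nil {
		fmt.Println("timezone lookup failed:", err)
		return
	}
	fmt.Println("current time in", loc.String()+":", time.Now().In(loc).Format(time.RFC3339))
}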

Note: Scratch is bare

The app could certainly be built on top of the Alpine image and used from that point, rather than being re-built on scratch. But for this project we only need to run a single Go binary; we don't need the superfluous Alpine OS tools, so we can keep the image as small as possible by including only the bare-minimum dependencies needed to run the binary in the container. If it were built upon the full Alpine image, the container would have access to common Linux commands like ls, bash, etc. This is often nice to have for testing, but it adds small, unneeded overhead.

Because of the multi-stage build, none of the common Linux OS commands are required, or included. In fact, just trying to run the ls command on our image results in an error:

docker run -it -p 5000:5000 --entrypoint ls -laF --rm mattwiater/golangdocker

docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "ls": executable file not found in $PATH: unknown.

Note: Lightweight Scratch container

By using the scratch image in a multi-stage build, the Docker container is as lightweight as possible. As of this writing, the Go binary built with the make golang-build command is 25.5MB.

After building the Docker image with make docker-build, using docker images reveals that the image size is only 26.6MB--very little overhead!

docker images

REPOSITORY                TAG       IMAGE ID       CREATED          SIZE
mattwiater/golangdocker   latest    ecfe34d443c4   23 seconds ago   26.6MB

Running the Docker container

The make command below executes the following Docker command, using the .env variables you've defined:

docker run -it -p $DOCKERPORT:$SERVERPORT --rm --name golangdocker --hostname golangdocker $DOCKERIMAGE

Env vars used in the bash script:
SERVERPORT=5000
DOCKERPORT=5000
DOCKERIMAGE=mattwiater/golangdocker

For simplicity, the default setup above has both the application and the Docker container listening on port 5000. These ports can be different: DOCKERPORT is the host-side port that Docker publishes, and requests to it are forwarded to SERVERPORT inside the container, where the Go app is listening.

To run the app in the container, simply run: make docker-run

You should see the default Fiber message, e.g.:

┌────────────────────────────────────────────────────┐
│                   Fiber v2.40.0                    │
│               http://127.0.0.1:5000                │ 
│       (bound on host 0.0.0.0 and port 5000)        │
│                                                    │
│ Handlers ............ 14  Processes ........... 1  │
│ Prefork ....... Disabled  PID ................. 1  │
└────────────────────────────────────────────────────┘
      

On your host machine, you can now access the container via http://{your-host-ip-address}:5000

Our build is simple: just a compiled Go binary that runs in a container. This binary collects local resources/stats for display as JSON via these API endpoints using Fiber:

API Info:
/api/v1
System Info:
/api/v1/resource/
/api/v1/resource/all
/api/v1/resource/cpu
/api/v1/resource/host
/api/v1/resource/load
/api/v1/resource/memory
/api/v1/resource/network
API Metrics:

For simplicity, the default Fiber Monitor middleware is included and available at:

/api/v1/metrics
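
The field names in the hostInfo JSON shown earlier line up with gopsutil's host.InfoStat, so the sysinfo package most likely builds on gopsutil. Below is a minimal sketch of wiring the Monitor middleware and such a host endpoint into Fiber; it is an assumption-laden sketch, not the repo's actual routing code:

package main

import (
	"log"

	"github.com/gofiber/fiber/v2"
	"github.com/gofiber/fiber/v2/middleware/monitor"
	"github.com/shirou/gopsutil/v3/host"
)

func main() {
	app := fiber.New()
	v1 := app.Group("/api/v1")

	// Fiber's bundled Monitor middleware serves the live metrics dashboard.
	v1.Get("/metrics", monitor.New())

	// gopsutil's host.Info() returns the same fields shown in the JSON sample
	// earlier (hostname, uptime, bootTime, procs, virtualizationSystem, ...).
	v1.Get("/resource/host", func(c *fiber.Ctx) error {
		info, err := host.Info()
		if err != nil {
			return fiber.NewError(fiber.StatusInternalServerError, err.Error())
		}
		return c.JSON(fiber.Map{"hostInfo": info})
	})

	log.Fatal(app.Listen(":5000"))
}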

API Endpoint Documentation via Swagger

go install github.com/swaggo/swag/cmd/swag@latest

go get -u github.com/swaggo/fiber-swagger

When updating documentation, you must run this to regenerate docs data: swag init (swag init is incorporated into the bash scripts for convenience, e.g.: docker_run.sh)

Then, when you run the application, docs are available at:

/api/v1/docs/index.html

Docker container resource constraints

There is an important piece missing in the steps above: container resource constraints. In the repository docs and scripts, we are issuing the docker run command without the --cpus or --memory flags. Without these flags, your container will simply try to grab as much of the host's resources as it needs. As most applications make use of multiple containers, you'll likely have multiple containers running on the same host. These containers are likely doing different tasks and requesting host resources at differing rates, so containers should be tested and constrained appropriately.

For this app, I used ddosify to pummel it with traffic while testing different constraint values.

docker run -d -p 5000:5000 --rm --cpus=1 --memory=100m --name golangdocker --hostname golangdocker mattwiater/golangdocker

My development VM has 15GB of RAM and an 8-core processor. The --cpus flag above tells the container to limit itself to 1/8 of the total CPU availability; since I have 8 cores, I could go as high as --cpus=8. The --memory flag is straightforward: limit the container to 100MB of host RAM.

While testing, I started these values low and gradually increased them until the app was able to successfully fulfill 10,000 API requests from ddosify.

Note: Load testing

While I'm running this load test from a different host, all of this is on my local network--which is far from a real-world environment. Likely, you'll have to continuously tune these values until you better understand the total load and capacity of your host system.

As these values will hardly carry over across all systems (bare metal, VMs, cloud services--all with different capacities), it's hard to predict what values you'll need to set, so I've left them out of my scripts.

docker run -it --rm ddosify/ddosify

       __     __              _  ____
  ____/ /____/ /____   _____ (_)/ __/__  __
 / __  // __  // __ \ / ___// // /_ / / / /
/ /_/ // /_/ // /_/ /(__  )/ // __// /_/ /
\__,_/ \__,_/ \____//____//_//_/   \__, /
                                  /____/

Simple usage: ddosify -t targetsite.com

~ #  ddosify -t http://192.168.0.99:5000/api/v1/resource/cpu -n 10000
⚙️  Initializing...
🔥 Engine fired.

🛑 CTRL+C to gracefully stop.
✔️  Successful Run: 1400   100%       ❌ Failed Run: 0        0%       ⏱️  Avg. Duration: 0.00432s
✔️  Successful Run: 2900   100%       ❌ Failed Run: 0        0%       ⏱️  Avg. Duration: 0.00468s
✔️  Successful Run: 4400   100%       ❌ Failed Run: 0        0%       ⏱️  Avg. Duration: 0.00464s
✔️  Successful Run: 5900   100%       ❌ Failed Run: 0        0%       ⏱️  Avg. Duration: 0.00469s
✔️  Successful Run: 7400   100%       ❌ Failed Run: 0        0%       ⏱️  Avg. Duration: 0.00445s
✔️  Successful Run: 8901   100%       ❌ Failed Run: 0        0%       ⏱️  Avg. Duration: 0.00445s
✔️  Successful Run: 10000  100%       ❌ Failed Run: 0        0%       ⏱️  Avg. Duration: 0.00450s


RESULT
-------------------------------------
Success Count:    10000 (100%)
Failed Count:     0     (0%)

Durations (Avg):
  DNS                  :0.0000s
  Connection           :0.0000s
  Request Write        :0.0002s
  Server Processing    :0.0041s
  Response Read        :0.0002s
  Total                :0.0045s

Status Code (Message) :Count
  200 (OK)    :10000

Breakdown: Running Multiple Docker Containers

As we saw above, the make docker-run command executes a Docker command like this:

docker run -it -p $DOCKERPORT:$SERVERPORT --rm --name golangdocker --hostname golangdocker $DOCKERIMAGE

The variables above are defined in the .env file. So the executed command might look like this after variable interpolation:

docker run -it --rm -p 5000:5000 --name golangdocker --hostname golangdocker mattwiater/golangdocker

The -it (interactive mode) flag is important here. When executing with this flag, Docker runs the container and keeps you attached to the container's output, rather than dropping you back to your main shell. If we want to create more than one container in the same shell, we need to use the -d (detached) flag instead. This runs the container in the background, allowing you to execute other commands within the same session. Instead of using the script, let's create a detached container manually:

docker run -d --rm -p 5000:5000 --name golangdocker01 --hostname golangdocker01 mattwiater/golangdocker

Note that we've changed the --name and --hostname (to: golangdocker01) in the above example. These can be whatever you want, but for this example, it makes sense to number them.

Once you execute the command above, the only thing you'll see this time is a hash (e.g.: ce6ce3cf3907e15238e34a397ff1b30b53decfae491a1a37d2be41d08598a7d1) before you are dropped back into your main shell. This hash is the container ID, which you can see by issuing the docker ps command.

docker ps

CONTAINER ID   IMAGE                     COMMAND           CREATED          STATUS          PORTS                                       NAMES
ce6ce3cf3907   mattwiater/golangdocker   "/golangdocker"   14 seconds ago   Up 13 seconds   0.0.0.0:5000->5000/tcp, :::5000->5000/tcp   golangdocker01

Even though you're not attached to your container as in the previous examples, you can still access it the same way as before: http://{your-host-ip-address}:5000. Now, let's start a second container, utilizing port 5001 on your host this time:

docker run -d --rm -p 5001:5000 --name golangdocker02 --hostname golangdocker02 mattwiater/golangdocker

Again, note the change of the --name and --hostname (to: golangdocker02) in the command above. These must be unique values or Docker will complain. Issue docker ps again:

docker ps

CONTAINER ID   IMAGE                     COMMAND           CREATED          STATUS          PORTS                                       NAMES
d7574bd0ff3c   mattwiater/golangdocker   "/golangdocker"   5 seconds ago    Up 4 seconds    0.0.0.0:5001->5000/tcp, :::5001->5000/tcp   golangdocker02
ce6ce3cf3907   mattwiater/golangdocker   "/golangdocker"   9 minutes ago    Up 9 minutes    0.0.0.0:5000->5000/tcp, :::5000->5000/tcp   golangdocker01

Note

In the last docker run command, the host port has been set to 5001, but the container port remains 5000 (:::5001->5000/tcp in the output above). This is important: the Docker image we built compiled the Golang app to listen on port 5000, so the container port should always be 5000. The output above tells you that your host is routing its own port 5000 to one Docker container and its own port 5001 to the other. Each Docker container receives the request on that port and routes it internally to the Golang app listening on port 5000. Since there are no internal port collisions between containers, we could spawn as many containers as we want, all listening internally on port 5000, as long as each one is assigned a unique and available host port.

Now, visit each container via a browser tab:

http://{your-host-ip-address}:5000

{
	hostInfo: {
		hostname: "golangdocker01",
		uptime: 148851,
		bootTime: 1675121619,
		procs: 1,
		os: "linux",
		platform: "",
		platformFamily: "",
		platformVersion: "",
		kernelVersion: "5.4.0-110-generic",
		kernelArch: "x86_64",
		virtualizationSystem: "docker",
		virtualizationRole: "guest",
		hostId: "12345678-1234-5678-90ab-cddeefaabbcc"
	}
}

http://{your-host-ip-address}:5001

{
	hostInfo: {
		hostname: "golangdocker02",
		uptime: 148853,
		bootTime: 1675121619,
		procs: 1,
		os: "linux",
		platform: "",
		platformFamily: "",
		platformVersion: "",
		kernelVersion: "5.4.0-110-generic",
		kernelArch: "x86_64",
		virtualizationSystem: "docker",
		virtualizationRole: "guest",
		hostId: "12345678-1234-5678-90ab-cddeefaabbcc"
	}
}

To stop the containers, issue the docker stop command: docker stop golangdocker01 && docker stop golangdocker02

In the above example, we are not running the containers with enforced restrictions, like limiting the amount of memory or CPU they can use on the host machine. See these flags (those starting with --cpu and --memory) and more here: Docker Run Options. Running unrestricted containers on the same host machine is not good practice; over time, they'll consume as many host resources as possible. In the Kubernetes section below, you'll see the better practice of using K8s to orchestrate replicated containers across multiple nodes (hosts).

Tests

Very simple tests are in: api_test.go
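
As a sketch of the endpoint-test pattern (not the repo's actual api_test.go), Fiber's built-in app.Test helper lets you exercise routes in-process:

package api_test

import (
	"net/http"
	"net/http/httptest"
	"testing"

	"github.com/gofiber/fiber/v2"
)

// TestPingRoute sketches the endpoint-test pattern: build a Fiber app,
// register a route, and exercise it in-process with app.Test().
func TestPingRoute(t *testing.T) {
	app := fiber.New()
	app.Get("/api/v1", func(c *fiber.Ctx) error {
		return c.SendString("ok")
	})

	req := httptest.NewRequest(http.MethodGet, "/api/v1", nil)
	resp, err := app.Test(req)
	if err != nil {
		t.Fatalf("request failed: %v", err)
	}
	if resp.StatusCode != http.StatusOK {
		t.Fatalf("expected 200, got %d", resp.StatusCode)
	}
}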

Run Tests

Run via: clear && go test -v $(go list ./... | grep -v /docs | grep -v /config | grep -v /api)

Or via Makefile: make golang-test (which will execute the scripts/golang_test.sh script)

Clearing test cache...
...Complete.

Running tests...
PASS common.ExamplePrettyPrintJSONToConsole (0.00s)
PASS common.ExampleUniqueSlice (0.00s)
PASS common.ExampleSplitStringLines (0.00s)
PASS common
EMPTY .
PASS config.TestAppConfig (0.00s)
PASS config
PASS sysinfo.TestGetMemInfo (0.00s)
PASS sysinfo.TestGetCPUInfo (0.00s)
PASS sysinfo.TestGetHostInfo (0.00s)
PASS sysinfo.TestGetNetInfo (0.00s)
PASS sysinfo.TestGetLoadInfo (0.00s)
PASS sysinfo.ExampleTestTZ (0.00s)
PASS sysinfo.ExampleTestTLS (0.28s)
PASS sysinfo
PASS api.TestAPIRoutes (0.07s)
PASS api                                           
EMPTY docs

DONE 12 tests in 0.824s
...Complete.

Clear Test Cache

While the bash script automatically clears the test cache, if you run the tests manually, you can clear the test cache via: go clean -testcache

Linting

Note

This section is in progress.

To Do

golangci-lint

go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest

Usage: golangci-lint run

Or via Makefile: make golang-lint (which will execute the scripts/golang_lint.sh script)

Godoc

Note

This section is in progress.

Generate and serve app documentation via godoc.

Usage

godoc -http=:6060

Or via Makefile: make golang-godoc (which will execute the scripts/golang_godoc.sh script)

Access via browser at: http://{your-ip-address}:6060/pkg/{app-module-name-in-go.mod}

E.g.: http://192.168.0.91:6060/pkg/github.com/mattwiater/golangdocker/

Load Testing

Note

This section is in progress.

A simple local load test example using Artillery.

To Do

Installation

npm install -g artillery@latest

Plugins

Official: Per-endpoint (URL) metrics

npm install artillery-plugin-metrics-by-endpoint

Test Phases

Config file: golangdocker-loadtest.yml

config:
  phases:
    - duration: 60
      arrivalRate: 5
      name: Warm up
    - duration: 120
      arrivalRate: 5
      rampTo: 50
      name: Ramp up load
    - duration: 600
      arrivalRate: 50
      name: Sustained load
  plugins:
    metrics-by-endpoint:
      useOnlyRequestNames: false
  processor: "custom-artillery-functions.js"
scenarios:
  - name: "golang.0nezer0.com"
    flow:
      - get:
          url: "/v1"
          afterResponse: "customMetrics"
      - get:
          url: "/v1/cpu"
          afterResponse: "customMetrics"
      - get:
          url: "/v1/host"
          capture:
            - json: "$['hostInfo']['virtualizationSystem']"
              as: "virtualizationSystem"
            - json: "$['hostInfo']['hostname']"
              as: "hostname"
          afterResponse: "customMetrics"
      # - log: "{{ hostname }} [{{ virtualizationSystem }}]" # Here to ensure we are correctly load-balancing different pods in K8s deployment
      - get:
          url: "/v1/load"
          afterResponse: "customMetrics"
      - get:
          url: "/v1/mem"
          afterResponse: "customMetrics"
      - get:
          url: "/v1/net"
          afterResponse: "customMetrics"

Custom Scripts

Artillery reference: https://www.artillery.io/docs/guides/guides/extension-apis#example

This simple example makes use of a custom Fiber middleware wrapper that captures the time spent on the server in each API call and sets a Server-Timing response header, e.g.: Server-Timing: route;dur=16. See the RouteTimerHandler() function in api/api.go.
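
Below is a sketch of what such a timing middleware can look like in Fiber; it is illustrative only, see RouteTimerHandler() in api/api.go for the real implementation:

package api

import (
	"fmt"
	"time"

	"github.com/gofiber/fiber/v2"
)

// routeTimer measures how long the downstream handlers take and reports it
// in a Server-Timing response header, e.g. "Server-Timing: route;dur=16",
// which the Artillery customMetrics function below parses.
func routeTimer(c *fiber.Ctx) error {
	start := time.Now()
	err := c.Next() // run the rest of the handler chain first
	c.Set("Server-Timing", fmt.Sprintf("route;dur=%d", time.Since(start).Milliseconds()))
	return err
}

It would be registered with app.Use(routeTimer) ahead of the routes it should time.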

Custom script file: custom-artillery-functions.js

//
// custom-artillery-functions.js
//

module.exports = {
  logHeaders: logHeaders,
  customMetrics: customMetrics
}

function logHeaders(requestParams, response, context, events, next) {
  // console.log(response.headers);
  return next();
}

function customMetrics(requestParams, response, context, events, next) {
  const latency = parseServerTimingLatency(response.headers["server-timing"], "route");
  const url = new URL(requestParams.url);
  const routePath = url.pathname.replaceAll("/", "_")
  events.emit("histogram", "route_latency"+routePath.trim(), latency);
  return next();
}

function parseServerTimingLatency(header, timingMetricName) {
  const serverTimings = header.split(",");

  for (let timing of serverTimings) {
    const timingDetails = timing.split(";");
    if (timingDetails[0] === timingMetricName) {
      return parseFloat(timingDetails[1].split("=")[1]);
    }
  }
}

Load Tests

In order to benchmark the different run processes, we need to start the app differently before sending a load test. You will also want to run these tests from a different physical machine than the one running the container. Keep in mind that these are not real-world load tests, as we are mostly testing targets within the same network. These tests are mainly for comparing ways of running the app, e.g.: bare Go app, inside a Docker container, within K8s with replicas.

No container, bare app

With the app running outside a container, e.g.: make golang-run

clear && \
  artillery run --output golangdocker-bare.json --target http://192.168.0.91:5000/api golangdocker-loadtest.yml && \
  artillery report golangdocker-bare.json  

Docker Container

With the app running in a Docker container, e.g.: make docker-run

clear && \
  artillery run --output golangdocker-docker.json --target http://192.168.0.91:5000/api golangdocker-loadtest.yml && \
  artillery report golangdocker-docker.json  

Kubernetes

Assumes a working K8s cluster and manual scaling of replicas for each test, e.g.:

clear && \
  artillery run --output golangdocker-k8s-3-replica.json --target http://192.168.0.91:5000/api golangdocker-loadtest.yml && \
  artillery report golangdocker-k8s-3-replica.json  
clear && \
  artillery run --output golangdocker-k8s-2-replica.json --target http://192.168.0.91:5000/api golangdocker-loadtest.yml && \
  artillery report golangdocker-k8s-2-replica.json  

Kubernetes

Note

This section is in progress.

This section walks through the high-level process of integrating your Docker container into your Kubernetes cluster. The following example will set up your Docker container to run as load-balanced replicas within your cluster.

Assumptions

You have built the container on the Control Plane node, e.g.:

To build, run: make docker-build

Once you have built your image successfully, check the output of docker images:

REPOSITORY                TAG       IMAGE ID       CREATED          SIZE
mattwiater/golangdocker   latest    053f21052659   10 minutes ago   26.4MB
...

You should see your tagged image in the list, similar to the output above.

Here we are going to use the :v1 tag so that we can use K8s Rolling Updates when we make changes to the image. If you have built images in the previous sections, you'll likely see multiple versions of your image with different tags:

docker images

REPOSITORY                TAG       IMAGE ID       CREATED         SIZE
mattwiater/golangdocker   latest    e9b376df3a3f   24 minutes ago  26.4MB
mattwiater/golangdocker   v1        e9b376df3a3f   4 minutes ago   26.4MB
...

And pushed it to Docker Hub, e.g.: docker push mattwiater/golangdocker:v1

Docker Hub Note

This step is important for the remaining nodes to download and run the image without having to manually build it locally on each node. K8s can use local images to spawn pods, but that would require a manual build on each node (downloading the repo, building the image, and changing the manifest entry for imagePullPolicy: Always to imagePullPolicy: Never), which we are skipping for this demonstration.

For rolling updates, we would just make the necessary updates to our code, build an image tagged with a new version, e.g.: :v1.1, :v2, etc., push it to docker hub, and then issue the command:

Need to fix

kubectl set image deployments/k8s-golang-api k8s-golang-api=mattwiater/golangdocker:v2

Problem: The command is not working with namespaced deployments and needs adjusting--most likely it just needs the -n k8s-golang-api flag. The command above tells K8s to update the existing deployment to the newer version, and it will take care of bringing down the old pods and spawning new pods with no downtime.

Load Balancer

Since we want to make use of multiple container instances in our cluster, accessible via a single external endpoint, we'll need to set up a load balancer.

The basic traffic path for our setup is:

  • Ingress: Our domain maps to an exposed Service so that we can reach it from outside the cluster
  • Service: The load balancer, which routes traffic from a single endpoint to multiple Pods via internal Endpoints
  • Endpoints: Defines which target Pods to route traffic to: K8s-internal Pod IP addresses and ports

For this example, we'll use Metal-LB to do the heavy lifting.

Metal LB

Installation: https://metallb.universe.tf/installation/

Ensure that Strict ARP Mode is enabled in your cluster:

kubectl edit configmap -n kube-system kube-proxy

Edit/Add the mode and strictARP fields to match below:

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true  

Next, set up the Metal-LB infrastructure and resources by applying the Metal-LB native Manifest via:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml

Your cluster will vary, but my setup has static IP Addresses on my local network:

kubectl get nodes

NAME STATUS ROLES AGE VERSION
mjw-udoo-01 Ready control-plane 181d v1.25.3
mjw-udoo-02 Ready worker 181d v1.25.3
mjw-udoo-03 Ready worker 181d v1.25.3
mjw-udoo-04 Ready worker 181d v1.25.3

Configure Metal-LB to add these IP addresses to the IPAddressPool (REF: https://metallb.universe.tf/usage/example/)

cat <<EOF | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: udoo
  namespace: metallb-system
spec:
  addresses:
  - 192.168.0.91-192.168.0.94
EOF  

Then, advertise the named IPAddressPool (in my case, udoo) to the cluster:

cat <<EOF | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: external
  namespace: metallb-system
spec:
  ipAddressPools:
  - udoo
EOF  

Then, create the namespace and deployment for the app. The following code creates the k8s-golang-api namespace for the app to run in and be identified with. It is up to you to choose a name that makes sense, but be sure to adjust the following YAML snippets to reflect your Namespace name in all of the namespace: fields.

Create Namespace

Create the k8s-golang-api namespace to group all services, deployments, etc. Notice that all of the following YAML definitions use namespace: k8s-golang-api to access this new Namespace definition.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: k8s-golang-api
EOF  

Create Deployment

The following defines how K8s will deploy the Pods on your system. It defines the names, associated Namespaces, number of Replicas, Resource Limits, Ports, etc.

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-golang-api
  namespace: k8s-golang-api
  labels:
    app: k8s-golang-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: k8s-golang-api
  template:
    metadata:
      labels:
        app: k8s-golang-api
    spec:
      containers:
        - name: k8s-golang-api
          image: 'mattwiater/golangdocker:latest'
          env:
          - name: K8S_NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          - name: K8S_NODE_IP
            valueFrom:
              fieldRef:
                fieldPath: status.hostIP
          - name: K8S_POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: K8S_POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: K8S_POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP
          imagePullPolicy: Always
          resources:
            requests:
              memory: "500Mi"
              cpu: "250m"
            limits:
              memory: "500Mi"

              cpu: "250m"
          ports:
            - containerPort: 5000
              protocol: TCP
EOF  
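
The K8S_* variables above are injected by the Kubernetes downward API. How the app consumes them is up to you; as a hypothetical sketch, it could simply read them from the environment:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Injected by the Deployment manifest above via the downward API;
	// outside a cluster these are simply empty strings.
	fmt.Println("node:     ", os.Getenv("K8S_NODE_NAME"), os.Getenv("K8S_NODE_IP"))
	fmt.Println("pod:      ", os.Getenv("K8S_POD_NAME"), os.Getenv("K8S_POD_IP"))
	fmt.Println("namespace:", os.Getenv("K8S_POD_NAMESPACE"))
}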

The final two steps, Service and Ingress, are responsible for routing external traffic into the cluster.

Create Service

You can see that the Service accepts incoming traffic on port 80 and routes it to the Pods labeled k8s-golang-api, which are already listening on port 5000 (defined in the Deployment manifest above: containerPort: 5000).

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: k8s-golang-api
  namespace: k8s-golang-api
spec:
  type: LoadBalancer
  selector:
    app: k8s-golang-api
  ports:
  - name: web
    port: 80
    targetPort: 5000
EOF  

Create Ingress

In my setup, I want the containers to be accessible on port 80 at the domain golang.0nezer0.com. So the Ingress definition below maps that domain to the Service defined above.

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8s-golang-api-ingress
  namespace: k8s-golang-api
spec:
  defaultBackend:
    service:
      name: k8s-golang-api
      port:
        number: 80
  rules:
  - host: golang.0nezer0.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: k8s-golang-api
            port:
              number: 80
EOF
  

You can verify the setup via:

kubectl describe ingress k8s-golang-api-ingress -n=k8s-golang-api

Name:             k8s-golang-api-ingress
Labels:           <none>
Namespace:        k8s-golang-api
Address:
Ingress Class:    <none>
Default backend:  k8s-golang-api:80 (10.244.1.74:5000,10.244.2.104:5000,10.244.3.66:5000)
Rules:
  Host                Path  Backends
  ----                ----  --------
  golang.0nezer0.com
                      /   k8s-golang-api:80 (10.244.1.74:5000,10.244.2.104:5000,10.244.3.66:5000)
Annotations:          <none>
Events:               <none>

Note that the domain is listed and the Backends point to the Service we created.

Ensure that you have an IP Address allocated for the Load Balancer:

kubectl get svc -n=k8s-golang-api

NAME             TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)        AGE
k8s-golang-api   LoadBalancer   10.105.31.196   192.168.0.91   80:31188/TCP   21s

Assuming that your setup is also on your local network, make sure to add an IP -> domain mapping to the /etc/hosts file on the machine you are accessing the cluster from:

192.168.0.91 golang.0nezer0.com

Horizontal Pod Autoscaler (HPA)

Note

This section needs more documentation.

kubectl autoscale deployment -n k8s-golang-api k8s-golang-api --cpu-percent=75 --min=1 --max=3

Note: kubectl autoscale only supports a CPU target; memory-based scaling requires an HPA manifest using the autoscaling/v2 API instead.