It has been a while since I last posted, but between college, work, and kids, I’ve been pretty busy. That said, I recently attended KubeCon 2019 and saw a lot of interesting presentations. As a fan of Rancher, I gravitated toward a lot of their talks. One that really caught my attention was Darren Shepherd’s talk on k3s. I really liked what I saw: it makes setting up Kubernetes really easy and lightens the dependency load for small clusters, while still offering the right amount of “batteries included,” like most things made by Rancher.
I decided to move my home server (which runs, among other things, this blog) to k3s. Here, I’ll walk through how I did it — at least specifically for running a WordPress blog — just to demonstrate how easy it is. Fair warning though, there is a lot of YAML ahead!
The Basics
First, I’ll lay out my scenario (your mileage may vary).
- OS: Ubuntu Linux 18.04 (LTS)
- Blog Docker Image: wordpress:php7.2-apache
- DB: MariaDB 10 (basically MySQL) running on a dedicated host (outside of Docker)
- Storage: hostPath-based (not using actual volumes), living under /usr/local/volumes on my server
All of which is to say: this is not a guide to running WordPress in an enterprise. This is just how I have my home server set up, which has been upgraded and carried along through numerous iterations of hosting my blog.
Note that all commands are run as root (which you can get to via sudo su -).
Installing k3s
Taken straight from the k3s GitHub repo:
```bash
curl -sfL https://get.k3s.io | sh -
```
Wait a bit and you’ll have a k3s service (à la systemd) up and running.
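If you want a quick sanity check before going further, the installer drops a systemd unit and bundles kubectl, so something like this should confirm the node is up (exact output will vary):

```bash
systemctl status k3s
kubectl get nodes
```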
Note that if you’re paranoid like me and have decent firewall rules on your Linux box (mine acts as my home router/Internet gateway as well), then you’ll need to set up firewall rules to allow the Kubernetes networks outbound. I use ufw to manage iptables, but the rules more-or-less translate to:
```bash
iptables -A INPUT -s 10.42.0.0/16 -j ACCEPT
iptables -A INPUT -s 10.43.0.0/16 -j ACCEPT
```
Make sure these are persisted as they are what allow Pods to talk to things outside the Kubernetes network (including the Internet). I can’t find good documentation on the k3s site to back this up other than looking at the default values for k3s server --cluster-cidr and k3s server --service-cidr here.
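Since I manage iptables through ufw, the same allow rules can be expressed directly in ufw instead; this is just a sketch of the equivalent rules (ufw persists them across reboots for you):

```bash
# Allow traffic from the default k3s Pod and Service networks
ufw allow from 10.42.0.0/16
ufw allow from 10.43.0.0/16
```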
TLS
I use Let’s Encrypt for securing my blog, so it was important to me to maintain this as I move to Kubernetes. Luckily, cert-manager exists and works with Kubernetes and Let’s Encrypt pretty easily. To add it, we need to install the controller and related resources:
```bash
# Create the required namespace
kubectl create namespace cert-manager

# Install cert-manager
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.12.0/cert-manager.yaml
```
That installs cert-manager, but now we need a ClusterIssuer that can provision certificates via Let’s Encrypt. Most guides will talk about using the staging API (which I highly recommend if this is your first time using Let’s Encrypt), but since I know this works already, I’m going to skip ahead to Prod.
Create a YAML file (I called mine certs-clusterissuer.yaml) and put something like this in there:
```yaml
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt
  namespace: cert-manager
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: my.email@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: traefik
```
Be sure to replace “my.email@example.com” with your real email address! Also note that this wires up a ClusterIssuer Kubernetes resource with Let’s Encrypt’s production API, but it won’t yet actually request any certificates. That’ll happen when we put the appropriate annotations on an Ingress resource later.
The kind: ClusterIssuer bit allows a single issuer to work for the entire cluster, rather than requiring an issuer per namespace. Given my use case, this makes sense, but in an enterprise it might make sense to break this up more.
Apply the YAML:
```bash
kubectl apply -f certs-clusterissuer.yaml
```
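To confirm the issuer actually registered with Let’s Encrypt, you can poke at its status; with cert-manager v0.12 something like this should show a Ready condition once the ACME account is set up:

```bash
kubectl get clusterissuer letsencrypt
kubectl describe clusterissuer letsencrypt
```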
Blog Components
Namespace
I avoid the default namespace whenever I can (using Kubernetes at work has drilled this into me), so let’s make a dedicated namespace for the blog:
```bash
kubectl create namespace blog
```
Deployment
Now for some REAL fun with YAML… let’s create the Deployment resource, which describes how to actually run our WordPress container. We’ll call this file blog-dep.yaml and put this content in there:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog
  namespace: blog
  labels:
    app.kubernetes.io/name: blog
    app.kubernetes.io/part-of: blog
    app.kubernetes.io/component: wordpress
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2
      maxUnavailable: 0
  selector:
    matchLabels:
      app.kubernetes.io/part-of: blog
      app.kubernetes.io/component: wordpress
  template:
    metadata:
      labels:
        app.kubernetes.io/name: blog
        app.kubernetes.io/part-of: blog
        app.kubernetes.io/component: wordpress
    spec:
      volumes:
      - name: site-docroot
        hostPath:
          path: /usr/local/volumes/blog
          type: Directory
      containers:
      - name: www
        image: wordpress:php7.2-apache
        imagePullPolicy: Always
        stdin: true
        tty: true
        env:
        - name: WORDPRESS_DB_HOST
          value: mydbhost.home
        - name: WORDPRESS_DB_USER
          value: wpdbuser
        - name: WORDPRESS_DB_PASSWORD
          value: "897rw8uriwra232832909uad"
        - name: WORDPRESS_DB_NAME
          value: blogdb
        resources:
          limits:
            memory: 512Mi
            cpu: "2"
          requests:
            memory: 64Mi
            cpu: 50m
        livenessProbe:
          tcpSocket:
            port: 80
          timeoutSeconds: 2
          initialDelaySeconds: 2
          periodSeconds: 2
          failureThreshold: 3
        readinessProbe:
          tcpSocket:
            port: 80
          timeoutSeconds: 6
          initialDelaySeconds: 2
          periodSeconds: 5
          failureThreshold: 3
        ports:
        - name: www
          containerPort: 80
          protocol: TCP
        volumeMounts:
        - name: site-docroot
          mountPath: /var/www/html
```
Much of the above is pretty arbitrary and just reflects patterns I’ve found helpful. That said, pay very close attention to these things:
```yaml
hostPath:
  path: /usr/local/volumes/blog
```
The above describes where on my host (meaning outside of Kubernetes) the document root and all variable data will be stored. This directory should already exist (for me, it is where my content was already stored previously). It gets mounted within the Pod/container at /var/www/html.
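If the directory doesn’t exist yet, you’ll want to create it on the host before applying the Deployment. A minimal sketch, assuming the official wordpress image’s Apache user (www-data, uid 33) should own the docroot so uploads and updates work:

```bash
# One-time host setup (using the example path from the Deployment above)
mkdir -p /usr/local/volumes/blog
chown -R 33:33 /usr/local/volumes/blog
```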
```yaml
env:
- name: WORDPRESS_DB_HOST
  value: mydbhost.home
- name: WORDPRESS_DB_USER
  value: wpdbuser
- name: WORDPRESS_DB_PASSWORD
  value: "897rw8uriwra232832909uad"
- name: WORDPRESS_DB_NAME
  value: blogdb
```
Hopefully, the above values are intuitive, but they almost certainly need to be changed. The DB hostname should be resolvable (or be changed to an IP), and the credentials should have appropriate permissions on the DB, which should also already exist. You could also run your DB in a container, but setting that up is beyond the scope of this post. Let me know if you get stuck.
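If you want to verify the DB side before the Pod tries to, a quick check from the host works; this assumes the mysql/mariadb client is installed and uses the example values from the Deployment above:

```bash
# Should connect and return "1" if the host, credentials, and database are good
mysql -h mydbhost.home -u wpdbuser -p blogdb -e 'SELECT 1;'
```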
Also, make note of the resource lines for CPU and memory. If your site is running slow and you have excess capacity on your server, you can bump this up.
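If you’re curious what the Pod actually consumes relative to those requests and limits, recent k3s releases bundle metrics-server, so (assuming it’s running) you can check live usage once the Pod is up:

```bash
kubectl top pod -n blog
```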
Now we need to apply our configuration:
```bash
kubectl apply -f blog-dep.yaml
```
You can check on the status of your deployment via:
```bash
kubectl get pods -n blog
```
You should see something like:
```
NAME                    READY   STATUS    RESTARTS   AGE
blog-5cff5978d7-kznkw   1/1     Running   0          21s
```
You can get the logs for this via:
```bash
kubectl logs -n blog blog-5cff5978d7-kznkw
```
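Since the Pod name is generated and changes on every rollout, it can be handier to follow logs via the Deployment instead:

```bash
kubectl logs -f -n blog deployment/blog
```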
Service
Kubernetes has a lot of layers of abstraction, which make it very powerful and flexible but also mean a lot of configuration. Here, we’ll create the Service resource that exposes the ports used by the Deployment to the rest of the Kubernetes cluster. You can think of this as giving the blog Deployment an internal DNS name within Kubernetes that allows only specific ports to be accessed via that name.
Create a file called blog-svc.yaml and put this content in there:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: blog
  namespace: blog
  labels:
    app.kubernetes.io/name: blog
    app.kubernetes.io/part-of: blog
    app.kubernetes.io/component: blog-service
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: www
  selector:
    app.kubernetes.io/part-of: blog
    app.kubernetes.io/component: wordpress
```
The type here is ClusterIP, which only exposes the service within the Kubernetes cluster. Also, notice the selector rules; they are how the service finds its Pod(s).
Apply this configuration:
```bash
kubectl apply -f blog-svc.yaml
```
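A quick way to confirm the selector matched the Pod is to look at the Service’s endpoints; if the ENDPOINTS column comes back empty, the labels don’t line up:

```bash
kubectl get endpoints -n blog blog
```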
Ingress
Now for our final piece of YAML: our Ingress resource. Note: when we deploy this resource, cert-manager will try to provision TLS certificates via Let’s Encrypt. So before running these steps, be sure that you know what your blog’s DNS name is supposed to be and that you’ve already pointed it at your server! For me, this was already set up, since I was moving from a docker-compose approach to Kubernetes. In any case, be sure your exact hostname is set correctly and resolvable externally before continuing.
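One way to double-check the DNS side (assuming dig is installed; blog.gnagy.info is just the hostname used in this post) is to make sure the name resolves to your server’s public IP from outside:

```bash
dig +short blog.gnagy.info
```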
The Ingress resource is kind of like an application load-balancer; it intelligently routes HTTP requests to upstream Pods in your cluster based on the hostname and/or URI. It can also do a few other handy things like automatic HTTP->HTTPS upgrading. We’ll make use of this feature here.
Create a file called blog-ingress.yaml and put this content in there:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: blog
  namespace: blog
  labels:
    app.kubernetes.io/name: blog
    app.kubernetes.io/part-of: blog
    app.kubernetes.io/component: blog-ingress
  annotations:
    # Forces HTTPS
    ingress.kubernetes.io/ssl-redirect: "true"
    # Uses the right ingress class
    kubernetes.io/ingress.class: "traefik"
    # Tells this Ingress to issue a cert via our ClusterIssuer
    cert-manager.io/cluster-issuer: letsencrypt
    # Uses "http01" for ACME provisioning via Let's Encrypt
    cert-manager.io/acme-challenge-type: http01
spec:
  rules:
  - host: blog.gnagy.info
    http:
      paths:
      - backend:
          serviceName: blog
          servicePort: www
  tls:
  - hosts:
    - blog.gnagy.info
    secretName: blog-tls
```
There’s a lot to pay attention to there. Both locations that say blog.gnagy.info should be replaced with your blog’s hostname. Also, make note of the annotations used. Nothing should require changing, but it is worth understanding how they wire our ClusterIssuer and this Ingress resource together.
Apply this configuration:
```bash
kubectl apply -f blog-ingress.yaml
```
In just a few minutes, you should have a WordPress installation running on k3s!
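If you want to watch the certificate get issued, cert-manager should create a Certificate resource for the blog-tls secret referenced in the Ingress; its Ready condition flips to True once Let’s Encrypt signs it, and then a plain HTTP request should redirect to HTTPS:

```bash
kubectl get certificate -n blog
kubectl describe certificate -n blog blog-tls

# Once the cert is ready, this should show a redirect to https://
curl -I http://blog.gnagy.info
```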