Deploying EKS with Prometheus and Grafana
Introduction
On the face of it this is a fairly pedestrian post subject. The devil, of course, is in the details. Here I wanted to deploy a new EKS cluster, with Prometheus and Grafana, cleanly and with good architectural domain boundaries. I am using the open-source, self-hosted stack rather than the managed versions in this exercise. I also deployed using OpenTofu.
The code and the short version are HERE. The following is the commentary.
Background
Although I have deployed and worked on many k8s clusters on work projects, typically I would expect to be working on clusters that have been running for some time. Any 'new' cluster is usually either a temporary PoC or one following an existing template, if there is one at all. It doesn't help that Kubernetes is not only complex and festooned with options, but also a rapidly moving target with a quarterly release cadence. I recently found myself with some time on my hands and took the opportunity to make a deployment with some production-grade components.
For personal study it’s potentially non-trivial and often ’non-cheap’ to deploy a representative Kubernetes cluster and there isn’t always a direct overlap between what’s available on one cloud provider or distribution and another. Here I am using EKS with some considerations toward keeping the costs down.
The Plan
I wanted to:
- Have sensible architectural domain boundaries
- Use EKS
- Keep costs as low as possible
- Not reinvent the wheel
- Keep to the brief
Domain Boundaries
It's not unusual to encounter significant tech debt in real-world clusters. Depending on when a given cluster was originally built out and how it has been maintained, it could be anything from brilliant to otherwise. One pattern I have seen more than once is over-use of Terraform: someone new to k8s at the time was familiar with Terraform, they were deploying the cluster with Terraform, so they deployed lots of stuff on the cluster with Terraform too. If the org sticks around, the legacy of this is likely to become problematic at some point. Kubernetes management is often more like configuration management, with changes made in place rather than destroy-and-recreate. For this reason it's best, if possible, to think about architectural domains early on and not to get carried away using a single pattern or tool beyond its sensible limits. Personally I think of Kubernetes as an operating system rather than a single resource; I wouldn't use Terraform to maintain a stateful server either. Here I decided to do the initial cluster deployment with Terraform and the onward steps with `helm` and other kube-native tools.
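To make the boundary concrete, here is a minimal sketch of how the split works in practice; the cluster name, region and release details are placeholders rather than the repo's actual values:

```bash
# Cluster lifecycle is OpenTofu's job and nothing else's
tofu init
tofu apply

# Hand over to kube-native tooling for everything that runs *on* the cluster
aws eks update-kubeconfig --name my-eks-cluster --region eu-west-2
helm list --all-namespaces   # from here on it's helm/kubectl, not tofu
```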
Using EKS
For this exercise I was more focussed on the services deployment than on the underlying platform. Generally speaking, Kubernetes is a leveller in that things within it will be comparable across different platforms and providers, but the integrations with the rest of an ecosystem will often be specific to a particular one. I have mostly worked with AWS and EKS, so that's what I am using here.
Keeping Costs Down
This cuts both ways. It's very easy to overspend in the cloud, and with Kubernetes perhaps even more so. By the second stage of this project I had 54 Terraform resources and 2 created by delegation. I wanted a clean deployment that I could rapidly and repeatedly deploy and tear down, so that it only needed to be running for an hour or two at a time. This is not typical for Kubernetes, but it is economical and it encourages good practice with regard to code, runbooks, etc.
Separately, I already have some billing alarms set up in $5 increments. I won't go into detail on these here, but you should absolutely ensure you have them set up before you begin with anything that might be costly, like forgetting to tear down your cluster.
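For the record, a CloudWatch billing alarm is one way to do this (I'm not claiming this is exactly how mine are configured). Note that the AWS/Billing metrics only exist in us-east-1 and require "Receive Billing Alerts" to be enabled in the account's billing preferences; the SNS topic ARN is a placeholder:

```bash
aws cloudwatch put-metric-alarm \
  --region us-east-1 \
  --alarm-name billing-over-5-usd \
  --namespace AWS/Billing \
  --metric-name EstimatedCharges \
  --dimensions Name=Currency,Value=USD \
  --statistic Maximum \
  --period 21600 \
  --evaluation-periods 1 \
  --threshold 5 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:billing-alerts
```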
Not Reinventing The Wheel
In the 1980s blockbuster science documentary series Cosmos, American astrophysicist and famed science populariser Carl Sagan said, ‘If you wish to make an apple pie from scratch, you must first invent the universe’. Wise words. We can see that even AWS are using the well-known Anton Babenko Terraform library for their tutorials. I am doing the same, but only for the initial cluster deployment and teardown.
Keeping To The Brief
It's typical when following online guides that they aren't doing quite what you would have chosen: they may go deep on something of little interest, or gloss over something I wanted to look at in more detail. In this case, the AWS tutorial (linked above) wanted me to run a CloudFormation stack to set up the Cloud9 IDE to deploy my stuff from. Sure, there are some advantages to this, but then I saw that the section on Grafana begins with "An instance of Grafana has been pre-installed in your EKS cluster." 🤦🏻♂️. I wanted a clean, native, repeatable deployment for Prometheus and Grafana, so I moved on (whilst nabbing the Terraform for the cluster from the project's setup as my starting point :P).
N.B. I am using a simple IAM user rather than "proper" roles to admin the cluster, hence I am not going into user management here.
Stage 1 - Initial Manual Deployment
Using a small variation on the AWS example Terraform code (with only 2 nodes) I deployed my cluster. I've used the Babenko modules before, but the last time I set up a cluster from scratch the example in the readme was broken for that release and I lost time fixing it. This version of this code 'worked' for me.
Great, so now I have my cluster, how do I get my services deployed? For my first iteration I wound up following Michael Levan's Setting Up Prometheus and Grafana on AWS EKS (Getting Started). I don't normally go for video tutorials, but here the material was clear, on point, and brief (the clip is 3:06). You can see my runbook in the readme in the repo.
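The rough shape of that stage 1 runbook is below; chart choices, release names and namespaces here are illustrative, so treat the repo readme as the source of truth:

```bash
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update

helm install prometheus prometheus-community/prometheus --namespace monitoring --create-namespace
helm install grafana grafana/grafana --namespace monitoring

# Access via port forwarding (the limitation discussed below)
kubectl port-forward -n monitoring svc/prometheus-server 9090:80 &
kubectl port-forward -n monitoring svc/grafana 3000:80 &
```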
The Good
This worked, fantastic! I also had a ‘clean’ boundary between initial cluster setup and onward cluster management.
The Bad
There is an obvious limitation in that this tutorial uses port forwarding to access Prometheus and Grafana. That is absolutely fine in this context, but not as pukka as I wanted. There are also a lot of ad hoc system queries and commands: good for learning, not ideal for automation.
Stage 2 - Controllers with IAM roles for Service Accounts
For the next iteration I wanted to implement the AWS Load Balancer Controller, deployed using `helm` and using IAM Roles for Service Accounts (IRSA). This meant creating additional Terraform resources for an IAM role for the service account, plus some custom values to pass in with my helm charts. I have deployed this controller in the past, but there has been a lot of development since.
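The controller install itself ends up looking roughly like this, assuming the IRSA role has already been created in Terraform; the cluster name and role ARN are placeholders:

```bash
helm repo add eks https://aws.github.io/eks-charts
helm repo update

helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  --namespace kube-system \
  --set clusterName=my-eks-cluster \
  --set serviceAccount.create=true \
  --set serviceAccount.name=aws-load-balancer-controller \
  --set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"="arn:aws:iam::123456789012:role/aws-load-balancer-controller-irsa"
```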
The Good
There was some fiddling about, but yes, it did again work* (initially), this time without port forwarding.
Neutral
- Load balancers created by the controller won't be deleted as part of `tofu destroy` and will block teardown of resources unless the deployments are uninstalled first or the load balancers are deleted manually (see the teardown sketch below)
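In practice that means a teardown order along these lines (release names and namespaces are illustrative):

```bash
# Uninstall the workloads first so the controller removes the NLBs it created
helm uninstall grafana -n monitoring
helm uninstall prometheus -n monitoring
helm uninstall aws-load-balancer-controller -n kube-system

# Only then tear down the cluster itself
tofu destroy
```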
The Bad
- We still have ad hoc commands with `kubectl`
- There were no security groups with the NLBs, so these services are world open. In production we would make them private using a VPN or similar.*
*This turned out to be a major source of frustration. Clearly there have been some significant changes to the AWS load balancer controller in the last few months:
- I realised I was using an older version and updated, only to discover that security groups are now available and deployed by default, but the defaults still leave the services world open.
- The load balancer defaulted to internal, which I had not expected, and I wasted a lot of time tracking it down before `nslookup` revealed the truth (a quick check is sketched below)
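The check itself is trivial once you think of it; the hostname below is made up, but the pattern is what matters:

```bash
nslookup k8s-monitoring-grafana-abc123.elb.eu-west-2.amazonaws.com
# Answers in private ranges (10.x.x.x, 172.16-31.x.x, 192.168.x.x) mean the
# NLB was created as internal and will never be reachable from the internet.
```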
Refinement
Some considerable refinement later, I had my custom YAML file updated to add

```yaml
service:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "false"
    # service.beta.kubernetes.io/aws-load-balancer-security-groups: "my-custom-security-group-id"
```
for each service. It is possible to add a custom security group for each service (as commented), but fundamentally it's not worth it: I could not get it working easily and I can't see a sensible use case for doing this 'properly' in this context. For a production cluster I would recommend NACLs, a VPN, etc., rather than relying on a delegated security group. Here I am using NACLs. These are generally a pain to set up and work with, as they were here.
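For illustration, pass-listing a single CIDR on the relevant subnets' NACL looks something like this (the ACL ID, rule number and CIDR are placeholders; remember NACLs are stateless, so return traffic on ephemeral ports needs its own rule too):

```bash
aws ec2 create-network-acl-entry \
  --network-acl-id acl-0123456789abcdef0 \
  --ingress \
  --rule-number 100 \
  --protocol tcp \
  --port-range From=443,To=443 \
  --cidr-block 203.0.113.10/32 \
  --rule-action allow
```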
Result
We now have a setup where the load balancer controller will automatically create a pass-listed load balancer URL for each service on deployment, using the most recent (at the time of writing) versions of each component:
EKS 1.29, with the AWS Load Balancer Controller at:
- app.kubernetes.io/version=v2.7.2
- helm.sh/chart=aws-load-balancer-controller-1.7.2
We are using IAM Roles for Service Accounts (IRSA) for the load balancer controller. This is the gold standard.
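A quick way to confirm the IRSA wiring is to check that the controller's service account carries the role annotation and that its pods have the web identity environment injected; the names below assume the standard chart labels:

```bash
kubectl get serviceaccount aws-load-balancer-controller -n kube-system -o yaml
kubectl describe pod -n kube-system \
  -l app.kubernetes.io/name=aws-load-balancer-controller | grep AWS_
```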
Other Things I would Do For A Production Cluster
The following are essentially subjects in themselves and/or I have not pursued them further here:
- Harden the kube API endpoint by making it private
- Use Single Sign On for users
- Use a VPN to manage network access
- Set up authentication for Prometheus
- Alias load balancer DNS entries with 'proper' DNS entries using ExternalDNS and ideally the ACK service controller for AWS Certificate Manager (ACM) for verified HTTPS connections. Disclaimer: I have done this previously in a production environment.
If I were stopping with `helm`, I would automate deployment in production with e.g. GitHub Actions.