Introduction
So this is my collection of thoughts, ideas and craziness documented for everyone to see. I have a lot of different interests that I cycle through irregularly. It's hard to keep track of the bits and pieces so I figure I should write some of this down.
I don't like blogging; it feels too much like trying to keep a journal or a diary ( not that there is anything wrong with that ). Instead I have specific, task-focused documents that I want to keep around so that when I revisit a topic, I can pick up where I left off.
If others find my documents and notes of interest, then that's great. That is not the reason why I'm doing this though.
If folks have suggestions or corrections, then great! Create a fork, branch and submit a PR. I don't mind being corrected and giving credit where credit is due. I'm not right all the time and there are much more knowledgeable folks out there than me.
Kubernetes
Kubernetes is a big deal, and is something that I enjoy using.
I have four servers set up that host my kubernetes environment, and another four servers that host various services that I use with my kubernetes environment.
I have decided to document my kubernetes setup so that other folks can follow along and see what I have going on, and maybe learn a thing or two along the way. It is also documented because I tend to rebuild the mess every couple of months and I usually forget something along the way!
Order of Operations
- Installation
- Local Setup
- Load Balancer
- Certificate Manager
- Ingress Controller
- Prometheus
- NFS Provisioner
- Metrics Server
Installing Kubernetes
These are my notes on installing Kubernetes. They work for me in my home network. They might work for you.
Setup
Almost all of the heavy lifting is done through Ansible. That gets the software on the servers, but doesn't do much in the way of configuration, which is why I'm writing this up. You can find the playbooks and documents over here.
The current config has three machines, Basil, Lavender and Sage ( yes, herbs because why not ). Basil is the control-plane, while the other two are workers. They are cheap machines but regardless of the hardware, this should work for most machines.
Step One
Make sure the machines are updated, working and communicating with Ansible, otherwise not much of this is going to work. Most often you will need to change the inventory configuration to match whichever IPs the servers are on.
After that is done, go ahead and execute the kubex playbook and grab a refreshing drink. It will take about twenty minutes.
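For reference, a run looks something like this ( the inventory and playbook file names below are assumptions; use whatever the repo actually calls them ):
# hypothetical file names; adjust to match the repo layout
ansible-playbook -i inventory.yml kubex.yml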
Step Two
Next we need to get the control-plane set up. This isn't too hard, and I put most of the configuration that I use inside a configuration file that the ansible scripts will build on Basil.
Execute ( on Basil ):
sudo kubeadm init --config /opt/kubeadm-config.yml
This will get kubeadm up and running and installing all the software. If you want to, look at the configuration file to see exactly what it is doing. You will probably have to customize it based on the version of kubernetes you are installing and the IP range you want to use.
I put everything in a configuration file since I was tired of dealing with command line options. And configuration as code, because that's just smart.
Once kubeadm is complete, make sure to set up the kubectl configuration file and copy the command that allows other nodes to join.
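If you lose that join command, you can regenerate it later on the control-plane:
sudo kubeadm token create --print-join-command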
Step Three
I prefer to setup networking before I worry about the worker nodes, but that's just me. You can do steps three and four at the same time if you can multitask.
The only networking controller that I have been able to get working reliably is Project Calico. The docs are functional, but not great.
Bottom line: it works every time I have set up Kubernetes, unlike some of the others.
Execute ( on Basil ):
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/tigera-operator.yaml \
&& kubectl create -f /opt/calico-custom-resources.yml
It will take a few minutes, but there should not be any problems installing.
Step Four
Remember that command you saved from Step Two? Yes? Good!
Log into each of your worker nodes and execute it. That will connect the workers to the control-plane and get the entire show running.
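The join command looks roughly like this ( the address, token and hash below are placeholders, not real values ):
sudo kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>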
Step Five
It will take the networking a while to settle in, or at least it always has for me.
Execute ( on Basil ):
kubectl get nodes -o wide
You can also create a very basic nginx deployment just to see if things work.
kubectl apply -f https://gist.githubusercontent.com/bbrietzke/c59b6132c37ea36f9b84f1fee701a642/raw/00e1683fbd7422fc297629128b08bded6ca3c90b/kubernetes-test.yaml
Then:
open http://basil.local:30080/
And you should see a website in the browser of your choice.
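When you're done admiring it, the same gist can be used to clean up the test deployment:
kubectl delete -f https://gist.githubusercontent.com/bbrietzke/c59b6132c37ea36f9b84f1fee701a642/raw/00e1683fbd7422fc297629128b08bded6ca3c90b/kubernetes-test.yaml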
Local Setup
There are some tools that I like to have on my local machine that make working with Kubernetes much easier. This document will go through the installation and configuration of them.
What to install?
Since I'm on a Mac, everything is installed through [Homebrew](https://brew.sh/).
brew install k9s helm kubernetes-cli
Configuration
Kubernetes Tools
The only things that really need configuration are the Kubernetes tools themselves. For that, you need to get a copy of the kube config file from the control-plane.
mkdir ~/.kube
scp ubuntu@basil.local:.kube/config ~/.kube/config
This configures both k9s and kubectl, so that bit is done. Both of the tools should work as you would expect.
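A quick sanity check that the copied config actually points at the cluster:
kubectl get nodes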
Helm
You don't have to configure helm, but it's not a bad idea either. My personal chart repository is configured as follows.
helm repo add bbrietzke http://bbrietzke.github.io/charts
I have a few charts that I tend to use, in particular for setting up namespaces. I have three: prod, dev, and infra. There isn't much else to customize, so this just works.
helm install namespaces bbrietzke/namespaces
It's also the first helm chart I created.
Running Kubernetes on bare-metal? You'll need a load balancer. I use MetalLB.
The installation is well documented ( and works ), so I won't repeat it here.
I used the manifest method, just because it seemed the easiest way to do it. If I do it again, I'll try the helm route.
Configuration
This is where the docs get confusing.
You need to do some configuration after the installation and it's not explicitly clear what kind of configuration.
Address Pools
The LB requires addresses that it can assign. You should have a configuration document similar to:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: available-pool
  namespace: metallb-system
spec:
  addresses:
    - 10.0.0.10-10.0.0.15
Call it something like address-pools.yaml so you don't forget what it is.
BGP
I know you need these to work, but don't ask me why. Call it advertisements.yml.
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: available-pool-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - available-pool
---
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: available-pool-peer
  namespace: metallb-system
spec:
  myASN: 64500
  peerASN: 64501
  peerAddress: 10.0.0.1
---
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: available-pool-adv
  namespace: metallb-system
spec:
  ipAddressPools:
    - available-pool
Applying the Manifests
Nothing out of the ordinary; if you build a manifest, you have to apply it.
kubectl apply -f address-pools.yaml
kubectl apply -f advertisements.yml
Usage
You need to create a load balancer service, which can be found over here. In the case of MetalLB, the yaml document could look like:
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      name: http
    - port: 443
      targetPort: 443
      name: https
  type: LoadBalancer
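After applying it, MetalLB should hand the service an address out of the pool. You can confirm by checking the EXTERNAL-IP column:
kubectl get service example-service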
Managing Certificates in Kubernetes
Dealing with TLS certificates is a pain in the butt!
This document is just a rehashed/shortened view with my specific configuration. You can find the full documentation over at cert-manager.io.
Installation
Helm
helm repo add jetstack https://charts.jetstack.io && helm repo update
There are a number of ancillary resources that have to be installed. You can do it manually, or let the helm chart do it ( which is what I did ).
helm install \
cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --version v1.11.0 --set installCRDs=true
Go over the verify section of the official docs to make sure it's working.
Configuration
You have to create issuers per namespace; they are what actually create and distribute the certificates. The Issuer is one of the custom resources that came along with the helm chart installation.
Self-Signed
I created self-signed certificates for my namespaces just because.
Here is an example custom resource:
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: dev-selfsigned-issuer
  namespace: dev
spec:
  selfSigned: {}
I'm sure I'll write up a Helm chart at some point with the issuers that I need.
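Applying and checking it is the usual routine ( the file name here is just an assumption ):
kubectl apply -f dev-selfsigned-issuer.yml
kubectl get issuers -n dev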
Using the Certificate Manager
The certificates are mostly used by your ingress controllers to prove that the domain is valid and to encrypt the communications between the origin and the client. I'm sure they can be used elsewhere, but this is the scenario that I use them for.
You will need to modify the ingress resource definition to be similar to:
...
kind: Ingress
metadata:
  namespace: dev
  annotations:
    cert-manager.io/issuer: dev-selfsigned-issuer
...
The issuer you reference has to exist in the same namespace as the ingress.
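One thing the snippet above glosses over: per the cert-manager docs, the ingress also needs a tls: section with a secretName before cert-manager will actually issue anything. Once that is in place, you can watch the certificate show up:
kubectl get certificates -n dev
kubectl describe certificate -n dev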
Ingress Controllers
You've got to have a way to get to the applications that you built, so you have to have an ingress controller. It handles domain-name-based routing and other stuff.
Tools
I've been using the nginx ingress controller that is part of the kubernetes project to do the job. It seems to work out reasonably well.
You can run through the installation documentation to get things working. It's pretty straightforward.
One of the nice things it will do is create a load balancer for you that you can use to immediately start directing traffic. At least it will if you have MetalLB installed.
Example
# ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-demo
  labels:
    app: nginx-demo
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/use-forwarded-headers: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: "faultycloud.io"
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: nginx-service
                port:
                  number: 80
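Applying it is the usual kubectl dance, and you can check that the controller picked it up and handed it an address:
kubectl apply -f ingress.yml
kubectl get ingress nginx-demo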
Setting up Prometheus
So you have a kubernetes cluster all set up, but how do you make sure that it's up and working? Well, you have to monitor it!
[Prometheus](https://prometheus.io/) is a pretty complete all-in-one package for monitoring and alerting on both standalone nodes and the applications that run on them. It also has special hooks for monitoring Kubernetes and the pods that run on it.
Setup
So the easy way is to use the community helm chart. It will install Prometheus and a number of the other tools that fall under that umbrella as well. It doesn't do it perfectly, but it's pretty damn close to seamless.
Helm Charts
The Prometheus community maintains a set of helm charts over at [this GitHub page](https://github.com/prometheus-community/helm-charts).
Which is nice, because it's open source and publicly available to all of us.
The instructions are pretty complete and thought out so getting started is not much of a problem. You have helm setup, right?
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
Take a look at the values and see what you need to customize:
helm show values prometheus-community/prometheus > prom_values.yaml
The obvious ones are setting up the persistent storage and adding additional scrape targets. I also ended up disabling the installation of AlertManager since it wouldn't connect with a persistent volume with the current configuration. Everything else is pretty boilerplate and, honestly, I just left things alone.
Here is my file:
server:
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: nginx
      kubernetes.io/tls-acme: 'false'
    hosts:
      - prometheus.faultycloud.work
    path: /
  persistentVolume:
    enabled: true
    accessModes:
      - ReadWriteOnce
    size: 8Gi
    volumeName: "prometheus"
alertmanager:
  enabled: false
Applying the helm chart is pretty standard, and once it's done, you'll have a working Prometheus setup within a few minutes.
helm install -f prom_values.yaml prometheus prometheus-community/prometheus -n infra
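A quick way to see that everything came up ( this assumes you used the infra namespace like I did ):
kubectl get pods -n infra
kubectl get ingress -n infra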
NFS Persistent Volumes
Do your pods need persistent volumes in your home Kubernetes cluster? Turns out, NFS is an option.
https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
Helm Charts
You can add the helm chart with:
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
Then pull the values, since you will need to customize the NFS server IP and path:
helm show values nfs-subdir-external-provisioner/nfs-subdir-external-provisioner > nfs_prov.yml
An example customized one looks like the following:
nfs:
  server: 10.0.0.155
  path: /srv/nfs_shares/kubernetes
And of course, we have to provision:
helm install -f nfs_prov.yml nfs-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner -n infra
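To actually use it, point a PersistentVolumeClaim at the provisioner's storage class. A minimal sketch, assuming the chart's default storage class name of nfs-client ( the claim name and dev namespace are just examples ):
# test-nfs-claim and the dev namespace are hypothetical; rename to suit
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-claim
  namespace: dev
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pvc -n dev test-nfs-claim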
Metrics Server
Metrics Server is not quite as easy to install as advertised, at least on a system installed with kubeadm. But the fix to make it work is pretty simple, assuming you do it the insecure way. The correct way is more complicated, but also secure.
Installing
The code and instructions to install metrics server can be found here. Again, I'm not repeating them here so they don't go stale.
The instructions work perfectly: things get installed, and then don't work. At all.
The metrics pod just never becomes ready, and the logs complain about SSL certificates being invalid.
Cheating and Being Insecure
The easiest option is to add --kubelet-insecure-tls to the container's args array ( spec.template.spec.containers[0].args in the Deployment ). I added it as the last one. You can make this edit either in the live deployment or prior to doing a kubectl apply if you download the manifest files.
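If you want to do it against the live deployment, a one-liner patch works, followed by a check that metrics are actually flowing:
kubectl -n kube-system patch deployment metrics-server --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-insecure-tls"}]'
kubectl top nodes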
Embedded
Every year or three I get in the mood where I want to play with microcontrollers and LEDs and switches and wires and whatever else I come across.
Table of Contents
Raspberry Pi
Some notes and sources on Raspberry Pi things that I should keep track of.
Basically, I look for these things all the time, and never can find them.
Mechanical Drawings:
Using LED0 & LED1
# make LED1 blink in a heartbeat pattern
echo heartbeat | sudo tee /sys/class/leds/led1/trigger
# tie LED0 to CPU activity
echo cpu | sudo tee /sys/class/leds/led0/trigger
GPIO from Bash
echo 3 > /sys/class/gpio/export             # expose GPIO 3 through sysfs
cat /sys/class/gpio/gpio3/value             # read its current value
echo out > /sys/class/gpio/gpio3/direction  # switch it to an output
echo 0 > /sys/class/gpio/gpio3/value        # drive it low
echo 1 > /sys/class/gpio/gpio3/value        # drive it high
echo high > /sys/class/gpio/gpio4/direction # make GPIO 4 an output that starts high ( export it first )
Rust
Sources
Embedded Rust
So I like Rust, and I'm curious about how it works embedded in microcontrollers. But the documentation is scattered and some of it is outdated.
This is my attempt to organize what I have found that works and is useful.
General
ESP32
- Rust on ESP Book
- Embedded Rust on Espressif
- Awesome ESP Rust
- Rust on ESP32 STD demo app
- An embedded-hal implementation for ESP32[-XX] + ESP-IDF
- Embedded Rust Trainings for Espressif
ESP8266
Getting Started with Rust
RAID Arrays
Let's build a few different kinds of RAID arrays and then use them.
I'm not going to go into detail about what RAID is, or the different levels since there is plenty of documentation out there already. These are the commands to setup software RAID for Linux and the general workflow to follow.
Do we have any now?
Let's just double-check:
cat /proc/mdstat
Okay, which Devices?
Insert the new drives into the machine. You should know what they come up as, but if you don't, try:
lsblk
That should get you a list of all the block devices on the system and where they are being used. Some of them won't mean much, but you should see the ones that you just added. If you're not sure which are new, run the command before inserting the drives and copy down the results, then insert the drives and execute it again.
I'll be using the following:
/dev/sda
/dev/sdb
/dev/sdc
/dev/sdd
Create the ARRAYS!
RAID 0
We'll start with RAID 0, since it will allow us to use all the drives as one big ( though not redundant ) block device.
sudo mdadm --create --verbose /dev/md0 --level=raid0 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
RAID 1
Simple mirroring. The easiest to use, a decent redundancy package and not all that wasteful.
sudo mdadm --create --verbose /dev/md0 --level=raid1 --raid-devices=2 /dev/sda /dev/sdb
RAID 5
Probably the best all around choice. Best use of capacity and good performance.
sudo mdadm --create --verbose /dev/md0 --level=raid5 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
Creating the FileSystem
sudo mkfs -t ext4 /dev/md0
Mounting
You will want these drives to come up every time you want to use them, so add the entries to /etc/fstab.
First you need the UUID of the array.
sudo blkid /dev/md0
Take the UUID, open up /etc/fstab, and add something along the lines of:
UUID=655d2d3e-ab31-49c7-9cc3-583ec81fd316 /srv ext4 defaults 0 0
Then you can execute sudo mount -a and have the array appear where you wanted it.
Update config
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
then
sudo update-initramfs -u
Destroying RAID Arrays
You created RAID arrays, so you should know how to tear them down.
So we built a few RAID arrays and mounted them, so that's grand. Now let's tear them down.
Unmount everything
Make sure you have the arrays unmounted from their normal ( or abnormal ) paths. If you don't, not much of this is going to work out well.
Check mdstat
Let's check to see which arrays we currently have configured.
cat /proc/mdstat
The above should return any arrays you currently have built and what state they are in. In this case, we need to know the names of the arrays so that we can remove them.
Remove Arrays
Okay, let's stop the arrays.
sudo mdadm --stop /dev/md127 # or whatever was returned above
Now that we don't have working arrays, we can zero them out so that they are empty.
sudo mdadm --zero-superblock /dev/sda /dev/sdb /dev/sdc /dev/sdd
Trust, but Verify
Rerun the mdstat and make sure those pesky things are gone.
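Same check as before:
cat /proc/mdstat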
Runbooks
Runbooks are a way of documenting procedural Information Technology information that is repetitive in nature.
Most of the time, you see runbooks used for troubleshooting a problem, diagnosing an issue, or walking through a procedure.
This is a collection of the runbooks that I have decided to document for my area.
Table of Contents
Alerts
Create Database and User for PostgreSQL
Overview
Pre-Requisites
- Access to log into callisto.lan via SSH.
Steps
- Log into callisto.lan via SSH: ssh ubuntu@callisto.lan
- Sudo into the postgres user and execute the psql command line tool: sudo -u postgres psql
- Create the database:
  - The database name should reflect the application that is using it, with _prod appended if it is for production. CREATE DATABASE db01_prod;
- Create the user with an encrypted password:
  - We recommend using a long and complex password, since the value is only entered here and can be put into the target system at the same time.
  - The user name should reflect the database it will primarily access. CREATE USER db01 WITH ENCRYPTED PASSWORD '3^On9D4p59^4';
- Grant permissions for the user to access the database: GRANT ALL PRIVILEGES ON DATABASE db01_prod TO db01;
Troubleshooting
- If the target system can not log into the database after the database/user creation process, you can simply re-run the above steps to make sure that they are correct.
- If the user is present but the password has been forgotten, you may reset the password as follows:
ALTER USER db01 WITH ENCRYPTED PASSWORD '3^On9D4p59^4';
Completion and Verification
You can log into the target database from callisto.lan to verify that everything is set up correctly:
psql -h 'callisto.lan' -U 'db01' -d 'db01_prod'
It will prompt you for the password, which we have available and should provide. If the login succeeds, then the database/user/permissions are okay.
Contacts
Not Applicable
Appendix
Changelog
* 2023/04/06 - Created