Tags: kubernetes
Let’s deploy our Python app!
First, let’s create our Python Docker container.
Copy the python/Dockerfile and python/requirements.txt files into a new folder named k8s/python.
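In case you need a reference, here is roughly what those two files might contain for this app. This is a hypothetical minimal version; use the actual files you copied from the earlier chapter if they differ:

# k8s/python/requirements.txt (hypothetical minimal contents)
flask
psycopg2-binary

# k8s/python/Dockerfile (hypothetical minimal contents)
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .
EXPOSE 5000
CMD ["python", "app.py"]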
Let’s add a new version of our Python app with the file k8s/python/app.py:
from flask import Flask, jsonify
import psycopg2
import os

app = Flask(__name__)

# Database connection parameters
db_params = {
    "dbname": os.getenv("DB_NAME"),
    "user": os.getenv("DB_USER"),
    "password": os.getenv("DB_PASSWORD"),
    "host": os.getenv("DB_HOST"),
}

# Function to create the "pressed" table
def create_pressed_table():
    conn = None
    try:
        conn = psycopg2.connect(**db_params)
        cur = conn.cursor()
        cur.execute("CREATE TABLE IF NOT EXISTS pressed (count INTEGER);")
        # Check if a row exists in the "pressed" table
        cur.execute("SELECT 1 FROM pressed LIMIT 1;")
        if cur.fetchone() is None:
            # If no row exists, insert an initial row with count = 1
            cur.execute("INSERT INTO pressed (count) VALUES (1);")
        else:
            # If a row exists, increment the count
            cur.execute("UPDATE pressed SET count = count + 1;")
        conn.commit()
    except Exception as e:
        print(str(e))
    finally:
        if conn:
            conn.close()

# Function to get the count from the "pressed" table
def get_pressed_count():
    conn = None
    try:
        conn = psycopg2.connect(**db_params)
        cur = conn.cursor()
        # Retrieve the count from the "pressed" table
        cur.execute("SELECT count FROM pressed;")
        count = cur.fetchone()[0] if cur.rowcount > 0 else 0
        return count
    except Exception as e:
        print(str(e))
        return 0
    finally:
        if conn:
            conn.close()

# Define a route for the health check
@app.route('/health', methods=['GET'])
def health_check():
    conn = None
    try:
        conn = psycopg2.connect(**db_params)
        return jsonify({'status': 'ok'})
    except Exception as e:
        return jsonify({'status': 'error', 'error': str(e)})
    finally:
        if conn:
            conn.close()

# Define a route to get the status
@app.route('/api/get-status', methods=['GET'])
def get_status():
    # Get the count from the "pressed" table
    count = get_pressed_count()
    return jsonify({'count': count})

# Define a route to increment the "pressed" table
@app.route('/api/pressed', methods=['GET'])
def increment_pressed():
    conn = None
    try:
        conn = psycopg2.connect(**db_params)
        cur = conn.cursor()
        # Increment the "pressed" table
        cur.execute("UPDATE pressed SET count = count + 1;")
        conn.commit()
        # Get the updated count
        count = get_pressed_count()
        return jsonify({'count': count})
    except Exception as e:
        return jsonify({'error': str(e)})
    finally:
        if conn:
            conn.close()

if __name__ == '__main__':
    # Create the "pressed" table if it doesn't exist
    create_pressed_table()
    app.run(host='0.0.0.0', port=5000)
Let’s use another Kubernetes Service to expose the Python app to the cluster. Add the following content into the k8s/python/service.yaml file:
apiVersion: v1
kind: Service
metadata:
  name: python-service
spec:
  selector:
    app: python
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 5000
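The Service has no type field, so it defaults to ClusterIP and is only reachable from inside the cluster, which is exactly what we want since the Ingress will route to it later. Once the Deployment from the next step is running, you can sanity-check it from a throwaway pod; this is a quick sketch assuming the default namespace and the curlimages/curl image:

kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -s http://python-service:5000/health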
To finish the YAML files, let’s create a deployment.yaml under k8s/python.
Notice that our Deployment now has two shiny new things: livenessProbe and readinessProbe. They tell the Kubernetes cluster whether the app is alive and ready to receive traffic by probing the /health endpoint of the Python app (you can check the source code above). This is a very common Kubernetes pattern you’ll find in other applications as well.
We are also grabbing the DB_PASSWORD environment variable from the Kubernetes Secret generated by the PostgreSQL installation (remember that?!) using the valueFrom configuration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: python
  template:
    metadata:
      labels:
        app: python
    spec:
      containers:
        - name: python
          image: harbor.<your-domain>/application/python:v2
          ports:
            - containerPort: 5000
          env:
            - name: DB_HOST
              value: "postgresql.default.svc.cluster.local"
            - name: DB_NAME
              value: dbname
            - name: DB_USER
              value: youruser
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgresql-credentials
                  key: password
          livenessProbe:
            httpGet:
              path: /health
              port: 5000
            initialDelaySeconds: 5
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /health
              port: 5000
            initialDelaySeconds: 5
            periodSeconds: 10
      imagePullSecrets:
        - name: your-regcred
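The Deployment pulls DB_PASSWORD from a Secret named postgresql-credentials with a password key. If you want to confirm that Secret exists before deploying (the exact name and key depend on how you installed PostgreSQL in the earlier chapter, so adjust if yours differ), you can decode it with:

kubectl get secret postgresql-credentials -o jsonpath='{.data.password}' | base64 -d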
Commit the changes, push them to the GitHub repository, and pull them back onto the EC2 instance.
On your EC2 instance, let’s first build and push the Python container image:
cd /home/ubuntu/umuzi-k8s/k8s/python
docker build . -t harbor.<your-domain>/application/python:v2 --push
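The --push flag on docker build relies on BuildKit/buildx being the default builder; if your Docker version doesn’t support it, the two-step equivalent does the same thing:

docker build -t harbor.<your-domain>/application/python:v2 .
docker push harbor.<your-domain>/application/python:v2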
And apply the YAML files:
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
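Before digging into logs, you can wait for the rollout to finish and confirm the pod and Service are there (standard kubectl checks, assuming the manifests above were applied unchanged):

kubectl rollout status deployment/python-deployment
kubectl get pods -l app=python
kubectl get svc python-service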
Let’s inspect the Python app logs to see if everything is up and running:
# gets the pod name
kubectl get pods | grep python
# shows the logs
# it will fail if the pod is in the `ContainerCreating` status
# just give it a little time before it starts up
kubectl logs <pod-name>
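You can also hit the health endpoint directly from the EC2 instance by port-forwarding to the Deployment. This assumes local port 5000 is free; stop the forward with Ctrl+C when you are done:

# in one terminal (or run it in the background)
kubectl port-forward deployment/python-deployment 5000:5000
# in another terminal
curl http://localhost:5000/health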
Everything should be running just fine!
BUT if you go to https://<your-domain> you’ll notice that everything seems to be up and running, yet your button doesn’t increment. When you inspect the network call made by the frontend (right click on the button > Inspect > Network tab) you will realize that the calls are failing.
The frontend is trying to access https://<your-domain>/api but we never told the Ingress that the /api route should point to the Python app. Let’s fix that!
In the k8s/nginx/ingress.yaml file, add a new rule:
# old path here, just to remember where we are
- path: /
  pathType: Prefix
  backend:
    service:
      # points to the nginx service we created earlier
      name: nginx-service
      port:
        number: 80
# add this to your file
# be careful with indentation, YAML is very strict about it
- path: /api
  pathType: Prefix
  backend:
    service:
      name: python-service
      port:
        number: 5000
Commit the changes and pull them on your EC2 instance.
Once there, reapply the ingress.yaml file:
cd /home/ubuntu/umuzi-k8s/k8s/nginx
kubectl apply -f ingress.yaml
Inspect the nginx-ingress and you should see the new route there:
kubectl describe ingress nginx-ingress
# omitted output
[...]
Host                Path  Backends
----                ----  --------
student-1-k42s.org
                    /     nginx-service:80 (10.42.0.45:80,10.42.0.46:80)
                    # here it is!
                    /api  python-service:5000 (10.42.0.67:5000)
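You can also exercise the new route from the command line before reaching for the browser (assuming the domain and TLS setup from the earlier chapters are in place; the count value will vary):

curl https://<your-domain>/api/get-status
# {"count": 3}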
Now go back to your browser and try the button again; it should be working just fine!