Intro
I have worked on many projects with GitHub as the DevOps engineer on my team, and I am responsible for delivering applications from development to production. At first I did this manually, and I started to feel I was repeating the same steps every time the application changed.
Our production environment still uses a bare metal server. The application runs on VMs created in Proxmox, and production runs a simple stack. But whenever a change or feature lands in the GitHub repository, my dev friends say, "Hey, I added a new feature, can you deploy it?" and it kept repeating, until I found a solution: a GitHub Actions workflow that continuously checks for changes and delivers the application.
Provisioning the DB
Our application depends on PostgreSQL. I use a containerized DB because it is simple to provision and, you know, I want to keep the server clean. To make that happen, I created a docker-compose file that automates the provisioning.
```yaml
version: '3.8'

services:
  db:
    container_name: postgres_db
    image: postgres:13-alpine
    restart: always
    environment:
      POSTGRES_USER: dbadmin
      POSTGRES_PASSWORD: dbpassword
      POSTGRES_DB: maindb
      TZ: Asia/Jakarta
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    networks:
      vpcbr:
        ipv4_address: 10.10.1.2 # better to keep it static

volumes:
  postgres_data:

networks:
  vpcbr:
    driver: bridge
    ipam:
      config:
        - subnet: 10.10.1.0/24
          gateway: 10.10.1.1
```
Let's bring this up in daemon (detached) mode:
```bash
docker-compose -f docker-compose.db.yml up -d
```
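Once it is up, it is worth a quick sanity check that the container is healthy and that the credentials from the compose file actually work. A minimal check, assuming the container name and credentials above:

```bash
# confirm the container is running
docker ps --filter name=postgres_db

# connect with the credentials from the compose file and list databases
docker exec -it postgres_db psql -U dbadmin -d maindb -c '\l'
```

Since no port is published, the uncontainerized app on the host reaches the database through the bridge network's static address (10.10.1.2:5432), which is exactly why the compose file pins that IP.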
Create deploy.sh
To automate my repetitive deployment work, I use a simple bash script. This script runs on every deployment.
```bash
#!/bin/bash
# deploy.sh -- uncontainerized CI/CD
#
# Stash the working directory so the branch can't conflict,
# then pull the new changes.
echo "git stashing repository..."
git stash
if [ $? -ne 0 ]; then
    echo "error when stashing repository..."
    exit 1
fi

# pull repository
echo "git pull..."
git pull
if [ $? -ne 0 ]; then
    echo "error when pulling repository..."
    exit 1
fi

# switch env-dev to env-prod in app/settings.py
# make sure your program config is ready for the production env
echo "changing settings file"
sed -i 's/.env-dev/.env-prod/g' app/settings.py
sed -i 's/localhost:8000/example.domain.com/g' static/js/vueApp.js
if [ $? -ne 0 ]; then
    echo "error when sed-ing the text..."
    exit 1
fi

# activate the virtualenv, install dependencies, migrate, collectstatic
source prod/bin/activate
echo "Installing dependencies..."
pip install -r requirements.txt
if [ $? -ne 0 ]; then
    echo "Problem when installing dependencies, check requirements.txt..."
    exit 1
fi

python manage.py migrate --no-input
if [ $? -ne 0 ]; then
    echo "db connection failed, please check the user and password"
    exit 2
fi

python manage.py collectstatic --no-input
if [ $? -ne 0 ]; then
    echo "staticfiles error, please check the path"
    exit 2
fi

# restart gunicorn and nginx
sudo systemctl restart gunicorn
if [ $? -ne 0 ]; then
    echo "gunicorn restart failed, please check it out!"
fi

sudo systemctl restart nginx
if [ $? -ne 0 ]; then
    echo "nginx restart failed, please check it out!"
fi

# check server status
CODE=$(curl -s -w "%{http_code}\n" https://example.domain.com/ -o /dev/null)
GIT_SHA=$(git rev-parse HEAD | cut -c 1-7)
NODE=$(hostname)
NGINX=$(systemctl status nginx | awk '/Active/ {print $2" "$3}')
GUNI=$(systemctl status gunicorn | awk '/Active/ {print $2" "$3}')

echo "========================================="
echo "deploy success!"
echo "on node         -> $NODE"
echo "on commit       -> $GIT_SHA"
echo "curl status     -> $CODE"
echo "nginx status    -> $NGINX"
echo "gunicorn status -> $GUNI"
echo "========================================="
```
The working directory is stashed before pulling so there is no merge conflict, then the repository is pulled. Production needs slightly different configuration (the DB settings, for example), so I use sed to swap the settings file in one line. Make sure the migrations have been applied. The script always restarts the gunicorn and nginx services to minimize failures; gunicorn runs as a systemd service, so you need a gunicorn.service file, and a rough sketch of one follows.
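For reference, a gunicorn.service unit could look roughly like this. This is a minimal sketch with assumed paths and names (project at /home/deploy/app, virtualenv in prod/, WSGI module app.wsgi), not the exact unit from this setup:

```ini
# /etc/systemd/system/gunicorn.service (paths, user, and module names are assumptions)
[Unit]
Description=gunicorn daemon for the Django app
After=network.target

[Service]
User=deploy
Group=www-data
WorkingDirectory=/home/deploy/app
ExecStart=/home/deploy/app/prod/bin/gunicorn --workers 3 --bind 127.0.0.1:8000 app.wsgi:application
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After adding the unit, run `sudo systemctl daemon-reload` and `sudo systemctl enable --now gunicorn` so that the restart step in deploy.sh has a service to act on.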
Set up the GitHub Actions workflow
The first thing we should do is create GitHub secrets. GitHub secrets are encrypted values that let us store sensitive information at the organization, repository, or repository-environment level. Learn more here.
SSH_PROD_HOST -> the SSH host (target server)
SSH_PROD_PORT -> the SSH port
SSH_PROD_USER -> the SSH user on the target server
SSH_PROD_KEY -> the SSH private key (for the server connection)
SSH_PROD_PASSPHRASE -> the key passphrase (optional but highly recommended)
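You can add these from the repository settings UI or, as a sketch, with the GitHub CLI (the values below are placeholders, run the commands inside the repository clone):

```bash
gh secret set SSH_PROD_HOST --body "203.0.113.10"
gh secret set SSH_PROD_PORT --body "22"
gh secret set SSH_PROD_USER --body "deployer"
gh secret set SSH_PROD_PASSPHRASE --body "my-key-passphrase"
gh secret set SSH_PROD_KEY < ~/.ssh/deploy_key   # the private key file
```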
You may ask, "Why should the host and port be in GitHub secrets?" I recommend keeping all server details encrypted and secret for security purposes. Next, I set up GitHub Actions to trigger on changes and run deploy.sh over SSH. Here is my GitHub Actions workflow:
```yaml
name: Deployment

on:
  workflow_run:
    workflows:
      - "Django CI"
    types:
      - completed

jobs:
  build:
    name: deploy to server over ssh command
    runs-on: ubuntu-20.04
    steps:
      - name: Deploy to Production
        uses: appleboy/ssh-action@v0.1.4
        with:
          host: ${{ secrets.SSH_PROD_HOST }}
          username: ${{ secrets.SSH_PROD_USER }}
          port: ${{ secrets.SSH_PROD_PORT }}
          key: ${{ secrets.SSH_PROD_KEY }}
          passphrase: ${{ secrets.SSH_PROD_PASSPHRASE }}
          script: |
            cd path/to/work
            bash deploy.sh
```
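One caveat: `workflow_run` with `types: completed` fires whether the upstream "Django CI" run passed or failed. If you only want to deploy on green builds, you can guard the job with a condition, for example:

```yaml
jobs:
  build:
    # skip the deploy unless the triggering "Django CI" run succeeded
    if: ${{ github.event.workflow_run.conclusion == 'success' }}
```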