---
title: "Automated Deployment with GitHub Actions"
summary: "A complete CI/CD pipeline for static sites: dependency caching, secret management, and post-deployment validation"
date: "2025-01-01"
tags: ["ci-cd", "github-actions", "aws", "infrastructure"]
topics: ["ci-cd", "infrastructure", "developer-experience"]
prerequisites: ["2025-12-28-architecture-of-a-modern-static-blog"]
related: ["2025-12-31-ga4-data-api-integration", "2025-01-03-playwright-e2e-testing"]
author: "asimon"
published: true
---

# Automated Deployment with GitHub Actions
Every push to main triggers a deployment. No manual steps, no SSH commands, no "works on my machine" surprises. This post walks through the GitHub Actions workflow that makes it happen.
## Pipeline Overview
The deployment pipeline handles everything from build to validation:
```
┌────────────────┐      ┌────────────────┐      ┌────────────────┐
│ CloudFormation │─────▶│     Build      │─────▶│     Deploy     │
│  Drift Check   │      │    + Test      │      │     to S3      │
└────────────────┘      └────────────────┘      └────────────────┘
                                                        │
                                                        ▼
                                                ┌────────────────┐
                                                │   Invalidate   │
                                                │   CloudFront   │
                                                └────────────────┘
                                                        │
                                                        ▼
                                                ┌────────────────┐
                                                │    Validate    │
                                                │   Deployment   │
                                                └────────────────┘
```
The workflow runs on every push to main, with manual triggers available for emergency deployments or testing.
## Dependency Caching
pnpm's content-addressable store makes caching efficient:
```yaml
- name: Get pnpm store directory
  run: echo "STORE_PATH=$(pnpm store path --silent)" >> $GITHUB_ENV

- name: Setup pnpm cache
  uses: actions/cache@v3
  with:
    path: ${{ env.STORE_PATH }}
    key: ${{ runner.os }}-pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}
    restore-keys: |
      ${{ runner.os }}-pnpm-store-
```
The cache key is based on the lockfile hash. When dependencies change, we get a fresh cache. When they don't, installation takes seconds instead of minutes.
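For intuition, `hashFiles('**/pnpm-lock.yaml')` behaves roughly like a content digest: the key changes exactly when the lockfile does. A local sketch of that behavior (the `linux-` prefix stands in for `runner.os`, and the lockfile contents below are fabricated):

```shell
# Approximate the cache key locally: hash the lockfile contents.
printf 'lockfileVersion: 9.0\n' > pnpm-lock.yaml
key1="linux-pnpm-store-$(sha256sum pnpm-lock.yaml | cut -d' ' -f1)"

# Adding a dependency changes the lockfile, hence the key.
printf 'lockfileVersion: 9.0\nnew-dep: 1.0.0\n' > pnpm-lock.yaml
key2="linux-pnpm-store-$(sha256sum pnpm-lock.yaml | cut -d' ' -f1)"

[ "$key1" != "$key2" ] && echo "cache key changed"
```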
### Cache Fallback
The `restore-keys` fallback means even partial cache hits help. If only a few packages changed, the previous store is restored and pnpm downloads just the difference.
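Conceptually, the lookup works like this sketch: try the exact key first, then fall back to the newest cache whose key starts with the prefix (the keys below are made up for illustration):

```shell
# Sketch of restore-keys semantics. Assumes candidate caches are
# listed newest-first, as GitHub's cache service prefers the most
# recently created match.
lookup_cache() {
  want="$1"; prefix="$2"; shift 2
  for k in "$@"; do                 # exact key wins outright
    [ "$k" = "$want" ] && { echo "exact hit: $k"; return; }
  done
  for k in "$@"; do                 # otherwise first (newest) prefix match
    case "$k" in "$prefix"*) echo "partial hit: $k"; return ;; esac
  done
  echo "miss"
}

lookup_cache "linux-pnpm-store-abc" "linux-pnpm-store-" \
  "linux-pnpm-store-old1" "linux-pnpm-store-old2"
# prints: partial hit: linux-pnpm-store-old1
```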
## Secret Management with Parameter Store
Secrets live in AWS Parameter Store, not GitHub Secrets. This provides:
- Encryption at rest with KMS
- Audit trail via CloudTrail
- Rotation without updating CI config
- Shared access across multiple workflows
```yaml
- name: Load environment variables from Parameter Store
  run: |
    GA4_SERVICE_ACCOUNT=$(aws ssm get-parameter \
      --name "/asimon-blog/prod/ga4-service-account" \
      --with-decryption \
      --query "Parameter.Value" \
      --output text)

    # Mask in logs
    echo "::add-mask::$GA4_SERVICE_ACCOUNT"

    # Export for subsequent steps
    echo "GA4_SERVICE_ACCOUNT=$GA4_SERVICE_ACCOUNT" >> $GITHUB_ENV
```
The `::add-mask::` directive tells GitHub Actions to redact this value from all logs. Even if a step accidentally prints it, you'll see `***` instead.
### Parameter Structure
I organize parameters by environment and purpose:
```
/asimon-blog/
  prod/
    ga4-service-account    (SecureString)
    ga4-property-id        (String)
    ga4-measurement-id     (String)
    github-actions-secret  (SecureString)
```
The IAM role for GitHub Actions has read-only access to these specific paths:
```json
{
  "Effect": "Allow",
  "Action": ["ssm:GetParameter"],
  "Resource": "arn:aws:ssm:us-east-2:*:parameter/asimon-blog/prod/*"
}
```
## Infrastructure Drift Detection
Before deploying, we verify the CloudFormation stack matches the template in the repo:
```yaml
infra_drift_check:
  name: Check CloudFormation Drift
  runs-on: ubuntu-latest
  steps:
    - name: Verify CloudFormation template matches deployed stack
      run: ./scripts/check-cloudformation-drift.sh
```
This catches scenarios where someone made manual changes to AWS resources. If drift is detected, the deployment fails and alerts you to investigate.
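The script itself isn't shown in this post; a minimal sketch of what such a check might look like, using CloudFormation's drift-detection APIs (the stack name and polling interval are assumptions):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of scripts/check-cloudformation-drift.sh:
# start a drift detection run, poll until it finishes, and fail
# the job if the stack has drifted from its template.
set -euo pipefail

check_drift() {
  local stack="$1" id status drift

  # Kick off a drift detection run
  id=$(aws cloudformation detect-stack-drift \
        --stack-name "$stack" \
        --query "StackDriftDetectionId" --output text)

  # Poll until detection finishes
  status="DETECTION_IN_PROGRESS"
  while [ "$status" = "DETECTION_IN_PROGRESS" ]; do
    sleep 1
    status=$(aws cloudformation describe-stack-drift-detection-status \
              --stack-drift-detection-id "$id" \
              --query "DetectionStatus" --output text)
  done

  drift=$(aws cloudformation describe-stack-drift-detection-status \
           --stack-drift-detection-id "$id" \
           --query "StackDriftStatus" --output text)

  if [ "$drift" != "IN_SYNC" ]; then
    echo "Stack $stack has drifted ($drift) - aborting deploy" >&2
    return 1
  fi
  echo "Stack $stack is in sync"
}

# In CI this would be: check_drift "asimon-blog-prod"
```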
## The Build Step
The build combines several operations:
```yaml
- name: Generate view counts from GA4
  run: node ./scripts/generate-view-counts.mjs

- name: Build static site
  run: NODE_OPTIONS="--max-old-space-size=4096" pnpm build
  env:
    NODE_ENV: production
```
Key considerations:
- **Memory allocation** - Next.js builds can be memory-hungry. The 4GB heap prevents OOM crashes.
- **View counts first** - GA4 data must be fetched before the build reads it.
- **Skip option** - The workflow accepts `skip_ga4` for emergency deploys without waiting for GA4.
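In the workflow, that option can gate the GA4 step with an `if:` condition. A sketch, since the exact expression in the real workflow isn't shown:

```yaml
- name: Generate view counts from GA4
  # Skipped when the manual skip_ga4 input is set; on push events
  # the input is absent and the step runs normally.
  if: ${{ !inputs.skip_ga4 }}
  run: node ./scripts/generate-view-counts.mjs
```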
## S3 Deployment Strategy
Not all files deserve the same cache policy:
```yaml
- name: Deploy to S3
  run: |
    # Static assets: cache forever (hashed filenames)
    aws s3 sync out/ s3://$BUCKET_NAME/ \
      --delete \
      --cache-control "public, max-age=31536000, immutable" \
      --exclude "*.html" \
      --exclude "*.txt"

    # HTML files: cache briefly (content changes).
    # --exclude "*" first, because sync includes everything by
    # default and --include only re-includes excluded files.
    aws s3 sync out/ s3://$BUCKET_NAME/ \
      --exclude "*" \
      --include "*.html" \
      --include "*.txt" \
      --cache-control "public, max-age=3600"
```
| File Type     | Cache Duration | Why                                       |
|---------------|----------------|-------------------------------------------|
| JS/CSS/Images | 1 year         | Hashed filenames change on content change |
| HTML          | 1 hour         | Content updates need to propagate         |
| XML (Atom)    | 1 hour         | Feed readers expect fresh content         |
The `--delete` flag removes files from S3 that no longer exist locally, keeping the bucket clean.
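The two-pass split is easy to sanity-check locally. A small shell stand-in for the exclude/include rules above (the filenames are illustrative):

```shell
# Which sync pass would touch a given file from out/ ?
classify() {
  case "$1" in
    *.html|*.txt) echo "pass 2 (max-age=3600): $1" ;;
    *)            echo "pass 1 (immutable): $1" ;;
  esac
}

classify "index.html"
classify "_next/static/chunks/main-abc123.js"
classify "robots.txt"
```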
## CloudFront Invalidation
After uploading to S3, we invalidate the CDN cache:
```yaml
- name: Invalidate CloudFront
  run: |
    INVALIDATION_ID=$(aws cloudfront create-invalidation \
      --distribution-id $DISTRIBUTION_ID \
      --paths "/*" \
      --query "Invalidation.Id" \
      --output text)

    # Wait for completion
    aws cloudfront wait invalidation-completed \
      --distribution-id $DISTRIBUTION_ID \
      --id $INVALIDATION_ID
```
The `wait` command blocks until the invalidation propagates to all edge locations. This adds 30-60 seconds but ensures post-deployment tests see fresh content.
### Invalidation Costs
CloudFront includes 1,000 free invalidation paths per month. Using `/*` counts as one path, so you can deploy frequently without cost concerns.
## Post-Deployment Validation
The final stage runs Playwright tests against the live site:
```yaml
- name: Run post-deployment smoke tests
  run: |
    sleep 30  # Wait for CDN propagation
    npx playwright test --project=production-smoke --reporter=list
```
These tests verify:
- Homepage returns 200
- Posts render correctly
- WWW redirect works
- Security headers are present
- Atom feed is valid XML
If any test fails, the workflow fails - you know immediately that something's wrong.
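For a flavor of what these checks amount to, here's the Atom-feed check reduced to its essence: get a feed (written to a file here instead of fetched) and confirm it parses as XML. This is a local stand-in, not the actual Playwright test, and it assumes `python3` is available:

```shell
# Stand-in for the "Atom feed is valid XML" smoke check:
# write a minimal Atom document and confirm it parses.
cat > /tmp/feed.xml <<'EOF'
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>asimon.blog</title>
  <updated>2025-01-01T00:00:00Z</updated>
</feed>
EOF

python3 -c 'import xml.dom.minidom; xml.dom.minidom.parse("/tmp/feed.xml")' \
  && echo "feed is well-formed XML"
```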
### Playwright Browser Caching
Playwright browsers are large (~300MB). Caching saves significant time:
```yaml
- name: Cache Playwright browsers
  uses: actions/cache@v3
  with:
    path: ~/.cache/ms-playwright
    key: ${{ runner.os }}-playwright-${{ hashFiles('**/pnpm-lock.yaml') }}
```
First run: ~2 minutes to download browsers. Cached run: ~5 seconds to restore.
## Workflow Inputs
The workflow accepts manual triggers with options:
```yaml
workflow_dispatch:
  inputs:
    skip_ga4:
      description: 'Skip GA4 data generation'
      default: false
      type: boolean
    skip_tests:
      description: 'Skip post-deployment tests'
      default: false
      type: boolean
```
Useful scenarios:
- GA4 API is down → `skip_ga4=true`, deploy with stale counts
- Emergency hotfix → `skip_tests=true`, deploy faster
- Testing workflow changes → run manually on a branch
## IAM Permissions
The GitHub Actions IAM user has minimal permissions:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:DeleteObject", "s3:ListBucket"],
      "Resource": ["arn:aws:s3:::asimon-blog-*"]
    },
    {
      "Effect": "Allow",
      "Action": ["cloudfront:CreateInvalidation", "cloudfront:GetDistribution"],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["ssm:GetParameter"],
      "Resource": "arn:aws:ssm:us-east-2:*:parameter/asimon-blog/*"
    }
  ]
}
```
No `s3:*`, no admin permissions. Just what's needed to deploy.
## Deployment Summary
Every successful deployment ends with a summary:
```yaml
- name: Deployment summary
  run: |
    echo "Deployment completed successfully!"
    echo ""
    echo "Deployment Summary:"
    echo "- Site: https://asimon.blog"
    echo "- CDN: CloudFront distribution $CLOUDFRONT_ID"
    echo "- Build time: $(date -u)"
```
This makes the Actions log easy to scan for the outcome.
## Total Execution Time
A typical deployment:
| Step                    | Duration         |
|-------------------------|------------------|
| Checkout + Setup        | ~15s             |
| Cache restore           | ~5s              |
| Install deps            | ~10s (cached)    |
| GA4 fetch               | ~5s              |
| Build                   | ~30s             |
| S3 upload               | ~20s             |
| CloudFront invalidation | ~45s             |
| Post-deploy tests       | ~30s             |
| **Total**               | **~2.5 minutes** |
From push to live in under 3 minutes.
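As a quick sanity check, the per-step estimates in the table do add up to roughly the quoted total:

```shell
# Sum the per-step estimates from the table above.
total=$((15 + 5 + 10 + 5 + 30 + 20 + 45 + 30))
echo "${total}s total, about $((total / 60))m$((total % 60))s"
# prints: 160s total, about 2m40s
```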
## Failure Handling
When things go wrong:
- **Build fails** → Deployment stops, S3 unchanged
- **S3 upload fails** → Partial upload, CloudFront still serves old content
- **Invalidation fails** → Old content cached longer, but still works
- **Tests fail** → Site is live but you know there's an issue
The pipeline is designed so failures don't break production. Each step either completes fully or leaves the previous state intact.
## Summary
A good CI/CD pipeline is invisible when it works and informative when it doesn't. This workflow:
- Caches aggressively to minimize build times
- Keeps secrets in Parameter Store for security and flexibility
- Validates before and after deployment
- Fails safely without breaking production
The full workflow file is ~400 lines of YAML, but most of that is validation and error handling. The core deploy logic is surprisingly simple: build, sync to S3, invalidate CDN.
Next: *Comprehensive E2E Testing with Playwright* covers the test architecture that makes post-deployment validation reliable.