GitHub Actions has become the default CI/CD platform for a significant share of the software industry. Its tight integration with GitHub's ecosystem -- pull requests, issues, releases, packages -- eliminates the friction of connecting an external CI service. But more importantly, GitHub Actions hits the right balance between simplicity and power. A basic pipeline takes minutes to set up, yet the platform supports complex multi-environment deployments, matrix testing strategies, and reusable workflow libraries that scale to enterprise needs.
This guide walks through building a production-grade CI/CD pipeline with GitHub Actions from the ground up. We cover core concepts, a real workflow that lints, tests, builds, and deploys, matrix builds for multi-environment testing, secrets management, caching strategies, reusable workflows, and deployment patterns that minimize risk. Every example uses real YAML that you can adapt to your own projects.
Core Concepts: Workflows, Jobs, Steps, and Runners
Understanding GitHub Actions' execution model is essential before writing your first workflow.
A workflow is a YAML file in .github/workflows/ that defines an automated process. Workflows are triggered by events -- a push to a branch, a pull request, a release, a schedule (cron), or a manual dispatch.
A job is a set of steps that execute on the same runner. Jobs within a workflow run in parallel by default. You can define dependencies between jobs to create sequential execution chains.
A step is an individual task within a job. A step can run a shell command or invoke an action (a reusable unit of code published to the GitHub Marketplace or defined in your repository).
A runner is the machine that executes your job. GitHub provides hosted runners for Ubuntu, macOS, and Windows. You can also register self-hosted runners for specialized hardware, compliance requirements, or cost optimization.
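The parallel-by-default model and job dependencies can be sketched in a few lines (job names and commands are illustrative):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - run: echo "build runs first"
  test:
    runs-on: ubuntu-latest
    needs: build            # waits for build to succeed before starting
    steps:
      - run: echo "test runs after build"
  docs:
    runs-on: ubuntu-latest  # no needs, so this runs in parallel with build
    steps:
      - run: echo "docs runs immediately"
```

If `build` fails, `test` is skipped, while `docs` still runs to completion.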
Here is the anatomy of a minimal workflow:
# .github/workflows/ci.yml
name: CI

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Run linter
        run: npm run lint

      - name: Run tests
        run: npm test

      - name: Build
        run: npm run build
The on block defines triggers. The jobs block defines one or more jobs. Each job specifies a runs-on target and a list of steps. The actions/checkout@v4 and actions/setup-node@v4 steps invoke reusable actions maintained by GitHub that handle common setup tasks.
The npm ci command (not npm install) is important for CI environments. It installs exact versions from package-lock.json, fails if the lock file is out of sync with package.json, and is significantly faster because it skips the dependency resolution step.
Building a Production Pipeline: Lint, Test, Build, Deploy
A production pipeline goes beyond a simple build. It enforces code quality, runs tests at multiple levels, creates deployable artifacts, and deploys to staging and production environments with appropriate gates.
Here is a complete pipeline for a Next.js application deployed to a cloud platform:
name: Production Pipeline

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

jobs:
  lint:
    name: Lint & Type Check
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - run: npm ci
      - run: npm run lint
      - run: npx tsc --noEmit

  test:
    name: Test
    runs-on: ubuntu-latest
    needs: lint
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_USER: test
          POSTGRES_PASSWORD: test
          POSTGRES_DB: testdb
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - run: npm ci
      - name: Run unit tests
        run: npm run test:unit -- --coverage
        env:
          DATABASE_URL: postgresql://test:test@localhost:5432/testdb
      - name: Run integration tests
        run: npm run test:integration
        env:
          DATABASE_URL: postgresql://test:test@localhost:5432/testdb
      - name: Upload coverage
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: coverage-report
          path: coverage/
          retention-days: 7

  build:
    name: Build
    runs-on: ubuntu-latest
    needs: test
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - run: npm ci
      - run: npm run build
      - name: Upload build artifact
        uses: actions/upload-artifact@v4
        with:
          name: build-output
          path: .next/
          retention-days: 1

  deploy-staging:
    name: Deploy to Staging
    runs-on: ubuntu-latest
    needs: build
    if: github.ref == 'refs/heads/main'
    environment:
      name: staging
      url: https://staging.example.com
    steps:
      - uses: actions/checkout@v4
      - uses: actions/download-artifact@v4
        with:
          name: build-output
          path: .next/
      - name: Deploy to staging
        run: |
          npx vercel deploy --prebuilt --token=${{ secrets.VERCEL_TOKEN }} \
            --env=staging
        env:
          VERCEL_ORG_ID: ${{ secrets.VERCEL_ORG_ID }}
          VERCEL_PROJECT_ID: ${{ secrets.VERCEL_PROJECT_ID }}

  deploy-production:
    name: Deploy to Production
    runs-on: ubuntu-latest
    needs: deploy-staging
    if: github.ref == 'refs/heads/main'
    environment:
      name: production
      url: https://example.com
    steps:
      - uses: actions/checkout@v4
      - uses: actions/download-artifact@v4
        with:
          name: build-output
          path: .next/
      - name: Deploy to production
        run: |
          npx vercel deploy --prebuilt --prod --token=${{ secrets.VERCEL_TOKEN }}
        env:
          VERCEL_ORG_ID: ${{ secrets.VERCEL_ORG_ID }}
          VERCEL_PROJECT_ID: ${{ secrets.VERCEL_PROJECT_ID }}
Several patterns in this pipeline deserve attention. The concurrency block ensures only one pipeline runs per branch at a time, canceling any in-progress run when a new commit is pushed. This prevents wasted compute and avoids deploy races. The services block in the test job spins up a PostgreSQL container, giving your integration tests a real database without external dependencies. The environment blocks on the deploy jobs enable GitHub's environment protection rules, which can require manual approval before production deployment.
The pipeline flows sequentially: lint, then test, then build, then staging, then production. Each stage must pass before the next begins. The build artifact is created once and reused for both staging and production deployments, ensuring you deploy the exact same artifact you tested.
Matrix Builds for Multi-Environment Testing
Matrix builds let you test across multiple configurations -- Node.js versions, operating systems, database versions -- in parallel. This catches compatibility issues before they reach production.
jobs:
  test:
    name: Test (Node ${{ matrix.node-version }}, ${{ matrix.os }})
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        node-version: [18, 20, 22]
        os: [ubuntu-latest, macos-latest, windows-latest]
        exclude:
          - os: macos-latest
            node-version: 18
          - os: windows-latest
            node-version: 18
        include:
          - os: ubuntu-latest
            node-version: 20
            coverage: true
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node-version }}
          cache: 'npm'
      - run: npm ci
      - run: npm test
      - name: Upload coverage
        if: matrix.coverage
        uses: codecov/codecov-action@v4
        with:
          token: ${{ secrets.CODECOV_TOKEN }}
The fail-fast: false setting ensures all matrix combinations run to completion even if one fails. This gives you the full picture of compatibility rather than stopping at the first failure. The exclude block removes unnecessary combinations (there is little value in testing Node 18 on macOS and Windows if you are dropping support for it soon). The include block adds custom properties to specific combinations -- here, enabling coverage uploads only for one specific configuration.
Secrets Management and Security
Secrets management in GitHub Actions operates at three levels: repository secrets, environment secrets, and organization secrets. Each has different visibility and access controls.
Repository secrets are available to all workflows in the repository. Use them for credentials that are not environment-specific, like a Slack webhook URL or a code coverage token.
Environment secrets are scoped to a specific deployment environment and can be protected with approval rules. Use them for credentials that differ between staging and production, like database URLs or API keys.
Organization secrets can be shared across multiple repositories. Use them for credentials that are common to your infrastructure, like a container registry password.
Never echo secrets in logs. GitHub automatically redacts recognized secrets from log output, but this is not foolproof for values that are transformed or embedded in URLs:
# Bad -- the secret may appear in error output or URL encoding
- run: curl https://api.example.com?key=${{ secrets.API_KEY }}

# Better -- pass the secret through an environment variable
- name: Call API
  run: |
    response=$(curl -s -H "Authorization: Bearer $API_KEY" \
      https://api.example.com/health)
    echo "Status: $(echo $response | jq -r '.status')"
  env:
    API_KEY: ${{ secrets.API_KEY }}
For structured secrets like service account JSON keys, a common pattern is to base64 encode the value before storing it (avoiding newline and quoting pitfalls) and decode it in your workflow. Note that base64 encoding inflates the value, so it does not work around GitHub's 48 KB per-secret size limit:
- name: Decode service account key
  run: echo "$GCP_SA_KEY" | base64 --decode > sa-key.json
  env:
    GCP_SA_KEY: ${{ secrets.GCP_SA_KEY }}
Use GITHUB_TOKEN (automatically provided in every workflow run) for operations against the current repository rather than creating personal access tokens. The GITHUB_TOKEN has scoped permissions that you can restrict in your workflow:
permissions:
  contents: read
  pull-requests: write
  issues: read
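For example, a job holding only pull-requests: write can comment on the triggering pull request through the gh CLI (preinstalled on hosted runners), using the automatic token instead of a PAT. A minimal sketch; the job name and comment body are illustrative:

```yaml
permissions:
  pull-requests: write
jobs:
  pr-comment:
    if: github.event_name == 'pull_request'
    runs-on: ubuntu-latest
    steps:
      - name: Comment on the pull request
        run: gh pr comment ${{ github.event.pull_request.number }} --body "CI checks passed"
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GH_REPO: ${{ github.repository }}
```

If the token lacks a permission the step needs, the API call fails with a 403, which makes overly narrow grants easy to spot.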
Caching for Faster Builds
Build times directly impact developer productivity. Every minute spent waiting for CI is a minute not spent writing code. Caching dependencies, build outputs, and intermediate artifacts can cut pipeline times by 50-80 percent.
The actions/cache action provides general-purpose caching. Many setup actions (like actions/setup-node) include built-in caching for their respective package managers:
# Built-in caching with setup-node
- uses: actions/setup-node@v4
  with:
    node-version: '20'
    cache: 'npm'

# Explicit caching for other tools
- name: Cache Playwright browsers
  uses: actions/cache@v4
  with:
    path: ~/.cache/ms-playwright
    key: playwright-${{ runner.os }}-${{ hashFiles('package-lock.json') }}

- name: Cache Next.js build
  uses: actions/cache@v4
  with:
    path: .next/cache
    key: nextjs-${{ runner.os }}-${{ hashFiles('package-lock.json') }}-${{ hashFiles('**/*.ts', '**/*.tsx') }}
    restore-keys: |
      nextjs-${{ runner.os }}-${{ hashFiles('package-lock.json') }}-
      nextjs-${{ runner.os }}-
The restore-keys fallback pattern is important. If no exact cache match exists, GitHub tries each restore key prefix in order. This means a cache from a previous build with different source files but the same dependencies will still be used, providing partial caching benefits rather than a cold start.
For Docker-based workflows, use docker/build-push-action with layer caching enabled:
- name: Build and push Docker image
  uses: docker/build-push-action@v5
  with:
    context: .
    push: true
    tags: ghcr.io/${{ github.repository }}:${{ github.sha }}
    cache-from: type=gha
    cache-to: type=gha,mode=max
The type=gha cache backend stores Docker layer cache in GitHub Actions' cache, avoiding the need for a separate registry for cache layers.
Reusable Workflows and Composite Actions
As your organization grows, duplicating workflow YAML across repositories becomes a maintenance burden. Reusable workflows and composite actions solve this problem.
A reusable workflow is a full workflow defined in one repository and called from workflows in other repositories:
# In .github/workflows/reusable-deploy.yml of your shared repo
name: Reusable Deploy

on:
  workflow_call:
    inputs:
      environment:
        required: true
        type: string
      app-name:
        required: true
        type: string
    secrets:
      deploy-token:
        required: true

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: ${{ inputs.environment }}
    steps:
      - uses: actions/checkout@v4
      - name: Deploy
        run: |
          echo "Deploying ${{ inputs.app-name }} to ${{ inputs.environment }}"
          ./scripts/deploy.sh
        env:
          DEPLOY_TOKEN: ${{ secrets.deploy-token }}
Calling this reusable workflow from another repository:
# In the consuming repository
jobs:
  deploy-staging:
    uses: your-org/shared-workflows/.github/workflows/reusable-deploy.yml@main
    with:
      environment: staging
      app-name: my-service
    secrets:
      deploy-token: ${{ secrets.DEPLOY_TOKEN }}
A composite action is a reusable set of steps defined in an action.yml file. Composite actions are more granular than reusable workflows -- they are individual steps that you include within a job:
# In .github/actions/setup-project/action.yml
name: Setup Project
description: Install dependencies and configure environment

inputs:
  node-version:
    description: Node.js version
    default: '20'

runs:
  using: composite
  steps:
    - uses: actions/setup-node@v4
      with:
        node-version: ${{ inputs.node-version }}
        cache: 'npm'
    - run: npm ci
      shell: bash
    - run: npx prisma generate
      shell: bash
Using the composite action:
steps:
  - uses: actions/checkout@v4
  - uses: ./.github/actions/setup-project
    with:
      node-version: '20'
  - run: npm test
Use reusable workflows when you want to standardize an entire pipeline (the full deploy process). Use composite actions when you want to standardize specific steps (project setup, deployment commands).
Deployment Strategies and Pipeline Health
The staging-then-production pattern shown earlier is the baseline. For more sophisticated deployments, consider these strategies.
Canary deployments route a small percentage of traffic to the new version before rolling out fully. If error rates spike on the canary, you roll back before most users are affected.
Blue-green deployments maintain two identical production environments. You deploy to the inactive environment, verify it, then switch traffic. Rollback is instantaneous -- you just switch back.
Feature flag deployments decouple deployment from release. You deploy code to production with features behind flags, then enable them gradually through your feature flag service. This eliminates the deploy-as-release pattern entirely.
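The canary pattern maps naturally onto the job and environment primitives covered earlier. A hedged sketch, assuming a deploy script that accepts a traffic percentage; the environment names and script flags are placeholders, not a real API:

```yaml
jobs:
  deploy-canary:
    runs-on: ubuntu-latest
    needs: build
    environment: production-canary   # can carry its own protection rules
    steps:
      - uses: actions/checkout@v4
      - name: Deploy canary slice of traffic
        run: ./scripts/deploy.sh --target canary --traffic 5

  deploy-full:
    runs-on: ubuntu-latest
    needs: deploy-canary
    environment: production          # require manual approval here
    steps:
      - uses: actions/checkout@v4
      - name: Promote to full rollout
        run: ./scripts/deploy.sh --target production
```

Putting the promotion behind the production environment's required-reviewers rule gives you a human checkpoint between the canary and the full rollout.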
For monitoring pipeline health, GitHub Actions provides workflow run status, duration metrics, and failure logs. Augment these with notifications:
notify:
  name: Notify on Failure
  runs-on: ubuntu-latest
  needs: [lint, test, build, deploy-production]
  if: failure()
  steps:
    - name: Send Slack notification
      uses: slackapi/slack-github-action@v1
      with:
        payload: |
          {
            "text": "Pipeline failed on ${{ github.repository }}",
            "blocks": [
              {
                "type": "section",
                "text": {
                  "type": "mrkdwn",
                  "text": "*Pipeline Failed*\n*Repo:* ${{ github.repository }}\n*Branch:* ${{ github.ref_name }}\n*Commit:* ${{ github.event.head_commit.message }}\n*Author:* ${{ github.event.head_commit.author.name }}\n<${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}|View Run>"
                }
              }
            ]
          }
      env:
        SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
        SLACK_WEBHOOK_TYPE: INCOMING_WEBHOOK
Track your pipeline metrics over time: average duration, failure rate, time-to-recovery after failures, and the ratio of pushes that pass on the first attempt. A healthy pipeline has a failure rate below 10 percent and an average duration under 10 minutes. If your pipeline consistently takes longer, invest in caching, parallelization, and selective test execution.
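GitHub's hosted Ubuntu runners ship with the gh CLI and jq, so a scheduled workflow can compute basic health numbers itself. A minimal sketch, assuming the workflow being measured lives in ci.yml (adjust the file name and schedule to your setup):

```yaml
name: Pipeline Health
on:
  schedule:
    - cron: '0 9 * * 1'   # Mondays at 09:00 UTC
permissions:
  actions: read
jobs:
  report:
    runs-on: ubuntu-latest
    steps:
      - name: Failure rate over the last 50 runs
        run: |
          gh run list --workflow ci.yml --limit 50 --json conclusion \
            | jq '{total: length,
                   failed: map(select(.conclusion == "failure")) | length}'
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GH_REPO: ${{ github.repository }}
```

Piping the same JSON into a dashboard or a Slack message turns this from a log line into a trend you can act on.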
Automating Your Pipeline with Expert Support
A well-built CI/CD pipeline is invisible infrastructure -- developers push code, and it reliably reaches production through automated quality gates. Building that pipeline well requires understanding not just GitHub Actions syntax, but deployment strategies, security practices, caching optimization, and operational patterns that keep the pipeline healthy as your codebase and team grow.
At Maranatha Technologies, we design and implement CI/CD pipelines that give development teams the confidence to ship quickly and safely. From initial GitHub Actions setup to advanced deployment strategies, reusable workflow libraries, and pipeline optimization, our DevOps team builds the automation infrastructure that lets your developers focus on writing code. If you are ready to automate your deployment pipeline, explore our DevOps and infrastructure services or reach out to start the conversation.