Supabase is one of the most powerful open-source backend platforms available today, and its free tier is generous enough to run real applications. But there is one critical gap that catches developers off guard: Supabase backups are not included on the free tier. If your Postgres database is corrupted, accidentally dropped, or your project hits the 500 MB database limit, you have no native recovery option.
This is the complete guide to Supabase backup on the free tier — a field-tested, zero-cost method built and used by the team at AI Agency Plus. By the end of this guide you will have a fully automated daily backup system that captures your entire Postgres database including schema, roles, indexes, sequences, triggers, functions, and row level security policies, and stores them securely in Cloudflare R2.
What Is Supabase and Why Does the Free Tier Have No Backups?
Supabase is an open-source Firebase alternative built on Postgres. It gives you a full Postgres database, Supabase Auth, edge functions, Supabase storage, a REST API, realtime subscriptions, and a clean Supabase dashboard — all from one platform.
Supabase Pricing Plans Explained
Understanding the Supabase pricing structure is important before choosing your backup strategy. Here is how the tiers compare:
| Feature | Free Tier | Pro Plan |
|---|---|---|
| Database size | 500 MB database limit | 8 GB database included |
| Daily backups | Not included | 7 days of daily backups |
| Point-in-time recovery | Not included | Add-on available |
| Edge functions | 500K invocations/month | 2M invocations/month |
| File storage | 1 GB | 100 GB |
| Active users (Supabase Auth) | 50,000 | 100,000 unique users |
| Project pausing | Projects are automatically paused after 1 week inactive | No pausing |
| Price | £0/month | Pro plan starts at ~$25/month per project |
Supabase’s native backup system — including point-in-time recovery (PITR) — is reserved for paid tiers. The pro plan includes 7 days of daily backups, and PITR is available as an add-on on pro and team plans. Free tier projects get none of this, which is why this guide exists.
Supabase vs Firebase: Why Postgres Makes Self-Backup Possible
One of the things that makes Supabase compelling over Firebase is that your data lives in a real Postgres database. That means you can use standard Postgres tools like pg_dump and psql to export and restore your data without being locked into proprietary formats. According to the Supabase docs, the platform is built on standard PostgreSQL, which makes logical backup using the Supabase CLI or direct pg_dump fully supported.
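As a quick illustration of that portability, a one-off logical export needs nothing beyond the standard client tools. This is a minimal sketch only — the connection string is a placeholder, and the full workflow later in this guide adds roles, schema exclusions, and automation:

```bash
# Placeholder URI — substitute your project's real connection string.
# Export the public schema as plain, human-readable SQL:
pg_dump "postgresql://postgres:PASSWORD@db.YOURREF.supabase.co:5432/postgres" \
  --schema=public --no-privileges -f backup.sql

# The same file restores into any other Postgres instance with psql:
psql "postgresql://postgres:PASSWORD@localhost:5432/postgres" -f backup.sql
```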
Why Free Tier Projects Need a Backup Strategy
Projects on the free tier face three real risks that make a solid backup strategy essential:
- No automated backups. Supabase does not run daily backups on the free tier. If data is lost, you start from scratch.
- Project pausing. Free tier projects are automatically paused after one week of inactivity. While pausing does not destroy data, it is a reminder that the free tier is not designed for mission-critical uptime.
- 500 MB database limit. Once your database size hits the limit, writes stop. A backup taken before that point may be your only way to recover a clean working state.
Architecture Overview: How This Backup System Works
This backup strategy connects three services:
- Make.com — daily scheduler and alert system
- GitHub Actions — runs `pg_dump` on an Ubuntu runner and uploads the files
- Cloudflare R2 — off-site object storage for all backup files

Make.com cannot run shell binaries like `pg_dump` directly — it is the scheduler and alerter only. GitHub Actions provides a free Ubuntu runner where the actual dump commands execute. The backup produces three files — `01roles.sql`, `02schema.sql`, and `03data.sql` — plus a compressed `.tar.gz` archive, all stored in Cloudflare R2.
Free Tier Limits: What This System Actually Uses
| Service | Free Allowance | This Backup Uses |
|---|---|---|
| GitHub Actions | 2,000 minutes/month | ~3 min/day = 90 min/month |
| Cloudflare R2 Storage | 10 GB | Depends on DB size |
| Cloudflare R2 Writes | 1M ops/month | ~5 ops/day = 155/month |
| Cloudflare R2 Egress | Free forever | Free on every restore |
| Make.com Operations | 1,000/month | ~3 ops/day = 90/month |
The full setup runs comfortably within free-tier limits for any typical Supabase free-tier project.
Part 1: Setting Up Cloudflare R2 Storage
Cloudflare R2 is your off-site backup store. It uses the S3-compatible API, so standard AWS CLI commands work against it. Note that R2 requires a payment method to activate, but you will not be charged while within free-tier limits.
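Because R2 speaks the S3 API, you can sanity-check your setup from any machine with the AWS CLI. A sketch with placeholder values — the account ID and keys come from the steps below:

```bash
# Placeholder credentials — use the values created in Steps 1.1–1.5
aws configure set aws_access_key_id YOUR_R2_ACCESS_KEY_ID
aws configure set aws_secret_access_key YOUR_R2_SECRET_ACCESS_KEY
aws configure set default.region auto

# A working token lists your buckets via the R2 endpoint
aws s3 ls --endpoint-url https://YOURACCOUNTID.r2.cloudflarestorage.com
```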
Step 1.1 — Find Your Cloudflare Account ID
- Go to dash.cloudflare.com and log in
- On the Home page, look at the right-hand sidebar
- Copy your Account ID — it is a 32-character hex string
- Save it — you will need it later as `R2_ACCOUNT_ID`
Step 1.2 — Activate R2
- In the left sidebar click R2 Object Storage
- If not yet activated, click Purchase R2 Plan and add a payment method
- Complete the activation
Step 1.3 — Create Your Bucket
- In R2 Object Storage click Create bucket
- Fill in the following:
  - Bucket name: `supabase-backups`
  - Location: choose your region (EU recommended for UK-based teams)
  - Default storage class: Standard
- Click Create bucket
- Save the bucket name as `R2_BUCKET_NAME`
Step 1.4 — Get Your Regional R2 Endpoint
After creating the bucket, click into it and find the S3 API endpoint. For an EU bucket it looks like:

```text
https://YOURACCOUNTID.eu.r2.cloudflarestorage.com
```

Important: if you chose the EU region, your endpoint contains `.eu.` — this is different from the default endpoint, which has no region prefix. Using the wrong endpoint causes silent upload failures with no error message.

Save the full URL as `R2_ENDPOINT`.
Step 1.5 — Create an R2 API Token
- In R2 Object Storage click Manage R2 API Tokens (top right)
- Click Create API Token
- Fill in:
  - Token name: `supabase-backup-token`
  - Permissions: Object Read & Write — must be Read & Write, not Read Only (Read Only causes upload failures after 5 retries)
  - Specify buckets: select `supabase-backups`
  - TTL: no expiry
- Click Create API Token
- Copy both values immediately — they are shown only once:
  - Access Key ID — save as `R2_ACCESS_KEY_ID`
  - Secret Access Key — save as `R2_SECRET_ACCESS_KEY`
R2 Values Checklist
Before moving on, confirm you have saved all five of these:
| Variable | Where to Find It |
|---|---|
| `R2_ACCOUNT_ID` | Cloudflare Home — right sidebar |
| `R2_ENDPOINT` | Bucket details — S3 API endpoint (include `.eu.` if EU region) |
| `R2_ACCESS_KEY_ID` | R2 API Token creation page |
| `R2_SECRET_ACCESS_KEY` | R2 API Token creation page (shown once only) |
| `R2_BUCKET_NAME` | Your bucket name — `supabase-backups` |
Part 2: GitHub Repository Setup
The GitHub repository is where your backup workflow lives. It must be private — the SQL dump files contain your full database contents.
Step 2.1 — Create a Private Repository
- Go to github.com/new
- Fill in:
  - Repository name: `supabase-backup`
  - Visibility: Private — this is critical
  - Tick Add a README file
- Click Create repository
Step 2.2 — Get Your Supabase Session Pooler URL
Why Session Pooler? GitHub Actions runners only support IPv4 outbound connections. The Supabase free tier Direct connection URL resolves to an IPv6 address, which causes an immediate `Network is unreachable` error when `pg_dump` tries to connect. The Session Pooler routes through IPv4. Note: the Transaction Pooler does not work with `pg_dump` — only the Session Pooler is compatible.

- Open your Supabase dashboard and select your project
- Click the Connect button in the top menu bar
- Select the Direct tab
- Select the Session pooler radio button
- Set Type to URI
- Copy the connection string — it looks like:

  ```text
  postgresql://postgres.YOURREF:PASSWORD@aws-0-eu-west-2.pooler.supabase.com:5432/postgres
  ```

  The host contains `aws-0-REGION.pooler.supabase.com` — this confirms it is the IPv4-compatible Session Pooler
- Save as `SUPABASE_DB_URL`
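Before storing the URL as a GitHub secret, it is worth confirming from a local terminal that it actually connects (this assumes `psql` is installed locally; the URI below is a placeholder — use the one copied from your dashboard):

```bash
# Should print the Postgres server version over the IPv4 Session Pooler.
# Placeholder URI — substitute your real connection string.
psql "postgresql://postgres.YOURREF:PASSWORD@aws-0-eu-west-2.pooler.supabase.com:5432/postgres" \
  -c "select version();"
```

If this hangs or fails locally, fix the URL now — it will fail the same way inside GitHub Actions.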
Step 2.3 — Create a GitHub Personal Access Token (PAT)
Make.com uses this token to trigger and monitor your workflow via the GitHub API.
- In GitHub go to Profile photo → Settings → Developer settings → Personal access tokens → Fine-grained tokens
- Click Generate new token
- Fill in:
  - Token name: `make-com-backup-trigger`
  - Expiration: No expiration
  - Repository access: Only select repositories — `supabase-backup`
  - Permissions:
    - Actions: Read and Write
    - Contents: Read and Write
- Click Generate token
- Copy immediately — it starts with `github_pat_...`
- Save as `GITHUB_PAT`
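You can check the token reaches the repository before wiring it into Make.com. A sketch only — `YOUR_USERNAME` and the `github_pat_XXXX` value are placeholders for your own details:

```bash
# A 200 response means the PAT can see the private repo;
# 401/403 means the token value or its fine-grained permissions are wrong.
curl -s -o /dev/null -w "%{http_code}\n" \
  -H "Authorization: Bearer github_pat_XXXX" \
  -H "Accept: application/vnd.github+json" \
  https://api.github.com/repos/YOUR_USERNAME/supabase-backup
```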
Step 2.4 — Add Repository Secrets
Go to your repo — Settings → Secrets and variables → Actions → New repository secret and add these five secrets:
| Secret Name | Value |
|---|---|
| `SUPABASE_DB_URL` | Session Pooler URI from Step 2.2 |
| `R2_ACCESS_KEY_ID` | From Step 1.5 |
| `R2_SECRET_ACCESS_KEY` | From Step 1.5 |
| `R2_BUCKET_NAME` | `supabase-backups` |
| `R2_ENDPOINT` | Full regional endpoint URL from Step 1.4 |
Step 2.5 — Add a Repository Variable
Still in Settings → Secrets and variables → Actions, click the Variables tab and add:
- Name: `BACKUP_ENABLED`
- Value: `true`

Set this to `false` any time you want to pause all backups without editing the workflow code.
Part 3: Creating the GitHub Actions Backup Workflow
Step 3.1 — Create the Workflow File
- In your `supabase-backup` repo click Add file → Create new file
- Type the filename exactly as: `.github/workflows/backup.yml`
- Paste the complete YAML below:
```yaml
name: Supabase Full Backup

on:
  workflow_dispatch:        # Triggered by Make.com via GitHub API
  schedule:
    - cron: '0 2 * * *'     # Safety net: runs automatically at 2:00 AM UTC daily

jobs:
  backup:
    name: Dump & Upload to Cloudflare R2
    runs-on: ubuntu-latest
    permissions:
      contents: write
    if: vars.BACKUP_ENABLED == 'true'
    env:
      SUPABASE_DB_URL: ${{ secrets.SUPABASE_DB_URL }}
      R2_ACCESS_KEY_ID: ${{ secrets.R2_ACCESS_KEY_ID }}
      R2_SECRET_ACCESS_KEY: ${{ secrets.R2_SECRET_ACCESS_KEY }}
      R2_BUCKET_NAME: ${{ secrets.R2_BUCKET_NAME }}
      R2_ENDPOINT: ${{ secrets.R2_ENDPOINT }}
    steps:
      # 1. Checkout repo — uses Node.js 24 (fixes Node 20 deprecation warning)
      - name: Checkout repository
        uses: actions/checkout@v5

      # 2. Install PostgreSQL 17 client directly — no Docker, no Supabase CLI
      #    Fixes IPv6 unreachable error + 1.7 GB Docker image pull
      #    PATH export ensures all steps use v17, not the runner default v16
      - name: Install PostgreSQL 17 client
        run: |
          wget -qO- https://www.postgresql.org/media/keys/ACCC4CF8.asc \
            | sudo gpg --dearmor -o /usr/share/keyrings/postgresql.gpg
          echo "deb [signed-by=/usr/share/keyrings/postgresql.gpg] \
            https://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" \
            | sudo tee /etc/apt/sources.list.d/postgresql.list
          sudo apt-get update -qq
          sudo apt-get install -y postgresql-client-17
          echo "/usr/lib/postgresql/17/bin" >> $GITHUB_PATH
          echo "Installed: $(/usr/lib/postgresql/17/bin/pg_dump --version)"

      # 3. Generate timestamped folder name
      - name: Set backup timestamp
        run: echo "BACKUP_DATE=$(date -u +'%Y-%m-%d_%H-%M-%S')" >> $GITHUB_ENV

      # 4. Create local backup directory
      - name: Create backup directory
        run: mkdir -p .backup/${{ env.BACKUP_DATE }}

      # 5. Dump roles — restore ORDER: this file must be restored FIRST
      - name: Dump roles
        run: |
          pg_dumpall --dbname=$SUPABASE_DB_URL --roles-only \
            -f .backup/${{ env.BACKUP_DATE }}/01roles.sql

      # 6. Dump schema — tables, indexes, sequences, constraints,
      #    RLS policies, triggers, functions, views, custom types
      #    Excludes all Supabase-managed internal schemas
      - name: Dump schema
        run: |
          pg_dump $SUPABASE_DB_URL \
            --schema-only --no-privileges \
            --exclude-schema=auth \
            --exclude-schema=storage \
            --exclude-schema=realtime \
            --exclude-schema=supabase_functions \
            --exclude-schema=supabase_migrations \
            --exclude-schema=graphql \
            --exclude-schema=graphql_public \
            --exclude-schema=vault \
            --exclude-schema=pgsodium \
            --exclude-schema=pgsodium_masks \
            --exclude-schema=extensions \
            --exclude-schema=pgbouncer \
            --exclude-schema=net \
            -f .backup/${{ env.BACKUP_DATE }}/02schema.sql

      # 7. Dump data — all row data using COPY protocol (fastest method)
      #    Restore ORDER: this file must be restored LAST
      - name: Dump data
        run: |
          pg_dump $SUPABASE_DB_URL \
            --data-only --no-privileges \
            --exclude-schema=auth \
            --exclude-schema=storage \
            --exclude-schema=realtime \
            --exclude-schema=supabase_functions \
            --exclude-schema=supabase_migrations \
            --exclude-schema=graphql \
            --exclude-schema=graphql_public \
            --exclude-schema=vault \
            --exclude-schema=pgsodium \
            --exclude-schema=pgsodium_masks \
            --exclude-schema=extensions \
            --exclude-schema=pgbouncer \
            --exclude-schema=net \
            -f .backup/${{ env.BACKUP_DATE }}/03data.sql

      # 8. Verify all 3 files exist and are non-empty
      - name: Verify backup files
        run: |
          echo "Backup file sizes:"
          ls -lh .backup/${{ env.BACKUP_DATE }}/
          FAILED=0
          for f in 01roles.sql 02schema.sql 03data.sql; do
            FILE=".backup/${{ env.BACKUP_DATE }}/$f"
            if [ ! -s "$FILE" ]; then
              echo "ERROR: $FILE is missing or empty!"
              FAILED=1
            else
              echo "OK — $FILE: $(wc -l < $FILE) lines, $(du -sh $FILE | cut -f1)"
            fi
          done
          if [ $FAILED -ne 0 ]; then exit 1; fi

      # 9. Compress all 3 files into a single archive
      - name: Compress backup
        run: |
          cd .backup
          tar -czf ${{ env.BACKUP_DATE }}.tar.gz ${{ env.BACKUP_DATE }}/
          echo "Archive size: $(du -sh ${{ env.BACKUP_DATE }}.tar.gz | cut -f1)"

      # 10. Configure AWS CLI for Cloudflare R2
      - name: Configure AWS CLI for R2
        run: |
          aws configure set aws_access_key_id $R2_ACCESS_KEY_ID
          aws configure set aws_secret_access_key $R2_SECRET_ACCESS_KEY
          aws configure set default.region auto
          aws configure set default.output json

      # 11. Upload compressed archive to R2
      - name: Upload archive to R2
        run: |
          aws s3 cp .backup/${{ env.BACKUP_DATE }}.tar.gz \
            s3://$R2_BUCKET_NAME/archives/${{ env.BACKUP_DATE }}.tar.gz \
            --endpoint-url $R2_ENDPOINT --no-progress
          echo "Archive uploaded successfully"

      # 12. Upload individual SQL files to R2
      - name: Upload individual SQL files to R2
        run: |
          for f in 01roles.sql 02schema.sql 03data.sql; do
            aws s3 cp .backup/${{ env.BACKUP_DATE }}/$f \
              s3://$R2_BUCKET_NAME/backups/${{ env.BACKUP_DATE }}/$f \
              --endpoint-url $R2_ENDPOINT --no-progress
            echo "$f uploaded successfully"
          done

      # 13. List recent backups in R2 to confirm
      - name: List R2 bucket contents
        run: |
          echo "Recent backups in R2:"
          aws s3 ls s3://$R2_BUCKET_NAME/backups/ \
            --endpoint-url $R2_ENDPOINT --recursive --human-readable | tail -20

      # 14. Cleanup local files
      - name: Cleanup
        if: always()
        run: rm -rf .backup/
```

- Click Commit changes → Commit directly to main → Commit changes
Step 3.2 — Test the Workflow Manually
Before connecting Make.com, verify the workflow runs correctly on its own:
- Go to the Actions tab in your `supabase-backup` repo
- Click Supabase Full Backup in the left sidebar
- Click Run workflow → Run workflow
- Watch the run — all 14 steps should complete with green checkmarks in approximately 2 minutes
- Go to Cloudflare R2 → supabase-backups and confirm files appear under both `backups/` and `archives/`
What the Three Dump Files Contain
| Restore Order | File | Contains |
|---|---|---|
| 1st | `01roles.sql` | DB roles, grants, memberships |
| 2nd | `02schema.sql` | Tables, indexes, sequences, RLS policies, triggers, functions |
| 3rd | `03data.sql` | All row data (COPY protocol) |
The restore order is non-negotiable. Restoring data before the schema exists will fail, and restoring schema before roles are in place causes permission errors.
Part 4: Make.com — Scenario 1 (Schedule and Trigger)
Make.com orchestrates the daily schedule and monitors backup results. You need two separate scenarios.
Step 4.1 — Create a Data Store
- In the Make.com left sidebar go to Data stores → Add data store
- Name it `Supabase Backup State`
- Add these three fields:

| Field name | Type |
|---|---|
| `run_id` | Number |
| `triggered_at` | Text |
| `status` | Text |

- Set size to 1 MB and save
Step 4.2 — Create Scenario 1
- Go to Scenarios → Create a new scenario
- Name it `Supabase Backup — Trigger`
Step 4.3 — Add These 6 Modules in Order
Module 1 — Schedule
- Run scenario: Every day
- Start time: 02:00
Module 2 — HTTP: Trigger GitHub Actions
- Type: HTTP — Make a request
- URL: `https://api.github.com/repos/YOUR_USERNAME/supabase-backup/actions/workflows/backup.yml/dispatches`
- Method: POST
- Headers:
  - `Authorization`: `Bearer YOUR_GITHUB_PAT`
  - `Accept`: `application/vnd.github+json`
  - `X-GitHub-Api-Version`: `2022-11-28`
- Body type: Raw
- Content type: `application/json`
- Request content: `{"ref":"main"}`
- Parse response: Yes
- A successful trigger returns HTTP 204 No Content
Module 3 — Flow Control: Sleep
- Delay: 10 seconds (gives GitHub time to register the run)
Module 4 — HTTP: Get Run ID
- Type: HTTP — Make a request
- URL: `https://api.github.com/repos/YOUR_USERNAME/supabase-backup/actions/runs?per_page=1&event=workflow_dispatch`
- Method: GET
- Headers: Same three headers as Module 2
- Parse response: Yes

Module 5 — Data Store: Save Run ID
- Type: Data store — Add/replace a record
- Data store: `Supabase Backup State`
- Key: `latest_run`
- Fields:
  - `run_id`: `{{4.data.workflow_runs[].id}}`
  - `triggered_at`: `{{now}}`
  - `status`: `queued`
Module 6 — Router: Handle Trigger Errors

Add a Router after Module 2 with two paths:

- Path 1 (Success): Filter — `2.statusCode = 204` — continue to Modules 3, 4, 5
- Path 2 (Failed): Filter — `2.statusCode not equal to 204` — add an Email module with:
  - Subject: `Supabase backup trigger FAILED`
  - Body: `GitHub API returned {{2.statusCode}}. Check your GITHUB_PAT token has Actions Read & Write permission.`
Turn Scenario 1 ON.
Part 5: Make.com — Scenario 2 (Monitor and Alert)
Step 5.1 — Create Scenario 2
- Go to Scenarios → Create a new scenario
- Name it `Supabase Backup — Monitor Alert`
Step 5.2 — Add These 4 Modules in Order
Module 1 — Schedule
- Run scenario: Every day
- Start time: 02:20 — 20 minutes after the trigger to allow GitHub Actions to complete
Module 2 — Data Store: Get Run ID
- Type: Data store — Get a record
- Data store: `Supabase Backup State`
- Key: `latest_run`

Module 3 — HTTP: Check Workflow Status
- Type: HTTP — Make a request
- URL: `https://api.github.com/repos/YOUR_USERNAME/supabase-backup/actions/runs/{{2.run_id}}`
- Method: GET
- Headers: Same three headers as Scenario 1
- Parse response: Yes
Module 4 — Router: Branch on Conclusion

GitHub returns a `conclusion` field in the response. Add three paths:

- Path A (Success): Filter — `3.data.conclusion = success`
  - Email module:
    - Subject: `Supabase backup complete — {{formatDate(now; "DD/MM/YYYY")}}`
    - Body: `Backup completed successfully. Date: {{formatDate(now; "DD/MM/YYYY HH:mm")}} | Run ID: {{2.run_id}} | Logs: https://github.com/YOUR_USERNAME/supabase-backup/actions/runs/{{2.run_id}}`
  - Data Store: Update status to `completed`
- Path B (Failed): Filter — `3.data.conclusion = failure`
  - Email module:
    - Subject: `Supabase backup FAILED — {{formatDate(now; "DD/MM/YYYY")}} — ACTION REQUIRED`
    - Body: `Your backup has FAILED. Run ID: {{2.run_id}} | Common causes: SUPABASE_DB_URL password changed | R2 token expired or set to Read Only | GitHub Actions minutes exhausted | Logs: https://github.com/YOUR_USERNAME/supabase-backup/actions/runs/{{2.run_id}}`
- Path C (Still Running): Filter — `3.data.conclusion` is empty
  - Email module:
    - Subject: `Supabase backup still running after 20 mins`
    - Body: `Check manually: https://github.com/YOUR_USERNAME/supabase-backup/actions/runs/{{2.run_id}}`
Turn Scenario 2 ON.
Part 6: Testing and Verification
Step 6.1 — Full End-to-End Test
- In Make.com open Scenario 1 and click Run once
- Confirm all 6 modules show green checkmarks
- Confirm Module 2 shows status 204
- Wait 5 minutes, then open Scenario 2 and click Run once
- Confirm the Router goes to Path A (success)
- Check your inbox for the success email with the run ID and log link
Step 6.2 — Verify Files in Cloudflare R2
- Go to Cloudflare R2 → supabase-backups → Browse files
- You should see:
  - `archives/YYYY-MM-DD_HH-MM-SS.tar.gz` — compressed archive
  - `backups/YYYY-MM-DD_HH-MM-SS/` — folder containing all three SQL files
Step 6.3 — Spot-Check the Schema Dump
Download `02schema.sql` and open it in any text editor. A healthy database schema dump should contain all of the following:

| What to look for | Confirms |
|---|---|
| `CREATE TABLE public.` | Tables backed up |
| `CREATE INDEX` | Indexes backed up |
| `CREATE SEQUENCE` | Sequences backed up |
| `ENABLE ROW LEVEL SECURITY` | Row level security status backed up |
| `CREATE POLICY` | Row level security policies backed up |
| `CREATE OR REPLACE FUNCTION` | Functions backed up |
| `CREATE TRIGGER` | Triggers backed up |
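The spot-check can be scripted instead of done by eye. The sketch below generates a tiny stand-in schema file so the loop is demonstrable on its own; in practice, point it at the real `02schema.sql` you downloaded and extend the marker list with the remaining rows from the table above:

```bash
# Stand-in schema file for illustration — replace with your real 02schema.sql
cat > 02schema.sql <<'SQL'
CREATE TABLE public.todos (id bigint PRIMARY KEY, title text);
CREATE INDEX todos_title_idx ON public.todos (title);
ALTER TABLE public.todos ENABLE ROW LEVEL SECURITY;
CREATE POLICY "own rows" ON public.todos FOR SELECT USING (true);
SQL

# Report each expected marker as present (OK) or absent (MISS)
for marker in 'CREATE TABLE public\.' 'CREATE INDEX' \
              'ENABLE ROW LEVEL SECURITY' 'CREATE POLICY'; do
  if grep -qE "$marker" 02schema.sql; then
    echo "OK   $marker"
  else
    echo "MISS $marker"
  fi
done
```

Any `MISS` line on a real dump means that object type was not captured and the backup should not be trusted for restore.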
Part 7: The Complete Restore Procedure
Always restore to a fresh Supabase project — never restore over a live database.
Step 7.1 — Download Backup Files from R2
Option A — Cloudflare Dashboard:
Go to R2 → supabase-backups → backups → your dated folder and download all three SQL files.
Option B — AWS CLI:

```bash
aws configure set aws_access_key_id YOUR_R2_ACCESS_KEY_ID
aws configure set aws_secret_access_key YOUR_R2_SECRET_ACCESS_KEY
aws configure set default.region auto

aws s3 cp s3://supabase-backups/backups/2026-04-28_02-00-00 ./restore \
  --recursive \
  --endpoint-url https://YOURACCOUNTID.eu.r2.cloudflarestorage.com
```
Step 7.2 — Run the Restore in This Exact Order
Get the Direct connection URL of your new Supabase project from the Supabase dashboard and save it as `NEW_DB`.

```bash
# Step 1 — Roles first
# Ignore any "role already exists" errors — this is completely normal
psql $NEW_DB -f ./restore/01roles.sql

# Step 2 — Schema: tables, indexes, sequences, RLS, functions, triggers
psql $NEW_DB -f ./restore/02schema.sql

# Step 3 — Data last
psql $NEW_DB -f ./restore/03data.sql
```
Step 7.3 — Verify the Restore in Supabase Studio
After running all three restore commands, open the new project’s Supabase dashboard and check:
- Table Editor — all tables are present with correct row counts
- Authentication → Policies — row level security policies are present
- Database → Functions — all custom functions are listed
- Database → Triggers — all triggers are present
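The same checks can be run from the terminal with psql, using the `NEW_DB` URL from Step 7.2. These query standard Postgres catalog views, so the counts can be compared against the source project:

```bash
# Number of user tables restored into the public schema
psql $NEW_DB -c "select count(*) from pg_tables where schemaname = 'public';"

# Number of row level security policies restored
psql $NEW_DB -c "select count(*) from pg_policies where schemaname = 'public';"

# Approximate row counts per table (from planner statistics)
psql $NEW_DB -c "select relname, n_live_tup from pg_stat_user_tables order by relname;"
```

Note that `n_live_tup` is an estimate from statistics; run `select count(*)` on individual tables if you need exact figures.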
Part 8: Automatic Retention — Staying Within Free Tier Limits
Without a retention policy, backup files accumulate and will eventually push you over the Cloudflare R2 10 GB free storage limit. Set up Object Lifecycle Rules to auto-delete old files.
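To see why a 30-day window comfortably fits the 10 GB allowance for a typical free-tier database, here is a back-of-envelope estimate. The 40 MB daily size is an assumption — substitute the archive size reported in your own workflow logs:

```bash
# Each day stores the tar.gz under archives/ plus the three loose SQL
# files under backups/ — roughly 2x the archive size per day.
DAILY_MB=40          # assumption: one day's archive is ~40 MB
RETENTION_DAYS=30    # rolling window enforced by the lifecycle rules
PEAK_MB=$((DAILY_MB * 2 * RETENTION_DAYS))
echo "Peak R2 usage: ~${PEAK_MB} MB of the 10240 MB free allowance"
```

At 40 MB per day this peaks around 2.4 GB; even a database near the 500 MB limit stays well inside the free tier once old files are auto-deleted.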
Step 8.1 — Add Lifecycle Rules
- Go to Cloudflare R2 → supabase-backups → Settings tab
- Click Object lifecycle rules → Add rule
- Add a rule for SQL files:
  - Name: `delete-old-backups`
  - Prefix: `backups/`
  - Action: Delete object
  - Days after creation: 30
- Add a second rule for archives:
  - Name: `delete-old-archives`
  - Prefix: `archives/`
  - Action: Delete object
  - Days after creation: 30
This keeps a rolling 30-day window of daily backups — more than four times the 7 days of daily backups included with the Supabase Pro plan — with no manual cleanup required.
Part 9: Monthly Maintenance Checklist
Run through this checklist once a month to keep the system healthy:
- Verify recent backup files exist in the R2 bucket under `backups/`
- Check Make.com Scenario 1 and Scenario 2 run history for failed executions
- Check GitHub Actions run history for any failed backup jobs
- Confirm your GitHub PAT has not expired: GitHub → Settings → Developer settings → Fine-grained tokens
- Confirm your R2 API token has not expired: Cloudflare R2 → Manage R2 API Tokens
- Every 3 months: perform a full test restore on a spare Supabase project using the Part 7 procedure above
Known Issues and How They Were Fixed
During the build of this system, four common errors were encountered and resolved. If you hit any of these, here is exactly what to do:
| Error | Root Cause | Fix |
|---|---|---|
| `Network is unreachable` on `pg_dump` | GitHub runners are IPv4-only; the Supabase free tier Direct connection resolves to IPv6 | Use the Session Pooler URL, not the Direct connection |
| `server version mismatch` (17 vs 16) | Ubuntu runner defaults to `pg_dump` v16; Supabase runs Postgres v17 | Export `/usr/lib/postgresql/17/bin` to `$GITHUB_PATH` as shown in Step 3.1 |
| R2 upload fails after 5 retries | R2 API token was set to Read Only instead of Read & Write | Recreate or edit the token with Object Read & Write permission |
| Wrong R2 endpoint error for EU bucket | EU buckets use `.eu.r2.cloudflarestorage.com`, not the default endpoint | Store the full regional URL as the `R2_ENDPOINT` secret |
Free Tier vs Pro Plan: When to Upgrade
The free tier backup workflow in this guide gives you 30-day rolling daily backup coverage at zero Supabase cost. The trade-off is that you own and maintain the infrastructure.
If your Supabase project is live with paying users, handling sensitive personal data, or approaching the 500 MB database limit, the Pro plan is worth the Supabase cost. The pro plan includes 7 days of daily backups managed by Supabase directly, optional point-in-time recovery, no project pausing, an 8 GB database size, and alerts in the Supabase dashboard for database health events. The pro plan starts at ~$25/month per project.
For a weekend project or early-stage startup, the free-tier approach in this guide is a solid backup strategy. For a production Supabase project handling real customers, upgrading to pro and enabling PITR removes the maintenance overhead entirely.
Frequently Asked Questions About Supabase Backup
Does Supabase back up my database on the free tier?
No. Supabase does not run automated backups for free tier projects. Daily backups are a feature of the Pro plan and above. If you are on the free tier and have not set up your own backup system, you have no recovery option if your data is lost or corrupted.
What is the difference between a logical backup and a physical backup?
A logical backup, which is what pg_dump produces, exports your database as SQL statements — CREATE TABLE, INSERT, CREATE POLICY, and so on. It is portable, human-readable, and can be restored to a fresh Supabase project. A physical backup copies the raw data files at the storage level. Supabase uses physical backups internally for its Pro plan daily backups and point-in-time recovery, but these are not accessible on the free tier.
What is point-in-time recovery and do I need it on the free tier?
Point-in-time recovery (PITR) lets you restore your database to any specific moment in time, not just to a daily snapshot. It protects against accidental data deletions that happen hours after your last backup. PITR is only available on the Supabase Pro and Team plans as a paid add-on. For most free tier projects, daily backups with 30-day retention provide adequate protection.
Why does pg_dump fail with “Network is unreachable”?
This happens because GitHub Actions runners only support IPv4 outbound connections, and the Supabase free tier Direct connection URL resolves to an IPv6 address. Use the Session Pooler URL from your Supabase dashboard instead. The Session Pooler host looks like aws-0-REGION.pooler.supabase.com and routes through IPv4 correctly.
Can I use the Supabase CLI instead of pg_dump?
You can, but it is not recommended for this architecture. Using the Supabase CLI on a GitHub Actions runner requires pulling a Docker image that is approximately 1.7 GB in size on every run. That adds minutes to each backup, burns through GitHub Actions free allowance faster, and can introduce IPv6 networking failures on GitHub-hosted runners. Direct pg_dump installs in seconds and produces identical output.
Does this backup include my Row Level Security policies?
Yes. The schema dump (02schema.sql) captures ENABLE ROW LEVEL SECURITY statements and all CREATE POLICY definitions for your tables. After a restore, your row level security configuration is fully in place. You can verify this in the Supabase dashboard under Authentication → Policies after running the restore.
Can I pause backups without deleting the workflow?
Yes. Set the BACKUP_ENABLED repository variable to false under Settings → Secrets and variables → Actions → Variables. This stops all backup runs without any code changes. Set it back to true to resume.
How do I restore my database from a backup?
Download the three SQL files from Cloudflare R2 and restore them in this exact order using psql against a new Supabase project: 01roles.sql first, 02schema.sql second, 03data.sql last. Never restore over a live database. Full instructions are in Part 7 of this guide.
Should I upgrade to the Supabase Pro plan?
If your project is in early development or handling non-critical data, this free-tier backup strategy works well at zero cost. If your project has paying users, handles sensitive data, or is growing toward the 500 MB database limit, the Pro plan removes all the maintenance overhead and adds PITR. The pro plan starts at ~$25/month and includes managed backups and no project pausing.
Final Thoughts
The Supabase free tier is a genuinely capable platform for building real applications, and the fact that it runs on standard Postgres means a team that backs up Supabase every day can do so using proven tools like pg_dump and psql — no proprietary backup formats, no vendor lock-in. This complete guide to Supabase backup proves that free tier projects can have the same backup discipline as a paid plan, at zero cost.
This guide was built and tested by the team at AI Agency Plus, a UK-based AI automation agency. If you need help setting up automated backup workflows, Supabase database architecture, or production-grade automation systems, get in touch with our team.
