If you’ve built a side project, an MVP, or a web app on Supabase’s free plan and come back a week later to find everything unresponsive, this guide is for you. Supabase pauses free tier projects after 7 days of inactivity, and while it’s a reasonable policy for a platform giving away a full Postgres database at no cost, it’s a genuine pain point for developers who aren’t generating daily traffic yet.
The good news: you don’t need to upgrade to prevent it. In this guide, I’ll walk you through a lightweight n8n automation that keeps your Supabase project active with daily database pings: no cron job, no GitHub Actions workflow, no infrastructure overhead, and no billing surprises.
Why Supabase Pauses Free Tier Projects After 7 Days of Inactivity
Supabase is built on Postgres and provides a surprisingly complete backend stack on its free plan: you get authentication via Supabase Auth, row level security (RLS), API access, storage, and a compute instance to run it all on. To keep this sustainable, Supabase pauses projects that go inactive for 7 consecutive days.
When a pause happens, your compute instance is essentially shut down. The database itself isn’t deleted (your schema, data, and backup history are preserved), but the project takes around 30 seconds to wake up when it receives its next request. For a live app, that cold-start latency is unacceptable. For a dev environment or early-stage deployment, it’s a recurring annoyance.
The 7-day inactivity window is tracked against your database activity, not API requests or UI visits. That means even if you’re regularly checking the Supabase dashboard, if no actual queries are hitting the database, the timer keeps ticking. Understanding this distinction is the key to fixing the problem efficiently.
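To make that distinction concrete, here’s a sketch of what a single “ping” amounts to at the HTTP level. The helper below is hypothetical (the name buildPingRequest and the placeholder URL and key are illustrative), but the `/rest/v1/<table>` path is the shape of Supabase’s auto-generated REST API, and an insert sent through it registers as database activity:

```javascript
// Hypothetical helper: assembles the REST insert that counts as database
// activity. The URL and key below are placeholders, not real credentials.
function buildPingRequest(supabaseUrl, serviceRoleKey) {
  return {
    url: `${supabaseUrl}/rest/v1/ping`, // PostgREST endpoint for the ping table
    method: "POST",
    headers: {
      apikey: serviceRoleKey,
      Authorization: `Bearer ${serviceRoleKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ created_at: new Date().toISOString() }),
  };
}

const req = buildPingRequest("https://example-project.supabase.co", "placeholder-key");
console.log(req.url); // https://example-project.supabase.co/rest/v1/ping
```

Dashboard visits never produce a request like this, which is why they don’t reset the timer.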
How Supabase Free Tier Pricing Works
Before diving into the fix, it’s worth understanding how Supabase pricing works across its plans so you can make an informed decision about whether staying on the free plan is right for you.
The free plan gives you:
- 2 active Supabase projects
- 500 MB database size
- 5 GB egress per month
- 1 GB disk storage
- Up to 50,000 monthly active users for authentication
- MFA support and basic Supabase Auth features
- Community support (no priority ticket channels)
- A shared compute instance (no dedicated compute add-on)
The Pro plan (currently $25/month) removes the project pause behaviour entirely, increases your database size limit, unlocks point-in-time recovery, gives you dedicated compute options and per-user pricing flexibility, and adds pro features like automated Supabase project backups and branching. If you exceed free plan quotas on egress, database size, or API requests, you’ll either hit a hard quota or be prompted to upgrade. Supabase is transparent about overages and billing, and you can configure spend caps to avoid unexpected charges.
For most early-stage projects and MVPs, the free plan is more than enough as long as you solve the inactivity problem. The enterprise plan and Pro plan also offer add-ons like Azure deployment regions, but that’s well outside the scope of this guide.
For most indie developers just getting started, upgrading to Pro simply to avoid pausing is overkill. The automation below solves it for free.
The Fix: A Daily Ping Using n8n (No Cron Job Required)
Rather than setting up a cron job on a server or configuring a GitHub Actions workflow (which adds environment variables, secrets management, and CI minutes to worry about), we’ll use n8n, an open-source workflow automation platform. n8n runs on a self-hosted instance or its own free cloud tier, and it’s purpose-built for exactly this kind of scheduled automation.
The workflow we’re building will:
- Trigger once daily on a set schedule
- Generate 25 ping items to loop through
- Delete old rows from a dedicated ping table (cleanup runs first)
- Wait a random 2–10 seconds between each insert to simulate natural database activity
- Insert a row with the current timestamp into the ping table on each iteration
- Loop back until all 25 pings are complete
The randomised wait is a deliberate optimization: it avoids a mechanical burst of identical writes and instead spreads activity across a few minutes, which looks more like genuine database usage. The cleanup step ensures old rows are deleted on each run, keeping your database size stable without any manual intervention.
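The loop logic can be sketched in plain JavaScript to check the timing maths. This is a simulation of the node sequence, not the workflow itself (in n8n, the Wait node actually sleeps):

```javascript
// Simulates one daily run: 25 pings, each preceded by a random 2–10 s gap.
const pings = [];
for (let i = 0; i < 25; i++) {
  pings.push({ index: i });
}

let totalWaitSeconds = 0;
for (const ping of pings) {
  // Same formula as the workflow's Random Wait Duration node
  const waitSeconds = Math.floor(Math.random() * 9) + 2; // integer from 2 to 10
  totalWaitSeconds += waitSeconds;
  // ...in the real workflow, a row is inserted here after the wait...
}

// One run spreads across 50–250 seconds of waits, plus insert round-trips
console.log(pings.length, totalWaitSeconds);
```

So a full run takes somewhere between about one and four minutes, which is exactly the spread-out pattern described above.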
The Free Plan vs Pro Plan – What Actually Differs for This Use Case
It’s worth pausing here to validate whether this approach is right for your situation. Here’s the practical split:
Stay on the free plan if:
- Your project is in early development or an MVP stage
- You don’t need point-in-time recovery or dedicated compute
- Your database size stays well under 500 MB
- You don’t need to exceed free-tier rate limits on auth or API requests
- You’re okay self-managing uptime via automation
Upgrade to the Pro plan if:
- Your app has real users and can’t tolerate any cold-start risk
- You need RLS-heavy workloads with high query volume
- You rely on Supabase Auth at scale (beyond 50,000 MAU)
- You need cache-level performance guarantees or a dedicated compute instance
- You want priority support access (tickets via the Pro support channel)
For everything else (side projects, internal tools, personal backends, developer portfolios), the free plan with this workflow is perfectly viable.
Before Getting Started
You’ll need:
- A free Supabase account with at least one active Supabase project
- An n8n instance (self-hosted or n8n Cloud free tier)
- Your Supabase Project URL and Service Role Key (found in the Supabase dashboard under Project Settings → API)
- About 10 minutes
No deep technical knowledge is required beyond copying and pasting.
Step 1: Create the ping Table in Your Supabase Database
The workflow needs a table to write to. We’ll create a dedicated ping table; it’s intentionally minimal, so it never meaningfully contributes to your database size or disk usage.
Open your Supabase dashboard, navigate to your project, and go to SQL Editor → New Query. Paste the following and run it:
```sql
CREATE TABLE IF NOT EXISTS public.ping (
  id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  created_at timestamptz DEFAULT now()
);
```
This creates a two-column table: an auto-incrementing id and a created_at timestamp. Nothing sensitive is stored here – no auth data, no user records, no schema logic. The IF NOT EXISTS clause means this query is safe to run multiple times without errors.
This only needs to be done once.
💡 Row Level Security note: For this particular table you can leave RLS disabled, since it’s only ever written to from your private n8n backend via the service role key. No public API requests or client-side queries will touch it.
Step 2: Connect Supabase to n8n
In your n8n instance, go to Credentials → Add Credential → Supabase. You’ll need two values from your Supabase project:
- Project URL under Project Settings → API → Project URL
- Service Role Key under Project Settings → API → Project API Keys → service_role
The service role key bypasses row level security and should never be exposed in client-side code or committed to a GitHub repository. Inside n8n it’s stored as an encrypted credential and only used server-side; that’s exactly the right use case for it.
Name the credential something clear like My Supabase DB; you’ll reference it inside two nodes of the imported workflow.
Step 3: Import the Workflow JSON
In n8n, go to Workflows → Import from JSON and paste the full workflow below. Once imported, you’ll configure your credentials and validate that the node connections are correct.
```json
{
  "name": "Supabase KeepAlive Ping",
  "nodes": [
    { "parameters": { "rule": { "interval": [ { "triggerAtHour": 10, "triggerAtMinute": 7 } ] } }, "type": "n8n-nodes-base.scheduleTrigger", "typeVersion": 1.3, "position": [32, 352], "id": "7033369a-d408-4473-9d37-74688340f6a2", "name": "Schedule Trigger" },
    { "parameters": { "jsCode": "const items = [];\nfor (let i = 0; i < 25; i++) {\n items.push({ json: { index: i } });\n}\nreturn items;" }, "type": "n8n-nodes-base.code", "typeVersion": 2, "position": [256, 352], "id": "740720e4-5526-4d9c-a5a7-99c7cb163cef", "name": "Generate 25 Pings" },
    { "parameters": { "options": {} }, "type": "n8n-nodes-base.splitInBatches", "typeVersion": 3, "position": [480, 352], "id": "e162e000-1369-4628-9d17-81ca87da140d", "name": "Loop Over Items" },
    { "parameters": { "operation": "delete", "tableId": "ping", "filters": { "conditions": [ { "keyName": "created_at", "condition": "lt", "keyValue": "={{ $now.minus({days: 7}).toISO() }}" } ] } }, "type": "n8n-nodes-base.supabase", "typeVersion": 1, "position": [704, 160], "id": "c54d8f33-62c3-4a15-b4bd-8ad839329d85", "name": "Cleanup Old Pings", "credentials": { "supabaseApi": { "id": "YOUR_SUPABASE_CREDENTIAL_ID", "name": "YOUR_SUPABASE_CREDENTIAL_NAME" } } },
    { "parameters": { "jsCode": "const waitSeconds = Math.floor(Math.random() * 9) + 2;\nreturn [{ json: { ...items[0].json, waitSeconds } }];" }, "type": "n8n-nodes-base.code", "typeVersion": 2, "position": [704, 352], "id": "f108f884-6822-457c-a527-e683f0d1d2d5", "name": "Random Wait Duration" },
    { "parameters": { "amount": "={{ $json.waitSeconds }}" }, "type": "n8n-nodes-base.wait", "typeVersion": 1.1, "position": [928, 352], "id": "e4c58e4b-a6ed-44c9-9e81-357ec13b6a4c", "name": "Wait - Random Time", "webhookId": "YOUR_WEBHOOK_ID_1" },
    { "parameters": { "tableId": "ping", "fieldsUi": { "fieldValues": [ { "fieldId": "created_at", "fieldValue": "={{ $now }}" } ] } }, "type": "n8n-nodes-base.supabase", "typeVersion": 1, "position": [1152, 352], "id": "5c8ff375-e08f-45c5-8fad-70d91712d8df", "name": "Supabase Insert", "credentials": { "supabaseApi": { "id": "YOUR_SUPABASE_CREDENTIAL_ID", "name": "YOUR_SUPABASE_CREDENTIAL_NAME" } } },
    { "parameters": { "amount": 1 }, "type": "n8n-nodes-base.wait", "typeVersion": 1.1, "position": [1376, 432], "id": "2405ad20-a350-49a4-b412-154e005db3be", "name": "Wait 1sec", "webhookId": "YOUR_WEBHOOK_ID_2" }
  ],
  "connections": {
    "Schedule Trigger": { "main": [[{ "node": "Generate 25 Pings", "type": "main", "index": 0 }]] },
    "Generate 25 Pings": { "main": [[{ "node": "Loop Over Items", "type": "main", "index": 0 }]] },
    "Loop Over Items": { "main": [ [{ "node": "Cleanup Old Pings", "type": "main", "index": 0 }], [{ "node": "Random Wait Duration", "type": "main", "index": 0 }] ] },
    "Random Wait Duration": { "main": [[{ "node": "Wait - Random Time", "type": "main", "index": 0 }]] },
    "Wait - Random Time": { "main": [[{ "node": "Supabase Insert", "type": "main", "index": 0 }]] },
    "Supabase Insert": { "main": [[{ "node": "Wait 1sec", "type": "main", "index": 0 }]] },
    "Wait 1sec": { "main": [[{ "node": "Loop Over Items", "type": "main", "index": 0 }]] }
  },
  "active": true,
  "settings": { "executionOrder": "v1", "binaryMode": "separate" },
  "versionId": "YOUR_WORKFLOW_VERSION_ID",
  "meta": { "templateCredsSetupCompleted": true, "instanceId": "YOUR_INSTANCE_ID" },
  "id": "YOUR_WORKFLOW_ID",
  "tags": []
}
```

Step 4: Reconnect Your Credentials and Configure the Nodes
After importing, n8n will flag the two Supabase nodes because the YOUR_SUPABASE_CREDENTIAL_ID placeholders don’t match any credential in your account. This is expected.
Click into Cleanup Old Pings and Supabase Insert one at a time. In each node, open the credentials dropdown and select the Supabase account you created in Step 2. Both nodes already have tableId set to ping and the correct filters pre-configured; once credentials are connected, they’re ready without any further changes.
Step 5: Configure the Schedule
The default trigger fires at 10:07 AM UTC every day. To change the time, open the Schedule Trigger node and update the hour and minute values:
```json
"rule": {
  "interval": [
    {
      "triggerAtHour": 10,   // ← 0–23, UTC
      "triggerAtMinute": 7   // ← 0–59
    }
  ]
}
```

Running it more frequently (say, every 6 hours) gives you extra margin if you’re worried about n8n downtime. Just be mindful of your n8n execution quota if you’re on a free cloud plan.
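Since the trigger hour here is in UTC, a quick offset calculation tells you what value to set for a given local time. The numbers below are an example only (they assume US Eastern standard time, UTC-5):

```javascript
// Convert a desired local trigger time to the UTC hour the schedule uses.
const localHour = 9;       // want the ping at 9:00 AM local time
const utcOffsetHours = -5; // assumption: US Eastern, standard time (no DST)
const triggerAtHour = ((localHour - utcOffsetHours) % 24 + 24) % 24;
console.log(triggerAtHour); // 14 → set "triggerAtHour": 14
```

The double modulo keeps the result in 0–23 even for offsets that would otherwise push the hour negative or past midnight.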
Step 6: Test, Validate, and Go Live
Click Test Workflow to run it manually before activating the schedule. Watch the execution panel: each node should turn green in sequence as the loop works through all 25 iterations.
Once the test run completes, go to your Supabase dashboard, open Table Editor → ping, and confirm that rows have appeared with recent created_at timestamps. If you see them, the entire pipeline (n8n scheduling, credential authentication, the API connection to your Postgres database, and the insert query) is working correctly.
Toggle the workflow to Active. From this point, it runs on schedule every day with no further input from you.
After the first live run, you’ll see ~25 rows in the ping table. Each subsequent day’s run will delete rows older than 7 days before inserting new ones, keeping the table capped at around 175 rows, well under 1 MB of disk usage.
What’s Happening Under the Hood
The two Code nodes carry all the logic. Here’s exactly what they do:
Generate 25 Pings creates a simple array of 25 objects that the Loop node will iterate through one at a time:
```javascript
const items = [];
for (let i = 0; i < 25; i++) {
  items.push({ json: { index: i } });
}
return items;
```

Random Wait Duration picks a random integer between 2 and 10, which gets passed as the wait duration before the next db insert:
```javascript
const waitSeconds = Math.floor(Math.random() * 9) + 2;
// Produces a random integer from 2 to 10
return [{ json: { ...items[0].json, waitSeconds } }];
```

This randomisation is the optimization that makes the activity pattern feel organic rather than mechanical. Without it, 25 inserts firing back-to-back in under a second would look nothing like genuine application traffic. With it, the total execution window spans several minutes of spread-out writes.
Is There a GitHub Actions Workflow Alternative?
Yes. A GitHub Actions workflow is a common alternative that some developers prefer because it ties the automation directly to their repository. You’d configure a scheduled YAML workflow, store your Supabase URL and service role key as encrypted repository secrets (exposed to the job as environment variables), and run a simple curl or psql command to insert a row.
The tradeoff: GitHub Actions free tier has a monthly minutes quota, and managing environment variables and secrets in a repository adds overhead. For a task this simple, n8n gives you a visual UI, built-in credential management, execution history, and error logging all without touching a YAML file. But if you’re already deep in a GitHub-centric workflow, the Actions route is a valid path.
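If you did go the Actions route, the scheduled job would boil down to a script along these lines. This is a hedged sketch, not a tested workflow: it assumes the secrets are exposed as SUPABASE_URL and SUPABASE_SERVICE_ROLE_KEY environment variables, and it targets the same `/rest/v1/ping` endpoint the n8n nodes use:

```javascript
// Hypothetical keep-alive script for a scheduled GitHub Action (Node 18+,
// which ships fetch globally). The env var names are assumptions.
async function ping() {
  const res = await fetch(`${process.env.SUPABASE_URL}/rest/v1/ping`, {
    method: "POST",
    headers: {
      apikey: process.env.SUPABASE_SERVICE_ROLE_KEY,
      Authorization: `Bearer ${process.env.SUPABASE_SERVICE_ROLE_KEY}`,
      "Content-Type": "application/json",
      Prefer: "return=minimal", // skip echoing the inserted row back
    },
    body: JSON.stringify({ created_at: new Date().toISOString() }),
  });
  if (!res.ok) throw new Error(`Ping failed with status ${res.status}`);
}
```

A single insert per run is enough to reset the inactivity timer, though you lose the spread-out activity pattern the n8n loop provides.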
Frequently Asked Questions
Will this actually prevent my Supabase project from pausing?
Yes. Supabase’s inactivity timer resets on any database activity, including inserts to a ping table. As long as this workflow runs at least once within the 7-day window, the pause timer never reaches zero. Daily runs give you a healthy buffer against any missed executions.
Does this count toward my free plan storage limits?
The impact is negligible. 25 rows per day, with a rolling delete that purges anything older than 7 days, means the table never exceeds around 175 rows. With each row containing just an integer ID and a timestamp, you’re looking at a few KB of database size, far below any threshold that would affect your free tier quota.
What if my n8n instance goes offline for a day?
Your Supabase free tier project stays safe as long as the gap in activity is under 7 days. For extra resilience, host n8n on a VPS with a Docker restart policy, or use n8n Cloud, which maintains uptime independently. Avoid unexpected downtime on self-hosted instances by setting up process monitoring.
Is there a rate limit on Supabase inserts I need to worry about?
The free plan enforces rate limits on incoming API requests, but 25 small inserts spread across a few minutes represents minimal load, far below anything that would trigger a limit. If you’re also running other background jobs that generate high API request volume, consolidate them to stay within your daily quota.
Can I use this for multiple projects?
Absolutely. Duplicate the workflow in n8n and point each copy to a different Supabase credential. Each copy will independently keep its respective Supabase free project alive. You can run as many as you need from a single n8n instance.
Should I cache the ping table response for anything?
No, the table has no application use beyond generating database activity. There’s no need to cache its contents, expose it through your API, or reference it from your frontend. Think of it purely as a background maintenance mechanism, invisible to your actual application logic.
When should I upgrade to the Pro plan instead?
If your project has real production users who would be impacted by a 30-second cold start, or if you’re approaching the 500 MB database size limit, or if you need point-in-time recovery, it’s time to upgrade. Pro features like dedicated compute instances, branching, and priority support (tickets via the Pro support channel) are worth the cost at that stage. For MVPs and projects still in development, this automation is the right call.
Wrapping Up
Keeping a free Supabase project alive past the 7-day limit doesn’t require a paid upgrade, a complex deployment, or any ongoing manual work. A single n8n workflow (one ping table, 25 daily inserts, and an automatic cleanup) is all it takes.
Whether you’re building your first web app, testing a backend before committing to a pro plan, or just keeping a personal project warm between sprints, this setup runs silently in the background and removes one more thing from your mental overhead entirely.
