March 24, 2026 · 10 min read
GitHub acts as the hub. A merge triggers Vercel and Render independently via webhooks. Each layer deploys on its own terms.
Vercel and Render deploy and go live independently, each typically within 2–4 minutes.
After covering what CI/CD is and why it matters, it's time for the actual setup: how GitHub, Vercel, and Render work together to take a commit from your laptop to a live deployment automatically.
The previous post explained what CI/CD is and why it matters even for a solo side project: every manual deploy is a chance to forget a step, and automation removes that risk entirely.
This post makes it concrete. Here is the actual setup running behind this site: GitHub as the code repository, Vercel deploying the Next.js frontend, and Render deploying the Wagtail backend. No elaborate configuration, no dedicated CI server: just three tools that connect to each other and do the right thing when you push code.
The goal isn't to walk through every click in every dashboard. It's to explain the shape of the setup: what each tool does, why it fits its role, and what actually happens between a git push and a live deployment.
Before going through each piece, it helps to see how they fit together.
GitHub is the central hub. It holds the code and acts as the trigger point for everything else. When you push a commit, GitHub notifies the other tools via webhooks, essentially a tap on the shoulder saying "something changed, go check."
Vercel listens to GitHub and handles the Next.js frontend. It was built specifically for this kind of framework, so the integration is seamless: connect a repository, and Vercel takes care of the rest.
Render listens to GitHub and handles the Wagtail backend. It's a general-purpose hosting platform that works well for Python applications, and it supports the same webhook-driven deploy model as Vercel.
The two deployments are independent. A change to the frontend triggers a Vercel deploy but not a Render one, and vice versa. This is one of the practical benefits of the decoupled architecture covered earlier in this series: each layer can be updated, rolled back, and scaled on its own terms.
GitHub does two things in this setup: it stores the code, and it triggers deployments.
The repository structure for a decoupled application mirrors the architecture itself: one repository for the Next.js frontend, one for the Wagtail backend. Keeping them separate means each codebase has its own history, its own pull requests, and its own deployment pipeline. A change to the frontend has no footprint in the backend repository, and vice versa. This separation also makes it easier to manage access, dependencies, and deployment settings independently for each layer.
The key concept is branch protection. On main, the rule is simple: nothing lands there without passing the automated checks first. You work on a feature branch, open a pull request, the checks run, and only if they pass can the branch be merged. This is the CI gate in practice: not a separate tool, just a setting in GitHub that enforces the rule.
When a merge happens, GitHub fires a webhook to each connected platform. That webhook is what starts the deployment on Vercel and Render. The whole thing is automatic and happens within seconds of the merge.
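Under the hood, a webhook is just an HTTP POST with a signed payload. Here is a minimal sketch of how a receiving platform might verify that a delivery really came from GitHub, using the X-Hub-Signature-256 header; the secret and payload below are invented for illustration, and in practice Vercel and Render handle this for you:

```python
import hashlib
import hmac


def verify_github_signature(payload: bytes, secret: bytes, signature_header: str) -> bool:
    """Check GitHub's X-Hub-Signature-256 header against the request body."""
    expected = "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking timing information
    return hmac.compare_digest(expected, signature_header)


# Hypothetical delivery: GitHub signs the JSON body with a shared secret
secret = b"webhook-secret"
payload = b'{"ref": "refs/heads/main", "repository": {"name": "frontend"}}'
header = "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()

print(verify_github_signature(payload, secret, header))  # prints True
```

A tampered body or a wrong secret produces a different digest, so the check fails and the deploy is never started.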
Vercel and Next.js come from the same company, which means the integration is as close to zero-configuration as it gets.
Connecting a repository takes a few minutes in the Vercel dashboard: select the GitHub repository, confirm the framework is Next.js, and set the environment variables the frontend needs: mainly the Wagtail API URL, any public keys, and so on. Vercel detects the framework automatically and knows how to build it.
From that point on, every push to main triggers a full build and deployment. Vercel pulls the latest code, runs npm run build, and if the build succeeds, the new version goes live. If it fails, the previous version stays live and you get notified. There is no moment where the site is broken and unreachable: the old version keeps serving traffic until the new one is confirmed healthy.
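That "old version keeps serving until the new one is healthy" behavior can be pictured as an atomic pointer swap. The toy model below is my own illustration of the idea, not Vercel's actual mechanism, and the version names and build callable are invented:

```python
class Deployer:
    """Toy model of an atomic deploy: traffic only moves on build success."""

    def __init__(self, live_version: str):
        self.live_version = live_version

    def deploy(self, new_version: str, build) -> str:
        try:
            build()  # e.g. the platform running the framework's build step
        except Exception:
            # Build failed: the previous version keeps serving traffic
            return self.live_version
        self.live_version = new_version  # swap only after a successful build
        return self.live_version


def broken_build():
    raise RuntimeError("build error")


d = Deployer("v1")
print(d.deploy("v2", lambda: None))   # build succeeds, prints "v2"
print(d.deploy("v3", broken_build))   # build fails, still prints "v2"
```

The key property is that the swap happens after the build, never during it, so there is no window where a half-built version receives traffic.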
One feature worth knowing about is preview deployments. Every pull request automatically gets its own temporary URL and a live preview of that branch, deployed in isolation. This means you can review a change in a real environment before it ever touches production. For a solo project it feels like overkill, but it is genuinely useful for catching layout issues or content rendering problems that only appear in a real build.
Render plays the same role for the backend that Vercel plays for the frontend, though with a few more moving parts because a Django application is more stateful than a Next.js build.
The connection starts the same way: link the GitHub repository in the Render dashboard, select the branch, and configure the service. For Wagtail, this means setting the runtime to Python, specifying the start command (gunicorn or equivalent), and providing the environment variables: mainly the database connection string, the secret key, allowed hosts, and the rest.
The more important consideration is what happens during a deploy beyond just the code. A Wagtail application typically needs static files collected and database migrations applied when the code changes. In this setup, both steps are handled by a startup script that runs every time the container starts: collecting static files first, applying migrations second, then handing off to Gunicorn to start serving traffic.
This sequencing matters. Static files and migrations are in place before the application accepts any requests, which means the new code and the database schema are always in sync by the time users can reach the site. The whole sequence lives in a single script committed to the repository, so it is versioned alongside the application code and behaves the same way on every deploy.
The database itself lives separately, in my case a managed PostgreSQL instance, also on Render. It persists across deploys and is not part of the deployment pipeline. Media files (images uploaded through the Wagtail admin) similarly live in external storage, separate from the application container. This separation is what makes the deploy safe: only the application code changes, everything else stays stable. For the moment I'm using Cloudinary's free tier to store the few images uploaded so far, and it has been reliable.
In this setup, a migration failure logs a warning but still allows the application to start. This is a deliberate tradeoff for a personal project on a free Render instance, where a hard stop on failure could cause a timeout. For a low-traffic blog the risk is acceptable, but in a production environment with real users, you would want a failed migration to stop the deployment entirely.
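To make the sequencing concrete, here is a sketch of what such an entrypoint could look like in Python. The module path config.wsgi, the port, and the exact command flags are assumptions for illustration; the real script behind this site may differ:

```python
import os
import subprocess
import sys

# Deploy steps, in the order they must run before traffic is accepted
COLLECTSTATIC = ["python", "manage.py", "collectstatic", "--noinput"]
MIGRATE = ["python", "manage.py", "migrate", "--noinput"]
GUNICORN = ["gunicorn", "config.wsgi:application", "--bind", "0.0.0.0:8000"]


def startup_commands() -> list[list[str]]:
    """Static files first, migrations second, then the app server."""
    return [COLLECTSTATIC, MIGRATE, GUNICORN]


def main() -> None:
    # Run the preparation steps; the app server is started last
    for cmd in startup_commands()[:-1]:
        try:
            subprocess.run(cmd, check=True)
        except subprocess.CalledProcessError as exc:
            # Tradeoff for a free instance: log a warning and keep starting.
            # A production setup should sys.exit(1) here instead.
            print(f"warning: {' '.join(cmd)} failed ({exc})", file=sys.stderr)
    # Replace this process with gunicorn so it receives signals directly
    os.execvp(GUNICORN[0], GUNICORN)
```

Because the step order is data, not scattered shell lines, the guarantee that migrations run before the first request is easy to see and hard to break by accident.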
With all three tools connected, a typical change plays out like this.
You finish a change on a feature branch and push it to GitHub. GitHub shows the branch and runs any checks configured: at minimum a build check, more if you have tests. If the checks pass, you open a pull request and merge it into main.
Because the frontend and backend live in separate repositories, each merge is independent. A frontend change merged in the Next.js repository fires a webhook to Vercel. A backend change merged in the Wagtail repository fires a webhook to Render. The two platforms only hear about the repositories they are connected to.
On the Vercel side, the build takes a minute or two. If it succeeds, the new frontend is deployed globally to Vercel's edge network. On the Render side, the startup script runs, collecting static files and applying migrations, then Gunicorn starts up and the new backend begins serving traffic. If anything fails (a build error, a failed migration, a missing environment variable), the platform stops the deployment and keeps the previous version running. You get a notification with the error details. The site never goes down because of a bad deploy.
The whole sequence, from merge to live, typically takes two to four minutes. For a change that used to mean opening a terminal, running commands in the right order, and hoping nothing was forgotten, that is a meaningful improvement.
Think of the three tools as three people with clearly defined roles in a small publishing house.
GitHub is the editor's desk. The single place where all approved copy lands before anything else happens. Nothing goes to print until it passes through here.
Vercel is the print-on-demand service for the front-facing catalogue. The moment the editor's desk approves a change, the catalogue reprints automatically. If there's a problem with the new version, the previous catalogue stays in circulation.
Render is the warehouse that fulfils orders from the back. It holds the inventory and the records. When a change arrives, it updates the records first (migrations), then starts fulfilling from the new inventory. If the records update fails, nothing ships until the problem is fixed.
Each operation is independent but coordinated by the same trigger: a change approved at the editor's desk.
This is the setup as it runs today: two repositories, three connected tools, and a deployment process that requires no manual steps beyond writing and merging the code. It will evolve: a more complete test suite, stricter branch protection rules, more automated checks before anything reaches production. But the foundation is solid and the principle stays the same.
Looking back at the series, this post closes a loop that started with the very first article. The architecture decision to decouple frontend and backend was the beginning. Every post since then (content modeling, rendering, performance, observability, and deployment) has been about making that architecture production-ready. CI/CD is the last piece: the mechanism that lets you keep improving the system without manual deployments getting in the way.
The series of posts on this topic may not be finished, but this feels like a natural resting point. The stack is built, deployed, monitored, and now automated. What comes next will be refinements on top of a foundation that works: additions driven by new lessons, failures, or improvements over the baseline created so far.
Questions about CI/CD or this specific setup? Reach out via the contact form or connect on LinkedIn!