March 10, 2026 · 13 min read
You've implemented your Next.js frontend and Wagtail backend with structured logging. Now learn how to read what you collected, write queries that surface real problems, and set up uptime monitors and log-based alerts so BetterStack tells you when something goes wrong. Before your users do.
The first article explained why Next.js server and client logging need fundamentally different approaches. The second extended that visibility to your Wagtail backend using a queue-based logger that protects your request thread.
At this point you have logs flowing from three distinct sources into BetterStack:
Browser (@logtail/browser) - client-side events, form interactions, frontend errors
Next.js server (@logtail/node) - Server Components, API Routes, Server Actions
Wagtail (logtail-python) - API views, page model validation, database errors

But logs sitting in a dashboard are only half the story. The other half is knowing how to read them when something goes wrong, and better yet, having BetterStack tell you before you need to go looking.
This article covers two things: interpreting the structured logs you've already built, and setting up uptime monitors and log-based alerts so production issues surface automatically.
Before writing queries or configuring alerts, it helps to understand the shape of data flowing into BetterStack from your setup.
Every log entry from any of your three sources shares a common structure in BetterStack:
dt — Timestamp (ISO 8601, indexed for range queries)
level — info / warn / error
message — The string you passed to logger.info(), logger.error(), etc.
logger_name — Source module (__name__ in Python, auto-set in Node)
environment — "server" or "client" (from buildLogContext() in your sanitize.ts)
deployment — NODE_ENV or Django's equivalent

On top of that, every structured extra={} dict or meta object you passed becomes a top-level field. So a log like this from your Wagtail API view:
logger.info("Pages API listing", extra={
    "slug": request.GET.get("slug"),
    "type": request.GET.get("type"),
    "ip": request.META.get("REMOTE_ADDR"),
})

arrives in BetterStack as a flat JSON record with slug, type, and ip as first-class queryable fields alongside level, message, and logger_name. This is why the extra={} convention in the Wagtail article matters so much: you're not just annotating a string, you're building a structured record you can filter and aggregate later.
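You can verify locally that extra={} keys really do ride along as attributes on the LogRecord, which is what the Logtail handler serialises. A minimal sketch (CaptureHandler is a stand-in for illustration, not Logtail code):

```python
import logging

captured = []

class CaptureHandler(logging.Handler):
    """Stand-in for the Logtail handler: just records what it receives."""
    def emit(self, record):
        captured.append(record)

logger = logging.getLogger("core.views")
logger.addHandler(CaptureHandler())
logger.setLevel(logging.INFO)

logger.info("Pages API listing", extra={"slug": "my-post", "ip": "203.0.113.7"})

# Each extra key becomes an attribute on the record, ready to serialise flat
record = captured[0]
print(record.levelname, record.getMessage(), record.slug, record.ip)
```

This is the mechanism behind the flat record: nothing is nested, so every key is directly filterable.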
Because you're logging from three different environments with consistent field names, BetterStack lets you query across all of them simultaneously or filter to a single source. Here's what each layer naturally tells you:
Browser logs answer: What was the user doing? Form submissions, navigation events, client-side errors, interactions that preceded a crash.
Next.js server logs answer: What did the application do? Which pages rendered, which API routes were called, which Server Actions executed, where server errors appeared.
Wagtail logs answer: What did the data layer do? Which API endpoints were hit, what parameters were passed, whether database operations succeeded, what content editors changed in the admin.
Combined, a single user-visible error often has a trail across all three layers. A form submission error in the browser was preceded by a failed Next.js API route, which received a malformed response from Wagtail, which hit a database timeout. Each layer logged its piece. Your job is to follow the chain.
BetterStack uses a SQL-like query language called Logs Query Language (LQL). You write it in the search bar above the log stream. Here are the patterns you'll actually use.
-- All errors across every source
level = "error"
-- Only Wagtail errors
level = "error" AND logger_name LIKE "core.%"
-- Only Next.js server errors
level = "error" AND environment = "server" AND logger_name NOT LIKE "%.py"
-- Only client errors
level = "error" AND environment = "client"

The logger_name LIKE "core.%" pattern is why the __name__ convention from the Wagtail article pays off. Every module in your Django project has a logger named after its Python path: core.views, blog.models, core.exception_handlers. You can then filter to any layer of your backend without adding extra fields.
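The LIKE "core.%" filters work because of how Python's logging module names loggers. A quick illustration of the __name__ convention (the module names are the ones used in the article):

```python
import logging

# Inside core/views.py, logging.getLogger(__name__) yields "core.views";
# inside blog/models.py it yields "blog.models", and so on.
loggers = [logging.getLogger(n) for n in ("core.views", "blog.models", "core.exception_handlers")]

# LQL's LIKE "core.%" corresponds to a simple dotted-prefix match:
core_layer = [lg.name for lg in loggers if lg.name.startswith("core.")]
print(core_layer)
```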
When a user reports an error at a specific time, this is your starting point:
-- Find everything around that timestamp (adjust the range)
dt >= "2026-03-01T14:30:00Z" AND dt <= "2026-03-01T14:35:00Z"
-- Narrow to errors in that window
dt >= "2026-03-01T14:30:00Z" AND dt <= "2026-03-01T14:35:00Z" AND level = "error"
-- Find the slug they were looking at
slug = "my-failing-post" AND level = "error"

Once you find the Wagtail log showing the database error, note the timestamp, then switch to the Next.js source and look at the same window. You'll see the API route that received the bad response. Adjust the timestamp forward slightly (network latency) and you'll find the client log showing what the user saw.
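Building those timestamp windows by hand is error-prone. A few lines of Python generate the query for you (the five-minute padding and the example timestamp are arbitrary choices):

```python
from datetime import datetime, timedelta, timezone

reported = datetime(2026, 3, 1, 14, 32, tzinfo=timezone.utc)  # when the user hit the error
pad = timedelta(minutes=5)

fmt = "%Y-%m-%dT%H:%M:%SZ"
start, end = (reported - pad).strftime(fmt), (reported + pad).strftime(fmt)

# Paste the result straight into the BetterStack search bar
print(f'dt >= "{start}" AND dt <= "{end}" AND level = "error"')
```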
Individual errors are easy to notice. The subtle problems are increases in warning rates or shifts in error distributions that don't look dramatic on their own.
-- Count errors per hour (paste into the "Chart" view)
level = "error"
-- Then group by: dt (1 hour intervals)
-- Find the most common error messages this week
level = "error"
-- Group by: message
-- Find which Wagtail endpoints are generating the most warnings
logger_name = "core.views" AND level = "warning"
-- Group by: message

The Chart tab in BetterStack visualises any filtered query over time. Set the time range to the last 7 days, filter to level = "error", and group by hour. A spike that matches a deployment time is almost always a regression. A gradual upward trend over days usually points to a data problem or external service degradation.
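The hourly grouping the Chart view performs is simple to replicate locally if you ever pull logs out as JSON. A sketch (the records are fabricated examples with the dt/level shape described earlier):

```python
from collections import Counter

# Fabricated records with the dt/level field shape described above
records = [
    {"dt": "2026-03-01T14:05:12Z", "level": "error"},
    {"dt": "2026-03-01T14:48:01Z", "level": "error"},
    {"dt": "2026-03-01T15:02:33Z", "level": "info"},
    {"dt": "2026-03-01T15:10:00Z", "level": "error"},
]

# Truncating an ISO 8601 timestamp to 13 chars keeps "YYYY-MM-DDTHH": an hour bucket
per_hour = Counter(r["dt"][:13] for r in records if r["level"] == "error")
for hour, count in sorted(per_hour.items()):
    print(hour, count)
```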
The Wagtail article added logging to clean() in page models:
logger_name = "blog.models" AND level = "warning"

This shows every time a content editor triggered a validation error in the Wagtail admin. The structured extra={} fields give you page_slug, image_id, and owner. This lets you see not just that validation failed, but which editor, on which page, and which asset caused it. Useful for noticing whether a particular image upload workflow is consistently problematic.
The most powerful use of cross-source querying is tracing a user-visible error back to its root cause:
-- 1. Start: client logged an error
environment = "client" AND level = "error" AND dt >= "2026-03-01T14:00:00Z"
-- 2. Note the timestamp. Switch to: did Next.js log anything at the same time?
environment = "server" AND level = "error" AND dt >= "2026-03-01T14:00:00Z"
-- 3. Was it a Wagtail API failure? Check for the slug that was requested
logger_name = "core.views" AND level = "error" AND slug = "the-failing-slug"

You're following the request as it moved through your stack. Each source logged a fragment; together they give you the full picture.
Interpreting logs tells you what happened after the fact. Uptime monitoring tells you the moment your service becomes unavailable, before users report it.
BetterStack Uptime makes periodic HTTP requests to URLs you configure, from multiple geographic locations. The interval is typically 30 seconds or a minute, or as long as 15 minutes for non-critical checks. If a request fails or takes too long to respond, BetterStack marks the monitor as down and sends you an alert. When the URL becomes reachable again, it sends an all-clear.
This is distinct from log-based alerts, which react to what your application logs. Uptime monitors detect outages from the outside, the same perspective your users have. That framing is the key principle behind choosing what to monitor: ping the URLs your users actually visit, not internal API endpoints or backend services directly.
If your homepage is broken, you want to know immediately. If Wagtail is down but the frontend is serving cached pages fine, that's a different severity. Monitoring the user-facing URLs captures the thing that matters most: can a visitor load your site?
In the BetterStack dashboard, navigate to Uptime → Monitors → New Monitor.
Homepage — the baseline. If this fails, everything is down:
URL: https://your-nextjs-domain.vercel.app/
Method: GET
Interval: 60 seconds
Threshold: 3 failed checks before alerting (avoids false positives on transient errors)
Expected status: 200

A content page that requires a live Wagtail response is the more valuable check. Your homepage might be statically cached, but a blog post or listing page that depends on fresh data from Wagtail will return a 500 or render a broken state if the backend is down. Pick a stable, permanent URL from your site:
URL: https://your_website_domain/blog/
Method: GET
Interval: 60 seconds
Threshold: 3 failed checks
Expected status: 200

This single check exercises the entire stack from the user's perspective: Next.js must be running, the Wagtail API must respond, and the page must render without error. If any layer fails, this monitor catches it without you needing to maintain separate checks for each backend service.
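The monitor's core behaviour (probe the URL, alert only after N consecutive failures) is easy to reason about. A rough local sketch of that logic, not BetterStack's implementation:

```python
from urllib.request import urlopen
from urllib.error import URLError

def check_once(url, timeout=10):
    """One probe: True if the URL answers 200 within the timeout."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        return False

def should_alert(results, threshold=3):
    """Alert only after `threshold` consecutive failed checks."""
    return len(results) >= threshold and not any(results[-threshold:])

# Simulated check history: True = up, False = down
history = [True, True, False, False, False]
print(should_alert(history))  # three consecutive failures -> alert
```

The threshold is what absorbs transient network blips: one failed probe changes nothing, a run of them pages you.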
A specific post or detail page: if a particular piece of content is critical (e.g. a high-traffic article), monitor it directly. Use a page you know will always exist:
URL: https://your_website_domain/blog/your-stable-post-slug/
Method: GET
Interval: 60 seconds
Expected status: 200

Avoid monitoring dynamically generated or paginated URLs (?page=2, search results, etc.) since these can return legitimate 404s if content changes, generating false alerts.
After creating monitors, configure where alerts go: Uptime → On-call → Alert destinations.
For a solo developer or small team, email is sufficient. For anything requiring fast response, connect Slack:
Destination type: Slack
Channel: #alerts (create a dedicated channel, not #general)
Notify on: Down, Recovery

The recovery notification matters as much as the down alert. Knowing your service came back up without you doing anything (e.g. Vercel redeployed, Render restarted a crashed worker) is different from knowing you need to take action.
Uptime monitors check availability from the outside, not correctness from the inside. A page that loads successfully but renders an empty blog listing (because Wagtail returned an empty result set from a bad query) will still return 200 and pass the monitor. For that kind of silent data failure, you need log-based alerts.
Log-based alerts fire when patterns in your log stream match conditions you define. They're the bridge between raw observability and automatic notification.
Think of log-based alerts as standing queries that run continuously. You define a condition, for example "more than 5 errors in the last 10 minutes from core.views" and BetterStack evaluates it on a rolling window. When the threshold is crossed, you get notified. When it drops back below, you get an all-clear.
This is different from uptime monitoring: uptime tells you if your service is reachable, log alerts tell you if it's misbehaving even while technically reachable.
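The evaluation model, a standing query over a rolling window, can be sketched in a few lines. This mirrors the settings used in the alerts below, not BetterStack's internals:

```python
from datetime import datetime, timedelta, timezone

def breaches(event_times, now, window_minutes=10, threshold=3):
    """True if at least `threshold` matching events fall inside the rolling window."""
    cutoff = now - timedelta(minutes=window_minutes)
    recent = [t for t in event_times if t >= cutoff]
    return len(recent) >= threshold

now = datetime(2026, 3, 1, 14, 30, tzinfo=timezone.utc)
# Errors 1, 4 and 7 minutes ago fall inside the 10-minute window; 25 minutes ago does not
errors = [now - timedelta(minutes=m) for m in (1, 4, 7, 25)]
print(breaches(errors, now))
```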
Navigate to Logs → Alerts → New Alert.
Alert 1: Wagtail database errors
Database errors are the highest-signal failure mode for a Wagtail backend. A single OperationalError might be transient; a cluster is a real problem.
Name: Wagtail database errors
Query: level = "error" AND logger_name LIKE "core.%" AND message LIKE "%database%"
Threshold: 3 occurrences
Window: 10 minutes
Alert after: 1 consecutive breach

This fires if your Wagtail API logs 3 or more database errors in any 10-minute window. It matches the logging pattern added to listing_view and detail_view in the previous article.
Alert 2: Next.js server error spike
A sudden increase in server errors often signals a bad deploy or an upstream dependency failure:
Name: Next.js server error spike
Query: level = "error" AND environment = "server"
Threshold: 10 occurrences
Window: 5 minutes
Alert after: 1 consecutive breach

Tune the threshold to your traffic. For low-traffic applications, even 3-4 server errors in 5 minutes is worth knowing about. For high-traffic ones, set it proportionally to your normal error rate.
Alert 3: Client error spike
Client errors are noisier than server errors (browser extensions, ad blockers, and user network issues all generate them), so set a higher threshold:
Name: Client error spike
Query: level = "error" AND environment = "client"
Threshold: 20 occurrences
Window: 10 minutes
Alert after: 2 consecutive breaches

The "2 consecutive breaches" setting filters out momentary noise. If the rate stays elevated for two consecutive evaluation windows, it's real.
Alert 4: Wagtail API returning no results
This catches a subtle failure mode: Wagtail is up and returning 200s, but the database has no data (e.g., a bad migration wiped a table). You added logging to listing_view; now use it:
Name: Pages API returning empty
Query: logger_name = "core.views" AND message = "Pages API listing" AND level = "warning"
Threshold: 5 occurrences
Window: 5 minutes

For this to work, add a warning to your listing_view when the response contains zero results:
def listing_view(self, request):
    logger.info("Pages API listing", extra={
        "slug": request.GET.get("slug"),
        "type": request.GET.get("type"),
        "ip": request.META.get("REMOTE_ADDR"),
    })
    response = super().listing_view(request)
    # Detect empty results — possible data layer problem
    if hasattr(response, 'data') and response.data.get('meta', {}).get('total_count', -1) == 0:
        logger.warning("Pages API returned zero results", extra={
            "slug": request.GET.get("slug"),
            "type": request.GET.get("type"),
        })
    return response

Log-based alerts use the same notification channels as uptime monitors. In the alert creation form, select the Slack channel or email address you configured earlier. The same #alerts channel works well for both uptime and log alerts: all production signals in one place.
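The emptiness check added to listing_view can be sanity-checked in isolation. The dicts below mimic the Wagtail API's meta.total_count response shape:

```python
def is_empty_listing(data):
    """Mirrors the listing_view check: zero total_count signals a possible data problem."""
    return data.get("meta", {}).get("total_count", -1) == 0

print(is_empty_listing({"meta": {"total_count": 0}, "items": []}))  # empty listing: warn
print(is_empty_listing({"meta": {"total_count": 12}}))              # healthy: stay quiet
print(is_empty_listing({}))                                         # no meta at all: don't warn
```

The -1 default matters: a response with no meta block should not trigger the warning, only a response that positively reports zero results.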
To make the system concrete, here's how it plays out when something actually breaks.
Scenario: Your Wagtail backend's database connection pool is exhausted under load. API requests start failing.
What happens automatically:
django.request logs 500 errors for failing API requests → your "Wagtail database errors" log alert fires → a Slack notification arrives within minutes.

What you do:
Query level = "error" AND logger_name LIKE "core.%". You see database OperationalError messages.

Then query environment = "server" at the same timestamp. The Next.js logs show the API routes receiving 500s from Wagtail, confirming the origin is the Wagtail layer, not Next.js itself.

The entire diagnostic path, from alert to root cause to confirmation, stays inside BetterStack. No SSH sessions into servers, no scrambling to tail -f logs on a degraded system.
The extra={} fields from your Wagtail views and meta objects from Next.js become first-class queryable fields in BetterStack. Use them deliberately.

The logger_name LIKE "core.%" pattern lets you filter to any layer of your Django backend without additional configuration.

A dedicated #alerts Slack channel keeps production signals out of general conversation and makes on-call easier to manage.

The three articles in this series cover the full observability stack for a Next.js + Wagtail application:
Next.js frontend logging with @logtail/browser and @logtail/node.

Wagtail backend logging with logtail-python and a queue-based handler.

This article: reading the logs, uptime monitors, and log-based alerts.

The thread running through all three is the same: match your strategy to your execution model, structure your data deliberately, and build systems that surface problems automatically rather than waiting for user reports.
You've now got end-to-end visibility from the browser through the Next.js layer to the Wagtail backend, with alerts on the failure modes that matter most. That's the foundation for shipping with confidence.
Questions about observability in your application? Reach out via the contact form or connect on LinkedIn!