How MailTrigger Changed the Way We Handle Notifications Internally
Posted on Thursday, 07 March 2024 in Blog
Ever since MailTrigger was introduced, we've made major improvements to how we handle notifications across all our internal services.
You might want to start with Part 1: How MailTrigger Was Born to understand the problems we faced and why we built MailTrigger in the first place.
Sentry Integration
We changed Sentry’s SMTP server to point to `smtp.mailtrigger.app`. Then, we created a route in MailTrigger called "Sentry Project Owner Telegram Notification." This route works as follows:
- If the email subject starts with `[Sentry]`, is sent to our internal DevOps address (`devops@mailtrigger.app`), and contains `level = error` in the body, the route gets triggered.
- The route fetches project-to-owner mappings from our internal service API and attaches this data to the email. The attachment format looks like:

  ```json
  {
    "Project 1": ["owner1@example.com", "owner2@example.com"],
    "Project 2": ["owner3@example.com"]
  }
  ```

- A WASM-based action extracts the project name from the subject line, retrieves the responsible owners from the attachment, and updates the recipients list accordingly. For example, if the subject is `[Sentry] Project 1 Error`, the recipients will be replaced with:

  ```json
  ["owner1@example.com", "owner2@example.com"]
  ```

- Finally, a Telegram Action sends the error message directly to the corresponding owners’ Telegram accounts.
This ensures developers only receive error-level alerts for the projects they actually manage—nothing more.
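The route itself runs this step as a WASM action, but the recipient-rewriting logic is easy to sketch in plain Python. Everything below is illustrative, not MailTrigger's actual API: the function name is made up, and the only assumed subject structure is the `[Sentry] <project> ...` shape from the example above.

```python
import json
import re

def remap_sentry_recipients(subject: str, attachment_json: str) -> list[str]:
    """Sketch: pull the project name out of a '[Sentry] <project> ...'
    subject and look up its owners in the attached mapping."""
    owners_by_project = json.loads(attachment_json)
    # Strip the '[Sentry] ' prefix, then match the longest known project name.
    rest = re.sub(r"^\[Sentry\]\s*", "", subject)
    for project in sorted(owners_by_project, key=len, reverse=True):
        if rest.startswith(project):
            return owners_by_project[project]
    return []  # no match: leave the recipients unchanged upstream

mapping = json.dumps({
    "Project 1": ["owner1@example.com", "owner2@example.com"],
    "Project 2": ["owner3@example.com"],
})
print(remap_sentry_recipients("[Sentry] Project 1 Error", mapping))
# ['owner1@example.com', 'owner2@example.com']
```

Matching the longest project name first avoids a prefix collision if one project name is a prefix of another.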
Jenkins Integration
When a Jenkins build fails, it sends a notification email with a format like this:
```
Jenkins Build Failure Notification
Build failed for job: Project Build
Build number: 401
Revision: 2318
Changes:
- [tim] Fixed bugs Refs #451
- [jolin] Added feature Refs #451
```
In this example, both Tim and Jolin contributed to the failed build, so they should be the ones notified.
We updated Jenkins to use `smtp.mailtrigger.app` and added a new route in MailTrigger called "Jenkins Build Failure Telegram Notification." Here's how it works:

- If the subject starts with `[Jenkins]`, the route is triggered.
- It fetches a mapping of developer names to email addresses (e.g., `Tim → tim@mailtrigger.app`) from our API and adds it to the email as an attachment.
- A WASM Action parses the email content to extract names like `[tim]` and `[jolin]`, finds their corresponding emails from the attachment, and replaces the recipients accordingly.
- A Telegram Action then sends the failure message directly to Tim and Jolin via Telegram.
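The name-extraction step boils down to scanning the `Changes:` section for `[name]` tags. A minimal Python sketch of that logic (the real action is WASM, and the function name here is ours, not MailTrigger's):

```python
import re

def jenkins_recipients(body: str, email_by_name: dict[str, str]) -> list[str]:
    """Sketch: collect '[name]' tags from the Changes lines and map
    each committer to an email address from the attached mapping."""
    names = re.findall(r"^\s*-\s*\[(\w+)\]", body, flags=re.MULTILINE)
    recipients: list[str] = []
    for name in names:
        email = email_by_name.get(name)
        # Deduplicate while preserving order; skip unknown committers.
        if email and email not in recipients:
            recipients.append(email)
    return recipients

body = """Build failed for job: Project Build
Build number: 401
Revision: 2318
Changes:
- [tim] Fixed bugs Refs #451
- [jolin] Added feature Refs #451
"""
print(jenkins_recipients(body, {"tim": "tim@mailtrigger.app",
                                "jolin": "jolin@mailtrigger.app"}))
# ['tim@mailtrigger.app', 'jolin@mailtrigger.app']
```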
Healthchecks Integration
We applied the same logic to Healthchecks:
- If the email subject contains `Service UP` or `Service DOWN`, the route is triggered.
- The route pulls a domain-to-developer mapping (e.g., `app.mailtrigger.app → tim@mailtrigger.app`) and attaches it to the email.
- A WASM Action finds the matching developer by parsing the domain in the subject and sets the correct recipients.
- A Telegram Action then sends the message to the developer’s Telegram account.
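The domain lookup can be sketched as a substring match against the attached mapping. The subject line shown here (`Service DOWN: <domain>`) is an assumption about Healthchecks' format, and the function name is ours:

```python
def healthcheck_recipient(subject: str, dev_by_domain: dict[str, str]):
    """Sketch: return the developer responsible for whichever monitored
    domain appears in the 'Service UP'/'Service DOWN' subject."""
    for domain, developer in dev_by_domain.items():
        if domain in subject:
            return developer
    return None  # unknown domain: fall through to default routing

mapping = {"app.mailtrigger.app": "tim@mailtrigger.app"}
print(healthcheck_recipient("Service DOWN: app.mailtrigger.app", mapping))
# tim@mailtrigger.app
```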
Uptime Integration (with LLM Rules!)
Our Uptime monitoring service sends two types of emails:
- Notifications for certificates about to expire (1, 7, 14, 21 days before)
- Notifications for certificates already expired
But we only want to receive alerts for:
- Certificates expiring tomorrow
- Certificates that already expired today
To solve this, we used MailTrigger’s LLM Rule:
- We defined a rule with this prompt: "If the subject indicates that the server certificate is expiring tomorrow or has already expired, match this rule."
- If the rule matches, the route fetches the domain-to-developer mapping.
- The WASM Action locates the responsible developer by domain and sets them as the recipient.
- A Telegram Action sends the notification to the developer’s Telegram account.
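For a sense of how this fits together, here is a hypothetical route definition. The field names (`type`, `prompt`, `actions`, and so on) are invented for illustration and are not MailTrigger's actual configuration schema; only the prompt text comes from the rule above:

```json
{
  "name": "Certificate Expiry Telegram Notification",
  "rules": [
    {
      "type": "llm",
      "prompt": "If the subject indicates that the server certificate is expiring tomorrow or has already expired, match this rule."
    }
  ],
  "actions": ["fetch_domain_mapping", "wasm_set_recipient", "telegram_notify"]
}
```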
Monit Integration with Background Tasks
Monit notifications are tricky.
Each time a service restarts, it sends two emails:
- One for when the service stops
- One for when it restarts
Normally these come in under a minute. But here’s the scenario we care about:
We receive a service stop notification, and no restart email arrives after 30 minutes.
That’s a real issue. This logic is too complex for routing alone, so we used MailTrigger’s background task system.
Here’s how we implemented it:
- We wrote a WASM script that:
  - Scans the day’s emails
  - Checks if any service stop email didn’t have a matching restart after 30+ minutes
  - Sends an alert to the responsible developer and their manager via Telegram
- We uploaded the WASM and a `.sqlite3` database file to MailTrigger’s background job system. Each task can have its own local DB for tracking.
- We set the task to run every 30 minutes.

That’s it: we finally eliminated Monit alert spam while keeping the important failures visible.
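The stop-without-restart check is a natural fit for the task's local SQLite database. Here is a minimal Python sketch of the core query; the table name, schema, and how Monit events get recorded are all assumptions (the real task is a WASM script with its own `.sqlite3` file):

```python
import sqlite3
from datetime import datetime, timedelta

# Assumed schema: one row per Monit email seen by the task.
DDL = "CREATE TABLE IF NOT EXISTS monit_events (service TEXT, kind TEXT, received_at TEXT)"

def stuck_services(conn: sqlite3.Connection, now: datetime) -> list[str]:
    """Services with a 'stop' event over 30 minutes old and no later 'restart'."""
    cutoff = (now - timedelta(minutes=30)).isoformat()
    rows = conn.execute(
        """SELECT s.service FROM monit_events s
           WHERE s.kind = 'stop' AND s.received_at <= ?
             AND NOT EXISTS (
               SELECT 1 FROM monit_events r
               WHERE r.service = s.service AND r.kind = 'restart'
                 AND r.received_at > s.received_at)""",
        (cutoff,),
    ).fetchall()
    return [service for (service,) in rows]

conn = sqlite3.connect(":memory:")
conn.execute(DDL)
now = datetime(2024, 3, 7, 12, 0)
conn.executemany(
    "INSERT INTO monit_events VALUES (?, ?, ?)",
    [
        ("nginx", "stop", datetime(2024, 3, 7, 11, 0).isoformat()),     # never restarted
        ("redis", "stop", datetime(2024, 3, 7, 11, 5).isoformat()),
        ("redis", "restart", datetime(2024, 3, 7, 11, 6).isoformat()),  # recovered
    ],
)
print(stuck_services(conn, now))
# ['nginx']
```

Storing timestamps as ISO-8601 strings keeps the comparison simple, since lexicographic order matches chronological order for a fixed format.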
What’s Changed Since MailTrigger
Since adopting MailTrigger, we’ve experienced dramatic improvements:
- Inbox clarity. Only critical emails are delivered—so we actually check them.
- Faster alerts. Build failures, errors, and downtime now go directly to the right people on Telegram.
- Simplified admin. When roles change, we update routing in MailTrigger—no more touching every service individually.
Thanks to MailTrigger, our internal alerting is cleaner, faster, and more maintainable.
But this is just the beginning. In the Next Post, we’ll show how MailTrigger is helping us level up our customer support and automation workflows.