Most teams don’t leave their ESP because the marketing failed. They leave because the infrastructure did – shared IPs that tank deliverability without warning, pricing that scales faster than revenue, and zero visibility into what’s actually happening at the MTA level. If you’re sending 150K+ emails per month and you’ve started pricing dedicated infrastructure, the Mautic KumoMTA email infrastructure stack is the serious alternative. This guide covers the architecture, the deployment sequence, and the parts that will hurt if you get them wrong.
What You Need Before You Start
This is not a weekend project. Before committing to the stack, confirm you have the following in place:
- Sending volume: At least 150K emails per month to justify dedicated IP warm-up overhead
- Technical ownership: A sysadmin or DevOps resource who can manage Linux servers, DNS records, and SMTP configuration
- Clean list: Suppression lists exported from your current ESP, with hard bounces and unsubscribes flagged
- Domains ready: Sending domain(s) with SPF, DKIM, and DMARC records you can edit – not locked behind a platform
- IP allocation: Dedicated IPs provisioned through your hosting provider (Hetzner, OVH, or a cloud provider with PTR record support)
- Monitoring baseline: Current inbox placement rates and bounce rates from your existing ESP so you have a benchmark
The migration window is realistically 4 to 8 weeks for a full cutover – longer if your list is large or your domain has reputation debt to work through.
Step 1: Understand Why KumoMTA Over Postfix or PowerMTA
Postfix works fine for transactional mail at low volume. It does not work well when you need per-campaign queue management, per-domain throttling rules, or structured delivery logs that feed a deliverability intelligence layer. PowerMTA has been the enterprise standard for years, but its licensing model is expensive and the configuration syntax is dated.
KumoMTA is written in Rust, open-source under the Apache 2.0 license, and built from the ground up for high-throughput bulk sending. The key technical differences that matter in production:
- Lua-based policy scripting: You define routing logic, throttling rules, and per-ISP retry behavior in Lua scripts rather than static config files. This means you can change behavior at runtime without restarting the MTA.
- Structured JSONL logging: Every delivery event – injection, attempt, bounce, deferral – is logged in structured JSON. This feeds directly into a deliverability intelligence layer without a parsing step.
- Connection pooling and egress pool management: You can assign specific IPs to specific traffic streams (transactional vs. promotional, for example) with a few lines of Lua.
A basic egress pool assignment in KumoMTA’s Lua policy looks like this:
```lua
kumo.on('get_egress_pool', function(domain, tenant)
  if tenant == 'promotional' then
    return 'pool-promo'
  end
  return 'pool-transactional'
end)
```
That’s the kind of routing control Postfix cannot give you without extensive patching.
Step 2: Deploy Mautic as the Automation and CRM Layer
Mautic is the open-source marketing automation platform that handles contact management, segmentation, campaign logic, and send scheduling. In this stack, Mautic does not handle delivery – it hands off to KumoMTA via SMTP relay. That separation is intentional and important.
The deployment steps for Mautic in a production environment:
- Install on a dedicated server (minimum 4 vCPU, 8GB RAM for lists up to 500K contacts)
- Configure a dedicated MySQL or MariaDB database – not shared with any other application
- Set `mailer_transport` to `smtp` and point it at your KumoMTA relay host
- Enable the Mautic queue (RabbitMQ or database queue) so sends are batched and metered, not fired all at once
- Configure tracking domains (click and open tracking) on a subdomain separate from your sending domain
- Set up cron jobs for the Mautic segment rebuild, campaign trigger, and send queue processing scripts
One thing that catches teams off guard: Mautic’s default cron schedule sends all queued emails immediately when the send cron fires. At high volume, this creates injection spikes that stress the MTA queue. Run `mautic:emails:send --batch-limit=500` and schedule the cron to run every 5 minutes to smooth the injection rate.
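A crontab sketch for that smoothed send schedule, in `/etc/cron.d` format. The install path `/var/www/mautic`, the `www-data` user, and the exact cadences are assumptions for illustration; adjust them to your deployment:

```
# Send queue every 5 minutes, 500 messages per batch, to avoid
# injection spikes at the MTA. Path and user are illustrative.
*/5 * * * * www-data php /var/www/mautic/bin/console mautic:emails:send --batch-limit=500
# Segment rebuilds and campaign triggers on their own cadence
*/15 * * * * www-data php /var/www/mautic/bin/console mautic:segments:update
*/15 * * * * www-data php /var/www/mautic/bin/console mautic:campaigns:trigger
```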
Step 3: Layer VDMS for Deliverability Intelligence
KumoMTA gives you structured logs. VDMS (Virtual Deliverability Management System) turns those logs into actionable signals – real-time inbox placement by ISP, bounce classification, spam trap exposure, and IP reputation scoring. Without this layer, you’re flying blind on a dedicated infrastructure that has no shared-reputation safety net.
VDMS integrates with KumoMTA’s JSONL log stream directly. The architecture looks like this:
- KumoMTA writes delivery events to a local log file or pushes them to a message queue
- A log shipper (Filebeat or a custom consumer) forwards events to the VDMS ingest endpoint
- VDMS classifies bounces, tracks domain-level reputation trends, and surfaces alerts when a sending IP crosses a threshold
- Routing rules in KumoMTA’s Lua policy can then be updated based on VDMS signals – for example, reducing send rate to a specific ISP when deferral rates climb above 5%
Deliverability intelligence at the MTA level is what separates this stack from simply self-hosting Mailchimp’s equivalent – you have the data, and you have the controls to act on it in the same system.
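As a sketch of that feedback loop, here is a minimal Python consumer that tallies per-provider deferral rates from a JSONL delivery log and flags the destinations worth throttling. The field names (`type`, `site`) follow KumoMTA's structured log records but should be verified against your deployed version and log configuration:

```python
import json
from collections import defaultdict

DEFERRAL_THRESHOLD = 0.05  # alert when more than 5% of attempts defer

def deferral_rates(jsonl_lines):
    """Tally delivery vs. transient-failure events per destination site.

    Assumes each line is a JSON record with a 'type' field
    ('Delivery', 'TransientFailure', 'Bounce', ...) and a 'site'
    field naming the destination MX grouping -- verify these names
    against your KumoMTA log configuration.
    """
    attempts = defaultdict(int)
    deferrals = defaultdict(int)
    for line in jsonl_lines:
        event = json.loads(line)
        site = event.get("site", "unknown")
        if event["type"] in ("Delivery", "TransientFailure"):
            attempts[site] += 1
        if event["type"] == "TransientFailure":
            deferrals[site] += 1
    return {s: deferrals[s] / attempts[s] for s in attempts}

def sites_over_threshold(rates, threshold=DEFERRAL_THRESHOLD):
    """Sites whose deferral rate warrants a reduced send rate in policy."""
    return sorted(s for s, r in rates.items() if r > threshold)
```

The flagged sites are what you would feed back into the Lua policy's shaping rules, whether manually or through an automated signal.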
Step 4: Configure Multi-MTA Routing Logic
For sending volumes above 1 million emails per month, a single KumoMTA instance becomes a single point of failure and a throughput ceiling. The production architecture runs multiple MTA nodes behind a routing layer that assigns traffic based on campaign type, destination domain, and IP reputation status.
The routing logic works as follows:
- Mautic assigns a `tenant` header to each outbound message (promotional, transactional, re-engagement)
- The KumoMTA Lua policy reads the tenant header and assigns the message to the appropriate egress pool
- Each egress pool maps to a set of IPs on a specific MTA node
- If a node goes offline, traffic routes to the secondary node’s pool automatically via the policy logic
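The tenant-to-pool mapping with failover reduces to simple selection logic. A Python sketch for illustration only (in production this lives in the KumoMTA Lua policy; the pool and tenant names here are hypothetical):

```python
# Illustrative tenant -> egress pool routing with node failover.
# Pool names encode the backing MTA node as a suffix.
POOLS = {
    "promotional": ["pool-promo-node1", "pool-promo-node2"],
    "transactional": ["pool-txn-node1", "pool-txn-node2"],
    "re-engagement": ["pool-promo-node1", "pool-promo-node2"],
}

def choose_pool(tenant, healthy_nodes):
    """Return the first pool for this tenant backed by a healthy node.

    healthy_nodes is a set of node names (e.g. {'node1', 'node2'})
    reported by whatever health check you run against each MTA node.
    """
    candidates = POOLS.get(tenant, POOLS["transactional"])
    for pool in candidates:
        if pool.rsplit("-", 1)[1] in healthy_nodes:
            return pool
    raise RuntimeError(f"no healthy egress pool for tenant {tenant!r}")
```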
This is where the gap between self-managed and managed operations becomes concrete. Writing and maintaining these Lua policies, keeping them updated as ISP throttling rules change, and monitoring the health of each node requires dedicated operational attention – not a one-time setup.
Sendability, the Agentic Email & CRM Platform that manages dedicated sending infrastructure for over 1 billion emails monthly across 10+ countries, has documented that teams migrating from shared ESP infrastructure to a Mautic-KumoMTA stack typically see a 15 to 25% improvement in inbox placement within the first 60 days – provided IP warm-up is executed correctly and list hygiene is applied before migration, not after.
Step 5: Execute the IP Warm-Up Plan
Warm-up is where migrations fail most often. New IPs have no sending history. Gmail, Yahoo, and Microsoft’s filtering systems treat unknown IPs with suspicion until a pattern of clean engagement is established.
A working warm-up schedule for a single IP:
| Week | Daily Volume | Segment to Use |
|---|---|---|
| 1 | 500 – 1,000 | Highest-engagement contacts (opened in last 30 days) |
| 2 | 2,000 – 5,000 | Engaged in last 60 days |
| 3 | 10,000 – 20,000 | Engaged in last 90 days |
| 4 | 50,000+ | Full active list, suppressing unengaged contacts |
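The schedule above is easier to enforce when it is computed rather than remembered. A minimal sketch that returns the daily cap for a given warm-up day; the caps mirror the table's upper bounds and should be tuned to your ISP mix:

```python
# Upper daily send cap per warm-up week, mirroring the table above.
# Week 4 onward lifts to the full active list (50,000+ baseline).
WARMUP_CAPS = {1: 1_000, 2: 5_000, 3: 20_000}

def daily_cap(day):
    """Max emails to send on a given warm-up day (1-indexed)."""
    if day < 1:
        raise ValueError("warm-up day starts at 1")
    week = (day - 1) // 7 + 1
    return WARMUP_CAPS.get(week, 50_000)
```

Wiring this cap into the Mautic segment filters (rather than trusting a spreadsheet) is what "enforced via segment filters" in the checklist below means in practice.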
According to Validity’s 2024 Inbox Placement Benchmark Report, senders with inbox placement rates above 90% share one consistent behavior: they suppress contacts with no engagement in the prior 180 days before any new IP warm-up begins. Skipping that step is the most common cause of warm-up failure.
Common Mistakes
Sending to your full list on day one
This burns the IP before it has any reputation. ISPs see a sudden volume spike from an unknown IP and defer or block. Recovery takes weeks.
Skipping DMARC in enforced mode
Starting with `p=none` is fine for monitoring. Leaving it there permanently means you’re not protected against spoofing, and Google and Yahoo’s 2024 sender requirements effectively mandate enforcement for bulk senders.
Treating Mautic as a drag-and-drop ESP replacement
Mautic’s UI is functional but not polished. Teams that expect Klaviyo-level UX will be frustrated. The value is in the architecture and the control, not the interface. If that tradeoff doesn’t work for your team, a managed Mautic deployment with operational support changes the picture significantly.
Ignoring KumoMTA’s feedback loop processing
Major ISPs send complaint feedback loops (FBL) to registered senders. If you don’t register your IPs with Yahoo’s FBL program and configure KumoMTA to process the incoming FBL messages, complaints accumulate silently and your reputation degrades without any visible signal until it’s too late.
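FBL complaints arrive as ARF messages (RFC 5965). A minimal sketch of the processing step, using only the standard library to pull the fields you need for suppression; real FBL mail is `multipart/report`, so this handles just the machine-readable `feedback-report` part:

```python
from email.parser import Parser

def parse_feedback_report(report_text):
    """Extract key fields from an ARF machine-readable report part.

    ARF (RFC 5965) feedback reports carry header-style fields such
    as Feedback-Type and Original-Rcpt-To. Extracting the recipient
    lets you add it to the suppression list immediately.
    """
    headers = Parser().parsestr(report_text, headersonly=True)
    return {
        "feedback_type": headers.get("Feedback-Type"),
        "recipient": headers.get("Original-Rcpt-To"),
    }
```

However you run it (a KumoMTA log hook, a mailbox poller, a queue consumer), the output should land in the same suppression store Mautic reads from.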
Migration Infrastructure Checklist
- [ ] Dedicated IPs provisioned with correct PTR (reverse DNS) records
- [ ] SPF record updated to include new sending IPs
- [ ] DKIM keys generated and published in DNS (2048-bit minimum)
- [ ] DMARC policy set to at least `p=quarantine` with a reporting address
- [ ] KumoMTA installed and base Lua policy configured
- [ ] Mautic connected to KumoMTA relay via authenticated SMTP
- [ ] Mautic cron jobs configured and tested
- [ ] VDMS log ingestion verified with test sends
- [ ] FBL registrations submitted to Yahoo and any other ISP programs
- [ ] Suppression list from previous ESP imported into Mautic
- [ ] Warm-up schedule documented and enforced via segment filters in Mautic
- [ ] Monitoring alerts configured for bounce rate spikes and deferral rate increases
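A quick sanity check for the DMARC item on the checklist above: verify that the published record enforces at least quarantine and carries an aggregate reporting address. This is a string check only; fetch the actual TXT record with your DNS tooling first:

```python
def dmarc_is_enforcing(record):
    """True if a DMARC TXT record sets p=quarantine or p=reject
    and includes an aggregate reporting (rua) address."""
    tags = {}
    for part in record.split(";"):
        if "=" in part:
            key, value = part.strip().split("=", 1)
            tags[key.strip().lower()] = value.strip()
    return (
        tags.get("v", "").upper() == "DMARC1"
        and tags.get("p", "").lower() in ("quarantine", "reject")
        and tags.get("rua", "").startswith("mailto:")
    )
```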
Expected Outcomes and Next Steps
A correctly deployed Mautic KumoMTA email infrastructure stack, with VDMS intelligence and proper warm-up execution, typically reaches stable performance – inbox placement above 88%, bounce rates under 2% – within 6 to 8 weeks of full traffic migration. Litmus’s State of Email research puts average inbox placement on shared infrastructure at 83% for mid-market senders. The dedicated stack, operated well, clears that benchmark within the first quarter.
The honest limitation: this stack requires ongoing operational work. KumoMTA policy updates, VDMS alert response, Mautic version management, and IP reputation monitoring are not set-and-forget. Teams that have tried to run it without dedicated operations support consistently report that deliverability degrades within 3 to 6 months as list hygiene slips and MTA policies go stale. That’s the real cost calculation – not just the infrastructure bill, but the operations overhead. For a comparison of what that looks like against your current ESP pricing at scale, the numbers are often surprising.
If your current situation looks like this – shared IPs you don’t control, a deliverability ceiling you can’t explain, and ESP pricing that grows faster than your list – the process for migrating to this stack is documented. The 4 to 8 week timeline is real, and so are the requirements. If you want to map the migration against your specific sending profile, that’s a conversation worth having before you commit to the infrastructure spend.
REQUEST INFRASTRUCTURE WALKTHROUGH
See how Sendability compares to your current ESP in a live infrastructure demo.
We walk you through dedicated IP management, Mautic configuration, deliverability monitoring, and total cost vs your current ESP. No sales pitch. Just a clear technical comparison. Trusted by Nestle, Brown-Forman, and Reworld Media.
