What is automated underwriting?
By Ed Saul, CEO · ~10 min read
Most life and health insurance applications used to take four to six weeks to underwrite. A questionnaire was filled in, evidence was requested, an underwriter read everything, formed a view, and the policy was issued — or referred, or declined, or offered with terms. The cycle was measured in days at best, weeks more often.
Today, well-designed automated underwriting can decide a clean application in under ninety seconds. Not by replacing the underwriter, but by handling the work that doesn't actually need one — the straightforward applications where the rules say accept, the answers are clear, and there's no judgement call to make. That's most applications, when you measure them. The rest still go to a human.
This is a guide to what's changed, how the technology actually works, and how to think about it if you're evaluating a platform. We've been building underwriting systems since 2007, so most of what follows is what we've learned the hard way.
What is automated underwriting?
Automated underwriting is software that evaluates an insurance application and makes — or recommends — an underwriting decision without a human reading the case file. In its simplest form, it's a rules engine: a configurable logic layer that takes structured answers from an application (age, smoker status, BMI, health disclosures, occupation, sum insured) and applies your insurer's underwriting rules against them.
That's been around in some form since the late 1990s. What changed in the last decade is two things. First, online sales and adviser portals made it possible to capture an application's structured data well enough that a rules engine could reliably evaluate it — without the messy, free-text-laden paper forms that used to make automation impossible. Second, the rules themselves got more capable: cover-level granularity, multi-factor decisioning, exclusion replacement logic, reinsurer involvement in rule design, and audit trails that satisfy regulators.
Modern automated underwriting is not a single decision. It's a fabric of small decisions executed in real time as the applicant fills in the form, accumulating into a final outcome the moment the last question is answered.
It's also not, in the strict sense, "AI". The bulk of automated underwriting is rules-based and deterministic — explainable, reproducible, regulator-friendly. Some platforms add AI on top of that for complex cases (we'll get to the difference shortly), but the core engine is rules. Intelligent Life shipped one of the earliest production deployments in 2007. The category is older than most people realise.
How automated underwriting works: the four stages
A clean application moves through four stages between "applicant clicks submit" and "policy issued". The same stages are present whether the journey takes ninety seconds or several weeks; what differs is how much of it the software does for you.
1. Application capture
The applicant answers a structured questionnaire — typically health, lifestyle, occupation, financial situation, and the cover they want. A capable platform doesn't show a single static form; it adapts as the applicant answers, asking follow-up questions only when relevant, hiding sections that don't apply, branching based on the cover types selected. This matters because application length is the biggest single driver of abandonment. Every irrelevant question is a chance to lose the customer.
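The branching behaviour described above can be sketched in a few lines. This is an illustrative model only — the question content, field names, and `show_if` predicate structure are invented for the example, not iUnderwrite's actual API:

```python
# Hypothetical sketch of a reflexive questionnaire: each question carries a
# predicate over the answers gathered so far, and only relevant questions
# are ever shown. A non-smoker who does no hazardous sports answers two
# questions here, not four.

QUESTIONS = [
    {"id": "smoker", "text": "Have you smoked in the last 12 months?",
     "show_if": lambda a: True},
    {"id": "cigs_per_day", "text": "How many cigarettes per day?",
     "show_if": lambda a: a.get("smoker") is True},
    {"id": "hazardous_sports", "text": "Do you take part in hazardous sports?",
     "show_if": lambda a: True},
    {"id": "dive_depth", "text": "Maximum scuba depth (metres)?",
     "show_if": lambda a: a.get("hazardous_sports") is True},
]

def next_question(answers):
    """Return the first unanswered question whose predicate holds, or None."""
    for q in QUESTIONS:
        if q["id"] not in answers and q["show_if"](answers):
            return q["id"]
    return None
```

The point of the structure: every follow-up is gated on a disclosed fact, so the form shrinks itself around the applicant instead of showing every question to everyone.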
2. Real-time evaluation
As soon as enough has been disclosed, the rules engine begins to evaluate. Each underwriting rule looks at one or more disclosed facts and produces an outcome — a loading, an exclusion, a referral trigger, an outright decline, or nothing at all. Rules combine: the same applicant might trigger no rules for life cover, a 25% loading for income protection, and a referral for trauma cover, all in the same pass. Cover-level granularity is the difference between "refer the application" and "accept three covers and refer the fourth" — and it directly determines how many cases your underwriters actually have to look at.
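A minimal sketch of cover-level evaluation makes the mechanics concrete. The rules below are invented for illustration (they are not from any real underwriting manual), and the severity ordering is one plausible way to combine outcomes:

```python
# Each rule inspects disclosed facts and emits an outcome for a specific
# cover. Outcomes accumulate per cover, not per application: the same pass
# can accept one cover, load another, and refer a third.

ACCEPT, LOAD, EXCLUDE, REFER, DECLINE = "accept", "load", "exclude", "refer", "decline"

RULES = [
    # (cover, predicate over facts, outcome, detail) -- illustrative only
    ("income_protection", lambda f: f["smoker"], LOAD, "+25%"),
    ("trauma",            lambda f: f["bmi"] >= 35, REFER, "BMI review"),
    ("life",              lambda f: f["age"] > 70, REFER, "age review"),
]

# The worst outcome a cover attracts is the one that sticks.
SEVERITY = {ACCEPT: 0, LOAD: 1, EXCLUDE: 1, REFER: 2, DECLINE: 3}

def evaluate(facts, covers):
    """Run every rule against the facts; return one outcome per cover."""
    results = {c: (ACCEPT, None) for c in covers}
    for cover, pred, outcome, detail in RULES:
        if cover in results and pred(facts):
            if SEVERITY[outcome] > SEVERITY[results[cover][0]]:
                results[cover] = (outcome, detail)
    return results
```

Run against a 45-year-old smoker with a BMI of 36 applying for all three covers, this accepts life, loads income protection 25%, and refers trauma — three different outcomes from one evaluation pass, which is exactly the granularity the paragraph above describes.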
3. Decision and offer
Once every rule has been evaluated, the platform reaches one of four outcomes per cover: accept on standard terms, accept with non-standard terms (loadings or exclusions), decline, or refer for human review. The applicant sees the outcome immediately. If the offer is acceptable, they sign and pay; if not, they negotiate, or walk away. Either way, the decision was made by the platform, not by a queue.
4. Issuance and audit trail
The final cover is issued, documents generated, premium collected. Behind the scenes, every rule that fired and every fact it evaluated is recorded — by date, by version, by applicant. This part matters most for the parts of the business no one talks about until something goes wrong: the regulator asks how a decision was reached, the reinsurer queries claims experience, the customer disputes their loading three years later. A good audit trail closes those conversations in minutes. A bad one keeps them open for months.
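What "every rule that fired and every fact it evaluated" looks like in practice is a small, replayable record persisted per decision. The field names here are hypothetical, but the shape — fired rules, a facts snapshot, and the rule-book version in force at that moment — is the substance of a defensible audit trail:

```python
import json
from datetime import datetime, timezone

def audit_record(application_id, rulebook_version, fired_rules, facts):
    """Build an immutable, replayable record of one underwriting decision.

    fired_rules: e.g. [{"rule": "R-12", "outcome": "load", "detail": "+25%"}]
    facts:       the exact structured inputs at the moment of decision.
    """
    return json.dumps({
        "application_id": application_id,
        "rulebook_version": rulebook_version,   # pins which rules were live
        "decided_at": datetime.now(timezone.utc).isoformat(),
        "fired_rules": fired_rules,
        "facts_snapshot": facts,
    }, sort_keys=True)
```

With the rule-book version pinned, a dispute three years later is answered by replaying that version against the snapshot — not by reconstructing what the rules probably said at the time.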
The four stages aren't unique to automated underwriting — every life and health policy goes through some version of them, even on paper. What automation does is collapse the elapsed time of stages two and three from days to seconds, and make stage four bulletproof. The compounding effect on cost-to-issue and customer experience is what's actually transformed the category.
What does "straight-through processing" actually mean?
Straight-through processing — STP — is the fraction of applications that complete every stage of the underwriting workflow without any human touch on the insurer's side. It's the headline metric most automated underwriting platforms compete on. "We do 80% STP" is shorthand for "four out of five applications never touch an underwriter".
It's a useful number, but it's not the goal.
The goal is accurate decisioning. STP is one consequence of getting that right. A platform that decides every application instantly but loads them all at 50% has 100% STP and is terrible. A platform that refers every borderline case to a human has lower STP and might be doing exactly what it should. Reading STP without context tells you almost nothing.
The other thing STP doesn't tell you is what gets referred and why. Two platforms could both report 70% STP — one because the rules are well-designed and the genuine grey-area cases (15–20% of typical portfolios) get a human look; the other because the rules are blunt and refer half of healthy applicants for safety. The second platform has worse outcomes despite the same headline number.
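The arithmetic of STP is trivial — what distinguishes the two 70% platforms is the breakdown behind it. A sketch of the report worth asking for (field names assumed for illustration):

```python
from collections import Counter

def stp_report(decisions):
    """decisions: list of dicts with 'straight_through' (bool) and, for
    referred cases, a 'referral_reason'. Returns the headline STP rate
    plus the referral-reason breakdown that the headline hides."""
    total = len(decisions)
    stp = sum(d["straight_through"] for d in decisions)
    reasons = Counter(d["referral_reason"] for d in decisions
                      if not d["straight_through"])
    return {"stp_rate": stp / total, "referrals_by_reason": dict(reasons)}
```

If most referrals cluster on genuine grey areas (combined disclosures, large sums insured), the rules are doing their job; if they cluster on blunt catch-all triggers, you're looking at the second platform.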
Realistic STP ranges by product, from production deployments we've seen:
- Direct life insurance, healthy adult, modest sums insured: 70–90% achievable in well-tuned deployments.
- Income protection, occupation-rated: 50–70% — occupation classes pull more cases into manual review.
- Trauma / critical illness: 60–80%, depending on how many conditions the rules cover.
- Older lives or large sums insured: 30–60% — more financial underwriting, more medical evidence.
The right STP for your book depends on your product mix, distribution channel, and underwriting philosophy. Anyone quoting a single number for "STP we'll achieve" is selling you on a metric they can't actually predict in your business until they see your data.
Rules engine vs AI underwriting: what's the difference?
These get conflated in vendor marketing. They're complementary capabilities that do different jobs.
A rules engine is deterministic. Given the same inputs, it always produces the same output. Rules are written by your underwriting and actuarial team and encode your insurer's underwriting philosophy explicitly. Every rule is auditable: you can see what fired, why, and what the outcome was. Regulators like rules engines because they can be explained. Reinsurers like them because they can be reviewed and endorsed. Most automated underwriting work — the bulk of decision-making — should run through a rules engine.
AI underwriting, as the term is used today, usually means a large language model summarising or scoring something the rules engine couldn't easily handle. That includes:
- Reading PII-anonymised free-text answers and extracting structured findings the rules can use.
- Producing a risk score across multiple dimensions for a borderline case, as decision support for a human underwriter.
- Generating a plain-language explanation of a complex case to accompany a referral.
AI is probabilistic. It produces different outputs for slightly different inputs, and it can be wrong. Used well, it's a force multiplier for underwriters facing the 15–25% of cases that rules can't cleanly resolve. Used poorly, it becomes a black-box decision-maker that no one can defend in front of a regulator.
The pattern that works in production looks like:
- Rules engine handles the bulk of decisions on its own.
- AI assists on referrals — never decides them. It produces structured output (a risk score, a recommended loading range, a plain-language summary), which a human underwriter reads alongside the case.
- Humans own the call on every non-straight-through decision. The AI is decision-support, not decision-making.
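The three bullets above reduce to one control-flow decision. In this sketch, `rules_engine` and `score_case` are stand-ins for whatever deterministic engine and AI model a platform actually uses — the names and return shapes are assumptions, not a real API:

```python
# "Rules decide, AI assists, humans own referrals" as control flow.

def underwrite(facts, rules_engine, score_case):
    outcome = rules_engine(facts)
    if outcome["decision"] != "refer":
        return outcome                       # straight-through: rules alone decide

    support = score_case(facts)              # AI produces structured decision support
    return {
        "decision": "refer",
        "queue": "human_underwriter",        # a person makes the final call
        "ai_support": support,               # score/summary travels with the case
    }
```

Note what the AI path never does: change the decision. It only attaches structured support to a case that a human was going to read anyway.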
Done that way, you get the speed of automation, the explainability of rules, and the breadth of judgement only humans bring. Done the other way — AI as primary decision-maker — you trade transparency for marginal gains and inherit a regulatory problem.
How to evaluate an automated underwriting platform
If you're shortlisting vendors, these are the questions worth asking. Each one separates the platforms that actually work in production from the ones that demo well and break in deployment.
- Who controls the rules? Your actuarial and underwriting team needs to be able to change rules without filing a developer ticket. If updating a smoker rate or adding a new exclusion requires code changes, the platform isn't actually configurable.
- How are decisions audited and explained? For every issued decision, can you list the rules that fired, the facts they evaluated, and the version of the rule book in effect at that moment? If not, you have an audit problem waiting to surface.
- What's the multi-tenant story? If you run any white-label arrangements (broker-branded products, bancassurance, group schemes), can the platform truly isolate them — separate branding, separate rules, separate user bases — or is it one shared instance with cosmetic theming?
- Has a reinsurer reviewed the rules? Reinsurer involvement in rule design is rare and valuable. It signals that the rule set has been reviewed by people whose money is at stake.
- Multi-jurisdiction support. If you sell across borders, can the same platform run rules, disclosures, and consent flows specific to each jurisdiction — or do you need a separate deployment per market?
- AI module — explainable? Human-in-the-loop? If the vendor is shipping AI, ask to see the rationale field on a real assessment. If the answer is "we'll send you a video", they don't have the explainability story they think they do.
- Integration surface. API depth, identity (OAuth/OIDC), payment, policy administration, reinsurance reporting. A modern platform has all of these as standard. A legacy one has "we can build that for you".
- Operational model. Who runs it, who patches it, who handles security disclosures? "You" is the wrong answer for a SaaS in 2026.
- Track record. How many years has the platform been in production, in how many markets, processing how many applications a year? New entrants do exist. Most go through their painful learning at your customers' expense.
- Implementation approach. Software project or managed delivery? The first is your problem. The second is the vendor's.
We'd answer all ten — under NDA — for any insurer evaluating iUnderwrite. Most of the answers are on the platform page; the ones that aren't are the ones we keep behind a briefing.
Beyond the platform: what underwriting still needs humans for
Not every application should be automated. Some categories should always go to a human, regardless of how good the rules engine is:
- Impaired lives. Anything where the applicant has multiple disclosed conditions, particularly in combination, deserves a human eye. The interactions between conditions are where rule-based systems hit their limits.
- Large sums insured. Beyond a threshold (which varies by insurer and market — typically $1M–$2M for life, lower for income protection), financial underwriting and the cost of getting it wrong both increase. Human judgement is worth the time.
- Complex commercial and group covers. Buy-sell, key person, and large group schemes have business-specific factors a rules engine struggles with cleanly.
- Unusual occupations and avocations. Rules engines handle the long tail of recreational risk poorly. A trained underwriter handles it well.
- Exception requests. When an applicant negotiates terms or requests reinstatement, a human is reading the case anyway.
A platform that pretends none of this is true is selling you something other than safety. The right operating model is automation by default, human review by exception — with the exceptions clearly defined, well staffed, and supported by good tooling. That's what the underwriting workbench inside iUnderwrite is built for.
Where to from here
Automated underwriting isn't a single product feature; it's a way of organising the whole new-business workflow. The platforms that do it well move applications quickly, treat customers fairly, give underwriters the tools to make better decisions on the cases that need them, and produce defensible audit trails for everyone who's going to ask.
If you'd like to see what that looks like in practice, we can walk you through iUnderwrite configured with your products, your rules, and your branding — or arrange a 45-minute briefing on how Intelligent Life has shipped this technology across eight markets since 2007. Either way, get in touch.