AI Underwriting Mistakes Carriers Still Make

I have a slightly uncomfortable take for carriers: most AI underwriting failures are not technology failures. They are underwriting management failures wearing a shiny software badge.
I have seen this movie a few times. A carrier buys a promising tool, runs a tidy pilot, proves it can score submissions faster, and then watches adoption stall because the real underwriting floor is full of messy PDFs, half-finished broker emails, edge-case referrals, old rules, new appetite changes, and at least one spreadsheet named Final_v7_REALLY_FINAL.xlsx.
The promise of AI underwriting is real. McKinsey has written that underwriters spend a large share of their time on administrative work rather than risk assessment. Any carrier or MGA leader who has watched a senior underwriter copy vehicle data from a PDF into a policy system knows how painful that is. That is expensive talent doing clerical gymnastics.
But faster data entry alone does not make better underwriting. If anything, it can help a carrier make the wrong decision with impressive speed. Below are the AI underwriting mistakes I still see carriers make, and what I would do differently if I were sitting in the underwriting transformation chair.
Mistake 1: Buying speed before defining judgment
Everyone wants faster quotes. Fair enough. Brokers want turnaround before lunch, not sometime after the next lunar cycle. But speed is not a strategy if nobody has agreed what a good underwriting decision looks like.
I once sat in a meeting where an executive celebrated that a new workflow could move a submission to quote in under two minutes. A senior underwriter, who had been quiet for most of the session, finally said, 'That is great, but it still misses the garaging mismatch.' The room went very still. The tool had accelerated the process, but it had not improved the decision.
That is the core problem. AI underwriting should reduce the admin load so underwriters can spend more time on risk selection, pricing adequacy, and portfolio management. If the system only moves submissions faster through the same weak controls, you have built a conveyor belt for leakage.
Before automating, carriers should define the decision standard. What risks should flow through? What needs referral? What should be declined? Which fields must be verified before bind? Which discounts are allowed only with proof? These are not technical questions. These are underwriting questions.
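Those underwriting questions can be written down before any tooling is chosen. As a minimal sketch (the field names and routing labels here are illustrative, not any vendor's actual schema), the agreed decision standard might look like this:

```python
from dataclasses import dataclass

@dataclass
class Submission:
    vin_verified: bool                 # must be verified before bind
    garaging_matches_operations: bool  # mismatch is a judgment call
    discounts_requested: bool
    discount_proof_attached: bool      # discounts allowed only with proof

def route(sub: Submission) -> str:
    """Return 'quote', 'refer', or 'return_to_broker' per the agreed standard."""
    if not sub.vin_verified:
        return "return_to_broker"   # required field unverified: back to broker
    if sub.discounts_requested and not sub.discount_proof_attached:
        return "refer"              # discount without proof needs a human
    if not sub.garaging_matches_operations:
        return "refer"              # garaging mismatch needs underwriter judgment
    return "quote"
```

The point is not the code. It is that every branch above is an underwriting decision a leader has to sign off on before anyone automates anything.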
Mistake 2: Feeding the system bad data and expecting wisdom
Bad data is the great humbler of insurance automation. It does not matter how polished the interface looks if the underlying data is incomplete, duplicated, stale, or inconsistent.
Commercial auto is my favorite example because it is delightfully messy. Fleet schedules arrive as PDFs, Excel files, CSVs, scanned documents, and occasionally something that looks like it was photographed on a kitchen counter. VINs are missing digits. Driver names are reversed. Garaging ZIP codes do not match stated operations. One file says the vehicle is a cargo van, another says private passenger auto. Everyone acts surprised when the pricing result looks odd.
The mistake carriers make is treating data cleanup as a pre-project chore instead of a permanent underwriting capability. Clean intake, normalization, validation, and enrichment need to sit inside the workflow. If a broker sends incomplete data, the system should identify the gaps immediately. If the VIN does not match the vehicle description, that should be flagged before rating. If a prior coverage document conflicts with the application, the underwriter should see that early, not after bind.
Inaza’s platform is built around this point: workflow automation is tied to data capture, reporting, analytics, and a unified data warehouse. That matters because underwriting improvement depends on what you can see, track, and defend.
Mistake 3: Running pilots on clean cases only
Pilot theater is alive and well in insurance. We take 100 clean submissions, remove the ugly exceptions, run a demo, clap politely, and then wonder why production is harder.
Real underwriting is not clean. Real underwriting has broker notes that say, 'same as expiring except updated drivers,' except the updated drivers are in a separate attachment. Real underwriting has forms missing signatures, loss runs with unclear valuation dates, and risks that technically pass eligibility but smell a little funny to an experienced underwriter.
A useful AI underwriting pilot should include the awkward cases. That means incomplete submissions, non-standard files, duplicate data, conflicting information, referrals, appetite exceptions, and renewal accounts with messy history. If the workflow only works when the submission is perfect, it is a demo, not an operating model.
This is where deployment style matters. One Inaza differentiator I like is the ability to configure and deploy production-ready workflows quickly, without the usual proof-of-concept back and forth. The practical benefit is simple: instead of testing a toy process, carriers can test how automation performs against the actual work underwriters do.
Mistake 4: Treating underwriters like obstacles
Here is a fast way to make an underwriting team hate a transformation project: tell them the system will make decisions for them, then give them no explanation when it does.
Underwriters do not need a mysterious score dropped onto their desk like a fortune cookie. They need reasons. They need the data behind the recommendation. They need to know which rule triggered a referral, which data source was used, what changed from expiring, and whether the system is confident enough to proceed.
The best AI underwriting programs keep underwriters in control of judgment while removing repetitive work. They show clear recommendations, evidence, and escalation paths. They also capture overrides. If an underwriter disagrees with the system, that should become useful feedback, not a private act of rebellion.
This matters for compliance too. A carrier needs to defend underwriting decisions to regulators, reinsurers, internal audit, and sometimes a very grumpy producer. Explainability is not a nice extra. It is the receipt.
Mistake 5: Treating data enrichment as an afterthought
Too many carriers enrich data late, manually, or only when something feels suspicious. That is backwards. Enrichment should be part of the normal underwriting path.
Think of any ordinary service quote. If you are planning a move, a good moving company asks about stairs, elevators, distance, packing, and specialty items because those details change the price and the work involved, and it collects those details before the job starts. Insurance underwriting is obviously more complex, but the principle is the same: better questions and better supporting data produce better quotes.
For P&C carriers and MGAs, enrichment might mean vehicle data, driver history, property hazard data, business information, court records, claims history, or third-party fraud indicators. The point is not to collect everything possible. The point is to collect what changes the decision.
Inaza supports pre-built API templates for providers such as Verisk, LexisNexis, HazardHub, and others. That kind of connectivity matters because underwriters should not have to bounce between portals or manually paste findings into notes. The enrichment should arrive in the workflow, attached to the decision, and visible in reporting.
Mistake 6: Separating underwriting from claims and fraud
Some carriers still behave as if underwriting and claims live on different planets. Underwriting sets the rules, claims pays the consequences, and fraud teams clean up the mess later. That separation made more sense when data moved slowly. Today, it is a liability.
Fraud pressure is rising, and the tools available to bad actors are getting better. Verisk’s 2025 fraud report highlights how carriers are seeing digital fraud concerns grow as generative tools become easier to use. While that report focuses heavily on claims, the underwriting lesson is obvious: if false identities, manipulated documents, or inconsistent histories enter at application, the claim file inherits the problem.
AI underwriting should help catch suspicious patterns before bind. That might include mismatched addresses, inconsistent driver histories, unusual prior coverage patterns, questionable documents, or repeat signals across submissions. The goal is not to turn every underwriter into a fraud investigator. The goal is to surface the right warning signs before the carrier takes the risk.
Claims data should also flow back into underwriting. If a segment consistently produces higher severity, late-reported losses, litigation, or repair-cost surprises, underwriting needs to know. The feedback loop between claims and underwriting is one of the most underused assets in insurance.
Mistake 7: Measuring activity instead of outcomes
A dashboard that says your team processed 40 percent more submissions is nice. A dashboard that shows those submissions produced profitable, accurately priced business is much better.
This is where carriers often fall into the volume trap. They track submission count, quote count, and turnaround time, then declare victory. Those metrics matter, but they do not tell the whole story.
A serious AI underwriting scorecard should connect speed with underwriting quality. That includes quote-to-bind ratio, referral accuracy, premium leakage, override rates, straight-through eligibility, missing-data frequency, loss ratio by segment, renewal correction rates, and post-bind endorsement patterns. If automation improves speed but increases corrections, exceptions, or leakage, the carrier has not improved the operation. It has simply moved the mess downstream.
This is also why benchmarks matter. Inaza includes industry benchmarks drawn from sources such as Aon, Munich Re, Howden, and others, so teams can compare performance against the market and build stronger portfolio narratives. That is useful not only for management reporting, but also for reinsurance conversations and renewal strategy.
Mistake 8: Forgetting that every workflow creates intelligence
My favorite carriers treat underwriting workflows as data-generating assets. Every submission tells you something. Every missing field, every referral, every override, every decline reason, and every broker follow-up is a signal.
The mistake is letting that intelligence disappear into emails, notes, and disconnected systems. When workflow data is not captured in a warehouse, leadership loses the ability to answer basic questions. Which brokers send the cleanest submissions? Which appetite rules generate the most referrals? Which discounts are most often misapplied? Which segments look profitable at quote but deteriorate after claims experience develops?
This is where a connected data layer becomes more than a back-office convenience. It becomes underwriting memory. Without it, carriers repeat the same debates every quarter because nobody can prove what is happening.
Inaza’s data warehouse foundation is important here. Automating a workflow is useful. Capturing the data from that workflow and turning it into dashboards, analytics, and portfolio insight is where the long-term value lives.
What I would do differently
If I were advising a carrier starting again, I would avoid the grand transformation speech. Those speeches tend to age poorly.
I would pick one high-friction underwriting workflow with measurable pain. Maybe commercial auto fleet intake. Maybe proof-of-prior checks. Maybe renewal triage. Maybe submission prioritization. Then I would map the current path from broker email to quote, including every ugly exception.
Next, I would define the underwriting decision rules in plain English. Not technical language. Plain English. What gets accepted, referred, declined, enriched, or sent back for missing data? Then I would automate around those rules, make the reasons visible to underwriters, and capture every outcome.
Finally, I would measure the results with business metrics, not vanity metrics. Faster is good. Faster with fewer errors, less leakage, better referral quality, and clearer audit trails is the prize.
That is the practical path for AI underwriting. It is not a moonshot. It is a disciplined rebuild of the work carriers already do, with better data, better controls, and less clerical drag.
Frequently Asked Questions
What is the biggest AI underwriting mistake carriers make? The biggest mistake is automating speed before defining underwriting judgment. If a carrier has unclear appetite rules, weak data controls, or poor referral logic, AI underwriting can simply move bad decisions faster.
Does AI underwriting replace underwriters? No, at least not in a well-designed program. The strongest use case is removing repetitive tasks like data extraction, validation, enrichment, and routing so underwriters can focus on risk assessment, pricing, and portfolio decisions.
How should carriers measure AI underwriting success? Carriers should measure turnaround time, referral accuracy, quote-to-bind ratio, premium leakage, override rates, data completeness, loss ratio by segment, and audit readiness. Speed alone is not enough.
Why does data quality matter so much in underwriting automation? Data quality determines whether the system can make useful recommendations. Missing VINs, inconsistent driver data, outdated loss runs, and conflicting documents can create pricing errors, referral noise, and compliance risk.
How can carriers start without replacing their core systems? A practical approach is to start with one workflow and integrate automation into existing systems through APIs and configurable templates. This reduces disruption while giving teams measurable results.
Build AI underwriting that underwriters will actually use
The carriers that win with AI underwriting will not be the ones with the flashiest demo. They will be the ones that connect clean data, practical workflows, explainable decisions, and measurable outcomes.
If you want to automate underwriting without forcing your team through a painful system replacement, Inaza can help you deploy configurable workflows, enrich data through ready-made API templates, capture workflow intelligence in a unified data warehouse, and give leaders the dashboards they need to manage performance.
The hot take, one last time: do not buy AI to make underwriting look modern. Use it to make underwriting more accurate, more defensible, and less annoying for the very people who understand the risk best.