
Intelligent Underwriting: Gen AI’s Impact on Modern Lending

How lenders can accelerate loan and mortgage underwriting with a Compound AI approach

By Luke Watkins · Published on March 25, 2025 · 12 min read


The creation of large language models (LLMs), a form of generative AI, is like the advent of electricity: new, paradigm-shifting, and incredibly useful, but not yet well understood in terms of how it can be applied, though that is changing.

To identify use cases for this technology, it's helpful to start with an existing problem and consider how generative AI can solve it. In this blog, we'll focus on the long and labor-intensive process of underwriting loans and mortgages, and how LLMs can cut it to half the time it would take manually.

This implementation uses a Compound AI approach: the style of AI solution building that we at deepset have been focusing on, and one that is quickly becoming the standard in AI product development. It combines many different AI components, data sources, and integrations to power mission-critical applications.

The problem

“How can I accelerate the throughput of accepted loan applications, whilst maintaining high-quality standards and minimizing additional spend on headcount?”

Before granting loans to potential customers and companies, a bank must go through a thorough phase of analysis, risk assessment, and fact-finding. This is the underwriting phase.

During underwriting, bank analysts review many complex loan documents based on risk profiles, including lengthy financial records with low-quality scans and handwritten notes. This process of extracting and synthesizing key information typically takes more than two weeks per loan and places significant pressure on credit and loan analysts who face both strict monthly underwriting quotas and zero-error expectations.

The solution: An AI underwriting copilot

To automate the labor- and time-intensive underwriting process, we built a credit-memo copilot powered by LLMs and Compound AI, which we called the “AI underwriter”.

This digital assistant works alongside human analysts throughout the underwriting workflow. Think of it as a specialized team member that can read, process, and extract information from thousands of loan documents in minutes rather than days, while maintaining the contextual understanding needed for accurate underwriting decisions.

The copilot acts as the connective tissue between scattered loan information and the analyst's decision-making process. It handles the full spectrum of document processing tasks: ingesting documents in various formats, extracting critical data points, analyzing relationships between pieces of information, answering specific queries with source attribution, and automatically populating standardized sections of credit memos.

By automating these labor-intensive aspects of underwriting, the copilot doesn't replace the human analyst but augments their capabilities. Instead of spending hours searching for information, analysts can focus on higher-value risk assessment and decision-making.

Now, let's examine how this copilot is technically implemented through a Compound AI approach.

The process: How to build an AI underwriter

Our copilot requires a customized, end-to-end solution. Advanced AI models (such as Claude, DeepSeek, or GPT) provide powerful back-end capabilities, but to deliver tangible benefits to knowledge workers' daily tasks, they need to be integrated into a comprehensive product ecosystem with multiple technical components, workflows, and an intuitive interface.

With an enterprise-grade platform like deepset, you can implement an underwriting copilot as a fully integrated Compound AI system. Here’s a high-level diagram of our AI underwriter:

The Compound AI copilot can:

  1. Ingest thousands of documents in various file formats to support each loan application.
  2. Prepare the data for use by AI in a fully integrated and automated manner: using OCR (optical character recognition) to convert messy or scanned documents into readable text, labeling the data, and obfuscating sensitive information.
  3. Integrate information from external sources such as Experian, Bloomberg, Refinitiv, etc.
  4. Filter or select certain deals or files for analysis.
  5. Extract information from the documents in a standardized way.
  6. Receive complex questions about a specific deal in a chat interface, and return answers that are prompt, well-reasoned, accurate, and comprehensive.
  7. Ensure that the AI-generated answers are correct, truthful, and hallucination-free.
  8. Adapt to changes in the AI market (e.g. seamless integration of a new model that emerges at 10% of the cost).
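To make capabilities 2 and 4 concrete, here is a minimal Python sketch of the document-preparation step. The redaction patterns and the `deal_id` metadata label are illustrative assumptions, not deepset's actual implementation; a production system would use a vetted PII-detection library rather than hand-rolled regexes.

```python
import re

def redact_sensitive(text: str) -> str:
    """Obfuscate sensitive data before the text reaches the LLM (capability 2)."""
    # Hypothetical patterns for illustration only.
    text = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[SSN]", text)   # US Social Security numbers
    text = re.sub(r"\b\d{16}\b", "[CARD]", text)             # 16-digit card numbers
    return text

def prepare_document(raw_ocr_text: str, deal_id: str) -> dict:
    """Turn one OCR'd page into a labeled, redacted record (capabilities 2 and 4)."""
    return {
        "deal_id": deal_id,  # metadata label used later to filter files by deal
        "content": redact_sensitive(raw_ocr_text.strip()),
    }

record = prepare_document("  Borrower SSN 123-45-6789, facility of $2M.  ", deal_id="deal-001")
print(record["content"])  # -> "Borrower SSN [SSN], facility of $2M."
```

Attaching the deal identifier as metadata at ingestion time is what later makes per-deal filtering (capability 4) a cheap lookup instead of a re-scan of all documents.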

A closer look at the AI underwriter implementation

Let’s look at the three modules of the copilot (data management, data retrieval, and intelligent document processing) in detail:

Data management

The power of the AI underwriter lies in its ability to combine specific details from multiple information sources to paint a nuanced picture of each deal. The deepset AI Platform manages the flow of data, extracting and consolidating raw data, transforming it into highly expressive text embeddings, and storing them in a vector database. The data is also enriched with labels and metadata, which allows analysts to view information from different perspectives and enhances the AI solution’s ability to provide meaningful insights.

Data retrieval

When executed, the AI agent searches the knowledge base for relevant documents and passes this information to the LLM. The powerful AI orchestration framework Haystack is the underlying technology that handles document retrieval and gives the LLMs the right context to generate accurate answers, as in a classic retrieval augmented generation (RAG) setup.
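The core of the RAG step is assembling retrieved passages into a grounded prompt. The sketch below shows the general shape, assuming numbered passages and a cite-your-sources instruction; the wording is illustrative and not deepset's production prompt.

```python
def build_rag_prompt(question: str, documents: list[str]) -> str:
    """Assemble retrieved passages into a grounded prompt, as in a classic RAG setup."""
    context = "\n".join(f"[{i}] {doc}" for i, doc in enumerate(documents, start=1))
    return (
        "Answer the question using only the numbered context passages below.\n"
        "Cite the passage numbers you used. If the context is insufficient, "
        "say that you cannot answer.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_rag_prompt(
    "What is the loan amount?",
    ["The facility totals $2M.", "The borrower is Acme Corp."],
)
```

In Haystack, the retriever and a prompt builder like this are wired together as pipeline components, so the same structure scales from two passages to the top-k results of a vector search.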

Intelligent Document Processing (IDP)

Typically, a loan memo or application may require over 100 questions to be answered. Most of these are routine questions that apply to every deal, such as "Who is the borrower?" and "What is the summary of the loan?". For each new transaction/loan, a set of pre-selected questions is run through the Compound AI system and the answers are populated in a batch. This reduces the analyst's workload and allows them to focus on more complex questions.

Step 1: Create question sets to run against your knowledge base (data). Step 2: Use AI to build out a final document (e.g. the credit/lending memo).
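The two steps above amount to a simple batch loop. This is a minimal sketch: the question set is hypothetical (real memos may use 100+ questions), and `answer_fn` stands in for a call to the full Compound AI pipeline.

```python
# Hypothetical standard question set; a real credit memo may require 100+ questions.
STANDARD_QUESTIONS = [
    "Who is the borrower?",
    "What is the summary of the loan?",
    "What collateral secures the facility?",
]

def run_question_set(questions: list[str], answer_fn) -> dict[str, str]:
    """Step 1: run every pre-selected question through the pipeline in a batch."""
    return {q: answer_fn(q) for q in questions}

def render_memo(answers: dict[str, str]) -> str:
    """Step 2: populate a simple credit-memo draft from the batched answers."""
    return "\n".join(f"{q}\n  {a}" for q, a in answers.items())

# Stub pipeline call for illustration:
def stub_pipeline(question: str) -> str:
    return f"(answer to: {question})"

memo = render_memo(run_question_set(STANDARD_QUESTIONS, stub_pipeline))
```

Because the question set is data rather than code, analysts can extend or reorder it per loan type without touching the pipeline itself.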

deepset AI Platform

For teams looking to build powerful solutions quickly, the deepset AI Platform delivers value in three key ways:

  • It provides the necessary building blocks and integrations.
  • It offers the ability to add proprietary logic to any Compound AI system through custom components.
  • It has built-in workflows that follow best practices for building, testing, and monitoring your solution. 

 Let's zoom in on that last point.

AI best practices 1: Build with model agnosticism

deepset is fully model-agnostic, allowing you to use and test any LLM. This modular approach allows seamless integration of new models without vendor lock-in. In the above solution, we used OpenAI's GPT-4o model, but it can simply be exchanged for, say, Claude 3.7 Sonnet or DeepSeek-R1 to balance cost, latency, and reasoning ability.
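The design pattern behind model agnosticism is dependency injection: the rest of the system depends only on a generate-from-prompt callable, never on a specific vendor client. Here is a framework-agnostic sketch with stub generators; in Haystack this corresponds to swapping the generator component in a pipeline.

```python
from typing import Callable

def make_copilot(generate: Callable[[str], str]) -> Callable[[str, str], str]:
    """Build the answering step around an injected generator, so the underlying
    model can be swapped without touching the rest of the system."""
    def answer(question: str, context: str) -> str:
        return generate(f"Context:\n{context}\n\nQuestion: {question}")
    return answer

# Stubs standing in for real model clients (e.g. GPT-4o, Claude 3.7 Sonnet, DeepSeek-R1):
def gpt_4o_stub(prompt: str) -> str:
    return "gpt-4o answer"

def deepseek_r1_stub(prompt: str) -> str:
    return "deepseek-r1 answer"

copilot = make_copilot(gpt_4o_stub)
cheaper_copilot = make_copilot(deepseek_r1_stub)  # swapping models is a one-line change
```

If a cheaper model with comparable quality appears, only the injected callable changes; prompts, retrieval, and the rest of the Compound AI system stay untouched.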

AI best practices 2: Test setups side-by-side

To arrive at an expert AI loan officer that produces output familiar to credit and loan analysts, you need to design an optimized, well-tuned AI pipeline. This is easier than ever with deepset's Prompt Explorer, a sandbox environment where you can test, modify, and compare prompts and other configurations.

AI best practices 3: Ensuring observability

Our underwriting agent must provide consistently reliable information. Here's how we enforce it:

  • We run different versions of the query and flag errors when results differ. High accuracy comes from systematic verification through redundancy, not from trying to create a single perfect query and answer. 
  • To ensure factual accuracy, we enrich the prompt with instructions to the LLM to cite the source and to refuse to answer without relevant information. 
  • We implement a timeliness check to verify that responses use the most current data. 
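The first guardrail, verification through redundancy, can be sketched as follows. This is a simplified illustration: real systems would compare answers more robustly than exact string matching, and `answer_fn` stands in for the full pipeline.

```python
def consistency_check(question_variants: list[str], answer_fn) -> dict:
    """Run rephrased versions of the same query and flag the result
    if the answers disagree (verification through redundancy)."""
    answers = {answer_fn(q).strip().lower() for q in question_variants}
    return {"consistent": len(answers) == 1, "distinct_answers": sorted(answers)}

variants = [
    "Who is the borrower on this facility?",
    "Name the borrower for this loan.",
]

# Stub pipeline that answers both variants identically:
result = consistency_check(variants, lambda q: "Acme Corp")
```

When `consistent` is False, the result is flagged for analyst review rather than written into the memo, which is how redundancy converts occasional model errors into visible exceptions instead of silent mistakes.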

With these guardrails in place to deliver high correctness, credit analysts can trust this Compound AI approach because every response is fully verified and linked to its source documentation.

Verifiable references in the output to ensure the analyst can fact-check answers, building trust.

The results

A substantial increase in deal throughput, customer satisfaction, and operational efficiency.

Automating manual document analysis saves underwriters time, often by 50% or more. This acceleration enables financial institutions to process more applications in the same amount of time, providing the following key benefits:

  • A competitive advantage by building not only a solution, but also the foundation for additional Compound AI products and the expertise to build them.
  • Improved customer satisfaction through faster approvals.
  • An expansion of underwriting operations in line with strategic objectives.

Timelines and effort

Deploying Compound AI has traditionally required a large team of cloud engineers, NLP specialists, and AI infrastructure experts. This has made it difficult for organizations to build quickly and within budget.

But AI orchestration frameworks and tools have changed that: Haystack enables organizations to build, deploy, and scale AI agents and applications tailored to their most critical business challenges. The deepset AI Platform abstracts away the technical complexity, making it easy to focus solely on the end goal: accelerating your underwriting process.

For a Compound AI system like our loan underwriting copilot, you can usually expect to have a working solution (including all the integrations to data sources) in one month.
