Enterprise sales teams lose more deals than they should, not because their products fall short, but because their proposals do. They arrive late, feel generic, miss buyer-specific context, and fail to connect product capability to the client's actual business situation.
The Irony?
Every enterprise already has the data to fix this: CRM records, pricing intelligence, past win summaries, competitive positioning, compliance constraints. It's all there. The problem is that proposals are still largely hand-crafted by humans who must hunt through that data manually, synthesize it under deadline pressure, and produce something that reads like it was written with strategic intent.
At KPI Partners, we decided to change that. This is the story of how we built an Agentic Proposal Generator on Databricks Agent Bricks and Lakebase, and what we learned along the way.
The Problem with AI-Generated Proposals
Early experiments with generative AI for proposal drafting ran into a common wall: hallucinated facts.
LLMs are fluent, confident, and fast. They will write a proposal that sounds precisely tailored to a client, referencing pricing tiers, competitor comparisons, and compliance postures, without actually knowing any of it. The output looks compelling until a subject matter expert reviews it and spots the fabricated figures.
The root cause isn't the model. It's grounding. An LLM can only be as accurate as the context you give it. And "context," in an enterprise sales setting, means live customer tables, current pricing data, real CRM history, and up-to-date product catalogs. Getting that data into the model at inference time, reliably, at low latency, and at the right scope, was the core engineering challenge.
This is exactly where Databricks Lakebase changed the architecture.
Why Lakebase Was the Missing Piece
Traditional Lakehouse architectures are optimized for analytical workloads: large-scale batch processing, aggregations, and BI dashboards. They're not designed for the kind of point lookups and row-level writes that agentic applications need, such as fetching a specific customer's pricing tier in milliseconds, or writing a draft proposal record back to a table once an agent completes its task.
Lakebase gives Databricks the transactional, low-latency operational layer that agentic AI requires. It bridges the gap between the analytical richness of the Lakehouse and the millisecond-latency requirements of production AI applications.
For our Proposal Generator, Lakebase serves as the single source of operational truth for every agent in the pipeline:
- Customer tables with master records, industry verticals, and account size
- CRM history including past interactions, wins, losses, and references
- CPQ and pricing tables with product bundles, discount structures, and margin data
- Financial and loyalty data covering revenue, lifetime value, and renewal probability
- Proposal output tables storing generated drafts for feedback and future fine-tuning
All of this is accessed via Model Context Protocol (MCP), which hydrates Lakebase records directly into agent context windows at runtime. No batch jobs. No data exports. No stale cache. Agents see the same data your sales ops team sees, in real time.
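To make the hydration step concrete, here is a minimal sketch of the kind of point lookups an MCP tool would run at inference time. Lakebase is Postgres-compatible; this sketch uses an in-memory SQLite database as a stand-in for the operational store, and the table names, column names, and sample rows are all illustrative assumptions, not the accelerator's actual schema.

```python
import sqlite3

# Stand-in for the Lakebase operational store (Lakebase itself is
# Postgres-compatible). Tables, columns, and rows are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id TEXT PRIMARY KEY, name TEXT, vertical TEXT)")
conn.execute("CREATE TABLE pricing (customer_id TEXT, tier TEXT, discount_pct REAL)")
conn.execute("INSERT INTO customers VALUES ('C-001', 'Acme Corp', 'manufacturing')")
conn.execute("INSERT INTO pricing VALUES ('C-001', 'enterprise', 12.5)")

def hydrate_context(customer_id: str) -> dict:
    """Row-level point lookups, run at request time so the agent sees
    the same live operational data as sales ops, not a stale export."""
    cust = conn.execute(
        "SELECT name, vertical FROM customers WHERE id = ?", (customer_id,)
    ).fetchone()
    price = conn.execute(
        "SELECT tier, discount_pct FROM pricing WHERE customer_id = ?", (customer_id,)
    ).fetchone()
    return {
        "customer": {"name": cust[0], "vertical": cust[1]},
        "pricing": {"tier": price[0], "discount_pct": price[1]},
    }

context = hydrate_context("C-001")
```

The returned dictionary is what gets placed into the agent's context window; the same connection can also handle the write-back of finished drafts to the proposal output tables.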
The Multi-Agent Architecture
A single LLM call can't generate a great enterprise proposal. The task is too broad, too multi-step, and too dependent on heterogeneous data sources. We needed a structured, multi-agent orchestration layer, and Databricks Agent Bricks gave us exactly that.
Here's how the seven agents divide the work:
Supervisor Agent
The Orchestrator. It receives the trigger (from a sales rep via the chatbot interface, or from an external CRM event like a new RFP upload) and sequences the downstream agents. It maintains workflow state and handles retries when agents need additional context.
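The supervisor's sequencing-and-retry behavior can be sketched as a simple loop over downstream agents. This is a conceptual illustration, not the Agent Bricks API: the stage names and stub agents are invented, and a real supervisor would call deployed agent endpoints instead of local functions.

```python
from typing import Callable

class NeedsMoreContext(Exception):
    """Raised by an agent that cannot complete with the context it was given."""

def run_pipeline(stages: list[tuple[str, Callable[[dict], dict]]],
                 context: dict, max_retries: int = 2) -> dict:
    """Minimal supervisor loop: run each agent in order, merge its output
    into the shared workflow state, and retry when an agent needs more."""
    state = dict(context)
    for name, agent in stages:
        for attempt in range(max_retries + 1):
            try:
                state.update(agent(state))
                break
            except NeedsMoreContext:
                if attempt == max_retries:
                    raise
                state["retries"] = state.get("retries", 0) + 1
    return state

# Stub agents standing in for the real downstream agents.
def extract(state):  return {"client": "Acme Corp"}
def retrieve(state): return {"case_studies": ["similar-win"]}
def draft(state):    return {"proposal": f"Proposal for {state['client']}"}

result = run_pipeline(
    [("extract", extract), ("retrieve", retrieve), ("draft", draft)], {}
)
```

The shared `state` dictionary plays the role of the workflow state the supervisor maintains across agents.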
Information Extraction Agent
Before any proposal writing begins, this agent pulls structured data from Lakebase, the client's master record, CRM engagement history, financial profile, and packages it as grounded context for downstream agents.
Knowledge Assistant Agent
A RAG-powered retrieval agent that queries the Databricks Vector Search index. It retrieves relevant case studies, past proposals for similar clients, product documentation, and competitive positioning materials.
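The retrieval step can be illustrated with a toy similarity search. In production this agent queries a Databricks Vector Search index; the bag-of-words cosine ranking below is only a self-contained stand-in, and the document IDs and texts are invented for the example.

```python
from collections import Counter
from math import sqrt

# Toy corpus standing in for the Vector Search index contents.
DOCS = {
    "case-study-mfg": "manufacturing client cut proposal time with analytics",
    "pricing-guide":  "enterprise pricing tiers discount structure",
    "competitive":    "competitive positioning against legacy vendors",
}

def _vec(text: str) -> Counter:
    """Bag-of-words term counts (stand-in for a real embedding)."""
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k document IDs most similar to the query."""
    q = _vec(query)
    ranked = sorted(DOCS, key=lambda d: _cosine(q, _vec(DOCS[d])), reverse=True)
    return ranked[:k]

hits = retrieve("pricing tiers for a manufacturing client")
```

The real agent returns the retrieved passages themselves, which downstream agents cite as grounded context rather than inventing comparable material.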
RFP Analyzer Agent
If an RFP or brief has been uploaded, this agent parses it to extract client requirements, evaluation criteria, scope, timelines, and hidden constraints. The output feeds directly into the Proposal Generator as a structured requirements document.
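A simplified version of that parsing step might look like the following. The `Label: value` layout and the field names are assumptions about a plain-text brief, not a fixed schema; real RFPs are messier and the production agent uses an LLM rather than regular expressions.

```python
import re

# Hypothetical plain-text RFP brief used only for illustration.
RFP_TEXT = """\
Scope: unified analytics platform for 3 regions
Timeline: go-live within 6 months
Evaluation: 40% technical fit, 30% price, 30% references
"""

def analyze_rfp(text: str) -> dict:
    """Extract labelled lines into the structured requirements
    document handed to the Proposal Generator agent."""
    fields = {}
    for line in text.splitlines():
        m = re.match(r"(\w+):\s*(.+)", line)
        if m:
            fields[m.group(1).lower()] = m.group(2).strip()
    return fields

requirements = analyze_rfp(RFP_TEXT)
```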
Template & Tone Agent
Enterprise proposals don't just need to be accurate; they need to sound right. This agent applies client-specific branding, adjusts tone for the buyer persona, and applies BYOT (Bring Your Own Template) rules so the output matches existing proposal formats.
Proposal Generator Agent
Synthesizes all upstream inputs into a coherent, persuasive proposal document. Output formats include PowerPoint, PDF, and Word.
Compliance & Guardrail Agent
The final gate before delivery. This agent validates every proposal against legal, security, and brand guidelines. It flags hallucinated pricing, off-limits claims, and regulatory violations before the document ever reaches a sales rep.
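One guardrail check, flagging pricing that was never grounded, can be sketched as below. The grounded set is hard-coded for illustration; in the real pipeline it would come from the Lakebase pricing tables, and this is one rule among many, not the full compliance pass.

```python
import re

# Prices the upstream agents actually pulled from pricing tables
# (hard-coded here for illustration).
GROUNDED_PRICES = {"$120,000", "$500K"}

def find_ungrounded_prices(draft: str) -> list[str]:
    """Flag any dollar figure in the draft that was not grounded,
    i.e. a likely hallucinated price."""
    cited = re.findall(r"\$[\d,]+(?:K|M)?", draft)
    return [p for p in cited if p not in GROUNDED_PRICES]

draft = "Year-one license: $120,000. Expected savings: $75,000."
flags = find_ungrounded_prices(draft)  # "$75,000" was never grounded
```

A non-empty `flags` list blocks delivery and routes the draft back through the supervisor, which is why compliance issues are caught at the agent level rather than in human review rounds.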
Genie: Conversational Analytics Over Proposal Performance
One use case we didn't anticipate at the start, but which quickly became one of the most requested features, was giving sales leaders the ability to query proposal pipeline data conversationally.
With Genie connected to our Lakebase proposal tables, a VP of Sales can ask: 'Show me all proposals generated for manufacturing clients in Q1 where estimated deal value exceeded $500K', and get a structured answer in seconds, without touching a dashboard.
Genie turns Lakebase from a passive data store into an active intelligence layer for the sales leadership team.
What We Measured
The results from early enterprise deployments validated the architecture's core hypothesis:
| Metric | Before | After |
| --- | --- | --- |
| Proposal generation time | 3–5 days | 20–40 minutes |
| Factual accuracy (pricing/product) | Variable | Grounded via Lakebase |
| Persona customization | Manual, inconsistent | Automated via CRM + persona tables |
| Compliance review cycles | 2–3 rounds | Near-zero (caught at agent level) |
What This Opens Up
The architecture we built for proposal generation is a pattern, not a one-off. The same multi-agent framework, grounded in Lakebase operational data, orchestrated via Agent Bricks, and governed by Unity Catalog, applies to:
- Contract drafting and review for legal and procurement teams
- RFP response automation for organizations that respond to large volumes of tenders
- Customer success playbooks personalized to account health signals from Lakebase
- Investor and board materials synthesized from financial data in real time
Databricks has invested heavily in making the Lakehouse operational through Lakebase. This accelerator is a demonstration of what becomes possible when you combine that operational layer with structured multi-agent orchestration.
Get the Accelerator
KPI Partners is making this solution available to enterprise Databricks customers as a pre-built accelerator: the Agentic Proposal Generator. It deploys on your existing Databricks environment with no new infrastructure required. It is fully serverless, model-agnostic (OpenAI, Anthropic, Google, Meta), and governed by Unity Catalog end to end.
If your sales team is still writing proposals by hand, or your AI experiments are producing fluent-but-factually-unreliable drafts, we should talk.