
Case Study
Venture Capital LATAM: AI Accelerates Investment Intelligence
About the Company
A venture capital fund whose investment workflows depend on fast access to institutional knowledge — deal history, partner discussions, due diligence notes, internal decisions, and portfolio updates. As with many VC teams, the fund's operational context is distributed across the tools used daily by partners and analysts: Slack for conversations, Notion for decision records and internal notes, Google Drive for decks and documents, and Affinity CRM for deal tracking and relationships.
As the fund scales, maintaining continuity across these sources becomes essential for decision quality, execution speed, and onboarding new team members — without relying on manual information transfers.
The Challenges
The fund's internal knowledge was scattered across multiple systems, making it time-consuming to reconstruct context for answering a single question. Information existed in different formats and locations — Slack threads, Notion pages, Drive folders, and CRM records — and frequently, the team needed to switch between tools or "ask someone" to piece together the full story behind a deal, decision, or relationship.
Before the project, a typical context search took between 9 and 12 minutes per question. Beyond time loss, the fragmentation created concrete operational delays: important details could be overlooked, the historical rationale for decisions was not always easy to trace back to the original source, and onboarding new team members required additional effort to transmit knowledge that already existed — but was not consistently accessible.
At the same time, the solution needed to meet strict privacy and access control requirements. The fund required the assistant to operate entirely within its own AWS environment and that responses always be based solely on documents each user was authorized to view.
The Solution
In partnership with AWS, Elevata designed and implemented a GenAI-based knowledge assistant operating directly within Slack, capable of retrieving reliable, permission-aware context from the fund's internal systems.
Rather than training a custom model on the fund's data, Elevata implemented a Retrieval Augmented Generation (RAG) architecture. Under this approach, the assistant retrieves relevant internal content at query time and uses that retrieved context to generate the response, keeping data under control and respecting existing permissions.
Architecture decisions and model selection
Elevata evaluated multiple foundation model options available via Amazon Bedrock and SageMaker JumpStart, focusing on natural language understanding, contextual response generation, and secure integration with private corporate data.
Criteria prioritized the ability to summarize investment and business context clearly, support information retrieval from multiple sources with document-level security, and operate entirely within the fund's AWS account to meet privacy requirements — while offering a Slack-native experience to maximize adoption.
The final architecture combined:
Amazon Bedrock for language generation tasks
Cohere for semantic embedding generation
Amazon OpenSearch Service for vector search and metadata filtering
RBAC in OpenSearch, applied during retrieval to ensure each user accesses only authorized content
Amazon Cognito for JWT-based authentication, integrated with the permission model
This approach ensured access control was applied before any context reached the model, preventing accidental leakage and keeping responses anchored in sources the user is authorized to see.
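Permission-aware retrieval of this kind can be sketched as an OpenSearch k-NN query whose filter clause is built from the caller's identity before any vector search runs. The index, field, and group names below are illustrative, not the fund's actual schema:

```python
from typing import Any, Dict, List

def build_knn_query(query_vector: List[float],
                    user_groups: List[str],
                    k: int = 5) -> Dict[str, Any]:
    """Build an OpenSearch k-NN query that applies RBAC *before* retrieval.

    Only chunks whose `allowed_groups` metadata intersects the caller's
    groups are candidates, so unauthorized context never reaches the model.
    """
    return {
        "size": k,
        "query": {
            "knn": {
                "embedding": {          # hypothetical vector field name
                    "vector": query_vector,
                    "k": k,
                    "filter": {         # evaluated during the vector search
                        "terms": {"allowed_groups": user_groups}
                    },
                }
            }
        },
    }

# Example: an analyst in the "analysts" group issues a query
query = build_knn_query([0.1] * 1024, user_groups=["analysts"])
```

In practice the query body would be submitted with an OpenSearch client (e.g. `client.search(index=..., body=query)`), with `user_groups` derived from the Cognito-issued JWT rather than passed in by the caller.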
Building the knowledge layer with retrieval and metadata
Rather than fine-tuning an LLM, Elevata built a retrieval layer over the fund's existing content — documents, Slack threads, and CRM records — enriched through chunking, embeddings, and metadata tagging.
Content was indexed with a schema that enables filtering by source, owner or user group, tags, and date, improving retrieval quality and ensuring each response is traceable back to its original source.
A dedicated Amazon OpenSearch cluster was configured with k-NN vector indices to enable semantic retrieval, along with custom ingestion pipelines to embed and store documents as they were ingested. RBAC rules were tied to user identity, allowing the same question to yield different results based on permissions — without changing the user experience.
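An ingestion pipeline along these lines can be sketched as a chunking step plus metadata tagging per chunk. The chunk sizes, field names, and source labels below are assumptions for illustration; the embedding vector itself would be added by the embedding model before indexing:

```python
import hashlib
from typing import Dict, Iterator, List

def chunk_text(text: str, max_chars: int = 800, overlap: int = 100) -> Iterator[str]:
    """Split a document into overlapping chunks for embedding."""
    step = max_chars - overlap
    for start in range(0, max(len(text) - overlap, 1), step):
        yield text[start:start + max_chars]

def to_index_docs(text: str, source: str, owner_group: str,
                  tags: List[str], date: str) -> List[Dict]:
    """Attach retrieval metadata (source, group, tags, date) to each chunk."""
    docs = []
    for i, chunk in enumerate(chunk_text(text)):
        docs.append({
            "_id": hashlib.sha1(f"{source}:{i}".encode()).hexdigest(),
            "text": chunk,
            # an "embedding" field would be added here by the embedding model
            "source": source,              # e.g. "slack", "notion", "drive", "affinity"
            "allowed_groups": [owner_group],
            "tags": tags,
            "date": date,
        })
    return docs

docs = to_index_docs("Deal memo ..." * 200, source="notion",
                     owner_group="partners", tags=["deal-memo"], date="2024-05-01")
```

The per-chunk metadata is what makes both the RBAC filtering at query time and the traceability of answers back to their original source possible.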
Data integration with the fund's stack
To create a unified knowledge experience, Elevata synced the assistant with the fund's core systems:
Slack (channels and threads)
Notion (notes, policies, and decision records)
Google Drive (decks, documents, and spreadsheets)
Affinity CRM (deals, contacts, and due diligence records)
With these sources indexed in a common retrieval layer, the assistant could answer questions by combining cross-tool context — for example, retrieving Slack discussion history, validating the current deal stage in Affinity, and linking the response to the specific decision record in Notion that documented the rationale.
Runtime flow and iteration
In practice, users interact with the assistant directly in Slack. A natural language question triggers retrieval of relevant passages in OpenSearch, filtered via RBAC. The retrieved context is then sent to a model on Amazon Bedrock, which generates a response based exclusively on those sources.
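The generation side of that flow can be sketched as a prompt builder that constrains the model to the retrieved, permission-filtered passages. The prompt wording and the model ID in the comment are illustrative assumptions, not the fund's actual configuration:

```python
from typing import Dict, List

def build_grounded_prompt(question: str, passages: List[Dict]) -> str:
    """Assemble a prompt that restricts the model to retrieved context."""
    context = "\n\n".join(f"[{p['source']}] {p['text']}" for p in passages)
    return (
        "Answer the question using ONLY the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

# The prompt would then be sent to a model on Amazon Bedrock, e.g.:
#   import json, boto3
#   bedrock = boto3.client("bedrock-runtime")
#   response = bedrock.invoke_model(
#       modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # illustrative
#       body=json.dumps({...}),
#   )

passages = [{"source": "notion", "text": "Decision record: passed on Series A."}]
prompt = build_grounded_prompt("Why did we pass on the deal?", passages)
```

Tagging each passage with its source in the prompt is one simple way to support the traceability the fund required: the model can cite where a claim came from, and the answer can link back to the original record.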
Elevata conducted two iterative rounds of testing with analysts and partners to refine prompts, improve retrieval precision, and validate that responses consistently respected RBAC rules and the fund's internal tone. This iteration focused on accuracy, latency, and edge case handling, ensuring the assistant was useful in real investment workflows — not just controlled scenarios.
The Results
The fund now has a centralized, Slack-native assistant capable of delivering relevant internal context in seconds — without the team needing to switch between tools or rely on informal knowledge transfers. The impact was immediate on daily investment operations, where speed and continuity are fundamental.
After deployment, the fund observed measurable gains:
~75% reduction in time spent on search and internal alignment, accelerating decision-making
Response time under 2 seconds per query, enabling real-time use in Slack
94% understanding accuracy and 89% contextual coherence, according to internal evaluation
92% weekly usage by the core team, indicating high adoption among analysts and partners
85% first-question resolution rate, reducing repeated searches and manual follow-ups
Beyond speed gains, the assistant significantly improved onboarding and institutional knowledge continuity by making context easier to retrieve and verify. Since responses are always based on metadata-tagged retrieved documents, the team can trace each response back to its original source, strengthening documentation discipline and reducing dependence on individual memory.
Next steps are already mapped in a Phase 2 roadmap. With the permission-aware foundation established in Phase 1, the fund can expand the assistant to more proactive, action-oriented workflows, such as deal alerts, follow-up reminders, tag updates, and other execution loops directly in Slack — helping the team advance opportunities with less manual coordination.
Next step
Let's Design Your Next Success Case
We show how to apply cloud, data, and AI with governance to create measurable business impact.
Get in touch