Elevata
Brazilian Venture Capital: AI Assistant on AWS Accelerates Decisions

Case Study


November 5, 2025 · Venture Capital Brasil

About the Company

A venture capital fund whose investment team depends on fast access to high-quality internal context — deal history, partner discussions, meeting notes, and ongoing portfolio updates — to make decisions and advance opportunities with agility.

As with most VC teams, the fund's operations rely on multiple tools such as chat, documents, and CRM. The challenge was not the lack of information, but its excess, distributed across different systems, difficult to retrieve quickly, and complicated to validate under pressure — especially considering that access to sensitive deal data must be rigorously controlled.

The Challenges

The investment team needed a reliable way to search and synthesize internal knowledge without switching between tools or relying on informal "ask someone" workflows.

Critical information was scattered across Slack threads, Notion pages, Google Drive documents, and Affinity CRM records. This fragmentation made it difficult to build a single source of truth for deal history and references, while adding latency to day-to-day decision-making.

Before the project, finding the right context to answer a single question took an average of 8 to 10 minutes. The time spent was only part of the problem. Access control was equally critical: the fund needed a solution capable of delivering fast answers without violating permissions, ensuring that information retrieval and responses were always based solely on documents each user was authorized to view, in compliance with security and privacy policies.

The Solution

Elevata designed and implemented a GenAI-based knowledge assistant, operating directly within Slack and capable of retrieving reliable, permission-aware context from the fund's internal systems.

Rather than training a model on the fund's data, the solution was built with a Retrieval Augmented Generation (RAG) approach. In this model, the assistant retrieves relevant internal content at query time and uses that context to generate the response, keeping data under control and preventing unauthorized exposure.
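The retrieve-then-generate pattern can be sketched as follows. This is an illustrative outline, not the project's code: the `Passage` and `build_prompt` names are assumptions, and the actual Bedrock call is only noted in a comment.

```python
# Minimal RAG sketch: retrieved passages are injected into the prompt at
# query time, so no model is ever trained on the fund's data.
# Names (Passage, build_prompt) are illustrative, not from the project.
from dataclasses import dataclass

@dataclass
class Passage:
    source: str   # e.g. "slack", "notion", "drive", "affinity"
    text: str

def build_prompt(question: str, passages: list[Passage]) -> str:
    """Anchor the model's answer exclusively in retrieved internal context."""
    context = "\n\n".join(f"[{p.source}] {p.text}" for p in passages)
    return (
        "Answer using ONLY the context below. If the context is "
        "insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

# The assembled prompt would then be sent to a foundation model through the
# Amazon Bedrock runtime API (e.g. boto3's bedrock-runtime client); omitted here.
```

Because the context is assembled per request, revoking a user's access to a document immediately removes it from that user's future answers.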

Architecture decisions and model selection

Elevata began the project by evaluating different foundation models available via Amazon Bedrock and Amazon SageMaker JumpStart. The goal was not only generation quality but also deployment security and compatibility with a RAG architecture capable of applying access control at data retrieval time.

Evaluation criteria included language fluency and summarization quality, ability to anchor responses in corporate knowledge bases, ease of integration with the fund's stack, and support for scalable vector search with document-level access control.

The final architecture combined:

  • Amazon Bedrock for response generation

  • Amazon OpenSearch Service for vector search and data retrieval

  • Role-based access control (RBAC) in OpenSearch, applied during retrieval, ensuring results are filtered by user before reaching the model

  • Embeddings generated with Cohere, stored in k-NN indices in OpenSearch to support real-time semantic search

This architecture ensured high-quality responses, always anchored in trusted internal sources and aligned with the fund's permission model.
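A permission-aware retrieval query can be sketched as below. This is a hedged example, not the fund's actual schema: it assumes embeddings are stored in a `knn_vector` field named `embedding` and that each document carries an `allowed_groups` metadata field.

```python
# Sketch of a k-NN query body for OpenSearch with an RBAC filter applied
# alongside the vector search, so unauthorized documents never reach the model.
# Field names (embedding, allowed_groups) are assumptions.
def knn_query_with_rbac(query_vector: list[float],
                        user_groups: list[str],
                        k: int = 5) -> dict:
    return {
        "size": k,
        "query": {
            "bool": {
                # Permission filter: only documents visible to the caller
                "filter": [{"terms": {"allowed_groups": user_groups}}],
                # Semantic similarity over the embedding field
                "must": [{
                    "knn": {"embedding": {"vector": query_vector, "k": k}}
                }],
            }
        },
    }
```

The key design point is that filtering happens inside the search engine, during retrieval, rather than as a post-processing step on results the model has already seen.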

Building the knowledge layer via retrieval (not training)

Rather than fine-tuning an LLM, Elevata structured the fund's knowledge as an indexed dataset and used retrieval to provide runtime context. Indexed content included meeting notes, investment memos, Slack threads, and investor updates.

Documents were enriched with metadata — such as author, tags, teams, creation date, and permissions — enabling retrieval to be filtered precisely, not only by semantic similarity but also by scope and access.
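The shape of such an enriched record might look like the sketch below. Field names are assumptions chosen to mirror the metadata listed above, not the project's real index mapping.

```python
# Illustrative shape of an indexed record: the text plus the metadata that
# enables scoped, permission-aware retrieval. Field names are assumptions.
from datetime import date

def to_index_record(text: str, source: str, author: str,
                    teams: list[str], tags: list[str],
                    created: date, allowed_groups: list[str]) -> dict:
    return {
        "text": text,
        "source": source,                  # slack / notion / drive / affinity
        "author": author,
        "teams": teams,
        "tags": tags,
        "created_at": created.isoformat(),
        "allowed_groups": allowed_groups,  # drives the RBAC filter at query time
        # "embedding": ...                 # vector added at ingestion (Cohere)
    }
```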

Data ingestion and platform integrations

To connect the fund's systems into a usable knowledge base, Elevata implemented ingestion pipelines using Airbyte and AWS Lambda. These pipelines collected data from:

  • Slack (channels and threads)

  • Notion (pages and blocks)

  • Google Drive (documents and spreadsheets)

  • Affinity CRM (deals and pipeline context)

Each source was normalized and indexed in OpenSearch, with metadata preserving origin and improving filtering during retrieval.
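The normalization step of such a pipeline can be sketched as a Lambda-style handler. The event shape and field names below are assumptions for illustration; the real connectors (Airbyte) and the bulk indexing call into OpenSearch are only noted in comments.

```python
# Hedged sketch of an ingestion step: records arriving from a connector
# (e.g. Airbyte) are normalized to one common shape before indexing.
# The event schema and field names are assumptions, not the project's own.
def normalize(record: dict, source: str) -> dict:
    return {
        "id": f"{source}:{record['id']}",   # preserve origin in the document id
        "source": source,
        "text": record.get("text") or record.get("content", ""),
        "metadata": {k: record[k] for k in ("author", "created_at")
                     if k in record},
    }

def handler(event: dict, context=None) -> dict:
    """Lambda-style entry point: normalize each raw record. Bulk indexing
    into OpenSearch would follow here (omitted)."""
    docs = [normalize(r, event["source"]) for r in event["records"]]
    return {"indexed": len(docs), "docs": docs}
```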

Retrieval and response flow in Slack

At runtime, the assistant follows a clear, permission-aware flow:

  1. The user asks a natural language question in Slack.

  2. The assistant authenticates the user via Amazon Cognito (JWT-based authentication).

  3. The system performs a semantic vector search in OpenSearch, applying RBAC filters to ensure only authorized documents are retrieved.

  4. The selected passages are sent to a model on Amazon Bedrock, which generates a coherent response personalized to the user's question and based exclusively on retrieved internal context.

This flow allowed the team to ask a single question and receive a response combining information from multiple sources — for example, synthesizing a Slack discussion, an investment memo in Notion, and the latest deal status in Affinity — without switching between tools or manually assembling the narrative.
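The control flow of the four steps above can be outlined with the AWS services injected as callables, which makes the ordering visible without live credentials. All names here are illustrative, not the project's code.

```python
# End-to-end shape of the runtime flow, with the external calls injected as
# callables so the permission-aware ordering is explicit. Illustrative only.
from typing import Callable

def answer(question: str,
           authenticate: Callable[[], list[str]],           # Cognito: JWT -> groups
           retrieve: Callable[[str, list[str]], list[str]], # OpenSearch + RBAC
           generate: Callable[[str, list[str]], str]        # Bedrock model
           ) -> str:
    groups = authenticate()                # 2. resolve the caller's permissions
    passages = retrieve(question, groups)  # 3. RBAC-filtered vector search
    return generate(question, passages)    # 4. generation grounded in passages
```

Note that authentication precedes retrieval, so the permission filter is derived from the verified identity rather than from anything the user typed.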

Iteration and testing

Elevata conducted two rounds of acceptance testing with the fund's partners and analysts. Tests focused on accuracy, latency, and edge case handling. After each round, prompt templates and retrieval filters were refined to increase first-response resolution rate and reduce false positives in content retrieval.

The Results

The assistant significantly reduced the operational friction associated with the fund's internal knowledge work, centralizing search and synthesis in a single Slack experience without compromising rigorous per-user access controls.

After deployment, the fund observed measurable gains:

  • ~80% reduction in time spent searching for context, accelerating daily investment operations

  • Average latency below 2 seconds per query, enabling rapid iteration in Slack conversations

  • 93% understanding accuracy and 88% response coherence, according to internal evaluation

  • 85% first-response resolution, reducing the need for additional searches

  • Over 90% weekly usage by the team within three weeks, indicating high adoption among partners and analysts

Beyond speed and usability, the solution strengthened the internal documentation culture by improving traceability. Since the assistant always retrieves information from metadata-tagged indexed sources, users can trace responses directly back to originating documents and discussions.

Next step

Let's Design Your Next Success Case

We show how to apply cloud, data, and AI with governance to create measurable business impact.

Get in touch