
Konnect Research Cloud (KRC)

How We Keep Your Data Private, Secure And Completely Yours

Executive Summary

Konnect Insights' Konnect Research Cloud (KRC) brings the power of large language models (LLMs), including Claude and GPT models, directly to your brand's customer intelligence workflows. But there is a critical question every enterprise must ask before deploying AI on its customer data:

The Core Question

How do you get the benefits of frontier AI without handing your proprietary brand data, customer conversations, and competitive insights to a third-party AI provider?

This document answers that question. It explains exactly how KRC is built, what data AI models do and do not see, and how the KRC Trust Layer (Konnect Insights' proprietary data governance architecture) ensures that every insight is generated from your data while your data remains entirely under your control.

The Challenge: AI Power Vs Data Privacy

Modern AI assistants are transformative. They can summarise thousands of customer messages, identify trending complaints, surface agent performance issues, and draft polished responses in seconds. But they come with a challenge that is rarely discussed openly:

WITHOUT A TRUST LAYER

  • Raw customer messages sent to AI APIs
  • Brand data potentially used for model training
  • No isolation between customers
  • Compliance and regulatory risk
  • Unlimited AI access with no authentication

WITH THE KRC TRUST LAYER

  • AI receives only aggregated, structured query results
  • Data never enters AI training pipelines
  • Hard data walls between every brand
  • Audit-ready, enterprise-grade security
  • Every request authenticated and tenant-isolated

How KRC Is Built: The Architecture

KRC is built on three interlocking layers. Each layer has a specific role in ensuring that AI responses are both intelligent and secure.

Layer 1: Your Brand’s Private Data Store

Every brand on Konnect Insights has a dedicated, isolated index. No two brands share a data partition. Your customer mentions, social media messages, ticket data, agent replies, sentiment scores, and engagement metrics all live in a single namespace keyed exclusively to your account.

Technical Detail

Each brand is assigned a unique account ID, and all of its data is stored in a dedicated index keyed to that ID. This isolation is enforced at the infrastructure level, not just at the application level, making cross-brand data access architecturally impossible rather than merely policy-restricted.
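As an illustration, the index-resolution rule described above can be sketched in Python. The `krc-brand-{id}` naming pattern and the function name are hypothetical stand-ins, not the actual Konnect Insights implementation:

```python
# Hypothetical sketch of per-brand index resolution. The naming
# convention "krc-brand-{id}" is an illustrative assumption.

def resolve_brand_index(account_id: str) -> str:
    """Map an authenticated account ID to its dedicated data index.

    The index name is derived solely from the account ID, so a query
    can never be pointed at another brand's partition.
    """
    if not account_id or not account_id.isalnum():
        raise ValueError("invalid account id")
    return f"krc-brand-{account_id.lower()}"
```

Because the index name is a pure function of the authenticated account ID, there is no code path through which a request could name another brand's partition.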

Layer 2: The KRC Trust Layer

This is the heart of KRC's security model: a secure, two-way data gateway that lets AI models work with your context without ever receiving your raw data. It is Konnect Insights' own equivalent of the trust-layer pattern pioneered by platforms such as Salesforce's Agentforce.

The KRC Trust Layer is a Model Context Protocol (MCP) server that sits between your brand data and any AI model. It acts as an intelligent, authenticated intermediary:

  • It receives a natural language question from the AI (e.g. "What were the top complaint categories this week?")
  • It translates that question into a precise DSL query targeting only your brand’s data index
  • It fetches the results from your private store
  • It returns structured, minimal data back to the AI, only what is needed to answer the question
  • It never passes raw data or full message archives to any AI model
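The "structured, minimal data" principle in the steps above can be sketched as a small aggregation helper: raw query hits are reduced to counts before anything reaches the AI. The field name and the `Counter`-based summary are illustrative assumptions, not the production code:

```python
from collections import Counter

# Sketch of the minimal-data principle: the gateway returns an
# aggregate summary, not raw messages. The "category" field name
# and result shape are assumed for illustration.

def summarise_for_ai(documents, field="category", limit=15):
    """Reduce raw query hits to an aggregate the AI can reason over."""
    counts = Counter(doc[field] for doc in documents)
    return {
        "total_matched": len(documents),   # how many hits the query found
        "top": counts.most_common(limit),  # ranked categories, capped
    }
```

A question like "top complaint categories this week" can then be answered from the ranked counts alone, without the AI ever seeing a single full message.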

The Agentforce Parallel

Just as Salesforce's Agentforce Trust Layer acts as a governed bridge between ChatGPT and Salesforce CRM data, ensuring proprietary business context is available to AI without exposing raw records, the KRC Trust Layer acts as the governed bridge between AI models and your brand's social and CX intelligence. Your data is the fuel. The Trust Layer is the firewall.

Layer 3: Token-Gated Authentication

Every single request to the KRC Trust Layer must carry a valid, encrypted API token. There is no anonymous access. The authentication flow works as follows:

Step 1: A token is received with each request via an API query parameter.
Step 2: The token is validated remotely against Konnect Insights' secure server.
Step 3: The token is AES-GCM decrypted locally to extract the brand's Account ID/Group ID, the key to its data partition.
Step 4: The Account ID/Group ID is bound to the request context and cannot be altered by any external party, including AI models.
Step 5: All DSL queries are executed exclusively against that brand's index; data from other brands is physically unreachable.
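The five steps above can be sketched in Python. Since the production scheme uses AES-GCM (which requires a third-party library), this stand-in uses an HMAC-SHA256 signature from the standard library to show the same shape: validate the token, extract the Group ID, or fail closed with a 403. Every name and the token format here are illustrative:

```python
import base64
import hashlib
import hmac

# Hedged sketch of the Steps 1-5 token flow. Production KRC uses
# AES-GCM decryption; this demo substitutes an HMAC signature so the
# sketch stays stdlib-only. Names and token layout are illustrative.

SECRET = b"demo-secret-key"  # placeholder; never hard-code real keys

def issue_token(group_id: str) -> str:
    """Mint a signed token that embeds the brand's Group ID."""
    sig = hmac.new(SECRET, group_id.encode(), hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(f"{group_id}:{sig}".encode()).decode()

def authenticate(token: str) -> str:
    """Return the bound Group ID, or raise PermissionError (HTTP 403)."""
    try:
        group_id, sig = base64.urlsafe_b64decode(token).decode().split(":", 1)
    except Exception:
        raise PermissionError(403)  # malformed token: reject, query nothing
    expected = hmac.new(SECRET, group_id.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise PermissionError(403)  # forged or tampered token
    return group_id
```

The key property mirrored here is fail-closed behaviour: a token that does not verify raises before any query logic runs, matching the document's guarantee that invalid tokens halt execution with no data queried.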

KRC TRUST LAYER ARCHITECTURE

[Figure: KRC Trust Layer architecture diagram]

What AI Models Actually See

This is the section most enterprise customers care about. Here is a precise, transparent breakdown of what Claude, GPT, or any other AI model receives when KRC is in use:

  • Your raw customer messages: Only when directly relevant to answering a query, and only up to a configurable limit (default: 15 documents). Never as a bulk export.
  • Competitor brand data: Never. The index namespace enforces hard isolation between brands.
  • Historical full archive: Never in bulk. AI receives only what is returned by a scoped query.
  • Aggregated metrics & counts: Yes; this is the primary form of data AI uses: sentiment counts, ticket volumes, TAT averages, agent performance stats.
  • DSL query: AI constructs the query; KRC executes it. AI never has direct database access.

The KRC Data Flow: Step By Step

Below is a plain-language walkthrough of what happens when an analyst asks KRC a question like "What were the top complaint themes for my brand last week?"

1. The analyst types their question.
2. The request is routed to the KRC MCP server with the analyst's encrypted API token.
3. The KRC Trust Layer validates the token remotely and decrypts it to extract the brand's Group ID. Invalid or missing tokens are immediately rejected with a 403 error.
4. The AI model receives the natural language question and the KRC tool definition, which describes available data fields but contains zero actual customer data.
5. The AI generates a precise DSL query (structured code, not free text). It never touches the database directly.
6. The KRC Trust Layer receives the DSL query, appends the brand's Group ID as the mandatory index target, and executes the query against your private data partition.
7. Your index returns matching results drawn only from your brand's data.
8. The KRC Trust Layer passes a limited, structured result set (default max: 15 documents) back to the AI model.
9. The AI model synthesises the results into a natural language answer with insights.
10. The analyst receives a clear, accurate, data-backed response, powered by AI but grounded entirely in their own brand's data.
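Steps 5 through 8, where the Trust Layer fixes the target index and caps the result set regardless of what the AI asked for, could look roughly like this sketch. The DSL field names, the `krc-brand-{id}` naming, and the dict-based store are illustrative assumptions, not the production schema:

```python
# Sketch of Steps 5-8: the AI proposes a DSL query, but the Trust
# Layer alone decides the target index and enforces the document cap.
# Index naming, DSL fields, and the in-memory store are assumed.

MAX_DOCS = 15  # default hard cap on documents returned to the AI

def scope_query(ai_dsl: dict, group_id: str) -> dict:
    """Step 6: override whatever index or size the AI asked for."""
    scoped = dict(ai_dsl)
    scoped["index"] = f"krc-brand-{group_id}"           # mandatory target
    scoped["size"] = min(int(ai_dsl.get("size", MAX_DOCS)), MAX_DOCS)
    return scoped

def run_scoped(scoped: dict, store: dict) -> list:
    """Steps 7-8: fetch only from this brand's partition, capped."""
    return store.get(scoped["index"], [])[: scoped["size"]]
```

Note the design point: even a query that names another brand's index or requests thousands of documents is silently rewritten before execution, so scoping does not depend on the AI behaving well.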

Security By Design: Key Technical Guarantees

The KRC Trust Layer enforces the following security guarantees at the code and infrastructure level, not just by policy:

  • Tenant Isolation: Each brand's data lives in a dedicated index. The index name is resolved from the authenticated token; the AI model cannot influence or override it.
  • Token-Gated Authentication: All non-health endpoints require a valid token. Requests with missing or invalid tokens receive a 403 error and execution halts immediately; no data is queried.
  • AES-GCM Token Encryption: Tokens are encrypted using AES-GCM, a modern authenticated encryption standard. Tampered or forged tokens fail decryption and are rejected.
  • No Direct Database Access for AI: The AI model generates query code (DSL JSON), but only the KRC Trust Layer executes it. AI cannot bypass query limits, change the target index, or request raw bulk exports.
  • Hard Document Limits: Result sets returned to the AI are capped at a configurable limit (default: 15 documents), so no single request can trigger bulk data exposure.
  • No Data Persistence in AI: KRC uses stateless, per-request AI interactions. No brand data is stored in the AI model's memory or session beyond the current request.
  • Context Variable Isolation: The Group ID is stored in a Python ContextVar, an async-safe, request-scoped variable that prevents Group ID leakage between concurrent requests.
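The ContextVar pattern named in the last guarantee can be demonstrated with a minimal asyncio sketch. The handler names are illustrative; the isolation behaviour is standard Python, since each task created by `asyncio.gather` runs in its own copy of the context:

```python
import asyncio
from contextvars import ContextVar

# Sketch of request-scoped Group ID isolation: each request binds its
# own value, and concurrent requests cannot observe each other's.
# Handler names are illustrative.

current_group_id: ContextVar[str] = ContextVar("current_group_id")

async def handle_request(group_id: str) -> str:
    current_group_id.set(group_id)   # bind for this request only
    await asyncio.sleep(0)           # yield so requests interleave
    return current_group_id.get()    # unaffected by the other request

async def serve_two():
    # Two concurrent "requests" for different brands.
    return await asyncio.gather(
        handle_request("brand-a"),
        handle_request("brand-b"),
    )
```

Even though both coroutines run interleaved on the same event loop, each reads back its own Group ID, which is exactly the leakage-prevention property the table describes.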

The KRC Trust Principle

When Salesforce announced Agentforce Sales in ChatGPT, they articulated a principle that resonates deeply with how we have built KRC:

"Even the most powerful AI needs to be fluent in the unique context of your business – your pipeline, your history with a client, and your strategic priorities."

- Salesforce on Agentforce

At Konnect Insights, this is our operating philosophy for KRC. AI is only useful when it understands your brand. It is only trustworthy when it cannot see anyone else’s. The KRC Trust Layer is how we deliver both at once.

The KRC Trust Principle

Your brand's intelligence (your customer conversations, your sentiment data, your CX metrics) is your most competitive asset. KRC gives you the intelligence of frontier AI models while ensuring that asset remains locked in a vault only you hold the key to. AI gets the context. You keep the data.

What Makes KRC Different From Generic AI Tools

  • Data stays private: Generic AI (ChatGPT or Claude used directly) receives your data straight through its API; KRC gives the AI only query results, never raw data.
  • Tenant isolation: Generic AI has no concept of tenants; KRC hard-enforces per-brand data walls.
  • Real-time insights: Generic AI works from pasted snapshots; KRC runs live queries against your index.
  • Authentication & access control: Generic AI offers account-level control only; KRC access is token-gated, per-brand, and cryptographically enforced.
  • Configurable result limits: Generic AI imposes no cap on data exposure; KRC hard-caps the documents returned to the AI.
  • CX-specific intelligence: Generic AI is general-purpose; KRC is purpose-built for omni-channel CX intelligence.

Summary: Enterprise AI, Enterprise Trust

KRC is an enterprise-grade AI intelligence layer purpose-built for the customer experience domain, with security architecture that matches the sensitivity of the data it handles.

The KRC Trust Layer ensures that:

  • AI models receive only scoped, structured query results, never raw data or bulk archives
  • Every brand's data sits behind hard, infrastructure-enforced tenant walls
  • Every request is authenticated with an encrypted, cryptographically verified token
  • Result sets are hard-capped, and no brand data persists in the AI beyond the current request

Intelligence at Scale. Privacy by Architecture.

This Is Konnect Insights’ KRC Trust Layer.