GenAI Data Processing

Modified on Mon, 8 Jul at 12:15 PM

Approach

As a provider of customer service systems to enterprise organizations, Talkative takes the following approach with GenAI:

• Privacy conscious - where possible, we maximise the privacy of end users and organisations by limiting the data shared with AI providers

• Choice of model - we are AI model agnostic and will always give you a choice of the models you use

• Your controls - where possible, we give you the controls to manage your data, your way


Applications of GenAI

There are various uses of GenAI in the Talkative platform, broadly split into two categories:

1. Knowledgebase-driven use cases: e.g. chatbot responses, copilot - where you can choose your model

2. General use cases: e.g. interaction summaries - currently no model choice

GenAI Feature                    Choose your model?
GenAI chatbot                    Yes
Copilot - Autocomplete           Yes
Copilot - Suggested Responses    Yes
Copilot - Navi                   Yes
Agent message rephrase           No
AI insights reports              No
AI phrase matching               No
AI agent training                No


1. Knowledgebase-driven use case architecture


Setup:

• First, you upload your knowledgebase source material to Talkative

• This creates a text file within the Talkative application, stored in its regional AWS S3 bucket

• For larger datasets, Weaviate is used to create a vector store of the knowledgebase data

• Select your LLM for use at runtime

• Configure the Talkative application such that the feature can be used (e.g. create a Chatbot config, or a Copilot config)
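To illustrate the preparation step above, the sketch below shows one common way source material is split into overlapping chunks before being embedded into a vector store such as Weaviate. The function name, chunk size, and overlap are illustrative assumptions, not Talkative's actual implementation.

```python
# Hypothetical sketch: split knowledgebase text into overlapping chunks
# suitable for embedding into a vector store. Sizes are illustrative only.

def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping character chunks for embedding."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks
```

Overlap between consecutive chunks helps preserve context that would otherwise be lost at chunk boundaries when the passages are retrieved at runtime.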


Runtime:

• The customer's message and conversation transcript are added, along with the knowledgebase content, to a prompt that is sent to the LLM.

• AWS Cohere is used as the embedding model

• The response from the LLM is added to the system's response (to the customer or within a copilot message)
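The runtime steps above can be sketched as a simple prompt-assembly function: the customer's message, the conversation transcript, and retrieved knowledgebase passages are combined into a single prompt for the LLM. All names and the prompt wording here are assumptions for illustration, not Talkative's actual prompt.

```python
# Illustrative sketch of runtime prompt assembly for a knowledgebase-driven
# feature. Wording and structure are assumptions, not the real prompt.

def build_prompt(customer_message: str, transcript: list[str],
                 kb_passages: list[str]) -> str:
    """Combine retrieved knowledgebase passages and conversation history
    into one prompt string for the LLM."""
    context = "\n".join(f"- {p}" for p in kb_passages)
    history = "\n".join(transcript)
    return (
        "Answer the customer using only the knowledgebase context below.\n\n"
        f"Knowledgebase context:\n{context}\n\n"
        f"Conversation so far:\n{history}\n\n"
        f"Customer: {customer_message}\nAgent:"
    )
```

Constraining the model to the supplied context is what lets the knowledgebase, rather than the model's training data, drive the response.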


2. General use case architecture

Setup:

• Enable the feature within Talkative settings

• Currently there is no option to choose your LLM for these features; they are all powered by OpenAI


Runtime:

• Depending on the feature, the text is sent to the LLM (OpenAI)

• The response from the LLM is added to the system's response, e.g. creating an interaction summary, or rephrasing a message
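As a minimal sketch of the runtime flow above, the function below builds the kind of request a feature like interaction summaries might send to OpenAI's Chat Completions API. The model name and prompt wording are assumptions; only the request payload is constructed here, and the HTTP call itself is out of scope.

```python
# Hypothetical sketch: build a summarisation request payload in the shape
# of OpenAI's Chat Completions API. Model and prompt text are assumptions.

def summary_request(transcript: list[str], model: str = "gpt-4o-mini") -> dict:
    """Build a chat-completion payload asking the LLM to summarise
    a customer service interaction."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Summarise this customer service interaction in 2-3 sentences."},
            {"role": "user", "content": "\n".join(transcript)},
        ],
    }
```

The LLM's reply would then be stored as the interaction summary, matching the flow described above.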



Data Processing


OpenAI: OpenAI does not train its models on your data. Data is retained for 30 days. Please contact us if you require a different data processing duration. Please familiarise yourself with OpenAI's privacy policies: https://openai.com/policies/api-data-usage-policies


AWS Bedrock (Llama, Anthropic models): AWS does not share your data with the model providers, and Bedrock does not store or log your prompts and completions.
https://docs.aws.amazon.com/bedrock/latest/userguide/data-protection.html
