Approach
As a provider of customer service systems to enterprise organizations, Talkative takes the following approach with GenAI:
• Privacy conscious - where possible, we maximize the privacy of end users and organizations by limiting the data shared with AI providers
• Choice of model - we are AI model agnostic and will always give you a choice of the models you use
• Your controls - where possible, we give you the controls to manage your data, your way
Applications of GenAI
There are various uses of GenAI in the Talkative platform, broadly split into two categories:
1. Knowledgebase-driven use cases: e.g. chatbot responses, copilot - where you can choose your model
2. General use cases: e.g. interaction summaries - currently no model choice
| GenAI Feature | Choose your model? |
| --- | --- |
| GenAI chatbot | Yes |
| Copilot - Autocomplete | Yes |
| Copilot - Suggested Responses | Yes |
| Copilot - Navi | Yes |
| Agent message rephrase | No |
| AI insights reports | No |
| AI phrase matching | No |
| AI agent training | No |
1. Knowledgebase-driven use case architecture
Setup:
• First, you upload your knowledgebase source material to Talkative
• This creates a text file within the Talkative application, stored in its regional AWS S3 bucket
• For larger datasets, Weaviate is used to create a vector store of the knowledgebase data
• Select your LLM for use at runtime
• Configure the Talkative application such that the feature can be used (e.g. create a Chatbot config, or a Copilot config)
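The setup steps above can be sketched in Python. This is a minimal illustration only: a toy hash-based embedding stands in for the real embedding model, and a plain Python list stands in for Weaviate; the function names are hypothetical.

```python
# Sketch of knowledgebase setup: split source text into chunks, embed each
# chunk, and keep the (chunk, vector) pairs as a simple in-memory store.
# In production this is a real embedding model + Weaviate; here both are toys.

def chunk_text(text: str, chunk_size: int = 200) -> list[str]:
    """Split the knowledgebase source text into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]

def toy_embed(text: str, dims: int = 8) -> list[float]:
    """Stand-in embedding: bag-of-words hashed into a fixed-size vector."""
    vec = [0.0] * dims
    for word in text.lower().split():
        vec[hash(word) % dims] += 1.0
    return vec

def build_vector_store(knowledgebase: str) -> list[tuple[str, list[float]]]:
    """Embed each chunk; the resulting pairs act as the 'vector store'."""
    return [(chunk, toy_embed(chunk)) for chunk in chunk_text(knowledgebase)]
```

For smaller datasets, the whole knowledgebase text can simply be placed in the prompt and the vector store step skipped entirely.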
Runtime:
• The customer's message and the conversation transcript are added, along with the relevant knowledgebase content, to a prompt that is sent to the LLM
• Cohere (hosted on AWS) is used as the embedding model when retrieving relevant knowledgebase content
• The response from the LLM is added to the system's response (to the customer, or within a copilot message)
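The runtime flow above can be sketched as retrieval followed by prompt assembly. This is a hedged illustration, not Talkative's implementation: `toy_embed` stands in for the Cohere embedding model, the list-based store for Weaviate, and the prompt template is invented for the example.

```python
import math

def toy_embed(text: str, dims: int = 8) -> list[float]:
    """Stand-in for the real embedding model (Cohere in production)."""
    vec = [0.0] * dims
    for word in text.lower().split():
        vec[hash(word) % dims] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, store: list[tuple[str, list[float]]],
             top_k: int = 2) -> list[str]:
    """Return the top_k knowledgebase chunks most similar to the query."""
    qv = toy_embed(query)
    ranked = sorted(store, key=lambda item: cosine(qv, item[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:top_k]]

def build_prompt(customer_message: str, transcript: str,
                 kb_chunks: list[str]) -> str:
    """Assemble the prompt sent to the LLM: knowledgebase + transcript + message."""
    return ("Knowledgebase:\n" + "\n".join(kb_chunks) + "\n\n"
            "Conversation so far:\n" + transcript + "\n\n"
            "Customer: " + customer_message + "\nAgent:")
```

The LLM then completes the prompt, and that completion becomes the chatbot reply or copilot suggestion.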
2. General use case architecture
Setup:
• Enable the feature within Talkative settings
• There is currently no option to choose your LLM for these features; they are all powered by OpenAI
Runtime:
• Depending on the feature, the relevant text (e.g. the interaction transcript) is sent to the LLM (OpenAI)
• The response from the LLM is added to the system's response, e.g. creating an interaction summary, or rephrasing a message
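The general use-case flow is simpler, since no retrieval step is involved. A minimal sketch, assuming a `llm_call` stub in place of the real OpenAI API call and an invented prompt wording:

```python
# Sketch of a general use case (interaction summary): wrap the transcript
# in an instruction and send it to the LLM. `llm_call` is a hypothetical
# stand-in for the actual OpenAI API call.

def build_summary_prompt(transcript: str) -> str:
    """Wrap the interaction transcript in a summarisation instruction."""
    return ("Summarise the following customer interaction in 2-3 sentences, "
            "noting the customer's issue and any resolution:\n\n" + transcript)

def summarise_interaction(transcript: str, llm_call) -> str:
    """Send the summary prompt to the LLM and return its response."""
    return llm_call(build_summary_prompt(transcript))
```

Other general features (message rephrase, phrase matching) follow the same shape with a different instruction around the text.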
Data Processing
OpenAI: OpenAI does not train its models on your data. Data is stored for 30 days. Please contact us if you require a different data processing duration. Please familiarise yourself with OpenAI's privacy policies: https://openai.com/policies/api-data-usage-policies
AWS Bedrock (Llama, Anthropic models): AWS does not share the data with the model providers, and Bedrock does not store or log your prompts and completions. https://docs.aws.amazon.com/bedrock/latest/userguide/data-protection.html