To break out of the conversation, specific intents need to be defined. When one of those intents is detected, the conversation with the Generative AI Module ends and the flow continues with the next module.
If the resources contain no accurate answer for the given input and no intent is recognized, the Generative AI Module will generate an answer itself, using ChatGPT's large language model and respecting its configured persona.
On the AI&NLP page, the user needs to add a new AI Provider for Generative AI. A detailed explanation can be found in the document: “How to add Generative AI Provider”.
A Knowledge Base is a library for a specific topic. Under a KB you can have different sources, just as a library holds different books and materials, all related to the main topic.
Under Company Management, go to the Knowledge Base tab and click “ADD KNOWLEDGE BASE”.
You can view and edit a Knowledge Base you created to add the context that the Generative AI Module will use.
You can simply add, edit, and delete resources in the UI.
Selecting the correct language matters: it improves how efficiently the document is processed.
b) Intent Resources
Intents can be added as a type of document.
Make sure you select the type as Intent.
Utterances should be added as in the example. After you add the utterances and save, the Generative AI Module will detect them as an intent and route the flow to the connected next module.
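As a simplified sketch of this routing behavior (the actual module presumably uses its NLU models rather than exact matching, and the intent names below are hypothetical), detecting an intent against a list of utterances might look like:

```python
def detect_intent(user_input, intents):
    """Return the intent whose utterance list contains the normalized user input,
    or None so the flow can fall back to Knowledge Base / LLM answering."""
    text = user_input.strip().lower()
    for intent_name, utterances in intents.items():
        if text in (u.lower() for u in utterances):
            return intent_name
    return None

# Hypothetical intent resource with a few example utterances.
intents = {"talk_to_agent": ["talk to an agent", "human please", "connect me to support"]}

print(detect_intent("Human please", intents))         # matched: flow moves to the connected module
print(detect_intent("what are your prices?", intents))  # no intent: Generative AI answers instead
```

When no intent matches, the module answers from the Knowledge Base or the LLM as described above.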
An Auto Generative AI module can be added to your flow wherever you expect the user to enter input after a bot message. When the user enters text, it is processed by the Generative AI module: according to the persona and Knowledge Base, it will either reply to the user's message or move on through the flow connection if a related intent is detected.
b) Set a persona and select the language for your Generative AI module.
Avoid using too many words in the persona. It should be a clear description focused on the AI's chat style. Be careful when entering specific rules under the persona, because rule descriptions can also affect performance in unintended ways.
c) Configure the settings of the Generative AI module; refer to the “Generative AI Settings Options Description” at the end of this document.
d) Select a connection for each of the added intents and entities you want to detect, so that when an added intent is detected, the flow continues with the connected module.
e) Set the OpenAI intent settings as desired (optional).
On the other hand, when you decrease the temperature, it makes the model more conservative and predictable, generating responses that are more coherent and consistent with what it has learned from the training data. This means the model is more likely to generate responses that are relevant to the context and grammatically correct, but it may also be less creative and more repetitive.
In summary, changing the temperature on OpenAI playground allows you to adjust the balance between creativity and consistency in the language generated by the model, depending on your specific needs or preferences.
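Conceptually, temperature rescales the model's raw scores (logits) before they are turned into probabilities. The sketch below uses made-up toy scores, not real model output, to show how a low temperature sharpens the distribution toward the top candidate and a high temperature flattens it:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw scores to probabilities; temperature rescales the logits first."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy scores for three candidate next words.
logits = [2.0, 1.0, 0.5]

cold = softmax_with_temperature(logits, 0.2)  # low temperature: sharp, predictable
hot = softmax_with_temperature(logits, 2.0)   # high temperature: flat, more random

# The top candidate dominates at low temperature and loses dominance at high temperature.
print(round(cold[0], 3), round(hot[0], 3))
```

At sampling time, the model draws the next word from whichever of these distributions the temperature produced.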
Maximum Length:
The "maximum length" option on OpenAI playground sets a limit on the length of the text that the model can generate in response to a prompt.
When you increase the maximum length, it allows the model to generate longer and more detailed responses. This means the model can provide more information and context in its answers, but it may also result in longer wait times for the response to be generated.
On the other hand, when you decrease the maximum length, it limits the amount of text the model can generate. This means the response may be more concise and to the point, but it may also lack sufficient detail or context.
In summary, changing the maximum length on OpenAI playground allows you to control the length and level of detail in the language generated by the model, depending on your specific needs or preferences.
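As a rough sketch (not the actual OpenAI decoding loop), the maximum length can be seen as a hard cap on the number of iterations of the generation loop:

```python
def generate(next_token_fn, max_tokens):
    """Minimal generation loop: request tokens until the maximum length is reached."""
    tokens = []
    for _ in range(max_tokens):
        tokens.append(next_token_fn(tokens))
    return tokens

# A toy "model" that simply emits the index of the next token.
counter_model = lambda history: len(history)

print(generate(counter_model, 5))  # generation stops after exactly 5 tokens
```

In the real API the loop can also end earlier, for example when the model emits an end-of-text token or a stop sequence.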
Stop Sequences:
The "stop sequences" option on OpenAI playground allows you to specify certain words or phrases that, if generated by the model, will cause it to stop generating text.
When you add stop sequences, the model will generate text until it encounters one of the specified words or phrases. This means you can control the output of the model and ensure that it does not generate responses that are irrelevant or inappropriate.
For example, if you are using the model to generate product descriptions and you don't want words like "expensive" or "outdated" to appear, you can add those words as stop sequences. The model will then stop generating as soon as it produces one of them, so the output is cut off before the stop word is included. Note that this truncates the response at that point; it does not steer the model away from the topic.
In summary, changing the stop sequences on OpenAI playground allows you to specify words or phrases that will cause the model to stop generating text, helping you to control the output and ensure that it meets your specific requirements.
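A minimal sketch of how a stop sequence truncates output: the first stop sequence found, plus everything after it, is dropped from the generated text. The example strings are invented for illustration:

```python
def truncate_at_stop(text, stop_sequences):
    """Cut generated text at the first occurrence of any stop sequence."""
    cut = len(text)
    for seq in stop_sequences:
        idx = text.find(seq)
        if idx != -1 and idx < cut:
            cut = idx
    return text[:cut]

generated = "A sturdy, modern desk lamp. expensive but worth it."
print(truncate_at_stop(generated, ["expensive", "outdated"]))
# Everything from the first stop sequence onward is removed.
```

If no stop sequence occurs, the text is returned unchanged.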
Top P:
The "Top P" option on OpenAI playground (also known as nucleus sampling) controls how many candidate words or tokens the model considers when generating its next word: only the most likely tokens whose probabilities add up to the Top P value are kept.
When you increase the Top P value, the model will consider more possible words to use in its response. This means that the model has a wider range of options to choose from, potentially resulting in more diverse and creative responses.
On the other hand, when you decrease the Top P value, the model will consider fewer possible words. This means that the model is more likely to choose words that have a higher probability of being correct, resulting in responses that may be more predictable but less creative.
In summary, changing the Top P value on OpenAI playground allows you to adjust how widely the model searches for the best next word or sequence of words, depending on your specific needs or preferences.
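The filtering step behind Top P (nucleus sampling) can be sketched as follows; the candidate words and probabilities here are invented for illustration:

```python
def top_p_filter(probs, top_p):
    """Keep the smallest set of candidates whose cumulative probability reaches top_p."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append(token)
        cumulative += p
        if cumulative >= top_p:
            break
    return kept

probs = {"the": 0.5, "a": 0.3, "this": 0.15, "zebra": 0.05}
print(top_p_filter(probs, 0.9))  # wide pool: several candidates survive
print(top_p_filter(probs, 0.5))  # narrow pool: only the most likely word survives
```

The model then samples the next word only from the surviving pool.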
Frequency Penalty:
The "Frequency Penalty" option on OpenAI playground penalizes the model for repeatedly using the same words or phrases in its response.
When you increase the frequency penalty, the model is penalized more for using the same words or phrases multiple times in its response. This means the model will try to generate responses that are more diverse and use a wider range of vocabulary.
On the other hand, when you decrease the frequency penalty, the model is penalized less for using the same words or phrases multiple times in its response. This means that the model may generate responses that are more repetitive or use the same words multiple times.
In summary, changing the frequency penalty on OpenAI playground allows you to encourage or discourage the model from using the same words or phrases multiple times in its response, depending on your specific needs or preferences.
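As a toy illustration (not OpenAI's internal implementation), a frequency penalty can be sketched as a score deduction that grows with how many times a candidate word has already been used:

```python
from collections import Counter

def apply_frequency_penalty(logits, history, penalty):
    """Lower each candidate's score in proportion to how often it already appeared."""
    counts = Counter(history)
    return {tok: score - penalty * counts[tok] for tok, score in logits.items()}

logits = {"great": 2.0, "excellent": 1.5}
history = ["great", "great", "great"]  # "great" was already used three times

adjusted = apply_frequency_penalty(logits, history, penalty=0.3)
print(adjusted)  # "great" drops below "excellent", so vocabulary diversifies
```

With a higher penalty, heavily repeated words fall further behind fresh alternatives.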
Presence Penalty:
The "Presence Penalty" option on OpenAI playground penalizes the model for reusing words or phrases that have already appeared in the text so far, regardless of how many times they appeared.
When you increase the presence penalty, the model is penalized more for reusing words that have already appeared. This means the model will try to introduce new vocabulary and move on to new topics.
On the other hand, when you decrease the presence penalty, the model is penalized less for reusing words that have already appeared. This means the model may stay closer to the vocabulary and topics already present in the conversation.
In summary, changing the presence penalty on OpenAI playground allows you to encourage or discourage the model from reusing words or phrases that have already appeared, depending on your specific needs or preferences.
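As a toy illustration (again, not OpenAI's internal implementation), a presence penalty can be sketched as a flat score deduction for any candidate that has appeared at all, unlike the frequency penalty, which scales with the number of occurrences:

```python
def apply_presence_penalty(logits, history, penalty):
    """Lower a candidate's score by a flat amount if it has appeared at all,
    regardless of how many times; this nudges the model toward new topics."""
    seen = set(history)
    return {tok: score - (penalty if tok in seen else 0.0)
            for tok, score in logits.items()}

logits = {"pricing": 1.2, "shipping": 1.0}
history = ["pricing"]  # "pricing" was already mentioned once

adjusted = apply_presence_penalty(logits, history, penalty=0.5)
print(adjusted)  # the already-mentioned topic falls behind the fresh one
```

Mentioning "pricing" once or ten times would produce the same flat deduction here, which is the key difference from the frequency penalty.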
Best Of:
The "Best Of" option on OpenAI playground determines how many response options the model generates before selecting the "best" one to display.
When you increase the Best Of value, the model generates more response options before selecting the "best" one to display. This means that the model has more opportunities to generate different responses, potentially resulting in a better overall response.
On the other hand, when you decrease the Best Of value, the model generates fewer response options before selecting the "best" one to display. This means that the model has fewer opportunities to generate different responses, potentially resulting in a less optimal overall response.
In summary, changing the Best Of value on OpenAI playground allows you to adjust how many response options the model generates before selecting the "best" one to display, depending on your specific needs or preferences.
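The idea can be sketched as generating several candidates and keeping the highest-scoring one. The generator and scorer below are toy stand-ins, since the real selection uses the model's own log-probabilities:

```python
def best_of(generate_fn, score_fn, n):
    """Generate n candidate responses and return the highest-scoring one."""
    candidates = [generate_fn(i) for i in range(n)]
    return max(candidates, key=score_fn)

# Toy stand-ins: candidates of varying length, scored by length.
toy_generator = lambda i: "word " * (i + 1)
toy_scorer = len

print(best_of(toy_generator, toy_scorer, 3))
```

A larger n explores more candidates but costs proportionally more generation work.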
Minimum Similarity Score:
It represents the minimum similarity expected between the user input and Knowledge Base content. If the similarity is above the limit you set, the module answers from Knowledge Base documents; if it is below the limit, the module skips the Knowledge Base documents.
As an example, if you have the word “okay” in your Knowledge Base and the threshold is too low, the Gen AI Module may answer from the related document whenever the user enters “okay”, even though the overall similarity is low because the only shared word is “okay”. In that case, set a higher Minimum Similarity Score so that a single shared word does not count as a real match. The suggested value is 0.7.
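A minimal sketch of how such a threshold works, using cosine similarity over toy embedding vectors (the module's actual embeddings and scoring are internal details not described here):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def answer_from_kb(similarity, threshold=0.7):
    """Only answer from the Knowledge Base when the match clears the threshold."""
    return "knowledge_base" if similarity >= threshold else "skip"

# Toy vectors: a strong match vs. a weak one-word overlap like "okay".
strong = cosine_similarity([1.0, 2.0, 3.0], [1.0, 2.0, 2.9])
weak = cosine_similarity([1.0, 0.0, 0.0], [1.0, 2.0, 2.0])

print(answer_from_kb(strong), answer_from_kb(weak))
```

With the suggested threshold of 0.7, the weak overlap is skipped and the module falls through to LLM answering instead.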