How to Use Generative AI Module


What is the Generative AI Module for?

The Generative AI Module is designed to generate human-like text from given prompts or input. It uses advanced natural language processing and artificial intelligence techniques to produce coherent, contextually relevant responses. The module is used wherever text content needs to be generated while interacting with the client, and it can create dynamic responses based on the knowledge base you provide.

What is the smart logic of the Generative AI Module?

When the Generative AI Module receives an input, it searches the loaded resources and generates an answer from them, within the limits of its given persona.

Breaking the conversation requires specific intents. When one of these intents is detected, the conversation with the Generative AI Module ends and the flow continues with the next module.

If the resources contain no accurate answer for the given input and no intent is recognized, the Generative AI Module generates an answer using ChatGPT's large language model, again within the limits of its persona.
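The smart logic above can be sketched as a simple decision flow. This is a minimal illustration, not the module's actual implementation: the function names are hypothetical, and the word-overlap similarity is a stand-in for the module's internal resource search.

```python
def word_overlap(a, b):
    """Toy similarity: fraction of shared words (stand-in for the real search)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def answer(user_input, resources, break_intents, threshold=0.7):
    # 1) A detected break intent ends the Generative AI conversation.
    if any(intent in user_input.lower() for intent in break_intents):
        return ("break_flow", None)
    # 2) Search the loaded resources; answer from the best match above the threshold.
    best = max(resources, key=lambda doc: word_overlap(user_input, doc), default=None)
    if best is not None and word_overlap(user_input, best) >= threshold:
        return ("knowledge_base", best)
    # 3) Otherwise fall back to the LLM, still constrained by the persona.
    return ("llm_fallback", None)
```

The three return values correspond to the three behaviors described above: break the flow, answer from the Knowledge Base, or fall back to the large language model.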

How to Use Generative AI Module

0) Add An AI Provider

From the AI&NLP page, the user needs to add a new AI Provider for Generative AI. A detailed explanation is in the document "How to add Generative AI Provider".

1) Add “Knowledge Base” and “Resources”

A Knowledge Base is a library for a specific topic. Under a KB you can have different sources, just as a library holds different books and materials that all relate to the same main topic.

Under Company Management, go to the Knowledge Base tab and click "ADD KNOWLEDGE BASE".


You can view and edit your created knowledge base to add the context that the Generative AI module will use.


Resources

a) Document Resources
Resources are like the books and papers in a library section. They can be thought of as the texts that the model learns from and answers according to.

You can simply add, edit, and delete resources in the UI.

Please select your document type as “Resource”.

Selecting the correct language is important for the document to work efficiently.

b) Intent Resources

Intents can be added as a type of document.

Make sure you select the document type as "Intent".

Utterances should be added as in the example. After you add the utterances and save, the Generative AI Module will detect them as an intent and continue the flow with the connected next module.
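Conceptually, intent detection compares the user's input against the saved utterances. The sketch below is a hypothetical illustration (the intent names and utterances are made up, and simple substring matching stands in for real NLU scoring):

```python
# Hypothetical intent table: names and example utterances are illustrative.
intents = {
    "talk_to_agent": ["i want to talk to a human", "connect me to an agent"],
    "end_chat": ["goodbye", "end the conversation"],
}

def detect_intent(user_input, intents):
    text = user_input.strip().lower()
    for name, utterances in intents.items():
        # A real module uses NLU scoring; substring matching stands in here.
        if any(u in text or text in u for u in utterances):
            return name  # flow continues with the module connected to this intent
    return None  # no intent detected: the Generative AI Module keeps answering
```

When `detect_intent` returns an intent name, the flow moves to the module connected to that intent; when it returns `None`, the conversation stays with the Generative AI Module.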


2) Add an Auto Generative AI module to your flow

The Auto Generative AI module can be added to your flow wherever you expect the user to enter input after a bot message. When the user enters text, it is processed by the Generative AI module, which either replies according to the persona and knowledge base, or moves on through the flow connection if a related intent is detected.

3) Create an Auto Generative Module

4) Connect the module to your Auto Generative Module


5) Set up the Auto Generative Module according to your needs:

      a) Select the related knowledge base.


      b) Set a persona and select the language for your Generative AI module.

      Avoid using too many words in the persona. It should be a clear description focused on the AI's chat style. Be careful when adding strict rules to the persona, because such rules can also affect performance in unintended ways.

      c) Configure the Generative AI module's settings; refer to the "Generative AI Settings Options Description" at the end of this document.


      d) Set and select a connection for each added intent and entity that you want to detect, so that when an added intent is detected, the flow continues with the connected module.

      e) Set OpenAI intent settings as desired (optional).


Description of the OpenAI intent settings options:

  • Temperature: 
The "temperature" option on OpenAI playground determines how creative or conservative the language generated by the model will be. When you increase the temperature, it makes the model more creative and unpredictable, generating more diverse and unusual responses. This means the model is more likely to come up with novel and unexpected responses, but it may also generate nonsensical or irrelevant ones.

On the other hand, when you decrease the temperature, it makes the model more conservative and predictable, generating responses that are more coherent and consistent with what it has learned from the training data. This means the model is more likely to generate responses that are relevant to the context and grammatically correct, but it may also be less creative and more repetitive.

In summary, changing the temperature on OpenAI playground allows you to adjust the balance between creativity and consistency in the language generated by the model, depending on your specific needs or preferences.
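Under the hood, temperature rescales the model's token scores before they are turned into probabilities. The toy numbers below are illustrative, but the rescaling itself is the standard formula:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Temperature divides the logits before softmax: a high temperature
    flattens the distribution (more creative), a low one sharpens it
    (more predictable)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
creative = softmax_with_temperature(logits, 2.0)   # flatter: choices more even
focused  = softmax_with_temperature(logits, 0.5)   # sharper: top choice dominates
```

With the low temperature, the top token takes most of the probability mass; with the high temperature, the alternatives stay in play, which is exactly the creativity/consistency trade-off described above.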

  • Maximum Length:

The "maximum length" option on OpenAI playground sets a limit on the length of the text that the model can generate in response to a prompt.

When you increase the maximum length, it allows the model to generate longer and more detailed responses. This means the model can provide more information and context in its answers, but it may also result in longer wait times for the response to be generated.

On the other hand, when you decrease the maximum length, it limits the amount of text the model can generate. This means the response may be more concise and to the point, but it may also lack sufficient detail or context.

In summary, changing the maximum length on OpenAI playground allows you to control the length and level of detail in the language generated by the model, depending on your specific needs or preferences.
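The effect of the maximum length can be pictured as a cap on the generation loop. This is a conceptual sketch, not the provider's actual implementation; the `<eos>` token and the toy "model" are illustrative:

```python
def generate(next_token_fn, max_length):
    """Sketch: generation stops at max_length tokens, or earlier if the
    model emits an end-of-text token."""
    tokens = []
    while len(tokens) < max_length:
        token = next_token_fn(tokens)
        if token == "<eos>":
            break
        tokens.append(token)
    return tokens

# A toy "model" that would ramble forever is cut off by the cap:
chatty = lambda tokens: "word"
capped = generate(chatty, max_length=5)
```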

  • Stop Sequences:

The "stop sequences" option on OpenAI playground allows you to specify certain words or phrases that, if generated by the model, will cause it to stop generating text.

When you add stop sequences, the model will generate text until it encounters one of the specified words or phrases. This means you can control the output of the model and ensure that it does not generate responses that are irrelevant or inappropriate.

For example, if you are using the model to generate product descriptions and you don't want the descriptions to include certain words like "expensive" or "outdated", you can add those words to the stop sequences. This will prevent the model from generating descriptions that contain those words.

In summary, changing the stop sequences on OpenAI playground allows you to specify words or phrases that will cause the model to stop generating text, helping you to control the output and ensure that it meets your specific requirements.
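The product-description example above can be sketched as a simple truncation: the output is cut at the first occurrence of any configured stop sequence. The function and sample text are illustrative:

```python
def apply_stop_sequences(text, stop_sequences):
    """Truncate generated text at the first occurrence of any stop sequence."""
    cut = len(text)
    for stop in stop_sequences:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)  # earliest stop wins
    return text[:cut]

desc = "A sturdy, modern desk. Somewhat expensive but durable."
clean = apply_stop_sequences(desc, ["expensive", "outdated"])
```

Everything from the stop sequence onward is dropped, so the unwanted word never reaches the user.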

  • Top P:

The "Top P" option on OpenAI playground determines how much of the probability mass of candidate tokens the model considers when generating its next word or sequence of words (also known as nucleus sampling).

When you increase the Top P value, the model will consider more possible words to use in its response. This means that the model has a wider range of options to choose from, potentially resulting in more diverse and creative responses.

On the other hand, when you decrease the Top P value, the model will consider fewer possible words. This means that the model is more likely to choose words that have a higher probability of being correct, resulting in responses that may be more predictable but less creative.

In summary, changing the Top P value on OpenAI playground allows you to adjust how widely the model searches for the best next word or sequence of words, depending on your specific needs or preferences.
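Concretely, Top P (nucleus sampling) keeps the smallest set of highest-probability tokens whose cumulative probability reaches the chosen value, then renormalizes. The probabilities below are illustrative:

```python
def top_p_filter(probs, top_p):
    """Keep the smallest set of tokens whose cumulative probability reaches
    top_p (nucleus sampling), then renormalize over the kept tokens."""
    ranked = sorted(enumerate(probs), key=lambda kv: kv[1], reverse=True)
    kept, total = [], 0.0
    for idx, p in ranked:
        kept.append((idx, p))
        total += p
        if total >= top_p:
            break
    z = sum(p for _, p in kept)
    return {idx: p / z for idx, p in kept}

probs = [0.5, 0.3, 0.15, 0.05]
wide   = top_p_filter(probs, 0.9)  # keeps the top 3 tokens
narrow = top_p_filter(probs, 0.5)  # keeps only the single top token
```

A higher Top P keeps more candidate tokens in play (more diverse), a lower one restricts sampling to the most probable tokens (more predictable).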

  • Frequency Penalty:

The "Frequency Penalty" option on OpenAI playground penalizes the model for repeatedly using the same words or phrases in its response.

When you increase the frequency penalty, the model is penalized more for using the same words or phrases multiple times in its response. This means the model will try to generate responses that are more diverse and use a wider range of vocabulary.

On the other hand, when you decrease the frequency penalty, the model is penalized less for using the same words or phrases multiple times in its response. This means that the model may generate responses that are more repetitive or use the same words multiple times.

In summary, changing the frequency penalty on OpenAI playground allows you to encourage or discourage the model from using the same words or phrases multiple times in its response, depending on your specific needs or preferences.

  • Presence Penalty:

The "Presence Penalty" option on OpenAI playground penalizes the model for reusing words or phrases that have already appeared in the conversation text so far, including the prompt.

When you increase the presence penalty, the model is penalized more for reusing words that have already appeared. This means the model will try to generate responses that introduce new vocabulary and move toward new topics.

On the other hand, when you decrease the presence penalty, the model is penalized less for reusing words that have already appeared. This means it may generate responses that stay closer to the prompt, using similar or related vocabulary.

In summary, changing the presence penalty on OpenAI playground allows you to encourage or discourage the model from reusing words that have already appeared, depending on your specific needs or preferences.
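Both penalties can be sketched with the additive formula documented for the OpenAI API: each token's logit is lowered by its repetition count times the frequency penalty, plus a one-time presence penalty if the token has appeared at all. The logits and counts below are illustrative:

```python
def apply_penalties(logits, counts, frequency_penalty=0.0, presence_penalty=0.0):
    """OpenAI-style penalty sketch: the frequency penalty scales with how
    often a token has appeared; the presence penalty is a flat one-time
    cost for any token that has appeared at least once."""
    return [
        logit - counts[i] * frequency_penalty - (counts[i] > 0) * presence_penalty
        for i, logit in enumerate(logits)
    ]

logits = [2.0, 2.0, 2.0]
counts = [3, 1, 0]  # how often each token has already appeared
adjusted = apply_penalties(logits, counts, frequency_penalty=0.5, presence_penalty=0.4)
# token 0 (seen 3x) is penalized most; unseen token 2 is untouched
```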

  • Best Of:

The "Best Of" option on OpenAI playground determines how many response options the model generates before selecting the "best" one to display.

When you increase the Best Of value, the model generates more response options before selecting the "best" one to display. This means that the model has more opportunities to generate different responses, potentially resulting in a better overall response.

On the other hand, when you decrease the Best Of value, the model generates fewer response options before selecting the "best" one to display. This means that the model has fewer opportunities to generate different responses, potentially resulting in a less optimal overall response.

In summary, changing the Best Of value on OpenAI playground allows you to adjust how many response options the model generates before selecting the "best" one to display, depending on your specific needs or preferences.
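The mechanism amounts to generating several candidates and keeping the highest-scoring one. In the sketch below the generator and the length-based score are toy stand-ins (the real API scores candidates by model likelihood):

```python
import itertools

def best_of(generate_fn, score_fn, n):
    """Generate n candidate responses and return the one with the highest
    score. Larger n explores more options at the cost of more generation."""
    candidates = [generate_fn() for _ in range(n)]
    return max(candidates, key=score_fn)

# Toy generator cycling through canned answers; toy score prefers longer text.
canned = itertools.cycle(["short", "a fuller, more helpful answer", "ok"]).__next__
pick = best_of(canned, score_fn=len, n=3)
```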

  • Minimum Similarity Score:

It represents the minimum similarity expected between the user input and the Knowledge Base content. If the similarity is above the limit you set, the module answers from Knowledge Base documents; if it is below the limit, the module skips the Knowledge Base documents.

As an example, if you have the word "okay" in your Knowledge Base, the Gen AI Module will answer from the related document when the user's input is "okay", even though the match rests on that single shared word and the overall similarity is low. Set a higher Minimum Similarity Score to avoid such spurious matches on words like "okay". The suggested value is 0.7.
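The "okay" example can be made concrete with a similarity measure. The bag-of-words cosine similarity below is a toy stand-in (a real module compares embeddings), but it shows why a single shared word scores well below a 0.7 threshold:

```python
import math
from collections import Counter

def cosine_similarity(a, b):
    """Toy bag-of-words cosine similarity (a real module uses embeddings)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

doc = "okay here is everything about our refund policy and timelines"
low  = cosine_similarity("okay", doc)                             # one shared word
high = cosine_similarity("our refund policy and timelines", doc)  # real overlap
```

With a threshold of 0.7, the bare "okay" input falls below the limit and is routed past the Knowledge Base, while a genuinely related question scores high enough to be answered from the document.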


    • Related Articles

    • Generative AI: Dictionary & Keys
    • Generative AI Endpoint Calls
    • How to Use AI Action
    • AI Designer Tricks
    • How to Use Zapier Action