OpenAI's GPT models are powerful language models that you can integrate into your chatbot to enhance its conversational abilities and the experience for your end users. This guide walks you through integrating OpenAI into your BotDistrikt chatbot.

To integrate OpenAI into your BotDistrikt chatbot:

  1. Ensure you have created an active Access Token under Personality --> Settings

  2. Go to Integrations --> Artificial Intelligence --> OpenAI

  3. Enter a valid API key* and click Save

Navigate to Trainings to train the OpenAI model

  1. Select the available chat model from the dropdown.

  2. Choose the appropriate temperature according to your chatbot's personality. A higher temperature produces more random, creative responses to prompts; a lower temperature produces more stable and predictable responses.

  3. Maximum tokens to generate limits your chatbot's response length. Requests can use up to 2,048 to 4,000 tokens (shared between prompt and completion); your limit varies by the model you chose in step 1. A token is approximately 4 characters of plain English text.

  4. Top P controls the diversity of words in a response through nucleus sampling: the model considers only the smallest set of words whose combined probability reaches P. A lower value restricts responses to the most likely, more common words, while a higher value allows more unique words into your chatbot's responses. For example, a Top P of 0.5 means only the likelihood-weighted options covering half of the probability mass are considered.

  5. Adjust the Frequency Penalty to control how often your chatbot repeats the same words. Set a high frequency penalty for less repetition or a low frequency penalty for more repetition.

  6. Adjust the Presence Penalty to control how much your chatbot focuses on specific topics. Set a high presence penalty for more variety in responses, or a low presence penalty for more focus on a topic.

  7. Set the Max Conversation Tokens to indicate the maximum number of tokens to keep in the chatbot's conversation history. As the conversation history grows, the oldest messages are removed whenever the token count exceeds this limit.

  8. Toggle Generate Embeddings for Text Responses to run a similarity search between the user's input message and the text responses drawn from the websites and documents in Sources.

  9. Select the Embedding Model to use for the website and document sources. An embedding model represents words or phrases as numerical vectors in a continuous vector space, allowing computers to process and understand textual data more effectively.

  10. To optimize responses drawn from the document or web sources, enter a number in Embeddings Top K. This setting retrieves the top K most semantically related entries or documents, enabling the RAG model to use them as references to generate more informed, accurate, and relevant responses to the given input.

  11. Enter an Intro Prompt to provide context, instructions, or other information relevant to the model and your use case. The prompt can determine the character, behaviour, disposition, and function of the chatbot. To write effective prompts, describe the task you want the model to complete and provide background information. Give the model some examples of the desired output, and note that the order in which you present information matters.

  12. Import a Story

  13. The number of tokens used is displayed in the bottom right corner of the Intro Prompt box. Tokens are units of text that these models process and generate, and which the model can turn into embeddings. For example, "I am a chatbot" uses 5 tokens.
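Behind the scenes, the settings in steps 1 to 6 correspond to the standard parameters of an OpenAI chat-completion request. Here is a minimal sketch of such a request payload; the model name and values are illustrative examples, not recommended defaults:

```python
# Illustrative mapping of the Trainings settings to OpenAI chat-completion
# parameters. The values below are examples only, not defaults.
request = {
    "model": "gpt-3.5-turbo",   # step 1: the chat model from the dropdown
    "temperature": 0.7,          # step 2: higher = more random responses
    "max_tokens": 256,           # step 3: maximum tokens to generate
    "top_p": 0.5,                # step 4: nucleus sampling threshold
    "frequency_penalty": 0.5,    # step 5: higher = less word repetition
    "presence_penalty": 0.5,     # step 6: higher = more topic variety
    "messages": [
        # step 11: the Intro Prompt typically becomes the system message
        {"role": "system", "content": "You are a helpful support chatbot."},
        {"role": "user", "content": "What are your opening hours?"},
    ],
}
```

BotDistrikt sends these values on your behalf once you click Save, so you never have to construct the request yourself.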
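The Max Conversation Tokens behaviour from step 7 can be sketched as follows, using the rough 4-characters-per-token estimate from step 3. The function names are illustrative; BotDistrikt's actual implementation may differ:

```python
def estimate_tokens(text: str) -> int:
    """Rough token count: about 4 characters per token in plain English."""
    return max(1, len(text) // 4)

def trim_history(messages: list[str], max_conversation_tokens: int) -> list[str]:
    """Drop the oldest messages until the history fits the token budget."""
    history = list(messages)
    while history and sum(estimate_tokens(m) for m in history) > max_conversation_tokens:
        history.pop(0)  # remove the oldest message first
    return history

history = ["Hello!", "Hi, how can I help?", "Tell me about your pricing plans."]
trimmed = trim_history(history, max_conversation_tokens=12)
# The oldest message ("Hello!") is dropped once the budget is exceeded.
```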

Click Save.

To test your bot, navigate to Testing.

*To generate a valid OpenAI API key:

  1. Create an OpenAI account

  2. Log in to your OpenAI account

  3. Navigate to the API section

  4. Create a new API Key

  5. Name your API key (for reference)

  6. Choose an appropriate plan

  7. Generate the API key

  8. Enter the key in BotDistrikt under Integrations --> Artificial Intelligence --> OpenAI --> Valid API key
