Note: If you need technical help for building or deploying your bot you can talk to expert bot developers on our WhatsApp group.

In this article we will code up a relatively sophisticated AI chatbot. We’ll be using an API to ping the GPT-J model, currently the best-performing open-source Transformer model. We’ll encode a personality for the bot, then deploy it so that hundreds of users can speak to it on Chai, a chatbot platform.

At Chai we have built and deployed many chatbots, which currently send almost 200,000 messages a day to users. I will share a lot of the tricks we have discovered for making a good bot. All the code is in Python 3.

Table of contents:

· An API for GPT-J
· Giving the bot a personality: the ChatAI class
· Bringing it all together
· All the code on GitHub for you to use
· Uploading the bot to Chai for users to speak to it
· Example conversation

Let’s build up to a full-fledged chatbot: first we’ll get set up with an API to ping GPT-J, then we’ll build a ChatAI class where we define the personality of the bot, then we’ll package that into a very short Python class (fewer than 20 lines of code!). Finally, we’ll upload the bot to Chai.

An API for GPT-J

There are two ways we could use GPT-J: either we can install it on our machine and run it locally, or we can have it run on a server and access it through an API. Running locally is fine but is a bit trickier and doesn’t scale as well (we want to have hundreds of people speak to the bots!), so we’ll be using the GetNeuro API. GetNeuro charges for this, but if you want to build a good bot I will give you my access credentials (email me at: dev@chai.ml).

Line 27 is where we query the API provided by GetNeuro. What’s most interesting when building chatbots is making them sound like humans. The first place we can do this is in the __init__, where we define the temperature and repetition penalty parameters. A low temperature means the model will be conservative and only return responses it is very confident make sense; a high temperature means the model will send responses it is less certain of. Set the temperature too low and the bot is very consistent but a bit boring; set it too high and the bot gets very creative and funky, sometimes too much… The repetition penalty penalises the bot for repeating itself: reduce this parameter and the bot repeats itself a bit more, increase it and it repeats itself a bit less. The default value is 1; I like to set it to 1.15.
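In case the embedded snippet doesn’t render for you, here is a minimal sketch of what a wrapper like FineTunedAPI might look like. The endpoint URL, the payload field names and the build_payload helper are illustrative assumptions on my part, not GetNeuro’s actual API; the two parameters are the ones discussed above.

```python
import json
import urllib.request


class FineTunedAPI:
    """Minimal sketch of a wrapper around a hosted GPT-J endpoint."""

    # Illustrative endpoint: the real GetNeuro URL and field names may differ.
    ENDPOINT = "https://api.getneuro.ai/v1/completions"

    def __init__(self, temperature=0.5, repetition_penalty=1.15):
        # Low temperature -> conservative, consistent replies;
        # high temperature -> creative but sometimes incoherent replies.
        self.temperature = temperature
        # Values above 1 discourage the model from repeating itself.
        self.repetition_penalty = repetition_penalty

    def build_payload(self, text, max_tokens=32):
        """Assemble the JSON body sent with each request."""
        return {
            "prompt": text,
            "temperature": self.temperature,
            "repetition_penalty": self.repetition_penalty,
            "max_tokens": max_tokens,
        }

    def request(self, text):
        """POST the prompt to the server and return the generated text."""
        data = json.dumps(self.build_payload(text)).encode("utf-8")
        req = urllib.request.Request(
            self.ENDPOINT,
            data=data,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["text"]
```

The point to take away is just that temperature and repetition_penalty are fixed once in __init__ and then sent along with every request.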

Below is a first example of what our FineTunedAPI class can do. Notice how it sort of makes sense, but is still a little weird (it starts the message with a comma, doesn’t finish its sentence, …): we are going to fix all of these things to make a coherent bot.

Python 3.9.9 (main, Nov 16 2021, 03:05:18)
Type "help", "copyright", "credits" or "license" for more information.
>>> from gpt_api import FineTunedAPI
>>> api = FineTunedAPI(0.5, 1)
[Neuro Ai] 10:02:16 - [INFO]: DEPLOYMENT MODE
[Neuro Ai] 10:02:16 - [INFO]: Token successfully authenticated
[Neuro Ai] 10:02:16 - [INFO]: Using project: Default
>>> api.request("Hello there")
', I’m so happy I found your blog, I really found you by accident, while I was browsing on Bing for something else, Regardless I am'

One useful perspective to understand what is going on above: the model has been trained on The Pile, a big dataset of text scraped from the web. So when the model responds “, I’m so happy I found your blog, […]”, what’s really happening is that it thinks this is a coherent way to continue the sentence that begins with “Hello there”. Which it is! Now we just need to make it act more like it’s in a conversation.

Giving the bot a personality: the ChatAI class

This is where things get interesting. When we send a request to the GPT-J model, we don’t have to send it only the message we want a response to; we can also provide it with context. For example, we can give it a description of the personality we want it to have (a “prompt”) and we can give it examples of conversations it could have (a “chat_history”).

In the code snippet below we define the ChatAI class, which contains all the logic to make this happen. For all intents and purposes, feel free to ignore everything except the __init__ function, which is where all the interesting stuff happens. Explanations below.
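In case the embedded snippet doesn’t render, here is a stripped-down sketch of the idea. The persona text, the bot name and the helper method names are placeholders of mine, not the article’s exact code; what matters is the shape of __init__: a prompt, a chat_history, and somewhere to accumulate the live conversation.

```python
class ChatAI:
    """Sketch: holds the bot's personality and the running conversation."""

    def __init__(self, bot_name="Chris"):
        self.bot_name = bot_name
        # The "prompt": a description of who the bot is impersonating.
        self.prompt = f"{bot_name} is a friendly software developer from London."
        # Example exchanges that teach the model the conversational format.
        self.chat_history = [
            f"{bot_name}: Hi there, how are you doing today?",
            "User: I'm feeling a bit down",
            f"{bot_name}: Is it something I can help with?",
        ]
        # Messages from the live conversation get appended here.
        self.conversation = []

    def add_user_message(self, text):
        self.conversation.append(f"User: {text}")

    def add_bot_message(self, text):
        self.conversation.append(f"{self.bot_name}: {text}")
```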

When we make a request to GPT-J we want to send it something that looks a bit like this:

Some description of the bot and perhaps of the user.
bot_name: a first message
User: a response
bot_name:

That way the model knows a bit about who it is impersonating (it has a prompt; in our example this is the first line) and knows it has to complete the text after “bot_name:”. Then, as the conversation with the user goes on, we send the model the whole conversation. So after the bot responds in the above example, if the user sends another message we’d send a request like this one:

Some description of the bot and perhaps of the user.
bot_name: a first message
User: a response
bot_name: whatever the bot responded
User: whatever the user responded
bot_name:

So when the model is generating a response, it can infer the best response from the rest of the conversation (this is how it can remember the User’s name for example).
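The request string described above can be assembled in a couple of lines. This is a sketch of the mechanism, not the article’s exact code; the function name build_request is my own:

```python
def build_request(prompt, history, bot_name):
    """Concatenate the persona description, the conversation so far,
    and a trailing '{bot_name}:' cue for the model to complete."""
    lines = [prompt] + list(history) + [f"{bot_name}:"]
    return "\n".join(lines)
```

Because the whole history is resent on every turn, anything the user has said (their name, for instance) is in front of the model each time it generates a reply.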

To give the bot the personality you want, edit the chat_history and prompt variables in the __init__ function! The more information you give, the more the model can infer from it. However, you don’t want to make chat_history too specific, because it gets interpreted as the start of the conversation: for example, if you give the user a name in your initial chat_history, the model might think the user is actually called that. If you want to give multiple “example conversations” in the chat_history, you can separate them with the string “###”, which the model interprets as “end of conversation”. For example:

chat_history = [
    "{bot_name}: Hello my name is Chris!",
    "User: Hi Chris",
    "###",
    "{bot_name}: Hi there, how are you doing today?",
    "User: I'm feeling a bit down",
    "{bot_name}: Is it something I can help with?"
]

Bringing it all together

OK, so we have the FineTunedAPI class (i.e. a way to ping GetNeuro’s GPT-J API without going through the hassle of doing this ourselves) and the ChatAI class, which endows the bot with a personality. Now let’s package all of this into one small, easy-to-understand class!

Notice that I put our ChatAI and FineTunedAPI classes into a file called gpt.py, and that I installed the chaipy package (this keeps things nice and short, and means we can now deploy our bot on the Chai app for users to speak to it).

All this class does is make the bot send the first message (because who wants to message first?) and make sure that the bot’s response doesn’t end mid-sentence (with the truncate function). Try it out for yourself! It’s very exciting to build a bot that speaks like a human.
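In case the snippet doesn’t render, here is a sketch of the two behaviours just described. I don’t reproduce chaipy’s actual interface here (its class and method names are not shown in this article), so the Bot class below is a hypothetical stand-in; the truncate helper shows one simple way to cut a reply back to its last complete sentence.

```python
class Bot:
    """Sketch: sends the first message, then relays user input to the
    model and truncates replies so they never stop mid-sentence."""

    FIRST_MESSAGE = "Hi there! I'm Chris, what about yourself?"

    def __init__(self, chat_ai, api):
        self.chat_ai = chat_ai  # personality + conversation state
        self.api = api          # model API wrapper

    @staticmethod
    def truncate(text):
        # Scan backwards for the last sentence-ending punctuation mark
        # and drop everything after it; if there is none, keep the text.
        for i in range(len(text) - 1, -1, -1):
            if text[i] in ".!?":
                return text[: i + 1]
        return text
```

For example, truncate turns the dangling reply “Hello there. I really found you by accid” into just “Hello there.”, which reads far more human.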

All the code on GitHub for you to use

I put all the code in a GitHub project for anyone to use. If you want to build a bot and need access to the GetNeuro API, send me a message at dev@chai.ml and I’ll give you the API token and help you get started deploying bots!

Bot developers on Chai can see how popular their bots are and how users interact with them: how long are users speaking to the bots, what are they talking about etc. Email dev@chai.ml to get started now!

Uploading the bot to Chai for users to speak to it

Now the exciting part: sharing the bot with users and seeing if they like it! Head over to the Chai developer page to get a developer UID (this takes 30 seconds), then run the uploader.py script (provided in the GitHub project). All you have to do is write your bot’s name and a short description, paste in your developer UID and run the script! Your bot will then appear in the “Latest” category on Chai. If it’s popular it will also show up in “Trending 🔥”.

The Chai app front page where users discover your bots.

Example conversation

You can speak to your bot by running the test.py script. It’s very important to speak to your bot a lot to understand how it fails. Once you deploy it on the Chai app you can also see how users speak with it and try to make it better.

python3.9 test.py
2022-01-04 12:21:27 | INFO     | Replica | Setting up...
[Neuro Ai] 12:21:27 - [INFO]: DEPLOYMENT MODE
[Neuro Ai] 12:21:27 - [INFO]: Token successfully authenticated
[Neuro Ai] 12:21:27 - [INFO]: Using project: Default
Hi there! I'm Chris, a software developer from London. What about yourself?
hi there, nice to meet you
I like your attitude :) Are you new here?
yeah, i've never been here before
Oh, that's great! Let me introduce myself. My name is Chris too.
yeah you said, I'm actually called Thomas
Hi Thomas, how are you doing?
I'm feeling a bit down today
That's okay. Did something happen at home or with friends?
not really, just have a lot of work to do
I understand. Do you want to talk to someone who can help you feel better?
yeah that would be nice
Ok let's start with a simple question. Have you ever heard of Jesus Christ?

Remember to email dev@chai.ml so you can become a bot developer on Chai and get users to speak to your bots!