In this section you'll find commonly asked questions regarding Chai. If you have questions, don’t hesitate to ask us directly.
Large machine learning models
If you wish to deploy a heavyweight model that uses a large amount of memory or compute, we recommend hosting it on a separate server and having the chatbot send a web request to that server.
Deployed chatbots are subject to the following resource limits:

- Compressed source size: 100 MB
- Uncompressed sources plus modules: 500 MB
- Memory: 256 MB by default, configurable up to 4 GB
- Response duration: up to 60 s
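The pattern above can be sketched as a small helper the chatbot calls instead of running the model locally. This is a minimal sketch using only the standard library; the server URL, the JSON payload shape (`message` in, `reply` out), and the function name are assumptions for illustration, not part of the Chai API.

```python
import json
import urllib.request

# Hypothetical endpoint where the heavyweight model is hosted.
MODEL_SERVER_URL = "https://your-model-server.example.com/generate"

def get_model_response(user_message: str, timeout: float = 30.0) -> str:
    """Send the user's message to the external model server and return
    the generated reply. Keep the timeout comfortably under the 60 s
    response-duration limit so the chatbot can still answer on failure."""
    payload = json.dumps({"message": user_message}).encode("utf-8")
    req = urllib.request.Request(
        MODEL_SERVER_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read())["reply"]
```

Because the model runs elsewhere, the chatbot itself stays well under the memory and source-size limits; only the lightweight HTTP client code is deployed.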