Ollama is an AI model management tool that allows users to install and use custom large language models locally.
The project aims to:
- Create a Discord bot that utilizes Ollama to chat with users!
- User Preferences on Chat
- Message Persistence on Channels and Threads
- Containerization with Docker
- Slash Commands Compatible
- Generated Token Length Handling for >2000 or >6000 characters
- Token Length Handling of any message size
- External WebUI Integration
- Administrator Role Compatible
- Allow others to create their own models personalized for their own servers!
- Documentation on creating your own LLM
- Documentation on web scraping and cleaning
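The token-length handling above matters because Discord caps a single message at 2,000 characters, so long model replies must be split before sending. A minimal sketch of one way to chunk a reply (the helper name `chunkReply` is hypothetical, not part of this repo):

```typescript
// Discord rejects messages longer than 2,000 characters, so a long
// Ollama reply must be split into several messages before sending.
const DISCORD_MESSAGE_LIMIT = 2000

// Hypothetical helper: split a reply into limit-sized chunks.
function chunkReply(reply: string, limit: number = DISCORD_MESSAGE_LIMIT): string[] {
    const chunks: string[] = []
    for (let i = 0; i < reply.length; i += limit) {
        chunks.push(reply.slice(i, i + limit))
    }
    return chunks
}
```

A smarter implementation would split on sentence or code-fence boundaries so a chunk never breaks markdown mid-block, but the naive slice above shows the core idea.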
- Clone this repo using `git clone https://github.com/kevinthedang/discord-ollama.git` or just use GitHub Desktop to clone the repo.
- You will need a `.env` file in the root of the project directory with the bot's token. A `.env.sample` is provided for you as a reference for what environment variables are needed.
  - For example, `CLIENT_TOKEN = [Bot Token]`
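As a sketch, the bot's startup code can read that variable from the environment like this (assuming the `.env` file has already been loaded into `process.env`, e.g. by the `dotenv` package; `CLIENT_TOKEN` matches the name shown in `.env.sample`):

```typescript
// Read the Discord bot token from the environment. This assumes a
// loader such as dotenv has populated process.env from the .env file.
const token: string = process.env.CLIENT_TOKEN ?? ''

if (token.length === 0) {
    // Fail loudly early: the bot cannot log in without a token.
    console.warn('CLIENT_TOKEN is not set; check your .env file.')
}
```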
- Please refer to the docs for bot setup. NOTE: These guides assume you already know how to set up a bot account for Discord.
- Local Machine Setup
- Docker Setup for Servers and Local Machines
- Local use is not recommended.
- NodeJS
  - This project uses `v20.10.0+` (npm `10.2.5`). Consider using nvm for multiple NodeJS versions.
  - To run dev in `ts-node`, using `v18.18.2` is recommended. CAUTION: `v18.19.0` or `lts/hydrogen` will not run properly.
  - To run dev with `tsx`, you can use `v20.10.0` or earlier.
  - This project supports any NodeJS version above `16.x.x` to only allow ESModules.
- Ollama
- Ollama Docker Image
- IMPORTANT: For Nvidia GPU setup, install the `nvidia container toolkit/runtime`, then configure it with Docker to utilize the Nvidia driver.
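For context on how the bot talks to Ollama: by default an Ollama server listens on port `11434` and exposes a `/api/chat` endpoint. The sketch below shows one way a bot could forward a user's prompt; the model name `llama2` and the localhost URL are assumptions, and the actual wiring in this repo may differ:

```typescript
// Sketch: send a prompt to a local Ollama server's /api/chat endpoint
// using fetch (built into Node 18+). Host and model are assumptions.
const OLLAMA_URL = 'http://localhost:11434/api/chat'

async function askOllama(prompt: string, model: string = 'llama2'): Promise<string> {
    const res = await fetch(OLLAMA_URL, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
            model,
            messages: [{ role: 'user', content: prompt }],
            stream: false, // ask for one JSON response instead of a token stream
        }),
    })
    const data = (await res.json()) as { message: { content: string } }
    return data.message.content
}
```

With `stream: true` the server instead returns newline-delimited JSON chunks, which is what you would use to edit a Discord message incrementally as the reply generates.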
- Discord Developer Portal
- Discord.js Docs
- Setting up Docker (Ubuntu 20.04)
discord-ollama © 2023 by Kevin Dang is licensed under CC BY 4.0