TechSage is a multi-agent LLM platform delivering daily insights on technology, programming, cloud architecture, and more. Use OpenAI's LLMs or local models via Ollama, powered by CrewAI's multi-agent system, to stay ahead in the tech world.
Prerequisites • Installation • Configure • Launch • Docker
- Python >= 3.10, <= 3.13
- Ollama (if using a local model)
- You may need to install the C++ build tools if you don't already have them
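If you plan to use a local model, you can pre-pull the default model with Ollama (`llama3:8b` is the default TechSage uses, per the configuration options below):

```shell
# Download the default local model so the first run doesn't stall on a pull
ollama pull llama3:8b
```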
To install TechSage, run:
```shell
pip install https://github.com/VictorGoubet/techsage/archive/refs/tags/v1.tar.gz
```

Replace `v1` with the release you want to use.
Execute this command only if you want to use the shell interface with specific configuration. For the Streamlit interface, you can configure everything directly within it.
```shell
configure-sage
```

- `--model <your-model-name>`: Name of the model to use (default: `llama3:8b`).
- `--model_url <your-model-url>`: API URL of the model to use (default: `http://localhost:11434/v1`).
- `--verbose <1 or 0>`: Verbosity level during configuration (default: 0).
- `--local <True or False>`: Use a local model with Ollama instead of an OpenAI API model (default: True).
- `--openai_api_key <key>`: Your OpenAI API key (required if local mode is disabled or if you use crew memory).
- `--google_search_api_key <key>`: Delpha Google Search API key. If empty, a local Google search will be performed. Modify the `api_google_search` method in `tools.py` to use another API. A DuckDuckGo tool is also available.
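As a sketch, a typical local-model configuration might look like this (the model name and URL shown are the defaults; the values are illustrative):

```shell
# Illustrative configuration for a local Ollama model with verbose output
configure-sage \
  --model llama3:8b \
  --model_url http://localhost:11434/v1 \
  --local True \
  --verbose 1
```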
After setting up, launch the script with admin rights. If no configuration is provided, the default configuration will be used:
```shell
launch-sage
```
Note: Be sure to have Ollama running if you intend to use local models.
- `--streamlit <true or false>`: If `true`, the Streamlit interface will be used; otherwise, a shell interface will appear.
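For example, to launch with the shell interface instead of Streamlit (assuming the package is installed and configured):

```shell
# Launch TechSage with the shell interface rather than Streamlit
launch-sage --streamlit false
```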
Too lazy to set everything up? Just use the dedicated Docker image and go to http://localhost:8501
```shell
docker run -d -v ollama:/root/.ollama -p 8501:8501 victorgoubet/techsage:latest
```
To run with GPU support, first install the GPU drivers for Docker:
- Linux: NVIDIA Container Toolkit
- Windows: NVIDIA CUDA on WSL
- Mac: Not supported
```shell
docker run -d --gpus=all -v ollama:/root/.ollama -p 8501:8501 victorgoubet/techsage:latest
```
Note: The GPU version is not fully stable yet.
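As an optional sanity check, you can verify that the GPU is visible inside the running container. Docker assigns the container a name, so look it up first (`<container-name>` below is a placeholder for whatever name `docker ps` reports):

```shell
# List the container started from the TechSage image, then check GPU visibility inside it
docker ps --filter "ancestor=victorgoubet/techsage:latest" --format "{{.Names}}"
docker exec <container-name> nvidia-smi
```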