Add pypi package link and instruction in README (#43)

li-plus committed Jul 8, 2023
1 parent 1e39027 commit e35160a
Showing 2 changed files with 18 additions and 10 deletions.
26 changes: 17 additions & 9 deletions README.md
@@ -2,6 +2,7 @@

[![CMake](https://github.com/li-plus/chatglm.cpp/actions/workflows/cmake.yml/badge.svg)](https://github.com/li-plus/chatglm.cpp/actions/workflows/cmake.yml)
[![Python package](https://github.com/li-plus/chatglm.cpp/actions/workflows/python-package.yml/badge.svg)](https://github.com/li-plus/chatglm.cpp/actions/workflows/python-package.yml)
[![PyPI](https://img.shields.io/pypi/v/chatglm-cpp)](https://pypi.org/project/chatglm-cpp/)
[![License: MIT](https://img.shields.io/badge/license-MIT-blue)](LICENSE)

C++ implementation of [ChatGLM-6B](https://github.com/THUDM/ChatGLM-6B) and [ChatGLM2-6B](https://github.com/THUDM/ChatGLM2-6B) for real-time chatting on your MacBook.
@@ -52,13 +53,6 @@ For LoRA model, add `-l <lora_model_name_or_path>` flag to merge your LoRA weigh

**Build & Run**

- Docker
```bash
docker run -it --rm -v [model path]:/opt/ chulinx/chatglm /chatglm -m /opt/chatglm2-ggml.bin -p "你好啊"
你好👋!我是人工智能助手 ChatGLM2-6B,很高兴见到你,欢迎问我任何问题。
```
- Compile

Compile the project using CMake:
```sh
cmake -B build
@@ -109,11 +103,18 @@ Note that the current GGML CUDA implementation is really slow. The community is

## Python Binding

To install the Python binding from source, run:
The Python binding provides high-level `chat` and `stream_chat` interfaces similar to those of the original Hugging Face ChatGLM(2)-6B.

Install from PyPI (recommended); this will trigger compilation for your platform:
```sh
pip install -U chatglm-cpp
```

You may also install from source:
```sh
# install from the latest source hosted on GitHub
pip install git+https://github.com/li-plus/chatglm.cpp.git@main
# or install from your local source
# or install from your local source after git cloning the repo
pip install .
```
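To illustrate the binding described above, here is a minimal usage sketch. The `Pipeline` class and the `chat`/`stream_chat` methods are named in this repository; the model path and the list-of-strings conversation history are assumptions based on the 0.2.x interface, so treat this as a hedged sketch rather than the definitive API:

```python
def chat_demo(model_path: str = "./chatglm-ggml.bin") -> str:
    """Load a converted GGML model and return one reply.

    Hypothetical sketch: assumes the 0.2.x interface, where
    Pipeline.chat() takes the conversation history as a list of
    strings. Running it requires a model converted beforehand.
    """
    import chatglm_cpp  # installed via `pip install -U chatglm-cpp`

    pipeline = chatglm_cpp.Pipeline(model_path)
    return pipeline.chat(["你好"])


def stream_demo(model_path: str = "./chatglm-ggml.bin") -> None:
    """Print a reply piece by piece via the streaming interface."""
    import chatglm_cpp

    pipeline = chatglm_cpp.Pipeline(model_path)
    for piece in pipeline.stream_chat(["你好"]):
        print(piece, end="", flush=True)
    print()
```

Both helpers assume a converted GGML model file already exists at the given path; the conversion steps are covered earlier in the README.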

@@ -131,6 +132,13 @@ For ChatGLM2, change the model path to `../chatglm2-ggml.bin` and everything wor

![web_demo](docs/web_demo.jpg)

## Using Docker

```sh
docker run -it --rm -v [model path]:/opt/ chulinx/chatglm /chatglm -m /opt/chatglm2-ggml.bin -p "你好啊"
你好👋!我是人工智能助手 ChatGLM2-6B,很高兴见到你,欢迎问我任何问题。
```

## Performance

Measured on a Linux server with Intel(R) Xeon(R) Platinum 8260 CPU @ 2.40GHz using 16 threads.
2 changes: 1 addition & 1 deletion chatglm_cpp/__init__.py
@@ -4,7 +4,7 @@

import chatglm_cpp._C as _C

__version__ = "0.2.0"
__version__ = "0.2.1"


class Pipeline(_C.Pipeline):
