
Commit
Merge pull request #301 from Gregory-Pereira/fix-testing-framwork-llamacpp-ref

fixing testing framework with image ref being moved
rhatdan committed Apr 22, 2024
2 parents 462d26e + 5db0a4d commit 199bcb4
Showing 21 changed files with 25 additions and 25 deletions.
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
Original file line number Diff line number Diff line change
@@ -50,7 +50,7 @@ application:
- amd64
ports:
- 8001
-image: quay.io/ai-lab/llamacpp-python:latest
+image: quay.io/ai-lab/llamacpp_python:latest
- name: streamlit-chat-app
contextdir: .
containerfile: app/Containerfile
6 changes: 3 additions & 3 deletions ailab-images.md
@@ -1,8 +1,8 @@
## Images (x86_64, aarch64) currently built from GH Actions in this repository

-- quay.io/ai-lab/llamacpp-python:latest
-- quay.io/ai-lab/llamacpp-python-cuda:latest
-- quay.io/ai-lab/llamacpp-python-vulkan:latest
+- quay.io/ai-lab/llamacpp_python:latest
+- quay.io/ai-lab/llamacpp_python_cuda:latest
+- quay.io/ai-lab/llamacpp_python_vulkan:latest
- quay.io/ai-lab/summarizer:latest
- quay.io/ai-lab/chatbot:latest
- quay.io/ai-lab/rag:latest
6 changes: 3 additions & 3 deletions model_servers/llamacpp_python/README.md
@@ -24,7 +24,7 @@ make -f Makefile build
To pull the base model service image:

```bash
-podman pull quay.io/ai-lab/llamacpp-python
+podman pull quay.io/ai-lab/llamacpp_python
```


@@ -40,7 +40,7 @@ make -f Makefile build-cuda
To pull the base model service image:

```bash
-podman pull quay.io/ai-lab/llamacpp-python-cuda
+podman pull quay.io/ai-lab/llamacpp_python_cuda
```

**IMPORTANT!**
@@ -67,7 +67,7 @@ make -f Makefile build-vulkan
To pull the base model service image:

```bash
-podman pull quay.io/ai-lab/llamacpp-python-vulkan
+podman pull quay.io/ai-lab/llamacpp_python_vulkan
```


2 changes: 1 addition & 1 deletion model_servers/llamacpp_python/tests/conftest.py
@@ -8,7 +8,7 @@
REGISTRY = os.environ['REGISTRY']

if not 'IMAGE_NAME' in os.environ:
-    IMAGE_NAME = 'containers/llamacpp-python:latest'
+    IMAGE_NAME = 'containers/llamacpp_python:latest'
else:
    IMAGE_NAME = os.environ['IMAGE_NAME']
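The conftest.py change above only swaps the default image name; the surrounding logic prefers `IMAGE_NAME` from the environment and falls back to the renamed image otherwise. A minimal sketch of that fallback as a testable helper (the function `resolve_image_name` is illustrative, not part of the repository):

```python
import os

def resolve_image_name(env=None, default='containers/llamacpp_python:latest'):
    # Hypothetical helper mirroring the conftest.py logic: prefer
    # IMAGE_NAME from the given environment mapping, else fall back
    # to the renamed llamacpp_python image.
    env = os.environ if env is None else env
    return env.get('IMAGE_NAME', default)
```

Accepting a plain dict makes the fallback easy to exercise without mutating the real process environment.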
2 changes: 1 addition & 1 deletion recipes/audio/audio_to_text/bootc/Containerfile
@@ -16,7 +16,7 @@ RUN set -eu; mkdir -p /usr/ssh && \
ARG RECIPE=audio-to-text
ARG MODEL_IMAGE=quay.io/ai-lab/mistral-7b-instruct:latest
ARG APP_IMAGE=quay.io/ai-lab/${RECIPE}:latest
-ARG SERVER_IMAGE=quay.io/ai-lab/llamacpp-python:latest
+ARG SERVER_IMAGE=quay.io/ai-lab/llamacpp_python:latest
ARG TARGETARCH

# Add quadlet files to setup system to automatically run AI application on boot
2 changes: 1 addition & 1 deletion recipes/audio/audio_to_text/bootc/README.md
@@ -32,7 +32,7 @@ podman build --build-arg "sshpubkey=$(cat ~/.ssh/id_rsa.pub)" \

# for GPU powered sample LLM application with llamacpp cuda model server
podman build --build-arg "sshpubkey=$(cat ~/.ssh/id_rsa.pub)" \
---build-arg "model-server-image="quay.io/ai-lab/llamacpp-python-cuda:latest" \
+--build-arg "model-server-image="quay.io/ai-lab/llamacpp_python_cuda:latest" \
--from <YOUR BOOTABLE IMAGE WITH NVIDIA/CUDA> \
--cap-add SYS_ADMIN \
--platform linux/amd64 \
2 changes: 1 addition & 1 deletion recipes/common/Makefile.common
@@ -5,7 +5,7 @@ IMAGE_NAME ?= $(REGISTRY_ORG)/${APP}:latest
APP_IMAGE ?= $(REGISTRY)/$(IMAGE_NAME)
CHROMADB_IMAGE ?= $(REGISTRY)/$(REGISTRY_ORG)/chromadb:latest
MODEL_IMAGE ?= $(REGISTRY)/$(REGISTRY_ORG)/mistral-7b-instruct:latest
-SERVER_IMAGE ?= $(REGISTRY)/$(REGISTRY_ORG)/llamacpp-python:latest
+SERVER_IMAGE ?= $(REGISTRY)/$(REGISTRY_ORG)/llamacpp_python:latest
SSH_PUBKEY ?= $(shell cat ${HOME}/.ssh/id_rsa.pub;)
BOOTC_IMAGE ?= quay.io/$(REGISTRY_ORG)/${APP}-bootc:latest
BOOTC_IMAGE_BUILDER ?= quay.io/centos-bootc/bootc-image-builder
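The `?=` assignment in Makefile.common means the renamed `SERVER_IMAGE` default only applies when the caller has not already set the variable. A shell sketch of that defaulting rule, using the value from the hunk above (`${VAR:-default}` is the shell analogue of make's `?=`):

```shell
# make's "?=" keeps a SERVER_IMAGE already set by the caller and only
# otherwise applies the default; ${VAR:-default} behaves the same way.
# Start from unset to show the fallback firing.
unset SERVER_IMAGE
SERVER_IMAGE="${SERVER_IMAGE:-quay.io/ai-lab/llamacpp_python:latest}"
echo "$SERVER_IMAGE"
```

In practice this is why `make SERVER_IMAGE=... <target>` or an exported environment variable overrides the image without editing the Makefile.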
2 changes: 1 addition & 1 deletion recipes/common/README.md
@@ -29,7 +29,7 @@ used to override defaults for a variety of make targets.
|CHROMADB_IMAGE | ChromaDB image to be used for application | `$(REGISTRY)/$(REGISTRY_ORG)/chromadb:latest` |
|DISK_TYPE | Disk type to be created by BOOTC_IMAGE_BUILDER | `qcow2` (Options: ami, iso, vmdk, raw) |
|MODEL_IMAGE | AI Model to be used by application | `$(REGISTRY)/$(REGISTRY_ORG)/mistral-7b-instruct:latest`|
-|SERVER_IMAGE | AI Model Server Application | `$(REGISTRY)/$(REGISTRY_ORG)/llamacpp-python:latest` |
+|SERVER_IMAGE | AI Model Server Application | `$(REGISTRY)/$(REGISTRY_ORG)/llamacpp_python:latest` |
|SSH_PUBKEY | SSH Public key preloaded in bootc image. | `$(shell cat ${HOME}/.ssh/id_rsa.pub;)` |
|FROM | Overrides first FROM instruction within Containerfile| `FROM` line defined in the Containerfile |
|ARCH | Use alternate arch for image build | Current Arch |
2 changes: 1 addition & 1 deletion recipes/natural_language_processing/chatbot/ai-lab.yaml
@@ -15,7 +15,7 @@ application:
- amd64
ports:
- 8001
-image: quay.io/ai-lab/llamacpp-python:latest
+image: quay.io/ai-lab/llamacpp_python:latest
- name: streamlit-chat-app
contextdir: app
containerfile: Containerfile
@@ -16,7 +16,7 @@ RUN set -eu; mkdir -p /usr/ssh && \
ARG RECIPE=chatbot
ARG MODEL_IMAGE=quay.io/ai-lab/mistral-7b-instruct:latest
ARG APP_IMAGE=quay.io/ai-lab/${RECIPE}:latest
-ARG SERVER_IMAGE=quay.io/ai-lab/llamacpp-python:latest
+ARG SERVER_IMAGE=quay.io/ai-lab/llamacpp_python:latest
ARG TARGETARCH

# Add quadlet files to setup system to automatically run AI application on boot
@@ -32,7 +32,7 @@ podman build --build-arg "sshpubkey=$(cat ~/.ssh/id_rsa.pub)" \

# for GPU powered sample LLM application with llamacpp cuda model server
podman build --build-arg "sshpubkey=$(cat ~/.ssh/id_rsa.pub)" \
---build-arg "model-server-image="quay.io/ai-lab/llamacpp-python-cuda:latest" \
+--build-arg "model-server-image="quay.io/ai-lab/llamacpp_python_cuda:latest" \
--from <YOUR BOOTABLE IMAGE WITH NVIDIA/CUDA> \
--cap-add SYS_ADMIN \
--platform linux/amd64 \
@@ -33,7 +33,7 @@
- name: Run Model
containers.podman.podman_container:
name: llamacpp_python
-image: ghcr.io/containers/llamacpp-python:latest
+image: ghcr.io/containers/llamacpp_python:latest
state: started
interactive: true
tty: true
2 changes: 1 addition & 1 deletion recipes/natural_language_processing/codegen/ai-lab.yaml
@@ -15,7 +15,7 @@ application:
- amd64
ports:
- 8001
-image: quay.io/ai-lab/llamacpp-python:latest
+image: quay.io/ai-lab/llamacpp_python:latest
- name: codegen-app
contextdir: app
containerfile: Containerfile
@@ -16,7 +16,7 @@ RUN set -eu; mkdir -p /usr/ssh && \
ARG RECIPE=codegen
ARG MODEL_IMAGE=quay.io/ai-lab/mistral-7b-instruct:latest
ARG APP_IMAGE=quay.io/ai-lab/${RECIPE}:latest
-ARG SERVER_IMAGE=quay.io/ai-lab/llamacpp-python:latest
+ARG SERVER_IMAGE=quay.io/ai-lab/llamacpp_python:latest
ARG TARGETARCH

# Add quadlet files to setup system to automatically run AI application on boot
@@ -33,7 +33,7 @@
- name: Run Model
containers.podman.podman_container:
name: llamacpp_python
-image: ghcr.io/containers/llamacpp-python:latest
+image: ghcr.io/containers/llamacpp_python:latest
state: started
interactive: true
tty: true
2 changes: 1 addition & 1 deletion recipes/natural_language_processing/rag/ai-lab.yaml
@@ -15,7 +15,7 @@ application:
- amd64
ports:
- 8001
-image: quay.io/ai-lab/llamacpp-python:latest
+image: quay.io/ai-lab/llamacpp_python:latest
- name: chromadb-server
contextdir: ../../../vector_dbs/chromadb
containerfile: Containerfile
@@ -17,7 +17,7 @@ RUN set -eu; mkdir -p /usr/ssh && \
ARG RECIPE=rag
ARG MODEL_IMAGE=quay.io/ai-lab/mistral-7b-instruct:latest
ARG APP_IMAGE=quay.io/ai-lab/${RECIPE}:latest
-ARG SERVER_IMAGE=quay.io/ai-lab/llamacpp-python:latest
+ARG SERVER_IMAGE=quay.io/ai-lab/llamacpp_python:latest
ARG CHROMADBImage=quay.io/ai-lab/chromadb
ARG TARGETARCH

@@ -33,7 +33,7 @@
- name: Run Model
containers.podman.podman_container:
name: llamacpp_python
-image: ghcr.io/containers/llamacpp-python:latest
+image: ghcr.io/containers/llamacpp_python:latest
state: started
interactive: true
tty: true
2 changes: 1 addition & 1 deletion recipes/natural_language_processing/summarizer/ai-lab.yaml
@@ -15,7 +15,7 @@ application:
- amd64
ports:
- 8001
-image: quay.io/ai-lab/llamacpp-python:latest
+image: quay.io/ai-lab/llamacpp_python:latest
- name: streamlit-summary-app
contextdir: app
containerfile: Containerfile
@@ -16,7 +16,7 @@ RUN set -eu; mkdir -p /usr/ssh && \
ARG RECIPE=summarizer
ARG MODEL_IMAGE=quay.io/ai-lab/mistral-7b-instruct:latest
ARG APP_IMAGE=quay.io/ai-lab/${RECIPE}:latest
-ARG SERVER_IMAGE=quay.io/ai-lab/llamacpp-python:latest
+ARG SERVER_IMAGE=quay.io/ai-lab/llamacpp_python:latest
ARG TARGETARCH

# Add quadlet files to setup system to automatically run AI application on boot
@@ -33,7 +33,7 @@
- name: Run Model
containers.podman.podman_container:
name: llamacpp_python
-image: ghcr.io/containers/llamacpp-python:latest
+image: ghcr.io/containers/llamacpp_python:latest
state: started
interactive: true
tty: true
