Using Semantic Workbench with python assistants

This project provides an example of a very basic agent connected to Semantic Workbench.

The agent doesn't do anything real: it simply echoes back the messages sent by the user. The code here is only meant to show the basics and to familiarize you with the code structure and the integration with Semantic Workbench.
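
The echo logic itself is only a few lines. As a rough sketch of the idea (using hypothetical names, not the actual semantic-workbench-assistant API used in chat.py), it boils down to something like this:

# Minimal sketch of the echo behavior only -- hypothetical names, not the
# actual semantic-workbench-assistant classes used in chat.py.
from dataclasses import dataclass


@dataclass
class Message:
    sender: str
    content: str


class EchoAssistant:
    """Replies to every user message with the same text."""

    def on_message(self, message: Message) -> Message:
        # No model calls, no tools, no state: just echo the input back.
        return Message(sender="assistant", content=message.content)


if __name__ == "__main__":
    assistant = EchoAssistant()
    reply = assistant.on_message(Message(sender="user", content="hello"))
    print(reply.content)  # -> hello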

Suggested Development Environment

  • Use GitHub Codespaces for a quick, turn-key dev environment: /.devcontainer/README.md
  • VS Code is recommended for development

Pre-requisites

Steps

  • Use VS Code > Run and Debug (ctrl/cmd+shift+d) > semantic-workbench to start the app and service from this workspace.
  • Use VS Code > Run and Debug (ctrl/cmd+shift+d) > launch assistant to start the assistant.
  • If running in a devcontainer, follow the instructions in the GitHub Codespaces / devcontainer README for any additional steps.
  • Return to the workbench app to interact with the assistant.
  • Add a new assistant from the main menu of the app, choosing the assistant name as defined by the service_name in chat.py (see the snippet after this list).
  • Click the newly created assistant to configure and interact with it.
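
For reference, the service_name mentioned above lives in chat.py. Its exact form may differ in your copy of the example, but it is typically just a string value along these lines (the name below is a placeholder):

# in chat.py -- illustrative only; check your copy of the file for the exact
# variable. The value is the name shown when adding the assistant in the app.
service_name = "My Python Example Assistant"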

Starting the example from CLI

If you're not using VS Code or Codespaces, you can also work from the command line, using Poetry:

cd <PATH TO THIS FOLDER>
poetry install
poetry run start-semantic-workbench-assistant assistant.chat:app

Create your own assistant

Copy the contents of this folder to your project.

  • The paths are already set if you keep the same repo root and a relative path of /<your_projects>/<your_assistant_name>
  • If placed in a different location, update the references in pyproject.toml to point to the appropriate locations of the semantic-workbench-* packages (see the example below)
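
For example, with Poetry you can point the dependency at the package's location on disk. The relative path below is only an assumption about where the packages might live in your repo; adjust it to match your layout:

[tool.poetry.dependencies]
# Path dependency on the workbench assistant library; the relative path is an
# example and must match where the semantic-workbench-* packages actually live.
semantic-workbench-assistant = { path = "../../libraries/python/semantic-workbench-assistant", develop = true }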

From Development to Production

It's important to highlight that Semantic Workbench is a development tool: it's not designed to host agents in a production environment. The workbench helps with testing and debugging in an isolated development environment, usually your localhost.

The core of your assistant/AI application (how it reacts to users, how it invokes tools, how it stores data) can be developed with any framework, such as Semantic Kernel, LangChain, OpenAI assistants, etc. That is typically the code you will add to chat.py.

Semantic Workbench is not a framework. Dependencies on the semantic-workbench-assistant package are there only to test and debug your code in Semantic Workbench. When an assistant is fully developed and ready for production, configurable settings should be hard coded, and dependencies on semantic-workbench-assistant and similar packages should be removed.
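
As a sketch of that last step, assuming your assistant exposed a few configurable values in the workbench UI during development, the production version might simply pin them in your own code (all names and values below are placeholders):

# Hypothetical production configuration: values that were editable in the
# workbench UI during development become fixed constants in your own app,
# with no semantic-workbench-assistant imports required.
from dataclasses import dataclass


@dataclass(frozen=True)
class AssistantConfig:
    model: str = "my-model-of-choice"   # placeholder model name
    max_response_tokens: int = 1024
    system_prompt: str = "You are a helpful assistant."


CONFIG = AssistantConfig()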