A specialized server that enables LLMs (Large Language Models) to gather specific information through sequential questioning. This project implements the MCP (Model Context Protocol) standard for seamless integration with LLM clients.
🎉 Version 1.0.0 Released 🎉
The Sequential Questioning MCP Server is now complete and ready for production deployment. All planned features have been implemented, tested, and documented.
- Sequential Questioning Engine: Generates contextually appropriate follow-up questions based on previous responses
- MCP Protocol Support: Full implementation of the MCP specification for integration with LLMs
- Robust API: RESTful API with comprehensive validation and error handling
- Vector Database Integration: Efficient storage and retrieval of question patterns
- Comprehensive Monitoring: Performance metrics and observability with Prometheus and Grafana
- Production-Ready Deployment: Kubernetes deployment configuration with multi-environment support
- High Availability: Horizontal Pod Autoscaler and Pod Disruption Budget for production reliability
- Security: Network policies to restrict traffic and secure the application
- API Reference
- Architecture
- Usage Examples
- Deployment Guide
- Operational Runbook
- Load Testing
- Deployment Verification
- Final Deployment Plan
- Release Notes
- Python 3.10+
- Docker and Docker Compose (for local development)
- Kubernetes cluster (for production deployment)
- PostgreSQL 15.4+
- Access to a Qdrant instance
The easiest way to get started is to use our initialization script:
./scripts/initialize_app.sh
This script will:
- Check if Docker is running
- Start all necessary containers with Docker Compose
- Run database migrations automatically
- Provide information on how to access the application
The application will be available at http://localhost:8001
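To confirm the server is actually serving requests, a quick smoke test like the sketch below works. It assumes the FastAPI defaults (the OpenAPI schema at /openapi.json); the questioning endpoint path and payload fields are illustrative placeholders only, not the documented API — see the API Reference for the real contract.

```python
# Smoke test for a locally running instance (http://localhost:8001).
# Assumptions: FastAPI's default /openapi.json route is enabled; the question
# endpoint path and field names below are illustrative placeholders only.
import requests

BASE_URL = "http://localhost:8001"

# 1. The OpenAPI schema should be served if the app started correctly.
schema = requests.get(f"{BASE_URL}/openapi.json", timeout=5)
schema.raise_for_status()
print("Available paths:", sorted(schema.json()["paths"]))

# 2. Illustrative call to a sequential-questioning endpoint -- consult the
#    API Reference for the actual endpoint names and schemas.
resp = requests.post(
    f"{BASE_URL}/api/v1/questions/next",  # assumed path
    json={"session_id": "demo", "previous_answers": ["I need help booking travel."]},
    timeout=10,
)
print(resp.status_code, resp.json())
```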
- Clone the repository
git clone https://github.com/your-organization/sequential-questioning.git
cd sequential-questioning
- Install dependencies
pip install -e ".[dev]"
- Set up environment variables
cp .env.example .env
# Edit .env file with your configuration
- Run the development server
uvicorn app.main:app --reload
Or run the full stack with Docker Compose:
docker-compose up -d
If you're starting the application manually, don't forget to run the database migrations:
export DATABASE_URL="postgresql://postgres:postgres@localhost:5432/postgres"
bash scripts/run_migrations.sh
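To double-check that the migrations actually created the schema, a short snippet like the following can list the tables behind DATABASE_URL. It assumes SQLAlchemy and a PostgreSQL driver are installed in your environment (typical for an Alembic-style migration setup, but an assumption here).

```python
# Lists the tables visible through DATABASE_URL so you can confirm the
# migrations ran. Assumes SQLAlchemy plus a PostgreSQL driver are installed.
import os
from sqlalchemy import create_engine, inspect

engine = create_engine(os.environ["DATABASE_URL"])
tables = inspect(engine).get_table_names()
print("Tables found:", tables or "none -- did the migrations run?")
```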
- Development Environment
kubectl apply -k k8s/overlays/dev
- Staging Environment
kubectl apply -k k8s/overlays/staging
- Production Environment
kubectl apply -k k8s/overlays/prod
See the Final Deployment Plan and Operational Runbook for detailed instructions.
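After applying an overlay, you can sanity-check that the pods came up. The sketch below uses the official kubernetes Python client; the namespace and label selector are assumptions, so substitute whatever your overlay actually sets.

```python
# Post-deploy sanity check using the official kubernetes Python client
# (pip install kubernetes). The namespace and label selector below are
# assumptions -- adjust them to match your overlay.
from kubernetes import client, config

config.load_kube_config()  # uses your current kubectl context
pods = client.CoreV1Api().list_namespaced_pod(
    namespace="sequential-questioning",            # assumed namespace
    label_selector="app=sequential-questioning",   # assumed label
)
for pod in pods.items:
    print(pod.metadata.name, pod.status.phase)
```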
Access Prometheus and Grafana dashboards for monitoring:
kubectl port-forward -n monitoring svc/prometheus 9090:9090
kubectl port-forward -n monitoring svc/grafana 3000:3000
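With the Prometheus port-forward above in place, metrics can also be queried programmatically through Prometheus' standard HTTP API. The `up` query below is a generic built-in example, not one of the project's own metric names.

```python
# Query the port-forwarded Prometheus instance via its standard HTTP API.
# "up" is a generic built-in metric; the project's own metric names will differ.
import requests

resp = requests.get(
    "http://localhost:9090/api/v1/query",
    params={"query": "up"},
    timeout=5,
)
resp.raise_for_status()
for result in resp.json()["data"]["result"]:
    print(result["metric"].get("job"), "=", result["value"][1])
```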
Automated CI/CD pipeline with GitHub Actions:
- Continuous Integration: Linting, type checking, and testing
- Continuous Deployment: Automated deployments to dev, staging, and production
- Deployment Verification: Automated checks post-deployment
Run the test suite:
pytest
Run performance tests:
python -m tests.performance.test_sequential_questioning_load
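For a quick local check that doesn't need the full suite, FastAPI's TestClient can exercise the app in-process. The snippet below only asserts that the default OpenAPI schema is served (assuming it hasn't been disabled), to avoid guessing at project-specific endpoint paths.

```python
# Minimal in-process smoke test using FastAPI's TestClient.
# Only the default /openapi.json route is asserted, to avoid assuming
# project-specific endpoint paths.
from fastapi.testclient import TestClient

from app.main import app  # the same app object that uvicorn serves


def test_openapi_schema_is_served() -> None:
    client = TestClient(app)
    response = client.get("/openapi.json")
    assert response.status_code == 200
    assert "paths" in response.json()
```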
If the application is running but the database tables don't exist:
- Make sure the database container is running
- Run the database migrations manually:
export DATABASE_URL="postgresql://postgres:postgres@localhost:5432/postgres"
bash scripts/run_migrations.sh
If you encounter the error pydantic.errors.PydanticImportError: BaseSettings has been moved to the pydantic-settings package, ensure that:
- The pydantic-settings package is included in your dependencies
- You're importing BaseSettings from pydantic_settings instead of directly from pydantic
This project uses Pydantic v2.x, which moved BaseSettings to a separate package.
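For reference, the v2-compatible import looks like the sketch below; the settings field shown is a placeholder, not the project's actual configuration schema.

```python
# Pydantic v2 style: BaseSettings now lives in the pydantic-settings package.
from pydantic_settings import BaseSettings


class Settings(BaseSettings):
    # Placeholder field -- the project's real settings schema will differ.
    database_url: str = "postgresql://postgres:postgres@localhost:5432/postgres"


settings = Settings()  # values can also come from environment variables or .env
print(settings.database_url)
```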
Contributions are welcome! Please see CONTRIBUTING.md for guidelines.
For support or inquiries, contact support@example.com