chore: the remaining exercises #1

Open · wants to merge 1 commit into base: main
62 changes: 62 additions & 0 deletions Applied_Enterprise_AI.md
@@ -0,0 +1,62 @@
# Applied Enterprise AI: Program Philosophy and Roles

## 1. Introduction: What is Applied Enterprise AI?

Enterprise environments demand AI solutions that are practical, reliable, scalable, and secure, and that deliver tangible business value. Applied Enterprise AI focuses on bridging the gap between cutting-edge AI research and real-world business application. It is less about inventing entirely new algorithms from scratch and more about skillfully **integrating, configuring, deploying, and leveraging** existing powerful AI models (such as large language models, embedding models, and machine learning libraries) to solve specific, high-impact enterprise problems.

This program emphasizes building robust AI-powered features and systems that work within the constraints and requirements of modern businesses, drawing upon best practices in software engineering, MLOps/LLMOps, and ethical AI deployment. (Ref: Enterprise Application Development Rules)

## 2. The Evolving Roles in the AI Landscape

The rapid advancement of AI, particularly foundation models, necessitates new kinds of specialization. Building and effectively utilizing AI in the enterprise requires a collaborative effort between those who architect the core intelligence and those who weave that intelligence into products and processes. We identify two key roles within the Applied Enterprise AI ecosystem:

## 3. The AI Engineer: Architecting the Intelligence

**Definition:** In many enterprise contexts, AI Engineers act as specialized systems engineers focused on building, deploying, and managing the infrastructure and operational processes for AI models. They ensure AI systems are reliable, scalable, and integrated effectively into the broader tech stack.

**Core Responsibilities:**
* Building and maintaining robust CI/CD pipelines for model training, evaluation, and deployment (MLOps/LLMOps).
* Managing the infrastructure required to train and serve models efficiently (e.g., GPU clusters, Kubernetes).
* Optimizing model inference speed and resource consumption.
* Implementing monitoring and alerting for AI systems (performance, drift, data quality).
* Ensuring the scalability, efficiency, and reliability of core AI services and infrastructure.
* Implementing and optimizing *established* AI models and techniques within the production environment.

*Note: This role is distinct from "AI Research Scientist" or similar R&D roles, which typically focus on creating novel algorithms and require deep mathematical and theoretical expertise.*

**Key Skills:** Strong software engineering principles (akin to backend/systems engineering), MLOps/LLMOps tooling (Kubeflow, MLflow, Airflow, etc.), cloud platforms (AWS, Azure, GCP), containerization (Docker, Kubernetes), infrastructure as code (Terraform), monitoring tools (Prometheus, Grafana), Python/Bash scripting, strong understanding of the *principles and operational trade-offs* of underlying AI models.

## 4. The Product Engineer (Leveraging AI): Building AI-Powered Solutions

**Definition:** Product Engineers are the innovators and integrators who *use* existing AI tools and platforms to design, build, and enhance products, features, and business processes. They focus on the practical application of AI to solve specific user or business problems.

**Core Responsibilities:**
* Identifying opportunities where AI can deliver value within a product or workflow.
* Selecting the *appropriate* pre-built AI models, APIs, or platforms for a given task (e.g., choosing an embedding model, using a RAG framework, calling a vision API).
* Skillfully integrating AI components into larger software applications (e.g., building a chatbot using an LLM API and a vector database).
* Designing effective prompts and interaction patterns for generative AI (Prompt Engineering).
* Critically evaluating the output and performance of AI tools in the context of the application.
* Understanding AI capabilities and limitations to set realistic expectations and design robust solutions.
* Collaborating closely with domain experts, designers, and users.
* Implementing AI-powered features following software engineering best practices.

**Key Skills:** Strong software engineering (web dev, backend, etc.), API integration, system design, **strong conceptual understanding of AI techniques (what they do, when to use them, e.g., cosine similarity vs. Jaccard similarity for different tasks)**, problem-solving, product sense, user empathy, communication, critical evaluation skills, data analysis basics.
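
The similarity-metric example above can be made concrete. This is an illustrative sketch, not part of the assignment code: cosine similarity compares dense embedding vectors by direction, while Jaccard similarity compares sets (e.g., keyword tokens), and a Product Engineer should know which fits a given task.

```javascript
// Cosine similarity: compares dense vectors (e.g., text embeddings) by angle.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Jaccard similarity: compares sets by overlap (intersection over union).
function jaccardSimilarity(setA, setB) {
  const intersection = [...setA].filter((x) => setB.has(x)).length;
  const union = new Set([...setA, ...setB]).size;
  return intersection / union;
}

// Dense vectors from an embedding model suit cosine similarity:
console.log(cosineSimilarity([1, 2, 3], [2, 4, 6])); // 1 (same direction)

// Token sets suit Jaccard similarity:
const docA = new Set(["rag", "chat", "llm"]);
const docB = new Set(["rag", "chat", "vector"]);
console.log(jaccardSimilarity(docA, docB)); // 0.5 (2 shared of 4 total)
```

Cosine ignores vector magnitude, which is usually what you want for embeddings; Jaccard is cheap and interpretable for sparse keyword matching.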

## 5. Synergy and Collaboration

These roles are distinct but highly interdependent. Product Engineers are the primary consumers of the tools and platforms built by AI Engineers. Their feedback on usability, performance, and real-world requirements is crucial for guiding the work of AI Engineers. Conversely, AI Engineers provide the foundational capabilities that enable Product Engineers to innovate rapidly. Effective communication and collaboration between these roles are essential for successful Applied Enterprise AI.

## 6. The Goal of Applied Enterprise AI Education

This program aims to equip students with the knowledge, skills, and mindset required to thrive in the evolving landscape of enterprise AI, preparing them primarily for roles akin to the **Product Engineer (Leveraging AI)**, while providing foundational awareness relevant to AI Engineering.

Our educational philosophy emphasizes the holistic development of the individual, integrating:

* **Technical Excellence:** Mastering the software engineering principles and practical skills needed to build robust, scalable, and maintainable AI-powered applications.
* **Conceptual AI Understanding:** Developing a strong intuition for *what* different AI techniques do, their strengths and weaknesses, and *when and why* to apply them. This includes understanding concepts like embeddings, similarity metrics (e.g., cosine, Jaccard), attention mechanisms, or the principles of RAG, focusing on their meaning and application rather than deep mathematical derivations.
* **Communication:** Articulating technical concepts clearly, collaborating effectively with diverse teams (technical and non-technical), and presenting solutions persuasively.
* **Collaboration:** Working effectively in teams, leveraging diverse perspectives, and contributing to shared goals – essential in complex enterprise environments.
* **Personal Responsibility & Ethics:** Understanding the societal and ethical implications of AI, taking ownership of work, building trustworthy systems, and committing to continuous learning in this rapidly changing field.
* **Value Maximization:** Cultivating the ability to identify problems where AI can provide genuine value, think critically about solutions, and ultimately contribute meaningfully to human goals and enterprise success by intelligently leveraging AI tools.

By fostering these capabilities, we aim to produce graduates who can not only build technically sound AI applications but also act as responsible innovators, shaping the future of how AI is applied in the enterprise world.
16 changes: 16 additions & 0 deletions assignment3_rag_chat_integration/README.md
@@ -0,0 +1,16 @@
# Unit 3: Integrating RAG with Real-Time Chat

## Goal

Combine the RAG system from Unit 1 with the real-time chat infrastructure from Unit 2.

## Focus Areas (TBA)

* Triggering RAG retrieval based on chat messages.
* Integrating the LLM generation step.
* Streaming RAG-augmented responses back to the chat interface.
* Handling asynchronous RAG processing within the chat flow.
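
The flow these focus areas describe can be sketched end to end with the transport and model stubbed out. Everything below is illustrative: in the actual assignment, `retrieve` would query a vector store via LangChain.js and the chunks would be emitted to the browser over Socket.IO rather than a plain callback.

```javascript
// Stand-in retrieval: rank documents by words shared with the question.
async function retrieve(question, docs) {
  const terms = new Set(question.toLowerCase().split(/\W+/).filter(Boolean));
  return docs
    .map((d) => ({
      d,
      score: d.toLowerCase().split(/\W+/).filter((w) => terms.has(w)).length,
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, 2)
    .map((x) => x.d);
}

// Stand-in for a streaming LLM call: yield the answer token by token.
async function* generateAnswer(question, context) {
  const answer = `Based on ${context.length} passage(s): ...`;
  for (const token of answer.split(" ")) yield token + " ";
}

// The chat handler: retrieval is triggered by the incoming message, then
// the RAG-augmented response is streamed back chunk by chunk.
async function handleChatMessage(question, docs, emitChunk) {
  const context = await retrieve(question, docs); // 1. async retrieval
  for await (const chunk of generateAnswer(question, context)) {
    emitChunk(chunk); // 2. stream each chunk back to the client
  }
}
```

Keeping the handler `async` end to end is what lets the chat server stay responsive while retrieval and generation are in flight.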

## Status

To Be Announced.
7 changes: 7 additions & 0 deletions assignment3_rag_chat_integration/documents/knowledge.txt
@@ -0,0 +1,7 @@
The RAG Chatbot Project integrates several key technologies.
It uses Node.js and Express for the backend server.
Socket.IO is used for real-time, bidirectional communication between the server and clients.
LangChain.js orchestrates the Retrieval-Augmented Generation process.
OpenAI provides the embedding models and the large language model (LLM) for generating responses.
Vector stores, like an in-memory one for this demo, hold document embeddings for quick retrieval.
The goal is to create a chatbot that can answer questions based on a provided knowledge base.
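
The in-memory vector store this knowledge base describes can be sketched with toy data. This is not the assignment's implementation: real embeddings would come from an embedding model (e.g., via the OpenAI API), and the 3-dimensional vectors here are made up for illustration.

```javascript
// Cosine similarity between two equal-length vectors.
function cosine(a, b) {
  const dot = a.reduce((s, v, i) => s + v * b[i], 0);
  const norm = (v) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Minimal in-memory vector store: text stored alongside its embedding,
// retrieval by nearest cosine similarity to a query embedding.
class InMemoryVectorStore {
  constructor() { this.entries = []; }
  add(text, embedding) { this.entries.push({ text, embedding }); }
  similaritySearch(queryEmbedding, k = 1) {
    return this.entries
      .map((e) => ({ text: e.text, score: cosine(queryEmbedding, e.embedding) }))
      .sort((a, b) => b.score - a.score)
      .slice(0, k);
  }
}

const store = new InMemoryVectorStore();
store.add("Socket.IO handles real-time communication.", [0.9, 0.1, 0.0]);
store.add("LangChain.js orchestrates the RAG process.", [0.1, 0.9, 0.1]);

// A query embedding close to the first document's vector:
console.log(store.similaritySearch([0.8, 0.2, 0.0], 1)[0].text);
// → "Socket.IO handles real-time communication."
```

An in-memory store like this is fine for a demo; persistence and approximate-nearest-neighbor indexing are what dedicated vector databases add.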
23 changes: 23 additions & 0 deletions assignment3_rag_chat_integration/index.html
@@ -0,0 +1,23 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>RAG Chat - Unit 3</title>
<link rel="stylesheet" href="/style.css"> <!-- Link to the CSS file -->
</head>
<body>
<h1>Unit 3: RAG-Powered Chat</h1>

<div id="chat-container">
<div id="messages"></div>
<div id="typing-indicator" style="display: none;"><i>AI is thinking...</i></div>
<input id="username-input" type="text" placeholder="Enter your username">
<input id="message-input" type="text" placeholder="Type your message...">
<button id="send-button">Send</button>
</div>

<script src="/socket.io/socket.io.js"></script>
<script src="/client.js"></script> <!-- Link to the client-side JavaScript -->
</body>
</html>
17 changes: 17 additions & 0 deletions assignment4_advanced_rag/README.md
@@ -0,0 +1,17 @@
# Unit 4: Advanced RAG Techniques & Optimization

## Goal

Explore and implement improvements to the basic RAG pipeline developed in Unit 3.

## Focus Areas (TBA)

* Sophisticated retrieval strategies (e.g., re-ranking, query expansion).
* Handling larger or multiple knowledge bases.
* Optimizing retrieval and generation latency.
* Evaluating different vector stores or embedding models.
* Adding source citation to responses.
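
One re-ranking idea from the list above can be sketched as follows. This is a hedged illustration, not a prescribed approach: first-pass vector scores are assumed given, they are blended with a cheap lexical signal, and the weights are arbitrary.

```javascript
// Fraction of a passage's words that appear in the query (a cheap lexical signal).
function keywordOverlap(query, text) {
  const terms = new Set(query.toLowerCase().split(/\W+/).filter(Boolean));
  const words = text.toLowerCase().split(/\W+/).filter(Boolean);
  if (words.length === 0) return 0;
  return words.filter((w) => terms.has(w)).length / words.length;
}

// Re-score first-pass candidates by blending the vector score with the
// lexical signal, then sort by the combined score.
function rerank(query, candidates, { vectorWeight = 0.7, lexicalWeight = 0.3 } = {}) {
  return candidates
    .map((c) => ({
      ...c,
      finalScore:
        vectorWeight * c.vectorScore +
        lexicalWeight * keywordOverlap(query, c.text),
    }))
    .sort((a, b) => b.finalScore - a.finalScore);
}

const candidates = [
  { text: "LangChain.js orchestrates the RAG process.", vectorScore: 0.80 },
  { text: "Socket.IO is used for real-time chat.", vectorScore: 0.78 },
];
const ranked = rerank("real-time chat with Socket.IO", candidates);
console.log(ranked[0].text); // lexical overlap promotes the Socket.IO passage
```

Production systems often use a cross-encoder model for this second pass instead; the structure (retrieve broadly, then re-score a short list) is the same.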

## Status

To Be Announced.
17 changes: 17 additions & 0 deletions assignment5_ui_ux/README.md
@@ -0,0 +1,17 @@
# Unit 5: User Interface Enhancements

## Goal

Improve the user experience of the RAG-enabled chat application.

## Focus Areas (TBA)

* Clearly distinguishing AI-generated messages.
* Displaying retrieved source information (citations).
* Implementing typing indicators (for both human and AI).
* User profiles or settings.
* General UI/UX polish.

## Status

To Be Announced.
18 changes: 18 additions & 0 deletions assignment6_deployment/README.md
@@ -0,0 +1,18 @@
# Unit 6: Deployment & Scalability

## Goal

Prepare the real-time RAG chat application for deployment and consider scalability.

## Focus Areas (TBA)

* Containerization (e.g., Docker).
* Choosing a deployment platform (e.g., cloud provider, PaaS).
* Managing environment variables and secrets securely.
* Setting up CI/CD pipelines.
* Load testing and performance monitoring.
* Scaling strategies for WebSocket connections and RAG processing.
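
The containerization and secrets points above can be sketched in a hypothetical Dockerfile for the Node.js chat server. It assumes the entry point is `server.js` and dependencies are listed in `package.json`; adjust both to the actual project layout.

```dockerfile
# Hypothetical Dockerfile for the Node.js chat server (assumes server.js
# and package.json; names are illustrative, not from the assignment).
FROM node:20-alpine
WORKDIR /app

# Copy manifests and install production dependencies first, so this
# layer is cached until package.json changes.
COPY package*.json ./
RUN npm ci --omit=dev

COPY . .

# Secrets such as OPENAI_API_KEY should be injected at runtime
# (e.g., docker run -e OPENAI_API_KEY=...), never baked into the image.
EXPOSE 3000
CMD ["node", "server.js"]
```

Keeping secrets out of the image is what makes the same build promotable across dev, staging, and production.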

## Status

To Be Announced.