Commit 91c842e

draft of LLM workshop
1 parent a3eced7 commit 91c842e

File tree

7 files changed: +558 additions, 0 deletions


book/_toc.yml

Lines changed: 15 additions & 0 deletions
@@ -37,6 +37,21 @@ parts:
       title: "Rule 6: Test Code"
     - file: golden-rules/collaborate
       title: "Rule 7: Collaborate"
+    - file: llms/overview
+      title: Using LLMs
+      sections:
+        - file: llms/setup-guide
+          title: Setup Guide
+        - file: llms/effective-prompting
+          title: Effective Prompting
+        - file: llms/generating-code
+          title: Generating Code
+        - file: llms/debugging-errors
+          title: Debugging Errors
+        - file: llms/human-in-the-loop
+          title: The Importance of Human-in-the-Loop
+
 - caption: Installation
   chapters:
     - file: install/common

book/llms/debugging-errors.ipynb

Lines changed: 201 additions & 0 deletions
@@ -0,0 +1,201 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "914cfac4",
   "metadata": {},
   "source": [
    "(llms-debugging-errors)=\n",
    "# Debugging Errors\n",
    "\n",
    "LLMs are highly effective at identifying and fixing bugs in code, thanks to their training on vast datasets that include common coding mistakes and their solutions. Instead of manually troubleshooting errors, you can leverage an AI coding assistant to quickly resolve issues and get back to building your project.\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5cc84426",
   "metadata": {},
   "source": [
    "## Quick Fixes\n",
    "For minor errors, LLMs are very effective at providing quick fixes. In large projects, be careful not to provide too much context, as this can lead the LLM to focus on the wrong aspects of the code. Be specific about the snippet you want it to analyze rather than providing a lot of background information about the code's purpose or functionality. LLMs work best with concise, focused prompts that directly address the code snippet at hand.\n",
    "\n",
    "Practice using an LLM to solve the simple bugs in the code below, which plots a sine wave. Typically you can copy and paste the code snippet into the LLM's input field and ask it to identify and fix the errors.\n",
    "\n",
    "```{admonition} Example Prompt\n",
    ":class: example\n",
    "As an example prompt, you can use:\n",
    "> The following Python code has an error. Identify the error and provide a corrected version of the code.\n",
    "> \n",
    "> [Insert code snippet here]\n",
    "```\n",
    "\n",
    "Below is the code with some simple errors:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8b6ca224",
   "metadata": {},
   "outputs": [],
   "source": [
    "# ---------------------- student exercise --------------------------------- #\n",
    "# DEBUG AND FIX THE FOLLOWING CODE USING AI FEATURES (hint: there are 3 bugs)\n",
    "\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "x = np.linspace(0, 2 * np.pi, 100)\n",
    "y = np.sin[x]\n",
    "\n",
    "plt.plot(x, y)\n",
    "plt.title = \"Sine Wave\"\n",
    "plt.xlabel(\"x values\")\n",
    "plt.ylable(\"sin(x)\")\n",
    "plt.show()\n",
    "\n",
    "# ---------------------- student exercise --------------------------------- #"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3f33d375",
   "metadata": {
    "vscode": {
     "languageId": "plaintext"
    }
   },
   "source": [
    "## Providing Error Messages\n",
    "\n",
    "Typically when you run into an error, you will see an error message in the console. You can copy and paste this error message into the LLM to help it understand the problem. The LLM can then propose a solution based on the error message. This is especially useful for more complex issues where the code itself may not clearly indicate the problem.\n",
    "\n",
    "The Python snippet below is meant to plot the inverse of a range of numbers; however, it fails for one particular value. It still produces a (mostly correct) plot, but the error is visible in the console. Run the code below to see the error message, then copy and paste it into the LLM to get a fixed version.\n",
    "\n",
    "You will see an error message similar to:\n",
    "> RuntimeWarning: divide by zero encountered in divide\n",
    "\n",
    "```{admonition} Example Prompt\n",
    ":class: example\n",
    "As an example prompt, you can use:\n",
    "> The following Python code has an error. Identify the error and provide a corrected version of the code:\n",
    "> \n",
    "> [Insert code snippet here]\n",
    ">\n",
    "> The error message is:\n",
    "> [Insert error message here]\n",
    "```\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "95cca551",
   "metadata": {},
   "outputs": [],
   "source": [
    "# ---------------------- student exercise --------------------------------- #\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "def compute_inverse(arr):\n",
    "    return 1 / arr\n",
    "\n",
    "x = np.linspace(-1, 1, 101)\n",
    "y = compute_inverse(x)\n",
    "\n",
    "plt.plot(x, y)\n",
    "plt.title(\"Inverse Function\")\n",
    "plt.xlabel(\"x values\")\n",
    "plt.ylabel(\"1/x\")\n",
    "plt.show()\n",
    "# ---------------------- student exercise --------------------------------- #"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "48ff0294",
   "metadata": {},
   "source": [
    "More complex errors may require you to provide additional context or information about the code. In these cases, include a brief description of what the code is supposed to do, along with the error message. This helps the LLM understand the intended functionality and provide a more accurate solution.\n",
    "\n",
    "For more complex errors, the error message may also provide additional context in the form of a `traceback`. A traceback is a report of the sequence of function calls that led to the error. It can help you understand where in the code the error occurred and what might have caused it.\n",
    "\n",
    "```{admonition} Tip\n",
    ":class: tip\n",
    "Tracebacks can be very long and often contain information that is not relevant to the specific error you are trying to fix. When providing a traceback to an LLM, focus on the most relevant parts, which are typically the last few lines. In general, the last few traceback calls plus the final error message are sufficient when debugging more complex issues.\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fcb0ca2f",
   "metadata": {},
   "source": [
    "## Iterative Debugging\n",
    "The previous examples could be solved with a single prompt. More complex issues, however, may require an iterative approach: you may need to give the LLM additional context based on its initial response, or the LLM may fix the first issue only for another bug to surface when you rerun the code. In these cases, continue to refine your prompts and provide more information until the LLM arrives at a solution.\n",
    "\n",
    "```{admonition} Tip\n",
    ":class: tip\n",
    "An LLM chat typically keeps a memory of the conversation, so you can refer back to previous messages. This means you can provide just the new error message, without repeating the code snippet, if the code was already shared earlier in the conversation. This streamlines the debugging process and keeps the focus on the specific issue at hand.\n",
    "```\n",
    "\n",
    "Run the code below to see an example of an iterative debugging process. The goal is to visualize a noisy sine wave. Copy and paste the error message into the LLM, then continue to refine your prompts based on its responses until you arrive at a solution. There are two errors in the code, so after providing the first error message, run the code again to see the second error message and provide that to the LLM as well, until you have a fully working code snippet."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "90dba6ee",
   "metadata": {},
   "outputs": [],
   "source": [
    "# ---------------------- student exercise --------------------------------- #\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "def scale_data(data, factor):\n",
    "    return data * factor\n",
    "\n",
    "def generate_and_scale_data():\n",
    "    x = np.linspace(0, 10, 100)\n",
    "    noise = np.random.normal(0, 1, 100)\n",
    "    y = np.sin(x) + noise\n",
    "    scaled = scale_data(y, \"2\")\n",
    "    return x, scaled\n",
    "\n",
    "x, y_scaled = generate_and_scale_data()\n",
    "plt.scatter(x, y_scaled[::2])\n",
    "plt.title(\"Noisy Sine Wave\")\n",
    "plt.xlabel(\"x\")\n",
    "plt.ylabel(\"scaled y\")\n",
    "plt.show()\n",
    "# ---------------------- student exercise --------------------------------- #"
   ]
  },
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "base",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
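For reference, here is one way the divide-by-zero warning in the second exercise above could be resolved. This is a sketch of the kind of fix an LLM might suggest, not the only valid answer; it assumes the intent is to leave the inverse undefined at x = 0 rather than drop that point.

```python
# A sketch of one possible fix for the divide-by-zero warning
# (hypothetical; assumes 1/x should be left undefined at x = 0).
import numpy as np
import matplotlib.pyplot as plt

def compute_inverse(arr):
    # np.where evaluates 1 / arr for every element, so suppress the
    # divide-by-zero warning explicitly and return NaN at zero.
    with np.errstate(divide="ignore"):
        return np.where(arr == 0, np.nan, 1 / arr)

x = np.linspace(-1, 1, 101)  # includes x = 0 exactly
y = compute_inverse(x)

plt.plot(x, y)  # NaN values simply leave a gap in the line at x = 0
plt.title("Inverse Function")
plt.xlabel("x values")
plt.ylabel("1/x")
plt.show()
```

Returning NaN at the singular point keeps the array the same length as x, so the plot still covers the full range while skipping the undefined value.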
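Similarly, a possible fully debugged version of the noisy-sine exercise is sketched below. This is an instructor-facing reference, not the definitive answer; it assumes the intended scale factor is the number 2 and that every point should be plotted.

```python
# A possible fully debugged version of the noisy-sine exercise
# (hypothetical reference fix; the scale factor of 2 and plotting
# every point are assumptions about the intended behavior).
import numpy as np
import matplotlib.pyplot as plt

def scale_data(data, factor):
    return data * factor

def generate_and_scale_data():
    x = np.linspace(0, 10, 100)
    noise = np.random.normal(0, 1, 100)
    y = np.sin(x) + noise
    # Bug 1 fixed: pass a numeric factor, not the string "2".
    scaled = scale_data(y, 2)
    return x, scaled

x, y_scaled = generate_and_scale_data()
# Bug 2 fixed: x and y must have the same length, so plot all points.
plt.scatter(x, y_scaled)
plt.title("Noisy Sine Wave")
plt.xlabel("x")
plt.ylabel("scaled y")
plt.show()
```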

book/llms/effective-prompting.md

Lines changed: 43 additions & 0 deletions
@@ -0,0 +1,43 @@
(llms-effective-prompting)=
# Effective Prompting

When interacting with an LLM about your code, the way you prompt the model can significantly impact the quality of its responses. We recommend using the integrated AI chat panel in your editor, if one is present, instead of an external chatbot like ChatGPT. This allows the LLM to directly reference your project files and the broader code context.

```{tip}
If you are using a coding assistant embedded in your editor, such as GitHub Copilot or Cursor AI, you can open the chat panel to interact with the LLM. It is typically found in the sidebar of your editor and lets you ask questions and get code suggestions directly related to your current project.
```

## Best Practices for Prompting LLMs

When prompting an LLM, it is essential to be clear and specific about what you want. Here are some strategies to improve your prompts:

1. **Be Specific**: Instead of asking a vague question, provide clear details about what you need. The more specific your question, the more relevant and useful the response will be.

2. **Specify Context**: If you are working with a specific language, library, or framework, mention it in your prompt. This helps the LLM tailor its response to your needs.

```{admonition} Example
:class: example
Say you want to create a numpy array with random values. Instead of asking "How do I create a random array?", ask "How do I create a numpy array with random integers between 1 and 10 in Python?".
```

3. **Desired Output Format**: If you want the response in a specific format, such as code with comments or a brief explanation, include that in your prompt.

```{admonition} Example
:class: example
Instead of asking "How do I use Matplotlib in Python?", ask "Outline the steps to create a simple line plot using Matplotlib in Python, and provide a code example with comments explaining each step."
```

4. **Task Definition**: Telling the LLM what you want it to do in clear steps is far more effective than telling it what not to do.

```{admonition} Example
:class: example
Instead of saying "Don't give me a long explanation," say "Provide a brief example."
```

5. **Structure**: Using a structured format in your question can lead to drastically better results. Rather than asking a general multipart question, break it down into smaller, more manageable parts with clear instructions.

6. **Role Definition**: If you want the LLM to act as a specific type of expert, such as a patient mentor or an expert in a particular field, specify that in your prompt. This helps the LLM understand the tone and depth of response you expect.

```{admonition} Tip
:class: tip
These skills apply to any LLM interaction, not just coding assistants. Next time you are using a personal assistant like ChatGPT, remember that specific prompts with context will yield better results!
```
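To make the numpy example above concrete, here is roughly what the more specific prompt should elicit. This is a minimal sketch; the array size of 10 is an arbitrary choice for illustration.

```python
import numpy as np

# Random integers between 1 and 10 inclusive: np.random.randint's
# upper bound is exclusive, so pass 11.
arr = np.random.randint(1, 11, size=10)
print(arr)
```

Note the off-by-one detail in the exclusive upper bound: a specific prompt naming the range makes it much more likely the LLM gets this right.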

0 commit comments
