Llm #9

Open · wants to merge 4 commits into `main`
15 changes: 15 additions & 0 deletions book/_toc.yml
@@ -37,6 +37,21 @@ parts:
title: "Rule 6: Test Code"
- file: golden-rules/collaborate
title: "Rule 7: Collaborate"
- file: llms/overview
title: Using LLMs
sections:
- file: llms/setup-guide
title: Setup Guide
- file: llms/effective-prompting
title: Effective Prompting
- file: llms/generating-code
title: Generating Code
- file: llms/debugging-errors
title: Debugging Errors
- file: llms/human-in-the-loop
title: The Importance of Human-in-the-Loop


- caption: Installation
chapters:
- file: install/common
201 changes: 201 additions & 0 deletions book/llms/debugging-errors.ipynb
@@ -0,0 +1,201 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "914cfac4",
"metadata": {},
"source": [
"(llms-debugging-errors)=\n",
"# Debugging Errors\n",
"\n",
"LLMs are highly effective at identifying and fixing bugs in code, thanks to their training on vast datasets that include common coding mistakes and solutions. Instead of manually troubleshooting errors, you can leverage an AI coding assistant to quickly resolve issues and get back to building your project.\n",
"\n"
]
},
{
"cell_type": "markdown",
"id": "5cc84426",
"metadata": {},
"source": [
"## Quick Fixes\n",
"For minor errors, LLMs are very effective at providing quick fixes. In large projects, be careful not to provide too much context, as this can lead the LLM to focus on the wrong aspects of the code. Be specific about the snippet you want it to analyze rather than supplying extensive background about the code's purpose or functionality; LLMs work best with concise, focused prompts that directly address the snippet at hand.\n",
"\n",
"Practice using an LLM to solve the simple bugs in the code below, which plots a sine wave. Typically, you can copy and paste the code snippet into the LLM's input field and ask it to identify and fix the errors.\n",
"\n",
"```{admonition} Example Prompt\n",
":class: example\n",
"As an example prompt, you can use:\n",
"> The following Python code has an error. Identify the error and provide a corrected version of the code. \n",
"> \n",
"> [Insert code snippet here]\n",
"```\n",
"\n",
"\n",
"Below is the code with some simple errors:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8b6ca224",
"metadata": {},
"outputs": [],
"source": [
"# ---------------------- student exercise --------------------------------- #\n",
"# DEBUG AND FIX THE FOLLOWING CODE USING AI FEATURES (hint: there are 3 bugs)\n",
"\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"\n",
"x = np.linspace(0, 2 * np.pi, 100)\n",
"y = np.sin[x]\n",
"\n",
"plt.plot(x, y)\n",
"plt.title = \"Sine Wave\"\n",
"plt.xlabel(\"x values\")\n",
"plt.ylable(\"sin(x)\")\n",
"plt.show()\n",
"\n",
"# ---------------------- student exercise --------------------------------- #"
]
},
{
"cell_type": "markdown",
"id": "3f33d375",
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"source": [
"## Providing Error Messages \n",
"\n",
"Typically when you run into an error, you will see an error message in the console. You can copy and paste this error message into the LLM to help it understand the problem. The LLM can then provide a solution based on the error message. This is especially useful for more complex issues where the code itself may not clearly indicate the problem. \n",
"\n",
"The Python snippet below is meant to plot the inverse of a list of numbers; however, it fails for one value. It still produces a (mostly correct) plot, but a warning appears in the console. Run the code below to see the warning, then copy and paste it into the LLM to get a fixed version.\n",
"\n",
"You will see a warning message similar to:\n",
"> RuntimeWarning: divide by zero encountered in divide\n",
"\n",
"```{admonition} Example Prompt\n",
":class: example\n",
"As an example prompt, you can use:\n",
"> The following Python code has an error, please identify the error and provide a corrected version of the code:\n",
"> \n",
"> [Insert code snippet here]\n",
">\n",
"> The error message is:\n",
"> [Insert error message here]\n",
"```\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "95cca551",
"metadata": {},
"outputs": [],
"source": [
"# ---------------------- student exercise --------------------------------- #\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"\n",
"def compute_inverse(arr):\n",
" return 1 / arr\n",
"\n",
"x = np.linspace(-1, 1, 101)\n",
"y = compute_inverse(x)\n",
"\n",
"plt.plot(x, y)\n",
"plt.title(\"Inverse Function\")\n",
"plt.xlabel(\"x values\")\n",
"plt.ylabel(\"1/x\")\n",
"plt.show()\n",
"# ---------------------- student exercise --------------------------------- #"
]
},
{
"cell_type": "markdown",
"id": "48ff0294",
"metadata": {},
"source": [
"More complex errors may require you to provide additional context or information about the code. In these cases, you can include a brief description of what the code is supposed to do, along with the error message. This helps the LLM understand the intended functionality and provide a more accurate solution.\n",
"\n",
"For more complex errors, the message may also include a `traceback`: a report of the sequence of function calls that led to the error. A traceback helps you pinpoint where in the code the error occurred and what might have caused it.\n",
"\n",
"```{admonition} Tip\n",
":class: tip\n",
"Tracebacks can be very long and often contain information that is not relevant to the specific error you are trying to fix. When providing a traceback to an LLM, it is usually sufficient to include the last few calls and the final error message.\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "fcb0ca2f",
"metadata": {},
"source": [
"## Iterative Debugging\n",
"The previous examples could be solved with a single prompt, but more complex issues may require an iterative approach: you may need to provide additional context based on the LLM's initial response, or the LLM may fix the first issue only for another bug to appear when you rerun the code. In these cases, continue refining your prompts and supplying more information until the LLM arrives at a solution.\n",
"\n",
"```{admonition} Tip\n",
":class: tip\n",
"When interacting with an LLM, it typically retains a memory of the conversation, so you can refer back to previous messages. This means you can provide just the new error message, without repeating the code snippet, if the code already appears earlier in the conversation. This streamlines the debugging process and keeps the focus on the issue at hand.\n",
"```\n",
"\n",
"Try running the code below to see an example of an iterative debugging process. The goal is to visualize a noisy sine wave. Copy and paste the error message into the LLM, then continue refining your prompts based on its responses. There are two errors in the code, so after fixing the first, run the code again to surface the second error message and provide that to the LLM as well, until you have a fully working snippet."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "90dba6ee",
"metadata": {},
"outputs": [],
"source": [
"# ---------------------- student exercise --------------------------------- #\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"\n",
"def scale_data(data, factor):\n",
" return data * factor\n",
"\n",
"def generate_and_scale_data():\n",
" x = np.linspace(0, 10, 100)\n",
" noise = np.random.normal(0, 1, 100)\n",
" y = np.sin(x) + noise\n",
" scaled = scale_data(y, \"2\")\n",
" return x, scaled\n",
"\n",
"x, y_scaled = generate_and_scale_data()\n",
"plt.scatter(x, y_scaled[::2])\n",
"plt.title(\"Noisy Sine Wave\")\n",
"plt.xlabel(\"x\")\n",
"plt.ylabel(\"scaled y\")\n",
"plt.show()\n",
"# ---------------------- student exercise --------------------------------- #"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "base",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
43 changes: 43 additions & 0 deletions book/llms/effective-prompting.md
@@ -0,0 +1,43 @@
(llms-effective-prompting)=
# Effective Prompting
When interacting with an LLM about your code, the way you prompt the model can significantly impact the quality of its responses. We recommend using the integrated AI chat panel in your editor, if present, instead of an external chatbot such as ChatGPT. This allows the LLM to directly reference your project files and the broader code context.

```{tip}
If you are using a coding assistant embedded into your editor, such as GitHub Copilot or Cursor AI, you can open the chatbot panel to interact with the LLM. This is typically found in the sidebar of your coding editor, allowing you to ask questions and get code suggestions directly related to your current project.
```

## Best Practices for Prompting LLMs
When prompting an LLM, it is essential to be clear and specific about what you want. Here are some strategies to improve your prompts:
1. **Be Specific**: Instead of asking a vague question, provide clear details about what you need. The more specific your question, the more relevant and useful the response will be.

2. **Specify Context**: If you are working with a specific language, library, or framework, mention it in your prompt. This helps the LLM tailor its response to your needs.

```{admonition} Example
:class: example
Say you want to create a numpy array with random values. Instead of asking "How do I create a random array?", you can ask "How do I create a numpy array with random integers between 1 and 10 in Python?".
```
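An answer to the specific prompt above might look like the following sketch (illustrative only; the exact code an LLM returns will vary):

```python
import numpy as np

# Create a 1-D array of 10 random integers between 1 and 10 (inclusive).
# The upper bound of integers() is exclusive, hence 11.
rng = np.random.default_rng()
arr = rng.integers(1, 11, size=10)
print(arr)
```

Because the prompt named the library, the value range, and the language, the response can use the appropriate API directly instead of guessing.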

3. **Desired Output Format**: If you want the response in a specific format, such as code with comments or a brief explanation, include that in your prompt.

```{admonition} Example
:class: example
Instead of asking "How do I use Matplotlib in Python?", you can ask "Outline the steps to create a simple line plot using Matplotlib in Python, and provide a code example with comments explaining each step."
```
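A response to the Matplotlib prompt above might resemble this minimal commented sketch (illustrative; an LLM's exact steps and data will vary):

```python
import matplotlib.pyplot as plt

# Step 1: prepare the data to plot.
x = [0, 1, 2, 3, 4]
y = [xi ** 2 for xi in x]

# Step 2: draw the line plot.
plt.plot(x, y)

# Step 3: label the axes and add a title.
plt.xlabel("x")
plt.ylabel("x squared")
plt.title("Simple Line Plot")

# Step 4: display the figure.
plt.show()
```

Asking for comments explaining each step, as the prompt does, yields annotated code like this rather than a bare snippet.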

4. **Task Definition**: Telling the LLM what you want it to do in clear steps is far more important than telling it what not to do.

```{admonition} Example
:class: example
Instead of saying "Don't give me a long explanation," you can say "Provide a brief example."
```


5. **Structure**: Using a structured format in your question can lead to drastically better results. Rather than asking a general multipart question, break it down into smaller, more manageable parts with clear instructions.

6. **Role Definition**: If you want the LLM to act as a specific type of expert, such as a patient mentor or an expert in a particular field, specify that in your prompt. This helps the LLM understand the tone and depth of response you expect.


```{admonition} Tip
:class: tip
These skills apply to any LLM interaction, not just coding assistants. Next time you are using a personal assistant like ChatGPT, remember that specific prompts with context will yield better results!
```