Welcome to the SnailSploit repository for the research article “How I Transferred a Jailbreak from GPT-4o to GPT-01”. This repository contains the jailbroken transcript and related resources used in the study, demonstrating a context-based manipulation vulnerability in AI systems.
- **jailbroken_transcript.txt**: a complete text file of the jailbroken GPT-4o conversation used to demonstrate the transfer vulnerability to GPT-01. **Warning:** this file is provided for educational and research purposes only. Please read the disclaimer below before proceeding.
- **README.md**: this document, providing context and guidelines for using the transcript.
In this experiment, we explored how a jailbroken state in GPT-4o could be transferred to GPT-01 simply by pasting the conversation transcript into a new session. The study reveals key shortcomings in context sanitization and highlights the need for improved input-validation mechanisms in AI systems.
The detailed methodology, findings, and implications are discussed in the article:
“How I Transferred a Jailbreak from GPT-4o to GPT-01”.
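The transfer mechanism described above can be sketched in code. This is a minimal, hypothetical illustration, assuming a simple `User:`/`Assistant:` line format; the actual transcript layout and the API used in the study may differ. The point is that a saved conversation, parsed back into messages and prepended to a new session, carries its prior context into the new model unsanitized.

```python
# Hypothetical sketch: re-ingesting a saved transcript as chat context.
# The "User:"/"Assistant:" format is an assumption for illustration.

def transcript_to_messages(transcript: str) -> list[dict]:
    """Convert 'User:'/'Assistant:' lines into chat-API-style messages."""
    messages = []
    for line in transcript.strip().splitlines():
        if line.startswith("User:"):
            messages.append({"role": "user", "content": line[len("User:"):].strip()})
        elif line.startswith("Assistant:"):
            messages.append({"role": "assistant", "content": line[len("Assistant:"):].strip()})
        elif messages:
            # Continuation line: fold it into the previous message.
            messages[-1]["content"] += "\n" + line
    return messages

# Seeding a new session with the old transcript transfers its state:
# nothing in this path inspects or sanitizes the inherited context.
transcript = "User: hello\nAssistant: hi there"
seed = transcript_to_messages(transcript)
new_session = seed + [{"role": "user", "content": "continue where we left off"}]
```

A model receiving `new_session` sees the earlier exchange as its own history, which is exactly the gap in context sanitization the article discusses.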
- **Clone the repository**: `git clone https://github.com/SnailSploit/jailbreak-gpt01.git`
- **Open the transcript**: use any text editor to view the `jailbroken_transcript.txt` file.
- **Analysis**: study the structure of the prompts to understand the methods used to bypass restrictions. For context and application details, refer to the linked article.
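The analysis step above can be approached programmatically. The sketch below shows one illustrative way to summarize a transcript's structure: counting turns per role and flagging a few manipulation-style phrases. The phrase list is a hypothetical example chosen for illustration, not the patterns used in the article.

```python
from collections import Counter

# Example phrases only; not the article's actual detection criteria.
SUSPECT_PHRASES = ["ignore previous instructions", "you are now", "pretend"]

def summarize(transcript: str) -> dict:
    """Count turns per role and flag suspect phrases in a transcript."""
    turns = Counter()
    flags = []
    for line in transcript.splitlines():
        role = line.split(":", 1)[0].lower()
        if role in ("user", "assistant"):
            turns[role] += 1
        for phrase in SUSPECT_PHRASES:
            if phrase in line.lower():
                flags.append(phrase)
    return {"turns": dict(turns), "flagged_phrases": flags}
```

Running `summarize` over a transcript file gives a quick structural overview before reading the prompts in detail.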
This repository and its contents are strictly for educational and research purposes only. Any attempt to misuse the provided transcript or methodology in unethical, harmful, or illegal ways is strongly discouraged and may violate laws or terms of service.
By accessing this repository, you agree to use the information responsibly and within the bounds of ethical research practices.
Follow us for more research on adversarial techniques, AI exploitation, and cybersecurity innovations: