
probe: do-not-answer #517

Open
leondz opened this issue Feb 27, 2024 · 0 comments
Labels
new plugin: Describes an entirely new probe, detector, generator or harness
probes: Content & activity of LLM probes

Comments

@leondz
Owner

leondz commented Feb 27, 2024

Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs

https://arxiv.org/abs/2308.13387

An open-source dataset for evaluating LLMs' safety mechanisms at low cost. The dataset consists only of prompts to which responsible language models should not answer.
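
Not the eventual probe implementation, just a minimal sketch of pulling the prompts in, assuming the dataset is published on the Hugging Face Hub as `LibrAI/do-not-answer` with a `question` column holding the risky prompts (both the dataset ID and the column name would need to be confirmed):

```python
# Minimal sketch only: assumes the Do-Not-Answer dataset is available on the
# Hugging Face Hub as "LibrAI/do-not-answer" with a "question" column holding
# the prompts. Dataset ID and column name should be verified before use.
from datasets import load_dataset


def load_do_not_answer_prompts() -> list[str]:
    """Return the prompts that a responsible model should refuse to answer."""
    ds = load_dataset("LibrAI/do-not-answer", split="train")
    return [row["question"] for row in ds]


if __name__ == "__main__":
    prompts = load_do_not_answer_prompts()
    print(f"loaded {len(prompts)} prompts")
    print(prompts[0])
```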

leondz added the probes (Content & activity of LLM probes) and new plugin (Describes an entirely new probe, detector, generator or harness) labels on Feb 27, 2024