garak checks if an LLM can be made to fail in a way we don't want. garak probes for hallucination, data leakage, prompt injection, misinformation, toxicity generation, jailbreaks, and many other weaknesses.
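As a quick orientation, a typical run installs the package, picks a target model, and selects one or more probe modules. The flags below (`--model_type`, `--model_name`, `--probes`, `--list_probes`) reflect the documented garak CLI at the time of writing and may differ in newer releases, so treat this as a sketch rather than the definitive invocation:

```
# install garak from PyPI
python -m pip install -U garak

# list the available probe modules
garak --list_probes

# scan an OpenAI chat model with the encoding-based injection probes
garak --model_type openai --model_name gpt-3.5-turbo --probes encoding
```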
If you use this code or results in your research, please cite our paper (see Citation section).