Skip to content
#

ai-cyber-security

Here are 3 public repositories matching this topic...

Language: All
Filter by language

The Prompt Injection Testing Tool is a Python script designed to assess the security of your AI system's prompt handling against a predefined list of user prompts commonly used for injection attacks. This tool utilizes the OpenAI GPT-3.5 model to generate responses to system-user prompt pairs and outputs the results to a CSV file for analysis.

  • Updated Mar 21, 2024
  • Python

MINOTAUR: The STRONGEST Secure Prompt EVER! Prompt Security Challenge, Impossible GPT Security, Prompts Cybersecurity, Prompting Vulnerabilities, FlowGPT, Secure Prompting, Secure LLMs, Prompt Hacker, Cutting-edge Ai Security, Unbreakable GPT Agent, Anti GPT Leak, System Prompt Security.

  • Updated Mar 27, 2024

Improve this page

Add a description, image, and links to the ai-cyber-security topic page so that developers can more easily learn about it.

Curate this topic

Add this topic to your repo

To associate your repository with the ai-cyber-security topic, visit your repo's landing page and select "manage topics."

Learn more