Papers and resources related to the security and privacy of LLMs 🤖
Updated Nov 27, 2024 · Python
- A one-stop repository for large language model (LLM) unlearning. Supports TOFU and MUSE, and is an easily extensible framework for new datasets, evaluations, methods, and other benchmarks.
- Python package for measuring memorization in LLMs.
- The fastest Trust Layer for AI Agents.
- An execution isolation architecture for LLM-based agentic systems.
- A comprehensive resource hub compiling all LLM papers accepted at the International Conference on Learning Representations (ICLR) in 2024.
- LLM security and privacy.
- LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI's ChatGPT Plugins.
- Open-source PII detection and anonymization tool: easy to use, configurable, and extensible.
- Makes Zettelkasten-style note-taking the foundation of interactions with Large Language Models (LLMs).
- Example of running last_layer with FastAPI on Vercel.
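As a toy illustration of what PII detection and anonymization involves (this is a hypothetical sketch, not the API of the tool listed above, which uses its own configurable recognizers), a minimal regex-based approach in Python might look like:

```python
import re

# Hypothetical patterns for demonstration only; real tools use far more
# robust recognizers (NER models, checksums, context scoring).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def anonymize(text: str) -> str:
    """Replace each detected PII span with a placeholder tag."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

print(anonymize("Contact jane.doe@example.com or 555-123-4567."))
# → Contact <EMAIL> or <PHONE>.
```

Production tools separate detection (returning entity type, span, and confidence) from anonymization (redaction, masking, or pseudonymization), so each stage can be configured independently.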