[ICLR 2024 Spotlight 🔥] - [Best Paper Award, SoCal NLP 2023 🏆] - Jailbreak in Pieces: Compositional Adversarial Attacks on Multi-Modal Language Models
Topics: alignment, ai-safety, vlm, llm, vision-language-models, cross-modality-safety-alignment, multi-modal-models
Updated Jun 6, 2024 - Python