Implementation of the sparse attention pattern proposed by the DeepSeek team in their "Native Sparse Attention" paper (Python; updated Feb 25, 2025). A minimal sketch of the underlying idea appears after this list.
ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference
The official implementation of the paper "MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression"
SpargeAttention: a training-free sparse attention method that can accelerate inference for any model.
Demo code for the CVPR 2023 paper "Sparsifiner: Learning Sparse Instance-Dependent Attention for Efficient Vision Transformers"
PyTorch implementation of "Structural Similarity-Inspired Unfolding for Lightweight Image Super-Resolution"
Building Native Sparse Attention
Text summarization modeling with three different attention types
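
Most of the repositories above share one core idea: each query attends to a small, structured subset of keys rather than the full sequence. The Python sketch below illustrates the simplest such pattern, a causal sliding window, using dense masking. It is an illustrative toy, assuming nothing about any listed repo's API (the function name and shapes are this example's own); the projects above achieve their actual speedups with custom kernels that never compute the masked positions at all.

# Minimal sliding-window sparse attention sketch (illustrative only).
import torch
import torch.nn.functional as F

def sliding_window_attention(q, k, v, window: int = 64):
    """Causal attention where each query sees only the last `window` keys.
    q, k, v: (batch, heads, seq, dim)."""
    seq = q.shape[-2]
    scale = q.shape[-1] ** -0.5
    scores = torch.matmul(q, k.transpose(-2, -1)) * scale  # (B, H, S, S)

    # Sparse pattern: position i may attend to positions j with
    # i - window < j <= i (causal + local window).
    i = torch.arange(seq, device=q.device)[:, None]
    j = torch.arange(seq, device=q.device)[None, :]
    allowed = (j <= i) & (j > i - window)
    scores = scores.masked_fill(~allowed, float("-inf"))

    return torch.matmul(F.softmax(scores, dim=-1), v)

# Example: 1 batch, 2 heads, 256 tokens, 32-dim heads.
q = torch.randn(1, 2, 256, 32)
out = sliding_window_attention(q, torch.randn_like(q), torch.randn_like(q))
print(out.shape)  # torch.Size([1, 2, 256, 32])

NSA-style designs combine a local branch like this with compressed and selected global branches; the sliding window is shown here only because it is the smallest self-contained instance of a structured sparse attention mask.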