@ParCIS

ParCIS Lab, BUPT

Parallel Computing and Intelligent Systems Laboratory (ParCIS Lab), Beijing University of Posts and Telecommunications

Popular repositories

  1. Magicube (Public)

    Magicube is a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) in deep learning on Tensor Cores.

    C++ · 85 stars · 17 forks

  2. Chimera (Public)

    Chimera: Efficiently Training Large-Scale Neural Networks with Bidirectional Pipelines.

    Python · 56 stars · 8 forks

  3. Ok-Topk (Public)

    Ok-Topk is a scheme for distributed training with sparse gradients. It integrates a novel sparse allreduce algorithm (with less than 6k communication volume, which is asymptotically optimal) with the decentralized parallel Stochastic Gradient Descent (SGD) optimizer, and its convergence is proven both theoretically and empirically.

    Python · 24 stars · 8 forks

  4. FlashSparse (Public)

    FlashSparse significantly reduces the computation redundancy of unstructured sparsity (for SpMM and SDDMM) on Tensor Cores through a Swap-and-Transpose mapping strategy. FlashSparse has been accepted to PPoPP 2025.

    Cuda · 2 stars · 1 fork

  5. DNN-cpp-proxies (Public)

    C++/MPI proxies for distributed training of deep neural networks.

    C++ · 1 star

Repositories

  • DNN-cpp-proxies (Public)

    C++/MPI proxies for distributed training of deep neural networks. (An illustrative communication sketch follows this list.)

    C++ · 1 star · GPL-3.0 · 0 forks · 0 issues · 0 pull requests · Updated Jan 26, 2025
  • FlashSparse (Public)

    FlashSparse significantly reduces the computation redundancy of unstructured sparsity (for SpMM and SDDMM) on Tensor Cores through a Swap-and-Transpose mapping strategy. FlashSparse has been accepted to PPoPP 2025. (An illustrative operand-swap check follows this list.)

    Cuda · 2 stars · 1 fork · 1 issue · 0 pull requests · Updated Jan 21, 2025
  • Chimera (Public)

    Chimera: Efficiently Training Large-Scale Neural Networks with Bidirectional Pipelines. (An illustrative placement sketch follows this list.)

    Python · 56 stars · GPL-3.0 · 8 forks · 3 issues · 1 pull request · Updated Dec 5, 2023
  • Ok-Topk (Public)

    Ok-Topk is a scheme for distributed training with sparse gradients. It integrates a novel sparse allreduce algorithm (with less than 6k communication volume, which is asymptotically optimal) with the decentralized parallel Stochastic Gradient Descent (SGD) optimizer, and its convergence is proven both theoretically and empirically. (An illustrative top-k sketch follows this list.)

    Python · 24 stars · GPL-3.0 · 8 forks · 2 issues · 2 pull requests · Updated Dec 10, 2022
  • Magicube (Public)

    Magicube is a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) in deep learning on Tensor Cores. (Reference semantics for the two kernels follow this list.)

    C++ · 85 stars · GPL-3.0 · 17 forks · 2 issues · 0 pull requests · Updated Nov 23, 2022
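
Sketch for DNN-cpp-proxies: the repo implements its proxies in C++/MPI; the mpi4py sketch below only mirrors the communication pattern such a proxy exercises, namely the gradient allreduce of data-parallel SGD. The buffer size and script name are illustrative assumptions, not taken from the repo.

```python
# allreduce_proxy.py -- hypothetical mini-proxy for the data-parallel
# gradient allreduce (the repo's actual proxies are C++/MPI).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
n = 1 << 20                                           # pretend layer size (assumption)
grad = np.full(n, comm.Get_rank(), dtype=np.float32)  # fake local gradient
summed = np.empty_like(grad)

t0 = MPI.Wtime()
comm.Allreduce(grad, summed, op=MPI.SUM)              # the core collective in data-parallel SGD
t1 = MPI.Wtime()

if comm.Get_rank() == 0:
    print(f"allreduce of {n} floats took {t1 - t0:.6f} s")
# Run with, e.g.: mpirun -np 4 python allreduce_proxy.py
```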
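Sketch for FlashSparse: the Swap-and-Transpose mapping swaps the roles the operands play in the Tensor Core MMA tile; the algebraic identity such a swap can rely on is A·B = (Bᵀ·Aᵀ)ᵀ. The numpy check below verifies only that identity on tile-sized inputs and makes no claim about the actual CUDA mapping.

```python
# Check of the operand-swap identity A @ B == (B.T @ A.T).T on
# tile-sized inputs (shapes are illustrative, not FlashSparse's tiles).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((16, 8))   # stand-in for a sparse block
B = rng.standard_normal((8, 32))   # dense operand

assert np.allclose(A @ B, (B.T @ A.T).T)
print("operand-swap identity holds")
```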
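Sketch for Chimera: a toy rendering of the bidirectional idea, two pipelines with mirrored stage-to-device maps so every device hosts one stage from each direction. This is one reading of the one-line description above, not the paper's actual schedule.

```python
# Toy bidirectional-pipeline placement (illustrative only): pipeline
# "down" puts stage i on device i; pipeline "up" mirrors it.
D = 4  # devices == stages per direction (assumption)
down = {stage: stage for stage in range(D)}
up = {stage: D - 1 - stage for stage in range(D)}

for dev in range(D):
    hosted = [f"down/stage{s}" for s in down if down[s] == dev]
    hosted += [f"up/stage{s}" for s in up if up[s] == dev]
    print(f"device {dev}: {', '.join(hosted)}")
# Injecting micro-batches from both ends keeps devices busier than a
# single unidirectional pipeline, which is how bidirectional pipelining
# shrinks the pipeline bubble.
```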
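Sketch for Ok-Topk: the primitive its sparse allreduce aggregates is top-k gradient sparsification, where each worker contributes only its k largest-magnitude gradient entries. The numpy sketch below shows that primitive; the helper names are ours, not from the repo.

```python
# Minimal top-k gradient sparsification (helper names are hypothetical).
import numpy as np

def topk_sparsify(grad, k):
    """Return (indices, values) of the k largest-magnitude entries."""
    flat = grad.ravel()
    # argpartition selects the top k by |g| in O(n); no full sort needed
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]

def densify(idx, vals, n):
    out = np.zeros(n, dtype=vals.dtype)
    out[idx] = vals
    return out

rng = np.random.default_rng(0)
g = rng.standard_normal(1_000_000)
idx, vals = topk_sparsify(g, k=1_000)  # keep 0.1% of the entries
assert densify(idx, vals, g.size).nonzero()[0].size == 1_000
```

Exchanging (idx, vals) pairs instead of dense vectors is what makes a communication volume on the order of k possible; Ok-Topk's contribution is an allreduce that keeps the aggregated volume below 6k per worker.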
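Sketch for Magicube: reference semantics of the two kernels, for readers who have not met them. SpMM multiplies a sparse matrix by a dense one; SDDMM computes a dense-dense product only at the nonzero positions of a sparse mask. Shapes are illustrative, and the quantized low-precision Tensor Core execution Magicube actually provides is not modeled here.

```python
# Reference semantics for SpMM and SDDMM (shapes illustrative).
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
m, k, n = 64, 32, 48

# SpMM: sparse A (m x k) times dense B (k x n) -> dense (m x n)
A = sp.random(m, k, density=0.1, format="csr", random_state=1)
B = rng.standard_normal((k, n))
spmm_out = A @ B

# SDDMM: dense D1 @ D2 sampled at the nonzeros of sparse mask S (m x n)
S = sp.random(m, n, density=0.1, format="csr", random_state=2)
D1 = rng.standard_normal((m, k))
D2 = rng.standard_normal((k, n))
sddmm_out = S.multiply(D1 @ D2)  # sparse result with S's pattern

print(spmm_out.shape, sddmm_out.nnz)
```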

People

This organization has no public members.
