From ee96f0b65433d074034a1efc30e044cb7b0e91ac Mon Sep 17 00:00:00 2001
From: Bowen Tan
Date: Sun, 21 Jul 2024 18:32:28 -0700
Subject: [PATCH] updated.

---
 README.md               | 52 +++++++++++++++++++++++++++--------------
 redco/trainers/utils.py |  2 +-
 setup.py                |  2 +-
 3 files changed, 36 insertions(+), 20 deletions(-)

diff --git a/README.md b/README.md
index 5daf14e..e4fffc3 100644
--- a/README.md
+++ b/README.md
@@ -2,12 +2,20 @@
 **Red Coast** (redco) is a lightweight and user-friendly tool designed to automate distributed training and inference for large models while simplifying the ML pipeline development process without necessitating MLSys expertise from users.
 
-Check out our [Tech Report](https://arxiv.org/pdf/2310.16355.pdf) for details!
-Here is also a [Quick Tutorial](tutorials/quick.md) for you to become an expert of distributed training with Redco in several minutes!
+Check out our [Tech Report](https://aclanthology.org/2024.naacl-demo.14/) for more details!
 
-* Redco allows for the simple implementation of distributed training and inference, eliminating the need for additional coding efforts or complex configurations, but still exhibits efficiency comparable to the most advanced model parallel tools.
-* Redco enables customization of arbitrary ML pipelines within three functions, eliminating repetitive ans boilerplate coding, such as multi-host related processing, etc. We demonstrate that this mechanism is widely applicable to various ML algorithms
-* The backend of Redco is based on JAX, but users doesn't need to be JAX experts. Knowing `numpy` is good enough!
+**RedCoast: A Lightweight Tool to Automate Distributed Training of LLMs on Any GPU/TPUs** \
+Bowen Tan, Yun Zhu, Lijuan Liu, Hongyi Wang, Yonghao Zhuang, Jindong Chen, Eric Xing, Zhiting Hu \
+NAACL 2024, Demo / MLSys Workshop @ NeurIPS 2023 \
+[[Paper]](https://aclanthology.org/2024.naacl-demo.14/)
+[[Twitter]](https://x.com/BowenTan8/status/1730240627068031295?s=20)
+[[Slides]](https://drive.google.com/file/d/1MmBjxP5gInqhg0ydasby2a5UauLZFxQH/view)
+[[Demo Video]](https://bowentan.bitcron.com/RedCoast_demo.webm) \
+(Best Demo Paper Runner Up @ NAACL 2024)
+
+RedCoast supports *Large Models* + *Complex Algorithms*, in a *lightweight* and *user-friendly* manner:
+* Large Models beyond Transformers, e.g., [Stable Diffusion](examples/text_to_image), etc.
+* Complex algorithms beyond cross entropy, e.g., [Meta Learning](examples/meta_learning), etc.
 
 ![](images/redco_coding.png)
 
@@ -56,20 +64,28 @@ Go to [example/language_modeling](examples%2Flanguage_modeling) and [examples/te
 
 ## Reference
-
-We now have a [paper](https://arxiv.org/pdf/2310.16355.pdf) you can cite for the Red Coast library:
-
 ```
-RedCoast: A Lightweight Tool to Automate Distributed Training of LLMs on Any GPU/TPUs
-Bowen Tan, Yun Zhu, Lijuan Liu, Hongyi Wang, Yonghao Zhuang, Jindong Chen, Eric Xing, Zhiting Hu
-NAACL 2024, Demo
-Mlsys Workshop @ NeurIPS 2023
-
-@article{tan2023redco,
-  title={RedCoast: A Lightweight Tool to Automate Distributed Training of LLMs on Any GPU/TPUs},
-  author={Tan, Bowen and Zhu, Yun and Liu, Lijuan and Wang, Hongyi and Zhuang, Yonghao and Chen, Jindong and Xing, Eric and Hu, Zhiting},
-  journal={arXiv preprint arXiv:2310.16355},
-  year={2023}
+@inproceedings{tan-etal-2024-redcoast,
+    title = "{R}ed{C}oast: A Lightweight Tool to Automate Distributed Training of {LLM}s on Any {GPU}/{TPU}s",
+    author = "Tan, Bowen and
+      Zhu, Yun and
+      Liu, Lijuan and
+      Wang, Hongyi and
+      Zhuang, Yonghao and
+      Chen, Jindong and
+      Xing, Eric and
+      Hu, Zhiting",
+    editor = "Chang, Kai-Wei and
+      Lee, Annie and
+      Rajani, Nazneen",
+    booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: System Demonstrations)",
+    month = jun,
+    year = "2024",
+    address = "Mexico City, Mexico",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2024.naacl-demo.14",
+    pages = "137--147",
+    abstract = "The recent progress of AI can be largely attributed to large language models (LLMs). However, their escalating memory requirements introduce challenges for machine learning (ML) researchers and engineers. Addressing this requires developers to partition a large model to distribute it across multiple GPUs or TPUs. This necessitates considerable coding and intricate configuration efforts with existing model parallel tools, such as Megatron-LM, DeepSpeed, and Alpa. These tools require users{'} expertise in machine learning systems (MLSys), creating a bottleneck in LLM development, particularly for developers without MLSys background. In this work, we present RedCoast (Redco), a lightweight and user-friendly tool crafted to automate distributed training and inference for LLMs, as well as to simplify ML pipeline development. The design of Redco emphasizes two key aspects. Firstly, to automate model parallelism, our study identifies two straightforward rules to generate tensor parallel strategies for any given LLM. Integrating these rules into Redco facilitates effortless distributed LLM training and inference, eliminating the need of additional coding or complex configurations. We demonstrate the effectiveness by applying Redco on a set of LLM architectures, such as GPT-J, LLaMA, T5, and OPT, up to the size of 66B. Secondly, we propose a mechanism that allows for the customization of diverse ML pipelines through the definition of merely three functions, avoiding redundant and formulaic code like multi-host related processing. This mechanism proves adaptable across a spectrum of ML algorithms, from foundational language modeling to complex algorithms like meta-learning and reinforcement learning. As a result, Redco implementations exhibit significantly fewer lines of code compared to their official counterparts. RedCoast (Redco) has been released under Apache 2.0 license at https://github.com/tanyuqian/redco.",
 }
 ```
 

diff --git a/redco/trainers/utils.py b/redco/trainers/utils.py
index 401436d..4e94a1d 100644
--- a/redco/trainers/utils.py
+++ b/redco/trainers/utils.py
@@ -42,7 +42,7 @@ def loss_and_grads(batch_):
         loss = jnp.mean(loss)
         grads = jax.tree.map(lambda x: jnp.mean(x, axis=0), grads)
 
-        new_state = state.apply_gradients(grads=jax.tree_map(
+        new_state = state.apply_gradients(grads=jax.tree.map(
             lambda grad, param: grad.astype(param.dtype), grads, state.params))
 
         metrics = {'loss': loss, 'step': state.step, 'grad_norm': l2_norm(grads)}

diff --git a/setup.py b/setup.py
index f89530c..65d7323 100644
--- a/setup.py
+++ b/setup.py
@@ -17,7 +17,7 @@
 
 setup(
     name="redco",
-    version="0.4.19",
+    version="0.4.20",
     author="Bowen Tan",
     packages=find_packages(),
     install_requires=['jax', 'flax', 'optax', 'numpy'],
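For context on the `redco/trainers/utils.py` hunk above: `jax.tree_map` has been deprecated in recent JAX releases in favor of the namespaced `jax.tree.map`, which is why the patch switches the remaining call. The snippet below is a standalone illustration of that replacement, not part of the patch; the toy gradient/parameter pytrees are made up, and it assumes a JAX version new enough to provide the `jax.tree` namespace.

```python
import jax
import jax.numpy as jnp

# Toy pytrees standing in for the trainer's grads/params (illustrative only).
grads = {'w': jnp.ones((2, 3)), 'b': jnp.ones((3,))}
params = {'w': jnp.zeros((2, 3), dtype=jnp.bfloat16),
          'b': jnp.zeros((3,), dtype=jnp.bfloat16)}

# Deprecated spelling (what the patch removes):
#     jax.tree_map(lambda g, p: g.astype(p.dtype), grads, params)
# Current spelling (what the patch adds): map a function over matching leaves
# of several pytrees, here casting each gradient to its parameter's dtype.
casted = jax.tree.map(lambda g, p: g.astype(p.dtype), grads, params)

print(jax.tree.map(lambda x: x.dtype, casted))  # {'b': bfloat16, 'w': bfloat16}
```

Since the call signature is unchanged, the migration in this patch is a pure rename.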