Update README.md (#1634)
Corrected typos and grammatical issues.
jeanniefinks authored Mar 13, 2024
1 parent 4e791ee commit e09ae26
Showing 1 changed file: research/mpt/README.md (23 additions, 23 deletions)
*LAST UPDATED: 11/24/2023*

# **Sparse Fine-Tuned LLMs With DeepSparse**

DeepSparse has support for performant inference of sparse large language models, starting with Mosaic's MPT and Meta's Llama 2.
Check out our paper [Sparse Fine-tuning for Inference Acceleration of Large Language Models](https://arxiv.org/abs/2310.06927).

In this research overview, we will discuss:
1. [Our Sparse Fine-Tuning Research](#sparse-finetuning-research)
2. [How to Try Text Generation With DeepSparse](#try-it-now)

## **Sparse Fine-Tuning Research**

We show that MPT-7B and Llama-2-7B can be pruned to ~60% sparsity with INT8 quantization (and 70% sparsity without quantization), with no accuracy drop, using a technique called **Sparse Fine-Tuning**, where we prune the network during the fine-tuning process.

When running the pruned network with DeepSparse, we can accelerate inference by ~7x over the dense-FP32 baseline!

### **Sparse Fine-Tuning on Grade-School Math (GSM)**

Training LLMs consists of two steps. First, the model is pre-trained on a very large corpus of text (typically >1T tokens). Then, the model is adapted for downstream use by continuing training with a much smaller, high-quality curated dataset. This second step is called fine-tuning.

Fine-tuning is useful for two main reasons:
1. It can teach the model *how to respond* to input (often called **instruction tuning**).
2. It can teach the model *new information* (often called **domain adaptation**).

An example of how domain adaptation is helpful is solving the [Grade School Math (GSM) dataset](https://huggingface.co/datasets/gsm8k). GSM is a set of grade-school word problems and a notoriously difficult task for LLMs, as evidenced by the 0% zero-shot accuracy of MPT-7B. By fine-tuning with a very small set of ~7k training examples, however, we can boost the model's accuracy on the test set to 28.2%.
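
For a concrete look at that fine-tuning data, here is a small, illustrative snippet (assuming the Hugging Face `datasets` library is installed) that loads GSM8K and prints one of its ~7k training word problems:

```python
# Illustrative only: peek at the GSM8K data used for fine-tuning.
from datasets import load_dataset

# GSM8K ships a "main" config with roughly 7.5k train and 1.3k test problems.
gsm8k = load_dataset("gsm8k", "main")

print(gsm8k["train"].num_rows, "training examples")
print(gsm8k["train"][0]["question"])
print(gsm8k["train"][0]["answer"])
```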

The key insight from [our paper](https://arxiv.org/abs/2310.06927) is that we can prune the network during the fine-tuning process. We apply [SparseGPT](https://arxiv.org/pdf/2301.00774.pdf) to prune the network after dense fine-tuning and retrain for 2 epochs with L2 distillation. The result is a 60% sparse-quantized model with no accuracy drop on GSM8k that runs 7x faster than the dense baseline with DeepSparse!

<div align="center">
<img src="https://github.com/neuralmagic/deepsparse/assets/3195154/f9a86726-12f5-4926-8d8c-668c449faa84" width="60%" alt="Sparse Fine-Tuned LLMs on GSM8k"/>
</div>
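
As a toy-scale illustration of that recipe (and only that), the sketch below uses a tiny MLP in place of the LLM and simple magnitude pruning as a stand-in for SparseGPT: it one-shot prunes each weight matrix to ~60% sparsity, then retrains the sparse student with an L2 (MSE) distillation loss against the frozen dense model while keeping pruned weights at zero. It is not the training code used in the paper.

```python
# Toy-scale sketch of the prune-then-distill recipe described above.
# Magnitude pruning stands in for SparseGPT and a tiny MLP stands in for the LLM.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for the dense fine-tuned model (MPT-7B / Llama-2-7B in the paper).
teacher = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 64))
student = copy.deepcopy(teacher)

# 1) One-shot prune the student to ~60% sparsity per weight matrix.
masks = {}
for name, param in student.named_parameters():
    if param.dim() < 2:  # skip biases
        continue
    k = int(0.6 * param.numel())
    threshold = param.abs().flatten().kthvalue(k).values
    mask = (param.abs() > threshold).float()
    param.data.mul_(mask)
    masks[name] = mask

# 2) Retrain with L2 (MSE) distillation from the frozen dense teacher,
#    re-applying the masks so pruned weights stay at zero.
teacher.eval()
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-3)

for step in range(200):
    x = torch.randn(32, 64)  # stand-in for fine-tuning batches (e.g. GSM problems)
    with torch.no_grad():
        target = teacher(x)
    loss = nn.functional.mse_loss(student(x), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        for name, param in student.named_parameters():
            if name in masks:
                param.mul_(masks[name])

total = sum(p.numel() for p in student.parameters())
zeros = sum((p == 0).sum().item() for p in student.parameters())
print(f"final student sparsity: {zeros / total:.1%}")
```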

- [See the paper on arXiv](https://arxiv.org/abs/2310.06927).
- [See our Llama 2 expansion blog on the initial paper](https://neuralmagic.com/blog/fast-llama-2-on-cpus-with-sparse-fine-tuning-and-deepsparse/).

### **How Is This Useful in the Real World?**

While GSM is a "toy" math dataset, it serves as an example of how LLMs can be adapted to solve tasks that the general pre-trained model cannot. Given the treasure troves of domain-specific data held by companies, we expect to see many production models fine-tuned to create more accurate models tailored to business tasks. Using Neural Magic, you can deploy these fine-tuned models performantly on CPUs!

## Try It Now

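A minimal sketch of what running one of these sparse GSM models looks like with the `TextGeneration` pipeline is shown below. Treat it as an illustration only: the model stub is a placeholder, and the constructor and call signatures should be verified against the pipeline documentation linked under Other Resources.

```python
# Hedged sketch, not an official example: the model identifier is a placeholder
# and argument names should be checked against the DeepSparse TextGeneration docs.
from deepsparse import TextGeneration

# Placeholder: substitute any sparse GSM model stub from SparseZoo or Hugging Face.
MODEL_ID = "zoo:mpt-7b-gsm8k_mpt_pretrain-pruned60_quantized"

pipeline = TextGeneration(model=MODEL_ID)

prompt = "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?"
output = pipeline(prompt=prompt)
print(output.generations[0].text)
```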

#### Other Resources
- [Check out all the GSM models on SparseZoo](https://sparsezoo.neuralmagic.com/?datasets=gsm8k&ungrouped=true).
- [Try out the live demo on Hugging Face Spaces](https://huggingface.co/spaces/neuralmagic/sparse-mpt-7b-gsm8k) and view the [collection of paper, demos, and models](https://huggingface.co/collections/neuralmagic/sparse-finetuning-mpt-65241d875b29204d6d42697d).
- [Check out the detailed `TextGeneration` Pipeline documentation](https://github.com/neuralmagic/deepsparse/blob/main/docs/llms/text-generation-pipeline.md).

## **Roadmap**

Following these initial results, we are rapidly expanding our support for LLMs across the Neural Magic stack, including:

- **Productizing Sparse Fine-Tuning**: Enable external users to apply sparse fine-tuning to their own business datasets.
- **Expanding Model Support**: Apply sparse fine-tuning results to Mistral models.
- **Pushing to Higher Sparsity**: Improve our pruning algorithms to reach higher sparsity.
- **Building a General Sparse Model**: Create a sparse model that performs well on general tasks like the OpenLLM Leaderboard.

## **Feedback / Roadmap Requests**

We are excited to add initial support for LLMs in the Neural Magic stack and plan to bring many ongoing improvements over the coming months. For questions or requests regarding LLMs, reach out through any of the following channels:
- [Neural Magic Community Slack](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ)
- [GitHub Issue Queue](https://github.com/neuralmagic/deepsparse/issues)
- [Contact Form](http://neuralmagic.com/contact/)
