
Commit

fix typos, add codespell pre-commit hook (#264)
* fix typos, add codespell pre-commit hook

* Update .pre-commit-config.yaml

---------

Co-authored-by: Sebastian Raschka <mail@sebastianraschka.com>
BioGeek and rasbt authored Jul 16, 2024
1 parent 6ffd628 commit 48bd72c
Showing 5 changed files with 21 additions and 4 deletions.
17 changes: 17 additions & 0 deletions .pre-commit-config.yaml
@@ -0,0 +1,17 @@
# A tool used by developers to identify spelling errors in text.
# Readers may ignore this file.

default_stages: [commit]

repos:
  - repo: https://github.com/codespell-project/codespell
    rev: v2.3.0
    hooks:
      - id: codespell
        name: codespell
        description: Check for spelling errors in text.
        entry: codespell
        language: python
        args:
          - "-L ocassion,occassion,ot,te,tje"
        files: \.txt$|\.md$|\.py|\.ipynb$
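
For readers unfamiliar with the tool (this note is not part of the committed file): with the usual pre-commit workflow, the hook is registered once via `pre-commit install` and can be run across the repository with `pre-commit run --all-files`; the `-L` argument passes codespell a comma-separated list of strings to ignore, presumably deliberate or false-positive occurrences in this repository.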
@@ -317,7 +317,7 @@
"id": "f78e346f-3b85-44e6-9feb-f01131381148"
},
"source": [
"- The implementation below uses PyTorch's [`scaled_dot_product_attention`](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) function, which implements a memory-optimized version of self-attention calld [flash attention](https://arxiv.org/abs/2205.14135)"
"- The implementation below uses PyTorch's [`scaled_dot_product_attention`](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) function, which implements a memory-optimized version of self-attention called [flash attention](https://arxiv.org/abs/2205.14135)"
]
},
{
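As background for the cell above (this sketch is not from the commit itself): PyTorch's `scaled_dot_product_attention` can replace an explicit attention computation in a GPT-style model. A minimal sketch, with all tensor names and toy dimensions chosen here purely for illustration:

import torch
from torch.nn import functional as F

# Toy dimensions, picked only for this example
batch_size, num_heads, seq_len, head_dim = 2, 4, 8, 16

queries = torch.randn(batch_size, num_heads, seq_len, head_dim)
keys = torch.randn(batch_size, num_heads, seq_len, head_dim)
values = torch.randn(batch_size, num_heads, seq_len, head_dim)

# Memory-optimized attention; is_causal=True applies the causal mask
# that a GPT-style model would otherwise construct explicitly
context = F.scaled_dot_product_attention(queries, keys, values, is_causal=True)
print(context.shape)  # torch.Size([2, 4, 8, 16])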
2 changes: 1 addition & 1 deletion ch04/01_main-chapter-code/ch04.ipynb
@@ -1043,7 +1043,7 @@
"id": "dec7d03d-9ff3-4ca3-ad67-01b67c2f5457",
"metadata": {},
"source": [
"- We are almost there: now let's plug in the transformer block into the architecture we coded at the very beginning of this chapter so that we obtain a useable GPT architecture\n",
"- We are almost there: now let's plug in the transformer block into the architecture we coded at the very beginning of this chapter so that we obtain a usable GPT architecture\n",
"- Note that the transformer block is repeated multiple times; in the case of the smallest 124M GPT-2 model, we repeat it 12 times:"
]
},
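For context on the cell above (not part of the commit): the chapter stacks the transformer block multiple times inside the GPT model, 12 times for the smallest 124M-parameter GPT-2 configuration. A rough sketch of that idea, with `TransformerBlock` standing in as a placeholder for the chapter's implementation and the config key assumed here for illustration:

import torch.nn as nn

class TransformerBlock(nn.Module):
    # Placeholder for the chapter's block (attention + feed forward + norms)
    def __init__(self, cfg):
        super().__init__()
        self.layer = nn.Identity()

    def forward(self, x):
        return self.layer(x)

cfg = {"n_layers": 12}  # 12 repetitions for the smallest 124M GPT-2 model

# Each repetition gets its own parameters
trf_blocks = nn.Sequential(*[TransformerBlock(cfg) for _ in range(cfg["n_layers"])])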
@@ -370,7 +370,7 @@ def replace_linear_with_lora(model, rank, alpha):
action='store_true',
default=False,
help=(
"Disable padding, which means each example may have a different lenght."
"Disable padding, which means each example may have a different length."
" This requires setting `--batch_size 1`."
)
)
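To illustrate why the help text above requires `--batch_size 1` (example not from the commit; tensor names invented here): without padding, examples have different lengths and cannot be stacked into a single batch tensor.

import torch

short = torch.tensor([1, 2, 3])       # 3 tokens
long = torch.tensor([1, 2, 3, 4, 5])  # 5 tokens

# Stacking unequal-length sequences fails, hence batch size 1 without padding
try:
    torch.stack([short, long])
except RuntimeError as err:
    print("Cannot batch unpadded sequences:", err)

# With padding to a common length, larger batches work
padded = torch.nn.utils.rnn.pad_sequence([short, long], batch_first=True)
print(padded.shape)  # torch.Size([2, 5])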
@@ -166,7 +166,7 @@
" return response.choices[0].message.content\n",
"\n",
"\n",
"# Prepare intput\n",
"# Prepare input\n",
"sentence = \"I ate breakfast\"\n",
"prompt = f\"Convert the following sentence to passive voice: '{sentence}'\"\n",
"run_chatgpt(prompt, client)"
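For context on the cell above (sketch not taken from the commit): the notebook calls a `run_chatgpt` helper built on the OpenAI Python client. A minimal version along those lines might look as follows, with the model name chosen arbitrarily here:

from openai import OpenAI

def run_chatgpt(prompt, client, model="gpt-4"):
    # Send a single-turn chat request and return the text of the reply
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable
sentence = "I ate breakfast"
prompt = f"Convert the following sentence to passive voice: '{sentence}'"
print(run_chatgpt(prompt, client))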
