Commit
mock no need torch
andreea-popescu-reef committed Sep 4, 2024
1 parent a8f6459 commit abc4a45
Showing 3 changed files with 12 additions and 9 deletions.
2 changes: 2 additions & 0 deletions README.md
@@ -2,7 +2,9 @@
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

Script to generate batches of random, unique prompts for use in the Compute Horde project's synthetic jobs.

The prompt that generates prompts is inspired by [Bittensor Subnet 18 (Cortex.t)](https://github.com/Datura-ai/cortex.t/blob/276cfcf742e8b442500435a1c1862ac4dffa9e20/cortext/utils.py#L193) (licensed under the MIT License).

The generated prompts will be saved in `<output_folder_path>/prompts_<uuid>.txt`, each line of the text file containing a prompt.
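As a sketch of that output format, the following is a hypothetical producer/consumer round trip; only the `prompts_<uuid>.txt` naming scheme and the one-prompt-per-line layout come from the README, while the folder name and prompt texts are made-up examples:

```python
# Minimal sketch, assuming only the naming scheme and one-prompt-per-line
# format described above; folder name and prompts are made-up examples.
from pathlib import Path
from uuid import uuid4

output_folder_path = Path("output")  # hypothetical output folder
output_folder_path.mkdir(exist_ok=True)

# Write a sample file the way the generator names them.
path = output_folder_path / f"prompts_{uuid4()}.txt"
path.write_text("How are you?\nDescribe something\nCount to ten\n")

# Each line of the text file contains one prompt.
prompts = path.read_text().splitlines()
print(prompts)  # -> ['How are you?', 'Describe something', 'Count to ten']
```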


6 changes: 3 additions & 3 deletions pdm.lock

Some generated files are not rendered by default.

13 changes: 7 additions & 6 deletions src/compute_horde_prompt_gen/model.py
@@ -1,9 +1,4 @@
-import torch
 import logging
-from transformers import (
-    AutoTokenizer,
-    AutoModelForCausalLM,
-)
 
 from prompt import PROMPT_ENDING
 
@@ -15,14 +10,20 @@ def __init__(self):
         pass
 
     def generate(self, prompts: list[str], num_return_sequences: int, **_kwargs):
-        return torch.rand(len(prompts) * num_return_sequences)
+        return [1 for _ in range(len(prompts) * num_return_sequences)]
 
     def decode(self, _output):
         return f"COPY PASTE INPUT PROMPT {PROMPT_ENDING} Here is the list of prompts:\nHow are you?\nDescribe something\nCount to ten\n"
 
 
 class GenerativeModel:
     def __init__(self, model_path: str, quantize: bool = False):
+        import torch
+        from transformers import (
+            AutoTokenizer,
+            AutoModelForCausalLM,
+        )
+
         quantization_config = None
         if quantize:
             from transformers import BitsAndBytesConfig
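The point of the change is the lazy-import pattern: `torch` and `transformers` move from module scope into `GenerativeModel.__init__`, so `MockModel` can be used without either package installed. A minimal runnable sketch of the pattern (class and method names follow the diff; the bodies are simplified and the real model loading is elided):

```python
class MockModel:
    # Stand-in model: after this commit it returns plain Python ints
    # instead of calling torch.rand, so torch is not needed for mocking.
    def generate(self, prompts: list[str], num_return_sequences: int, **_kwargs):
        return [1 for _ in range(len(prompts) * num_return_sequences)]


class GenerativeModel:
    def __init__(self, model_path: str, quantize: bool = False):
        # Heavy dependencies are imported lazily, inside __init__, so merely
        # importing this module (e.g. to use MockModel) does not require them.
        import torch
        from transformers import (
            AutoTokenizer,
            AutoModelForCausalLM,
        )
        ...  # real model loading elided


print(MockModel().generate(["a", "b"], 2))  # -> [1, 1, 1, 1]
```

Because the imports run only when `GenerativeModel` is instantiated, the test suite can exercise `MockModel` on a machine with neither `torch` nor `transformers` installed.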
