Wet Toast Talk Radio

Radio: Listen Β· Twitch: Watch Β· Ko-fi: Donate

Fake talk. Fake issues. Real giggles.

Generating content for Wet Toast Talk Radio, a 24/7 non-stop internet parody radio station inspired by GTA.

We don’t do reruns - all shows are generated daily. We use ChatGPT + a lot of prompt engineering and randomization to write the scripts, and the amazing transformer + diffusion models of tortoise-tts for speech generation.

Check out our website!


πŸš€ Getting Started

Prerequisites

  β€’ python >= 3.10
  β€’ ffmpeg (brew install ffmpeg)
  β€’ libshout (brew install libshout)
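
You can sanity-check the prerequisites from the shell (the brew list check only applies if you installed via Homebrew):

python3 --version   # should report 3.10 or higher
ffmpeg -version     # confirms ffmpeg is on your PATH
brew list libshout  # confirms the Homebrew libshout install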

Install from Source

Install the package with pip:

pip install -r requirements.txt
pip install -e .

or with your preferred virtual environment manager (this project uses pdm for dependency management).
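For example, a minimal setup with the standard venv module (any virtual environment manager works):

python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
pip install -e .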

CLI Configuration

Add your OpenAI API key to the following config.yml file in the project directory:

scriptwriter:
  llm:
    openai_api_key: YOUR_OPENAI_API_KEY
audio_generator:
  use_s3_model_cache: false

🍞 Usage

⚠️ This CLI is designed as a demo of WTTR's generative capabilities. The full production services must be deployed as part of the stack in the aws directory.

✍️ Script Generation

To write a single show script:

python -m wet_toast_talk_radio.main scriptwriter SHOW_NAME [--output-dir OUTPUT_DIR]

Currently available shows:

Show Name           Host   Description
the-great-debate    Julie  The show where you tune in to take sides
modern-mindfulness  Orion  Combining mindfulness and exposure therapy to let go of modern anxiety
the-expert-zone     Nick   The show where we ask the experts the difficult questions
prolove             Zara   The dating advice show where we love listening to you
advert              Ian    Advertisements from our beloved sponsors

For example, to generate an advertisement script in the folder output:

python -m wet_toast_talk_radio.main scriptwriter advert --output-dir output
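
Scripts are saved as .jsonl files named after the show plus a unique suffix (like the-great-debate-6c817b.jsonl used in the audio example below). To find and inspect the result:

ls output/                             # e.g. advert-6c817b.jsonl (suffix will differ)
head -n 2 output/advert-6c817b.jsonl   # hypothetical filename; one JSON object per line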

πŸ—£ Audio Generation

⚠️ Audio generation is very slow on CPU: ~11x slower than real time, versus ~1.5x slower than real time on an Nvidia T4 16GB GPU. On first usage, models might also take a few minutes to download.

To generate audio for a given script:

python -m wet_toast_talk_radio.main audio-generator generate [--script SCRIPT_PATH --output-dir OUTPUT_DIR]

For example, to generate audio for a script the-great-debate-6c817b.jsonl in the folder output:

python -m wet_toast_talk_radio.main audio-generator generate --script output/the-great-debate-6c817b.jsonl --output-dir output
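
Since ffmpeg is already a prerequisite, ffplay is a handy way to listen to the result (the filename and extension below are assumptions; check the output folder for the actual file):

ls output/                                  # find the generated audio file
ffplay output/the-great-debate-6c817b.wav   # hypothetical name and extension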

βš™οΈ Development

Prerequisites

  β€’ pdm

Install

pdm install --dev

Configuration

This config.yml is used for local development, and assumes services are mocked with localstack:

message_queue:
  sqs:
    local: true
media_store:
  s3:
    local: true
    bucket_name: "media-store"
audio_generator:
  use_s3_model_cache: true
scriptwriter:
  llm:
    openai_api_key: sm:/wet-toast-talk-radio/scriptwriter/openai-api-key
disc_jockey:
  media_transcoder:
    clean_tmp_dir: false
  shout_client:
    password: "hackme"

Dependency Management

Add new dependencies with pdm:

pdm add torch

Then update requirements.txt and dev-requirements.txt by running:

sh create-requirements.sh

Testing

Unit tests are run with:

pdm run pytest

The test folder contains integration tests that require docker-compose up to be running. These tests are skipped by default, but can be enabled with the following flag:

pdm run pytest --integration
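
A typical integration test run therefore looks like this (docker-compose is described below):

docker-compose up -d          # start the mocked AWS services in the background
pdm run pytest --integration
docker-compose down           # tear everything down when finished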

Localstack

We use localstack to mock AWS services locally. These are configured in the docker-compose file. To run:

docker-compose up

S3

A localstack S3 bucket named media-store (see bucket_name in the development configuration above) is located at http://localhost:4566.

You can access it from the CLI like this:

aws --endpoint-url=http://localhost:4566 s3 cp ./wet_toast_talk_radio/media_store/virtual/data s3://media-store/raw --recursive
aws --endpoint-url=http://localhost:4566 s3 ls s3://media-store/raw/
aws --endpoint-url=http://localhost:4566 s3 ls s3://media-store/transcoded/
aws --endpoint-url=http://localhost:4566 s3 rm s3://media-store/transcoded/ --recursive

SQS

A localstack SQS message queue is located at http://localhost:4566.

You can access it from the CLI like this:

aws --endpoint-url=http://localhost:4566 sqs list-queues
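
Other standard SQS commands work the same way. For example, to peek at messages on a queue (the queue name below is hypothetical; use list-queues to find the real ones):

QUEUE_URL=$(aws --endpoint-url=http://localhost:4566 sqs get-queue-url --queue-name stream-shows --query QueueUrl --output text)
aws --endpoint-url=http://localhost:4566 sqs receive-message --queue-url "$QUEUE_URL"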

Icecast

An Icecast and Ices service will start on http://localhost:8000/.
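
You can open http://localhost:8000/ in a browser to see the Icecast status page, or listen from the command line with ffplay (the mount point below is a guess; the status page lists the actual one):

ffplay http://localhost:8000/stream   # hypothetical mount point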

CI

We use GitHub Actions to build our production Docker images. Workflows are found under .github/workflows.

Deployment

Wet Toast Talk Radio is deployed to AWS. See ./aws/README.md.

Code Guidelines

We use black as our code formatter.

We use ruff as our linter.

We use pytest as our testing framework.

Commits should follow this convention: refactor|feat|fix|docs|breaking|misc|chore|test: description
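
For example:

git commit -m "feat: add new advert sponsor generator"
git commit -m "fix: handle empty scripts in audio generation"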

😎 Credits


A special thanks to:

  • Berenike Melchior, Unwavering Support
  • Andy Moore, Comedy Consultant
  • Gab, Nerd Advisor
  • Gautier Roquancourt, Design Expert
  • All the smart, beautiful people who gave feedback


🀝 License

MIT license