Add ability to iterate on a TUI (like v0.dev) #7

Open
shobrook opened this issue Jan 3, 2025 · 11 comments

Comments

@shobrook
Owner

shobrook commented Jan 3, 2025

No description provided.

@apockill
Contributor

apockill commented Jan 5, 2025

I'm working on something to address #3. Do you have any preferences on how to do it?

I was thinking instead of saving the script to .termite/####_script_name.py, there's a project directory for each script.

.
└── .termite
    ├── another_cool_tool
    │   ├── metadata.json
    │   └── tui.py
    └── cool_tool
        ├── metadata.json
        └── tui.py

When termite is called, it could create a DTO à la:

from dataclasses import dataclass
from pathlib import Path

@dataclass
class Project:
    _metadata_path: Path
    _tui_path: Path

    @property
    def script(self) -> str:
        ...

    @property
    def metadata(self) -> "Metadata":
        ...

    @staticmethod
    def from_dir() -> "Project":
        ...

    @staticmethod
    def create(script, design) -> "Project":
        ...
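The sketch above could be filled in roughly like this. The file names follow the tree shown earlier; the flat-dict metadata schema and the extra path/name arguments on from_dir/create are assumptions for illustration:

```python
import json
from dataclasses import dataclass
from pathlib import Path

@dataclass
class Project:
    _metadata_path: Path
    _tui_path: Path

    @property
    def script(self) -> str:
        # The generated TUI source, stored as tui.py per the tree above.
        return self._tui_path.read_text()

    @property
    def metadata(self) -> dict:
        # Conversations/context from past runs; schema is an assumption.
        return json.loads(self._metadata_path.read_text())

    @staticmethod
    def from_dir(project_dir: Path) -> "Project":
        return Project(project_dir / "metadata.json", project_dir / "tui.py")

    @staticmethod
    def create(root: Path, name: str, script: str, design: str) -> "Project":
        project_dir = root / name
        project_dir.mkdir(parents=True, exist_ok=True)
        (project_dir / "tui.py").write_text(script)
        (project_dir / "metadata.json").write_text(json.dumps({"design": design}))
        return Project.from_dir(project_dir)
```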

Then metadata can contain conversations and context from past runs.

If the user calls termite <prompt> --name "cool_tool" it can use that name. In the future, instead of ####_script_name/, an LLM can generate a useful name for the tool.

To run a tool, perhaps termite --run cool_tool would suffice.
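The flags described above could be wired up with argparse along these lines; this is only a sketch of the proposed interface, not Termite's actual CLI:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="termite")
    # Positional prompt is optional so that `termite --run NAME` works alone.
    parser.add_argument("prompt", nargs="?", help="description of the TUI to generate")
    parser.add_argument("--name", help="project name under .termite/ (e.g. cool_tool)")
    parser.add_argument("--run", metavar="NAME", help="run an existing tool by name")
    return parser

args = build_parser().parse_args(["Make a GPU monitor", "--name", "cool_tool"])
# args.prompt == "Make a GPU monitor", args.name == "cool_tool"
```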


Let me know what you think, I'm happy to implement this. Cheers!

@shobrook
Owner Author

shobrook commented Jan 6, 2025

This is a good idea. Only thing I'm wondering is how the actual UI should look for iterating on a TUI. In v0.dev, the chat is side-by-side with the output. That might not be so doable in the terminal. Any ideas?

@apockill
Contributor

apockill commented Jan 6, 2025

It's a good point. In this application, I suppose you don't really care to see what the LLM has to say, only what the TUI looks like. So maybe the workflow is:

What do you want to make? (Ctrl-C to exit)
> Make a GPU monitor using nvidia-smi
<The usual loading bars...>
Press `q` to exit your TUI and return to the conversation
<TUI pops up>

User presses q

What do you want to make? (Ctrl-C to exit)
> Make a GPU monitor using nvidia-smi
What would you like to change?
> It doesn't seem to be refreshing live, there must be some kind of issue with your callbacks ... blah blah ...

Then it jumps back to loading bars / showing the TUI. Thoughts? On the next repetition it might show the same prompt:

What do you want to make? (Ctrl-C to exit)
> Make a GPU monitor using nvidia-smi
What would you like to change?
> It doesn't seem to be refreshing live, there must be some kind of issue with your callbacks ... blah blah ...
What would you like to change?
> Add more colors, I want it to be NVIDIA-green themed

etc etc
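The flow above could be sketched as a simple loop: the first turn asks "What do you want to make?", every later turn asks "What would you like to change?". The read callback and the placeholder for generation are hypothetical; passing the input function in keeps the sketch testable:

```python
from typing import Callable, List

def chat_loop(read: Callable[[str], str], max_turns: int = 3) -> List[str]:
    requests = []
    for turn in range(max_turns):
        prompt = ("What do you want to make? (Ctrl-C to exit)"
                  if turn == 0 else "What would you like to change?")
        try:
            requests.append(read(prompt + "\n> "))
        except KeyboardInterrupt:
            break  # user exited the conversation
        # ... generate or regenerate the TUI here, then display it ...
    return requests
```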

@shobrook
Owner Author

shobrook commented Jan 7, 2025

That makes sense. How do we guarantee that pressing q will always quit the TUI and return the user to the chat session?

@apockill
Contributor

apockill commented Jan 7, 2025

That I do not know 😁

It might be easier to just have Ctrl+C exit the TUI: we catch the KeyboardInterrupt in the main chat loop and go back to prompting. If the user then presses Ctrl+C while in the chat loop, the program exits. That's nice from a UX standpoint because it keeps the "exit" mechanics consistent.
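The two-level Ctrl+C idea can be sketched as nested handlers (in Python the exception is KeyboardInterrupt). An interrupt while the TUI runs returns to the chat prompt; an interrupt at the chat prompt exits the session. The run_tui and read_prompt callables are stand-ins:

```python
def session(run_tui, read_prompt) -> str:
    while True:
        try:
            prompt = read_prompt()  # Ctrl+C here exits the program
        except KeyboardInterrupt:
            return "exited"
        try:
            run_tui(prompt)         # Ctrl+C here returns to the chat loop
        except KeyboardInterrupt:
            continue
```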

If you like the general proposal, I can play around with implementations and see if I can experiment with possible ways to exit the TUI reliably.

@shobrook
Owner Author

shobrook commented Jan 9, 2025

That seems clean and simple. We should make sure that Ctrl+C and q (if implemented) behave the same way, though.

@shobrook
Owner Author

shobrook commented Jan 9, 2025

Also, how do you think issue #6 should be supported in this new workflow? Maybe just a simple Y/n input prompt before opening the TUI?
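A Y/n gate like the one suggested could be as small as the helper below. The wording and the default answer are assumptions, and the response is passed in as a string so the helper stays testable:

```python
def confirm(answer: str, default: bool = True) -> bool:
    """Interpret a Y/n response; empty input takes the default."""
    answer = answer.strip().lower()
    if not answer:
        return default
    return answer in ("y", "yes")

# Usage sketch: gate launching the TUI on the user's answer.
# if confirm(input("Open the TUI? [Y/n] ")):
#     launch_tui()
```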

@silopolis

silopolis commented Jan 23, 2025

Hello there :)

First, congrats and thanks for sharing this pretty exciting tool 🙏

After giving it a few tries, this is indeed the first feature I found myself wanting: it's not that simple to write the perfect prompt on the first try, and of course each new "answer" seeds new ideas... and bugs 😅

The second, which is related to things discussed here, is that I wish Termite would:

  • use the current directory (or one specified on the CLI) as the root of the Python project (with all usual conventions and good practices) it is assisting in building,
  • use ./.termite/ to store its special sauce (config, metadata, prompts/conversations histories, design document, etc),
  • generate code and project files (deps, venv) in CWD instead of burying it deep into $HOME

TY
J

@shobrook
Owner Author

shobrook commented Jan 23, 2025

I'd merge any PR that allows iteration on the TUI.

And as for your other ideas, those also sound good to me, although much lower priority.

@chrisdlees

I dunno about you guys, but I'm thinking the newest "thinking" models might replace an agentic approach... I may just have to take a stab at a PR myself.

@shobrook
Owner Author

shobrook commented Mar 19, 2025

I wouldn't call Termite an agent, but yes I think using a reasoning model would improve results. @chrisdlees If you give this a go, I'd recommend using LiteLLM.
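A hypothetical sketch of routing generation through LiteLLM, which exposes a single OpenAI-style completion() interface over many providers. The model name, prompt wording, and the single-turn message shape are placeholders; past conversation from the project metadata could be prepended to the messages list:

```python
def build_messages(design: str) -> list:
    # One user turn; prior turns from metadata could be prepended here.
    return [{"role": "user", "content": f"Write a Python TUI: {design}"}]

def generate_tui_code(design: str, model: str = "o1-mini") -> str:
    from litellm import completion  # deferred import; pip install litellm
    response = completion(model=model, messages=build_messages(design))
    return response.choices[0].message.content
```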
