Releases: yusufcanb/tlm
1.2
One-liner RAG has arrived in tlm v1.2! 🎉

Version 1.2 of tlm introduces one-liner Retrieval-Augmented Generation (RAG) with the new `tlm ask` command. This beta feature allows you to ask questions and get contextually relevant answers drawn directly from your codebase and documentation.
Inspired by the Repomix project, `tlm ask` provides a similar context-gathering mechanism, implemented efficiently in Go (a minimal sketch of the idea follows the feature list below). However, tlm goes a step further by bridging context retrieval with local, open-source LLM prompting, giving you security and privacy from the comfort of your terminal.
Key Features of `tlm ask`:

- Instant Answers: Get quick answers to direct questions using `tlm ask "<prompt>"`.
- Contextual Understanding: Enhance answer accuracy by providing context. Use the `--context` flag and specify a directory for analysis, e.g., `tlm ask --context . "<prompt>"`.
- Granular Context Control: Further refine the context using the `--include` and `--exclude` flags with file patterns. Target specific files or exclude irrelevant ones, e.g., `tlm ask --context . --include *.md "<prompt>"` or `tlm ask --context . --exclude **/*_test.go "<prompt>"`.
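As a rough illustration of the context-gathering idea, here is a minimal Go sketch that walks a directory, applies include/exclude glob patterns, and concatenates the matching files into a single prompt context. This is a sketch under stated assumptions, not tlm's actual implementation: the names `gatherContext` and `matchAny` and the `=== path ===` framing are all illustrative.

```go
package main

import (
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
	"strings"
)

// gatherContext walks root and concatenates every file whose base name matches
// an include pattern and no exclude pattern into one prompt-ready string.
func gatherContext(root string, includes, excludes []string) (string, error) {
	var b strings.Builder
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		if !matchAny(includes, d.Name(), true) || matchAny(excludes, d.Name(), false) {
			return nil
		}
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		// Frame each file with its path so the model can attribute content.
		fmt.Fprintf(&b, "=== %s ===\n%s\n", path, data)
		return nil
	})
	return b.String(), err
}

// matchAny reports whether name matches any glob pattern; emptyMatches is the
// result for an empty pattern list (true for includes, false for excludes).
func matchAny(patterns []string, name string, emptyMatches bool) bool {
	if len(patterns) == 0 {
		return emptyMatches
	}
	for _, p := range patterns {
		if ok, _ := filepath.Match(p, name); ok {
			return true
		}
	}
	return false
}

func main() {
	ctx, err := gatherContext(".", []string{"*.md"}, nil)
	if err != nil {
		panic(err)
	}
	fmt.Printf("gathered %d bytes of context\n", len(ctx))
}
```

One caveat: `filepath.Match` handles single-star patterns against base names only, so a double-star pattern like `**/*_test.go` would need a more capable glob library in a real implementation.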
Example Usage:

```bash
tlm ask "What is the main purpose of this function?"
tlm ask --context ./src --include *.go "How does authentication work?"
tlm ask --context ./docs --include *.md --exclude README.md "Summarize the key concepts."
tlm ask --interactive "What are the dependencies?"
```
1.2-pre
Use the model you like! 🥳🎉
Starting with the 1.2-pre version, tlm deprecates the use of Modelfiles and can work with any base model without creating its own. That was the most requested change from earlier discussions. Initially, I wanted to abstract the user away from the underlying model so they could just focus on getting good results. But with the boom of new open-source models, I've decided not to have an opinion on which model to use. Users can choose whichever one works best for them!
Changelog
- Removal of the Modelfile approach. `tlm` now uses base models directly, without requiring custom model creation.
- `tlm config` lists all available Ollama models and lets you select a default model to work with (see the sketch after this list).
- The default model is now `qwen2.5-coder:3b`, which is both accurate and blazing fast.
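For a sense of what listing models involves, here is a minimal Go sketch that enumerates locally available Ollama models. The `GET /api/tags` endpoint and its `models` array come from Ollama's documented REST API; the struct name and numbered output are illustrative assumptions, not tlm's actual `tlm config` code.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// tagsResponse mirrors the relevant part of Ollama's GET /api/tags payload.
type tagsResponse struct {
	Models []struct {
		Name string `json:"name"`
	} `json:"models"`
}

func main() {
	// Ollama serves its REST API on localhost:11434 by default.
	resp, err := http.Get("http://localhost:11434/api/tags")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var tags tagsResponse
	if err := json.NewDecoder(resp.Body).Decode(&tags); err != nil {
		panic(err)
	}
	// Print a numbered list the user could pick a default model from.
	for i, m := range tags.Models {
		fmt.Printf("%d) %s\n", i+1, m.Name)
	}
}
```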
Full Changelog: 1.1...1.2-pre
1.1
1.1 is out! 🥳 🎉
Thank you so much to everyone who showed interest in the initial release. Your support has been incredible! Within just two weeks, tlm has rocketed from zero to 231 stars. This overwhelming response is truly humbling and inspiring.

It's because of this engagement that I'm thrilled to announce the release of version 1.1. This update aims to enhance the project's robustness and maintainability, laying the groundwork for continued growth and enabling easier collaboration on the project.
```shell
$ tlm s 'get me a cowsay to express excitement of tlm 1.1 release'
┃ > Thinking... (1.198s)
┃ > cowsay "tlm 1.1 is out! let's celebrate!"
┃ > Executing...
 ----------------------------------
< tlm 1.1 is out! let's celebrate! >
 ----------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
```
Changelog
- Ability to override automatic shell detection and generate suggestions for different shells.
- Suggestion/explanation preset optimizations (Precise/Balanced/Creative); see the sketch after this list.
- Non-interactive configuration.
- E2E tests for acceptance.
- Notifies the user about new releases.
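The preset names suggest different sampling behaviors. As a purely illustrative Go sketch, presets like these could map to sampling parameters along the following lines; the actual values tlm uses are not stated in these notes, so the numbers below are placeholders.

```go
package main

import "fmt"

// preset bundles the sampling parameters a generation preset might control.
// NOTE: placeholder values; tlm's real Precise/Balanced/Creative settings
// are not documented in these release notes.
type preset struct {
	Temperature float64 // lower = more deterministic output
	TopP        float64 // nucleus sampling cutoff
}

var presets = map[string]preset{
	"precise":  {Temperature: 0.1, TopP: 0.5},
	"balanced": {Temperature: 0.5, TopP: 0.9},
	"creative": {Temperature: 0.9, TopP: 1.0},
}

func main() {
	p := presets["precise"]
	fmt.Printf("temperature=%.1f top_p=%.1f\n", p.Temperature, p.TopP)
}
```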
Full Changelog: 1.0...1.1
Discussions
- Integration with Homebrew, Scoop and Snap stores for easier distribution.
- Code signing.
1.0
What's Changed
- Release/1.0 by @yusufcanb in #9
New Contributors
- @sadikkuzu made their first contribution in #7
Full Changelog: 1.0-rc3...1.0
1.0-rc3
What's Changed
- Install script minor updates by @slim-abid in #4
- Updated README and added contribution by @eomeragic1 in #5
- Release Candidate 3 for 1.0 by @yusufcanb in #6
New Contributors
- @slim-abid made their first contribution in #4
- @eomeragic1 made their first contribution in #5
Full Changelog: 1.0-rc2...1.0-rc3