Releases: browser-use/web-ui
DeepResearch Lands on Browser-Use Web UI, with Collaborative Agents! 🤖🤝📚
Thanks to @vvincent1234, you can now seamlessly leverage DeepResearch's advanced capabilities in the WebUI.
Important Notes:
- The DeepResearch feature is currently in alpha and under rapid development. Stay updated by watching this repository.
- DeepResearch consumes a relatively large number of tokens. Reduce Max Search Iteration and Max Query per Iteration as needed: they set the maximum number of search iterations and the number of simultaneous queries per search iteration, respectively (a small sketch below illustrates how they bound the number of searches).
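To make the cost implications concrete, here is a tiny illustrative sketch (not code from the WebUI; the variable names are invented for this example) showing that the number of search queries, and therefore rough token usage, is bounded by the product of the two settings:

```python
# Hypothetical illustration of the two DeepResearch settings described above.
max_search_iteration = 3      # maximum number of search iterations
max_query_per_iteration = 2   # simultaneous queries per search iteration

# Each iteration can issue up to `max_query_per_iteration` queries,
# so the total number of searches (and roughly the token cost) is bounded by:
max_total_queries = max_search_iteration * max_query_per_iteration
print(f"At most {max_total_queries} search queries will be issued.")
```

Lowering either value reduces token consumption at the cost of shallower research.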
What's New?
2025/02/09
- Hotfixed several bugs.
- Split extracted content and limited the maximum content length.
2025/02/07
- Added a stop button, allowing you to stop your research at any time.
- Support for using your own browser. Note that using your own browser currently supports only one query per search iteration.
- We currently recommend the gemini-2.0-flash-thinking-exp-01-21 model, because excessively long extracted content can sometimes cause API call errors.
Key benefits of this integration include:
- DeepResearch within Your Browser: Access all DeepResearch features directly in your own browser – no more need for external search APIs! 🌐
- Collaborative Agents: Harness the power of multiple AI agents working in concert. 🤖🤝
- Indexed Information Sources: Easily save and access all referenced articles for future reference, promoting transparency and ensuring the reliability of your research. 📚
How to Get Started:
- Update Your Code: Pull the latest version to experience the new features. ⬆️
- Choose a Powerful LLM: To fully utilize DeepResearch, select a reasoning-capable LLM such as `gemini-2.0-flash-thinking-exp-01-21`, `deepseek-r1`, or `o3-mini` (a configuration sketch follows these steps). 🧠
- Enter Your Research Topic: Navigate to the DeepResearch section within the Browser-Use Web UI and input your research theme. 📝
- Configure Parameters: Adjust `max_search_iteration_input` and `max_query_per_iter_input` according to the complexity of your research. ⚙️
- Run Deep Research: Click the "run_deep_research" button and wait for your professional research report to be generated. ⏳
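For readers who want to see roughly what the model choice amounts to in code, below is a minimal sketch of constructing one of the recommended reasoning-capable models with LangChain. This is an illustration rather than the WebUI's own code; it assumes the langchain-google-genai package is installed and that GOOGLE_API_KEY is set in your environment or .env file.

```python
import os

from langchain_google_genai import ChatGoogleGenerativeAI  # assumed installed

# Illustrative sketch only: the WebUI exposes this choice through its LLM
# settings; here we build the recommended reasoning-capable Gemini model directly.
llm = ChatGoogleGenerativeAI(
    model="gemini-2.0-flash-thinking-exp-01-21",
    google_api_key=os.environ["GOOGLE_API_KEY"],  # assumes the key is already set
)

# Quick smoke test of the model before running a full deep-research session.
print(llm.invoke("In one sentence, why do reasoning models help deep research?").content)
```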
🚀 Local DeepSeek-r1 Power with Ollama!
Hey everyone,
We've just rolled out a new release packed with awesome updates:
- Browser-Use Upgrade: We're now fully compatible with the latest `browser-use` version 0.1.29! 🎉
- Local Ollama Integration: Get ready for completely local and private AI with support for the incredible `deepseek-r1` model via Ollama! 🏠
Before You Dive In:
- Update Code: Don't forget to `git pull` to grab the latest code changes.
- Reinstall Dependencies: Run `pip install -r requirements.txt` to ensure all your dependencies are up to date.
Important Notes on `deepseek-r1`:
- Model Size Matters: We've found that `deepseek-r1:14b` and larger models work exceptionally well! Smaller models may not provide the best experience, so we recommend sticking with the larger options. 🤔
How to Get Started with Ollama and `deepseek-r1`:
- Install Ollama: Head over to the Ollama website and download/install Ollama on your system. 💻
- Run `deepseek-r1`: Open your terminal and run the command `ollama run deepseek-r1:14b` (or a larger model if you prefer).
- WebUI Setup: Launch the WebUI following the instructions. Here's a crucial step: uncheck "Use Vision" and set "Max Actions per Step" to 1. ✅ (A programmatic sketch of the equivalent settings follows these steps.)
- Enjoy! You're now all set to experience the power of local `deepseek-r1`. Have fun! 🥳
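If you are curious what those WebUI settings correspond to programmatically, here is a minimal sketch that drives browser-use directly with a local Ollama model. It is an illustration under stated assumptions, not code from this repository: it assumes browser-use 0.1.29 and langchain-ollama are installed, and the Agent keyword arguments reflect my reading of the browser-use API and may differ slightly in your version.

```python
import asyncio

from browser_use import Agent            # assumes browser-use >= 0.1.29 is installed
from langchain_ollama import ChatOllama  # assumes langchain-ollama is installed

# Fully local, private LLM served by Ollama; run `ollama run deepseek-r1:14b` first.
llm = ChatOllama(model="deepseek-r1:14b")

async def main() -> None:
    # Mirrors the recommended WebUI settings: vision disabled, one action per step.
    # (These keyword names are my reading of the browser-use Agent API.)
    agent = Agent(
        task="Open example.com and summarize the page in one sentence.",
        llm=llm,
        use_vision=False,
        max_actions_per_step=1,
    )
    await agent.run()

asyncio.run(main())
```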
Happy Chinese New Year! 🏮
✨ DeepSeek-r1 + Browser-use = New Magic ✨
🚀 Exciting news! Your browser-use can now engage in deep thinking!
Notes:
- The current version is a preview of DeepSeek-r1 support and is under active development; please keep your code updated.
- The current version only supports the official DeepSeek-r1 API.
How to Use:
- 🔑 Configure API Key: Make sure you have set the correct DEEPSEEK_API_KEY in your .env file.
- 🌐 Launch WebUI: Launch the WebUI as instructed in the README.
- 👀 Disable Vision: In Agent Settings, uncheck "Use_Vision".
- 🤖 Select Model: In LLM Provider, select "deepseek", and in Model Name, select "deepseek-reasoner" (see the configuration sketch after these steps).
- 🎉 Enjoy!
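As a rough illustration of what the provider and model selection does (not the WebUI's own code): the official DeepSeek API is OpenAI-compatible, so a deepseek-reasoner client can be built like this with LangChain, assuming langchain-openai is installed and DEEPSEEK_API_KEY is set as described above:

```python
import os

from langchain_openai import ChatOpenAI  # assumes langchain-openai is installed

# Illustrative sketch: the official DeepSeek API is OpenAI-compatible, so
# deepseek-reasoner can be reached through an OpenAI-style chat client.
llm = ChatOpenAI(
    model="deepseek-reasoner",
    base_url="https://api.deepseek.com",
    api_key=os.environ["DEEPSEEK_API_KEY"],  # set in your .env file as described above
)

print(llm.invoke("Reply with a one-line greeting.").content)
```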
Hotfix some errors
- Upgraded to browser-use==0.1.19 to solve a font OS error on Windows.
- Fixed a result-parsing error in the stream feature (Headless=True); the agent history file can now be returned.
- Fixed the Stop button status in the stream feature.
Please pull the latest code and run `pip install -r requirements.txt`.
New WebUI: Enhanced Features and Compatibility
- A brand-new WebUI interface with added features like video display.
- Adapted for the latest version of browser-use, with native support for models like Ollama, Gemini, and DeepSeek. Please update your code and run `pip install -r requirements.txt`.
- Ability to stop agent tasks at any time.
- Real-time page display in the WebUI when headless=True.
- Improved custom browser usage, fixing a bug with using your own browser on Mac.
- Support for Docker environment installation.
Original version
- A Brand New WebUI: We offer a comprehensive web interface that supports a wide range of browser-use functionalities. This UI is designed to be user-friendly and enables easy interaction with the browser agent.
- Expanded LLM Support: We've integrated support for various Large Language Models (LLMs), including Gemini, OpenAI, Azure OpenAI, Anthropic, DeepSeek, Ollama, etc., and we plan to add support for even more models in the future.
- Custom Browser Support: You can use your own browser with our tool, eliminating the need to re-log in to sites or deal with other authentication challenges. This feature also supports high-definition screen recording.
- Customized Agent: We've implemented a custom agent that enhances browser-use with optimized prompts.