# Releases

## v3.1.0
### Changed
- Customize combo boxes and context menus to fit the new style (#2535)
- Improve view bar scaling and Model Settings layout (#2520)
- Make the logo spin while the model is generating (#2557)
- Server: Reply to wrong GET/POST method with HTTP 405 instead of 404 (by @cosmic-snow in #2615)
- Update theme for menus (by @3Simplex in #2578)
- Move the "stop" button to the message box (#2561)
- Build with CUDA 11.8 for better compatibility (#2639)
- Make links in latest news section clickable (#2643)
- Support translation of settings choices (#2667, #2690)
- Improve LocalDocs view's error message (by @cosmic-snow in #2679)
- Ignore case of LocalDocs file extensions (#2642, #2684)
- Update llama.cpp to commit 87e397d00 from July 19th (#2694)
  - Add support for GPT-NeoX, Gemma 2, OpenELM, ChatGLM, and Jais architectures (all with Vulkan support)
  - Enable Vulkan support for StarCoder2, XVERSE, Command R, and OLMo
- Show scrollbar in chat collections list as needed (by @cosmic-snow in #2691)
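The 405-vs-404 server change above can be illustrated with a minimal sketch. This is illustrative Python, not the project's actual server code; the `/v1/models` route is used as a hypothetical stand-in. The point is the semantics: a path that exists but does not support the request method should answer 405 Method Not Allowed, while an unknown path answers 404 Not Found.

```python
# Illustrative only: a tiny HTTP server that distinguishes
# "known path, wrong method" (405) from "unknown path" (404).
import http.server
import threading
import urllib.error
import urllib.request


class Handler(http.server.BaseHTTPRequestHandler):
    ROUTES = {"/v1/models"}  # hypothetical route table; GET-only endpoint

    def do_GET(self):
        code = 200 if self.path in self.ROUTES else 404
        self.send_response(code)
        self.end_headers()

    def do_POST(self):
        # The path exists, but POST is not supported here -> 405, not 404.
        code = 405 if self.path in self.ROUTES else 404
        self.send_response(code)
        self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging


server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_port}"


def status(method, path):
    """Return the HTTP status code for a request, even on error responses."""
    req = urllib.request.Request(
        base + path, method=method, data=b"" if method == "POST" else None
    )
    try:
        return urllib.request.urlopen(req).status
    except urllib.error.HTTPError as e:
        return e.code


get_known = status("GET", "/v1/models")    # 200: known path, supported method
post_known = status("POST", "/v1/models")  # 405: known path, wrong method
get_unknown = status("GET", "/nope")       # 404: unknown path
print(get_known, post_known, get_unknown)
server.shutdown()
```

Before this change, both of the failing cases returned 404; answering 405 lets API clients tell "wrong method" apart from "wrong URL".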
### Fixed
- Fix placement of thumbs-down and datalake opt-in dialogs (#2540)
- Select the correct folder with the Linux fallback folder dialog (#2541)
- Fix clone button sometimes producing blank model info (#2545)
- Fix jerky chat view scrolling (#2555)
- Fix "reload" showing for chats with missing models (#2520)
- Fix property binding loop warning (#2601)
- Fix UI hang with certain chat view content (#2543)
- Fix crash when Kompute falls back to CPU (#2640)
- Fix several Vulkan resource management issues (#2694)