Add flatpak manifest #1427

Merged · 4 commits merged into nomic-ai:main on Oct 5, 2023

Conversation

@qnixsynapse (Contributor) commented on Sep 15, 2023

Describe your changes

Add flatpak manifest for building flatpaks on Linux

Refer: #698

TODOs:

  • Create flatpak manifests
  • Add CircleCI tests
  • Submit to Flathub

Testing and reviews are welcome! You will need org.kde.Sdk 6.5 and org.freedesktop.Sdk.Extension.node14 for it to build; please install them before running flatpak-builder.
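For anyone who wants to try it, a rough sketch of the steps. The runtime branch for the node14 extension and the manifest filename below are my assumptions, not taken from this PR; adjust them to match the manifest in the repo.

```sh
# Make sure Flathub is configured (the SDKs below are assumed to come from Flathub)
flatpak remote-add --user --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo

# Install the required SDKs; the 22.08 branch for the node14 extension is an assumption
flatpak install --user flathub org.kde.Sdk//6.5 org.freedesktop.Sdk.Extension.node14//22.08

# Build and install into the user installation; the manifest filename is hypothetical
flatpak-builder --user --install --force-clean build-dir io.gpt4all.gpt4all.yaml
```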

@cosmic-snow (Collaborator) commented on Sep 17, 2023

I've tested this PR on native Linux (v2.4.19 / Linux Mint 21.2) and it has been working fine for me.

Things tested so far:

  • building, local (i.e. user) installation
  • changing the model directory to a new, non-standard one (Flatpak has its own default dir and I wanted to try a different one)
  • downloading a model in said directory (Orca Mini Small)
  • had to restart after download, but then the model loaded fine
  • GPU and CPU ran fine

I think having to restart sometimes after a model download is not uniquely a Flatpak problem; I've seen it mentioned by others in the past, too.

@apage43 (Member) commented on Sep 22, 2023

If you have a flatpak-builder older than 1.2.3 (the most recent on Ubuntu 22.04 is 1.2.2), you'll hit the bug fixed in flatpak/flatpak-builder#497; there's a workaround in the linked issue (but it bypasses a git vulnerability mitigation).
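If the workaround is the usual one for that error (relaxing git's file-transport restriction; I'm inferring this from the "git vulnerability mitigation" note, so check the linked issue for the exact steps), it looks roughly like:

```sh
# WARNING: this bypasses the mitigation for CVE-2022-39253 (submodule cloning over the
# local file transport); consider reverting it after the build, e.g.
#   git config --global --unset protocol.file.allow
git config --global protocol.file.allow always
```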

@apage43 (Member) left a review comment:

works here too

<description>
<p>Cross platform Qt based GUI for GPT4All versions with GPT-J as the base model.</p>
<ul>
<li>Fast CPU and GPU based inference using ggml for GPT-J and LLaMA based models</li>
@apage43 (Member) commented on the appdata lines above, Sep 22, 2023:

Model support is a bit odd right now: the latest versions do not support the models (GPT-J, MPT) that required the old llama.cpp forks we kept around to avoid breaking old files, and only LLaMA is currently supported with Vulkan. We're working on updating our llama.cpp to a post-GGUF version before re-adding those models, so that we can unify all the file-handling logic and ideally match file formats with any other ggml implementations of those models that have also updated to GGUF.

Sideloading was always tricky for non-LLaMA ggml models: there isn't a single "ggml format" for the many model types implemented outside llama.cpp, because different implementations wound up making different decisions.

@qnixsynapse (Contributor, author) replied:

I copied @Tim453's appdata and made some changes; feel free to adjust it to your liking.

We will also need some up-to-date screenshots.


@manyoso merged commit 5f3d739 into nomic-ai:main on Oct 5, 2023