
backend: allow disabling debug messages #1439

Closed
wants to merge 1 commit into from

Conversation

@apage43 (Member) commented Sep 18, 2023

Allows disabling (most) logging in the backend - unless a message is explicitly requested by another option, or is fatal - and turns this option on for the Python binding builds in CircleCI.

Mostly #ifdefs around logging statements, except in llama.cpp (the file) (see the submodule change and PR nomic-ai/llama.cpp#6 to update master), where I instead #ifdef away fprintf to avoid introducing more merge conflicts than absolutely necessary.
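The two guarding approaches described above can be sketched roughly as follows. GPT4ALL_SILENCE_LOGS is a hypothetical flag name standing in for whatever build option the PR actually introduces, and the function names are illustrative, not the real backend API:

```c
#include <stdio.h>

/* Hypothetical build flag standing in for the PR's real option name.
 * Uncomment (or pass -DGPT4ALL_SILENCE_LOGS) to silence logging. */
/* #define GPT4ALL_SILENCE_LOGS */

/* Approach 1: wrap individual logging statements in #ifdefs
 * (the pattern used throughout the backend sources). */
void backend_load_model(const char *path) {
#ifndef GPT4ALL_SILENCE_LOGS
    fprintf(stderr, "loading model from %s\n", path);
#endif
    /* ... actual loading work ... */
}

/* Approach 2: redefine fprintf itself. Inside llama.cpp this leaves
 * the existing call sites textually untouched, minimizing merge
 * conflicts with upstream. */
#ifdef GPT4ALL_SILENCE_LOGS
#define fprintf(stream, ...) ((void)0)
#endif

/* Existing llama.cpp-style call sites compile to no-ops when the
 * flag is set, without editing each line. */
void llama_style_logging(void) {
    fprintf(stderr, "verbose debug output\n");
}

/* Small helper so callers can query whether logging was compiled in. */
int gpt4all_logging_enabled(void) {
#ifdef GPT4ALL_SILENCE_LOGS
    return 0;
#else
    return 1;
#endif
}
```

The macro redefinition is deliberately coarse: it silences every fprintf below its point of definition in that translation unit, which is acceptable in llama.cpp precisely because nearly all of its fprintf calls are debug output.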

This notably does not prevent the

objc[66083]: Class GGMLMetalClass is implemented in both /Users/aaron/proj/gpt4all/gpt4all-bindings/python/gpt4all/llmodel_DO_NOT_MODIFY/build/libreplit-mainline-metal.dylib (0x110744210) and /Users/aaron/proj/gpt4all/gpt4all-bindings/python/gpt4all/llmodel_DO_NOT_MODIFY/build/libllamamodel-mainline-metal.dylib (0x111e6c210). One of the two will be used. Which one is undefined.

message that occurs on macOS - this comes from the Objective-C runtime and is printed because more than one library that statically links llama.cpp has a GGMLMetalClass defined in it. This is difficult to prevent because Obj-C classes are not namespaced and cannot be hidden like other symbols in statically linked libraries, and we currently load multiple backend libraries that link this code into the same app simultaneously.

This is at least harmless, as the class contains no code and is only used to leverage Objective-C runtime functionality on macOS to find the library path - all of our backend files live in the same directory, so it does not matter which implementation is actually used here. To prevent this message from being output we would need to either prevent multiple backend libraries from being resident at once (which would mean making them safe to unload and reload, likely not currently the case), OR somehow generate a different class name for each library that links llama.cpp (a change not likely to be upstreamable, as it is only needed to accommodate loading multiple distinct builds of llama.cpp into the same application).
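At the preprocessor level, the "different class name per library" idea could look something like the token-pasting sketch below. BACKEND_SUFFIX and the macro names are hypothetical, and a real fix would have to rename the Objective-C @interface/@implementation pair, not just a C identifier:

```c
#include <string.h>

/* Hypothetical per-backend suffix, set by each backend's build
 * (e.g. one value for the mainline build, another for replit). */
#define BACKEND_SUFFIX Mainline

/* Two-level concatenation so BACKEND_SUFFIX is expanded before
 * pasting. */
#define CONCAT2(a, b) a##b
#define CONCAT(a, b) CONCAT2(a, b)

/* Every mention of GGMLMetalClass now names a per-backend symbol,
 * so two statically linked copies would register distinct classes
 * instead of colliding at runtime. */
#define GGMLMetalClass CONCAT(GGMLMetalClass_, BACKEND_SUFFIX)

/* Two-level stringizing, so we can inspect the expanded name. */
#define STR2(x) #x
#define STR(x) STR2(x)

/* The class name this particular build would register. */
static const char *ggml_metal_class_name = STR(GGMLMetalClass);
```

This is the kind of change the paragraph above calls unlikely to be upstreamable: it complicates llama.cpp's build purely to accommodate one downstream's multi-backend loading scheme.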

@apage43 (Member, Author) commented Oct 12, 2023

obsoleted by upstream

@apage43 apage43 closed this Oct 12, 2023