
Properly selecting between shaders for GL/GLES/WebGL #324

Open
agausmann opened this issue Apr 11, 2020 · 2 comments

@agausmann
Contributor

agausmann commented Apr 11, 2020

At the moment, opengl_graphics selects between shader sources at compile time by checking target_os = "emscripten", but this isn't a perfect solution. I'm working on a library that translates GL to WebGL in wasm32-unknown-unknown, which will also require GLSL ES shaders; however, it won't work with this logic. It is also possible to target OpenGL ES outside of WebAssembly, and such use cases will also require GLSL ES shaders. I did implement some manual shader selection using feature flags in #318 and #319, but that isn't the best way to solve this either.

GL has a parameter called SHADING_LANGUAGE_VERSION, which can be queried at runtime via glGetString to find the supported shading language version. GL, WebGL, and GL ES all support it, with the following string formats:

  • OpenGL: <major>.<minor> <vendor-specific> (Reference, section 22.2)
  • OpenGL ES: OpenGL ES GLSL ES <major>.<minor> <vendor-specific> (Reference, section 6.1.6)
  • WebGL: WebGL GLSL ES <major>.<minor> <vendor-specific> (Reference, section 5.14.3)

I propose changing the logic to detect the shader version at runtime using this scheme instead of selecting it at compile time. The user would no longer have to pick a shader version manually, at the cost of some overhead: a small amount of startup time spent querying the GL, and slightly larger binaries because all of the shader versions are included (though that should add less than a few KB, and the included shaders could still be pruned manually via default feature flags if needed).
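To illustrate, here is a minimal sketch of the runtime detection, based only on the string prefixes listed above. The names (`ShaderDialect`, `detect_dialect`) are illustrative, not part of the opengl_graphics API, and a real implementation would also parse the `<major>.<minor>` numbers to pick a concrete GLSL version:

```rust
/// Hypothetical classification of the string returned by
/// glGetString(SHADING_LANGUAGE_VERSION).
#[derive(Debug, PartialEq)]
enum ShaderDialect {
    /// Desktop OpenGL: "<major>.<minor> <vendor-specific>"
    Glsl,
    /// OpenGL ES: "OpenGL ES GLSL ES <major>.<minor> <vendor-specific>"
    GlslEs,
    /// WebGL: "WebGL GLSL ES <major>.<minor> <vendor-specific>"
    WebGlEs,
}

fn detect_dialect(version: &str) -> ShaderDialect {
    if version.starts_with("WebGL GLSL ES") {
        ShaderDialect::WebGlEs
    } else if version.starts_with("OpenGL ES GLSL ES") {
        ShaderDialect::GlslEs
    } else {
        // Desktop GL reports only the bare version number up front.
        ShaderDialect::Glsl
    }
}

fn main() {
    assert_eq!(detect_dialect("4.60 NVIDIA"), ShaderDialect::Glsl);
    assert_eq!(detect_dialect("OpenGL ES GLSL ES 3.00"), ShaderDialect::GlslEs);
    assert_eq!(detect_dialect("WebGL GLSL ES 1.0"), ShaderDialect::WebGlEs);
}
```

The prefix checks must test the WebGL and GL ES strings before falling back to desktop GL, since the desktop format has no distinguishing prefix of its own.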

@agausmann
Contributor Author

To be clear on why I think the current feature-based solution isn't great:

  • It requires the end-user to manually manage shader versions. I'd rather have a system that can automatically pick them, and I think the small extra runtime cost is generally accepted in the case of platform-dependent APIs. For example, at compile time we don't have to manually manage the OpenGL implementation that provides the bindings. Those function pointers get loaded at runtime.

  • It violates the principle that feature flags should be backwards-compatible. When I, as the end user, enable the webgl feature, I'm technically taking on the responsibility of making sure it matches the platform, but the flag still breaks compatibility with platforms that use standard GLSL.

@bvssvni
Member

bvssvni commented Apr 11, 2020

This seems reasonable. As a precaution, we could develop against a separate branch and test this for a while before releasing it. What do you think?
