Recently I realized that my application uses an OpenGL call which is available only in OpenGL 4.5, glCreateTextures, and I noticed it only because I tried to run the application on a macOS computer that supports only OpenGL 4.1, where it crashed. The application requests an OpenGL 4.0 core profile, a 3.2 core profile, and a 3.2 forward-compatible core profile (in this order), but in spite of obtaining a 4.0 profile, the call to glCreateTextures succeeds without any warning.

This is a general problem with OpenGL: there is no difference between loading a core function and an equally named extension function. In my case the application runs on the JVM (written in Scala or Java) and uses the LWJGL + GLFW bindings, but even knowing how to do this for native C/C++ applications would be helpful; if there is a way there, it should be possible to carry it over to the JVM world.

How can I test my application to make sure it works with some particular version of OpenGL (3.2, 4.0) without actually running it on hardware that does not support anything newer? There might be other issues like this, in both API and shader use, lurking around and preventing compatibility with lower OpenGL versions, and I would like to know about them. I would like my application to run on anything supporting 3.2, but I do not have regular access to hardware that does not support 4.5.
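One practical angle, sketched under some assumptions (LWJGL 3 with its GLFW bindings; ContextVersionTest is a made-up name): request exactly the minimum context you claim to support and check the flags on the GLCapabilities object that LWJGL derives from the actual context, instead of trusting that a function pointer resolved. This only catches the problem on strict drivers, since many desktop drivers hand back the highest core version they implement even when you ask for 3.2.

```java
import org.lwjgl.opengl.GL;
import org.lwjgl.opengl.GLCapabilities;

import static org.lwjgl.glfw.GLFW.*;
import static org.lwjgl.system.MemoryUtil.NULL;

public final class ContextVersionTest {
    public static void main(String[] args) {
        if (!glfwInit()) throw new IllegalStateException("GLFW init failed");

        // Request exactly the minimum we claim to support:
        // a 3.2 forward-compatible core profile.
        glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
        glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
        glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GLFW_TRUE);
        glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE); // invisible test window

        long window = glfwCreateWindow(16, 16, "ctx-test", NULL, NULL);
        if (window == NULL) throw new IllegalStateException("No 3.2 core context");
        glfwMakeContextCurrent(window);

        GLCapabilities caps = GL.createCapabilities();
        // The boolean flags reflect what the actual context advertises.
        // The raw function pointer may well be non-NULL even on a context
        // that does not advertise 4.5, which is exactly why the crash
        // went unnoticed until macOS.
        System.out.println("OpenGL 4.5 advertised: " + caps.OpenGL45);
        System.out.println("ARB_direct_state_access: " + caps.GL_ARB_direct_state_access);
        System.out.println("glCreateTextures address: " + caps.glCreateTextures);

        glfwDestroyWindow(window);
        glfwTerminate();
    }
}
```

For testing without the hardware, a software implementation can stand in for an old GPU: Mesa's llvmpipe honors MESA_GL_VERSION_OVERRIDE (for example MESA_GL_VERSION_OVERRIDE=3.2FC), so the context reports exactly 3.2 core and 4.5-only calls fail fast instead of silently working.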
May I ask why you are so excited about ARB_clip_control? Here is my plan that involves this extension.

Using glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE) we set the clipping volume to be 0 <= Zc <= Wc. It is important for the approach described below that the range is not -Wc <= Zc <= Wc. (I am confused… somehow I was sure it is, but now… hm… maybe I just confused myself.) As the result, all clipped z values lie in the [0, Wc] range, and the fragments behind the projection plane are all culled.

Configure the projection matrix like this, where zNear is the distance to the near clipping plane (a tiny constant, like 0.00001):
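The matrix itself did not survive in this copy of the post, so what follows is a reconstruction rather than the original figure: a minimal sketch assuming the standard infinite-far-plane reversed-Z projection, which is the form that produces exactly the depth behavior described next. Here f = 1/tan(fovY/2) is the focal length, and reversedZInfiniteProjection is a hypothetical helper name.

```java
// Column-major 4x4, ready for glUniformMatrix4fv(location, false, m).
static float[] reversedZInfiniteProjection(float fovY, float aspect, float zNear) {
    float f = (float) (1.0 / Math.tan(fovY * 0.5)); // focal length

    // Rows of the matrix:
    //   [ f/aspect  0   0    0     ]
    //   [ 0         f   0    0     ]
    //   [ 0         0   0    zNear ]   -> z_clip = zNear (a constant)
    //   [ 0         0  -1    0     ]   -> w_clip = -zEye
    //
    // Depth after the perspective divide: zNear / -zEye,
    // i.e. 1.0 exactly on the near plane, approaching +0 at infinity.
    return new float[] {
        f / aspect, 0f,  0f,    0f,  // column 0
        0f,         f,   0f,    0f,  // column 1
        0f,         0f,  0f,   -1f,  // column 2
        0f,         0f,  zNear, 0f,  // column 3
    };
}
```

Note that no zFar appears anywhere: the far plane sits at infinity, and this mapping handles that for free.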
Therefore, the depth value (assuming the default direct mapping of depth values) is calculated as depth = zNear / -zEye. As a result, all pixels behind the clipping plane have their depth values lying outside the [0, 1] region, so those are culled; for the fragments sitting on the clipping plane the depth is 1.0, and for all further fragments the depth approaches positive zero.

As the depth is reversed (the further the fragment, the smaller the depth), we must set glDepthFunc(GL_GREATER) to make the depth test work properly. And the final part, which brings sense to all of that: we must use a floating-point z-buffer. Most of the depth values will be concentrated in the near-zero region, which is exactly why we need the help of the exponential part of the floating-point format to prevent the loss of precision for near-zero values.

But without ARB_clip_control, the mapping of z to depth values converts the [-1, 1] range to the [0, 1] range, so the depth would range from 0.5 to 1.0 in our case. This indirect mapping results in a loss of floating-point precision, because 0.5 is added to the near-zero depth values of incoming fragments, which makes the whole trick vanish. Alternatively, the trick could be attempted with glDepthRange(-1, 1), but unfortunately the spec says that the near and far values are clamped to the [0, 1] range, so no hope there. To make it work properly we need to enforce the direct mapping, which becomes available with glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE).
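Put together as GL state, the plan above comes down to a few calls. A sketch assuming LWJGL again (enableReversedZ is a made-up helper; the 32F renderbuffer still has to be attached to an FBO together with a color attachment, since the default framebuffer's depth buffer is fixed-point):

```java
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL20.*;
import static org.lwjgl.opengl.GL30.*;
import static org.lwjgl.opengl.GL45.*;

final class ReversedZ {
    // One-time state for the reversed-Z setup described above.
    static void enableReversedZ(int width, int height) {
        // Direct [0, 1] depth mapping; needs GL 4.5 or ARB_clip_control.
        glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE);

        // Reversed depth: larger value means closer,
        // so clear to 0.0 and test with GREATER.
        glEnable(GL_DEPTH_TEST);
        glDepthFunc(GL_GREATER);
        glClearDepth(0.0);

        // Floating-point depth attachment, so the exponent keeps
        // precision for the near-zero depths of distant fragments.
        int depthRbo = glGenRenderbuffers();
        glBindRenderbuffer(GL_RENDERBUFFER, depthRbo);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT32F, width, height);
        // ...attach depthRbo to your FBO via glFramebufferRenderbuffer.
    }
}
```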
I see this extension as part of the solution of the depth-fighting problem. It could slightly improve precision, but I can still live quite well without it.

OpenGL 4.5 driver support

NVIDIA releases beta drivers with the latest OpenGL at the same time a new specification is announced, which is really great for enthusiasts who want to try new features. But it usually takes some time for thorough testing and implementation polishing, which delays inclusion in the release version of the drivers; for other vendors, even more time can pass until the support is brought to users.

Also, it is not useless to write an application based on new features, at least on your own computer. If it runs only on your computer, it is just fine. But don't force others to use beta drivers, since they can be unstable or miss some functionality. If it runs on other people's computers and you can control how they manage drivers, whether they have NV cards (presumably post-Fermi), and so on, you should still wait at least a year for it to function properly. If you write an application for a wider audience, well… many years could pass until many of them can try your application.