We are currently extending our simulation and visualization software to support efficient streaming over the network and have been looking into GRID. The main application we consider is a server running with a Quadro M6000 that streams a video of the visualization to (mobile) clients.
For this, we are using the GRID SDK 2.3, which appears to be the latest version publicly available, although there ought to be a version 5.0 somewhere …
We have a 64-bit OpenGL 4.5 application and use NvIFR to capture and encode the contents of our main render target, which works quite well, BUT: only after the render target's texture type has been switched from GL_TEXTURE_2D to GL_TEXTURE_RECTANGLE. According to NvIFROpenGL.h in the GRID SDK, the call NvIFROGLTransferFramebufferToH264Enc() requires the texture type to be GL_TEXTURE_RECTANGLE. If this is not the case, an OpenGL invalid-operation error is generated due to a texture target mismatch inside this call.
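For reference, the render-target setup that the call accepts looks roughly like this; a minimal plain-OpenGL sketch with illustrative names and our 720p dimensions, error checks omitted:

GLuint colorTex = 0;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_RECTANGLE, colorTex);
// Rectangle textures have no mipmaps, so the min filter must not be a mipmapped mode.
glTexParameteri(GL_TEXTURE_RECTANGLE, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_RECTANGLE, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_RECTANGLE, 0, GL_RGBA8, 1280, 720, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

GLuint rectFbo = 0;
glGenFramebuffers(1, &rectFbo);
glBindFramebuffer(GL_FRAMEBUFFER, rectFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_RECTANGLE, colorTex, 0);
// ... attach depth/stencil as needed, verify GL_FRAMEBUFFER_COMPLETE, render into rectFbo,
// then hand this FBO to the NvIFR transfer call.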
1) Why is that?
2) Is there any way around it?
I can see that GL_TEXTURE_RECTANGLE provides some backward compatibility, given that GL_TEXTURE_2D did not have to support non-power-of-two dimensions, but in practice we overcame this limitation about a decade ago. Switching our main render target's type to GL_TEXTURE_RECTANGLE would require a lot of special handling for sampling in our shaders, because rectangle textures use unnormalized texture coordinates, and we would have to maintain separate filtering shaders etc. for sampler2D and sampler2DRect. Is it really so crucial that only the rather obsolete rectangle textures are supported, while the common 2D textures are not?
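To make the shader-side duplication concrete, here is the kind of split we would have to maintain for every filtering pass (illustrative snippets only, shown as C++ string literals since our shaders are compiled from source strings):

// Lookup with a regular 2D texture: normalized coordinates in [0,1].
static const char* kFetch2D = R"GLSL(
    uniform sampler2D srcTex;
    vec4 fetchColor(vec2 uv) { return texture(srcTex, uv); }
)GLSL";

// Same lookup with a rectangle texture: unnormalized texel coordinates in [0,w]x[0,h],
// so every access has to be rescaled (or all coordinate math duplicated).
static const char* kFetchRect = R"GLSL(
    uniform sampler2DRect srcTex;
    vec4 fetchColor(vec2 uv) { return texture(srcTex, uv * vec2(textureSize(srcTex))); }
)GLSL";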
Right now, we blit the contents of our framebuffer into an additional rectangle texture that we then encode and read back. The additional blit costs us 0.25 to 0.5 ms per frame at 720p, which is not dramatic, but still inconvenient. Could the implementation of NvIFROGLTransferFramebufferToH264Enc() be extended to also work with GL_TEXTURE_2D textures?
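For completeness, the interim workaround boils down to one extra copy per frame, roughly like this (a sketch; renderFbo is our GL_TEXTURE_2D-backed render target and rectFbo the extra rectangle-texture FBO, both at 1280x720 and names are ours):

// Blit the 2D-texture render target into the rectangle-texture FBO that NvIFR accepts,
// then pass rectFbo to NvIFROGLTransferFramebufferToH264Enc().
glBindFramebuffer(GL_READ_FRAMEBUFFER, renderFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, rectFbo);
glBlitFramebuffer(0, 0, 1280, 720,   // source rectangle
                  0, 0, 1280, 720,   // destination rectangle
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
// This is the extra blit that costs us 0.25 to 0.5 ms per 720p frame.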