Tuesday, December 12, 2006

Rendering to multiple textures

To render to multiple textures, you still use the glDrawBuffersARB function. However, you pass in the color attachment enums from GL_EXT_framebuffer_object instead of the standard draw buffer names.


GLenum buffers[] = { GL_COLOR_ATTACHMENT0_EXT, GL_COLOR_ATTACHMENT1_EXT };
glDrawBuffersARB(2, buffers);

If a framebuffer object is bound with textures attached to GL_COLOR_ATTACHMENT0_EXT and GL_COLOR_ATTACHMENT1_EXT, they will both be rendered to now.

Monday, December 11, 2006

Rendering to multiple buffers

Rendering to multiple buffers, known as MRT (multiple render targets) in the Direct3D world, can be accomplished with the GL_ARB_draw_buffers extension. Using this extension is incredibly simple. For example, to draw to the back and AUX0 buffers simultaneously, you would use the following code:

GLenum buffers[] = { GL_BACK, GL_AUX0 };
glDrawBuffersARB(2, buffers);

And there you have it.

Of course, this is rather dull unless we can actually write different colors to different buffers. In order to do this, shaders must be used. In GLSL, you can select which buffer is written to by writing to gl_FragData[n] in place of gl_FragColor. If you are using GL_ARB_fragment_program, you can select which buffer is written to by writing to result.color[n].
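As a sketch, a minimal GLSL fragment shader that writes a different color to each of two draw buffers might look like this (the colors are arbitrary):

```glsl
// Writes red to draw buffer 0 and green to draw buffer 1,
// e.g. GL_BACK and GL_AUX0 as selected by glDrawBuffersARB above
void main()
{
    gl_FragData[0] = vec4(1.0, 0.0, 0.0, 1.0);
    gl_FragData[1] = vec4(0.0, 1.0, 0.0, 1.0);
}
```

Note that a shader should write to either gl_FragColor or gl_FragData[n], never both.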


GL_ARB_draw_buffers specification

Sunday, December 10, 2006

Streaming texture updates

It is possible to stream updates to a texture, often skipping a costly data copy, using the GL_ARB_pixel_buffer_object extension, which was promoted to core in OpenGL 2.1.

To do this, you must bind a buffer to GL_PIXEL_UNPACK_BUFFER_ARB, map it, write your texture data into the mapped buffer, unmap it, and call glTexSubImage2D referencing into the buffer.

// Bind buffer
glBindBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, myBuffer);

// Null existing data
glBufferData(GL_PIXEL_UNPACK_BUFFER_ARB, width * height * bytesPerPixel, NULL, GL_STREAM_DRAW);

// Map buffer. Returns pointer to buffer memory
void *pboMemory = glMapBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, GL_WRITE_ONLY);

writeImage(pboMemory); // Writes data into pboMemory

// Unmap buffer, indicating we are done writing data to it
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER_ARB);

// Copy from the buffer into the texture; the last argument is now an offset into the bound buffer, not a client memory pointer
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, (char *)NULL);

// Unbind buffer
glBindBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, 0);

Normally the driver must copy the data you pass into glTexSubImage2D. However, by storing the data in a buffer, which is memory managed by the driver, we can avoid this copy. Furthermore, by using a universally hardware accelerated texture format (see this tip) we prevent any data processing, ensuring the fastest possible download to the GPU.


GL_ARB_pixel_buffer_object specification

Saturday, December 9, 2006

Setting vertex attribute locations

While you can query the location of a generic vertex attribute in a GLSL shader by calling glGetAttribLocation(GLuint program, const GLchar *name), you can also set the location of a generic vertex attribute manually using glBindAttribLocation(GLuint program, GLuint index, const GLchar *name).

NOTES: Manually set attribute locations do not take effect until glLinkProgram(GLuint program) is called.

You cannot manually assign a location to a built-in vertex attribute (e.g. gl_Vertex).

It is possible to assign the same location to multiple attributes. This process is known as aliasing, and is only allowed if just one of the aliased attributes is active in the executable program. However, the implementation is not required to check for aliasing and is free to employ optimizations that only work in the absence of aliasing.

Any vertex attribute which is not manually assigned a location will be assigned one by the linker, and this location can be queried with glGetAttribLocation(GLuint program, const GLchar *name).

glBindAttribLocation man page

Thursday, December 7, 2006

Render to Texture

OpenGL supports fast cross-platform offscreen rendering through the GL_EXT_framebuffer_object extension.

To render to a texture using a framebuffer object, you must:

1) Create a framebuffer object
glGenFramebuffersEXT(1, &myFBO);

2) Bind the framebuffer object
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, myFBO);

3) Attach a texture to the FBO
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, myTexture, 0);

4) If you need depth testing, create and attach a depth renderbuffer

// Gen renderbuffer
glGenRenderbuffersEXT(1, &myRB);

// Bind renderbuffer
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, myRB);

// Init as a depth buffer
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, width, height);

// Attach to the FBO for depth
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, myRB);

5) Render just like you normally would.

6) Unbind the FBO (and renderbuffer if necessary).
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, 0);

7) Use the texture you rendered to!


All textures and renderbuffers attached to the framebuffer object must have the same dimensions.

You must use the GL_EXT_packed_depth_stencil extension to use stencil testing with framebuffer objects.

You can only render to RGB, RGBA, and depth textures using framebuffer objects.


Framebuffer object specification

Checking for a D3D shader version

Direct3D's shader version (1.1, 1.4, 2.0, 3.0, 4.0) is a very useful way to keep track of what shaders can execute on what hardware. While OpenGL does not provide a direct way to acquire this number, it is still possible to determine a shader model based on certain OpenGL extensions.

If a card supports GL_ARB_vertex_program and/or GL_ARB_vertex_shader, it supports vertex shader 1.1.

If a card supports GL_NV_texture_shader and GL_NV_register_combiners, it supports pixel shader 1.1.

If a card supports GL_ATI_fragment_shader or GL_ATI_text_fragment_shader, it supports pixel shader 1.4.

If a card supports GL_ARB_fragment_program and/or GL_ARB_fragment_shader, it supports Shader Model 2.0.

If a card supports GL_NV_vertex_program3 or GL_ATI_shader_texture_lod, it supports Shader Model 3.0.

If a card supports GL_EXT_gpu_shader4 it is a Shader Model 4.0 card. (Geometry shaders are implemented in GL_EXT_geometry_shader4)
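The checks above can be sketched as a plain C helper; parsing the extension string needs no GL context (pass in the string returned by glGetString(GL_EXTENSIONS)). The names has_extension and shader_model, and the encoding of versions as integers (14 for 1.4, 20 for 2.0, and so on), are illustrative, not part of any OpenGL API:

```c
#include <string.h>
#include <assert.h>

/* Returns nonzero if `name` appears as a complete space-delimited token
 * in `extensions`. A plain strstr is not enough, because one extension
 * name can be a prefix of another. */
static int has_extension(const char *extensions, const char *name)
{
    size_t len = strlen(name);
    const char *p = extensions;
    while ((p = strstr(p, name)) != NULL) {
        int starts = (p == extensions || p[-1] == ' ');
        int ends   = (p[len] == '\0' || p[len] == ' ');
        if (starts && ends)
            return 1;
        p += len;
    }
    return 0;
}

/* Apply the checks from the list above, highest shader model first. */
static int shader_model(const char *extensions)
{
    if (has_extension(extensions, "GL_EXT_gpu_shader4"))
        return 40;
    if (has_extension(extensions, "GL_NV_vertex_program3") ||
        has_extension(extensions, "GL_ATI_shader_texture_lod"))
        return 30;
    if (has_extension(extensions, "GL_ARB_fragment_program") ||
        has_extension(extensions, "GL_ARB_fragment_shader"))
        return 20;
    if (has_extension(extensions, "GL_ATI_fragment_shader") ||
        has_extension(extensions, "GL_ATI_text_fragment_shader"))
        return 14;
    if (has_extension(extensions, "GL_NV_texture_shader") &&
        has_extension(extensions, "GL_NV_register_combiners"))
        return 11;
    return 0;
}
```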

NOTE: In Mac OS X 10.4.3 and later, all GPUs report support for GL_ARB_fragment_shader and GL_ARB_vertex_shader even if they do not support these extensions in hardware. To determine whether these extensions are hardware accelerated, call CGLGetParameter with kCGLCPGPUFragmentProcessing and kCGLCPGPUVertexProcessing, respectively.


Mac OpenGL mailing list

Wednesday, December 6, 2006

Setting up Antialiasing

GL_ARB_multisample defines the mechanism for antialiasing in OpenGL.

In order to use antialiasing of any kind, you must create a proper pixel format for your rendering context. This is a platform specific process. Furthermore, in order to check for multisampling support, you must have an active rendering context. Thus it is advised that you first create a dummy context that every GPU should support. Once this dummy context is initialized, you can check for antialiasing support and destroy the dummy context. If antialiasing support is available, you may then create a pixel format that uses antialiasing.

In Windows, the following attributes must be added to your pixel format attribute array in order to use antialiasing: WGL_SAMPLE_BUFFERS_ARB and WGL_SAMPLES_ARB. WGL_SAMPLE_BUFFERS_ARB should be set to 1. WGL_SAMPLES_ARB should be set to the number of samples you (or the user) request (2, 4, 6, 8, etc.)
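A sketch of such an attribute list for wglChoosePixelFormatARB (this assumes wglext.h for the WGL_*_ARB tokens; the non-multisample entries are illustrative, not exhaustive):

```c
// Fragment of a pixel format attribute list for wglChoosePixelFormatARB.
// The last two attribute pairs are the multisample entries described above;
// a real list also needs color/depth attributes, and must be zero-terminated.
int attribs[] = {
    WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
    WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
    WGL_DOUBLE_BUFFER_ARB,  GL_TRUE,
    WGL_SAMPLE_BUFFERS_ARB, 1,       // enable the multisample buffer
    WGL_SAMPLES_ARB,        4,       // request 4x antialiasing
    0                                // terminator
};
```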

In Mac OS X, you can request antialiasing through Cocoa, AGL, or CGL, depending on what your application needs.

If you are using Cocoa, the following attributes must be added to your NSOpenGLPixelFormatAttribute array: NSOpenGLPFASampleBuffers and NSOpenGLPFASamples. These directly correspond to WGL_SAMPLE_BUFFERS_ARB and WGL_SAMPLES_ARB.

If you are using AGL, the following attributes must be added to your attribute array: AGL_SAMPLE_BUFFERS_ARB and AGL_SAMPLES_ARB. These directly correspond to WGL_SAMPLE_BUFFERS_ARB and WGL_SAMPLES_ARB.

If you are using CGL, the following attributes must be added to your attribute array: kCGLPFASampleBuffers and kCGLPFASamples. These directly correspond to WGL_SAMPLE_BUFFERS_ARB and WGL_SAMPLES_ARB.

Once a context with a sample buffer and a number of samples greater than 1 has been created, antialiasing can be activated by calling glEnable(GL_MULTISAMPLE_ARB).


Multisampling specification
Multisampling in Windows
FSAA in Mac OS X

What is the fastest way to draw geometry?

Ideally, you should use a static vertex buffer object with a stride of 32 bytes (or a multiple of 32). The optimal draw call is glDrawRangeElements, and indices should be unsigned shorts.

Vertex data should be stored as GLfloat, GLshort, or GLubyte, as these formats are supported by most GPUs.
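The recommended layout can be expressed as an interleaved vertex struct. This is a sketch; the particular attribute mix (position, normal, texture coordinate) is just one common way to land exactly on a 32-byte stride:

```c
#include <stddef.h>
#include <assert.h>

/* GLfloat is typedef'd here to keep the example self-contained;
 * normally it comes from the GL headers. */
typedef float GLfloat;

/* 3 + 3 + 2 floats = 8 * 4 bytes = 32 bytes, with no compiler padding. */
typedef struct {
    GLfloat position[3]; /* 12 bytes, offset  0 */
    GLfloat normal[3];   /* 12 bytes, offset 12 */
    GLfloat texcoord[2]; /*  8 bytes, offset 24 */
} Vertex;
```

You would then pass sizeof(Vertex) as the stride argument to the gl*Pointer calls, with offsetof giving each attribute's offset into the buffer.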


ATI performance tuning (page 10)
Using VBOs (page 14)

UNCONFIRMED: Internally on all GPUs, vertices contain four values: xyzw. By default, if a w value is not provided, it is set to 1. Theoretically, if you perform this padding in your application before sending the data to OpenGL, you can improve performance.

Shaders expect all data to be float, so on programmable GPUs, specifying all vertex data as float should avoid data conversion and is expected to improve performance.
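If you want to experiment with the unconfirmed padding idea above, a minimal sketch (the helper name is hypothetical):

```c
#include <stddef.h>
#include <assert.h>

/* Expand an array of `count` tightly packed xyz positions into
 * xyzw positions with w = 1, the value OpenGL uses by default
 * when no w is supplied. */
static void pad_positions(const float *xyz, float *xyzw, size_t count)
{
    size_t i;
    for (i = 0; i < count; i++) {
        xyzw[4 * i + 0] = xyz[3 * i + 0];
        xyzw[4 * i + 1] = xyz[3 * i + 1];
        xyzw[4 * i + 2] = xyz[3 * i + 2];
        xyzw[4 * i + 3] = 1.0f;
    }
}
```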

What is the optimal 32 bit texture format?

The optimal texture format on all systems, regardless of endianness or operating system, is an internal format of GL_RGBA8, a type of GL_UNSIGNED_INT_8_8_8_8_REV, and a format of GL_BGRA. An optimal call to glTexImage2D would thus be:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, border, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, NULL);

Remember that because this is a packed integer format, you are responsible for taking endianness into account when packing the pixels.
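Concretely, one way to pack a single pixel for this format/type pair (a sketch; the function name is made up): with GL_UNSIGNED_INT_8_8_8_8_REV, the components of the GL_BGRA format are assigned from the least significant byte of the 32-bit word upward, giving B, G, R, A, i.e. the word value 0xAARRGGBB. Because the pixel is a whole 32-bit value rather than four separate bytes, the same code works on big- and little-endian hosts; only the in-memory byte order differs.

```c
#include <stdint.h>
#include <assert.h>

/* Pack one pixel for format GL_BGRA + type GL_UNSIGNED_INT_8_8_8_8_REV.
 * The resulting 32-bit word is 0xAARRGGBB. */
static uint32_t pack_bgra8888_rev(uint8_t r, uint8_t g, uint8_t b, uint8_t a)
{
    return ((uint32_t)a << 24) | ((uint32_t)r << 16) |
           ((uint32_t)g <<  8) |  (uint32_t)b;
}
```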

GL_NV_pixel_data_range (look at NVIDIA Implementation Details).
Mac OpenGL mailing list