Next-Generation Graphics APIs
Recently I have been reading two interesting papers on "next generation" graphics APIs. Both describe research APIs directly inspired by OpenGL, and both introduce ideas relevant to anyone writing a graphics API.
The first paper describes SMASH, used as a test/research API at the University of Waterloo (Canada). The second covers OpenRT, an API for real-time ray tracing developed at Saarland University (Germany).
SMASH
SMASH is a "test bed for next generation shaders", inspired by OpenGL (the paper dates from around 2001). It's hard to sum up its features, but like OpenGL (and OpenGL 2.0) it's a low-level API that tries to make the best use of graphics hardware. It's full of excellent ideas; in particular, parameter binding is quite interesting. A few key points:
- improvements on the standard OpenGL geometry assembler (introducing triangle strip steering modes)
- very flexible parameter management (illustrated in the sketch after this list):
  - parameters can be bound to primitives and to primitives' vertices
  - parameters are typed: Params (generic parameters), TexCoords, Colors, Covectors, Normals, Planes, Vectors, Tangents, Points
  - parameters are transformed according to their type (e.g. some parameters need to be transformed by the modelview matrix)
  - parameters are variable-length (support for parameters of variable dimensions is achieved with a double stack: a numbers stack plus a number-index stack)
  - the user can control "when" parameters are transformed
- shader management:
  - shaders can be built on the fly (beginshader/endshader)
  - they can also be precompiled and loaded directly
  - shaders can be "prefetched"
  - etc.
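To make the parameter binding and on-the-fly shader construction a bit more concrete, here is a rough sketch in an OpenGL-flavoured C++ style. Fair warning: the sm-prefixed calls and their signatures below are my own approximation of what the paper describes, not the actual SMASH entry points.

    // Hypothetical SMASH-style code: all sm* names and signatures are assumed
    // for illustration; they mimic the ideas in the paper, not its exact API.
    void drawSmashStyle()
    {
        const int MY_SHADER = 1;

        // Build a shader on the fly (the beginshader/endshader idea); a
        // precompiled shader could be loaded and bound here instead.
        smBeginShader(MY_SHADER);
            // ... shading computation composed from the API's operations ...
        smEndShader();
        smBindShader(MY_SHADER);

        smBegin(SM_TRIANGLES);
            // Parameter bound to the primitive: one value for the whole triangle.
            smColor(1.0, 0.5, 0.0);

            // Parameters bound to each vertex. Because every parameter is typed,
            // the API knows which transform to apply: Points go through the
            // modelview matrix, Normals through its inverse transpose, and
            // TexCoords are left alone.
            smNormal(0.0, 0.0, 1.0);  smTexCoord(0.0, 0.0);  smVertex(-1.0, -1.0, 0.0);
            smNormal(0.0, 0.0, 1.0);  smTexCoord(1.0, 0.0);  smVertex( 1.0, -1.0, 0.0);
            smNormal(0.0, 0.0, 1.0);  smTexCoord(0.0, 1.0);  smVertex( 0.0,  1.0, 0.0);
        smEnd();
    }

The nice part is that the application never has to juggle transforms for its own per-vertex data: the parameter's type (Point, Normal, TexCoord, ...) tells the pipeline what to do with it, and the API even lets you control when that transformation happens.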
OpenRT
OpenRT, like SMASH, is strongly inspired by OpenGL, but in a different context: real-time ray tracing. It's not as low-level as OpenGL, of course, because of the natural constraints imposed by real-time ray tracing (in particular, there is no "immediate mode"). Some features:
- allows complex scenes (since it is instance-based)
- objects are declared and built using an OpenGL-like syntax, and acceleration structures are generated automatically (see the sketch after this list)
- truly object oriented
- geometry and shaders are used per instance
- shaders can be loaded "dynamically" (dynamically linked)
- shader parameters can be set "per shader", "per triangle", or "per vertex"
- shaders derive from a specific class and are invoked through callbacks
- ray tracing makes it easy to combine shaders (no real need for "multipass" algorithms)
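To give an idea of what this might look like in practice, here is a sketch of the object/instance flow, followed by a shader skeleton. As with the SMASH example, every rt-prefixed call, class and method name below is an assumption made for illustration, not the actual OpenRT API.

    // Hypothetical OpenRT-style scene construction: the rt* names are assumed.
    void buildSceneOpenRTStyle()
    {
        const int TEAPOT = 1;

        // Declare and build an object with an OpenGL-like syntax; the library
        // generates the acceleration structure for it automatically.
        rtBeginObject(TEAPOT);
            rtBegin(RT_TRIANGLES);
                rtVertex(-1.0, 0.0, 0.0);
                rtVertex( 1.0, 0.0, 0.0);
                rtVertex( 0.0, 1.0, 0.0);
            rtEnd();
        rtEndObject();

        // Instantiate the object many times: complex scenes stay cheap because
        // each instance only carries a transform and its shader bindings,
        // never a copy of the geometry.
        for (int i = 0; i < 1000; ++i) {
            rtPushMatrix();
                rtTranslate(2.0 * i, 0.0, 0.0);
                rtInstantiateObject(TEAPOT);
            rtPopMatrix();
        }
    }

A shader would then be a class derived from a base class provided by the library, with a callback invoked at every hit point (again, the names are invented for the sketch):

    // Hypothetical OpenRT-style surface shader: base class, callback and
    // parameter-access methods are invented for this illustration.
    class MyDiffuseShader : public SurfaceShader
    {
    public:
        // Called back by the ray tracer for every hit point. Because the shader
        // can trace secondary rays itself (shadows, reflections, and so on),
        // effects that would need "multipass" tricks on a rasterizer combine
        // naturally in a single shader.
        virtual Color shade(const HitInfo& hit)
        {
            Color base = getParam("baseColor");          // set "per shader"
            Color tint = hit.interpolate("vertexTint");  // set "per vertex"
            return base * tint * dot(hit.normal(), hit.lightDirection());
        }
    };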
There's not much of a conclusion to be drawn from these two APIs; it is still a burgeoning field, and of course we have OpenGL 2.0, with DirectX 10 just around the corner, to further enlighten us.
Anyway, go read these papers to get a sense of where next-generation APIs are heading.