Thursday, 9 October 2014

Cross-API Rendering 1: Creating a Graphics API Abstraction Layer for Multiple APIs

Writing rendering code for games today often involves supporting multiple platforms - PC of course, but also consoles like PS4 and Xbox One. Mac and Linux support are also becoming desirable - take a look at Steam on Linux for example.

So it's a bit awkward writing software that goes across these platforms, because you have to use different graphics APIs. For Windows, it's DirectX 11 - soon to be DX12. Some Windows users - particularly simulation people - still need OpenGL support.

So we need a layer of commands between generic C++ code and a graphics API that could be DirectX 11, OpenGL 3+, or a console API, depending on the circumstances.

First question: why have a new API? Why not just choose a standard one, say OpenGL 3.0, then write adaptors for all the others, so that we can write our render code in GL 3.0 and run it on any target? Firstly, I prefer a higher-level interface than GL, DirectX, or any other API offers. I want the minimum number of commands to achieve any task, so none of the existing APIs fit the bill. Secondly, the abstracted API works by sending ALL the information that we could possibly need. That is, we're sending MORE information than DX11 needs, more than GL 3.0 needs. We're sending enough information that any KNOWN API will be able to interpret it.

This means that the API I'm creating will necessarily expand, slightly, as I add support for each new platform. And there will be redundancy - for example, if one API specifies texture sampling in the shader, and another does it in C++, we must specify both, and they should match.

What a graphics API needs to be able to do

  • Create and initialize device objects
    • Textures
    • Shaders
    • Buffers
    • State objects
    • Queries
  • Perform GPU rendering or calculation commands using device objects
    • State changes
    • Draw calls
    • Compute dispatches
  • Free the device objects when no longer needed
The list above covers pretty much all the low-level constructs that we need to create. You can subdivide them: textures can be 1D, 2D or 3D; they can be arrays of 2D textures; they can be rendertargets or compute targets. Buffers can be vertex buffers, index buffers, constant buffers (for shaders), or various other kinds.

Not all APIs use state objects - blend states, for example. In OpenGL, you mostly just set the individual parts of the state with single commands.

Draw calls and compute dispatches are really the same thing with different outputs - in both cases, you're going to run a shader with textures and buffers as inputs. But while a draw call writes to a render target, a compute dispatch writes random-access output to textures or buffers.

What a high-level graphics API should do

Because we don't want to reinvent the wheel, it would be nice if our API could have some higher-level constructs.
  • Create and initialize
    • Materials
    • Meshes
    • Lights
    • Framebuffers/Rendertargets
    • Effects - grouped shaders
  • Perform higher-level draw functions
    • Print text
    • Draw full-screen quads
    • Draw screen-space quads - great for debugging
    • Draw geometric shapes - lines, circles and so on
    • Push and pop state - great to make sure render code doesn't leave a mess
And where possible, we want this API to be light and clean - to use as few commands as possible to perform common tasks.

Next, I'll describe the RenderPlatform class, the core of the new API.