Wednesday, 10 June 2015

Compiling HLSL effect files for OpenGL: Part 2 - translating HLSL syntax to GLSL

I'll talk here about some of the approaches used to enable HLSL syntax parsing for GLSL. The best resource I know for Flex/Bison parser generators is "Flex and Bison" by John Levine.

After the initial implementation of the effect parser for GLSL, I had a set of about 20 effect files (I use the .sfx extension) which use another 40-ish include files (.sl files) for shared definitions. At the top of each .sfx we had:

#include ""

- and "" was different for each API, containing #defines that allowed the same code to be compiled for HLSL and GLSL. For read-writeable texture load and store (as opposed to sampling) we had for GL:

 #define IMAGE_LOAD(tex,pos) imageLoad(tex,int2(pos))
 #define IMAGE_LOAD_3D(tex,pos) imageLoad(tex,int3(pos))
 #define IMAGE_STORE(tex,pos,value) imageStore(tex,int2(pos),value)
 #define IMAGE_STORE_3D(tex,pos,value) imageStore(tex,int3(pos),value)

whereas for HLSL, we had:

 #define IMAGE_LOAD(tex,pos) tex[pos]
 #define IMAGE_LOAD_3D(tex,pos) tex[pos]
 #define IMAGE_STORE(tex,pos,value) tex[pos]=value;
 #define IMAGE_STORE_3D(tex,pos,value) tex[pos]=value;
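
With these in place, a line of shared shader code (the names below are illustrative, not from a real effect file) expands differently per API:

```
IMAGE_STORE(targetTexture,pos,result)
// GLSL expansion: imageStore(targetTexture,int2(pos),result)
// HLSL expansion: targetTexture[pos]=result;
```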

I didn't want to force users to include a special file to get standard HLSL shaders to work, and I didn't want to force them to use these ugly macros where the HLSL syntax is usually more elegant. So - delving into the Bison .ypp file where I'd added the C grammar, I put something like this:

 postfix_exp '[' expression ']' {
  // Is it actually a texture we're indexing?
  GLTextureType glTextureType=GetTextureType(buildFunction,$$.varName);
  ostringstream str;
  bool rw=IsRWTexture(glTextureType);
  ...
 }
This looks at any expression of the form A[B], checks whether A is a texture or an image (as GLSL calls read-write textures), and replaces the expression with an imageLoad or a texelFetch. But postfix_exp '[' expression ']' matches A[B] even when it's on the left-hand-side of an assignment. So,

 A[B]=C;

would then become:

 imageLoad(A,ivec2(B))=C;

- which is wrong. But that's fine as long as further up the line we match:

unary_exp '=' assignment_exp
- where unary_exp matches the A[B] expression, and assignment_exp matches the RHS. In this case the imageLoad is replaced with an imageStore, and the whole thing becomes:

 imageStore(A,ivec2(B),C);

Which is correct GLSL. Note: we wrap pos in ivec2() to convert it - just in case it's an unsigned int vector (a uvec2). Many GLSL compilers will complain about implicit conversion, whereas HLSL just steams through - so adding the explicit conversion gives us HLSL-style behaviour. If it's already an ivec, the conversion should optimize out at the GLSL-compile stage.
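
As a sketch of the string-building involved (the function names here are mine, not the parser's - the real work happens inside the Bison actions above), the load and store rewrites might look like:

```cpp
#include <sstream>
#include <string>

// Hypothetical sketch: a read-write texture (a GLSL "image") gets
// imageLoad; a plain texture gets texelFetch, which also takes an
// explicit mip level. The ivec cast gives HLSL-style implicit-conversion
// behaviour, as described above.
std::string rewriteLoad(const std::string &tex,const std::string &pos,
                        bool rw,int dims)
{
    std::ostringstream str;
    str<<(rw?"imageLoad(":"texelFetch(")<<tex<<",ivec"<<dims<<"("<<pos<<")";
    if(!rw)
        str<<",0"; // explicit mip level for texelFetch
    str<<")";
    return str.str();
}

// A store always becomes imageStore: only writeable images can be assigned to.
std::string rewriteStore(const std::string &tex,const std::string &pos,
                         const std::string &value,int dims)
{
    std::ostringstream str;
    str<<"imageStore("<<tex<<",ivec"<<dims<<"("<<pos<<"),"<<value<<")";
    return str.str();
}
```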

Texture Dimensions

For obtaining texture sizes in GL, we had:

#define GET_DIMENSIONS(tex,X,Y) {ivec2 iv=textureSize(tex,0); X=iv.x;Y=iv.y;}
#define GET_DIMENSIONS_3D(tex,X,Y,Z) {ivec3 iv=textureSize(tex,0); X=iv.x;Y=iv.y;Z=iv.z;}

While for HLSL, we had definitions like this:

#define GET_DIMENSIONS(tex,x,y) tex.GetDimensions(x,y)
#define GET_DIMENSIONS_3D(tex,x,y,z) tex.GetDimensions(x,y,z)

You can see that we've been forced to use two separate macros for 2D and 3D, so as to know whether we need an ivec2 or an ivec3 in GLSL. In this case the bare GLSL textureSize() function is more succinct than HLSL's oddball approach of multiple individual output parameters. But the goal here (for now) is to support the HLSL syntax, and that forces the complex macros you see above.

We do this by matching GetDimensions as a token in the lexer:

<in_shader>"GetDimensions" {          stdReturn(GET_DIMS);

and in the parser:

 get_dims_exp: postfix_exp '.' GET_DIMS {
  string texture=$1.text;
  string command=$3.text;
  ...
 }

So when we match:

 get_dims_exp '(' assignment_exp ',' assignment_exp ')'

...we have sufficient information to construct an expression of the form:

{ivec2 iv=textureSize(tex,0); X=iv.x;Y=iv.y;}

where in this case X and Y are the two assignment_exps.
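
A sketch of the construction (the function name is illustrative, not the parser's own; the number of output arguments tells us whether we need an ivec2 or an ivec3):

```cpp
#include <sstream>
#include <string>
#include <vector>

// Hypothetical sketch: build the GLSL block that replaces
// tex.GetDimensions(X,Y) or tex.GetDimensions(X,Y,Z).
std::string rewriteGetDimensions(const std::string &tex,
                                 const std::vector<std::string> &outs)
{
    std::ostringstream str;
    size_t n=outs.size(); // 2 for a 2D texture, 3 for 3D
    static const char *comp[]={"x","y","z"};
    str<<"{ivec"<<n<<" iv=textureSize("<<tex<<",0); ";
    for(size_t i=0;i<n;i++)
        str<<outs[i]<<"=iv."<<comp[i]<<";";
    str<<"}";
    return str.str();
}
```

The output is the same block the GET_DIMENSIONS macro expanded to, but now generated directly from the HLSL syntax.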

The first part of this series is here; the code is on GitHub.