Are Explicit Location Bindings a Good Idea for a Shading Language?

Probably Not.


Both the HLSL and GLSL shading languages support mechanisms for assigning explicit, programmer-selected locations to certain shader parameters. In HLSL, this is done with the register keyword:

// texture/buffer/resource:
Texture2D someTexture          : register(t0); 

// sampler state:
SamplerState linearSampler     : register(s0);

// unordered access view (UAV):
RWStructuredBuffer<T> someData : register(u0);

// constant buffer:
cbuffer PerFrame               : register(b0)
{
    // offset for values in buffer:
    float4x4 view              : packoffset(c0);
    float4x4 proj              : packoffset(c4);
    // ...
};

When setting shader parameters through the Direct3D API, these explicit locations tell us where to bind data for each parameter. For example, since the cbuffer PerFrame is bound to register b0, we will associate data with it by binding an ID3D11Buffer* to constant buffer slot zero (with, say, PSSetConstantBuffers).

The OpenGL Shading Language did not initially support such mechanisms, but subsequent API revisions have added more and more uses of the layout keyword:

// texture/buffer/resource
layout(binding = 0) uniform sampler2D someTexture;

// shader storage buffer (SSB)
layout(binding = 0) buffer SomeDataBuffer
{
    T someData[];
};

// uniform buffer:
layout(binding = 0) uniform PerFrame
{
    mat4 view;
    mat4 proj;
    // ...
};

// input and output attributes:
layout(location = 2) in  vec3 normal;
layout(location = 0) out vec4 color;

// default block uniforms (not backed by buffer)
layout(location = 0) uniform mat4 model;
layout(location = 1) uniform mat4 modelInvTranspose;

It is clear that location binding was an afterthought in the design of both languages; the syntax is ugly and obtrusive. Using explicit locations can also be error-prone, since it becomes the programmer’s responsibility to avoid conflicts, etc., and ensure a match between application and shader code. The shading languages “want” you to write your code without any explicit locations.

If you talk to game graphics programmers, though, you will find that they use explicit locations almost exclusively. If you try to give them a shading language without this feature (as GLSL did), they will keep demanding that you add it until you relent (as GLSL did).

Why Do We Need These Things?

If the programmer does not assign explicit locations, then it is up to the shader compiler to do so. Unfortunately, there is no particular scheme that the compiler is required to implement, and in particular:

  • The locations assigned to parameters might not reflect their declared order.
  • A parameter might not be assigned a location at all (if it is statically unreferenced in the shader code).
  • Two different GL implementations might (indeed, will) assign locations differently.
  • A single implementation might assign locations differently for two shaders that share parameters in common.

When an application relies on the shader compiler to assign locations, it must then query the resulting assignment through a “reflection” interface before it can go on to bind shader parameters. What used to be a call like PSSetConstantBuffers(0, ...) must now be something like PSSetConstantBuffers(queriedLocations[0], ...). In the case of Direct3D, these locations can be queried once a shader is compiled to bytecode, after which the relevant meta-data can be stripped, and the overhead of reflection can be avoided at runtime; this is not an option in OpenGL.
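To make the contrast concrete, here is a minimal C++ sketch. The names ReflectionData and slotFor are hypothetical (this is not real D3D or GL API); the point is that with an explicit binding the slot is a constant shared between shader and application, while without one the application must consult queried reflection data at runtime.

```cpp
#include <cassert>
#include <map>
#include <string>

// With an explicit binding, the slot is a compile-time constant that the
// application shares with the shader source ("register(b0)" above).
constexpr int kPerFrameSlot = 0;

// Without explicit bindings, the compiler's assignment must be queried
// through a reflection interface and carried around at runtime.
struct ReflectionData {
    std::map<std::string, int> locations; // parameter name -> assigned slot
};

int slotFor(const ReflectionData& refl, const std::string& param) {
    // Throws if the parameter was optimized away and never got a slot.
    return refl.locations.at(param);
}
```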

Even statically querying the compiler-assigned locations does not help us with the issue that two different shaders with identical (or near-identical) parameter lists may end up with completely different location assignments. This makes it impossible to bind “long-lived” parameters (e.g., per-frame or camera-related uniforms) once and re-use that state across many draw calls. Every time we change shaders, we would need to re-bind everything since the locations might have changed. In the context of OpenGL, this issue means that linkage between separately-compiled vertex and fragment shaders requires an exact signature match (no attributes dropped), unless explicit locations are used.
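A small sketch of why this hurts (Program and canReuseBinding are made-up names, not any real API): a binding made for a shared parameter survives a shader change only if both programs happened to assign that parameter the same slot, and nothing guarantees they did.

```cpp
#include <cassert>
#include <map>
#include <string>

// Each compiled program carries its own compiler-assigned location table.
struct Program {
    std::map<std::string, int> locations;
};

// A binding made while `prev` was active remains valid under `next` only
// if both programs assigned the parameter the same slot.
bool canReuseBinding(const Program& prev, const Program& next,
                     const std::string& param) {
    auto a = prev.locations.find(param);
    auto b = next.locations.find(param);
    return a != prev.locations.end() && b != next.locations.end() &&
           a->second == b->second;
}
```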

As it stands today, you can have clean shader code at the cost of messy application logic (and the loss of some useful mix-and-match functionality), or you can have clean application logic at the cost of uglier shader code.

A Brief Digression

On the face of it, the whole situation is a bit silly. When I declare a function in C, I don’t have to specify explicit “locations” for the parameters lest the compiler reorder them behind my back (and eliminate those I’m not using):

int SomeFunction(
    layout(location = 0) float x,
    layout(location = 1) float A[] );

When I declare a struct, I don’t have to declare the byte offset for each field (or, again, worry about unused fields being optimized away):

struct T {
    layout(offset = 0) int32_t x;
    layout(offset = 4) float y;
};

In practice, most C compilers provide fairly strong guarantees about struct layout, and conform to a platform ABI which guarantees the calling convention for functions, even across binaries generated with different compilers. A high level of interoperability can be achieved, all without the onerous busy-work of assigning locations/offsets manually.

Guarantees, and the Lack Thereof

Why don’t shader compilers provide similar guarantees? For example, why not just assign locations to shader parameters in a well-defined manner based on lexical order: the first texture gets location #0, the next gets #1, and so on? After all, what makes the parameters of a shader any different from the parameters of a C function?

(In fact, the Direct3D system already follows just such an approach for the matching of input and output attributes across stage boundaries. The attributes declared in a shader entry point are assigned locations in the input/output signature in a well-defined fashion based on order of declaration, and unused attributes aren’t skipped.)

Historically, the rationale for not providing guarantees about layout assignment was so that shader compilers could optimize away unreferenced parameters. By assigning locations only to those textures or constants that are actually used, it might be possible to compile shaders that would otherwise fail due to resource limits. In the case of GLSL, different implementations might perform different optimizations, and thus some might do a better job of eliminating parameters than others; the final number of parameters is thus implementation-specific.

This historical rationale breaks down for two reasons. First is the simple fact that on modern graphics hardware, the limits are much harder to reach. The Direct3D 10/11 limits of 128 resources, 16 samplers, and 15 constant buffers are more than enough for most shaders (the limit of only 8 UAVs is a bit more restrictive). Second, and more important, is that if a programmer really cares about staying within certain resource bounds, they will carefully declare only the parameters they intend to use, rather than count on implementation-specific optimizations in driver compilers to get them under the limits (at which point they could just as easily use explicit locations).

One wrinkle is that common practice in HLSL is to define several shaders in the same file, and to declare uniform and resource parameters at the global scope. This practice increases the apparent benefit of optimizing away unreferenced parameters. The underlying problem, though, is that the language design forces programmers to use global variables for what are, logically, function parameters. Trying to “fix” this design decision by optimizing away unused parameters is treating the symptoms rather than the disease.

As far as I can tell, there is no particularly compelling reason why a modern shading language should not just assign locations to parameters in a straightforward and deterministic manner. We need an ABI and calling convention for interface between the application and shader code, not a black box.
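The deterministic scheme argued for here is trivial to state in code. This sketch (Param and assignLocations are illustrative names) hands out locations in declaration order, without skipping statically unreferenced parameters:

```cpp
#include <cstddef>
#include <string>
#include <vector>

struct Param {
    std::string name;
    bool referenced; // whether the shader body actually uses it
};

// Location = index in declaration order. Unreferenced parameters still
// consume a slot, so two shaders that share an initial subsequence of
// declarations are guaranteed to agree on those locations.
std::vector<int> assignLocations(const std::vector<Param>& params) {
    std::vector<int> locations;
    for (std::size_t i = 0; i < params.size(); ++i)
        locations.push_back(static_cast<int>(i));
    return locations;
}
```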

So Then What About Explicit Locations?

If our compiler assigned locations in a deterministic fashion, would there still be a need for explicit location bindings? Deterministic assignment would serve the same purpose in eliminating the need for a “reflection” API to query parameter bindings (though one could, of course, still be provided).

The remaining benefit of explicit locations is that they allow us to make the parameter signatures of different shaders “match,” as described above. In the simplest case, a deterministic assignment strategy can ensure that if two shaders share some initial subsequence of parameters in common, then they assign matching locations to those parameters. In the more complex cases (where we want a “gap” in the parameter matching), it seems like the answer is right there in C already: unions allow us to create heterogeneous layouts just fine.

All we need to do, then, is provide a shading language the same kinds of tools that C has for describing layouts (structs and unions), and we should be able to rid ourselves of all these fiddly layout and register declarations.
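Here is a rough C-style sketch of the union idea (all the type names are hypothetical): both shaders’ parameter blocks start with the shared PerFrame data at offset zero, so per-frame state bound for one remains valid for the other, while the tails of the two layouts diverge freely.

```cpp
#include <cassert>
#include <cstddef>

// Shared "long-lived" parameters, declared once.
struct PerFrame {
    float view[16];
    float proj[16];
};

// Two shaders share the PerFrame prefix and diverge after it.
struct ShaderAParams { PerFrame frame; float tintColor[4]; };
struct ShaderBParams { PerFrame frame; float fogParams[4]; };

// The union makes the overlap explicit: PerFrame sits at offset 0 in
// both layouts, so it can stay bound across a shader change while the
// per-shader tails occupy the same "gap" in different ways.
union AnyShaderParams {
    ShaderAParams a;
    ShaderBParams b;
};
```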

So What Now?

In the long run, this whole issue is probably moot, as “bindless” resource mechanisms start to supplant the very idea of binding locations as a concept in shading languages and graphics APIs.

In the short term, though, I hope that we can get support for deterministic and predictable layout assignment in near-future versions of HLSL and GLSL. This would allow us to write cleaner and simpler shader declarations, without having to compromise the simplicity of our C/C++ application logic.


Learning to Write

The recent slew of changes that Apple has made to the secret iDevice developer agreement has finally pushed me to write up these thoughts, which have been nagging at me for a while now.

In the new world of computing, “devices” are king. And nobody sells a device these days without also having the forethought to build a walled garden around it. Apple is now the poster child for the “app store” model, but the video game industry had already proven the value of controlling both the hardware platform and software distribution.

It would be disingenuous of me to decry the practice outright. I’ve owned a variety of these locked-down devices, and have purchased software through the “approved” online stores. Yes, I have been frustrated by the consequences of restrictive copy-protection – re-buying games that I had already purchased when my Xbox 360 came up with a Red Ring of Death. Yes, I have often pondered “jailbreaking” these devices, and occasionally tried it out on those that I was willing to risk “bricking.”

In all of this, I have never stood up and “voted with my wallet” – passing up on a device because I didn’t approve of its software development and distribution model. Simply put, the value I got out of these devices – the latest Nintendo game, or the ease-of-use of the iPhone – surpassed the cost – the inability to load my own software and fully customize the device. Or at least, that was the only cost I could perceive at the time.

Now that I have a daughter, my perception is a bit different.

My generation has already formed a powerful attachment to our mobile devices – I often joke that given the choice between air and my iPhone I would have to think carefully. My daughter is going to grow up thinking that it is normal to be able to stream live video from across the world while riding in the back seat of a car. We could debate whether this pervasive access to technology will be harmful for the next generation, but this misses the point. The presence or absence of technology is not what is important. What is important is how future generations will relate to the technology of their time.

Remember that written language was once a “technology.” Those of you reading this have grown up in a world immersed in that technology; most of us are within sight of written words every moment of every day. The spread of literacy over millennia has changed human society, human history. We know that mastery of this technology – the ability to both consume and produce – is necessary for success in our society.

Those in my generation who consider themselves “computer literate” can often trace their learning process back to a handful of software systems: “turtle graphics” in Logo, BASIC on the Apple II or DOS, HyperCard on the early Mac. All of these systems allowed anyone with a computer to experiment with programming, and allowed even young children to use the technology to create and not just consume. The Scratch project provides a similar exploratory programming environment for today’s children. My wife teaches a technology course that, among other things, has 7th and 8th grade students build their own computer games using Scratch.

In case you hadn’t noticed, all of those software systems have another feature in common – none of them are allowed in Apple’s App Store. Children who want to use an iPad to create animations or games in Scratch can no longer do so. If they are especially motivated, and have helpful parents, I suppose they could pay to join Apple’s developer program, go out and buy dense technical books to teach them C, Objective C and Apple’s proprietary APIs, and spend months or years creating a project that would have taken mere minutes in Scratch.

That hardly sounds fair, though. It’s like telling kids they can have their Dr. Seuss book after they finish a thesis on Finnegans Wake. There is a reason these simple, intuitive programming environments exist, and the companies selling these devices shouldn’t just ignore the programmers of the future – the authors of tomorrow’s technology – in the name of platform lock-in.

I guess I’ve already nailed the point into the ground, but just to throw in a few closing comments: There are two technologies that shaped me most in my formative years – books, and desktop computers. What really frightens me is that both of these are brilliant ideas that would never be invented in today’s world. Books, which can be read, shared, bought and sold by anyone in any country – that don’t need to be bought separately for at-home and on-the-go use – would never even be considered by today’s media companies. The earliest PCs/Macs, which allowed anybody to buy, sell, install and develop any software they wanted, without diverting revenue back to the hardware manufacturer, would be seen as a squandered opportunity.

These technologies – books and computers – changed the world for the better, and now we are making haste to bury them in favor of their more profitable successors.


I have been neglectful of my nascent blog since the beginning, but now I have an excuse. Just over a month ago, I became a father.

Spending a month at home with my wife, and falling in love with baby Morganne, has been an amazing experience. Even though many people had told me what to expect, I was completely unprepared for what an exhilarating and terrifying endeavor it is to be responsible for a new life (in every sense of the word).

I’m back at work now, which I’m sure will present its own challenges for the new parents. Whether this bodes well or ill for the prospect of further posts, I do not know.