Metal on Intel graphics: potential bug when using global constant arrays?

I'm using a Mid-2014 MacBook Pro with Intel Iris graphics. There seems to be a problem when running Metal shader programs that use global constant arrays.

I've reproduced the problem by making a small modification to the Learn-Metal-CPP tutorial. Specifically, I modified the MSL shader in "01-primitive.cpp" so that each triangle vertex's position and color come from global constant arrays defined in the shader itself. The arrays' values are identical to the values passed in as vertex buffers in the original tutorial.

I'd expect the resulting image to look like the original tutorial, which looked like this:

However, my version of the program that uses the shader global arrays produces the following result:

Here is the shader source that produces the incorrect second result above. You can replace the shader in Learn-Metal-CPP's 01-primitive.cpp with this one to reproduce my result (on my hardware, at least):

#include <metal_stdlib>
using namespace metal;

constant float2 myGlobalPositions[3] = { float2(-0.8, 0.8), float2(0.0, -0.8), float2(0.8, 0.8) };
constant float3 myGlobalColors[3] = { float3(1.0, 0.3, 0.2), float3(0.8, 1.0, 0.0), float3(0.8, 0.0, 1.0) };

struct v2f
{
    float4 position [[position]];
    float3 color;
};

v2f vertex vertexMain( uint vertexId [[vertex_id]],
                       device const float3* positions [[buffer(0)]],
                       device const float3* colors [[buffer(1)]] )
{
    v2f o;

    // This uses neither of the global const arrays. It produces the correct result.
    // o.position = float4( positions[ vertexId ], 1.0 );
    // o.color = colors[ vertexId ];

    // This does not use myGlobalPositions. It produces the correct result.
    // o.position = float4( positions[ vertexId ], 1.0 );
    // o.color = myGlobalColors[vertexId];

    // This uses myGlobalPositions and myGlobalColors. IT PRODUCES THE WRONG RESULT.
    o.position = float4( myGlobalPositions[vertexId], 0.0, 1.0);
    o.color = myGlobalColors[vertexId];

    return o;
}

float4 fragment fragmentMain( v2f in [[stage_in]] )
{
    return float4( in.color, 1.0 );
}

I believe the issue is related to the alignment of the shader's global array data. If I vary the sizes of the global arrays, I can sometimes get the correct result. For example, padding the arrays so that myGlobalColors starts at a 32-byte-aligned boundary seems to produce the correct output.
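To illustrate the kind of padding I mean, here is a sketch of the workaround. It assumes the compiler lays out program-scope constants contiguously in declaration order, which I have not verified; the exact padding needed may differ on other hardware or driver versions:

```metal
// Workaround sketch: myGlobalPositions normally occupies 3 * sizeof(float2)
// = 24 bytes. Padding it to 4 elements (32 bytes) is intended to push
// myGlobalColors to a 32-byte-aligned offset, assuming the globals are laid
// out contiguously in declaration order.
constant float2 myGlobalPositions[4] = {
    float2(-0.8,  0.8),
    float2( 0.0, -0.8),
    float2( 0.8,  0.8),
    float2( 0.0,  0.0)   // unused padding element, never indexed
};
constant float3 myGlobalColors[3] = {
    float3(1.0, 0.3, 0.2),
    float3(0.8, 1.0, 0.0),
    float3(0.8, 0.0, 1.0)
};
```

With this version, the vertex function still indexes only elements 0..2 of myGlobalPositions, so the rendered output should be unchanged on hardware where the original shader already works.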

I've also attached my full source for 01-primitive.cpp in case that helps.

Has anyone run into this issue? What would it take to get a fix for it?

  • Because you are bypassing the processing of the values by any colour correction taking place after the shader is compiled.