Material Data rework #459

Merged 36 commits into master on Aug 16, 2020

Conversation

@mosra (Owner) commented Jul 16, 2020

What we want

  • Store basically any arbitrary material attribute in a (broad enough, but limited) set of types
    • Phong, Flat, PBR metallic/roughness, PBR specular/glossiness material attributes
    • Color, normal, displacement, height maps
    • Single- and two-component maps packed arbitrarily together (like MSFT_packing_normalRoughnessMetallic or MSFT_packing_occlusionRoughnessMetallic in glTF)
    • Alpha masking/blending, two-sided, wireframe, ... properties
    • Engine/editor-specific things (object selection color, default visibility, "use this particular shader", "fix/generate normals on this object"...)
  • Fast enough to search -- O(log n); however, there are usually not hundreds of properties, so a hash map's overhead wouldn't be worth it
  • Have a representation without internal pointers or other non-trivially-copyable state, to make it easily serializable like the new MeshData in Scene data serialization #427

Proposal

Trade::MaterialData is basically just a sorted Array of Trade::MaterialAttributeData, where each Trade::MaterialAttributeData stores a type, a name string and the value data in a fixed-size container:

|------------- x B ------------|

+------+------- .. -----+------+
| type | name   ..   \0 | data |
| 1 B  |  (x - n - 1) B | n B  |
+------+------- .. -----+------+

Where:

  • x is the total size of the Trade::MaterialAttributeData struct
  • type is an 8-bit Trade::MaterialAttributeType type index
  • data is of a size matching type, stored at the offset of (x - n) B
  • name is a null-terminated string filling up the rest

Apart from everything being stored in-place without any external pointers (and thus being trivially copyable and (de)serializable), this has the advantage of all strings starting at regular, predictable offsets, which simplifies the binary search implementation. The name filling the rest of the structure means smaller types can have longer names -- I'd say it's more likely for a 4-byte integer to have many more different uses than a 3x3 float matrix, for example.
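
A minimal sketch of what such a fixed-size attribute could look like -- purely illustrative, assuming the 64-byte "middle ground" variant discussed below; the field and type names here are not the actual implementation:

#include <cstdint>

/* Illustrative only. The 8-bit type index comes first, the null-terminated
   name always starts at offset 1 (so a binary search can compare names at a
   fixed offset), and the value is packed at the very end, sized according to
   the type. The whole thing is a fixed 64 bytes, trivially copyable and thus
   directly (de)serializable. */
enum class AttributeTypeSketch: std::uint8_t { Float, Vector4, Matrix3x4 /* ... */ };

struct MaterialAttributeDataSketch {
    AttributeTypeSketch type;   /* 1 byte */
    char nameAndValue[63];      /* name, \0, padding ..., value at the tail */
};
static_assert(sizeof(MaterialAttributeDataSketch) == 64,
    "fixed size, no heap allocations or internal pointers");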

Because the layout is relatively simple, a material can be created in a rather obvious fashion, with the library only having to (at most) sort the data afterwards for efficient binary search:

Trade::MaterialData material{{
    {Trade::MaterialAttribute::DiffuseColor, 0xfa3689_rgbf},
    {Trade::MaterialAttribute::DiffuseTexture, 3u},
    {Trade::MaterialAttribute::AlphaMask, 0.5f},
    {"highlightColor", 0x00ffff66_rgbaf},
    {"doubleSided", true}
}};

Moreover, with enough effort, the above is still simple enough to be constexpr-constructible, meaning a material definition can be embedded into executable constant data (like currently done with mesh data in Primitives::cubeSolid(), for example). More efficient (but more complicated) solutions that I had in mind were neither easy to construct nor constexpr.
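
A sketch of how embedding a material into constant data could look -- the constexpr-ness of the attribute constructors and the exact MaterialData constructor taking data flags plus a non-owning attribute view are assumptions here, not final API:

using namespace Magnum;
using namespace Magnum::Math::Literals;

/* Hypothetical: attribute data living in constant (read-only) memory. The
   attributes are listed in the sorted order the binary search expects, since
   the library can't reorder a constexpr array. */
constexpr Trade::MaterialAttributeData ConstantMaterialAttributes[]{
    {Trade::MaterialAttribute::AlphaMask, 0.5f},
    {Trade::MaterialAttribute::DiffuseColor, 0xfa3689_rgbf},
    {Trade::MaterialAttribute::DiffuseTexture, 3u}
};

/* A non-owning MaterialData instance referencing the constant data -- the
   (types, data flags, view) constructor signature is assumed */
Trade::MaterialData constantMaterial{Trade::MaterialType::Phong,
    Trade::DataFlag::Global, ConstantMaterialAttributes};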

Choosing the size tradeoff

Basically like with UTF-8/16/32:

  • (Like UTF-32.) Make it possible to store 4x4 matrices with large enough (14-char) names, which would make the size 80 bytes at least (or even going all the way to doubles, 144 bytes). Would mean the most flexibility and the easiest implementation, but the storage overhead gets quite painful when storing many materials, leading to users designing custom app-specific representations instead (which is something I'd like to avoid -- same goal as when redesigning the MeshData).
  • (Like UTF-16, current state.) Choose a middle ground and cap at 64 bytes, fitting at most 3x4 float matrices (because the largest transformation matrix I can think of is color calibration, which has the bottom row always 0 0 0 1 anyway). Larger types (or doubles) would then need to be split into multiple attributes (DoubleMatrixCol1, DoubleMatrixCol2, ...) and concatenated by the user, but that should be a rare case.
  • (Like UTF-8.) Cap at 4-component float vectors and 14-char names, with 32 bytes in total. This would mean everything larger (color calibration and texture transformation matrices) would need to get split, but I could imagine that working well (TextureOffset, TextureScale, ...). The problem is that 14-char names might not be long enough for all uses (OcclusionRoughnessMetallicTextureScale, e.g.) and might need to get shortened, causing custom attributes to have much less clear naming.

Which one is best?

Material layers

  • Based on the comment below
  • Expose named layers for the ratified glTF extensions
    • MaterialAttributeData constructor taking just a MaterialLayer as a shorthand for MaterialAttribute::LayerName + layer name
    • Overloads of all getters taking a named layer instead of a layer string

Further ideas, TODOs

  • Right now only the basic Phong properties are listed; go through all glTF extensions and gather the PBR and other material properties
    • Look into Assimp PBR properties as well -- it's bad, unsurprisingly
    • USD? -- too general
  • Base material reference -- if a model has 100 materials, I'd assume these would differ only in very minor properties like texture IDs or base colors, and there one might want to have a base material setting the common attributes + 99 smaller diffs to it. This is also a way to mitigate the above-mentioned size issues. -- postponed
    • Material merging / patching -- basically an inverse operation to the above -- postponed
  • PhongMaterialData / PbrMetallicRoughnessMaterialData subclasses of the above that provide convenience accessors (so one could call material.diffuseColor() and get a Color4 instead of having to call material.attribute<Color4>(Trade::MaterialAttribute::DiffuseColor)). The subclasses would have no additional data members, only methods, which would make them safely static_cast-able from the base MaterialData.
    • Provide some convenience const PhongMaterialData& as<T>() that does the cast, checking that type size is equal (and possibly that types() match?)
    • Also T&& as<T>() && that can operate directly on rvalues (which means subclass moves can't be disabled)
  • In addition to the key/value store, having a "rough type" enum that would hint what kind of attributes the material contains (phong, flat, pbr whatever) and thus suggesting the user to cast to one of those subclasses. Or this could be an enum set in case a material has for example PBR attributes but also a Phong fallback (or if it has both PBR metallic roughness and PBR specular glossiness properties in case of glTF). An enum set would also nicely solve backwards compatibility -- existing code could still use the same material as a Phong one, while new code can use the PBR properties from it.
  • Add a "texture swizzle" attribute (fourCC?) to support the texture packing glTF extensions -- for example an occlusion and normal texture attribute would reference the same texture ID, but one would use the R channel and the other the GB channels. Not all supported GL variants support texture swizzle, so (a restricted subset of) this would need to get implemented directly in shaders. GL texture swizzle is texture state, not sampler state, so it can't be used
    • For simplicity and the "90% use case" provide the common packing (metallic/roughness, specular/glossiness) as dedicated attributes, and have shaders consume that by default
    • For the "generic use case" provide spearate metallic, roughness, specular, glossiness attributes (+ swizzle, UV set and UV transform for each), the shaders would have that opt-in but with perf implications (multiple fetches, extra binding slots etc.)
    • Add hasMetallicRoughnessTexture(), ... queries for common packing variants that would either look for the dedicated packed attribute or check separate attributes and their swizzles if they match the desired packing, so it's possible to have convenient access for arbitrary packings as well. If such query returns true, it means the first texture (e.g., metallic) can be queried for all (ID UV set, UV transform) properties as the others will have the same, thus it's no need to have also metallicRoughnessTexture(), metallicRoughnessCoordinateSet() etc). Conversely, metallicTexture() will return the MetallicRoughnessTexture attribute if there's no MetallicTexture present.
  • Ability to store a 64bit integer or a (non-owned) pointer value for very app-specific handles
    • implicitly castable using attribute<FooBar*>(); attributeOr<FooBar*>("thing", nullptr) should also work
  • Ability to store a string
    • With explicit size, so zero bytes are preserved and thus strings can be abused to store arbitrary binary data
  • Instead of a Bool attribute, have just True, to avoid a third state (so one can do a hasAttribute("DoubleSided") instead of attributeOr("DoubleSided", false) and then be unsure what's actually the default) -- the special-casing needed in the implementation/tests and the additional doc and cognitive overhead isn't worth it
  • "Optional" convenience access with tryAttribute() and attributeOr() (see the usage sketch after this list)
  • Backwards compatibility:
    • AbstractMaterialData is now an alias to MaterialData, with virtual destructor and the original constructor signature deprecated, all getters working on the new key/value store instead, PhongMaterialData deriving from it
    • PhongMaterialData storing its properties in the key/value array instead of additional members
    • Deprecate and discourage MaterialData::flags() as it's much less efficient than calling hasAttribute() for a particular bit
    • Controversial: changing AbstractImporter::material() to return Optional<MaterialData> instead of Pointer<MaterialData> as the subclasses will all have the same size as the base now. This would be a breaking change that can't really be made backwards-compatible without introducing some ugly chameleon type, but easy to adapt to.
      • Actually provide a type derived from Optional that's convertible to Pointer -- easy enough to do, and makes it practically painless for users
  • Write up usage docs once optional access and convenience subclasses are done, mention also layers, texture packing and swizzle usage
  • Move type compatibility checks to MaterialData so both enum and string names are checked and MaterialAttributeData can stay constexpr? Would ensure that wrong types aren't accidentally used in constexpr materials. -- postponed, it adds a bit of flexibility and headroom when done this way
  • Allow constructing Color properties from Vector3 with implicit alpha? Would need to be attribute-specific because constructing an arbitrary Vector4 property with W set to 1.0f might not always make sense -- nope, too little benefit for too much complexity
    • Related, allow constructing float or unsigned int properties from integers? (texture IDs etc.) -- same
  • glTF's alpha mask threshold default is different from our default -- sync them
  • Add convenience queries for verifying that all textures have the same coordinate set / texture matrix even if specified using multiple attribs
    • In that case it would be good to have textureMatrix() / textureCoordinates() return the common value, even if it's specified using separate attribs
  • Because there's no whitelisted growable allocator for MaterialAttributeData, make it possible to pass a DefaultInit to arrayShrink() so we can easily go from a growable array back to one with a default deleter
  • Adapt plugins
    • Adapt AssimpImporter, expose the PBR attributes (needs extra file type check for the metallic/roughness texture, sigh)
    • Adapt TinyGltfImporter (metallic/roughness, specular/glossiness)
    • TinyGltfImporter packing extensions postponed
    • TinyGltfImporter layer extensions
    • Trivially adapt OpenGexImporter (no PBR there, ahahah)
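
A sketch of how the optional access and the convenience subclasses mentioned above could be used together -- the exact names (material() returning an Optional, as<T>(), tryAttribute() / attributeOr()) are taken from the bullets above and aren't final API:

/* Hedged sketch -- names follow the bullet points above, the final API may
   differ */
Containers::Optional<Trade::MaterialData> material = importer->material(0);

/* Presence check instead of a tri-state bool attribute */
if(material->hasAttribute("DoubleSided")) {
    /* ... */
}

/* "Optional" access with an explicit fallback value */
Float mask = material->attributeOr(Trade::MaterialAttribute::AlphaMask, 0.5f);

/* Reinterpreting as a convenience subclass -- safe because the subclasses
   have no additional data members, only accessors */
const auto& phong = material->as<Trade::PhongMaterialData>();
Color4 diffuse = phong.diffuseColor();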

Open questions

  • Is there any point in not storing any strings at all and doing it with importer.materialAttributeNameFor() like currently with custom attributes in MeshData? To me this doesn't really make sense, as the data are of variable size (one int vs a 3x4 matrix) and there's a lot of otherwise unused space that is easy to reuse for a string name. At the anticipated scale, where most materials are likely to have custom attributes, the additional indirection and dependency on the importer doesn't seem convenient.
  • Material attributes (like MeshData attributes, same naming) or material properties? Is there any significant semantic difference between those two? -- keep attributes for consistency
  • Allow multiple values of the same attribute? Like two diffuse textures (which are meant to be multiplied(?) together). Assimp seems to support this, but I don't see a real-world feasibility. -- yes, but only via material layers (see the comment below)
  • Fix naming inconsistencies -- BaseColor / BaseColorTexture vs DiffuseColor / DiffuseTexture (glTF uses Factor / Texture everywhere but BaseColorFactor seems excessively long), Metalness vs MetallicRoughnessTexture ... (or just leave it at that?) -- it's okay, I guess
    • Also CoordinateSet -- using TextureCoordinates instead to have a common prefix with other texture attributes
  • glTF specular/glossiness defines specular color as three-component while our Phong has it four-component -- drop the alpha? Could be pretty much backwards compatible as Color3 is implicitly convertible to Color4. The alpha can be used to allow specular reflections on fully transparent materials (soap bubbles) and extensions like ADOBE_materials_thin_transparency add it back, so it would be good to have the possibility -- use Color4, set alpha to zero in glTF import and expose SpecularTextureSwizzle that defaults to RGB (with implicit zero alpha, and implicitly so when SpecularGlossinessTexture is present) but can be overridden to RGBA
  • How to denote unlit/flat materials? -- with MaterialType::Flat
  • Think how to handle backwards compatibility (serialization...) in case some names get obsoleted / renamed -- postponed to Scene data serialization #427, until then it's not really a problem as we encourage using enums instead
  • MetallicRoughnessTextureMatrix doesn't fit together with a Matrix3x3 into the fixed attribute size (the name is 3 characters too long), what to do?
    1. store a 3x2 matrix?
    2. shorten the names? (TexMatrix? ugh, would still be a problem with SpecularGlossinessTextureMatrix)
    3. store metallic and roughness matrices separately? would make the material 64 bytes heavier in the "90% case"
    4. drop all packed attributes altogether and leave only hasMetallicRoughnessTexture() etc convenience checks? would make everything simpler but the material data will get heavier (instead of one attrib it'll need four for the base ID+swizzle, plus everything extra duplicated)
    5. keep only packed MetallicRoughnessTexture as a shorthand for MetallicTexture + RoughnessTexture + two TextureSwizzles, drop combined texture matrix and coordinate set as that's already expressible by either the global attributes (most commonly) or per-map attributes (full flexibility)
  • Figure out mapping between PBR specular/glossiness <-> Phong and metallic/roughness <-> specular/glossiness, which makes Phong <-> metallic/roughness possible as well -- example code postponed
  • glTF says specular/glossiness can represent a broader range of materials and is computationally heavier, while Filament says it's legacy and not recommended!?
    • it avoids some artifacts on metal/dielectric transitions and allows for colored specular, the KHR_materials_specular extension adds specular coloring
  • glTF default factor for clearcoat is 0, which doesn't make sense for layers in general (each layer would need to override it, which is excessive); same for roughness, defined as 0 in the layer but as 1 in the base material -- leave them at 1 so a layer doesn't need to explicitly override those when it's just a texture
    • watch out for the difference when implementing this in glTF importer
  • Create a MaterialTools library that does conversion between various types, material merging etc.? Or is there too little of it for a full-blown library? -- postponed
  • Uhh, glTF metallic/roughness is actually nothing/roughness/metallic ... need to change all the assumptions everywhere (and now the MSFT extension naming makes sense, ffs!)
    • What about other maps, do the defaults match?

@mosra mosra added this to the 2020.0b milestone Jul 16, 2020
@mosra mosra self-assigned this Jul 17, 2020
@codecov-commenter commented Jul 17, 2020

Codecov Report

Merging #459 into master will increase coverage by 1.48%.
The diff coverage is 99.79%.

@@            Coverage Diff             @@
##           master     #459      +/-   ##
==========================================
+ Coverage   75.52%   77.01%   +1.48%     
==========================================
  Files         398      409      +11     
  Lines       23291    24805    +1514     
==========================================
+ Hits        17590    19103    +1513     
- Misses       5701     5702       +1     
Impacted Files Coverage Δ
src/Magnum/GL/Attribute.h 100.00% <ø> (ø)
src/Magnum/Math/TypeTraits.h 100.00% <ø> (ø)
src/Magnum/Shaders/Phong.h 100.00% <ø> (ø)
src/Magnum/Trade/MaterialData.h 100.00% <ø> (ø)
src/Magnum/Trade/Trade.h 100.00% <ø> (ø)
...agnumPlugins/AnyImageImporter/AnyImageImporter.cpp 89.62% <ø> (ø)
...agnumPlugins/AnySceneImporter/AnySceneImporter.cpp 47.26% <0.00%> (ø)
.../MagnumPlugins/AnySceneImporter/AnySceneImporter.h 100.00% <ø> (ø)
src/MagnumPlugins/ObjImporter/ObjImporter.cpp 93.54% <ø> (ø)
src/MagnumPlugins/TgaImporter/TgaImporter.cpp 100.00% <ø> (ø)
... and 35 more

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 2e4f2b7...4cd50b0.

@pezcode (Contributor) commented Jul 23, 2020

Allow multiple values of the same attribute? Like two diffuse textures (which are meant to be multiplied(?) together). Assimp seems to support this, but I don't see a real-world feasibility.

This would be useful for layered materials. Not sure how many file formats actually support this, but it might be worth having the extensibility.

@mosra (Owner, Author) commented Jul 23, 2020

@pezcode oh, thanks -- definitely something I missed. I did a bit of research and found:

From what I understand, the number of possible layers isn't physically limited in any way. glTF extensions seem to be grouping particular layer properties together, which to me seems a cleaner way to handle this than "here you go, this material has three normal maps and two factor values, enjoy", but it basically means you need a new extension for each new layer. I can see three ways of implementing this:

  1. "Here you go, this material has three normal maps and two factor values, enjoy." That's how Assimp does it, but I'm afraid this is rather error prone (i.e., how to easily know which attributes go to which layer?) and will quickly become a mess with more than one extra layer.
  2. For each "known" layer, add a group of new named properties. This would be how glTF does it except that there's no semantic notion of what goes together in a layer, apart from a common string prefix (and then one can't have two layers of the same type, again). Which means a mess again when custom layers are added.
  3. Add a builtin notion of layers, where each layer would contain a range of attributes, possibly with well-defined "factor" / "mask" attributes for each. Internally this would be the same as case 1, but access to those would be through e.g. data.attribute(layer, name) instead of data.attribute(name, index).

I think option 3 could work best / be most flexible and would be easy to fit into current design as well. Ideas? :)

@pezcode (Contributor) commented Jul 23, 2020

Would option 3 require a fixed number of layers with known attributes? Ideally, each attribute is simply associated with a layer, and users can check if layers have the attributes they need. Could also 'tag' layers for convenience, e.g. "thin film" if the importer supports those semantics (e.g. coming from a GLTF extension).

How would that interact with another importer that considers clear coat or thin film part of the main material attributes? Or are you mainly focusing on GLTF at this point?

@mosra (Owner, Author) commented Jul 24, 2020

After a bit of experimenting with the design, here's a code example that I think might work:

MaterialData data{
    // Base material defined with both specular/glossiness and 
    // metallic/roughness parameters, plus a clearcoat layer
    MaterialType::PbrSpecularGlossiness|MaterialType::PbrMetallicRoughness|MaterialType::PbrClearCoat, 
    
    // Attributes for the base material and three additional layers
    {
        // Base material, expressed in both metallic/roughness ...
        {MaterialAttribute::Metallic, 0.27646f},
        {MaterialAttribute::BaseColor, 0xffcc33_rgbf},
        {MaterialAttribute::RoughnessTexture, a},
        // ... and specular/glossiness
        {MaterialAttribute::DiffuseColor, 0xffcc33_rgbf},
        {MaterialAttribute::SpecularGlossinessTexture, b},
        
        // Clear coat layer
        {MaterialAttribute::LayerName, MaterialLayer::ClearCoat},
        {MaterialAttribute::LayerIntensityTexture, c},
        {MaterialAttribute::RoughnessTexture,  d},
        {MaterialAttribute::NormalTexture,  e},

        // A custom sheen layer. LayerName is completely optional, useful for
        // annotation or for convenient access with attribute(layerName, name)
        //{MaterialAttribute::LayerName, "sheen"},
        {MaterialAttribute::LayerIntensityTexture, f},
        {"sheenTexture", g},
        
        // Thin film layer, name specified via a shorthand
        {MaterialLayer::ThinFilm},
        {MaterialAttribute::LayerIntensity, 0.35f},
        {MaterialAttribute::ThinFilmThicknessMinimum,  450.0f},
        {MaterialAttribute::ThinFilmThicknessMaximum,  1200.0f}
    },
    
    // Attribute range for each layer -- this is how the class knows what 
    // attribute is from what layer
    {5, 9, 11, 15}
};

// Basic queries
data.layerCount() == 4
data.layer(2) == "sheen"
data.attributeCount() == 5 // base material
data.attributeCount(MaterialLayer::ThinFilm) == 4

// Getting an attribute for the base material or a layer; either by index or
// a name
data.attribute(MaterialAttribute::RoughnessTexture) // base material
data.attribute(/*layer=*/1, MaterialAttribute::RoughnessTexture)
data.attribute(MaterialLayer::ClearCoat, MaterialAttribute::RoughnessTexture)

// Attempting to query the base material for an attribute that's present only
// in some layer will fail
data.tryAttribute(MaterialAttribute::ThinFilmThicknessMinimum) == NullOpt

As a completely custom use case, here's how a layered material for a procedural landscape could be described -- the key point is that sand/grass/rock textures are static tiled textures, while the actual masks are dynamically generated (as opposed to everything being described through a single combined diffuse texture):

MaterialData proceduralLandscape{
    // Custom material that doesn't fit any of the predefined-ones
    MaterialTypes{},

    {
        // Sand layer
        {MaterialAttribute::LayerFactorTexture, a},
        {MaterialAttribute::DiffuseTexture, sandTileTexture},

        // Grass layer
        {MaterialAttribute::LayerFactorTexture, b},
        {"strandLengthTexture", c},
        {MaterialAttribute::DiffuseTexture, grassTileTexture},
        
        // Rock layer
        {MaterialAttribute::LayerFactorTexture, d},
        {MaterialAttribute::DiffuseTexture, rockTileTexture}
    },
    
    // There's no base material, everything is in layers
    {0, 2, 5, 7}
};

How would that interact with another importer that considers clear coat or thin film part of the main material attributes?

From what I understand, a layer usually needs some "mask" or "intensity" describing where and how to apply it, so even if an importer library puts everything together it still needs some way to express what mask goes to what layer for example. And based on that I would populate the layer info.

Or are you mainly focusing on GLTF at this point?

I'm only using glTF because it's a sufficiently flexible and well-documented solution that works in practice, but I don't want to limit the API to just the feature set it supports, no.

Thanks a lot for the feedback, btw. :)

EDIT: updated snippets to reflect current state (2020-08-09)

@eundersander (Contributor) commented

How is a material with multiple layers different from just having multiple materials? If you decided to treat each layer as its own material, I guess the layer masks become just another attribute of each material (opacity?)

Would your proposed multi-layer material include information on how to blend the layers? I mean, in addition to the mask, what is the blend op?

@mosra (Owner, Author) commented Jul 24, 2020

How is a material with multiple layers different from just having multiple materials?

@eundersander In the extreme custom case (the procedural landscape material example above, where the blending is trivial) yes, it's basically the same; and additionally, if the material boundaries follow mesh topology, the mesh could get separated by material into a set of meshes, each with a different non-layered material. (If it's not easily separable then you'd need to render the mesh multiple times with different materials, which might be hard/impossible.)

This is however trying to account for the case of not-so-simple coating/film/... layers that affect material behavior in complex ways (lacquered wood, oil film on leather, metallic paint...), as described by several glTF layer extensions, among other resources.

Would your proposed multi-layer material include information on how to blend the layers?

No, that's out of scope of this API, as the blending is defined by the layer itself. Each of those layer extensions defines, as I understand it, its own way of affecting the underlying material, which means the blending is given implicitly by the layer type (the same way as, say, the purpose of a metalness map is also given implicitly -- if you don't know how to use it, you can't use it).

The purpose of the layer API would be -- if a complex material has, say, three roughness maps -- to know which roughness map is used by what material layer (and thus how it is used), and how the layers stack together.

@mosra mosra force-pushed the materialdata branch 4 times, most recently from e9502b1 to 4e72d4c Compare July 29, 2020 17:26
@bigbike commented Jul 29, 2020

Read the proposal as well as all the comments. I think it is a fairly good design.

Some specific comments:

// Attribute range for each layer -- this is how the class knows what
// attribute is from what layer
{0, 5, 9, 11, 15}

Manually count the numbers? It could be rather painful if there are many layers and many attributes.

// There's no base material, everything is in layers
{0, 0, 2, 5, 7}

In this case, it still needs a "0" even if there is no basic (main) material. I can imagine I will easily forget this. :)

[Screenshot: the MaterialAttributeData layout diagram from the proposal above]

The figure is wrong as "Data" should be directly after the "Type"?

@mosra (Owner, Author) commented Jul 29, 2020

Thanks!

Manually count the numbers?

In most cases it won't be manual -- you'll know that there are, say, 8 attributes for the base material, so you put 8 there, then there's a thin film layer with 5 attributes, so you put 5+8 there, etc. I don't expect there to be more than 2-3 layers at most, and even that would be a fairly complex material already. I'd say it's error-prone the same way as calculating vertex attribute offsets :)

In the case, it still needs a "0" even if there is no basic (main) material.

Edited the original comment -- now it's just {5, 9, 11, 15} and {0, 2, 5, 7}, the implicit initial zero wasn't that useful and actually made the internals more complicated.

The figure is wrong as "Data" should be directly after the "Type"?

No, the name is first, in order to be always at the same offset and work well with binary search. Data is tucked at the end.

@mosra mosra force-pushed the materialdata branch 5 times, most recently from 816eaa2 to 4ea5d78 Compare August 9, 2020 12:55
@mosra mosra force-pushed the materialdata branch 3 times, most recently from 18bb263 to 4d7e741 Compare August 11, 2020 17:00
Well, "basic". Practically mirrors glTF PBR materials:

 - builtin metallic/roughness
 - the KHR_materials_pbrSpecularGlossiness extension
 - extra normal/occlusion/emission maps
 - exposes the implicit metallic/roughness and specular/glossiness
   packing, but also allows separate maps with arbitrary packings as
   well as two-channel normal maps (instead of three-channel)
 - provides convenience checks for the most common packing schemes
   including MSFT_packing_normalRoughnessMetallic and the three variants
   of MSFT_packing_occlusionRoughnessMetallic
 - teaches PhongMaterialData to recognize packed specular/glossiness
   maps as well

Next up is exposing at least one layer extension, and then I'm done
here.
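
A sketch of what consuming those separate metalness/roughness attributes could look like from user code -- the getter and type names are assumed from the description above, not verified against the final API:

/* Hedged sketch -- material is a Trade::MaterialData obtained from an
   importer, getter names are assumptions */
const auto& pbr = material.as<Trade::PbrMetallicRoughnessMaterialData>();

/* Separate metalness/roughness maps with arbitrary packing: the swizzle says
   which channel(s) of the referenced texture to actually sample */
if(pbr.hasMetalnessTexture() && pbr.hasRoughnessTexture()) {
    UnsignedInt metalness = pbr.metalnessTexture();
    Trade::MaterialTextureSwizzle metalnessChannels = pbr.metalnessTextureSwizzle();
    UnsignedInt roughness = pbr.roughnessTexture();
    Trade::MaterialTextureSwizzle roughnessChannels = pbr.roughnessTextureSwizzle();
    /* ... bind the texture(s) and sample the right channels ... */
}
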
For convenient reinterpretation as a concrete material type. Also I
don't care if some language lawyers think this is UB.
In these cases there can be more than one attribute, so complaining that
just one particular attribute is not found is misleading.
This is a bit huge because of all the new overloads that take a
MaterialLayer instead of a string, but all that is just boring
boilerplate. Additionally this:

 * exposes glTF clear coat parameters (which, interestingly enough,
   reuse existing attributes and don't introduce any new)
 * provides a convenience wrapper in PbrClearCoatMaterialData
 * and a convenience base for material layer wrappers that redirect
   all APIs with implicit layer argument to desired layer instead of the
   base material
For some reason I forgot this one.
Turns out glTF doesn't actually put metalness into R and roughness into
G, even though the naming suggests that. This was done originally, but
then they changed that in order to be compatible with UE4 and allow for
a more efficient storage of an occlusion map.

Because this feels extremely arbitrary, the docs have added rationale
for each of the packing variants, and I'm also renaming the packed
attribute and checks to imply the red channel isn't used.
There's actually a lot of code involved in checking if all textures use
the same transform or coordinate set, especially when considering all
fallback variants and potential future expansion with separate texture
offset/scale/rotation attributes.

A lot of the complexity was thus hidden in plugin implementations, which
were each trying to find a common value for all textures to save the
user from doing the same. All that code can now be removed and left up
to the material APIs themselves -- now it's just about checking
hasCommonTextureTransformation() and then retrieving that one common
transformation, independently of how the material actually defines it.
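
A sketch of the intended application-side usage -- hasCommonTextureTransformation() is the check mentioned above, commonTextureMatrix() an assumed companion getter:

/* Sketch only, assuming the convenience subclass exposes these queries */
const auto& pbr = material.as<Trade::PbrMetallicRoughnessMaterialData>();
if(pbr.hasCommonTextureTransformation()) {
    /* One matrix shared by all texture maps, however the material defines it
       (a single global attribute or identical per-texture attributes) */
    Matrix3 textureMatrix = pbr.commonTextureMatrix();
    /* ... upload it as a single uniform ... */
} else {
    /* Per-texture matrices differ -- needs per-map handling or baking the
       transformation into the texture coordinates */
}
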
glTF, always some surprise waiting around the corner.
The new materials now commonly import separate per-texture matrices
instead of a single one even if they're all the same because that makes
the plugin implementation *much* simpler. However, existing code that
assumes there's just one matrix would get broken because textureMatrix()
would now return something else. By changing that to return a common
matrix if present and falling back to the global one we can preserve the
original behavior.
The new MaterialData APIs introduce MaterialAttributeType::Bool and
everything kinda explodes with that.

Seriously, what were the people in 1979 thinking.
@mosra mosra merged commit 8c5a2d6 into master Aug 16, 2020
@mosra mosra deleted the materialdata branch August 16, 2020 19:33