SceneData rework #525
Codecov Report
```diff
@@            Coverage Diff             @@
##           master     #525      +/-   ##
==========================================
+ Coverage   79.82%   80.27%   +0.44%
==========================================
  Files         511      511
  Lines       32036    32818     +782
==========================================
+ Hits        25574    26345     +771
- Misses       6462     6473      +11
```
Continue to review full report at Codecov.
Needed for backwards compatibility purposes currently, eventually it'll become a public API in the upcoming SceneTools library.
Now the code has all needed backwards compatibility in place.
That's what plugins implemented with the legacy API did, as it's more flexible than a matrix.
Even though this API is deprecated and thus not meant to be used, most existing code still uses the previous APIs and relies on the backwards compatibility interfaces. And I wasted quite some time debugging why the scene looked empty.
Another thought I had was about allowing a 2x2 / 3x3 matrix to be used for rotation, but there's the ambiguity with it possibly containing scaling / shear / whatever, which would then cause EXPLODING HEADACHES when converting to quaternions / complex numbers.
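For illustration only, a standalone sketch using Magnum's math types (not code from this PR) of why a 3x3 rotation field would be ambiguous once scaling sneaks in:

```cpp
/* A pure rotation converts to a quaternion fine; once non-uniform scaling or
   shear is baked into the 3x3 block, the matrix is no longer orthogonal and
   there's no single correct quaternion to extract without first deciding how
   to split the scaling off. */
#include <Magnum/Magnum.h>
#include <Magnum/Math/Matrix4.h>
#include <Magnum/Math/Quaternion.h>

using namespace Magnum;
using namespace Magnum::Math::Literals;

int main() {
    const Matrix4 pureRotation = Matrix4::rotationZ(30.0_degf);
    /* Fine -- the upper-left 3x3 block is orthogonal */
    const Quaternion q = Quaternion::fromMatrix(pureRotation.rotationScaling());

    /* Not fine -- the 3x3 block now contains non-uniform scaling, so it isn't
       a rotation matrix anymore and would be invalid input for fromMatrix() */
    const Matrix4 rotationWithScaling =
        pureRotation*Matrix4::scaling({2.0f, 1.0f, 1.0f});
    // Quaternion::fromMatrix(rotationWithScaling.rotationScaling());

    /* The scaled matrix isn't orthogonal anymore, which is exactly the problem */
    return (q.isNormalized() && !rotationWithScaling.rotationScaling().isOrthogonal()) ? 0 : 1;
}
```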
Hmm, doesn't look right. A bug somewhere!
As the comment says -- before, user code expected that if the scene hierarchy is broken, particular objects would fail to import. Now the whole scene fails to import, so we don't even get to know the actual (expanded) deprecated 2D/3D object count. To reduce the suffering, return at least the dimension-less object count there. It won't include the duplicates from the single-function-object conversion, but it's better than reporting 0.
Honestly I don't care much, this is just that the original PrimitiveImporter tests started to fail because they expected object3DForName() to return -1 for a 2D name, but that's no longer the case. It would be possible to fix this, but I doubt anyone ever relied on such behavior for 2D scenes, so just add a test that acknowledges this new behavior.
This has to be solved on a more systematic level, perhaps even by switching all types to be 64-bit. In the following commit all *AsArray() and *Into() functions will output the object IDs as well, meaning this would need to be handled in each and every API, which is a huge maintenance burden. As it's very unlikely that there actually will *ever* be >4G objects, one possible option would be to introduce some "object ID hash" field that would provide (contiguous?) remapping of the object ID to 32-bit values, and the Into() and AsArray() accessors would return this remapping instead of the original. But then again it'd cause issues with for example animation references that would still reference the original 64-bit value.
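To make the "object ID hash" idea concrete, a generic sketch of such a remapping side table (nothing like this exists in the code, all names here are made up):

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

/* 64-bit handles stay authoritative, a side table remaps them to contiguous
   32-bit values that the *AsArray() / *Into() convenience APIs could hand out
   instead of the originals */
struct ObjectIdRemapping {
    std::unordered_map<std::uint64_t, std::uint32_t> toCompact;
    std::vector<std::uint64_t> toOriginal;

    /* Returns a stable, contiguous 32-bit ID for a given 64-bit handle,
       allocating a new one on first use */
    std::uint32_t compact(std::uint64_t original) {
        auto inserted = toCompact.emplace(original, std::uint32_t(toOriginal.size()));
        if(inserted.second) toOriginal.push_back(original);
        return inserted.first->second;
    }
};

/* The catch mentioned above: anything that stored the original 64-bit value
   (such as an animation target) now has to go through the table first, so the
   indirection leaks into every consumer anyway. */
```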
Because that way one can query a field with *AsArray() and iterate through it in a single expression. This also resolves the pending issue where it was more than annoying to fetch object mapping for TRS fields when only a subset of the fields is available.
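For illustration, a sketch of the single-expression iteration this enables, assuming the reworked parentsAsArray() returns (object, parent) pairs; the exact return types and the printParents() helper are assumptions:

```cpp
#include <Corrade/Utility/Debug.h>
#include <Magnum/Trade/SceneData.h>

using namespace Magnum;

void printParents(const Trade::SceneData& scene) {
    /* Field values and their object mapping come out of a single call, with no
       separate mapping query and no shared-mapping assumption */
    for(const auto& parent: scene.parentsAsArray())
        Debug{} << "object" << parent.first() << "is parented to" << parent.second();
}
```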
This got originally added as some sort of a kludge to make it easy to go to the parent transformation, assuming Parent and Transformation share the same object mapping:

```python
parentTransformation = transformations[parents[i]]
```

But after some ACTUAL REAL WORLD use, I realized that there's often a set of objects that have a Parent defined, and then another, completely disjoint, set of objects that have a transformation (for example certain nodes having no transformation at all because it's an identity). And so this parent indirection is not only useless, but in fact an additional complication. Let's say we make a map of the transformations, where transformationMap[i] is a transformation for object i:

```python
transformationMap = {}
for j in range(len(transformations)):
    transformationMap[transformationObjects[j]] = transformations[j]
```

Then, with *no* assumptions about shared object mapping, the indirection would cause parent transformation retrieval to look like this:

```python
parentTransformation = transformationMap[parentObjects[parents[i]]]
```

While *without* the indirection, it'd be just:

```python
parentTransformation = transformationMap[parents[i]]
```
Now it's a field and its corresponding object mapping, instead of field and "objects":

- Goes better with the concept that there's not really any materialized "object" anywhere, just fields mapped to them.
- No more weird singular/plural difference between field() and objects(), it's field() and mapping() now.
- The objectCount() that actually wasn't really object count is now a mappingBound(), an upper bound for object IDs contained in the object mapping views. Which is quite self-explanatory without having to mention every time that the range may be sparse.
Currently used by the per-object access APIs to make the lookup constant- or logarithmic-time instead of linear, available for use by external data consumers as well.
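A generic sketch (not the actual implementation) of why the implicit/ordered distinction changes the per-object lookup cost:

```cpp
#include <algorithm>
#include <cstdint>
#include <optional>
#include <vector>

/* With an implicit mapping the object ID *is* the index, with an ordered
   mapping a binary search suffices, otherwise only a linear scan remains */
std::optional<std::size_t> findEntry(const std::vector<std::uint32_t>& mapping,
    std::uint32_t object, bool implicit, bool ordered)
{
    if(implicit) /* mapping[i] == i, O(1) */
        return object < mapping.size() ?
            std::optional<std::size_t>{object} : std::nullopt;
    if(ordered) { /* object IDs sorted, O(log n) */
        auto it = std::lower_bound(mapping.begin(), mapping.end(), object);
        if(it != mapping.end() && *it == object)
            return std::size_t(it - mapping.begin());
        return std::nullopt;
    }
    /* arbitrary order, O(n) */
    auto it = std::find(mapping.begin(), mapping.end(), object);
    return it == mapping.end() ? std::nullopt :
        std::optional<std::size_t>{std::size_t(it - mapping.begin())};
}
```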
Not really important right now as the SceneData from these are used only in internal deprecated APIs, but at least it might speed up the children2D() / children3D() queries. Mainly done so I don't forget to do this later when these APIs are published in the SceneTools library. What's not done is the rather complex logic in the single-function conversion utility, where a field could retain the implicit/ordered flags in *some* scenarios. There are too many corner cases, so it's better to be conservative and not preserve anything rather than mark something as ordered while it's no longer the case. The corner cases are hopefully all checked for (and XFAIL'd) in the test.
Hm, this will soon need some color distinction, it's starting to get hard to read.
Mostly just to avoid the return types changing to incompatible types in the future, breaking existing code. The internals are currently not fully ready to operate with 64-bit object IDs, especially the AsArray() APIs -- those I will have to solve in the future somehow. Returning 64-bit values in the pairs would add four byte padding after basically each value, which is way too wasteful for the common case. The Into() APIs could eventually get 64-bit overloads though.
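To make the padding argument concrete, a plain C++ illustration (the 16-byte size assumes a typical 64-bit ABI):

```cpp
#include <cstdint>
#include <cstdio>

struct Pair32 { std::uint32_t object; std::int32_t parent; };  /* 8 bytes */
struct Pair64 { std::uint64_t object; std::int32_t parent; };  /* typically 16 bytes, 4 of them padding */

int main() {
    std::printf("32-bit pair: %zu bytes, 64-bit pair: %zu bytes\n",
        sizeof(Pair32), sizeof(Pair64));
}
```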
To align with the new SceneData.
This is implemented in a generic way internally, to allow copying of arbitrary fields, not just skins for meshes, in the upcoming SceneTools utility.
With the previous commits, existing plugin implementations still built and ran against the new code; however, they introduced several ABI breaks, meaning that existing plugin binaries would crash. This forces them to be recompiled to match the new version string.
Merged the current state, a few bits got scrapped and put aside into #542 because they're too cursed. Items marked as postponed now live in the project's Projects board and will be gradually implemented in the future.
Final bit in the series of MeshData / MaterialData / ... reworks that brings pure data-oriented joy to scene hierarchies in a way that should be flexible enough to be fed into / serialized from common ECS frameworks like entt or flecs.
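As a concrete picture of where this is headed, a minimal sketch of assembling such a data-oriented scene from a single interleaved allocation; the names below (`SceneData`, `SceneFieldData`, `SceneMappingType`, `SceneField::Parent` / `Transformation`) follow the discussion in this PR, but treat the exact signatures as approximate:

```cpp
#include <Corrade/Containers/Array.h>
#include <Corrade/Containers/StridedArrayView.h>
#include <Magnum/Math/Matrix4.h>
#include <Magnum/Trade/SceneData.h>

using namespace Magnum;

Trade::SceneData makeScene() {
    /* Three objects: 0 and 1 are roots, 2 is a child of 1 */
    struct Node {
        UnsignedInt object;
        Int parent;
        Matrix4 transformation;
    };
    Containers::Array<char> data{3*sizeof(Node)};
    Containers::ArrayView<char> raw = data;
    Containers::ArrayView<Node> nodes = Containers::arrayCast<Node>(raw);
    nodes[0] = {0, -1, Matrix4{}};
    nodes[1] = {1, -1, Matrix4::translation(Vector3::xAxis(5.0f))};
    nodes[2] = {2,  1, Matrix4::scaling(Vector3{0.5f})};

    /* Each field is just a name, an object mapping view and a value view; both
       fields share one mapping here but disjoint mappings are equally fine */
    Containers::StridedArrayView1D<const Node> view = nodes;
    return Trade::SceneData{Trade::SceneMappingType::UnsignedInt, 3, std::move(data), {
        Trade::SceneFieldData{Trade::SceneField::Parent,
            view.slice(&Node::object), view.slice(&Node::parent)},
        Trade::SceneFieldData{Trade::SceneField::Transformation,
            view.slice(&Node::object), view.slice(&Node::transformation)}
    }};
}
```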
"MVP":
SceneFieldData
having the same layout on 32 and 64 bits (preparing for Scene data serialization #427)trivialimplicit", "field is offset-only")allow object mapping to be omitted if implicitscrapped, moved to SceneData field specification optimizations #542 due to unforeseen consequences"field is trivial", special-casing so one doesn't need to providescrapped, moved to SceneData field specification optimizations #542 due to unforeseen consequences-1
for scenes that contain just root nodesArray
otherwise (or -1 / identity transform for the "usual" properties like parent and transform? how to distinguish objects that are outside of a hierarchy & without transformations then?postponed)fooIntoInternal()
helperOTOH we could provide an iterating API where one would have to switch & cast on the type, handle all possible field types and ...ugh, nope, this is not the way"get all fields that this object has", could be useful for debuggingslow as hell, users should instead directly ask for meshes/lights/...Optional<Int>
used for checking which scenes does an object belong to -- if-1
, the object is top-level, ifNullOpt
the object does not belong to this scene("object index" vs "object mapping" vs ??, "field" vs "entry of a field",object mapping and "field"SceneObjectType
vsSceneIndexType
, better idea forobjectCount(SceneField)
that describes "how many entries a field has", since one object can have more than one field entry it's not really "object count", entry count?)*Into()
/*AsArray()
helpers should probably give out objects as well?*Into()
SceneFieldType::Pointer
andSceneFieldType::MutablePointer
, with arbitraryfield<T*>()
accepted for those (same as with MaterialData)SceneField::ImporterState
for per-object importer state, of aPointer
type -- needed for compat with existing codeSceneData
basically references the exact same node hierarchy, with all its data...could the Archetype support mentioned below solve this (with some additional restrictions)?or should I just introduce custom fields for such case, missing out on the opportunity to deal with them through the common interfaces?sub-scene references, where the sub-scene uses a different field set and encoding?scene layers like with materials, where there can be a subrange of fields for each layer, and the fields can repeat?or some " field, variant 2" that allows certain sub-parts of the scene to use custom encoding?that's no different than a sub-scene / sub-layerMore systematic way to deal with 64-bit object IDs and references. We need them to store arbitrary object handles, but the convenience APIs all have 32-bit as it's very unlikely that there actually will ever be >4G objects.postponedfooFor()
to 64-bit probably makes sense thoInto()
andAsArray()
accessors would return this remapping instead of the original (and the data provider would be forced to provide such field?). But then again it'd cause issues with for example animation references that would still reference the original 64-bit value, there one would have to look up the hash first...changepostponedmappingBound()
to a Range? / rename tomappingRange()
?Hooking up into the rest:
- `SceneData` to have each object just one mesh/light/camera/skin, to support classic workflows and make the backwards-compat wrappers easier to write
- `SceneTools` library
- `SceneData::children2D()` / `children3D()`
- `object2D()` / `object3D()` in `AbstractImporter` -- write backwards-compat functions that extract this out of the new SceneData using the (slow) per-object convenience APIs
- `AbstractImporter::objectForName()` / `objectName()` (drop the 2D/3D distinction), works on objects up to `SceneData::objectCount()`
- `AbstractImporter::objectCount()` -- huh... but how to distinguish between scenes and have it robust w/o having to retrieve the SceneData first? or have `objectForName(scene, name)` and `objectCount(scene)` on the importer directly? ahhh -- there's a global object count (which is needed anyway since animations, skins etc reference objects globally also)
- or drop this and store names in SceneData directly? might be wasteful if the users don't need that, doesn't allow searching... OTOH it'd be cool to have a builtin possibility to put them there so the SceneData can be made self-contained -- nope, handle externally
- `Trade::AnimationData` needs to have a 64-bit object ID now
- `AbstractImporter::sceneFieldName()` / `sceneFieldForName()` for custom fields
- `SceneField::SkinId` needs to be distinguished for 2D and 3D, I suppose? Or have an `is2D()` / `is3D()` getter on the SceneData itself, which looks for transformation fields and then decides? What if neither is available? -- checked and populated in the constructor, skin fields require a transformation field to disambiguate the dimensionality
- or just to the first and it's expected that extra meshes have extra skins? -- same skin for all meshes
- `SceneField::MeshMaterial` needs to be signed because otherwise it's impossible to represent meshes without materials (as it has to have the same object mapping as meshes)
- Adapt ObjImporter -- it doesn't have any scene / material assignment yet, nothing to do
- Currently it fails if there's no scene that'd list the root nodes, implement a fallback (with a warning) that collects all root nodes instead -- postponed, needs a BitArray
- And consequently warn also if there are scenes but also loose nodes not referenced by any of them -- postponed, needs a BitArray
- same handling as TinyGltf for scene-less files -- postponed
- might need the same handling as TinyGltf for scene-less files -- nope, nodes are there only if a scene is
- might need the same handling as TinyGltf for scene-less files -- nope, there's just one implicit scene
- `magnum-sceneconverter --info` (listing what fields a scene has and how many objects there are for each)
- DartIntegration
- magnum-player
Followup tasks:
- `SceneFieldType::String`, internally represented as begin+size pairs, with the data being taken from the data blob, or as offsets if the field is offset-only; begin+size instead of an implicit offset array in order to allow the same string being shared between multiple fields -- postponed
- or allow 2D transforms used for 3D and vice versa, like MeshData does for positions -- no (at least for now), conversion complexity not worth the 0.001% use case
- Support the `Bool` type -- postponed
- Support storing direct (non-owned) pointers to `MeshData`, `MaterialData` if one so desires (could be useful for the upcoming `SceneTools` library as well) -- postponed
- Support sparse fields (highest bit of a 32-/64-bit object index used to distinguish "dead" entries), for easier (de)serialization of live systems -- postponed
- Additional per-field flags ("field is sparse", "field is ordered", ...), flags for describing field data in addition to object data -- postponed
- `SceneField::Present` / `Belongs` / ... ? which is a bit array and enumerates which objects belong to the scene -- postponed
- `SceneField::InstanceMesh` to support glTF's EXT_mesh_gpu_instancing -- postponed
- `CollisionMesh`?
- Mesh view properties (index/vertex offset and count, instance offset and count), all should share the same object mapping; all should be retrievable using a single convenience API that fills a few separate views -- currently doable via custom fields, postponed to when importer support is there
- `GL::AbstractShaderProgram::draw()` that takes raw data instead of a ton of `GL::MeshView` references -- c2ecaa6
- Adapt the method chaining overload in all builtin shaders for that (make a macro!!) -- postponed
- Material texture transform + layer properties that get applied on top of the referenced material -- postponed
- Instead of failing, synthesize parent + transform info in the `AsArray()` helpers if not present -- `-1` / identity for all objects? need to think about how to distinguish transformation-less objects in this case (`MeshMaterial`, all `-1`s) -- not really possible, transformless objects are common (plus the type distinguishes between 2D and 3D), non-node objects are also a thing now
- solved by the `transformationObjectsAsArray()` helper that picks object mapping for `Transformation` / TRS fields depending on which of them is available, currently this has to be done by the user and is error-prone
- `Into` / `AsArray` helpers returning the object mapping as well
- Real-world test sharing the data with entt/flecs? -- i don't see why this new way would be worse than the cursed OOP API that was there before -- postponed
- `EnttIntegration` using https://github.com/skypjack/entt/wiki/Crash-Course:-entity-component-system#snapshot-complete-vs-continuous ?
- `SceneField` probably should be 32-bit to have enough bits for arbitrarily allocated entity IDs (hopefully that's enough and we don't need 64)

Open Qs:
- `Parent` field currently denotes that an object is a hierarchical node; so any other (custom) field could be assigned to objects to denote those are octree cells; encoding of the octree is up to the user
- Storing child information (inverse of the parent information) -- allows users to pick a better representation for a given task, `parentsAsArray()` / `childrenAsArray()` then falls back to querying the information from the other field if the primary is not available -- postponed, can't see why this would be useful anymore
- `SceneFieldType::SceneField`? e.g. for a potential accelerating `SceneField::InverseMapping` field that maps object IDs to fields (i.e., is sorted by object ID, or is trivial and sparse ...)
- Think more about `SceneField::Parent`, if it really needs to be self-referencing (what else would the object mapping view be? the exact same view?), and if it could be restricted to transforms only -- not self-referencing, but the Parent field implies the object is a node
- rename `Parent` to `ParentNode`? to make room for parent octree cell etc -- postponed
- Support sparse object IDs (for easier (de)serialization of live systems, again) -- postponed
- Support more than one array per field (for easy storage of "archetypes"-like ECS state) -- postponed