NodePerformanceTest #100
The files for a 10000 asteroids stress test
Thanks for this. This needs a small 150-pixel screenshot.ext and a larger screenshot_large.ext, per SubmittingModels.md. The large one should be embedded in the readme. If the screenshots have a smaller file size as JPGs, it might be worth using that format instead of PNG. I like to add a screenshot caption indicating which renderer was used, so others can duplicate it for regression testing, though this isn't a requirement. Otherwise all looks good to me!
Thanks @rsahlin, this is clearly a useful performance testing model to have. I'm concerned about adding a stress-test model this heavy to the collection here. Its name "10000 Asteroids" will be very tempting for new users to click on first thing when they get to this repo, possibly making it their first impression of glTF and glTF's overall performance. If it's here at all, it should have a name that warns people, like "HeavyStressTest" or something similar that doesn't alphabetize to the top or look tempting as the first thing a new user would test. Also, every model here offers the […]
- Small (150px) JPEG screenshot added
- Large image renamed to screenshot_large and changed to JPEG format
- Updated body with ##screenshot section
@echadwick-artist Thanks - I've updated accordingly
Thanks for your comment @emackey. I see your point - what about something like 'NodeStressTest'? Ok, I did not know that we favour the […]
Rename to a boring name so that only those really interested will use the model
Renamed to NodeStressTest to make the model sound more boring. I am having doubts about storing the […]. Instead, I think we should be sure to provide information on how to unpack a […]
While I'm strongly supporting the goal of having such a model, I wouldn't support merging this particular model into the […].

There are many reasons why it would be good to have such a model. The most important ones are discussed in the threads surrounding KhronosGroup/glTF#1699, #56, and the issues/threads that are linked from these.

The model in its current form does have a somewhat "artistic touch" because of its original name, "10000 Asteroids". A name like "10000 randomly shaped low-polygon objects with 100 grayscale textures distributed in a torus shape" doesn't roll off the tongue that easily. But the point is: if this model is primarily intended as a "stress test", then it is only one point within a many-dimensional space of such models. Right now, it is a model with […]
We might want to have...
Derived from these many dimensions of "stress testing", there are concerns when trying to host this sort of model. These refer to […]
Again: we should definitely have this kind of model. But I think that in order to be useful and maintainable, we should take an approach similar to that of the https://github.com/KhronosGroup/glTF-Asset-Generator - namely, generating such models programmatically. I recently spent some time on a spare-time project that is basically just a thin layer around glTF-Transform, with convenience functions for generating glTF assets. Below is a snippet based on that, which I quickly hacked together to mimic the structure of this asset. It generates […]
```ts
import fs from "fs";
import { Document, Node, Texture } from "@gltf-transform/core";
import { prune } from "@gltf-transform/functions";
import { MeshPrimitiveGeometries } from "../gltf/MeshPrimitiveGeometries";
import { Meshes } from "../gltf/Meshes";
import { Materials } from "../gltf/Materials";
import { Io } from "../gltf/Io";
import { Geometries } from "../base/Geometries";
import { Images } from "../base/Images";

// Create the geometry for one mesh primitive: an 8x8 plane, with the
// (unused) colors and tangents accessors removed
function createMeshPrimitiveGeometry(document: Document) {
  const numPointsX = 8;
  const numPointsY = 8;
  const geometry = Geometries.createPlaneGeometry(numPointsX, numPointsY);
  const meshPrimitiveGeometry = MeshPrimitiveGeometries.create(
    document,
    geometry
  );
  delete meshPrimitiveGeometry.colorsAccessor;
  delete meshPrimitiveGeometry.tangentsAccessor;
  return meshPrimitiveGeometry;
}

// Create one mesh whose material uses the given texture
async function createMesh(document: Document, texture: Texture) {
  const meshPrimitiveGeometry = createMeshPrimitiveGeometry(document);
  const material = Materials.createMaterialFromTexture(document, texture);
  const mesh = Meshes.createSimpleMesh(
    document,
    meshPrimitiveGeometry,
    material,
    undefined
  );
  return mesh;
}

// Create a grid of numNodesX * numNodesY nodes, each with its own mesh,
// cycling through a pool of 100 generated 512x512 PNG textures
async function createChildren(
  document: Document,
  root: Node,
  numNodesX: number,
  numNodesY: number
) {
  const numTextures = 100;
  const textureSizeX = 512;
  const textureSizeY = 512;
  const textures: Texture[] = [];
  for (let i = 0; i < numTextures; i++) {
    const imageData = await Images.createSineTexturePng(
      textureSizeX,
      textureSizeY,
      1.0,
      1.0
    );
    const texture = document.createTexture();
    texture.setURI("");
    texture.setMimeType("image/png");
    texture.setImage(imageData);
    textures.push(texture);
  }
  for (let x = 0; x < numNodesX; x++) {
    for (let y = 0; y < numNodesY; y++) {
      console.log("Create " + x + " " + y);
      const index = x * numNodesY + y;
      const texture = textures[index % numTextures];
      const node = document.createNode();
      node.setTranslation([x * 1.1, y * 1.1, 0]);
      const mesh = await createMesh(document, texture);
      node.setMesh(mesh);
      root.addChild(node);
    }
  }
}

// Create a document with a single scene and root node, fill it with the
// grid of textured nodes, and prune any unused elements
async function createDocument(numNodesX: number, numNodesY: number) {
  const document = new Document();
  document.createBuffer();
  const scene = document.createScene();
  document.getRoot().setDefaultScene(scene);
  const root = document.createNode();
  scene.addChild(root);
  await createChildren(document, root, numNodesX, numNodesY);
  await document.transform(prune());
  return document;
}

// Write the generated document as a GLB file
async function create(numNodesX: number, numNodesY: number) {
  const document = await createDocument(numNodesX, numNodesY);
  const io = await Io.get();
  const glb = await io.writeBinary(document);
  fs.writeFileSync("stressTest-" + numNodesX + "-" + numNodesY + ".glb", glb);
}

create(100, 100);
```

It generates the following (66MB) glTF asset:

[screenshot of the generated asset]

Yeah, it doesn't look sooo fancy compared to the asteroids. But adding the line […]

Of course, this raises the question about the dimensions that should be covered by such stress tests. But I think that there is some low-hanging fruit and obvious combinations or things to explore (namely, the ones that I listed above), and the main question is how to "encode" that in a form that produces the results that we actually want to test out.
Hello @javagl and thanks for your comments. The intention of this model is not what you are looking for. I think that what you are proposing is some sort of regression test suite, where the focus seems to be on (a large) number of permutations. As for the threads that you refer to: […]
Whatever you can do with the asteroids, you can also do with the model that is generated with that code snippet. In both cases, you have a model with 10000 nodes and 10000 meshes (with a few dozen vertices) and 100 textures of size 512x512. The engine does not care whether these meshes+textures are "asteroids" or this boring artificial grid of textured rectangles. When you optimize the load time and rendering of the artificial model, it will have the exact same effect on the asteroids model.

Of course, one can argue for this particular model because of its aesthetic component (and I'd really like to see those asteroids rotating around the center at different speeds and maybe even tumbling a bit and such 🙂). And it may be more encouraging for implementors to see this particular model in the list of "official sample assets", with the goal of achieving a certain load time and FPS: you can say that it should load in less than 10 seconds and render at 60FPS, but it wouldn't make sense to define such goals for each configuration of the artificial models.

But we should be careful with how these goals are phrased. Some of that may be obvious. But people might select this model in the glTF-Sample-Viewer and see that it takes far longer than 10 seconds to load, not being aware that the reason for that is just their slow network. Others might drop this model into https://sandbox.babylonjs.com/ and see that it takes about a minute to load. (But there are good, profound technical reasons for that...)

An aside: here is a comparison of this model (with 60MB) and a similar one with 34MB (the second one just uses smaller textures):

[comparison screenshots]

I don't want to "block" merging this model. If people think that this particular (stress test) model should be in the repo, then that's OK for me. But it is only one model, and we should think about how to address the linked issues of stress/limit test models in a broader sense.
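For reference, a smaller-texture variant like that could be generated mechanically as well. A minimal sketch using glTF-Transform - this assumes the `textureResize` transform from `@gltf-transform/functions`, and the file names are just examples:

```ts
import { NodeIO } from "@gltf-transform/core";
import { textureResize } from "@gltf-transform/functions";

// Downscale all textures of the stress-test asset to produce a smaller
// variant for file-size and load-time comparisons (file names are examples)
async function shrinkTextures() {
  const io = new NodeIO();
  const document = await io.read("stressTest-100-100.glb");
  // Clamp every texture to at most 128x128 pixels
  await document.transform(textureResize({ size: [128, 128] }));
  await io.write("stressTest-100-100-small.glb", document);
}

shrinkTextures();
```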
Add one directional light source and camera to the model.
Renamed to NodePerformanceTest to make sure it is known that this is not the type of stress test discussed earlier and in other threads (where the purpose is to find out where a viewer/engine breaks).
Yes? @echadwick-artist and @emackey - what is the merge strategy for this repo?
The point is that it will be hard to say what it is that this model is "testing". Depending on what the "bottleneck" is in each specific case, one could probably reduce the load time from <10 seconds to <5 seconds by reducing the texture size. This would be independent of the number of nodes (which is the main point that this model is supposed to test, according to its name).

You know, I'm a stickler, and I might be overthinking all this. But I have already taken an internal note to summarize my (over)thoughts about "stress test models" in either one of the existing issues, or open a new issue here. So this is largely independent of this particular model now.
Independent of any other issues, there needs to be an infrastructure update before PRs can be merged. The general topic of Merge Policy can be discussed during the Tooling meeting.
I disagree. […]

As such I believe it is a useful model.
I agree it's useful for testing. I think my main hesitation is that it's intended to be a harsh test, demonstrating detrimental things that can happen to one's system as a result of a model that is too large. We have a number of tests checking if technical features are correctly supported, but so far we don't have any tests for what happens when the glTF is malformed, when textures are oversized, when there are too many nodes or too many polygons, etc. I'm concerned that's a category we may not want in this repository, as users of these sample assets have traditionally prided themselves on trying to get every last model to appear correctly.

During the previous Tooling TSG meeting, a suggestion arose that we could have a generation script (possibly glTF-Asset-Generator or similar) produce these kinds of models. A script could generate models of arbitrarily large size or complexity without needing to check all the various sizes into source control. I think this would be a more flexible approach to supplying users with large stress test models.
I agree that it is a useful model. But...
In this case, the engine might be spending most of the time loading textures, and the fact that this model has 10000 nodes might not be relevant. We have to be very careful with describing the goal and purpose of the model, and the insights that can be gained from using it as a "benchmark".
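One way to probe that hypothesis - as a sketch, not part of this PR, assuming glTF-Transform and an example file name - is to produce an untextured variant of the model and compare the load times of the two:

```ts
import { NodeIO } from "@gltf-transform/core";
import { prune } from "@gltf-transform/functions";

// Produce an untextured variant of the model so that node/mesh overhead
// can be measured separately from texture decoding and upload
async function stripTextures() {
  const io = new NodeIO();
  const document = await io.read("NodePerformanceTest.glb");
  for (const material of document.getRoot().listMaterials()) {
    // Detach the base color texture; prune() below then removes the
    // unreferenced textures and images from the asset
    material.setBaseColorTexture(null);
  }
  await document.transform(prune());
  await io.write("NodePerformanceTest-untextured.glb", document);
}

stripTextures();
```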
GLB file contents now:
- Total size: 38613212 bytes
- JSON chunk: 12447596 bytes
- Buffer (chunk 1): 26165588 bytes, of which images use 1135181 bytes
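For reference, numbers like these can be read directly from the GLB container, which is a 12-byte header followed by length-prefixed chunks. A small Node.js sketch (the file name is an example):

```ts
import fs from "fs";

// Print the total size and per-chunk sizes of a GLB file.
// GLB layout: 12-byte header (magic, version, total length), then chunks,
// each with a uint32 byte length and a uint32 type ('JSON' or 'BIN\0')
function printGlbChunkSizes(path: string) {
  const data = fs.readFileSync(path);
  const magic = data.readUInt32LE(0);
  if (magic !== 0x46546c67) {
    throw new Error("Not a GLB file: " + path);
  }
  console.log("Total size: " + data.readUInt32LE(8));
  let offset = 12;
  while (offset < data.length) {
    const chunkLength = data.readUInt32LE(offset);
    const chunkType = data.toString("ascii", offset + 4, offset + 8);
    console.log("Chunk '" + chunkType + "': " + chunkLength + " bytes");
    offset += 8 + chunkLength;
  }
}

printGlbChunkSizes("NodePerformanceTest.glb");
```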
Why would you think that? And again - this is not a stress test to push an engine towards its breaking point! In my opinion the future of glTF is doomed if the only use case we can think of is one small model displayed in a WebGL viewer.
So? Notice that the size of the textures in the file is now around 1 megabyte - hardly something that should take time to load.
@rsahlin My ... (concerns? doubts? or let's just call it) 'open questions' have been carved out into #105, so this is now independent of this PR. One of the next things that I'll probably implement is tests for animations. Most viewers will be able to display 10000 nodes smoothly. Most viewers will not be able to display 10000 animated nodes smoothly. Just another dimension to be tested.
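As a sketch of that animation dimension, the generator snippet from earlier in this thread could give each created node its own rotation channel. This assumes the same glTF-Transform based setup as that snippet; the helper name and keyframe values are illustrative:

```ts
import { Document, Node } from "@gltf-transform/core";

// Attach a simple rotation animation to one node. Quaternion keyframes:
// identity -> 180 degrees around Y -> back to identity (a full turn)
function addRotationAnimation(document: Document, node: Node, name: string) {
  const buffer = document.getRoot().listBuffers()[0];
  const input = document
    .createAccessor()
    .setType("SCALAR")
    .setArray(new Float32Array([0, 1, 2])) // keyframe times in seconds
    .setBuffer(buffer);
  const output = document
    .createAccessor()
    .setType("VEC4")
    .setArray(
      new Float32Array([
        0, 0, 0, 1, // 0 degrees (identity)
        0, 1, 0, 0, // 180 degrees around Y
        0, 0, 0, -1, // 360 degrees (identity again)
      ])
    )
    .setBuffer(buffer);
  const sampler = document
    .createAnimationSampler()
    .setInput(input)
    .setOutput(output)
    .setInterpolation("LINEAR");
  const channel = document
    .createAnimationChannel()
    .setTargetNode(node)
    .setTargetPath("rotation")
    .setSampler(sampler);
  document.createAnimation(name).addSampler(sampler).addChannel(channel);
}
```

Calling this once per generated node (e.g. inside the createChildren loop) would turn the static grid into the "10000 animated nodes" case.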
Great! Is this ready to merge?
Yes, I would say so - questions resolved and checks pass 👍
Ping - still no movement towards merging?
Ping @echadwick-artist - could we please merge this now?
There's still the question of how to address #56, and given that there is an n-dimensional space that we'd like to "fill", this model is a single point in that space and therefore not the solution to that issue. But ... that alone is probably not a reason to not merge this. Nobody was clearly opposed to adding this model. So I'll just hit "Merge" now 🤷♂️
The purpose of this model is to have a performance test for a demanding node, mesh and material use case.
Each node references a unique geometry and material; in total there are 100 textures used by 10000 meshes in a flat node hierarchy.
Ideally this should load quickly (in less than 10 seconds) and render at a high fps - this model can be used to optimize performance as needed.
On my reference renderer, using a 4-year-old mid-level PC, it loads in around 6 seconds and displays at 144 fps (which is the monitor's maximum refresh rate).
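An fps figure like the one above can be checked in any browser-based viewer with a plain requestAnimationFrame counter; a minimal, viewer-agnostic sketch:

```ts
// Log the average frames per second once per second, using only
// requestAnimationFrame; runs in any browser-based viewer page
function logFps() {
  let frameCount = 0;
  let lastReport = performance.now();
  function onFrame(now: number) {
    frameCount++;
    if (now - lastReport >= 1000) {
      console.log("fps: " + (frameCount * 1000) / (now - lastReport));
      frameCount = 0;
      lastReport = now;
    }
    requestAnimationFrame(onFrame);
  }
  requestAnimationFrame(onFrame);
}

logFps();
```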