Hi dear friends,
I'm heavily testing fTetWild on a model that is representative of the kind of geometry I want to deal with.
It's a CAD-produced building element, composed of many sub-elements.
The model is composed of 21k nodes and 18k triangles.
I've compiled fTetWild on Windows with VS2022, in Release mode.
To compute the result, I'm using the CSG approach, with all sub-elements linked into one huge union chain (JSON).
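For reference, here is a minimal sketch of the CSG tree JSON I pass via --csg. The file names are placeholders, and the "operation"/"left"/"right" key names are my assumption from the examples shipped with fTetWild, so the exact schema may differ; the union chain is built by nesting the next sub-element into the "right" child:

```json
{
    "operation": "union",
    "left": "sub_element_01.obj",
    "right": {
        "operation": "union",
        "left": "sub_element_02.obj",
        "right": "sub_element_03.obj"
    }
}
```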
On my mid-range laptop (Core i5-8265U, 16 GB RAM), the overall process with default settings takes 200 s.
The result looks good at first sight, but if you zoom in:
The resulting tetrahedralization largely fails to respect the model boundaries.
Here are the computation parameters taken from an early stage of the process:
So here are my questions ;-)
Is the 200 s processing time a "normal" figure, in your experience?
Are there any optimizations I can make to significantly reduce that processing time?
I've tried increasing the target edge length (×10), but the processing time got worse (230 s).
I've tried increasing the min energy threshold, but this only saved a few seconds.
Is there perhaps a parallelized version that could run on multiple cores/threads locally?
As far as geometric precision is concerned, you can see that with a 2e-3 effective epsilon, many of the model's sub-objects have been "swallowed" during the process, even though these objects are thicker/larger than this limit.
Additionally, you can see that many tetrahedra protrude from or penetrate the initial model.
To reduce this phenomenon I have to shrink the envelope, but the computation time seems to grow quadratically with this parameter, leading to very long processing times.
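For context, this is roughly the invocation I've been varying these parameters with. I'm assuming from the --help output that -l/--lr is the target edge length and -e/--epsr the envelope epsilon, both relative to the bounding-box diagonal, and that --stop-energy is the min energy threshold; the file names are placeholders:

```sh
# Hypothetical sketch, assuming these FloatTetwild_bin flags map to the parameters above:
#   -l / --lr        target edge length, as a fraction of the bounding-box diagonal
#   -e / --epsr      envelope epsilon, as a fraction of the bounding-box diagonal
#   --stop-energy    stop mesh optimization once the max tetrahedron energy drops below this value
FloatTetwild_bin --csg building_element_csg.json -o building_element.msh \
    -l 0.05 -e 5e-4 --stop-energy 10
```

If I've misread which flag corresponds to the "effective epsilon" reported in the log, please correct me.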
I've tested TetWild too, with better results, but with processing times that are not acceptable for my purposes (15 minutes on that model).
Is there anything else I can do to reduce that phenomenon?
Thanks a lot!