Hi all,

So here's a discussion thread regarding the development of 3D visualization capabilities for the INTO-CPS Application.
I developed a proof-of-concept prototype for generic co-simulation visualization with basic features as a standalone Angular application during the fall of 2020, and I am currently investigating the prospect of integrating it into the INTO-CPS App. However, there are many considerations and tasks ahead if this functionality is to be integrated into the app in a meaningful way.
The thought is to use this thread to keep track of ideas and progress on 3D visualization for the app (and to test out the discussion feature for these types of things). Feel free to share any comments, questions, feature suggestions, etc.
## General Idea
The idea is to create a 3D visualization component based on WebGL/three.js for visualizing co-simulation results. Its role in relation to the rest of the INTO-CPS Application would be similar to that of the current plotting component: providing a means to display co-simulation results during/after execution, similar to tools available in other simulation software, such as the 3D Animation editor in 20-sim.
The user will have access to an editable 3D scene and be able to populate it with objects representing system parts. In addition, the user can define a series of bindings that assign FMU outputs to manipulate various properties of objects in the 3D scene, such as position, rotation, scale or visibility.
After defining (or loading) a visualization configuration (scene content + FMU bindings), a state of the 3D environment can be generated for every simulation step and displayed to the user.
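To make the binding idea concrete, here is a minimal sketch of how per-step binding application could work. All names (`Binding`, `SceneObject`, `applyBindings`) are hypothetical illustrations, not the prototype's actual API:

```typescript
// Hypothetical sketch: each binding maps one FMU output signal to one
// property/axis of a scene object, and is applied once per simulation step.

interface SceneObject {
  position: { x: number; y: number; z: number };
  rotation: { x: number; y: number; z: number };
  visible: boolean;
}

interface Binding {
  signal: string;                    // FMU output name, e.g. "arm.angle" (assumed naming)
  objectId: string;                  // id of the scene object to manipulate
  property: "position" | "rotation"; // which transform to drive
  axis: "x" | "y" | "z";
}

// Apply one simulation step's output values to the scene.
function applyBindings(
  bindings: Binding[],
  step: Record<string, number>,
  scene: Map<string, SceneObject>
): void {
  for (const b of bindings) {
    const obj = scene.get(b.objectId);
    const value = step[b.signal];
    if (obj !== undefined && value !== undefined) {
      obj[b.property][b.axis] = value;
    }
  }
}
```

In a real three.js scene the `position`/`rotation` fields would be the corresponding `Object3D` properties, but the lookup-and-assign loop would be the same shape.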
## Configuration
A visualization configuration would be composed of two primary parts: a hierarchy of displayable objects, and a series of FMU-bindings to define how specific FMU output values should manipulate different properties of these objects.
Proposed structure of configuration data associated with the visualization:

- `scene_config`: the hierarchy of displayable objects
- `fmubindings_config`: the series of FMU-bindings
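As a rough illustration of what such a configuration file might contain (the field names below are assumptions for discussion, not a fixed schema):

```json
{
  "scene_config": {
    "objects": [
      { "id": "arm", "type": "box", "position": [0, 0, 0], "children": [] }
    ]
  },
  "fmubindings_config": {
    "bindings": [
      { "signal": "craneFmu.angle", "objectId": "arm", "property": "rotation", "axis": "z" }
    ]
  }
}
```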
I'm thinking it would be appropriate for the user to define this data somewhere in the multimodel and/or co-sim configuration UI, and export the configuration to a JSON file upon running an experiment, similar to other experiment-related config data.
For displaying the 3D environment, it might be best to open a separate window for the 3D scene to avoid the layout constraints of the current config interface, but embedding it directly is also an option.
## Features / Todo
I plan to update this list as things get sorted out. I'll add any feature suggestions here to keep them in one place, so let me know if you have any ideas.
- Add predefined geometries to scene
- Object selection and editing
- Create, edit and delete FMU-bindings
- FMU-bindings to manipulate object position, rotation, scale and opacity
- Output value adjustment (scale, offset, limit to range)
- Apply bindings to step / generate scene state from step data
- Step navigation and playback (post-simulation)
- Separate modes for scene editing and step rendering
- Integrate visualization configuration into INTO-CPS UI
- Attach step renderer to running co-simulation (live visualization)
- Tree view of scene hierarchy
- Dedicated UI elements for listing/editing FMU-bindings
- Import/export scene hierarchy to JSON file
- Import/export FMU-bindings to JSON file
- Configurable reference frame for FMU-bindings manipulating object transforms (i.e. relative to parent)
- Access to some "standard library" of common shapes and objects
- Handle non-numeric data types
- Import custom models and textures (CAD, Blender, etc.)
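For one item above, output value adjustment (scale, offset, limit to range), a minimal sketch could look like this. The `Adjustment` type and `adjustValue` function are assumed names for illustration:

```typescript
// Hypothetical adjustment applied to a raw FMU output before it drives an
// object property: scale first, then offset, then clamp to an optional range.
interface Adjustment {
  scale?: number;  // multiplier, defaults to 1
  offset?: number; // added after scaling, defaults to 0
  min?: number;    // optional lower clamp
  max?: number;    // optional upper clamp
}

function adjustValue(raw: number, adj: Adjustment): number {
  let v = raw * (adj.scale ?? 1) + (adj.offset ?? 0);
  if (adj.min !== undefined) v = Math.max(v, adj.min);
  if (adj.max !== undefined) v = Math.min(v, adj.max);
  return v;
}
```

Something like this would let a binding convert, say, radians to degrees or keep a driven position within the bounds of the scene.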