diff --git a/docs/getting_started/important_concepts.md b/docs/getting_started/important_concepts.md new file mode 100644 index 000000000..380f77956 --- /dev/null +++ b/docs/getting_started/important_concepts.md @@ -0,0 +1,111 @@ +--- +icon: material/lightbulb +--- + +# **Important Concepts** + +In this document, we discuss and disambiguate a number of concepts that are central to working with OmniGibson and BEHAVIOR-1K. + +## **BEHAVIOR concepts** + +At a high level, the BEHAVIOR dataset consists of tasks, synsets, categories, objects and substances. These are all interconnected and are used to define and simulate household robotics tasks. + +### Tasks + +Tasks in BEHAVIOR are first-order logic formalizations of 1,000+ long-horizon household activities that survey participants indicated they would benefit from robot help with. Each task is defined in a single BDDL file that includes the list of objects needed for the task (the *object scope*), their *initial conditions* (i.e. what a scene should look like when the task begins), and their *goal conditions* (i.e. what must be true for the task to be considered complete). Task definitions are symbolic: grounding one in a particular scene with particular objects produces a *task instance*. Task instances are created through a process called *sampling* that finds scenes and rooms matching the task's requirements and arranges non-scene objects into configurations that satisfy the task's initial conditions. + +### Synsets + +Synsets are the nouns used in the BDDL object scopes, expanded from the WordNet hierarchy with additional synsets to suit BEHAVIOR's needs. Synsets are laid out as a directed acyclic graph, so each synset can have parents/ancestors and children/descendants. When a task object scope requires a synset (e.g. "grocery.n.01"), instantiations of the task might use objects belonging to any descendant of that synset (e.g.
an apple, assigned to "apple.n.01"), allowing a high degree of flexibility in task definitions. Each synset is annotated with abilities and parameters that define the kinds of behavior expected from objects of that synset (e.g. a faucet is a source of water, a door is openable, a stove is a heat source, etc.). + +### Categories + +Categories act as a bridge between synsets and OmniGibson's objects. Each category is mapped to one leaf synset and can contain multiple objects. The purpose of a category is to disambiguate between objects that are semantically the same but not functionally and physically interchangeable; e.g. both a wall-mounted sink and a standing sink are `sink.n.01` semantically (i.e. they have the same functions and can be used for the same purposes), but they should not be swapped for one another during object randomization, for the sake of physical and visual realism. As a result, wall_mounted_sink and standing_sink are different categories, but they are mapped to the same synset and thus can be used for the same task-relevant purposes. + +### Objects + +Objects denote specific 3D object models in the dataset. Each object belongs to one category and has a unique 6-character ID that identifies it in the dataset. Objects can be annotated with articulations and metadata, which OmniGibson uses to simulate the abilities expected of the object's assigned synset. For example, a faucet is a fluid source, so it needs an annotation for the position the water will come out of. + +### Scenes + +Scenes are specific configurations of objects. A scene file contains the information needed to lay out all of the objects that form the scene. BEHAVIOR-1K ships with 50 base scenes covering a variety of environments such as houses, offices, and restaurants. These scenes support object randomization, which replaces objects with other objects from the same category within the existing objects' bounding boxes.
During task sampling, additional objects requested in the object scope can be added, and these scene/task combinations (*task instances*) can be saved separately. BEHAVIOR-1K ships with at least one instantiation of each task. + +### Substances / Systems + +Some synsets, such as water, are marked as substances. Substance synsets have no categories or objects; instead, they are mapped to *particle systems* inside OmniGibson. Particle systems can act in a variety of ways: some, like water, act and are rendered as fluids; others, like stains, are simply visual particles with custom meshes. Substances are implemented as singletons at the scene level, i.e. there is only one *water* particle system in a scene, and its particles may be arbitrarily placed in the scene. At a symbolic level, other objects can be filled with, covered in, or simply contain particles of a particle system. + +### Transition Rules + +Transition rules define complex physical or chemical interactions between objects and substances that are not natively supported by Omniverse. They specify input and output synsets and the conditions under which a transition occurs, and include rules for washing, drying, slicing, dicing, melting, and recipe-based transformations. Each rule type has specific input and output requirements and conditions. When the input requirements are satisfied, the rule is applied, removing some objects/substances from the scene and adding others. + + +## **Components of the BEHAVIOR ecosystem** + +The BEHAVIOR ecosystem consists of four components: BDDL (the symbolic knowledgebase), OmniGibson (the simulator), the BEHAVIOR dataset (the scene and object assets), and the OmniGibson assets (robots, etc.). + +### OmniGibson + +OmniGibson is the main software component of the BEHAVIOR ecosystem. It is a robotics simulator built on NVIDIA Isaac Sim and is the successor of the BEHAVIOR team's previous well-known simulator, iGibson.
OmniGibson is designed to meet the needs of the BEHAVIOR project, including realistic rendering, high-fidelity physics, and the ability to simulate soft bodies and fluids. + +OmniGibson is a Python package, and it requires Isaac Sim to be available locally to function. It can also be used independently from the BEHAVIOR ecosystem to perform robot learning on different robots, assets, and tasks. The OmniGibson stack is discussed further in the "OmniGibson, Omniverse and Isaac Sim" section. + +### OmniGibson Assets + +The OmniGibson assets are a collection of robots and other simple graphical assets that are downloaded into the omnigibson/data directory. These assets are necessary to run OmniGibson for any purpose (no robot simulation without robots!), and as such are shipped separately from the BEHAVIOR dataset, which contains the items needed to simulate BEHAVIOR tasks. These assets are not encrypted. + +### The BEHAVIOR dataset + +The BEHAVIOR dataset consists of the scene, object and particle system assets that are used to simulate the BEHAVIOR-1K tasks. Most of the assets were procured through ShapeNet and TurboSquid, and the dataset is encrypted to comply with their licenses. + +* Objects are represented as USD files that contain the geometry, materials, and physics properties of the objects. Materials are provided separately. +* Scene assets are represented as JSON files containing OmniGibson state dumps that describe a particular configuration of the USD objects in a scene. Scene directories also include additional information such as traversability maps of the scene with various subsets of objects included. *In the currently shipped versions of OmniGibson scenes, "clutter" objects that are not task-relevant are not included (e.g.
the products for sale at the supermarket), to reduce the complexity of the scenes and improve simulation performance.* +* The particle system assets are represented as JSON files describing the parameters of the particle system. Some particle systems also contain USD assets that are used as particles of that system. Other systems are rendered directly using isosurfaces, etc. + +### BDDL + +The BEHAVIOR Domain Definition Language (BDDL) library contains the symbolic knowledgebase for the BEHAVIOR ecosystem and the tools for interacting with it. The BDDL library contains the following main components: + +* The BEHAVIOR Object Taxonomy, which contains a tree of nouns ("synsets") derived from WordNet and enriched with annotations and relationships that are useful for robotics and AI. The Object Taxonomy also includes mappings of BEHAVIOR dataset categories and systems to synsets. The Object Taxonomy can be accessed using the `bddl.object_taxonomy` module. +* The BEHAVIOR Domain Definition Language (BDDL) standard, parsers, and implementations of all of the first-order logic predicates and functions defined in the standard. +* The definitions of the 1,000 tasks that are part of the BEHAVIOR-1K dataset. These are defined with initial and goal conditions as first-order logic predicates in BDDL. +* The backend abstract base class, which needs to be implemented by a simulator (e.g. OmniGibson) to provide the necessary functionality to sample the initial conditions and check the predicates in goal conditions of tasks. +* Transition rule definitions, which define recipes, like cooking, that result in the removal and addition of nouns into the environment state at a given time. Some of these transitions are critical to the completion of a task; e.g. blending lemons and water in a blender needs to produce the lemonade substance for the `making_lemonade` task to be feasible. These need to be implemented by the simulator.
+* The knowledgebase module (`bddl.knowledge_base`), which contains an ORM representation of all of the BDDL + BEHAVIOR dataset concepts. This can be used to investigate the relationships between objects, synsets, categories, substances, systems, and tasks. The [BEHAVIOR knowledgebase website](https://behavior.stanford.edu/knowledgebase) is a web interface to this module. + + +## **OmniGibson, Omniverse, Isaac Sim and PhysX** + +OmniGibson is an open-source project that is built on top of NVIDIA's Isaac Sim and Omniverse. Here we discuss the relationship between these components. + +### Omniverse + +Omniverse is a platform developed by NVIDIA that provides a set of tools and services for creating, sharing, and rendering 3D content. + +Omniverse on its own is an SDK containing a UI, a photorealistic renderer (RTX/Hydra), a scene representation (USD), a physics engine (PhysX), and a number of other features. Its components, and other custom code, can be used in different combinations to create "Omniverse apps". + +An Omniverse app usually involves rendering, but does not have to involve physics simulation. NVIDIA develops a number of such apps in-house, e.g. Omniverse Create, which can be used as a CAD design tool, and Isaac Sim, which is an application for robotics simulation. + +### PhysX + +PhysX is a physics engine owned and developed by NVIDIA and used in a variety of games and platforms such as Unity. It is integrated into Omniverse and thus can be used to apply physics updates to the state of the scene in an Omniverse app. + +PhysX supports important features that are necessary for robotics simulation, such as articulated bodies, joints, motors, and controllers. + +### Isaac Sim + +Isaac Sim is an Omniverse app developed by NVIDIA that is designed for robotics simulation. It is built on top of Omniverse and uses PhysX for physics simulation.
As an Omniverse app, it is defined as a list of Omniverse components that need to be enabled to comprise the application, together with a thin layer of custom logic that supports launching the application as a library and programmatically stepping the simulation, rather than launching it as an asynchronous, standalone desktop application. + +It's important to note that the Omniverse SDK is generally meant as a CAD / collaboration / rendering platform and is monetized as such. Isaac Sim is a bit of a special case in that its main purpose is robotics simulation, which usually involves starting with a fixed state and simulating through physics, rather than manually editing a CAD file or creating animations using keyframes. The application also runs as an MDP where the viewport updates on step, rather than asynchronously like a typical interactive desktop app. As a result, a lot of Omniverse features are not used in Isaac Sim, and some features (e.g. timestamps, live windows, etc.) do not quite work as expected. + +### OmniGibson + +OmniGibson is a Python package built by the BEHAVIOR team at the Stanford Vision and Learning Group on top of Isaac Sim that provides a number of features necessary for simulating BEHAVIOR tasks. OmniGibson: + +* completely abstracts away the Isaac Sim interface (i.e. users do not interact with NVIDIA code / interfaces / abstractions at all), instead providing a familiar scene/object/robot/task interface similar to those introduced in iGibson +* provides a number of fast high-level APIs for interacting with the simulator, such as loading scenes, setting up tasks, and controlling robots +* implements samplers and checkers for all of the predicates and functions defined in the BDDL standard to allow instantiation and simulation of BEHAVIOR-1K tasks +* includes utilities for working with the BEHAVIOR dataset, including decryption, saving / loading scene states, etc.
+* supports very simple vectorization across multiple copies of the scene to aid with training reinforcement learning agents +* provides easily configurable controllers (direct joint control, inverse kinematics, operational space, differential drive, etc.) that can be used to control robots in the simulator + +OmniGibson is shipped as a Python package through pip or GitHub; however, it requires Isaac Sim to be installed locally to function. It can also be used independently from the BEHAVIOR ecosystem to perform robot learning on different robots, assets, and tasks. diff --git a/docs/miscellaneous/contact.md b/docs/miscellaneous/contact.md index 5c1412489..b6ac6e39b 100644 --- a/docs/miscellaneous/contact.md +++ b/docs/miscellaneous/contact.md @@ -1,5 +1,5 @@ # **Contact** -If you have any questions, comments, or concerns, please feel free to reach out to use by joining our Discord server: +If you have any questions, comments, or concerns, please feel free to reach out to us by joining our Discord server: \ No newline at end of file diff --git a/docs/miscellaneous/contributing.md b/docs/miscellaneous/contributing.md index dc8999150..0303b279b 100644 --- a/docs/miscellaneous/contributing.md +++ b/docs/miscellaneous/contributing.md @@ -8,12 +8,44 @@ If you encounter any bugs or have feature requests that could enhance the platfo When reporting a bug, please kindly provide detailed information about the issue, including steps to reproduce it, any error messages, and relevant system details. For feature requests, we appreciate a clear description of the desired functionality and its potential benefits for the OmniGibson community. -## **Pull Requests** +You can also ask questions about issues on our Discord channel. + +## **Branch Structure** + +The OmniGibson repository uses the following branching structure: + +* *main* is the branch that contains the latest released version of OmniGibson.
No code should be pushed directly to *main*, and no pull requests should be merged directly into *main*. It is updated at release time by OmniGibson core team members. External users are expected to be on this branch. +* *og-develop* is the development branch that contains the latest, stable development version. Internal users and developers are expected to be on, or branching from, this branch. Pull requests for new features should be made into this branch. **It is our expectation that og-develop is always stable, i.e. all tests must pass and all PRs must be complete features to be merged.** + +## **How to contribute** We are always open to pull requests that address bugs, add new features, or improve the platform in any way. If you are considering submitting a pull request, we recommend opening an issue first to discuss the changes you would like to make. This will help us ensure that your proposed changes align with the goals of the project and that we can provide guidance on the best way to implement them. -When submitting a pull request, please ensure that your code adheres to the following guidelines: +**Before starting a pull request, understand our expectations. Your PR must:** + +1. Contain clean code with properly written English comments +2. Contain all of the changes (no follow-up PRs), and **only** the changes (no huge PRs bundling unrelated things), necessary for **one** feature +3. Leave og-develop in the fully stable state you found it in + +Follow the steps below to develop a feature: + +1. **Branch off of og-develop.** Start by checking out og-develop and branching from it. If you are an OmniGibson team member, you can push your branches onto the OmniGibson repo directly. Otherwise, you can fork the repository. +2. **Implement your feature.** Implement your feature as discussed with the OmniGibson team in your feature request or elsewhere.
Some things to pay attention to: + - **Examples:** If you are creating any major new features, create an example that a user can run to try out your feature. See the existing examples in the examples directory for inspiration, and follow the same structure, which allows examples to be run headlessly as integration tests. + - **Code style:** We follow the [PEP 8](https://www.python.org/dev/peps/pep-0008/) style guide for Python code. Please ensure that your code is formatted according to these guidelines. We recommend installing our pre-commit hooks, which fix style issues and sort imports. These are also applied automatically on pull requests. + - **Inline documentation:** We request that all new APIs be documented via docstrings, and that functions be reasonably commented. +3. **Write user documentation**: If your changes affect the public API or introduce new features, please update the relevant documentation to reflect these changes. If you are creating new features, consider writing a tutorial. +4. **Testing**: Please include tests to ensure that the new functionality works as expected and that existing functionality remains unaffected. This both confirms that your feature works and protects it against regressions caused by unrelated PRs from others. Unit tests are run on each pull request, and failures will prevent PRs from being merged. +5. **Create PR**: After you are done with all of the above steps, create a pull request on the OmniGibson repo. **Make sure you pick og-develop as the base branch.** A friendly bot will complain if you don't. In the pull request description, explain the feature and the need for the changes, link to any discussions with developers, and assign the feature for review by one of the core developers. +6. **Go through the review process**: Your reviewers may leave comments on things to be changed, or ask you questions.
Even if you fix things or answer questions, do **NOT** mark conversations as resolved; let the reviewer do so in their next pass. After you are done responding, click the button to request another round of reviews. Repeat until there are no open conversations left. +7. **Merged!** Once the reviewer is satisfied, they will go ahead and merge your PR. The PR will be merged into og-develop for immediate developer use, and included in the next release for public use. Public releases happen every few months. Thanks a lot for your contribution, and congratulations on becoming a contributor to what we hope will be the world's leading robotics benchmark! + +## **Continuous Integration** +The BEHAVIOR suite has continuous integration running via GitHub Actions in containers on our compute cluster. To keep our cluster safe, the CI is only run on external work after one of our team members approves it. + +* Tests and profiling are run directly on PRs and merges on the OmniGibson repo using our hosted runners +* Docker image builds are performed using GitHub-owned runners +* Docs builds are run on the behavior-website repo along with the rest of the website +* When GitHub releases are created, a source distribution is packaged and published on PyPI by a hosted runner -- **Code Style**: We follow the [PEP 8](https://www.python.org/dev/peps/pep-0008/) style guide for Python code. Please ensure that your code is formatted according to these guidelines. -- **Documentation**: If your changes affect the public API or introduce new features, please update the relevant documentation to reflect these changes. -- **Testing**: If your changes affect the behavior of the platform, please include tests to ensure that the new functionality works as expected and that existing functionality remains unaffected. \ No newline at end of file +For more information about the workflows and runners, please reach out on our Discord channel.
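The branching workflow described above can be sketched as a few shell commands; this is an illustrative outline only, and the branch name `my-feature` is a hypothetical example:

```shell
# Start from the latest stable development branch.
git checkout og-develop
git pull

# Branch off for your feature ("my-feature" is just an example name).
git checkout -b my-feature

# ...implement your feature, then commit your changes...
git commit -am "Implement my feature"

# Push the branch, then open a PR on GitHub with og-develop as the base branch.
git push -u origin my-feature
```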
\ No newline at end of file diff --git a/docs/modules/under_the_hood.md b/docs/modules/under_the_hood.md new file mode 100644 index 000000000..09ad72f22 --- /dev/null +++ b/docs/modules/under_the_hood.md @@ -0,0 +1,53 @@ +--- +icon: material/car-wrench +--- + +# Under the Hood: Isaac Sim Details +On this page, we discuss the particulars of certain Isaac Sim features and behaviors. + +## Playing and Stopping +TODO + +## CPU and GPU dynamics and pipelines +TODO + +## Sources of Truth: USD, PhysX and Fabric +In Isaac Sim, there are three competing representations of the current state of the scene: USD, PhysX, and Fabric. These are used in different contexts, with USD being the main source of truth for loading and representing the scene, PhysX only being used opaquely during physics simulation, and Fabric providing a faster source of truth for the renderer during physics simulation. + +### USD +USD is the scene graph representation of the scene, as directly loaded from the USD files. This is the main scene / stage representation used by Omniverse apps. + + * This representation involves maintaining the full USD tree in memory and mutating it as the scene changes. + * It is a complete, flexible representation containing all scene meshes and hierarchy that works really well for representing static scenes (i.e. no real-time physics simulation), e.g. for typical CAD workflows. + * During physics simulation, we need to repeatedly update the transforms of the objects in the scene so that they will be rendered in their new poses. USD is not optimized for this, especially because transforms are defined locally (so computing a world transform requires traversing the tree). Queries and reads/writes using the Pixar USD library are also relatively slow overall. + +### PhysX +PhysX contains an internal physics-only representation of the scene that it uses to perform computations during physics simulation.
+ + * The PhysX representation is only available when simulation is playing (i.e. when it is stopped, all PhysX internal storage is freed, and when it is played again, the scene is reloaded from USD). + * This representation is the fastest source for everything it provides (e.g. transforms, joint states, etc.) since it only contains physics-relevant information; it also provides methods, called tensor APIs, to access this information in a tensorized manner, and these are used in a number of places in OmniGibson. + * But it does not contain any rendering information and is not available when simulation is stopped. As such, it cannot serve as the source of truth for the renderer. + * Therefore, by default, PhysX explicitly updates the USD state after every step so that the renderer and the representation of the scene in the viewport are updated. This is a really slow operation for large scenes, causing frame rates to drop below 10 fps even for our smallest scenes. + +### Fabric +Fabric (formerly Flatcache) is a flattened version of the USD scene graph that is optimized for fast access to transforms and for rendering. + + * It can be enabled using the ENABLE_FLATCACHE global macro in OmniGibson, which causes the renderer to use Fabric instead of USD to get object transforms, and causes PhysX to stop updating the USD state after every step and update the Fabric state instead. + * The Fabric state exists alongside the USD and captures much of the same information, although it is not as complete as USD. It is optimized for fast reads and writes of object transforms and is used by the renderer to render the scene. + * The information it contains is usually fresher than the USD's, so when Fabric is enabled, special attention must be paid not to accidentally access stale information from USD instead of Fabric. + * Fabric stores world transforms directly, so changes to the transform of an object's parent will not be reflected in the child's position, because the child stores its world transform separately. One main advantage of this setup is that computing world transforms does not require traversing the tree. + * A new library called `usdrt` provides an interface for accessing Fabric state in a way that is similar to the Pixar USD library. This is used in a number of places in OmniGibson to access Fabric state. + +To conclude, with ENABLE_FLATCACHE enabled, there will be three concurrent representations of the scene state in OmniGibson. USD will be the source of truth for the meshes and the hierarchy. While physics simulation is playing, PhysX will be the source of truth for the physics state of the scene, and we will use it for fast accesses to compute controls, etc. Finally, on every render step, PhysX will update Fabric, which will then be the source of truth for the renderer and for the OmniGibson pose APIs. + +We recommend enabling the ENABLE_FLATCACHE macro, since large scenes are unplayable without it; it can be disabled for small scenes, in which case the Fabric representation will not be used, PhysX will update the USD's local transforms on every step, and the renderer will use USD directly. + +## Lazy Imports +Almost all of OmniGibson's simulation functionality uses Isaac Sim code, objects, and components to function. These Python components often need to be imported, e.g. via an `import omni.isaac.core.utils.prims` statement. However, such imports of Omniverse libraries can only be performed if the Isaac Sim application has already been launched. Launching the application takes up to 10 minutes on the first try due to shader compilation, and about 20 seconds every time after that, and it requires a compatible GPU and the appropriate permissions. However, certain OmniGibson functionality (e.g. downloading datasets, running linters, etc.)
does not require the actual _execution_ of any Isaac Sim code, and should not be blocked by the need to import Isaac Sim libraries. + +To solve this problem, OmniGibson uses a lazy import system. The `omnigibson.lazy` module, often imported as `import omnigibson.lazy as lazy`, provides an interface that only imports modules when they are first used. + +Thus, there are two important requirements enforced in OmniGibson with respect to lazy imports: + +1. All imports of omni, pxr, etc. libraries should happen through the `omnigibson.lazy` module. Classes and functions can then be accessed using their fully qualified names. For example, instead of `from omni.isaac.core.utils.prims import get_prim_at_path` and then calling `get_prim_at_path(...)`, you should first import the lazy import module via `import omnigibson.lazy as lazy` and then call your function using its full name, `lazy.omni.isaac.core.utils.prims.get_prim_at_path(...)`. +2. No module except `omnigibson/utils/deprecated_utils.py` should import any Isaac Sim modules at load time (that module is ignored by docs, linters, etc.). This ensures that the OmniGibson package can be imported and used without the need to launch Isaac Sim. Instead, Isaac Sim modules should be imported only when they are needed, and only in the functions that use them. If a class needs to inherit from a class in an Isaac Sim module, it can be placed in the deprecated_utils.py file, or the import can be wrapped in a function to delay it, as is done in simulator.py.
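The general lazy-import pattern can be sketched in plain Python. The following is a simplified illustration of the technique, not OmniGibson's actual `omnigibson.lazy` implementation; it uses `json` as a stand-in for an expensive-to-import library:

```python
import importlib
import importlib.util
import types


class LazyModule(types.ModuleType):
    """Proxy that defers importing a module until one of its attributes is used."""

    def __init__(self, name):
        super().__init__(name)
        self._name = name

    def __getattr__(self, attr):
        full_name = f"{self._name}.{attr}"
        try:
            # If the attribute names a submodule, keep deferring with a new proxy.
            spec = importlib.util.find_spec(full_name)
        except ModuleNotFoundError:
            spec = None
        if spec is not None:
            value = LazyModule(full_name)
        else:
            # Otherwise, import the module now and fetch the attribute from it.
            module = importlib.import_module(self._name)
            value = getattr(module, attr)
        setattr(self, attr, value)  # cache so __getattr__ is not triggered again
        return value


# Nothing is imported until the attribute chain is actually used:
lazy_json = LazyModule("json")
print(lazy_json.dumps({"a": 1}))  # the json import happens only at this call
```

In OmniGibson's case, the deferred target is the Omniverse module tree rather than `json`, which is what makes `lazy.omni.isaac.core.utils.prims.get_prim_at_path(...)` usable without importing Isaac Sim at package load time.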
\ No newline at end of file diff --git a/mkdocs.yml b/mkdocs.yml index be5945b40..e20d35850 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -2,6 +2,7 @@ yaml-language-server: $schema=https://squidfunk.github.io/mkdocs-material/schema site_name: OmniGibson Documentation repo_name: StanfordVL/OmniGibson +site_url: https://behavior.stanford.edu/omnigibson repo_url: https://github.com/StanfordVL/OmniGibson theme: name: material @@ -12,7 +13,9 @@ theme: features: - navigation.tracking - - navigation.tabs + - navigation.instant + - navigation.expand + - toc.integrate - content.code.copy extra: @@ -85,9 +88,10 @@ nav: - Getting Started: - Installation: getting_started/installation.md - Quickstart: getting_started/quickstart.md + - Important Concepts: getting_started/important_concepts.md - Examples: getting_started/examples.md - Running on SLURM: getting_started/slurm.md - - Modules: + - OmniGibson Modules: - Overview: modules/overview.md - Prims: modules/prims.md - Objects: modules/objects.md @@ -102,6 +106,7 @@ nav: - Tasks: modules/tasks.md - Environments: modules/environments.md - Vector Environments: modules/vector_environments.md + - Under the Hood - Isaac Sim: modules/under_the_hood.md - Tutorials: - Demo Collection: tutorials/demo_collection.md - Saving and Loading Simulation State: tutorials/save_load.md @@ -114,6 +119,7 @@ nav: - Contributing: miscellaneous/contributing.md - Changelog: https://github.com/StanfordVL/OmniGibson/releases - Contact Us: miscellaneous/contact.md + - API Reference: reference/* extra: analytics: