diff --git a/docs/texts/introduction.md b/docs/texts/introduction.md
index 439a6b2..b694fa6 100644
--- a/docs/texts/introduction.md
+++ b/docs/texts/introduction.md
@@ -47,7 +47,7 @@ Furthermore, **veScale** is also developing a _Mixed Mode_* of partial _Eager_ a
 **veScale** is designed and implemented on top of a primitive called _DTensor_ that provides a global tensor semantic with local shards distributed on multiple devices.
 
 **veScale** extends and enhances the _PyTorch DTensor_ for our production standard, and further develops the _Auto-Plan*_ and _Auto-Paralleize_ with a unified configuration and API.
 
-Furthermore, **veScale** also supports online _Auto-Reshard_ for distributed checkpoints, which will be open-sourced as a new project -- **OmniStore**.
+Furthermore, **veScale** also supports online _Auto-Reshard_ for distributed checkpoints.
 
 (`*` is under development)
diff --git a/vescale/checkpoint/README.md b/vescale/checkpoint/README.md
index 26b8d1f..b9c0bbe 100644
--- a/vescale/checkpoint/README.md
+++ b/vescale/checkpoint/README.md
@@ -21,8 +21,6 @@ abstracting away the complexities of underlying details such as process rank and
 `vescale.checkpoint` incorporates [fast checkpointing](https://arxiv.org/abs/2402.15627) and various I/O optimization techinques, enhancing I/O efficiency during LLM training.
 
-`vescale.checkpoint` will be a part of `OmniStore` project, a new open-source project coming soon.
-
 `vescale.checkpoint` is built on top of `PyTorch Distributed Checkpoint` with significant differences as discussed above.
 
 ## How to use `vescale.checkpoint`?