
Updated a few links with direct URLs in data_performance.ipynb #2227

Merged: 11 commits, Sep 27, 2023
14 changes: 7 additions & 7 deletions site/en/guide/data_performance.ipynb
@@ -275,7 +275,7 @@
"### Prefetching\n",
"\n",
"Prefetching overlaps the preprocessing and model execution of a training step.\n",
"While the model is executing training step `s`, the input pipeline is reading the data for step `s+1`.\n",
"While the model is executing training `steps`, the input pipeline is reading the data for `steps+1`.\n",
RenuPatelGoogle marked this conversation as resolved.
"Doing so reduces the step time to the maximum (as opposed to the sum) of the training and the time it takes to extract the data.\n",
"\n",
"The `tf.data` API provides the `tf.data.Dataset.prefetch` transformation.\n",
@@ -713,12 +713,12 @@
"Here is a summary of the best practices for designing performant TensorFlow\n",
"input pipelines:\n",
"\n",
"* [Use the `prefetch` transformation](#Pipelining) to overlap the work of a producer and consumer\n",
"* [Parallelize the data reading transformation](#Parallelizing-data-extraction) using the `interleave` transformation\n",
"* [Parallelize the `map` transformation](#Parallelizing-data-transformation) by setting the `num_parallel_calls` argument\n",
"* [Use the `cache` transformation](#Caching) to cache data in memory during the first epoch\n",
"* [Vectorize user-defined functions](#Map-and-batch) passed in to the `map` transformation\n",
"* [Reduce memory usage](#Reducing-memory-footprint) when applying the `interleave`, `prefetch`, and `shuffle` transformations"
"* [Use the `prefetch` transformation](#prefetching) to overlap the work of a producer and consumer\n",
@MarkDaoust (Member) commented on Jul 19, 2023:
Automated anchor links for headings are generated differently on different systems; I don't remember the formatting for Colab vs. GitHub vs. tensorflow.org. It's best to include an <a name=""> at the target location so that it unambiguously works in all contexts.

@RenuPatelGoogle (Contributor, Author) commented on Aug 4, 2023:
Hi @MarkDaoust, I don't know the target location. In this case, can I mention the full URL of the working link, or can you suggest an alternative?

@MarkDaoust (Member) commented on Aug 4, 2023:
Sorry, my last comment didn't render correctly.

#Reducing-memory-footprint is trying to point to the ### Reduce memory footprint heading.

Please add something like <a name="reduce-memory"> at each heading you're linking to, and update the link to match (like [link](#reduce-memory)). This is a good idea because each website normalizes the titles to anchors differently.

@RenuPatelGoogle (Contributor, Author) commented:
You mean like the changes below for the line "Use the `prefetch` transformation to overlap the work of a producer and consumer"?

Use the [<a name="prefetch">](#prefetching) transformation

or

Use the [prefetch](https://www.tensorflow.org/guide/data_performance#prefetching) transformation

@MarkDaoust (Member) commented:
No. For [Reduce memory usage](#Reducing-memory-footprint)

go down to the section you're trying to link to and add the <a> there:

### Reduce memory footprint

<a name="reduce_memory_footprint">

@RenuPatelGoogle (Contributor, Author) commented on Sep 26, 2023:
@MarkDaoust, sorry for the delay. I have updated the file as you suggested. Please verify that it is the same as you described. Thank you.

"* [Parallelize the data reading transformation](#parallelizing_data_extraction) using the `interleave` transformation\n",
"* [Parallelize the `map` transformation](#parallelizing_data_transformation) by setting the `num_parallel_calls` argument\n",
"* [Use the `cache` transformation](#caching) to cache data in memory during the first epoch\n",
"* [Vectorize user-defined functions](#vectorizing_mapping) passed in to the `map` transformation\n",
"* [Reduce memory usage](#reducing_memory_footprint) when applying the `interleave`, `prefetch`, and `shuffle` transformations"
]
},
{
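Taken together, the summary bullets compose into a pipeline along these lines; a minimal sketch, assuming hypothetical file names and a placeholder parse function:

import tensorflow as tf

AUTOTUNE = tf.data.AUTOTUNE

# Hypothetical input files, stand-ins for a real dataset.
file_paths = ["data-0.tfrecord", "data-1.tfrecord"]

def parse_example(record):
    # Placeholder map function; real code would decode the record,
    # e.g. with tf.io.parse_single_example.
    return tf.strings.length(record)

dataset = (
    tf.data.Dataset.from_tensor_slices(file_paths)
    # Parallelize data reading with `interleave`.
    .interleave(tf.data.TFRecordDataset, num_parallel_calls=AUTOTUNE)
    # Parallelize the `map` transformation via `num_parallel_calls`.
    .map(parse_example, num_parallel_calls=AUTOTUNE)
    # Cache the parsed data in memory during the first epoch.
    .cache()
    # Batch elements; to vectorize a user-defined function, `batch` can come before `map`.
    .batch(32)
    # Overlap producer and consumer with `prefetch`.
    .prefetch(AUTOTUNE)
)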