
Commit

updated
jakehova committed Apr 4, 2022
1 parent 3fa4771 commit 7b72fe7
Showing 2 changed files with 75 additions and 0 deletions.
2 changes: 2 additions & 0 deletions tech/tensorflow/machine-learning.md
@@ -67,4 +67,6 @@
* models are json or bin files. Add up all the json/bin files that are pulled down and you have your model size
* Use the Memory tab in Chrome dev tools
* load the model, go to the Memory tab, take a heap snapshot => view total RAM in the Statistics drop-down option
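The same check can be done from JS: `tf.memory()` reports live tensor counts and bytes. A minimal sketch, assuming the tfjs script-tag global `tf` and a placeholder model URL (both are assumptions, not from these notes):

```javascript
// Estimate how much tensor memory a model's weights occupy by
// sampling tf.memory() before and after loading it.
async function reportModelMemory(url) {
  const before = tf.memory().numBytes;
  const model = await tf.loadLayersModel(url);
  const after = tf.memory().numBytes;
  console.log(`weights: ~${((after - before) / (1024 * 1024)).toFixed(1)} MB`);
  console.log('live tensors:', tf.memory().numTensors);
  return model;
}
```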
73 changes: 73 additions & 0 deletions tech/tensorflow/raw-models.md
@@ -0,0 +1,73 @@
# Raw Models

## Types
* Layers Model
  * retains higher-level building blocks for ease of use and debugging
  * NOT optimized
* Graph Model
  * highly optimized
  * can combine multiple operations into one batch

## How
* [Documentation](https://js.tensorflow.org/api/latest/#Models-Loading)
* Stored in two kinds of files:
  * model.json
    * metadata about the model type, architecture, and config details
    * declares a "format" key that identifies the model as either a layers model or a graph model
  * one or more ".bin" weight files, typically named "shard<#>of<#>.bin"
    * each .bin file is 4MB or less
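The "format" key in model.json tells you which loader to use. A sketch of inspecting a hosted manifest before loading it (the helper names and the manifest-reading approach are illustrative assumptions):

```javascript
// Map model.json's "format" value to the matching tf.js loader name.
function loaderFor(format) {
  if (format === 'layers-model') return 'tf.loadLayersModel';
  if (format === 'graph-model') return 'tf.loadGraphModel';
  throw new Error(`Unknown model format: ${format}`);
}

// Fetch a hosted model.json and report its format and weight shards.
async function inspectManifest(url) {
  const manifest = await (await fetch(url)).json();
  console.log('format:', manifest.format);
  console.log('weight files:', manifest.weightsManifest[0].paths);
  return loaderFor(manifest.format);
}
```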

### Loading a Raw Model
* `tf.loadGraphModel(<path to model>)`
* `tf.loadLayersModel(<path to model>)`


### Saving a model
* Can always save a model for offline use with `model.save(<destination URL>)`, where the destination uses a scheme such as `localstorage://`, `indexeddb://`, or `downloads://`
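A sketch of the save destinations, assuming `model` is an already loaded `tf.LayersModel` and the name `my-model` is a placeholder:

```javascript
// Persist a loaded model for offline use. Each destination is a
// URL-like scheme understood by model.save().
async function saveForOffline(model) {
  await model.save('localstorage://my-model'); // browser localStorage
  await model.save('indexeddb://my-model');    // browser IndexedDB (larger quota)
  await model.save('downloads://my-model');    // downloads model.json + weight .bin files
}
```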

## Inspect
* can use the `model.summary()` method to print information about the model. Provides:
* model layers
* output shapes at each layer
* number of params in each layer
* total number of params
* number of trainable and non-trainable params


## Sample Code
```javascript
const MODEL_PATH = 'https://storage.googleapis.com/jmstore/TensorFlowJS/EdX/SavedModels/sqftToPropertyPrice/model.json';

let model = undefined;

async function loadModel() {
  model = await tf.loadLayersModel(MODEL_PATH);
  model.summary();

  // Create a batch of 1.
  const input = tf.tensor2d([[870]]);
  // Create a batch of 3.
  const inputBatch = tf.tensor2d([[500], [1100], [970]]);

  // Actually make the predictions for each batch.
  const result = model.predict(input);
  const resultBatch = model.predict(inputBatch);

  // Print results to console.
  result.print(); // Or use .arraySync() to get results back as an array.
  resultBatch.print();

  // Clean up: dispose inputs, outputs, and the model itself.
  input.dispose();
  inputBatch.dispose();
  result.dispose();
  resultBatch.dispose();
  model.dispose();
}

loadModel();
```
* In this example:
  * A total of 6 tensors are created when the code runs.
  * 2 tensors are created when the model loads. The `model.dispose()` method disposes of these.
  * 2 tensors are created, one for each input. The `input.dispose()` and `inputBatch.dispose()` methods dispose of these.
  * 2 tensors are created and returned as outputs. The `result.dispose()` and `resultBatch.dispose()` methods dispose of these.
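As an alternative to calling `dispose()` on every tensor by hand, `tf.tidy()` frees all intermediate tensors created inside its callback. A sketch, assuming the tfjs script-tag global `tf` and an already loaded `model` (the helper name is an assumption):

```javascript
// Predict a price without manual dispose() calls: tf.tidy() cleans up
// every tensor created inside the callback. The model itself still
// needs an explicit model.dispose() when you are done with it.
function predictSqft(model, sqft) {
  return tf.tidy(() => {
    const input = tf.tensor2d([[sqft]]);
    const result = model.predict(input);
    return result.arraySync(); // plain JS array, safe to return from tidy
  });
}
```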
