
[Doc] Update the default optimiser to AdaGrad. (#9061)
Signed-off-by: rithin-pullela-aws <[email protected]>
rithin-pullela-aws authored Jan 14, 2025
1 parent f87790d commit 800a39e
Showing 1 changed file with 1 addition and 1 deletion.

_ml-commons-plugin/algorithms.md
@@ -77,7 +77,7 @@ Parameter | Type | Description | Default value
`beta2` | Double | The exponential decay rate for the second-moment estimates. | `0.99`
`decay_rate` | Double | The decay rate used by Root Mean Squared Propagation (RMSProp). | `0.9`
`momentum_type` | String | The Stochastic Gradient Descent (SGD) momentum type, which helps accelerate gradient vectors in the correct direction, leading to faster convergence. | `STANDARD`
- `optimiser` | String | The optimizer used in the model. | `SIMPLE_SGD`
+ `optimiser` | String | The optimizer used in the model. | `ADA_GRAD`
`objective` | String | The objective function used. | `SQUARED_LOSS`
`epochs` | Integer | The number of iterations. | `5`
`batch_size` | Integer | The mini-batch size. | `1`
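With this change, an ML Commons train request that omits `optimiser` falls back to `ADA_GRAD` rather than `SIMPLE_SGD`. Below is a minimal sketch of such a request; the request shape follows the ML Commons train API for linear regression, while the column names and sample values (`total_sum`, `price`) are hypothetical.

```json
POST /_plugins/_ml/_train/linear_regression
{
  "parameters": {
    "target": "price",
    "learning_rate": 0.01,
    "epochs": 5,
    "batch_size": 1
  },
  "input_data": {
    "column_metas": [
      { "name": "total_sum", "column_type": "DOUBLE" },
      { "name": "price", "column_type": "DOUBLE" }
    ],
    "rows": [
      {
        "values": [
          { "column_type": "DOUBLE", "value": 15.0 },
          { "column_type": "DOUBLE", "value": 125.0 }
        ]
      },
      {
        "values": [
          { "column_type": "DOUBLE", "value": 20.0 },
          { "column_type": "DOUBLE", "value": 160.0 }
        ]
      }
    ]
  }
}
```

Because `optimiser` is not specified, training now uses AdaGrad under the new default; passing `"optimiser": "SIMPLE_SGD"` in `parameters` restores the previous behavior.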
