From f7960a94ff47dd39d2468948293d6c99a53d647e Mon Sep 17 00:00:00 2001
From: rithin-pullela-aws
Date: Mon, 13 Jan 2025 16:25:51 -0800
Subject: [PATCH] [Doc] Update Ada Grad as the default optimiser.

Signed-off-by: rithin-pullela-aws
---
 _ml-commons-plugin/algorithms.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/_ml-commons-plugin/algorithms.md b/_ml-commons-plugin/algorithms.md
index a5a173b358..799355ca02 100644
--- a/_ml-commons-plugin/algorithms.md
+++ b/_ml-commons-plugin/algorithms.md
@@ -77,7 +77,7 @@ Parameter | Type | Description | Default value
 `beta2` | Double | The exponential decay rates for the moment estimates. | `0.99`
 `decay_rate` | Double | The Root Mean Squared Propagation (RMSProp). | `0.9`
 `momentum_type` | String | The defined Stochastic Gradient Descent (SGD) momentum type that helps accelerate gradient vectors in the right directions, leading to a fast convergence.| `STANDARD`
-`optimiser` | String | The optimizer used in the model. | `SIMPLE_SGD`
+`optimiser` | String | The optimizer used in the model. | `ADA_GRAD`
 `objective` | String | The objective function used. | `SQUARED_LOSS`
 `epochs` | Integer | The number of iterations. | `5`|
 `batch_size` | Integer | The minimum batch size. | `1`
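
For reviewers: the table being patched lists training parameters that appear to belong to the ML Commons linear regression algorithm. The sketch below shows how the `optimiser` parameter documented here might be set explicitly in a train request. It is illustrative only: the endpoint path, the `target` parameter, the index name `housing_data`, and the field names are assumptions for this example and are not part of the patch.

```json
# Hypothetical train request; index, fields, and target are illustrative assumptions.
POST /_plugins/_ml/_train/linear_regression
{
  "parameters": {
    "target": "price",
    "optimiser": "ADA_GRAD",
    "objective": "SQUARED_LOSS",
    "epochs": 5,
    "batch_size": 1
  },
  "input_query": {
    "_source": ["sqft", "price"],
    "size": 10000
  },
  "input_index": ["housing_data"]
}
```

If `optimiser` is omitted, this patch documents that the model defaults to `ADA_GRAD` rather than `SIMPLE_SGD`.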