diff --git a/report_thesis/src/sections/proposed_approach/testing_validation.tex b/report_thesis/src/sections/proposed_approach/testing_validation.tex
index 644e9ab0..26c81381 100644
--- a/report_thesis/src/sections/proposed_approach/testing_validation.tex
+++ b/report_thesis/src/sections/proposed_approach/testing_validation.tex
@@ -212,7 +212,7 @@ \subsubsection{Discussion of Testing and Validation Strategy}
 In our initial and optimization experiments, we prioritize cross-validation metrics to evaluate the models.
 This strategy mitigates the risk of overfitting to the test set by avoiding a bias towards lower \gls{rmsep} values.
 Conversely, for the stacking ensemble experiment, we emphasize test set metrics to comprehensively assess the ensemble's performance, while still considering cross-validation metrics.
-This approach aligns with standard machine learning conventions\cite{geronHandsonMachineLearning2023}.
+Using cross-validation for initial model selection and tuning experiments aligns with standard machine learning conventions\cite{geronHandsonMachineLearning2023}.
 In the initial experiment, cross-validation metrics serve as thresholds for model selection.
 During the optimization phase, only cross-validation metrics guide the search for optimal hyperparameters.
 For the stacking ensemble experiment, both cross-validation and test set metrics are evaluated, with a primary focus on the \gls{rmsep} metric.
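
To make the evaluation workflow in the changed paragraph concrete, a minimal scikit-learn sketch could look as follows. The dataset, estimators (GradientBoostingRegressor, SVR, Ridge), and hyperparameter grid are illustrative placeholders and not the thesis's actual pipeline; the point is only the split of responsibilities: cross-validation metrics drive hyperparameter search, while the held-out test set is reserved for scoring the stacking ensemble via RMSEP.

    # Illustrative sketch only: CV metrics guide tuning; the test set scores the ensemble.
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.ensemble import GradientBoostingRegressor, StackingRegressor
    from sklearn.linear_model import Ridge
    from sklearn.metrics import mean_squared_error
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.svm import SVR

    # Placeholder regression data standing in for the real targets.
    X, y = make_regression(n_samples=500, n_features=20, noise=0.1, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    # Optimization phase: only cross-validation metrics (here CV RMSE) guide the search.
    search = GridSearchCV(
        GradientBoostingRegressor(random_state=0),
        param_grid={"n_estimators": [100, 300], "learning_rate": [0.05, 0.1]},
        scoring="neg_root_mean_squared_error",
        cv=5,
    )
    search.fit(X_train, y_train)
    print("Best CV RMSE:", -search.best_score_)

    # Stacking ensemble experiment: judged primarily on the held-out test set (RMSEP),
    # while cross-validation metrics can still be reported alongside it.
    stack = StackingRegressor(
        estimators=[("gbr", search.best_estimator_), ("svr", SVR())],
        final_estimator=Ridge(),
    )
    stack.fit(X_train, y_train)
    rmsep = np.sqrt(mean_squared_error(y_test, stack.predict(X_test)))
    print("Test RMSEP:", rmsep)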