From c238e04cb13f5d97e22a26b718e42b83190d0916 Mon Sep 17 00:00:00 2001
From: Christian Bager Bach Houmann
Date: Thu, 13 Jun 2024 10:14:12 +0200
Subject: [PATCH] update cite meaning

---
 .../src/sections/proposed_approach/testing_validation.tex | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/report_thesis/src/sections/proposed_approach/testing_validation.tex b/report_thesis/src/sections/proposed_approach/testing_validation.tex
index 644e9ab0..26c81381 100644
--- a/report_thesis/src/sections/proposed_approach/testing_validation.tex
+++ b/report_thesis/src/sections/proposed_approach/testing_validation.tex
@@ -212,7 +212,7 @@ \subsubsection{Discussion of Testing and Validation Strategy}
 In our initial and optimization experiments, we prioritize cross-validation metrics to evaluate the models.
 This strategy mitigates the risk of overfitting to the test set by avoiding a bias towards lower \gls{rmsep} values.
 Conversely, for the stacking ensemble experiment, we emphasize test set metrics to comprehensively assess the ensemble's performance, while still considering cross-validation metrics.
-This approach aligns with standard machine learning conventions\cite{geronHandsonMachineLearning2023}.
+Using cross-validation for initial model selection and tuning experiments aligns with standard machine learning conventions\cite{geronHandsonMachineLearning2023}.
 In the initial experiment, cross-validation metrics serve as thresholds for model selection.
 During the optimization phase, only cross-validation metrics guide the search for optimal hyperparameters.
 For the stacking ensemble experiment, both cross-validation and test set metrics are evaluated, with a primary focus on the \gls{rmsep} metric.
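
For context, the protocol described in the patched paragraph maps onto a standard scikit-learn workflow. Below is a minimal sketch of that pattern: cross-validation scores gate model selection, only cross-validation guides the hyperparameter search, and the held-out test set is touched once for the final stacking ensemble. The dataset, the Ridge/GradientBoosting candidates, the selection threshold, and the alpha grid are all illustrative placeholders, not the thesis's actual pipeline.

# Sketch of the validation strategy under assumed placeholders; see lead-in above.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import GridSearchCV, cross_val_score, train_test_split

# Hold out a test set up front; it is used only for the final ensemble evaluation.
X, y = make_regression(n_samples=500, n_features=20, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Initial experiment: cross-validation RMSE serves as a model-selection threshold.
candidates = {"ridge": Ridge(), "gbr": GradientBoostingRegressor(random_state=0)}
cv_rmse = {}
for name, model in candidates.items():
    scores = cross_val_score(model, X_train, y_train, cv=5,
                             scoring="neg_root_mean_squared_error")
    cv_rmse[name] = -scores.mean()
# Illustrative threshold: keep models within 2x the best cross-validation RMSE.
selected = [n for n, r in cv_rmse.items() if r < 2 * min(cv_rmse.values())]

# Optimization phase: the hyperparameter search is guided only by cross-validation.
search = GridSearchCV(Ridge(), {"alpha": [0.1, 1.0, 10.0]}, cv=5,
                      scoring="neg_root_mean_squared_error")
search.fit(X_train, y_train)

# Stacking ensemble experiment: evaluated on the held-out test set, yielding an
# RMSEP-style figure to consider alongside the cross-validation scores.
stack = StackingRegressor(
    estimators=[("ridge", search.best_estimator_),
                ("gbr", GradientBoostingRegressor(random_state=0))],
    final_estimator=Ridge(),
)
stack.fit(X_train, y_train)
rmsep = np.sqrt(mean_squared_error(y_test, stack.predict(X_test)))
print("CV RMSE per candidate:", cv_rmse)
print("Selected candidates:  ", selected)
print(f"Test-set RMSEP:        {rmsep:.3f}")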