Stability


#[[Cross-validation]]: This method is commonly used for evaluating model stability. The data is divided into multiple [[fold]]s, and the model is trained and assessed on each fold. The average performance across all folds, along with its spread, then serves to judge how stable the model is (see the first sketch below).
#[[Bootstrapping]]: This resampling technique involves repeatedly drawing examples with replacement from the training data to generate multiple [[training set]]s. The model is then trained on each set, and its performance across the sets is used to assess its stability (see the second sketch below).
#[[Regularization]]: This technique controls [[overfitting]] by adding a penalty term to the loss function, which improves model stability by preventing the model from fitting [[noise]] in the data (see the third sketch below).
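
The following is a minimal sketch of using cross-validation to gauge stability. It assumes scikit-learn; the synthetic dataset and logistic-regression model are illustrative choices, not prescribed by this article. A small standard deviation across folds suggests a stable model.

<syntaxhighlight lang="python">
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Illustrative synthetic data and model (assumptions, not from the article).
X, y = make_classification(n_samples=500, random_state=0)
model = LogisticRegression(max_iter=1000)

# Train and evaluate on each of 5 folds; the spread of the scores,
# not just their mean, indicates how stable the model is.
scores = cross_val_score(model, X, y, cv=5)
print(f"mean accuracy: {scores.mean():.3f}, std across folds: {scores.std():.3f}")
</syntaxhighlight>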
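A minimal bootstrapping sketch under the same assumptions (scikit-learn, synthetic data). Each resample draws examples with replacement to form a new training set; the model is refit on each resample and scored on the rows left out of that resample, and the spread of those scores reflects stability.

<syntaxhighlight lang="python">
import numpy as np
from sklearn.base import clone
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, random_state=0)
base = LogisticRegression(max_iter=1000)

scores = []
for _ in range(30):
    idx = rng.integers(0, len(X), size=len(X))   # draw with replacement
    oob = np.setdiff1d(np.arange(len(X)), idx)   # rows not drawn ("out-of-bag")
    model = clone(base).fit(X[idx], y[idx])      # refit on the resample
    scores.append(model.score(X[oob], y[oob]))   # evaluate on held-out rows

print(f"mean accuracy: {np.mean(scores):.3f}, std across resamples: {np.std(scores):.3f}")
</syntaxhighlight>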
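A minimal sketch of regularization's effect on stability, again assuming scikit-learn: ridge regression adds an L2 penalty term to the squared-error loss, shrinking the coefficients and damping the fit to noise. The comparison below is illustrative; on noisy data with many features, the penalized model typically shows less variation across folds.

<syntaxhighlight lang="python">
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import cross_val_score

# Noisy synthetic data with many features (illustrative assumption).
X, y = make_regression(n_samples=100, n_features=50, noise=20.0, random_state=0)

# Compare an unpenalized fit with an L2-penalized (ridge) fit.
for name, model in [("no penalty", LinearRegression()),
                    ("ridge (alpha=1.0)", Ridge(alpha=1.0))]:
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean R^2 {scores.mean():.3f}, std {scores.std():.3f}")
</syntaxhighlight>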