Regularization
Regularization is a set of techniques that reduce overfitting. It keeps the model simple and stable by controlling how large the weights grow during training.
Why Regularization Is Important
- Reduces overfitting
- Improves generalization to unseen data
- Prevents the model from memorizing noise in the training data
How Regularization Works
Regularization adds a penalty term to the loss function. The penalty pushes the weights toward small values, and small weights produce smoother decision boundaries that are less sensitive to noise in the training data.
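As a rough sketch of this idea (the function name, the MSE data loss, and the value of `lam` are illustrative choices, not a fixed recipe), the snippet below adds an L2-style weight penalty to an ordinary data loss:

```python
import numpy as np

def penalized_loss(y_true, y_pred, weights, lam=0.01):
    """Data loss plus a weight penalty; lam controls the penalty strength."""
    data_loss = np.mean((y_true - y_pred) ** 2)  # how well predictions fit the data
    penalty = np.sum(weights ** 2)               # grows when weights get large
    return data_loss + lam * penalty             # trade-off between fit and small weights
```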
Main Types of Regularization
1. L1 Regularization
L1 regularization (lasso) adds the sum of the absolute values of the weights to the loss. It pushes some weights exactly to zero, which creates sparse models and acts as a built-in form of feature selection.
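A minimal sketch with scikit-learn's Lasso (the toy dataset and the alpha value are arbitrary choices for illustration) makes the sparsity visible:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

# Toy regression problem, only for illustration
X, y = make_regression(n_samples=200, n_features=20, noise=10.0, random_state=0)

# alpha is the L1 penalty strength; larger alpha pushes more weights to exactly zero
model = Lasso(alpha=1.0)
model.fit(X, y)

print("weights set to zero:", (model.coef_ == 0).sum())
```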
2. L2 Regularization
L2 regularization (ridge, also known as weight decay) adds the sum of the squared weights to the loss. It shrinks all weights toward zero without making them exactly zero, and it is the most common choice in ML and DL tasks.
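The same toy setup with scikit-learn's Ridge (again, the data and alpha value are arbitrary) shows weights that shrink but stay nonzero:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge

X, y = make_regression(n_samples=200, n_features=20, noise=10.0, random_state=0)

# alpha is the L2 penalty strength; larger alpha shrinks the weights harder
model = Ridge(alpha=1.0)
model.fit(X, y)

print("largest absolute weight:", abs(model.coef_).max())
```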
3. Dropout
Dropout randomly deactivates a fraction of neurons at each training step. Because the network cannot rely on any single neuron, it is forced to learn more redundant, robust patterns. Dropout is switched off at inference time.
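A minimal PyTorch sketch (layer sizes and the dropout rate are illustrative) shows where a dropout layer typically sits and how it is toggled between training and evaluation:

```python
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(100, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # randomly zeroes 50% of activations, only while training
    nn.Linear(64, 10),
)

model.train()  # dropout active during training
model.eval()   # dropout disabled for validation and inference
```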
4. Early Stopping
Training stops when the validation error has not improved for a set number of epochs (the patience). This keeps the model from continuing to fit noise in the training data.
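The stopping rule itself is simple. In the schematic below the validation losses are a made-up sequence standing in for real per-epoch evaluations:

```python
# Made-up validation losses; in practice each value comes from evaluating
# the model on a held-out set after every epoch.
val_losses = [0.90, 0.72, 0.61, 0.55, 0.54, 0.56, 0.57, 0.58, 0.59]

best_loss = float("inf")
patience, wait = 3, 0            # stop after 3 epochs with no improvement

for epoch, val_loss in enumerate(val_losses):
    if val_loss < best_loss:
        best_loss = val_loss     # validation improved: remember it, reset the counter
        wait = 0
    else:
        wait += 1                # no improvement this epoch
        if wait >= patience:
            print(f"stopping at epoch {epoch}, best validation loss {best_loss}")
            break
```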
When To Use Regularization
- When the model overfits
- When the training set is small
- When the model is complex relative to the amount of data
Regularization in Deep Learning
- Dropout layers
- L2 weight decay
- Batch normalization (mainly stabilizes training, but also has a mild regularizing effect)
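In practice these are often combined. A minimal PyTorch sketch (layer sizes, dropout rate, and the weight_decay value are illustrative) pairs a dropout layer with L2 weight decay applied through the optimizer:

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(100, 64),
    nn.ReLU(),
    nn.Dropout(p=0.3),                    # dropout layer
    nn.Linear(64, 10),
)

# weight_decay applies an L2 penalty to the weights inside the optimizer update
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
```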
Regularization in Moroccan Darija
Regularization is a technique that keeps the model under control so it does not overfit. It limits the weights and lets the model generalize well.
Types
- L1. Adds the absolute weights to the loss.
- L2. Adds the squared weights to the loss.
- Dropout. Turns off neurons during training.
- Early stopping. Stops training when the validation error stops improving.
Conclusion
Regularization reduces overfitting and keeps models simpler, more stable, and more reliable on unseen data. It is a core part of modern machine learning.