Summary
Foreword
Preface
Before We Start
Section I: Introduction
1: Introduction
1.1: Terminology
1.2: Process of Training a Machine Learning Model
1.3: Preventing Overfitting
1.4: Code Conventions
1.5: Datasets Used
1.6: References
Section II: Feature Engineering
2: Domain-Specific Feature Engineering
2.1: Introduction
2.2: Domain-Specific Feature Engineering
2.3: References
3: EDA Feature Engineering
3.1: Introduction
3.2: Car Sales
3.3: Coupon Recommendation
3.4: Conclusion
4: Higher Order Feature Engineering
4.1: Engineering Categorical Features
4.2: Engineering Ordinal Features
4.3: Engineering Numerical Features
4.4: Conclusion
5: Interaction Effect Feature Engineering
5.1: Interaction Plot
5.2: SHAP
5.3: Putting Everything Together
5.4: Conclusion
5.5: References
Section III: Feature Selection
6: Fundamentals of Feature Selection
6.1: Introduction
6.2: Different Feature Selection Methods
6.3: Filter Method
6.4: Wrapper Method
6.5: Putting Everything Together
6.6: Conclusion
7: Feature Selection Concerning Modeling Techniques
7.1: Lasso, Ridge, and ElasticNet
7.2: Feature Importance of Tree Models
7.3: Boruta
7.4: Using Tree-Based Feature Importance for Linear Models
7.5: Using Linear Model Feature Importance for Tree Models
7.6: Linear Regression
7.7: SVM
7.8: PCA
7.9: Putting Everything Together
7.10: Conclusion
8: Feature Selection Using Metaheuristic Algorithms
8.1: Exhaustive Feature Selection
8.2: Genetic Algorithm
8.3: Simulated Annealing
8.4: Ant Colony Optimization
8.5: Particle Swarm Optimization
8.6: Putting Everything Together
8.7: Conclusion
8.8: References
Section IV: Model Explanation
9: Explaining Models and Model Predictions to a Layman
9.1: Introduction
9.2: Explainable Models
9.3: Explanation Techniques
9.4: Putting Everything Together
9.5: Conclusion
9.6: References
Section V: Special Chapters
10: Feature Engineering & Selection for Text Classification
10.1: Introduction
10.2: Feature Construction
10.3: Feature Selection
10.4: Feature Extraction
10.5: Feature Reduction
10.6: Conclusion
10.7: References
11: Things That Can Give Additional Improvement
11.1: Introduction
11.2: Hyperparameter Tuning
11.3: Ensemble Learning
11.4: Signal Processing
11.5: Conclusion
11.6: References