
Machine Learning Practice Questions: Test Your Knowledge | LearnByTeaching.ai

Assess your machine learning understanding with these 40 questions covering supervised learning, unsupervised learning, neural networks, and model evaluation. These questions test both theoretical foundations and practical intuition needed to build effective ML systems.

40 questions total

Supervised Learning

Test your understanding of supervised learning.

Q1 · Easy · supervised-learning

In linear regression, the cost function typically minimized is:

Q2 · Medium · supervised-learning

The bias-variance tradeoff implies that a model with very high complexity will likely have:

Q3 · Medium · supervised-learning

L2 regularization (Ridge regression) adds what term to the loss function?

Q4 · Easy · supervised-learning

A decision tree splits data at each node by:

Q5 · Hard · supervised-learning

The kernel trick in SVMs allows the algorithm to:

Q6 · Easy · supervised-learning

Logistic regression outputs:

Q7 · Medium · supervised-learning

In k-nearest neighbors (KNN), increasing k generally:

Q8 · Medium · supervised-learning

Naive Bayes is called 'naive' because it assumes:

Q9 · Medium · ensemble-methods

Random Forest improves over a single decision tree by:

Q10 · Medium · ensemble-methods

Gradient boosting builds an ensemble by:

Unsupervised Learning

Test your understanding of unsupervised learning.

Q11 · Easy · unsupervised-learning

K-means clustering requires the user to specify in advance:

Q12 · Medium · unsupervised-learning

Principal Component Analysis (PCA) finds directions that:

Q13 · Medium · unsupervised-learning

DBSCAN differs from K-means in that DBSCAN:

Q14 · Medium · unsupervised-learning

In hierarchical clustering, a dendrogram shows:

Q15 · Medium · unsupervised-learning

The elbow method for choosing k in K-means plots:

Q16 · Easy · unsupervised-learning

t-SNE is primarily used for:

Q17 · Hard · unsupervised-learning

Gaussian Mixture Models (GMM) extend K-means by:

Q18 · Medium · unsupervised-learning

Autoencoders learn representations by:

Q19 · Medium · unsupervised-learning

The silhouette score measures:

Q20 · Easy · unsupervised-learning

Association rule mining (e.g., Apriori algorithm) is commonly used in:

Neural Networks and Deep Learning

Test your understanding of neural networks and deep learning.

Q21 · Medium · neural-networks

The vanishing gradient problem in deep networks causes:

Q22 · Easy · neural-networks

The ReLU activation function is defined as:

Q23 · Medium · deep-learning

Dropout regularization works by:

Q24 · Medium · deep-learning

Convolutional Neural Networks (CNNs) are particularly effective for image data because:

Q25 · Easy · neural-networks

In backpropagation, the gradient of the loss with respect to each weight is computed using:

Q26 · Medium · deep-learning

Batch normalization helps training by:

Q27 · Easy · deep-learning

A recurrent neural network (RNN) is designed for:

Q28 · Medium · deep-learning

The Transformer architecture relies primarily on:

Q29 · Easy · deep-learning

Transfer learning involves:

Q30 · Medium · deep-learning

GANs (Generative Adversarial Networks) consist of:

Model Evaluation and Practical ML

Test your understanding of model evaluation and practical ML.

Q31 · Easy · model-evaluation

K-fold cross-validation works by:

Q32 · Medium · model-evaluation

Precision in binary classification measures:

Q33 · Medium · model-evaluation

The ROC curve plots:

Q34 · Medium · model-evaluation

Data leakage occurs when:

Q35 · Medium · feature-engineering

Feature scaling (standardization or normalization) is important for which algorithms?

Q36 · Medium · model-evaluation

The F1 score is the:

Q37 · Easy · model-evaluation

When dealing with a highly imbalanced dataset (99% negative, 1% positive), accuracy is a poor metric because:

Q38 · Easy · model-evaluation

Hyperparameter tuning using grid search:

Q39 · Easy · feature-engineering

One-hot encoding is used to convert:

Q40 · Easy · model-evaluation

Early stopping is a regularization technique that:

Scoring Guide

Total possible: 40

Excellent (36-40): Strong ML foundations — you understand both the math and the practical considerations.
Good (28-35): Solid grasp of core concepts. Strengthen your understanding of regularization, evaluation metrics, and deep learning architectures.
Needs Work (below 28): Focus on the fundamentals — bias-variance tradeoff, gradient descent, and proper model evaluation before tackling advanced topics.

Study Recommendations

  • Implement linear regression, logistic regression, and a simple neural network from scratch in NumPy
  • Practice explaining the bias-variance tradeoff and when to use which regularization technique
  • Build end-to-end ML projects with real messy data to develop practical intuition
  • Study the math behind backpropagation with pen and paper before relying on frameworks
  • Use the teach-back method — explain each algorithm's assumptions, strengths, and limitations as if teaching a colleague
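To get started on the first recommendation, here is a minimal sketch of linear regression from scratch in NumPy: batch gradient descent minimizing mean squared error on synthetic data with a known slope and intercept, so the learned weights can be checked against the truth. The data, learning rate, and iteration count are all illustrative choices, not prescribed values.

```python
import numpy as np

# Synthetic data: y = 3.0 * x + 0.5 plus a little noise.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = 3.0 * X[:, 0] + 0.5 + rng.normal(0, 0.05, size=200)

# Append a bias column so the intercept is learned as an ordinary weight.
Xb = np.hstack([X, np.ones((X.shape[0], 1))])
w = np.zeros(2)

lr = 0.1
for _ in range(2000):
    preds = Xb @ w
    # Gradient of mean squared error: (2/n) * X^T (Xw - y)
    grad = 2 * Xb.T @ (preds - y) / len(y)
    w -= lr * grad

print(w)  # should be close to [3.0, 0.5]
```

Once this converges, compare the result against a closed-form least-squares solution (e.g. `np.linalg.lstsq`) to confirm gradient descent found the same minimum — a useful sanity check before moving on to logistic regression and a small neural network.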

More Machine Learning Resources

Want more machine learning practice?

Upload your notes and LearnByTeaching.ai generates unlimited practice questions tailored to your course. Then teach the concepts to AI students who challenge your understanding.

Try LearnByTeaching.ai — It's Free