
Napkin Folding — machine learning

A real-life mistake I made about penalizer terms

Posted by Cameron Davidson-Pilon

I made a very interesting mistake, and I wanted to share it with you because it's quite enlightening about statistical learning in general. It concerns a penalizer term in maximum-likelihood estimation. Normally, one deals only with the penalizer coefficient, that is, one plays around with \(\lambda\) in an MLE optimization like: $$ \min_{\theta} -\ell(\theta) + \lambda ||\theta||_p^p $$ where \(\ell\) is the log-likelihood and \(||\cdot||_p\) is the \(p\)-norm. This family of problems is typically solved by calculus because both...

Read more →
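The penalized objective in the excerpt can be sketched numerically. Below is a minimal, hypothetical example (the data and the fixed-variance Gaussian likelihood are my assumptions, not from the post): we minimize the negative log-likelihood of a one-parameter Gaussian mean plus an \(L_2\) penalty \(\lambda |\mu|^2\), and watch the estimate shrink toward zero as \(\lambda\) grows.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=200)  # toy data; sigma fixed at 1

def neg_penalized_ll(theta, lam, p=2):
    """Negative Gaussian log-likelihood (up to a constant, sigma = 1)
    plus the penalty lambda * |mu|^p from the post's objective."""
    mu = theta[0]
    nll = 0.5 * np.sum((x - mu) ** 2)
    return nll + lam * np.abs(mu) ** p

# larger lambda pulls the estimate of mu toward 0
for lam in [0.0, 10.0, 100.0]:
    res = minimize(neg_penalized_ll, x0=[0.0], args=(lam,))
    print(f"lambda={lam:6.1f}  mu_hat={res.x[0]:.4f}")
```

With \(p = 2\) this has a closed form, \(\hat{\mu} = \sum_i x_i / (n + 2\lambda)\), which the numerical optimizer should reproduce.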

Feature Space in Machine Learning

Posted by Cameron Davidson-Pilon

Feature space refers to the \(n\)-dimensional space in which your variables live (not including the target variable, if one is present). The term is used often in the ML literature because a core task in ML is feature extraction, hence we view all variables as features. For example, consider a data set with: Target \(Y \equiv\) thickness of car tires after some testing period; Variables \(X_1 \equiv\) distance travelled in the test, \(X_2 \equiv\) time duration of the test, \(X_3 \equiv\) amount of chemical \(C\) in...

Read more →
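The tire example above can be made concrete: each observation becomes a point in \(\mathbb{R}^3\), the feature space spanned by \(X_1, X_2, X_3\), while the target \(Y\) lives outside it. A small sketch (the numeric values are made up for illustration):

```python
import numpy as np

# hypothetical tire-testing data: each row is one tire, each column a feature
# X1 = distance travelled (km), X2 = test duration (h), X3 = chemical C (g)
X = np.array([
    [1200.0, 30.0, 0.5],
    [800.0,  20.0, 0.7],
    [1500.0, 45.0, 0.2],
])
y = np.array([9.1, 9.6, 8.4])  # target: tire thickness after test (mm)

# the feature space here is R^3: every tire is one point (x1, x2, x3)
n_features = X.shape[1]
print(n_features)  # → 3
```

The dimension of the feature space is just the number of columns in \(X\); the target vector \(y\) has the same number of rows but is not a feature.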