Advanced – A Treatise on Parametric Forms of Multimodal Distributions

This document is part of a developing theoretical framework authored by Christopher Lee Burgess. Abstract: Classical variance is insufficient for characterizing the geometric structure of multimodal data. In this work, we define a new quantity called the pseudovariance and demonstrate how it captures shape, modality, and dispersion through a general class of functions we … Continue reading Advanced – A Treatise on Parametric Forms of Multimodal Distributions

Advanced – Ridge Regression Notes (Module 2): A Closer Look at the Ridge Estimator

With a clear understanding of the framework of Ridge Regression, we are now well-equipped to delve deeper into some of its nuances. A key aspect of this exploration involves examining the parameters of the distribution over the Ridge weights, denoted as $latex w_{Ridge}$. Through this, we will uncover a crucial property: while Ridge Regression helps … Continue reading Advanced – Ridge Regression Notes (Module 2): A Closer Look at the Ridge Estimator

Advanced – Ridge Regression Notes (Module 1)

In our previous discussions about Linear Regression (and the OLS Estimator), we identified a key limitation: multicollinearity. When the predictor variables (columns of $latex X$) are highly correlated, the matrix $latex X^{T}X$ becomes nearly singular, destabilizing our OLS estimator. Ridge Regression addresses this limitation of OLS by incorporating a parameter … Continue reading Advanced – Ridge Regression Notes (Module 1)
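As a quick illustration of the idea in this excerpt, here is the standard closed-form ridge estimator, $latex w_{Ridge} = (X^{T}X + \lambda I)^{-1}X^{T}y$, applied to two nearly collinear predictors. This is a minimal sketch of the textbook formula; the post's exact parameterization and notation may differ.

```python
import numpy as np

def ridge_estimator(X, y, lam):
    """Closed-form ridge estimate: (X^T X + lam * I)^{-1} X^T y.

    Solving the regularized normal equations directly avoids forming
    an explicit matrix inverse.
    """
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

# Two nearly collinear columns make X^T X close to singular,
# which is exactly the regime where OLS becomes unstable.
rng = np.random.default_rng(1)
x1 = rng.normal(size=50)
X = np.column_stack([x1, x1 + 1e-6 * rng.normal(size=50)])
y = x1 + 0.1 * rng.normal(size=50)

w = ridge_estimator(X, y, lam=1.0)
```

Because the two columns are nearly identical, ridge splits the signal between them rather than producing the huge offsetting coefficients OLS would.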

Advanced – MAP Estimation using Simulated Annealing

In the preceding sections, we covered the intricacies of Linear Regression, explored the concept of Maximum Likelihood Estimation (MLE), and further dissected the statistical properties of the OLS estimator. Having laid some groundwork with earlier topics, the next step involves a thorough examination of MAP estimation. Both MLE and MAP are referred to as point … Continue reading Advanced – MAP Estimation using Simulated Annealing
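To make the pairing of MAP with simulated annealing concrete, here is a minimal annealing sketch that maximizes an unnormalized log-posterior (Gaussian likelihood with a standard-normal prior). The data, proposal scale, and cooling schedule are all illustrative assumptions, not taken from the post.

```python
import math
import random

# Toy data (illustrative); likelihood is N(theta, 1), prior is N(0, 1).
data = [1.8, 2.1, 2.4, 1.9, 2.0]

def log_posterior(theta):
    log_lik = -0.5 * sum((x - theta) ** 2 for x in data)  # Gaussian likelihood
    log_prior = -0.5 * theta ** 2                         # N(0, 1) prior
    return log_lik + log_prior

def simulated_annealing(steps=5000, temp0=1.0, seed=0):
    rng = random.Random(seed)
    theta = 0.0
    best = theta
    for t in range(steps):
        temp = temp0 / (1 + t)                      # simple cooling schedule
        proposal = theta + rng.gauss(0.0, 0.5)      # random-walk proposal
        delta = log_posterior(proposal) - log_posterior(theta)
        # Always accept uphill moves; accept downhill moves with
        # probability exp(delta / temp), which shrinks as temp cools.
        if delta > 0 or rng.random() < math.exp(delta / temp):
            theta = proposal
        if log_posterior(theta) > log_posterior(best):
            best = theta
    return best

theta_map = simulated_annealing()
```

For this conjugate setup the MAP estimate is available in closed form (posterior mean $latex \sum_i x_i / (n + 1) = 1.7$), which makes it easy to check that the annealer lands near the right answer.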

Advanced – A Closer Look at the OLS Estimator

Understanding how our estimators behave is crucial for making accurate predictions. In my post on Linear Regression, we covered the OLS estimator, which characterizes the weight vector of our linear regression model in terms of $latex X$ and $latex y$: $latex w_{LS} = (X^{T}X)^{-1}X^{T}y$. In many practical situations, we assume that $latex y$ is drawn … Continue reading Advanced – A Closer Look at the OLS Estimator
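The excerpt's formula $latex w_{LS} = (X^{T}X)^{-1}X^{T}y$ can be checked numerically in a few lines. The synthetic data below is purely illustrative; numerically, solving the normal equations is preferable to forming the inverse explicitly.

```python
import numpy as np

# Synthetic data: y = X @ w_true + small noise (illustrative setup).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=100)

# OLS estimator: w_LS = (X^T X)^{-1} X^T y, computed by solving
# the normal equations (X^T X) w = X^T y instead of inverting X^T X.
w_ls = np.linalg.solve(X.T @ X, X.T @ y)
```

With low noise and well-conditioned $latex X$, the estimate recovers the true weights almost exactly; the later posts in this series examine what happens when those conditions fail.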

Advanced – Linear Regression

Linear regression serves as a fundamental stepping stone into the world of machine learning, combining simplicity with the power of predictive analytics. Conceptually, it rests on an elegant mathematical framework that reveals both its potential and its limitations. This guide walks you through the mathematical fundamentals, offering a clear exposition of its foundational … Continue reading Advanced – Linear Regression

Advanced – Maximum Likelihood Estimation

In statistical inference, one often encounters a dataset $latex X = \{x_1, x_2, \ldots, x_k\} \subset \mathbb{R}^n$ and seeks to characterize it by estimating the parameters $latex \theta$ of a chosen probability distribution $latex p(X | \theta)$. A prevalent technique for achieving this is Maximum Likelihood Estimation (MLE). At its core, MLE is the method … Continue reading Advanced – Maximum Likelihood Estimation
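As a standard worked example of the setup in this excerpt (not necessarily the one the post develops), the MLE for a univariate Gaussian has a closed form: maximizing the log-likelihood over $latex (\mu, \sigma^2)$ yields the sample mean and the biased sample variance.

```python
# Toy dataset (illustrative, chosen for clean round numbers).
X = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

k = len(X)

# Gaussian MLE: mu_hat = (1/k) * sum(x_i)
mu_mle = sum(X) / k

# Gaussian MLE: var_hat = (1/k) * sum((x_i - mu_hat)^2)
# Note the 1/k divisor, not the unbiased 1/(k-1).
var_mle = sum((x - mu_mle) ** 2 for x in X) / k
```

The $latex 1/k$ divisor is the point worth remembering: the MLE of the variance is biased downward, which is one of the statistical properties the later posts in this series revisit.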