Developmental – On Negentropy

What defines structure within a system, and how is it intertwined with the universe's inherent randomness? This question is rooted in Erwin Schrödinger's seminal work, "What is Life?", where he introduces the concept of 'negative entropy' as a measure of a system's structural integrity. For our analysis, let's denote this as $latex N_E$. The intriguing … Continue reading Developmental – On Negentropy
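
One minimal way to make this precise, reading Schrödinger's 'negative entropy' literally as entropy with the sign reversed (a simplifying assumption of this summary, not the post's full definition), is

$latex N_E \equiv -S$

so that as a system becomes more ordered and its entropy $latex S$ falls, its negentropy $latex N_E$ rises; on this reading, an organism sustains its $latex N_E$ by exporting entropy to its surroundings.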

Advanced – Ridge Regression Notes (Module 2): A Closer Look at the Ridge Estimator

With a clear understanding of the framework of Ridge Regression, we are now well-equipped to delve deeper into some of its nuances. A key aspect of this exploration involves examining the parameters of the distribution over the Ridge weights, denoted as $latex w_{Ridge}$. Through this, we will uncover a crucial property: while Ridge Regression helps … Continue reading Advanced – Ridge Regression Notes (Module 2): A Closer Look at the Ridge Estimator
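
As a preview of that calculation, here is a sketch of the standard result, assuming the Gaussian noise model $latex y = Xw + \epsilon$ with $latex \epsilon \sim \mathcal{N}(0, \sigma^{2}I)$ (an assumption carried over from the OLS notes, since the excerpt is truncated). Starting from the closed form

$latex w_{Ridge} = (X^{T}X + \lambda I)^{-1}X^{T}y$

linearity of expectation gives

$latex \mathbb{E}[w_{Ridge}] = (X^{T}X + \lambda I)^{-1}X^{T}Xw$

$latex \mathrm{Cov}[w_{Ridge}] = \sigma^{2}(X^{T}X + \lambda I)^{-1}X^{T}X(X^{T}X + \lambda I)^{-1}$

For $latex \lambda > 0$ the mean is shrunk away from the true $latex w$ (bias), while the covariance is smaller than the OLS covariance $latex \sigma^{2}(X^{T}X)^{-1}$ (reduced variance): the bias-variance trade-off at the heart of Ridge.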

Advanced – Ridge Regression Notes (Module 1)

In our previous discussions about Linear Regression (and the OLS Estimator), we identified a key limitation: multicollinearity. When the predictor variables (columns of $latex X$) are highly correlated, the matrix $latex X^{T}X$ becomes nearly singular, affecting the stability of our OLS estimator. Ridge Regression effectively addresses the limitations of OLS regression by incorporating a parameter … Continue reading Advanced – Ridge Regression Notes (Module 1)
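
For concreteness, here is a minimal numpy sketch of the closed-form ridge estimator $latex w_{Ridge} = (X^{T}X + \lambda I)^{-1}X^{T}y$; the function name, the synthetic data, and the choice of $latex \lambda$ are illustrative rather than taken from the post.

```python
import numpy as np

def ridge_estimator(X, y, lam):
    """Closed-form ridge weights: (X^T X + lam * I)^{-1} X^T y."""
    d = X.shape[1]
    # Adding lam * I raises every eigenvalue of X^T X by lam, so the
    # linear system stays well conditioned even under multicollinearity.
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Illustrative usage on two nearly collinear columns:
rng = np.random.default_rng(0)
x1 = rng.standard_normal(100)
X = np.column_stack([x1, x1 + 1e-6 * rng.standard_normal(100)])
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.standard_normal(100)
w = ridge_estimator(X, y, lam=0.1)
```

Solving the regularized normal equations with np.linalg.solve avoids forming an explicit inverse, which is cheaper and numerically safer than np.linalg.inv.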

Advanced – MAP Estimation using Simulated Annealing

In the preceding sections, we covered the intricacies of Linear Regression, explored the concept of Maximum Likelihood Estimation (MLE), and further dissected the statistical properties of the OLS estimator. Having laid this groundwork, we can now turn to a thorough examination of MAP estimation. Both MLE and MAP are referred to as point … Continue reading Advanced – MAP Estimation using Simulated Annealing
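
To fix ideas before the detailed treatment, here is a minimal sketch of simulated annealing applied to a generic log-posterior; the proposal scale, cooling schedule, and example target are illustrative assumptions, not the implementation developed in the post.

```python
import numpy as np

def simulated_annealing_map(log_posterior, theta0, n_iters=5000,
                            step=0.1, t0=1.0, cooling=0.999):
    """Random-walk simulated annealing over an unnormalized log-posterior.

    log_posterior(theta) returns log p(X | theta) + log p(theta) up to an
    additive constant; the constant cancels in the acceptance ratio.
    """
    rng = np.random.default_rng(0)
    theta = np.asarray(theta0, dtype=float)
    lp = log_posterior(theta)
    best, best_lp = theta.copy(), lp
    temp = t0
    for _ in range(n_iters):
        proposal = theta + step * rng.standard_normal(theta.shape)
        lp_new = log_posterior(proposal)
        # Always accept uphill moves; accept downhill moves with probability
        # exp((lp_new - lp) / temp), which shrinks toward 0 as temp cools.
        if lp_new >= lp or rng.random() < np.exp((lp_new - lp) / temp):
            theta, lp = proposal, lp_new
            if lp > best_lp:
                best, best_lp = theta.copy(), lp
        temp *= cooling  # geometric cooling schedule
    return best

# Illustrative target: Gaussian likelihood for three observations,
# standard normal prior on the scalar parameter theta.
data = np.array([1.2, 0.8, 1.1])
log_post = lambda t: -0.5 * np.sum((data - t[0]) ** 2) - 0.5 * t[0] ** 2
theta_map = simulated_annealing_map(log_post, theta0=np.array([0.0]))
```

Because only differences of log-posterior values enter the acceptance rule, the intractable normalizing constant of the posterior cancels, which is what makes annealing attractive for MAP search.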

Advanced – A Closer Look at the OLS Estimator

Understanding how our estimators behave is crucial for making accurate predictions. In my post about Linear Regression, we covered the OLS estimator, which characterizes the weight vector of our linear regression model in terms of $latex X$ and $latex y$: $latex w_{LS} = (X^{T}X)^{-1}X^{T}y$. In many practical situations, we assume that $latex y$ is drawn … Continue reading Advanced – A Closer Look at the OLS Estimator
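
As a pointer to where this analysis leads, under the usual Gaussian model $latex y \sim \mathcal{N}(Xw, \sigma^{2}I)$ (the natural reading of the assumption begun above, hedged here since the excerpt is truncated), the standard results are

$latex \mathbb{E}[w_{LS}] = w \qquad \mathrm{Cov}[w_{LS}] = \sigma^{2}(X^{T}X)^{-1}$

so that $latex w_{LS} \sim \mathcal{N}(w, \sigma^{2}(X^{T}X)^{-1})$: the OLS estimator is unbiased, but its variance grows without bound as $latex X^{T}X$ approaches singularity.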

Advanced – Linear Regression

Linear regression serves as a fundamental stepping stone into the world of machine learning, combining simplicity with the power of predictive analytics. Conceptually, it rests on a graceful mathematical framework that reveals both its potential and its limitations. This guide will walk you through the mathematical fundamentals, offering a clear exposition of its foundational … Continue reading Advanced – Linear Regression

Advanced – Maximum A Posteriori Estimation Decoding

(Part One: AWGN Model) A useful example of MAP estimation is NASA's 1997 US patent covering a MAP decoder for digital communications. MAP decoding is a probabilistic decoding method that selects the most likely transmitted sequence given the received sequence and the channel's statistical properties. It is fundamental in the field … Continue reading Advanced – Maximum A Posteriori Estimation Decoding
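
In symbols (with $latex x$ denoting the transmitted sequence and $latex y$ the received one, a notational choice of this summary rather than the patent's), the MAP rule is

$latex \hat{x}_{MAP} = \arg\max_{x} p(x \mid y) = \arg\max_{x} p(y \mid x)\, p(x)$

and under the AWGN model of Part One, $latex p(y \mid x) \propto \exp\left(-\frac{\lVert y - x \rVert^{2}}{2\sigma^{2}}\right)$, so with a uniform prior the rule reduces to choosing the candidate sequence closest to $latex y$ in Euclidean distance.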

Advanced – Maximum Likelihood Estimation

In statistical inference, one often encounters a dataset $latex X = \{x_1, x_2, \ldots, x_k\} \subset \mathbb{R}^n$ and seeks to characterize it by estimating the parameters $latex \theta$ of a chosen probability distribution $latex p(X | \theta)$. A prevalent technique for achieving this is Maximum Likelihood Estimation (MLE). At its core, MLE is the method … Continue reading Advanced – Maximum Likelihood Estimation
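
As a quick anchor, assuming the samples are drawn i.i.d. (the standard setting, though not spelled out in this excerpt), the MLE is

$latex \hat{\theta}_{MLE} = \arg\max_{\theta} \prod_{i=1}^{k} p(x_i \mid \theta) = \arg\max_{\theta} \sum_{i=1}^{k} \log p(x_i \mid \theta)$

where taking the logarithm turns the product into a sum without changing the maximizer, because $latex \log$ is strictly increasing.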