Blog

Developmental – DMD Meets Space City (Part I)

Imagine being able to detect subtle patterns in Earth's landscape using mathematical tools. With NASA's API for obtaining LANDSAT imagery and the Dynamic Mode Decomposition (DMD) technique, this becomes not only possible but accessible to anyone with enough curiosity and a reasonable understanding of Python. Here are the highlights: LANDSAT Summary, Accessing NASA's API, DMD … Continue reading Developmental – DMD Meets Space City (Part I)
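
As a taste of what's coming, here's a minimal sketch of exact DMD in NumPy, assuming you've already pulled a sequence of LANDSAT frames and flattened each one into a column of a snapshot matrix (the NASA API call is omitted; `exact_dmd` and its `rank` parameter are placeholders of mine, not the post's code):

```python
import numpy as np

def exact_dmd(snapshots, rank=10):
    """Exact DMD: snapshots is an (n_pixels, n_frames) array of
    flattened frames ordered in time."""
    X1, X2 = snapshots[:, :-1], snapshots[:, 1:]   # time-shifted pairs

    # Truncated SVD of the first snapshot matrix
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank, :]

    # Project the linear propagator A (X2 ~ A @ X1) onto the POD modes
    A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)

    # Eigendecomposition gives DMD eigenvalues and (exact) modes
    eigvals, W = np.linalg.eig(A_tilde)
    modes = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W
    return eigvals, modes
```

The magnitude of each eigenvalue tells you whether its spatial mode grows, decays, or oscillates from frame to frame, which is what lets DMD surface subtle patterns in the imagery.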

Developmental – What About Those Pesky Integrals?

So what about those pesky definite integrals? I mean the ones that integrate over a large number of dimensions. Many ML problems, especially in Bayesian statistics, involve computing probabilities or expectations over high-dimensional spaces. For this reason, we're going to need a clever way to compute these integrals. Enter Monte Carlo Integration! This technique isn't just a … Continue reading Developmental – What About Those Pesky Integrals?
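
As a preview, here's a minimal sketch of plain Monte Carlo integration, estimating an expectation under a 10-dimensional standard Gaussian (the integrand and the dimension are placeholders of mine, not from the post):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_expectation(f, dim, n_samples=100_000):
    """Estimate E[f(x)] for x ~ N(0, I_dim) by averaging over samples."""
    x = rng.standard_normal((n_samples, dim))
    values = f(x)
    estimate = values.mean()
    # The standard error shrinks like 1/sqrt(n), independent of dimension
    std_err = values.std(ddof=1) / np.sqrt(n_samples)
    return estimate, std_err

# Example: E[||x||^2] in 10 dimensions is exactly 10
est, err = mc_expectation(lambda x: (x**2).sum(axis=1), dim=10)
print(f"estimate = {est:.3f} +/- {err:.3f}")
```

That dimension-independent error rate is exactly why Monte Carlo is the go-to tool for these high-dimensional integrals.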

Advanced – Ridge Regression Notes (Module 3)

Welcome to the final module of our comprehensive study of Ridge Regression! In Module 1, we uncovered various facets of Ridge Regression, starting with the SVD (Singular Value Decomposition) approach. We carefully dissected the formula for the Ridge estimator, $latex w_{Ridge} = (X^{T}X + \lambda I)^{-1}X^{T}y$, unraveling its intricacies through calculus. Our previous discussions in Module … Continue reading Advanced – Ridge Regression Notes (Module 3)
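
As a quick refresher on the Module 1 material, here's a minimal sketch of the Ridge estimator computed through the SVD; it's algebraically equivalent to the closed form above but numerically friendlier (a sketch under the usual setup, not necessarily the post's own code):

```python
import numpy as np

def ridge_svd(X, y, lam):
    """Ridge weights via SVD: with X = U S V^T, the closed form
    (X^T X + lam*I)^{-1} X^T y reduces to V diag(s/(s^2+lam)) U^T y."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    shrink = s / (s**2 + lam)          # each singular direction is shrunk
    return Vh.T @ (shrink * (U.T @ y))

# Sanity check against the direct formula
X = np.random.default_rng(1).standard_normal((50, 5))
y = np.random.default_rng(2).standard_normal(50)
direct = np.linalg.solve(X.T @ X + 0.5 * np.eye(5), X.T @ y)
assert np.allclose(ridge_svd(X, y, 0.5), direct)
```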

Developmental – On Negentropy

What defines structure within a system, and how is it intertwined with the universe's inherent randomness? This question is rooted in Erwin Schrödinger's seminal work, "What is Life?", where he introduces the concept of 'negative entropy' as a measure of a system's structural integrity. For our analysis, let's denote this as $latex N_E$. The intriguing … Continue reading Developmental – On Negentropy
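
One common way to make $latex N_E$ concrete (an assumption on my part; the post may formalize it differently) is to measure how far a distribution sits below the maximum-entropy state. For a discrete distribution $latex p$ over $latex K$ states with Shannon entropy $latex H(p) = -\sum_{i} p_i \log p_i$, this reads $latex N_E = H_{max} - H(p) = \log K - H(p)$, which is zero for pure noise (the uniform distribution) and largest for a perfectly ordered system.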

Advanced – Ridge Regression Notes (Module 2): A Closer Look at the Ridge Estimator

With a clear understanding of the framework of Ridge Regression, we are now well-equipped to delve deeper into some of its nuances. A key aspect of this exploration involves examining the parameters of the distribution over the Ridge weights, denoted as $latex w_{Ridge}$. Through this, we will uncover a crucial property: while Ridge Regression helps … Continue reading Advanced – Ridge Regression Notes (Module 2): A Closer Look at the Ridge Estimator
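
To anticipate the key result (stated here under the standard assumption $latex y \sim N(Xw, \sigma^{2}I)$): the Ridge estimator is a linear map of a Gaussian, so it is itself Gaussian with $latex E[w_{Ridge}] = (X^{T}X + \lambda I)^{-1}X^{T}Xw$ and $latex Var[w_{Ridge}] = \sigma^{2}(X^{T}X + \lambda I)^{-1}X^{T}X(X^{T}X + \lambda I)^{-1}$. In other words, Ridge trades a little bias (the mean is pulled away from $latex w$) for a smaller covariance than OLS.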

Advanced – Ridge Regression Notes (Module 1)

In our previous discussions about Linear Regression (and the OLS Estimator), we identified a key limitation: multicollinearity. When the predictor variables (columns of $latex X$) are highly correlated, the matrix $latex X^{T}X$ becomes nearly singular, undermining the stability of our OLS estimator. Ridge Regression effectively addresses this limitation of OLS by incorporating a parameter … Continue reading Advanced – Ridge Regression Notes (Module 1)
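
A quick numerical sketch (with made-up correlated columns, not the post's data) shows how the condition number of $latex X^{T}X$ blows up under multicollinearity and how adding $latex \lambda I$ tames it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two nearly identical predictors: a textbook case of multicollinearity
x1 = rng.standard_normal(100)
x2 = x1 + 1e-4 * rng.standard_normal(100)   # almost a copy of x1
X = np.column_stack([x1, x2])

gram = X.T @ X
lam = 1.0
print(np.linalg.cond(gram))                    # enormous: nearly singular
print(np.linalg.cond(gram + lam * np.eye(2)))  # modest: the ridge term fixes it
```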

Advanced – MAP Estimation using Simulated Annealing

In the preceding sections, we covered the intricacies of Linear Regression, explored the concept of Maximum Likelihood Estimation (MLE), and further dissected the statistical properties of the OLS estimator. Having laid some groundwork with earlier topics, the next step involves a thorough examination of MAP estimation. Both MLE and MAP are referred to as point … Continue reading Advanced – MAP Estimation using Simulated Annealing
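
Here's a minimal, generic sketch of simulated annealing applied to MAP estimation; the log-posterior below is a stand-in of mine (a 1-D Gaussian likelihood times a Gaussian prior), not the one from the post:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_posterior(w):
    """Stand-in log p(w | data): Gaussian likelihood plus Gaussian prior."""
    data = np.array([1.8, 2.1, 2.4])
    log_lik = -0.5 * np.sum((data - w) ** 2)
    log_prior = -0.5 * w ** 2
    return log_lik + log_prior

def anneal(log_post, w0=0.0, steps=5000, t0=1.0, cooling=0.999):
    """Maximize log_post: always accept uphill moves, accept downhill
    moves with probability exp(delta / T), and cool T geometrically."""
    w, temp = w0, t0
    best_w, best_val = w0, log_post(w0)
    for _ in range(steps):
        proposal = w + rng.normal(scale=0.5)
        delta = log_post(proposal) - log_post(w)
        if delta > 0 or rng.random() < np.exp(delta / temp):
            w = proposal
            if log_post(w) > best_val:
                best_w, best_val = w, log_post(w)
        temp *= cooling
    return best_w

print(anneal(log_posterior))   # MAP of this toy posterior is 6.3/4 = 1.575
```

The cooling schedule lets the search roam early (escaping local optima) and settle onto the posterior mode as the temperature drops.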

Advanced – A Closer Look at the OLS Estimator

Understanding how our estimators behave is crucial for making accurate predictions. In my blog post about Linear Regression, we covered the OLS estimator, which characterizes the weight vector of our linear regression model in terms of $latex X$ and $latex y$: $latex w_{LS} = (X^{T}X)^{-1}X^{T}y$. In many practical situations, we assume that $latex y$ is drawn … Continue reading Advanced – A Closer Look at the OLS Estimator
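
To preview the punchline under the Gaussian model the excerpt alludes to, $latex y \sim N(Xw, \sigma^{2}I)$: plugging into the formula above gives $latex E[w_{LS}] = (X^{T}X)^{-1}X^{T}E[y] = (X^{T}X)^{-1}X^{T}Xw = w$, so OLS is unbiased, with covariance $latex Var[w_{LS}] = \sigma^{2}(X^{T}X)^{-1}$. That covariance is exactly why a nearly singular $latex X^{T}X$ (see the Ridge notes above) makes the estimator's variance explode.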