AI: Logistic Regression

Logistic regression is a supervised machine learning algorithm used for binary classification tasks. Unlike linear regression, which predicts continuous values, logistic regression predicts the probability that a given input belongs to a certain class.
Read more
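
A minimal sketch of the idea with scikit-learn's LogisticRegression; the toy data and query point below are purely illustrative, not from the post:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy binary-classification data: one feature, two classes (illustrative only)
X = np.array([[0.5], [1.0], [1.5], [3.0], [3.5], [4.0]])
y = np.array([0, 0, 0, 1, 1, 1])

clf = LogisticRegression()
clf.fit(X, y)

# Unlike linear regression, the model outputs class probabilities [P(y=0), P(y=1)]
print(clf.predict_proba([[3.2]]))
# The hard class label is obtained by thresholding that probability
print(clf.predict([[3.2]]))
```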

Regularization

Regularization is a technique for keeping a model from becoming overly complex. It discourages overfitting to the training data while still allowing good predictions on new data. Think of it as adding a rule or constraint that prevents the model from relying too heavily on any single feature or predictor.
Read more
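
A quick sketch of L2 (ridge) regularization with scikit-learn, assuming a toy dataset in which only the first feature actually matters; the data and the alpha value are illustrative:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

# Illustrative data: y depends only on the first feature; the rest are noise
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=50)

plain = LinearRegression().fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)  # alpha sets the strength of the L2 penalty

# The L2 penalty shrinks all coefficients toward zero, so the model
# cannot lean too heavily on any single feature.
print("unregularized:", np.round(plain.coef_, 3))
print("ridge (L2):   ", np.round(ridge.coef_, 3))
```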

Softmax

Softmax is a mathematical function commonly used in machine learning, particularly in the context of classification problems. It transforms a vector of raw scores, often called logits, from a model into a vector of probabilities that sum to one. The probabilities generated by the softmax function represent the likelihood of each class being the correct classification. $$\sigma(\mathbf{z})_i = \frac{e^{z_i}}{\sum_{j=1}^K e^{z_j}}$$
Read more
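
A small NumPy sketch of the formula above; the example logits are illustrative:

```python
import numpy as np

def softmax(z):
    """Turn a vector of raw scores (logits) into probabilities that sum to one."""
    z = z - np.max(z)        # subtract the max for numerical stability (result is unchanged)
    exp_z = np.exp(z)
    return exp_z / exp_z.sum()

logits = np.array([2.0, 1.0, 0.1])   # example logits for three classes
probs = softmax(logits)
print(probs)         # approximately [0.659, 0.242, 0.099]
print(probs.sum())   # 1.0
```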

Support Vector Machine

Support Vector Machine (SVM) is a supervised learning algorithm used for classification and regression. It finds the best hyperplane that separates the data into different classes with the largest possible margin. SVM can work well with high-dimensional data and use different kernel functions to transform data for better separation when it is not linearly separable. $$f(x) = \operatorname{sign}(w^\top x + b)$$
Read more
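
A short sketch with scikit-learn's SVC, assuming a toy 2-D dataset that is not linearly separable (points inside versus outside a circle); the data is illustrative:

```python
import numpy as np
from sklearn.svm import SVC

# Toy data: label 1 for points outside the unit circle, 0 for points inside
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(int)

linear_svm = SVC(kernel="linear").fit(X, y)
rbf_svm = SVC(kernel="rbf").fit(X, y)   # kernel function maps the data implicitly

print("linear kernel accuracy:", linear_svm.score(X, y))  # struggles on circular data
print("RBF kernel accuracy:   ", rbf_svm.score(X, y))     # separates it much better
```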

Random Forest

Random Forest is an ensemble machine learning algorithm that builds multiple decision trees during training and merges their outputs to improve accuracy and reduce overfitting. It is commonly used for both classification and regression tasks. By averaging the predictions of several decision trees, Random Forest reduces the variance and increases model robustness, making it less prone to errors from noisy data. $$\text{Entropy}_{\text{after}} = \frac{|S_l|}{|S|}\text{Entropy}(S_l) + \frac{|S_r|}{|S|}\text{Entropy}(S_r)$$
Read more
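
A small sketch of the weighted child-node entropy shown in the formula above, which a decision tree uses to score candidate splits; the label arrays are illustrative:

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of an array of class labels."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def entropy_after_split(left, right):
    """Weighted entropy of the two child nodes, as in the formula above."""
    n = len(left) + len(right)
    return len(left) / n * entropy(left) + len(right) / n * entropy(right)

# Illustrative split of class labels at one decision-tree node
S_left = np.array([0, 0, 0, 1])
S_right = np.array([1, 1, 1, 0])
print(entropy_after_split(S_left, S_right))  # lower value = more informative split
```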

Understanding the Taylor Series and Its Applications in Machine Learning

The Taylor Series is a mathematical tool that approximates complex functions with polynomials, playing a crucial role in machine learning optimization. It enhances gradient descent by incorporating second-order information, leading to faster and more stable convergence. Additionally, it aids in linearizing non-linear models and informs regularization techniques. This post explores the significance of the Taylor Series in improving model training efficiency and understanding model behavior. $$\cos(x) = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n)!} x^{2n}$$
Read more
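
A quick sketch of the cosine series above, truncated to a few terms to show how the polynomial approximation converges; the evaluation point is illustrative:

```python
import math

def cos_taylor(x, n_terms=10):
    """Approximate cos(x) with the first n_terms of its Taylor series around 0."""
    return sum((-1) ** n / math.factorial(2 * n) * x ** (2 * n) for n in range(n_terms))

x = 1.0
for n_terms in (1, 2, 4, 8):
    print(n_terms, cos_taylor(x, n_terms), "vs", math.cos(x))
# The partial sums approach math.cos(x) as more terms are added.
```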
Multi-layer Neural Nets
Hidden Markov Model
Artificial Intelligent 1