---
id: 2023-12-17
aliases: December 17, 2023
tags:
- link-note
- Data-Science
- Machine-Learning
- Optimization
---

# Optimization

## Math

### Partial Differentiation/Derivative

- Differentiate with respect to one specific variable
- Treat all other variables as constants
- $\dfrac{\partial y}{\partial x}$
- e.g., $f(x,y) = x^2 + xy + 3 \Rightarrow \dfrac{\partial f}{\partial x} = 2x + y,\ \dfrac{\partial f}{\partial y} = x$
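
A quick check of the partial derivatives in the example above, sketched with SymPy (assuming SymPy is available; symbol names mirror the example):

```python
import sympy as sp

x, y = sp.symbols("x y")
f = x**2 + x*y + 3           # example function f(x, y)

df_dx = sp.diff(f, x)        # treat y as a constant -> 2*x + y
df_dy = sp.diff(f, y)        # treat x as a constant -> x
print(df_dx, df_dy)
```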

### Chain Rule

- $\dfrac{dy}{dx} = \dfrac{dy}{du} \cdot \dfrac{du}{dx}$
- e.g., $y = \ln(u),\ u = 2x + 4 \Rightarrow \dfrac{dy}{dx} = \dfrac{1}{u} \cdot 2 = \dfrac{1}{x+2}$
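
A minimal SymPy sketch verifying the chain-rule example (assuming SymPy is installed):

```python
import sympy as sp

x = sp.symbols("x")
u = 2*x + 4
y = sp.ln(u)                        # y = ln(u), u = 2x + 4

dy_dx = sp.simplify(sp.diff(y, x))  # chain rule: (1/u) * du/dx = 2/(2x + 4)
print(dy_dx)                        # 1/(x + 2)
```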

## Loss Function

### Mean Squared Error (MSE)

- $L = \frac{1}{N} \sum_{i=1}^{N} (y_i - \hat{y_i})^2$
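
A minimal NumPy sketch of the MSE formula (the array values are illustrative):

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error: average of the squared residuals."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean((y_true - y_pred) ** 2)

print(mse([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]))  # ~0.02
```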

## Parameter Calculation

### Least Square Method (LSM)

- Fit a line $\hat{y} = ax + b$ by minimizing the sum of squared errors between observed and predicted values
- a: slope (coefficient)
- b: intercept (bias)
- $L = \sum_{i=1}^{N} (y_i - (ax_i + b))^2$

#### Method 1.

- $0 = \dfrac{\partial L}{\partial a} = \sum_{i=1}^{N} 2(y_i - (ax_i + b))(-x_i) = 2(a\sum_{i=1}^{N} x_i^2 + b\sum_{i=1}^{N} x_i - \sum_{i=1}^{N} x_iy_i)$
- $0 = \dfrac{\partial L}{\partial b} = \sum_{i=1}^{N} 2(y_i - (ax_i + b))(-1) = 2(a\sum_{i=1}^{N} x_i + b\sum_{i=1}^{N}1 - \sum_{i=1}^{N} y_i)$
- $a^* = \dfrac{\sum_{i=1}^{N}(x_i-\bar{x})(y_i-\bar{y})}{\sum_{i=1}^{N}(x_i-\bar{x})^2}$
- $b^* = \bar{y} - a^*\bar{x}$
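
A sketch of the closed-form solution from Method 1 in NumPy (the sample data is made up):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.3, 6.2, 8.0, 9.9])   # roughly y = 2x

x_bar, y_bar = x.mean(), y.mean()
a = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)  # slope a*
b = y_bar - a * x_bar                                             # intercept b*
print(a, b)                                                       # close to 2 and 0
```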

#### Method 2.

- Write the model in matrix form and differentiate $L = \|Y - XW\|^2$ with respect to the parameter vector $W$
- $\dfrac{\partial L}{\partial W} = -2X^T(Y-XW) = 0$
- $W^* = (X^TX)^{-1}X^TY$
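
A sketch of Method 2 with NumPy, reusing the sample data above; the design matrix gets a column of ones so the intercept is estimated together with the slope:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.3, 6.2, 8.0, 9.9])

X = np.column_stack([x, np.ones_like(x)])  # design matrix [x, 1]
W = np.linalg.solve(X.T @ X, X.T @ y)      # normal equations; solve() avoids forming the inverse explicitly
print(W)                                   # [slope, intercept], close to [2, 0]
```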