
[Machine Learning] Computing the Error in LinearRegression

건휘맨 2024. 4. 12. 17:52

What is error?

Error = actual value - predicted value (the smaller the error, the smarter the AI)

 

# Store (actual value - predicted value) in a variable
error = y_test - y_pred

# Square the errors and take the mean (this is the MSE)
(error ** 2).mean()
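The two lines above can be run end to end on a small standalone example. The values below are hypothetical stand-ins (not the regression data from this post), just to show that error, MSE, and RMSE follow directly from the actual-minus-predicted definition:

```python
import numpy as np

# Hypothetical actual and predicted values (not the dataset from this post)
y_test = np.array([100.0, 200.0, 300.0])
y_pred = np.array([110.0, 190.0, 290.0])

# error = actual value - predicted value
error = y_test - y_pred          # [-10.  10.  10.]

# MSE: square the errors (so signs don't cancel), then average
mse = (error ** 2).mean()        # 100.0

# RMSE: square root of MSE, back in the original units of y
rmse = np.sqrt(mse)              # 10.0

print(mse, rmse)
```

Squaring before averaging keeps positive and negative errors from canceling each other out; taking the square root at the end brings the metric back to the same units as the target variable.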

>>> y_test
41     77798.83
17    125370.37
20    118474.03
45     64926.08
23    108733.99
14    132602.65
28    103282.38
47     42559.73
32     97427.84
18    124266.90
Name: Profit, dtype: float64

>>> y_pred = regressor.predict(X_test)

>>> y_pred
array([ 75017.13778857, 129992.67932128, 116991.86227872,  45109.83439244,
       111410.63726075, 152704.94532349, 104084.06108572,  45478.09269869,
        99131.67843245, 131009.36870589])
        
>>> error = y_test - y_pred
>>> error 
41     2781.692211
17    -4622.309321
20     1482.167721
45    19816.245608
23    -2676.647261
14   -20102.295323
28     -801.681086
47    -2918.362699
32    -1703.838432
18    -6742.468706
Name: Profit, dtype: float64

# MSE

>>> (error**2).mean()
89277416.70428361

# RMSE

>>> np.sqrt((error**2).mean())
9448.672748290292
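scikit-learn also provides this metric directly, so the manual formula above can be cross-checked against sklearn.metrics.mean_squared_error. A minimal sketch, using hypothetical values in place of the post's y_test and y_pred:

```python
import numpy as np
from sklearn.metrics import mean_squared_error

# Hypothetical stand-ins for y_test and regressor.predict(X_test)
y_true = np.array([77798.83, 125370.37, 118474.03])
y_hat = np.array([75017.14, 129992.68, 116991.86])

# Manual MSE, exactly as in the post
mse_manual = ((y_true - y_hat) ** 2).mean()

# sklearn's built-in MSE should agree
mse_sklearn = mean_squared_error(y_true, y_hat)
assert np.isclose(mse_manual, mse_sklearn)

# RMSE is just the square root of either one
rmse = np.sqrt(mse_sklearn)
print(mse_sklearn, rmse)
```

Using the library function avoids small mistakes (like forgetting to square or averaging over the wrong axis) while producing the same number as the hand-rolled version.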