I know this question has already been answered, but I have made some updates to the GD function:
### COST FUNCTION
import numpy as np

def cost(theta, X, y):
    ### Evaluate half MSE (mean squared error)
    m = len(y)
    error = np.dot(X, theta) - y
    J = np.sum(error ** 2) / (2 * m)
    return J

cost(theta, X, y)
def GD(X, y, theta, alpha):
    cost_histo = []
    theta_histo = []
    # an arbitrary gradient, to pass the initial while() check
    delta = [np.repeat(1, len(X))]
    # Initial cost
    old_cost = cost(theta, X, y)
    while (np.max(np.abs(delta)) > 1e-6):
        error = np.dot(X, theta) - y
        delta = np.dot(np.transpose(X), error) / len(y)
        trial_theta = theta - alpha * delta
        trial_cost = cost(trial_theta, X, y)
        # if the step overshot, move trial_theta back towards theta until the cost drops
        while (trial_cost >= old_cost):
            trial_theta = (theta + trial_theta) / 2
            trial_cost = cost(trial_theta, X, y)
        cost_histo.append(trial_cost)
        theta_histo.append(trial_theta)
        old_cost = trial_cost
        theta = trial_theta
    Intercept = theta[0]
    Slope = theta[1]
    return [Intercept, Slope]

res = GD(X, y, theta, alpha)
This function shrinks the effective step size over the iterations (by halving any step that would increase the cost), which makes it converge faster; see Estimating linear regression with Gradient Descent (Steepest Descent) for an example in R. I applied the same logic, but in Python.
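
For completeness, here is a minimal sketch of how the function could be called on synthetic data. The toy data (true intercept 2, slope 3, Gaussian noise), the starting theta and the value of alpha are my own assumptions for illustration; only the names X, y, theta and alpha come from the code above.

import numpy as np

# hypothetical toy data: y = 2 + 3*x plus noise (assumed values, for illustration only)
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 2 + 3 * x + rng.normal(0, 1, size=100)

# design matrix with a leading column of ones for the intercept term
X = np.column_stack((np.ones(len(x)), x))

# assumed starting point and learning rate
theta = np.zeros(2)
alpha = 0.01

intercept, slope = GD(X, y, theta, alpha)
print(intercept, slope)  # should end up close to 2 and 3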