Following @thomas-jungblut's Python implementation, I did the same for Octave. If you find something wrong, please let me know and I will fix and update it.
Data comes from a txt file with the following rows:
1 10 1000
2 20 2500
3 25 3500
4 40 5500
5 60 6200
Think of it as a very rough sample with the features [number of bedrooms] and [mts2], and the last column [rent price], which is what we want to predict.
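If you just want to try it out, here is one quick way to generate such a file from Octave itself ('data.txt' is only a stand-in for whatever path you use):
% Write the sample rows above to a whitespace-separated text file
sample = [1 10 1000; 2 20 2500; 3 25 3500; 4 40 5500; 5 60 6200];
save('-ascii', 'data.txt', 'sample');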
Here is the Octave implementation:
%
% Linear Regression with multiple variables
%
% Learning rate alpha
alphaNum = 0.0005;
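% (kept this small because the features are not scaled; with a much
% bigger alpha the cost can blow up instead of decreasing)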
% Number of features
n = 2;
% Number of iterations for Gradient Descent algorithm
iterations = 10000;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% No need to change anything below this line
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
DATA = load('CHANGE_WITH_DATA_FILE_PATH');
% Initial theta values
theta = ones(n + 1, 1);
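% (ones is an arbitrary starting point; zeros would work just as
% well, since J(theta) for linear regression is convex)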
% Number of training samples
m = length(DATA(:, 1));
% X with one more column prepended (x0, filled with '1's)
X = [ones(m, 1), DATA(:, 1:n)];
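% With the sample data above this gives the design matrix
%   X = [1 1 10
%        1 2 20
%        1 3 25
%        1 4 40
%        1 5 60]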
% The expected output must always go in the last column
y = DATA(:, n + 1);
function theta = gradientDescent(x, y, theta, alphaNum, iterations)
  % Cost history, preallocated so we can plot J(theta) afterwards
  costs = zeros(1, iterations);
  m = length(y);
  for iteration = 1:iterations
    hypothesis = x * theta;
    loss = hypothesis - y;
    % J(theta) = sum((h(x) - y) .^ 2) / (2 * m)
    cost = sum(loss .^ 2) / (2 * m);
    % Save it for the plot, to see if the algorithm did work
    costs(iteration) = cost;
    % Gradient of J(theta); dividing by m averages over the samples
    gradient = (x' * loss) / m;
    % Update all theta values simultaneously
    theta = theta - (alphaNum * gradient);
  endfor
  % Show final theta values
  display(theta)
  % Plot the J(theta) evolution to check it worked: the curve must
  % keep decreasing and flatten out as it approaches the minimum
  plot(1:iterations, costs);
endfunction
% Execute gradient descent
theta = gradientDescent(X, y, theta, alphaNum, iterations);
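Once it has converged, the learned theta can be used to make a prediction. A minimal sketch (the 3-bedroom, 30 mts2 flat is made up, and it assumes the call above captures the returned theta as shown):
% Predict the rent for a hypothetical flat: 3 bedrooms, 30 mts2.
% The leading 1 multiplies the intercept term theta(1).
predictedRent = [1, 3, 30] * theta;
display(predictedRent)
As a sanity check, theta should also land close to the closed-form (normal equation) solution, pinv(X' * X) * X' * y, once gradient descent has run for enough iterations.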