🌐
GitHub
github.com › mattnedrich › GradientDescentExample
GitHub - mattnedrich/GradientDescentExample: Example demonstrating how gradient descent may be used to solve a linear regression problem
This example project demonstrates how the gradient descent algorithm may be used to solve a linear regression problem. A more detailed description of this example can be found here. The example code is in Python (version 2.6 or higher will work).
Starred by 549 users
Forked by 301 users
Languages   Python 100.0%
🌐
GitHub
github.com › Arko98 › Gradient-Descent-Algorithms
GitHub - Arko98/Gradient-Descent-Algorithms: A collection of various gradient descent algorithms implemented in Python from scratch
A collection of various gradient descent algorithms implemented in Python from scratch - Arko98/Gradient-Descent-Algorithms
Starred by 41 users
Forked by 20 users
Languages   Python
🌐
GitHub
github.com › TomasBeuzen › deep-learning-with-pytorch › blob › main › chapters › chapter1_gradient-descent.ipynb
deep-learning-with-pytorch/chapters/chapter1_gradient-descent.ipynb at main · TomasBeuzen/deep-learning-with-pytorch
Content from the University of British Columbia's Master of Data Science course DSCI 572. - deep-learning-with-pytorch/chapters/chapter1_gradient-descent.ipynb at main · TomasBeuzen/deep-learning-with-pytorch
Author   TomasBeuzen
🌐
GitHub
github.com › matvi › GradientDescent
GitHub - matvi/GradientDescent: Implementation of Gradient Descent Optimization method with Python from scratch
Implementation of Gradient Descent Optimization method with Python from scratch - matvi/GradientDescent
Author   matvi
🌐
GitHub
github.com › ozzieliu › python-tutorials › blob › master › Linear Regression › Linear Regression with Gradient Descent.ipynb
python-tutorials/Linear Regression/Linear Regression with Gradient Descent.ipynb at master · ozzieliu/python-tutorials
"OK, let's try to implement this in Python. First I declare some parameters. Alpha is my learning rate, and iterations defines how many times I want to perform the update." … "Then I transform the data frame holding my data into an array for simpler matrix math, and then write a helper function to calculate the cost function as defined above, using np.dot for inner matrix multiplication." … "Now, I split the gradient descent algorithm into 4 parts so that I can see what's going on."
Author   ozzieliu
🌐
GitHub
github.com › CalebEverett › pysgd
GitHub - CalebEverett/pysgd: Python implementation of gradient descent algorithms.
Python implementation of gradient descent algorithms. - CalebEverett/pysgd
Forked by 3 users
Languages   Jupyter Notebook 99.7% | Python 0.3%
🌐
GitHub
github.com › xbeat › Machine-Learning › blob › main › Building a Gradient Descent Optimizer from Scratch in Python.md
Machine-Learning/Building a Gradient Descent Optimizer from Scratch in Python.md at main · xbeat/Machine-Learning
It's used to minimize a cost function by iteratively moving in the direction of steepest descent. In this presentation, we'll build a gradient descent optimizer from scratch in Python.
Author   xbeat
🌐
GitHub
github.com › dshahid380 › Gradient-descent-Algorithm
GitHub - dshahid380/Gradient-descent-Algorithm: Gradient Descent algorithm implemented using Python and NumPy (mathematical implementation of gradient descent)
Gradient Descent algorithm implemented using Python and NumPy (mathematical implementation of gradient descent) - dshahid380/Gradient-descent-Algorithm
Starred by 4 users
Forked by 6 users
Languages   Python 100.0%
🌐
GitHub
github.com › CamNZ › gradient-descent-from-scratch
GitHub - CamNZ/gradient-descent-from-scratch: A two part tutorial series implementing the gradient descent algorithm without the use of any machine learning libraries
A basic understanding of calculus, linear algebra, and Python programming is required to get the most out of these tutorials. Jupyter notebooks contain explanations of underlying concepts followed by code that can be run from within the notebook. Part 1 - Introduction to gradient descent on a ...
Author   CamNZ
🌐
GitHub
github.com › arsenyturin › SGD-From-Scratch
GitHub - arsenyturin/SGD-From-Scratch: Stochastic gradient descent from scratch for linear regression
This notebook illustrates the nature of the Stochastic Gradient Descent (SGD) and walks through all the necessary steps to create SGD from scratch in Python. Gradient Descent is an essential part of many machine learning algorithms, including neural networks.
Starred by 41 users
Forked by 17 users
Languages   Jupyter Notebook
🌐
GitHub
github.com › topics › gradient-descent-algorithm
gradient-descent-algorithm · GitHub Topics · GitHub
We have implemented Gradient Descent to find the best 'm' (Slope) and 'b' (Intercept). linear-regression python3 gradient-descent gradient-descent-algorithm linearregression-gradientdescent
🌐
GitHub
github.com › shayideep › gradientDescentMethod
GitHub - shayideep/gradientDescentMethod: Gradient Descent Method Python
Implement gradient descent in your favorite coding language. Running: change the X and T parameters in the program file with custom inputs, then run `py gradientDescentMethod.py`. Input: Python file. Output: displays gradientDescentMethod(x, t)
Author   shayideep
Top answer
1 of 6
146

I think your code is a bit too complicated and needs more structure, because otherwise you'll get lost in all the equations and operations. In the end, this regression boils down to four operations:

  1. Calculate the hypothesis h = X * theta
  2. Calculate the loss = h - y and maybe the squared cost (loss^2)/2m
  3. Calculate the gradient = X' * loss / m
  4. Update the parameters theta = theta - alpha * gradient

In your case, I guess you have confused m with n. Here m denotes the number of examples in your training set, not the number of features.

Let's have a look at my variation of your code:

import numpy as np
import random

# m denotes the number of examples here, not the number of features
def gradientDescent(x, y, theta, alpha, m, numIterations):
    xTrans = x.transpose()
    for i in range(0, numIterations):
        hypothesis = np.dot(x, theta)
        loss = hypothesis - y
        # avg cost per example (the 2 in 2*m doesn't really matter here.
        # But to be consistent with the gradient, I include it)
        cost = np.sum(loss ** 2) / (2 * m)
        print("Iteration %d | Cost: %f" % (i, cost))
        # avg gradient per example
        gradient = np.dot(xTrans, loss) / m
        # update
        theta = theta - alpha * gradient
    return theta


def genData(numPoints, bias, variance):
    x = np.zeros(shape=(numPoints, 2))
    y = np.zeros(shape=numPoints)
    # basically a straight line
    for i in range(0, numPoints):
        # bias feature
        x[i][0] = 1
        x[i][1] = i
        # our target variable
        y[i] = (i + bias) + random.uniform(0, 1) * variance
    return x, y

# gen 100 points with a bias of 25 and 10 variance as a bit of noise
x, y = genData(100, 25, 10)
m, n = np.shape(x)
numIterations= 100000
alpha = 0.0005
theta = np.ones(n)
theta = gradientDescent(x, y, theta, alpha, m, numIterations)
print(theta)

At first I create a small random dataset and plot it, together with the generated regression line and the formula calculated by Excel. (Plot omitted.)

You need to take care about the intuition of the regression using gradient descent. As you do a complete batch pass over your data X, you need to reduce the m-losses of every example to a single weight update. In this case, this is the average of the sum over the gradients, thus the division by m.

The next thing you need to take care about is to track the convergence and adjust the learning rate. For that matter you should always track your cost every iteration, maybe even plot it.
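One lightweight way to do that (a sketch, not part of the original answer; the early-stopping tolerance and the noiseless test data are my own additions) is to record the cost every iteration and stop once it plateaus:

```python
import numpy as np

def gradient_descent_tracked(x, y, theta, alpha, tol=1e-9, max_iter=100000):
    """Batch gradient descent that records the cost each iteration and
    stops early once the per-iteration improvement falls below `tol`."""
    m = len(y)
    costs = []
    for _ in range(max_iter):
        loss = x.dot(theta) - y
        costs.append(np.sum(loss ** 2) / (2 * m))
        theta = theta - alpha * x.T.dot(loss) / m
        if len(costs) > 1 and costs[-2] - costs[-1] < tol:
            break  # cost has plateaued; stop iterating
    return theta, costs

# Noiseless version of the dataset above: y = x + 30, bias column first
x = np.column_stack([np.ones(100), np.arange(100.0)])
y = x[:, 1] + 30
theta, costs = gradient_descent_tracked(x, y, np.ones(2), 0.0005)
```

Plotting `costs` (or just printing every few hundred iterations) makes it obvious whether the learning rate is too large (cost diverges) or too small (cost barely moves).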

If you run my example, the theta returned will look like this:

Iteration 99997 | Cost: 47883.706462
Iteration 99998 | Cost: 47883.706462
Iteration 99999 | Cost: 47883.706462
[ 29.25567368   1.01108458]

Which is actually quite close to the equation calculated by Excel (y = x + 30). Note that since we put the bias into the first column, the first theta value denotes the bias weight.
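Because the bias lives in the first column, predicting for a new input just means prepending a constant 1 (a small illustration, not part of the original answer; the theta values are taken from the output printed above):

```python
import numpy as np

# theta as printed by the run above: [bias weight, slope]
theta = np.array([29.25567368, 1.01108458])

def predict(x_new, theta):
    """Prepend the bias feature (a constant 1) and apply the learned weights."""
    return np.dot(np.array([1.0, x_new]), theta)

print(predict(50, theta))  # 29.2557 + 1.0111 * 50 ≈ 79.81
```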

2 of 6
12

Below you can find my implementation of gradient descent for the linear regression problem.

First, you calculate the gradient as X.T * (X * w - y) / N, and then update all of your current weights w simultaneously with this gradient.

  • X: feature matrix
  • y: target values
  • w: weight vector
  • N: size of training set

Here is the Python code:

import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
import random

def generateSample(N, variance=100):
    X = np.matrix(range(N)).T + 1
    Y = np.matrix([random.random() * variance + i * 10 + 900 for i in range(len(X))]).T
    return X, Y

def fitModel_gradient(x, y):
    N = len(x)
    w = np.zeros((x.shape[1], 1))
    eta = 0.0001

    maxIteration = 100000
    for i in range(maxIteration):
        error = x * w - y
        gradient = x.T * error / N
        w = w - eta * gradient
    return w

def plotModel(x, y, w):
    plt.plot(x[:,1], y, "x")
    plt.plot(x[:,1], x * w, "r-")
    plt.show()

def test(N, variance, modelFunction):
    X, Y = generateSample(N, variance)
    X = np.hstack([np.matrix(np.ones(len(X))).T, X])
    w = modelFunction(X, Y)
    plotModel(X, Y, w)


test(50, 600, fitModel_gradient)
test(50, 1000, fitModel_gradient)
test(100, 200, fitModel_gradient)
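A side note not in the original answer: `np.matrix` is deprecated in current NumPy. A sketch of the same `fitModel_gradient` using plain 2-D ndarrays and the `@` operator; the data generation mirrors `generateSample` but uses NumPy's random `Generator` instead of the `random` module:

```python
import numpy as np

def fitModel_gradient_ndarray(x, y, eta=0.0001, maxIteration=100000):
    """Same batch gradient descent as above, written with plain 2-D
    ndarrays and @ instead of the deprecated np.matrix class."""
    N = len(x)
    w = np.zeros((x.shape[1], 1))
    for _ in range(maxIteration):
        error = x @ w - y            # (N, 1) residuals
        gradient = x.T @ error / N   # averaged gradient, shape (features, 1)
        w = w - eta * gradient
    return w

rng = np.random.default_rng(0)
N, variance = 50, 600
X = np.arange(1, N + 1, dtype=float).reshape(-1, 1)
Y = 10 * X + 900 + rng.uniform(0, variance, size=(N, 1))
X = np.hstack([np.ones((N, 1)), X])  # prepend the bias column
w = fitModel_gradient_ndarray(X, Y)
```

The behavior is the same; the ndarray version just avoids the subtle operator-overloading differences (`*` is element-wise on ndarrays but matrix multiplication on `np.matrix`).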

🌐
GitHub
github.com › topics › gradient-descent
gradient-descent · GitHub Topics · GitHub
1 month ago - Every matrix operation, activation function, backpropagation step, and optimizer is coded using only Python's standard library (math, random, csv, json). python json csv neural-network random matrix gradient-descent backpropagation loss-functions optimizers weights-and-biases
🌐
GitHub
github.com › bhattbhavesh91 › gradient-descent-numpy-example
GitHub - bhattbhavesh91/gradient-descent-numpy-example: Gradient Descent Algorithm Implementation using NumPy
October 14, 2019 - This example project demonstrates how the gradient descent algorithm may be used to solve a linear regression problem. The algorithm is implemented using Numpy package of python.
Starred by 3 users
Forked by 5 users
Languages   Jupyter Notebook 98.8% | Python 1.2%
🌐
GitHub
github.com › sudharsan13296 › Hands-On-Deep-Learning-Algorithms-with-Python › blob › master › 03. Gradient Descent and its variants › 3.02 Performing Gradient Descent in Regression.ipynb
Hands-On-Deep-Learning-Algorithms-with-Python/03. Gradient Descent and its variants/3.02 Performing Gradient Descent in Regression.ipynb at master · sudharsan13296/Hands-On-Deep-Learning-Algorithms-with-Python
In the next section, we will learn several variants of the gradient descent algorithm.
Author   sudharsan13296
🌐
GitHub
github.com › behnamasadi › gradient_descent
GitHub - behnamasadi/gradient_descent: simple implementation of gradient descent method in python
simple implementation of gradient descent method in python - behnamasadi/gradient_descent
Author   behnamasadi