\[
\def\R{{\mathbf{R}}}
\newcommand{\abs}[1]{\lvert#1\rvert}
\newcommand{\norme}[1]{\lVert#1\rVert}
\]
Descent algorithm with constant rate
Algorithm
Require \(f \colon \R^n \to \R\), \(x_{0} \in \R^n\) (initial point)
- \(\eta = 0.1\)
- \(k \gets 0\)
- continue ← true
- While continue
- \(d_k \gets -\nabla f(x_k)\)
- \(x_{k+1} \gets x_{k} + \eta d_{k}\)
- continue ← stop_function(\(\nabla f(x_k)\), \(x_k\), \(x_{k+1}\), \(f(x_k)\), \(f(x_{k+1})\), AbsTol, RelTol, \(\varepsilon\))
- \(k \gets k + 1\)
- EndWhile
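The loop above can be sketched in Julia as follows. This is a minimal illustration only, not the `my_descent` implementation from `MyOptims.jl` used below: here the gradient `∇f` is supplied analytically, and a simple gradient-norm threshold stands in for `stop_function`.

```julia
using LinearAlgebra: norm

# Constant-rate descent sketch: x_{k+1} = x_k + η d_k with d_k = -∇f(x_k).
# Stops when ‖∇f(x_k)‖ ≤ ε or after nbit_max iterations.
function descent_sketch(f, ∇f, x0; η=0.1, ε=1e-8, nbit_max=1000)
    xk = copy(x0)
    k = 0
    while norm(∇f(xk)) > ε && k < nbit_max
        dk = -∇f(xk)        # descent direction
        xk = xk + η * dk    # constant-step update
        k += 1
    end
    return xk, f(xk), k
end
```

The keyword `nbit_max` mirrors the option passed to `my_descent` below; capping the iteration count is what lets us record the first few iterates one by one.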
Application: Simple Linear Regression
Code
x1 = 1.; x2 = sqrt(5*9/2-x1^2);
x = [-x2, -x1, 0., x1, x2]
n = length(x)
X = [ones(n) x]
a₁ = 1; a₀ = 2
y = a₁*x .+ a₀ # model
include("assets/julia/MyOptims.jl")
A = X'*X
println("A = ", A)
b = -X'*y
println("b = ", b)
println("n= ",n)
x0 = (n/2)*[9,1]
println("X*x0 = ", X*x0)
f(x) = (2/n)*(0.5*x'*A*x + b'*x + 0.5*y'*y)
println("x0 = ", x0)
println("f(x0) = ", f(x0))
xₖ = x0
#p1 = plot()
nb_levels = 7
Xsol = zeros(nb_levels+1,2)
Xsol[1,:] = x0
for nbit in 1:nb_levels
    xsol, flag, fsol, ∇f_xsol, nb_iter = my_descent(f, x0, nbit_max=nbit)  # restart from x0, nbit iterations
    println("xsol = ", xsol)
    xₖ = xsol
    Xsol[nbit+1,:] = xsol
end
A = [5.0 0.0; 0.0 45.0]
b = [-10.0, -45.0]
n= 5
X*x0 = [10.90797688063037, 20.0, 22.5, 25.0, 34.09202311936963]
x0 = [22.5, 2.5]
f(x0) = 440.5
xsol = [18.4, -0.20000000000000018]
xsol = [15.119999999999997, 1.9600000000000004]
xsol = [12.495999999999997, 0.23199999999999954]
xsol = [10.396799999999997, 1.6144000000000003]
xsol = [8.717439999999998, 0.5084799999999998]
xsol = [7.373951999999998, 1.3932160000000002]
xsol = [6.299161599999998, 0.6854271999999997]
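The cost minimized above is the mean squared error of the linear model. Writing \(\beta = (\beta_0, \beta_1)\) for the parameter vector (called `x` in the code) and \(X\) for the design matrix, it expands into exactly the quadratic form coded in `f`:

\[
f(\beta) = \frac{1}{n}\,\norme{X\beta - y}^2
         = \frac{2}{n}\left(\tfrac12\,\beta^\top A\,\beta + b^\top \beta + \tfrac12\, y^\top y\right),
\qquad A = X^\top X,\quad b = -X^\top y,
\]

since \(\norme{X\beta - y}^2 = \beta^\top X^\top X \beta - 2\,y^\top X \beta + y^\top y\). This is also the loss that Flux minimizes below, which is why \(f(x_0) = 440.5\) reappears as the initial Flux loss.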
With the Flux Package
We now recover the same results with the Flux package
(see https://fluxml.ai/Flux.jl/stable/)
Code
x_train = reshape(Float32.(x),1,n)
println("x_train = ", x_train)
y_train = reshape(Float32.(y),1,n)
println("y_train = ", y_train)
using Flux, Statistics
model = Dense(1 => 1)
model.weight[1,1] = Float32(n/2)       # weight = 2.5
model.bias[1] = Float32((n/2)*9)       # bias = 22.5, so (bias, weight) = x0
println("model.weight = ", model.weight)
println("model.bias = ", model.bias)
println("model(x_train) = ", model(x_train))
loss(model, x, y) = mean(abs2.(model(x) .- y));
println("loss(model, x_train, y_train) = ", loss(model, x_train, y_train))
println(abs2.(model(x_train) .- y_train))
opt = Descent()
data = [(x_train, y_train)]
println("data = ", data)
println("n= ",n)
#data = Flux.DataLoader((x_train, y_train), batchsize=5)
println("data = ", first(data))
XsolNN = zeros(nb_levels+1,2)
XsolNN[1,:] = x0
println("Flux.setup(rule, model) = ", Flux.setup(opt, model))
dLdm, _, _ = gradient(loss, model, x_train, y_train)
println("dLdm = ", dLdm)
for epoch in 1:nb_levels
    Flux.train!(loss, model, data, opt)
    println("model.weight = ", model.weight)
    println("model.bias = ", model.bias)
    XsolNN[epoch+1,:] = [model.bias[1], model.weight[1,1]]
end
println("Xsol = ", Xsol)
println("XsolNN = ", XsolNN)
println("Xsol - XsolNN")
display(Xsol - XsolNN)
x_train = Float32[-4.6368093 -1.0 0.0 1.0 4.6368093]
y_train = Float32[-2.6368093 1.0 2.0 3.0 6.6368093]
model.weight = Float32[2.5;;]
model.bias = Float32[22.5]
model(x_train) = Float32[10.907976 20.0 22.5 25.0 34.092026]
loss(model, x_train, y_train) = 440.5
Float32[183.46121 361.0 420.25 484.0 753.7889]
data = Tuple{Matrix{Float32}, Matrix{Float32}}[([-4.6368093 -1.0 0.0 1.0 4.6368093], [-2.6368093 1.0 2.0 3.0 6.6368093])]
n= 5
data = (Float32[-4.6368093 -1.0 0.0 1.0 4.6368093], Float32[-2.6368093 1.0 2.0 3.0 6.6368093])
Flux.setup(rule, model) = (weight = Leaf(Descent(0.1), nothing), bias = Leaf(Descent(0.1), nothing), σ = ())
dLdm = (weight = Float32[27.000004;;], bias = Float32[41.0], σ = nothing)
model.weight = Float32[-0.20000052;;]
model.bias = Float32[18.4]
model.weight = Float32[1.9600008;;]
model.bias = Float32[15.12]
model.weight = Float32[0.23199975;;]
model.bias = Float32[12.496]
model.weight = Float32[1.6144003;;]
model.bias = Float32[10.3968]
model.weight = Float32[0.50847995;;]
model.bias = Float32[8.717441]
model.weight = Float32[1.3932161;;]
model.bias = Float32[7.3739524]
model.weight = Float32[0.685427;;]
model.bias = Float32[6.299162]
Xsol = [22.5 2.5; 18.4 -0.20000000000000018; 15.119999999999997 1.9600000000000004; 12.495999999999997 0.23199999999999954; 10.396799999999997 1.6144000000000003; 8.717439999999998 0.5084799999999998; 7.373951999999998 1.3932160000000002; 6.299161599999998 0.6854271999999997]
XsolNN = [22.5 2.5; 18.399999618530273 -0.20000052452087402; 15.119999885559082 1.96000075340271; 12.496000289916992 0.23199975490570068; 10.39680004119873 1.6144002676010132; 8.717440605163574 0.5084799528121948; 7.373952388763428 1.3932161331176758; 6.299161911010742 0.6854270100593567]
Xsol - XsolNN
8×2 Matrix{Float64}:
0.0 0.0
3.8147e-7 5.24521e-7
1.14441e-7 -7.53403e-7
-2.89917e-7 2.45094e-7
-4.11987e-8 -2.67601e-7
-6.05164e-7 4.71878e-8
-3.88763e-7 -1.33118e-7
-3.11011e-7 1.89941e-7
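The first iterate reported by both methods can be checked by hand: for this quadratic cost, \(\nabla f(x) = \tfrac{2}{n}(Ax + b)\), so one constant-rate step from \(x_0\) with \(\eta = 0.1\) gives

```julia
# One descent step by hand: ∇f(x) = (2/n)(A*x + b), η = 0.1,
# using the values of A, b, n, x0 printed above.
A  = [5.0 0.0; 0.0 45.0]
b  = [-10.0, -45.0]
n  = 5
x0 = [22.5, 2.5]
g0 = (2/n) .* (A*x0 .+ b)   # ≈ [41.0, 27.0], matching the (bias, weight) entries of dLdm
x1 = x0 .- 0.1 .* g0        # ≈ [18.4, -0.2], matching the first xsol
```

This confirms that `Flux.train!` with `Descent()` (default step 0.1) performs the same update \(x_{k+1} = x_k - \eta \nabla f(x_k)\) as `my_descent`; the remaining discrepancies in `Xsol - XsolNN` are Float32 rounding.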
MNIST
This example comes from the book Statistics with Julia, by Yoni Nazarathy and Hayden Klok, Springer, 2021