This code was written in Python 3. Could you tell me what your pseudocode would look like? I cannot understand the calculations that are being made:
# double n * 2
# x * weight
import random
import numpy as np

def derivada(n):
    return n * (1 - n)

x = 0.85
y = 0.25
w = random.random()

# epochs
for i in range(10):
    a = np.tanh(x * w)
    e = y - a  # error
    w += x * derivada(e)
    print(a)
I tried to write the pseudocode this way, but it isn't working very well.
algorithm "untitled"
var
    er, n, f, x1, w1, w2, u, y : real
    b, yd, i : integer
Begin
    b <- 1
    x1 <- 1
    w1 <- 0
    u <- (x1*w1) + b
    y <- tan(u)
    yd <- 5
    er <- yd - y
    for i from 1 to 10 do
        n <- 0.5
        f <- (n*x1*er)
        w1 <- w1 + f
        Write(w1)
    endfor
    // Commands
End
Can you tell me what's wrong?
Basically, what's happening is you have these variables:
x - the input value to the perceptron
y - the expected output of the perceptron
w - the weight on the perceptron
The derivada(n) function is meant to return the derivative of the tanh curve. This is used to calculate the adjustment to the w variable.
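A small aside, not from the original post: n*(1-n) is actually the derivative of the logistic sigmoid, while the derivative of tanh(u) is 1 - tanh(u)^2. If you want to compare the two, here is a minimal sketch (tanh_derivada is just an illustrative name):

def derivada(n):
    # what the question's code uses: the logistic-sigmoid derivative
    return n * (1 - n)

def tanh_derivada(a):
    # the actual derivative of tanh(u), written in terms of a = tanh(u)
    return 1 - a ** 2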
x is set to 0.85 and y is set to 0.25; w is initialized to a random number.
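If you want the runs to be reproducible while you trace the calculations, one optional addition (not in the original code) is to seed the generator before drawing w:

import random
random.seed(42)     # any fixed seed; w then starts from the same value on every run
w = random.random()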
Then, 10 times (once per epoch), a is computed as the output of the perceptron. It is equal to tanh(x*w), where x is the input, w is the weight, and tanh is the hyperbolic tangent function.
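For example, assuming purely for illustration that w happened to start at 0.5, the first epoch would compute:

import numpy as np
x, w = 0.85, 0.5      # w = 0.5 is an assumed starting value for this example
a = np.tanh(x * w)    # tanh(0.425) is roughly 0.401, the perceptron's output this epoch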
The error (the e variable) is calculated as y - a, where y is the expected output (the ground truth).
The adjustment to the weight (w) is calculated by evaluating derivada at the error e and multiplying the result by x, so the adjustment is x*derivada(e). That adjustment is then added to the weight.
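Putting it all together, here is the loop from the question again with a comment on each step; the values and the update rule are exactly those of the original code, only the comments are added:

import random
import numpy as np

def derivada(n):
    return n * (1 - n)        # used here as the activation derivative (see the aside above)

x = 0.85                      # input
y = 0.25                      # expected output (ground truth)
w = random.random()           # weight, initialized randomly

for i in range(10):           # 10 epochs
    a = np.tanh(x * w)        # forward pass: perceptron output
    e = y - a                 # error: expected output minus actual output
    w += x * derivada(e)      # weight update: input times derivada evaluated at the error
    print(a)                  # the printed output drifts toward 0.25 over the epochs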