Gradient descent algorithm
Find the minimum of the cost(W, b) function.
You can start from any point.
Adjust W a little at a time to reach the optimal (minimum) cost(W, b).
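The idea above can be sketched in plain Python, without TensorFlow, for the same toy data used in the cells below. The starting weight 5.0 and learning rate 0.1 are arbitrary choices for illustration; each step applies the update rule W := W - learning_rate * dcost/dW:

```python
# Gradient descent by hand for the linear model H(x) = W * x.
X = [1, 2, 3]
Y = [1, 2, 3]

W = 5.0             # arbitrary starting point
learning_rate = 0.1

for step in range(100):
    # cost(W) = mean((W*x - y)^2), so dcost/dW = mean(2 * (W*x - y) * x)
    gradient = sum(2 * (W * x - y) * x for x, y in zip(X, Y)) / len(X)
    W -= learning_rate * gradient

print(W)  # converges toward 1.0
```

Because the cost is convex (a parabola in W), every step moves W a fixed fraction closer to the optimum, so the loop converges regardless of the starting point.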
In [29]:
import tensorflow as tf
import matplotlib.pyplot as plt
In [30]:
X = [1,2,3]
Y = [1,2,3]
W = tf.placeholder(tf.float32)
hypothesis = X * W
cost = tf.reduce_mean(tf.square(hypothesis - Y))
sess = tf.Session()
sess.run(tf.global_variables_initializer())
W_val = []
cost_val = []
for i in range(-30, 50):
    feed_W = i * 0.1
    curr_cost, curr_W = sess.run([cost, W], feed_dict={W: feed_W})
    W_val.append(curr_W)
    cost_val.append(curr_cost)
plt.plot(W_val, cost_val)
plt.show()
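The same convex curve can be reproduced without a TensorFlow session, which makes it easy to confirm where the minimum lies. A plain-Python sketch of the sweep above:

```python
# Sweep candidate weights and compute the mean squared error for each.
X = [1, 2, 3]
Y = [1, 2, 3]

def cost(W):
    return sum((W * x - y) ** 2 for x, y in zip(X, Y)) / len(X)

W_val = [i * 0.1 for i in range(-30, 50)]
cost_val = [cost(w) for w in W_val]

# The minimum sits at W = 1, where the hypothesis matches the data exactly.
best_W = min(W_val, key=cost)
print(best_W)
```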
In [31]:
X_data = [1,2,3]
Y_data = [1,2,3]
W = tf.Variable(tf.random_normal([1]), name ='weight')
hypothesis = X_data * W
cost = tf.reduce_mean(tf.square(hypothesis - Y_data))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
train = optimizer.minimize(cost)
sess = tf.Session()
sess.run(tf.global_variables_initializer())
for step in range(50):
    print(step, sess.run(W))
    sess.run(train)
In [38]:
X = [1,2,3]
Y = [1,2,3]
W = tf.Variable(5.)
hypothesis = X * W
gradient = tf.reduce_mean((X * W - Y) * X) * 2
cost = tf.reduce_mean(tf.square(hypothesis - Y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
gvs = optimizer.compute_gradients(cost)
apply_gradients = optimizer.apply_gradients(gvs)
sess = tf.Session()
sess.run(tf.global_variables_initializer())
for step in range(50):
    print(step, sess.run([gradient, W]))
    sess.run(apply_gradients)
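The hand-written `gradient` tensor in this cell is the analytic derivative of the mean-squared-error cost, which is why it agrees with what `compute_gradients` returns. A quick plain-Python check of that derivative against a numerical (central-difference) estimate, using the same data and W = 5.0 as the cell above:

```python
# Verify that mean((W*x - y) * x) * 2 really is dcost/dW.
X = [1, 2, 3]
Y = [1, 2, 3]
W = 5.0

def cost(w):
    return sum((w * x - y) ** 2 for x, y in zip(X, Y)) / len(X)

# Analytic gradient, mirroring the TensorFlow expression in the cell.
analytic = sum((W * x - y) * x for x, y in zip(X, Y)) / len(X) * 2

# Numerical gradient via central difference.
eps = 1e-6
numeric = (cost(W + eps) - cost(W - eps)) / (2 * eps)

print(analytic, numeric)  # both ≈ 37.33
```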