RError.com

Question 871607 (Accepted)
alex-rudenkiy
Asked: 2020-08-22 00:53:18 +0000 UTC

Training a neural network on the Pythagorean theorem


I want to teach a neural network the Pythagorean theorem. Everything seems to be done correctly — the data is normalized and the model looks right — but it's unclear what the error is..

import numpy as np

from keras.models import Sequential
from keras.layers.core import Dense, Activation
from keras.utils import np_utils

np.random.seed()

NB_EPOCH = 500
VERBOSE = 1

X_in = [[ 0 , 44 ], [ 0 , 18 ], [ 38 , 0 ], [ 48 , 14 ], [ 0 , 36 ], [ 14 , 0 ], [ 34 , 0 ], [ 0 , 0 ], [ 0 , 38 ], [ 32 , 0 ], [ 28 , 0 ], [ 36 , 0 ], [ 20 , 48 ], [ 0 , 6 ], [ 0 , 20 ], [ 0 , 42 ], [ 0 , 8 ], [ 24 , 32 ], [ 4 , 0 ], [ 6 , 8 ], [ 24 , 10 ], [ 0 , 22 ], [ 16 , 12 ], [ 30 , 40 ], [ 0 , 32 ], [ 0 , 32 ], [ 16 , 0 ], [ 48 , 20 ], [ 0 , 8 ], [ 32 , 0 ], [ 0 , 46 ], [ 0 , 22 ], [ 0 , 8 ], [ 10 , 24 ], [ 0 , 36 ], [ 14 , 0 ], [ 0 , 22 ], [ 42 , 0 ], [ 16 , 12 ], [ 40 , 30 ], [ 44 , 0 ], [ 40 , 0 ], [ 34 , 0 ], [ 0 , 32 ], [ 40 , 30 ], [ 32 , 0 ], [ 0 , 30 ], [ 24 , 18 ], [ 0 , 26 ], [ 22 , 0 ], [ 0 , 4 ], [ 16 , 0 ], [ 10 , 0 ], [ 0 , 32 ], [ 0 , 42 ], [ 2 , 0 ], [ 0 , 38 ], [ 32 , 24 ], [ 48 , 0 ], [ 20 , 0 ], [ 0 , 18 ], [ 0 , 38 ], [ 14 , 48 ], [ 40 , 42 ], [ 16 , 12 ], [ 26 , 0 ], [ 0 , 20 ], [ 40 , 30 ], [ 16 , 30 ], [ 36 , 48 ], [ 36 , 0 ], [ 18 , 24 ], [ 34 , 0 ], [ 16 , 0 ], [ 0 , 24 ], [ 0 , 24 ], [ 0 , 18 ], [ 38 , 0 ], [ 28 , 0 ], [ 0 , 34 ], [ 0 , 36 ], [ 24 , 32 ], [ 16 , 30 ], [ 40 , 30 ], [ 24 , 0 ], [ 0 , 14 ], [ 8 , 6 ], [ 12 , 0 ], [ 16 , 0 ], [ 16 , 30 ], [ 48 , 14 ], [ 0 , 30 ], [ 38 , 0 ], [ 38 , 0 ], [ 0 , 8 ], [ 36 , 48 ], [ 0 , 32 ], [ 10 , 24 ], [ 46 , 0 ], [ 24 , 10 ], [ 30 , 0 ], [ 0 , 48 ], [ 40 , 0 ], [ 42 , 0 ], [ 32 , 24 ], [ 32 , 0 ], [ 12 , 16 ], [ 0 , 4 ], [ 0 , 28 ], [ 32 , 0 ], [ 40 , 42 ], [ 46 , 0 ], [ 0 , 24 ], [ 30 , 16 ], [ 36 , 48 ], [ 40 , 0 ], [ 24 , 0 ], [ 0 , 22 ], [ 40 , 42 ], [ 10 , 24 ], [ 0 , 16 ], [ 14 , 48 ], [ 22 , 0 ], [ 0 , 22 ], [ 30 , 0 ], [ 0 , 2 ], [ 48 , 20 ], [ 6 , 0 ], [ 6 , 0 ], [ 28 , 0 ], [ 20 , 0 ], [ 0 , 40 ], [ 42 , 0 ], [ 48 , 36 ], [ 14 , 0 ], [ 10 , 24 ], [ 0 , 30 ], [ 48 , 20 ], [ 40 , 30 ], [ 0 , 0 ], [ 42 , 40 ], [ 0 , 48 ], [ 32 , 24 ]]
X_answer = [[44] ,[18] ,[38] ,[50] ,[36] ,[14] ,[34] ,[0] ,[38] ,[32] ,[28] ,[36] ,[52] ,[6] ,[20] ,[42] ,[8] ,[40] ,[4] ,[10] ,[26] ,[22] ,[20] ,[50] ,[32] ,[32] ,[16] ,[52] ,[8] ,[32] ,[46] ,[22] ,[8] ,[26] ,[36] ,[14] ,[22] ,[42] ,[20] ,[50] ,[44] ,[40] ,[34] ,[32] ,[50] ,[32] ,[30] ,[30] ,[26] ,[22] ,[4] ,[16] ,[10] ,[32] ,[42] ,[2] ,[38] ,[40] ,[48] ,[20] ,[18] ,[38] ,[50] ,[58] ,[20] ,[26] ,[20] ,[50] ,[34] ,[60] ,[36] ,[30] ,[34] ,[16] ,[24] ,[24] ,[18] ,[38] ,[28] ,[34] ,[36] ,[40] ,[34] ,[50] ,[24] ,[14] ,[10] ,[12] ,[16] ,[34] ,[50] ,[30] ,[38] ,[38] ,[8] ,[60] ,[32] ,[26] ,[46] ,[26] ,[30] ,[48] ,[40] ,[42] ,[40] ,[32] ,[20] ,[4] ,[28] ,[32] ,[58] ,[46] ,[24] ,[34] ,[60] ,[40] ,[24] ,[22] ,[58] ,[26] ,[16] ,[50] ,[22] ,[22] ,[30] ,[2] ,[52] ,[6] ,[6] ,[28] ,[20] ,[40] ,[42] ,[60] ,[14] ,[26] ,[30] ,[52] ,[50] ,[0] ,[58] ,[48] ,[40]]
X_in = np.asarray(X_in, dtype=np.float32)
X_answer = np.asarray(X_answer, dtype=np.float32)

X_in /= np.amax(X_in)
X_answer /= np.amax(X_answer)

model = Sequential()
model.add(Dense(10, input_dim = 2, activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(1, activation='softmax'))

model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])

history = model.fit(X_in, X_answer, epochs=NB_EPOCH, verbose=VERBOSE)

After 100 epochs — and after 500 — the result is the same:

Epoch 1/100
143/143 [==============================] - 0s 2ms/step - loss: 0.2966 - acc: 0.0280
Epoch 2/100
143/143 [==============================] - 0s 52us/step - loss: 0.2966 - acc: 0.0280
Epoch 3/100
143/143 [==============================] - 0s 52us/step - loss: 0.2966 - acc: 0.0280
...
Epoch 99/100
143/143 [==============================] - 0s 42us/step - loss: 0.2966 - acc: 0.0280
Epoch 100/100
143/143 [==============================] - 0s 38us/step - loss: 0.2966 - acc: 0.0280
python
  • 2 answers
  • 10 Views

2 Answers

  1. L. Murashov
    2020-08-22T01:28:47Z

    You have the wrong activation function on the last layer. Softmax is bounded to [0..1].

    model.add(Dense(1, activation='softmax'))
    

    Use this instead:

    # No activation function is applied after this layer
    model.add(Dense(1))
    

    Also, accuracy is useless here — this is a regression task:

    metrics=['accuracy']
    
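    A quick way to see why the original model gets stuck: softmax over a single output unit normalizes that one value against itself, so the layer emits exactly 1.0 no matter what the input is. A minimal numpy sketch (independent of Keras, not from the original answer):

    ```python
    import numpy as np

    def softmax(z):
        # Numerically stable softmax: shift by the max before exponentiating
        e = np.exp(z - np.max(z))
        return e / e.sum()

    # With a single logit, exp(z - z) / exp(z - z) == 1.0 for any value
    for logit in (-3.0, 0.0, 7.5):
        print(logit, softmax(np.array([logit])))  # always prints [1.]
    ```

    Since the output is constant at 1.0, the gradients can never move the predictions toward the normalized targets, which is exactly the flat loss curve seen in the question.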
    • 3
  2. Best Answer
    MaxU - stop genocide of UA
    2020-08-22T05:18:06Z

    As @L.Murashov has already said, the softmax activation function is meant for multi-class classification problems — it computes the probability that a sample belongs to each class.

    A few more points to note:

    1. The training set contains too many zero values (about half) — the degenerate case of the Pythagorean theorem, where one side of the triangle has zero length.
    2. To train the model better, you can use a larger sample — in your case, generating as much data as needed is not a problem.
    3. For this (simple) task, one hidden layer is enough.
    4. You can use linear as the activation function of the output layer.
    5. You can choose mean_squared_error as both the loss and the metric.

    Example:

    import numpy as np
    from keras.models import Sequential
    from keras.layers.core import Dense, Activation
    
    N = 5000
    np.random.seed(1234)
    X = np.random.randint(0, 50, size=(N,2))
    y = np.linalg.norm(X, axis=1)
    
    NB_EPOCHS = 100
    VERBOSE = 1
    
    model = Sequential()
    model.add(Dense(20, input_dim = 2, activation='relu'))
    model.add(Dense(1, activation='linear'))
    model.compile(loss='mse', optimizer='adam', metrics=['mean_squared_error'])
    model.fit(X, y, epochs=NB_EPOCHS, verbose=VERBOSE)
    

    Training:

    ...
    5000/5000 [==============================] - 0s 22us/step - loss: 0.0057 - mean_squared_error: 0.0057
    Epoch 98/100
    5000/5000 [==============================] - 0s 22us/step - loss: 0.0048 - mean_squared_error: 0.0048
    Epoch 99/100
    5000/5000 [==============================] - 0s 22us/step - loss: 0.0045 - mean_squared_error: 0.0045
    Epoch 100/100
    5000/5000 [==============================] - 0s 22us/step - loss: 0.0043 - mean_squared_error: 0.0043
    Out[70]: <keras.callbacks.History at 0xf2d9ac8>
    

    Predictions:

    In [71]: model.predict(np.array([[3,4], [10,10], [5,6]]))
    Out[71]:
    array([[ 5.018393],
           [14.130004],
           [ 7.841759]], dtype=float32)
    
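    As a sanity check (not part of the original answer), the predictions above can be compared against the exact hypotenuse lengths via np.hypot — the true values are 5.0, ≈14.1421, and ≈7.8102, so the fitted model is within a few hundredths:

    ```python
    import numpy as np

    # The three query points from the prediction above
    pts = np.array([[3, 4], [10, 10], [5, 6]], dtype=np.float32)

    # Exact hypotenuse lengths: 5.0, ~14.1421, ~7.8102
    exact = np.hypot(pts[:, 0], pts[:, 1])

    # Model outputs quoted in the answer
    pred = np.array([5.018393, 14.130004, 7.841759])

    # Worst-case absolute error is about 0.03
    print(np.abs(pred - exact).max())
    ```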
    • 3
