So I decided to build a neural network that, in theory, should learn the logical OR operation, but now I'm stuck on preparing the answers for the training examples.
In [1]: from __future__ import print_function
...: import numpy as np
...: from keras.datasets import mnist
...: from keras.models import Sequential
...: from keras.layers.core import Dense, Activation
...: from keras.optimizers import SGD
...: from keras.utils import np_utils
     ...: np.random.seed(1671) # for reproducible results
...:
...: NB_EPOCH = 20
...: BATCH_SIZE = 3
...: VERBOSE = 1
     ...: NB_CLASSES = 1 # number of output classes
     ...: OPTIMIZER = SGD() # SGD optimizer
...: N_HIDDEN = 64
In [2]: X_in = [[1,0],[1,1],[0,0],[0,1],[1,1],[0,0],[1,1]]
...: x_otvet = [1,1,0,1,1,0,1]
...: X_in = np.asarray(X_in, dtype=np.float32)
...: x_otvet = np.asarray(x_otvet, dtype=np.float32)
In [3]: x_otvet = np_utils.to_categorical(x_otvet, NB_CLASSES)
Traceback (most recent call last):
File "<ipython-input-12-a268ac2ea14c>", line 1, in <module>
x_otvet = np_utils.to_categorical(x_otvet, NB_CLASSES)
File "D:\Users\Alex\Anaconda3\lib\site-packages\keras\utils\np_utils.py", line 31, in to_categorical
categorical[np.arange(n), y] = 1
IndexError: index 1 is out of bounds for axis 1 with size 1
If I omit np_utils.to_categorical, I get the following instead:
In [13]: model = Sequential()
...: model.add(Dense(NB_CLASSES, input_shape=(2,)))
...: model.add(Activation('softmax'))
...: model.summary()
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_2 (Dense) (None, 1) 3
_________________________________________________________________
activation_2 (Activation) (None, 1) 0
=================================================================
Total params: 3
Trainable params: 3
Non-trainable params: 0
_________________________________________________________________
In [14]: model.compile(loss='categorical_crossentropy',
...: optimizer=OPTIMIZER,
...: metrics=['accuracy'])
In [15]: history = model.fit(X_in, x_otvet,
...: batch_size=BATCH_SIZE, epochs=NB_EPOCH,
...: verbose=VERBOSE)
Traceback (most recent call last):
File "<ipython-input-15-c47f30350ab5>", line 3, in <module>
verbose=VERBOSE)
File "D:\Users\Alex\Anaconda3\lib\site-packages\keras\models.py", line 1002, in fit
validation_steps=validation_steps)
File "D:\Users\Alex\Anaconda3\lib\site-packages\keras\engine\training.py", line 1630, in fit
batch_size=batch_size)
File "D:\Users\Alex\Anaconda3\lib\site-packages\keras\engine\training.py", line 1493, in _standardize_user_data
self._feed_output_shapes)
File "D:\Users\Alex\Anaconda3\lib\site-packages\keras\engine\training.py", line 256, in _check_loss_and_target_compatibility
' while using as loss `categorical_crossentropy`. '
ValueError: You are passing a target array of shape (7, 1) while using as loss `categorical_crossentropy`. `categorical_crossentropy` expects targets to be binary matrices (1s and 0s) of shape (samples, classes). If your targets are integer classes, you can convert them to the expected format via:
```
from keras.utils import to_categorical
y_binary = to_categorical(y_int)
```
Alternatively, you can use the loss function `sparse_categorical_crossentropy` instead, which does expect integer targets.
PS: sorry if this is a silly question.
Update:
Well, I've more or less fixed my mistake, but now the problem is that I can't understand why my network rejects the format of the input data: the array contains two elements, yet it complains about one.
In [16]: NB_CLASSES = 2
In [18]: x_otvet = np_utils.to_categorical(x_otvet, NB_CLASSES)
In [20]: model = Sequential()
...: model.add(Dense(NB_CLASSES, input_shape=(2,)))
...: model.add(Activation('softmax'))
...: model.summary()
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_3 (Dense) (None, 2) 6
_________________________________________________________________
activation_3 (Activation) (None, 2) 0
=================================================================
Total params: 6
Trainable params: 6
Non-trainable params: 0
_________________________________________________________________
In [22]: model.compile(loss='categorical_crossentropy', optimizer=OPTIMIZER, metrics=['accuracy'])
In [23]: history = model.fit(X_in, x_otvet, batch_size=BATCH_SIZE, epochs=NB_EPOCH, verbose=VERBOSE)
Epoch 1/20
7/7 [==============================] - 1s 75ms/step - loss: 0.4861 - acc: 0.8571
Epoch 2/20
7/7 [==============================] - 0s 643us/step - loss: 0.4833 - acc: 1.0000
Epoch 3/20
7/7 [==============================] - 0s 572us/step - loss: 0.4784 - acc: 0.7143
Epoch 4/20
7/7 [==============================] - 0s 500us/step - loss: 0.4751 - acc: 0.7143
Epoch 5/20
7/7 [==============================] - 0s 500us/step - loss: 0.4712 - acc: 0.7143
Epoch 6/20
7/7 [==============================] - 0s 572us/step - loss: 0.4680 - acc: 0.7143
Epoch 7/20
7/7 [==============================] - 0s 643us/step - loss: 0.4637 - acc: 0.7143
Epoch 8/20
7/7 [==============================] - 0s 500us/step - loss: 0.4603 - acc: 0.7143
Epoch 9/20
7/7 [==============================] - 0s 500us/step - loss: 0.4556 - acc: 0.7143
Epoch 10/20
7/7 [==============================] - 0s 500us/step - loss: 0.4525 - acc: 0.7143
Epoch 11/20
7/7 [==============================] - 0s 500us/step - loss: 0.4492 - acc: 0.7143
Epoch 12/20
7/7 [==============================] - 0s 500us/step - loss: 0.4457 - acc: 0.7143
Epoch 13/20
7/7 [==============================] - 0s 429us/step - loss: 0.4416 - acc: 0.7143
Epoch 14/20
7/7 [==============================] - 0s 643us/step - loss: 0.4390 - acc: 0.7143
Epoch 15/20
7/7 [==============================] - 0s 500us/step - loss: 0.4367 - acc: 0.7143
Epoch 16/20
7/7 [==============================] - 0s 500us/step - loss: 0.4342 - acc: 0.7143
Epoch 17/20
7/7 [==============================] - 0s 429us/step - loss: 0.4320 - acc: 0.7143
Epoch 18/20
7/7 [==============================] - 0s 500us/step - loss: 0.4290 - acc: 0.7143
Epoch 19/20
7/7 [==============================] - 0s 500us/step - loss: 0.4269 - acc: 0.7143
Epoch 20/20
7/7 [==============================] - 0s 500us/step - loss: 0.4242 - acc: 0.7143
In [25]: answer = model.predict([1,0])
Traceback (most recent call last):
File "<ipython-input-25-9f7569b0b419>", line 1, in <module>
answer = model.predict([1,0])
File "D:\Users\Alex\Anaconda3\lib\site-packages\keras\models.py", line 1064, in predict
steps=steps)
File "D:\Users\Alex\Anaconda3\lib\site-packages\keras\engine\training.py", line 1817, in predict
check_batch_axis=False)
File "D:\Users\Alex\Anaconda3\lib\site-packages\keras\engine\training.py", line 123, in _standardize_input_data
str(data_shape))
ValueError: Error when checking : expected dense_3_input to have shape (2,) but got array with shape (1,)
In [26]: answer = model.predict([1,0])
Traceback (most recent call last):
File "<ipython-input-26-9f7569b0b419>", line 1, in <module>
answer = model.predict([1,0])
File "D:\Users\Alex\Anaconda3\lib\site-packages\keras\models.py", line 1064, in predict
steps=steps)
File "D:\Users\Alex\Anaconda3\lib\site-packages\keras\engine\training.py", line 1817, in predict
check_batch_axis=False)
File "D:\Users\Alex\Anaconda3\lib\site-packages\keras\engine\training.py", line 123, in _standardize_input_data
str(data_shape))
ValueError: Error when checking : expected dense_3_input to have shape (2,) but got array with shape (1,)
You have two classes (0 and 1), not just one, and that is what caused the error. NB_CLASSES can be computed dynamically when the NumPy target is a vector:
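A minimal sketch of that, reusing the question's own `x_otvet` target vector:

```python
import numpy as np

# The OR targets from the question, as a flat vector
x_otvet = np.asarray([1, 1, 0, 1, 1, 0, 1], dtype=np.float32)

# Count the distinct labels instead of hard-coding NB_CLASSES = 1
NB_CLASSES = len(np.unique(x_otvet))
print(NB_CLASSES)  # 2
```

With `NB_CLASSES = 2`, `np_utils.to_categorical(x_otvet, NB_CLASSES)` produces the (7, 2) one-hot matrix that `categorical_crossentropy` expects.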
Update: your model expects a 2D NumPy matrix with two columns as input:
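That is, the single sample `[1, 0]` must be wrapped so its shape is (1, 2), not (2,) or (1,). A minimal sketch (`model` is the trained model from the question):

```python
import numpy as np

# predict expects shape (n_samples, n_features) = (n_samples, 2);
# a plain Python list [1, 0] is interpreted as two 1-element samples.
sample = np.asarray([[1, 0]], dtype=np.float32)
print(sample.shape)  # (1, 2)

# answer = model.predict(sample)  # now matches input_shape=(2,)
```

The extra pair of brackets adds the batch axis, which is what the `expected dense_3_input to have shape (2,)` error is really asking for.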