RError.com

Accepted
Pavel
Asked: 2020-02-14 14:46:47 +0000 UTC

Loss and accuracy values stay the same while training a model


Colleagues, could you tell me what the problem might be:

I am trying to train a model to predict company defaults. There is a dataset with 100,000 companies and 37 features.

Splitting the dataset into train and test:

from sklearn.model_selection import train_test_split

train, test = train_test_split(df_findataset, test_size=0.2)
x_train = train.iloc[:, 0:36]
y_train = train.iloc[:, -1]  # (0 - no default, 1 - defaulted company)

x_test = test.iloc[:, 0:36]
y_test = test.iloc[:, -1]
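(Editorial aside: with a rare positive class like corporate defaults, it is worth checking that the split preserves the class ratio; `train_test_split` accepts a `stratify=` argument for exactly this. A minimal sketch, using synthetic data as a stand-in for `df_findataset` and a hypothetical `default` label column:)

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

# Synthetic stand-in for df_findataset: 36 features + a 0/1 default label.
rng = np.random.default_rng(0)
df_findataset = pd.DataFrame(rng.normal(size=(1000, 36)))
df_findataset['default'] = (rng.random(1000) < 0.05).astype(int)  # ~5% positives

# stratify= keeps the (rare) positive-class ratio equal in train and test
train, test = train_test_split(df_findataset, test_size=0.2,
                               stratify=df_findataset['default'],
                               random_state=42)

print(train['default'].mean(), test['default'].mean())  # ratios should be close
```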

Created a model:

import tensorflow as tf

x_train = tf.keras.utils.normalize(x_train, axis=1)
x_test = tf.keras.utils.normalize(x_test, axis=1)

model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(50, input_dim=36, activation='relu'))
model.add(tf.keras.layers.Dense(2, activation='relu'))
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

model.fit(x_train, y_train, epochs=10, batch_size=10)

During training it outputs the following - everything stays the same :(

Epoch 1/10
108960/108960 [==============================] - 14s 125us/step - loss: 0.1534 - acc: 0.9645
Epoch 2/10
108960/108960 [==============================] - 13s 123us/step - loss: 0.1534 - acc: 0.9645
Epoch 3/10
108960/108960 [==============================] - 14s 125us/step - loss: 0.1534 - acc: 0.9645
Epoch 4/10
108960/108960 [==============================] - 14s 131us/step - loss: 0.1534 - acc: 0.9645
Epoch 5/10
108960/108960 [==============================] - 15s 139us/step - loss: 0.1534 - acc: 0.9645
Epoch 6/10
108960/108960 [==============================] - 15s 136us/step - loss: 0.1534 - acc: 0.9645
Epoch 7/10
108960/108960 [==============================] - 15s 139us/step - loss: 0.1534 - acc: 0.9645
Epoch 8/10
108960/108960 [==============================] - 16s 143us/step - loss: 0.1534 - acc: 0.9645
Epoch 9/10
108960/108960 [==============================] - 14s 129us/step - loss: 0.1534 - acc: 0.9645
Epoch 10/10
108960/108960 [==============================] - 15s 140us/step - loss: 0.1534 - acc: 0.9645

Can you tell me what this might be related to :( (the data seems to have been cleaned as much as possible, as best I could :()
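(Editorial aside on the numbers above: an accuracy of 0.9645 that never moves is exactly what a model scoring "no default" for every company would achieve if ~3.55% of companies are defaulters. A sketch of that majority-class baseline, with the class ratio inferred from the reported accuracy rather than taken from the real data:)

```python
import numpy as np

# Hypothetical labels with ~3.55% positives - the ratio implied by acc = 0.9645.
rng = np.random.default_rng(1)
y = (rng.random(100_000) < 0.0355).astype(int)

# A "model" that always predicts the majority class (no default):
y_pred = np.zeros_like(y)
baseline_acc = (y_pred == y).mean()
print(f"majority-class baseline accuracy: {baseline_acc:.4f}")  # ~0.9645
```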

pandas

1 Answer

  • Best Answer
    MaxU - stop genocide of UA
    Answered: 2020-02-14T15:46:00Z

    Try it like this:

    import pandas as pd
    import numpy as np
    import tensorflow as tf
    from keras import Sequential
    from keras.layers import Dense, Dropout
    from keras.callbacks import EarlyStopping, ModelCheckpoint
    from keras.models import save_model, load_model
    from sklearn.model_selection import train_test_split
    
    filename = r'C:\work\ML\SO\944642-Keras_NN\fin2018.csv'
    
    # let's set 'Регистрационный номер' (registration number) as the index
    df = pd.read_csv(filename, index_col=0)
    
    X_train, X_test, Y_train, Y_test = \
      train_test_split(
        tf.keras.utils.normalize(df.drop(['bankrt_status'], axis=1)),
        #df.drop(['Регистрационный номер','bankrt_status'], axis=1),
        df['bankrt_status'],
        test_size=0.2)
    
    
    model = Sequential()
    model.add(Dense(64, input_dim=X_train.shape[1], activation='relu'))
    model.add(Dense(16, activation='relu'))
    model.add(Dropout(0.2))
    model.add(Dense(1, activation='sigmoid'))
    
    model.compile(loss='binary_crossentropy', optimizer='adamax', metrics=['accuracy'])
    
    
    model_fn = r'c:\temp\model.h5'
    # Keras callbacks
    early_stop = EarlyStopping(monitor='val_acc', min_delta=0.0001,
                               patience=5, verbose=1, mode='auto')
    chkpt = ModelCheckpoint(str(model_fn), 
                            monitor='val_acc', 
                            verbose=1, 
                            save_best_only=True, 
                            mode='auto')
    callbacks = [early_stop, chkpt]
    
    
    model.fit(X_train, Y_train, validation_split=0.15, epochs=20, batch_size=32,
              callbacks=callbacks)
    
    model = load_model(model_fn)
    
    score, acc = model.evaluate(X_test, Y_test)
    print(f"test score: {score:0.3f}\ttest accuracy: {acc:0.4f}")
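    (Editorial aside, not part of the original answer: the snippet above normalizes the whole frame before splitting, so the test rows influence the normalization. A common variant is to fit the scaler on the training portion only and then apply it to both splits. A sketch with scikit-learn's `StandardScaler` on synthetic data, since the real CSV isn't available here:)

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the feature frame.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(loc=5.0, scale=2.0, size=(1000, 36)))
y = (rng.random(1000) < 0.05).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=42)

# Fit the scaler on the training rows only, then apply it to both splits,
# so test-set statistics never influence the preprocessing.
scaler = StandardScaler().fit(X_train)
X_train_s = scaler.transform(X_train)
X_test_s = scaler.transform(X_test)

print(X_train_s.mean().round(3), X_train_s.std().round(3))  # ~0.0 and ~1.0
```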
    

    Training:

    Train on 92616 samples, validate on 16344 samples
    Epoch 1/20
    92616/92616 [==============================] - 3s 31us/step - loss: 0.1314 - acc: 0.9646 - val_loss: 0.1185 - val_acc: 0.9637
    
    Epoch 00001: val_acc improved from -inf to 0.96372, saving model to c:\temp\model.h5
    Epoch 2/20
    92616/92616 [==============================] - 3s 33us/step - loss: 0.1208 - acc: 0.9651 - val_loss: 0.1175 - val_acc: 0.9641
    
    Epoch 00002: val_acc improved from 0.96372 to 0.96415, saving model to c:\temp\model.h5
    Epoch 3/20
    92616/92616 [==============================] - 3s 28us/step - loss: 0.1192 - acc: 0.9656 - val_loss: 0.1163 - val_acc: 0.9645
    
    Epoch 00003: val_acc improved from 0.96415 to 0.96451, saving model to c:\temp\model.h5
    Epoch 4/20
    92616/92616 [==============================] - 3s 29us/step - loss: 0.1178 - acc: 0.9655 - val_loss: 0.1160 - val_acc: 0.9645
    
    Epoch 00004: val_acc did not improve from 0.96451
    Epoch 5/20
    92616/92616 [==============================] - 2s 27us/step - loss: 0.1163 - acc: 0.9655 - val_loss: 0.1152 - val_acc: 0.9646
    
    Epoch 00005: val_acc improved from 0.96451 to 0.96464, saving model to c:\temp\model.h5
    Epoch 6/20
    92616/92616 [==============================] - 3s 31us/step - loss: 0.1155 - acc: 0.9657 - val_loss: 0.1149 - val_acc: 0.9646
    
    Epoch 00006: val_acc did not improve from 0.96464
    Epoch 7/20
    92616/92616 [==============================] - 2s 27us/step - loss: 0.1145 - acc: 0.9657 - val_loss: 0.1146 - val_acc: 0.9646
    
    Epoch 00007: val_acc did not improve from 0.96464
    Epoch 8/20
    92616/92616 [==============================] - 2s 27us/step - loss: 0.1143 - acc: 0.9654 - val_loss: 0.1158 - val_acc: 0.9647
    
    Epoch 00008: val_acc improved from 0.96464 to 0.96470, saving model to c:\temp\model.h5
    Epoch 9/20
    92616/92616 [==============================] - 3s 29us/step - loss: 0.1132 - acc: 0.9657 - val_loss: 0.1150 - val_acc: 0.9649
    
    Epoch 00009: val_acc improved from 0.96470 to 0.96488, saving model to c:\temp\model.h5
    Epoch 10/20
    92616/92616 [==============================] - 3s 34us/step - loss: 0.1129 - acc: 0.9658 - val_loss: 0.1140 - val_acc: 0.9648
    
    Epoch 00010: val_acc did not improve from 0.96488
    Epoch 11/20
    92616/92616 [==============================] - 2s 27us/step - loss: 0.1131 - acc: 0.9657 - val_loss: 0.1135 - val_acc: 0.9647
    
    Epoch 00011: val_acc did not improve from 0.96488
    Epoch 12/20
    92616/92616 [==============================] - 2s 26us/step - loss: 0.1124 - acc: 0.9657 - val_loss: 0.1142 - val_acc: 0.9648
    
    Epoch 00012: val_acc did not improve from 0.96488
    Epoch 13/20
    92616/92616 [==============================] - 2s 26us/step - loss: 0.1122 - acc: 0.9656 - val_loss: 0.1136 - val_acc: 0.9648
    
    Epoch 00013: val_acc did not improve from 0.96488
    Epoch 14/20
    92616/92616 [==============================] - 2s 26us/step - loss: 0.1119 - acc: 0.9658 - val_loss: 0.1142 - val_acc: 0.9648
    
    Epoch 00014: val_acc did not improve from 0.96488
    Epoch 00014: early stopping
    27241/27241 [==============================] - 0s 14us/step
    test score: 0.107       test accuracy: 0.9674
    

    Testing:

    test score: 0.107       test accuracy: 0.9674
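    (Editorial aside: with only ~3.5% positives, accuracy alone is hard to interpret - the all-"no default" baseline already scores ~0.9645 - so it helps to also look at per-class precision/recall and ROC AUC. A sketch with scikit-learn, using toy labels and scores since the trained model isn't available here:)

```python
import numpy as np
from sklearn.metrics import classification_report, roc_auc_score

# Toy ground truth (~5% defaulters) and toy predicted probabilities.
rng = np.random.default_rng(2)
y_true = (rng.random(2000) < 0.05).astype(int)
proba = np.clip(0.4 * y_true + rng.random(2000) * 0.6, 0, 1)
y_pred = (proba > 0.5).astype(int)

# Per-class precision/recall/F1 is far more informative than raw accuracy here.
print(classification_report(y_true, y_pred, digits=3))
print("ROC AUC:", round(roc_auc_score(y_true, proba), 3))
```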
    
