Update the classification labels with SQL.
#WHITE
UPDATE EURUSD_TRAINING_15M SET CLASSIFICATION=0;
#YELLOW
UPDATE EURUSD_TRAINING_15M SET CLASSIFICATION=1 WHERE R>=10 AND F>=10;
UPDATE EURUSD_TRAINING_15M SET CLASSIFICATION=1 WHERE ABS(R-F)<=1;
#RED-R
UPDATE EURUSD_TRAINING_15M SET CLASSIFICATION=2 WHERE R>=F+10;
#RED-F
UPDATE EURUSD_TRAINING_15M SET CLASSIFICATION=3 WHERE F>=R+10;
#CYAN-R
UPDATE EURUSD_TRAINING_15M SET CLASSIFICATION=4 WHERE R=1 AND F=0;
#CYAN-F
UPDATE EURUSD_TRAINING_15M SET CLASSIFICATION=5 WHERE R=0 AND F=1;
#GREEN-R
UPDATE EURUSD_TRAINING_15M SET CLASSIFICATION=6 WHERE R=2 AND F=0;
#GREEN-F
UPDATE EURUSD_TRAINING_15M SET CLASSIFICATION=7 WHERE R=0 AND F=2;
#BLUE-R
UPDATE EURUSD_TRAINING_15M SET CLASSIFICATION=8 WHERE R=3 AND F=0;
UPDATE EURUSD_TRAINING_15M SET CLASSIFICATION=8 WHERE R=2 AND F=1;
#BLUE-F
UPDATE EURUSD_TRAINING_15M SET CLASSIFICATION=9 WHERE R=0 AND F=3;
UPDATE EURUSD_TRAINING_15M SET CLASSIFICATION=9 WHERE R=1 AND F=2;
#PURPLE-R
UPDATE EURUSD_TRAINING_15M SET CLASSIFICATION=10 WHERE R=4 AND F=0;
UPDATE EURUSD_TRAINING_15M SET CLASSIFICATION=10 WHERE R=3 AND F=1;
#PURPLE-F
UPDATE EURUSD_TRAINING_15M SET CLASSIFICATION=11 WHERE R=0 AND F=4;
UPDATE EURUSD_TRAINING_15M SET CLASSIFICATION=11 WHERE R=1 AND F=3;
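Because the UPDATE statements run sequentially over the same table, a later rule overwrites the label assigned by an earlier one. The same labeling logic can be sketched in Python (the function name `classify` is ours, for illustration only):

```python
def classify(r, f):
    """Replicate the sequential UPDATE rules; later rules override earlier ones."""
    c = 0                            # WHITE: default label
    if r >= 10 and f >= 10:          # YELLOW
        c = 1
    if abs(r - f) <= 1:              # YELLOW
        c = 1
    if r >= f + 10:                  # RED-R
        c = 2
    if f >= r + 10:                  # RED-F
        c = 3
    if r == 1 and f == 0:            # CYAN-R
        c = 4
    if r == 0 and f == 1:            # CYAN-F
        c = 5
    if r == 2 and f == 0:            # GREEN-R
        c = 6
    if r == 0 and f == 2:            # GREEN-F
        c = 7
    if (r, f) in ((3, 0), (2, 1)):   # BLUE-R
        c = 8
    if (r, f) in ((0, 3), (1, 2)):   # BLUE-F
        c = 9
    if (r, f) in ((4, 0), (3, 1)):   # PURPLE-R
        c = 10
    if (r, f) in ((0, 4), (1, 3)):   # PURPLE-F
        c = 11
    return c
```

Note, for instance, that R=1, F=0 first matches the ABS(R-F)<=1 rule (class 1) and is then overwritten by the CYAN-R rule (class 4), exactly as in the SQL sequence.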
Export these training samples to a text file. Edit the file make_file_15m.py.
import os, sys, MySQLdb
import numpy as np

db = MySQLdb.connect(host='localhost', user='root', passwd='111111', db='FOREX')
cursor = db.cursor()
cursor.execute('USE FOREX;')
sql = 'SELECT * FROM EURUSD_TRAINING_15M;'
cursor.execute(sql)
result = cursor.fetchall()
for i in range(cursor.rowcount):
    # print the 9 fields of each record as one comma-separated line
    print ','.join(str(result[i][j]) for j in range(9))
cursor.close()
db.close()
Run it from the shell.
python make_file_15m.py >> record_15M.txt
Truncate the file, keeping only the last 3.5 million records.
tail -n 3500000 record_15M.txt >> record_15M_3500000.txt
Edit the training script train_1.py.
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.optimizers import SGD
import os, sys, time
import numpy as np

print "Start to generate network"
model = Sequential()
model.add(Dense(120, input_dim=63))
model.add(Activation('sigmoid'))
model.add(Dense(80, input_dim=100))
model.add(Activation('sigmoid'))
model.add(Dense(60, input_dim=70))
model.add(Activation('sigmoid'))
model.add(Dense(50, input_dim=120))
model.add(Activation('sigmoid'))
model.add(Dense(40, input_dim=50))
model.add(Activation('sigmoid'))
model.add(Dense(30, input_dim=40))
model.add(Activation('sigmoid'))
model.add(Dense(12, input_dim=30))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])

print "start to load data"
records = open('./record_15M_3500000.txt', 'r')
X_train = []
y_train = []
line_pointer = -1
for line in records.readlines():
    line_pointer = line_pointer + 1
    X_train.append([])
    y_train.append([])
    values = line.split(',')
    # look back at most 14 bars; early lines have less history available
    if line_pointer <= 14:
        line_length = line_pointer
    else:
        line_length = 14
    # time features: hour, month, weekday (note: "%m" is the month field;
    # "%M" would give minutes)
    the_time = time.strptime(str(values[0]), "%Y-%m-%d %H:%M:%S")
    X_train[line_pointer].append(float(time.strftime("%H", the_time)))
    X_train[line_pointer].append(float(time.strftime("%m", the_time)))
    X_train[line_pointer].append(float(time.strftime("%w", the_time)))
    X_train[line_pointer].append(float(values[1]))
    X_train[line_pointer].append(float(values[2]))
    X_train[line_pointer].append(float(values[3]))
    X_train[line_pointer].append(float(values[4]))
    # copy the previous line's history block, 4 values per lookback step
    for j in range(line_length):
        X_train[line_pointer].append(X_train[line_pointer-1][j*4+4])
        X_train[line_pointer].append(X_train[line_pointer-1][j*4+5])
        X_train[line_pointer].append(X_train[line_pointer-1][j*4+6])
        X_train[line_pointer].append(X_train[line_pointer-1][j*4+7])
    # zero-pad every feature vector to a fixed width of 63
    for i in range(63 - len(X_train[line_pointer])):
        X_train[line_pointer].append(0)
    # one-hot encode the class label (field 8) into 12 slots
    for k in range(12):
        y_train[line_pointer].append(0)
    y_train[line_pointer][int(values[8])] = 1
    if line_pointer % 10000 == 0:
        print line_pointer

print "start training"
#print X_train[0]
#print X_train[100]
model.fit(X_train, y_train, nb_epoch=20, batch_size=2000, validation_split=0.15)
json_string = model.to_json()
open('./my_model_architecture_1.json', 'w').write(json_string)
model.save_weights('./my_model_weights_1.h5')
pre = model.predict(X_train)
# build a 12x12 confusion matrix: rows are true labels, columns are predictions
predicted = np.zeros((12, 12))
for i in range(len(pre)):
    max_train = 0
    max_pre = 0
    for m in range(12):
        if y_train[i][m] == 1:
            max_train = m
    for m in range(12):
        if pre[i][max_pre] < pre[i][m]:
            max_pre = m
    predicted[max_train][max_pre] = predicted[max_train][max_pre] + 1
for i in range(12):
    for j in range(12):
        print predicted[i][j],
    print ""
This code runs for only 20 epochs and produces the following output.
Train on 2975000 samples, validate on 525000 samples
Epoch 1/20
2975000/2975000 [==============================] - 14s - loss: 1.8795 - acc: 0.3742 - val_loss: 1.6941 - val_acc: 0.4185
Epoch 2/20
2975000/2975000 [==============================] - 14s - loss: 1.8547 - acc: 0.3751 - val_loss: 1.6928 - val_acc: 0.4185
Epoch 3/20
2975000/2975000 [==============================] - 14s - loss: 1.8531 - acc: 0.3751 - val_loss: 1.6813 - val_acc: 0.4185
Epoch 4/20
2975000/2975000 [==============================] - 14s - loss: 1.8504 - acc: 0.3751 - val_loss: 1.6750 - val_acc: 0.4185
Epoch 5/20
2975000/2975000 [==============================] - 14s - loss: 1.8476 - acc: 0.3750 - val_loss: 1.6702 - val_acc: 0.4185
Epoch 6/20
2975000/2975000 [==============================] - 14s - loss: 1.8458 - acc: 0.3750 - val_loss: 1.6696 - val_acc: 0.4180
Epoch 7/20
2975000/2975000 [==============================] - 14s - loss: 1.8448 - acc: 0.3751 - val_loss: 1.6637 - val_acc: 0.4187
Epoch 8/20
2975000/2975000 [==============================] - 14s - loss: 1.8439 - acc: 0.3752 - val_loss: 1.6783 - val_acc: 0.4174
Epoch 9/20
2975000/2975000 [==============================] - 14s - loss: 1.8428 - acc: 0.3752 - val_loss: 1.6555 - val_acc: 0.4186
Epoch 10/20
2975000/2975000 [==============================] - 14s - loss: 1.8415 - acc: 0.3752 - val_loss: 1.6528 - val_acc: 0.4186
Epoch 11/20
2975000/2975000 [==============================] - 14s - loss: 1.8405 - acc: 0.3752 - val_loss: 1.6534 - val_acc: 0.4185
Epoch 12/20
2975000/2975000 [==============================] - 14s - loss: 1.8398 - acc: 0.3753 - val_loss: 1.6525 - val_acc: 0.4185
Epoch 13/20
2975000/2975000 [==============================] - 14s - loss: 1.8391 - acc: 0.3753 - val_loss: 1.6504 - val_acc: 0.4186
Epoch 14/20
2975000/2975000 [==============================] - 14s - loss: 1.8384 - acc: 0.3754 - val_loss: 1.6573 - val_acc: 0.4186
Epoch 15/20
2975000/2975000 [==============================] - 14s - loss: 1.8376 - acc: 0.3754 - val_loss: 1.6475 - val_acc: 0.4185
Epoch 16/20
2975000/2975000 [==============================] - 14s - loss: 1.8366 - acc: 0.3754 - val_loss: 1.6498 - val_acc: 0.4186
Epoch 17/20
2975000/2975000 [==============================] - 14s - loss: 1.8348 - acc: 0.3754 - val_loss: 1.6688 - val_acc: 0.4168
Epoch 18/20
2975000/2975000 [==============================] - 14s - loss: 1.8321 - acc: 0.3756 - val_loss: 1.6759 - val_acc: 0.4167
Epoch 19/20
2975000/2975000 [==============================] - 13s - loss: 1.8280 - acc: 0.3760 - val_loss: 1.6991 - val_acc: 0.4127
Epoch 20/20
2975000/2975000 [==============================] - 13s - loss: 1.8239 - acc: 0.3762 - val_loss: 1.7493 - val_acc: 0.4077
27434.0 144944.0 0.0 0.0 0.0 197.0 0.0 0.0 0.0 0.0 0.0 0.0
28574.0 1305778.0 0.0 0.0 9.0 1268.0 0.0 0.0 0.0 0.0 0.0 0.0
1115.0 5092.0 0.0 0.0 0.0 5.0 0.0 0.0 0.0 0.0 0.0 0.0
1067.0 5173.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
17999.0 533833.0 0.0 0.0 5.0 1337.0 0.0 0.0 0.0 0.0 0.0 0.0
17493.0 528829.0 0.0 0.0 36.0 1471.0 0.0 0.0 0.0 0.0 0.0 0.0
13725.0 205288.0 0.0 0.0 0.0 728.0 0.0 0.0 0.0 0.0 0.0 0.0
13521.0 202727.0 0.0 0.0 10.0 713.0 0.0 0.0 0.0 0.0 0.0 0.0
15244.0 135881.0 0.0 0.0 0.0 375.0 0.0 0.0 0.0 0.0 0.0 0.0
14773.0 134573.0 0.0 0.0 0.0 416.0 0.0 0.0 0.0 0.0 0.0 0.0
8522.0 61368.0 0.0 0.0 0.0 125.0 0.0 0.0 0.0 0.0 0.0 0.0
8434.0 61777.0 0.0 0.0 0.0 141.0 0.0 0.0 0.0 0.0 0.0 0.0
For a clearer view, we put this table into Excel, as shown in Figure 18-22.
Figure 18-22  Training for 20 epochs
The vertical axis is the class label in the training set, and the horizontal axis is the class label the model actually predicts, so the meaning of each cell is clear. Take the cell at vertical "2", horizontal "1", with the value 5092: it means 5092 samples originally labeled as class 2 were misclassified as class 1. The cell at vertical "1", horizontal "1", with the value 1305778, means 1305778 samples that belong to class 1 were also predicted as class 1, i.e., correctly recognized. In theory, the larger the values in the shaded cells on the diagonal, the better. In this model, after 20 epochs, 1334688 samples were classified correctly, only about 38%. At this stage, we can let the algorithm keep converging by increasing the number of epochs.
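The overall accuracy quoted here is just the sum of the diagonal of the confusion matrix divided by the total sample count. A small pure-Python helper (the function name is ours) makes this explicit:

```python
def confusion_accuracy(matrix):
    """Overall accuracy from a confusion matrix:
    diagonal (correct) count over total count."""
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    total = sum(sum(row) for row in matrix)
    return float(correct) / total
```

Applied to the 12x12 matrix above, the diagonal sums to 27434 + 1305778 + 5 + 1471 = 1334688 out of 3500000 samples, which is the roughly 38% figure mentioned in the text.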
To speed up convergence, we can use ReLU as the activation function. Modify this part.
model=Sequential()model.add(Dense(120, input_dim=63))model.add(Activation(‘relu’))model.add(Dense(80, input_dim=100))model.add(Activation(‘relu’))model.add(Dense(60, input_dim=70))model.add(Activation(‘relu’))model.add(Dense(50, input_dim=120))model.add(Activation(‘relu’))model.add(Dense(40, input_dim=50))model.add(Activation(‘relu’))model.add(Dense(30, input_dim=40))model.add(Activation(‘relu’))model.add(Dense(12, input_dim=30))model.add(Activation(‘softmax’))
Change the number of training epochs to 150 and raise the validation split to 35%.
model.fit(X_train, y_train, nb_epoch=150, batch_size=2000, validation_split=0.35)
After training finishes, process the output data, as shown in Figure 18-23.
Figure 18-23  Training for 150 epochs
The change in the numbers after this training run is obvious. In this model, from the labeling point of view, the higher the recognition accuracy for the classes other than "0" and "1", the better. Although the result is still far from ideal, it is an improvement over the 20-epoch run, and the direction of entropy reduction is right: the classification results are concentrating toward the diagonal.