[討論] 自寫的類神經倒傳遞LM演算法卻收斂不了,懇請幫忙!!已回收
我自己寫了一個類神經倒傳遞LM演算法,做一個XOR的問題。我採用一個隱藏層(含有
兩個神經元),輸出一個OUTPUT,但是我卻怎麼樣也收斂不了,並且還會越來越大。
程式我也怎麼也檢查不出哪裡出錯。想請板上高手幫忙,謝謝!
clear;
%-------設定網路參數-----
input=[0 0 1 1;0 1 0 1]; target=[1 0 0 1];
hidden_layer=2
MAX_learn=5000;
target_error=0.01;
mu=0.001; mu_dec=0.1; mu_inc=10;
%-------------------------
%-------初始化-----------
jacob=[]; err=[]; perform=0; err_before=0; flag1=1;
InputWeighting=-1+2*rand(hidden_layer,size(input,1)+1);
LayerWeighting=-1+2*rand(size(target,1),hidden_layer+1);
pattern_num=size(input,2);
%-----------------------
%-------lm演算法程式----
while(flag1<MAX_learn)
for i=1:pattern_num
w11=InputWeighting(1,1);w12=InputWeighting(1,2);wb1=InputWeighting(1,3);
w21=InputWeighting(2,1);w22=InputWeighting(2,2);wb2=InputWeighting(2,3);
u11=LayerWeighting(1,1);u12=LayerWeighting(1,2);ub1=LayerWeighting(1,3);
w_all=[w11 w12 wb1 w21 w22 wb2 u11 u12 ub1];
v=logsig(InputWeighting*[input(:,i);1]); %隱藏層輸出
y=logsig(LayerWeighting*[v;1]); %輸出曾輸出
e=target(:,i)-y; %error
err(i,:)=e;
s1=-y*(1-y)*v(1)*(1-v(1));
s2=-y*(1-y)*v(2)*(1-v(2));
s3=-y*(1-y);
j=[s1*[input(:,i);1]' s2*[input(:,i);1]' s3*[v(1) v(2) 1]];
jacob(i,:)=j;
end
performance_index=(0.5*sum(err.^2))/pattern_num;
Q=(jacob')*jacob;
G=(jacob')*err;
delta_w=inv(Q+mu*eye(9))*G; %權重變異量
w_all=w_all-delta_w'; %修改權重
InputWighting=InputWeighting+[w_all(1:3) w_all(4:6)];
LayerWeighting=LayerWeighting+[w_all(7:9)];
if(performance_index<target_error)
break;
elseif(performance_index>target_error)
if(performance_index>err_before)
mu=mu*mu_inc;
err_before=performance_index;
elseif(performance_index<err_before)
mu=mu*mu_dec;
err_before=performance_index;
end
end
flag1=flag1+1;
end
--
※ 發信站: 批踢踢實業坊(ptt.cc)
◆ From: 140.112.26.128
→
12/31 21:28, , 1F
12/31 21:28, 1F
→
01/01 14:11, , 2F
01/01 14:11, 2F
推
01/01 16:39, , 3F
01/01 16:39, 3F