Below we run a high-dimensional linear regression experiment.
Suppose the ground-truth equation is: y = 0.01 Σ_{i=1}^{200} x_i + 0.5 + ε, with noise ε ~ N(0, 0.01²).
We use 200 features, with 10 training samples and 10 test samples.
Simulate the dataset:
```python
import torch

num_train, num_test = 10, 10
num_features = 200
true_w = torch.ones((num_features, 1), dtype=torch.float32) * 0.01
true_b = torch.tensor(0.5)
samples = torch.normal(0, 1, (num_train + num_test, num_features))
noise = torch.normal(0, 0.01, (num_train + num_test, 1))
labels = samples.matmul(true_w) + true_b + noise
train_samples, train_labels = samples[:num_train], labels[:num_train]
test_samples, test_labels = samples[num_train:], labels[num_train:]
```
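A quick sanity check of the generated tensors (this sketch re-creates the dataset above, with a fixed seed added for reproducibility):

```python
import torch

# Re-create the dataset above and check shapes.
torch.manual_seed(0)
num_train, num_test, num_features = 10, 10, 200
true_w = torch.ones((num_features, 1), dtype=torch.float32) * 0.01
true_b = torch.tensor(0.5)
samples = torch.normal(0, 1, (num_train + num_test, num_features))
noise = torch.normal(0, 0.01, (num_train + num_test, 1))
labels = samples.matmul(true_w) + true_b + noise

print(samples.shape, labels.shape)  # torch.Size([20, 200]) torch.Size([20, 1])
```

With only 20 samples against 200 features, an unregularized linear model can fit the training set almost perfectly while generalizing poorly, which is exactly the regime this experiment probes.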
Define the loss function with a regularization term:
```python
def loss_function(predict, label, w, lambd):
    # MSE plus an L2 penalty, lambd * mean(w^2)
    loss = (predict - label) ** 2
    loss = loss.mean() + lambd * (w ** 2).mean()
    return loss
```
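As a side note, with plain SGD the same effect can be obtained through the optimizer's `weight_decay` argument, which adds `wd * w` to the gradient — equivalent to adding the penalty `(wd / 2) * (w ** 2).sum()` to the loss (the loss above uses the mean of `w**2`, so its scale differs by a constant factor; also, with adaptive optimizers such as Adam, `weight_decay` interacts with the gradient scaling and is not the same as decoupled weight decay à la AdamW). A minimal sketch checking the equivalence for one SGD step, with illustrative made-up data:

```python
import torch

torch.manual_seed(0)
x = torch.randn(4, 3)
y = torch.randn(4, 1)
wd, lr = 0.1, 0.01

w1 = torch.randn(3, 1, requires_grad=True)
w2 = w1.detach().clone().requires_grad_(True)

# Variant 1: manual L2 penalty added to the loss
opt1 = torch.optim.SGD([w1], lr=lr)
loss1 = ((x @ w1 - y) ** 2).mean() + (wd / 2) * (w1 ** 2).sum()
opt1.zero_grad()
loss1.backward()
opt1.step()

# Variant 2: the optimizer's built-in weight_decay
opt2 = torch.optim.SGD([w2], lr=lr, weight_decay=wd)
loss2 = ((x @ w2 - y) ** 2).mean()
opt2.zero_grad()
loss2.backward()
opt2.step()

print(torch.allclose(w1, w2))  # True: the two updates coincide
```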
Plotting helper:
```python
import matplotlib.pyplot as plt

def semilogy(x_val, y_val, x_label, y_label, x2_val, y2_val, legend):
    plt.figure(figsize=(3, 3))
    plt.xlabel(x_label)
    plt.ylabel(y_label)
    plt.semilogy(x_val, y_val)
    if x2_val and y2_val:
        plt.semilogy(x2_val, y2_val)
    plt.legend(legend)
    plt.show()
```
Fit and plot:
```python
def fit_and_plot(train_samples, train_labels, test_samples, test_labels, num_epoch, lambd):
    w = torch.normal(0, 1, (train_samples.shape[-1], 1), requires_grad=True)
    b = torch.tensor(0., requires_grad=True)
    optimizer = torch.optim.Adam([w, b], lr=0.05)
    train_loss = []
    test_loss = []
    for epoch in range(num_epoch):
        predict = train_samples.matmul(w) + b
        epoch_train_loss = loss_function(predict, train_labels, w, lambd)
        optimizer.zero_grad()
        epoch_train_loss.backward()
        optimizer.step()
        with torch.no_grad():  # no gradients needed for evaluation
            test_predict = test_samples.matmul(w) + b
            epoch_test_loss = loss_function(test_predict, test_labels, w, lambd)
        train_loss.append(epoch_train_loss.item())
        test_loss.append(epoch_test_loss.item())
    semilogy(range(1, num_epoch + 1), train_loss, 'epoch', 'loss',
             range(1, num_epoch + 1), test_loss, ['train', 'test'])
```
We can see that with the regularization term added, the model's loss on the test set does come down.
Original link: https://blog.csdn.net/qq_43152622/article/details/116937183