How do I save a trained model in PyTorch? I have read that:

torch.save()/torch.load() is for saving/loading a serializable object. model.state_dict()/model.load_state_dict() is for saving/loading model state.


Current answer

A common PyTorch convention is to save models using a .pt or .pth file extension.

Save/load the entire model

Save:

path = "username/directory/lstmmodelgpu.pth"
torch.save(model, path)

Load:

(The model class must be defined somewhere.)

model = torch.load(path)
model.eval()
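Saving the entire model pickles the full object, which ties the file to your exact class and directory layout. The approach the question itself mentions, saving only the state_dict, is more portable. A minimal sketch (SmallNet is a hypothetical stand-in model, not from the original answer):

```python
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

model = SmallNet()
torch.save(model.state_dict(), "smallnet.pth")   # save only the weights

restored = SmallNet()                            # class must be defined first
restored.load_state_dict(torch.load("smallnet.pth"))
restored.eval()                                  # disable dropout/batch-norm updates
```

Because only tensors are stored, the file can be reloaded even after the surrounding training code changes, as long as the module structure matches.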

Other answers

pip install pytorch-lightning

Make sure your parent model uses pl.LightningModule instead of nn.Module.

Saving and loading checkpoints with PyTorch Lightning:

import pytorch_lightning as pl

model = MyLightningModule(hparams)
trainer = pl.Trainer()
trainer.fit(model)
trainer.save_checkpoint("example.ckpt")
new_model = MyLightningModule.load_from_checkpoint(checkpoint_path="example.ckpt")

I use this method; I hope it is useful to you.

num_labels = len(test_label_cols)
robertaclassificationtrain = '/dbfs/FileStore/tables/PM/TC/roberta_model'         # saved state_dict
robertaclassificationpath = "/dbfs/FileStore/tables/PM/TC/ROBERTACLASSIFICATION"  # pretrained model/config

model = RobertaForSequenceClassification.from_pretrained(robertaclassificationpath,
                                                         num_labels=num_labels)
model.cuda()

model.load_state_dict(torch.load(robertaclassificationtrain))
model.eval()

I had already saved my trained model at the 'roberta_model' path. To save a trained model:

torch.save(model.state_dict(), '/dbfs/FileStore/tables/PM/TC/roberta_model')

If you want to save the model and resume training later:

Single GPU:

Save:

state = {
        'epoch': epoch,
        'state_dict': model.state_dict(),
        'optimizer': optimizer.state_dict(),
}
savepath = 'checkpoint.t7'
torch.save(state, savepath)

Load:

checkpoint = torch.load('checkpoint.t7')
model.load_state_dict(checkpoint['state_dict'])
optimizer.load_state_dict(checkpoint['optimizer'])
epoch = checkpoint['epoch']

Multiple GPUs:

Save:

state = {
        'epoch': epoch,
        'state_dict': model.module.state_dict(),
        'optimizer': optimizer.state_dict(),
}
savepath = 'checkpoint.t7'
torch.save(state, savepath)

Load:

checkpoint = torch.load('checkpoint.t7')
model.load_state_dict(checkpoint['state_dict'])
optimizer.load_state_dict(checkpoint['optimizer'])
epoch = checkpoint['epoch']

#Don't call DataParallel before loading the model otherwise you will get an error

model = nn.DataParallel(model) #ignore the line if you want to load on Single GPU
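Putting the checkpoint dict above to work, here is a minimal self-contained sketch of saving and resuming training on a single GPU or CPU (nn.Linear stands in for the real model, and the loop body is elided):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)                          # stand-in for the real model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# save model, optimizer, and epoch together, as in the snippet above
state = {
    'epoch': 5,
    'state_dict': model.state_dict(),
    'optimizer': optimizer.state_dict(),
}
torch.save(state, 'checkpoint.t7')

# restore everything and continue where training stopped
checkpoint = torch.load('checkpoint.t7')
model.load_state_dict(checkpoint['state_dict'])
optimizer.load_state_dict(checkpoint['optimizer'])
start_epoch = checkpoint['epoch'] + 1            # resume from the next epoch
for epoch in range(start_epoch, 10):
    pass                                         # training loop goes here
```

Saving the optimizer state matters because optimizers like SGD with momentum or Adam carry per-parameter buffers; without them, resumed training behaves differently from uninterrupted training.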

I always prefer saving PyTorch model weights as Torch7 (.t7) or pickle (.pth, .pt) files.
