Table of Contents
- Error when concatenating torch tensors with cat
- Error when computing accuracy in PyTorch
Error when concatenating torch tensors with cat
Code:
>>> a = torch.tensor([1,2,])
>>> b = torch.tensor([3,4,5])
>>> torch.cat([a,b],1)
Error:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
Cause: when concatenating two 1-D tensors there is no need to specify a concatenation dimension (or specify dimension 0). A 1-D tensor only has dimension 0, so dim=1 is out of range.
Solution:
# To concatenate horizontally -> [1, 2, 3, 4, 5]
>>> torch.cat([a,b])
tensor([1, 2, 3, 4, 5])
>>> torch.cat([a,b],0)
tensor([1, 2, 3, 4, 5])
# To concatenate vertically: first make the column dimensions consistent (turn each 1-D tensor into a 2-D one)
>>> a = torch.tensor([1,2,3])
# [3] -> [1,3]
>>> a = torch.unsqueeze(a,0)
>>> b = torch.unsqueeze(b,0)
>>> torch.cat([a,b],0)
tensor([[1, 2, 3],
        [3, 4, 5]])
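Note: assuming the two inputs are still the original 1-D tensors, torch.stack gives the same result directly, because stack itself inserts the new dimension; this is just an alternative to the unsqueeze + cat approach above.
>>> a = torch.tensor([1, 2, 3])
>>> b = torch.tensor([3, 4, 5])
>>> torch.stack([a, b], 0)  # stack creates dim 0, so no unsqueeze is needed
tensor([[1, 2, 3],
        [3, 4, 5]])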
Error when computing accuracy in PyTorch
Traceback (most recent call last):
  File "main.py", line 547, in <module>
  File "main.py", line 270, in main
    validate(test_loader_lfwa, model, criterion)
  File "main.py", line 480, in validate
    top1[j].update(prec1[j][0].item(), input.size(0))  # top1 is a list, contains <utils.misc.AverageMeter object at 0x7f8ac80bf410>
  File "/home/face-attribute-prediction/utils/eval.py", line 17, in accuracy
    targetValue, targetIndices = target.topk(maxk, 1, True, True)
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
The code that raises the error:
targetValue, targetIndices = target.topk(maxk, 1, True, True)
Cause:
When the targets are not one-hot encoded and no multi-label loss such as nn.MultiLabelSoftMarginLoss() or nn.BCEWithLogitsLoss() is used, the target is a 1-D tensor of class indices and does not need a topk conversion. The error is raised precisely because this conversion was applied. Incorrect call: prec1.append(accuracy(output[j], target_j, topk=(1,), useMultiLabelSoftMarginLoss=True)). A minimal reproduction is shown below.
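As a minimal sketch (assuming target is a 1-D tensor of class indices, i.e. shape [batch_size]), the same IndexError can be reproduced like this:
>>> target = torch.tensor([2, 0, 1])  # class indices, shape [3]: only dims 0 and -1 exist
>>> target.topk(1, 1, True, True)     # topk along dim 1 on a 1-D tensor
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)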
Solution:
The accuracy function (adapted by the author):
def accuracy(output, target, topk=(1,), useMultiLabelSoftMarginLoss=False):
    """Computes the precision@k for the specified values of k"""
    with torch.no_grad():
        maxk = max(topk)
        batch_size = target.size(0)
        _, pred = output.topk(maxk, 1, True, True)
        pred = pred.t()
        if useMultiLabelSoftMarginLoss:
            # for multi-label loss
            targetValue, targetIndices = target.topk(maxk, 1, True, True)
            correct = pred.eq(targetIndices.view(1, -1).expand_as(pred))
        else:
            # target holds class indices; no topk conversion is needed
            correct = pred.eq(target.view(1, -1).expand_as(pred))
        res = []
        for k in topk:
            correct_k = correct[:k].view(-1).float().sum(0)
            res.append(correct_k.mul_(100.0 / batch_size))
        return res
Correct call:
prec1.append(accuracy(output[j], target_j, topk=(1,),))
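For reference, a quick self-contained check of the single-label path, using made-up toy tensors (the values below are only for illustration):
>>> output = torch.tensor([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]])  # logits, shape [3, 2]
>>> target = torch.tensor([1, 0, 0])                             # class indices, shape [3]
>>> accuracy(output, target, topk=(1,))  # 2 of 3 predictions match -> 66.67%
[tensor(66.6667)]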