PyTorch - A Seemingly Innocuous 1-D Dimension Leading to Model Failure
This is a reflection on PyTorch: be careful with tensor shapes, even with the difference between shape [B, 1] and shape [B].
Have you encountered a problem like this when training a neural network: you are sure that your model is correct, and your training procedure is also correct, but you cannot get the expected result. Your model outputs seem blurred, and the loss curve shows no obvious descent. What is wrong?
Have you encountered a Python warning like this? `UserWarning: Using a target size (torch.Size([12])) that is different to the input size (torch.Size([12, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.`
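The warning comes from NumPy-style broadcasting: subtracting a `[B]` target from a `[B, 1]` prediction silently expands both to `[B, B]`, so every prediction is compared against every target instead of element-wise. A minimal sketch of the problem and the fix (the batch size 12 mirrors the warning above; the tensors here are hypothetical random data):

```python
import torch
import torch.nn as nn

B = 12
pred = torch.randn(B, 1)   # typical output of nn.Linear(hidden, 1)
target = torch.randn(B)    # typical shape of a label vector

# Broadcasting expands [B, 1] - [B] to [B, B]: a silent all-pairs difference.
diff = pred - target
print(diff.shape)  # torch.Size([12, 12])

# The loss averages over B*B wrong pairs, which is why training "runs"
# but the loss barely descends.
loss_fn = nn.MSELoss()

# The fix: make the shapes match before computing the loss.
loss_ok = loss_fn(pred, target.unsqueeze(1))   # both [B, 1]
# or equivalently:
loss_ok2 = loss_fn(pred.squeeze(1), target)    # both [B]
```

Either `unsqueeze`/`squeeze` call resolves the warning; the key point is that input and target must have identical shapes, not merely the same number of elements.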
There is a lesson behind this. To view this article, please download:
PyTorch_lesson_1D_dimension_leading_to_bad_loss.pdf
Author: lucainiaoge
Link: https://lucainiaoge.github.io.git/2021/03/21/PyTorch_lesson_1D_dimension_leading_to_bad_loss/
Copyright: This work is licensed under the Creative Commons Attribution - NonCommercial - ShareAlike 4.0 International License. Please cite the source when reposting!
