Articles under the tag "Study"

15. Dropout: Principle and Implementation in the Torch Source Code


NN.DROPOUT
CLASS torch.nn.Dropout(p=0.5, inplace=False)
Parameters:
p (float) – probability of an element to be zeroed. Default: 0.5
inplace (bool) – If set to True, will do this operation in-place. Default: False
Shape:
Input: (∗). Input can be of any shape
Output: (∗). Output is of the same shape as input
m = nn.Dropout(p=0.2)
input = torch.randn(20, 16)
output = m(input)
How do we tell whether the module is currently in Trai...
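The excerpt cuts off right before the training-mode check, so here is a minimal sketch (not the article's own code) of how nn.Dropout behaves depending on the module's training flag:

```python
import torch
import torch.nn as nn

# A minimal sketch (not the article's code): Dropout zeroes each element with
# probability p during training and rescales the survivors by 1/(1-p); in eval
# mode it is the identity. The module decides based on its own `training` flag.
m = nn.Dropout(p=0.2)
x = torch.randn(20, 16)

m.train()                               # training mode: elements randomly zeroed
y_train = m(x)
print(m.training)                       # True
print((y_train == 0).float().mean())    # roughly 0.2 of the entries are zero

m.eval()                                # evaluation mode: Dropout is a no-op
y_eval = m(x)
print(torch.equal(y_eval, x))           # True
```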

9. PyTorch's nn.Sequential and ModuleList Source Code


train
# Instantiate a model; calling .train(True) on it puts the model into training mode
def train(self: T, mode: bool = True) -> T:
    r"""Sets the module in training mode.

    This has any effect only on certain modules. See documentations of
    particular modules for details of their behaviors in training/evaluation
    mode, if they are affected, e.g. :class:`Dropout`, :class:`BatchNorm`,
    etc...
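As a hedged illustration of what the truncated docstring describes (the toy model below is not from the excerpt), .train() and .eval() set the training flag on a module and recurse into all of its children:

```python
import torch.nn as nn

# A minimal sketch with a toy model: .train(True) and .eval() set self.training
# on the module and on every child module, which is what makes Dropout and
# BatchNorm switch behaviour between training and evaluation.
model = nn.Sequential(nn.Linear(16, 16), nn.Dropout(p=0.5), nn.BatchNorm1d(16))

model.train()                                   # sets training=True everywhere
print([m.training for m in model.modules()])    # [True, True, True, True]

model.eval()                                    # equivalent to model.train(False)
print([m.training for m in model.modules()])    # [False, False, False, False]
```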

24. Hand-Writing and Verifying PyTorch 2D Convolution via Vector Inner Products


The 9-step process can be viewed as one matrix multiplied by another. Flatten the input of each step: for example, the first dark-blue region in the top-left corner becomes the row vector (3, 3, 2, 0, 0, 1, 3, 1, 2), and the Kernel flattens into the column vector $(0, 1, 2, 2, 2, 0, 0, 1, 2)^T$; their product, 12.0, is the top-left value of the final result. So the nine operations above can be viewed as multiplying a matrix with 9 rows (one flattened region per row) by the length-9 kernel column vector, then reshaping the product into the desired output shape.
Other Method: implement it as an inner product of length 25. The Kernel currently covers only a 3×3 region; if we pad the Kernel, e.g. in the first figure in the top-left corner the Kernel has only 9 numbers, we can imagine the light-blue area being filled with zeros. If at every step the light-blue part of the Kernel is padded with zeros, the original problem turns into multiplying a length-25 row vector by a length-25 column vector.
Coding: we want to collect every region_vector into region_matrix, then multiply region_matrix by Kernel_mat...
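The excerpt is cut off before the region_matrix code, so below is a hedged sketch of the same idea (matmul_conv2d is a made-up name, single channel, stride 1, no padding assumed), checked against F.conv2d:

```python
import torch
import torch.nn.functional as F

# Sketch of the idea above: flatten every kernel-sized input region into one row
# of region_matrix, flatten the kernel into a column vector, multiply, and
# reshape the product into the output feature map.
def matmul_conv2d(inp, kernel):
    h, w = inp.shape
    kh, kw = kernel.shape
    out_h, out_w = h - kh + 1, w - kw + 1
    region_matrix = torch.zeros(out_h * out_w, kh * kw)
    row = 0
    for i in range(out_h):
        for j in range(out_w):
            region_matrix[row] = inp[i:i + kh, j:j + kw].reshape(-1)
            row += 1
    kernel_vector = kernel.reshape(-1, 1)      # (kh*kw) x 1 column vector
    out = region_matrix @ kernel_vector        # (out_h*out_w) x 1
    return out.reshape(out_h, out_w)

inp = torch.randn(5, 5)
kernel = torch.randn(3, 3)
mine = matmul_conv2d(inp, kernel)
ref = F.conv2d(inp.reshape(1, 1, 5, 5), kernel.reshape(1, 1, 3, 3)).squeeze()
print(torch.allclose(mine, ref, atol=1e-6))    # True
```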

23. Hand-Writing and Verifying PyTorch 2D Convolution via Sliding Multiplication


The blue area is the input_feature (5×5); the small dark-blue numbers are the kernel (kernel_size 3×3); the green area is the out_feature (3×3); stride = 1, padding = 0, channel = 1. With padding = 1 and stride = 2: the bottom shows input_channels = 2, the green at the top is out_channels = 3, so kernels = 2×3 = 6 (second-to-last row).
input = input_feature_map  # convolution input feature map
kernel = conv_layer.weight.data  # convolution kernel
input = torch.randn(5, 5)  # convolution input feature map
kernel = torch.randn(3, 3)  # convolution kernel
bias = torch.randn(1)  # convolution bias; by default the number of output channels = 1
# Func1: use raw matrix operations...
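The excerpt's Func1 is truncated, so here is a hedged sketch of the sliding-multiplication idea with stride and padding (sliding_conv2d is a made-up name, single channel assumed), checked against F.conv2d:

```python
import torch
import torch.nn.functional as F

# Sketch of the sliding-window computation: pad the input, slide the kernel with
# the given stride, take the element-wise product with each window, and sum.
def sliding_conv2d(inp, kernel, bias=torch.zeros(1), stride=1, padding=0):
    if padding > 0:
        inp = F.pad(inp, (padding, padding, padding, padding))  # pad H and W
    h, w = inp.shape
    kh, kw = kernel.shape
    out_h = (h - kh) // stride + 1
    out_w = (w - kw) // stride + 1
    out = torch.zeros(out_h, out_w)
    for i in range(out_h):
        for j in range(out_w):
            region = inp[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = (region * kernel).sum()
    return out + bias                          # bias broadcasts over the output

inp = torch.randn(5, 5)
kernel = torch.randn(3, 3)
bias = torch.randn(1)
mine = sliding_conv2d(inp, kernel, bias, stride=2, padding=1)
ref = F.conv2d(inp.reshape(1, 1, 5, 5), kernel.reshape(1, 1, 3, 3),
               bias=bias, stride=2, padding=1).squeeze()
print(torch.allclose(mine, ref, atol=1e-6))    # True
```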
