PyTorch is an open-source machine learning library developed by Facebook's AI Research lab (FAIR).
- This tutorial gives an overview of PyTorch for deep learning model building, training, and evaluation, with practical examples as well as a description of possible project environments.
## Options for utilising GPU and environments
When diving into PyTorch, you have multiple options in terms of development environments and GPU resources. Two of the most popular choices are Visual Studio Code (VSCode) and Google Colab:
### Visual Studio Code (VSCode)
Description: VSCode is a free, open-source code editor developed by Microsoft. It supports a variety of programming languages and has a rich ecosystem of extensions, including support for Python and PyTorch.
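Whichever environment you choose, a common first step is to check whether a CUDA-capable GPU is visible to PyTorch and select a device accordingly. A minimal sketch (the variable names are illustrative):

```python
import torch

# Pick the GPU if one is visible to PyTorch, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Using device: {device}")

# Tensors (and models) must be moved to the chosen device explicitly.
x = torch.randn(2, 3).to(device)
```

In Google Colab this prints `cuda` once a GPU runtime is enabled; in a default local VSCode setup without a GPU it falls back to `cpu`.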
```python
torch.randn(2, 3)  # Returns a 2x3 matrix sampled from the standard normal distribution
```
- Fills the input Tensor with values according to the method described in Understanding the difficulty of training deep feedforward neural networks - Glorot, X. & Bengio, Y. (2010), using a uniform distribution. The resulting tensor will have values sampled from $\mathcal{U}(-a, a)$ where,
$$
a = \text{gain} \times \sqrt{\frac{6}{\text{fan\_in} + \text{fan\_out}}}
$$
- Also known as Glorot initialization.
```python
nn.init.xavier_uniform_(a, gain=nn.init.calculate_gain('relu'))  # Initializes a with the Xavier uniform method
```
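As a quick sanity check, you can recompute the bound $a$ from the formula above and confirm that every value produced by `xavier_uniform_` falls inside $[-a, a]$ (the tensor shape here is chosen arbitrarily):

```python
import math

import torch
import torch.nn as nn

# For a 2D tensor of shape (out_features, in_features): fan_in = 128, fan_out = 256.
w = torch.empty(256, 128)
gain = nn.init.calculate_gain('relu')
nn.init.xavier_uniform_(w, gain=gain)

# Recompute the bound by hand: a = gain * sqrt(6 / (fan_in + fan_out))
bound = gain * math.sqrt(6.0 / (128 + 256))
print(w.min().item(), w.max().item(), bound)
```

All sampled values should lie within the printed bound (up to float32 rounding).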
- Fills the input Tensor with values according to the method described in Understanding the difficulty of training deep feedforward neural networks - Glorot, X. & Bengio, Y. (2010), using a normal distribution. The resulting tensor will have values sampled from $\mathcal{N}(0, \text{std}^2)$ where,
$$
\text{std} = \text{gain} \times \sqrt{\frac{2}{\text{fan\_in} + \text{fan\_out}}}
$$
```python
nn.init.xavier_normal_(a)  # Initializes a with the Xavier normal method
```
- Fills the input Tensor with values according to the method described in Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification - He, K. et al. (2015), using a uniform distribution. The resulting tensor will have values sampled from $\mathcal{U}(-\text{bound}, \text{bound})$ where,
$$
\text{bound} = \text{gain} \times \sqrt{\frac{3}{\text{fan\_mode}}}
$$
```python
nn.init.kaiming_uniform_(a, mode='fan_in', nonlinearity='relu')  # Initializes a with the Kaiming uniform method
```
- Fills the input Tensor with values according to the method described in Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification - He, K. et al. (2015), using a normal distribution. The resulting tensor will have values sampled from $\mathcal{N}(0, \text{std}^2)$ where,
$$
\text{std} = \frac{\text{gain}}{\sqrt{\text{fan\_mode}}}
$$
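A usage sketch in the same pattern as the other initializers (the tensor shape, `mode`, and `nonlinearity` arguments are illustrative):

```python
import torch
import torch.nn as nn

a = torch.empty(3, 5)
nn.init.kaiming_normal_(a, mode='fan_out', nonlinearity='relu')  # Initializes a with the Kaiming normal method
print(a)
```

`mode='fan_out'` preserves the magnitude of the variance in the backward pass, whereas the default `'fan_in'` preserves it in the forward pass.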