zamba.pytorch.transforms¶
Attributes¶
imagenet_normalization_values = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) module-attribute¶
Classes¶
ConvertTCHWtoCTHW¶
Bases: torch.nn.Module
Convert tensor from (T, C, H, W) to (C, T, H, W)
Source code in zamba/pytorch/transforms.py, lines 23–27
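The collapsed source amounts to a single axis permutation; a minimal equivalent sketch:

```python
import torch

class ConvertTCHWtoCTHW(torch.nn.Module):
    """Convert tensor from (T, C, H, W) to (C, T, H, W)."""

    def forward(self, vid: torch.Tensor) -> torch.Tensor:
        # Swap the time and channel axes; height and width keep their places.
        return vid.permute(1, 0, 2, 3)
```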
ConvertTHWCtoCTHW¶
Bases: torch.nn.Module
Convert tensor from (T, H, W, C) to (C, T, H, W)
Source code in zamba/pytorch/transforms.py, lines 9–13
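Likewise a single permutation, this time moving channels from last to first; a minimal equivalent sketch:

```python
import torch

class ConvertTHWCtoCTHW(torch.nn.Module):
    """Convert tensor from (T, H, W, C) to (C, T, H, W)."""

    def forward(self, vid: torch.Tensor) -> torch.Tensor:
        # Channels move to the front; time, height, width follow in order.
        return vid.permute(3, 0, 1, 2)
```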
ConvertTHWCtoTCHW¶
Bases: torch.nn.Module
Convert tensor from (T, H, W, C) to (T, C, H, W)
Source code in zamba/pytorch/transforms.py, lines 16–20
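And the channels-last to channels-first variant that keeps the time axis leading; a minimal equivalent sketch:

```python
import torch

class ConvertTHWCtoTCHW(torch.nn.Module):
    """Convert tensor from (T, H, W, C) to (T, C, H, W)."""

    def forward(self, vid: torch.Tensor) -> torch.Tensor:
        # Per-frame channels-last -> channels-first; the time axis stays first.
        return vid.permute(0, 3, 1, 2)
```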
PackSlowFastPathways¶
Bases: torch.nn.Module
Creates the slow and fast pathway inputs for the SlowFast model.
Source code in zamba/pytorch/transforms.py, lines 81–97
Attributes¶
alpha = alpha instance-attribute¶
Functions¶
__init__(alpha: int = 4)¶
Source code in zamba/pytorch/transforms.py, lines 84–86
forward(frames: torch.Tensor)¶
Source code in zamba/pytorch/transforms.py, lines 88–97
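The standard SlowFast packing keeps every frame for the fast pathway and subsamples the time axis by a factor of alpha for the slow pathway. A hedged sketch modeled on PyTorchVideo's PackPathway example, assuming channels-first (C, T, H, W) input as produced by ConvertTCHWtoCTHW:

```python
import torch

class PackSlowFastPathways(torch.nn.Module):
    """Creates the slow and fast pathway inputs for the SlowFast model (sketch)."""

    def __init__(self, alpha: int = 4):
        super().__init__()
        self.alpha = alpha

    def forward(self, frames: torch.Tensor):
        # Fast pathway: the full frame sequence, untouched.
        fast_pathway = frames
        # Slow pathway: keep T // alpha evenly spaced frames (time is dim 1 for CTHW).
        slow_pathway = torch.index_select(
            frames,
            1,
            torch.linspace(
                0, frames.shape[1] - 1, frames.shape[1] // self.alpha
            ).long(),
        )
        return [slow_pathway, fast_pathway]
```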
PadDimensions¶
Bases: torch.nn.Module
Pads a tensor to ensure a fixed output dimension for a given axis.
Attributes:

Name | Type | Description
---|---|---
dimension_sizes | Tuple[Optional[int]] | A tuple of ints or Nones, the same length as the number of dimensions in the input tensor. If int, pad that dimension to at least that size. If None, do not pad.
Source code in zamba/pytorch/transforms.py, lines 40–78
Attributes¶
dimension_sizes = dimension_sizes instance-attribute¶
Functions¶
__init__(dimension_sizes: Tuple[Optional[int]])¶
Source code in zamba/pytorch/transforms.py, lines 48–50
compute_left_and_right_pad(original_size: int, padded_size: int) -> Tuple[int, int] staticmethod¶
Computes left and right pad size.
Parameters:

Name | Type | Description | Default
---|---|---|---
original_size | int | The original tensor size | required
padded_size | int | The desired tensor size | required

Returns:

Type | Description
---|---
Tuple[int, int] | Pad sizes for the left and right. For an odd total pad, right = left + 1.
Source code in zamba/pytorch/transforms.py, lines 52–67
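A quick worked example of the split described above: padding a size-13 axis out to 16 gives a total pad of 3, so left = 1 and right = 2 (right = left + 1 for odd totals):

```python
left, right = PadDimensions.compute_left_and_right_pad(13, 16)
assert (left, right) == (1, 2)  # odd total pad of 3: the extra unit goes right
```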
forward(vid: torch.Tensor) -> torch.Tensor¶
Source code in zamba/pytorch/transforms.py, lines 69–78
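Putting the pieces together, a hedged sketch of how the module might be implemented on top of torch.nn.functional.pad (not the verbatim source, which is collapsed above):

```python
from typing import Optional, Tuple

import torch
import torch.nn.functional as F

class PadDimensions(torch.nn.Module):
    """Pads a tensor to ensure a fixed output size per axis (sketch)."""

    def __init__(self, dimension_sizes: Tuple[Optional[int], ...]):
        super().__init__()
        self.dimension_sizes = dimension_sizes

    @staticmethod
    def compute_left_and_right_pad(original_size: int, padded_size: int) -> Tuple[int, int]:
        # Split the total pad evenly; for an odd total, the right side gets the extra unit.
        total = max(padded_size - original_size, 0)
        left = total // 2
        return left, total - left

    def forward(self, vid: torch.Tensor) -> torch.Tensor:
        # One (left, right) pair per dimension; None means "leave this axis alone".
        pairs = [
            (0, 0) if size is None else self.compute_left_and_right_pad(dim, size)
            for dim, size in zip(vid.shape, self.dimension_sizes)
        ]
        # F.pad expects pads flattened starting from the *last* dimension.
        flat = [p for pair in reversed(pairs) for p in pair]
        return F.pad(vid, flat)
```

For example, PadDimensions((None, None, None, 130)) would pad only the last axis up to at least 130 and leave the others untouched.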
Uint8ToFloat¶
Bases: torch.nn.Module
Source code in zamba/pytorch/transforms.py, lines 30–32
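No docstring is rendered here, but the name implies a uint8-to-float conversion with rescaling into [0, 1]; a hedged sketch:

```python
import torch

class Uint8ToFloat(torch.nn.Module):
    def forward(self, tensor: torch.Tensor) -> torch.Tensor:
        # Rescale 0-255 integer pixel values to floats in [0.0, 1.0].
        return tensor / 255.0
```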
VideotoImg¶
Bases: torch.nn.Module
Source code in zamba/pytorch/transforms.py, lines 35–37
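Also undocumented; the name suggests collapsing a single-frame video into an image by dropping the time axis. A hedged sketch of that reading:

```python
import torch

class VideotoImg(torch.nn.Module):
    def forward(self, vid: torch.Tensor) -> torch.Tensor:
        # Drop the leading time axis of a single-frame video: (1, C, H, W) -> (C, H, W).
        return vid.squeeze(0)
```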
Functions¶
slowfast_transforms()¶
Source code in zamba/pytorch/transforms.py, lines 121–131
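The source is collapsed above; an illustrative composition built from the transforms on this page might look like the following. The ordering and the absence of resize/crop steps are assumptions, not the library's actual pipeline:

```python
from torchvision.transforms import Compose

def slowfast_transforms():
    return Compose(
        [
            ConvertTHWCtoTCHW(),      # video decoders typically emit (T, H, W, C)
            Uint8ToFloat(),           # 0-255 ints -> [0, 1] floats
            ConvertTCHWtoCTHW(),      # SlowFast expects channels-first video
            PackSlowFastPathways(),   # -> [slow, fast] pathway list
        ]
    )
```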
zamba_image_model_transforms(single_frame = False, normalization_values = imagenet_normalization_values, channels_first = False)¶
Source code in zamba/pytorch/transforms.py, lines 103–118
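Again illustrative rather than verbatim: given the signature, a typical image-model pipeline would rescale frames to floats, normalize with the ImageNet statistics above, and honor the single_frame and channels_first flags roughly as sketched below:

```python
from torchvision.transforms import Compose, Normalize

def zamba_image_model_transforms(
    single_frame=False,
    normalization_values=imagenet_normalization_values,
    channels_first=False,
):
    transforms = [
        ConvertTHWCtoTCHW(),                # frames arrive as (T, H, W, C)
        Uint8ToFloat(),                     # scale pixels into [0, 1]
        Normalize(**normalization_values),  # per-channel ImageNet mean/std
    ]
    if single_frame:
        transforms.append(VideotoImg())     # (1, C, H, W) -> (C, H, W)
    elif channels_first:
        transforms.append(ConvertTCHWtoCTHW())  # (T, C, H, W) -> (C, T, H, W)
    return Compose(transforms)
```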