mmselfsup.apis

mmselfsup.apis.init_random_seed(seed=None, device='cuda')[source]

Initialize random seed.

If the seed is not set, the seed will be automatically randomized and then broadcast to all processes to prevent some potential bugs.

Parameters
  • seed (int, optional) – The seed. Defaults to None.

  • device (str) – The device where the seed will be put on. Defaults to ‘cuda’.

Returns

Seed to be used.

Return type

int

mmselfsup.apis.set_random_seed(seed, deterministic=False)[source]

Set random seed.

Parameters
  • seed (int) – Seed to be used.

  • deterministic (bool) – Whether to set the deterministic option for CUDNN backend, i.e., set torch.backends.cudnn.deterministic to True and torch.backends.cudnn.benchmark to False. Defaults to False.
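
A minimal usage sketch of the two seeding helpers above (the deterministic flag is optional):

>>> from mmselfsup.apis import init_random_seed, set_random_seed
>>> seed = init_random_seed(None, device='cuda')  # broadcast one shared seed to all ranks
>>> set_random_seed(seed, deterministic=True)     # seed python/numpy/torch and fix CUDNN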

mmselfsup.core

hooks

class mmselfsup.core.hooks.DeepClusterHook(extractor, clustering, unif_sampling, reweight, reweight_pow, init_memory=False, initial=True, interval=1, dist_mode=True, data_loaders=None)[source]

Hook for DeepCluster.

This hook includes the global clustering process in DC.

Parameters
  • extractor (dict) – Config dict for feature extraction.

  • clustering (dict) – Config dict that specifies the clustering algorithm.

  • unif_sampling (bool) – Whether to apply uniform sampling.

  • reweight (bool) – Whether to apply loss re-weighting.

  • reweight_pow (float) – The power of re-weighting.

  • init_memory (bool) – Whether to initialize memory banks used in ODC. Defaults to False.

  • initial (bool) – Whether to call the hook initially. Defaults to True.

  • interval (int) – Frequency of epochs to call the hook. Defaults to 1.

  • dist_mode (bool) – Use distributed training or not. Defaults to True.

  • data_loaders (DataLoader) – A PyTorch dataloader. Defaults to None.
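
An illustrative custom_hooks entry for this hook; the field values (extractor settings, clustering config) are placeholders, not verified defaults:

>>> custom_hooks = [
>>>     dict(
>>>         type='DeepClusterHook',
>>>         extractor=dict(samples_per_gpu=128, workers_per_gpu=8),
>>>         clustering=dict(type='Kmeans', k=10000, pca_dim=256),
>>>         unif_sampling=True,
>>>         reweight=False,
>>>         reweight_pow=0.5,
>>>         interval=1)]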

class mmselfsup.core.hooks.DenseCLHook(start_iters=1000, **kwargs)[source]

Hook for DenseCL.

This hook includes loss_lambda warmup in DenseCL. Borrowed from the authors’ code: https://github.com/WXinlong/DenseCL.

Parameters

start_iters (int, optional) – The number of warmup iterations during which loss_lambda is set to 0. Defaults to 1000.
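
A hedged example of registering this hook in a config, using the default warmup length:

>>> custom_hooks = [dict(type='DenseCLHook', start_iters=1000)]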

class mmselfsup.core.hooks.DistOptimizerHook(update_interval=1, grad_clip=None, coalesce=True, bucket_size_mb=-1, frozen_layers_cfg={})[source]

Optimizer hook for distributed training.

This hook can accumulate gradients every n intervals and freeze some layers for some iters at the beginning.

Parameters
  • update_interval (int, optional) – The update interval of the weights, set > 1 to accumulate the grad. Defaults to 1.

  • grad_clip (dict, optional) – Dict to config the value of grad clip. E.g., grad_clip = dict(max_norm=10). Defaults to None.

  • coalesce (bool, optional) – Whether allreduce parameters as a whole. Defaults to True.

  • bucket_size_mb (int, optional) – Size of bucket, the unit is MB. Defaults to -1.

  • frozen_layers_cfg (dict, optional) – Dict to config frozen layers. The key-value pair is layer name and its frozen iters. If frozen, the layer gradient would be set to None. Defaults to dict().
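
An illustrative optimizer_config for gradient accumulation with clipping; the layer name in frozen_layers_cfg is hypothetical:

>>> optimizer_config = dict(
>>>     type='DistOptimizerHook',
>>>     update_interval=4,                      # accumulate gradients over 4 iterations
>>>     grad_clip=dict(max_norm=10),
>>>     frozen_layers_cfg=dict(backbone=5000))  # keep 'backbone' gradients at None for the first 5000 iters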

class mmselfsup.core.hooks.GradAccumFp16OptimizerHook(update_interval=1, frozen_layers_cfg={}, **kwargs)[source]

Fp16 optimizer hook (using PyTorch’s implementation).

This hook can accumulate gradients every n intervals and freeze some layers for some iters at the beginning. If you are using PyTorch >= 1.6, torch.cuda.amp is used as the backend, to take care of the optimization procedure.

Parameters
  • update_interval (int, optional) – The update interval of the weights, set > 1 to accumulate the grad. Defaults to 1.

  • frozen_layers_cfg (dict, optional) – Dict to config frozen layers. The key-value pair is layer name and its frozen iters. If frozen, the layer gradient would be set to None. Defaults to dict().

after_train_iter(runner)[source]

Backward optimization steps for Mixed Precision Training. For dynamic loss scaling, please refer to https://pytorch.org/docs/stable/amp.html#torch.cuda.amp.GradScaler.

  1. Scale the loss by a scale factor.

  2. Backward the loss to obtain the gradients.

  3. Unscale the optimizer’s gradient tensors.

  4. Call optimizer.step() and update scale factor.

  5. Save loss_scaler state_dict for resume purpose.
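
The five steps above follow the standard torch.cuda.amp pattern; a minimal standalone sketch (not the hook's actual code, requires a CUDA device):

>>> import torch
>>> model = torch.nn.Linear(4, 4).cuda()
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
>>> scaler = torch.cuda.amp.GradScaler()
>>> with torch.cuda.amp.autocast():
>>>     loss = model(torch.randn(2, 4).cuda()).sum()
>>> scaler.scale(loss).backward()   # 1-2. scale the loss, then backward
>>> scaler.unscale_(optimizer)      # 3. unscale the optimizer's gradient tensors
>>> scaler.step(optimizer)          # 4. optimizer.step() (skipped on inf/nan gradients)
>>> scaler.update()                 #    and update the scale factor
>>> state = scaler.state_dict()     # 5. loss_scaler state_dict for resuming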

class mmselfsup.core.hooks.MomentumUpdateHook(end_momentum=1.0, update_interval=1, **kwargs)[source]

Hook for updating momentum parameter, used by BYOL, MoCoV3, etc.

This hook includes momentum adjustment following:

\[m = 1 - (1 - m_0) \cdot \frac{\cos(\pi k / K) + 1}{2}\]

where \(k\) is the current step, \(K\) is the total steps.

Parameters
  • end_momentum (float) – The final momentum coefficient for the target network. Defaults to 1.

  • update_interval (int, optional) – The momentum update interval of the weights. Defaults to 1.
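
A standalone sketch of the cosine schedule above (not the hook itself); the base momentum 0.996 is only an example value, and with end_momentum=1.0 the function reduces to the formula above:

>>> import math
>>> def momentum_at(k, K, base_momentum=0.996, end_momentum=1.0):
>>>     return end_momentum - (end_momentum - base_momentum) * (
>>>         math.cos(math.pi * k / K) + 1) / 2
>>> momentum_at(0, 100)    # 0.996 at the first step
>>> momentum_at(100, 100)  # 1.0 at the last step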

class mmselfsup.core.hooks.ODCHook(centroids_update_interval, deal_with_small_clusters_interval, evaluate_interval, reweight, reweight_pow, dist_mode=True)[source]

Hook for ODC.

This hook includes the online clustering process in ODC.

Parameters
  • centroids_update_interval (int) – Frequency of iterations to update centroids.

  • deal_with_small_clusters_interval (int) – Frequency of iterations to deal with small clusters.

  • evaluate_interval (int) – Frequency of iterations to evaluate clusters.

  • reweight (bool) – Whether to perform loss re-weighting.

  • reweight_pow (float) – The power of re-weighting.

  • dist_mode (bool) – Use distributed training or not. Defaults to True.

class mmselfsup.core.hooks.SimSiamHook(fix_pred_lr, lr, adjust_by_epoch=True, **kwargs)[source]

Hook for SimSiam.

This hook is for SimSiam to fix learning rate of predictor.

Parameters
  • fix_pred_lr (bool) – whether to fix the lr of predictor or not.

  • lr (float) – the value of fixed lr.

  • adjust_by_epoch (bool, optional) – whether to set lr by epoch or iter. Defaults to True.

before_train_epoch(runner)[source]

Fix the learning rate of the predictor.

class mmselfsup.core.hooks.StepFixCosineAnnealingLrUpdaterHook(min_lr=None, min_lr_ratio=None, **kwargs)[source]
class mmselfsup.core.hooks.SwAVHook(batch_size, epoch_queue_starts=15, crops_for_assign=[0, 1], feat_dim=128, queue_length=0, interval=1, **kwargs)[source]

Hook for SwAV.

This hook builds the queue in SwAV according to epoch_queue_starts. The queue will be saved in runner.work_dir or loaded at start epoch if the path folder has queues saved before.

Parameters
  • batch_size (int) – the batch size per GPU for computing.

  • epoch_queue_starts (int, optional) – from this epoch, starts to use the queue. Defaults to 15.

  • crops_for_assign (list[int], optional) – list of crops id used for computing assignments. Defaults to [0, 1].

  • feat_dim (int, optional) – feature dimension of output vector. Defaults to 128.

  • queue_length (int, optional) – length of the queue (0 for no queue). Defaults to 0.

  • interval (int, optional) – the interval to save the queue. Defaults to 1.
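
An illustrative custom_hooks entry mirroring the parameters above; batch_size should match the per-GPU batch size of the training data loader, and queue_length is an example value:

>>> custom_hooks = [
>>>     dict(
>>>         type='SwAVHook',
>>>         batch_size=32,
>>>         epoch_queue_starts=15,
>>>         crops_for_assign=[0, 1],
>>>         feat_dim=128,
>>>         queue_length=3840)]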

optimizer

class mmselfsup.core.optimizer.DefaultOptimizerConstructor(optimizer_cfg, paramwise_cfg=None)[source]

Rewrote default constructor for optimizers. By default, each parameter shares the same optimizer settings, and we provide an argument paramwise_cfg to specify parameter-wise settings.

Parameters
  • model (nn.Module) – The model with parameters to be optimized.

  • optimizer_cfg (dict) –

    The config dict of the optimizer. Positional fields are

    • type: class name of the optimizer.

    Optional fields are
    • any arguments of the corresponding optimizer type, e.g., lr, weight_decay, momentum, etc.

  • paramwise_cfg (dict, optional) – Parameter-wise options. Defaults to None.

Example 1:
>>> model = torch.nn.modules.Conv1d(1, 1, 1)
>>> optimizer_cfg = dict(type='SGD', lr=0.01, momentum=0.9,
>>>                      weight_decay=0.0001)
>>> paramwise_cfg = {'bias': dict(weight_decay=0., lars_exclude=True)}
>>> optim_builder = DefaultOptimizerConstructor(
>>>     optimizer_cfg, paramwise_cfg)
>>> optimizer = optim_builder(model)
class mmselfsup.core.optimizer.LARS(params, lr=<required parameter>, momentum=0, weight_decay=0, dampening=0, eta=0.001, nesterov=False, eps=1e-08)[source]

Implements layer-wise adaptive rate scaling for SGD.

Parameters
  • params (iterable) – Iterable of parameters to optimize or dicts defining parameter groups.

  • lr (float) – Base learning rate.

  • momentum (float, optional) – Momentum factor (m in the paper). Defaults to 0.

  • weight_decay (float, optional) – Weight decay, i.e., L2 penalty (beta in the paper). Defaults to 0.

  • dampening (float, optional) – Dampening for momentum. Defaults to 0.

  • eta (float, optional) – LARS coefficient. Defaults to 0.001.

  • nesterov (bool, optional) – Enables Nesterov momentum. Defaults to False.

  • eps (float, optional) – A small number to avoid dividing by zero. Defaults to 1e-8.

Based on Algorithm 1 of the paper by You, Gitman, and Ginsburg: Large Batch Training of Convolutional Networks. https://arxiv.org/abs/1708.03888

Example

>>> optimizer = LARS(model.parameters(), lr=0.1, momentum=0.9,
>>>                  weight_decay=1e-4, eta=1e-3)
>>> optimizer.zero_grad()
>>> loss_fn(model(input), target).backward()
>>> optimizer.step()
step(closure=None)[source]

Performs a single optimization step.

Parameters

closure (callable, optional) – A closure that reevaluates the model and returns the loss.

class mmselfsup.core.optimizer.TransformerFinetuneConstructor(optimizer_cfg, paramwise_cfg=None)[source]

Rewrote default constructor for optimizers.

By default, each parameter shares the same optimizer settings, and we provide an argument paramwise_cfg to specify parameter-wise settings. In addition, we provide two optional parameters, model_type and layer_decay, to set the commonly used layer-wise learning rate decay schedule. Currently, we only support the layer-wise learning rate schedule for swin and vit.

Parameters
  • optimizer_cfg (dict) –

    The config dict of the optimizer. Positional fields are

    • type: class name of the optimizer.

    Optional fields are
    • any arguments of the corresponding optimizer type, e.g., lr, weight_decay, momentum, model_type, layer_decay, etc.

  • paramwise_cfg (dict, optional) – Parameter-wise options. Defaults to None.

Example 1:
>>> model = torch.nn.modules.Conv1d(1, 1, 1)
>>> optimizer_cfg = dict(type='SGD', lr=0.01, momentum=0.9,
>>>                      weight_decay=0.0001, model_type='vit')
>>> paramwise_cfg = {'bias': dict(weight_decay=0., lars_exclude=True)}
>>> optim_builder = TransformerFinetuneConstructor(
>>>     optimizer_cfg, paramwise_cfg)
>>> optimizer = optim_builder(model)
mmselfsup.core.optimizer.build_optimizer(model, optimizer_cfg)[source]

Build optimizer from configs.

Parameters
  • model (nn.Module) – The model with parameters to be optimized.

  • optimizer_cfg (dict) –

    The config dict of the optimizer. Positional fields are:

    • type: class name of the optimizer.

    • lr: base learning rate.

    Optional fields are:
    • any arguments of the corresponding optimizer type, e.g., weight_decay, momentum, etc.

    • paramwise_options: a dict with regular expression as keys to match parameter names and a dict containing options as values. Options include 6 fields: lr, lr_mult, momentum, momentum_mult, weight_decay, weight_decay_mult.

Returns

The initialized optimizer.

Return type

torch.optim.Optimizer

Example

>>> model = torch.nn.modules.Conv1d(1, 1, 1)
>>> paramwise_options = {
>>>     '(bn|gn)(\d+)?.(weight|bias)': dict(weight_decay_mult=0.1),
>>>     '\Ahead.': dict(lr_mult=10, momentum=0)}
>>> optimizer_cfg = dict(type='SGD', lr=0.01, momentum=0.9,
>>>                      weight_decay=0.0001,
>>>                      paramwise_options=paramwise_options)
>>> optimizer = build_optimizer(model, optimizer_cfg)

mmselfsup.datasets

data_sources

class mmselfsup.datasets.data_sources.BaseDataSource(data_prefix, classes=None, ann_file=None, test_mode=False, color_type='color', channel_order='rgb', file_client_args={'backend': 'disk'})[source]

Datasource base class to load dataset information.

Parameters
  • data_prefix (str) – the prefix of data path.

  • classes (str | Sequence[str], optional) – Specify classes to load.

  • ann_file (str | None) – the annotation file. When ann_file is str, the subclass is expected to read from the ann_file. When ann_file is None, the subclass is expected to read according to data_prefix.

  • test_mode (bool) – in train mode or test mode. Defaults to False.

  • color_type (str) – The flag argument for mmcv.imfrombytes(). Defaults to color.

  • channel_order (str) – The channel order of images when loaded. Defaults to rgb.

  • file_client_args (dict) – Arguments to instantiate a FileClient. See mmcv.fileio.FileClient for details. Defaults to dict(backend=’disk’).

get_cat_ids(idx)[source]

Get category id by index.

Parameters

idx (int) – Index of data.

Returns

Image category of specified index.

Return type

int

classmethod get_classes(classes=None)[source]

Get class names of current dataset.

Parameters

classes (Sequence[str] | str | None) – If classes is None, use default CLASSES defined by builtin dataset. If classes is a string, take it as a file name. The file contains the name of classes where each line contains one class name. If classes is a tuple or list, override the CLASSES defined by the dataset.

Returns

Names of categories of the dataset.

Return type

tuple[str] or list[str]

get_gt_labels()[source]

Get all ground-truth labels (categories).

Returns

categories for all images.

Return type

list[int]

get_img(idx)[source]

Get image by index.

Parameters

idx (int) – Index of data.

Returns

PIL Image format.

Return type

Image

class mmselfsup.datasets.data_sources.CIFAR10(data_prefix, classes=None, ann_file=None, test_mode=False, color_type='color', channel_order='rgb', file_client_args={'backend': 'disk'})[source]

CIFAR10 Dataset.

This implementation is modified from https://github.com/pytorch/vision/blob/master/torchvision/datasets/cifar.py

class mmselfsup.datasets.data_sources.CIFAR100(data_prefix, classes=None, ann_file=None, test_mode=False, color_type='color', channel_order='rgb', file_client_args={'backend': 'disk'})[source]

CIFAR100 Dataset.

class mmselfsup.datasets.data_sources.ImageList(data_prefix, classes=None, ann_file=None, test_mode=False, color_type='color', channel_order='rgb', file_client_args={'backend': 'disk'})[source]

The implementation for loading any image list file.

The ImageList can load an annotation file or a list of files and merge all data records into one list. If the data is unlabeled, the gt_label will be set to -1.

class mmselfsup.datasets.data_sources.ImageNet(data_prefix, classes=None, ann_file=None, test_mode=False, color_type='color', channel_order='rgb', file_client_args={'backend': 'disk'})[source]

ImageNet Dataset.

This implementation is modified from https://github.com/pytorch/vision/blob/master/torchvision/datasets/imagenet.py

class mmselfsup.datasets.data_sources.ImageNet21k(data_prefix, classes=None, ann_file=None, multi_label=False, recursion_subdir=False, test_mode=False)[source]

ImageNet21k Dataset. Since the ImageNet21k dataset is extremely large, containing 21k+ classes and 1.4B files, this class improves on the ImageNet class in the following points, in order to save memory usage and loading time:

  • Delete the samples attribute.

  • Use __slots__ to create a Data_item class to replace dict.

  • Move setting the info dict from the function load_annotations to the function prepare_data.

  • Use int instead of np.array(…, np.int64).

Parameters
  • data_prefix (str) – the prefix of data path

  • ann_file (str | None) – the annotation file. When ann_file is str, the subclass is expected to read from the ann_file. When ann_file is None, the subclass is expected to read according to data_prefix

  • test_mode (bool) – in train mode or test mode

  • multi_label (bool) – use multi label or not.

  • recursion_subdir (bool) – whether to load images from sub-directories that meet the conditions under each category directory.

load_annotations()[source]

Load dataset annotations.

pipelines

class mmselfsup.datasets.pipelines.BEiTMaskGenerator(input_size: int, num_masking_patches: int, min_num_patches: int = 4, max_num_patches: Optional[int] = None, min_aspect: float = 0.3, max_aspect: Optional[float] = None)[source]

Generate mask for image.

This module is borrowed from https://github.com/microsoft/unilm/tree/master/beit

Parameters
  • input_size (int) – The size of input image.

  • num_masking_patches (int) – The number of patches to be masked.

  • min_num_patches (int) – The minimum number of patches to be masked in the process of generating mask. Defaults to 4.

  • max_num_patches (int, optional) – The maximum number of patches to be masked in the process of generating mask. Defaults to None.

  • min_aspect (float, optional) – The minimum aspect ratio of mask blocks. Defaults to 0.3.

  • max_aspect (float, optional) – The maximum aspect ratio of mask blocks. Defaults to None.

class mmselfsup.datasets.pipelines.GaussianBlur(sigma_min, sigma_max, p=0.5)[source]

GaussianBlur augmentation referred to in SimCLR: https://arxiv.org/abs/2002.05709.

Parameters
  • sigma_min (float) – The minimum parameter of Gaussian kernel std.

  • sigma_max (float) – The maximum parameter of Gaussian kernel std.

  • p (float, optional) – Probability. Defaults to 0.5.

class mmselfsup.datasets.pipelines.Lighting(alphastd=0.1)[source]

Lighting noise (AlexNet-style, PCA-based noise).

Parameters

alphastd (float, optional) – The parameter for Lighting. Defaults to 0.1.

class mmselfsup.datasets.pipelines.RandomAppliedTrans(transforms, p=0.5)[source]

Randomly applied transformations.

Parameters
  • transforms (list[dict]) – List of transformations in dictionaries.

  • p (float, optional) – Probability. Defaults to 0.5.

class mmselfsup.datasets.pipelines.RandomAug(input_size=None, color_jitter=None, auto_augment=None, interpolation=None, re_prob=None, re_mode=None, re_count=None, mean=None, std=None)[source]

RandAugment data augmentation method based on “RandAugment: Practical automated data augmentation with a reduced search space”.

This code is borrowed from https://github.com/pengzhiliang/MAE-pytorch.

class mmselfsup.datasets.pipelines.SimMIMMaskGenerator(input_size: int = 192, mask_patch_size: int = 32, model_patch_size: int = 4, mask_ratio: float = 0.6)[source]

Generate random block mask for each Image.

This module is used in SimMIM to generate masks.

Parameters
  • input_size (int) – Size of input image. Defaults to 192.

  • mask_patch_size (int) – Size of each block mask. Defaults to 32.

  • model_patch_size (int) – Patch size of each token. Defaults to 4.

  • mask_ratio (float) – The mask ratio of image. Defaults to 0.6.
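
A minimal instantiation sketch using the defaults listed above:

>>> from mmselfsup.datasets.pipelines import SimMIMMaskGenerator
>>> mask_generator = SimMIMMaskGenerator(
>>>     input_size=192, mask_patch_size=32, model_patch_size=4, mask_ratio=0.6)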

class mmselfsup.datasets.pipelines.Solarization(threshold=128, p=0.5)[source]

Solarization augmentation referred to in BYOL: https://arxiv.org/abs/2006.07733.

Parameters
  • threshold (float, optional) – The solarization threshold. Defaults to 128.

  • p (float, optional) – Probability. Defaults to 0.5.
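
An illustrative BYOL/MoCo v2-style view pipeline combining the transforms above; the ColorJitter entry and all probability values are assumptions for illustration only:

>>> view_pipeline = [
>>>     dict(
>>>         type='RandomAppliedTrans',
>>>         transforms=[dict(type='ColorJitter', brightness=0.4, contrast=0.4,
>>>                          saturation=0.2, hue=0.1)],
>>>         p=0.8),
>>>     dict(type='GaussianBlur', sigma_min=0.1, sigma_max=2.0, p=1.0),
>>>     dict(type='Solarization', threshold=128, p=0.2)]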

class mmselfsup.datasets.pipelines.ToTensor[source]

Convert image or a sequence of images to tensor.

This module can not only convert a single image to tensor, but also a sequence of images.

samplers

class mmselfsup.datasets.samplers.DistributedGivenIterationSampler(dataset, total_iter, batch_size, num_replicas=None, rank=None, last_iter=-1)[source]
gen_new_list()[source]

Each process shuffles the whole list with the same seed and picks its own piece according to its rank.

class mmselfsup.datasets.samplers.DistributedGroupSampler(dataset, samples_per_gpu=1, num_replicas=None, rank=None)[source]

Sampler that restricts data loading to a subset of the dataset.

It is especially useful in conjunction with torch.nn.parallel.DistributedDataParallel. In such case, each process can pass a DistributedSampler instance as a DataLoader sampler, and load a subset of the original dataset that is exclusive to it.

Note

Dataset is assumed to be of constant size.

Parameters
  • dataset – Dataset used for sampling.

  • num_replicas (optional) – Number of processes participating in distributed training.

  • rank (optional) – Rank of the current process within num_replicas.

class mmselfsup.datasets.samplers.DistributedSampler(dataset, num_replicas=None, rank=None, shuffle=True, replace=False, seed=0)[source]
class mmselfsup.datasets.samplers.GroupSampler(dataset, samples_per_gpu=1)[source]

datasets

class mmselfsup.datasets.BaseDataset(data_source, pipeline, prefetch=False)[source]

Base dataset class.

The base dataset can be inherited by different algorithms’ datasets. After __init__, the data source and pipeline will be built. Besides, the algorithm-specific dataset implements different operations after obtaining images from data sources.

Parameters
  • data_source (dict) – Data source defined in mmselfsup.datasets.data_sources.

  • pipeline (list[dict]) – A list of dict, where each element represents an operation defined in mmselfsup.datasets.pipelines.

  • prefetch (bool, optional) – Whether to prefetch data. Defaults to False.

class mmselfsup.datasets.ConcatDataset(datasets)[source]

A wrapper of concatenated dataset.

Same as torch.utils.data.dataset.ConcatDataset, but concat the group flag for image aspect ratio.

Parameters

datasets (list[Dataset]) – A list of datasets.

class mmselfsup.datasets.DeepClusterDataset(data_source, pipeline, prefetch=False)[source]

Dataset for DC and ODC.

The dataset initializes clustering labels and assigns them during training.

Parameters
  • data_source (dict) – Data source defined in mmselfsup.datasets.data_sources.

  • pipeline (list[dict]) – A list of dict, where each element represents an operation defined in mmselfsup.datasets.pipelines.

  • prefetch (bool, optional) – Whether to prefetch data. Defaults to False.

class mmselfsup.datasets.MultiViewDataset(data_source, num_views, pipelines, prefetch=False)[source]

The dataset outputs multiple views of an image.

The number of views in the output dict depends on num_views. The image can be processed by one pipeline or multiple pipelines.

Parameters
  • data_source (dict) – Data source defined in mmselfsup.datasets.data_sources.

  • num_views (list) – The number of different views.

  • pipelines (list[list[dict]]) – A list of pipelines, where each pipeline contains elements that represents an operation defined in mmselfsup.datasets.pipelines.

  • prefetch (bool, optional) – Whether to prefetch data. Defaults to False.

Examples

>>> dataset = MultiViewDataset(data_source, [2], [pipeline])
>>> output = dataset[idx]
The output contains 2 views processed by one pipeline.
>>> dataset = MultiViewDataset(
>>>     data_source, [2, 6], [pipeline1, pipeline2])
>>> output = dataset[idx]
The output contains 8 views processed by two pipelines; the first two views
are processed by pipeline1 and the remaining views by pipeline2.
class mmselfsup.datasets.RelativeLocDataset(data_source, pipeline, format_pipeline, prefetch=False)[source]

Dataset for relative patch location.

The dataset crops the image into several patches and concatenates each surrounding patch with the center one. Finally it also outputs the corresponding labels 0, 1, 2, 3, 4, 5, 6, 7.

Parameters
  • data_source (dict) – Data source defined in mmselfsup.datasets.data_sources.

  • pipeline (list[dict]) – A list of dict, where each element represents an operation defined in mmselfsup.datasets.pipelines.

  • format_pipeline (list[dict]) – A list of dict, it converts input format from PIL.Image to Tensor. The operation is defined in mmselfsup.datasets.pipelines.

  • prefetch (bool, optional) – Whether to prefetch data. Defaults to False.

class mmselfsup.datasets.RepeatDataset(dataset, times)[source]

A wrapper of repeated dataset.

The length of repeated dataset will be times larger than the original dataset. This is useful when the data loading time is long but the dataset is small. Using RepeatDataset can reduce the data loading time between epochs.

Parameters
  • dataset (Dataset) – The dataset to be repeated.

  • times (int) – Repeat times.

class mmselfsup.datasets.RotationPredDataset(data_source, pipeline, prefetch=False)[source]

Dataset for rotation prediction.

The dataset rotates the image by 0, 90, 180, and 270 degrees and outputs labels 0, 1, 2, 3 correspondingly.

Parameters
  • data_source (dict) – Data source defined in mmselfsup.datasets.data_sources.

  • pipeline (list[dict]) – A list of dict, where each element represents an operation defined in mmselfsup.datasets.pipelines.

  • prefetch (bool, optional) – Whether to prefetch data. Defaults to False.

class mmselfsup.datasets.SingleViewDataset(data_source, pipeline, prefetch=False)[source]

The dataset outputs one view of an image, containing some other information such as label, idx, etc.

Parameters
  • data_source (dict) – Data source defined in mmselfsup.datasets.data_sources.

  • pipeline (list[dict]) – A list of dict, where each element represents an operation defined in mmselfsup.datasets.pipelines.

  • prefetch (bool, optional) – Whether to prefetch data. Defaults to False.

evaluate(results, logger=None, topk=(1, 5))[source]

The evaluation function to output accuracy.

Parameters
  • results (dict) – The key-value pair is the output head name and corresponding prediction values.

  • logger (logging.Logger | str | None, optional) – The defined logger to be used. Defaults to None.

  • topk (tuple(int)) – The output includes topk accuracy.

mmselfsup.datasets.build_dataloader(dataset, imgs_per_gpu=None, samples_per_gpu=None, workers_per_gpu=1, num_gpus=1, dist=True, shuffle=True, replace=False, seed=None, pin_memory=True, persistent_workers=True, **kwargs)[source]

Build PyTorch DataLoader.

In distributed training, each GPU/process has a dataloader. In non-distributed training, there is only one dataloader for all GPUs.

Parameters
  • dataset (Dataset) – A PyTorch dataset.

  • imgs_per_gpu (int) – (Deprecated, please use samples_per_gpu) Number of images on each GPU, i.e., batch size of each GPU. Defaults to None.

  • samples_per_gpu (int) – Number of images on each GPU, i.e., batch size of each GPU. Defaults to None.

  • workers_per_gpu (int) – How many subprocesses to use for data loading for each GPU. persistent_workers option needs num_workers > 0. Defaults to 1.

  • num_gpus (int) – Number of GPUs. Only used in non-distributed training.

  • dist (bool) – Distributed training/test or not. Defaults to True.

  • shuffle (bool) – Whether to shuffle the data at every epoch. Defaults to True.

  • replace (bool) – Replace or not in random shuffle. It only works when shuffle is True. Defaults to False.

  • seed (int) – set seed for dataloader.

  • pin_memory (bool, optional) – If True, the data loader will copy Tensors into CUDA pinned memory before returning them. Defaults to True.

  • persistent_workers (bool) – If True, the data loader will not shut down the worker processes after a dataset has been consumed once. This keeps the workers’ Dataset instances alive. The argument only takes effect in PyTorch>=1.7.0. Defaults to True.

  • kwargs – any keyword argument to be used to initialize DataLoader

Returns

A PyTorch dataloader.

Return type

DataLoader
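
A minimal non-distributed usage sketch; build_dataset is assumed to come from the same package, and cfg.data.train is a placeholder config:

>>> from mmselfsup.datasets import build_dataset, build_dataloader
>>> dataset = build_dataset(cfg.data.train)
>>> data_loader = build_dataloader(
>>>     dataset,
>>>     samples_per_gpu=32,
>>>     workers_per_gpu=2,
>>>     dist=False,
>>>     shuffle=True,
>>>     seed=0)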

mmselfsup.models

algorithms

backbones

heads

memories

necks

utils

mmselfsup.utils

class mmselfsup.utils.AliasMethod(probs)[source]

The alias method for sampling.

From: https://hips.seas.harvard.edu/blog/2013/03/03/the-alias-method-efficient-sampling-with-many-discrete-outcomes/

Parameters

probs (Tensor) – Sampling probabilities.

draw(N)[source]

Draw N samples from multinomial.

Parameters

N (int) – Number of samples.

Returns

Samples.

Return type

Tensor
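
An illustrative draw from a three-outcome distribution:

>>> import torch
>>> from mmselfsup.utils import AliasMethod
>>> sampler = AliasMethod(torch.tensor([0.1, 0.2, 0.7]))
>>> indices = sampler.draw(5)  # Tensor of 5 indices in {0, 1, 2}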

class mmselfsup.utils.Extractor(dataset, samples_per_gpu, workers_per_gpu, dist_mode=False, persistent_workers=True, **kwargs)[source]

Feature extractor.

Parameters
  • dataset (Dataset | dict) – A PyTorch dataset or dict that indicates the dataset.

  • samples_per_gpu (int) – Number of images on each GPU, i.e., batch size of each GPU.

  • workers_per_gpu (int) – How many subprocesses to use for data loading for each GPU.

  • dist_mode (bool) – Use distributed extraction or not. Defaults to False.

  • persistent_workers (bool) – If True, the data loader will not shut down the worker processes after a dataset has been consumed once. This keeps the workers’ Dataset instances alive. The argument only takes effect in PyTorch>=1.7.0. Defaults to True.

mmselfsup.utils.batch_shuffle_ddp(x)[source]

Batch shuffle, for making use of BatchNorm.

*Only supports DistributedDataParallel (DDP) models.*

mmselfsup.utils.batch_unshuffle_ddp(x, idx_unshuffle)[source]

Undo batch shuffle.

*Only supports DistributedDataParallel (DDP) models.*
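
A hedged sketch of the MoCo-style pattern these two helpers support; it is only meaningful inside an initialized DDP process group, im_k is a batch of key images, and encoder_k is a placeholder key encoder:

>>> from mmselfsup.utils import batch_shuffle_ddp, batch_unshuffle_ddp
>>> # inside the key-encoder forward, typically under torch.no_grad()
>>> im_k, idx_unshuffle = batch_shuffle_ddp(im_k)  # shuffle samples across GPUs to decorrelate BN statistics
>>> k = encoder_k(im_k)
>>> k = batch_unshuffle_ddp(k, idx_unshuffle)      # restore the original sample order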

mmselfsup.utils.collect_env()[source]

Collect the information of the running environments.

mmselfsup.utils.concat_all_gather(tensor)[source]

Performs all_gather operation on the provided tensors.

*Warning*: torch.distributed.all_gather has no gradient.

mmselfsup.utils.dist_forward_collect(func, data_loader, rank, length, ret_rank=-1)[source]

Forward and collect network outputs in a distributed manner.

This function performs forward propagation and collects outputs. It can be used to collect results, features, losses, etc.

Parameters
  • func (function) – The function to process data. The output must be a dictionary of CPU tensors.

  • data_loader (Dataloader) – the torch Dataloader to yield data.

  • rank (int) – This process id.

  • length (int) – Expected length of output arrays.

  • ret_rank (int) – The process that returns. Other processes will return None.

Returns

The concatenated outputs.

Return type

results_all (dict(np.ndarray))

mmselfsup.utils.distributed_sinkhorn(out, sinkhorn_iterations, world_size, epsilon)[source]

Apply the distributed Sinkhorn optimization on the scores matrix to find the assignments.

mmselfsup.utils.find_latest_checkpoint(path, suffix='pth')[source]

Find the latest checkpoint from the working directory.

Parameters
  • path (str) – The path to find checkpoints.

  • suffix (str) – File extension. Defaults to pth.

Returns

File path of the latest checkpoint.

Return type

latest_path(str | None)

References

  1. https://github.com/microsoft/SoftTeacher/blob/main/ssod/utils/patch.py

  2. https://github.com/open-mmlab/mmdetection/blob/master/mmdet/utils/misc.py#L7
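
An illustrative call; the work directory path is a placeholder:

>>> from mmselfsup.utils import find_latest_checkpoint
>>> latest = find_latest_checkpoint('./work_dirs/selfsup_exp', suffix='pth')
>>> # e.g. './work_dirs/selfsup_exp/epoch_200.pth', or None if no checkpoint is found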

mmselfsup.utils.gather_tensors(input_array)[source]

Gather tensor from all GPUs.

mmselfsup.utils.gather_tensors_batch(input_array, part_size=100, ret_rank=-1)[source]

Batch-wise gathering to avoid CUDA out of memory.

mmselfsup.utils.get_root_logger(log_file=None, log_level=20)[source]

Get root logger.

Parameters
  • log_file (str, optional) – File path of log. Defaults to None.

  • log_level (int, optional) – The level of logger. Defaults to logging.INFO.

Returns

The obtained logger.

Return type

logging.Logger
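
A minimal usage sketch with a placeholder log path:

>>> import logging
>>> from mmselfsup.utils import get_root_logger
>>> logger = get_root_logger(log_file='work_dir/run.log', log_level=logging.INFO)
>>> logger.info('Start training')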

mmselfsup.utils.nondist_forward_collect(func, data_loader, length)[source]

Forward and collect network outputs.

This function performs forward propagation and collects outputs. It can be used to collect results, features, losses, etc.

Parameters
  • func (function) – The function to process data. The output must be a dictionary of CPU tensors.

  • data_loader (Dataloader) – the torch Dataloader to yield data.

  • length (int) – Expected length of output arrays.

Returns

The concatenated outputs.

Return type

results_all (dict(np.ndarray))

mmselfsup.utils.setup_multi_processes(cfg)[source]

Setup multi-processing environment variables.

mmselfsup.utils.sync_random_seed(seed=None, device='cuda')[source]

Make sure different ranks share the same seed. All workers must call this function, otherwise it will deadlock. This method is generally used in DistributedSampler, because the seed should be identical across all processes in the distributed group.

In distributed sampling, different ranks should sample non-overlapped data in the dataset. Therefore, this function is used to make sure that each rank shuffles the data indices in the same order based on the same seed. Then different ranks could use different indices to select non-overlapped data from the same data list.

Parameters
  • seed (int, Optional) – The seed. Default to None.

  • device (str) – The device where the seed will be put on. Default to ‘cuda’.

Returns

Seed to be used.

Return type

int

References

  1. https://github.com/open-mmlab/mmdetection/blob/master/mmdet/core/utils/dist_utils.py
