compressor#

Package Contents#

QSGDCompressor

Quantization compressor.

TopkCompressor

Top-k compressor for federated communication.

class QSGDCompressor(n_bit, random=True, cuda=False)#

Bases: fedlab.contrib.compressor.compressor.Compressor

Quantization compressor.

An implementation of the compressor from the paper https://proceedings.neurips.cc/paper/2017/file/6c340f25839e6acdc73414517203f5f0-Paper.pdf.

Alistarh, Dan, et al. “QSGD: Communication-efficient SGD via gradient quantization and encoding.” Advances in Neural Information Processing Systems 30 (2017): 1709-1720. Thanks to the git repo: https://github.com/xinyandai/gradient-quantization

Parameters:
  • n_bit (int) – the number of bits used for quantization. A larger n_bit gives finer quantization (higher precision) but increases communication cost; see the schematic after this parameter list.

  • random (bool, optional) – if True, carry the extra quantization bit stochastically (with probability equal to the fractional remainder) instead of rounding deterministically. Defaults to True.

  • cuda (bool, optional) – whether to use the GPU. Defaults to False.
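
To make n_bit and random concrete, the following is an illustrative schematic of QSGD-style stochastic quantization as described in the cited paper. It is a sketch, not FedLab's exact implementation, and the function name qsgd_quantize_sketch is invented for this example.

```python
import torch

def qsgd_quantize_sketch(x, n_bit=8, random=True):
    """Schematic QSGD-style quantization (illustrative, not FedLab's exact code).

    Each entry of x is mapped onto s = 2**n_bit - 1 uniform levels of |x| / ||x||.
    With random=True the extra level (the "carry bit") is taken with probability
    equal to the fractional remainder, which makes the quantizer unbiased;
    otherwise the level is rounded deterministically.
    """
    s = 2 ** n_bit - 1
    norm = x.norm()
    scaled = x.abs() / norm * s                 # each entry now lies in [0, s]
    floor = scaled.floor()
    if random:
        # carry the extra level with probability equal to the fractional part
        carry = (torch.rand_like(scaled) < (scaled - floor)).float()
    else:
        carry = (scaled - floor).round()
    quantized_intervals = (floor + carry).to(torch.int64)  # integers in [0, 2**n_bit - 1]
    signs = x.sign()
    return norm, signs, quantized_intervals
```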

compress(tensor)#

Compress a tensor with quantization.

Parameters:
  • tensor (torch.Tensor) – the tensor to compress.

Returns:

  • norm (torch.Tensor) – the normalization number.

  • signs (torch.Tensor) – tensor indicating the sign of each corresponding entry.

  • quantized_intervals (torch.Tensor) – quantized tensor whose entries lie in [0, 2**n_bit - 1].

Return type:

tuple

decompress(signature)#

Decompress a tensor.

Parameters:
  • signature (list) – [norm, signs, quantized_intervals], as returned by compress().

Returns:

Raw tensor represented by signature.

Return type:

torch.Tensor
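
A minimal round-trip example. It assumes QSGDCompressor is importable from fedlab.contrib.compressor.quantization (adjust the path to your FedLab version) and that compress() returns the (norm, signs, quantized_intervals) triple described above:

```python
import torch
from fedlab.contrib.compressor.quantization import QSGDCompressor

compressor = QSGDCompressor(n_bit=8, random=True, cuda=False)

grad = torch.randn(1024)                      # e.g. a flattened gradient
norm, signs, intervals = compressor.compress(grad)

# Rebuild an approximation of the original tensor from the signature.
restored = compressor.decompress([norm, signs, intervals])
print(torch.norm(grad - restored))            # quantization error, small for n_bit=8
```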

class TopkCompressor(compress_ratio)#

Bases: fedlab.contrib.compressor.compressor.Compressor

Compressor for federated communication based on top-k gradient or weight selection.

Parameters:
  • compress_ratio (float) – compression ratio, i.e., the fraction of entries kept by top-k selection.
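
The selection itself is standard magnitude-based top-k. Below is an illustrative sketch using torch.topk, not FedLab's exact implementation; it assumes compress_ratio is the fraction of entries kept, and the helper names are invented for this example.

```python
import torch

def topk_compress_sketch(tensor, compress_ratio=0.1):
    """Keep the compress_ratio fraction of entries with the largest magnitude."""
    flat = tensor.flatten()
    k = max(1, int(flat.numel() * compress_ratio))
    _, indices = torch.topk(flat.abs(), k)    # positions of the largest-magnitude entries
    values = flat[indices]                    # keep the signed values at those positions
    return values, indices

def topk_decompress_sketch(values, indices, shape):
    """Scatter the kept values back into a dense zero tensor of the original shape."""
    dense = torch.zeros(shape, dtype=values.dtype)
    dense.view(-1)[indices] = values
    return dense
```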

compress(tensor)#

Compress a tensor into (values, indices).

Parameters:
  • tensor (torch.Tensor) – the tensor to compress.

Returns:

(values, indices)

Return type:

tuple

decompress(values, indices, shape)#

Decompress a tensor from (values, indices) and the original shape.

Parameters:
  • values (torch.Tensor) – values returned by compress().

  • indices (torch.Tensor) – indices returned by compress().

  • shape – shape of the original tensor.
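
A minimal round-trip sketch. It assumes TopkCompressor is importable from fedlab.contrib.compressor.topk (adjust the path to your FedLab version), that compress_ratio is the fraction of entries kept, and that decompress() fills unselected positions with zeros:

```python
import torch
from fedlab.contrib.compressor.topk import TopkCompressor

compressor = TopkCompressor(compress_ratio=0.1)   # keep roughly 10% of the entries

grad = torch.randn(1024)
values, indices = compressor.compress(grad)

# Rebuild a dense approximation from the transmitted (values, indices, shape).
restored = compressor.decompress(values, indices, grad.shape)
```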