utils#

Package Contents#

Aggregators

Defines parameter aggregation algorithms.

Logger

Records log messages to a file and prints them to the console at the same time.

MessageCode

Different types of messages between client and server that we support go here.

SerializationTool

Serialize and deserialize model parameters and gradients as flat tensors.

class Aggregators#

Bases: object

Defines parameter aggregation algorithms.

static fedavg_aggregate(serialized_params_list, weights=None)#

FedAvg aggregator

Paper: http://proceedings.mlr.press/v54/mcmahan17a.html

Parameters:
  • serialized_params_list (list[torch.Tensor]) – Serialized model parameters to merge following FedAvg.

  • weights (list, numpy.array, or torch.Tensor, optional) – Weight for each entry in serialized_params_list; the length of weights must match the length of serialized_params_list.

Returns:

torch.Tensor
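As a rough sketch of what fedavg_aggregate does (illustrative, not the package's actual implementation), the weighted average of flattened parameter tensors can be written as:

```python
import torch

def fedavg(serialized_params_list, weights=None):
    # Default to uniform weights when none are given.
    if weights is None:
        weights = torch.ones(len(serialized_params_list))
    else:
        weights = torch.as_tensor(weights, dtype=torch.float32)
    weights = weights / weights.sum()  # normalize so the weights sum to 1
    # Weighted sum of the stacked, flattened parameter tensors.
    stacked = torch.stack(serialized_params_list)
    return (weights.unsqueeze(-1) * stacked).sum(dim=0)

# Two clients with equal weights: the result is the element-wise mean.
avg = fedavg([torch.tensor([0.0, 2.0]), torch.tensor([4.0, 6.0])])  # tensor([2., 4.])
```

Unequal weights (e.g. per-client sample counts) simply tilt the average toward the heavier clients.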

static fedasync_aggregate(server_param, new_param, alpha)#

FedAsync aggregator

Paper: https://arxiv.org/abs/1903.03934
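FedAsync blends the stale server model with an incoming client model using a mixing coefficient alpha. A minimal sketch of that update rule (illustrative, not the package's implementation):

```python
import torch

def fedasync(server_param, new_param, alpha):
    # Convex combination: alpha controls how much the new model contributes.
    return (1 - alpha) * server_param + alpha * new_param

updated = fedasync(torch.tensor([1.0, 1.0]), torch.tensor([3.0, 5.0]), alpha=0.5)
```

With alpha=0.5 the two models contribute equally; the paper additionally scales alpha by the staleness of the client update.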

class Logger(log_name=None, log_file=None)#

Bases: object

Records log messages to a file and prints them to the console at the same time.

Parameters:
  • log_name (str) – log name for output.

  • log_file (str) – a file path of log file.

info(log_str)#

Print an info-level message to the logger.

warning(warning_str)#

Print a warning-level message to the logger.
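The dual file-and-console behavior described above can be sketched with Python's standard logging module (make_logger is an illustrative helper, not the package's internals):

```python
import logging

def make_logger(log_name=None, log_file=None):
    logger = logging.getLogger(log_name or __name__)
    logger.setLevel(logging.INFO)
    # Print every record to the console...
    logger.addHandler(logging.StreamHandler())
    # ...and, when a file path is given, record the same lines to that file.
    if log_file is not None:
        logger.addHandler(logging.FileHandler(log_file))
    return logger

log = make_logger(log_name="demo")
log.info("server started")
log.warning("client 3 timed out")
```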

class MessageCode#

Bases: enum.Enum

Different types of messages between client and server that we support go here.

ParameterRequest = 0#
GradientUpdate = 1#
ParameterUpdate = 2#
EvaluateParams = 3#
Exit = 4#
SetUp = 5#
Activation = 6#
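These codes round-trip through their integer values, which is what lets a code travel in a message header. A self-contained sketch that mirrors the listing above:

```python
from enum import Enum

class MessageCode(Enum):
    # Mirrors the codes documented above.
    ParameterRequest = 0
    GradientUpdate = 1
    ParameterUpdate = 2
    EvaluateParams = 3
    Exit = 4
    SetUp = 5
    Activation = 6

# A code is sent as its integer value and recovered by value on the other side.
received = MessageCode(2)  # MessageCode.ParameterUpdate
```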
class SerializationTool#

Bases: object

static serialize_model_gradients(model: torch.nn.Module) torch.Tensor#

Flatten the gradients of all model parameters into a single tensor.

Parameters:

model (torch.nn.Module) – model whose gradients are serialized.

Returns:

The serialized gradients.

Return type:

torch.Tensor
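A plausible sketch of this method's behavior, assuming it concatenates each parameter's .grad into one flat tensor, mirroring serialize_model below (illustrative, not the package's exact code):

```python
import torch
import torch.nn as nn

def serialize_model_gradients(model):
    # Concatenate every parameter's gradient into a single 1-D tensor.
    return torch.cat([p.grad.view(-1) for p in model.parameters()])

model = nn.Linear(2, 1)
loss = model(torch.ones(1, 2)).sum()  # loss = w0 + w1 + b
loss.backward()
grads = serialize_model_gradients(model)  # shape (2*1 + 1,) = (3,)
```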

static deserialize_model_gradients(model: torch.nn.Module, gradients: torch.Tensor)#
static serialize_model(model: torch.nn.Module) torch.Tensor#

Unfold model parameters.

Unfold every layer of the model and concatenate all tensors into a single torch.Tensor with shape (size, ).

Please note that the implementation has been updated: the current version of serialization includes the parameters in batch normalization layers.

Parameters:

model (torch.nn.Module) – model to serialize.

static deserialize_model(model: torch.nn.Module, serialized_parameters: torch.Tensor, mode='copy')#

Assigns serialized parameters to model.parameters. This is done by iterating through model.parameters() and assigning the corresponding slice of serialized_parameters to each parameter. NOTE: this function manipulates model.parameters.

Parameters:
  • model (torch.nn.Module) – model to deserialize.

  • serialized_parameters (torch.Tensor) – serialized model parameters.

  • mode (str) – deserialize mode. “copy” or “add”.
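A minimal sketch of the serialize/deserialize round trip described above (illustrative; it assumes only that slices follow the order of model.parameters()):

```python
import torch
import torch.nn as nn

def serialize_model(model):
    # Flatten every parameter tensor and concatenate into one 1-D tensor.
    return torch.cat([p.data.view(-1) for p in model.parameters()])

def deserialize_model(model, serialized_parameters, mode="copy"):
    # Walk the parameters and copy (or add) the matching slice back in.
    offset = 0
    for p in model.parameters():
        n = p.numel()
        chunk = serialized_parameters[offset:offset + n].view_as(p.data)
        if mode == "copy":
            p.data.copy_(chunk)
        elif mode == "add":
            p.data.add_(chunk)
        offset += n

model = nn.Linear(3, 2)
flat = serialize_model(model)   # shape (3*2 + 2,) = (8,)
clone = nn.Linear(3, 2)
deserialize_model(clone, flat)  # clone now holds model's parameters
```

The "add" mode accumulates the slice into the existing parameters instead of overwriting them, which is useful for applying serialized updates.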

static serialize_trainable_model(model: torch.nn.Module) torch.Tensor#

Unfold model parameters.

Unfold every layer of the model and concatenate all tensors into a single torch.Tensor with shape (size, ).

Parameters:

model (torch.nn.Module) – model to serialize.

static deserialize_trainable_model(model: torch.nn.Module, serialized_parameters: torch.Tensor, mode='copy')#

Assigns serialized parameters to model.parameters. This is done by iterating through model.parameters() and assigning the corresponding slice of serialized_parameters to each parameter. NOTE: this function manipulates model.parameters.

Parameters:
  • model (torch.nn.Module) – model to deserialize.

  • serialized_parameters (torch.Tensor) – serialized model parameters.

  • mode (str) – deserialize mode. “copy” or “add”.
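The trainable variants presumably differ only in skipping frozen parameters; a sketch under that assumption (reading "trainable" as requires_grad=True is my interpretation, not stated by the docstring):

```python
import torch
import torch.nn as nn

def serialize_trainable_model(model):
    # Same flattening as serialize_model, but skip frozen parameters.
    return torch.cat(
        [p.data.view(-1) for p in model.parameters() if p.requires_grad]
    )

model = nn.Linear(4, 2)
model.bias.requires_grad_(False)         # freeze the bias
flat = serialize_trainable_model(model)  # only the 4*2 weight entries remain
```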