handler#

Module Contents#

ParameterServerBackendHandler

An abstract class representing the handler of a parameter server.

SyncParameterServerHandler

Synchronous Parameter Server Handler.

AsyncParameterServerHandler

Asynchronous Parameter Server Handler.

class ParameterServerBackendHandler(model, cuda=False)#

Bases: fedlab.core.model_maintainer.ModelMaintainer

An abstract class representing the handler of a parameter server.

Please make sure that your self-defined server handler class subclasses this class.

Example

Read source code of SyncParameterServerHandler and AsyncParameterServerHandler.

Property for the manager layer. The server manager calls this property when it activates clients.

property if_stop(self) → bool#

NetworkManager keeps monitoring this attribute, and it will stop all related processes and threads when True is returned.

abstract _update_global_model(self, payload)#

Override this function to define how the global model is updated (aggregation or optimization).
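The subclassing contract above can be sketched in plain Python. This is a standalone stand-in, not FedLab's actual base class: the names BackendHandler and AveragingHandler are hypothetical, and flat lists of floats stand in for serialized torch tensors.

```python
from abc import ABC, abstractmethod


class BackendHandler(ABC):
    """Standalone stand-in mirroring ParameterServerBackendHandler's contract."""

    def __init__(self, model):
        self._model = model  # here: a flat list of floats, not a torch module

    @property
    @abstractmethod
    def if_stop(self):
        """Monitored by the manager layer; True stops the server."""

    @abstractmethod
    def _update_global_model(self, payload):
        """Define how the global model is updated from client payloads."""


class AveragingHandler(BackendHandler):
    """Hypothetical subclass: uniform averaging with a round-count stop rule."""

    def __init__(self, model, global_round):
        super().__init__(model)
        self.round = 0
        self.global_round = global_round

    @property
    def if_stop(self):
        return self.round >= self.global_round

    def _update_global_model(self, payload):
        # payload: list of flat parameter vectors, one per client
        n = len(payload)
        self._model = [sum(vals) / n for vals in zip(*payload)]
        self.round += 1
```

A manager-layer loop would call `_update_global_model` once per round and poll `if_stop` to decide when to shut down.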

class SyncParameterServerHandler(model, global_round, sample_ratio, cuda=False, logger=None)#

Bases: ParameterServerBackendHandler

Synchronous Parameter Server Handler.

Backend of the synchronous parameter server: this class is responsible for backend computation on the synchronous server.

The synchronous parameter server waits for every client to finish its local training process before starting the next FL round.

Details in paper: http://proceedings.mlr.press/v54/mcmahan17a.html

Parameters
  • model (torch.nn.Module) – Model used in this federation.

  • global_round (int) – Stop condition. The FL system shuts down when this global round is reached.

  • sample_ratio (float) – The result of sample_ratio * client_num is the number of clients for every FL round.

  • cuda (bool) – Use GPUs or not. Default: False.

  • logger (Logger, optional) – object of Logger.
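The relationship between sample_ratio and the per-round client count described above can be sketched as follows. The helper name and the rounding rule (truncation with a floor of one client) are assumptions for illustration; FedLab's exact formula may differ.

```python
def clients_per_round(client_num_in_total, sample_ratio):
    """Hypothetical helper: clients sampled each FL round.

    Truncates sample_ratio * client_num_in_total and keeps at least
    one client (rounding behavior is an assumption, not FedLab's code).
    """
    return max(1, int(sample_ratio * client_num_in_total))
```

For example, with 100 clients and sample_ratio=0.1, ten clients participate in each round.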

Property for the manager layer. The server manager calls this property when it activates clients.

property if_stop(self)#

NetworkManager keeps monitoring this attribute, and it will stop all related processes and threads when True is returned.

property client_num_per_round(self)#

Number of clients sampled in each FL round (sample_ratio * client_num_in_total).

sample_clients(self)#

Return a list of randomly selected client rank indices. Client IDs range from 1 to self.client_num_in_total, inclusive.
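The sampling behavior can be sketched with the standard library. This is a hypothetical re-implementation for illustration, not FedLab's source; the seed parameter is added here only to make the sketch reproducible.

```python
import random


def sample_clients(client_num_in_total, client_num_per_round, seed=None):
    """Hypothetical stand-in for SyncParameterServerHandler.sample_clients.

    Draws client_num_per_round distinct IDs uniformly from
    1..client_num_in_total (inclusive).
    """
    rng = random.Random(seed)
    ids = rng.sample(range(1, client_num_in_total + 1), client_num_per_round)
    return sorted(ids)
```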

_update_global_model(self, payload)#

Update global model with collected parameters from clients.

Note

The server handler calls this method when its client_buffer_cache is full. Users can override the aggregation strategy applied to model_parameters_list, then use SerializationTool.deserialize_model() to load the aggregated serialized parameters into self._model.

Parameters

payload (list[torch.Tensor]) – A list of tensors passed by manager layer.
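The default synchronous strategy is federated averaging (McMahan et al., linked above). A minimal sketch, using flat lists of floats in place of serialized torch tensors; the function name and the optional per-client weights are assumptions for illustration, not FedLab's API.

```python
def fedavg_aggregate(model_parameters_list, weights=None):
    """Hypothetical FedAvg sketch over flat parameter vectors.

    weights, if given, are normalized client weights (e.g. local
    dataset sizes); otherwise clients are weighted uniformly.
    """
    n = len(model_parameters_list)
    if weights is None:
        weights = [1.0 / n] * n
    total = sum(weights)
    weights = [w / total for w in weights]  # normalize to sum to 1
    return [
        sum(w * p for w, p in zip(weights, params))
        for params in zip(*model_parameters_list)
    ]
```

In FedLab itself, the aggregated vector would then be written back into the model via SerializationTool.deserialize_model().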

class AsyncParameterServerHandler(model, alpha, total_time, strategy='constant', cuda=False, logger=None)#

Bases: ParameterServerBackendHandler

Asynchronous Parameter Server Handler.

Updates the global model immediately after receiving a ParameterUpdate message. Paper: https://arxiv.org/abs/1903.03934
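The asynchronous rule in the linked paper mixes each incoming client model into the server model with weight alpha: x_t = (1 - alpha) * x_{t-1} + alpha * x_client. A plain-Python sketch over flat parameter vectors (the function name is hypothetical; FedLab operates on serialized torch tensors):

```python
def async_update(server_params, client_params, alpha):
    """Sketch of the async aggregation rule from Xie et al. (2019):
    x_t = (1 - alpha) * x_{t-1} + alpha * x_client.
    """
    return [
        (1 - alpha) * s + alpha * c
        for s, c in zip(server_params, client_params)
    ]
```

Unlike the synchronous handler, this runs once per received update rather than once per full round, so stale clients never block progress.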

Parameters
  • model (torch.nn.Module) – Global model on the server.

  • alpha (float) – Weight used in async aggregation.

  • total_time (int) – Stop condition. Shut down FL system when total_time is reached.

  • strategy (str) – Adaptive strategy. Options are constant, hinge, and polynomial. Default: constant.

  • cuda (bool) – Use GPUs or not. Default: False.

  • logger (Logger, optional) – Object of Logger.

property if_stop(self)#

NetworkManager keeps monitoring this attribute, and it will stop all related processes and threads when True is returned.

Property for the manager layer. The server manager calls this property when it activates clients.

_update_global_model(self, payload)#

Update the global model immediately with the client parameters in payload, using alpha-weighted aggregation.

_adapt_alpha(self, receive_model_time)#

Update alpha according to the staleness of the received model.
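The three strategies can be sketched from the staleness-damping functions in the linked paper (arXiv:1903.03934): constant keeps alpha fixed, polynomial scales it by (staleness + 1)^(-a), and hinge keeps it fixed up to a threshold b and then decays it as 1 / (a * (staleness - b) + 1). The function name and the default hyperparameters a and b are assumptions for illustration, not FedLab's signature.

```python
def adapt_alpha(alpha, staleness, strategy="constant", a=0.5, b=4):
    """Sketch of staleness-adaptive alpha (Xie et al., 2019).

    staleness = current_time - time_the_client_model_was_sent;
    a and b are hypothetical default hyperparameters.
    """
    if strategy == "constant":
        scale = 1.0
    elif strategy == "polynomial":
        scale = (staleness + 1) ** (-a)
    elif strategy == "hinge":
        scale = 1.0 if staleness <= b else 1.0 / (a * (staleness - b) + 1)
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    return alpha * scale
```

The effect is that very stale updates are mixed in with a smaller weight, limiting how much an outdated client model can pull the global model backwards.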