handler#

Module Contents#

ServerHandler

An abstract class representing the handler of a parameter server.

class ServerHandler(model: torch.nn.Module, cuda: bool, device: str = None)#

Bases: fedlab.core.model_maintainer.ModelMaintainer

An abstract class representing the handler of a parameter server.

Please make sure that your self-defined server handler class subclasses this class.

Example

Read the source code of SyncServerHandler and AsyncServerHandler; a minimal skeleton of a self-defined handler is also sketched after the parameter list below.

Parameters:
  • model (torch.nn.Module) – PyTorch model.

  • cuda (bool) – Use GPUs or not.

  • device (str, optional) – Assign model/data to the given GPU(s), e.g. ‘cuda:0’ or ‘cuda:0,1’. Defaults to None. If device is None and cuda is True, FedLab will use the GPU with the largest available memory by default.
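
The following is a minimal, hypothetical skeleton of a self-defined handler, referenced from the Example above. The import path is assumed to be fedlab.core.server.handler; the class name MyServerHandler and the global_round / client_buffer_cache attributes are illustrative only and not part of FedLab's API. The abstract members documented below are then filled in one by one.

```python
import torch
from fedlab.core.server.handler import ServerHandler


class MyServerHandler(ServerHandler):
    """Illustrative skeleton; the abstract members documented below still need bodies."""

    def __init__(self, model: torch.nn.Module, global_round: int,
                 cuda: bool = False, device: str = None):
        super().__init__(model, cuda, device)
        self.round = 0                     # communication rounds completed so far
        self.global_round = global_round   # hypothetical stopping point, not part of the base class
        self.client_buffer_cache = []      # client payloads collected in the current round
```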

abstract property downlink_package: List[torch.Tensor]#

Property for the manager layer. The server manager will call this property when it activates clients.
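
A possible body for this property, continuing the MyServerHandler sketch above and assuming that ModelMaintainer exposes the serialized global model via its model_parameters property:

```python
from typing import List

import torch
from fedlab.core.server.handler import ServerHandler


class MyServerHandler(ServerHandler):  # continuing the sketch above
    @property
    def downlink_package(self) -> List[torch.Tensor]:
        # Content broadcast to clients when the manager activates them:
        # here, the current global model as a single flat serialized tensor
        # (model_parameters is assumed to come from ModelMaintainer).
        return [self.model_parameters]
```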

abstract property if_stop: bool#

NetworkManager keeps monitoring this attribute, and it will stop all related processes and threads when True is returned.
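
For instance, a round-count criterion; the round and global_round attributes come from the illustrative skeleton above, not from the base class:

```python
from fedlab.core.server.handler import ServerHandler


class MyServerHandler(ServerHandler):  # continuing the sketch above
    @property
    def if_stop(self) -> bool:
        # Tell the NetworkManager to shut everything down once the
        # configured number of communication rounds has been reached.
        return self.round >= self.global_round
```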

abstract setup_optim()#

Override this function to load your optimization hyperparameters.
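
A sketch under the assumption that the only hyperparameter needed is how many client uploads to wait for per round; the argument name num_clients_per_round is illustrative:

```python
from fedlab.core.server.handler import ServerHandler


class MyServerHandler(ServerHandler):  # continuing the sketch above
    def setup_optim(self, num_clients_per_round: int):
        # Load whatever server-side hyperparameters your scheme needs;
        # here, only how many client uploads to wait for in each round.
        self.num_clients_per_round = num_clients_per_round
```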

abstract global_update(buffer)#

abstract load(payload)#

Override this function to define how to update the global model (aggregation or optimization).
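
A sketch of how load and global_update might work together, assuming each client uploads its serialized model as payload[0], that set_model is inherited from ModelMaintainer, and that num_clients_per_round was stored by the setup_optim sketch above; the plain averaging is a stand-in for whatever aggregation or optimization rule you actually use:

```python
from typing import List

import torch
from fedlab.core.server.handler import ServerHandler


class MyServerHandler(ServerHandler):  # continuing the sketch above
    def load(self, payload: List[torch.Tensor]) -> bool:
        # Cache one client's upload (assumed to be its serialized model
        # at payload[0]); once every sampled client has reported, update
        # the global model and move on to the next round.
        self.client_buffer_cache.append(payload[0].clone())
        if len(self.client_buffer_cache) == self.num_clients_per_round:
            self.global_update(self.client_buffer_cache)
            self.round += 1
            self.client_buffer_cache = []
            return True   # the current round is complete
        return False      # still waiting for more clients

    def global_update(self, buffer: List[torch.Tensor]):
        # Plain FedAvg-style averaging of the serialized client models;
        # set_model (assumed from ModelMaintainer) overwrites the global
        # model with the averaged parameters.
        averaged = torch.mean(torch.stack(buffer), dim=0)
        self.set_model(averaged)
```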

abstract evaluate()#

Override this function to define the evaluation of the global model.
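
A sketch of a server-side evaluation, assuming a test_loader (a torch.utils.data.DataLoader) is attached to the handler by the user; neither it nor the returned (loss, accuracy) format is prescribed by the base class:

```python
import torch
from fedlab.core.server.handler import ServerHandler


class MyServerHandler(ServerHandler):  # continuing the sketch above
    def evaluate(self):
        # Evaluate the current global model on a held-out test set.
        # self.model is assumed from ModelMaintainer; test_loader is an
        # attribute you attach yourself, not part of the base class.
        self.model.eval()
        criterion = torch.nn.CrossEntropyLoss()
        loss_sum, correct, total = 0.0, 0, 0
        with torch.no_grad():
            for inputs, targets in self.test_loader:
                outputs = self.model(inputs)
                loss_sum += criterion(outputs, targets).item() * targets.size(0)
                correct += (outputs.argmax(dim=1) == targets).sum().item()
                total += targets.size(0)
        return loss_sum / total, correct / total
```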