fedmgda+#

Module Contents#

FedMGDAServerHandler

Synchronous Parameter Server Handler.

class FedMGDAServerHandler(model: torch.nn.Module, global_round: int, num_clients: int = 0, sample_ratio: float = 1, cuda: bool = False, device: str = None, sampler: fedlab.contrib.client_sampler.base_sampler.FedSampler = None, logger: fedlab.utils.Logger = None)#

Bases: fedlab.contrib.algorithm.basic_server.SyncServerHandler

Synchronous Parameter Server Handler.

Backend of the synchronous parameter server: this class is responsible for the server-side computation in synchronous federated learning.

The synchronous parameter server waits for every client to finish its local training before starting the next FL round.

Details in paper: http://proceedings.mlr.press/v54/mcmahan17a.html

Parameters:
  • model (torch.nn.Module) – model trained by federated learning.

  • global_round (int) – stop condition. Shut down FL system when global round is reached.

  • num_clients (int) – number of clients in FL. Default: 0 (to be initialized externally).

  • sample_ratio (float) – proportion of clients sampled each round; sample_ratio * num_clients clients participate in every FL round.

  • cuda (bool) – use GPUs or not. Default: False.

  • device (str, optional) – assign model/data to the given GPUs, e.g. ‘cuda:0’ or ‘cuda:0,1’. Defaults to None. If device is None and cuda is True, FedLab will default to the GPU with the largest available memory.

  • sampler (FedSampler, optional) – assign a sampler to define the client sampling strategy. Default: random sampling with FedSampler.

  • logger (Logger, optional) – object of Logger.

property num_clients_per_round#
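The number of clients participating in each round follows from sample_ratio and num_clients. A minimal sketch of how such a property is commonly computed (the exact rounding convention in FedLab may differ):

```python
def num_clients_per_round(num_clients: int, sample_ratio: float) -> int:
    """Sketch: number of clients selected in each FL round.

    Assumes the common convention of rounding sample_ratio * num_clients
    to the nearest integer, with at least one client selected.
    """
    return max(1, int(round(sample_ratio * num_clients)))

print(num_clients_per_round(100, 0.1))  # 10 clients per round
print(num_clients_per_round(10, 1.0))   # all 10 clients
```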
setup_optim(sampler, lr)#

Override this function to load your optimization hyperparameters.
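As a hypothetical illustration (the class and attribute names below are assumptions, not FedLab's actual internals), an override of setup_optim might simply record the sampler and the server-side learning rate for later use during aggregation:

```python
class HandlerSketch:
    """Minimal sketch of a handler storing optimization hyperparameters."""

    def setup_optim(self, sampler, lr):
        # Store the client sampler and the server-side learning rate
        # applied when the aggregated update hits the global model.
        self.sampler = sampler
        self.lr = lr

handler = HandlerSketch()
handler.setup_optim(sampler=None, lr=0.5)
```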

sample_clients(num_to_sample=None)#

Return a list of randomly selected client rank indices. Client IDs range from 0 to self.num_clients - 1.
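A minimal sketch of random client sampling consistent with this description (FedLab's actual implementation delegates to the configured sampler):

```python
import random

def sample_clients(num_clients: int, num_to_sample: int, seed=None):
    """Sketch: select `num_to_sample` distinct client IDs uniformly at
    random from the range [0, num_clients - 1], returned sorted."""
    rng = random.Random(seed)
    return sorted(rng.sample(range(num_clients), num_to_sample))

selected = sample_clients(num_clients=100, num_to_sample=10, seed=0)
print(selected)  # ten distinct IDs, each in [0, 99]
```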

global_update(buffer)#
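global_update is undocumented here. For intuition, the FedMGDA+ paper (Hu et al., "Federated Learning Meets Multi-objective Optimization") aggregates by normalizing each client's pseudo-gradient and solving a min-norm problem over the simplex to find a common descent direction. The sketch below follows that idea using Frank-Wolfe iterations; it is an assumption based on the paper, not FedLab's actual code, and it omits the paper's ε-constraint that keeps the weights close to uniform:

```python
import numpy as np

def fedmgda_aggregate(client_updates, epochs=100):
    """Sketch of FedMGDA-style aggregation (assumption, not FedLab's code).

    1. Normalize each client's pseudo-gradient.
    2. Find simplex weights lambda minimizing ||sum_i lambda_i g_i||
       via Frank-Wolfe iterations.
    3. Return the resulting common update direction.
    """
    G = np.stack([g / (np.linalg.norm(g) + 1e-12) for g in client_updates])
    n = G.shape[0]
    lam = np.full(n, 1.0 / n)        # start from uniform weights
    GG = G @ G.T                     # Gram matrix of normalized gradients
    for t in range(epochs):
        grad = GG @ lam              # gradient of 0.5 * ||G.T @ lam||^2
        vertex = np.zeros(n)
        vertex[int(np.argmin(grad))] = 1.0   # best simplex vertex
        step = 2.0 / (t + 2.0)       # standard Frank-Wolfe step size
        lam = (1.0 - step) * lam + step * vertex
    return lam @ G                   # common descent direction
```

With two conflicting clients (opposite gradients) the weighted combination shrinks toward zero, while orthogonal clients yield a balanced compromise direction; the server would then apply this direction to the global model, scaled by the learning rate from setup_optim.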