powerofchoice#

Module Contents#

PowerofchoicePipeline

Powerofchoice – Synchronous Parameter Server Handler.

PowerofchoiceSerialClientTrainer – Deprecated. Train multiple clients in a single process.

class PowerofchoicePipeline(handler: fedlab.core.server.handler.ServerHandler, trainer: fedlab.core.client.trainer.SerialClientTrainer)#

Bases: fedlab.core.standalone.StandalonePipeline

main()#
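
A Power-of-Choice round differs from plain synchronous FedAvg in the selection step: the server first draws a candidate pool, collects the candidates' current local losses, and only then picks the participants. The loop below is a minimal sketch of this flow using FedLab's usual standalone interfaces (if_stop, downlink_package, uplink_package, local_process, load); the actual main() may differ in detail.

    def power_of_choice_rounds(handler, trainer):
        # Run rounds until the handler's stop condition (global_round) is met.
        while not handler.if_stop:
            # 1. Draw a candidate pool and query its current local losses.
            candidates = handler.sample_candidates()
            losses = trainer.evaluate(candidates, handler.model_parameters)

            # 2. Keep the candidates with the largest losses for this round.
            selected = handler.sample_clients(candidates, losses)

            # 3. Ordinary synchronous round: broadcast, local training, aggregation.
            payload = handler.downlink_package
            trainer.local_process(payload, selected)
            for package in trainer.uplink_package:
                handler.load(package)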
class Powerofchoice(model: torch.nn.Module, global_round: int, sample_ratio: float, cuda: bool = False, device: str = None, logger: fedlab.utils.Logger = None)#

Bases: fedlab.contrib.algorithm.basic_server.SyncServerHandler

Synchronous Parameter Server Handler.

Backend of a synchronous parameter server: this class is responsible for the backend computing of the synchronous server.

The synchronous parameter server waits for every client to finish its local training before starting the next FL round.

Details in paper: http://proceedings.mlr.press/v54/mcmahan17a.html

Parameters
  • model (torch.nn.Module) – Model used in this federation.

  • global_round (int) – Stop condition. The FL system shuts down when this number of global rounds is reached.

  • sample_ratio (float) – sample_ratio * num_clients gives the number of clients selected in every FL round.

  • cuda (bool) – Use GPUs or not. Default: False.

  • device (str, optional) – Assign model/data to the given GPUs. E.g., ‘device:0’ or ‘device:0,1’. Defaults to None. If device is None and cuda is True, FedLab will use the GPU with the largest memory as the default.

  • logger (Logger, optional) – Object of Logger.

setup_optim(d)#

Override this function to load your optimization hyperparameters.
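
For example, a handler could be constructed and given its selection hyperparameter as below; the import path and the reading of d as the candidate-pool size are assumptions for illustration.

    import torch
    from fedlab.contrib.algorithm.powerofchoice import Powerofchoice

    model = torch.nn.Linear(784, 10)  # placeholder model; any torch.nn.Module works

    handler = Powerofchoice(model=model, global_round=50, sample_ratio=0.1, cuda=False)
    handler.setup_optim(d=20)  # d: assumed candidate-pool size drawn each round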

sample_candidates()#
sample_clients(candidates, losses)#

Return a list of client rank indices selected from the candidates. Client IDs range from 0 to self.num_clients - 1.
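
The selection rule behind these two methods can be sketched in plain Python (illustration only, not the library code): draw d candidates uniformly at random, then keep the round's quota of clients with the largest reported losses.

    import random

    def power_of_choice_select(num_clients, d, num_per_round, client_loss):
        # Stage 1: uniform candidate pool of size d (requires d >= num_per_round).
        candidates = random.sample(range(num_clients), d)
        # Stage 2: rank candidates by current local loss, largest first.
        losses = [client_loss(cid) for cid in candidates]
        ranked = sorted(zip(candidates, losses), key=lambda pair: pair[1], reverse=True)
        # Keep the top num_per_round clients for this round.
        return [cid for cid, _ in ranked[:num_per_round]]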

class PowerofchoiceSerialClientTrainer(model, num_clients, cuda=False, device=None, logger=None, personal=False)#

Bases: fedlab.contrib.algorithm.basic_client.SGDSerialClientTrainer

Deprecated. Train multiple clients in a single process.

Customize _get_dataloader() or _train_alone() for specific algorithm design in clients.

Parameters
  • model (torch.nn.Module) – Model used in this federation.

  • num_clients (int) – Number of clients in the current trainer.

  • cuda (bool) – Use GPUs or not. Default: False.

  • device (str, optional) – Assign model/data to the given GPUs. E.g., ‘device:0’ or ‘device:0,1’. Defaults to None.

  • logger (Logger, optional) – Object of Logger.

  • personal (bool, optional) – If True is passed, SerialModelMaintainer will generate a copy of the local parameters list and maintain the copies separately. These parameters are indexed by [0, num_clients-1]. Defaults to False.

evaluate(id_list, model_parameters)#

Evaluate the quality of the local model.
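
Putting the pieces together, a standalone run could be wired up as below (model and handler as constructed in the setup_optim example above). The dataset object and the trainer's setup_dataset / setup_optim calls follow FedLab's SGDSerialClientTrainer conventions and are assumptions here; evaluate() is assumed to return one loss per client id, which sample_clients() then ranks.

    from fedlab.contrib.algorithm.powerofchoice import (
        PowerofchoicePipeline, PowerofchoiceSerialClientTrainer)

    trainer = PowerofchoiceSerialClientTrainer(model, num_clients=100, cuda=False)
    trainer.setup_dataset(fed_dataset)  # fed_dataset: any FedLab federated dataset (hypothetical)
    trainer.setup_optim(epochs=5, batch_size=64, lr=0.1)

    # Per-client losses for a hand-picked id_list, evaluated with the current global model.
    losses = trainer.evaluate([0, 1, 2], handler.model_parameters)

    # Run the full Power-of-Choice training loop.
    pipeline = PowerofchoicePipeline(handler=handler, trainer=trainer)
    pipeline.main()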