ifca#

Module Contents#

IFCAServerHander

Synchronous Parameter Server Handler.

IFCASerialClientTrainer

Train multiple clients in a single process.

class IFCAServerHander(model: torch.nn.Module, global_round: int, sample_ratio: float, cuda: bool = False, device: str = None, logger=None)#

Bases: fedlab.contrib.algorithm.basic_server.SyncServerHandler

Synchronous Parameter Server Handler.

Backend of the synchronous parameter server: this class is responsible for the backend computing of a synchronous server.

The synchronous parameter server waits for every client to finish its local training before starting the next FL round.

Details in paper: http://proceedings.mlr.press/v54/mcmahan17a.html

Parameters:
  • model (torch.nn.Module) – model trained by federated learning.

  • global_round (int) – stop condition. The FL system shuts down once this number of global rounds is reached.

  • num_clients (int) – number of clients in FL. Default: 0 (initialized externally).

  • sample_ratio (float) – client sampling ratio; sample_ratio * num_clients clients are selected in every FL round.

  • cuda (bool) – use GPUs or not. Default: False.

  • device (str, optional) – assign model/data to the given GPUs. E.g., ‘device:0’ or ‘device:0,1’. Defaults to None. If device is None and cuda is True, FedLab will use the GPU with the largest memory as the default.

  • sampler (FedSampler, optional) – assign a sampler to define the client sampling strategy. Default: random sampling with FedSampler.

  • logger (Logger, optional) – object of Logger.

property downlink_package#

Property for the manager layer. The server manager calls this property when it activates clients.

setup_optim(share_size, k, init_parameters)#

Set up the optimization configuration for IFCA’s clustered aggregation.

Parameters:
  • share_size (int) – size of the parameter block shared across clusters.

  • k (int) – number of cluster models maintained by the server.

  • init_parameters (list[torch.Tensor]) – initial serialized parameters, one per cluster.
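A minimal usage sketch, assuming the fedlab.contrib.algorithm.ifca import path implied by this page; the model, k, and share_size values are illustrative, and SerializationTool.serialize_model is FedLab’s helper for flattening a model into a single tensor:

  import torch
  from fedlab.utils.serialization import SerializationTool
  from fedlab.contrib.algorithm.ifca import IFCAServerHander

  model = torch.nn.Linear(10, 2)  # illustrative model
  handler = IFCAServerHander(model=model, global_round=100, sample_ratio=0.1)
  handler.num_clients = 100  # initialized externally, as noted above

  k = 3  # illustrative number of clusters
  # Start every cluster model from the same serialized parameters.
  init_parameters = [SerializationTool.serialize_model(model) for _ in range(k)]
  # share_size is an assumption here: the size of the parameter block
  # shared across clusters.
  handler.setup_optim(share_size=10, k=k, init_parameters=init_parameters)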

global_update(buffer)#

Aggregate the uploads cluster by cluster and update the corresponding cluster models.
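A hedged sketch of the cluster-wise aggregation this step performs in IFCA, shown as a method body as it might appear on the handler; this is not necessarily FedLab’s exact implementation. It assumes each buffer entry is a (cluster_id, serialized_parameters) pair, that self.k and self.global_models were set by setup_optim(), and it uses FedLab’s Aggregators.fedavg_aggregate helper:

  from fedlab.utils.aggregator import Aggregators

  def global_update(self, buffer):
      # Group uploads by the cluster id each client selected locally.
      cluster_buffer = [[] for _ in range(self.k)]
      for cluster_id, parameters in buffer:
          cluster_buffer[int(cluster_id)].append(parameters)

      # FedAvg within each non-empty cluster; clusters that received
      # no uploads keep their previous parameters.
      for i, uploads in enumerate(cluster_buffer):
          if len(uploads) > 0:
              self.global_models[i] = Aggregators.fedavg_aggregate(uploads)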
class IFCASerialClientTrainer(model, num_clients, cuda=False, device=None, logger=None, personal=False)#

Bases: fedlab.contrib.algorithm.basic_client.SGDSerialClientTrainer

Train multiple clients in a single process.

Customize _get_dataloader() or _train_alone() for specific algorithm design in clients.

Parameters:
  • model (torch.nn.Module) – Model used in this federation.

  • num_clients (int) – Number of clients in current trainer.

  • cuda (bool) – Use GPUs or not. Default: False.

  • device (str, optional) – Assign model/data to the given GPUs. E.g., ‘device:0’ or ‘device:0,1’. Defaults to None.

  • logger (Logger, optional) – Object of Logger.

  • personal (bool, optional) – If True is passed, SerialModelMaintainer will keep a separate copy of the local parameters for each client, indexed by [0, num_clients-1]. Defaults to False.

setup_dataset(dataset)#

Override this function to set up the local dataset for clients.

setup_optim(epochs, batch_size, lr)#

Set up local optimization configuration.

Parameters:
  • epochs (int) – Local epochs.

  • batch_size (int) – Local batch size.

  • lr (float) – Learning rate.
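A hedged end-to-end setup sketch for the trainer. ToyFedDataset is a hypothetical stand-in for a FedLab federated dataset; only a get_dataloader(client_id, batch_size) method is assumed here:

  import torch
  from torch.utils.data import DataLoader, TensorDataset
  from fedlab.contrib.algorithm.ifca import IFCASerialClientTrainer

  class ToyFedDataset:
      # Hypothetical stand-in: random local data for every client.
      def get_dataloader(self, client_id, batch_size):
          x, y = torch.randn(64, 10), torch.randint(0, 2, (64,))
          return DataLoader(TensorDataset(x, y), batch_size=batch_size)

  model = torch.nn.Linear(10, 2)  # illustrative model
  trainer = IFCASerialClientTrainer(model, num_clients=4)
  trainer.setup_dataset(ToyFedDataset())
  trainer.setup_optim(epochs=5, batch_size=32, lr=0.1)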

local_process(payload, id_list)#

Define the local main process.
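A hedged sketch of the IFCA client step, illustrative rather than FedLab’s exact code, with method bodies shown as they might appear on the trainer class: each client picks the cluster model with the lowest loss on its local data, trains from that model, and uploads the chosen cluster id together with the trained parameters. It assumes payload holds the k serialized cluster models, that self.dataset, self.batch_size, self.train, and self.cache behave as in SGDSerialClientTrainer, and _cluster_loss is a hypothetical helper:

  import torch
  from fedlab.utils.serialization import SerializationTool

  @torch.no_grad()
  def _cluster_loss(self, loader):
      # Hypothetical helper: mean cross-entropy of the current model
      # on one client's local data.
      criterion = torch.nn.CrossEntropyLoss()
      total, n = 0.0, 0
      for data, target in loader:
          output = self.model(data)
          total += criterion(output, target).item() * len(target)
          n += len(target)
      return total / n

  def local_process(self, payload, id_list):
      cluster_models = payload  # assumed: the k serialized cluster models
      for client_id in id_list:
          loader = self.dataset.get_dataloader(client_id, self.batch_size)

          # Cluster estimation: the model with the lowest local loss wins.
          losses = []
          for parameters in cluster_models:
              SerializationTool.deserialize_model(self.model, parameters)
              losses.append(self._cluster_loss(loader))
          best = losses.index(min(losses))

          # Local training starts from the selected cluster model; upload
          # the cluster id alongside the trained parameters.
          pack = self.train(cluster_models[best], loader)
          self.cache.append([best] + pack)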