algorithm#

Package Contents#

SGDClientTrainer

Client backend handler; this class provides data processing methods to the upper layer.

SGDSerialClientTrainer

Train multiple clients in a single process.

SyncServerHandler

Synchronous Parameter Server Handler.

AsyncServerHandler

Asynchronous Parameter Server Handler.

DittoSerialClientTrainer

Train multiple clients in a single process.

DittoServerHandler

Ditto server acts the same as the FedAvg server.

FedAvgSerialClientTrainer

Federated client with local SGD solver.

FedAvgServerHandler

FedAvg server handler.

FedDynSerialClientTrainer

Train multiple clients in a single process.

FedDynServerHandler

FedDyn server handler.

FedNovaSerialClientTrainer

Federated client with local SGD solver.

FedNovaServerHandler

FedNova server handler.

FedProxSerialClientTrainer

Train multiple clients in a single process.

FedProxClientTrainer

Federated client using a local SGD solver with a proximal term.

FedProxServerHandler

FedProx server handler.

IFCASerialClientTrainer

Train multiple clients in a single process.

IFCAServerHander

Synchronous Parameter Server Handler.

PowerofchoiceSerialClientTrainer

Train multiple clients in a single process.

PowerofchoicePipeline

Powerofchoice

Synchronous Parameter Server Handler.

qFedAvgClientTrainer

Federated client with modified upload package and local SGD solver.

qFedAvgServerHandler

qFedAvg server handler.

ScaffoldSerialClientTrainer

Train multiple clients in a single process.

ScaffoldServerHandler

SCAFFOLD server handler.

class SGDClientTrainer(model: torch.nn.Module, cuda: bool = False, device: str = None, logger: fedlab.utils.Logger = None)#

Bases: fedlab.core.client.trainer.ClientTrainer

Client backend handler; this class provides data processing methods to the upper layer.

Parameters:
  • model (torch.nn.Module) – PyTorch model.

  • cuda (bool, optional) – use GPUs or not. Default: False.

  • device (str, optional) – Assign model/data to the given GPUs. E.g., ‘device:0’ or ‘device:0,1’. Defaults to None.

  • logger (Logger, optional) – Object of Logger.

property uplink_package#

Return a tensor list for uploading to server.

This attribute will be accessed by the client manager. Customize it for new algorithms.

setup_dataset(dataset)#

Set up local dataset self.dataset for clients.

setup_optim(epochs, batch_size, lr)#

Set up local optimization configuration.

Parameters:
  • epochs (int) – Local epochs.

  • batch_size (int) – Local batch size.

  • lr (float) – Learning rate.

local_process(payload, id)#

The manager of the upper layer will call this method with the accepted payload.

In synchronous mode, returning True ends the current FL round.

train(model_parameters, train_loader) → None#

Client trains its local model on local dataset.

Parameters:
  • model_parameters (torch.Tensor) – Serialized model parameters.

  • train_loader (torch.utils.data.DataLoader) – DataLoader for this client’s local dataset.
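A minimal usage sketch of the call sequence above: the manager delivers a payload whose first element is the serialized global model, local_process runs local SGD, and uplink_package is what gets sent back. Here `my_dataset` is a hypothetical FedLab-style dataset partition, not part of this package.

import torch.nn as nn
from fedlab.contrib.algorithm.basic_client import SGDClientTrainer
from fedlab.utils.serialization import SerializationTool

model = nn.Sequential(nn.Linear(10, 10), nn.ReLU(), nn.Linear(10, 2))
trainer = SGDClientTrainer(model, cuda=False)

trainer.setup_dataset(my_dataset)  # hypothetical local data partition
trainer.setup_optim(epochs=5, batch_size=32, lr=0.1)

payload = [SerializationTool.serialize_model(model)]  # what the server broadcasts
trainer.local_process(payload, id=0)                  # one round of local SGD
package = trainer.uplink_package                      # tensors sent back to the server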

class SGDSerialClientTrainer(model, num_clients, cuda=False, device=None, logger=None, personal=False)#

Bases: fedlab.core.client.trainer.SerialClientTrainer

Train multiple clients in a single process.

Customize _get_dataloader() or _train_alone() for specific algorithm design in clients.

Parameters:
  • model (torch.nn.Module) – Model used in this federation.

  • num_clients (int) – Number of clients in current trainer.

  • cuda (bool) – Use GPUs or not. Default: False.

  • device (str, optional) – Assign model/data to the given GPUs. E.g., ‘device:0’ or ‘device:0,1’. Defaults to None.

  • logger (Logger, optional) – Object of Logger.

  • personal (bool, optional) – If True is passed, SerialModelMaintainer will generate a copy of the local parameters list and maintain them respectively. These parameters are indexed by [0, num_clients - 1]. Defaults to False.

Return a tensor list for uploading to server.

This attribute will be called by client manager. Customize it for new algorithms.

setup_dataset(dataset)#

Override this function to set up the local dataset for clients.

setup_optim(epochs, batch_size, lr)#

Set up local optimization configuration.

Parameters:
  • epochs (int) – Local epochs.

  • batch_size (int) – Local batch size.

  • lr (float) – Learning rate.

local_process(payload, id_list)#

Define the local main process.

train(model_parameters, train_loader)#

Single round of local training for one client.

Note

Overwrite this method to customize the PyTorch training pipeline.

Parameters:
  • model_parameters (torch.Tensor) – Serialized model parameters.

  • train_loader (torch.utils.data.DataLoader) – DataLoader for this client’s local dataset.

class SyncServerHandler(model: torch.nn.Module, global_round: int, num_clients: int = 0, sample_ratio: float = 1, cuda: bool = False, device: str = None, sampler: fedlab.contrib.client_sampler.base_sampler.FedSampler = None, logger: fedlab.utils.Logger = None)#

Bases: fedlab.core.server.handler.ServerHandler

Synchronous Parameter Server Handler.

Backend of synchronous parameter server: this class is responsible for backend computing in synchronous server.

Synchronous parameter server will wait for every client to finish local training process before the next FL round.

Details in paper: http://proceedings.mlr.press/v54/mcmahan17a.html

Parameters:
  • model (torch.nn.Module) – model trained by federated learning.

  • global_round (int) – stop condition. Shut down FL system when global round is reached.

  • num_clients (int) – number of clients in FL. Default: 0 (initialized externally).

  • sample_ratio (float) – the result of sample_ratio * num_clients is the number of clients for every FL round.

  • cuda (bool) – use GPUs or not. Default: False.

  • device (str, optional) – assign model/data to the given GPUs. E.g., ‘device:0’ or ‘device:0,1’. Defaults to None. If device is None and cuda is True, FedLab will set the gpu with the largest memory as default.

  • sampler (FedSampler, optional) – assign a sampler to define the client sampling strategy. Default: random sampling with FedSampler.

  • logger (Logger, optional) – object of Logger.

property downlink_package#

Property for manager layer. Server manager will call this property when it activates clients.

property num_clients_per_round#
property if_stop#

NetworkManager keeps monitoring this attribute, and it will stop all related processes and threads when True returned.

sample_clients(num_to_sample=None)#

Return a list of client rank indices selected randomly. The client ID ranges from 0 to self.num_clients - 1.

global_update(buffer)#
load(payload: List[torch.Tensor]) → bool#

Update global model with collected parameters from clients.

Note

Server handler will call this method when its client_buffer_cache is full. User can overwrite the strategy of aggregation to apply on model_parameters_list, and use SerializationTool.deserialize_model() to load serialized parameters after aggregation into self._model.

Parameters:

payload (list[torch.Tensor]) – A list of tensors passed by manager layer.
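A minimal standalone round loop wiring SyncServerHandler to SGDSerialClientTrainer, as a sketch of the handler/trainer contract (it mirrors fedlab.core.standalone.StandalonePipeline). `fed_dataset` is assumed to be a FedLab partitioned dataset and is not defined here.

import torch.nn as nn
from fedlab.contrib.algorithm.basic_server import SyncServerHandler
from fedlab.contrib.algorithm.basic_client import SGDSerialClientTrainer

model = nn.Linear(784, 10)
handler = SyncServerHandler(model, global_round=10, num_clients=100, sample_ratio=0.1)
trainer = SGDSerialClientTrainer(model, num_clients=100)
trainer.setup_dataset(fed_dataset)  # assumed partitioned dataset
trainer.setup_optim(epochs=5, batch_size=64, lr=0.1)

while not handler.if_stop:
    sampled = handler.sample_clients()                 # this round's participants
    trainer.local_process(handler.downlink_package, sampled)
    for pack in trainer.uplink_package:
        handler.load(pack)                             # aggregates once the buffer is full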

class AsyncServerHandler(model: torch.nn.Module, global_round: int, num_clients: int, cuda: bool = False, device: str = None, logger: fedlab.utils.Logger = None)#

Bases: fedlab.core.server.handler.ServerHandler

Asynchronous Parameter Server Handler.

Update global model immediately after receiving a ParameterUpdate message. Details in paper: https://arxiv.org/abs/1903.03934

Parameters:
  • model (torch.nn.Module) – Global model in server

  • global_round (int) – stop condition. Shut down FL system when global round is reached.

  • num_clients (int) – number of clients in FL.

  • cuda (bool) – Use GPUs or not.

  • device (str, optional) – Assign model/data to the given GPUs. E.g., ‘device:0’ or ‘device:0,1’. Defaults to None. If device is None and cuda is True, FedLab will set the gpu with the largest memory as default.

  • logger (Logger, optional) – Object of Logger.

property if_stop#

NetworkManager keeps monitoring this attribute, and it will stop all related processes and threads when True returned.

property downlink_package#

Property for manager layer. Server manager will call this property when it activates clients.

setup_optim(alpha, strategy='constant', a=10, b=4)#

Setup optimization configuration.

Parameters:
  • alpha (float) – Weight used in async aggregation.

  • strategy (str, optional) – Adaptive strategy; one of constant, hinge, or polynomial. Defaults to ‘constant’.

  • a (int, optional) – Parameter used in async aggregation. Defaults to 10.

  • b (int, optional) – Parameter used in async aggregation. Defaults to 4.

global_update(buffer)#
load(payload: List[torch.Tensor]) → bool#

Override this function to define how to update global model (aggregation or optimization).

adapt_alpha(receive_model_time)#

Update alpha according to staleness.
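A sketch of the three staleness strategies from the FedAsync paper (https://arxiv.org/abs/1903.03934), with alpha, a, and b matching setup_optim’s arguments; the server then mixes (1 - alpha_t) * global + alpha_t * client.

def adapt_alpha(alpha, strategy, current_round, receive_model_time, a=10, b=4):
    # staleness = how many rounds old the received model is
    staleness = current_round - receive_model_time
    if strategy == "constant":
        scale = 1.0
    elif strategy == "hinge":
        scale = 1.0 if staleness <= b else 1.0 / (a * (staleness - b) + 1.0)
    elif strategy == "polynomial":
        scale = float(staleness + 1) ** (-a)
    else:
        raise ValueError(strategy)
    return alpha * scale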

class DittoSerialClientTrainer(model, num, cuda=False, device=None, logger=None, personal=True)#

Bases: fedlab.contrib.algorithm.basic_client.SGDSerialClientTrainer

Train multiple clients in a single process.

Customize _get_dataloader() or _train_alone() for specific algorithm design in clients.

Parameters:
  • model (torch.nn.Module) – Model used in this federation.

  • num_clients (int) – Number of clients in current trainer.

  • cuda (bool) – Use GPUs or not. Default: False.

  • device (str, optional) – Assign model/data to the given GPUs. E.g., ‘device:0’ or ‘device:0,1’. Defaults to None.

  • logger (Logger, optional) – Object of Logger.

  • personal (bool, optional) – If True is passed, SerialModelMaintainer will generate a copy of the local parameters list and maintain them respectively. These parameters are indexed by [0, num_clients - 1]. Defaults to True.

property uplink_package#

Return a tensor list for uploading to server.

This attribute will be accessed by the client manager. Customize it for new algorithms.

setup_dataset(dataset)#

Override this function to set up the local dataset for clients.

setup_optim(epochs, batch_size, lr)#

Set up local optimization configuration.

Parameters:
  • epochs (int) – Local epochs.

  • batch_size (int) – Local batch size.

  • lr (float) – Learning rate.

local_process(payload, id_list)#

Define the local main process.

train(global_model_parameters, local_model_parameters, train_loader)#

Single round of local training for one client.

Note

Overwrite this method to customize the PyTorch training pipeline.

Parameters:
  • global_model_parameters (torch.Tensor) – Serialized global model parameters.

  • local_model_parameters (torch.Tensor) – Serialized personal model parameters of this client.

  • train_loader (torch.utils.data.DataLoader) – DataLoader for this client’s local dataset.

class DittoServerHandler(model: torch.nn.Module, global_round: int, num_clients: int = 0, sample_ratio: float = 1, cuda: bool = False, device: str = None, sampler: fedlab.contrib.client_sampler.base_sampler.FedSampler = None, logger: fedlab.utils.Logger = None)#

Bases: fedlab.contrib.algorithm.basic_server.SyncServerHandler

Ditto server acts the same as the FedAvg server.
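A sketch of Ditto’s personalized objective (Li et al., 2021): the personal model is trained with a proximal pull toward the current global model. The name `lam` for the regularization weight is an illustrative assumption, not this trainer’s attribute.

import torch

def ditto_personal_loss(task_loss, personal_model, global_params, lam):
    # task_loss: ordinary criterion output of the personal model
    # global_params: parameter tensors of the received global model
    prox = sum(torch.sum((w - g.detach()) ** 2)
               for w, g in zip(personal_model.parameters(), global_params))
    return task_loss + 0.5 * lam * prox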

class FedAvgSerialClientTrainer(model, num_clients, cuda=False, device=None, logger=None, personal=False)#

Bases: fedlab.contrib.algorithm.basic_client.SGDSerialClientTrainer

Federated client with local SGD solver.

train(model_parameters, train_loader)#

Single round of local training for one client.

Note

Overwrite this method to customize the PyTorch training pipeline.

Parameters:
  • model_parameters (torch.Tensor) – Serialized model parameters.

  • train_loader (torch.utils.data.DataLoader) – DataLoader for this client’s local dataset.

class FedAvgServerHandler(model: torch.nn.Module, global_round: int, num_clients: int = 0, sample_ratio: float = 1, cuda: bool = False, device: str = None, sampler: fedlab.contrib.client_sampler.base_sampler.FedSampler = None, logger: fedlab.utils.Logger = None)#

Bases: fedlab.contrib.algorithm.basic_server.SyncServerHandler

FedAvg server handler.

global_update(buffer)#
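
A sketch of a typical global_update for FedAvg, assuming each package in buffer carries the serialized client model as its first element; real packages may also carry aggregation weights (e.g., local data sizes) as pack[1].

from fedlab.utils.aggregator import Aggregators
from fedlab.utils.serialization import SerializationTool

def global_update(self, buffer):
    parameters_list = [pack[0] for pack in buffer]   # serialized client models
    averaged = Aggregators.fedavg_aggregate(parameters_list)
    SerializationTool.deserialize_model(self._model, averaged)
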
class FedDynSerialClientTrainer(model, num_clients, cuda=False, device=None, logger=None, personal=False)#

Bases: fedlab.contrib.algorithm.basic_client.SGDSerialClientTrainer

Train multiple clients in a single process.

Customize _get_dataloader() or _train_alone() for specific algorithm design in clients.

Parameters:
  • model (torch.nn.Module) – Model used in this federation.

  • num_clients (int) – Number of clients in current trainer.

  • cuda (bool) – Use GPUs or not. Default: False.

  • device (str, optional) – Assign model/data to the given GPUs. E.g., ‘device:0’ or ‘device:0,1’. Defaults to None.

  • logger (Logger, optional) – Object of Logger.

  • personal (bool, optional) – If True is passed, SerialModelMaintainer will generate a copy of the local parameters list and maintain them respectively. These parameters are indexed by [0, num_clients - 1]. Defaults to False.

setup_dataset(dataset)#

Override this function to set up the local dataset for clients.

setup_optim(epochs, batch_size, lr, alpha)#

Set up local optimization configuration.

Parameters:
  • epochs (int) – Local epochs.

  • batch_size (int) – Local batch size.

  • lr (float) – Learning rate.

  • alpha (float) – Coefficient of the FedDyn dynamic regularizer.

local_process(payload, id_list)#

Define the local main process.

train(id, model_parameters, train_loader)#

Single round of local training for one client.

Note

Overwrite this method to customize the PyTorch training pipeline.

Parameters:
  • id (int) – Client id.

  • model_parameters (torch.Tensor) – Serialized model parameters.

  • train_loader (torch.utils.data.DataLoader) – DataLoader for this client’s local dataset.

class FedDynServerHandler(model: torch.nn.Module, global_round: int, num_clients: int = 0, sample_ratio: float = 1, cuda: bool = False, device: str = None, sampler: fedlab.contrib.client_sampler.base_sampler.FedSampler = None, logger: fedlab.utils.Logger = None)#

Bases: fedlab.contrib.algorithm.basic_server.SyncServerHandler

FedDyn server handler.

setup_optim(alpha)#

Override this function to load your optimization hyperparameters.

global_update(buffer)#
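
A sketch of FedDyn’s client-side penalty (Acar et al., 2021) added on top of the task loss: a linear term built from the client’s accumulated gradient state and a quadratic pull toward the received global parameters, both controlled by alpha. The names here are illustrative, not the trainer’s attributes.

import torch

def feddyn_penalty(flat_w, flat_global_w, local_grad_state, alpha):
    # flat_w: current local parameters, flattened into one tensor
    linear = -torch.dot(local_grad_state, flat_w)
    prox = 0.5 * alpha * torch.sum((flat_w - flat_global_w) ** 2)
    return linear + prox
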
class FedNovaSerialClientTrainer(model, num_clients, cuda=False, device=None, logger=None, personal=False)#

Bases: fedlab.contrib.algorithm.basic_client.SGDSerialClientTrainer

Federated client with local SGD solver.

local_process(payload, id_list)#

Define the local main process.

class FedNovaServerHandler(model: torch.nn.Module, global_round: int, num_clients: int = 0, sample_ratio: float = 1, cuda: bool = False, device: str = None, sampler: fedlab.contrib.client_sampler.base_sampler.FedSampler = None, logger: fedlab.utils.Logger = None)#

Bases: fedlab.contrib.algorithm.basic_server.SyncServerHandler

FedNova server handler.

setup_optim(option='weighted_scale')#

Override this function to load your optimization hyperparameters.

global_update(buffer)#
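
A sketch of FedNova’s normalized aggregation (Wang et al., 2020) on serialized (flattened) parameters: each client’s progress is normalized by its local step count before averaging, then rescaled by an effective step count. Variants selected via `option` (e.g., ‘weighted_scale’) change how the weights and rescaling are computed.

def fednova_aggregate(global_w, client_ws, taus, weights):
    # weights should sum to 1 (e.g., proportional to local data sizes)
    deltas = [(global_w - w) / tau for w, tau in zip(client_ws, taus)]
    tau_eff = sum(p * tau for p, tau in zip(weights, taus))
    avg_delta = sum(p * d for p, d in zip(weights, deltas))
    return global_w - tau_eff * avg_delta
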
class FedProxSerialClientTrainer(model, num_clients, cuda=False, device=None, logger=None, personal=False)#

Bases: fedlab.contrib.algorithm.basic_client.SGDSerialClientTrainer

Train multiple clients in a single process.

Customize _get_dataloader() or _train_alone() for specific algorithm design in clients.

Parameters:
  • model (torch.nn.Module) – Model used in this federation.

  • num_clients (int) – Number of clients in current trainer.

  • cuda (bool) – Use GPUs or not. Default: False.

  • device (str, optional) – Assign model/data to the given GPUs. E.g., ‘device:0’ or ‘device:0,1’. Defaults to None.

  • logger (Logger, optional) – Object of Logger.

  • personal (bool, optional) – If True is passed, SerialModelMaintainer will generate a copy of the local parameters list and maintain them respectively. These parameters are indexed by [0, num_clients - 1]. Defaults to False.

setup_optim(epochs, batch_size, lr, mu)#

Set up local optimization configuration.

Parameters:
  • epochs (int) – Local epochs.

  • batch_size (int) – Local batch size.

  • lr (float) – Learning rate.

  • mu (float) – Coefficient of the proximal term.

local_process(payload, id_list)#

Define the local main process.

train(model_parameters, train_loader, mu) → None#

Client trains its local model on local dataset.

Parameters:
  • model_parameters (torch.Tensor) – Serialized model parameters.

  • train_loader (torch.utils.data.DataLoader) – DataLoader for this client’s local dataset.

  • mu (float) – Coefficient of the proximal term.

class FedProxClientTrainer(model: torch.nn.Module, cuda: bool = False, device: str = None, logger: fedlab.utils.Logger = None)#

Bases: fedlab.contrib.algorithm.basic_client.SGDClientTrainer

Federated client using a local SGD solver with a proximal term.

setup_optim(epochs, batch_size, lr, mu)#

Set up local optimization configuration.

Parameters:
  • epochs (int) – Local epochs.

  • batch_size (int) – Local batch size.

  • lr (float) – Learning rate.

  • mu (float) – Coefficient of the proximal term.

local_process(payload, id)#

The manager of the upper layer will call this method with the accepted payload.

In synchronous mode, returning True ends the current FL round.

train(model_parameters, train_loader, mu) → None#

Client trains its local model on local dataset.

Parameters:
  • model_parameters (torch.Tensor) – Serialized model parameters.

  • train_loader (torch.utils.data.DataLoader) – DataLoader for this client’s local dataset.

  • mu (float) – Coefficient of the proximal term.
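A sketch of the FedProx local objective (Li et al., 2020) that mu controls: the ordinary task loss plus a proximal term keeping local weights close to the received global weights.

import torch

def fedprox_loss(task_loss, model, global_params, mu):
    # global_params: parameter tensors of the received global model
    prox = sum(torch.sum((w - g.detach()) ** 2)
               for w, g in zip(model.parameters(), global_params))
    return task_loss + 0.5 * mu * prox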

class FedProxServerHandler(model: torch.nn.Module, global_round: int, num_clients: int = 0, sample_ratio: float = 1, cuda: bool = False, device: str = None, sampler: fedlab.contrib.client_sampler.base_sampler.FedSampler = None, logger: fedlab.utils.Logger = None)#

Bases: fedlab.contrib.algorithm.basic_server.SyncServerHandler

FedProx server handler.

class IFCASerialClientTrainer(model, num_clients, cuda=False, device=None, logger=None, personal=False)#

Bases: fedlab.contrib.algorithm.basic_client.SGDSerialClientTrainer

Train multiple clients in a single process.

Customize _get_dataloader() or _train_alone() for specific algorithm design in clients.

Parameters:
  • model (torch.nn.Module) – Model used in this federation.

  • num_clients (int) – Number of clients in current trainer.

  • cuda (bool) – Use GPUs or not. Default: False.

  • device (str, optional) – Assign model/data to the given GPUs. E.g., ‘device:0’ or ‘device:0,1’. Defaults to None.

  • logger (Logger, optional) – Object of Logger.

  • personal (bool, optional) – If True is passed, SerialModelMaintainer will generate a copy of the local parameters list and maintain them respectively. These parameters are indexed by [0, num_clients - 1]. Defaults to False.

setup_dataset(dataset)#

Override this function to set up the local dataset for clients.

setup_optim(epochs, batch_size, lr)#

Set up local optimization configuration.

Parameters:
  • epochs (int) – Local epochs.

  • batch_size (int) – Local batch size.

  • lr (float) – Learning rate.

local_process(payload, id_list)#

Define the local main process.

class IFCAServerHander(model: torch.nn.Module, global_round: int, sample_ratio: float, cuda: bool = False, device: str = None, logger=None)#

Bases: fedlab.contrib.algorithm.basic_server.SyncServerHandler

Synchronous Parameter Server Handler.

Backend of synchronous parameter server: this class is responsible for backend computing in synchronous server.

Synchronous parameter server will wait for every client to finish local training process before the next FL round.

Details in paper: http://proceedings.mlr.press/v54/mcmahan17a.html

Parameters:
  • model (torch.nn.Module) – model trained by federated learning.

  • global_round (int) – stop condition. Shut down FL system when global round is reached.

  • num_clients (int) – number of clients in FL. Default: 0 (initialized externally).

  • sample_ratio (float) – the result of sample_ratio * num_clients is the number of clients for every FL round.

  • cuda (bool) – use GPUs or not. Default: False.

  • device (str, optional) – assign model/data to the given GPUs. E.g., ‘device:0’ or ‘device:0,1’. Defaults to None. If device is None and cuda is True, FedLab will set the gpu with the largest memory as default.

  • sampler (FedSampler, optional) – assign a sampler to define the client sampling strategy. Default: random sampling with FedSampler.

  • logger (Logger, optional) – object of Logger.

property downlink_package#

Property for manager layer. Server manager will call this property when it activates clients.

setup_optim(share_size, k, init_parameters)#

Set up the optimization configuration for IFCA.

Parameters:
  • share_size (int) – Size of the parameter block shared across cluster models.

  • k (int) – Number of clusters.

  • init_parameters (list[torch.Tensor]) – Initial serialized parameters, one per cluster model.

global_update(buffer)#
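
A sketch of IFCA’s cluster selection (Ghosh et al., 2020): a client evaluates every cluster model it receives and adopts the one with the lowest loss on its own data; only that cluster’s update is uploaded.

import torch

def select_cluster(cluster_models, data_loader, criterion):
    losses = []
    for model in cluster_models:
        model.eval()
        total = 0.0
        with torch.no_grad():
            for x, y in data_loader:
                total += criterion(model(x), y).item()
        losses.append(total)
    return losses.index(min(losses))  # index of the best-fitting cluster
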
class PowerofchoiceSerialClientTrainer(model, num_clients, cuda=False, device=None, logger=None, personal=False)#

Bases: fedlab.contrib.algorithm.basic_client.SGDSerialClientTrainer

Train multiple clients in a single process.

Customize _get_dataloader() or _train_alone() for specific algorithm design in clients.

Parameters:
  • model (torch.nn.Module) – Model used in this federation.

  • num_clients (int) – Number of clients in current trainer.

  • cuda (bool) – Use GPUs or not. Default: False.

  • device (str, optional) – Assign model/data to the given GPUs. E.g., ‘device:0’ or ‘device:0,1’. Defaults to None.

  • logger (Logger, optional) – Object of Logger.

  • personal (bool, optional) – If True is passed, SerialModelMaintainer will generate a copy of the local parameters list and maintain them respectively. These parameters are indexed by [0, num_clients - 1]. Defaults to False.

evaluate(id_list, model_parameters)#

Evaluate local models of the given clients; the resulting losses feed Power-of-Choice client selection.

class PowerofchoicePipeline(handler: fedlab.core.server.handler.ServerHandler, trainer: fedlab.core.client.trainer.SerialClientTrainer)#

Bases: fedlab.core.standalone.StandalonePipeline

main()#
class Powerofchoice(model: torch.nn.Module, global_round: int, num_clients: int = 0, sample_ratio: float = 1, cuda: bool = False, device: str = None, sampler: fedlab.contrib.client_sampler.base_sampler.FedSampler = None, logger: fedlab.utils.Logger = None)#

Bases: fedlab.contrib.algorithm.basic_server.SyncServerHandler

Synchronous Parameter Server Handler.

Backend of synchronous parameter server: this class is responsible for backend computing in synchronous server.

Synchronous parameter server will wait for every client to finish local training process before the next FL round.

Details in paper: http://proceedings.mlr.press/v54/mcmahan17a.html

Parameters:
  • model (torch.nn.Module) – model trained by federated learning.

  • global_round (int) – stop condition. Shut down FL system when global round is reached.

  • num_clients (int) – number of clients in FL. Default: 0 (initialized externally).

  • sample_ratio (float) – the result of sample_ratio * num_clients is the number of clients for every FL round.

  • cuda (bool) – use GPUs or not. Default: False.

  • device (str, optional) – assign model/data to the given GPUs. E.g., ‘device:0’ or ‘device:0,1’. Defaults to None. If device is None and cuda is True, FedLab will set the gpu with the largest memory as default.

  • sampler (FedSampler, optional) – assign a sampler to define the client sampling strategy. Default: random sampling with FedSampler.

  • logger (Logger, optional) – object of Logger.

setup_optim(d)#

Override this function to load your optimization hyperparameters.

Parameters:
  • d (int) – Size of the candidate client set sampled each round.

sample_candidates()#
sample_clients(candidates, losses)#

Return a list of client rank indices chosen from the candidates with the highest local losses (Power-of-Choice selection). The client ID ranges from 0 to self.num_clients - 1.
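A sketch of the two-stage Power-of-Choice procedure (Cho et al., 2020) that sample_candidates and sample_clients implement together; `eval_losses` is a hypothetical callable returning one local loss per candidate.

import random

def powerofchoice_select(num_clients, d, num_to_select, eval_losses):
    candidates = random.sample(range(num_clients), d)   # stage 1: d candidates
    losses = eval_losses(candidates)                    # one loss per candidate
    ranked = sorted(zip(candidates, losses), key=lambda t: t[1], reverse=True)
    return [cid for cid, _ in ranked[:num_to_select]]   # stage 2: highest-loss clients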

class qFedAvgClientTrainer(model: torch.nn.Module, cuda: bool = False, device: str = None, logger: fedlab.utils.Logger = None)#

Bases: fedlab.contrib.algorithm.basic_client.SGDClientTrainer

Federated client with modified upload package and local SGD solver.

property uplink_package#

Return a tensor list for uploading to server.

This attribute will be accessed by the client manager. Customize it for new algorithms.

setup_optim(epochs, batch_size, lr, q)#

Set up local optimization configuration.

Parameters:
  • epochs (int) – Local epochs.

  • batch_size (int) – Local batch size.

  • lr (float) – Learning rate.

  • q (float) – Fairness hyperparameter of qFedAvg.

train(model_parameters, train_loader) → None#

Client trains its local model on local dataset.

Parameters:

model_parameters (torch.Tensor) – Serialized model parameters.

class qFedAvgServerHandler(model: torch.nn.Module, global_round: int, num_clients: int = 0, sample_ratio: float = 1, cuda: bool = False, device: str = None, sampler: fedlab.contrib.client_sampler.base_sampler.FedSampler = None, logger: fedlab.utils.Logger = None)#

Bases: fedlab.contrib.algorithm.basic_server.SyncServerHandler

qFedAvg server handler.

global_update(buffer)#
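
A sketch of the qFedAvg server update (q-FFL, Li et al., 2020) on serialized parameters: higher-loss clients receive larger effective weight, with q the fairness knob and L a Lipschitz estimate (commonly 1/lr).

import torch

def qfedavg_update(global_w, client_ws, client_losses, q, L):
    deltas, hs = [], []
    for w_k, F_k in zip(client_ws, client_losses):
        d_k = L * (global_w - w_k)
        deltas.append((F_k ** q) * d_k)
        hs.append(q * (F_k ** (q - 1)) * torch.sum(d_k ** 2) + L * (F_k ** q))
    return global_w - sum(deltas) / sum(hs)
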
class ScaffoldSerialClientTrainer(model, num_clients, cuda=False, device=None, logger=None, personal=False)#

Bases: fedlab.contrib.algorithm.basic_client.SGDSerialClientTrainer

Train multiple clients in a single process.

Customize _get_dataloader() or _train_alone() for specific algorithm design in clients.

Parameters:
  • model (torch.nn.Module) – Model used in this federation.

  • num_clients (int) – Number of clients in current trainer.

  • cuda (bool) – Use GPUs or not. Default: False.

  • device (str, optional) – Assign model/data to the given GPUs. E.g., ‘device:0’ or ‘device:0,1’. Defaults to None.

  • logger (Logger, optional) – Object of Logger.

  • personal (bool, optional) – If True is passed, SerialModelMaintainer will generate a copy of the local parameters list and maintain them respectively. These parameters are indexed by [0, num_clients - 1]. Defaults to False.

setup_optim(epochs, batch_size, lr)#

Set up local optimization configuration.

Parameters:
  • epochs (int) – Local epochs.

  • batch_size (int) – Local batch size.

  • lr (float) – Learning rate.

local_process(payload, id_list)#

Define the local main process.

train(id, model_parameters, global_c, train_loader)#

Single round of local training for one client.

Note

Overwrite this method to customize the PyTorch training pipeline.

Parameters:
  • id (int) – Client id.

  • model_parameters (torch.Tensor) – Serialized model parameters.

  • global_c (torch.Tensor) – Serialized global control variate.

  • train_loader (torch.utils.data.DataLoader) – DataLoader for this client’s local dataset.

class ScaffoldServerHandler(model: torch.nn.Module, global_round: int, num_clients: int = 0, sample_ratio: float = 1, cuda: bool = False, device: str = None, sampler: fedlab.contrib.client_sampler.base_sampler.FedSampler = None, logger: fedlab.utils.Logger = None)#

Bases: fedlab.contrib.algorithm.basic_server.SyncServerHandler

SCAFFOLD server handler.

property downlink_package#

Property for manager layer. Server manager will call this property when it activates clients.

setup_optim(lr)#

Override this function to load your optimization hyperparameters.

Parameters:
  • lr (float) – Global learning rate.

global_update(buffer)#
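
A sketch of SCAFFOLD’s corrections (Karimireddy et al., 2020): every local SGD step is adjusted by the difference between the server control variate c and the client’s c_i, and c_i is then refreshed from the client’s net movement (the paper’s Option II). Names are illustrative, not this trainer’s attributes.

def scaffold_local_step(w, grad, c, c_i, lr):
    # corrected SGD step: counteracts client drift
    return w - lr * (grad - c_i + c)

def scaffold_refresh_control(c_i, c, w_global, w_local, lr, num_steps):
    # Option II: re-estimate c_i from the net parameter movement
    return c_i - c + (w_global - w_local) / (num_steps * lr)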