feddyn#
Module Contents#
- FedDynServerHandler — FedDyn server handler.
- FedDynSerialClientTrainer — Train multiple clients in a single process.
- class FedDynServerHandler(model: torch.nn.Module, global_round: int, num_clients: int = 0, sample_ratio: float = 1, cuda: bool = False, device: str = None, sampler: fedlab.contrib.client_sampler.base_sampler.FedSampler = None, logger: fedlab.utils.Logger = None)#
Bases:
fedlab.contrib.algorithm.basic_server.SyncServerHandler
FedDyn server handler.
- setup_optim(alpha)#
Override this function to load your optimization hyperparameters. For FedDyn, alpha is the coefficient of the dynamic regularization term.
- global_update(buffer)#
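The FedDyn server keeps a correction state alongside the global model, and global_update() corrects the plain average of client models by that state. A minimal scalar sketch of this aggregation rule, with illustrative names (not the FedLab API) and floats standing in for parameter tensors:

```python
# Conceptual sketch of FedDyn-style server aggregation. All names are
# hypothetical; scalars stand in for model parameter tensors.

def feddyn_global_update(global_model, client_models, h, alpha, num_total_clients):
    """Aggregate sampled client models with FedDyn's corrective h state.

    global_model: previous global parameter
    client_models: parameters returned by the sampled clients
    h: running correction state kept on the server
    alpha: FedDyn regularization coefficient
    num_total_clients: total client population size
    """
    mean_model = sum(client_models) / len(client_models)
    # h accumulates the (population-weighted) average client drift.
    h = h - alpha * (len(client_models) / num_total_clients) * (mean_model - global_model)
    # The new global model corrects the plain average by h / alpha.
    new_global = mean_model - h / alpha
    return new_global, h
```

Dividing the accumulated drift state by alpha converts it back to parameter scale before it corrects the average.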
- class FedDynSerialClientTrainer(model, num_clients, cuda=False, device=None, logger=None, personal=False)#
Bases:
fedlab.contrib.algorithm.basic_client.SGDSerialClientTrainer
Train multiple clients in a single process.
Customize _get_dataloader() or _train_alone() for specific algorithm design in clients.
- Parameters:
model (torch.nn.Module) – Model used in this federation.
num_clients (int) – Number of clients in current trainer.
cuda (bool) – Use GPUs or not. Default: False.
device (str, optional) – Assign model/data to the given GPUs. E.g., ‘device:0’ or ‘device:0,1’. Defaults to None.
logger (Logger, optional) – Object of Logger.
personal (bool, optional) – If True is passed, SerialModelMaintainer will generate a copy of the local parameter list and maintain each copy respectively. These parameters are indexed by [0, num-1]. Defaults to False.
- setup_dataset(dataset)#
Override this function to set up the local dataset for clients.
- setup_optim(epochs, batch_size, lr, alpha)#
Set up local optimization configuration.
- local_process(payload, id_list)#
Define the local main process.
- train(id, model_parameters, train_loader)#
Single round of local training for one client.
Note
Overwrite this method to customize the PyTorch training pipeline.
- Parameters:
model_parameters (torch.Tensor) – Serialized model parameters.
train_loader (torch.utils.data.DataLoader) – torch.utils.data.DataLoader for this client.
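On the client side, local_process() and train() realize FedDyn's local objective: each client minimizes its own loss plus a dynamic regularizer α/2‖θ − θ_g‖² minus an inner product with a per-client state variable, which is refreshed after training. A minimal scalar sketch under these assumptions (illustrative names, not the FedLab API):

```python
# Scalar sketch of one FedDyn local SGD step and the client-state refresh.
# Names and signatures are hypothetical; floats stand in for tensors.

def feddyn_local_step(theta, theta_global, lambda_i, grad_loss, alpha, lr):
    """One SGD step on the FedDyn-augmented local objective:
    loss(theta) + alpha/2 * (theta - theta_global)^2 - lambda_i * theta.
    """
    grad = grad_loss(theta) + alpha * (theta - theta_global) - lambda_i
    return theta - lr * grad

def update_client_state(lambda_i, theta, theta_global, alpha):
    """After local training, refresh the client's state with its drift."""
    return lambda_i - alpha * (theta - theta_global)
```

The state term counteracts each client's drift from the global model, so repeated local rounds stay anchored to the federation-wide solution.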