ml_genn.communicators package
Communicators are objects that handle parallel communication between ranks when training with multiple GPUs.
- class ml_genn.communicators.Communicator
Bases:
ABC
Base class for all communicators
- abstract barrier()
Wait for all ranks to reach this point in execution before continuing
- abstract broadcast(data, root)
Broadcast data from root to all ranks
- Parameters:
data – Data to broadcast
root (int) – Index of rank to broadcast from
- abstract property num_ranks
Gets total number of ranks
- Returns:
int – Total number of ranks
- abstract property rank
Gets index of this rank
- Returns:
int – Index of this rank
- abstract reduce_sum(value)
Calculates the sum of value across all ranks
- Parameters:
value – Value to sum up
- Returns:
Sum of value across all ranks
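To illustrate the interface, below is a minimal sketch of a Communicator subclass: a trivial single-process implementation providing every abstract member. Whether broadcast fills data in place (as buffer-based MPI broadcasts do) or returns it is not fixed by this interface, so treating it as a no-op here is an assumption.

```python
# Minimal sketch: a single-process Communicator that satisfies the
# abstract interface; useful only to show which members a subclass
# must implement.
from ml_genn.communicators import Communicator


class SingleProcessCommunicator(Communicator):
    def barrier(self):
        # Only one rank exists, so there is nothing to wait for
        pass

    def broadcast(self, data, root):
        # With a single rank, the root already holds the data
        # (assumes in-place broadcast semantics)
        pass

    @property
    def num_ranks(self):
        # Exactly one rank in this trivial setup
        return 1

    @property
    def rank(self):
        # The only rank has index 0
        return 0

    def reduce_sum(self, value):
        # The sum across a single rank is the value itself
        return value
```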
- class ml_genn.communicators.MPI
Bases:
Communicator
Implementation of Communicator which uses mpi4py for parallel communication between ranks
- barrier()
Wait for all ranks to reach this point in execution before continuing
- broadcast(data, root)
Broadcast data from root to all ranks
- Parameters:
data – Data to broadcast
root (int) – Index of rank to broadcast from
- property num_ranks
Gets total number of ranks
- Returns:
int – Total number of ranks
- property rank
Gets index of this rank
- Returns:
int – Index of this rank
- reduce_sum(value)
Calculates the sum of value across all ranks
- Parameters:
value – Value to sum up
- Returns:
Sum of value across all ranks
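As a usage sketch, the following exercises the MPI communicator's API in a small job. It assumes mpi4py is installed, that MPI() takes no constructor arguments, and that broadcast fills the buffer in place on non-root ranks (mpi4py Bcast-style semantics); launch with something like mpirun -np 2 python script.py.

```python
# Hedged usage sketch for the mpi4py-backed communicator
import numpy as np

from ml_genn.communicators import MPI

communicator = MPI()
print(f"Rank {communicator.rank} of {communicator.num_ranks}")

# Broadcast a buffer from rank 0 to all other ranks
if communicator.rank == 0:
    data = np.arange(4, dtype=np.float64)
else:
    data = np.empty(4, dtype=np.float64)
communicator.broadcast(data, 0)

# Sum a per-rank value across all ranks,
# e.g. with two ranks this gives 1 + 2 = 3
total = communicator.reduce_sum(communicator.rank + 1)

# Wait for every rank before continuing
communicator.barrier()
```

In multi-GPU training, a communicator of this kind is typically passed to a compiler so that quantities such as gradients can be summed across ranks; consult the compiler documentation for the exact parameter.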