trinity.utils.distributed module
Utilities for distributed training with multiple process groups.
- trinity.utils.distributed.init_process_group(host: str, port: int, group_name: str, backend: str | Backend = 'nccl', timeout: float | None = None, world_size: int = -1, rank: int = -1, pg_options: Any | None = None, device_id: device | None = None)[source]
Initialize the process group. Requires torch >= 2.6.0.
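A minimal usage sketch based on the signature above. The environment-variable names (`RANK`, `WORLD_SIZE`), the host address, and the port value are illustrative assumptions about how the script is launched, not part of the API; each participating process would run this code with its own rank.

```python
import os

import torch

from trinity.utils.distributed import init_process_group

# Assumption: the launcher sets RANK and WORLD_SIZE for each process.
rank = int(os.environ["RANK"])
world_size = int(os.environ["WORLD_SIZE"])

init_process_group(
    host="127.0.0.1",        # assumed address of the rendezvous host
    port=29500,              # assumed free TCP port on that host
    group_name="rollout",    # illustrative name shared by all members
    backend="nccl",          # NCCL needs CUDA; "gloo" works on CPU-only hosts
    timeout=1800.0,          # seconds to wait for all ranks to join
    world_size=world_size,
    rank=rank,
    device_id=torch.device(f"cuda:{rank}"),  # bind this rank to its GPU
)
```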