Why did I get a multiprocessing.api:failed error when I switched working multiprocess code to a single GPU?


I have been using a PyTorch DDP script for training. It worked well with 4 GPUs and with 2 GPUs, but this time, when I launched the task with 1 GPU, I got error messages like this:

Note that --use-env is set by default in torchrun.
If your script expects `--local-rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See 
https://pytorch.org/docs/stable/distributed.html#launch-utility for 
further instructions

  warnings.warn(
/mnt/task_runtime/boltenv/lib/python3.8/site-packages/_distutils_hack/__init__.py:33: UserWarning: Setuptools is replacing distutils.
  warnings.warn("Setuptools is replacing distutils.")
Traceback (most recent call last):
  File "tls/runnet.py", line 257, in <module>
    main()
  File "tls/runnet.py", line 170, in main
    dist.init_process_group(backend='gloo', rank=int(local_rank), world_size = WORLD_SIZE)
  File "/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 907, in init_process_group
    default_pg = _new_process_group_helper(
  File "/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 1009, in _new_process_group_helper
    backend_class = ProcessGroupGloo(backend_prefix_store, group_rank, group_size, timeout=timeout)
RuntimeError: Socket Timeout
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 8354) of binary: /bin/python3
Traceback (most recent call last):
  File "/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/lib/python3.8/site-packages/torch/distributed/launch.py", line 196, in <module>
    main()
  File "/lib/python3.8/site-packages/torch/distributed/launch.py", line 192, in main
    launch(args)
  File "/lib/python3.8/site-packages/torch/distributed/launch.py", line 177, in launch
    run(args)
  File "/lib/python3.8/site-packages/torch/distributed/run.py", line 785, in run
    elastic_launch(
  File "/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
============================================================
tls/runnet.py FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2023-11-11_01:36:47
  host      : zdfadg
  rank      : 0 (local_rank: 0)
  exitcode  : 1 (pid: 8354)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================

The command I used is:

python3 -m torch.distributed.launch --nproc_per_node 1 tls/runnet.py
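
For context, the init code in runnet.py looks roughly like this (a simplified sketch, not the exact file; the WORLD_SIZE constant and the way local_rank is obtained are approximations):

import os
import torch.distributed as dist

# Assumption: world size is taken from the launcher's environment (falls back to 1).
WORLD_SIZE = int(os.environ.get("WORLD_SIZE", 1))

def main():
    # Assumption: local rank is read from the environment, as the launcher warning suggests.
    local_rank = os.environ.get("LOCAL_RANK", "0")
    # This is the call that raises "RuntimeError: Socket Timeout" (runnet.py line 170).
    dist.init_process_group(backend='gloo', rank=int(local_rank), world_size=WORLD_SIZE)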

Could someone tell me why I got these errors and how to get around them for a single-GPU task?
