Add some updates for NeurIPS paper (#4)

* scenarionet training
* wandb
* train utils
* fix callback
* run PPO
* use pg test
* save path
* use torch
* add dependency
* update ignore
* update training
* large model
* use curriculum training
* add time to exp name
* storage_path
* restore
* update training
* use my key
* add log message
* check seed
* restore callback
* restore callback
* add log message
* add logging message
* restore ray 1.4
* length 500
* ray 100
* wandb
* use tf
* more levels
* add callback
* 10 worker
* show level
* no env horizon
* callback result level
* more callbacks
* add difficulty
* add more stats
* more stats
* show levels
* add callback
* new
* ep len 600
* fix setup
* fix setup
* fix to 3.8
* update setup
* parallel worker!
* new exp
* add callback
* lateral dist
* pg dataset
* evaluate
* modify config
* align config
* train single RL
* update training script
* 100w eval
* less eval to reveal
* 2000 env eval
* new training
* eval 1000
* update eval
* more workers
* more worker
* 20 worker
* dataset to database
* split tool!
* split dataset
* try fix
* train 003
* fix mapping
* fix test
* add waymo tqdm
* utils
* fix bug
* fix bug
* waymo
* int type
* 8 worker read
* disable
* read file
* add log message
* check existence
* dist 0
* int
* check num
* suppress warning
* add filter API
* filter
* store map false
* new
* ablation
* filter
* fix
* update filter
* rename to from
* random select
* add overlapping check
* fix
* new training scheme
* new reward
* add waymo train script
* waymo different config
* copy raw data
* fix bug
* add tqdm
* update readme
* waymo
* pg
* max lateral dist 3
* pg
* crash_done instead of penalty
* no crash done
* gpu
* update eval script
* steering range penalty
* evaluate
* finish pg
* update setup
* fix bug
* test
* fix
* add on line
* train nuplan
* generate sensor
* update training
* static obj
* multi worker eval
* fix bug
* use ray for testing
* eval!
* filter scenario
* id filter
* fix bug
* dist = 2
* filter
* eval
* eval ret
* ok
* update training pg
* test before use
* store data=False
* collect figures
* capture pic

---------

Co-authored-by: Quanyi Li <quanyi@bolei-gpu02.cs.ucla.edu>
scenarionet_training/train_utils/anisotropic_workerset.py (new file, 42 lines)
@@ -0,0 +1,42 @@
import copy
import logging
from typing import TypeVar

from ray.rllib.evaluation.rollout_worker import RolloutWorker
from ray.rllib.evaluation.worker_set import WorkerSet
from ray.rllib.utils.annotations import DeveloperAPI
from ray.rllib.utils.framework import try_import_tf

tf1, tf, tfv = try_import_tf()

logger = logging.getLogger(__name__)

# Generic type var for foreach_* methods.
T = TypeVar("T")


@DeveloperAPI
class AnisotropicWorkerSet(WorkerSet):
    """
    Workers are assigned to different scenarios, saving memory and speeding up sampling.
    """

    def add_workers(self, num_workers: int) -> None:
        """
        Create remote rollout workers, assigning each one a different scenario shard.
        """
        remote_args = {
            "num_cpus": self._remote_config["num_cpus_per_worker"],
            "num_gpus": self._remote_config["num_gpus_per_worker"],
            # memory=0 is an error, but memory=None means no limits.
            "memory": self._remote_config["memory_per_worker"] or None,
            "object_store_memory": self._remote_config["object_store_memory_per_worker"] or None,
            "resources": self._remote_config["custom_resources_per_worker"],
        }
        cls = RolloutWorker.as_remote(**remote_args).remote
        for i in range(num_workers):
            config = copy.deepcopy(self._remote_config)
            # Tell each worker's env which shard of the scenario set it owns.
            config["env_config"]["worker_index"] = i
            config["env_config"]["num_workers"] = num_workers
            self._remote_workers.append(
                self._make_worker(cls, self._env_creator, self._policy_class, i + 1, config)
            )
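For context, here is a minimal sketch (not part of this commit) of how an environment created through this WorkerSet could consume the `worker_index` / `num_workers` keys injected above to load only its own slice of the scenario set. `ShardedScenarioEnv`, the `scenario_ids` key, and the striding rule are illustrative assumptions, not ScenarioNet code; the old gym API matches the ray 1.x era of this file.

import gym
import numpy as np


class ShardedScenarioEnv(gym.Env):
    """Hypothetical env that keeps only one shard of scenarios per worker."""

    def __init__(self, env_config):
        worker_index = env_config.get("worker_index", 0)
        num_workers = max(env_config.get("num_workers", 1), 1)
        # Assumed key: the full scenario list passed in via env_config.
        all_ids = env_config.get("scenario_ids", list(range(100)))
        # Stride over the full list so shards are disjoint and each worker
        # holds roughly len(all_ids) / num_workers scenarios in memory.
        self.scenario_ids = all_ids[worker_index::num_workers] or all_ids
        self.observation_space = gym.spaces.Box(-1.0, 1.0, shape=(4,))
        self.action_space = gym.spaces.Box(-1.0, 1.0, shape=(2,))

    def reset(self):
        # Sample the next scenario from this worker's shard only.
        self.current_id = self.scenario_ids[np.random.randint(len(self.scenario_ids))]
        return self.observation_space.sample()

    def step(self, action):
        # Dummy transition; a real env would simulate self.current_id.
        return self.observation_space.sample(), 0.0, True, {}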