ScenarioNet
Open-Source Platform for Large-Scale Traffic Scenario Simulation and Modeling
[ Webpage | Code | Video | Paper | Documentation ]
Colab example for running simulation with ScenarioNet:
Colab example for reading established ScenarioNet dataset:
ScenarioNet allows users to load scenarios from real-world datasets like Waymo, nuPlan, nuScenes, and L5, as well as synthetic datasets such as procedurally generated ones and safety-critical ones generated by adversarial attacks. The built database provides tools for building training and test sets for ML applications.
Powered by the MetaDrive Simulator, the scenarios can be reconstructed for various applications such as AD stack testing, reinforcement learning, imitation learning, scenario generation, and so on.
Installation
Detailed installation guidance is available in the documentation. The simplest way to install is as follows.
# create environment
conda create -n scenarionet python=3.9
conda activate scenarionet
# Install MetaDrive Simulator
cd ~/ # Go to the folder you want to host these two repos.
git clone https://github.com/metadriverse/metadrive.git
cd metadrive
pip install -e .
# Install ScenarioNet
cd ~/ # Go to the folder you want to host these two repos.
git clone https://github.com/metadriverse/scenarionet.git
cd scenarionet
pip install -e .
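After both editable installs, a quick sanity check can confirm that the two packages are visible to the Python environment. This is a minimal sketch; it only probes for the packages by name without importing them, so it is safe to run even if the installation is incomplete.

```python
# Check that the metadrive and scenarionet packages are discoverable
# in the active environment without actually importing them.
import importlib.util

for pkg in ("metadrive", "scenarionet"):
    status = "found" if importlib.util.find_spec(pkg) else "NOT installed"
    print(f"{pkg}: {status}")
```

If either package prints "NOT installed", re-check that the correct conda environment is active and that both `pip install -e .` commands succeeded.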
API reference
All operations and the API reference are available in
our documentation.
If you already have ScenarioNet installed, you can list all operations by running python -m scenarionet.list.
ScenarioNet dataset and Scenario Description
Please refer to the Scenario Description section in MetaDrive documentation for a walk-through.
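As a quick orientation before reading the full walk-through: each converted scenario is stored as a pickled nested dictionary following MetaDrive's Scenario Description format. The sketch below shows how to peek at a scenario file's top-level structure; the key names used in the synthetic stand-in dict are illustrative and the exact key set may vary across versions.

```python
# Minimal sketch: load one scenario .pkl file and inspect its top-level keys.
import pickle

def summarize_scenario(path):
    """Return the sorted top-level keys of a pickled scenario dict."""
    with open(path, "rb") as f:
        scenario = pickle.load(f)
    return sorted(scenario.keys())

# Demonstrate with a synthetic stand-in dict (real files are produced by the
# dataset converters, e.g. for Waymo/nuScenes/nuPlan):
fake = {"id": "demo", "length": 100, "tracks": {}, "map_features": {}, "metadata": {}}
with open("/tmp/demo_scenario.pkl", "wb") as f:
    pickle.dump(fake, f)
print(summarize_scenario("/tmp/demo_scenario.pkl"))
# → ['id', 'length', 'map_features', 'metadata', 'tracks']
```

The same pattern applies to any file in a ScenarioNet database directory; refer to the Scenario Description documentation for the authoritative schema.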
Citation
If you use this project in your research, please cite:
@article{li2023scenarionet,
title={ScenarioNet: Open-Source Platform for Large-Scale Traffic Scenario Simulation and Modeling},
author={Li, Quanyi and Peng, Zhenghao and Feng, Lan and Liu, Zhizheng and Duan, Chenda and Mo, Wenjie and Zhou, Bolei},
journal={Advances in Neural Information Processing Systems},
year={2023}
}
