#############################
View-of-Delft (VoD)
#############################

| Website: https://intelligent-vehicles.org/datasets/view-of-delft/
| Download: https://intelligent-vehicles.org/datasets/view-of-delft/ (Registration required)
| Papers:
| Detection dataset: https://ieeexplore.ieee.org/document/9699098
| Prediction dataset: https://ieeexplore.ieee.org/document/10493110

The View-of-Delft (VoD) dataset is a novel automotive dataset recorded in Delft, the Netherlands.
It contains 8600+ frames of synchronized and calibrated 64-layer LiDAR, (stereo) camera, and 3+1D (range, azimuth, elevation, + Doppler) radar data acquired in complex urban traffic.
It consists of 123100+ 3D bounding box annotations of both moving and static objects, including 26500+ pedestrian, 10800 cyclist and 26900+ car labels.
It additionally contains semantic map annotations and accurate ego-vehicle localization data.

Benchmarks for detection and prediction tasks are released for the dataset.
See the sections below for details on these benchmarks.

**Detection**:
An object detection benchmark is available for researchers to develop and evaluate their models on the VoD dataset.
At the time of publication, this benchmark was the largest automotive multi-class object detection dataset containing 3+1D radar data,
and the only dataset containing both high-end (64-layer) LiDAR and (any kind of) radar data at the same time.

**Prediction**:
A trajectory prediction benchmark is publicly available to enable research on urban multi-class trajectory prediction.
This benchmark contains challenging prediction cases in the historic city center of Delft with a high proportion of
Vulnerable Road Users (VRUs), such as pedestrians and cyclists. Semantic map annotations for road elements such as
lanes, sidewalks, and crosswalks are provided as context for prediction models.

1. Install VoD Prediction Toolkit
=================================

We will use the VoD Prediction toolkit to convert the data.
First of all, we have to install the ``vod-devkit``.

.. code-block:: bash

    # install from GitHub (recommended)
    git clone git@github.com:tudelft-iv/view-of-delft-prediction-devkit.git
    cd view-of-delft-prediction-devkit
    pip install -e .

    # or install from PyPI
    pip install vod-devkit

By installing from GitHub, you can access the examples and source code of the toolkit.
The examples are useful for verifying that the installation and dataset setup are correct.

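To quickly verify the installation, you can check that the toolkit is importable. This is a minimal sketch; we assume
the devkit installs a top-level Python package named ``vod``, mirroring the ``nuscenes`` package of the
nuscenes-devkit from which it is derived:

.. code-block:: bash

    # sanity check: assumes the devkit's top-level package is named ``vod``
    python -c "import vod; print(vod.__file__)"
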
2. Download VoD Data
====================

The official instructions are available at https://intelligent-vehicles.org/datasets/view-of-delft/.
Here we provide a simplified setup procedure.

First of all, please fill in the access form on the VoD website: https://intelligent-vehicles.org/datasets/view-of-delft/.
The maintainers will send the download link to your email. Download and unzip the file named ``view_of_delft_prediction_PUBLIC.zip``.

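For example, a sketch assuming the default data root (the exact layout inside the archive may differ,
so move the extracted folders afterwards if needed):

.. code-block:: bash

    # extract the archive into the default dataroot
    mkdir -p /data/sets/vod
    unzip view_of_delft_prediction_PUBLIC.zip -d /data/sets/vod
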
Secondly, all files should be organized into the following structure::

    /vod/data/path/
    ├── maps/
    |   └── expansion/
    ├── v1.0-trainval/
    |   ├── attribute.json
    |   ├── calibrated_sensor.json
    |   ├── map.json
    |   ├── log.json
    |   ├── ego_pose.json
    |   └── ...
    └── v1.0-test/

**Note**: The sensor data is currently not available in the Prediction dataset, but will be released in the near future.

The ``/vod/data/path`` should be ``/data/sets/vod`` by default according to the official instructions,
allowing the ``vod-devkit`` to find it.
But you can still place the data anywhere else and either:

- build a soft link connecting your data folder to ``/data/sets/vod``, as shown in the sketch below, or
- specify the ``dataroot`` when calling vod APIs and our converters.

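A minimal sketch of the soft-link option, where ``/path/to/your/vod`` is a placeholder for wherever you placed the unzipped data:

.. code-block:: bash

    # expose the data at the default location expected by vod-devkit
    mkdir -p /data/sets
    ln -s /path/to/your/vod /data/sets/vod
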
After this step, the examples in ``vod-devkit`` are supposed to work well.
Please try ``view-of-delft-prediction-devkit/tutorials/vod_tutorial.ipynb`` and see if the demo runs successfully.

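If you cloned the repository, the tutorial notebook can be opened with Jupyter (assuming Jupyter is installed, e.g. via ``pip install notebook``):

.. code-block:: bash

    jupyter notebook view-of-delft-prediction-devkit/tutorials/vod_tutorial.ipynb
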
3. Build VoD Database
=====================

After setting up the raw data, the converters in ScenarioNet can read it, convert it to the internal scenario
description format, and build the database.
Here we take converting the raw data in ``v1.0-trainval`` as an example::

    python -m scenarionet.convert_vod -d /path/to/your/database --split v1.0-trainval --dataroot /vod/data/path

The ``--split`` argument determines which split to convert. ``--dataroot`` is ``/data/sets/vod`` by default,
but you need to specify it if your data is stored in any other directory.
Now all converted scenarios will be placed at ``/path/to/your/database`` and are ready to be used in your work.

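To double-check the conversion result, you can run ScenarioNet's database operations against the new database.
This is a sketch; we assume the ``num`` and ``sim`` operations follow the same CLI pattern as the converter above:

.. code-block:: bash

    # print the number of scenarios in the database
    python -m scenarionet.num -d /path/to/your/database

    # load the database and replay scenarios in MetaDrive
    python -m scenarionet.sim -d /path/to/your/database
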
Known Issues: VoD
=================

N/A