######################
Operations
######################

How to run
~~~~~~~~~~

We provide various basic operations that allow users to modify built databases for ML applications.
These operations include building databases from different data providers, aggregating databases from diverse sources,
splitting databases into training/test sets, and sanity-checking/filtering scenarios.
All commands can be run with ``python -m scenarionet.[command]``, e.g. ``python -m scenarionet.list`` for listing available operations.
The parameters for each script can be found by adding a ``-h`` flag.

.. note::
    When running ``python -m``, make sure your current working directory does not contain a folder called ``scenarionet``;
    otherwise, the command may fail.
    This usually happens if you install ScenarioNet or MetaDrive via ``git clone`` and place it under a directory you
    frequently work in, such as your home directory.
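
For example, the following prints the argument list of the Waymo converter described below:

.. code-block:: bash

    # Show all parameters of the Waymo converter.
    python -m scenarionet.convert_waymo -h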

List
~~~~~

This command lists all operations with detailed descriptions::

    python -m scenarionet.list

Convert
~~~~~~~~

.. generated by python -m convert.command -h | fold -w 80

**ScenarioNet doesn't provide any data.**
Instead, it provides converters that parse common open-source driving datasets into an internal scenario description, from which scenario databases are built.
Thus converting scenarios to our internal scenario description is the first step in building a database.
Currently, we provide converters for the Waymo, nuPlan, nuScenes (Lyft), and View-of-Delft (VoD) datasets, as well as a PG converter for procedurally generated scenarios.

Convert Waymo
------------------------

.. code-block:: text

    python -m scenarionet.convert_waymo [-h] [--database_path DATABASE_PATH]
                                        [--dataset_name DATASET_NAME] [--version VERSION]
                                        [--overwrite] [--num_workers NUM_WORKERS]
                                        [--raw_data_path RAW_DATA_PATH]
                                        [--start_file_index START_FILE_INDEX]
                                        [--num_files NUM_FILES]

    Build database from Waymo scenarios

    optional arguments:
      -h, --help            show this help message and exit
      --database_path DATABASE_PATH, -d DATABASE_PATH
                            A directory, the path to place the converted data
      --dataset_name DATASET_NAME, -n DATASET_NAME
                            Dataset name, will be used to generate scenario files
      --version VERSION, -v VERSION
                            version
      --overwrite           If the database_path exists, whether to overwrite it
      --num_workers NUM_WORKERS
                            number of workers to use
      --raw_data_path RAW_DATA_PATH
                            The directory storing all Waymo tfrecord files
      --start_file_index START_FILE_INDEX
                            Control how many files to use. We will list all files
                            in the raw data folder and select
                            files[start_file_index: start_file_index+num_files]
      --num_files NUM_FILES
                            Control how many files to use. We will list all files
                            in the raw data folder and select
                            files[start_file_index: start_file_index+num_files]

This script converts recorded Waymo scenarios into our scenario description.
A detailed guide is available in Section :ref:`waymo`.
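
As a minimal sketch, a typical invocation might look like the following; the paths and worker count are illustrative and should be adapted to your setup:

.. code-block:: bash

    # Convert the first 100 tfrecord files into a local Waymo database.
    python -m scenarionet.convert_waymo -d /path/to/waymo_db \
        --raw_data_path /path/to/waymo_tfrecords \
        --start_file_index 0 --num_files 100 --num_workers 8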

Convert nuPlan
-------------------------

.. code-block:: text

    python -m scenarionet.convert_nuplan [-h] [--database_path DATABASE_PATH]
                                         [--dataset_name DATASET_NAME] [--version VERSION]
                                         [--overwrite] [--num_workers NUM_WORKERS]
                                         [--raw_data_path RAW_DATA_PATH] [--test]

    Build database from nuPlan scenarios

    optional arguments:
      -h, --help            show this help message and exit
      --database_path DATABASE_PATH, -d DATABASE_PATH
                            A directory, the path to place the data
      --dataset_name DATASET_NAME, -n DATASET_NAME
                            Dataset name, will be used to generate scenario files
      --version VERSION, -v VERSION
                            version of the raw data
      --overwrite           If the database_path exists, whether to overwrite it
      --num_workers NUM_WORKERS
                            number of workers to use
      --raw_data_path RAW_DATA_PATH
                            the directory storing the .db files
      --test                for test use only. convert one log

This script converts recorded nuPlan scenarios into our scenario description.
It requires installing ``nuplan-devkit`` and downloading the source data from https://www.nuscenes.org/nuplan.
A detailed guide is available in Section :ref:`nuplan`.
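
A minimal example, assuming the nuPlan ``.db`` files have been downloaded to an illustrative local path:

.. code-block:: bash

    # Convert nuPlan logs; --test converts only one log for a quick sanity check.
    python -m scenarionet.convert_nuplan -d /path/to/nuplan_scenarionet_db \
        --raw_data_path /path/to/nuplan_db_files --test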

Convert nuScenes (Lyft)
------------------------------------

.. code-block:: text

    python -m scenarionet.convert_nuscenes [-h] [--database_path DATABASE_PATH]
                                           [--dataset_name DATASET_NAME]
                                           [--split
                                           {v1.0-mini,mini_val,v1.0-test,train,train_val,val,mini_train,v1.0-trainval}]
                                           [--dataroot DATAROOT] [--map_radius MAP_RADIUS]
                                           [--future FUTURE] [--past PAST] [--overwrite]
                                           [--num_workers NUM_WORKERS]

    Build database from nuScenes/Lyft scenarios

    optional arguments:
      -h, --help            show this help message and exit
      --database_path DATABASE_PATH, -d DATABASE_PATH
                            A directory, the path to place the data
      --dataset_name DATASET_NAME, -n DATASET_NAME
                            Dataset name, will be used to generate scenario files
      --split {v1.0-mini,mini_val,v1.0-test,train,train_val,val,mini_train,v1.0-trainval}
                            Which split of the nuScenes data should be used. If
                            set to one of ['v1.0-mini', 'v1.0-trainval',
                            'v1.0-test'], it will convert the full logs into
                            scenarios with 20-second episode length. If set to one
                            of ['mini_train', 'mini_val', 'train', 'train_val',
                            'val'], it will convert the segments used for the
                            nuScenes prediction challenge into scenarios,
                            resulting in more converted scenarios. Generally, you
                            should choose this parameter from ['v1.0-mini',
                            'v1.0-trainval', 'v1.0-test'] to get complete
                            scenarios for planning, unless you want to use the
                            converted scenario files for prediction tasks.
      --dataroot DATAROOT   The path of the nuScenes data
      --map_radius MAP_RADIUS
                            The size of the map
      --future FUTURE       6 seconds by default. How many future seconds to
                            predict. Only available if split is chosen from
                            ['mini_train', 'mini_val', 'train', 'train_val',
                            'val']
      --past PAST           2 seconds by default. How many past seconds are used
                            for prediction. Only available if split is chosen
                            from ['mini_train', 'mini_val', 'train', 'train_val',
                            'val']
      --overwrite           If the database_path exists, whether to overwrite it
      --num_workers NUM_WORKERS
                            number of workers to use

This script converts recorded nuScenes scenarios into our scenario description.
It requires installing ``nuscenes-devkit`` and downloading the source data from https://www.nuscenes.org/nuscenes.
For Lyft, this API can only convert the old version of the Lyft data, as that data can be parsed via ``nuscenes-devkit``.
However, Lyft is now a part of Woven Planet, and the new data has to be parsed via a new toolkit.
We are working on supporting this new toolkit for the new Lyft dataset.
A detailed guide is available in Section :ref:`nuscenes`.
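
For instance, converting the ``v1.0-mini`` split into 20-second planning scenarios might look like this (paths are illustrative):

.. code-block:: bash

    python -m scenarionet.convert_nuscenes -d /path/to/nuscenes_db \
        --split v1.0-mini --dataroot /path/to/nuscenes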

Convert VoD
------------------------------------

.. code-block:: text

    python -m scenarionet.convert_vod [-h] [--database_path DATABASE_PATH]
                                      [--dataset_name DATASET_NAME]
                                      [--split
                                      {v1.0-trainval,v1.0-test,train,train_val,val,test}]
                                      [--dataroot DATAROOT] [--map_radius MAP_RADIUS]
                                      [--future FUTURE] [--past PAST] [--overwrite]
                                      [--num_workers NUM_WORKERS]

    Build database from VoD scenarios

    optional arguments:
      -h, --help            show this help message and exit
      --database_path DATABASE_PATH, -d DATABASE_PATH
                            A directory, the path to place the data
      --dataset_name DATASET_NAME, -n DATASET_NAME
                            Dataset name, will be used to generate scenario files
      --split {v1.0-trainval,v1.0-test,train,train_val,val,test}
                            Which split of the VoD data should be used. If set to
                            one of ['v1.0-trainval', 'v1.0-test'], it will convert
                            the full logs into scenarios with 20-second episode
                            length. If set to one of ['train', 'train_val', 'val',
                            'test'], it will convert the segments used for the VoD
                            prediction challenge into scenarios, resulting in more
                            converted scenarios. Generally, you should choose this
                            parameter from ['v1.0-trainval', 'v1.0-test'] to get
                            complete scenarios for planning, unless you want to
                            use the converted scenario files for prediction tasks.
      --dataroot DATAROOT   The path of the VoD data
      --map_radius MAP_RADIUS
                            The size of the map
      --future FUTURE       3 seconds by default. How many future seconds to
                            predict. Only available if split is chosen from
                            ['train', 'train_val', 'val', 'test']
      --past PAST           0.5 seconds by default. How many past seconds are
                            used for prediction. Only available if split is
                            chosen from ['train', 'train_val', 'val', 'test']
      --overwrite           If the database_path exists, whether to overwrite it
      --num_workers NUM_WORKERS
                            number of workers to use

This script converts the View-of-Delft Prediction (VoD) dataset into our scenario descriptions.
You will need to install ``vod-devkit`` and download the source data from https://intelligent-vehicles.org/datasets/view-of-delft/.
A detailed guide is available in Section :ref:`vod`.
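
A sketch of converting the full trainval logs, with illustrative paths:

.. code-block:: bash

    python -m scenarionet.convert_vod -d /path/to/vod_db \
        --split v1.0-trainval --dataroot /path/to/vod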

Convert PG
-------------------------

.. code-block:: text

    python -m scenarionet.convert_pg [-h] [--database_path DATABASE_PATH]
                                     [--dataset_name DATASET_NAME] [--version VERSION]
                                     [--overwrite] [--num_workers NUM_WORKERS]
                                     [--num_scenarios NUM_SCENARIOS]
                                     [--start_index START_INDEX]

    Build database from synthetic or procedurally generated scenarios

    optional arguments:
      -h, --help            show this help message and exit
      --database_path DATABASE_PATH, -d DATABASE_PATH
                            A directory, the path to place the data
      --dataset_name DATASET_NAME, -n DATASET_NAME
                            Dataset name, will be used to generate scenario files
      --version VERSION, -v VERSION
                            version
      --overwrite           If the database_path exists, whether to overwrite it
      --num_workers NUM_WORKERS
                            number of workers to use
      --num_scenarios NUM_SCENARIOS
                            how many scenarios to generate (default: 30)
      --start_index START_INDEX
                            which index to start

PG refers to Procedural Generation.
Scenario databases generated this way are created by a set of rules on hand-crafted maps.
The scenarios are collected by driving the ego car with an IDM policy through different scenarios.
A detailed guide is available in Section :ref:`pg`.
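
For example, to generate 500 procedural scenarios with 8 workers (the database path is illustrative):

.. code-block:: bash

    python -m scenarionet.convert_pg -d /path/to/pg_db \
        --num_scenarios 500 --start_index 0 --num_workers 8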

Merge
~~~~~~~~~

This command merges existing databases to build a larger one.
This is why we can build a ScenarioNet!
After converting data recorded in different formats to the unified scenario description,
we can aggregate the resulting databases freely and enlarge the database.

.. code-block:: text

    python -m scenarionet.merge [-h] --to DATABASE_PATH --from FROM [FROM ...]
                                [--exist_ok] [--overwrite] [--filter_moving_dist]
                                [--sdc_moving_dist_min SDC_MOVING_DIST_MIN]

    Merge a list of databases. e.g. scenario.merge --from db_1 db_2 db_3...db_n
    --to db_dest

    optional arguments:
      -h, --help            show this help message and exit
      --database_path DATABASE_PATH, -d DATABASE_PATH, --to DATABASE_PATH
                            The name of the new combined database. It will create
                            a new directory to store dataset_summary.pkl and
                            dataset_mapping.pkl. If exist_ok=True, those two .pkl
                            files will be stored in an existing directory and turn
                            that directory into a database.
      --from FROM [FROM ...]
                            Which databases to combine. It takes any number of
                            directory paths as input
      --exist_ok            Still allow writing if the dir already exists. This
                            write will only create two .pkl files and this
                            directory will become a database.
      --overwrite           When exist_ok is set but summary.pkl and map.pkl
                            exist in the existing dir, whether to overwrite both
                            files
      --filter_moving_dist  add this flag to select cases with SDC moving dist >
                            sdc_moving_dist_min
      --sdc_moving_dist_min SDC_MOVING_DIST_MIN
                            Select cases with sdc_moving_dist > this value. We
                            will add more filter conditions in the future.
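
For example, merging two converted databases while keeping only scenarios whose ego car moves more than 10 meters (paths and the threshold are illustrative):

.. code-block:: bash

    python -m scenarionet.merge --from /path/to/waymo_db /path/to/nuscenes_db \
        --to /path/to/merged_db --filter_moving_dist --sdc_moving_dist_min 10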

Split
~~~~~~~~~~

The split operation extracts a subset of scenarios from an existing database and builds a new database from them.
This is usually used to build training/test/validation sets.

.. code-block:: text

    python -m scenarionet.split [-h] --from FROM --to TO [--num_scenarios NUM_SCENARIOS]
                                [--start_index START_INDEX] [--random] [--exist_ok]
                                [--overwrite]

    Build a new database containing a subset of scenarios from an existing
    database.

    optional arguments:
      -h, --help            show this help message and exit
      --from FROM           Which database to extract data from.
      --to TO               The name of the new database. It will create a new
                            directory to store dataset_summary.pkl and
                            dataset_mapping.pkl. If exist_ok=True, those two .pkl
                            files will be stored in an existing directory and turn
                            that directory into a database.
      --num_scenarios NUM_SCENARIOS
                            how many scenarios to extract (default: 30)
      --start_index START_INDEX
                            which index to start
      --random              If set to true, it will choose scenarios randomly
                            from all_scenarios[start_index:]. Otherwise, the
                            scenarios will be selected sequentially
      --exist_ok            Still allow writing if the to_folder already exists.
                            This write will only create two .pkl files and this
                            directory will become a database.
      --overwrite           When exist_ok is set but summary.pkl and map.pkl
                            exist in the existing dir, whether to overwrite both
                            files
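
For example, randomly sampling 1000 scenarios from a database as a held-out test set (paths are illustrative):

.. code-block:: bash

    python -m scenarionet.split --from /path/to/merged_db --to /path/to/test_db \
        --num_scenarios 1000 --random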

Copy (Move)
~~~~~~~~~~~~~~~~

Because a database built by ScenarioNet stores scenarios through virtual mapping,
directly moving or copying an existing database to a new location with the ``cp`` or ``mv`` command will break the soft links.
To move or copy the scenarios to a new path, use this command instead.
When ``--remove_source`` is added, the ``copy`` command turns into a ``move``.

.. code-block:: text

    python -m scenarionet.cp [-h] --from FROM --to TO [--remove_source] [--copy_raw_data]
                             [--exist_ok] [--overwrite]

    Move or Copy an existing database

    optional arguments:
      -h, --help            show this help message and exit
      --from FROM           Which database to move.
      --to TO               The name of the new database. It will create a new
                            directory to store dataset_summary.pkl and
                            dataset_mapping.pkl. If exist_ok=True, those two .pkl
                            files will be stored in an existing directory and turn
                            that directory into a database.
      --remove_source       Remove the `from_database` if this flag is set
      --copy_raw_data       Instead of creating virtual file mapping, copy raw
                            scenario.pkl files
      --exist_ok            Still allow writing if the to_folder already exists.
                            This write will only create two .pkl files and this
                            directory will become a database.
      --overwrite           When exist_ok is set but summary.pkl and map.pkl
                            exist in the existing dir, whether to overwrite both
                            files
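
For example, the following moves a database while materializing the raw scenario files instead of linking them (paths are illustrative):

.. code-block:: bash

    python -m scenarionet.cp --from /path/to/old_db --to /path/to/new_db \
        --copy_raw_data --remove_source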

Num
~~~~~~~~~~

Report the number of scenarios in a database.

.. code-block:: text

    python -m scenarionet.num [-h] --database_path DATABASE_PATH

    The number of scenarios in the specified database

    optional arguments:
      -h, --help            show this help message and exit
      --database_path DATABASE_PATH, -d DATABASE_PATH
                            Database to check number of scenarios
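
For example, with an illustrative path:

.. code-block:: bash

    python -m scenarionet.num -d /path/to/merged_db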

Filter
~~~~~~~~

Some scenarios contain overpasses, short ego-car trajectories, or traffic signals.
Such scenarios can be filtered out of a database with this command.
Currently, we only provide filters for the ego car's moving distance, the number of objects, traffic lights, overpasses, and scenario IDs.
If you would like to contribute new filters,
feel free to create an issue or pull request on our `GitHub repo <https://github.com/metadriverse/scenarionet>`_.

.. code-block:: text

    python -m scenarionet.filter [-h] --database_path DATABASE_PATH --from FROM
                                 [--exist_ok] [--overwrite] [--moving_dist]
                                 [--sdc_moving_dist_min SDC_MOVING_DIST_MIN]
                                 [--num_object] [--max_num_object MAX_NUM_OBJECT]
                                 [--no_overpass] [--no_traffic_light] [--id_filter]
                                 [--exclude_ids EXCLUDE_IDS [EXCLUDE_IDS ...]]

    Filter unwanted scenarios out and build a new database

    optional arguments:
      -h, --help            show this help message and exit
      --database_path DATABASE_PATH, -d DATABASE_PATH
                            The name of the new database. It will create a new
                            directory to store dataset_summary.pkl and
                            dataset_mapping.pkl. If exist_ok=True, those two .pkl
                            files will be stored in an existing directory and turn
                            that directory into a database.
      --from FROM           Which database to filter. It takes one directory path
                            as input
      --exist_ok            Still allow writing if the dir already exists. This
                            write will only create two .pkl files and this
                            directory will become a database.
      --overwrite           When exist_ok is set but summary.pkl and map.pkl
                            exist in the existing dir, whether to overwrite both
                            files
      --moving_dist         add this flag to select cases with SDC moving dist >
                            sdc_moving_dist_min
      --sdc_moving_dist_min SDC_MOVING_DIST_MIN
                            Select cases with sdc_moving_dist > this value.
      --num_object          add this flag to select cases with object_num <
                            max_num_object
      --max_num_object MAX_NUM_OBJECT
                            A case will be selected if num_obj < this argument
      --no_overpass         Scenarios with overpasses WON'T be selected
      --no_traffic_light    Scenarios with traffic lights WON'T be selected
      --id_filter           Scenarios with the indicated names will NOT be
                            selected
      --exclude_ids EXCLUDE_IDS [EXCLUDE_IDS ...]
                            Scenarios with the indicated names will NOT be
                            selected
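
For example, building a filtered database that requires the ego car to move more than 20 meters and drops scenes with traffic lights or overpasses (paths and the threshold are illustrative):

.. code-block:: bash

    python -m scenarionet.filter -d /path/to/filtered_db --from /path/to/merged_db \
        --moving_dist --sdc_moving_dist_min 20 --no_traffic_light --no_overpass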

Build from Errors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This script generates a new database that excludes (or includes only) broken scenarios.
This is useful for debugging broken scenarios or building a completely clean database for training or testing.

.. code-block:: text

    python -m scenarionet.generate_from_error_file [-h] --database_path DATABASE_PATH --file
                                                   FILE [--overwrite] [--broken]

    Generate a new database excluding or only including the failed scenarios
    detected by 'check_simulation' and 'check_existence'

    optional arguments:
      -h, --help            show this help message and exit
      --database_path DATABASE_PATH, -d DATABASE_PATH
                            The path of the newly generated database
      --file FILE, -f FILE  The path of the error file, should be xyz.json
      --overwrite           If the database_path exists, overwrite it
      --broken              By default, only successful scenarios will be picked
                            to build the new database. If this flag is turned on,
                            it will generate a database containing only broken
                            scenarios.
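
For example, to keep only the healthy scenarios after a check has produced an error file (paths are illustrative):

.. code-block:: bash

    python -m scenarionet.generate_from_error_file -d /path/to/clean_db -f /path/to/error.json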

Sim
~~~~~~~~~~~

Load a database into the simulator and replay the scenarios.
We provide different render modes that allow users to visualize them.
For more details on simulation,
please check Section :ref:`simulation` or the `MetaDrive documentation <https://metadrive-simulator.readthedocs.io/en/latest/>`_.

.. code-block:: text

    python -m scenarionet.sim [-h] --database_path DATABASE_PATH
                              [--render {none,2D,3D,advanced,semantic}]
                              [--scenario_index SCENARIO_INDEX]

    Load a database to simulator and replay scenarios

    optional arguments:
      -h, --help            show this help message and exit
      --database_path DATABASE_PATH, -d DATABASE_PATH
                            The path of the database
      --render {none,2D,3D,advanced,semantic}
      --scenario_index SCENARIO_INDEX
                            Specify a scenario to run
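
For example, replaying the first scenario of a database with the top-down 2D renderer (the path is illustrative):

.. code-block:: bash

    python -m scenarionet.sim -d /path/to/merged_db --render 2D --scenario_index 0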

Check Existence
~~~~~~~~~~~~~~~~~~~~~

We provide a tool to check whether the scenarios in a database exist on your machine and are runnable.
This is needed because scenarios are included in a database (a folder) through virtual mapping:
each database only records the path of each scenario relative to the database directory.
This script therefore makes sure that all original scenario files exist and can be loaded.
If it finds broken scenarios, an error file is written to the specified path.
Using ``generate_from_error_file``, a new database can then be created that excludes, or only includes, these broken scenarios.
In this way, we can either debug the broken scenarios to find what causes the error, or just remove the broken
scenarios to make the database intact.

.. code-block:: text

    python -m scenarionet.check_existence [-h] --database_path DATABASE_PATH
                                          [--error_file_path ERROR_FILE_PATH] [--overwrite]
                                          [--num_workers NUM_WORKERS] [--random_drop]

    Check if the database is intact and all scenarios can be found and recorded in
    internal scenario description

    optional arguments:
      -h, --help            show this help message and exit
      --database_path DATABASE_PATH, -d DATABASE_PATH
                            Database path, a directory containing summary.pkl and
                            mapping.pkl
      --error_file_path ERROR_FILE_PATH
                            Where to save the error file. One can generate a new
                            database excluding or only including the failed
                            scenarios. For more details, see operation
                            'generate_from_error_file'
      --overwrite           If an error file already exists in error_file_path,
                            whether to overwrite it
      --num_workers NUM_WORKERS
                            number of workers to use
      --random_drop         Randomly make some scenarios fail. For test only!
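
For example, verifying a database with 8 workers and recording failures for later repair (paths are illustrative):

.. code-block:: bash

    python -m scenarionet.check_existence -d /path/to/merged_db \
        --error_file_path /path/to/error_logs --num_workers 8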

Check Simulation
~~~~~~~~~~~~~~~~~

This is an upgraded version of the existence check.
It not only detects the existence and completeness of the database, but also checks whether all scenarios can be loaded
and run in the simulator.

.. code-block:: text

    python -m scenarionet.check_simulation [-h] --database_path DATABASE_PATH
                                           [--error_file_path ERROR_FILE_PATH] [--overwrite]
                                           [--num_workers NUM_WORKERS] [--random_drop]

    Check if all scenarios can be simulated in simulator. We recommend doing this
    before close-loop training/testing

    optional arguments:
      -h, --help            show this help message and exit
      --database_path DATABASE_PATH, -d DATABASE_PATH
                            Database path, a directory containing summary.pkl and
                            mapping.pkl
      --error_file_path ERROR_FILE_PATH
                            Where to save the error file. One can generate a new
                            database excluding or only including the failed
                            scenarios. For more details, see operation
                            'generate_from_error_file'
      --overwrite           If an error file already exists in error_file_path,
                            whether to overwrite it
      --num_workers NUM_WORKERS
                            number of workers to use
      --random_drop         Randomly make some scenarios fail. For test only!
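
For example, running a full simulation check before close-loop training (paths are illustrative):

.. code-block:: bash

    python -m scenarionet.check_simulation -d /path/to/merged_db \
        --error_file_path /path/to/error_logs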

Check Overlap
~~~~~~~~~~~~~~~~

This script checks whether two databases share any scenarios.
Its main purpose is to ensure that training and test sets are isolated from each other.

.. code-block:: text

    python -m scenarionet.check_overlap [-h] --d_1 D_1 --d_2 D_2 [--show_id]

    Check if there are overlapped scenarios between two databases. If so, return
    the number of overlapped scenarios and the id list

    optional arguments:
      -h, --help  show this help message and exit
      --d_1 D_1   The path of the first database
      --d_2 D_2   The path of the second database
      --show_id   whether to show the ids of overlapped scenarios
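
For example, confirming that training and test databases share no scenarios (paths are illustrative):

.. code-block:: bash

    python -m scenarionet.check_overlap --d_1 /path/to/train_db --d_2 /path/to/test_db --show_id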