Update doc (#20)

* Add list.py and desc

* add operations

* add structure

* update readme

* format

* update readme
This commit is contained in:
Quanyi Li
2023-08-26 22:29:54 +01:00
committed by GitHub
parent e6831ff972
commit 4affa68a06
29 changed files with 802 additions and 199 deletions

README.md

@@ -2,11 +2,13 @@
**Open-Source Platform for Large-Scale Traffic Scenario Simulation and Modeling**
[
[**Webpage**](https://metadriverse.github.io/scenarionet/) |
[**Code**](https://github.com/metadriverse/scenarionet) |
[**Video**](https://youtu.be/3bOqswXP6OA) |
[**Paper**](http://arxiv.org/abs/2306.12241) |
[**Documentation**](https://scenarionet.readthedocs.io/en/latest/) |
[**Documentation**](https://scenarionet.readthedocs.io/en/latest/)
]
ScenarioNet allows users to load scenarios from real-world datasets like Waymo, nuPlan, nuScenes and L5, as well as synthetic
datasets such as procedurally generated ones and safety-critical ones generated by adversarial attacks.
@@ -17,9 +19,12 @@ various applications like AD stack test, reinforcement learning, imitation learn
![system](docs/asset/system_01.png)
## Installation
The detailed installation guidance is available
at [documentation](https://scenarionet.readthedocs.io/en/latest/install.html).
The simplest way to do this is as follows.
```
# create environment
conda create -n scenarionet python=3.9
@@ -36,87 +41,21 @@ cd scenarionet
pip install -e .
```
## Usage
## API reference
We provide explanations and demos for all scripts here.
**You are encouraged to try them on your own; add the ```-h``` or ```--help``` argument to learn more about these
scripts.**
All operations and the API reference are available at
our [documentation](https://scenarionet.readthedocs.io/en/latest/operations.html).
If you already have ScenarioNet installed, you can check all operations by `python -m scenarionet.list`.
### Convert
## Citation
**Waymo**: the following script can convert Waymo tfrecords (version: v1.2, data_bin: training_20s) to MetaDrive scenario
descriptions and store them in the directory ./waymo
If you use this project in your research, please cite
```latex
@article{li2023scenarionet,
title={ScenarioNet: Open-Source Platform for Large-Scale Traffic Scenario Simulation and Modeling},
author={Li, Quanyi and Peng, Zhenghao and Feng, Lan and Duan, Chenda and Mo, Wenjie and Zhou, Bolei and others},
journal={arXiv preprint arXiv:2306.12241},
year={2023}
}
```
python -m scenarionet.convert_waymo -d waymo --raw_data_path /path/to/tfrecords --num_workers=16
```
**nuPlan**: the following script will convert a nuPlan split containing .db files to MetaDrive scenario descriptions and
store them in the directory ./nuplan
```
python -m scenarionet.convert_nuplan -d nuplan --raw_data_path /path/to/.db files --num_workers=16
```
**nuScenes**: as a nuScenes split can be read by specifying a version like v1.0-mini or v1.0-training, the following script
will convert all scenarios in that split
```
python -m scenarionet.convert_nuscenes -d nuscenes --version v1.0-mini --num_workers=16
```
**PG**: the following script can generate 10000 scenarios stored in the directory ./pg
```
python -m scenarionet.scripts.convert_pg -d pg --num_workers=16 --num_scenarios=10000
```
### Merge & move
For merging two or more databases, use
```
python -m scenarionet.merge -d /destination/path --from /database1 /2 ...
```
As a database contains a path mapping, one should move a database folder with the following script instead of the ```cp```
command.
Using ```--copy_raw_data``` will copy the raw scenario files into the target directory and cancel the virtual mapping.
```
python -m scenarionet.copy_database --to /destination/path --from /source/path
```
### Verify
The following scripts will check whether all scenarios exist and can be loaded into the simulator.
Missing or broken scenarios will be recorded and stored in an error file. Otherwise, no error file will be
generated.
With the error file, one can build a new database excluding or including the broken or missing scenarios.
**Existence check**
```
python -m scenarionet.check_existence -d /database/to/check --error_file_path /error/file/path
```
**Runnable check**
```
python -m scenarionet.check_simulation -d /database/to/check --error_file_path /error/file/path
```
**Generating new database**
```
python -m scenarionet.generate_from_error_file -d /new/database/path --file /error/file/path
```
### Visualization
Visualizing the simulated scenarios
```
python -m scenarionet.sim -d /path/to/database --render --scenario_index
```

documentation/PG.rst

@@ -0,0 +1,6 @@
############
PG
############
Known Issues
==================


@@ -53,4 +53,6 @@ html_theme = 'sphinx_rtd_theme'
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = []
# html_static_path = ['custom.css']
# html_css_files = ['custom.css']

documentation/custom.css

@@ -0,0 +1,7 @@
div.highlight pre {
white-space: pre-wrap; /* Since CSS 2.1 */
white-space: -moz-pre-wrap; /* Mozilla, since 1999 */
white-space: -pre-wrap; /* Opera 4-6 */
white-space: -o-pre-wrap; /* Opera 7 */
word-wrap: break-word; /* Internet Explorer 5.5+ */
}


@@ -1,7 +1,6 @@
.. _datasets:
#####################
Supported Datasets
Datasets
#####################
In this section, we provide detailed guidance on how to set up various datasets step by step.
We will keep updating this content and maintain a troubleshooting section for each dataset.


@@ -1,3 +1,7 @@
.. _desc:
########################
Scenario Description
########################
Coming soon! @Zhenghao please help with this.


@@ -26,13 +26,6 @@ Please feel free to contact us if you have any suggestions or ideas!
example.rst
operations.rst
.. toctree::
:hidden:
:maxdepth: 2
:caption: Scenario Description
description.rst
.. toctree::
:hidden:
:maxdepth: 2
@@ -42,13 +35,16 @@ Please feel free to contact us if you have any suggestions or ideas!
nuplan.rst
nuscenes.rst
waymo.rst
PG.rst
new_data.rst
.. toctree::
:hidden:
:maxdepth: 2
:caption: Simulation
:caption: System Design
description.rst
simulation.rst
Citation


@@ -5,20 +5,30 @@ Installation
########################
The ScenarioNet repo contains tools for converting scenarios and building databases from various data sources.
We recommend creating a new conda environment and installing Python>=3.8,<=3.9::
conda create -n scenarionet python==3.9
conda activate scenarionet
The simulation part is maintained in `MetaDrive <https://github.com/metadriverse/metadrive>`_ repo, and let's install MetaDrive first.
**1. Install MetaDrive**
The installation of MetaDrive on different platforms is straightforward and easy!
We recommend using the following command to install::
We recommend installing from GitHub in one of the following two ways::
# Install MetaDrive Simulator
# Method 1 (Recommend, if you don't want to access the source code)
pip install git+https://github.com/metadriverse/metadrive.git
# Method 2
git clone git@github.com:metadriverse/metadrive.git
cd metadrive
pip install -e .
It can also be installed from PyPI by::
A more stable version of MetaDrive can be installed from PyPI by::
# More stable version than installing from Github
pip install "metadrive-simulator>=0.4.1.1"
To check whether MetaDrive is successfully installed, please run::
@@ -29,9 +39,12 @@ To check whether MetaDrive is successfully installed, please run::
**2. Install ScenarioNet**
For ScenarioNet, we only provide GitHub installation::
For ScenarioNet, we only provide GitHub installation. Likewise, there are two ways to install from GitHub::
# Install ScenarioNet
# Method 1 (Recommend, if you don't want to access the source code)
pip install git+https://github.com/metadriverse/scenarionet.git
# Method 2
git clone git@github.com:metadriverse/scenarionet.git
cd scenarionet
pip install -e .


@@ -1,5 +1,53 @@
.. _new_data:
#############################
Add new datasets
New dataset support
#############################
We believe it is the effort from the community that will bring us closer to an `ImageNet` of the autonomous driving field.
We thus encourage the community to contribute convertors for new datasets.
To build a convertor compatible with our APIs, one should follow the steps below.
We recommend taking a look at our existing convertors first.
They are good examples and can be adjusted to parse new datasets.
Besides, **we are very happy to provide help if you are working on supporting new datasets!**
**1. Convertor function input/output**
Take ``convert_waymo(scenario: scenario_pb2.Scenario(), version: str) -> metadrive.scenario.ScenarioDescription`` as an example.
It takes a scenario recorded in the original Waymo format as input and returns a ``ScenarioDescription``, which is actually a nested Python ``dict``.
We just extend the functions of the ``dict`` object to pre-define a structure with several required fields to fill out.
The required fields can be found at :ref:`desc`.
Apart from basic information like ``version`` and ``scenario_id``, there are mainly three fields that need to be filled:
``tracks``, ``map_features`` and ``dynamic_map_states``,
which store object information, map structure and traffic light states, respectively.
This information can be extracted with the toolkit coupled with the original data.
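To make the required structure concrete, here is a minimal sketch of the nested dict a convertor returns. The three main field names follow the text above; the scenario id and the per-field sub-structure noted in the comments are illustrative assumptions, not the authoritative schema.

```python
# Minimal sketch of the nested-dict scenario description a convertor
# returns. Field names follow the text above; the sub-structure of each
# field is illustrative, not the authoritative schema.
def convert_example_scenario(raw_scenario, version):
    return {
        "version": version,
        "scenario_id": "example_0",        # hypothetical id
        "tracks": {},                      # object_id -> object history
        "map_features": {},                # feature_id -> polyline + type
        "dynamic_map_states": {},          # light_id -> state history
    }
```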
**2. Fill in the object data**
By parsing the ``scenario`` with the official APIs, we can easily extract the history of all objects.
Generally, the object information is stored in a *frame-centric* way, which means the querying API takes a timestep as
input and returns all objects present in that frame.
However, ScenarioNet requires an *object-centric* object history.
In this way, we can easily know how many objects are present in the scenario and retrieve the trajectory of each object with
its object_id.
Thus, a conversion from the *frame-centric* description to the *object-centric* description is required.
A good example for this is the ``extract_traffic(scenario: NuPlanScenario, center)`` function in `nuplan/utils.py <https://github.com/metadriverse/scenarionet/blob/e6831ff972ed0cd57fdcb6a8a63650c12694479c/scenarionet/converter/nuplan/utils.py#L343>`_.
Similarly, the traffic light states can be extracted from the original data and represented in an *object-centric* way.
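The frame-centric to object-centric regrouping can be sketched with plain dicts. The toy data below stands in for states queried from a dataset toolkit; real convertors read richer per-object state.

```python
# Toy illustration of the frame-centric -> object-centric conversion:
# the input maps timestep -> {object_id: state}; the output maps
# object_id -> ordered trajectory, as ScenarioNet requires.
def frame_to_object_centric(frames):
    tracks = {}
    for t in sorted(frames):
        for obj_id, state in frames[t].items():
            tracks.setdefault(obj_id, []).append(state)
    return tracks

frames = {
    0: {"car_1": (0.0, 0.0), "ped_7": (5.0, 1.0)},
    1: {"car_1": (1.0, 0.0)},
}
tracks = frame_to_object_centric(frames)
# tracks["car_1"] -> [(0.0, 0.0), (1.0, 0.0)]
```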
**3. Fill in the map data**
Map data consists of lane center lines and various kinds of boundaries such as yellow solid lines and white broken lines.
All of them are actually lines represented by a list of points.
To fill in this field, the first step is to query all line objects in the region where the scenario is collected.
By traversing the line list and extracting each line's type and point list, we can build a map ready to be
loaded into MetaDrive.
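As a sketch, filling the map field amounts to turning each queried line object into a typed point list. The ``(line_type, points)`` input format and the feature ids below are illustrative, not the real toolkit API.

```python
# Sketch of building map_features: every queried line object contributes
# a type and a point list. Input format and feature ids are assumptions.
def build_map_features(lines):
    map_features = {}
    for i, (line_type, points) in enumerate(lines):
        map_features["feature_{}".format(i)] = {
            "type": line_type,
            "polyline": list(points),
        }
    return map_features
```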
**4. Extend the description**
As long as all mandatory fields are filled, one can add new key-value pairs at each level of the nested dict.
For example, some scenarios are labeled with the scenario type and the behavior of surrounding objects.
It is absolutely fine to include such information in the scenario descriptions and use it in later experiments.


@@ -1,3 +1,6 @@
#############################
nuPlan
#############################
Known Issues
==================


@@ -1,3 +1,6 @@
#############################
nuScenes
#############################
Known Issues
==================


@@ -2,3 +2,496 @@
Operations
###############
We provide various basic operations allowing users to modify a built database for ML applications.
These operations include building databases from different data providers; aggregating databases from diverse sources;
splitting databases into training/test sets; and sanity-checking/filtering scenarios.
All commands can be run with ``python -m scenarionet.[command]``, e.g. ``python -m scenarionet.list`` for listing available operations.
The parameters for each script can be found by adding a ``-h`` flag.
.. note::
When running ``python -m``, make sure the directory you are in doesn't contain a folder called ``scenarionet``.
Otherwise, the command may fail.
This usually happens if you install ScenarioNet or MetaDrive via ``git clone`` and put it under a directory you usually work in, like the home directory.
List
~~~~~
This command can list all operations with detailed descriptions::
python -m scenarionet.list
Convert
~~~~~~~~
.. generated by python -m convert.command -h | fold -w 80
**ScenarioNet doesn't provide any data.**
Instead, it provides convertors to parse common open-source driving datasets into an internal scenario description, which comprises the scenario databases.
Thus converting scenarios to our internal scenario description is the first step in building a database.
Currently, we provide convertors for the Waymo, nuPlan and nuScenes (Lyft) datasets.
Convert Waymo
------------------------
.. code-block:: text
python -m scenarionet.convert_waymo [-h] [--database_path DATABASE_PATH]
[--dataset_name DATASET_NAME] [--version VERSION]
[--overwrite] [--num_workers NUM_WORKERS]
[--raw_data_path RAW_DATA_PATH]
[--start_file_index START_FILE_INDEX]
[--num_files NUM_FILES]
Build database from Waymo scenarios
optional arguments:
-h, --help show this help message and exit
--database_path DATABASE_PATH, -d DATABASE_PATH
A directory, the path to place the converted data
--dataset_name DATASET_NAME, -n DATASET_NAME
Dataset name, will be used to generate scenario files
--version VERSION, -v VERSION
version
--overwrite If the database_path exists, whether to overwrite it
--num_workers NUM_WORKERS
number of workers to use
--raw_data_path RAW_DATA_PATH
The directory stores all waymo tfrecord
--start_file_index START_FILE_INDEX
Control how many files to use. We will list all files
in the raw data folder and select
files[start_file_index: start_file_index+num_files]
--num_files NUM_FILES
Control how many files to use. We will list all files
in the raw data folder and select
files[start_file_index: start_file_index+num_files]
This script converts the recorded scenarios into our scenario descriptions.
A detailed guide is available in Section :ref:`waymo`.
Convert nuPlan
-------------------------
.. code-block:: text
python -m scenarionet.convert_nuplan [-h] [--database_path DATABASE_PATH]
[--dataset_name DATASET_NAME] [--version VERSION]
[--overwrite] [--num_workers NUM_WORKERS]
[--raw_data_path RAW_DATA_PATH] [--test]
Build database from nuPlan scenarios
optional arguments:
-h, --help show this help message and exit
--database_path DATABASE_PATH, -d DATABASE_PATH
A directory, the path to place the data
--dataset_name DATASET_NAME, -n DATASET_NAME
Dataset name, will be used to generate scenario files
--version VERSION, -v VERSION
version of the raw data
--overwrite If the database_path exists, whether to overwrite it
--num_workers NUM_WORKERS
number of workers to use
--raw_data_path RAW_DATA_PATH
the place store .db files
--test for test use only. convert one log
This script converts the recorded nuPlan scenarios into our scenario descriptions.
It requires installing ``nuplan-devkit`` and downloading the source data from https://www.nuscenes.org/nuplan.
A detailed guide is available in Section :ref:`nuplan`.
Convert nuScenes (Lyft)
------------------------------------
.. code-block:: text
python -m scenarionet.convert_nuscenes [-h] [--database_path DATABASE_PATH]
[--dataset_name DATASET_NAME] [--version VERSION]
[--overwrite] [--num_workers NUM_WORKERS]
Build database from nuScenes/Lyft scenarios
optional arguments:
-h, --help show this help message and exit
--database_path DATABASE_PATH, -d DATABASE_PATH
directory, The path to place the data
--dataset_name DATASET_NAME, -n DATASET_NAME
Dataset name, will be used to generate scenario files
--version VERSION, -v VERSION
version of nuscenes data, scenario of this version
will be converted
--overwrite If the database_path exists, whether to overwrite it
--num_workers NUM_WORKERS
number of workers to use
This script converts the recorded nuScenes scenarios into our scenario descriptions.
It requires installing ``nuscenes-devkit`` and downloading the source data from https://www.nuscenes.org/nuscenes.
For Lyft datasets, this API can only convert the old-version Lyft data, as it can be parsed via `nuscenes-devkit`.
However, Lyft is now part of Woven Planet, and the new data has to be parsed via a new toolkit.
We are working on supporting this new toolkit to handle the new Lyft dataset.
A detailed guide is available in Section :ref:`nuscenes`.
Convert PG
-------------------------
.. code-block:: text
python -m scenarionet.convert_pg [-h] [--database_path DATABASE_PATH]
[--dataset_name DATASET_NAME] [--version VERSION]
[--overwrite] [--num_workers NUM_WORKERS]
[--num_scenarios NUM_SCENARIOS]
[--start_index START_INDEX]
Build database from synthetic or procedurally generated scenarios
optional arguments:
-h, --help show this help message and exit
--database_path DATABASE_PATH, -d DATABASE_PATH
directory, The path to place the data
--dataset_name DATASET_NAME, -n DATASET_NAME
Dataset name, will be used to generate scenario files
--version VERSION, -v VERSION
version
--overwrite If the database_path exists, whether to overwrite it
--num_workers NUM_WORKERS
number of workers to use
--num_scenarios NUM_SCENARIOS
how many scenarios to generate (default: 30)
--start_index START_INDEX
which index to start
PG refers to Procedural Generation.
Scenario databases generated in this way are created by a set of rules on hand-crafted maps.
The scenarios are collected by driving the ego car with an IDM policy in these environments.
A detailed guide is available in Section :ref:`pg`.
Merge
~~~~~~~~~
This command merges existing databases to build a larger one.
This is why we can build a ScenarioNet!
After converting data recorded in different formats to the unified scenario description,
we can aggregate the scenarios freely and enlarge the database.
.. code-block:: text
python -m scenarionet.merge [-h] --database_path DATABASE_PATH --from FROM [FROM ...]
[--exist_ok] [--overwrite] [--filter_moving_dist]
[--sdc_moving_dist_min SDC_MOVING_DIST_MIN]
Merge a list of databases. e.g. scenario.merge --from db_1 db_2 db_3...db_n
--to db_dest
optional arguments:
-h, --help show this help message and exit
--database_path DATABASE_PATH, -d DATABASE_PATH
The name of the new combined database. It will create
a new directory to store dataset_summary.pkl and
dataset_mapping.pkl. If exists_ok=True, those two .pkl
files will be stored in an existing directory and turn
that directory into a database.
--from FROM [FROM ...]
Which datasets to combine. It takes any number of
directory path as input
--exist_ok Still allow to write, if the dir exists already. This
write will only create two .pkl files and this
directory will become a database.
--overwrite When exists ok is set but summary.pkl and map.pkl
exists in existing dir, whether to overwrite both
files
--filter_moving_dist add this flag to select cases with SDC moving dist >
sdc_moving_dist_min
--sdc_moving_dist_min SDC_MOVING_DIST_MIN
Selecting case with sdc_moving_dist > this value. We
will add more filter conditions in the future.
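Conceptually, the merge unions each source database's summary and mapping dicts. The sketch below omits the .pkl I/O and path handling; the dict layout (scenario file to metadata, and to relative path) is an assumption based on the help text above.

```python
# Conceptual sketch of merge: each source database carries a summary
# dict and a mapping dict; merging unions them. The real command also
# rewrites relative paths and can apply filters.
def merge_databases(databases):
    summary, mapping = {}, {}
    for db_summary, db_mapping in databases:
        summary.update(db_summary)
        mapping.update(db_mapping)
    return summary, mapping
```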
Split
~~~~~~~~~~
The split action extracts a subset of scenarios from an existing database and builds a new database.
This is usually used to build training/test/validation sets.
.. code-block:: text
usage: split.py [-h] --from FROM --to TO [--num_scenarios NUM_SCENARIOS]
[--start_index START_INDEX] [--random] [--exist_ok]
[--overwrite]
Build a new database containing a subset of scenarios from an existing
database.
optional arguments:
-h, --help show this help message and exit
--from FROM Which database to extract data from.
--to TO The name of the new database. It will create a new
directory to store dataset_summary.pkl and
dataset_mapping.pkl. If exists_ok=True, those two .pkl
files will be stored in an existing directory and turn
that directory into a database.
--num_scenarios NUM_SCENARIOS
how many scenarios to extract (default: 30)
--start_index START_INDEX
which index to start
--random If set to true, it will choose scenarios randomly from
all_scenarios[start_index:]. Otherwise, the scenarios
will be selected sequentially
--exist_ok Still allow to write, if the to_folder exists already.
This write will only create two .pkl files and this
directory will become a database.
--overwrite When exists ok is set but summary.pkl and map.pkl
exists in existing dir, whether to overwrite both
files
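The selection logic implied by the ``--start_index``, ``--num_scenarios`` and ``--random`` options can be sketched as follows; this is assumed behavior inferred from the help text, not the script's exact code.

```python
import random

# Sketch of the split selection: take num_scenarios entries starting at
# start_index, either sequentially or randomly drawn from
# all_scenarios[start_index:].
def select_scenarios(all_scenarios, start_index=0, num_scenarios=30, randomize=False):
    pool = all_scenarios[start_index:]
    if randomize:
        return random.sample(pool, min(num_scenarios, len(pool)))
    return pool[:num_scenarios]
```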
Copy (Move)
~~~~~~~~~~~~~~~~
As the database built by ScenarioNet stores the scenarios through a virtual mapping,
directly moving or copying an existing database to a new location with the ``cp`` or ``mv`` command will break the soft links.
To move or copy the scenarios to a new path, one should use this command instead.
When ``--remove_source`` is added, this ``copy`` command becomes a ``move``.
.. code-block:: text
python -m scenarionet.copy [-h] --from FROM --to TO [--remove_source] [--copy_raw_data]
[--exist_ok] [--overwrite]
Move or Copy an existing database
optional arguments:
-h, --help show this help message and exit
--from FROM Which database to move.
--to TO The name of the new database. It will create a new
directory to store dataset_summary.pkl and
dataset_mapping.pkl. If exists_ok=True, those two .pkl
files will be stored in an existing directory and turn that
directory into a database.
--remove_source Remove the `from_database` if set this flag
--copy_raw_data Instead of creating virtual file mapping, copy raw
scenario.pkl file
--exist_ok Still allow to write, if the to_folder exists already. This
write will only create two .pkl files and this directory
will become a database.
--overwrite When exists ok is set but summary.pkl and map.pkl exists in
existing dir, whether to overwrite both files
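To illustrate why a plain ``cp`` breaks a database: the mapping stores each scenario's path relative to the database directory, so relocating the folder means those relative paths must be rewritten. The mapping layout below is an illustrative assumption.

```python
import os

# Rewrite relative scenario paths when a database directory moves from
# old_dir to new_dir, so each entry still resolves to the same target.
def rebase_mapping(mapping, old_dir, new_dir):
    rebased = {}
    for scenario_file, rel_path in mapping.items():
        target = os.path.normpath(os.path.join(old_dir, rel_path))
        rebased[scenario_file] = os.path.relpath(target, new_dir)
    return rebased
```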
Num
~~~~~~~~~~
Report the number of scenarios in a database.
.. code-block:: text
usage: num.py [-h] --database_path DATABASE_PATH
The number of scenarios in the specified database
optional arguments:
-h, --help show this help message and exit
--database_path DATABASE_PATH, -d DATABASE_PATH
Database to check number of scenarios
Filter
~~~~~~~~
Some scenarios contain overpasses, short ego-car trajectories or traffic signals.
These scenarios can be filtered out of the database using this command.
Currently, we only provide filters for the ego car's moving distance, the number of objects, traffic lights, overpasses and scenario ids.
If you would like to contribute new filters,
feel free to create an issue or pull request on our `Github repo <https://github.com/metadriverse/scenarionet>`_.
.. code-block:: text
python -m scenarionet.filter [-h] --database_path DATABASE_PATH --from FROM
[--exist_ok] [--overwrite] [--moving_dist]
[--sdc_moving_dist_min SDC_MOVING_DIST_MIN]
[--num_object] [--max_num_object MAX_NUM_OBJECT]
[--no_overpass] [--no_traffic_light] [--id_filter]
[--exclude_ids EXCLUDE_IDS [EXCLUDE_IDS ...]]
Filter unwanted scenarios out and build a new database
optional arguments:
-h, --help show this help message and exit
--database_path DATABASE_PATH, -d DATABASE_PATH
The name of the new database. It will create a new
directory to store dataset_summary.pkl and
dataset_mapping.pkl. If exists_ok=True, those two .pkl
files will be stored in an existing directory and turn
that directory into a database.
--from FROM Which dataset to filter. It takes one directory path
as input
--exist_ok Still allow to write, if the dir exists already. This
write will only create two .pkl files and this
directory will become a database.
--overwrite When exists ok is set but summary.pkl and map.pkl
exists in existing dir, whether to overwrite both
files
--moving_dist add this flag to select cases with SDC moving dist >
sdc_moving_dist_min
--sdc_moving_dist_min SDC_MOVING_DIST_MIN
Selecting case with sdc_moving_dist > this value.
--num_object add this flag to select cases with object_num <
max_num_object
--max_num_object MAX_NUM_OBJECT
case will be selected if num_obj < this argument
--no_overpass Scenarios with overpass WON'T be selected
--no_traffic_light Scenarios with traffic light WON'T be selected
--id_filter Scenarios with indicated name will NOT be selected
--exclude_ids EXCLUDE_IDS [EXCLUDE_IDS ...]
Scenarios with indicated name will NOT be selected
Build from Errors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This script generates a new database that excludes (or includes only) broken scenarios.
This is useful for debugging broken scenarios or building a completely clean dataset for training or testing.
.. code-block:: text
python -m scenarionet.generate_from_error_file [-h] --database_path DATABASE_PATH --file
FILE [--overwrite] [--broken]
Generate a new database excluding or only including the failed scenarios
detected by 'check_simulation' and 'check_existence'
optional arguments:
-h, --help show this help message and exit
--database_path DATABASE_PATH, -d DATABASE_PATH
The path of the newly generated database
--file FILE, -f FILE The path of the error file, should be xyz.json
--overwrite If the database_path exists, overwrite it
--broken By default, only successful scenarios will be picked
to build the new database. If this flag is turned on, it
will generate a database containing only broken
scenarios.
Sim
~~~~~~~~~~~
Load a database into the simulator and replay the scenarios.
We provide different render modes allowing users to visualize them.
For more details of simulation,
please check Section :ref:`simulation` or the `MetaDrive document <https://metadrive-simulator.readthedocs.io/en/latest/>`_.
.. code-block:: text
usage: sim.py [-h] --database_path DATABASE_PATH
[--render {none,2D,3D,advanced}]
[--scenario_index SCENARIO_INDEX]
Load a database to simulator and replay scenarios
optional arguments:
-h, --help show this help message and exit
--database_path DATABASE_PATH, -d DATABASE_PATH
The path of the database
--render {none,2D,3D,advanced}
--scenario_index SCENARIO_INDEX
Specifying a scenario to run
Check Existence
~~~~~~~~~~~~~~~~~~~~~
We provide a tool to check whether the scenarios in a database exist on your machine and are runnable.
This is because we include scenarios in a database, a folder, through a virtual mapping.
Each database only records the path of each scenario relative to the database directory.
Thus this script makes sure all original scenario files exist and can be loaded.
If it finds some broken scenarios, an error file will be generated at the specified path.
By using ``generate_from_error_file``, a new database can be created to exclude or only include these broken scenarios.
In this way, we can debug the broken scenarios to find what causes the error, or just remove the broken
scenarios to keep the database intact.
.. code-block:: text
python -m scenarionet.check_existence [-h] --database_path DATABASE_PATH
[--error_file_path ERROR_FILE_PATH] [--overwrite]
[--num_workers NUM_WORKERS] [--random_drop]
Check if the database is intact and all scenarios can be found and recorded in
internal scenario description
optional arguments:
-h, --help show this help message and exit
--database_path DATABASE_PATH, -d DATABASE_PATH
Dataset path, a directory containing summary.pkl and
mapping.pkl
--error_file_path ERROR_FILE_PATH
Where to save the error file. One can generate a new
database excluding or only including the failed
scenarios. For more details, see operation
'generate_from_error_file'
--overwrite If an error file already exists in error_file_path,
whether to overwrite it
--num_workers NUM_WORKERS
number of workers to use
--random_drop Randomly make some scenarios fail. for test only!
Check Simulation
~~~~~~~~~~~~~~~~~
This is an upgraded version of the existence check.
It not only detects the existence and completeness of the database, but also checks whether all scenarios can be loaded
and run in the simulator.
.. code-block:: text
python -m scenarionet.check_simulation [-h] --database_path DATABASE_PATH
[--error_file_path ERROR_FILE_PATH] [--overwrite]
[--num_workers NUM_WORKERS] [--random_drop]
Check if all scenarios can be simulated in simulator. We recommend doing this
before close-loop training/testing
optional arguments:
-h, --help show this help message and exit
--database_path DATABASE_PATH, -d DATABASE_PATH
Dataset path, a directory containing summary.pkl and
mapping.pkl
--error_file_path ERROR_FILE_PATH
Where to save the error file. One can generate a new
database excluding or only including the failed
scenarios. For more details, see operation
'generate_from_error_file'
--overwrite If an error file already exists in error_file_path,
whether to overwrite it
--num_workers NUM_WORKERS
number of workers to use
--random_drop Randomly make some scenarios fail. for test only!
Check Overlap
~~~~~~~~~~~~~~~~
This script checks whether there are overlapping scenarios between two databases.
The main goal of this command is to ensure that the training and test sets are isolated.
.. code-block:: text
python -m scenarionet.check_overlap [-h] --d_1 D_1 --d_2 D_2 [--show_id]
Check if there are overlapped scenarios between two databases. If so, return
the number of overlapped scenarios and id list
optional arguments:
-h, --help show this help message and exit
--d_1 D_1 The path of the first database
--d_2 D_2 The path of the second database
--show_id whether to show the id of overlapped scenarios
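In essence, the overlap check compares the scenario ids recorded in the two databases' summaries via set intersection; a minimal sketch:

```python
# Sketch of the overlap check: intersect the scenario ids recorded in
# two databases' summary dicts, mirroring the train/test isolation goal.
def find_overlap(summary_1, summary_2):
    return sorted(set(summary_1) & set(summary_2))
```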


@@ -0,0 +1,5 @@
###########
Simulation
###########
Coming soon!


@@ -1,3 +1,6 @@
#############################
Waymo
#############################
Known Issues
==================


@@ -1,13 +1,23 @@
import argparse
from scenarionet.verifier.utils import verify_database, set_random_drop
desc = "Check if the database is intact and all scenarios can be found and recorded in internal scenario description"
if __name__ == '__main__':
parser = argparse.ArgumentParser()
import argparse
from scenarionet.verifier.utils import verify_database, set_random_drop
parser = argparse.ArgumentParser(description=desc)
parser.add_argument(
"--database_path", "-d", required=True, help="Dataset path, a directory containing summary.pkl and mapping.pkl"
)
parser.add_argument("--error_file_path", default="./", help="Where to save the error file")
parser.add_argument(
"--error_file_path",
default="./",
help="Where to save the error file. "
"One can generate a new database excluding "
"or only including the failed scenarios."
"For more details, "
"see operation 'generate_from_error_file'"
)
parser.add_argument(
"--overwrite",
action="store_true",


@@ -1,13 +1,16 @@
"""
Check if there is any overlap between two databases
"""
import argparse
from scenarionet.common_utils import read_dataset_summary
desc = "Check if there are overlapped scenarios between two databases. " \
"If so, return the number of overlapped scenarios and id list"
if __name__ == '__main__':
parser = argparse.ArgumentParser()
import argparse
from scenarionet.common_utils import read_dataset_summary
parser = argparse.ArgumentParser(description=desc)
parser.add_argument('--d_1', type=str, required=True, help="The path of the first database")
parser.add_argument('--d_2', type=str, required=True, help="The path of the second database")
parser.add_argument('--show_id', action="store_true", help="whether to show the id of overlapped scenarios")
@@ -22,4 +25,6 @@ if __name__ == '__main__':
else:
print("Find {} overlapped scenarios".format(len(intersection)))
if args.show_id:
print("Overlapped scenario ids: {}".format(intersection))
print("Overlapped scenario ids:")
for id in intersection:
print(" " * 5, id)


@@ -1,13 +1,24 @@
-import pkg_resources  # for suppress warning
-import argparse
-from scenarionet.verifier.utils import verify_database, set_random_drop
+desc = "Check if all scenarios can be simulated in simulator. " \
+       "We recommend doing this before close-loop training/testing"
 if __name__ == '__main__':
-    parser = argparse.ArgumentParser()
+    import pkg_resources  # for suppress warning
+    import argparse
+    from scenarionet.verifier.utils import verify_database, set_random_drop
+    parser = argparse.ArgumentParser(description=desc)
     parser.add_argument(
         "--database_path", "-d", required=True, help="Dataset path, a directory containing summary.pkl and mapping.pkl"
     )
-    parser.add_argument("--error_file_path", default="./", help="Where to save the error file")
+    parser.add_argument(
+        "--error_file_path",
+        default="./",
+        help="Where to save the error file. "
+        "One can generate a new database excluding "
+        "or only including the failed scenarios."
+        "For more details, "
+        "see operation 'generate_from_error_file'"
+    )
     parser.add_argument(
         "--overwrite",
         action="store_true",

@@ -1,12 +1,14 @@
-import pkg_resources  # for suppress warning
-import argparse
-import os
-from scenarionet import SCENARIONET_DATASET_PATH
-from scenarionet.converter.nuplan.utils import get_nuplan_scenarios, convert_nuplan_scenario
-from scenarionet.converter.utils import write_to_directory
+desc = "Build database from nuPlan scenarios"
 if __name__ == '__main__':
-    parser = argparse.ArgumentParser()
+    import pkg_resources  # for suppress warning
+    import argparse
+    import os
+    from scenarionet import SCENARIONET_DATASET_PATH
+    from scenarionet.converter.nuplan.utils import get_nuplan_scenarios, convert_nuplan_scenario
+    from scenarionet.converter.utils import write_to_directory
+    parser = argparse.ArgumentParser(description=desc)
     parser.add_argument(
         "--database_path",
         "-d",

@@ -1,12 +1,14 @@
-import pkg_resources  # for suppress warning
-import argparse
-import os.path
-from scenarionet import SCENARIONET_DATASET_PATH
-from scenarionet.converter.nuscenes.utils import convert_nuscenes_scenario, get_nuscenes_scenarios
-from scenarionet.converter.utils import write_to_directory
+desc = "Build database from nuScenes/Lyft scenarios"
 if __name__ == '__main__':
-    parser = argparse.ArgumentParser()
+    import pkg_resources  # for suppress warning
+    import argparse
+    import os.path
+    from scenarionet import SCENARIONET_DATASET_PATH
+    from scenarionet.converter.nuscenes.utils import convert_nuscenes_scenario, get_nuscenes_scenarios
+    from scenarionet.converter.utils import write_to_directory
+    parser = argparse.ArgumentParser(description=desc)
     parser.add_argument(
         "--database_path",
         "-d",


@@ -1,16 +1,18 @@
-import pkg_resources  # for suppress warning
-import argparse
-import os.path
-import metadrive
-from scenarionet import SCENARIONET_DATASET_PATH
-from scenarionet.converter.pg.utils import get_pg_scenarios, convert_pg_scenario
-from scenarionet.converter.utils import write_to_directory
+desc = "Build database from synthetic or procedurally generated scenarios"
 if __name__ == '__main__':
-    parser = argparse.ArgumentParser()
+    import pkg_resources  # for suppress warning
+    import argparse
+    import os.path
+    import metadrive
+    from scenarionet import SCENARIONET_DATASET_PATH
+    from scenarionet.converter.pg.utils import get_pg_scenarios, convert_pg_scenario
+    from scenarionet.converter.utils import write_to_directory
+    # For the PG environment config, see: scenarionet/converter/pg/utils.py:6
+    parser = argparse.ArgumentParser(description=desc)
     parser.add_argument(
         "--database_path",
         "-d",


@@ -1,17 +1,19 @@
-import pkg_resources  # for suppress warning
-import shutil
-import argparse
-import logging
-import os
-from scenarionet import SCENARIONET_DATASET_PATH, SCENARIONET_REPO_PATH
-from scenarionet.converter.utils import write_to_directory
-from scenarionet.converter.waymo.utils import convert_waymo_scenario, get_waymo_scenarios
-logger = logging.getLogger(__name__)
+desc = "Build database from Waymo scenarios"
 if __name__ == '__main__':
-    parser = argparse.ArgumentParser()
+    import pkg_resources  # for suppress warning
+    import shutil
+    import argparse
+    import logging
+    import os
+    from scenarionet import SCENARIONET_DATASET_PATH, SCENARIONET_REPO_PATH
+    from scenarionet.converter.utils import write_to_directory
+    from scenarionet.converter.waymo.utils import convert_waymo_scenario, get_waymo_scenarios
+    logger = logging.getLogger(__name__)
+    parser = argparse.ArgumentParser(description=desc)
     parser.add_argument(
         "--database_path",
         "-d",
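All four convert_* scripts follow the same shape: collect raw scenarios, pass a per-scenario convert function to `write_to_directory`, and end up with pickled scenarios plus a summary. The sketch below shows only that conceptual shape with self-contained stand-ins; the function and field names here are illustrative, not the real ScenarioNet signatures.

```python
import os
import pickle
import tempfile


def convert_dummy_scenario(scenario_id):
    # Stand-in for convert_waymo_scenario / convert_nuplan_scenario / etc.
    return {"id": scenario_id, "tracks": {}, "map_features": {}}


raw_scenarios = ["scene-0001", "scene-0002"]

with tempfile.TemporaryDirectory() as database_path:
    summary = {}
    for sid in raw_scenarios:
        data = convert_dummy_scenario(sid)
        file_name = "sd_demo_{}.pkl".format(sid)
        # One pickle per scenario...
        with open(os.path.join(database_path, file_name), "wb") as f:
            pickle.dump(data, f)
        summary[file_name] = {"id": sid}
    # ...plus a summary describing the whole database.
    with open(os.path.join(database_path, "summary.pkl"), "wb") as f:
        pickle.dump(summary, f)
    print(len(summary))  # 2
```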


@@ -1,9 +1,11 @@
-import argparse
-from scenarionet.builder.utils import copy_database
+desc = "Move or Copy an existing database"
 if __name__ == '__main__':
-    parser = argparse.ArgumentParser()
+    import argparse
+    from scenarionet.builder.utils import copy_database
+    parser = argparse.ArgumentParser(description=desc)
     parser.add_argument('--from', required=True, help="Which database to move.")
     parser.add_argument(
         "--to",

@@ -1,10 +1,12 @@
-import argparse
-from scenarionet.builder.filters import ScenarioFilter
-from scenarionet.builder.utils import merge_database
+desc = "Filter unwanted scenarios out and build a new database"
 if __name__ == '__main__':
-    parser = argparse.ArgumentParser()
+    import argparse
+    from scenarionet.builder.filters import ScenarioFilter
+    from scenarionet.builder.utils import merge_database
+    parser = argparse.ArgumentParser(description=desc)
     parser.add_argument(
         "--database_path",
         "-d",


@@ -1,10 +1,13 @@
-import pkg_resources  # for suppress warning
-import argparse
-from scenarionet.verifier.error import ErrorFile
+desc = "Generate a new database excluding " \
+       "or only including the failed scenarios detected by 'check_simulation' and 'check_existence'"
 if __name__ == '__main__':
-    parser = argparse.ArgumentParser()
+    import pkg_resources  # for suppress warning
+    import argparse
+    from scenarionet.verifier.error import ErrorFile
+    parser = argparse.ArgumentParser(description=desc)
     parser.add_argument("--database_path", "-d", required=True, help="The path of the newly generated database")
     parser.add_argument("--file", "-f", required=True, help="The path of the error file, should be xyz.json")
     parser.add_argument("--overwrite", action="store_true", help="If the database_path exists, overwrite it")
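The error-file round trip works like this: a check_* operation records the failed scenarios in a JSON file, and generate_from_error_file builds a new database that either excludes those scenarios or keeps only them. A hedged sketch of that workflow; the JSON field name used here is an assumption, only the workflow itself comes from the scripts above.

```python
import json

# Hypothetical error file as a check_* operation might emit it
# (the "scenarios" key is illustrative, not the real schema):
error_file = json.loads('{"scenarios": ["sd_waymo_12.pkl", "sd_waymo_47.pkl"]}')
summary = {"sd_waymo_{}.pkl".format(i): {} for i in range(50)}

failed = set(error_file["scenarios"])
# Exclude the failed scenarios to get a clean database...
clean = {k: v for k, v in summary.items() if k not in failed}
# ...or keep only the failed ones, e.g. to debug them in isolation.
failed_only = {k: v for k, v in summary.items() if k in failed}

print(len(clean), len(failed_only))  # 48 2
```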

scenarionet/list.py (new file, 33 lines)

@@ -0,0 +1,33 @@
+import pkgutil
+import importlib
+import argparse
+
+desc = "List all available operations"
+
+
+def list_modules(package):
+    ret = [name for _, name, _ in pkgutil.iter_modules(package.__path__)]
+    ret.remove("builder")
+    ret.remove("converter")
+    ret.remove("verifier")
+    ret.remove("common_utils")
+    return ret
+
+
+if __name__ == '__main__':
+    import scenarionet
+
+    # exclude current path
+    # print(d)
+    parser = argparse.ArgumentParser(description=desc)
+    parser.parse_args()
+    modules = list_modules(scenarionet)
+    print("\nAvailable operations (usage python -m scenarionet.operation): \n")
+    for module in modules:
+        # module="convert_nuplan"
+        print(
+            "{}scenarionet.{}: {} \n".format(
+                " " * 5, module,
+                importlib.import_module("scenarionet.{}".format(module)).desc
+            )
+        )
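This is why the commit moves every operation's imports under `if __name__ == '__main__':` and hoists `desc` to module level: `scenarionet.list` can then import each operation module and read its `desc` without triggering the heavy (metadrive, waymo, nuplan) imports. A self-contained demonstration of that pattern, using a hypothetical mini-module written to a temp file:

```python
import importlib.util
import os
import tempfile
import textwrap

# A hypothetical operation module in the same shape as the ones in this
# commit: `desc` at module level, heavy imports behind the __main__ guard.
src = textwrap.dedent('''
    desc = "Demo operation"
    if __name__ == '__main__':
        import argparse  # only paid for when run as a script
        parser = argparse.ArgumentParser(description=desc)
''')

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "demo_op.py")
    with open(path, "w") as f:
        f.write(src)
    spec = importlib.util.spec_from_file_location("demo_op", path)
    module = importlib.util.module_from_spec(spec)
    # __name__ is "demo_op" here, not "__main__", so the guarded body
    # (and its imports) never executes -- yet desc is still available.
    spec.loader.exec_module(module)
    print(module.desc)  # Demo operation
```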


@@ -1,10 +1,12 @@
-import argparse
-from scenarionet.builder.filters import ScenarioFilter
-from scenarionet.builder.utils import merge_database
+desc = "Merge a list of databases. e.g. scenario.merge --from db_1 db_2 db_3...db_n --to db_dest"
 if __name__ == '__main__':
-    parser = argparse.ArgumentParser()
+    import argparse
+    from scenarionet.builder.filters import ScenarioFilter
+    from scenarionet.builder.utils import merge_database
+    parser = argparse.ArgumentParser(description=desc)
     parser.add_argument(
         "--database_path",
         "-d",
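At its core, merging databases is a union of their per-database summaries. A conceptual sketch only; the real `merge_database` also rewrites the scenario-to-file mapping and can apply `ScenarioFilter` instances:

```python
# Toy summaries standing in for the databases being merged:
db_1 = {"sd_nuscenes_0.pkl": {"dataset": "nuscenes"}}
db_2 = {"sd_waymo_0.pkl": {"dataset": "waymo"}, "sd_waymo_1.pkl": {"dataset": "waymo"}}

merged = {}
for summary in (db_1, db_2):
    merged.update(summary)  # later databases win on duplicate scenario IDs

print(len(merged))  # 3
```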


@@ -1,12 +1,14 @@
-import pkg_resources  # for suppress warning
-import argparse
-import logging
-from scenarionet.common_utils import read_dataset_summary
-logger = logging.getLogger(__file__)
+desc = "The number of scenarios in the specified database"
 if __name__ == '__main__':
-    parser = argparse.ArgumentParser()
+    import pkg_resources  # for suppress warning
+    import argparse
+    import logging
+    from scenarionet.common_utils import read_dataset_summary
+    logger = logging.getLogger(__file__)
+    parser = argparse.ArgumentParser(description=desc)
     parser.add_argument("--database_path", "-d", required=True, help="Database to check number of scenarios")
     args = parser.parse_args()
     summary, _, _, = read_dataset_summary(args.database_path)
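The scenario count is just the length of the summary dict that `read_dataset_summary` returns as its first value. A stdlib-only sketch, assuming the summary is a dict keyed by scenario file name and stored as a pickle:

```python
import os
import pickle
import tempfile

# Toy summary standing in for a real database's summary.pkl:
summary = {"sd_pg_0.pkl": {}, "sd_pg_1.pkl": {}, "sd_pg_2.pkl": {}}

with tempfile.TemporaryDirectory() as database_path:
    with open(os.path.join(database_path, "summary.pkl"), "wb") as f:
        pickle.dump(summary, f)
    with open(os.path.join(database_path, "summary.pkl"), "rb") as f:
        loaded = pickle.load(f)
    print(len(loaded))  # 3 scenarios in the database
```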


@@ -1,14 +1,16 @@
-import logging
-import pkg_resources  # for suppress warning
-import argparse
-import os
-from metadrive.envs.scenario_env import ScenarioEnv
-from metadrive.policy.replay_policy import ReplayEgoCarPolicy
-from metadrive.scenario.utils import get_number_of_scenarios
+desc = "Load a database to simulator and replay scenarios"
 if __name__ == '__main__':
-    parser = argparse.ArgumentParser()
+    import logging
+    import pkg_resources  # for suppress warning
+    import argparse
+    import os
+    from metadrive.envs.scenario_env import ScenarioEnv
+    from metadrive.policy.replay_policy import ReplayEgoCarPolicy
+    from metadrive.scenario.utils import get_number_of_scenarios
+    parser = argparse.ArgumentParser(description=desc)
     parser.add_argument("--database_path", "-d", required=True, help="The path of the database")
     parser.add_argument("--render", default="none", choices=["none", "2D", "3D", "advanced"])
     parser.add_argument("--scenario_index", default=None, type=int, help="Specifying a scenario to run")


@@ -1,13 +1,15 @@
 """
 This script is for extracting a subset of data from an existing database
 """
-import pkg_resources  # for suppress warning
-import argparse
-from scenarionet.builder.utils import split_database
+desc = "Build a new database containing a subset of scenarios from an existing database."
 if __name__ == '__main__':
-    parser = argparse.ArgumentParser()
+    import pkg_resources  # for suppress warning
+    import argparse
+    from scenarionet.builder.utils import split_database
+    parser = argparse.ArgumentParser(description=desc)
     parser.add_argument('--from', required=True, help="Which database to extract data from.")
     parser.add_argument(
         "--to",
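Splitting is the mirror image of merging: carve a subset of scenario IDs out of one summary into a new one. A conceptual sketch only; the real `split_database` has its own selection options (e.g. random sampling), which this toy version merely imitates:

```python
import random

# Toy database of ten scenario IDs:
ids = sorted("sd_pg_{}.pkl".format(i) for i in range(10))

random.seed(0)  # deterministic for the sake of the example
subset = random.sample(ids, 3)  # new database gets 3 of the 10 scenarios

print(len(subset))  # 3
```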