Update nuScenes & Waymo Optimization (#47)
* update can bus
* Create LICENSE
* update waymo doc
* protobuf requirement
* just warning
* Add warning for proto
* update PR template
* fix length bug
* try sharing nusc
* imu heading
* fix 161 168
* add badge
* fix doc
* update doc
* format
* update cp
* update nuscenes interface
* update doc
* prediction nuscenes
* use drivable area for nuscenes
* allow converting prediction
* format
* fix bug
* optimize
* clean RAM
* delete more
* restore to
* add only lane
* use token
* add warning
* format
* fix bug
@@ -28,18 +28,17 @@ After that, the scenarios will be loaded to MetaDrive simulator for closed-loop

First of all, please install `MetaDrive <https://github.com/metadriverse/metadrive>`_ and `ScenarioNet <https://github.com/metadriverse/scenarionet>`_ following the steps in :ref:`installation`.
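
For reference, a typical editable install of the two packages looks like this (a sketch of the usual steps; :ref:`installation` remains the authoritative guide)::

    git clone https://github.com/metadriverse/metadrive.git
    cd metadrive && pip install -e . && cd ..
    git clone https://github.com/metadriverse/scenarionet.git
    cd scenarionet && pip install -e . && cd ..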

1. Setup Waymo toolkit
1. Setup Requirements
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For any dataset, the first step after installing ScenarioNet is to install the corresponding official toolkit, as we need it to parse the original data and convert it to our internal scenario description. For Waymo data, please install the toolkit via::
For any dataset, the first step after installing ScenarioNet is to install the corresponding official toolkit, as we need it to parse the original data and convert it to our internal scenario description.
For Waymo data, the parser is already included in ScenarioNet, so just install TensorFlow and Protobuf via::

    pip install waymo-open-dataset-tf-2-11-0
    pip install tensorflow==2.11.0
    conda install protobuf==3.20

.. note::
    This package is only supported on the Linux platform.
    ``waymo-open-dataset`` may downgrade numpy, causing conflicts with ``cv2`` (``opencv-python``).
    A workaround is to ``pip install numpy==1.24.2``.
    Installing ``protobuf`` via ``pip install protobuf==3.20`` may fail; use ``conda install protobuf==3.20`` instead.
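
To verify the environment, you can check the installed versions (a minimal sanity check matching the commands above)::

    python -c "import tensorflow as tf; print(tf.__version__)"            # expect 2.11.0
    python -c "from google import protobuf; print(protobuf.__version__)"  # expect 3.20.x
    python -c "import numpy; print(numpy.__version__)"                    # 1.24.2 avoids the cv2 conflict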

For other datasets like nuPlan and nuScenes, you need to set up `nuplan-devkit <https://github.com/motional/nuplan-devkit>`_ and `nuscenes-devkit <https://github.com/nutonomy/nuscenes-devkit>`_ respectively.
Guidance on how to set up these datasets and connect them with ScenarioNet can be found at :ref:`datasets`.

@@ -27,6 +27,9 @@ First of all, we have to install the ``nuplan-devkit``.

    pip install -r requirements.txt
    pip install -e .

    # additional requirements
    pip install pytorch-lightning

    # 2. or install from PyPI
    pip install nuplan-devkit
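
After installation, a quick import check can confirm the devkit is available (a minimal sketch)::

    python -c "import nuplan; print('nuplan-devkit OK')"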

@@ -51,7 +54,7 @@ Thus please download the following files:

We recommend downloading the mini split to test and familiarize yourself with the setup process.
All downloaded files are ``.tgz`` files and can be uncompressed by ``tar -zxf xyz.tgz``.
All downloaded files are ``.zip`` files and can be uncompressed by ``unzip "*.zip"``.
All data should be placed in ``~/nuplan/dataset`` and the folder structure should comply with the `file hierarchy <https://nuplan-devkit.readthedocs.io/en/latest/dataset_setup.html#filesystem-hierarchy>`_.

.. code-block:: text

@@ -85,7 +88,7 @@ All data should be placed to ``~/nuplan/dataset`` and the folder structure shoul

│ │ ├── 2021.06.09.17.23.18_veh-38_00773_01140.db
│ │ ├── ...
│ │ └── 2021.10.11.08.31.07_veh-50_01750_01948.db
│ └── trainval
│ └── train_boston
│ ├── 2021.05.12.22.00.38_veh-35_01008_01518.db
│ ├── 2021.06.09.17.23.18_veh-38_00773_01140.db
│ ├── ...
@@ -60,7 +60,9 @@ Secondly, all files should be organized to the following structure::

├── maps/
| ├──basemap/
| ├──prediction/
| ├──expansion/
| └──expansion/
├── can_bus/
| ├──scene-1110_meta.json
| └──...
├── samples/
| ├──CAM_BACK

@@ -95,9 +97,9 @@ Please try ``nuscenes-devkit/python-sdk/tutorials/nuscenes_tutorial.ipynb`` and

After setting up the raw data, converters in ScenarioNet can read it, convert the scenario format, and build the database.
Here we take converting raw data in ``nuscenes-mini`` as an example::

    python -m scenarionet.convert_nuscenes -d /path/to/your/database --version v1.0-mini --dataroot /nuscenes/data/path
    python -m scenarionet.convert_nuscenes -d /path/to/your/database --split v1.0-mini --dataroot /nuscenes/data/path

The ``version`` argument determines which split to convert. ``dataroot`` is set to ``/data/sets/nuscenes`` by default,
The ``split`` argument determines which split to convert. ``dataroot`` is set to ``/data/sets/nuscenes`` by default,
but you need to specify it if your data is stored in any other directory.
Now all converted scenarios will be placed at ``/path/to/your/database`` and are ready to be used in your work.
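
To sanity-check the result, you can replay a few converted scenarios in the simulator. Assuming your ScenarioNet version provides the ``sim`` entry point, this looks like::

    python -m scenarionet.sim -d /path/to/your/database --render 2D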

@@ -113,8 +113,12 @@ Convert nuScenes (Lyft)

.. code-block:: text

python -m scenarionet.convert_nuscenes [-h] [--database_path DATABASE_PATH]
[--dataset_name DATASET_NAME] [--version VERSION]
[--overwrite] [--num_workers NUM_WORKERS]
[--dataset_name DATASET_NAME]
[--split
{v1.0-mini,mini_val,v1.0-test,train,train_val,val,mini_train,v1.0-trainval}]
[--dataroot DATAROOT] [--map_radius MAP_RADIUS]
[--future FUTURE] [--past PAST] [--overwrite]
[--num_workers NUM_WORKERS]

Build database from nuScenes/Lyft scenarios

@@ -124,13 +128,31 @@ Convert nuScenes (Lyft)

directory, the path to place the data
--dataset_name DATASET_NAME, -n DATASET_NAME
Dataset name, will be used to generate scenario files
--version VERSION, -v VERSION
version of nuscenes data, scenarios of this version
will be converted
--split
{v1.0-mini,mini_val,v1.0-test,train,train_val,val,mini_train,v1.0-trainval}
Which splits of nuScenes data should be used. If set
to ['v1.0-mini', 'v1.0-trainval', 'v1.0-test'], it
will convert the full log into scenarios with 20
second episode length. If set to ['mini_train',
'mini_val', 'train', 'train_val', 'val'], it will
convert segments used for the nuScenes prediction
challenge to scenarios, resulting in more converted
scenarios. Generally, you should choose this parameter
from ['v1.0-mini', 'v1.0-trainval', 'v1.0-test'] to
get complete scenarios for planning unless you want to
use the converted scenario files for the prediction task.
--dataroot DATAROOT The path of nuscenes data
--map_radius MAP_RADIUS The size of the map
--future FUTURE 6 seconds by default. How many future seconds to
predict. Only available if split is chosen from
['mini_train', 'mini_val', 'train', 'train_val',
'val']
--past PAST 2 seconds by default. How many past seconds are used
for prediction. Only available if split is chosen from
['mini_train', 'mini_val', 'train', 'train_val',
'val']
--overwrite If the database_path exists, whether to overwrite it
--num_workers NUM_WORKERS
number of workers to use

This script converts the recorded nuScenes scenarios into our scenario descriptions.
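
For example, converting the prediction-challenge segments rather than full logs could look like this (paths are placeholders; ``--past``/``--future`` are shown at their defaults)::

    python -m scenarionet.convert_nuscenes -d /path/to/your/database --split mini_train --dataroot /data/sets/nuscenes --past 2 --future 6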

@@ -187,7 +209,7 @@ we can aggregate them freely and enlarge the database.

.. code-block:: text

python -m scenarionet.merge [-h] --database_path DATABASE_PATH --from FROM [FROM ...]
python -m scenarionet.merge [-h] --to DATABASE_PATH --from FROM [FROM ...]
[--exist_ok] [--overwrite] [--filter_moving_dist]
[--sdc_moving_dist_min SDC_MOVING_DIST_MIN]

@@ -196,7 +218,7 @@ we can aggregate them freely and enlarge the database.

optional arguments:
-h, --help show this help message and exit
--database_path DATABASE_PATH, -d DATABASE_PATH
--database_path DATABASE_PATH, -d DATABASE_PATH, --to DATABASE_PATH
The name of the new combined database. It will create
a new directory to store dataset_summary.pkl and
dataset_mapping.pkl. If exists_ok=True, those two .pkl
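
As a usage sketch, merging two databases into a new combined one might look like this (paths are placeholders)::

    python -m scenarionet.merge --to /path/to/combined_database --from /path/to/waymo_database /path/to/nuscenes_database --exist_ok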

@@ -26,18 +26,17 @@ The dataset includes:

- Adjusted some road edge boundary height estimates


1. Install Waymo Toolkit
1. Install Requirements
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

First of all, we have to install the Waymo toolkit and TensorFlow::
First of all, we have to install TensorFlow and Protobuf::

    pip install waymo-open-dataset-tf-2-11-0
    pip install tensorflow==2.11.0
    conda install protobuf==3.20

.. note::
    This package is only supported on the Linux platform.
    ``waymo-open-dataset`` may downgrade numpy, causing conflicts with ``cv2``.
    A workaround is ``pip install numpy==1.24.2``.
    Installing ``protobuf`` via ``pip install protobuf==3.20`` may fail; use ``conda install protobuf==3.20`` instead.

2. Download TFRecord
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~