Doc-example (#18)

* reactive traffic example
* structure
* structure
* waymo example
* rename and add doc
* finish example
* example
* start from 2
README.md (10 lines changed)
@@ -10,14 +10,12 @@

ScenarioNet allows users to load scenarios from real-world datasets like Waymo, nuPlan, nuScenes, L5 and synthetic
datasets such as procedurally generated ones and safety-critical ones generated by adversarial attack.
The built database provides tools for building training and test sets for ML applications.

Powered by [MetaDrive Simulator](https://github.com/metadriverse/metadrive), the scenarios can be reconstructed for
various applications like AD stack test, reinforcement learning, imitation learning, scenario generation and so on.

## Installation
@@ -78,7 +76,7 @@ python -m scenarionet.scripts.convert_pg -d pg --num_workers=16 --num_scenarios=
For merging two or more databases, use

```
-python -m scenarionet.merge_database -d /destination/path --from /database1 /2 ...
+python -m scenarionet.merge -d /destination/path --from /database1 /2 ...
```

As a database contains a path mapping, one should move a database folder with the following script instead of `cp`
@@ -119,6 +117,6 @@ python -m scenarionet.generate_from_error_file -d /new/database/path --file /err
Visualizing the simulated scenario

```
-python -m scenarionet.run_simulation -d /path/to/database --render --scenario_index
+python -m scenarionet.sim -d /path/to/database --render --scenario_index
```
docs/asset/system_01.png (new binary file, 2.9 MiB; binary file not shown)
@@ -3,7 +3,7 @@ This folder contains files for the documentation: [https://scenarionet.readthedo
To build the documents locally, please run the following commands:

```
-pip install sphinx sphinx_rtd_theme
+pip install -e .[doc]
cd scenarionet/documentation
make html
```
@@ -31,7 +31,8 @@ release = '0.1.1'
# ones.
extensions = [
    "sphinx.ext.autosectionlabel",
-    "sphinx_rtd_theme"
+    "sphinx_rtd_theme",
+    "sphinxemoji.sphinxemoji"
]

# Add any paths that contain templates here, relative to this directory.
documentation/datasets.rst (new file, 7 lines)
@@ -0,0 +1,7 @@
.. _datasets:


#####################
Supported Datasets
#####################
documentation/description.rst (new file, 3 lines)
@@ -0,0 +1,3 @@
########################
Scenario Description
########################
documentation/example.rst (new file, 136 lines)
@@ -0,0 +1,136 @@
#######################
Example
#######################

In this example, we will show you how to convert a small batch of `Waymo <https://waymo.com/intl/en_us/open/>`_ scenarios into the internal Scenario Description.
After that, the scenarios will be loaded into the simulator for closed-loop simulation.
First of all, please install `MetaDrive <https://github.com/metadriverse/metadrive>`_ and `ScenarioNet <https://github.com/metadriverse/scenarionet>`_ following the steps in :ref:`installation`.

**1. Setup Waymo toolkit**

For any dataset, this step is necessary after installing ScenarioNet,
as we need the official toolkit of the data provider to parse the original scenario description and convert it to our internal scenario description.
For Waymo data, please install the toolkit via::

    pip install waymo-open-dataset-tf-2-11-0==1.5.0

.. note::
    This package is only supported on the Linux platform.

For other datasets like nuPlan and nuScenes, you need to set up `nuplan-devkit <https://github.com/motional/nuplan-devkit>`_ and `nuscenes-devkit <https://github.com/nutonomy/nuscenes-devkit>`_ respectively.
Guidance on how to set up these datasets and connect them with ScenarioNet can be found at :ref:`datasets`.

**2. Prepare Data**

Access the Waymo motion data at `Google Cloud <https://console.cloud.google.com/storage/browser/waymo_open_dataset_motion_v_1_2_0>`_.
Download one tfrecord scenario file from ``waymo_open_dataset_motion_v_1_2_0/uncompressed/scenario/training_20s``.
In this tutorial, we only use the first file, ``training_20s.tfrecord-00000-of-01000``.
Just click the download button |:arrow_down:| on the right side to download it,
and place the downloaded tfrecord file in a folder. Let's call it ``exp_waymo``; its structure looks like this::

    exp_waymo
    ├──training_20s.tfrecord-00000-of-01000

.. note::
    To build a database from all scenarios, install ``gsutil`` and use this command:
    ``gsutil -m cp -r "gs://waymo_open_dataset_motion_v_1_2_0/uncompressed/scenario/training_20s" .``
    Likewise, place all downloaded tfrecord files in the same folder.

**3. Convert to Scenario Description**

Run the following command to extract the scenarios in ``exp_waymo`` to ``exp_converted``::

    python -m scenarionet.convert_waymo -d /path/to/exp_converted/ --raw_data_path /path/to/exp_waymo --num_files=1

.. note::
    When running ``python -m``, make sure the directory you are in doesn't contain a folder called ``scenarionet``.
    Otherwise, the run may fail. For more details about the command, use ``python -m scenarionet.convert_waymo -h``.

Now all extracted scenarios will be placed in the ``exp_converted`` directory.
If we list the directory with the ``ll`` command, the structure will look like::

    exp_converted
    ├──exp_converted_0
    ├──exp_converted_1
    ├──exp_converted_2
    ├──exp_converted_3
    ├──exp_converted_4
    ├──exp_converted_5
    ├──exp_converted_6
    ├──exp_converted_7
    ├──dataset_mapping.pkl
    ├──dataset_summary.pkl

This is because we use 8 workers to extract the scenarios, so the converted scenarios are stored in 8 subfolders.
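The fan-out into 8 subfolders can be pictured with a tiny sketch. This is a hypothetical chunking for illustration only; ScenarioNet's actual worker assignment may differ:

```python
# Hypothetical illustration of spreading 61 scenarios over 8 workers,
# where worker i writes its share into subfolder exp_converted_i.
# The real assignment inside ScenarioNet may differ.
num_workers = 8
scenario_ids = list(range(61))

# Round-robin chunking: worker i takes every 8th scenario starting at i.
chunks = [scenario_ids[i::num_workers] for i in range(num_workers)]
folders = ["exp_converted_{}".format(i) for i in range(num_workers)]

for folder, chunk in zip(folders, chunks):
    print(folder, "gets", len(chunk), "scenarios")
```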
If we check ``exp_converted_0``, we will see a structure like::

    ├──sd_waymo_v1.2_2085c5cffcd4727b.pkl
    ├──sd_waymo_v1.2_27997d88023ff2a2.pkl
    ├──sd_waymo_v1.2_3ece8d267ce5847c.pkl
    ├──sd_waymo_v1.2_53e9adfdac0eb822.pkl
    ├──sd_waymo_v1.2_8e40ffb80dd2f541.pkl
    ├──sd_waymo_v1.2_df72c5dc77a73ed6.pkl
    ├──sd_waymo_v1.2_f1f6068fabe77dc8.pkl
    ├──dataset_mapping.pkl
    ├──dataset_summary.pkl

The subfolder produced by each worker is thus where the converted scenarios actually live.
To aggregate the scenarios produced by all workers, ``exp_converted/dataset_mapping.pkl`` stores the mapping
from ``scenario_id`` to the path of the target scenario file relative to ``exp_converted``.
As a result, we can get all scenarios produced by the 8 workers by loading the database ``exp_converted``.

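The aggregation idea can be sketched in a few lines. This is a hypothetical illustration assuming the mapping behaves like a plain dict from file name to worker subfolder; the real ``dataset_mapping.pkl`` is a pickled file produced and consumed by ScenarioNet's own tooling.

```python
import os

# Hypothetical stand-in for the contents of exp_converted/dataset_mapping.pkl:
# each scenario file name maps to the worker subfolder that holds it.
mapping = {
    "sd_waymo_v1.2_2085c5cffcd4727b.pkl": "exp_converted_0",
    "sd_waymo_v1.2_27997d88023ff2a2.pkl": "exp_converted_0",
}

def resolve(database_root, scenario_file):
    # Join the database root, the worker subfolder, and the file name,
    # so loading the root database reaches every worker's scenarios.
    return os.path.join(database_root, mapping[scenario_file], scenario_file)

print(resolve("exp_converted", "sd_waymo_v1.2_2085c5cffcd4727b.pkl"))
```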
**4. Database Operations**

Several basic operations are available, allowing us to split, merge, move, and check databases.
First of all, let's check how many scenarios are included in this database built from ``training_20s.tfrecord-00000-of-01000``::

    python -m scenarionet.num -d /path/to/exp_converted/

It will show that there are 61 scenarios in total.
For machine learning applications, we usually want to split training and test sets.
To this end, we can use the following command to build the training set::

    python -m scenarionet.split --from /path/to/exp_converted/ --to /path/to/exp_train --num_scenarios 40

Likewise, use the following command to build the test set::

    python -m scenarionet.split --from /path/to/exp_converted/ --to /path/to/exp_test --num_scenarios 21 --start_index 40

We add the ``start_index`` argument to select the last 21 scenarios as the test set.
To ensure that no overlap exists, we can run this command::

    python -m scenarionet.check_overlap --d_1 /path/to/exp_train/ --d_2 /path/to/exp_test/

It will report ``No overlapping in two database!``.
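Conceptually, the overlap check reduces to a set intersection over scenario IDs. A minimal sketch with made-up IDs (the real command reads each database's summary instead):

```python
# Toy sketch of the overlap check: the 61 scenarios are split into the
# first 40 (training) and the remaining 21 (test, start_index=40), so the
# two id sets cannot intersect. Ids here are made up for illustration.
train_ids = {"sd_waymo_{}".format(i) for i in range(0, 40)}
test_ids = {"sd_waymo_{}".format(i) for i in range(40, 61)}

overlap = train_ids & test_ids
if not overlap:
    print("No overlapping in two database!")
else:
    print("Find {} overlapped scenarios".format(len(overlap)))
```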
Now, let's suppose that ``exp_train`` and ``exp_test`` are two databases built
from different sources and we want to merge them into a larger one.
This can be achieved by::

    python -m scenarionet.merge --from /path/to/exp_train/ /path/to/exp_test -d /path/to/exp_merged

Let's check whether the merged database covers the same scenarios as the original one::

    python -m scenarionet.check_overlap --d_1 /path/to/exp_merged/ --d_2 /path/to/exp_converted

It will show that there are 61 overlapped scenarios.
Congratulations! You are now familiar with some common operations.
More operations and details are available at :ref:`operations`.

**5. Simulation**

The database can be loaded into the MetaDrive simulator for scenario replay or closed-loop simulation.
First of all, let's replay scenarios in the ``exp_converted`` database::

    python -m scenarionet.sim -d /path/to/exp_converted

By adding the ``--render 3D`` flag, we can use the 3D renderer::

    python -m scenarionet.sim -d /path/to/exp_converted --render 3D

.. note::
    ``--render advanced`` enables the advanced deferred rendering pipeline,
    but a GPU better than an RTX 2060 is required.
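As a sketch of what these flags do under the hood, the ``sim`` script (see its diff later in this commit) derives the simulator config from the ``--render`` choice roughly as follows:

```python
# Map the --render choice onto simulator config flags, mirroring the sim
# script's logic: a 3D window is opened for "3D" and "advanced", and the
# deferred rendering pipeline only for "advanced".
def render_config(render):
    return {
        "use_render": render == "3D" or render == "advanced",
        "render_pipeline": render == "advanced",
    }

print(render_config("3D"))
```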
@@ -1,6 +1,6 @@
-########################
+##########################
 ScenarioNet Documentation
-########################
+##########################


Welcome to the ScenarioNet documentation!
@@ -17,6 +17,40 @@ You can also visit the `GitHub repo <https://github.com/metadriverse/scenarionet
Please feel free to contact us if you have any suggestions or ideas!


.. toctree::
    :maxdepth: 2
    :hidden:
    :caption: Quick Start

    install.rst
    example.rst
    operations.rst

.. toctree::
    :hidden:
    :maxdepth: 2
    :caption: Scenario Description

    description.rst

.. toctree::
    :hidden:
    :maxdepth: 2
    :caption: Supported Dataset

    datasets.rst
    nuplan.rst
    nuscenes.rst
    waymo.rst
    new_data.rst

.. toctree::
    :hidden:
    :maxdepth: 2
    :caption: Simulation


Citation
########
documentation/install.rst (new file, 38 lines)
@@ -0,0 +1,38 @@
.. _install:

########################
Installation
########################

The ScenarioNet repo contains tools for converting scenarios and building databases from various data sources.
The simulation part is maintained in the `MetaDrive <https://github.com/metadriverse/metadrive>`_ repo, so let's install MetaDrive first.

**1. Install MetaDrive**

The installation of MetaDrive on different platforms is straightforward and easy!
We recommend using the following commands to install::

    # Install MetaDrive Simulator
    git clone git@github.com:metadriverse/metadrive.git
    cd metadrive
    pip install -e .

It can also be installed from PyPI by::

    pip install "metadrive-simulator>=0.4.1.1"

To check whether MetaDrive is successfully installed, please run::

    python -m metadrive.examples.profile_metadrive

.. note:: Please do not run the above command in a folder that has a sub-folder called :code:`./metadrive`.

**2. Install ScenarioNet**

For ScenarioNet, we only provide GitHub installation::

    # Install ScenarioNet
    git clone git@github.com:metadriverse/scenarionet.git
    cd scenarionet
    pip install -e .
documentation/new_data.rst (new file, 5 lines)
@@ -0,0 +1,5 @@
.. _new_data:

#############################
Add new datasets
#############################
documentation/nuplan.rst (new file, 3 lines)
@@ -0,0 +1,3 @@
#############################
nuPlan
#############################
documentation/nuscenes.rst (new file, 3 lines)
@@ -0,0 +1,3 @@
#############################
nuScenes
#############################
documentation/operations.rst (new file, 4 lines)
@@ -0,0 +1,4 @@
###############
Operations
###############
documentation/waymo.rst (new file, 3 lines)
@@ -0,0 +1,3 @@
#############################
Waymo
#############################
@@ -8,8 +8,9 @@ from scenarionet.common_utils import read_dataset_summary

 if __name__ == '__main__':
     parser = argparse.ArgumentParser()
-    parser.add_argument('--database_1', type=str, required=True, help="The path of the first database")
-    parser.add_argument('--database_2', type=str, required=True, help="The path of the second database")
+    parser.add_argument('--d_1', type=str, required=True, help="The path of the first database")
+    parser.add_argument('--d_2', type=str, required=True, help="The path of the second database")
+    parser.add_argument('--show_id', action="store_true", help="Whether to show the ids of overlapped scenarios")
     args = parser.parse_args()

     summary_1, _, _ = read_dataset_summary(args.d_1)
@@ -19,4 +20,6 @@ if __name__ == '__main__':
     if len(intersection) == 0:
         print("No overlapping in two database!")
     else:
-        print("Find overlapped scenarios: {}".format(intersection))
+        print("Find {} overlapped scenarios".format(len(intersection)))
+        if args.show_id:
+            print("Overlapped scenario ids: {}".format(intersection))
@@ -1,3 +1,5 @@
+import logging
+
 import pkg_resources  # for suppressing a warning
 import argparse
 import os
@@ -8,7 +10,7 @@ from metadrive.scenario.utils import get_number_of_scenarios

 if __name__ == '__main__':
     parser = argparse.ArgumentParser()
     parser.add_argument("--database_path", "-d", required=True, help="The path of the database")
-    parser.add_argument("--render", action="store_true", help="Enable 3D rendering")
+    parser.add_argument("--render", default="none", choices=["none", "2D", "3D", "advanced"])
     parser.add_argument("--scenario_index", default=None, type=int, help="Specifying a scenario to run")
     args = parser.parse_args()

@@ -20,16 +22,22 @@ if __name__ == '__main__':

     env = ScenarioEnv(
         {
-            "use_render": args.render,
+            "use_render": args.render == "3D" or args.render == "advanced",
             "agent_policy": ReplayEgoCarPolicy,
             "manual_control": False,
+            "render_pipeline": args.render == "advanced",
             "show_interface": True,
             # "reactive_traffic": args.reactive,
             "show_logo": False,
             "show_fps": False,
             "log_level": logging.CRITICAL,
             "num_scenarios": num_scenario,
             "interface_panel": [],
             "horizon": 1000,
             "vehicle_config": dict(
-                show_navi_mark=False,
+                show_navi_mark=True,
                 show_line_to_dest=False,
                 show_dest_mark=False,
                 no_wheel_friction=True,
                 lidar=dict(num_lasers=120, distance=50, num_others=4),
                 lane_line_detector=dict(num_lasers=12, distance=50),
@@ -38,15 +46,29 @@ if __name__ == '__main__':
             "data_directory": database_path,
         }
     )
-    for index in range(num_scenario if args.scenario_index is not None else 1000000):
+    for index in range(2, num_scenario if args.scenario_index is not None else 1000000):
         env.reset(seed=index if args.scenario_index is None else args.scenario_index)
         for t in range(10000):
             env.step([0, 0])
             if env.config["use_render"]:
-                env.render(text={
-                    "scenario index": env.engine.global_seed + env.config["start_scenario_index"],
-                })
+                env.render(
+                    text={
+                        "scenario index": env.engine.global_seed + env.config["start_scenario_index"],
+                        "[": "Load last scenario",
+                        "]": "Load next scenario",
+                        "r": "Reset current scenario",
+                    }
+                )
+
+            if args.render == "2D":
+                env.render(
+                    film_size=(3000, 3000),
+                    target_vehicle_heading_up=False,
+                    mode="top_down",
+                    text={
+                        "scenario index": env.engine.global_seed + env.config["start_scenario_index"],
+                    }
+                )
             if env.episode_step >= env.engine.data_manager.current_scenario_length:
                 print("scenario:{}, success".format(env.engine.global_random_seed))
                 break
@@ -1,6 +1,6 @@
 #!/usr/bin/env bash

-python ../../merge_database.py --overwrite --exist_ok --database_path ../tmp/test_combine_dataset --from ../../../dataset/waymo ../../../dataset/pg ../../../dataset/nuscenes ../../../dataset/nuplan --overwrite
+python ../../merge.py --overwrite --exist_ok --database_path ../tmp/test_combine_dataset --from ../../../dataset/waymo ../../../dataset/pg ../../../dataset/nuscenes ../../../dataset/nuplan --overwrite
 python ../../check_simulation.py --overwrite --database_path ../tmp/test_combine_dataset --error_file_path ../tmp/test_combine_dataset --random_drop --num_workers=16
 python ../../generate_from_error_file.py --file ../tmp/test_combine_dataset/error_scenarios_for_test_combine_dataset.json --overwrite --database_path ../tmp/verify_pass
 python ../../generate_from_error_file.py --file ../tmp/test_combine_dataset/error_scenarios_for_test_combine_dataset.json --overwrite --database_path ../tmp/verify_fail --broken
@@ -32,8 +32,8 @@ done

 # combine the datasets
 if [ "$overwrite" = true ]; then
-    python -m scenarionet.scripts.merge_database --database_path $dataset_path --from $(for i in $(seq 0 $((num_sub_dataset-1))); do echo -n "${dataset_path}/pg_$i "; done) --overwrite --exist_ok
+    python -m scenarionet.scripts.merge --database_path $dataset_path --from $(for i in $(seq 0 $((num_sub_dataset-1))); do echo -n "${dataset_path}/pg_$i "; done) --overwrite --exist_ok
 else
-    python -m scenarionet.scripts.merge_database --database_path $dataset_path --from $(for i in $(seq 0 $((num_sub_dataset-1))); do echo -n "${dataset_path}/pg_$i "; done) --exist_ok
+    python -m scenarionet.scripts.merge --database_path $dataset_path --from $(for i in $(seq 0 $((num_sub_dataset-1))); do echo -n "${dataset_path}/pg_$i "; done) --exist_ok
 fi
scenarionet/tests/script/reactive_traffic.py (new file, 60 lines)
@@ -0,0 +1,60 @@
from metadrive.envs.scenario_env import ScenarioEnv

if __name__ == "__main__":
    env = ScenarioEnv(
        {
            "use_render": True,
            # "agent_policy": ReplayEgoCarPolicy,
            "manual_control": False,
            "show_interface": False,
            "show_logo": False,
            "show_fps": False,
            "show_mouse": False,
            # "debug": True,
            # "debug_static_world": True,
            # "no_traffic": True,
            # "no_light": True,
            "start_scenario_index": 1,
            # "start_scenario_index": 1000,
            "num_scenarios": 1,
            # "force_reuse_object_name": True,
            # "data_directory": "/home/shady/Downloads/test_processed",
            "horizon": 1000,
            "render_pipeline": True,
            # "reactive_traffic": True,
            "no_static_vehicles": False,
            "force_render_fps": 10,
            "show_policy_mark": True,
            # "show_coordinates": True,
            "vehicle_config": dict(
                show_navi_mark=False,
                no_wheel_friction=False,
                lidar=dict(num_lasers=120, distance=50, num_others=4),
                lane_line_detector=dict(num_lasers=12, distance=50),
                side_detector=dict(num_lasers=160, distance=50)
            ),
        }
    )
    success = []
    while True:
        env.reset(seed=1)
        env.stop()
        env.main_camera.chase_camera_height = 50
        for i in range(250):
            if i < 50:
                action = [0, -1]
            elif i < 70:
                action = [0.55, 0.5]
            elif i < 80:
                action = [-0.5, 0.5]
            # elif i < 100:
            #     action = [0, -0.4]
            elif i < 110:
                action = [0, -0.1]
            elif i < 130:
                action = [0.3, 0.5]
            else:
                action = [0, -0.5]
            o, r, tm, tc, info = env.step(action)
setup.py (52 lines changed)
@@ -34,20 +34,26 @@ install_requires = [
     "matplotlib",
     "pandas",
     "tqdm",
-    "metadrive-simulator",
+    "metadrive-simulator>=0.4.1.1",
     "geopandas",
-    "yapf==0.30.0",
+    "yapf",
     "shapely"
 ]

+doc = [
+    "sphinxemoji",
+    "sphinx",
+    "sphinx_rtd_theme",
+]
+
 train_requirement = [
     "ray[rllib]==1.0.0",
     # "torch",
     "wandb==0.12.1",
     "aiohttp==3.6.0",
     "gymnasium",
     "tensorflow",
     "tensorflow_probability"]

 setup(
     name="scenarionet",
@@ -61,6 +67,7 @@ setup(
     install_requires=install_requires,
     extras_require={
         "train": train_requirement,
+        "doc": doc
     },
     include_package_data=True,
     license="Apache 2.0",
@@ -68,30 +75,3 @@ setup(
     long_description_content_type='text/markdown',
 )
-
-"""
-How to publish to PyPI? Noted by Zhenghao on Dec 27, 2020.
-
-0. Rename the version in setup.py
-
-1. Remove old files and ext_modules from setup() to get a clean wheel for all platforms in py3-none-any.wheel
-   rm -rf dist/ build/ documentation/build/ scenarionet.egg-info/ docs/build/
-
-2. Rename the current version to X.Y.Z.rcA, where A is an arbitrary value representing "release candidate A".
-   This is really important since PyPI does not support renaming and re-uploading.
-   Rename the version in setup.py
-
-3. Get the wheel
-   python setup.py sdist bdist_wheel
-
-4. Upload to the test channel
-   twine upload --repository testpypi dist/*
-
-5. Test as in the next line. If it fails, change the version name and repeat steps 1-5.
-   pip install --index-url https://test.pypi.org/simple/ scenarionet
-
-6. Rename the current version to X.Y.Z in setup.py, and rerun steps 1 and 3.
-
-7. Upload to the production channel
-   twine upload dist/*
-
-"""