Colosseum O-RAN COMMAG Dataset

This repository contains the dataset for the paper L. Bonati, S. D'Oro, M. Polese, S. Basagni, T. Melodia, "Intelligence and Learning in O-RAN for Data-driven NextG Cellular Networks," IEEE Communications Magazine, vol. 59, no. 10, pp. 21–27, October 2021. Please cite the paper if you plan to use it in your publication.

Experiment setup

  • Number of Base Stations (BSs): 4
  • Channel bandwidth: 3 MHz (15 Physical Resource Blocks (PRBs))
  • Number of slices for each BS: 3
  • Scheduling policies available to each slice:
    • Policy 0: Round-robin (RR)
    • Policy 1: Waterfilling (WF)
    • Policy 2: Proportionally fair (PF)
  • Number of User Equipments (UEs): 40
  • Radio Frequency (RF) scenario setup (Colosseum Rome scenario):
    • Close: UEs uniformly distributed within 20 m of each BS
    • Medium: UEs uniformly distributed within 50 m of each BS
    • Far: UEs uniformly distributed within 100 m of each BS
  • UE Mobility:
    • Static: no mobility
    • Slow: 3 m/s
  • Traffic classes:
    • eMBB: Constant bitrate traffic (1 Mbps per UE)
    • MTC: Poisson traffic (30 pkt/s of 125 bytes per UE)
    • URLLC: Poisson traffic (10 pkt/s of 125 bytes per UE)
  • UEs belong to different traffic classes (see the sketch after this list):
    • eMBB UEs: 2, 5, 8, 12, 15, 18, 22, 25, 28, 32, 35, 38
    • MTC UEs: 3, 6, 9, 13, 16, 19, 23, 26, 29, 33, 36, 39
    • URLLC UEs: 1, 4, 7, 10, 11, 14, 17, 20, 21, 24, 27, 30, 31, 34, 37, 40
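
For reference, this UE-to-traffic-class assignment can be transcribed into code. The following is a minimal sketch; the dictionary and helper names are illustrative and not part of the released scripts.

```python
# Illustrative mapping of UE IDs to traffic classes, transcribed from the list above.
UE_TRAFFIC_CLASS = {
    "embb": [2, 5, 8, 12, 15, 18, 22, 25, 28, 32, 35, 38],
    "mtc": [3, 6, 9, 13, 16, 19, 23, 26, 29, 33, 36, 39],
    "urllc": [1, 4, 7, 10, 11, 14, 17, 20, 21, 24, 27, 30, 31, 34, 37, 40],
}

def traffic_class_of(ue_id: int) -> str:
    """Return the traffic class of a given UE ID (hypothetical helper)."""
    for traffic_class, ues in UE_TRAFFIC_CLASS.items():
        if ue_id in ues:
            return traffic_class
    raise ValueError(f"Unknown UE ID: {ue_id}")
```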

Dataset structure

  • slice_mixed: UEs are randomly distributed across slices
  • slice_traffic: UEs are assigned to slices based on their traffic class:
    • Slice 0: eMBB UEs
    • Slice 1: MTC UEs
    • Slice 2: URLLC UEs

Training configurations

The scheduling policies and initial Resource Block Group (RBG) allocations for each slice are as follows (a code transcription of the first rows is sketched after the table).

Training   Policy Slice 0   Policy Slice 1   Policy Slice 2   RBG Slice 0   RBG Slice 1   RBG Slice 2
tr0        PF               RR               PF               1             2             4
tr1        WF               RR               RR               1             4             2
tr2        RR               PF               WF               2             1             4
tr3        WF               WF               PF               2             4             1
tr4        RR               WF               WF               4             2             1
tr5        WF               WF               WF               4             1             2
tr6        PF               PF               WF               2             2             3
tr7        WF               RR               PF               2             3             2
tr8        WF               PF               RR               3             2             2
tr9        PF               WF               RR               3             3             1
tr10       RR               RR               PF               3             1             3
tr11       RR               PF               RR               1             3             3
tr12       RR               RR               RR               1             2             4
tr13       WF               PF               WF               1             4             2
tr14       PF               WF               PF               4             2             1
tr15       RR               WF               PF               3             1             4
tr16       PF               RR               RR               1             2             4
tr17       PF               RR               WF               1             2             4
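
The same configurations can be captured in code; the sketch below transcribes the first rows as an illustrative Python dictionary (the remaining entries follow the table above).

```python
# Illustrative encoding of the training configurations above:
# (scheduling policies of slices 0/1/2, initial RBG allocation of slices 0/1/2).
TRAINING_CONFIGS = {
    "tr0": (("PF", "RR", "PF"), (1, 2, 4)),
    "tr1": (("WF", "RR", "RR"), (1, 4, 2)),
    "tr2": (("RR", "PF", "WF"), (2, 1, 4)),
    # ... remaining configurations as in the table above.
}
```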

Dynamic slice resizing

After the initial allocation, the RBGs for each slice are dynamically re-allocated every 30 s as follows (a lookup sketch follows the table).

Time [s] RBG Slice 0 RBG Slice 1 RBG Slice 2
0-30 initial allocation (see training configurations above)
30-60 1 2 4
60-90 1 4 2
90-120 2 1 4
120-150 2 4 1
150-180 4 2 1
180-210 4 1 2
210-240 2 2 3
240-270 2 3 2
270-300 3 2 2
300-330 3 3 1
330-360 3 1 3
360-390 1 3 3
390-420 1 2 4
420-450 1 4 2
450-480 4 2 1
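
Since the schedule advances in fixed 30-second steps, the allocation active at a given experiment time can be looked up as in the sketch below; the list and function are illustrative, not part of the released scripts.

```python
# Illustrative lookup of the slice RBG allocation active at a given time,
# following the 30-second resizing schedule in the table above.
RESIZING_SCHEDULE = [
    (1, 2, 4), (1, 4, 2), (2, 1, 4), (2, 4, 1), (4, 2, 1), (4, 1, 2),
    (2, 2, 3), (2, 3, 2), (3, 2, 2), (3, 3, 1), (3, 1, 3), (1, 3, 3),
    (1, 2, 4), (1, 4, 2), (4, 2, 1),
]  # one entry per 30 s interval, from 30-60 s up to 450-480 s

def rbg_allocation(time_s: float, initial_allocation: tuple) -> tuple:
    """Return (RBGs slice 0, RBGs slice 1, RBGs slice 2) at time_s seconds."""
    if time_s < 30:
        return initial_allocation  # initial allocation from the training configuration
    index = min(int(time_s // 30) - 1, len(RESIZING_SCHEDULE) - 1)
    return RESIZING_SCHEDULE[index]
```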

Testing the DRL agents

This repository contains the script test_agent_release.py, which tests the DRL agents used in our work. The script executes in three phases:

  • Phase 1: loading agents and encoder from ml_models;
  • Phase 2: loading data from the CSV files in the repository;
  • Phase 3: feeding the DRL agents, which compute the best action for the current state. This phase runs in a loop.

All required dependencies are listed in the requirements.txt file.

Remark 1: Anyone interested in feeding real-time data to the DRL agents must implement proper methods to (i) gather data from the DUs (i.e., get_data_from_DUs()); (ii) feed it to the DRL agents (i.e., split_data()); and (iii) feed the output of the DRL agents back to the DUs (i.e., send_action_to_DU()).
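
As a reference for such an integration, a minimal skeleton of the resulting control loop is sketched below. The three methods are only named in the remark above; the bodies here are placeholders, and the agents' predict() interface is an assumption for this sketch.

```python
# Minimal sketch of a real-time control loop around the per-slice DRL agents.
# get_data_from_DUs(), split_data(), and send_action_to_DU() are the placeholder
# methods from Remark 1; their implementation is deployment-specific.

def get_data_from_DUs():
    """Gather the latest metric reports from the DUs (to be implemented)."""
    raise NotImplementedError

def split_data(report):
    """Split a DU report into per-slice observations (to be implemented)."""
    raise NotImplementedError

def send_action_to_DU(slice_id, action):
    """Send the action chosen by a DRL agent back to the DU (to be implemented)."""
    raise NotImplementedError

def control_loop(agents):
    """Continuously feed DU data to the DRL agents and apply their actions."""
    while True:
        report = get_data_from_DUs()
        for slice_id, observation in split_data(report).items():
            action = agents[slice_id].predict(observation)  # hypothetical agent API
            send_action_to_DU(slice_id, action)
```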

Phase 1

We load the three DRL agents and the encoder portion of the autoencoder used in the experimental section of our work. All models are stored in ml_models and loaded when the script starts. There is one DRL agent (i.e., the trained Proximal Policy Optimization (PPO) policy network) per slice. The reward differs across the DRL agents and is set as follows:

  • eMBB slice: Maximize throughput. This is done by setting the reward equal to tx_brate downlink [Mbps], which represents the downlink throughput in Mbps as measured by srsLTE;
  • MTC slice: Maximize throughput. This is done by setting the reward equal to tx_brate downlink [Mbps], which represents the downlink throughput in Mbps as measured by srsLTE;
  • URLLC slice: Minimize latency. This is done by setting the reward equal to ratio_granted_req, which represents the ratio between the number of PRBs allocated by the scheduler and those requested by the UEs. The higher this ratio, the faster the requests are satisfied and the lower the latency experienced by the traffic.

These metrics are reported periodically by the DUs and, in our case, are contained in the CSV files included in this repository (a sketch of the resulting reward computation is given below).
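
A minimal sketch of how these per-slice rewards can be computed from the reported metrics is shown below; the metric column names come from the dataset, while the function and the mean aggregation are assumptions for illustration.

```python
# Illustrative per-slice reward computation from the DU-reported metrics.
# slice_metrics is assumed to be a Pandas DataFrame holding the rows of one slice.

def slice_reward(slice_metrics, slice_type: str) -> float:
    if slice_type in ("embb", "mtc"):
        # eMBB and MTC: maximize the downlink throughput reported by srsLTE.
        return slice_metrics["tx_brate downlink [Mbps]"].mean()
    if slice_type == "urllc":
        # URLLC: maximize the ratio of granted to requested PRBs (low-latency proxy).
        return slice_metrics["ratio_granted_req"].mean()
    raise ValueError(f"Unknown slice type: {slice_type}")
```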

Phase 2

We load the CSV dataset included in this repository. The CSV files are loaded into Pandas DataFrames, which are used to feed the DRL agents with data. In real-world deployments, this data would be reported directly by the DUs; here, for testing purposes only, we provide functions that emulate such reports by extracting entries from the dataset we collected.
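
For example, the CSV files can be loaded along the lines of the sketch below; the recursive glob over one of the dataset directories is an assumption for illustration, and test_agent_release.py remains the reference implementation.

```python
# Minimal sketch: load the dataset CSV files into a single Pandas DataFrame.
import glob
import pandas as pd

def load_dataset(root: str = "slice_traffic") -> pd.DataFrame:
    """Concatenate all CSV files found under the given dataset directory."""
    paths = glob.glob(f"{root}/**/*.csv", recursive=True)
    frames = [pd.read_csv(path) for path in paths]
    return pd.concat(frames, ignore_index=True)
```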

Phase 3

We run a loop that extracts data from the dataset and feeds it to each DRL agent. Entries are taken from the dataset at random, grouped according to the slice they belong to, and fed to the corresponding DRL agent, which uses its PPO policy network to compute the best action to maximize the reward.
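
Conceptually, each iteration of this loop resembles the sketch below; the slice_id column name and the encoder/agent predict() interfaces are assumptions for illustration, and the released script is the reference implementation.

```python
# Illustrative testing loop: sample per-slice data at random, encode it, and
# let the corresponding DRL agent pick an action.

def testing_loop(dataset, encoder, agents, num_steps: int = 100):
    for _ in range(num_steps):
        for slice_id, agent in agents.items():
            # Group the data by slice and draw a random observation for this slice.
            slice_data = dataset[dataset["slice_id"] == slice_id]  # hypothetical column name
            observation = slice_data.sample(n=1).to_numpy()
            latent = encoder.predict(observation)  # dimensionality reduction via the encoder
            action = agent.predict(latent)         # PPO policy network selects the action
            print(f"slice {slice_id}: action {action}")
```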
