
Demographics


Population Growth

| Dataset | Measurement | Units | Source | Release Year | Years of Data | Coverage | Resolution |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Global Cities Database | Population count | People | Oxford Economics | 2022 | 2000-2040 | Global | City-level |
| City Population | Population count | People | Thomas Brinkhoff | Ongoing | 1970-2024 | Global | Administration-level |

Methodology

Population figures are typically used as the denominator for many indicators, and are a measure of demand for services. The line chart above summarizes the city's population change according to census and other public data. Rapid growth of urban populations, driven by natural increase (more births than deaths) in urban areas, migration from rural to urban areas, and the transformation of rural settlements into urban places, puts pressure on cities to meet new demand.

Source data citation [Oxford Citation]

Thomas Brinkhoff: City Population, http://www.citypopulation.de (or the URL of the specific page)


Population Density

| Dataset | Measurement | Units | Source | Release Year | Years of Data | Coverage | Resolution |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Global Cities Database | Population count | People | Oxford Economics | 2022 | 2000-2040 | Global | City-level |
| City Population | Population count | People | Thomas Brinkhoff | Ongoing | 1970-2024 | Global | Administration-level |
| WorldPop | Population density | People per 10,000 square meters | WorldPop | NA | 2000-2020 | Global | 100 m |

What is it?

Population density measures the number of people living within a designated unit of area, using the most recent census data. Population density helps identify urban cores, suburban areas, and underpopulated regions.

Why use it?

Population density has extensive implications for resource allocation and urban growth. Areas of high population density often require robust infrastructure, public service distribution, and transportation planning, among other investments. Moreover, higher population density enables governments to deliver essential services at a lower cost per capita. Locating areas of increased population density can allow policy makers and planners to make informed choices about zoning, urban expansion, and areas of commercial or residential development.

Moreover, areas of increased population density expose more people to local hazards. With more local governments concentrating on resilience planning, population density can indicate areas with greater risk of environmental stress and vulnerability due to higher concentrations of people.

How is this made?

Population counts are taken from updated census information and aggregated into a consistent unit of measurement; in this map, estimated population per 10,000 m² (one-hectare) grid cell. Knowing the general distribution patterns of people is critical for service delivery, impact assessments, and intervention plans.
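As a small illustration of the aggregation described above, the sketch below converts a WorldPop-style count raster (100 m cells) into people per hectare; the file name is hypothetical.

```python
# Minimal sketch, assuming a WorldPop-style population-count GeoTIFF with
# 100 m cells; the path is hypothetical.
import numpy as np
import rasterio

with rasterio.open("worldpop_city_2020.tif") as src:
    counts = src.read(1).astype(float)        # people per cell
    if src.nodata is not None:
        counts[counts == src.nodata] = np.nan

# A 100 m x 100 m cell covers 10,000 m^2 (one hectare), so counts per cell
# are already people per hectare; for other cell sizes, divide by the cell
# area in hectares.
density = counts / 1.0
print(f"max density: {np.nanmax(density):.0f} people/ha")
```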

How should we interpret it?

High population density often indicates economic centers and highly urbanized residential neighborhoods, while lower population density indicates more suburban residential or undeveloped regions. Given the data's resolution, and without detailed knowledge of specific neighborhood dynamics, population density should be read as a high-level overview of population patterns in a region.

Source data citation

Bondarenko, Maksym, David Kerr, Alessandro Sorichetta, and Andrew Tatem. 2020. “Census/Projection-Disaggregated Gridded Population Datasets for 189 Countries in 2020 Using Built-Settlement Growth Model (BSGM) Outputs.” University of Southampton. https://doi.org/10.5258/SOTON/WP00684.


Population Distribution By Age And Sex (Chart Only)

| Dataset | Measurement | Units | Source | Release Year | Years of Data | Coverage | Resolution |
| --- | --- | --- | --- | --- | --- | --- | --- |
| WorldPop | Population count, by age and sex | People | WorldPop | NA | 2000-2020 | Global | 100 m |

Methodology

This column chart classifies the city's population according to age group, forming the basis of population projections. Populations vary significantly in their proportions of young and old people, with growing populations generally appearing younger, and declining or slowly growing populations generally appearing older.

Reproductive age is defined as 15–49. Working age is defined as 15–64.

Source data citation

Bondarenko, Maksym, David Kerr, Alessandro Sorichetta, and Andrew Tatem. 2020. “Census/Projection-Disaggregated Gridded Population Datasets for 189 Countries in 2020 Using Built-Settlement Growth Model (BSGM) Outputs.” University of Southampton. https://doi.org/10.5258/SOTON/WP00684.


Relative Wealth

| Dataset | Measurement | Units | Source | Release Year | Years of Data | Coverage | Resolution |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Relative Wealth Index | --- | --- | Data for Good at Meta | 2021 | 2021 | 93 low- and middle-income countries | 2.4 km |

What is it?

Relative wealth is a micro-estimate of wealth and poverty relative to the country as a whole, covering the populated surface at a 2.4 km resolution. The data has been generated for nearly 100 low- and middle-income countries (LMICs) globally.

Why use it?

Relative wealth is an effective way to identify wealth disparity within a region, supporting resource allocation and targeted interventions such as public investments, housing projects, and job creation programs. It can help ensure the equitable development of infrastructure and services like schools, healthcare facilities, and public amenities, so that access to quality essential services in healthy environments is not inhibited by economic status. It can also inform zoning policy: in a lower-income region, for example, mixed-use development may promote economic diversity and improved access to jobs, education, and retail services.

How is this made?

The dataset was developed by Meta. It leverages ground-truth measurements of household wealth collected in face-to-face surveys by the United States Agency for International Development, covering 66,819 villages in 56 LMICs. This ground-truth data is linked to non-traditional data sources, such as satellite imagery, cellular network data, topographic maps, and privacy-protected connectivity data. These sources are processed into quantitative features for each village, which are then used to train a supervised machine learning model that predicts the relative wealth of each populated grid cell on the planet, including cells without ground-truth data.
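Meta's pipeline and model are proprietary, so the sketch below only illustrates the general supervised set-up described above: per-village features predicting surveyed wealth, with synthetic data and scikit-learn's gradient boosting standing in for the actual features and algorithm.

```python
# Illustrative only: synthetic village features and a stand-in model,
# not Meta's proprietary pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_villages = 1000
# Hypothetical features, e.g. nightlights, road density, connectivity,
# built-up fraction.
X = rng.normal(size=(n_villages, 4))
y = X @ np.array([0.6, 0.3, 0.4, 0.2]) + rng.normal(0.0, 0.5, n_villages)

model = GradientBoostingRegressor()
print(cross_val_score(model, X, y, cv=5, scoring="r2").mean())
model.fit(X, y)   # then predict relative wealth for unsurveyed grid cells
```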

How should we interpret it?

Areas with positive values have greater wealth than the country overall, whereas areas with negative values have lesser wealth. While this map can show general spatial trends of wealth and poverty, the underlying algorithm is proprietary, and many factors affect network connectivity and data usage. The index is therefore useful for observing wealth patterns within an area but may not be useful for cross-comparison among different regions.

Source data citation Chi, Guanghua, Han Fang, Sourav Chatterjee, and Joshua E. Blumenstock. 2022. “Microestimates of Wealth for All Low- and Middle-Income Countries.” Proceedings of the National Academy of Sciences 119 (3): e2113658119. https://doi.org/10.1073/pnas.2113658119.


Economic Activity


Economic Hotspots

| Dataset | Measurement | Units | Source | Release Year | Years of Data | Coverage | Resolution |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Visible Infrared Imaging Radiometer Suite (VIIRS) Stray Light Corrected Nighttime Day/Night Band Composites Version 1 | --- | --- | NASA Earthdata | NA | 2013-NRT | Global | 750 m |

What is it?

Economic activity is measured here as the monthly average radiance from nighttime lights. Nighttime lights are a common proxy for economic activity because greater artificial light is generally associated with more commercial activity.

Why use it?

Nighttime light radiance correlates with economic activity in urban areas, including commercial activity, urbanization, and industrial activity.

Nighttime lights data is available globally from the Visible Infrared Imaging Radiometer Suite (VIIRS). VIIRS captures this data at 01:30 local time, making it consistent and reliable, and its continuous monitoring over large areas can supplement regions where economic data is scarce.

While most global economic indicator data is available only at the national level, nighttime lights can be disaggregated to finer spatial resolutions, which suits analysis at the city or neighborhood scale rather than the national level.

Nighttime lights are particularly useful for measuring economic activity in informal economies, which is often not captured in traditional economic data.

Changes in nighttime lights also signify infrastructure development and urbanization, which denote urban growth and economic development.

How is this made?

VIIRS observations are made as the satellite crosses the equator at around 01:30 local time. The data is collected, geolocated, and calibrated to produce a daily radiance value in nanowatts per square centimeter per steradian. The daily values are aggregated into average monthly values, and the values reflected on the map are the average of every month between 2014 and 2022.
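A minimal sketch of that compositing step, assuming the monthly composites for 2014-2022 (108 months) are already loaded as a NumPy array; reading the files and masking clouds are omitted.

```python
# Minimal sketch: collapse monthly VIIRS radiance composites
# (nW/cm^2/sr) into one long-run per-pixel mean. The array is a
# placeholder for real data.
import numpy as np

monthly = np.random.rand(108, 400, 400)      # (months, rows, cols)
mean_radiance = np.nanmean(monthly, axis=0)  # multi-year average map
```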

How should we interpret it?

Areas of higher nighttime light emission should be seen as indicators of high economic activity; however, nighttime light is a proxy for economic activity rather than a true measure of it. Other caveats include that certain types of economic activity may have higher rates of nighttime emission than others (e.g., a daytime market versus a gas station). Some activities, such as airports or industrial centers, may have high rates of nighttime light emission without necessarily being centers of economic activity.

Source data citation Mills, Stephen, Stephanie Weiss, and Calvin Liang. 2013. “VIIRS Day/Night Band (DNB) Stray Light Characterization and Correction.” In Earth Observing Systems XVIII, 8866:549–66. SPIE. https://doi.org/10.1117/12.2023107.


Economic Change

| Dataset | Measurement | Units | Source | Release Year | Years of Data | Coverage | Resolution |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Visible Infrared Imaging Radiometer Suite (VIIRS) Stray Light Corrected Nighttime Day/Night Band Composites Version 1 | --- | --- | NASA Earthdata | NA | 2013-NRT | Global | 750 m |

What is it?

Economic change also leverages VIIRS nighttime lights data but analyzes the rate of change in nighttime radiance. Whereas economic activity looks at the monthly average of nighttime lights emission, economic change looks at the monthly temporal changes of average emission. Positive values represent an increase in nighttime lights emission and by proxy, economic activity, whereas negative values represent a decrease in nighttime lights emission and economic activity.

Why use it?

Building on the economic activity layer, economic change introduces a temporal dimension by analyzing shifts in nighttime lights over time to reveal patterns of economic transformation in an urban region. This approach allows for the identification of trends, such as growth, decline, or redistribution of economic activity, by observing how the intensity and distribution of nighttime lights evolve.

How is this made?

The data is processed and extracted in the same way as the economic activity data, but is re-interpreted to measure month-to-month changes in nighttime light emissions.
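One simple way to express that re-interpretation is a per-pixel linear trend fitted across the monthly stack. This is a sketch of the idea, not necessarily the exact statistic used.

```python
# Minimal sketch: per-pixel rate of change in nighttime radiance,
# fitted as a linear trend over monthly composites (placeholder data).
import numpy as np

monthly = np.random.rand(108, 50, 50)          # (months, rows, cols)
t = np.arange(monthly.shape[0])                # month index
flat = monthly.reshape(monthly.shape[0], -1)   # (time, pixels)
slopes = np.polyfit(t, flat, deg=1)[0]         # highest-degree coefs
trend = slopes.reshape(monthly.shape[1:])      # radiance change / month
# Positive cells suggest rising emission; negative cells, falling.
```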

How should we interpret it?

Areas with positive values have experienced an increase in nighttime lights, which may denote an increase in economic activity, whereas negative values have experienced a decrease in nighttime lights, which may denote a decrease in economic activity. However, it is important to note that the same caveats as economic activity apply here as well, and that values closer to 0 (whether positive or negative) may not be significant indicators of economic change in a region.

Source data citation Mills, Stephen, Stephanie Weiss, and Calvin Liang. 2013. “VIIRS Day/Night Band (DNB) Stray Light Characterization and Correction.” In Earth Observing Systems XVIII, 8866:549–66. SPIE. https://doi.org/10.1117/12.2023107.


Built Form


Built-up Area

| Dataset | Measurement | Units | Source | Release Year | Years of Data | Coverage | Resolution |
| --- | --- | --- | --- | --- | --- | --- | --- |
| World Settlement Footprint Evolution | --- | --- | German Aerospace Center | 2021 | 1985-2015 | Global | 30 m |

What is it?

Built-up area is a map of the location and extent of urbanized land. The map denotes in what year the area became urbanized.

Why use it?

Understanding the location of urbanized areas has several implications for the provision of public services and infrastructure, transportation networks, and resilience planning. Knowing when land was urbanized also contextualizes the age of buildings and infrastructure in a region, which can have implications for the health and resilience of the built environment.

In Bangladesh, rapid urbanization over the last 50 years has increased rates of sprawl and informal settlement. Unplanned neighborhoods in peri-urban areas have suffered from congestion, narrow roadways, public health hazards, and inadequate drainage and sanitation infrastructure. Moreover, discrepancies between older and newer settlements (in addition to informal settlements) can create a variety of challenges for infrastructure demands.

How is this made?

The German Aerospace Center developed the WSF Evolution dataset to track urban change between 1985 and 2015. Because existing global layers provide limited time steps and uneven quality, the team developed an iterative approach that reconstructs past settlement extents from Landsat data, available from 1985 at a 30-meter resolution. Using various measures and spectral indices, they iteratively generated settlement and non-settlement training samples to build a machine learning model that predicts when settlements expanded.

How should we interpret it?

The map displays when land was urbanized in 5-year ranges, which provides insight into the spatio-temporal patterns of urban change. However, there are gaps in the data. WSF does not account for areas that went from urbanized to non-urbanized (a much rarer phenomenon). Land that was already urbanized prior to 1985 is not differentiated, even though settlements developed just before 1985 can differ drastically from much older ones. Areas can also be redeveloped, which may affect the overall accuracy of the temporal measure of urbanization.

Source data citation Marconcini, Mattia, Annekatrin Metz- Marconcini, Thomas Esch, and Noel Gorelick. 2021. “Understanding Current Trends in Global Urbanisation - The World Settlement Footprint Suite.” GI_Forum 2021, Volume 9, (June): 33–38. https://doi.org/10.1553/giscience2021_01_s33.


Built-up Density

Methodology

This map shows the imperviousness percentage of the city’s surfaces, as measured by satellite imagery. Less pervious surfaces absorb less water. This measure is a useful proxy for the density of buildings in a built-up area: impervious surfaces are typically paved structures such as roads, parking lots, airports, etc., that are covered by water-resistant material like asphalt, concrete, or rooftops.

Separate from population density, built-up density can indicate where more interactivity between people is likely to take place. In general, benefits of built-up density can include greater economic activity, higher energy efficiency, and more room for nearby open spaces.

Source data citation

[WSF]???


Land Cover

| Dataset | Measurement | Units | Source | Release Year | Years of Data | Coverage | Resolution |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ESA WorldCover | --- | --- | European Space Agency | 2020 | 2020, 2021 | Global | 10 m |

What is it?

Land cover data describes the physical surface of the land, including vegetation, urban infrastructure, water, and bare soil, among other types.

Why use it?

Identifying land cover types provides a greater understanding of how land is utilized: how much land falls under a specific class and where that class is concentrated. Understanding land cover can help identify sustainable areas for development; balance urban growth with environmental conservation; reduce deforestation, habitat loss, and flood risk; and support climate change adaptation. It is also integral to zoning and land use planning, ensuring efficient and organized uses of land and maximizing the social and economic benefits of residential, commercial, industrial, agricultural, and conservation areas.

In Bangladesh:

Rapid urban growth has led to an increase in urbanized land and urban sprawl into peri-urban and rural communities.

Increased demand for food has required more agricultural land, causing natural areas to be converted to agricultural fields; the intensification of agriculture can also transform land cover types.

Aquaculture has also increased overall, especially in coastal areas.

How is this made?

The European Space Agency developed the first global land cover product at the fine resolution of 10 meters for the years 2020 and 2021, developed and validated using Sentinel-1 and Sentinel-2 data. The data is organized into 11 broad land cover classes, aligned with the UN FAO's Land Cover Classification System.

How should we interpret it?

Mapping land cover is an effective way to examine the spatial distribution of land uses. Urbanized land may differ slightly from the WSF urbanized land because the data comes from different sources.

Source data citation Zanaga, Daniele, Ruben Van De Kerchove, Wanda De Keersmaecker, Niels Souverijns, Carsten Brockmann, Ralf Quast, Jan Wevers, et al. 2021. “ESA WorldCover 10 m 2021 V200.” Zenodo. https://doi.org/10.5281/zenodo.5571936.


Road Network Orientation (Chart Only)

| Dataset | Measurement | Units | Source | Release Year | Years of Data | Coverage | Resolution |
| --- | --- | --- | --- | --- | --- | --- | --- |
| OpenStreetMap | --- | --- | OpenStreetMap | NA | NA | Global | NA |

Methodology

This layer visualizes the geometric orientation of a city’s streets, specifically the proportion of streets that point east, west, north, south, or to any point around a compass. Street edge bearings can help illuminate the history of urban design, transportation planning, and morphology; evaluate existing transportation patterns and configurations; and explore new planning proposals and alternatives.

Cities built since the 1900s with grid-based urban planning exhibit a clear “compass rose” pattern (the majority of street bearings point in a narrow band of N-S degrees, and another narrow band of E-W degrees, e.g. Manhattan). The connectedness of a grid supports route choice, convenience, walkability, and in turn the human dynamics of social mixing, activity, and encounter.

Cities with a highly organic urban form, and limited top-down planning, exhibit a heterogeneous set of edge bearings, equally spread across all corners of the compass (e.g. Kigali), generating a circular distribution. In other cases, partial areas within a highly planned city coexist alongside organic patterns for the remaining city, creating a hybrid pattern.
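A minimal sketch of how such a compass rose can be computed, with random segment coordinates standing in for streets pulled from OpenStreetMap.

```python
# Minimal sketch: bin street-segment bearings into a 36-bin rose.
import numpy as np

def bearing_deg(x1, y1, x2, y2):
    """Compass bearing of a segment (0 = north, clockwise)."""
    return np.degrees(np.arctan2(x2 - x1, y2 - y1)) % 360

segments = np.random.rand(500, 4)          # columns: x1, y1, x2, y2
b = bearing_deg(*segments.T)
b = np.concatenate([b, (b + 180) % 360])   # streets are undirected
counts, edges = np.histogram(b, bins=36, range=(0, 360))
```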

Source data citation “OpenStreetMap.” n.d. OpenStreetMap. https://www.openstreetmap.org/.


Intersections

| Dataset | Measurement | Units | Source | Release Year | Years of Data | Coverage | Resolution |
| --- | --- | --- | --- | --- | --- | --- | --- |
| OpenStreetMap | --- | --- | OpenStreetMap | NA | NA | Global | NA |

Methodology

Intersection density is a measure of network compactness, conveying information about street connectivity. The number and types of intersections in a road network help determine how a local community functions and the character of the streets themselves. In a good street network, most streets should connect at both ends. A high level of connectivity provides an efficient platform for dispersing traffic, facilitating route choice, and creating more comfortable conditions for people who travel by foot, bike, or transit.
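A minimal sketch of the measure, with a toy networkx grid standing in for a real OSM street network; the degree threshold and the area value are assumptions.

```python
# Minimal sketch: intersection density from a street graph. Nodes with
# degree >= 3 are treated as true intersections (degree-2 nodes are
# usually just geometry points along a street).
import networkx as nx

G = nx.grid_2d_graph(10, 10)         # stand-in street network
intersections = [n for n, d in G.degree() if d >= 3]
area_km2 = 4.0                       # hypothetical city-extent area
print(len(intersections) / area_km2, "intersections per km^2")
```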

Source data citation “OpenStreetMap.” n.d. OpenStreetMap. https://www.openstreetmap.org/.


Schools

| Dataset | Measurement | Units | Source | Release Year | Years of Data | Coverage | Resolution |
| --- | --- | --- | --- | --- | --- | --- | --- |
| OpenStreetMap | --- | --- | OpenStreetMap | NA | NA | Global | NA |

Methodology

Using a proximity analysis along the existing road network, this map identifies the areas that are most- and least-accessible to schools. The analysis includes all facilities classified in OpenStreetMap as a school, kindergarten, college or university. Note that there may be more schools that are not included in OpenStreetMap.
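A minimal sketch of the proximity analysis used for this map (and the health facilities map below), with a toy grid standing in for the road network and hypothetical school locations.

```python
# Minimal sketch: network distance from every node to its nearest
# school via a multi-source shortest path. Toy graph and locations.
import networkx as nx

G = nx.grid_2d_graph(20, 20)               # stand-in road network
nx.set_edge_attributes(G, 100, "length")   # assume 100 m per edge
schools = [(2, 3), (15, 8)]                # hypothetical school nodes
dist = nx.multi_source_dijkstra_path_length(G, schools, weight="length")
# dist[node] = road distance (m) to the nearest school
```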

Source data citation “OpenStreetMap.” n.d. OpenStreetMap. https://www.openstreetmap.org/.


Health Facilities

| Dataset | Measurement | Units | Source | Release Year | Years of Data | Coverage | Resolution |
| --- | --- | --- | --- | --- | --- | --- | --- |
| OpenStreetMap | --- | --- | OpenStreetMap | NA | NA | Global | NA |

Methodology

Using a proximity analysis along the existing road network, this map identifies the areas that are most- and least-accessible to health facilities. The analysis includes all facilities classified in OpenStreetMap as a health amenity, clinic, or hospital. Note that there may be more health facilities that are not included in OpenStreetMap.

Source data citation “OpenStreetMap.” n.d. OpenStreetMap. https://www.openstreetmap.org/.

Climate Conditions


Solar

| Dataset | Measurement | Units | Source | Release Year | Years of Data | Coverage | Resolution |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Global Solar Atlas | --- | --- | Solargis, World Bank | 2017 | NA | Global | 1 km |

What is it?

Solar availability measures the solar power potential of a theoretical 1 kWp photovoltaic (PV) system. Photovoltaics is the most widely applied and versatile technology for solar power, and the availability of solar energy depends on site conditions. This map displays an indicative estimate of daily specific yield: how much energy a hypothetical PV system would produce per unit of capacity (kWh/kWp). Values above 4.5 kWh/kWp are considered excellent availability, while values between 3.5 and 4.5 are moderate.
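Specific yield converts directly into expected output for a system of any size. For example, a hypothetical 5 kWp rooftop system in an area with excellent availability:

```python
# Worked example with a hypothetical system size.
capacity_kwp = 5.0
specific_yield = 4.5                        # kWh/kWp per day (excellent)
daily_kwh = capacity_kwp * specific_yield   # 22.5 kWh per day
annual_kwh = daily_kwh * 365                # roughly 8,200 kWh per year
```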

Why use it?

PV offers a unique opportunity to achieve long-term energy sustainability goals. Determining which parts of a city may be optimal for PV can improve renewable energy integration, support energy resilience, stimulate economic growth and job creation, and make efficient use of land for energy generation.

How is this made?

The data used in this map is derived from the "Global Solar Atlas 2.0," a free online tool developed and maintained by Solargis s.r.o. for the World Bank Group. The tool leverages Solargis data and is funded by the Energy Sector Management Assistance Program (ESMAP).

Note that the Global Solar Atlas provides a preliminary assessment of photovoltaic (PV) power potential for selected locations, based on generalized theoretical conditions. The analysis assumes an optimal tilt angle for PV modules and 100% system availability, without accounting for potential downtimes.

How should we interpret it?

Within a city, variations in solar availability are typically minimal, with daily yields differing only slightly—for example, 4.5 kWh/kWp in one area versus 4.3 kWh/kWp in another. Because these differences are so small, PV siting decisions should consider other factors not reflected in solar availability maps, such as shading from nearby structures, roof orientation, and ease of access for maintenance. These practical considerations often outweigh the minor variations in solar yield, meaning that the map alone should not be the primary basis for choosing solar panel locations.

Additionally, solar availability varies throughout the year due to changes in cloud cover, making it crucial to assess how consistent solar energy production can be across different months. Solar systems are more sustainable when availability is relatively stable year-round, with a high-to-low availability ratio below 2:1, as this ensures that the system operates efficiently most of the time. This consistency is a key factor in determining the overall effectiveness and sustainability of solar energy installations.

Source data citation “Global Solar Atlas.” n.d. https://globalsolaratlas.info/.

Data obtained from the Global Solar Atlas 2.0, a free, web-based application developed and operated by the company Solargis s.r.o. on behalf of the World Bank Group, utilizing Solargis data, with funding provided by the Energy Sector Management Assistance Program (ESMAP). For additional information: https://globalsolaratlas.info


Air Quality

| Dataset | Measurement | Units | Source | Release Year | Years of Data | Coverage | Resolution |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Global (GL) Annual PM2.5 Grids from MODIS, MISR and SeaWiFS Aerosol Optical Depth (AOD), v4.03 (1998-2019) | --- | --- | NASA SEDAC, Columbia U CIESIN | 2022 | 1998-2019 | Global | 1 km |

What is it?

Air quality measures the concentration of pollutants in the air that can harm human health and the environment. This map measures ground-level fine particulate matter (PM2.5) in 2019, in micrograms per cubic meter (µg/m³). Annual PM2.5 concentrations should not exceed 5 µg/m³ according to WHO guidelines, or 12 µg/m³ according to United States EPA standards.

PM2.5 particles are those with a diameter of 2.5 micrometers or smaller which pose significant concern because their small size allows them to penetrate deep into the lungs or bloodstream. PM2.5 particles are commonly found in vehicle emissions, industrial activities, wildfires, and other sources of combustion. Other types of particulate matter, like PM10, include coarser particles like pollen, unpaved road dust, or sea salt.

Why use it?

Understanding the concentration of air pollutants in an urbanized region can indicate the need for public health and environmental remediation efforts in specific neighborhoods. Monitoring air quality is crucial to promoting livability in urbanized regions. Particularly in Bangladesh:

Bangladesh is the world's most polluted country, and PM2.5 pollution shortens the average Bangladeshi resident's life by almost 7 years.

All 165 million Bangladeshi residents are exposed to an annual average PM2.5 pollution level that exceeds both the WHO guideline and Bangladesh's national standard of 15 µg/m³.

Since 1998, PM2.5 pollution has increased by 63 percent.

Particulate pollution is the second greatest threat to human health in Bangladesh, behind cardiovascular diseases.

How is this made?

Air quality is measured annually using satellite-retrieved aerosol optical depth (AOD) from the MODIS, MISR, and SeaWiFS instruments, incorporating daily readings. Using geographically weighted regression to account for variations in the data, such as seasonality, the measurements are aggregated to a 1 km resolution to give an annual value in µg/m³ for the year 2019.

How should we interpret it?

The map displays the 1 km grid cells in measurements of µg/m³. As the Bangladesh national standard is 15 µg/m³, areas that exceed this number may require various air quality interventions.

Source data citation Center For International Earth Science Information Network-CIESIN-Columbia University. 2022. “Global Annual PM2.5 Grids from MODIS, MISR and SeaWiFS Aerosol Optical Depth (AOD), 1998-2019, V4.GL.03.” Palisades, NY: NASA Socioeconomic Data and Applications Center (SEDAC). https://doi.org/10.7927/FX80-4N39.


Summer LST

| Dataset | Measurement | Units | Source | Release Year | Years of Data | Coverage | Resolution |
| --- | --- | --- | --- | --- | --- | --- | --- |
| USGS Landsat 9 Level 2, Collection 2, Tier 1 | --- | --- | USGS | NA | 2021-Present | Global | 30 m |

What is it?

Summer land surface temperature (LST) refers to the temperature of the Earth's surface as measured during the summer months. Unlike air temperature, which is typically measured at about 1.5 to 2 meters above the ground, LST represents the actual temperature of the land or ground surface itself, including surfaces like soil, vegetation, and urban infrastructure.

Why use it?

Temperatures in an area are affected by many factors, such as land cover, elevation, slope, and proximity to water. Higher temperatures can generate or exacerbate negative effects related to health, social equity, and economic productivity. Typically, cities demonstrate higher temperatures than vegetated areas: construction materials, such as concrete, absorb more solar radiation; less vegetation results in less evapotranspiration; and more vehicle usage and mechanical cooling generate more heat.

Note that LST measures surface temperature rather than ambient air temperature, which can differ by several degrees. Surface temperature is primarily useful for identifying hotter and cooler areas within a specific geography.

How is this made?

Land surface temperature data is taken from June through September of 2014 to 2023, and the average value is extracted for every grid cell at a resolution of 30 meters. The product is generated from the thermal infrared bands of Landsat Collection 2.

How should we interpret it?

LST is a useful way to measure impacts on human thermal comfort and heat stress. While LST is a good indicator of surface heat, it does not capture other factors like humidity, wind, or direct sunlight. LST is also often mistaken for a measure of the urban heat island effect, which instead concerns nighttime heat retention. It can, however, give a general overview of hotter and cooler areas within a specific geography.

Source data citation Earth Resources Observation And Science (EROS) Center. 2013. “Collection-2 Landsat 8-9 OLI (Operational Land Imager) and TIRS (Thermal Infrared Sensor) Level-2 Science Products.” U.S. Geological Survey. https://doi.org/10.5066/P9OGBGM6.


Vegetation

| Dataset | Measurement | Units | Source | Release Year | Years of Data | Coverage | Resolution |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Sentinel-2, Normalized Difference Vegetation Index (NDVI) | --- | --- | European Space Agency | 2020 | NA | Global | 10 m |

What is this?

This map shows the density and health of vegetation, as observed by satellite sensors. Specifically, it shows the normalized difference vegetation index, or NDVI, which uses visible and non-visible light to quantify vegetation on a scale of -1 to 1. Values below 0.1 indicate water, rock, or artificial surfaces, and higher values indicate greater vegetation density and health.

Why use it?

Knowing the distribution and density of plant life in and around a city is helpful for a variety of reasons. NDVI data can inform urban planning, environmental management, and public health initiatives. More plant matter in cities is associated with health benefits and the mitigation of environmental risks. Vegetation in a city can reduce temperatures and the urban heat island effect, and is an important component of passive cooling. It can also lessen air pollution by means of the deposition, dispersion, and modification of pollutant particulate matter. Plants absorb rainwater and slow the flow of surface water to reduce flooding (though flooding can also increase in areas where flow has slowed), hold hillsides together to reduce erosion in the event of flooding, and increase evaporation. Increased vegetation may even alter an area's climate, which in turn affects the frequency and magnitude of flooding. Green spaces in a city can also serve important civic, social, and quality-of-life functions by offering spaces for gathering and activity. This role can be especially crucial during health events, such as the COVID-19 pandemic, when the increased air flow of outdoor spaces often made it healthier to spend time outside rather than inside.

This map shows the median distribution of vegetation around a city, but NDVI is also useful when different time periods are compared. With multiple data points across time, NDVI can show such things as the increase and decrease of biomass, the health of plants, and the presence of drought. (The City Scan also uses NDMI, the normalized difference moisture index, for measuring drought.)

How is this made?

NDVI is calculated using remote sensing data, from either satellites or aerial vehicles. In this particular project, the remote sensing data is collected by the two satellites of the Copernicus Programme's (European Space Agency's) Sentinel-2 mission, which capture multiple ranges of light reflected from the earth's surface. By combining these ranges in different ways, different phenomena on earth can be observed. Vegetation is highlighted by combining the near-infrared and red radiation bands: NDVI takes the difference of near-infrared radiation ($NIR$) and red radiation ($Red$) and divides it by the two bands' sum.

$$NDVI = \frac{NIR - Red}{NIR + Red}$$

It may seem odd that this measurement uses the red, rather than green, radiation band. Plants appear green to the human eye because, within the visible spectrum, they strongly absorb red and blue light, and reflect green light. However, plants are typically much more reflective of near infrared light, and the comparison of near infrared radiation to red radiation is more indicative of vegetation than the comparison of green radiation to red radiation.
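The formula translates directly into array arithmetic. A minimal sketch, with synthetic reflectance values standing in for Sentinel-2's near-infrared (B8) and red (B4) bands:

```python
# Minimal sketch of the NDVI calculation on placeholder arrays.
import numpy as np

nir = np.random.rand(100, 100)   # stand-in for Sentinel-2 band B8
red = np.random.rand(100, 100)   # stand-in for Sentinel-2 band B4
ndvi = (nir - red) / (nir + red + 1e-9)   # epsilon avoids 0/0
```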

The user generating the map has the option to plot year-round NDVI or the NDVI of the hottest months. The choice may depend on the specific geography: a user may plot seasonal NDVI values to diminish the effect of colder months, when dormant plants yield lower NDVI values or snow cover yields negative values. Alternatively, a user may plot year-round values in a place with hot, dry summers, where vegetation is similarly dormant due to dryness. Both the year-round and seasonal values take the average monthly value from 2014 to 2023, with the seasonal values using the three on-average hottest months, as identified in the land surface temperature calculation.

How should I interpret it?

NDVI ranges between -1 and 1, with higher numbers indicating a higher density of healthy vegetation. Values of less than 0.1 typically indicate water, rock, and otherwise barren land; values of 0.1 to 0.3 are associated with shrubs and grassland; 0.3 to 0.5 corresponds to moderate vegetation; and values of more than 0.5 correspond to dense vegetation such as forests or mature crops. These numbers, though, vary by place and ecology. Different plant types emit different NDVI patterns. Because of this, an area might show lower NDVI values even while boasting lots of vegetation. In addition, NDVI does not separate plant density and plant health, nor does it account for the spatial arrangement of plant matter within an areal unit.

Because this map shows an average NDVI (whether an average of all twelve months or the hottest three), it does not show seasonality. An area may feature high vegetation density and health during one season of the year, and low vegetation during the rest of the year. In the year-round average, these high vegetation seasons will be suppressed; in the 3-month average, the high vegetation may be overly prominent or fully excluded.

Source data citation European Space Agency. 2020. “Normalized Difference Vegetation Index”. Sentinel-2. https://documentation.dataspace.copernicus.eu/Data/SentinelMissions/Sentinel2.html


Forest & Deforestation

| Dataset | Measurement | Units | Source | Release Year | Years of Data | Coverage | Resolution |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Hansen et al., Global Forest Change | --- | --- | U of Maryland | NA | 2000-2023 | Global | 1 arc second (30 m) |

What is this?

Due to human activity and natural processes (as well as their combination) forest cover is always changing. Over the period of 2000–2012, 2.3 million square kilometers of forest were lost globally, while 0.8 million were gained, resulting in a net loss of 1.5 million square kilometers. This map shows both which areas are forested, and where there has been forest loss since 2000. Forested areas are 30 meter by 30 meter areas with at least 20% tree cover, as assessed from satellite imagery.

Why use it?

On a global scale, forests are vitally important to the mitigation of anthropogenic climate change, as they remove CO2 from the atmosphere. They also play important roles at the local level, with effects on local climate, flood and landslide prevention, biodiversity and animal habitat, water and air purification, and resource provision.

Trees draw water from deep underground and pump it to the surface. This water then evaporates from their leaves in a process called evapotranspiration. The release of water increases humidity and causes local cooling. The added moisture can then form rain clouds, which both reflect sunlight (increased albedo), causing further cooling, and water other plants in the surrounding area with shallower root systems. Tree roots also reduce the risk of floods and landslides by slowing runoff, strengthening soil, absorbing water, and accelerating evaporation. As the soil slows the water, it also filters it, making it cleaner for drinking. Above ground, trees also act as filters, catching particulate matter on their surfaces.

To reduce global climate change and local heat, to lessen the risk of flooding and landslides, and to maintain ecological diversity, among other reasons, it is important to prevent the loss of forests. Forests are threatened by both human activity, such as deforestation for reasons of agriculture, urban expansion, and timber, and by natural processes which are often accentuated by human activity, such as wildfire and insect epidemics.

How is this made?

For this dataset, areas of vegetation were first detected using red and infrared bands from the Landsat 7 satellite. These vegetated areas were then filtered by height, with vegetation taller than 5 meters classified as tree cover. If a 30 meter by 30 meter pixel has at least 20% tree cover, it is classified as forest. The dataset begins with a base model of forest coverage in the year 2000, and then measures forest loss for 2000-2023 and forest gain for 2000-2012.
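The classification rule itself is a simple threshold. A minimal sketch, with a synthetic tree-cover raster:

```python
# Minimal sketch: a 30 m pixel counts as forest when its tree-cover
# fraction is at least 20%. Placeholder canopy-cover raster.
import numpy as np

tree_cover_pct = np.random.rand(200, 200) * 100
forest = tree_cover_pct >= 20              # boolean forest mask
print(f"forest share: {forest.mean():.1%}")
```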

How should I interpret it?

As with all global remote sensing data, this dataset is not specific to local conditions or ecosystems. It may be more or less accurate with different forest types. One study has found that this global forest change dataset may overestimate deforestation in the Amazon Basin.

Source data citation Hansen, M. C., P. V. Potapov, R. Moore, M. Hancher, S. A. Turubanova, A. Tyukavina, D. Thau, et al. 2013. “High-Resolution Global Maps of 21st-Century Forest Cover Change.” Science 342 (6160): 850–53. https://doi.org/10.1126/science.1244693.

Risk Identification


Flood Events

| Dataset | Measurement | Units | Source | Release Year | Years of Data | Coverage | Resolution |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Global Active Archive of Large Flood Events | --- | --- | Dartmouth Flood Observatory | NA | 1985-Present | Global | NA |

Methodology

These events are recorded in the Dartmouth Flood Observatory’s Global Active Archive. The archive may not include all significant flood events. The death and displacement figures reflect the deaths and displacements for the entire flood extent, which may include areas greater than the city’s extent.

See Data Notes for more information on the archive and its severity classifications.

Source data citation Brakenridge, G.R. n.d. “Global Active Archive of Large Flood Events.” Dartmouth Flood Observatory, University of Colorado. https://floodobservatory.colorado.edu/Archives/index.html.


Flooding

| Dataset | Measurement | Units | Source | Release Year | Years of Data | Coverage | Resolution |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Fathom-Global 3.0 Flood Hazard Data | --- | --- | Fathom | 2022 | NA | Global | 1 arc second (30 m) |
  • Q: Why 15 cm?
  • Q: Defended or undefended?
  • 30 m

What is this?

There are 12 total maps for flooding. These consist of maps for the three main types of urban flooding (fluvial, pluvial and coastal), as well as their combination; each of these flood types is laid over layers for population density, built-up area expansion, and urban services (such as schools and health facilities) to show the areas and assets exposed to flooding.

There are three main types of urban flooding:

  1. Fluvial or riverine flooding is floodwater from a watercourse, such as a river or canal. When the watercourse doesn't have capacity for the high levels of water flowing through it, the water overflows into surrounding areas. This overflow may occur because the water level is higher than the banks of the watercourse, or the high flow rate might break through an embankment. Often fluvial flooding is caused by intense rainfall or snowmelt upstream, though it can also be caused by other upstream factors, such as dam releases. Because fluvial flooding originates upstream, it can happen even when an area is not experiencing extreme precipitation, or even when it's dry.

  2. Pluvial or rainwater flooding is floodwater that has not yet, or cannot, reach a watercourse. It is independent of an overflowing water body and it can occur in any urban area, even higher elevation areas that lie above and far from coastal and river floodplains. Pluvial flooding consists of both surface water flooding and flash flooding, which both typically come from extreme rainfall. Surface water flooding happens when rainwater overwhelms an area's drainage system (whether natural or constructed), so that it cannot absorb and move the floodwater. This flooding can occur slowly, as it may take time for the drainage system to fill. Though slow, it can be very destructive, especially as the water may sit for a long time. Flash floods, as the name indicates, are much faster. The water comes so quickly that it may flood an area even while the drainage system still has capacity: the water just hasn't had time to reach it. These floods are often due to torrential rains in the nearby area and are aggravated by water funneling in from local places with higher elevation. Pluvial flooding of both types is worsened by impermeable surfaces, such as concrete, which limit water infiltration and increase the speed and amount of water running off the ground.

  3. Coastal flooding occurs when sea water inundates or covers normally dry coastal land. Often coastal flooding happens when severe winds correspond with high tides, blowing the tidal waters landward in a storm surge. Coastal flooding can also occur, though, when tides are lower with greater winds, with tsunamis and no wind, and simply due to rising sea levels which push tides above historic levels. Due to the effects of climate change (e.g. sea level rise and an increase in extreme weather events), damage due to coastal flooding has intensified.

These maps also include the combined flood zones for these three flood types. The area of combined flooding is relevant not only for understanding the total extent of places exposed to any kind of flooding, but also because the three flood types are not always distinct: instead, they can compound, such as when a coastal storm surge raises the sea level, reducing the drainage system's capacity to absorb surface water. Similarly, a regional precipitation event very well might cause rivers to overflow and also flood the area with surface water. The concurrence of pluvial, fluvial and coastal flooding often aggravates the damage potential they individually produce.

For more information on the population and built-up area underlays, see their individual sections.

Why use it?

All three types of flooding are very pertinent to cities. First, the mere density of urban areas means that more people and more assets are exposed should a hazard occur. Second, cities frequently sit in more flood-prone areas. Human settlements are very often situated near rivers for, among other things, increased water access, land fertility and mobility. While helpful for many reasons, this location near water also increases exposure to fluvial flooding. Similarly, settlements near the sea face exposure to coastal flooding. Some common forms of built-up development have also made urban areas more flood prone. The pavement of urban areas and the use of impermeable materials, such as traditional asphalt and concrete, seal the ground and prevent rainwater from draining. In addition, urban areas often reroute or alter natural waterways and drainage routes.

Flooding can have devastating effects on a place and its people, through injury, destruction and disruption. It can cause

  • immediate harm to people, as the floodwater's depth and velocity can cause death and injury;
  • damage to buildings and other assets, such as homes, schools, bridges, markets, agricultural crops, businesses, industrial facilities and cultural sites;
  • disruption of social and economic life, as it restricts movement and access and causes power outages;
  • environmental damage, as the flooding causes and carries leaks of pollutants; and
  • displacement, as people leave their homes to avoid all of these harms.

By knowing where flooding is most likely to occur (as well as its likely particular causes and its origins), cities are more able to decrease both exposure and vulnerability. This knowledge can guide urban development and the siting of new projects away from exposed areas, and inform where flood mitigation projects should be focused.

A variety of responses are possible, including both structural and non-structural interventions. Structural interventions include "blue" infrastructure, like retention ponds, bioswales, water squares, underground storage and floodplain extension; "green" infrastructure, like mangroves, salt marshes and green roofs; and "grey" infrastructure, like dikes, canals, pump stations, storm surge barriers, flood walls and sea walls. Non-structural interventions include warning systems, emergency institutions, and changes to building codes. A spatial understanding of a city's flood exposure can direct a city's efforts on where to build flood infrastructure, where not to build non-flood projects, and how to spatially adapt non-structural interventions.

The overlay with population density indicates which areas of expected flooding would likely affect the most people. It also indicates the kinds of preparedness and coordination different areas may require during a flood disaster. Areas with greater population density will likely require greater resource allocation and longer evacuation times.

Within a city's boundaries, not all land is developed. The overlay with built-up area shows how the flood zone intersects with where buildings are. This juxtaposition is helpful for multiple reasons. First, it indicates different types of exposure. Flooding in built-up areas and non-built-up areas can both be problematic, but the kind of impact will differ: the former would damage buildings, while the latter may destroy crops and pollute water sources. Second, as stated previously, urban areas often comprise impervious surfaces, so a map of built-up area shows how pluvial flooding may be increased in developed zones. Third, the map of built-up area shows the physical expansion of the city in recent decades. It may indicate how urban factors such as land price, zoning, etc., are guiding the city toward or away from flood zones. Knowing whether newer or older construction is more common in flood zones can also indicate the likely types of construction in the flood zones, and how flood resilient they likely are.

The social infrastructure underlay maps where hospitals, schools, fire stations and police stations are located, and is especially helpful for understanding a city's disaster response. These services are obviously important to a city's functioning, so it is important to know their exposure. Even more so, however, each of these services and facilities is central in the immediate aftermath of a disaster. Flooding can cause serious injury, so access to hospitals and health clinics is imperative. If these facilities are themselves exposed to flooding, they are less able to meet the needs of flooding's victims elsewhere in the city. Similarly, fire and police services are often first responders in the wake of a crisis: if they are exposed to flooding, they are less able to respond. Finally, while the educational function of schools may be less pertinent during and immediately after a flood event, schools are often larger public buildings which can house displaced people. (TK investigate)

How is this made?

The global flood map begins with a high resolution global map of elevation: water flows downhill and fills basins, so it's important to know where the downhills and basins are. This elevation map uses the Copernicus Digital Elevation Model, but removes surface objects like buildings and forests. This terrain model, though, does not include river depths: it shows rivers' water surfaces, not the river beds. Water channel depth, then, needs to be derived by estimating water flow through the channel and calculating the necessary depth of each cross section for that flow. Onto this adapted terrain model, the flood model layers hydrodynamic models simulating how water moves across terrain during extreme riverflow and rainfall events, and hydrological models simulating how water moves through water systems. Finally, the frequency and magnitude of extreme precipitation events (as well as how they are expected to change with climate change) are factored in. The resulting model has a critical success index of 0.75: the ratio of correctly predicted flood events to the total of observed and predicted events, that is, hits divided by the sum of hits, misses, and false alarms.
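As a quick illustration of that score, with made-up counts rather than Fathom's validation data:

```python
# Critical success index: hits / (hits + misses + false alarms).
def csi(hits: int, misses: int, false_alarms: int) -> float:
    return hits / (hits + misses + false_alarms)

print(csi(hits=75, misses=15, false_alarms=10))   # 0.75
```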

To read more about Fathom's methodology, read about their "method stack" here. Each step in the process includes research articles which go further in depth.

How should I interpret it?

These maps show the 15-cm flood zones with 1-in-10, 1-in-100 and 1-in-1000 year return periods. That is, they show the areas that are expected to flood to a depth of at least 15 centimeters at least once in a 10-year, 100-year or 1,000-year span. It is important to note, though, that because climate change is shifting precipitation levels and increasing extreme events, the expected frequency of flooding is changing. In general, flooding is expected to become more frequent and more severe.
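A return period is a probability, not a schedule: a 1-in-100-year flood has a 1% chance of occurring in any given year, and that chance compounds over a planning horizon. The probability of at least one such flood in $n$ years, for a return period of $T$ years, is

$$P = 1 - \left(1 - \frac{1}{T}\right)^{n}$$

so a 100-year flood has roughly a 26% chance of occurring at least once in any 30-year period.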


Elevation

| Dataset | Measurement | Units | Source | Release Year | Years of Data | Coverage | Resolution |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Forest And Buildings removed Copernicus DEM | Elevation | Meters above sea level | Fathom | 2023 | NA | Global | 30 m |

What is it?

Elevation is a measure of how many meters above sea level an area is. Elevation informs an area's vulnerability to many natural disasters.

Why use it?

The height at which infrastructure, resources, and communities sit relative to normal water levels and tides, flood waters, and storm surges and waves informs their exposure. Elevation information is critical for communities to anticipate the impacts of disasters and to prepare resilient and cost-effective response and redevelopment strategies.

Much of Bangladesh is a low-lying delta with elevations ranging from just a few meters above sea level to below sea level in some areas. This makes it extremely vulnerable to flooding, especially during monsoon season.

The country is prone to severe cyclones and storm surges, exacerbated by its low elevation. These events can cause disruption to services and destroy critical infrastructure.

Many urban communities lie on the coast, which can be only a few meters above sea level or even below sea level, exposing highly dense populations to major climatic threats.

Riverbank erosion and sediment deposition are common problems in Bangladesh. The elevation of land can change rapidly due to sediment buildup or erosion, which can undermine infrastructure and lead to the loss of valuable land.

How is this made?

Elevation data comes from Fathom's FABDEM product blended with local terrain data, creating a combined digital terrain model. By combining FABDEM data with available LiDAR data, Fathom achieves elevation coverage of the entire globe at a 30-meter resolution (with finer resolution in areas with more enhanced existing data). The data is provided as meters above sea level, which is then further categorized into various increments of 160-180 meters.

How should we interpret it?

Elevation data has critical implications for resilience planning, disaster management and resource allocation. Low-lying areas closer to sea level, especially coastal areas, should be further analyzed to assess the resilience of their infrastructure, disaster intervention programs, and other adaptive strategies for mitigating risks associated with flooding.

Source data citation “FABDEM.” n.d. Fathom. https://www.fathom.global/product/fabdem/.


Slope

| Dataset | Measurement | Units | Source | Release Year | Years of Data | Coverage | Resolution |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Forest And Buildings removed Copernicus DEM | --- | --- | Fathom | 2023 | NA | Global | 30 m |

What is it?

Slope refers to the percentage change in elevation over a certain distance. In hilly or mountainous areas, floods can occur within minutes after heavy rains, while in flat areas, floodwaters can remain for days.

Why use it?

Considering the slope of land is important in reducing construction costs, extending services and public facilities, minimizing the risks of hazards like flooding and landslides, and mitigating the impacts of development on natural resources.

In Bangladesh, the predominance of low-lying land with little change in slope poses significant risks related to flooding, drainage and water management, agriculture, and infrastructure design. Without natural drainage slopes, water accumulation can increase flood risks. This can also lead to disparate impacts on drainage and irrigation of agricultural lands, causing issues such as waterlogging and salinization, which may reduce crop yields. Moreover, there are existing capacity constraints on current infrastructure.

How is this made?

This layer leverages the same data suite and methods as the Elevation layer; slope is derived from the digital terrain model as the percentage change in elevation between neighboring cells.
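As a small illustration of the derivation, with a synthetic DEM in place of FABDEM:

```python
# Minimal sketch: percent slope from a DEM using numpy's gradient,
# assuming 30 m cells; the elevation array is synthetic.
import numpy as np

dem = np.random.rand(100, 100) * 50        # elevation in meters
dz_dy, dz_dx = np.gradient(dem, 30.0)      # rise per meter, each axis
slope_pct = np.hypot(dz_dx, dz_dy) * 100   # percent slope
```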

How should we interpret it?

Like elevation, slope has significant implications for the management of resilience and infrastructure in a region. Areas with small slopes risk more prolonged and intense flooding events, slower water flow and poor drainage, widespread erosion, and sediment deposition. Areas with large slopes risk rapid erosion and impede the development of infrastructure and the built environment. Moreover, high slopes can diminish transit and the general accessibility of a region.

Source data citation “FABDEM.” n.d. Fathom. https://www.fathom.global/product/fabdem/.


Landslides

| Dataset | Measurement | Units | Source | Release Year | Years of Data | Coverage | Resolution |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Global Landslide Model | --- | --- | NASA Goddard | 2020 | NA | Global | 30 arc seconds (1 km) |

What is this?

Landslides are the mass movement of rock or soil down a slope. These movements can take many forms and can move at very different speeds, from a quick topple to a slow spread. Some landslide types are more common in steep volcanic areas, while others are more common where soil has been filled in. Different landslide types are also more likely to be caused by different trigger events, such as heavy precipitation, liquefaction, excavation, and stream erosion. Landslides are caused by both natural and human-originated factors. Most simply, though, landslides occur when either downslope forces increase (such as increased mass above the would-be landslide site through soil saturation), or the strength of the land is reduced (such as through road cutting or vegetation loss). For more on the various landslide types, see the USGS Landslide Handbook.

Why use it?

Landslides can cause serious damage to buildings and infrastructure that are in the path of the landslide, and even death in the case of fast moving slides like debris flows. The damage to or destruction of infrastructure like roads, bridges, pipes and electrical lines can then cause disruption in far greater areas: with roads blocked, people and goods can't move; with pipes broken, people lose access to drinking water and sewage can contaminate the land; with electrical lines down, buildings lose power for important functions. Road-related landslides are especially insidious, as roadmaking itself often increases an area's susceptibility to landslides: the excavation and in-fill both weaken the surface. While the debris of some landslides can be fairly easily bulldozed away, the debris and destruction of others may require much larger and longer projects. In either case, landslides often continue for days to months after their onset, preventing the quick return to pre-slide life. Landslides can also dam waterways. These landslide dams can cause flooding above the dams while they hold, and then intense flooding below when these typically weak dams fail.

Once areas with landslide susceptibility are identified, mitigation efforts can be taken. Possible interventions include 1) reducing water weight by improving drainage and re-directing surface water (note how this parallels flooding interventions), 2) adding or caring for vegetation, which both binds the soil with roots and accelerates evaporation, 3) constructing retention walls for slope stabilization and the preemptive redirection of debris flows, 4) terraforming to reduce the weight at the head of a slope, and 5) covering slopes with mesh or cables. These are only some of the possible interventions; the specific approaches depend on the particular types of landslides an area is prone to and other local factors.

Identifying more landslide-prone areas is also useful for non-mitigation activities. A city can direct development, highways, and housing away from landslide-susceptible areas (as with flooding, it is common that cities grow into areas of greater hazard, as the safer areas have already been built upon). When projects do occur in areas with higher susceptibility, more care can be taken to avoid triggering a landslide: humans can cause landslides by adding weight or vibrations above a slope or by weakening a slope through excavation, soil in-fill, or saturation.

How is this made?

This map was developed by employing a "fuzzy heuristic approach" to model landslide susceptibility using globally available datasets. The model incorporates slope, rock classification, distance to fault lines, highways, and forest loss, all aggregated to a common 30 arc second (or 1 kilometer) resolution. Slope is calculated by taking the gradient of Viewfinder Panorama's global digital elevation model (DEM), which augments the Shuttle Radar Topography Mission's DEM to be more reliable at higher elevations with more extreme slopes, such as the Himalayas. (Note that this is a different DEM than we use for elevation and slope.) This 1-arc-second-resolution DEM is aggregated to 30 arc seconds by taking the maximum slope of the 900 constituent cells in each new, lower-resolution cell. Rock structure and strength are included by reclassifying the 13 lithological classes of Bouysse's Geological Map of the World [TK verify] into 5 quantitative classes by rock age and type, under the heuristic that younger rock and sedimentary rock are typically weaker than older rock and igneous and metamorphic rock. For seismicity, fault lines in the Geological Map of the World were vectorized, and a raster map was made measuring distance from the nearest fault line. The model incorporates highways, the construction of which can contribute to landslide susceptibility, by rasterizing OpenStreetMap's highway road network: any 30-arc-second tile that includes a highway segment is classified as a highway tile. Similarly, any tile that features any forest loss between 2000 and 2013 in Hansen's inventory of forest loss (see forest and deforestation) is classified for forest loss.

These variables are combined in a fuzzy overlay model. In an overlay model, multiple spatial variables are combined (such as through addition or by taking the maximum for each cell) to create a new aggregate variable, which is expected to portray a phenomenon of interest: in this case, landslide susceptibility. Unlike in spatial regression, this combination is not statistical and is not based on data for the outcome variable. Rather, the combination is heuristic and predetermined by the analysts, guided by prior knowledge about each variable and presumptions about how it relates to the outcome.

A heuristic model was perhaps chosen because of the incompleteness of the outcome variable's dataset. As the predicted variable, the model used 1,194 landslides around the world, between 2007 and 2013, from the Global Landslide Catalog. Only landslides with spatial accuracy of 1 kilometer or better were used. In part because these lower-accuracy events were excluded, and more because of documented underreporting biased toward landslides near settlements and roadways, this dataset is known to be incomplete, and incomplete in a non-random way. To this point, all the local inventories considered by the model developers featured greater landslide density than the global inventory. Because of this incompleteness and bias, the researchers may not have trusted the statistical relationship between the independent and dependent datasets, and thus chose a heuristic model.

In this heuristic model, a threshold is chosen for each dependent variable that indicates landslide susceptibility for that variable. For example, according to slope by itself, sites with greater than 30° of slope might be deemed landslide susceptible. Using this threshold, the variable is then transformed to a scale of 0 to 1. The transformation blurs the threshold, so that cells just below the threshold are given partial values close to 1, and cells farther below are given values closer to 0, or 0 itself. In the example of slope, areas with greater than 30° of slope are assigned a 1, areas between 10° and 30° are assigned fractional values, and areas with less than 10° of slope are assigned 0. (These break points are not necessarily the ones used by the model developers.) The choice of transformation depends on knowledge about the particular variable. To read more about possible transformations, see Esri's page on fuzzy membership functions.
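
To make the transformation concrete, here is a minimal sketch of a linear fuzzy membership function using the illustrative 10°-30° break points above; the model developers may have used different functions and break points:

```python
import numpy as np

def fuzzy_linear(values, low, high):
    """Linear fuzzy membership: 0 at or below `low`, 1 at or above `high`,
    interpolated linearly in between."""
    return np.clip((values - low) / (high - low), 0.0, 1.0)

slope_deg = np.array([5.0, 10.0, 20.0, 30.0, 45.0])
# Using the illustrative 10-30 degree break points from the text
print(fuzzy_linear(slope_deg, low=10.0, high=30.0))  # -> [0.  0.  0.5 1.  1. ]
```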

In this landslide model, all of the non-slope variables are combined using a fuzzy gamma operator.° The resulting raster is then multiplied by the transformed slope variable, to ensure that no flat terrain can be marked as highly susceptible to landslides. In this way, the researchers gave extra importance to slope. Altogether, the model for susceptibility is

$Susceptibility = \left(1 - \prod_{i}^{n} (1 - X_i)\right)^{0.9} \cdot \left( \prod_{i}^{n} X_i \right)^{0.1} \cdot S$

where $X_i$ is the transformed input variable $i$ in the set of $n$ non-slope variables and $S$ is the transformed slope variable.
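
As a sketch of how this formula could be evaluated, assuming hypothetical transformed input values (only the structure follows the formula above; the numbers are made up):

```python
import numpy as np

def fuzzy_gamma(X, gamma=0.9):
    """Fuzzy gamma operator over transformed inputs X in [0, 1].
    X has shape (n_variables, n_cells)."""
    fuzzy_or = 1.0 - np.prod(1.0 - X, axis=0)   # analogous to "any variable fires"
    fuzzy_and = np.prod(X, axis=0)              # analogous to "all variables fire"
    return fuzzy_or**gamma * fuzzy_and**(1.0 - gamma)

# Hypothetical transformed inputs (lithology, faults, highways, forest loss)
# for three cells, plus a transformed slope S for the same cells
X = np.array([
    [0.2, 0.8, 0.9],
    [0.1, 0.7, 0.9],
    [0.0, 0.5, 0.8],
    [0.3, 0.6, 1.0],
])
S = np.array([0.0, 0.5, 1.0])

susceptibility = fuzzy_gamma(X) * S  # multiplying by S keeps flat terrain at 0
print(susceptibility.round(3))
```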

This model results in a range of values between 0 and 1, with higher values indicating greater possibility of landslides. The researchers then classified the continuous result into five classes (very low, low, moderate, high and very high susceptibility) so that each class has half the area of the previous class (e.g., the low susceptibility class covers half the area covered by the very low susceptibility class). The specific break points are 0.11, 0.49, 0.671, and 0.75, but these are not physically meaningful values.
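
Applying the published break points to classify the continuous index might look like this (a sketch; `np.digitize` is simply one way to apply the thresholds):

```python
import numpy as np

breaks = [0.11, 0.49, 0.671, 0.75]            # published break points
labels = ["very low", "low", "moderate", "high", "very high"]

susceptibility = np.array([0.05, 0.30, 0.60, 0.70, 0.90])  # example index values
classes = np.digitize(susceptibility, breaks)  # class 0..4 for each cell
print([labels[c] for c in classes])
# -> ['very low', 'low', 'moderate', 'high', 'very high']
```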

The inventory of global landslides was then used to evaluate the performance of the model. The classified version of the model yielded an area under the ROC curve (AUC) of 0.82, while the unclassified, continuous version yielded an AUC of 0.85.

How should I interpret it?

Landslide susceptibility measures how prone a site is to landslide formation, given its intrinsic properties (in this case, slope, rock type, road presence, fault zones and forest loss). It is not a measure of probability that a landslide will occur during any particular period.

In this map, landslide susceptibility is divided into 5 classes: very low, low, moderate, high, and very high. These classes mark break points in the landslide susceptibility index, from 0 to 1, so that each susceptibility class covers roughly twice as much area as the next. That is, very low covers about 52% of total land in the study; low, 26%; moderate, 13%; high, 6%; very high, 3%. While these classes, and the index itself, were not devised based on any distribution of actual landslides, they correspond to increasing levels of landslide likelihood. Very roughly, the very high susceptibility class has a positive likelihood ratio ($\frac{\text{True Positive Rate}}{\text{False Positive Rate}}$) 3 times greater than the high susceptibility class and 100 times greater than the very low susceptibility class. With these class divisions, the model has an area under the curve (AUC) of 0.82.

In viewing this map, it is important to understand that, as with many of these layers, the input variables are limited by the requirement of being (near) globally available. For this reason, specific soil conditions and structure cannot be included. Very different rock types and structures may appear within the same cell of this susceptibility map and may, in actuality, have very different susceptibilities to mass movement, but the model does not differentiate them.

Nor does the model differentiate between types of mass movement (falls, topples, slides, spreads, flows, or their sub-types). The Global Landslide Catalog used for validation, though, emphasizes rainfall-triggered landslides, and the researchers' choice to give more weight to slope prioritizes landslides more common in steeper terrains.

Source data citation Stanley, Thomas, and Dalia B. Kirschbaum. 2017. “A Heuristic Approach to Global Landslide Susceptibility Mapping.” Natural Hazards 87 (1): 145–64. https://doi.org/10.1007/s11069-017-2757-y.


Earthquake Events

Methodology

Source data citation

National Geophysical Data Center / World Data Service (NGDC/WDS): NCEI/WDS Global Significant Earthquake Database. NOAA National Centers for Environmental Information. DOI:10.7289/V5TD9V7K

Note that the cited damage level is for the earthquake overall, not necessarily in the city of interest. The database defines damage levels by monetary cost: limited, < $1 million; moderate, $1–5 million; severe, $5–24 million; extreme, $25+ million.


Liquefaction

Dataset Measurement Units Source Release Year Years of Data Coverage Resolution
Global liquefaction susceptibility map --- --- NA 2019 NA Global 1.2 km

What is this?

One of the ways earthquakes can cause damage is through temporarily converting the ground into a liquid-like phase, in a process called soil liquefaction. Strong vibrations or increased water pressure in water-saturated sediment allow the formerly stable and static soil particles to move around and behave like a fluid. Essentially, the added energy is sufficient to convert the solid ground into a pseudo-liquid. When the ground begins acting like a liquid, it loses strength and can no longer support weight, such as buildings or bridges. Without solid foundations, the structures can collapse. The liquefied earth is also able to flow down even gentle slopes, causing landslides. This map shows which parts of the city are most susceptible to soil liquefaction, and therefore which areas are more vulnerable to liquefaction-based damage should an earthquake occur.

Liquefaction is typically discussed in relation to earthquakes, but non-seismic loads can also increase pore water pressure to the point of triggering liquefaction. Examples of non-seismic triggers include the failures of dams and mine tailings impoundments.°

Why use it?

While all of a city has the same likelihood of seismic activity (given the geographic scale of earthquakes), not all areas have the same susceptibility to liquefaction, and therefore areas face different risks of liquefaction-based damage in the event of an earthquake. Factors such as softer soil, greater slope, and worse drainage can heighten the risk of liquefaction and the damage it can cause. Understanding which areas are more susceptible can inform response measures.

The three general ways to reduce an area's liquefaction risk are to avoid susceptible areas, strengthen structures, and strengthen the soil. The most basic response is to decrease exposure by avoiding construction and inhabitance in liquefaction-prone areas: keep people and assets in areas that will handle earthquake activity better. It is not always possible to avoid susceptible areas, especially when they are already inhabited. In this case, it is important to engineer structures, and specifically their foundations, to be more liquefaction resistant. The most fundamental response to liquefaction susceptibility is mitigation by way of strengthening the soil. Soil conditions can be improved by various methods, including drainage, soil densification, and sediment replacement.°

How is this made?

This liquefaction susceptibility map is generated by a geospatial regression model designed for global application by Zhu et al. (The team actually developed two models: a coastal model and an inland model. The liquefaction susceptibility map combines the two, using the coastal model for regions within 20 kilometers of the sea and the inland model everywhere else.) The probability model was trained on the observed liquefaction of 27 earthquake events in 6 countries, including events where no liquefaction was observed, and uses proxies for soil density, soil saturation, and earthquake loading as explanatory variables. As a proxy for earthquake loading the model uses peak ground velocity (PGV, the maximum ground velocity recorded during a particular seismic event), and as a proxy for soil density, the slope-derived time-averaged shear wave velocity $V_{S30}$. $V_{S30}$ is the speed at which a shear wave (a seismic wave in which the ground moves perpendicular to the direction of propagation, also known as a secondary or S-wave) travels from the surface to a depth of 30 meters. This variable, which is often used to quantify soil conditions, normally requires on-the-ground measurement; the model therefore approximates it using slope. For soil saturation, the models use several variables: the coastal model uses precipitation and distance to water bodies, while the inland model also uses water table depth.

As trained, the model is a liquefaction probability model. It predicts how likely liquefaction is in an area given the empirical characteristics of an observed earthquake. While this prediction is useful for validating the model's performance, it is not useful for understanding which areas are susceptible to future liquefaction. To convert the probability model into a susceptibility model, the PGV variable (which describes the seismic activity of a particular earthquake) is simply removed from the trained model. The resulting values are not individually meaningful in an absolute sense, but they can be classed to show which areas have lesser and greater susceptibility. In this map, the values are classed as very low, low, moderate, high and very high.
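
To illustrate the probability-to-susceptibility conversion, here is a minimal sketch that assumes a logistic-regression form with made-up coefficients and simplified variables; it is not the fitted Zhu et al. model:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical coefficients for illustration only, not fitted values
b0, b_pgv, b_vs30, b_wet = -3.0, 1.2, -0.8, 0.5

def liquefaction_probability(ln_pgv, ln_vs30, wetness):
    """Probability form: includes the earthquake-loading proxy ln(PGV)."""
    return sigmoid(b0 + b_pgv * ln_pgv + b_vs30 * ln_vs30 + b_wet * wetness)

def liquefaction_susceptibility(ln_vs30, wetness):
    """Susceptibility form: the same linear predictor with the PGV term dropped.
    Values are meaningful only relative to one another and are then classed."""
    return b0 + b_vs30 * ln_vs30 + b_wet * wetness
```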

How should I interpret it?

This map shows areas' susceptibility to liquefaction rather than their probability of liquefaction. Whereas probability quantifies how likely an event is to occur in a certain time period (e.g., in the next 10 years or during this window), susceptibility measures how likely an event is to occur given a triggering event. One way this difference is often described is that probability includes a temporal dimension while susceptibility does not and is limited to the inherent characteristics of a place. This difference can be seen in the construction of the liquefaction susceptibility model: the model was first developed with the PGV of historical earthquakes, that is, with a trigger event included; to make it a susceptibility model that applies beyond those specific events, the PGV was removed, leaving the more or less unchanging characteristics of soil density and soil saturation.

Because the likelihood of a triggering earthquake is excluded, it is important to understand that liquefaction susceptibility may be high in areas where the probability of seismic activity is low. Similarly, areas with low liquefaction susceptibility may have high probabilities of earthquakes and could experience significant seismic damage in other ways. It is helpful, then, to view liquefaction susceptibility maps in conjunction with maps of seismic hazard.

It is also important, when using these specific liquefaction maps, to remember that they were developed using globally available data. The tradeoff of this choice, which made susceptibility maps available to far more areas, is that the maps take into account less site-specific data than an on-the-ground assessment would. For example, the time-averaged shear wave velocity $V_{S30}$, which is used to quantify soil density, is calculated using slope rather than in situ testing. This means that areas sharing similar slope characteristics but having different compositions (sand, rock, etc.) have the same alleged soil density in this model. While the model performs quite well, on-the-ground analysis could give a more localized picture of susceptibility. Similarly, the resolution of the model is quite low, at a 1.2-kilometer grid size. Liquefaction susceptibility may vary significantly within these grid cells, depending on the composition of the soil.

Source data citation Zorn, Conrad, and Elco Koks. 2019. “Global Liquefaction Susceptibility Map.” Zenodo. https://doi.org/10.5281/ZENODO.2583746.


Seismic Hazard

Dataset Measurement Units Source Release Year Years of Data Coverage Resolution
Global Earthquake Model (GEM) Seismic Hazard Map --- --- Global Earthquake Model 2023 NA Global 1km?

Note: we are now using the 2023 version rather than the 2018 version, so the resolution and possibly other aspects need to be updated.

Methodology

This map depicts the geographic distribution of the Peak Ground Acceleration (PGA) with a 10% probability of being exceeded in 50 years, computed for reference rock conditions (shear wave velocity of 760-800 m/s). Seismic risk in cities has increased mainly due to urbanization, poor land-use planning and construction, inadequate infrastructure and services, and environmental degradation. Urban risk predictions and expected losses from major earthquakes in the future justify proactive risk mitigation activities.

These data and map are derived from the Global Earthquake Model, which provides information on probabilistic earthquake risk; at its scale, it does not address the vulnerability to earthquakes of individual structures or sub-regions. An earthquake with moderate damage potential can cause structural damage to unreinforced masonry buildings and the movement of wood-frame houses. See https://web.archive.org/web/20120306024658/http://quake.abag.ca.gov/shaking/mmi/plaintext/ for more.

Source data citation Pagani, Marco, Julio García-Pelaez, Robin Gee, Kendra Johnson, Valerio Poggi, Michele Simionato, Richard Styron, et al. 2018. “GEM Global Seismic Hazard Map v.2023.1.” https://doi.org/10.13117/GEM-GLOBAL-SEISMIC-HAZARD-MAP-2018.1.


Seismic Risk

Dataset Measurement Units Source Release Year Years of Data Coverage Resolution
Global Earthquake Model (GEM) Seismic Risk Map --- --- Global Earthquake Model 2023 NA Global Hexagonal grid of 0.30 x 0.34 decimal degrees (approximately 1,000 km2 at the equator)

Methodology

This map indicates the geographic distribution of average annual loss (AAL) of built-up area due to ground shaking in the residential, commercial, and industrial building stock, considering contents, structural, and non-structural components. It does not consider the effects of tsunamis, liquefaction, landslides, and fires following earthquakes. The loss estimates are from direct physical damage to buildings due to shaking, and thus damage to infrastructure or indirect losses due to business interruption are not included. Seismic risk in cities has increased mainly due to urbanization, poor land-use planning and construction, inadequate infrastructure and services, and environmental degradation. Urban risk predictions and expected losses from major earthquakes in the future justify proactive risk mitigation activities.

Map data from Silva et al., 2023, Global Earthquake Model (GEM) Seismic Hazard Map and Seismic Risk Map, v2023.1.

These data and map are derived from the Global Earthquake Model, which provides information on probabilistic earthquake risk; at its scale, it does not address the vulnerability to earthquakes of individual structures or sub-regions. An earthquake with moderate damage potential can cause structural damage to unreinforced masonry buildings and the movement of wood-frame houses. See https://web.archive.org/web/20120306024658/http://quake.abag.ca.gov/shaking/mmi/plaintext/ for more.

Source data citation Silva, V., D. Amo-Oduro, A. Calderon, J. Dabbeek, V. Despotaki, L. Martins, A. Rao, et al. 2018. “Global Earthquake Model (GEM) Seismic Risk Map (version 2018.1).” https://doi.org/10.13117/GEM-GLOBAL-SEISMIC-RISK-MAP-2018.1.


Road Network Criticality

Dataset Measurement Units Source Release Year Years of Data Coverage Resolution
OpenStreetMap --- --- OpenStreetMap NA NA Global NA

What is this?

Some roads are more important than others to an area's road network. They connect more parts of the city, are used by more people on more trips, and cause larger problems when obstructed. This map shows an approximation of how important, or critical, each road segment (the length between two intersections) is to the overall network. Higher percentages indicate higher importance.

Why use it?

Roads are crucial for the movement of people and goods throughout, and in and out of, a city. Knowing which parts of a road network are most critical is helpful for understanding where a city's mobility is most vulnerable, as well as where non-disaster interventions could be targeted.

The criticality map highlights segments within a road network that, if blocked due to flooding or other hazards, would cause a high degree of disruption to travel across the city. The obstruction of a non-critical street may only annoy people who live nearby, while the obstruction of a more critical highway could broadly disrupt economic and social life in the city, slow emergency services during a crisis, and even prevent evacuation.

By knowing which segments are most critical, a city can work towards making the road network more resilient. This effort may involve elevating particular segments to lift them above the flood zone, repaving them with more permeable materials, reconstructing the soil sediment underneath, or adding green infrastructure alongside them, such as bioswales or erosion-preventing vegetation. The resilience effort could also redesign the road network more broadly, adding alternative routes and guiding traffic through less hazard-exposed areas.

The criticality map can also inform interventions less related to disaster preparedness but still very important. For example, a critical road may be a strong candidate for a bus rapid transit route or other public transit. Alternatively, it may be wise to put transit routes near but not along these most critical segments in order to not trap them in congestion. Locations along critical roads may also be good sites for public facilities, as these locations are closer to more parts of the city along the road network. (It is important to note, however, that by placing these facilities along critical segments, the city would be making the segments even more critical.) Finally, roads can also be obstructed by non-disaster events, such as major construction. An awareness of the most critical segments could inform the city of the possible transport and congestion impact a major construction project could have.

In the map, segments in red are the most critical for the overall connectivity of the city.

How is this made?

Criticality here is measured by calculating betweenness for each segment. Betweenness for a segment is calculated by first mapping the shortest path from each intersection to every other intersection. Once all intersections have been connected to all other intersections, the total number of trips is counted, as well as the number of trips that use each road segment in the network.[^criticality_caveat] Each segment's betweenness score is the percentage of total trips that use that segment.

The road networks used in this analysis come from OpenStreetMap. The analysis includes only primary and secondary road types.

[^criticality_caveat]: Depending on the city size, the model may only use a sample of intersections, rather than all the intersections in the city. The resulting betweenness scores are shown to be similar.
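
For intuition, edge betweenness can be computed on a toy network with the networkx library; this sketch is not the project's actual pipeline:

```python
import networkx as nx

# Toy street network: nodes are intersections, edges are road segments
G = nx.grid_2d_graph(4, 4)

# Fraction of all shortest intersection-to-intersection paths that pass
# through each segment (normalized=True divides by the number of node pairs)
criticality = nx.edge_betweenness_centrality(G, normalized=True)

# The highest-scoring segments are the most critical ones
for edge, score in sorted(criticality.items(), key=lambda kv: -kv[1])[:5]:
    print(edge, f"{score:.1%}")
```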

How should I interpret it?

Question: does GOST acknowledge highways or road type? What secondary roads are included?

Cities vary substantially in their distributions of criticality, and therefore in their probabilities of network disruption due to the failure of a small number of road segments. For example, when a street network follows a grid pattern or has many alternative routes between destinations, the risk of disruption owing to the failure of key road segments is lower. In contrast, other cities have particular road segments whose failure would affect a large proportion of journeys across the city.

The user of this road criticality assessment should bear in mind a number of caveats. For computational reasons, this analysis treats all drivable roads equivalently, as well as all intersections equivalently. This simplification does not necessarily reflect reality. Some intersections are near the homes and workplaces of hundreds of people, while others are near the homes and workplaces of very few. Many more people will be initiating trips from the former than from the latter. Similarly, road segments differ in quality, with some being designed for greater traffic and greater speeds (e.g., a highway coursing through a city, or arterial roads running parallel to other streets in a grid). In addition, not everyone takes the shortest route: for many reasons, including congestion, road quality, familiarity, beauty, and comfort, people may take less direct routes. These more indirect routes will in some instances diffuse the network's criticality, while in other instances concentrate it.

Source data citation “OpenStreetMap.” n.d. OpenStreetMap. https://www.openstreetmap.org/.


Drought

Methodology

The Normalized Difference Moisture Index (NDMI) detects moisture content in vegetation and is an indicator of water stress in crops. It is also used to identify vegetation in dry areas with an increased risk of combustion. This map computes NDMI over the June-September months of 2014-2023.

NDMI measures vegetation moisture on a scale of -1 to 1. Negative values indicate water stress, and positive values may indicate waterlogging.
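
For reference, NDMI is commonly computed from near-infrared (NIR) and shortwave-infrared (SWIR) reflectance, for Sentinel-2 typically bands 8 and 11. A minimal sketch with hypothetical pixel values:

```python
import numpy as np

def ndmi(nir: np.ndarray, swir: np.ndarray) -> np.ndarray:
    """Normalized Difference Moisture Index from NIR and SWIR reflectance."""
    nir = nir.astype(np.float64)
    swir = swir.astype(np.float64)
    return (nir - swir) / (nir + swir)

# Hypothetical reflectance values for three pixels
nir = np.array([0.45, 0.30, 0.20])
swir = np.array([0.25, 0.30, 0.35])
print(ndmi(nir, swir).round(3))  # -> [ 0.286  0.    -0.273]
```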

The stresses introduced by low soil moisture are borne by agricultural crops and biodiversity. Increased depletion of soil moisture also leads to a higher risk of wildfire.

footnote: © ESA Land Cover CCI 2020 / Contains modified Copernicus Sentinel data (2020) processed by ESA WorldCover consortium / Gannon et al 2021

Source data citation [???]


FWI

Dataset Measurement Units Source Release Year Years of Data Coverage Resolution
Global Fire WEather Database Fire Weather Index - NASA 2015 1990-Present Global 0.5° latitude by 2/3° longitude

What is this?

The Fire Weather Index (FWI) measures the potential for wildfire in an area at a given time, by combining measures of moisture, temperature, and wind. Areas with a higher index have a higher potential for wildfire, though the specific relationship between FWI and fire danger is highly localized; therefore values representing high fire potential must be determined locally.

Why use it?

Around the world, wildfires are becoming increasingly frequent and devastating due to climatic and anthropogenic drivers. Increased temperature resulting from climate change may lead to an increase in wildfire frequency and total burned area, and fire frequencies under expected conditions by 2050 are projected to increase by approximately 27% globally compared to 2000 levels.

In Bangladesh, practices like shifting cultivation (“slash and burn”) and grazing are contributing to wildfires, which not only jeopardize nearby cities by claiming lives and causing pollution, but also destroy valuable forests. Understanding FWI trends can help cities prepare for fire seasons by raising awareness of fire prevention methods and implementing emergency response systems.

footnote: Farukh, Murad A., Md A. Islam, and Hiroshi Hayasaka. 2023. “Wildland Fires in the Subtropical Hill Forests of Southeastern Bangladesh.” Atmosphere 14 (1): 97. https://doi.org/10.3390/atmos14010097.

How is this made?

The FWI originated from the Canadian Forest Fire Weather Index System, and our data source, the Global Fire WEather Database (GFWED), built upon this System to create a global FWI dataset. The FWI System includes three fire behavior indices that reflect the behavior of a fire if it were to start. The Initial Spread Index (ISI) represents the ability of a fire to spread immediately after ignition, with values greater than 15 considered extreme. The Buildup Index (BUI) represents the total fuel available to a fire, with values greater than 90 considered extreme. The Fire Weather Index (FWI) combines the ISI and BUI to provide an overall rating of fireline intensity in a reference fuel type and level terrain, with values greater than 30 considered extreme.

When we evaluate wildfire risk, we should be concerned about extreme values, instead of the average, because extreme values are what would trigger fires. Therefore, in the graph, the monthly FWI values are the 95th percentile of all the daily FWI values of that month during 2016-2021. This way, the extreme values of each month are captured, and we can see which months have the highest potential for wildfire events.
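
A sketch of this aggregation with pandas, substituting synthetic daily values for the real GFWED series:

```python
import numpy as np
import pandas as pd

# Synthetic daily FWI series standing in for the GFWED data, 2016-2021
dates = pd.date_range("2016-01-01", "2021-12-31", freq="D")
rng = np.random.default_rng(0)
fwi = pd.Series(rng.gamma(shape=2.0, scale=5.0, size=len(dates)), index=dates)

# 95th percentile of all daily values in each calendar month across the period
monthly_p95 = fwi.groupby(fwi.index.month).quantile(0.95)
print(monthly_p95)
```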

footnote: Field, R. D., A. C. Spessa, N. A. Aziz, A. Camia, A. Cantin, R. Carr, W. J. de Groot, et al. 2015. “Development of a Global Fire Weather Database.” Natural Hazards and Earth System Sciences 15 (6): 1407–23. https://doi.org/10.5194/nhess-15-1407-2015.

How should we interpret it?

FWI values must be compared locally: months with higher FWI values represent periods when wildfire risks are relatively high. However, there are a few caveats when interpreting FWI:

  • The FWI only relies on meteorological variables and therefore only captures the risk of wildfire from an atmospheric perspective.
  • It does not include three important factors of wildfire risk and spread: (1) vegetation/fuel availability, (2) ignition (natural or human-caused), and (3) fire suppression and management. As a result, not all extreme fire weather days will result in a wildfire.

footnote: https://climatedataguide.ucar.edu/climate-data/canadian-forest-fire-weather-index-fwi.

Source data citation Field, R. D., A. C. Spessa, N. A. Aziz, A. Camia, A. Cantin, R. Carr, W. J. de Groot, et al. 2015. “Development of a Global Fire Weather Database.” Natural Hazards and Earth System Sciences 15 (6): 1407–23. https://doi.org/10.5194/nhess-15-1407-2015.


Lightning flash rate density

Dataset Measurement Units Source Release Year Years of Data Coverage Resolution
TRMM LIS Very High Resolution Climatology (VHRC) Flash Rate Density flashes km⁻² yr⁻¹ NASA 2016 1998-2013 Global 0.1° (approx. 11km at the equator)

What is this?

The lightning flash rate density here refers to the mean annual flash rate density, which is the total lightning flashes per year per square kilometer. It comes from a NASA dataset called the LIS 0.1 Degree Very High Resolution Gridded Lightning Full Climatology (VHRFC) dataset, which measures the distribution and variability of total lightning occurring in the Earth's tropical and subtropical regions.

footnote: “LIS 0.1 DEGREE VERY HIGH RESOLUTION GRIDDED LIGHTNING FULL CLIMATOLOGY (VHRFC) V1.” n.d. NASA. https://cmr.earthdata.nasa.gov/search/concepts/C1979883245-GHRC_DAAC.html.

Why use it?

  • This information can be used for severe storm detection and analysis, which can help cities prepare for and prevent potential lightning damages.
  • The Khasi Hills, on the border between India and Bangladesh, is one of the lightning hotspots in Asia.
  • UN data shows that Bangladesh loses 300 lives per year to lightning strikes, a figure starkly higher than that of the United States, which sees fewer than 20 deaths per year.
  • Deaths caused by lightning are rising in Bangladesh, and extreme weather caused by climate change is to blame for the increase in fatalities.

footnote: Albrecht, Rachel I., Steven J. Goodman, Dennis E. Buechler, Richard J. Blakeslee, and Hugh J. Christian. 2016. “Where Are the Lightning Hotspots on Earth?,” November. https://doi.org/10.1175/BAMS-D-14-00193.1.
footnote: “Bangladesh: Climate Change Is Increasing Lightning Deaths.” 2024. PreventionWeb. January 26, 2024. https://www.preventionweb.net/news/climate-change-increasing-lightning-deaths-bangladesh.

How is this made?

  • The lightning flash measurements are taken by the Lightning Imaging Sensor (LIS) instrument on board the NASA Tropical Rainfall Measuring Mission (TRMM) launched in November 1997. It is designed to detect lightning from space during both day and night with storm-scale resolution. The LIS was powered off and the TRMM mission ended data collection on 8 April 2015, terminating a mission lasting a total of 17+ years.
  • TRMM began its descent toward decommissioning in 2014, with several instrument outages during that period, which the dataset authors expect could introduce uncertainties into the lightning data. Therefore, 2014 and 2015 data were not included in the dataset.

footnote: Albrecht, Rachel I., Steven J. Goodman, Dennis E. Buechler, Richard J. Blakeslee, and Hugh J. Christian. 2016. “Where Are the Lightning Hotspots on Earth?,” November. https://doi.org/10.1175/BAMS-D-14-00193.1.

How should we interpret it?

The number of lightning flashes is only one factor in lightning fatalities. The majority of victims are farmers, who are exposed to the elements as they work the fields through the rainy monsoon months in the spring and summer. Higher lightning flash rate density therefore does not necessarily mean higher fatality rates; the vulnerability of the local population, especially outdoor workers during the monsoon, plays a large part as well.

footnote: Vaidyanathan, Rajini. 2023. “Bangladesh Sees Dramatic Rise in Lightning Deaths Linked to Climate Change.” BBC, December 31, 2023. https://www.bbc.com/news/world-asia-67779223.


Climate Projections Overview

What is this, and how is this made?

Climate projection models are tools that scientists use to predict how the Earth's climate might change in the future. These models simulate the interactions between the atmosphere, oceans, land, and ice, taking into account factors like greenhouse gas emissions and natural climate processes. By running these models under different scenarios, scientists can estimate potential changes in temperature, precipitation, sea level, and other climate-related factors over time. This helps us understand and prepare for future climate conditions.

Because the climate is very complex, different models might emphasize various aspects of the climate system or use slightly different assumptions, so by combining them into a multi-model ensemble, scientists can average out individual biases and gain a more robust and reliable prediction of future climate changes. This approach also helps identify where models agree or disagree, improving confidence in the results.

The climate projections in this analysis are directly adapted from the Climate Change Knowledge Portal (CCKP), which computes a range of climate indicators using median values of multi-model ensembles derived from the sixth phase of the Coupled Model Intercomparison Project (CMIP6). Up to 31 CMIP6 models are used in the CCKP-CMIP6 collection. CCKP defines the 20-year window of 1995-2014 as the reference period, and any future projections are compared against the climate condition during this reference period, so that we can gauge the magnitude of climatic changes. The future period used in this analysis is 2024-2100.
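
As a sketch of how a projected change is measured against the reference period (synthetic data; CCKP's actual processing is more involved):

```python
import numpy as np
import pandas as pd

# Synthetic annual mean temperature for one model run, 1995-2100
years = np.arange(1995, 2101)
rng = np.random.default_rng(1)
temp = pd.Series(26.0 + 0.03 * (years - 1995) + rng.normal(0, 0.2, len(years)),
                 index=years)

reference = temp.loc[1995:2014].mean()     # CCKP reference period
anomaly = temp.loc[2024:2100] - reference  # projected change vs. the reference
print(anomaly.round(2).head())
```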

Moreover, SSP scenarios, or Shared Socioeconomic Pathways, are a set of narratives that describe different possible futures based on varying levels of social, economic, and environmental development. These scenarios are used in climate modeling to explore how different societal choices might impact greenhouse gas emissions and, consequently, climate change.

To disentangle the deep uncertainties embedded in climate models and to effectively inform decision-making on adaptation strategies, this analysis presents climate projections for X Shared Socioeconomic Pathways (SSPs): [TBD], each with its corresponding level of designated radiative forcing (W/m²) by 2100 (radiative forcing is a measure of how much the Earth's energy balance is changed by factors like greenhouse gases, aerosols, or changes in solar energy).

Why use it?

Bangladesh is particularly vulnerable to climate change due to its low-lying geography, dense population, and reliance on agriculture. The country faces increased risks of severe flooding, stronger cyclones, rising sea levels, and more intense heatwaves. These changes threaten homes, crops, and livelihoods, especially in coastal and rural areas. Climate change also exacerbates existing challenges like poverty and food security, making it crucial for Bangladesh to prepare and adapt to these growing threats.

The climate projections can help predict future risks like flooding, heatwaves, and sea-level rise. By understanding these potential changes, cities can make better decisions to protect communities, infrastructure, and livelihoods, ensuring a safer and more resilient future for the residents.

How should we interpret it?

Climate model projections are valuable tools for understanding how climate change could impact cities, but they come with some caveats. These models are based on assumptions and scenarios that may not perfectly predict the future, so there's some uncertainty in the results. They also operate on large scales, so local factors unique to a city might not be fully captured. To help overcome some of the limitations, we include more detailed versions of the climate projection graphs that paint a fuller picture of the full range of model predictions, so that cities have a more comprehensive understanding of the potential hazards they might confront in the upcoming decades. Whenever possible, the climate projections should be used alongside other local data and expert advice when planning for climate impacts.

This analysis focuses on indicators related to temperature and precipitation to gain a rough understanding of possible changes in future heat, flood, and cold spell hazards. The following part explains each indicator in greater detail.

footnote: CCKP Metadata.


Mean temperature and maximum of daily max-temperature

What is this?

Mean temperature refers to the average daily mean temperature over a year. The maximum of daily max-temperature refers to the highest daily maximum temperature recorded in a year.

Why use it?

Temperature increases of up to 1.5 degrees could lead to a variety of consequences for health, productivity, and natural hazards. For urban residents, higher temperatures could mean more incidents of heat-related illnesses, such as heat exhaustion, heat stroke, cardiovascular and kidney diseases, and even death. Rising temperatures could also worsen air pollution by increasing ground-level ozone smog, which is created when pollution from cars, factories, and other sources reacts to sunlight and heat. Furthermore, warmer temperatures are linked to projected precipitation increases because the Clausius-Clapeyron relationship dictates that for every 1°C of increased air temperature, the air's potential to carry moisture increases by about 7%. Thus, the warmer the air, the more moisture it can carry, and the more water is available to fall when rain does form.
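
As a rough sketch of the arithmetic (an approximation, since the exact relation varies with temperature):

$\frac{e_s(T + \Delta T)}{e_s(T)} \approx 1.07^{\Delta T}$

where $e_s$ is the saturation vapor pressure. For example, $\Delta T = 2$°C gives $1.07^2 \approx 1.14$, or about 14% more moisture-carrying capacity.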

Importantly, mean temperature statistics do not necessarily capture seasonal differences or extreme scenarios. The rise in summer temperatures could exceed the projected average increase, exacerbating or prolonging severe weather conditions that pose a significant threat to people’s lives and livelihoods.

Hot days and tropical nights

What is this?

The radial bar chart indicates the hot day/tropical night risk category of each month. The risk classes are: 0, no condition met; 1, days above 30°C and nights above 20°C; 2, days above 35°C and nights above 23°C; 3, days above 40°C and nights above 26°C; and 4, days above 45°C and nights above 29°C.

Why use it?

More frequent hot days can lead to heat stress, increased health risks, and higher energy demand for cooling. Understanding future trends helps cities plan for heatwaves and protect vulnerable populations.

An increase in tropical nights (nights that remain uncomfortably warm) can disrupt sleep and exacerbate health issues. It can also strain energy resources as people use more cooling at night.

Warm Spell Duration Index

What is this?

Warm Spell Duration Index (WSDI) is the total number of days in a year that are part of a heatwave of 6 days or longer, when the daily maximum temperature is higher than in 90% of days in the reference period 1995-2014.
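
A sketch of the counting rule, assuming the 90th-percentile threshold has already been computed from the 1995-2014 reference period:

```python
import numpy as np
import pandas as pd

def wsdi(tmax: pd.Series, threshold: float, min_run: int = 6) -> int:
    """Count the days belonging to warm spells: runs of at least `min_run`
    consecutive days with tmax above the reference-period threshold."""
    hot = (tmax > threshold).to_numpy()
    total, run = 0, 0
    for flag in hot:
        if flag:
            run += 1
        else:
            if run >= min_run:
                total += run
            run = 0
    if run >= min_run:  # a spell running to the end of the series
        total += run
    return total

# Synthetic year of daily maxima, with a hypothetical threshold
days = pd.date_range("2050-01-01", "2050-12-31", freq="D")
rng = np.random.default_rng(2)
tmax = pd.Series(30 + rng.normal(0, 3, len(days)), index=days)
print(wsdi(tmax, threshold=32.0))
```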

Why use it?

WSDI is an important measure of heatwaves, an increasingly common phenomenon in Bangladesh and around the world. Heatwaves not only pose a direct threat to urban areas by harming residents and industries, especially vulnerable populations, but also jeopardize people’s basic needs by undermining agricultural productivity and water supply in more rural areas, resulting in ripple effects like accelerating migration to urban areas and intensifying resource shortage.

Cold Spell Duration Index

What is this?

Cold Spell Duration Index (CSDI) is the total number of days in a year that are part of a cold spell of 6 days or longer, when the daily minimum temperature is lower than in 90% of days in the reference period 1995-2014.

Why use it?

Longer cold spells can increase the risk of cold-related illnesses and mortality, especially for vulnerable populations like the elderly and those with pre-existing health conditions. Cold spells can also lead to infrastructure issues, damage crops, and affect food supply and livelihoods.

Days with precipitation >20mm and >50mm

What is this?

These indicators refer to the average count of days in a year with at least 20mm and 50mm of precipitation.

Why use it?

The threshold for heavy precipitation can vary depending on local conditions and the specific impacts being assessed, but 50 millimeters is generally a good benchmark for significant rainfall events that can lead to flooding and other impacts in Bangladesh. As climate impacts are often convoluted and non-linear, it is difficult to speculate about the reasons or driving forces behind the projection results. More detailed or localized assessments and modeling would be needed to understand, for example, why precipitation increases in one scenario and decreases in another.

Annual precipitation amount during wettest days

What is this?

This indicator refers to the annual sum of precipitation when the daily precipitation rate exceeds the local 95th percentile of daily precipitation intensity. In other words, it is the total precipitation amount on the 5% wettest days of a year.

Why use it?

Bangladesh has been significantly affected by extreme precipitation, leading to frequent and severe flooding, particularly during the monsoon season. However, it is important to understand not just the fact that precipitation patterns could change due to climate change, but also how much they might change, so that cities can design adaptive strategies at the right scale.

Moreover, although weather patterns are becoming more erratic globally, the local climate impacts can be less straightforward. Therefore, extreme precipitation projections can play a crucial role in assessing future risks on the city level.

Change in annual exceedance probability of largest 5-day cumulative precipitation

What is this?

Extreme precipitation events are often characterized by return periods: for example, an event with a 20-year return period has a 5% probability of occurring in any given year. In simple terms, this indicator describes how much more likely an extreme precipitation event of a certain return period is expected to become in the future. For instance, if a 50-year rainfall event today is projected to become a 25-year event by 2100, the change in annual exceedance probability is 2, because the event will be twice as likely to occur in any given year.
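
The arithmetic behind the indicator is simple; as a sketch:

```python
def annual_exceedance_probability(return_period_years: float) -> float:
    """AEP of an event with the given return period (e.g., 50-year -> 0.02)."""
    return 1.0 / return_period_years

# Example from the text: a 50-year event becoming a 25-year event by 2100
change = annual_exceedance_probability(25) / annual_exceedance_probability(50)
print(change)  # -> 2.0: twice as likely in any given year
```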

Why use it?

When it comes to precipitation impacts, extreme events often cause much more harm than the aggregate precipitation amount. In Bangladesh in particular, extreme rainfall during the monsoon season frequently displaces communities, strains infrastructure, and exacerbates existing vulnerabilities in the population. As climate change makes extreme events more likely to occur in certain places, it is critical to update expectations for these “rare” events and to prepare early warning systems and emergency responses adequately.

Footnotes

  1. https://www.sciencedirect.com/science/article/pii/S0048969721036779 ↩︎
  2. https://sustainability.stanford.edu/news/how-vegetation-alters-climate https://ui.adsabs.harvard.edu/abs/2013EGUGA..1513983P ↩︎
  3. Sentinel-2 satellites capture 13 bands. NDVI uses bands number 4 and 8 for red (650–860 nm wavelength) and near-infrared (780–885 nm) radiation, respectively. While the satellites observe the earth's surface at a resolution of 10–60 meters, both bands 4 and 8 have a resolution of 10 meters. See https://custom-scripts.sentinel-hub.com/custom-scripts/sentinel-2/ndvi/. ↩︎
  4. https://www.zurich.com/knowledge/topics/flood-and-water-damage/three-common-types-of-flood ↩︎
  5. In an analogy to probability, the fuzzy gamma operator combines the probability that any of a set of events happens with the probability that all of them happen. (Note, though, that neither the input variables in this model nor the outcome susceptibility are probabilities.) Mathematically, the fuzzy operator is

    $\left(1 - \prod_{i}^{n} (1 - X_i)\right)^\gamma \cdot \left( \prod_{i}^{n} X_i \right)^{1-\gamma}$

    where $X_i$ is the transformed input variable $i$ in the set of $n$ non-slope variables, and $\gamma$ is the parameter defining how much to weight, by analogy, the probability of any event occurring (left side) versus the probability of all events happening (right side). In this model, $\gamma$ is set to 0.9.↩︎
  6. Kramer, Steven Lawrence. 1985. ["Liquefaction of Sands Due to Non-seismic Loading (Landslide, Triaxial, Compliance, Montana)"](https://www.proquest.com/openview/bd9efecaff7df535c0654c9f582ed1ff/1) ↩︎
  7. Johansson, Jörgen. 2000. [Soil Liquefaction](https://depts.washington.edu/liquefy/html/how/how1.html). University of Washington. ↩︎