diff --git a/04-points.qmd b/04-points.qmd
index e8166d1..343adb6 100644
--- a/04-points.qmd
+++ b/04-points.qmd
@@ -51,7 +51,8 @@ colnames(db)
The rest of this session will focus on two main elements of the table: the spatial dimension (as stored in the point coordinates), and the nightly price values, expressed in USD and contained in the `price` column. To get a sense of what they look like first, let us plot both. We can get a quick look at the non-spatial distribution of house values with the following commands:
-```{r fig.margin=TRUE, fig.cap="Raw AirBnb prices in San Diego"}
+```{r}
+#| warning: false
# Create the histogram
qplot( data = db, x = price)
```
@@ -59,6 +60,7 @@ qplot( data = db, x = price)
This basically shows there are a lot of values concentrated around the lower end of the distribution but a few very large ones. A usual transformation to *shrink* these differences is to take logarithms. The original table already contains an additional column with the logarithm of each price (`log_price`).
```{r}
+#| warning: false
# Create the histogram
qplot( data = db, x = log_price )
```
diff --git a/05-flows.qmd b/05-flows.qmd
index 1158dff..274f2c1 100644
--- a/05-flows.qmd
+++ b/05-flows.qmd
@@ -35,7 +35,7 @@ In this chapter we will show a slightly different way of managing spatial data i
## Data
-In this note, we will use data from the city of San Francisco representing bike trips on their public bike share system. The original source is the SF Open Data portal ([link](http://www.bayareabikeshare.com/open-data)) and the dataset comprises both the location of each station in the Bay Area as well as information on trips (station of origin to station of destination) undertaken in the system from September 2014 to August 2015 and the following year. Since this note is about modeling and not data preparation, a cleanly reshaped version of the data, together with some additional information, has been created and placed in the `sf_bikes` folder. The data file is named `flows.geojson` and, in case you are interested, the (Python) code required to created from the original files in the SF Data Portal is also available on the `flows_prep.ipynb` notebook [\[url\]](https://github.com/darribas/spa_notes/blob/master/sf_bikes/flows_prep.ipynb), also in the same folder.
+In this note, we will use data from the city of San Francisco representing bike trips on their public bike share system. The original source is the [SF Open Data portal](https://datasf.org/opendata/) and the dataset comprises both the location of each station in the Bay Area and information on trips (station of origin to station of destination) undertaken in the system from September 2014 to August 2015 and the following year. Since this note is about modeling and not data preparation, a cleanly reshaped version of the data, together with some additional information, has been created and placed in the `sf_bikes` folder. The data file is named `flows.geojson` and, in case you are interested, the (Python) code required to create it from the original files in the SF Data Portal is also available in the `flows_prep.ipynb` [notebook](https://github.com/darribas/spa_notes/blob/master/sf_bikes/flows_prep.ipynb), also in the same folder.
Let us then directly load the file with all the information necessary:
@@ -277,7 +277,7 @@ generate_draw <- function(m){
}
```
-This function takes a model `m` and the set of covariates `x` used and returns a random realization of predictions from the model. To get a sense of how this works, we can get and plot a realization of the model, compared to the expected one and the actual values:
+This function takes a model `m` and returns a random realisation of predictions from it. To get a sense of how this works, we can generate and plot one realisation of the model and compare it with the expected values and the actual values:
```{r}
new_y <- generate_draw(m1)
@@ -355,11 +355,11 @@ legend(
title(main="Predictive check - Baseline model")
```
-The plot shows there is a significant mismatch between the fitted values, which are much more concentrated around small positive values, and the realizations of our "inferential engine", which depict a much less concentrated distribution of values. This is likely due to the combination of two different reasons: on the one hand, the accuracy of our estimates may be poor, causing them to jump around a wide range of potential values and hence resulting in very diverse predictions (inferential uncertainty); on the other hand, it may be that the amount of variation we are not able to account for in the model[^05-flows-2] is so large that the degree of uncertainty contained in the error term of the model is very large, hence resulting in such a flat predictive distribution.
+The plot shows there is a significant mismatch between the fitted values, which are much more concentrated around small positive values, and the realisations of our "inferential engine", which depict a much less concentrated distribution of values. This is likely due to the combination of two different reasons: on the one hand, the accuracy of our estimates may be poor, causing them to jump around a wide range of potential values and hence resulting in very diverse predictions (inferential uncertainty); on the other hand, it may be that the amount of variation we are not able to account for in the model[^05-flows-2] is so large that the degree of uncertainty contained in the error term of the model is very large, hence resulting in such a flat predictive distribution.
[^05-flows-2]: The $R^2$ of our model is around 2%
-It is important to keep in mind that the issues discussed in the paragraph above relate only to the uncertainty behind our model, not to the point predictions derived from them, which are a mechanistic result of the minimization of the squared residuals and hence are not subject to probability or inference. That allows them in this case to provide a fitted distribution much more accurate apparently (black line above). However, the lesson to take from this model is that, even if the point predictions (fitted values) are artificially accurate[^05-flows-3], our capabilities to infer about the more general underlying process are fairly limited.
+Keep in mind that the issues discussed in the paragraph above relate only to the uncertainty behind our model, not to the point predictions derived from it, which are a mechanistic result of the minimisation of the squared residuals and hence are not subject to probability or inference. That is why, in this case, the fitted distribution appears much more accurate (black line above). However, the lesson to take from the model is that, even if the point predictions (fitted values) are artificially accurate[^05-flows-3], our capacity to infer about the more general underlying process is fairly limited.
[^05-flows-3]: which they are not really, in light of the comparison between the black and red lines.
diff --git a/06-spatial-econometrics.qmd b/06-spatial-econometrics.qmd
index 8945b92..9008b32 100644
--- a/06-spatial-econometrics.qmd
+++ b/06-spatial-econometrics.qmd
@@ -26,7 +26,7 @@ library(spdep)
## Data
-To explore ideas in spatial regression, we will the set of Airbnb properties for San Diego (US), borrowed from the "Geographic Data Science with Python" book (see [here](https://geographicdata.science/book/data/airbnb/regression_cleaning.html) for more info on the dataset source). This covers the point location of properties advertised on the Airbnb website in the San Diego region.
+To explore ideas in spatial regression, we will use a set of Airbnb properties for San Diego (US) from @reyABwolf. The dataset provides point location data of properties advertised on the Airbnb website in the San Diego region.
Let us load the data:
@@ -34,13 +34,13 @@ Let us load the data:
db <- st_read('data/abb_sd/regression_db.geojson')
```
-The table contains the followig variables:
+The table contains the following variables:
```{r}
names(db)
```
-For most of this chapter, we will be exploring determinants and strategies for modelling the price of a property advertised in AirBnb. To get a first taste of what this means, we can create a plot of prices within the area of San Diego:
+We will be exploring determinants of the price of a property advertised on AirBnb, and strategies for modelling it. To get a first taste of what this means, we can create a plot of prices within the area of San Diego:
```{r}
db %>%
@@ -50,11 +50,9 @@ db %>%
theme_void()
```
-## Non-spatial regression, a refresh
+## Non-spatial regression
-Before we discuss how to explicitly include space into the linear regression framework, let us show how basic regression can be carried out in R, and how you can interpret the results. By no means is this a formal and complete introduction to regression so, if that is what you are looking for, the first part of @gelman2006data, in particular chapters 3 and 4, are excellent places to check out.
-
-The core idea of linear regression is to explain the variation in a given (*dependent*) variable as a linear function of a series of other (*explanatory*) variables. For example, in our case, we may want to express/explain the price of a property advertised on AirBnb as a function of some of its characteristics, such as the number of people it accommodates, and how many bathrooms, bedrooms and beds it features. At the individual level, we can express this as:
+Before we discuss how to explicitly include space in the linear regression framework, let us fit a linear regression model and interpret the results. We may want to explain the price of a property advertised on AirBnb as a function of some of its characteristics, such as the number of people it accommodates, and how many bathrooms, bedrooms and beds it features. At the individual level, we can express this as:
$$
\log(P_i) = \alpha + \beta_1 Acc_i + \beta_2 Bath_i + \beta_3 Bedr_i + \beta_4 Beds_i + \epsilon_i
@@ -66,17 +64,11 @@ $$
\log(P) = \alpha + \beta_1 Acc + \beta_2 Bath + \beta_3 Bedr + \beta_4 Beds + \epsilon
$$ where each term can be interpreted in terms of vectors instead of scalars (with the exception of the parameters $(\alpha, \beta_{1, 2, 3, 4})$, which *are* scalars). Note we are using the logarithm of the price, since this allows us to interpret the coefficients as roughly the percentage change induced by a unit increase in the explanatory variable.
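+
+As a quick worked illustration (using a hypothetical coefficient value, not one estimated from these data), the exact percentage change implied by a log-linear coefficient $\beta$ is $100(e^{\beta} - 1)$, which is close to $100\beta$ when $\beta$ is small:
+
+```{r}
+# Hypothetical coefficient of 0.1 on an explanatory variable: the rough
+# reading is "a 10% higher price per additional unit"; the exact implied
+# change is slightly larger
+(exp(0.1) - 1) * 100
+```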
-Remember a regression can be seen as a multivariate extension of bivariate correlations. Indeed, one way to interpret the $\beta_k$ coefficients in the equation above is as the degree of correlation between the explanatory variable $k$ and the dependent variable, *keeping all the other explanatory variables constant*. When you calculate simple bivariate correlations, the coefficient of a variable is picking up the correlation between the variables, but it is also subsuming into it variation associated with other correlated variables --also called confounding factors\[\^06-spatial-econometrics-1\]. Regression allows you to isolate the distinct effect that a single variable has on the dependent one, once we *control* for those other variables.
-
-::: column-margin
-::: {.callout-tip appearance="simple" icon="false"}
-**Task**
+Remember a regression can be seen as a multivariate extension of bivariate correlations. Indeed, one way to interpret the $\beta_k$ coefficients in the equation above is as the degree of correlation between the explanatory variable $k$ and the dependent variable, *keeping all the other explanatory variables constant*. When you calculate simple bivariate correlations, the coefficient of a variable is picking up the correlation between the variables, but it is also subsuming into it variation associated with other correlated variables --also called confounding factors. Regression allows you to isolate the distinct effect that a single variable has on the dependent one, once we *control* for those other variables.
To illustrate, suppose we modelled house prices ($P$) as a function of the level of deprivation in the area ($IMD$) and of whether a house is newly built ($NEW$), and assume that new houses tend to be built more often in areas with low deprivation. If that is the case, then $NEW$ and $IMD$ will be correlated with each other (as well as with the price of a house, as we are hypothesising in this example). If we calculate a simple correlation between $P$ and $IMD$, the coefficient will represent the degree of association between both variables, but it will also include some of the association between $IMD$ and $NEW$. That is, part of the obtained correlation coefficient will be due not to the fact that higher prices tend to be found in areas with low IMD, but to the fact that new houses tend to be more expensive. This is because (in this example) new houses tend to be built in areas with low deprivation and simple bivariate correlation cannot account for that. The short simulation below makes this point concrete.
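+
+To see this mechanism in action, here is a minimal simulated example (entirely hypothetical data, unrelated to the San Diego sample) in which new builds concentrate in low-deprivation areas, deprivation lowers prices and newness raises them. The bivariate slope of price on deprivation absorbs part of the effect of newness, while the multiple regression recovers the value used to generate the data:
+
+```{r}
+set.seed(123)
+n <- 1000
+imd <- rnorm(n)                          # deprivation score
+new_build <- rbinom(n, 1, plogis(-imd))  # new houses likelier where IMD is low
+p <- 10 - 0.5 * imd + 0.3 * new_build + rnorm(n, sd = 0.5)  # "true" price process
+coef(lm(p ~ imd))              # bivariate slope picks up part of the NEW effect
+coef(lm(p ~ imd + new_build))  # controlling for NEW recovers a slope near -0.5
+```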
-:::
-:::
-Practically speaking, running linear regressions in `R` is straightforward. For example, to fit the model specified in the equation above, we only need one line of code:
+We first fit the model specified in the equation above by running:
```{r}
m1 <- lm('log_price ~ accommodates + bathrooms + bedrooms + beds', db)
@@ -94,11 +86,11 @@ A full detailed explanation of the output is beyond the scope of the chapter, bu
[^06-spatial-econometrics-1]: Keep in mind that regression is no magic. We are only discounting the effect of other confounding factors that we include in the model, not of *all* potentially confounding factors.
-## Spatial regression: a (very) first dip
+## Spatial regression
Spatial regression is about *explicitly* introducing space or geographical context into the statistical framework of a regression. Conceptually, we want to introduce space into our model whenever we think it plays an important role in the process we are interested in, or when space can act as a reasonable proxy for other factors we should include but cannot measure. As an example of the former, we can imagine how houses at the seafront are probably more expensive than those in the second row, given their better views. To illustrate the latter, we can think of how the character of a neighborhood is important in determining the price of a house; however, it is very hard to identify and quantify "character" per se, although it might be easier to get at its spatial variation, hence a case of space as a proxy.
-Spatial regression is a large field of development in the econometrics and statistics literature. In this brief introduction, we will consider two related but very different processes that give rise to spatial effects: spatial heterogeneity and spatial dependence. For more rigorous treatments of the topics introduced here, the reader is referred to @anselin2003spatial, @anselin2014modern, and @gibbons2014spatial.
+Spatial regression is a large field of development in the econometrics and statistics literature. In this brief introduction, we will consider two related but very different processes that give rise to spatial effects: *spatial heterogeneity* and *spatial dependence*. For more rigorous treatments of the topics introduced here, the reader is referred to @anselin2003spatial and @anselin2014modern.
## Spatial heterogeneity
@@ -160,9 +152,9 @@ nei.fes %>%
theme_void()
```
-We can see how neighborhoods in the left (west) tend to have higher prices. What we can't see, but it is represented there if you are familiar with the geography of San Diego, is that the city is bounded by the Pacific ocean on the left, suggesting neighbourhoods by the beach tend to be more expensive.
+We can see how neighbourhoods on the left (west) tend to have higher prices. What we cannot see, but is implied there if you are familiar with the geography of San Diego, is that the city is bounded by the Pacific Ocean on the west, suggesting neighbourhoods by the beach tend to be more expensive.
-Remember that the interpretation of a $\beta_k$ coefficient is the effect of variable $k$, *given all the other explanatory variables included remain constant*. By including a single variable for each area, we are effectively forcing the model to compare as equal only house prices that share the same value for each variable; in other words, only houses located within the same area. Introducing FE affords you a higher degree of isolation of the effects of the variables you introduce in your model because you can control for unobserved effects that align spatially with the distribution of the FE you introduce (by neighbourhood, in our case).
+Remember that the interpretation of a $\beta_k$ coefficient is the effect of variable $k$, *given all the other explanatory variables included remain constant*. By including a single variable for each area, we are effectively forcing the model to compare as equal only house prices that share the same value for each variable; that is, only houses located within the same area. Introducing FE affords you a higher degree of isolation of the effects of the variables you introduce in your model because you can control for unobserved effects that align spatially with the distribution of the FE you introduce (by neighbourhood, in our case).
**Spatial regimes**
@@ -187,7 +179,7 @@ m3 <- lm(
summary(m3)
```
-This allows us to get a separate constant term and estimate of the impact of each variable *for every neighborhood*
+This allows us to get a separate constant term and estimate of the impact of each variable *for every neighborhood*. Note that, to obtain the constant for a specific neighbourhood, you need to add the overall regression constant and the estimate for the interaction between the constant (one) and that specific neighbourhood, as sketched below.
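+
+A minimal sketch of that calculation is below. The coefficient labels are hypothetical (they depend on the exact formula used for `m3`), so inspect `names(coef(m3))` and substitute the relevant names before running it:
+
+```{r}
+#| eval: false
+# List the coefficient names produced by the regimes specification
+names(coef(m3))
+# Hypothetical example: the constant for a neighbourhood labelled "North Hills"
+# is the overall intercept plus that neighbourhood's own term
+coef(m3)["(Intercept)"] + coef(m3)["neighborhoodNorth Hills"]
+```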
## Spatial dependence
@@ -195,9 +187,9 @@ As we have just discussed, SH is about effects of phenomena that are *explicitly
**Spatial Weights**
-There are several ways to introduce spatial dependence in an econometric framework, with varying degrees of econometric sophistication [see @anselin2003spatial for a good overview]. Common to all of them however is the way space is formally encapsulated: through *spatial weights matrices (*$W$)[^06-spatial-econometrics-2] These are $NxN$ matrices with zero diagonals and every $w_{ij}$ cell with a value that represents the degree of spatial connectivity/interaction between observations $i$ and $j$. If they are not connected at all, $w_{ij}=0$, otherwise $w_{ij}>0$ and we call $i$ and $j$ neighbors. The exact value in the latter case depends on the criterium we use to define neighborhood relations. These matrices also tend to be row-standardized so the sum of each row equals to one.
+There are several ways to introduce spatial dependence in an econometric framework, with varying degrees of econometric sophistication [see @anselin2003spatial for a good overview]. Common to all of them, however, is the way space is formally encapsulated: through *spatial weights matrices* ($W$)[^06-spatial-econometrics-2]. These are $N \times N$ matrices with zero diagonals, where every $w_{ij}$ cell takes a value that represents the degree of spatial connectivity/interaction between observations $i$ and $j$. If they are not connected at all, $w_{ij}=0$; otherwise $w_{ij}>0$ and we call $i$ and $j$ neighbors. The exact value in the latter case depends on the criterion we use to define neighborhood relations. These matrices also tend to be row-standardised so that the sum of each row equals one; the toy example below illustrates this structure.
-[^06-spatial-econometrics-2]: If you need to refresh your knowledge on spatial weight matrices, check [Block E](https://darribas.org/gds_course/content/bE/concepts_E.html) of @darribas_gds_course; [Chapter 4](https://geographicdata.science/book/notebooks/04_spatial_weights.html) of @reyABwolf; or the [Spatial Weights](https://fcorowe.github.io/intro-gds/03-spatial_weights.html) Section of @rowe2022a.
+[^06-spatial-econometrics-2]: If you need to refresh your knowledge on spatial weight matrices, [Block E](https://darribas.org/gds_course/content/bE/concepts_E.html) of @darribas_gds_course and [Chapter 4](https://geographicdata.science/book/notebooks/04_spatial_weights.html) of @reyABwolf provide a good explanation of the theory around spatial weights, and the [Spatial Weights](https://fcorowe.github.io/intro-gds/03-spatial_weights.html) Section of @rowe2022a illustrates the use of R to compute different types of spatial weight matrices.
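+
+As a toy illustration of that structure (a hypothetical four-observation example, unrelated to the San Diego data), the sketch below builds a small binary connectivity matrix by hand and row-standardises it so that each row sums to one:
+
+```{r}
+# Hypothetical 4x4 binary connectivity matrix (zero diagonal): w_ij = 1 if i
+# and j are neighbours, 0 otherwise
+W <- matrix(c(0, 1, 1, 0,
+              1, 0, 1, 0,
+              1, 1, 0, 1,
+              0, 0, 1, 0), nrow = 4, byrow = TRUE)
+# Row-standardise: divide each row by its sum
+W_std <- W / rowSums(W)
+rowSums(W_std)  # every row now adds up to one
+```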
A related concept to spatial weight matrices is that of *spatial lag*. This is an operator that multiplies a given variable $y$ by a spatial weight matrix:
@@ -235,7 +227,7 @@ hknn
**Exogenous spatial effects**
-Let us come back to the house price example we have been working with. So far, we have hypothesized that the price of an AirBnb property in San Diego can be explained using information about its own characteristics, and the neighbourhood it belongs to. However, we can hypothesise that the price of a house is also affected by the characteristics of the houses surrounding it. Considering it as a proxy for larger and more luxurious houses, we will use the number of bathrooms of neighboring houses as an additional explanatory variable. This represents the most straightforward way to introduce spatial dependence in a regression, by considering not only a given explanatory variable, but also its spatial lag.
+Let us come back to the house price example we have been working with. So far, we have hypothesised that the price of an AirBnb property in San Diego can be explained using information about its own characteristics, and the neighbourhood it belongs to. However, we can hypothesise that the price of a house is also affected by the characteristics of the houses surrounding it. Taking the number of bathrooms of neighbouring houses as a proxy for larger and more luxurious surrounding properties, we will use it as an additional explanatory variable. This represents the most straightforward way to introduce spatial dependence in a regression: considering not only a given explanatory variable, but also its spatial lag.
In our example case, in addition to including the number of bathrooms of the property, we will include its spatial lag. In other words, we will be saying that it is not only the number of bathrooms in a house but also that of the surrounding properties that helps explain the final price at which a house is advertised for. Mathematically, this implies estimating the following model:
@@ -298,8 +290,7 @@ Conceptually, predicting in linear regression models involves using the estimate
$$
\log(\bar{P_i}) = \bar{\alpha} + \bar{\beta_1} Acc_i^* + \bar{\beta_2} Bath_i^* + \bar{\beta_3} Bedr_i^* + \bar{\beta_4} Beds_i^*
-$$
-where $\log(\bar{P_i})$ is our predicted value, and we include the bar sign to note that it is our estimate obtained from fitting the model. We use the $^*$ sign to note that those can be new values for the explanatory variables, not necessarily those used to fit the model.
+$$ where $\log(\bar{P_i})$ is our predicted value, and we include the bar sign to note that it is our estimate obtained from fitting the model. We use the $^*$ sign to note that those can be new values for the explanatory variables, not necessarily those used to fit the model.
Technically speaking, prediction in linear models is relatively streamlined in R. Suppose we are given data for a new house which is to be put on the AirBnb platform. We know it accommodates four people, and has two bedrooms, three beds, and one bathroom. We also know that the surrounding properties have, on average, 1.5 bathrooms. Let us record the data first:
diff --git a/_quarto.yml b/_quarto.yml
index fb8dde4..dc01705 100644
--- a/_quarto.yml
+++ b/_quarto.yml
@@ -13,7 +13,7 @@ book:
style: docked
background: light
date: today
- downloads: [pdf]
+ #downloads: [pdf]
chapters:
- index.qmd
- 01-overview.qmd
@@ -28,8 +28,8 @@ book:
- 10-st_analysis.qmd
- 11-datasets.qmd
- references.qmd
-
bibliography: references.bib
+editor: visual
format:
html:
@@ -55,8 +55,6 @@ format:
reference-location: margin
mermaid:
theme: neutral
- pdf:
- documentclass: scrreprt
-
-editor: visual
+# pdf:
+# documentclass: scrreprt
diff --git a/docs/01-overview.html b/docs/01-overview.html
index a0241e5..07e443c 100644
--- a/docs/01-overview.html
+++ b/docs/01-overview.html
@@ -119,7 +119,6 @@
Spatial Modelling for Data Scientists
-
@@ -400,8 +399,8 @@
1.5 Assessment
The final module mark is composed of the two computational essays. Together they are designed to cover the materials introduced in the entirety of content covered during the semester. A computational essay is an essay whose narrative is supported by code and computational results that are included in the essay itself. Each teaching week, you will be required to address a set of questions relating to the module content covered in that week, and to use the material that you will produce for this purpose to build your computational essay.
-
Assignment 1 (50%) refer to the set of questions at the end of sec-chp4, sec-chp5 and sec-chp6. You are required to use your responses to build your computational essay. Each chapter provides more specific guidance of the tasks and discussion that you are required to consider in your assignment.
-
Assignment 2 (50%) refer to the set of questions at the end of sec-chp7, sec-chp8, sec-chp9 and sec-chp10. You are required to use your responses to build your computational essay. Each chapter provides more specific guidance of the tasks and discussion that you are required to consider in your assignment.
+
Assignment 1 (50%) refer to the set of questions at the end of Chapter 4, Chapter 5 and Chapter 6. You are required to use your responses to build your computational essay. Each chapter provides more specific guidance of the tasks and discussion that you are required to consider in your assignment.
+
Assignment 2 (50%) refer to the set of questions at the end of Chapter 7, Chapter 8, Chapter 9 and Chapter 10. You are required to use your responses to build your computational essay. Each chapter provides more specific guidance of the tasks and discussion that you are required to consider in your assignment.
1.5.1 Format Requirements
Both assignments will have the same requirements:
diff --git a/docs/02-spatial_data.html b/docs/02-spatial_data.html
index abdd9be..b852987 100644
--- a/docs/02-spatial_data.html
+++ b/docs/02-spatial_data.html
@@ -139,7 +139,6 @@
Spatial Modelling for Data Scientists
-
@@ -272,13 +271,13 @@
Different classifications of spatial data types exist. Knowing the structure of the data at hand is important as specific analytical methods would be more appropriate for particular data types. We will use a particular classification involving four data types: lattice/areal data, point data, flow data and trajectory data (Fig. 1). This is not a exhaustive list but it is helpful to motivate the analytical and modelling methods that we cover in this book.
+Fig. 1. Data Types. Area / Lattice data source: Önnerfors et al. (2019). Point data source: Tao et al. (2018). Flow data source: Rowe and Patias (2020). Trajectory data source: Kwan and Lee (2004).
Lattice/Areal Data. These data correspond to records of attribute values (such as population counts) for a fixed geographical area. They may comprise regular shapes (such as grids or pixels) or irregular shapes (such as states, counties or travel-to-work areas). Raster data are a common source of regular lattice/areal area, while censuses are probably the most common form of irregular lattice/areal area. Point data within an area can be aggregated to produce lattice/areal data.
Point Data. These data refer to records of the geographic location of an discrete event, or the number of occurrences of geographical process at a given location. As displayed in Fig. 1, examples include the geographic location of bus stops in a city, or the number of boarding passengers at each bus stop.
Flow Data. These data refer to records of measurements for a pair of geographic point locations. or pair of areas. These data capture the linkage or spatial interaction between two locations. Migration flows between a place of origin and a place of destination is an example of this type of data.
-
Trajectory Data. These data record geographic locations of moving objects at various points in time. A trajectory is composed of a single string of data recording the geographic location of an object at various points in time and each record in the string contains a time stamp. These data are complex and can be classified into explicit trajectory data and implicit trajectory data. The former refer to well-structured data and record positions of objects continuously and intensively at uniform time intervals, such as GPS data. The latter is less structured and record data in relatively time point intervals, including sensor-based, network-based and signal-based data (Kong et al. 2018).
-
In this course, we cover analytical and modelling approaches for point, lattice/areal and flow data. While we do not explicitly analyse trajectory data, various of the analytical approaches described in this book can be extended to incorporate time, and can be applied to model these types of data. In sec-chp10, we describe approaches to analyse and model spatio-temporal data. These same methods can be applied to trajectory data.
+
Trajectory Data. These data record geographic locations of moving objects at various points in time. A trajectory is composed of a single string of data recording the geographic location of an object at various points in time and each record in the string contains a time stamp. These data are complex and can be classified into explicit trajectory data and implicit trajectory data. The former refer to well-structured data and record positions of objects continuously and intensively at uniform time intervals, such as GPS data. The latter is less structured and record data in relatively time point intervals, including sensor-based, network-based and signal-based data (Kong et al. 2018).
+
In this course, we cover analytical and modelling approaches for point, lattice/areal and flow data. While we do not explicitly analyse trajectory data, various of the analytical approaches described in this book can be extended to incorporate time, and can be applied to model these types of data. In Chapter 10, we describe approaches to analyse and model spatio-temporal data. These same methods can be applied to trajectory data.
2.2 Hierarchical Structure of Data
The hierarchical organisation is a key feature of spatial data. Smaller geographical units are organised within larger geographical units. You can find the hierarchical representation of UK Statistical Geographies on the Office for National Statistics website. In the bottom part of the output below, we can observe a spatial data frame for Liverpool displaying the hierarchical structure of census data (from the smallest to the largest): Output Areas (OAs), Lower Super Output Areas (LSOAs), Middle Super Output Areas (MSOAs) and Local Authority Districts (LADs). This hierarchical structure entails that units in smaller geographies are nested within units in larger geographies, and that smaller units can be aggregated to produce large units.
@@ -304,34 +303,34 @@
Major challenges exist when working with spatial data. Below we explore some of the key longstanding problems data scientists often face when working with geographical data.
2.3.1 Modifible Area Unit Problem (MAUP)
-
The Modifible Area Unit Problem (MAUP) represents a challenge that has troubled geographers for decades (Openshaw 1981). Two aspects of the MAUP are normally recognised in empirical analysis relating to scale and zonation. Fig. 2 illustrates these issues
+
The Modifible Area Unit Problem (MAUP) represents a challenge that has troubled geographers for decades (Openshaw 1981). Two aspects of the MAUP are normally recognised in empirical analysis relating to scale and zonation. Fig. 2 illustrates these issues
Scale refers to the idea that a geographical area can be divided into geographies with differing numbers of spatial units.
Zonation refers to the idea that a geographical area can be divided into the same number of units in a variety of ways.
The MAUP is a critical issue as it can impact our analysis and thus any conclusions we can infer from our results (e.g. Fotheringham and Wong 1991). There is no agreed systematic approach on how to handle the effects of the MAUP. Some have suggested to perform analyses based on different existing geographical scales, and assess the consistency of the results and identify potential sources of change. The issue with such approach is that results from analysis at different scales are likely to differ because distinct dimensions of a geographic process may be captured at different scales. For example, in migration studies, smaller geographies may be more suitable to capture residential mobility over short distances, while large geographies may be more suitable to capture long-distance migration. And it is well documented that these types of moves are driven by different factors. While residential mobility tends to be driven by housing related reasons, long-distance migration is more closely related to employment-related motives (Niedomysl 2011).
-
An alternative approach is to use the smallest geographical system available and create random aggregations at various geographical scales, to directly quantify the extent of scale and zonation. This approach has shown promising results in applications to study internal migration flows (Stillwell, Daras, and Bell 2018). Another approach involves the production of “meaningful” or functional geographies that can more appropriately capture the process of interest. There is an active area of work defining functional labour markets (Casado-Díaz, Martínez-Bernabéu, and Rowe 2017), urban areas (Arribas-Bel, Garcia-López, and Viladecans-Marsal 2021) and various forms of geodemographic classifications (Singleton and Spielman 2013; Patias, Rowe, and Cavazzi 2019) . However there is the recognition that none of the existing approaches resolve the effects of the MAUP and recently it has been suggested that the most plausible ‘solution’ would be to ignore the MAUP (Wolf et al. 2020).
+
The MAUP is a critical issue as it can impact our analysis and thus any conclusions we can infer from our results (e.g. Fotheringham and Wong 1991). There is no agreed systematic approach on how to handle the effects of the MAUP. Some have suggested to perform analyses based on different existing geographical scales, and assess the consistency of the results and identify potential sources of change. The issue with such approach is that results from analysis at different scales are likely to differ because distinct dimensions of a geographic process may be captured at different scales. For example, in migration studies, smaller geographies may be more suitable to capture residential mobility over short distances, while large geographies may be more suitable to capture long-distance migration. And it is well documented that these types of moves are driven by different factors. While residential mobility tends to be driven by housing related reasons, long-distance migration is more closely related to employment-related motives (Niedomysl 2011).
+
An alternative approach is to use the smallest geographical system available and create random aggregations at various geographical scales, to directly quantify the extent of scale and zonation. This approach has shown promising results in applications to study internal migration flows (Stillwell, Daras, and Bell 2018). Another approach involves the production of “meaningful” or functional geographies that can more appropriately capture the process of interest. There is an active area of work defining functional labour markets (Casado-Díaz, Martínez-Bernabéu, and Rowe 2017), urban areas (Arribas-Bel, Garcia-López, and Viladecans-Marsal 2021) and various forms of geodemographic classifications (Singleton and Spielman 2013; Patias, Rowe, and Cavazzi 2019) . However there is the recognition that none of the existing approaches resolve the effects of the MAUP and recently it has been suggested that the most plausible ‘solution’ would be to ignore the MAUP (Wolf et al. 2020).
2.3.2 Ecological Fallacy
-
Ecological fallacy is an error in the interpretation of statistical data based on aggregate information. Specifically it refers to inferences made about the nature of specific individuals based solely on statistics aggregated for a given group. It is about thinking that relationships observed for groups necessarily hold for individuals. A key example is Robinson (1950) who illustrates this problem exploring the difference between ecological correlations and individual correlations. He looked at the relationship between country of birth and literacy. Robinson (1950) used the percent of foreign-born population and percent of literate population for the 48 states in the United States in 1930. The ecological correlation based on these data was 0.53. This suggests a positive association between foreign birth and literacy, and could be interpreted as foreign born individuals being more likely to be literate than native-born individuals. Yet, the correlation based on individual data was negative -0.11 which indicates the opposite. The main point emerging from this example is to carefully interpret analysis based on spatial data and avoid making inferences about individuals from these data.
+
Ecological fallacy is an error in the interpretation of statistical data based on aggregate information. Specifically it refers to inferences made about the nature of specific individuals based solely on statistics aggregated for a given group. It is about thinking that relationships observed for groups necessarily hold for individuals. A key example is Robinson (1950) who illustrates this problem exploring the difference between ecological correlations and individual correlations. He looked at the relationship between country of birth and literacy. Robinson (1950) used the percent of foreign-born population and percent of literate population for the 48 states in the United States in 1930. The ecological correlation based on these data was 0.53. This suggests a positive association between foreign birth and literacy, and could be interpreted as foreign born individuals being more likely to be literate than native-born individuals. Yet, the correlation based on individual data was negative -0.11 which indicates the opposite. The main point emerging from this example is to carefully interpret analysis based on spatial data and avoid making inferences about individuals from these data.
2.3.3 Spatial Dependence
-
Spatial dependence refers to the spatial relationship of a variable’s values for a pair of locations at a certain distance apart, so that these values are more similar (or less similar) than expected for randomly associated pairs of observations (Anselin 1988). For example, we could think of observed patterns of ethnic segregation in an area are a result of spillover effects of pre-existing patterns of ethnic segregation in neighbouring areas. sec-chp5 will illustrate approach to explicitly incorporate spatial dependence in regression analysis.
+
Spatial dependence refers to the spatial relationship of a variable’s values for a pair of locations at a certain distance apart, so that these values are more similar (or less similar) than expected for randomly associated pairs of observations (Anselin 1988). For example, we could think of observed patterns of ethnic segregation in an area are a result of spillover effects of pre-existing patterns of ethnic segregation in neighbouring areas. Chapter 5 will illustrate approach to explicitly incorporate spatial dependence in regression analysis.
2.3.4 Spatial Heterogeneity
-
Spatial heterogeneity refers to the uneven distribution of a variable’s values across space. Concentration of deprivation or unemployment across an area are good examples of spatial heterogeneity. We illustrate various ways to visualise, explore and measure the spatial distribution of data in multiple chapters. We also discuss on potential modelling approaches to capture spatial heterogeneity in sec-chp5, sec-chp7 and sec-chp10.
+
Spatial heterogeneity refers to the uneven distribution of a variable’s values across space. Concentration of deprivation or unemployment across an area are good examples of spatial heterogeneity. We illustrate various ways to visualise, explore and measure the spatial distribution of data in multiple chapters. We also discuss on potential modelling approaches to capture spatial heterogeneity in Chapter 5, Chapter 7 and Chapter 10.
2.3.5 Spatial nonstationarity
-
Spatial nonstationarity refers to variations in the relationship between an outcome variable and a set of predictor variables across space. In a modelling context, it relates to a situation in which a simple “global” model is inappropriate to explain the relationships between a set of variables. The geographical nature of the model must be modified to reflect local structural relationships within the data. For example, ethinic segregation has been positively associated with employment outcomes in some countries pointing to networks in pre-existing communities facilitating access to the local labour market. Inversely ethinic segregation has been negatively associated with employment outcomes pointing to lack of integration into the broader local community. We illustrate various modelling approaches to capture spatial nonstationarity in sec-chp8 and sec-chp9.
+
Spatial nonstationarity refers to variations in the relationship between an outcome variable and a set of predictor variables across space. In a modelling context, it relates to a situation in which a simple “global” model is inappropriate to explain the relationships between a set of variables. The geographical nature of the model must be modified to reflect local structural relationships within the data. For example, ethinic segregation has been positively associated with employment outcomes in some countries pointing to networks in pre-existing communities facilitating access to the local labour market. Inversely ethinic segregation has been negatively associated with employment outcomes pointing to lack of integration into the broader local community. We illustrate various modelling approaches to capture spatial nonstationarity in Chapter 8 and Chapter 9.
-
+
Anselin, Luc. 1988. Spatial Econometrics: Methods and Models. Vol. 4. Springer Science & Business Media.
diff --git a/docs/03-data-wrangling.html b/docs/03-data-wrangling.html
index 767dcd5..359503c 100644
--- a/docs/03-data-wrangling.html
+++ b/docs/03-data-wrangling.html
@@ -143,7 +143,6 @@
Spatial Modelling for Data Scientists
-
@@ -305,10 +304,10 @@
If you are already familiar with R, R computational notebooks and data types, you may want to jump to Section Read Data and start from there. This section describes how to read and manipulate data using sf and tidyverse functions, including mutate(), %>% (known as pipe operator), select(), filter() and specific packages and functions how to manipulate spatial data.
The chapter is based on:
-
Grolemund and Wickham (2019), this book illustrates key libraries, including tidyverse, and functions for data manipulation in R
-
Xie, Allaire, and Grolemund (2019), excellent introduction to R markdown!
-
Williamson (2018), some examples from the first lecture of ENVS450 are used to explain the various types of random variables.
-
Lovelace, Nowosad, and Muenchow (2019), a really good book on handling spatial data and historical background of the evolution of R packages for spatial data analysis.
+
Grolemund and Wickham (2019), this book illustrates key libraries, including tidyverse, and functions for data manipulation in R
+
Xie, Allaire, and Grolemund (2019), excellent introduction to R markdown!
+
Williamson (2018), some examples from the first lecture of ENVS450 are used to explain the various types of random variables.
+
Lovelace, Nowosad, and Muenchow (2019), a really good book on handling spatial data and historical background of the evolution of R packages for spatial data analysis.
Save the script: File > Save As, select your required destination folder, and enter any filename that you like, provided that it ends with the file extension .R
-
An R Notebook or a Quarto Document are a Markdown options with descriptive text and code chunks that can be executed independently and interactively, with output visible immediately beneath a code chunk - see Xie, Allaire, and Grolemund (2019). A Quarto Document is an improved version of the original R Notebook. Quarto Document requires a package called Quarto. Quarto does not have a dependency or requirement for R. Quarto is multilingual, beginning with R, Python, Javascript, and Julia. The concept is that Quarto will work even for languages that do not yet exist. This book was original written in R Notebook but later transitioned into Quarto Documents.
+
An R Notebook or a Quarto Document are a Markdown options with descriptive text and code chunks that can be executed independently and interactively, with output visible immediately beneath a code chunk - see Xie, Allaire, and Grolemund (2019). A Quarto Document is an improved version of the original R Notebook. Quarto Document requires a package called Quarto. Quarto does not have a dependency or requirement for R. Quarto is multilingual, beginning with R, Python, Javascript, and Julia. The concept is that Quarto will work even for languages that do not yet exist. This book was original written in R Notebook but later transitioned into Quarto Documents.
To create an R Notebook, you need to:
Open a new script file: File > New File > R Notebook
@@ -857,7 +856,7 @@
Note we used a pipe operator%>%, which helps make the code more efficient and readable - more details, see Grolemund and Wickham (2019). When using the pipe operator, recall to first indicate the data frame before %>%.
+
Note we used a pipe operator%>%, which helps make the code more efficient and readable - more details, see Grolemund and Wickham (2019). When using the pipe operator, recall to first indicate the data frame before %>%.
Note also the use a variable name before the = sign in brackets to indicate the name of the new variable after mutate.
3.9.2 Selecting Variables
@@ -932,7 +931,7 @@
3.10 Using Spatial Data Frames
-
A core area of the module is learning to work with spatial data in R. R has various purposedly designed packages for manipulation of spatial data and spatial analysis techniques. Various packages exist in CRAN, including sf(Pebesma 2018, 2022a), stars(Pebesma 2022b), terra, s2(Dunnington, Pebesma, and Rubak 2023), lwgeom(Pebesma 2023), gstat(Pebesma 2004; Pebesma and Graeler 2022), spdep(Bivand 2022), spatialreg(Bivand and Piras 2022), spatstat(Baddeley, Rubak, and Turner 2015; Baddeley, Turner, and Rubak 2022), tmap(Tennekes 2018, 2022), mapview(Appelhans et al. 2022) and more. A key package is this ecosystem is sf(Pebesma and Bivand 2023). R package sf provides a table format for simple features, where feature geometries are stored in a list-column. It appeared in 2016 and was developed to move spatial data analysis in R closer to standards-based approaches seen in the industry and open source projects, to build upon more modern versions of open source geospatial software stack and allow for integration of R spatial software with the tidyverse(Wickham et al. 2019), particularly ggplot2, dplyr, and tidyr. Hence, this book relies heavely on sf for the manipulation and analysis of the data.
+
A core area of the module is learning to work with spatial data in R. R has various purposedly designed packages for manipulation of spatial data and spatial analysis techniques. Various packages exist in CRAN, including sf(Pebesma 2018, 2022a), stars(Pebesma 2022b), terra, s2(Dunnington, Pebesma, and Rubak 2023), lwgeom(Pebesma 2023), gstat(Pebesma 2004; Pebesma and Graeler 2022), spdep(Bivand 2022), spatialreg(Bivand and Piras 2022), spatstat(Baddeley, Rubak, and Turner 2015; Baddeley, Turner, and Rubak 2022), tmap(Tennekes 2018, 2022), mapview(Appelhans et al. 2022) and more. A key package is this ecosystem is sf(Pebesma and Bivand 2023). R package sf provides a table format for simple features, where feature geometries are stored in a list-column. It appeared in 2016 and was developed to move spatial data analysis in R closer to standards-based approaches seen in the industry and open source projects, to build upon more modern versions of open source geospatial software stack and allow for integration of R spatial software with the tidyverse(Wickham et al. 2019), particularly ggplot2, dplyr, and tidyr. Hence, this book relies heavely on sf for the manipulation and analysis of the data.
@@ -945,7 +944,7 @@
-
Lovelace, Nowosad, and Muenchow (2024) provide a helpful overview and evolution of R spatial package ecosystem.
+
Lovelace, Nowosad, and Muenchow (2024) provide a helpful overview and evolution of R spatial package ecosystem.
To read our spatial data, we use the st_read function. We read a shapefile containing data at Output Area (OA) level for Liverpool. These data illustrates the hierarchical structure of spatial data.
Similar to ggplot2, tmap is based on the idea of a ‘grammar of graphics’ which involves a separation between the input data and aesthetics (i.e. the way data are visualised). Each data set can be mapped in various different ways, including location as defined by its geometry, colour and other features. The basic building block is tm_shape() (which defines input data), followed by one or more layer elements such as tm_fill() and tm_dots().
@@ -1204,8 +1203,8 @@
Warning: In view mode, scale bar breaks are ignored.
-
-
+
+
@@ -1223,7 +1222,7 @@
3.10.3 Comparing geographies
-
If you recall, one of the key issues of working with spatial data is the modifiable area unit problem (MAUP) - see (spatial_data?). To get a sense of the effects of MAUP, we analyse differences in the spatial patterns of the ethnic population in Liverpool between Middle Layer Super Output Areas (MSOAs) and OAs. So we map these geographies together.
+
If you recall, one of the key issues of working with spatial data is the modifiable area unit problem (MAUP) - see (spatial_data?). To get a sense of the effects of MAUP, we analyse differences in the spatial patterns of the ethnic population in Liverpool between Middle Layer Super Output Areas (MSOAs) and OAs. So we map these geographies together.
@@ -1293,7 +1292,7 @@
-
+
Appelhans, Tim, Florian Detsch, Christoph Reudenbach, and Stefan Woellauer. 2022. Mapview: Interactive Viewing of Spatial Data in r. https://github.com/r-spatial/mapview.
diff --git a/docs/04-points.html b/docs/04-points.html
index 0358faa..9691ea9 100644
--- a/docs/04-points.html
+++ b/docs/04-points.html
@@ -165,7 +165,6 @@
Spatial Modelling for Data Scientists
-
@@ -302,14 +301,14 @@
This chapter is based on the following references, which are great follow-up’s on the topic:
-Lovelace, Nowosad, and Muenchow (2019) offer a great introduction.
-
Chapter 6 of Brunsdon and Comber (2015), in particular subsections 6.3 and 6.7.
+Lovelace, Nowosad, and Muenchow (2019) offer a great introduction.
+
Chapter 6 of Brunsdon and Comber (2015), in particular subsections 6.3 and 6.7.
-Bivand, Pebesma, and Gómez-Rubio (2013) provides an in-depth treatment of spatial data in R.
+Bivand, Pebesma, and Gómez-Rubio (2013) provides an in-depth treatment of spatial data in R.
4.1 Dependencies
-
We will rely on the following libraries in this section, all of them included in sec-dependencies:
+
We will rely on the following libraries in this section, all of them included in Section 1.4.1:
# data manipulation, transformation and visualisationlibrary(tidyverse)
@@ -356,29 +355,20 @@
The rest of this session will focus on two main elements of the table: the spatial dimension (as stored in the point coordinates), and the nightly price values, expressed in USD and contained in the price column. To get a sense of what they look like first, let us plot both. We can get a quick look at the non-spatial distribution of house values with the following commands:
Warning: `qplot()` was deprecated in ggplot2 3.4.0.
-
-
-
`stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
-
-
+
+
This basically shows there is a lot of values concentrated around the lower end of the distribution but a few very large ones. A usual transformation to shrink these differences is to take logarithms. The original table already contains an additional column with the logarithm of each price (log_price).
-
# Create the histogram
+
# Create the histogramqplot( data =db, x =log_price)
-
-
`stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
-
To obtain the spatial distribution of these houses, we need to focus on the geometry column. The easiest, quickest (and also “dirtiest”) way to get a sense of what the data look like over space is using plot: