Empirical Project 8 Working in R

Getting started in R

For this project you will need the following packages: tidyverse, readxl, and knitr.

If you need to install any of these packages, run the following code:

install.packages(c("tidyverse", "readxl", "knitr"))

You can import the libraries now, or when they are used in the R walk-throughs below.

library(tidyverse)
library(readxl)
library(knitr)

Part 8.1 Cleaning and summarizing the data

So far we have been working with data that was formatted correctly. However, sometimes the datasets will be messier and we will need to clean the data before starting any analysis. In the data we will use, the European Values Study (EVS) data has been converted to an Excel spreadsheet from another program, so there are some entries that were not converted correctly and some variables that need to be recoded (for example, replacing words with numbers, or replacing one number with another).

Download the EVS data and documentation:

Although we will be performing our analysis in R, the data has been provided as an Excel spreadsheet, so look at the spreadsheet in Excel first to understand its structure.

  1. The Excel spreadsheet contains multiple worksheets, one for each wave, that need to be joined together to create a single ‘tibble’ (like a spreadsheet for R). The variable names currently do not tell us what the variable is, so we need to provide labels and short descriptions to avoid confusion.

R walk-through 8.1 Importing the data into R

As we are dealing with an Excel file, we use the read_excel function from the readxl package. The file is called Project 8 datafile.xlsx and contains four worksheets, named ‘Wave 1’ to ‘Wave 4’, holding the data for each wave. We will load the worksheets one by one and append each to the previously loaded data using the rbind function.

library(tidyverse)
library(readxl)
library(knitr)
wellbeing_data <- read_excel("Project 8 datafile.xlsx",sheet="Wave 1")
wellbeing_data <- rbind(wellbeing_data,read_excel("Project 8 datafile.xlsx",sheet="Wave 2"))
wellbeing_data <- rbind(wellbeing_data,read_excel("Project 8 datafile.xlsx",sheet="Wave 3"))
wellbeing_data <- rbind(wellbeing_data,read_excel("Project 8 datafile.xlsx",sheet="Wave 4"))

As the variable names provided in the spreadsheet are not very specific, we can assign a label and a short description to each variable for reference later, using the attr function (note that the order in each list is the same as the order of the variable names).

attr(wellbeing_data,"labels") <-c ("EVS-wave","Country/region","Respondent number",
                                  "Health","Life satisfaction",
                                  "Work Q1","Work Q2","Work Q3","Work Q4","Work Q5",
                                  "Sex","Age","Marital status","Number of children",
                                  "Education","Employment","Monthly household income")

attr(wellbeing_data,"shortDescription") <- c( "EVS-wave",
                                              "Country/region",
                                              "Original respondent number",
                                              "State of health (subjective)",
                                              "Satisfaction with your life",
                                              "To develop talents you need to have a job",
                                              "Humiliating to receive money without having to work for it",
                                              "People who don't work become lazy",
                                              "Work is a duty towards society",
                                              "Work should come first even if it means less spare time",
                                              "Sex",
                                              "Age",
                                              "Marital status",
                                              "How many children you have-deceased children not included",
                                              "Educational level respondent: ISCED-code one digit",
                                              "Employment status",
                                              "Monthly household income (× 1,000), corrected for ppp in euros")

Throughout this project we will refer to the variables using their original names, but you can use the following code to look up the attributes of each variable:

attr(wellbeing_data,"labels")[attr(wellbeing_data,"names")=="X028"]
## [1] "Employment"
attr(wellbeing_data,"shortDescription")[attr(wellbeing_data,"names")=="X028"]
## [1] "Employment status"
  2. Now we will take a closer look at how some of the variables were measured.
  3. We will now check and clean the dataset so it is ready to use.

R walk-through 8.2 Cleaning data and splitting variables

Inspect the data and recode missing values

R stores variables as different types depending on the kind of information the variable represents. For categorical data, such as country or occupation, the variables can be stored as text (chr). Numerical data should be stored as a number (num), but sometimes when R imports data it will store numerical values as text, which can cause problems later when we use these variables in calculations. The str function shows what type each variable is being stored as.

str(wellbeing_data)
## Classes 'tbl_df', 'tbl' and 'data.frame':    164997 obs. of  17 variables:
##  $ S002EVS: chr  "1981-1984" "1981-1984" "1981-1984" "1981-1984" ...
##  $ S003   : chr  "Belgium" "Belgium" "Belgium" "Belgium" ...
##  $ S006   : num  1001 1002 1003 1004 1005 ...
##  $ A009   : chr  "Fair" "Very good" "Poor" "Very good" ...
##  $ A170   : chr  "9" "9" "3" "9" ...
##  $ C036   : chr  ".a" ".a" ".a" ".a" ...
##  $ C037   : chr  ".a" ".a" ".a" ".a" ...
##  $ C038   : chr  ".a" ".a" ".a" ".a" ...
##  $ C039   : chr  ".a" ".a" ".a" ".a" ...
##  $ C041   : chr  ".a" ".a" ".a" ".a" ...
##  $ X001   : chr  "Male" "Male" "Male" "Female" ...
##  $ X003   : num  53 30 61 60 60 19 38 39 44 76 ...
##  $ X007   : chr  "Single/Never married" "Married" "Separated" "Married" ...
##  $ X011_01: chr  ".a" ".a" ".a" ".a" ...
##  $ X025A  : chr  ".a" ".a" ".a" ".a" ...
##  $ X028   : chr  "Full time" "Full time" "Unemployed" "Housewife" ...
##  $ X047D  : chr  ".a" ".a" ".a" ".a" ...
##  - attr(*, "labels")= chr  "EVS-wave" "Country/region" "Respondent number" "Health" ...
##  - attr(*, "shortDescription")= chr  "EVS-wave" "Country/region" "Original respondent number" "State of health (subjective)" ...

We start by recoding all missing values as NA.

wellbeing_data[wellbeing_data ==".a"] <-NA

Recode the life satisfaction variable

To recode the life satisfaction variable, we can use logical indexing to select all observations where the variable A170 is equal to either ‘Dissatisfied’ or ‘Satisfied’ and assign the value 1 or 10 respectively. When this variable was imported, some of the entries included text, so it is a chr variable. After changing the text into numerical values we can convert the variable into a numerical type using the as.numeric function.

wellbeing_data$A170[wellbeing_data$A170=="Dissatisfied"] <- 1
wellbeing_data$A170[wellbeing_data$A170=="Satisfied"] <- 10
wellbeing_data$A170 <- as.numeric(wellbeing_data$A170)

Recode the variable for number of children

We repeat this process for the variable indicating the number of children.

wellbeing_data$X011_01[wellbeing_data$X011_01=="No children"] <- 0
wellbeing_data$X011_01 <- as.numeric(wellbeing_data$X011_01)

Replace text with numbers for multiple variables

When we have to recode multiple variables with the same mapping of text to numerical value, we can use the tidyverse library and the piping (%>%) operator instead of repeating the process used above for each variable. (At this stage, don’t worry if you can’t understand exactly what this code does, as long as you can adapt it for similar situations.)

wellbeing_data <- wellbeing_data %>%
    mutate_at(c("C036","C037","C038","C039","C041"),funs(recode(.,"Strongly disagree" = "1", 
                "Disagree" = "2" , "Neither agree nor disagree" = "3","Agree" = "4",
                "Strongly agree" = "5"))) %>%
    mutate_at(c("A009"),funs(recode(.,"Very poor" = "1","Poor"="2","Fair"="3",
                "Good"="4","Very good"="5"))) %>%
    mutate_at(c("A009","C036","C037","C038","C039","C041","X047D"),funs(as.numeric))

Split a variable containing numbers and text

To split the variable X025A into two new columns we use the separate function, which creates two new variables called Education_1 and Education_2 containing the numeric value and the text description respectively.

wellbeing_data <- wellbeing_data %>% 
  separate(X025A,c("Education_1","Education_2")," : ",remove = FALSE) %>%
  mutate_at("Education_1",funs(as.numeric))

Although the paper we are following only considered individuals aged 25–80 who were not students, we will retain these observations in our analysis. However, we need to remove any observations that have missing values for A170, X028, or any of the other variables except X047D.

  4. Remove all observations with missing values from your data. R walk-through 8.3 gives guidance on how to do this. (Remember that not all questions were asked in each wave, so refer to the data documentation PDF to check whether there should be missing values for an entire variable in a particular wave.)

R walk-through 8.3 Dropping specific observations

As not all questions were asked in all waves, we have to be careful when dropping observations with missing values for certain questions, to avoid accidentally dropping an entire wave of data. For example, information on self-reported health (A009) was not recorded in Wave 3, questions on work attitudes (C036 to C041) and household income were only asked in Waves 3 and 4, and information on the number of children (X011_01) and education (X025A) was only collected in the final wave.

We will first use the complete.cases function to drop observations with missing values in the variables that were asked in every wave. This function returns a logical vector indicating which rows have no missing values, which we can then use to select the required rows of the dataframe.

include <- names(wellbeing_data) %in% c("X003","A170","X028","X007", "X001")
wellbeing_data <- wellbeing_data[complete.cases(wellbeing_data[,include]),]

Next we will filter based on whether or not a question was asked in a particular wave. Again we use the complete.cases function, but combine it with the logical OR operator | to keep all observations from waves in which a question was not asked, along with the complete cases for that question in the other waves.

include_wave_1_2_4 <- names(wellbeing_data) %in% c("A009")    # A009 is not in Wave 3.
include_wave_3_4 <- names(wellbeing_data) %in% c("C036","C037","C038","C039","C041","X047D")  # Work attitudes and income are in Waves 3 and 4.
include_wave_4 <- names(wellbeing_data) %in% c("X011_01","X025A") # Number of children and education are in Wave 4.

wellbeing_data <- wellbeing_data %>%
  filter(S002EVS == "1981-1984" | S002EVS == "1990-1993" | complete.cases(wellbeing_data[,include_wave_3_4])) 

wellbeing_data <- wellbeing_data %>%
  filter(S002EVS != "2008-2010" |  complete.cases(wellbeing_data[,include_wave_4]))

wellbeing_data <- wellbeing_data %>%
  filter(S002EVS == "1999-2001" |  complete.cases(wellbeing_data[,include_wave_1_2_4]))
  5. We will now create two variables, work ethic and relative income, which we will use in our comparison of wellbeing.

R walk-through 8.4 Calculating averages and percentiles

Calculate average work ethic score

We use the rowMeans function to calculate the average work ethic score for each observation based on the five survey questions, C036 to C041, related to working attitudes.

wellbeing_data$work_ethic <- rowMeans(wellbeing_data[,c("C036","C037","C038","C039","C041")])

Calculate income percentiles for each individual

R provides a handy function (ecdf) to obtain an individual’s relative income as a percentile. As we want a separate income distribution for each wave, we use the group_by function. Finally, we tidy up the results by converting each value into a percentage and rounding to a single decimal place.

The ecdf function will not work if there is missing data. Since we don’t have income data for Waves 1 and 2, we will have to split the data when working out the percentiles. First we select observations from Waves 3 and 4 that have income values, calculate the percentile values, and save them as a variable called percentile (which will be ‘NA’ for Waves 1 and 2). Then we recombine this data with the original observations from Waves 1 and 2.

# Generate the percentile variable for Waves 3 and 4, and store in temporary dataframe
df.new <- wellbeing_data %>%
  subset(S002EVS %in% c("1999-2001","2008-2010")) %>%   # Select Waves 3 and 4
  filter(!is.na(X047D)) %>%                             # Drop observations with missing income data
  group_by(S002EVS) %>%                                 # Calculate the percentiles per wave
  mutate(., percentile = ecdf(X047D)(X047D)) %>%        # Calculate the relative income as the percentiles of income distribution
  mutate(., percentile = round(percentile*100,1))       # Express the variable as a percentage

# Recombine Waves 3 and 4 with Waves 1 and 2
wellbeing_data <- wellbeing_data %>%
  subset(S002EVS %in% c("1981-1984","1990-1993")) %>%   # Select Waves 1 and 2
  mutate(., percentile = NA) %>%                        # Create an empty variable so old and new dataframes are same size
  bind_rows(., df.new)                                  # Recombine the data
  6. Now we have all the variables we need, in the required format. We will make some tables to summarize the data. Using the data for Wave 4:
|                      | Male mean | Male standard deviation | Female mean | Female standard deviation |
|:---------------------|----------:|------------------------:|------------:|--------------------------:|
| Wellbeing            |           |                          |             |                            |
| Self-reported health |           |                          |             |                            |
| Work ethic           |           |                          |             |                            |
| Age                  |           |                          |             |                            |
| Education            |           |                          |             |                            |
| Number of children   |           |                          |             |                            |

Figure 8.1 Summary statistics by gender, European Values Study.

R walk-through 8.5 Calculating summary statistics

Create table showing employment status, by country

One of the most useful R functions for obtaining summary information is summarize, which can compute a wide range of summary statistics for a set of data. First we select the data for Wave 4 and group it by country (S003) and employment type (X028), then we use the summarize function to count the number of observations for each country and employment type pair. After summarize, the data remains grouped by country only, so the sum function gives a total count per country, which we use to calculate relative frequencies (saved as freq).

wellbeing_data %>%
  subset(S002EVS == "2008-2010") %>%          # Select Wave 4 only
  group_by(S003,X028) %>%                     # Group by country and employment
  summarize(n = n()) %>%                      # Count the number of observations by group
  mutate(freq = round(n / sum(n),4)*100) %>%  # Calculate the relative frequency (percentages)
  select(-n) %>%                              # Drop the count variable
  spread(X028,freq) %>%                       # Reshape the data into a neat table
  kable(., format="markdown")
|S003               | Full time| Housewife| Other| Part time| Retired| Self employed| Students| Unemployed|
|:------------------|---------:|---------:|-----:|---------:|-------:|-------------:|--------:|----------:|
|Albania            |     29.42|      7.42|  1.50|      5.50|    9.08|         22.08|     7.33|      17.67|
|Armenia            |     23.86|     20.92|  1.14|      8.09|   18.38|          5.96|     6.70|      14.95|
|Austria            |     39.80|      7.24|  1.89|      9.95|   25.49|          5.02|     8.39|       2.22|
|Belarus            |     57.88|      2.43|  1.21|      6.95|   18.59|          3.40|     6.87|       2.67|
|Belgium            |     42.89|      5.96|  3.72|      8.94|   23.01|          3.57|     5.21|       6.70|
|Bosnia Herzegovina |     34.06|      9.33|  0.82|      2.90|   14.67|          3.08|     8.15|      26.99|
|Bulgaria           |     46.32|      2.62|  0.76|      2.79|   31.28|          5.58|     2.37|       8.28|
|Croatia            |     41.58|      3.37|  0.93|      2.78|   26.01|          2.86|     8.75|      13.72|
|Cyprus             |     46.32|     13.68|  1.29|      2.84|   24.39|          6.58|     1.68|       3.23|
|Czech Republic     |     46.56|      3.06|  4.66|      1.68|   31.27|          3.82|     5.43|       3.52|
|Denmark            |     55.89|      0.28|  1.32|      6.69|   24.32|          5.94|     4.15|       1.41|
|Estonia            |     50.35|      4.08|  2.20|      5.11|   28.52|          3.61|     3.38|       2.75|
|Finland            |     52.34|      1.38|  3.94|      5.11|   22.77|          6.17|     3.72|       4.57|
|France             |     46.83|      5.59|  1.94|      6.04|   28.78|          2.76|     3.13|       4.92|
|Georgia            |     19.46|     11.60|  0.81|      6.57|   19.38|          7.06|     2.60|      32.52|
|Germany            |     38.44|      4.58|  3.03|      8.44|   28.64|          2.97|     2.67|      11.23|
|Great Britain      |     33.50|      7.32|  4.01|     11.23|   28.99|          5.72|     1.40|       7.82|
|Greece             |     28.49|     17.42|  0.40|      2.97|   26.73|         13.72|     6.18|       4.09|
|Hungary            |     46.39|      1.20|  7.21|      2.00|   24.04|          3.53|     6.57|       9.05|
|Iceland            |     54.50|      2.25|  6.01|      9.91|    7.06|         11.41|     4.95|       3.90|
|Ireland            |     41.87|     19.84|  1.59|      9.72|   13.89|          4.96|     1.59|       6.55|
|Italy              |     32.88|      8.33|  0.46|      9.13|   23.06|         13.70|     7.08|       5.37|
|Kosovo             |     19.57|     11.65|  0.52|      5.23|    5.83|          9.41|    18.00|      29.80|
|Latvia             |     52.05|      6.18|  2.34|      3.76|   23.22|          3.26|     4.26|       4.93|
|Lithuania          |     50.13|      4.11|  2.89|      5.16|   23.97|          3.41|     6.04|       4.29|
|Luxembourg         |     51.33|      9.36|  1.12|      7.38|   15.36|          3.00|     9.87|       2.58|
|Macedonia          |     35.74|      4.34|  1.24|      1.71|   16.74|          3.72|     8.53|      27.98|
|Malta              |     33.84|     32.33|  0.68|      3.84|   23.42|          2.33|     0.55|       3.01|
|Moldova            |     30.49|      7.24|  1.87|      7.58|   25.64|          4.86|     4.43|      17.89|
|Montenegro         |     39.02|      4.63|  0.60|      2.14|   16.64|          4.97|     4.80|      27.19|
|Netherlands        |     32.40|      9.52|  3.68|     18.24|   27.76|          6.48|     0.80|       1.12|
|Northern Cyprus    |     31.19|     19.55|  2.23|      5.20|    8.91|          8.91|    13.61|      10.40|
|Northern Ireland   |     30.10|     10.36|  4.53|      8.74|   29.45|          3.56|     1.29|      11.97|
|Norway             |     53.23|      2.22|  6.55|      9.48|   12.60|          8.17|     7.06|       0.71|
|Poland             |     41.81|      6.00|  0.10|      3.14|   28.00|          5.81|     7.62|       7.52|
|Portugal           |     46.20|      5.24|  1.57|      3.27|   33.51|          1.70|     1.05|       7.46|
|Romania            |     41.07|     10.54|  1.95|      3.22|   33.95|          3.02|     3.61|       2.63|
|Russian Federation |     54.36|      5.81|  2.72|      5.08|   23.77|          1.27|     2.72|       4.26|
|Serbia             |     34.21|      5.02|  1.07|      2.38|   25.16|          6.91|     4.11|      21.13|
|Slovakia           |     40.98|      1.73|  4.89|      2.21|   39.73|          3.55|     1.25|       5.66|
|Slovenia           |     47.44|      2.50|  2.75|      1.25|   31.59|          4.37|     6.99|       3.12|
|Spain              |     41.52|     16.30|  0.11|      4.63|   19.93|          6.28|     3.19|       8.04|
|Sweden             |     54.82|      0.38|  6.60|      7.36|   15.36|          7.23|     4.06|       4.19|
|Switzerland        |     48.50|      6.42|  3.21|     14.03|   21.31|          2.89|     1.39|       2.25|
|Turkey             |     16.42|     42.39|  0.60|      2.14|   10.00|          7.66|     5.92|      14.88|
|Ukraine            |     40.92|      6.79|  1.02|      4.84|   32.09|          4.41|     3.23|       6.71|

Calculate summary statistics by gender

We can also obtain summary statistics on a number of variables at the same time using the summarize_at function. To obtain the mean value for each of the required variables, grouped by the gender variable, we can do the following:

wellbeing_data %>%    # Obtain mean value
  subset(S002EVS == "2008-2010") %>%    # We only require Wave 4.
  group_by(X001) %>%                    # Group by gender
  summarize_at(c("A009","A170","work_ethic","X003","Education_1","X011_01"), mean)
## # A tibble: 2 x 7
##   X001    A009  A170 work_ethic  X003 Education_1 X011_01
##   <chr>  <dbl> <dbl>      <dbl> <dbl>       <dbl>   <dbl>
## 1 Female  3.60  6.93       3.64  47.3        3.05    1.69
## 2 Male    3.77  7.03       3.72  46.9        3.14    1.55

We can repeat this to obtain the standard deviation, again grouped by male and female.

wellbeing_data %>%    # Obtain standard deviation
  subset(S002EVS == "2008-2010") %>%
  group_by(X001) %>%
  summarize_at(c("A009","A170","work_ethic","X003","Education_1","X011_01"), sd)
## # A tibble: 2 x 7
##   X001    A009  A170 work_ethic  X003 Education_1 X011_01
##   <chr>  <dbl> <dbl>      <dbl> <dbl>       <dbl>   <dbl>
## 1 Female 0.969  2.32      0.765  17.5        1.40    1.39
## 2 Male   0.929  2.28      0.757  17.4        1.31    1.43

We can combine these two operations into a single code block and ask summarize_at to perform multiple functions on multiple variables at the same time.

wellbeing_data %>%    # Obtain mean and standard deviations
  subset(S002EVS == "2008-2010") %>%
  group_by(X001) %>%
  summarize_at(c("A009","A170","work_ethic","X003","Education_1","X011_01"), funs(mean,sd))
## # A tibble: 2 x 13
##   X001   A009_mean A170_mean work_ethic_mean X003_mean Education_1_mean
##   <chr>      <dbl>     <dbl>           <dbl>     <dbl>            <dbl>
## 1 Female      3.60      6.93            3.64      47.3             3.05
## 2 Male        3.77      7.03            3.72      46.9             3.14
## # ... with 7 more variables: X011_01_mean <dbl>, A009_sd <dbl>,
## #   A170_sd <dbl>, work_ethic_sd <dbl>, X003_sd <dbl>,
## #   Education_1_sd <dbl>, X011_01_sd <dbl>

In this example, R automatically appends the function name (in this case, _mean or _sd) to the end of the variable names. If you need these variables in subsequent calculations, you may need to reshape the data and rename the variables, as in the sketch below.
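For example, the following sketch (one possible approach; the dataframe name df.summary and the reshaping choices are ours) converts the combined output into a table with one row per variable and one column per gender/statistic pair, similar in layout to Figure 8.1:

# A sketch: reshape the summarize_at output (df.summary is a name we introduce here)
df.summary <- wellbeing_data %>%
  subset(S002EVS == "2008-2010") %>%
  group_by(X001) %>%
  summarize_at(c("A009","A170","work_ethic","X003","Education_1","X011_01"), funs(mean,sd))

df.summary %>%
  gather(key, value, -X001) %>%                                    # One row per gender/variable/statistic
  separate(key, c("variable","statistic"), sep="_(?=[^_]+$)") %>%  # Split e.g. "work_ethic_mean" at the last "_"
  unite(group, X001, statistic) %>%                                # e.g. "Female_mean", "Male_sd"
  spread(group, value) %>%                                         # One column per gender/statistic pair
  kable(., format="markdown")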

Part 8.2 Visualizing the data

Note

You will need to have done Questions 1–5 in Part 8.1 before doing this part.

We will now create some summary charts of the self-reported measures (work ethic and wellbeing), starting with column charts to show the distributions of values. Along with employment status, these are the main variables of interest, so it is important to look at them carefully before doing further data analysis.

The distribution of work ethic and wellbeing may vary across countries but may also change over time within a country, especially since the surveys are conducted around once a decade. To compare distributions for a particular country over time, we have to use the same horizontal axis, so we will first need to make a frequency table for each distribution of interest. Also, since the number of people surveyed in each wave may differ, we will use percentages instead of frequencies as the vertical axis variable.

  1. Use the data from Wave 3 and Wave 4 only, for three countries of your choice:

R walk-through 8.6 Calculating frequencies and percentages

First we need to create a frequency table of the work_ethic variable for each wave. This variable only takes discrete values from 1.0 to 5.0 in increments of 0.2 (since it is an average of five integer variables), so we can group by each discrete value and count the number of observations in each group using the summarize function. Once we have a count of each value (for each wave), we compute the proportion of observations taking each value by dividing the count by the total number of observations (for each wave).

Once we have the proportions and frequencies, we use ggplot to plot a column chart. To overlay the two waves and make sure that the plot for each wave is visible, we use the alpha option in the geom_bar function to set the transparency level.

waves  <- c("1999-2001","2008-2010")
df.new <- wellbeing_data %>% 
  subset(., S002EVS %in% waves) %>%       # Select Waves 3 and 4
  subset(., S003 == "Germany") %>%        # We use Germany in this example.
  group_by(S002EVS, work_ethic) %>%       # Group by each wave and the discrete values of work_ethic
  summarize (freq = n()) %>%
  mutate(proportion = freq / sum(freq))

The frequencies and proportions are saved in a new dataframe called df.new. If you want to look at it, type view(df.new) into the console window. We can now use these numbers to plot the distribution of scores for each wave:

p<-ggplot(df.new, aes(work_ethic,proportion, fill=S002EVS, color=S002EVS)) +    
                # The colour and fill of each wave is set by the wave variable in the count/frequency dataframe.
    geom_bar(stat="identity", position="identity", alpha=0.2) + 
    theme_bw() +
    ggtitle(paste("Distribution of work ethic for Germany"))
print(p)

Figure 8.2 Distribution of work ethic scores for Germany.

  2. We will use line charts to make a similar comparison for wellbeing over time. Using data for countries that are present in Waves 1 to 4:

R walk-through 8.7 Plotting multiple lines on a chart

Calculate average wellbeing, by wave and country

Before we can plot the line charts we have to calculate the average wellbeing for each country in each wave.

In R walk-through 8.5 we produced summary tables grouped by country and employment status. We will follow the same process, but now we only require mean values. Countries that do not appear in all waves will have an average wellbeing of ‘NA’ for the missing waves. Since each country is represented by a row in the summary table, we use the rowwise and na.omit functions to drop any countries that do not have a value for average wellbeing in all four waves.

df.new <- wellbeing_data %>%
  group_by(S002EVS,S003) %>%
  summarize(avg_wellbeing=round(mean(A170),2)) %>%    # Calculate average wellbeing per wave and country
  spread(S002EVS,avg_wellbeing) %>%                   # Reshape the data to one row per country
  rowwise() %>%                                       # Set subsequent operations to be performed on each row
  na.omit() %>%                                       # Drop any rows with missing observations
  print()
## # A tibble: 11 x 5
##    S003             `1981-1984` `1990-1993` `1999-2001` `2008-2010`
##    <chr>                  <dbl>       <dbl>       <dbl>       <dbl>
##  1 Belgium                 7.37        7.6         7.42        7.63
##  2 Denmark                 8.21        8.17        8.31        8.41
##  3 France                  6.71        6.77        6.98        7.05
##  4 Germany                 7.22        7.03        7.43        6.77
##  5 Iceland                 8.05        8.01        8.08        8.07
##  6 Ireland                 7.82        7.88        8.21        7.82
##  7 Italy                   6.65        7.3         7.18        7.4 
##  8 Netherlands             7.75        7.77        7.83        7.99
##  9 Northern Ireland        7.66        7.88        8.07        7.82
## 10 Spain                   6.6         7.15        6.97        7.29
## 11 Sweden                  8.03        7.99        7.62        7.68

Create a line chart for average wellbeing

To draw the lines for all countries on a single chart, we need to gather the data back into ‘long’ format (one row per country and wave) for ggplot.

df.new %>%
  gather(Wave,mean,`1981-1984`:`2008-2010`) %>%
  ggplot(., aes(x=Wave, y=mean, color=S003)) + geom_line(aes(group=S003,linetype=S003), size=1)  + # Group the data by country 
  theme_bw()

Figure 8.3 Line chart of average wellbeing across countries and survey waves.

correlation coefficient
A numerical measure of how closely associated two variables are and whether they tend to take similar or dissimilar values, ranging from a value of 1, indicating that the variables take similar values (positively correlated), to −1, indicating that the variables take dissimilar values (negative or inverse correlation). A value of 1 or −1 indicates that knowing the value of one of the variables would allow you to perfectly predict the value of the other. A value of 0 indicates that knowing one of the variables provides no information about the value of the other.

After describing patterns in our main variables over time, we will use correlation coefficients and scatter plots to look at the relationship between these variables and the other variables in our dataset.

  3. Using the Wave 4 data:
| Variable | Wellbeing | Work ethic |
|:---------|----------:|-----------:|
| Age | | |
| Education | | |
| Employment status (= 1 if full-time employed, = 0 if unemployed) | | |
| Gender (= 0 if male, = 1 if female) | | |
| Self-reported health | | |
| Income | | |
| Number of children | | |
| Relative income | | |
| Wellbeing | 1 | |
| Work ethic | | 1 |

Figure 8.4 Correlation between wellbeing, work ethic and other variables, Wave 4.

R walk-through 8.8 Creating dummy variables and calculating correlation coefficients

To obtain the correlation coefficients between variables, we have to make sure that all variables are numeric. However, the data on sex and employment status are coded using text, so we need to create two new variables. We can use the ifelse function to make the value of the variable conditional on whether a logical statement is satisfied or not. As shown below, we can nest ifelse statements to create more complex conditions.

Although we previously removed observations with missing values (‘NA’), creating the employment variable below reintroduces ‘NA’ values for anyone who is neither full-time employed nor unemployed. These would cause problems when obtaining the correlation coefficients using the cor function, so we include the option use="pairwise.complete.obs" to exclude observations containing ‘NA’ from the calculations.

wellbeing_data %>%
  subset(S002EVS == "2008-2010") %>%                                              # We only require Wave 4.
  mutate(gender = as.numeric(ifelse(X001=="Male",0,1))) %>%                       # Create a gender variable
  mutate(employment = as.numeric(ifelse(X028=="Full time",1,
                                        ifelse(X028=="Unemployed",0,NA)))) %>%    # Provide nested condition for employment
  select(X003,Education_1,employment,gender,A009,X047D,X011_01,percentile,A170,work_ethic) %>%
  cor(.,use="pairwise.complete.obs") -> M   # Store output in matrix

M[,c("A170","work_ethic")]                  # Only interested in two columns, so use matrix indexing
##                    A170  work_ethic
## X003        -0.08220504  0.13333312
## Education_1  0.09363900 -0.14535643
## employment   0.18387286 -0.02710835
## gender      -0.02105535 -0.04781557
## A009         0.37560287 -0.07320865
## X047D        0.23542497 -0.15226999
## X011_01     -0.01729372  0.08925452
## percentile   0.29592968 -0.18679869
## A170         1.00000000 -0.03352464
## work_ethic  -0.03352464  1.00000000
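Scatter plots, mentioned above, are not demonstrated in this walk-through; a minimal sketch for one pair of variables (our own choice, using the Wave 4 data) could look like this:

wellbeing_data %>%
  subset(S002EVS == "2008-2010") %>%                  # Wave 4 only
  ggplot(., aes(x = work_ethic, y = A170)) +
  geom_jitter(alpha = 0.1) +                          # Jitter the points because both variables take discrete values
  xlab("Work ethic") + ylab("Life satisfaction (A170)") +
  theme_bw()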

Next we will look at the relationship between employment status and wellbeing and investigate the paper’s hypothesis that this relationship varies with a country’s average work ethic.

  4. Using the data from Wave 4, carry out the following:

R walk-through 8.9 Calculating group means

Calculate average wellbeing and differences in average wellbeing

We can achieve the tasks in Question 4(a) and (b) in one go using an approach similar to that used in R walk-through 8.5, although now we are interested in calculating the average wellbeing by country and employment type. Once we have tabulated these means we can compute the differences in the average values. We will create two new variables: D1 for the difference in average wellbeing between the full-time employed and the unemployed, and D2 for the difference in average wellbeing between the full-time employed and the retired.

employment_list = c("Full time","Retired","Unemployed")   # Set the employment types that we are interested in reporting
df.employment <- wellbeing_data %>%
  subset(S002EVS == "2008-2010") %>%                      # Select Wave 4
  subset(X028 %in% employment_list) %>%                   # Select only observations with the specified employment types
  group_by(S003,X028) %>%                                 # Group by country and then employment type
  summarize(mean = mean(A170)) %>%                        # Calculate the mean by country/employment type group
  spread(X028,mean) %>%                                   # Reshape to one row per country
  mutate(D1 = `Full time` - Unemployed, D2 = `Full time` - Retired)  # Create the difference in means
kable(df.employment, format="markdown")
|S003               | Full time|  Retired| Unemployed|         D1|         D2|
|:------------------|---------:|--------:|----------:|----------:|----------:|
|Albania            |  6.634561| 5.807339|   6.066038|  0.5685232|  0.8272215|
|Armenia            |  6.037671| 4.853333|   5.459016|  0.5786548|  1.1843379|
|Austria            |  7.435950| 7.741936|   6.074074|  1.3618763| -0.3059851|
|Belarus            |  6.099162| 5.617391|   5.606061|  0.4931014|  0.4817707|
|Belgium            |  7.717014| 7.834951|   6.366667|  1.3503472| -0.1179376|
|Bosnia Herzegovina |  7.332447| 7.006173|   6.771812|  0.5606347|  0.3262740|
|Bulgaria           |  6.177007| 4.972973|   4.693878|  1.4831297|  1.2040343|
|Croatia            |  7.305668| 6.478964|   7.165644|  0.1400238|  0.8267036|
|Cyprus             |  7.384401| 7.031746|   6.560000|  0.8244011|  0.3526551|
|Czech Republic     |  7.302135| 6.885086|   6.065217|  1.2369173|  0.4170491|
|Denmark            |  8.541315| 8.205426|   7.200000|  1.3413153|  0.3358890|
|Estonia            |  6.925117| 6.253444|   4.971429|  1.9536884|  0.6716735|
|Finland            |  7.817073| 8.018692|   5.767442|  2.0496313| -0.2016184|
|France             |  7.200637| 6.974093|   6.242424|  0.9582127|  0.2265437|
|Georgia            |  6.116667| 4.690377|   5.421446|  0.6952203|  1.4262901|
|Germany            |  7.258114| 6.848548|   4.608466|  2.6496488|  0.4095667|
|Great Britain      |  7.532934| 7.930796|   6.025641|  1.5072931| -0.3978617|
|Greece             |  7.152113| 6.621622|   5.980392|  1.1717205|  0.5304911|
|Hungary            |  6.645941| 5.890000|   4.858407|  1.7875342|  0.7559413|
|Iceland            |  8.203857| 8.446808|   7.230769|  0.9730875| -0.2429518|
|Ireland            |  7.895735| 7.828571|   7.181818|  0.7139164|  0.0671632|
|Italy              |  7.427083| 7.440594|   6.595745|  0.8313387| -0.0135107|
|Kosovo             |  6.297710| 6.038462|   6.784461| -0.4867512|  0.2592484|
|Latvia             |  6.516854| 5.928058|   5.305085|  1.2117692|  0.5887964|
|Lithuania          |  6.586387| 5.631387|   4.530612|  2.0557752|  0.9550006|
|Luxembourg         |  7.866221| 8.240224|   5.500000|  2.3662207| -0.3740027|
|Macedonia          |  7.193059| 6.671296|   6.609418|  0.5836403|  0.5217623|
|Malta              |  7.700405| 7.760234|   5.954546|  1.7458594| -0.0598291|
|Moldova            |  7.117318| 5.976744|   6.071429|  1.0458899|  1.1405742|
|Montenegro         |  7.632967| 7.216495|   7.473186|  0.1597809|  0.4164722|
|Netherlands        |  8.037037| 7.945245|   7.000000|  1.0370370|  0.0917921|
|Northern Cyprus    |  6.738095| 6.444444|   5.642857|  1.0952381|  0.2936508|
|Northern Ireland   |  7.677419| 7.769231|   7.540540|  0.1368788| -0.0918114|
|Norway             |  8.193182| 8.264000|   8.000000|  0.1931818| -0.0708182|
|Poland             |  7.464692| 6.574830|   7.063291|  0.4014013|  0.8898626|
|Portugal           |  6.838527| 5.878906|   5.438597|  1.3999304|  0.9596207|
|Romania            |  7.135392| 6.574713|   7.407407| -0.2720155|  0.5606793|
|Russian Federation |  6.861436| 5.702290|   6.404255|  0.4571804|  1.1591456|
|Serbia             |  7.170673| 6.673203|   6.731517|  0.4391556|  0.4974705|
|Slovakia           |  7.468384| 6.710145|   6.118644|  1.3497400|  0.7582391|
|Slovenia           |  7.834211| 7.130435|   6.760000|  1.0742105|  0.7037757|
|Spain              |  7.336870| 7.209945|   7.191781|  0.1450892|  0.1269253|
|Sweden             |  7.877315| 8.173554|   6.515151|  1.3621633| -0.2962389|
|Switzerland        |  8.037528| 8.115578|   5.761905|  2.2756228| -0.0780503|
|Turkey             |  6.500000| 6.606965|   5.762542|  0.7374582| -0.1069652|
|Ukraine            |  6.344398| 5.444444|   4.949367|  1.3950313|  0.8999539|

Plot a column chart, sorted according to work ethic

In order to plot the differences ordered by the average work ethic, we first need to summarize the work_ethic variable by country and store the results in a temporary dataframe.

df.work_ethic <- wellbeing_data %>%
  subset(S002EVS == "2008-2010") %>%    
  select(S003,work_ethic) %>%
  group_by(S003) %>%
  summarize(mean_work = mean(work_ethic))

We can now combine the average work_ethic data with the table containing the difference in means and plot a column chart, with the countries ordered by average work ethic. (The ordering is obtained using the x=reorder() option in ggplot.)

df.employment %>%
  inner_join(.,df.work_ethic, by = "S003") %>%                # Combine with the average work ethic data
  ggplot(.,aes(y=D1, x=reorder(factor(S003),mean_work)))  +   
    geom_bar(stat="identity") + 
    xlab("Country") + ylab("Difference") +
    ggtitle("Difference in wellbeing between the full-time employed and the unemployed \n 
            (sorted from lowest to highest average work ethic).") + 
    theme_bw() + 
    theme(axis.text.x = element_text(angle = 90, hjust = 1), plot.title = element_text(hjust = 0.5))  # Rotate the country names

Figure 8.5 Difference in wellbeing between the full-time employed and the unemployed (sorted from lowest to highest average work ethic).

This process can be repeated for the difference in means between full-time employed and retired individuals by changing y=D1 to y=D2 in the ggplot function.
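For example (a sketch adapting the code above, with the title text changed accordingly):

df.employment %>%
  inner_join(.,df.work_ethic, by = "S003") %>%                # Combine with the average work ethic data
  ggplot(.,aes(y=D2, x=reorder(factor(S003),mean_work)))  +   # Plot D2 (full-time minus retired) instead of D1
    geom_bar(stat="identity") + 
    xlab("Country") + ylab("Difference") +
    ggtitle("Difference in wellbeing between the full-time employed and the retired \n 
            (sorted from lowest to highest average work ethic).") + 
    theme_bw() + 
    theme(axis.text.x = element_text(angle = 90, hjust = 1), plot.title = element_text(hjust = 0.5))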

Part 8.3 Confidence intervals for difference in the mean

Note

You will need to have done Questions 1–5 in Part 8.1 before doing this part.

The aim of this project was to look at the empirical relationship between employment and wellbeing. In other words, is the mean wellbeing of employed people (statistically) significantly different from the mean wellbeing of unemployed people (as we might expect from economic theory)?

When we calculate differences between groups, we might also want to know if these differences are statistically significant. In Part 6.2 of Empirical Project 6, we constructed confidence intervals for means, and used a rule of thumb to assess statistical significance from confidence intervals displayed on a chart. Now we will learn how to construct confidence intervals for the difference in two means, which allows us to assess statistical significance directly instead of using the rule of thumb.

Remember that the width of a confidence interval depends on the standard deviation and number of observations. (Read Part 6.2 of Empirical Project 6 to understand why.) When making a confidence interval for a sample mean (such as the mean wellbeing of the unemployed), we use the sample standard deviation and number of observations in that sample (unemployed people) to obtain the standard error of the sample mean.

When we look at the difference in means (such as wellbeing of employed minus unemployed), we are using data from two groups (the unemployed and the employed) to make one confidence interval. To calculate the standard error for the difference in means, we use the standard errors (SE) of each group:
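The standard error of the difference in means combines the two group standard errors (this is the calculation used in R walk-through 8.10 below):

standard error of the difference = sqrt(SE₁² + SE₂²)

where SE₁ and SE₂ are the standard errors of the two group means.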

This formula requires the two groups of data to be independent, meaning that the data from one group is not related, paired, or matched with data from the other group. This assumption is reasonable for the wellbeing data we are using. However, if the two groups of data are not independent, for example if the same people generated both groups of data (as in the public goods experiments of Empirical Project 2), then we cannot use this formula.

Once we have the new standard error and number of observations, we can calculate the width of the confidence interval (distance from the mean to one end of the interval) as before. The difference in means is statistically significant if the confidence interval for the difference in means does not contain 0.

  1. We will apply this method to make confidence intervals for differences in wellbeing. Choose three countries: one with an average work ethic in the top third of scores, one in the middle third, and one in the lower third. (See R walk-through 6.3 for help on calculating confidence intervals and adding them to a chart.)

R walk-through 8.10 Calculating confidence intervals and adding error bars

We will use Turkey, Spain, and Great Britain as example countries in the top, middle, and bottom third of work ethic scores respectively.

In the tasks in Questions 1(a) and (b) we will obtain the means, standard errors, and confidence intervals step-by-step, then for Question 1(c) we show how to use a shortcut to obtain confidence intervals from a single function.

Calculate confidence intervals manually

We obtained the difference in means in R walk-through 8.9, so now we can calculate the standard error of the means for each country of interest.

country_list <- c("Turkey","Spain","Great Britain")     # List chosen countries
df.employment.se <- wellbeing_data %>%  
  subset(S002EVS == "2008-2010") %>%                    # Select Wave 4
  subset(X028 %in% employment_list) %>%                 # Select the employment types we are interested in
  subset(S003 %in% country_list) %>%                    # Select countries
  group_by(S003,X028) %>%                               # Group by country and employment type
  summarize(se = sd(A170) /sqrt(n())) %>%               # Calculate the standard error of each group mean
  spread(X028,se) %>%
  mutate(D1_SE = sqrt(`Full time`^2 + Unemployed^2), D2_SE = sqrt(`Full time`^2 + Retired^2))   # Calculate the SE of difference 

We can now combine the standard errors with the difference in means, and compute the confidence interval width.

df.employment <- df.employment %>%
  subset(S003 %in% country_list) %>%                # Select chosen countries
  select(-`Full time`, -Retired, -Unemployed) %>%   # We only need the differences.
  inner_join(.,df.employment.se, by="S003") %>%     # Join the means with the respective SEs
  select(-`Full time`, -Retired, -Unemployed) %>%
  mutate(CI_1 = 1.96*D1_SE, CI_2 = 1.96*D2_SE) %>%  # Compute the confidence interval width for both differences
  print()
## # A tibble: 3 x 7
## # Groups:   S003 [3]
##   S003             D1     D2 D1_SE D2_SE  CI_1  CI_2
##   <chr>         <dbl>  <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 Great Britain 1.51  -0.398 0.267 0.154 0.524 0.302
## 2 Spain         0.145  0.127 0.265 0.168 0.520 0.330
## 3 Turkey        0.737 -0.107 0.229 0.231 0.449 0.452

We now have a table containing the difference in means, the standard error of the difference in means, and the confidence interval width for each of the two differences. (Recall that D1 is the difference in average wellbeing between the full-time employed and the unemployed, and D2 is the difference between the full-time employed and the retired.)

Calculate confidence intervals using the t.test function

We could obtain the confidence intervals directly by using the t.test function. First we need to prepare the data in two groups. In the following example we go through the difference in average wellbeing for full-time employed and unemployed individuals in Turkey, but the process can be repeated for the difference between full-time employed and retired individuals (also for your other two chosen countries).

Start by selecting the data and storing it in a temporary matrix to pass to the t.test function.

turkey_full <- wellbeing_data %>%
  subset(S002EVS == "2008-2010") %>%    # Select Wave 4 only
  subset(S003 == "Turkey") %>%          # Select country
  subset(X028 == "Full time") %>%       # Select employment type
  select(A170) %>%                      # We only need the wellbeing data.
  as.matrix()                           # We have to set it as a matrix for t.test.

turkey_unemployed <- wellbeing_data %>% # Repeat for second group
  subset(S002EVS == "2008-2010") %>%    
  subset(S003 == "Turkey") %>%
  subset(X028 == "Unemployed") %>%
  select(A170) %>%
  as.matrix()

We can now use the t.test function, passing the two newly created matrices as the data. We also set the confidence level to 95%. The t.test function produces quite a bit of output, but we are only interested in the confidence interval, which we can obtain by using $conf.int.

turkey_ci <- t.test(turkey_full,turkey_unemployed, conf.level = 0.95)$conf.int
turkey_ci
## [1] 0.2873459 1.1875704
## attr(,"conf.level")
## [1] 0.95

We can then recover the difference in means as the midpoint of the interval ((turkey_ci[1] + turkey_ci[2])/2 gives 0.7374582), which is the same as the figure obtained in Question 1(b) (df.employment[3,2] is 0.7374582).
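For example, the check can be run directly (a small sketch using the objects created above):

(turkey_ci[1] + turkey_ci[2])/2                       # Midpoint of the confidence interval
df.employment[df.employment$S003 == "Turkey", "D1"]   # Difference in means from Question 1(b)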

Add error bars to the column charts

We can now use these confidence intervals (and widths) to add error bars to our column charts. To do so we use the geom_errorbar option, and specify the lower and upper levels of the confidence interval for the ymin and ymax options respectively. In this case it is easier to use the results from Questions 1(a) and (b), as we already have the values for the difference in means and the CI width stored as variables.

ggplot(df.employment, aes(x=S003, y=D1)) +
  geom_bar(stat = "identity") + 
  geom_errorbar(aes(ymin=D1-CI_1, ymax=D1+CI_1), width=.2) +
  ylab("Difference in means") + xlab("Country") + 
  theme_bw() +
  ggtitle("Difference in wellbeing (full-time and unemployed)")

Figure 8.6 Difference in wellbeing between full-time employed and unemployed.

Again, this can be repeated for the difference in wellbeing between full-time employed and retired. Remember to change y=D1 to y=D2, and the upper and lower limits for the error bars.
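For example (a sketch adapting the code above):

ggplot(df.employment, aes(x=S003, y=D2)) +
  geom_bar(stat = "identity") + 
  geom_errorbar(aes(ymin=D2-CI_2, ymax=D2+CI_2), width=.2) +   # Use the CI width for D2
  ylab("Difference in means") + xlab("Country") + 
  theme_bw() +
  ggtitle("Difference in wellbeing (full-time and retired)")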

  2. The method we used to compare wellbeing relied on making comparisons between people with different employment statuses, but a person’s employment status is not entirely random. We therefore cannot make causal statements such as ‘being unemployed causes wellbeing to decrease’. Describe how we could use the following methods to better assess the effect of being unemployed on wellbeing, and make some statements about causality: