We will first load the libraries for this homework assignment. Quite a few packages are used for small data-manipulation and conversion tasks this week.

# Libraries
library(npsm)
library(dplyr)
library(tidyr)
library(ggplot2)
library(survival)
library(readxl)
library(purrr)

Exercise 1

In this exercise, we want to explore the relationship between the Dirichlet Process (specifically its concentration parameter \(\alpha\)) and a fixed base distribution.

  1. We will first write R code to draw 100 sample curves from the Dirichlet Process (DP) prior for a range of concentration parameters \(\alpha\) and one fixed base distribution \(F_0\) (for example, the normal distribution).

We will assume that our fixed base distribution \(F_0\) is a standard normal distribution with a mean of 0 and a variance of 1.

Our function dp_curves first draws a sample of size n (with a default of \(n = 100\)) from a standard normal distribution. Then we draw a sample of size n from a Beta distribution with parameters 1 and \(\alpha\).

Next, we calculate the stick-breaking weight for each point in the normal sample as defined in lecture, \(w_i = v_i \prod_{j = 1}^{i - 1} (1 - v_j)\), where the \(v_j\) are the values we sampled from the Beta distribution in the previous step.

Finally, we resample from the normal draws obtained in the first step, with each point weighted by its stick-breaking weight.

dp_curves <- function(alpha, n = 100) {
  # Draw atom locations from the standard normal base distribution
  s <- rnorm(n = n, mean = 0, sd = 1)
  # Draw stick-breaking proportions from Beta(1, alpha)
  v <- rbeta(n = n, shape1 = 1, shape2 = alpha)
  # Calculate the stick-breaking weights w_i = v_i * prod_{j < i} (1 - v_j)
  w <- numeric(n)
  w[1] <- v[1]
  w[2:n] <- sapply(2:n, function(i) v[i] * prod(1 - v[1:(i - 1)]))
  # Resample the atoms with probability proportional to their weights
  theta <- sample(s, prob = w, replace = TRUE)
  return(theta)
}
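As a quick sanity check (an illustrative aside, not part of the assignment; the choice \(\alpha = 10\) and the seed are arbitrary), we can draw a single DP sample and compare its ecdf against the standard normal base distribution:

# Illustrative check: one DP draw vs. the standard normal base
set.seed(1)
theta <- dp_curves(alpha = 10)
plot(ecdf(theta), main = "One DP draw (alpha = 10) vs. standard normal base")
curve(pnorm(x), add = TRUE, col = "red")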
  2. What happens when we change \(\alpha\)?

\(\textbf{As we increase the alpha value, the Dirichlet curves will follow the base distribution more closely due to the evenness in the spread of the weights}\). We will show this fact using our simulation below.

We will use our function dp_curves to explore this question. We use a range of \(\alpha\)-values from 1 to 1000. Then we generate DP draws from the function in the previous section for each alpha value. Lastly, we plot the empirical cumulative step functions of the draws to visualize the Dirichlet curves.

n <- 100
alpha_val <- c(1, 5, 10, 25, 50, 100, 200, 500, 1000)

vals_array <- matrix(0, nrow = length(alpha_val), ncol = n + 1)
vals_array[,1] <- alpha_val

set.seed(10)
for (i in seq_len(length(alpha_val))) {
  vals_array[i,2:ncol(vals_array)] <- dp_curves(alpha_val[i], n)
}

The graph generated by the code below depicts the DP curves, with the alpha value noted at the top of each panel. One observation is that the curves generated with lower alpha values are chunkier, with larger jumps. We overlay the empirical cdf of standard normal draws in red for each panel. In the panel with \(\alpha = 1\), we see that the Dirichlet curve doesn’t follow the normal ecdf very closely. However, as we increase alpha, the curves become less chunky, with fewer large jumps. By \(\alpha = 1000\), the DP curve follows the normal ecdf quite closely.

data.frame(vals_array) %>% 
  gather(group, value, X2:X101) %>% 
  select(-group) %>%
  rename(group = X1) %>%
  ggplot() + 
  stat_ecdf(mapping = aes(x = rnorm(length(alpha_val)*n)), color = "red") +
  stat_ecdf(mapping = aes(x = value)) +
  facet_wrap( ~ group) + 
  labs(x = "Value", y = "Cumulative Probability")

Notice that for smaller values of alpha, the generated weights are concentrated on a few atoms (in the \(\alpha = 1\) panel, the mass is concentrated near the two extremes). As alpha increases, the weights become more evenly spread. Thus, \(\textbf{as we increase the alpha value, we spread the weight more evenly across the sampled points, so the draws follow the base distribution more closely}\).
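To make the claim about weight concentration concrete, here is a small illustrative check (the helper stick_weights is hypothetical, not part of the original solution): it computes the stick-breaking weights directly and reports the largest single weight for a few alpha values. For small alpha a single atom carries most of the mass; for large alpha the largest weight is tiny, so the mass is spread evenly.

# Illustrative helper: stick-breaking weights w_i = v_i * prod_{j < i} (1 - v_j)
stick_weights <- function(alpha, n = 100) {
  v <- rbeta(n, shape1 = 1, shape2 = alpha)
  v * c(1, cumprod(1 - v)[-n])
}
set.seed(10)
# Largest single weight for each alpha: close to 1 when alpha is small,
# much smaller when alpha is large
sapply(c(1, 10, 100, 1000), function(a) max(stick_weights(a)))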

  3. What does this correspond to in a more standard distribution, say the normal distribution?

\(\textbf{The alpha value plays a role analogous to the spread or variance in a standard distribution: it controls how much a sampled distribution varies around the base distribution}\). In the normal example we used, the ecdfs had more, smaller jumps as alpha got larger, meaning the weight on our points is spread more evenly across the entire range instead of concentrating in a few clusters.

Exercise 2

In this problem, we will explore hourly wages across states and occupations.

  1. First, we will load in occupational employment statistics from the Bureau of Labor Statistics (the occupational employment statistics page).

We read the missing values in as NA so that R treats them as missing.

employment <- read_excel("employment_stats.xlsx", na = "NA")
  2. Next, we construct a two-way table of median hourly wages with states as rows and job titles as columns.

We first pare down the data set by selecting the features of interest (namely, state, job title, and hourly median wage). Then we convert hourly_median_wage to a numeric value for easier analysis later on. Lastly, we spread the data to obtain a two-way table whose rows are states and whose columns are job titles; each cell contains the median hourly wage for that particular state and job.

occ_stats <- employment %>%
  rename(state = STATE,
         job_title = OCC_TITLE,
         hourly_median_wage = H_MEDIAN) %>%
  # Convert the wage column to numeric (non-numeric entries become NA)
  mutate(hourly_median_wage = as.numeric(hourly_median_wage)) %>%
  select(state, job_title, hourly_median_wage) %>%
  spread(job_title, hourly_median_wage)

We convert the result to a plain data frame (tibbles do not keep row names) and add the states as row names.

occ_stats <- as.data.frame(occ_stats)
row.names(occ_stats) <- occ_stats$state

Lastly, we remove the states column and convert the data frame to a matrix.

occ_stats <- occ_stats %>%
  select(-state) %>%
  as.matrix()

Taking a look at a section of our two-way table, we see that it has quite a few missing values. In the next step, we will impute the missing values in order to perform Friedman’s test.

occ_stats[1:5, 1:3]
##            Accountants and Auditors Actors Actuaries
## Alabama                       29.38     NA     43.27
## Alaska                        35.77     NA        NA
## Arizona                       28.18  14.73     41.68
## Arkansas                      27.55     NA        NA
## California                    35.12     NA     48.30
  3. Next, we perform Friedman's test on the constructed table. What are our null and alternative hypotheses? What are our conclusions?

We will first remove columns that have more than 15% missing values, dropping occupations for which we don’t have much data across the different states. We use the fairly strict 15% threshold to keep the two-way table dense; having too many missing values would let the imputation below influence the ranks and the results of the Friedman test.

# Drop columns (occupations) with more than 15% missing values
na_cols <- which(colMeans(is.na(occ_stats)) > 0.15)
occ_stats <- occ_stats[, -na_cols]

Next, we will impute the missing values within each column, replacing each missing value with the column median. Since Friedman’s test is a nonparametric method whose test statistic depends on the ranks of the observations, imputing the median keeps the imputed values in a typical position for that occupation; it does introduce some ties at the median rank, but this is a small effect we can overlook.

We end up with 54 states (as before) and 490 jobs (down from the 844 jobs we had previously).

# Impute remaining missing values with the column (occupation) median
for (i in seq_len(nrow(occ_stats))) {
  for (j in seq_len(ncol(occ_stats))) {
    if (is.na(occ_stats[i, j])) {
      occ_stats[i, j] <- median(occ_stats[, j], na.rm = TRUE)
    }
  }
}

We perform the Friedman test on our two-way table of median hourly wages across states. The Friedman test is used for one-way repeated measures analysis of variance by ranks.

In our data set, the states are our “blocks” and the job titles are the “treatments”. The null hypothesis is that, apart from a state effect, all job titles have the same distribution of median hourly wages; the alternative is that at least one job consistently has higher or lower median wages than the others.

We use the function friedman.test to perform Friedman’s test.

friedman.test(occ_stats) 
## 
##  Friedman rank sum test
## 
## data:  occ_stats
## Friedman chi-squared = 3239.6, df = 489, p-value < 2.2e-16

We obtain a test statistic of 3239.6 on 489 degrees of freedom (the number of treatments, 490, minus 1) and a p-value of less than 0.01. We therefore reject the null hypothesis and conclude that at least some occupations consistently have higher or lower median hourly wages than others across states.

  4. Lastly, we perform median polish on this table. What are our conclusions?

We now follow the lecture notes from class (ANOVA Lecture) and perform median polish on our two-way table. With median polish, we are fitting the additive model \(Y_{ij} = \mu + \alpha_i + \beta_j + \epsilon_{ij}\), where \(\alpha_i\) is the state (row) effect, \(\beta_j\) is the job-title (column) effect, and \(\epsilon_{ij}\) is the residual.

# Compute the median
mu <- median(occ_stats, na.rm = T)

# Compute median residuals
med_polish <- as.matrix(occ_stats) - mu

# Row median (adding and residuals)
row_median <- apply(med_polish, 1, median, na.rm = T)
med_polish <- med_polish - row_median
med_polish <- cbind(roweff = c(row_median), med_polish)

# Column median (adding and residuals)
med_polish <- rbind(coleff = rep(0, ncol(med_polish)), med_polish)
med_polish[1,1] <- mu
col_median <- apply(med_polish[2:nrow(med_polish),], 2, median, 
                    na.rm = T)
med_polish[1,] <- med_polish[1,] + col_median

# Create new residual table from column medians
med_polish[2:nrow(med_polish),] <- sweep(med_polish[2:nrow(med_polish),],
                                         2, col_median) 

# Second iteration adding row median
second_row_median <- apply(med_polish[,2:ncol(med_polish)], 1, median, na.rm = T)
med_polish[,1] <- med_polish[,1] + second_row_median
med_polish[,2:ncol(med_polish)] <- sweep(med_polish[,2:ncol(med_polish)], 1, second_row_median) 

# Second iteration adding column median
second_col_median <- apply(med_polish[2:nrow(med_polish),], 2, median, na.rm = T)
med_polish[1,] <- med_polish[1,] + second_col_median 
med_polish[2:nrow(med_polish),] <- sweep(med_polish[2:nrow(med_polish),],
                                         2, second_col_median)

We end up with the following table of median polish effects and residuals. At first glance, we see that Alaska, Washington DC, Massachusetts, New York, and Washington have the largest row effects, i.e., the highest median hourly wages overall. This is interesting: while the latter four are well-known urban hubs, Alaska is a sparsely populated state without much industry.

We clearly see that the column effects for job title are a stronger factor in distinguishing median hourly wages than geographic region (the states): the job-title effects range from the 10s into the 30s, while the state effects consistently stay below about 10 in absolute value.

The residuals are the portion of the median hourly wages that cannot be explained by the additive state and job-title effects. For several jobs the residuals are fairly large. Across rows, the residuals seem to even out, but for some occupations, such as Administrative Law Judges, the residuals range from roughly -20 to +20. This suggests that wages for some occupations are not well explained by an additive state-plus-job-title model.
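As a quick way to see which occupations the additive model fits worst, we can rank the residual columns by their largest absolute residual (an illustrative check, not part of the original write-up; resid_block and worst_fit are names introduced here).

# Largest absolute residual per occupation
# (drop the coleff row and the roweff column to get the residual block)
resid_block <- med_polish[-1, -1]
worst_fit <- apply(resid_block, 2, function(x) max(abs(x), na.rm = TRUE))
head(sort(worst_fit, decreasing = TRUE), 5)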

Here’s a section of our final median polish table.

med_polish[1:5, 1:3]
##            roweff Accountants and Auditors
## coleff   19.04000                 10.62000
## Alabama  -0.56250                -10.28250
## Alaska    5.09500                 -9.55000
## Arizona   0.30000                -12.34500
## Arkansas -1.86875                -10.80625
##          Administrative Law Judges, Adjudicators, and Hearing Officers
## coleff                                                        23.75750
## Alabama                                                        5.34750
## Alaska                                                       -21.70000
## Arizona                                                      -24.33500
## Arkansas                                                     -55.58625

  5. Do model fit evaluation using a Tukey additivity plot. What are our conclusions?

We will first make a Tukey additivity plot. We will use the function medpolish to repeat the process we performed manually above.

tukey <- medpolish(occ_stats, na.rm = T)
## 1: 59528.3
## 2: 57951.14
## Final: 57936.25
plot(tukey)

The Tukey additivity plot is a diagnostic for the additive model \(Y_{ij} = \mu + \alpha_i + \beta_j + \epsilon_{ij}\). Note that \(\alpha_i\) is the row (state) effect, \(\beta_j\) is the column (job title) effect, and \(\epsilon_{ij}\) is the residual.

The Tukey additivity plot plots the residuals \(\epsilon_{ij}\) against the diagnostic comparison values, defined as \(\frac{\alpha_i \beta_j}{\mu}\). If the data are approximately additive after a power transformation \(Y^p\), the cluster of points should have a slope of approximately \(1 - p\) (as shown via the Taylor expansion and equation wrangling from lecture).

We see that our points follow a somewhat positive slope, which suggests that the power \(p\) for the power transformation is less than 1. We also have to keep in mind that we removed the sparser occupation columns and imputed medians, which may influence this diagnostic as well.
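As a rough numerical companion to the plot (an illustrative sketch, not part of the original write-up), we can estimate the slope of residuals versus comparison values with a simple least-squares fit and back out the suggested power \(p = 1 - \text{slope}\), using the components returned by medpolish:

# Comparison values alpha_i * beta_j / mu from the medpolish fit
cv <- outer(tukey$row, tukey$col) / tukey$overall
slope <- unname(coef(lm(as.vector(tukey$residuals) ~ as.vector(cv)))[2])
c(slope = slope, suggested_p = 1 - slope)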

Exercise 3

In exercise 3, we will explore survival data analysis.

We first load the data set hodgkins from the package npsm. The data is from a clinical trial in early Hodgkin’s disease. The subjects received one of two treatments: radiation of affected node (AN) or total nodal radiation (TN).

We also have an indicator variable for whether the patient relapsed and the observed survival time for the patient. Each row corresponds to one patient.

After taking a quick look at the data, we will plot the Kaplan-Meier estimated survival curves for both treatments.

data("hodgkins")

We see that survival time ranges from 86 to 2177. The metadata gives a source for the data but offers no insight into the unit of time. We will assume that survival time is in days, which is consistent with a multi-year clinical follow-up for a lymph node disease like Hodgkin’s.

We see that relapse is a binary variable and we have two treatments.

summary(hodgkins)
##       time         relapse       trt    
##  Min.   :  86   Min.   :0.0000   AN:25  
##  1st Qu.: 505   1st Qu.:0.0000   TN:24  
##  Median :1408   Median :0.0000          
##  Mean   :1214   Mean   :0.4286          
##  3rd Qu.:1807   3rd Qu.:1.0000          
##  Max.   :2177   Max.   :1.0000

We now find the Kaplan-Meier estimates for the two treatments. We can do this by calling survfit() and extracting the summary of the estimates.

hodgkins$survival <- Surv(time = rep(0, nrow(hodgkins)), 
                          time2 = as.numeric(hodgkins$time), 
                          event = hodgkins$relapse)
kaplan_fit <- survfit(hodgkins$survival ~ hodgkins$trt)

The summary gives, for each treatment, the event times, the number of individuals at risk at each time, the number of events (relapses) occurring at that time, and the survival estimate, along with confidence intervals.

# summary(kaplan_fit)

We plot the Kaplan-Meier curves. While the curves for the two treatments start out in the same place, there is a large dip in the probability of relapse-free survival for those with the AN treatment. The TN curve levels out at a probability of around 0.8, but the AN curve dips below 0.4 after 1500 days. For both treatments, the tick marks, which indicate censoring times (the drops in the curves mark the relapse events), are concentrated after 1000 days. This pattern is also visible in the summary of our survfit above, which we commented out to avoid flooding the final HTML file with output.

plot(kaplan_fit, lty = 1:2, xlab = "Survival Time (Days)", 
     ylab = "Probability of Hodgkin's-Free Survival")
legend("bottomleft", c("AN", "TN"), lty=1:2, bty="n")

Intuitively, we would guess that there is a difference in relapse-free survival between the two treatments. We test whether the two survival curves differ using the log-rank test (survdiff). We obtain a p-value of about 0.001, a significant result, which confirms what we suspected: there is a significant difference between the two treatments, AN and TN.

survdiff(Surv(hodgkins$time,hodgkins$relapse) ~ hodgkins$trt)
## Call:
## survdiff(formula = Surv(hodgkins$time, hodgkins$relapse) ~ hodgkins$trt)
## 
##                  N Observed Expected (O-E)^2/E (O-E)^2/V
## hodgkins$trt=AN 25       16     8.72      6.07      10.6
## hodgkins$trt=TN 24        5    12.28      4.31      10.6
## 
##  Chisq= 10.6  on 1 degrees of freedom, p= 0.00115

Exercise 4

To simulate survival data, it is often useful to simulate multiple time points, for example the time to event and the time to end of study. Events occurring after the end of the study are then censored.

Suppose the time to event of interest follows an exponential distribution with a mean of 5 years and the time to end of study follows an exponential distribution with a mean of 1.8 years. For a sample size \(n = 100\), we will simulate survival times from this model.

We first note that R's random sampling function for the exponential distribution, rexp, is parameterized by a rate, which is the reciprocal of the mean; a mean of 5 years therefore corresponds to rate = 1/5 and a mean of 1.8 years to rate = 1/1.8. We then generate our data and put it in a data frame.

We then determine whether the event is observed by checking whether the event time is less than the study-end time. If it is, the event occurred before censoring; otherwise the observation is censored at the study end. We use an indicator variable to mark the observed events.

n <- 100

set.seed(10)
times <- tibble(event_time = rexp(n, rate = 1/5),
                study_end = rexp(n, rate = 1/1.8)) %>%
  mutate(occur = ifelse(study_end > event_time, 1, 0),
         end_time = ifelse(occur == 1, event_time, study_end))
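As a side check (illustrative, not part of the original write-up), for independent exponential event and censoring times the probability that the event is observed is rate_event / (rate_event + rate_censoring), which we can compare with the simulated fraction of observed events:

# Theoretical vs. simulated probability that the event occurs before the study ends
r_event <- 1 / 5
r_censor <- 1 / 1.8
c(theoretical = r_event / (r_event + r_censor), simulated = mean(times$occur))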

Next, we obtain the Kaplan-Meier estimates. We see that they range from about 0.2 to 1, and that the standard errors of the estimates are smaller at earlier times. This makes sense, as more individuals are still at risk early on; each event or censoring reduces the risk set, and a smaller risk set makes the later estimates less certain.

times$survival <- Surv(time = rep(0, nrow(times)), time2 = times$end_time, 
                       event = times$occur)
kaplan_fit <- survfit(times$survival ~ 1)
# summary(kaplan_fit) 

Lastly, we plot the Kaplan-Meier estimate. The tick marks, which indicate censored observations, are concentrated near the beginning (again, there are more observations at risk when the time is small, and the study-end times have a mean of only 1.8 years). The drops in survival probability, which mark the events, become larger and more widely spaced as survival time increases.

plot(kaplan_fit, lty = 1:2, xlab = "Survival Time", 
     ylab = "Probability of Survival", conf.int = FALSE)