This is an R-code companion to Session 2, around pages 20 to 26.

Let’s load Hadley’s tidyverse and other packages we need before we get going. The following will also install them if you don’t have them.

packs <- c("tidyverse", "expm", "viridis")
# install any that are not here
install_em <- setdiff(packs, rownames(installed.packages()))
if (length(install_em) > 0) install.packages(pkgs = install_em)
# load em up
dump <- lapply(packs, function(x) library(x, character.only = TRUE))
Loading tidyverse: ggplot2
Loading tidyverse: tibble
Loading tidyverse: tidyr
Loading tidyverse: readr
Loading tidyverse: purrr
Loading tidyverse: dplyr
Conflicts with tidy packages --------------------------------------------------------------------------------------
filter(): dplyr, stats
lag():    dplyr, stats
Loading required package: Matrix

Attaching package: ‘Matrix’

The following object is masked from ‘package:tidyr’:

    expand


Attaching package: ‘expm’

The following object is masked from ‘package:Matrix’:

    expm

Loading required package: viridisLite

Make the TPM

The basic setup is that we define a transition probability matrix on the integers from 1 to 5 with scattering boundaries. In code, we can make the TPM like this:

P <- matrix(
  c(.2, .2, .2, .2, .2,
    .2, .3, .5, 0, 0,
    0,  .3, .4, .3, 0,
    0, 0, .5, .3, .2,
    .2, .2, .2, .2, .2),
  ncol = 5,
  byrow = TRUE
)
P
     [,1] [,2] [,3] [,4] [,5]
[1,]  0.2  0.2  0.2  0.2  0.2
[2,]  0.2  0.3  0.5  0.0  0.0
[3,]  0.0  0.3  0.4  0.3  0.0
[4,]  0.0  0.0  0.5  0.3  0.2
[5,]  0.2  0.2  0.2  0.2  0.2
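
Before going on, a quick sanity check (my addition, not part of the original write-up): every row of a TPM must sum to one, and we can verify that directly, using dplyr::near() which is already loaded with the tidyverse:

rowSums(P)                  # each row should sum to 1
all(near(rowSums(P), 1))    # TRUE; near() allows for floating-point rounding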

A function to Simulate a Markov Chain from \(\mathbf{P}\)

For any TPM, \(\mathbf{P}\), we can simulate the chain with a function like this. In this version, we will return a tibble.

#' function to simulate from a transition probability matrix
#' @param P the transition probability matrix.  The states are assumed to be
#' the integers from 1 to the number of rows/columns.  Rows must sum to one.
#' @param init starting value
#' @param reps number of steps of the chain to make
sim_tpm <- function(P, init, reps) {
  stopifnot(all(near(rowSums(P), 1))) # make sure rows sum to 1
  stopifnot(init %in% 1:nrow(P)) # make sure starting point is valid
  
  ret  <- rep(NA, reps)  # to return values at the end
  
  ret[1] <- init # set first state to init
  for (i in 2:reps) {  # update states 2...reps, each according to the previous state and P
     ret[i] <- sample(x = 1:nrow(P), size = 1, prob = P[ret[i - 1], ])
  }
  
  tibble::tibble(iter = 1:reps, state = ret)
}
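
Just as a quick smoke test (my addition, with a hypothetical seed), we can run a short chain from state 3 and eyeball the resulting tibble:

set.seed(42)                 # hypothetical seed, just for this quick check
sim_tpm(P, init = 3, reps = 10)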

Simulate the chain from different starting values

Let’s do five runs of the chain, each for 1000 iterations, and each starting from a different value. We will put the results into a big tibble at the end.

set.seed(5)  # for reproducibility
runs5 <- lapply(1:5, function(x) sim_tpm(P, x, 1000)) %>%
  bind_rows(.id = "starting_state")
# look at a few rows of that
runs5[1:20, ]

And we can plot those trajectories:

ggplot(runs5, aes(x = iter, y = state, colour = starting_state)) +
  geom_line() +
  facet_wrap(~ starting_state, ncol = 3)

And we can also look at how much time each chain has spent in each state:

ggplot(runs5, aes(x = state, fill = starting_state)) +
  geom_histogram(bins = 5) +
  facet_wrap(~ starting_state, ncol = 3)

and we see that those are all pretty similar, regardless of the starting state, as we might expect.
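
To put numbers on that impression, here is one way to tabulate the fraction of iterations each chain spends in each state (a sketch using dplyr/tidyr verbs that are already loaded; not part of the original document):

# fraction of iterations spent in each state, one row per starting state (sketch)
runs5 %>%
  count(starting_state, state) %>%
  group_by(starting_state) %>%
  mutate(prop = n / sum(n)) %>%
  select(-n) %>%
  spread(key = state, value = prop)

The five rows should come out quite close to one another, echoing the histograms above.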

\(n\)-step Probabilities by matrix multiplication

We can represent our knowledge (or uncertainty) about the state of the system using a row vector of probabilities. For example, \[ \boldsymbol{v}_0 = (0, 0, 0, 1, 0) \] tells us that at step 0 we are certain the system is in state 4. Probabilities after one step are found by matrix multiplication, i.e., \(\boldsymbol{v}_1 = \boldsymbol{v}_0\mathbf{P}\). In code this looks like:

# let's say v0 state is known with certainty:
v0 <- c(0, 0, 0, 1, 0)
# probs after one step
v1 <- v0 %*% P
v1
     [,1] [,2] [,3] [,4] [,5]
[1,]    0    0  0.5  0.3  0.2
# another step
v2 <- v1 %*% P
v2
     [,1] [,2] [,3] [,4] [,5]
[1,] 0.04 0.19 0.39 0.28  0.1
# and another
v3 <- v2 %*% P
v3
      [,1]  [,2]  [,3]  [,4]  [,5]
[1,] 0.066 0.202 0.419 0.229 0.084

Note that we got to v3 like this:

v3_alt <- (((v0 %*% P) %*% P) %*% P)
v3_alt
      [,1]  [,2]  [,3]  [,4]  [,5]
[1,] 0.066 0.202 0.419 0.229 0.084

But, recall that matrix multiplication is associative (which means you can change how the matrices are grouped by parentheses without changing the value of the product), so we can rewrite that as

v3_alt2 <- v0 %*% (P %*% P %*% P)
v3_alt2
      [,1]  [,2]  [,3]  [,4]  [,5]
[1,] 0.066 0.202 0.419 0.229 0.084

And that suggests that we can get the \(n\)-step probabilities of ending up somewhere by multiplying \(\boldsymbol{v}_0\) by \[ \mathbf{P}^n = \mathbf{P}\mathbf{P}\mathbf{P}\cdots \mathbf{P}\mathbf{P} ~~~~~~(n~\mbox{times}) \] R does not have a built-in function for multiplying a square matrix by itself \(n\) times, but the expm package provides %^%. Let’s look at \(\mathbf{P}\) when we power it up…

library(expm)
P %^% 1  # Just P itself
     [,1] [,2] [,3] [,4] [,5]
[1,]  0.2  0.2  0.2  0.2  0.2
[2,]  0.2  0.3  0.5  0.0  0.0
[3,]  0.0  0.3  0.4  0.3  0.0
[4,]  0.0  0.0  0.5  0.3  0.2
[5,]  0.2  0.2  0.2  0.2  0.2
P %^% 2  # After two steps
     [,1] [,2] [,3] [,4] [,5]
[1,] 0.12 0.20 0.36 0.20 0.12
[2,] 0.10 0.28 0.39 0.19 0.04
[3,] 0.06 0.21 0.46 0.21 0.06
[4,] 0.04 0.19 0.39 0.28 0.10
[5,] 0.12 0.20 0.36 0.20 0.12
P %^% 3  # After three steps
      [,1]  [,2]  [,3]  [,4]  [,5]
[1,] 0.088 0.216 0.392 0.216 0.088
[2,] 0.084 0.229 0.419 0.202 0.066
[3,] 0.066 0.225 0.418 0.225 0.066
[4,] 0.066 0.202 0.419 0.229 0.084
[5,] 0.088 0.216 0.392 0.216 0.088
P %^% 10 # After 10 steps
           [,1]      [,2]      [,3]      [,4]       [,5]
[1,] 0.07317830 0.2195098 0.4146239 0.2195098 0.07317830
[2,] 0.07317296 0.2195152 0.4146335 0.2195093 0.07316903
[3,] 0.07316778 0.2195130 0.4146385 0.2195130 0.07316778
[4,] 0.07316903 0.2195093 0.4146335 0.2195152 0.07317296
[5,] 0.07317830 0.2195098 0.4146239 0.2195098 0.07317830

Wow! The rows all seem to be converging to the same vector.
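
One way to quantify that (my sketch, not in the original) is to power the matrix up further and measure how much its rows differ from one another:

P50 <- P %^% 50
max(abs(sweep(P50, MARGIN = 2, STATS = P50[1, ])))  # largest difference between any row and the first row; should be essentially zero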

Just for fun, let’s look at the powered TPM graphically

Just to play around with some ggplot hooliganism, let’s raise the matrix to the powers 1, 2, 3, 4, 5, 6, 10, 50, and 100 and plot the resulting matrices as heatmaps.

ns <- c(1, 2, 3, 4, 5, 6, 10, 50, 100)
names(ns) <- ns
# get all the tpm values in long format
probs <- lapply(ns, function(x) {
  M <- P %^% x
  colnames(M) <- 1:5
  M %>% 
    as_tibble() %>%
    mutate(from = 1:5) %>%
    gather(key = "to", value = "prob", -from)
  }) %>%
  bind_rows(.id = "power") %>%
  mutate(power = as.integer(power))
# plot em
ggplot(probs, aes(x = to, y = from, fill = prob)) +
  geom_raster() +
  facet_wrap(~ power, ncol = 3) +
  scale_fill_viridis() +
  geom_hline(yintercept = 0:5 - 0.5, colour = "white") +
  geom_vline(xintercept = 0:5 - 0.5, colour = "white")

And we see that the values in the rows of the matrix are all converging to the same vector. This vector is called the limiting distribution. And a Markov chain that has this property is called ergodic.
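
For the curious, the limiting distribution can also be computed directly (a sketch, not part of the original session) as the left eigenvector of \(\mathbf{P}\) with eigenvalue 1, rescaled to sum to one, and compared against a row of a high power of \(\mathbf{P}\):

# left eigenvectors of P are the (right) eigenvectors of t(P); the one with
# eigenvalue 1 gives the limiting distribution (sketch, assuming the chain is ergodic)
ev <- eigen(t(P))
pi_lim <- Re(ev$vectors[, 1])   # eigenvalue 1 is listed first for a stochastic matrix
pi_lim <- pi_lim / sum(pi_lim)  # normalize so the probabilities sum to one
pi_lim
(P %^% 100)[1, ]                # should match pi_lim to many decimal places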

Session Information

sessionInfo()
R version 3.3.3 (2017-03-06)
Platform: x86_64-apple-darwin13.4.0 (64-bit)
Running under: OS X El Capitan 10.11.6

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
 [1] viridis_0.4.0      viridisLite_0.2.0  expm_0.999-2       Matrix_1.2-8       dplyr_0.5.0       
 [6] purrr_0.2.2        readr_1.1.0        tidyr_0.6.1        tibble_1.3.3       ggplot2_2.2.1.9000
[11] tidyverse_1.0.0   

loaded via a namespace (and not attached):
 [1] Rcpp_0.12.11     knitr_1.16.5     magrittr_1.5     hms_0.3          munsell_0.4.3    lattice_0.20-34 
 [7] colorspace_1.2-6 R6_2.2.0         rlang_0.1.1      stringr_1.2.0    plyr_1.8.4       tools_3.3.3     
[13] grid_3.3.3       gtable_0.2.0     DBI_0.6-1        htmltools_0.3.6  assertthat_0.1   yaml_2.1.14     
[19] lazyeval_0.2.0   rprojroot_1.2    digest_0.6.12    gridExtra_2.2.1  base64enc_0.1-3  evaluate_0.10.1 
[25] rmarkdown_1.6    labeling_0.3     stringi_1.1.3    scales_0.4.1     backports_1.0.5  jsonlite_1.5    