Question 1

Exercise 5 on page 448 in Ruppert/Matteson. Suppose that \(\epsilon_{t}\) is white noise with mean 0 and variance \(1,\) that \(a_{t}=\epsilon_{t}\sqrt{7+a_{t-1}^{2}/2},\) and that \(Y_{t}=2+0.67 Y_{t-1}+a_{t}\).

  1. What is the mean of \(Y_{t} ?\)
  2. What is the ACF of \(Y_{t} ?\)
  3. What is the ACF of \(a_{t} ?\)
  4. What is the ACF of \(a_{t}^{2} ?\)

(a) \(\mathbf{Solution.}\qquad\) We can see that \(Y_{t}\) is an \(\textrm{AR}(1)\) process with an \(\textrm{ARCH}(1)\) error term. The mean of \(Y_{t}\) is, \[ \mu=\mathbb{E}\left[Y_{t}\right]=\frac{2}{1-0.67}\approx6.06 \]
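This follows from taking expectations on both sides of \(Y_{t}=2+0.67Y_{t-1}+a_{t}\), using \(\mathbb{E}\left[a_{t}\right]=0\) and stationarity of the mean: \[ \mu=2+0.67\mu\Longrightarrow\mu=\frac{2}{1-0.67} \]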

   

(b) \(\mathbf{Solution.}\qquad\) First we compute \(\sigma_{a}^{2}\), which is given by \[ \sigma_{a}^{2}=\textrm{Var}\left(a_{t}\right)=\frac{\alpha_{0}}{1-\alpha_{1}}=\frac{7}{1-\frac{1}{2}}=14 \]
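This matches the direct computation from the recursion: squaring \(a_{t}=\epsilon_{t}\sqrt{7+a_{t-1}^{2}/2}\) and taking expectations (using that \(\epsilon_{t}\) is independent of \(a_{t-1}\)) gives \[ \sigma_{a}^{2}=\mathbb{E}\left[\epsilon_{t}^{2}\right]\left(7+\tfrac{1}{2}\sigma_{a}^{2}\right)=7+\tfrac{1}{2}\sigma_{a}^{2}\Longrightarrow\sigma_{a}^{2}=14 \] which identifies the ARCH(1) parameters as \(\alpha_{0}=7\) and \(\alpha_{1}=\tfrac{1}{2}\).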

Then \(\gamma(m)\) and \(\gamma(0)\) are, \[\begin{align*} \gamma(m) & =\sigma_{a}^{2}\frac{(0.67)^{\left|m\right|}}{1-(0.67)^{2}}=\frac{14}{1-(0.67)^{2}}(0.67)^{\left|m\right|}\\ \gamma(0) & =\textrm{Var}\left(Y_{t}\right)=\frac{14}{1-(0.67)^{2}}\approx25.40 \end{align*}\]

Thus the ACF of \(Y_{t}\) is, \[ \rho_{Y}(m)=\frac{\gamma(m)}{\gamma(0)}=(0.67)^{\left|m\right|} \] For example, \(\rho_{Y}(1)=0.67\) and \(\rho_{Y}(2)=(0.67)^{2}\approx0.449\).

   

(c) \(\mathbf{Solution.}\qquad\) Since \(a_{t}\) is an \(\textrm{ARCH}(1)\) process, it is a weak white noise process: for \(m>0\), iterated expectations give \(\mathbb{E}\left[a_{t}a_{t+m}\right]=\mathbb{E}\left[a_{t}\sigma_{t+m}\right]\mathbb{E}\left[\epsilon_{t+m}\right]=0\) since \(\epsilon_{t+m}\) is independent of the past (the case \(m<0\) is symmetric). Hence, \[ \mathbb{E}\left[a_{t}a_{t+m}\right]=\begin{cases} \frac{\alpha_{0}}{1-\alpha_{1}} & m=0\\ 0 & m\neq0 \end{cases} \]

Then the ACF is, \[ \rho_{a}(m)=\begin{cases} 1 & \text{ if }m=0\\ 0 & \text{ if }m\neq0 \end{cases} \]

   

(d) \(\mathbf{Solution.}\qquad\) We know from slide 7 of lecture 9 that \(a_{t}^{2}\) follows an \(\textrm{AR}(1)\) process. Then, as in part (b), \[ \rho_{a^{2}}(m)=\left(\frac{1}{2}\right)^{\left|m\right|} \]
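To see why, write \(a_{t}^{2}=\sigma_{t}^{2}\epsilon_{t}^{2}=\alpha_{0}+\alpha_{1}a_{t-1}^{2}+V_{t}\), where \(V_{t}=\sigma_{t}^{2}\left(\epsilon_{t}^{2}-1\right)\) is the weak white noise process studied in Question 3. Hence \(a_{t}^{2}\) is an AR(1) process with coefficient \(\alpha_{1}=\tfrac{1}{2}\), and its ACF is \(\alpha_{1}^{\left|m\right|}\), exactly as in the AR(1) calculation of part (b).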

Question 2

Carry out a simulation of the process in Exercise 5 on page 448 in Ruppert/Matteson, assuming that \(\epsilon_{t}\) is normally distributed. Utilize a burn-in period of at least 25 and show a simulation of 500 values of the \(Y_{t}\) process. In particular:

  1. Show a plot of \(Y_{t}\) vs. time \(t\), and summarize your observations on this process.
  2. Carry out a normal QQ plot on the simulated values \(Y_{t}\) and summarize your results.
  3. What can you say about the distribution of \(Y_{t}\) if we change the problem to have \(a_{t}=\sqrt{7} \cdot \epsilon_{t}?\)

(a) \(\mathbf{Solution.}\qquad\) Below we simulate the process from question 1 and plot the path.
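A minimal sketch of the simulation, assuming standard-normal \(\epsilon_t\), a burn-in of 25, and an arbitrary seed (the exact numbers reported below depend on the seed of the original run):

```r
set.seed(1)                          # arbitrary seed; results vary with the seed
n <- 500; burn <- 25
eps <- rnorm(n + burn)               # epsilon_t ~ N(0, 1)
a <- y <- numeric(n + burn)
a[1] <- eps[1] * sqrt(7)             # start the ARCH recursion from a_0 = 0
y[1] <- 2 + a[1]
for (t in 2:(n + burn)) {
  a[t] <- eps[t] * sqrt(7 + a[t - 1]^2 / 2)
  y[t] <- 2 + 0.67 * y[t - 1] + a[t]
}
y <- y[-(1:burn)]                    # discard the burn-in values
plot(y, type = "l", xlab = "t", ylab = expression(Y[t]))
```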

We see that the process looks stationary, with brief periods of high volatility. Next, we compute the mean and standard deviation of the simulated process and see that they are close to the theoretical mean \(\approx6.06\) and standard deviation \(\sqrt{25.40}\approx5.04\) derived in question 1.

## [1] 5.950897
## [1] 5.331977

Also, if we check the partial autocorrelation plot, we see that the partial autocorrelations are negligible beyond lag 1, which is a property of an AR(1) process.
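The diagnostic plots can be reproduced from the simulated path (a sketch, reusing `y` from the simulation above):

```r
acf(y, main = "Sample ACF of simulated Y")    # should decay roughly like 0.67^m
pacf(y, main = "Sample PACF of simulated Y")  # spikes beyond lag 1 should be negligible
```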

   

(b) \(\mathbf{Solution.}\qquad\) The QQ plot of the process \(Y_t\) shows that the process is not normally distributed and has tails heavier than the normal distribution, as expected with ARCH errors.
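A sketch of the QQ plot, again reusing `y` from the simulation above:

```r
qqnorm(y, main = "Normal QQ plot of simulated Y")
qqline(y)   # S-shaped deviation at the ends indicates tails heavier than normal
```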

   

(c) \(\mathbf{Solution.}\qquad\) Since the change implies \(a_t \sim N(0,7)\), the distribution of \(Y_t\) will also be normal: after the change, \(Y_t\) follows an AR(1) process with independent Gaussian white-noise residuals, and a stationary Gaussian AR(1) process is itself normally distributed.
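This can be checked with a small modification of the earlier simulation sketch (same assumed seed and burn-in):

```r
set.seed(1)
n <- 500; burn <- 25
a <- sqrt(7) * rnorm(n + burn)       # a_t = sqrt(7) * eps_t, i.e. iid N(0, 7) errors
y2 <- numeric(n + burn)
y2[1] <- 2 + a[1]
for (t in 2:(n + burn)) y2[t] <- 2 + 0.67 * y2[t - 1] + a[t]
y2 <- y2[-(1:burn)]
qqnorm(y2); qqline(y2)               # the points should now fall close to the line
```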

   

Question 3

Suppose \(\left\{X_{n}\right\}_{n=-\infty}^{\infty}\) is a two-sided ARCH(1) process with iid white noise process \(\left\{\epsilon_{n}\right\}_{n=-\infty}^{\infty}\) and parameters \(\alpha_{0}, \alpha_{1}\).

  1. Show that \(V_{n}=\sigma_{n}^{2}\left(\epsilon_{n}^{2}-1\right)\) is a weakly stationary second-order process with mean \(0,\) and finite variance if \(E\left(\epsilon_{n}^{4}\right)<\infty\) and \[ \alpha_{1}^{2}<\frac{1}{E\left(\epsilon_{n}^{4}\right)} \] Hint: You can use results done in class, and also you can use the result that if \(0 \leq \eta<1\) then \[ \sum_{j=0}^{\infty} j \eta^{j}<\infty \]
  2. Show that \(E\left(V_{n}^{2}\right)\) is not finite if \[ \alpha_{1}^{2}>\frac{1}{E\left(\epsilon_{n}^{4}\right)} \]
  3. Interpret results (a) and (b) for the case of the white noise being normally distributed, i.e., what is the requirement on \(\alpha_{1}\) in order for \(V_{n}\) to be a second-order stationary process.

(a) \(\mathbf{Solution.}\qquad\) First we compute the mean and variance of \(V_{n}\), \[\begin{align*} \mathbb{E}\left[V_{n}\right] & =\mathbb{E}\left[\sigma_{n}^{2}\left(\varepsilon_{n}^{2}-1\right)\right]\\ & =\mathbb{E}\left[\sigma_{n}^{2}\right]\mathbb{E}\left[\left(\varepsilon_{n}^{2}-1\right)\right]\\ & =\mathbb{E}\left[\sigma_{n}^{2}\right]\left(\mathbb{E}\left[\varepsilon_{n}^{2}\right]-1\right)\\ & =0 \end{align*}\]

We can split the expectation into a product because \(\sigma_{n}^{2}\) depends only on observations up to \(X_{n-1}\), which are independent of \(\epsilon_{n}\). Next, computing the variance of \(V_{n}\), \[\begin{align} \text{Var}\left(V_{n}\right) & =\mathbb{E}\left[V_{n}^{2}\right]\nonumber \\ & =\mathbb{E}\left[\sigma_{n}^{4}\right]\mathbb{E}\left[\left(\varepsilon_{n}^{2}-1\right)^{2}\right]\label{eq:1} \end{align}\]

Our task is to show that \(\textrm{Var}\left(V_{n}\right)\) is finite. We can see from equation (\ref{eq:1}) that \(\textrm{Var}\left(V_{n}\right)\) will be finite as long as \(\mathbb{E}\left[\sigma_{n}^{4}\right]<\infty\) and \(\mathbb{E}\left[\varepsilon_{n}^{4}\right]<\infty\). Thus, we wish to find conditions for \(\mathbb{E}\left[\sigma_{n}^{4}\right]\) to be finite. By iterating the recursion \(\sigma_{n}^{2}=\alpha_{0}+\alpha_{1}X_{n-1}^{2}=\alpha_{0}+\alpha_{1}\varepsilon_{n-1}^{2}\sigma_{n-1}^{2}\), we can write \(\sigma_{n}^{2}\) as, \[ \sigma_{n}^{2}=\alpha_{0}\sum_{j=0}^{\infty}\alpha_{1}^{j}\varepsilon_{n-1}^{2}\cdots\varepsilon_{n-j}^{2} \]

where the \(j=0\) term is understood to equal \(1\) (an empty product). Then, exchanging expectation and summation (justified since every term is nonnegative), \[\begin{align} \mathbb{E}\left[\sigma_{n}^{4}\right] & =\mathbb{E}\left[\left(\alpha_{0}\sum_{j=0}^{\infty}\alpha_{1}^{j}\varepsilon_{n-1}^{2}\cdots\varepsilon_{n-j}^{2}\right)\left(\alpha_{0}\sum_{i=0}^{\infty}\alpha_{1}^{i}\varepsilon_{n-1}^{2}\cdots\varepsilon_{n-i}^{2}\right)\right]\nonumber \\ & =\alpha_{0}^{2}\mathbb{E}\left[\sum_{j=0}^{\infty}\sum_{i=0}^{\infty}\alpha_{1}^{j}\alpha_{1}^{i}\left(\varepsilon_{n-1}^{2}\cdots\varepsilon_{n-j}^{2}\right)\left(\varepsilon_{n-1}^{2}\cdots\varepsilon_{n-i}^{2}\right)\right]\nonumber \\ & =\alpha_{0}^{2}\sum_{j=0}^{\infty}\sum_{i=0}^{\infty}\alpha_{1}^{j}\alpha_{1}^{i}\mathbb{E}\left[\left(\varepsilon_{n-1}^{2}\cdots\varepsilon_{n-j}^{2}\right)\left(\varepsilon_{n-1}^{2}\cdots\varepsilon_{n-i}^{2}\right)\right]\label{eq:2} \end{align}\]

We focus on the expectation term to get some insight into how to rewrite the sum. For \(i\geq j\), the first \(j\) factors of each product overlap and appear as fourth powers, while the remaining \(i-j\) factors appear as squares: \[ \left(\varepsilon_{n-1}^{2}\cdots\varepsilon_{n-j}^{2}\right)\left(\varepsilon_{n-1}^{2}\cdots\varepsilon_{n-i}^{2}\right)=\varepsilon_{n-1}^{4}\cdots\varepsilon_{n-j}^{4}\cdot\varepsilon_{n-j-1}^{2}\cdots\varepsilon_{n-i}^{2} \]

Given that all the \(\varepsilon_{i}\) are iid, we have \(\mathbb{E}\left[\varepsilon_{n-i}^{2}\right]=1\) and \(\mathbb{E}\left[\varepsilon_{n-i}^{4}\right]=\mathbb{E}\left[\varepsilon_{n}^{4}\right]\) for all \(i\in\left\{ 1,2,\ldots\right\}\). Thus the expectation of each term becomes, \[ \mathbb{E}\left[\left(\varepsilon_{n-1}^{2}\cdots\varepsilon_{n-j}^{2}\right)\left(\varepsilon_{n-1}^{2}\cdots\varepsilon_{n-i}^{2}\right)\right]=\mathbb{E}\left[\varepsilon_{n}^{4}\right]^{\min(i,j)} \]

Thus we can express equation (\ref{eq:2}) as the sum of the diagonal terms (\(i=j\)) plus, by symmetry in \(i\) and \(j\), twice the sum of the cross-terms (\(i>j\)). \[\begin{align} & \alpha_{0}^{2}\sum_{j=0}^{\infty}\sum_{i=0}^{\infty}\alpha_{1}^{j}\alpha_{1}^{i}\mathbb{E}\left[\left(\varepsilon_{n-1}^{2}\cdots\varepsilon_{n-j}^{2}\right)\left(\varepsilon_{n-1}^{2}\cdots\varepsilon_{n-i}^{2}\right)\right]\nonumber \\ & =\alpha_{0}^{2}\left(\sum_{j=0}^{\infty}\left(\alpha_{1}^{2}\right)^{j}\mathbb{E}\left[\varepsilon_{n}^{4}\right]^{j}+2\sum_{j=0}^{\infty}\sum_{i>j}\alpha_{1}^{j}\alpha_{1}^{i}\mathbb{E}\left[\varepsilon_{n}^{4}\right]^{j}\right)\nonumber \\ & =\alpha_{0}^{2}\sum_{j=0}^{\infty}\left(\alpha_{1}^{2}\mathbb{E}\left[\varepsilon_{n}^{4}\right]\right)^{j}+2\alpha_{0}^{2}\sum_{j=0}^{\infty}\sum_{i=j+1}^{\infty}\alpha_{1}^{j}\alpha_{1}^{i}\mathbb{E}\left[\varepsilon_{n}^{4}\right]^{j}\nonumber \\ & =\alpha_{0}^{2}\sum_{j=0}^{\infty}\left(\alpha_{1}^{2}\mathbb{E}\left[\varepsilon_{n}^{4}\right]\right)^{j}+2\alpha_{0}^{2}\sum_{j=0}^{\infty}\sum_{i=0}^{\infty}\alpha_{1}^{j}\alpha_{1}^{j+1+i}\mathbb{E}\left[\varepsilon_{n}^{4}\right]^{j}\nonumber \\ & =\alpha_{0}^{2}\sum_{j=0}^{\infty}\left(\alpha_{1}^{2}\mathbb{E}\left[\varepsilon_{n}^{4}\right]\right)^{j}+2\alpha_{0}^{2}\alpha_{1}\sum_{j=0}^{\infty}\left(\alpha_{1}^{2}\right)^{j}\mathbb{E}\left[\varepsilon_{n}^{4}\right]^{j}\sum_{i=0}^{\infty}\alpha_{1}^{i}\label{eq:3} \end{align}\]

If \(\left|\alpha_{1}\right|<1\) (which is implied by \(\alpha_{1}^{2}\mathbb{E}\left[\varepsilon_{n}^{4}\right]<1\), since \(\mathbb{E}\left[\varepsilon_{n}^{4}\right]\geq\left(\mathbb{E}\left[\varepsilon_{n}^{2}\right]\right)^{2}=1\) by Jensen's inequality), then equation (\ref{eq:3}) becomes, \[\begin{align} & \alpha_{0}^{2}\sum_{j=0}^{\infty}\left(\alpha_{1}^{2}\mathbb{E}\left[\varepsilon_{n}^{4}\right]\right)^{j}+2\alpha_{0}^{2}\alpha_{1}\sum_{j=0}^{\infty}\left(\alpha_{1}^{2}\right)^{j}\mathbb{E}\left[\varepsilon_{n}^{4}\right]^{j}\sum_{i=0}^{\infty}\alpha_{1}^{i}\nonumber \\ & =\alpha_{0}^{2}\sum_{j=0}^{\infty}\left(\alpha_{1}^{2}\mathbb{E}\left[\varepsilon_{n}^{4}\right]\right)^{j}+\frac{2\alpha_{0}^{2}\alpha_{1}}{1-\alpha_{1}}\sum_{j=0}^{\infty}\left(\alpha_{1}^{2}\mathbb{E}\left[\varepsilon_{n}^{4}\right]\right)^{j}\nonumber \\ & =\alpha_{0}^{2}\left(1+\frac{2\alpha_{1}}{1-\alpha_{1}}\right)\sum_{j=0}^{\infty}\left(\alpha_{1}^{2}\mathbb{E}\left[\varepsilon_{n}^{4}\right]\right)^{j}\nonumber \\ & =\alpha_{0}^{2}\left(\frac{1+\alpha_{1}}{1-\alpha_{1}}\right)\sum_{j=0}^{\infty}\left(\alpha_{1}^{2}\mathbb{E}\left[\varepsilon_{n}^{4}\right]\right)^{j}\label{eq:4} \end{align}\]

The infinite geometric series in equation (\ref{eq:4}) will converge as long as \(\alpha_{1}^{2}\mathbb{E}\left[\varepsilon_{n}^{4}\right]<1\). In other words, we require that, \[ \mathbb{E}\left[\varepsilon_{n}^{4}\right]<\infty\quad\text{and}\quad\alpha_{1}^{2}<\frac{1}{\mathbb{E}\left[\varepsilon_{n}^{4}\right]} \] Finally, for \(m\neq n\) the same conditioning argument gives \(\mathbb{E}\left[V_{n}V_{m}\right]=0\), and the mean and variance of \(V_{n}\) do not depend on \(n\), so \(V_{n}\) is a weakly stationary second-order process.
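As a quick sanity check on equation (\ref{eq:4}), a Monte Carlo sketch with Gaussian noise and hypothetical parameters \(\alpha_{0}=1,\ \alpha_{1}=0.3\) (so that \(\alpha_{1}^{2}\mathbb{E}\left[\varepsilon_{n}^{4}\right]=0.27<1\)) should give \(\mathbb{E}\left[\sigma_{n}^{4}\right]\approx\alpha_{0}^{2}\frac{1+\alpha_{1}}{\left(1-\alpha_{1}\right)\left(1-3\alpha_{1}^{2}\right)}\approx2.54\):

```r
set.seed(1)
a0 <- 1; a1 <- 0.3                # hypothetical parameters with a1^2 * E[eps^4] = 0.27 < 1
n <- 1e6
eps <- rnorm(n)                   # Gaussian white noise, E[eps^4] = 3
sig2 <- numeric(n)
x2 <- 0                           # running value of X_{n-1}^2; the burn-in effect is negligible
for (i in 1:n) {
  sig2[i] <- a0 + a1 * x2
  x2 <- sig2[i] * eps[i]^2        # X_n^2 = sigma_n^2 * eps_n^2
}
mean(sig2^2)                                    # Monte Carlo estimate of E[sigma_n^4]
a0^2 * (1 + a1) / ((1 - a1) * (1 - 3 * a1^2))   # closed form derived above: approx 2.544
```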

   

(b) \(\mathbf{Solution.}\qquad\) From equation (\ref{eq:4}), if \(\alpha_{1}^{2}\mathbb{E}\left[\varepsilon_{n}^{4}\right]>1\), then the terms of the geometric series do not tend to zero, so the series diverges and \(\mathbb{E}\left[V_{n}^{2}\right]\) is not finite.

   

(c) \(\mathbf{Solution.}\qquad\) We know that the fourth moment of the standard normal distribution is 3. Thus, \[ \alpha_{1}^{2}<\frac{1}{3}\Longrightarrow\left|\alpha_{1}\right|<\frac{1}{\sqrt{3}}\approx0.577 \] That is, for Gaussian white noise, \(V_{n}\) is a second-order stationary process precisely when \(\left|\alpha_{1}\right|<1/\sqrt{3}\).