We investigate two ways to calculate the autocovariance function for AR and MA models. These methods extend to ARMA models. The instructions below help you work through the case of a causal AR(1) model, \[X_n = \phi X_{n-1} + \epsilon_n,\] where \(\{\epsilon_n\}\) is white noise with variance \(\sigma^2\), and \(-1<\phi<1\). Assume the process is stationary, i.e., it is initialized with a random draw from its stationary distribution. Show your working for both approaches A and B explained below. If you want an additional challenge, you can work through the AR(2) or ARMA(1,1) case instead.
A. Using the stochastic difference equation to obtain a difference equation for the autocovariance function (ACF). Start by writing the ACF as \[\gamma_h = \mathrm{Cov}(X_n,X_{n+h})= \mathrm{Cov}(X_n, \phi X_{n+h-1} + \epsilon_{n+h}), \mbox{ for $h>0$}.\] Writing the right hand side in terms of \(\gamma_{h-1}\) leads to an equation which is formally a first order linear homogeneous recurrence relation with constant coefficients. To solve such an equation, we look for solutions of the form \[\gamma_h = A\lambda^h.\] Substituting this general solution into the recurrence relation, together with an initial condition derived from explicitly computing \(\gamma_0\), provides an approach to finding two equations that can be solved for the two unknowns, \(A\) and \(\lambda\).
\(\mathbf{Solution.}\qquad\) First we derive a recurrence for \(\gamma_{h}\). Since the model is causal, \(X_{n}\) depends only on \(\{\varepsilon_{k}:k\le n\}\), so \(\mathrm{Cov}\left(X_{n},\varepsilon_{n+h}\right)=0\) for \(h>0\). Then \[\begin{align} \gamma_{h} & =\mathrm{Cov}\left(X_{n},X_{n+h}\right)\nonumber \\ & =\mathrm{Cov}\left(X_{n},\phi X_{n+h-1}+\varepsilon_{n+h}\right)\nonumber \\ & =\mathrm{Cov}\left(X_{n},\phi X_{n+h-1}\right)+\mathrm{Cov}\left(X_{n},\varepsilon_{n+h}\right)\nonumber \\ & =\phi\gamma_{h-1}\label{eq:1} \end{align}\]
Next, we look for solutions of the form \[\begin{equation} \gamma_{h}=A\lambda^{h}\label{eq:2} \end{equation}\]
Substituting equation (\ref{eq:2}) into equation (\ref{eq:1}), we have \[\begin{align*} A\lambda^{h} & =\phi\left(A\lambda^{h-1}\right)\\ \lambda & =\phi \end{align*}\]
By explicitly computing \(\gamma_{0},\) we can derive an initial condition. \[\begin{align*} \gamma_{0} & =\mathrm{Cov}\left(X_{n},X_{n}\right)\\ & =\mathrm{Cov}\left(\phi X_{n-1}+\varepsilon_{n},\phi X_{n-1}+\varepsilon_{n}\right)\\ & =\phi^{2}\mathrm{Cov}\left(X_{n-1},X_{n-1}\right)+2\phi\underbrace{\mathrm{Cov}\left(X_{n-1},\varepsilon_{n}\right)}_{=0}+\mathrm{Cov}\left(\varepsilon_{n},\varepsilon_{n}\right)\\ & =\phi^{2}\gamma_{0}+\sigma^{2} \end{align*}\]
Solving for \(\gamma_{0}\) , \[\begin{align*} \gamma_{0}\left(1-\phi^{2}\right) & =\sigma^{2}\\ \gamma_{0} & =\frac{\sigma^{2}}{\left(1-\phi^{2}\right)} \end{align*}\]
Finally, applying the initial condition to equation (\ref{eq:2}) with \(\lambda=\phi\), \[ \gamma_{0}=A\lambda^{0}\Longrightarrow A=\frac{\sigma^{2}}{\left(1-\phi^{2}\right)} \]
Thus, substituting \(A\) and \(\lambda\) into equation (\ref{eq:2}), the solution to the recurrence (\ref{eq:1}) is \[ \gamma_{h}=\frac{\sigma^{2}}{\left(1-\phi^{2}\right)}\phi^{h} \] for \(h\ge0\); since \(\gamma_{-h}=\gamma_{h}\), the general formula is \(\gamma_{h}=\sigma^{2}\phi^{|h|}/\left(1-\phi^{2}\right)\).
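As a quick numerical sanity check (not required by the problem), we can iterate the recurrence \(\gamma_{h}=\phi\gamma_{h-1}\) in R starting from \(\gamma_{0}\) and confirm that it reproduces the closed form; the parameter values below are arbitrary choices.
phi <- 0.6
sigma2 <- 1
H <- 10
gamma_rec <- numeric(H + 1)
gamma_rec[1] <- sigma2 / (1 - phi^2) # initial condition gamma_0
for (h in 1:H) gamma_rec[h + 1] <- phi * gamma_rec[h] # gamma_h = phi * gamma_{h-1}
gamma_closed <- sigma2 * phi^(0:H) / (1 - phi^2) # derived closed form
max(abs(gamma_rec - gamma_closed)) # should be numerically zero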
B. Via the MA(\(\infty\)) representation. Construct a Taylor series expansion of \(g(x)=(1-\phi x)^{-1}\) of the form \[g(x) = g_0 + g_1 x + g_2 x^2 + g_3 x^3 + \dots\] Do this either by hand or using your favorite math software (if you use software, please say what software you used and what you entered to get the output). Use this Taylor series to write down the MA(\(\infty\)) representation of an AR(1) model. Then, apply the general formula for the autocovariance function of an MA(\(\infty\)) process.
\(\mathbf{Solution.}\qquad\) To construct the Taylor expansion of \(g(x)\), we compute a few of its derivatives \[\begin{align*} g^{\prime}(x)= & \phi(1-\phi x)^{-2}\\ g^{\prime\prime}(x)= & 2\phi^{2}(1-\phi x)^{-3}\\ & \vdots\\ g^{(n)}(x)= & n!\phi^{n}(1-\phi x)^{-(n+1)} \end{align*}\]
Then the Taylor expansion becomes, \[\begin{align} g(x) & =\sum_{n=0}^{\infty}\frac{1}{n!}g^{(n)}(0)x^{n}\nonumber \\ & =\sum_{n=0}^{\infty}\frac{1}{n!}\left(n!\phi^{n}\right)x^{n}\nonumber \\ & =\sum_{n=0}^{\infty}\phi^{n}x^{n}\label{eq:3} \end{align}\]
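As an optional numerical check of the expansion (\ref{eq:3}), we can compare a partial sum of the series against \(g(x)\) directly in R; the values of \(\phi\) and \(x\) below are arbitrary, chosen so that \(|\phi x|<1\).
phi <- 0.6
x <- 0.5
g_exact <- 1 / (1 - phi * x)
g_partial <- sum(phi^(0:50) * x^(0:50)) # partial sum of the series up to n = 50
abs(g_exact - g_partial) # should be negligibly small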
Recall that the AR(1) model is given by, \[ X_{n}=\phi X_{n-1}+\varepsilon_{n} \]
Using the backshift operator \(BX_{n}=X_{n-1}\), the AR(1) model becomes, \[\begin{align*} X_{n} & =\phi BX_{n}+\varepsilon_{n}\\ (1-\phi B)X_{n} & =\varepsilon_{n}\\ X_{n} & =(1-\phi B)^{-1}\varepsilon_{n} \end{align*}\]
Applying the expansion (\ref{eq:3}) with \(x=B\), \[\begin{align} X_{n} & =\varepsilon_{n}+\phi\varepsilon_{n-1}+\phi^{2}\varepsilon_{n-2}+\ldots\nonumber \\ & =\sum_{i=0}^{\infty}\phi^{i}\varepsilon_{n-i}\label{eq:4} \end{align}\]
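To informally check the MA(\(\infty\)) representation (\ref{eq:4}), we can generate an AR(1) path by the recursion and compare it with a truncated version of the infinite sum; the truncation length K and the other values below are arbitrary choices.
set.seed(1)
phi <- 0.6
N <- 500 # length of simulated path
K <- 100 # truncation point for the infinite sum
eps <- rnorm(N)
# AR(1) by direct recursion (zero initial condition)
x_ar <- as.numeric(stats::filter(eps, filter = phi, method = "recursive"))
# Truncated MA(infinity): sum_{i=0}^{K} phi^i * eps_{n-i}
x_ma <- as.numeric(stats::filter(eps, filter = phi^(0:K), method = "convolution", sides = 1))
max(abs(x_ar[(K + 1):N] - x_ma[(K + 1):N])) # tiny: only truncation error remains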
Applying the formula for the autocovariance function to (\ref{eq:4}), \[ \begin{aligned}\gamma_{h} & =\mathrm{Cov}\left(X_{n},X_{n+h}\right)\\ & =\mathrm{Cov}\left(\sum_{i=0}^{\infty}\phi^{i}\varepsilon_{n-i},\sum_{j=0}^{\infty}\phi^{j}\varepsilon_{n+h-j}\right) \end{aligned} \]
Since the white noise terms are uncorrelated, the only nonzero covariances are those with \(j=h+i\), each equal to \(\sigma^{2}\). Thus, \[\begin{align} \gamma_{h} & =\sum_{i=0}^{\infty}\phi^{i}\phi^{h+i}\sigma^{2}\nonumber \\ & =\phi^{h}\sigma^{2}\sum_{i=0}^{\infty}\phi^{2i}\label{eq:5} \end{align}\]
For (\ref{eq:5}) to converge, so that the geometric series formula applies, we need \(\left|\phi\right|<1\), which holds by assumption. Then, \[\begin{align*} \gamma_{h} & =\frac{\sigma^{2}}{1-\phi^{2}}\phi^{h} \end{align*}\]
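As an additional informal check of this formula, we can compare it with the sample autocovariance of a long simulated AR(1) path; the path length and parameter values below are arbitrary, and sampling variability means the agreement is only approximate.
set.seed(2)
phi <- 0.6
x <- arima.sim(model = list(ar = phi), n = 1e5) # innovations have sd = 1
# Sample autocovariances at lags 0..5
gamma_hat <- drop(acf(x, lag.max = 5, type = "covariance", plot = FALSE)$acf)
gamma_theory <- phi^(0:5) / (1 - phi^2) # closed form with sigma^2 = 1
cbind(gamma_hat, gamma_theory) # the two columns should roughly agree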
C. Check your work for the specific case of an AR(1) model with \(\phi_1=0.6\) by comparing your formula with the result of the R function ARMAacf.
\(\mathbf{Solution.}\qquad\) From parts A and B we know the autocovariance function. We can construct the autocorrelation function, \[\begin{align*} \rho_{h} & =\frac{\gamma_{h}}{\gamma_{0}}\\ & =\phi^{h}\frac{\sigma^{2}}{1-\phi^{2}}\left(\frac{1-\phi^{2}}{\sigma^{2}}\right)\\ & =\phi^{h} \end{align*}\]
We can compare this autocorrelation function to the R function ARMAacf. Below we show the R code.
library(ggplot2) # needed for the plotting code below
# Parameters
n <- 20
phi <- 0.6
my_acf <- phi^(0:n) # derived ACF: rho_h = phi^h
r_acf <- ARMAacf(ar = phi, lag.max = n) # built-in R ACF
# Set up as a data frame for plotting; avoid naming a variable
# after the ARMAacf function itself
df <- data.frame(my_acf = my_acf, r_acf = r_acf, lag = 0:n)
# Plot both functions
ggplot(df, aes(x = lag)) +
geom_line(aes(y = my_acf, colour = "Derived ACF")) +
geom_line(aes(y = r_acf, colour = "ARMAacf"), linetype = "twodash") +
labs(colour = "Functions", x = "lag", y = "correlation") +
ggtitle("Comparison between our Derived ACF and ARMAacf") +
theme(plot.title = element_text(hjust = 0.5)) +
scale_color_manual(values = c("darkred", "steelblue"))
We see that the two curves coincide, confirming that our derived formula matches ARMAacf.
Compute the autocovariance function (ACF) of the random walk model. Specifically, find the ACF, \(\gamma_{mn}=\mathrm{Cov}(X_m,X_n)\), for the random walk model specified by \[ X_{n} = X_{n-1}+\epsilon_n,\] where \(\{\epsilon_n\}\) is white noise with variance \(\sigma^2\), and we use the initial value \(X_0=0\).
\(\mathbf{Solution.}\qquad\) Since \(X_{0}=0\), we have \(X_{n}=\sum_{i=1}^{n}\varepsilon_{i}\). Computing the ACF, \[\begin{align*} \gamma_{mn} & =\mathrm{Cov}\left(X_{m},X_{n}\right)\\ & =\mathrm{Cov}\left(\sum_{i=1}^{m}\varepsilon_{i},\sum_{j=1}^{n}\varepsilon_{j}\right) \end{align*}\]
The only terms that survive are those with \(j=i\); there are \(\min\{m,n\}\) such pairs, each contributing \(\sigma^{2}\). Thus, \[\begin{align*} \gamma_{mn} & =\min\{m,n\}\,\sigma^{2} \end{align*}\]
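A small Monte Carlo experiment in R can corroborate this formula; the values of m, n, sigma, and the number of replications below are arbitrary choices.
set.seed(3)
sigma <- 1; m <- 5; n <- 12; reps <- 1e5
# Simulate many independent random walks of length n
walks <- matrix(rnorm(reps * n, sd = sigma), nrow = reps)
X <- t(apply(walks, 1, cumsum)) # row r holds X_1, ..., X_n for replication r
cov(X[, m], X[, n]) # should be close to min(m, n) * sigma^2 = 5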
Explain which parts of your responses above made use of a source, meaning anything or anyone you consulted (including classmates or office hours) to help you write or check your answers. All sources are permitted, but failure to attribute material from a source is unethical. See the syllabus for additional information on grading.
For this homework I consulted the lecture notes and chapters 1 and 3 of Shumway/Stoffer. I briefly discussed problem 2.1 b) with Hunter Zhang to help him understand the intuition behind the problem.