Significance Tests for Event Studies

The abnormal and cumulative abnormal returns from event studies are typically used in two ways: either as dependent variables in subsequent regression analyses, or through direct interpretation. The latter approach seeks to answer the question of whether the distribution of the abnormal returns is systematically different from predicted. In the relevant literature, the focus is almost always on the mean of the distribution of abnormal returns and, specifically, on the question of whether this mean is different from zero (with statistical significance).

The answer about statistical significance is given by means of hypothesis testing, where the null hypothesis ($H_0$) states that the mean of the abnormal returns within the event window is zero and the alternative hypothesis ($H_1$) states the opposite. Formally, the testing framework reads as follows:

\begin{equation}H_0: \mu = 0\end{equation}

\begin{equation}H_1: \mu \neq 0\end{equation}

Note that $\mu$ may not only represent the mean of simple abnormal returns (ARs). Event studies are oftentimes multi-level calculations, where ARs are compounded to obtain cumulative abnormal returns (CARs), and CARs are 'averaged' to obtain cumulative average abnormal returns (CAARs) in cross-sectional studies (sometimes also called 'sample studies'). In long-run event studies, the buy-and-hold abnormal return (BHAR) is often used to replace CAR. Furthermore, BHARs can then again be 'averaged' to obtain ABHAR for cross-sectional studies. Significance testing can be applied to the mean of any of these returns, meaning that $\mu$ in the above testing framework can represent the mean of ARs, CARs, BHARs, AARs, CAARs, and ABHARs. Let us briefly revisit these six different forms of abnormal return calculations, as presented in the introduction:

\begin{equation}AR_{i,t}=R_{i,t}-E[R_{i,t}|\Omega_{i,t-1}], \end{equation}

where the term $E[R_{i,t}|\Omega_{i,t-1}]$ denotes the expected value of $R_{i,t}$ conditional on the information set at time $t-1$, which thus serves as the predicted return. Hence, if the event does not have any effect, the abnormal return will not systematically differ from zero, that is, it has mean zero. The information set at time $t-1$ can be of different nature and represent, for example, the constant-expected-return model, the market model, or a Fama-French factor model.

\begin{equation}AAR_{t}= \frac{1}{N} \sum\limits_{i=1}^{N}AR_{i,t} \end{equation}

\begin{equation}CAR_{i}=\sum\limits_{t=T_1 + 1}^{T_2} AR_{i,t} \end{equation}

\begin{equation}BHAR_{i}=\prod\limits_{t=T_1 + 1}^{T_2} (1 + R_{i,t}) -\prod\limits_{t=T_1 + 1}^{T_2} (1 + E[R_{i,t}|\Omega_{i,t}])\end{equation}
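To make these definitions concrete, the AR, CAR, and BHAR calculations can be sketched in Python. This is a minimal illustration assuming the market model for the expected return; the function names are ours and not part of any event-study software:

```python
import numpy as np

def market_model_ar(r, rm, est, evt):
    """Abnormal returns AR_{i,t} = R_{i,t} - (alpha + beta * R_{m,t}),
    with alpha and beta fitted by OLS over the estimation window."""
    r, rm = np.asarray(r, float), np.asarray(rm, float)
    beta, alpha = np.polyfit(rm[est], r[est], 1)   # market-model OLS fit
    return r[evt] - (alpha + beta * rm[evt])

def car(ar):
    """Cumulative abnormal return: sum of event-window ARs."""
    return float(np.sum(ar))

def bhar(r_evt, er_evt):
    """Buy-and-hold abnormal return: compounded actual minus
    compounded expected returns over the event window."""
    return float(np.prod(1 + np.asarray(r_evt, float))
                 - np.prod(1 + np.asarray(er_evt, float)))
```

For example, with `est = slice(0, 250)` and `evt = slice(250, 261)` on daily return series, `market_model_ar` reproduces equation (3) and `car`/`bhar` reproduce equations (5) and (6).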



For grouped observations, be it along the firm or the event dimension, we provide a precision-weighted CAAR, which offers a standardization similar to that of the Patell test:

\begin{equation}PWCAAR=\sum\limits_{i=1}^{N}\sum\limits_{t=T_1 + 1}^{T_2}\omega_i AR_{i, t}\end{equation}

where $$\omega_i = \frac{\left(\sum\limits_{t=T_1 + 1}^{T_2} S^2_{AR_{i,t}}\right)^{-0.5}}{ \sum\limits_{i=1}^{N}\left(\sum\limits_{t=T_1 + 1}^{T_2}S^2_{AR_{i,t}}\right)^{-0.5}}$$

and $S^2_{AR_{i,t}}$ denotes the forecast-error-corrected variance.
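The precision weighting can be sketched as follows. This is an illustrative helper (the function name is ours): `ar` holds the event-window ARs and `s2` the forecast-error-corrected variances $S^2_{AR_{i,t}}$ defined in the Patell section below:

```python
import numpy as np

def pwcaar(ar, s2):
    """Precision-weighted CAAR, following the formula above.
    ar: (N, L2) event-window abnormal returns
    s2: (N, L2) forecast-error-corrected variances S^2_{AR_{i,t}}"""
    ar, s2 = np.asarray(ar, float), np.asarray(s2, float)
    w = s2.sum(axis=1) ** -0.5          # per-firm precision weights
    w /= w.sum()                        # normalize weights to sum to 1
    return float((w[:, None] * ar).sum())
```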

The literature on event-study hypothesis testing covers a wide range of tests and is thus very comprehensive. Generally, significance tests can be classified into parametric and nonparametric tests. Parametric tests assume that the individual firm's abnormal returns are normally distributed, whereas nonparametric tests do not rely on any such assumption. Applied researchers typically carry out both parametric and nonparametric tests to verify that the research findings are not driven by outliers, which tend to affect the results of parametric tests but not those of nonparametric tests; for example, see Schipper and Smith (1983). Table 1 provides an overview together with links to the formulas of the different test statistics.

Table 1: Significance tests

Null hypothesis | Parametric tests | Nonparametric tests | Application
$H_0: E(AR) = 0$ | AR Test | — | Individual Event
$H_0: E(AAR) = 0$ | Cross-Sectional Test, Time-Series Standard Deviation Test, Patell Test, Adjusted Patell Test, Standardized Cross-Sectional Test, Adjusted Standardized Cross-Sectional Test, and Skewness Corrected Test | Generalized Sign Test, Generalized Rank T Test, and Generalized Rank Z Test | Sample of Events
$H_0: E(CAR) = 0$ | CAR t-test | — | Individual Event
$H_0: E(CAAR) = 0$ | Cross-Sectional Test, Time-Series Standard Deviation Test, Patell Test, Adjusted Patell Test, Standardized Cross-Sectional Test, Adjusted Standardized Cross-Sectional Test, and Skewness Corrected Test | Generalized Sign Test, Generalized Rank T Test, and Generalized Rank Z Test | Sample of Events
$H_0: E(BHAR) = 0$ | BHAR Test | — | Individual Event
$H_0: E(ABHAR) = 0$ | ABHAR Test and Skewness Corrected Test | — | Sample of Events

Among the most widely used parametric tests are those developed by Patell (1976) and Boehmer, Musumeci and Poulsen (1991), whereas among the most widely used nonparametric tests are the rank test of Corrado (1989) and the sign test of Cowan (1992).

Why different test statistics are needed

An informed choice of test statistic should be based on the research setting and the statistical issues pertaining to the observed data. Specifically, event-date clustering poses a problem leading to (i) cross-sectional correlation of abnormal returns and (ii) distortions from event-induced volatility changes. Cross-sectional correlation arises when sample studies focus on one or more events that happened for multiple firms on the same day(s). Event-induced volatility change, on the other hand, is a phenomenon common to many event types (e.g., M&A transactions) that becomes problematic when events are clustered. As a consequence, both issues impact the standard error that appears in the denominator of a t-test statistic (that is, of the test statistic of a parametric test). If this impact is ignored, the test statistic becomes inflated (in absolute value), leading to liberal inference: if the null hypothesis is true, it will be rejected with probability greater than the nominal significance level of the test; that is, there is an unduly large chance to 'find' something in the data, even though nothing happened in reality.

Comparison of test statistics

There have been several attempts to address these statistical issues. Patell (1976, 1979), for example, tried to overcome the t-test's sensitivity to event-induced volatility by standardizing the event window's ARs. He used the dispersion of the estimation interval's ARs to limit the impact of stocks with large return volatilities. Unfortunately, the test can still be liberal (that is, reject true null hypotheses too often), particularly when samples are characterized by non-normal returns, low prices, or illiquidity. Also, the test has been found to be still affected by event-induced volatility changes (Campbell and Wasley, 1993; Cowan and Sergeant, 1996; Maynes and Rumsey, 1993; Kolari and Pynnönen, 2010). Boehmer, Musumeci and Poulsen (1991) resolved this latter issue and developed a test statistic robust against volatility-changing events. Furthermore, the simulation study of Kolari and Pynnönen (2010) indicates an over-rejection of true null hypotheses for both the Patell and the BMP test if the cross-sectional correlation is ignored. Kolari and Pynnönen (2010) developed adjusted versions of both test statistics that account for such cross-sectional correlation.

The nonparametric rank test of Corrado and Zivney (1992) (RANK) is based on re-standardized event-window returns and has proven robust against induced volatility and cross-correlation. Sign tests are another type of nonparametric tests. One advantage over parametric t-tests (claimed by the authors of sign tests) is that they are able to also identify small levels of abnormal returns. Moreover, the use of nonparametric sign and rank tests has long been promoted by statisticians for applications that require robustness against non-normally distributed data. Past research (e.g., Fama, 1976) has argued that daily stock returns have distributions that are more fat-tailed (exhibit larger skewness or kurtosis) than normal distributions, which then suggests the use of nonparametric tests.

Several authors have further advanced the sign and rank tests pioneered by Cowan (1992) and Corrado and Zivney (1992). Campbell and Wasley (1993), for example, improved the RANK test by introducing an adjustment to the standard error for longer CARs, creating the Campbell-Wasley test statistic (CUM-RANK). Another nonparametric test is the generalized rank test (GRANK), which seems to have good properties for both shorter and longer CAR windows.

The Cowan (1992) sign test (SIGN) is also used for testing CARs by comparing the proportion of positive ARs close to an event to the proportion of positive ARs from a 'normal' (that is, event-free) period. Because this test only uses the sign of the abnormal returns, but not their magnitude, associated event-induced (excess) volatility does not inflate the null-rejection rates of the test; furthermore, the test is robust against asymmetric return distributions.

Overall, when comparing the different test statistics, the relevant literature has come to the following findings and recommendations (see Table 2 for further details):

  1. Parametric tests based on standardized abnormal returns perform better than those based on non-standardized returns.
  2. Generally, nonparametric tests tend to be more powerful than parametric tests.
  3. The generalized rank test (GRANK) is one of the most powerful tests for both shorter and longer CAR windows.
Table 2: Comparison of the main test statistics (1-9 are parametric, 10-15 nonparametric)

1. T Test
  • Strength: Simplicity
  • Weakness: Sensitive to cross-sectional correlation and volatility changes

2. Cross-Sectional Test (Abbr. in EST results: CSect T)

3. Time-Series Standard Deviation Test (Abbr.: CDA T)

4. Patell Test, Patell (1976) (Abbr.: Patell Z)
  • Strength: Robust against the way in which ARs are distributed across the (cumulated) event window
  • Weakness: Sensitive to cross-sectional correlation and event-induced volatility

5. Adjusted Patell Test, Kolari and Pynnönen (2010) (Abbr.: Adjusted Patell Z)
  • Strength: Same as Patell
  • Strength: Accounts for cross-sectional correlation

6. Standardized Cross-Sectional Test, Boehmer, Musumeci and Poulsen (1991) (Abbr.: StdCSect Z)
  • Strength: Robust against the way in which ARs are distributed across the (cumulated) event window
  • Strength: Accounts for event-induced volatility
  • Strength: Accounts for serial correlation
  • Weakness: Sensitive to cross-sectional correlation

7. Adjusted Standardized Cross-Sectional Test, Kolari and Pynnönen (2010) (Abbr.: Adjusted StdCSect Z)
  • Strength: Accounts additionally for cross-correlation

8. Skewness Corrected Test, Hall (1992) (Abbr.: Skewness Corrected T)
  • Strength: Corrects the test statistic for potential skewness (in the return distribution)

9. Jackknife Test, Giaccotto and Sfiridis (1996) (Abbr.: Jackknife T)

10. Corrado Rank Test, Corrado and Zivney (1992) (Abbr.: Rank Z)
  • Weakness: Loses power for wider CARs (e.g., [-10, 10])

11. Generalized Rank T Test, Kolari and Pynnönen (2011) (Abbr.: Generalized Rank T)
  • Strength: Accounts for cross-correlation of returns
  • Strength: Accounts for serial correlation of returns
  • Strength: Accounts for event-induced volatility

12. Generalized Rank Z Test, Kolari and Pynnönen (2011) (Abbr.: Generalized Rank Z)
  • Strength: See Generalized Rank T

13. Sign Test, Cowan (1992) (Abbr.: not available)
  • Strength: Robust against skewness (in the return distribution)
  • Weakness: Inferior performance for longer event windows

14. Cowan Generalized Sign Test, Cowan (1992) (Abbr.: Generalized Sign Z)

15. Wilcoxon Signed-Rank Test, Wilcoxon (1945)
  • Strength: Takes into account both the sign and the magnitude of ARs

Source: These strengths and weaknesses were compiled from Kolari and Pynnönen (2011).


Formulas, acronyms, and decision rules applicable to test statistics

Let $L_1 = T_1 - T_0 + 1$ denote the estimation-window length, with $T_0$ denoting the 'earliest' day of the estimation window and $T_1$ denoting the 'latest' day of the estimation window; furthermore, let $L_2 = T_2 - T_1$ denote the event-window length, with $T_2$ denoting the 'latest' day of the event window. This notation implies that the estimation window, given by $\{T_0, \ldots, T_1\}$, ends immediately before the event window, given by $\{T_1+1, \ldots, T_2\}$, begins. We will stick to this convention for simplicity in all the formulas below, but note that our methodology also allows for an arbitrary gap between the two windows, as specified by the user. Let $N$ denote the sample size (i.e., the number of observations); finally, $S_{AR_i}$ denotes the (sample) standard deviation over the estimation window based on the formula

$$S^2_{AR_i} = \frac{1}{M_i - 2} \sum\limits_{t=T_0}^{T_1}(AR_{i,t})^2$$

Here, $M_{i}$ denotes the number of non-missing (i.e., matched) returns. This formula is based on the market model (where two parameters need to be estimated to compute abnormal returns). For other models, the divisor needs to be changed from $M_i - 2$ to $M_i - k$, where $k$ denotes the number of parameters that need to be estimated to compute abnormal returns; for example, in the constant-expected-return model we have $k=1$, whereas in the (standard) Fama-French factor model we have $k=4$ (one constant and three factors).
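The degrees-of-freedom correction can be sketched as follows (illustrative function name; `k` is the number of estimated model parameters, so `k=2` for the market model and `k=1` for the constant-expected-return model):

```python
import numpy as np

def ar_variance(ar_est, k=2):
    """Estimation-window variance of abnormal returns, S^2_{AR_i},
    dividing by M_i - k, where k is the number of model parameters
    estimated to compute the abnormal returns."""
    ar = np.asarray(ar_est, float)
    m = ar.size                      # number of non-missing returns M_i
    return float(np.sum(ar ** 2) / (m - k))
```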

Parametric Tests

[1] T test

Our research app provides test statistics for single firms at each point in time $t$. The null hypothesis is: $H_0: E(AR_{i, t}) = 0,$ and the test statistic is given by

$$t_{AR_{i,t}}=\frac{AR_{i,t}}{S_{AR_i}}, $$

where $S_{AR_i}$ is the standard deviation of the abnormal returns in the estimation window based on

$$S^2_{AR_i} = \frac{1}{M_i-2} \sum\limits_{t=T_0}^{T_1}(AR_{i,t})^2.$$

Second, we provide t-statistics of the cumulative abnormal returns for each firm. The t-statistic for the null $H_0: E(CAR_{i}) = 0$ is defined as

$$t_{CAR_i}=\frac{CAR_i}{S_{CAR_i}},$$

where

$$S^2_{CAR_i} = L_2 S^2_{AR_i}.$$
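Both single-firm tests can be sketched as follows (illustrative function names, not the EST API):

```python
import numpy as np

def ar_t(ar_it, s_ar_i):
    """t-statistic for a single abnormal return: AR_{i,t} / S_{AR_i}."""
    return ar_it / s_ar_i

def car_t(ar_event, s_ar_i):
    """t-statistic for CAR_i, using S^2_CAR = L2 * S^2_AR so the
    denominator scales with the event-window length L2."""
    ar = np.asarray(ar_event, float)
    return float(ar.sum() / np.sqrt(ar.size * s_ar_i ** 2))
```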

[2] Cross-Sectional Test (Abbr.: CSect T) 

A simple test statistic for testing $H_0: E(AAR_t) = 0$ is given by

$$t_{AAR_t}=\sqrt{N}\frac{AAR_t}{S_{AAR_t}},$$

where $S_{AAR_t}$ denotes the standard deviation across firms at time $t$ based on:

$$S^2_{AAR_t} =\frac{1}{N-1} \sum\limits_{i=1}^{N}(AR_{i, t} - AAR_t)^2.$$

The test statistic for testing $H_0: E(CAAR) = 0$ is given by

$$t_{CAAR}=\sqrt{N}\frac{CAAR}{S_{CAAR}},$$

where $S_{CAAR}$ denotes the standard deviation of the cumulative abnormal returns across the sample based on:

$$S^2_{CAAR} =\frac{1}{N-1} \sum\limits_{i=1}^{N}(CAR_{i} - CAAR)^2.$$

Brown and Warner (1985) showed that the cross-sectional test is sensitive to event-induced volatility, which can result in low power of the test.
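The CAAR version of the cross-sectional test can be sketched as follows (illustrative function name; the AAR version replaces the per-firm CARs with the ARs of a single day):

```python
import numpy as np

def csect_t(car_i):
    """Cross-sectional t-test for H0: E(CAAR) = 0:
    t = sqrt(N) * CAAR / S_CAAR, with the unbiased cross-sectional
    standard deviation across firms."""
    car_i = np.asarray(car_i, float)
    n = car_i.size
    caar = car_i.mean()
    s = car_i.std(ddof=1)            # divides by N - 1, as in the formula
    return float(np.sqrt(n) * caar / s)
```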

[3] Time-Series Standard Deviation or Crude Dependence Test (Abbr.: CDA T) 

The time-series standard deviation test uses the entire sample for variance estimation. According to this construction, the time-series dependence test does not account for (possibly) unequal variances across observations. We have for the variance estimation:

$$S^2_{AAR} =\frac{1}{M-2} \sum\limits_{t=T_0}^{T_1}(AAR_{t} - \overline{AAR})^2,$$

where $[T_0, T_1]$ denotes the estimation window and

$$\overline{AAR} = \frac{1}{M} \sum\limits_{t=T_0}^{T_1}AAR_{t}.$$

The test statistic for testing $H_0: E(AAR_t) = 0$ is given by$$t_{AAR_t}=\sqrt{N}\frac{AAR_t}{S_{AAR}}.$$

The test statistic for testing $H_0: E(CAAR) = 0$ is given by

$$t_{CAAR}=\frac{CAAR}{\sqrt{T_2 - T_1}S_{AAR}}.$$
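The crude dependence adjustment can be sketched as follows (illustrative function name; `aar_est` is the time series of AARs over the estimation window, `aar_event` over the event window):

```python
import numpy as np

def cda_t_caar(aar_est, aar_event):
    """Time-series standard-deviation (crude dependence) test for
    H0: E(CAAR) = 0: the variance is estimated from the time series
    of AARs over the estimation window, not across firms."""
    est = np.asarray(aar_est, float)
    evt = np.asarray(aar_event, float)
    m = est.size
    s2 = np.sum((est - est.mean()) ** 2) / (m - 2)   # S^2_AAR
    caar = evt.sum()
    return float(caar / np.sqrt(evt.size * s2))      # L2 = T2 - T1
```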

[4] Patell or Standardized Residual Test (Abbr.: Patell Z) 

The Patell test is a widely used test statistic in event studies. In the first step, Patell (1976, 1979) suggested standardizing each $AR_{i,t}$ by the forecast-error-corrected standard deviation before calculating the test statistic:

\begin{equation}SAR_{i,t} = \frac{AR_{i,t}}{S_{AR_{i,t}}} \label{eq:sar}\end{equation}

As the event-window abnormal returns are out-of-sample predictions, Patell adjusts the standard error by the forecast-error:

\begin{equation}S^2_{AR_{i,t}} = S^2_{AR_i} \left(1+\frac{1}{M_i}+\frac{(R_{m,t}-\overline{R}_{m})^2} {\sum\limits_{t=T_0}^{T_1}(R_{m,t}-\overline{R}_{m})^2}\right)\label{EQ:FESD}\end{equation}

with $\overline{R}_{m}$ denoting the average of the market returns in the estimation window. 


$SAR_{i,t}$ follows a t-distribution with ${M_i-2}$ degrees of freedom under the null.

The test statistic for testing $H_0: E(AAR_t) = 0$ is given by

$$z_{Patell, t} = \frac{ASAR_t}{S_{ASAR_t}},$$

where $ASAR_t$ denotes the sum of the standardized abnormal returns over the sample

$$ASAR_t = \sum\limits_{i=1}^N SAR_{i,t},$$

with expectation zero and variance 

$$S^2_{ASAR_t} = \sum\limits_{i=1}^N \frac{M_i-2}{M_i-4}$$

under the null.

The test statistic for testing $H_0: E(CAAR) = 0$ is given by

$$z_{Patell} = \frac{\sum\limits_{i=1}^{N} CSAR_i}{\sqrt{\sum\limits_{i=1}^{N} S^2_{CSAR_i}}},$$

with $CSAR_i$ denoting the cumulative standardized abnormal returns

$$CSAR_{i} = \sum\limits_{t=T_1+1}^{T_2} SAR_{i,t}$$

with expectation zero and variance

$$S^2_{CSAR_i} = L_2\frac{M_i-2}{M_i-4}$$

under the null.

Under the assumption of cross-sectional independence and some other conditions (Patell, 1976), $z_{Patell}$ has a limiting standard normal distribution under the null.
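The single-day Patell statistic can be sketched as follows (illustrative function name; `sar_t` holds the standardized abnormal returns $SAR_{i,t}$ across firms on event day $t$, and `m` the per-firm estimation-window lengths $M_i$):

```python
import numpy as np

def patell_z_aar(sar_t, m):
    """Patell test for H0: E(AAR_t) = 0 on one event day:
    z = ASAR_t / S_ASAR_t, with the null variance of ASAR_t being
    the sum of (M_i - 2) / (M_i - 4) over firms."""
    sar_t = np.asarray(sar_t, float)
    m = np.asarray(m, float)
    asar = sar_t.sum()                   # ASAR_t
    var = np.sum((m - 2) / (m - 4))      # S^2_ASAR_t under the null
    return float(asar / np.sqrt(var))
```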

[5] Kolari and Pynnönen adjusted Patell or Standardized Residual Test (Abbr.: Adjusted Patell Z) 

Kolari and Pynnönen (2010) propose a modification of the Patell test to account for cross-correlation of the abnormal returns. Using the standardized abnormal returns ($SAR_{i,t}$) defined as above in Section [4], and defining $\overline r$ as the average of the sample cross-correlations of the estimation-period abnormal returns, the test statistic for $H_0: E(AAR) = 0$ is given by

$$z_{Patell, t}^{adj}=z_{Patell, t} \sqrt{\frac{1}{1 + (N - 1) \overline r}},$$

where $z_{Patell, t}$ denotes the Patell test statistic. It is easily seen that if the term $\overline r$ is zero, the adjusted test statistic reduces to the original Patell test statistic. Assuming the square-root rule holds for the standard deviation of different return periods, this test can also be used for cumulative abnormal returns ($H_0: E(CAAR) = 0$):

$$z_{Patell}^{adj}=z_{Patell} \sqrt{\frac{1}{1 + (N - 1) \overline r}}.$$
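The adjustment is a simple rescaling of the unadjusted statistic, sketched here with an illustrative function name (`rbar` is the average pairwise correlation $\overline r$ of estimation-period abnormal returns):

```python
import numpy as np

def kp_adjust_patell(z_patell, n, rbar):
    """Kolari-Pynnonen cross-correlation adjustment of the Patell
    statistic: multiply by sqrt(1 / (1 + (N - 1) * rbar)).
    With rbar = 0 the original statistic is recovered."""
    return float(z_patell * np.sqrt(1.0 / (1.0 + (n - 1) * rbar)))
```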

[6] Standardized Cross-Sectional or BMP Test (Abbr.: StdCSect Z) 

Similarly, Boehmer, Musumeci and Poulsen (1991) proposed a standardized cross-sectional method that is robust against any (additional) variance induced by the event. The test statistic on day $t$ ($H_0: E(AAR) = 0$) in the event window is given by

$$z_{BMP, t}= \frac{ASAR_t}{\sqrt{N}S_{ASAR_t}},$$

with $ASAR_t$ defined as for the Patell test [4] and with standard deviation based on

$$S^2_{ASAR_t} = \frac{1}{N-1}\sum\limits_{i=1}^{N}\left(SAR_{i, t} - \frac{1}{N} \sum\limits_{l=1}^N SAR_{l, t} \right)^2.$$

Furthermore, the EST API provides the test statistic for testing $H_0: E(CAAR) = 0$ given by

$$z_{BMP}=\sqrt{N}\frac{\overline{SCAR}}{S_{\overline{SCAR}}},$$

where $\overline{SCAR}$ denotes the averaged standardized cumulative abnormal returns across the $N$ firms, with standard deviation based on

$$S^2_{\overline{SCAR}} = \frac{1}{N-1} \sum\limits_{i=1}^{N} \left(SCAR_i - \overline{SCAR}\right)^2,$$

$$\overline{SCAR} = \frac{1}{N}\sum\limits_{i=1}^{N}SCAR_i$$

with $SCAR_i = \frac{CAR_i}{S_{CAR_i}}$ and $S_{CAR_i}$ denoting the forecast-error-corrected standard deviation of Mikkelson and Partch (1988). The Mikkelson-Partch correction adjusts each firm's test statistic for serial correlation in the returns. The correction terms are

  • Market Model:

$$S^2_{CAR_i} = S_{AR_i}^2\left(L_i + \frac{L^2_i}{M_i} + \frac{\left(\sum\limits_{t=T_1+1}^{T_2}(R_{m,t}-\overline{R}_{m})\right)^2} {\sum\limits_{t=T_0}^{T_1}(R_{m,t}-\overline{R}_{m})^2}\right)$$

  • Comparison Period Mean Adjusted Model:

$$S^2_{CAR_i} = S_{AR_i}^2\left(L_i + \frac{L^2_i}{M_i}\right)$$

  • Market Adjusted Model:

$$S^2_{CAR_i} = S_{AR_i}^2L_i,$$

where $L_i$ denotes the number of non-missing returns in the event window and $M_i$ denotes the number of non-missing returns in the estimation window for firm $i$. Finally, $\overline{R}_{m}$ denotes the average of the market returns in the estimation window; for example, see Patell Test.
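The single-day BMP statistic can be sketched as follows (illustrative function name; `sar_t` holds the $SAR_{i,t}$ across firms on event day $t$):

```python
import numpy as np

def bmp_z_aar(sar_t):
    """BMP standardized cross-sectional test for H0: E(AAR_t) = 0.
    Using the cross-sectional standard deviation of the SARs makes
    the test robust against event-induced volatility."""
    sar = np.asarray(sar_t, float)
    n = sar.size
    asar = sar.sum()                     # ASAR_t
    s = sar.std(ddof=1)                  # cross-sectional S_ASAR_t
    return float(asar / (np.sqrt(n) * s))
```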

[7] Kolari and Pynnönen Adjusted Standardized Cross-Sectional or BMP Test (Abbr.: Adjusted StdCSect Z)

Kolari and Pynnönen (2010) proposed a modification of the BMP test to account for cross-correlation of the abnormal returns. Using the standardized abnormal returns ($SAR_{i,t}$) defined as in the previous section, and defining $\overline r$ as the average of the sample cross-correlations of the estimation-period abnormal returns, the test statistic for $H_0: E(AAR) = 0$ of the adjusted BMP test is

$$z_{BMP, t}^{adj}=z_{BMP, t} \sqrt{\frac{1- \overline r}{1 + (N - 1) \overline r}},$$

where $z_{BMP, t}$ denotes the BMP test statistic. It is easily seen that if the term $\overline r$ is zero, the adjusted test statistic reduces to the original BMP test statistic. Assuming the square-root rule holds for the standard deviation of different return periods, this test can also be used for cumulative abnormal returns ($H_0: E(CAAR) = 0$):

$$z_{BMP}^{adj}=z_{BMP} \sqrt{\frac{1- \overline r}{1 + (N - 1) \overline r}}.$$

[8] Skewness-Corrected Test (Abbr.: Skewness Corrected T) 

The skewness-adjusted t-test, introduced by Hall (1992), corrects the cross-sectional t-test for a (possibly) skewed abnormal-return distribution. This test is applicable to the averaged abnormal return ($H_0: E(AAR) = 0$), the cumulative averaged abnormal return ($H_0: E(CAAR) = 0$), and the averaged buy-and-hold abnormal return ($H_0: E(ABHAR) = 0$). In what follows, we will focus on cumulative averaged abnormal returns. First, recall the (unbiased) cross-sectional sample variance:

$$S^2_{CAAR} = \frac{1}{N-1} \sum\limits_{i=1}^{N}(CAR_i - CAAR)^2.$$

Next, the (unbiased) sample skewness is given by:

$$\gamma = \frac{N}{(N-2)(N-1)} \sum\limits_{i=1}^{N}(CAR_i - CAAR)^3S^{-3}_{CAAR} .$$

Finally, let

$$S = \frac{CAAR}{S_{CAAR}}.$$ 

Then the skewness-adjusted test statistic for CAAR is given by

$$t_{skew} = \sqrt{N}\left(S + \frac{1}{3}\gamma S^2 + \frac{1}{27}\gamma^2S^3 + \frac{1}{6N}\gamma\right),$$

which is asymptotically standard normally distributed under the null. For a further discussion of the skewness transformation we refer to Hall (1992), and for a further discussion of unbiased estimation of the second and third moments we refer to Cramer (1961) or Rimoldini (2013).
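Putting the three ingredients together, the skewness-corrected statistic can be sketched as follows (illustrative function name; for symmetric data $\gamma = 0$ and the statistic reduces to the plain cross-sectional t-test):

```python
import numpy as np

def skewness_corrected_t(car_i):
    """Hall-type skewness-corrected t-test for H0: E(CAAR) = 0,
    following the S and gamma definitions above."""
    x = np.asarray(car_i, float)
    n = x.size
    caar = x.mean()
    s_caar = x.std(ddof=1)                            # unbiased variance
    gamma = n / ((n - 2) * (n - 1)) * np.sum((x - caar) ** 3) / s_caar ** 3
    S = caar / s_caar
    return float(np.sqrt(n) * (S + gamma * S ** 2 / 3
                               + gamma ** 2 * S ** 3 / 27
                               + gamma / (6 * n)))
```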

[9] Jackknife Test (Abbr.: Jackknife T) 

This test will be added in a future version.

Nonparametric Tests

[10] Corrado Rank Test (Abbr.: Rank Z) 

In a first step, the Corrado (1989) rank test transforms abnormal returns into ranks. Ranking is done for all abnormal returns of both the event and the estimation period. If ranks are tied, the midrank is used. To adjust for missing values, Corrado and Zivney (1992) suggested standardizing the ranks by one plus the number of non-missing values:

$$K_{i, t}=\frac{rank(AR_{i, t})}{1 + M_i + L_i},$$

where $L_i$ denotes the number of non-missing (i.e., matched) returns in the event window. The rank statistic for testing on a single day ($H_0: E(AAR_t) = 0$) is then given by

$$t_{rank, t} = \frac{\overline{K}_t - 0.5}{S_{\overline{K}}},$$

where $\overline{K}_t = \frac{1}{N_t}\sum\limits_{i=1}^{N_t}K_{i, t}$, $N_t$  denotes the number of non-missing returns across firms and

$$S^2_{\overline{K}} = \frac{1}{L_1 + L_2} \sum\limits_{t=T_0}^{T_2} \frac{N_t}{N}\left(\overline{K}_t - 0.5 \right)^2.$$

When analyzing a multi-day event period, Campbell and Wasley (1993) defined the RANK test considering the sum of the mean excess ranks over the event window as follows ($H_0: E(CAAR) = 0$):

 $$t_{rank} =\sqrt{L_2} \left(\frac{\overline{K}_{T_1, T_2}  - 0.5}{S_{\overline{K}}}\right),$$

where $\overline{K}_{T_1, T_2} = \frac{1}{L_2} \sum\limits_{t=T_1 + 1}^{T_2}\overline{K}_t$ denotes the mean rank across firms and time in the event window. By adjusting the last day in the event window $T_2$, one can get a series of test statistics as defined by Campbell and Wasley (1993).

Note 1: The adjustment for event-induced variance as done by Campbell and Wasley (1993) is omitted here and may be implemented in a future version. As an alternative for such an application, we recommend the GRANK-T or GRANK-Z test.
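The single-day rank statistic can be sketched as follows. This is a minimal illustration for fully observed returns (no missing values, so $M_i = L_1$ and $L_i = L_2$ for every firm); the function names are ours:

```python
import numpy as np

def midranks(x):
    """Ranks 1..n, using midranks for ties (pure-numpy helper)."""
    x = np.asarray(x, float)
    order = np.argsort(x, kind="mergesort")
    ranks = np.empty(x.size)
    ranks[order] = np.arange(1, x.size + 1, dtype=float)
    for v in np.unique(x):              # average ranks over tied values
        tied = x == v
        ranks[tied] = ranks[tied].mean()
    return ranks

def corrado_rank_t(ar_est, ar_evt, day):
    """Corrado-Zivney rank statistic for one event day.
    ar_est: (N, L1) estimation-window ARs; ar_evt: (N, L2) event-window
    ARs; day: index of the event day within the event window."""
    ar_est = np.asarray(ar_est, float)
    ar_evt = np.asarray(ar_evt, float)
    ar = np.hstack([ar_est, ar_evt])                 # ranks over both windows
    K = np.apply_along_axis(midranks, 1, ar) / (ar.shape[1] + 1)
    kbar = K.mean(axis=0)                            # \bar K_t across firms
    s2 = np.mean((kbar - 0.5) ** 2)                  # S^2_{\bar K}
    return float((kbar[ar_est.shape[1] + day] - 0.5) / np.sqrt(s2))
```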

[11] Generalized Rank T Test (Abbr.: Generalized Rank T) (this section is under revision)

In the following steps we assume, for the sake of simplicity, that there are no missing values in either the estimation or the event window for any firm. In order to account for possible event-induced volatility, the GRANK test squeezes the whole event window into a single observation, the so-called 'cumulative event day'. First, define the standardized cumulative abnormal returns of firm $i$ in the event window as

\begin{equation}SCAR_{i} = \frac{CAR_{i}}{S_{CAR_{i}}},\end{equation}

where $S_{CAR_{i}}$ denotes the standard deviation of the prediction errors in the cumulative abnormal returns of firm $i$, based (for the market model) on

\begin{equation}S^2_{CAR_{i}} = S^2_{AR_i} \left(L_2+\frac{L_2^2}{L_1}+\frac{\left(\sum\limits_{t=T_1+1}^{T_2}(R_{m,t}-\overline{R}_{m})\right)^2} {\sum\limits_{t=T_0}^{T_1}(R_{m,t}-\overline{R}_{m})^2}\right).\end{equation}

Under the null, the standardized CAR value $SCAR_{i}$ has an expectation of zero and approximately unit variance. To account for event-induced volatility, $SCAR_{i}$ is re-standardized by the cross-sectional standard deviation:

$$SCAR^*_{i} = \frac{SCAR_{i}}{S_{SCAR}},$$

where
$$S^2_{SCAR}=\frac{1}{N-1} \sum\limits_{i=1}^N \left(SCAR_{i} - \overline{SCAR} \right)^2 \quad \text{ and } \quad \overline{SCAR} = \frac{1}{N} \sum\limits_{i=1}^N SCAR_{i}.$$

By construction, $SCAR^*_{i}$ has again an expectation of zero and approximately unit variance under the null. Now, let us define the generalized standardized abnormal returns ($GSAR$):

$$GSAR_{i, t} = \begin{cases} SCAR^*_i & \text{for } t \text{ in the event window} \\ SAR_{i, t} & \text{for } t \text{ in the estimation window.} \end{cases}$$

That is, the whole CAR window is treated as a single time point (the cumulative event day), while on the estimation-window days GSAR equals the standardized abnormal returns. On these $L_1 + 1$ time points, define the standardized ranks:

$$K_{i, t}=\frac{rank(GSAR_{i, t})}{L_1 + 2}-0.5$$

Then the generalized rank t-statistic for testing $H_0: E(CAAR) = 0$ is defined as:

$$t_{grank}=Z\left(\frac{L_1 - 1}{L_1 - Z^2}\right)^{1/2},$$

where

$$Z = \frac{\overline{K}_{0}}{S_{\overline{K}}},$$

with $t=0$ indicating the cumulative event day and

$$S^2_{\overline{K}}=\frac{1}{L_1 + 1}\sum\limits_{t \in CW}\frac{N_t}{N}\overline{K}_t^2$$

with CW representing the combined window consisting of the estimation window and the cumulative event day, and

$$\overline{K}_t=\frac{1}{N_t}\sum\limits_{i=1}^{N_t}K_{i, t}.$$

$t_{grank}$ is t-distributed with $L_1 - 1$ degrees of freedom under the null.

Formulas for testing on a single day ($H_0: E(AAR) = 0$) are straightforward modifications of the ones shown above.

[12] Generalized Rank Z Test (Abbr.: Generalized Rank Z)

Using some facts about statistics on ranks, we get the standard deviation of $\overline{K_{0}}$ based on

$$S^2_{\overline{K_{0}}} =\frac{L_1}{12N(L_1 + 2)}.$$

By this calculation, the following test statistic can be defined

$$z_{grank} = \frac{ \overline{K_{0}} }{ S_{\overline{ K_{0} } } } = \sqrt{ \frac{12N(L_1+ 2)}{L_1}} \overline{K_{0}},$$

which, under the null hypothesis, converges quickly to the standard normal distribution as the number of firms $N$ increases.
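The GRANK-Z statistic is a direct evaluation of the formula above, sketched here with an illustrative function name (`kbar0` is the mean standardized rank $\overline{K}_0$ on the cumulative event day):

```python
import numpy as np

def grank_z(kbar0, n, L1):
    """GRANK-Z statistic: z = sqrt(12 N (L1 + 2) / L1) * Kbar_0."""
    return float(np.sqrt(12.0 * n * (L1 + 2) / L1) * kbar0)
```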

[13] Sign Test

This sign test was proposed by Cowan (1992) and builds on the proportion $\hat{p}$ of positive cumulative abnormal returns present in the event window. Under the null hypothesis, this proportion should not significantly differ from 0.5.

$$t_{sign}= \sqrt{N}\left(\frac{\hat{p}-0.5}{\sqrt{0.5(1-0.5)}}\right)$$

This test will be added in a future version.

[14] Cowan Generalized Sign Test (Abbr.: Generalized Sign Z) 

Under the null hypothesis, the number of stocks with positive cumulative abnormal returns ($CAR$) is expected to be consistent with the fraction $\hat{p}$ of positive $CAR$ from the estimation period. When the number of positive $CAR$ is significantly higher than the expected number derived from the estimation-period fraction, the null hypothesis is rejected.

The estimation-period fraction is given by

$$\hat{p}=\frac{1}{N}\sum\limits_{i=1}^{N}\frac{1}{L_1}\sum\limits_{t=T_0}^{T_1}\varphi_{i, t},$$

where $\varphi_{i,t}$ equals $1$ if the sign of $AR_{i,t}$ is positive and equals $0$ otherwise. The generalized sign test statistic ($H_0: E(CAAR) = 0$) is given by

$$z_{gsign}= \frac{w - N\hat{p}}{\sqrt{N\hat{p}(1-\hat{p})}},$$

where $w$ is the number of stocks with positive cumulative abnormal returns during the event period. To compute the p-value, a normal approximation to the binomial distribution with parameters $\hat{p}$ and $N$ is used.

Note 1: This test is based on Cowan, A. R. (1992).

Note 2: the EST API provides GSIGN test statistics also for single days ($H_0: E(AAR) = 0$) in the event time period.

Note 3: The GSIGN test is based on the traditional SIGN test, where under the null hypothesis a binomial distribution $Bin(N, 0.5)$ is used for the distribution of the test statistic.

Note 4: If $N$ is small, the normal approximation is inaccurate for calculating the p-value; in such a case we recommend using the binomial distribution to calculate the p-value.
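The test statistic can be sketched as follows (illustrative function name; `w` is the number of firms with positive event-period CAR and `p_hat` the estimation-period fraction $\hat{p}$):

```python
import numpy as np

def gsign_z(w, n, p_hat):
    """Generalized sign test statistic:
    z = (w - N * p_hat) / sqrt(N * p_hat * (1 - p_hat))."""
    return float((w - n * p_hat) / np.sqrt(n * p_hat * (1 - p_hat)))
```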

[15] Wilcoxon Test (Abbr.: Wilcoxon Z) 

This test will be added in a future version.