Significance Tests for Event Studies

Event studies are concerned with the question of whether abnormal returns on an event date or, more generally, during a window around an event date (called the event window) are unusually large (in magnitude). To answer this question one carries out a formal hypothesis test where the null hypothesis specifies that the expected value of a certain random variable is zero; if the null hypothesis is rejected, one concludes that the event had an ‘impact’. It is customary in the literature to use two-sided tests, which specify as alternative hypothesis that the expected value is different from zero (as opposed to larger, or smaller, than zero). We follow this convention.

If there is only one instance under study, the random variable is the abnormal return on the event day itself (AR) or, more generally, the cumulative abnormal return during the event window (CAR). If there are multiple instances under study, the respective quantities are averaged across instances. Thus, the random variable is the average abnormal return on the respective event day (AAR) or the average cumulative abnormal return during the respective event window, which can alternatively be expressed as the cumulative average abnormal return (CAAR).

In terms of terminology, by an instance we mean a given event for a given firm. In the case of multiple instances, there are two possibilities: (i) a given event (type), such as inclusion to an index or a merger, for multiple firms or (ii) multiple repetitions of a given event (type) for a given firm. An example of the first possibility would be studying the effect of being included in the S&P500 index for multiple firms; an example of the second possibility would be studying the effect of mergers for a given firm. In terms of the statistical methodology, both possibilities are handled in the same way.

For the computation of the abnormal return of firm $i$ on day $t$, denoted by $AR_{i,t}$, we refer the user to the introduction. If more than one instance is considered, let $N$ denote the number of instances and define

\begin{equation}AAR_{t}= \frac{1}{N} \sum\limits_{i=1}^{N}AR_{i,t} \end{equation}

\begin{equation}CAR_{i}=\sum\limits_{t=T_1 + 1}^{T_2} AR_{i,t} \end{equation}

\begin{equation}CAAR=\frac{1}{N}\sum\limits_{i=1}^{N}CAR_i\end{equation}
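These definitions translate directly into code. The following sketch computes AAR, CAR, and CAAR from a small matrix of hypothetical abnormal returns (rows are instances, columns are event-window days):

```python
import numpy as np

# Hypothetical abnormal returns: N = 3 instances, event window of length 3
AR = np.array([[0.01, -0.02, 0.03],
               [0.00,  0.01, -0.01],
               [0.02,  0.00,  0.01]])

AAR = AR.mean(axis=0)   # average abnormal return per event day
CAR = AR.sum(axis=1)    # cumulative abnormal return per instance
CAAR = CAR.mean()       # equivalently, AAR.sum()
```

Note that averaging the $CAR_i$ and cumulating the $AAR_t$ give the same CAAR, which is why the two names are interchangeable.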

The literature on event-study hypothesis testing covers a wide range of tests. Generally, significance tests can be classified into parametric and nonparametric tests. Parametric tests (at least in the field of event studies) assume that the individual firm's abnormal returns are normally distributed, whereas nonparametric tests do not rely on any such assumption. Applied researchers typically carry out both parametric and nonparametric tests to verify that the research findings are not driven by non-normal returns or outliers, which tend to affect the results of parametric tests but not the results of nonparametric tests; for example, see Schipper and Smith (1983).

Table 1 lists the various tests according to the null hypothesis for which they can be used. Table 2 lists them by name and presents strengths and weaknesses compiled from Kolari and Pynnönen (2011).

Table 1: Tests by use

$H_0: E(AR) = 0$ (single instance)
   Parametric: T Test
   Nonparametric: Permutation Test
$H_0: E(AAR) = 0$ (multiple instances)
   Parametric: Cross-Sectional Test, Time-Series Standard Deviation Test, Patell Test, Adjusted Patell Test, Standardized Cross-Sectional Test, Adjusted Standardized Cross-Sectional Test, and Skewness-Corrected Test
   Nonparametric: Generalized Sign Test, Generalized Rank T Test, Generalized Rank Z Test, and Wilcoxon Test
$H_0: E(CAR) = 0$ (single instance)
   Parametric: T Test
   Nonparametric: Permutation Test
$H_0: E(CAAR) = 0$ (multiple instances)
   Parametric: Cross-Sectional Test, Time-Series Standard Deviation Test, Patell Test, Adjusted Patell Test, Standardized Cross-Sectional Test, Adjusted Standardized Cross-Sectional Test, and Skewness-Corrected Test
   Nonparametric: Generalized Sign Test, Generalized Rank T Test, and Generalized Rank Z Test

 

Table 2: Tests by name (1-9 are parametric, 10-16 are nonparametric), with key reference, EST abbreviation, and strengths and weaknesses

1. T Test
   • Simplicity
   • Sensitive to cross-sectional and event-induced volatility; also sensitive to deviations from normality
2. Cross-Sectional Test (Abbr.: CSect T)
3. Time-Series Standard Deviation Test (Abbr.: CDA T)
4. Patell Test (Patell, 1976; Abbr.: Patell Z)
   • Robust against the way in which ARs are distributed across the (cumulated) event window
   • Sensitive to cross-sectional correlation and event-induced volatility
5. Adjusted Patell Test (Kolari and Pynnönen, 2010; Abbr.: Adjusted Patell Z)
   • Same as the Patell Test, but additionally accounts for cross-sectional correlation
6. Standardized Cross-Sectional Test (Boehmer, Musumeci, and Poulsen, 1991; Abbr.: StdCSect T)
   • Robust against the way in which ARs are distributed across the (cumulated) event window; accounts for event-induced volatility and serial correlation
   • Sensitive to cross-sectional correlation
7. Adjusted Standardized Cross-Sectional Test (Kolari and Pynnönen, 2010; Abbr.: Adjusted StdCSect T)
   • Same as the Standardized Cross-Sectional Test, but additionally accounts for cross-sectional correlation
8. Skewness-Corrected Test (Hall, 1992; Abbr.: Skewness-Corrected T)
   • Corrects the test statistic for potential skewness (in the return distribution)
9. Jackknife Test (Giaccotto and Sfiridis, 1996; Abbr.: Jackknife T)
10. Corrado Rank Test (Corrado and Zivney, 1992; Abbr.: Rank Z)
    • Loses power for longer event windows (e.g., [-10, 10])
11. Generalized Rank T Test (Kolari and Pynnönen, 2011; Abbr.: Generalized Rank T)
    • Accounts for cross-sectional and serial correlation of returns, as well as for event-induced volatility
12. Generalized Rank Z Test (Kolari and Pynnönen, 2011; Abbr.: Generalized Rank Z)
    • Less robust against the cross-sectional correlation of returns than the Generalized Rank T Test
13. Sign Test (Cowan, 1992; Abbr.: Sign Z)
    • Robust against skewness (in the return distribution)
    • Inferior performance for longer event windows
14. Generalized Sign Test (Cowan, 1992; Abbr.: Generalized Sign Z)
15. Wilcoxon Signed-Rank Test (Wilcoxon, 1945; Abbr.: Wilcoxon)
    • Takes into account both the sign and the magnitude of ARs
16. Permutation Test (Nguyen and Wolf, 2023; Abbr.: Permutation)
    • Robust against non-normality of abnormal returns, unlike the T Test
    • Computationally more expensive

In describing the formulas for the test statistics and their approximate distributions under the null, which are used to compute $p$-values, we follow the order in Table 2.

Some Preliminaries

The estimation window is given by $\{T_0, \ldots, T_1\}$ and thus has length $L_1 = T_1 - T_0 + 1$. The event window is given by $\{T_1+1, \ldots, T_2\}$ and thus has length $L_2 = T_2 - T_1$. This convention implies that the estimation window ends immediately before the event window. We will stick to this convention for simplicity in all the formulas below, but note that our methodology also allows for an arbitrary gap between the two windows, as specified by the user.

If the event window is of length one (that is, contains a single day only), we shall use the convention $T_1+1 = 0 = T_2$. Otherwise, it always holds more generally that $T_1 +1 \le 0 \le T_2$.

If multiple instances are considered, $N$ denotes the number of instances.

For any given firm $i$, $S_{AR_i}$ denotes the sample standard deviation of the abnormal returns during the estimation window, which is given as the square root of the corresponding sample variance

$$S^2_{AR_i} = \frac{1}{M_i - K} \sum\limits_{t=T_0}^{T_1}AR_{i,t}^2$$

Here, $M_{i}$ denotes the number of non-missing returns during the estimation window; for example, $M_i = T_1 - T_0 + 1$ in case of no missing observations. Furthermore, $K$ denotes the degrees of freedom (given by the number of free parameters) in the benchmark model that was used to compute the abnormal returns; for example, $K=1$ for the constant-expected-return model, $K=2$ for the market model, and $K=4$ for the Fama-French three-factor model (which also contains a constant in addition to the three stochastic factors).

Finally, $N(0,1)$ denotes the standard normal distribution and $t_k$ denotes the $t$-distribution with $k$ degrees of freedom.

 

Parametric Tests

[1] T Test

[1.1] Null hypothesis of interest: $H_0: E(AR_{i,0}) = 0$ 

Test statistic:

$$t =\frac{AR_{i,0}}{S_{AR_i}}$$

Approximate null distribution: $t \stackrel{\cdot}{\sim} t_{M_i-K}$

[1.2] Null hypothesis of interest: $H_0: E(CAR_i) = 0$

Test statistic:

$$t=\frac{CAR_{i}}{S_{CAR_i}} \quad \mbox{with} \quad S^2_{CAR_i} = L_2 S^2_{AR_i}$$

Approximate null distribution: $t \stackrel{\cdot}{\sim} t_{M_i-K}$
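As a sketch of the single-instance test [1.2], the following computes the t-statistic and two-sided $p$-value from hypothetical inputs, assuming the market model ($K = 2$) and no missing observations:

```python
import numpy as np
from scipy import stats

# Hypothetical estimation-window abnormal returns for one firm
AR_est = np.array([0.001, -0.002, 0.003, -0.001, 0.002, 0.000, -0.003, 0.001])
K = 2                                  # market model
M = AR_est.size                        # number of non-missing returns
S2_AR = np.sum(AR_est**2) / (M - K)    # estimation-window sample variance

L2 = 3                                 # event-window length
CAR = 0.012                            # hypothetical cumulative abnormal return
t = CAR / np.sqrt(L2 * S2_AR)          # t = CAR / S_CAR with S^2_CAR = L2 * S^2_AR
p = 2 * stats.t.sf(abs(t), df=M - K)   # two-sided p-value
```

For the event-day test [1.1], set $L_2 = 1$, so that the statistic reduces to $AR_{i,0}/S_{AR_i}$.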

 

[2] Cross-Sectional Test (Abbr.: CSect T) 

[2.1] Null hypothesis of interest: $H_0: E(AAR_0) = 0$

Test statistic:

$$ t= \sqrt{N} \frac{AAR_0}{S_{AAR,0}} \quad \mbox{with} \quad S^2_{AAR,0} = \frac{1}{N-1} \sum\limits_{i=1}^{N}(AR_{i,0} - AAR_0)^2$$

Approximate null distribution: $t \stackrel{\cdot}{\sim} t_{N-1}$

[2.2] Null hypothesis of interest: $H_0: E(CAAR) = 0$

Test statistic:

$$ t = \sqrt{N} \frac{CAAR}{S_{CAAR}} \quad \mbox{with} \quad S^2_{CAAR} = \frac{1}{N-1} \sum\limits_{i=1}^{N}(CAR_{i} - CAAR)^2$$

Approximate null distribution: $t \stackrel{\cdot}{\sim} t_{N-1}$
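The cross-sectional test [2.2] is simply a one-sample t-test applied to the $CAR_i$ across instances; a sketch with hypothetical values:

```python
import numpy as np
from scipy import stats

# Hypothetical CAR_i for N = 6 instances
CAR = np.array([0.021, -0.004, 0.013, 0.030, -0.008, 0.017])
N = CAR.size
CAAR = CAR.mean()
S_CAAR = CAR.std(ddof=1)               # cross-sectional sample SD (divisor N - 1)
t = np.sqrt(N) * CAAR / S_CAAR
p = 2 * stats.t.sf(abs(t), df=N - 1)   # two-sided p-value

# Numerically identical to scipy's built-in one-sample t-test
t_ref, p_ref = stats.ttest_1samp(CAR, 0.0)
```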

 

[3] Time-Series Standard Deviation or Crude Dependence Test (Abbr.: CDA T) 

[3.1] Null hypothesis of interest: $H_0: E(AAR_0) = 0$

Test statistic:

$$t = \sqrt{N} \frac{AAR_0}{S_{AAR}} \quad \mbox{with} \quad S^2_{AAR} = \frac{1}{M-1} \sum\limits_{t=T_0}^{T_1} \Bigl (AAR_{t} - \frac{1}{M} \sum\limits_{t=T_0}^{T_1} AAR_t \Bigr )^2$$

where $M$ denotes the number of non-missing $AAR_t$ during the estimation window.

Approximate null distribution: $t \stackrel{\cdot}{\sim} t_{M-1}$

[3.2] Null hypothesis of interest: $H_0: E(CAAR) = 0$

Test statistic:

$$ t = \sqrt{N} \frac{CAAR}{S_{CAAR}} \quad \mbox{with} \quad S^2_{CAAR} = \frac{1}{M-1} \sum\limits_{t=T_0}^{T_1} \Bigl (CAAR_{t} - \frac{1}{M} \sum\limits_{t=T_0}^{T_1} CAAR_t \Bigr )^2$$

where $M$ denotes the number of non-missing $CAAR_t$ during the estimation window.

Approximate null distribution: $t \stackrel{\cdot}{\sim} t_{M-1}$

 

[4] Patell or Standardized Residual Test (Abbr.: Patell Z) 

[4.1] Null hypothesis of interest: $H_0: E(AAR_0) = 0$

Test statistic:

$$z = \frac{ASAR_0}{S_{ASAR}}$$

The underlying idea is to standardize each $AR_{i,t}$ by the so-called forecast-error-corrected standard deviation before calculating the test statistic; for example, for the market model,

$$SAR_{i,0} = \frac{AR_{i,0}}{S_{AR_{i,0}}} \quad \mbox{with}  \quad S^2_{AR_{i,0}} = S^2_{AR_i}\left (1 + \frac{1}{M_i} +\frac{(R_{m,0} - \overline R_m)^2}{\sum\limits_{t=T_0}^{T_1}(R_{m,t} - \overline R_m)^2}\right ) \quad \mbox{and} \quad \overline R_m = \frac{1}{L_1} \sum\limits_{t=T_0}^{T_1}R_{m,t}$$

where $R_{m,t}$ denotes the market return on day $t$. (The standardization is analogous for any other day $t$ in the event window.)

Then compute

$$ ASAR_0 = \sum_{i=1}^N SAR_{i,0} $$

Under the null, this statistic has expectation zero and variance

$$S_{ASAR}^2 = \sum_{i=1}^N \frac{M_i-2}{M_i-4}$$

Approximate null distribution: $z \stackrel{\cdot}{\sim} N(0,1)$

[4.2] Null hypothesis of interest: $H_0: E(CAAR) = 0$

Test statistic:

$$z  = \frac{1}{\sqrt{N}} \sum_{i=1}^N \frac{CSAR_i}{S_{CSAR_i}}$$

where $CSAR_i$ denotes the cumulative standardized abnormal return of firm $i$:

$$CSAR_i = \sum_{t=T_1+1}^{T_2} SAR_{i,t}$$

which under the null has expectation zero and variance

$$S_{CSAR_i}^2 = L_2\frac{M_i-2}{M_i-4}$$

Approximate null distribution: $z \stackrel{\cdot}{\sim} N(0,1)$
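A sketch of the Patell test [4.1] under the market model, with simulated (hypothetical) data and no missing observations, so that $M_i = L_1$ for every firm:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
N, L1 = 20, 100
AR_est = rng.normal(0.0, 0.01, size=(N, L1))   # estimation-window ARs
AR_0 = rng.normal(0.002, 0.01, size=N)         # event-day ARs
Rm_est = rng.normal(0.0, 0.01, size=L1)        # market returns, estimation window
Rm_0 = 0.003                                   # market return on the event day
K = 2                                          # market model
M = L1                                         # no missing observations

# Forecast-error-corrected standardization of the event-day ARs
S2_AR = np.sum(AR_est**2, axis=1) / (M - K)
Rm_bar = Rm_est.mean()
correction = 1 + 1/M + (Rm_0 - Rm_bar)**2 / np.sum((Rm_est - Rm_bar)**2)
SAR_0 = AR_0 / np.sqrt(S2_AR * correction)

ASAR_0 = SAR_0.sum()
S2_ASAR = N * (M - 2) / (M - 4)                # sum of (M_i - 2)/(M_i - 4)
z = ASAR_0 / np.sqrt(S2_ASAR)
p = 2 * stats.norm.sf(abs(z))                  # two-sided p-value
```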

 

[5] Kolari and Pynnönen adjusted Patell or Standardized Residual Test (Abbr.: Adjusted Patell Z) 

[5.1] Null hypothesis of interest: $H_0: E(AAR_0) = 0$

Test statistic:

$$z_{\text{adj}} = z \cdot \sqrt{\frac{1-\bar r}{1 + (N-1) \bar r}}$$

where $z$ is defined as in [4.1] and $\bar r$ denotes the average of the (pairwise) sample cross-correlations of the estimation-period abnormal returns.

Approximate null distribution: $z_{\text{adj}} \stackrel{\cdot}{\sim} N(0,1)$

[5.2] Null hypothesis of interest: $H_0: E(CAAR) = 0$

Test statistic:

$$z_{\text{adj}} = z \cdot \sqrt{\frac{1-\bar r}{1 + (N-1) \bar r}}$$

where $z$ is defined as in [4.2] and $\bar r$ denotes the average of the (pairwise) sample cross-correlations of the estimation-period abnormal returns.

Approximate null distribution: $z_{\text{adj}} \stackrel{\cdot}{\sim} N(0,1)$
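A sketch of the adjustment in [5.1] with simulated data: $\bar r$ is the average of the off-diagonal entries of the cross-sectional correlation matrix of estimation-window ARs (here generated with a common factor so that the correlation is positive), and the unadjusted $z$ is a hypothetical value:

```python
import numpy as np

rng = np.random.default_rng(3)
N, L1 = 20, 100
common = rng.normal(size=L1)                     # common factor inducing correlation
AR_est = 0.5 * common + rng.normal(size=(N, L1))

C = np.corrcoef(AR_est)                          # N x N correlation matrix
r_bar = (C.sum() - N) / (N * (N - 1))            # average pairwise correlation

z = 2.5                                          # hypothetical unadjusted Patell z
z_adj = z * np.sqrt((1 - r_bar) / (1 + (N - 1) * r_bar))
```

With positively correlated ARs, $\bar r > 0$ and the adjustment shrinks the statistic toward zero, counteracting the overrejection of the unadjusted Patell test.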

 

[6] Standardized Cross-Sectional or BMP Test (Abbr.: StdCSect T) 

[6.1] Null hypothesis of interest: $H_0: E(AAR_0) = 0$

Test statistic:

$$ t = \frac{ASAR_0}{\sqrt{N} S_{ASAR,0}} \quad \mbox{with} \quad S^2_{ASAR,0} = \frac{1}{N-1} \sum\limits_{i=1}^{N}\Bigl (SAR_{i,0} -\frac{1}{N} \sum_{i=1}^N SAR_{i,0} \Bigr )^2$$

with $SAR_{i,0}$ and $ASAR_0$ defined as in [4.1]

Approximate null distribution: $t \stackrel{\cdot}{\sim} t_{N-1}$

[6.2] Null hypothesis of interest: $H_0: E(CAAR) = 0$

Test statistic: 

$$ t =\sqrt{N} \frac{\overline{SCAR}}{S_{\overline{SCAR}}}$$

where

$$\overline{SCAR} = \frac{1}{N} \sum_{i=1}^NSCAR_i \quad \mbox{and} \quad S_{\overline{SCAR}}^2 = \frac{1}{N-1} \sum_{i=1}^N \bigl ( SCAR_i - \overline{SCAR} \bigl )^2$$

These statistics are based on

$$SCAR_i = \frac{CAR_i}{S_{CAR_i}}$$

where $S_{CAR_i}$ denotes the forecast-error-corrected standard deviation; for example, for the market model,

$$S_{CAR_i}^2 = S_{AR_i}^2\left (L_2 + \frac{L_2}{M_i} +\frac{\sum_{t=T_1+1}^{T_2} (R_{m,t} - \bar R_m)^2}{\sum_{t=T_0}^{T_1} (R_{m,t} - \bar R_m)^2}\right )$$

Approximate null distribution: $t \stackrel{\cdot}{\sim} t_{N-1}$
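A sketch of [6.2] on hypothetical $SCAR_i$ values; the final step is numerically identical to a one-sample t-test on the $SCAR_i$:

```python
import numpy as np
from scipy import stats

# Hypothetical SCAR_i = CAR_i / S_CAR_i for N = 8 instances
SCAR = np.array([1.2, -0.3, 0.8, 2.1, -0.5, 0.9, 1.4, 0.2])
N = SCAR.size
SCAR_bar = SCAR.mean()
S_SCAR = SCAR.std(ddof=1)              # cross-sectional sample SD (divisor N - 1)
t = np.sqrt(N) * SCAR_bar / S_SCAR
p = 2 * stats.t.sf(abs(t), df=N - 1)

# Cross-check against scipy's one-sample t-test
res = stats.ttest_1samp(SCAR, 0.0)
```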

 

[7] Kolari and Pynnönen Adjusted Standardized Cross-Sectional or BMP Test (Abbr.: Adjusted StdCSect T)

[7.1] Null hypothesis of interest: $H_0: E(AAR_0) = 0$

Test statistic:

$$t_{\text{adj}} = t \cdot \sqrt{\frac{1-\bar r}{1 + (N-1) \bar r}}$$

where $t$ is defined as in [6.1] and $\bar r$ denotes the average of the (pairwise) sample cross-correlations of the estimation-period abnormal returns.

Approximate null distribution: $t_{\text{adj}} \stackrel{\cdot}{\sim} t_{N-1}$

[7.2] Null hypothesis of interest: $H_0: E(CAAR) = 0$

Test statistic:

$$t_{\text{adj}} = t \cdot \sqrt{\frac{1-\bar r}{1 + (N-1) \bar r}}$$

where $t$ is defined as in [6.2] and $\bar r$ denotes the average of the (pairwise) sample cross-correlations of the estimation-period abnormal returns.

Approximate null distribution: $t_{\text{adj}} \stackrel{\cdot}{\sim} t_{N-1}$

 

[8] Skewness-Corrected Test (Abbr.: Skewness-Corrected T) 

[8.1] Null hypothesis of interest: $H_0: E(AAR_0) = 0$

Test statistic: 

$$ t  = \sqrt{N} \left ( S + \frac{1}{3} \gamma S^2 + \frac{1}{27} \gamma^2 S^3 + \frac{1}{6N} \gamma \right )$$

As far as the ingredients are concerned, first recall the cross-sectional sample variance

$$S^2_{AAR,0} = \frac{1}{N-1} \sum\limits_{i=1}^{N}(AR_{i,0} - AAR_0)^2$$

Next, the corresponding sample skewness is given by

$$\gamma = \frac{N}{(N-2)(N-1)}\sum_{i=1}^N \frac{(AR_{i,0} - AAR_0)^3}{S^3_{AAR,0}}$$

Finally, let

$$S = \frac{AAR_0}{S_{AAR,0}}$$

Approximate null distribution: $t \stackrel{\cdot}{\sim} t_{N-1}$

[8.2] Null hypothesis of interest: $H_0: E(CAAR) = 0$

Test statistic: 

$$ t = \sqrt{N} \left (S + \frac{1}{3} \gamma S^2 + \frac{1}{27} \gamma^2 S^3+ \frac{1}{6N} \gamma \right )$$

As far as the ingredients are concerned, first recall the cross-sectional sample variance

$$S^2_{CAAR} = \frac{1}{N-1} \sum\limits_{i=1}^{N}(CAR_{i} - CAAR)^2$$

Next, the corresponding sample skewness is given by

$$\gamma = \frac{N}{(N-2)(N-1)}\sum_{i=1}^N \frac{(CAR_{i} - CAAR)^3}{S^3_{CAAR}}$$

Finally, let

$$S = \frac{CAAR}{S_{CAAR}}$$

Approximate null distribution: $t \stackrel{\cdot}{\sim} t_{N-1}$
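A sketch of [8.2] on hypothetical $CAR_i$ values, following the formulas above term by term:

```python
import numpy as np
from scipy import stats

# Hypothetical CAR_i for N = 8 instances
CAR = np.array([0.05, -0.01, 0.02, 0.00, 0.08, -0.02, 0.03, 0.01])
N = CAR.size
CAAR = CAR.mean()
S_CAAR = CAR.std(ddof=1)                       # cross-sectional sample SD

# Sample skewness with the small-sample factor N / ((N - 2)(N - 1))
gamma = N / ((N - 2) * (N - 1)) * np.sum((CAR - CAAR)**3) / S_CAAR**3

S = CAAR / S_CAAR
t = np.sqrt(N) * (S + gamma * S**2 / 3 + gamma**2 * S**3 / 27 + gamma / (6 * N))
p = 2 * stats.t.sf(abs(t), df=N - 1)
```

When $\gamma = 0$ the statistic collapses to the plain cross-sectional test [2.2].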

 

[9] Jackknife Test (Abbr.: Jackknife T) 

This test will be added in a future version.

 

Nonparametric Tests

[10] Corrado Rank Test (Abbr.: Rank Z) 

[10.1] Null hypothesis of interest: $H_0: E(AAR_0) = 0$

Test statistic: 

$$z = \frac{\bar K_0 - 0.5}{S_{\bar {K}}}$$

Start by computing, for each $i$, a vector of ‘scaled’ ranks based on the combined sample $\{AR_{i,t}\}_{t = T_0}^{T_2}$:

$$K_{i,t} = \frac{\mbox{rank}(AR_{i,t})}{1+M_i+L_{2,i}}$$

where $L_{2,i}$ denotes the number of non-missing $AR_{i,t}$ during the event window.

Then, for any $t$, denote the number of non-missing $K_{i,t}$ by $N_t$ and define

$$\bar K_t = \frac{1}{N_t} \sum_{i=1}^N K_{i,t} \quad \mbox{and} \quad S_{\bar{K}}^2 = \frac{1}{L_1 + L_2}\sum_{t=T_0}^{T_2} \bigl (\bar K_t - 0.5 \bigr )^2$$

Approximate null distribution: $z \stackrel{\cdot}{\sim} N(0,1)$

[10.2] Null hypothesis of interest: $H_0: E(CAAR) = 0$

Test statistic: 

$$z = \sqrt{L_2}\left (\frac{\bar K_{T_1+1,T_2} - 0.5}{S_{\bar {K}}} \right)\quad \mbox{with} \quad \bar K_{T_1+1,T_2} = \frac{1}{L_2} \sum_{t=T_1+1}^{T_2} \bar K_t$$

Approximate null distribution: $z \stackrel{\cdot}{\sim} N(0,1)$
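A sketch of the rank test in [10.1], assuming no missing data (so $M_i = L_1$ and $L_{2,i} = L_2$ for every firm) and, for illustration, taking day 0 to be the first day of the event window; the abnormal returns are simulated:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
N, L1, L2 = 15, 100, 5
AR = rng.normal(0.0, 0.01, size=(N, L1 + L2))   # estimation + event window, per firm

ranks = stats.rankdata(AR, axis=1)              # ranks 1..(L1 + L2) within each firm
K_mat = ranks / (1 + L1 + L2)                   # scaled ranks K_{i,t}
K_bar = K_mat.mean(axis=0)                      # \bar K_t across firms
S_K = np.sqrt(np.mean((K_bar - 0.5)**2))        # averages over all L1 + L2 days

z = (K_bar[L1] - 0.5) / S_K                     # first event-window day as day 0
p = 2 * stats.norm.sf(abs(z))
```

Under the null, each scaled rank fluctuates around 0.5, which is why 0.5 is subtracted in both the numerator and the variance estimate.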

 

[11] Generalized Rank T Test (Abbr.: Generalized Rank T) 

[11.1] Null hypothesis of interest: $H_0: E(AAR_0) = 0$

Test statistic: 

$$t = Z \cdot \sqrt{\frac{L_1 -1}{L_1 - Z^2}} \quad \mbox{with} \quad Z = \frac{\bar U_{L_1+1}}{S_{\bar U}}$$

Arguably, this is the most complicated test statistic of them all, so it will take a while to describe its construction. For simplicity, we will assume no missing data anywhere.

For any $t$ during the estimation window, let $SAR_{i,t} = AR_{i,t} / S_{AR_i}$ and then compute $SAR_{i,0}$ as described in [4.1]. Next, use cross-sectional standardization to compute

$$ SAR_{i,0}^* = \frac{SAR_{i,0}}{S_{SAR_0}} \quad \mbox{with} \quad S_{SAR_0}^2 = \frac{1}{N-1} \sum_{i=1}^N \bigl (SAR_{i,0} - \overline{SAR_0} \bigr )^2 \quad \mbox{and} \quad \overline{SAR_0} = \frac{1}{N} \sum_{i=1}^N SAR_{i,0}$$

This, for any $i$, gives a time series of length $L_1 + 1$:

$$\{GSAR_{i,1}, \ldots, GSAR_{i,L_1}, GSAR_{i,L_1+1}\} =\{SAR_{i,T_0}, \ldots, SAR_{i,T_1}, SAR_{i,0}^*\}$$

Next, for any $i$, let

$$ U_{i,t} = \frac{\mbox{rank}(GSAR_{i,t})}{L_1+2} - 0.5$$

where the ranks are across $t \in \{1,\ldots, L_1+1\}$ 

Next, for any $t$, let

$$\bar U_t = \frac{1}{N} \sum_{i=1}^N U_{i,t}$$

and then let

$$S_{\bar U}^2 = \frac{1}{L_1+1} \sum_{t=1}^{L_1+1} \bar U_t^2$$

noting that, necessarily, the average of the values $\{\bar U_t\}_{t=1}^{L_1+1}$ is zero

Approximate null distribution: $t \stackrel{\cdot}{\sim} t_{L_1 - 1}$
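Under the stated no-missing-data assumption, the construction in [11.1] can be sketched as follows; the SAR values are hypothetical, and the event-day SARs are taken as given rather than computed from a benchmark model:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
N, L1 = 12, 60
SAR_est = rng.normal(size=(N, L1))       # SAR_{i,t} over the estimation window
SAR_0 = rng.normal(0.5, 1.0, size=N)     # event-day SAR_{i,0}, taken as given

# Cross-sectional standardization of the event-day SARs (numerator not demeaned,
# denominator is the demeaned cross-sectional sample SD, as in the formula)
SAR_0_star = SAR_0 / SAR_0.std(ddof=1)

# Append the standardized event-day value to each firm's series
GSAR = np.column_stack([SAR_est, SAR_0_star])   # N x (L1 + 1)

# Scaled, centered ranks across t = 1, ..., L1 + 1 within each firm
U = stats.rankdata(GSAR, axis=1) / (L1 + 2) - 0.5
U_bar = U.mean(axis=0)                   # \bar U_t across firms
S_U = np.sqrt(np.mean(U_bar**2))         # averages over all L1 + 1 days
Z = U_bar[-1] / S_U
t = Z * np.sqrt((L1 - 1) / (L1 - Z**2))  # Z-to-t mapping of Kolari and Pynnönen (2011)
```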

[11.2]  Null hypothesis of interest: $H_0: E(CAAR) = 0$

Test statistic: 

$$t = Z \cdot \sqrt{\frac{L_1 -1}{L_1 - Z^2}} \quad \mbox{with} \quad Z = \frac{\bar U_{L_1+1}}{S_{\bar U}}$$

The construction parallels the one in [11.1]. Again, for simplicity, we will assume no missing data anywhere.

Compute $SCAR_i$ as described in [6.1]; use cross-sectional standardization to compute

$$ SCAR_i^* = \frac{SCAR_i}{S_{SCAR}} \quad \mbox{with} \quad S_{SCAR}^2 = \frac{1}{N-1} \sum_{i=1}^N \bigl (SCAR_i - \overline{SCAR} \bigr )^2 \quad \mbox{and} \quad \overline{SCAR} = \frac{1}{N} \sum_{i=1}^N SCAR_i$$

This, for any $i$, gives a time series of length $L_1 + 1$:

$$\{GSAR_{i,1}, \ldots, GSAR_{i,L_1}, GSAR_{i,L_1+1}\} =\{SAR_{i,T_0}, \ldots, SAR_{i,T_1}, SCAR_i^*\}$$

Next, for any $i$, let

$$U_{i,t} = \frac{\mbox{rank}(GSAR_{i,t})}{L_1+2} - 0.5$$

where the ranks are across $t\in\{1,\ldots, L_1+1\}$ 

Next, for any $t$, let

$$\bar U_t = \frac{1}{N} \sum_{i=1}^N U_{i,t}$$

and then let

$$S_{\bar U}^2 = \frac{1}{L_1+1} \sum_{t=1}^{L_1+1} \bar U_t^2$$

noting that, necessarily, the average of the values $\{\bar U_t\}_{t=1}^{L_1+1}$ is zero

Approximate null distribution: $t \stackrel{\cdot}{\sim} t_{L_1 - 1}$

 

[12] Generalized Rank Z Test (Abbr.: Generalized Rank Z) 

[12.1] Null hypothesis of interest: $H_0: E(AAR_0) = 0$

Test statistic:

$$ z = \frac{\bar U_{L_1+1}}{S_{\bar U_{L_1+1}}} \quad \mbox{with} \quad S_{\bar U_{L_1+1}}^2 = \frac{L_1}{12 N (L_1+2)}$$

where the ingredients are defined as in [11.1]

Approximate null distribution: $z \stackrel{\cdot}{\sim} N(0,1)$

[12.2]  Null hypothesis of interest: $H_0: E(CAAR) = 0$

Test statistic: 

$$z = \frac{\bar U_{L_1+1}}{S_{\bar U_{L_1+1}}} \quad \mbox{with} \quad S_{\bar U_{L_1+1}}^2 = \frac{L_1}{12 N (L_1+2)}$$

where the ingredients are defined as in [11.2]

Approximate null distribution: $z \stackrel{\cdot}{\sim} N(0,1)$

 

[13] Sign Test (Abbr.: Sign Z) 

[13.1] Null hypothesis of interest: $H_0: E(AAR_0) = 0$

Test statistic:

$$z = \frac{w - N \cdot 0.5}{\sqrt{N\cdot 0.5 \cdot 0.5}}$$

where $w$ is the number of the $AR_{i,0}$ that are positive

Approximate null distribution: $z \stackrel{\cdot}{\sim} N(0,1)$

[13.2] Null hypothesis of interest: $H_0: E(CAAR) = 0$

Test statistic:

$$z = \frac{w - N \cdot 0.5}{\sqrt{N\cdot 0.5 \cdot 0.5}}$$

where $w$ is the number of the $CAR_i$ during the event window that are positive

Approximate null distribution: $z \stackrel{\cdot}{\sim} N(0,1)$

 

[14] Generalized Sign Test (Abbr.: Generalized Sign Z) 

[14.1] Null hypothesis of interest: $H_0: E(AAR_0) = 0$

Test statistic:

$$z = \frac{w - N \cdot \widehat p}{\sqrt{N\cdot \widehat p (1 - \widehat p)}}$$

where $w$ is the number of the $AR_{i,0}$ that are positive and $\widehat p$ is the fraction of the $AR_{i,t}$ during the estimation window (across both $i$ and $t$) that are positive

Approximate null distribution: $z \stackrel{\cdot}{\sim} N(0,1)$

[14.2] Null hypothesis of interest: $H_0: E(CAAR) = 0$

Test statistic:

$$z = \frac{w - N \cdot \widehat p}{\sqrt{N\cdot \widehat p (1 - \widehat p)}}$$

where $w$ is the number of the $CAR_i$ during the event window that are positive and $\widehat p$ is the fraction of the $AR_{i,t}$ during the estimation window (across both $i$ and $t$) that are positive

Approximate null distribution: $z \stackrel{\cdot}{\sim} N(0,1)$
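A sketch of the generalized sign test [14.2] on simulated (hypothetical) data; setting $\widehat p = 0.5$ instead recovers the plain sign test [13]:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
N, L1 = 30, 100
AR_est = rng.normal(0.0, 0.01, size=(N, L1))   # estimation-window ARs, all firms
CAR = rng.normal(0.005, 0.02, size=N)          # hypothetical CAR_i

p_hat = np.mean(AR_est > 0)    # fraction of positive ARs across both i and t
w = np.sum(CAR > 0)            # number of positive CAR_i
z = (w - N * p_hat) / np.sqrt(N * p_hat * (1 - p_hat))
p = 2 * stats.norm.sf(abs(z))  # two-sided p-value
```

Estimating $\widehat p$ from the estimation window guards against return distributions that are not centered at a 50/50 split of positive and negative ARs.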

 

[15] Wilcoxon Test (Abbr.: Wilcoxon)

[15.1] Null hypothesis of interest: $H_0: E(AAR_0) = 0$

The Wilcoxon test is a nonparametric test based on the ranks of the $AR_{i,0}$ across $i$. The (exact) distribution of the test statistic under the null, upon which we base the $p$-value, is nonstandard and we refer the user to the original paper of Wilcoxon (1945) or any suitable textbook for the details.
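In practice, one convenient way to run such a signed-rank test is via SciPy (the event-day ARs below are hypothetical); for small samples without ties or zeros, SciPy can use the exact null distribution:

```python
import numpy as np
from scipy import stats

# Hypothetical event-day abnormal returns for N = 8 instances
AR_0 = np.array([0.012, -0.003, 0.007, 0.021, -0.010, 0.004, 0.009, -0.001])
res = stats.wilcoxon(AR_0, alternative='two-sided')  # signed-rank test of H0: median 0
```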

[15.2] Null hypothesis of interest: $H_0: E(CAAR) = 0$

The Wilcoxon test is not available for this null hypothesis.

 

[16] Permutation Test (Abbr.: Permutation) 

[16.1] Null hypothesis of interest: $H_0: E(AR_{i,0}) = 0$

The permutation test is a non-parametric test that computes the $p$-value in a data-dependent (or resampling-based) fashion. We refer the user to Nguyen and Wolf (2023) for the details.

[16.2] Null hypothesis of interest: $H_0: E(CAR_{i}) = 0$

The permutation test is a non-parametric test that computes the $p$-value in a data-dependent (or resampling-based) fashion. We refer the user to Nguyen and Wolf (2023) for the details.
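For intuition only, here is a generic sign-flip permutation test for $H_0: E(CAR_i) = 0$ under a symmetry assumption; this is a minimal sketch of the resampling idea, not the exact procedure of Nguyen and Wolf (2023):

```python
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical CAR_i for N = 8 instances
CAR = np.array([0.021, -0.004, 0.013, 0.030, -0.008, 0.017, 0.009, -0.002])
N = CAR.size
obs = abs(CAR.mean())                     # observed (absolute) mean

B = 10_000                                # number of resamples
count = 0
for _ in range(B):
    signs = rng.choice([-1.0, 1.0], size=N)       # random sign flips
    if abs((signs * CAR).mean()) >= obs:
        count += 1
p = (count + 1) / (B + 1)                 # permutation p-value
```

The $p$-value is the fraction of resampled statistics at least as extreme as the observed one; no normality of the $CAR_i$ is assumed, at the cost of the extra computation.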