Significance Tests for Event Studies
Event studies are concerned with the question of whether abnormal returns on an event date or, more generally, during a window around an event date (called the event window) are unusually large (in magnitude). To answer this question one carries out a formal hypothesis test where the null hypothesis specifies that the expected value of a certain random variable is zero; if the null hypothesis is rejected, one concludes that the event had an ‘impact’. It is customary in the literature to use two-sided tests, which specify as alternative hypothesis that the expected value is different from zero (as opposed to larger, or smaller, than zero). We follow this convention.
If there is only one instance under study, the random variable is the abnormal return on the event day itself (AR) or, more generally, the cumulative abnormal return during the event window (CAR). If there are multiple instances under study, the respective quantities are averaged across instances. Thus, the random variable is the average abnormal return on the respective event day (AAR) or the average cumulative abnormal return during the respective event window, which can alternatively be expressed as the cumulative average abnormal return (CAAR).
In terms of terminology, by an instance we mean a given event for a given firm. In the case of multiple instances, there are two possibilities: (i) a given event (type), such as inclusion in an index or a merger, for multiple firms or (ii) multiple repetitions of a given event (type) for a given firm. An example of the first possibility would be studying the effect of being included in the S&P 500 index for multiple firms; an example of the second possibility would be studying the effect of mergers for a given firm. In terms of the statistical methodology, both possibilities are handled in the same way.
For the computation of the abnormal return of firm $i$ on day $t$, denoted by $AR_{i,t}$, we refer the user to the introduction. If more than one instance is considered, let $N$ denote the number of instances and define
\begin{equation}AAR_{t}= \frac{1}{N} \sum\limits_{i=1}^{N}AR_{i,t} \end{equation}
\begin{equation}CAR_{i}=\sum\limits_{t=T_1 + 1}^{T_2} AR_{i,t} \end{equation}
\begin{equation}CAAR=\frac{1}{N}\sum\limits_{i=1}^{N}CAR_i\end{equation}
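For illustration, the three averaged quantities can be computed in a few lines of Python, assuming the abnormal returns over the event window are already available in a NumPy array with one row per instance and one column per day (the array name and layout are our own assumptions, not prescribed above):

```python
import numpy as np

# hypothetical abnormal returns: N = 3 instances, event window of length L2 = 2
ar_event = np.array([[0.01, 0.02],
                     [0.00, 0.01],
                     [0.02, -0.01]])

def aar(ar, t):
    """Average abnormal return across instances on event-window day t (column index)."""
    return float(ar[:, t].mean())

def car(ar_i):
    """Cumulative abnormal return of one instance over the event window."""
    return float(np.sum(ar_i))

def caar(ar):
    """Cumulative average abnormal return: the mean of the per-instance CARs."""
    return float(ar.sum(axis=1).mean())
```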
The literature on event-study hypothesis testing covers a wide range of tests. Generally, significance tests can be classified into parametric and nonparametric tests. Parametric tests (at least in the field of event studies) assume that the individual firms' abnormal returns are normally distributed, whereas nonparametric tests do not rely on any such assumption. Applied researchers typically carry out both parametric and nonparametric tests to verify that the research findings are not driven by non-normal returns or outliers, which tend to affect the results of parametric tests but not the results of nonparametric tests; for example, see Schipper and Smith (1983).
Table 1 lists the various tests according to the null hypothesis for which they can be used. Table 2 lists them by their name and presents strengths and weaknesses compiled from Kolari and Pynnönen (2011).
| Null Hypothesis | Parametric Tests | Nonparametric Tests | Setting |
|---|---|---|---|
| $H_0: E(AR) = 0$ | T Test | Permutation Test | Single instance |
| $H_0: E(AAR) = 0$ | Cross-Sectional Test, Time-Series Standard Deviation Test, Patell Test, Adjusted Patell Test, Standardized Cross-Sectional Test, Adjusted Standardized Cross-Sectional Test, Skewness-Corrected Test | Generalized Sign Test, Generalized Rank T Test, Generalized Rank Z Test, Wilcoxon Test | Multiple instances |
| $H_0: E(CAR) = 0$ | T Test | Permutation Test | Single instance |
| $H_0: E(CAAR) = 0$ | Cross-Sectional Test, Time-Series Standard Deviation Test, Patell Test, Adjusted Patell Test, Standardized Cross-Sectional Test, Adjusted Standardized Cross-Sectional Test, Skewness-Corrected Test | Generalized Sign Test, Generalized Rank T Test, Generalized Rank Z Test | Multiple instances |
| # | Name | Key Reference | EST Abbreviation | Strengths and Weaknesses |
|---|------|---------------|------------------|--------------------------|
| 1 | T Test | | | |
| 2 | Cross-Sectional Test | | CSect T | |
| 3 | Time-Series Standard Deviation Test | | CDA T | |
| 4 | Patell Test | Patell (1976) | Patell Z | |
| 5 | Adjusted Patell Test | Kolari and Pynnönen (2010) | Adjusted Patell Z | |
| 6 | Standardized Cross-Sectional Test | Boehmer, Musumeci and Poulsen (1991) | StdCSect T | |
| 7 | Adjusted Standardized Cross-Sectional Test | Kolari and Pynnönen (2010) | Adjusted StdCSect T | |
| 8 | Skewness-Corrected Test | Hall (1992) | Skewness-Corrected T | |
| 9 | Jackknife Test | Giaccotto and Sfiridis (1996) | Jackknife T | |
| 10 | Corrado Rank Test | Corrado and Zivney (1992) | Rank Z | |
| 11 | Generalized Rank T Test | Kolari and Pynnönen (2011) | Generalized Rank T | |
| 12 | Generalized Rank Z Test | Kolari and Pynnönen (2011) | Generalized Rank Z | |
| 13 | Sign Test | Cowan (1992) | Sign Z | |
| 14 | Cowan Generalized Sign Test | Cowan (1992) | Generalized Sign Z | |
| 15 | Wilcoxon Signed-Rank Test | Wilcoxon (1945) | Wilcoxon | |
| 16 | Permutation Test | Nguyen and Wolf (2023) | Permutation | |
In describing the formulas for the test statistics and their (approximate) distributions under the null, which are used to compute $p$-values, we follow the order in Table 2.
Some Preliminaries
The estimation window is given by $\{T_0, \ldots, T_1\}$ and thus has length $L_1 = T_1 - T_0 + 1$. The event window is given by $\{T_1+1, \ldots, T_2\}$ and thus has length $L_2 = T_2 - T_1$. This convention implies that the estimation window ends immediately before the event window. We will stick to this convention for simplicity in all the formulas below, but note that our methodology also allows for an arbitrary gap between the two windows, as specified by the user.
If the event window is of length one (that is, contains a single day only), we use the convention $T_1+1 = 0 = T_2$. More generally, it always holds that $T_1 +1 \le 0 \le T_2$.
If multiple instances are considered, $N$ denotes the number of instances.
For any given firm $i$, $S_{AR_i}$ denotes the sample standard deviation of the abnormal returns during the estimation window, which is given as the square root of the corresponding sample variance
$$S^2_{AR_i} = \frac{1}{M_i - K} \sum\limits_{t=T_0}^{T_1}AR_{i,t}^2$$
Here, $M_{i}$ denotes the number of nonmissing returns during the estimation window; for example, $M_i = T_1 - T_0 +1$ in case of no missing observations. Furthermore, $K$ denotes the degrees of freedom (given by the number of free parameters) in the benchmark model that was used to compute the abnormal returns; for example, $K=1$ for the constant-expected-return model, $K=2$ for the market model, and $K=4$ for the three-factor Fama-French model (which also contains a constant in addition to the three stochastic factors).
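As an illustration, $S_{AR_i}$ can be computed as follows; encoding missing returns as NaN is our own assumption:

```python
import numpy as np

def ar_sample_sd(ar_est, k):
    """Estimation-window sample standard deviation S_AR_i.

    ar_est: abnormal returns during the estimation window (NaN = missing)
    k: degrees of freedom of the benchmark model (e.g. 2 for the market model)
    """
    ar = np.asarray(ar_est, dtype=float)
    ar = ar[~np.isnan(ar)]          # keep the M_i non-missing observations
    m_i = ar.size
    return float(np.sqrt(np.sum(ar ** 2) / (m_i - k)))
```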
Finally, $N(0,1)$ denotes the standard normal distribution and $t_k$ denotes the $t$distribution with $k$ degrees of freedom.
Parametric Tests
[1] T Test
[1.1] Null hypothesis of interest: $H_0: E(AR_{i,0}) = 0$
Test statistic:
$$t =\frac{AR_{i,0}}{S_{AR_i}}$$
Approximate null distribution: $t \stackrel{\cdot}{\sim} t_{M_i-K}$
[1.2] Null hypothesis of interest: $H_0: E(CAR_i) = 0$
Test statistic:
$$t=\frac{CAR_{i}}{S_{CAR_i}} \quad \mbox{with} \quad S^2_{CAR_i} = L_2 S^2_{AR_i}$$
Approximate null distribution: $t \stackrel{\cdot}{\sim} t_{M_i-K}$
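A sketch of the CAR version in Python (function and argument names are ours; for an event window of length one it reduces to the AR test of [1.1]):

```python
import numpy as np
from scipy import stats

def t_test_car(ar_est, ar_event, k):
    """T test for H0: E(CAR_i) = 0."""
    ar_est = np.asarray(ar_est, dtype=float)
    m_i = ar_est.size
    s2_ar = np.sum(ar_est ** 2) / (m_i - k)     # S^2_AR_i
    l2 = len(ar_event)
    t = np.sum(ar_event) / np.sqrt(l2 * s2_ar)  # CAR_i / S_CAR_i
    p = 2 * stats.t.sf(abs(t), df=m_i - k)      # two-sided p-value
    return float(t), float(p)
```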
[2] CrossSectional Test (Abbr.: CSect T)
[2.1] Null hypothesis of interest: $H_0: E(AAR_0) = 0$
Test statistic:
$$ t= \sqrt{N} \frac{AAR_0}{S_{AAR,0}} \quad \mbox{with} \quad S^2_{AAR,0} = \frac{1}{N-1} \sum\limits_{i=1}^{N}(AR_{i,0} - AAR_0)^2$$
Approximate null distribution: $t \stackrel{\cdot}{\sim} t_{N-1}$
[2.2] Null hypothesis of interest: $H_0: E(CAAR) = 0$
Test statistic:
$$ t = \sqrt{N} \frac{CAAR}{S_{CAAR}} \quad \mbox{with} \quad S^2_{CAAR} = \frac{1}{N-1} \sum\limits_{i=1}^{N}(CAR_{i} - CAAR)^2$$
Approximate null distribution: $t \stackrel{\cdot}{\sim} t_{N-1}$
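A minimal sketch of the CAAR version (names are ours; the input holds one CAR per instance):

```python
import numpy as np
from scipy import stats

def csect_t(car):
    """Cross-sectional test for H0: E(CAAR) = 0."""
    car = np.asarray(car, dtype=float)
    n = car.size
    t = np.sqrt(n) * car.mean() / car.std(ddof=1)   # S_CAAR uses 1/(N-1)
    p = 2 * stats.t.sf(abs(t), df=n - 1)            # two-sided p-value
    return float(t), float(p)
```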
[3] Time-Series Standard Deviation or Crude Dependence Test (Abbr.: CDA T)
[3.1] Null hypothesis of interest: $H_0: E(AAR_0) = 0$
Test statistic:
$$t = \sqrt{N} \frac{AAR_0}{S_{AAR}} \quad \mbox{with} \quad S^2_{AAR} = \frac{1}{M-1} \sum\limits_{t=T_0}^{T_1} \Bigl (AAR_{t} - \frac{1}{M} \sum\limits_{t=T_0}^{T_1} AAR_t \Bigr )^2$$
where $M$ denotes the number of nonmissing $AAR_t$ during the estimation window.
Approximate null distribution: $t \stackrel{\cdot}{\sim} t_{M-1}$
[3.2] Null hypothesis of interest: $H_0: E(CAAR) = 0$
Test statistic:
$$ t = \sqrt{N} \frac{CAAR}{S_{CAAR}} \quad \mbox{with} \quad S^2_{CAAR} = \frac{1}{M-1} \sum\limits_{t=T_0}^{T_1} \Bigl (CAAR_{t} - \frac{1}{M} \sum\limits_{t=T_0}^{T_1} CAAR_t \Bigr )^2$$
where $M$ denotes the number of nonmissing $CAAR_t$ during the estimation window.
Approximate null distribution: $t \stackrel{\cdot}{\sim} t_{M-1}$
[4] Patell or Standardized Residual Test (Abbr.: Patell Z)
[4.1] Null hypothesis of interest: $H_0: E(AAR_0) = 0$
Test statistic:
$$z = \frac{ASAR_0}{S_{ASAR}}$$
The underlying idea is to standardize each $AR_{i,t}$ by the so-called forecast-error-corrected standard deviation before calculating the test statistic; for example, for the market model,
$$SAR_{i,0} = \frac{AR_{i,0}}{S_{AR_{i,0}}} \quad \mbox{with} \quad S^2_{AR_{i,0}} = S^2_{AR_i}\left (1 + \frac{1}{M_i} +\frac{(R_{m,0} - \overline R_m)^2}{\sum\limits_{t=T_0}^{T_1}(R_{m,t} - \overline R_m)^2}\right ) \quad \mbox{and} \quad \overline R_m = \frac{1}{L_1} \sum\limits_{t=T_0}^{T_1}R_{m,t}$$
where $R_{m,t}$ denotes the market return on day $t$. (The standardization is analogous for any other day $t$ in the event window.)
Then compute
$$ ASAR_0 = \sum_{i=1}^N SAR_{i,0} $$
Under the null, this statistic has expectation zero and variance
$$S_{ASAR}^2 = \sum_{i=1}^N \frac{M_i-2}{M_i-4}$$
Approximate null distribution: $z \stackrel{\cdot}{\sim} N(0,1)$
[4.2] Null hypothesis of interest: $H_0: E(CAAR) = 0$
Test statistic:
$$z = \frac{1}{\sqrt{N}} \sum_{i=1}^N \frac{CSAR_i}{S_{CSAR_i}}$$
where $CSAR_i$ denotes the cumulative standardized abnormal return of firm $i$:
$$CSAR_i = \sum_{t=T_1+1}^{T_2} SAR_{i,t}$$
which under the null has expectation zero and variance
$$S_{CSAR_i}^2 = L_2\frac{M_i-2}{M_i-4}$$
Approximate null distribution: $z \stackrel{\cdot}{\sim} N(0,1)$
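Given the standardized abnormal returns $SAR_{i,t}$ over the event window and the counts $M_i$, the CAAR version can be sketched as follows (the array layout is our assumption; the two-sided normal $p$-value is computed via `math.erfc`):

```python
import math
import numpy as np

def patell_z_caar(sar_event, m):
    """Patell Z for H0: E(CAAR) = 0.

    sar_event: (N, L2) array of standardized abnormal returns SAR_{i,t}
    m: length-N sequence of non-missing estimation-window counts M_i
    """
    sar_event = np.asarray(sar_event, dtype=float)
    m = np.asarray(m, dtype=float)
    n, l2 = sar_event.shape
    csar = sar_event.sum(axis=1)              # CSAR_i
    s2_csar = l2 * (m - 2.0) / (m - 4.0)      # null variance of CSAR_i
    z = np.sum(csar / np.sqrt(s2_csar)) / math.sqrt(n)
    p = math.erfc(abs(z) / math.sqrt(2.0))    # two-sided N(0,1) p-value
    return float(z), float(p)
```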
[5] Kolari and Pynnönen Adjusted Patell or Standardized Residual Test (Abbr.: Adjusted Patell Z)
[5.1] Null hypothesis of interest: $H_0: E(AAR_0) = 0$
Test statistic:
$$z_{\text{adj}} = z \cdot \sqrt{\frac{1-\bar r}{1 + (N-1) \bar r}}$$
where $z$ is defined as in [4.1] and $\bar r$ denotes the average of the (pairwise) sample cross-correlations of the estimation-period abnormal returns.
Approximate null distribution: $z_{\text{adj}} \stackrel{\cdot}{\sim} N(0,1)$
[5.2] Null hypothesis of interest: $H_0: E(CAAR) = 0$
Test statistic:
$$z_{\text{adj}} = z \cdot \sqrt{\frac{1-\bar r}{1 + (N-1) \bar r}}$$
where $z$ is defined as in [4.2] and $\bar r$ denotes the average of the (pairwise) sample cross-correlations of the estimation-period abnormal returns.
Approximate null distribution: $z_{\text{adj}} \stackrel{\cdot}{\sim} N(0,1)$
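The adjustment factor can be computed directly from the estimation-window abnormal returns; a sketch, with `np.corrcoef` supplying the pairwise correlations (function and argument names are ours):

```python
import numpy as np

def adjust_for_cross_correlation(z, ar_est):
    """Scale a Patell-type statistic by sqrt((1 - rbar) / (1 + (N-1) rbar)),
    where rbar is the average pairwise cross-correlation of the
    estimation-window abnormal returns (one row of ar_est per firm)."""
    c = np.corrcoef(np.asarray(ar_est, dtype=float))   # N x N correlation matrix
    n = c.shape[0]
    rbar = (c.sum() - n) / (n * (n - 1))               # mean off-diagonal entry
    return float(z * np.sqrt((1 - rbar) / (1 + (n - 1) * rbar)))
```

Note that perfectly cross-correlated abnormal returns ($\bar r = 1$) drive the factor, and hence the adjusted statistic, to zero.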
[6] Standardized Cross-Sectional or BMP Test (Abbr.: StdCSect T)
[6.1] Null hypothesis of interest: $H_0: E(AAR_0) = 0$
Test statistic:
$$ t = \frac{ASAR_0}{\sqrt{N} S_{ASAR,0}} \quad \mbox{with} \quad S^2_{ASAR,0} = \frac{1}{N-1} \sum\limits_{i=1}^{N}\Bigl (SAR_{i,0} - \frac{1}{N} \sum_{i=1}^N SAR_{i,0} \Bigr )^2$$
with $SAR_{i,0}$ and $ASAR_0$ defined as in [4.1].
Approximate null distribution: $t \stackrel{\cdot}{\sim} t_{N-1}$
[6.2] Null hypothesis of interest: $H_0: E(CAAR) = 0$
Test statistic:
$$ t =\sqrt{N} \frac{\overline{SCAR}}{S_{\overline{SCAR}}}$$
where
$$\overline{SCAR} = \frac{1}{N} \sum_{i=1}^N SCAR_i \quad \mbox{and} \quad S_{\overline{SCAR}}^2 = \frac{1}{N-1} \sum_{i=1}^N \bigl ( SCAR_i - \overline{SCAR} \bigr )^2$$
These statistics are based on
$$SCAR_i = \frac{CAR_i}{S_{CAR_i}}$$
where $S_{CAR_i}$ denotes the forecast-error-corrected standard deviation; for example, for the market model,
$$S_{CAR_i}^2 = S_{AR_i}^2\left (L_2 + \frac{L_2}{M_i} +\frac{\sum_{t=T_1+1}^{T_2} (R_{m,t} - \bar R_m)^2}{\sum_{t=T_0}^{T_1} (R_{m,t} - \bar R_m)^2}\right )$$
Approximate null distribution: $t \stackrel{\cdot}{\sim} t_{N-1}$
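The forecast-error-corrected variance under the market model can be sketched as follows (argument names are ours):

```python
import numpy as np

def s2_car_market_model(s2_ar, rm_est, rm_event):
    """Forecast-error-corrected variance S^2_CAR_i under the market model.

    s2_ar: estimation-window variance S^2_AR_i of firm i
    rm_est: market returns during the estimation window
    rm_event: market returns during the event window
    """
    rm_est = np.asarray(rm_est, dtype=float)
    rm_event = np.asarray(rm_event, dtype=float)
    m_i, l2 = rm_est.size, rm_event.size
    rbar = rm_est.mean()
    # forecast-error term: event-window market variation relative to estimation window
    ratio = np.sum((rm_event - rbar) ** 2) / np.sum((rm_est - rbar) ** 2)
    return float(s2_ar * (l2 + l2 / m_i + ratio))
```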
[7] Kolari and Pynnönen Adjusted Standardized Cross-Sectional or BMP Test (Abbr.: Adjusted StdCSect T)
[7.1] Null hypothesis of interest: $H_0: E(AAR_0) = 0$
Test statistic:
$$t_{\text{adj}} = t \cdot \sqrt{\frac{1-\bar r}{1 + (N-1) \bar r}}$$
where $t$ is defined as in [6.1] and $\bar r$ denotes the average of the (pairwise) sample cross-correlations of the estimation-period abnormal returns.
Approximate null distribution: $t_{\text{adj}} \stackrel{\cdot}{\sim} t_{N-1}$
[7.2] Null hypothesis of interest: $H_0: E(CAAR) = 0$
Test statistic:
$$t_{\text{adj}} = t \cdot \sqrt{\frac{1-\bar r}{1 + (N-1) \bar r}}$$
where $t$ is defined as in [6.2] and $\bar r$ denotes the average of the (pairwise) sample cross-correlations of the estimation-period abnormal returns.
Approximate null distribution: $t_{\text{adj}} \stackrel{\cdot}{\sim} t_{N-1}$
[8] SkewnessCorrected Test (Abbr.: SkewnessCorrected T)
[8.1] Null hypothesis of interest: $H_0: E(AAR_0) = 0$
Test statistic:
$$ t = \sqrt{N} \left ( S + \frac{1}{3} \gamma S^2 + \frac{1}{27} \gamma^2 S^3 + \frac{1}{6N} \gamma \right )$$
As far as the ingredients are concerned, first recall the cross-sectional sample variance
$$S^2_{AAR,0} = \frac{1}{N-1} \sum\limits_{i=1}^{N}(AR_{i,0} - AAR_0)^2$$
Next, the corresponding sample skewness is given by
$$\gamma = \frac{N}{(N-2)(N-1)}\sum_{i=1}^N \frac{(AR_{i,0} - AAR_0)^3}{S^3_{AAR,0}}$$
Finally, let
$$S = \frac{AAR_0}{S_{AAR,0}}$$
Approximate null distribution: $t \stackrel{\cdot}{\sim} t_{N-1}$
[8.2] Null hypothesis of interest: $H_0: E(CAAR) = 0$
Test statistic:
$$ t = \sqrt{N} \left (S + \frac{1}{3} \gamma S^2 + \frac{1}{27} \gamma^2 S^3+ \frac{1}{6N} \gamma \right )$$
As far as the ingredients are concerned, first recall the cross-sectional sample variance
$$S^2_{CAAR} = \frac{1}{N-1} \sum\limits_{i=1}^{N}(CAR_{i} - CAAR)^2$$
Next, the corresponding sample skewness is given by
$$\gamma = \frac{N}{(N-2)(N-1)}\sum_{i=1}^N \frac{(CAR_{i} - CAAR)^3}{S^3_{CAAR}}$$
Finally, let
$$S = \frac{CAAR}{S_{CAAR}}$$
Approximate null distribution: $t \stackrel{\cdot}{\sim} t_{N-1}$
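A sketch of the CAAR version (names are ours); note that for symmetric data ($\gamma = 0$) the statistic collapses to the cross-sectional test of [2.2]:

```python
import numpy as np
from scipy import stats

def skewness_corrected_t(car):
    """Skewness-corrected test for H0: E(CAAR) = 0; one CAR per instance."""
    car = np.asarray(car, dtype=float)
    n = car.size
    sd = car.std(ddof=1)                                   # S_CAAR
    gamma = n / ((n - 2) * (n - 1)) * np.sum((car - car.mean()) ** 3) / sd ** 3
    s = car.mean() / sd
    t = np.sqrt(n) * (s + gamma * s ** 2 / 3
                      + gamma ** 2 * s ** 3 / 27 + gamma / (6 * n))
    p = 2 * stats.t.sf(abs(t), df=n - 1)                   # two-sided p-value
    return float(t), float(p)
```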
[9] Jackknife Test (Abbr.: Jackknife T)
This test will be added in a future version.
Nonparametric Tests
[10] Corrado Rank Test (Abbr.: Rank Z)
[10.1] Null hypothesis of interest: $H_0: E(AAR_0) = 0$
Test statistic:
$$z = \frac{\bar K_0 - 0.5}{S_{\bar {K}}}$$
Start by computing, for any $i$, a vector of ‘scaled’ ranks based on the combined sample $\{AR_{i,t}\}_{t = T_0}^{T_2}$:
$$K_{i,t} = \frac{\mbox{rank}(AR_{i,t})}{1+M_i+L_{2,i}}$$
where $L_{2,i}$ denotes the number of nonmissing $AR_{i,t}$ during the event window.
Then, for any $t$, denote the number of nonmissing $K_{i,t}$ by $N_t$ and define
$$\bar K_t = \frac{1}{N_t} \sum_{i=1}^N K_{i,t} \quad \mbox{and} \quad S_{\bar{K}}^2 = \frac{1}{L_1 + L_2}\sum_{t=T_0}^{T_2} \bigl (\bar K_t - 0.5 \bigr )^2$$
Approximate null distribution: $z \stackrel{\cdot}{\sim} N(0,1)$
[10.2] Null hypothesis of interest: $H_0: E(CAAR) = 0$
Test statistic:
$$z = \sqrt{L_2}\left (\frac{\bar K_{T_1+1,T_2} - 0.5}{S_{\bar {K}}} \right)\quad \mbox{with} \quad \bar K_{T_1+1,T_2} = \frac{1}{L_2} \sum_{t=T_1+1}^{T_2} \bar K_t$$
Approximate null distribution: $z \stackrel{\cdot}{\sim} N(0,1)$
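A sketch of the AAR version under two simplifying assumptions of ours: no missing data, and a single event day located in the last column of the input array:

```python
import numpy as np
from scipy import stats

def corrado_rank_z(ar, est_len):
    """Corrado rank Z for H0: E(AAR_0) = 0, assuming no missing data.

    ar: (N, L1 + L2) abnormal returns over estimation plus event window
    est_len: L1; day 0 is taken to be the first event-window column
    """
    ar = np.asarray(ar, dtype=float)
    total = ar.shape[1]
    k = stats.rankdata(ar, axis=1) / (1 + total)     # scaled ranks K_{i,t}
    kbar = k.mean(axis=0)                            # \bar K_t
    s_kbar = np.sqrt(np.mean((kbar - 0.5) ** 2))
    return float((kbar[est_len] - 0.5) / s_kbar)
```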
[11] Generalized Rank T Test (Abbr.: Generalized Rank T)
[11.1] Null hypothesis of interest: $H_0: E(AAR_0) = 0$
Test statistic:
$$t = Z \cdot \left (\frac{L_1 - 1}{L_1 - Z^2} \right )\quad \mbox{with} \quad Z = \frac{\bar U_{L_1+1}}{S_{\bar U}}$$
Arguably, this is the most complicated test statistic of them all, so it will take a while to describe its construction. For simplicity, we will assume no missing data anywhere.
For any $t$ during the estimation window, let $SAR_{i,t} = AR_{i,t} / S_{AR_i}$ and then compute $SAR_{i,0}$ as described in [4.1]. Next, use cross-sectional standardization to compute
$$ SAR_{i,0}^* = \frac{SAR_{i,0}}{S_{SAR_0}} \quad \mbox{with} \quad S_{SAR_0}^2 = \frac{1}{N-1} \sum_{i=1}^N \bigl (SAR_{i,0} - \overline{SAR_0} \bigr )^2 \quad \mbox{and} \quad \overline{SAR_0} = \frac{1}{N} \sum_{i=1}^N SAR_{i,0}$$
This, for any $i$, gives a time series of length $L_1 + 1$:
$$\{GSAR_{i,1}, \ldots, GSAR_{i,L_1}, GSAR_{i,L_1+1}\} =\{SAR_{i,T_0}, \ldots, SAR_{i,T_1}, SAR_{i,0}^*\}$$
Next, for any $i$, let
$$ U_{i,t} = \frac{\mbox{rank}(GSAR_{i,t})}{L_1+2} - 0.5$$
where the ranks are across $t \in \{1,\ldots, L_1+1\}$
Next, for any $t$, let
$$\bar U_t = \frac{1}{N} \sum_{i=1}^N U_{i,t}$$
and then let
$$S_{\bar U}^2 = \frac{1}{L_1+1} \sum_{t=1}^{L_1+1} \bar U_t^2$$
noting that, necessarily, the average of the values $\{\bar U_t\}_{t=1}^{L_1+1}$ is zero
Approximate null distribution: $t \stackrel{\cdot}{\sim} t_{L_1 - 1}$
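Under the same no-missing-data assumption, the construction can be condensed as follows (inputs: the estimation-window $SAR_{i,t}$ and the event-day $SAR_{i,0}$ from [4.1]; names are ours):

```python
import numpy as np
from scipy import stats

def grank_t(sar_est, sar0):
    """Generalized rank T for H0: E(AAR_0) = 0, assuming no missing data.

    sar_est: (N, L1) standardized abnormal returns over the estimation window
    sar0: length-N forecast-error-corrected SAR_{i,0} (see [4.1])
    """
    sar_est = np.asarray(sar_est, dtype=float)
    sar0 = np.asarray(sar0, dtype=float)
    l1 = sar_est.shape[1]
    sar0_star = sar0 / sar0.std(ddof=1)              # cross-sectional restandardization
    gsar = np.column_stack([sar_est, sar0_star])     # GSAR: L1 + 1 values per firm
    u = stats.rankdata(gsar, axis=1) / (l1 + 2) - 0.5
    ubar = u.mean(axis=0)                            # \bar U_t
    z = ubar[-1] / np.sqrt(np.mean(ubar ** 2))       # Z = Ubar_{L1+1} / S_Ubar
    return float(z * (l1 - 1) / (l1 - z ** 2))
```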
[11.2] Null hypothesis of interest: $H_0: E(CAAR) = 0$
Test statistic:
$$t = Z \cdot \left (\frac{L_1 - 1}{L_1 - Z^2} \right ) \quad \mbox{with} \quad Z = \frac{\bar U_{L_1+1}}{S_{\bar U}}$$
The construction parallels the one in [11.1]; again, for simplicity, we assume no missing data anywhere.
Compute $SCAR_i$ as described in [6.2]; use cross-sectional standardization to compute
$$ SCAR_i^* = \frac{SCAR_i}{S_{SCAR}} \quad \mbox{with} \quad S_{SCAR}^2 = \frac{1}{N-1} \sum_{i=1}^N \bigl (SCAR_i - \overline{SCAR} \bigr )^2 \quad \mbox{and} \quad \overline{SCAR} = \frac{1}{N} \sum_{i=1}^N SCAR_i$$
This, for any $i$, gives a time series of length $L_1 + 1$:
$$\{GSAR_{i,1}, \ldots, GSAR_{i,L_1}, GSAR_{i,L_1+1}\} =\{SAR_{i,T_0}, \ldots, SAR_{i,T_1}, SCAR_i^*\}$$
Next, for any $i$, let
$$U_{i,t} = \frac{\mbox{rank}(GSAR_{i,t})}{L_1+2} - 0.5$$
where the ranks are across $t\in\{1,\ldots, L_1+1\}$
Next, for any $t$, let
$$\bar U_t = \frac{1}{N} \sum_{i=1}^N U_{i,t}$$
and then let
$$S_{\bar U}^2 = \frac{1}{L_1+1} \sum_{t=1}^{L_1+1} \bar U_t^2$$
noting that, necessarily, the average of the values $\{\bar U_t\}_{t=1}^{L_1+1}$ is zero
Approximate null distribution: $t \stackrel{\cdot}{\sim} t_{L_1 - 1}$
[12] Generalized Rank Z Test (Abbr.: Generalized Rank Z)
[12.1] Null hypothesis of interest: $H_0: E(AAR_0) = 0$
Test statistic:
$$ z = \frac{\bar U_{L_1+1}}{S_{\bar U_{L_1+1}}} \quad \mbox{with} \quad S_{\bar U_{L_1+1}}^2 = \frac{L_1}{12 N (L_1+2)}$$
where the ingredients are defined as in [11.1]
Approximate null distribution: $z \stackrel{\cdot}{\sim} N(0,1)$
[12.2] Null hypothesis of interest: $H_0: E(CAAR) = 0$
Test statistic:
$$z = \frac{\bar U_{L_1+1}}{S_{\bar U_{L_1+1}}} \quad \mbox{with} \quad S_{\bar U_{L_1+1}}^2 = \frac{L_1}{12 N (L_1+2)}$$
where the ingredients are defined as in [11.2]
Approximate null distribution: $z \stackrel{\cdot}{\sim} N(0,1)$
[13] Sign Test (Abbr.: Sign Z)
[13.1] Null hypothesis of interest: $H_0: E(AAR_0) = 0$
Test statistic:
$$z = \frac{w - N \cdot 0.5}{\sqrt{N\cdot 0.5 \cdot 0.5}}$$
where $w$ is the number of the $AR_{i,0}$ that are positive
Approximate null distribution: $z \stackrel{\cdot}{\sim} N(0,1)$
[13.2] Null hypothesis of interest: $H_0: E(CAAR) = 0$
Test statistic:
$$z = \frac{w - N \cdot 0.5}{\sqrt{N\cdot 0.5 \cdot 0.5}}$$
where $w$ is the number of the $CAR_i$ during the event window that are positive
Approximate null distribution: $z \stackrel{\cdot}{\sim} N(0,1)$
[14] Generalized Sign Test (Abbr.: Generalized Sign Z)
[14.1] Null hypothesis of interest: $H_0: E(AAR_0) = 0$
Test statistic:
$$z = \frac{w - N \cdot \widehat p}{\sqrt{N\cdot \widehat p (1 - \widehat p)}}$$
where $w$ is the number of the $AR_{i,0}$ that are positive and $\widehat p$ is the fraction of the $AR_{i,t}$ during the estimation window (across both $i$ and $t$) that are positive
Approximate null distribution: $z \stackrel{\cdot}{\sim} N(0,1)$
[14.2] Null hypothesis of interest: $H_0: E(CAAR) = 0$
Test statistic:
$$z = \frac{w - N \cdot \widehat p}{\sqrt{N\cdot \widehat p (1 - \widehat p)}}$$
where $w$ is the number of the $CAR_i$ during the event window that are positive and $\widehat p$ is the fraction of the $AR_{i,t}$ during the estimation window (across both $i$ and $t$) that are positive
Approximate null distribution: $z \stackrel{\cdot}{\sim} N(0,1)$
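A sketch of the CAAR version (names are ours; the two-sided normal $p$-value is computed via `math.erfc`):

```python
import math
import numpy as np

def generalized_sign_z(car, ar_est):
    """Generalized sign test for H0: E(CAAR) = 0.

    car: one CAR per firm
    ar_est: (N, L1) estimation-window abnormal returns
    """
    car = np.asarray(car, dtype=float)
    n = car.size
    w = int(np.sum(car > 0))                        # number of positive CARs
    p_hat = float(np.mean(np.asarray(ar_est) > 0))  # fraction of positive ARs
    z = (w - n * p_hat) / math.sqrt(n * p_hat * (1 - p_hat))
    p = math.erfc(abs(z) / math.sqrt(2.0))          # two-sided N(0,1) p-value
    return float(z), float(p)
```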
[15] Wilcoxon Test (Abbr.: Wilcoxon)
[15.1] Null hypothesis of interest: $H_0: E(AAR_0) = 0$
The Wilcoxon test is a nonparametric test based on the ranks of the $AR_{i,0}$ across $i$. The (exact) distribution of the test statistic under the null, upon which we base the $p$value, is nonstandard and we refer the user to the original paper of Wilcoxon (1945) or any suitable textbook for the details.
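In practice one can rely on an existing implementation; for example, `scipy.stats.wilcoxon` computes the signed-rank statistic and its $p$-value (the day-0 abnormal returns below are made up for illustration):

```python
from scipy import stats

# hypothetical day-0 abnormal returns for N = 6 firms
ar0 = [0.012, -0.004, 0.021, 0.008, -0.001, 0.015]

# two-sided Wilcoxon signed-rank test of symmetry about zero
res = stats.wilcoxon(ar0, alternative="two-sided")
```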
[15.2] Null hypothesis of interest: $H_0: E(CAAR) = 0$
The Wilcoxon test is not available for this null hypothesis.
[16] Permutation Test (Abbr.: Permutation)
[16.1] Null hypothesis of interest: $H_0: E(AR_{i,0}) = 0$
The permutation test is a nonparametric test that computes the $p$value in a datadependent (or resamplingbased) fashion. We refer the user to Nguyen and Wolf (2023) for the details.
[16.2] Null hypothesis of interest: $H_0: E(CAR_{i}) = 0$
The permutation test is a nonparametric test that computes the $p$value in a datadependent (or resamplingbased) fashion. We refer the user to Nguyen and Wolf (2023) for the details.
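As a rough illustration of the resampling idea, consider a generic sign-flip scheme under an assumed symmetric null; this is a sketch of the general approach, not necessarily the exact procedure of Nguyen and Wolf (2023):

```python
import numpy as np

def sign_flip_permutation_p(ar_event, n_perm=10_000, seed=0):
    """Two-sided permutation p-value for H0: E(CAR_i) = 0 based on random
    sign flips of the event-window abnormal returns of a single instance."""
    rng = np.random.default_rng(seed)
    ar_event = np.asarray(ar_event, dtype=float)
    observed = abs(ar_event.sum())                       # |CAR_i|
    signs = rng.choice([-1.0, 1.0], size=(n_perm, ar_event.size))
    perm = np.abs((signs * ar_event).sum(axis=1))        # resampled |CAR_i|
    # count the observed statistic itself so the p-value is never zero
    return float((1 + np.sum(perm >= observed)) / (n_perm + 1))
```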