Current Projects
1. Optimal Design of Experimental Studies Investigating Moderation and Main Effects.
Experimental studies investigating moderation and main effects provide the source material for improving the quality of, and equity in, education by delineating the impact of an intervention and the contexts, conditions, and sub-populations for which it is most effective. Despite sustained interest in such studies, the literature has not developed accessible optimal sampling strategies to help plan powerful and efficient designs to detect these effects. This project, funded by the Spencer Foundation, aims to develop a flexible optimal design framework so that researchers can design well-powered studies with minimal financial resources. Preliminary results show that the proposed framework can identify designs that are substantially more efficient or powerful than those produced by conventional design frameworks. The project has the potential for broad impact because it facilitates a fundamental shift in the principles and strategies of study design.
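The core idea of a cost-constrained design search can be illustrated with a small sketch. The code below enumerates candidate cluster sizes for a hypothetical two-level cluster-randomized trial, spends an assumed budget on clusters and participants, and keeps the design with the highest approximate power for a main effect. The budget, unit costs, intraclass correlation, and effect size are all illustrative assumptions, not values or results from the project.

```python
import math

# Hypothetical inputs: total budget, cost per cluster, cost per participant,
# intraclass correlation (ICC), and hypothesized standardized effect size.
budget, c_cluster, c_person = 50_000, 300, 20
rho, delta = 0.15, 0.25

def var_effect(J, n, rho):
    # Variance of the standardized treatment-effect estimate in a balanced
    # two-level cluster-randomized trial: J clusters of size n, half treated.
    return 4.0 * (rho + (1.0 - rho) / n) / J

best = None  # (power, J, n)
for n in range(2, 101):                        # candidate cluster sizes
    J = budget // (c_cluster + c_person * n)   # clusters affordable at size n
    if J < 4:
        continue
    z = delta / math.sqrt(var_effect(J, n, rho))
    # Approximate two-sided power at alpha = .05 via the normal CDF.
    power = 0.5 * (1.0 + math.erf((z - 1.96) / math.sqrt(2.0)))
    if best is None or power > best[0]:
        best = (power, J, n)

optimal_power, optimal_J, optimal_n = best
```

Searching over designs under a budget constraint, rather than fixing the sample allocation in advance, is what allows the same money to buy a more powerful study.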
2. Statistical Power Formulas for Experiments Investigating Mediation Effects.
Experimental studies that investigate the mechanisms of interventions through mediation analyses often lay the groundwork for the development and iterative improvement of programs and policies aiming to improve key outcomes. Despite growing interest in mediation studies, the literature has not developed robust power formulas for experiments investigating mediation effects. This project aims to develop such formulas by accounting for the effects of covariates/confounders in the mediator-outcome path.
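As a point of reference, power for an indirect effect is often approximated by Monte Carlo simulation. The sketch below estimates power for the joint-significance test of the a (treatment-to-mediator) and b (mediator-to-outcome) paths in a simple single-level model with standardized paths. All parameter values are illustrative assumptions, and the sketch deliberately omits the covariate/confounder adjustments that the project's formulas target.

```python
import numpy as np

rng = np.random.default_rng(1)

def slope_z(x, y):
    # OLS slope of y on x (with intercept) divided by its standard error.
    X = np.column_stack([np.ones_like(x, dtype=float), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - 2)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

def mediation_power(n, a=0.3, b=0.3, n_sims=500, crit_z=1.96):
    # Power of the joint-significance test for the indirect effect a*b:
    # both the a path (x -> m) and the b path (m -> y) must be significant.
    hits = 0
    for _ in range(n_sims):
        x = rng.integers(0, 2, n).astype(float)  # randomized treatment
        m = a * x + rng.normal(0.0, 1.0, n)      # mediator model
        y = b * m + rng.normal(0.0, 1.0, n)      # outcome model (no direct effect)
        if abs(slope_z(x, m)) > crit_z and abs(slope_z(m, y)) > crit_z:
            hits += 1
    return hits / n_sims

power_at_200 = mediation_power(200)  # estimated power with 200 participants
```

Closed-form power formulas of the kind the project develops replace this kind of simulation with a direct calculation, which matters in practice because planners rarely have the time or tooling to simulate every candidate design.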
3. Monte Carlo Confidence Intervals (MCCIs) to Test the Difference and Equivalence of Effects Across Replication Studies.
Replication is a powerful vehicle for verifying the findings of prior studies and can bridge existing and new knowledge. This project will develop the framework and tools to formally test the difference and equivalence of effects across replication studies. Specifically, it proposes a Monte Carlo confidence interval (MCCI) method to construct the confidence intervals for these tests. The methods will be implemented in an R package and a Shiny app.
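A minimal sketch of the MCCI idea, under assumed summary statistics: each study's effect is simulated from its estimated sampling distribution, the difference between draws is formed, and percentile bounds give an interval that supports both a difference test (does the CI exclude zero?) and an equivalence test (does the CI fall within a pre-specified margin?). The effect estimates, standard errors, and equivalence margin below are illustrative, not taken from the project.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical summary statistics from an original study and a replication:
# effect estimates and their standard errors (illustrative values only).
est_original, se_original = 0.40, 0.10
est_replication, se_replication = 0.25, 0.12

# Monte Carlo confidence interval (MCCI): simulate each effect from its
# sampling distribution, form the difference, and take percentile bounds.
n_draws = 100_000
draws_original = rng.normal(est_original, se_original, n_draws)
draws_replication = rng.normal(est_replication, se_replication, n_draws)
diff = draws_original - draws_replication

ci_low, ci_high = np.percentile(diff, [2.5, 97.5])

# Difference test: the effects differ at alpha = .05 if the CI excludes 0.
effects_differ = not (ci_low <= 0.0 <= ci_high)

# Equivalence test: the effects are equivalent if the CI lies entirely
# inside a pre-specified margin (+/- 0.30 here, an assumed margin).
margin = 0.30
effects_equivalent = (-margin < ci_low) and (ci_high < margin)
```

With these illustrative inputs the two tests can disagree in both directions at once: the interval neither excludes zero nor fits inside the margin, so the studies are neither demonstrably different nor demonstrably equivalent, which is exactly the distinction the difference-versus-equivalence framework is designed to surface.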