
## About the Data – Achievement Gap Dashboard

**Achievement Gap Dashboard, In Short**

The Achievement Gap dashboard displays gaps in performance between a minority group (the target group) and a majority group (the comparison group) at the school, district, and state levels. Groups include race/ethnicity, economic status, disability status, and English proficiency. Graphs cover the Badger Exam and ACT statewide assessments as well as attendance rates.

## Dashboard Description

**How does the Achievement Gap dashboard differ from other dashboards?**

#### Different Display Structure

The structure of the achievement gap graphs is different from that of the other graphs in WISEdash that you may be familiar with. There are four major differences in the way that results are displayed in the Achievement Gap dashboard:

- Gaps-based instead of Results-based: unlike other dashboards, the purpose of this dashboard is not to display the results for a given subgroup but rather to display the gap in results between two subgroups.
- School-level Detail instead of Student-level Detail: in most secure dashboards, users can drill down to student-level detail. Because this dashboard is based on gaps between groups of students rather than on individual student results, it shows gaps at the school level, and users can drill down only to school-level detail.
- Inclusion of State and District-level Comparison: although state and district-level results are displayed in WISEdash Public, such aggregated results are typically not displayed in WISEdash for Districts. The state and district gaps are displayed here to give the user additional comparative information.
- Group Size Filter: by default, gaps are not displayed when either the target or comparison group has fewer than 20 students. Users need to be extra cautious when analyzing data for very small groups, which can exhibit large variations from year to year due to natural variation in individual student performance (i.e., they are statistically unstable). Users who would like to see gap data regardless of group size (for example, those focusing on a small number of students) can simply set the Small Groups filter to “Show”. Note that gaps cannot be shown when there are no students in either the target or comparison group.
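
As an illustration, the default suppression behavior described above can be sketched in a few lines of Python. This is a minimal sketch of the stated rule only; the function name and structure are assumptions, not the dashboard's actual code.

```python
def gap_is_displayed(target_n: int, comparison_n: int,
                     show_small_groups: bool = False) -> bool:
    """Sketch of the group-size filter rule (illustrative, not WISEdash code).

    By default a gap is hidden when either group has fewer than 20 students.
    Setting the Small Groups filter to "Show" overrides that, but a gap can
    never be shown when either group has no students at all.
    """
    if target_n == 0 or comparison_n == 0:
        return False                      # no gap can be calculated at all
    if show_small_groups:
        return True                       # user opted in to small groups
    return target_n >= 20 and comparison_n >= 20
```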

Rather than describing the performance of one student group, the achievement gap graphs compare the performance of two groups: the target group and the comparison group. A gap can only be calculated by comparing the two. The target group is the group the user is interested in studying (students with disabilities, economically disadvantaged students, English language learners, and students of the race/ethnicity groups not selected with the Comparison Race/Ethnicity filter). The comparison group is the group the target group is compared to (students without disabilities, non-economically disadvantaged students, English-proficient students, and students of the race/ethnicity group selected with the Comparison Race/Ethnicity filter).

Each bar shown in the graphs characterizes a school, district or state gap in performance between groups. **The larger the bar, the larger the gap** between the two groups. The smaller the bar, the smaller the gap between the two groups. If the bar is positive then the comparison group is performing better than the target group. If the bar is negative then the target group is performing better than the comparison group. The bars representing the gap are shown in effect size units. See the “Effect Size” section below for an explanation of effect size.

For example, one bar will represent the gap between students with disabilities and non-disabled students in the school, another bar will represent that gap in the district, and a third bar will represent that gap throughout the state.

#### Different Security Structure

Because the Achievement Gap dashboard displays results in a different manner, WISEdash user security mechanisms are applied differently as well.

__Summary-level Security__: Users who do not have drilldown access to other dashboards will not be able to view school-level detail, even though such detail is not student-level detail.

__Economic Status Security__: Users who are restricted from disaggregating or filtering by economic status will not see economic status rows in the Achievement Gap detail. Further, gap comparisons for economic status (including school, district, and state-level comparisons) will not display in the summary view for such users, regardless of whether there are gap data to present.

These roles are designated at the school and district levels.

__District-level Security__: If a user has access to view district results, regardless of whether the user is restricted at the school level, they can view district and state-level results in the metric summary of the dashboard. If a user does not have access to a district, they cannot view district-level results. Such restricted district results are still included in the state-level results.

__School-level Security__: Users who are limited by security to select schools in a district, but who can otherwise view detailed information, will only be able to view detailed information for the schools to which they have access. Such users also cannot see summary school-level information for schools to which they do not have access. However, school-level results are included in district and state-level results regardless of security.

**Which student-level data are included?**

Only students with Badger Exam or ACT test results are included in the Badger and ACT graphs. Students who were not tested or who were administered the DLM alternate assessment are not included. All students are included in the attendance graph. FAY status is not considered for the Achievement Gap dashboard; students who meet FAY status and students who do not are both included in the results. Provided there are enough students for a comparison, there will be school, district, and state bars for each race, disability status, economic status, and English proficiency group.

Unlike other dashboards, when a user clicks one of the bars in the graph the achievement gap dashboard does not display a student listing nor does it drill down to a student profile dashboard. Rather, it displays the school, district and state-level averages and counts that are used to calculate gaps. Individual student results are not available from this dashboard.

### How can we use the data from this dashboard to improve student outcomes?

This dashboard allows users to easily identify and visualize gaps of any size, and to do so across measures with different scales. Users may also want to compare the gap size to external benchmarks, to similar groups, or to the district or state for perspective. Because effect size is used in the dashboard, users can compare gaps on the Badger Exam to gaps on the ACT. This allows educators to, for example, focus their school improvement efforts on specific grades or content areas.

About a third of Wisconsin districts are very small, enrolling 500 students or fewer. In these districts, educators may need to see data for very small groups of students. In other districts, educators may be focused on helping a specific student group, perhaps 14 ELL students, and want to dig into the achievement gap data. In these cases, educators would set the Small Groups filter to “Show”, see what the effect sizes for the different gaps are, and make appropriate plans based on them.

For specific strategies used to close achievement gaps, please see the Promoting Excellence for All webpage and the associated eCourse.

## Understanding Effect Size

### What is effect size?

Effect size is a standardized measure of the difference between groups on a given outcome.

### Why is an effect size calculation used?

Effect size is useful because, as a standardized measure, it allows users to evaluate the magnitude of the difference between two groups. It does so in the context of other differences in the school or district as well as in the context of published research on relevant interventions.

Specifically, in the case of the Achievement Gap dashboard, effect size calculations enable a discussion of achievement gaps across different assessments with different scales. This allows for comparisons across different measures, and helps schools focus their improvement efforts on areas most in need.

### How is effect size calculated?

The effect size shown in the achievement gap dashboard is a standardized mean difference. This means that the effect size is measured in standard deviations. For example, an effect size of 1.0 means that the gap between groups is one standard deviation. An effect size of 0.5 means that the gap is half of one standard deviation.

The formula used to calculate Achievement Gap dashboard effect sizes is Glass’s delta. To calculate Glass’s delta, subtract the mean of the target group from the mean of the comparison group. Then divide that difference by the state-level standard deviation of the comparison group.

Effect Size (Glass’s delta) = (M<sub>comparison</sub> − M<sub>target</sub>) / SD<sub>comparison</sub>

where M<sub>comparison</sub> is the mean of the comparison group, M<sub>target</sub> is the mean of the target group, and SD<sub>comparison</sub> is the state-level standard deviation of the comparison group.

For example, to calculate the effect size for a school-level math gap mean scale score between Hispanic students and white students, first subtract the school-level mean scale score for Hispanic students from the school-level mean scale score for white students. Then divide this difference by the state-level standard deviation for white students.
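
As a concrete sketch, the calculation above can be written in a few lines of Python. The scale scores and standard deviation below are illustrative values chosen for this example, not actual Badger Exam results.

```python
def glass_delta(comparison_mean: float, target_mean: float,
                comparison_state_sd: float) -> float:
    """Glass's delta: the comparison-group mean minus the target-group mean,
    divided by the state-level standard deviation of the comparison group."""
    return (comparison_mean - target_mean) / comparison_state_sd

# Illustrative scale scores (not real data):
white_school_mean = 2480.0     # comparison group, school-level mean
hispanic_school_mean = 2455.0  # target group, school-level mean
white_state_sd = 100.0         # comparison group, state-level SD

gap = glass_delta(white_school_mean, hispanic_school_mean, white_state_sd)
print(gap)  # 0.25 -> white students score 0.25 SD higher at this school
```

A positive result indicates the comparison group is performing better than the target group, matching the direction of the bars in the graphs.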

### How should educators interpret an Effect Size score?

#### Direction of the Effect Size

Effect sizes can be positive or negative. For example, if the effect size of the Hispanic-White gap in reading is 0.4, the mean value for white students is higher than the mean value for Hispanic students; hence we say there is a Hispanic-White gap. Because closing the gap means raising the reading achievement of Hispanic students, Hispanic students are the target group; because their performance is compared to white students, white students are the comparison group.

A negative effect size would indicate the opposite: the mean value for Hispanic students is higher than the mean value for white students. This happens when the target group outperforms the comparison group.

#### Magnitude of the Effect Size

An effect size of zero means that there is no difference between the two groups. A small effect size means that the difference between the two groups is small, and a large effect size means that the difference is large. However, there are a number of ways to interpret the magnitude of an effect size. One way is to compare that effect size to benchmarks or observations reported in the scientific literature.

- For example, in *Statistical Power Analysis for the Behavioral Sciences* (1988), Cohen describes effect sizes of about 0.2 as small, about 0.5 as medium, and about 0.8 as large.
- Another benchmark has been set by the federal What Works Clearinghouse (WWC). When evaluating intervention programs, WWC considers effect sizes of 0.25 and greater to be “substantively important.”
- John Hattie’s *Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement* provides the effect sizes associated with many education interventions.
- One more approach is to compare school-level effect sizes to district-level or state-level effect sizes. Similarly, effect sizes can be compared across groups or from one outcome to another, such as from the Badger Exam to the ACT.

These benchmarks may not be appropriate in every context but are a good starting point for interpretation.

#### Why Does the Magnitude Change When Reversing the Target and Comparison Groups?

The Achievement Gap dashboard allows the user to select the comparison group for the race/ethnicity comparisons. You might expect the gap size between two race/ethnicity groups to remain the same if the target and comparison groups are reversed, for example, if you change a Black-White comparison to a White-Black comparison. Sometimes, though, the size of the gap changes. This is because the denominator of the effect size formula is based on the comparison group only. When you reverse the target and comparison groups, the denominator changes from one group’s standard deviation to the other’s, resulting in a different effect size.
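
A short sketch with illustrative numbers (not real data) shows why the magnitude changes: the mean difference is the same in both directions, but the standard deviation in the denominator switches to whichever group is the comparison group.

```python
# Illustrative means and state-level standard deviations (not real data):
white_mean, white_sd = 2480.0, 100.0
black_mean, black_sd = 2440.0, 80.0

# Black as target group, White as comparison group:
black_white_gap = (white_mean - black_mean) / white_sd
print(black_white_gap)   # 0.4

# Groups reversed: the mean difference only changes sign, but the
# denominator is now the Black group's state-level standard deviation:
white_black_gap = (black_mean - white_mean) / black_sd
print(white_black_gap)   # -0.5
```

The sign flips as expected, but the magnitudes differ (0.4 versus 0.5) because the two groups have different state-level standard deviations.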

#### How Does Effect Size Differ From Statistical Significance?

Effect size helps users to evaluate the magnitude of a difference between groups. Statistical significance determines whether the difference is likely to be due to chance. For example, the What Works Clearinghouse (WWC) uses statistical significance to determine if the difference in outcomes between treatment and control groups in a study is likely to be due to chance, and uses effect size to determine if the difference is large enough to have a substantive impact on the treatment group.

It is possible for a difference between groups to be statistically significant without being substantively important. Statistical significance depends on sample size: with a large enough sample, even a very small difference can reach significance. Effect size is independent of sample size.
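
To make the distinction concrete, the sketch below (illustrative numbers only) compares a two-sample t statistic, which grows with sample size, to Glass’s delta, which does not. The t formula assumes equal group sizes and equal standard deviations; it is a simplification for illustration, not the dashboard’s calculation.

```python
import math

def glass_delta(mean_diff: float, comparison_sd: float) -> float:
    # Effect size in standard deviation units; does not depend on n.
    return mean_diff / comparison_sd

def t_statistic(mean_diff: float, sd: float, n_per_group: int) -> float:
    # Two-sample t statistic assuming equal group sizes and equal SDs:
    # t = (m1 - m2) / (sd * sqrt(2 / n))
    return mean_diff / (sd * math.sqrt(2.0 / n_per_group))

mean_diff, sd = 2.0, 10.0            # an illustrative 0.2 SD gap
for n in (20, 200, 2000):
    print(n, glass_delta(mean_diff, sd), round(t_statistic(mean_diff, sd, n), 2))
# The effect size is 0.2 at every sample size, while the t statistic
# (and hence statistical significance) grows as the sample gets larger.
```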

## Resources

### Achievement Gap Resources

- Achievement Gap Dashboard - Understanding Effect Size PPT
- Promoting Excellence for All strategies, research and companion eCourse
- Wisconsin Disproportionality Technical Assistance Network Resources

### Effect Size Resources

- Best Evidence Encyclopedia -- Empowering Educators with Evidence on Proven Programs. (n.d.). Retrieved June 02, 2016, from http://www.bestevidence.org/index.cfm
- Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Hillsdale, NJ: L. Erlbaum Associates.
- Hattie, J. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. London: Routledge.
- Hill, C., et al. (2007). Empirical benchmarks for interpreting effect sizes in research. Retrieved from http://mdrc.org/sites/default/files/full_84.pdf
- Lipsey, M. W., et al. (2012). Translating the statistical representation of the effects of education interventions into more readily interpretable forms. Retrieved from https://ies.ed.gov/ncser/pubs/20133000/pdf/20133000.pdf
- What Works Clearinghouse. (n.d.). Procedures and Standards Handbook Version 3.0. Retrieved from http://ies.ed.gov/ncee/wwc/pdf/reference_resources/wwc_procedures_v3_0_s...