From Student Data to Student Achievement

This page provides educators with actionable steps for analyzing and applying student data, with a focus on data derived from student assessment. From spreadsheets to data-driven decision making, educators use student assessment results to transform school-wide processes, day-to-day instruction, and – ultimately – student learning.

Navigating the trove of student data can be cumbersome and, at times, overwhelming. Student assessment offers schools, parents, and students plenty of fruitful information about student performance and achievement, but the data can be difficult to pinpoint or comprehend.

People spend years in school learning to parse through, organize, visualize, and draw meaningful conclusions from data. It’s complicated! Yet, schools must interpret student data and transform it into meaningful curricular and instructional initiatives in a limited amount of time and with a finite amount of resources, which is a tall order.

Student assessment provides a quantitative measure of student performance and a wealth of information that becomes a road map for leading school improvement.

Why Student Data?

Schools and districts often have limited resources – time, money, personnel – and therefore must utilize those resources effectively. Student data derived from student assessment offers a fuller picture of what is happening across schools and districts and empowers us to foster a school environment dedicated to student achievement.

Student data plays an integral role in finding those opportunities for meaningful change, while also making the process manageable by providing focus. To effectively use student data in this way, schools need to have a process in place for analyzing the data.

Student Data and Educators

Leadership

School leaders use student data to make data-informed decisions at all levels of the school and to measure the efficacy of initiatives.

By using data to improve schools, school leaders learn where to invest in teacher professional development and support for instruction, such as creating common assessments. School leaders can also monitor performance by subgroups to ensure equity within the school or district and meet the requirements of ESSA.

Teachers

In the classroom, teachers use assessment data as an opportunity for reflection with outgoing students, recognizing which strategies worked and which didn't, and as a tool to gauge the strengths and areas for growth within the incoming class. Using a curriculum cycle, teachers adjust curriculum based on student assessment results to better support students, whether through standards alignment, development of knowledge or skills, or other refinements.

With data, teachers apply data-driven instruction to align intentionally to standards and to give students meaningful practice with the assessment accommodations they may encounter on larger summative assessments. Teachers also use data for decision-making at every level, all the way down to day-to-day lesson planning.

By having a well-rounded and aligned curriculum influenced in part by reflecting on student assessment data, data-driven instruction becomes feasible throughout the academic year.

Student Data Protocols

Getting staff at schools and districts comfortable with student data – not only as a tool for self-reflection but as fodder for larger conversations and school-wide growth – is no easy feat.

Enter data protocols. Student data protocols are a series of steps for analyzing student data. They help us review data in an intentional way, talk about data with staff without heightening anxiety, and identify opportunities for improvement.

For example, we follow this student data protocol to effectively scrutinize and understand assessment data by using techniques such as visualization and storytelling.

As you review this student data protocol, consider how to apply it not only to engage with data but also to foster school- or district-wide conversations in which data is a comfortable, self-reflective topic.

The protocol has five steps:

1. Asking the Right Questions of Assessment Data – Identify questions to ask the data set.
2. Visualizing Student Assessment Data – Plot the relevant data in a chart.
3. Storytelling with Assessment Data – Discuss the visualization(s) and create a story.
4. Broadening the Scope of Student Data – Supplement the data with other resources.
5. Identifying Next Steps from Student Data – Develop an action plan based on the findings.

Asking the Right Questions of Assessment Data

Assessments are designed for specific purposes. Summative assessment and standardized test scores offer differing interpretations of student learning, and the scoring methodologies are not always comparable, making the nature of assessment data different from test to test.

It’s important to understand the characteristics of assessments to best determine how to use them. Essentially, if you don’t know how to think critically about assessments – a skill known as assessment literacy – the choices you make using the assessment data may not be as effective.

For example, asking a question about student growth with assessment data designed to measure proficiency will lead to an incorrect understanding and a skewed interpretation of the data.

When looking at assessment data, do you know what each assessment measures and how it scores your students? Is the assessment measuring growth or performance – and what about proficiency? Does the assessment predict success in the next grade level or the ability to pass an end-of-year exam? Are scores derived from references – norms or criteria?

Knowing this information – and more – is paramount because it allows schools to use student data intentionally and as it was designed, which ultimately drives more-informed and better decision-making. It also saves time in the long run by giving clear direction for each set of student assessment data you may encounter.

While there are a plethora of ways to categorize assessments, this page focuses on type, score, and reference.

Differentiating Between Student Performance, Proficiency, and Growth

Assessment design describes the intent of the assessment – its designed purpose, answering the question ‘What is this assessment testing our students for, and what information are the scores conveying?’ Proficiency, growth, and performance are examples of assessment design. It is important to remember that these are not necessarily mutually exclusive and that assessments can be designed to address one or more of these.

The description of each includes examples of the scoring methodologies, which refer to how a score is represented. Scoring methodologies provide different interpretations of assessment results and accentuate the design of the assessment.

Proficiency measures student performance against an identified benchmark for success, and therefore gauges a student’s relation to the pre-determined benchmark. For example, a proficiency-based assessment can offer data as to whether a student, or group of students, meets grade-level expectations or is likely to succeed in the next grade level. Additionally, proficiency can measure whether students have mastered standards or obtained certain skills.

Scores that address proficiency are represented with proficiency levels. Proficiency levels are predetermined score ranges, and scores are presented as words, such as "proficient" or "not proficient"; "met expectations," "approaching," or "did not meet"; etc.

Proficiency answers questions such as: ‘Do students meet “grade level” expectations?’
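To make the mapping concrete, here is a minimal sketch in Python of converting scale scores into proficiency levels. The cut scores, level names, and score range are all hypothetical; real assessments publish their own predetermined ranges.

```python
# A minimal sketch of mapping scale scores to proficiency levels.
# The cut scores and level names below are hypothetical; real
# assessments publish their own predetermined ranges.

def proficiency_level(scale_score: int) -> str:
    """Return a proficiency label for a hypothetical 100-300 scale."""
    if scale_score >= 250:
        return "Exceeds Expectations"
    elif scale_score >= 200:
        return "Meets Expectations"
    elif scale_score >= 150:
        return "Approaching Expectations"
    else:
        return "Did Not Meet Expectations"

for score in [142, 205, 263, 188]:
    print(score, "->", proficiency_level(score))
```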

Growth measures a student’s change year to year, or throughout an academic year, relative to their own prior performance. In other words, growth measurements calculate student performance over time.

Scores that address growth are represented with growth scores derived from scale scores. Growth scores track the change in individuals’ scale scores, discussed in the next section, with the goal of seeing improvement from one assessment session to the next.

Growth answers questions such as: ‘How many points did the student grow from test to test?’ or ‘How did students rank in their amount of growth relative to “like” peers?’
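As a simple illustration, the sketch below derives growth scores as the change in each student's scale score between two testing sessions. The student names and scale scores are invented.

```python
# A simple illustration (with invented scale scores) of deriving
# growth scores as the change in a student's scale score from one
# assessment session to the next.

fall_scores = {"Student A": 195, "Student B": 210, "Student C": 188}
spring_scores = {"Student A": 212, "Student B": 214, "Student C": 207}

for student, fall in fall_scores.items():
    spring = spring_scores[student]
    growth = spring - fall  # positive means improvement
    print(f"{student}: {fall} -> {spring} (growth: {growth:+d} points)")
```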

Performance measures a student’s ability to apply skills or knowledge. Unlike proficiency, performance does not weigh results against a benchmark for success. And unlike growth, performance does not quantify change over time. Rather, performance presents a snapshot of a student’s knowledge or skills at the time of the assessment.

Scores that address performance are represented with scale scores, percentages, raw scores, and stanines:

  • Scale scores fall on a pre-determined scale that is identical year to year; the expectation is that students will score higher over time. The scale score itself reflects performance.
  • Percentages are the standard grading method in schools. This sort of score represents the percent of correct answers.
  • Raw scores are just what they sound like: unaltered scores that tally the number of correct answers on a particular test.
  • Stanine scores, abbreviated from "standard nine," are calculated to fit on a nine-point scale.

Performance answers questions such as: ‘What is the range of student scores in my class (school, grade, etc.)?’
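For the stanine methodology in particular, a common formulation standardizes a score as a z-score and maps it onto the nine-point scale as 2z + 5, rounded and clipped to the 1-9 range. The sketch below applies that formula to invented raw scores; treat it as an illustration, not the scoring procedure of any specific assessment.

```python
# A sketch of computing stanines from raw scores. A common formulation
# converts each raw score to a z-score, then maps it onto the
# nine-point scale as round(2z + 5), clipped to 1-9. Sample scores
# are invented for illustration.

import statistics

raw_scores = [12, 15, 18, 20, 22, 25, 27, 30, 33, 35]
mean = statistics.mean(raw_scores)
stdev = statistics.stdev(raw_scores)

for raw in raw_scores:
    z = (raw - mean) / stdev
    stanine = min(9, max(1, round(2 * z + 5)))
    print(f"raw={raw:2d}  z={z:+.2f}  stanine={stanine}")
```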

Percentiles, representing referenced scores, are an additional performance score. Percentiles calculate a student’s score relative to a referenced group; in other words, they evaluate a student’s score relative to peers or to different benchmarks for achievement. Two types of referenced scores are:

Criterion-referenced scores compare student performance to a standard for acceptable achievement. Scores reveal where students’ performance falls relative to different units of measure. Criterion-referenced assessments are often presented as percentages, proficiency levels, raw scores, stanines, and scale scores.

Norm-referenced scores compare individual students’ scores on a bell curve. Scores reveal whether students performed below, above, or at the average of a hypothetical norm group. Unlike criterion-referenced scores, norm-referenced scores compare students’ performance to one another. Scores are often presented as percentages or percentiles.

From references, you get percentiles. Two common percentile scores are growth percentiles, which are calculated by comparing an individual’s results to students who scored similarly on the prior test, and scale score percentiles, which measure where a student’s scale score falls in relation to a predetermined group.
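As a rough illustration, a percentile rank can be computed as the percent of scores in a reference (norm) group that fall at or below a student's score. The norm-group scores below are invented; operational assessments use much larger norm samples and published conversion tables.

```python
# A minimal sketch of a percentile rank: the percent of scores in a
# reference (norm) group falling at or below a student's score.
# The norm-group scores here are invented for illustration.

norm_group = [180, 185, 190, 195, 200, 205, 210, 215, 220, 225]

def percentile_rank(score: float, reference: list[float]) -> float:
    at_or_below = sum(1 for r in reference if r <= score)
    return 100 * at_or_below / len(reference)

print(percentile_rank(207, norm_group))  # 60.0 -> 60th percentile
```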

An Example: Millcreek Township School District

The next section outlines how to build visualizations based on assessment design to meaningfully address questions asked of student data.

Before that, however, is an example provided by Joseph Orlando, Director of K-5 Curriculum, and Marianne Ouellet, Supervisor of Instruction and Assessment at Millcreek Township School District. They detail the importance of understanding what scores specifically address in order to avoid misconceptions and incorrect analysis of assessment data: education myth busting.

For those unfamiliar with the system, PVAAS is a ‘statistical analysis used to measure a district’s, school’s or teacher’s influence on the academic progress rates of groups of students from year to year.’ ‘Conceptually and as a simple explanation, a value-added growth measure is calculated in the following manner: growth = current achievement compared to all prior achievement, with achievement being measured by quality assessments such as the PSSA and Keystone Exams.’

1. MYTH – GREEN IS GOOD

Below you will see the color coding descriptors provided by Pennsylvania Department of Education (PDE).

According to the chart, it appears that “Green is good.” Green equates to, “Yay! You hit the standard for PA Academic Growth.”

If all of your students are on grade level, that is true. However, we know that isn’t the case in many of our classrooms. For students scoring below grade level, one year’s growth is not going to help them close the achievement gap. In fact, in some cases it may even put them further behind.

[Chart: PVAAS growth color-coding descriptors from PDE]
2. MYTH – ACHIEVEMENT IS MORE IMPORTANT THAN GROWTH

Another myth we would like to bust is that “Achievement is More Important Than Growth.” According to PDE’s Guide to Key PVAAS Reporting, achievement and growth are complementary but different types of academic measures. One measure is not more important than the other; rather, each has its own specific value.

Historically, student achievement has been the mainstay of measuring academic success. We know that there are often outside factors that can impact the data collected. With growth measures, many of these challenges are eliminated.

There are a few positive twists that come from using growth measures. One is that there is little to no correlation with a student’s demographic background. Additionally, when analyzing a student’s growth, the comparison is made against the student’s own prior performance. Finally, growth measures reflect change across time (e.g., years).

By using the data provided by both achievement and growth measures, a clearly articulated instructional plan can be developed. This comprehensive plan will support achievement and growth for our students.

Visualizing Student Assessment Data

After reviewing assessment design and understanding questions to ask different student assessment data, the next step in the student data protocol is to visualize the assessment data. Data scientists stress the importance of displaying data in a visual manner.

In a TED Talk, David McCandless shares the importance of data visualization, not just for interpreting data but also for making its information more tangible. To David, data is “a kind of ubiquitous resource that we can shape to provide new innovations and insights…Data is the new soil… like a fertile, creative medium.”

Data is not usually associated with words like “creative”. McCandless attributes this creativity to data visualization, or: “visualizing the information so that we can see the patterns and connections that matter, and then designing that information so it makes more sense, tells a story, or allows us to focus only on the information that is important.”

Data visualization enables us to better understand data and derive meaning from it. By understanding the nuances of different data sets, you can better display the data to address your questions.

With proficiency data, you determine whether students perform at, above, or below pre-selected benchmarks. This chart helps us identify the percentage of students that fall into different performance categories on the assessment, by subject. The colors represent the percentage of students in each proficiency category: green is the highest category and red is the lowest.

[Chart: Student Proficiency Data]

Quick guide: This chart quickly identifies where students fall within their level of performance. Grades where red is the largest section of the bar show areas where the most students struggled. Grades where green is the largest section of the bar show areas where the most students succeeded. When the red or yellow bars are large, many students may be “on the bubble,” close to the next-highest or next-lowest level of performance.
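If you want to build a chart like this yourself, here is a minimal matplotlib sketch of a stacked proficiency bar chart. The subjects and percentage breakdowns are invented for illustration.

```python
# A sketch of a stacked proficiency bar chart, as described above.
# Subjects and percentage breakdowns are invented for illustration.

import matplotlib.pyplot as plt

subjects = ["ELA", "Math", "Science"]
# Percent of students in each proficiency category, per subject.
did_not_meet = [15, 20, 10]   # red (lowest category)
approaching  = [25, 30, 25]   # yellow
proficient   = [60, 50, 65]   # green (highest category)

fig, ax = plt.subplots()
ax.bar(subjects, did_not_meet, color="red", label="Did Not Meet")
ax.bar(subjects, approaching, bottom=did_not_meet, color="gold",
       label="Approaching")
bottoms = [d + a for d, a in zip(did_not_meet, approaching)]
ax.bar(subjects, proficient, bottom=bottoms, color="green",
       label="Proficient")
ax.set_ylabel("Percent of students")
ax.set_title("Proficiency by subject")
ax.legend()
plt.show()
```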

It can be beneficial to look for connections between subjects to find trends. For example, how does performance compare in Science and English? Often, if students struggle with skills like reading informational text, they will also struggle with Science assessments.

Growth data allows us to analyze student growth relative to their performance. It is helpful to gauge student growth relative to another benchmark. This chart compares growth percentile versus scale score. The percentile shows the ranking of students, and the scale score shows the student’s overall score. Comparing these illustrates what the growth means for their overall performance.

[Chart: Student Growth Data]

Quick guide: The vertical (y) axis shows how individual students scored on the growth percentile for the specific test selected. The horizontal (x) axis shows how individual students performed on the scale score for the specific test selected. Each dot represents an individual student’s score and falls at the intersection of their performance and growth percentile.

Use this report to separate students into four groups: Meets Growth Goal and Score Goal (top right), Does Not Meet Growth Goal or Score Goal (bottom left), Does Not Meet Growth Goal But Does Meet Score Goal (bottom right), Does Not Meet Score Goal But Does Meet Growth Goal (top left).

This chart also shows whether you are reaching specific sub-groups of students, how each student is growing compared to their overall performance, which high-performing students aren’t actually growing (and vice versa), and individual student needs for grouping and scheduling.
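Here is a minimal sketch of this growth-versus-performance scatter plot, assuming invented student data and illustrative goal lines that divide the four quadrants described above.

```python
# A sketch of the growth-vs.-performance scatter plot. Student data
# and the two goal lines are invented for illustration.

import matplotlib.pyplot as plt

scale_scores = [185, 192, 201, 210, 198, 220, 205, 188]
growth_percentiles = [30, 65, 45, 80, 70, 55, 25, 60]
score_goal = 200    # hypothetical scale-score goal
growth_goal = 50    # hypothetical growth-percentile goal

fig, ax = plt.subplots()
ax.scatter(scale_scores, growth_percentiles)
ax.axvline(score_goal, linestyle="--")   # splits left/right quadrants
ax.axhline(growth_goal, linestyle="--")  # splits top/bottom quadrants
ax.set_xlabel("Scale score")
ax.set_ylabel("Growth percentile")
ax.set_title("Growth vs. performance")
plt.show()
```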

Performance data describe the scope of students’ performance, or their performance relative to a reference. When the average just isn’t enough information, it’s helpful to study the range and central tendency of students’ performance. After all, even one really high or really low score can skew your results and prevent the average from giving a reliable picture of overall student performance.

[Chart: Student Performance Data]

Quick guide:

  • Quartile 4 is the range of scores earned by the top 25% of students; the top line is the “Maximum.” (If you had 100 students, this is the range of scores earned by students 76-100.)
  • Quartile 3 is the range of scores earned by the second-highest 25% of students (students 51-75 out of 100).
  • Quartile 2 is the range of scores earned by the third-highest 25% of students (students 26-50 out of 100).
  • Quartile 1 is the range of scores earned by the bottom 25% of students (students 1-25 out of 100).
  • The mid-line is the “Median,” the middle score earned.
  • The bottom line is the “Minimum,” the lowest score in the range of student performance.

A longer box and whiskers represent more varied student abilities. If the range of performance is small, a teacher might not need to differentiate as much as when the range is large. Use the median to determine how students tended to perform, and use outside performance references to add context to the data. For example, you could have a wide range of performance levels, but if they are all above the point that indicates proficiency or the state average, you might actually be meeting goals.
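For those building this view by hand, here is a minimal matplotlib sketch of a box-and-whisker chart, with a reference line for a hypothetical proficiency cut score to add context. The grade-level score lists are invented.

```python
# A sketch of a box-and-whisker view of performance data, with a
# hypothetical proficiency cut line for context. Scores are invented.

import matplotlib.pyplot as plt

scores_by_grade = {
    "Grade 3": [150, 165, 172, 180, 185, 190, 198, 205, 240],
    "Grade 4": [160, 170, 182, 188, 195, 202, 210, 215, 230],
    "Grade 5": [175, 180, 190, 196, 204, 212, 220, 228, 250],
}

fig, ax = plt.subplots()
ax.boxplot(list(scores_by_grade.values()))
ax.set_xticklabels(scores_by_grade.keys())
ax.axhline(200, linestyle="--",
           label="Proficiency cut score (hypothetical)")
ax.set_ylabel("Scale score")
ax.legend()
plt.show()
```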

Storytelling with Assessment Data

Just like data visualization, data storytelling is a movement in the data studies world. The visualization complements the narrative; data storytelling takes the visualizations a step further by making the information digestible and relevant. Visualizing data is step one, and constructing stories is step two: stories make data meaningful and memorable.

By engaging your audience in the data with visual charts and a narrative, you are far more likely to support others in understanding and using the data.

How do you tell stories with data? Well, first think about the key components of a story: character, setting, plot, conflict and resolution. The characters are likely the students, the setting is the class, grade, school, etc., the plot is the visual representation of the data, the conflict is the questions, and the resolution is the findings.

For example, storytelling for PARCC English language arts proficiency results for 3rd, 4th, and 5th grade students could be:

Characters: 3rd-5th Grade students

Setting: ELA

Plot:

[Chart: PARCC Student Proficiency Data]

Conflict: How does student proficiency change year to year? How are the proficiency distribution levels shifting in each grade level?

Resolution: This chart shows that proficiency is improving overall from year to year. Specifically, the 4th grade had fewer students who did not meet expectations than the 3rd grade, while the number of students exceeding expectations rose overall between 3rd and 5th grade.

This might seem elementary, but given that student data can be dense and convoluted, it is immensely helpful to break the process down into familiar and replicable steps.

Broadening the Scope of Student Data

The next step in the student data protocol is expanding the scope of the data for a more nuanced view. Sticking with the PARCC example above, is there other student data to cultivate the narrative and provide a more nuanced perspective on the story you created?

For example, many schools administer multiple assessments in one year, and it can be helpful to consider assessment results in conjunction with one another. The chart below shows the MAP language results for this same group of students.

This box and whisker plot allows us to see scores broken into quartiles and even includes the expected RIT scores of students in those grades.

The chart confirms the results from the PARCC assessment, which showed improvement over time. However, this MAP chart also reveals a few outliers. Further compare and contrast the two charts to add to your narrative and draw additional conclusions.

Student performance in reading on the MAP assessment is improving year to year, and at the 5th grade level, students perform above the grade-level minimum.

Yet, looking at the PARCC ELA section, the scores mostly stagnate between 4th and 5th grade, and across 3rd, 4th, and 5th, about 20 percent of students do not meet grade-level expectations. While MAP is beneficial for gauging student progress throughout the year, it is not a predictor of performance on PARCC.

[Chart: NWEA MAP Student Performance Data]

Identifying Next Steps from Student Data

Use the visualization and accompanying narrative to guide your conversations with colleagues and other stakeholders (e.g., parents or students). Visualization and storytelling make the data more understandable and empower teams to find opportunities for improvement, set achievable goals, and determine metrics for success.

Based on the PARCC data (same as above), an educator might be interested in the specific 4th grade strategies that led to a sharp reduction in students not meeting grade-level expectations. Perhaps a PLC for 3rd, 4th, and 5th grade teachers would allow for a knowledge transfer and replication of these strategies elsewhere to support student achievement.

From the MAP data, it is clear that while students are growing overall, some students still need extra interventions. Another next step may be to invest in teacher professional development around differentiation, or to connect with teachers and develop support strategies for struggling students.

In addition, comparing the data from multiple assessments emphasizes that while curricular shifts can support improved MAP results throughout the year, overcompensating can come at the expense of results on assessments like PARCC.

The easy-to-read format makes the information more accessible, so the charts can be used to craft narratives and bring the data to life.

Through well-crafted stories and understandable visualizations, you help teachers tailor their instruction to meet their students’ specific needs. Moreover, you make student assessment data a ubiquitous resource able to drive innovation and insight in the classroom.

Conclusion

This page just scratches the surface of the myriad student assessments in K-12 education. Assessments – and student data more specifically – provide teachers, administrators, parents, and even students with valuable information about their education. That said, different assessments provide different information about the nature of learning – growth, proficiency, or performance – and the various scoring methodologies present student learning in unique ways.

When analyzing student data, it is important to know how a test measures student learning, how it presents student scores, and how to visualize and interpret the data. Knowing these will set schools up for success in using student assessment data as a resource for the betterment of teaching and learning. Happy charting!
