I agree with the statement to a certain extent. The use of more data in decision-making processes has inherently forced greater accountability upon teachers, school administrators, and students. Testing issues aside, accountability is a good thing. However, when data is taken at face value and not disaggregated or fully understood, it may lead to wrong interpretations or to the oversimplification of problems and solutions. For example, if a school administrator reviews data from a case study without understanding the context of the study or other factors that led to the outcomes, the proposed solution may not align with his or her student population or community.
I don't believe that numbers lie, but I also believe in half-truths. It's like selective disclosure: I may know someone cheated in my Algebra class and decide not to tell the teacher. I haven't lied, but I haven't exactly been honest either. I agree that public education is in an era that requires positive data to validate its reputation with critics. However, I also believe data can be skewed to a certain extent to tell the story we want it to tell. I am never fully committed to data as the full truth about any specific issue. Number crunchers, from politicians to athletic coaches to those who work for school districts, long ago became quite skilled at painting the picture they wish to paint with data and numbers. Interpreting data with the proverbial grain of salt is a wise strategy.
After pondering this question further, I also wanted to note that I see value in different types of data. Most statisticians will tout quantitative or "hard numbers" as the most relevant, but I also strongly believe in more qualitative data as an often-overlooked indicator of success or failure in many situations. A high school may have a high college acceptance rate, the best average SAT scores in its district, and the highest rate of enrollment in AP classes. Many would use these statistics to spotlight the success of the school itself when, obviously, those outcomes could all be influenced by outside factors as well (socio-economic status of families, parent advocacy, attained level of parent education, etc.). Looking more closely, if an outsider were to survey students and parents about issues of customer service, school culture, or personalization, one may find that this "successful school" is considered cold and unwelcoming, or may not embrace multi-culturalism in its policies and procedures. It is critical when assessing a school's strengths and weaknesses to perhaps start with the "sexy" numbers about test scores and AP courses but then to look much deeper into the school's soul. How does a school welcome new students? How does it support/protect its teachers? Are ALL demographic groups visibly engaged on its campus and in activities, or just students of privilege? Are teachers tolerant and empathetic with regard to learning differences and exceptionalities? These are examples of school success criteria that are hidden or overlooked by data about test scores. With a little more effort, it is possible to uncover the truth in these areas as well.
At first, I wanted to say that I'm ambivalent about this, but after some thinking, I'm not. Data are absolutely not objective. Everything, including numbers, has a context. For example, we know that schools are now labeled by their end-of-grade/course scores; a "Priority School" is labeled so because only 50-60% of its students are performing at grade level. But what does that even mean? And, more importantly, what is the context for this label? Did every single student in this building test? Do the tests actually measure what it means to be "at grade level"? Who made these tests? What inequities exist in the school that prevent students from passing their EOG/Cs? How many of these students missed passing by only one point, and what does that one point really distinguish between "at grade level" and not? There are a million more questions that need to be considered in order to truly understand what data mean, and each school has a unique context that should be understood before it is labeled.
Now, do I think data should never be considered? No, of course data should be considered, and it should definitely take part in decision-making processes in our school system. The trouble arises when we rely solely on data because we think that it is completely objective.
Using a strictly data-driven decision-making process for school improvement could be seen as a narrow and safe approach. And to a certain extent, it would be a disservice to those affected by the decisions. Additionally, to say that the data don’t lie is like applying a literal meaning to a figurative topic. It just doesn’t work. For data to be considered objective, one would have to believe that there was or is no human connection attached.
All too often, our educational institutions are governed merely by the collection and interpretation of back-loaded data, which is unfair, in my opinion. Common sense tells us that our students are more than numbers or levels. The most valuable data related to student success is not collected during a week of testing, but through a variety of data and artifacts collected over time. An EOG or EOC cannot effectively or objectively convey how much a student has grown from August to May, for it is limited to a few moments in time that are ultimately skewed by heightened emotion and anxiety.
Data has its place in the decision making process, but it should not be limited to just one source or one test. At this point, it is unlikely that our current concept of data collection with respect to educational performance will change anytime soon. Thus, the improvement we all desire for our students will continue to be hindered and misaligned.
I am glad to see I am basically in agreement with my colleagues (those who have posted so far). I would recommend that Elena attain some peace with data's presence in her chosen career because "it ain't going anywhere" (as I am sure she understands).
David is correct with his point about the entrenched presence that data-driven decision-making has attained on the current priority list of any school district with the means to collect said data. However, I don't agree that reform and improvement is hindered by our current concept of data collection with respect to educational performance. I simply think that our current philosophies and priorities regarding quantitative data's place in the reform discussion should be meshed more thoroughly with other, more qualitative data and information, and that the qualitative types of information should have more significance in the discussion.
If students were robots or machinery, we could use strictly numbers-based or quantitative data to assess effectiveness and achievement. As long as we are dealing with humans, especially children, we are irresponsible if we do not look more deeply at schools, teachers, and students in a more humanistic, four-dimensional context.
I disagree with the statement for many of the reasons that have already been mentioned. As Elena pointed out, "There are a million more questions that need to be considered in order to truly understand what data mean, and each school has a unique context that should be understood before it is labeled." I think it is a shame that we have reduced the success of a school based on data and statistics. Yes, data does have its good points, but where is the balance? What about growth and the perspective of being well-rounded (as a student and as an education system)?
Ha! You're definitely right, Spencer. I should come to peace with data. I'll work on that... :)
I really love David's post about data, especially where he wrote: "For data to be considered objective, one would have to believe that there was or is no human connection attached." How true! As long as we are working with students, we must understand that a summative test will never paint the entire picture of learning that happened over the course of a semester or year.
So I guess the question that is left is: how do we change the opinions of the people in power to think this, too? Spencer and David both sort of alluded to the fact that data-driven school reforms "aren't going away", which I agree with, but how do we get it to become more meaningful?
As many have said, data by itself is not objective. Data does not explain why things happen the way they do; it only shows the results of specifically examined areas. I think Spencer mentioned something like this: data does not lie, but it does not tell the complete truth either. It gives the truth in a specific controlled environment, with all conditions being equal across the board, and as we know, no one school is like another. Each school has its own culture, nuances, and set of issues it must deal with, and “objective” data is only true if conditions can be made exactly the same across all schools. Data tends to neglect the understanding of why things happen. In no way am I saying data is not important, but it should not be the sole driving force for change in our schools. The primary source of change must be the understanding of why data shows what it does, not the “objective” data itself.
I disagree with the statement that data don't lie, because data can lie. Data can be skewed to tell whatever story one would like it to tell.
Data is a tool that can be used to analyze a school. Once data is collected, it should be analyzed carefully to understand what it is saying. It should not be taken at face value or used as the sole criterion in determining a student's or a school's success.
Elena… I’m with you on questioning the validity of test results… It frustrates me to no end.
It seems that we are generally agreeing that strictly data-driven decision making is at best irresponsible. I think we also share common ground in thinking that the data itself does not effectively display the real performance level of a school or its students. I extend my frustrations in stating that applying labels of failure, mediocrity, or excellence to schools specifically because of “achievement” data is just as irresponsible as making improvement decisions using this same data.
I will again affirm that the improvement we all desire for our students and schools will continue to be hindered and misaligned as long as those in control limit their accountability standards to what I believe is generalized achievement data. To acknowledge Spencer’s thoughts where he states, “I don't agree that reform and improvement is hindered by our current concept of data collection with respect to educational performance,” it is this very idea that has us in this current state of disaster. This current concept of data collection is not working in a way that sufficiently informs or improves our educational systems. You can have as much quantitative, qualitative, or any other informational data as you want, but as long as the “reformers” continue to rely simply upon year-end testing percentages, you will see little improvement across the board.
Further, the reform we are destined to receive will maintain this same negligent approach for monitoring success and performance. I will go even further and say that the new teacher evaluation process will be inadequate as well. In my opinion, this tool will not increase teacher quality as easily as most might think. Even with the new evaluative process in place, teacher accountability remains, to some extent, in the subjective hands of administrators. Will there be data released on TV or the web at the end of the year to rate teachers’ performance? Not likely! Yet it’s the teachers who make the most direct impact on the data used to rate the students and schools. As a teacher, I find it hard to see a reasonable quick fix for evaluating all students or all teachers using one generic instrument.
I agree that data can be subjective and should not be the primary focus of school improvement, but I do have a question: how can we make sure schools do their job and are accountable to their students? Unfortunately, I do not think we have enough intelligent administrators today who can step back and take responsibility for their schools without the push of explicit “hard” data. People love the easy way out, and the same applies to many school administrators. The lazy administrator would probably love for there to be no level of accountability. While I dislike the fact that data is often the primary factor driving decisions and policy, what is the alternative? In a sense this bothers me more because I can get all worked up and upset about standardization and the unintelligent analysis of data, but what other options do we have? I feel one person who doesn’t do their job is one person too many. I almost feel we must fix administrators before we can fix this “data dilemma.”
Numbers do not lie, but they also do not tell the whole story. The numbers are not the issue; it is how the numbers are generated and how they are interpreted. We all know that tests can be biased and numbers can be misunderstood. My principal came into a math department meeting to talk to us about the numbers. As soon as she had finished, one of the math teachers got up and read to her what the numbers really meant. He then asked her to leave our meeting. She never came back. She had misunderstood the numbers.
Great point, Matt. My biggest problem with No Child Left Behind is not the testing or the accountability; it is the follow-up and the "help" it should generate. When the data is not fully understood, money or sanctions could be thrown at a school without any real guidance or support.
Trea brings up an interesting point about accountability. Basically, he makes the assumption that we have no alternative to judging schools based on summative test data. I agree with him that we need some testing accountability, but the ways in which our system gets to this end need to be reformed. Also, as many people have already stated, it becomes problematic when judgments and reforms are based SOLELY on testing data.
I agree with the statement that data in decision making processes is a real strength. Clear objective data can be very eye-opening as we have learned in Dr. Veitch's class about clinical supervision. Data for teachers can be important in showing student progress and holds them accountable.
I do however think that pertinent data in certain situations can be skewed or left out which can lead to misinterpretation of the data. Researchers may be inclined, for example, to only put in data that supports their hypothesis.
After reading Elena's post I am more inclined to side with her on the data issue. I think data is important, however it should not be analyzed without taking into account the context - especially when doing comparisons that have contexts that may differ.
I also like how Liz put it: "it is a shame that we have reduced the success of a school based on data and statistics." Schools have different strengths and growth areas, and where one school may be given the honor of being a 'school of excellence' and another may not, this is said without context as to where that school is located in a certain district and what the population makeup of that school is.
Because of my own school experience and teaching in one of the "best" school districts in NC, I firmly disagree with the statement. Data is needed, but one must be wary of the reported story. Just looking at scores and disregarding the child can leave behind children who don't test well.
I tend to get worked up over always discussing problems in education and not discussing solutions. Trea brought up several good points discussing that if we don't use data "How can we make sure schools do their job and are accountable to their students? What is the alternative to using data to drive decisions about creating policies?" David made the point that we can't use one generic instrument and I agree with him. What I'm wondering from you guys is what is the solution? What instruments would you use?
With respect to the comments regarding how we would hold schools accountable without the use of test data: I don’t believe that the issue is exclusively with having tests to monitor student performance. I believe most educators would find testing to be relevant in the assessment of student performance. However, it’s the format that brings the frustration to the table. One test week is not, in my opinion, a sufficient amount of data to fairly pass judgment on student performance.
I would find using quarterly assessments, as some districts already do, to collect data over time a positive way to assess our students. This would allow students to show growth, which is a primary factor of performance in my eyes. Are our students improving throughout the year? This is the question that is seldom answered with published data. As it stands, the decisions that are made are generally based on the current isolated test data. With quarterly testing, some level of anxiety would be removed from teachers and students, since they would be more comfortable and familiar with the process itself. My students looked at the EOGs as just another assessment because of our use of quarterly assessments, which were set up just like the EOGs. With this format, you would have formative and summative data that is quantitative and qualitative in value. I am not saying this is the best way to solve the problem. I am, however, saying that it offers a possible solution to making the data we collect more valuable.
As we all know, funding and costs shoot this idea dead in its tracks. The higher-ups love to talk reform and improvement, but seldom put their money where their mouths are for the sake of our students.
I agree with Liz’s earlier statements that schools need to be more than test scores and data. The focus should be on growth: personal, social, and not just academic. I just feel that the more I think about the issue of objective data, the more questions I have regarding alternative methods to measure growth. Can there be effective measurement of a student’s social and emotional growth, and how will that reflect whether a school is doing its job? How do we know we are actually meeting the needs of our students? I guess what I am saying is: how do we define subjective data and measure it as well?
Generally speaking, numbers are objective; however, the way in which they are interpreted is very subjective. I agree with much of what has been said regarding the usefulness of data but only to a certain extent. As Bernhardt (2004) points out, different kinds of data can be beneficial in different ways, so it is not just testing data that we need to be cognizant of. Likewise, she mentions that "data-driven decision making is only partly about data" and that it is the shared vision and leadership which can ultimately make the difference. Like others have said, we definitely need a means of evaluating effectiveness and holding schools and teachers accountable, but I think that looking only at the objective data (testing numbers) will significantly limit what we are able to learn and understand about a school and what is actually going on there.
I think Elena and Arpita raise interesting points regarding the context of the data. Often data is thrown in front of a group and the group becomes enthralled with the numbers, rather than working to understand the complexities of the situation and what the numbers mean relative to the setting.
Additionally, I agree with Spencer that often qualitative data can offer greater insight or, at least, enhance the quantitative data that is presented.
Most of my peers have said it already: data cannot be the only factor used when driving decisions in a school. The previous school I worked at is a perfect example of how relying only on data can negatively impact a school. Six years ago, the school ranked first out of all schools on the end-of-year assessments (called the FCAT in FL). FIRST! Years of hard work from the previous administrator, teachers, and students had resulted in this achievement. The following year, the teachers bragged about their results, the parents couldn't stop celebrating, and the new administrator rode the data wave since she was new to the position. Decisions were being made based on the data from the previous year, and since they were on top, nothing changed: the SIP stayed the same, goals stayed the same; they figured if it worked before, it would keep on working. A year later the school quickly dropped in the rankings, and it took years for the school to climb back up. David said it best when he shared that "Data has its place in the decision making process, but it should not be limited to just one source or one test."
I also agree with Trea. What do we do? How do we still hold teachers accountable?
How should we hold schools accountable for making school improvement decisions that are not just data-driven, but driven by parents, students, the community, and data that is analyzed correctly with all factors (school population factors, etc.)? Can it all be objective?
I also wonder if the competitiveness between our schools was taken away, would it help schools focus on their own personal growth more than how they appear next to another school? Would this cause them to analyze their data more closely and accurately?
Such good questions have been raised regarding this topic. I hope in class on Thursday we can discuss possible solutions and "alternative methods" for how to evaluate school improvement. Trea, I am with you on "how do we define subjective data and measure it as well."
It is evident that the data being provided to our school leaders today is based on the values framework of policymakers and other groups (often outside traditional education). When will the rubrics and evaluation systems that lead to data used in decision making be developed by educators?
Yes, I agree that numbers do not lie; however, I do not agree that decisions made for school improvement should be made using data alone. One must take all factors into consideration when using the respective data. Although numbers do not lie, the method by which the data was collected may “lie.” In this context I use “lie” to mean a method that is manipulated, a method that does not reveal or account for all components, a method that looks strictly at the concept being measured and does not consider any outside forces that may lead to the respective outcome(s), or a method that is not appropriately designed to measure the respective concern. I like the example Elena used about the data that labels a school a Priority School. When this information is placed in newspapers, sent home to parents, and posted on the respective DPI’s website, are the answers to the questions Elena listed also shared? I strongly doubt that they are, and for this reason I feel that one cannot base decisions on data alone.
“Never worry about numbers. Help one person at a time, and always start with the person nearest you.” Mother Teresa
“Intuition becomes increasingly valuable in the new information society precisely because there is so much data.” John Naisbitt
The above quotes reflect my philosophy regarding data. The impact of NCLB's race/ethnicity-disaggregated data brought the spotlight onto the achievement of ELL students. Data on ELLs has helped me compare our district's data with others in the nation who were being effective in addressing ELL needs. I found a school district that completely transformed its ESL program. St. Paul Public Schools provided data showing that their ELL population went from least performing to highest performing in 5 years compared to the rest of Minnesota. How? Here is where data was a good catalyst to learn more about the individual. We took a team to St. Paul one frigid February to see if the data was lying. It was not. Our district is now in its 3rd year of implementing collaboration through co-teaching K-12. Students are talking about how much they enjoy not being pulled out of class and how much they learn from two teachers in the classroom. Data assists as a compass, but the compass will not take you there; one has to create a road map.
I believe that data can be a great tool to drive decisions. However the key is how the data is being used to make decisions.
Data should be analyzed carefully and thoughtfully to drive instruction. The problem is that many times the data is not used to implement change in instruction, or the data is taken at face value without a full understanding of what it means.
Different types of data have to be collected to understand student achievement fully. Summative assessments, formative assessments, surveys, and graphs all need to be used to understand the needs of a school population.
Jordi gives us a great example of how data results can promote positive change in action at schools. I believe that we must look at the data with the faces of students in mind. That way, solutions are created that have students in mind. Working with individual students to help in their areas of need may not always be possible in public school, but we have to 'see' the data as individuals and work from there.
While I of course agree with everyone that numbers can lie (the Kansas City Chiefs are the only undefeated team in the NFL) and can be manipulated to represent a dichotomy of truths, I still do believe that data, in capable and trusted hands, can and should be the starting place for improvement within individual schools. Ultimately, this is the purpose of the equity audits for Dr. Marshall’s class, where we utilized objective/subjective data to illustrate real deficiencies that are hindering achievement for all students within our schools. Does the effectiveness of these audits not rely solely on our belief in the objectiveness and truth of our own numbers?
I think that Arpita raises an excellent point in regards to the impact of competition on how and why we look at data. Data is too often used as a tool for comparisons: what school is performing better, which students have mastered this concept, which teachers are more effective, etc. If we instead looked at a collection of qualitative and quantitative data as a means of personal growth and use that to inform decisions and policies I think we would be much better off. Several people have spoken about the necessity of using multiple forms of data, which I couldn't agree more with. I think that perceptual data can be a very powerful tool that is easy to collect. The biggest problem I think that we face in data-driven decision making is that the data is being interpreted and shared by the wrong people. Data has the potential to be very beneficial if evaluated and disseminated by qualified individuals.
I think an interesting case study in examining the use of ‘objective’ data in driving school improvement is our recent discussion of Wake County’s decision to utilize EVAAS testing as the sole determinant in placing 7th and 8th grade math students. In previous years, teachers exclusively had the power to decide which students were placed in college-bound math courses (pre-Algebra and Algebra I), with an end result of only 40% enrollment for qualified minorities as compared to over 60% of white students. Because of the switch from teacher evaluation to ‘objective’ data, student enrollment has increased in both classes by 30%. Although it will be several years before we can begin to gauge the success or failure of this transformation, I think it’s worth considering whether all students are best served when this and similar placement decisions are determined solely by data, solely by teacher evaluation, or by a data-based teacher decision.
The School Improvement in Maryland web page in our reading for tomorrow addresses the type of data we should be talking about. This is the type of implementation that can yield results for students. Let's let the bean counters crunch school and state EOG results while we focus on what matters, i.e., the implementation of a data-based framework that effectively informs and drives instruction.
"What is needed is a monitoring system that is aligned with the state content standards that yields timely and meaningful diagnostic results for classroom teachers. Teachers can and should collect daily information about where students are in relation to what is being taught. Teachers need the diagnostic information to determine what they need to teach/re-teach and when they need to provide additional interventions. Teachers must apply the principles of assessment for learning."
We maximize the likelihood of data relevance if we focus on how it informs classroom instruction. We are able to exert influence over this process, we can continuously sanity-check this data for efficacy, and in so doing we can indirectly maximize our chances of positively impacting the effect of EOG and AYP data on our schools.
Jordi, thanks for sharing your experience with your ELL program. The same type of work was done at the school I am at, after a trip to Chicago to visit a "high performing school." The data wasn't wrong at that school, but there was so much more to the school's success than just test scores. The article by Viadero talks about tying teacher pay to student success, a topic that has been in the news in certain states this year. I'm interested in finding out if the state of NC is thinking in this direction... any thoughts on this?
Will, I think that data generated on a student-by-student basis, in individual classrooms, through a teacher-implemented framework for data collection, instruction, and intervention, is the best model for placing children in the Wake example. This option is probably most closely aligned with the last possibility you offer, but it's not exactly the same. What I suggest differs in that I am not talking about a teacher making a choice. I am talking about a teacher, or better yet, a grade-level or content team, arriving at a data-based outcome regarding the placement of an individual child based on formative and summative data generated through this framework.
Maria makes an excellent point, which has been echoed by several others, regarding success of schools that goes beyond the scope of data. I also strongly agree with what Courtnee was saying about the importance of looking at data but still making decisions with students in mind. Data can be helpful to the extent that we look at it only as a small piece of a bigger picture that will ultimately help students to be more successful.
In response to Will's comments regarding the usefulness of EVAAS, I have also found it to be a misleading source of data. In my experience, several predictions about student success that have been made on the basis of EVAAS have been fairly incorrect. This is a perfect example of how data should be used as one element in the decision making process, but only in conjunction with several other elements.
My first comment is going to be my "soapbox" comment. I don't think there is anything in education that makes me angrier than the saying "the numbers and data don't lie." I think that my anger comes from the fact that my principal constantly harps on how our school improvement plan is centered on our low test scores because the numbers don't lie. Well, I can't help but think: you're right, the data suggesting that low-income schools are more likely to perform lower on standardized tests doesn't lie either. So, I think that in order to operate under the "data doesn't lie" banner, you have to consider all data concerning instruction.
As the leader of the PLC for Social Studies, I am in charge of disseminating data from department-wide tests using a program called Achievement Series. This program allows the user to break down individual performance on NCSCOS goals and objectives. This information would be very helpful if, in fact, the data could be seen as a direct measure of students' mastery. However, as teachers, we understand that the data comes with inherent flaws, as it only gives us a minor reflection of where students are in the class. The data in Achievement Series does not take into account the student who has been absent twenty days and simply sleeps whenever he or she is present. And, from time to time, this student's score will be higher than the rest of the class's simply because the student might be a good guesser. So, I think that the data can be useful in some situations, but I feel it's a little too unreliable to base an entire school improvement plan on.
I completely agree with Spencer and his discussion of data as simply a means to save face. Without doubt, any piece of data can be blurred in a way that best suits the school and, more importantly, the school improvement plan. For instance, my school has in recent years experienced an increase in fights; however, the number of reported fights has decreased. As such, the data on violent acts has actually decreased, and in the eyes of the public the school is much safer. However, if one looked more closely, one would notice that the number of less severe altercations and conflicts has skyrocketed. So, Spencer, I completely agree that data is something that can save face and also be used in a way that best serves the school's purpose.
I am really intrigued by all of the comments that have been posted.
It sounds like a lot of us agree that data is useful but many times it is either not analyzed sufficiently to gather real data about the climate of the school or it is manipulated to "save face" and serve the school's best interest.
I hope that we discuss in class what the possible long-term solutions are and what the implications are for schools if we continue down this path.
I agree with rgaddy : "Teachers need the diagnostic information to determine what they need to teach/re-teach and when they need to provide additional interventions. Teachers must apply the principles of assessment for learning."
With all the added responsibilities teachers have now, diagnostic information can 'weed out' false positives and allow teachers to address the core deficit.
It is great to have what is seen to be objective data; however, we know that research and statistics show whatever they set out to show. For example, one can find research to prove that kids do better on standardized tests when they are more physically fit than their counterparts, AND one can find research showing that kids who are physically fit perform the same as those who are not.
The case study used by Will focusing on the placement of 7th and 8th grade math students using EVAAS is interesting because it does show where "numbers" have made a decision. However, Heidi stated that she has experienced incidents where EVAAS has measured a student's future success incorrectly. This definitely reiterates my opinion (which everyone's posts have done) that numbers alone cannot be the sole basis for making decisions. To answer your question, Will, I think decisions should be made on data-based teacher decisions. We must take all factors pertaining to a specific student into consideration. For example, students who attend early colleges must receive a certain score on the COMPASS Test, SAT, or ACT in order to be placed in certain college classes. Although several of our students received the needed score, as a faculty we knew they were unable to be totally immersed in college classes because they were not mature enough to handle instructors who did not take attendance, to submit work on time without being reminded multiple times, or to communicate with a college instructor. For that reason we designed a different track for those students that ensured that by graduation they would have all the classes needed to receive an associate's degree, but eased them into the college environment. The numbers said they were ready academically, but we knew they would not succeed because they matured at a slower rate than their peers; had we gone by the numbers alone, some of them would not have successfully completed the program.
At first, I wanted to say that I'm ambivalent about this, but after some thinking, I'm not. Data are absolutely not objective. Everything, including numbers, has a context. For example, we know that schools are now labeled by their end-of-grade/course scores; i.e., a "Priority School" is labeled so because 50-60% of its students are performing at grade level. But what does that even mean? And, more importantly, what is the context for this label? Did every single student in this building test? Do the tests actually measure what it means to be "at grade level"? Who made these tests? What inequities exist in the school that prevent students from passing their EOG/Cs? How many of these students missed passing by only one point, and what distinguishes that one point between a passing and a non-passing score, and thus between "at grade level" and not? There are a million more questions that need to be considered in order to truly understand what data mean, and each school has a unique context that should be understood before it is labeled.
Now, do I think data should never be considered? No, of course data should be considered, and it should definitely take part in decision-making processes in our school system. The trouble arises when we rely solely on data because we think that it is completely objective.
To use a strictly data-driven decision-making process for school improvement could be seen as a narrow and safe approach. And to a certain extent, it would be a disservice to those affected by the decisions. Additionally, to say that the data don't lie is like applying a literal meaning to a figurative topic. It just doesn't work. For data to be considered objective, one would have to believe that there was or is no human connection attached.
All too often, our educational institutions are governed merely by the collection and interpretation of back-loaded data, which is unfair, in my opinion. Common sense tells us that our students are more than numbers or levels. The most valuable data related to student success is not collected during a week of testing, but through a variety of data and artifacts collected over time. An EOG or EOC can neither effectively nor objectively convey how much a student has grown from August to May, for it is limited to a few moments in time that are ultimately skewed by heightened emotion and anxiety.
Data has its place in the decision making process, but it should not be limited to just one source or one test. At this point, it is unlikely that our current concept of data collection with respect to educational performance will change anytime soon. Thus, the improvement we all desire for our students will continue to be hindered and misaligned.
I am glad to see I am basically in agreement with my colleagues (those that have posted so far). I would recommend that Elena attain some peace with data's presence in her chosen career because "it ain't going anywhere" (as I am sure she understands).
David is correct with his point about the entrenched presence that data-driven decision-making has attained on the current priority list of any school district with the means to collect said data. However, I don't agree that reform and improvement are hindered by our current concept of data collection with respect to educational performance. I simply think that our current philosophies and priorities regarding quantitative data's place in the reform discussion should be meshed more thoroughly with other, more qualitative data and information, and that the qualitative types of information should have more significance in the discussion.
If students were robots or machinery, we could use strictly numbers-based or quantitative data to assess effectiveness and achievement. As long as we are dealing with humans, especially children, we are irresponsible if we do not look more deeply at schools, teachers, and students in a more humanistic, four-dimensional context.
I disagree with the statement for many of the reasons that have already been mentioned. As Elena pointed out, "There are a million more questions that need to be considered in order to truly understand what data mean, and each school has a unique context that should be understood before it is labeled." I think it is a shame that we have reduced the success of a school to data and statistics. Yes, data does have its good points, but where is the balance? What about growth and the perspective of being well-rounded (as a student and as an education system)?
Ha! You're definitely right, Spencer. I should come to peace with data. I'll work on that... :)
I really love David's post about data, especially where he wrote: "For data to be considered objective, one would have to believe that there was or is no human connection attached." How true! As long as we are working with students, we must understand that a summative test will never paint the entire picture of learning that happened over the course of a semester or year.
So I guess the question that is left is: how do we change the opinions of the people in power to think this, too? Spencer and David both sort of alluded to the fact that data-driven school reforms "aren't going away", which I agree with, but how do we get it to become more meaningful?
As many have said, data by itself is not objective. Data does not explain why things happen the way they do; it only shows the results of specifically examined areas. I think Spencer mentioned something to this effect: data does not lie, but it does not tell the complete truth either. It gives the truth in a specific controlled environment, with all conditions being equal across the board, and as we know, no one school is like another. Each school has its own culture, nuances, and set of issues it must deal with, and "objective" data is only true if conditions can be made exactly the same across all schools. Data tends to neglect the understanding of why things happen. In no way am I saying data is not important, but it should not be the sole driving force for change in our schools. The primary source of change must be the understanding of why data shows what it does, not the "objective" data itself.
I disagree with the statement that data don't lie, because data can lie. Data can be skewed to tell whatever story we would like it to tell.
Data is a tool that can be used to analyze a school. Once data is collected then the data should be analyzed carefully to understand what the data is saying. It should not be taken at face value and used as the sole criterion in determining a student or a school's success.
Elena…I’m with you on questioning the validity of tests results…It frustrates me to no end.
It seems that we are generally agreeing that decision making driven by data alone is at best irresponsible. I think we also share common ground in thinking that the data itself does not effectively display the real performance level of a school or its students. I extend my frustrations in stating that applying labels of failure, mediocrity, or excellence to schools specifically because of "achievement" data is just as irresponsible as making improvement decisions using this same data.
I will again affirm that the improvement we all desire for our students and schools will continue to be hindered and misaligned as long as those in control limit their accountability standards to what I believe is generalized achievement data. To acknowledge Spencer’s thoughts where he states “I don't agree that reform and improvement is hindered by our current concept of data collection with respect to educational performance,” it is this very ideal that has us in this current state of disaster. This current concept of data collection is not working in a way that sufficiently informs or improves our educational systems. You can have as much quantitative, qualitative or any other informational data as you want, but as long as the “reformers” continue to rely simply upon year-end testing percentages you will see little improvement across the board.
Further, the reform we are destined to receive will maintain this same negligent approach for monitoring success and performance. I will go even further and say that this new teacher evaluation process will be inadequate as well. In my opinion, this tool will not increase teacher quality as easily as most might think. Even with the new evaluative process in place, teacher accountability remains, to some extent, in the subjective hands of administrators. Will there be data released on TV or the web at the end of the year to rate teachers' performance? Not likely! Yet it's the teachers who make the most direct impact on the data used to rate the students and schools. As a teacher, I find it hard to imagine a quick fix for evaluating all students or all teachers using one generic instrument.
I agree that data can be subjective and should not be the primary focus of school improvement, but I do have a question. How can we make sure schools do their job and are accountable to their students? Unfortunately, I do not think we have enough intelligent administrators today who can step back and take responsibility for their school without the push of explicit "hard" data. People love the easy way out, and the same applies to many school administrators. The lazy administrator would probably love for there to be no level of accountability. While I dislike the fact that data is often the primary factor driving decisions and policy, what is the alternative? In a sense this bothers me even more, because I can get all worked up and upset about standardization and the unintelligent analysis of data, but what other options do we have? I feel one person who doesn't do their job is one person too many. I almost feel we must fix administrators before we can fix this "data dilemma."
Numbers do not lie, but they also do not tell the whole story. The numbers are not the issue; it is how the numbers are generated and how they are interpreted. We all know that tests can be biased and numbers can be misunderstood. My principal came into a math department meeting talking to us about the numbers. As soon as she had finished, one of the math teachers got up and read to her what the numbers really meant. He then asked her to leave our meeting. She never came back. She misunderstood the numbers.
Great point, Matt. My biggest problem with No Child Left Behind is not the testing or the accountability; it is the follow-up and the "help" it should generate. When the data is not fully understood, money or sanctions could be thrown at a school without any real guidance or support.
I agree with Liz. There is more to education and a school than just test scores. There is personal and school growth.
Trea brings up an interesting point about accountability. Basically, he makes the assumption that we have no alternative to judging schools based on summative test data. I agree with him that we need some testing accountability, but the ways in which our system gets to this end need to be reformed. Also, as many people have already stated, it becomes problematic when judgments and reforms are based SOLELY on testing data.
I agree with the statement that data in decision making processes is a real strength. Clear objective data can be very eye-opening as we have learned in Dr. Veitch's class about clinical supervision. Data for teachers can be important in showing student progress and holds them accountable.
I do however think that pertinent data in certain situations can be skewed or left out which can lead to misinterpretation of the data. Researchers may be inclined, for example, to only put in data that supports their hypothesis.
After reading Elena's post I am more inclined to side with her on the data issue. I think data is important, however it should not be analyzed without taking into account the context - especially when doing comparisons that have contexts that may differ.
I also like how Liz put it that, " it is a shame that we have reduced the success of a school based on data and statistics." Schools have different strengths and growth areas and where one school may be given the honor of being a 'school of excellence' and another may not, this is said without context to where that school is located in a certain district and what the population make up is of that school.
Because of my own school experience and teaching in one of the "best" school districts in NC, I firmly disagree with the statement. Data is needed, but one must be wary of the reported story. Just looking at scores and disregarding the child can leave behind children who don't test well.
I tend to get worked up over always discussing problems in education and not discussing solutions. Trea brought up several good points discussing that if we don't use data "How can we make sure schools do their job and are accountable to their students? What is the alternative to using data to drive decisions about creating policies?" David made the point that we can't use one generic instrument and I agree with him. What I'm wondering from you guys is what is the solution? What instruments would you use?
With respect to the comments regarding how we would hold schools accountable without the use of test data: I don't believe that the issue is exclusively with having tests to monitor student performance. I believe most educators would find testing to be relevant in the assessment of student performance. However, it's the format that brings the frustration to the table. One test week is not, in my opinion, a sufficient amount of data to fairly pass judgment on student performance.
I would see using quarterly assessments, as some districts already do, to collect data over time throughout the year as a positive way to assess our students. This would allow students to show growth, which is a primary factor of performance in my eyes. Are our students improving throughout the year? This is the question that is seldom answered with published data. As it stands, the decisions that are made are generally based on the current isolated test data. With quarterly testing, some level of anxiety would be removed from teachers and students, since they would be more comfortable and familiar with the process itself. My students looked at the EOGs as just another assessment because of our use of quarterly assessments, which were set up just like the EOGs. With this format, you would have formative and summative data that is both quantitative and qualitative in value. I am not saying this is the best way to solve the problem. I am, however, saying that it offers a probable solution to making the data we collect more valuable.
As we all know, funding and costs shoot this idea dead in its tracks. The higher-ups love to talk reform and improvement, but seldom put their money where their mouths are for the sake of our students.
I agree with Liz's earlier statements that schools need to be about more than test scores and data. The focus should be on growth: personal and social, not just academic. I just feel that the more I think about the issue of objective data, the more questions I have regarding alternative methods to measure growth. Can there be effective measurement of a student's social and emotional growth, and how will that reflect whether a school is doing its job? How do we know we are actually meeting the needs of our students? I guess what I am asking is: how do we define subjective data, and how do we measure it as well?
Generally speaking, numbers are objective; however, the way in which they are interpreted is very subjective. I agree with much of what has been said regarding the usefulness of data but only to a certain extent. As Bernhardt (2004) points out, different kinds of data can be beneficial in different ways, so it is not just testing data that we need to be cognizant of. Likewise, she mentions that "data-driven decision making is only partly about data" and that it is the shared vision and leadership which can ultimately make the difference. Like others have said, we definitely need a means of evaluating effectiveness and holding schools and teachers accountable, but I think that looking only at the objective data (testing numbers) will significantly limit what we are able to learn and understand about a school and what is actually going on there.
I think Elena and Arpita raise interesting points regarding the context of the data. Often data is thrown in front of a group and the group becomes enthralled with the numbers, rather than working to understand the complexities of the situation and what the numbers mean relative to the setting.
Additionally, I agree with Spencer that often qualitative data can offer greater insight or, at least, enhance the quantitative data that is presented.
Most of my peers have said it already: data cannot be the only factor used when driving decisions in a school. The previous school I worked at is a perfect example of how relying only on data can negatively impact a school. Six years ago the school ranked first out of all schools on its end-of-year assessments (called the FCAT in FL). FIRST! Years of hard work from the previous administrator, teachers, and students had resulted in this achievement. The following year, the teachers bragged about their results, the parents couldn't stop celebrating, and the new administrator rode the data wave since she was new to the position. Decisions were being made based on the data from the previous year, and since they were on top, nothing changed: the SIP stayed the same, goals stayed the same; they figured if it worked before, it would keep on working. A year later the school quickly dropped in the rankings, and it took years for the school to climb back up. David said it best when he shared that "Data has its place in the decision making process, but it should not be limited to just one source or one test."
I also agree with Trea. What do we do? How do we still hold teachers accountable?
How, specifically, should we hold schools accountable to make school improvement decisions that are not just data driven, but driven by parents, students, the community, and data that is analyzed correctly with all factors (school population and so on) in mind? Can it all be objective?
I also wonder if the competitiveness between our schools was taken away, would it help schools focus on their own personal growth more than how they appear next to another school? Would this cause them to analyze their data more closely and accurately?
Such good questions have been raised regarding this topic. I hope in class on Thursday we can discuss possible solutions and "alternative methods" on how to evaluate school improvement. Trea I am with you on "how do we define subjective data and measure it as well."
It is evident that the data being provided to our school leaders today is based on the values framework of policymakers and other groups (often outside traditional education). When will the rubrics and evaluation systems that lead to data used in decision making be developed by educators?
Yes, I agree that numbers do not lie; however, I do not agree that decisions made for school improvement should be made using data alone. One must take all factors into consideration when using the respective data. Although numbers do not lie, the method in which the data was collected may "lie." In this context I use "lie" to mean a method that is manipulated, a method that does not reveal or account for all components, a method that looks strictly at the concept being measured and does not consider any outside forces that may lead to the respective outcome(s), or a method that is not appropriately designed to measure the respective concern. I like the example Elena used about the data that labels a school a Priority School. When this information is placed in newspapers, sent home to parents, and placed on the respective DPI's website, are the answers to the questions Elena listed also shared? I strongly doubt that they are, and for this reason I feel that one cannot base decisions on data alone.
“Never worry about numbers. Help one person at a time, and always start with the person nearest you.”
Mother Teresa
“Intuition becomes increasingly valuable in the new information society precisely because there is so much data.”
John Naisbitt
The above quotes reflect my philosophy regarding data. The impact of NCLB race/ethnicity-disaggregated data brought the spotlight onto the achievement of ELL students. Data on ELL students has helped me compare our district's data with that of others in the nation who were being effective in addressing ELL needs. I found a school district that completely transformed its ESL program. St. Paul Public Schools provided data showing that their ELL population went from lowest performing to highest performing in 5 years compared to the rest of Minnesota. How? Here is where data was a good catalyst to learn more about the individual. We took a team to St. Paul one frigid February to see if the data was lying.
It was not.
Our district is now in its 3rd year of implementing collaboration through co-teaching K-12. Students are talking about how much they enjoy not being pulled out of class and how much they learn from having two teachers in the classroom. Data assists as a compass, but the compass will not take you there; one has to create a road map.
I believe that data can be a great tool to drive decisions. However the key is how the data is being used to make decisions.
Data should be analyzed carefully and thoughtfully to drive instruction. The problem is that many times the data is not used to implement change in instruction, or the data is taken at face value without fully understanding what it means.
Different types of data have to be collected to understand student achievement wholeheartedly. Summative assessments, formative assessments, surveys, and graphs all need to be used to understand the needs of a school population.
Love the Veitch reference, AAAAPITA. Specific, observable, measurable, non-judgemental data AKA Sitting On My New Jalopy.
Also, Jordi, what great quotes! Especially the one from Mother Teresa.
Jordi gives us a great example on how data results promote positive change in the action at schools. I believe that we must look at the data with the faces of students in mind. That way, solutions are created that have students in mind. Working with individual students to help in their need area may not always be possible in public school at all times, but we have to 'see' the data as individuals and work from there.
While I of course agree with everyone that numbers can lie (the Kansas City Chiefs are the only undefeated team in the NFL) and can be manipulated to represent a dichotomy of truths, I still do believe that data, in capable and trusted hands, can and should be the starting place for improvement within individual schools. Ultimately, this is the purpose of the equity audits for Dr. Marshall’s class, where we utilized objective/subjective data to illustrate real deficiencies that are hindering achievement for all students within our schools. Does the effectiveness of these audits not rely solely on our belief in the objectiveness and truth of our own numbers?
I think that Arpita raises an excellent point in regards to the impact of competition on how and why we look at data. Data is too often used as a tool for comparisons: what school is performing better, which students have mastered this concept, which teachers are more effective, etc. If we instead looked at a collection of qualitative and quantitative data as a means of personal growth and use that to inform decisions and policies I think we would be much better off. Several people have spoken about the necessity of using multiple forms of data, which I couldn't agree more with. I think that perceptual data can be a very powerful tool that is easy to collect. The biggest problem I think that we face in data-driven decision making is that the data is being interpreted and shared by the wrong people. Data has the potential to be very beneficial if evaluated and disseminated by qualified individuals.
I think an interesting case study in examining the use of ‘objective’ data in driving school improvement is our recent discussion of Wake County’s recent decision to utilize EVAAS testing as the sole determinant in placing 7th and 8th grade math students. In previous years, teachers exclusively had the power to decide which students were placed in college-bound math courses (pre-Algebra and Algebra I), with an end result of only 40% enrollment for qualified minorities as compared to over 60% of white students. Because of the switch from teacher evaluation to ‘objective’ data, student enrollment has increased in both classes by 30%. Although it will be several years before we can begin to gauge the success or failure of this transformation, I think it’s worth considering whether all students are best served if this and similar placement decisions are determined solely by data or by teacher evaluation, or by a data-based teacher decision?
The School Improvement in Maryland web page in our reading for tomorrow addresses the type of data we should be talking about. This is the type of implementation that can yield results for students. Let's let the bean counters crunch school and state EOG results while we focus on what matters, i.e., the implementation of a data-based framework that effectively informs and drives instruction.
"What is needed is a monitoring system that is aligned with the state content standards that yields timely and meaningful diagnostic results for classroom teachers. Teachers can and should collect daily information about where students are in relation to what is being taught. Teachers need the diagnostic information to determine what they need to teach/re-teach and when they need to provide additional interventions. Teachers must apply the principles of assessment for learning."
We maximize the likelihood of data relevance if we focus on how it informs classroom instruction. We are able to exert influence over this process, we can continuously sanity-check this data for efficacy, and in so doing we can indirectly maximize our chances of positively impacting the effect of EOG and AYP data on our schools.
Jordi, thanks for sharing your experience with your ELL program. The same type of work was done at my school after a trip to Chicago to visit a "high performing school." The data wasn't wrong at that school, but there was so much more to its success than just the test scores. The article by Viadero talks about tying teacher pay to student success, a topic that has been in the news in certain states this year. I'm interested in finding out if the state of NC is thinking in this direction...any thoughts on this?
Will, I think that data generated on a student-by-student basis, in individual classrooms, through a teacher-implemented framework for data collection, instruction, and intervention, is the best model for placing children in the Wake County example. This option is probably most closely aligned with the last possibility you offer, but it's not exactly the same. What I suggest differs in that I am not talking about a teacher making a choice. I am talking about a teacher, or better yet, a grade-level or content team, arriving at a data-based outcome regarding the placement of an individual child based on formative and summative data generated through this framework.
Maria brings up an excellent point about "falling asleep" on data. As many have mentioned, several ongoing pieces of data should be used.
Will, great point on the use of Dr. Marshall's equity audit to uncover what Spencer calls "looking deeper into the school's soul."
I do believe, though, that we are using data more effectively, especially when it comes to formative assessment data.
Maria makes an excellent point, which has been echoed by several others, regarding aspects of school success that are beyond the scope of data. I also strongly agree with what Courtnee was saying about the importance of looking at data but still making decisions with students in mind. Data can be helpful to the extent that we look at it only as a small piece of a bigger picture that will ultimately help students to be more successful.
In response to Will's comments regarding the usefulness of EVAAS, I have also found it to be a misleading source of data. In my experience, several predictions about student success that were made on the basis of EVAAS have turned out to be quite incorrect. This is a perfect example of how data should be used as one element in the decision-making process, but only in conjunction with several other elements.
My first comment is going to be my "soapbox" comment. I don't think there is anything in education that makes me angrier than the saying "the numbers and data don't lie." My anger comes from the fact that my principal constantly harps on how our school improvement plan is centered on our low test scores, since our numbers don't lie. Well, I can't help but think: you're right, the data suggesting that low-income schools are more likely to perform lower on standardized tests doesn't lie either. So, I think that in order to operate under the "data doesn't lie" banner, you have to consider all data concerning instruction.
As the leader of the PLC for Social Studies, I am in charge of disseminating data from department-wide tests using a program called Achievement Series. This program allows the user to break down individual performance on NCSCOS goals and objectives. This information would be very helpful if in fact the data could be seen as a direct measure of the students' mastery. However, as teachers, we understand that the data comes with inherent flaws, as it only gives us a minor reflection of where the students are in the class. The data in Achievement Series does not take into account the student who has been absent twenty days and simply sleeps whenever he or she is present. And, from time to time, this student's score will be higher than the rest of the class simply because the student might be a good guesser. So, I think that the data can be useful in some situations, but I feel it's a little too unreliable to base an entire school improvement plan on.
I completely agree with Spencer and his discussion of data as simply a means to save face. Without doubt, any piece of data can be blurred in a way that best suits the school and, more importantly, the school improvement plan. For instance, my school has in recent years experienced an increase in fights; however, the number of reported fights has decreased. As such, the data on violent acts has actually decreased, and in the eyes of the public the school is much safer. However, if one looked more closely, one would notice that the number of altercations and conflicts (which aren't as severe) has skyrocketed. So, Spencer, I completely agree that data is something that can save face and also be used in a way that best serves the school's purpose.
I am really intrigued by all of the comments that have been posted.
It sounds like a lot of us agree that data is useful, but many times it is either not analyzed sufficiently to reveal the real climate of the school, or it is manipulated to "save face" and serve the school's best interest.
I hope that we discuss in class what possible long-term solutions are and what the implications are for schools if we continue down this path.
I agree with rgaddy : "Teachers need the diagnostic information to determine what they need to teach/re-teach and when they need to provide additional interventions. Teachers must apply the principles of assessment for learning."
With all the added responsibilities teachers have now, diagnostic information can "weed out" false positives and allow teachers to address the core deficit.
It is great to have what is seen to be objective data; however, we know that research and statistics show whatever they set out to show. For example, one can find research to prove that kids do better on standardized tests when they are more physically fit than their counterparts, AND one can find research showing that kids who are physically fit perform equally to those who are not.
The case study used by Will focusing on the placement of 7th and 8th grade math students using EVAAS is interesting because it does show where "numbers" have made a decision. However, Heidi stated that she has experienced incidents where EVAAS has measured a student's future success incorrectly. This definitely reiterates my opinion (which everyone's posts have done) that numbers alone cannot be the sole basis for making decisions. To answer your question, Will, I think decisions should be made on data-based teacher decisions. We must take all factors pertaining to a specific student into consideration.
For example, students who attend early colleges must receive a certain score on the COMPASS Test, SAT, or ACT in order to be placed in certain college classes. Although several of our students received the needed score, as a faculty we knew they were unable to be totally immersed in college classes because they were not mature enough to handle instructors who did not take attendance, to submit work on time without being reminded multiple times, or to communicate with a college instructor. For that reason, we designed a different track for those students that ensured that by graduation they would have all the classes needed to receive an associate's degree, but eased them into the college environment. The numbers said they were ready academically, but we knew they would not succeed because they matured at a slower rate than their peers; had we gone off numbers alone, some of them would not have successfully completed the program.