Last week, the Pittsburgh Business Times published its 2012 Guide to Western Pennsylvania Schools, which ranks school districts for the Pittsburgh area and across the entire state of Pennsylvania. The newspaper analyzed each district's performance based on Pennsylvania System of School Assessment (PSSA) exam results. According to its website, the ranking formula takes into account three years of PSSA test scores in math, reading, writing and science, with the current year given the most weight.
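The Business Times does not publish its exact weights, but a "three years of scores, current year weighted most" formula amounts to a weighted average. Here is a minimal sketch; the 0.5/0.3/0.2 split is purely an assumption for illustration, not the paper's actual formula:

```python
# Hypothetical sketch of a weighted three-year ranking score.
# The 0.5/0.3/0.2 weights are assumed for illustration only; the
# Pittsburgh Business Times does not publish its exact weights.

def weighted_score(scores_by_year, weights=(0.5, 0.3, 0.2)):
    """scores_by_year: [current, prior, two-years-ago] average PSSA scores."""
    return sum(s * w for s, w in zip(scores_by_year, weights))

# A district averaging 92, 90 and 88 over the last three years
# lands closer to its current-year score than a flat average would.
print(weighted_score([92, 90, 88]))  # 90.6 (flat average would be 90.0)
```

Whatever the real weights, the effect is the same: a district improving in the current year ranks better than its three-year flat average would suggest.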
In the Top 15 school districts in Pennsylvania, Allegheny County led with six school districts represented, followed by Chester County with three (Unionville-Chadds Ford, T/E and Great Valley), Delaware County with two (Radnor and Wallingford-Swarthmore) and Montgomery County with two (Lower Merion and Lower Moreland).
In the 2012 rankings, Upper St. Clair School District holds onto its first-place title for the eighth year in a row, with Tredyffrin-Easttown School District dropping to third place and Unionville-Chadds Ford School District taking second. Radnor Township School District stays in fourth place, Lower Merion drops one spot to eighth, and Great Valley School District slips from 13th to 14th. Looking at other area districts, Downingtown School District moved from 28th to 25th and Phoenixville School District dropped from 85th to 98th.
To see the ranking for all 500 Pennsylvania school districts, click here.
Pennsylvania School District Rankings (Statewide)

2012 Rank   2011 Rank   School District (County)
    1           1       Upper St. Clair School District (Allegheny)
    2           3       Unionville-Chadds Ford School District (Chester)
    3           2       Tredyffrin-Easttown School District (Chester)
    4           4       Radnor Township School District (Delaware)
    5           6       Mt. Lebanon School District (Allegheny)
    6           5       North Allegheny School District (Allegheny)
    7           9       Hampton Township School District (Allegheny)
    8           7       Lower Merion School District (Montgomery)
    9           8       Central Bucks School District (Bucks)
   10          12       South Fayette Township School District (Allegheny)
   11          10       Peters Township School District (Washington)
   12          11       Fox Chapel Area School District (Allegheny)
   13          15       Wallingford-Swarthmore School District (Delaware)
   14          13       Great Valley School District (Chester)
   15          14       Lower Moreland Township School District (Montgomery)
A Pennsylvania school district that places in the top 15 or 20 out of 500 districts statewide based on the PSSA exams is an achievement of which students, parents, teachers and administrators can all be proud. PSSA scores are viewed by many as a reliable predictor of future success. As a student assessment tool, the PSSA exam helps measure and provide useful information about what students are learning. The PSSAs measure the performance of an entire class and give us the truest measure of how the class as a whole is performing.
In the Unionville-Chadds Ford School District, the teachers union used the District's high PSSA and SAT scores as a contract negotiating tool. I wrote a post on January 11, 2012, 'Do Higher Teacher Salaries in Philadelphia Area School Districts Equate to Higher PSSA & SAT Scores?' that included a report by Keith Knauss, a member of the Unionville-Chadds Ford school board. Knauss examined 61 Philadelphia area school districts for factors that might explain the wide variation in academic achievement on PSSA and SAT tests.
In his analysis of the data, Knauss concluded that “only two factors are significant – Parental Education and Poverty and those two factors alone can explain the bulk of the differences in academic achievement.” Recognizing that “those two factors are beyond the control of the District”, Knauss notes, “all other factors, where the District does have control over are not significant, including per student spending, class size, teacher salary, teacher experience, teacher education.”
While most of us might assume that the most experienced teachers, or those with the most education and the highest salaries, would be associated with higher test results, Knauss's research data does not support that theory, at least not in the 61 Philadelphia area school districts he studied. Knauss concludes, "contrary to popular belief, there is no evidence from the 61 districts that spending or the number of teachers has a measurable effect on academic achievement." Click here to read Keith's Spending Trends Presentation TE research study.
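Knauss's analysis is, at bottom, a cross-district correlation study: which district-level factors track with achievement and which do not. A minimal sketch of that kind of check follows; the district numbers are invented for illustration and are NOT his data:

```python
# Sketch of a cross-district correlation check, in the spirit of
# Knauss's analysis. All district figures below are invented for
# illustration; they are NOT data from his study.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Five hypothetical districts:
parental_ed = [20, 35, 50, 65, 80]   # % of adults with a college degree
pssa = [62, 70, 78, 86, 93]          # % of students scoring proficient
spending = [14, 18, 12, 16, 15]      # per-student spending, $1000s

print(round(pearson_r(parental_ed, pssa), 3))  # near 1.0: strong correlation
print(round(pearson_r(spending, pssa), 3))     # near 0.0: no correlation
```

In this toy data, achievement rises in lockstep with parental education while spending is all over the map, which is the shape of the result Knauss reports. Correlation alone, of course, does not establish causation in either direction.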
Bottom line ... if we accept that school district rankings based on PSSA performance are important, do we give credit to the District's teachers for the results? If you believe that teachers play a role in students' performance on the PSSA exams, should the results be a factor in the current teacher contract negotiations? Should the TEEA use the PSSA exam results as a tool in its contract negotiations?
TESD is facing tighter budgets, and difficult choices are what remain for the school board. In all likelihood, the 2012-13 school year will see a $50 fee charged to students to play sports, perform in the marching band and participate in clubs. The District's Education Committee is exploring many ways to reduce costs to help the budget. Last year we saw the elimination of foreign language in the elementary program and of German and Latin in the middle school. Now there is discussion of eliminating string lessons in the third grade, or possibly eliminating elementary and middle school music lessons altogether.
A couple of other budget strategies under discussion: (1) the demotion of professional staff for economic reasons and (2) increasing class size to help the 2012-13 budget. Here's a question: I wonder if there is any research to suggest that increasing class size could result in lower PSSA exam results for TESD.
Click here for details of Education Committee suggestions for 2012-13 budget strategies.
A casual reading of my quote above might be misconstrued as saying that teachers are unimportant or can only have a minor effect on learning. Quite to the contrary.
From the presentation:

"Many might wonder why the teacher factors (teacher education, teacher pay and teacher experience) in the prior analysis are not significant when determining academic achievement. Haven't dozens of studies shown that teachers are the key to delivering an excellent education? Haven't we all experienced the magical influence that some teachers have had on our children? Haven't we seen the recent study reported in the NY Times ("Big Study Links Good Teachers to Lasting Gain") that measures the long-term favorable student outcomes derived from an above average teacher?

Teachers are the key to education, but the teacher factors we measure (teacher education, teacher pay and teacher experience) are just not related to teacher effectiveness. The best teachers are not necessarily the ones that have the most degrees, the most experience and the highest pay. It's the reason the President is doing his best to encourage states to rework teacher evaluation and compensation systems to include value-added performance-based measures (pay for performance) rather than relying on just degrees and experience. It's the reason Governor Corbett is investing in value-added assessment for teacher evaluation."
Bottom line – test results are affected by many factors besides those related to teachers. We have no measures in place to tell us whether a district’s teachers are above average or below average. We can’t say with any certainty that TE’s teachers are above average because of TE’s high test scores. Neither can we say Chester Upland’s teachers are below average because of CU’s low test scores.
I appreciate the clarification Keith — apologies if my post was confusing to readers.
We need to be careful here – correlation does not equal causation, and there are many factors to be considered in any such analysis. A couple points to illustrate:
1) "Spending per student" is somewhat misleading – to make meaningful comparisons between and among vastly different communities, you would need to delve into considerable detail. Every district is different, and spending includes debt service for bond issues (facilities), teacher salaries and benefits, expenditures related directly to instruction (books, lab facilities, instructional equipment, computers, etc.), expenditures for extra-curricular activities, and so on. At some point, it IS about money: students in rural or inner-city schools with inadequate and poorly maintained facilities, without up-to-date books and equipment, and with much larger class sizes are severely handicapped by a lower level of spending. The impact of cuts may be less dramatic in affluent districts, but at some point cuts will have an impact on the quality of outcomes.
2) "Student-teacher ratio" is NOT the same as class size. Often, student-teacher ratio is a calculation dividing total students by total "teachers" (with varying definitions of "teacher" – some ratios include aides, specialists in reading and math, "pull-outs," etc.), while the actual class size in core classrooms is much higher. (Most T/E classes are much larger than 15, the figure in Keith's data.) Ratios can be very misleading; you need to look at actual core class size at various grade levels. There is very good research for the proposition that lower class sizes in the early grades – K-3 – have significant and lasting benefits on student achievement and on less tangible but equally important factors such as social adjustment and discipline. Again, at some point it IS about money: in poor rural or inner-city schools with large classes, reduction of class size does show significant benefits, and since that costs money (more teachers), it IS about spending. Even in more affluent districts, there is a limit to how much you can increase class size before you have a negative impact on academic performance. If early grades go from (for example) 20-24 kids to 30-35 kids, do you really believe that will not have an impact? We can argue about where that line is, but there is a line.
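The gap between a published "student-teacher ratio" and actual class size is easy to see with simple arithmetic. The staffing numbers below are hypothetical, chosen only to show how a ratio of 15 can coexist with 25-student classrooms:

```python
# Why "student-teacher ratio" understates class size: the ratio divides
# all students by ALL certified staff (aides, specialists, pull-outs),
# while a core classroom has only one teacher. Hypothetical numbers.
students = 900
classroom_teachers = 36
support_staff = 24   # aides, reading/math specialists, pull-out teachers

ratio = students / (classroom_teachers + support_staff)
core_class_size = students / classroom_teachers

print(ratio)            # 15.0 -- the figure a district might report
print(core_class_size)  # 25.0 -- what a child actually experiences
```

The same enrollment yields a "ratio" of 15 and a core class of 25, which is why comparisons or budget decisions need actual core class sizes, not ratios.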
Another way to look at this:
In Keith’s study, on page 10, under “Balancing the Budget” are two statements:
1) There is no evidence that increased spending and more teachers will result in improved academic achievement
2) There is no evidence that decreased spending and fewer teachers will result in declining academic achievement
Logically, statement #2 does not necessarily follow from statement #1. Even assuming statement 1 to be true (for the sake of argument), statement 2 has no meaning without more information. If you are talking about less spending on teacher benefits, such as pensions, I will agree that this is not likely to have an impact on student achievement. However, less spending as a result of fewer teachers is another matter. Presumably fewer teachers means larger class sizes (not student-teacher ratios!). How many fewer teachers? Fewer than what? How much increase in class size? What were the class sizes in the critical grade levels to begin with, and what is the new size as a result of fewer teachers?
It is common sense that at some point an increase in class size must have an adverse effect on student achievement. I think we can all agree that 100 students per teacher is too many. I think most of us would agree that 50 is too many. At what point does it begin to not matter? 30? 35? The research says more like 20 in grades K-3. It is clear that unless you know your baseline X and your proposed increase Y, you cannot meaningfully evaluate the possible impact of "fewer teachers." And again, we need actual core class sizes, not "student-teacher ratios."
http://www.addictinginfo.org/2012/04/15/are-you-sick-of-highly-paid-teachers/
why weren’t we number 1
I found the scores for Upper St Clair and Conestoga 11th grade 2011.
Reading: USC 94, CHS 93
Science: USC 78, CHS 78
Writing: USC 97, CHS 99
Math:    USC 94, CHS 90
My guess is that the other years will yield a similar result. So if you really wanted to, you could dig up three years of scores at the different grade levels and make your own comparison. But to be accurate you'd also need to view the differences in terms of standard deviations and statistical significance. I think the closeness of the above scores indicates that there is no real major difference between the schools. Bragging at this level is worth about as much as its resale value. My guess is that you can probably say the same thing for the top 5? 10? Who knows?
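The "standard deviations and significance" point can be made concrete with a two-proportion z-test on the proficiency percentages above. The cohort sizes are not given in the post, so the 400-student figure below is an assumption for illustration:

```python
# Is USC's 94% vs. Conestoga's 93% reading proficiency a real difference?
# Two-proportion z-test, assuming hypothetical 11th-grade cohorts of
# about 400 students each (actual enrollments are not in the post).
from math import sqrt

def two_prop_z(p1, n1, p2, n2):
    p = (p1 * n1 + p2 * n2) / (n1 + n2)          # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))   # standard error
    return (p1 - p2) / se

z = two_prop_z(0.94, 400, 0.93, 400)
print(round(z, 2))  # well below 1.96, so not significant at the 5% level
```

Under these assumed cohort sizes, a one-point gap is statistical noise; the difference would have to be several points before the z-statistic cleared the usual 1.96 threshold. That supports the commenter's point that rankings built on such gaps carry little meaning.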
I think if there is any value to these scores, it is less in comparison to other schools and more as one factor in trying to determine the effectiveness of your own programs. And I'm not convinced that it is a significant factor, but it's something.
Unfortunately, in my opinion, we've latched onto this type of thing as a way to compare schools or to decide funding or what have you. We've increased standardized testing to a point where I seriously suspect there is not much value to this type of testing. We look at information such as that drawn up by Keith Knauss and focus more on using it as a sign that teachers are overpaid rather than seeing the elephant in the room: poverty crushes education. So instead of government and schools working in complement to each other we get the opposite effect. We don't have an education problem in this nation; we have a poverty problem. Work on the poverty problem and you will see many educational problems begin to subside.
And if it was all that easy to do we would have done it already.