In Opposition To Academic Boycotts

Our guest author today is Rita Freedman, Acting Executive Director of the Jewish Labor Committee, and a recent retiree from the American Federation of Teachers.

There is a growing, worldwide effort to ostracize Israel and to make it into a pariah state. (This despite the fact that Israel is still the only democratic country in the Middle East.) A key ingredient of this campaign is the call to boycott, divest from and impose sanctions on Israel (known as BDS). Within the world of higher education, this takes the form of calls to boycott all Israeli academic institutions, sometimes including calls to boycott all Israeli scholars and researchers. The rationale is that this will somehow pressure Israel into an agreement with the Palestinians, one which will improve their lot and lead to an independent Palestinian state alongside the State of Israel (although it is worth noting that some in the BDS movement envision a future without the existence of Israel).

Certainly, the goals of improving life for the Palestinian people, building their economy and supporting their democratic institutions – not to mention supporting the creation of an independent Palestine that is thriving and getting along peacefully with its Israeli neighbor – are entirely worthy. 

More Effective, Less Expensive, Still Controversial: Maximizing Vocabulary Growth In Early Childhood

Our guest author today is Lisa Hansel, communications director for the Core Knowledge Foundation. Previously, she was the editor of American Educator, the magazine published by the American Federation of Teachers.

With all the chatter in 2013 (thanks in part to President Obama) about expanding access to high-quality early childhood education, I have high hopes for America’s children finally getting the strong foundation of knowledge and vocabulary they need to do well in—and enjoy—school.

When children arrive in kindergarten with a broad vocabulary and a love of books, both of which come from daily conversations with caregivers and from being read to frequently, they are well prepared for learning to read and write. Just as important, their language comprehension makes learning through teacher read-alouds and conversations relatively easy. The narrower the children’s vocabulary and the fewer experiences they’ve had with books, the tougher the climb to come. Sadly, far too many children don’t make the climb; they mentally drop out in middle school, and are physically adrift soon thereafter.

Being Kevin Huffman

In a post earlier this week, I noted how several state and local education leaders, advocates and especially the editorial boards of major newspapers used the recently released NAEP results inappropriately – i.e., to argue that recent reforms in states such as Tennessee and D.C. are “working.” I also discussed how this illustrates a larger phenomenon in which many people seem to expect education policies to generate immediate, measurable results in terms of aggregate student test scores, which I argued is both unrealistic and dangerous.

Mike G. from Boston, a friend whose comments I always appreciate, agrees with me, but asks a question that I think gets to the pragmatic heart of the matter. He wonders whether individuals in high-level education positions have any alternative. For instance, Mike asks, what would I suggest to Kevin Huffman, who is the head of Tennessee’s education department? Insofar as Huffman’s opponents “would use any data…to bash him if it’s trending down,” would I advise him to forgo using the data in his favor when they show improvement?*

I have never held a high-level leadership position. My political experience and skills are (and I’m being charitable here) underdeveloped, and I have no doubt that many more seasoned folks in education would disagree with me. But my answer is: Yes, I would advise him to forgo using the data in this manner. Here’s why.

How Much Do You Know About Early Oral Language Development?

The following was written by Susan B. Neuman and Esther Quintero. Neuman is Professor of Early Childhood & Literacy Education, Steinhardt School of Culture, Education, & Human Development at New York University.

The topic of oral vocabulary instruction is clouded by common myths, which have sometimes gotten in the way of promoting high-quality teaching early on. While these myths often contain partial truths, recent evidence has called many of these notions into question.

We've prepared a short quiz for you – take it and find out how much you know about this important issue. Read through the following statements and decide whether they are myths that have been perpetuated about oral vocabulary development or facts (key principles) about the characteristics of high-quality vocabulary instruction. Download Dispelling Myths and Reinforcing Facts About Early Oral Language Development and Instruction if you prefer to go straight to the answers.

A Few Additional Points About The IMPACT Study

The recently released study of IMPACT, the teacher evaluation system in the District of Columbia Public Schools (DCPS), has garnered a great deal of attention over the past couple of months (see our post here).

Much of the commentary from the system’s opponents was predictably (and unfairly) dismissive, but I’d like to quickly discuss the reaction from supporters. Some took the opportunity to make grand proclamations about how “IMPACT is working,” and there was a lot of back and forth about the need to ensure that various states’ evaluations are as “rigorous” as IMPACT (as well as skepticism as to whether this is the case).

The claim that this study shows that “IMPACT is working” is somewhat misleading, and the idea that states should now rush to replicate IMPACT is misguided. Both reactions also miss the important points about the study and what we can learn from its results.

ESEA Waivers And The Perpetuation Of Poor Educational Measurement

Some of the best research out there is a product not of sophisticated statistical methods or complex research designs, but rather of painstaking manual data collection. A good example is a recent paper by Morgan Polikoff, Andrew McEachin, Stephani Wrabel and Matthew Duque, which was published in the latest issue of the journal Educational Researcher.

Polikoff and his colleagues performed a task that makes most of the rest of us cringe: They read and coded every one of the over 40 state applications for ESEA flexibility, or “waivers.” The end product is a simple but highly useful presentation of the measures states are using to identify “priority” schools (the lowest performers) and “focus” schools (those “contributing to achievement gaps”). The results are disturbing to anyone who believes that strong measurement should guide educational decisions.

There's plenty of great data and discussion in the paper, but consider just one central finding: how states are identifying priority (i.e., lowest-performing) schools at the elementary level (the measures are, of course, a bit different for secondary schools).

Can Knowledge Level The Learning Field For Children?

** Reprinted here in the Core Knowledge Blog

How much do preschoolers from disadvantaged and more affluent backgrounds know about the world, and why does that matter? One recent study by Tanya Kaefer (Lakehead University), Susan B. Neuman (New York University), and Ashley M. Pinkham (University of Michigan) provides some answers.

The researchers randomly selected children from preschool classrooms at two sites, one serving kids from disadvantaged backgrounds, the other serving middle-class kids. They then set out to answer three questions:

A Quick Look At The DC Charter School Rating System

Having taken a look at several states’ school rating systems (see our posts on the systems in IN, OH, FL and CO), I thought it might be interesting to examine a system used by a group of charter schools – starting with the one used by charters in the District of Columbia. This is the third year the DC charter school board has released the ratings.

For elementary and middle schools (upon which I will focus in this post*), the DC Performance Management Framework (PMF) is a weighted index composed of: 40 percent absolute performance; 40 percent growth; and 20 percent what they call “leading indicators” (a more detailed description of this formula can be found in the second footnote).** The index scores are then sorted into one of three tiers, with Tier 1 being the highest, and Tier 3 the lowest.
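
To make the arithmetic concrete, here is a minimal sketch of how a 40/40/20 weighted composite like this might be computed. The weights come from the description above; the component scales and the tier cut points in the sketch are hypothetical placeholders, not the PMF's actual scoring rules (which are detailed in the footnotes and in the charter board's documentation).

```python
# A minimal sketch of a 40/40/20 weighted index, per the description above.
# The component scales and tier cut points below are hypothetical placeholders;
# the PMF's actual scoring rules are described in the footnotes.

def pmf_index(absolute, growth, leading):
    """Combine three component scores (assumed here to be on a 0-100 scale)."""
    return 0.40 * absolute + 0.40 * growth + 0.20 * leading

def pmf_tier(index, cut_tier1=65.0, cut_tier2=35.0):
    """Sort an index score into Tier 1 (highest) through Tier 3 (lowest).
    The cut points are illustrative defaults, not the PMF's actual cutoffs."""
    if index >= cut_tier1:
        return 1
    if index >= cut_tier2:
        return 2
    return 3

# Example: a school with middling absolute performance but strong growth.
score = pmf_index(absolute=50, growth=80, leading=70)
print(score, pmf_tier(score))  # 66.0 -> Tier 1 under these illustrative cutoffs
```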

So, these particular ratings weight absolute performance – i.e., how highly students score on tests – a bit less heavily than do most states that have devised their own systems, and they grant slightly more importance to growth and alternative measures. We might therefore expect to find a somewhat weaker relationship between PMF scores and student characteristics such as free/reduced price lunch eligibility (FRL), as these charters are judged less predominantly on the students they serve. Let’s take a quick look.
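
As a rough illustration of the kind of "quick look" described here, one could simply correlate schools' PMF index scores with their FRL rates. The sketch below uses made-up numbers purely to show the calculation; it is not the actual DC ratings data.

```python
# Hypothetical data purely for illustration -- not the actual DC charter ratings.
import numpy as np

pmf_scores = np.array([72.0, 55.5, 48.0, 81.0, 39.5, 60.0])  # PMF index scores
frl_rates = np.array([0.45, 0.70, 0.85, 0.30, 0.90, 0.60])   # share of FRL-eligible students

# A strongly negative correlation would suggest the ratings largely track
# student characteristics rather than school performance per se.
r = np.corrcoef(pmf_scores, frl_rates)[0, 1]
print(f"Correlation between PMF score and FRL rate: {r:.2f}")
```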

The Wrong Way To Publish Teacher Prep Value-Added Scores

As discussed in a prior post, the research on applying value-added to teacher prep programs is pretty much still in its infancy. Even just a couple more years of data would go a long way toward at least partially addressing the many open questions in this area (including, by the way, the evidence suggesting that differences between programs may not be meaningfully large).

Nevertheless, a few states have decided to plow ahead and begin publishing value-added estimates for their teacher preparation programs. Tennessee, which seems to enjoy being first – its Race to the Top program is, a little ridiculously, called “First to the Top” – was ahead of the pack. It has once again published ratings for the few dozen teacher preparation programs that operate within the state. As mentioned in that prior post, if states are going to do this (and, as I said, my personal opinion is that it would be best to wait), it is absolutely essential that the data be presented along with thorough explanations of how to interpret and use them.

Tennessee fails to meet this standard. 

Words Reflect Knowledge

I was fascinated when I started to read about the work of Betty Hart and Todd Risley on the early language differences between children growing up in different socioeconomic circumstances. But it took me a while to realize that we care about words primarily because of what words indicate about knowledge. This is important because it means that we must focus on teaching children about a wide range of interesting “stuff” – not just vocabulary for vocabulary’s sake. So, if words are the tip of the iceberg, what lies underneath? This metaphor inspired me to create the short animation below. Check it out!