Three new episodes of Emily Hanford's groundbreaking podcast have dropped. They have a lot to teach us about following evidence and sticking with what works.
"Steubenville proves schools can defy odds with evidence, continuity, and teacher buy-in—not just phonics."
Great recap of a great series! I wonder whether you noticed at the beginning of Episode 11 that SFA prioritizes letter sounds over letter names--something I did when I taught kindergarten (despite what my curriculum--and consensus among researchers--demanded) and what I continue to do as a reading specialist working with small group intervention because I emphasize both efficiency and effectiveness. Too many 'evidence-based" phonics programs buckle under their own weight, crowding out other literacy components such as knowledge-building, as I wrote about in Bursting with Knowledge: Are We Overteaching Phonics?
On teaching children _letter sounds_ as opposed to _letter names_ :
just think of showing a child the letter "w" and naming it as "WUH," and then saying "WUH - WUH- watermelon!" (How well would "DOUBLE-YOO ... DOUBLE-YOO ... watermelon!" work ?!)
Happily, I had that insight when my own children were small, and--without trying to 'teach the alphabet' or that 'alphabet song' (which I admit is useful when you're trying to put things into alphabetical order--but not otherwise)--they learned to read organically when preschoolers.
Where and how do student interests, teacher passions, and localized learning fit into SFA and DI models? Are these components baked in? Sacrificed? Something else?
Where do you fall on "evidence" at the end of the day?
1. Yes, EdReports is laughable. They look at the inputs (what's on the page), and not the short-term outcomes (observing typical classrooms), or the medium term ones (actual student achievement).
SFA and DI have "real" evidence on outcomes.
2. So, as you report, states or districts that try to "use evidence" will almost certainly botch the empirical details.
3. Moreover, "Evidence" in our sector doesn't generally contain any "rigorous side effects may include" data. What happens in real life implementation?
What if DI - which as you know I admire - had evidence to show that 30% of district jettison it within 4 years (even if there are learning gains for kids!), because of the teacher complaints you mention?
What if that was an 80% rejection rate? That sort of "body rejects organ transplant" data would be dispositive, right?
That's such an interesting point/observation. When I was observing and writing about Success Academy, I heard Eva Moskowitz and her lieutenants say about the network's focus on test scores, for example, "You might have very strong personal views about standardized testing, and that's fine. But if so, this is not the place for you." Where do you draw the line between creating consensus and, well, muscular leadership and a culture of "this is how we do things here."
Great summary. I'm particularly interested in #4, teacher buy-in. Did you describe the SFA 'voting' process in your book? Where can I learn more about it?
"Steubenville proves schools can defy odds with evidence, continuity, and teacher buy-in—not just phonics."
Great recap of a great series! I wonder whether you noticed at the beginning of Episode 11 that SFA prioritizes letter sounds over letter names--something I did when I taught kindergarten (despite what my curriculum--and the consensus among researchers--demanded) and something I continue to do as a reading specialist working with small-group intervention, because I emphasize both efficiency and effectiveness. Too many "evidence-based" phonics programs buckle under their own weight, crowding out other literacy components such as knowledge-building, as I wrote about in "Bursting with Knowledge: Are We Overteaching Phonics?"
(https://highfiveliteracy.com/2024/11/18/bursting-with-knowledge-are-we-overteaching-phonics/)
Thanks for this important discussion!
On teaching children _letter sounds_ as opposed to _letter names_:
Just think of showing a child the letter "w" and naming it as "WUH," and then saying "WUH - WUH - watermelon!" (How well would "DOUBLE-YOO ... DOUBLE-YOO ... watermelon!" work?!)
Happily, I had that insight when my own children were small, and--without trying to 'teach the alphabet' or the 'alphabet song' (which I admit is useful when you're trying to put things into alphabetical order--but not otherwise)--they learned to read organically as preschoolers.
Great example. This reminds me of a first grader who wrote a Halloween story about a "yitch"--spelling "witch" with a Y because the letter's name, "wye," starts with the /w/ sound.
Where and how do student interests, teacher passions, and localized learning fit into SFA and DI models? Are these components baked in? Sacrificed? Something else?
Where do you fall on "evidence" at the end of the day?
1. Yes, EdReports is laughable. They look at the inputs (what's on the page), not the short-term outcomes (observing typical classrooms) or the medium-term ones (actual student achievement).
SFA and DI have "real" evidence on outcomes.
2. So, as you report, states or districts that try to "use evidence" will almost certainly botch the empirical details.
3. Moreover, "Evidence" in our sector doesn't generally contain any "rigorous side effects may include" data. What happens in real life implementation?
What if DI - which as you know I admire - had evidence showing that 30% of districts jettison it within 4 years (even if there are learning gains for kids!) because of the teacher complaints you mention?
What if that was an 80% rejection rate? That sort of "body rejects organ transplant" data would be dispositive, right?
That's such an interesting point/observation. When I was observing and writing about Success Academy, I heard Eva Moskowitz and her lieutenants say about the network's focus on test scores, for example, "You might have very strong personal views about standardized testing, and that's fine. But if so, this is not the place for you." Where do you draw the line between creating consensus and, well, muscular leadership and a culture of "this is how we do things here"?