Data Visualisation 4 – Bloom’s 2 Sigma Problem and Synchronous Teaching
This week I decided to capture data to explore Bloom’s 2 Sigma Problem (Bloom, 1984), in which he demonstrated that students who received one-to-one tuition with a mastery learning approach outperformed students taught in a class of 30 by two standard deviations (2 sigma) in exam performance.
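To make the size of that effect concrete: on a roughly normal distribution of exam scores, a gain of two standard deviations places the average tutored student above about 98% of the conventionally taught class. Below is a minimal sketch of that calculation in Python; the normality assumption is my simplification for illustration, not part of Bloom’s own analysis.

```python
from statistics import NormalDist

# Assume exam scores are roughly normally distributed (a simplifying
# assumption for illustration; Bloom reports the effect size directly).
standard_normal = NormalDist(mu=0, sigma=1)

# A student scoring two standard deviations above the class mean sits at
# roughly this percentile of the conventionally taught group.
percentile = standard_normal.cdf(2.0)
print(f"2 sigma above the mean ≈ {percentile:.1%}")  # ≈ 97.7%
```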
Thirty-five years later, with the advent of EdTech, AI and data-driven platforms, we may be on the cusp of solving the problem of how this level of mastery can be replicated on a larger scale. What if the distribution of cognition across a human-technology assemblage, which has led to amateur chess players using technology to beat both grandmasters and more powerful machines (Beard, 2018), could also lead to more effective teaching?
I chose a student for this pilot, whom I will refer to as Student A, and provided synchronous live teaching as we worked through topics on the Hegarty Maths site. The six topics listed in figure 2b together form a unit on Solving Equations. Student A was asked to watch the video for each topic and make notes before answering 12 questions on the site. The video solutions provide good models of worked examples, offering cognitive support (Rosenshine, 2012). At each stage, the student could ask for help if needed and go directly to the part of the video that demonstrates the concept to be applied. Students receive two attempts at a question before it is marked with a red light and they move on.

I recorded the number of times I offered ‘corrective measures’, as suggested by Bloom, as a way of adding direct support and intervention to a student-centred learning model; taking this approach and intervening only when Student A was stuck allowed me to build a more accurate picture of strengths and weaknesses. For topic 3, Student A received an amber light, and before continuing to the next topic we decided to work through a few of the ‘building block’ topics suggested by Hegarty Maths before re-attempting topic 3; these were not personalised to the needs of the student but based on the prerequisites of the topic. The data visualisation diagram (fig 2a) imposes no fixed structure and offers a visual map of the horizontal and vertical flights of the student’s journey through mastery learning of the topic.
Reflections
Student A has yet to re-attempt topic 3. After completing all six topics at mastery, Student A would then be given a topic test, as per Bloom’s assessment of mastery learning. Drawing on my role as a teacher, I picked relevant sub-topics identified by Hegarty Maths for topic 3, informed by what I felt the student’s needs were based on the ‘corrective measures’ I had made. In the future, AI should offer a far greater level of personalisation, not only ‘plugging the gaps’ with questions from default sub-topics but generating individualised questions matched to the student’s specific needs at the time.

Hegarty Maths also collected additional information on Student A, including time taken, how much of the video was watched, and which specific questions were attempted a second or third time, all of which would be useful if I were to take an asynchronous approach to mastery learning; such an approach may be better suited to teaching students in large numbers, for whom personalised instruction and support would need to be differentiated accordingly. A barrier to mastery learning that teachers may face is teaching to the test, under the added pressure of having to ‘deliver’ the curriculum with enough time for revision before the actual exam (Griffiths, 2016).
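Purely as a hypothetical illustration of how this kind of per-topic data could be structured for an asynchronous dashboard, or for a visualisation like fig 2a, a minimal Python sketch is below; the field names and values are my own invention, not Hegarty Maths’ actual data model.

```python
from dataclasses import dataclass

@dataclass
class TopicRecord:
    """One row of a hypothetical mastery-learning log for a single student.

    Field names are illustrative only; they are not Hegarty Maths' schema.
    """
    topic: str                  # e.g. the topic title within the unit
    video_watched_pct: int      # how much of the tutorial video was viewed
    time_taken_mins: int        # total time spent on the 12 questions
    questions_reattempted: int  # questions needing a second or third attempt
    corrective_measures: int    # live interventions recorded during the pilot
    outcome: str                # "green", "amber" or "red" light on the site

# Example entry for topic 3, which received an amber light in the pilot
# (the numbers here are made up for illustration).
topic_3 = TopicRecord(
    topic="Topic 3",
    video_watched_pct=80,
    time_taken_mins=25,
    questions_reattempted=4,
    corrective_measures=3,
    outcome="amber",
)
print(topic_3)
```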
References
Beard, A. (2018) Natural Born Learners: Our Incredible Capacity to Learn and How We Can Harness It. London: Weidenfeld & Nicolson.
Bloom, B. S. (1984) ‘The 2 Sigma Problem: The Search for Methods of Group Instruction as Effective as One-to-One Tutoring’, Educational Researcher, 13(6), pp. 4–16. doi: 10.2307/1175554.
Rosenshine, B. (2012) ‘Principles of instruction: research-based strategies that all teachers should know’, American Educator, 36(1), pp. 12–21.
Griffiths, S. (2016) Let’s teach for mastery — not test scores | Sal Khan. Available at: https://www.youtube.com/watch?v=-MTRxRO5SRA (Accessed: 21 February 2021).
Hi Saqib,
“Hegarty Maths also collected additional information on Student A, including time taken, how much of the video was watched, and which specific questions were attempted a second or third time…”
Does it suggest why it collects the data it does? Does it have a reason, or does it look like it attempts to capture all data that is easily available?
Does it ascribe a value to this data or does it leave you (the teacher) to say what it is worth?
Monitoring and tracking of student activity, I take it, which is useful if you’re using it as a homework tool. It just provides the data but then gives a summary on a student dashboard.
‘Thirty-five years later, with the advent of EdTech, AI and data-driven platforms, we may be on the cusp of solving the problem of how this level of mastery can be replicated on a larger scale.’
Perhaps, but I wonder if we also need to question Bloom’s original model, which seems to imply a rather rigid, systematised education that may not reflect the ways all subject disciplines are taught and assessed in current times. It may make sense in subjects, such as mathematics, where ‘achievement’ tends to be a fixed, predefined point, and where rigorous testing defines learning, but that wouldn’t be the case for all subjects. ‘Mastery’ learning definitely has its critics, and is often seen as antithetical to constructivism.
I wonder whether, given the ways data-driven systems seem to suit the systematised approach of mastery learning, these educational theories are being reasserted in authoritative ways, despite quite a lot of criticism from educationalists? Or perhaps we are seeing the entrenching of divisions between ‘technical’ subjects, such as science, technology, engineering and maths, and subjects such as philosophy, the arts, and the social sciences, where the former are being characterised more by data-driven technologies?
I think your visualisation is intriguing, but I wasn’t sure how it would be useful or relevant to a teacher. What could a teacher do with this visualisation? What would it help the teacher to understand about a student’s learning? What problems might there be in the ways such a visualisation represents student learning? These could be useful questions to consider next time around.
Do mastery learning and constructivism need to be at odds with each other? Could one not have flipped mastery group learning, for example?
In any case, as the paper linked below suggests, mastery learning has demonstrated higher academic achievement compared with a constructivist approach to knowledge construction; if schools are measured on league tables and students gain entry to courses based on exam results, why wouldn’t they choose to ‘master’ material? http://www.iraj.in/journal/journal_file/journal_pdf/14-341-1489657999123-128.pdf
“Or perhaps we are seeing the entrenching of divisions between ‘technical’ subjects, such as science, technology, engineering and maths, and subjects such as philosophy, the arts, and the social sciences, where the former are being characterised more by data-driven technologies?”
Sal Khan makes an interesting point about gaps in learning due to the linear factory/industrial model of education, which isn’t personalised; traditionally, students in various cultures learned music, meditation, grammar or martial arts from a teacher/guru by ‘mastering/embodying’ each lesson before moving on, and this wouldn’t necessarily be based on tests as we know them in the industrial/data age. Is what we are witnessing a modern phenomenon based on the quantification of education in a post-industrial age?
‘Do mastery learning and constructivism need to be at odds with each other? Could one not have flipped mastery group learning, for example?’
Yes, I don’t think they should be!
‘Is what we are witnessing a modern phenomenon based on the quantification of education in a post-industrial age?’
In a way I’d like to think so. But then adhering to the practicalities of league tables wouldn’t sound as utopian. I think the better kind of research is really trying to focus on learning experience, rather than on the accountability or measurement systems we see in league tables.