The effects of collecting, analyzing and using massive amounts of data in education have been on KnowledgeWorks’ radar for the last 14 years, first with the Pattern Recognition driver from 2020 Forecast: Creating the Future of Learning and most recently with the Automating Choices driver from our fifth anchor forecast, Navigating the Future of Learning.
Each iteration has given us the opportunity to revisit a critical force of change in light of current knowledge and of the maturity and adoption of the technologies that support measuring and processing data at scale.
Ten years ago, in our third anchor forecast, Recombinant Education: Regenerating the Learning Ecosystem, the authors summarized these trends under a disruption, High-Fidelity Living. They wrote, “As big data floods human sensemaking capacities, cognitive assistants and contextual feedback systems will help people target precisely their interactions with the world.” Using data to generate insights is not inherently good or bad. However, the ways in which people turn those insights into action, or inaction, can weaponize them, intentionally or unintentionally. How have you been using data insights over the last ten years?
These disruptions were major societal shifts that promised to have broad impact on the future of learning. We forecast that they would cause deep, and sometimes unsettling, change. But we also made the case that education stakeholders could use future uncertainty to spark creativity, not only fear.
Opportunity: insights for personalization
The forecast suggested that education stakeholders looking to take advantage of High-Fidelity Living should “watch for massive data sets, learning analytics, and dashboards to enable radically and continuously personalized learning for all learners based on their performance and motivation.”
Organizations such as Panorama Education and Shmoop now offer visual tools that educators and administrators can easily use to look beyond standardized academic achievement data. These beautifully designed dashboards are developed by compiling and processing hundreds of records and measurements. The goal is that, by using these tools, decision makers can hone their supports to a personal level, helping every learner thrive.
Even KnowledgeWorks was bitten by the High-Fidelity Living bug when we began to invest more deeply in impact and improvement expertise and research related to the implementation of personalized, competency-based learning in our partner states, districts and schools. By rigorously measuring our progress, we expect to make better and more targeted decisions alongside every learning community with which we partner than we would be able to do without data-driven insights and a focus on continuous improvement.
Challenge: automating decision making
The pendulum can also swing too far toward High-Fidelity Living. This is why the forecast warned readers that “interventions based on automated alerts and signals could create data blindness by reducing human intuition and limiting insight; to the extent that automation correlates with lower cost, this risk could be especially pronounced in low-income communities.”
The truth is that imperfect humans are creating imperfect processing systems for imperfect data. How could that possibly produce perfect decisions? Issues generally arise when educators are so overloaded that automation seems like a gift from the universe, or when funds are so low that it makes economic sense. However, people and organizations, particularly those who are misrepresented in historical data sets, should not have to give in to automating their decision making.
Systems that automate decision making have spread in education. Automated essay scoring engines are a popular example because of the direct and indirect implications they can have for learners’ outcomes and the proven problems with bias. “Natural language processing (NLP) artificial intelligence systems — often called automated essay scoring engines — are now either the primary or secondary grader on standardized tests in at least 21 states, according to a survey conducted by Motherboard.” Only three of those 21 states ensure that every essay is also read by a human evaluator.
Other parts of the social sector have also been using automated tools to support human decision making. In child welfare cases across the country, automated data analysis is being used to determine the level of danger children might be in. The data used come from “jails, psychiatric services, public-welfare benefits, drug and alcohol treatment centers and more.” Systemic inequities make it relatively likely that low-income parents will appear in at least one of these data sets and be labeled “high risk” as a result. For example, parents who need food stamps to feed their children have no choice but to surrender their private information to the government, which could later be used against them through a biased automated scoring system.
It’s time for course correction.
Luckily, the dangers of bias in the data fed to automated software, both in society at large and in education, are now being heavily researched. The hope is that edtech organizations, in partnership with education advocates and other stakeholders, can find ways to make the processing of large amounts of data more equitable, inclusive and just for every person, thereby correcting the course of technologies that augment or supplant human decision making.
Initiatives to counter the effects of blind spots in data insights have already been created. There are tech systems to automate the identification of bias in other tech systems. There are also product certifications to hold edtech companies accountable for racial equity.
Such approaches should only be the beginning, though. Education leaders should make an effort to identify the platforms that are currently processing and interpreting big data and to understand fully the variables at play when these platforms make recommendations. This way, they can help ensure that data insights can be used to help people make better decisions instead of completely replacing human decision making. KnowledgeWorks will continue to be on the lookout for what might characterize the next chapter of big data in education.
This post is part of a seven-piece series reflecting on the state of the challenges and opportunities introduced in KnowledgeWorks’ third anchor forecast, Recombinant Education: Regenerating the Learning Ecosystem, published 10 years ago. Read the rest of the series:
- Five Disruptions That We Thought Could Change Everything (an introduction). KnowledgeWorks’ third major forecast anticipated significant reshaping of learning. Now that it is 2022, we have reached the time horizon of that forecast, and we’re looking back.
- Startup? Yes. Democratization? Not so much. We reflect on innovation and entrepreneurship as it translates to a public good like education and the challenge of competing agendas.
- Cutting Out the Middleman: Networks and Education. We reflect on networked forms of organization that deliver new levels of differentiation and specialization – and the challenges of having so many choices.
- Weaving Webs of Personalization. We look at the power of value-driven customization and personalization – with opportunities for co-created value propositions and challenges in foundational hurdles and culture wars.
- Looking for a Learning Landscape, 10 Years On. Though still uncommon in the US, learning landscapes are gaining traction, especially during COVID. But the momentum is lost when met with challenges of access and quality.
- Uneven Changes in Education Over the Past Decade. To conclude the retrospectives of our third forecast, Katherine Prince surmises that the wave of disintermediation that had been restructuring many sectors ended up affecting education less deeply than we had thought it might.