Today, almost every aspect of our lives is influenced in some way by artificial intelligence. AI powers everything from which video plays next when you’re watching YouTube to whether your job application is accepted or your insurance claim is approved.
Whether we like it or not, our fate is often determined by algorithms that see us as a cloud of data points, not as humans. So, when we apply this technology to a space as fundamental to our society as education, we must make sure that our approach is responsible and equitable—treating the people affected by our tools as human beings.
AI Meets Education
One of the primary applications of AI is to massively increase an organization’s capacity for tasks that require some form of reasoning. In education, that added capacity is already showing up in numerous forms. At its most basic, grading of multiple-choice quizzes and tests is now essentially instantaneous. But machine learning can do much more with that data: it can show where students are thriving and where they need more academic support, or even dynamically personalize instructional content to help a child learn effectively.
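To make the capacity point concrete, here is a minimal sketch of what instant grading plus topic-level analysis can look like. The quiz format and topic tags are hypothetical, not drawn from any particular platform.

```python
# Minimal sketch: instant multiple-choice grading plus a per-topic
# mastery summary. Questions, answers and topic labels are made up.
from collections import defaultdict

ANSWER_KEY = {
    "q1": ("B", "fractions"),
    "q2": ("D", "fractions"),
    "q3": ("A", "decimals"),
    "q4": ("C", "decimals"),
}

def grade(submission: dict[str, str]) -> tuple[float, dict[str, float]]:
    """Return the overall score and the fraction correct per topic."""
    correct_by_topic = defaultdict(int)
    total_by_topic = defaultdict(int)
    for question, (answer, topic) in ANSWER_KEY.items():
        total_by_topic[topic] += 1
        if submission.get(question) == answer:
            correct_by_topic[topic] += 1
    per_topic = {t: correct_by_topic[t] / total_by_topic[t] for t in total_by_topic}
    overall = sum(correct_by_topic.values()) / len(ANSWER_KEY)
    return overall, per_topic

score, topics = grade({"q1": "B", "q2": "A", "q3": "A", "q4": "C"})
print(score)   # 0.75
print(topics)  # {'fractions': 0.5, 'decimals': 1.0} -> flag fractions for review
```

The per-topic breakdown is where the extra capacity shows: the same pass that grades the quiz can also surface which concepts need reteaching.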
However, today’s students interact with data in very different ways than we may be used to. Children who have only known a world of widespread AI-based systems often turn to search engines for answers before going to their own parents and teachers. This trend shouldn’t be news to anyone; evidence of it was documented more than a decade ago. And as one-to-one device programs gain traction (topping 50 percent in 2017 and climbing into the 80 and 90 percent range during the pandemic), it’s safe to assume that more students will be taking their questions to Google before their instructors.
It’s not difficult to see a child’s reasoning here. With the wealth of knowledge sources available on the internet, why ask a single teacher? Asking a search engine also sidesteps the awkward or difficult conversations that more serious questions can bring. And this simple fact highlights both the benefits and the obstacles involved in implementing AI in education.
Asking the Difficult Questions
Students struggling with mental health challenges that they aren’t equipped to confront alone frequently search for resources online to help themselves. When schools have access to that information, they can intervene, provide help and, potentially, save lives.
The next generation of grief detection systems, such as Securly Auditor and Securly’s At-Risk heuristic system, uses natural language processing and artificial intelligence tools to infer the sentiment behind a student’s messages and cross-references that information with past data on the student to interpret their mental state. This helps prevent false positives and can provide a clearer picture of which topics students are most concerned about, or whether they need immediate attention. If the system determines that a student is at risk, the case is brought to the attention of the trained human analysts on Securly’s 24 team, who reach out to the school-designated emergency contacts.
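As an illustration of the cross-referencing idea (and only an illustration; this is not Securly’s actual implementation), the sketch below scores a message with a toy sentiment function and flags it only when it is both negative in absolute terms and well below that student’s own baseline:

```python
# Illustrative sketch only: the keyword lexicon stands in for a real
# NLP sentiment model, and the baseline logic is a guess at the
# general approach, not any vendor's actual system.
from statistics import mean

NEGATIVE_TERMS = {"hopeless", "worthless", "alone", "hurt", "never"}

def sentiment(message: str) -> float:
    """Crude stand-in score in [0, 1]; lower means more negative."""
    words = message.lower().split()
    if not words:
        return 1.0
    hits = sum(1 for w in words if w in NEGATIVE_TERMS)
    return max(0.0, 1.0 - hits / len(words) * 5)

def assess(history: list[str], current: str,
           floor: float = 0.4, drop: float = 0.3) -> bool:
    """Flag only when the current message is negative in absolute terms
    AND well below this student's own baseline. The baseline comparison
    is the cross-referencing step that cuts down on false positives."""
    baseline = mean(sentiment(m) for m in history) if history else 1.0
    now = sentiment(current)
    return now < floor and (baseline - now) > drop

history = ["had a great game today", "homework was fine"]
if assess(history, "i feel hopeless and alone"):
    print("escalate to a human analyst for review")
```

Note that the output of a system like this is an escalation to a person, not an automated action, which matches the human-in-the-loop design described above.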
Natural language processing can also be applied to other aspects of student mental health and social-emotional learning. Comments that students make to one another can be analyzed before the messages are even sent, allowing an AI-powered system to detect bullying or hateful attacks and then help students understand how to manage their feelings without hurting themselves or others.
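A hypothetical pre-send hook might look like the sketch below; the keyword check is a placeholder for a trained bullying classifier, and the point is the intercept-and-coach flow rather than the model itself:

```python
# Hypothetical pre-send hook. looks_hurtful() is a placeholder for a
# real toxicity/bullying classifier; the names here are invented.
def looks_hurtful(message: str) -> bool:
    """Placeholder for a trained bullying/toxicity classifier."""
    return any(w in message.lower() for w in ("loser", "stupid", "nobody likes"))

def on_send(message: str) -> str:
    if looks_hurtful(message):
        # Instead of silently blocking, prompt reflection.
        return ("This might hurt someone. "
                "Do you want to rephrase before sending?")
    return "sent"

print(on_send("you're such a loser"))  # nudges the student to reconsider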
Of course, companies and other actors in the education field using such sensitive data must make sure that student privacy is maintained to the greatest extent possible while striving to prevent tragedy and help students in need of support.
Drawbacks of Over-reliance on AI
Additionally, we need to remain aware of the issues inherent in this approach, such as AI bias. This has been a problem with automated systems dating back to the early days of modern computers. More recently, concerns have been raised that AI in certain learning management systems could misidentify a student as low-performing, potentially leading to unfair treatment in academic settings.
AI bias can manifest itself in many different ways. When designing any AI tool, it is fundamentally important to make sure that systems are audited for bias. It’s also important to understand that bias can crop up unexpectedly, despite measures to prevent it.
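One simple audit, sketched below with made-up records, is to compare how often a model flags students as low-performing across subgroups. Real audits would look at several metrics (demographic parity, equalized odds, calibration), not just this one.

```python
# Sketch of a single fairness check over synthetic records. The
# records, group labels and threshold for concern are all invented.
from collections import defaultdict

records = [  # (group, flagged_low_performing)
    ("group_a", True), ("group_a", False), ("group_a", False),
    ("group_b", True), ("group_b", True), ("group_b", False),
]

flags = defaultdict(list)
for group, flagged in records:
    flags[group].append(flagged)

# Flag rate per group; a large gap suggests the model treats
# otherwise-similar students differently and needs investigation.
rates = {g: round(sum(f) / len(f), 2) for g, f in flags.items()}
gap = max(rates.values()) - min(rates.values())
print(rates)                     # {'group_a': 0.33, 'group_b': 0.67}
print(f"parity gap: {gap:.2f}")  # large gaps warrant investigation
```

A gap alone does not prove bias, but it tells auditors where to look, which is exactly the kind of check that should run before and after deployment.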
As AI systems make their way into new spaces, finding consensus on what is ethically acceptable is becoming a more difficult task. It is clear that allowing a child’s social-emotional development to be assisted or monitored by an AI tool is not a decision to be taken lightly. Yet, it’s imperative to have solutions in place that can help students in need of support.
AI systems used in these contexts should serve primarily in an advisory capacity. While AI can identify when a student needs mental health support, district and school staff should be the ones who provide it. AI can be an amazing tool to help students grow and thrive, but it needs a human touch in order to be truly effective.