Artificial intelligence is changing the world. Or rather, the people building the algorithms and technologies behind it are. But like the tech industry at large, the people building AI often don't look like many of their users or come from similar backgrounds.
That can have damaging effects on who has access to these tools, who benefits from the field of AI and who doesn't. This was a topic of conversation this April at the ASU GSV conference, an event hosted by Arizona State University and GSV, a venture capital firm, that brings together education investors and entrepreneurs from around the world.
One of the challenges with bias in AI comes down to who has access to careers in the field in the first place, and that's the area Tess Posner, CEO of the nonprofit AI4All, is trying to address. During our live interview series at the conference, she told us how her organization introduces diverse youth to AI fields and careers.
Listen to the discussion on this week’s EdSurge On Air podcast. You can follow the podcast on Apple Podcasts, Spotify, Stitcher, Google Play Music or wherever you listen. Or read a portion of the interview below, lightly edited for clarity.
EdSurge: For folks tuning in who don't know what AI4All is about, can you start off by telling us what you do and how you got started?
Posner: Artificial intelligence is one of the most critical technologies of our time. It's going to reshape almost every industry, and people are already using the technology every day; we have it in our pockets, in our smartphones. In fact, 85 percent of Americans use AI technology every day. But right now there's a big diversity crisis in the field. The technology itself is being built, shaped and decided on by a homogenous group of individuals, and getting into the field to be part of that decision making and development is accessible only to a few.
This is exactly what AI4All was started to address. We're working to increase diversity and inclusion in the field, and we believe that will lead to the best outcomes for AI as well as mitigate potential risks.
How does your organization do that exactly?
We run education and mentorship programs that bring underrepresented young people into the field. In our flagship program, which we started in 2017, we run AI summer camps for underrepresented high schoolers. We host them at university AI labs, where the technology is being built and where there's a lot of great talent to teach the classes. We recruit underrepresented young people from different communities, specifically girls, low-income students and youth of color.
We introduce them to AI by teaching technical skills and connecting them with role models and mentors in the field. Most importantly, they work on AI projects aimed at solving an important problem in the world using AI technology. Then we have an alumni program that helps students continue learning after the summer camp, stay connected to role models and mentors, and access internships and professional development opportunities.
The challenge you’re trying to solve is an enormous one, but what are some of the big causes that you see behind it?
I think the problem is multifaceted, which is why it's not an easy one to solve. You have access and pipeline issues: for example, only 35 percent of high schools in the U.S. teach computer science, and most of those don't teach AI, so there's a real lack of access for most young people. The Kapor Center, which just put out a report on this, calls it the "leaky tech pipeline," with leaks at every stage.
At each stage there are barriers preventing people from getting in: the access problem, a lack of role models and mentors in the field, and a problematic culture in tech generally that projects an image that makes the field unappealing. There are even issues of direct discrimination and harassment, where cultures are not inclusive and don't support people of all backgrounds to succeed.
We actually have to solve the issue at all stages, and that requires a holistic approach. AI4All specifically focuses on the access piece, because we believe that starting early is really important, but we're also ensuring that the students have ongoing support not only into college but into their future careers.
This is an issue across the tech industry, across CS education. Why focus on AI specifically?
AI is pretty pervasive, and it's getting embedded pretty invisibly into almost every industry. What's happening is that we're delegating decision making to AI systems. In other words, we're outsourcing things and saying, ‘Well, it'll be easier and more objective if AI can help make that decision.’
For example, it's helping us make hiring decisions. It's helping us make decisions about who gets parole and who gets access to financial services. What we're seeing is that existing societal biases like sexism and racism are creeping into AI systems. This is often unintentional, but because we're delegating these decisions to AI systems, a biased system could further marginalize certain populations, cutting them off from key services or making false decisions that have life-changing consequences.
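To make that concrete, here's a minimal sketch of one common audit for the kind of bias Posner describes: comparing a model's approval rates across demographic groups. The decisions and group labels below are hypothetical, and the "four-fifths" ratio it checks is just one of many fairness measures, not a tool AI4All itself publishes.

```python
# Minimal sketch: compare a model's approval rates across two groups.
# All data here is hypothetical; 1 = approved, 0 = denied.

def approval_rate(decisions, groups, target_group):
    """Share of people in `target_group` who were approved."""
    picked = [d for d, g in zip(decisions, groups) if g == target_group]
    return sum(picked) / len(picked)

decisions = [1, 1, 1, 1, 0, 1, 0, 1, 0, 0]
groups    = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

rate_a = approval_rate(decisions, groups, "a")  # 4/5 = 80%
rate_b = approval_rate(decisions, groups, "b")  # 2/5 = 40%

# The "four-fifths rule" used in U.S. employment contexts flags a
# selection-rate ratio below 0.8 as potential disparate impact.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"group a approved {rate_a:.0%}, group b approved {rate_b:.0%}, ratio {ratio:.2f}")
```

Run on this toy data, the check prints a ratio of 0.50, well below the 0.8 threshold, which is exactly the kind of signal that should trigger a closer look at the training data and the model.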
Isn’t it true that automation is going to disrupt some jobs, and that some jobs will become a thing of the past? How do you talk to young students about jobs that might not exist in the future?
Yes, definitely. We like to have our students engage with some of those questions and really develop their problem-solving abilities in those areas rather than sugarcoating it. But we're also optimistic that if we create a generation of problem solvers and change makers in this space who are empowered to make good use of the technology, we can mitigate some of those risks.
My background before AI4All is in workforce development, where I did a lot of work on new models for training and retraining people. There's no question that we're in a period of rapid change in the economy, and we need better, faster ways of responding, so that whatever jobs change or new jobs are created, people can move between them more quickly, especially those who are most vulnerable.
I really believe we need a lot of innovation in that area as well. There are a lot of great solutions out there, whether it's bootcamps, retraining programs or apprenticeships, and I'm hopeful that if we can invest more in those, we'll be able to keep up with a fast-changing economy that, at the end of the day, we can't really predict. We just have to be ready for it.
We're sitting here at ASU GSV surrounded by dozens of edtech companies, many of which are talking about how they're using AI to try to improve teaching and learning. What are some of the risks in how AI is being applied in education?
I've had a bunch of conversations here and seen how it's become a really popular buzzword, which is exciting. There are a lot of great applications of AI in education: personalized learning, for example, or ways to make a teacher's time more efficient. But first and foremost, companies that are using AI in education, or in any space frankly, need to really think about how they're going to mitigate potential biases in the dataset. Every dataset is going to have biases; the real questions are: What are the implications? How are you going to test for that throughout the life cycle of product development?
I would definitely recommend looking into that very closely and creating a set of standards in the company for how you think about those datasets and all the implications that come with them. There's also a lot of work being done to create industry standards around responsible and ethical use of AI, and a lot of free resources out there that I would encourage people going down this road to look at. If you're using student data to train AI systems, ask: How do you get those communities involved? How do you get teachers involved? How do you get parents involved in thinking about what that means and what the implications are?
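One starting point for the kind of dataset scrutiny Posner recommends is simply measuring how each group is represented in the training data before a model ever learns from it. The sketch below uses hypothetical student records and labels; it illustrates the idea, not any specific company's process.

```python
# Minimal sketch: check how each demographic group is represented
# in a training set. Records here are hypothetical (label, group) pairs.

from collections import Counter

records = [
    ("pass", "group_a"), ("pass", "group_a"), ("fail", "group_a"),
    ("pass", "group_a"), ("pass", "group_b"), ("fail", "group_b"),
]

counts = Counter(group for _, group in records)
total = sum(counts.values())
for group, n in counts.most_common():
    print(f"{group}: {n} records ({n / total:.0%} of training data)")

# A skewed split like this (67% vs. 33%) means the model sees far more
# examples from one group, so its errors will tend to skew the same way.
```

A check this simple won't catch every problem, but making it a standard step in product development, and repeating it as the data changes, is one concrete way to act on the questions Posner raises.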