April 11, 2023
Artificial Intelligence: What You Need to Know
Mubina Schroeder is an Associate Professor of Cognitive Science at Molloy University. She co-directs the Cognition and Learning Laboratory, where she researches technology-based science education and cognitive flexibility, and she is the founder of theimagineer.life, a space for youth creators to showcase their AI co-created work. This Q&A was drawn from multiple interviews, as well as from some of her own writings on the subject.
What is Artificial Intelligence (AI)?
Although it seems futuristic, AI has been around for a long time. We all use it regularly without even thinking about it: unlocking a phone with your face and getting Netflix's suggestions for the next show to watch are both examples of AI.
If we have been using AI for some time, why is everyone talking about it now?
The release of ChatGPT and other large language models like Bard has driven enormous interest because these models let users interact with AI directly.
Trained on mountains of data, these large language models are generative and easy to use: you can simply chat with them. Their creative potential is enormous, and we are only seeing the beginning of what they are capable of. One realm where AI has been used heavily in the last year or so is creating visual art and media; for example, a group recently made "Nothing, Forever," a surreal, never-ending show based on Seinfeld.
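For readers who want to see what "chatting" with one of these models looks like in code, here is a minimal sketch using OpenAI's Python client. It is illustrative only: it assumes the openai package is installed and an API key is set in the environment, and the model name is a placeholder.

    # Minimal sketch: sending one chat message to a large language model.
    # Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # A "chat" is just a list of messages; the model replies with the next one.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[{"role": "user",
                   "content": "Suggest a show I might like if I enjoyed Seinfeld."}],
    )
    print(response.choices[0].message.content)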
What are the economic implications of AI?
When we used to dream about futuristic societies, we imagined robots doing the manual labor while humans were free to do the intellectual and creative heavy lifting. The truth will likely be the opposite, with implications for society that go beyond job loss. Many stakeholders in AI have started to talk about implementing a Universal Basic Income (UBI) model to offset the enormous economic effects this wave of automation might have, but many questions remain: How would UBI be funded? What would it look like? And what would life be like for the many of us who have centered our identities around our careers and educational training?
I am worried that lawmakers are not paying enough attention to this, and I fear that by the time we do focus on the issue, it might be too late. Right now, AI is a co-pilot. What happens when it becomes the pilot and takes us to an unknown destination?
What do we call the level of AI that we have now? Can and will it get better?
The AI in use today is Artificial Narrow Intelligence (ANI). Some researchers think we are approaching Artificial General Intelligence (AGI) soon. It is a hotly debated topic, and one sticking point is defining what we mean by general intelligence in the first place. What is certain is that AI is improving rapidly, at a pace that astounds even those in the tech industry. Sam Altman, the CEO of OpenAI, describes the rate of progress as exponential and is confident that AI will reshape society as we know it.
What is the next step for AI?
After ANI comes AGI, intelligence that is on par with human intelligence, including the ability to learn.
We are in a kind of "space race," similar to when we raced the Soviet Union to be the first country to the Moon. Today, though, the rivals extend beyond Russia to China and other countries, and even to individuals and smaller organizations. Many experts think AGI will happen before 2050, but some feel it will happen within the next few years.
What happens after AGI?
After AGI comes Artificial Superintelligence (ASI), intelligence that exceeds that of all humanity combined. If that happens, all bets are off; there is no way to predict what comes next. ASI frightens even the scientists who work on AI, and many of them concur that it may mean the extinction of the human race.
Will ASI happen?
It is certainly possible. As of right now, there are practically no regulations or guardrails on artificial intelligence. New York Times columnist Ezra Klein has likened the promise and peril of AI to summoning angels or demons. We simply do not know what effects will follow. Then there is the issue of alignment: can we make sure AI stays on course with the goals we set for it and does not, say, decide that humans are a threat and do something nefarious? For example, we could give AI the goal of solving a crisis like climate change, but what if it decides that the biggest threat to the planet's climate is the existence of human society? Thus far, researchers have not figured out how to solve the alignment problem.
What does this mean for the field of education?
I think it is important for all stakeholders in education to focus on helping learners understand the technology and use it as a tool, in an ethical way. I serve as an advisor to Virtue AI, which focuses on integrating AI into work and learning in a way that is respectful to humans. Avoiding the technology will not help us progress; we must all get to a place where we understand how to use it to better our lives and communities.