Exploring the Intersection of Philosophy, AI, and Ethics at LSU
July 16, 2024
LSU Ethics Director on Why Philosophy Matters to the Future of AI
Imagine you apply for a mortgage. You’ve done your research, you meet the criteria, and you have a good credit score, so you think you’re a strong candidate. But you’re rejected, not by a person, but by a computer algorithm.
It’s a realistic scenario that Debbie Goldgaber says we should all take into account as she outlines some of the ethical and normative issues with the predictive artificial intelligence that increasingly inserts itself into our lives. “Generative AI poses additional challenges,” she says.
Goldgaber is an associate professor of philosophy and director of the LSU Ethics Institute who studies the intersection of philosophy and artificial intelligence. Her work with the Ethics Institute focuses on four core issues in AI ethics: fairness, accountability, transparency, and security/privacy.
She develops the mortgage example further, highlighting the areas of ethical concern.
“You ask a bank employee why you were rejected. They look at you sympathetically but shrug their shoulders and say, ‘We have to go by a score generated by an algorithm. We don't know why it rejected you.’
“You might feel more upset about this situation because it's hard to know if it's fair. The employee can't be accountable to you with reasons why you were rejected because there’s no transparency about how the algorithm made the decision. So, they can't help you feel better about the decision or improve your chances in the future.
“Finally, you may worry about whether the data that informed the decision was secure and reliable, and whether that data will be kept secure in the future so that you won’t be harmed by data breaches.”
“I think one fundamental question for me is the extent to which technology can and does transform human life, human capacities and human values.”
— Debbie Goldgaber, associate professor of philosophy and director of the LSU Ethics Institute
Goldgaber emphasizes the need for transparency and accountability in AI systems to prevent scenarios where humans are unfairly held responsible for automated decisions and to ensure these technologies do not undermine human agency or moral responsibility.
Philosophy and AI: A Historical and Ethical Perspective
Philosophers were exploring questions about thought and the mind long before modern AI existed. German philosopher Immanuel Kant (1724-1804) thought about morality in terms of universal, general rules that could be, as we would say today, programmed. Thinking about morality in terms of rules helps us consider how to create AI that is morally constrained. Goldgaber stresses, though, that ethics is about much more than rule-following; it’s about caring for and securing the intangible things that humans value most: freedom, creativity, and connection.
“I think one fundamental question for me is the extent to which technology can and does transform human life, human capacities and human values,” Goldgaber says. “I’m particularly interested, philosophically speaking, in how the ways that we try to be accountable, responsive and responsible to each other (the stuff of ethics) are affected by automation and technological systems. One thing I’m concerned about is that technologies we are adopting today make our futures more determined by what we have done in our past, so that we get ‘locked in’ in ways that limit our future freedom. From our credit scores to our social media to our recommendation algorithms, we are increasingly determined by our digital pasts.”
Pressing Ethical Concerns and the Role of Humanities
Goldgaber stresses the importance of centering human values in AI development.
“We need to think about human capabilities (for sociality, learning, creativity) and how those are fostered, and then to think about how technology can interact with, support and amplify these capabilities. I think today some of our technology risks dampening our capacities to learn, explore and communicate rather than amplifying them. Young people feel this today and express it in their concerns about social media and addiction to smartphones. Teachers can see it in students’ decreased ability to sustain attention and follow arguments that extend beyond a few paragraphs.”
The humanities play an important role by providing insights into human culture and values. Goldgaber emphasizes that humanists, who study and value the legacy of human cultural production, are essential partners in creating technologies that truly benefit humanity.
The LSU Ethics Institute: Bridging STEM and Ethics
As the director of the LSU Ethics Institute, Goldgaber is focused on interdisciplinary collaboration, particularly the partnership between philosophy and STEM fields. “We saw a need to integrate ethical considerations into scientific research,” Goldgaber explains. “Researchers are increasingly required to address the ethical, legal, and social implications (ELSI) of their work, making ethical dialogue crucial for grant competitiveness and responsible innovation.”
AI Technology: Current and Future
Goldgaber’s perspective on AI distinguishes between the hype surrounding artificial general intelligence (AGI), an AI with human-like or greater-than-human intelligence, and the practical advancements we see today. While she acknowledges the stunning capabilities of large language models (LLMs) like ChatGPT, she is skeptical about their direct link to AGI.
"I think that LLM models have done a lot to fuel people's imagination about what AI can do because it seems that we are interacting with a human-like agent who uses language like we do,” she says. “I am skeptical there's anything like a direct line between LLMs and AGIs. There’s a saying among AI researchers “the hard stuff”—Chess, GO, IQ tests—"is easy, and the easy stuff”—categorizing an object you’ve never seen before, dodging an obstacle in your path—“is hard.” Surprisingly, reasoning tasks are relatively low-energy computation tasks, but perception and mobility tasks are not).”
She says she is excited about advancements in neuronal interfaces: “Devices under development that can help restore people's movement and language abilities by recognizing patterns in brain activity and linking them back to motor activity.”
Recommended Readings for Further Insight
For those interested in delving deeper into the ethical challenges of AI, Goldgaber recommends the nonfiction book “The Alignment Problem” by Brian Christian and, for fiction lovers, “The Lifecycle of Software Objects” by Ted Chiang.