Philosophy of AI & Emerging Technologies Working Group

This working group consists of weekly discussions aimed at developing our understanding of the foundations of contemporary AI and emerging technologies, both for its own sake and in order to interrogate the legitimating ideas most closely associated with contemporary technological culture. We will also discuss participants' chapter and paper drafts, as we plan for the group to be a vehicle for collaborative projects. Topics include, but are not limited to:

  • Computational theory of mind
  • Revolution in military affairs
  • Algorithmic thinking and Bayesianism
  • Gebru and Torres’ “TESCREAL”: Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism
  • “Bay Area Rationalism” and the widespread use of science-fiction and trolley-problem-style Gedankenexperimente
  • Accelerationism/Libertarianism/Neo-Feudalism
  • AI architectures and facets of human cognition

Meetings will regularly occur each Thursday at 12pm in 211 Coates Hall.

Each week, we will start our meeting with a brief introduction about the day's reading followed by an open discussion facilitated by rotating members. Occasionally, we will invite speakers to address the group.

The group is open to faculty, staff, administrators, graduate students, and (by invitation) undergraduates.

To receive weekly readings and announcements, email Carrie Powell at cpowell3@lsu.edu.

Thursday, August 28, 2025, 12pm, 211 Coates Hall

Reading: Hannah Fry, Hello World: Being Human in the Age of Algorithms (2018)

Our tentative plan is to follow Fry’s book with Melanie Mitchell’s Artificial Intelligence: A Guide for Thinking Humans (2019) and then to have participants vote on which papers and books to cover. We will privilege reading and discussing drafts of work by members of the group.

Dr. Michael Ardoline

Assistant Professor of Philosophy

michaelardoline@lsu.edu 

 

Dr. Jon Cogburn

Professor of Philosophy and Chair of the Department of Philosophy and Religious Studies

jcogbu1@lsu.edu

 

Dr. Lauren Horn Griffin

Assistant Professor of Religious Studies

lhgriffin@lsu.edu 

  • Hubert Dreyfus - What Computers Still Can’t Do: A Critique of Artificial Reason (1992)
  • Matthew Stewart - The Management Myth: Debunking Modern Business Philosophy (2010)
  • Scott Aaronson - “Why Philosophers Should Care About Computational Complexity” (2011)
  • John Ralston Saul - Voltaire’s Bastards: The Dictatorship of Reason in the West (2013)
  • Andy Clark - Mindware: An Introduction to Cognitive Science (2013)
  • Amodei et al. - “Concrete Problems in AI Safety” (2016)
  • Vaswani et al. - “Attention Is All You Need” (2017)
  • OpenAI - “AI and Compute” (2018)
  • Hannah Fry - Hello World: Being Human in the Age of Algorithms (2018)
  • Melanie Mitchell - Artificial Intelligence: A Guide for Thinking Humans (2019)
  • Janelle Shane - You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It’s Making the World a Weirder Place (2019)
  • Rich Sutton - “The Bitter Lesson” (2019)
  • Brown et al. - “Language Models are Few-Shot Learners” (the GPT-3 paper) (2020)
  • Kate Crawford - Atlas of AI (2021)
  • Emily M. Bender, Timnit Gebru, et al. - “On the Dangers of Stochastic Parrots” (2021)
  • Brian Christian - The Alignment Problem: Machine Learning and Human Values (2021)
  • Dan McQuillan - Resisting AI (2022)
  • Luciano Floridi - The Ethics of Artificial Intelligence (2023)
  • David Chalmers - “Could a Large Language Model Be Conscious?” (2023)
  • Andy Clark - The Experience Machine (2023)
  • Timnit Gebru & Émile P. Torres - “The TESCREAL Bundle: Eugenics and the Promise of Utopia through Artificial General Intelligence” (2024)
  • Michael Townsen Hicks, James Humphries, & Joe Slater - “ChatGPT is Bullshit” (2024)
  • Jakob Stenseke - “On the Computational Complexity of Ethics: Moral Tractability for Minds and Machines” (2024)
  • Quinn Slobodian - Hayek’s Bastards (2025)
  • Stanford HAI - AI Index Report (updated yearly)
  • Air Street Capital - State of AI Report (yearly)

Philosophy

  • Can AI be conscious or sentient? Can machines experience subjective consciousness or emotions?
  • To what extent is thinking computation? What does the development of machines that “think” reveal about human and animal thinking? Can non-algorithmic behavior emerge in systems where the behavior of the parts can be described algorithmically? Is this what is going on with humans? With AI systems of sufficient complexity?
  • What does it mean to be human in the age of AI? When machines match or surpass human abilities, what sets humans apart? Are AI systems likely to surpass the human ability (such as it is) to produce non-slop? Or are there philosophical and computational reasons to think that this will not happen?
  • At what point should a machine be granted moral or legal status? Should advanced AI be granted rights, responsibilities, or personhood? If we are not there yet, why not?
  • Who is responsible for AI actions? When AI causes harm, who is accountable—the developer, user, company, or the AI itself?
  • Can AI make ethical decisions? Can we embed moral reasoning into AI—and whose morality should it follow?
  • How does AI affect human autonomy and agency? What is human autonomy after all? Are humans losing control over their decisions as AI systems increasingly shape choices?
  • What are the long-term risks of superintelligence? Could a future AI become vastly more intelligent than humans and act in ways that are harmful? Is this plausible? If not, what is the political/economic purpose of worrying about it? What do these questions teach us about human and animal intelligence?

Psychology

  • Do interactions with AI tools influence attention, decision-making, or cognitive biases?
  • What is the psychological impact of anthropomorphic design in AI (e.g., voice assistants, chatbots)?
  • How does AI-assisted cognition (e.g., autocomplete, summarization) reshape human problem-solving or creativity?
  • What social or emotional roles are people willing to assign to AI companions or assistants?
  • Does prolonged exposure to AI-generated content (e.g., recommendations, deepfakes) affect belief formation, polarization, or group identity?
  • How does AI shape interpersonal dynamics, such as conflict resolution, persuasion, or empathy in mediated communication?
  • How do children understand and relate to AI entities compared to adults?
  • What developmental impacts might AI-based toys, tutors, or caregivers have on attention, language, or empathy?
  • How effective are AI tools (e.g., chatbots, digital therapists) in mental health screening or intervention?
  • What are the psychological risks of replacing human care with AI in therapy, counseling, or crisis situations?
  • Can AI help detect early signs of psychological disorders through behavioral or linguistic analysis?
  • How do people react to algorithmic decision-making in high-stakes domains (e.g., hiring, healthcare, criminal justice)?

Computability Theory

  • P vs NP Problem - Whether every problem whose solution can be verified quickly (in polynomial time) can also be solved quickly. If P ≠ NP, as most complexity theorists believe, then heuristics and approximations are not just pragmatic, but also necessary. (The gap between fast verification and slow search is illustrated in the first sketch following this list.)
  • Exponential Growth of Search Trees - The combinatorial explosion in the number of possible states or actions in AI planning, game playing, and decision-making tasks. In pre-LLM AI, this motivated techniques like Monte Carlo Tree Search, pruning, approximate inference, and deep learning as function approximation. Often a less stupid algorithm does not suffer the same explosion (proof systems versus truth tables in propositional logic, for example). Is something similar going on with the explosion of data-center needs for running and training LLMs?
  • The Halting Problem - There is no general algorithm that can decide whether an arbitrary program halts on a given input. AI alignment and interpretability research must grapple with the fact that some behaviors (including formal verification and safety checking) or failure modes may be fundamentally unpredictable (see the second sketch following this list).
  • No Algorithm for Determining Consistency of First-Order Theories - By Gödel’s incompleteness theorems and Church’s work, there’s no algorithm that can determine whether arbitrary first-order (or stronger) logical systems are consistent. This poses a fundamental problem with respect to AI hallucinations. AI systems that attempt formal reasoning (e.g., symbolic AI, theorem provers) can generate contradictions or meaningless statements that they cannot detect as such. This impacts AI reasoning in formal domains, such as law, mathematics, and even AI safety where self-reference may arise.
  • Undecidability and Semi-Decidability in Logic-Based AI - Problems like logical entailment, satisfiability in certain theories, or general program verification are undecidable or only semi-decidable. Trade-offs between expressivity and tractability are therefore central in knowledge-based AI. In AI alignment, undecidability limits our ability to fully specify or prove the safety of general-purpose agents.
  • Kolmogorov Complexity and the Limits of Compression - The shortest program that can describe a string is incomputable; there is no general algorithm to determine the minimal description length. This impacts theories of intelligence based on compression or minimal description length (e.g., Solomonoff induction, universal AI), suggests limits to prediction, explanation, and generalization (key goals in AI and machine learning), and limits our ability to evaluate when an AI’s learned model is “simple” or “optimal” in any universal sense.
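
As a concrete illustration of the P vs NP and search-tree items above, here is a minimal Python sketch contrasting verification with search in Boolean satisfiability (SAT); the toy formula and function names are our own illustrative choices, not drawn from the readings:

    from itertools import product

    # A toy CNF formula: a list of clauses, each clause a list of literals.
    # A positive integer i stands for variable i; a negative -i for "not i".
    formula = [[1, 2], [-1, 3], [-2, -3], [1, 3]]

    def verify(assignment, formula):
        # Checking a proposed solution is cheap: one pass over the formula.
        return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
                   for clause in formula)

    def brute_force_solve(formula, n_vars):
        # Finding a solution naively means trying up to 2**n_vars assignments;
        # this is the combinatorial explosion that pruning and heuristics fight.
        for bits in product([False, True], repeat=n_vars):
            assignment = {i + 1: bits[i] for i in range(n_vars)}
            if verify(assignment, formula):
                return assignment
        return None

    print(brute_force_solve(formula, 3))  # {1: True, 2: False, 3: True}

Each additional variable doubles the search space while barely changing the cost of verification; if P ≠ NP, no rewriting of brute_force_solve can close that gap in general.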

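The halting problem can be made similarly concrete. The sketch below assumes a hypothetical oracle halts(program, data); Turing's diagonal argument shows why no such function can exist:

    def halts(program, data):
        # Hypothetical oracle: True iff program(data) eventually stops.
        # Turing proved that no general implementation of this can exist.
        raise NotImplementedError

    def diagonal(program):
        # Do the opposite of whatever the oracle predicts about
        # running a program on its own source.
        if halts(program, program):
            while True:      # predicted to halt, so loop forever
                pass
        else:
            return "halted"  # predicted to loop, so halt immediately

Asking whether diagonal(diagonal) halts has no consistent answer: if the oracle says it halts, it loops, and vice versa. Any fully general verification or safety-checking tool inherits this limit.
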
Public Policy

  • Hype/Bubbles - To what extent do economic and political factors driving investment in emerging technologies produce widespread false beliefs and unreasonable hopes and fears about those technologies?
  • Algorithmic Culture - What are the social effects of offloading tasks to algorithms, and of conceiving of thinking and expertise in terms of procedures that can be implemented in an algorithm?
  • Algorithmic Bias and Fairness - How do AI systems perpetuate or exacerbate existing social inequalities? What frameworks (technical and legal) exist to measure or mitigate bias? Who is accountable for harm when it results from automated decisions?
  • AI Governance and Regulation - What are the appropriate roles of national governments vs international bodies? How should emerging regulations (e.g. EU AI Act, U.S. Executive Orders) be evaluated? What policy tools (mandates, standards, audits) are most effective?
  • Surveillance, Privacy, and Civil Liberties - How should AI-enabled surveillance (e.g. facial recognition, predictive policing) be regulated? What rights do individuals have against automated monitoring? What constitutes meaningful informed consent in data-driven AI?
  • Labor and the Future of Work - What are the likely impacts of AI on job displacement, deskilling, and wage inequality? What policies can support equitable adaptation (e.g., UBI, reskilling, labor protections)? Should there be limits on automation in certain sectors?
  • AI in Critical Sectors (Healthcare, Finance, Justice) - How can we ensure safe, equitable deployment of AI in sensitive domains? What standards of accuracy, explainability, or auditability should be enforced? How can public trust be maintained in automated decision-making?
  • Misinformation and Democratic Integrity - How does AI (e.g., generative models, deepfakes) threaten truth, elections, and public discourse? What are the limits of content moderation and speech regulation? Should synthetic media be labeled or restricted?
  • Accountability, Transparency, and Explainability - How do we ensure AI systems are understandable and contestable to those affected? What legal and ethical frameworks support a “right to explanation”? What are the challenges of governing black-box systems?
  • Global Power and AI Geopolitics - How does AI influence global power dynamics (e.g. U.S.-China rivalry)? What are the risks of militarized AI or arms races? Can international norms or treaties for AI safety be achieved?