PHL 6001 - AI Ethics and Governance

4 lecture hours 0 lab hours 4 credits
Course Description
This course introduces students to some of the central ethical issues in existing and emerging digital technologies, with an emphasis on the ethics of artificial intelligence (AI), as well as how these issues arise in the context of organizational governance and compliance. Students will explore topics in digital and AI ethics with respect to relevant professional areas, such as data science, computer science, software engineering, and user experience and design, as well as topics concerning the broader social implications of digital technologies and the ethical challenges they raise. The goal of the course is to engage critically and interactively with these topics and cases, studying both their theoretical, philosophical context and their practical implications, so that students can pursue continued, independent reflection on key issues in digital and AI ethics and apply the guiding ethical principles that emerge to their own professional and personal lives. In addition to addressing these ethical challenges, students will consider how organizations and institutions, including corporations, institutional review boards (IRBs), and legislative bodies, can or should respond to such challenges through regulation, oversight, or other mechanisms. Students will be encouraged to draw on their own professional experience in assessing key questions and cases in digital and AI ethics.
Prereq: None
Note: None
Course Learning Outcomes
Upon successful completion of this course, the student will be able to:
  • Demonstrate advanced knowledge of existing ethical issues in digital technologies and AI
  • Anticipate ethical issues arising from emerging digital technologies and developments in AI
  • Identify the philosophical bases for ethical concerns surrounding digital technologies and AI
  • Exhibit familiarity with and understanding of established ethical frameworks, concepts, and principles within ethical theory
  • Synthesize those theoretical frameworks, concepts, and principles and connect them to applied issues, using theoretical resources to help understand and resolve applied problems while also scrutinizing the theoretical principles by evaluating their real-world implications
  • Evaluate competing considerations about engineers' and designers' moral responsibility for the products they create and services they provide
  • Engage in independent ethical reasoning on novel problems using theoretical and practical ethical resources learned
  • Foster ethical behavior and integrity in their professional and personal lives using the theoretical and practical resources learned
  • Develop concrete proposals for addressing ethical challenges in digital technologies and AI at the level of organizational governance

Prerequisites by Topic
  • None

Course Topics
  • Ethical frameworks, concepts, and principles (consequentialism, deontology, rights, etc.)
  • Ethics of information, part I: privacy and transparency
  • Ethics of information, part II: intellectual property, individual liberties, and human rights
  • Algorithmic bias, "weapons of math destruction," and social justice
  • The "black box" explainability problem in deep learning AI
  • Ethics of human-AI interaction and impact on social relationships (e.g., anthropomorphic framing in AI and robotics)
  • Digital technologies and human well-being: digital media and mental health
  • Digital technologies and democracy: digital media, filtering, misinformation, and political polarization
  • AI and human work: automation, worker displacement, and the meaning of work
  • Ethics of design and user experience: "dark patterns" and user agency
  • Ethics of data capture, digital advertising, and "surveillance capitalism"
  • Ethics of biometric identification
  • Data privacy and regulatory mechanisms: GDPR, HIPAA, IRB regulations, etc.
  • The internet, social media, and the question of legal status (e.g., public utility status, technology or publishing company status, etc.)
  • Artificial moral agency, part I: theory
  • Artificial moral agency, part II: application (e.g., ethics settings in driverless cars, constraints on autonomous weapons systems, etc.)

Coordinator
Dr. Andrew McAninch
