There is no doubt that "artificial intelligence" or "AI" is one of the hottest areas of computer science research. What is not clear, however, is what exactly this term means. To some, it is the use of biology- or neuroscience-inspired algorithms to solve complex problems, such as using neural networks for facial recognition, genetic algorithms to schedule gates and crews at an airport, or machine learning algorithms to analyze market data and predict changes in stock prices. While these programs might be effective in practice, their specific workings are not fully understood and their behavior under rare conditions may be unpredictable. To others, AI is the grander dream of creating a conscious program or machine, capable of self-awareness and independent behavior. An artificially intelligent machine, due to its connectivity and fast processing speeds, could be of great benefit to humanity (e.g., in tackling climate or energy challenges) but could also be a threat (such as HAL in 2001: A Space Odyssey, Skynet in The Terminator, and Ava in Ex Machina).
Regardless of which definition is used, AI poses significant and challenging ethical issues for computer scientists. What responsibilities does a researcher or developer bear when producing software whose complex behavior is not fully understood? For example, suppose a self-driving car kills a pedestrian, or a program for approving credit applications unintentionally discriminates based on ethnicity or gender. Who is responsible? The program designer? Individual programmers? Executives of the software company? What responsibilities (if any) does a developer have to monitor and revise software in light of unintended consequences? If a self-aware program is ever developed, what ethical responsibility would its creator have? Given the potential dangers of machine intelligence, can research that might lead to self-aware AI be ethically justified at all?
You are to write an 8-10 page (double-spaced) midterm paper focusing on the ethical issues related to both interpretations of artificial intelligence. In your analysis, you should support your arguments with specific real-world cases and should reference the ethical dimensions described in the Laudon paper: rule-based vs. consequence-based and individual vs. collective. Your paper should reference at least five reputable sources.
The grading rubric for this paper is as follows:
- Readability (grammar, spelling, clarity, sufficient length): 0-20 points
- Takes a position and provides justification
- Includes sufficient background and factual information
- Includes information and analysis appropriate for a computer scientist
- Argument is cohesive and persuasive
- Specifically cites the ethical dimensions
- Cites credible references appropriately