An AIED 2018 workshop
The field of AIED raises far-reaching ethical questions with important implications for students and educators. However, most AIED research, development and deployment has taken place in what is essentially a moral vacuum (for example, what happens if a child is subjected to a biased set of algorithms that incorrectly and negatively affects their school progress?).
Around the world, virtually no research has been undertaken, no guidelines have been provided, no policies have been developed, and no regulations have been enacted to address the specific ethical issues raised by the use of Artificial Intelligence in Education.
This workshop, ETHICS in AIED: Who Cares?, is a first step towards addressing this central problem for the field. It will be an opportunity for researchers who are exploring ethical issues critical for AIED to share their research, to identify the key ethical issues, and to map out how to address the multiple challenges, towards establishing a basis for meaningful ethical reflection necessary for innovation in the field of AIED.
Please keep reading to find out more about the ETHICS in AIED: Who Cares? workshop: when and where, workshop format, who the workshop is for, workshop outcomes, how to get involved and the abstract deadline.
Professor Beverly Park Woolf
College of Information and Computer Sciences, University of Massachusetts
Institute of Educational Technology, The Open University
When and where?
The ETHICS in AIED: Who Cares? workshop will be held on [date tbc] at [time tbc] in [room location tbc]. For updates, please follow this workshop (see the Follow button at the bottom of the page).
The half-day ETHICS in AIED: Who Cares? workshop will comprise:
Part 1: ETHICS in AIED: What’s the problem?
(A round-table discussion, introduced and led by Professor Beverly Woolf).
Part 2: ETHICS in AIED: Mapping the Landscape
(Up to six AIED conference participants will each give a five-minute ‘lightning’ presentation on ethics in AIED research, each of which will be followed by a five-minute Q&A/discussion).
Part 3: ETHICS in AIED: Addressing the Challenges
(A round-table discussion, clarifying AIED ethical questions and areas of important research, and identifying next steps: an initial road map or targets).
Who is the workshop for?
Given that all AIED work raises ethical questions, the ETHICS in AIED: Who Cares? workshop will be of relevance to all AIED 2018 conference participants (i.e., to everyone involved in or interested in the research, development or deployment of Artificial Intelligence in Education).
As the first AIED workshop devoted to this key topic, ETHICS in AIED: Who Cares? also aims to serve as a community-building event. Hopefully, participants will leave with a clearer understanding of the ethical issues central to AIED, and of how they might contribute towards addressing the challenges.
The workshop will also help us begin to develop a shared understanding of the multiple challenges and points of contention around the ethics of AIED, which we can draw on when developing and researching AIED technologies. The ambition is that this will be the first of a series of meetings through which the community builds a firm ethical foundation for our work.
The workshop also aims to lead to a co-authored paper and possibly a book on Ethics in AIED. Accordingly, the workshop will be recorded to ensure that contributions are not lost (any participants who do not want to be recorded will not be recorded!).
How do I get involved?
To give a five-minute ‘lightning’ presentation on your ethics in AIED research (or related research) at the ETHICS in AIED: Who Cares? workshop, please upload an abstract (around 200 words) to: [link to easychair here]. Please remember that you will only have five minutes to present (plus five minutes for questions).
To be considered for inclusion in the ETHICS in AIED: Who Cares? workshop, all abstracts must be uploaded to [link to easychair here] by [date and time tbc].
More thoughts on the ethics of AIED
While the range of AI techniques and technologies researched in classrooms and discussed at conferences is extensive and growing, the ethical consequences are rarely fully considered (at least, there is very little published work considering the ethics). In short, as a field (while we apply our university research regulations), we are working without any fully worked-out moral grounding specific to AIED.
In fact, AIED techniques raise numerous self-evident but as yet unanswered ethical questions. To begin with, concerns exist about the large volumes of data collected to support AIED (such as the recording of student competencies, emotions, strategies and misconceptions). Who owns and who is able to access this data? What are the privacy concerns? How should the data be analysed, interpreted and shared? And who should be held responsible if something goes wrong?
However, while data raises major ethical concerns for the field of AIED, AIED ethics cannot be reduced to questions about data. Other major ethical concerns include the potential for bias (conscious or unconscious) incorporated into AIED algorithms and impacting negatively on the civil rights of individual students (in terms of gender, age, race, social status, income inequality…). But these particular AIED ethical concerns, centred on data and bias, are the ‘known unknowns’. What about the ‘unknown unknowns’, the ethical issues raised by the field of AIED that have yet to be even identified?
AIED ethical questions include:
- What are the criteria for ethically acceptable AIED?
- How does the transient nature of student goals, interests and emotions impact on the ethics of AIED?
- What are the AIED ethical obligations of private organisations (developers of AIED products) and public authorities (schools and universities involved in AIED research)?
- How might schools, students and teachers opt out from, or challenge, how they are represented in large datasets?
- What are the ethical implications of not being able to easily interrogate how decisions are made by AIED deep-learning systems (which use multi-layer neural networks)?
Strategies are also needed for risk amelioration, since AI algorithms are vulnerable to hacking and manipulation. Where AIED interventions target behavioural change (such as by ‘nudging’ individuals towards a particular course of action), the entire sequence of AIED-enhanced pedagogical activity also needs to be ethically warranted. And finally, it is important to recognise another perspective on AIED ethical questions: in each instance, the ethical cost of inaction and failure to innovate must be balanced against the potential for AIED innovation to deliver real benefits for learners, educators and educational institutions.
Currently, very little has been written about the ethics of AIED. However, the ethics of two closely related fields (AI and Learning Analytics) have been explored. For example:
Bostrom, N. and Yudkowsky, E. (2014). The ethics of artificial intelligence. In The Cambridge Handbook of Artificial Intelligence, pp. 316–334.