Efforts to address the ‘disruptive’ ethical, legal, and social potential of socio-technical innovation have resulted in the formulation of standard procedures for Ethical Impact Assessments (EIA), which may include specifically targeted Privacy or Data Protection Impact Assessments (PIA) (SATORI 2016). It is important that PIAs and EIAs are understood and implemented as iterative, reflexive, contextual and creative processes, rather than as ‘rubberstamping’ or ‘policing’ activities. They can be undertaken in a range of formats, such as open discussions, questionnaires, case studies, and role-playing, and should be carried out periodically. They are necessary to notice and address risks pro-actively, to take early steps to avoid infringing upon fundamental rights, and to increase transparency. There are no unequivocal rules or always right/wrong answers.
How can the EIA be folded in as an ongoing process, not as a tickbox exercise or a process of policing?
How can EIA/PIA processes be designed to support designers and users of CIS to recognise the potential limitations of their assumptions/priorities?
How can EIAs/PIAs reveal ethical, legal and social issues at design-time, but also during implementation, use and governance?
Who manages the EIA/PIA processes, and how do they ensure that the importance of these processes is communicated across all design teams and that the approaches they engender are embedded in all design processes?
Increasingly, Ethical, Privacy and Data Protection Impact Assessments are employed in organisations and projects to help anticipate and address ethical, legal and social issues, such as data protection and privacy concerns. These processes require pre-planning and a predictive assessment of how innovation might impact end user and stakeholder rights. Furthermore, this is an on-going process which mandates iterative evaluation: an EIA/PIA, including the standards for it currently under development, should not be reduced to a “tick-box” exercise. When done effectively, EIA/PIA processes can create transparency around the operation of organisations and technologies, which can increase end user confidence.
A whistleblower policy can also be implemented to complement these processes.
Petersen et al. (2016) describe an EIA process in the SecInCoRe project, designed to facilitate deeper reflection, learning and grounding in how ethical, legal and social issues manifest concretely in multi-agency collaboration through a prototype CIS. This EIA staged an encounter between a fictional disaster management committee looking for new collaboration technology and our project’s design team. The committee was played by the project’s domain analysis team, comprising social scientists, legal scholars and a representative of the British Association for Professionals in Public Safety and Civil Contingencies Communications and Information Systems (British APCO), a practitioner with over 30 years of experience as a senior police officer. All of the social science researchers have experience of participant observation with emergency responders, as well as knowledge of the relevant literature. The design team was played by engineers and computer scientists in the project, most of whom have also worked with emergency responders. The idea was not to simulate real world experience with maximum fidelity to ‘tell it like it is’, but to seek concrete grounding in different knowledges and perspectives of real world practice.
The fictitious commissioning committee specifically sought support in addressing ethical, legal and social issues arising in multi-agency collaborations. This was based on their experiences during the Germanwings crash, which were described in a briefing document. This brief was given to the technical/conceptual developers, who were asked to make a pitch for the SecInCoRe project’s products, explaining how the SecInCoRe concept and technologies could help responders address the challenges. The role-playing members of the disaster management committee then posed questions from their respective perspectives (e.g. as legal experts, police authorities, environment agency members, aviation experts). This activity was carried out five times, with each session focusing on a different aspect of the project and involving different team members. The sessions were transcribed and analysed.
This process instigated broad discussions of the potential and the limitations of technology, and of needs and opportunities for technical, organisational and regulatory innovation. A range of issues was explored, from data gathering to public-private partnerships to the afterlife of data. The role-playing discussions generated new insights into ethical, legal and social issues (ELSI) and new design ideas.
De Hert, P., Kloza, D., and Wright, D. (2012). PIAF Project Deliverable 3: Recommendations for a privacy impact assessment framework for the European Union.
Liegl, M., Büscher, M., and Oliphant, R. (2015). Ethically Aware IT Design for Emergency Response: From Co-Design to ELSI Co-Design. In Proceedings of the ISCRAM 2015 Conference. Kristiansand, Norway, 24-27 May 2015. [Link]
Harris, I., Jennings, C., Pullinger, D., Rogerson, S., and Duquenoy, P. (2011). Ethical assessment of new technologies: A meta-methodology. Journal of Information, Communication and Ethics in Society, 9: 49–64. [DOI] [Link]
Petersen, K., Oliphant, R., and Büscher, M. (2016). Experimenting with the Ethical Impact Assessment as a Grounding Socio-Technical Practice. Proceedings of the Intelligent Systems for Crisis Response and Management Conference, Rio de Janeiro, 22-25 May 2016. [Link]
SATORI. (2016). CEN Workshop SATORI – Ethics Assessment of research and innovation. [Link]
Wright, D., and Friedewald, M. (2013). Integrating privacy and ethical impact assessments. Science and Public Policy, 40(6), 755–766. [DOI]
Wright, D., and De Hert, P. (2012). Privacy Impact Assessment. Springer Netherlands.