Purpose: Artificial intelligence (AI) systems play an increasing role in organisational management and in process and product development. This study identifies risks and hazards that AI systems may pose to the work health and safety (WHS) of those engaging with or exposed to them, and proposes a conceptual framework of organisational measures for minimising those risks.
Design/methodology/approach: Adopting an exploratory, inductive qualitative approach, the researchers interviewed 30 experts in data science, technology and WHS; interviewed 12 representatives of nine organisations using or preparing to use AI; and ran online workshops, including one with 12 WHS inspectors. The research mapped AI ethics principles endorsed by the Australian government onto the AI Canvas, a tool for tracking AI implementation from ideation through development to operation. From the fieldwork and analysis, the researchers developed a matrix of WHS and organisational–managerial risks, and of risk-minimisation strategies, relating to AI use at each implementation stage.
Findings: The study identified psychosocial, work stress and workplace relational risks that organisations and employees face during AI implementation in the workplace. Privacy, business continuity and gaming risks were also noted. All may persist and recur over the lifetime of an AI system. Alertness to such risks may be enhanced by adopting a systematic risk assessment approach.
Originality/value: A collaborative project involving sociologists, economists and computer scientists, the study relates abstract AI ethics principles to concrete WHS risks and hazards. It translates principles typically applied at the societal level to the workplace and proposes a process for assessing the risks of AI systems.
Number of pages: 19
Journal: International Journal of Workplace Health Management
Publication status: Published - 28 Jul 2023
Keywords:
- AI canvas
- Artificial intelligence
- Ethics principles
- Future work
- Job control/workload
- Organisational/peer relations
- Risk assessment