Abstract
Automated decision-making ('ADM') or data-driven inferencing ('DDI') is now widely used in Australia by private and public entities to make a variety of decisions affecting individuals. These automated systems may include traditional rule-based systems, algorithms or 'more specialised systems which use automated tools to predict and deliberate, including through the use of machine learning'.1 Training data sets guide the automated systems to 'learn' to apply the data they analyse to come to a decision.2 Where training data sets intentionally or unintentionally reflect or embed assumptions or biases, ADM/DDI may produce results that perpetuate those biases,3 leading to potentially discriminatory outcomes for the individuals about whom decisions are made. This has a significant impact on the human rights of those individuals and raises broader questions about transparency, accountability and systemic disadvantage in a community where these technologies are increasingly used.
The eight case studies that follow briefly illustrate the potential for discriminatory practices arising from the use of ADM/DDI in recruitment, facial recognition, predictive policing and corrections risk assessment, financial services, visa processing, access to health care and car insurance.
| Original language | English |
| --- | --- |
| Type | Submission to Australian Law Reform Commission |
| Number of pages | 34 |
| Publication status | Published - 4 Nov 2020 |
Keywords
- Anti-discrimination laws
- Automated decision-making (ADM)
- Data-driven inferencing (DDI)
- Discrimination
- Australia
- Australian Law Reform Commission