M. Brundage et al. (2020): Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims, https://arxiv.org/abs/2004.07213.
 European Parliamentary Research Service (EPRS) (2019): EU guidelines on ethics in artificial intelligence: Context and implementation, https://www.europarl.europa.eu/RegData/etudes/BRIE/2019/640163/EPRS_BRI(2019)640163_EN.pdf.
 U. von der Leyen (2019): https://g8fip1kplyr33r3krz5b97d1-wpengine.netdna-ssl.com/wp-content/uploads/2019/07/190714-Letter-Candidate-RENEW-1.pdf.
 OECD (2019): Recommendation of the Council on Artificial Intelligence, https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449.
 J. Gesley et al. (2019): Regulation of Artificial Intelligence in Selected Jurisdictions, https://www.loc.gov/law/help/artificial-intelligence/regulation-artificial-intelligence.pdf.
 Article 29 Data Protection Working Party (2018): Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679, https://ec.europa.eu/newsroom/article29/item-detail.cfm?item_id=612053.
 High-Level Expert Group on AI (2019): Ethics Guidelines for Trustworthy AI, https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai.
 ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT), https://facctconference.org/.
 I. D. Raji et al. (2020): Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing, doi: https://doi.org/10.1145/3351095.3372873.
 A. Clark (2018): The Machine Learning Audit – CRISP-DM Framework, https://www.isaca.org/resources/isaca-journal/issues/2018/volume-1/the-machine-learning-auditcrisp-dm-framework.
 UK National Audit Office (2016): Framework to review models, https://www.nao.org.uk/report/framework-to-review-models/.
 M. Wieringa (2020): What to account for when accounting for algorithms, doi: https://doi.org/10.1145/3351095.3372833.
 The European Parliament and the Council of the European Union (2016): Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), https://eur-lex.europa.eu/eli/reg/2016/679/oj.
 Information Commissioner’s Office (ICO): AI auditing framework, https://ico.org.uk/about-the-ico/news-and-events/ai-auditing-framework/.
 C. Molnar (2019): Interpretable Machine Learning. A Guide for Making Black Box Models Explainable, https://christophm.github.io/interpretable-ml-book/.
 C. Molnar et al. (2020): Limitations of Interpretable Machine Learning Methods, https://compstat-lmu.github.io/iml_methods_limitations/.
 I. E. Kumar et al. (2020): Problems with Shapley-value-based explanations as feature importance measures, https://arxiv.org/pdf/2002.11097.pdf.
 K. Eykholt et al. (2017): Robust Physical-World Attacks on Deep Learning Models, https://arxiv.org/abs/1707.08945.
 N. Morgulis et al. (2019): Fooling a Real Car with Adversarial Traffic Signs, https://arxiv.org/abs/1907.00374.
 The Norwegian Data Protection Authority (Datatilsynet) (2018): Artificial intelligence and privacy, https://www.datatilsynet.no/globalassets/global/english/ai-and-privacy.pdf.
 Swedish Data Protection Authority (Datainspektionen) (2019): Supervision pursuant to the General Data Protection Regulation (EU) 2016/679 - facial recognition used to monitor the attendance of students, https://www.datainspektionen.se/globalassets/dokument/beslut/facial-recognition-used-to-monitor-the-attendance-of-students.pdf.
 Information Commissioner’s Office: When do we need to do a DPIA?, https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/data-protection-impact-assessments-dpias/when-do-we-need-to-do-a-dpia/.
 A. Feller et al. (2016): A computer program used for bail and sentencing decisions was labeled biased against blacks. It’s actually not that clear., The Washington Post, http://www.cs.yale.edu/homes/jf/Feller.pdf.
 S. Verma and J. Rubin (2018): Fairness Definitions Explained, http://fairware.cs.umass.edu/papers/Verma.pdf.
 INTOSAI ISSAI 400 – Fundamental Principles of Compliance Auditing, https://www.intosai.org/fileadmin/downloads/documents/open_access/ISSAI_100_to_400/issai_400/issai_400_en.pdf.