Transparent Practice #1: Document the modeling process
Last week, we shared an Open Letter insisting on AI that is Transparent, Fair, and Accountable for job seekers, employers, and training providers. This post dives deeper into one of our Four Practices of Transparency: Documenting the Modeling Process!
At a minimum, users need access to information on how the system was built and how it works. What algorithms are used? On what data is the model trained?
Model developers should identify the sources of the training data they have used and describe how they have ensured those data are as unbiased as possible. After all, AI is not actually that intelligent: it simply learns from the patterns it sees. A model trained on biased data will tend to replicate those biases in its outcomes.
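To make that concrete, here is a minimal sketch of what structured documentation of a training data source could look like. The `TrainingDataSheet` structure and every field name in it are illustrative assumptions, not AdeptID's actual documentation format or data.

```python
from dataclasses import dataclass, field

@dataclass
class TrainingDataSheet:
    """Illustrative record describing one training data source.

    The fields below are hypothetical examples of what a developer
    might document, not a prescribed or official schema.
    """
    source_name: str          # where the data came from
    collection_period: str    # when it was gathered
    population_covered: str   # who the records describe
    known_gaps: list = field(default_factory=list)         # under-represented groups or signals
    bias_mitigations: list = field(default_factory=list)   # steps taken to reduce bias

# Example entry for a hypothetical placement-outcomes dataset.
placement_data = TrainingDataSheet(
    source_name="Anonymized placement records (hypothetical)",
    collection_period="2018-2022",
    population_covered="Job seekers in US metro areas",
    known_gaps=["rural applicants under-represented"],
    bias_mitigations=["re-weighted samples by region", "excluded protected attributes"],
)
```

Even a short record like this lets a user ask pointed questions: who is missing from the data, and what was done about it?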
Next, developers should articulate the choices they have made in designing their models. Such choices invariably involve trade-offs between how easy it is to understand how a model works and how robust and comprehensive it is. Some types of models make it easy to see which inputs lead to which recommendations. Others, including modern classes such as neural networks and other deep learning models, let developers achieve robust and accurate results quickly; but these models are complex and draw on a huge range of inputs, making them hard to study and understand, even for data scientists.
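Here is a minimal sketch of that trade-off on toy data, with made-up feature names rather than anything from our production models: a linear model exposes one readable weight per input, while a neural network spreads its reasoning across thousands of internal parameters with no comparable one-line explanation.

```python
# Toy illustration of the interpretability trade-off; hypothetical features only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                      # three hypothetical candidate features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)      # toy "good match" label

# Interpretable model: each feature's learned weight is directly readable.
linear = LogisticRegression().fit(X, y)
for name, coef in zip(["skills_overlap", "experience_years", "distance"], linear.coef_[0]):
    print(f"{name}: weight {coef:+.2f}")

# Deep model: often more accurate on complex data, but its many internal
# weights offer no simple per-input explanation.
deep = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000).fit(X, y)
n_params = sum(w.size for w in deep.coefs_) + sum(b.size for b in deep.intercepts_)
print("MLP parameters:", n_params)
```

Neither choice is wrong; what matters is that the maker states which one was made and why.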
This practice comes down to a commitment to clarity: about a model's purpose and design, and about the trade-offs its maker has navigated. Makers who embrace this approach enable their users to intelligently interrogate and dissect the results they receive.
Offering clear and open documentation is a bedrock of AdeptID's practice: resources for learning about and using our tools are easy to find on our website, written for technical and non-technical audiences alike, and designed to invite users to sample and experiment with the model.
Next week, we’ll share another one of our practices. In the meantime, we look forward to continuing our conversation on how AI can be built responsibly to get more people into better jobs, faster.