A growing number of commercial and government organizations are adopting Artificial Intelligence (AI) and machine learning (ML) approaches to solve some of their most consequential problems. Despite the rapid proliferation of advanced AI techniques, architectures, and benchmarks, the challenges organizations face are often broader than building or using the next most performant model. Important considerations remain, for instance, around fairness and bias in AI models and their underlying training data, governance of the appropriate use of AI systems, and training to equip employees with the necessary understanding of the impact of AI-based automation.
To address these challenges, many institutions have called for “Responsible AI” (RAI) in policy documents and statements of ethical principles. Though these principles are a step in the right direction, there is a need to further translate them into guidance that engineering teams can implement.
In fact, the engineering teams who build software platforms for AI systems play a crucial role in driving the adoption of RAI practices. Many of the techniques that engineers use to build software reliably and iteratively – version control, continuous integration, continuous delivery, agile development – should also be applied to the development and deployment of AI.
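As a minimal sketch of what this can look like in practice (not a prescription from this whitepaper), the snippet below treats model evaluation like any other continuous integration check: it trains a model, measures held-out accuracy, and fails the pipeline if the result falls below an acceptance threshold. The dataset, model, and 0.90 threshold are purely illustrative assumptions.

```python
# Illustrative CI-style "model gate": train a small model, evaluate it,
# and exit non-zero so the CI pipeline fails if the model regresses.
# Dataset, model choice, and threshold are placeholders for this sketch.
import sys

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

ACCURACY_THRESHOLD = 0.90  # placeholder acceptance criterion


def main() -> int:
    # Synthetic data stands in for a versioned training dataset.
    X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0
    )

    model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))
    print(f"held-out accuracy: {accuracy:.3f}")

    # Treat the evaluation like any other CI check: block the release on failure.
    return 0 if accuracy >= ACCURACY_THRESHOLD else 1


if __name__ == "__main__":
    sys.exit(main())
```

Run as a pipeline step, a non-zero exit code blocks the merge or deployment, giving model quality the same gatekeeping treatment as unit tests.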
In this whitepaper, Palantir describes a novel model lifecycle framework built upon common software engineering techniques that enables engineering teams to implement RAI principles in practice.