AI for Human Welfare
The group's interests include the stability of AI algorithms, notably their resilience to deliberate adversarial attacks. As well as developing practical attack and defence algorithms, they study big-picture questions such as the trade-off between accuracy, stability, and bias.
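To make the idea of a deliberate adversarial attack concrete, here is a minimal sketch (hypothetical, not the group's own code) of a gradient-sign attack on a simple linear classifier: the attacker nudges every input feature by a small amount epsilon in the direction that increases the loss, which can flip the model's decision. The weights, input, and epsilon below are illustrative values chosen for the example.

```python
import numpy as np

# Hypothetical linear classifier f(x) = sign(w.x + b).
w = np.array([1.0, -2.0, 0.5])   # classifier weights (illustrative)
b = 0.1                           # bias
x = np.array([0.4, 0.1, 0.3])    # a correctly classified input
y = 1.0                           # true label, in {-1, +1}

def predict(x):
    return np.sign(w @ x + b)

# For a hinge-style loss max(0, 1 - y*(w.x + b)), the gradient with
# respect to the input x is -y*w while the margin is unsatisfied.
# The sign attack keeps only the sign of that gradient, so every
# feature moves by exactly +/- epsilon.
grad = -y * w
epsilon = 0.4
x_adv = x + epsilon * np.sign(grad)

print(predict(x))      # original input: classified as +1
print(predict(x_adv))  # perturbed input: the decision flips to -1
```

Each feature changes by at most 0.4, yet the classification flips, which is exactly the kind of instability the group's work on resilience addresses.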
Mathematical progress in these areas has implications for regulation. For example:
- How do we define AI, reliability, safety, and trustworthiness?
- Are governments in danger of proposing laws that are mathematically impossible to uphold?
- Who is responsible for the safety of AI tools when the final product builds on networks made available by a third party?
Public perception is a closely related issue:
- What types of mistakes are acceptable?
- How do we judge AI tools?
- In what sense does AI perform better than humans?
Tackling these questions clearly requires a range of interdisciplinary approaches.