Discussions of trustworthy AI often focus on the trust we should, or shouldn’t, place in AI systems – whether they are capable, reliable, resilient, transparent, and so on. It is just as important, however, to know when to trust, or not to trust, the developers of those systems. In this AI for Good Discovery, Dr Shahar Avin will describe why it is currently hard to evaluate the trustworthiness of AI developers, and then outline how a combination of interlocking mechanisms, from red teaming to third-party audits, could create a system in which that trustworthiness is easier to assess.