<aside> 💡
The Practice Playbook on Responsible AI is a set of recommendations for social impact organisations to incorporate Responsible AI at every stage of developing an AI intervention.
It draws on existing scholarship on the topic, as well as the experience and scholarship of the RAIL Fellowship Mentors and Fellows, and is intended as a resource for social impact organisations and startup teams to understand the potential challenges and key considerations when developing AI interventions as part of their programme offerings.
</aside>
Artificial intelligence (AI) systems and technologies have been widely adopted and deployed across critical social sectors in India. This began as early as 2018, when the NITI Aayog established a National Strategy for AI, promoting “AI for All” as a development pathway and responsible AI as the means of achieving it. To benefit from the opportunities AI presents, its risks must be managed and regulated.
State intervention through laws and regulation is not enough to tackle the challenges posed by emerging technologies. It is also imperative to embed good practices and principles at the stage of developing a technology. That is where the notion of Responsible AI comes in.
The key principles of Responsible AI are transparency (and explainability), fairness and the mitigation of bias, accountability of relevant stakeholders, privacy of the individuals whose data is used, and security of that data against breaches and harm. Together, these principles offer an ethical and legal lens on the design, development, and deployment of AI interventions.
There are several approaches and frameworks for ethical and responsible AI globally. In India, the NITI Aayog’s two-part approach papers on Responsible AI, published in 2021, set out systems and societal considerations along with action plans for various stakeholders. These principles now need to be translated into on-ground actions that are contextually relevant for India.
<aside> 🔠 DFL’s Responsible AI Impact Lab (RAIL) is envisioned as a platform for stakeholders invested in the purposeful, ethical and sustainable deployment of AI in India, to exchange ideas, build communities of practice, and access the tools and expertise required to drive equitable and responsible use of AI. Learn more about RAIL and the fellowship here.
</aside>
Through expert mentorship, workshops, seminars, and peer learning exchanges, RAIL empowers organisations to anticipate potential harms, establish risk mitigation strategies, and develop AI in the public interest.
RAIL's inaugural capacity-strengthening fellowship is a unique initiative targeted at social impact organisations and startups to equip them with the tools and knowledge necessary to build and implement ethical and responsible AI systems.
<aside> 🔍 The fellowship facilitates the identification of challenges in applying RAI principles to the design and deployment of AI interventions.
</aside>
<aside> 👥 It seeks to refine the RAI approach through intensive peer workshops and individually tailored sessions.
</aside>