Bridging Tech and Policy: Insights on Privacy and AI from IndiaFOSS 2024
Anwesha Sen / Nov 1, 2024

The IndiaFOSS 2024 conference brought together a diverse mix of developers, policymakers, and advocates, creating an engaging forum to explore the future of Free and Open Source Software (FOSS) amid a rapidly changing technological landscape. With the rise of artificial intelligence and new regulations like the EU AI Act and Digital Personal Data Protection Act (DPDPA), conversations about the critical policy and ethical issues they present are at the forefront.
As a speaker at IndiaFOSS 2024, I explored some of these dynamics by discussing the implications of biometric technologies in public service delivery and law enforcement on human rights. Organized by FOSS United, a non-profit dedicated to strengthening the FOSS ecosystem in India, the conference emphasized the vital intersection of FOSS and public policy for dismantling monopolies on innovation and fostering an inclusive future. This article focuses on three themes discussed at the conference and what they mean for the tech policy ecosystem: privacy, open-source AI, and reclaiming control over our communication channels through FOSS.
Privacy
One of India's key tech policy challenges is the establishment of effective privacy regulations. Since the enactment of the Digital Personal Data Protection Act (DPDPA) in August 2023, the ecosystem has been awaiting the rules for its implementation, which, as of September 2024, are yet to be released. This delay means that India still lacks enforceable privacy protections. The rapid advancement of AI exacerbates privacy concerns, particularly issues like data scraping. Notably, clause 3(c)(ii) of the DPDPA excludes all publicly available data from its scope, inadvertently fostering unchecked data scraping practices.
Discussions at the conference also highlighted questions surrounding the implications of the expanding biometric surveillance ecosystem, particularly considering the increase in welfare algorithms that use "AI" to determine an individual's eligibility for certain welfare schemes (and often get it wrong). Such systems are poorly governed and often lack adequate regulatory safeguards and grievance redressal mechanisms, resulting in a cycle of human rights violations. Furthermore, these algorithms contribute to exclusionary access to public services, exacerbating systemic discrimination and infringing on privacy rights. They also undermine the right to dissent, increase the risk of false incrimination through "predictive policing," and promote widespread surveillance, all while lacking the transparency necessary for individuals to seek justice.
While open-sourcing digital public infrastructure (DPI) can enhance transparency, accountability, and privacy, the use of AI in DPI for public services and law enforcement - whether open or proprietary - is antithetical to human rights and must be curtailed.
Open-sourcing AI
AI consists of three key components: data or data information (i.e., sufficiently detailed information about the data used to train the system), code, and model weights and parameters. Open-sourcing AI refers to granting the four freedoms of FOSS - use, modify, inspect, and sell - to all three elements of AI. While code and model weights can often be shared freely, the open sharing of datasets presents a more complex challenge. A panel discussion on this topic took up key questions, such as which types of datasets can be shared without infringing on individual privacy, what copyright and intellectual property considerations must be taken into account, and how much openness is appropriate to foster innovation while protecting privacy. The ecosystem continues to navigate these challenges as it seeks to balance transparency and creativity with the protection of individual privacy and community rights.
Advocating for open-source AI initiatives allows policymakers to create an environment where AI frameworks, algorithms, and tools are accessible to all. This promotes collaboration among diverse stakeholders—from researchers to startups—leading to innovative solutions that meet a broader range of societal needs. A transparent approach to AI development aligns with democratic values, enabling citizens to better understand and influence the technologies affecting their lives. However, this push for openness must be coupled with strong protections for privacy and intellectual property. Policymakers should engage with these initiatives to establish robust frameworks that address risks related to open training data, ensuring ethical considerations remain central to AI development.
FOSS for taking back control of our communication systems
Global communication systems are predominantly managed and governed by major technology corporations, often referred to as Big Tech. These organizations exert significant influence over how information flows across the world, yet they lack a nuanced understanding of the socio-political dynamics in the Global South. Pratik Sinha, co-founder of Alt News, spoke about how this gap in understanding can have severe consequences, particularly when it comes to issues such as misinformation, hate speech, and the spread of harmful content.
For instance, the algorithms and content moderation practices employed by these companies may not be adequately attuned to the cultural and contextual sensitivities of diverse regions, leading to the amplification of divisive narratives and the marginalization of local voices. Furthermore, their failure to address the unique challenges faced in these areas can contribute to real-world harm, including violence and societal unrest. The lack of accountability from these tech giants is particularly concerning, as their insufficient content moderation has historically facilitated, and continues to fuel, large-scale violence and even genocides.
This underscores the urgent need for alternative communication frameworks that prioritize transparency and accountability. The FOSS community is uniquely positioned to address these challenges by collaboratively developing communication systems tailored to the specific needs of various regions. Pratik suggested that by leveraging open-source principles, the FOSS community can create platforms (such as Mastodon) that empower users, enhance local governance, and foster a culture of shared responsibility in content moderation. In doing so, they can provide viable alternatives to Big Tech, ensuring that communication systems serve the diverse needs of communities rather than being controlled by a handful of corporations with a limited understanding of local complexities.
By embracing FOSS principles, policymakers can create an ecosystem that promotes collaboration and knowledge sharing among developers, users, and organizations. This collaborative environment can lead to more effective policies that prioritize user rights, security, and accessibility, ensuring that technology serves the public good. Moreover, FOSS encourages the adoption of open standards, which can enhance interoperability between systems and reduce barriers to entry for smaller developers and startups. This not only stimulates competition but also drives technological advancement in a way that is more aligned with the needs of the community. Ultimately, supporting FOSS can lead to the development of ethical tech policies that not only promote sustainable growth but also democratize access to technology, ensuring that all citizens can benefit from the digital revolution.