Artificial intelligence and human rights: our contributions to the 192nd session of the IACHR

TEDIC
Flyer with the headline: “Artificial intelligence and human rights: our contributions to the 192nd session of the IACHR”.

The Inter-American Commission on Human Rights (IACHR) held an official regional hearing on “Artificial intelligence and human rights” during its 192nd session. This space provided us with an opportunity to discuss the impacts of artificial intelligence (AI) on fundamental rights, highlighting both its risks and the necessary measures to ensure responsible development aligned with democratic principles.

Key topics discussed

During the session, IACHR commissioners Stuardo Ralón, Carlos Bernal and Gloria de Mees, together with Pedro Vaca, Special Rapporteur for Freedom of Expression (RELE), as well as experts and civil society organizations, including TEDIC as an observer, analyzed various issues related to artificial intelligence and its regulation. The following concerns were highlighted:

  • Algorithmic opacity and discriminatory biases: The lack of transparency in AI systems can lead to unfair decisions, particularly affecting vulnerable populations.
  • Need for regulatory frameworks and human rights mechanisms: Although no AI-specific regulation exists, existing frameworks must be strengthened to safeguard fundamental rights; examples include comprehensive personal data protection laws, cybersecurity laws and transparency laws, among others.
  • Transparency and governance: The discussion emphasized that creating entirely new regulations is unnecessary. While ethical guidelines are essential, they are insufficient on their own and cannot replace the legal framework. Human rights mechanisms are crucial for implementing and strengthening safeguards for the responsible use of AI.
  • Application of Inter-American standards: A call was made to establish effective mechanisms to enforce human rights in the context of AI.

TEDIC’s advocacy in the field of artificial intelligence

Since 2019, TEDIC has been addressing the impact of artificial intelligence-based technologies in areas such as migration, labor, public administration and surveillance, conducting analyses from a gender and human rights perspective. Through its reports and research, the organization has identified one of the country’s main structural weaknesses: the absence of a comprehensive personal data protection law [1] and the lack of adequate resources to guarantee transparency and access to public information.

In addition, the data collected for the provision of public services does not meet open data standards or basic transparency criteria, and its poor maintenance hinders the effective and ethical implementation of artificial intelligence systems. This lack of regulation and control creates significant risks for citizens’ privacy and rights [2].

Lack of personal data regulation – AI Global Index

TEDIC collaborated on the Paraguayan chapter for the Global Index on Responsible AI, expressing the urgent need for ethical governance in the country. The analysis revealed the absence of a personal data protection law, as well as the lack of adequate resources to ensure transparency and access to public information.

Automation of employment policies

Alongside the regional organization Derechos Digitales, TEDIC investigated the deployment of the tool “EmpleaPY”, developed by Paraguay’s Ministry of Labor, Employment and Social Security with support from the Inter-American Development Bank (IDB). This platform aims to facilitate access to job offers through automated processes, but the study identified serious privacy issues, as its implementation was carried out without a prior data protection analysis in a country that still lacks specific legislation on the matter.

The research examined the functioning of this tool within the framework of the Paraguayan State’s digitalization process and raised concerns about the use of automated systems for decision-making in the labor field, without transparency regarding algorithmic criteria or guarantees for users. The study is framed within a broader context of the increasing incorporation of artificial intelligence (AI)-based technologies in public policy across the region.

Facial recognition and human rights

Since 2018, TEDIC has pursued legal actions in Paraguay to access public information regarding the use of facial recognition technologies. Several lawsuits filed against the Ministry of the Interior and the National Police were rejected on the grounds of “national security”, limiting citizens’ access to key information about these technologies. Recent research reveals serious concerns about transparency, corruption and human rights violations in the acquisition and implementation of biometric surveillance systems. The study documents a consistent increase in the use of facial recognition cameras by the National Police, though the results remain unclear, raising concerns about the privacy and security of personal data.

Additionally, it identifies opaque procurement processes, questionable acquisitions and a troubling lack of oversight that enables corrupt practices. The report also warns about the potential illegality of using public funds to finance these technologies, particularly the use of the Universal Service Funds (FSU) administered by CONATEL, which are intended to improve access to telecommunications, not to fund surveillance systems.

Along these lines, in 2024 Paraguay enacted the Law for the Prevention, Control and Eradication of Violence in Sports, which authorizes the collection of biometric data at sporting events and venues. The law was passed in under a year and entered into force swiftly. Even before its promulgation, the Ministry of the Interior and the Paraguayan Football Association had already signed an agreement to implement facial recognition systems before any legal framework was in place to authorize them.

A TEDIC publication also highlights a potential conflict of interest, as the technology provider, ITTI SAECA, is part of Grupo Vázquez, a business conglomerate in which the current President of the Republic holds shares. This situation has sparked criticism and pushback from fans of various clubs, who are promoting the #ConMiCaraNo (#NotWithMyFace) campaign in opposition to biometric surveillance in Paraguayan football.

Use of AI in the Paraguayan Justice System

In 2019, the Judiciary of Paraguay began negotiations to acquire “Prometea,” an artificial intelligence (AI) software developed in Argentina, with the aim of implementing it in the Constitutional Chamber of the Supreme Court of Justice. The system, based on AI and supervised machine learning, is designed to automatically generate judicial rulings, with the goal of reducing delays and streamlining bureaucratic processes.

Although there has been no progress in its implementation so far, the interest shown by state institutions in adopting such technology is noteworthy. However, it is essential to conduct a prior impact assessment to determine whether this tool is genuinely effective in reducing judicial delays. One of the main challenges lies in extrapolating Prometea—designed for an administrative fiscal enforcement context, where legal procedures are more structured and limited—to a very different setting, such as the Constitutional Chamber of the Supreme Court. There are fundamental distinctions within the judicial system: an administrative proceeding is not equivalent to a criminal or civil one, and each follows its own procedural stages. It is therefore crucial to exercise caution when adopting technological solutions outside the context for which they were originally designed.

The automation of war through killer robots

Since 2020, TEDIC has been part of the global Stop Killer Robots campaign, which seeks to ban autonomous weapons and ensure meaningful human control over the use of force. TEDIC joined the campaign to foster dialogue among diverse stakeholders, raise awareness about this critical issue and strengthen civil society efforts to address the use and abuse of technology, particularly artificial intelligence, as it relates to violence and control.

During March and April 2024 we held working group meetings with civil society organizations from Paraguay and representatives of Paraguayan state institutions (Ministry of National Defense, Ministry of Foreign Affairs and National Congress). We also submitted to the UN a statement from Paraguayan civil society that included recommendations for states. The statement, titled “Let’s Stop Digital Dehumanization: We Urge Paraguay and UN Member States to Strictly Regulate Autonomous Weapons”, was supported by Amnesty International Paraguay, the Human Rights Coordinator of Paraguay (CODEHUPY), the Heñói Center for Studies and Promotion of Democracy, Human Rights and Socio-environmental Sustainability, and Fundación Vencer.

Echoing these recommendations, in December 2024 the UN General Assembly, composed of 193 member states, approved resolution A/RES/79/62 on lethal autonomous weapons systems. A total of 166 countries, including Paraguay, voted in favor, 3 voted against and 15 abstained. The resolution establishes a new forum under UN auspices to address the serious challenges and concerns posed by these systems and the measures that should be taken in response.

Surveillance in the Triple Frontier

In 2023, TEDIC, in collaboration with Data Privacy Brasil, conducted research on the operation of border surveillance technologies in the triple frontier between Brazil, Argentina and Paraguay. The study analyzed two key security programs: Muralha Inteligente in Brazil and the Automated Migratory Facial Recognition System (SMARF) in Paraguay, aiming to understand their narratives, assess their effectiveness and determine whether they truly serve the purposes for which they were designed.

The findings reveal that, as in previous research, there is a troubling lack of transparency in how these systems operate. The opacity surrounding their implementation limits proper citizen oversight and raises serious concerns about their impact on the fundamental rights of individuals crossing the region.

Global Declaration “Within Bounds: Limiting AI’s environmental impact”

TEDIC has also signed a declaration calling for fairer and more responsible management of AI to minimize its environmental impact. The document outlines five key demands:

  • The urgent elimination of fossil fuels across the AI industry’s entire supply chain.
  • Computing infrastructure for these systems that respects planetary boundaries, avoiding excessive use of resources.
  • Responsibility on the part of major companies in the sector to maintain ethical and sustainable supply chains and to ensure their economic and political influence does not undermine environmental or social well-being.
  • Equitable public participation in decisions about the use of computing, while rejecting the criminalization of climate and environmental activism.
  • Greater transparency around the social and environmental implications of AI infrastructures, ensuring that information is accessible prior to their construction or expansion.

AI Governance in Paraguay

Currently, the Ministry of Information and Communication Technologies (MITIC) and the National Council of Science and Technology (CONACYT) are in the diagnostic phase of UNESCO’s Readiness Assessment Methodology (RAM), a tool developed as part of the Recommendation on the Ethics of Artificial Intelligence. By 2025, the objective is to design a roadmap to guide both the implementation of the RAM and the development of other key indicators within a framework of participatory governance.

This effort is significant, as it reflects the government’s interest in promoting the responsible development and use of artificial intelligence in the country. However, it is essential that this process adopt a human rights-based approach and ensure transparency and citizen participation, so that technological policies respond to the needs of society and do not compromise privacy or individual freedoms. In this context, TEDIC is interested in accompanying the process by contributing its expertise in digital rights, transparency and AI ethics, to help ensure that these policies are inclusive and uphold fundamental rights.

Regional Civil Society Recommendations to the IACHR on AI

As a result of the session, the IACHR requested that civil society submit multistakeholder recommendations on artificial intelligence and human rights. Derechos Digitales de América Latina led the drafting of the region’s civil society contributions, a collaborative effort involving 17 organizations. TEDIC actively participated by contributing its expertise and collaborating in the development of recommendations for the protection of human rights in the context of AI use.

The recommendations were:

For States:

  • Offer, through its Commissioners and Special Rapporteurs, support and technical guidance to States to integrate a human rights perspective throughout the lifecycle of public policies (design, implementation, evaluation and monitoring of their human rights impact) that seek to implement digital technologies for the provision of public services, access to goods or exercise of rights by citizens.
  • Develop standards and guidelines that operationalize the implementation of a human rights perspective and human rights impact assessment applicable to the data lifecycle and the artificial intelligence lifecycle when deployed by States for the provision of public services, access to goods or exercise of rights by citizens.
  • Further develop pre-existing Inter-American standards on meaningful participation, and affirm the importance of integrating active, open, continuous and diverse participatory processes in the design of public policies that seek to implement AI systems in the State so that different population groups, especially those most impacted, can be involved in public decision-making processes.
  • Support and guide training processes for public servants involved in the adoption of AI systems, as well as those servants involved in the fulfillment of rights and their reparation through judicial and extrajudicial mechanisms. Also support processes to enhance citizen literacy regarding the impact that AI systems have on the exercise of their rights.
  • That, in drafting such standards, it take as a starting point the standards consolidated in the universal human rights system, particularly the following:
  • The obligation of States to implement systematic due diligence mechanisms and assessments applied to the adoption and deployment of data-intensive digital technologies, adopted in General Assembly Resolution A/HRC/51/17 of 2022 on “The right to privacy in the digital age”. Furthermore, such due diligence assessments should be enshrined in “legally binding obligations” that contain data-testing protocols that safeguard against algorithmic, racial and ethnic bias, and that “should be completed before the deployment of new technologies,” as highlighted by Resolution A/HRC/56/68 of 2024 on “Contemporary forms of racism, racial discrimination, xenophobia and related intolerance”.
  • The obligation of States to implement comprehensive and regular human rights impact assessments applicable to the design, development, procurement, deployment, and operation of data-intensive digital technologies, adopted in General Assembly Resolution A/HRC/51/17 of 2022 on “The right to privacy in the digital age”.
  • Note that the processing of large amounts of data, particularly when involving personal data, can pose a high risk to privacy, especially when used for (i) profiling, (ii) facial recognition, (iii) behavior and crime prediction, and (iv) the scoring or rating of individuals, as recognized in General Assembly Resolution A/HRC/RES/48/4 of 2021 on “The right to privacy in the digital age”.
  • The state obligation to adopt mechanisms of explainability and transparency for all processes supported by AI systems, particularly when adopted to automate processes in the public sector, recognized in General Assembly Resolution A/HRC/48/31 of 2021 on “The right to privacy in the digital age”.
  • The state obligation to implement independent audit mechanisms for automation systems deployed in the public sector, as recognized in General Assembly Resolution A/HRC/48/31 of 2021 on “The right to privacy in the digital age”.
  • The state obligation to ensure the participation of all stakeholders, including potentially affected individuals and marginalized racial and ethnic groups, in the deployment and use of AI by the State, as recognized in General Assembly Resolution A/HRC/48/31 of 2021 on “The right to privacy in the digital age” and Resolution A/HRC/56/68 of 2024 on “Contemporary forms of racism, racial discrimination, xenophobia and related intolerance”.
  • The importance of the regulatory prohibition of high-risk uses of AI systems, as recognized in General Assembly Resolution A/HRC/48/31 of 2021 on “The right to privacy in the digital age”.
  • The state obligation to refrain from using AI and facial recognition systems to identify individuals in the context of peaceful protests, as well as to prohibit indiscriminate and mass surveillance, as recognized in General Assembly Resolution A/HRC/44/24 of 2020 on “The impact of new technologies on the promotion and protection of human rights in the context of assemblies, including peaceful protests”.
  • The state obligation to develop regulatory frameworks for AI that comprehensively address the effects and impacts of systemic racism, ensure alignment with international human rights law, and effectively prevent racial discrimination by AI systems, as recognized in Resolution A/HRC/56/68 of 2024 on “Contemporary forms of racism, racial discrimination, xenophobia, and related intolerance”.
  • The state obligation to provide clear and accessible appeal mechanisms, as well as mechanisms for restitution, compensation and rehabilitation, in cases where AI systems have led to human rights violations. The scope should also cover addressing the systemic impacts of racism, including human intervention and review, as recognized in Resolution A/HRC/56/68 of 2024 on “Contemporary forms of racism, racial discrimination, xenophobia, and related intolerance.”
  • That, when preparing future thematic and country reports, as well as conducting future in-person working visits, it:
  • Include a cross-cutting human rights perspective that enables the analysis, documentation and monitoring of the impact and effects of the adoption and deployment of AI systems by States, particularly regarding issues such as the protection of children, women and girls, human rights defenders, workers and the future of work, the environment and climate change, ethnic and racialized groups, journalists, among others. Also, include the exercise of rights impacted by these systems, such as freedom of expression, health, work, privacy, education, data protection, access to justice, equality and non-discrimination, access to public information, among others.
  • Require States to generate public, open and regularly updated information on the adoption, piloting, deployment and evaluation of implemented AI systems or those that have been dismantled or withdrawn after implementation.
  • Publish diagnostic reports on the regional status of the use, implementation, evaluation and monitoring of AI systems deployed by States, or of systems that have been dismantled or withdrawn after implementation.

For Private Sector Actors

  • Develop standards and guidelines that operationalize corporate human rights commitments and ensure their applicability to the activities of technology companies. In this regard, we respectfully suggest adopting the content of United Nations General Assembly Resolution A/HRC/50/56, “The practical application of the Guiding Principles on Business and Human Rights to the activities of technology companies”, adopted in 2022. The resolution outlines specific actions for companies that develop, design, sell or license digital technologies (particularly AI systems) which are subsequently adopted by States. It also establishes specific obligations for States that acquire, purchase, license or outsource public sector functions to private technology companies.
  • Address in a differentiated manner the roles and responsibilities of different private sector actors (including national and transnational technology companies, development cooperation actors, private universities, etc.), as well as the scope of their human rights obligations in relation to AI systems when adopted or deployed by States.

For Civil Society

  • Establish, in cooperation with interested civil society and other stakeholders, working groups to advance the monitoring, study, documentation, evaluation and exchange of information regarding cases of AI system deployment by States.
  • Organize, in collaboration with interested civil society, information and knowledge exchange sessions between the IACHR and its various Rapporteurships, to build technical capacity and mutual knowledge.
  • Encourage and guide civil society to enable it to utilize the mechanisms of the Inter-American Human Rights System (such as involvement with SIMORE, thematic hearings, submission of cases before the IAHRS, etc.) to strengthen its advocacy capacities at the regional level.
  • Create meeting spaces, both online and offline, for members of the IACHR, its various Rapporteurships, and civil society, to follow up on the actions taken by civil society following the regional hearing convened ex officio on “Human Rights and Artificial Intelligence.”

Other OAS efforts regarding AI

  • We request that the IACHR articulate, advise and guide, from a human rights perspective, the efforts undertaken by the OAS towards the development of the so-called Inter-American Framework for Data and AI Governance, which comprises three documents: a) Guidelines for Data and AI Governance, b) a study on the state of data and AI governance in the Americas, and c) a model policy on data and AI governance, produced in response to the mandate adopted by the OAS at its most recent General Assembly.
  • Similarly, we urge the IACHR to advise and guide, from a human rights perspective, the draft Declaration Towards the Safe, Secure and Trustworthy Development and Deployment of Artificial Intelligence in the Americas, published in draft by the OAS in December 2024.

This publication has been funded by the European Union. Its content is the sole responsibility of TEDIC and does not necessarily reflect the views of the European Union.

[1] TEDIC (2024). Artificial Intelligence in Paraguay: The urgent need for responsible governance. https://www.tedic.org/en/globalindex_ia/

[2] Global Index on Responsible AI (2024). https://www.global-index.ai/; UNESCO (2025). Artificial Intelligence Readiness Assessment in Paraguay; Oxford Insights (2023). Government AI Readiness Index 2023. https://oxfordinsights.com/wp-content/uploads/2023/12/2023-Government-AI-Readiness-Index-1.pdf