![Flyer with a yellow background and black text.](https://www.tedic.org/wp-content/uploads/2025/02/UE1.png)
Elections represent significant milestones in the political participation of democratic societies and the construction of a social state governed by the rule of law. The right to freedom of opinion and expression in the electoral context enables political parties and candidates to campaign freely, share proposals, and engage with citizens. It allows people to express their opinions on policies and the performance of political figures, to discuss, debate, decide whom to vote for, and, no less importantly, to demand accountability. Additionally, it ensures that journalists can report without fear of retaliation or favoritism, and that electoral institutions can guarantee free, fair, and secure elections.
However, in many countries, elections take place in contexts of heightened political tensions during which freedom of expression and other human rights are suppressed. Candidates, political activists, and journalists are often targeted with attacks. Information is manipulated to spread disinformation, hate speech, and other forms of violence, threatening citizen participation and public trust in democracy. Technology has amplified both opportunities and threats to democratic processes. On one hand, it has facilitated access to information and increased the channels and possibilities for seeking, receiving, and disseminating information and ideas, as well as forming opinions. On the other hand, it has magnified disinformation and violence against women, electoral candidates, journalists, and other vulnerable groups participating in these processes.
In this context, the UN Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, Irene Khan, has called for contributions to her upcoming report on freedom of expression and elections in the digital age, to be presented to the Human Rights Council in June 2025. The call for contributions is open to governments, international and regional organizations, national electoral management bodies, election observation missions, national human rights institutions, media outlets, digital technology companies, civil society organizations, and experts to share their perspectives. The report will analyze the information environment in the electoral context and evaluate opportunities, threats, and challenges to freedom of expression and its impact on voters, with a focus on vulnerable groups. Key topics in the area of digital rights to be addressed include the role, responsibilities, and responses of state and non-state actors regarding AI.
TEDIC, together with Derechos Digitales, IPANDETEC, and Fundación Karisma, has submitted a joint contribution on the threats and challenges to freedom of expression during elections in Latin America. The main concerns identified and outlined in our submission to the Special Rapporteur are summarized below.
Threats
Online Gender-Based Violence Against Women in Politics
Women in politics face disproportionate levels of online harassment, often aimed at silencing their voices and discrediting their campaigns. This form of violence not only violates their rights but also undermines democratic participation by discouraging women from engaging in public life. It frequently leads to self-censorship, which in turn erodes their right to freedom of opinion and expression.
In Colombia, research on the 2022 elections documented the persistence of the gender gap in political candidacies and campaigns, exacerbated by the violence women in politics face on social media. In Panama, monitoring of the 2024 elections found that the online political violence directed at women candidates took the form of targeted attacks on their political roles and direct aggression. Additionally, generative AI (GenAI) and deepfakes have been used without consent to impersonate women candidates, undermining their campaigns by attacking them, deceiving voters, or spreading false information about their personal lives. We have documented cases of GenAI and deepfake use against women in politics in Brazil, Mexico, and Paraguay.
Shrinking Civic Space
Another major threat to freedom of expression during elections is the rise of so-called “anti-NGO laws,” reflecting a worrying trend in Latin America, with particular mention of cases in Paraguay, Peru, Nicaragua, and Venezuela. These laws are accompanied by misleading and hostile narratives that discredit civil society organizations, portraying them as agents of foreign interests or associating them with illicit activities. The impact of these laws and hostile narratives includes the stigmatization of civil society, exposing its members to attacks and contributing to the shrinking of civic space. The ambiguity of sanctions further exacerbates the problem, fostering self-censorship, especially during election periods, when many organizations intensify their scrutiny of incumbent authorities and candidates competing for public office.
State-Sponsored Disinformation
State-sponsored disinformation and collusion with platform companies to control public narratives raise serious concerns about governmental control over the information ecosystem. One of the most notable cases of official disinformation in electoral contexts occurred during Jair Bolsonaro’s presidency in Brazil. His anti-democratic rhetoric surrounding the recognition of his opponent Lula da Silva’s victory incited an attempted coup in January 2023. A recent judicial investigation into the coup attempt concluded that state-sponsored disinformation played a key role as part of a coordinated information operation. Social media platforms failed to take decisive action to prevent it or to safeguard the integrity of the elections.
Challenges
Lack of Data: An Obstacle to Tackling Disinformation
The lack of standardized methodologies to assess the impact of disinformation on voter behavior, electoral outcomes, and democratic institutions poses a significant challenge in addressing disinformation in electoral contexts. This gap hinders the development of evidence-based policies and impedes efforts to design targeted interventions, leaving disinformation as an unchecked threat to electoral integrity.
In 2024, Meta shut down CrowdTangle, a crucial tool for researchers studying audience interactions and content virality on its platforms. Its replacement, the Meta Content Library, is far more restricted: access is limited to people affiliated with academic institutions and nonprofit organizations, excluding journalists and independent researchers. In January 2025, Meta announced that it would end its third-party fact-checking program in favor of a community notes model similar to the one implemented by X. This change represents a significant setback in efforts to improve information integrity, particularly during election periods, when the flow of violent and misleading content intensifies due to political competition.
The Decline of Local Journalism and Its Implications
The decline of local media, particularly newspapers, has reduced civic engagement, knowledge, and trust, contributing to the spread of disinformation. The cost of quality journalism and financial challenges in the sector pose significant obstacles. The dominance of companies like Facebook and Google in advertising has further weakened the financial stability of independent journalism, negatively affecting local media outlets.
Growing “Influence Industries” and Their Opacity
The business conglomerates that make up this industry operate across multiple countries in the region with little transparency. Various investigative media reports highlight how electoral regulations in Latin America, Spain, and the United States fail to adequately address the phenomenon of influence industries, their disinformation operations, their funding sources, and related issues. In Latin American countries, this industry thrives in a context of weak access-to-information laws, poorly enforced or absent data protection frameworks, and ambiguous or inadequately monitored political party financing regulations.
Platform Business Models and Algorithms
The business model of online platforms, based on the principles of the attention economy and surveillance capitalism, incentivizes the amplification of harmful and illegal content, such as hate speech, disinformation, and extremist opinions, as these are more likely to elicit strong user reactions. In this environment, verified facts and reasoned debate take a backseat as platform algorithms prioritize content that provokes outrage or conflict. This algorithmic tendency is particularly exploited during elections to bias public discourse, intimidate dissenters, and create a polarized public sphere.
Despite these challenges, platform regulations have largely avoided directly addressing the underlying business models and technological design architectures that perpetuate these problems.
Efforts to Regulate AI and Disinformation
Paraguay’s Superior Court of Electoral Justice (TSJE) has launched several initiatives to strengthen electoral integrity online. Notable efforts include an agreement with Panama’s Electoral Tribunal to promote the ethical use of technology and an upcoming collaboration with social media platforms to combat disinformation and the misuse of artificial intelligence (AI). Additionally, topics such as content moderation, hate speech regulation, and the integration of AI in elections were discussed at events like the Mercosur Parliamentary Forum and the XVII Meeting of Electoral Authorities. However, concerns persist about how to address content that is not prohibited, such as electoral disinformation, particularly with regard to algorithmic transparency, the role of tech companies, and AI’s impact on freedom of expression.
At the regional level, regulating disinformation remains a challenge. Some countries, like Nicaragua, have passed laws against “fake news” and disinformation, but their enforcement has been controversial, as they are used to target political opponents and dissenters, raising concerns about human rights violations and freedom of expression.
The contribution to the UN Special Rapporteur’s report on the promotion and protection of the right to freedom of opinion and expression represents a significant step in ensuring that states uphold international standards in electoral contexts, particularly in online environments. This report also advances discussions on the roles, responsibilities, and responses of state and non-state actors concerning AI-related issues.
To read our full contribution, click here.
This publication was funded by the European Union. Its content is the sole responsibility of TEDIC and does not necessarily reflect the views of the European Union.