Brazil's AI Legal Framework: Congress Votes in 22 Days and Your Company Needs to Be Ready

Entercast Consulting

On May 27, 2026, the plenary of Brazil's Chamber of Deputies is expected to vote on Bill 2338/2023 - the bill that will define the rules of the game for artificial intelligence in Brazil. With 72% of Brazilian companies already using AI at some level, according to industry experts, regulation arrives late for many and urgently for everyone.

Bill 2338/2023 was approved by the Federal Senate in December 2024 and moved through the Chamber during 2025. The vote, initially expected for the end of last year, was postponed due to political impasses and disagreements over sensitive points - especially the balance between rights protection and innovation.

Now, the president of the Chamber, Hugo Motta (Republicanos-PB), has agreed with the bill's rapporteur, congressman Aguinaldo Ribeiro (PP-PB), that the final report will be presented on May 19, followed by a plenary vote on May 27. The vote is imminent - and the clock is ticking for companies that have not yet started any compliance work.

Why the vote is urgent now

For Brazil, the timing could hardly be more critical. While the country debates the text, the global AI market has accelerated: in 2025 and 2026, major platforms launched autonomous agents, multimodal systems and corporate-grade models already embedded in Brazilian business operations.

Regulating now is no longer theory - it is dealing with technology that has already decided credit, filtered resumes and targeted advertising for millions of Brazilians. According to industry data, 54% of Brazilians already use AI in some way, and the number of startups with "AI" in the name grew 857% between 2023 and 2025.

The international context also matters. The European Union has applied the AI Act since 2024. The United States is building its own federal framework. Brazil risks operating in a regulatory vacuum just as global platforms expand their products into the national market.

What Bill 2338/2023 says

The bill creates a risk-based regulatory framework. In practice, that means: the greater the potential negative impact of an AI system on people, the greater the obligations for whoever develops or uses that system.

The text defines categories of application - from low-impact systems, such as content recommendations on platforms, to high-risk systems, including applications in health care, education, hiring, credit granting, public security and critical infrastructure.

For high-risk applications, the bill establishes a set of obligations:

  • Algorithmic transparency: users must be informed when a decision that affects them is made or supported by AI
  • Algorithmic impact assessment: companies will need to document the risks of their systems before putting them into production
  • Right to human review: anyone subject to an automated decision may request review by a human being
  • Data protection in training: the use of personal data to train models will be subject to stricter rules, in dialogue with Brazil's LGPD
  • Auditability: high-risk systems must be auditable by competent authorities
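To make these obligations concrete, the documentation and impact-assessment duties above can be sketched as a simple internal record. This is a minimal illustration, assuming hypothetical field names chosen for this example - the bill's final text will define the actual required content of an algorithmic impact assessment.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical impact-assessment record; field names are illustrative,
# not taken from the bill's text.
@dataclass
class ImpactAssessment:
    system_name: str
    purpose: str
    risk_level: str              # e.g. "low" or "high"
    affects_individuals: bool    # does the system decide about people?
    training_data_sources: list[str] = field(default_factory=list)
    human_review_available: bool = False
    assessed_on: date = field(default_factory=date.today)

    def is_compliant_candidate(self) -> bool:
        """Sketch of a check: high-risk systems need documented training
        data and an available human-review channel."""
        if self.risk_level != "high":
            return True
        return bool(self.training_data_sources) and self.human_review_available

# Example: a credit-scoring model, which falls in a high-risk category
assessment = ImpactAssessment(
    system_name="credit-scoring-v2",
    purpose="consumer credit decisions",
    risk_level="high",
    affects_individuals=True,
    training_data_sources=["internal transaction history"],
    human_review_available=True,
)
print(assessment.is_compliant_candidate())  # True under these assumptions
```

Even a lightweight record like this - kept per system, from day one - makes later audits and formal assessments far cheaper than reconstructing the history of a model after the fact.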

Who will be most impacted

The bill is not aimed only at big tech. It reaches any organization whose AI is involved in decisions about people. This includes banks using credit models, health plans analyzing procedure approvals, recruitment platforms with automated screening, fintechs, insurers and any company running customer service on generative AI.

Companies such as Nubank and iFood, which have already tested early compliance, reported that algorithm documentation and bias auditing generate initial costs, but reduce legal risk and increase customer trust over the long term - according to industry reports.

Startups and small companies are the most sensitive point. The regulatory burden imposed by the bill can be disproportionate for smaller organizations, which often build AI-based products without structured legal teams.

Criticism of the current text

The parliamentary debate around Bill 2338/2023 is intense. Criticism is concentrated on three main fronts.

The first is European inspiration without local adaptation: part of the text is influenced by the European Union's AI Act, created for a market with very different characteristics from Brazil. Critics argue that a direct transplant could harm the competitiveness of the national innovation ecosystem.

The second is ambiguity in definitions: terms such as "AI system" and "high risk" still need clearer boundaries in the text, which could create legal uncertainty and disputes between companies and regulators.

The third is the cost of compliance: implementing governance, documentation and audit processes has a real cost. For early-stage startups, it can become a barrier to entry.

Rapporteur Aguinaldo Ribeiro said in March 2026 that the text is "90% ready" and that the final changes aim precisely to balance protection and innovation. The final version will only be known when the report is presented on May 19.

What your company should do now

With the vote scheduled for May 27, there is no more time for a wait-and-see posture. Regardless of immediate approval, early alignment with the bill's principles is a strategic move - and companies that arrive prepared for the post-vote scenario will be ahead.

Practical actions to start now:

  • Map your AI systems: identify which tools and automated processes your organization already uses and whether any of them affect decisions about people
  • Assess each system's risk level: ask whether the tool is used in credit, health care, hiring, security or infrastructure - these categories are likely to be high-risk
  • Document models in production: keep records on training data origin, model purpose and decision criteria, even in simplified form for now
  • Involve legal and compliance now: AI compliance is not only an IT topic - it needs alignment with risk, privacy and governance teams
  • Follow the May 19 report: the final text may change specific obligations; subscribe to legislative alerts or follow specialized sources
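The first two steps above - mapping systems and assessing risk levels - can be sketched as a simple triage over an AI inventory. The domain list mirrors the high-risk examples cited in the bill; the mapping itself is an internal prioritization heuristic for this example, not a legal test.

```python
# High-risk domains cited as examples in Bill 2338/2023
HIGH_RISK_DOMAINS = {
    "credit", "health care", "education", "hiring",
    "public security", "critical infrastructure",
}

def triage(inventory: dict[str, str]) -> dict[str, list[str]]:
    """Group systems into 'likely high-risk' and 'review later'
    based on the business domain each one operates in."""
    result: dict[str, list[str]] = {"likely_high_risk": [], "review_later": []}
    for system, domain in inventory.items():
        bucket = "likely_high_risk" if domain in HIGH_RISK_DOMAINS else "review_later"
        result[bucket].append(system)
    return result

# Hypothetical inventory: system name -> business domain
inventory = {
    "resume-screener": "hiring",
    "faq-chatbot": "customer support",
    "loan-model": "credit",
}
print(triage(inventory))
# {'likely_high_risk': ['resume-screener', 'loan-model'], 'review_later': ['faq-chatbot']}
```

The point is not the code but the discipline: a written inventory with a risk label per system is the starting artifact that legal, compliance and engineering teams can all work from once the final text is known.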

Brazil enters the era of regulated AI

The vote on Brazil's AI legal framework is not just a legislative event - it signals that the country has chosen to create rules for this technology. The risk-based model follows the global trend of avoiding generic regulation and focusing where impact is greatest.

For Brazilian companies, the message is clear: the window for "testing without thinking about compliance" is closing. Organizations that begin the AI governance journey before approval will move ahead - not only legally, but also in the trust of customers, partners and investors.

Follow Entercast for analysis on the impact of Brazil's AI legal framework on your sector and operations.

This article was published on May 5, 2026.