Our democratic stability, longstanding tradition of protecting human rights, and commitment to innovation place us in a privileged position to build a model that combines technological development with social well-being.
If we create clear, agile, and balanced regulation, we can prove that it is possible to adopt AI to improve job quality, increase productivity, and attract digital investment, without forsaking the principles that define us as a society. We have the chance to be pioneers, regulating with legal intelligence, innovating responsibly, and projecting regional leadership that places workers at the center.
But to move forward with foresight, we must first recognize our starting point: there is a legal vacuum that currently limits the responsible use of AI in the employment setting.
Artificial intelligence is no longer a futuristic promise; it has become a daily reality in every sector. Its integration into the world of work is no longer optional: it is transforming processes, decision-making, and employment relationships at a speed that far outpaces our ability to regulate it.
And this raises the crucial question: are we legally and ethically prepared to embrace this change?
Today, AI can screen résumés in seconds, predict resignations, suggest training programs, and evaluate performance with pinpoint accuracy. What was once exclusively the domain of human judgment is now supported by algorithms that learn from historical patterns to “streamline” talent management.
According to a 2025 McKinsey global survey, 78% of organizations are using AI in at least one business function, compared to only 55% the previous year. The adoption of this technology is unstoppable.
However, while technology advances steadily, our labor legislation lags behind. This forces us to ask: who monitors the algorithms that make decisions about workers?
AI itself is not inherently a risk. The danger lies in its implementation without proper technical, ethical, and legal criteria. A poorly designed algorithm can reproduce and amplify historical biases, undermining essential principles such as equal opportunity and non-discrimination.
Cathy O’Neil issued this warning in Weapons of Math Destruction (2016), where she explains how opaque algorithms trained on biased data can perpetuate or even amplify injustices instead of correcting them. This is especially relevant in sensitive employment processes such as recruitment, promotions, dismissals, and task allocation.
Imagine a worker like Ana, with years of experience, who is not considered for a promotion because an automated system determined she “did not have the ideal profile.” No one can explain why, or what data led to that decision.
How can Ana challenge the cold logic of a machine that decides her professional future without any possibility of a clear explanation? Algorithmic transparency and human oversight are no longer just ideals—they are urgent requirements. Human beings must always remain our focus.
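The mechanism behind cases like Ana’s can be illustrated with a toy sketch. Everything here is invented for illustration (the group labels, promotion rates, and the naive “model”), but it shows the core problem: a system trained only on biased historical decisions simply automates them.

```python
# Hypothetical illustration: a "model" trained on biased past decisions
# reproduces those decisions. All data below is invented.
from collections import defaultdict

# Historical records: (group, promoted). In this fictional past, equally
# qualified candidates in group "B" were promoted far less often.
history = [("A", True)] * 70 + [("A", False)] * 30 \
        + [("B", True)] * 30 + [("B", False)] * 70

# "Training": estimate the historical promotion rate for each group.
counts = defaultdict(lambda: [0, 0])  # group -> [promoted, total]
for group, promoted in history:
    counts[group][0] += promoted
    counts[group][1] += 1

def predict(group):
    """Recommend promotion if the group's historical rate exceeds 50%."""
    promoted, total = counts[group]
    return promoted / total > 0.5

print(predict("A"))  # True  -> the historical bias is now automated
print(predict("B"))  # False -> identical candidate, opposite outcome
```

No one in this pipeline ever typed a discriminatory rule; the discrimination was inherited from the data. Without explainability and human review requirements, such a system is as opaque to Ana as the real ones are today.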
The Challenge and Opportunity for Costa Rica
Costa Rican labor legislation still lacks a clear or sufficient response to these new challenges.
There are no specific rules requiring companies to guarantee the explainability of their algorithms, nor detailed protocols to protect the personal data of workers processed by such platforms. Nor has informed consent for the use of AI in internal processes been comprehensively regulated.
This creates a field of uncertainty, where cornerstones such as human dignity, privacy, and equality before the law could be compromised if the matter is not addressed in time.
Since 2023, four AI-related bills have been introduced (Dockets 23.771, 23.919, 24.484, and 24.875), reflecting a growing interest in updating the regulatory framework. However, these initiatives have drawn criticism for lacking sufficient technical rigor, failing to align with international standards, and adopting an overly bureaucratic approach. Paradoxically, one of them was even drafted with the assistance of ChatGPT.
The dilemma is clear: we are in the midst of a collective learning process. We still only partially understand the scope of AI, and these initial regulatory initiatives seek to strike a balance between protecting workers and promoting business innovation. Ultimately, this is an opportunity to refine and modernize the legal debate, not to halt technological evolution.
Costa Rica must move towards intelligent legislation for artificial intelligence. A modern legal framework should encompass principles of transparency, fairness, and algorithmic accountability; the right to an explanation of automated decisions; clear mechanisms for complaints and human review of algorithmic decisions; strengthened protection of workers’ personal data; and an explicit prohibition of algorithmic discrimination.
Moreover, regulation should differentiate between the capacities and resources of large companies and SMEs, establishing incentives for the responsible adoption of technology without overburdening those with less technical capacity.
In our country, we can build a model that combines technological innovation with effective protection of labor rights, thereby becoming a benchmark for Latin America.
If we succeed in enacting clear, agile, and modern regulations, we will not only attract digital investment and talent but also demonstrate that AI can be used to improve job quality, increase productivity, and reduce inequalities.
This is about building a national vision that understands that technological development should not dehumanize us but rather enhance our capacities as a society. In the end, technology must always remain at the service of human beings, never the other way around.