PLENA
VERITA ASCENDA LEGIBLA FORTIA PROVA NAVIGA DIGITA DETECTA AEQUITA TEMPORA
Demo ▾
🌐 English Français Español Português Deutsch عربي Swahili हिन्दी Русский 中文 Bahasa فارسی
⚖ An Algorithm Decided. You Can Challenge It.

Algorithms Make Decisions About You.
You Have the Right to Know Why.

AI systems screen job applications, assess visas, score credit, and price insurance. The EU AI Act, GDPR Article 22, Canada's Treasury Board Directive, and equivalent laws give you specific rights. AEQUITA explains them — and gives you the formal request tools to exercise them yourself.

📚 What AEQUITA Is
AI can summarise rights frameworks. It cannot tell you which complaint body has jurisdiction in your country right now, under which regulation, with which deadline. The EU AI Act is live. AEQUITA is an AI rights information and tools platform — not a law firm. AEQUITA provides information about AI rights and generates formal request documents. It does not enforce your rights — only you can exercise them. Regulators and courts enforce AI law. For legal proceedings related to AI rights violations, consult a licensed attorney.
← All Platforms
🤖 Have algorithms already made decisions about you?
If you checked any box — an algorithm has already made a decision about you. You have rights. AEQUITA explains them.
Right to Explanation Letter

Generate a formal request for an explanation of any AI decision — under GDPR Art. 22 or EU AI Act. AEQUITA drafts it. You send it.

Your Algorithmic Footprint
Hiring AI Credit Scoring Visa Processing Insurance Pricing Social Media Reach Housing Applications Benefits Eligibility

In every one of these areas, you have specific rights. Most people have never been told.

💬 Get help on messaging apps
WhatsApp Telegram

Message PLENA directly on WhatsApp or Telegram — same knowledge, any app. Free. Requires a free AI key on first use.

← Back
⚙️
Connect AI for Specific Guidance
AEQUITA works in Demo mode without a key. Connect an AI provider for country-specific, situation-specific analysis of your rights.
Where to get a key
FREE · Groq: free tier, fast. Get key →
PAID · Claude: best for legal analysis. Get key →
← All Services
📜
Your Rights When AI Makes Decisions About You
These rights exist in law today. Most people do not know about them. The people most affected — immigrants, newcomers, minorities — are the least likely to have been told.
📜
Your AI Rights
AEQUITA provides rights information — not legal advice. These are general explanations of rights that exist in your jurisdiction. How they apply to your specific situation depends on the precise facts. For assistance enforcing these rights, contact your country's data protection authority or an employment/discrimination lawyer. Many offer free initial consultations.
← All Services
✊
Challenge an Algorithmic Decision
An AI or automated system rejected or disadvantaged you. Here is the exact process to request a human review, demand an explanation, and escalate if you are ignored.
✊
Your Challenge Strategy
AEQUITA provides rights information and challenge guidance — not legal representation. If your challenge is ignored or rejected and you believe you have been discriminated against, contact your country's relevant authority: ICO (UK) · CNIL (France) · OPC (Canada) · EEOC (USA employment) · CFPB (USA financial).
← All Services
💼
Hiring Algorithms — What They Do to You
72% of Fortune 500 companies use AI in hiring. This guide covers CV screening, automated video interview analysis, and psychometric scoring — and how these systems disproportionately disadvantage newcomers and minorities.
How CV screening AI works
Your CV may never be read by a human
Most large employers now run every CV through an Applicant Tracking System (ATS) that scores CVs by keyword matching, formatting compliance, employment gap detection, and educational credential parsing. A CV from a strong candidate with a foreign university name, non-standard date formats, or a name that does not match the demographic pattern of previous hires can be rejected before any human sees it.

What you can do: Request confirmation from the employer that your application received human review. Under GDPR (EU/UK) and equivalent legislation, you have the right to request that a solely automated rejection be reviewed by a human. Use a CV adaptor tool to optimise your CV for ATS systems.
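The keyword-matching behaviour described above can be illustrated with a minimal sketch. The scoring rule and the sample keywords are invented for illustration only; real ATS products use proprietary, far more elaborate criteria:

```python
# Hypothetical sketch of how an ATS might score a CV by keyword matching.
# Real systems are proprietary; the rule and keywords here are invented.

def ats_score(cv_text: str, job_keywords: list[str]) -> float:
    """Score a CV by the fraction of required keywords it contains."""
    text = cv_text.lower()
    hits = sum(1 for kw in job_keywords if kw.lower() in text)
    return hits / len(job_keywords)

job_keywords = ["python", "sql", "stakeholder management"]
cv = "Experienced analyst: Python, SQL, reporting dashboards."
print(round(ats_score(cv, job_keywords), 2))  # 2 of 3 keywords -> 0.67
```

Even this toy version shows the failure mode: a strong candidate who describes the same skills in different words, or in a format the parser cannot read, scores low before any human sees the CV.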
Automated video interview scoring
An algorithm may have scored your facial expressions
Some employers use AI video-interview tools (HireVue, pymetrics) that score candidates on speech patterns, word choice, and, in earlier versions, facial micro-expressions. Multiple studies have found these tools perform less accurately on darker skin tones and non-native speakers — systematically disadvantaging exactly the population PLENA serves.

Your rights: In the EU, automated video scoring of candidates constitutes automated decision-making under GDPR Article 22. You have the right to request human review of any decision based substantially on automated processing. In Illinois (USA), the AI Video Interview Act requires employer disclosure. Ask the recruiter explicitly whether your video was scored by an automated system.
What to do if you suspect algorithmic discrimination
The steps that create a record
1
Request in writing that your application be reviewed by a human. Send this via email so you have a record. Reference GDPR Article 22 (EU/UK) or applicable legislation in your jurisdiction.
2
Request the reasoning behind the rejection. Under GDPR you have the right to an explanation of automated decisions. Ask specifically whether the rejection was made by an automated system.
3
If you believe the system discriminated based on national origin, race, or equivalent protected characteristic — file a complaint with your employment equality authority. UK: EHRC. EU: national equality body. USA: EEOC. Canada: CHRC.
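The three steps above can be sketched as a simple letter generator. The wording below is an illustrative template, not vetted legal language; adapt it to your jurisdiction and the facts of your case:

```python
# Illustrative template for a written human-review request (GDPR Art. 22).
# The wording is a sketch, not vetted legal text.

from datetime import date

TEMPLATE = """Subject: Request for human review of automated decision

Dear Sir or Madam,

On {decision_date}, I received a rejection of my application (reference: {reference}).
Under Article 22 of the GDPR, I request:

1. Confirmation of whether this decision was made by solely automated means.
2. Meaningful information about the logic involved in the decision.
3. Human review of the decision.

Please respond in writing. Sent on {today}.

{name}
"""

def draft_request(name: str, reference: str, decision_date: str) -> str:
    return TEMPLATE.format(
        name=name,
        reference=reference,
        decision_date=decision_date,
        today=date.today().isoformat(),
    )

print(draft_request("A. Applicant", "JOB-1234", "2026-01-15"))
```

Sending it by email, as step 1 advises, is what turns the template into a dated, citable record.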
AEQUITA provides rights information and guidance — not legal representation. Employment discrimination cases are complex and fact-specific. For assistance with a formal complaint, contact your national employment equality authority or an employment lawyer. Many offer free initial consultations.
← All Services
💳
Credit & Insurance Algorithms
Algorithms that set your credit score, interest rate, or insurance premium may be using data you did not consent to and penalising you for factors correlated with being an immigrant or minority.
What credit and insurance algorithms use
Data you did not know was being scored
Beyond your repayment history, credit and insurance algorithms may use: your postal code (penalising immigrant-dense neighbourhoods), the frequency of address changes (common for newcomers), the time since your first account was opened (disadvantaging recent arrivals), the device you use to apply (mobile vs desktop), and in some jurisdictions, data aggregated from your shopping behaviour, social media, and app usage.

Some of these data uses are restricted or prohibited by law. The problem is that most people do not know to ask what data was used.
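A toy model shows how facially neutral features can act as proxies for newcomer status. The weights and the additive form are invented for illustration; real scoring models are proprietary:

```python
# Hypothetical illustration of how neutral-looking features penalise newcomers.
# Weights are invented for illustration; real scoring models are proprietary.

def risk_score(address_changes: int, years_since_first_account: float) -> float:
    """Toy additive score: higher means 'riskier' to the model."""
    score = 0.0
    score += 0.1 * address_changes  # frequent moves add 'risk'
    # a thin credit file (young first account) also adds 'risk'
    score += max(0.0, 0.2 * (5 - years_since_first_account))
    return round(score, 2)

long_resident = risk_score(address_changes=1, years_since_first_account=10)
recent_arrival = risk_score(address_changes=4, years_since_first_account=1)
print(long_resident, recent_arrival)  # 0.1 1.2
```

Neither input mentions national origin, yet the recent arrival scores an order of magnitude worse, which is exactly why the right to know what data was used matters.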
Your rights in credit and insurance decisions
The right to an explanation — and to dispute it
Right to know data used Right to explanation Right to dispute errors Right to human review
Under GDPR (EU/UK), the Equal Credit Opportunity Act (USA), the Financial Consumer Agency of Canada Act (Canada), and equivalent legislation: you have the right to know the principal reasons for a credit or insurance decision, the right to dispute inaccurate data, and the right to request human review of an automated decision. In the EU, you have the explicit right to not be subject to decisions based solely on automated processing that significantly affect you.

In practice: Ask the lender or insurer in writing for the specific factors that led to the decision and whether the decision was made by an automated system. If it was — request human review and provide any context that the algorithm would not have had (e.g. your employment start date, explanation of address changes).
AEQUITA provides rights information — not financial or legal advice. For credit reporting disputes: USA — AnnualCreditReport.com and CFPB (cfpb.gov) · Canada — Equifax and TransUnion dispute processes · UK — ICO and Financial Ombudsman Service · EU — your national data protection authority.
← All Services
⚠ High-Stakes Domain — Read Before Proceeding
Immigration decisions can have severe and irreversible consequences. The rights information on this screen is general and educational. For your specific immigration situation, always consult a licensed immigration attorney or accredited representative. AEQUITA does not provide immigration advice.
🌍
AI in Visa & Immigration Decisions
Countries increasingly use AI to risk-score visa applications, screen asylum claims, and flag individuals for enhanced review. The legal framework around these uses is contested and evolving — but rights do exist.
How AI is used in immigration
Risk scoring, document verification, and pattern matching
Immigration authorities in Canada, the UK, the EU, Australia, and the USA use AI systems to: risk-score visa applications (assigning likelihood of refusal before human review), verify the authenticity of documents at scale, identify patterns associated with fraud, and flag applications for enhanced scrutiny. Some of these systems have been found to introduce racial and national-origin bias — scoring applicants from certain countries as higher risk based on historical rejection patterns rather than individual merits.
The EU AI Act and immigration
Immigration AI is classified as High Risk
The EU AI Act explicitly classifies AI systems used in migration, asylum, and border control as high-risk. This means: providers must register the system, conduct conformity assessments, maintain documentation, enable human oversight, and ensure the system can be overridden by human decision-makers. If you are in the EU and believe an AI system affected your visa or asylum application, you have the right to request information about whether AI was used and to have the decision reviewed by a human.

Contact the relevant national migration authority and explicitly state that you are requesting information under the EU AI Act regarding the use of automated decision-making in your case.
Canada — IRCC and algorithmic processing
Chinook and automated screening
IRCC (Immigration, Refugees and Citizenship Canada) has used the Chinook system to process large volumes of applications. A Federal Court ruling in 2020 found that automated processing without adequate human review could violate procedural fairness requirements. If you received a refusal that was processed unusually quickly and gave no individual reasoning, you have the right to request written reasons and a reconsideration review. Consult a regulated RCIC or immigration lawyer regarding judicial review if appropriate.
AEQUITA provides general information about AI rights in immigration contexts — not immigration advice. Any action regarding your specific immigration case must be taken with the guidance of a licensed immigration attorney or accredited representative. For free or low-cost help: immigrationadvocates.org (USA) · gov.uk/find-an-immigration-adviser (UK) · Your provincial legal aid (Canada).
← All Services
🏛
AI Laws by Country — Plain Language
The legal frameworks that give you rights when AI makes decisions about you. What each law actually says — not the headlines, but the rights you can use today.
European Union — strongest protection globally
EU AI Act (2024) + GDPR Article 22
What the EU AI Act does: Creates a risk-tiered framework. High-risk AI (hiring, credit, immigration, education, law enforcement) must meet strict requirements — transparency, human oversight, registration, and conformity assessment. Bans certain AI uses entirely (social scoring by governments, real-time biometric surveillance in public). Gives individuals the right to explanation and human review for high-risk AI decisions.

Timeline — where enforcement stands now (March 2026): Prohibited practices have been enforceable since 2 February 2025. Full enforcement of high-risk AI obligations begins 2 August 2026. Finland was the first EU member state to obtain full AI Act enforcement powers (December 2025). If you are in the EU and an AI system affected a high-risk decision about you — hiring, credit, immigration, education — you can begin exercising your rights now: the law is in force, national authorities are operational.

GDPR Article 22: You have the right to not be subject to a decision based solely on automated processing that significantly affects you. You can request human review. The organisation must explain the logic of the decision in plain language upon request.

Enforcement: Your national Data Protection Authority (DPA). The EU AI Office for GPAI models and cross-border cases. Fines up to €35M or 7% of global turnover for prohibited practices; up to €15M or 3% for high-risk violations. EU AI Act official →
United Kingdom
UK GDPR Article 22 + ICO AI Guidance
Post-Brexit: UK retained GDPR Article 22 in the UK GDPR. Same right to not be subject to solely automated decisions, same right to human review and explanation. The ICO (Information Commissioner's Office) has published specific guidance on AI decision-making.

Employment: The Equality Act 2010 prohibits indirect discrimination by automated systems. If an AI hiring system disproportionately screens out people of a particular nationality, race, or similar protected characteristic — that is potentially unlawful even if unintentional.

Enforcement: ICO for data rights · EHRC for equality · Financial Ombudsman for financial services. ICO AI guidance →
Canada
AIDA (Artificial Intelligence and Data Act) + PIPEDA
AIDA (Artificial Intelligence and Data Act): Canada's proposed federal AI legislation was part of Bill C-27, which died on the order paper when Parliament was prorogued in January 2025. AIDA was never passed into law. A new Parliament must re-introduce AI legislation from scratch — no timeline confirmed as of March 2026. Canadians currently have no dedicated federal AI rights law equivalent to the EU AI Act.

Treasury Board Directive on Automated Decision-Making: Still in force for federal government AI systems. Requires impact assessments, human review mechanisms, and explainability based on the impact level of the decision — this gives Canadians meaningful rights against federal government automated decisions specifically.

PIPEDA / Bill C-27 Consumer Privacy Protection Act (CPPA): The privacy law portion of Bill C-27 also lapsed. PIPEDA (the existing federal privacy law) remains in force. Quebec's Law 25 (in force since 2023) is Canada's strongest privacy framework and includes rights around automated decision-making for Quebec residents.

Enforcement: OPC (Office of the Privacy Commissioner) · CHRC for employment discrimination · Quebec Commission d'accès à l'information for Law 25. OPC →
United States
Sector-specific laws + Executive Orders
No comprehensive federal AI Act — USA regulation remains sector-specific. Key protections: FCRA (Fair Credit Reporting Act) — right to know reasons for credit decisions, right to dispute. ECOA (Equal Credit Opportunity Act) — credit decisions cannot discriminate by national origin, race, sex. Title VII — employment discrimination applies to AI hiring tools. ADA — disability discrimination prohibition applies to automated assessments.

Federal AI Policy (2025–2026): President Trump's Executive Order on AI (December 11, 2025) and the White House National AI Policy Framework (March 20, 2026) establish a "minimally burdensome" federal standard and created a DOJ AI Litigation Task Force to challenge state AI laws deemed "onerous." A Senate vote of 99–1 rejected a proposed 10-year moratorium on state AI laws (July 2025). The former Biden Executive Order on AI (2023) directing federal agencies on AI civil rights protections was revoked. CFPB guidance that ECOA applies to algorithmic credit decisions remains in effect. FTC enforcement under Operation AI Comply continues, focusing on unsubstantiated AI capability claims.

State laws (active as of March 2026): Illinois AI Video Interview Act · Colorado AI Act (effective June 30, 2026, though under federal challenge) · NYC Local Law 144 (hiring algorithm bias audits) · California AI transparency law (in force January 1, 2026) · New York frontier AI transparency law (effective January 1, 2027). Check your state — the landscape is shifting rapidly. CFPB → EEOC →
Australia
Privacy Act + AI Governance Framework
Privacy Act 1988 (amended): Gives Australians rights to access personal data held by organisations and to correct inaccurate data used in automated decisions. The Australian Privacy Principles apply to large organisations and government agencies.

AI Governance Framework: Australia has published voluntary AI Ethics Principles but mandatory regulation is developing. The OAIC (Office of the Australian Information Commissioner) investigates complaints about automated data use.

Discrimination law: The Racial Discrimination Act and Sex Discrimination Act apply to automated decisions — an AI system that disproportionately disadvantages people based on race or national origin may be unlawful. OAIC →
AEQUITA provides information about AI laws in plain language — not legal advice. Laws change rapidly in this area. The information above reflects the state of legislation as of early 2026. For your specific situation, consult a data protection or employment lawyer in your jurisdiction.
Simple, honest pricing

Your AI rights are free to know. Always.

Free Forever
$0/month
No signup required.
  • All 6 services
  • Rights by country
  • Hiring algorithm guide
  • AI laws in plain language
  • Challenge guidance
  • 12 languages
  • Demo mode analysis (no key required)
With AI Key
Pay-as-you-use
Use your own Groq (free) or Claude key.
  • Everything in Free
  • Situation-specific AI analysis
  • Draft challenge letters
  • Complaint strategy for your case
  • Groq tier is free
Institutional
Custom
For HR, legal, compliance, NGOs.
  • Everything in AI tier
  • Employer compliance guidance
  • Staff rights training
  • AI procurement assessment
  • Regulatory update service

What AEQUITA is

AEQUITA is the twelfth platform in the PLENA suite — an AI rights and algorithmic accountability platform. It explains the rights that exist in law when AI systems make decisions about your employment, credit, immigration status, housing, or insurance — and how to use those rights.

Why this matters for PLENA's audience specifically

"The people most affected by algorithmic discrimination are the least likely to know their rights exist — and the least likely to be told."

Hiring AI that screens out international names. Credit algorithms that penalise frequent address changes. Insurance pricing that disadvantages immigrant postcodes. Visa processing systems that risk-score by nationality. These automated decisions shape the lives of newcomers and diaspora professionals — often invisibly, always without explanation. AEQUITA changes that.

AEQUITA extends FORTIA into the AI age

FORTIA explains your rights as a worker, tenant, and citizen under existing law. AEQUITA explains the new category of rights that emerged with AI legislation — specifically the rights that apply when a machine, rather than a human, makes decisions about you. Together they cover the full spectrum of rights that PLENA's audience needs.

Institutional value

Employers deploying AI in hiring, financial institutions using algorithmic credit scoring, and government agencies using automated decision-making all face increasing regulatory pressure to demonstrate compliance with AI rights legislation. AEQUITA's institutional tier helps compliance teams, HR departments, and legal firms understand their obligations — and build the human review mechanisms that law now requires.

Created by

AEQUITA is part of PLENA, created by Jean Claude Havyarimana. [email protected]

🔏 Bias Evidence Vault
Log every interaction where you suspect AI discrimination or algorithmic bias — job rejections, credit refusals, content moderation, automated decisions. Over time, your log becomes a pattern-of-bias report you can submit to the ICO, EHRC, or a regulator. All data stays on this device.
← Back
🏛
For Institutions
Academic institutions, employers, legal aid bodies, hospitals, and government agencies — deploy AEQUITA at scale for your communities, students, or beneficiaries.

What institutional access includes:

  • ✓ Admin dashboard — usage reporting, user management, audit trails
  • ✓ Bulk user onboarding — CSV upload or API provisioning
  • ✓ Data Processing Agreement (GDPR / UK GDPR compliant)
  • ✓ SSO/SAML integration for enterprise IT environments
  • ✓ White-label option for your brand
  • ✓ Annual invoicing compatible with procurement systems
  • ✓ Dedicated support and SLA
Universities & Higher Education
Support international students, researchers, and staff. Deploy as part of orientation, welfare, or student services.
Legal Aid & Advocacy Bodies
Multiply the reach of your services. Give clients the documentation tools, rights knowledge, and literacy they need before their appointment.
Employers & HR Teams
Support international hires, reduce employment disputes, and ensure every employee understands their rights and contracts from Day 1.
Government & Integration Bodies
Support communities you serve. Equip citizens, newcomers, and beneficiaries with the knowledge they need to access services and understand their rights.
Request Institutional Access

Contact us with your institution type, approximate user count, and intended use. We respond within 2 business days.

Contact for Institutional Access →

⚡ Enhanced Tools

Track Subject Access Request deadlines. Controllers must respond within one month (UK/EU GDPR) or 30 days (Canada and Australia).
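The deadline tracking described above reduces to simple date arithmetic. This sketch assumes a flat 30-calendar-day window; UK/EU GDPR formally says "one month", which can differ by a day or two depending on the month:

```python
# Sketch of a Subject Access Request deadline tracker.
# Assumes a flat 30-calendar-day window; UK/EU GDPR formally allows
# "one month", which may differ slightly by calendar month.

from datetime import date, timedelta

def response_deadline(sent: date, window_days: int = 30) -> date:
    """Date by which the controller must respond."""
    return sent + timedelta(days=window_days)

def days_remaining(sent: date, today: date, window_days: int = 30) -> int:
    """Days left before the response window closes (negative = overdue)."""
    return (response_deadline(sent, window_days) - today).days

sent = date(2026, 3, 1)
print(response_deadline(sent))                  # 2026-03-31
print(days_remaining(sent, date(2026, 3, 20)))  # 11
```

A negative `days_remaining` value is the signal to escalate to the relevant supervisory authority.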

Work through this checklist to determine whether your decision was fully automated or had meaningful human review.
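The checklist logic can be sketched as a small decision helper. The three questions and the "meaningful review" threshold are one interpretation for illustration, not a legal test:

```python
# Toy helper for the checklist above: was the decision solely automated?
# The questions and threshold are illustrative, not a legal test.

def solely_automated(human_saw_case: bool,
                     human_could_change_outcome: bool,
                     human_considered_individual_facts: bool) -> bool:
    """Treat a decision as solely automated unless a human performed a
    meaningful review: saw the case, had authority to change the outcome,
    and considered the individual's circumstances."""
    meaningful_review = (human_saw_case
                        and human_could_change_outcome
                        and human_considered_individual_facts)
    return not meaningful_review

# Rubber-stamping: a human clicked 'approve' without real consideration.
print(solely_automated(True, True, False))  # True -> likely solely automated
```

The rubber-stamping case matters because a token human in the loop does not, by itself, take a decision outside GDPR Article 22.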