⚖ An Algorithm Decided. You Can Challenge It.

Algorithms Make Decisions About You.
You Have the Right to Know Why.

AI systems screen job applications, assess visas, score credit, and price insurance. The EU AI Act, GDPR Article 22, Canada's Treasury Board Directive, and equivalent laws give you specific rights. AEQUITA explains them — and gives you the formal request tools to exercise them yourself.

🔍 What AEQUITA Is
AI can summarise rights frameworks. It cannot tell you which complaint body has jurisdiction in your country right now, under which regulation, with which deadline. The EU AI Act is live. AEQUITA tracks which regulator handles your case, which regulation gives you leverage, and what deadline you are working against.

AEQUITA is an AI rights information and tools platform — not a law firm. It provides information about AI rights and generates formal request documents. It does not enforce your rights — only you can exercise them; regulators and courts enforce AI law. For legal proceedings related to AI rights violations, consult a licensed attorney.
🤖 Have algorithms already made decisions about you?
If any of these situations apply to you — an algorithm has already made a decision about you. You have rights. AEQUITA explains them.
Right to Explanation Letter

Generate a formal request for an explanation of any AI decision — under GDPR Art. 22 or the EU AI Act. AEQUITA drafts it. You send it.

Your Algorithmic Footprint
Hiring AI · Credit Scoring · Visa Processing · Insurance Pricing · Social Media Reach · Housing Applications · Benefits Eligibility

In every one of these areas, you have specific rights. Most people have never been told.

⚖ Bias Vault — Document Your Experience

Log every AI decision made about you. Build a timestamped evidence record. Generate a formal bias pattern report you can file with a regulator.

Timestamped entries · Bias pattern report · Regulator-ready · Exports to PROVA
Connect to other PLENA platforms

PROVA builds your formal dispute file · TEMPORA tracks every deadline · CONSERVA permanently archives your key documents

💬 Get help on messaging apps
WhatsApp · Telegram

Message PLENA directly on WhatsApp or Telegram — same knowledge, any app. Free. Requires a free AI key on first use.

⚡ Live Regulatory Updates
Sources: EUR-Lex · LegiScan · ICO · EDPB
⚙ Connect AI for Specific Guidance
AEQUITA works in Demo mode without a key. Connect an AI provider for country-specific, situation-specific analysis of your rights.
Where to get a key
FREE · Groq: free tier, fast. Get key →
PAID · Claude: best for legal analysis. Get key →
📄 Your Rights When AI Makes Decisions About You
These rights exist in law today. Most people do not know about them. The people most affected — immigrants, newcomers, minorities — are the least likely to have been told.
Your AI Rights
AEQUITA provides rights information — not legal advice. These are general explanations of rights that exist in your jurisdiction. How they apply to your specific situation depends on the precise facts. For assistance enforcing these rights, contact your country's data protection authority or an employment/discrimination lawyer. Many offer free initial consultations.
Challenge an Algorithmic Decision
An AI or automated system rejected or disadvantaged you. Here is the exact process to request a human review, demand an explanation, and escalate if you are ignored.
Your Challenge Strategy
AEQUITA provides rights information and challenge guidance — not legal representation. If your challenge is ignored or rejected and you believe you have been discriminated against, contact your country's relevant authority: ICO (UK) · CNIL (France) · OPC (Canada) · EEOC (USA employment) · CFPB (USA financial).
📄 Hiring Algorithms — What They Do to You
72% of Fortune 500 companies use AI in hiring. This guide covers CV screening, automated video interview analysis, and psychometric scoring — and how these systems disproportionately disadvantage newcomers and minorities.
How CV screening AI works
Your CV may never be read by a human
Most large employers now run every CV through an Applicant Tracking System (ATS) that scores CVs by keyword matching, formatting compliance, employment gap detection, and educational credential parsing. A CV from a strong candidate with a foreign university name, non-standard date formats, or a name that does not match the demographic pattern of previous hires can be rejected before any human sees it.

What you can do: Request confirmation from the employer that your application received human review. Under GDPR (EU/UK) and equivalent legislation, you have the right to request that a solely automated rejection be reviewed by a human. Use a CV adaptor tool to optimise your CV for ATS systems.
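To make the failure mode concrete, here is a deliberately simplified sketch of keyword-based ATS scoring. It is illustrative only — real systems are proprietary and more complex, and every field name and weight below is invented — but it shows how a lookup-table approach to credentials and date parsing quietly penalises foreign degrees and unfamiliar formats:

```typescript
// Toy ATS scorer — hypothetical fields and weights, for illustration only.
interface CV {
  text: string;         // raw CV text
  university: string;   // parsed education field
  datesParsed: boolean; // did the parser understand the date formats?
}

const REQUIRED_KEYWORDS = ["project management", "stakeholder", "agile"];
// A finite lookup table of "recognised" institutions is where foreign degrees silently lose points.
const KNOWN_UNIVERSITIES = new Set(["university of toronto", "mcgill university"]);

function atsScore(cv: CV): number {
  let score = 0;
  for (const kw of REQUIRED_KEYWORDS) {
    if (cv.text.toLowerCase().includes(kw)) score += 30; // keyword matching
  }
  if (KNOWN_UNIVERSITIES.has(cv.university.toLowerCase())) score += 20; // credential parsing
  if (!cv.datesParsed) score -= 25; // unparsed dates read as an "employment gap"
  return score;
}

const REJECTION_CUTOFF = 50; // below this, no human ever sees the CV
```

Two candidates with identical experience can land on opposite sides of the cutoff purely because one degree is in the lookup table and the other is not.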
Automated video interview scoring
An algorithm may have scored your facial expressions
Some employers use AI video analysis tools (HireVue, Pymetrics) that score candidates on facial micro-expressions, speech patterns, and word choice. Multiple studies have found these tools perform less accurately on darker skin tones and non-native speakers — systematically disadvantaging exactly the population PLENA serves.

Your rights: In the EU, automated video scoring of candidates constitutes automated decision-making under GDPR Article 22. You have the right to request human review of any decision based substantially on automated processing. In Illinois (USA), the AI Video Interview Act requires employer disclosure. Ask the recruiter explicitly whether your video was scored by an automated system.
What to do if you suspect algorithmic discrimination
The steps that create a record
1. Request in writing that your application be reviewed by a human. Send this via email so you have a record. Reference GDPR Article 22 (EU/UK) or applicable legislation in your jurisdiction.
2. Request the reasoning behind the rejection. Under GDPR you have the right to an explanation of automated decisions. Ask specifically whether the rejection was made by an automated system.
3. If you believe the system discriminated based on national origin, race, or equivalent protected characteristic — file a complaint with your employment equality authority. UK: EHRC. EU: national equality body. USA: EEOC. Canada: CHRC.
AEQUITA provides rights information and guidance — not legal representation. Employment discrimination cases are complex and fact-specific. For assistance with a formal complaint, contact your national employment equality authority or an employment lawyer. Many offer free initial consultations.
📄 Credit & Insurance Algorithms
Algorithms that set your credit score, interest rate, or insurance premium may be using data you did not consent to and penalising you for factors correlated with being an immigrant or minority.
What credit and insurance algorithms use
Data you did not know was being scored
Beyond your repayment history, credit and insurance algorithms may use: your postal code (penalising immigrant-dense neighbourhoods), the frequency of address changes (common for newcomers), the time since your first account was opened (disadvantaging recent arrivals), the device you use to apply (mobile vs desktop), and in some jurisdictions, data aggregated from your shopping behaviour, social media, and app usage.

Some of these data uses are restricted or prohibited by law. The problem is that most people do not know to ask what data was used.
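A minimal sketch of the proxy problem — all features and weights here are invented for illustration, not taken from any real scoring model — shows how "neutral" inputs reproduce newcomer disadvantage:

```typescript
// Toy risk scorer — hypothetical features and weights, for illustration only.
interface Applicant {
  addressChangesLast3Years: number; // newcomers move often
  yearsSinceFirstAccount: number;   // recent arrivals have thin credit files
  postalCodeRiskBand: number;       // 0 (low) to 3 (high); often tracks immigrant-dense areas
}

function toyRiskScore(a: Applicant): number {
  let risk = 0;
  risk += a.addressChangesLast3Years * 8;                 // penalises frequent moves
  risk += Math.max(0, 10 - a.yearsSinceFirstAccount) * 5; // penalises short credit history
  risk += a.postalCodeRiskBand * 12;                      // geographic proxy
  return risk;
}

// Identical repayment behaviour, very different scores:
const settledResident = toyRiskScore({ addressChangesLast3Years: 0, yearsSinceFirstAccount: 12, postalCodeRiskBand: 0 }); // 0
const recentArrival   = toyRiskScore({ addressChangesLast3Years: 3, yearsSinceFirstAccount: 1,  postalCodeRiskBand: 2 }); // 93
```

None of these inputs mentions national origin — which is exactly why asking in writing what data was used matters.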
Your rights in credit and insurance decisions
The right to an explanation — and to dispute it
Right to know data used · Right to explanation · Right to dispute errors · Right to human review
Under GDPR (EU/UK), the Equal Credit Opportunity Act (USA), the Financial Consumer Agency of Canada Act (Canada), and equivalent legislation: you have the right to know the principal reasons for a credit or insurance decision, the right to dispute inaccurate data, and the right to request human review of an automated decision. In the EU, you have the explicit right to not be subject to decisions based solely on automated processing that significantly affect you.

In practice: Ask the lender or insurer in writing for the specific factors that led to the decision and whether the decision was made by an automated system. If it was — request human review and provide any context that the algorithm would not have had (e.g. your employment start date, explanation of address changes).
AEQUITA provides rights information — not financial or legal advice. For credit reporting disputes: USA — AnnualCreditReport.com and CFPB (cfpb.gov) · Canada — Equifax and TransUnion dispute processes · UK — ICO and Financial Ombudsman Service · EU — your national data protection authority.
⚠ High-Stakes Domain — Read Before Proceeding
Immigration decisions can have severe and irreversible consequences. The rights information on this screen is general and educational. For your specific immigration situation, always consult a licensed immigration attorney or accredited representative. AEQUITA does not provide immigration advice.
📄 AI in Visa & Immigration Decisions
Countries increasingly use AI to risk-score visa applications, screen asylum claims, and flag individuals for enhanced review. The legal framework around these uses is contested and evolving — but rights do exist.
How AI is used in immigration
Risk scoring, document verification, and pattern matching
Immigration authorities in Canada, the UK, the EU, Australia, and the USA use AI systems to: risk-score visa applications (assigning likelihood of refusal before human review), verify the authenticity of documents at scale, identify patterns associated with fraud, and flag applications for enhanced scrutiny. Some of these systems have been found to introduce racial and national-origin bias — scoring applicants from certain countries as higher risk based on historical rejection patterns rather than individual merits.
The EU AI Act and immigration
Immigration AI is classified as High Risk
The EU AI Act explicitly classifies AI systems used in migration, asylum, and border control as high-risk. This means: providers must register the system, conduct conformity assessments, maintain documentation, enable human oversight, and ensure the system can be overridden by human decision-makers. If you are in the EU and believe an AI system affected your visa or asylum application, you have the right to request information about whether AI was used and to have the decision reviewed by a human.

Contact the relevant national migration authority and explicitly state that you are requesting information under the EU AI Act regarding the use of automated decision-making in your case.
Canada — IRCC and algorithmic processing
Chinook and automated screening
IRCC (Immigration, Refugees and Citizenship Canada) has used the Chinook system to process large volumes of applications. A Federal Court ruling in 2020 found that automated processing without adequate human review could violate procedural fairness requirements. If you received a refusal that was processed unusually quickly and gave no individual reasoning, you have the right to request written reasons and a reconsideration review. Consult a regulated RCIC or immigration lawyer regarding judicial review if appropriate.
AEQUITA provides general information about AI rights in immigration contexts — not immigration advice. Any action regarding your specific immigration case must be taken with the guidance of a licensed immigration attorney or accredited representative. For free or low-cost help: immigrationadvocates.org (USA) · gov.uk/find-an-immigration-adviser (UK) · Your provincial legal aid (Canada).
📄 AI Laws by Country — Plain Language
The legal frameworks that give you rights when AI makes decisions about you. What each law actually says — not the headlines, but the rights you can use today.
European Union — strongest protection globally
EU AI Act (2024) + GDPR Article 22
What the EU AI Act does: Creates a risk-tiered framework. High-risk AI (hiring, credit, immigration, education, law enforcement) must meet strict requirements — transparency, human oversight, registration, and conformity assessment. Bans certain AI uses entirely (social scoring by governments, real-time biometric surveillance in public). Gives individuals the right to explanation and human review for high-risk AI decisions.

Timeline — where enforcement stands now (March 2026): Prohibited practices have been enforceable since 2 February 2025. Full enforcement of high-risk AI obligations begins 2 August 2026. Finland was the first EU member state to obtain full AI Act enforcement powers (December 2025). If you are in the EU and an AI system affected a high-risk decision about you — hiring, credit, immigration, education — you can begin exercising your rights now: the law is in force, national authorities are operational.

GDPR Article 22: You have the right to not be subject to a decision based solely on automated processing that significantly affects you. You can request human review. The organisation must explain the logic of the decision in plain language upon request.

Enforcement: Your national Data Protection Authority (DPA). The EU AI Office for GPAI models and cross-border cases. Fines up to €35M or 7% of global turnover for prohibited practices; up to €15M or 3% for high-risk violations. EU AI Act official →
United Kingdom
UK GDPR Article 22 + ICO AI Guidance
Post-Brexit: UK retained GDPR Article 22 in the UK GDPR. Same right to not be subject to solely automated decisions, same right to human review and explanation. The ICO (Information Commissioner's Office) has published specific guidance on AI decision-making.

Employment: The Equality Act 2010 prohibits indirect discrimination by automated systems. If an AI hiring system disproportionately screens out people of a particular nationality, race, or similar protected characteristic — that is potentially unlawful even if unintentional.

Enforcement: ICO for data rights · EHRC for equality · Financial Ombudsman for financial services. ICO AI guidance →
Canada
AIDA (Artificial Intelligence and Data Act) + PIPEDA
AIDA (Artificial Intelligence and Data Act): Canada's proposed federal AI legislation was part of Bill C-27, which died on the order paper when Parliament was prorogued in January 2025. AIDA was never passed into law. A new Parliament must re-introduce AI legislation from scratch — no timeline confirmed as of March 2026. Canadians currently have no dedicated federal AI rights law equivalent to the EU AI Act.

Treasury Board Directive on Automated Decision-Making: Still in force for federal government AI systems. Requires impact assessments, human review mechanisms, and explainability based on the impact level of the decision — this gives Canadians meaningful rights against federal government automated decisions specifically.

PIPEDA / Bill C-27 Consumer Privacy Protection Act (CPPA): The privacy law portion of Bill C-27 also lapsed. PIPEDA (the existing federal privacy law) remains in force. Quebec's Law 25 (in force since 2023) is Canada's strongest privacy framework and includes rights around automated decision-making for Quebec residents.

Enforcement: OPC (Office of the Privacy Commissioner) · CHRC for employment discrimination · Quebec Commission d'accès à l'information for Law 25. OPC →
United States
Sector-specific laws + Executive Orders
No comprehensive federal AI Act — USA regulation remains sector-specific. Key protections: FCRA (Fair Credit Reporting Act) — right to know reasons for credit decisions, right to dispute. ECOA (Equal Credit Opportunity Act) — credit decisions cannot discriminate by national origin, race, sex. Title VII — employment discrimination applies to AI hiring tools. ADA — disability discrimination prohibition applies to automated assessments.

Federal AI Policy (2025–2026): President Trump's Executive Order on AI (December 11, 2025) and the White House National AI Policy Framework (March 20, 2026) establish a "minimally burdensome" federal standard and created a DOJ AI Litigation Task Force to challenge state AI laws deemed "onerous." A Senate vote of 99–1 rejected a proposed 10-year moratorium on state AI laws (July 2025). The former Biden Executive Order on AI (2023) directing federal agencies on AI civil rights protections was revoked. CFPB guidance that ECOA applies to algorithmic credit decisions remains in effect. FTC enforcement under Operation AI Comply continues, focusing on unsubstantiated AI capability claims.

State laws (active as of March 2026): Illinois AI Video Interview Act · Colorado AI Act (effective June 30, 2026, though under federal challenge) · NYC Local Law 144 (hiring algorithm bias audits) · California AI transparency law (in force January 1, 2026) · New York frontier AI transparency law (effective January 1, 2027). Check your state — the landscape is shifting rapidly. CFPB → EEOC →
Australia
Privacy Act + AI Governance Framework
Privacy Act 1988 (amended): Gives Australians rights to access personal data held by organisations and to correct inaccurate data used in automated decisions. The Australian Privacy Principles apply to large organisations and government agencies.

AI Governance Framework: Australia has published voluntary AI Ethics Principles but mandatory regulation is developing. The OAIC (Office of the Australian Information Commissioner) investigates complaints about automated data use.

Discrimination law: The Racial Discrimination Act and Sex Discrimination Act apply to automated decisions — an AI system that disproportionately disadvantages people based on race or national origin may be unlawful. OAIC →
AEQUITA provides information about AI laws in plain language — not legal advice. Laws change rapidly in this area. The information above reflects the state of legislation as of early 2026. For your specific situation, consult a data protection or employment lawyer in your jurisdiction.
Simple, honest pricing

Your AI rights are free to know. Always.

Free, indefinitely
$0/month
No signup required.
  • All 6 services
  • Rights by country
  • Hiring algorithm guide
  • AI laws in plain language
  • Challenge guidance
  • 14 languages
  • AI-powered situation analysis (requires an AI key; see the next tier)
With AI Key
Pay-as-you-use
Use your own Groq (free) or Claude key.
  • Everything in Free
  • Situation-specific AI analysis
  • Draft challenge letters
  • Complaint strategy for your case
  • Groq tier is free
Institutional
Custom
For HR, legal, compliance, NGOs.
  • Everything in AI tier
  • Employer compliance guidance
  • Staff rights training
  • AI procurement assessment
  • Regulatory update service

What AEQUITA is

AEQUITA is the twelfth platform in the PLENA suite — an AI rights and algorithmic accountability platform. It explains the rights that exist in law when AI systems make decisions about your employment, credit, immigration status, housing, or insurance — and how to use those rights.

Why this matters for PLENA's audience specifically

"The people most affected by algorithmic discrimination are the least likely to know their rights exist — and the least likely to be told."

Hiring AI that screens out international names. Credit algorithms that penalise frequent address changes. Insurance pricing that disadvantages immigrant postcodes. Visa processing systems that risk-score by nationality. These automated decisions shape the lives of newcomers and diaspora professionals — often invisibly, always without explanation. AEQUITA changes that.

AEQUITA extends FORTIA into the AI age

FORTIA explains your rights as a worker, tenant, and citizen under existing law. AEQUITA explains the new category of rights that emerged with AI legislation — specifically the rights that apply when a machine, rather than a human, makes decisions about you. Together they cover the full spectrum of rights that PLENA's audience needs.

Institutional value

Employers deploying AI in hiring, financial institutions using algorithmic credit scoring, and government agencies using automated decision-making all face increasing regulatory pressure to demonstrate compliance with AI rights legislation. AEQUITA's institutional tier helps compliance teams, HR departments, and legal firms understand their obligations — and build the human review mechanisms that law now requires.

Created by

AEQUITA is part of PLENA, created by Jean Claude Havyarimana. hvyjea0012@protonmail.com

Deadline Calculator
Enter the date of your rejection and your country. AEQUITA generates every filing deadline — SAR response windows, DPA complaint periods, tribunal limits — so you never lose a right because a deadline slipped past.
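As a rough sketch of what the calculator computes — the statutory windows below are examples for two jurisdictions, not a complete or authoritative rule set:

```typescript
// Illustrative deadline computation. Periods shown are examples; always verify
// the actual limits for your jurisdiction and claim type. Simplification: real
// windows run from different trigger events (e.g. a SAR window runs from the
// date you send the request, not the rejection date).
interface Deadline { name: string; due: Date }

function addDays(start: Date, days: number): Date {
  const d = new Date(start); d.setDate(d.getDate() + days); return d;
}
function addMonths(start: Date, months: number): Date {
  const d = new Date(start); d.setMonth(d.getMonth() + months); return d;
}

function deadlinesFor(rejectionDate: Date, country: "UK" | "USA"): Deadline[] {
  switch (country) {
    case "UK":
      return [
        { name: "SAR response due (one calendar month, UK GDPR)", due: addMonths(rejectionDate, 1) },
        { name: "Employment tribunal limit (3 months less one day)", due: addDays(addMonths(rejectionDate, 3), -1) },
      ];
    case "USA":
      return [{ name: "EEOC charge deadline (180 days; 300 in deferral states)", due: addDays(rejectionDate, 180) }];
  }
}
```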
Track these deadlines in TEMPORA

TEMPORA extracts deadlines from official documents and keeps a permanent evidence trail.

🏥 Healthcare AI Rights
AI systems now influence diagnoses, triage decisions, treatment recommendations, and insurance prior authorisations. You have specific rights in each of these areas — and they are among the least known.
Prior Authorisation AI

Insurance companies use AI to automatically approve or deny requests for medical treatments, scans, and medications. In the US: the No Surprises Act and CMS rules require human review of AI denials. In the EU: AI prior authorisation systems are classified high-risk under the EU AI Act — you have the right to an explanation and human review. What to do: request in writing that a licensed physician review any AI-issued denial before you accept it.

AI in Diagnosis and Treatment

Hospitals increasingly use AI to assist with diagnosis and triage. In Texas (USA): SB 1188, effective September 2025, requires practitioners to maintain human oversight of AI-generated medical decisions and disclose AI use to patients. In the EU: diagnostic AI in the highest-risk tier requires pre-deployment assessment, transparency, and human oversight. Your right globally: ask your provider directly — "Was an AI system used in my diagnosis or treatment recommendation?" They are increasingly required to tell you.

Medical Data in AI Scoring

Insurance algorithms may use health data — including inferred data from app usage, shopping behaviour, or wearables — to set premiums. Under GDPR (EU/UK): you have the right to know what data was used and to challenge inaccurate data. Under HIPAA (US): your health data has specific protections but algorithmic use of inferred health data is less clearly covered. Under POPIA (South Africa): special category health data requires explicit consent for processing in insurance scoring.

What to Do Right Now
1. Ask your healthcare provider in writing whether AI was used in any decision about you.
2. Request under your data protection rights to see what personal data they hold and how it is processed.
3. If an AI-assisted decision harmed you: file a complaint with your health regulator AND your data protection authority — both simultaneously.
4. In the EU: report high-risk AI violations to your national market surveillance authority under the EU AI Act.
Healthcare AI law is evolving rapidly. This information reflects the law as of early 2026. For decisions that may have caused you harm, always consult a healthcare lawyer or patient rights advocate in your country.
📱 Social Media Algorithmic Rights
Platforms use AI to moderate content, decide what you see, and determine who sees your posts. The EU Digital Services Act now gives you enforceable rights against these decisions — and a formal complaints process.
EU Digital Services Act — in force for the largest platforms since August 2023, for all platforms since February 2024

If you are in the EU or accessible by EU users: platforms with over 45 million monthly EU users ("Very Large Online Platforms" — Meta, YouTube, TikTok, X, LinkedIn, Pinterest, Snapchat) must give you: the right to receive an explanation of why content was removed or restricted; the right to appeal moderation decisions to a human reviewer; the right to use a DSA-certified out-of-court dispute settlement body; and the right to a non-personalised (non-algorithmic) feed on recommendation-based services.

How to Challenge a Content Moderation Decision (EU)
Step 1: Use the platform's internal appeal mechanism — all VLOPs must provide this under DSA Article 20. Document the date you filed.
Step 2: If the platform upholds the decision, file with a DSA-certified out-of-court dispute settlement body. These are independent and free for users. Find certified bodies at: ec.europa.eu/info/law/better-regulation
Step 3: File a complaint with your national DSA coordinator. Each EU member state has a designated Digital Services Coordinator responsible for enforcement.
Step 4: For systemic issues or very large platforms: the European Commission has direct enforcement power. Fines up to 6% of global turnover.
Algorithmic Reach Suppression (Shadowbanning)

Platforms may reduce the visibility of your content without removing it or notifying you. Under DSA Article 27: VLOPs must explain the main parameters of their recommender systems in plain language. Under Article 38: you have the right to opt out of profiling-based (personalised) recommendations on VLOPs. What to request: a transparency report on your account explaining any reach restrictions applied, what triggered them, and how to appeal.

UK / Non-EU Users

The UK Online Safety Act (in force 2024) requires platforms to publish content moderation policies clearly and provide appeals for content removal. It is less detailed than DSA on algorithmic rights, but data rights under UK GDPR still apply to automated content decisions that affect you. Contact the ICO if a platform used automated processing of your personal data without adequate transparency.

DSA rights apply to platforms serving EU users. If you are outside the EU, UK Online Safety Act and general data protection rights may still apply. Check your country's digital rights laws.
🇺🇸 US State-by-State AI Law Guide
Over 70 AI-related laws passed in at least 27 states in 2025 alone. The laws that actually apply to most US workers and consumers are state laws, not federal ones. Select your state for specific, current protections.
📄 Template Library
Ready-to-send formal letters — pre-filled with the correct legal citations for your country. Select a template, enter your details, and send.

A Subject Access Request compels an organisation to provide all personal data they hold about you — including any data used in automated decisions. This is the foundation of every AI rights challenge.

Under GDPR Article 22 (EU and UK), you have the right to request that a solely automated decision be reviewed by a human being. This letter formally invokes that right.

Under the EU AI Act (Article 86) and GDPR Article 22, you have the right to a meaningful explanation of how an automated decision was reached — what data was used, what weighting was applied, and why the decision went the way it did.

If an organisation fails to respond to your SAR or challenge letter, the next step is a formal complaint to your Data Protection Authority. This template is ready to send directly to your national DPA.

EEOC charges must be filed within 180 days of the discriminatory act (300 days in states with equivalent agencies). This template prepares the narrative for your EEOC charge submission at eeoc.gov/filing-charge-discrimination

The ICO (Information Commissioner's Office) handles UK GDPR complaints. File at ico.org.uk/make-a-complaint. This template prepares your complaint narrative.

NYC Local Law 144 requires employers in New York City to publish the results of independent bias audits for any AI hiring tools — and to notify candidates that AI was used. This letter formally requests disclosure of audit results and notification compliance.

Illinois's Artificial Intelligence Video Interview Act requires employers to notify candidates before using AI to evaluate video interviews, explain how the AI works, and obtain consent. Non-compliance is actionable under the Illinois Human Rights Act.

📋 My Cases
Track every challenge, SAR, and complaint. Log responses. Record outcomes. All stored locally on your device — nothing is sent to any server.
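A minimal sketch of what "stored locally" means in practice, assuming a browser environment — the field names and storage key here are illustrative, not AEQUITA's actual schema:

```typescript
// Local-only case log: everything lives in the browser's localStorage; no network calls.
interface CaseRecord {
  id: string;
  organisation: string;
  decisionType: string; // e.g. "hiring rejection", "credit refusal"
  loggedAt: string;     // ISO timestamp
  responses: string[];  // replies received, appended over time
}

const STORAGE_KEY = "aequita.cases"; // hypothetical key name

function loadCases(): CaseRecord[] {
  return JSON.parse(localStorage.getItem(STORAGE_KEY) ?? "[]");
}

function logCase(c: Omit<CaseRecord, "id" | "loggedAt">): CaseRecord {
  const record: CaseRecord = { ...c, id: crypto.randomUUID(), loggedAt: new Date().toISOString() };
  const all = loadCases();
  all.push(record);
  localStorage.setItem(STORAGE_KEY, JSON.stringify(all)); // device-local write only
  return record;
}
```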
+ Log a New Case
⊕ Report to Collective Database (opt-in)

Your case — stripped of identifying details — contributes to pattern detection. If 40 people report the same employer, AEQUITA can surface that to regulators and journalists.
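The aggregation idea is simple enough to sketch — hashed organisation identifiers plus a report threshold; the field names and threshold below are illustrative assumptions, not the production pipeline:

```typescript
// Count anonymised reports per organisation and surface threshold crossings.
interface AnonymisedReport {
  organisationHash: string; // hashed org identifier — no personal data
  decisionType: string;
}

const ALERT_THRESHOLD = 40; // e.g. 40 reports against the same employer

function systemicAlerts(reports: AnonymisedReport[]): string[] {
  const counts = new Map<string, number>();
  for (const r of reports) {
    counts.set(r.organisationHash, (counts.get(r.organisationHash) ?? 0) + 1);
  }
  return [...counts.entries()]
    .filter(([, n]) => n >= ALERT_THRESHOLD)
    .map(([org]) => org);
}
```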

Build a formal case file in PROVA

PROVA creates a formally structured dispute package — timestamped evidence log, complaint letters, escalation guide — that regulators and tribunals can act on.

🏢 Employer AI Compliance Checker
For HR teams, compliance officers, and legal departments. Answer 10 questions about your AI hiring and decision-making processes — receive a gap report showing exactly which laws apply and what you still need to do.
Organisation profile
AI systems in use
Current compliance measures
🔍 Data Export Auditor
Describe what you found in your data export. AEQUITA identifies algorithmic profiling, inferred sensitive attributes, and dark patterns — without you sharing any personal data.
🔒 Privacy first. Only describe categories of data you found — never paste actual records, account numbers, or medical details. AEQUITA sends your description to our AI proxy for analysis; if you follow this guidance, no personal data is ever transmitted from your device.
🌍 Harm Pattern Map
Anonymised, crowdsourced reports of algorithmic harm. Patterns emerge when multiple people report the same organisation. No personal data is stored or displayed.
Total reports · Organisations flagged · Systemic alerts
Add your report
Your case — anonymised — contributes to pattern detection. When 40 people report the same org, AEQUITA alerts regulators.
Cross-Border Case Engine
You live in one country but were harmed by a company based in another. AEQUITA maps which laws apply from both sides, which regulators have jurisdiction, and gives you the best-fit strategy across borders.
🔏 Bias Evidence Vault
Log every interaction where you suspect AI discrimination or algorithmic bias — job rejections, credit refusals, content moderation, automated decisions. Over time, your log becomes a pattern-of-bias report you can submit to a regulator such as the ICO or EHRC. All data stays on this device.
📄 For Institutions
Academic institutions, employers, legal aid bodies, hospitals, and government agencies — deploy AEQUITA at scale for your communities, students, or beneficiaries.

What institutional access includes:

  • ✓ Admin dashboard — usage reporting, user management, audit trails
  • ✓ Bulk user onboarding — CSV upload or API provisioning
  • ✓ Data Processing Agreement (GDPR / UK GDPR compliant)
  • ✓ SSO/SAML integration for enterprise IT environments
  • ✓ White-label option for your brand
  • ✓ Annual invoicing compatible with procurement systems
  • ✓ Dedicated support and SLA
Universities & Higher Education
Support international students, researchers, and staff. Deploy as part of orientation, welfare, or student services.
Legal Aid & Advocacy Bodies
Multiply the reach of your services. Give clients the documentation tools, rights knowledge, and literacy they need before their appointment.
Employers & HR Teams
Support international hires, reduce employment disputes, and ensure each employee understands their rights and contracts from Day 1.
Government & Integration Bodies
Support communities you serve. Equip citizens, newcomers, and beneficiaries with the knowledge they need to access services and understand their rights.
Request Institutional Access

Contact us with your institution type, approximate user count, and intended use. We respond within 2 business days.

Contact for Institutional Access →

⚡ Additional Tools

Track Subject Access Request deadlines. Controllers must respond within one calendar month (UK/EU GDPR) or 30 days (Canada and Australia).

Work through this checklist to determine whether your decision was fully automated or had meaningful human review.

AEQUITA · Watchlists

What's changing in AI accountability

Unlike the other PLENA apps that resolve a single question and close, AEQUITA stays open. Your exposure to algorithmic decisions doesn't end — so the monitoring doesn't either.

Active watchlists (curated + personal) · Changes this week (last 7 days) · High-impact signals (enforcement or rule changes)
AEQUITA groups live legal updates into watchlists that match how a person affected by an algorithmic decision actually thinks about their situation. Curated watchlists are built for the most common concerns; personal watchlists let you track something specific to your situation.

Each watchlist pulls from the PLENA legal-updates feed, populated daily by a background job that fetches from EUR-Lex (EU), ICO (UK), LegiScan (US), and EDPB. When a law changes or a regulator issues a new decision, it flows into matching watchlists automatically.
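As a sketch of the routing step only — the source names come from the paragraph above, but the field names and keyword matching are assumptions, not the actual PLENA pipeline:

```typescript
// Route incoming legal updates into any watchlist whose keywords match.
interface LegalUpdate { source: "EUR-Lex" | "ICO" | "LegiScan" | "EDPB"; title: string; url: string }
interface Watchlist { name: string; keywords: string[]; updates: LegalUpdate[] }

function routeUpdates(updates: LegalUpdate[], watchlists: Watchlist[]): void {
  for (const u of updates) {
    for (const w of watchlists) {
      if (w.keywords.some(k => u.title.toLowerCase().includes(k.toLowerCase()))) {
        w.updates.push(u); // an update flows into every matching watchlist
      }
    }
  }
}
```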