Navigating the EU AI Act: Key Compliance Dates and Implications


Introduction: From Landmark Law to Daily Reality

The EU Artificial Intelligence Act (EU AI Act) has moved from an abstract legislative project in Brussels to a concrete operational challenge for organizations building or using AI in (or touching) the EU. It is the world’s first comprehensive horizontal AI regulation, and its impact reaches far beyond Europe’s borders, effectively setting a global reference point for “responsible AI by law” rather than by voluntary principles.

As of 1 August 2024, the EU AI Act is formally in force, and we are now in a staggered rollout of obligations that translates high‑level legal text into day‑to‑day governance, design, and documentation work for product, legal, compliance, and engineering teams. Prohibitions on certain "unacceptable risk" AI practices and AI literacy obligations start applying from 2 February 2025, while obligations for providers of general‑purpose AI (GPAI) models begin from 2 August 2025, a year before most of the Act becomes applicable in August 2026.

This article offers a comprehensive, structured exploration of this “real‑world compliance phase”. It will:

  • trace the historical context and evolution of the EU AI Act;
  • explain its current relevance, with timelines, trends, and challenges;
  • illustrate practical applications via governance, risk classification, and documentation case examples;
  • and explore future implications for AI innovation, global regulation, and organizational strategy.

The target audience is professional and academic: legal, compliance, AI product owners, risk officers, and technical leaders who must translate the Act into concrete operating models. The aim is not just to summarize the law, but to illuminate how it is reshaping AI practice in real time.


1. Historical Context: How the EU AI Act Came to Be

1.1 From Ethics Guidelines to Binding Law

The EU’s journey toward regulating AI did not start with the Act itself. It emerged from a broader digital strategy focused on data protection (GDPR), platform regulation (Digital Services Act, Digital Markets Act), and cyber resilience. Around 2018–2019, the European Commission began to articulate AI as both an opportunity and a systemic risk.

Key milestones included:

  • 2018 – High-Level Expert Group on AI (HLEG AI)
    The European Commission convened an expert group to draft Ethics Guidelines for Trustworthy AI, published in 2019. These laid out principles such as human agency and oversight, privacy and data governance, transparency, diversity, non-discrimination, and accountability. They were non‑binding, but signaled that AI would not remain self‑regulated.
  • 2019 – Coordinated Plan on AI
    The Commission and Member States adopted a coordinated plan to foster AI development and address risks, preparing the ground for later legislation.
  • 2020 – Inception Impact Assessment & Public Consultation
    The Commission launched a consultation on policy options, including soft law (codes of conduct) and hard law (regulation). Feedback from industry, civil society, and academia showed growing support for risk‑based regulation rather than a ban on AI.

This period framed AI as something that needed a "trust first" approach: Europe would compete not by outspending on raw compute, but by becoming a global hub for safe and trustworthy AI.

1.2 The Risk‑Based Proposal (2021)

In April 2021, the Commission published its proposal for the Artificial Intelligence Act. It introduced the now‑familiar risk‑based structure:

  1. Unacceptable risk – banned outright.
  2. High‑risk – allowed, but under strict obligations.
  3. Limited risk – subject to transparency duties.
  4. Minimal risk – largely unregulated.

This “pyramid of risk” distinguished the Act from more sectoral or purely liability‑based approaches. It did not regulate AI as a monolith, but tied regulatory intensity to the system’s use and impact.

The legislative process then moved through the usual EU machinery:

  • European Parliament and Council debated and amended the proposal,
  • concerns around social scoring, biometric surveillance and law enforcement use of AI dominated early political discourse;
  • industry lobbied for flexibility and innovation protection, particularly around general‑purpose models and open‑source AI.

1.3 Negotiations, Trilogue, and Political Agreement (2021–2023)

Between 2021 and late 2023, the proposal was negotiated in trilogue (Commission, Parliament, Council). Key areas of contention and eventual compromise included:

  • Biometric surveillance and remote biometric identification (RBI)
    Civil society and Parliament pushed for bans on real‑time RBI in public spaces. The final compromise allows narrow carve‑outs (e.g., for locating victims of serious crimes or preventing terrorist attacks) but with stringent safeguards.
  • General‑purpose AI and foundation models
    The rise of large language models (LLMs) and generative AI (e.g., GPT‑4 class systems) during negotiations forced lawmakers to reconsider the architecture of the Act. Initially, the focus was on application‑level systems. By 2023, it became clear that the "upstream" model providers (GPAI) had to be directly regulated via dedicated provisions, including transparency and risk management duties.
  • Innovation vs. regulation
    To counter fears of stifled innovation, the Act introduced regulatory sandboxes, reduced obligations for SMEs in some contexts, and phased implementation to give industry time to adapt.

By December 2023, the EU reached political agreement on the text.

1.4 Entry into Force and a Staggered Implementation (2024–2026)

The EU AI Act formally entered into force on 1 August 2024. However, most obligations do not apply immediately. Instead, the Act is rolled out in phases:

  • 1 August 2024 – Act enters into force (publication + 20 days).
  • 2 February 2025 (6 months after entry into force) –
    • Prohibitions on “unacceptable risk” AI practices (Article 5) apply.
    • AI literacy obligations begin to apply.
  • 2 May 2025 (9 months) –
    • Codes of practice and some support measures.
  • 2 August 2025 (12 months) –
    • Obligations for providers of general‑purpose AI models become applicable.
  • 2 August 2026 (24 months) –
    • Most rules for high‑risk AI systems become applicable.
  • 2027–2028 –
    • Some sectoral and supervisory provisions reach full operation.

This staggered design is deliberate: it brings the most critical safeguards (bans and literacy) online early, gives upstream model providers a year to prepare, and leaves high‑risk application providers two years to re‑engineer their pipelines.

We are now in the transition from theory to practice: organizations are building AI governance programs, risk classification processes, and documentation systems to meet these dates.


2. Current Relevance: Why the EU AI Act Matters Now

2.1 Key Dates Driving “Real‑World Compliance”

Three milestones are particularly important for organizations today:

  1. 2 February 2025 – Bans and AI literacy start to bite
    From this date, the following become unlawful within the EU:
    • AI systems that apply social scoring, whether operated by public authorities or private actors.
    • AI used for real‑time remote biometric identification in public spaces for law enforcement (with narrow exceptions).
    • AI that scrapes facial images indiscriminately from the internet or CCTV for facial recognition.
    • AI systems that attempt to infer emotions in workplaces or educational institutions.
    • Certain forms of manipulative or exploitative AI (e.g., systems that exploit vulnerabilities of children or people with disabilities).
    At the same time, AI literacy requirements start applying. Organizations must take reasonable steps to ensure that staff and other persons operating or using AI systems on their behalf have an appropriate understanding of those systems’ capabilities, limitations, and the meaning of their outputs.
  2. 2 August 2025 – General-Purpose AI obligations apply
    From this date, developers and providers of general‑purpose AI (GPAI) models must comply with specific obligations, including:
    • Technical documentation about training, capabilities, and limitations.
    • Transparency around training data composition (categories, not individual datapoints).
    • Risk management and mitigation for systemic risks for “high‑impact” GPAI (very large models above certain compute thresholds).
    • Information-sharing duties to downstream deployers.
  3. 2 August 2026 – High‑Risk system obligations fully apply
    High‑risk AI systems (as defined in Annex III and other parts of the Act) must comply with comprehensive requirements, including risk management, data governance, documentation, logging, human oversight, robustness, and cybersecurity.

For many organizations, these dates are no longer distant legal markers; they are project milestones, feeding into roadmaps, hiring, tooling, and board‑level risk reporting.

2.2 Trends: A Governance and Documentation Wave

The Act is catalyzing a wave of AI governance-building across Europe and beyond. Current trends include:

  • Creation of AI governance committees and roles
    Companies are appointing AI Risk Officers, Chief AI Ethics & Compliance Officers, and cross‑functional AI councils combining legal, data, security, and product leaders.
  • Standardization around risk classification frameworks
    Organizations are building internal taxonomies that map their AI use cases against the Act’s categories (unacceptable, high‑risk, limited, minimal), often embedding this into project intake forms and model registries.
  • Documentation culture shift
    The Act’s requirements for technical documentation, logs, and records are pushing teams to treat documentation as a first‑class artifact—on par with code and model weights—rather than an afterthought.
  • Vendor and supply‑chain pressure
    Larger organizations are updating procurement and vendor due diligence templates to ask about AI Act alignment, CE marking for high‑risk systems, and GPAI provider obligations.

These trends reflect an emerging reality: AI compliance is no longer something that happens after deployment. It is becoming part of the design and delivery lifecycle.

2.3 Challenges: Ambiguity, Overlap, and Resource Constraints

The real‑world compliance phase is also exposing significant challenges:

  1. Ambiguity in classification
    Determining whether a system is “high‑risk” is not always straightforward. Article 6 sets classification rules, and Annex III lists high‑risk use cases (e.g., AI used in recruitment, credit scoring, law enforcement, critical infrastructure). But many systems are borderline (e.g., internal HR analytics vs. fully automated hiring decisions). Misclassification can mean under‑regulating (risking sanctions) or over‑regulating (wasting resources).
  2. Overlap with other legal frameworks
    Organizations must reconcile the AI Act with GDPR, sectoral safety law (e.g., for medical devices or machinery), the Cyber Resilience Act, and sector‑specific rules. This creates a complex compliance mesh where overlapping obligations must be harmonised.
  3. SMEs and resource gaps
    While the Act includes measures to support SMEs and startups, the reality is that smaller players face a disproportionate governance and documentation burden. Many lack in‑house legal teams or mature risk management processes. Without thoughtful guidance and tooling, there is a risk of innovation chilling, or of superficial, checkbox compliance.
  4. AI literacy in practice
    The requirement to ensure “AI literacy” sounds progressive but is operationally complex:
    • How deep must user education go?
    • Does it include employees who operate or monitor AI internally?
    • How do you measure whether literacy has been achieved?
    Organizations are experimenting with onboarding content, in‑product notices, FAQs, and training modules.

2.4 Data Points and Early Impact

While the AI Act is still in early implementation, several indicators highlight its growing impact:

  • Legal and consulting firms report a surge in AI Act readiness projects across finance, healthcare, manufacturing, and the public sector.
  • Tools like EU AI Act compliance checkers and model registries are emerging to standardize assessments and documentation.
  • National authorities and the European AI Office are ramping up. The Commission has already started publishing draft guidelines to clarify GPAI obligations and high‑risk system interpretations.

The message is clear: the AI Act is no longer an abstract future regulation—it is restructuring how AI projects are conceived and operated today.


3. Core Architecture of the EU AI Act

To understand real-world compliance, we need a clear picture of the Act’s structure.

3.1 The Risk‑Based Framework

The AI Act’s central logic is the risk-based approach (see the sketch after this list):

  1. Unacceptable risk – banned
    Certain AI practices are prohibited because they are considered fundamentally incompatible with EU values and rights (Article 5).
  2. High-risk – heavily regulated
    Systems that pose a significant risk to health, safety, or fundamental rights, especially in sectors like employment, education, critical infrastructure, and law enforcement.
  3. Limited risk – transparency obligations
    AI systems that interact with humans, generate synthetic content (deepfakes), or manipulate content, but do not meet high-risk criteria, must offer transparency (e.g., clear labeling of AI content).
  4. Minimal risk – largely unregulated
    Most everyday AI applications (e.g., spam filters, basic recommendation engines) fall into this category and are largely free from specific burdens.
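
To make this taxonomy operational, many teams encode it directly in their project intake tooling. Below is a minimal sketch in Python; the example use‑case mappings and the default‑to‑high rule are illustrative assumptions, not classifications drawn from the Act’s text.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative encoding of the AI Act's four-tier risk pyramid."""
    UNACCEPTABLE = "unacceptable"   # prohibited practices (Article 5)
    HIGH = "high"                   # Annex III areas and safety components
    LIMITED = "limited"             # transparency duties only
    MINIMAL = "minimal"             # largely unregulated

# Hypothetical mapping used by an internal intake form; the assignments
# below are examples for illustration, not a legal classification.
EXAMPLE_USE_CASES = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "cv screening for hiring": RiskTier.HIGH,
    "customer support chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def preliminary_tier(use_case: str) -> RiskTier:
    """First-pass tier; unknown use cases default to HIGH pending legal review."""
    return EXAMPLE_USE_CASES.get(use_case.lower(), RiskTier.HIGH)

print(preliminary_tier("CV screening for hiring"))  # RiskTier.HIGH
```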

3.2 Prohibited (“Unacceptable”) AI Practices

From 2 February 2025, the following practices are banned in the EU:

  • Social scoring, by public or private actors, that leads to unfavorable or disproportionate treatment of individuals in unrelated contexts.
  • Exploitative or manipulative systems that materially distort behavior, particularly of vulnerable groups (children, disabled persons).
  • Unlawful real‑time remote biometric identification in publicly accessible spaces for law enforcement, except for narrow exceptions under strict conditions.
  • Emotion recognition in workplaces and educational institutions.
  • Biometric categorisation using sensitive attributes (e.g., race, political opinions, sexual orientation) in certain contexts.
  • Indiscriminate facial image scraping from the internet or CCTV footage to build or expand facial recognition databases.

Compliance implication: organizations must audit existing AI deployments for these uses and discontinue them in the EU by 2 February 2025.

3.3 High-Risk AI Systems

High-risk AI systems are subject to comprehensive requirements. They fall into two broad categories:

  1. AI systems that are safety components of regulated products (e.g., medical devices, machinery, toys, aviation).
  2. Standalone AI systems in areas listed in Annex III, such as:
    • Biometric identification and categorization.
    • Management and operation of critical infrastructure.
    • Education and vocational training (e.g., grading, access).
    • Employment, worker management, and access to self‑employment (e.g., CV screening tools).
    • Access to essential private and public services (e.g., credit scoring, social benefits).
    • Law enforcement, migration, asylum, and border control.
    • Administration of justice and democratic processes.

Providers of high-risk systems must implement a risk management system, ensure data and data governance quality, maintain technical documentation, provide logs and traceability, ensure transparency and user information, support effective human oversight, and ensure robustness, accuracy, and cybersecurity.

3.4 General‑Purpose AI Models

The Act introduces dedicated obligations for providers of general‑purpose AI (GPAI) models, such as large language models and foundation models:

  • GPAI providers must prepare technical documentation, including information about capabilities, limitations, and training processes.
  • They must provide information and support to downstream deployers (e.g., instructions on appropriate use, known risks).
  • High‑impact GPAI models (those presenting systemic risk, presumed where cumulative training compute exceeds 10^25 floating‑point operations) must implement more robust risk management and security measures, and conduct model evaluation and adversarial testing.

These obligations start applying from 2 August 2025, giving model providers a short but critical window to operationalize compliance.

3.5 AI Literacy and Transparency

The AI Act also requires that users and affected persons have appropriate AI literacy, meaning they understand at a high level how AI systems function, their limitations, and how to interpret their outputs.

In practice, this intersects with:

  • Transparency duties for limited‑risk systems, such as:
    • Disclosing when users interact with an AI system (e.g., chatbots).
    • Labeling AI‑generated content (deepfake or synthetic media).
  • Internal training for staff who design, operate, or oversee high‑risk AI systems.

The literacy requirement is intentionally broad, leaving room for sector‑specific and organizational adaptation.


4. Practical Applications: How Organizations Are Responding

4.1 Building AI Governance and Risk Classification

Scenario 1: Cross‑border tech company with consumer and enterprise AI products

A European-headquartered technology company offers:

  • a SaaS platform that includes AI‑assisted document analysis;
  • an internal AI‑based hiring tool;
  • customer‑facing chatbots for support;
  • internal business analytics powered by machine learning.

To comply with the AI Act, the company takes these steps:

  1. Establishes an AI Governance Board
    • Members from legal, privacy, information security, product, data science, and ethics.
    • Mandate: maintain an AI inventory, approve high‑risk classification, oversee documentation, and coordinate with the Data Protection Officer (DPO).
  2. Creates an AI Use Case Registry
    • Every AI system (internal or external) is registered with metadata: purpose, data used, affected individuals, geography, and potential risk category.
    • The registry maps each use case to the AI Act’s risk categories, referencing Article 6 and Annex III (see the sketch after this list).
  3. Implements a Risk Classification Workflow
    • Projects submit a classification questionnaire; the tool generates a preliminary risk rating.
    • High‑risk candidates are escalated to the AI Governance Board for review.
  4. Introduces IAAL (“Internal As‑Approved List”) of Vendors and Models
    • Only pre‑vetted vendors or GPAI models (with AI Act‑aligned documentation) are allowed for use in high‑risk contexts.
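
As a minimal sketch, the registry entry and first‑pass classification check described above might look like the following in Python; the field names, questionnaire logic, and escalation rule are simplified assumptions for illustration, not a legal classification method.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIUseCase:
    """One entry in a hypothetical AI use case registry."""
    name: str
    purpose: str
    data_categories: list[str]
    affected_persons: str              # e.g. "job applicants", "customers"
    deployed_in_eu: bool
    annex_iii_area: str | None = None  # e.g. "employment" if applicable
    risk_tier: str = "unclassified"
    needs_board_review: bool = False
    registered_on: date = field(default_factory=date.today)

def classify(use_case: AIUseCase) -> AIUseCase:
    """First-pass classification; high-risk candidates are escalated to the board."""
    if use_case.deployed_in_eu and use_case.annex_iii_area:
        use_case.risk_tier = "high"
        use_case.needs_board_review = True
    else:
        use_case.risk_tier = "limited_or_minimal"
    return use_case

hiring_tool = classify(AIUseCase(
    name="internal-hiring-assistant",
    purpose="rank applicants for interview shortlists",
    data_categories=["CV text", "assessment scores"],
    affected_persons="job applicants",
    deployed_in_eu=True,
    annex_iii_area="employment",
))
print(hiring_tool.risk_tier, hiring_tool.needs_board_review)  # high True
```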

This governance layer becomes a permanent part of the development lifecycle, not a one‑off exercise.

4.2 High‑Risk Systems: Documentation and Conformity Assessment

Scenario 2: HR Tech Provider

A company offers an AI‑enabled recruitment platform across the EU, which automatically screens CVs, ranks candidates, and suggests shortlists. Under the AI Act, this is a high‑risk AI system (employment and worker management).

To remain in the market after August 2026, the provider must:

  1. Implement a Risk Management Framework
    • Identify potential harms (discrimination, exclusion of qualified candidates, bias across genders, ages, ethnicities).
    • Design mitigation measures (fairness constraints, bias testing, human reviewer oversight).
  2. Ensure Data Governance Quality
    • Document training data sources, preprocessing, representativeness, and potential bias.
    • Implement data minimization and quality checks, aligned with GDPR principles.
  3. Create Technical Documentation
    • System architecture, model types, feature sets, training methodology, evaluation metrics, known limitations, and post‑deployment monitoring processes.
    • This documentation must be sufficiently detailed to allow competent authorities to assess compliance.
  4. Set Up Logging and Record‑Keeping
    • Maintain logs of significant AI system events and decisions to support audits and incident investigation (see the sketch after this list).
  5. Provide Instructions for Use and Transparency
    • Clear instructions to client organizations about appropriate and inappropriate uses, required human oversight, and how to interpret AI outputs.
  6. Undergo Conformity Assessment
    • Before placing the system on the market or putting it into service, verify that it meets AI Act requirements.
    • Apply CE marking to signal conformity. For some cases, self‑assessment is possible; in others, a notified body may be required.910
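
As a rough illustration of step 4, the sketch below shows structured decision logging for the recruitment platform, assuming a Python service; the event fields, system name, and identifiers are invented for this example and are not a statement of exactly what the Act’s logging provisions require.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("recruitment_ai_audit")

def log_screening_decision(candidate_id: str, model_version: str,
                           score: float, shortlisted: bool,
                           human_reviewer: str | None = None) -> None:
    """Append one auditable, machine-readable record per automated screening decision."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": "cv-screening",          # hypothetical system identifier
        "model_version": model_version,
        "candidate_id": candidate_id,      # assumed to be pseudonymised upstream
        "score": round(score, 3),
        "shortlisted": shortlisted,
        "human_reviewer": human_reviewer,  # None until human oversight occurs
    }
    audit_log.info(json.dumps(event))

log_screening_decision("cand-8f3a", "ranker-2.4.1", 0.82, shortlisted=True)
```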

This is not a superficial paperwork exercise; it requires deep integration between legal, product, and data science teams.

4.3 GPAI Providers: New Upstream Responsibilities

Scenario 3: General‑Purpose AI Model Provider

A company develops a large multilingual transformer model used by hundreds of downstream developers to build chatbots, copilots, and generative applications. Under the AI Act, it qualifies as a GPAI provider.

From 2 August 2025, it must:

  1. Produce Model‑Level Technical Documentation (sketched after this list)
    • Training data composition at a categorical level (e.g., percentage from web crawl, books, code repositories).
    • High‑level description of architecture and capabilities.
    • Limitations and known failure modes (e.g., hallucinations, biases).
  2. Conduct and Document Evaluations
    • Benchmarks for safety, robustness, fairness, and misuse potential (e.g., ability to generate harmful content, disinformation, or code for cyberattacks).
  3. Implement Risk Management for High‑Impact Models
    • For models above certain compute thresholds, conduct systemic risk assessments, scenario analysis, and stress testing.
    • Put in place technical and organizational measures to mitigate systemic risks (e.g., rate limiting, content filtering, usage restrictions).
  4. Support Downstream Deployers
    • Provide clear guidance, documentation, and technical mechanisms (e.g., content filters, policy APIs) to help deployers meet their own obligations under the AI Act.
  5. Coordinate with the European AI Office
    • For very large or impactful models, engage with the AI Office for registration, reporting, and potential oversight.
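
A minimal sketch of how the categorical training‑data composition, limitations, and evaluation results from steps 1 and 2 might be captured as machine‑readable documentation; the structure, names, and values are illustrative assumptions, not an official AI Act template.

```python
import json

# Hypothetical model-level documentation record; every name and value below
# is an illustrative placeholder, not an official template or a real model.
model_documentation = {
    "model_name": "multilingual-transformer-x",
    "provider": "ExampleAI",
    "training_data_composition": {   # categorical shares, not individual datapoints
        "web_crawl": 0.62,
        "books_and_articles": 0.18,
        "code_repositories": 0.12,
        "curated_dialogue": 0.08,
    },
    "capabilities": ["text generation", "summarisation", "translation"],
    "known_limitations": [
        "may hallucinate facts",
        "uneven quality across low-resource languages",
    ],
    "evaluations": {
        "safety_benchmark": "passed internal threshold v1.2",
        "adversarial_red_teaming": "report on file, 2025-06",
    },
    "downstream_guidance": "https://example.com/model-usage-guide",  # placeholder URL
}

with open("gpai_technical_documentation.json", "w") as f:
    json.dump(model_documentation, f, indent=2)
```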

This marks a major shift: responsibility is now shared along the AI value chain, not just at the last-mile application level.

4.4 AI Literacy in Practice

Scenario 4: Public‑Facing Generative AI Platform

A media platform integrates generative AI to help users create images and text. To comply with literacy and transparency obligations, it:

  • Adds clear, plain‑language disclosures that users are interacting with an AI system.
  • Labels AI‑generated content, and offers optional watermarks or metadata for downstream provenance (see the sketch below).
  • Provides an “AI 101” section explaining:
    • what generative models are;
    • that outputs may be inaccurate or biased;
    • that users should not rely on outputs as professional advice without verification.
  • Trains customer support and content moderators on how the AI works, its failure modes, and safe‑use best practices.
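
A minimal sketch of how generated output could carry both a user‑facing disclosure and machine‑readable provenance metadata, assuming Python; the disclosure wording and metadata keys are illustrative choices, not mandated text.

```python
from datetime import datetime, timezone

# Illustrative disclosure text; the Act does not prescribe exact wording.
DISCLOSURE = "This content was generated with the help of an AI system and may contain errors."

def label_generated_content(text: str, model_name: str) -> dict:
    """Wrap generated text with a visible disclosure and provenance metadata."""
    return {
        "display_text": f"{text}\n\n[{DISCLOSURE}]",
        "provenance": {
            "ai_generated": True,
            "generator": model_name,  # assumed model identifier
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }

result = label_generated_content("Draft product description ...", "imagegen-text-v3")
print(result["display_text"])
```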

Over time, literacy measures can evolve into interactive tutorials, in‑product explanations, and contextual hints, making AI more understandable and trustworthy to everyday users.


5. Future Implications: Where the EU AI Act Is Taking Us

5.1 Global Regulatory Convergence (or Fragmentation)

The EU AI Act is already influencing policy beyond Europe. Countries and regions (e.g., Brazil, Canada, UK, some US states) are exploring or enacting AI rules that echo or respond to its approach.

Possible futures:

  • Convergence scenario
    Other jurisdictions adopt similar risk‑based models and recognize EU AI Act compliance as a “gold standard”, reducing friction for global companies. We see mutual recognition or interoperability frameworks emerge, similar to GDPR’s role in data protection.
  • Fragmentation scenario
    Different regions take divergent paths (e.g., more permissive in some countries, more state‑controlled in others). Companies face a “regulatory sudoku”, needing tailored compliance for each market.

Either way, the EU AI Act has become a reference point for global AI regulation debates.

5.2 Impact on Innovation and Open Source

Critics fear that heavy documentation and conformity assessment could slow down AI innovation, especially for startups and open‑source communities. Supporters argue that trust and safety are pre‑conditions for sustainable innovation.

Likely dynamics:

  • More professionalization of AI development
    Teams will integrate compliance, safety, and documentation from day one, similar to how privacy and security by design became standard after GDPR.
  • Tooling and automation for compliance
    We can expect a growing ecosystem of AI Act compliance platforms, automated documentation generators, model registries, and risk assessment tools, lowering the overhead.
  • Open source under pressure but not doomed
    Clarifications and guidelines are emerging to ensure open‑source models are not unfairly burdened when they are not deployed in high‑risk contexts. But large, general‑purpose open models may still face documentation expectations if they are widely deployed.

5.3 Technical Trajectory: Safer, More Transparent AI by Default

The Act will push the AI field toward:

  • Better evaluation and benchmarking of models, including fairness, robustness, and misuse resistance.
  • Explainability tooling and documentation as standard practice, even when not mandated, to ease internal review and external scrutiny.
  • Instrumentation and logging built into AI pipelines, enabling monitoring, runtime protections, and auditability.

This could accelerate the emergence of “AI DevOps + Governance” (often called AI GRC or ML Ops with compliance), where development, deployment, and oversight are tightly integrated.

5.4 Organizational Culture: From “Move Fast” to “Move Thoughtfully”

As organizations grapple with the AI Act, culture will likely shift:

  • Product teams will be more accustomed to asking legal and ethical questions early, not late.
  • Business stakeholders will increasingly frame AI initiatives in terms of risk‑reward trade‑offs, rather than pure capability.
  • Board‑level oversight of AI will become normal, with AI risk discussed alongside cybersecurity, climate, and geopolitical risk.

Paradoxically, this may unlock more ambitious AI use cases in sensitive domains (healthcare, education, public services) because robust guardrails make it politically and socially acceptable to deploy them.

5.5 Open Questions and Research Frontiers

Despite its ambition, the AI Act leaves many questions open, offering fertile ground for research and experimentation:

  1. Effectiveness of bans and literacy requirements
    Will banning emotion recognition in workplaces and social scoring practices actually reduce harm, or will similar practices resurface under new labels? How do we measure AI literacy outcomes?
  2. Metrics for “systemic risk” in GPAI
    As GPAI obligations come into force, we need robust metrics for systemic harms (e.g., disinformation, economic disruption, coordination of cyberattacks).
  3. Interplay with future technologies
    How will the AI Act evolve in the face of agentic AI, multi‑modal systems, synthetic reality, or post‑quantum crypto?
  4. Impact on small actors and global South
    Will compliance tooling and support be sufficient to prevent the Act from entrenching large incumbents while marginalizing smaller innovators, particularly outside the EU?
  5. Operationalization of human oversight
    The Act insists on “human oversight” for high‑risk systems, but what kind of human oversight is meaningful? When is it just a rubber stamp? This is an area where research, design, and policy will converge.

Conclusion: From Law on Paper to Living Practice

The EU AI Act’s “real‑world compliance” phase marks a turning point in the relationship between AI innovation and legal oversight. What began as high‑level ethical guidelines and political debates has now crystallized into concrete obligations, staggered across a timeline that started in August 2024 and accelerates through 2025 and 2026.

From 2 February 2025, organizations in and around the EU must ensure that banned AI practices are discontinued and that AI literacy is actively fostered. From 2 August 2025, providers of general‑purpose AI models must own their share of responsibility through documentation, evaluation, and risk management. And by 2 August 2026, providers and users of high‑risk AI systems will be operating under a fully fledged regulatory regime that demands governance, classification, documentation, oversight, and continuous monitoring.

This is not simply a compliance exercise. It is a re‑architecture of how AI is conceived, built, and deployed:

  • Governance structures and AI inventories are becoming standard.
  • Risk classification is integrated into project intake.
  • Documentation, logging, and transparency are treated as core technical work.
  • Upstream model providers are formally accountable for downstream impact.

Looking ahead, several paths remain open. The Act could help catalyze a global convergence around trustworthy AI, or contribute to regulatory fragmentation that complicates cross‑border operations. It may bolster trust and unlock AI deployment in high‑stakes domains, or—if poorly implemented—burden smaller innovators. Much will depend on how regulators, industry, and civil society collaborate in this early phase.

What is already clear is that the era of purely voluntary AI ethics is over, at least for those who build and deploy AI in relation to the EU. The AI Act turns ideals of fairness, transparency, and safety into legal obligations. For forward‑looking organizations, this is not merely a constraint but an opportunity: to design AI systems that are robust, respectful of human rights, and worthy of public trust—and to build a strategic advantage in a world where responsible AI is rapidly becoming the global norm.

