AI Act Compact (2024)
Peter Hense, Tea Mustać

Table of Contents

Preface
I. On the Scope
  1. The Scope of the AI Act
    a. On AI Systems
      (1) Introduction
      (2) A Deeper Dive: Qualitative and Quantitative Aspects of AI Systems
      (3) On Ends and Beginnings: AI Infrastructure and AI Systems
      (4) Solutions From the Field of AI Safety, Computation and UML
      (5) Conclusion
    b. Risk-based Approach
    c. Personal Scope
      (1) Providers
      (2) Product Manufacturers
      (3) Deployers
      (4) Importers and Distributors
    d. Territorial Scope
    e. On AI Literacy
      (1) Operationalizing AI Literacy
        i. Objectives
        ii. Training Needs, Including Target Groups and Levels of Expertise
        iii. Training Content
        iv. Training Methods
        v. Training Frequency
        vi. Evaluation
      (2) Final Remarks on AI Literacy
  2. The Scope of the Book
II. Design
  1. Idea
  2. Legal Requirements Engineering
  3. Intended Purpose
    a. Usability Engineering
    b. Human-factor Engineering
  4. Initial Risk Categorization and Assessment
III. Development
  1. AI System Requirements
    a. Quality Management System
      (1) Types of Quality Management
      (2) Structure and Main Components
      (3) Software and AI System Quality Models
      (4) Operationalizing the Legal Requirements Under Article 17
        i. First Line
        ii. Second Line
        iii. Third Line
    b. Risk Management
      (1) Risk Identification
      (2) Risk Assessment
        i. Factors for Estimating Likelihood
        ii. Factors for Estimating Impact
      (3) Risk Mitigation
      (4) Risk Acceptance
      (5) Risk Tolerance
      (6) Establishing Controlling, Monitoring and Incident Response Processes
        i. Controlling
        ii. Monitoring
        iii. Incident Response Process
    c. Human Oversight
    d. Accuracy, Robustness, and Cybersecurity of AI Systems (Article 15 AI Act)
      (1) Overview
        i. Guiding Principles for the AI System Life Cycle (Article 15(1))
        ii. Benchmarking and Standardization, Article 15(2)
        iii. Documentation Obligations, Article 15(3)
        iv. Resilience, Article 15(4)
        v. Cybersecurity, Article 15(5)
      (2) Detailed Explanations
        i. Significance of Harmonized Norms and Standards in the AI Act
        ii. Consistent Performance Throughout the AI System Life Cycle: Universal Lessons in Machine Learning
        iii. What Are Feedback Loops and How Should They Be Addressed?
        iv. “Engineering Safety in Machine Learning”
          (a) Standardization: Accumulated Experience
          (b) AI Engineering Best Practices
          (c) Case Study: Safety and Robustness of AI Systems in the Automotive Sector
          (d) Inherently Safe Design in Machine Learning
          (e) A Framework for Testing AI Systems
          (f) AI Red Teaming
          (g) NIST Test Platform “Dioptra”
  2. Overview of Data Governance & Data Management (Article 10)
    a. Machine Learning and Training Data in a Nutshell
    b. Mandatory Quality Criteria for Training, Validation, and Testing of AI Systems, Article 10(1)
    c. Data Governance & Data Management, Article 10(2)
    d. Standardization of Data (Quality) Management
    e. Definitions and Metrics
      (1) Training Data, Article 3(29)
      (2) Validation Data, Article 3(30)
      (3) Validation Dataset, Article 3(31)
      (4) Testing Data, Article 3(32)
    f. The Individual Elements
      (1) Relevant Design Choices, Article 10(2)(a)
      (2) Data Collection Processes and the Origin of Data, and in the Case of Personal Data, the Original Purpose of Data Collection, Article 10(2)(b)
        i. Data Sources in Machine Learning Practice
        ii. Datasheets for Datasets
        iii. Data Protection Law
      (3) Relevant Data-Preparation Processing Operations, such as Annotation, Labeling, Cleaning, Updating, Enrichment, and Aggregation, Article 10(2)(c)
        i. Annotation and Labeling
        ii. Data Cleaning
        iii. Updating
        iv. Enrichment
        v. Aggregation
      (4) Formulation of Assumptions, Particularly Regarding the Information that the Data are Intended to Measure and Represent, Article 10(2)(d)
      (5) Assessment of the Availability, Quantity, and Suitability of the Necessary Data Sets, Article 10(2)(e)
      (6) Examination of Possible Biases Affecting Health and Safety, Fundamental Rights, or Leading to Discrimination Prohibited Under Union Law, Article 10(2)(f), and Appropriate Measures to Detect, Prevent, and Mitigate Possible Biases Identified, According to Article 10(2)(g)
      (7) Identification of Relevant Data Gaps or Shortcomings Preventing Compliance and Appropriate Mitigation Measures, Article 10(2)(h)
    g. Combating Bias and Discrimination, Articles 10(3), (4), and (5)
      (1) Recitals and Ethics Guidelines for Trustworthy AI by the High-Level Expert Group on Artificial Intelligence (HLEG AI, 2019)
      (2) Fundamental Rights Agency (FRA), LIBE, HLEG AI and the Toronto Declaration
      (3) Research and Science: It’s Not Just the Data, Stupid!
      (4) International Standardization
      (5) Relevant, Sufficiently Representative, Free of Errors and Complete in View of the Intended Purpose
        i. The Purpose of the System
        ii. Relevance
        iii. Representativeness
        iv. Error-Free
        v. Completeness
      (6) Balanced Statistical Characteristics in Datasets
      (7) Geographically, Contextually, Behaviorally, or Functionally Typical Datasets
        i. Geographical Setting
        ii. Contextual Setting
        iii. Behavioural Setting
        iv. Functional Setting
      (8) Processing of Sensitive Data for the Analysis and Mitigation of Biases
      (9) AI, Bias, and European Anti-Discrimination Law: An Overview
        i. Legal Framework
        ii. Scope of Application and Protected Characteristics
        iii. Direct and Indirect Discrimination
        iv. Justification
        v. Positive Measures
        vi. Reversal of Burden of Proof
        vii. Legal Consequences
        viii. Respondents
  3. Testing & Compliance
    a. Sandboxes
      (1) Sandboxes in the AI Act
      (2) Establishment and Operation of AI Sandboxes
      (3) A Case for Participating in AI Sandboxes?
      (4) Sandbox Plan
      (5) Exit Reports
      (6) Consequences
      (7) Processing of Data Within the Sandbox
    b. Testing in Real World Conditions Outside AI Regulatory Sandboxes
    c. Conformity Assessment
      (1) What?
      (2) When?
      (3) Who and How?
      (4) Exceptions Make The Rule
      (5) Why?
    d. Harmonised Standards, Common Specifications & Presumption of Conformity
      (1) Harmonised Standards
      (2) Presumption of Conformity
      (3) Common Specifications
      (4) Codes of Conduct and Guidelines
    e. Placing on the Market
  4. Technical Documentation
IV. Deployment
  1. Providers
    a. The Obvious
    b. Documentation Keeping (Article 18) and Automatically Generated Logs (Article 19)
    c. Risk Management
    d. Human Oversight
    e. Transparency and Provision of Information to Deployers and/or End Users
    f. Post-market Monitoring (Article 72), Corrective Action & Duty of Information (Article 20), Reporting Serious Incidents (Article 73) and Cooperation with the Authorities
      (1) Goal and Purpose
      (2) Key Features
      (3) Post-market Monitoring Plan
      (4) Exit Report
      (5) Interplay with Other Systems and Processes
  2. Deployers
    a. The Obvious: Due Diligence, Use According to Instructions for Use (Logs, Human Oversight), Transparency, Monitoring and Duty of Information, Reporting
      (1) Due Diligence: The Legal Deep-dive
      (2) Use According to Instructions for Use (Logs, Human Oversight)
      (3) Transparency
      (4) Monitoring and Duty of Information
    b. Data Governance
    c. Fundamental Rights Impact Assessment (FRIA)
      (1) Who?
      (2) What?
      (3) Why?
        i. Make Adjustments and Implement Additional Measures
        ii. To Assess the Acceptability of Residual Risks
      (4) End Result
    d. Consulting the Works Council
    e. Data Protection Impact Assessment
    f. Article 25 Obligations Along the AI Value Chain
V. Special Considerations
  1. IP, TDM and GenAI in the AI Act
    a. General Purpose AI in the AI Act
      (1) AI Model Providers and AI System Providers
      (2) General-purpose AI Model Provider’s Obligations
      (3) General-purpose AI System Provider’s Obligations
      (4) Copyright and General-purpose AI Models
    b. Training and Training Data
      (1) GenAI Needs Data (and It Needs a Lot of It)
      (2) Memorization and Overfitting
      (3) Text and Data Mining
        i. The Copyright in the Digital Single Market Directive (CDSM Directive)
        ii. TDM and Transformer Models
        iii. Three-Step Test
        iv. Conclusion
    c. Use, Inputs and Outputs
      (1) Use and Inputs
      (2) Outputs
      (3) The Curious Case of Retrieval-Augmented Generation (RAG)
  2. AI in Financial Institutions
    a. Two Important Caveats
    b. One Additional (Major) Burden
    c. Where It Gets Easier
  3. Biometric and Emotion Recognition Systems
    a. Biometric Identification
    b. Biometric Verification
    c. Biometric Categorisation
    d. Emotion Recognition
 