Artificial Intelligence

3 Key Takeaways

  1. Structure AI liability before deployment: Allocate responsibility among OEMs, software developers, and fleet operators through detailed indemnities, insurance requirements, and data-sharing obligations for incident reconstruction.
  2. Treat the EU AI Act as a global baseline: High-risk automotive AI systems face detailed requirements for risk management, technical documentation, and post-market monitoring. Build compliance frameworks that satisfy both AI regulations and existing vehicle safety regimes.
  3. Secure the AI data pipeline: Training data quality, privacy compliance, and cybersecurity protections are critical. Poor data governance creates both safety risks and regulatory exposure across the AI lifecycle.

AI concerns have evolved from abstract to operational. Our survey reveals a landscape in which companies are shifting from debating responsibility to building the systems that demonstrate it. Liability allocation for AI-driven vehicle decisions (48%) declined 30 percentage points from 78% last year. This substantial drop does not reflect reduced importance but rather industry acceptance that liability allocation is a contractual and insurance problem to be solved, rather than an abstract legal question to be debated. The 48% who still cite it recognize that operationalizing these liability frameworks remains complex, particularly in determining fault among OEMs, software developers, sensor suppliers, and fleet operators.

Data quality, privacy, and cybersecurity requirements for AI training data (44%) have emerged as a major concern, representing a new explicit category that reflects growing recognition that AI systems are only as reliable as the data used to train them. This 44% response rate indicates substantial anxiety about managing data governance across the AI lifecycle, from initial training datasets through continuous learning and model updates.

What was once positioned as a customer enhancement is now recognized as a legal risk: AI-powered driver interaction risks (31%) registered 4 percentage points higher than last year's 27%. This shift could reflect evolved thinking about voice assistants, driver monitoring systems, and in-cabin AI that continuously collects behavioral and biometric data.

Manufacturing and maintenance applications of AI have receded as legal concerns. AI use in manufacturing quality control (19%) and AI-enabled predictive diagnostics (18%) are both down from last year, suggesting these applications are now routine rather than legally novel. We were surprised to see AI vendor contracting risks (17%) rank so low in our survey, given the complexity of allocating responsibility for model integrity and regulatory compliance across vendor relationships.

One Big Thing: Following Europe's Lead

Compliance challenges under emerging AI laws and regulations (49%) now marginally exceed liability allocation (48%) as the top AI concern. The European Union’s AI Act, in force since 2024 and phasing in requirements through 2027, establishes the most comprehensive regulatory framework for automotive AI systems globally. While nominally applicable only to EU markets, the Act effectively sets standards that OEMs must meet worldwide because vehicle platforms are designed for global deployment and supply chains serve multiple markets simultaneously. No major automaker builds EU-only AI systems.

The AI Act treats most automotive AI systems as high-risk applications, including safety-critical components and ADAS or ADS functions. This classification triggers detailed requirements that layer on top of existing vehicle conformity assessment and product safety regimes. Companies must implement risk management systems, maintain technical documentation, ensure logging and traceability of AI decisions, provide for human oversight, establish quality management systems, and conduct post-market monitoring of AI system performance.

The complexity arises from dual compliance tracks. A Tier 1 supplier providing an AI-powered perception system must demonstrate compliance with both automotive functional safety standards and AI Act requirements for data quality, model validation, and bias mitigation. Without clear contractual allocation, both the supplier and the integrating OEM face potential enforcement exposure when regulators investigate an AI system failure.

U.S. and UK regulators are developing frameworks that draw heavily on the AI Act’s structure. NIST AI profiles, state AI statutes, and sectoral cybersecurity codes are moving from abstract principles to concrete obligations for data governance, transparency, and security that mirror the AI Act’s approach. This regulatory convergence means companies building AI compliance programs under the AI Act are likely to develop frameworks that will satisfy emerging requirements in other jurisdictions as well.

Artificial Intelligence Contact

Michael J. Word
Dykema Member
Chicago
312-627-2263
mword@dykema.com