Artificial Intelligence

3 Key Takeaways

  1. Structure AI liability before deployment: Allocate responsibility among OEMs, software developers, and fleet operators through detailed contractual indemnities, insurance requirements, and data-sharing obligations for incident reconstruction.
  2. Secure the AI data pipeline: Training data quality, privacy compliance, and cybersecurity protections are critical. Poor data governance could create both legal and regulatory exposure across the AI lifecycle.
  3. Prepare for AI regulatory compliance: The EU AI Act is beginning to take effect, while AI-related regulatory frameworks are continuing to develop in the U.S., UK, and China. Companies should consider how these jurisdiction-specific developments may inform their product development and regulatory planning, depending on where they operate.

AI concerns have evolved from abstract to operational. Our survey reveals a landscape in which companies are shifting from debating responsibility to negotiating frameworks that allocate it at the front end. Concern over where to allocate liability for AI-driven vehicle decisions (48%) declined 30 percentage points from 78% last year. This substantial drop does not reflect reduced importance but rather industry acceptance that liability allocation is a contractual and insurance problem to be solved. The 48% who cite liability allocation as a concern likely recognize that operationalizing these frameworks remains complex, particularly in determining potential liability allocation among OEMs, software developers, sensor suppliers, and fleet operators.

Data quality, privacy, and cybersecurity requirements for AI training data (44%) have emerged as a major concern, representing a new explicit category that reflects growing recognition that AI systems are only as reliable as the data used to train them. The 44% response rate indicates concern about managing data governance across the AI lifecycle, from initial training datasets through continuous learning and model updates.

Additionally, the risk presented by AI-powered driver interaction (31%) registers 4 percentage points higher than last year’s 27%. This shift could reflect an evolved view of voice assistants, driver monitoring systems, and in-cabin AI that continuously collect behavioral and biometric data.

Concerns about AI use in manufacturing and maintenance applications remain comparatively low. Reported concern around AI use in manufacturing quality control (19%) and AI-enabled predictive diagnostics (18%) is down marginally from last year, suggesting these applications are viewed as routine rather than legally novel. By contrast, we were surprised to see relatively low concern about AI vendor contracting risks (17%), given the complexity of allocating responsibility for model integrity and regulatory compliance across vendor relationships.

By the Numbers

Which AI-related legal risks will be top priorities for the industry in 2026?*

*Asked to select up to three

One Big Thing:

What is Europe Doing?

The survey shows that compliance challenges arising from emerging AI laws and regulations (49%) have narrowly surpassed liability allocation (48%) as the leading AI concern. The European Union’s AI Act, in force since 2024 and phasing in requirements through 2027, establishes the most comprehensive regulatory framework for automotive AI systems globally. While nominally applicable only to EU markets, the big question remains whether the Act will, in practice, set baseline standards that companies adopt worldwide.

The EU AI Act treats most automotive AI systems as “high-risk” applications. This classification triggers additional obligations that layer on top of existing vehicle conformity assessment and product compliance regimes. If companies seek to comply with the EU AI Act, they may consider implementing risk management systems, maintaining technical documentation, ensuring logging and traceability of AI decisions, providing for human oversight, establishing quality management systems, and conducting post-market monitoring of AI system performance.

Companies should also be aware of the complexity that arises from managing dual compliance tracks with differing standards. U.S. and UK regulators are developing frameworks that draw heavily on the EU AI Act’s structure. At the same time, NIST AI profiles, state AI statutes, and sectoral cybersecurity codes are moving from abstract principles to concrete obligations for data governance, transparency, and security that largely mirror the EU AI Act’s approach. As a result, to the extent companies build AI compliance programs under the EU AI Act, such frameworks are likely to satisfy emerging requirements in other jurisdictions as well.

Artificial Intelligence Contact

Michael J. Word
Member
Chicago
312-627-2263
mword@dykema.com