Skip to main content

Liability and governance challenges in the age of AI

When the European Union’s Artificial Intelligence Act (EU AI Act) came into effect in 2024, it marked the world’s first comprehensive regulatory framework for AI. The law introduced risk-based obligations—ranging from minimal to unacceptable—and codified requirements around transparency, accountability, and testing. But more than a legal milestone, it crystallized a broader debate: who is responsible when AI systems cause harm?

The EU framework sends a clear signal: responsibility cannot be outsourced. Whether an AI system is developed by a global model provider or embedded in a narrow enterprise workflow, accountability extends across the ecosystem. Most organizations now recognize distinct layers in the AI value chain:

  • Model providers, who train and distribute the core LLMs
  • Platform providers, who package models into usable products
  • System integrators and enterprises, who build and deploy applications

Each layer carries distinct—but overlapping—responsibilities. Model providers must stand behind the data and algorithms used in training. Platform providers, though not involved in training, play a critical role in how models are accessed and configured, including authentication, data protection, and versioning. Enterprises cannot disclaim liability simply because they didn’t build the model—they are expected to implement guardrails, such as system prompts or filters, to mitigate foreseeable risks. End-users are typically not held liable, though edge cases involving malicious or deceptive use do exist.

In the U.S., where no comprehensive AI law exists, a patchwork of executive actions, agency guidelines, and state laws is beginning to shape expectations. The National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) has emerged as a de facto standard. Though voluntary, it is increasingly referenced in procurement policies, insurance assessments, and state legislation. Colorado, for instance, allows deployers of “high-risk” AI systems to cite alignment with the NIST framework as a legal defense.

Even without statutory mandates, organizations diverging from widely accepted frameworks may face liability under negligence theories. U.S. companies deploying generative AI are now expected to document how they “map, measure, and manage” risks—core pillars of the NIST approach. This reinforces the principle that responsibility does not end with deployment. It requires continuous oversight, auditability, and technical safeguards, regardless of regulatory jurisdiction.

Guardrails and Mitigation Strategies

For IT engineers working in enterprises, understanding expectations on their liabilities is critical.

Guardrails form the backbone of corporate AI governance. In practice, guardrails translate regulatory and ethical obligations into actionable engineering controls that protect both users and the organization. They can include pre-filtering of user inputs, blocking sensitive keywords before they reach an LLM, or enforcing structured outputs through system prompts. More advanced strategies may rely on retrieval-augmented generation or domain-specific ontologies to ensure accuracy and reduce the risk of hallucinations.

This approach mirrors broader practices of corporate responsibility: organizations cannot retroactively correct flaws in external systems, but they can design policies and tools to mitigate foreseeable risks. Liability therefore attaches not only to the origin of AI models but also to the quality of the safeguards applied during deployment.

Increasingly, these controls are not just internal governance mechanisms—they are also the primary way enterprises demonstrate compliance with emerging standards like NIST’s AI Risk Management Framework and state-level AI laws that expect operationalized risk mitigation.

Data Security and Privacy Considerations

While guardrails help control how AI behaves, they cannot fully address the challenges of handling sensitive data. Enterprises must also make deliberate choices about where and how AI processes information.

Cloud services provide scalability and cutting-edge performance but require sensitive data to be transmitted beyond an organization’s perimeter. Local or open-source models, by contrast, minimize exposure but impose higher costs and may introduce performance limitations.

Enterprises must understand whether data transmitted to model providers can be stored, reused for training, or retained for compliance purposes. Some providers now offer enterprise options with data retention limits (e.g., 30 days) and explicit opt-out mechanisms, but literacy gaps among organizations remain a serious compliance risk.

Testing and Reliability

Even with secure data handling in place, AI systems remain probabilistic rather than deterministic. Outputs vary depending on prompt structure, temperature parameters, and context. As a result, traditional testing methodologies are insufficient.

Organizations increasingly experiment with multi-model validation, in which outputs from two or more LLMs are compared (LLM as a Judge). Agreement between models can be interpreted as higher confidence, while divergence signals uncertainty. This technique, however, raises new questions: what if the models share similar biases, so that their agreement may simply reinforce error?

Testing efforts are therefore expected to expand in scope and cost. Enterprises will need to combine systematic guardrails, statistical confidence measures, and scenario testing particularly in high-stakes domains such as healthcare, finance, or public safety.

Rigorous testing alone, however, cannot anticipate every way an AI system might be misused. That’s where “functional red teaming” comes in: deliberately simulating adversarial scenarios (including attempts by end-users to exploit legitimate functions) to uncover vulnerabilities that standard testing might miss. By combining systematic testing with red teaming, enterprises can better ensure that AI systems are safe, reliable, and resilient against both accidental errors and intentional misuse.

The Workforce Gap

Even the most robust testing and red teaming cannot succeed without skilled professionals to design, monitor, and maintain AI systems.

Beyond liability and governance, generative AI is reshaping the technology workforce itself. The automation of entry-level coding tasks has led many firms to reduce junior positions. This short-term efficiency gain carries long-term risks. Without entry points into the profession, the pipeline of skilled engineers capable of managing, testing, and orchestrating advanced AI systems may contract sharply over the next decade.

At the same time, demand is rising for highly versatile engineers with expertise spanning architecture, testing, security, and orchestration of AI agents. These “unicorn” professionals are rare, and without systematic investment in education and mentorship, the talent shortage could undermine the sustainability of responsible AI.

Conclusion

The integration of LLMs into business and society requires a multi-layered approach to responsibility. Model providers are expected to ensure transparency in training practices. Enterprises are expected to implement effective guardrails and align with evolving regulations and standards, including widely adopted frameworks such as the NIST AI RMF and EU AI Act.. Engineers are expected to test systems under a wide range of conditions. And policymakers must anticipate the structural effects on the workforce.

AI is unlikely to eliminate the need for human expertise. AI cannot be truly responsible without skilled humans to guide it. Governance, testing, and safeguards are only effective when supported by professionals trained to design, monitor, and intervene in AI systems. Investing in workforce development is therefore a core component of responsible AI—without it, even the most advanced models risk misuse, errors, and unintended consequences.

The post Liability and governance challenges in the age of AI appeared first on SD Times.



from SD Times https://ift.tt/QZlBOkf

Comments

Popular posts from this blog

A guide to data integration tools

CData Software is a leader in data access and connectivity solutions. It specializes in the development of data drivers and data access technologies for real-time access to online or on-premise applications, databases and web APIs. The company is focused on bringing data connectivity capabilities natively into tools organizations already use. It also features ETL/ELT solutions, enterprise connectors, and data visualization. Matillion ’s data transformation software empowers customers to extract data from a wide number of sources, load it into their chosen cloud data warehouse (CDW) and transform that data from its siloed source state, into analytics-ready insights – prepared for advanced analytics, machine learning, and artificial intelligence use cases. Only Matillion is purpose-built for Snowflake, Amazon Redshift, Google BigQuery, and Microsoft Azure, enabling businesses to achieve new levels of simplicity, speed, scale, and savings. Trusted by companies of all sizes to meet...

2022: The year of hybrid work

Remote work was once considered a luxury to many, but in 2020, it became a necessity for a large portion of the workforce, as the scary and unknown COVID-19 virus sickened and even took the lives of so many people around the world.  Some workers were able to thrive in a remote setting, while others felt isolated and struggled to keep up a balance between their work and home lives. Last year saw the availability of life-saving vaccines, so companies were able to start having the conversation about what to do next. Should they keep everyone remote? Should they go back to working in the office full time? Or should they do something in between? Enter hybrid work, which offers a mix of the two. A Fall 2021 study conducted by Google revealed that over 75% of survey respondents expect hybrid work to become a standard practice within their organization within the next three years.  Thus, two years after the world abruptly shifted to widespread adoption of remote work, we are dec...

Olive and NTT DATA Join Forces to Accelerate the Global Development and Deployment of AI Solutions

U.S.A., March 14, 2021 — Olive , the automation company creating the Internet of Healthcare, today announced an alliance with NTT DATA , a global digital business and IT services leader. The collaboration will fast track the creation of new healthcare solutions to transform the health experience for humans — both in the traditional healthcare setting and at home. As a member of Olive’s Deploy, Develop and Distribute Partnership Programs , NTT DATA is leveraging Olive’s open platform to innovate, build and distribute solutions to Olive’s customers, which include some of the country’s largest health providers. Olive and NTT DATA will co-develop new Loops — applications that work on Olive’s platform to provide humans real-time intelligence — and new machine learning and robotic process automation (RPA) models. NTT DATA and Olive will devote an early focus to enabling efficiencies in supply chain and IT, with other disciplines to follow. “This is an exciting period of growth at Olive, so...