U.S. Senator Cynthia Lummis has introduced the Responsible Innovation and Safe Expertise (RISE) Act, legislation aimed at establishing a civil liability framework for artificial intelligence applications.
The core proposal would shield AI developers, and entities deploying off-the-shelf AI models, from lawsuits arising from inherently unpredictable AI behavior. At the same time, it places responsibility on professionals who choose to use AI tools in their fields, stressing that they must understand the technology's limitations.
Legal experts reviewing the bill broadly support its developer-immunity provision as a safeguard against excessive litigation stifling U.S. innovation. Critics, however, contend that the bill's transparency requirements lack sufficient detail and could leave users under-protected.
The RISE Act takes a different approach from the EU's AI Act, which prioritizes identifiable user rights and stringent pre-deployment compliance requirements. Instead, it focuses on defining responsibilities when professionals integrate AI tools into their workflows.
Industry stakeholders say the legislation needs refinement before it can have a meaningful impact. Suggested improvements include mandating rigorous third-party audits of AI systems and broadening the bill's scope to cover AI applications offered directly to consumers.
The RISE Act is widely seen as a foundational legal structure for AI governance. Experts agree, however, that it must be developed further to address the technology's rapidly advancing complexity and inherent risks while maintaining public trust and safety.