Cynthia Lummis introduces the RISE Act, an AI bill proposal that requires transparency in exchange for legal immunity

Senator Cynthia Lummis (R-WY) has introduced the Responsible Innovation and Safe Expertise (RISE) Act of 2025, a legislative proposal designed to clarify liability for artificial intelligence (AI) tools used by professionals.

The bill would require transparency from AI developers, though it stops short of requiring models to be open source.

In a press release, Lummis said the RISE Act would mean that professionals, such as doctors, lawyers, engineers and financial advisers, remain legally responsible for the advice they give, even when it is informed by AI systems.

At the same time, the AI developers who create these systems can shield themselves from civil liability when things go wrong only if they publicly release model cards.

The proposed bill defines model cards as detailed technical documents that disclose an AI system’s training data sources, intended use cases, performance metrics, known limitations and potential failure modes. All of this is intended to help professionals assess whether the tool is appropriate for their work.

“Wyoming values both innovation and accountability; the RISE Act creates predictable standards that encourage safer AI development while preserving professional autonomy,” Lummis said in a press release.

“This legislation does not create blanket immunity for AI,” Lummis continued.

However, the immunity granted under the bill has clear boundaries. The legislation excludes protection for developers in cases of recklessness, willful misconduct, fraud, knowing misrepresentation, or when actions fall outside the defined scope of professional usage.

In addition, developers face a duty of ongoing accountability under the RISE Act. AI documentation and specifications must be updated within 30 days of deploying new versions or discovering significant failure modes, reinforcing continuous transparency obligations.

Stops short of open source

The RISE Act, as currently written, stops short of mandating that AI models be fully open source.

Developers can withhold proprietary information, but only if the redacted material is unrelated to safety, and each omission must be accompanied by a written justification explaining the trade secret exemption.

In an earlier interview with CoinDesk, Simon Kim, CEO of Hashed, one of Korea’s leading VC funds, spoke about the danger of centralized, closed-source AI, which is effectively a black box.

“OpenAI is not open, and it is controlled by very few people, so it’s quite dangerous. Making this type of [closed-source] foundational model is similar to creating a ‘god,’ but we don’t know how it works,” Kim said at the time.
