NIST’s AI standards eyed as inspiration for federal regulation


Amid congressional discussions of what federal regulation of artificial intelligence could look like, experts and legislators have begun pointing to existing guidelines created by the National Institute of Standards and Technology (NIST) over the last two years as a model.

“I think when it comes to AI and protecting data, coming from the lens of a startup, from a young company, we’ve looked at the new standards that [NIST] have around AI, and they’re actually pretty good,” Bhavin Shah, founder and CEO of AI company Moveworks, testified at a June 5 House Oversight Committee hearing.

“They do provide for a lot of recommendations that we actually follow,” he added.

Shah was referencing NIST’s AI Risk Management Framework, a set of voluntary guidelines released in 2023, and developed by a collaboration between private and public sector experts.

The framework aims to provide practices that developers and deployers of AI can follow to increase the safety of their AI models and build trustworthiness with their users. It was designed to be flexible and general enough that companies and organizations of different sizes can implement its suggestions.

In 2024, NIST released a similar framework specific to generative AI — newer AI models that are capable of generating new text, images and other content.

Why is NIST’s AI framework influential?

Because there currently are no federal regulations for AI, companies often look to NIST’s framework to find common language or standards, said Atlanta-based Patrice Williams-Lindo, a management consultant and CEO of career consulting company Career Nomad.

“It’s where technical rigor meets trust,” Williams-Lindo said. “It’s not regulatory, but it creates that common language. So if you think about the times when maybe Congress doesn’t move fast enough, NIST can help industry self-govern responsibly and ethically.”

NIST’s AI framework assesses the full life cycle of an AI product, from conception to post-release monitoring, using a “map, measure and manage” strategy, said Boston-based Anthony Habayeb, cofounder and CEO of AI governance platform Monitaur. Users following the framework map out and identify risks in the AI model, measure the model’s performance and risk levels, then manage and respond to those identified risks.

“What NIST then does is give you some tactical guidance on how you can build towards transparency, how you could build towards better fairness without explicitly defining the details of how to do it,” Habayeb said.

NIST’s research and findings often lay the groundwork for national laws and policies. Following the 9/11 attacks, for example, NIST was one of the agencies to investigate the technical reasons behind the building collapses, and its subsequent reports shaped changes to public safety communications standards, building codes and DNA processing practices.

In the early 2010s, as software and cloud services became more prevalent, NIST served as a technical advisor in developing the FedRAMP program, which is how government agencies procure software today. In 2014, NIST released its cybersecurity framework, a collection of information from hundreds of workshops and participants, which has been widely implemented by private companies, Shah said in his June 5 testimony.

“NIST is often the soft law before the hard law hits,” Williams-Lindo said.

In early June, NIST became a part of the U.S. Department of Commerce. Ylli Bajraktari, president and CEO of tech-focused think tank Special Competitive Studies Project, said during the June 5 hearing that the move could help companies develop sound AI policies. Whether or not Congress adopts a federal AI policy, Bajraktari said, NIST’s AI framework is a good influence on the private and public sectors.

“I think NIST is well positioned for this,” he testified.

Could NIST standards become law?

Legislators from both sides of the aisle and expert witnesses of varying backgrounds praised NIST’s clear, nonpartisan AI framework at a handful of congressional hearings that have happened since Republicans proposed a 10-year moratorium on state-level AI laws in the “Big Beautiful Bill” working its way through Congress.

“Congress likes NIST because NIST doesn’t issue regulations,” said Boise, Idaho-based Thomas Leithauser, a legal analyst at software and information services company Wolters Kluwer. “NIST offers recommendations and guidance.”

Though many tech companies follow the guidelines outlined in the AI framework, compliance is entirely voluntary. The guidelines are standards, not regulations, Habayeb said.

But some legislators arguing for AI regulations at the national level and in their states say that a framework like NIST’s isn’t enough. Many AI laws are shaped as consumer protection laws, including recently passed state laws that address discrimination caused by AI used for facial recognition, banking, hiring and healthcare.

True regulations are needed to keep people safe from potential harms caused by AI algorithms, said Rep. Kathy Castor, a Democrat from Florida, in a May 21 hearing of the House Subcommittee on Commerce, Manufacturing and Trade.

“What the heck is Congress doing?” Castor said about Republican efforts to block state-level regulation. “What are you doing to take the cops off the beat while states have acted to protect us?”

NIST’s AI framework garners bipartisan support, but could it be turned into a federal law? Yes and no, said Williams-Lindo.

It’s a strong starting point, and already a guidepost for many tech companies, she said. But it is not a regulatory body, which may be needed to keep the quickly growing field of AI in check against the harms it could cause.

“NIST is operational, so we still will need a real plan that measures … algorithmic harm, or ensures that historically excluded communities aren’t collateral damage,” she said. “That’s usually where there’s a gap right in that federal leadership.”

Yet some industry players say companies don’t need the hard threat of regulation to follow NIST standards. Aside from safety concerns, builders of AI should see the competitive edge in following safety guidelines, Habayeb said.

“If you skip a step of certain testing or data validation, you’re going to have a less than ideal system,” Habayeb said. “And you should care about that for business purposes, not even regulation.”
