New York state will monitor its use of AI under a newly signed law
New York state government agencies will have to conduct reviews and publish detailed reports on how they use artificial intelligence software, under a new law signed by Gov. Kathy Hochul.
Hochul, a Democrat, signed the law last week after it was passed by state lawmakers earlier this year.
The law requires state agencies to conduct evaluations of any software that uses algorithms, computer models or AI techniques, then submit those reviews to the governor and top legislative leaders and post them online.
It also prohibits the use of AI in certain situations, such as automatically deciding whether someone receives unemployment benefits or child care assistance, unless the system is consistently monitored by a human.
The law protects workers from having hours cut due to AI
Government employees will also be protected from having their hours or duties limited due to AI under the law, addressing one of the major concerns raised by critics of generative AI.
State Sen. Kristen Gonzalez, a Democrat who sponsored the bill, called the legislation an important step in establishing guardrails around how emerging technologies are used in state government.
Experts have long called for more oversight of generative AI as the technology becomes more widespread.
Some of the major concerns raised by critics, aside from job security, include the security of personal information and the risk that AI can amplify misinformation, given its tendency to invent facts and repeat false statements, and its ability to create convincingly realistic images.
Several other states have laws governing AI in place or in the works. In May, Colorado enacted the Colorado AI Act, which requires developers of high-risk AI systems that make consequential decisions to guard against bias and discrimination; it takes effect in 2026. In California, several AI bills signed into law in September take effect in the new year, including one that requires major online platforms to identify and block deceptive election-related content and another that requires developers to disclose the data sets used to train their systems.
Canada does not yet have a government regulatory framework for AI, although the proposed Artificial Intelligence and Data Act (AIDA) is included in Bill C-27, which remains under consideration with no timeline for whether it will become law. Earlier this fall, the federal government also announced the launch of the Canadian Artificial Intelligence Safety Institute, which aims to advance research on AI safety and responsible development.
Alberta is working to develop its own rules around artificial intelligence, the province's privacy commissioner said in March, with a particular focus on privacy issues such as deepfakes.