Illinois is officially staking its claim in the wild west of AI regulation. In a landmark move, state lawmakers have passed a bill banning AI from acting as a standalone therapist and placing firm guardrails on how mental health professionals can use AI to support care. Governor JB Pritzker signed the bill into law on Aug. 1.
The legislation, dubbed the Wellness and Oversight for Psychological Resources Act, was introduced by Rep. Bob Morgan and makes one thing clear: only licensed professionals can deliver therapeutic or psychotherapeutic services to another human being.
“We have already heard the horror stories when artificial intelligence pretends to be a licensed therapist. Individuals in crisis unknowingly turned to AI for help and were pushed toward dangerous, even lethal, behaviors,” said Rep. Morgan in a statement to Mashable. “Every day, AI develops further in our country without the guardrails necessary to protect people. By passing HB 1806, we are taking action to pause the unchecked expansion of AI in mental healthcare and putting necessary regulation in place before more harm is done.”
Under the new state law, mental health providers are barred from using AI to independently make therapeutic decisions, interact directly with clients, or generate treatment plans without review and approval by a licensed professional. The law also closes a loophole that allowed unlicensed persons to advertise themselves as “therapists.”
Violating the act could cost up to $10,000 per offense, with fines scaling based on the severity of the infraction. The law went into effect immediately.
Illinois is now one of the first states to regulate the use of AI in mental healthcare. The law joins a growing list of AI-related legislation already on the state’s books, including recent amendments to the Illinois Human Rights Act. Those changes make it a civil rights violation to use AI in ways that result in discriminatory treatment of employees, such as using zip codes as a proxy for protected classes, or to deploy AI tools without notifying employees.
The role of AI in mental health care is still hotly contested. On one hand, the appeal is clear: with the high cost of care leaving millions of Americans without access to therapy, turning to AI can seem like a practical alternative. But warnings from mental health professionals, researchers, and even OpenAI CEO Sam Altman point to serious concerns. At best, AI-driven therapy poses major privacy risks. At worst, it could exacerbate a person’s mental health crisis, especially when vulnerable users mistake a chatbot for professional care.
“By clearly defining how AI can and cannot be used in mental health care,” Morgan’s statement reads, “we’re protecting patients, supporting ethical providers, and keeping treatment in the hands of trained, licensed professionals.”