Ethical and Legal Frontiers: Artificial Intelligence in Mental and Behavioral Health Care
Artificial intelligence (AI) is rapidly making its way into mental and behavioral health care, promising new tools for assessment, monitoring, and treatment. Yet for psychologists, social workers, and marriage and family therapists, its emergence also raises difficult ethical and professional questions. As lawmakers in New York, New Jersey, and Pennsylvania move toward regulating the use of AI in therapy, practitioners will need to navigate both the opportunities and the risks that accompany these technologies.
AI’s Growing Role in Clinical Practice
AI is being used to analyze speech and text for signs of depression, generate personalized treatment suggestions, and even assist patients with exposure therapy. These tools can help clinicians detect issues earlier, track client progress, and handle administrative work more efficiently. However, their clinical utility remains largely unproven. Most AI systems are not designed – or approved – to replace a licensed therapist’s judgment, and their use in direct therapeutic interaction is ethically and legally contentious.
Ethical Risks and Professional Responsibilities
The ethical concerns surrounding AI go to the heart of mental health practice. Above all, there is the question of whether technology can ever replicate the empathy, understanding, and trust essential to therapeutic care. When clients disclose deeply personal information, they rely on confidentiality and on the presence of a trained human being who can interpret emotion, nuance, and context – qualities AI does not possess. Using AI to mediate or mimic that relationship risks diminishing the human element that makes therapy effective.
Privacy and data protection are equally critical. AI systems often require access to sensitive health data, including session notes, biometric information, or communication transcripts. Without strict safeguards, this data can be exposed, misused, or sold to third parties. Ethical practice requires explicit informed consent, clear explanation of how AI tools operate, and adherence to confidentiality standards such as HIPAA.
Bias and fairness present another challenge. AI models are only as reliable as the data used to train them. When those datasets fail to represent diverse populations, the results can reinforce inequities, producing inaccurate assessments or inappropriate recommendations. For behavioral health professionals committed to equity and client welfare, this makes human oversight non-negotiable. Therapists must retain ultimate responsibility for interpreting data and making all clinical decisions.
The Legal Landscape in New York, Pennsylvania, and New Jersey
Lawmakers are beginning to codify these boundaries.
In New York, Senate Bill S8484, known as the Oversight of Technology in Mental Health Care Act, would allow licensed professionals to use AI only for administrative or supportive functions – such as maintaining records or analyzing anonymized data – while prohibiting any use of AI for therapeutic communication, emotional detection, or independent treatment decisions. Written, informed consent would be required before any recording or transcription involving AI. The Oversight of Technology in Mental Health Care Act would add further regulation to the AI space in New York after the recently enacted S3008, which prohibits the operation of AI companion models in New York unless the model contains a protocol able to detect and address suicidal ideation or expressions of self-harm and provides a clear and conspicuous notification that the user is not communicating with a human.
Pennsylvania’s proposed House Bill HB1993, the Artificial Intelligence in Mental Health Therapy Act, adopts definitions similar to those in New York’s pending legislation but imposes explicit prohibitions on AI systems making therapeutic judgments, generating treatment plans without human review, or simulating emotional interaction. Violations would be treated as unprofessional conduct under the Commonwealth’s existing licensing laws, exposing practitioners to disciplinary action.
New Jersey’s pending legislation, Assembly Bill A5603, takes a less comprehensive approach, prohibiting those who develop or deploy an AI system from advertising or representing to the public that the system is, or is able to act as, a licensed mental health professional. The intent is to ensure that all mental health services are delivered by qualified, licensed professionals who remain accountable for patient care.
Together, these initiatives signal a regional trend toward regulating AI in behavioral health care, preserving the primacy of human clinical judgment while still allowing technology to serve administrative and analytical roles.
Looking Ahead
For mental and behavioral health professionals, these developments serve as a reminder that technology must be guided by ethics, not the other way around. AI can improve efficiency and broaden access, but it should never replace the empathy, discernment, and accountability that define therapeutic practice. As New York, Pennsylvania, and New Jersey move forward with new legislation, clinicians should review their own use of AI tools, ensure they obtain informed consent, and remain vigilant about privacy and bias.
The future of AI in mental health will depend not only on innovation but on the profession’s commitment to ethical, human-centered care.
Whether you are incorporating digital tools into your practice or seeking guidance on state regulatory changes, our firm is here to help you stay compliant and protect your professional integrity. Please contact Evan Sampson in our Health Care Practice Group at 856.301.2561 or esampson@postschell.com to discuss compliant approaches to implementing AI in your practice or questions about the evolving legal landscape in your state.
Disclaimer: This post does not offer specific legal advice, nor does it create an attorney-client relationship. You should not reach any legal conclusions based on the information contained in this post without first seeking the advice of counsel.