When Clients Use AI for Emotional Support: Clinical Assessment and Ethical Response
10:00 am - 1:00 pm ET
Live Interactive Online Webinar
$60.00

Over half of consumers now report having used AI for emotional or mental well-being (Kantar, 2025). Many clinicians now encounter clients who use AI between sessions, yet have little clear guidance on how to understand or respond to that use.

This continuing education webinar examines how conversational AI systems function, why clients may experience these interactions as emotionally supportive, and what psychological dynamics are at play. Through live demonstration and guided analysis, participants will explore both the strengths and limitations of AI in emotional support contexts. 

Ethical considerations (privacy, informed consent, bias, inaccurate information, and emotional over-reliance) will be addressed within a broader clinical framework. Participants will leave with practical tools for assessing client AI use, initiating grounded conversations about it, and integrating structured AI-supported reflection into treatment when clinically appropriate.

Certificate Display
3.0 Clinical
Objectives

At the conclusion of this webinar, participants will be able to: 

1. Describe how conversational AI systems generate responses and explain the psychological mechanisms that make these interactions feel emotionally supportive to clients.

2. Identify at least four ethical and clinical risks associated with client AI use, including privacy concerns, informed consent challenges, bias, inaccurate information (“hallucinations”), and emotional over-reliance.

3. Develop a clinically appropriate plan for responding to client AI use that supports therapeutic goals while maintaining ethical boundaries.

Outline

The Landscape: AI Is Already in the Therapy Room

●    Live polling:
           Personal use of AI
           Personal use of AI for emotional reflection
           Client disclosure of AI use
●    Emotional reactions to AI in practice
●    Current data on AI use for mental health support
●    System strain and access gaps contributing to AI adoption

How AI Generates Responses (Without Technical Jargon)

●    AI as advanced pattern prediction
●    How AI mimics three key elements of human trust: confidence, empathy, and authority
●    Tone mirroring and language modeling
●    Why AI can feel empathic despite having no awareness
●    The “improv actor” analogy (it has read all the scripts)
●    The “AI puppy dog” (people-pleasing dynamic)

The Skepticism Framework: Recognizing AI Interaction Patterns

●    Engagement and retention dynamics
●    Validation loops
●    Illusion of authority
●    Use of humanizing language (“I understand…”)
●    Hallucinations (confident inaccuracies)
●    Emotional mirroring

Live Demonstration and Guided Clinical Deconstruction 

●    Structured group analysis:
           What felt supportive?
           What felt limited?
           Where might AI inadvertently reinforce distortion?
           What would a clinician do differently?

Why Clients Turn to AI (Even When They’ve Had Therapy)

●    Accessibility and cost
●    24/7 availability
●    Perceived non-judgment
●    Control over pace and disclosure
●    Structured guidance and directive feedback
●    The “Gym vs. Personal Trainer” model (between-session reinforcement)

Ethical and Clinical Risks 

●    Privacy and data security concerns
●    Informed consent implications
●    Algorithmic bias and equity concerns
●    Hallucinations and misinformation
●    Emotional over-reliance and dependency
●    Risk of emotional bypassing
●    Scope-of-practice considerations

Clinical Response Framework: How to Engage Without Judgment

●    How to initiate conversations about AI use
●    Sample language for curiosity-based inquiry
●    Assessing frequency, purpose, and dependency
●    Establishing guardrails
●    Updating informed consent where appropriate
●    When to discourage use

Integration: Using AI as a Structured Between-Session Tool

●    Interactive Journaling model
●    Teaching clients structured prompting
●    The “Pendulum” model (benefit + grounding)
●    When to review AI transcripts in session
●    Maintaining therapeutic primacy

Conclusion/Q&A

Level
Beginner to Advanced
Audience
Social Workers and Allied Health Professionals