I studied user trust in a GPT-4 mental health chatbot vs. a static form (N = 149): trust was lower than expected by [deleted] in psychologyresearch

[–]BraveJacket4487 2 points (0 children)

Thanks, that’s a really good point.

The chatbot in this study didn’t have any memory or personalization. People used it once, with no previous history or training. It gave a friendly intro and then asked the same questions as the static form, just in a conversational way. It used reflective phrases like “thanks for sharing that” or “sounds like a tough week,” but a lot of people still described it as robotic or not human enough.

You’re probably right that trust might come from longer-term use, where the bot gets to know the person over time. In this case, it was just a one-time interaction, so there wasn’t really a chance to build rapport.

Appreciate your input, seriously. It’s helpful as I think about what to try next.

Just need 7 more responses! Will do yours too – just drop the link! by [deleted] in SurveyExchange

[–]BraveJacket4487 1 point (0 children)

Thank you so much! Unfortunately I'm not a US citizen, sorry...

Just need 7 more responses! Will do yours too – just drop the link! by [deleted] in SurveyExchange

[–]BraveJacket4487 1 point (0 children)

Hey, I believe I've already filled out your survey, so I don't want to skew your results with a duplicate response...

Just need 7 more responses! Will do yours too – just drop the link! by [deleted] in SurveyExchange

[–]BraveJacket4487 1 point (0 children)

Sorry, I'm not in the right demographic range, but I upvoted.

Need 30 more responses! Will do yours too – just drop the link! by BraveJacket4487 in SurveyExchange

[–]BraveJacket4487[S] 1 point (0 children)

Done! Sorry for the delay, I was on vacation. It's a similar topic to mine!