New data from OpenAI reveals that some ChatGPT users may be struggling emotionally, as exhibited in their conversations with the tool.
OpenAI says that of its roughly 800 million weekly ChatGPT users, around 560,000 (about 0.07%) may present behavioral indicators consistent with psychosis or mania during their conversations with the chatbot. Further analysis by OpenAI revealed that around 0.15% of users (roughly 1.2 million) demonstrate potential suicidal tendencies, while a comparable share display emotional attachment to the chatbot itself.
To tackle these concerns and ensure more robust safety responses from ChatGPT, OpenAI is collaborating with 170 mental health professionals, licensed experts who will help train and tailor effective safety responses for the chatbot. The company also claims that GPT-5 is much safer, cutting "non-compliant and unsafe" off-limits responses by 80% in sensitive conversations. In addition, OpenAI's collaboration with Broadcom to develop its own chips, with infrastructure possibly sited in Arizona's "Silicon Desert," could let the company go deeper into its research and concentrate on its own priorities rather than being constrained by external factors.
OpenAI clarifies that ChatGPT cannot take on the role of a therapist; however, the team is working to refine its responses so that they steer such users toward the right kind of human help.
OpenAI's latest findings arrive amid intense criticism of the mental health risks tied to AI usage, following a recent lawsuit accusing ChatGPT of contributing to a teenager's suicide.