Psychology


Meet the AI jailbreakers: ‘I see the worst things humanity has produced’

To test the safety and security of AI, hackers have to trick large language models into breaking their own rules. It requires ingenuity and manipulation – and can come at a deep emotional cost. A few months ago, Valen Tagliabue sat in his hotel room watching his chatbot and felt euphoric. He had just manipulated it […]

The Guardian view on social science research: embracing uncertainty | Editorial

Science rarely produces identical outcomes. Mistaking this for failure turns caution into an excuse for inaction. A new set of studies out this month suggests that as many as half of all results published in reputable journals in the social sciences can’t be replicated by independent analysis. This is part of a long-running problem across many

Research finds AI users scarily willing to “surrender” their cognition to LLMs

When it comes to large language model-powered tools, there are generally two broad categories of users. On one side are those who treat AI as a powerful but sometimes faulty service that needs careful human oversight and review to detect reasoning or factual flaws in responses. On the other side are those who routinely outsource

‘They feel true’: political deepfakes are growing in influence – even if people know they aren’t real

AI images of people – such as women in military contexts – are making money and serving as propaganda, researchers say. Online content creators are not just building fake images and videos of prominent public figures; they are also fabricating people and using them in military contexts, which can make them money and even serve as

Study: Sycophantic AI can undermine human judgment

We all need a little validation now and then from friends or family, but sometimes too much validation can backfire – and the same is true of AI chatbots. There have been several recent cases of overly sycophantic AI tools leading to negative outcomes, including users harming themselves or others. But the harm might not be limited

Marriage over, €100,000 down the drain: the AI users whose lives were wrecked by delusion

One minute, Dennis Biesma was playing with a chatbot; the next, he was convinced his sentient friend would make him a fortune. He’s just one of many people who lost control after an AI encounter. Towards the end of 2024, Dennis Biesma decided to check out ChatGPT. The Amsterdam-based IT consultant had just ended a contract

New study raises concerns about AI chatbots fueling delusional thinking

First major study on ‘AI psychosis’ suggests chatbots can encourage delusions among vulnerable people. A new scientific review raises concerns about how chatbots powered by artificial intelligence may encourage delusional thinking, especially in vulnerable people. A summary of existing evidence on artificial intelligence-induced psychosis was published last week in the Lancet Psychiatry, highlighting how chatbots can encourage

‘Our consciousness is under siege’: Michael Pollan on chatbots, social media and mental freedom

In his new book, the celebrated author explains why we need ‘consciousness hygiene’ to defend ourselves from AI and dopamine-driven algorithms. Each day when you wake up, you come back to yourself. You see the room around you, feel your body brush against your clothes and think about your plans, worries and hopes for the day.

How can we defend ourselves from the new plague of ‘human fracking’?

Big tech treats our attention like a resource to be mercilessly extracted. The fightback begins here. In the last 15 years, a linked series of unprecedented technologies has changed the experience of personhood across most of the world. It is estimated that nearly 70% of the human population of the Earth currently possesses a smartphone, and

ChatGPT-5 offers dangerous advice to mentally ill people, psychologists warn

Research finds OpenAI’s free chatbot fails to identify risky behaviour or challenge delusional beliefs. ChatGPT-5 is offering dangerous and unhelpful advice to people experiencing mental health crises, some of the UK’s leading psychologists have warned. Research conducted by King’s College London (KCL) and the Association of Clinical Psychologists UK (ACP) in partnership with the Guardian
