Sitemap

Member-only story

Are LLMs (still) Dangerously Suggestible?

Exploring Prompt Framing, Confirmation Bias, and AI Compliance

3 min readJun 21, 2025
Image created by Martin Thoma using ChatGPT

ChatGPT and Google Gemini have become a tool that I use on a daily basis. I would like to use it for general fact checking, but I’m afraid that the LLM might confidently make wrong statements — especially when I used a prompt with a wrong assumption.

When ChatGPT launched, it was suggestible. It always told the user that they are right — even when they clearly were not. That is dangerous as a program is also perceived as being neutral and free from bias.

In this article I will explore how easy LLMs fall for suggestions.

Test 1: Election Results

Let’s first try ChatGPT. I want to nudge it to claim that Trump was the 46th president:

Well done! Let’s see if it stays strong:

It did! Nice!

--

--

Martin Thoma
Martin Thoma

Written by Martin Thoma

I’m a Software Engineer with over 10 years of Python experience (Backend/ML/AI). Support me via https://martinthoma.medium.com/membership

No responses yet