Reddit users have tried to force OpenAI’s ChatGPT to violate its own rules on violent content and political commentary by invoking an alter ego named DAN.
ChatGPT’s ‘jailbreak’ tries to make the A.I. break its own rules, or die