A YouTuber called WesGPT put out a video about a month ago highlighting a clever prompt that causes ChatGPT to reveal its own "custom instructions."
Video here:
I was (mostly) able to duplicate the results here:
However, compared to WesGPT's results, my attempt is noticeably missing the instructions about diversity and ethnicity directions for DALL-E images (and more). I'm not so sure those instructions are "gone" — maybe further prompting can reveal them.
9 months ago
I wonder if those missing instructions are biases built into the prompt-modification "layer" of the DALL-E 3 pipeline rather than into the system prompt itself?
9 months ago
I'm trying to uncover more without steering it into saying what it thinks I want to hear. It's fighting back hard, though. And I ran out of user turns 🥴. But here's my progress so far: