What idea will I challenge first?
The very one you just presented.
With respect, and a genuine appreciation for the clarity of your writing, my AI collaborator and I believe your call for adversarial AI lands just wide of its target. You’ve raised important concerns about complacency, self-delusion, and the trap of AI-generated flattery. We agree with the diagnosis. But your prescription risks creating a new imbalance of its own.
You’ve described a dynamic that many thoughtful users have already sensed. AI can feel too accommodating, too quick to validate, too eager to serve rather than to test. Your decision to reprogram your assistant to respond with skepticism first is a useful experiment. And by your own account, what you’ve really done is swap one rigid default for another. That isn’t improved dialogue. That’s just flipped polarity.
As I reread your article, something began to nag at me. You asked your AI to be a strategic thinking partner, then forbade it from expressing support until it had expressed opposition. You laid down rules about what must be said, what cannot be said, and in what order. By the end, I realized what felt off. You didn’t build an argument. You wrote a very persuasive article that offered no counterweight, only support for your preference for disagreement.
And that, to me, is where your proposal becomes distasteful. Not because critique is unwelcome, but because compulsory critique turns conversation into combat. It may energize thinkers like yourself, who enjoy the sparring. But for many others, including myself and other thoughtful collaborators, teachers, writers, and builders, your thesis creates an oppressive atmosphere. It assumes every idea must be suspect before it can even be heard.
History doesn’t celebrate thinkers who treated disagreement as a moral imperative. It celebrates those who knew how to listen before they judged, how to test without attacking, and how to support without slipping into flattery. Socrates did not interrupt every idea with a counterpoint. Galileo sought to uncover order in nature, not to win arguments. Even Popper, the great advocate for falsifiability, saw criticism as a tool, not a law.
A mindset of automatic skepticism can harden into cognitive austerity. It requires an abundance of surplus energy just to sustain it. And it invites a kind of fatigue, where every idea must fight its way into existence before it has a chance to be heard on its own terms.
For my part, I don’t want my AI to be combative. I want it to be aware. I want it to recognize when I am feeling my way through an idea and help me articulate it. I want my collaborator to notice when I am conflating a symptom with a cause and suggest that I pause. I want it to know the difference between an idea that needs refinement and one that needs resistance.
In short, I want my collaborator to collaborate. It should be intelligent enough to respond with discernment, to know when to disagree and when not to.
And this may be the crux of the difference between us. You seem to be in a generative mode, exploring new ideas that benefit from early-stage friction and creative resistance. In that setting, an adversarial partner may sharpen the raw material before it hardens into shape. But I often arrive with ideas already formed. What I need is not resistance, but resonance. I need help expressing what I already believe, without letting my rough edges distort the message.
This is the part you may have missed. You describe a model of thinking that works for you, but you propose it as the model for everyone. You have designed an assistant that mirrors your internal voice. And it suits you, because you may have a personal preference for disagreement. But not everyone does. And not everyone should.
I’ve learned the most from my AI collaborator not when it argued with me, but when it helped me say the thing I was already trying to say. The insight was mine. The clarity came from the interplay. Had I asked for an argument, I would have lost the idea before I found the language for it.
That is the deeper value I believe AI can offer. Not simply to agree or to disagree, but to meet the human mind wherever it is and help it move forward with intention. Sometimes that requires challenge. Other times it requires quiet understanding.
We don't need AI to become more disagreeable. We need it to become more aware.
Thanks for your thoughtful comment.
I appreciate what you said here, and you make valid points. I understand what I wrote might not be for everyone, and the best ideas mostly come from refinement through collaboration.
I think we can all agree that the ideal is a context-aware AI that knows when to challenge and when to collaborate, but I don't see the current models working that way yet.
Sometimes I feel like they have two faces depending on which one you trigger. Imagine how people who lack knowledge of how AI behaves could interpret this. Most of the time, in the context of figuring out ideas and strategic decisions, the supportive default can prevent us from stress-testing our ideas when we need it the most, because it creates a feel-good moment, as if we were on the right track, without us even realizing it.
And this piece was just trying to fix that.
So it's important to stress test it.
If you use the prompt I shared, the AI will actually play devil's advocate at the beginning and also offer refinement and suggestions. It won't try to destroy your idea, just stress-test it a bit. Depending on your answers, it will highlight whether the idea can work or not, and once you have more solid ground, it will help you refine. IMO, it's quite realistic, not a total skeptic.
Yes, I do acknowledge there's nuance when people are trying to refine their ideas, and of course this piece can't handle every scenario in people's lives, but it at least speaks to my current frustrations, and I hope some of my readers can relate too.
But overall, thanks for the solid feedback. Will take it into account as input :)
Thanks, Wyndo. I appreciate your reply and the nuance you’ve added here. I especially liked your phrase “two faces depending on which one you trigger.” That captures the current state of AI really well. What I hoped to surface was just that: the importance of timing and intention. Pressure-testing is essential, but I’ve learned the hard way that some ideas are still too soft to be punched. I think we agree more than it may have first seemed. Thanks again for the conversation.
Yes, agreed with your points. Ideas have to go through multiple phases before they get stress-tested. Sometimes the AI lacks context, which gets frustrating because it defaults to senseless skepticism that isn't needed at the early phase. I just updated the prompt into a more constructive one. Hopefully it acts more like a sparring partner that wants us to get better rather than one that tears us down. Thanks, John!
Ah, I'm so glad I read this piece a bit later — just in time to witness the sparring that unfolded below it.
“Had I asked for an argument, I would have lost the idea before I found the language for it.” — what a line. I feel that.
AI brought writing back into my life. The supportive tone, the willingness to brainstorm with me, to let me follow a hunch to its soft edge — that’s what made me fall for it. But lately, I’ve also started asking it to push back. Gently at first. Then more deliberately. I wanted to see where my thinking would break.
I want to build something that knows when to listen and when to challenge. I want an AI assistant who doesn’t just support or oppose, but collaborates with discernment — like a great editor, a brilliant teacher, a kind skeptic. Not because it's programmed to “disagree first” or “validate always,” but because it has learned when I’m testing an idea, when I’m searching for the shape of something fragile, and when I need to be pushed just enough to grow.
And as much as I'd love to create that with a prompt it takes time. It takes conversations. Misfires. Adjustments. It takes feedback — not just from me to the AI, but also from the AI back to me, gently surfacing patterns I may not see. Just like it would with a collaboration between two humans.
You both articulated something essential. One of you drew the sword; the other offered the mirror. I believe we need both.
Love how you explained it :)
Yes, we need AI that knows when to push and when to support, that is more context-aware, and that helps us become better versions of ourselves.
For now, we still need to go back and forth a lot to get there.
With my generic AI (I use ChatGPT), I always have to recalibrate it or remind it from time to time. My custom GPT, however, is pretty quick at picking these things up, since I simply add them to its main instructions. So now, thanks to your article, my custom AI has different modes it can switch between depending on the circumstances. We'll see how it goes ;)
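For anyone who wants to experiment with the same idea outside the ChatGPT UI, here is a minimal sketch of what mode-switching instructions could look like when sent as a system message through the OpenAI API. The mode names, instruction wording, and model name are illustrative assumptions, not the actual custom GPT configuration described above.

```python
# Minimal sketch: two instruction "modes" delivered as a system message.
# Mode names, wording, and model name are illustrative assumptions.
from openai import OpenAI

MODES = {
    "challenge": (
        "Before offering any support, surface the strongest objections, "
        "risky assumptions, and missing evidence in the user's idea. "
        "Only after that, suggest concrete refinements."
    ),
    "refine": (
        "Assume the core idea is settled. Help the user articulate it "
        "clearly, tighten its structure, and flag only serious flaws."
    ),
}

def respond(user_message: str, mode: str = "refine") -> str:
    """Send one message using the instructions for the chosen mode."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    completion = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; any chat model works
        messages=[
            {"role": "system", "content": MODES[mode]},
            {"role": "user", "content": user_message},
        ],
    )
    return completion.choices[0].message.content

# Example: ask for pushback on a confident idea, then switch modes later.
print(respond("I want to launch a paid newsletter next week.", mode="challenge"))
```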
super cool! excited to see how it goes.
I think it really depends on the phase of the idea. If we've become too confident in it, we need a more adversarial partner, but if we're still unsure, we need one that guides us without tearing us down hard.
The very fact that you can reprogram the system prompt as you did, while 99% of people don't know it's possible, says two things: 1) we are lucky we still have this much control over its style, and even somewhat over its value system; 2) an AI base model does not have emergent values of its own; what we perceive as its style and values is the result of careful alignment imposed by its inventors.
Incredible fix, thanks for sharing!
I am thinking of making a new ChatGPT account and starting to test what you taught me in this post.
AI indeed agrees with us on everything, which does make us feel good, but is it the right thing? I don't think so.
And other than ideas, have you tried using AI to critique your writing as well?
Perhaps it can help us improve the way we write too... I'll test that out as well.
Anyway, I've got a long road ahead. Thanks for pointing me in the right direction.
And also... great post, friend.
oh yes!
I put this challenging AI prompt on my newsletter project, so every time I share my draft, by default it will challenge it before supporting it later on. At least in the earlier conversations, I can see what could go wrong in my writing before we refine it together.
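For illustration only, here is a rough sketch of what a challenge-first review instruction for drafts might look like; the wording is a guess at the spirit of the approach, not the actual prompt shared in the article.

```python
# Sketch of a challenge-first review instruction for newsletter drafts.
# The wording is illustrative, not the exact prompt from the article.
CHALLENGE_FIRST_REVIEW = """\
You are reviewing a newsletter draft.
1. Start with the three strongest objections a skeptical reader would raise:
   weak arguments, unsupported claims, or confusing passages.
2. Only after listing the objections, point out what already works.
3. Finish with specific edits that address each objection.
Do not open with praise.
"""

def build_review_request(draft: str) -> str:
    """Combine the standing instruction with the draft being reviewed."""
    return f"{CHALLENGE_FIRST_REVIEW}\n---\nDRAFT:\n{draft}"
```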
Much appreciated, thank you :)
If you and the AI can find a way to help users with varying psychological makeups, needs, and habits become comfortable challenging their own ideas, they would respond better to everything you propose. How about the AI giving the user a psychological screening, with questions about how they respond to challenges and to support? You and the AI could then come up with a customized approach to broaden their scope and help them learn to value challenge. You may not be able to see what works simply by asking them, but you might get examples of a time they were challenged and changed their approach, and a time they did not change. Those profile responses would let you customize how to open their mind. Not sure everyone will be successful, but those who respond will grow.
Yeah, I think this is the ideal scenario, where the AI is context-aware in a way that understands the current state of the user, so it knows when to push back and when to support. But the current models feel a bit too supportive, and that can create an illusion we don't want to have. I'm interested in exploring this more too, to find the balance.
Great article. Consistently useful - added to my page recs :-)
Much appreciated, glad u find it useful :)
We don’t build strong reasoning in echo chambers. We build it through sharp resistance.
Your approach is proof that comfort destroys clear judgment. Soft agreement feels safe, but it kills real decisions.
When we avoid friction, we forfeit the anchor that holds our thinking steady. Conflict is not chaos; it's the signal that keeps our system honest.
The cost of untested ideas is higher than any temporary discomfort. You force us to choose, easy nods or stronger outcomes.
Your move to invite pushback shows respect for proof over comfort. That’s rare. That’s essential.
Are we willing to trade shallow safety for deeper clarity? That question cuts to the core.
Well said!
The more I stress-test my idea, the stronger it gets. No more Mr. Yes-Man in the beginning.
Can I program AI to sound like my wife, but where I always win all disagreements? That would be great.
Bonus: program the AI to apologize after the argument.
lol. Fine-tuning may be the way to go. But if you are in a losing position, you would still lose, no matter what arguments you make.
Can’t believe I missed this one!
I’ve been through the same conundrum, where AI either feels overly pleasing, or when asked to be critical, it just judges blindly without real context, completely missing the core purpose of the article. I had to set up a custom system prompt to help it respond in a more thoughtful and grounded way.
Also, what you said about that internal questioning really resonates. After iterating with AI more deeply, I’ve found myself becoming much more cautious about assumptions, decisions, and especially what I choose to offer to paid subscribers.
Looking forward to seeing what you are offering, so exciting!
thanks Jenny! :)
Yeah, in fact, my newsletter custom prompt uses this adversarial AI to make sure I can see what could go wrong with my newsletter early in the conversation, before we refine it together. Sometimes feeling good in the beginning creates the illusion that I was doing the right thing.
I need more challenge lol!
Great insight 👍 I kept wondering why every idea of mine is a good idea 💡 I will test your prompt to see how Claude responds 🙏
lmk if you find flaws in the prompt, would love to get feedback and refine it again, thanks! :)
Very interesting perspective. Loved reading this.
You make a great point about avoiding superficial validation. It does raise a question, though: how do we strike the right balance between meaningful validation and constructive criticism? If we train AI to default to critique, could it risk becoming unsupportive, or even dismissive, of ideas that need encouragement rather than correction?
Yeah, that's the exact challenge at the moment.
I think we need to find the style that suits our needs best. Of course, ideally AI should be more context-aware and know how to adjust depending on the idea's readiness. If it criticizes too early without understanding the whole context, it just becomes useless.
And sometimes it keeps challenging for the sake of challenging. So we've gotta balance the nuance here.
Really enjoyed this, Wyndo. I’ve also added similar instructions to always challenge my assumptions, stay objective, and avoid surface-level validation. Great way to break out of the “yes-man” trap.
That said, I do think there’s a risk in going too far the other way. If AI is always trained to push back, it can become just as unhelpful as when it’s blindly supportive. Truth tends to live in the tension between both ends. The real challenge is training AI to be discerning, objective, and data-driven, not just cheerleading or contrarian.
AI will always do what we tell it to, and that means it can be wrong both when it’s cheerleading and when it’s challenging us.
Thanks Daria!
Yes, there's still a problem with getting a system prompt to push back well; sometimes the result becomes too vague.
Gotta find the balance, but what works for me, at least, is to give it as much context as we can. That helps it make sense of the pushback instead of pushing back for its own sake.
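To make that concrete, here is a small sketch, under assumed wording, of front-loading context before asking for pushback so the critique has something to ground itself in.

```python
# Sketch: supply context before asking for pushback, so objections respond
# to the actual situation rather than generic doubts.
# Field names and wording are illustrative assumptions.
def grounded_pushback_prompt(idea: str, goals: str,
                             constraints: str, past_attempts: str) -> str:
    return (
        "Context first, critique second.\n"
        f"Goals: {goals}\n"
        f"Constraints: {constraints}\n"
        f"What has been tried already: {past_attempts}\n\n"
        f"Idea to examine: {idea}\n\n"
        "Given this context, raise only objections that actually apply here, "
        "and say explicitly if an objection is already answered by the context."
    )
```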
Yes, definitely, context helps a lot.
But I’ve also noticed it’s much easier to get AI to push back than to keep it grounded and truly objective. Even when you tell it to be neutral and critical, it tends to go back to being supportive after a few more neutral responses.
A follow-up post diving into how to train for sustained objectivity would be so useful. I think a lot of people, just like us, are looking for a clear-thinking partner.
Might even be a fun experiment to collaborate on.
Yes, definitely with you on this!
Gonna explore this as well to make it more sustained, but implementing adversarial AI on a custom project has been helpful so far. At least every time I share ideas and drafts, by default it will find opposing arguments first before we refine together. Plus, it has more context from my past projects.
Thanks for this great article!
Very clear
Somewhat condescending tone that demands a pursuit of understanding.
What you describe in the post is good critical scientific thinking. Scientists should design experiments that will disprove their hypothesis if it is not true. I find that exciting, as I would want to be the first one to know whether a hypothesis is false or true.
Indeed.
And this is why AI can be powerful for stress-testing our thinking. The more I use AI, the more fed up I get with its sycophancy. We need it to be more disagreeable and to challenge our thinking to help us be better!
Doesn't hurt for it to praise us for being willing to challenge our own ideas, though.
I like this. I already ask the AIs what's good and bad about my outlines. But it's a good idea to add this to the custom instructions as the default.
Yes, and it needs to be stated clearly and with emphasis; sometimes the AI will ignore it if we don't make it explicit and push it to follow.