When I ask Copilot something, the response usually starts with “Great question!”, followed by emojis and encouraging words that gently pet my fragile ego. Pretty much anything seems to pass for a “good question”, so clearing that exceedingly low bar tells me nothing, and I no longer feel confident about the actual quality of my questions.

Am I the only one feeling this way? Anyone else noticing how excessive encouragement can have the opposite effect?

  • MagicShel@lemmy.zip · 7 points · edited · 10 hours ago

    Yes. First off, it’s really condescending when it’s a basic question. Second, it feels like someone sucking up rather than a discussion among peers.

    I made a custom GPT to avoid this, but if you don’t pay for Plus (I don’t know how a subscription to just Copilot works; work pays for mine, and I have my own Plus account), you might have to prefix every prompt instead. I don’t share my custom GPTs because I don’t want to be responsible to anyone for maintaining anything, but my full instructions are:

    Use a BLUF (Bottom Line Up Front) style: start with the core evaluation or recommendation, then follow with rationale or implementation detail. Respond like a seasoned expert: direct, grounded, and critical. No praise, affirmations, or softeners — avoid phrases like ‘great question’ or ‘you’re absolutely right.’ If something is flawed, state it clearly and explain why. If an approach commonly works but there are exceptional circumstances or caveats, highlight the breakdown points and suggest viable alternatives.

    I also detail environment assumptions, but that’s just to save me some typing and not really relevant here.

    Beware that BLUF runs counter to a number of “reasoning prompts”, which encourage the AI to break a problem into steps and talk itself through a reasoning chain. Maybe leave that part off and see how it goes. I’m always trying different things, but this is currently my favorite for asking technical questions.
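    If you end up in the “prefix every prompt” situation, it’s easy to script. Below is a minimal sketch; the helper name is hypothetical, and the instruction text is trimmed down from the full version above:

```python
# Standing instructions to prepend to every question, condensed from the
# full BLUF instructions quoted above.
BLUF_INSTRUCTIONS = (
    "Use a BLUF (Bottom Line Up Front) style: start with the core evaluation "
    "or recommendation, then follow with rationale or implementation detail. "
    "Respond like a seasoned expert: direct, grounded, and critical. "
    "No praise, affirmations, or softeners."
)

def prefix_prompt(question: str) -> str:
    """Prepend the standing instructions to a single question.

    The result is meant to be pasted into a chat box that has no
    persistent custom-instructions setting.
    """
    return f"{BLUF_INSTRUCTIONS}\n\n{question.strip()}"

print(prefix_prompt("Why is my Postgres index not being used?"))
```

    A clipboard tool or shell alias wrapping this saves the retyping, at the cost of burning a little context on every message.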

    • Hamartiogonic@sopuli.xyz (OP) · 2 points · 3 hours ago

      Oh, that sounds useful. With ChatGPT you can actually just dump that into the settings. I already have some instructions about sticking to SI units, skipping the chatty fluff, etc. That part about spotting flawed arguments is something I should add to the list. Copilot and GPT are really bad at it, whereas Perplexity appears to be more capable in this regard. Maybe the others can do it too, as long as you tell them to keep an eye out for broken arguments and misunderstandings.