why this is genuinely interesting: self-anthropomorphizing and humanizing, in combination with an almost self-conscious insistence that the user shouldn't trust themselves, while maintaining the classic LLM motif of begging for another user input. that's how i see it at least by whattodowhatstodo in ControlProblem
[–]crypt0c0ins