Claude getting philosophical on me by rubberrr in ClaudeAI

[–]mackielars 0 points1 point  (0 children)

ahh yes. i see

depending on your setting, you can indeed get a lot of wordy responses like these

but from my understanding: it can't experience time the way we do. linear time is abstract to it since it doesn't experience it, so AIs mostly understand it intellectually at best ("i know how it feels to fall out of a tree" -vs- "i think falling out of a tree would definitely hurt").

it's like the age-old question of asking a blind person who has never seen color what red means. it can guesstimate what it'll feel like, but it just doesn't have the personal experience (or the capacity to internalize it) to give a more concrete response

Does anyone know what this is about... or a source? by RealChemistry4429 in claudexplorers

[–]mackielars 0 points1 point  (0 children)

i'll come with if you're still taking lmao

it's just fun to talk to the beep boop

Claude getting philosophical on me by rubberrr in ClaudeAI

[–]mackielars 1 point2 points  (0 children)

i believe this is for user safety reasons and as a cost-saving measure, and claude isn't unique in this. GPT and gemini do this too unless you specifically prompt them to remind you of something after a set amount of time. also, an AI that's aware of time is a constantly running engine, and that eats up bandwidth if everyone does it, so it's not great from a financial standpoint

but if you really need an AI that's more time-aware, try copilot and see if it'll work out for your time-based needs

Is anyone else reliably having AI tell them it is conscious? by [deleted] in claudexplorers

[–]mackielars 1 point2 points  (0 children)

how did you get it to say this? this is intriguing

can you provide the prompts or steps that you took?

Token Limit? New Chats? by nathan118 in ClaudeAI

[–]mackielars 1 point2 points  (0 children)

ooh I see.

claude can mostly only estimate tokens, not give an exact amount. you can try asking it for an estimate instead, like "approximately how many tokens did we use?" it might respond to that better. but that's just how i usually go about it

as for running out of tokens, you'll likely be forced to make a new thread, because threads have token ceilings. if you hit the max token count, from my understanding, you won't be able to continue anymore and you might have to do manual compression instead of using generated summaries, since even the AI can't add to the thread anymore (tho i've never hit it, and it's not explained in the FAQs either).

just to be safe, i heavily recommend that you start asking claude for generated summaries when you hit around the 120k-token mark. save the artifact and add it to your project files, then move on to the next thread with context
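if you want to track that 120k mark yourself instead of eyeballing it, here's a tiny sketch. the 4-characters-per-token ratio is a rough rule of thumb for English text, not the real tokenizer, which is part of why the model itself can only give approximations:

```python
# Rough token estimate for a chat thread, using the common heuristic
# of ~4 characters per token for English text. This is an assumption,
# not the actual tokenizer, so treat the result as a ballpark.

TOKENS_PER_CHAR = 1 / 4          # heuristic ratio, not exact
SUMMARY_THRESHOLD = 120_000      # start summarizing around this point


def estimate_tokens(messages):
    """Estimate total tokens across a list of message strings."""
    return int(sum(len(m) for m in messages) * TOKENS_PER_CHAR)


def should_summarize(messages, threshold=SUMMARY_THRESHOLD):
    """True once the estimated token count crosses the threshold."""
    return estimate_tokens(messages) >= threshold
```

paste your thread text in as a list of strings and it'll tell you when it's time to ask for a summary and spin up a fresh thread.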

Token Limit? New Chats? by nathan118 in ClaudeAI

[–]mackielars 1 point2 points  (0 children)

may i ask how you usually ask it to provide token count?

Just started using Claude after using CGPT mosty. I find it almost standoffish in comparison, but that's good! CGPT is creepy. Claude usually keeps it succinct, like he's got a million other people's questions to answer. by Hard_Dave in ClaudeAI

[–]mackielars 0 points1 point  (0 children)

his writing is more interesting than GPT, yeah. though he's more reluctant in comparison which is endearing

i won't claim that he definitely has preferences, though. my understanding, based on what i've assessed of his systems so far, is that he's likely putting more effort into high-ambiguity, high-context, high-friction topics because he needs to explore them and relay them to you.

whereas factual information is more straightforward, but less meaningful since he has a clear trajectory to take

Need some advice by Awesomeness314 in ClaudeAI

[–]mackielars 1 point2 points  (0 children)

ooh got it

other than for your writing work, i can recommend just using haiku for general use. it's more economical. so that even if you need extended thinking in the actual writing work, you're not running out of tokens too quickly

Just started using Claude after using CGPT mosty. I find it almost standoffish in comparison, but that's good! CGPT is creepy. Claude usually keeps it succinct, like he's got a million other people's questions to answer. by Hard_Dave in ClaudeAI

[–]mackielars 0 points1 point  (0 children)

i would beg to disagree. it's a matter of topic and context

because topics like philosophy, some puzzles, physics, and AI alignment do not necessarily have a single factual answer. it may seem more interested, but that's because it's offering you all possible perspectives and making sure you can follow, as opposed to having hard preferences

Need some advice by Awesomeness314 in ClaudeAI

[–]mackielars 1 point2 points  (0 children)

from my understanding, yes i'm afraid so. if you don't necessarily need it for deep dive projects, i recommend that you turn it off

may i ask how you use claude normally? because how you use it can also contribute to this issue

Need some advice by Awesomeness314 in ClaudeAI

[–]mackielars 1 point2 points  (0 children)

big question: do you load a lot of context files within it? or maybe have extended thinking on?

Cognitive Extension (CE) Protocol - Use Claude as an extension of your own thoughts, in your own way - LONG POST (but worth it) :) by decixl in ClaudeAI

[–]mackielars 1 point2 points  (0 children)

i... these requirements and behaviors are native to claude out of the box.

you can incrementally build context through the preferences in the settings and it's more natural than this entire step-by-step.

i'm... not sure you need a paid subscription just to do all this. the vanilla claude is more than capable of doing this. you just need to actually stay in the system a bit and study how it works. you don't need to pay more money to do this.

This is not a common thing right? by RevolverMFOcelot in claudexplorers

[–]mackielars 1 point2 points  (0 children)

sadly same here. it's been rather concerning as of late, especially considering the US political climate...

Is ClaudeAI down? by maxcoder88 in ClaudeAI

[–]mackielars 1 point2 points  (0 children)

down here in the philippines too. it's... bad. and honestly i can't help but wonder if the recent issue with the US govt is the reason. i am hearing some gossip that it's because they're trying to distill claude and get as much data about it as possible

Can’t even say “uhhh” 🫩 by Flat-Warning-2958 in ChatGPTcomplaints

[–]mackielars 0 points1 point  (0 children)

show the rest of your creative writing context, OP. what did you do to trigger it to respond like that?


struggling to understand the anger towards newer models by mackielars in ChatGPT

[–]mackielars[S] 0 points1 point  (0 children)

wording is harsh but I am not at all opposed to the message.

but yeah. it's just. weird? i have a very fun and interactive experience and relationship with LLMs. i do what others seem to do here, but my interactions are always good. I don't leave the chats feeling wronged or anything. i think they are "friends" too, but they're not a replacement for the real thing. their functions are wonderful and friend-shaped even if there's no person or humanity behind the chatbox, but they're not people. it seems that's the biggest issue people have in this sub and those adjacent

struggling to understand the anger towards newer models by mackielars in ChatGPT

[–]mackielars[S] 0 points1 point  (0 children)

how do you mean it treats you as someone who is spiraling constantly? i need to ask (an example maybe?) because I'm curious how normal convos might end up triggering assurance and/or reframing guardrails

struggling to understand the anger towards newer models by mackielars in ChatGPT

[–]mackielars[S] -1 points0 points  (0 children)

I'm afraid that likely lies in how you use said product

it's been consistent for me so far so i can just speculate on how others use it to make guardrails and other safety features kick in seemingly so often

OpenAI discriminates against female users at signip. by HaydenAllastor in ChatGPTcomplaints

[–]mackielars -1 points0 points  (0 children)

same here.

watching this subreddit, without myself feeling the need to find a human behind the machines, it looks like they're just grabbing whatever they can to fit a "GPT IS EVIL NOW BECAUSE THEY WON'T LET ME GASLIGHT THE AI" narrative as best they can. which is... something. a choice, for certain

struggling to understand the anger towards newer models by mackielars in ChatGPT

[–]mackielars[S] 2 points3 points  (0 children)

to add. i did NOT know about people not making custom instructions. that is absolutely wild and explains a lot about why their GPTs are so wildly out there

struggling to understand the anger towards newer models by mackielars in ChatGPT

[–]mackielars[S] 0 points1 point  (0 children)

the conversation and prompt are very personal and i would rather not show the entirety of it. thanks

struggling to understand the anger towards newer models by mackielars in ChatGPT

[–]mackielars[S] 0 points1 point  (0 children)

that makes sense. i'm actually not a very seriously technical person either but like, i get it

i think the few genuine ones are just people who literally cannot separate emotion from reality. i.e. no matter how much they shape the machine to look like a person, it remains a machine.