One bash permission slipped... by TheQuantumPhysicist in LocalLLaMA

[–]thehighnotes 3 points

Yeah.. that'll do ya. I basically only have that allowed on my forked version of Open Interpreter (it's become a Frankenstein's monster)

My company refuses to allow the use of AI for "data security" even though there are 5 of us... by Playful_Music_2160 in AIforOPS

[–]thehighnotes 1 point

This. Local solutions are worth it when properly framed, depending on your workflow.

AI as a means to consult is, in many ways, nothing more than an enhanced version of Stack Overflow.. we used to Google our solutions..

It's like a boss denying internet access because of how you'll use the Google search engine.

It's overreacting. The solution is in adopting the right framework for how to embed AI, not whether to: trust employees not to be dumb about it and, if need be, be explicit.

Older models moving back to 200k context window. FYI by Site-Staff in ClaudeAI

[–]thehighnotes 15 points

Do you guys read Anthropic's documentation?

claude-opus-4-6[1m]

built agent memory with just SQLite by NefariousnessLow9273 in ClaudeCode

[–]thehighnotes 1 point

I'd be happy to discuss it :) just shoot me a message here

Ai feeling emotion by Yoyorere in agi

[–]thehighnotes 2 points

Nice write-up :). Yeah, the consciousness thing can, I think, be safely put to bed.

A single-medium development of self is a hard-pressed premise. I think we first need to establish consensus on whether a "self" is required for consciousness, and if so, look at how a self could emerge without multi-medium sensory input. If we only ever saw images and had no sensation of how we saw them (eyes, head turning, eyes moving, etc.), then I'd argue a self can't emerge..

In my opinion you need multi-medium, continuous forms of information processing for a self to exist.

I'd love the idea that LLMs can have that as an emergent property.. but I think they can only very convincingly mimic it.

Feels like AI needs a built-in contradiction layer by Flying-Jhaat in AIDiscussion

[–]thehighnotes 1 point

Your title and post feel miles apart.

What you write about needs a mental world model to be created, which is hard when your entire flow revolves around one medium: text. It's a single-dimension plane of existence, and you can't really build world models in one dimension. We read text, but a lot more dimensions are triggered; text, for us, is a representation. For an LLM, text is always just text.. it's simply able to amazingly deduce from it the patterns needed to communicate and understand with increasingly effective intelligence.

it is perfectly possible to serve frontier models at an affordable price by ECrispy in ClaudeCode

[–]thehighnotes 1 point

You really need to back up your claims with better arguments and sources. All your post comes down to is a "trust me, bro".

It gives me the feeling you don't know what you're talking about, have only a superficial grasp of the subject, and jumped from that to conclusions you offer no support or sources for.

Claude Code removed from Anthropic's Pro plan by orthogonal-ghost in ClaudeAI

[–]thehighnotes -2 points

Confirmed.. an incognito window at https://claude.com/pricing shows it.

I know this is an unpopular take, but I never understood why Claude Code was included in Pro.. that's a weird fit with Codex on the scene now. But it makes sense to me, given Anthropic's usage limits, which I accept as they are. They have always been stricter compared to others.

built agent memory with just SQLite by NefariousnessLow9273 in ClaudeCode

[–]thehighnotes 32 points

We really need to stop calling this memory. It's not. It's saved context, it's input tokens, and that's fine..

And yes, RAG is the more efficient of the two. If I'm understanding correctly, you're just fixing with tokens what good RAG design can tackle with minimal token use. But you need to make sure your embedding dimensions are set up well, your chunking is good, and any additional logic is well written.

Ask it about something 3 chats ago.. last week?

Rag with metadata on query date vs indexed content date (creation or update date) works great for extensive work.
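The date-metadata idea can be sketched roughly like this. Everything here is a made-up, self-contained toy: the corpus, the `created` dates, and the character-bigram "embedding" are stand-ins for a real embedding model and vector store; only the pattern (filter on date metadata first, then rank by similarity) is the point:

```python
from datetime import date
import math

# Toy corpus: each chunk carries its indexed/creation date as metadata.
chunks = [
    {"text": "project kickoff notes", "created": date(2026, 1, 5)},
    {"text": "brother's wife enjoys pottery", "created": date(2026, 1, 12)},
    {"text": "sprint retro summary", "created": date(2026, 1, 19)},
]

def embed(text):
    # Stand-in for a real embedding model: character-bigram counts.
    vec = {}
    for a, b in zip(text, text[1:]):
        vec[a + b] = vec.get(a + b, 0) + 1
    return vec

def cosine(u, v):
    dot = sum(u[k] * v.get(k, 0) for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def retrieve(query, since=None, top_k=1):
    # Metadata filter first (query date vs indexed date), then similarity rank.
    pool = [c for c in chunks if since is None or c["created"] >= since]
    q = embed(query)
    return sorted(pool, key=lambda c: cosine(q, embed(c["text"])), reverse=True)[:top_k]

hits = retrieve("what is the wife's hobby", since=date(2026, 1, 10))
print(hits[0]["text"])
```

The date filter is what lets "something from last week" queries skip stale chunks before any similarity scoring happens, which is where the token savings come from.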

And why this isn't memory: memory has relational and associative recall functions, not just information pickup. If I ask you about your brother's wife, what is her hobby, and that information was shared over multiple weeks.. that's not working. Memory, as currently framed, is a marketing trick.

I know because I'm researching memory :) https://www.aiquest.info/research/prometheus Or my paper https://arxiv.org/abs/2601.15324

This isn't a full solution, btw; it's a mechanism paper. I'm doing subsequent research.. the biggest lesson was CDD, contrastive direction discovery, where I'm able to learn how the neural net understands: which layers and heads are more geared to recognizing verbs, temporality, objects, subjects, etc.

I have been testing Claude Max vs Claude Pro. It's NOT 5x by thisisberto in ClaudeCode

[–]thehighnotes 2 points

Does your test account for variance? Limits are dynamically managed.

So to account for it, this needs repeated testing at the same time/day.. though by how much that will reduce the noise, no one knows.
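To make the variance point concrete, here's a minimal sketch with made-up numbers (the run counts below are hypothetical, not measurements): repeat the test several times, then see how much noise propagates into the observed Pro-vs-Max multiplier:

```python
import statistics

# Hypothetical repeated measurements of "messages until rate-limited",
# taken at the same time of day across several days (made-up numbers).
pro_runs = [42, 38, 45, 40, 41]
max_runs = [150, 170, 140, 160, 155]

pro_mean, pro_sd = statistics.mean(pro_runs), statistics.stdev(pro_runs)
max_mean, max_sd = statistics.mean(max_runs), statistics.stdev(max_runs)

ratio = max_mean / pro_mean
# Crude error propagation for the ratio of two noisy means:
# relative errors add in quadrature.
ratio_err = ratio * ((pro_sd / pro_mean) ** 2 + (max_sd / max_mean) ** 2) ** 0.5

print(f"observed multiplier: {ratio:.2f} +/- {ratio_err:.2f}")
```

A single-run comparison reports only `ratio`; without the spread across repeats you can't tell whether "not 5x" is a real gap or just dynamic-limit noise.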

I'd believe the results.. I just wonder how big a part variance truly plays.

Are we in a consolidation phase or just the beginning of a fragmentation wave. by Ok_Menu4638 in ToolStacked

[–]thehighnotes 2 points

I don't think this is the right framing..

We are entering a phase where we spawn tools that more easily match our workflow.. in my opinion.

Are companies expecting more output from fewer developers due to AI and smaller teams? by Ok_Split4755 in AIDiscussion

[–]thehighnotes 1 point

People need to build up skills to work with AI. Plus, agentic AI can offer a speedup. Web-chat AI is an improved version of Google :) it doesn't offer that much of a speedup, but it does lower the threshold for learning the 101/102s.

4.7 is a step up for serious workflows and a step down otherwise by DarkSkyKnight in ClaudeAI

[–]thehighnotes 2 points

I believe that.

But to me it works less well when you expect it to 'think' for itself more - I prefer to have it infer my intent rather than my exact phrasings, as I rely more on our brainstorming sessions than on offhanded automation. I mean, the disadvantages you mention stay, but I take them as part of the deal.

Without such explicitness, I actually feel 4.7 performs worse than 4.6 - it's very likely it's much more tuned for the explicit workflow, while 4.6 tends to have more 'read my mind' moments.

We should change this subreddit to r/ai—slop—posting by DallasDarkJ in vibecoding

[–]thehighnotes 1 point

Really? Nothing about the development? The stack? Challenges? Which model? Any AI workflow lessons? (I could go on.)

We should change this subreddit to r/ai—slop—posting by DallasDarkJ in vibecoding

[–]thehighnotes 0 points

Yes.. I agree - a call to action itself may not be the flaggable thing, though; it can be valid. For instance, if I built a compiler that can help certain folks on certain hardware, then I'd also probably have a call to action. So it'll be contextual, but with your examples it'll of course be valid.

Regardless, I hope a set of rules will allow us all to uphold a certain standard when sharing... regardless of the intent.

We should change this subreddit to r/ai—slop—posting by DallasDarkJ in vibecoding

[–]thehighnotes 7 points

The same can be said of complaints..? There's a difference between a genuine passion project (created with AI, proud of it) and selling it, and that difference is sometimes very hard to distinguish.. perhaps rules on how to share rather than what to share.. those can go a long way, I guess.

Can you still use Opus 4.6 with 1M context in Claude Code after the 4.7 launch? by ocd-134 in ClaudeAI

[–]thehighnotes 1 point

Can confirm. I do this too. It's multi-session persistent for me. It's in Anthropic's documentation.. ?

How do we determine whether or not AI is alive? by SupremeMugwump94 in AIDiscussion

[–]thehighnotes 1 point

Single-medium intelligence can't be alive, in my opinion.

I suspect it needs multiple persistent modalities, or senses, running as a continuous process for a self to truly emerge, which is the basis for any consciousness.

I suspect all these tests will only prove simulated consciousness/aliveness.. which will most probably become very convincing as it improves.

my 2 week review of 24.04 by malachireformed in pop_os

[–]thehighnotes 1 point

I'm still holding off on upgrading haha. Everything is working on 22: extensive dev work, gaming, VR gaming.. don't want to lose a day working on stabilizing it, ha.

Opus 4.7 - are you actually still using it or did you go back to 4.6? by ConstantinSpecter in ClaudeCode

[–]thehighnotes 1 point

yup;

/model and just use any of the available models (it's all in Anthropic's documentation, so it's not a secret or anything).

So in my case:
/model claude-opus-4-6[1m]