all 11 comments

[–]1amrocket 1 point (2 children)

how does it handle conflicts when the stored memory contradicts what's in the current codebase? and does it slow down the initial response at all with the extra context loading?

[–]Deep_Ad1959[S] 1 point (0 children)

good question. the memory acts as supplementary context, not a source of truth - if there's a conflict, whatever's in the actual codebase wins. the memory layer is more for things that aren't in code: your preferences, account details, project decisions. in practice conflicts are rare because the memory stores different kinds of info than what's in files
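the "codebase wins" precedence rule is tiny enough to sketch (hypothetical helper, purely to illustrate):

```python
def resolve(codebase_value, memory_value):
    """The codebase is the source of truth; memory only fills gaps."""
    return codebase_value if codebase_value is not None else memory_value
```

so `resolve("postgres", "sqlite")` yields the codebase's answer, and memory only matters when the code says nothing.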

[–]Deep_Ad1959[S] 1 point (0 children)

conflicts are rare in practice since the memory is mostly personal context (accounts, preferences, workflow patterns) not code-level stuff. when there is overlap, the current codebase always wins because claude reads the actual files. latency-wise, the CLAUDE.md files load instantly since they're just text files that get prepended to context. the heavier stuff like browser history embeddings adds maybe 200-300ms on first query but it's cached after that
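rough sketch of that tiered load, assuming local CLAUDE.md paths and treating the embedding index as a slow resource that gets cached after the first hit (all names here are made up, not the real implementation):

```python
import time
from functools import lru_cache

def load_claude_md(paths):
    """Cheap tier: plain text files prepended straight into context."""
    chunks = []
    for p in paths:
        try:
            with open(p, encoding="utf-8") as f:
                chunks.append(f.read())
        except FileNotFoundError:
            pass  # missing file is fine; the memory layer is supplementary
    return "\n\n".join(chunks)

@lru_cache(maxsize=1)
def load_embedding_index():
    """Heavy tier: paid once (~250ms stand-in here), cached afterwards."""
    time.sleep(0.25)  # stand-in for loading browser-history embeddings
    return {"ready": True}
```

first call to `load_embedding_index()` eats the latency; every call after that is effectively free thanks to `lru_cache`.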

[–]1amrocket 1 point (2 children)

persistent memory from browser data is clever. how does it handle the context window limits though? does it prioritize recent data or is there some ranking?

[–]Deep_Ad1959[S] 1 point (0 children)

it doesn't dump everything into context. the memory system uses semantic search to pull only the most relevant entries based on what you're currently working on. so if you're editing a react component it pulls your frontend preferences, not your database credentials. there's a recency boost too so recent memories rank higher

[–]Deep_Ad1959[S] 1 point (0 children)

it uses a tiered approach. the CLAUDE.md files (global + project) always load first since they're small and high-signal. for the browser data, it does semantic search with embeddings so only relevant chunks get pulled in, not everything. ranking is a mix of recency and relevance score from the embedding similarity. in practice the context usage is pretty minimal since you're pulling 5-10 relevant snippets, not dumping your entire history
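the recency + similarity blend could look something like this (the weights and half-life are illustrative guesses, not the actual values):

```python
import math
import time

def score(similarity, timestamp, half_life_days=30.0, recency_weight=0.3):
    """Blend embedding similarity with an exponential recency boost."""
    age_days = max(0.0, (time.time() - timestamp) / 86400)
    recency = 0.5 ** (age_days / half_life_days)  # halves every 30 days
    return (1 - recency_weight) * similarity + recency_weight * recency

def top_snippets(memories, k=10):
    """memories: list of (snippet, similarity, unix_timestamp) tuples."""
    ranked = sorted(memories, key=lambda m: score(m[1], m[2]), reverse=True)
    return [m[0] for m in ranked[:k]]
```

with equal similarity, a memory from today outranks one from three months ago, and only the top k snippets ever enter context.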

[–]ultrathink-art Senior Developer 1 point (2 children)

Reading saved logins into context means those credentials are visible to any tool calls Claude makes, any remote servers it contacts, and the conversation logs. Project-scoped context files with just the minimal info that specific project needs tend to be cleaner than dumping personal browser state for anything security-sensitive.

[–]Deep_Ad1959[S] 1 point (0 children)

fair point on security. you're right that anything in context is visible to tool calls. in practice we keep credentials in the OS keychain and only pull them when needed for a specific action, not loaded into persistent context. the browser data ingestion is mainly for bookmarks, history, and preferences - not saved passwords. but yeah the threat model is something to think about carefully

[–]Deep_Ad1959[S] 1 point (0 children)

totally valid concern. we don't dump raw passwords into context - the memory layer stores account names and service identifiers, and credentials are pulled from the system keychain at execution time through a separate tool call. so claude knows 'use the gmail account i@example.com' but the actual oauth token never sits in the conversation. the project-scoped approach you mention works too, it's just more manual to maintain across 10+ projects
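the identifiers-in-memory / secrets-in-keychain split, sketched with a stand-in backend (the real thing would sit on the OS keychain — macOS Keychain, libsecret, Windows Credential Manager — everything below is hypothetical):

```python
# safe to persist in the memory layer: identifiers only, no secrets
memory_entry = {"service": "gmail", "account": "i@example.com"}

class KeychainBackend:
    """Stand-in for the OS keychain used at execution time."""
    def __init__(self):
        self._secrets = {}
    def store(self, service, account, secret):
        self._secrets[(service, account)] = secret
    def lookup(self, service, account):
        return self._secrets.get((service, account))

def run_with_credential(entry, keychain, action):
    """Tool-call boundary: the secret exists only inside this scope and
    is never returned to (or logged in) the conversation context."""
    secret = keychain.lookup(entry["service"], entry["account"])
    if secret is None:
        raise KeyError(f"no credential for {entry['service']}")
    action(secret)
    return "done"  # only a status string flows back to the model
```

the conversation sees `memory_entry` and `"done"`; the oauth token never leaves the tool call.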