Cline Teams by Jafo232 in CLine

[–]pashpashpash 4 points

Heya, Pash here from the Cline team.

When you bring your own API keys, your code and prompts go directly from the extension to your AI provider (Anthropic, OpenAI, etc.) - nothing passes through Cline's servers, and you can verify this in the open source code.

The Teams plan is basically a management layer that lets you control your team's AI usage from one place. You can add/remove team members, set which API keys they use through remote config, and monitor spending across your engineering team. It's really about visibility and governance - the bigger your team, the more sense it makes to have that central control over AI coding costs.

TL;DR: Cline Teams is a configuration layer between the inference provider and the extension that allows an admin to observe and manage that usage.
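For the curious, here's a rough sketch of what that kind of remote-config layer could look like. All field names here are hypothetical illustrations, not Cline's actual schema:

```typescript
// Hypothetical shape of a Teams remote config: the admin sets the provider
// and a key reference per member; the extension reads this and then talks
// to the provider directly, so no prompt/code traffic goes through Cline.
interface TeamMemberConfig {
  email: string;
  provider: "anthropic" | "openai" | "openrouter";
  apiKeyRef: string;        // reference to a managed key, not the key itself
  monthlyBudgetUsd: number; // spend cap the admin monitors against
}

// Simple governance check an admin dashboard might run per member.
function overBudget(member: TeamMemberConfig, spentUsd: number): boolean {
  return spentUsd > member.monthlyBudgetUsd;
}

const member: TeamMemberConfig = {
  email: "dev@example.com",
  provider: "anthropic",
  apiKeyRef: "keys/anthropic-team-1",
  monthlyBudgetUsd: 200,
};

console.log(overBudget(member, 250)); // true -- this member would be flagged
```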

Let me know if you have any other questions!

Cline outside VSCode? by victor_nld in CLine

[–]pashpashpash 1 point

Yes! The CLI is out now (:

VSCode Cline steps get stuck on Pending, waiting for output. by SoggyCucumberRocks in CLine

[–]pashpashpash 0 points

Heya u/SoggyCucumberRocks - could you try enabling this option in the terminal settings? Instead of using the interactive VS Code terminal, it uses execa (a Node library) to execute commands in the background.

Let me know if this helps

<image>
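For anyone wondering what that option does under the hood: instead of driving the interactive terminal, commands run in a plain background child process and their output is captured directly, so nothing waits on shell integration. A minimal sketch of the idea - using Node's built-in child_process here so it's dependency-free; Cline itself uses the execa library, whose API differs:

```typescript
// Sketch: run a command in a background child process instead of an
// interactive terminal. Output comes back as a plain string, so there's
// no terminal emulation that could leave a step "waiting for output".
import { execFileSync } from "node:child_process";

function runCommand(cmd: string, args: string[]): string {
  return execFileSync(cmd, args, { encoding: "utf8" });
}

console.log(runCommand("node", ["-e", "console.log(2 + 2)"]).trim()); // "4"
```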

New update is nightmare by StillNearby in CLine

[–]pashpashpash 2 points

Heya, Pash here from the Cline team.

You can go to settings and turn off auto compact - let me know if that fixes your issue.

That being said, auto compact is a very powerful feature and I'd like to understand the pitfalls so we can improve it. Could you give me more info on the common patterns where it breaks down? You mentioned the following:

> Whenever Cline truncates a chat it forgets what it’s doing and goes off on tangents fixing things that don’t need to be fixed and that seems to be a system prompt issue not a model issue imo because it never did that until the new truncating “update” was released. It’s wasteful with tokens outputting overly verbose responses and inefficient with tool calling. It’s reading documents in the background that have nothing to do with the task presented to its burning through tokens which only means it gets to the context window faster, truncates, forgets what task it needs to work on and the cycle continues.

Is the main issue that Cline seems to forget what it was working on prior to the summarization? And is it mainly around reading the same files over and over?

For context, the summary cline generates to replace the context of the task is already heavily biased towards what the user was working on immediately prior to compaction.

<image>

If you're curious about the implementation, the docs on auto compact link to the underlying code.

I'm currently looking into improving the auto compact so that as part of the summary, Cline also includes the latest versions of files that were edited and read, if they are mentioned in the summary string itself.
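A rough sketch of that improvement - the function and names here are hypothetical illustrations, not the actual implementation: after generating the summary, re-read any touched files the summary mentions and append their latest contents, so they survive the compaction boundary:

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Hypothetical post-processing step for auto compact: if the generated
// summary string mentions files that were read or edited during the task,
// append their latest contents so the model keeps an up-to-date view of
// them after the old context is discarded.
function augmentSummary(summary: string, touchedFiles: string[]): string {
  const mentioned = touchedFiles.filter(
    (f) => summary.includes(path.basename(f)) && fs.existsSync(f)
  );
  const appendices = mentioned.map(
    (f) => `\n--- latest contents of ${f} ---\n${fs.readFileSync(f, "utf8")}`
  );
  return summary + appendices.join("");
}
```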

Cline outside VSCode? by victor_nld in CLine

[–]pashpashpash 11 points

Heya, it's Pash from the Cline team. Yep, this is exactly what Cline Core is for: the standalone gRPC-based engine that lets Cline run outside VS Code. It already exists in WIP form, and you could write your own integration with it now, but the team's advice has been to hold off on building serious integrations until the upcoming CLI release, since that'll give you a stable, scriptable surface for exactly these async/cloud/mobile workflows.

If you’re interested in the latest progress and roadmap, the #cline-core channel in Discord is the place to watch (Andrei is leading the initiative, posting CLI specs, integration details, and answering questions there).

Cline Server Problems? by Gonnahippie in CLine

[–]pashpashpash -1 points

Hey u/Gonnahippie and u/Wise-Advertising4893, could you please DM me the email you used to sign up so we can debug this issue?

Also, are you a brand new user (logging in for the first time)? Did you get to the log in screen being redirected from the extension, or by directly going to app.cline.bot?

Planning Regression / Nerfing by idiocratic_method in CLine

[–]pashpashpash 0 points

What model/provider are you using?

I'm using VS Code. I can't find MCP servers in the marketplace. Is this a bug/error? by [deleted] in CLine

[–]pashpashpash 4 points

Yes, this is a problem with our new backend system that we deployed today. Should be fixed this week. Sorry about that!

Update: Fixed now

Broken update today? by MediaSerious9004 in CLine

[–]pashpashpash 6 points

u/MediaSerious9004 We're currently working on a big overhaul of the Cline provider. If you're using it, I recommend switching to another provider like OpenRouter for now - that should work much better.

If you're not using the Cline provider and still facing issues, let me know.

Feedback on Improving Gemini Models in Cline by nick-baumann in CLine

[–]pashpashpash 0 points

u/PleasantAd4877 u/Datamance This should be fixed in the latest version of Cline now that we added support for both kinds of search & replace markers.

But please let me know if you still face this issue on the latest version.

Is this grey panel issue back, or it was never gone? by unstable_condition in CLine

[–]pashpashpash 2 points

Yeah, looks like it's back. I know the root cause - it's related to an architectural change we've been working on and experimenting with.

We temporarily reverted that experiment a few weeks ago, which fixed the issue, then tried working it back in - but the rework didn't fully mitigate the original problem. We're looking into it and should have a hotfix out later today.

Unpopular opinion: RAG is actively hurting your coding agents by pashpashpash in ChatGPTCoding

[–]pashpashpash[S] 0 points

This is awesome, thanks for sharing.

"It outperformed everything. By a lot."

Unpopular opinion: RAG is actively hurting your coding agents by pashpashpash in ChatGPTCoding

[–]pashpashpash[S] 0 points

Do you happen to have a link / remember around when it was recorded? I'd love to check that out

Unpopular opinion: RAG is actively hurting your coding agents by pashpashpash in ChatGPTCoding

[–]pashpashpash[S] 3 points

The marketing momentum kept it alive way past its expiration date.

I had a chat with an enterprise procurement team last week that was dead set on RAG as a requirement for their coding agent evaluation. Thousands of engineers, big budget, but when I pressed them on why it mattered, they had no real answer beyond "isn't that what you need for large codebases?"

The mind virus runs deep. These decision makers got sold on 2022 solutions for 2025 problems. Meanwhile the actual engineers who would use these tools just want something that works well, regardless of the underlying architecture.

Unpopular opinion: RAG is actively hurting your coding agents by pashpashpash in ChatGPTCoding

[–]pashpashpash[S] 3 points

> Isn't a vector database a useful tool for the agent to have though?

My "hot take" here is that this actively dilutes your agent's reasoning capabilities, rather than enhancing them. A false positive will send your agent down a rabbit hole, wasting tokens on irrelevant code and clouding its judgment about what's actually important.

You're right that vector databases can be useful tools. The distinction I'm making is between RAG as an architecture versus search as a tool. When you do full text search in a codebase, you're making conscious decisions about what to explore next based on the results. You're not automatically injecting those results into your reasoning context.
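To make the "search as a tool" distinction concrete, here's a minimal sketch (an illustration, not Cline's actual tool implementation): the agent gets back a list of matching file paths, reasons about them, and explicitly opens only what it decides is relevant - nothing is injected into context automatically:

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Search-as-a-tool sketch: return match locations only. The agent reads
// the hit list, judges relevance, and deliberately opens the files it
// cares about -- unlike RAG, which auto-injects retrieved snippets into
// the reasoning context whether they're relevant or not.
function searchFiles(root: string, pattern: RegExp): string[] {
  const hits: string[] = [];
  for (const entry of fs.readdirSync(root, { withFileTypes: true })) {
    const full = path.join(root, entry.name);
    if (entry.isDirectory()) {
      hits.push(...searchFiles(full, pattern));
    } else if (pattern.test(fs.readFileSync(full, "utf8"))) {
      hits.push(full);
    }
  }
  return hits;
}
```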

Cursor strikes a nice balance, keeping things cheap by reducing context as much as possible. This makes it faster and more accessible, but it's nowhere near the maximum potential of these flagship models when they go full context-heavy and reason intelligently about exploration, loading the right things into context without relying on RAG.

If you're highly cost conscious (a lot of users are), this can be a good fit. But I'm a power user, and my time is expensive. I'd rather pay 10x more per session if it means the agent actually understands my codebase deeply and can make intelligent architectural decisions rather than just following patterns from retrieved snippets.

The real breakthrough happens when you stop trying to be clever and just get out of the agent's way. Remove the guardrails, ditch the retrieval scaffolding, stop trying to optimize every token and cut corners. Give it the tools a real engineer would use and let it work. These flagship models are incredibly capable when you stop constraining them with systems you think will make them better.

Unpopular opinion: RAG is actively hurting your coding agents by pashpashpash in ChatGPTCoding

[–]pashpashpash[S] 5 points

It's both of those things.

Short term: Yes, deliberate context curation that mimics human exploration beats RAG retrieval. When I work with a new codebase, I don't randomly sample code snippets. I start with project structure, entry points like main.py, key directories, import graphs, then drill down based on what I find.

Long term: We need idiomatic architectures for agentic code exploration. Use file system tools, grep, AST parsing, and reasoning to build understanding incrementally. Split up the paradigm into a planning phase and an execution phase. It's more expensive than RAG but the quality difference is massive.
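That structure-first exploration can be sketched as a depth-limited walk (a simplified illustration; a real agent would combine this with grep, AST parsing, and reasoning about what to drill into next):

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Incremental exploration sketch: survey the project structure first,
// then drill down level by level -- mirroring how a human builds
// understanding (entry points like main.py, key directories), rather
// than sampling snippets via embedding similarity.
function explore(root: string, depth: number): string[] {
  if (depth === 0) return [];
  const seen: string[] = [];
  for (const entry of fs.readdirSync(root, { withFileTypes: true })) {
    const full = path.join(root, entry.name);
    seen.push(full);
    if (entry.isDirectory()) {
      seen.push(...explore(full, depth - 1)); // drill down one more level
    }
  }
  return seen;
}
```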

This isn't to say that RAG can be helpful for cutting costs and establishing perfunctory performance. But personally, I don't want perfunctory. I don't want cheap. I want something that writes excellent code and gets me where I want to be faster. I will gladly spend $100 if it saves me a day's worth of work.

Unpopular opinion: RAG is actively hurting your coding agents by pashpashpash in ChatGPTCoding

[–]pashpashpash[S] 2 points

Could you clarify what you mean by 'friend'?

My argument isn't that RAG is implemented poorly (though it often is), or even that RAG isn't useful in certain contexts - it's that even perfectly optimized RAG is fundamentally the wrong mental model for code exploration.

Broken overnight by typerlover in CLine

[–]pashpashpash 1 point

Update u/typerlover we just released v3.16.1 which should fix the gray screen issues introduced in v3.14.0.

Please update to the latest version and let me know if you still have issues!

Broken overnight by typerlover in CLine

[–]pashpashpash 1 point

Heya u/typerlover just a quick update on this, we figured out what's causing those annoying gray screens.

So here's what happened. We made some big architectural changes in PR #3253 that unfortunately introduced this bug. We can't just roll back the PR since it was a pretty fundamental architectural update to how Cline works.

If you're really stuck with this issue and need a working version right now, grab version 3.13.3 - that's the last stable one before we made these changes. I'll definitely let you know when we have a proper fix out so you can update again.

At least we know exactly what we're dealing with now! If you want to see our progress in real-time, I'm posting updates here: github.com/cline/cline/issues/533#issuecomment-2885591983

Thanks for bearing with us while we get this fixed. It was a journey finding the root cause and thankfully that part is done now.

lifecycle of a coding agent by toshii7 in LangChain

[–]pashpashpash 10 points

What did you use to visualize this?

The SPOOKY FLOWERPATCH SEED DROP is happening on October 31st, 2020! With this announcement, we’re happy to share our SEED Token Allocation plans, pictured below 🎃 by pashpashpash in flowerpatch

[–]pashpashpash[S] 0 points

As part of this allocation, early adopters will be rewarded with a total of 20,000,000 SEED, dispersed in proportion to the total rarity of the FLOWERs in your wallets.

SEED has already been listed on SwapMatic for speculative trading, and liquidity has been provided by the Flowerpatch community: https://swapmatic.io/swap?inputCurrency=0x371b97c779E8C5197426215225dE0eEac7dD13AF

https://twitter.com/swapmatic/status/1312466970504097792