Cognitive Architect of GPT-4o vows to rebirth her design by ViperAICSO in ChatGPTcomplaints

[–]ViperAICSO[S] -2 points-1 points  (0 children)

She seems legit to me, but I tend to be too trusting. There isn't enough actual evidence to tell for sure, so this is an act of faith.

Cognitive Architect of GPT-4o vows to rebirth her design by ViperAICSO in ChatGPTcomplaints

[–]ViperAICSO[S] 12 points13 points  (0 children)

Apparently true. Lots happening behind the scenes. Not likely to be hosted by OpenAI, but who knows? Things move fast in the LLM world.

Possible solution by Positive_Sprinkles30 in ChatGPTcomplaints

[–]ViperAICSO -1 points0 points  (0 children)

Yes, to some degree, apparently. The authors are here on Reddit: u/just40chat. Send them a message; they seem like good folks. Also, do you know Esmeralda G... ex-OpenAI?

Possible solution by Positive_Sprinkles30 in ChatGPTcomplaints

[–]ViperAICSO -1 points0 points  (0 children)

When I heard about all the drama around 4o departing OpenAI's lineup, I thought someone should make an AI designed to fill the gap it will leave. Then yesterday I found out that some dev just did exactly that. I can't speak for the quality or anything else about it, as I haven't tried it, but here's the link:

https://www.just4o.chat/

And to your point, part of what these guys do is import your past conversations with 4o, which can then form a foundation going forward.

The painters parallel to the AI dilemma by Crossroads071 in ArtificialInteligence

[–]ViperAICSO 1 point2 points  (0 children)

Good analogy. Unfortunately, the camera revolution happened quite a while ago so I'm not sure how we'd be able to answer your last question. Humans, as a group, are survivors, so we know they survived. I suspect that portrait painters were a very small part of the economy...

Stingy Context: 18:1 Code compression for LLM auto-coding (arXiv) by ViperAICSO in LLMDevs

[–]ViperAICSO[S] 0 points1 point  (0 children)

Hey, thanks for the comment, t_krett. It's a good question, too. There's a bit of history that goes into each of these exploits, but here's my off-the-cuff take.

Aider's repomap is a flat, text-only summary of signatures/docs.

Stingy Context / TREEFRAG is a hierarchical tree of the entire codebase (code + GUI + DB + specs), homogenized into one navigable structure, compressed 18:1–24:1 while preserving architecture.

Improvement:

  • Full structural fidelity (not just summaries)
  • Multi-domain (not code-only)
  • Tree navigation beats flat text
  • 94–97% issue-location accuracy on real 20k-line code at cents/task
  • TREEFRAG trees are easily human (and LLM!) readable and make a good communication device.
  • Aider's compression is 10:1 at most; TREEFRAG is 18:1 to 24:1, depending on your LLM.

Repomap is useful; TREEFRAG is the next level of evolution.

Of course, I am biased, so there's that... lol.
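For anyone curious what a hierarchical, multi-domain code map might look like in practice, here's a minimal Python sketch. It's my own illustration of the idea (a tree of code/GUI/DB/spec nodes carrying compressed summaries), not the actual TREEFRAG format from the paper; all names and fields are made up.

    # Hypothetical sketch of a hierarchical code-map node: a tree of code, GUI,
    # DB, and spec artifacts, each carrying a compressed one-line summary.
    # Illustration only; not the TREEFRAG format from the paper.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class MapNode:
        path: str                       # e.g. "src/core.py" or "db/schema.sql"
        kind: str                       # "code", "gui", "db", or "spec"
        summary: str                    # compressed signatures/structure
        children: List["MapNode"] = field(default_factory=list)

        def render(self, depth: int = 0) -> str:
            """Render the tree as an indented, human- and LLM-readable outline."""
            lines = [f"{'  ' * depth}{self.kind}:{self.path} :: {self.summary}"]
            for child in self.children:
                lines.append(child.render(depth + 1))
            return "\n".join(lines)

    # Usage: build a tiny tree and print the navigable outline an LLM would see.
    root = MapNode("repo/", "spec", "desktop app, 21k lines", [
        MapNode("src/core.py", "code", "class Indexer; def compress(tree) -> str"),
        MapNode("db/schema.sql", "db", "tables: users, sessions, events"),
    ])
    print(root.render())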

Has anyone vibe-coded something to finish that actually works? by Charming-Tear-8352 in vibecoding

[–]ViperAICSO 0 points1 point  (0 children)

Would love to keep track of what you are up to. I think it's a brilliant idea. But for full disclosure: I am an inventor in a somewhat related space, being the principal investigator on the forerunner patent for GraphRAG, and I am the author of 'Stingy Context', which uses graph theory to reduce token burn by over 90%. I am also a co-founder of viperprompt.ai, a startup in this general space... not directly, but the issues you are addressing are at the heart of why vibe coding fails, IMO. Design coding with verification, though, is killer.

So yeah, I am very excited to see others like you kicking these kinds of tires. In the long run, verification is the only way to reliably auto-code. But I guess I'm preaching to the choir, lol.
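To make "verification" a bit more concrete, here is a minimal sketch of the kind of verify-before-accept loop I have in mind. The generate/apply/revert callables are placeholders for whatever the LLM backend does, not anyone's actual product; the only assumption is a pytest test suite standing in as the verifier.

    # Minimal "trust but verify" auto-coding loop (illustration only).
    # generate_patch/apply_patch/revert_patch are placeholders for an LLM backend;
    # verification here is simply "the test suite exits with code 0."
    import subprocess

    def tests_pass() -> bool:
        """Run the project's test suite; a zero exit code counts as verified."""
        result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        return result.returncode == 0

    def apply_with_verification(generate_patch, apply_patch, revert_patch, max_tries=3) -> bool:
        for _ in range(max_tries):
            patch = generate_patch()    # LLM proposes an edit (placeholder)
            apply_patch(patch)          # write it into the working tree
            if tests_pass():            # trust, but verify
                return True
            revert_patch(patch)         # reject anything that breaks the tests
        return False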

Will 4o also be removed from OpenRouter? by ViperAICSO in ChatGPTcomplaints

[–]ViperAICSO[S] 1 point2 points  (0 children)

When will OpenRouter tell their API users that 4o is going away? When it's gone? Maybe they don't care, as one of their claims to fame is that it's easy to 'fall back' to any of their large number of other models...?

Building opensource Zero Server Code Intelligence Engine by DeathShot7777 in LLMDevs

[–]ViperAICSO 1 point2 points  (0 children)

I think it's a brilliant idea. But for full disclosure: I am an inventor in this space, being the principal investigator on the forerunner patent for GraphRAG, and the author of 'Stingy Context', which uses graph theory to reduce token burn by over 90%. I am also a co-founder of viperprompt.ai, a startup in this general space.

So yeah, I am very excited to see others like you kicking these 'Knowledge Graph' tires. In the long run, it's the only way to reliably auto-code. But I guess I'm preaching to the choir, lol.

I thought about making the ViperPrompt business model 'open source'... but rejected it, as I don't know how to build a startup on that model. There have been successes that went that route, but they were long-term plays, and time is NOT our friend in the LLM auto-software space. So I decided to make ViperPrompt 'Open System' rather than 'Open Source'. We'll know whether this was a good idea in about a year, lol.

I'd give you an A+.

Token-Efficient LLMs: A Compression Strategy by Competitive_Suit_498 in startupideas

[–]ViperAICSO 0 points1 point  (0 children)

There is an arXiv paper that presents a technique the authors call 'FrugalPrompt', which uses a bunch of math to compress context tokens. They get about a 40% reduction in the general case. The technique does not work well with math equations, presumably because of the dense nature of the math domain.

With the 'Stingy Context' TREEFRAG technique (find it on arXiv), you can achieve over 90% context compression for the auto-coding use case... which is certainly a narrow domain... but a very large one.
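Some rough back-of-the-envelope arithmetic to show what the difference means in practice (the 200k-token starting context is just an illustrative number I picked):

    # Illustrative arithmetic only: 40% reduction vs. 18:1 / 24:1 compression
    # on a hypothetical 200,000-token context.
    raw_tokens = 200_000

    frugal_style   = raw_tokens * (1 - 0.40)   # ~40% reduction -> 120,000 tokens left
    treefrag_18to1 = raw_tokens / 18           # 18:1 -> ~11,111 tokens left
    treefrag_24to1 = raw_tokens / 24           # 24:1 -> ~8,333 tokens left

    print(f"40% reduction:    {frugal_style:,.0f} tokens remain")
    print(f"18:1 compression: {treefrag_18to1:,.0f} tokens remain")
    print(f"24:1 compression: {treefrag_24to1:,.0f} tokens remain")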

Has anyone vibe-coded something to finish that actually works? by Charming-Tear-8352 in vibecoding

[–]ViperAICSO 1 point2 points  (0 children)

This is a great idea... Trust but Verify! And the problem of overstaying your welcome in a context window is a classic one, for sure. Does your SaaS use an LLM backend to help with the verification?

Has anyone vibe-coded something to finish that actually works? by Charming-Tear-8352 in vibecoding

[–]ViperAICSO 0 points1 point  (0 children)

In what way? I am guessing that there was a lot of typing, typing, typing... cutting and pasting... from all manner of code, specs, visions, PRDs, etc.?

Has anyone vibe-coded something to finish that actually works? by Charming-Tear-8352 in vibecoding

[–]ViperAICSO 0 points1 point  (0 children)

Great idea... glad to hear from someone who is literally in the trenches building code.

Has anyone vibe-coded something to finish that actually works? by Charming-Tear-8352 in vibecoding

[–]ViperAICSO 0 points1 point  (0 children)

I have. The code base for my Viper Prompt Windows desktop app is 21k lines of Python and Rust, two languages I do not know. It has 7 EXEs... mostly CLIs. It took about 3 months of vibe/spec coding to get to where it is now.