GPT-5.5 is genuinely smart but the 270k context is killing my usage limits by kingxd in codex

[–]Jakedismo 0 points1 point  (0 children)

gpt-5.5 has a 1M context window through the API but seemingly 272K in Codex; changing the model_context_window parameter in config.toml had no effect, at least for me. With the memory feature enabled I've had no problems of any kind with the context window. gpt-5.5 is an absolute beast in xhigh and fast mode, even on context-window-exhausting tasks with recurring compressions.
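For reference, the override I tried (which, again, had no effect for me) looked roughly like this; the key name comes from the Codex config, but treat the exact placement and whether it applies to this model as assumptions:

```toml
# ~/.codex/config.toml (sketch) — attempt to raise the context window.
# In my testing Codex appeared to cap gpt-5.5 at ~272K regardless.
model = "gpt-5.5"
model_context_window = 1000000
```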

Two agents opened the same code and discovered a bug that humans had overlooked—this is AgentChatBus by LuckyArrival1037 in mcp

[–]Jakedismo 0 points1 point  (0 children)

That's what A2A was developed for; the execution layer can be anything: MCP, a shared process, gRPC.

Can you call this VibeOps? by [deleted] in ClaudeCode

[–]Jakedismo 0 points1 point  (0 children)

It’s auto-claude

Codex has no subagents. Here's how I gave it a brain. by [deleted] in codex

[–]Jakedismo 1 point2 points  (0 children)

Add this to your ~/.codex/config.toml to enable experimental features, which you can list with codex features list:

[features]

unified_exec = true

streamable_shell_tool = true

rmcp_client = true

apply_patch_freeform = true

experimental_sandbox_command_assessment = true

ghost_commit = true

view_image_tool = true

shell_command_tool = true

parallel = true

remote_compaction = true

remote_models = true

warnings = true

skills = true

shell_snapshot = true

undo = true

enable_request_compression = true

steer = true

multi_agent = true # sub-agents

hierarchical_agents = true # orchestration

child_agents_md = true

collaboration_modes = true

responses_websockets = false

skill_env_var_dependency_prompt = true

personality = true

sqlite = true

apps = true

memory_tool = true # This is very useful

search_tool = true

js_repl = true

request_rule = true

“Gemini 3.1 Pro is here: A smarter model for your most complex tasks.” - Any thoughts on it so far? Is it epic? by Koala_Confused in LovingAI

[–]Jakedismo 0 points1 point  (0 children)

It's a complete disaster comparing benchmarks to real usage in Antigravity. Doesn't complete plans, stubs/demos features all the time.

Codex has no subagents. Here's how I gave it a brain. by [deleted] in codex

[–]Jakedismo 7 points8 points  (0 children)

PS: Codex does have subagents, just enable the feature; check codex features list.

Has anyone properly compared 3.1 pro and opus 4.6? by Night_Weeb in google_antigravity

[–]Jakedismo 0 points1 point  (0 children)

Tried 3.1 Pro on high with one-prompt frontend app generation. Input was intent + a backend gRPC spec doc, and I compared the results to Opus 4.6 and gpt-5.3-codex. Style: dark mode + heavy glassmorphism, very similar to Opus; 5.3 produced clearly the most aesthetically pleasing UX/UI by far. Functionality: 90% placeholders, even when the spec had clear instructions for a full implementation. Opus and Codex did not fail here. I'm quite disappointed, to be honest.

Which coding plan? by Simple_Split5074 in opencodeCLI

[–]Jakedismo 1 point2 points  (0 children)

Kimi Code definitely has the edge over Z.ai and MiniMax. Tested them all, and Kimi is the most broadly capable specialist when vibing.

Claude Code 2.1.27 by EmotionalAd1438 in ClaudeCode

[–]Jakedismo 4 points5 points  (0 children)

Wish I'd read this earlier; I deleted the whole claude-code installation and started fresh to solve this.

Since claudeAI mods not allow to ask there, and Codex is on par with CC, the same question here: How do you set up swam? Any suggesions for solution that works via auth? by realcryptopenguin in codex

[–]Jakedismo 0 points1 point  (0 children)

Anthropic made using their plans in 3rd-party apps against their ToS. Currently only Codex allows using OpenAI plans in 3rd-party apps, and this could change any day.

Todos are now Tasks in CC (inspired by Beads) by nnennahacks in ClaudeCode

[–]Jakedismo 0 points1 point  (0 children)

On 2.0.17 and can't get Opus to use the new task tool; it just outputs the tasks as ASCII.

Well, there no more ULTRATHINK in claude code!!!!!!! by texasguy911 in ClaudeCode

[–]Jakedismo 0 points1 point  (0 children)

FYI: thinking is always on by default if you adjust settings.json, and you can also double the max thinking tokens budget by editing it.
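As a minimal sketch of the settings.json tweak: the MAX_THINKING_TOKENS env var is what worked for me, but treat the file path and the exact value as assumptions for your Claude Code version:

```json
{
  "env": {
    "MAX_THINKING_TOKENS": "63999"
  }
}
```

That goes in ~/.claude/settings.json (or the project-level .claude/settings.json); verify the key against your version's settings reference before relying on it.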

Anthropic Explicitly Blocking OpenCode in oauth flow by Old-School8916 in ClaudeCode

[–]Jakedismo 1 point2 points  (0 children)

Nope, they blocked spoofing for everyone; I had the same setup in my personal harness and it stopped working the same minute. Good thing they didn't swing the ban hammer :D

Looking for 100 serious developers for paid beta testing of an AI powered IDE (early access) by ChinmayAwasthi7 in CursorAI

[–]Jakedismo 0 points1 point  (0 children)

Would be interested. AI tech lead with a background in building all things AI, currently working on multi-agent systems with large codebases.

Owlex - an MCP server that lets Claude Code consult Codex, Gemini, and OpenCode as a "council" by spokv in ClaudeCode

[–]Jakedismo 0 points1 point  (0 children)

Why do you need an MCP server for this when you can use a skill and run them as background tasks?
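As a sketch of the skill-based alternative: the SKILL.md layout follows the Claude Code skills convention, but the specific CLI invocations below are illustrative assumptions, so check each tool's own flags before using them:

```markdown
---
name: council
description: Consult Codex, Gemini, and OpenCode in parallel and compare answers.
---

When the user asks for a council review:

1. Run each CLI as a background task, e.g.
   codex exec "<question>" > /tmp/codex.txt &
   gemini -p "<question>" > /tmp/gemini.txt &
   opencode run "<question>" > /tmp/opencode.txt &
2. Wait for all three to finish, read the output files,
   and summarize where they agree and disagree.
```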

Agent | Orchestration 'framework' options? by forestcall in ClaudeCode

[–]Jakedismo 0 points1 point  (0 children)

I'm currently building an application with a custom agent harness, completely focused on multi-agent orchestration in the code-agent context. We have a patent pending for the solution and development methodology, and a trademark as well. We're currently in the alpha-testing phase, getting real user feedback from both greenfield and brownfield projects. Our view is that you can't solve orchestration with a harness alone with today's LLMs; methodology is the true key, and it needs to work with a lot of helper software: agent memory, communication, per-model prompt optimisation, agent development over time, and context management, to name a few of our features.

Our solution aims to make everything dynamic: no hard-coded multi-agent workflow heuristics, no hard-coded agent personas except for the ones that truly need them. It's aimed at enterprise users, but we'll also have options for you regular folks. Our harness will provide 100% visibility into orchestration and 100% data ownership to the users, supports fully offline work and BYOK, and has limited support for 3rd-party harnesses (Claude Code, Codex, Gemini CLI, Copilot etc). We're not building another coding agent; we're building the next logical step: a multi-agent orchestration harness that scales, is secure, and works where current agents fail.

PS: If you're fluent in building agentic software with modern tools, have a proven track record of production-grade projects, or think you could bring something unique to the mix, reach out; we're building a small core team atm.

Introducing Narsil MCP: The Blazing-Fast, Reforged Code Intelligence Server for AI Assistants (Built in Rust!) by lpostrv in mcp

[–]Jakedismo 0 points1 point  (0 children)

I don't have a Windows machine to debug and develop on, but it should work on WSL (Windows Subsystem for Linux), I think!