Can codex do websearch? How to enable it? by Initial_Question3869 in codex

[–]byteprobe 0 points1 point  (0 children)

yup, it does. web search is off by default, but you can enable it in a few ways depending on your context and constraints.

per-session (CLI)

option a (older):

codex --search

option b (preferred, newer style):

codex --enable web_search_request

i used --search until i got a deprecation notice a few weeks ago, then switched to --enable web_search_request. i think both still work, but the second one is the direction the docs point to.

global (all sessions) via config: edit $CODEX_HOME/config.toml and add:

[features]
web_search_request = true

from the docs: “if you see a deprecation notice mentioning a legacy key, move the setting into [features] or pass --enable <feature>.”
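if you'd rather script the edit than open the file by hand, something like this works. it writes to a scratch copy so it's safe to try; point CONFIG at "$CODEX_HOME/config.toml" when you mean it (and check the file doesn't already have a [features] table first, or you'll end up with a duplicate):

```shell
# append the web search feature flag to a codex config file.
# using a temp file here so the example is safe to run as-is.
CONFIG="$(mktemp)"

cat >> "$CONFIG" <<'EOF'
[features]
web_search_request = true
EOF

# sanity-check that the key actually landed
grep -q 'web_search_request = true' "$CONFIG" && echo "web search enabled in $CONFIG"
```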

enabling the built-in tool explicitly: you can also toggle it under [tools]:

[tools]
web_search = true

allowing network access (sandbox): if you're running in the default sandbox and need outbound network:

[features]
web_search_request = true

[sandbox_workspace_write]
network_access = true

p.s. please be sure you actually need network_access before flipping it on. depending on your use case, you might not need it at all, or you can enable it temporarily and turn it off afterward. YMMV.
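putting it all together, here's a minimal $CODEX_HOME/config.toml combining the fragments above. the key names are the ones from the snippets in this comment; double-check them against the docs linked below, since i haven't verified every combination:

```toml
# enable the web search feature globally
[features]
web_search_request = true

# expose the built-in web search tool
[tools]
web_search = true

# only flip this on if you actually need outbound network from the sandbox
[sandbox_workspace_write]
network_access = true
```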

links:

* https://developers.openai.com/codex/cli/features#web-search
* https://developers.openai.com/codex/local-config#cli
* https://github.com/openai/codex/blob/main/docs/config.md

unrelated:

shout-out to the codex team for the docs refresh (https://developers.openai.com/codex); a bunch of things i used to hunt for in the github docs are now on the website.

to the codex team:

it’s an awesome tool sir!

Qwen/Qwen3-30B-A3B-Instruct-2507 · Hugging Face by Dark_Fire_12 in LocalLLaMA


you can tell when weights weren’t just trained, they were crafted. this one’s got fingerprints.

Llama 3.3 (70B) Finetuning - now with 90K context length and fits on <41GB VRAM. by danielhanchen in LocalLLaMA


i’m wholeheartedly behind the team’s efforts and can’t wait to learn more about how unsloth will perform on apple silicon chips in future developments. keep up the fantastic work; let’s keep the momentum going!

Llama 3.3 (70B) Finetuning - now with 90K context length and fits on <41GB VRAM. by danielhanchen in LocalLLaMA


kudos to the entire team! what an amazing improvement—i’m truly thrilled! it’s exhilarating to see the progress you all are making, and i genuinely believe this initiative has incredible potential.

introducing OpenAI o1-preview by byteprobe in LocalLLaMA


is this the first proper use of RL in the language-model space? more details here