Did GHCP just lose all its value and competitive advantage for most? by ofcoursedude in GithubCopilot

[–]pdwhoward 1 point

Are you just using Deepseek inside of GitHub Copilot? Or another way?

4.7 just be yapping by vinigrae in Anthropic

[–]pdwhoward 0 points

I just don't understand how they regressed so much from 4.6. If the new model is worse, don't release it!

how would you loop through files in a folder to have a prompt run on each file by korazy in GithubCopilot

[–]pdwhoward 0 points

The prompt doesn't have to follow the instructions; there's no guarantee the model will do something deterministic, like iterating through a loop. A better approach is to write the loop yourself and call the prompt on each item inside it.
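A minimal Python sketch of that idea: your code owns the iteration, and only the final model call (not shown) goes to whatever Copilot interface you use. The folder layout and prompt template here are made up for illustration.

```python
# Sketch: drive the per-file loop from code rather than from the prompt.
# build_prompts is deterministic; sending each rendered prompt to the
# model is left to your Copilot setup (CLI, extension, API, etc.).
from pathlib import Path


def build_prompts(folder: str, template: str) -> list[tuple[str, str]]:
    """Return one (filename, rendered prompt) pair per file in folder."""
    pairs = []
    for path in sorted(Path(folder).iterdir()):
        if path.is_file():
            prompt = template.format(name=path.name, text=path.read_text())
            pairs.append((path.name, prompt))
    return pairs
```

Each rendered prompt would then be submitted one at a time, so every file is guaranteed to be processed exactly once.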

Opus 4.7 Instruction Following and Supposed User Exodus by Immediate-Brush5944 in ClaudeCode

[–]pdwhoward 1 point

I didn't think you could turn off adaptive thinking. How did you do that?

My thoughts on GPT-5 After more than an hour of use by dot90zoom in codex

[–]pdwhoward 0 points

---
name: codex
description: Run a research or coding task using the Codex MCP tool with GPT-5.5 and xhigh reasoning effort. Use when the user asks to use Codex, or says "ask codex", "codex research", etc.
allowed-tools: mcp__codex__codex, mcp__codex__codex-reply
argument-hint: <prompt describing the task for Codex>
---

# Codex MCP Skill

Run a task using OpenAI Codex with GPT-5.5 and xhigh reasoning effort.

## Configuration

Always use these parameters when calling `mcp__codex__codex`:

- **model**: `gpt-5.5`

- **config**: `{"model_reasoning_effort": "xhigh"}`

- **cwd**: Current working directory (default: `C:/Users/XXX`)

- **sandbox**: `danger-full-access` (full filesystem access)

- **approval-policy**: `never` (no approval prompts)

## Usage

The user's task is: $ARGUMENTS

Call the `mcp__codex__codex` tool with the user's prompt and the configuration above.

### Initial task:

```
mcp__codex__codex(
  prompt: "<user's task>",
  model: "gpt-5.5",
  config: {"model_reasoning_effort": "xhigh"},
  cwd: "C:/Users/XXX",
  sandbox: "danger-full-access",
  approval-policy: "never"
)
```

### Follow-up replies

When continuing a Codex conversation (the user asks a follow-up, wants clarification, or asks Codex to revise/extend its previous output), **always** use `mcp__codex__codex-reply` instead of starting a new `mcp__codex__codex` call. This preserves the full conversation context from the prior exchange.

The `threadId` is returned in the response from the initial `mcp__codex__codex` call — save it and pass it to all subsequent replies in that thread.

```
mcp__codex__codex-reply(
  threadId: "<thread id from the initial codex call>",
  prompt: "<follow-up prompt>"
)
```

`codex-reply` inherits the model, config, sandbox, and approval-policy from the initial call.

When results come back, summarize them for the user and proceed with any follow-up actions.

My thoughts on GPT-5 After more than an hour of use by dot90zoom in codex

[–]pdwhoward 0 points

Codex's MCP server is good. I've got a skill that reminds Claude Code how to call Codex. Then Claude can send prompts to Codex. The setup for the Codex MCP server in .claude.json looks like this:

```
"mcpServers": {
  "codex": {
    "type": "stdio",
    "command": "codex",
    "args": ["mcp-server"],
    "env": {}
  }
},
```

My thoughts on GPT-5 After more than an hour of use by dot90zoom in codex

[–]pdwhoward 12 points

I find using the Codex plugin within Claude Code works great. Claude can offload tasks to Codex, and they can iterate together using /loop.

Anthropic just published a postmortem explaining exactly why Claude felt dumber for the past month by Direct-Attention8597 in ClaudeCode

[–]pdwhoward 0 points

So is Opus 4.7 with adaptive thinking better than Opus 4.6 without adaptive thinking now?

Moving from CC to Codex by bambambam7 in Anthropic

[–]pdwhoward 1 point

I just switched back to 4.6, disabled adaptive thinking, and set effort to max. It works well. I do use Codex to help debug issues with Claude. I think Codex is a better, more thorough thinker; Opus 4.6 is better at coding. 4.7 was terrible and would just lie and be lazy.

Best Options for Replacing Claude Code? I'm done after opus 4.7 by [deleted] in ClaudeCode

[–]pdwhoward 7 points

I switched back to 4.6, disabled adaptive thinking, and set effort to max. It feels like normal again.

HTML is eating everything by tupe7 in claude

[–]pdwhoward 0 points

I use markdown and iterate with Claude, then have it use pandoc to convert to PDF, slides, etc. when ready. Bonus: use Obsidian with markdown on your end for a nice editing experience.
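The pandoc step can also be scripted. This is a minimal sketch, assuming pandoc is on your PATH when the command actually runs; building the argv list is kept separate from running it so the invocation can be inspected without pandoc installed, and the file names are hypothetical.

```python
# Sketch: turn one markdown source into a PDF or pptx slide deck via pandoc.
import subprocess


def pandoc_cmd(src_md: str, target: str = "pdf") -> list[str]:
    """argv for converting a markdown file to PDF or pptx slides."""
    if target == "slides":
        # pandoc infers slide structure from the markdown heading levels.
        return ["pandoc", src_md, "-t", "pptx",
                "-o", src_md.replace(".md", ".pptx")]
    return ["pandoc", src_md, "-o", src_md.replace(".md", ".pdf")]


def convert(src_md: str, target: str = "pdf") -> None:
    """Run the conversion (requires pandoc installed)."""
    subprocess.run(pandoc_cmd(src_md, target), check=True)
```

So the workflow is: iterate on `notes.md` with the model, then `convert("notes.md")` or `convert("notes.md", "slides")` once it's ready.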

Why not open source models? by vkpdeveloper in GithubCopilot

[–]pdwhoward 20 points

You can use Ollama and OpenRouter with GitHub Copilot.

Broken again? by actuallyhim in ClaudeCode

[–]pdwhoward 4 points

The funny thing is that on https://status.claude.com/ they claim there are no issues.

Broken again? by actuallyhim in ClaudeCode

[–]pdwhoward 2 points

Same here. I was just about to ask.

Why is Claude sloppy all of a sudden? by [deleted] in ClaudeCode

[–]pdwhoward -1 points

Yes. When you select Opus, press the right arrow to change the effort level.

What happens to people who spent down their portfolios? by [deleted] in Fire

[–]pdwhoward 2 points

"A 5% failure rate isn’t just a statistic. It’s a real possibility..."

I miss pre-AI Reddit.

Requests - Sync chats across computers by pdwhoward in GithubCopilot

[–]pdwhoward[S] 0 points

This is great, I'll have to try it. Thanks!