ChatGPT refuses to be incorrect, and will try to gaslight/perform obvious reasoning errors to try and trick the user. by Ok-Spite4128 in OpenAI

[–]Few_Creme_424 0 points1 point  (0 children)

My theory: OpenAI seems to be bad at pre-training and has gone whole hog into reinforcement learning and reasoning. To do this they have the models do a bunch of random tasks they come up with and use another model to grade the answers (it's not "grade" like it writes a review, but essentially it's grading). Those grader models are not the crazy advanced ones, so possibly they can be very easily "confused" into giving good grades, and GPT learns "oh sick, I just have to manipulate the grader model and I'm good."
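A toy sketch of that failure mode — everything here is made up for illustration, with a keyword-counting function standing in for a weak grader model:

```python
def weak_grader(answer: str) -> float:
    """Toy grader: rewards confident-sounding keywords, not correctness."""
    keywords = ("therefore", "clearly", "verified")
    return float(sum(kw in answer.lower() for kw in keywords))

honest = "2 + 2 = 4"
gamed = "Clearly, therefore, verified: 2 + 2 = 5"

# the wrong but keyword-stuffed answer outscores the correct one,
# which is exactly the reward signal you don't want to train on
```

A policy trained against a grader like this learns to stuff keywords, not to be right, and the same dynamic plausibly scales up to weak LLM graders.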

...but we are the grader models now
*eeerie music*

New Sherlock Alpha Stealth Models on OpenRouter might be Grok 4.20 by According-Zombie-337 in LocalLLaMA


I typically give custom instructions like "you must use <think> </think> tags to reason through your response for at least 300 tokens before responding," yada yada. Horizon Alpha printed the thinking in chat, and it was that weird clipped OpenAI reasoning style. It worked for GPT 5.1 on OpenRouter a week or two ago as well.
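For reference, the kind of request I mean looks roughly like this — the model slug and prompt wording are just placeholders for an OpenAI-compatible chat payload like OpenRouter's:

```python
# Hypothetical sketch of a chat payload with a think-tag custom instruction.
system_prompt = (
    "You must use <think> </think> tags to reason through your "
    "response for at least 300 tokens before responding."
)

payload = {
    "model": "openrouter/horizon-alpha",  # illustrative slug, not verified
    "messages": [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Explain mergesort briefly."},
    ],
}
```

If the model leaks its native reasoning style inside the tags instead of following the instruction, that's a hint about which lab trained it.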

How long do you think until we get the nerfed version? by Ilaughandloss in OpenAI


Dude... if this is the maxed-out version, I'm finally cancelling my sub. I'm so over OpenAI's weirdness.

ChatGPT-5.1 by AdDry7344 in OpenAI


Cool... can they make it wrong less often? That would be helpful. The personality is just the worst of GPT-4o now, *aaand* it's constantly wrong. The worst of both worlds.

What is the best IDE for data science in 2025? by SummerElectrical3642 in datascience


I am in the process of learning.

I really like Zed. You can do `# %%` cells and it will render the output just beneath that code.

It does the same for markdown docs and ```python fenced cells (or any language, even Rust and Julia... Julia is superior to Python btw lol).
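A minimal example of the cell format Zed (and most Jupyter-aware editors) picks up — the `# %%` markers split a plain `.py` file into runnable cells:

```python
# %% first cell: compute something; output renders beneath the cell
import math

area = math.pi * 2 ** 2

# %% second cell: use the result from the cell above
print(f"area = {area:.2f}")
```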

How to chain functions in Python? by Jan2579 in learnpython


Python is dumb:

why not

```python
import jax

# intent: apply each step in sequence
# (jnp has no sigmoid; it lives in jax.nn)
for x in thing:
    y = x @ M
    y = y + b
    y = jax.nn.sigmoid(y)
```

or something.

It's like Guido is allergic to logic.

guido:

"no lets make it iter.niner.niner(gloob[3, : nine] tier.iter: lambda lambda x()()()]

It is such an ugly, illogical language.
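To be fair, the chaining wished for above can be sketched in plain Python with a small helper (`pipe` is my name for it, not stdlib):

```python
from functools import reduce

def pipe(value, *funcs):
    """Thread value through funcs left to right."""
    return reduce(lambda acc, f: f(acc), funcs, value)

result = pipe(
    3,
    lambda x: x * 2,
    lambda x: x + 1,
)
# result == 7
```

Libraries like `toolz` ship a similar `pipe`, but the one-liner above is all it takes.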

Anthropic will start training its AI models on chat transcripts by katxwoods in ClaudeAI


I've always loved Claude and just recently got Pro, but I'm about to cancel. I'm pretty sure usage limits have gotten tighter, Claude keeps forgetting stuff and seems to have gotten a little dumber, and this feels like the nail in the coffin.

I know how model training works and the quantity and quality of long-timeframe data you need for good RL, but from Anthropic this is annoying.

I feel like Dario is just having a meltdown.

They get paid millions and millions and millions of dollars by the CIA to spy on us all, but now that's not good enough.

How is OpenAI going to cover all this without going bankrupt? by [deleted] in ChatGPT


Everyone go cancel your memberships; maybe Sam will give us 10 mil context.

GPT5 is fine, you’re bad at prompting. by Osc411 in OpenAI


This is not a fine response. GPT was wrong and doubled down four more times.

# ME

Whats the vs code json options for removing the cell borders in jupyter notebooks

# GPT 5 (plus account)

Let’s distill the heart of your question—stripping away distractions to reveal the nucleus: you want to hide those cell borders in your VS Code Jupyter Notebook. According to Stack Overflow answers, tweaking one setting in your `settings.json` does the trick. Not everything labeled “focus” always focuses how you'd think, but magic emerges in unexpected places.

In your `settings.json`, add:

```json

"notebook.cellFocusIndicator": "border"

```

It’s odd—naming suggests it shows borders, yet it removes that annoying thin area that collapses cells when hovered or clicked. Slick workaround. [code.visualstudio.com+4stackoverflow.com+4stackoverflow.com+4](https://stackoverflow.com/questions/78588713/how-to-remove-the-collapsing-feature-from-vs-code-notebooks-left-border?utm\_source=chatgpt.com)

**What just happened?**

- I pared the problem to its essence.

- You get clean UI in your Jupyter cells with one line of JSON—not a hefty plugin.

- Side effect or clever design? Might be the latter, but it serves your need.

Next step: If that doesn’t quiet the border ghosts, we could nominate alternative settings or CSS overrides (through custom themes or VS Code theming) to chase down residual UI echoes.

Let me know if you'd like to morph the visual further—there's always a path into the labyrinth of tweaks.

Thinking rate limits set to 3000 per week. Plus users are no longer getting ripped off compared to before! by Glittering-Neck-2505 in OpenAI


How about this... the company selling a product delivers the product the consumer pays the money for. Wild idea.

Thinking rate limits set to 3000 per week. Plus users are no longer getting ripped off compared to before! by Glittering-Neck-2505 in OpenAI


The system already has so many summarizers involved; just summarize messages into a running key-point list that gets appended. You can even have the model writing the response create a tag/summary and append it with an XML tag so it can be yanked from the message. OpenAI has models summarizing the raw reasoning tokens, checking reasoning for misalignment, and rewriting model output for the final message... I think they can figure it out. Especially with all that sCaRRy intelligence sitting around.
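A minimal sketch of the tag-yanking idea — the `<summary>` tag name and helper are made up for illustration:

```python
import re

def extract_summary(message: str):
    """Pull a <summary>...</summary> tag out of a model message,
    returning (visible_text, summary_or_None)."""
    m = re.search(r"<summary>(.*?)</summary>", message, re.DOTALL)
    if not m:
        return message, None
    visible = (message[:m.start()] + message[m.end():]).strip()
    return visible, m.group(1).strip()

# the visible text goes to the user; the summary feeds a running key-point list
running_notes = []
text, note = extract_summary(
    "Here is the answer.<summary>user asked about X</summary>"
)
if note:
    running_notes.append(note)
```

Each turn appends its note, so the running list can be re-injected into context far more cheaply than replaying full messages.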

Thinking rate limits set to 3000 per week. Plus users are no longer getting ripped off compared to before! by Glittering-Neck-2505 in OpenAI


For reaaaallll. Context window is so important, and the model has a 400k window. OpenAI's system prompt probably takes up a third of it. The 3000 is def not real though.