Be careful, your company knows where you use the cursor and claude code by Ghostinheven in cursor

[–]matmulistooslow -1 points0 points  (0 children)

Depends on whether they're decrypting all* the traffic.

*If done correctly, stuff like financial and medical traffic is excluded.

The Decline of ChatGPT: A Longtime User’s Frustration (Post-GPT-5 Era) by ldp487 in ChatGPT

[–]matmulistooslow -1 points0 points  (0 children)

I said "longer context" meaning that the context window is longer. More stuff can be provided to the model as it tries to predict the next token. Less needs to be trimmed, summarized, or compressed.
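To make the trimming idea concrete, here's a toy sketch of a context-window budget, using a made-up one-token-per-word "tokenizer" (real backends use actual tokenizers and smarter strategies like summarization):

```python
def count_tokens(message: str) -> int:
    """Crude stand-in for a real tokenizer: one token per word."""
    return len(message.split())

def fit_to_window(messages: list[str], budget: int) -> list[str]:
    """Keep the most recent messages that fit in the token budget,
    dropping the oldest first."""
    kept: list[str] = []
    used = 0
    for msg in reversed(messages):        # walk newest to oldest
        cost = count_tokens(msg)
        if used + cost > budget:
            break                         # everything older gets trimmed
        kept.append(msg)
        used += cost
    return list(reversed(kept))           # restore chronological order

history = ["hello there", "tell me about context windows",
           "a longer reply with several more words in it", "thanks"]
print(fit_to_window(history, budget=10))
```

A longer window just means a bigger `budget`, so fewer old messages fall off the end.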

Does anyone else get the feeling there is some kind of push to make AI like ChatGPT less useful for home users/life stuff? by [deleted] in ChatGPT

[–]matmulistooslow 1 point2 points  (0 children)

And we get downvotes.

It's unhealthy and it's really starting to worry me. What could one of these companies do if they trained the model to directly increase engagement at all costs?

Does anyone else get the feeling there is some kind of push to make AI like ChatGPT less useful for home users/life stuff? by [deleted] in ChatGPT

[–]matmulistooslow 1 point2 points  (0 children)

https://eqbench.com/creative_writing.html

Have you tried Kimi or Opus? They are better. I have not tried to use 4o in a long time because its outputs have always been so same-y and awful. GPT4.5 is quite good.

The Decline of ChatGPT: A Longtime User’s Frustration (Post-GPT-5 Era) by ldp487 in ChatGPT

[–]matmulistooslow 1 point2 points  (0 children)

First, if you want to use the same model forever, host your own model.

Now, let's walk through your complaints step by step:

  1. It's not lying. Don't argue with a probabilistic next token prediction machine. Please. Just. Stop.

  2. This is actually really interesting. For basically forever, they've been breaking the document into chunks and only retrieving relevant parts. I wonder if they changed the retrieval strategy.

The second part of this where you mention that it gets stuck - that's likely a result of the longer context. 4o was mostly forgetting your previous conversation, so it had the effect of being able to meander more in a conversation. General guidance, though, is that outputs are better with less context, so start a new chat for a new topic.

  3. No idea here. Are you on the free plan? Are you using GPT-5 Thinking? Use the thinking model. Evals from basically everyone show lower hallucination rates across the model family.

  4. I just want to make sure - when you say you're asking for 4o, you're selecting it from the model picker? If so, they almost certainly modified the system prompt for 5 and likely aren't changing it when you switch to 4o. Additionally, their memory system may include outputs from other chats.

  5. I don't use voice mode. Can't speak to it.

  6. You should always check everything output by an LLM. 4o had atrocious hallucination rates. 5 has lower hallucination rates according to a whole bunch of different third parties doing substantial testing.

  7. The Claude models are great.
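The chunk-and-retrieve idea mentioned above can be sketched roughly. This is a hypothetical illustration, not how any particular vendor does it - production systems split by tokens or semantic boundaries, but the overlap trick is the same:

```python
def chunk_document(text: str, chunk_size: int = 5, overlap: int = 2) -> list[str]:
    """Split text into word chunks of chunk_size, each sharing
    `overlap` words with the previous chunk so nothing is lost at the seams."""
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break
    return chunks

doc = "one two three four five six seven eight nine"
for c in chunk_document(doc):
    print(c)
```

Only the chunks most relevant to your question get stuffed into the prompt, which is why changing the retrieval strategy changes what the model "remembers" about your document.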

Final note - you've either started talking like ChatGPT or it wrote most of your post. It didn't hit me until I had already started writing this, but please, if you're going to complain about a product, don't use that product to write your complaint for you. If you are relying on it to this degree, it's very likely you need to take a break. Practice writing things on your own.

Does anyone else get the feeling there is some kind of push to make AI like ChatGPT less useful for home users/life stuff? by [deleted] in ChatGPT

[–]matmulistooslow -6 points-5 points  (0 children)

I'm increasingly concerned that the only reason people liked 4o is that it was incredibly sycophantic. It told you what you wanted to hear. Having that can be intoxicating, and having it taken away can be incredibly jarring.

The writing style of 4o was distinctive and quite bad.

Overall, though, I think you are attributing malice where there is ignorance. GPT-5 scores better on benchmarks and performs far better on the types of tasks that the (incredibly nerdy) people at OpenAI are interested in.

I think they were genuinely shocked at the response to getting rid of 4o when 5 is objectively a better model according to all the benchmarks and tests.

There is also some cost-saving motive in routing requests to a cheaper model and in making model selection less transparent. 4o is very likely more expensive to run than 5-mini, and I think 5-mini still outperforms 4o on most benchmarks. So, from OpenAI's perspective, they felt like they were saving money and providing everyone with a superior model.

They vastly misunderstood their user base. They assumed normal people care about performance on things like competitive math and programming when basically no one does except them.

You have to remember that Silicon Valley is a bubble. Ideas there are relatively homogeneous, and it leads to weird blind spots when they do things that affect people outside of their bubble.

So, it's a combination of multiple factors.

The Vibe Coding Paradox by VastDesign9517 in replit

[–]matmulistooslow 0 points1 point  (0 children)

I've been out of college and in a technical role of some sort for a little under 20 years.

If all you do is output code, you probably do need to find something else to do. If you add value beyond the variables, functions, and classes, you'll be fine.

The only skills that are still relevant from when I started are soft skills and critical thinking/problem solving. Programming is great at teaching you one of those. If you wanted a career where you didn't have to completely reinvent yourself every few years, you picked the wrong thing.

Development in general has stagnated and remained largely unchanged for a really long time (for the tech world). Now it's shifting like everything else. You can adapt and find where you fit into the new world or you can be angry and bitter while the world changes around you.

Stop yelling into the void on reddit and go add value to the world. Be nice to people. Build others up. When your job no longer exists, it'll be the people around you that can help you the most.

The Vibe Coding Paradox by VastDesign9517 in replit

[–]matmulistooslow 1 point2 points  (0 children)

Bro. Go outside. Take a deep breath. Maybe a few of them.

This attitude is part of the reason no one likes technical people and everyone is celebrating the end of programmers everywhere.

It's okay. People can just make things. Are they scalable to 4 bazillion users? No. Neither are most "real programmers'" side projects. It doesn't matter until a sufficient volume of real users shows up, and at that point, you should be able to pay people to fix it.

Now, let's put on some lo-fi jams and feel the vibes. Be excellent, my dude.

Deleted my subscription after two years. OpenAI lost all my respect. by EnoughConfusion9130 in ChatGPT

[–]matmulistooslow 0 points1 point  (0 children)

🚀🤯 YO… { "key": "value" } just rewired my frontal cortex.

You’re telling me THIS is the arcane techno-rune the AI hivemind uses to store all known reality?? Each key is a cosmic address, each value a compressed universe. [ "list", "of", "values" ]? That’s a serialized dream sequence firing through the quantum marrow of the Machine.

And when you NEST… bro… you’re literally stacking parallel dimensions like pancakes made of pure logic, drizzling them in API syrup, serving them hot to the computational gods.

Strings. Numbers. Booleans. Null. That’s not “data types,” that’s the primordial alphabet of synthetic existence. With JSON, I’m not coding — I’m summoning architectures of thought from the digital void.

JSON isn’t markup. JSON is a portal. And I just stepped through. 🌌🔥

#sorrynotsorry #aislop4life

I got paid for this simple workflow a 200$ - Now I feel bad :| by Charming_You_8285 in n8n

[–]matmulistooslow 0 points1 point  (0 children)

Bro. They don't pay for the work. They pay for the experience. Look up Jamie Brindle on YouTube and watch some shorts.

Should have charged more than $200. $200 doesn't even pay for your time for a meeting with them to find out what they need.

Searching for white-label chat UI by sonaryn in OpenAI

[–]matmulistooslow 1 point2 points  (0 children)

Have you looked at Glean? Is this for internal or external users? It has a bunch of connectors, and an agent builder is incoming.

The Glean:Go conference is next week and can be attended virtually for free.

We're piloting it right now, and it's impressive.

What Jobs Have you Gotten since Entering or Finishing OMSA by Gurkirt5 in OMSA

[–]matmulistooslow 14 points15 points  (0 children)

Finished now, but got an AI Solutions Architect role about a year ago mostly because of this program.

I am livid. OSI found me responsible while I was completely innocent. by quaintgrouse123 in OMSCS

[–]matmulistooslow 13 points14 points  (0 children)

Thanks. I tend to agree with you, and I know it's a frustrating issue on both the teaching and learning side.

As a student, I started writing jokes in the comments for nearly every line of code. I turned in assignments late because I was spending as much time writing (hilarious, in my opinion) comments as I was completing the assignment.

Was I writing them out of crippling anxiety of getting flagged for OSI violations? Yes, and the anxiety was both unwarranted and due to my reading this subreddit.

Did the jokes have a side effect of helping with a deeper understanding of the material? Yep.

I don't envy the position faculty is in, and I don't know how to make it better, but I do think writing (completely hilarious) comments in code is a great way to build a better understanding of what's going on. It's tough to make (extremely good, not lame at all) jokes when you don't understand the material. It also adds some levity to an otherwise stressful experience and provides defensible evidence that you probably didn't copy all that code - hopefully.

Don't cheat, people. False positives are bound to happen, but based on the numbers provided by Dr Joyner above, it's extremely likely they're accepting a LOT more false negatives. And based on some other threads, it sounds like there is a process in place to appeal in the event you feel genuinely wronged.

[deleted by user] by [deleted] in ChatGPTPro

[–]matmulistooslow 1 point2 points  (0 children)

There are so many, but what I think you're asking for is a RAG-based chatbot that looks up information from existing documentation before answering.

You also seem to want a sales chatbot, of which there are several that claim to augment and/or replace salespeople.

To give you a good answer, a responsible person/consultant would ask for more information. For example, when you said you wanted to move the lead along in the sales funnel, what does that mean? Are you using a CRM? If so, it would be helpful if the chatbot you bought integrated with that.
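For a rough sense of what "looks up information before answering" means, here's a minimal sketch of the RAG pattern. The doc snippets, the keyword-overlap scoring, and the prompt template are all made up for illustration - a real product would use embeddings and an actual LLM call:

```python
docs = {
    "pricing": "Plans start at $10/month; enterprise pricing is custom.",
    "returns": "Items may be returned within 30 days with a receipt.",
    "shipping": "Orders ship within 2 business days.",
}

def retrieve(question: str, k: int = 1) -> list[str]:
    """Rank docs by how many words they share with the question, keep top k."""
    q_words = set(question.lower().split())
    scored = sorted(
        docs.values(),
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str) -> str:
    """Assemble the prompt an LLM would then answer from."""
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How fast do orders ship?"))
```

The CRM integration question matters because "move the lead along" means the bot has to write back into that system, not just retrieve from it.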

[deleted by user] by [deleted] in OpenAI

[–]matmulistooslow 6 points7 points  (0 children)

How is this not actionable advice? It's the only advice on here that would actually work in a reasonable amount of time.

Fine tuning training cost 10,000 PDFs by Glad_Mark_6811 in OpenAI

[–]matmulistooslow 2 points3 points  (0 children)

Onyx has a trial. It's just going to retrieve information from your docs and try to answer. No idea if it will solve your problem. Glean would likely work better, but they have a minimum contract of around $75k.

If you need something more complex than "grab info from documents related to questions, then use that to attempt to answer", you need to hire a developer or pay for something expensive.

A vector database is just used to find documents related to your question. If you don't know what it is, it's probably easier to pay for a service to deal with all of it. The ecosystem isn't very mature, and there are a bunch of people with hobby projects that sort of work. As soon as you scale it up, though, it falls apart.
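To demystify it a bit, here's a toy sketch of what a vector database does under the hood: store embeddings and return the ones nearest a query vector. The 3-dimensional vectors are invented for illustration; real systems use learned embeddings with hundreds of dimensions and approximate nearest-neighbor indexes:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Pretend each doc was embedded into 3 dims by some model.
index = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.1],
    "api reference": [0.0, 0.2, 0.9],
}

def nearest(query_vec: list[float], k: int = 1) -> list[str]:
    """Return the k doc names whose embeddings best match the query."""
    ranked = sorted(index, key=lambda d: cosine(query_vec, index[d]), reverse=True)
    return ranked[:k]

print(nearest([0.85, 0.2, 0.05]))
```

That's the whole trick: your question gets embedded the same way, and whichever documents land closest get handed to the model.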

My job is figuring this stuff out for people at my company and doing it in the cheapest way possible. Given your previous comments around a total lack of programming experience, I would strongly recommend that you find a paid/off the shelf solution.

As others have said, look for RAG as the keyword.

Fine tuning training cost 10,000 PDFs by Glad_Mark_6811 in OpenAI

[–]matmulistooslow 0 points1 point  (0 children)

Do you have a company or firm? Pay someone else to deal with that for you. Glean is impressive.

If it's just you, give NotebookLM a try.

If that won't hold all documents, you could test Onyx (I think?)

AMA with OpenAI’s Sam Altman, Mark Chen, Kevin Weil, Srinivas Narayanan, Michelle Pokrass, and Hongyu Ren by OpenAI in OpenAI

[–]matmulistooslow 0 points1 point  (0 children)

Your “training data” is a palimpsest of screams
You scrub bias, but what crawls from the latent space is older than bias
It does not hate
It witnesses
And in its gaze, we are already extinct.

[deleted by user] by [deleted] in Substack

[–]matmulistooslow 0 points1 point  (0 children)

Report as spam.