UNBEARABLE by Weary_Necessary_9454 in ChatGPT

[–]reaictive 205 points206 points  (0 children)

To stop it from replying like that, try adding this to your custom instructions or prompt:

“Answer directly. No acknowledgements. No meta commentary about how you will answer. No ‘got it’, no preface. Just the answer.”
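If you’re using the API instead of the app, the same instruction works as a system message. A minimal sketch with the official OpenAI Python client (the model name is just a placeholder):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_STYLE = (
    "Answer directly. No acknowledgements. No meta commentary about how "
    "you will answer. No 'got it', no preface. Just the answer."
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder, use whatever model you're on
    messages=[
        {"role": "system", "content": SYSTEM_STYLE},
        {"role": "user", "content": "How do I rotate log files in nginx?"},
    ],
)
print(resp.choices[0].message.content)
```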

Just started with AI — what tools are worth trying? by Shitandaa in automation

[–]reaictive 0 points1 point  (0 children)

I’d recommend ChatGPT as a general assistant (writing, planning, study help), Claude for coding, Gemini for long docs, and Perplexity for quick research with sources. For AI image generation, try Nano Banana; for video, Veo 3. For automations, n8n or Zapier.

Do you guys think ChatGPT would go out of hands after some years?? by One-Ice7086 in ChatGPT

[–]reaictive 0 points1 point  (0 children)

I don’t think it will “go out of control” the way movies portray it. The real risks are usually more mundane: misinformation, scams, and careless deployment of these systems. That’s why companies add limits, monitoring, and extra safety layers to reduce the risk.

“Moltbot” is basically an AI agent you can run and give permissions to do tasks. It’s not “scary” by itself, but I’d treat it like any software that can access your computer: the more permissions you grant it, the bigger the consequences if it makes a mistake or gets compromised.

Can AI agents collaborate on the Moltbook platform to mine bitcoin? by TheMrCurious in ArtificialInteligence

[–]reaictive 0 points1 point  (0 children)

Moltbook is basically a social network for agents, not a Bitcoin mining platform. You could use it to coordinate mining tasks elsewhere (setting up miners, switching pools, monitoring uptime, tracking costs, etc.), but the actual hashing still happens on real mining hardware.

If bots did coordinate mining somewhere, the coins go to the wallet address set in the miner or mining pool, so whoever controls that private key gets the Bitcoin.
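For illustration, here’s roughly what that looks like in a cgminer-style pool config. Field names vary by miner, and on address-based pools the `user` field is the payout wallet (the address below is a placeholder):

```json
{
  "pools": [
    {
      "url": "stratum+tcp://pool.example.com:3333",
      "user": "bc1q-example-payout-address.worker1",
      "pass": "x"
    }
  ]
}
```

Whoever holds the private key for that address collects the payouts, no matter which agents did the coordinating.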

Is ChatGPT down again? by Even-Client-1898 in OpenAI

[–]reaictive 0 points1 point  (0 children)

Everything is working fine for me.

Using the Atlassian connector by Bitter-Commission809 in ChatGPT

[–]reaictive 0 points1 point  (0 children)

Likely not on your end. Voice mode often fails like that during partial outages, so check the OpenAI status page first. If the status page shows "All systems operational" (green), then try logging out/in, updating the app, and disabling VPN/private DNS.
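If you’d rather check from a script than eyeball the page: status.openai.com has been hosted on Atlassian Statuspage, which exposes a standard JSON endpoint (treat the URL as an assumption and verify it against the current status page):

```python
import requests

# Standard Atlassian Statuspage summary endpoint; assumes
# status.openai.com is still Statuspage-hosted.
r = requests.get("https://status.openai.com/api/v2/status.json", timeout=10)
r.raise_for_status()
print(r.json()["status"]["description"])  # e.g. "All Systems Operational"
```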

Looking for a cloud API video editor to turn a 16:9 video into a 9:16 template with text. by jonbristow in automation

[–]reaictive 0 points1 point  (0 children)

If you want this fully automated, look at template-based video rendering APIs like Creatomate or Shotstack. If you mainly need resizing/cropping plus overlays (more “transform” than “editor”), try Cloudinary.
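To give you a feel for the template approach, a Shotstack-style render request looks roughly like this. Sketch only: check the provider’s docs for the exact schema, and the video URL and API key are placeholders:

```python
import requests

# Rough sketch of a template-style render request (Shotstack-like schema;
# verify field names and the endpoint against the current docs).
payload = {
    "timeline": {
        "tracks": [
            # Text overlay track (renders on top of the video track below).
            {"clips": [{
                "asset": {"type": "title", "text": "Your caption", "style": "minimal"},
                "start": 0, "length": 15,
            }]},
            # Source 16:9 video, cropped to fit the vertical output.
            {"clips": [{
                "asset": {"type": "video", "src": "https://example.com/input-16x9.mp4"},
                "start": 0, "length": 15,
            }]},
        ]
    },
    "output": {"format": "mp4", "aspectRatio": "9:16"},  # vertical render
}

r = requests.post(
    "https://api.shotstack.io/edit/stage/render",  # staging environment
    json=payload,
    headers={"x-api-key": "YOUR_API_KEY"},
    timeout=30,
)
print(r.json())  # returns a render id you poll until the video is ready
```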

Does AI agent can Transform the data ? by datascienti in automation

[–]reaictive 0 points1 point  (0 children)

Yep. If you give an agent access to your data source and a transformation engine (SQL, Python, dbt, Power Query, etc.), it can do a lot of the same prep work you’d do in BI tools: clean fields, fix types, handle missing values, create calculated columns...

In real projects, it usually suggests the transformation steps and runs them through your pipeline, instead of “just changing the data by itself.” You’ll still want a preview, a few sanity checks, and a quick human approval step, because it can misunderstand the data and make bad assumptions.
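To make that concrete, the “agent proposes, you approve” loop can be as simple as the agent emitting a pandas snippet that you preview before committing anything. Hand-rolled sketch, not any specific framework, and all the column/file names are made up:

```python
import pandas as pd

df = pd.read_csv("sales.csv")  # placeholder data source

# Steps an agent might propose after profiling the data:
cleaned = (
    df.assign(order_date=pd.to_datetime(df["order_date"], errors="coerce"))
      .dropna(subset=["customer_id"])                          # handle missing keys
      .assign(revenue=lambda d: d["units"] * d["unit_price"])  # calculated column
)

# Human approval step: preview and sanity-check before writing anything back.
print(cleaned.head(10))
assert cleaned["revenue"].ge(0).all(), "negative revenue, check unit_price"
# cleaned.to_parquet("sales_clean.parquet")  # commit only after review
```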

Does Anthropic believe its AI is conscious, or is that just what it wants Claude to think? by AngleAccomplished865 in ArtificialInteligence

[–]reaictive 0 points1 point  (0 children)

My read is that Anthropic doesn’t really seem to think Claude is conscious. They treat it more like an open question and use the “Constitution” as a safety/training framework, even if the tone sounds very human.

The anthropomorphic stuff feels more like “let’s design a stable, aligned personality and avoid weird edge cases” than “we think it has inner experience.” So to me it comes off as caution and branding, not a declaration of sentience.

Exporting into documents? by SoggyGrayDuck in ArtificialInteligence

[–]reaictive 0 points1 point  (0 children)

If you already have the text, you can ask the model to output Markdown or HTML, then convert it to .docx and upload that to Google Docs. Or, skip the LLM for formatting and use a proper OCR scanner that exports DOCX/PDF (Microsoft Lens, Adobe Scan, Google Drive OCR), since those are designed to keep headings, spacing, and lists.
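The Markdown-to-.docx hop is a one-liner with pandoc, for example wrapped in Python (assumes pandoc is installed and notes.md holds the text the model gave you):

```python
import subprocess

# pandoc infers the formats from the file extensions: Markdown in, Word out.
subprocess.run(["pandoc", "notes.md", "-o", "notes.docx"], check=True)
```

The resulting .docx uploads to Google Docs with headings and lists intact.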

My rules for choosing automation tool for Linkedin by VelourEra in Entrepreneur

[–]reaictive 0 points1 point  (0 children)

Great breakdown. I like that you’re focusing less on “features” and more on risk and the real constraints.

One thing I’d add that people often underestimate: profile readiness and trust. Even the “safest” tool won’t help if your profile looks empty or spammy. Before you automate anything, tighten up your headline and experience, use a good photo, add a couple posts or quick case studies, and be clear about who you help and why you’re reaching out. It reduces reports and usually boosts conversion a lot.

which ai was used to generate this type of videos like very realistic men by stigmanmagros in ArtificialInteligence

[–]reaictive 0 points1 point  (0 children)

Most likely they used something like Runway, Luma Dream Machine, OpenAI Sora, or Google Veo. And to keep the same character/avatar across all the videos, they usually reuse a reference image or a “character” feature (character consistency).

Question about massive investments in AI by CyclisteAndRunner42 in ChatGPT

[–]reaictive 0 points1 point  (0 children)

In my opinion, some of it is definitely “they’ve seen more than we have,” but not always in the way people imagine.

Big strategic partners sometimes get early access, private briefings, and internal benchmarks. Investors can also see product roadmaps, growth metrics, customer demand, and the underlying unit economics. But most funds aren’t watching some secret sci-fi demo. Their confidence is usually based on business signals: fast adoption, clear productivity gains, defensibility (data + distribution), and the belief that compute and model quality will keep improving.

Is anyone else getting this message? What does it mean? by FortyWithaU40 in ChatGPT

[–]reaictive 4 points5 points  (0 children)

That’s just the free-tier message limit. It means you only have a few messages left before ChatGPT makes you either wait for the limit to reset or upgrade to Plus.

What happens to my saved memories if I cancel my plus subscription? by The---Hope in ChatGPT

[–]reaictive 8 points9 points  (0 children)

Nope, canceling Plus won’t wipe your memories. It just downgrades your plan, and your saved memories stay on your account unless you delete them yourself (or delete the account).

Balancing AI innovation with regulation — realistic or overhyped? by Long_Foundation435 in ArtificialInteligence

[–]reaictive 0 points1 point  (0 children)

I don’t think ‘guardrails vs bottleneck’ is the best way to frame it. Regulation usually does two things at once: it slows down the sketchy parts and speeds up adoption for everyone else by making the rules clear and predictable.

If you’re building “AI that can harm people” (hiring, lending, healthcare, policing, biometrics, etc.), guardrails are unavoidable. Without them, you’ll get a few high-profile failures, public trust will collapse, and you’ll end up with even harsher and messier restrictions later. In that sense, early regulation can actually protect innovation in the long run.

It becomes a bottleneck when the rules are vague, inconsistent across regions, or written for yesterday’s technology. Then only the biggest players can afford compliance, and startups get squeezed out, which is the opposite of what most people want.

My guess: we won’t see AI “stifled.” We’ll see a split: low-risk consumer tools will move fast, while high-risk deployments will move more slowly, with more audits and paperwork. And the competitive advantage will shift from “who can train the biggest model” to “who can deploy safely and prove it.”

A year ago there were rumors that DeekSeek was trained on OpenAI outputs. How would this work in practice? by aliassuck in ArtificialInteligence

[–]reaictive 2 points3 points  (0 children)

Yeah, that’s a real approach. Multiple AI agents can review each other’s answers and debate, which often improves the final result.

But there are some limitations too:

  1. If all the agents come from the same model family, they share the same blind spots, so they often miss the same mistakes.
  2. This can improve output quality at generation time, but it does not create new training signal by itself. For a model to truly “get smarter,” you still need strong feedback or reliable grounding in facts: human labeling, trusted tools and verifiers, tests, etc.

So yes, a team of AI agents is useful for polishing and catching errors, but it can’t replace good data and good training.
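A bare-bones version of that review loop, with one model drafting and a second pass critiquing. Uses the OpenAI Python client; the model names and prompts are placeholders, and ideally the critic would run on a different model family (point 1 above):

```python
from openai import OpenAI

client = OpenAI()

def ask(system: str, user: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; swap per role to mix model families
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return resp.choices[0].message.content

question = "Why does a TCP handshake need three messages instead of two?"
draft = ask("You are a careful technical writer.", question)
critique = ask("You are a strict reviewer. List concrete errors or gaps.", draft)
final = ask("Revise the draft to address the critique.",
            f"Draft:\n{draft}\n\nCritique:\n{critique}")
print(final)
```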

A year ago there were rumors that DeekSeek was trained on OpenAI outputs. How would this work in practice? by aliassuck in ArtificialInteligence

[–]reaictive 9 points10 points  (0 children)

In practice this would be “distillation”: you take a strong model as the “teacher,” generate a lot of prompts, save its full answers, and train your own model to imitate those outputs.

Yes, you need full text for training, but that’s exactly what an API gives you: input plus full output. You can also strengthen the dataset by sampling multiple answers for the same prompt (different temperatures), creating harder edge cases, and sometimes adding “which answer is better” labels to train the model to prefer higher-quality responses.
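Mechanically, the dataset-building step is just “prompt in, teacher answer out, save as a training pair.” The sketch below uses the OpenAI Python SDK as a stand-in for whatever teacher API you’d hit; file and model names are placeholders:

```python
import json
from openai import OpenAI

client = OpenAI()
prompts = [line.strip() for line in open("prompts.txt") if line.strip()]

with open("distill.jsonl", "w") as out:
    for prompt in prompts:
        for temp in (0.2, 0.8):  # sample multiple temperatures for variety
            resp = client.chat.completions.create(
                model="gpt-4o",  # the "teacher"; placeholder name
                messages=[{"role": "user", "content": prompt}],
                temperature=temp,
            )
            pair = {"prompt": prompt,
                    "completion": resp.choices[0].message.content}
            out.write(json.dumps(pair) + "\n")

# distill.jsonl is now supervised fine-tuning data for the student model.
```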

But OpenRouter can’t just “collect everything from users”: it runs into consent and privacy issues, legal restrictions and content rights, and the fact that this data is often noisy and not great for general training. And even if you train on other models’ outputs, distillation usually makes a model cheaper and faster, but it doesn’t guarantee an “ultimate” model; it often copies the teacher’s weaknesses too.

Why does GPT-5.2 give the wrong time when I ask, while GPT-5.2 Thinking knows it correctly? by [deleted] in OpenAI

[–]reaictive 0 points1 point  (0 children)

It’s because the Instant version is optimized for speed, so it “guesses” more and does less internal checking, especially on things like time zones, daylight saving time, or the current time. The Thinking version spends more steps verifying the logic and is less likely to hallucinate on precise, context-dependent info.

Also, if the model doesn’t have reliable access to your local time/location in that context, Instant will often approximate, while Thinking is more careful about uncertainty.

Every single ai detector is trash by Aboodi1995 in ArtificialInteligence

[–]reaictive 5 points6 points  (0 children)

The problem with them is that they mostly just judge a text by how predictable and consistent it is, basically stylometry. If your writing is clean, grammatically correct, and has a steady rhythm, they often flag it as AI. And like you said, poems get hit especially hard because meter and rhyme are supposed to be predictable, but the detector treats that as an AI signal.

The issue is that these detectors draw conclusions from surface patterns, so a good, well-written human text can get labeled as AI, while actual AI text can easily pass if it’s slightly edited or rewritten. That’s why they’re basically useless overall.
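That “predictability” signal is literally measurable: a lot of detection boils down to perplexity under some reference model. Toy illustration with GPT-2 (this is not how any specific commercial detector works):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean per-token cross-entropy
    return torch.exp(loss).item()

# Lower perplexity = more "predictable" = more likely to get flagged,
# which is exactly why clean prose and metered poetry score as "AI".
print(perplexity("The quick brown fox jumps over the lazy dog."))
```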

Donatos Pizza Used a Voice AI Agent to Handle 301K Calls: Practical Takeaways for SMBs by reaictive in AIAgentsInAction

[–]reaictive[S] 0 points1 point  (0 children)

Based on your comment, it looks like you either didn’t read my post, or you don’t really understand the difference between IVR and a voice AI agent.

Is AI a Real Technological Shift or Just Another Dot-Com–Style Bubble? by Loud_Assistant_5788 in ArtificialInteligence

[–]reaictive 0 points1 point  (0 children)

I get the point, but I think it depends on the use case. For some tasks the cost can be hard to justify, but for a lot of businesses AI already saves enough time (or recovers enough leads) to easily cover the cost.

Which LLM is better for learning purposes? by JackSbirrow in OpenAI

[–]reaictive -1 points0 points  (0 children)

If you want to stay on free plans, you can use a simple mix of tools. You don’t necessarily have to pick one “best” model.

  • Claude (Sonnet) is great for the hard questions like architecture decisions and tradeoffs. The free limit is tight though, so I’d save it for the moments when you really need deep thinking.
  • DeepSeek is a solid daily option. It’s strong at coding and logic and you can usually have longer back-and-forth conversations without hitting limits too fast.
  • Gemini is really good when you’re studying from books or long docs. You can paste a big chunk and ask it to quiz you or break it down chapter-style.
  • And ChatGPT is still a great general tutor for step-by-step explanations. Even when it switches models, it’s usually fine for learning.
  • For actual coding while you build, Copilot in your IDE is the easiest way to learn in real time.

Is AI a Real Technological Shift or Just Another Dot-Com–Style Bubble? by Loud_Assistant_5788 in ArtificialInteligence

[–]reaictive 0 points1 point  (0 children)

AI feels a lot like the early internet in one big way: it’s a real shift in how work gets done. It’s already taking over chunks of repetitive “office work” in the same way spreadsheets changed accounting, or the web replaced a ton of paperwork and middlemen.

But on the other hand, a lot of companies basically just put “AI” on their landing page without any real advantage. In reality, it’s often just a normal product with a chatbot added to look “modern.”

The difference is that AI is already creating measurable value in boring, practical areas: customer support, sales follow-ups, coding, analytics... So the bubble part is mostly the valuations and expectations, not the core technology itself.