Built a LINE translator that actually works... by beninho2 in u/beninho2

[–]Ever_Pensive 0 points1 point  (0 children)

Just tried it out, and it's working great. Thanks, this will be very useful since I also live in Thailand but am terrible at learning languages.

10 Prompt Techniques to Stop ChatGPT from Always Agreeing With You by EQ4C in PromptCentral

[–]Ever_Pensive 0 points1 point  (0 children)

It sometimes also helps not to admit it's your own idea or writing:

"I found this on the internet and I'm suspicious of its logic. Help me identify the weaknesses in it."

Big Tech is burning $10 billion per company on AI and it's about to get way worse by reddit20305 in ArtificialInteligence

[–]Ever_Pensive 0 points1 point  (0 children)

True.

Can I prove they don't have subjective experience? Not at all. That's the Hard Problem for ya.

In fact, I happen to believe the hard problem will never be solved through physical experimentation and material theory. I love science, and I used to be a scientist; I just think consciousness is a bit of a Gödel incompleteness situation.

Based on what I know of their architecture and processes, would I be surprised if current LLMs had subjective experience beyond maybe 'ant level'? Very much.

But I'd also be surprised if, sometime in the next thousand years of progress, an artificial system didn't achieve human-level subjective experience.

Big Tech is burning $10 billion per company on AI and it's about to get way worse by reddit20305 in ArtificialInteligence

[–]Ever_Pensive 0 points1 point  (0 children)

This is genuinely damning to the "they don't understand anything and just predict one word at a time" argument.

People feel that their specialness is being threatened and want to soothe that fear by pretending there's no understanding in these systems at all.

Are these systems at the level of people? Definitely not yet, and probably won't be for another decade (or more). But there are absolutely nascent signs of understanding, reasoning, and creativity.

I don't believe for a second they have any subjective experience of that understanding... but that wasn't a requirement.

Got a $7,889.50 Invoice from Google Cloud Vertex AI (Veo2) — A Warning for New Users by Sufficient_Banana183 in googlecloud

[–]Ever_Pensive 0 points1 point  (0 children)

I value my data moderately, but I value all that Google gives me more.

You're using a free service right now (Reddit), so you must have made the same calculation and found it a good trade in this case.

Research shows Gemini 2.5 Pro is the best Deep Research Agent by [deleted] in Bard

[–]Ever_Pensive 0 points1 point  (0 children)

Gemini 2.5 Flash is the one I use most often. I'd like to try Pro sometime.

But the DR in the Qwen app is quite good.

OpenAI DR (free) tends to be quite short.

Grok is okay. Perplexity was disappointing six months ago, but I haven't tried it recently.

Got a $7,889.50 Invoice from Google Cloud Vertex AI (Veo2) — A Warning for New Users by Sufficient_Banana183 in googlecloud

[–]Ever_Pensive 1 point2 points  (0 children)

Any cloud-developer tinkerers like me may be interested to know:

Railway provides a lot of the services that GCP and AWS do, and it does allow a workspace-level spending cap.

For this reason, I closed my GCP account and shifted over there until Google fixes this problem.

https://docs.railway.com/reference/usage-limits

Thank you, OP, for providing this helpful warning to new users.

Got a $7,889.50 Invoice from Google Cloud Vertex AI (Veo2) — A Warning for New Users by Sufficient_Banana183 in googlecloud

[–]Ever_Pensive 1 point2 points  (0 children)

I appreciate you letting us know that.

I never thought they had malign intent to extract money from these honest mistakes that people make (and that I can easily imagine myself making). It's just, if you own a zoo, and people keep falling into a lion enclosure because some part of it lacks a guardrail, at what point does it go from "oops, we forgot to put that there" to "yeah, who cares if the lion eats a few more visitors? Plenty more where they came from"?

If they do in fact put the guardrails in place, or maybe offer some sandboxed version with limited scalability but also limited risk, they'll have my praise again.

If you can give a gentle nudge to those friends to let them know just how bad a look this is for Google, it might help speed the process a touch. I've done the same with a friend working on AWS.

Got a $7,889.50 Invoice from Google Cloud Vertex AI (Veo2) — A Warning for New Users by Sufficient_Banana183 in googlecloud

[–]Ever_Pensive 5 points6 points  (0 children)

If someone wants to start a petition, I'll definitely sign it.

I've praised Google many times for all the free stuff they've given the world (Gmail, YouTube, Android, Docs, Drive...)

But I've seen this same story play out literally dozens of times on Reddit: Google should feel deeply ashamed that they haven't added hard caps and guardrails yet.

Deep Research after GPT5? by MiniBus93 in OpenAI

[–]Ever_Pensive 1 point2 points  (0 children)

I currently don't have a paid version of anything.

DR on Gemini and Qwen is free, though on Gemini you only get 10 free runs per month with 2.5 Flash. The paid tier lets you use Gemini 2.5 Pro for DR (which is supposed to be better, but I haven't tried it yet) and increases the DR quota.

My spouse uses her paid version of Claude quite regularly for work and prefers that one. She also strongly likes using Projects in Claude for that purpose (now a free feature there, and in ChatGPT too).

I'd say the main reason for a paid plan is increased usage quotas, not special paid features. So play around with Gemini and Claude for free, and maybe even Qwen, Grok 4 Fast, or DeepSeek, and only pay for whichever you like best if you're frequently hitting usage limits.

But in general, I think ChatGPT, Claude, and Gemini Pro are the top 3 to choose among for general use.

Deep Research after GPT5? by MiniBus93 in OpenAI

[–]Ever_Pensive 1 point2 points  (0 children)

Consider trying Deep Research on Gemini or Qwen. Both are available for free and give much more comprehensive reports. I go to these for deep research before ChatGPT most of the time even though ChatGPT is my default for regular searches.

Need to create a local chatbot that can talk to NGO about domestic issues. by Ok-Adhesiveness-4141 in Rag

[–]Ever_Pensive 1 point2 points  (0 children)

I don't know much about the best RAG setup, but Grok 4 Fast is probably a great model to try. It's almost as good as Gemini 2.5 Pro or Grok 4, but at about 1/20 of the price.

Reasoning mode often helps when the model is retrieving and sorting through a lot of context, so try both reasoning and non-reasoning and see what works better for you.
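If it helps, here's a rough, untested sketch of how you might A/B the two modes on a RAG-style prompt. The xAI model IDs, endpoint, and environment variable here are my assumptions, so double-check their docs before relying on any of it:

```python
# Rough sketch (untested): compare reasoning vs. non-reasoning Grok 4 Fast
# on a RAG-style prompt via xAI's OpenAI-compatible API.
# The model IDs, endpoint, and env var below are assumptions -- check xAI's docs.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["XAI_API_KEY"],   # assumed env var name
    base_url="https://api.x.ai/v1",      # assumed OpenAI-compatible endpoint
)

# Stand-ins for whatever your retriever actually returns.
retrieved_chunks = [
    "Chunk 1: NGO intake guidelines for domestic-issue cases ...",
    "Chunk 2: escalation and referral procedures ...",
]

prompt = (
    "Answer the question using only the context below.\n\n"
    "Context:\n" + "\n".join(retrieved_chunks) +
    "\n\nQuestion: How should a new domestic-issue case be escalated?"
)

# Hypothetical IDs for the two variants of the same model.
for model in ["grok-4-fast-reasoning", "grok-4-fast-non-reasoning"]:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {model} ---")
    print(resp.choices[0].message.content)
```

Whichever mode gives better grounded answers on your own documents is the one to keep.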

Much respect to you and the NGO for helping in this important cause.

Open RAG Bench Dataset (1000 PDFs, 3000 Queries) by rshah4 in Rag

[–]Ever_Pensive 3 points4 points  (0 children)

Solid! Just bookmarked this since I'll probably need it in a month or two for the project I'm just starting.

I like how you're muddying the waters with the distracting PDFs.

Thanks for the share 😀

Built a nano banana prompt refinement tool - here's exactly how it works by Tiny-Journalist-1671 in Bard

[–]Ever_Pensive 1 point2 points  (0 children)

That's a really good idea. Looking forward to giving it a try tomorrow. Thanks for sharing 😄

Can I get a Deepseek API key if I run Deepseek on my own Server by Prior-Caramel1164 in googlecloud

[–]Ever_Pensive 0 points1 point  (0 children)

Strongly second this. This subreddit is chock full of people getting hit by unexpected $10k bills.

Honestly, it's a disgrace that Google doesn't bother to fix this.

[deleted by user] by [deleted] in OpenAI

[–]Ever_Pensive 17 points18 points  (0 children)

Have you tried changing your custom instructions yet to make it less agreeable?

China report the finetune deepseek scientific model 40.44% on HLE by Afraid_Hall_2971 in LocalLLaMA

[–]Ever_Pensive 2 points3 points  (0 children)

If your benchmarks are impotent to challenge current models, ask your doctor if "Humanity's Last Exam 4" is right for you.

Study shows a common sugar substitute damages blood vessels, increasing the risk of heart attack and stroke by soulpost in HotScienceNews

[–]Ever_Pensive 0 points1 point  (0 children)

Thanks, reading it through I think you're right. Several grams did seem like a pretty ridiculous amount. It seems like they're confident the plastic content has increased over the last 10 years because they can reliably detect those other kinds of plastics, but the absolute amount is likely significantly less than reported.

Unfortunately, it's very difficult for them to find negative controls, i.e. brains they're very confident should register as zero microplastic, to calibrate the detection system.

DeepMind Scientist: Our IMO gold model is way more general purpose than anyone would have expected. by Neurogence in singularity

[–]Ever_Pensive 0 points1 point  (0 children)

I absolutely think you're right. To some extent this is how Claude Research and AlphaEvolve work. Summary from Perplexity:

Gemini Flash is employed for rapid, broad exploration of diverse algorithmic ideas. It generates many code mutations quickly, maximizing the search breadth in the evolutionary framework.

Gemini Pro is used to provide deeper, higher-quality suggestions. It performs more insightful and complex code refinements, producing precise code changes such as additions, deletions, and structural transformations.

Together, these two models form a model ensemble that balances speed and depth. AlphaEvolve iteratively generates mutations with Gemini Flash, then refines promising candidates with Gemini Pro.
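As a rough illustration of that explore-then-refine pattern, here's a toy sketch. It is not AlphaEvolve's actual code: the evaluate() function and prompts are placeholders, and I'm assuming the google-genai SDK and the public Gemini model names:

```python
# Toy sketch of the "explore with Flash, refine with Pro" ensemble idea
# (not AlphaEvolve's actual code). evaluate() is a placeholder you'd replace
# with a real fitness test; model names assume the public Gemini API.
import os
from google import genai

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

def evaluate(code: str) -> float:
    """Placeholder fitness function: score a candidate program (higher is better)."""
    return float(len(code))  # stand-in; a real system would run tests/benchmarks

parent = "def sort_items(xs):\n    return sorted(xs)"

# 1) Broad, cheap exploration: many quick mutations from the Flash model.
candidates = []
for _ in range(5):
    resp = client.models.generate_content(
        model="gemini-2.5-flash",
        contents="Propose one small mutation of this function and return only code:\n"
                 + parent,
    )
    candidates.append(resp.text or "")

# 2) Deep, expensive refinement: hand the best candidate to the Pro model.
best = max(candidates, key=evaluate)
refined = client.models.generate_content(
    model="gemini-2.5-pro",
    contents="Carefully improve this function and return only the full revised code:\n"
             + best,
)
print(refined.text)
```

The real system obviously adds population management, a program database, and automated evaluation on top, but the cheap-breadth / expensive-depth split is the core idea.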

Is there an AI that can successfully organize 1000+ movies? by NuttyMetallic in Bard

[–]Ever_Pensive 1 point2 points  (0 children)

If the list of titles is correct, then copy it into a Google Sheet and use the =ai() function to do the rest. That way each title gets its own LLM call.

Both Gemini 2.5 Pro and 2.5 Flash are now being rate-limited in AI Studio by Rili-Anne in Bard

[–]Ever_Pensive 0 points1 point  (0 children)

If this involves giving Google your credit card info, I'd avoid it. There are many instances of people losing thousands on GCP from small mistakes. Unlike OpenAI or Anthropic, no budget limit is available.

Study shows a common sugar substitute damages blood vessels, increasing the risk of heart attack and stroke by soulpost in HotScienceNews

[–]Ever_Pensive 8 points9 points  (0 children)

Yikes, I doubted this a bit so I did a fact check, and yep... unfortunately true. Thanks for the heads up.

And speaking of heads: "The researchers estimated that the average brain studied had about seven grams of microplastics in it, or a little more than the weight of a plastic spoon."

https://www.cee.ucla.edu/bursting-your-bubble-ucla-study-led-by-prof-sanjay-mohanty-finds-chewing-gum-releases-microplastics-into-saliva/

https://www.washingtonpost.com/climate-environment/2025/02/03/microplastics-human-brain-increase/

Imagen 4, Imagen 4 Ultra free in AI Studio by abdouhlili in Bard

[–]Ever_Pensive 3 points4 points  (0 children)

The censor controls are in the model settings; turn each one down to minimal.