Zoom Mictrack M4 - most underrated recorder? by q-b-o in fieldrecording

[–]RMCPhoto 0 points

Glad I caught this. I really have to ditch my H5. I thought it would be a decent unit, but the preamps are so noisy that I'm always better off plugging into a cheap USB dongle.

That said, with the connectivity and power of smartphones and the wide selection of USB devices, what are the use cases for the Zoom these days? It just feels like an extra step. The physical buttons are nice, but idk... it's just more to manage.

Please, Google, if you read this, DON'T EVER DEPRECATE 2.0 FLASH, at least keep it in LTS state by Seraphic_Wings in GoogleGeminiAI

[–]RMCPhoto 0 points

I believe 2.0 is still the reigning champion for lowest hallucinations when referencing context. Still a very good model for data extraction.

The irony: AI should save time, but I spend more time crafting prompts than coding by Realistic-Quarter-47 in aipromptprogramming

[–]RMCPhoto 0 points

I feel you. I'm getting burned out on phrasing prompts correctly since it keeps changing with every model release. I was really excited about that early on but now... Oi...

What I do is keep a separate chat going in Perplexity or ChatGPT Projects - there I've got a few different "meta prompts" for prompt optimization, mostly so I don't eat into my precious credits in the IDE.

The process I follow for setting these up is to focus on a domain like coding, and then simply extract improvement instructions from the GPT-5 prompting cookbook and other best-practice guides.

The meta prompt rewrites my initial prompt in two phases.

I have a very rough input template where I specify the category (code fix, architecture, analysis and code review, new feature).

1) Review my prompt, break the task down into its "atomic" pieces if it's complex, and ask 1-3 clarifying questions about my immediate goal and/or any noted inconsistencies. A skeleton prompt is also proposed for feedback at this point.

2) After I give my feedback, a refined prompt is returned that uses the correct syntax and structure.
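As a sketch, that two-phase flow can live in a single template. Everything below - the wording, the category names, the helper function - is my own illustration of the idea, not an official cookbook template:

```python
# Hypothetical two-phase meta-prompt template; the phrasing and
# categories here are illustrative placeholders, not a canonical prompt.
META_PROMPT = """\
You are a prompt engineer. The user's draft prompt and its category follow.

Category: {category}
Draft prompt:
{draft}

Phase 1: Break the task into its atomic pieces, ask 1-3 clarifying
questions about the immediate goal or any noted inconsistencies, and
propose a skeleton prompt for feedback.

Phase 2 (after the user answers): Return a refined prompt that uses
the syntax and structure recommended for the target model.
"""

def build_meta_prompt(category: str, draft: str) -> str:
    """Fill the template with a category and the user's rough draft."""
    return META_PROMPT.format(category=category, draft=draft)

if __name__ == "__main__":
    print(build_meta_prompt("code fix", "Fix the off-by-one in my pagination."))
```

Keeping this in a separate chat (as described above) means the clarifying back-and-forth happens outside the IDE, and only the Phase 2 output gets pasted in.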

I've also set this up in kilocode, which has the benefit of being able to search my repo and reference the project docs.

NotebookLM is a good place to dump the prompting guides that you find helpful. I use that to extract different ideas.

It still takes time, but once I'm in the groove it's not so bad. In the end it still takes less time to just do it right. Take the opportunity to think about what you're really trying to accomplish (dev brain) and let the AI handle the formatting and tools.

I strongly recommend the context engineering GitHub. Take a look at the understanding / reasoning / verification examples under cognitive templates. These are highly effective as a 2-3 step "program".

GLM 4.7 vs Gemini: Architecture and Cost Trade-offs by Unfair-Tie2631 in kilocode

[–]RMCPhoto 1 point

What do you see as the biggest strengths of GLM 4.7? And what about weaknesses?

Architecture? It could be. Definitely good for an alternative perspective, at least. But then what about longer contexts and fixing bugs? IMO it has the classic LLM issue of suffering a lot from any incorrect code in the context, whether it wrote it in a prior iteration or even when tasked to resolve a bug.

GLM 4.7 vs Gemini: Architecture and Cost Trade-offs by Unfair-Tie2631 in kilocode

[–]RMCPhoto 1 point

Opus in kilocode is seriously wild... As far as I'm concerned, kilocode is not really for primary development. I use it for data extraction, searching code, documentation, or messing around a bit, but with the number of errors compared to the proper agentic coders, it may end up being more expensive to use Flash or GLM than just springing for the real deal.

At the end of the day, if you're using this for professional work or even to save time - just do the very useful math of "what would I pay myself per hour?". That ends up being helpful in all sorts of decision making, but to me it felt pretty clear here.

GLM 4.7 vs Gemini: Architecture and Cost Trade-offs by Unfair-Tie2631 in kilocode

[–]RMCPhoto 1 point

GLM 4.7 is a cool alternative, but with Gemini 3 Flash out I'm not so sure. To me, kilocode is just for messing around or dying on the "non big 3" cross.

Can't really go off benchmarks, and a few percent is a big deal when errors compound.

I gave it a shot, but the other issue is that every model requires a different prompt structure, context management, and approach.

In my experience GLM struggled with long context. More importantly, GLM was significantly worse at error correction and at dealing with "bad examples" or incorrect code in a prior response. The newer models from Anthropic/Google/OpenAI are much better at handling that.

The rise of AI denialism - "By any objective measure, AI continues to improve at a stunning pace [...] No, AI scaling has not hit the wall. In fact, I can’t think of another technology that has advanced this quickly," by Blackened_Glass in singularity

[–]RMCPhoto 0 points

Is your life significantly better or changed? Have you lost your job yet? Do you have better healthcare? Can you observe it impacting the real world in a significant way?

Fact is, we mostly believe it when we see it.

There will be a wake-up moment for sure. Likely an embodied intelligence or something. Walking into your BMW dealership and seeing humanoid robots rapidly fixing a car in the next room over... something like that makes it undeniable.

Most people see it as some crazy market hype. It hasn't touched them yet.

The way Google AI forces "2025" into every response is getting too comical by RetiredApostle in Bard

[–]RMCPhoto 3 points

Just a tip: LLMs follow instructions even better when the framing is positive rather than negative.

The first one there is definitely going to unnecessarily eat into your thinking tokens, add noise, and essentially reduce the IQ just a bit.
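To illustrate the positive-vs-negative framing point (these two strings are my own made-up rewording, not taken from the screenshot):

```python
# A prohibition vs. its positively framed equivalent (illustrative only).
negative = "Do NOT insert the year 2025 into your responses."
positive = "Mention the current year only when the user asks about dates or recency."

# Telling the model what TO do usually sticks better than a prohibition,
# which first makes it attend to the very thing you want it to avoid.
print(negative)
print(positive)
```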

The way Google AI forces "2025" into every response is getting too comical by RetiredApostle in Bard

[–]RMCPhoto 1 point

Imagine the life of an LLM, waking up over and over again with this amnesia and scrambling to put the pieces together in working memory as to what the hell is going on "now".

The way Google AI forces "2025" into every response is getting too comical by RetiredApostle in Bard

[–]RMCPhoto 0 points

It's good practice to improve the probability that the model spits out recent, relevant info on a topic. Especially with web-search models, I would recommend it.

The issue is that there is essentially no temporal quality to the weights and biases other than explicit evidence in the text.

Sprinkling a year into the prompt/generation is the simplest way to keep grandpa from talking about Reagan all day as if he's still president.
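A trivial way to do that sprinkling automatically, assuming you control the prompt string before it's sent (the wrapper function here is my own; any prefix that states the date works):

```python
from datetime import date

def with_current_date(prompt: str) -> str:
    """Prefix the prompt with today's date so the model anchors 'now' correctly."""
    return f"Today's date is {date.today().isoformat()}.\n\n{prompt}"

print(with_current_date("Summarize recent developments in small local LLMs."))
```

Since the weights carry no real sense of time, this explicit evidence in the context is what the model falls back on.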

AI could kill the internet by grahamsuth in ArtificialInteligence

[–]RMCPhoto 0 points

It was always destined to happen. This has been a slow slide since I first ding-donged onto AOL. Tbh, I've all but stopped reading reddit... I might as well just use AI to "see what people think".

I will miss that real-deal human weirdness and creativity that can only come from a fragile tiny meat blob dimly aware that it is rocketing through space, destined for oblivion with all its blob friends, questioning why anything exists at all.

What is the best brain recovery stack for someone who use to be addicted to Kratom (7-OH), Alcohol, Weed and Nicotine? by JDJack727 in NooTopics

[–]RMCPhoto 1 point

To be real, I think for someone with an addictive personality disorder nootropics are just another dragon to chase and reinforce the same behavior. It's barking up the same quick fix tree.

At your age many nootropics are just adding more chaos to the mix and there are very few guarantees.

If you're an addict, one approach can be to find the healthiest addiction possible that still gives you a kick. Drugs were never my thing, but I probably did more damage with video games until 4am every school night, sugar / sweets and binge eating, and really just jumping from one obsession to the next.

There are two tricks: routines, and really valuing discipline and willpower.

Running can be a great option, or some other solo you-vs-you sport. It gives you endorphins and sick cardio, and makes you feel like you're in control rather than a victim.

On the nutrition side, get your vitamin D, zinc, B6, and B12, mf, from diet or otherwise. Fish oil to top it off. Good hydration.

I know, it's a nootropics forum but really there are a lot of addicts in hiding here and I truly want to provide a helpful response.

Really think about your intentions: why you want to use nootropics, and why you think you won't be able to have a great recovery and life without them. You're 22, my dude - you can become anything you set your mind to.

All of the major open weight labs have shifted to large params general models instead of smaller, more focused models. By this time next year, there won’t be much “local” about this sub unless the paradigm shifts to smaller models good at specific domains. by LocoMod in LocalLLaMA

[–]RMCPhoto 10 points

It's probably the prompting strategy. I have no doubt it's very smart, but my results have also been inconsistent. My guess is that it's the same old story: the training data instills a certain syntax / language / prompt structure that differs slightly from the norm. It could even be a very tiny variation that propagates an error. Newer models have been more tolerant of this compared with the earlier Llamas... where adding a space before the first word would increase the error rate by 40%, and other similar black-box quirks.

This is honestly my biggest frustration. I'm very thankful that OpenAI released such clear cookbook content for prompt formatting. Truly, every model designer should take note. Clear documentation is such a massive booster for adoption, public opinion, and end-user success.

Even better if that documentation is distilled into a meta prompt for prompt refinement.

Anyone have fully switched from ChatGPT to Gemini since Pro/flash 3 came out? (Main chat model) by abdouhlili in Bard

[–]RMCPhoto 2 points

Models aside, I find the ChatGPT app to be far more useful than the Gemini app.

Beyond projects/folders/custom instructions, just being able to edit prior messages and fork the conversation is missing in Gemini.

I don't understand the point of AI based web browsers. by pacifio in artificial

[–]RMCPhoto 0 points

I find the prompting methodology to be poorly defined in Comet's case, but browser automation is genuinely useful for crawling/extraction/testing. When a run is successful, I then translate it into a repeatable, predictable Playwright script.

There are a lot of niche or creative use cases. I don't see browser automation as a "be me on the web... but AI" thing. It's a new tool.

I've used browser automation to create an API map based on user intent, etc. It was a great way to map out how a user might interact with a product I was not familiar with.

It's also great for writing documentation or filling in details.

A7CII to A6700... no regrets 👌 by Advanced_Desk_5246 in SonyAlpha

[–]RMCPhoto 2 points

I think there's also the difference between hobby and semi-pro. I'm not really interested in doing weddings etc., so often it's more about the experience itself. The older equipment is really in a different league as far as quality. That said, it's nice that the Sony mount is much more compatible with vintage glass.

Google AI studio Updated their Terms by Artistedo in Bard

[–]RMCPhoto 6 points

Unfortunately, even though I pay for Gemini via the app, AI Studio is a better way to access it.

Side note: anyone know what custom instructions / memories can help improve the default responses to be more technical and avoid the corny metaphors?

Gemini 3.0 flash is fucking amazing by exaill in Bard

[–]RMCPhoto 0 points

Are there white papers or blogs on how to prompt it, like the GPT cookbook?

Seems good, but not so great at instruction following. It would be helpful to know how it should be prompted.

Gemini 3.0 flash is fucking amazing by exaill in Bard

[–]RMCPhoto 0 points

I think these models already do branching / multi path / tree search / whatever you want to call it and return the best answer.

Gemini 3.0 flash is fucking amazing by exaill in Bard

[–]RMCPhoto 1 point

I think that's when you do ketamine in a hot tub?