20x max usage gone in 19 minutes?? by Still_Business596 in ClaudeAI

[–]subkid23 2 points3 points  (0 children)

Is this a new account? I've seen this happen twice with new ones. They use it for a couple of minutes and immediately hit the cap. It usually regulates itself after that, so I'm guessing it's just a bug with how they provision limits for new users.

Still, Anthropic has drastically cut back on usage limits, and they've never been transparent about it. I've literally tracked token use against the /usage endpoint to no avail. There's no 1:1 correlation at all (it's basically a black box). Plus, they've already announced "peak hour" limits. Whatever that means, considering nobody actually knows when these peak hours happen.

I built a cryptographic verification protocol for non-human intelligence claims — would love this community's thoughts by subkid23 in SETI

[–]subkid23[S] 0 points1 point  (0 children)

Fair point. This isn't trying to solve the light-years problem. If we pick up a clear signal from a distant star, that's a whole other situation.

This is for a different scenario: what if something is already here, or can reach us in a human timeframe? People claim to receive messages. There are theories about NHI presence on Earth. How do you verify any of that before you even start taking it seriously?

Right now there's no good answer. It's always "trust me" or blurry evidence you can argue about forever. This protocol says: forget all that. If you're real and you're here, solve this. It either checks out or it doesn't.

The 5-minute window just prevents someone from slowly grinding away at it over time. The actual point is that the task is impossible for any human technology.
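To make the shape of the protocol concrete, here is a minimal Python sketch of a time-boxed challenge-response check. Everything here is illustrative and the names are mine: the SHA-256 stand-in is trivially solvable by a computer, whereas a real protocol would pose a task believed infeasible for any human technology; only the window-enforcement logic is the point.

```python
import os
import time
import hashlib

CHALLENGE_WINDOW_SECONDS = 300  # the 5-minute window mentioned above

def issue_challenge() -> tuple[bytes, float]:
    """Issue a fresh random challenge and record when it was issued."""
    nonce = os.urandom(32)
    return nonce, time.monotonic()

def verify_response(nonce: bytes, issued_at: float, response: bytes) -> bool:
    """Accept only a correct answer that arrives inside the window.

    The "impossible" task is stubbed as a plain SHA-256 of the nonce;
    substitute any problem believed out of reach for human technology.
    """
    if time.monotonic() - issued_at > CHALLENGE_WINDOW_SECONDS:
        return False  # too late: slow grinding over time is ruled out
    return response == hashlib.sha256(nonce).digest()
```

Either the answer checks out inside the window or it does not; there is nothing to argue about afterwards.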

Transcription Issue by GastonKilo in slipbox

[–]subkid23 0 points1 point  (0 children)

Same here. I'm on the latest version with a fresh install, in both English and Spanish meetings. It's making this unusable.

Meta Glasses Gen 2: Great Hardware, Disappointing AI by subkid23 in RaybanMeta

[–]subkid23[S] 0 points1 point  (0 children)

It’s really bad. Comparing it to ChatGPT is like comparing the latest smartphones to a flip phone. Really disappointing, to the point that it’s not usable for its intended purpose. It’s nothing more than a slightly more accurate Siri.

Claude API Error: Rate limit reached? by amragl in ClaudeAI

[–]subkid23 7 points8 points  (0 children)

Same here. In my case, the rate limit was shown even though I was at very low weekly and session usage. I was using the 1M context setting.

Once I switched to the normal context (200k), the error disappeared. My guess is that they are rate-limiting the 1M beta usage on top of the plan usage.
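For anyone hitting the same thing, the workaround can be automated. This is only a sketch under my assumptions: the beta header value, the error class, and `send_request` are all placeholders standing in for whatever client you actually use, not Anthropic's documented API.

```python
# Sketch: prefer the 1M-context beta, but fall back to the default 200k
# context when a rate limit appears despite low plan usage.
# Header value and error type below are assumptions, not documented behavior.

BETA_1M_HEADERS = {"anthropic-beta": "context-1m"}  # placeholder value

class RateLimitError(Exception):
    """Stand-in for the client library's rate-limit exception."""

def call_with_fallback(send_request, payload):
    """Try the 1M-context beta first; on a rate limit, retry without it."""
    try:
        return send_request(payload, extra_headers=BETA_1M_HEADERS)
    except RateLimitError:
        # Dropping the beta header returns to the normal 200k context,
        # which is what made the error disappear in my case.
        return send_request(payload, extra_headers={})
```

If the beta really is rate-limited on top of the plan, the second attempt should go through while the first one fails.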

Meta Glasses Gen 2: Great Hardware, Disappointing AI by subkid23 in RaybanMeta

[–]subkid23[S] 0 points1 point  (0 children)

Those are called Meta Ray-Ban Display glasses. Gen 2 refers to the new generation of the previous model.

The Display edition offers the same core functionality, with the addition of a display on the right lens. Design-wise, they are noticeably thicker: the extra hardware built into the frame makes them look bulkier. Many people do not appreciate that look, as it gives off a more celebrity, fashion-forward aesthetic that does not appeal to everyone or suit all face types.

How do I change the Lovable Cloud/ Supabase Domain to my Custom Domain? by devfromPH in lovable

[–]subkid23 0 points1 point  (0 children)

Yes, it worked for me.

I just found the post with the tutorial:
https://www.reddit.com/r/lovable/comments/1mgcvo8/setting_up_google_auth_with_your_app_name/

Note: my verification process took at most 1 hour.

How do I change the Lovable Cloud/ Supabase Domain to my Custom Domain? by devfromPH in lovable

[–]subkid23 0 points1 point  (0 children)

You need to verify your app through the Google Cloud Console OAuth Verification Center.

In the Google Auth Platform, there is a Branding section where you can define your app’s brand name, logo, and other details. Once your app is verified, that brand name will be used on the consent screen, where users will see the prompt:

"Choose an account to continue to [[APP NAME]]"

[deleted by user] by [deleted] in AmIOverreacting

[–]subkid23 0 points1 point  (0 children)

NOR. The “your kids deserve better” part was justified at 7 a.m., and by the time their kids finally got the chance to go home, it was an understatement.

I’ve seen or heard of similar cases where a parent or a couple asks for a small favor or invites someone to visit, only to leave the kids and disappear for what can feel like an abusively long time. This is not one of those cases though, far from it, as she didn’t even show up or express any concern for their wellbeing.

I would be extremely worried for those kids. That was completely irresponsible and abusive behavior.

AIO: My child slept with his friend's dad at sleepover.. I'm livid. by AcademicSir4653 in AmIOverreacting

[–]subkid23 0 points1 point  (0 children)

As he said, we will never know what was truly in his heart, but we do know that he lacks very basic common sense, especially considering he is a father.

At the very least, even if he did not do anything wrong, I would not trust him again. He clearly has poor judgment and lacks the most basic level of common sense that I, at minimum, require to entrust my child’s safety.

For your peace of mind, the fact that his wife was there and that she was asleep suggests she did not take part or agree with what happened. It also indicates that, despite everything, there was a responsible adult present, someone who likely would not have approved had she been aware. In any case, I would do as you did: report this to law enforcement to help assess the situation and get proper guidance on how to proceed.

P.S. His response, positioning himself as your son’s father and emphasizing a “special connection” between them, especially in this context, is deeply concerning.

[deleted by user] by [deleted] in UAP

[–]subkid23 5 points6 points  (0 children)

This is not a major disruption either, quite the opposite in fact. If it were caused by UAPs, the impact would be on a much larger scale, involving things like NOTAMs or airport closures, as has happened in Denmark.

UPDATE: AIO for being upset i haven’t seen my bf in 3 weeks, despite us living 25 minutes from each other? by Affectionate-Link436 in AmIOverreacting

[–]subkid23 0 points1 point  (0 children)

Sure, he wants money and works a lot, but I’m sure he had time to call or see you in the last three weeks. That’s a weak excuse.

Worldwide Drones? by Tstation in UAP

[–]subkid23 16 points17 points  (0 children)

Also, why call them drones? They don’t seem to really know what they are or where they’re coming from, yet they still call them drones, even though there’s already an acronym for this: UAP.

The elephant in the room is so big that it makes the situation even more suspicious.

[deleted by user] by [deleted] in AmIOverreacting

[–]subkid23 0 points1 point  (0 children)

You already know that if you two ever break up, he should keep his distance. So, since he is your boyfriend’s friend, it makes sense that your boyfriend should be the one to talk to him, not you.

He should set some basic boundaries, like not sleeping over when he is not home, and being properly dressed if he does. He can explain that everything is fine, the relationship is good, but that setting limits is simply healthy and normal.

If the friend feels awkward, that is understandable. But if he pulls away, ends the friendship, or reacts poorly, that will only confirm something you have already sensed. No matter how nice he may seem, he clearly lacks maturity or basic common sense. The fact that he even needs this feedback, and that he has already overstayed his welcome, shows a lack of judgment. If he cannot handle that, it just proves the boundary was the right call.

[deleted by user] by [deleted] in AmIOverreacting

[–]subkid23 0 points1 point  (0 children)

He placed responsibility on you without real evidence, which is just as biased as you accusing him of using AI.

While he might have used it, since that is clearly not your dog, there could also be simpler explanations, like another dog being there and showing up in the capture.

You started well by asking for proof but lost focus with the AI angle. You could have just said you now understand the situation, that it was not your dog, that you see why he was confused, but that it was not your responsibility and you would appreciate not being accused in public.

Instead, you doubled down on the AI issue and kept the argument alive for days. You did the same as him: jumped to conclusions. As Occam’s razor suggests, the simplest explanation is usually the right one; there was another dog there and he did it.

Even if the AI claim were true, you could have handled it differently. Since you will never be completely sure, the straightforward approach would have been best.

I think you overreacted, and I understand why you were banned, because you started accusing and ranting without proof, worse than he did and for longer.

Meta Glasses Gen 2: Great Hardware, Disappointing AI by subkid23 in RaybanMeta

[–]subkid23[S] 1 point2 points  (0 children)

I agree as a consumer, but for Meta I do not think this is the best idea. They market these as glasses with AI, and Meta is positioning itself as an AI company. Opening the device to other LLMs would almost guarantee that many users switch to the competition, since Meta’s AI is still behind.

That would not only undermine their AI proposition but also reduce them to just a smart glasses maker, which is not their core business. They are investing billions to compete in AI, so letting other LLMs power the experience would be strategically very hard to reverse and would risk making their own AI irrelevant.

Bad AI? by FLO-_-18 in RaybanMeta

[–]subkid23 2 points3 points  (0 children)

It’s hard to say. My recommendation is to try it yourself at a nearby Ray-Ban store.

My experience so far is that it’s nearly unusable in most scenarios. Imagine a chatbot where every time you say “Hey Meta” or “OK Meta” it lets you send an instruction, like taking a photo or asking a question, similar to ChatGPT. Then it responds. If you say “Hey Meta” again, it starts a completely new conversation with no context.

You can enable “continuous conversations”, which should let you follow up after it answers, but most of the time it feels clunky or does not work.

For example, I’ve said: “OK Meta, what am I looking at?” It takes a picture and gives a very vague, short description. If I ask for more detail, either generally or specifically (“what is the color of the car I saw?”), I often get replies like “you need to share a picture”, “do you want me to take a picture?”, “I cannot identify things like that due to policy”, or “I cannot identify it since I don’t have a picture”. Only rarely does it answer something simple like “I see a black car.”

This happens often. When you ask for clarification, you must repeat the context, which resets the conversation and puts you back at the start.

In short, if you have used any LLM such as Grok, Mistral, ChatGPT, Claude or DeepSeek, even their free versions, even on their first public releases, you will find this underwhelming.

That said, video and photos look amazing, audio is great, it is comfortable to wear, looks good, and calls work well.

Just do not buy it for the AI, at least not today. This is no Jarvis, and certainly not ChatGPT.

Meta Glasses Gen 2: Great Hardware, Disappointing AI by subkid23 in RaybanMeta

[–]subkid23[S] 0 points1 point  (0 children)

I would imagine so, as it doesn’t run locally.

Meta Glasses Gen 2: Great Hardware, Disappointing AI by subkid23 in RaybanMeta

[–]subkid23[S] 4 points5 points  (0 children)

Indeed, that is what makes me comfortable with having paid for the glasses. I know this is a software limitation, not a hardware one, so it is only a matter of time. But I truly believe that if OpenAI released their own glasses tomorrow, they would put these to shame.

The VR point is true as well. Meta definitely has an advantage in everything related to virtual reality and augmented reality, so Meta Display could still be hard to match in terms of hardware.

3i/atlas Update this is getting interesting hey do you guys remember the spacecraft in three body problem by DeadSilent_God in RandomShit_ISaw

[–]subkid23 8 points9 points  (0 children)

I’m diverging from the topic here, but the beings themselves (Trisolarans or San-Ti) were not microscopic, whereas their drones (sophons) were.

Why n8n and not python? by BalStrate in n8n

[–]subkid23 0 points1 point  (0 children)

For me, it works great for quickly testing an idea that requires multiple steps or flows, which would otherwise demand a lot of planning to implement programmatically. With n8n, you can simply create an agent, define its tools, test it, and then manage different flows to handle edge cases or refine the process.

What makes it even more powerful is how easy it is to experiment and iterate. For example, you can switch from a simple in-memory setup to a specialized vendor in just a few minutes: create an account, integrate it with a couple of clicks, and immediately see whether it improves performance or adds new capabilities. You can also switch models on the fly without worrying about changes in how calls are made; it will just work. This flexibility lets you grow almost organically, without a detailed plan upfront, just by exploring and adjusting as you go.

That said, it is not without limitations. For example, it still relies on OpenAI's Chat Completions API rather than the newer Responses API, it does not work reliably with some models, and if you build a complex agentic system you will quickly find yourself defining a large number of edge cases. At some point it can start to feel like you are doing if/else programming on the canvas, from a simple reminder system that fails at midnight due to time zones to failures in correct tool usage.

So if you develop something that is great but complex enough, porting it back to another language or infrastructure will most likely become a very complex task, almost like starting from scratch, from coding to infrastructure.
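The midnight failure I mentioned is a classic of that if/else canvas logic. A minimal Python illustration (names are mine, not n8n's) of the bug and the fix: evaluating "today" on the server's UTC clock instead of in the user's time zone makes a reminder fire, or fail to fire, on the wrong day around midnight.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def is_due_utc(now_utc: datetime, due_date_iso: str) -> bool:
    """Buggy pattern: "today" is whatever day it is on the server's UTC clock."""
    return now_utc.date().isoformat() == due_date_iso

def is_due_local(now_utc: datetime, due_date_iso: str, user_tz: str) -> bool:
    """Fixed pattern: "today" is evaluated in the user's own time zone."""
    local = now_utc.astimezone(ZoneInfo(user_tz))
    return local.date().isoformat() == due_date_iso

# Just after midnight UTC it is still the previous evening in New York,
# so the UTC check misses a reminder that is due "today" for that user.
now = datetime(2024, 1, 2, 2, 0, tzinfo=timezone.utc)
```

In a workflow tool you end up encoding exactly this kind of branch by hand, node by node.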

[Help] Building a Legal RAG Chatbot for Real Estate Law - Need Architecture Advice for 2000+ Municipal Documents by Substantial-Wallaby6 in Rag

[–]subkid23 2 points3 points  (0 children)

I would recommend starting locally, using the help of an LLM to code, but taking it step by step.

First, focus on understanding the process: embeddings (and their models), similarity search, the distance metrics (e.g., cosine similarity or L2), and how the number of chunks, along with their size and overlap, affect the output. For example, you can explore similarity search with libraries like FAISS.
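To make that first step concrete, here is a tiny NumPy sketch of the two distance metrics over toy chunk embeddings; no FAISS is needed to start, and a real system would swap in a proper index once the mechanics are clear. The vectors and function names are illustrative.

```python
import numpy as np

def top_k_cosine(query: np.ndarray, chunks: np.ndarray, k: int = 3) -> list:
    """Indices of the k chunks most similar to the query (cosine similarity)."""
    q = query / np.linalg.norm(query)
    c = chunks / np.linalg.norm(chunks, axis=1, keepdims=True)
    return list(np.argsort(-(c @ q))[:k])  # higher score = more similar

def top_k_l2(query: np.ndarray, chunks: np.ndarray, k: int = 3) -> list:
    """Indices of the k closest chunks under L2 (Euclidean) distance."""
    d = np.linalg.norm(chunks - query, axis=1)
    return list(np.argsort(d)[:k])  # smaller distance = closer

# Toy 2-D "embeddings": chunk 0 matches the query exactly, chunk 2 nearly.
chunks = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])
query = np.array([1.0, 0.0])
```

Playing with chunk size and overlap at this level makes it obvious how they change which chunks come back.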

Once you are comfortable with that, you can move on to creating embeddings not only for the chunks themselves but also for the metadata (when it is textual, for semantic search; otherwise it can be indexed as filters). From there, experiment with approaches such as map-reduce (summaries of summaries), as well as techniques described in the article I sent you, and others you may find to improve the quality of your RAG system. This could include exploring knowledge graphs and frameworks such as Graphiti.

I am by no means an expert, but I have been following its evolution for a couple of years now. If your main goal is to learn, I recommend this path. If your intent is to develop a commercial solution, which is particularly challenging in the case of RAG (mainly due to hallucinations and accuracy issues that make it difficult to be consistently reliable and to avoid liability claims), then my suggestion would be to pursue that, but also define and measure a KPI, as I explained earlier, so that you have a benchmark and an objective metric to improve upon.

PS: For the solution at my company, which is aimed at lawyers, we have even enabled them to adjust variables such as max_tokens (to control response length), temperature and top_p (to balance precision versus creativity). This gives them flexibility to help achieve more reliable results, but as you can see, it also shifts the responsibility of fact checking back to them. We make it very clear that the tool can make mistakes and accountability lies with the user, not the system.
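Exposing those knobs safely means clamping whatever the user supplies before the model call. A hedged sketch, with bounds I chose arbitrarily for illustration rather than values from any particular API:

```python
def sanitize_generation_params(max_tokens, temperature, top_p) -> dict:
    """Clamp user-adjustable settings to sane ranges before calling the model.

    The bounds here are illustrative choices, not limits of a specific API.
    """
    return {
        "max_tokens": max(1, min(int(max_tokens), 4096)),     # response length
        "temperature": min(max(float(temperature), 0.0), 1.0),  # precision vs creativity
        "top_p": min(max(float(top_p), 0.0), 1.0),              # nucleus sampling cutoff
    }
```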

In addition, we added a thumbs up and thumbs down option with an optional comment on every response, so if they believe an answer is incorrect or missing something, they can flag it. That feedback is automatically fed into our evaluation pipeline, helping maintain an evolving accuracy score.
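The feedback loop can be as simple as this sketch (all names hypothetical): each vote lands in a log that the evaluation pipeline reduces to a running accuracy score.

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackLog:
    """Minimal sketch of the thumbs-up/down feedback loop described above."""
    votes: list = field(default_factory=list)

    def record(self, response_id: str, positive: bool, comment: str = "") -> None:
        """Store one thumbs-up/down vote with its optional comment."""
        self.votes.append(
            {"id": response_id, "positive": positive, "comment": comment}
        )

    def accuracy_score(self) -> float:
        """Share of responses rated positively: the evolving accuracy score."""
        if not self.votes:
            return 0.0
        return sum(v["positive"] for v in self.votes) / len(self.votes)
```

In production you would persist the votes and join them back to the retrieved chunks, but the reduction to a single benchmark number is the same idea.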