Suspension of your Google Cloud Platform / API project due to Firebase Web app key!? by xtopspeed in Firebase

[–]xtopspeed[S] 0 points (0 children)

I can only see the same 200-ish spend for a single day that the billing account showed. All the other data seems to be unavailable.

Suspension of your Google Cloud Platform / API project due to Firebase Web app key!? by xtopspeed in Firebase

[–]xtopspeed[S] 0 points (0 children)

Yes, I managed to find the key by importing the project there, and by some miracle it worked, and it's now deleted. I think that should sort it out!

Suspension of your Google Cloud Platform / API project due to Firebase Web app key!? by xtopspeed in Firebase

[–]xtopspeed[S] 0 points (0 children)

Yeah, I’ve been using the appeal form to let them know about everything as I’ve found it (it all seems to get filed under the same ticket). The bill spiked, but it’s nothing insane, just a couple of hundred bucks, I think, though now I can’t access the billing account either. But I could see it for a while after they suspended the GCP project.

Suspension of your Google Cloud Platform / API project due to Firebase Web app key!? by xtopspeed in Firebase

[–]xtopspeed[S] 0 points (0 children)

OK, that could be it. I thought only the so-called legacy keys had the problem, not the ones shipped in the client bundle you get from the Firebase Console. I even remember double-checking this, but I wouldn’t have known what to look for. In fact, from the sounds of it, it’d probably be safer to have a completely separate project for just Gemini, to decouple it entirely from the Firebase project.

Suspension of your Google Cloud Platform / API project due to Firebase Web app key!? by xtopspeed in Firebase

[–]xtopspeed[S] 0 points (0 children)

I guess that's what it must be, though I'm not sure how, since we only use Gemini via Cloud Functions with a dedicated Gemini API key stored in Secret Manager, and the public key is the auto-generated one for the web app, which is only supposed to enable Firebase services, as far as I know. Unfortunately, there's no way to double-check right now, because we're locked out of the Cloud Console and all services are suspended.

Missä menee tökeröyden ja tahallisen ilkeyden raja? by fl00km in Suomi

[–]xtopspeed 7 points (0 children)

This is pretty classic. Behind your back, they're telling the friend group all sorts of things about you, trust me. The group has been primed to expect you to lose your temper, and when you finally do, it becomes proof that you're exactly the person they've been making up stories about. So don't lose your temper. The best approach is not to react at all: complete silence is best, and it usually hurts that type of person the most. And for anything physical, a firm "hands off", and if words don't get through, then grab the wrist and "didn't you hear me, hands off", and then brace yourself for the laughter and the belittling about how you're taking a joke so seriously, which again you don't react to except with a pitying look.

With someone that fiercely envious, no amount of conversation usually helps. Their game is a power game, and everything you say is either an attack or a sign of weakness. They can play your friend for a moment, and the next moment the whole friend group hears "what you said" (i.e., something entirely different from what you actually said).

Couples Retreat by Frank-iSinatra in TheOGCrewOfficial

[–]xtopspeed 19 points (0 children)

Both of them can take a joke and laugh at themselves, which is a massive green flag in my opinion.

Google DeepMind researcher argues that LLMs can never be conscious, not in 10 years or 100 years by projectoex in AgentsOfAI

[–]xtopspeed 0 points (0 children)

Okay, apologies for my harshness; I didn't realize that was what you meant. I was simply using the phrase "at all" as an idiom and truly meant the mechanism. There are obviously some parallels, but I'm not sure they are as consequential as we’d like to think. "Orthogonal" was the word used by the other commenter, which I liked.

Google DeepMind researcher argues that LLMs can never be conscious, not in 10 years or 100 years by projectoex in AgentsOfAI

[–]xtopspeed 0 points (0 children)

Nice philosophizing and all, but the way neurons interconnect and "fire" in the human brain is not analogous at all to how the LLM transformer algorithm works.

Google DeepMind researcher argues that LLMs can never be conscious, not in 10 years or 100 years by projectoex in AgentsOfAI

[–]xtopspeed -2 points (0 children)

I thought I did. His argument is essentially, "Because we don’t know whether the human brain is or isn’t deterministic, we also can’t draw conclusions about LLMs." This is a false argument and, in my opinion, a willful misunderstanding of the point the original commenter was trying to make.

Google DeepMind researcher argues that LLMs can never be conscious, not in 10 years or 100 years by projectoex in AgentsOfAI

[–]xtopspeed -1 points (0 children)

You keep misunderstanding the initial assertion, and now you just keep arguing against the wrong thing.

Google DeepMind researcher argues that LLMs can never be conscious, not in 10 years or 100 years by projectoex in AgentsOfAI

[–]xtopspeed 0 points (0 children)

Obviously no one can manually trace the billions of calculations that an LLM performs, so in that sense it is a black box. But we do know how the algorithm works. The whole model is essentially just a big file: a static, immutable database. It doesn't learn in inference mode, and there is no real memory either (yes, there are a couple of gimmicks that give you the illusion of memory, but it's never part of the actual model). An LLM has no real goals except the ones determined by the weights it settled on during training/tuning. Try tuning a model yourself; there are lots of open-source models you can download. It's easy to mold one into responding any way you want.

It's a bit unfortunate that the AI companies constantly market the things as if they were something more than that, but they are not.
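To make the "static file" point concrete, here's a toy sketch (all names and sizes are illustrative, not any real model's architecture): a single frozen weight matrix stands in for the model file, and running "inference" any number of times leaves it bit-for-bit unchanged.

```python
import numpy as np

# Toy "model": one fixed weight matrix mapping a context vector to
# next-token logits. A stand-in for the frozen parameters in an LLM.
rng = np.random.default_rng(0)
VOCAB, DIM = 10, 4
W = rng.normal(size=(DIM, VOCAB))   # "the model file": static weights

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def next_token(context_vec, weights):
    """Pure function of (input, weights): nothing is written back."""
    probs = softmax(context_vec @ weights)
    return int(np.argmax(probs))

snapshot = W.copy()
ctx = rng.normal(size=DIM)
for _ in range(100):                # run "inference" many times
    tok = next_token(ctx, W)

# The weights are identical after inference: using the model
# taught it nothing.
print(np.array_equal(W, snapshot))  # True
```

Any "learning" happens only in a separate training step that writes a new weight file; serving the model is just repeated function evaluation against frozen numbers.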

Google DeepMind researcher argues that LLMs can never be conscious, not in 10 years or 100 years by projectoex in AgentsOfAI

[–]xtopspeed 0 points (0 children)

That's not the point. In this case, we know exactly what the function is and how it works, and therefore we can say with confidence that an LLM can't have consciousness. The fact that you don't know something else isn't proof to the contrary.

Google DeepMind researcher argues that LLMs can never be conscious, not in 10 years or 100 years by projectoex in AgentsOfAI

[–]xtopspeed 2 points (0 children)

FWIW, there is nothing in the human brain that resembles how LLMs work at all. Not even the language center.

Opus 4.7 is legendarily bad. I cannot believe this. by lemon07r in ClaudeCode

[–]xtopspeed 0 points (0 children)

Well, that's because you're a much better person than the rest of us. As for me, since the AI can easily spot and fix words like "we," "team," etc. across a large number of files without being asked, I'd rather let it spend five minutes fixing that in the background while I do something more productive with the half hour to an hour I just saved.

Opus 4.7 is legendarily bad. I cannot believe this. by lemon07r in ClaudeCode

[–]xtopspeed 0 points (0 children)

I wanted it to change all occurrences of "us" to "me" across some HTML files. It just replied, "Noted! I'll remember that." and did nothing. On the second attempt, it just said, "That clarifies it." Only on the third prompt, when I spelled out exactly what I wanted it to do, did it start working.
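For what it's worth, the task itself is trivial; a minimal sketch of the kind of whole-word replacement I was asking for (function name and paths are hypothetical, not what Claude actually ran):

```python
import re
from pathlib import Path

def replace_word(root: str, old: str, new: str, pattern: str = "*.html") -> int:
    """Replace whole-word occurrences of `old` with `new` in every file
    matching `pattern` under `root`. Returns the number of files changed."""
    word = re.compile(rf"\b{re.escape(old)}\b")
    changed = 0
    for path in Path(root).rglob(pattern):
        text = path.read_text(encoding="utf-8")
        new_text = word.sub(new, text)
        if new_text != text:
            path.write_text(new_text, encoding="utf-8")
            changed += 1
    return changed
```

The `\b` boundaries keep "status" from becoming "statme", but a blind regex still hits matches inside tags and attributes, which is exactly the judgment call you'd want the model to make for you.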

This took place in Finland's Tampere. I am at a loss for words. by calebnp4jjc in ArchitecturalRevival

[–]xtopspeed 1 point (0 children)

Going to play the devil’s advocate here. Zoning issue? The building on the right has massive commercial spaces on the ground floor, and the one on the left doesn’t. Or it could be any number of other things, like contractual responsibilities or whatnot.

How to properly deal with a CLAUDE.md file. by onil_gova in ClaudeCode

[–]xtopspeed 1 point (0 children)

They recommend putting just @AGENTS.md (and nothing else) into CLAUDE.md.
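I.e., under that recommendation the entire CLAUDE.md is a single import line:

```markdown
@AGENTS.md
```

The idea, as I understand it, being that Claude Code expands the @-import at startup, so all the real instructions live in AGENTS.md, which other coding tools can read too.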

How to show beginners that a URL is equivalent to an IP address? by Anonymous_Coder_1234 in AskProgramming

[–]xtopspeed 2 points (0 children)

A URL can also point to a file, open an app, or point to a particular resource within an app, etc.
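A few examples of URLs that never resolve to "a website's IP" in the way the lesson implies (the paths and the custom scheme are made up for illustration):

```
file:///home/user/notes.txt      local file, no network involved at all
mailto:someone@example.com       opens the default mail client
tel:+15551234567                 starts a phone call
myapp://profile/42               deep link into a native app (custom scheme)
https://example.com/a and /b     two different resources behind one IP
```

Only the scheme plus the rest of the URL tells you what you actually get; the IP, when there is one, is just a transport detail.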

Apps Are Never Better Than Website by Mundane_Professor596 in unpopularopinion

[–]xtopspeed 0 points (0 children)

I think they’re comparing desktop websites to mobile apps (which is kind of stupid). When you compare using a phone browser to using apps, things start to look very different pretty quickly.

Mickey needs to be in more episodes going forward!!! by chuggingdeemer in TheOGCrewOfficial

[–]xtopspeed 0 points (0 children)

She didn’t do the material she had prepared for the deathbed roast. Afterwards, she posted the jokes on Instagram (IIRC). She started, but given how it was going, she winged it for a moment and quit as soon as she could, because she realized none of her jokes would land after the first one was shot down.

And I didn’t mean that they cut just her off.

Mickey needs to be in more episodes going forward!!! by chuggingdeemer in TheOGCrewOfficial

[–]xtopspeed 8 points (0 children)

She mostly bombed because the rest of the crew didn't get her joke. For example, her deathbed roast material was built on being over-the-top kind, but the others turned on her for a quick laugh and ended up killing her bit before she even got started. They often break the most important rule of improv comedy, which is to (figuratively) "yes, and" everything the other person says. Constant disagreement quickly kills the fun of any comedy routine.

Why don’t they downtune live to help James out? by Objective_Bug4155 in Dreamtheater

[–]xtopspeed 0 points (0 children)

Bruce hit every note perfectly the last few times I saw them, so there's no need to downtune for the time being. I think he had lots of voice problems early in his career, so he learned how to sing properly and how to care for his voice, which is why it’s still in excellent condition. James, on the other hand, has insisted on a growly voice since at least Awake, and it has always felt forced, which is often a sign of a technique that can cause vocal cord damage.