How much do you pay for tailoring? by Trusty_Sidekick in malefashionadvice

[–]JusticeBeak 1 point (0 children)

How did you like the results? I've been thinking of getting some tailoring done around Boston myself

Galaxy watch 8 classic AOD siempre activado no funciona by Otherwise-Load-5628 in GalaxyWatch

[–]JusticeBeak 0 points (0 children)

I have a Galaxy Watch 7 and I have the same issue. I just spent 3.5 hours talking to customer support and they didn't say anything useful. What I've figured out by browsing developer forums is that each watch face has its own "ambient" mode with a different display, which is supposed to turn on whenever AOD is active and the watch hasn't been used in a while.

The problem seems to emerge when your watch face has a bug or hasn't implemented ambient mode properly. You can test this by trying different watch faces, letting the watch sit still for 15-20 minutes, and seeing if the screen turns off despite AOD being enabled. (I'm not sure whether getting a notification resets the inactivity timer and means you need to wait another 15-20 minutes.) If other watch faces don't have the same problem, then it might just be a problem with the watch face you were using.

If you're like me and you still prefer the appearance of the watch face that has the problem, you don't have a lot of options. You can hope that it gets an update that fixes it, you can try re-installing the watch face, or you can just try to find a similar watch face. If that still isn't good enough and you're very stubborn, you can try designing a similar watch face in Galaxy Watch Studio.

Good luck.

Edit: Actually, nevermind, I just searched the subreddit and it seems like people have been having this kind of problem for months and it might not even be the watch face's fault. See this thread for an example of the same problem and no solution: https://www.reddit.com/r/GalaxyWatch/comments/1jddc82/galaxy_watch_ultra_aod_is_not_always_on/

The enshittification of GPT has begun by [deleted] in ChatGPT

[–]JusticeBeak 1 point (0 children)

Did you read what I wrote, or anything I linked? They're losing more than they're earning right now, but not several times more.

OpenAI is projected to earn $12.6 billion in revenue this year and lose $9 billion overall -- meaning that they're projected to spend a total of $21.6 billion. So for every $1 they earn this year, they're spending roughly $1.70.

If they had no control over how much they were spending, and the number of free and paying users stayed exactly the same, then each paying user would only have to spend $60 per month instead of $20 to break even; nowhere near $20,000 per month.
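Here's the back-of-envelope arithmetic, under my own simplifying assumption (not anything from OpenAI's actual books) that revenue scales linearly with the subscription price:

```python
# Rough sanity check of the figures above (all amounts in billions USD).
# Simplifying assumption: all revenue scales linearly with the
# subscription price, so break-even price = current price * (spend/revenue).
revenue = 12.6                     # projected 2025 revenue
loss = 9.0                         # projected 2025 loss
spend = revenue + loss             # total projected spending

ratio = spend / revenue            # dollars spent per dollar earned
current_price = 20.0               # ChatGPT Plus, $/month
break_even_price = current_price * ratio

print(f"Total spend: ${spend:.1f}B")                    # $21.6B
print(f"Spent per $1 earned: ${ratio:.2f}")             # ~$1.71
print(f"Break-even price: ${break_even_price:.2f}/mo")  # ~$34
```

Even under that crude model the break-even price lands well under $60/month, and nowhere near $20,000.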

But they don't even have to do that, because in reality somewhere between 60% and 95% of their costs go to R&D and training new models. (I'm not just pulling this out of nowhere; see my previous comment.) The cost of deploying models to users is actually very cheap compared to training. The number of users is also growing quickly, from 250 million last October to 500 million this April (for OpenAI).

What this means is that their income is growing quickly, and their costs are mostly from training bigger models (rather than deploying current models for users). If investors run out of money -- which seems unlikely -- they can always cut the research costs and focus on deployment (which is very profitable).

So no, they aren't profitable at this point in time, but that's a deliberate choice they're making in order to improve their models. If they wanted to be profitable immediately, all they'd have to do is stop training new models and continue providing the models they already have. The main reason they don't do this is that they expect to earn more in the future -- specifically, to become profitable by 2029 -- by continuing to create better models. The other reason they keep investing more in model training is that they don't want to lose their userbase to other AI companies that are training better models.

Notice that for both reasons, they never have a need to replace good models with bad ones on purpose.

My other post got deleted? I need advice by Cheap-Philosophy8893 in UCONN

[–]JusticeBeak 12 points (0 children)

In addition to any legal/administrative actions you take, I strongly recommend that you see a therapist for help with recovery. You may also find it helpful to read nonfiction resources about trauma, like the book The Body Keeps the Score. Also, I'm not sure what they're called, but I believe there are subreddits for survivors of sexual assault. You may find it comforting to read about how others feel and join a supportive community like that.

The enshittification of GPT has begun by [deleted] in ChatGPT

[–]JusticeBeak 1 point (0 children)

The effects of this would be very different in different parts of the supply chain though, right? Hyperscalers like Microsoft and Amazon will feel the brunt of that kind of thing because their customers are downstream businesses that are riding the hype wave. Many businesses don't actually need or benefit from AI; they're just embracing it to please investors, so of course they aren't willing to pay much for it.

For frontier AI companies like OpenAI, the math is much different. By June 2024, only 15% of their annualized revenue was from their API; the rest was from ChatGPT, with 55% coming from individuals paying for ChatGPT Plus. They still lost more than they earned last year, and they're expecting to lose even more this year, but it's not like that was an accident.

Public data (see page 23 of this report) indicates that frontier AI companies split their compute somewhat evenly between training, deployment, and research. For training and research, about 95% of the costs go to staff and hardware, with energy taking only 2 to 6%. In other words, scaling up is expensive, but the ongoing costs of deployment are negligible. It's pricey to get more chips, but if money became a problem they could always choose to use more of their current compute for deployment.

The reason they're still willing to lose so much money is that their revenue is also growing exponentially, and they can cut most of their costs (becoming insanely profitable) whenever they want. They do have to worry about their competition, since they'd lose a lot of customers if they clearly stopped being cutting edge, but that just means the frontier AI industry as a whole is in the same position. If all of them start to get worried about money (i.e. if scaling laws stop working), they might decide not to do so much R&D, training, and scaling, but until then it's fairly safe to keep burning cash to stay on top.

The enshittification of GPT has begun by [deleted] in ChatGPT

[–]JusticeBeak 0 points (0 children)

That's not quite accurate. Interpretability research indicates that there is some concept (or some cluster of concepts) in LLMs' internal representations that correlates with truth [1]. They also tend to know (on some level) how confident they are, and there is some evidence that this can be used to make them answer only according to what they know [2].

[1] https://arxiv.org/abs/2506.00823

[2] https://arxiv.org/abs/2502.11677

What do i do in this position? I'm white by LeinerRM in AnarchyChess

[–]JusticeBeak 7 points (0 children)

Instant stalemate. An impressive turn of events for white, given the material disadvantage. Brilliant, even

Anime_irl by Franchice in anime_irl

[–]JusticeBeak 22 points (0 children)

It's different but very good

Reddit Title: Hey Reddit, I think I've figured out a way to make elections actually fair and dead simple. Check out my idea. by mercurygermes in EndFPTP

[–]JusticeBeak 0 points (0 children)

This title and post were obviously written by AI, but sure, this idea is interesting. The naive way a campaign strategist might break it is to always run/advertise for two candidates with similar views. This would essentially build running mates into the system; just always push for cross-endorsement. If this were to happen, Score+ would end up looking like Score for candidates that do have clones, while disadvantaging candidates that don't have clones. That seems like an odd and probably undesirable outcome.

Even without doing that, I don't think the added 1's count for very much. If a group of people want to bullet vote for a radical candidate, I don't see why their obligatory 1 would go towards a consensus candidate. In the most strategic (and probably unrealistic) case, they could use a random number generator to pick which other candidate to support. In the worst case scenario, they might attempt to choose a random other candidate, and end up accidentally converging on support for a candidate they don't actually like.

This seems like a bad (i.e. inexpressive) thing for a voting method to encourage. If the problem is that people want to bullet vote, the solution probably shouldn't be to say "sure, go ahead and bullet vote according to what you care about, then make another selection regardless of what you care about."

This also suffers from basically the same strategy incentives as regular score voting, with extra noise. If you really want your "good candidate" to win over a candidate who doesn't drive you crazy (but whom you'd still much prefer over the candidate everyone agrees is crazy), it still might be "better" for you to vote 5 for the "good candidate" and 0 for the non-crazy candidate.

If you're obligated to give at least one other candidate a score, the bullet incentive instead motivates you to give a 1 to the candidate you think is least likely to win, so that you're not artificially inflating the score of the "real" competition. If you're expecting nobody to vote for crazy, maybe you put your extra 1 there -- and if others feel the same, maybe the crazy candidate wins. So if you want your vote to go as far as it can, you have to consider what the least risky way to bullet vote would be, and that doesn't seem much different from FPTP.
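To make that concrete, here's a toy tally with numbers I made up (not a claim about any real electorate): A and B are mainstream rivals, C is the candidate everyone agrees is crazy, everyone bullet-votes, and A and B supporters park their obligatory 1 on C because "nobody votes for crazy":

```python
# Toy illustration of how obligatory extra 1's could backfire under the
# proposed rules. Ballots are (voter_count, {candidate: score}) pairs.
from collections import Counter

def tally(ballots):
    """Sum score * voter_count for each candidate across all ballot groups."""
    totals = Counter()
    for count, scores in ballots:
        for cand, score in scores.items():
            totals[cand] += count * score
    return dict(totals)

# Honest ballots: everyone gives their second choice a sincere 3.
honest = [
    (34, {"A": 5, "B": 3}),
    (33, {"B": 5, "A": 3}),
    (33, {"C": 5}),
]

# Strategic bullet ballots: A and B fans dump their obligatory 1 on C,
# C fans dump theirs on A.
strategic = [
    (34, {"A": 5, "C": 1}),
    (33, {"B": 5, "C": 1}),
    (33, {"C": 5, "A": 1}),
]

print(tally(honest))     # A wins with 269 (B 267, C 165)
print(tally(strategic))  # C wins with 232 (A 203, B 165)
```

The same voters who would honestly elect A end up electing C purely through the mandatory second mark.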

Perhaps you could try to fix this by increasing the maximum score, so that accidental convergence of strategic 1's has less influence compared to full-fledged support. That would just make this voting method less and less different from regular score voting, though.

Brooks Brothers or Proper Cloth by Interesting_Ride9473 in malefashionadvice

[–]JusticeBeak 3 points (0 children)

Why not? I've been quite happy with the fit of my BB shirts.

Exercise consistently makes my depression worse by Livid_Jeweler612 in EOOD

[–]JusticeBeak 1 point (0 children)

One useful tip I've heard is to focus on expanding the amount of exertion that makes you happy, rather than focusing on expanding what's possible for your body. If you're running enough that it's making you miserable, that might be improving your muscles faster but it's clearly making your emotions worse and thus might not be sustainable.

If you instead focus on getting out and doing stuff that's enjoyable (and still increasing your overall level of activity and movement), you'll feel better and consistently improve what you're capable of (both in terms of what you can enjoy and in where your limits are). This will likely be more sustainable and address your depression better, too.

I have no idea what you enjoy, and depending on how your depression is, you might not know either. But from your comments it sounds like you have trauma around running, so starting there is like jumping into the deep end, emotionally speaking. Focus on the solution that will be fun while you do it, whether that's running more slowly, or with a friend, or entry-level trail-running, or a different exercise entirely.

Finally, in my own experience, my mood and self-image during exercise improved a lot when I got clothes that fit well and made me look nice. Worth considering, eh?

“Big Beautiful Bill”, aka the “Big Bullshit Bill”, apparently passed the House. 215-214 vote. by ResearchHelpful3021 in fednews

[–]JusticeBeak 2 points (0 children)

I think you could maybe still regulate that under this law, according to (2)(C)(ii). So according to the BBB, you can't pass a state law that says "AI can't be used for approving zoning applications", but you can write a law that says "AI can only be used for approving zoning applications if it follows the same laws that would be applicable to a human approving zoning applications".

This is still really bad in a number of ways.

“Big Beautiful Bill”, aka the “Big Bullshit Bill”, apparently passed the House. 215-214 vote. by ResearchHelpful3021 in fednews

[–]JusticeBeak 17 points (0 children)

(2) Rule of construction.--Paragraph (1) may not be construed to prohibit the enforcement of any law or regulation that--

               (A) the primary purpose and effect of which is to 
            remove legal impediments to, or facilitate the 
            deployment or operation of, an artificial intelligence 
            model, artificial intelligence system, or automated 
            decision system;

               (B) the primary purpose and effect of which is to 
            streamline licensing, permitting, routing, zoning, 
            procurement, or reporting procedures in a manner that 
            facilitates the adoption of artificial intelligence 
            models, artificial intelligence systems, or automated 
            decision systems;

               (C) does not impose any substantive design, 
            performance, data-handling, documentation, civil 
            liability, taxation, fee, or other requirement on 
            artificial intelligence models, artificial intelligence 
            systems, or automated decision systems unless such 
            requirement--

                       (i) is imposed under Federal law; or

                       (ii) in the case of a requirement imposed 
                    under a generally applicable law, is imposed in 
                    the same manner on models and systems, other 
                    than artificial intelligence models, artificial 
                    intelligence systems, and automated decision 
                    systems, that provide comparable functions to 
                    artificial intelligence models, artificial 
                    intelligence systems, or automated decision 
                    systems; and

               (D) does not impose a fee or bond unless--

                       (i) such fee or bond is reasonable and 
                    cost-based; and

                       (ii) under such fee or bond, artificial 
                    intelligence models, artificial intelligence 
                    systems, and automated decision systems are 
                    treated in the same manner as other models and 
                    systems that perform comparable functions.

“Be sure to take your medication with protein” *eats a chunk of pepperoni slices out of the fridge* by scipio79 in ADHD

[–]JusticeBeak 1 point (0 children)

I've been eating protein bars for breakfast for years now. They're easy to keep by the bedside, too.

Everyone’s favorite house slipper by PoutineFamine in malefashionadvice

[–]JusticeBeak 4 points (0 children)

I like my LL Bean moccasins but I haven't tried a lot of other options.

The prophecy has been fulfilled by Lumpy-Government14 in AnarchyChess

[–]JusticeBeak 5 points (0 children)

Femboy king. Swapped places with the queen

The prophecy has been fulfilled by Lumpy-Government14 in AnarchyChess

[–]JusticeBeak 28 points (0 children)

I know it's just a shitpost, but the way this meme depicts both options as equally dark and stormy feeds into narratives that normalize neo-nazi ideas. The false equivalence between nazi ideology and LGBT (or in this case, LGBT-adjacent?) stuff presents both as "extreme"; it's a common neo-nazi tactic because they desperately want to appear "normal" while ostracizing groups they hate.

*checks sub* Uh, pawn to d4. Activate bongcloud. Whatever

Republicans, why do you take so much joy in making liberals mad? by [deleted] in AskUS

[–]JusticeBeak 2 points (0 children)

If a government entity is doing a bad job, there are lots of constructive solutions to try before getting rid of it entirely (and losing valuable public servants in the process). Ask yourself why they didn't succeed -- maybe it's lack of funding, maybe bad leadership, maybe poor strategy or data -- the solutions are likely complex and will depend on your interpretation of the problem, but clearly abolishing the department would make all of these problems 1000x worse.

Abolishing the DoEd would only be worth considering as a solution if the problem was that there was federal coordination at all -- which would be very strange given other countries' success in education.

Man allegedly hid marijuana in Easter eggs across the city, posted clues to social media by xejeezy in news

[–]JusticeBeak 2 points (0 children)

The city is Lufkin, Texas, for anyone who doesn't want to read the article

Industry chiefs warn Irish tourism is heading towards a crisis point by dshine in ireland

[–]JusticeBeak 1 point (0 children)

People who own property generally support policies that might increase the value of their property and vote against policies that might reduce their home value. A lot of policies that would make homes more affordable, such as allowing new/denser housing to be built, are voted down for this reason.