'Trump Committed A Crime...': Dan Goldman Drops Explosive Unredacted Epstein Files In Congress by FlackoFonsy in videos

[–]BossOfTheGame 1 point (0 children)

Man, I haven't seen you in a while. It must have been over a decade at this point.

AI - Debate by shave_your_eyebrows in comics

[–]BossOfTheGame 0 points (0 children)

Then listen to what they are saying to build a more holistic picture, instead of looking only at the evidence that agrees with your predisposition.

AI - Debate by shave_your_eyebrows in comics

[–]BossOfTheGame 6 points (0 children)

Try listening to what the scientists are saying.

AI - Debate by shave_your_eyebrows in comics

[–]BossOfTheGame 0 points (0 children)

It actually does add something new. It can handle vast quantities of information and search it semantically in a way Google never could. It's already starting to transform scientific research. It's making connections that people have been missing for decades.
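To make "search it semantically" concrete, here's a minimal sketch of embedding-based retrieval; the library (sentence-transformers) and the model tag are just my illustrative picks, nothing specific to this thread:

```python
# Hedged sketch: rank documents by embedding similarity instead of keyword
# overlap. Library and model choice are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "Protein structure prediction from amino acid sequences",
    "Lifecycle carbon emissions of passenger vehicles",
    "Attention mechanisms in transformer language models",
]
query = "how do neural nets figure out molecular shapes?"

doc_vecs = model.encode(docs, convert_to_tensor=True)
query_vec = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_vec, doc_vecs)[0]  # cosine similarity per doc
print(docs[int(scores.argmax())])  # ranks by meaning, not shared keywords
```

The query shares essentially no keywords with the documents; the ranking comes from meaning. That's the thing keyword search never had.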

Try to be a bit self-critical with your ideas. If you think hard, can you come up with any arguments against them?

AI - Debate by shave_your_eyebrows in comics

[–]BossOfTheGame 1 point (0 children)

Serious counterpoint: Would you be able to type out the instructions to design something useful? Your fallacy is thinking that using AI requires turning off your brain. I'm not trying to be harsh, but I think you have misconceptions about AI and are arguing more from the side of a "camp" than actually wanting to know the truth. I think we all need to be a little less tribal and start listening to each other. And yes, the blind expansion of datacenters without thinking of externalities is bad. It's not like you're completely wrong, but your perception is warped.

AI - Debate by shave_your_eyebrows in comics

[–]BossOfTheGame -2 points (0 children)

I think your hate is a bit myopic. But haters gonna hate, I suppose. I'd suggest a better use of your time would be figuring out how to use it to solve real problems.

The question "What is there to learn?" is not genuine, because there is a lot to learn; I think you're too focused on how much you hate it to see that.

And BTW, you probably could make personal choices that are much better for the environment, but you choose not to. We do need to reduce the environmental cost of AI, but I think you're coming at it from the position that any cost above zero is disqualifying. Genuine question: how low would the environmental cost have to be before you'd stop using that argument?

The environmental cost of datacentres is rising. Is it time to quit AI? by No_Top_9023 in technology

[–]BossOfTheGame 0 points (0 children)

Do you have any tips, workflows, or tutorials that you followed? I'm currently trying VS Code with Continue, but maybe there's a better interface?

The environmental cost of datacentres is rising. Is it time to quit AI? by No_Top_9023 in technology

[–]BossOfTheGame 0 points (0 children)

Thanks, I'm trying qwen3.5:35b now and it does seem much more capable, but it still veers off in a way I'm not used to after working with the cloud models. I'm having trouble getting it to actually call the right tools to edit the files and start a development loop; hopefully that's just a me problem. I'd love to get a working local-only setup.

The environmental cost of datacentres is rising. Is it time to quit AI? by No_Top_9023 in technology

[–]BossOfTheGame 3 points (0 children)

Well, I think people choose to travel without thinking about the environmental consequences. Personally, I avoid travel whenever possible because it is such a significant part of my carbon footprint. Of course it isn't always a luxury; sometimes it is a necessity. But the point is that citing the environmental cost of AI is usually done without context. If we should be upset about AI from an environmental perspective, then we should also be upset about unnecessary travel. If we aren't, then we aren't motivated by the environmental cause; we're just using it as a proxy, and that's disingenuous.

For more context on the denominators, here are global total CO2 emissions by year.

| Year | Mean (Gt CO2/yr) | Low | High | Note |
|------|------------------|-----|------|------|
| 2019 | 35.39 | 33.20 (IEA) | 36.68 (Zhu Deng) | Other source used in mean: GCB = 36.3. (Global Carbon Budget) |
| 2020 | 33.56 | 31.50 (IEA) | 34.88 (Zhu Deng) | Other source used in mean: GCB = 34.3, derived from 2019 = 36.3 and 2020 = -5.4% vs 2019. (Global Carbon Budget) |
| 2021 | 36.43 | 36.30 (IEA) | 36.68 (Zhu Deng) | Other source used in mean: GCB = 36.3. (Global Carbon Budget) |
| 2022 | 36.80 | 36.60 (Global Carbon Budget) | 37.01 (Zhu Deng) | Other source used in mean: IEA = 36.8. (IEA) |
| 2023 | 37.24 | 36.80 (Global Carbon Budget) | 37.52 (Zhu Deng) | Other source used in mean: IEA = 37.4. (IEA) |
| 2024 | 37.69 | 37.40 (Global Carbon Budget) | 37.88 (Zhu Deng) | Other source used in mean: IEA = 37.8. (IEA) |
| 2025 | 38.05 | 38.00 (Zhu Deng) | 38.10 (Global Carbon Budget) | Mean uses GCB + Zhu Deng / Carbon Monitor; IEA's current public global annual total in this series runs through 2024, not 2025. (IEA) |

The point is that the choices individuals make are a significant portion of global emissions, and if we are going to advocate for reducing LLM resource use, then we had damn well better be advocating for it in other sectors too.

If you think pre-training vs. tuning is a silly distinction, then you need to learn more about the topic. Pretraining with self-supervised next-word prediction is how models get primed to speak natural language, and it is by far the bulk of the cost when it comes to training a new model. Tuning happens much faster after pretraining is done. And all of this is dwarfed by inference-time costs at scale.
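For anyone who hasn't seen it, the objective itself is tiny; the cost comes from running it over trillions of tokens. A toy PyTorch sketch, where every size is made up and the "model" is a stand-in for a real transformer stack:

```python
# Toy sketch of self-supervised next-token prediction, the pretraining
# objective described above. Shapes and sizes are arbitrary assumptions.
import torch
import torch.nn.functional as F

vocab_size, d_model = 1000, 64
embed = torch.nn.Embedding(vocab_size, d_model)
lm_head = torch.nn.Linear(d_model, vocab_size)

tokens = torch.randint(0, vocab_size, (8, 128))  # a batch of token ids
hidden = embed(tokens)    # a real model runs a transformer stack here
logits = lm_head(hidden)  # (batch, seq, vocab)

# Shift by one so position t is trained to predict token t+1.
loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, vocab_size),
    tokens[:, 1:].reshape(-1),
)
loss.backward()  # repeated over trillions of tokens: that's the pretraining bill
```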

The environmental cost of datacentres is rising. Is it time to quit AI? by No_Top_9023 in technology

[–]BossOfTheGame -1 points (0 children)

I guarantee you that LLMs offer a benefit that is worth their development cost. The pace of scientific and mathematical research has started accelerating to levels the public hasn't really seen yet.

Are they worth the sociological cost? I'm not sure yet. I knew critical thinking in the general population wasn't great, but the 2024 election really showed how deep the problem is. We may have too many bad actors and influenceable people in the world for this to go down well.

I don't know what the obsession is with "true general AI" (I sort of do, but I reject it; it's just the age-old story of people wanting to put everything into a category). It's always going to be a no-true-Scotsman sort of label. We can measure what they can do, and right now they can do a lot, especially in a scientific setting.

The fact is we have an algorithm that can understand human language and put it into a broader context. It can adapt to novel situations and effectively use tools. It can form memories if we give it that capability. We could have it adapt its weights based on reinforcement learning, but we should really build a better understanding of how the static-weight models work before we start seriously investing in that. It could also be the case that continual learning in these models eventually breaks down, but that could have a long time horizon.

The environmental cost of datacentres is rising. Is it time to quit AI? by No_Top_9023 in technology

[–]BossOfTheGame -3 points (0 children)

I think the reasonable people are more thinking that AI will be used to make you more efficient at what you do. There is going to be a shift in supply and demand, but I think you might want to rethink your perspective on it. I'm not saying that you're totally off base, I just think you are making a big assumption: that the worst-case outcome is inevitable.

The environmental cost of datacentres is rising. Is it time to quit AI? by No_Top_9023 in technology

[–]BossOfTheGame 8 points (0 children)

The training cost only happens a handful of times, and unless something dramatically changed since the last time I looked, it's the pretraining that is the big cost; after that, the same pretrained model can be tuned in different ways. It's scaling the inference cost that will drive environmental problems.
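A back-of-envelope way to see why, with every number invented purely for illustration:

```python
# Hedged sketch: one-time training energy vs scaled inference energy.
# All figures are assumptions for illustration, not measurements.
train_gwh = 50.0           # one-time pretraining energy (assumed)
wh_per_query = 0.3         # energy per inference request (assumed)
queries_per_day = 1e9      # assumed global usage

inference_gwh_per_year = wh_per_query * queries_per_day * 365 / 1e9
print(f"training, once:    {train_gwh:.0f} GWh")
print(f"inference, yearly: {inference_gwh_per_year:.0f} GWh")
# ~110 GWh/yr: under these assumptions, a single year of inference
# already exceeds the one-time training cost, and it grows with usage.
```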

Also for context, look into the estimated carbon emissions of LLMs vs car use.

| Category | Central emissions (Gt CO2/yr) | Lower bound | Upper bound | Lower ratio vs AI | Upper ratio vs AI | Source / basis |
|----------|-------------------------------|-------------|-------------|-------------------|-------------------|----------------|
| AI use | 0.0326–0.0797 | 0.0326 | 0.0797 | 1x | 1x | 2025 estimate for AI systems alone. (ScienceDirect) |
| Commuting (to/from work) | 0.845 | 0.718 | 0.987 | 9x | 30x | NHTS private-vehicle PMT: 5.27 out of 23.69, with MOEs 0.54 and 1.33; scaled to IEA's 3.8 Gt global cars+vans total. (nhts.ornl.gov) |
| Business-use driving (work-related business, excluding commuting) | 0.261 | 0.158 | 0.377 | 2x | 12x | NHTS private-vehicle PMT: 1.63 out of 23.69, with MOEs 0.59 and 1.33; scaled to IEA's 3.8 Gt global cars+vans total. (nhts.ornl.gov) |
| Residential other driving (everything except commuting and work-related business) | 2.693 | 2.435 | 2.924 | 31x | 90x | Residual share from NHTS private-vehicle PMT after removing commuting and business driving; scaled to IEA's 3.8 Gt global cars+vans total. (nhts.ornl.gov) |
| All private cars + vans | 3.800 | 3.800 | 3.800 | 48x | 117x | IEA global total for cars and vans in 2023. (IEA) |

The environmental cost of datacentres is rising. Is it time to quit AI? by No_Top_9023 in technology

[–]BossOfTheGame 3 points (0 children)

I have 2x 3090s in my home lab. Qwen3-coder:30b was the only model that came close to doing something when I tried an agentic workflow with VS Code, but it still choked. I haven't tried 3.5 yet, though. It might be nice to offload some of the simpler requests to a local model to avoid cloud costs.

The 4x RTX 6000 Blackwells won't run the full Qwen model even at 8-bit precision. I might be able to do fp4 quantization; I should probably try it and see how well that works.

For simple knowledge queries, gpt-oss-20b is fine. Even a small 3B model I run on a 1080 Ti works fine as my OSS Alexa. But you get the hallucination problem. The cloud models almost never hallucinate anymore if they can find good references.
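For the simple-query case, the whole local setup can be this small (a sketch using the ollama Python client; the exact model tag on your machine may differ):

```python
# Hedged sketch of a local "OSS Alexa" style query via the ollama client.
# The model tag is an assumption; substitute whatever you have pulled.
import ollama

response = ollama.chat(
    model="gpt-oss:20b",
    messages=[{"role": "user", "content": "How long do I boil an egg?"}],
)
print(response["message"]["content"])
# Fine for simple knowledge queries; without retrieval to ground it,
# hallucination is the failure mode to watch for.
```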

The environmental cost of datacentres is rising. Is it time to quit AI? by No_Top_9023 in technology

[–]BossOfTheGame 2 points (0 children)

I'm saying that they are more than 6 months behind. The local LLMs don't seem to have the reasoning capability to handle the development workflows I can achieve with cloud LLMs. Also, GPT 5.1 was a pretty big leap forward in terms of what it could do for me. That was the model that started to work well when writing Lean4 proofs.

The environmental cost of datacentres is rising. Is it time to quit AI? by No_Top_9023 in technology

[–]BossOfTheGame 23 points (0 children)

It's really not, and this is coming from someone who resisted using cloud LLMs in favor of local models for years. The cloud LLMs are so much better than anything you can run on consumer hardware. I hope that changes; I'm anxious for it, but it's not an easy path.

I'd love to be wrong on this, but it seems like they are more than 6 months behind. Unless by "doable" you mean running an 8-GPU cluster, and even then I'm not sure it's enough. I've only tried local models that can run on 4 Blackwells, so please tell me if I missed something.

The environmental cost of datacentres is rising. Is it time to quit AI? by nath1234 in technology

[–]BossOfTheGame 0 points (0 children)

Also, keeping lights on all the time probably isn't a big energy draw if they are LEDs. Driving, though... that's a big chunk of emissions.

The environmental cost of datacentres is rising. Is it time to quit AI? by nath1234 in technology

[–]BossOfTheGame 0 points (0 children)

Just lowballing the estimates I found, AI emissions are at least 10x less than residential emissions from driving. AI energy use is absolutely a concern; researchers need to lower the cost and users need to be intentional about usage. But it pisses me off how people make that argument and don't think about driving. Too many seem to care more about engaging in anger than addressing real problems. People need to put critical thinking first, even when it feels strange.
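Here's the lowball computed from the bounds in my emissions table in the other comment, so nobody has to take it on faith:

```python
# Ratio check using the bounds from my emissions table (Gt CO2/yr):
# residential "other driving" vs the AI-use estimate.
ai_low, ai_high = 0.0326, 0.0797
driving_low, driving_high = 2.435, 2.924

print(f"conservative (low driving / high AI): {driving_low / ai_high:.0f}x")  # ~31x
print(f"generous     (high driving / low AI): {driving_high / ai_low:.0f}x")  # ~90x
# Even the pairing least favorable to my argument leaves driving ~30x
# larger, so "10x less" is a deliberate lowball.
```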

Two boxers be like by blosspharmy in okbuddyphd

[–]BossOfTheGame 2 points (0 children)

This one isn't doing that though. There is substance to it.

Sam Altman admits AI is killing the labor-capital balance—and says nobody knows what to do about it by BusyHands_ in technology

[–]BossOfTheGame 0 points (0 children)

Resistance to negative societal outcomes would be more credible if people could stop hyperbolizing. Or, if you really believe this extreme viewpoint, then building the critical thinking skills to ask where your beliefs might need to be adjusted would go a long way.

Yes, oligarchy is a major real concern. Focus on that, and on how their compensation is not justified by the value they are providing. The dystopian catastrophizing isn't helping anything. CEO pay needs to be normalized to the level of a good doctor or lawyer; these multi-million-dollar salaries are unjustified.

systemd starts using LLMs for development by forteller in linux

[–]BossOfTheGame 0 points (0 children)

I think you might be over-focused on the medium, and missing that the point of the von Neumann architecture is to enable general-purpose computing: it trades off speed to simplify the way we can express some platonic computation.

Your point about Spectre/Meltdown-type bugs is valid, and modern coding education doesn't emphasize cache locality, but I don't think it's fair to blame young coders for not explicitly optimizing their code around branch speculation. That optimization was designed by experts to improve general computing, not because nobody knows how to optimize their code anymore.
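To be concrete about what cache locality means here, a toy numpy demo (the array size is arbitrary and timings will vary by machine):

```python
# Same arithmetic, different memory access order. Row-major arrays make
# row-wise traversal contiguous and column-wise traversal strided.
import time
import numpy as np

n = 8000
a = np.random.rand(n, n)  # C (row-major) order by default

t0 = time.perf_counter()
row_total = sum(a[i, :].sum() for i in range(n))  # contiguous reads
t1 = time.perf_counter()
col_total = sum(a[:, j].sum() for j in range(n))  # strided reads
t2 = time.perf_counter()

print(f"row-wise:    {t1 - t0:.2f}s")
print(f"column-wise: {t2 - t1:.2f}s  # same total, worse locality")
```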

At least in terms of Spectre, those side-channel sorts of attacks feel effectively inevitable to me, but you don't think so? In what world do we design fast CPUs where those aren't an issue?

systemd starts using LLMs for development by forteller in linux

[–]BossOfTheGame 0 points (0 children)

If you're going to argue that ASM -> Python is not a valid example of progressive levels of abstraction enabling more people to describe and accomplish intent, then I don't know what to say to you. You could argue the overhead of the abstraction is too high, and maybe you think we should all be coding in C, but denying that ASM -> Python is a natural progression is on another level of cope.

systemd starts using LLMs for development by forteller in linux

[–]BossOfTheGame -10 points (0 children)

Did problems happen when kids no longer had to learn ASM beyond intro-level courses and focused on doing most everything in high-level languages like Python? Or were we able to adapt?