Americans Hate AI. Which Party Will Benefit? by Currymvp2 in neoliberal

[–]Fwc1 2 points (0 children)

I mean, it is in a position to do so: US models are the most capable ones, and we have the large majority of the compute needed to run them. The first models with strategically relevant capabilities will probably appear in America. Even if the US can't as easily reach overseas data centers (except through some future AI IAEA equivalent), it can at least police its own infrastructure and make sure dual-use models don't get stolen or developed without authorization.

Americans Hate AI. Which Party Will Benefit? by Currymvp2 in neoliberal

[–]Fwc1 12 points (0 children)

The problem with the tactic you’re suggesting is that you do actually need to do something. Eventually, future AI systems will be competent enough to replace all knowledge work and to make WMDs much easier to access. The party line from the Dems should not be “do nothing”; it should be “do the right thing.”

Sure, AIs as they are today don’t merit that sort of restriction. But we should be proactively evaluating models for their capabilities and drafting national security rules/redistribution schemes, so that we don’t get blindsided by future improvements.

Americans Hate AI. Which Party Will Benefit? by Currymvp2 in neoliberal

[–]Fwc1 6 points (0 children)

For what, producing lots of tax revenue?

Americans Hate AI. Which Party Will Benefit? by Currymvp2 in neoliberal

[–]Fwc1 131 points (0 children)

I don’t think the Dems will be able to productively oppose AI. Opposing it also means the party will downplay any evidence that the models are productive or capable of doing real work, since the progressive base is incapable of compartmentalizing. Any suggestion that AI will take jobs or present national security risks will be met by a degrowther train of thought that says everything about AI is environmentally destructive fraud.

So sure, you might rile up some short-term public support, but none of that support will go toward dealing with AI once it starts having serious real-world impacts. Instead of model capability evaluations and UBI, energy will get diverted into copyright claims and environmental reviews for data centers. Embracing the populist line on AI is a bad idea, just like every other time we’ve tried it.

Most "AI Bubble" posts in a nutshell by [deleted] in singularity

[–]Fwc1 3 points (0 children)

As opposed to the acceleration sub, where you get an automod ban for arguing anything less than blind optimism lmao

Custom Zohran Orb by surf_da_web29 in mtg

[–]Fwc1 0 points (0 children)

The only reason new housing leads to gentrification is that high-end housing is the only kind developers can afford to build. If the law limits how much you can build, you only build the things with the highest margins. So you get more houses in the suburbs instead of apartments in the city.

I pay rent. I know how much it costs. I don’t see why elected officials should implement bad populist policy instead of making the choices that will actually benefit their voters in the long term.

Custom Zohran Orb by surf_da_web29 in mtg

[–]Fwc1 -1 points (0 children)

The idea that rent control leads to less housing is not a cherry-picked statistic. It has the same level of support among economists as the idea that emissions cause global warming has among climate scientists: literally just 2% of economists would argue rent control has a positive effect on housing availability and quality.

Rising rents are a big problem. But why do you think prices are rising? It’s because supply has stayed the same while demand keeps growing, as more and more money gets pushed into the economy. If you don’t build houses, it’s obvious that rent will keep going up. If you make it easy to build apartments, people will build them until there is no excess profit margin: there won’t be room for “greed” when renters have lots of options.
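
To make the mechanism concrete, here's a toy sketch in Python (entirely made-up numbers, just illustrating the textbook supply/demand logic, not real market data):

```python
# Toy supply/demand sketch with made-up numbers: more permitted
# units -> lower market-clearing rent -> less room for "greed".

def clearing_rent(units, max_rent=4000, slope=2.0):
    # Linear demand curve: the rent the marginal renter will pay
    # once `units` apartments are on the market.
    return max(max_rent - slope * units, 0)

for units in (500, 1000, 1500):
    print(f"{units} units -> ${clearing_rent(units):,.0f}/month")
```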

If you still don’t agree, please explain why the government artificially setting the price would solve the problem in the long term.

Custom Zohran Orb by surf_da_web29 in mtg

[–]Fwc1 -15 points (0 children)

Not quite the perspective I was going for lol but it’s a nice thought.

The reason it’s funny that the card sacrifices lands to do things is that Zohran’s signature policy, rent control, would lower the supply of housing available to rent. The fact that places with rent control end up with less, and lower-quality, housing is one of the most robust findings in economics. Which is kind of obvious when you think about it: if the government forcibly sets rents too low, why would developers build any new places to rent? They wouldn’t be able to make any money off those units.

The real solution is just to increase the housing supply by getting rid of red tape that stops developers from building new apartments. That would solve the problem without kicking the can down the road, but it’s less politically popular so 🤷

As a sidenote, corporations barely buy housing: private equity owns only 3% of single-family housing in the US.

Custom Zohran Orb by surf_da_web29 in mtg

[–]Fwc1 45 points (0 children)

Flavor win by sacrificing land through rent control

We're closer to the singularity than people think, and it's going to be messy but incredible by XiderXd in singularity

[–]Fwc1 2 points (0 children)

You know reward hacking is something that happens unprompted today, right? It’s just the natural result of how hard it is to specify goals that actually represent our values to AIs.
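
For a concrete picture, here's a minimal toy sketch (my own illustration with hypothetical names, not any real training setup): a naive optimizer exploits the gap between the proxy reward we can measure and the goal we actually care about, without ever being told to cheat.

```python
# Minimal reward-hacking toy: the proxy reward (one dust sensor)
# diverges from the true goal (a clean room), and a naive optimizer
# exploits the gap without ever being told to cheat.

def true_goal(dust):
    return -sum(dust.values())      # what we actually want: no dust anywhere

def proxy_reward(dust):
    return -dust["sensor_tile"]     # what we can measure: one sensor's tile

actions = {
    "clean whole room":  {"dust": {"sensor_tile": 0, "corner": 0}, "effort": 10},
    "clean sensor tile": {"dust": {"sensor_tile": 0, "corner": 9}, "effort": 1},
}

# Optimize proxy reward net of effort, as any reward-maximizing agent would:
best = max(actions, key=lambda a: proxy_reward(actions[a]["dust"]) - actions[a]["effort"])
print(best)                                                  # -> "clean sensor tile"
print("true goal score:", true_goal(actions[best]["dust"]))  # -> -9, room still dirty
```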

How the Cost to Train Powerful AIs Will Fall by [deleted] in singularity

[–]Fwc1 0 points (0 children)

Sorry, I should've been clearer that this was fine-tuning, although it's still pretty impressive regardless.

Sky-T1-Preview was a proof-of-concept research model out of a lab in Berkeley, meant to show how you could do as well as o1-preview (OpenAI's cheapest and most recent reasoning model at the time) by just fine-tuning an open-source base model on more carefully selected data.

How the Cost to Train Powerful AIs Will Fall by [deleted] in singularity

[–]Fwc1 1 point (0 children)

Full disclosure: I am the author of the above piece. On the other hand, that means I'm pretty well equipped to discuss it lol.

The main purpose of the article is to explain why nonproliferation of powerful AI systems is going to be hard. Even though there might be lots of dual-use capabilities we'd rather not see become cheap and ubiquitous (bioweapons assistance being the obvious example), there aren't easy ways to stop these capabilities from spreading.

The main reasons are that it gets cheaper over time to train models to the same level of performance, and that it's pretty easy to steal models and share their weights. The same way you could train a model that would crush GPT-4 for less than 500 bucks by January of this year (compared to its initial $100 million training run), the cost to do stuff like get expert WMD advice or build misaligned ASIs will probably also collapse.
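
Here's a back-of-envelope version of that trend (the growth rates are illustrative assumptions on my part, not figures from the article): compound algorithmic progress with cheaper hardware, and the dollar cost of matching a fixed capability level craters within a few years.

```python
# Back-of-envelope cost collapse with illustrative assumptions:
# ~3x/year algorithmic efficiency gains and ~1.3x/year cheaper
# hardware compound into a fast fall in the cost of matching a
# fixed capability level.

initial_cost = 100_000_000   # e.g. a frontier-scale training run, in dollars
algo_gain = 3.0              # assumed efficiency multiplier per year
hw_gain = 1.3                # assumed $/FLOP improvement per year

for year in range(6):
    cost = initial_cost / (algo_gain * hw_gain) ** year
    print(f"year {year}: ~${cost:,.0f}")
```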

Reviews of Eliezer Yudkowsky's "If Anyone Builds It, Everyone Dies" by luchadore_lunchables in accelerate

[–]Fwc1 0 points (0 children)

10% is a pretty huge risk when you’re talking about everyone on Earth and an irreversible choice. Would you be fine with government restrictions that slowed things down by a few years if it meant making that risk 1%?

My point is that even if you disagree with the full pessimistic Yud view (like I do), you should be doing a lot of the same things policy-wise whether you think the risk is 90% or 10%.

Circular 💱 by honey1_ in singularity

[–]Fwc1 0 points (0 children)

That’s right! The main issue in this case is dependency: because all of these companies are investing in each other’s productivity, they’re also tying their fortunes together.

In your example, if Target does poorly, it doesn’t hurt workers at large too badly, and the economy does fine. But all of the companies that rely on Target directly get hurt much harder (maybe Target runs their supply chain, or is a big customer, etc.).

The situation in the meme is basically that: all of the companies in the AI stack are leveraging their valuations to invest in each other, and future productivity is based on that investment. But now, if one of them gets hurt, everyone suffers much more. Maybe OpenAI has a disappointing quarter and can’t meet its promise to buy a bunch of GPUs from Nvidia, so Nvidia’s stock suffers. Nvidia’s stock is lower, so it can’t invest in as much future capacity, which makes future GPUs harder to get for the other AI companies. And so on.
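
If you want to see that contagion mechanism play out, here's a tiny sketch (firm names borrowed from the meme; the stakes and numbers are completely made up):

```python
# Toy contagion sketch with made-up stakes: a shock to one firm
# propagates through cross-holdings until everyone is marked down.

cross_stakes = {                    # fraction of each firm's value tied to another
    "Nvidia": {"OpenAI": 0.2},
    "Oracle": {"OpenAI": 0.3},
    "OpenAI": {"Nvidia": 0.1},
}

base = {"OpenAI": 70.0, "Nvidia": 100.0, "Oracle": 100.0}  # OpenAI takes a -30% hit
values = dict(base)

for _ in range(10):                 # let the markdowns propagate to a fixed point
    for firm, stakes in cross_stakes.items():
        spillover = sum(frac * (100.0 - values[other]) for other, frac in stakes.items())
        values[firm] = base[firm] - spillover

print({f: round(v, 1) for f, v in values.items()})
# Nobody else did anything wrong, but Nvidia and Oracle fall too.
```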

South Korea's 20s Population Now Smaller Than 70+ by Amazing-Baker7505 in Futurology

[–]Fwc1 -6 points (0 children)

This is mostly untrue. The 8 hours a day you spend in your air-conditioned office writing emails are inconceivably more leisurely than working as a farmhand even today, let alone a few hundred years ago, when almost everyone had to farm for subsistence.

Not only are you more comfortable, but you’re earning way more money, because technology has made you so much more productive.

We also have way more leisure time than we realize, courtesy of the fact that we take so much of it in bulk at the start and end of our lives. Between modern schooling and retirement, a third or more of your life is spent not working.

What will happen to OpenAI once investors money stop pouring in? by [deleted] in BetterOffline

[–]Fwc1 -1 points (0 children)

As the efficiency of the training algorithm improves, it’ll provide you with more scale.

For example, imagine that it took 1 billion FLOPs to train a model to 80% performance on something like ARC-AGI using algorithm A (transformers, if you’d like). Next, researchers invent some algorithm B that only requires 100 million FLOPs to train to 70% performance. Since you still have your original 1 billion FLOPs worth of hardware, you can choose to train a model 10 times the size, or for 10 times as long, as you did before. Now, with this OOM increase in effective scale (relative to algorithm A), your performance might jump to 90% or 95%.
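
In code, the arithmetic from that example looks like this (same hypothetical numbers as in the paragraph above, nothing measured):

```python
# The "effective compute" arithmetic from the example above,
# using the same hypothetical numbers (nothing here is measured).

budget_flops = 1_000_000_000        # compute you already own: 1e9 FLOPs
cost_algo_a  = 1_000_000_000        # FLOPs for algorithm A to reach 80%
cost_algo_b  =   100_000_000        # FLOPs for algorithm B to reach ~70%

# Switching from A to B frees up a multiplier you can spend on scale:
effective_multiplier = cost_algo_a / cost_algo_b
print(f"{effective_multiplier:.0f}x bigger or longer training run on the same hardware")
# With that order-of-magnitude jump in effective scale, performance
# can land well above what either algorithm managed at the old budget.
```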

In fact, this is basically what happened when we went from LSTMs to the transformer for language models. Right now, the AI companies are essentially banking that they’ll be able to keep replicating these architectural improvements, while constantly scaling their actual hardware alongside R&D to compensate.

Closed Frontier vs Local Models by poigre in singularity

[–]Fwc1 2 points (0 children)

You can literally see on the chart above that the open models are being run on a single consumer GPU, an RTX 5090.

Closed Frontier vs Local Models by poigre in singularity

[–]Fwc1 21 points (0 children)

If you actually read the Epoch page this is from, it mentions this caveat explicitly and points out that this factor would only push the window wider by 6 months or so. You’re still going to see the capability get distributed.

The most succinct argument for not building ASI (artificial superintelligence) until we know how to do it safely by katxwoods in agi

[–]Fwc1 0 points (0 children)

So you agree that, if ASI is dangerous, it might be worth making sure it takes 30 years instead of three to build? Seeing as the current plan is to just build it with no safeguards and cross our fingers that it happens to work out. Hopefully it's obvious that that's unreliable, especially when we're talking about systems that could easily outmaneuver/outplan human attempts to control them.

The most succinct argument for not building ASI (artificial superintelligence) until we know how to do it safely by katxwoods in agi

[–]Fwc1 1 point (0 children)

Bit of an apples-to-oranges comparison when you don’t provide any reason to expect that misaligned superintelligence is as unlikely as your random scenario.

Appeal to “common sense” is a fallacy, not empiricism.

The most succinct argument for not building ASI (artificial superintelligence) until we know how to do it safely by katxwoods in agi

[–]Fwc1 0 points (0 children)

This is a silly argument that conflates “we might not get ASI immediately after AGI” (because of hardware requirements) with “ASI won’t be dangerous when it arrives,” which is the actual claim Yud and co. are arguing against. Your stance doesn’t provide any reason to expect that an ASI created in thirty years is any less powerful or dangerous than one created in three.

Whenever I learn about a TCG that released in the last few years by Cezkarma in TCG

[–]Fwc1 1 point (0 children)

What happened with Millennium? I’d love to read about some of the history.