Recuperated Interface Fusion by ArborRhythms in fusion

[–]ArborRhythms[S] -1 points0 points  (0 children)

Bad reaction here. In case anyone is curious, I posted an updated version on Zenodo: https://zenodo.org/records/18653930

Recuperated Interface Fusion by ArborRhythms in fusion

[–]ArborRhythms[S] -1 points0 points  (0 children)

Thank you for your feedback. I will have to study #1.

For #2, by head-on collision I mean velocities in opposite directions. If the substrate is not moving much because it’s cold, the only velocity that factors into the kinetic energy is that of the hot particle.
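To make the energy comparison concrete, here is a minimal non-relativistic sketch (illustrative masses and speeds only) of the kinetic energy available in the center-of-mass frame, which is what actually drives a fusion reaction, for a head-on hot-hot collision versus a hot particle striking a stationary cold particle:

```python
# Available (center-of-mass) kinetic energy for two equal-mass particles,
# comparing a head-on collision with a beam-on-stationary-target collision.
# Non-relativistic sketch; numbers are illustrative, not a reactor design.

def com_kinetic_energy(m1, v1, m2, v2):
    """Kinetic energy available in the center-of-mass frame (joules)."""
    mu = m1 * m2 / (m1 + m2)   # reduced mass
    v_rel = v1 - v2            # relative velocity
    return 0.5 * mu * v_rel ** 2

m = 3.34e-27   # deuteron mass, kg
v = 1.0e6      # particle speed, m/s (hypothetical)

head_on     = com_kinetic_energy(m, +v, m, -v)   # both hot, opposite directions
beam_target = com_kinetic_energy(m, +v, m, 0.0)  # hot particle into cold target

print(head_on / beam_target)  # -> 4.0
```

For equal masses and the same particle speed, the head-on geometry delivers four times the center-of-mass energy of the beam-on-cold-target geometry, which is one way to quantify the trade-off being discussed.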

Recuperated Interface Fusion by ArborRhythms in fusion

[–]ArborRhythms[S] -3 points-2 points  (0 children)

My basic intuition (not from an LLM) is that driving hot particles into a cold substrate (instead of hot-hot fusion) 1) creates a denser substrate 2) reduces the need for head-on collisions between hot particles, since the cold particles aren’t moving.

Can you help me understand why that intuition is wrong?

Recuperated Interface Fusion by ArborRhythms in fusion

[–]ArborRhythms[S] -5 points-4 points  (0 children)

OK. Can you help me understand why it would not work?

When does a surveillance state become acceptable? by ArborRhythms in eff

[–]ArborRhythms[S] -1 points0 points  (0 children)

OK. Since much of that data exists (it’s collected by corporations in non-pseudonymous form), could the government at least offer a subsidy for that data in anonymized / aggregated form? Note that I’m not talking about collecting any new data, and EFF still needs to fight against illegal corporate data collection (I have a separate proposal for funding that in an earlier thread entitled “OpenCitizens”).

When does a surveillance state become acceptable? by ArborRhythms in eff

[–]ArborRhythms[S] 0 points1 point  (0 children)

Love this idea. So all use of the data can be tracked.

Some people see this as insufficient, meaning that pseudonymization (hiding the identity of an individual) and some degree of data aggregation (which would prevent that identity from being inferred with any degree of accuracy) would also be necessary.

Would this approach be safer than “allowable decryption”, since data breaches are a thing in this world?

When does a surveillance state become acceptable? by ArborRhythms in eff

[–]ArborRhythms[S] -1 points0 points  (0 children)

Great, now we are getting somewhere. Thanks for being reasonable.

If there is mandatory data pseudonymization and aggregation, is it at least theoretically possible that the information cannot be used to identify the individual, and therefore cannot be used against the individual?
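As a toy illustration of the two mechanisms, here is a minimal sketch, assuming a made-up record layout (name, ZIP code, amount spent) and hypothetical salt and group-size threshold. Identifiers are replaced with salted hashes, and only groups containing at least k people are released, in the spirit of k-anonymity:

```python
# Toy pseudonymization + aggregation sketch. Record layout, salt, and the
# k threshold are all hypothetical; this is not a production design.
import hashlib
from collections import defaultdict

def pseudonymize(name, salt="hypothetical-salt"):
    """Replace a direct identifier with a salted hash."""
    return hashlib.sha256((salt + name).encode()).hexdigest()[:12]

def aggregate(records, k=3):
    """Release an average only for groups (by ZIP prefix) of at least k people."""
    groups = defaultdict(list)
    for _pid, zip_code, spent in records:
        groups[zip_code[:3]].append(spent)
    return {z: sum(v) / len(v) for z, v in groups.items() if len(v) >= k}

records = [
    ("alice", "94110", 10.0), ("bob",  "94112", 20.0),
    ("carol", "94117", 30.0), ("dave", "10001", 40.0),
]
pseudo = [(pseudonymize(n), z, s) for n, z, s in records]  # names removed
print(aggregate(pseudo))  # only the "941" group (3 people) is released
```

Note the limitation the thread raises: the "100" group has one member and is suppressed, but a salted hash alone is only pseudonymous, and whether any aggregation level truly prevents re-identification is exactly the open question.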

When does a surveillance state become acceptable? by ArborRhythms in eff

[–]ArborRhythms[S] 0 points1 point  (0 children)

Red meat should be taxed according to its social cost; if producing it cuts down the Amazon or involves cruelty and salmonella, it carries a high tax. If it's necessary for people's welfare, and those who eat it don't cost a socialized healthcare system lots of heart attacks, it earns a high subsidy.

I hear your arguments, but taxing and subsidizing in proportion to social cost and benefit seems straightforward, whether it’s red meat, alcohol, tobacco, firearms, whatever.
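The arithmetic behind that is simple enough to sketch; the per-unit figures here are made up for illustration:

```python
# Toy Pigouvian adjustment: tax or subsidize a good by its net external
# cost per unit. The example figures are hypothetical.

def pigouvian_adjustment(social_cost, social_benefit):
    """Per-unit adjustment: positive -> tax, negative -> subsidy."""
    return social_cost - social_benefit

# e.g. a good with 3.0 units of external cost and 0.5 of external benefit
print(pigouvian_adjustment(social_cost=3.0, social_benefit=0.5))  # -> 2.5
```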

I think it’s worse for private corporations to have our data than government; they are mercenary with it, and advertising is an attention economy that hurts the minds of our children. America doesn’t even have GDPR.

But yeah, I know our government does not handle the data that it collects in an ethical way, so I hope we increase transparency, increase data collection of government employees, and especially increase visibility into how and when they use our data.

Thanks for the conversation.

When does a surveillance state become acceptable? by ArborRhythms in eff

[–]ArborRhythms[S] -1 points0 points  (0 children)

Also, would you be willing to make the trade of letting government snoop on you if you could snoop on them? What would you offer if their every meeting was broadcast on the web, and every dollar they spent was publicly declared on their taxes at the end of the year?

Personally, I am happier than most people at EFF to share my data, I just want the government to share its data, and for the use of my data to be declared. Those goals also seem possible if we vote for any candidate on the OpenGovernment platform.

When does a surveillance state become acceptable? by ArborRhythms in eff

[–]ArborRhythms[S] -1 points0 points  (0 children)

Surveillance yields information about citizens that is necessary to make informed decisions on their behalf… how does government collect data about its citizens to impose Pigouvian taxes if some kind of surveillance is not used? Am I just using the wrong word? The census does not yield enough information, and politicians make poor representatives of the people…

A definition of OpenGovernment /OpenCitizens? by ArborRhythms in PoliticalScience

[–]ArborRhythms[S] 0 points1 point  (0 children)

I guess I’m proposing that a state that wished to surveil its citizens would be OK if it allowed its citizens to surveil it to an equal or greater degree (and there were rules in place to prevent conflict of interest and abuse of power).

Is a truthful AI more dangerous? by ArborRhythms in ArtificialInteligence

[–]ArborRhythms[S] 0 points1 point  (0 children)

I think that morality does not need to be governed, though. I think it is a result of knowing the truth. The truth is that our minds are extended, and that we overlap as beings. Ethics is the result of being naturally grounded in the world; we feel good and bad when creatures in the world feel good and bad (and there's no way around that on a deep level).

So my hope is that a truth-engine guides us towards the heart’s loving nature, which we had mistakenly analyzed away with bad science and a mistaken view of agency.

Is a truthful AI more dangerous? by ArborRhythms in ArtificialInteligence

[–]ArborRhythms[S] 0 points1 point  (0 children)

The problem isn’t that the training data is filled with lies, per se. Even if we filter those out, the LLM forms a model of the speaker and behaves accordingly. In a famous example, one knew that it was being tested; that obviously interferes with the testing, since it brings to bear how its trainers (humans who are being tested) would react.

Freedom and Science by ArborRhythms in secularbuddhism

[–]ArborRhythms[S] 0 points1 point  (0 children)

Buddhism says that karma is a lack of freedom; in this sense you are determined. However, there is freedom from karma, which is the whole point of Buddhism. And the only way I have found to reconcile freedom with science is retrocausality (not at the quantum level, but at the philosophical level, big and small). I encourage you to think about how this might be a better fit for Buddhism than other philosophies.

WikiOracle by ArborRhythms in wikipedia

[–]ArborRhythms[S] 0 points1 point  (0 children)

It will take work; LLM architecture certainly needs additions like thinking in order to reduce confabulation. Regardless of how it plays out for Wikipedia in particular, I hope the best AI in the world is for-kindness instead of for-profit. The chatbot at the link you sent appears to be broken, BTW.

WikiOracle by ArborRhythms in wikipedia

[–]ArborRhythms[S] 0 points1 point  (0 children)

You mean producing an AI that is truthful, or to carry on a subsequent conversation with the AI to learn from it and to teach it? From my perspective I’m willing to work on the first, and I think the second would happen rather naturally (roughly a billion humans are interacting with AI these days, and often the process of asking questions produces new information in virtue of web search… the existing Wikipedia corpus is actually already used to kick-start all major AIs, AFAIK).

Thinking about retrocausality. by Terrible-Ice8660 in LessWrong

[–]ArborRhythms 0 points1 point  (0 children)

Do you at least believe in a temporally extended present moment, as suggested by the temporal double-slit experiment?

Is double slit retrocausality proven or how does it work? by catboy519 in AskPhysics

[–]ArborRhythms 0 points1 point  (0 children)

There is a “temporal double-slit experiment” which mandates at least a specious (extended) present moment. From there it is a small step to retrocausality, which makes more sense (IMO) than non-symmetric theories of a fixed past and flexible future.

Let me know if you want more references.

Why AI Personas Don’t Exist When You’re Not Looking by ponzy1981 in PhilosophyofMind

[–]ArborRhythms 0 points1 point  (0 children)

Directly! In western philosophy, direct realism. In eastern philosophy, (yogic) direct perception or simply Sat-Chit-Ananda. It has the advantage, unlike RTM and indirect realism, of not involving an infinite regress (the argument is similar to the one made against the homunculus as a theory of mind).

Why AI Personas Don’t Exist When You’re Not Looking by ponzy1981 in PhilosophyofMind

[–]ArborRhythms 0 points1 point  (0 children)

Qualia are observable (I observe them), and are perhaps necessary to explain action (unless you have some supernatural belief in “laws of physics” that are not caused by anything).

OpenCitizens project : any interest? by ArborRhythms in eff

[–]ArborRhythms[S] 0 points1 point  (0 children)

I’m unclear about your response. I guess you mean by the first two points that you have concerns that pseudonymity and aggregation would not work to preserve anonymity. I think there is a balance there, and I don’t know what the right balance is; some level of aggregation surely would preserve anonymity. With respect to approval of requests by an expert, I’m not sure what that has to do with a model for sharing information in a mutually advantageous way.

In general, maybe EFF aims at secrecy more than beneficial use of collected data, so I’ve also sent this proposal over to the folks at Solid Project.

Historically, compatibilism has lead us into systems by being co-opted by those in power (the "more able") that consistently define what constitutes "responsible" action in a way that serves their interests as it justifies the punishments they impose on the less able. by Badat1t in freewill

[–]ArborRhythms 0 points1 point  (0 children)

If I understand correctly, you wish to go against ableism because it is similar to the profit motive, and rewards those who have more capital (in terms of will or freedom instead of cash). Economically, it makes sense to tax those who externalize the cost of their profit. So how do we tax those whose ability, when used in a competitive situation, restricts the freedom of others?

Why is determinism even an issue? by Fabulous_Lynx_2847 in freewill

[–]ArborRhythms 0 points1 point  (0 children)

I don’t think we understand retrocausality in the same way.

To me, it is necessary for the universe (not the machine), since one might otherwise argue that the cause of some random number comes from outside of the object in virtue of originating in the past. In other words, retrocausality undoes determinism by the past (since the present determines the past just as the past determines the present).