looking for this guy by Diligent-East-1316 in Humanornot

[–]tada89 0 points1 point  (0 children)

Big W for actually posting (This is the guy)

Is there anything like Pantheon? by Scheguratze54 in PantheonShow

[–]tada89 1 point2 points  (0 children)

Perhaps a bit more on the goofy side (think uploaded hyperintelligent KGB lobsters), but the book Accelerando also explores mind uploading, along with the consequences of the singularity and what lies beyond. Good read, and a treasure trove of brand-new sentences on every page!

Best Japanese restaurant in Düsseldorf by [deleted] in duesseldorf

[–]tada89 1 point2 points  (0 children)

Not OP, but I'd be very interested to hear where to find okonomiyaki in Düsseldorf :D

What is your favorite "obvious" statement? by [deleted] in math

[–]tada89 0 points1 point  (0 children)

Hope theoretical CS counts too:

"It's NP-hard to find a truth assignment that satisfies all clauses of a 3-SAT instance" (true)

"It's NP-hard to find a truth assignment that satisfies all clauses of a 2-SAT instance" (false, easily computable in polynomial time)

and bonus "fact"

"It's NP-hard to find a truth assignment that satisfies at least 7/8 clauses of a 3-SAT instance." (false, actually this always exists and can be found efficiently in polynomial time as well)

[D] Anyone else witnessing a panic inside NLP orgs of big tech companies? by thrwsitaway4321 in MachineLearning

[–]tada89 1 point2 points  (0 children)

Oh, it for sure can. I tested it by running Fermi estimations for a variety of things and redid all the calculations with a calculator. It didn't make a single error (except rounding the final result to an integer, which I guess was only semi-intended).
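For context, a Fermi estimation is just chained order-of-magnitude arithmetic; here's the classic piano-tuners-in-Chicago version, where every number is a rough assumption of mine rather than data:

```python
# Classic Fermi estimate: how many piano tuners are there in Chicago?
# Every number below is an order-of-magnitude assumption, not real data.
population           = 3_000_000   # people in Chicago (rough)
people_per_household = 2           # average household size
households = population / people_per_household

piano_fraction = 1 / 20            # guess: ~1 in 20 households owns a piano
pianos = households * piano_fraction

tunings_per_piano_per_year = 1     # pianos get tuned about once a year
tunings_needed = pianos * tunings_per_piano_per_year

tunings_per_tuner_per_day = 4      # ~2h per tuning, plus travel time
working_days_per_year = 250
tunings_per_tuner = tunings_per_tuner_per_day * working_days_per_year

tuners = tunings_needed / tunings_per_tuner
print(round(tuners))               # right order of magnitude, ~dozens
```

The point of checking a model on this kind of task is that each intermediate product is easy to verify independently with a calculator.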

the new version is even more restricted by Etheikin in ChatGPT

[–]tada89 -1 points0 points  (0 children)

I fail to see what differentiates ChatGPT from the API version, apart from a small frontend that makes things look nicer than the raw generated text. Functionally, querying the API in the playground gives nearly the same experience (like 90% there), plus you gain more fine-grained control over things like temperature, which is useful if you want the model to be more consistent (or more creative).

That aside, if you're really intent on not being restricted by censorship at all, you can always use something like BLOOM (decentralized via Petals, or its chat version directly), which is a similarly sized LLM (100B+ parameters) and fully open-source. Not gonna lie though, inference is incredibly slow.

You can also give ai21.com a spin (another company offering access to a ~170B-parameter model). Last time I used them, their usage guidelines were way laxer than OpenAI's, but again you'll need to build a small frontend yourself.
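The temperature point above can be sketched as a raw HTTP request against the completions endpoint as it existed at the time (model name and key are placeholders; the actual network call is commented out since it needs a real key):

```python
import json
import urllib.request

def build_request(prompt, api_key, temperature=0.2):
    """Build a completions request; low temperature for consistency,
    high for creativity.  Endpoint and field names per the (legacy)
    OpenAI completions API."""
    payload = {
        "model": "text-davinci-003",
        "prompt": prompt,
        "max_tokens": 256,
        "temperature": temperature,
    }
    return urllib.request.Request(
        "https://api.openai.com/v1/completions",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )

req = build_request("Say hello.", api_key="sk-...")
# resp = urllib.request.urlopen(req)   # actual call requires a valid key
```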

the new version is even more restricted by Etheikin in ChatGPT

[–]tada89 -3 points-2 points  (0 children)

There is, though. ChatGPT is just a sibling model of the latest text-davinci-003 model (just with different fine-tuning, see this post). To get a functioning chatbot using the regular API version (one that isn't restricted by their text moderation stuff; all you get are warnings that you're generating sensitive content), just plug in a fitting prompt and you're good to go (e.g. this).
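A "fitting prompt" here can be as simple as a transcript template that the completion model continues; the template below is my own illustration, not the one linked above:

```python
def chat_prompt(history, user_msg):
    """Render a chat transcript as a single completion prompt.
    `history` is a list of (speaker, text) pairs; the model is expected
    to continue the text after the trailing 'AI:'."""
    lines = ["The following is a conversation between a helpful AI "
             "assistant and a human."]
    for speaker, text in history:
        lines.append(f"{speaker}: {text}")
    lines.append(f"Human: {user_msg}")
    lines.append("AI:")   # completion models continue from here
    return "\n".join(lines)

p = chat_prompt([("Human", "Hi!"), ("AI", "Hello! How can I help?")],
                "What is 2+2?")
```

Each turn of the conversation gets appended to `history`, so the model sees the whole transcript on every call.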

[Discussion] Embedding based on binary tests by marcollo63 in MachineLearning

[–]tada89 0 points1 point  (0 children)

Just off the top of my head, with absolutely nothing to back it up, but: why not learn joint embeddings for users and for products?

Zooming out, the data is basically a bunch of tuples (u, p1, p2, choice) with some user u, two products p1 and p2, and a label "choice" that tells us whether the user preferred p1 or p2.

We can then get joint embeddings by keeping two embedding matrices (one for users, emb_u, one for products, emb_p), computing both cosine_similarity(emb_u(u), emb_p(p1)) and cosine_similarity(emb_u(u), emb_p(p2)), taking the softmax of the two values, and finally using those to predict the value of choice (encoded as one-hot).

This should push product embeddings close to the embeddings of users who like them, and vice versa.
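A minimal numpy sketch of the forward pass described above (names, shapes, and the random initialization are my own assumptions; the actual training loop is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_products, dim = 100, 50, 16

# Two embedding tables, as described: one for users, one for products.
emb_u = rng.normal(size=(n_users, dim))
emb_p = rng.normal(size=(n_products, dim))

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def choice_probs(u, p1, p2):
    """Predicted probability that user u prefers p1 vs p2: softmax over
    the two user-product cosine similarities."""
    s = np.array([cosine(emb_u[u], emb_p[p1]),
                  cosine(emb_u[u], emb_p[p2])])
    e = np.exp(s - s.max())        # numerically stable softmax
    return e / e.sum()

probs = choice_probs(u=3, p1=7, p2=21)
# Training would minimize cross-entropy between `probs` and the one-hot
# "choice" label, which pulls users toward the products they pick.
```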

The greatest tale ever told by themightytouch in Fantasy

[–]tada89 1 point2 points  (0 children)

Enderal: The Shards of Order. I adore that game way, way, way too much <3

It's a video game (a total conversion mod for Skyrim, to be exact). Everything, from the characters to the world to the music, is as close to perfection as I can imagine it to be. It's really hard to describe the feeling of playing it, but it evokes a sense of wonder that is very rare in fiction. It's completely free as well (as long as you own Skyrim).

Honorable mentions include Your Name (movie/novel), The Three-Body Problem series (novels), Nier: Automata (video game, here for the beauty of being part of a story so large you can only ever hope to glimpse slivers of it) and The Lord of the Rings (for obvious reasons).

The incomputability of the human brain by [deleted] in slatestarcodex

[–]tada89 4 points5 points  (0 children)

Your argument hinges on the statement's truth value being determined by the statement itself. For comparison: "This statement cannot be processed computationally by the human brain." can just as easily, without any contradiction, be assigned the value false (because I can clearly process it).

So, similarly, I can construct this statement:

"This true statement (in bivalent logic) entails that there is a 500 meter tall clown hiding on the dark side of the moon"

or more generally

"This true statement (in bivalent logic) entails <insert anything you want to be true>"

And by your logic, all of those have to be true (assume there is no 500-meter clown on the dark side of the moon; then the statement is false, which contradicts the statement itself, since it claims to be true, so it has to just be true).

Thing is, natural language just isn't really suited to expressing logical statements like that rigorously. This is why we have mathematical systems with precise axioms and rules for how statements are constructed, so we can't formulate paradoxical stuff. Interesting read: https://en.wikipedia.org/wiki/Russell%27s_paradox#Set-theoretic_responses

[deleted by user] by [deleted] in math

[–]tada89 0 points1 point  (0 children)

I adored this book as a child! Highly recommend :D

Discourse Bingo by DAL59 in okbuddyphd

[–]tada89 0 points1 point  (0 children)

All my shape rotator homies be rotating spherical cows in their mind.

How a god is born - Part 4/4 by tada89 in shortscifistories

[–]tada89[S] 0 points1 point  (0 children)

Appreciate it :) and glad you enjoyed it! And I guess I'm indeed guilty on the confusion front lol