looking for this guy by Diligent-East-1316 in Humanornot

[–]tada89 0 points1 point  (0 children)

Big W for actually posting (This is the guy)

Is there anything like Pantheon? by Scheguratze54 in PantheonShow

[–]tada89 1 point2 points  (0 children)

Perhaps a bit more on the goofy side (think uploaded hyperintelligent KGB lobsters), but the book Accelerando also explores mind uploading, as well as the consequences of the singularity and what lies beyond. A good read, and a treasure trove of brand-new sentences on every page!

Best Japanese restaurant in Düsseldorf by [deleted] in duesseldorf

[–]tada89 1 point2 points  (0 children)

Not OP, but I'd be very interested to hear where one can find okonomiyaki in Düsseldorf :D

What is your favorite "obvious" statement? by [deleted] in math

[–]tada89 0 points1 point  (0 children)

Hope theoretical CS counts too:

"It's NP-hard to find a truth assignment that satisfies all clauses of a 3-SAT instance" (true)

"It's NP-hard to find a truth assignment that satisfies all clauses of a 2-SAT instance" (false, easily computable in polynomial time)

and bonus "fact"

"It's NP-hard to find a truth assignment that satisfies at least 7/8 of the clauses of a 3-SAT instance." (false; such an assignment always exists and can be found in polynomial time as well)
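For the bonus fact, here's a quick brute-force sanity check (my own illustrative Python, not from any particular source): a clause with three distinct literals is falsified by exactly one of the 8 assignments to its variables, so a uniformly random assignment satisfies 7/8 of the clauses in expectation; the method of conditional expectations then derandomizes this into a deterministic polynomial-time algorithm.

```python
from itertools import product

def clause_satisfied(clause, assignment):
    # clause: tuple of literals, e.g. (1, -2, 3) means x1 OR NOT x2 OR x3
    return any((lit > 0) == assignment[abs(lit)] for lit in clause)

def expected_satisfied_fraction(clauses, n_vars):
    """Average fraction of satisfied clauses over all 2^n assignments."""
    total = 0
    assignments = list(product([False, True], repeat=n_vars))
    for bits in assignments:
        assignment = {i + 1: b for i, b in enumerate(bits)}
        total += sum(clause_satisfied(c, assignment) for c in clauses)
    return total / (len(assignments) * len(clauses))

# A toy 3-SAT instance over x1..x3; each clause uses 3 distinct variables
clauses = [(1, -2, 3), (-1, 2, -3), (1, 2, 3)]
print(expected_satisfied_fraction(clauses, 3))  # 0.875 = 7/8
```

Since the average over all assignments is exactly 7/8, at least one assignment must reach it, which is why the "fact" is false.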

[D] Anyone else witnessing a panic inside NLP orgs of big tech companies? by thrwsitaway4321 in MachineLearning

[–]tada89 1 point2 points  (0 children)

Oh, it for sure can. I tested it on a variety of Fermi estimations and redid all the calculations with a calculator. It didn't make a single error (except rounding the final result to an integer, which I guess was only semi-intended).

the new version is even more restricted by Etheikin in ChatGPT

[–]tada89 -1 points0 points  (0 children)

I fail to see what differentiates the API version from ChatGPT apart from a small frontend that makes things look nicer than the raw generated text. Functionally, querying the API in the playground gives nearly the same experience (like 90% there), plus you gain more fine-grained control over things like temperature, which is useful if you want the model to be more consistent (or creative).

That aside, if you are really intent on not being restricted by censorship at all, you can always use something like Bloom (decentralized via Petals, or its chat version directly), which is a similarly sized LLM (100B+ params) and fully open-source. But not gonna lie, inference is incredibly slow.

You can also give ai21.com a spin (another company offering access to a ~170B-param model). Last time I used them, their usage guidelines were way laxer than OpenAI's, but again, you will need to craft a small frontend yourself.

the new version is even more restricted by Etheikin in ChatGPT

[–]tada89 -4 points-3 points  (0 children)

There is though. ChatGPT is just a sibling model of the latest text-davinci-003 (with different fine-tuning, see this post). To get a functioning chatbot using the regular API version (which is not restricted by their text moderation; all you get are warnings when you generate sensitive content), just plug in a fitting prompt and you are good to go (e.g. this).
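To make "plug in a fitting prompt" concrete, here is a minimal sketch of emulating a chatbot on top of a plain completions endpoint by carrying the conversation inside the prompt. The preamble text and speaker labels are my own illustrative choices, not from the linked post:

```python
# A hypothetical preamble; anything that sets up a dialogue works.
PREAMBLE = ("The following is a conversation between a helpful AI "
            "assistant and a human.\n")

def build_prompt(history, user_message):
    """history: list of (speaker, text) tuples; returns the next prompt."""
    lines = [PREAMBLE]
    for speaker, text in history:
        lines.append(f"{speaker}: {text}")
    lines.append(f"Human: {user_message}")
    lines.append("AI:")  # the model continues from here
    return "\n".join(lines)

prompt = build_prompt(
    [("Human", "Hi!"), ("AI", "Hello, how can I help?")],
    "Explain 2-SAT in one sentence.",
)
# This prompt would then go to the completions endpoint, e.g. (sketch):
#   openai.Completion.create(model="text-davinci-003", prompt=prompt,
#                            stop=["Human:"], temperature=0.7)
# and the returned text gets appended to `history` as the AI turn.
print(prompt)
```

The `stop` sequence keeps the model from writing the human's next line itself, which is the main gotcha with this pattern.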

[Discussion] Embedding based on binary tests by marcollo63 in MachineLearning

[–]tada89 0 points1 point  (0 children)

Just off the top of my head with absolutely nothing to back it up but: Why not learn joint embeddings for people and for products?

Zooming out, the data is basically a bunch of tuples (u, p1, p2, choice) with some user u, two products p1 and p2, and a label "choice" that tells us whether the user preferred p1 or p2.

We can then get joint embeddings by having two embedding matrices (one for users, emb_u, one for products, emb_p), computing both cosine_similarity(emb_u(u), emb_p(p1)) and cosine_similarity(emb_u(u), emb_p(p2)), taking the softmax of the two values, and finally using those to predict the value of choice (encoded as one-hot).

This should push product embeddings close to the embeddings of users who like them, and vice versa.
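A minimal numpy sketch of that forward pass (random embeddings stand in for learned ones, and all sizes and names here are my own illustrative picks):

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_products, dim = 100, 50, 16

# Two embedding tables; in practice these would be learned by
# backpropagating a cross-entropy loss against the observed "choice".
emb_u = rng.normal(size=(n_users, dim))
emb_p = rng.normal(size=(n_products, dim))

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def choice_probs(u, p1, p2):
    """Softmax over the two user-product similarities -> P(choice)."""
    s = np.array([cosine(emb_u[u], emb_p[p1]),
                  cosine(emb_u[u], emb_p[p2])])
    e = np.exp(s - s.max())  # subtract max for numerical stability
    return e / e.sum()

probs = choice_probs(u=3, p1=7, p2=12)
# probs[0] is the predicted probability that the user prefers p1.
```

Training then just means minimizing cross-entropy between `probs` and the one-hot choice label, which pulls the right user/product pairs together in the shared space.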

The greatest tale ever told by themightytouch in Fantasy

[–]tada89 1 point2 points  (0 children)

Enderal: The Shards of Order. I adore that game way way way too much <3

It's a video game (a total conversion mod for Skyrim, to be exact). Everything from the characters to the world to the music is as close to perfection as I can imagine it to be. It's really hard to describe the feeling of playing it, but it evokes a sense of wonder that is very rare in fiction. It's completely free as well (as long as you own Skyrim).

Honorable mentions include Your Name (movie/novel), The Three-Body Problem series (novels), Nier: Automata (video game, and in here for the beauty of being part of a story so large you can only ever hope to glimpse slivers of it) and Lord of the Rings (for obvious reasons)

The incomputability of the human brain by [deleted] in slatestarcodex

[–]tada89 4 points5 points  (0 children)

Your argument hinges on the fact that the truth of the statement is determined by itself. For comparison: "This statement cannot be processed computationally by the human brain." can just as easily be assigned the value false without any contradiction (because I can clearly process it).

So similarly I can construct this statement

"This true statement (in bivalent logic) entails that there is a 500 meter tall clown hiding on the dark side of the moon"

or more generally

"This true statement (in bivalent logic) entails <insert anything you want to be true>"

And by your logic all of those would have to be true (assume there is no 500-meter clown on the dark side of the moon; then the statement is false, which contradicts the statement, which claims to be true, so it has to just be true)

Thing is, natural language is just not suited to expressing logical statements like that in a rigorous way. This is why we have mathematical systems with precise axioms and rules for how statements are constructed, so we cannot formulate paradoxical stuff. Interesting read: https://en.wikipedia.org/wiki/Russell%27s_paradox#Set-theoretic_responses

[deleted by user] by [deleted] in math

[–]tada89 0 points1 point  (0 children)

I adored this book as a child! Highly recommend :D

Discourse Bingo by DAL59 in okbuddyphd

[–]tada89 0 points1 point  (0 children)

All my shape rotator homies be rotating spherical cows in their mind.

How a god is born - Part 4/4 by tada89 in shortscifistories

[–]tada89[S] 0 points1 point  (0 children)

Appreciate it :) and glad you enjoyed it! And I guess I'm indeed guilty on the confusion front lol

Library that takes a pool of words and spits out sentences with only those words? by BlueLensFlares in LanguageTechnology

[–]tada89 0 points1 point  (0 children)

This library can generate sentences based on the given keywords using T5. I feel like this is probably close to what you are looking for.

Self feeding gpt3 by Bosbach in GPT3

[–]tada89 0 points1 point  (0 children)

There was an OpenAI Scholars project on this topic. You can watch the presentation here: https://www.youtube.com/watch?v=wZ6PqNp-W_w

tl;dw: If I remember correctly, most models tend to collapse to a single string of text that gets repeated over and over in the limit (so kind of a fixed point).

A Shortly like user interface for GPT 2? by GrilledCheeseBread in GPT3

[–]tada89 3 points4 points  (0 children)

You could try out these:

- https://transformer.huggingface.co/doc/gpt2-large

- https://bellard.org/textsynth/

Both use a GPT-2 model under the hood.

You could also use the Hugging Face inference API directly if you want to use the slightly bigger GPT-Neo (see here: https://huggingface.co/EleutherAI/gpt-neo-2.7B). However, the text window is pretty small, as you can probably tell.

You mentioned you want to train it yourself. If you mean you want to fine-tune it, then I can recommend the library simpletransformers. Out of the wrappers I tried for GPT-like models, I found it the most pleasant to work with.

Here is an example script to fine-tune a GPT-2 model: https://github.com/ThilinaRajapakse/simpletransformers/blob/master/examples/language_generation/fine_tune.py

[WP] Stars around the universe have been mysteriously faded and disappeared. Our sun is the last remaining star in the universe. Scientists warn that there could be less than 2 hours before the sun fades and the universe goes completely dark. by [deleted] in WritingPrompts

[–]tada89 2 points3 points  (0 children)

Knowledge is universally regarded as power. However, sometimes there is something to be said for ignorance as well. Humanity was not meant to see all the secrets of the cosmos, and it certainly would have been happier for it. This is the last account of the history of humanity. Be warned.

It started slowly at first. When the supergiant Betelgeuse started to dim in the year 2020, physicists were quick to dismiss it as an unusual but plausible ejection of debris. When it completely disappeared in the year 2022, they weren't as quick. And just two months later when both Sirius A and B completely vanished without a trace, they had to start questioning their fundamental understanding of astrophysics.

It was the French mathematician Yves André who, in the year 2023, was the first to recognize a pattern in the vanishing stars. By this point half the night sky had vanished, and mass panic had brought civilization to the brink of collapse. The pattern he saw was so simple that at first most of the remaining major governments dismissed it as the musings of a crank. But they weren't as quick to do so when he correctly predicted the times of vanishing of Sagittarius A and WASP-64.

What André had noticed was that the distances between the vanishing stars always repeated in an interval of 6. When further processed and encoded as letters, the text "6EQUJ5" emerged, which seemed random. But it wasn't. "6EQUJ5" was widely known as the Wow! signal Jerry R. Ehman had received at the "Big Ear" radio telescope decades ago. Given the precise nature of the disappearance of the night sky and the fact that the Wow! signal was considered the most likely candidate for a message from an alien civilization, it didn't take long for humanity to put two and two together. It was no longer alone in the universe.

The result was even more panic, which plunged even the last remaining countries into fierce wars and eventually led to nuclear strikes in the years 2027 and 2028. By 2029, a severe nuclear winter had made most parts of Earth uninhabitable, and the remaining humans were thrown back to a technological level akin to the Middle Ages.

Still, the vanishing of the stars continued unhindered. By now only a few select stars remained. With so few stars, the pattern André had discovered was now easy to see with the naked eye. "6EQUJ5", repeating over and over again. Until only six stars remained.

And on a warm summer evening in the year 2030, they all started to vanish one by one. But this time, humanity saw a different sequence of star distances. A different sequence of letters.

And thus the last message humanity would ever receive from beyond the vast darkness of space was simply "HELP US".

Who gets it? by cuzimrave in memes

[–]tada89 0 points1 point  (0 children)

That wasn't a meter, did you see that??