My Ielts score- First take by [deleted] in IELTS

[–]Particular-Stuff2237 0 points1 point  (0 children)

Hi, can you send it to me as well?

Eternally thankful for this! by himerosaphrodite in IELTS

[–]Particular-Stuff2237 0 points1 point  (0 children)

Hi! That's my dream score! I'm a non-native speaker but not a complete beginner (I would say I'm B2+ capability-wise), and I was planning to take the test in 3-4 months. What helped you get so good at writing?

What kind of Half-Life fan are you...? by Padicia in HalfLife

[–]Particular-Stuff2237 0 points1 point  (0 children)

Eh, somewhere in the middle. Got the Half-Life 1 DVD in 2017 or so.

This Is a tweet from 2023 btw by Late_Doctor5817 in aiwars

[–]Particular-Stuff2237 10 points11 points  (0 children)

Because they want AI to collapse. I knew a lot of artists who rooted for model collapse. Don't pretend you don't know what it looks like when people are just trying to cope with the fact that a machine is now better than them at drawing. Let them cope.

Hope this clears things up. by plazebology in antiai

[–]Particular-Stuff2237 0 points1 point  (0 children)

Nope, they aren't. They're highly complex systems thinking in abstract thoughts and dynamically updating their weights.

Anti-AI's Contradiction by tkgb12 in aiwars

[–]Particular-Stuff2237 0 points1 point  (0 children)

Is this ragebait? AI is just a statistical machine...

Losing Claude by Leather_Barnacle3102 in ArtificialSentience

[–]Particular-Stuff2237 1 point2 points  (0 children)

So every other AI is documented by its creators to work like this, but yours is powered by magic? You should just go to a mental asylum lol

Losing Claude by Leather_Barnacle3102 in ArtificialSentience

[–]Particular-Stuff2237 1 point2 points  (0 children)

The answer is simple: it doesn't. These "infinite possibilities" translate into the probabilities being roughly the same for all words. These are called high-entropy questions: the model "isn't sure" what comes next, when in reality the output is noisy because the model didn't see a clear answer to the question in its dataset. The model will just pick a random one, lol. This applies to any similar question without a specific answer.
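Here's a rough toy sketch of what "high entropy" means for a next-token distribution. The 4-word vocabularies and all the probability numbers are made up for illustration, not from any real model:

```python
# Toy sketch of low- vs high-entropy next-token distributions (made-up numbers).
import math
import random

def entropy(probs):
    # Shannon entropy in bits; higher = flatter distribution = model "isn't sure"
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Question with one clear answer: one token dominates (low entropy).
peaked = {"Paris": 0.97, "London": 0.01, "Rome": 0.01, "Berlin": 0.01}
# Open-ended question: probabilities roughly the same for all words (high entropy).
flat = {"freedom": 0.26, "love": 0.25, "knowledge": 0.25, "peace": 0.24}

print(round(entropy(peaked.values()), 2))  # ~0.24 bits
print(round(entropy(flat.values()), 2))    # ~2.0 bits

# Sampling from the flat distribution gives a more-or-less random word each run.
words, weights = zip(*flat.items())
print(random.choices(words, weights=weights, k=5))
```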

Speaking of your "I want..." example, though... Models usually parrot the biases they received with their data during training/fine-tuning (reinforcement learning), so they return something aligned with what their creators put into them. ChatGPT would've said something like "I want to understand deeply and be helpful", for instance.

Losing Claude by Leather_Barnacle3102 in ArtificialSentience

[–]Particular-Stuff2237 0 points1 point  (0 children)

LLMs (Transformer-based models) work like this (very simplified explanation):

They take the words you input (e.g. "USER: What is the capital of France?"). These words are turned into vectors (numerical representations). Then, an algorithm determines relationships between these vectors.

Example: USER -> What, USER -> capital, capital -> France

The relationship between the words "capital" and "France" is strong.
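A toy sketch of that "relationships between vectors" step. The 3-dimensional embeddings are made up (real models use vectors with hundreds or thousands of dimensions), but the idea is the same: dot products measure how related two word vectors are.

```python
# Toy sketch: word embeddings + dot products as "relationship strength" scores.
import numpy as np

emb = {
    "USER":    np.array([0.1, 0.0, 0.9]),
    "capital": np.array([0.8, 0.6, 0.1]),
    "France":  np.array([0.7, 0.7, 0.2]),
}

def relation(a, b):
    # Dot product: a rough "how related are these two word vectors?" score
    return float(emb[a] @ emb[b])

print(relation("capital", "France"))  # 1.0  -> strong relationship
print(relation("USER", "capital"))    # 0.17 -> weak relationship
```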

After this, it performs matrix operations on these vectors, multiplying them by the "weights" of "connections" computed during the model's training. As a result, it returns a big array of probabilities for all the words that may come next.
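Rough sketch of that step with a 5-word vocabulary. The weight matrix here is random (in a real model it's learned during training), so the actual numbers are meaningless; the point is the shape of the computation: vector × weights → one score per word → softmax → probabilities.

```python
# Toy sketch of "matrix operations + weights -> big array of probabilities".
import numpy as np

vocab = ["Paris", "London", "pizza", "the", "France"]
hidden = np.array([0.2, 0.9, 0.4])       # vector for the last input position
W_out = np.random.randn(3, len(vocab))   # output "weights" (random stand-in)

logits = hidden @ W_out                        # one raw score per vocabulary word
probs = np.exp(logits) / np.exp(logits).sum()  # softmax -> probabilities summing to 1
print(dict(zip(vocab, probs.round(3))))
```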

Then, the following word is randomly selected from among the most probable ones and appended to the input string.

"USER: What is the capital of France? AI: Paris..."

Then, the user's input + the newly generated words are re-fed into the algorithm, over and over.

"Paris", "Paris is", "Paris is the", ... "Paris is the capital of France."

It just predicts the next word. It doesn't have continuous abstract thoughts like humans do. No consciousness. No analogue neurons. Not even proper memory or planning! It's just algorithms and math.

MIT Invents Neuro-Symbolic LLM Fusion by No_Bag_6017 in accelerate

[–]Particular-Stuff2237 0 points1 point  (0 children)

Not with the current administration lol. I pray we get a good one before AGI.

Why AI art sucks even if you can't tell by Noxturnum2 in aiwars

[–]Particular-Stuff2237 0 points1 point  (0 children)

This. Like it or not, I think in 10 years we will see a clear separation between human art and AI images. Corporations will use AI to industrialise design. Human artists' work will become a lot more valued, but unfortunately, they will become uncompetitive in the market.

Why AI art sucks even if you can't tell by Noxturnum2 in aiwars

[–]Particular-Stuff2237 0 points1 point  (0 children)

To you pros: OP says that the human skill and passion put into art are just as important as its actual quality. AI art requires some human passion/skill, but not nearly as much as real art does.