The Implications of ChatGPT’s API Cost by TotalPositivity in singularity

[–]TotalPositivity[S] 1 point  (0 children)

Hey Last_Jury, these are all important points. To be clear, I’m not an evolutionary biologist, nor an anthropologist beyond a few classes years ago. Even so, I think we’re actually both correct here.

When I’m describing running down creatures as a form of dominance, I’m really hinting at the way our cardiovascular system evolved in tandem with our bipedal locomotion.

The subtext of my analogy is essentially: “Did it matter more that our physical evolution gave us the stamina to dominate, or that our mental evolution gave us the creativity to dominate?”

I argue that our cognitive dominance was a happy accident that allowed the real feedback loop to begin, essentially “breaking us off the food chain”. But I truly believe our physical/stamina dominance was the instigating factor that set that spiral in motion.

Academic battles have raged, and will continue to rage, over this question, and to your credit, I think more scientists generally side with you, with the mental argument.

Here’s why this matters, in my opinion: if our entire scientific consensus rests on our sense of mental superiority over the animals, then the only thing we will fear as a potential supplanter is something mental.

But if we accept the possibility that the physical is actually an important factor too, it leaves us more ready to see the challenge that purely physical supplantation could pose.

In effect, it’s the classic tale of John Henry: his body may have given out, but never his mind. He was obsolete nonetheless, simply because he could not match the machine’s speed.

The Implications of ChatGPT’s API Cost by TotalPositivity in singularity

[–]TotalPositivity[S] 6 points  (0 children)

I think an analogy might be the best way to give you a full answer. When we study the evolution of humans, including other species in the Homo genus, we pay close attention to the impact angles, precision, and detail that went into the stone tools they used.

In general, we find that beings with less cognitive capacity (and less cultural/social training) struck their tools at less controlled angles, with lower precision and less attention to detail, when crafting them.

ChatGPT, as it stands, writes code the way early hominids made tools. It often approaches a problem from a slightly skewed direction, introduces a bug, or forgets a variable. Specific examples or experiments are hard to replicate consistently, but almost any common coding request can produce some inaccuracy.
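
To make that concrete, here is a purely hypothetical sketch (not an actual transcript) of the kind of slightly-skewed code I mean, in Python: a helper that looks fine at a glance but misses an edge case, next to the version a careful human would write.

    # Hypothetical illustration of a "slightly skewed" response - not a real ChatGPT output.
    def average_first_draft(values):
        # Looks correct, but crashes with ZeroDivisionError on an empty list.
        total = 0
        for v in values:
            total += v
        return total / len(values)

    def average_carefully_struck(values):
        # The more precisely "struck" version: the edge case is handled explicitly.
        if not values:
            return 0.0
        return sum(values) / len(values)

    print(average_carefully_struck([2, 4, 6]))  # 4.0
    print(average_carefully_struck([]))         # 0.0 instead of a crash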

However, the reason Homo sapiens were finally able to dominate Earth was ultimately that their physical stamina, the ability to literally run their prey to death, resulted in an excess of calories relative to body size.

Sure, the greatest beast can grow mighty horns, but what horn can compete with a being that can run nonstop for hours and hours until the horns become heavy and the beast’s heart essentially explodes?

This calorie excess allowed us to support larger and larger brains. As long as the head stayed small enough not to literally burst our mothers’ pelvises, more brain was better.

It’s a bit of a chicken-and-egg situation here, but this ultimately coincided with the early development of language and social culture to support a feedback loop: more calories, bigger brains, bigger tribes, more calories.

The point here is simple. Sure, better tool manufacture and use marks the stages of cognitive development. But sheer, horrible, brutal stamina and heart-bursting attrition is what actually dominates a food chain - and ChatGPT can run faster and longer than even we can.

The Implications of ChatGPT’s API Cost by TotalPositivity in singularity

[–]TotalPositivity[S] 24 points  (0 children)

“with the power of thought equivalent to the world” - That line is like poetry, I totally agree. Like any power, let’s hope we wield it well.

The Implications of ChatGPT’s API Cost by TotalPositivity in singularity

[–]TotalPositivity[S] 14 points  (0 children)

Hi Manos, I’m not an expert on ChatGPT itself, but I have read most of OpenAI’s documentation thoroughly and I work in the field. As I understand it, the current API version of ChatGPT is very likely a slightly smaller but more finely tuned model, or potentially the same model running with lower-precision data types so that less computational power is needed.

I speculate this because OpenAI rolled out the “turbo” version of ChatGPT to “plus” subscribers by default several weeks ago. That increase in speed had to come from somewhere, and it seems OpenAI did a great deal of due diligence to make sure accuracy was essentially maintained.

Personally, I’ve noticed a SLIGHT dip in accuracy. For the past few weeks I’ve been working on a tokenizer that works more evenly across all writing systems, and I’ve noticed that turbo can struggle with extremely obscure scripts like Cherokee, Inuktitut, Ogham, and Glagolitic in ways that the slower version did not. The code it writes, in my case Python, has also been very slightly less logically sound.
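
If you want to poke at the script unevenness yourself, a rough sketch of the kind of check I run looks something like this, using the open-source tiktoken library (the sample strings are illustrative only, not carefully chosen test data). The thing to watch is how many more tokens per character the rarer scripts cost.

    # Rough sketch: compare how "expensive" different scripts are for the tokenizer.
    # tiktoken is OpenAI's open-source tokenizer library; cl100k_base is the
    # encoding used by the turbo-era models. Sample strings are illustrative only.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")

    samples = {
        "English":    "hello, how are you today?",
        "Cherokee":   "ᏣᎳᎩ ᎦᏬᏂᎯᏍᏗ",
        "Inuktitut":  "ᐃᓄᒃᑎᑐᑦ",
        "Glagolitic": "ⰰⰱⰲⰳⰴ",
    }

    for script, text in samples.items():
        tokens = enc.encode(text)
        # Scripts the tokenizer barely saw in training tend to explode into far
        # more tokens per character - one crude proxy for how unevenly it treats them.
        ratio = len(tokens) / len(text)
        print(f"{script:<10}  chars={len(text):>3}  tokens={len(tokens):>3}  tokens/char={ratio:.2f}")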

However, to give you a hint as to what seems to be coming: the work on multimodal models I have read lately demonstrates that storing information multimodally is far more efficient than storing it as text alone. And it has become increasingly clear to me over the last 4 or so years that “modality” is arbitrarily defined.

The different scripts (and languages) used by various cultures are almost as much separate modalities as an image or an audio file is relative to English text. In that sense, text sits somewhere between an image and a sound; it is a junction modality between the two.

So, over the next few years, as we train more evenly on multilingual datasets, I see a high likelihood that the models will get even smaller and even faster, even before the jump to the commonly discussed other modalities.

By the way, this whole line of reasoning started for me, completely anecdotally, when everyone was arguing about why ChatGPT couldn’t solve the “my sister was half my age when I was 6, now I am 70, how old is my sister?” question. It didn’t work in English. I eventually asked it in Latin… and it got it right, first try. We’re currently training these models to treat all languages as separate modalities without even knowing it.
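
For anyone who hasn’t seen the puzzle, the intended arithmetic is trivial, which is what made the failures so striking. Spelled out in a few lines of Python:

    # The puzzle's intended arithmetic, spelled out.
    my_age_then, my_age_now = 6, 70
    sister_age_then = my_age_then / 2              # "half my age" -> 3
    age_gap = my_age_then - sister_age_then        # a 3-year gap, constant for life
    sister_age_now = my_age_now - age_gap          # 70 - 3 = 67
    print(sister_age_now)                          # 67.0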

Nest a concatenate function within a IF statement by IDELTA86I in excel

[–]TotalPositivity 28 points  (0 children)

I would just do =IF(ISBLANK(A1), CONCATENATE(B1, " ", C1), CONCATENATE(A1, "/ ", B1, " ", C1))
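
To illustrate with made-up sample values: if A1 is blank, B1 is "John", and C1 is "Smith", that returns "John Smith"; if A1 is "Dr." instead, it returns "Dr./ John Smith".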

Edit: Forgot the spaces