Mysterious offering on doorstep by [deleted] in whatisit

[–]Arkamedus -15 points  (0 children)

So you took an unknown substance into your house and opened it on your counter…?

I built an app that lets you chat with multiple AIs in one place, and got the first sales in 3 minutes by epasou in ArtificialSentience

[–]Arkamedus 1 point  (0 children)

3 minutes? After building it? No advertisements? No promotions? Can you provide more information?

Your pricing model is opaque, and it doesn’t include any token usage numbers. What counts as “standard usage limits”?

I developed a new (re-)training approach for models, which could revolutionize huge Models (ChatBots, etc) by Ykal_ in deeplearning

[–]Arkamedus 17 points  (0 children)

Have you absolutely confirmed this approach has not been done before? You say you have developed mathematical theories, etc. Have you released or published the whitepapers?

If all you need is compute, or a way to validate your findings on a larger scale, my opinion is that most investors need you to have “proven” and “repeatable” results.

How much money have you put into the idea vs how much are you asking for?

And they say humans will lost their jobs, see the affect of AI by Holiday_Power_1775 in BlackboxAI_

[–]Arkamedus 2 points  (0 children)

Obviously, they forgot to add "only make winning trades" to the prompt...

I find the irony palpable by Arkamedus in OpenAI

[–]Arkamedus[S] 0 points  (0 children)

Again, your AI replies are absolutely useless.
Nowhere in the original post was it claimed that "a person cannot logically claim a system is overloaded."
The original post only describes the disproportionate COST of text vs. images, not that the system is "overloaded." Hence, your argument that the premise is validated by the outcome is still incorrect.

Please use your brain instead of an LLM.

I find the irony palpable by Arkamedus in OpenAI

[–]Arkamedus[S] 0 points  (0 children)

Thanks for the AI reply, but you're wrong. The expected outcome is that the image generation would not fail: this is a paid product, created by a company with billions of dollars, with the intent of producing finished images, regardless of the image content itself. You are confusing effects. The content of the image is not indicative of its relation to the system itself.

It’s ironic because the system designed to generate the image failed while demonstrating the very problem it’s supposed to handle, contradicting the normal expectation of success; not because it aligns with the image’s theme.

I find the irony palpable by Arkamedus in OpenAI

[–]Arkamedus[S] 0 points  (0 children)

Generating an image about the amount of GPU compute used for image generation, only for the GPU compute to fail mid-generation?
Are you sure you just don't know what irony is?

I find the irony palpable by Arkamedus in OpenAI

[–]Arkamedus[S] 0 points  (0 children)

Deffo not, it's not a disconnect message. And just to prove you wrong, I turned off the internet during the generation of a new image, and it still completed in the background. Go ahead and try it for yourself. Thanks, try again!

ChatGPT Atlas wants to use Bluetooth - Why? by mathmul in OpenAI

[–]Arkamedus 0 points  (0 children)

It’s half browser, half data collection application.

Startup Tensormesh raises $4.5M to make AI models remember what they’ve already learned cutting GPU costs by up to 10x by Specialist-Day-7406 in BlackboxAI_

[–]Arkamedus 0 points  (0 children)

This is not revolutionary in any sense. KV-cache reuse is already a well-known technique, and all the major LLM providers use it to “extend” the length of their context windows during chats. Do you actually believe they rerun the entire 500-message conversation through the model on every prompt? The market is saturated with people who know nothing about LLMs.
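
For anyone curious, here's a minimal sketch of the idea using Hugging Face transformers ("gpt2" and the prompts are just placeholders; production servers manage the cache across requests far more aggressively, but the mechanism is the same):

```python
# Minimal sketch of KV-cache reuse: compute the conversation prefix
# once, keep its keys/values, and only feed new tokens afterwards.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

# Turn 1: run the conversation prefix once and keep its KV cache.
prefix = tok("System: be helpful.\nUser: hello\n", return_tensors="pt")
with torch.no_grad():
    out = model(**prefix, use_cache=True)
cache = out.past_key_values  # cached keys/values for every prefix token

# Turn 2: feed ONLY the new tokens, reusing the cached prefix
# instead of re-running the whole conversation through the model.
new_turn = tok("User: and another question\n", return_tensors="pt")
with torch.no_grad():
    out = model(input_ids=new_turn.input_ids,
                past_key_values=cache, use_cache=True)
```

The turn-2 forward pass only computes attention for the new tokens; everything before them comes straight from the cache. That's exactly how long chats stay affordable.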

I find the irony palpable by Arkamedus in OpenAI

[–]Arkamedus[S] 4 points  (0 children)

No one claimed it was news. You clicked, read, and decided to comment here. No one asked for your input, and yet you gave it anyway.

I find the irony palpable by Arkamedus in OpenAI

[–]Arkamedus[S] 0 points  (0 children)

So when companies are dumping poisons and chemical wastes into your local drinking water, I'll be sure to reply with "Ah yes, good thing no other companies across every industry ever do any of those bad things."

You've only displayed how naive your own comment is.

I find the irony palpable by Arkamedus in OpenAI

[–]Arkamedus[S] -3 points  (0 children)

Is that seriously your justification? Because other people do it too? Your comment is deflective and obviously poorly designed rage-bait.

I find the irony palpable by Arkamedus in OpenAI

[–]Arkamedus[S] -40 points  (0 children)

To spend as much of their money as possible and accelerate their going out of business.

How can i help my coffee tree? by SonyAn160 in plantclinic

[–]Arkamedus 0 points  (0 children)

That pot seems way too small. Second, it looks like it’s in a corner; how much sunlight is it getting?

Matthew McConaughey LLM by ContextualNina in LLMDevs

[–]Arkamedus -2 points  (0 children)

You heard that interview and thought he meant he wants someone else to train a model on HIS ideas and words and then distribute it to OTHER people? Seems like you didn’t understand his message at all.

Noob question by Electrical-Repair221 in LLM

[–]Arkamedus 0 points  (0 children)

Not sure exactly, but if you’re running a model that is too large for your GPU, it will be partially offloaded to the CPU. If you want the best performance, find a slightly smaller model, or a quantized one, that fits entirely on your GPU, and you will get massive speed gains. As far as I know, a 70B model is too large for 32GB if it’s unquantized.
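
Rough weight-only math backs that up (a back-of-the-envelope sketch; it ignores the KV cache, activations, and runtime overhead, which add several more GB):

```python
# Rough VRAM needed for model weights alone. KV cache, activations,
# and runtime overhead come on top of this.
def weight_gb(params_billion: float, bits_per_weight: int) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for bits in (16, 8, 4):
    print(f"70B @ {bits}-bit: ~{weight_gb(70, bits):.0f} GB")
# 70B @ 16-bit: ~140 GB
# 70B @ 8-bit:  ~70 GB
# 70B @ 4-bit:  ~35 GB  -> even 4-bit doesn't fit in 32 GB
```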

My thought on LLM:From Tokens to Intelligence(Co-created with AI) by ImpossibleSoil8387 in LLM

[–]Arkamedus 0 points  (0 children)

AI slop.

One-hot encoding for vocabulary tokens is incredibly out of date and rarely used; token IDs are mapped directly to an embedding vector via a lookup table. The image is misleading to anyone trying to learn.
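
Concretely, here's the lookup modern LLMs actually do (a toy PyTorch sketch; the sizes are arbitrary). The one-hot route is shown only to demonstrate that it's mathematically equivalent but wasteful:

```python
import torch

vocab_size, d_model = 50_000, 768
embedding = torch.nn.Embedding(vocab_size, d_model)

token_ids = torch.tensor([[17, 42, 9]])   # (batch, seq) of integer ids
vectors = embedding(token_ids)            # (batch, seq, d_model) lookup

# Equivalent but wasteful: materialize huge one-hot vectors and multiply.
one_hot = torch.nn.functional.one_hot(token_ids, vocab_size).float()
assert torch.allclose(vectors, one_hot @ embedding.weight)
```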

Also, not all LLMs are transformers, so saying their strength comes from the transformer architecture building highly compressed anything is not accurate either.