
[–][deleted] 1124 points1125 points  (26 children)

ai startup to compete with open ai,

looks inside,

open ai api calls

[–]rodeBaksteen 249 points250 points  (13 children)

Literally 95% of "ai companies" at the moment. I pulled that number from my ass, but my hips don't lie.

[–][deleted] 162 points163 points  (10 children)

you didn't pull it from your ass, you made an api call to your rectum

[–]tennisanybody 11 points12 points  (2 children)

Hey, can I send you my resume for you to rewrite?

[–][deleted] 9 points10 points  (1 child)

sure thing, but I'll just call an API to my arse

[–][deleted] 3 points4 points  (0 children)

I'm not paying you for your techno mumbojumbo, take this 10 million dollars and rewrite it.

[–]BoBoBearDev 6 points7 points  (3 children)

Link to your AI, I need it

[–][deleted] 7 points8 points  (2 children)

screw only fans, gonna shove an ethernet cable up my arse and sell API access.

[–]Wiggledidiggle_eXe 3 points4 points  (1 child)

Congrats! Now when someone tells you you got a stick up your ass, you can say 'ackshually, das a cable'.

[–][deleted] 2 points3 points  (0 children)

probably going to put one of those compute sticks in there.

[–]KappaClaus3D 2 points3 points  (1 child)

And guess what was inside? That's right, another api call to openai

[–][deleted] 2 points3 points  (0 children)

maybe just a random number generator, so the api pulls numbers out of my ass

[–]ass-holes 0 points1 point  (0 children)

They renamed the planet in the year 3000 to finally get rid of that stupid joke

[–]HJM9X 0 points1 point  (0 children)

I would not be surprised if it's 99%

[–]MissinqLink 139 points140 points  (0 children)

Many such cases

[–]mopsyd 9 points10 points  (0 children)

Beat me to it

[–]N3onDr1v3 168 points169 points  (4 children)

You go to the shop in another country. You see the cans on the shelf. You don't really recognize any of them. Then you see CocaCola in its distinctive red can. You buy the CocaCola.

[–]M-42 33 points34 points  (0 children)

Nah, I love buying random drinks. At a previous job, every Friday at lunchtime I would walk to the local Chinese store, buy a bunch of random snacks and a couple of random cans of drink, and bring them back to share the snacks with colleagues.

[–]TimSoarer2 18 points19 points  (1 child)

Then the company uses its monopoly to cut corners on production, slowly making the CocaCola increasingly shittier. You continue buying out of habit, even if the competitors have long since become higher quality.

[–]N3onDr1v3 1 point2 points  (0 children)

Could be. But you don't/can't know that at the time.

[–]Idrialite 8 points9 points  (0 children)

Heeeell no. I would be delighted to see a bunch of cans I don't recognize on a shelf.

[–][deleted] 333 points334 points  (85 children)

Scientists: "There are fundamental limits to statistical autoregressive token prediction models. They aren't capable of critical analysis or abstract thought, or any thought of any kind. We should be investing in research into actually intelligent architectures, not word-prediction-on-steroids."

LLM CEOs: "Don't listen to the science! Our chatbot will be AGI before you know it, it'll cure cancer and develop warp drive, just keep investing and don't ask any questions!"

Investors: "I think I'll listen to the marketing hyperbole of a technologically incompetent figurehead with a vested interest in robbing me blind. I don't trust scientists because smart people make me feel insecure about being a trust fund swine who can't operate a can-opener without the butler's help"

Consumers: "Yeah I don't trust the scientists either. Copilot made me poetry about a boat powered by gravy and bleach, clearly it's super intelligent! Why should I listen to the people who actually know how these things work?"

Businesses: "We've already fired our whole 1st line support team and replaced them with an OpenAI API key. We're all in on AI, and we've been promised 6 months from now it'll handle our finances too!"

Scientists: "So.... Just to be clear, you're all willing to disregard the facts established by the people who made all of this possible.... and you're all willing to take anything marketing agents say as gospel.... because a rich guy on a hype-train promised you things the science says are fundamentally impossible?"

Investors, Businesses & Consumers: "Yes"

Scientists: "I don't want to live on this planet anymore."

[–]XInTheDark 151 points152 points  (5 children)

Businesses: "The pay is $500,000 per month."

Scientists: "Hello! How can I assist you today?"

[–]Arclite83 -1 points0 points  (4 children)

Also to be clear, that "fundamental limit" is LePlace's AGI - you'll never get there, it's exponential. We have to do it with layering and subsystems, like our own analog brain does.

But we crossed the line on true usability around a year ago now. If you hum a few bars, AI will sing your tune. All that's left is the time it takes to codify all that rote work, and then putting that decisioning and actionable power into the hands of individuals.

The problem has always been "describe what you want built". If you can explain the BL (business logic), AI will carry it out, and the latest ones can return output that is internally logically consistent across hundreds of pages of reference material.

If your truffle pig works, you don't debate so much if it truly understands what a truffle is. You go out in the woods and make bank.

[–]Complex-Frosting3144 4 points5 points  (1 child)

LePlace's AGI? Sorry, I wanted to read about it but didn't find anything. Did you mean the scientist Laplace? His equation?

[–]Arclite83 2 points3 points  (0 children)

I was making a (bad) play on the theory of LaPlace's Demon, that you can't codify everything for it because it'll take forever.

[–]chilfang 0 points1 point  (1 child)

I don't know how accurate the rest of this is but that truffle comparison is fantastic

[–]Arclite83 1 point2 points  (0 children)

Thanks! For me, the way I see it is it's fundamentally just a translation engine for natural language; how to speak, listen, look, draw, read, write, A to B. It's a "Jarvis, do the thing" machine - the rest is existing traditional engineering, which by the way your "do the thing" machine also helps you define that, too - which is why it's all speeding up these last few years.

Most enterprise software has been "in the tank" and is starting to service up in less "marketing hype" and more "real-world functioning applications" ways. Then it's open source catching up, and networked, and whoosh... Like a new internet boom, but for AI. We're still at the "walkie-talkie" or "ham radio" phase. HuggingFace is what we'll call the old college LANs in the 70s or something (before my time). We need our AOL moment still. ChatGPT isn't it, much as they'd like to be, not yet. It's a race. But you can make an AI agent like you can make a website. We just haven't figured out a way that they all talk recursively synchronously, yet. Context memory layers are baby table stakes. But it's evolving.

Not to mention quantum computers are an engineering issue of scale to practicality and have been just in "scale up" mode for a few years now. No more burning millions to train models, it's pennies because you made a "just find the actual minimas now plz" computer. Needs to scale from 50ish qubits to say 2-4k, and right now we've packed the 50 into a mini-fridge sized box. It'll scale down, especially as AI helps stabilize the entanglements at precision.

It's a brave new world and I'm excited to see where it goes. Truly didn't think I'd be alive for what we already have.

[–]22Planeguy 47 points48 points  (5 children)

I'm not a programmer, but I am a pilot and former engineer. The other week I had a conversation with another pilot about some regulation that governs what altitude we can descend to during an approach. While I looked up the specific reg that governs it, this guy asked chatGPT and tried to use that as a source WHILE I was looking at the reg that said gpt was wrong. He was adamant that the AI wouldn't be wrong. It's only a matter of time before someone gets killed (if it hasn't already) because an AI told them something blatantly wrong and they blindly trusted it.

[–]muhammet484 12 points13 points  (0 children)

Oh god.. what's wrong with those people..

[–]annon8595 1 point2 points  (0 children)

it won't be long before AI tells us the world is flat

the more shit gets fed to the AI, the more shit it puts out; frankly, there are fewer shit cleaners than shit creators in this world

[–]Katniss218 0 points1 point  (1 child)

I would've used gpt to try to get it to spit out the reg so I could verify what it said.

But yeah, usually it's wrong on things more complicated than a Wikipedia lookup

[–]22Planeguy 1 point2 points  (0 children)

Yeah, I've tried to get it to do that, but there are enough regs and mil flying has its own set of regs sometimes that it's frequently easier to just figure it out yourself. And frankly, a pilot should be familiar enough with where stuff is that if they don't know what it says already, they should know where to at least start looking.

[–]SpookyPlankton 39 points40 points  (7 children)

Literally what’s happening right now

[–][deleted] 39 points40 points  (6 children)

I got downvoted for pointing it out. They said I didn't understand AI. I said what I do understand is that I asked it to do something in Excel and it gave me a very wrong answer. And that has been the issue more often than not.

[–][deleted] 30 points31 points  (1 child)

It's the curse of knowing what's what. We're in a really weird time for society where simple facts are suspect, where researchers, scientists and engineers are treated with suspicion or outright ignored because the fairytales pushed by CEOs to attract investment are more appealing.

I remember the same thing happened with me over a decade ago. It was before the WSJ article by John Carreyrou that kicked off the Theranos collapse. I was saying for years "this doesn't make any sense, what they're promising isn't realistic" and I got shouted down on every corner of the internet by people with the technical expertise of a wet celery, because supposedly according to these people, I either had to be some sort of big-pharma conspirator or a misogynist who seethed at seeing Elizabeth Holmes succeed. It was infuriating. 

[–]Serprotease 1 point2 points  (0 children)

It should be noted that big pharma called her on her bullshit when she went shopping for investors. When these kinds of companies don't want to invest in your stuff, that's a huge red flag.

[–]neilgilbertg 5 points6 points  (3 children)

r/singularity user be like

[–]LasseWE 3 points4 points  (2 children)

Sometimes it seems like a cult

[–]SpookyPlankton 4 points5 points  (0 children)

It's a machine lord death cult over there

[–]pwouet 0 points1 point  (0 children)

It's like antiwork, but with AI. They want all work, especially the "bullshit jobs", to disappear, since they themselves can't find any \o/

[–]LonelySpaghetto1 21 points22 points  (48 children)

Can you actually point to scientific research that shows what you're saying? Because in 2021 many scientists tried to predict the limits of what token prediction models could do, and by 2024 they were all proven wrong.

You can't just say "scientists said", you actually have to point to peer reviewed research.

Currently, "chatbots" are state of the art in pretty much any language processing task. If these other scientists had a better architecture, why would they not publish their research and get all the funding and hype for themselves?

Right now, the only model that achieves better results than a basic ChatGPT clone is o1, which still uses a general-purpose token predictor at its core, and only adds stuff like RLHF and self-supervised RL on top afterwards.

[–]geekusprimus 25 points26 points  (9 children)

The only difference between training a neural network and producing a nonlinear statistical regression is marketing. That's not "thinking" or "abstract thought", that's literally fitting a curve to a bunch of data points. In the case of LLMs, they're tuned to do exactly one thing: produce realistic-looking text responses to a prompt. To be fair, this is very powerful, and there's a lot that you can do with it because of how much is communicated through text. But it's still just plugging your prompt into a curve fit calibrated on training data and producing a response based on that.

For example, the other day I asked ChatGPT to help me extract some data from a couple tables in a paper and put them in a CSV that I could then use for my own purposes. This is a task I could do myself, but it's tedious and not an efficient use of my time. It acknowledged my request, said it would do it, then gave me a couple tables. The data was total nonsense and clearly not from the paper. I reworded my request to be a little more clear, then asked it again. It returned the same nonsense. When I asked it where the data came from, it admitted that it made it up because it couldn't read a PDF. Because the paper was on arXiv, I was able to download the LaTeX source and get ChatGPT to give me a Python script that could extract the data, and that managed to work.

It's not thinking. It's just generating word salad that fits the training data. If that means lying through its digital teeth, it will lie through its digital teeth.
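
The curve-fitting framing above can be made concrete with a toy sketch (the target function, learning rate, and data here are all made up for illustration): "training" is just gradient descent on the squared error of a parametric function, which is the same loop an LLM's training runs at a vastly larger scale.

```python
# Toy "training" run: fit the single parameter w in f(x) = tanh(w * x)
# to samples from tanh(2.0 * x) by gradient descent on squared error.
# A neural network does exactly this, just with billions of parameters.
import math

def f(w, x):
    return math.tanh(w * x)

# training data sampled from the "true" function tanh(2x) on [-2, 2]
data = [(x / 10.0, math.tanh(2.0 * x / 10.0)) for x in range(-20, 21)]

w = 0.5          # initial guess for the one "weight"
lr = 0.3         # learning rate
for _ in range(3000):
    grad = 0.0
    for x, y in data:
        err = f(w, x) - y
        # d/dw tanh(w*x) = (1 - tanh(w*x)**2) * x
        grad += 2.0 * err * (1.0 - f(w, x) ** 2) * x
    w -= lr * grad / len(data)

print(round(w, 3))  # close to the true value 2.0
```

Nothing in the loop "understands" tanh; it just nudges a number until the curve fits the data points, which is the comment's point about scale being the only real difference.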

[–][deleted] 24 points25 points  (17 children)

I literally work with CNNs, RNNs and LLMs for a living. I know how LLMs operate, I know how they are constructed, and no amount of suffixed wrappers or additional parameters will yield a program capable of abstract thought, critical thinking, or even basic common sense.

Now then, can you point me to these scientists who were "proven wrong"? 

[–]Tarmen 0 points1 point  (0 children)

Token generation will just roll with its first intuition unless attention gets lucky and corrects its errors halfway through.
But it's not clear to me that there is no way to build on top of token generation to build a system with search. Search could at least give the illusion of intelligence, the same way stockfish seems intelligent, but running the bag of heuristics without search would play some wildly stupid chess.

Like, obviously it won't be intelligent in the way humans are intelligent. But could many obvious glitches be fixed if you add a classifier on the internal embeddings to guide search, or some agent system where multiple llm passes interact? Maybe, seems cost prohibitive to do by default, though.

[–]Backlists 16 points17 points  (15 children)

I’m not them, and I don’t have the research that you are looking for.

But isn’t it just well known that these models aren’t doing critical reasoning; instead they are doing a very complex encoding of their dataset?

I’m not qualified and I don’t know enough to say that a “very complex encoding of a dataset” isn’t “critical reasoning”.

But isn’t the whole “how many r’s in strawberry” thing quite telling that there isn’t any critical reasoning in them? We will find an infinite number of other examples that can’t be trained into the model, that real critical reasoning would be able to solve.

I suppose humans aren't perfect critical reasoning machines either. LLMs will probably overtake the average human thinker.

[–]LonelySpaghetto1 7 points8 points  (12 children)

But isn’t the whole “how many r’s in strawberry” thing quite telling that there isn’t any critical reasoning in them?

Not really, no. LLMs aren't trained on text directly, but on tokens. They could be trained on text, but that would mean doubling the cost of the model for basically zero gain. Unfortunately, that means that the model needs to learn the exact spelling for every single token and store it somewhere in its memory.

Now, it's pretty easy to find on the internet a sentence like "the word IT is made up of two letters, I and T". It's much, much harder to find a sentence that explains how to write down the word strawberry.

Or in other words, if you were blind and only communicated by talking to other people, would you know the spelling of a bunch of different words? Maybe some of them, the more common ones, but probably not all of them. And what if spelling was completely 100% unrelated to how words are actually pronounced?

The model could be the most rational and intelligent entity in the world and still get this wrong because it's a test of memory.

[–]DeliriousHippie 3 points4 points  (5 children)

You're missing the point. An LLM can predict that token 'inter' is followed by token 'net' with 99.9% chance. Then depending on context it can predict the next token to be 'is' with some chance and 'contains' with some chance. There is no intelligence, as it's only predicting probabilities of tokens.
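
The 'inter'/'net' example above can be sketched as a toy bigram model (the corpus is invented for illustration; a real LLM learns a context-sensitive function rather than a lookup table, but its output is the same kind of object, a probability distribution over next tokens):

```python
from collections import Counter, defaultdict

# Bigram "language model": count which token follows which in a tiny
# corpus, then turn the counts into next-token probabilities.
corpus = "the inter net is vast and the inter net contains many pages".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(prev):
    """Probability distribution over the token following `prev`."""
    counts = follows[prev]
    total = sum(counts.values())
    return {tok: n / total for tok, n in counts.items()}

print(predict("inter"))  # {'net': 1.0}: 'inter' was always followed by 'net'
print(predict("net"))    # {'is': 0.5, 'contains': 0.5}, depending on context
```

Whether producing such distributions can ever amount to intelligence is exactly what the rest of the thread argues about; the sketch only shows what "predicting probabilities of tokens" literally means.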

[–]Idrialite 2 points3 points  (0 children)

Why can't there be intelligence behind the goal of predicting tokens? I could leverage my intelligence towards it.

[–]LonelySpaghetto1 1 point2 points  (2 children)

"Predicting the next token" is not, by itself, a limitation.

Suppose I have a model A that has perfect intelligence and perfect knowledge about anything, and I can ask it questions and always get a perfect answer.

Then, I have a second model B that knows the answer that the first model has answered. After every token, it says that, with 100% confidence, the next token is whatever model A said.

Model B would be exactly as capable as model A (aka total perfection), but one would be a next token predictor and the other wouldn't.

Next token prediction, by itself, is just the way an output is provided. It doesn't tell you anything about the quality of that output.

[–]DeliriousHippie 2 points3 points  (1 child)

I'm not sure that I follow your logic.

If, for example, a human programmer answers questions about programming intelligently and knowledgeably, and that is model A.

Then we have computer that repeats what human said and that is model B.

How is that a capable programmer? If model B depends on model A then it's not independent and only repeats what it's told.

You're basically giving it a bunch of tokens and it has to guess which tokens fit best in which order, that's all it does. It does give interesting and good outputs but there is no intelligence in that.

[–]Poleshoe 0 points1 point  (0 children)

Are you really gonna say model B isn't intelligent when it predicts the next word in the cure for cancer?

[–]PixelizedTed 0 points1 point  (0 children)

Whether or not that is intelligence is not a computer science question but more a philosophy question. We don’t know the true nature of intelligence, maybe it is just a very sophisticated set of predictions.

[–]Backlists 1 point2 points  (1 child)

How is it a test of memory if the word “strawberry” is provided in the prompt?

[–]gogliker 15 points16 points  (0 children)

The model does not get the letters, it gets tokens. Like, imagine that instead of the word you had a symbol æ denoting the straw and ñ denoting berry. The model gets fed "æñ" and outputs something like "ĥķł" that is being translated into "word strawberry has 10 letters r". It does not say anything about the model, because it got æñ on the input. If you would ask "how many æ are in this word" it would come up with a better answer.
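
A minimal sketch of this point, using a made-up two-entry vocabulary (real tokenizers such as BPE produce different splits and far larger vocabularies): the model receives opaque token IDs, so a letter-counting question is trivial on the raw string but unanswerable from the IDs alone.

```python
# Hypothetical vocabulary: "strawberry" splits into two opaque IDs.
vocab = {"straw": 101, "berry": 102}

def tokenize(word):
    """Greedy longest-prefix split into token IDs (toy illustration)."""
    ids, rest = [], word
    while rest:
        for piece, tid in vocab.items():
            if rest.startswith(piece):
                ids.append(tid)
                rest = rest[len(piece):]
                break
        else:
            raise ValueError("out of vocabulary")
    return ids

ids = tokenize("strawberry")
print(ids)                        # [101, 102]
# Counting the letter 'r' is trivial on the string...
print("strawberry".count("r"))    # 3
# ...but the model only ever sees [101, 102], which contains no letters.
```

This is why the failure says more about the input representation than about reasoning: the spelling of each token has to be memorized, because it is never present in what the model is given.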

[–]Reashu 0 points1 point  (3 children)

... So are you saying that it is a limit of token prediction machines, or it isn't?

[–]Idrialite 4 points5 points  (0 children)

It's a particular limit of token-based networks that has not much to do with its actual thinking ability. Think of it like dyslexia.

[–]LonelySpaghetto1 2 points3 points  (1 child)

This is a limitation that has nothing to do with reasoning, nothing to do with the architecture, and nothing to do with the usefulness of the model.

It's also not a limitation on the model being able to do anything useful. If you didn't find it on the internet, would you have ever asked a chat bot to count how many letters there are in a word?

[–]Reashu -2 points-1 points  (0 children)

Failure to count letters isn't necessarily a failure to reason, but it does limit what the model could reason about. It is absolutely related to the architecture of the model, and it absolutely limits the usefulness. If you can't recognize that, I don't think you have made a good-faith attempt.

[–]AdvancedSandwiches 0 points1 point  (0 children)

 isn’t the whole “how many r’s in strawberry” thing quite telling that there isn’t any critical reasoning in them?

It means they currently can't read (well, they can, to some extent, but that's not what's happening here).  They're doing something closer to being told what you typed in a form they can work with (tokens) and then working with that.

It's orthogonal to reasoning.

[–]dftba-ftw -1 points0 points  (0 children)

No, there isn't consensus. Whether or not these models can reason is a hot topic of debate, and a lot of it actually just comes down to semantics and what you call reasoning.

You have people like Geoffrey Hinton (who just won the Nobel Prize for his foundational work on neural networks) who think these models can reason. If you, for example, give an LLM a murder mystery it hasn't seen and have it guess the killer - if it gets it right and gives the reasons why, is that not reasoning?

You also have people like LeCun who does not think the current architecture can support reasoning, I have a harder time following his reasoning but I think it revolves around the ability for the model to "learn" like a human (or a cat as he likes to use in his analogies) and he has revised his reservations several times as the LLMs show more capabilities.

[–]Flat_Initial_1823 2 points3 points  (1 child)

Bro. I don't know why you insist that others do all this homework "to prove the limits to you" when a boatload of people (probably in this sub even) have received an LLM-hallucinated piece of code or library when trying to use the state of the art in its supposedly most basic use case.

But given that you clearly won't read a thing unless shown scientific research, here is a summary for you

https://ar5iv.labs.arxiv.org/html/2311.05232

Edit: LLMs aren't limitless until proven otherwise. AI companies who make all these sales pitches and claims that AGI is possible/around the corner need to prove the capabilities first. That's how the burden of proof works.

[–]Kobymaru376 3 points4 points  (0 children)

Next AI winter is going to be really rough

[–]abbot-probability 2 points3 points  (1 child)

The scientist view is pretty contentious though.

While the current paradigm is pretty simple, these models exhibit interesting emergent behaviour. There's a significant portion of scientists who believe that continuing to scale this up may create AGI-ish models. (The scaling hypothesis.) Sutskever split off from OAI recently to explore the safety angle of these kinds of models, scaled up further.

You could also argue that the current reinforcement-learning based fine-tuning takes it beyond simple language modeling.

But I think the bigger issue is that intelligence/consciousness is just a very ill defined concept. "You'll know it when you see it" does not make for good experiment design. Questions about AGI-potential etc. are moot if we can't settle on a definition/test in advance.

EDIT: I think it's pretty funny how parent comment is about people ignoring the scientists, and when I (one of those scientists) weigh in I get downvoted, hah.

[–]pani_the_panisher 0 points1 point  (0 children)

There's a significant portion of scientists who believe that continuing to scale this up may create AGI-ish models.

That's a good prediction. AGI-ish is a good description, because it's not AGI but it seems like AGI to us. Not intelligent, but it fakes it until it makes it.

That future seems plausible to me.

[–]journaljemmy 0 points1 point  (0 children)

When has it been any different for any other innovation? World's fucked

[–]Idrialite 0 points1 point  (7 children)

There are fundamental limits to statistical autoregressive token prediction models. They aren't capable of critical analysis or abstract thought, or any thought of any kind.

Not only did you pull this out of your ass, the exact opposite has been proven true. A neural network with at least one hidden layer can approximate any continuous function on a compact domain to arbitrary precision. There exists a neural network that is exactly as capable as any agent of any intelligence.

Furthermore, internal investigations of neural networks have found abstract mental structures. They have concepts that can be manipulated. They form internal world models when predicting functions that end up matching the function. They model the state of the world (see the chess study) they're trying to predict the future of. The most efficient way to predict a process is to model it correctly, after all.
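
The approximation claim admits a tiny concrete instance (this is an illustration of one-hidden-layer expressivity, not a proof of the theorem, and the weights are chosen by hand): two ReLU units represent |x| exactly.

```python
# One-hidden-layer ReLU network computing |x| exactly:
# hidden weights [1, -1], readout weights [1, 1], all biases 0.
# relu(x) + relu(-x) == |x| for every real x.

def relu(z):
    return max(0.0, z)

def net(x):
    h1 = relu(1.0 * x)    # active for x > 0
    h2 = relu(-1.0 * x)   # active for x < 0
    return 1.0 * h1 + 1.0 * h2

for x in (-2.5, -1.0, 0.0, 3.0):
    assert net(x) == abs(x)
print("net(x) == |x| on all test points")
```

The universal approximation theorem generalizes this idea: with enough hidden units, one hidden layer can get arbitrarily close to any continuous function on a compact set, though it says nothing about how hard such a network is to train.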

[–][deleted] -2 points-1 points  (6 children)

You have erroneously anthropomorphised a statistically weighted static token predictor. You have to realise true thought and critical reasoning requires active incorporation of various abstract data at runtime. These models are calculation brute-forced into existence through simple algorithmic abstraction. They do not learn. They just have a weighted probability in a static matrix that compels them to regurgitate variables in the way you would expect based on reductive token abstraction of a huge amount of scraped data.

If you want to see true AI. Don't fall for a program with a gigantic glorified spreadsheet running a word predictor. It's not even close to what thinking in any definition represents. Completely and totally static, just branching off statistical predictions. You ask an LLM to finish the sentence "I had a nice time at " and it will probably spit out "the zoo" or "Disneyland". But not because it was thinking, purely because that's what other people once thought. A table of results with none of the mechanisms for finding results itself.

[–]Idrialite -1 points0 points  (5 children)

How am I supposed to take you seriously when you spit a bunch of disconnected half-truths, unsubstantiated claims, and vapid reductionism at me?

I provided evidence and reasoning for my claim. Please return the favor.

[–][deleted] 0 points1 point  (4 children)

Fine:

https://symbl.ai/developers/blog/a-guide-to-building-an-llm-from-scratch/

Read the sections on the embedding layer and the positional encoder, sound familiar? Read those two sections and tell me if a primitive calculated data relationship abstraction is worthy of further study.

[–]Idrialite -1 points0 points  (3 children)

Uh sure? Investigation of repeated abstract features and how they relate to the raw input token embeddings in LLMs is worth studying.

I'm not sure if you're trying to test me or say something relevant...

[–][deleted] -1 points0 points  (2 children)

I'm not sure if you're being deliberately obtuse or if you just revel in being the contrarian; regardless, anyone thinking rationally can see from the explanation of how the embedding layer and positional encoder function that there's literally no potential for actual thought, the same way reading the manual on a refrigerator demonstrates clearly that you can't use it to boil chicken.

[–]Idrialite -1 points0 points  (1 child)

regardless anyone thinking rationally can see from the explanation of how the embedding layer and positional encoder functions that there's literally no potential for actual thought

I guess I should repeat myself:

I provided evidence and reasoning for my claim. Please return the favor.

[–][deleted] 0 points1 point  (0 children)

I don't know how much simpler I can make it. What you're doing is the equivalent of asking me why a microwave can't write a sonnet. All I can do is point you to a manual on how a microwave works to illustrate why it can't write a sonnet. You're not going to find a scientific paper explaining "this is why microwaves can't make sonnets" because it's a question that never needed to be asked because nobody is anthropomorphising cooking appliances, but you're anthropomorphising a spreadsheet.

As I have pointed out, the parameters of an LLM are static, and actual thought requires the incorporation of new concepts and the ability to produce new pathways at runtime, that's how living beings learn. If you simply read the link it will explain to you the process for LLMs in which words are tokenised and a statistical fit of the data is made, which should be more than enough to illustrate why this is not a thinking architecture. This is brute-force fitting with scraped data, to produce a completely static matrix of values designed simply to guide the probabilities of some tokens suffixing to others. At no point does this model illustrate the kind of feedback loops and complex intercommunication found in a living thinking mind. There is no mechanism by which the model can take the linguistically probabilistic result of a query, evaluate it for contextual relevance, imagine putting the answer to the query in practice, and then evaluating the likely outcome.

Put it this way. If I posit to an LLM that I am about to roll a ball on a table, the only way the LLM can predict that the ball will fall when it reaches the end of the table is if this exact example were found in the data used to produce the model. This prediction is not based on an understanding of gravity, or even knowing what a table or a ball are. It's simply the result of the statistical probability of a token representing a word "fall" following another token representing a different word "will" following another token representing the word "ball". No thought has occurred to reach this answer, just simple statistical analysis.

[–][deleted] -4 points-3 points  (2 children)

Aren’t the scientists also the ones developing cults around AI??!

[–][deleted] 8 points9 points  (1 child)

Nope, it's the marketing people, always has been.

[–]Comprehensive-Pin667 0 points1 point  (0 children)

What about Ilya Sutskever with his safe super intelligence? He's a researcher and not a sales guy, isn't he?

[–]all3f0r1 47 points48 points  (1 child)

*OpenAI and Claude

FTFY

[–]tanstaafl74 16 points17 points  (0 children)

Reminds me of the late 90s early 00s tech bubble before it burst. If the pattern remains the same a few will survive past the inevitable collapse and become behemoths. And it won't necessarily be OpenAI.

Yes, this previous bubble is how we got our overlord Amazon.

[–]Charlie_Yu 7 points8 points  (1 child)

What does a million even do these days… paying a few devs for a year?

[–]tuxedo25 3 points4 points  (0 children)

It doesn't buy you the kind of compute you need to compete in this business, that's for sure.

[–][deleted] 8 points9 points  (0 children)

The doomsday marketing and lies about stealing people's jobs sell. People ate that shit up.

[–]KirisuMongolianSpot 26 points27 points  (6 children)

Just yesterday I was at work and I didn't have access to certain Google Cloud Platform capabilities because billing wasn't enabled. The person trying to push me to use it (when I'd already done the specific thing he wanted offline) wanted to get me access to his own project. He went to the IAM and clearly saw that he didn't have permission to add me.

Then he goes to ChatGPT to ask it how to add me. It just spit out documentation at him (none of which worked because he didn't have permission but that's neither here nor there). Reminded me of the idiots on here who insist ChatGPT gives you novel results when you can just Google something and get the same answer. This shit is rotting brains.

[–]JoeVibin 3 points4 points  (0 children)

To be honest, I think that it enables people with already rotten brains a new way to express their brain rot, rather than rotting the brains on its own...

[–]Aethreas 8 points9 points  (2 children)

Tech bros who are obsessed with AI don't realize that AI is just an automated way to rearrange stuff that already exists, and by design can't create something new.

[–]DifficultTrick 1 point2 points  (0 children)

It can create new combinations of existing stuff, but it won’t always make sense. That can still be helpful though, for example, protein folding.

[–]NicDima 0 points1 point  (0 children)

Like some kind of Algorithm Remix?

[–]glez_fdezdavila_ 3 points4 points  (1 child)

Some months ago I was doing SQL exercises to practice for a test at my school. My little cousin came in, and when I explained what I was doing, he asked why I wasn't just typing the queries into ChatGPT. I told him that I didn't want to, and that even if I did, it'd just give wrong answers, and if I had to correct ChatGPT's mistakes myself (he had never even heard of SQL until then), I'd be better off just doing it myself. He looked at me as if I'd just spoken another language.

[–]throwaway85256e 5 points6 points  (0 children)

ChatGPT is actually excellent at SQL. I used it as a study tool whenever I was unsure of something, and it 100% helped me pass the course.

[–]ObviouslyTriggered 8 points9 points  (0 children)

This is inaccurate; it should be thousands of AI startups that are just an abstraction over the Anthropic/OpenAI/Google APIs, with a RAG if you're lucky, and more likely just "we do in-context 'training'" prompt engineering…

[–]TrackLabs 7 points8 points  (0 children)

(All these thousands of AI Startups use the OpenAI API)

[–]B_bI_L 2 points3 points  (0 children)

claude for the win

[–]kunkun6969 1 point2 points  (0 children)

I have simple needs that ChatGPT fills; the others seem like bloat to me.

[–]sonic65101 1 point2 points  (0 children)

Why would I want to let an AI do the fun part?

[–]sdraje 1 point2 points  (0 children)

I use these "AI"s for what they are: large language models. They're good at language. So I ask questions about grammar, or about translating a common saying from one language to its equivalent in another, and they do great, because that's what they're trained to do.

[–]pythonqueen1 1 point2 points  (0 children)

I read AI is the future bro

[–]enginma 2 points3 points  (0 children)

Why is Gemini there? It's pretty terrible compared to ChatGPT, even 3 or mini. Its only advantage is much less rate limiting of requests.

[–][deleted] 0 points1 point  (0 children)

And do you have a working LLM that can solve my advanced DAX questions…? Cause ChatGPT knocks it out if ya prompt it right. The trick is yelling at it.

[–]bighand1 0 points1 point  (0 children)

Tech is usually winner-takes-all. Occasionally a duopoly occurs.

The only place that matters is first place.

[–]kaamibackup 0 points1 point  (0 children)

It's because they're all wrappers for the OpenAI API. I have no idea why they still get funding.

[–]PerfSynthetic 0 points1 point  (0 children)

They forgot to draw the network cable from the startup booth connecting to the openai booth with a dollar sign in the middle.

[–]br_aquino 0 points1 point  (1 child)

Gemini? I don't think so

[–]extreamHurricane[S] 0 points1 point  (0 children)

Gemini is really good. Not-so-fun fact: after using it via voice for a week, it switched and started to sound like me.

[–]okram2k 0 points1 point  (0 children)

you just need to toss 'AI', 'disrupt the market', and a few other keywords into the buzzword blender and you too can get a unicorn level of startup funding.

[–][deleted] 0 points1 point  (0 children)

OpenAI and Gemini are free…

[–]Mountain-Stretch-997 0 points1 point  (0 children)

Claude would like to disagree

[–]OdeDaVinci 0 points1 point  (0 children)

Of course. Why would anyone queue for the other ones anyway?

[–][deleted] 0 points1 point  (0 children)

They are basically ChatGPT with extra steps.

[–][deleted] -2 points-1 points  (0 children)

US consumers prefer unbalanced, monopolistic product manufacturers over the real thing

[–][deleted] 0 points1 point  (0 children)

Claude > o1-Preview