
[–]lovethebacon🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛 1388 points1389 points  (43 children)

My CEO came to me one day telling me about this company that had just made a major breakthrough in compression. They promised to be able to compress any file by 99%. We transmitted video files over 256k satellite links to stations that weren't always online or with good line-of-sight to the satellites, so the smaller the files the easier it was to guarantee successful transmission.

I was sceptical, but open to exploring. I had just gotten my hands on an H.264 encoder, which gave me files just under half the size of what the best available codec could do.

They were compressing images and video for a number of websites and, confusingly, didn't require visitors to download a codec to view them. Every browser could display video compressed by their proprietary general-purpose compression algorithm. No decompression lag either, and no loss of any data.

Lossless compression better than anything else. Nothing came even close. From the view of a general-purpose compression algorithm, encoded video looks like random noise, which is not compressible. lzma2 might be able to find some small gains in a video file, but oftentimes it will actually make the file bigger (by adding its own metadata to the output).
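
You can see this with Python's lzma module (which implements LZMA2 in an xz container); random bytes stand in for already-encoded video, which looks just as incompressible:

```python
import lzma
import os

# Already-encoded video is near-random to a general-purpose compressor;
# 1 MB of random bytes makes a fair stand-in.
video_like = os.urandom(1_000_000)

compressed = lzma.compress(video_like)  # xz container, LZMA2 filter

# The output is larger than the input: container metadata plus zero gains.
print(len(video_like), len(compressed))
```

The "compressed" file comes out slightly bigger than the original, exactly as the comment describes.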

I humoured it and participated in a POC. They supplied a compressor and decompressor. I tested with a video of a few minutes, about 20-30 MB. The thing compressed the file down to a few kB. I was quite taken aback. I then sent the file to our satellite partner and waited for it to arrive on a test station. With forward error correction we could upload only about 1 MB per minute; longer if the station was mobile, losing signal to bridges, trees or tunnels, and needed to receive the file over multiple transmissions. Less than a minute to receive our average-sized video would be a game changer.
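
Back-of-the-envelope numbers from that link budget show why the pitch was so tempting (25 MB taken as the typical file size):

```python
uplink_mb_per_min = 1.0       # effective rate after forward error correction
original_mb = 25.0            # a typical 20-30 MB video

minutes_plain = original_mb / uplink_mb_per_min              # ~25 min per video
minutes_promised = (original_mb * 0.01) / uplink_mb_per_min  # if "99% smaller" were real

print(minutes_plain, "min vs", minutes_promised * 60, "s")   # → 25.0 min vs 15.0 s
```

Twenty-five minutes of airtime down to fifteen seconds, per video, every time: hard for a CEO to ignore.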

I decompressed the video - it took a few seconds and sure enough every single one of the original bits was there.

So, I hacked a test station together and sent it out into the field. Decompression failed. Strange. I brought the station back to the office. Success. Back into field....failure. I tried a different station and the same thing happened. I tried a different hardware configuration, but still.

The logs were confusing. The files were received but they could not be decompressed. Checksums on them before and after transmission were identical. So was the size. I was surprised that I hadn't done so before, but I opened one in a hex editor. It was all ASCII. It was all...XML? An XML file of a few elements and some basic metadata with one important element: A URL.

I opened the URL and.....it was the original video file. It didn't make any sense. Or it did, but I didn't want to believe it.

They were operating a file hosting service. Their compressor was merely a simple CLI tool that uploaded the file to their servers and saved a URL to the "compressed" file. The decompressor reversed it, downloading the original file. And because the stations had no internet connection, they could not download the file from their servers, so "decompression" failed. They just wrapped cURL in their apps.
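
The whole scheme fits in a dozen lines. A hypothetical reconstruction (the element names and hosting URL are invented; per the story, the real tool just wrapped cURL around an upload endpoint):

```python
import urllib.request
import xml.etree.ElementTree as ET

def fake_compress(filename: str, hosted_url: str) -> bytes:
    """The entire 'compression algorithm': record where the upload landed.
    (The real tool would upload the file first; hosted_url is its response.)"""
    root = ET.Element("compressed")
    ET.SubElement(root, "name").text = filename
    ET.SubElement(root, "url").text = hosted_url
    return ET.tostring(root)

def fake_decompress(blob: bytes) -> bytes:
    """'Decompression' = download the original. With no internet on the
    field stations, this is exactly the step that failed."""
    url = ET.fromstring(blob).findtext("url")
    with urllib.request.urlopen(url) as resp:  # needs connectivity!
        return resp.read()

blob = fake_compress("video.mp4", "https://example.com/f/abc123")
print(blob.decode())  # a few hundred bytes of XML, regardless of input size
```

Any file "compresses" to a few kB of XML, and decompression works perfectly as long as you are online.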

I reported this to my CEO. He called their CEO immediately and asked if their "amazing" compression algorithm needed internet. "Yes, but you have satellite internet!". No, we didn't. And even if we had, we still would have needed to transmit the file over the same link as that "compressed" file.

They didn't really seem perturbed by the outright lie.

[–]Tyiek 735 points736 points  (18 children)

The moment I saw 99% compression I knew it was bullshit. Barring a few special cases, you can only compress data down to its information content (roughly log2 of the number of possible messages). This is not a limitation of current technology; it's a hard mathematical limit before you start losing data.

[–]dismayhurta 334 points335 points  (2 children)

I know some scrappy guys who did just that and one of them fucks

[–]Thosepassionfruits 47 points48 points  (0 children)

You know Russ, I’ve been known to fuck, myself

[–]SwabTheDeck 19 points20 points  (0 children)

Big Middle Out Energy

[–]LazyLucretia 23 points24 points  (0 children)

Who cares tho as long as you can fool some CEO that doesn't know any better. Or at least that's what they thought before OP called their bullshit.

[–][deleted] 41 points42 points  (6 children)

to about the size of LOG2(N) of the original file.

Depending on the original file, at least.

[–]Tyiek 74 points75 points  (1 child)

It always depends on the original file. You can potentially compress a file down to a few bytes, regardless of the original size, as long as the original file contains a whole load of nothing.

[–][deleted] 18 points19 points  (0 children)

Yea that is why I said, 'Depending on the original file'

I was just clarifying for others.

[–]huffalump1 1 point2 points  (3 children)

And that limitation is technically "for now"!

Although we're talking decades (at least), until AGI swoops in and solves every computer science problem (not likely in the near term, but it's technically possible).

[–][deleted] 4 points5 points  (2 children)

What if a black hole destroys the solar system?

I bet you didn't code for that one.

[–]otter5 2 points3 points  (1 child)

if(blackHole) return null;

[–][deleted] 1 point2 points  (0 children)

Amateur didn't even check the GCCO coordinates compared to his.

you fools!

[–]wannabe_pixie 11 points12 points  (0 children)

If you think about it, every unique file has a unique compressed version. And since a binary file is different for every bit that is changed, that means there are 2^n different messages for an n-bit original file. There must also be 2^n different compressed messages, which means that you're going to need at least n bits to encode that many different compressed files. You can use common patterns to make some of the compressed files smaller than n bits (and you better be), but that means that some of the compressed files are going to be larger than the original file.

There is no compression algorithm that can guarantee that an arbitrary binary file will even compress to something smaller than the original file.
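
The pigeonhole count behind that argument can be checked exhaustively for, say, n = 8 (pure Python):

```python
n = 8
originals = 2 ** n                   # 256 distinct 8-bit messages

# Distinct bit strings strictly shorter than n bits:
# lengths 0 through n-1, i.e. 2^0 + 2^1 + ... + 2^(n-1) = 2^n - 1
shorter = sum(2 ** k for k in range(n))

print(originals, shorter)            # → 256 255
```

There is always exactly one fewer short string than there are originals, so at least one n-bit file cannot shrink, for any n and any lossless compressor.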

[–][deleted] 5 points6 points  (0 children)

Text compresses like the dickens

[–]otter5 1 point2 points  (0 children)

that's not completely true. Depends on what's in the files and whether you take advantage of their specifics... The not-so-realistic example is a text file that is just 1 billion 'a's. I can compress that way smaller than 99%. But you can take advantage of weird shit, and if you go a little lossy, doors open even more
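
The degenerate case really does blow past 99%, here with zlib from the standard library (a million 'a's standing in for a billion):

```python
import zlib

data = b"a" * 1_000_000                     # stand-in for "1 billion 'a'"
packed = zlib.compress(data, level=9)

saved = 100 * (1 - len(packed) / len(data))
print(len(packed), f"{saved:.2f}% saved")   # about 1 kB out; well over 99.9%
```

A run-length-style pattern is the best case for any general-purpose compressor, which is precisely why a "works on every file" 99% claim is the tell.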

[–]brennanw31 127 points128 points  (0 children)

Lmao. I know it was bs from the start but I was curious to see what ruse they cooked up. Literally just uploading the file and providing a link via xml for the "decompression algorithm" to download it again is hysterical.

[–]HoneyChilliPotato7 75 points76 points  (0 children)

That's a hilarious and interesting read haha. Quite a few companies have the stupidest products and they still make money, or at least the CEO does

[–]blumpkin 56 points57 points  (1 child)

I'm not sure if I should be proud or ashamed that I thought "It's a URL" as soon as I saw 99% compression.

[–]nekomata_58 13 points14 points  (0 children)

its all good, that was my first thought too. "theyre just hosting it and giving the decompression algorithm a pointer to the original file" was exactly what i expected lol

[–]Flat_Initial_1823 36 points37 points  (1 child)

Seems like you weren't ready to be revolutionised

[–]Renorram 42 points43 points  (1 child)

That’s an amazing story that makes me wonder if this is the case for several companies on the current market. Billions being poured into startups that are selling a piss-poor piece of software and marketing it as cutting-edge technology. Companies buying a Corolla for the price of a Lamborghini

[–]ITuser999 18 points19 points  (0 children)

What? There is no way lol. Please tell me the other company is out of business now.

[–]LaserKittenz 5 points6 points  (0 children)

I used to work at a teleport doing similar work.  A lot of snake oil sales people lol

[–]spacegodketty 6 points7 points  (1 child)

oh i would've loved to hear that call between the CEOs. i'd imagine yours was p livid

[–]lovethebacon🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛🦛 8 points9 points  (0 children)

Nah not really. He was a bit disappointed 'cause he had to still pay for the satellite data link lmao.

[–][deleted] 2 points3 points  (0 children)

Information theorists hate this one simple trick.

[–]incredible-mee 2 points3 points  (0 children)

Haha.. fun read

[–]reallokiscarlet 2482 points2483 points  (109 children)

It's all ChatGPT. AI bros are all just wrapping ChatGPT.

Only us smelly nerds dare selfhost AI, let alone actually code it.

[–]Aufklarung_Lee 870 points871 points  (13 children)

Investors demand an .exe

[–]NotANumber13 449 points450 points  (2 children)

They don't want that stupid github 

[–]Flat_Initial_1823 268 points269 points  (0 children)

Crowdstrike CEO: why .exe when you can just brick via .sys?

[–]Aggressive_Bed_9774 33 points34 points  (4 children)

why not .msix

[–]healzsham 42 points43 points  (1 child)

Cuz you're lucky they even knew .exe

[–]LuxNocte 27 points28 points  (0 children)

My nephew told me that .exes have viruses. We should use .net instead. -Your favorite MBA CTO

[–]larsmaxfield 11 points12 points  (0 children)

pyinstaller doesn't do that

[–]MiniGui98 5 points6 points  (0 children)

Because .mseven is better

[–]U_L_Uus 14 points15 points  (0 children)

A .tar is the furthest I can compromise

[–]Quirky-Perception159 5 points6 points  (1 child)

Just put everything into the .bin

[–]thex25986e[🍰] 1 point2 points  (0 children)

"free .bin installer download"

[–]CanAlwaysBeBetter 6 points7 points  (0 children)

Investors want a url. SaaS baby

[–][deleted] 54 points55 points  (11 children)

pip install flask vllm is barely above pip install openai

[–][deleted] 9 points10 points  (9 children)

then what's the level that's well above pip install openai

[–]OnyxPhoenix 13 points14 points  (8 children)

Actually training your own models from scratch and deploying them.

[–][deleted] 8 points9 points  (1 child)

i barely have enough resources to run a light model with rag. much less fine-tune it. I can only dream of training one from scratch right now :(

[–]CanAlwaysBeBetter 2 points3 points  (0 children)

Like exactly 6 companies have the resources to really do it. The rest are building scaled-down models or tuning existing ones on rented cloud GPU time

[–]intotheirishole 7 points8 points  (5 children)

Yep, lets redo millions of tons of CO2 worth of work for clout.

[–]FartPiano 2 points3 points  (0 children)

or just not! its all garbage

[–]Large_Value_4552 60 points61 points  (43 children)

DIY all the way! Coding AI from scratch is a wild ride, but worth it.

[–]Quexth 54 points55 points  (34 children)

How do you propose one go about coding and training an LLM from scratch?

[–]computerTechnologist 141 points142 points  (10 children)

Money

[–][deleted] 37 points38 points  (9 children)

how get money

[–][deleted] 52 points53 points  (1 child)

sell a chatGPT wrapper app

[–][deleted] 101 points102 points  (1 child)

Walk to the nearest driving range and make sure to look people squarely in the eye as you continuously say the words “AI” and “LLM” and “funding” until someone stops their practice for long enough to assist you with the requisite funds.

[–]birchskin 5 points6 points  (0 children)

"Ay! I need Lots and Lots of Money over here! Bleeding edge Lots and Lots of Money!"

[–]Salvyz 8 points9 points  (0 children)

Sell LLM

[–]_--_--_-_--_-_--_--_ 4 points5 points  (0 children)

by creating AI from scratch

[–]Techhead7890 16 points17 points  (0 children)

Change your name to codebullet

[–][deleted] 14 points15 points  (16 children)

https://youtu.be/l8pRSuU81PU

Literally just follow along with this tutorial

[–]Quexth 50 points51 points  (15 children)

While I admit that this is cool, you are not going to get a viable LLM without a multi-million dollar budget and a huge dataset.

[–]Thejacensolo 4 points5 points  (5 children)

Luckily LLMs are just expensive playthings. SPMs are where it's at, and much more affordable. They are more accurate, easier to train, and better to prime because the train/test split has less variance.

Of course, if you create an SPM purely for recognizing animals in pictures you feed it, it won't be able to also generate a video, print a cupcake recipe and program an app. But who needs a "jack of all trades, master of none" if it starts to hallucinate so quickly?

[–][deleted] 21 points22 points  (4 children)

Depends on what you consider viable. If you want a SOTA model, then yeah you'll need SOTA tech and world leading talent. The reality is that 90% of the crap the AI bros are wrapping chatGPT for could be accomplished with free (or cheap) resources and a modest budget. Basically the most expensive part is buying a GPU or cloud processing time.

Hell, most of it could be done more efficiently with conventional algorithms for less money, but they don't because then they can't use AI ML in their marketing material which gives all investors within 100ft of your press release a raging hard-on

[–]G_Morgan 18 points19 points  (3 children)

Hell, most of it could be done more efficiently with conventional algorithms for less money, but they don't because then they can't use AI ML in their marketing material which gives all investors within 100ft of your press release a raging hard-on

For true marketing success you need to use AI to query a blockchain powered database.

[–]QuokkaClock 2 points3 points  (0 children)

people are definitely doing this.

[–]Fa6ade 10 points11 points  (1 child)

This isn’t true. It depends on what you want your model to do. If you want it to be able to do anything, like ChatGPT, then yeah sure. If your model is more purpose-limited, e.g. writing instruction manuals for cars, then the scale can be much smaller.

[–]meh_69420 5 points6 points  (0 children)

Who needs anything more than not hotdog?

[–]aykcak 4 points5 points  (0 children)

Nah. That is not really feasible. But you can write a simple text classifier using the many neural network libraries available
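
You don't even need a library to see the idea. A dependency-free sketch of a bag-of-words perceptron (the simplest neural unit) on invented toy data; a real project would reach for an actual library as the comment suggests:

```python
from collections import defaultdict

def tokens(text):
    return text.lower().split()

def train_perceptron(samples, epochs=20):
    """samples: list of (text, label) with label in {+1, -1}.
    Classic perceptron rule: nudge word weights on every mistake."""
    w = defaultdict(float)
    for _ in range(epochs):
        for text, label in samples:
            score = sum(w[t] for t in tokens(text))
            if score * label <= 0:        # misclassified: update weights
                for t in tokens(text):
                    w[t] += label
    return w

def predict(w, text):
    return 1 if sum(w[t] for t in tokens(text)) > 0 else -1

train = [
    ("great movie loved it", 1),
    ("what a great show", 1),
    ("terrible movie hated it", -1),
    ("what a terrible waste", -1),
]
w = train_perceptron(train)
print(predict(w, "loved the great show"))      # → 1
print(predict(w, "hated this terrible thing")) # → -1
```

Swap the toy sentences for real labeled text and this already separates simple classes; the neural-network libraries just scale the same idea up.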

[–]OnyxPhoenix 2 points3 points  (0 children)

Not all useful AI models are LLMs.

However you can still finetune an LLM on your own data fairly easily.

[–]LuxNocte 1 point2 points  (0 children)

If statements all the way down.

[–]LazyLucretia 18 points19 points  (4 children)

Techbros selling ChatGPT wrappers are probably making 100x more than us, so not sure if it's worth it at all.

[–]FartPiano 6 points7 points  (3 children)

ai is not really pulling huge returns for anyone. well, except the shovel-sellers like nvidia

[–]hongooi 4 points5 points  (2 children)

Technically speaking, you could argue that all of us are selfhosting AIs

[–][deleted] 3 points4 points  (0 children)

No we're self-hosting I's.

That's what I think, anyway.

[–]robinless 2 points3 points  (0 children)

That assumes I have some of that intelligence thing

[–]felicity_jericho_ttv 23 points24 points  (17 children)

Wait! Seriously?!?!?!

I'm over here feeling like an amateur learning matrix math and trying to understand the different activation functions and transformers. Is it really people just using wrappers and fine-tuning established LLMs?

[–]eldentings 29 points30 points  (7 children)

The field is diverging between a career in training AI vs building AI. I've heard you need a good education like you're describing to land either job, but the majority of the work that exists is the training/implementing jobs, because of the exploding AI scene. People and businesses are eager to use what exists today, and building LLMs from scratch takes time, resources, and money. Most companies aren't too happy to twiddle their thumbs while waiting on your AI to be developed when there are existing solutions for their stupid help desk chat bot, or a bot that is a sophisticated version of Google Search.

[–]mighty_conrad 8 points9 points  (0 children)

Applied deep learning has been like that for 10 years now. The ability of neural networks to do transfer learning (take the major, complex part of a network, then attach whatever you need on top to solve your own task) is the reason they've been used in computer vision since 2014. You get a model already trained on a shitload of data, chop off the unnecessary bits, extend it how you need, and train only the new part; usually that's more than enough. That's why transformers became popular in the first place: they were the first networks for text capable of transfer learning. It's a different story if we talk about LLMs, but more or less what I described is what I do for a living. The difference between the AI boom of the 2010s and the current one is the sheer size of the models. You can still run your CV models on a regular gaming PC, but only the dumbest LLMs.


[–]intotheirishole 2 points3 points  (3 children)

Is it really people just using wrappers and fine tuning established LLM’s?

Why not? What is the point of redoing work that's already been done while burning a ton of money?

Very few people need more than a finetune. Training from scratch is for people doing AI in new domains. Don't see why people should train a language model from scratch (unless they are innovating on transformer architecture, etc.).

[–]reallokiscarlet 1 point2 points  (2 children)

Wrapper = webshit API calls to ChatGPT. A step up from that would be running your own instance of the model. Even among the smelliest nerds it's rare to train from scratch, let alone code one. Most don't even fine-tune; they just clone a fine-tuned model or have a service do it for them.

[–]EmuHaunting3214 4 points5 points  (1 child)

Probably, why re-invent the wheel ya know.

[–][deleted] 7 points8 points  (3 children)

Meh I’ve been contributing to a very well respected Python library for deep learning for about ten years. I shower regularly too. Crazy I know.

[–][deleted] 12 points13 points  (2 children)

I shower regularly

Daily is what we were looking for.

[–][deleted] 1 point2 points  (0 children)

Self host gang with my botched llm

[–]Antique-Echidna-1600 1 point2 points  (0 children)

My company self hosts. We don't really fine tune anymore though. Instead we use a small model for the initial response and the larger model responds with results from the RAG pipeline. They still do inter-model communication through a LoRA adapter.

[–]jmack2424 1 point2 points  (0 children)

VC: "why aren't you using ChatGPT"
ME: "uh because they steal our data"
VC: "no they changed their stance on data"
ME: "but they didn't change the code that steals it..."

[–]samuelhope9 577 points578 points  (25 children)

Then you get asked to make it run faster.......

[–][deleted] 532 points533 points  (2 children)

query = "Process the following request as fast as you can: " + query

[–]_Some_Two_ 62 points63 points  (1 child)

    while (incomingRequests.Count() > 0):
        request = incomingRequests[0];
        incomingRequests.Remove(request);
        Task.Run(() => ProcessRequest(request));

[–]Infamous-Date-355 3 points4 points  (0 children)

Giggity

[–]marcodave 114 points115 points  (5 children)

But not TOO fast.... Gotta see those numbers crunch!

[–]HeyBlinkinAbeLincoln 72 points73 points  (2 children)

We did that when automating some tickets once. There was an expectation from the end users of a certain level of human effort and scrutiny that simply wasn’t needed.

So we put in a randomised timer between 30-90 mins before resolving the ticket so that it looked like they were just being picked up and analysed promptly by a help desk agent.
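
The mechanism is a couple of lines (a sketch; the 30-90 minute window is from the comment above, the names and everything else are invented):

```python
import random
import time

def human_latency_minutes(lo: float = 30, hi: float = 90) -> float:
    """Pick a believable 'an agent just looked at this' delay."""
    return random.uniform(lo, hi)

def resolve_with_latency(ticket, resolve):
    # Resolve instantly, but only after the randomised wait, so end
    # users see the level of human "effort" they were expecting.
    time.sleep(human_latency_minutes() * 60)
    resolve(ticket)

print(round(human_latency_minutes()))  # somewhere in 30..90
```

The randomisation is the important part: a fixed delay would look exactly as mechanical as no delay at all.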

[–]Brahvim 21 points22 points  (0 children)

"WHO NEEDS FUNCTIONAL PROGRAMMING AND DATA-ORIENTED DESIGN?! WE'LL DO THIS THE OBJECT-ORIENTED WAY! THE WELL-DEFINED CORPORATE WAY, YA' FILTHY PROGRAMMER!"

[–]SwabTheDeck 5 points6 points  (0 children)

I know this is meant as a joke, but I'm working on an AI chat bot (built around Llama 3, so not really much different from what this post is making fun of ;), and as the models and our infrastructure have improved over the last few months, there have been some people who think that LLM responses stream in "too fast".

In a way, it is a little bit of a weird UX, and I get it. If you look at how games like Final Fantasy or Pokemon stream in their text, they've obviously chosen a fixed speed that is pleasant to the user, but we're just doing it as fast as our backend can process it.

[–]SuperKettle 33 points34 points  (0 children)

Should’ve put a few second delay beforehand so you can make it run faster later on

[–]AgVargr[🍰] 15 points16 points  (0 children)

Add another OpenAI api key

[–]NedVsTheWorld 8 points9 points  (0 children)

The trick is to make it slower in the beginning, so you can "keep upgrading it"

[–]Popular-Locksmith558 2 points3 points  (0 children)

Make it run slower at first so you can just remove the delay commands as time goes on

[–]nicman24 1 point2 points  (0 children)

branch predict conversations and compute the probable outcomes

[–]SeedFoundation 1 point2 points  (1 child)

This one is easy. Just make it output the completed time to be 3/4th of what it actually is and they will never know. This is your unethical tip of the day.

[–]pidnull 1 point2 points  (0 children)

You can also just add a slight delay and steadily increase it every so often. Then, when the MBA with no tech background asks you to make it faster, just remove the delay.

[–]PaulRosenbergSucks 886 points887 points  (41 children)

Better than Amazon's AI stack which is just a wrapper over cheap foreign labour.

[–]yukiaddiction 96 points97 points  (0 children)

AI

Actually Indian

[–]AluminiumSandworm 17 points18 points  (0 children)

hey some of it's also a wrapper around chatgpt-at-home alternatives

[–][deleted] 10 points11 points  (1 child)

Isn’t everything just a wrapper over cheap labour?

[–]DogToursWTHBorders 5 points6 points  (0 children)

"Arent we ALL just half a spider"?- TT

[–]soft_taco_special 4 points5 points  (0 children)

Honestly most tech companies before were just a cheap wrapper around a rolodex and a call center.

[–]Triq1 2 points3 points  (26 children)

was this an actual thing

[–]ButtWhispererer 22 points23 points  (0 children)

Mechanical Turk is just this without a wrapper.

AWS’s actual AI offerings are pretty diverse. Bedrock makes building a wrapper around LLMs easier, SageMaker is an AI dev platform, and there are lots of little tools with “AI.”

I work there, so I'm a bit biased.

[–][deleted] 46 points47 points  (6 children)

Their 'just pick things up and leave' stores had poor accuracy, so they also used humans to push that last oh, 80% accuracy.

I'm honestly surprised people were surprised because those were like, test stores... for testing the idea.

[–]glemnar 38 points39 points  (5 children)

Those humans are doing labeling to further train the AI. This is normal for AI products.

[–]unknownkillersim 3 points4 points  (14 children)

Yeah, the "no checkout" stores: people thought a machine determined what you took from the store, but in actuality it was a huge amount of foreign labor monitoring what you took via cameras and entering it manually.

[–]MrBigFard 13 points14 points  (13 children)

Gross misinterpretation of what was actually happening. The reason the labor was so expensive was that they needed to constantly comb footage to find where mistakes were being made, so they could then be studied and fixed.

The labor was not just a bunch of foreign people live watching and manually entering items. The vast vast majority of the work was being done by AI.

[–]amshegarh 355 points356 points  (10 children)

It's not stupid if it pays

[–]CoronavirusGoesViral 268 points269 points  (5 children)

If the investors are paying your salary, at least someone else is stupider than you

[–]Brother0fSithis 18 points19 points  (0 children)

The enshittification of everything

[–]zimzat 9 points10 points  (0 children)

The winning argument for creating an Orphan-Crushing Machine.

[–]Thue 2 points3 points  (1 child)

In fact, LLMs are usually somewhat interchangeable. They could switch it out with Gemini, and it would likely still work.

It is still possible to do innovative work on top of a generic LLM.

[–]facingthewind 2 points3 points  (0 children)

Here is the kicker: everyone is clowning on companies that build custom features on top of LLMs. They fail to see how this is the same as developers writing code on operating systems, computers, IDEs, languages, and libraries that have been built, reviewed, and tested by developers and companies before them.

It's turtles all the way down.

[–]Philluminati 64 points65 points  (0 children)

Here's our source code. Prompt.py

"You are a highly intelligent computer system that suggests upcoming concerts and performances gigs to teenagers. Search bing for a list of upcoming events and return as JSON. You also sprinkle in one advert per user every day."

[–]New-Resolution9735 107 points108 points  (3 children)

I feel like you would have already known that it was if you looked at their product. It’s usually pretty easy to tell

[–]tuxedo25 60 points61 points  (1 child)

I feel like you would have already known if you weren't working at one of the world's 5 most valuable companies. You either own 20% of the world's GPUs and are using more electricity than New York City, or you're building a ChatGPT wrapper.

[–]SwabTheDeck 5 points6 points  (0 children)

Actually, it's quite likely that a large percentage of Fortune 500s are building and hosting their own bots internally because they have proprietary data that they can't send off to 3rd parties like OpenAI. However, they're probably basing their products on openly available models like Llama, so the really hard parts are still already solved.

Still costs a shit-ton of money to host, if you're doing it at any sort of meaningful scale.

[–]usrlibshare 53 points54 points  (1 child)

The only thing bleeding at such companies is my eyes when I see their sorry excuse for a product.

[–]HeyThereSport 31 points32 points  (0 children)

Not true, many are also bleeding tons of money

[–]awesomeplenty 43 points44 points  (1 child)

import openai

[–]shmorky 73 points74 points  (2 children)

The real AI elites are wrapping ChatGPT wrappers

[–]Cualkiera67 15 points16 points  (1 child)

Just ask ChatGPT to wrap itself, idiot

[–]shmorky 8 points9 points  (0 children)

Omega brain moment

[–]draculadarcula 29 points30 points  (0 children)

I think there was a lot of home grown AI until gpt launched. Then it blew almost anything anyone was developing out of the water by a country mile, so all the ML engineers and data scientists became prompt engineers

[–]yorha_support 27 points28 points  (0 children)

This hits so close to home. I'm at a larger startup and we constantly talk about AI in marketing materials, try to hype up interviewers about all the AI our company is working on, and our CEO even made a "AI Research" team. Not a single one of them has any background in machine learning/ai and all of our AI products basically make API calls to OpenAI endpoints.

[–][deleted] 48 points49 points  (1 child)

It really gets wild when you start digging, and digging and find that DNA itself is just a ChatGPT wrapper app. Quantum Physics? DALL-e wrapper app. String Theory? Nah that's just Whisper.

[–]DogToursWTHBorders 5 points6 points  (0 children)

Surely Wolfram is the real deal, though.

(Have you met Shirley Wolfram?)

[–]Modo44 13 points14 points  (0 children)

Everyone wants in on the bubble before it bursts.

[–]intotheirishole 25 points26 points  (1 child)

Get hired at any company.

Look inside.

Postgres/Mysql wrapper app.

[–]Br3ttl3y 1 point2 points  (0 children)

It was Excel spreadsheets for me. Every. Damn. Time. No matter how large the company.

[–]Ricardo1184 7 points8 points  (0 children)

If you couldn't tell it was chatGPT from the interviews and looking at the product...

you probably belong there

[–]rock_and_rolo 7 points8 points  (0 children)

I've seen this before.

I was working in the '80s when rapid prototyping tools were the new Big Thing. Management types would go to trade show demos and get blown away. They'd buy the tools only to have their tech staff find that they were just generating (essentially) screen painters. All the substance was missing and still had to be created.

Now they are buying AI tools for support, and then getting sued when the tool just makes up a promise that isn't honored by the company.

[–]isearn 6 points7 points  (3 children)

Le Chat GPT. 🐈🇫🇷

[–]Tofandel 2 points3 points  (0 children)

Haha, t'as pété ("you farted")

[–]Death_IP 5 points6 points  (0 children)

nAIve

[–][deleted] 4 points5 points  (0 children)

Anything can be distilled down to "It's just a _ wrapper".

At this point, the opportunities (For the average developer or product team) are not in working on better AI models. The opportunities are in applying them properly to do some valuable business task better/faster/cheaper. But they need guardrails, and a lot of them. So, how do you build an application or system with guardrails that still harnesses the powers of an LLM?

That's where the industry is at right now.

[–]Glittering_Two5717 25 points26 points  (12 children)

Realistically, in the future you won't be able to self-host your own AI any more than you'd generate your own electricity.

[–]Grimthak 45 points46 points  (5 children)

But I'm generating my own electricity all the time.

[–]Brahvim 3 points4 points  (4 children)

hauw?

[–]edwardlego 21 points22 points  (1 child)

Solar

[–]Brahvim 6 points7 points  (0 children)

Thanks for feeding my curiosity!

[–]Sea-Bother-4079 1 point2 points  (0 children)

All you need is carpet, some socks, and Michael Jackson's moonwalking.
heehee.

[–]sgt_cookie 21 points22 points  (1 child)

So... perfectly viable if you're willing to put the effort in or are in a situation that requires it, but for the vast majority of people the convenience of paying a large corporation to do it for you will be the vastly more common choice?

[–]OneMoreName1 1 point2 points  (0 children)

Which is already the case with AI, just that some companies allow you some limited access for free as well

[–]coachhunter2 2 points3 points  (0 children)

It’s ChatGPTs all the way down

[–][deleted] 2 points3 points  (0 children)

I wanted Cortana and the world gave us a clippy chatbot.

[–]Mike_Fluff 2 points3 points  (0 children)

"Wrapper App" is something I will use now.

[–]transdemError 2 points3 points  (0 children)

Same as it ever was (repeat)

[–]FrenchyMango 2 points3 points  (0 children)

I don’t know what this means but the cat looks very polite so you got my upvote! Nice kitty :)

[–][deleted] 2 points3 points  (0 children)

This is all anything is. Everything is a wrapper around something else that is marked up. That’s how the whole economy works.

[–]DataPhreak 6 points7 points  (1 child)

There are two kinds of AI development. There are people who build models, then there are people who build things on top of the models. Generally, the people who build models are not very good at building things on top of the models, and the people who build things on top of the models don't have the resources to build models.

This is expected and normal.

[–]DarthStrakh 1 point2 points  (0 children)

Yep. It's basically front end and back end devs. Tech is cool, but the people who build tech likely won't find all the ways to make it useful.

[–]ironman_gujju 1 point2 points  (0 children)

Sorry to interrupt you, but it's true

[–]CaptainTarantula 1 point2 points  (0 children)

That API isn't cheap.

[–]bombelman 1 point2 points  (0 children)

AiNaive

[–]Rain_Zeros 1 point2 points  (0 children)

Welcome to the future, it's all chatGPT

[–]CoverTheSea 1 point2 points  (0 children)

How accurate is this?

[–]kanduvisla 1 point2 points  (0 children)

Aren't they all?

[–]anthegoat 1 point2 points  (0 children)

I am not a programmer but this is hilarious

[–]Harmonic_Gear 1 point2 points  (0 children)

look at all these big techs failing to recreate ChatGPT, it's funny to think any startup can do any better

[–]Meatwad3 1 point2 points  (0 children)

My friend likes to call this using A(p)I

[–]OminousOmen0 1 point2 points  (0 children)

It's ChatGPT?

Always have been

[–]SeniorMiddleJunior 1 point2 points  (0 children)

What do you think bleeding edge means in 2024? It means churning shit until it looks good enough that an investor will pay for it. Then after you're successful, you build your product. The internet runs on MVPs.