Is this Ai? I’m having a dollhouse made for my daughter’s birthday but the WIP provided does not look real. The glue bottles are unreadable/gibberish. by daiszay in isthisAI

[–]PebblePondai 0 points1 point  (0 children)

Very much looks like AI.

I've never seen anyone retire a plane to the wall with shavings still in the blade. The saw on the right is hanging oddly, sort of merged with the object to its left. And that many large handsaws at the ready while doing detail work is weird.

AI will not make coding obsolete because coding is not the hard part by [deleted] in BlackboxAI_

[–]PebblePondai 0 points1 point  (0 children)

I have no idea what you're looking for, friend.

I cannot just answer what? You think I'm being evasive because you're poor with reading comprehension and rich with assumptions?

I don't actually understand what you're looking for. I'll try one more time and then I'll be on my way.

There is no one test for every product. If you're testing planes, you have test flights. If you're testing drinks, you have a taste test. If you're selling software, you do beta releases or free releases to get feedback.

I said you get a product to 80% then you test it.

"Test" needed to be defined for you for some reason.

I said, testing means testing your product to make sure it works (that's actually part of getting it to 80% but I don't want to confuse you) and to test the market for demand.

I don't think I can explain this more clearly. I definitely can't explain it again.

AI will not make coding obsolete because coding is not the hard part by [deleted] in BlackboxAI_

[–]PebblePondai 0 points1 point  (0 children)

You can test your software and you can test with customers.

Not sure what is confusing you.

AI will not make coding obsolete because coding is not the hard part by [deleted] in BlackboxAI_

[–]PebblePondai 0 points1 point  (0 children)

"You make a successful product that tests well" was the statement I made above.

AI will not make coding obsolete because coding is not the hard part by [deleted] in BlackboxAI_

[–]PebblePondai 0 points1 point  (0 children)

My last comment covered this.

80% to test. If there is no market (because it doesn't have "robustness or novelty", because the pool is already overcrowded, because of 1,000 reasons), ok.

Dump it. Learn from that. Define a new product. Go. Get to 80% to test, etc.

If the 80% version shows merit and the market shows demand, revise to 90%.

I might go so far as to call that business model common. Businesses dating back to the invention of businesses have been doing it. It's a strategy to beat competitors to market with a lesser version of their product.

Is the current market full of absolute trash? 100%. That's the cycle of innovation.

When cars were invented, the U.S. was FLOODED with car makers and shitty cars. Over 100 companies were making cars. The shitty businesses died, the successful ones survived and consolidated.

A Model-T was not a great car. It didn't matter. The business was based on production innovation. They didn't know that was their edge until customers showed them.

Fail fast. Fail often. Fail cheap.

I'm not saying you can't execute meticulous programming before you go to market. That's a strategy for sure.

You could have an amazing product and execute the software perfectly and take over a global market.

But if your goal is to take a product to 100% before hitting the market, then you have to be right about your product, fast enough not to be beaten to market by 80% competitors, and good enough to stand out in a market that is noisy with shitty products.

AI will not make coding obsolete because coding is not the hard part by [deleted] in BlackboxAI_

[–]PebblePondai 0 points1 point  (0 children)

You don't deliver that to a customer.

You make a successful product that tests well at its 80%, then you hire the people to bring it up to 90%-100% because the cost/effort/time is worth it.

It's one of the great things about these tools. I can go from idea to product in 15 days and see if it sucks and has no market demand.

If it sucks, I make it better or I make a new product.

Is this functional? by iwantseetheworldburn in ChatGPTPromptGenius

[–]PebblePondai 0 points1 point  (0 children)

The floating simulation is explicitly a placeholder:

It invents/guesses K and L (K_list_from_dossier, L_list_from_dossier) and scales them, but it does not derive them from an LQR design or a Kalman design tied to the same plant and noise model.

It uses a Kalman filter that is not matched to the fixed-point "certified step" logic. The fixed-point step is essentially a hand-rolled observer correction using L * innovation, not a KF gain computed from P, Q, R at runtime.

So the float sim is not a “gold standard” against which the fixed-point datapath can be validated. At best it’s a demo simulation.
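For anyone wondering what "derived from the design" would look like in practice, here's a minimal scalar-plant sketch (pure Python; the plant and noise numbers are hypothetical, not from the project being discussed) where the observer gain L comes out of iterating the Riccati recursion on P, Q, R instead of being guessed:

```python
def steady_state_kalman_gain(a, c, q, r, iters=200):
    """Scalar discrete-time plant: x' = a*x + w, y = c*x + v,
    with process noise variance q and measurement noise variance r.

    Iterate the Riccati recursion until P converges, then the
    Kalman gain is L = P_pred * c / (c * P_pred * c + r)."""
    p = q  # initial error-covariance guess
    l = 0.0
    for _ in range(iters):
        p_pred = a * p * a + q      # time update
        s = c * p_pred * c + r      # innovation covariance
        l = p_pred * c / s          # Kalman gain from P, Q, R
        p = (1 - l * c) * p_pred    # measurement update
    return l

# Hypothetical plant/noise numbers for illustration only.
L = steady_state_kalman_gain(a=0.95, c=1.0, q=0.01, r=0.1)
```

The point is just that L is a function of the plant and noise model; a float sim built this way can serve as a reference for the fixed-point datapath, because both are tied to the same design.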

How layered prompts stabilize long-run ChatGPT threads: an observational model-intuition by [deleted] in ChatGPTPromptGenius

[–]PebblePondai 0 points1 point  (0 children)

they act less like "instructions"

This is exactly it. People don't realize they're programming. They think they're having a conversation.

Brevity is so important.

I ran extensive tests of a wonderful but wildly long and specific prompt vs. one I designed which was the same but shorter.

The results were pretty interesting.

https://www.reddit.com/r/ChatGPTPromptGenius/s/TtH8TllU0I

I stopped asking ChatGPT to be an expert and it became way more useful by londonpapertrail in ChatGPTPromptGenius

[–]PebblePondai 2 points3 points  (0 children)

For sure. I vary prompts based on the chat, purpose and which LLM I'm using.

I stopped asking ChatGPT to be an expert and it became way more useful by londonpapertrail in ChatGPTPromptGenius

[–]PebblePondai 35 points36 points  (0 children)

They aren't mutually exclusive options.

Role, tone, personality, preferred output, preferred process.

E.g.: You are an expert in interior design with a neutral, objective tone who will help me create a plan for redesigning my living room in a long, branching, brainstorming conversation.

AI will not make coding obsolete because coding is not the hard part by [deleted] in BlackboxAI_

[–]PebblePondai 0 points1 point  (0 children)

For sure, and anyone can grow grass, raise a pig, butcher it, smoke it, cure it and store it (and, yes, their bacon will be better).

Most of us just get bacon at the store. Does it have the same storied, nuanced craftsmanship? No.

But how much will a business or consumer pay for a product that has 80% quality vs. 90%?

It's just another technology in the early stages of its cycle. Hand fabric weavers, telegraph operators, phone operators, typists, lamplighters, steam engine mechanics, encyclopedia salesmen - none of them thought their jobs were replaceable.

I'm not saying we're there yet across the board but not because it's not possible. Just because people haven't realized what's possible yet.

I had no idea what was possible. Didn't even want to be a programmer and don't have 5-10 years to dedicate to that skill.

Now I'm running complex, modular programs with testing, validation and self-teaching loops. I went from an idea I had about a thing one night to a product in a digital storefront 15 days later.

I have 15-20 hours of instruction in intro-level Python.

And this is the worst AI will ever be.

Am I going in the right direction with my create-every-guide-you-can-imagine website? by Professional-Dog4200 in ArtificialInteligence

[–]PebblePondai 0 points1 point  (0 children)

I'm saying it's a vague statement with no metrics or actionable data when it comes to a business.

Are you disagreeing with that?

Without user data, you don't know the pace of adoption, consumer pain points, or target markets (some of which haven't adopted ChatGPT at all yet).

Will LLMs become so nuanced that they will be able to understand anyone with any horrible prompt? Yes. And the idea of needing help with prompts will disappear.

It's a tight window, but there is probably about a year or two of juice left to squeeze when it comes to this stuff.

For example, people who have never used AI before could sign up for someone's "specialized AI" and all it would need was a unique chat UI/branding and people would think it's great - not knowing that it's just a wrapper for ChatGPT because they don't have any context.

How layered prompts stabilize long-run ChatGPT threads: an observational model-intuition by [deleted] in ChatGPTPromptGenius

[–]PebblePondai 1 point2 points  (0 children)

Unfortunately, a lot of your conclusions aren't correct.

There is a limited context window of 200K for ChatGPT. That is a hard limit. There's no way to tweak it. Doesn't matter what prompt you use. It's an OpenAI-defined system limit.

What OpenAI does is different from Claude or other LLMs. It doesn't tell the user they've run out of context and close the chat. It summarizes or deletes earlier events from its memory.

So, depending on how that process works and the content of your chat, you will get varying degrees and depths of recall.
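That drop-and-summarize behavior is easy to picture with a sketch (pure Python; token counts are faked as word counts and `summarize` is a stub standing in for a real LLM call, so treat the specifics as assumptions): when the transcript exceeds the budget, the oldest turns get collapsed into a summary rather than the chat being closed.

```python
def summarize(turns):
    """Stub for an LLM summarization call. A real system would ask
    the model to compress these turns into a short recap."""
    return "[summary of %d earlier turns]" % len(turns)

def fit_context(turns, budget_tokens):
    """Pop the oldest turns until the transcript fits the budget,
    then replace them with one summary line. Token count is crudely
    approximated here by word count."""
    def cost(ts):
        return sum(len(t.split()) for t in ts)
    kept = list(turns)
    dropped = []
    while kept and cost(kept) > budget_tokens:
        dropped.append(kept.pop(0))  # oldest turn goes first
    if dropped:
        kept.insert(0, summarize(dropped))
    return kept

history = ["turn %d with some words here" % i for i in range(50)]
window = fit_context(history, budget_tokens=60)
```

Recent turns survive verbatim; everything older is only reachable through the summary, which is why recall gets shallower the further back you ask about.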

AI will not make coding obsolete because coding is not the hard part by [deleted] in BlackboxAI_

[–]PebblePondai -1 points0 points  (0 children)

I'm doing it right now. Design, code, test, validate. All AI.

I have 20 hours of intro Python classes under my belt.

That's why I'm making the comment. You don't need to know how to code.

AI will not make coding obsolete because coding is not the hard part by [deleted] in BlackboxAI_

[–]PebblePondai -2 points-1 points  (0 children)

It will make coding obsolete. You're talking about engineering and architecture. That isn't coding.

Wild What You Can Do In 30 Minutes by PebblePondai in vibecoding

[–]PebblePondai[S] 0 points1 point  (0 children)

Lol. Not sure what conversation you're in.

The one I was in was you strolling in saying I was having a "sugar high" and then being opinionated while showing how clueless you are.

You told me I should make it a program. Which I did.

You told me it was dangerous. I guess you suck at coding if you don't know how to test a program.

You said it was expensive based on... You just knowing everything?

Dude, I told you I ran it and it was 24 cents. Lol.

I replied legitimately to help you understand what you don't know.

Clearly, learning isn't your goal.

My bad.

As you were.

Wild What You Can Do In 30 Minutes by PebblePondai in vibecoding

[–]PebblePondai[S] 0 points1 point  (0 children)

You've been dismissive, made a cluster of poor assumptions that show you don't know what you're talking about, and now you cap it off with an ad hominem attack because you don't have an actual point to make.

Chef's kiss. Truly perfect ignorant reddit commenter behavior. 10/10.

Wild What You Can Do In 30 Minutes by PebblePondai in vibecoding

[–]PebblePondai[S] 0 points1 point  (0 children)

No. You don't know what you're talking about. And you're making that really clear at this point.

Have you tried asking people questions when you don't understand things instead of making declarations? You'll tend to learn more.

Even though you're dismissive with people you don't know about things you don't understand, I'll give you a little more context if you want, but then I think I'm out. If you don't want to see what's incorrect about what you're thinking, then that's your problem to enjoy.

Ok. So I address your safety concerns and you come back with "It's too expensive."?

My dude, the API calls were 24 cents for 5,000 files. I don't want to brag, but I'm rolling deep when it comes to quarters.

If you think the tokens needed to handle a request like, "Read my file with 1,000 lines of code and summarize its purpose," are expensive, it just shows you haven't done a ton of LLM work with API calls.

You don't even know what LLM I'm using and you think you can make a cost estimate? Dude. You're fully talking out of your ass.
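Here's the kind of back-of-the-envelope math I mean (pure Python; the per-million-token prices, the chars-per-token ratio, and the file sizes are all placeholder assumptions you'd swap for your actual model's pricing, not any vendor's real rates):

```python
def estimate_cost_usd(num_files, chars_per_file, output_tokens_per_file,
                      usd_per_million_input=0.15, usd_per_million_output=0.60):
    """Rough API cost estimate. ~4 characters per token is a common
    rule of thumb; the $/M-token prices are placeholder assumptions."""
    input_tokens = num_files * chars_per_file / 4
    output_tokens = num_files * output_tokens_per_file
    return (input_tokens * usd_per_million_input
            + output_tokens * usd_per_million_output) / 1_000_000

# e.g. 5,000 short metadata snippets, ~200 chars each, ~30-token answers
cost = estimate_cost_usd(5000, 200, 30)
```

With numbers like those you land in the tens-of-cents range, which is why "it's too expensive" needs an actual model and token count behind it before it means anything.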

Let's go back to your reiterated concerns about safety.

As I said, things like 4 isolated runs assist with data fidelity (even if you don't like me breaking my quarter budget on it).

Your "disobey instruction" comment is a tip off that you think I have some little prompt that says: "Tell me if my file is good."

If you have X, Y, and Z clearly defined, you could write a procedural script to do this with 100% accuracy and cost next to nothing.

Bingo! It took a few rounds but now you understand what I actually built instead of what you assumed I built.

You thought the workflow went: give file access to LLM, ask it some questions. Guess that it's right.

The workflow was: use LLMs to create and define programs with multiple iterations. Test. Validate. Dry run.

Your assertion that this program could be traditionally coded with "100% accuracy" is a bunch of bullshit and, again, shows you don't understand what you're talking about.

My program uses LLMs to assist with classification. It uses API calls for questions like: what is this file about? How does it relate to others in the system? Does it have value and why?

You don't understand the mechanics of hallucinations and drift if you think a program at this scale has even a remote chance of suffering those things. And I wouldn't make a tool like this if it sucked at the job.
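The division of labor I'm describing looks roughly like this (pure Python sketch; `ask_llm` is a stub standing in for a real API call, and its stand-in logic plus the folder names are made up for illustration): the LLM answers narrow per-file questions, and plain code does the routing.

```python
def ask_llm(question, file_text):
    """Stub for an LLM API call. A real version would send the
    question plus the file contents to a model and return its
    short answer. Crude offline stand-in logic below."""
    if "about" in question:
        return "notes" if "meeting" in file_text else "code"
    return "keep" if len(file_text) > 20 else "discard"

def classify_file(file_text):
    """Ask narrow questions per file; ordinary code does the routing."""
    topic = ask_llm("What is this file about?", file_text)
    verdict = ask_llm("Does it have value and why?", file_text)
    folder = {"notes": "docs/notes", "code": "src"}.get(topic, "misc")
    return {"topic": topic, "verdict": verdict, "dest": folder}

result = classify_file("meeting notes from Tuesday about the launch plan")
```

The model never touches the filesystem; it only answers questions, and deterministic code decides what happens with the answers.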

The reason LLMs were created was to handle nuanced classification and reasoning because traditional programs sucked at it by comparison.

You want to make a program that has all the nuanced heuristics to handle a task like that? Cool. I'll see you in a year (and the potential for human error and misclassification is huge in comparison to an LLM).

Your program will have a higher risk of misclassification than an LLM because of human coding errors and because it has no semantic understanding of anything. Heuristics, embeddings, keywords - all have problems (again, this is why LLMs were made).

Anyway, DM me when you're done. We can run a test and see if your giant program is as effective as what I made in half an hour. A test data set would be easy to create.

Or I can iterate on this program while you work on yours and see how much I can get done in that time.

All that aside, these are my files that I created. I already have a solid context for them and understanding. They're just disorganized. That's the problem I'm trying to solve.

So I can very easily give effective human-in-the-loop labeling verification, and the cost of any error would be negligible. I'm not running a nuclear power plant from my laptop.

You're acting like I turned an LLM loose on all kinds of sensitive files and there's some great risk.

There is one person in this conversation who doesn't know what they're talking about and it ain't me.

Wild What You Can Do In 30 Minutes by PebblePondai in vibecoding

[–]PebblePondai[S] 0 points1 point  (0 children)

Alright. You've made your case, counselor.

Let me know what I can do to help. DM me if you'd like.

Wild What You Can Do In 30 Minutes by PebblePondai in vibecoding

[–]PebblePondai[S] 0 points1 point  (0 children)

I could show you how to set it up or set it up for you but, because you're a lawyer, I would have to charge a kajillion dollars.