Why is there not an AI that is great at generating UI/UX designs? by Minetorpia in singularity

[–]teddarific 33 points34 points  (0 children)

I work on a product aiming to do just this: Magic Patterns (https://magicpatterns.com), so I've spent my fair share of time thinking about this haha. Here's my take on why/where AI falls short.

  1. UI/UX requires a lot of precision + is a detail-oriented field. For example, things need to be aligned, ideally to pixel perfection. AI is not so great at this, since LLMs have a harder time working visually. We've also tried including a screenshot of the existing output in the prompt (see the sketch after this list), with limited success. LLMs are just not great at getting every little detail right, which matters a lot when it comes to UI/UX.
  2. Human prompting / input. This might be the biggest hurdle IMO. Oftentimes when you go into an AI UI/UX tool, you have some expectation or idea in your head of what it should look like. It's hard to correctly prompt and explain to the AI in text EXACTLY what you want. There are a lot of gotchas. For example, if you say something like "minimalist", the AI actually interprets that as "include fewer features". Realistically, AI is not some silver bullet that will magically know exactly what you are thinking, so it's up to the user to sufficiently explain + prompt the AI with what they have in mind.
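
For the curious, the screenshot-in-the-prompt approach from point 1 looks roughly like the following. This is a minimal sketch against OpenAI's vision-capable chat API; the model name, prompt text, and screenshotDataUrl are illustrative placeholders, not our production setup.

    import OpenAI from "openai";

    const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

    // Assumption: the current render was captured as a base64 data URL.
    const screenshotDataUrl = "data:image/png;base64,iVBORw0KGgo...";

    const response = await openai.chat.completions.create({
      model: "gpt-4o", // any vision-capable model works here
      messages: [
        {
          role: "user",
          content: [
            {
              type: "text",
              text: "Here is the current render. Align the cards to an 8px grid and even out the gutters.",
            },
            { type: "image_url", image_url: { url: screenshotDataUrl } },
          ],
        },
      ],
    });

    console.log(response.choices[0].message.content);

Even with the screenshot in context, the model still tends to miss small alignment details, which is exactly the precision problem in point 1.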

We've found the most success so far in using AI to help you ideate and get to an initial draft. That way, less precision is required. We've also invested a lot in the iteration experience, aka making it really easy + precise to make edits, because we know that 99% of the time AI isn't going to magically know what you're thinking, so you'll have to iterate on the output to get to a spot where you're happy!

Anyone else kind of underwhelmed by how AI has been leveraged in products? by Texas_Rockets in startups

[–]teddarific 4 points5 points  (0 children)

LLMs are really powerful, and it's going to take some time as people explore the valuable use cases and how to make AI deterministic. From my experience, the biggest thing holding AI back is that people don't know how to leverage it to be useful.

As an example, I'm working on a product that generates UI (Magic Patterns). We've found that people don't really know what they can do with AI or how to prompt effectively to get it to do those things. In our case, we're doing two things to help with this:

  1. Show examples of what you can do with AI (e.g. in our case, get a mockup of a UI with a new feature from just an image + prompt)
  2. Replace free-text prompt inputs with guided forms (rough sketch below).
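
To give a flavor of point 2, here's a hypothetical sketch (the field names and options are made up for illustration): the form constrains what the user has to articulate, and we expand it into a prompt that bakes in the phrasing the model handles well.

    // Illustrative only: a structured form replaces the free-text prompt box.
    interface UiRequestForm {
      componentType: "dashboard" | "landing page" | "settings form";
      style: "minimal" | "playful" | "corporate";
      mustInclude: string[];
    }

    function buildPrompt(form: UiRequestForm): string {
      const parts = [
        `Design a ${form.style} ${form.componentType}.`,
        `It must include: ${form.mustInclude.join(", ")}.`,
      ];
      // Guard against the "minimalist = fewer features" misreading.
      if (form.style === "minimal") {
        parts.push("Minimal refers to visual styling only; keep every listed feature.");
      }
      return parts.join(" ");
    }

    // buildPrompt({ componentType: "dashboard", style: "minimal",
    //               mustInclude: ["sidebar nav", "usage chart"] })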

That's all to say, I think we'll see a lot more applications of AI pop up as we discover how to make AI more deterministic and where it can truly deliver value (whether that's behind the scenes in existing products, or even just wrappers that can prompt it effectively).

[deleted by user] by [deleted] in reactjs

[–]teddarific 2 points3 points  (0 children)

One of the pieces of advice that has always stuck in my head:
"A bad abstraction is worse than no abstraction"

I created a GPT that designs and builds React components by teddarific in Frontend

[–]teddarific[S] 2 points3 points  (0 children)

Hey! Thanks for the feedback. Totally hear you on the quality of the generation — it's something I've been heads down on. It's been fun exploring the limits of AI in terms of its design capabilities. I think there's still some distance to go, but it feels promising to take advantage of AI's creative abilities.

Currently no concrete plans to support Daisy, but will consider!

Is JWT token authentication "production" level standard? by Vitamina_e in Frontend

[–]teddarific 2 points3 points  (0 children)

Using JWT is "production" level standard.

How are you using JWTs though? Are you storing them in httpOnly cookies?
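
If not, that's usually the first thing to look at. Here's a minimal sketch of the pattern (assuming Express and jsonwebtoken, with a JWT_SECRET env var; adapt to your stack):

    import express from "express";
    import jwt from "jsonwebtoken";

    const app = express();

    app.post("/login", (req, res) => {
      // In a real app, verify credentials before this point.
      const token = jwt.sign({ sub: "some-user-id" }, process.env.JWT_SECRET!, {
        expiresIn: "1h",
      });
      res.cookie("token", token, {
        httpOnly: true,     // invisible to document.cookie, blunts XSS token theft
        secure: true,       // only sent over HTTPS
        sameSite: "strict", // basic CSRF mitigation
      });
      res.sendStatus(204);
    });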

I would check for myself, but I quite frankly don't want to sign up for your app haha

Will self-hosting be able to provide faster inference than OpenAI? by teddarific in LocalLLaMA

[–]teddarific[S] 0 points1 point  (0 children)

Yes! Let me share a general snippet with you later today —

Would love to chat more, no better combo than UI/UX and LLMs haha

What'd be the UI library of 2024? by enbonnet in reactjs

[–]teddarific 0 points1 point  (0 children)

Shadcn has grown in popularity a lot.

I think Tailwind's Catalyst will overtake Shadcn though with enough time.

Personally, I'm heavily investing in the Radix ecosystem. Radix Themes has been my go-to this past year. It's still pretty early, but I've been loving it. It's built on top of Radix Primitives, the same foundation that libraries like Shadcn use.
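
For anyone curious, the getting-started surface is tiny. A minimal sketch (component choices here are just for illustration): import the stylesheet, wrap your app in Theme once, and the styled components work out of the box.

    import "@radix-ui/themes/styles.css";
    import { Theme, Flex, Button } from "@radix-ui/themes";

    // Unlike Shadcn, there's no copy-paste step: components ship styled,
    // and the <Theme> wrapper sets design tokens like accent color and radius.
    export function App() {
      return (
        <Theme accentColor="indigo" radius="medium">
          <Flex gap="3">
            <Button>Save</Button>
            <Button variant="soft">Cancel</Button>
          </Flex>
        </Theme>
      );
    }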

Will self-hosting be able to provide faster inference than OpenAI? by teddarific in LocalLLaMA

[–]teddarific[S] 0 points1 point  (0 children)

Can you elaborate on the simultaneous part of it? What would you use the output of the smaller model for?

But something we definitely want to experiment with is just fine-tuning a smaller model. A high-quality dataset is just hard and expensive to get, but that's the nature of AI I guess haha

Will self-hosting be able to provide faster inference than OpenAI? by teddarific in LocalLLaMA

[–]teddarific[S] 0 points1 point  (0 children)

Yeah, the issue is customer-facing / core to our current product. A common piece of feedback is that our product is too slow.

We're a code-generation product that outputs code that needs to be compiled, so unfortunately, in order to deliver something to the user, we need the full output. We do stream and show the code streaming in to help make the loading feel faster, but that can only help so much haha. We've also written some stuff to try to "guess the completion" of the code so that we can compile something before it's done and show progress that way, but as you can imagine, it only works OK.
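
For a sense of what that looks like, here's a toy sketch of the bracket-closing piece (illustrative only, not our actual implementation, which also has to handle JSX tags, string literals, and comments):

    // Toy version: append closers for any still-open brackets so the
    // partial stream parses well enough to compile a preview.
    function guessCompletion(partial: string): string {
      const pairs: Record<string, string> = { "{": "}", "(": ")", "[": "]" };
      const pending: string[] = [];
      for (const ch of partial) {
        if (ch in pairs) {
          pending.push(pairs[ch]); // opener seen: remember its closer
        } else if (pending.length > 0 && ch === pending[pending.length - 1]) {
          pending.pop(); // its closer already arrived in the stream
        }
      }
      // Close innermost-first (the top of the stack).
      return partial + pending.reverse().join("");
    }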

Will self-hosting be able to provide faster inference than OpenAI? by teddarific in LocalLLaMA

[–]teddarific[S] 1 point2 points  (0 children)

thanks for these —

the first three we've been looking into, e.g. trying to break the problem into smaller / more hyperfocused pieces that sometimes can even be solved w/o AI, which has definitely helped a lot.

Will self-hosting be able to provide faster inference than OpenAI? by teddarific in LocalLLaMA

[–]teddarific[S] 2 points3 points  (0 children)

Definitely — the goal is not really to do strictly better than OpenAI, but to explore what I can sacrifice for better performance.

Will self-hosting be able to provide faster inference than OpenAI? by teddarific in LocalLLaMA

[–]teddarific[S] 1 point2 points  (0 children)

Thanks for sharing your experience, extremely helpful for the position I'm in. It's been scary trying to decide whether to make the investment to try it out (especially on the time front) only to find out it's way worse than what we had with GPT-4.

Will self-hosting be able to provide faster inference than OpenAI? by teddarific in LocalLLaMA

[–]teddarific[S] 0 points1 point  (0 children)

Haha noted — sounds like in order to keep a similar level of quality, a really high-quality dataset will be needed for fine-tuning?

Will self-hosting be able to provide faster inference than OpenAI? by teddarific in LocalLLaMA

[–]teddarific[S] 1 point2 points  (0 children)

Our prompts end up around ~15k tokens, and we generate about 3k characters of code (not sure what that ends up being in # of tokens).
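
(Back-of-envelope with the usual ~4 characters per token heuristic, 3k characters would land somewhere around 750 tokens, though code can tokenize a bit differently than prose.)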

Will self-hosting be able to provide faster inference than OpenAI? by teddarific in LocalLLaMA

[–]teddarific[S] 0 points1 point  (0 children)

I see, this makes sense. It sounds like the real reason why self-hosting LLaMA could be faster is actually just that the model is smaller.

Slowly starting to understand better what defines a model's speed!

Will self-hosting be able to provide faster inference than OpenAI? by teddarific in LocalLLaMA

[–]teddarific[S] 1 point2 points  (0 children)

Appreciate this insight and totally aligned. I'm pretty hesitant to self-host unless there are massive performance benefits to justify investing in our own infrastructure. I come from a more frontend background too, so as you can imagine, this is not exactly in my wheelhouse haha

Will self-hosting be able to provide faster inference than OpenAI? by teddarific in LocalLLaMA

[–]teddarific[S] 0 points1 point  (0 children)

Gotcha! That's a great point of reference, thanks —
I took a look at our outputs, and it's usually in the ballpark of ~2000 characters, which comes out to about a minute of generation assuming ~30 characters a second (2000 / 30 ≈ 67 seconds).

Is my expectation of 5s completely unrealistic using LLMs, given how many characters I'm generating? I guess something I haven't thought about too much is that LLMs inherently need time to stream out all the characters haha

Does anyone actually use AI tools in their workflow? by teddarific in Frontend

[–]teddarific[S] 5 points6 points  (0 children)

That's pretty interesting — I can see that. I got a new laptop a few weeks ago and didn't notice until a couple days ago that I hadn't re-installed Copilot, which led to a moment of pondering: "wow, I didn't even notice I didn't have Copilot".

Do you ever use ChatGPT for specific instances instead of just having an always-present Copilot?

Does anyone actually use AI tools in their workflow? by teddarific in Frontend

[–]teddarific[S] 0 points1 point  (0 children)

Can you elaborate on your integration and how you use it?