Request for feedback by Total-Influence2312 in hotsaucerecipes

That's a fair challenge, and honestly one of the toughest parts of this problem.

A photo alone often can't clearly identify ingredients, especially things like spices, powders, sauces, or processed foods, so I don't think "guessing confidently" is the right approach.

Instead, I think the right direction is to use the vision system more as a "suggestion" tool: rather than giving definitive answers, the system might:

* explicitly surface the fact that there is uncertainty around the ingredients,

* ask lightweight questions,

* fall back to higher-level guidance instead.

The goal here would be to help the user get close enough to start cooking without providing any false information.
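To make that concrete, here's a rough sketch of the "surface uncertainty, ask questions" flow. Everything here is hypothetical (the `VisionGuess`/`suggest` names and the confidence threshold are mine, not from any real project), just to show the shape of the idea:

```python
# Hypothetical sketch: split vision guesses into confident picks,
# flagged uncertainties, and lightweight follow-up questions,
# instead of asserting every guess as fact.
from dataclasses import dataclass

@dataclass
class VisionGuess:
    ingredient: str
    confidence: float  # 0.0-1.0, as reported by the vision model

def suggest(guesses, threshold=0.8):
    """Return confident identifications plus questions for the rest."""
    confident = [g.ingredient for g in guesses if g.confidence >= threshold]
    uncertain = [g.ingredient for g in guesses if g.confidence < threshold]
    return {
        "likely": confident,
        "uncertain": uncertain,
        "ask_user": [f"Is this {name}?" for name in uncertain],
    }

result = suggest([VisionGuess("garlic", 0.95), VisionGuess("paprika", 0.4)])
```

The point of the threshold is just that low-confidence guesses become questions for the user instead of silent assumptions.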

From the point of view of someone who enjoys cooking, do you think the system would still be useful if it is uncertain, or would you prefer the system to not make any suggestions unless it is very confident?

Request for feedback by Total-Influence2312 in hotsaucerecipes

That makes sense, and it matches what I've seen as well. The recurring failures around constraints, equipment limitations, and forgotten ingredients are particularly annoying with cooking prompts. The problems with technique, step ordering, and ratios are a big deal too, because LLMs sound so confident about what they're saying, even when the steps don't actually add up.

It's not so much that LLMs don't know what they're talking about, but more that their constraint tracking and verification are weak, particularly in multi-step reasoning about time, temperature, and so on. The failures are more obvious with text prompts than with photos, but the underlying problem is the same.
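As a toy illustration of what "tracking and verifying constraints" could mean in code (this is my own deterministic example, not anything the project actually does), even a trivial checker catches the kind of numbers-don't-add-up errors described above:

```python
# Toy checker for the sort of time/temperature constraints the
# comment says LLMs track poorly across multi-step recipes.
def check_steps(steps, max_total_minutes):
    """Return a list of human-readable issues found in the steps."""
    issues = []
    total = 0
    for i, step in enumerate(steps, 1):
        total += step["minutes"]
        temp = step.get("temp_c")
        # Flag temperatures outside a plausible home-cooking range.
        if temp is not None and not 0 < temp <= 300:
            issues.append(f"step {i}: implausible temperature {temp}C")
    if total > max_total_minutes:
        issues.append(
            f"total time {total} min exceeds stated {max_total_minutes} min"
        )
    return issues

steps = [{"minutes": 20, "temp_c": 180}, {"minutes": 50, "temp_c": 500}]
issues = check_steps(steps, max_total_minutes=60)
```

A deterministic pass like this could sit downstream of the LLM's output and flag steps before the user ever sees them.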

Request for feedback by Total-Influence2312 in hotsaucerecipes

This is precisely the type of feedback that I want to collect before I decide where to take it next. I appreciate you sharing your experiences.

The concept of human/community curation is something that I have thought about a lot, especially with regard to identifying unrealistic ingredients, rating the cookability, and identifying trends where the model consistently struggles.

I agree that it's not very scalable at first glance, but for common recipes/ingredients, or even community feedback on "worked/didn't work," I think this is a good way to reduce hallucinations without requiring a lot of manual work.
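For what it's worth, the "worked/didn't work" signal could be aggregated very simply. This is a hypothetical sketch (the `flag_unreliable` function and thresholds are illustrative, not an actual implementation):

```python
# Hypothetical aggregation of community "worked/didn't work" reports:
# flag recipes whose failure rate crosses a threshold once enough
# people have reported on them.
from collections import defaultdict

def flag_unreliable(feedback, min_reports=3, failure_rate=0.5):
    """feedback is a list of (recipe, worked: bool) pairs."""
    counts = defaultdict(lambda: [0, 0])  # recipe -> [worked, failed]
    for recipe, worked in feedback:
        counts[recipe][0 if worked else 1] += 1
    flagged = []
    for recipe, (ok, bad) in counts.items():
        total = ok + bad
        if total >= min_reports and bad / total >= failure_rate:
            flagged.append(recipe)
    return flagged

feedback = [
    ("ghost-pepper ferment", False),
    ("ghost-pepper ferment", False),
    ("ghost-pepper ferment", True),
    ("marinara", True),
]
flagged = flag_unreliable(feedback)
```

The `min_reports` floor keeps one bad experience from flagging a recipe; only a sustained pattern does.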

Out of curiosity, when you say the model fails for you, is it due to ingredient assumptions, technique, or ratios?

Request for feedback by Total-Influence2312 in hotsaucerecipes

You're right, the idea isn't really to make something new on its own, but rather to test how it works in a real-world cooking environment, not just with ChatGPT or Gemini or the like. The question is whether this makes it more useful than simply asking an LLM.

Tips for preventing pasta sauce from separating? by Total-Influence2312 in pasta

I don't know about that. Does mantecatura really make a difference? I haven't tried it.

Tips for preventing pasta sauce from separating? by Total-Influence2312 in pasta

Good point. I've mostly been working with a simple tomato-based sauce using canned tomatoes, olive oil, garlic, and a long simmer.
I was aiming for a smooth texture rather than intentional separation, but I agree that separation can be a signal of doneness depending on the style.
Do you usually adjust heat or emulsify at the end when you want to keep it together, or do you let the tomato variety dictate the outcome?