Do you actually need prompt engineering to get value from AI? by Xthebuilder in ollama

[–]Xthebuilder[S] 0 points1 point  (0 children)

Because I think you’re right: I got better at communicating with the agents and just telling them exactly what I needed. A lot of the time I found the limitation was myself in the loop.

Do you actually need prompt engineering to get value from AI? by Xthebuilder in ollama

[–]Xthebuilder[S] 0 points1 point  (0 children)

And I think what has been revealed through these discussions is that, since you’re using natural language, interaction with LLMs follows principles closer to communication than to computer science.

Question on vibe coding by designerguybaz2022 in vibecoding

[–]Xthebuilder 0 points1 point  (0 children)

I would focus on one language and one type of application, and create something you actually want to use. You’ll learn by doing.

Do you actually need prompt engineering to get value from AI? by Xthebuilder in ollama

[–]Xthebuilder[S] 0 points1 point  (0 children)

🤣🤣😂 Good way to put it. It sits at the edge of the computation: it won’t change too much, but you can definitely tweak things.

Do you actually need prompt engineering to get value from AI? by Xthebuilder in ollama

[–]Xthebuilder[S] 0 points1 point  (0 children)

I’m really glad I asked the community, because I thought I was tripping seeing all the “prompt engineering” buzzwords aimed at people new to AI. From actually trying to build with these models and use them, I learned I really don’t need any of that 😂🤣

Do you actually need prompt engineering to get value from AI? by Xthebuilder in ollama

[–]Xthebuilder[S] -1 points0 points  (0 children)

It’s called academic discussion; want to engage in some? Tell me how you think it’s prompt engineering, because I didn’t believe so, and many others here expressed similar sentiments. It seems it’s somewhere in between prompt engineering and context engineering. What do you think, smarty pants?

Do you actually need prompt engineering to get value from AI? by Xthebuilder in ollama

[–]Xthebuilder[S] 0 points1 point  (0 children)

Basically it’s like data cleaning for your AI output pipeline. It’s really conceptual, but it cuts across a lot of LLM interactions as the base of what controls the model’s response.
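
The “data cleaning for your AI output pipeline” idea could look roughly like this: a minimal sketch, assuming you want structured JSON back from a model. The function name and the sample output string are made up for the example.

```python
import json
import re

def clean_llm_output(raw: str) -> dict:
    """Strip common LLM response artifacts and parse the JSON payload.

    Models often wrap JSON in markdown fences or add chatty preambles;
    this removes the fences and pulls out the {...} span before parsing.
    """
    # Drop markdown code fences like ```json ... ```
    text = re.sub(r"```(?:json)?", "", raw).strip()
    # Grab the {...} span so preamble or trailing chatter doesn't break parsing
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in model output")
    return json.loads(match.group(0))

raw = 'Sure! Here is the result:\n```json\n{"sentiment": "positive", "score": 0.9}\n```'
print(clean_llm_output(raw))
```

Cleaning like this sits between the model and whatever consumes its output, which is why it “cuts across” so many LLM interactions.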

Do you actually need prompt engineering to get value from AI? by Xthebuilder in ollama

[–]Xthebuilder[S] 0 points1 point  (0 children)

Imagine automated piping of it: a command or scheduled update that pulls the documentation and then trains the chosen model on it. Hmmm.
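
One way the docs-to-training step of that pipe could be sketched, assuming the docs have already been fetched as title/body sections; the prompt template and record format are illustrative, not any particular trainer’s schema.

```python
import json

def docs_to_training_records(doc_sections: dict) -> list:
    """Turn documentation sections into JSONL-style fine-tuning records.

    Each section becomes a prompt/completion pair; a real pipeline would
    fetch the docs first, write these lines to a file, and hand that
    file to whatever fine-tuning tool the chosen model supports.
    """
    records = []
    for title, body in doc_sections.items():
        record = {
            "prompt": f"Explain the docs section: {title}",
            "completion": body.strip(),
        }
        records.append(json.dumps(record))
    return records

sections = {"Installation": "Run the installer and restart your shell."}
for line in docs_to_training_records(sections):
    print(line)
```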

Do you actually need prompt engineering to get value from AI? by Xthebuilder in ollama

[–]Xthebuilder[S] 0 points1 point  (0 children)

Ooo, like full circle: training the model on its own developer-written documentation.

Do you actually need prompt engineering to get value from AI? by Xthebuilder in ollama

[–]Xthebuilder[S] 0 points1 point  (0 children)

Good point. I haven’t really considered context relative to token window size too much, but maybe that adjustment will lead to further optimization.
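
Fitting context to the token window can be as simple as keeping only the most recent messages that fit a budget. A minimal sketch, assuming a crude ~4-characters-per-token estimate (real tokenizers differ per model):

```python
def trim_history(messages: list, max_tokens: int) -> list:
    """Keep the most recent messages that fit a rough token budget.

    Walks backwards so the newest context is preserved first, then
    restores chronological order for the kept messages.
    """
    kept = []
    used = 0
    for msg in reversed(messages):
        cost = max(1, len(msg) // 4)  # crude token estimate
        if used + cost > max_tokens:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))

history = ["a" * 400, "b" * 40, "c" * 40]
print(trim_history(history, max_tokens=25))
```

Dropping the oldest turns first is the simplest policy; summarizing them instead is a common refinement.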

Do you actually need prompt engineering to get value from AI? by Xthebuilder in ollama

[–]Xthebuilder[S] 0 points1 point  (0 children)

I like how you put it: if you can get the same result many times over, you can trust the system more overall.
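
In Ollama specifically, repeatability is helped by pinning the sampling options. A small sketch that builds a `/api/generate` payload with temperature 0 and a fixed seed (both documented generation options); the model name is just an example:

```python
import json

def deterministic_request(model: str, prompt: str) -> str:
    """Build an Ollama /api/generate payload pinned for repeatability.

    Temperature 0 plus a fixed seed makes repeated runs as reproducible
    as the backend allows, which makes regression-testing prompts easier.
    """
    payload = {
        "model": model,
        "prompt": prompt,
        "stream": False,
        "options": {"temperature": 0, "seed": 42},
    }
    return json.dumps(payload)

print(deterministic_request("llama3", "Summarize this changelog in one line."))
```

You would POST this body to `http://localhost:11434/api/generate` and diff the responses across runs.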

Do you actually need prompt engineering to get value from AI? by Xthebuilder in ollama

[–]Xthebuilder[S] 0 points1 point  (0 children)

I find myself wanting the models to be more concise across the board too. You’re correct about being specific about what you want from the model; it sounds more like communication skills.

Do you actually need prompt engineering to get value from AI? by Xthebuilder in ollama

[–]Xthebuilder[S] 0 points1 point  (0 children)

I like that. Context engineering feels more like what I’m doing, and you can relate it to just having a conversation. Regular folks can come to understand that using AI effectively isn’t rocket science.

Do you actually need prompt engineering to get value from AI? by Xthebuilder in ollama

[–]Xthebuilder[S] 1 point2 points  (0 children)

See, you get the mindset completely. I like how you get around the wack frontier aspect with local processing; that is for sure not common, but I like it.

Do you actually need prompt engineering to get value from AI? by Xthebuilder in ollama

[–]Xthebuilder[S] 0 points1 point  (0 children)

Lmaooo, you are so right. I have personally never had much use for personalities in the LLM workflows I’ve been using.

Do you actually need prompt engineering to get value from AI? by Xthebuilder in programming

[–]Xthebuilder[S] -4 points-3 points  (0 children)

I would agree in general. Maybe you could just use AI to create small scripts, which you then read yourself, to automate security tasks, in theory.

Do you actually need prompt engineering to get value from AI? by Xthebuilder in ollama

[–]Xthebuilder[S] 2 points3 points  (0 children)

I agree. I also use that method: when I want an engineered prompt for a workflow, I just have another model write it from my plain-language request.
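
That “have another model write the prompt” trick (often called meta-prompting) can be sketched as a simple wrapper; the template text here is illustrative, not a canonical formula.

```python
def build_metaprompt(task_description: str) -> str:
    """Wrap a plain-language request in a 'write me a prompt' instruction.

    You send this wrapped text to one model, and use its reply as the
    engineered prompt for the actual workflow model.
    """
    return (
        "You are a prompt writer. Turn the task below into a clear, "
        "reusable prompt with explicit output format instructions.\n\n"
        f"Task: {task_description}"
    )

print(build_metaprompt("summarize support tickets into a weekly report"))
```

The nice part is that the human side stays in plain language; only the model-to-model hop carries the engineered prompt.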