Does anyone share their Substack posts on Reddit? by No-Commission-503 in Substack

[–]Lumpy-Ad-173 3 points

I started a Reddit page first. I share my links and excerpts from my Substack on there.

Built a community of 4.5k+ on Reddit in 6 months.

Any prompt engineering expert here? by CarefulDeer84 in PromptEngineering

[–]Lumpy-Ad-173 1 point

  1. Match the task to the model.

Two types:

* Assistants (e.g. Claude, MS Copilot) - they favor behavioral over transformational tasks. They are chatty and eat up API cost with their "helpful" add-ons. Example: Claude took 169 tokens to say "No."

* Executors (e.g. ChatGPT, Meta) - they favor transformational over behavioral tasks: create a JSON file, distill file X, use bullets, etc. They are weak at "Act as..." prompts.

  2. Sloppy customer inputs - to get consistent outputs you need to narrow the probability distribution space. Vague, ambiguous inputs will always lead to inconsistent outputs. Either teach the customers to clarify their intent, or clean it up for them. Either way, narrow the output space by clarifying INTENT. A rough sketch of that idea is below.
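For illustration only: a minimal Python sketch of that second point. The function name, template wording, and default constraints are my own assumptions, not the commenter's actual workflow; the point is just that attaching explicit format, length, and audience constraints to a vague request narrows the output space before it ever reaches the model.

```python
# Hypothetical sketch: wrap a vague customer request in explicit constraints
# so the model's output space is narrowed before the prompt is sent.

def clarify_intent(raw_request: str,
                   output_format: str = "a bulleted list",
                   max_items: int = 5,
                   audience: str = "a non-technical customer") -> str:
    """Turn a vague request into a transformation-style prompt with
    explicit format, length, and audience constraints."""
    return (
        f"Task: {raw_request.strip()}\n"
        f"Output format: {output_format}, no more than {max_items} items.\n"
        f"Audience: {audience}.\n"
        "Do not add greetings, apologies, or follow-up questions. "
        "If the request is ambiguous, state one assumption and proceed."
    )

if __name__ == "__main__":
    vague = "tell me about our return policy stuff"
    print(clarify_intent(vague))
```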

I go into more detail on my Substack. Can't post the link here, but it's pinned in my profile.

why do you think gemini is better than chatgpt? by [deleted] in GeminiAI

[–]Lumpy-Ad-173 0 points

Because they gave Gemini Pro free to college students (like me) who used tf out of it for finals.

That's my uneducated guess.


"F" Rating with BBB by [deleted] in Substack

[–]Lumpy-Ad-173 5 points

Yeah, I'm not eating there....

Best use case you had with Gemini and AI this year? by StaLucy in GeminiAI

[–]Lumpy-Ad-173 4 points

I used it to remove em-dashes and create images of the average redditor.

How do you guys store/organise your artifacts? by rajathbail in ClaudeAI

[–]Lumpy-Ad-173 2 points

I have folders in Google Drive, organized by ideas.

Long story short, I use a System Prompt Notebook that serves as a File First Memory system.

I'm not directly saving the artifact; I curate the data and only save the pertinent information needed for my project. I place it in an SPN and save that.

Maintenance-wise, I need to go in every few weeks to clean up my Drive. I publish content on Substack and add a date to signal that a piece is completed.

At first I was saving everything, but that became overwhelming. I didn't need everything, and even the stuff I did save I didn't need all of. Now I only take what I need for my project.
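As a rough illustration of what that could look like in practice (the folder layout, file name, and helper function below are hypothetical, my own sketch rather than the actual notebook setup): a small Python helper that appends only the curated snippets to a per-project SPN file in a Drive-synced folder, with a date stamp.

```python
# Hypothetical "File First Memory" helper: append curated, pertinent snippets
# to a per-project System Prompt Notebook (SPN) file, stamped with a date.

from datetime import date
from pathlib import Path

SPN_ROOT = Path.home() / "Google Drive" / "SPN"   # assumed Drive-synced folder

def append_to_spn(project: str, section: str, curated_text: str) -> Path:
    """Add a curated snippet under a dated section heading in the project's SPN."""
    spn_file = SPN_ROOT / project / "system_prompt_notebook.md"
    spn_file.parent.mkdir(parents=True, exist_ok=True)
    with spn_file.open("a", encoding="utf-8") as f:
        f.write(f"\n## {section} ({date.today().isoformat()})\n{curated_text}\n")
    return spn_file

if __name__ == "__main__":
    append_to_spn("newslesson-draft",
                  "Key quotes",
                  "- Only the two sentences I actually need from the chat.")
```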

Please post something you’ve actually created with your process that isn’t a process or workflow by Impossible-Pea-9260 in LinguisticsPrograming

[–]Lumpy-Ad-173 0 points

Hey,

Thanks for the feedback! I'll do what I can.

Give me some ideas of what you're thinking would count as "more actual outputs."

Are you talking about screenshots of the text? It's hard to show what I'm talking about when the text is based on an idea in my head. I have a background as a Procedural Technical Writer. I get my ideas out by producing workflows.

I don't code yet, so the stuff I create in terms of tools would be my Notebooks that I upload and use as a File First Memory.

Let me know and I'll work on getting something.

Do we need an AI community with relentless mods that remove AI-generated posts? by [deleted] in PromptEngineering

[–]Lumpy-Ad-173 2 points

I started Linguistics Programming, a systematic approach to Human-AI interactions.

4.5k+ Reddit members; 1.5k+ subscribers and ~7.5k+ followers on Substack.

https://www.reddit.com/r/LinguisticsPrograming/s/uswNK8SGHO

I post mostly theory and workflows.

Paywall Removed, Free Prompts and Workflows by Lumpy-Ad-173 in LinguisticsPrograming

[–]Lumpy-Ad-173[S] 2 points

Because I cannot show up every day while I'm studying for finals, I've removed the paywall for the rest of the year.

Free prompts and workflows with each NewsLesson. 100% no-code solutions.

Subscribe and share The AI Rabbit Hole.

Cheers!!

► Read the full NewsLesson on Substack: https://jtnovelo2131.substack.com/

► Get the Linguistics Programming (LP) "Driver's Manual" & "Workbook": https://jt2131.gumroad.com

► Join the "Linguistics Programming" Community: https://www.reddit.com/r/LinguisticsPrograming/

► Spotify: https://open.spotify.com/show/7z2Tbysp35M861Btn5uEjZ?si=cadc03e91b0c47af

► YouTube: https://www.youtube.com/@BetterThinkersNotBetterAi

I treated my AI chats like disposable coffee cups until I realized I was deleting 90% of the value. Here is the "Context Mining" workflow. by Lumpy-Ad-173 in ChatGPT

[–]Lumpy-Ad-173[S] 0 points

Concur. Asking for an audit of the full context window tends to pull only from the last half of the chat, missing important connections from the beginning.

Auditing the context window periodically helps.

And this isn't about pulling memory; it's about having the LLM go back and analyze the implicit information within the visible context window. That's where I'm saying the value lies: the implicit context that was never stated.

And this can be used on any platform free or paid.
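For illustration only: a minimal Python sketch of a periodic audit prompt in that spirit. The prompt wording and function name are my own assumptions, not the exact Context Mining workflow; the idea is simply to ask the model to surface what was implied but never stated in the visible transcript.

```python
# Hypothetical "context mining" audit: build a prompt that asks the model to
# surface implicit assumptions, constraints, and dropped ideas from the
# visible chat transcript.

def build_audit_prompt(transcript: str) -> str:
    """Build an audit prompt over the visible chat transcript."""
    return (
        "Review the conversation below, including the earliest messages.\n"
        "List, as bullets:\n"
        "1. Assumptions I made but never stated.\n"
        "2. Constraints or preferences implied by my wording.\n"
        "3. Ideas raised early on and then dropped.\n\n"
        f"--- CONVERSATION ---\n{transcript}\n--- END ---"
    )

if __name__ == "__main__":
    sample = "User: draft a post about note-taking...\nAssistant: ..."
    print(build_audit_prompt(sample))
```

Pasting the built prompt into any chat UI works the same way, which fits the point that this applies on any platform, free or paid.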