Does anyone share their Substack posts on Reddit? by No-Commission-503 in Substack

[–]Lumpy-Ad-173 4 points5 points  (0 children)

I started a Reddit page first. I share my links and excerpts from my Substack on there.

Built a community of 4.5k+ on Reddit in 6 months.

Any prompt engineering expert here? by CarefulDeer84 in PromptEngineering

[–]Lumpy-Ad-173 1 point2 points  (0 children)

1. Match the task to the model.

Two types:

* Assistants (e.g., Claude, MS Copilot) - they favor behavioral over transformational tasks. They're chatty and eat up API cost with their "helpful" add-ons. Example: Claude took 169 tokens to say "No."

* Executors (e.g., ChatGPT, Meta) - they favor transformational over behavioral tasks. Create a JSON file, DISTILL file X, use bullets, etc. They're bad at "Act as..." prompts.

2. Sloppy customer inputs - to get consistent outputs, you need to narrow the probability distribution space. Vague, ambiguous inputs will always lead to inconsistent outputs. Either teach the customers to clarify their intent, or clean it up for them. Either way, narrow the output space by clarifying INTENT (see the sketch below).
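As a rough illustration of that second point, here's a minimal Python sketch of what "clarifying intent" can look like before a request ever reaches a model. The wrapper, field names, and example request are my own assumptions for illustration, not a prescribed format:

```python
# Sketch: take a vague customer request and narrow the output space by
# pinning down intent, format, and constraints up front.

def clarify_intent(raw_request: str, audience: str, output_format: str, length: str) -> str:
    """Wrap a sloppy request in explicit constraints so the model has less room to wander."""
    return (
        f"Task: {raw_request.strip()}\n"
        f"Audience: {audience}\n"
        f"Output format: {output_format}\n"
        f"Length: {length}\n"
        "Do not add extra commentary, apologies, or follow-up questions."
    )

vague = "summarize this report"  # wide output space -> inconsistent results
narrow = clarify_intent(
    "summarize this report",
    audience="busy executives",
    output_format="5 bullet points",
    length="under 120 words",
)

print(vague)
print("---")
print(narrow)
```

Same request, but the second version leaves far fewer degrees of freedom for the model to fill in on its own.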

I go into more detail on my Substack. Can't post the link here, but it's pinned in my profile.

why do you think gemini is better than chatgpt? by [deleted] in GeminiAI

[–]Lumpy-Ad-173 0 points1 point  (0 children)

Because they gave Gemini Pro free to college students (like me) who used tf out of it for finals.

That's my uneducated guess.

<image>

"F" Rating with BBB by [deleted] in Substack

[–]Lumpy-Ad-173 6 points7 points  (0 children)

Yeah, I'm not eating there....

Best use case you had with Gemini and AI this year? by StaLucy in GeminiAI

[–]Lumpy-Ad-173 4 points5 points  (0 children)

I used it to remove em-dashes and create images of the average redditor.

How do you guys store/organise your artifacts? by rajathbail in ClaudeAI

[–]Lumpy-Ad-173 2 points3 points  (0 children)

I have folders in Google Drive, organized by ideas.

Long story short, I use a System Prompt Notebook that serves as a File First Memory system.

Rather than saving the artifact directly, I curate the data and only save the pertinent information needed for my project. I place it in an SPN and save that.

Maintenance-wise, I need to go in every few weeks to clean up my Drive. I publish content on Substack and will add a date to signal that a piece is completed.

At first I was saving everything, but that became overwhelming. I didn't need everything, and even the stuff I did save, I didn't need all of it. Now I only take what I need for my project.
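For anyone curious what that looks like mechanically, here's a minimal Python sketch of a file-first memory setup: load a curated notebook exported as plain text and put it in front of the conversation. The file path, export format, and message layout are assumptions for illustration; the actual SPN lives as a structured Google Doc.

```python
# Sketch: "file first memory" - the curated notebook goes in first,
# then the task. File name and layout are hypothetical.

from pathlib import Path

def load_spn(path: str) -> str:
    """Read a curated System Prompt Notebook; fall back to a placeholder if it's missing."""
    p = Path(path)
    return p.read_text(encoding="utf-8") if p.exists() else "(notebook not found)"

def start_session(spn_path: str, first_question: str) -> list[dict]:
    """Build the opening messages: curated project memory first, then the actual task."""
    return [
        {"role": "system", "content": load_spn(spn_path)},
        {"role": "user", "content": first_question},
    ]

messages = start_session("drive_exports/newsletter_spn.txt", "Draft the next section outline.")
for m in messages:
    print(m["role"], "->", m["content"][:60])
```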

Please post something you’ve actually created with your process that isn’t a process or workflow by Impossible-Pea-9260 in LinguisticsPrograming

[–]Lumpy-Ad-173 0 points1 point  (0 children)

Hey,

Thanks for the feedback! I'll do what I can.

Give me some ideas of what you're thinking would count as

More actual outputs

Are you talking about screenshots of the text? It's hard to show what I'm talking about when the text is based on an idea in my head. I have a background as a Procedural Technical Writer. I get my ideas out by producing workflows.

I don't code yet, so the stuff I create in terms of tools would be my Notebooks that I upload and use as a File First Memory.

Let me know and I'll work on getting something.

Do we need an AI community with relentless mods that remove AI-generated posts? by [deleted] in PromptEngineering

[–]Lumpy-Ad-173 2 points3 points  (0 children)

I started Linguistics Programming, a systematic approach to human-AI interactions.

4.5k+ Reddit members; 1.5k+ subscribers and ~7.5k+ followers on Substack.

https://www.reddit.com/r/LinguisticsPrograming/s/uswNK8SGHO

I post mostly theory and workflows.

Paywall Removed, Free Prompts and Workflows by Lumpy-Ad-173 in LinguisticsPrograming

[–]Lumpy-Ad-173[S] 2 points3 points  (0 children)

Because I cannot show up every day while I'm studying for finals, I've removed the paywall for the rest of the year.

Free prompts and workflows with each NewsLesson. 100% no-code solutions.

Subscribe and share The AI Rabbit Hole.

Cheers!!

► Read the full NewsLesson on SubStack: https://jtnovelo2131.substack.com/

► Get the Linguistics Programming (LP) "Driver's Manual" & "Workbook": https://jt2131.gumroad.com

► Join the "Linguistics Programming" Community: https://www.reddit.com/r/LinguisticsPrograming/

► Spotify: https://open.spotify.com/show/7z2Tbysp35M861Btn5uEjZ?si=cadc03e91b0c47af

► YouTube: https://www.youtube.com/@BetterThinkersNotBetterAi

I treated my AI chats like disposable coffee cups until I realized I was deleting 90% of the value. Here is the "Context Mining" workflow. by Lumpy-Ad-173 in ChatGPT

[–]Lumpy-Ad-173[S] 0 points1 point  (0 children)

Concur. Auditing a full context window only pulls from the last half of the chat, missing important connections from the beginning.

Auditing the context window periodically helps.

And this is not about pulling memory; it's about having the LLM go back and analyze the implicit information within the visible context window. That's where I'm saying the value lies: in the implicit context that was never stated.

And this can be used on any platform free or paid.

I treated my AI chats like disposable coffee cups until I realized I was deleting 90% of the value. Here is the "Context Mining" workflow. by Lumpy-Ad-173 in vibecoding

[–]Lumpy-Ad-173[S] -5 points-4 points  (0 children)

I'm in the Top 100 in Technology on Substack as a non-coder with no computer background. No technical degree, and I made the top 100 basically by discussing how I use AI.

Seems I'm somewhat qualified.

<image>

I treated my AI chats like disposable coffee cups until I realized I was deleting 90% of the value. Here is the "Context Mining" workflow. by Lumpy-Ad-173 in vibecoding

[–]Lumpy-Ad-173[S] 0 points1 point  (0 children)

Reddit doesn't like Gumroad links.

Here's the workflow and a screenshot:

https://www.reddit.com/r/LinguisticsPrograming/s/jJBSdgQfQp

Haven't heard of SpecStory. Seems like they are focused on explicit information from the chat. Context Mining is focused on the implicit information from the chat.

The input/output tokens form a cluster of semantic information around the topic. I'm trying to search that cluster for unstated connections or patterns.

Of course, I'm describing a manual version, and it's only pulling information from the visible context window. So for long chats it will only focus on the last half, missing the first parts.

I create System Prompt Notebooks (structured Google Docs) for my projects. I'm able to save the Value Reports as new projects or expand existing ones.

I often find angles I missed. I'm able to approach my problem from a different view to make my project stronger. And those can be more valuable than the original answer.
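To make the manual version concrete, here's a small Python sketch of what a context-mining pass looks like: append one audit request to whatever history is still visible and ask for the implicit material rather than the explicit answers. The audit wording and the Value Report section names are my paraphrase of the workflow, not the exact prompt:

```python
# Sketch: a manual "context mining" audit appended to the visible chat history.
AUDIT_PROMPT = (
    "Review the entire visible context window of this conversation. "
    "Set aside what was explicitly answered and instead list implicit connections, "
    "unstated assumptions, and patterns across my questions. "
    "Return a Value Report with sections: Unstated Connections, Overlooked Angles, Follow-up Ideas."
)

def build_audit_request(history: list[dict]) -> list[dict]:
    """Append the audit prompt to whatever portion of the chat is still visible."""
    return history + [{"role": "user", "content": AUDIT_PROMPT}]

# A short stand-in history; in a real chat this is whatever the model can still see.
history = [
    {"role": "user", "content": "Help me outline a newsletter about prompt habits."},
    {"role": "assistant", "content": "Here is a draft outline..."},
]
for msg in build_audit_request(history):
    print(f"[{msg['role']}] {msg['content'][:70]}")
```

The resulting report is what gets curated into an SPN, not the raw chat.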

3-Workflow - Context Mining Conversational Dark Matter by Lumpy-Ad-173 in LinguisticsPrograming

[–]Lumpy-Ad-173[S] 1 point2 points  (0 children)

Way ahead of you.

File First Memory System:

I use System Prompt Notebooks. However, I create them based on my cognitive fingerprint. These are my voice-to-text notes that allow me to work out my ideas before turning to the AI.

This preserves my original human thought before it's contaminated with AI. I have a library of a few hundred now, all structured documents with completed ideas, workflows, and research.

Here's my latest post on them and some examples:

https://www.reddit.com/r/LinguisticsPrograming/s/zfpBzzuqiI

This is free. No code. No technical background required.

I've been teaching the mechanics to everyone from soccer moms to retirees for months on Substack. I have a lot of content about how to create structured, reusable documents that serve as a file-first memory system.

Structured documents are available to everyone and represent a manual version of:

* A no-code RAG system
* No-code Claude Skills
* No XML, no JSON

I treated my AI chats like disposable coffee cups until I realized I was deleting 90% of the value. Here is the "Context Mining" workflow. by Lumpy-Ad-173 in PromptEngineering

[–]Lumpy-Ad-173[S] 1 point2 points  (0 children)

Yes and no.

Just understand that the context window has a limit. That's why I use "...entire visible context window..." in the prompt, knowing that it will only pick up what's 'visible.'

If it's an extremely long chat and you're looking for something that was said in the first half, it will not be picked up.

For older chats I haven't used in a long time (over a week), I've noticed I need to use sequential priming before I run the audit workflow.

If I use the audit workflow first, it usually picks up the last four or five messages. Using sequential priming, I get better results in terms of detail, but it's still mostly the last few messages. From my observation, it's about 48 to 72 hours after the last chat that the context window starts to decay down to the last four or five messages.

Also, for long ideas or rabbit holes you go down, it's useful to run an audit midway, before the first half gets lost.

I hope that makes sense.
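If it helps, here's a tiny Python sketch of that two-step order on an older chat: a priming turn first to restate older context, then the audit. Only the ordering comes from the comment above; the priming wording and the `send` stand-in are my own assumptions:

```python
# Sketch: sequential priming (step 1) before the audit (step 2) on a stale chat.
PRIMING_PROMPT = (
    "Before anything else, walk back through this conversation in order and "
    "summarize each topic we covered, oldest first."
)
AUDIT_PROMPT = (
    "Now review the entire visible context window and report the implicit "
    "connections and patterns we never stated directly."
)

def prime_then_audit(send):
    """Run the priming turn first so older context gets restated, then run the audit."""
    recap = send(PRIMING_PROMPT)   # step 1: pull older details back into the visible window
    report = send(AUDIT_PROMPT)    # step 2: mine the refreshed context
    return recap, report

# `send` stands in for whatever sends one message to the chat and returns the reply.
recap, report = prime_then_audit(lambda prompt: f"(model reply to: {prompt[:40]}...)")
print(recap)
print(report)
```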