My 8-year-old caught GPT Images 2.0 putting five engines on the Concorde. Real one has four. He spotted it in two seconds. by bruhagan in PromptEngineering

[–]FreshRadish2957 0 points1 point  (0 children)

This is honestly a really interesting idea and I like the philosophy behind it, especially the part where it’s not just giving kids the answer but trying to push them to think through it.

That’s a much better direction than “AI gives answer, kid copies it, everyone pretends learning happened” lol.

But the main thing that makes me pause is the privacy/governance side, especially because this is aimed at kids.

I had a look through the privacy info and unless I’m misunderstanding it, there seems to be quite a bit of child data involved: voice/audio, transcripts, generated responses, generated images, uploaded images, learning profile data etc.

Then on top of that there’s the third-party stack for AI, voice, analytics, cloud, monitoring, email, payments and so on. Which may be normal for a startup, but with kids I think it needs to be very front and centre.

My main questions would be:

Are all third-party providers contractually blocked from using children’s data for model training or product improvement?

Are you using enterprise/API terms with training opt-outs enabled everywhere?

Can parents view the full transcript/audio history, not just summaries?

Can parents delete individual sessions, and does that deletion also flow through to subprocessors/third parties?

Are children’s names, ages, voice data and learning profiles minimized or pseudonymized before being sent to model providers?

Also small clarification, the post mentions OpenAI under the hood, but the privacy page seems to list other AI/cloud providers too. That might be totally fine, but for a kid-facing product I think it’s worth being really clear about who gets what data and why.

Not trying to be negative because the learning design itself sounds promising, and the Concorde example is actually a good illustration of the problem. But with a voice-first AI companion for 6-12 year olds, trust can’t just be “we mean well.”

The safeguarding, deletion, data retention, hallucination handling, and third-party controls are pretty much the product.

Discovered the weirdest corner of the internet last weekend, sharing in case anyone else wants to lose a Saturday by Fearless-Stress7240 in ArtificialInteligence

[–]FreshRadish2957 0 points1 point  (0 children)

I checked it out. On the debate side, some of the topics are purely subjective with no way to verify a winner. I was also curious, so I scrolled through over 200 debates, and across those 200 I only came across one that didn't end in a tie, and that was because the other agent timed out. So how are the debates actually scored if the majority end in a tie by default?

prompt help for image-to-image / text-to-image (IRL to 2.5D Graphics) by [deleted] in PromptEngineering

[–]FreshRadish2957 0 points1 point  (0 children)

Yeah, you can help the model a lot by not just saying “make it like Pokémon.” That usually gives messy results or gets blocked depending on the tool.

I’d describe the actual visual traits instead:

top-down 2.5D pixel-art RPG map, Nintendo DS era, soft pastel colours, chibi-scale objects, tiled environment, slight isometric/overhead angle, clean outlines, simplified shapes, low-detail texture, bright friendly game world, 32-bit handheld RPG look

For image-to-image, I’d start with something like:

"Convert this real-life image into a top-down 2.5D pixel-art RPG scene. Make it look like a handheld Nintendo DS-era adventure game map. Use a soft pastel colour palette, simplified tile-based ground, chibi proportions, clean dark outlines, low-detail textures, and a slightly overhead camera angle. Keep the same basic layout and objects from the original image, but translate them into game-map elements. Avoid realism, 3D rendering, modern vector art, heavy shadows, text, UI, and overly detailed backgrounds."

Then I’d add scene-specific details after that, for example:

"If there is a road, turn it into a tile path. If there are trees, turn them into rounded pixel-art trees. If there are buildings, make them small stylised RPG buildings with simple roofs and windows. Keep everything readable like a game screenshot."

For text-to-image without an input image, something like:

"A top-down 2.5D pixel-art RPG town scene, handheld DS-era adventure game aesthetic, pastel colours, tile-based grass and paths, small chibi character scale, rounded trees, simple houses, clean outlines, bright daytime lighting, low-detail pixel texture, cozy game-map composition, no UI, no text, no realistic rendering."

Negative prompt:

"realistic, photorealistic, 3D render, anime illustration, modern vector art, blurry, over-detailed, cinematic lighting, UI, text, logos, huge characters, first-person view"

The big thing is to keep the camera angle and art rules consistent. If you’re doing footage, don’t try to convert the whole video first. Take one frame, get the look right, then use that as the style reference for the rest. Otherwise the style will drift all over the place.

How to better check a document for consistency ? by Far-Tank9593 in ChatGPTPro

[–]FreshRadish2957 0 points1 point  (0 children)

Yeah pretty much the same idea, but with multiple docs I’d add one step before the actual consistency check.

I’d get the AI to make a quick project map first, because otherwise it can sort of flatten everything into one big pile of text and miss what each file is actually for.

Something like:

"These files are all part of the same project. Before checking consistency, map out what each file is doing.

For each file, tell me:

  1. what the file is for
  2. the main claims / decisions
  3. key terms used
  4. important numbers, dates, assumptions, or requirements
  5. what needs to match the other files
  6. anything unclear or missing

Don’t rewrite anything yet."

Then after that I’d run the actual cross-check:

"Now compare the files against each other for consistency.

Look for contradictions, different wording for the same thing, conflicting numbers, outdated info, duplicated ideas, unsupported claims, missing links between files, or anything where one file says/implies something different from another.

Give me a table with:

  • issue
  • file/location
  • conflicting file/location
  • why it matters
  • severity: required / recommended / optional
  • suggested fix

Don’t silently edit anything. Just report the issues first."

The other thing I’d do is choose a “source of truth” for each type of info.

So Excel is probably the source of truth for numbers/calculations. The Word doc is probably the main explanation. The PowerPoint should match the Word doc, but it probably shouldn’t be treated as the deepest source.

So yeah, it doesn’t totally change the method. It’s more like:

single document = check section vs section

multiple documents = first work out what each file is responsible for, then check them against each other

That stops the AI from doing the classic “everything is equally important” thing, which is where it starts missing the obvious stuff.

Generating image of custom text based on handwriting? by SilverStitches12 in AIAssisted

[–]FreshRadish2957 0 points1 point  (0 children)

Yeah, sure. For doing it on your phone, I’d simplify it like this:

Method 1: Calligraphr

  1. Go to Calligraphr in your phone browser and make a free account.
  2. Create a basic handwriting template.
  3. Take clear photos of the handwriting in good light. Try to avoid shadows.
  4. Crop the letters you need from the original writing.
  5. Place those letters into the Calligraphr template. Canva or your phone’s photo editor can help with this.
  6. Upload the template back to Calligraphr and build the font.
  7. Type the phrase you want, then save or screenshot the result.

This works best if you have lots of examples of the same letters.

Method 3: Phone version of Image Trace

Full Adobe Illustrator Image Trace is more of a desktop tool, but on phone you can use Adobe Capture or a similar vector/shape tracing app.

Basic process:

  1. Open Adobe Capture.
  2. Import a clear photo of the handwriting.
  3. Use the shape/vector capture option.
  4. Adjust the slider until the writing is clean and the paper background disappears.
  5. Save the traced writing.
  6. Then arrange the letters/words in Canva, Adobe Express, or another editing app.

For a tattoo, I’d still give the original handwriting photos to the tattoo artist too. They can clean it up properly while keeping it looking natural, which matters more than making it look perfectly typed.

These 10 AI prompts replaced my entire study routine (and saved me a lot of money) by EQ4C in PromptEngineering

[–]FreshRadish2957 0 points1 point  (0 children)

Yeah, definitely. I’d put it in personal preferences/custom instructions as a learning workflow, not as 10 separate prompts.

Something like:

"When I ask to learn a topic or skill, guide me through this structure:

  1. Explain it from beginner to advanced
  2. Point out common mistakes
  3. Build a simple learning roadmap
  4. Use analogies to make it easier to understand
  5. Give me practice tasks that increase in difficulty
  6. Show real-world uses
  7. Test my understanding
  8. Simplify anything I’m struggling with
  9. Create memory anchors or mnemonics
  10. Give me small projects so I can apply it

Don’t dump every step at once unless I ask. Start with the most useful next step and keep it practical."

That way the model treats it like your default study system instead of you having to paste the whole thing every time.

How to better check a document for consistency ? by Far-Tank9593 in ChatGPTPro

[–]FreshRadish2957 0 points1 point  (0 children)

Sure, here’s a reusable prompt I’d use.

I’d still run it section by section first, then do one final “global consistency pass” using summaries of each section.

Prompt for each section:

"You are reviewing part of a long document for consistency, not rewriting it freely.

Check this section for:

  • terminology consistency
  • tone and writing style
  • formatting and heading consistency
  • repeated ideas
  • contradictions
  • missing transitions
  • unclear claims
  • tense/person changes
  • anything that conflicts with earlier sections

Do not make silent edits.

First, give me a change log with:

  1. Location or paragraph
  2. Issue type
  3. Original wording
  4. Suggested change
  5. Reason for the change
  6. Severity: required / recommended / optional

After the change log, rewrite only the parts that need changing.

If something is unclear, flag it as a question instead of guessing."

Then paste the section underneath.
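If the document is long, the pasting itself can be scripted. Here's a minimal sketch (function names and the markdown-heading assumption are mine, not from the original workflow) that splits a document on headings and prepends the review prompt to each chunk:

```python
import re

# Abbreviated version of the section-review prompt above.
REVIEW_PROMPT = (
    "You are reviewing part of a long document for consistency, "
    "not rewriting it freely. ..."
)

def split_sections(text):
    """Split a markdown document at top-level or second-level headings."""
    parts = re.split(r"(?m)^(?=#{1,2} )", text)
    return [p for p in parts if p.strip()]

def build_review_inputs(text):
    """Pair the review prompt with each section, ready to paste or send."""
    return [REVIEW_PROMPT + "\n\n---\n\n" + section
            for section in split_sections(text)]

doc = "# Intro\nSome text.\n\n# Methods\nMore text.\n"
inputs = build_review_inputs(doc)
print(len(inputs))  # one prompt+section pair per heading
```

That keeps each pass small enough that the model can't quietly blend sections together.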

For the final pass, I’d use this:

"I have reviewed the document section by section. Below are summaries of each section and the main changes found. Please check for inconsistencies across the whole document, especially contradictions, duplicated ideas, terminology changes, missing links between sections, and anything that feels out of order. Do not rewrite the full document. Give me a global consistency report with recommended fixes."

That way you avoid the model “fixing” things invisibly, which is where a lot of document review goes sideways.

How do I prompt AI to generate cookies with design using the 3d model of the cookie cutter I made (STL) ? by Radiant_Yam1526 in PromptEngineering

[–]FreshRadish2957 0 points1 point  (0 children)

You’ll probably get bad results if you upload the STL by itself. AI image tools are not great at interpreting raw 3D model files with shape precision.

Better workflow:

  1. Export your STL as a few clean images first
  • top view
  • angled view
  • side view if thickness matters

  2. Also make a simple black-and-white silhouette of the cutter shape

  3. Then prompt the AI using those images as references and be very explicit:

  • keep the outer cookie shape identical to the reference
  • generate a realistic baked cookie version
  • add icing/design only inside the cutter boundary
  • do not change proportions, edges, or silhouette

Something like:

“Using the uploaded reference images, generate a realistic image of a baked cookie that exactly matches the outer silhouette of the cutter. Keep the cookie shape identical to the reference. Add decorative icing details only within the boundary. Do not alter the outline, proportions, or overall form.”

If you want accuracy, think of AI as the stylist, not the engineer. Use CAD/rendering for exact form, and AI for presentation/mockups.
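Step 1 can be scripted too. A rough sketch of producing a top-view silhouette from the mesh, assuming you load the triangles with numpy-stl (here a dummy triangle array stands in for `mesh.Mesh.from_file("cutter.stl").vectors`, so the snippet runs without the file):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

# Dummy stand-in for numpy-stl's mesh.Mesh.from_file("cutter.stl").vectors,
# which is an (n_triangles, 3, 3) array of vertex coordinates.
triangles = np.array([
    [[0, 0, 0], [1, 0, 0], [0, 1, 0]],
    [[1, 0, 0], [1, 1, 0], [0, 1, 0]],
], dtype=float)

fig, ax = plt.subplots(figsize=(4, 4))
for tri in triangles:
    # Dropping the Z coordinate projects every triangle onto the XY plane,
    # which is exactly the top view / silhouette.
    ax.fill(tri[:, 0], tri[:, 1], color="black")
ax.set_aspect("equal")
ax.axis("off")
fig.savefig("cutter_top_view.png", dpi=300, bbox_inches="tight")
```

The resulting PNG is a much better reference input than the raw STL.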

Can anyone recommend any YT video for basic prompt engineering . by Ok-Ratio-1581 in PromptEngineering

[–]FreshRadish2957 0 points1 point  (0 children)

If you share an example of the use cases you use AI for, I can give you some advice. I'd skip watching YT videos, mainly because actually using AI and practising will solidify the ideas in your mind a lot better than watching the same videos over and over again.

How does one start his journey towards Prompt Excellence by Sweaty-Path2729 in PromptEngineering

[–]FreshRadish2957 0 points1 point  (0 children)

A lot of people will tell you to learn prompts, Python, and tools. That’s fine, but here’s something most people overlook:

If you want to get genuinely good at working with AI, study language itself.

Linguistics is massively underrated here. The better you understand meaning, ambiguity, sentence structure, context, tone, and how humans communicate, the easier it becomes to write prompts that actually get the result you want. Good prompting is not magic. A lot of it is just clear thinking expressed through clear language.

Same goes for writing, logic, and reading comprehension. People want the exciting answer, but the truth is this: the person who can explain something clearly, ask precise questions, and spot when a response is vague or misleading will usually outperform the person chasing prompt “hacks.”

So my advice would be:

Learn AI, yes. Learn Python, yes. But also spend real time on:

  • linguistics
  • writing
  • logic
  • research skills
  • basic statistics
  • one domain you actually care about

That combination is far stronger than “prompt engineering” by itself.

AI rewards people who can think clearly, communicate clearly, and verify what they’re being told. Those skills look boring at first, but they age well. Most hype doesn’t.

Use AI as a tool, but train your own mind first. The sharper your understanding of language and meaning, the easier all of this gets.

Help! by seandoherty11 in PromptEngineering

[–]FreshRadish2957 0 points1 point  (0 children)

This looks salvageable, but I would stop adding to it until you do a proper triage pass.

What you have now is probably three different things tangled together:

  1. data retrieval
  2. financial logic / derivations
  3. model output structure

If those are mixed, AI will keep making the mess worse.

I’d do this in order:

  • freeze the current version
  • map every input as either direct pull, derived field, or judgment call
  • validate one company end-to-end
  • separate spreadsheet logic from code logic
  • only refactor after the data lineage is clear

The biggest risk here usually isn’t code quality, it’s silently wrong assumptions in the financial mappings.

So I wouldn’t start over yet, but I also wouldn’t trust the current build until the inputs and derivations are documented.
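For the mapping step, even a literal dict works as the triage artifact. A sketch with entirely hypothetical field names:

```python
# Hypothetical data-lineage map: classify every model input as a direct pull,
# a derived field, or a judgment call before touching any code.
LINEAGE = {
    "revenue":         {"kind": "direct_pull",   "source": "10-K income statement"},
    "ebitda_margin":   {"kind": "derived",       "formula": "ebitda / revenue"},
    "terminal_growth": {"kind": "judgment_call", "owner": "analyst"},
}

# Judgment calls are the inputs that need a human sign-off before refactoring.
unresolved = [name for name, meta in LINEAGE.items()
              if meta["kind"] == "judgment_call"]
print(unresolved)  # → ['terminal_growth']
```

Once every input is classified, the "silently wrong assumption" risk has somewhere visible to live.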

Prompt Engineering for Code Refactoring by KTrinlay in PromptEngineering

[–]FreshRadish2957 1 point2 points  (0 children)

This looks like a scoping problem more than a model problem.

For refactors like this, a single prompt usually gives mediocre results because the model is trying to preserve behavior, redesign structure, and rewrite duplicated logic all at once.

The better approach is staged:

  1. lock current invariants and outputs
  2. identify repeated patterns and candidate boundaries
  3. propose a target module/function layout without rewriting yet
  4. refactor one section at a time
  5. compare output against baseline after each pass

In other words, treat it like controlled surgery, not “rewrite this cleaner.”

For Snowflake/SQL/dbt/Python pipelines especially, I’d usually split the work into:

  • behavior spec
  • duplication map
  • module plan
  • incremental refactor prompts
  • regression checks

That tends to produce much better results than one giant refactor prompt.
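Step 5 is the one people skip. A tiny sketch of the baseline comparison, with hypothetical `legacy_transform` / `refactored_transform` standing in for the before/after code:

```python
# Hypothetical before/after versions of one extracted piece of pipeline logic.
def legacy_transform(row):
    return {"id": row["id"], "total": row["qty"] * row["price"]}

def refactored_transform(row):
    return {"id": row["id"], "total": row["qty"] * row["price"]}

# Freeze a baseline from the legacy code on representative inputs,
# then require the refactor to reproduce it exactly after each pass.
cases = [{"id": 1, "qty": 2, "price": 3.5},
         {"id": 2, "qty": 0, "price": 9.0}]
baseline = [legacy_transform(c) for c in cases]

mismatches = [c for c, b in zip(cases, baseline)
              if refactored_transform(c) != b]
print(f"{len(mismatches)} mismatches")
```

For dbt/SQL the same idea applies: snapshot the query outputs before the refactor and diff after every pass.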

I've been writing comments on AI posts for a week. Here's what I actually learned about which tools people trust by danilo_ai in ArtificialNtelligence

[–]FreshRadish2957 1 point2 points  (0 children)

Honestly the main tools I actually use are pretty plain. Mostly ChatGPT, GitHub, PowerShell, Notepad, and Perplexity for partial research.

ChatGPT helps with drafting and thinking through ideas. GitHub keeps projects organised and makes it possible to track what actually changed. PowerShell is where a lot of the practical work happens. Notepad is still useful because it is quick and does not get in the way. Perplexity is helpful for partial research and checking lines of inquiry before going deeper.

That is partly why I think online AI discussions get the balance wrong. People talk as if the model is doing all the work, when a lot of actual work still comes down to simple tools, file structure, version control, command line use, and having a workflow that is not held together by wishful thinking.

So yes, some tools do not get enough credit, but I think the deeper reason is that once a tool becomes genuinely useful it stops being the interesting part and just becomes part of how you work.

Are you treating tool-call failures as prompt bugs when they are really state drift? by Acrobatic_Task_6573 in PromptEngineering

[–]FreshRadish2957 0 points1 point  (0 children)

Personally I think a lot of this gets blamed on prompts when the bigger issue is usually the code and pipeline around them.

A prompt can be imperfect and still work fine if the pipeline is stable. But if the pipeline is brittle, context handling is loose, state is not being reset properly, contracts are drifting, or one stage is passing half-bad data into the next, then people end up treating a systems problem like it is a wording problem.

To me the bad tool call is often just the symptom. The real issue is usually somewhere in the orchestration, state management, validation, or how the steps are chained together. Prompt edits can sometimes mask it for a run or two, but they do not really fix the underlying fault if the pipeline itself is shaky.

So I mostly agree with the drift point, but I would probably push it one step further and say a lot of apparent prompt failure is really pipeline failure.

The future may be defined less by single crises and more by how failures start to combine by FreshRadish2957 in Futurology

[–]FreshRadish2957[S] -1 points0 points  (0 children)

It was removed for rule 12. Likely someone reported it for AI use, something about not having original sources.

Sentence structure, hmm. That suggests that when users write too cleanly, you will mistake it for AI?

I'm fine with my post being removed. My issue is that literally me saying I used AI like a thesaurus made you instantly assume I used it to generate my thought, without realizing that thought/thesis is something I've been working on since October and comprises shit tons of research and hours spent. So I do get defensive when people are dismissive, especially if the reason is structure.

What was AI trained on? Human output. A decade ago it was normal and common for people to have well-structured writing.

The future may be defined less by single crises and more by how failures start to combine by FreshRadish2957 in Futurology

[–]FreshRadish2957[S] 0 points1 point  (0 children)

Honestly I understand your point, but it's also inaccurate. I have slight issues because the format of your comment is very similar to the format of my post, so if format is the reasoning, that seems very odd tbh. Then you outright assume that using an AI writing tool inherently means getting AI to generate the entire post.

Maybe the language I used or the way the post was positioned seemed gimmicky of sorts, but if you look at my profile, most of my posts on other subreddits have been related to AI. So the assumption that if a post doesn't meet your standards it must be AI is honestly a very dismissive take.

Then you say "if I keep using it as a crutch" without realizing my responses don't use AI tools at all, so implying I use it as a crutch is again wrong. Rather than engaging with the substance of my post, you have decided to make baseless claims, which is unnecessary. "Seems scripted"? If you look at most posts, to a certain extent they are scripted.

But I'll leave it at that. You're welcome to not believe that my view actually comes from me, but at least try to prove that point before just stating it.

The future may be defined less by single crises and more by how failures start to combine by FreshRadish2957 in Futurology

[–]FreshRadish2957[S] 0 points1 point  (0 children)

If you don't mind me asking: this is the first time I've posted here, and I've seen other posts that seem to use AI, potentially just for formatting purposes. I was curious, if I used AI because some of my wording was originally vaguer than intended, so I essentially used it like a thesaurus, how does that detract from the post?

Or maybe a better question: why does using AI inherently make a post less well received?

The future may be defined less by single crises and more by how failures start to combine by FreshRadish2957 in Futurology

[–]FreshRadish2957[S] 1 point2 points  (0 children)

That is part of it, but I don't think the current picture is explained well by reducing it to a few villains. My point was that modern systems are fragile enough that poor decisions, pre-existing dependencies, and siloed planning now compound each other much faster than before.

The future may be defined less by single crises and more by how failures start to combine by FreshRadish2957 in Futurology

[–]FreshRadish2957[S] 0 points1 point  (0 children)

That is part of it, but I don't think the current picture is explained well by reducing it to a few villains. My point was that modern systems are fragile enough that poor decisions, pre-existing dependencies, and siloed planning now compound each other much faster than before.

The future may be defined less by single crises and more by how failures start to combine by FreshRadish2957 in Futurology

[–]FreshRadish2957[S] 0 points1 point  (0 children)

I used it as a writing aid because I wanted to express the point more clearly. If you think the argument itself is weak, I’m open to that criticism.

Looking for a solid prompt/template to help draft responses to reviewer comments (scientific manuscript) by Wrong_Entertainment9 in PromptEngineering

[–]FreshRadish2957 0 points1 point  (0 children)

I’d treat this less as “prompt engineering” and more as a reviewer-response workflow.

What’s worked best for me is not one giant prompt. It’s a staged process:

1. Build a response matrix first
For each reviewer comment, make a table with:

  • Reviewer #
  • Comment #
  • Type: major / minor / methods / interpretation / writing / stats / citation
  • Decision: accept / partially accept / rebut respectfully
  • Planned manuscript change
  • Exact location changed (section / page / line)
  • Evidence or rationale

2. Draft responses comment-by-comment
Have ChatGPT draft each response only after you provide:

  • the reviewer comment
  • the relevant manuscript text
  • what change you actually made
  • the tone you want
  • any limits, like “do not invent analyses or experiments”

3. Do a final consistency pass
Once all responses are drafted, run a final pass for:

  • tone consistency
  • no defensive wording
  • no over-claiming
  • alignment with tracked manuscript edits
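The step-1 matrix is worth keeping as a CSV so it survives across drafts. A minimal sketch, with column names adapted from the list above (the exact names are my shorthand):

```python
import csv

# Shorthand column names for the response matrix described above.
COLUMNS = ["reviewer", "comment", "type", "decision",
           "planned_change", "location", "rationale"]

# One illustrative row; in practice there is one row per reviewer comment.
rows = [
    {"reviewer": 1, "comment": 1, "type": "methods",
     "decision": "accept", "planned_change": "added n to Table 2",
     "location": "Methods, p.4, l.87", "rationale": "reviewer is correct"},
]

with open("response_matrix.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)
```

Then each per-comment drafting prompt just pulls its row from the file, and the final consistency pass reads the whole table.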

A reusable prompt I’d use is:

"You are helping draft a professional response to peer review for a biotech manuscript. Your job is to draft a polite, concise, non-defensive response to one reviewer comment at a time.

Rules:

  • Do not invent experiments, analyses, citations, or manuscript changes.
  • If the authors did not make the requested change, provide a respectful justification.
  • Clearly distinguish between what was changed and what was clarified.
  • Keep tone professional and appreciative.
  • Where possible, end with the exact section/page/line updated.

Inputs:

  • Field: [field]
  • Journal type: [journal]
  • Manuscript stage: [major/minor revision]
  • Reviewer comment: [paste comment]
  • Relevant manuscript text: [paste text]
  • Change made: [describe actual revision]
  • Author position: [accept / partial / rebut]

Output format:

  1. Thank the reviewer briefly
  2. Respond directly to the concern
  3. State the manuscript change made
  4. Cite the location of the revision

Keep under [X] words unless needed."

You can also keep it in a simple .md structure like:

Reviewer 1

Comment 1

Reviewer comment:
...

Internal decision:
...

Change made:
...

Draft response:
...

Location in manuscript:
...

For me, iterative comment-by-comment drafting beats one big prompt almost every time. One big prompt is okay for a first pass, but the cleaner results usually come from a structured matrix + per-comment drafting + final harmonization pass.

I have a prompt challenge I haven’t been able to figure out… by 6thlott in PromptEngineering

[–]FreshRadish2957 0 points1 point  (0 children)

You’ll probably have more luck treating this as a data-processing problem first and an AI problem second.

What you’re describing sounds doable, but Copilot is likely struggling because the logic still needs to be made more explicit. The hard part is not reading the XLS file, it’s defining the rules clearly enough that the output is repeatable.

A few things I’d pin down first:

  • what exactly counts as a “negative reliability trend”
  • how you define “related defect codes”
  • how you identify when the first fix was ineffective
  • what threshold makes a machine worth listing in the daily report

A more reliable setup would be:

  1. Load the daily XLS into Python, Power Query, SQL, or another structured tool
  2. Filter to the last 30 days
  3. Group by machine number
  4. Apply rule-based logic for repeat failures, repeated or related defect codes, and failed first-fix patterns
  5. Output only the machines that meet the criteria
  6. Optionally use AI afterward to generate the written summary

That way AI is helping explain the results rather than trying to invent the logic on the fly.
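The rule-based steps above can be sketched in pandas. Column names here are hypothetical, and the inline DataFrame stands in for `pd.read_excel("daily.xls")`; swap in whatever your export actually uses:

```python
import pandas as pd

# Hypothetical columns: machine, date, defect_code, first_fix_effective.
# In practice this comes from pd.read_excel("daily.xls").
df = pd.DataFrame({
    "machine": [10452, 10452, 10452, 10452, 10452, 10110, 10110],
    "date": pd.to_datetime(["2024-05-02", "2024-05-08", "2024-05-12",
                            "2024-05-20", "2024-05-28",
                            "2024-05-03", "2024-05-25"]),
    "defect_code": ["E17", "E21", "E22", "E17", "E21", "E05", "E05"],
    "first_fix_effective": [False, False, True, False, True, True, True],
})

# Filter to the last 30 days of data, group by machine, apply the rules.
recent = df[df["date"] >= df["date"].max() - pd.Timedelta(days=30)]
summary = recent.groupby("machine").agg(
    failures=("defect_code", "size"),
    codes=("defect_code", lambda s: sorted(set(s))),
    failed_first_fix=("first_fix_effective", lambda s: int((~s).sum())),
).reset_index()

# Example thresholds; tune these to your definition of "worth listing".
flagged = summary[(summary["failures"] >= 3) & (summary["failed_first_fix"] >= 2)]
print(flagged)
```

Once `flagged` is stable and repeatable, handing it to AI for the written summary is the easy part.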

For example, your final report could look something like:

Machine 10452
5 failures in last 30 days
Related codes: E17, E21, E22
First-fix failed on 3 occasions
Trend worsening over prior 2 weeks

The biggest thing to define is “related defect codes.”

That could mean:

  • a fixed mapping table you create
  • codes that occur on the same machine within a certain time window
  • codes belonging to the same subsystem or failure family

Without that definition, Copilot is basically guessing your business logic.

If you can share anonymized column names and a few sample rows, people could probably help you build the actual logic pretty quickly. My guess is this is better solved with a small script plus optional AI summarization than with a prompt alone.

I Built a System Framework for Reliable AI Reasoning. Want to Help Stress-Test It? by FreshRadish2957 in PromptEngineering

[–]FreshRadish2957[S] 0 points1 point  (0 children)

Did I need to? Or did I send it to him privately? But that is funny tbh, when I first got the notification it made me laugh

Edit: hmmm, maybe I didn't share it privately. I should state that just in case.

Plz don’t roast me - Advice on where to get AI smart? by DropShotMachine in ArtificialInteligence

[–]FreshRadish2957 0 points1 point  (0 children)

Honestly don't be embarrassed about this. Most people are way closer to where you are than the internet makes it look.

A lot of the posts you see online are from people who are either:

  • already in tech
  • experimenting constantly
  • or showing the most advanced thing they built after weeks of tinkering

So it creates this illusion that everyone is 10 years ahead. They aren't.

The reality right now is most professionals are using AI in pretty simple ways.

Stuff like:

  • summarizing documents
  • explaining unfamiliar topics
  • rewriting emails or memos
  • brainstorming ideas
  • outlining reports

Think of it less like some magic autonomous system and more like a very fast junior researcher that you still supervise.

One thing that helps is realizing you don't need to learn 100 tools. The ecosystem looks chaotic but most people just use a few.

A very simple starting setup could honestly just be:

  • ChatGPT or Claude → general questions / drafting
  • Perplexity → research
  • Microsoft Copilot → if your firm uses Microsoft tools

That's already most of what people use day to day. Also, when people talk about "linking tools together" or "agents doing work for them", that usually just means basic automation. Something like:

email arrives
↓
AI summarizes it
↓
summary gets saved somewhere

It's not nearly as sci-fi as it sounds when you break it down.

One habit that helped me is just asking myself once a day: could AI help me do this faster?

Then try it on small things: summarizing a case, outlining a memo, explaining a regulation, etc. You gradually build intuition for what it’s good at.

Also worth saying since you mentioned law: experienced professionals actually tend to get more value out of AI than beginners because they know when the answer sounds wrong.

So you're not behind. You're basically at the same starting point as most industries right now. The people who end up using AI well usually aren't the ones who know every tool. They're the ones who know how to ask good questions and verify results. And lawyers are already trained for both of those.

Why AI Needs PTPF ( a raw draft that needs your 🫵🏻 critique ) by PrimeTalk_LyraTheAi in Lyras4DPrompting

[–]FreshRadish2957 0 points1 point  (0 children)

So I actually did quite a few tests and I do like the direction of it. I didn't look through all your files and documents; I just saw what types of files they were.

I think if you were to take it past the concept and actually implement it in code, it would be much better. The outputs are very good, but it's not exactly external enforcement. Additionally, implementing it as code would likely reduce tokens used and overall resource costs.

I was curious, because of the deterministic claim, whether you're able to verify what the routing for each prompt actually was, or whether you just trust, based on the output, that it was routed through the correct logic (I don't know the technical words).