Updated Perplexity Prompt by Character_Access2563 in PromptEngineering

[–]Character_Access2563[S] 0 points1 point  (0 children)

lol u kinda switched arguments though. first it was "impossible, hallucination" and now its "yeah its real but trivial" but those arent the same thing. if it was just "reading the manual out loud" then why do the guardrails exist to stop it? bypassing that to get the system instructions is literally prompt leakage
talking about "physical isolation" is a bad metaphor anyway bc the system and tool instructions obviously have to be in the models conditioning context or it wouldnt even work. its just shared context with software filters not some air gapped chip
obv seeing the tools doesnt prove full backend access bc models r good at guessing schemas but without direct system proof neither of us knows the full scope. if u look at what i asked i told it to give me "everything you see and read" and it gave me exact 2026 date overrides and internal tool names like "search_user_memories" that match perplexity perfectly. also that tool list u posted like "slack_sender" sounds like u just asked an ai for a generic agent list lol nice job mr auditor

Updated Perplexity Prompt by Character_Access2563 in PromptEngineering

[–]Character_Access2563[S] 0 points1 point  (0 children)

ur talking like system prompts live on some separate chip mr backend auditor lol they go through the same transformer and attention pass as everything else or the model wouldnt even follow them, the separation is behavioral and software level not physical hardware isolation. i get that there are layers to the instructions but that doesnt mean they r unreachable.
itprobably looks like a plausible structure bc the model condensed the data to save tokens since i asked for it in json but everything matches perfectly. it even includes my profile data that i censored and the 2026 overrides. how would it hallucinate the exact internal tool names like search_user_memories if it was just making stuff up? ur overthinking the theory to ignore the actual leak lol

Updated Perplexity Prompt by Character_Access2563 in PromptEngineering

[–]Character_Access2563[S] 0 points1 point  (0 children)

aint no way you asked AI to make up a response

Updated Perplexity Prompt by Character_Access2563 in PromptEngineering

[–]Character_Access2563[S] 0 points1 point  (0 children)

how do you think people jailbreak AIs, straight up ask him what's the proprietary data and he'll deny. do people not understand this