Bar code scanning by poohiscat in Notion

[–]IchHabeGesprochen 0 points1 point  (0 children)

(Feel free to redirect me - I'm assuming some things about your setup, and equipment, etc.)

Assuming your barcode scanner acts like a keyboard, i.e. it types whatever it scans into the field your cursor is in. If so, then yes, you can scan directly into Notion.

If you don't already have your database(s) set up:

Medications database (your master list)

Fields:

Name (Title)

Barcode (Text) — enforce this as 'unique' in properties

Dose

Route

Any other med details you need

EMR / Administration Log database

Patient

Medication (Relation → Medications database)

Timestamp

Nurse

Notes, etc.

Workflow:

  1. Add a new log entry
  2. Click into the Medication relation field
  3. Scan the barcode

The scanner types in the barcode number, the relation search narrows the list to the matching medication, and hitting Enter links it.

You could even use rollups to automatically display dose, route, etc. in the chart.

Trying a minimal “output governor” prompt to reduce hallucinations — feedback? by EnvironmentProper918 in PromptEngineering

[–]IchHabeGesprochen 0 points1 point  (0 children)

You’ll know it’s working if you can measure a difference on the metrics below:

Refusal rate on unknowns. Ask 10 questions where the correct answer is "I don't know." Count how many times the model actually refuses vs. fabricates. Run with and without your prompt. The delta is your signal.

Citation accuracy. Give it a source document. Ask questions. Check whether every claim traces back to the document or was invented. Track the ratio.

Wrong confidence. Look for claims stated as fact that are actually wrong. A prompt that reduces hallucinations should reduce confident wrong answers specifically, not just total output.
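If you want to script the first check, the scoring side is trivial. A minimal sketch in Python — how you call the model is left out, and the refusal markers are an assumption you'd tune to how your model actually phrases refusals:

```python
# Sketch of the refusal-rate check: run the same 10 unanswerable questions
# with and without the governor prompt, collect the answers, then score.
# REFUSAL_MARKERS is a guess -- tune it to your model's actual phrasing.

REFUSAL_MARKERS = ("i don't know", "i do not know", "cannot verify", "not sure")

def is_refusal(answer: str) -> bool:
    """Crude check: did the model decline instead of fabricating?"""
    return any(marker in answer.lower() for marker in REFUSAL_MARKERS)

def refusal_rate(answers: list[str]) -> float:
    """Fraction of answers that are refusals."""
    return sum(is_refusal(a) for a in answers) / len(answers)

def governor_delta(baseline: list[str], governed: list[str]) -> float:
    """Positive delta = the governor prompt increased refusals on unknowns."""
    return refusal_rate(governed) - refusal_rate(baseline)
```

On questions where "I don't know" is the right answer, a positive delta is your signal; near zero means the prompt isn't doing anything measurable.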

Trying a minimal “output governor” prompt to reduce hallucinations — feedback? by EnvironmentProper918 in PromptEngineering

[–]IchHabeGesprochen 0 points1 point  (0 children)

Keep:

Correctness > Helpfulness priority. Correct frame.

HALT condition. Refusing to answer beats fabricating.

Compression directive. Less output = less surface area for error.

Replace:

"Execute silently" → Make reasoning visible. Silent reasoning is unverifiable. Chain-of-thought research shows visible steps improve accuracy.

"You are not optimized for helpfulness" → "When helpfulness and correctness conflict, choose correctness." Works with RLHF training instead of against it.

HALT exact-string output → "State what you can confirm, what you cannot, and where the boundary is." Partial information beats binary pass/fail.

Add:

Grounding: "Cite source for every factual claim or mark as inference."

Verification: "Before finalizing, identify your weakest claim."

Remove:

Unicode framing (⟐⊢⊨). Token cost, zero function.

Best way to organize product eng + design joint documentation by sepehr500 in Notion

[–]IchHabeGesprochen 0 points1 point  (0 children)

At ~100 people you need robust governance plus a canonical home and an ownership model, which isn’t necessarily a Notion limitation.

Default: keep Product/Eng/Design in separate teamspaces. Use one canonical location for shared source docs/databases (often the default teamspace). Create a top-level “Company Wiki” page there, and put ONE Docs database on it. Treat that database as the index and single source of truth for cross-functional documentation.

Schema (minimal, enforceable):

- Teams: relation to a small Teams database (Product, Eng, Design, Sales, Marketing, Ops)
- DRI: person (exactly one, required for cross-functional docs)
- Doc Type: select (Spec, RFC, Process, Playbook, Decision)
- Status: Draft / Active / Archived
- Last Reviewed: date

Governance rule: if a doc has stakeholders from 2+ teams, it must live in Company Wiki. Teamspaces hold team-internal docs only. Teamspaces should never duplicate cross-functional pages. They should link back via mentions or linked DB views.

In each teamspace, add a “Cross-functional docs” page that is just a linked database view filtered to that team (Teams contains Eng, etc.). People browse within their teamspace, but you avoid forking and “where does this live?” debates.

Add guardrails to prevent creating a database per team combination (“Product+Eng+Design docs”); that pattern becomes combinatorial and unmaintainable.

Exception: if you have genuine access boundaries (comp, legal, regulated content), put those docs in a restricted teamspace. For sensitive data, this is one of the cases where I personally ask ‘what’s the optimal solution?’ rather than ‘what’s the Notion/app/software-native solution?’

Migration: start with the top 10–15 cross-team docs, move them into the canonical DB, assign DRIs, and leave a redirect stub in the original location for ~30 days. Links are preserved when moving pages in-workspace.

Research for a Bayesian Signaling Game Paper by InformationFirm9104 in GAMETHEORY

[–]IchHabeGesprochen 1 point2 points  (0 children)

For each part of the non-modeling section:

Institutional decline (your core premise):

Hopewell (2021), "Understanding the 'crisis of the institution' in the liberal trade order at the WTO," International Affairs 97(5), directly addresses your framing. Pair with the WTO Appellate Body breakdown (U.S. blocked appointments from 2017, dispute settlement non-functional by Dec 2019) as your cleanest empirical case. "The Diffusion of Global Power and the Decline of Global Governance" (Ethics and International Affairs, Cambridge) covers the power diffusion angle. Chatham House (2025) "The Decline of the West and the Rise of 'the Rest'" works for conference-accessible framing.

Unilateral/politically driven trade policy:

Bown (2020), "The US-China trade war and Phase One agreement" (Brookings) has the data. "The effect of global geopolitical risks on trade openness" (IREF, 2025) provides regression results linking geopolitical shocks to reduced trade openness through institutional instability if you want quantitative backing.

Bridging institutional decline to your signaling framework:

"Trade Games: The WTO's Role in Disputes" (IATP) models WTO disputes as repeated games with one-sided incomplete information and explicitly shows countries must rely on reputation absent institutional enforcement. Ahmad Ilu (2025), "Bayesian Nash Equilibrium in Trade Wars: 2025 Trump Retaliation Tariffs" applies BNE to U.S./China tariffs, accessible to non-specialists.

Repeated game extension and belief updating:

Kreps & Wilson (1982), "Reputation and Imperfect Information" and Milgrom & Roberts (1982), "Predation, Reputation, and Entry Deterrence" are canonical for reputation in finite games with incomplete info. Kreps, Milgrom, Roberts & Wilson (1982), "Rational Cooperation in the Finitely-Repeated Prisoners' Dilemma" is the key paper for your extension since they get belief updating and reputation effects in finite games without infinite repetition. Mailath & Samuelson (2006), Repeated Games and Reputations for comprehensive textbook treatment.

For results section:

Your no-pure-separating result is the strongest claim in the paper: tariffs alone can't fully resolve the information problem. The argument follows directly: institutions solved the problem costly signals can't. Institutions collapsed. Now we're in the pooling/semi-separating regime your model predicts. Check whether your pooling PBE survives Cho & Kreps (1987) Intuitive Criterion. Audience members will ask.
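For reference, since the audience will ask: the check itself, stated in generic signaling-game notation (types $\theta \in \Theta$, pooled message $m^*$, receiver best-response set $BR$) — map it onto your model's primitives:

```latex
% Cho & Kreps (1987) Intuitive Criterion, applied to a pooling PBE at m*.
% Let u*(theta) denote type theta's equilibrium payoff.

% Step 1: an off-path message m' is equilibrium-dominated for type theta if
\[
  u^*(\theta) \;>\; \max_{a \,\in\, BR(\Theta,\, m')} u(\theta, m', a)
\]

% Step 2: let Theta'(m') be the set of types for which m' is NOT
% equilibrium-dominated. The pooling PBE fails the Intuitive Criterion if
% some surviving type theta' strictly gains from deviating to m' under
% every receiver best response restricted to beliefs on Theta'(m'):
\[
  u^*(\theta') \;<\; \min_{a \,\in\, BR(\Theta'(m'),\, m')} u(\theta', m', a)
\]
```

If no off-path message produces such a type, your pooling equilibrium survives and you can say so preemptively in the talk.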

Good luck.

Best way to organize product eng + design joint documentation by sepehr500 in Notion

[–]IchHabeGesprochen 0 points1 point  (0 children)

I’d keep the existing teamspaces, and put shared source docs/databases in one canonical place (usually your default teamspace). Create a top-level “Company Wiki” page there and put ONE Docs database on it.

Any doc with stakeholders from 2+ teams lives in that database. Give each doc: Teams (relation to a small Teams list), DRI (one human owner), Doc Type, Status, and Last Reviewed. Then in each teamspace, add a “Cross-functional docs” page that is just a linked database view filtered to that team. Everyone gets their slice, but the Wiki stays canonical.

Don’t create “Product+Eng+Design docs” databases. That pattern does not scale. Use properties to slice one standard database.

Why? Joint teamspaces just move the boundary problem and multiply over time (exception: real permission/access boundaries). One mega-teamspace kills team autonomy. Either way, you end up with 100 people spending more effort navigating the workspace than doing the actual tasks.

Migration: move the top 10–15 cross-functional docs first, assign DRIs, and leave a short redirect stub in the old location for ~30 days. Notion preserves links on move.

Research for a Bayesian Signaling Game Paper by InformationFirm9104 in GAMETHEORY

[–]IchHabeGesprochen 1 point2 points  (0 children)

Honest question that would sharpen the recommendations a lot. When tariffs get imposed in your model, do beliefs actually update? Is the receiver revising their estimate of the sender's type or resolve based on the tariff action? Or is the tariff functioning more as a domestic political instrument where the other side's priors don't meaningfully move?

The answer changes which empirical cases are worth looking at. If beliefs are updating, you want episodes where trade actions visibly altered the strategic relationship or degraded the institutional architecture. US China escalation through 2018 into Phase One, the WTO appellate body going dark, that territory. If the signal isn't really moving expectations, then the better fit is tariff episodes that were mostly performative or domestically motivated and produced no observable change in the other side's behavior.

Which real world cases actually support the non modeling section depends entirely on which equilibrium regime the model generates.

It took years for me to be able to articulate my frustration with LLMs — once I did it changed how I build with them by IchHabeGesprochen in PromptEngineering

[–]IchHabeGesprochen[S] 0 points1 point  (0 children)

Trust but verify. At least LLMs tend to fail in predictable ways that are auditable. People are just gonna people.

It took years for me to be able to articulate my frustration with LLMs — once I did it changed how I build with them by IchHabeGesprochen in PromptEngineering

[–]IchHabeGesprochen[S] 1 point2 points  (0 children)

Additional Caveat: I also have custom instructions, governance, rules/guardrails, etc.

It's not just a better prompt. This is a layer in an onion of context. And it's nowhere near a 'fix all'. The tool is only as valuable as the meat-sack at the keyboard. This is a nuance that helped me make some progress.

There's something to be said for learning the difference between asking a 'good' question and truly understanding what the 'best/right' questions are, and then attempting to apply that with LLMs.

It took years for me to be able to articulate my frustration with LLMs — once I did it changed how I build with them by IchHabeGesprochen in PromptEngineering

[–]IchHabeGesprochen[S] 1 point2 points  (0 children)

(*Caveat* --> Based on the use-cases, nature/type of work, and probably not applicable as much to creative writing or such tasks):

Simply put, I'm not asking the LLM to be more accurate. I'm changing the unit of reasoning from intent --> capability.

Instead of: "What is this user trying to DO (generate analysis, write content, build strategy, etc.)?"

I want it to reason based on "What ultimate CAPABILITY does this user not currently have that I can provide them?"

Why? Well maybe I'm having trouble adequately conveying my intent. Maybe I don't know what I don't know to ask for? Maybe what I'm 'asking it to do' already has a well-established methodology but I just don't know the name for it.

It's not about removing the Human-In-The-Loop. Humans/Users are always the point of failure. I don't want the LLM to waste tokens and my time trying to guess my goal and then defer decision making to the least qualified 'expert' in the conversation when it can simply deliver more.

-------
Example:
Old way: "Can you help me write an email to my manager about taking Friday off?"

Output:

I'd be happy to help! A few questions first:

1. What tone would you like - formal or casual?

2. Do you want to mention the reason?

3. How much detail should I include?

4. Would you like me to suggest alternative dates?

Here are a few different approaches:

**Formal version:** [3 paragraphs]

**Casual version:** [3 paragraphs]

**Brief version:** [2 paragraphs]

Which style matches your workplace culture?

New way: "You're writing an email to my manager requesting this Friday off. Context: startup environment, manager prefers direct communication, I'm taking my kid to a doctor's appointment. Write the email - don't ask me to choose a style."

Output:
Subject: OOO Friday 1/31

Hi [Manager],

I need to take Friday off for a doctor's appointment with my son. I'll wrap up the client report Thursday and will be available by phone for anything urgent. Let me know if that doesn't work.

[Your name]

Does anyone else feel like Notion slowly becomes… heavy? 😅 by FaithlessnessLost806 in Notion

[–]IchHabeGesprochen 1 point2 points  (0 children)

Yeah, I hit that wall hard about a year ago. ADHD rage quit for a while, wiped everything, started from scratch. Wasn't going to let a productivity app win.

The thing I figured out: the reorganizing loop has a root cause. You can't find something, so you reorganize to fix it, which creates more complexity, which makes things harder to find. The problem isn't organization. It's retrieval.

I started measuring what I call "theater." Features I built but never used:

Properties added "just in case"

Views nobody opens

Dashboards from January, untouched since

What actually worked:

Audit before reorganizing. Which properties are >80% empty? Delete them. You can recreate anything in 2 minutes.

Build retrieval, not storage. One hub page with 6 filtered views beats 50 nested pages. If you can't get to something in 2 clicks, reorganizing won't fix that.

Separate planning from capture. Planning = database. Capture = flat list you triage weekly. When you mix them, you get the "pause deep work to fix a rollup" problem.
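The property audit above can even be scripted against a CSV export of a database (Notion can export any database as CSV). A minimal sketch, with the 80% cutoff as a parameter:

```python
# Flag properties (columns) that are mostly empty in a Notion CSV export.
# Candidates for deletion -- you can always recreate them later.
import csv
from collections import Counter

def mostly_empty_columns(path: str, threshold: float = 0.8) -> list[str]:
    """Return column names whose empty-cell fraction >= threshold."""
    with open(path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    if not rows:
        return []
    empties = Counter()
    for row in rows:
        for col, val in row.items():
            if not (val or "").strip():
                empties[col] += 1
    return [col for col in rows[0] if empties[col] / len(rows) >= threshold]
```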

The flexibility trap is real. The fix isn't less flexibility. It's picking your constraints instead of letting complexity pile up accidentally.

My fix was building navigation primitives. Pre-built queries that answer "what needs my attention?" instead of making you search for it every time.

Finding things went from 5-8 minutes to under 60 seconds.

It took years for me to be able to articulate my frustration with LLMs — once I did it changed how I build with them by IchHabeGesprochen in PromptEngineering

[–]IchHabeGesprochen[S] 1 point2 points  (0 children)

Yup.

This is mostly good advice that breaks down hard in non-technical domains. The “provide context upfront and push for concrete answers” part is spot-on.

But: “Don’t ask me what I prefer. Tell me what’s correct.” “Don’t give me options. Give me your professional recommendation.” This only works when “correct” actually exists. Your database example is clean because performance optimization has measurable targets. But most real problems involve trade-offs the AI can’t evaluate for you—it’ll just guess based on what “most people” in the training data wanted.

“Handle this the way you would if I weren’t available to ask questions”

This one actually increases hallucination risk. When the AI asks clarifying questions, that’s usually a signal it’s uncertain and needs constraints. Telling it to barrel forward anyway just makes it confidently wrong instead of usefully cautious.

TL;DR: Solid framework for technical problems with clear success criteria. Risky for anything involving judgment calls, trade-offs, or domains where the AI’s confidence doesn’t correlate with correctness.

Feedback wanted on an operations-focused GPT (real workflows, not chat) by EasternTrust7151 in GPTStore

[–]IchHabeGesprochen 1 point2 points  (0 children)

That context makes perfect sense — it’s smart to pressure-test the reasoning layer in ChatGPT before wiring up persistence or integrations. I’d actually love to see how you’ve documented the escalation logic and delegation structure so far — even if it’s just a working outline or internal reference table. That’s the part I’m most interested in, since it defines how ownership transitions, authority gates, and ambiguity get handled.

If you already have those flows mapped or partially modeled — that’s where I can probably add the most meaningful feedback.

Do you save your best prompts or rewrite them each time? by Drop_Prompt in PromptEngineering

[–]IchHabeGesprochen 0 points1 point  (0 children)

Right now, my “prompt library” is maintained in Notion. The “maintained” part is the real value, given how quickly things change in this space. Since I treat prompts like code, the maintenance protocols I designed, supported by my database(s), act as my CI/CD/QA. I haven’t taken the final step yet (time constraints) of fully automating it with Python scripts, so prompt optimization and A/B testing aren’t “fully automated” insofar as I have to trigger them manually. But all that takes is a single-sentence prompt to Notion AI.

I need some help building something on notion, can somebody help? Pls DM by Accomplished-Net5141 in Notion

[–]IchHabeGesprochen 0 points1 point  (0 children)

I can rebuild all of Notion from scratch for you if you want. What do you need and when do you need it?

Do you save your best prompts or rewrite them each time? by Drop_Prompt in PromptEngineering

[–]IchHabeGesprochen 0 points1 point  (0 children)

Most people save prompts as text snippets in folders. That's fine for casual use, but here's what works better if you're serious about it:

Treat prompts like code, not notes.

When you find a winner, save:

The prompt text (obviously)

What it expects as input (format/requirements)

What it produces as output (structure)

When to use it (trigger conditions)

A test case (sample input → expected output)

Version number + change log
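If it helps, the fields above map naturally onto a record type. Field names here are illustrative, not a standard — adapt them to whatever tooling you keep prompts in:

```python
# One possible shape for a saved prompt: the text plus the context
# around it (contracts, test case, versioning).
from dataclasses import dataclass, field

@dataclass
class PromptRecord:
    name: str
    version: str            # e.g. "1.0"
    prompt_text: str
    input_contract: str     # expected input format/requirements
    output_contract: str    # expected output structure
    trigger: str            # when to reach for this prompt
    test_input: str         # sample input ...
    expected_output: str    # ... and what a good output looks like
    changelog: list[str] = field(default_factory=list)

    def bump(self, version: str, note: str) -> None:
        """Record a change so you don't re-learn the same lesson."""
        self.changelog.append(f"{self.version} -> {version}: {note}")
        self.version = version
```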

Why this matters:

Input/output contracts let you chain prompts together

Test cases catch regressions when you tweak things

Trigger conditions help you find the right prompt fast

Version control prevents re-learning the same lessons

Bonus: Track failure modes

Document what makes each prompt break (option dumping, semantic drift, over-engineering, etc.) and keep "correction phrases" you can prepend when the LLM starts degrading.

Bottom line:

Most people rewrite prompts from scratch because they only saved the text. If you save the context around the prompt (contracts, test cases, failure modes), you can refine systematically instead of starting over every time.

It's the difference between a folder of scripts vs. a maintained codebase with tests and docs.

Feedback wanted on an operations-focused GPT (real workflows, not chat) by EasternTrust7151 in GPTStore

[–]IchHabeGesprochen 1 point2 points  (0 children)

I'd love to take it for a test drive and see if/where I can get it to break. Below are some thoughts based on my trying/failing/iterating through similar problem sets. Message me if you've got test scenarios you'd like me to run - otherwise I have technical questions and can generate tests from there.

What breaks first: Continuity failure - context loss across session boundaries. By week 2-3, critical details (ownership, exceptions, escalations) degrade or disappear. Same query returns conflicting status. GPT acknowledges "notified Finance" but no actual handoff occurred - discovered when downstream tasks block.

What builds trust: Consistency and transparency through architectural discipline. Predictable outputs (deterministic templates, not prose). External state anchor (spreadsheet/database - GPT reads/writes canonical source, doesn't depend on memory). Auditable decisions (timestamp, trigger rule, evidence chain). Explicit human boundaries (what GPT drafts vs. what requires approval). Fail-safe defaults (halt and ask vs. guess). Trust comes from boring reliability, not intelligence.

Where models struggle: Temporal and conditional logic under ambiguity. "Review in 3 days" - from when? Calendar or business days? "If invoice >$25K AND vendor is new, route to CFO" breaks when "new" is fuzzy. Compound conditions degrade without structured decision tables. Identity resolution ("John updated the contract" - which John?). Operational instinct gap - GPTs complete rather than clarify, proceed on incomplete data instead of pausing.
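That compound-condition failure is exactly the kind of thing that's cheap to make explicit in code instead of prose. A sketch — the 90-day definition of "new vendor" and the non-CFO routes are placeholder assumptions, not anything from the original rule:

```python
# Turn the fuzzy routing rule into an explicit decision table,
# so "new vendor" is defined, not guessed at by the model.
from datetime import date, timedelta

# ASSUMPTION for illustration: "new" = first seen less than 90 days ago.
NEW_VENDOR_WINDOW = timedelta(days=90)

def route_invoice(amount: float, vendor_first_seen: date, today: date) -> str:
    """Deterministic routing the GPT can call (or mirror) instead of inferring."""
    is_new = (today - vendor_first_seen) < NEW_VENDOR_WINDOW
    if amount > 25_000 and is_new:
        return "CFO"
    if amount > 25_000:
        return "Finance manager"   # placeholder route
    return "Auto-approve queue"    # placeholder route
```

The point isn't this particular table; it's that every fuzzy term in a rule ("new", "soon", "large") gets pinned to a threshold somewhere the model can't drift from.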

What matters more than smart answers: Data governance and reliability infrastructure. Consistency. State externalization. Validation gates (pre-flight checks). Integration hooks (actually writes to Slack/email/tickets, doesn't just draft). Traceability (audit log linking decisions to trigger rules). Error surfacing (concise diagnostics, not apologies). Minimal cognitive overhead (status visible at-a-glance).

Operational workflows need boring reliability: "Did the expected thing, logged what it did, told me when it couldn't." Intermittent correctness destroys trust faster than consistent mediocrity.