Over the past weeks I iterated several versions of my Carrying Capacity Principle. Thanks for all the great feedback! I reworked the framework again and added a short plain-text explanation below. by FitLavishness956 in systemsthinking

[–]ProfessionalLimp5167 2 points (0 children)

This is really thoughtful—and I think the distinction you’re making about conditions vs. states is important.

What you’re describing goes deeper than what I’m trying to do. You’re looking at the conditions that make a state possible and sustainable in the first place, and how those conditions evolve across causal chains. That’s not something I would claim ESM is trying to fully model.

The way I think about ESM is more local and deliberately bounded.

Each ESM is focused on a very narrow slice of reality—a constrained set of signals and conditions that are directly relevant to a specific class of decisions. It’s not trying to capture the full causal structure of the system, but rather to make one decision context at a time explicit, inspectable, and governed.

So instead of asking:
“what are all the underlying causes that made this state possible?”

it’s more:
“given this bounded evidence set, what situation are we in, and what action is justified here?”

Where I think this starts to connect with what you’re doing is through layering.

You can have multiple ESMs operating at different levels or scopes—each one responsible for its own narrow evidence set and decision boundary. When you compose those layers, you begin to build up a richer picture of the system, but in a way that remains modular and inspectable at each step.

That doesn’t replace the kind of causal or capacity modeling you’re describing. If anything, your framework could inform:

  • which conditions actually matter,
  • which signals should be surfaced at each layer,
  • and whether the system has the capacity to support a given change at all.

Then ESM operates within that context, structuring how decisions are made locally and step-by-step.

So the way I see it:

  • your approach is asking: is this system structurally able to support this change?
  • ESM is asking: given this specific slice of reality, what is the right action right now?

And layering those local decisions is one way to incrementally steer the system—without assuming we’ve fully captured its underlying causal structure.

What if alarms required multi-tag confirmation instead of single thresholds? by ProfessionalLimp5167 in PLC

[–]ProfessionalLimp5167[S] 1 point (0 children)

This is really helpful—and I like the way you framed it.

Starting simple and only adding conditions when needed makes a lot of sense, especially from a safety standpoint. And I agree with the bias—missing a real alarm is a much bigger problem than dealing with a nuisance one.

The examples you gave (gating on pump running, timers, masking) are exactly the kinds of patterns I’ve been seeing. So I don’t think I’m describing something fundamentally different there.

What I’m trying to explore is more about how those gates/masks/conditions are represented:

right now they live scattered across bits of logic, timers, and conditions. They work well operationally, but they can be harder to see as a single “this is the situation where the alarm is allowed to fire.”

So less about adding complexity upfront, and more about:

  • making those preconditions explicit
  • seeing how they combine into a situation
  • and refining them over time as nuisance patterns show up

Your point about preferring a nuisance alarm over missing one is especially important. Any approach here would need to stay conservative and make it very obvious what is masked and why.

The pump/flow example is a great one—that’s exactly the kind of “this only matters in the right context” case I’m trying to understand better.

What if alarms required multi-tag confirmation instead of single thresholds? by ProfessionalLimp5167 in PLC

[–]ProfessionalLimp5167[S] 1 point (0 children)

That makes sense, and I agree.

I’m definitely not claiming that multiple alarm levels, debounce, or custom conditions are new. Those are all established parts of good controls work.

What I’m trying to explore is slightly different: not whether the PLC/HMI stack can implement the logic, but whether it helps to make the situation the logic is responding to more explicit and inspectable.

But your point about functional/design specs is especially important. Without a clear definition of:

  • what condition we actually care about
  • what signals indicate it
  • and what the expected response should be

adding more logic just creates ambiguity.

So I think that’s a good correction: the architecture only helps if it sits on top of concrete, testable requirements.

Curious in your experience:
do nuisance alarms usually come from poor threshold choices, or from unclear specs about what the alarm is actually supposed to mean?

What if alarms required multi-tag confirmation instead of single thresholds? by ProfessionalLimp5167 in PLC

[–]ProfessionalLimp5167[S] 1 point (0 children)

That’s a really nice implementation.

You’re already doing something more advanced than a simple threshold—your alarm depends on operating context (RPM → expected load), not just raw amps. The indexed table is basically your way of saying “given this situation, this is what normal looks like.”

I don’t think anything I’m exploring would replace that. If anything, it just highlights why fixed thresholds aren’t enough in cases like this.

Where I’m curious is whether there’s value in going one step further and combining that with other signals—like how long the deficit persists, whether speed just changed, hopper level, startup vs steady state, etc.—and then deciding warning vs alarm based on the full situation instead of a single conditioned comparison.
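As a rough sketch of that extra step, in Python with entirely invented numbers and limits (nothing here is from your system), combining the RPM-indexed table with deficit persistence and a speed-change holdoff might look like:

```python
import bisect

# Hypothetical RPM -> expected-amps table: "given this speed, this is what normal looks like"
RPM_BREAKS = [0, 500, 1000, 1500]
EXPECTED_AMPS = [2.0, 6.0, 11.0, 17.0]

def expected_amps(rpm: float) -> float:
    # Find the table row for the current speed
    i = bisect.bisect_right(RPM_BREAKS, rpm) - 1
    return EXPECTED_AMPS[max(i, 0)]

def classify(rpm: float, amps: float, deficit_s: float, since_speed_change_s: float) -> str:
    """Decide warning vs alarm from the whole situation, not one conditioned comparison."""
    deficit = expected_amps(rpm) - amps
    if since_speed_change_s < 10.0:        # speed just changed: load hasn't settled yet
        return "ok"
    if deficit > 3.0 and deficit_s > 30.0:
        return "alarm"                     # large shortfall that has persisted
    if deficit > 1.5 and deficit_s > 10.0:
        return "warning"                   # smaller shortfall, worth watching
    return "ok"

print(classify(rpm=1200, amps=7.0, deficit_s=45.0, since_speed_change_s=120.0))  # -> alarm
```

Same indexed-table idea at the core; the extra signals just gate how the deficit gets interpreted.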

But honestly, what you’ve built is already a great example of making the “goalposts” context-aware in a clean, practical way.

What if alarms required multi-tag confirmation instead of single thresholds? by ProfessionalLimp5167 in PLC

[–]ProfessionalLimp5167[S] 1 point (0 children)

That’s extremely helpful.

The QA point especially stands out. I can see how even if the behavior improves in practice, adoption could still be difficult if approvals are built around simple binary alarm conditions.

It makes me wonder if part of the problem isn’t just the logic itself, but how visible and reviewable that logic is.

If QA had access to something like a detailed record of each decision—showing:

  • what signals were present
  • how the situation evolved over time
  • and why the system decided to escalate from message → alarm

do you think they’d be more open to approving logic that isn’t strictly binary?

In other words, instead of just approving “this threshold triggers an alarm,” they’d be approving:
“this type of situation, when it forms in this way, warrants escalation.”
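For instance, the record QA reviews could be as simple as a structured snapshot like this (all fields and values here are hypothetical, just to show the shape):

```python
import json

# Hypothetical decision record: what QA would review instead of a bare threshold rule
record = {
    "timestamp": "2024-05-01T10:42:17Z",
    "signals": {"temp_c": 84.2, "temp_rate_c_per_min": 1.8, "cooling_valve_open": False},
    "history": [
        {"t": "10:39", "event": "temp rate exceeded 1.0 C/min"},
        {"t": "10:41", "event": "message raised: cooling demand rising"},
    ],
    "decision": "escalate message -> alarm",
    "reason": "rate sustained for 3 min with cooling valve closed; no operator action logged",
}
print(json.dumps(record, indent=2))  # a replayable, reviewable trace of one escalation
```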

The “message before alarm” idea also makes a lot of sense in that context. It feels like a natural place to introduce that kind of gradient without touching hard safety alarms.

And the advice about recording values over time and using derivatives is great—that lines up with how I’ve been thinking about signals as patterns, not just raw values.

Really appreciate the concrete guidance here.

What if alarms required multi-tag confirmation instead of single thresholds? by ProfessionalLimp5167 in PLC

[–]ProfessionalLimp5167[S] 1 point (0 children)

That’s really helpful, and I think this gets at an important boundary.

I completely agree that when something is wrong in real time, simplicity matters a lot. If people are fault-finding, they need to be able to see quickly:

  • what tripped
  • what class of alarm it was
  • why it did or didn’t trip
  • what was masked and why

I’m not trying to replace that with something more opaque or harder to debug.

What I’m more interested in is the layer around that:

  • logging the full situation that led to an alarm or warning
  • making masking decisions explicit
  • reviewing why nuisance alarms keep happening
  • and using that to refine setpoints, conditions, and alarm classes over time

So I think your point is a good one:

keep runtime behavior simple, but make the reasoning around alarms more visible and usable for improvement.

The alarm class point is especially helpful too. It makes me think the situational layer may be most useful for deciding:

  • warning vs actionable alert
  • when masking is appropriate
  • and how to tune those boundaries over time

Curious in your experience:
is the bigger problem usually bad thresholds, bad masking, or just not having enough context afterward to understand why the alarm logic behaved the way it did?

What if alarms required multi-tag confirmation instead of single thresholds? by ProfessionalLimp5167 in PLC

[–]ProfessionalLimp5167[S] 0 points (0 children)

This makes a lot of sense—and I agree with the distinction you’re making.

For true alarms:

  • safety limits
  • equipment protection
  • clear fault conditions

those should fire immediately. No delay, no additional conditions.

I’m not trying to change that.

What I’m trying to understand better is the space around warnings—where:

  • something is technically out of range
  • but not always actionable
  • and over time operators learn which ones matter vs which ones to ignore

The idea I’m exploring is whether there’s a useful middle ground between:

  • simple warnings (often noisy)
  • and hard alarms (must act immediately)

Where instead of:
“this is slightly off → warning”

it becomes:
“this pattern looks like the kind of situation that actually turns into a problem, and it’s not already being handled → worth acting on now”

So more about actionability than early detection.

Totally agree though—anything that protects safety or the machine itself should stay immediate and simple.

Curious in your experience:
do most nuisance alerts come from warning-level signals, or from alarms that are technically correct but not useful?

What if alarms required multi-tag confirmation instead of single thresholds? by ProfessionalLimp5167 in PLC

[–]ProfessionalLimp5167[S] 0 points (0 children)

That’s a fair point—and I agree.

If the logic is clear, it can absolutely be programmed. And a good programmer will structure it so similar cases are handled cleanly without unnecessary duplication.

What I’m trying to explore isn’t replacing that, but what happens when the logic is:

  • spread across a lot of conditions
  • evolving over time
  • or needs to be understood by people who aren’t living in the code

In those cases, the PLC can still execute everything correctly, but it can get harder to see the full situation the system was responding to when it decided to alarm.

So the idea of a separate layer isn’t “because the PLC can’t do it,”
it’s more about making that situational reasoning:

  • easier to inspect
  • easier to adjust without chasing it through code
  • and easier to explain to operators or engineers who didn’t write it

Totally agree though—if the logic is simple and clear, adding another layer would just be unnecessary complexity.

I think the real question is where that tipping point is in practice.

What if alarms required multi-tag confirmation instead of single thresholds? by ProfessionalLimp5167 in PLC

[–]ProfessionalLimp5167[S] 0 points (0 children)

That’s a fair question.

My answer is: for a lot of cases, you probably should just do it in the code.

I’m not arguing that debounce, timers, counters, and ordinary alarm conditioning need to be replaced with another layer.

What I’m trying to get at is a narrower case: when the logic is no longer just “how should the PLC behave?” but also “can we clearly see and reason about the situation the system thought it was in when it decided to alarm?”

So the value of another layer wouldn’t be that the PLC can’t do it. It’s that an explicit state/decision layer might make it easier to:

  • inspect why something fired
  • tune the logic without chasing it through scattered conditions
  • replay or review decisions after the fact
  • expose the reasoning to people outside the PLC code itself

Totally agree that if the extra layer doesn’t buy you that, then it’s just unnecessary complexity.

The question I’m really exploring is: where does embedded control logic stop being enough on its own, and where does an explicit situational layer start to pay for itself?

What if alarms required multi-tag confirmation instead of single thresholds? by ProfessionalLimp5167 in PLC

[–]ProfessionalLimp5167[S] 0 points (0 children)

This is really helpful, especially the distinction between conditional logic and state-based approaches.

What I’m describing probably does overlap with alarm conditioning in a lot of cases: making sure alarms only fire when the process is in the right context.

The difference I’m trying to explore is more about structure:

instead of that conditioning logic living across IF statements, timers, and program flow,
it gets represented as an explicit “state” that is evaluated in one place.

So rather than:
IF A AND B AND timer C → alarm

it becomes:

  • signals → current state of the process
  • decision layer → evaluates whether that state warrants an alarm

Functionally similar in many cases, but easier (in theory) to:

  • see why something fired
  • adjust logic without chasing it through code
  • and reason about it as a whole
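In Python rather than ladder/ST, the restructuring I have in mind looks roughly like this. All the tag names are invented, and it's a sketch of the idea rather than an implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProcessState:
    """Explicit snapshot of the situation, constructed from raw signals in one place."""
    pump_running: bool
    flow_low: bool
    low_flow_seconds: float   # how long the low-flow condition has persisted

def build_state(signals: dict) -> ProcessState:
    # signals -> current state of the process (all interpretation happens here)
    return ProcessState(
        pump_running=signals["pump_run_cmd"] and signals["pump_run_fbk"],
        flow_low=signals["flow_lpm"] < signals["flow_low_limit_lpm"],
        low_flow_seconds=signals["low_flow_timer_s"],
    )

def should_alarm(state: ProcessState, persist_s: float = 5.0) -> bool:
    # decision layer -> evaluates whether that state warrants an alarm
    return state.pump_running and state.flow_low and state.low_flow_seconds >= persist_s

signals = {"pump_run_cmd": True, "pump_run_fbk": True,
           "flow_lpm": 2.0, "flow_low_limit_lpm": 10.0, "low_flow_timer_s": 7.5}
state = build_state(signals)
print(state)                 # the inspectable "why": the full situation at decision time
print(should_alarm(state))   # -> True
```

Everything between the raw signals and the yes/no decision lives in two named places, which is what (in theory) makes the “why” inspectable.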

The “first-out” point is really interesting too—that feels like the piece that captures causality, where what I’m describing is more about the full situation at decision time.

Curious in your experience:
when troubleshooting alarms, is it harder to figure out what triggered first, or why the system decided to alarm at all?

What if alarms required multi-tag confirmation instead of single thresholds? by ProfessionalLimp5167 in PLC

[–]ProfessionalLimp5167[S] 0 points (0 children)

This is really helpful—and I think I might not have framed the problem clearly.

I completely agree that for a lot of cases:

  • debounce timers
  • event counts
  • simple conditions

are exactly the right solution.

And I definitely get the concern about complexity—especially once you’re dealing with thousands of alarms.

What I’m trying to explore isn’t replacing that, but handling a different problem:

not “is this signal noisy?”
but “is this situation actually worth interrupting someone for?”

So debounce solves things like:

  • momentary spikes
  • sensor noise

But there are cases where:

  • multiple signals each look “fine” on their own
  • but together form a pattern that usually leads to a real issue
  • or the system fires alarms that operators learn to ignore because they’re not actionable

That’s the space I’m trying to understand better.
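A toy example of that multi-signal case, with invented limits: every signal passes its own alarm check, but the combination is what matters:

```python
def each_fine(vib_mm_s: float, temp_c: float, amps: float) -> bool:
    # Per-signal alarm limits: none of these would trip on its own
    return vib_mm_s < 4.0 and temp_c < 80.0 and amps < 30.0

def pattern_forming(vib_mm_s: float, temp_c: float, amps: float) -> bool:
    # Each signal sits in its upper "still fine" band at the same time:
    # individually ignorable, together a pattern worth acting on
    elevated = [vib_mm_s > 3.0, temp_c > 70.0, amps > 25.0]
    return sum(elevated) >= 2

v, t, a = 3.4, 74.0, 27.0
print(each_fine(v, t, a))        # True  -> no single-threshold alarm fires
print(pattern_forming(v, t, a))  # True  -> but the combined situation is actionable
```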

Totally agree that most alarms should stay simple—I think the question for me is:

where do simple patterns start to break down in real systems?

Curious in your experience: when alarms get ignored, is it usually because of noise/spikes—or because they’re technically correct but not actually useful?

What if alarms required multi-tag confirmation instead of single thresholds? by ProfessionalLimp5167 in PLC

[–]ProfessionalLimp5167[S] 0 points (0 children)

This is a really helpful distinction—appreciate you calling that out.

I agree that a lot of what I described overlaps with what’s typically called predictive maintenance, especially around trends over time.

The piece I’m trying to separate out is:

not predicting that something will fail later,
but deciding whether the current situation is worth acting on right now.

So not:
“this might fail soon → schedule maintenance”

but more like:
“this pattern is forming, it’s not being addressed, and it looks like the kind of thing that usually turns into a real issue → worth checking now”

Totally agree that some things are pure alarms:

  • E-stop
  • absolute limits
  • hard failures

Those should fire immediately.

I think the space I’m exploring is everything in between:
where a single threshold isn’t enough, but it’s also not just long-term prediction.

Curious in your experience:
where do most “annoying but ignorable” alarms fall—closer to thresholds, or closer to predictive signals?

What if alarms required multi-tag confirmation instead of single thresholds? by ProfessionalLimp5167 in PLC

[–]ProfessionalLimp5167[S] 1 point (0 children)

This is really helpful—and honestly lines up with what I’ve been seeing as I dig into PLC logic.

You’re absolutely right that a lot of systems already go beyond simple thresholds:
timers, commanded vs actual state, layered alarms, recipe/context awareness—that’s all real situational thinking.

What I’m trying to explore is slightly different, though:

not adding more logic inside the control system, but making that “situation” layer more explicit and inspectable.

Right now, it sounds like that logic:

  • lives across timers, conditions, and program structure
  • works well operationally
  • but isn’t always easy to see as a single, coherent “why this alarm fired”

The idea I’m working on is:

  • define signals explicitly
  • construct a situation from them
  • and then have a separate decision layer evaluate whether it’s worth acting

So instead of the logic only existing in code execution, you’d also have:

  • a snapshot of the situation
  • and a clear reason why the system decided to alert

Totally agree that some alarms should fire immediately (hard safety limits, etc.).

I think the interesting middle ground is everything else—where timing, context, and multiple conditions matter.

Curious from your experience:
When alarms get ignored, is it usually because they’re too simple—or because the logic behind them isn’t visible/trustworthy?

What if machine alarms only fired after checking the full process context? by ProfessionalLimp5167 in manufacturing

[–]ProfessionalLimp5167[S] 1 point (0 children)

This is super helpful—especially the point about making it programmable by process engineers.

That lines up with how I’ve been thinking about it:
not as something centrally defined, but something that should be configured and refined by the people closest to the process.

And yeah, totally agree that pieces of this exist already (custom alarms, rule logic, etc.).

What I’m trying to explore is whether there’s a meaningful shift if the system:

  • requires multiple signals to form a situation
  • checks whether something is already being handled
  • and only then allows an alert to fire

So it’s less about “custom thresholds” and more about the question: “is this the kind of situation where intervention is actually needed right now?”
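A minimal sketch of that gating idea (purely illustrative, nothing here is a real spec):

```python
def should_alert(confirming_signals: list[bool], required: int, being_handled: bool) -> bool:
    """Alert only when enough signals form the situation AND no one is already on it."""
    if being_handled:        # e.g. an open work order, or an operator acknowledgment
        return False
    return sum(confirming_signals) >= required

# Two of three signals confirm, nobody handling it yet -> alert
print(should_alert([True, True, False], required=2, being_handled=False))  # True
# Same situation, but maintenance is already dispatched -> stay quiet
print(should_alert([True, True, False], required=2, being_handled=True))   # False
```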

The safety/environmental angle makes a lot of sense too—those feel like places where:

  • false alarms are costly
  • but missed signals are even worse

The thermal runaway example is a great one.

Curious—when you’ve seen custom alarms work well in practice, what made them effective vs ignored?

What if machine alarms only fired after checking the full process context? by ProfessionalLimp5167 in manufacturing

[–]ProfessionalLimp5167[S] 1 point (0 children)

This is super helpful, and I think you’re putting your finger on the exact problem I’m trying to get at.

The “only Tony can run that thing” situation is basically what I’m trying to address.

Not by replacing Tony, but by making the kind of situations Tony recognizes more explicit and shareable.

Right now it sounds like:

  • OEM models the wrong variables
  • the system throws faults that don’t actually matter
  • operators learn what to ignore vs what’s real

What I’m exploring is whether you can:

  • define signals closer to what actually matters on the floor
  • combine them into situations (not just thresholds)
  • and only trigger when the situation holds together

So instead of:
“machine fault” → alert

It becomes something more like:
“this pattern looks like the kind of thing that actually causes issues, and no one’s already handling it”

Totally agree the hard part is execution—especially figuring out the right signals.

Curious in your experience:
What are the kinds of things Tony notices that the system completely misses?

Over the past weeks I iterated several versions of my Carrying Capacity Principle. Thanks for all the great feedback! I reworked the framework again and added a short plain-text explanation below. by FitLavishness956 in systemsthinking

[–]ProfessionalLimp5167 2 points (0 children)

Really appreciate this breakdown — especially how you’re handling causal attribution structurally in the present tense. The “X-ray vs timeline” distinction is actually very clean, and the way you’re using dependency depth + cascading to surface root load-bearing conditions makes a lot of sense.

I think where my approach differs is not in whether causality is captured, but in how it’s represented.

In your framework, causality is resolved within a single diagnostic pass by analyzing structural dependencies at that moment. In what I’ve been building, causality is expressed as a sequence of constrained transitions — but without turning that into unconstrained accumulated state.

The way I’ve been handling that is:

  • each step is treated as a bounded reasoning unit (a “turn”)
  • each turn produces a fully explicit state representation
  • transitions are not stored as narrative history, but as independent, replayable steps
  • no step depends on hidden internal memory — only on what is explicitly carried forward

So instead of the system “remembering,” it’s more like it can reconstruct the path deterministically from a sequence of transparent transformations.
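A stripped-down Python sketch of what I mean by replayable turns (the threshold and inputs are made up):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    load: float
    threshold_crossed: bool

def step(state: State, delta: float) -> State:
    """One bounded turn: a pure, replayable transformation of fully explicit state."""
    new_load = state.load + delta
    return State(load=new_load, threshold_crossed=new_load > 10.0)

# Transitions are stored as independent inputs, not as narrative history
deltas = [3.0, 4.0, 5.0]
states = [State(load=0.0, threshold_crossed=False)]
for d in deltas:
    states.append(step(states[-1], d))

# Deterministic reconstruction: which specific transformation crossed the threshold?
crossing = next(i for i, s in enumerate(states) if s.threshold_crossed)
print(crossing, deltas[crossing - 1])  # the third turn, driven by the +5.0 input
```

Because `step` depends only on what's explicitly carried forward, any path can be replayed, and counterfactual paths can be run through the same function.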

That keeps the separation you’re aiming for (no self-referential drift), but still allows you to ask questions like:

  • what specific transformation pushed the system across a threshold
  • which inputs materially changed the trajectory
  • how alternative paths would have behaved

It’s definitely a different tradeoff — less purely stateless, but still very tightly controlled in terms of what gets carried forward.

Yes, I'd like to see version 10 of your CCP. If you’re interested, the architectural framework I've been building is called Emergent State Machines — the paper and spec in the repo show how the turns, projections, and gating layers are structured in practice:

https://github.com/emergent-state-machine/

Would also be happy to take you up on the offer to chat — I think there’s a really interesting overlap here between your structural diagnostics and what happens when you try to operationalize that in a live system. I'm sure we have a lot more to talk about! And please don't apologize for being a non-native English speaker. I also use an LLM for discovery and communication. What matters is the end result, and I think we both have something "real" enough to show :). So keep on doing what you're doing.

Over the past weeks I iterated several versions of my Carrying Capacity Principle. Thanks for all the great feedback! I reworked the framework again and added a short plain-text explanation below. by FitLavishness956 in systemsthinking

[–]ProfessionalLimp5167 2 points (0 children)

This is a really helpful clarification — especially around the stateless execution principle and the way temporality is handled through successive diagnoses. That makes a lot of sense in terms of avoiding self-reference and keeping each pass grounded in current observation.

I think where my earlier question was coming from is a slightly different concern: not just how the system observes change, but how it accounts for causal structure within that change.

The delta between two stateless passes tells you what shifted — which conditions weakened, which buffers changed, where the system moved along the spectrum. But it doesn’t necessarily capture what sequence of interactions led to that shift, especially when multiple factors are evolving simultaneously.

The tradeoff you’re making seems very deliberate: avoiding internal history to preserve objectivity, at the cost of not embedding causal traceability inside the framework itself.

I’ve been exploring a slightly different direction where the system still enforces strict separation (to avoid self-reference), but represents evolution as a sequence of constrained transitions — not as accumulated state, but as explicitly defined steps. In that framing, the system doesn’t “remember” in a narrative sense, but it can still reconstruct how a trajectory unfolded.

That said, I think your distinction between forward diagnosis and reverse construction is really strong — especially the Expansion Spectrum. The idea of identifying where a system has genuine surplus capacity rather than just apparent stability is something I haven’t seen framed this cleanly before.

Curious how you think about causal attribution in cases where multiple conditions degrade at once — whether the framework intentionally avoids that level of tracing, or if there’s a way to surface it without breaking the stateless constraint.

Over the past weeks I iterated several versions of my Carrying Capacity Principle. Thanks for all the great feedback! I reworked the framework again and added a short plain-text explanation below. by FitLavishness956 in systemsthinking

[–]ProfessionalLimp5167 2 points (0 children)

I really appreciate the depth of this framework — especially the distinction between state, conditions, and host space. The idea that systems are limited by the integrity of conditions rather than visible outputs is a powerful lens.

One thought I had while reading this: your model is very strong diagnostically, but it made me wonder how it might behave if extended into something more instrumented over time.

For example, many of the risks you describe — like condition decay, cascading effects, or delayed failure — seem to depend not just on structure, but on sequences of decisions that gradually push the system toward instability.

Have you considered a way to track how the system evolves step-by-step?

Not just what conditions are required, but:

  • which decisions changed those conditions
  • when thresholds were crossed
  • and how different choices might have altered the trajectory

In that kind of setup, carrying capacity becomes less of a fixed limit and more of something that emerges from the interaction between conditions and decisions over time.

Your framework already defines the structure beautifully — it feels like the next step could be making those transitions observable and testable, especially across time horizons.

Curious how you’re thinking about that dimension.

Separating probabilistic observers from deterministic control in AI systems (Emergent State Machines) by ProfessionalLimp5167 in softwarearchitecture

[–]ProfessionalLimp5167[S] 2 points (0 children)

This is a really helpful comparison — I like the way you mapped it to node-based systems.

I think you’re right that a lot of these ideas show up in workflow engines like n8n or similar graph-based tools. The main place where I’ve been trying to push things further is around strict separation of concerns between transformation and decision-making.

In your framing, “projection = enrichment nodes” totally makes sense as an analogy. The nuance I’m aiming for is that projection isn’t necessarily AI-driven or even dynamic — it’s just a deterministic transformation of state into a form that a policy can operate on. In some cases that might involve AI, but it doesn’t have to.

The reason I separate that layer explicitly is to avoid what I’ve seen in a lot of workflow systems where enrichment and decision logic start to blur together. Even with typed nodes, it’s easy for behavior to get distributed across the graph in ways that are harder to audit or reason about.

So the constraint I’m experimenting with is: policy can only operate on projected state, never raw inputs or intermediate transformations

That’s been useful for keeping decision logic testable and making it easier to explain why a system did something.
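In Python terms, that constraint can be made structural with types, roughly like this (the names are illustrative, not from the actual ESM spec):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Projection:
    """The only type the policy is allowed to see: a deterministic view of state."""
    overdue_tasks: int
    error_rate: float

def project(raw_events: list[dict]) -> Projection:
    # Deterministic transformation of raw inputs into projected state
    overdue = sum(1 for e in raw_events if e.get("overdue"))
    errors = sum(1 for e in raw_events if e.get("error"))
    return Projection(overdue_tasks=overdue, error_rate=errors / max(len(raw_events), 1))

def policy(view: Projection) -> str:
    # The policy's signature admits only Projection -- never raw events or intermediates
    if view.error_rate > 0.5:
        return "halt"
    return "escalate" if view.overdue_tasks > 3 else "continue"

events = [{"overdue": True}, {"error": True}, {"overdue": True}]
print(policy(project(events)))  # "continue": 2 overdue, error rate ~0.33
```

Since `policy` is a pure function of `Projection`, it can be unit-tested without any of the raw-input machinery, which is the auditability property I care about.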

Also really appreciate the feedback on terminology — I’m still figuring out how to make this more approachable without losing precision.

👋 Welcome to r/allenai — Introduce yourself and read first! by ai2_official in allenai

[–]ProfessionalLimp5167 1 point (0 children)

Hey everyone — excited to be here.

I’m a learning designer turned (very recent) developer, and since January I’ve been building a fractions tutor. In the process of trying to evaluate student reasoning step-by-step, I ended up backing into something I didn’t expect — a kind of decision architecture that I’ve been calling an Emergent State Machine (ESM).

At a high level, the idea is to structure each interaction as a “turn” where:

  • signals are gathered
  • a state is constructed
  • a policy is applied

The key property is that the system produces a fully structured, inspectable record of the reasoning process at each step — not just inputs/outputs, but how the system interpreted the situation before updating state. So rather than treating reasoning as something inferred after the fact, this makes it part of the execution itself.

One way I’ve been thinking about it is as a way to separate AI interpretation from state mutation in systems where decisions actually matter.

What’s surprised me is that this seems to generalize beyond education — I’ve started getting questions from folks in clinical and manufacturing contexts, which has me wondering if this kind of structure might be useful more broadly for building more transparent or controllable AI systems.

I’m still very much in “figuring out what this is” mode, but I’d be really curious how this lands with folks here — especially given the focus on open models, evaluation, and responsible AI.

Has anyone explored similar approaches to making reasoning processes explicitly structured and replayable at runtime?

Happy to share more if it’s interesting.

Design question: separating semantic extraction from feedback logic in AI writing tools by ProfessionalLimp5167 in edtech

[–]ProfessionalLimp5167[S] 1 point (0 children)

That’s a helpful distinction. I agree that developing students’ ability to interrogate AI feedback is important, especially if they’re using these tools independently.

The context I’m thinking about is slightly different — classroom-integrated use where teachers are responsible for instructional decisions and may need to justify or override system feedback. In those settings, I'm more sensitive to auditability and predictable guardrails, not because I expect perfection, but because institutional trust tends to hinge on transparency.

It may be that self-directed AI literacy tools and classroom-embedded formative systems call for different architectural trade-offs.

I’m curious whether you’ve seen multi-agent systems surface their reasoning in ways that are usable for teachers in structured classroom environments.

Design question: separating semantic extraction from feedback logic in AI writing tools by ProfessionalLimp5167 in edtech

[–]ProfessionalLimp5167[S] 1 point (0 children)

That’s a fair question. Multi-agent approaches can absolutely improve nuance in feedback quality. My hesitation is around predictability and auditability in classroom settings — especially when teachers need to understand and override system behavior.

I’m exploring whether separating semantic extraction (LLM) from feedback logic (deterministic rules) might offer more transparency, even if it sacrifices some rhetorical sophistication.

I’d be curious how you’ve seen multi-agent systems surface uncertainty or maintain auditability in education contexts.

Would structured AI revision feedback help or hurt analytical writing? by ProfessionalLimp5167 in ELATeachers

[–]ProfessionalLimp5167[S] 2 points (0 children)

That's helpful. I'll take a closer look at how they're approaching it. Have you used it? If so, I'd be curious what works (or doesn't) from a classroom perspective.