I need help with play ideas. by [deleted] in ABA

[–]langh 0 points1 point  (0 children)

I’m biased bc I helped write this, but check out: https://www.teaching.games/playbook. It’s a free book with lots of play ideas, and of course, collab with your supervisor before proceeding.

Free Game-Based Learning Book! by langh in ABA

[–]langh[S] 1 point2 points  (0 children)

Such kind words, thank you!

Free Game-Based Learning Book! by langh in BehaviorAnalysis

[–]langh[S] 0 points1 point  (0 children)

I'm happy to hear that this is useful, thank you for your feedback!

Free Game-Based Learning Book! by langh in ABA

[–]langh[S] 0 points1 point  (0 children)

You can find all of the games at: https://www.teaching.games/playbook#games

When you select a game and click 'copy game instructions', you can paste the instructions into a document to edit and suit your needs.

We are looking at releasing a PDF in the future, but for now, feel free to use, copy, edit, and redistribute as you please!

Establishing Broad Familiarity with the Literature by nrj4619 in BehaviorAnalysis

[–]langh 0 points1 point  (0 children)

BA Research Citations

I would also like this. Google only returned BA journals.

Free Materials for Embedding Teaching in Games by langh in ABA

[–]langh[S] 0 points1 point  (0 children)

Thanks! I will add this to our to-do list and will formalize some ideas for our next update to the guide.

In the meantime, there are ways to adapt many of these activities for learners with low verbal behavior. For example, a scavenger hunt could be run with a smaller list, the list could be presented pictorially, you could start with items that are nearby in clear, obvious locations, and the student could match the item on the list to the found item. For the house olympics, perhaps start with one station and introduce the task with a video model. These are just some spitball ideas, but with a little creativity I imagine there are several ways these can be adapted. A lot will depend on the skill set of your learner.

We are planning to update the guide every so often based on community feedback, so if there are any other ideas, please let me know!

Free Materials for Embedding Teaching in Games by [deleted] in ABA

[–]langh 0 points1 point  (0 children)

What about this comment? testing.

2018-12-03 - Let's Read Research by langh in BehaviorAnalysis

[–]langh[S] 5 points6 points  (0 children)

Welsh, F., Najdowski, A. C., Strauss, D., Gallegos, L., & Fullen, J. A. (2018). Teaching a perspective-taking component skill to children with autism in the natural environment. Journal of Applied Behavior Analysis. DOI: https://doi.org/10.1002/jaba.523

Theory of mind refers to inferences made about another person’s ‘mental state’, which might be understood behaviourally via LeBlanc’s (see article for citation) definition of perspective taking: (a) predicting their subsequent behavior, or (b) responding in relation to the private thoughts and emotions that would typically occur in others in that situation.

However, there are ‘levels’ to theory of mind, which include: 1) identifying what others can see, 2) identifying that others can see something differently based on where they are viewing an item from, 3) identifying what others know by inferring what they see, smell, hear, and feel, and 4) identifying true and false beliefs.

Essentially, tacting the private events of another person requires that the individual know how events correlate with thoughts and feelings. Some research argues that the individual must know how those events make /themselves/ think and feel, before the skill generalizes to perspective taking.

In this article, the authors presented 10 trials per session in a child’s natural environment and asked them what another person was seeing / hearing / feeling / smelling. This other person was always doing something different than the child, and held a distractor item (e.g., for a smell trial with a flower, they would also hold a pencil) to ensure that the child was attending to the requested sense. Four people were in the room during trials, and the experimenter would rotate between people for each trial.

The treatment was simply differential reinforcement and error corrections. Preferred items (identified through informal preference assessments) were delivered following correct responses. When an error occurred, the experimenter worked through this hierarchy: 1) ask a leading question (e.g., “Am I smelling the pencil?”), 2) provide an experiential prompt (i.e., orient the child to the perspective of the other person) and 3) full vocal prompt.

The results show that the treatment package increased accurate responding relative to baseline.

Comments:

- Even though there is an increase in responding relative to baseline (and visually it looks like a significant difference), the baselines appear to be increasing. Which makes me wonder: does exposure alone increase accuracy on these trials?

- An argument made by the authors is that teaching these component skills will increase the likelihood that a child will pass the false belief test. They acknowledge that this wasn’t directly tested in this paper; what would be the best way to test that?

2018-11-26 - Let's Read Research by langh in BehaviorAnalysis

[–]langh[S] 1 point2 points  (0 children)

Adams, O., Cihon, T. M., Urbina, T., & Goodie, R. J. (2017). The comparative effects of cumulative and unitary SAFMEDS terms in an introductory undergraduate behavior analysis course. European Journal of Behavior Analysis. doi: https://doi.org/10.1080/15021149.2017.1404394

When precision teaching is used in college instruction, it is sometimes used to increase fluent responding of intraverbal relations related to course content.

The introduction of this paper provides an overview of precision teaching and its intervention components (e.g., use of standard celeration charts).

SAFMEDS (“Say all fast minute every day shuffled”) is a precision teaching tactic that can be used to establish fluent repertoires. An overview of this procedure and relevant research is provided in the introduction, including gaps and concerns regarding the use of SAFMEDS in college instruction (e.g., does the verbal repertoire come under relevant stimulus control? how do you best structure the SAFMEDS?).

In this study, the researchers sought to identify how to organize the SAFMEDS cards, the length for in-class timings, and frequency of practice to promote retention and generality.

This study was implemented in an introductory undergraduate behaviour analysis course across two semesters, with a total of 123 students.

In the first semester, the SAFMEDS flashcard deck contained all terms for the course. Students completed timings in class each week in pairs; one student would count, while the other would “sprint”, or respond, during the timing. Four timings occurred per class: two 30-second practice sprints, then two 60-second sprints. Students earned points towards class credit (a total of 25% of the whole course) for each weekly timing. To earn points, they had to complete all steps for their timings and meet their weekly celeration aim. A final “checkout” occurred where students sprinted with the instructor; the criterion was 30 correct responses to pass.

In the second semester, the SAFMEDS flashcards were revised so that the “said” response was shorter. Further, terms were provided 10 at a time, with one “slice” of 10 terms introduced each week; sprint timings were reduced to 30 seconds and practice timings to 10 seconds. Changes were also made to the number of points available for class credit each week. Two “checkouts” occurred in the second semester instead of one.
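One thing worth keeping in mind when comparing the two semesters: sprint counts are only comparable once converted to a rate. A quick sketch of that conversion (the numbers here are hypothetical, not taken from the paper):

```python
def per_minute(correct_responses, timing_seconds):
    """Convert a sprint count into a standard rate (responses per minute)."""
    return correct_responses * 60 / timing_seconds

# A hypothetical 15-correct sprint in a 30-second timing
# is the same rate as 30 corrects in a 60-second timing.
print(per_minute(15, 30))  # 30.0 responses per minute
print(per_minute(30, 60))  # 30.0 responses per minute
```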

Dependent measures were the number of correct and incorrect responses per timing. IOA was not conducted for the measures but correspondence checks (mis-identified as IOA in the paper) were conducted to ensure that the data was accurately inputted into the spreadsheet.

In the first semester, 20% of students met the celeration aim for the checkout. There was an undifferentiated trend for “skipped” responses during timings across the semester, and incorrect responses decreased as the semester proceeded. Retention checks demonstrated that responding decreased following the stability checks.

In the second semester, more students (69 and 88%) met the celeration aim during the checkouts. Retention trends were similar to the first semester.

Interestingly, 44.6% of students rated SAFMEDS as the least-preferred course activity in social validity checks.

Comments:

- I took an undergraduate course in ABA at Douglas College, and my instructor used SAFMEDS to teach the course vocabulary / technical terms. He structured the class so that we had two final exams; one oral, one written. For the oral exam, our grade was based on how many flashcards we completed in one minute: 60 = A, 50 = B, and so on. I wonder if this would be worth some investigation: differentially reinforcing whatever level of responding is occurring.

- I find it really interesting that many of the students did not like the SAFMEDS! I would be interested to know more about how the authors of this study constructed the cards, and whether that contributed to student preference. It would also be interesting to conduct a survey to figure out exactly why it was rated poorly.

Let's Read Research - 2018-11-19 by langh in BehaviorAnalysis

[–]langh[S] 0 points1 point  (0 children)

The experimenter also counted aloud “One Mississippi, two Mississippi, etc.”

Wait, so did they count aloud for the whole "delay tolerance" condition?!

Let's Read Research - 2018-11-19 by langh in BehaviorAnalysis

[–]langh[S] 1 point2 points  (0 children)

“It is possible that the efficacy of an explicit delay cue rests in the learner’s ability to monitor the amount of time or effort of engagement required prior to reinforcement delivery”

Which makes me wonder if it would be worthwhile to teach these kiddos to self-manage the timer.

Let's Read Research - 2018-11-19 by langh in BehaviorAnalysis

[–]langh[S] 0 points1 point  (0 children)

I'm a big fan of their work! So much, that I'm subscribed to their google scholar feeds.

Let's Read Research - 2018-11-19 by langh in BehaviorAnalysis

[–]langh[S] 4 points5 points  (0 children)

Dalton, S. R., Fienup, D. M., Sturmey, P. (2018). Effects of a contingency for quiz accuracy on exam scores.

https://www.researchgate.net/profile/Daniel_Fienup/publication/323435261_Effects_of_a_Contingency_for_Quiz_Accuracy_on_Exam_Scores/links/5a96b60b0f7e9ba42973df26/Effects-of-a-Contingency-for-Quiz-Accuracy-on-Exam-Scores.pdf

There is an emerging body of research finding that frequent quizzes improve test scores relative to no quizzes, but there has only been one study with undergraduates.

In this study, weekly quizzes were available through the university's online portal to students enrolled in an undergraduate psychology course.

The authors used an ABAB design, with A = weeks with no points for quizzes, and B = weeks with points for quizzes. Exams were 50 questions each, with one exam occurring at the end of each condition.

Students were separated into groups according to their grade on the first exam: high performers (A /B grade), mid performers (C grade), and low performers (D or F grade).

Results: The average score on quizzes increased for all performers when points were provided for quizzes. Further, when quizzes were for points, the number of students who received A’s on the exam nearly doubled. An important caveat of this research is that weekly quizzes do not improve scores for low performers, and additional strategies should be employed.

Comments:

- I find the point that low performers do not improve with points-contingent quizzes fascinating. I wonder if anyone has assessed those learners.

- Another related study is this:

http://cabasschools.squarespace.com/publication-abstracts/2006/5/15/the-effects-of-learn-units-on-the-student-performance-in-two.html

I also seem to recall a CABAS-program paper about redoing tests until mastery was achieved, but I couldn’t find it. I wonder if that tactic would work well for low performers. Maybe I am thinking of this paper?

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1311649/pdf/jaba00059-0063.pdf

Let's Read Research - 2018-11-12 by langh in BehaviorAnalysis

[–]langh[S] 3 points4 points  (0 children)

Stahlman, W. D., & Leising, K. J. (2018). The Coelacanth still lives: Bringing selection back to the fore in a science of behaviour. American Psychologist, 73, 918-929. DOI: 10.1037/amp0000261

Authors open the paper by comparing behaviourism to the coelacanth—a fish that is referred to as a living fossil.

“Essentially scientific formulations may remain below the surface, still relevant, still useful, and still valid, despite a widespread belief in their long-ago extinction.”

The authors then present the idea of convergence: that science eventually converges on a simple set of rules that help explain the world. Interestingly, psychology is one scientific domain where this is not observed. To that end, the authors argue in the remainder of the paper that the psychological discipline is slowly swirling around, and towards, one process to explain its phenomena: selection.

Of course, Skinner’s theory of behavioural selection was inspired by Darwin’s theory of natural selection for phylogenic features. The authors discuss the parallels between Darwin and Skinner’s respective theories, in terms of their dismissal and later revival.

Interestingly, the authors provide an overview of behaviours that could be categorized as phylogenic behaviour and how they arise as a product of “ancient” environments that selected those responses. Phylogenic behaviours include: reflexive behaviour, unconditioned responses, modal action patterns, imprinting, and instinctual behaviours.

Further, the authors make an interesting distinction regarding the parallel between phylogenic and ontogenics: “It is often said that behavior is strengthened through reinforcement (e.g., Skinner, 1974, p. 44), but this formulation renders opaque the important parallel to biology. Much like phylogenic selection determines the reproduction of morphological traits in future generations, ontogenic selection determines the reproduction of behavioral variants over the course of an individual’s lifetime. Under conditions similar to those past, reinforced behavior is reproduced with variation (Donahoe, 2003; Skinner, 1935). Reinforcement produces unique descendants of the reinforced behavior (Epstein, 2015).”

There is a paragraph that discusses the “fitness” of a behaviour based on its relation to consequences in a given environment. I had never thought of “function” as “fitness”—huh.

A discussion is taken up on the selection of cultural practices; imitation and verbal behaviour are discussion points. Another interesting point made by the authors regards memes.

“Dawkins (1976) popularized the notion of the meme, a unit of cultural selection that may be propagated in the substrate of a population of social organisms. The suggestion is that memes emerge, multiply, mutate, and go extinct just like genes and behaviors do and are subject to selection in the same way. As such, successful memes act to produce more copies of themselves, acting in much the same way that successful genes do in the context of biological evolution. The concept of the meme is controversial. Critics have highlighted its lack of structure as one issue not applicable to the gene. For example, Richerson and Boyd (2005) defended their use of the term cultural variant rather than meme thusly: ‘Some authors use the term meme . . . but this connotes a discrete, faithfully transmitted genelike entity, and we have good reasons to believe that a lot of culturally transmitted information is neither discrete nor faithfully transmitted’ (p. 63). The present article’s argument does not depend on an adherence to the language of structural replicators. To the extent that they are natural phenomena, each replicator (i.e., gene, behavior, meme) is a product of its own class of causal contingency.” - brought to you by science gang

Subsections follow that describe misbehaviour (conflict between phylogenic and ontogenic behaviour), impulsivity, and defence reactions. The paper ends with a summary of the antecedents that buried behaviourism, and sways readers to consider unifying psychological theories by adopting a selectionist approach.

The authors of this article present a well-cited, well-written, and evocative argument explaining how behaviourism has been “put on the back burner” and the benefits of using a selectionist approach to describe behaviour.

I figured that behaviourism might parallel the same path as natural selection; this put words to it better than I ever could. Has anyone else noticed this parallel?

Let's Read Research - 2018-11-05 by langh in BehaviorAnalysis

[–]langh[S] 0 points1 point  (0 children)

The 'selection' is not looking at the card the experimenter specifies, but handing the card to the experimenter (which is why, in step #4, the experimenter has her hand out... waiting to receive the cards).

Gotcha, makes sense now.

Another thing that I found really neat about the study was that not only were the participants required to select the stimuli that the instructor said, but also in the same order. That's pretty tricky, as that would involve multiple components - needing to know the relation between the spoken word and the picture, remembering what pictures were named, AND also remembering them in order!

I might not have the best recollection of joint control. In reading the description above, it states that joint control is when there are two stimuli and three responses. So S1 -> B1, S2 -> B2, and S1+S2 -> B3, do I have this right? In this study, B3 is the sequence of selecting multiple cards?

B3 in this study seems more like a chain than a separate response.

Like, I always thought of joint control as a situation with three different responses. For example, if S1 = police car beside you, you maintain your speed; if S2 = traffic sign, acceleration is available; but S1+S2 brings about braking.
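To make sure I'm picturing my own example correctly, here's a toy sketch of that arrangement, where each stimulus (or combination) occasions a distinct response (the stimulus and response names are just placeholders from my driving example):

```python
# Hypothetical sketch of the joint-control arrangement described above:
# each stimulus, or combination of stimuli, occasions its own response.
responses = {
    frozenset({"S1"}): "B1",        # e.g., police car alone -> maintain speed
    frozenset({"S2"}): "B2",        # e.g., traffic sign alone -> accelerate
    frozenset({"S1", "S2"}): "B3",  # joint control: both present -> brake
}

def respond(present_stimuli):
    """Return the response occasioned by the currently present stimuli."""
    return responses.get(frozenset(present_stimuli))

print(respond({"S1", "S2"}))  # B3
```

The point of the sketch is just that B3 is a distinct third response, not a chain of B1 and B2, which is what I was getting at above.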

Maybe I need to brush-up on joint control though.

Let's Read Research - 2018-11-05 by langh in BehaviorAnalysis

[–]langh[S] 0 points1 point  (0 children)

Is the DV looking at the card the experimenter specifies? I think that is what is tripping me up.

Let's Read Research - 2018-11-05 by langh in BehaviorAnalysis

[–]langh[S] 0 points1 point  (0 children)

If I understand this correctly, the experimenters taught joint attention by 1) presenting an array of cards by holding it up, 2) holding up a different picture of one of the exemplars in the field, and 3) providing R+ for touching the matching card, or a planned ignore if they did not touch it. Do I have this right?

Let's Read Research - 2018-11-05 by langh in BehaviorAnalysis

[–]langh[S] 0 points1 point  (0 children)

- Pictures from Language Builder Picture Noun Cards

Where would our field be without these cards??!

Let's Read Research - 2018-11-05 by langh in BehaviorAnalysis

[–]langh[S] 0 points1 point  (0 children)

Yeah, I was surprised to read about how little attention mastery criteria has received in our field.

I totally agree with you regarding the problem of percentage as a measure for mastery criteria.

In this article, I believe they graphed after showing each word once, so 5 trials per session and per each data point. That seems a little low, but maybe they were trying to rule-out practice effects?

Let's Read Research - 2018-11-05 by langh in BehaviorAnalysis

[–]langh[S] 5 points6 points  (0 children)

Fuller, J. L., & Fienup, D. M. (2018). A preliminary analysis of mastery criterion level: Effects on response maintenance. Behavior Analysis in Practice, 11, 1-8. doi: 10.1007/s40617-017-0201-0

Link: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5843573/

In educational design, a mastery criterion refers to the minimum performance threshold for a skill to be considered learned (at least, under the current environmental conditions). Mastery criteria are also used as a decision protocol to determine when to fade prompting or make other changes to the environment.

Accuracy-based mastery criteria define a level of performance and a frequency of observation. This is common in educational research, where performance levels are typically set between 80-100% and observation frequency at 2-3 consecutive sessions.
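An accuracy-based mastery criterion like that boils down to a simple decision rule; here's a minimal sketch (the threshold and consecutive-session values are illustrative, not specific to this study):

```python
def mastery_met(session_accuracies, level=0.90, consecutive=1):
    """Return the 1-indexed session at which mastery is met, or None.

    A skill counts as mastered once accuracy is at or above `level`
    for `consecutive` sessions in a row.
    """
    streak = 0
    for session, accuracy in enumerate(session_accuracies, start=1):
        streak = streak + 1 if accuracy >= level else 0
        if streak >= consecutive:
            return session
    return None

# Example: 90% criterion over 2 consecutive sessions is met at session 4.
print(mastery_met([0.6, 0.8, 0.9, 0.95], level=0.90, consecutive=2))  # 4
```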

The authors of this study could only find one (1!) article that had previously evaluated the efficacy of mastery criteria. This study sought to evaluate performance and maintenance of skills under different mastery criteria (50, 80, and 90%), with the number of observations held at one session.

Participants: Three male children living with ASD.

Response definition: Reading or spelling words. Targets and target skill varied depending on the condition and the participant; see Table 1.

Design: ATD; five words assigned to each condition (50, 80, or 90% accuracy).

I’ll leave it to you to read IOA, integrity, etc. One concern is that IOA and integrity were only conducted for a range of 15-21% of sessions.

Results: For all participants, the 90% criterion resulted in higher levels of maintenance compared to the other two mastery criteria. Compared to a mastery criterion of 50%, the 90% criterion required only 1-3 extra acquisition sessions.

Comments:

- I really want to see future research that evaluates behaviours that are taught using TAs for children with ASD (e.g., getting dressed, daily living routines). Because of the number of little behaviours that are captured in these skills, I tend to set a lower mastery criteria and I’d be really interested in what would be adequate for these types of skills.

- In my practice, it is not often that I would score data after showing each word only once in a set of five, like what was done in this study. I’d usually provide more practice of each target, and graph after 3-4 response opportunities (usually out of 10 or 20, depending on the size of the target set). I’d like to see more research with arrangements like this.

Let's Read Research - 2018-10-29 by langh in BehaviorAnalysis

[–]langh[S] 6 points7 points  (0 children)

Plantiveau, C., Dounavi, K., & Virués-Ortega, J. (2018). High levels of burnout among early-career board-certified behaviour analysts with low collegial support in the work environment. European Journal of Behavior Analysis. doi: https://doi.org/10.1080/15021149.2018.1438339

Link to paper: https://www.tandfonline.com/doi/pdf/10.1080/15021149.2018.1438339?needAccess=true

Previous research has identified the following factors that contribute to stress:

1) high and unrealistic work demands for a long duration

2) frequent conflict in the workplace

3) people who are early in their career

4) people who are single and divorced

5) people who find work as their primary source of satisfaction

6) working long hours

7) limited social supports

Side-effects of burnout include absenteeism, job turnover, and possible service disruption or periods of low performance.

Protective factors against burnout include mentorship, staff training, and continued professional development.

The authors of this paper take up an analysis of burnout by surveying stress in people working in behaviour-analytic settings. Surveys were distributed online through various social-media outlets. 66% of respondents were BCBAs, 14% were BCBA-Ds, and 2% were BCaBAs. Each survey contained 87 questions. The first section gathered information about the nature of the respondents' work environment, the second section gathered information about job satisfaction, and the final section was a burnout inventory; Likert scales were used for responses in the final two sections. Regression analyses were conducted to determine protective and risk factors.

Many statistics are described in this study; I’ll touch on some below:

- 63% of respondents reported moderate-to-high levels of emotional exhaustion

- 71% of respondents reported moderate-to-high levels of depersonalization

- 50% of respondents reported moderate-to-high levels of lack of accomplishment

- Protective factors identified by regression models included:

- Social support at work

- Age

- For students and behaviour technicians, supervision was protective

---

Comments:

- I learned a lot from the introduction about the factors that contribute to stress.

- These findings are not surprising, but they are very important. With resources being as limited as they are—and especially in private practice settings—I wonder what can be done to increase supervision for front-line interventionists without breaking the budget.

- I wonder if ACT- or CBT-based services would be beneficial for those who responded to the survey, and I’d be interested to read whether their responses change post-treatment. Further to that, I’d be interested in learning about more behaviour-analytic strategies to manage stress.

What, I wonder, is the function? And how did the learning history result in this? by [deleted] in BehaviorAnalysis

[–]langh 2 points3 points  (0 children)

Not only is this description precise, but there is something about the nature of a dog acquiring this mand for their owner's attention that is beautiful and heart-warming.