In another thread (Adult autism assessment), @borne_before requested an example of the matching law (the formula that appears in my account info here). In an effort not to derail that thread further, I'm replying in this new thread. What follows is a long (and, to most of you, boring) explanation/example of a key ABA principle.
Herrnstein's seminal work on this (https://doi.org/10.1901/jeab.1961.4-267) from 1961 was done with pigeons. Basically, the pigeons could peck a red or white key. Concurrent schedules of reinforcement were set up whereby, for example, pecking the red key would generally result in getting twice as many food pellets as pecking the white key. He found that under such schedules, the pigeons pecked the red key twice as much as they pecked the white key. Filling this into the equation in my signature, you get:
\frac{\text{number of pecks on red}}{\text{number of pecks on white}} = \frac{\text{number of food pellets for red}}{\text{number of food pellets for white}}
Altering the schedules of reinforcement (he used different variable-interval schedules for each key) led to alterations in the number of pecks to each different-colored key that kept the equation true. Note that he controlled for position/location bias (the keys were right next to each other), as well as for the chance that the bird would just learn to alternate between the keys (e.g., there was a minimum time after pecking one key until the other key could "pay off" with a food pellet).
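To make the arithmetic concrete, here is a minimal sketch in Python with made-up pellet counts (mine, not Herrnstein's data) showing what strict matching predicts:

```python
# Strict matching: the response ratio equals the obtained reinforcement ratio.
def predicted_peck_ratio(pellets_red: float, pellets_white: float) -> float:
    """B_red / B_white = r_red / r_white (hypothetical counts, for illustration)."""
    return pellets_red / pellets_white

# Suppose the red key paid off 40 times and the white key 20 times in a session.
ratio = predicted_peck_ratio(40, 20)
print(f"Predicted peck ratio (red:white) = {ratio:.1f}:1")        # 2.0:1

# Equivalently, the predicted share of all pecks that go to red:
share_red = 40 / (40 + 20)
print(f"Predicted proportion of pecks on red = {share_red:.2f}")  # 0.67
```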
Subsequent studies (cf. Baum 1974, https://doi.org/10.1901/jeab.1974.22-231) have shown that this "simplified" matching law does not account for all variance in behavior, as it does not account for bias (e.g., if a lot of effort was needed to switch between keys, the bird would learn that pecking just the red key would lead to higher rates), nor for sensitivity to the differences between the schedules. This led Baum to revise the formula into the generalized matching law:

\frac{R_1}{R_2} = k \left( \frac{r_1}{r_2} \right)^a

where:
R = response parameter (frequency, duration, etc.)
r = reinforcement parameter
k = bias for choosing one response over the other
a = sensitivity of the behavior to variations in reinforcement distribution
This generalized law has been shown to account for most of the variance in animal responding under concurrent VI schedules.
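For what it's worth, the usual way k and a are estimated is to take logs of both sides, which turns the power function into a straight line: log(R1/R2) = a*log(r1/r2) + log(k). A rough sketch with invented condition data (the numbers below are mine, purely for illustration):

```python
# Fit the generalized matching law in log-log form and report variance
# accounted for. Ratios below are hypothetical, one per VI-VI condition.
import numpy as np

reinf_ratio = np.array([0.25, 0.5, 1.0, 2.0, 4.0])   # r1/r2 across conditions
resp_ratio  = np.array([0.35, 0.6, 1.1, 1.9, 3.2])   # observed B1/B2

x = np.log10(reinf_ratio)
y = np.log10(resp_ratio)

a, log_k = np.polyfit(x, y, 1)   # slope = sensitivity a, intercept = log10(k)
k = 10 ** log_k

# Proportion of variance accounted for (R^2) by the fitted line.
y_hat = a * x + log_k
r_sq = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)

print(f"sensitivity a = {a:.2f}, bias k = {k:.2f}, R^2 = {r_sq:.3f}")
```

A fitted slope below 1, as with these made-up numbers, is the "undermatching" pattern commonly reported in this literature; a slope of exactly 1 with k = 1 reduces back to the simple matching law.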
For an example with humans, Hoch et al. (https://doi.org/10.1901/jaba.2002.35-171) looked at increasing the amount of time that three young boys (ages 9-11) chose to play in an area where a sibling or peer was present vs. in another area by themselves. By altering the magnitude of reinforcement (i.e., duration of access to toys) and the quality of reinforcement (access to highly preferred vs. less preferred toys), the boys became more likely to choose to play with peers/siblings than to play alone. The important thing here is that no aversives or time-out from reinforcement were necessary: the children could, and sometimes did, choose to play alone. They just chose to play with the other kids more.
For a more "personal" example: if you like both beer and bourbon, but like beer twice as much as you like bourbon, you will choose beer on four out of six opportunities and bourbon on two out of six (assuming things like they both cost the same and are just as easy to get ahold of).
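Same arithmetic in a few lines of Python, with the assumption (mine) that relative "liking" behaves like relative reinforcement value, all else being equal:

```python
# Matching applied to the beer/bourbon example: choice allocation in
# proportion to relative value (hypothetical values, for illustration).
value_beer, value_bourbon = 2.0, 1.0   # you like beer twice as much

p_beer = value_beer / (value_beer + value_bourbon)
opportunities = 6
print(f"Expected beer choices:    {p_beer * opportunities:.0f} of {opportunities}")        # 4 of 6
print(f"Expected bourbon choices: {(1 - p_beer) * opportunities:.0f} of {opportunities}")  # 2 of 6
```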