For one group of animals, a tone was first established as a signal for shock; then a compound of the tone and a light was paired with the shock. A second group received only light-shock pairings, and these animals showed stronger conditioning to the light. The results of this experiment make sense if you consider predictability again. Because the tone had already been established as a reliable predictor of the shock, the light added no new predictive information, and little conditioning to the light occurred. Prior training with one CS can thus result in the blocking of conditioning to another CS.
Suppose you have moved to a large metropolitan area that is quite different from the small town you left behind.
The variety of restaurants is amazing.
Friends took you to a seafood restaurant where you ate coconut shrimp for the first time.
During the night, you came down with the stomach flu.
Classical conditioning took place: you were conditioned to avoid coconut shrimp.
The brain is not equally sensitive to all types of stimuli.
Some types of stimuli are more important for survival than others, and evolutionary processes have prepared the brain to detect certain correlations more easily than others.
When a person or animal becomes ill after consuming a food with a novel taste, that taste may become a predictor of illness.
John Garcia and his colleagues demonstrated in the 1960s that rats developed an intense dislike for a novel flavor when that flavor (the CS) had been paired with illness (the UCS).
Classical conditioning of a tone and food occurs best when the tone is sounded just 0.5 seconds before the food is presented, yet strong taste aversions can be conditioned even when illness occurs more than an hour after the taste is experienced.
Researchers had learned a great deal from decades of laboratory research, but the long delay between the presentation of the CS and the UCS was at odds with what they had learned.
At first, journals devoted to animal behavior refused to publish the research.
It is a credit to Garcia's persistence that his work was eventually accepted and is now considered a classic demonstration that an animal's evolutionary inheritance places limits on what it can learn.
Two aspects of the taste-aversion studies are of special interest.
First, to become associated with the illness, the flavor had to be novel.
Second, the time between the start of the taste (the CS) and the start of the illness (the UCS) can be quite long.
Humans are quite good at forming taste aversions.
Have you ever been nauseated after eating?
Such experiences are surprisingly common.
In one survey of undergraduates, many reported having at least one food aversion.
The majority of these aversions developed even though the illness did not occur until several hours after the food was eaten.
The aversions can last for 50 years or more.
We can experience taste aversions even when we know that the illness was actually caused by the flu rather than by the food.
The more highly developed part of our brain that knows this information is no match for the more primitive wiring that makes it easy to associate illness with food.
Taste aversions can also create serious problems.
Children undergoing cancer treatment can develop food aversions that can affect their ability to consume a normal diet.
Psychologists have applied this knowledge of taste aversions.
In one study, children were allowed to eat a distinctively flavored ice cream called "Mapletoff" before receiving cancer treatments that produce nausea.
These children later ate less Mapletoff ice cream than did a second group of patients who had not eaten it before treatment.
The development of a taste aversion thus depends on the pairing of the taste with illness.
Moreover, patients who had eaten Mapletoff ice cream before treatment were more willing to consume their normal diet than were patients who had not eaten the ice cream.
Much human behavior is influenced by classical conditioning.
Learning is a relatively permanent change in behavior, which distinguishes it from behavior changes that result from maturation.
Emotional reactions can be classically conditioned: 11-month-old Albert was conditioned to fear a white rat.
Classical conditioning may also help explain drug tolerance and some accidental drug overdoses.
Advertisers use classical conditioning when they try to associate pleasant feelings with their products.
In the laboratory, lemon juice squirted into a participant's mouth automatically makes the mouth water before any conditioning begins.
If the CS and the UCS are paired frequently, the CR will be strengthened.
Jim likes the perfume.
You blink (UCR) when a puff of air (UCS) is delivered to your eye.
There are a number of specific examples of taste aversions.
Your roommate refuses to eat at the new restaurant because it was the scene of a serial killer's gruesome murders.
Dexter came down with a severe case of the flu during the night after he had eaten lemon shrimp, and he now gets an uncomfortable feeling when one of his teachers walks into class.
Large plastic bags can be killers when children who play with them become trapped inside.
What is Dexter's reaction to the teacher an example of?
The key elements of classical conditioning were outlined by Ivan Pavlov.
In one study, an experimenter squirted lemon juice into Julie's mouth each time a country song was played; by the time she left the lab, the song alone made her mouth water.
How can we overcome such reactions?
Through extinction: the experimenter should play the country song without squirting any lemon juice.
It takes 15 minutes to start eating popcorn.
He tries a new approach when the situation is more unfair than he can handle.
The second type of learning is operant conditioning.
Edward Lee Thorndike developed an influential theory of learning based on his studies of hungry animals placed in puzzle boxes.
In some cases, the animal could get out of the puzzle box simply by pressing a lever or stepping on a platform; if it performed the appropriate behavior, the puzzle box door opened and the animal could exit and eat some food that had been placed outside the door.
In other cases, the required response was more difficult.
For example, the animal might have to make three separate responses: pulling a string, stepping on a platform, and reaching through the bars to turn one of two latches in front of the door.
When Thorndike first put an animal in the puzzle box, it took a long time to escape.
The animal moved around the puzzle box, exploring parts of the chamber in a seemingly random manner.
Thorndike believed that the animal's first successful response was probably an accident.
He recorded the amount of time it took the animal to escape on each trial.
Across trials, the animals took less time to escape as they became more proficient at performing the responses that opened the door.
Responses followed by what Thorndike called satisfiers were stamped into the organism and occurred more and more often.
Thorndike's work set the development of operant conditioning in motion.
The late Harvard psychologist B. F. Skinner has been described as the most famous psychologist who has ever lived.
He was looking for the stimuli that control behavior.
Skinner disliked the term Skinner box, preferring to call the apparatus an operant conditioning chamber.
Skinner's ideas can be applied to human behavior even though he relied on animals.
Let's return to the question we posed earlier: How did a rat end up in a Skinner box?
In Thorndike's terms, a response that eliminates the annoying sound of your alarm clock early in the morning is followed by a satisfying consequence, so that response is likely to occur again.
Reinforcers are at the heart of operant conditioning.
We can define a reinforcer as an event that makes the behavior it follows more likely to occur again.
The target response is the behavior that we want to strengthen or increase.
A real estate agent earns a commission for each house she sells, and the commission reinforces her efforts to sell as many houses as possible.
Negative reinforcers include playing music to reduce boredom, cleaning your room so that your roommate will stop complaining that you're a slob, and taking medication to reduce the pain of a recent hernia repair.
In each of these situations, something stopped, was removed, or was reduced because you performed the target response.
When we say that a change in the environment is contingent on a particular response, we mean that the environmental change occurs only if that response is made.
These contingencies can be expressed as if-then relations.
For example, the contingency for a positive reinforcer might be "If you sell a house, then you receive a commission," and the contingency for a negative reinforcer might be "If you take the medication, then the pain is reduced."
Although these examples may lead you to believe that positive reinforcement increases only desirable behaviors, this is not always the case.
Consider cheating: the act of cheating is the target response.
If cheating is followed by a good grade, the grade is presented and serves as a positive reinforcer, so the frequency of cheating increases.
Is such reinforcement rare? The answer appears to be no: the number of students who admit to having cheated on exams is high, as many as 40%.
Negative reinforcement seems to be more difficult for students to grasp than positive reinforcement.
One of the major mistakes made by students of psychology is the misconception that negative reinforcement is a form of punishment.
It is not: both positive and negative reinforcers make the target behavior more likely to occur.
If negative reinforcement has been effective, the target response that terminated the aversive event is likely to occur again.
Water, food, and sleep are examples of primary reinforcers: stimuli that satisfy biological needs such as hunger, thirst, or the need for sleep.
Not all stimuli that follow and strengthen a behavior satisfy a biological need, however; such stimuli are called secondary reinforcers, and their reinforcing power is learned.
Money is the best example of a secondary reinforcer.
Money does not itself satisfy a biological need, but it can be used to purchase things that do: food, beverages, and a place to catch up on sleep, whether it is a house or a hotel room.
Other important processes have been revealed by continued research on operant conditioning.
Shaping, the Premack principle, escape and avoidance, extinction, and stimulus control are some of the topics discussed in this section.
Think about the behaviors you observe in your environment.
You may realize that the final behavior does not always occur full-blown from the start.
It is not likely, for example, that you learned to drive a car after just one time behind the wheel.
Similarly, don't expect too much from a rat when you begin training it in a Skinner box.
The rat is not likely to press the lever or bar as soon as it enters this new environment.
You may have to teach it how to respond in order to obtain food.
When using this method, known as shaping, you give reinforcement only when the animal engages in behaviors that come closer and closer to the target response.
The concept of shaping is clear, but actually doing the shaping can be difficult.
If reinforcers are not presented at the right moment, an inappropriate response may be shaped.
At first, whenever the rat is near the food dish, you drop a piece of food into the dish.
The behavior of approaching the dish is reinforced when the rat eats the food.
Once the rat has learned where the food is delivered, you begin giving reinforcers only for movements in the direction of the lever.
Next you make your response requirements more demanding: the rat must actually touch the lever to receive reinforcement.
Once the rat has started touching the lever, you can require that the lever actually be pressed before reinforcement is given.
Gradually, you have made the response that produces reinforcement more and more similar to the target response of pressing the lever.
One of the best-known cases of shaping involved a patient who had been admitted to a mental hospital at the age of 21 with a diagnosis of schizophrenia and had become completely mute almost immediately after commitment.
No one was able to get him to speak.
He lived day in and day out without uttering a word.
One day a psychologist accidentally dropped a pack of chewing gum.
The patient's eyes turned to focus on the gum and then returned to their usual fixed gaze.
Seizing on this small sign of responsiveness, the psychologist began using shaping techniques.
For the next two weeks, the psychologist held up the gum in front of the patient and at first required only some visual contact before giving it to him; gradually, responses closer and closer to speech were required.
Eventually the patient said "Gum, please," speaking for the first time in 19 years.
Whether or not we realize it, shaping techniques have been used to help us acquire many new behaviors.
Consider how you learned to talk, write, and drive a car.
The appropriate delivery of reinforcers gradually shaped these behaviors.
Before you read further, recall the behaviors and events mentioned earlier in our discussion of operant conditioning.
Remember the roommates who had trouble keeping their apartment clean?
Erik was shaping Bill's behavior.
At first he praised any work Bill did to help keep the apartment clean.
Then he gradually reserved his praise for greater efforts, until Bill was finally cleaning the apartment on a regular basis.
Imagine that you are a middle school music teacher and you cannot get your students to play complex jazz-rock compositions.
The key is shaping: break the complex behavior down into simpler components and then, in operant conditioning terms, reinforce successive approximations to the ultimate, more complex behavior.
Psychologist David Premack identified another useful tool: a preferred activity, such as playing jazz-rock, can be used to reinforce a less preferred one.
This relation has come to be known as the Premack principle.
The Premack principle was put to use in a potentially life-threatening situation involving a 7-year-old boy who refused to eat all but a very few specific foods.
He became aggressive and difficult to handle when his parents offered him other foods.
His parents were worried about the health risks of his diet restriction.
They sought the help of a therapist who created a treatment program based on the Premack principle.
Access to his preferred foods was made contingent on first eating a small amount of a new food; if he did not eat the new food, he was given one of his less preferred foods so that he would not go hungry.
Over time, the boy became calmer when presented with new foods and began to eat a wider variety of them.
There are many other examples of the Premack principle.
Football players get to run new plays only after they have run their laps.
A child rakes and bags leaves on Saturday morning, the less preferred activity, before being allowed to play a video game on Saturday afternoon.
As with classical conditioning, operantly conditioned responses can undergo extinction, a general term for the reduction and eventual disappearance of a response.
When we discussed classical conditioning, extinction of classically conditioned responses involved presenting the CS without the UCS.
In operant conditioning, extinction occurs when the target behavior is no longer followed by reinforcers.
If Johnny screams while his parents are shopping, the screaming should decrease if his parents consistently ignore it.
In real life, however, even parents with the best of intentions find it hard to ignore such behavior.
They are likely to reinforce the child's screaming by giving him a piece of candy or something similar to quiet him down.
Remember, reinforcement increases whatever behavior it follows, not only desirable behaviors.
Parents can thus unwittingly reinforce the very behaviors they seek to extinguish.
Raymond suffered brain damage and mental retardation because of oxygen deprivation at birth.
When he was 2 years old, he began hitting his head against the wall.
His parents were horrified each time he banged his head.
They turned to a psychologist who agreed to come to their house and observe their child.
The plan the psychologist came up with to eliminate the head banging was not what Raymond's parents were expecting.
The psychologist explained that their reactions were reinforcing Raymond's head banging.
To eliminate the behavior, his parents had to remain passive and not respond when the head banging occurred; withholding their attention placed the behavior on extinction.
At the beginning of extinction, however, the behavior sometimes actually increases before it decreases.
Because an increase in the rate of head banging could be very dangerous, the psychologist had the parents put a protective head covering on Raymond.
The first few days were very difficult for his parents, but with encouragement they stuck to the plan and remained passive.
A discriminative stimulus is a stimulus or signal that tells the participant that its responses will be reinforced.
For example, a green light or a tone can serve as such a signal for a rat in an operant conditioning chamber.
When the light or tone is present, lever presses are reinforced according to the schedule of reinforcement the rat has experienced during training.
When the discriminative stimulus is not present, responses are not reinforced, and extinction occurs.
The real world is full of discriminative stimuli.
The "Open" sign in a store window tells people that if they reach for the door handle and enter, they will be able to shop.
The color of the traffic light at an intersection tells you whether you should stop your car or proceed through the intersection.
Your friend's mood serves as a signal that a response such as telling a joke or making a sympathetic remark will be appreciated.
The "Closed" sign, by contrast, tells you that the response of pulling the door handle will not be reinforced: you will not get to enjoy any ice cream.
In an operant conditioning chamber, the experimenter can deliver reinforcers, such as a food pellet for a hungry rat or pigeon, according to a preset pattern, or schedule of reinforcement.
Suppose you looked at the cumulative record of a rat being trained in a Skinner box.
Holding the record in your hand, you note something interesting: it shows a straight horizontal line.
A flat cumulative record means that the rat is no longer performing the target behavior.
The researcher had stopped reinforcing the behavior, and this extinction procedure resulted in the reduction and elimination of the behavior.
Once the target response has been shaped, the experimenter can arrange for reinforcers to be delivered according to a specific schedule.
The simplest arrangement is continuous reinforcement: a rat in a Skinner box gets a food pellet for each bar press, just as a soda machine gives you a cold drink every time you put money in it.
A continuous schedule of reinforcement produces a high rate of responding.
The response rate drops quickly, however, once the reinforcer is no longer effective.
Food pellets lose their effectiveness after the rat has eaten a great many of them.
Sometimes we do not want to require the same exact number of responses for every reinforcer.
Instead, the reinforcer may be delivered after 15 responses on one occasion, after 35 responses on another, after 10 responses on yet another, and so forth.
Rather than a fixed-ratio (FR) schedule set at a single value, this arrangement is a variable-ratio (VR) schedule, which we designate by the average number of responses required.
In our example, in which the values 15, 35, and 10 were used, the average number of responses is 20, so the schedule is designated VR 20.
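As a quick check, that average works out to (15 + 35 + 10) / 3 = 60 / 3 = 20 responses per reinforcer.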
Under ratio schedules, the more responses participants make, the more reinforcers they receive.
Both fixed-ratio and variable-ratio schedules therefore tend to produce high rates of responding.
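For readers who find a concrete model helpful, here is a minimal sketch of how ratio schedules decide when a reinforcer is delivered. It is illustrative only and not taken from the text: the function name, the seed, and the choice of a uniform draw between 1 and 39 responses for the variable case are assumptions made for the example.

```python
import random

def ratio_reinforcers(total_responses, requirement, variable=False, seed=0):
    """Count reinforcers earned under a fixed-ratio (FR) or variable-ratio (VR) rule.

    requirement is the number of responses needed per reinforcer
    (the average number, if variable=True).
    """
    rng = random.Random(seed)

    def next_requirement():
        # VR: a new requirement is drawn each time, averaging `requirement`.
        # FR: the requirement is always the same.
        return rng.randint(1, 2 * requirement - 1) if variable else requirement

    needed = next_requirement()   # responses still required before the next reinforcer
    reinforcers = 0
    for _ in range(total_responses):
        needed -= 1               # every response counts toward the current requirement
        if needed == 0:           # requirement met: deliver a reinforcer and reset
            reinforcers += 1
            needed = next_requirement()
    return reinforcers

# FR 20: exactly every 20th response pays off, so 1,000 responses earn 50 reinforcers.
print(ratio_reinforcers(1000, 20))
# VR 20: on average every 20th response pays off; the total is close to 50,
# but the payoffs arrive unpredictably.
print(ratio_reinforcers(1000, 20, variable=True))
```

The point of the sketch is simply that under either rule the amount of reinforcement depends on how much the participant responds, which is why ratio schedules sustain such high response rates.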
For one gambler, the slot machines were her downfall.
Each time the bell rang and she collected a pot of quarters, she was encouraged to continue playing.
Before she knew it, she had fed a great many quarters into the machines.
On a cumulative record, the steeper the slope of the line, the higher the rate of responding.
After reinforcement has been delivered under a demanding fixed-ratio schedule, responding ceases for a brief period.
This pause does not occur when a variable-ratio schedule is in effect.
Under interval schedules, which are described later in this section, the rate of responding drops just after reinforcement, when the timing of the next interval begins.
When participants cannot predict the end of the interval, they cannot judge when to respond, so they maintain a steady rate of responding.
All told, the gambler spent $250 just to win $33.
Slot machines pay off on a variable-ratio schedule.
She knew she would win at some point, but she could not predict when.
If you have ever become hooked on playing the lottery, you will understand.
Ratio schedules of reinforcement produce high rates of responding, whether the responses occur in a Skinner box or in front of a one-armed bandit in Las Vegas.
Under fixed-ratio schedules, however, participants may pause for a brief period after the reinforcer has been delivered.
The reinforcer seems to serve as a signal to take a break.
I was once able to observe workers who were paid on a piecework basis, a real-world fixed-ratio schedule.
When a worker started up the machine, he worked steadily and rapidly until the counter on the machine indicated that 100 pieces had been made.
The worker would take a break after recording the number on the work card.
The reinforcement schedule, not supervision from their boss, controlled the workers' pace.
The boss did not have to tell them to work faster, and no one had to be reprimanded for taking overly long breaks.
The second type of intermittent schedule of reinforcement, the interval schedule, is based on the passage of time rather than the number of responses.
The following everyday examples illustrate the different schedules of reinforcement:
Continuous reinforcement: A teacher walks students through the steps of solving a new kind of problem, praising them at each step as they learn the solution.
Continuous reinforcement: The driver of a large delivery van puts the key in the ignition several times a day; each time the engine turns over, she is able to drive to the next stop on her route.
Fixed ratio: A salesperson receives a $500 bonus added to his regular salary after every 10 cars he sells.
Variable ratio: Door-to-door salespeople selling magazine subscriptions sometimes have to make many calls and sometimes only a few before they sign up a subscription.
Variable ratio: A group of friends visiting Las Vegas is excited to play the slot machines and spends hours feeding them quarters after being told that 1 quarter in 250 is a winner.
Variable interval: The students have been studying at a fairly even rate throughout the semester.
Variable interval: During the summer, a group of students hitchhiking spends a lot of time walking along the highway; sometimes they get a ride after a few minutes, but at other times they have to wait several hours.
Fixed interval: Students are given a quiz every Friday.
Fixed interval: The employees of a company receive their pay in the form of a check every two weeks.
Interval schedules have two basic types, fixed-interval and variable-interval.
Some examples of the schedules of reinforcement are listed in Table 6-1.
On a fixed-interval schedule, responses made before the end of the interval are not reinforced.
You won't receive mail until it's time for the daily mail delivery, no matter how many times you check your mailbox.
Participants try to estimate the passage of time and make most of their responses toward the end of the interval, when they will be reinforced.
If your mail is usually delivered between 3:00 and 3:30 P.M., you probably do not start looking for it until close to that time.
Similarly, if a turkey is supposed to roast for 4 hours, you check the oven more and more often as the end of the 4 hours approaches.
The longer participants are exposed to a fixed-interval schedule, the better they become at timing their responses.
On a variable-interval schedule, by contrast, a response can be reinforced at any time, so it makes sense for participants to maintain a steady rate of responding.
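A companion sketch, again purely illustrative (the function name and the uniform draw of interval lengths are assumptions), shows why responding faster does not pay on interval schedules: only the first response after the interval has elapsed is reinforced.

```python
import random

def interval_reinforcers(response_times, interval, variable=False, seed=0):
    """Count reinforcers earned under a fixed-interval (FI) or variable-interval (VI) rule.

    A response is reinforced only if the current interval has already elapsed;
    responses made before the interval ends earn nothing.
    """
    rng = random.Random(seed)

    def next_gap():
        # VI: interval lengths vary around the average; FI: they are always the same.
        return rng.uniform(0, 2 * interval) if variable else interval

    available_at = next_gap()      # time at which the next reinforcer becomes available
    reinforcers = 0
    for t in sorted(response_times):
        if t >= available_at:      # first response after the interval elapses is reinforced
            reinforcers += 1
            available_at = t + next_gap()   # the clock restarts after reinforcement
    return reinforcers

# Checking the mailbox once every minute for an hour on an FI 5-minute schedule:
# only the first check after each 5-minute interval pays off (11 reinforcers here).
checks = list(range(60))
print(interval_reinforcers(checks, 5))
# On a VI 5-minute schedule the payoffs come at unpredictable times, so a steady,
# moderate rate of checking collects most of them.
print(interval_reinforcers(checks, 5, variable=True))
```

Checking a hundred times a minute would earn no more reinforcers than the steady once-a-minute pattern, which is exactly why interval schedules encourage slower, steadier responding.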
Think of the times you called a friend on the phone only to get a voice mail.
You probably did not start redialing immediately at a rapid pace.
Most likely you called back a few minutes later and, if you were still unsuccessful, a few minutes after that.
You did not know whether your friend was having several short conversations or one long one, so you could not predict when your dialing would finally be rewarded by the sound of a ringing telephone.
Time would tell.
Only after your friend hung up were you able to get through, and your chances of getting through improved as more time passed.