To think scientifically, we have to consider alternative explanations for our findings, explanations that our intuitive thinking tends to ignore and that can otherwise lead us astray.
Many psychologists refer to the scientific method as though it were one and only one technique. In fact, a single "scientific method" is something of a myth, because the techniques psychologists use differ from those used by their colleagues in chemistry, physics, and biology.
Scientists use a variety of methods to protect themselves against error and get closer to the true state of the world.
Using the language we've learned in this chapter, we might say that science consists of safeguards against the dangers of putting too much stock in our intuitive thinking. Scientists revise their theories when hypotheses derived from them are found to be false.
The best set of safeguards we have against biases and other errors in intuitive thinking is the toolbox of science.
Let's take a peek at what's inside this toolbox.
Suppose we want to conduct a study to find out what makes people laugh and when they do so.
We could try to answer these questions by taking people into our lab and observing their laughter.
But we wouldn't be able to recreate in the laboratory the full range of settings that make people laugh. Moreover, even if we observed participants without their knowledge, their laughter could still have been influenced by the fact that they were in a laboratory; they may have been more nervous than they would be in the real world. In other words, laboratory research on laughter could be low in external validity, the extent to which findings generalize to real-world settings.
To get around this problem, we can observe laughter as it unfolds in the real world, recording it with a video camera, an audio recorder, or simply paper and pencil. This approach, called naturalistic observation, is used by many psychologists who study animals in their natural habitats. Watching behavior in the "real world" lets us better understand the range of behaviors individuals display. Robert Provine used naturalistic observation to study human laughter.
He eavesdropped on 1,200 instances of laughter in social situations and recorded the gender of the laugher and laughee.
He found that women laugh more than men in social situations. He also discovered that fewer than 20 percent of laughing incidents are preceded by even remotely funny statements; most cases of laughter follow decidedly unfunny comments. And Provine found that speakers laugh more than their listeners, a finding painfully familiar to any of us who've laughed out loud at one of our own jokes while our friends looked back with blank stares. Provine's work sheds new light on laughter, yielding findings that would have been difficult to pull off in a laboratory.
The advantage of naturalistic observation is that psychologists apply these designs to organisms as they go about their daily business, undisturbed by the researcher. The disadvantage is control. In well-conducted laboratory experiments, we can manipulate the key variables ourselves, so such experiments tend to be high in internal validity, permitting cause-and-effect inferences. In naturalistic designs, we have no control over these variables; we must wait for behavior to unfold before our eyes. Naturalistic designs can also be problematic if people know they're being observed, one factor that makes studying humans more complicated than studying, say, the moon: if we look at the moon through a telescope for a long time, it doesn't change its behavior, but people often do when they know they're being watched.
The case study, in which researchers examine one person or a small group of people in depth, is one of the simplest designs a psychologist can use.
There isn't a single recipe for a case study.
Some researchers might follow a single individual over time, others might administer questionnaires, and still others might conduct repeated interviews or behavioral observations. Despite this variety, case studies can sometimes settle a question decisively: a single case of a nonstriped zebra is all it would take to disprove the claim that all zebras have stripes.
The existence of "recovered memories" of child abuse is one of the most controversial topics in psychology.
Experts disagree about whether people can forget episodes of childhood sexual abuse, only to remember them later in life with the help of a therapist.
There have been several suggestive existence proofs of recovered memories, but none has been completely convincing.
The debate continues.
Case studies can be used to study rare or unusual phenomena that are difficult or impossible to recreate in the laboratory, such as people with atypical symptoms or rare types of brain damage.
For example, one case study described a six-month program developed to treat a man's unusual condition, using techniques designed to enhance his sexual arousal in response to women and snuff out his sexual response to dogs.
A sample of 50 or even 5 individuals with this strange condition could take decades to accumulate in the laboratory.
The case study provided helpful insights into the treatment of this condition.
Case studies have their limits, but they can be very helpful in generating hypotheses, and they can occasionally offer existence proofs. In the 1960s, for example, pioneering psychiatrist Aaron Beck was conducting therapy with a female client who appeared anxious during the session.
When Beck probed, she admitted she was afraid she was boring him; probing further, he discovered that she harbored the irrational belief that she bored everyone she met. From observations like these, Beck pieced together an influential new form of treatment, cognitive therapy, based on the idea that people's emotional distress stems largely from their irrational beliefs, and that by identifying and challenging those beliefs, therapists can relieve clients' distress.
There are also hundreds of observations purporting to show remarkable phenomena, such as claims that certain nonprimate species can communicate with humans. Observations like these can be helpful for generating hypotheses, but by themselves they can't confirm them.
If we want to find out about someone's personality and attitudes, a good place to start is by asking them directly.
We can learn a lot from the questionnaires and surveys if we design and administer them well.
Imagine being hired by a research firm to gauge people's attitudes towards a new brand of toothpaste, Brightooth, which supposedly prevents 99.99 percent of cavities.
We could stop people on the street, pay them money to brush their teeth with Brightooth, and measure their reactions to Brightooth on a survey.
But the people on our street may not be like people in the broader population. Moreover, some people will refuse to participate, and they may differ from those who agree to participate. People with bad teeth, for example, might refuse to try Brightooth, and that's a problem, because Brightooth executives presumably most want to market their product to people with bad teeth.
Modern researchers would instead use random selection: identify a representative sample of the population and administer the survey to people from that sample. For example, we might contact every 10,000th person listed in U.S. census data. In random selection, every person in the population has an equal chance of being chosen to participate. Random selection is crucial if we want to generalize our results to the broader population.
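As a toy illustration (the population list and sample size here are invented for the example), random selection can be sketched in a few lines of code:

```python
import random

# Hypothetical sampling frame: a list standing in for everyone
# in the population of interest.
population = [f"person_{i}" for i in range(10_000)]

# Random selection: sample without replacement, so every person
# has an equal chance of ending up in the survey.
survey_sample = random.sample(population, k=100)

print(len(survey_sample))  # 100 distinct people
```

Because every member of `population` is equally likely to be drawn, results from `survey_sample` can, within sampling error, be generalized to the whole list; a convenience sample (say, the first 100 names) offers no such guarantee.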
Political pollsters worry a great deal about random selection, because if their selection of survey respondents from the population isn't random, their election forecasts may be skewed and even wildly inaccurate. Some analysts argue that nonrandom selection led some pollsters to wrongly predict that Hillary Clinton would defeat Donald Trump in the 2016 U.S. presidential election. Polling agencies have long called people's landline phones, but many people now rely only on cell phones, and because cell-phone-only users are more likely to be Democratic than Republican, this mismatch may have introduced biases into the 2016 polls.
If we want to generalize our results to most people, we need a random sample.
When it comes to surveys, bigger isn't always better.
Nonrandom selection can lead to wildly misleading conclusions.
In the 1980s, author Shere Hite sent out 100,000 surveys to American women asking about their relationships with men, identifying potential respondents from lists of subscribers to women's magazines. Among her reported findings: large proportions of women married five or more years were having extramarital affairs. To put it mildly, that's all pretty depressing news. Lost in the furor over Hite's findings, though, was the fact that only 4.5 percent of the women she contacted had responded to her survey, and Hite had no way of knowing whether that 4.5 percent was representative of her full sample, let alone of American women in general. Indeed, a poll conducted by the Harris organization at about the same time used random selection and reported very different results: 89 percent of women said they were satisfied with their current relationship, and only a small minority reported extramarital affairs.
The same problem plagues the typical online news poll, which is not scientific: it's based on people who choose to log on to the Web site, and those people, typically the ones most motivated to take the survey, are probably not a representative sample of the site's audience.
Reliability refers to consistency of measurement: a reliable questionnaire yields similar scores over time. To assess this kind of reliability, called test-retest reliability, we could administer a personality questionnaire to a large group of people and then administer it again two months later. If the measure is reliable, participants' scores should be close to identical at both times. Reliability also applies to interviews and observations: if two psychologists rate the same interviews and reach very different scores, we'd say the interview ratings are unreliable. Validity, in contrast, is the extent to which a measure assesses what it purports to measure; we'd demand our money back if we opened an "iPhone" box and found a wristwatch inside.
People often confuse reliability and validity.
Consider the so-called lie detector: the central validity question is whether the polygraph detects lying, not merely emotional arousal.
Reliability is necessary for validity: we need to measure something consistently before we can measure it well.
Imagine trying to measure the floors and walls of an apartment using a ruler made of Silly Putty, that is, a ruler whose length changes each time we pick it up.
Efforts at accurate measurement would be doomed.
Reliability doesn't guarantee validity.
A reliable test can be completely invalid.
Imagine a new measure of intelligence, the "Distance Index-Middle Width Intelligence Test" (DIMWIT), which takes the width of our index finger and subtracts it from the width of our middle finger.
The DIMWIT would be highly reliable: the widths of our fingers aren't likely to change much over time, and different raters are likely to measure them similarly.
The DIMWIT is an invalid measure of intelligence because finger width has nothing to do with intelligence.
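To make the distinction concrete, here is a toy computation (all numbers invented for illustration): the DIMWIT-style scores barely change between two administrations, so test-retest reliability is nearly perfect, yet they scarcely track the IQ scores they supposedly measure.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented finger-width scores for five people, measured twice.
dimwit_time1 = [2.1, 3.4, 1.8, 2.9, 3.1]
dimwit_time2 = [2.0, 3.5, 1.7, 3.0, 3.2]  # scores barely change
iq_scores    = [90, 105, 110, 105, 90]    # unrelated criterion

test_retest = pearson(dimwit_time1, dimwit_time2)  # close to +1: reliable
validity    = pearson(dimwit_time1, iq_scores)     # close to 0: invalid
```

Same measure, two verdicts: highly consistent over time, yet useless as an index of intelligence.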
The Roper organization once discovered that Americans gave strikingly different answers depending on how a question was phrased, so we should bear question wording in mind when interpreting the results of self-report measures and surveys. In one Roper poll, a confusingly worded question containing two negatives led 22 percent of respondents to seem to say that the Holocaust may not have happened; when a later poll asked the question in plainer language, only about 1 percent expressed that doubt. In another classic example, a survey of 300 female homemakers found that 81 percent said they would like to have a job; when a later poll asked a second, seemingly very similar question with different wording, the number who said they'd like a job dropped sharply.
Survey wording is important.
According to a survey conducted in late 2015, 30 percent of U.S Republicans and 19 percent of U.S. Democrats supported bombing Agrabah.
The only problem: Agrabah doesn't exist; it's the fictional city from the Disney film Aladdin.
In Greek mythology, Narcissus was a handsome hunter who fell in love with his own reflection; according to the myth, he died while gazing at it in a pond. Psychologists measure narcissism and similar traits with self-report questionnaires, which are easy and cheap to administer: all we need is a pencil, paper, and a willing participant, and we're off and running.
Moreover, if we have a question about someone's personality, it's often a good idea to first ask the person directly, because most of us have access to subtle information about our own emotional states, such as anxiety or guilt, of which outside observers aren't aware. Self-report measures of many personality traits and behaviors work reasonably well. For example, people's reports of how outgoing or shy they are tend to be associated with the reports of people who've spent a lot of time with them, and these associations are higher for extraversion than for anxiety, because extraversion is more observable to others.
Self-report measures have some disadvantages as well.
They assume that respondents have enough insight into their personality characteristics to report them accurately.
This assumption is questionable for certain groups of people. Those with high levels of narcissistic traits, such as self-centeredness and excessive self-confidence, view themselves more positively than others view them; they tend to see themselves through rose-colored glasses.
Self-report questionnaires assume that participants are honest.
Imagine that a company required you to take a personality test for a job you really wanted. Many people in that situation distort their answers in a way that paints them in a positive light. This tendency to answer questions in a socially desirable direction is one example of a response set. We're especially likely to engage in this response set when the stakes are high, as when applying for an important job, and it can make it difficult to trust people's reports of their own abilities and achievements.
College students overstate their SAT scores by an average of 17 points.
Female undergraduates hooked up to what they believe is a working lie-detector machine report more lifetime sexual partners than they otherwise do, suggesting that they ordinarily understate the true number.
The opposite response set is malingering: making ourselves appear psychologically disturbed for personal gain. People accused of crimes sometimes engage in malingering to stay out of trouble, and we're also likely to see this response set among people seeking financial compensation for an injury or mistreatment on the job. A famous example is New York mobster Vincent Gigante, who wandered the streets in a bathrobe accompanied by a bodyguard, was hospitalized 22 times for mental illness, and was eventually found to have been malingering on self-report measures. Psychologists have devised ways to compensate for response sets in clinical practice and research, such as including questions designed to detect respondents' tendencies to make themselves seem either perfect or disturbed.
An alternative to asking people about themselves is to ask others who know them well to provide ratings of them.
Observers' ratings of personality traits, such as conscientiousness, are often more valid than self-reports of these traits for predicting students' academic achievement and employees' work performance.
Yet rating data are vulnerable to the halo effect: the tendency of ratings of one positive characteristic to spill over and influence ratings of other positive characteristics, almost as though raters regard the people they're rating as angels wearing halos. If we find an employee physically attractive, for example, that impression may sway our ratings of his or her other features, such as conscientiousness and productivity.
People think that physically attractive people are more successful, confident, assertive, and intelligent than other people, even though these differences often don't reflect objective reality.
Student course evaluations of teaching are vulnerable to halo effects because if you like a teacher personally you're more likely to give him or her a break on the quality of teaching.
In one study, Richard Nisbett and Timothy Wilson randomly placed participants into two conditions.
Some people watched a videotape of a college professor with a foreign accent who was friendly to his students, while others watched a videotape of the same professor who was unfriendly to his students.
Participants who watched the friendly version rated the professor's physical appearance, mannerisms, and accent more positively than did those who watched the unfriendly version, even though those attributes were identical across the two videotapes.
Students who like their professors tend to give them high ratings on characteristics that are irrelevant to teaching effectiveness, including the quality of the classroom audiovisual equipment and the readability of their handwriting (Greenwald & Gillmore, 1997; Williams & Ceci, 1997).
If two things are correlated, they relate to each other.
Naturalistic observation and case studies allow us to describe behavior, but correlational designs allow us to predict it. Knowing people's SAT scores, for example, allows us to forecast, although by no means perfectly, what their college grades will be.
Whenever researchers conduct a study of the extent to which two variables travel together, their design is correlational even if they don't describe it that way.
If the number of college students' Facebook friends is correlated with how outgoing these students are, then more outgoing students have more friends on Facebook.
If two variables are uncorrelated, in contrast, knowing one tells us nothing about the other: knowing that someone is good at math doesn't tell us anything about his singing ability. And if social anxiety were negatively correlated with perceived physical attractiveness, more socially anxious people would tend to be rated as less physically attractive.
A correlation of - 1.0 is a perfect negative correlation, whereas a correlation of +1.0 is a perfect positive correlation.
The mathematics of calculating correlation coefficients gets a bit technical, so we won't talk about it.
Correlation coefficients with absolute values lower than 1.0 reflect less-than-perfect correlations. The absolute value of a correlation of +0.27 is 0.27, and the absolute value of a correlation of -0.27 is also 0.27. The two coefficients are equally large and equally informative; they merely go in opposite directions. So if you were asked which of the two is the larger correlation, give yourself a point if you said it's a trick question.
Test yourself by identifying the type of correlation in each of the following findings:
- When participants' blood alcohol levels increased, their reaction times increased. (Positive correlation)
- People who missed more days of class had lower grades. (Negative correlation)
- Head size was unrelated to IQ. (Zero correlation)
- The more negative life events people experienced, the more likely they were to be diagnosed with depression. (Positive correlation)
- A student's height was unrelated to her exam scores. (Zero correlation)
A scatterplot is a graph in which each dot represents a single person's scores on two variables. Imagine a scatterplot of the correlation between the number of beers students drink the night before their first psychology exam and their scores on that exam.
The correlation is negative because the clump of dots goes from higher on the left to lower on the right of the graph.
The more beer students drink, the worse they will do on their first psychology exam.
Still, there are exceptions: some students drink a lot of beer and nevertheless do well on the exam, while others drink almost no beer and still do poorly.
When two variables are uncorrelated, by contrast, the scatterplot looks like a formless blob of dots pointing in no particular direction. That's what a plot of students' shoe sizes against their exam scores would look like: we wouldn't do any better than chance if we tried to guess people's exam grades from their shoe sizes. In a positive correlation, the clump of dots goes from lower on the left of the graph to higher on the right.
The more psychology classes students attend, the better they will do on their first psychology exam.
There will always be the annoying students who don't attend any classes yet do well on their exams, and the incredibly frustrated souls who attend all of their classes and still do poorly.
There will always be exceptions to the general trend if the correlation coefficients are not perfect.
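The three scatterplot patterns can be reproduced numerically with invented data (every number below is made up for illustration): a negative correlation for beers versus exam scores, roughly zero for shoe sizes, and a positive one for classes attended.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

beers        = [0, 1, 2, 4, 6, 8]
exam_scores  = [88, 85, 80, 72, 60, 50]   # falls as beer intake rises

shoe_sizes   = [8, 9, 10, 11, 12, 7]
exam_scores2 = [70, 90, 60, 85, 75, 80]   # unrelated to shoe size

classes      = [2, 5, 8, 10, 12, 14]
exam_scores3 = [55, 60, 70, 75, 85, 88]   # rises with attendance

beer_r  = pearson(beers, exam_scores)        # strongly negative
shoe_r  = pearson(shoe_sizes, exam_scores2)  # near zero
class_r = pearson(classes, exam_scores3)     # strongly positive
```

The signs and magnitudes of `beer_r`, `shoe_r`, and `class_r` match the downward slope, formless blob, and upward slope of the three scatterplots described above.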
When evaluating a correlation, it's tempting to argue from anecdote: "But I know a person who smoked heavily and lived to a ripe old age." Because the correlation between smoking and lung cancer and heart disease is less than 1.0, such exceptions must exist. But exceptions don't undermine the existence of the correlation.
Psychological research shows that we're not good at estimating correlations from everyday experience. Indeed, we can perceive two variables as correlated when they aren't related at all. An illusory correlation is exactly that: the perception of a statistical association between two variables where none exists, a statistical mirage.
Some police departments put more cops on the beat on nights when there's a full moon, and emergency department nurses insist that more babies are born during full moons.
Many people with arthritis think their joint pain increases during rainy weather, but carefully conducted studies show that is not the case.
If you've noticed such patterns yourself, you may have experienced an illusory correlation.
In New York City, the majority of crosswalk buttons are "placebo buttons" that do nothing.
The same goes for many elevator buttons and switches.
Illusory correlations are the basis of many superstitions.
Consider the case of Wade Boggs, one of the game's greatest hitters. Boggs ate chicken before every game for 20 years because he thought doing so was correlated with success in the batter's box. In other words, Boggs had come to believe that eating chicken and belting 95-mile-an-hour fastballs had something to do with each other, when almost certainly they didn't.
Why do we fall prey to illusory correlation? We're all susceptible to it; it's an inescapable fact of daily life. Note the contrast with a genuine association: the harder we push on a car's gas pedal, the faster the car moves, because the force applied and the car's movement really do go together. To understand why we perceive correlations where no such covariation exists, we can construct a fourfold table displaying the four possible combinations of two events.
Let's return to the lunar lunacy effect.
There are four possible relationships between the phases of the moon and whether a crime is committed.
The upper left-hand (A) cell of the table has cases in which there was a full moon and a crime.
The upper right-hand (B) cell contains cases in which there was a full moon but no crime occurred. The lower left-hand (C) cell contains cases in which there was no full moon but a crime occurred.
The D cell consists of cases in which there was no full moon and no crime.
We attend mostly to the upper left-hand (A) cell, because this cell usually fits what we expect to see; our confirmation bias kicks in. In the case of the lunar lunacy effect, these are the cases in which there was a full moon and a crime occurred.
When we think about what happens during a full moon, we tend to remember instances that are the most dramatic.
Those that fall into the (A) cell are usually the ones that grab our attention.
In contrast, it's not likely that we'll rush home excitedly to tell a friend, "Wow, you're not going to believe this: there was a full moon last night, and nothing happened!"
Our uneven attention to the different cells of the table causes us to perceive illusory correlations. One way to avoid them is to force ourselves to keep track of disconfirming instances by giving the other three cells of the fourfold table more attention.
Indeed, when James Alcock and his students asked a group of participants who claimed they could predict the future from their dreams to keep a dream diary, the participants' beliefs that they were prophetic dreamers vanished. The diary forced them to attend to the (B) cell: the cases that disconfirmed prophetic dreams.
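A quick calculation over a hypothetical fourfold table (all counts invented) shows why attending only to the (A) cell misleads: the crime rate is identical with and without a full moon.

```python
# Invented counts for the fourfold table:
#                 crime     no crime
# full moon       A = 20    B = 180
# no full moon    C = 80    D = 720
A, B, C, D = 20, 180, 80, 720

crime_rate_full_moon = A / (A + B)  # 20 of 200 full-moon nights
crime_rate_no_moon   = C / (C + D)  # 80 of 800 other nights

# The two rates are identical, so there is no real association,
# despite the 20 vivid crimes sitting in the attention-grabbing (A) cell.
```

Only by weighing all four cells, as a correlational analysis does automatically, can we see that the full moon adds no predictive information.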
The phenomenon of illusory correlation explains why we need correlational designs and why we can't rely on our subjective impressions.
When we've learned to expect two things to go together, our intuitions are often misleading.
Adults may be more prone to illusory correlation than children because they've had years to build up expectations about certain events.
Correlational designs help us address the problem of illusory correlation because they force us to weigh all four cells of the table equally.
Correlational designs can be used to determine whether and how strongly two variables are related, and they allow us to predict behavior. For example, they can help us find out which variables predict which inmates will reoffend after being released from prison, or which life habits are associated with later heart disease.
There are limitations to the conclusions we can draw from correla tional designs.
The most important limitation is the correlation versus causation fallacy: the error of equating correlation with causation. Note the difference from illusory correlation, in which there is no correlation at all. In the correlation versus causation fallacy, a correlation genuinely exists, but we misinterpret it as demonstrating that one variable causes the other.
Consider two examples of the correlation versus causation fallacy. A statistician with too much time on his hands once discovered a negative correlation between the number of PhD degrees awarded in a state and the number of mules in that state. Does this correlation mean that awarding PhD degrees drives out mules? It's conceivable that people with PhDs have something against mules and campaign to have them relocated to neighboring states, but this scenario is hardly likely.
Before reading the next paragraph, ask yourself if there is a third explanation.
There is: how rural a state is. Rural states such as Wyoming have many mules and few universities, whereas urban states such as New York have few mules and many universities.
A team of researchers found a correlation between the number of babies born in Berlin and the number of storks nearby.
Over a 30-year period, more births went hand in hand with more storks. Of course, this correlation doesn't show that storks deliver babies. A third variable, population size, explains it: highly populated areas are characterized by large numbers of births and also tend to offer storks more places to nest.
The news media often fall prey to the correlation versus causation fallacy and should not be relied on to help distinguish correlation from cau sation.
The study described in the article is correlational.
There's a correlation between the amount of ice cream consumed and the number of violent crimes committed on that day, but that doesn't mean that eating ice cream causes crime, or that committing a crime leads to eating more ice cream.
A third variable, hot weather, might explain the correlation: on hotter days, people eat more ice cream and also go outside more, creating more opportunities for crime.
Some correlations are almost certainly coincidental. Across years, larger numbers of shark attacks have been associated with larger numbers of tornadoes, and equally irrelevant statistics, such as the number of people killed by boilers, have tracked the output of film stars. These correlations are not likely to reflect causality, which is good news for Bruce Willis fans out there, and good news for shark fans, too.
Or consider news reports of a study linking heavy Facebook use to differences in the brain. Because the study was correlational, the findings don't show that "Facebook addiction" affects the brain; it's just as possible that certain brain characteristics, perhaps those associated with impulsivity, lead some people to become addicted to Facebook in the first place. Whenever media reports draw causal conclusions from a study based only on correlational data, they're almost always taking the conclusions too far.
What makes experiments unique is that they permit cause-and-effect inferences. An experiment consists of two ingredients: random assignment of participants to conditions and the manipulation of an independent variable. (Note that not every variable can be manipulated; researchers can't make a participant younger or older, for example, so age differences can only be studied correlationally.) Let's look at these two ingredients one by one.
In random assignment, the experimenter randomly sorts participants into groups. This procedure tends to cancel out preexisting differences between the two groups, such as differences in gender, race, or personality traits.
Scientific thinking isn't innate to the human species, so it's perhaps not surprising that the concept of the control group, the group of participants that doesn't receive the treatment, didn't emerge in psychology until the twentieth century. Before then, many psychologists thought they could figure out whether a treatment worked without using control groups at all.
Imagine we want to find out whether a new drug, Miraculin, is effective in treating depression. To do so, we'd need to compare people who receive it with people who don't.
We would start with a large sample of people with depression.
We'd assign half of the participants to an experimental group, which gets Miraculin, and the other half to a control group, which doesn't.
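Random assignment can be sketched directly (the participant labels and group size here are invented for the example):

```python
import random

# Hypothetical pool of 40 participants with depression.
participants = [f"participant_{i}" for i in range(40)]

# Random assignment: shuffle the pool, then split it in half.
random.shuffle(participants)
experimental_group = participants[:20]  # receives Miraculin
control_group      = participants[20:]  # receives no drug

# Chance, not the experimenter or the participants, decides who lands
# in which group, so preexisting differences tend to cancel out.
```

On average, differences in gender, race, and personality end up spread evenly across the two groups, which is exactly what lets us attribute any later difference in depression to the drug.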
The second ingredient of an experiment is the manipulation of an independent variable, the variable the experimenter manipulates. The dependent variable is so named because it depends on the level of the independent variable. In the Miraculin study, the presence or absence of the drug is the independent variable, and the level of participants' depression is the dependent variable.
To know whether a manipulation has produced an effect, researchers must also settle on an operational definition: a working definition of what they're manipulating and measuring. Imagine that two researchers used different doses of Miraculin and different scales to measure depression, one operationally defining depression as an extremely sad mood lasting two weeks or more and the other defining it as a moderately or extremely sad mood lasting five days or more. The investigators might well reach different conclusions about Miraculin's effectiveness because of their differing definitions and measures.
An operational definition isn't like the "dictionary" definition of a word, for which just about all dictionaries agree on the "right" meaning.
Different researchers can use different operational definitions.
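The point can be illustrated with two hypothetical operational definitions of "depressed" applied to the same invented records, each pairing a reported mood with how many days it has lasted; the stricter definition flags fewer people than the lenient one.

```python
# Invented data: (mood, days the mood has lasted) for four people.
records = [("extremely sad", 20), ("moderately sad", 7),
           ("extremely sad", 4), ("moderately sad", 30)]

def depressed_strict(mood, days):
    # Operational definition 1: extremely sad mood for two weeks or more.
    return mood == "extremely sad" and days >= 14

def depressed_lenient(mood, days):
    # Operational definition 2: moderately or extremely sad mood
    # for five days or more.
    return mood in ("moderately sad", "extremely sad") and days >= 5

strict_count  = sum(depressed_strict(m, d) for m, d in records)   # 1 person
lenient_count = sum(depressed_lenient(m, d) for m, d in records)  # 3 people
```

Same data, different operational definitions, different "rates" of depression; two researchers using these definitions could easily reach different conclusions about the same treatment.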
Researchers must also guard against confounding variables. Imagine that, in our study, the experimental group received not only Miraculin but psychotherapy as well, while the control group received neither; the extra treatment would be a confounding variable, a variable other than the independent variable that differs between the experimental and control groups. With such a confound, there's no way to know whether the independent variable exerted an effect on the dependent variable: any differences between the groups could have been caused by Miraculin, by the psychotherapy, or by both.
Random assignment to conditions and manipulation of an independent variable allow us to infer cause-and-effect relations if we've done the study right.
Here's a tip that will work 100 percent of the time: we can infer cause-and-effect relations from a study only if it's an experiment, that is, only if it includes both random assignment and the manipulation of an independent variable.
When reporting studies about physical or psychological health, the news rarely tells us if the data came from an experiment or a correlational design.
As a result, they rarely tell us whether the data permit cause-and-effect conclusions. Only when the study is an experiment can we draw reasonably confident causal inferences from it.
Let's make sure the major points of the designs are clear.
Answer the four questions after reading the description of the study.
Like correlational designs, experimental designs can be tricky to evaluate, because they contain many potential pitfalls. We'll next explain how psychological scientists have learned to control for these traps.
Imagine we've developed a new "wonder drug" that is supposed to treat attention-deficit/hyperactivity disorder in children.
Half of our participants with this condition receive the drug, and the other half don't. Suppose we find that children who received the drug end up less inattentive and less hyperactive than children who didn't. Can we conclude that the drug works?
Try to answer the question on your own.
There is a reason we can't celebrate just yet.
Consider a researcher who theorizes that acupuncture, the ancient Chinese practice of inserting needles into the body, relieves stress. She recruits stressed-out psychology students and arranges for half of her participants to receive acupuncture while the other half don't; the presence or absence of acupuncture is the independent variable. Two months later, she finds that people who received acupuncture are less stressed out than people who didn't. But if she allowed participants to decide for themselves whether to receive the treatment rather than randomly assigning them to groups, there's a good chance the people who received acupuncture differed from those who didn't even before the study began. In that case, we can't know why the acupuncture group was less anxious.
Moreover, participants who receive a treatment may improve just because they know they're receiving treatment. This is the placebo effect: improvement resulting from the mere expectation of improvement.
The placebo effect reminds us that expectations can create reality.
Patients in both the experimental and control groups don't know whether they're taking the actual medication or a placebo, so they're roughly equivalent in their expectations of improvement.
The placebo effect might well have been operating in our Miraculin study, because participants in the control group didn't receive a placebo; those in the experimental group might have improved more simply because they knew they were receiving treatment. To control for the placebo effect, patients must not know whether they're getting the real medication or a placebo, that is, they must remain blind to their condition. If patients learn which condition they're in, the blind is broken and the experiment is compromised.
That's because such knowledge generates differences between the groups beyond the treatment itself, creating a confound. If the blind is broken, two things can happen. Patients in the experimental group might improve more than those in the control group simply because they know their treatment is real. Alternatively, patients in the control group, resentful that they're receiving a placebo, might try extra hard to beat out the experimental group, in which case the control group may even outperform the experimental group.
Writers sometimes describe the placebo effect as being entirely "in people's heads," but that description sells the effect short.
The effects of placebos can be just as powerful as those of real drugs.
Placebos share many characteristics with genuine drugs. For example, people assume that injected medication enters the bloodstream more quickly than swallowed medication, and, sure enough, injected placebos tend to show more rapid and powerful effects than placebo pills.
Some patients become addicted to placebo pills.
More expensive placebos also work better than placebos we believe to be cheaper, because we assume that if something costs more, it must be more effective.
Placebo effects can trick us into concluding that an intervention works even when it doesn't.
Scores of companies market fast-paced video games that claim to boost memory, attention, and other thinking-related skills.
The positive results advertised by these companies are probably due to placebo effects, because participants expect to improve in their memory and attention after playing these games.
Some researchers maintain that up to 80 percent of the effectiveness of antidepressants, such as Prozac or Zoloft, is due to placebo effects.
There is growing evidence that placebos are equivalent to antidepressants in cases of mild or moderate depression, but not in severe depression.
Placebo effects are not equally powerful for all conditions.
They seem to exert their strongest effects on subjective reports of depression and pain, but their effects on objective measures of physical illnesses, such as cancer and heart disease, are weaker.
The effects of placebos may be more short-lived than those of actual medications.
Our expectations can sometimes affect our health.
There have been cases where a patient's health improves based on the expectation of a treatment or cure.
To answer this question, we have to travel back in time to late-18th-century Paris, where the physician Franz Anton Mesmer was plying his trade.
Mesmer's technique, mesmerism, a forerunner of hypnotism, purportedly cured people of all manner of physical and psychological ailments.
According to Mesmer, emotional illnesses were triggered when an invisible magnetic fluid in people's bodies became unbalanced.
The flamboyant Mesmer, dressed in a flowing cape, needed only to touch his patients with a magnetic wand to cause them to shriek, laugh, or fall into a trance.
Mesmer became so much in demand that he took to "magnetizing" trees, which he claimed would produce the same cures in less time.
The French government was skeptical of Mesmer's extraordinary claims and appointed a commission, headed by Benjamin Franklin, then the U.S. Ambassador to France, to investigate them.
Franklin set up a series of tests to find out if Mesmer's techniques were as effective as they seemed.
Although Mesmer believed he was curing people with his magnetic powers, the members of Franklin's commission demonstrated otherwise: they told people that an ordinary, nonmagnetized tree had been magnetized, and those who believed it fell into trances anyway.
The "cures," in other words, stemmed from expectations, not magnetism.
Modern-day researchers have found ingenious ways to separate placebo effects from genuine treatment effects.
Consider research on the effects of surgery on patients with Parkinson's disease, a condition marked by severe movement problems and a decline in quality of life.
Parkinson's disease is caused by the deterioration of brain areas rich in the chemical messenger dopamine, which plays a key role in our reward system.
In several controlled studies, surgeons injected fetal cells into patients' brains in the hope that the cells would take hold and begin producing dopamine.
To control for the placebo effect, the researchers included a condition in which patients received surgery but no injection of fetal cells, and all patients were blind to which condition they were in.
As expected, patients who received the fetal implants improved in their movement and quality of life; remarkably, so did many patients in the placebo control condition.
The findings suggest that at least some placebos work in part by jacking up the activity of dopamine and other chemical messengers in the brain.
Franklin might not have been surprised: he understood that expectations of hope can themselves be therapeutic.
The nocebo effect is an "evil twin" of the placebo effect.
The nocebo effect may help explain why curses in the traditional African and Caribbean practice of voodoo sometimes appear to work.
People who were allergic to roses sneezed when presented with fake roses.
A group of college students was led to believe that an electric current passed into their heads could cause headaches.
Even though the current was imaginary, more than two-thirds of the students reported headaches.
One patient experienced serious physical symptoms, including dangerously low blood pressure, after overdosing on pills he believed were antidepressants but that were actually placebos.
People who believe in the power of voodoo may likewise experience pain when an enemy inserts a pin into a doll intended to symbolize them.
This phenomenon, too, illustrates the nocebo effect.
Keeping participants blind to their condition assignment and including a control condition that provides a placebo treatment are both important.
There is one more potential concern with experimental designs.
In some cases, the experimenter knows the condition assignments, and this opens the door to the experimenter expectancy effect: researchers' hypotheses lead them to unintentionally bias a study's outcome in subtle ways.
These researchers may fall prey to confirmation bias, finding evidence for their hypotheses even when those hypotheses are wrong.
In a double-blind design, neither researchers nor participants know which participants are in which group; by voluntarily shielding themselves from this knowledge, researchers guard against confirmation bias.
Double-blind designs show how good scientists take special precautions to avoid fooling themselves and others, which is what science is all about.
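The logic of a double-blind design can be sketched in a few lines of code. This is a hypothetical illustration, not a procedure from the text: participants are assigned to conditions at random, but both participants and experimenters see only opaque condition codes, while the key linking codes to conditions is withheld until data collection ends.

```python
import random

def assign_double_blind(participant_ids, seed=42):
    """Randomly assign participants to treatment or placebo.

    Returns (blinded, key): `blinded` maps each participant to an
    opaque code ("A" or "B"); `key` maps codes to conditions and is
    held by a third party until the study ends, so neither
    participants nor experimenters know who got what.
    """
    rng = random.Random(seed)
    # Randomly decide which opaque code stands for which condition.
    codes = rng.sample(["A", "B"], 2)
    key = dict(zip(codes, ["treatment", "placebo"]))
    # Assign each participant one of the two codes at random.
    blinded = {pid: rng.choice(["A", "B"]) for pid in participant_ids}
    return blinded, key

blinded, key = assign_double_blind(range(8))
# Experimenters record outcomes against codes "A"/"B" only; the key
# is consulted to unblind the data after collection is complete.
```

In practice, clinical trials add refinements such as balanced group sizes and identical-looking pills, but the core idea is the same: no one who interacts with participants can tell the conditions apart.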
The tale of a German teacher and his horse is one of the oldest examples of the experimenter expectancy effect.
In 1900, Wilhelm von Osten purchased Clever Hans, a handsome Arabian stallion.
Clever Hans responded to von Osten's questions correctly by tapping with his hooves.
Hans could seemingly calculate square roots, add and subtract fractions, and tell the time of day.
He was able to give accurate answers to questions like the number of men in front of him who were wearing black hats.
von Osten was so proud of Clever Hans that he began showing him off in public.
You might be wondering if Clever Hans's feats were a result of tricks.
A panel of 13 psychologists who investigated Clever Hans saw no evidence of fraud on von Osten's part and concluded that Hans's abilities were comparable to those of a 14-year-old human.
Hans seemed to be a true-blue math whiz: he answered correctly even when someone other than von Osten posed the questions, ruling out simple trickery by his owner.
Oscar Pfungst was skeptical of how clever Clever Hans really was, and in 1904 he launched a series of careful observations.
He focused not on the horse, but on the people asking him questions, which was something that previous psychologists didn't think to do.
Pfungst found that Hans's questioners almost always tensed their muscles just before Hans reached the correct answer.
Clever Hans did not do better than any ordinary horse when Pfungst prevented him from seeing the questioner.
Clever Hans was able to detect subtle physical signals emitted by questioners.
Even without knowing it, people can give off subtle signals that affect an animal's behavior.
This story reminds us that an extraordinary claim, in this case that a horse can perform math, requires extraordinary evidence.
The experimenter expectancy effect is referred to as the Rosenthal effect.
In the 1960s psychologist Robert Rosenthal conducted a series of experiments that convinced the psychological community that experimenter expectancy effects were genuine.
In their best-known study, Rosenthal and Fode asked psychology students to run rats through mazes and record their completion times.
Some students were told that their five rats were "maze bright," bred over many generations to run mazes quickly; others were told that their five rats were "maze dull," bred over many generations to run mazes slowly.
In fact, the labels were a ruse: Rosenthal and Fode had randomly assigned rats to students, so on average the two groups of rats were identical.
Nevertheless, students who believed they had "maze bright" rats reported maze-running times 29 percent faster than students who believed they had "maze dull" rats.
Somehow, the students' expectations had influenced their rats' running times, or at least their measurements of those times.
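As a toy illustration of how expectations alone can produce a group difference, consider the following sketch. The numbers here are invented for illustration, not Rosenthal and Fode's data: both groups of rats have identical true maze times, but the students' recordings are nudged in the direction they expect.

```python
import random

def simulate_expectancy(n_rats=5, bias_seconds=3.0, seed=1):
    """Illustrative sketch of an experimenter expectancy effect.

    Both groups' true times are drawn from the same distribution;
    only the students' expectations differ, biasing what they record.
    """
    rng = random.Random(seed)
    true_times = [rng.gauss(30.0, 2.0) for _ in range(n_rats * 2)]
    bright, dull = true_times[:n_rats], true_times[n_rats:]
    # Students expecting "maze bright" rats shave time off their
    # recordings; students expecting "maze dull" rats pad them.
    recorded_bright = [t - bias_seconds for t in bright]
    recorded_dull = [t + bias_seconds for t in dull]
    return (sum(recorded_bright) / n_rats,
            sum(recorded_dull) / n_rats)

bright_mean, dull_mean = simulate_expectancy()
# The recorded gap between groups is driven by the measurement bias,
# since both groups' true times come from the same distribution.
```

The point of the sketch is that a sizable "effect" can appear in the data even when the rats themselves are indistinguishable, which is exactly why blinding experimenters matters.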
Clever Hans's performances hold a lesson for experimenters: whenever possible, those who interact with participants should be blind to the condition assignments.
Shielding experimenters from this knowledge ensures that they can't unintentionally influence participants and thereby the results of the study.
Of course, participants may still try to guess which condition they're in; in some cases their guesses will be correct, in others not.
A true double-blind study of psychotherapy is probably impossible, because one can't prevent people from knowing whether they're receiving therapy.
The effects of psychotherapy are more difficult to study than the effects of medication.