Chapter 11 | Research, Evaluation, and Assessment

Conducting Research

  • Research

    • The systematic and objective analysis and recording of controlled observations that may lead to the development of generalizations, principles, or theories, resulting in prediction and possibly ultimate control of events.

  • You must have a hypothesis before conducting research.

The Hypothesis, the Research Question, and Literature Review

  • Review of the literature

    • A review of the literature examines all the major research conducted in the area you are exploring.

  • A review of the literature is done using professional publications, usually journal articles and books.

  • Variable

    • A characteristic, attribute, or trait that can be measured (e.g., height, intelligence, self-esteem, job satisfaction)

Defining Quantitative and Qualitative Research Designs

  • Research designs can broadly be categorized as quantitative or qualitative.

  • Quantitative research assumes that there is an objective reality within which research questions can be formulated and scientific methods used to measure the probability that certain behaviors, values, or beliefs either cause or are related to other behaviors, values, or beliefs.

  • Qualitative research, on the other hand, holds that there are multiple ways of viewing knowledge and that one can make sense of the world by immersing oneself in the research situation in an attempt to provide possible explanations for the problem being examined.

  • Researchers often combine quantitative and qualitative methods in what is called a mixed-methods approach.

Quantitative versus Qualitative Research

  • View of reality and knowledge

    • Quantitative: “Truth” is sought through research. Knowledge is used to develop hypotheses.

    • Qualitative: Reality is socially constructed, and there are multiple realities. Knowledge emerges through research.

  • Methods

    • Quantitative: Mathematical, statistical, and logical. Hypothesis testing and the attempt to find answers to research questions. A deductive process.

    • Qualitative: Philosophical and sociological. Multiple methods to understand the research question. Immersion in tasks with the goal of having knowledge emerge. An inductive process.

  • Treatment of bias

    • Quantitative: Bias is problematic. Control of the study is increased to raise validity and reduce bias.

    • Qualitative: Bias is acknowledged and reduced through the use of multiple methods of attaining data and examining results.

  • Purpose

    • Quantitative: To discover evidence and “truth” and generalize to a larger audience.

    • Qualitative: To uncover information and describe findings so as to enlighten.

  • Role of the researcher

    • Quantitative: A detached, objective scientist.

    • Qualitative: Immersed in a social situation; describes and interprets findings.

Quantitative Research

  • Four of the more popular types of quantitative research are true experimental research, causal-comparative (ex post facto) research, correlational research, and survey research.

  • True Experimental Research

    • In true experimental research, you manipulate variables to see how they affect the outcome you are examining.

    • This type of research uses hypotheses and allows you to look at the causes of behavior.

    • Random assignment of subjects to treatment groups is almost always used.

  • Causal-Comparative (Ex Post Facto) Research

    • Causal-comparative (ex post facto) research allows the investigator to examine the variables of intact groups after the fact, without manipulating them.

  • Correlational Research

    • Two popular kinds of correlational research are simple correlational studies and predictive correlational studies.

    • Correlational research uses correlation coefficients to show the strength of the relationship between two or more sets of scores (see the sketch at the end of this list).

  • Survey Research

    • In survey research, a questionnaire is designed to gather specific information from a target population.
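
To make the idea of a correlation coefficient concrete, here is a minimal sketch in Python using scipy.stats. The study-hours and exam-score lists are invented for illustration and are not drawn from the text.

```python
# Minimal sketch of a simple correlational analysis.
# The data below are invented for illustration only.
from scipy.stats import pearsonr

study_hours = [2, 4, 5, 7, 8, 10, 11, 13]       # hypothetical hours studied
exam_scores = [55, 60, 62, 70, 74, 80, 85, 91]  # hypothetical exam scores

r, p_value = pearsonr(study_hours, exam_scores)
print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")
# r near +1 or -1 indicates a strong relationship; r near 0 indicates a weak one.
```

In a predictive correlational study, a coefficient like this would be used to predict one set of scores from the other.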

Qualitative Research

  • Three of the most popular types of qualitative research are grounded theory, ethnographic research, and historical research.

  • Grounded Theory

    • Grounded theory is a process in which a broad research question is examined in a multitude of ways that eventually leads to the emergence of a theory.

  • Ethnographic Research

    • Sometimes called cultural anthropology, ethnographic research was made popular by Margaret Mead, who studied aboriginal youth in Samoa by immersing herself in their culture as she attempted to understand their lifestyle.

    • Ethnographic research assumes that phenomena or events can best be understood within their cultural context.

  • Historical Research

    • Historical research relies on the systematic collection of information in an effort to examine and understand past events from a contextual framework.

    • When doing historical research, the researcher generally has a viewpoint and needs to go to the literature to support this viewpoint.

  • Coding

    • Identifying common themes among the conversations being analyzed and assigning each theme a code.

    • This process helps organize data.
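
As a toy illustration of how coding organizes data, the sketch below tags invented interview snippets with thematic codes and tallies them. The codebook and snippets are made up, and real qualitative coding is an interpretive process rather than a keyword match.

```python
# Toy illustration of qualitative coding: tagging text segments with
# thematic codes and tallying how often each theme appears.
# The codebook and interview snippets are invented for illustration.
from collections import Counter

codebook = {
    "job": "WORK_STRESS",
    "boss": "WORK_STRESS",
    "mother": "FAMILY_SUPPORT",
    "alone": "ISOLATION",
}

segments = [
    "My job has been overwhelming lately",
    "My mother checks in on me every day",
    "Some nights I just feel alone",
    "My boss keeps adding to my workload",
]

counts = Counter()
for segment in segments:
    for keyword, code in codebook.items():
        if keyword in segment.lower():
            counts[code] += 1

for code, n in counts.most_common():
    print(code, n)  # WORK_STRESS 2, FAMILY_SUPPORT 1, ISOLATION 1
```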

Examining Results

  • Many statistical analyses can be used in examining data gathered through research.

  • In quantitative research, when you are examining differences between groups or relationships among variables, you can perform a number of analyses, including t-tests, analysis of variance (ANOVA), and correlations (see the sketch at the end of this list).

  • Qualitative data collection, particularly grounded theory and ethnographic research, relies on a process called inductive analysis, which means that patterns and categories emerge from data.

  • Ethnographic and grounded theory researchers classify their data by a process called coding, which breaks down large pieces of data into smaller parts that seem to hold some meaning relative to the research question.

  • In all types of qualitative research, the researcher must undertake a rigorous process of reviewing the data, synthesizing results, and drawing conclusions and generalizations.

  • The ability to provide reliable and valid results in qualitative research is based on whether the researcher is able to use multiple methods of gathering information and how accurately the researcher is able to record information.
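
For the quantitative analyses named above, the sketch below runs a t-test and a one-way ANOVA with scipy.stats. The group scores are invented for illustration only.

```python
# Minimal sketch of two common quantitative analyses.
# The group scores are invented for illustration only.
from scipy.stats import ttest_ind, f_oneway

group_a = [72, 75, 78, 80, 83]  # e.g., clients receiving intervention A
group_b = [65, 68, 70, 71, 74]  # e.g., clients receiving intervention B
group_c = [60, 62, 66, 67, 69]  # e.g., a comparison group

# t-test: is there a difference between two group means?
t_stat, p = ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p:.4f}")

# One-way ANOVA: are there differences among three or more group means?
f_stat, p = f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p:.4f}")
# A small p-value suggests the observed differences are unlikely
# to be due to chance alone.
```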

Discussing the Results

  • Conclusions in all types of research are based on the data and information collected.

Using Research in Human Service Work

  • Knowledge of basic research techniques is valuable for human service professionals for a number of reasons, including but not limited to:

    • The knowledge enables human service professionals to understand professional journal articles and to draw conclusions concerning what might be the most effective interventions for their clients.

    • Research may sometimes validate what helpers are doing but at other times suggest new ways of approaching client change.

    • Research may suggest new avenues to explore and is often the basis for future research.

    • The use of basic research techniques can be valuable for program evaluation.

Evaluation and Needs Assessment

  • Evaluation and needs assessment techniques have to do with assessing and addressing gaps in existing systems to improve their worth and value.

  • Evaluation

    • Informs us about how well we have done something (e.g., a workshop, conference, class).

  • Needs assessment

    • Gives us information about what should be done.

Evaluation

  • Evaluations are conducted to determine if a program we have offered has been effective and what can be done to improve it (e.g., a workshop, conference, class).

  • Two types of evaluation are formative evaluation and summative evaluation.

  • Formative Evaluation (Process Evaluation)

    • Formative evaluation involves the assessment of a program during its implementation to gain feedback about how effective it has been and to allow for changes in the program as needed.

  • Summative Evaluation (Outcome Evaluation)

    • Summative evaluation is used to show the efficacy of a program that has been completed to determine if it should be used in the future (e.g., a parenting workshop).

Needs Assessment

  • A needs assessment is a process for determining and addressing needs or “gaps” between current conditions and desired conditions.

  • Such an assessment is often used to improve an existing structure, such as an organization or some aspects of a community.

  • Needs assessments can be conducted through mail surveys, online, or in any fashion that allows you to access your target population.

  • Descriptive statistics are generally used when pulling together the results.

The Human Service Professional and Evaluation

  • Formative and summative evaluation measures should be undertaken to improve a program while it is underway or for the next time it is offered.

  • Program evaluation also has an important place in examining the effectiveness of job-related behaviors at an agency.

  • A responsible agency is willing to look at the effectiveness of its employees and programs; such evaluation can significantly assist in understanding what works and what needs to be changed.

  • In times of fiscal conservatism and increased accountability of programs, evaluation of agencies has become extremely important.

  • The application of evaluation techniques is an essential step in the accountability process. It assures the public and funding agencies that you are performing essential and effective services for your clients.

Defining Assessment

  • The term assessment includes a broad array of evaluative procedures that yield information about a person.

  • Assessment consists of a wide variety of procedures that can be broadly grouped into four areas: ability testing, personality testing, informal assessment, and clinical interview.

Types of Assessment Techniques

  • Assessment of Ability

    • The overarching purpose of ability tests is to measure aspects of the cognitive domain.

    • Ability tests can be broadly categorized into achievement testing and aptitude testing.

    • Achievement tests include survey battery tests, diagnostic tests, and readiness tests.

  • Aptitude Tests

    • The four major kinds of aptitude tests are tests of intellectual and cognitive functioning (individual intelligence tests and neuropsychological assessment), cognitive ability tests, special aptitude tests, and multiple aptitude tests.


  • Personality Assessment

    • The assessment of personality includes measuring one’s temperament, habits, likes, disposition, and nature.

    • The most common kinds of personality assessments include objective tests, projective techniques, and interest inventories.

  • Objective Personality Tests

    • Often given in a multiple-choice or true/false format, objective personality tests measure some aspect of personality.

  • Projective Techniques

    • Projective techniques assess personality characteristics by having individuals respond to unstructured stimuli.

  • Interest Inventories

    • Used to determine the likes and dislikes of a person as well as an individual’s personality orientation toward the world of work, interest inventories are almost exclusively administered as part of the career counseling process.


  • Informal Assessment Procedures

    • Informal assessment instruments are generally developed by the individual who will administer the procedure (e.g., helper, teacher).

    • Although they are often less valid than other kinds of instruments, they offer us an important and relatively easy method of examining a slice of the behavior of an individual.

    • Some of the more common informal procedures include rating scales, observation, classification systems, records and personal documents, environmental assessment, and performance-based assessment.

  • Rating Scales

    • Rating scales allow an individual to give a subjective rating of a behavior on a scale in order to quantify an attitude or characteristic.

  • Observation

    • Observation can be conducted by a professional who wishes to observe an individual, by significant others who have the opportunity to observe the individual in natural settings, or even by clients who are asked to observe specific behaviors they are working on changing; it offers an easy and important tool for understanding the individual.

  • Classification Systems

    • In contrast to rating scales, which tend to assess a quantity of specific attributes or characteristics, classification systems provide information about whether an individual has or does not have specific attributes or characteristics.

  • Environmental Assessment

    • Environmental assessment includes collecting information about a client’s home, school, or workplace, usually through observation or self-reports (e.g., checklists).

    • This form of appraisal is more systems-oriented and naturalistic.

  • Records and Personal Documents

    • A number of common forms of records and personal documents are used to learn about the client, including asking the client to write an autobiography, collecting anecdotal information, completing a biographical inventory, examining cumulative records, completing a genogram, or having the client keep a journal or diary.

  • Performance-Based Assessment

    • This kind of assessment evaluates an individual using a variety of informal assessment procedures that are based on real-world responsibilities that are not highly loaded for cognitive skills.

    • These tests are used when large numbers of nondominant-group individuals have been shown to do less well on a standardized test but just as well on the actual performance the test is meant to assess.


  • Clinical Interview

    • The clinical interview, another assessment technique, allows the helper to obtain an in-depth understanding of the client through an unstructured or structured interview process.

  • The structured interview allows the examinee to respond verbally or in writing to a set of pre-established items.

  • The unstructured interview does not have a pre-established list of items or questions; instead, client responses to helper inquiries establish the direction for follow-up questioning.


Norm-Referenced, Criterion-Referenced, Standardized, and Nonstandardized Assessment

  • Assessment techniques are either norm-referenced or criterion-referenced.

  • Norm-referenced assessment

    • The individual can compare his or her score to the conglomerate scores of a peer or norm group, which often consists of a nationally representative sample of individuals.

  • Criterion-referenced assessment

    • Techniques are designed to assess the specific learning goals of an individual.

  • Standardized assessment

    • Procedures are administered in the same manner and under the same conditions each time they are given.

  • Nonstandardized assessment

    • Procedures are not necessarily given under the same conditions and in the same manner at each administration.

Basic Test Statistics

  • Relativity of Scores

    • Standardized test scores are relative, and an individual’s raw score makes sense only in relation to the scores of his or her comparison group.

  • Measures of Central Tendency and Measures of Variability

    • Measures of central tendency include the mean, the average of all the scores; the median, the middle score representing the point where 50% of the examinees score above and 50% fall below; and the mode, the most frequent score.

    • Two important measures of variability are the range, which represents the spread of scores from the highest to the lowest score, and the standard deviation, which represents how much, on average, scores vary from the mean.
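
These definitions map directly onto a few lines of code. The sketch below computes each measure for an invented set of scores and then converts one raw score to a standard (z) score, a standard technique not detailed in the outline that expresses the relativity-of-scores idea: a raw score located relative to the group mean in standard-deviation units.

```python
# Minimal sketch: central tendency, variability, and a standard (z) score.
# The scores are invented for illustration only.
import statistics

scores = [70, 75, 80, 80, 85, 90, 95]

mean = statistics.mean(scores)            # average of all the scores
median = statistics.median(scores)        # middle score: 50% above, 50% below
mode = statistics.mode(scores)            # most frequent score
score_range = max(scores) - min(scores)   # spread from highest to lowest
sd = statistics.stdev(scores)             # average variation of scores from the mean

# A raw score makes sense only relative to the group: a z-score locates
# it in standard-deviation units above or below the mean.
raw = 90
z = (raw - mean) / sd
print(mean, median, mode, score_range, round(sd, 2), round(z, 2))
```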

  • Test Worthiness

    • Four qualities that are particularly important in establishing the worthiness of an assessment instrument are the following:

      • Validity—whether a test measures what it is supposed to measure.

      • Reliability—how accurately or precisely a test measures a trait or ability.

      • Practicality—the ease of administration and interpretation of the test.

      • Cross-cultural fairness—whether the test measures what it is supposed to measure for all subgroups to which the test is given.

  • Validity

    • Test validity involves a systematic method of showing that a test measures what it purports to measure.

    • A test’s validity is a function of how it is created.

  • Reliability

    • The second quality that is examined in determining the adequacy of an assessment instrument is reliability.

    • Whereas validity is used to show that a test measures what it is supposed to measure, reliability examines the accuracy of the test scores.
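
One common way to estimate reliability, test-retest (the outline does not name a specific method), is simply the correlation between scores from two administrations of the same test. A minimal sketch with invented scores:

```python
# Hedged sketch: test-retest reliability estimated as the correlation
# between two administrations of the same test (invented scores).
from scipy.stats import pearsonr

time_1 = [12, 15, 18, 20, 22, 25, 30]  # first administration
time_2 = [13, 14, 19, 21, 21, 26, 29]  # second administration, same examinees

reliability, _ = pearsonr(time_1, time_2)
print(f"Test-retest reliability = {reliability:.2f}")
# Values near 1.0 indicate scores are consistent across administrations.
```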

  • Practicality

    • Even an effective test may be impractical if the cost of administering and interpreting it is prohibitive.

  • Cross-Cultural Fairness of Tests

    • Test selection and interpretation are done with an awareness of the degree to which items may be culturally biased or the norming sample not reflective or inclusive of the client’s or student’s diversity.

    • Although it is impossible to eliminate all bias from tests, one should expect that the bias is small enough to allow for justifiable interpretations of any individual’s score.

Ethical, Professional, and Legal Issues

Informed Consent

  • Informed consent involves the client’s right to know the purpose and nature of all aspects of his or her involvement with the helper.

  • In reference to research and assessment procedures, clients have the right to know the general purposes of the research in which they are participating as well as how any assessment techniques they are subjected to will be used.

    • Except in special cases (e.g., court referrals for testing)

  • Clients have the right to refuse to take part in any assessment or research.

  • NOHS, 2015b (Standard 2)

    • Human service professionals obtain informed consent to provide services to clients at the beginning of the helping relationship. Clients should be informed that they may withdraw consent at any time except where denied by court order and should be able to ask questions before agreeing to the services.

    • Clients who are unable to give consent should have those who are legally able to give consent for them review an informed consent statement and provide appropriate consent.

Use of Human Subjects

  • Stanley Milgram’s research on obedience dramatically affected the way research is conducted in the United States.

  • As a result of Milgram’s study, as well as other research with the potential to cause psychological or even physical harm to subjects, many restraints have been placed on the types of research in which people can participate.

  • Research that might cause physical or psychological harm is now shaped by ethical standards and by legislation.

  • Federal legislation requires that all organizations that conduct research supported by federal funds have a human subjects committee or institutional review board (IRB), whose purpose is to ensure that there is little or no risk to research participants.

Proper Interpretation and Use of Test Data

  • In the human service field, the main purpose of research, program evaluation, and assessment is to benefit our clients.

  • Research can help us understand those interventions that are most effective with our clients; program evaluation can help us understand whether the programs we are offering benefit our clients, and assessment techniques can help clients better understand themselves.

  • Research is an ever-evolving process that continually adds new knowledge to the field.

  • Assessment procedures are always improving, giving us better insights into the clients with whom we work.

  • Effective human service professionals do not view research, evaluation, and assessment as fearful or stagnant processes; instead, they understand that new research ideas, new assessment procedures, and new programs will be devised, and they are excited about such developments and able to apply this newly gained knowledge in their own practices.
