Research Methods in Psychology: university notes covering the empirical bases of psychology, experimental and quasi-experimental designs, and observational methods.
Science is a system of thought: a rational explanation of how things work in the world and a process of getting closer to truths and further from myths, fables and unquestioned or 'intuitive' ideas about people. It is also a body of knowledge, particularly that which has resulted from the systematic application of the scientific method.
Characteristics of science
Pseudoscience: Claims presented so that they appear scientific even though they lack supporting evidence and plausibility. · Characteristics:
Critical thinking: refers to a more careful style of forming and evaluating knowledge than simply using intuition. In addition to the scientific method, critical thinking will help us develop more effective and accurate ways to figure out what makes people do, think, and feel the things they do.
Scientific Attitude
■ Put aside your own assumptions and biases, and look at the evidence.
■ Consider if there are other possible explanations for the facts or results.
■ See if there was a flaw in how the information was collected.
Science is a way of thinking that leads us towards testable explanations of what we observe in the world around us. Scientific research does not prove theories true; it exposes different explanations. The scientific method is the process of testing our ideas about the world through the EMPIRICAL METHOD (through experience) and the HYPOTHETICO-DEDUCTIVE METHOD.
· Empirical method: gathering of data, through experience, with no preconceptions; induction of patterns and relationships within the data.
· Hypothetico-deductive method: setting up situations that test our ideas; making careful, organized observations; analyzing whether the data fit with our ideas. If the data don't fit our ideas, then we modify our ideas and test again.
Law: a general principle that applies to all situations. There are few universally accepted laws within the behavioral sciences.
Theory: in science, a set of principles, built on observations and other verifiable facts, that explains some phenomenon and predicts its future behavior.
Hypothesis: a testable prediction consistent with our theory. "Testable" means that the hypothesis is stated in a way that we could make observations to find out if it is true.
Variables: the things that alter and whose changes we can measure.
Samples: the people we are going to study or work with (the PARTICIPANTS). They are taken from the POPULATION.
Design: the overall structure and strategy of the research study. It is shaped by the available resources, the nature of the research aim, previous research, and the researcher's attitude.
Analysis: design and measurement will have a direct effect on the analysis.
Quantitative: numerical data. Qualitative: non-numerical data.
Levels of evidence are based on the methodological quality of the study, its validity, and its applicability.
· Level I: evidence from a systematic review or meta-analysis of all relevant randomized controlled trials (RCTs), or evidence-based clinical practice guidelines.
· Level II: evidence obtained from at least one well-designed RCT.
· Level III: evidence obtained from well-designed controlled trials without randomization.
· Level IV: evidence from well-designed case-control or cohort studies.
· Level V: evidence from systematic reviews of descriptive and qualitative studies.
· Level VI: evidence from a single descriptive or qualitative study.
· Level VII: evidence from the opinion of authorities and/or reports of expert committees.
Types of resources (evidence pyramid, ordered by quality of evidence):
· Filtered information: systematic reviews; critically-appraised topics (evidence syntheses); critically-appraised individual articles (article synopses).
· Unfiltered information: randomized controlled trials (RCTs); cohort studies; case-control studies; case series / reports; background information / expert opinion.
The TRIP Database searches these simultaneously.
Variables: a variable is anything that varies. Variables are observable or hypothetical events that can change and whose changes can be measured in some way.
· Independent variable (IV): the variable which is manipulated by the experimenter. It is defined in terms of LEVELS. We know the values of the IV before we start the experiment.
· Dependent variable (DV): the variable in the study whose changes depend on the manipulation of the independent variable. We do not know the values of the DV until after we have manipulated the IV.
Example: manipulation of the independent variable (a temperature change) produces a change in the dependent variable (the number of aggressive story endings).
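To make the IV/DV distinction concrete, here is a minimal sketch in Python (not part of the original notes; the two temperature levels and all scores are invented for illustration). The IV is room temperature with two levels, the DV is the number of aggressive story endings each participant writes, and an independent-samples t-test compares the DV across the IV levels.

from scipy import stats

# Independent variable (IV): room temperature, manipulated at two LEVELS.
# Dependent variable (DV): number of aggressive story endings per participant.
# Hypothetical data, invented for illustration.
dv_cool_room = [1, 2, 1, 0, 2, 1]   # DV scores under the "cool" level of the IV
dv_hot_room = [3, 4, 2, 3, 5, 3]    # DV scores under the "hot" level of the IV

# The DV values are only known after the IV has been manipulated; comparing
# them across the IV levels tells us whether the manipulation had an effect.
t_stat, p_value = stats.ttest_ind(dv_cool_room, dv_hot_room)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")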
Construct: a concept that is measurable but not directly observable. Constructs must be carefully explained and measured (measurement must be precise and clear). An operational definition of a construct gives us the set of activities required to measure it.
Example of operationalization: the construct 'depression' can be measured through a depression inventory, teachers' observations, or a clinical review.
NOMINAL: categories only.
ORDINAL: numerals represent a rank order; distances between subsequent numerals may not be equal.
INTERVAL: subsequent numerals represent equal distances; differences, but no natural zero point.
RATIO: numerals represent equal distances; differences and a natural zero point.
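As a rough illustration (not part of the original notes), each measurement scale maps naturally onto a different data representation; the sketch below uses Python with pandas, and all example values are invented.

import pandas as pd

# NOMINAL: categories only, with no order
nominal = pd.Categorical(["control", "experimental", "control"])
# ORDINAL: rank order, but distances between ranks may not be equal
ordinal = pd.Categorical(["low", "high", "medium"],
                         categories=["low", "medium", "high"], ordered=True)
# INTERVAL: equal distances, no natural zero point (e.g., temperature in degrees Celsius)
interval = pd.Series([36.5, 37.0, 38.2])
# RATIO: equal distances and a natural zero point (e.g., number of errors)
ratio = pd.Series([0, 3, 7])

print(nominal, ordinal, interval, ratio, sep="\n")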
Measurement errors: discrepancies between the actual value of the variable being measured and the value obtained through a measurement tool or procedure. They may affect the reproducibility of outcomes across studies. Two types:
· Systematic measurement errors are known as bias and function as extraneous variables. They are consistent, predictable inaccuracies that occur in the same direction every time a measurement is taken.
· Random errors do not contribute to systematic differences between groups. They are unpredictable and occur without any consistent pattern.
(Figure: random errors vary in magnitude and direction; systematic errors, in contrast, tend to be consistent.)
Reliability: refers to how consistent the results of a measure are. In other words, if you repeat a study or a measurement, you should get the same results. Do you get consistent scores every time?
· Test-retest reliability: refers to the stability of a measure over time; people get consistent scores every time they take the test. It is used to assess the reliability or stability of a measurement tool over time, determining whether a test produces similar results when administered to the same group of people at two different points in time (so long as nothing significant has happened to them between testings).
· Interrater reliability: examines the consistency between different individuals (raters) who are evaluating or observing the same behavior or phenomenon; consistent scores no matter who is rating (e.g., two coders' ratings of a set of targets are consistent with each other).
· Internal reliability: looks at the consistency within the measurement itself; consistent scores no matter how you ask. It ensures that all parts of a test or survey measure the same concept and produce similar results (not to be confused with internal validity!). It is the extent to which multiple measures, or items, are all answered the same way by the same set of people, and it is relevant for measures that use more than one item to get at the same construct.
○ Cronbach's alpha: an average of all of the possible item-total correlations.
(Figure: scatterplots of Observer Matt's and Observer Peter's ratings against Observer Mark's ratings, illustrating interrater reliability.)
Researchers start by stating a definition of their construct, to define the conceptual variable and get to the operational definition.
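As a rough numerical sketch (not part of the original notes), the Python snippet below computes Cronbach's alpha from the variances of item scores and total scores, and a Pearson correlation as one simple index of test-retest reliability. The cronbach_alpha helper and all scores are invented for illustration.

import numpy as np
from scipy.stats import pearsonr

def cronbach_alpha(scores):
    # scores: respondents x items matrix of item scores
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_var = scores.var(axis=0, ddof=1)        # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the total scores
    return (k / (k - 1)) * (1 - item_var.sum() / total_var)

# Hypothetical questionnaire: 5 respondents x 4 items of a short mood scale
responses = [
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [3, 3, 4, 3],
    [5, 4, 5, 5],
    [1, 2, 1, 2],
]
print(f"Internal reliability (Cronbach's alpha): {cronbach_alpha(responses):.2f}")

# Test-retest reliability: correlate scores from two administrations of the same test
time1 = [10, 12, 9, 15, 11]   # scores at the first administration (hypothetical)
time2 = [11, 12, 10, 14, 11]  # scores at the second administration (hypothetical)
r, _ = pearsonr(time1, time2)
print(f"Test-retest reliability (Pearson r): {r:.2f}")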