
Issues in Informing Science and Information Technology, Volume 6, 2009

    Towards a Guide for Novice Researchers on Research Methodology:

    Review and Proposed Methods

Timothy J. Ellis and Yair Levy
Nova Southeastern University
Graduate School of Computer and Information Sciences, Fort Lauderdale, Florida, USA

    [email protected], [email protected]

Abstract

The novice researcher, such as the graduate student, can be overwhelmed by the intricacies of the research methods employed in conducting a scholarly inquiry. As both a consumer and producer of research, it is essential to have a firm grasp on just what is entailed in producing legitimate, valid results and conclusions. The very large and growing number of diverse research approaches in current practice exacerbates this problem. The goal of this review is to provide the novice researcher with a starting point in becoming a more informed consumer and producer of research. Toward addressing this goal, a new system for deriving a proposed study type is developed. The PLD model includes the three common drivers for selection of study type: research-worthy problem (P), valid quality peer-reviewed literature (L), and data (D). The discussion includes a review of some common research types and concludes with definitions, discussions, and examples of various fundamentals of research methods, such as: a) forming research questions and hypotheses; b) acknowledging assumptions, limitations, and delimitations; and c) establishing reliability and validity.

Keywords: Research methodology, reliability, validity, research questions, problem-directed research

Introduction

The novice researcher, such as the graduate student, can be overwhelmed by the intricacies of the research methods employed in conducting a scholarly inquiry (Leedy & Ormrod, 2005). As both a consumer and producer of research, it is essential to have a firm grasp on just what is entailed in producing legitimate, valid results and conclusions. The very large and growing number of diverse research approaches in current practice exacerbates this problem (Mertler & Vannatta, 2001). The goal of this review is to provide the novice researcher with a starting point in becoming a more informed consumer and producer of research, in the form of a lexicon of terms and an analysis of the underlying constructs that apply to scholarly inquiry, regardless of the specific methods employed.

Scholarly research is, to a very great extent, characterized by the type of study conducted and, by extension, the specific methods employed in conducting that type of study (Creswell, 2005, p. 61). Novice researchers, however, often mistakenly think that, since studies are known by how they are conducted, the research process starts with deciding upon just what type of study to conduct. On the contrary, the type of study one conducts is based upon three related issues: the problem driving the study, the body of knowledge, and the nature of the data available to the researcher.

As discussed elsewhere, scholarly research starts with the identification of a tightly focused, literature-supported problem (Ellis & Levy, 2008). The research-worthy problem serves as the point of departure for the research. The nature of the research problem and the domain from which it is drawn serve as a limiting factor on the type of research that can be conducted. Nunamaker, Chen, and Purdin (1991) noted that "It is clear that some research domains are sufficiently narrow that they allow the use of only limited methodologies" (p. 91). The problem also serves as the guidance system for the study, in that the research is, in essence, an attempt to develop, in some manner, at least a partial solution to the research problem. Even the best design cannot provide meaning to research and answer the question 'Why was the study conducted?' without the anchor of a clearly identified research problem.

The body of knowledge serves as the foundation upon which the study is built (Levy & Ellis, 2006). The literature also serves to channel the research, in that it indicates the type of study or studies that are appropriate based upon the nature of the problem driving the study. Likewise, the literature provides clear guidance on the specific methods to be followed in conducting a study of a given type. Although originality is of great value in scholarly work, it is usually not rewarded when applied to the research methods. Ignoring the wisdom contained in the existing body of knowledge can cause the novice researcher, at the least, a great deal of added work establishing the validity of the study.

From an entirely practical perspective, the nature of the data available to the researcher serves as a final filter in determining the type of study to conduct. The type of data available should be considered a necessary, but certainly not sufficient, consideration for selecting research methods. The data should never supersede the necessity of a research-worthy problem serving as the anchor and the existing body of knowledge serving as the foundation for the research. The inability to gather the necessary data can, however, certainly make a study based upon research methods directly driven by a well-conceived problem and supported by current literature completely futile. Every solid research study must use data in order to validate the proposed theory. As a result, novice researchers should understand the centrality of access to data for the success of their study. Access to data refers to the ability of the researcher to actually collect the desired data for the study. Without access to data, it is impossible for a researcher to draw any meaningful conclusions about the phenomena. Novice researchers should be aware that their access to data will also determine the type of methodology they will use and, eventually, the type of research they will conduct. Figure 1 illustrates the interaction among the research-worthy problem (P), the existing body of knowledge documented in the peer-reviewed literature (L), and the data available to the researcher (D). The research-worthy problem (P) serves as the input to the process of selecting the appropriate type of research to conduct; the valid peer-reviewed literature (L) is the key funnel that limits the range of applicable research approaches, based on the body of knowledge; the data (D) available to the researcher serves as the final filter used to identify the specific study type.

The balance of this paper explores the constructs underlying scholarly research in two aspects. The first section examines some of the types of studies most commonly used in information systems research. The second section explores vital considerations for research methods that apply across all study types.


[Figure 1. The PLD Model for Deriving Study Type: research-worthy problem (P) → valid peer-reviewed literature (L) → data available (D) → study type]

Types of Research

There are a number of different ways to distinguish among types of research. The type of data available is certainly one vital aspect (Gay, Mills, & Airasian, 2006; Leedy & Ormrod, 2005); different research approaches are appropriate for quantitative data – precise, numeric data derived from a reduced variable – than for qualitative data – complex, multidimensional data derived from a natural setting. Of equal importance is the nature of the problem being addressed by the research (Isaac & Michael, 1981). Some problems, for example, are relatively new and require exploratory types of research, while more mature problems might better be addressed by descriptive or evaluative (hypothesis-testing) approaches (Sekaran, 2003).

Table 1: Key Categories of Research

Approach              Most common type of data        Stage of problem       Categories of theory
Experimental          Quantitative                    Evaluation             Testing or revising
Causal-comparative    Quantitative                    Evaluation             Testing or revising
Historical            Quantitative or qualitative     Description            Testing or revising
Developmental         Quantitative and qualitative    Description            Building or revising
Correlational         Quantitative                    Description            Testing
Case study            Qualitative                     Exploration            Building or revising
Grounded theory       Qualitative                     Exploration            Building
Ethnography           Qualitative                     Description            Building
Action research       Quantitative and qualitative    Applied exploration    Building or revising


Table 1 presents an overview of how the research approaches most commonly used in information systems (IS) are categorized. The subsections following Table 1 briefly explore each of these major research approaches. In general, research studies can be classified into three categories: theory building, theory testing, and theory revising. Theory building refers to research studies that aim at building a theory where no prior solid theory existed to explain a phenomenon or specific scenario. Theory testing refers to research studies that aim at validating (i.e., testing) existing theories in a new context. Theory revising refers to research studies that aim at revising an existing theory.
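As a rough illustration of the PLD filtering idea, the minimal sketch below encodes Table 1 as a lookup and filters it by the stage of the problem and the kind of data the researcher can collect. The function name, string labels, and selection logic are our own simplifications for illustration; they are not part of the original model.

```python
# A sketch only: Table 1 as a lookup, filtered PLD-style.
# Maps approach -> (most common type of data, stage of problem).
TABLE_1 = {
    "experimental":       ("quantitative", "evaluation"),
    "causal-comparative": ("quantitative", "evaluation"),
    "historical":         ("quantitative or qualitative", "description"),
    "developmental":      ("quantitative and qualitative", "description"),
    "correlational":      ("quantitative", "description"),
    "case study":         ("qualitative", "exploration"),
    "grounded theory":    ("qualitative", "exploration"),
    "ethnography":        ("qualitative", "description"),
    "action research":    ("quantitative and qualitative", "applied exploration"),
}

def candidate_approaches(problem_stage, data_type):
    """Keep only approaches whose typical stage matches the problem
    and whose typical data matches what the researcher can collect."""
    return [
        approach
        for approach, (data, stage) in TABLE_1.items()
        if stage == problem_stage and data_type in data
    ]

# A new, little-understood problem with access only to qualitative data:
print(candidate_approaches("exploration", "qualitative"))
# -> ['case study', 'grounded theory']
```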

Experimental

The essence of experimental research is determining if a cause-effect relationship exists between one factor or set of factors – the independent variable(s) – and a second factor or set of factors – the dependent variable(s) (Cook & Campbell, 1979). In an experiment, the researcher takes control of and manipulates the independent variable, usually by randomly assigning participants to two or more different groups that receive different treatments or implementations of the independent variable. The experimenter measures and compares the performance of the participants on the dependent variable to determine if changes in the independent variables are very likely to cause similar changes in performance on the dependent variable. In medical settings, this type of research is very common. In many research fields, however, it is difficult to control all the variables in an experiment, especially in research areas related to organizations and institutions. For that reason, the use of experiments in IS is somewhat limited, and a less restrictive type of experiment, the quasi-experiment, is used instead (Cook & Campbell, 1979). As in experiments, in quasi-experiments the researcher attempts to determine if a cause-effect relationship exists between one factor or set of factors – the independent variable(s) – and a second factor or set of factors – the dependent variable(s). In quasi-experiments, however, the researcher is unable to control all the variables in the experiment, although most variables are controlled.

An example of an experimental study would be research into which of two methods of inputting text in a personal digital assistant, soft-key or handwriting recognition, is more accurate. The independent variable would be the method of text input. The dependent variable might be a count of the number of entry errors, and the comparison based on the mean of the group using the soft-key method versus the mean of the group that used handwriting recognition input. An example of experimental research can be found in Cockburn, Savage, and Wallace (2005).
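As a minimal sketch of how such a comparison might be analyzed, the following code runs an independent-samples t-test on the two groups' error counts. The numbers, group sizes, and variable names are invented for illustration and are not taken from the cited study.

```python
# Hypothetical data: entry errors per participant, with participants
# randomly assigned to one of the two input methods.
from scipy import stats

soft_key_errors = [12, 9, 14, 11, 10, 13, 8, 12]
handwriting_errors = [15, 18, 13, 17, 16, 14, 19, 15]

# An independent-samples t-test compares the two group means; a small
# p-value suggests the input method (independent variable) affects the
# error count (dependent variable).
t_stat, p_value = stats.ttest_ind(soft_key_errors, handwriting_errors)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```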

Causal-Comparative

As with experimental studies, causal-comparative research focuses on determining if a cause-effect relationship exists between one factor or set of factors – the independent variable(s) – and a second factor or set of factors – the dependent variable(s). Unlike in an experiment, the researcher does not take control of and manipulate the independent variable in causal-comparative research, but rather observes, measures, and compares the performance on the dependent variable or variables of subjects in naturally occurring groupings based on the independent variable.

An example of a causal-comparative study would be research into the impact monetary bonuses had on knowledge-sharing behavior as exhibited by contributions to a company knowledge bank. The independent variable would be "monetary bonus," and it might have two levels (i.e., "yes" and "no"). The dependent variable might be a count of the number of contributions, and the comparison based upon an examination of the mean number of knowledge-base contributions made per employee in companies that provided a monetary bonus versus the mean number of contributions made per employee in companies that did not provide a bonus. Since the researcher did not assign companies to the "bonus" or "no bonus" categories, this study would be causal-comparative, not experimental. An example of causal-comparative research can be found in Becerra-Fernandez, Zanakis, and Walczak (2002), who developed a knowledge discovery technique using neural network modeling to classify a country's investing risk based on a variety of independent variables.
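Analytically, the comparison can look identical to the experimental case; what changes is the design, not the arithmetic. The sketch below uses the same kind of t-test on hypothetical contribution counts; all numbers and names are invented.

```python
# Hypothetical data: knowledge-base contributions per employee in companies
# that already offered a bonus versus companies that did not. The groups
# are observed, not assigned by the researcher.
from scipy import stats

bonus_contribs = [7, 5, 9, 6, 8, 7]
no_bonus_contribs = [4, 3, 6, 2, 5, 4]

# Because group membership was not randomly assigned, a significant
# difference supports, rather than establishes, a cause-effect claim.
t_stat, p_value = stats.ttest_ind(bonus_contribs, no_bonus_contribs)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```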

Case Study

A case study is "an empirical inquiry that investigates a contemporary phenomenon within its real life context using multiple sources of evidence" (Noor, 2008, p. 1602). The evidence used in a case study is typically qualitative in nature and focuses on developing an in-depth rather than broad, generalizable understanding. Case studies can be used to explore, describe, or explain phenomena by an exhaustive study within their natural setting (Yin, 1984). An example of a case study can be found in Ramim and Levy (2006), who described the issues related to the impact of an insider attack, combined with novice management, on the survivability of the e-learning systems of a small university.

Historical

Historical research utilizes interpretation of qualitative data to explain the causes of change through time. This type of research is based upon the recognition of a historical problem or the identification of a need for certain historical knowledge and generally entails gathering as much relevant information about the problem or topic as possible. The research usually begins with the formation of a hypothesis that tentatively explains a suspected relationship between two or more historical factors and proceeds to a rigorous collection and organization of usually qualitative evidence. The verification of the authenticity and validity of such evidence, together with its selection, organization, and analysis, forms the basis for this type of research. An example of historical research can be found in the study by Grant and Grant (2008), who tested the hypothesis that a new generation in knowledge management was emerging.

Correlational

The primary focus of the correlational type of research is to determine the presence and degree of a relationship between two factors. Although correlational studies are superficially similar to causal-comparative research – both types of study focus on analyzing quantitative data to determine if a relationship exists between two variables – the difference between the two cannot be ignored. Unlike causal-comparative research, in correlational studies there is no attempt to determine if a cause-effect relationship exists (variable x causes changes in variable y). The goal of correlational studies is to determine if a predictive relationship exists (knowing the value of variable x allows one to predict the value of variable y). At a practical level, there is, therefore, no distinction made between independent and dependent variables in correlational research.

An example of a simple correlational study would be research into the relationship between age and willingness to make e-commerce purchases. The two variables of interest would be age and the number of e-commerce purchases made over a given period of time. The comparison would be based upon an examination of the age of each subject in the study and the number of e-commerce purchases made by that subject. Since the researcher did not control either of the variables or attempt to determine if age caused changes in purchases, just if age could be used to predict behavior, the study would be correlational, not experimental or causal-comparative. An example of correlational research can be found in Cohen and Ellis (2003).
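A minimal sketch of the analysis, with invented ages and purchase counts, would compute Pearson's r for the two variables; note that neither variable is designated independent or dependent.

```python
# Hypothetical data: each pair is one subject's age and his or her
# e-commerce purchases over a fixed period.
from scipy import stats

ages = [22, 28, 35, 41, 47, 53, 60, 66]
purchases = [14, 12, 10, 9, 7, 6, 4, 3]

# Pearson's r captures the strength and direction of the linear
# relationship; its sign and magnitude support prediction, not causation.
r, p_value = stats.pearsonr(ages, purchases)
print(f"r = {r:.2f}, p = {p_value:.4f}")
```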


Developmental

Developmental research attempts to answer the question: How can researchers build a 'thing' to address the problem? It is especially applicable when there is not an adequate solution to even test for efficacy in addressing the problem, and it presupposes that researchers do not yet know how to go about building a solution that can be tested. Developmental research generally entails three major elements:

    • Establishing and validating criteria the product must meet

    • Following a formalized, accepted process for developing the product

• Subjecting the product to a formalized, accepted process to determine if it satisfies the criteria.

An example of developmental research would be Ellis and Hafner (2006), which detailed the development of an asynchronous environment for project-based collaborative learning experiences. Developmental research is distinguished from product development by: a focus on complex, innovative solutions that have few, if any, accepted design and development principles; a comprehensive grounding in the literature and theory; empirical testing of the product's practicality and effectiveness; as well as thorough documentation, analysis, and reflection on processes and outcomes (van den Akker, Branch, Gustafson, Nieveen, & Plomp, 2000).

Grounded Theory

Grounded theory is defined as "a systematic, qualitative procedure used to generate a theory that explains, at a broad conceptual level, a process, an action, or interaction about a substantive topic" (Creswell, 2005, p. 396). Grounded theory is used when theories currently documented in the literature fail to adequately explain the phenomena observed (Leedy & Ormrod, 2005). In such cases, revisions to existing theory may not be valid, as the fundamental assumptions behind such theories may be flawed given the context or data at hand. Table 2 outlines the three key types of grounded theory design. According to Creswell, "choosing among the three approaches requires several considerations" (p. 403). He noted that such considerations depend on the key emphasis of the study, such as: Is the aim of the study to follow given procedures? Is the aim of the study to follow predetermined categories? What is the position of the researcher? An example of grounded theory in the context of information systems includes the study by Oliver, Whymark, and Romm (2005). Oliver et al. used grounded theory to develop a conceptual model of enterprise-resource planning (ERP) systems adoption based on the various types of organizational justifications and reported motives.

Table 2: Types of Grounded Theory Design (Creswell, 2005)

Systematic design: "emphasizes the use of data analysis steps of open, axial, and selective coding, and the development of a logic paradigm or a visual picture of the theory generated" (Creswell, 2005, p. 397)

Emerging design: "letting the theory emerge from the data rather than using specific, preset categories" (Creswell, 2005, p. 401)

Constructivist design: "focus is on the meanings ascribed by participants in a study…more interested in the views, values, beliefs, feelings, assumptions, and ideologies of individuals than in gathering facts and describing acts" (Creswell, 2005, p. 402)


Ethnography

Whereas a case study examines a particular person, program, or event in considerable depth, "[i]n an ethnography, the researcher looks at an entire group – more specifically, a group that shares a common culture – in depth" (Leedy & Ormrod, 2005, p. 151). According to Creswell (2005), ethnographic research entails an in-depth qualitative investigation of a group that shares a common culture. He indicated that ethnography is best used to explain various issues within a group of individuals that have been together for a considerable length of time and have, therefore, developed a common culture. Ethnographic research also provides a chronological collection of events related to a group of individuals sharing a common culture. Beynon-Davies (1997) outlined the use of ethnographic research in the context of system development. He noted that for IS researchers, ethnographic research may provide value in the area of IS development, specifically in the process of capturing tacit knowledge during the system development life cycle (SDLC) (Beynon-Davies, 1997). Crabtree (2003) noted that "ethnography is an approach that is [of] increasing interest to the designers of collaborative computing systems. Rejecting the use of theoretical frameworks and insisting instead on a rigorously descriptive mode of research" (p. ix). However, criticism of Crabtree's advocacy of ethnography in information systems research has also been voiced (Alexander, 2003).

Action Research

Action research is defined as "a type of research that focuses on finding a solution to a local problem in a local setting" (Leedy & Ormrod, 2005, p. 114). Action research is unique in its approach in that the researchers themselves are part of the practitioner group that faces the actual problem the research is trying to address (Creswell, 2005). Additionally, the aim of action research is to investigate a localized and practical problem. According to DeLuca, Gallivan, and Kock (2008), there are five key steps in action research: a) diagnosing the problem; b) planning the action; c) taking the action; d) evaluating the results; and e) specifying lessons learned for the next cycle. Throughout all five steps of the action research, "researchers and practitioners collaborate during each step" (DeLuca et al., p. 49).

Fundamentals of Research Methods

For each study type there is an accepted methodology documented in texts (Gay et al., 2006; Isaac & Michael, 1981; Leedy & Ormrod, 2005; Yin, 1984) and exemplified in the literature (Levy & Ellis, 2006). As a first step in establishing the value of a proposed study, the novice researcher is well advised to closely follow the template for the study type contained in the texts and model her or his research methods after similar studies reported in the literature. Regardless of the type of study being conducted, there are a number of important factors that must be accommodated in an effective description of the research methods. In brief, the description must provide a detailed, step-by-step account of how the study will be conducted, answering the vital "who, what, where, when, why, and how" questions:

    1. What is going to be done

    2. Who is going to do each thing to be done

    3. How will each thing to be done be accomplished

    4. When, and in what order, will the things to be accomplished actually be done

    5. Where will those things be done

    6. Why – supported by the literature – for the answers to the What, Who, How, When, and Where


A properly developed description of the research methods would allow the reader to actually conduct the study being proposed based upon the processes outlined. Included among those processes are: forming research questions and hypotheses; identifying assumptions, limitations, and delimitations; as well as establishing reliability and validity.

    Form Research Questions and Hypotheses

Research questions

Research questions are the essence of most research conducted and act as the guiding plan for the investigation (Mertler & Vannatta, 2001). In general, research questions are "specific questions that researchers seek to answer" (Creswell, 2005, p. 117). According to Maxwell (2005), "research questions state what you want to learn" (p. 69). A research investigation may have one or more research questions regardless of the specific type of research, including qualitative, quantitative, and mixed-method research. Most quality peer-reviewed studies will have a specific section that highlights the research questions investigated. In most published work that lacks such a section, the research questions will appear either at the end of the problem statement or right after the literature review section. Maxwell suggested that a good research question is one that will point the researcher to the information that will lead him or her to understand what he or she set forth to investigate. According to Ellis and Levy (2008), "in order for the research to be at all meaningful, there has to be an identifiable connection between the answers to the research questions and the research problem inspiring the study" (p. 20). Research questions, however, should not be created in a vacuum, but should be strongly influenced by what the quality literature suggests about the phenomena (Berg, 1998). Moreover, the exact wording used to state the research questions is vital, as the accuracy and appropriateness of the research questions determine the methodology to be used (Mertler & Vannatta, 2001).

The nature of the research questions will depend on the type of study being conducted. Studies based on quantitative data will generally be driven by research questions that are confirmatory and predictive in nature, while studies based on qualitative data will more likely be driven by research questions that are exploratory and interpretive in nature.

Examples of quantitative research questions in the context of information systems include:

– To what extent does users' perceived usefulness increase the odds of their e-commerce usage?

– Do computer self-efficacy and computer anxiety differ significantly between males and females when using e-learning systems?

    – What are the contributions of users’ systems trust, deterrent severity, and motivation to their misuse of biometrics technology?

    – To what degree do team communication and team cohesiveness predict productivity of system development by virtual teams?

    Examples of qualitative research questions in the context of information systems include:

– How does training help the implementation success of enterprise-wide information systems?

– Why do user involvement and user resistance help in the systems' requirements gathering process?

– What are the system characteristics that are valuable to users when using e-learning systems?

    – How do e-commerce users define information privacy?


Hypotheses

One must keep in mind that "research questions are not the same as research hypotheses" (Maxwell, 2005, p. 69). In general, a hypothesis can be defined as a "logical supposition, a reasonable guess, an educated conjecture" about some aspect of daily life (Leedy & Ormrod, 2005, p. 6). In scholarly research, however, hypotheses are more than 'educated guesses.' A research hypothesis is a "prediction or conjecture about the outcome of a relationship among attributes or characteristics" (Creswell, 2005, p. 117). By convention, research is conservative and assumes the absence of a relationship among the attributes under consideration; hypotheses, therefore, are expressed in null terms. For example, if a study were to examine the impact interactive multimedia animations have on the average amount of a purchase at an e-commerce site, the hypothesis would be stated: The average amount of purchase on an e-commerce site enhanced with interactive animations will not be different from the average amount of purchase on the same e-commerce site that is not enhanced with interactive animations. Not all types of research entail establishing and testing hypotheses. Research methods based upon quantitative data commonly test hypotheses; studies based upon qualitative data, on the other hand, explore propositions (Maxwell, 2005).
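Stated formally (our notation, not the authors'), the null hypothesis for this example, and the alternative against which it would be tested, might be written:

```latex
% \mu_A: mean purchase amount on the animation-enhanced site
% \mu_P: mean purchase amount on the plain site
H_0\colon \mu_A = \mu_P \qquad H_1\colon \mu_A \neq \mu_P
```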

Unlike hypotheses, propositions do predict a directionality for the results. If, for example, one were to examine consumer reaction to interactive animation on an e-commerce site, one might investigate the proposition that: Consumers will express a greater feeling of engagement and satisfaction when visiting e-commerce sites enhanced with interactive animations than similar sites that lack the enhancement.

Acknowledge Assumptions, Limitations, and Delimitations

For any given research investigation there are underlying assumptions, limitations, and delimitations (Creswell, 2005). According to Leedy and Ormrod (2005), assumptions, limitations, and delimitations are critical components of a viable research proposal; without these considerations clearly articulated, evaluators may raise some valid questions regarding the credibility of the proposal. The following three subsections provide definitions and examples for each term.

Assumptions

Assumptions serve as the basic foundation of any proposed research (Leedy & Ormrod, 2005) and constitute "what the researcher takes for granted. But taking things for granted may cause much misunderstanding. What [researchers] may tacitly assume, others may never have considered" (Leedy & Ormrod, p. 62). Moreover, assumptions can be viewed as something the researcher accepts as true without concrete proof. Essentially, there is no research study without a basic set of assumptions (Berg, 1998). According to Williams and Colomb (2003), identifying the assumptions behind a given research proposal is one of the hardest issues to address, especially for novice researchers. Such difficulties emerge due to the fact that, by nature, "we all take our deepest beliefs for granted, rarely questioning them from someone else's point of view" (Williams & Colomb, p. 200). It is important for novice researchers to learn how to explicitly document their assumptions in order to ensure that they are aware of those things taken as givens, rather than trying to hide or obscure them from the reader. Explicitly documenting the research assumptions may help reduce misunderstanding of, and resistance to, a proposed research project, as it demonstrates that the research proposal has been thoroughly considered (Leedy & Ormrod, 2005).

To identify the assumptions behind a proposal, researchers must ask themselves the following question: "what do I believe that my readers must also believe (but may not) before they will think that my reasons are relevant to my claims?" (Williams & Colomb, p. 200).


Examples of assumptions researchers make include:

    – Participants in the study will make a sincere effort to complete the assigned tasks

    – The students participating in the Internet-based course have a basic familiarity with the personal computer and the use of the World Wide Web.

Limitations

Every study has a set of limitations (Leedy & Ormrod, 2005), or "potential weaknesses or problems with the study identified by the researcher" (Creswell, 2005, p. 198). A limitation is an uncontrollable threat to the internal validity of a study. As described in greater detail below, internal validity refers to the likelihood that the results of the study actually mean what the researcher indicates they mean. Explicitly stating the research limitations is vital in order to allow other researchers to replicate or expand on a study (Creswell, 2005). Additionally, by explicitly stating the limitations of the research, a researcher can help other researchers "judge to what extent the findings can or cannot be generalized to other people and situations" (Creswell, 2005, p. 198).

    Examples of limitations researchers may have:

    – All subjects in the study will be volunteers who may withdraw from the study at any time. The participants who finish the study might not, therefore, be truly representative of the population.

    – The members of the expert panel that will validate the proficiency survey instrument will be drawn from the faculty of … and may not truly represent universally accepted expert opinion.

Delimitations

Delimitations refer to "what the researcher is not going to do" (Leedy & Ormrod, 2005). In scholarly research, the goals of the research outline what the researcher intends to do; without the delimitations, the reader will have difficulty understanding the boundaries of the research. In order to constrain the scope of the study and make it more manageable, researchers should outline in the delimitations the factors, constructs, and/or variables that were intentionally left out of the study. Delimitations impact the external validity, or generalizability, of the results of the study. Examples of delimitations include:

– Participation in the study was delimited to only males aged 25-45 who had made a purchase via the Internet within the past 12 months; generalization to other age groups or to females may not be warranted.

– This study examined attrition rates in MBA programs offered in continuing education departments of public colleges and universities; generalization to other educational programs or to similar programs offered in private institutions may not be warranted.

Establish Reliability and Validity

Every study must address threats to validity and reliability (Leedy & Ormrod, 2005). Although the concepts of validity and reliability originated in quantitative research approaches, in recent years validity and reliability have been addressed in qualitative and mixed-methods approaches as well (Berg, 1998; Maxwell, 2005). According to Leedy and Ormrod (2005), "the validity and reliability of your measurement instruments influences the extent to which you can learn something about the phenomenon you are studying…and the extent to which you can draw meaningful conclusions from your data" (p. 31). The following two sections define and outline the key types of validity and reliability related to common research investigations. Establishing an approach, following published methods, to address validity and reliability issues in a research proposal may drastically increase the overall acceptance of the proposal.

Reliability

Reliability is defined as "the consistency with which a measuring instrument yields a certain result when the entity being measured hasn't changed" (Leedy & Ormrod, 2005, p. 31). According to Straub (1989), researchers should try to answer the following question in an attempt to address reliability: "do measures show stability across the unit of observation? That is, could measurement error be so high as to discredit the findings?" (p. 150). Reliability can be established in four different ways: equivalency, stability, inter-rater, and internal consistency (Carmines & Zeller, 1991).

Equivalency reliability. Equivalency reliability is concerned with how closely measurements taken with one instrument match those taken with a second instrument under similar conditions. Equivalency is often used to certify the reliability of a new measurement instrument or procedure by comparing the results of using that instrument with those obtained by using established instruments or processes. Equivalency is usually established through the use of a statistical correlation (Pearson's r for linear correlation or Eta for non-linear correlation).

Stability reliability. Stability reliability – also known as test-retest reliability – is concerned with how consistent the results of measuring with a given instrument or process are over time. Stability is based on the assumption that, absent some identifiable explanation, the measurement should produce the same results today as last month and will produce the same results next month. Stability, like equivalency, is usually established through the use of a statistical correlation (Pearson's r for linear correlation or Eta for non-linear correlation).

Inter-rater reliability. Inter-rater reliability focuses on the extent of agreement in the results of two or more individuals using the same measurement instrument or process. As with stability and equivalency, inter-rater reliability is usually established through the use of a statistical correlation (Pearson's r for linear correlation or Eta for non-linear correlation).

Internal consistency. Unlike the previous methods of establishing reliability, which were concerned with comparing the results of using an instrument or process with some external standard (another instrument, the same instrument over time, or the same instrument used by different people), internal consistency focuses on the level of agreement among the various parts of the instrument or process in assessing the characteristic being measured. In a 20-question survey measuring attitude toward knowledge sharing, for example, if the survey is internally consistent, there will be a strong correlation among the responses to all 20 questions. Internal consistency is also measured by statistical correlation, but with Cronbach's α in place of Pearson's r.
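As a minimal sketch of two of these indices, the following code computes a test-retest (stability) correlation and Cronbach's α for internal consistency; the respondents, items, and scores are invented for illustration.

```python
import numpy as np

# Stability: the same six people measured twice, one month apart.
test = np.array([3.0, 4.5, 2.5, 5.0, 3.5, 4.0])
retest = np.array([3.5, 4.0, 2.5, 5.0, 3.0, 4.5])
stability_r = np.corrcoef(test, retest)[0, 1]  # Pearson's r

# Internal consistency: five respondents answering a four-item attitude
# survey (rows = respondents, columns = items).
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
])
k = responses.shape[1]
item_variances = responses.var(axis=0, ddof=1).sum()
total_variance = responses.sum(axis=1).var(ddof=1)
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)
alpha = (k / (k - 1)) * (1 - item_variances / total_variance)

print(f"test-retest r = {stability_r:.2f}, Cronbach's alpha = {alpha:.2f}")
```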

Validity

Validity refers to a researcher's ability to "draw meaningful and justifiable inferences from scores about a sample or population" (Creswell, 2005, p. 600). There are various types of validity associated with scholarly research (Cook & Campbell, 1979). The validity of an instrument refers to "the extent to which the instrument measures what it is supposed to measure" (Leedy & Ormrod, 2005, p. 31). Thus, researchers, when designing their study, must ask themselves "how might you be wrong?" (Maxwell, 2005, p. 105). Additionally, the validity of a study "depends on the relationship of your conclusions to reality" (Maxwell, 2005, p. 105). This section will define and outline the key validity issues. The two most common validity issues are internal validity and external validity.


Internal validity. Internal validity refers to the "extent to which its design and the data that it yields allow the researcher to draw accurate conclusions about cause-and-effect and other relationships within the data" (Leedy & Ormrod, 2005, pp. 103-104). According to Straub (1989), researchers should try to answer the following question in an attempt to address internal validity: "are there untested rival hypotheses for the observed effects?" (p. 150). Generally, establishing internal validity requires examining one or more of the following: face validity, criterion validity, construct validity, content validity, or statistical conclusion validity.

Face validity. Face validity is based upon appearance: does the instrument or process seem to pass the test for reasonableness? Face validity is never sufficient by itself, but an informal assessment of how well the study appears to be designed is often the first step in establishing its validity.

Criterion related validity. Also known as instrumental validity, criterion related validity is based upon the premise that the processes and instruments used in a study are valid if they parallel similar ones used in previous, validated research. In order to establish criterion related validity, it is necessary to draw strong parallels between as many particulars of the validated study – population, circumstances, instruments used, methods followed – as possible.

Construct validity. Construct validity "is in essence [an] operational issue. It asks whether the measures chosen are true constructs describing the event or merely artifacts of the methodology itself" (Straub, 1989, p. 150). According to Straub, researchers should try to answer the following question in an attempt to address construct validity: "do measures show stability across methodology? That is, are the data a reflection of true scores or artifacts of the kind of instrument chosen?" (p. 150).

Content validity. In survey-based research, the term content validity refers to "the degree to which items in an instrument reflect the content universe to which the instrument will be generalized" (Boudreau, Gefen, & Straub, 2001, p. 5). According to Straub (1989), researchers should try to answer the following question in an attempt to address content validity: "are instrument measures drawn from all possible measures of the properties under investigation?" (p. 150).

Statistical conclusion validity. Statistical conclusion validity refers to the "assessment of the mathematical relationships between variables and the likelihood that this mathematical assessment provides a correct picture of the covariation …(Type I and Type II error)" (Straub, 1989, p. 152). According to Straub, researchers should try to answer the following question in an attempt to address statistical conclusion validity: "do the variables demonstrate relationships not explainable by chance or some other standard of comparison?" (p. 150).
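In conventional notation (our summary, not Straub's), the two error types referenced in that definition are:

```latex
% Type I error: rejecting a true null hypothesis (probability \alpha).
% Type II error: failing to reject a false null hypothesis (probability \beta).
\alpha = P(\text{reject } H_0 \mid H_0 \text{ true}) \qquad
\beta = P(\text{fail to reject } H_0 \mid H_0 \text{ false})
```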

External validity. External validity refers to the "extent to which its results apply to situations beyond the study itself…the extent to which the conclusions drawn can be generalized to other contexts" (Leedy & Ormrod, 2005, p. 105). Additionally, external validity addresses the "generalizability of sample results to the population of interest, across different measures, persons, settings, or times. External validity is important to demonstrate that research results are applicable in natural settings, as contrasted with classroom, laboratory, or survey-response settings" (King & He, 2005, p. 882).

Summary

One of the major challenges facing the novice researcher is matching the research she or he proposes with a research method that is appropriate and will be accepted by the scholarly community. The material presented in this paper is certainly not intended to be the ending point in the process of establishing the research methods for a given study. The novice researcher is encouraged, even expected, to augment this material by referring to one or more of the texts and research examples cited.

This paper does, however, present a foundation upon which such a decision can be based, by:

1. Developing the PLD, a model for selecting a research approach based upon the problem driving the study, the body of knowledge documented in the peer-reviewed literature, and the data available to the researcher;

2. Identifying, in brief, several of the research approaches commonly used in information systems studies;

3. Exploring several of the important terms and constructs that apply to scholarly research, regardless of the specific approach selected.

References

Alexander, I. (2003). Designing collaborative systems: A practical guide to ethnography. European Journal of Information Systems, 12(3), 247-249.

Becerra-Fernandez, I., Zanakis, S. H., & Walczak, S. (2002). Knowledge discovery techniques for predicting country investment risk. Computers & Industrial Engineering, 43(4), 787-800.

Berg, B. L. (1998). Qualitative research methods for the social sciences (3rd ed.). Boston, MA: Allyn & Bacon.

Beynon-Davies, P. (1997). Ethnography and information systems development: Ethnography of, for and within IS development. Information and Software Technology, 39(8), 531-540.

Boudreau, M.-C., Gefen, D., & Straub, D. W. (2001). Validation in information systems research: A state-of-the-art assessment. MIS Quarterly, 25(1), 1-16.

Cockburn, A., Savage, J., & Wallace, A. (2005). Tuning and testing scrolling interfaces that automatically zoom. Proceedings of the Computer-Human Interaction 2005 Conference, Portland, Oregon, pp. 71-80.

Cohen, M. S., & Ellis, T. J. (2003). Predictors of success: A longitudinal study of threaded discussion forums. Proceedings of the Frontiers in Education Conference, Boulder, Colorado, pp. T3F-14 – T3F-18.

Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design & analysis issues for field settings. Boston, MA: Houghton Mifflin Company.

Crabtree, A. (2003). Designing collaborative systems: A practical guide to ethnography. Berlin: Springer-Verlag.

Creswell, J. W. (2005). Educational research: Planning, conducting, and evaluating quantitative and qualitative research (2nd ed.). Upper Saddle River, NJ: Pearson.

DeLuca, D., Gallivan, M. J., & Kock, N. (2008). Furthering information systems action research: A post-positivist synthesis of four dialectics. Journal of the Association for Information Systems, 9(2), 48-72.

Ellis, T. J., & Hafner, W. (2006). A communication environment for asynchronous collaborative learning. Proceedings of the 37th Hawaii International Conference on System Sciences, Big Island, Hawaii, pp. 3a-3a.

Ellis, T. J., & Levy, Y. (2008). A framework of problem-based research: A guide for novice researchers on the development of a research-worthy problem. Informing Science: The International Journal of an Emerging Transdiscipline, 11, 17-33. Retrieved from http://inform.nu/Articles/Vol11/ISJv11p017-033Ellis486.pdf

Gay, L. R., Mills, G. E., & Airasian, P. (2006). Educational research: Competencies for analysis and applications (8th ed.). Upper Saddle River, NJ: Pearson.

Grant, K. A., & Grant, C. T. (2008). Developing a model of next generation knowledge management. Issues in Informing Science and Information Technology, 5, 571-590. Retrieved from http://proceedings.informingscience.org/InSITE2008/IISITv5p571-590Grant532.pdf

Isaac, S., & Michael, W. B. (1981). Handbook in research and evaluation. San Diego, CA: EdITS Publishers.

King, W. R., & He, J. (2005). External validity in IS survey research. Communications of the Association for Information Systems, 16, 880-894.

Leedy, P. D., & Ormrod, J. E. (2005). Practical research: Planning and design (8th ed.). Upper Saddle River, NJ: Prentice Hall.

Levy, Y., & Ellis, T. J. (2006). A systems approach to conduct an effective literature review in support of information systems research. Informing Science: The International Journal of an Emerging Transdiscipline, 9, 181-212. Retrieved from http://inform.nu/Articles/Vol9/V9p181-212Levy99.pdf

Maxwell, J. A. (2005). Qualitative research design: An interactive approach (2nd ed.). Thousand Oaks, CA: Sage Publications.

Mertler, C. A., & Vannatta, R. A. (2001). Advanced and multivariate statistical methods: Practical application and interpretation. Los Angeles, CA: Pyrczak Publishing.

Noor, K. (2008). Case study: A strategic research methodology. American Journal of Applied Sciences, 5(11), 1602-1604.

Nunamaker, J. F., Chen, M., & Purdin, T. D. M. (1991). Systems development in information systems research. Journal of Management Information Systems, 7(3), 89-106.

Oliver, D., Whymark, G., & Romm, C. (2005). Researching ERP adoption: An internet-based grounded theory approach. Online Information Review, 29(6), 585-604.

Ramim, M. M., & Levy, Y. (2006). Securing e-learning systems: A case of insider cyber attacks and novice IT management in a small university. Journal of Cases on Information Technology, 8(4), 24-35.

Sekaran, U. (2003). Research methods for business (4th ed.). Hoboken, NJ: John Wiley & Sons.

Straub, D. W. (1989). Validating instruments in MIS research. MIS Quarterly, 13(2), 147-170.

van den Akker, J., Branch, R. M., Gustafson, K., Nieveen, N., & Plomp, T. (2000). Design approaches and tools in education. Norwell, MA: Kluwer Academic Publishers.

Williams, J. M., & Colomb, G. G. (2003). The craft of argument (2nd ed.). New York: Longman Publishers.

Yin, R. K. (1984). Case study research: Design and methods. Newbury Park, CA: Sage Publications.

Biographies

Dr. Timothy Ellis obtained a B.S. degree in History from Bradley University, an M.A. in Rehabilitation Counseling from Southern Illinois University, a C.A.G.S. in Rehabilitation Administration from Northeastern University, and a Ph.D. in Computing Technology in Education from Nova Southeastern University. He joined NSU as Assistant Professor in 1999 and currently teaches computer technology courses at both the Masters and Ph.D. level in the School of Computer and Information Sciences. Prior to joining NSU, he was on the faculty at Fisher College in the Computer Technology department and, prior to that, was a Systems Engineer for Tandy Business Products. His research interests include: multimedia, distance education, and adult learning. He has published in several technical and educational journals including Catalyst, Journal of Instructional Delivery Systems, and Journal of Instructional Multimedia and Hypermedia. His email address is [email protected]. His main website is located at http://www.scis.nova.edu/~ellist/

Dr. Yair Levy is an associate professor at the Graduate School of Computer and Information Sciences at Nova Southeastern University. During the mid to late 1990s, he assisted NASA in developing e-learning systems. He earned his Bachelor's degree in Aerospace Engineering from the Technion (Israel Institute of Technology). He received his MBA with MIS concentration and Ph.D. in Management Information Systems from Florida International University. His current research interests include the cognitive value of IS, online learning systems, effectiveness of IS, and cognitive aspects of IS. Dr. Levy is the author of the book "Assessing the Value of e-Learning Systems." His research publications appear in IS journals, conference proceedings, invited book chapters, and encyclopedias. Additionally, he has chaired and co-chaired multiple sessions/tracks at recognized conferences. Currently, Dr. Levy is serving as the Editor-in-Chief of the International Journal of Doctoral Studies (IJDS). Additionally, he is serving as an associate editor for the International Journal of Web-based Learning and Teaching Technologies (IJWLTT). Moreover, he is serving as a member of the editorial review or advisory board of several refereed journals. Additionally, Dr. Levy has been serving as a referee research reviewer for numerous national and international scientific journals, conference proceedings, as well as MIS and Information Security textbooks. He is also a frequent speaker at national and international meetings on MIS and online learning topics. To find out more about Dr. Levy, please visit his site: http://scis.nova.edu/~levyy/
