To solve a research problem, a research approach must be chosen. Several authors discuss different methods and classifications. The appropriate type of approach depends mainly on the nature of the research problems under investigation and on the amount of knowledge the researcher already has in the field. A good design ensures that the data collected are consistent with the objectives of the study and that the information is gathered correctly.
Methods describe the steps that must be taken or how something will be performed. The word method is normally used in a narrow sense: "interview method", "inquiry method", and so on. In this case, the term points to the level at which a phenomenon is codified, in other words, recorded as data. Genuine knowledge of the different methods is necessary in order to choose the empirical instrument best suited to illuminating the problem under investigation.
Phenomenology or Positivism
A decision had to be made between the phenomenological and the positivistic approach (see Table 1). Since this study is based on empirical research into people's experiences and thoughts, in which in-depth interviews are central, the phenomenological approach was found to be best suited. Furthermore, the researcher considers it almost impossible to remain objective as the positivistic approach recommends. The truth, according to positivists, is found by following a method or procedure that is in many ways independent of what is being studied (Easterby-Smith et al., 1996). Every influence from the researcher should be eliminated or minimised. By working with words rather than numbers, the researcher intends to conduct research of a qualitative kind. Although there is a clear dichotomy between the positivist and phenomenological world views, and sharp differences of opinion exist between researchers about the desirability of methods, the reality of research also involves many compromises between these pure positions (Easterby-Smith et al., 1996).
Positivist paradigm
- Basic beliefs: the world is external and objective; the observer is independent; science is value-free.
- The researcher should: focus on facts; look for causality and fundamental laws; reduce phenomena to their simplest elements; formulate hypotheses and then test them.
- Preferred methods: operationalising concepts so that they can be measured; taking large samples.

Phenomenological paradigm
- Basic beliefs: the world is socially constructed and subjective; the observer is part of what is observed; science is driven by human interests.
- The researcher should: focus on meanings; try to understand what is happening; look at the totality of each situation; develop ideas through induction from data.
- Preferred methods: using multiple methods to establish different views of phenomena; small samples investigated in depth or over time.

Table 1: Key features of the positivist and phenomenological paradigms (from Easterby-Smith et al., 1996)
In a choice between mailed survey questionnaires and face-to-face in-depth interviews, it was found that interviews would be best suited to the research question. The most fundamental of all qualitative methods is interviewing (Easterby-Smith et al., 1996). By talking to people in the organisation, we can picture how they experience their world and thus hope to obtain a fair representation of the organisation. Since the research question is qualitative, the qualitative interview is a natural choice. The interview is a sensitive and powerful way to capture experiences and meanings from the interviewees' daily lives (Easterby-Smith et al., 1996). Through the interview, respondents have the opportunity to communicate their situation to the interviewer from their own perspective and in their own words.
Qualitative versus Quantitative Technique
Qualitative research is appropriate in studies where the data cannot be analysed efficiently in a quantitative way. Easterby-Smith et al. (1996) explain that in this type of research a small convenience or quota sample is used, and the information sought relates to the respondents' motivations, beliefs, feelings and attitudes. An intuitive, subjective approach is used in gathering the data, and the collection format is open-ended. The analysis and interpretation of data are therefore more subjective in qualitative research than in quantitative research.
It is important to note that the qualitative approach is not intended to quantify or precisely measure a problem statistically, as is the case with the quantitative data collection technique. Quantitative samples are drawn scientifically and coded for quantitative analysis. This type of research includes large-scale surveys, experiments and time-series analysis. Qualitative research projects are often less structured at the beginning of the project than quantitative research, and are therefore applicable when the study is exploratory in nature (Easterby-Smith et al., 1996).
This study has employed the qualitative methodology because it is a more appropriate way to investigate an area in which few previous studies have been conducted. A qualitative approach also enables the study to take place in a natural setting, allowing the investigator to answer 'how' and 'why' questions and thus to understand the nature and complexity of the process taking place. Since we were interested in the feelings, attitudes and beliefs that respondents held towards the adoption of Internet banking, the qualitative technique was selected for collecting the data for this study.
The Research Approach Used In This Study
The research approach adopted in this study was conducted in two phases. The initial phase consisted of secondary data research, through journal publications and reviews of the Internet banking websites offered by selected local and foreign banks in Singapore. This was intended to form an idea of the latest Internet banking developments and the strategies adopted by each bank.
The second phase of the study consisted of face-to-face in-depth interviews conducted in Singapore with managers of the selected local and foreign banks. A list of questions was used during the interview sessions. The questions were adapted from Yakhlef's interview questions (Yakhlef, 2001). The aim of conducting these interviews was to confirm the observations made in the initial stage of the study, and to understand the current development and impact of Internet banking on the distribution channel and on the types of products and services offered online by banks in Singapore.
This is similar to DeYoung's (2001) study of the role of the Internet as a new distribution channel. The practitioners' opinions were sought in answer to the following questions:
(1) Are the banks embracing the Internet as a marketing tool or as another distribution channel?
(2) What are the changes to the business model, distribution strategies and resources in view of the implementation of Internet banking?
(3) Which products and services are most suited to Internet banking, and what are the new product and service offerings?
(4) How are the banks' customers adopting Internet banking, and what factors influence such adoption?
The interview results from the banks were compared to identify similarities and differences in the observations. These were used to reflect on the strategies adopted by the banks, as well as on the impact of Internet banking on the distribution networks, products and services offered by the banks.
Method of Data Collection
Data sources can be divided into five basic categories: respondents, analogous situations, experimentation, primary data and secondary data (Easterby-Smith et al., 1996).
There are two main methods of gathering data from respondents: communication and observation. Communication involves asking respondents questions and is the most common method. It is often used to find out what people think, and it is important that the questions are not biased and that the answers are honest. Observation is the process of recognising and recording events and objects, i.e. observing what people do and how they do it. This method records what is happening, but not why it happens (Easterby-Smith et al., 1996).
Analogous situations involve the examination of cases similar to the one actually studied; they include case studies and simulation. The case study method is used to investigate similar and relevant situations; for example, a previous study of the adoption of Internet banking can be used to draw conclusions relevant to our study. This method is especially useful when a complicated series of variables interacts to produce a problem or an opportunity.
Simulation is the creation of an analogy of a real-world phenomenon, most often by using computer programs. Since simulations can be done in a laboratory or an office, this type of research is less expensive than surveys or test marketing. It may also be less time-consuming. The limitation of simulation is that it is difficult to determine the variables to be used (Easterby-Smith et al., 1996).
Experimentation is similar to simulation in its approach. One or more variables are consciously manipulated in order to derive cause-and-effect relationships. Examples of experimentation include increasing educational efforts and then measuring the effects, investigating attitudes before and after a specific project, and using different educational programmes in different geographical areas and then observing their effects (Easterby-Smith et al., 1996).
Primary data are data that researchers collect for the first time. Personal interviews are one of the most important primary sources of information. They take place when the researcher communicates with the respondents in a structured way. The respondents can provide important insights into a situation, enabling the researcher to identify other relevant sources of evidence. However, the researcher must be aware that interviews are verbal reports and, as such, can be subject to bias, poor recall and poor or inaccurate articulation (Easterby-Smith et al., 1996).
Primary data are collected specifically to address a particular research objective (Aaker, Kumar and Day, 1995). Primary data can be qualitative or quantitative. A qualitative investigation is one in which the researcher gathers, analyses and interprets data that cannot be meaningfully quantified, i.e. expressed in figures. Information conveyed through words is called qualitative, and information presented in numbers is called quantitative. A qualitative investigation is often designed as a case study or as a survey investigation with small samples. Qualitative data consist of detailed descriptions of situations (Sekaran, 2000).
A qualitative case study is largely built upon qualitative information gathered from interviews, observations and documents of various kinds. Quantitative information derived from a survey investigation can be used to support the results from qualitative data (Sekaran, 2000).
Secondary data, in contrast to primary data, consist of data already collected and published for a purpose other than the research being conducted. Secondary data can originate from internal or external sources. They are one of the cheapest and easiest means of access to information, and the first thing a researcher should do is search for the secondary data available on the topic (Aaker, Kumar and Day, 1995). There are several sources of external data, including books and periodicals, government publications, census data, statistical abstracts, databases, the media and companies' annual reports. The main problem for the researcher is finding data that are relevant. Internal sources come from within the organisation and may include annual reports, sales reports, budgets, etc. Much internal data may be proprietary and not available to all (Sekaran, 2000).
Using secondary data has several advantages: it is less expensive and less time-consuming than using primary data, and secondary data are sometimes so wide-ranging and sophisticated that it would be impossible to collect them yourself. The disadvantages of using secondary data are the limitations in the accuracy of the publications, and the fact that the information needs of the study do not always coincide with the data obtained (Easterby-Smith et al., 1996).
In this study, the Internet was used to obtain knowledge of the latest publications in the research area. The researcher has tried to follow the references in articles in order to trace the original sources. Other secondary data used are company documents and internal data.
The benefit of individual interviews is the possibility of asking all kinds of questions. The interview can be extensive, provided that the respondent perceives it as interesting. An individual interview is suitable when there is a need for an extensive and comprehensive interview. The drawback is the cost of each interview, and the method is therefore seldom used with large samples (Sekaran, 2000).
Individual interviews can be carried out in different ways. The interviewer can conduct a structured interview with prepared questions and detailed instructions for coding. On the other hand, an interview can be totally unstructured, where the interviewer and the respondent discuss a subject together and a fixed plan for the interview is not possible. In this case, it is common to work with an interview guide, which includes broad prepared questions together with follow-up questions that the interviewer wants answered. The prepared follow-up questions are used only if the respondents do not spontaneously answer the broad questions (Sekaran, 2000).
An interview guide was used in an attempt to ensure that the needed information would be obtained, while still allowing a more unstructured interview in which it was possible to be surprised by the information given. Individual interviews were the main method used in this study. Communication by telephone and e-mail was used only when there was a need to clarify or supplement earlier interviews. The interviews could be recorded by taking notes during the interview and/or by using a tape recorder. One of the main problems was getting in contact with the respondents and setting a date for the interview, especially for respondents within a company.
Semi-structured interviews were adopted in this study because they are a valid approach to data collection in qualitative research (Sekaran, 2000). Interview subjects were chosen to represent different roles in the organisation in order to give the research different perspectives. Subjects received a brief letter explaining the study and soliciting participation in the interview. They then received phone calls to answer any questions they might have, and interviews were scheduled at a convenient time and place. A total of 10 out of 12 individuals from 5 different organisations agreed to participate, which indicates that Internet banking is an area of great interest. During the interviews, extensive notes were taken and then cross-referenced and transcribed shortly thereafter. These notes, together with the literature findings, were used as raw data for the analysis.
The interviews lasted 45-60 minutes on average. At the beginning of each interview, the interviewer briefly explained the study and the structure of the interview. All interviews were conducted in the same way, and similar questions were used at every encounter to ensure the consistency of the study. The interviews were conducted in an unstructured manner, in which we discussed around different subjects rather than asking about them directly. The interviewer did this deliberately to make the interview subjects more relaxed and open-minded, because we think that an interview that is too structured can limit its usefulness: it can take the form of a questionnaire, which is not what we were after in this approach.
It is almost impossible to completely exclude research errors. Therefore, an evaluation of possible errors in the research ought to be carried out. Errors can be divided into systematic errors, that is, a constant bias in the measurement, and random errors, which are non-systematic (Easterby-Smith et al., 1996).
Reliability refers to the extent to which the results would be the same if the study were repeated. It therefore measures the method's ability to resist the influence of chance and to be consistent and accurate. Only the accuracy of what is actually studied is taken into consideration, which means that a study can have a high degree of reliability even though the research findings do not answer the research question (Easterby-Smith et al., 1996). Reliability is concerned with the consistency, accuracy and predictability of the research findings, and refers to the extent to which the results can be repeated. With high reliability, the operations of a study can be repeated by a later investigator who will still arrive at the same findings and conclusions. The goal of reliability, according to Yin (1989), is therefore to minimise the errors and biases in a study.
The concept of validity tells us whether the data collection method used has the ability to measure the qualities it is intended to measure. Validity refers to the extent to which the measurement is free from both systematic and random errors (Kinnear and Taylor, 1996). This means that by assessing validity we learn whether the applied method is dependable. According to positivists, validity deals with the question "Are we measuring what we think we are measuring?", whereas phenomenologists ask "Has the researcher gained full access to the knowledge and meanings of informants?" (Easterby-Smith et al., 1996).
Validity can be divided into internal and external validity. Internal validity is the extent to which the results are in accordance with reality. External validity describes the extent to which the results are applicable to situations other than the one described. If the studied subject is unique, the question of external validity is impossible to answer (Easterby-Smith et al., 1996).
The reliability and validity of a study are both important for its generalisability. If many samples are studied, it is easier to draw general conclusions rather than conclusions specific to the study. The phenomenological researcher wants to know how likely it is that ideas and theories generated in one setting will also apply in other settings (Easterby-Smith et al., 1996).
Sources of errors
The validity of research depends on the size of sampling and non-sampling errors. Non-sampling errors include, for example, a faulty purpose, a wrong research design and content, and errors in data processing and analysis. Sampling errors concern the difference in value between the selected sample and the total population. Samples are not used in this study; therefore, the potential sources of error are non-sampling errors.
Sources of errors in specific research sectors of the study
Research into managers and management is a case in which the subjects of the research are very likely to be more powerful than the researchers themselves. Managers tend to be powerful and busy people. They are unlikely to allow research access to their organisation unless they can see some commercial or personal advantage to be derived from it. This means that access for fieldwork can be very difficult and may be hedged with many conditions about confidentiality and publication rights; feasible research questions may be determined more by access possibilities than by theoretical considerations. Managers value their time carefully and therefore often prefer short interviews.
Studying organisations from only one geographical location in depth limits the generalisability of the study, but we supplemented our work with interviews with managers of five different organisations. This does not remove the study's limitations, but it allows the results to be somewhat more generalisable.
Evaluation of the Study
In this study, validity has been defined in terms of our working definition of the concept of Internet banking and its relationship to strategy and organisation.
The issue of reliability can be addressed in terms of the correctness of the published studies used, as well as the correctness of the interview material (its collection and processing).