
Proceedings of the 2008 Association for Business Communication Annual Convention.

Copyright (c) 2008, Association for Business Communication

Incorporating Experimental Research Designs in Business Communication Research

Chris Lam, Matt Bauer
Illinois Institute of Technology

The authors would like to acknowledge Dr. Frank Parker for his help in the inception of this project idea.

Introduction

Research designs in communication research have historically utilized field studies, content analysis, case studies, and correlations. In fact, an analysis of research studies published in the Journal of Business Communication over the past 10 years (1998-2008) reveals several interesting patterns in study design. The number of studies published over the past 10 years has remained fairly consistent (anywhere from 8-13 per year). Further, the number of studies incorporating experimental designs has remained consistent as well. However, the number of studies incorporating experimental designs has been relatively small: 15 out of 101 total studies. That means 86 out of 101 studies have incorporated some other study design, such as content analysis, field studies, case studies, and correlations. There is, therefore, a clear pattern showing a general lack of experimental design in business communication research. This paper will address this situation by first identifying general contexts where experimental design can be incorporated, and when not to use experimental design, in business communication research. It will then describe the research design of several types of studies published in the Journal of Business Communication and discuss how experimental research design might be incorporated in those studies. Finally, the first author's research design for his current dissertation will be described in order to demonstrate the process of choosing an experimental design and to illustrate when to use and when not to use an experimental approach.

Features of Experimental Designs and Their Implications for Business Communication Research

Experimental research designs can be quite advantageous in any field of study for several reasons. However, there are contexts in which experimental designs can be best applied. The first, and most important, feature of experimental design is the kind of conclusion that can be drawn from such an approach: experimental designs can provide results that reflect a cause-and-effect relationship. The second major feature of experimental designs is the ability to isolate and control variables in a scientific manner. These features are particularly important in business communication research for two major reasons: (1) business communication research often provides heuristics or recommendations to readers regarding "best practices" in various communication contexts, and (2) communication is regarded as an essential variable that can affect many workplace outcomes (e.g., job satisfaction and retention). Therefore, if certain aspects of communication can be isolated and studied in an experimental design, strong claims can be made about those variables.

The first feature of experimental design, the ability to infer causal relationships, is valuable in any field of study, including business communication. One characteristic of business communication research is that many studies conclude by providing readers with "best practices." Business communication research, although it is most often rooted in theory, is often applied research. That is, studies seek to understand how practitioners can effectively communicate in various contexts. Therefore, many studies provide practitioners with tangible ways to improve various aspects of communication. These types of studies could benefit from experimental designs because the resulting recommendations may carry even more weight if a cause-effect relationship can be demonstrated. If a study cannot infer a cause-and-effect relationship, then the advice given may or may not be attributable to the variable being studied. For example, researchers investigated the relationship between rapport management language behavior and perceptions of injustice in the workplace (Campbell, White, & Durant, 2007). The authors used a content analysis approach and concluded that certain language behaviors were associated with perceptions of injustice among employees. Based on the analysis, the authors concluded that a leader's use of rapport management language behavior may be associated with an employee's perception of injustice. This is a perfectly valid and important conclusion. However, one thing that the study cannot conclude is that a leader's use of certain language behavior LEADS to a change in an employee's perception of injustice. That is, any number of variables could have contributed to the employees' perception of injustice. For example, the policies of the organization, the leader's personality, or any number of other variables might affect an employee's perception of injustice. The only way to know the impact of the communication variable is to utilize an experimental design and control for those possible confounding variables.

Another feature of experimental design, closely tied to the feature of causal relationships, is the ability to isolate and control variables. The second characteristic of business communication research is that many workplace outcomes are attributed to the variable of communication. Mishaps and problems that occur in the workplace are often attributed to a lack of communication or a communication breakdown. Beyond this anecdotal evidence, aspects of communication have been linked to a very diverse set of organizational outcomes, including impressions of likeability, initial return on investments, employee satisfaction, and perceived effectiveness of a leader (Byron & Baldridge, 2007; Gao, Darroch, Mather, & MacGregor, 2008; Madlock, 2008; Sharbrough, Simmons, & Cantrill, 2006). Though some of these studies already incorporate experimental designs, others do not. The reason for using experimental designs in studies that link communication to organizational outcomes is that experimental designs allow the researcher to isolate the variable of communication. By doing so, the researcher controls for confounding variables and can draw conclusions about the impact of communication.

Experimental designs are often very difficult to set up, but if set up correctly, they have distinct features regarding the conclusions that can be drawn from research studies: (1) experimental designs allow researchers to make and test claims about causal relationships, and (2) experimental designs allow researchers to isolate variables in a scientific and rigorous manner.

Contexts When Experimental Design Is Not the Best Option

Although experimental designs have distinct features, there are also some distinct contexts in which incorporating an experimental design may not be the best option. There are two major such contexts: (1) when a researcher wants to be able to generalize conclusions beyond the parameters of an experiment, and (2) when a researcher wants to maintain the realism of data that can come from ethnographic designs.

The first context involves the generalizability of results in research studies. Although experimental designs allow researchers to investigate causal relationships, the results cannot be generalized beyond the parameters of the experiment. That is, many experiments create and test variables in a very context-specific manner. In these cases, results can be generalized only as far as the experiment will allow. For example, experiments that recruit subjects aged 18-21 may only be able to generalize their results to that specific population.

The second context in which an experimental design may not be the best option is when researchers want to study realistic data. Experimental designs often create artificial contexts that may not reflect the actual situations they are testing. Therefore, researchers who want to study a specific group of people or a specific situation may be better served by ethnographic research methods such as case studies. This may be especially true of some communication research because communication does not occur in a vacuum. That is, there may be contexts in which researchers do not want to discover causal relationships. Instead, they may want to discover how a specific group of people uses language. For example, linguistic studies involving African-American Vernacular English (AAVE) will likely not conclude with causal relationships. Instead, those types of studies may draw conclusions simply about various patterns of language and how certain dialects of AAVE vary from one region to another.

The previous two sections have discussed features of experimental designs, as well as contexts when experiments may not be the best option. The following section will discuss two examples of how an experiment might have been incorporated in existing research. Finally, the paper will conclude with a discussion of the decision-making process of choosing a research design.

Examples of Incorporating Experimental Designs in Business Communication Research

As has been discussed, business communication research can utilize both experimental and non-experimental research designs. However, the fact remains that experimental designs are not widely used in business communication research. This section will present two studies that exemplify different research designs and describe how an experimental design might be incorporated in each, so that researchers can see how experimental designs can be readily implemented in current business communication research.

Example 1: Comparative Content Analysis


The first type of research design often incorporated in communication research is a content analysis, in which researchers analyze various sets of content and draw comparisons between them. One example of this type of study was found in the Journal of Business Communication, where researchers incorporated a longitudinal comparative content analysis design that examined presentational change in U.K. annual reports (Beattie, Dhanani, & Jones, 2008). The researchers looked at three sets of data from 1965, 1994, and 2004. The goal of the study was to discover trends in annual reporting by analyzing differences between annual reports from the three different samples. They looked for key differences in presentational content between the three groups, including variables like graph usage, overall document length, and use of pictures. Based on their analysis, they concluded that there is a trend to include more presentational content like graphs and pictures in recent annual reports, and that annual reports are becoming more normalized. For example, they noticed that certain types of financial information are almost always presented in bar and line graphs. This research design, involving content analysis, allows the researchers to draw conclusions about the differences between annual reports. Although these conclusions are interesting and important in understanding the current state of annual reports, they do not point to any conclusions that could change or alter the way in which annual reports are written. This is not a criticism of the study, but a commentary on how a future study might incorporate an experimental design and draw a new set of conclusions.

In their study, the authors speculate that graphs and images are included in annual reports as a means of "impression management" (Beattie, Dhanani, & Jones, 2008, p. 188). That is, by including more images and graphs, annual reports will be perceived as more attractive and will influence the impressions of those reading the document. This hypothesis is quite interesting and, if true, could have a major influence on the way writers and designers create annual reports. However, a content analysis design cannot actually determine whether or not this hypothesis is supported. One reason the authors cannot recommend any change in writing behavior is that, in an analysis of existing reports, confounding variables cannot be controlled for. That is, any number of variables could affect the overall impression of the report. For example, the reader's position in the company, age, or gender could have as much of an impact on the overall impression as the inclusion of more images and graphs. This, however, cannot be concretely concluded because variables cannot be controlled for in the current design.

The previous study incorporated a content analysis that answered some important research questions. At the same time, the design did not allow the researchers to make any strong claims about the impact of including more images and graphs in annual reports. However, if an experimental design were incorporated, claims about the impact of including images and graphs could be investigated. The researchers speculate in their article that impression management is the reason for including certain information like graphs and images in annual reports. To test this claim, an experiment could be set up. The first step in setting up such an experiment would be to determine a research question and subsequent hypothesis. In this case, the research question could be, "Do more images and graphs lead to more positive impressions of annual reports?" The hypothesis, then, based on previous research, would be, "The inclusion of more images and graphs in annual reports leads to more positive impressions of annual reports." After the research questions and hypotheses are determined, it is essential for the researcher to set up tangible ways to measure the variables in question. Independent variables are variables that a researcher believes will have an effect on a dependent variable. In this example, the independent variable is images and graphs, while the dependent variable is impression. Another way to think about the relationship between variables is that the dependent variable depends on the level of the independent variable. There are two ways in which impression could be measured: (1) use an existing measure from previous literature, or (2) create an instrument that will measure impression. The second option could be as simple as asking participants in the experiment to rate their overall impression of the annual report from 1 to 5.

There are several ways that the independent variable can be operationalized. One simple way is to use the content analysis to compute the average number of images and graphs in the annual reports for each era. For example, the researcher could take the sample from 1965 to determine how many images and graphs to include in a "low" group. The researcher could then take the average from the 1994 sample and create a "medium" group. Finally, the researcher could take the 2004 sample and determine how many images and graphs to include in the "high" group. Then, the researcher could choose an exemplary annual report and use the same report for the low, medium, and high groups. The only difference between groups would be the number of images and graphs included in the annual report.
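To make the operationalization concrete, here is a minimal sketch of how the three conditions might be derived from the content-analysis data. The per-report counts below are invented for illustration and are not taken from Beattie, Dhanani, & Jones (2008); only the procedure, averaging each era's sample to set a condition level, follows the description above.

```python
# Sketch: deriving low/medium/high visual-content conditions from
# content-analysis data. The per-report graph/image counts are
# hypothetical, not drawn from the published study.
from statistics import mean

# Hypothetical counts of graphs + images per annual report, by sample era.
counts_by_era = {
    "1965": [1, 0, 2, 1, 3],       # sparse visual content -> "low" condition
    "1994": [6, 8, 5, 7, 9],       # moderate -> "medium" condition
    "2004": [14, 18, 12, 16, 15],  # heavy -> "high" condition
}

# The era averages become the number of visuals inserted into the
# otherwise identical stimulus report for each experimental group.
conditions = {era: round(mean(counts)) for era, counts in counts_by_era.items()}
print(conditions)  # e.g. {'1965': 1, '1994': 7, '2004': 15}
```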

The experimental procedure would be simple and would employ a repeated measures design. This means the researcher would ask each participant to read a sample from each group and then fill out the measurement instrument for impression after reading each sample. After an appropriate number of participants have completed the experiment, the researcher could run statistical analyses to determine whether there are significant differences between the sample annual reports. If there are significant differences, the researcher can make stronger claims about the impact images and graphs have on impression. Because the researcher used identical annual reports, other confounding variables are controlled for, and only the manipulated variable, in this case images and graphs, can account for the differences between groups.
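As a sketch of the analysis step, the fragment below runs a Friedman test, one reasonable choice for ordinal 1-5 ratings collected in a repeated measures design (a repeated measures ANOVA would be another option). All ratings are fabricated for illustration.

```python
# Sketch: testing for differences in impression ratings across the three
# report versions in a repeated-measures design. Ratings are invented.
from scipy.stats import friedmanchisquare

# Each list holds one rating per participant; position i across the three
# lists is the same participant rating the low/medium/high versions.
low    = [2, 3, 2, 1, 3, 2, 2, 3, 1, 2]
medium = [3, 3, 4, 2, 4, 3, 3, 4, 2, 3]
high   = [4, 5, 4, 3, 5, 4, 4, 5, 3, 4]

stat, p = friedmanchisquare(low, medium, high)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")
# A small p-value suggests impression differs across versions; because
# everything but the visuals was held constant, that difference can be
# attributed to the manipulated variable.
```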

The important point to understand, however, is not the specifics of setting up the experiment. We have provided one example of how the study might be set up as an experiment, but there may be many other effective ways to set it up. The key point is that the conclusions a researcher can draw from an experiment will answer a different set of research questions than the content analysis. Furthermore, researchers will be able to make claims about causality. In this particular case, an experiment might help the researchers make claims about the importance of including images and graphs in annual reports. Regardless of the outcome of the experiment, the researchers can give practical advice to authors of annual reports. That is, even if the hypothesis is unsupported, and more images and graphs do not lead to a better impression of the annual report, the researchers can make a claim that might sound like the following:

    A content analysis revealed that authors of annual reports have been including more and more graphs and other images in their reports. However, this trend does not appear to significantly affect a reader's impression of the overall report. Therefore, it may not be necessary to include more images and graphs for the sake of impression management.

Conversely, if the hypothesis is indeed supported, and more graphs and images lead to a better overall impression of the annual report, then the researcher can make the following claim:

    A content analysis revealed that authors of annual reports have been including more and more graphs and images in their reports. Interestingly, this trend appears to significantly affect a reader's impression of the report. Therefore, it may be effective to include more graphs and images for the sake of impression management.

Again, regardless of the outcome of the experiment, experimental designs allow researchers to give advice to practitioners that is properly supported with causal claims.

Example 2: Correlation Studies

Correlation studies are interesting because they involve statistical tests, which may mislead some readers. That is, correlations reveal information about variables that may be associated with each other, but they do not necessarily indicate a causal relationship. For example, a correlation study might reveal that height and scores on a standardized test are correlated. However, it is most likely the case that height does not actually lead to higher standardized test scores. There may be several explanations for height being correlated with scores on a test. One explanation is that taller people will likely be older and, therefore, have more knowledge and score higher on a standardized test. The point, however, is that results from correlations can be misleading if they are not interpreted correctly.
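The height-and-test-scores example can be made concrete with simulated data: the raw correlation is strong, but it collapses once the confounder (age) is controlled for. Everything below is simulated purely to illustrate the argument.

```python
# Sketch: a spurious height-test-score correlation driven by age.
# All data are simulated to illustrate the confounding argument.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 500
age = rng.uniform(6, 18, n)                  # confounder
height = 80 + 6 * age + rng.normal(0, 5, n)  # height grows with age
score = 20 * age + rng.normal(0, 30, n)      # knowledge grows with age

r_raw, _ = pearsonr(height, score)
print(f"raw height-score correlation: r = {r_raw:.2f}")  # strongly positive

# Partial correlation: correlate the residuals of height and score after
# regressing each on age. Controlling for age, the association collapses.
def residuals(y, x):
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

r_partial, _ = pearsonr(residuals(height, age), residuals(score, age))
print(f"controlling for age: r = {r_partial:.2f}")       # near zero
```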

One study in the Journal of Business Communication sought to discover the impact that motivating language had on job satisfaction and perceived supervisor effectiveness (Sharbrough, Simmons, & Cantrill, 2006). In this study, the researchers hypothesized that a leader's use of motivating language had a positive correlation with job satisfaction and perceived supervisor effectiveness. The setup and study design are not incorrect in any way. However, because the study is a correlation, readers might misinterpret the results. One misinterpretation might stem from the title of the article, "Motivating language in industry: Its impact on job satisfaction and perceived supervisor effectiveness." The title uses the word impact, which may suggest to readers a causal relationship. However, a correlation design does not allow for such a conclusion. Researchers who run correlations can draw conclusions only about the relationship between variables, not about the direction of the relationship (which the word impact implies). And in this case, the authors do acknowledge, in the discussion of their results, that a causal claim cannot be made based on their study. If the researchers wanted to continue this line of research and explore a causal relationship between the variables, one possible solution would be to run a follow-up experiment to determine whether it is indeed the motivating language that is causing increased job satisfaction and perceived leader effectiveness.

To set up an experiment that complements the previous study, the same dependent variables and measurements could be used. That is, the exact same measures for job satisfaction and perceived supervisor effectiveness could be used. However, the independent variable, motivating language, would need to be manipulated rather than merely measured. One possible way to manipulate motivating language would be to create various scenarios in which a leader uses different levels of motivating language. For example, a participant in the study might read a series of statements from a fictional leader that includes a specific level of motivating language. One group of participants would read a series from a low-motivating leader and another group would read a series from a high-motivating leader. Furthermore, all participants would see a picture and read a profile of the exact same fictional leader. The only real difference in the experiment would be the use of motivating language. After reading the series of statements from the leader, the participants would be asked to fill out the dependent variable measures, job satisfaction and perceived supervisor effectiveness.
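For illustration, the sketch below analyzes such a two-group design with an independent-samples t-test. The satisfaction scores are invented; in the actual design, each value would be a participant's composite score on the existing job-satisfaction measure after reading the low- or high-motivating-language scenario.

```python
# Sketch: analyzing the two-group motivating-language experiment with an
# independent-samples t-test. Scores are hypothetical.
from scipy.stats import ttest_ind

low_ml_satisfaction  = [3.1, 2.8, 3.4, 2.5, 3.0, 2.9, 3.3, 2.7]
high_ml_satisfaction = [4.2, 3.9, 4.5, 3.6, 4.1, 4.4, 3.8, 4.0]

t, p = ttest_ind(high_ml_satisfaction, low_ml_satisfaction)
print(f"t = {t:.2f}, p = {p:.4f}")
# Because participants saw the same fictional leader and differed only in
# the language manipulation, a significant difference supports a causal
# claim about motivating language, not merely an association.
```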

The results of such an experiment could potentially be much stronger than those from a correlation study. Regardless of the outcome of the experiment, strong claims about motivating language could be made. For example, if there were significant differences between the low- and high-motivating groups, it could be concluded that motivating language indeed leads to perceived supervisor effectiveness and job satisfaction. This is a much stronger claim than simply saying that motivating language is associated with, or related to, job satisfaction and perceived supervisor effectiveness. With this stronger causal claim, researchers can confidently recommend the use of motivating language to leaders in the workplace, with the proper evidence to support that recommendation.

The Decision-Making Process in Choosing an Experimental Design

Two examples have been provided that describe the possibility of incorporating experimental research designs in business communication research. However, choosing a research design may look different when a researcher is developing a study from its inception. Therefore, the following section provides a look into the lead author's decision-making process in choosing a research design for his dissertation.

The first step in choosing a research design for any study is to determine a research question. The lead author's research question, in a simplified form, is the following: What linguistic patterns can a leader exhibit that lead to outcomes like supervisory support, trust, justice, rapport, and leader-member exchange? When deciding on a research design, my first instinct was to use "live" or naturally occurring data. That is, because the research question addressed the language behavior of a leader, I thought that using naturally occurring language data from leaders would be appropriate. With this design, I wanted to examine emails written by a leader to subordinates and determine whether any language patterns correlated with the outcomes in question. However, with this design, much like in the example provided in the previous section, only tests of association could be used, and no causal relationships could be established. If the ultimate goal of the research is to determine the impact or effect of language on relational outcomes, then a correlation would not be the most effective design.

The decision to move away from a correlation design and toward an experimental design involved two major factors: (1) an examination of previous literature and (2) the overall goal of the research. The first, an examination of previous literature, is essential in deciding which design to use. If there were no previous literature leading to my research questions, then a correlation might be appropriate. That is, if no previous research has shown a relationship between two or more variables, a correlation might be the best design in order to establish that relationship. However, if a relationship has already been established, then it might be appropriate to test causality between the variables by setting up an experiment. This was the case in the lead author's research. In fact, previous literature had already established a link between the variables in question (Campbell, White, & Johnson, 2003; White, 2007). Therefore, because a relationship between the variables had already been established, it was a natural progression to move to an experimental design to discover causal relationships between the variables.

The second factor in determining a research design is driven by the research question. That is, the research question will naturally lead to a clear goal for the research. Because the goal of my research is to determine specific language behavior a leader can exhibit that will impact relationships between leaders and members, it is appropriate to use an experimental design. That is, an experimental design will allow me to control for confounding variables and isolate the variable of language behavior. Furthermore, the goal of the research is also to provide leaders with tangible ways to improve relationships with subordinates. A causal relationship would address this goal more effectively than a correlation because it would allow me to draw conclusions without speculating that confounding variables may be affecting the relationship between leader and member. Therefore, after an analysis of the two factors described above, an easier decision about research design can be made.

Figure 1 shows a decision-making chart based upon the two factors previously presented. The chart presents the process that a researcher might go through in deciding between an experimental design and a correlation study. Figure 1, however, is not in any way comprehensive. There are many other research designs, including case studies, ethnographic research, and any number of other approaches. Figure 1 is merely meant to illustrate one possible way of deciding what kind of research design to employ.

Figure 1: Decision-Making Chart
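The chart itself is not reproduced in this transcript, so the sketch below encodes one reading of the two factors described above as a simple decision function; it covers only the experiment-versus-correlation branch and is not a reproduction of the figure.

```python
# Sketch: the two-factor decision logic behind Figure 1, encoded as a
# function. This is an interpretation of the chart, not the chart itself.
def choose_design(relationship_established: bool, goal_is_causal: bool) -> str:
    """Suggest a design from (1) prior literature and (2) the research goal."""
    if not relationship_established:
        # No prior link between the variables: establish one first.
        return "correlation study"
    if goal_is_causal:
        # A link exists and the goal is impact/effect: isolate and manipulate.
        return "experiment"
    return "correlation study (or another descriptive design)"

print(choose_design(relationship_established=True, goal_is_causal=True))
# -> experiment, as in the lead author's dissertation design
```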


Conclusion

Business communication research is rich with many interesting studies that incorporate designs ranging from case studies to correlations. Because communication does not occur in a vacuum, many researchers may feel pressure to conduct research that captures naturally occurring data in actual business contexts. We are in no way disparaging this type of research, as it adds to the overall understanding and knowledge of various aspects of communication. However, we do feel that there is a need to incorporate experimental designs into studies when appropriate. We have presented the features of experimental designs, as well as situations that may not warrant an experiment. Further, we have provided two examples of how experimental research could be incorporated into existing studies. These examples, again, were not used to point out any deficiencies in those studies. Rather, they were used to show readers that experimental designs can be effectively integrated into current issues in business communication research.

Finally, we presented a brief look into the decision-making process used by the lead author for his dissertation research. That process involved two major steps: evaluating previous literature and defining a goal for the outcome of the study. We presented a decision-making tree that is meant to guide readers, but it is not meant to be a comprehensive tool in the process of choosing a study design. There are many other factors that a researcher must consider, including resources and the amount of time available to complete the project. Overall, however, we feel that we have given communication researchers an overview of experimental design, and we hope to encourage researchers to begin to incorporate experiments that will continue to add to the field.

References

Beattie, Dhanani, & Jones. (2008). Investigating presentational change in U.K. annual reports. Journal of Business Communication, 45(2), 181-222.

Byron & Baldridge. (2007). Email recipients' impressions of senders' likability. Journal of Business Communication, 44(2), 137-160.

Campbell, White, & Durant. (2007). Necessary evils, (in)justice, and rapport management. Journal of Business Communication, 44(2), 161-185.

Campbell, White, & Johnson. (2003). Leader-member relations as a function of rapport management. Journal of Business Communication, 40(3), 170-194.

Gao, Darroch, Mather, & MacGregor. (2008). Signaling corporate strategy in IPO communication. Journal of Business Communication, 45(1), 3-30.

Madlock. (2008). The link between leadership style, communicator competence, and employee satisfaction. Journal of Business Communication, 45(1), 65-78.

Sharbrough, Simmons, & Cantrill. (2006). Motivating language in industry: Its impact on job satisfaction and perceived supervisor effectiveness. Journal of Business Communication, 43(4), 322-343.


Biographies

CHRIS LAM is a Ph.D. candidate in Technical Communication at the Illinois Institute of Technology. His dissertation applies sociolinguistics, specifically politeness theory, to leadership communication and seeks to discover the impact language may have on relationships between leaders and employees. He has previously applied sociolinguistics to online media such as Instant Messaging. Lam seeks to continue to apply aspects of sociolinguistics to various areas of business, technical, and professional communication.

MATT BAUER is an Assistant Professor of Linguistics at the Illinois Institute of Technology. Bauer received his Ph.D. in linguistics from Georgetown University in 2005 and held a postdoctoral fellowship as Interim Director of the Interdisciplinary Speech Research Laboratory at the University of British Columbia. Bauer also worked from 2003-05 as a Science Assistant at the National Science Foundation, participating in proposal review and data analysis for the Division of Social and Economic Sciences. Bauer's research uses acoustic and articulatory characteristics of American English dialects to study language change over time.
