Understanding AI Advertising From the Consumer Perspective

What Factors Determine Consumer Appreciation of AI-Created Advertisements?


LINWAN WU

University of South Carolina

linwanwu@mailbox.sc.edu


TAYLOR JING WEN

University of South Carolina

jwen2@mailbox.sc.edu

In recent years, artificial intelligence (AI) technology has been used to create advertising messages. This study examined the factors that influence consumers' overall appreciation of AI-created advertisements. The findings indicate that, in addition to its direct effect on consumer reactions to AI-created advertisements, consumers' perceived objectivity of the general advertisement creation process positively influences machine heuristic, a rule of thumb that machines are more secure and trustworthy than humans. This effect boosted consumer appreciation of AI-created advertisements. Consumers' perceived objectivity of advertisement creation negatively influenced perceived eeriness of AI advertising, which jeopardized consumer appreciation of AI-created advertisements. Consumers' feelings of uneasiness with robots were found to have a positive influence on both machine heuristic and perceived eeriness of AI advertising.



INTRODUCTION

Artificial intelligence (AI), which refers to the computational techniques that "identify patterns from data and infer underlying rules … for adaptively achieving specific goals" (Sundar, 2020, pp. 2–3), is now being utilized in advertising by professionals (Li, 2019). The adoption of AI is reflected in every step of the advertising process, including mining consumer insights, optimizing advertising

placement, evaluating campaign effectiveness, and even creating advertising messages (Chen, Xie, Dong, and Wang, 2019; Qin and Jiang, 2019). Currently, more than 50 percent of advertisers have taken advantage of AI technology to communicate with their consumers (Business Insider Intelligence, 2018). AI was believed to be associated with at least 80 percent of the digital advertising market (AdExchanger, 2019). Because AI is restructuring the advertising



Submitted May 13, 2020; revised October 18, 2020; accepted December 7, 2020; published online March 23, 2021.

DOI: 10.2501/JAR-2021-004




June 2021 JOURNAL OF ADVERTISING RESEARCH 133


industry (Kietzmann, Paschen, and Treen, 2018), there is a growing effort to study this emerging technology in advertising research.

Because AI in advertising is still in its infancy, the research has been primarily conceptual and mainly focused on "the introduction and explanation of related concepts" (Qin and Jiang, 2018, p. 339). Some scholars have discussed the building blocks of AI and explained how AI affects advertising by navigating readers through the consumer journey (Kietzmann et al., 2018). Others have explained how AI-powered advertisements work and highlighted the advantages of personalization, contextualization, and real-time delivery of AI-created creatives (Chen et al., 2019). Including a few empirical studies on AI-powered advertisements (Deng, Tan, Wang, and Pan, 2019; Malthouse, Hessary, Vakeel, Burke, and Fudurić, 2019), the current scholarship on AI advertising has examined AI technology predominantly from the perspective of advertising professionals. To date, little attention in this research stream has been paid to the consumer perspective. Consumers have been exposed to AI-created advertisements, such as the ones from Lexus (Griner, 2018) and Burger King (Sung, 2018). AI-created advertisements in this study refer to advertisements that are either partially or completely created by AI programs. In the Lexus case, AI was used to analyze a large number of previous car commercials and generate the script of the commercial. The Burger King advertisement was similar and was created by AI after it analyzed thousands of fast-food commercials. In both examples, consumers are explicitly informed of the role of AI in advertisement creation at the beginning of the commercials. Because average consumers may not understand how exactly AI works, they may not react to AI-created advertisements in the way expected by professionals. Do consumers appreciate AI-created advertisements? What factors influence their appreciation? As consumers are the final judges of advertising effectiveness, addressing these questions is critical to advertising practitioners

who may consider investing in AI advertising.

Accordingly, the purpose of this study was to explore the factors that influence consumers' overall appreciation of AI-created advertisements. The authors built a conceptual model rooted in the emerging scholarship of human–AI interaction. The human–AI interaction literature indicates that people's perceptions of machine agency and human agency decide how they react to AI and AI-created content (Sundar, 2020). The perception that machine agency improves human agency elicits a positive heuristic of AI, whereas the belief that machine agency threatens human agency triggers discomfort or eeriness. Because advertisement creation is traditionally considered to be a human job and AI advertising highlights automation, the battle between human agency and machine agency should strongly manifest in consumer reactions to AI-created advertisements. The current study thus contained both positive and negative perceptions of AI in advertising, which are indicated by machine heuristic and perceived eeriness, respectively, and presents how some antecedents (i.e., task objectivity, AI human likeness, and the previous experience of consumers with AI-like entities) function through these perceptions to eventually influence consumer appreciation of AI-created advertisements. The key contribution of this study is to provide a conceptual model that explains and predicts consumer responses to the advertising messages created by AI programs. It supplements the existing literature of AI advertising by focusing on the consumer perspective. The findings of this study will help digital advertisers make better decisions when it comes to leveraging AI technology in creating advertising messages.


LITERATURE REVIEW AND HYPOTHESES

Human–AI Interaction

Human–AI interaction is the process where people "orient to the media source as if it is an intelligent entity that is capable of modifying content in unprecedented ways" (Sundar, 2020, p. 6). According to the modality agency interactivity navigability (MAIN) model, some cues presented on the media interface could trigger certain heuristics that may influence consumer evaluations of the media content (Sundar, 2008). This is because most people are cognitive misers: They tend to think and solve problems in simpler, less effortful ways rather than in more sophisticated, more effortful ways (Fiske and Taylor, 1991). Once there are opportunities to take mental shortcuts during information processing, people tend to do so to avoid effortful analysis of information. The identification of AI as the information source may activate the stereotypes of machines in the minds of consumers, which will shape their responses to AI and AI-created content (Sundar, 2020; Sundar and Kim, 2019).

The common stereotypes of machines contain both positive and negative aspects. The positive aspect speaks to the ability of machines to take planned actions accurately (Gray, Gray, and Wegner, 2007). Machines operate on the basis of predetermined rules. When handling information, machines are often believed to be more objective and less biased than humans (Sundar, 2020; Sundar and Kim, 2019). Such positive stereotypes of machines are conceptualized as machine heuristic, which refers to "the mental shortcut wherein we attribute machine characteristics or machine-like operation when making judgments about the outcome of an interaction" (Sundar and Kim, 2019, p. 2). The negative aspect of machine stereotypes pertains to the inability of machines to experience emotions and sensations, which clearly differentiates machines from human beings (Gray et al., 2007; Haslam, Bain, Douge, Lee, and Bastian, 2008). This negative stereotype tends to trigger feelings of


discomfort or eeriness when the boundaries between humans and machines become obscure, such as when a machine behaves like a person or does a human job.

These different aspects of machine stereotypes influence how people assess AI performance in various domains. On the one hand, as machines are believed to work efficiently and objectively, perceptions of AI and AI-created content often are associated with the positive machine heuristic (M. K. Lee, 2018; Sundar and Kim, 2019). On the other hand, the designation of AI as an information source also may trigger perceived eeriness, as machines are considered "unfit for 'human tasks' that involve subjective judgments and emotional capabilities" (Sundar, 2020, p. 7). When facing AI-created advertisements, consumers will notice the role of AI as the message creator in some way (e.g., recognizing certain cues on the media interface or being directly informed by the advertisers). When consumers believe that AI creates the advertising messages, they may perceive the advertisement creation process to be more accurate and objective (i.e., machine heuristic) but still experience a certain degree of eeriness, as advertisement creation is traditionally considered a human job. Both machine heuristic and perceived eeriness are expected to influence consumer appreciation of AI-created advertisements, but in different ways. Thus:

H1: Machine heuristic positively influences consumer appreciation of AI-created advertisements.


H2: Perceived eeriness of AI advertising negatively influences consumer appreciation of AI-created advertisements.


Perceived Objectivity of Advertisement Creation

The human–AI interaction literature also has suggested several factors that may strengthen or weaken machine heuristic and perceived eeriness when people interact with AI or AI-created content. One such factor is task objectivity. An objective task "involves facts that are quantifiable and measurable" and is not "open to interpretation and based on personal opinion or intuition" (Castelo, Bos, and Lehmann, 2019, p. 812). As discussed previously, machines or robots are believed to be good at accurately performing planned actions but not at handling emotional situations or scenarios that involve subjective judgments (Gray et al., 2007). An objective task, such as assembling cars or performing data analysis, thus, often is perceived to be more suitable for machines than a subjective task, such as creating artistic work or recognizing people's emotions.

When it comes to AI or algorithms, the key building blocks of AI technology, existing research has identified similar patterns in people’s perceptions. Some scholars have found that people were more likely to trust and rely on algorithms to perform objective


tasks, such as providing financial services (Castelo et al., 2019). People also preferred algorithms for tasks that involve numbers and could be objectively evaluated (Logg, Minson, and Moore, 2019). In digital journalism, AI has been found to be more suitable for creating news articles on more objective topics, such as finance, sports, and weather (Liu and Wei, 2018; Waddell, 2018). When an algorithm was used to predict how funny a joke was, however, people were less likely to make decisions based on the recommendations of the algorithm, because this was considered a more subjective task (Yeomans, Shah, Mullainathan, and Kleinberg, 2019).

Accordingly, task objectivity is expected to play a role in influencing how consumers react to AI-created advertisements. There has long been debate on whether advertisement creation is subjective or objective. Those who advocate the subjective nature of advertising focus on the artistic elements of advertisements and the fact that consumers are not always rational (A. Lee, 2018). One of the most famous quotes for this viewpoint is from William Bernbach, who argued: "Advertising is fundamentally persuasion and persuasion happens to be not a science, but an art." Given the rapid advancement of digital technologies, however, data and analytics have played an irreplaceable role in this field, greatly injecting scientific elements into advertising (Ignatius, 2013). Consumers have heard the buzzwords "big data" and "digital analytics" for a while, which may shape their impressions of the process of advertisement creation to some extent. If consumers believe advertisement creation is more objective, they would be expected to appreciate AI-created advertisements, because the machine heuristic informs them that AI could perform objective tasks more accurately. The belief that advertisement creation is objective also could inhibit the negative stereotypes of machines, which essentially result from the inability of AI to conduct subjective missions. Task objectivity, therefore, is expected to positively influence consumer appreciation of AI-created advertisements through its positive impact on machine heuristic as well as its negative impact on perceived eeriness of AI advertising. Thus:

H3: Perceived objectivity of advertisement creation (a) positively influences machine heuristic but (b) negatively influences perceived eeriness of AI advertising.


H4: Perceived objectivity of advertisement creation positively influences consumer appreciation of AI-created advertisements.


AI Human Likeness

A second antecedent covered in this conceptual model is AI human likeness. The research paradigm of "Computers Are Social Actors" (Nass and Moon, 2000; Sundar and Nass, 2000) has shown that using humanlike characters in designing machines and computer programs can effectively trigger users to attribute humanlike characteristics to computers (Nowak and Rauh, 2005). Focusing on human responses in the human–computer interaction process, the paradigm suggests that individuals treat robotic agents as social actors and apply social rules when interacting with them as if they were interacting with real human beings (Nass and Moon, 2000; Nass, Moon, and Green, 1997). Computers with gendered voice output, for instance, were found to elicit gender stereotypes, such that participants rated a female-voiced computer to be more informative about love and relationships than a male-voiced computer (Nass et al., 1997). People also demonstrated more involvement in their conversations with the computers that provided intimate self-disclosure during human–computer interaction (Moon, 2000).

Perceived human likeness also has emerged to be an important concept in the human–AI interaction literature. It may influence both aspects of the aforementioned machine stereotypes: machine heuristic and perceived eeriness. When people find AI programs to exhibit more humanlike features, they tend to evaluate AI as less machinelike. There appears to be a negative relationship between human likeness and machine likeness. The higher the humanness that people perceive in AI in general, the less likely they would be to generate a machine heuristic that attributes machinelike characteristics to AI programs.

On the other hand, increasing human likeness also may trigger perceived eeriness, which could be explained by both social identity theory and the literature on the theory of the "uncanny valley of mind." Social identity theory posits that people establish their self-concept from meaningful social group relationships. This social identification prompts individuals to react negatively when the existence of an out-group member threatens the uniqueness of their in-group (Tajfel, 1982; Ferrari, Paladino, and Jetten, 2016). The increased human likeness of AI could be perceived as a challenge from the out-group (i.e., artifacts) to the distinctiveness of humans as an in-group. Especially in the current research context, advertisement creation may be considered by most people as a human job that could not easily be replaced by a machine, which may aggravate the perception of eeriness. Similarly, according to

the "uncanny valley of mind" theory, when the human likeness of machines reaches a close-to-realistic stage that overlaps with what is perceived as human distinctiveness, such as complex cognitive abilities and emotional responses, individuals experience a sensation of eeriness and disturbing feelings (Stein and Ohler, 2017). Such uncanniness of technology stems from the violation of the overarching mental categories in human minds that assume the categorical perceptions of "human" and "nonhuman" (MacDorman and Ishiguro, 2006). The increase in the human likeness of machines thus may cause unpleasant cognitive dissonance (MacDorman and Ishiguro, 2006).

The aforementioned rationale leads to the anticipation that consumer perceptions of AI human likeness would exert similar influences in the context of AI advertising. In particular, a high level of human likeness of AI programs may not only reduce consumer perceptions of machine heuristic but also challenge their belief of humans being unique, thus triggering a high level of eeriness of AI advertising. Given the expected relationships of machine heuristic and perceived eeriness with AI-created advertisement appreciation, it is also anticipated that perceived AI human likeness will negatively influence consumer appreciation of AI-created advertisements. Thus:

H5: Perceived AI human likeness (a) negatively influences machine heuristic but (b) positively influences perceived eeriness of AI advertising.


H6: Perceived AI human likeness negatively influences consumer appreciation of AI-created advertisements.


Uneasiness with Robots

The last antecedent expected to influence how consumers react to AI-created advertisements is rooted in their previous experiences with robots. Because a majority of laypeople do not directly engage in the process of AI creation of media messages, they tend to make sense of this process based on some similar entities with which they are more familiar, including machines, robots, and computers. Robots, in particular, are perceived as closer to AI, as entertainment media, including movies and video games, have long portrayed robots to possess high levels of intelligence (Besley and Shanahan, 2005; Sundar, Waddell, and Jung, 2016). According to cultivation theory, impressions of robots that are based on mass-media portrayals could form an illusion of reality used for making real-life judgments (Morgan and Shanahan, 2010). Research has confirmed that people's previous experiences with robots could significantly influence their perceptions of the usefulness of both companion and assistant robots (Sundar et al., 2016). Building on this, the human–AI interaction


Figure 1 The Model of Consumers' Appreciation of AI-Generated Advertisements. The model links Advertisement-Creation Objectivity (H3a, H3b, H4), AI Human Likeness (H5a, H5b, H6), and Uneasiness with Robots (H7a, H7b, H8) to Machine Heuristic and Perceived Eeriness, which in turn predict Appreciation of AI-Created Advertisements (H1, H2).



framework posits that a person's previous experience of AI or AI-like entities may influence the psychological effects of AI-related heuristics (Sundar, 2020).

In the context of AI advertising, similar effects of consumers' previous experiences with robots are expected to occur. This study focused on the concept of uneasiness with robots, which indicates the negative perceptions people form about robots or robotlike entities on the basis of their previous experiences. As discussed earlier, mass media content is a key source that contributes to people's previous experiences of robots. Because fictional media content often depicts robots to be highly intelligent, people are likely to question what will happen if that becomes true in real life, thus leading to feelings of uneasiness with robots (Sundar et al., 2016). This uneasiness does not exclusively result from past media consumption. A qualitative study recorded many answers from participants about how their direct experiences with robots or AI-based technologies contribute to their feelings of uneasiness (Shank, Graves, Gott, Gamez, and Rodriguez, 2019). Regardless of its specific sources, uneasiness with robots reflects the accumulated experiences of individuals with robots or technologies that possess certain levels of intelligence. On the basis of the human–AI interaction literature, consumer uneasiness with robots is expected to affect how they react to AI-created advertisements. Because machine heuristic is a positive mental shortcut about AI and perceived eeriness of AI advertising is negative in nature, uneasiness with robots is predicted to inhibit consumer appreciation of AI-created advertisements through its negative influence on machine heuristic as well as its positive influence on perceived eeriness. (See Figure 1 for the model showing all the hypothesized relationships.) Thus:

H7: Uneasiness with robots (a) negatively influences machine heuristic but (b) positively influences perceived eeriness of AI advertising.


H8: Uneasiness with robots negatively influences consumer appreciation of AI-created advertisements.


METHOD

Survey Procedure

To test the proposed conceptual model, the authors conducted an online survey using a nationally representative sample of U.S. consumers on Qualtrics. An introduction was presented to explain the purpose of the study: understanding the opinions of consumers regarding the integration of AI in creating advertising messages. The authors believe that this presurvey introduction is important because it is likely that not all respondents are aware of AI-created advertisements. It is a fair assumption, however, that a majority of the respondents have heard about AI and/or had direct or indirect experiences with AI-powered applications. A 2017 global study found that 84 percent of respondents used AI-powered services (Pega, 2017). Although not everyone is aware of the involvement of AI in advertisement creation, it is believed that informing people of the existence of AI-created advertisements will make them apply their past knowledge of AI to form perceptions of those advertisements. Respondents were given the definition of AI in everyday language. The key variables in the conceptual model were then measured, including perceived objectivity of advertisement creation, perceived AI human likeness, machine heuristic, perceived eeriness of AI advertising, uneasiness with robots, and appreciation of AI-created advertisements. Demographic information was collected at the end of the survey.


Survey Respondents

A sample of 528 U.S. residents (N = 528) was recruited by Qualtrics. This sample was managed to represent the U.S. population


Table 1 Respondents' Demographic Information

Demographic            Frequency      %
Gender
  Male                       257    48.7
  Female                     271    51.3
Age (years)
  18–25                       69    13.1
  26–35                      119    22.5
  36–45                      102    19.3
  46–55                       64    12.1
  56–65                       75    14.2
  >65                         99    18.8
Race
  White                      331    62.7
  African American            64    12.1
  Hispanic                    87    16.5
  Asian                       29     5.5
  Other                       17     3.2
Household Income
  Less than $20,000           77    14.6
  $20,000–$34,999             58    11.0
  $35,000–$49,999             75    14.2
  $50,000–$74,999            107    20.3
  $75,000–$99,999             70    13.3
  $100,000–$149,999           84    15.9
  $150,000–$199,999           36     6.8
  $200,000 or more            21     4.0
Total                        528   100


in terms of gender, age, race, and household income based on the U.S. Census data. From the Qualtrics panel, 1,525 individuals were selected randomly and invited to participate in the survey, and 640 respondents completed the study, a 41.97 percent response rate. Those who did not spend at least half of the median completion time were automatically removed by Qualtrics for quality management, thus leaving 528 valid responses in the final sample. The mean age of respondents was 45.77 (SD = 17.16). (See detailed demographic information in Table 1.)


Measures

All the measures were self-reported on a 7-point scale and modified to reflect the research topic of this study (See Table 2). Perceived objectivity of advertisement creation was measured using three items adopted from Mazurek (2019). Perceived AI human likeness was measured using six items adopted from Bellur and Sundar (2017). Machine heuristic was measured using four items adopted from Waddell (2018). Perceived eeriness of AI advertising was measured using three items adopted from Ho and MacDorman (2017). Uneasiness with robots was measured using four items adopted from Sundar et al. (2016). Appreciation of AI-created advertisements was measured using five items adopted from Van Rompay and Veltkamp (2014).


Data Analysis

To analyze the conceptual model and to test the proposed hypotheses, the authors first examined the reliability, convergent validity, and discriminant validity of all the measures based on a confirmatory factor analysis using AMOS software. Then, the authors checked for common method bias, because all the variables were measured among the same group of participants. After that, the authors tested the hypothesized relationships among the latent variables by using a structural equation modeling analysis in AMOS. Last, an additional analysis was conducted using SPSS.
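The analyses themselves were run in AMOS. Purely as an illustration, the hypothesized structural paths can be re-expressed in lavaan-style model syntax, which open-source SEM tools such as lavaan or semopy also accept; the construct abbreviations below (OBJ, AHL, UER, MH, PE, APP) are this sketch's shorthand, not the authors' labels:

```python
# Illustrative sketch only: the hypothesized structural model written in
# lavaan-style syntax. OBJ = advertisement-creation objectivity, AHL = AI
# human likeness, UER = uneasiness with robots, MH = machine heuristic,
# PE = perceived eeriness, APP = appreciation of AI-created advertisements.
model_desc = """
MH  ~ OBJ + AHL + UER             # H3a, H5a, H7a
PE  ~ OBJ + AHL + UER             # H3b, H5b, H7b
APP ~ MH + PE + OBJ + AHL + UER   # H1, H2, H4, H6, H8
"""

# Count the structural (regression) equations in the specification.
n_equations = sum(1 for line in model_desc.strip().splitlines() if "~" in line)
print(n_equations)  # 3 endogenous constructs: MH, PE, APP
```

Each regression line corresponds to one block of hypotheses: the two mediators (MH, PE) are regressed on the three antecedents, and appreciation (APP) is regressed on all five predictors.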


RESULTS

Measurement Model

A first-order confirmatory factor analysis was conducted to test the fitness of the measurement model for the latent variables. The initial model fit for the confirmatory factor analysis model was not desirable, so the standardized regression weight was examined for each item. One item of advertisement-creation objectivity (i.e., POAC_1), one item of AI human likeness (i.e., PAL_2), two items of uneasiness with robots (i.e., UER_3 and UER_4), and one item of AI advertisement appreciation (i.e., AAA_1) were deleted because of low regression weights. The revised confirmatory factor analysis model had desirable model fit on the basis of the recommendations from Hair (2010) and from Hooper, Coughlan, and Mullen (2008). The goodness-of-fit (GFI) indices for the revised measurement model indicated satisfactory fit for the data: χ2/df = 2.205, GFI = 0.940, normed-fit index (NFI) = 0.949, comparative fit index (CFI) = 0.971, and root-mean-square error of approximation (RMSEA) = 0.048.

Standardized loading, Cronbach's alpha, composite reliability, and average variance extracted estimates were used to assess reliability and convergent validity of the measures (See Table 3). Standardized loadings ranged from 0.682 to 0.911, which were all significant (Nunnally, 1978). Cronbach's alpha ranged from 0.817 to 0.922, all exceeding the minimum limit of 0.70 (Chin, 1998). Composite reliabilities ranged from 0.818 to 0.924, all exceeding the minimum limit of 0.70 (Hair, 2010). The estimates of average variance extracted ranged from 0.588 to 0.736, all exceeding the minimum limit of 0.50 (Fornell and Larcker, 1981). All constructs in the measurement model, therefore, had significant reliability and convergent validity.

To test discriminant validity, the square root of average variance extracted was calculated for each construct and compared to its correlation coefficients with other constructs (See Table 4). All the numbers on the diagonal of Table 4 (i.e., the square roots of average variance extracted) were larger than the corresponding off-diagonal numbers (i.e., correlation coefficients), indicating adequate discriminant validity (Fornell and Larcker, 1981).

Harman's single-factor test (see Podsakoff, MacKenzie, Lee, and Podsakoff, 2003) was conducted to check common method bias. Specifically, the authors ran an exploratory factor analysis, and the unrotated factor solution revealed that no single factor could explain a majority of the variance (the first factor explained only 33.538 percent of the total variance). This result indicated that common method bias was not an issue in the current data.

Table 2 Measurement Scales

Perceived Objectivity of Advertisement Creation
Question: Advertisement creation is a task that …
POAC_1  Refers to objects, events, or data that are observable.
POAC_2  Is not influenced by emotions, opinions, or personal feelings.
POAC_3  Is objective.

Perceived AI Human Likeness
Question: What is your overall perception of artificial intelligence (AI)?
PAL_1  Artificial/natural
PAL_2  Human made/humanlike
PAL_3  Without definite lifespan/mortal
PAL_4  Inanimate/living
PAL_5  Mechanical movement/biological movement
PAL_6  Synthetic/real

Machine Heuristic
Question: If AI creates an advertisement, then …
MH_1  The task is done objectively.
MH_2  The work is error-free.
MH_3  The work is unbiased.
MH_4  The task is done accurately.

Perceived Eeriness of AI Advertising
Question: If an ad is created by AI, I feel that ad is …
PE_1  Reassuring/eerie
PE_2  Natural/freaky
PE_3  Ordinary/supernatural

Uneasiness with Robots
Question: What is your overall perception of robots?
UER_1  I would feel uneasy if I was given a job where I had to use robots.
UER_2  I would feel very nervous just standing in front of a robot.
UER_3  I would feel uneasy if robots really had emotions.
UER_4  I feel that in the future, society will be dominated by robots.

Appreciation of AI-Created Advertisements
Question: Advertisements created by AI are:
AAA_1  Boring
AAA_2  Persuasive
AAA_3  Well designed
AAA_4  Attractive
AAA_5  Original

Table 3 Construct Reliability and Validity

Construct and Item             Loading      α     CR    AVE
Perceived Objectivity of                0.817  0.818  0.691
Advertisement Creation
  POAC_2                        0.835
  POAC_3                        0.828
Perceived AI Human Likeness             0.922  0.924  0.710
  PAL_1                         0.813
  PAL_3                         0.693
  PAL_4                         0.911
  PAL_5                         0.879
  PAL_6                         0.897
Machine Heuristic                       0.886  0.889  0.668
  MH_1                          0.742
  MH_2                          0.851
  MH_3                          0.858
  MH_4                          0.813
Perceived Eeriness of                   0.865  0.868  0.690
AI Advertising
  PE_1                          0.872
  PE_2                          0.892
  PE_3                          0.716
Uneasiness with Robots                  0.847  0.848  0.736
  UER_1                         0.847
  UER_2                         0.869
Appreciation of AI-Created              0.846  0.850  0.588
Advertisements
  AAA_2                         0.682
  AAA_3                         0.801
  AAA_4                         0.820
  AAA_5                         0.757
Note: CR = composite reliability; AVE = average variance extracted.
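The reliability statistics reported above can be reproduced directly from the standardized loadings in Table 3. A minimal sketch, using the standard composite-reliability and average-variance-extracted formulas associated with Fornell and Larcker (1981), applied to the three retained Perceived Eeriness items:

```python
# Minimal sketch: composite reliability (CR) and average variance extracted
# (AVE) computed from standardized CFA loadings. The loadings below are the
# three retained Perceived Eeriness items (PE_1 to PE_3) from Table 3.

def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    # where a standardized item's error variance is 1 - loading^2.
    s = sum(loadings)
    err = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + err)

def average_variance_extracted(loadings):
    # AVE = mean of the squared standardized loadings.
    return sum(l ** 2 for l in loadings) / len(loadings)

pe = [0.872, 0.892, 0.716]
print(round(composite_reliability(pe), 3))       # 0.868, as reported in Table 3
print(round(average_variance_extracted(pe), 3))  # 0.69 (Table 3 reports 0.690)
```

The square root of this AVE (about 0.830) also matches, up to rounding, the 0.831 diagonal entry for Perceived Eeriness in Table 4.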



Table 4 Correlations and Discriminant Validity

Variable                                      1         2         3         4         5        6
1. Objectivity of Advertisement Creation   0.831a
2. AI Human Likeness                       0.383**   0.843a
3. Machine Heuristic                       0.534**   0.267**   0.817a
4. Perceived Eeriness                     −0.173**  −0.036    −0.235**   0.831a
5. Uneasiness with Robots                  0.368**   0.219**   0.359**   0.089*    0.858a
6. Appreciation of AI-Created              0.503**   0.193**   0.618**  −0.235**   0.282**  0.767a
   Advertisements
Mean                                       4.074     3.489     4.220     4.262     3.982    4.438
SD                                         1.672     1.806     1.479     1.605     1.840    1.194

Note: a The square root of average variance extracted. *p < 0.05; **p < 0.01.
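The Fornell–Larcker comparison described above is mechanical: every off-diagonal correlation must be smaller in absolute value than the diagonal square-root-of-AVE entries of the two constructs involved. A sketch over the Table 4 values (the three-letter keys are shorthand introduced here, not the authors' labels):

```python
# Sketch of the Fornell-Larcker discriminant-validity check using Table 4.
# Keys: OBJ = objectivity of advertisement creation, AHL = AI human likeness,
# MH = machine heuristic, PE = perceived eeriness, UER = uneasiness with
# robots, APP = appreciation of AI-created advertisements.
sqrt_ave = {"OBJ": 0.831, "AHL": 0.843, "MH": 0.817,
            "PE": 0.831, "UER": 0.858, "APP": 0.767}  # Table 4 diagonal
corr = {  # off-diagonal correlations (lower triangle of Table 4)
    ("AHL", "OBJ"): 0.383, ("MH", "OBJ"): 0.534, ("MH", "AHL"): 0.267,
    ("PE", "OBJ"): -0.173, ("PE", "AHL"): -0.036, ("PE", "MH"): -0.235,
    ("UER", "OBJ"): 0.368, ("UER", "AHL"): 0.219, ("UER", "MH"): 0.359,
    ("UER", "PE"): 0.089, ("APP", "OBJ"): 0.503, ("APP", "AHL"): 0.193,
    ("APP", "MH"): 0.618, ("APP", "PE"): -0.235, ("APP", "UER"): 0.282,
}

# Discriminant validity holds when each construct's sqrt(AVE) exceeds the
# absolute value of its correlation with every other construct.
ok = all(abs(r) < min(sqrt_ave[a], sqrt_ave[b]) for (a, b), r in corr.items())
print(ok)  # True
```

The largest off-diagonal value (0.618, between machine heuristic and appreciation) is still below both constructs' diagonal entries, so the criterion holds for every pair.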


Structural Equation Model

On the basis of the validated measurement model, the proposed conceptual model was tested using structural equation modeling. The following indices were used to estimate the model fit: χ2/df < 5.00 (Hooper et al., 2008), GFI > 0.90 (Hooper et al., 2008), NFI > 0.90 (Bentler, 1992), CFI > 0.90 (Bentler, 1992), and RMSEA < 0.08 (Hu and Bentler, 1999).

The model fit indices for the conceptual model were as follows: χ2/df = 3.311, GFI = 0.908, NFI = 0.921, CFI = 0.943, and RMSEA = 0.066. All model-fit indices met the suggested acceptance levels, demonstrating that the model presented a good fit with the current data. The path coefficients were then examined for testing hypotheses.
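The acceptance check is a simple comparison of each reported index against its cited cutoff; it can be sketched with the values and thresholds from the text:

```python
# Sketch: compare the reported structural-model fit indices against the
# cutoff values cited in the text (Hooper et al., 2008; Bentler, 1992;
# Hu and Bentler, 1999). "<" means the index must fall below the cutoff;
# ">" means it must exceed it.
fit = {"chi2_df": 3.311, "GFI": 0.908, "NFI": 0.921, "CFI": 0.943, "RMSEA": 0.066}
thresholds = {"chi2_df": ("<", 5.00), "GFI": (">", 0.90), "NFI": (">", 0.90),
              "CFI": (">", 0.90), "RMSEA": ("<", 0.08)}

acceptable = all(
    fit[k] < cut if op == "<" else fit[k] > cut
    for k, (op, cut) in thresholds.items()
)
print(acceptable)  # True: every index meets its acceptance level
```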


Path Analysis

The path analysis results indicated that machine heuristic positively influenced consumer appreciation of AI-created advertisements; β = 0.494, SE = 0.052, p = .000 (see Figure 2). Thus, H1 was supported. Perceived eeriness of AI advertising negatively influenced consumer appreciation of AI-created advertisements; β = –0.108, SE = 0.039, p = .012. Thus, H2 was supported. Perceived objectivity of advertisement creation positively influenced machine heuristic; β = 0.560, SE = 0.045, p = .000. Thus, H3a was supported. Perceived objectivity of advertisement creation negatively influenced perceived eeriness of AI advertising; β = –0.305, SE = 0.044, p = .000. Thus, H3b was supported. Perceived objectivity of advertisement creation positively influenced consumer appreciation of AI-created advertisements; β = 0.264, SE = 0.045, p = .000. Thus, H4 was supported. Perceived AI human likeness did not influence machine heuristic; β = 0.070, SE = 0.028, p = .087. Thus, H5a was not supported. Perceived AI human likeness did not influence perceived eeriness of AI advertising; β = –0.016, SE = 0.031, p = .724. Thus, H5b was not supported. Perceived AI human likeness did not influence consumer appreciation of AI-created advertisements; β = –0.042, SE = 0.023, p = .266. Thus, H6 was not supported. Uneasiness with robots positively influenced machine heuristic; β = 0.189, SE = 0.035, p = .000. Thus, H7a was not supported. Uneasiness with robots positively influenced perceived eeriness of AI advertising; β = 0.198, SE = 0.039, p = .000. Thus, H7b was supported. Uneasiness with robots did not influence consumer appreciation of AI-created advertisements; β = 0.045, SE = 0.030, p = .286. Thus, H8 was not supported.

Additionally, the results of the path analysis indicated that the three antecedents (i.e., perceived objectivity of advertisement creation, perceived AI human likeness, and uneasiness with robots) explained 35.4 percent of the variance of machine heuristic and 13.2 percent of the variance of perceived eeriness of AI advertising. These five variables explained 51.1 percent of the variance of consumers’ appreciation of AI-created advertisements.


Additional Analysis on Demographics

Finally, a multiple linear regression analysis was conducted to test the association between demographics and respondent appreciation of AI-created advertisements to obtain some additional insights from the data. The results indicated that age was negatively associated with AI advertisement appreciation; β = –0.01, t = –2.95, p = .003, meaning that younger respondents tended to appreciate AI-created advertisements more than older respondents. Household income was positively associated with AI advertisement appreciation; β = 0.13, t = 4.59, p = .000, indicating that respondents with a higher household income tended to appreciate AI-created advertisements more than respondents with a lower household income.
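A regression of this form can be illustrated with ordinary least squares on simulated data whose slopes mirror the reported estimates. This is a sketch only: the age and income scales, the intercept, and the noise level are assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
age = rng.uniform(18, 75, n)        # assumed age range
income = rng.uniform(1, 10, n)      # assumed income-bracket scale

# Simulated appreciation scores with slopes matching the reported
# betas: -0.01 for age, +0.13 for household income.
appreciation = 5.0 - 0.01 * age + 0.13 * income + rng.normal(0, 0.5, n)

# Design matrix: intercept, age, income; solve by least squares.
X = np.column_stack([np.ones(n), age, income])
beta, *_ = np.linalg.lstsq(X, appreciation, rcond=None)
# beta[1] recovers the negative age slope; beta[2] the income slope.
```

With enough respondents, the estimated slopes land close to the generating values, which is what makes the sign interpretation in the text (younger and higher-income respondents appreciating AI-created advertisements more) read off directly from the coefficients.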


Figure 2  The SEM Model with Path Coefficients

Notes: ***p < 0.001; **p < 0.01; *p < 0.05; dashed arrows refer to insignificant relationships.



DISCUSSION

Although some scholars have noticed the unprecedented impact of AI on the advertising industry, they have mainly examined this technology from the perspective of the advertiser. To the best knowledge of the authors, this study is the first to investigate what influences consumers’ general appreciation of AI-created advertisements. A model was built based on responses from a nationally representative sample of U.S. consumers. The findings indicated that consumers’ overall perceptions of advertisement creation as an objective process and feelings of unease with robots are important antecedents that determine their appreciation of AI-created advertisements through the psychological mechanisms of machine heuristic and perceived eeriness. These findings are believed to provide both theoretical and practical insights that help deepen understanding of AI advertising.


Theoretical Implications

One of the primary theoretical contributions of this study is to investigate AI advertising from the consumer perspective. This fills the gap in the current advertising literature, which has predominantly examined AI in advertising from the perspective of advertisers or advertising professionals. Although some scholars have proposed several directions for future research on AI advertising (Li, 2019), the consumer perspective has not been sufficiently emphasized. Consumers are the final judges of advertising effectiveness; therefore, their overall appreciation of AI-created advertisements should be a key consideration in this body of scholarship.

This study also contributes to the human–AI interaction literature by integrating different theoretical frameworks (e.g., the MAIN model, the “Computers Are Social Actors” paradigm, and social identity theory) as well as by demonstrating the psychological impact of AI source orientation in the advertising domain, which has not attracted sufficient attention from human–AI interaction scholars. The findings that both machine heuristic and perceived eeriness influence consumer appreciation of AI-created advertisements underscore the important roles of both positive and negative stereotypes of machines in determining consumer reactions to AI and AI-created content, thus supporting the human–AI interaction model (see Sundar, 2020).

Another contribution of this study is to substantiate the prominence of task objectivity in understanding consumer reactions to



AI-created advertisements. To the authors’ best knowledge, the extent to which consumers perceive the process of advertisement creation to be objective has not been widely examined in the extant advertising literature. This is probably because advertisement creation traditionally is associated with the idea of creativity (Dahlén, Rosengren, and Törn, 2008). Although there are numbers and data involved in components of the advertising process such as consumer research and media buying, these components typically are inaccessible to consumers. The adoption of AI technology in advertising, however, makes task objectivity a factor that cannot be ignored when it comes to examining consumer responses to advertisements. The current study helps clarify why task objectivity is important to AI advertising. Specifically, consumer beliefs of advertisement creation being objective benefit their appreciation of AI-created advertisements by facilitating the positive effects of machine heuristic as well as inhibiting the negative effects of perceived eeriness. In addition to these indirect effects, the confirmed research model also indicated that task objectivity had a significant direct effect on AI advertisement appreciation, revealing the possibility of other unexplored mediators (see Zhao, Lynch, and Chen, 2010 for the theoretical implications of significant direct effects). The effort of theory building for AI advertising should take task objectivity and its underlying mechanisms into account.

An additional antecedent that has been confirmed to influence consumers’ overall appreciation of AI-created advertisements is the feeling of uneasiness with robots. This antecedent represents the previous (direct or indirect) experiences of individuals with entities that are similar to AI. Contrary to the authors’ prediction, however, consumer experiences of uneasiness with robots were found not only to increase perceived eeriness but also to facilitate machine heuristic. These findings are probably due to the fact that uneasiness indicates one’s unfamiliarity with technologies that showcase some level of intelligence, like AI and robots. In other words, the reason why people feel uneasy with these technologies is that they do not possess the computational and engineering knowledge that helps make sense of these technologies. The more uneasy one feels with robots, therefore, the more likely one will rely on stereotypes of robots or machines when interacting with AI. Because the stereotypes of machines contain both positive and negative aspects, as indicated by the human–AI interaction literature, it is understandable that uneasiness with robots positively contributes to both machine heuristic and perceived eeriness. The current research model further indicated that the direct effect of uneasiness with robots on AI advertisement appreciation was not significant. This suggests that machine heuristic and perceived eeriness explain most of the variance in this relationship and that the possibility of other mediators existing in this relationship is small. As a double-edged sword, uneasiness with robots should be given extra attention in future research on AI advertising.

Last, it is also worth noting that perceived human likeness of AI seems to exert less impact on consumer reactions to AI-created advertisements than predicted. Although previous human–AI interaction research reported significant influences of human likeness on consumer responses to AI, most studies examined this construct from the perspective of anthropomorphism when people have direct interactions with AI programs, such as chatbots. Anthropomorphism is the process whereby people attribute human characteristics—including physical features, such as a humanlike face or body, and humanlike capability, such as rational thinking and conscious feeling—to nonhumans (Waytz, Cacioppo, and Epley, 2010; Waytz, Heafner, and Epley, 2014). Anthropomorphic design cues, such as giving machines human names, for instance, have been documented to influence user perceptions of chatbots as well as the companies behind the chatbots (Araujo, 2018). The insignificant effects of human likeness in this study could be attributed to the specific research context: AI advertising. Unlike conversing with a chatbot, consumers seldom interact with the AI creators of advertisements. Because of the lack of consumer interaction with AI, it is understandable that human likeness plays a less significant role in the context of AI advertising compared with its role in other human–AI interaction scenarios. The findings of this study on human likeness suggest that human–AI interaction researchers should carefully identify the variables that fit their unique research contexts.


Practical Implications

In addition to the aforementioned theoretical contributions, the findings of this study are meaningful to advertising professionals who plan to invest in AI technology for creating advertisements. First, this study identified machine heuristic and perceived



eeriness as two underlying factors that contribute to consumers’ overall appreciation of AI-created advertisements. These findings are useful to practitioners who may be able to strengthen machine heuristic or weaken perceived eeriness when delivering AI-created advertisements to consumers. One example of doing so is taking advantage of the media context. A television show illustrating how AI and related technologies help the FBI solve murders may help trigger the machine heuristic of the audience, whereas a documentary about arts and humanity may make people depreciate the creativity of AI. The selection of media context, therefore, should be a concern of advertising professionals when presenting advertisements created by AI.

Second, this study confirmed that once consumers believe advertisement creation to be a more objective task, they tend to appreciate AI-created advertisements to a greater extent. Because consumer perceptions of task objectivity are malleable and could be shaped by external factors (Castelo et al., 2019), this finding provides professionals with a tactic for increasing consumer appreciation and acceptance of AI-created advertisements: emphasizing the objective process of advertisement creation. Of course, consumer perceptions of advertisement creation are formed longitudinally and are influenced by many factors. Whereas a single advertiser may not be able to change consumer perceptions of advertisement creation in general, it is possible to shape how consumers perceive advertisement creation in a specific campaign. In the Lexus video commercial that was scripted entirely using AI, for example, the words “written by artificial intelligence” appeared at the beginning of the commercial.

A more effective way could be to disclose more about how AI objectively creates the advertisement, such as stating “This commercial was created by AI after analyzing big data in the automobile industry” or “This commercial was created by AI based on its systematic analysis of 10 years of automobile advertisements.” Such disclosures present a justification for using AI in advertisement creation (as handling big data is beyond human capacity, for example) while enhancing consumer-perceived objectivity of the advertisement creation process. In the digital context in which AI currently is used to analyze consumer browsing behaviors and provide customized advertising messages, disclosing how AI works may not only address consumer privacy concerns but also increase the perceived objectivity of the advertising process. This could be achieved by a simple disclosure such as “This advertisement is created by AI based on digital trace data on this platform.”

Third, as mentioned earlier, consumers’ existing uneasiness with robots is a double-edged sword in deciding their reactions to AI-created advertisements. It could both benefit their appreciation of AI-created advertisements through the activation of the positive machine heuristic and jeopardize such appreciation by making them perceive AI advertising as eerie. For advertising professionals who want to adopt AI for advertisement creation, it is important, therefore, to understand how uneasy their target consumers may feel about robots or technologies with certain levels of intelligence. Because a large part of the uneasiness comes from the consumption of mass media content (Sundar et al., 2016), consumer segmentation based on previous media consumption may provide some solutions. Age also could be a segmentation criterion. Because younger people, compared with older people, may feel less uneasy with robots or related technologies, advertisers who target different age groups should pay attention to this factor when running AI-created advertisements.

Last, the finding that AI human likeness is not a factor that determines consumer reactions to AI-created advertisements also provides some useful insights to professionals. There seems to be a trend toward making AI programs humanlike in many domains, such as AI robots with human appearances (Cutting Edge, 2019), chatbots with human voices (Edwards, Edwards, Stoll, Lin, and Massey, 2019), and AI programs that behave like human beings (Fachechi, Agliari, and Barra, 2019). Although increased human likeness may encourage people to accept AI when they are actually interacting with AI applications, this study suggests that it is not a major concern when people just deal with the media content (i.e., advertisements) created by AI. On the basis of this finding, advertisers do not have to invest too much in personifying their AI programs, which could help manage the advertising budget.


Limitations and Future Research

Although this study is believed to provide meaningful theoretical and practical contributions, there is definitely much space for future investigations. First, this study focused on overall perceptions of AI-created advertisements. Future research may explore this topic from a different perspective by focusing on consumer responses to specific AI-created advertisements with the help of experimental studies. This line of research may take different media platforms and different message formats into consideration. Other advertisement-specific variables, such as brand attitude and purchase likelihood, also could be tested. Future research also may compare human-created advertisements with AI-created advertisements and explore what factors drive the different responses of consumers. Second, the authors explicitly informed respondents that AI would be the focus of this study. This demand characteristic of the survey introduction may bias respondent answers to some extent. Future research can use a more neutral introduction. Third, this study was conducted using a nationally representative sample, which is not a limitation but opens the door for future research to focus on different groups. The additional analysis on demographics discovered that age and income are potential factors that influence consumer reactions to AI-created advertisements. Future research could, on the basis of these findings, explore why younger individuals and people with higher income find AI-created advertisements more acceptable. Potential factors may include media consumption and cultivation as well as access to AI-powered services or applications. The education level of respondents was not measured in this study. Future research may obtain additional insight into how education could affect consumer responses to AI in advertising. Fourth, the confirmed relationships in this conceptual model also could provide several possible directions for future research. Experiments, for example, could be conducted in which participant-perceived objectivity of the advertisement creation process is manipulated, which could help detect the causal relationship between task objectivity and responses to AI-created advertisements. Fifth, other factors that are not contained in the current model also may be considered for future research. A key feature of AI-created advertisements, for example, is automation, which indicates that consumers lose some control over receiving advertising messages. Thus, future research may study the role of consumer locus of control in affecting their responses to AI-created/-distributed advertisements.


ABOUT THE AUTHORS


Linwan Wu is an assistant professor in the School of Journalism and Mass Communications at the University of South Carolina. His research focuses on advertising psychology and communication technology. Wu’s work has been published in the Journal of Advertising, Journal of Advertising Research, and International Journal of Advertising, among other journals.


Taylor Jing Wen is an assistant professor in the School of Journalism and Mass Communications at the University of South Carolina. She conducts research in consumer psychology and media effects in the context of marketing, health, and risk communications. Her work can be found in the International Journal of Advertising, Journal of Current Issues & Research in Advertising, and Journal of Interactive Advertising, among others.

REFERENCES


AdExchanger. (2019, January 3). “AI Is Eating Advertising: And 2019 Will Be Critical for Getting It Right.” Retrieved from https://adexchanger.com/data-driven-thinking/ai-is-eating-advertising-and-2019-will-be-critical-for-getting-it-right/

Araujo, T. “Living Up to the Chatbot Hype: The Influence of Anthropomorphic Design Cues and Communicative Agency Framing on Conversational Agent and Company Perceptions.” Computers in Human Behavior 85 (2018): 183–189.

Bellur, S., and S. S. Sundar. “Talking Health with a Machine: How Does Message Interactivity Affect Attitudes and Cognitions?” Human Communication Research 43, 1 (2017): 25–53.

Bentler, P. M. “On the Fit of Models to Covariances and Methodology to the Bulletin.” Psychological Bulletin 112, 3 (1992): 400–404.


Besley, J. C., and J. Shanahan. “Media Attention and Exposure in Relation to Support for Agricultural Biotechnology.” Science Communication 26, 4 (2005): 347–367.

Business Insider Intelligence. (2018, March 6). “AI in Marketing: How Brands Can Improve Personalization, Enhance Ad Targeting, and Make Their Marketing Teams More Agile.” Retrieved from https://www.businessinsider.com/ai-marketing-report-2018-3

Castelo, N., M. W. Bos, and D. R. Lehmann. “Task-Dependent Algorithm Aversion.” Journal of Marketing Research 56, 5 (2019): 809–825.


Chen, G., P. Xie, J. Dong, and T. Wang. “Understanding Programmatic Creative: The Role of AI.” Journal of Advertising 48, 4 (2019): 347–355.

Chin, W. W. “Issues and Opinions on Structural Equation Modeling.” MIS Quarterly 22, 1 (1998): 7–16.

Cutting Edge. (2019, October 24). “Our Future Life with Humanlike AI Robots.” Retrieved from http://cuttingedgefestival.no/humanlike-ai-robots/

Dahlén, M., S. Rosengren, and F. Törn. “Advertising Creativity Matters.” Journal of Advertising Research 48, 3 (2008): 392–403.


Deng, S., C. W. Tan, W. Wang, and Y. Pan. “Smart Generation System of Personalized Advertising Copy and Its Application to Advertising Practice and Research.” Journal of Advertising 48, 4 (2019): 356–365.

Edwards, C., A. Edwards, B. Stoll, X. Lin, and N. Massey. “Evaluations of an Artificial Intelligence Instructor’s Voice: Social Identity Theory in Human–Robot Interactions.” Computers in Human Behavior 90 (2019): 357–362.


Ferrari, F., M. P. Paladino, and J. Jetten. “Blurring Human–Machine Distinctions: Anthropomorphic Appearance in Social Robots as a Threat to Human Distinctiveness.” International Journal of Social Robotics 8, 2 (2016): 287–302.

Fachechi, A., E. Agliari, and A. Barra. “Dreaming Neural Networks: Forgetting Spurious Memories and Reinforcing Pure Ones.” Neural Networks 112 (2019): 24–40.

Fiske, S. T., and S. E. Taylor. Social Cognition. New York: McGraw-Hill, 1991.

Fornell, C., and D. F. Larcker. “Structural Equation Models with Unobservable Variables and Measurement Error: Algebra and Statistics.” Journal of Marketing Research 18, 3 (1981): 382–388.

Hair, J. F. Multivariate Data Analysis. Upper Saddle River, NJ: Prentice Hall, 2010.

Haslam, N., P. Bain, L. Douge, M. Lee, and B. Bastian. “More Human than You: Attributing Humanness to Self and Others.” Journal of Personality and Social Psychology 89, 6 (2005): 937–950.

Ho, C. C., and K. F. MacDorman. “Measuring the Uncanny Valley Effect.” International Journal of Social Robotics 9, 1 (2017): 129–139.


Hooper, D., J. Coughlan, and M. Mullen. “Structural Equation Modelling: Guidelines for Determining Model Fit.” Electronic Journal of Business Research Methods 6, 1 (2008): 53–60.

Hu, L., and P. M. Bentler. “Cutoff Criteria for Fit Indexes in Covariance Structure Analysis: Conventional Criteria versus New Alternatives.” Structural Equation Modeling: A Multidisciplinary Journal 6, 1 (1999): 1–55.

Ignatius, A. “Advertising Is an Art—and a Science.” Harvard Business Review 91, 3 (2013): 10–11.

Gray, H. M., K. Gray, and D. M. Wegner. “Dimensions of Mind Perception.” Science 315, 5812 (2007): 619.


Griner, D. (2018, May 10). “An AI Tried to Write the Perfect Lexus Ad. Here’s a Scene-by-Scene Look at What It Was Thinking.” Retrieved from the AdWeek website: https://www.adweek.com/programmatic/an-ai-tried-to-write-the-perfect-lexus-ad-heres-a-scene-by-scene-look-at-what-it-was-thinking/

Kietzmann, J., J. Paschen, and E. Treen. “Artificial Intelligence in Advertising: How Marketers Can Leverage Artificial Intelligence along the Consumer Journey.” Journal of Advertising Research 58, 3 (2018): 263–267.

Lee, A. (2018, September 23). “Marketing and Advertising Are Subjective, Objectively Speaking.” Retrieved from https://aaronjameslee.co/marketing- advertising-subjective-objectively-speaking/

Lee, M. K. “Understanding Perception of Algorithmic Decisions: Fairness, Trust, and Emotion in Response to Algorithmic Management.” Big Data & Society 5, 1 (2018): 1–16.

Li, H. “Special Section Introduction: Artificial Intelligence and Advertising.” Journal of Advertising 48, 4 (2019): 333–337.


Liu, B., and L. Wei. “Machine Authorship In Situ: Effect of News Organization and News Genre on News Credibility.” Digital Journalism 7, 5 (2019): 635–657.

Logg, J. M., J. A. Minson, and D. A. Moore. “Algorithm Appreciation: People Prefer Algorithmic to Human Judgment.” Organizational Behavior and Human Decision Processes 151 (2019): 90–103.

MacDorman, K. F., and H. Ishiguro. “The Uncanny Advantage of Using Androids in Cognitive and Social Science Research.” Interaction Studies 7, 3 (2006): 297–337.

Malthouse, E. C., Y. K. Hessary, K. A. Vakeel, R. Burke, and M. Fudurić. “An Algorithm for Allocating Sponsored Recommendations and Content: Unifying Programmatic Advertising and Recommender Systems.” Journal of Advertising 48, 4 (2019): 366–379.

Mazurek, K. Human vs. Artificial Intelligence (master’s thesis). Universitat Pompeu Fabra, 2019.

Moon, Y. “Intimate Exchanges: Using Computers to Elicit Self-Disclosure from Consumers.” Journal of Consumer Research 26, 4 (2000): 323–339.


Morgan, M., and J. Shanahan. “The State of Cultivation.” Journal of Broadcasting & Electronic Media 54, 2 (2010): 337–355.

Nass, C., and Y. Moon. “Machines and Mindlessness: Social Responses to Computers.” Journal of Social Issues 56, 1 (2000): 81–103.


Nass, C., Y. Moon, and N. Green. “Are Machines Gender Neutral? Gender- Stereotypic Responses to Computers with Voices.” Journal of Applied Social Psychology 27, 10 (1997): 864–876.

Nowak, K. L., and C. Rauh. “The Influence of the Avatar on Online Perceptions of Anthropomorphism, Androgyny, Credibility, Homophily, and Attraction.” Journal of Computer-Mediated Communication 11, 1 (2005): 153–178.


Nunnally, J. C. Psychometric Theory. New York: McGraw-Hill, 1978.


Pega. (2017, April). “What Consumers Really Think about AI: A Global Study.” Retrieved from the CIO Summits website: https://www.ciosummits.com/what-consumers-really-think-about-ai.pdf

Podsakoff, P. M., S. B. MacKenzie, J. Y. Lee, and N. P. Podsakoff. “Common Method Biases in Behavioral Research: A Critical Review of the Literature and Recommended Remedies.” Journal of Applied Psychology 88, 5 (2003): 879–903.



Qin, X., and Z. Jiang. “The Impact of AI on the Advertising Process: The Chinese Experience.” Journal of Advertising 48, 4 (2019): 338–346.


Shank, D. B., C. Graves, A. Gott, P. Gamez, and S. Rodriguez. “Feeling Our Way to Machine Minds: People’s Emotions when Perceiving Mind in Artificial Intelligence.” Computers in Human Behavior 98 (2019): 256–266.

Stein, J. P., and P. Ohler. “Venturing into the Uncanny Valley of Mind: The Influence of Mind Attribution on the Acceptance of Human-Like Characters in a Virtual Reality Setting.” Cognition 160 (2017): 43–50.

Sundar, S. S. “The MAIN Model: A Heuristic Approach to Understanding Technology Effects on Credibility.” In Digital Media, Youth, and Credibility, M. J. Metzger and A. J. Flanagin, eds. Cambridge, MA: The MIT Press, 2008.


Sundar, S. S. “Rise of Machine Agency: A Framework for Studying the Psychology of Human–AI Interaction (HAII).” Journal of Computer-Mediated Communication 25, 1 (2020): 74–88.

Sundar, S. S., and J. Kim. “Machine Heuristic: When We Trust Computers More than Humans with Our Personal Information.” Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19). New York: ACM, 2019.

Sundar, S. S., and C. Nass. “Source Orientation in Human–Computer Interaction: Programmer, Networker, or Independent Social Actor.” Communication Research 27, 6 (2000): 683–703.

Sundar, S. S., T. F. Waddell, and E. Jung. “The Hollywood Robot Syndrome: Media Effects on Older Adults’ Robot Attitudes and Adoption Intentions.” Proceedings of the 2016 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’16). New York: ACM, 2016.


Sung, M. (2018, September 27). “Burger King’s AI-Written Ads Are Beautiful Disasters.” Retrieved from the Mashable website: https://mashable.com/article/burger-king-ai-ads-beautiful-disaster/

Tajfel, H. “Social Psychology of Intergroup Relations.” Annual Review of Psychology 33, 1 (1982): 1–39.

Van Rompay, T. J., and M. Veltkamp. “Product Packaging Metaphors: Effects of Ambiguity and Explanatory Information on Consumer Appreciation and Brand Perception.” Psychology & Marketing 31, 6 (2014): 404–415.

Waddell, T. F. “A Robot Wrote This? How Perceived Machine Authorship Affects News Credibility.” Digital Journalism 6, 2 (2018): 236–255.

Waytz, A., J. Cacioppo, and N. Epley. “Who Sees Human? The Stability and Importance of Individual Differences in Anthropomorphism.” Perspectives on Psychological Science 5, 3 (2010): 219–232.

Waytz, A., J. Heafner, and N. Epley. “The Mind in the Machine: Anthropomorphism Increases Trust in an Autonomous Vehicle.” Journal of Experimental Social Psychology 52 (2014): 113–117.

Yeomans, M., A. Shah, S. Mullainathan, and J. Kleinberg. “Making Sense of Recommendations.” Journal of Behavioral Decision Making 32, 4 (2019): 403–414.

Zhao, X., J. G. Lynch, and Q. Chen. “Reconsidering Baron and Kenny: Myths and Truths about Mediation Analysis.” Journal of Consumer Research 37, 2 (2010): 197–206.
