
Yujin Kim♦, Hyung-Seok Lee°

Factors Affecting the Continuous Intention to Use ChatGPT: Evidence from Korea and USA

Abstract: This study investigated the relationships among privacy concerns, trust, and continued use intention of ChatGPT users, with a focus on cultural differences between Korean and U.S. users. We analyzed survey data using structural equation modeling to explore how both ChatGPT's characteristics (e.g., anthropomorphism, personalization, interactivity, information accuracy, and system flexibility) and user traits (e.g., prior knowledge, AI literacy, and personal innovativeness) influence privacy and trust. Key findings reveal that cultural differences significantly moderate these relationships. For instance, Korean users perceive anthropomorphism as a trust-enhancing feature and link information accuracy to privacy, while U.S. users see interactivity as a greater privacy risk. Despite these differences, trust positively impacts continued use intention for both groups. This study offers valuable insights for AI service developers, highlighting the need to consider cultural context when designing services.

Keywords: ChatGPT, Generative AI, Privacy concerns, Trust

Ⅰ. Introduction

Artificial intelligence (AI) is profoundly permeating various sectors globally, causing dramatic shifts in both professional and personal lives[1]. While AI has been utilized in numerous fields for many years, generative AI represents a profound shift in AI technology advancement due to its user-friendly interfaces and high performance[2]. OpenAI's ChatGPT, for instance, surpassed 100 million users in an exceptionally short period, signifying the rapid progress of the technologies. Such advancements have the potential to catalyze innovation in various domains, including education and healthcare, thereby enhancing the convenience and efficiency of our lives[3,4].

However, this rapid growth has escalated privacy concerns due to the increased collection and processing of user data. ChatGPT processes and stores vast amounts of data, which raises user concerns regarding the use and sharing of their information. Notably, OpenAI has not transparently disclosed with whom it shares the collected data or the purposes for which the data are used; this opacity can undermine user trust[5,6].

Privacy concerns significantly affect technology adoption and the formation of trust[7,8]. If users distrust ChatGPT, it may negatively influence technology adoption and satisfaction, potentially leading to its rejection[9]. Previous studies have shown that trust is a critical factor in the adoption of new technologies, with increased privacy concerns being associated with lower trust and reduced adoption intention[7,10-12].

A review of existing studies on ChatGPT indicates that while significant research has focused on the factors influencing the adoption of ChatGPT technology and user satisfaction, there is a noticeable lack of studies addressing the factors impacting privacy concerns and trust during ChatGPT usage. Consequently, this study will analyze those factors within the context of ChatGPT usage and explore their effects on user behavior. The objective is to enhance the understanding of privacy concerns and trust experienced by users of generative AI technologies like ChatGPT and to contribute to the development of policies and strategies to improve these aspects. This research will advance our understanding of users' perceptions and attitudes toward privacy and trust in ChatGPT, thereby informing policy enhancements that promote a safer and more trustworthy environment for ChatGPT users.

Hence, this study aims to investigate the factors influencing privacy concerns and trust while using ChatGPT by categorizing them into ChatGPT-specific characteristics and individual user traits, and analyzing their relationship with the intention to continue using the technology. Moreover, this study will compare Korean and American user groups to assess the impact of cultural differences. It seeks to deepen insights into the relationship between users' privacy protection behaviors and trust in generative AI, with the expectation that these insights will contribute to enhancing the quality of services.

Ⅱ. Theoretical Background and Hypothesis

2.1 ChatGPT Characteristics

In December 2022, OpenAI introduced the conversational platform ChatGPT, which has been catalyzing innovative changes across various industries owing to its natural conversational abilities and advanced contextual understanding[6]. Since its launch, ChatGPT has significantly impacted various fields due to its exceptional ability to understand user inputs and provide responses tailored to specific needs[4,13]. Additionally, ChatGPT plays a key role in enhancing organizational efficiency by significantly improving system performance and information accuracy across multiple functions, including information gathering, analysis, prediction, and communication[14,15].

OpenAI has significantly enhanced the performance of its models, beginning with GPT-1.0 in 2018, which had 117 million parameters, followed by GPT-2.0 in 2019 with 1.5 billion parameters, and culminating in GPT-3.0 in 2020 with 175 billion parameters[16]. On May 13, 2024, OpenAI introduced GPT-4o (Omni), which surpasses earlier models in speed and excels in real-time interaction. This model enables users to interact through voice and video, request problem-solving during live video sharing, and engage in real-time collaboration[17]. It has also been upgraded to support conversations in various voice and text styles, solve mathematical equations, and translate spoken languages in real time[18,19]. Moreover, according to Sallam, et al.[20], ChatGPT 4.0 also outperformed other AI models such as GPT-3.5, Bing, and Bard on multiple-choice questions in the field of medicinal chemistry. Hochmair, et al.[21] reported that ChatGPT 4.0 similarly surpassed other chatbots like Bard, Claude-2, and Copilot in tasks such as GIS theory and programming code interpretation.

This study defines the characteristics of ChatGPT into five key factors and explores each to analyze their impact on users' privacy concerns and trust. The five characteristics are anthropomorphism, personalization, interactivity, information accuracy, and system flexibility.

2.1.1 Anthropomorphism

According to Duffy[22], anthropomorphism refers to the process of attributing human characteristics, emotions, or intentions to non-human entities, such as objects or AI systems. It facilitates user acceptance of new service technologies and enables individuals to adapt to unfamiliar situations[23,24]. ChatGPT can comprehend users' speech and adjust its tone to appear human-like, occasionally responding in engaging tones or even singing[19]. Such human-like attributes of ChatGPT foster a sense of companionship and support, strengthening emotional connections with users and enhancing the perceived value of ChatGPT[25].

2.1.2 Personalization

ChatGPT offers significant advantages in personalization[26]. It becomes an adaptive conversational agent that delivers context-specific information by learning from interactions with users[27,28]. ChatGPT retains information from previous exchanges to produce contextually appropriate responses based on the prompt[29]. Furthermore, through continuous interaction, ChatGPT learns a user's language, tone, and style, thus understanding individual needs and preferences and providing personalized responses. Over time, this leads to more precise and relevant answers, enhancing the quality of services and education[30].

2.1.3 Interactivity

Interactivity is defined as ChatGPT's ability to engage in two-way interactions, referring to the AI system’s responsiveness and accuracy during user interactions[25,31]. Additionally, interactivity is a critical component in AI systems, enhancing user decision-making and transforming interaction dynamics between businesses and customers[25,32]. Traditional chatbots deliver predefined responses to specific inquiries based on preset knowledge, whereas ChatGPT can generate natural conversations by interacting with users and utilizing extensive text data. This capability leads to personalized responses and enhances interactivity[16,33]. Specifically, consumers tend to engage longer with human-like chatbots or AI devices[34,35]. ChatGPT enhances the experience by responding to queries in real-time and maintaining conversation context, thanks to its high interactivity[19]. This interactivity enables users to engage in more natural and consistent conversations, contributing to increased user satisfaction and trust[26,29].

2.1.4 Information Accuracy

Large language models (LLMs) like ChatGPT, trained on extensive text datasets, encounter the issue of hallucination[3]. Hallucination describes a phenomenon in which the model generates nonexistent information or responses irrelevant to the context[36]. Hallucination and misinformation are distinct concepts. According to Liu, et al.[37], misinformation refers to incorrect or biased responses due to flawed input data, whereas hallucination includes creating fictional content that conflicts with source material or presents unverifiable information. Such hallucination issues can erode user trust and lead to severe consequences, while misinformation may directly harm users[36,38]. Hence, enhancing the accuracy of ChatGPT's information is vital to improving user experience and trust.

2.1.5 System Flexibility

System flexibility refers to how well a system adapts to varying user needs and changing conditions[39]. It plays a pivotal role in customizing and personalizing the user experience, which boosts the perceived usefulness of technology and enhances user convenience[40,41]. Not only does ChatGPT meet the specific needs of various users, but it also adapts flexibly across a broad range of activities, continuously evolving to better address user needs[42,43]. Such flexibility is critical in determining system quality, playing an essential role in providing necessary information for diverse decision-making processes[40].

Representative studies on generative AI, such as ChatGPT, include the work of Li and Lee[24], who explored how ChatGPT affects user loyalty in the travel decision-making process based on theories of affordance and communication. They investigated components such as communication quality, personalization, anthropomorphism, and both cognitive and emotional trust. Zhou and Li[25] researched users' intentions to switch from search engines to generative AI using the Push-Pull-Mooring (PPM) model. Their study illustrated that AI-generated content's information quality and perceived interactivity affect users' intentions to switch from search engines to generative AI through perceived value. Foroughi, et al.[44] scrutinized factors affecting the intent to use ChatGPT in education, following the UTAUT2 model, and studied whether personal innovativeness and information accuracy moderate these relationships. Gulati, et al.[45] analyzed factors influencing marketing students' acceptance and use of ChatGPT, particularly focusing on system flexibility within the UTAUT model. Mygland, et al.[46] identified and analyzed nine high-level affordances of chatbots, which include human-like conversing, assistance provision, facilitation, distilling information, enriching information, context identification, personalization, fostering familiarity, and ensuring privacy. Based on previous studies, this research defines the characteristics of ChatGPT as anthropomorphism, personalization, interactivity, information accuracy, and system flexibility.

2.2 ChatGPT and Privacy Concerns

Users continuously generate data across various digital environments, such as email, search, and online shopping, and the volume of personal data generation has surged due to the significant reduction in data storage and analysis costs[47]. In the digital age, information is a valuable asset, and from a business perspective, privacy is a strategic issue[48].

LLMs like ChatGPT, trained on expansive datasets collected from diverse sources such as websites, posts, and articles[49], may inadvertently include personal information, heightening the tension between privacy concerns and the willingness to disclose such information[49,50]. As ChatGPT extensively harvests data from the internet, it is inevitably exposed to personal information[6]. Furthermore, ChatGPT processes sensitive data, including personal messages, medical records, and financial information, raising significant privacy concerns about the management of user data[6,43].

OpenAI stores user conversations temporarily for 30 days to adhere to data protection laws. However, this process entails potential risks of interception during transmission if communication channels are insecure[51]. In Italy, the use of ChatGPT was prohibited on March 20, 2023, following a data breach that compromised payment information and user conversations, prompting other nations to initiate similar inquiries[6,49]. Therefore, AI service providers like ChatGPT are obligated to adhere to these legal standards while actively safeguarding user privacy.

Few studies have investigated the impact of ChatGPT's characteristics on privacy concerns. Consequently, this study examines the existing research on privacy issues related to attributes similar to those of ChatGPT and those involving the collection of personal information.

Adyantari[82] demonstrated that the anthropomorphic features of chatbots significantly influence individuals' privacy concerns, which subsequently negatively impacts their intention to disclose personal information. Additionally, Ischen, et al.[83] argued that anthropomorphism perceptions and privacy concerns are crucial in shaping users' attitudes and behavioral outcomes during interactions with chatbots. Therefore, users may experience uncertainty regarding how an AI that behaves like a human will manage their personal information.

ChatGPT's personalization features enhance the user experience by offering tailored responses based on user-provided data[28]. However, this personalization necessitates the collection of user data, potentially heightening privacy concerns. Mo, et al.[84] revealed that perceived personalization in online targeted advertising can lead to increased perceptions of informativeness and elevated privacy concerns. Consequently, ChatGPT's personalized responses could cause users to worry about excessive data collection.

Hasal, et al.[85] emphasized that chatbots like ChatGPT, which handle sensitive information, must mitigate security threats to ensure the accurate and secure storage of user data and manage it transparently. While ChatGPT excels in information collection and prediction, it can sometimes provide inaccurate information or exhibit hallucination phenomena[14,36]. This may heighten users' privacy concerns, as they might need to allocate considerable time to verify information and could perceive a lack of fairness in information disclosure[86,87]. Zhou[87] also highlighted that privacy concerns among users of location-based services are influenced by the quality of information. Thus, inaccuracies in information can provoke privacy concerns among users.

System flexibility is pivotal in enhancing the perceived usefulness of technology and augmenting user convenience[40,41]. However, heightened flexibility necessitates the collection and processing of extensive user data, which concurrently escalates the risk of privacy violations[88]. Alsabawy, et al.[89] examined if the service quality, system quality, and information quality of IT infrastructure systems impact privacy. Consequently, this research postulates the following hypotheses based on previous studies.

H1: Anthropomorphism significantly impacts privacy concerns.

H2: Personalization significantly impacts privacy concerns.

H3: Interactivity significantly impacts privacy concerns.

H4: Information accuracy significantly impacts privacy concerns.

H5: System flexibility significantly impacts privacy concerns.

2.3 ChatGPT and Trust

Building trust in managing customer information proves more effective than efforts aimed at reducing customer concerns[52]. Trust has been studied in various disciplines such as psychology, economics, sociology, and business administration[24]. According to Mayer, et al.[53], trust is the willingness to be vulnerable to another's actions, based on the expectation that the other party will perform an action important to the trustor, regardless of the ability to monitor or control that party.

Trust is pivotal in the adoption and use of AI services like ChatGPT by users[54]. It has been shown that people are more likely to trust and depend on AI for areas involving objective knowledge rather than subjective knowledge[55]. This is a critical determinant of trust in AI systems and greatly affects users' intention to embrace this technology[56,57].

Although ChatGPT is trained on a vast corpus of human-written text data, ensuring accuracy and relevance in its responses, building user trust remains challenging as not all responses are always accurate[61,90]. Previous research by Li, et al.[91] explored the factors influencing consumers' trust in AI chatbots, classifying them as chatbot-related, company-related, and consumer-related factors, and examined the process of trust formation between humans and AI chatbots. The study identified expertise, responsiveness, and anthropomorphism as key factors influencing consumer trust, highlighting that inadequate expertise in AI chatbots can detrimentally affect trust. Alagarsamy and Mehrolia[54] proposed a model considering factors such as technology, quality, risk, and personal characteristics that influence chatbot trust, demonstrating that this model contributes to the intention to use. Chakraborty and Biswal[61] investigated the effects of ChatGPT's quality, accuracy, timeliness, user familiarity, and consistency with other sources on its reliability and technology acceptance. The research indicated that users' trust in ChatGPT increases when the information is consistent with other sources. Therefore, drawing on the previous studies, this study proposes the following hypotheses.

H6: Anthropomorphism significantly impacts trust.

H7: Personalization significantly impacts trust.

H8: Interactivity significantly impacts trust.

H9: Information accuracy significantly impacts trust.

H10: System flexibility significantly impacts trust.

2.4 Individual Characteristics

2.4.1 Receiver’s prior knowledge

Prior knowledge encompasses an individual's familiarity, expertise, and experience with a specific issue and significantly influences their information-seeking behavior[58]. When individuals possess extensive prior knowledge about a topic, they can analyze and evaluate new information in depth, reducing their reliance on peripheral cues or heuristics[59]. Moreover, prior knowledge facilitates the understanding and integration of new information[60].

ChatGPT plays a vital role in aiding decision-making across various fields, with users' prior knowledge significantly influencing their acceptance of the content provided[61,62]. The greater the prior knowledge users possess in a particular field, the more likely they are to adopt information provided by ChatGPT if it aligns with their existing knowledge[60].

Users with in-depth prior knowledge in the IT field can better comprehend the characteristics and functionality of technologies, enabling them to more effectively identify and address risks or issues associated with new technologies[63]. In this context, users' prior knowledge is crucial in shaping privacy concerns and significantly influences the process of evaluating information and fostering trust[64]. Alignment of the information with an individual's prior knowledge increases the likelihood of trust in that information[61].

Park[63] demonstrated that users' knowledge significantly influences their privacy control behaviors. Debatin, et al.[92] found that users with extensive prior knowledge of technology are cognizant of particular methods to safeguard their personal information. Chakraborty and Biswal[61] discovered that users' prior knowledge positively impacts the reliability of ChatGPT, suggesting that users are more likely to trust information that corroborates their existing knowledge. Therefore, based on prior research, this study posits the following hypotheses.

H11: Receivers' prior knowledge significantly impacts privacy concerns.

H12: Receivers' prior knowledge significantly impacts trust.

2.4.2 AI Literacy

AI is continuously expanding across various sectors through direct interaction with users[65,66]. Nevertheless, the myriad functionalities and complex systems of AI pose challenges for users in understanding and utilizing these technologies[67]. In a society increasingly influenced by AI, it is crucial for users to comprehend the nature and capabilities of AI to act independently and contribute to future developments[68].

In response to technological innovations, researchers have developed various competencies and literacy concepts, such as computer literacy, media literacy, ICT literacy, and digital literacy, which encompass effective use of digital environments[69]. Recently, AI literacy has emerged as a crucial competency for living, learning and working in the digital world via AI-based technologies[66]. AI literacy extends digital literacy, referring to the abilities to understand, interact with, and critically evaluate AI systems and their outputs[70]. The level of ChatGPT utilization heavily relies on the user's AI literacy skills. The degree to which users comprehend the functions and risks of ChatGPT shapes their perception of its benefits and concerns, potentially enabling them to use the technology more effectively or encounter hurdles in areas such as personalized data use[63]. Differences in technical competency with digital devices affect the use and effectiveness of online services, allowing users with higher competency to utilize these services more efficiently[71].

Users consider issues such as reliability and privacy when using ChatGPT, with their perceptions shaped by their AI literacy[72,73]. High AI literacy allows users to better comprehend the advantages and limitations of AI, potentially leading to increased caution in adopting the technology. Conversely, low AI literacy can escalate anxiety due to misunderstandings or insufficient information about AI[72]. Therefore, successful utilization of ChatGPT largely hinges on the level of AI literacy, enabling users to trust and continue using the technology.

The relationship between AI literacy, privacy concerns, and trust remains under-researched. Consequently, we examined studies evaluating the effective use of digital environments. Al-Abdullatif and Alsubaie[72] studied the adoption of ChatGPT through an enhanced value-based adoption model (VAM) that integrates AI literacy, discovering that students with greater AI literacy appreciate ChatGPT's value more profoundly. Lee, et al.[93] found that increased information literacy levels correlate with greater trust in websites, which further grows as the website's perceived utility increases. Based on prior research, this study proposes the following hypotheses.

H13: AI literacy significantly impacts privacy concerns.

H14: AI literacy significantly impacts trust.

2.4.3 Personal Innovativeness

Agarwal and Prasad[74] defined personal innovativeness as the degree to which an individual is willing to engage with new information technology. Personal innovativeness is particularly significant in AI-based services such as ChatGPT[44,75]. Individuals with a high level of personal innovativeness tend to embrace new technologies, driven by their curiosity and eagerness to acquire new skills[44,75]. This curiosity and eagerness to learn positively impact the adoption and continued use intention of innovative tools[76-78].

Personal innovativeness is essential in generative AI services like ChatGPT, as users must exhibit curiosity and openness when adopting and utilizing new AI technologies[76,78]. Individuals with high levels of innovativeness are more inclined to take risks, accept minor setbacks during the risk-taking process, and swiftly evaluate the usefulness, reliability, and functionality of new technologies[74,79,80]. Moreover, they make decisions to purchase or adopt new technologies independently of others’ opinions or experiences[81]. Consequently, ChatGPT users with high personal innovativeness are more likely to trust and utilize the technology despite privacy concerns.

Previous research on personal innovativeness reveals that individuals with higher levels of innovativeness exhibit fewer privacy concerns when utilizing Proximity Bluetooth Beacon Technology (PBBT)[94]. Meng, et al.[80] investigated whether highly innovative users more easily foster trust in mobile health services and incorporate them into their daily routines. Therefore, this study presents the following hypothesis based on prior research.

H15: Personal innovativeness significantly impacts privacy concerns.

H16: Personal innovativeness significantly impacts trust.

2.5 Privacy Concerns and Trust

Trust is pivotal in matters concerning privacy concerns and information disclosure, where its breakdown can expose sensitive personal information, potentially leading to financial losses[95]. Moreover, trust is vital in determining individuals’ willingness to share sensitive information with companies. It lowers transaction risks, enhances interactions among parties, and aids in building long-term relationships[52,96,97].

The relationship between privacy concerns and trust is intricately linked. Malhotra, et al.[98] demonstrated that individuals with greater privacy concerns are more likely to be affected in their perception of trust and risk when personal information is solicited.

Prior research supports this link: Malhotra, et al.[98] found that higher information privacy concerns correlate with lower trust and increased perceived risk, subsequently influencing personal information disclosure behaviors. Smith, et al.[99] identified regulation, behavioral responses, and trust as outcome factors in the APCO (antecedents → privacy concerns → outcomes) model. Jafari, et al.[100] found that privacy concerns about ChatGPT reduce usage intention and trust. Consequently, this study proposes the following hypothesis based on prior research.

H17: Privacy concerns significantly impact trust.

Individuals with high privacy concerns, irrespective of the type, may hesitate to adopt new technologies or services, potentially leading to reduced usage intentions[50,101]. Consequently, this study proposes the following hypothesis based on previous research.

H18: Privacy concerns significantly impact continuance intention.

Trust in ChatGPT is positively related to user attitude[102]. Prior studies have shown that increased trust enhanced user attitudes, indicating that a high level of trust in ChatGPT and other chatbots correlates with more positive user responses[102-104]. Trust significantly affects the usage intention, and when users feel confident in the technology, they are more likely to adopt and continue using it[105]. Therefore, based on existing studies, this research presents the following hypothesis.

H19: Trust significantly influences the continuous intention to use.

Ⅲ. Research Methods

3.1 Development of the Measurement Items

This study revised the measurement items introduced in prior research to better reflect the characteristics of ChatGPT, as illustrated in Table 1. These items were developed based on earlier studies[24,25,34,36,44,45,61,68,77,78,102,106-112].

Table 1.

Constructs and measurement items.
Construct Measurement Item Source
Anthropomorphism (ANT) ChatGPT's tone and language were warm and appealing. Li and Lee [24]; Polyportis and Pahos [102]
The interaction with ChatGPT created an emotional connection similar to that with a human.
I felt that ChatGPT responded well to my emotional needs or concerns.
ChatGPT is natural; I do not feel fake about it.
Personalization (PER) ChatGPT offered personalized feedback. Li and Lee [24]
ChatGPT understood my specific requirements and provided tailored responses.
The responses from ChatGPT were customized to my preferences and demands.
Interaction (INT) ChatGPT responds to my input very quickly. Pelau, et al. [34]; Zhou and Li [25]
ChatGPT can help me focus my attention.
Interaction with ChatGPT can be easily done.
ChatGPT can answer my feedback quickly.
Information Accuracy (ACC) The information I obtain from ChatGPT is accurate. Kim, et al. [36]; Foroughi, et al. [44]
The information I obtain from ChatGPT is correct.
The information I obtain from ChatGPT is reliable.
System Flexibility (FLX) I believe that ChatGPT will be versatile enough to meet the required needs. Gulati, et al. [45]
I believe that ChatGPT will be able to adapt to new demands and circumstances.
I believe that ChatGPT will be able to adapt to address various needs.
Receivers' Prior Knowledge (KNO) The information in ChatGPT is similar to my point of view. Chakraborty and Biswal [61]
The information in ChatGPT is consistent with my viewpoints.
The information in ChatGPT provides me the same opinion.
AI Literacy (LIT) I can use ChatGPT meaningfully to achieve my everyday goals. Ng, et al. [66]; Carolus, et al. [68]
In everyday life, I can work together gainfully with ChatGPT.
I can assess what the limitations and opportunities of using ChatGPT are.
I can assess what advantages and disadvantages the use of ChatGPT entails.
Personal Innovativeness (INN) I like to experiment with new information technology. Foroughi, et al. [44]; Bilon Budimir [78]
If I heard about a new information technology, I would look for ways to experiment with it.
I am usually the first to try out new information technology.
Among my peers, I am usually the first to try out new technologies.
Privacy Concerns (PRI) When I use ChatGPT, I feel uneasy because ChatGPT has information about me. de Cosmo, et al. [108]; Smith, et al. [111]; Stewart and Segars [112]
I believe that through ChatGPT, the owner of the ChatGPT system can access my personal information.
I am worried that my personal information will be leaked when using ChatGPT.
I am concerned about the problem of information security in ChatGPT.
I am concerned that the information I submit via ChatGPT could be misused.
I am concerned about submitting information via ChatGPT, because it could be used in a way I did not foresee.
Trust (TRU) I believe using ChatGPT is secure. Rahman, et al. [107]
I believe information exchange through ChatGPT will be secure.
I am confident regarding the security measures offered by ChatGPT.
Continuous Intention to Use (USE) I intend to continue using ChatGPT in the future. Venkatesh, et al. [110]; Strzelecki [77]
I will always try to use ChatGPT.
I plan to continue to use ChatGPT frequently.

The study utilized a 5-point Likert scale for the measurement items, ranging from 'strongly disagree' (1) to 'strongly agree' (5).

3.2 Data Collection and Sample Characteristics

Data analysis was conducted using SmartPLS 3.0, a structural equation modeling (SEM) software package based on the partial least squares (PLS) approach. The survey data were collected through professional research agencies. For the Korean sample, data collection was outsourced to a specialized survey company, referred to as Company E; for the U.S. sample, data were collected through another professional survey firm, referred to as Company A. Both agencies employed stratified sampling methods to ensure demographic diversity and data quality. The survey targeted users experienced with ChatGPT and collected 420 responses; after discarding insincere responses, 403 valid responses were analyzed.
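The structural model links the five ChatGPT characteristics and three user traits to privacy concerns and trust, and those two constructs to continuous intention to use. As a rough, hypothetical illustration of how such paths can be estimated outside SmartPLS, the sketch below builds unit-weighted composite scores from standardized items and runs ordinary least-squares regressions among them; this simplifies the actual PLS weighting algorithm, and the construct subset, variable names, and synthetic data are placeholders rather than the study's data. SmartPLS itself iterates indicator weights and obtains t-values by bootstrapping, so the coefficients printed here are only illustrative.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical item-level Likert data (1-5); in the study these would be the 403 valid responses.
n = 403
items = {f"{c}{i}": rng.integers(1, 6, n).astype(float)
         for c in ["ACC", "PRI", "TRU", "USE"] for i in (1, 2, 3)}
df = pd.DataFrame(items)

def composite(data, prefix):
    """Unit-weighted composite of standardized items (a crude stand-in for PLS construct scores)."""
    block = data[[c for c in data.columns if c.startswith(prefix)]]
    z = (block - block.mean()) / block.std(ddof=0)
    score = z.mean(axis=1)
    return (score - score.mean()) / score.std(ddof=0)

scores = pd.DataFrame({p: composite(df, p) for p in ["ACC", "PRI", "TRU", "USE"]})

def ols_paths(y, X):
    """Standardized path coefficients from an OLS regression of y on X."""
    A = np.column_stack([np.ones(len(X)), X.values])
    beta, *_ = np.linalg.lstsq(A, y.values, rcond=None)
    return dict(zip(X.columns, np.round(beta[1:], 3)))

print(ols_paths(scores["PRI"], scores[["ACC"]]))         # H4-style path (ACC -> PRI)
print(ols_paths(scores["TRU"], scores[["ACC", "PRI"]]))  # H9 and H17 (ACC, PRI -> TRU)
print(ols_paths(scores["USE"], scores[["PRI", "TRU"]]))  # H18 and H19 (PRI, TRU -> USE)
```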

Frequency analysis explored the demographic characteristics of the respondents, presented in Table 2. There were 246 males and 157 females, indicating a male majority, and 200 Korean and 203 American respondents, with Americans slightly outnumbering Koreans. The dominant age group was 30-39 years, with 223 respondents, followed by the 40-49 age group with 90, the 20-29 age group with 88, and 2 respondents aged 50 or older. Company employees constituted the largest occupational group at 75.4%, followed by self-employed/business owners at 15.6%, government employees at 6.0%, job seekers at 1.5%, and others at 1.5%.

Table 2.

Demographic characteristics of respondents
Category Korea, n (%) USA, n (%) Total, n (%)
Gender Male 81(40.5%) 165(81.3%) 246(61.0%)
Female 119(59.5%) 38(18.7%) 157(39.0%)
Age 20~29 32(16.0%) 56(27.6%) 88(21.8%)
30~39 106(53.0%) 117(57.6%) 223(55.3%)
40~49 62(31.0%) 28(13.8%) 90(22.3%)
≥ 50 0(0.0%) 2(1.0%) 2(0.5%)
Job Job Seeker 0(0.0%) 6(3.0%) 6(1.5%)
Office Worker 192(96.0%) 112(55.2%) 304(75.4%)
Public Officer 8(4.0%) 16(7.9%) 24(6.0%)
Self-Employed 0(0.0%) 63(31.0%) 63(15.6%)
Etc. 0(0.0%) 6(3.0%) 6(1.5%)
Total 200(100%) 203(100%) 403(100%)
3.3 Reliability and Validity Assessment

In this study, we examined Cronbach's α and construct validity to assess the reliability and validity of the measurement items and factors. Cronbach's α, which measures the internal consistency of a scale, was employed to evaluate reliability. This method focuses on the consistency of repeated measurements under identical conditions or the consistency of similar items measured once, evaluating how well each item reflects the concept being measured[113]. The internal consistency test using Cronbach's α showed that all constructs achieved α values of 0.6 or higher, as presented in Table 3. According to Nunnally and Bernstein[114], an α value of 0.7 or higher generally ensures the reliability of a measurement instrument, but in exploratory studies an α value of 0.6 or higher is acceptable.
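As a minimal sketch of how Cronbach's α is computed from item-level responses, the snippet below applies the standard formula, α = k/(k-1) × (1 - Σ item variances / variance of the summed score). The data are randomly generated placeholders, so the printed value will not reproduce the 0.857 reported for the anthropomorphism items.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical responses to the four anthropomorphism items (ANT1-ANT4, 5-point Likert).
rng = np.random.default_rng(1)
ant = pd.DataFrame(rng.integers(1, 6, size=(403, 4)), columns=["ANT1", "ANT2", "ANT3", "ANT4"])
print(round(cronbach_alpha(ant), 3))
```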

Table 3.

Confirmatory factor analysis
Factor Item Path Coefficient t-value Cronbach’s α
Anthropomorp hism ANT1 0.834 42.137 *** 0.857
ANT2 0.862 58.429 ***
ANT3 0.849 51.637 ***
ANT4 0.804 38.491 ***
Personalization PER1 0.835 41.365 *** 0.755
PER2 0.812 33.817 ***
PER3 0.813 34.585 ***
Interaction INT1 0.634 13.514 *** 0.721
INT2 0.772 25.616 ***
INT3 0.809 29.441 ***
INT4 0.709 19.405 ***
Information Accuracy ACC1 0.884 60.835 *** 0.861
ACC2 0.887 70.834 ***
ACC3 0.883 66.313 ***
System Flexibility FLX1 0.845 44.507 *** 0.710
FLX2 0.731 19.405 ***
FLX3 0.798 30.532 ***
Receiver’s Prior Knowledge KNO1 0.882 62.848 *** 0.850
KNO2 0.879 82.054 ***
KNO3 0.871 67.039 ***
AI Literacy LIT1 0.741 21.676 *** 0.770
LIT2 0.738 20.963 ***
LIT3 0.812 41.740 ***
LIT4 0.770 29.028 ***
Personal Innovativeness INN1 0.731 22.636 *** 0.780
INN2 0.727 23.652 ***
INN3 0.825 47.324 ***
INN4 0.821 43.312 ***
Privacy Concerns PRI1 0.764 31.013 *** 0.873
PRI2 0.754 25.402 ***
PRI3 0.811 33.994 ***
PRI4 0.799 34.705 ***
PRI5 0.783 28.098 ***
PRI6 0.780 25.575 ***
Trust TRU1 0.897 83.574 *** 0.892
TRU2 0.914 106.908 ***
TRU3 0.912 111.911 ***
Continuous Intention to Use USE1 0.728 22.003 *** 0.730
USE2 0.848 47.245 ***
USE3 0.840 44.278 ***

Note : *p<0.1, **p<0.05, ***p<0.01.

Subsequently, confirmatory factor analysis was conducted to validate the measures, with the results detailed in Table 3. The t-values of the path coefficients for each item were statistically significant, confirming convergent validity.
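In PLS-SEM, the t-values in Table 3 are typically obtained by bootstrapping: respondents are resampled with replacement, the parameter is re-estimated in each resample, and the original estimate is divided by the standard deviation of the bootstrap estimates. The sketch below illustrates the idea with an item-composite correlation standing in for an outer loading; the data and the resample count are illustrative assumptions, not the study's actual settings.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical standardized scores: one item (x) and its construct composite (f).
n = 403
f = rng.standard_normal(n)
x = 0.8 * f + 0.6 * rng.standard_normal(n)    # item correlated with the composite

def loading(x, f):
    """Correlation between item and composite, used as a stand-in for an outer loading."""
    return np.corrcoef(x, f)[0, 1]

estimate = loading(x, f)
boot = []
for _ in range(5000):                          # a commonly used number of resamples
    idx = rng.integers(0, n, n)                # resample respondents with replacement
    boot.append(loading(x[idx], f[idx]))

t_value = estimate / np.std(boot, ddof=1)      # bootstrap t-statistic
print(round(estimate, 3), round(t_value, 2))
```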

Lastly, we assessed discriminant validity by evaluating the average variance extracted (AVE) values and the correlation matrix between the constructs. According to Fornell and Larcker[115], discriminant validity is confirmed when the square root of the AVE exceeds the correlation coefficients between constructs. As shown in Table 4, the square root values of the AVE on the diagonal surpassed the correlations between constructs, establishing discriminant validity. Moreover, the composite reliability (CR) and AVE values met their respective thresholds (CR>0.7, AVE > 0.5), demonstrating that the constructs are reliably and validly measured[116-118].
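The AVE, composite reliability, and Fornell-Larcker comparisons follow standard formulas: AVE is the mean of the squared standardized loadings, and CR = (Σλ)² / ((Σλ)² + Σ(1 - λ²)). The short check below applies them to the trust loadings from Table 3 and approximately reproduces the 0.824 AVE and 0.934 CR reported in Table 4; the final line compares √AVE of trust with its largest correlation in Table 4 (0.768, with information accuracy).

```python
import numpy as np

def ave(loadings):
    """Average variance extracted: mean of squared standardized loadings."""
    lam = np.asarray(loadings)
    return np.mean(lam ** 2)

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances (1 - loading^2))."""
    lam = np.asarray(loadings)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())

# Standardized loadings of the trust items (TRU1-TRU3) from Table 3.
tru = [0.897, 0.914, 0.912]
print(round(ave(tru), 3), round(composite_reliability(tru), 3))  # ~0.824 and ~0.934, as in Table 4

# Fornell-Larcker check: sqrt(AVE) of trust should exceed its largest inter-construct correlation.
print(np.sqrt(ave(tru)) > 0.768)
```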

Table 4.

Discriminant validity
Construct Mean S.D. ANT PER INT ACC FLX KNO LIT INN PRI TRU USE
ANT 3.365 0.887 0.837
PER 3.814 0.661 0.62 0.820
INT 3.973 0.565 0.481 0.565 0.734
ACC 3.606 0.831 0.675 0.568 0.527 0.885
FLX 3.889 0.599 0.501 0.61 0.659 0.554 0.792
KNO 3.575 0.772 0.674 0.509 0.542 0.726 0.61 0.877
LIT 3.870 0.584 0.411 0.528 0.632 0.503 0.683 0.553 0.766
INN 3.788 0.660 0.514 0.415 0.513 0.519 0.573 0.563 0.597 0.777
PRI 3.507 0.915 0.365 0.231 0.201 0.292 0.34 0.4 0.289 0.463 0.782
TRU 3.660 0.742 0.692 0.527 0.491 0.768 0.477 0.708 0.478 0.53 0.236 0.908
USE 3.842 0.671 0.533 0.507 0.627 0.592 0.61 0.594 0.628 0.55 0.234 0.561 0.807
Construct Reliability 0.904 0.861 0.823 0.915 0.835 0.909 0.850 0.859 0.904 0.934 0.848
Average Variance Extracted 0.701 0.673 0.539 0.783 0.628 0.770 0.586 0.604 0.612 0.824 0.651

Notes : Numbers below the diagonal are correlation coefficients (p<0.01). Diagonal bold numbers are the square root of the AVE.

To establish measurement invariance between the Korean and U.S. samples, this study conducted a stepwise multi-group confirmatory factor analysis (MGCFA). Configural and metric invariance were supported, as the chi-square difference (Δχ²), comparative fit index difference (ΔCFI), and root mean square error of approximation difference (ΔRMSEA) all satisfied the recommended criteria (Δχ²(36)=33.477, p>.05; ΔCFI=-0.001; ΔRMSEA=0.000). Although scalar invariance was not fully supported due to a statistically significant Δχ², the ΔRMSEA remained below 0.010, suggesting an acceptable level of model fit and allowing partial scalar invariance to be assumed. Because the chi-square difference test often becomes significant when factor loadings and intercepts are constrained, recent studies recommend using alternative fit indices such as ΔCFI, ΔTLI, and ΔRMSEA to evaluate invariance[119].
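The metric-invariance conclusion rests on the reported Δχ²(36) = 33.477 being non-significant; the check below recomputes its p-value, using only those two reported numbers.

```python
from scipy.stats import chi2

delta_chi2, delta_df = 33.477, 36        # values reported for the metric-invariance comparison
p_value = chi2.sf(delta_chi2, delta_df)  # survival function: P(chi-square with 36 df >= 33.477)
print(round(p_value, 3))                 # > .05, so the added equality constraints are not rejected
```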

3.4 Hypothesis Testing

In this study, path analysis and multi-group analysis using PLS were conducted to test the hypotheses, with the results presented in Table 5. The model fit for the path analysis can be assessed using the coefficient of determination (R²) of the endogenous variables. The complete model yielded R² values of 0.273 for privacy concerns, 0.685 for trust, and 0.326 for continuous intention to use. For the Korea group, the R² values were 0.128 for privacy concerns, 0.507 for trust, and 0.107 for continuous intention to use. In the USA group, the R² values were 0.398 for privacy concerns, 0.630 for trust, and 0.492 for continuous intention to use. Tenenhaus et al.[120] proposed an overall goodness-of-fit (GoF) index based on the AVE and R² values. The GoF values were 0.521 for the complete model, 0.371 for the Korea group, and 0.541 for the USA group, all exceeding the 0.36 threshold for a large GoF, indicating good model fit across all group models.
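Tenenhaus' GoF is the geometric mean of the average communality (AVE for reflective constructs) and the average R². The sketch below recomputes it for the complete model from the AVE values in Table 4 and the R² values above; the result (about 0.54) is close to the reported 0.521, with the small gap presumably due to rounding in the reported inputs.

```python
import numpy as np

# AVE values of the eleven constructs (Table 4) and R^2 of the endogenous constructs (complete model).
ave = [0.701, 0.673, 0.539, 0.783, 0.628, 0.770, 0.586, 0.604, 0.612, 0.824, 0.651]
r2 = [0.273, 0.685, 0.326]

gof = np.sqrt(np.mean(ave) * np.mean(r2))  # GoF = sqrt(mean communality * mean R^2)
print(round(gof, 3))
```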

Table 5.

Results of path analysis
Hypothesis Path Group Coefficient t-value
H1 ANT→PRI Complete 0.156 1.855 *
Korea 0.150 1.611
USA 0.058 0.489
H2 PER→PRI Complete -0.037 0.464
Korea -0.105 0.974
USA 0.078 0.687
H3 INT→PRI Complete -0.175 2.725 ***
Korea -0.135 1.350
USA -0.294 3.244 ***
H4 ACC→PRI Complete -0.105 1.258
Korea -0.197 2.007 **
USA 0.044 0.427
H5 FLX→PRI Complete 0.136 1.663 *
Korea 0.094 0.618
USA 0.086 0.784
H6 ANT→TRU Complete 0.237 4.037 ***
Korea 0.203 2.794 ***
USA 0.124 1.448
H7 PER→TRU Complete 0.032 0.597
Korea 0.029 0.403
USA 0.085 0.890
H8 INT→TRU Complete 0.011 0.189
Korea -0.006 0.061
USA 0.094 1.164
H9 ACC→TRU Complete 0.416 6.842 ***
Korea 0.422 6.136 ***
USA 0.216 2.506 **
H10 FLX→TRU Complete -0.113 1.999 **
Korea -0.109 1.068
USA 0.010 0.120
H11 KNO→PRI Complete 0.209 2.639 ***
Korea 0.182 1.938 *
USA 0.164 1.560
H12 KNO→TRU Complete 0.239 4.392 ***
Korea 0.173 2.304 **
USA 0.192 2.391 **
H13 LIT→PRI Complete -0.013 0.178
Korea -0.056 0.501
USA -0.061 0.595
H14 LIT→TRU Complete 0.051 1.120
Korea 0.021 0.268
USA 0.250 3.137 ***
H15 INN→PRI Complete 0.354 4.942 ***
Korea 0.218 2.711 ***
USA 0.544 4.656 ***
H16 INN→TRU Complete 0.126 2.310 **
Korea 0.185 2.849 ***
USA -0.019 0.148
H17 PRI→TRU Complete -0.112 2.892 ***
Korea -0.183 2.801 ***
USA -0.035 0.494
H18 PRI→USE Complete 0.107 2.095 **
Korea 0.018 0.203
USA 0.138 2.283 **
H19 TRU→USE Complete 0.536 13.567 ***
Korea 0.325 5.303 ***
USA 0.640 14.167 ***

Notes : *p<0.1, **p<0.05, ***p<0.01.

First, hypothesis 1 was supported in the complete dataset but rejected in both the Korea and USA groups. This suggests that anthropomorphism generally influences privacy concerns, although the effect is not significant in the separate analyses for Korea and the USA: while users in these countries recognize the human-like characteristics of AI, they do not necessarily link them directly with privacy risks. Hypothesis 2 was rejected for the complete dataset as well as for both the Korea and USA groups, indicating that users may view the personalized services of ChatGPT as a convenience feature rather than a privacy concern; the relationship between personalization and privacy concerns therefore appears statistically insignificant. Hypothesis 3 was supported in the complete dataset and in the U.S. group, indicating a significant negative relationship between interactivity and privacy concerns: as interactivity increases, privacy concerns decrease. U.S. users may perceive interactive features as enhancing control and transparency, thereby reducing privacy-related worries. However, the relationship was not significant among Korean users, suggesting that they do not strongly associate interactivity with changes in privacy risk. Hypothesis 4 was supported only in the Korean group, revealing a significant negative relationship between information accuracy and privacy concerns. This suggests that Korean users feel reassured when ChatGPT provides accurate information, leading to lower privacy concerns. The relationship was not significant in the complete dataset or the U.S. group, implying that American users do not view information accuracy as a determinant of privacy risk. Finally, hypothesis 5 was supported for the complete dataset but rejected for both the Korea and USA groups, suggesting that while system flexibility affects privacy concerns in the pooled sample, this does not hold when analyzing individual countries, where it is not seen as directly related to privacy protection.

Hypothesis 6 was confirmed within the complete dataset and the Korea group, but was not supported in the USA group. Korean users view anthropomorphized AI as enhancing familiarity and trust, whereas U.S. users regard it merely as a technical feature; consequently, anthropomorphism did not significantly affect trust among U.S. users. Hypothesis 7 was not supported in the complete dataset, nor in the Korea and USA groups, indicating that personalization does not significantly impact trust. This lack of impact arises because users see personalization as a convenience feature rather than a direct contributor to trust formation. Although personalized services provide tailored experiences, users tend to prioritize other aspects, such as system security, as more crucial in forming trust. Hypothesis 8 was rejected, showing no significant effect in the complete dataset or in the Korea and USA groups. This suggests that even though users can interact conveniently with ChatGPT, it is difficult for them to establish trust in a system that is not secure or lacks transparent data usage. Hypothesis 9 was supported throughout the complete dataset and in both the Korea and USA groups, demonstrating that information accuracy significantly influences trust. The more accurate the information, the greater the trust users place in ChatGPT, believing it operates without errors and processes data correctly. The provision of accurate information thus becomes a key factor in their assessment of the system’s reliability. Hypothesis 10 was supported only in the complete dataset, showing a significant negative relationship between system flexibility and trust. This indicates that, in the pooled sample, excessive flexibility might be interpreted by users as a lack of consistency or reliability in the system, which could undermine trust. The effect was not significant in either the Korean or U.S. group, suggesting that this perception does not hold consistently within individual countries.

Hypothesis 11 was supported in the complete dataset and the Korea group, but rejected in the USA group. Korean users are more likely to perceive greater privacy risks as their prior knowledge of technology increases, potentially leading to heightened privacy concerns. In contrast, U.S. users may experience reduced privacy concerns with higher levels of prior knowledge if they understand how the technology safeguards their personal information and feel they have sufficient control over it. Hypothesis 12 was supported across the complete dataset as well as in both the Korea and USA groups, indicating that users' prior knowledge significantly impacts trust. When users believe that the information provided by ChatGPT aligns with or is similar to their perspectives, they are more likely to perceive ChatGPT as trustworthy. This inclination reflects the tendency of individuals to trust and respond more positively to information that aligns with their own opinions, which enhances trust in ChatGPT. Hypothesis 13 was rejected across the complete dataset and in both the Korea and USA groups, as AI literacy did not significantly impact privacy concerns. Users' understanding of AI technology does not directly influence their privacy concerns; even with a high level of technical understanding, privacy concerns are shaped more strongly by other factors. Hypothesis 14 was not supported in the complete dataset or the Korea group, whereas the USA group showed a significant effect. The difference arises from the different ways Korean and American users form expectations and build trust regarding technology. American users tend to develop higher levels of trust when they have a strong understanding of the technology and a sense of control over it. Conversely, Korean users place greater emphasis on the consistency of results, stability, and reliability rather than on a technical understanding of ChatGPT, which explains the lack of a significant impact. Hypothesis 15 was supported in the complete dataset, with both the Korea and USA groups showing significant effects. Users with high levels of personal innovativeness tend to adopt and actively utilize new technologies or systems more quickly, yet they may also be more sensitive to the potential privacy risks associated with these innovations. As users become more acquainted with and frequently utilize new technologies, their awareness of how these technologies collect and process personal information increases, potentially leading to heightened privacy concerns. Hypothesis 16 was supported in the complete dataset, with the Korea group showing a significant effect, while the USA group did not. This is because Korean society places a high value on technological advancement and innovation, facilitating quicker trust-building when new systems are adopted and used. In contrast, American users generally do not develop trust simply by rapidly adopting new technologies.

Hypothesis 17 was supported in the complete dataset and the Korean group, confirming a significant negative relationship between privacy concerns and trust. This means that higher privacy concerns lead to a reduction in trust, particularly among Korean users who tend to be more sensitive to privacy issues. In contrast, the effect was not significant for U.S. users, implying that they may perceive privacy concerns and trust as relatively independent constructs. H18 was supported in the complete dataset, with the USA group showing a significant effect, while the Korea group did not demonstrate a significant effect. This indicates that Korean users prioritize the functional advantages and convenience of ChatGPT and tend to regard privacy concerns as less critical when the benefits of the technology are substantial. Despite significant privacy concerns, Korean users are likely to continue using the technology as long as its utility remains high. However, U.S. users are more likely to refrain from using the technology if they feel that privacy protection is inadequate. H19 demonstrated a significant effect across all groups, including the complete, Korean, and U.S. user groups, suggesting that the more trustworthy the system, the more likely users are to actively engage with the technology.

Ⅳ. Discussion and Conclusions

This study examines the relationships among ChatGPT users' privacy concerns, trust, and their intention to continue use, while also investigating how cultural differences between Korean and American user groups moderate these relationships. Additionally, the study explores the impact of ChatGPT's characteristics, such as anthropomorphism, personalization, interactivity, information accuracy, and system flexibility, along with individual traits, including users' prior knowledge, AI literacy, and personal innovativeness, on privacy concerns and trust. The findings are summarized below.

First, the effect of anthropomorphism on privacy concerns (H1) was significant in the complete dataset, but not in the Korean and U.S. user groups. This indicates that while the complete dataset reflects the general perceptions of a diverse user base, resulting in a significant effect on privacy concerns, the Korean and U.S. user groups did not perceive anthropomorphized AI as a privacy threat, even when it interacted in a human-like manner. Additionally, anthropomorphism’s effect on trust (H6) was significant in the complete dataset and the Korean user group, but not in the U.S. user group. This implies that while Korean users see anthropomorphized AI as enhancing familiarity and trust, U.S. users tend to view it merely as a technical feature.

Second, personalization did not significantly affect privacy concerns (H2), as users perceive personalized services as convenient, viewing them separately from privacy issues. Furthermore, personalization did not significantly impact trust (H7), since users do not view it as crucial for establishing trust.

Third, the effect of interactivity on privacy concerns (H3) was significant in the complete dataset and among U.S. users, but not within the Korean user group. This indicates that U.S. users are more likely to perceive increased interactivity as heightening the risk of personal information exposure, while Korean users consider interactivity primarily a means to enhance convenience and do not link it with privacy threats. The effect of interactivity on trust (H8) was not significant, suggesting that while interactivity may offer convenience, trust is hard to establish if the system's security and transparency are not assured.

Fourth, the impact of information accuracy on privacy concerns (H4) was significant among Korean users, but not in the complete dataset or among U.S. users. Korean users tend to believe that greater information accuracy protects their personal data better, whereas American users do not associate information accuracy with privacy concerns. The effect of information accuracy on trust (H9) was significant across the complete sample and among both Korean and American users. The higher the information’s accuracy, the more users trusted ChatGPT, believing it operated error-free and managed data appropriately.

Fifth, the effect of system flexibility on privacy concerns (H5) was significant in the complete sample, but not among Korean and American users; users appear to believe system flexibility does not directly relate to privacy protection. The impact of system flexibility on trust (H10) was significant only in the complete dataset, where excessive flexibility may be read as undermining consistency and thus reducing trust. In the separate Korean and American groups, however, users view system flexibility as unrelated to trust, leading to an insignificant relationship.

Sixth, the impact of prior knowledge on privacy concerns (H11) was significant in the complete dataset and the Korean user group, but not in the U.S. user group. This suggests that while Korean users may experience increased anxiety and privacy concerns as their prior knowledge grows, U.S. users focus more on understanding the protection of their personal information and having adequate control, rather than solely depending on prior knowledge. The impact of users' prior knowledge on trust (H12) was significant in both the complete dataset and the Korean and American user groups. The more aligned the information provided by ChatGPT with the user's perspective, the greater their trust in the system becomes.

Seventh, AI literacy did not significantly affect privacy concerns (H13), suggesting that a deeper understanding of AI technology does not necessarily alter privacy concerns. However, AI literacy significantly influenced trust (H14) within the American user group, indicating that an enhanced understanding of AI technology fosters trust formation.

Eighth, the effect of personal innovativeness on privacy concerns (H15) was significant in the complete dataset, and in both the Korean and American user groups. Users with high personal innovativeness tend to adopt new technologies more readily and react more sensitively to potential privacy infringements. The effect of personal innovativeness on trust (H16) was significant in the complete dataset and the Korean user group, but not in the American user group. This occurs because Korean users are more proactive in embracing new technologies, and their positive attitude towards innovation bolsters their trust. Conversely, American users do not establish trust simply through the rapid adoption of new technologies.

Ninth, privacy concerns negatively impacted trust (H17) in both the complete dataset and the Korean user group, but were insignificant in the American user group. Korean users are more likely to lose trust if privacy protection is deemed insufficient, whereas American users generally consider privacy concerns and trust as distinct entities.

Tenth, privacy concerns significantly affected the continuous intention to use in the complete dataset and the American user group (H18), but not in the Korean user group. This can be attributed to Korean users prioritizing the convenience and utility of the technology over privacy concerns. Trust significantly influenced the intention to continue use (H19) across the complete sample, as well as among both Korean and American users. The more trustworthy the system is perceived to be, the more likely users are to actively engage with ChatGPT.

The academic implications of this study are as follows. First, it examined the effects of ChatGPT's characteristics, including anthropomorphism, personalization, interactivity, information accuracy, and system flexibility, as well as user characteristics such as prior knowledge, AI literacy, and personal innovativeness on privacy concerns and trust. Despite the growing use of ChatGPT, research that addresses the impact of its characteristics and user traits on privacy and trust remains scarce. This study enhances the theoretical understanding of AI service user behavior by empirically analyzing how these factors influence privacy concerns and trust. Additionally, by revealing the effects of these characteristics on users, this study lays a theoretical groundwork for future research in this area.

Second, this study addressed the cultural differences between Korean and American users. It compared the effects of factors such as anthropomorphism and information accuracy on privacy concerns and trust across different cultural contexts, confirming the significant role of cultural differences in shaping user perceptions. This underscores the importance of considering cultural contexts in the design of AI services for a global audience, thus broadening the scope of AI service research.

Third, this study highlighted the interaction between privacy concerns and trust in the context of AI services. Specifically, it was found that privacy concerns significantly affect trust among Korean users, whereas this relationship was not significant among American users. These findings demonstrate that trust in AI services is shaped by privacy perceptions and that cultural differences impact this dynamic, contributing to the development of the privacy-trust-usage intention model.

The practical implications of this study are as follows. First, privacy protection is a crucial element for companies offering AI-based services like ChatGPT. AI service providers can maintain customer trust and strengthen long-term relationships by establishing robust privacy protection policies and managing them rigorously. This study provided an in-depth analysis of the relationship between privacy concerns and user trust, offering practical guidelines for companies to enhance their personal data management services.

Second, building trust is essential for AI-based service companies to secure long-term users. This study demonstrates that user trust is intimately linked to the intention to continue using the service. Based on this, companies can enhance user trust by improving customer-centered services, leading to sustainable growth.

Third, this study highlights the necessity of recognizing cultural differences when crafting AI services for a global audience. To succeed in diverse markets, AI services must maintain consistent security and trust while adopting strategies that acknowledge the unique cultural nuances of each market. This approach allows AI companies to boost their global competitiveness and contribute to the provision of successful services.

One limitation of this study is that data were collected only from Korean and American users, which may limit the generalizability of the findings to the global market. Future studies should encompass users from varied cultural backgrounds or countries to yield richer insights. Although this study compared Korean and U.S. respondents, there was a considerable imbalance in the demographic composition of the two groups. For instance, the proportion of self-employed respondents was 0% in the Korean sample but 31% in the U.S. sample, indicating a substantial difference in occupational distribution. Future research should aim to collect more demographically balanced samples to enhance the generalizability of the findings. In particular, examining non-English-speaking users in comparison to English-speaking users could produce intriguing results. Moreover, this study focused solely on ChatGPT and omitted other forms of AI services. Expanding future research to include and compare various AI service types would allow a more detailed analysis.
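Relatedly, the demographic imbalance noted above (for example, 0% versus 31% self-employed respondents) is the kind of difference future studies could check formally before pooling or comparing groups. The snippet below is a minimal sketch of such a check using a chi-square test of independence; the occupational counts are hypothetical placeholders, not the study's actual frequencies.

```python
# Minimal sketch: testing whether two samples differ in occupational composition.
# The counts below are hypothetical placeholders, not the study's actual data.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: Korean sample, U.S. sample; columns: hypothetical occupational categories,
# e.g., [employed, self-employed, student, other].
counts = np.array([
    [120,  0, 60, 20],   # Korea (0 self-employed, as noted in the text)
    [ 90, 62, 30, 18],   # USA
])

chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
# A small p-value would indicate that occupational distributions differ between
# the samples, flagging a potential confound for cross-group comparisons.
```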

Biography

Yujin Kim

Feb. 2018 : B.B.A. degree, Chungbuk National University

Mar. 2018~Current : Ph.D. candidate, Chungbuk National University

<Research Interests> Service operations management, internet and mobile services, information technology policy

[ORCID:0009-0009-8325-8720]

Biography

Hyung-Seok Lee

Feb. 1996 : B.E. degree, Kwangwoon University

Feb. 2000 : M.S. degree, Korea University

Feb. 2003 : Ph.D. degree, Korea University

Sep. 2011~Current : Professor, School of Business, Chungbuk National University

<Research Interests> Service operations management, internet and mobile services, information technology policy

[ORCID:0009-0007-3090-9126]

References

  • 1 E. C. Ling, I. Tussyadiah, A. Liu, and J. Stienmetz, "Perceived intelligence of artificially intelligent assistants for travel: Scale development and validation," J. Travel Res., vol. 64, no. 2, Dec. 2023. (https://doi.org/10.1177/00472875231217899)doi:[[[10.1177/00472875231217899]]]
  • 2 D. K. Kanbach, L. Heiduk, G. Blueher, M. Schreiter, and A. Lahmann, "The GenAI is out of the bottle: Generative artificial intelligence from a business model innovation perspective," Rev. Manag. Sci., vol. 18, no. 4, pp. 1189-1220, Sep. 2023. (https://doi.org/10.1007/s11846-023-00696-z)doi:[[[10.1007/s11846-023-00696-z]]]
  • 3 Y. Bang, S. Cahyawijaya, N. Lee, W. Dai, D. Su, B. Wilie, H. Lovenia, Z. Ji, T. Yu, and W. Chung, "A multitask, multilingual, multimodal evaluation of ChatGPT on reasoning, hallucination, and interactivity," in Proc. 13th Int. Joint Conf. Natural Lang. Process. and 3rd Conf. Asia-Pacific Chapter Assoc. Comput. Linguistics (Vol. 1: Long Papers), Nov. 2023.custom:[[[-]]]
  • 4 D. Kalla, N. Smith, F. Samaah, and S. Kuraku, "Study and analysis of ChatGPT and its impact on different fields of study," Int. J. Innov. Sci. Res. Technol., vol. 8, no. 3, Mar. 2023.custom:[[[-]]]
  • 5 M. Gupta, C. Akiri, K. Aryal, E. Parker, and L. Praharaj, "From ChatGPT to ThreatGPT: Impact of generative AI in cybersecurity and privacy," IEEE Access, vol. 11, pp. 80218-80245, Aug. 2023.custom:[[[-]]]
  • 6 X. Wu, R. Duan, and J. Ni, "Unveiling security, privacy, and ethical concerns of ChatGPT," J. Inf. Intell., vol. 2, no. 2, pp. 102-115, Mar. 2024. (https://doi.org/10.1016/j.jiixd.2023.10.007)doi:[[[10.1016/j.jiixd.2023.10.007]]]
  • 7 G. Bansal, F. M. Zahedi, and D. Gefen, "The role of privacy assurance mechanisms in building trust and the moderating role of privacy concern," Eur. J. Inf. Syst., vol. 24, no. 6, pp. 624-644, Dec. 2017. (https://doi.org/10.1057/ejis.2014.41)doi:[[[10.1057/ejis.2014.41]]]
  • 8 H. Ehrari, F. Ulrich, and H. B. Andersen, "Concerns and trade-offs in information technology acceptance: The balance between the requirement for privacy and the desire for safety," Commun. Assoc. Inf. Syst., vol. 47, no. 1, pp. 227-247, Nov. 2020. (https://doi.org/10.17705/1cais.04711)doi:[[[10.17705/1cais.04711]]]
  • 9 F. A. Silva, A. S. Shojaei, and B. Barbosa, "Chatbot-based services: A study on customers’ reuse intention," J. Theor. Appl. Electron. Commer. Res., vol. 18, no. 1, pp. 457-474, Mar. 2023. (https://doi.org/10.3390/jtaer18010024)doi:[[[10.3390/jtaer18010024]]]
  • 10 A. M. Chircu, G. B. Davis, and R. J. Kauffman, "Trust, expertise, and e-commerce intermediary adoption," in Proc. AMCIS 2000, p. 405, Long Beach, CA, USA, Feb. 2000.custom:[[[-]]]
  • 11 D. Gefen and D. Straub, "Managing user trust in B2C e-services," e-Service, vol. 2, no. 2, pp. 7-24, Mar. 2003.custom:[[[-]]]
  • 12 P. A. Pavlou, "Consumer acceptance of electronic commerce: Integrating trust and risk with the technology acceptance model," Int. J. Electron. Commer., vol. 7, no. 3, pp. 101-134, Dec. 2003.custom:[[[-]]]
  • 13 D. Menon and K. Shilpa, "‘Chatting with ChatGPT’: Analyzing the factors influencing users' intention to use the OpenAI's ChatGPT using the UTAUT model," Heliyon, vol. 9, no. 11, p. e20962, Nov. 2023. (https://doi.org/10.1016/j.heliyon.2023.e20962)doi:[[[10.1016/j.heliyon.2023.e20962]]]
  • 14 M.-N. Chu, "Assessing the benefits of ChatGPT for business: An empirical study on organizational performance," IEEE Access, vol. 11, pp. 76427-76436, Jul. 2023. (https://doi.org/10.1109/access.2023.3297447)doi:[[[10.1109/access.2023.3297447]]]
  • 15 G. Linna, "The impact of ChatGPT on enterprise competitive intelligence systems," Inf. Syst. Econ., vol. 4, no. 9, pp. 62-69, Nov. 2023. (https://doi.org/10.23977/infse.2023.040909)doi:[[[10.23977/infse.2023.040909]]]
  • 16 P. P. Ray, "ChatGPT: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope," Internet Things Cyber-Phys. Syst., vol. 3, pp. 121-154, Apr. 2023. (https://doi.org/10.1016/j.iotcps.2023.04.003)doi:[[[10.1016/j.iotcps.2023.04.003]]]
  • 17 M. H. Temsah, A. Jamal, K. Alhasan, F. Aljamaan, I. Altamimi, K. H. Malki, A. Temsah, R. Ohannessian, and A. Al-Eyadhy, "Transforming virtual healthcare: The potentials of ChatGPT-4omni in telemedicine," Cureus, vol. 16, no. 5, p. e61377, May 2024. (https://doi.org/10.7759/cureus.61377)doi:[[[10.7759/cureus.61377]]]
  • 18 Z. Feng, B. Li, and F. Liu, "A first look at financial data analysis using ChatGPT-4o," SSRN, May 2024. (Online). Available: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4849578custom:[[[https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4849578]]]
  • 19 S. Pang, E. Nol, and K. Heng, "ChatGPT-4o for English language teaching and learning: Features, applications, and future prospects," SSRN, May 2024. (Online). Available: http://dx.doi.org/10.2139/ssrn.4837988custom:[[[10.2139/ssrn.4837988]]]
  • 20 M. Sallam, K. Al-Salahat, H. Eid, J. Egger, and B. Puladi, "Human versus artificial intelligence: ChatGPT-4 outperforming Bing, Bard, ChatGPT-3.5, and humans in clinical chemistry multiple-choice questions," Adv. Med. Educ. Pract., vol. 15, pp. 857-871, Sep. 2024. (https://doi.org/10.2147/AMEP.S479801)doi:[[[10.2147/AMEP.S479801]]]
  • 21 H. H. Hochmair, L. Juhasz, and T. Kemp, "Correctness comparison of ChatGPT-4, bard, claude-2, and copilot for spatial tasks," Trans. GIS, Aug. 2024. (https://doi.org/10.1111/tgis.13233)doi:[[[10.1111/tgis.13233]]]
  • 22 B. R. Duffy, "Anthropomorphism and the social robot," Robot. Auton. Syst., vol. 42, no. 3-4, pp. 177-190, Mar. 2003. (https://doi.org/10.1016/S0921-8890(02)00374-3)doi:[[[10.1016/S0921-8890(02]]]
  • 23 N. Epley, A. Waytz, and J. T. Cacioppo, "On seeing human: A three-factor theory of anthropomorphism," Psychol. Rev., vol. 114, no. 4, pp. 864-886, Oct. 2007. (https://doi.org/10.1037/0033-295X.114.4.864)doi:[[[10.1037/0033-295X.114.4.864]]]
  • 24 Y. Li and S. O. Lee, "Navigating the generative AI travel landscape: The influence of ChatGPT on the evolution from new users to loyal adopters," Int. J. Contemp. Hosp. Manag., vol. 37, no. 4, Apr. 2025. (https://doi.org/10.1108/ijchm-11-2023-1767)doi:[[[10.1108/ijchm-11-2023-1767]]]
  • 25 T. Zhou and S. Li, "Understanding user switch of information seeking: From search engines to generative AI," J. Librariansh. Inf. Sci., Apr. 2024. (https://doi.org/10.1177/09610006241244800)doi:[[[10.1177/09610006241244800]]]
  • 26 A. El-Ansari and A. Beni-Hssane, "Sentiment analysis for personalized chatbots in e-commerce applications," Wirel. Pers. Commun., vol. 129, no. 3, pp. 1623-1644, Feb. 2023. (https://doi.org/10.1007/s11277-023-10199-5)doi:[[[10.1007/s11277-023-10199-5]]]
  • 27 Y. Shen, L. Heacock, J. Elias, K. D. Hentel, B. Reig, G. Shih, and L. Moy, "ChatGPT and other large language models are double-edged swords," Radiology, vol. 307, no. 2, p. e230163, Apr. 2023. (https://doi.org/10.1148/radiol.230163)doi:[[[10.1148/radiol.230163]]]
  • 28 E. N. Sari and L. Alfansi, "Elevating satisfaction: Unleashing the power of ChatGPT with personalization, relevance, accuracy, convenience, and tech familiarity," Manajemen dan Bisnis, vol. 23, no. 1, pp. 93-106, Mar. 2024.custom:[[[-]]]
  • 29 M. Farrokhnia, S. K. Banihashem, O. Noroozi, and A. Wals, "A SWOT analysis of ChatGPT: Implications for educational practice and research," Innov. Educ. Teach. Int., vol. 61, no. 3, pp. 460-474, Mar. 2023. (https://doi.org/10.1080/14703297.2023.2195846)doi:[[[10.1080/14703297.2023.2195846]]]
  • 30 M. Aljanabi, "ChatGPT: Future directions and open possibilities," Mesopotamian J. Cyber Secur., pp. 16-17, Jan. 2023. (https://doi.org/10.58496/mjcs/2023/003)doi:[[[10.58496/mjcs/2023/003]]]
  • 31 T. Chong, T. Yu, D. I. Keeling, and K. de Ruyter, "AI-chatbots on the services frontline addressing the challenges and opportunities of agency," J. Retail. Consum. Serv., vol. 63, p. 102735, Nov. 2021. (https://doi.org/10.1016/j.jretconser.2021.102735)doi:[[[10.1016/j.jretconser.2021.102735]]]
  • 32 M. Raees, I. Meijerink, I. Lykourentzou, V.-J. Khan, and K. Papangelis, "From explainable to interactive AI: A literature review on current trends in human-AI interaction," Int. J. Hum.-Comput. Stud., vol. 189, p. 103301, Sep. 2024.custom:[[[-]]]
  • 33 S. Panda and N. Kaur, "Exploring the viability of ChatGPT as an alternative to traditional chatbot systems in library and information centers," Library Hi Tech News, vol. 40, no. 3, pp. 22-25, Mar. 2023. (https://doi.org/10.1108/lhtn-02-2023-0032)doi:[[[10.1108/lhtn-02-2023-0032]]]
  • 34 C. Pelau, D.-C. Dabija, and I. Ene, "What makes an AI device human-like? The role of interaction quality, empathy and perceived psychological anthropomorphic characteristics in the acceptance of artificial intelligence in the service industry," Comput. Hum. Behav., vol. 122, p. 106855, Sep. 2021. (https://doi.org/10.1016/j.chb.2021.106855)doi:[[[10.1016/j.chb.2021.106855]]]
  • 35 L. Ciechanowski, A. Przegalinska, M. Magnuski, and P. Gloor, "In the shades of the uncanny valley: An experimental study of human-chatbot interaction," Future Gener. Comput. Syst., vol. 92, pp. 539-548, Mar. 2019. (https://doi.org/10.1016/j.future.2018.01.055)doi:[[[10.1016/j.future.2018.01.055]]]
  • 36 Y. Kim, J. Lee, S. Kim, J. Park, and J. Kim, "Understanding users’ dissatisfaction with ChatGPT responses: Types, resolving tactics, and the effect of knowledge level," in Proc. 29th Int. Conf. Intell. User Interfaces, Greenville, SC, USA, Apr. 2024.custom:[[[-]]]
  • 37 Y. Liu, Y. Yao, J.-F. Ton, X. Zhang, R. G. H. Cheng, Y. Klochkov, M. F. Taufiq, and H. Li, "Trustworthy LLMs: A survey and guideline for evaluating large language models' alignment," arXiv preprint arXiv:2308.05374, Aug. 2023. (Online). Available: https://arxiv.org/abs/2308.05374custom:[[[https://arxiv.org/abs/2308.05374]]]
  • 38 Z. Ji, N. Lee, R. Frieske, T. Yu, D. Su, Y. Xu, E. Ishii, Y. J. Bang, A. Madotto, and P. Fung, "Survey of hallucination in natural language generation," ACM Comput. Surv., vol. 55, no. 12, pp. 1-38, Mar. 2023.custom:[[[-]]]
  • 39 D. Halloran, S. Manchester, J. Moriarty, R. Riley, J. Rohrman, and T. Skramstad, "Systems development quality control," MIS Q., vol. 2, no. 4, pp. 1-13, Dec. 1978.custom:[[[-]]]
  • 40 R. R. Nelson, P. A. Todd, and B. H. Wixom, "Antecedents of information and system quality: An empirical examination within the context of data warehousing," J. Manage. Inf. Syst., vol. 21, no. 4, pp. 199-235, Dec. 2005.custom:[[[-]]]
  • 41 J.-W. Hsia and A.-H. Tseng, "An enhanced technology acceptance model for e-learning systems in high-tech companies in Taiwan: Analyzed by structural equation modeling," in Proc. 2008 Int. Conf. Cyberworlds, pp. 39-44, Hangzhou, China, Sep. 2008.custom:[[[-]]]
  • 42 C. Yu, J. Yan, and N. Cai, "ChatGPT in higher education: Factors influencing ChatGPT user satisfaction and continued use intention," Front. Educ., vol. 9, p. 1354929, Apr. 2024. (https://doi.org/10.3389/feduc.2024.1354929)doi:[[[10.3389/feduc.2024.1354929]]]
  • 43 A. Pathak, "Exploring ChatGPT: An extensive examination of its background, applications, key challenges, bias, ethics, limitations, and future prospects," SSRN Electron. J., Jul. 2023. (http://dx.doi.org/10.2139/ssrn.4499278)doi:[[[10.2139/ssrn.4499278]]]
  • 44 B. Foroughi, A. Iranmanesh, H. Hyun, and J. Kim, "Determinants of intention to use ChatGPT for educational purposes: Findings from PLS-SEM and fsQCA," Int. J. Hum.-Comput. Interact., pp. 1-20, Jul. 2023. (https://doi.org/10.1080/10447318.2023.2226495)doi:[[[10.1080/10447318.2023.2226495]]]
  • 45 A. Gulati, H. Saini, S. Singh, and V. Kumar, "Enhancing learning potential: Investigating marketing students’ behavioral intentions to adopt ChatGPT," Mark. Educ. Rev., vol. 34, no. 3, pp. 1-34, May 2024. (https://doi.org/10.1080/10528008.2023.2300139)doi:[[[10.1080/10528008.2023.2300139]]]
  • 46 M. J. Mygland, S. Mikalef, and L. A. Fjørtoft, "Affordances in human-chatbot interaction: A review of the literature," in Proc. 20th IFIP WG 6.11 Conf. e-Business, e-Services and e-Society (I3E 2021), pp. 234-246, Galway, Ireland, Sep. 2021.custom:[[[-]]]
  • 47 J. P. Choi, D.-S. Jeon, and B.-C. Kim, "Privacy and personal data collection with information externalities," J. Public Econ., vol. 173, pp. 113-124, Sep. 2019.custom:[[[-]]]
  • 48 K. M. Siegel, "Protecting the most valuable corporate asset: Electronic data, identity theft, personal information, and the role of data security in the information age," Penn St. Law Rev., vol. 111, pp. 779-824, Mar. 2006.custom:[[[-]]]
  • 49 S. A. Khowaja, A. Baabdullah, F. Alzahrani, A. W. Shabbir, and M. N. Alotaibi, "ChatGPT needs SPADE (sustainability, privacy, digital divide, and ethics) evaluation: A review," Cogn. Comput., vol. 16, no. 5, pp. 2528-2550, May 2024. (https://doi.org/10.1007/s12559-024-10285-1)doi:[[[10.1007/s12559-024-10285-1]]]
  • 50 T. Dinev and P. Hart, "An extended privacy calculus model for e-commerce transactions," Inf. Syst. Res., vol. 17, no. 1, pp. 61-80, Mar. 2006. (https://doi.org/10.1287/isre.1060.0080)doi:[[[10.1287/isre.1060.0080]]]
  • 51 G. Sebastian, "Privacy and data protection in ChatGPT and other AI chatbots," Int. J. Secur. Priv. Pervasive Comput., vol. 15, no. 1, pp. 1-14, Jan. 2023. (https://doi.org/10.4018/ijsppc.325475)doi:[[[10.4018/ijsppc.325475]]]
  • 52 G. R. Milne and M.-E. Boza, "Trust and concern in consumers’ perceptions of marketing information management practices," J. Interact. Mark., vol. 13, no. 1, pp. 5-24, Feb. 1999.custom:[[[-]]]
  • 53 R. C. Mayer, J. H. Davis, and F. D. Schoorman, "An integrative model of organizational trust," Acad. Manage. Rev., vol. 20, no. 3, pp. 709-734, Jul. 1995.custom:[[[-]]]
  • 54 S. Alagarsamy and S. Mehrolia, "Exploring chatbot trust: Antecedents and behavioural outcomes," Heliyon, vol. 9, no. 5, p. e16074, May 2023. (https://doi.org/10.1016/j.heliyon.2023.e16074)doi:[[[10.1016/j.heliyon.2023.e16074]]]
  • 55 N. Castelo, M. W. Bos, and D. R. Lehmann, "Task-dependent algorithm aversion," J. Mark. Res., vol. 56, no. 5, pp. 809-825, Oct. 2019. (https://doi.org/10.1177/0022243719851788)doi:[[[10.1177/0022243719851788]]]
  • 56 M. Ramrath, A. Starke, F. Zickfeld, and J. M. Zickfeld, "Trust in AI chatbots: The perceived expertise of ChatGPT in subjective and objective tasks," in Proc. HHAI 2024: Hybrid Human AI Syst. for the Social Good, pp. 55-67, Jun. 2024. (https://doi.org/10.3233/FAIA240200)doi:[[[10.3233/FAIA240200]]]
  • 57 K. A. Hoff and M. Bashir, "Trust in automation: Integrating empirical evidence on factors that influence trust," Hum. Factors, vol. 57, no. 3, pp. 407-434, May 2015. (https://doi.org/10.1177/0018720814547570)doi:[[[10.1177/0018720814547570]]]
  • 58 D. Kerstetter and M.-H. Cho, "Prior knowledge, credibility and information search," Ann. Tour. Res., vol. 31, no. 4, pp. 961-985, Oct. 2004. (https://doi.org/10.1016/j.annals.2004.04.002)doi:[[[10.1016/j.annals.2004.04.002]]]
  • 59 A. Bhattacherjee and C. Sanford, "Influence processes for information technology acceptance: An elaboration likelihood model," MIS Q., vol. 30, no. 4, pp. 805-825, Dec. 2006.custom:[[[-]]]
  • 60 O. Bein, M. Trzewik, and A. Maril, "The role of prior knowledge in incremental associative learning: An empirical and computational approach," J. Mem. Lang., vol. 107, pp. 1-24, Aug. 2019. (https://doi.org/10.1016/j.jml.2019.03.006)doi:[[[10.1016/j.jml.2019.03.006]]]
  • 61 U. Chakraborty and S. K. Biswal, "Is ChatGPT a responsible communication: A study on the credibility and adoption of conversational artificial intelligence," J. Promot. Manag., vol. 30, no. 6, pp. 929-958, Mar. 2024. (https://doi.org/10.1080/10496491.2024.2332987)doi:[[[10.1080/10496491.2024.2332987]]]
  • 62 Y. K. Dwivedi, et al., "So what if ChatGPT wrote it? Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy," Int. J. Inf. Manage., vol. 71, p. 102642, Oct. 2023. (https://doi.org/10.1016/j.ijinfomgt.2023.102642)doi:[[[10.1016/j.ijinfomgt.2023.102642]]]
  • 63 Y. J. Park, "Digital literacy and privacy behavior online," Commun. Res., vol. 40, no. 2, pp. 215-236, Mar. 2011. (https://doi.org/10.1177/0093650211418338)doi:[[[10.1177/0093650211418338]]]
  • 64 W. Wang and M. Wang, "Effects of sponsorship disclosure on perceived integrity of biased recommendation agents: Psychological contract violation and knowledge-based trust perspectives," Inf. Syst. Res., vol. 30, no. 2, pp. 507-522, Jun. 2019. (https://doi.org/10.1287/isre.2018.0811)doi:[[[10.1287/isre.2018.0811]]]
  • 65 D. Long and B. Magerko, "What is AI literacy? Competencies and design considerations," in Proc. 2020 CHI Conf. Hum. Factors Comput. Syst., pp. 1-16, Honolulu, HI, USA, Apr. 2020.custom:[[[-]]]
  • 66 D. T. K. Ng, J. K. L. Leung, S. K. W. Chu, and M. S. Qiao, "Conceptualizing AI literacy: An exploratory review," Comput. Educ.: Artif. Intell., vol. 2, p. 100041, Nov. 2021. (https://doi.org/10.1016/j.caeai.2021.100041)doi:[[[10.1016/j.caeai.2021.100041]]]
  • 67 C. Wienrich and M. E. Latoschik, "Extended artificial intelligence: New prospects of human-AI interaction research," Front. Virtual Real., vol. 2, p. 686783, Jun. 2021. (https://doi.org/10.3389/frvir.2021.686783)doi:[[[10.3389/frvir.2021.686783]]]
  • 68 A. Carolus, M. J. Koch, S. Straka, M. E. Latoschik, and C. Wienrich, "MAILS-Meta AI literacy scale: Development and testing of an AI literacy questionnaire based on well-founded competency models and psychological change-and meta-competencies," Comput. Hum. Behav.: Artif. Humans, vol. 1, no. 2, p. 100014, Dec. 2023.custom:[[[-]]]
  • 69 A. Carolus, Y. Augustin, A. Markus, and C. Wienrich, "Digital interaction literacy model Conceptualizing competencies for literate interactions with voice-based AI systems," Comput. Educ.: Artif. Intell., vol. 4, p. 100114, Jan. 2023. (https://doi.org/10.1016/j.caeai.2022.100114)doi:[[[10.1016/j.caeai.2022.100114]]]
  • 70 T. Lintner, "A systematic review of AI literacy scales," npj Sci. Learn., vol. 9, no. 1, p. 50, Aug. 2024. (https://doi.org/10.1038/s41539-024-00264-4)doi:[[[10.1038/s41539-024-00264-4]]]
  • 71 C. Lutz, "Digital inequalities in the age of artificial intelligence and big data," Hum. Behav. Emerg. Technol., vol. 1, no. 2, pp. 141-148, Apr. 2019. (https://doi.org/10.1002/hbe2.140)doi:[[[10.1002/hbe2.140]]]
  • 72 A. M. Al-Abdullatif and M. A. Alsubaie, "ChatGPT in learning: Assessing students' use intentions through the lens of perceived value and the influence of AI literacy," Behav. Sci., vol. 14, no. 9, p. 845, Sep. 2024. (https://doi.org/10.3390/bs14090845)doi:[[[10.3390/bs14090845]]]
  • 73 N. Gillani, R. Eynon, C. Chiabaut, and K. Finkel, "Unpacking the ‘black box’ of AI in education," Educ. Technol. Soc., vol. 26, no. 1, pp. 99-111, Jan. 2023.custom:[[[-]]]
  • 74 R. Agarwal and J. Prasad, "A conceptual and operational definition of personal innovativeness in the domain of information technology," Inf. Syst. Res., vol. 9, no. 2, pp. 204-215, Jun. 1998. (https://doi.org/10.1287/isre.9.2.204)doi:[[[10.1287/isre.9.2.204]]]
  • 75 I. Brusch and N. Rappel, "Exploring the acceptance of instant shopping - An empirical analysis of the determinants of user intention," J. Retail. Consum. Serv., vol. 54, p. 101936, Jan. 2020. (https://doi.org/10.1016/j.jretconser.2019.101936)doi:[[[10.1016/j.jretconser.2019.101936]]]
  • 76 O. A. Gansser and C. S. Reich, "A new acceptance model for artificial intelligence with extensions to UTAUT2: An empirical study in three segments of application," Technol. Soc., vol. 65, p. 101535, Feb. 2021. (https://doi.org/10.1016/j.techsoc.2021.101535)doi:[[[10.1016/j.techsoc.2021.101535]]]
  • 77 A. Strzelecki, "To use or not to use ChatGPT in higher education? A study of students’ acceptance and use of technology," Interact. Learn. Environ., pp. 1-14, May 2023. (https://doi.org/10.1080/10494820.2023.2209881)doi:[[[10.1080/10494820.2023.2209881]]]
  • 78 A. Biloš and B. Budimir, "Understanding the adoption dynamics of ChatGPT among Generation Z: Insights from a modified UTAUT2 model," J. Theor. Appl. Electron. Commer. Res., vol. 19, no. 2, pp. 863-879, Apr. 2024. (https://doi.org/10.3390/jtaer19020045)doi:[[[10.3390/jtaer19020045]]]
  • 79 S. J. Schleien and K. D. Miller, "Diffusion of innovation: A roadmap for inclusive community recreation services," Res. Pract. Pers. Severe Disabil., vol. 35, no. 3-4, pp. 93-101, Sep. 2010.custom:[[[-]]]
  • 80 F. Meng, X. Guo, Z. Peng, X. Zhang, and K.-h. Lai, "Understanding the antecedents of the routine use of mobile health services: A person-technology-health framework," Front. Psychol., vol. 13, p. 879760, Jun. 2022. (https://doi.org/10.3389/fpsyg.2022.879760)doi:[[[10.3389/fpsyg.2022.879760]]]
  • 81 H. Khazaei and M. A. Tareq, "Moderating effects of personal innovativeness and driving experience on factors influencing adoption of BEVs in Malaysia: An integrated SEM-BSEM approach," Heliyon, vol. 7, no. 9, p. e08072, Sep. 2021. (https://doi.org/10.1016/j.heliyon.2021.e08072)doi:[[[10.1016/j.heliyon.2021.e08072]]]
  • 82 A. Adyantari, "The impact of anthropomorphized chatbot on privacy concern, attitude toward advertisement, and intention to give personal information," Manajemen Dewantara, vol. 6, no. 2, pp. 236-245, Aug. 2022.custom:[[[-]]]
  • 83 C. Ischen, T. Araujo, H. Voorveld, G. van Noort, and E. Smit, "Privacy concerns in chatbot interactions," in Proc. Chatbot Res. Design, pp. 34-48, Jan. 2020.custom:[[[-]]]
  • 84 L. Mo, X. Zhang, Y. Lin, Z. Yuan, and Z. Peng, "Consumers’ attitudes towards online advertising: A model of personalization, informativeness, privacy concern and flow experience," Sustainability, vol. 15, no. 5, p. 4090, Feb. 2023. (https://doi.org/10.3390/su15054090)doi:[[[10.3390/su15054090]]]
  • 85 M. Hasal, J. Nowaková, K. A. Saghair, H. Abdulla, V. Snášel, and L. Ogiela, "Chatbots: Security, privacy, data protection, and social aspects," Concurrency Comput.: Pract. Exper., vol. 33, no. 19, p. e6426, Jun. 2021. (https://doi.org/10.1002/cpe.6426)doi:[[[10.1002/cpe.6426]]]
  • 86 O. Turel, Y. Yuan, and C. E. Connelly, "In justice we trust: Predicting user acceptance of e-customer services," J. Manage. Inf. Syst., vol. 24, no. 4, pp. 123-151, Dec. 2014. (https://doi.org/10.2753/mis0742-1222240405)doi:[[[10.2753/mis0742-1222240405]]]
  • 87 T. Zhou, "Understanding location-based services users’ privacy concern," Internet Res., vol. 27, no. 3, pp. 506-519, Jun. 2017. (https://doi.org/10.1108/IntR-04-2016-0088)doi:[[[10.1108/IntR-04-2016-0088]]]
  • 88 Y. Jiang, et al., "Pervasive user data collection from cyberspace: Privacy concerns and countermeasures," Cryptography, vol. 8, no. 1, p. 5, Jan. 2024. (https://doi.org/10.3390/cryptography8010005)doi:[[[10.3390/cryptography8010005]]]
  • 89 A. Y. Alsabawy, A. Cater-Steel, and J. Soar, "Identifying the determinants of e-learning service delivery quality," in Proc. 23rd Australasian Conf. Inf. Syst. (ACIS), Dec. 2012.custom:[[[-]]]
  • 90 Y. Jung, C. Chen, E. Jang, and S. S. Sundar, "Do we trust ChatGPT as much as Google search and Wikipedia?," in Proc. Extended Abstracts CHI Conf. Hum. Factors Comput. Syst., May 2024.custom:[[[-]]]
  • 91 J. Li, et al., "Determinants affecting consumer trust in communication with AI chatbots," J. Organ. End User Comput., vol. 35, no. 1, pp. 1-24, Jan. 2023. (https://doi.org/10.4018/joeuc.328089)doi:[[[10.4018/joeuc.328089]]]
  • 92 B. Debatin, J. P. Lovejoy, A.-K. Horn, and B. N. Hughes, "Facebook and online privacy: Attitudes, behaviors, and unintended consequences," J. Comput.-Mediat. Commun., vol. 15, no. 1, pp. 83-108, Oct. 2009. (https://doi.org/10.1111/j.1083-6101.2009.01494.x)doi:[[[10.1111/j.1083-6101.2009.01494.x]]]
  • 93 T. Lee, B.-K. Lee, and S. Lee-Geiller, "The effects of information literacy on trust in government websites: Evidence from an online experiment," Int. J. Inf. Manage., vol. 52, p.102098, Jun. 2020. (https://doi.org/10.1016/j.ijinfomgt.2020.102098)doi:[[[10.1016/j.ijinfomgt.2020.102098]]]
  • 94 M. Y.-C. Lin, B.-R. Do, T. T. Nguyen, and J. M.-S. Cheng, "Effects of personal innovativeness and perceived value of disclosure on privacy concerns in proximity marketing: Self-control as a moderator," J. Res. Interact. Mark., vol. 16, no. 2, pp. 310-327, Aug. 2021. (https://doi.org/10.1108/jrim-04-2021-0112)doi:[[[10.1108/jrim-04-2021-0112]]]
  • 95 S. Kumar, P. Kumar, and B. Bhasker, "Interplay between trust, information privacy concerns and behavioural intention of users on online social networks," Behav. Inf. Technol., vol. 37, no. 6, pp. 622-633, May 2018. (https://doi.org/10.1080/0144929x.2018.1470671)doi:[[[10.1080/0144929x.2018.1470671]]]
  • 96 G. Bansal, F. M. Zahedi, and D. Gefen, "Do context and personality matter? Trust and privacy concerns in disclosing private information online," Inf. Manage., vol. 53, no. 1, pp. 1-21, Jan. 2016. (https://doi.org/10.1016/j.im.2015.08.001)doi:[[[10.1016/j.im.2015.08.001]]]
  • 97 D. Gefen, E. Karahanna, and W. D. Straub, "Trust and TAM in online shopping: An integrated model," MIS Q., vol. 27, no. 1, pp. 51-90, Mar. 2003. (https://doi.org/10.2307/30036519)doi:[[[10.2307/30036519]]]
  • 98 N. K. Malhotra, S. S. Kim, and J. Agarwal, "Internet users' information privacy concerns (IUIPC): The construct, the scale, and a causal model," Inf. Syst. Res., vol. 15, no. 4, pp. 336-355, Dec. 2004. (https://doi.org/10.1287/isre.1040.0032)doi:[[[10.1287/isre.1040.0032]]]
  • 99 H. J. Smith, T. Dinev, and H. Xu, "Information privacy research: An interdisciplinary review," MIS Q., vol. 35, no. 4, pp. 989-1015, Dec. 2011. (https://doi.org/10.2307/41409970)doi:[[[10.2307/41409970]]]
  • 100 H. Jafari, et al., "In ChatGPT we trust? Unveiling the dynamics of reuse intention and trust towards generative AI chatbots among Iranians," InfoSci Trends, vol. 1, no. 3, pp. 56-72, Sep. 2024. (https://doi.org/10.61186/ist.202401.01.17)doi:[[[10.61186/ist.202401.01.17]]]
  • 101 C. Li and P. Y. K. Chau, "Leveraging communication tools to reduce consumers’ privacy concern in the on-demand services: An extended S-O-R model of perceived control and structural assurance," in Proc. PACIS 2019, USA, 2019.custom:[[[-]]]
  • 102 A. Polyportis and N. Pahos, "Understanding students’ adoption of the ChatGPT chatbot in higher education: The role of anthropomorphism, trust, design novelty and institutional policy," Behav. Inf. Technol., pp. 1-22, Feb. 2024. (https://doi.org/10.1080/0144929x.2024.2317364)doi:[[[10.1080/0144929x.2024.2317364]]]
  • 103 J. A. Penen, "‘Are you OK?’ Students’ trust in a chatbot providing support opportunities," in Learn. and Collaboration Technol.: Games and Virtual Environ. for Learn., pp. 199-215, Springer, Jul. 2021.custom:[[[-]]]
  • 104 M. M. H. Emon, et al., "Predicting adoption intention of artificial intelligence," AIUB J. Sci. Eng. (AJSE), vol. 22, no. 2, pp. 189-199, Aug. 2023.custom:[[[-]]]
  • 105 A. Choudhury and H. Shamszare, "Investigating the impact of user trust on the adoption and use of ChatGPT: Survey analysis," J. Med. Internet Res., vol. 25, p. e47184, Jun. 2023. (https://doi.org/10.2196/47184)doi:[[[10.2196/47184]]]
  • 106 D. Ng, W. Luo, H. Chan, and S. Chu, "An examination on primary students’ development in AI literacy through digital story writing," Comput. Educ.: Artif. Intell., vol. 4, p. 100054, Feb. 2022. (https://doi.org/10.1016/j.caeai.2022.100054)doi:[[[10.1016/j.caeai.2022.100054]]]
  • 107 M. S. Rahman, et al., "Examining students’ intention to use ChatGPT: Does trust matter?," Australas. J. Educ. Technol., vol. 39, no. 6, pp. 51-71, Dec. 2022.custom:[[[-]]]
  • 108 L. M. de Cosmo, L. Piper, and A. Di Vittorio, "The role of attitude toward chatbots and privacy concern on the relationship between attitude toward mobile advertising and behavioral intent to use chatbots," Ital. J. Mark., vol. 2021, no. 1-2, pp. 83-102, Mar. 2021. (https://doi.org/10.1007/s43039-021-00020-1)doi:[[[10.1007/s43039-021-00020-1]]]
  • 109 A. Widener and S. Lim, "Need to belong, privacy concerns and self-disclosure in AI chatbot interaction," J. Digit. Contents Soc., vol. 21, no. 12, pp. 2203-2210, Dec. 2020. (https://doi.org/10.9728/dcs.2020.21.12.2203)doi:[[[10.9728/dcs.2020.21.12.2203]]]
  • 110 V. Venkatesh, J. Y. Thong, and X. Xu, "Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology," MIS Q., vol. 36, no. 1, pp. 157-178, Mar. 2012. (https://doi.org/10.2307/41410412)doi:[[[10.2307/41410412]]]
  • 111 H. J. Smith, S. J. Milberg, and S. J. Burke, "Information privacy: Measuring individuals' concerns about organizational practices," MIS Q., vol. 20, no. 2, pp. 167-196, Jun. 1996. (https://doi.org/10.2307/249477)doi:[[[10.2307/249477]]]
  • 112 K. A. Stewart and A. H. Segars, "An empirical examination of the concern for information privacy instrument," Inf. Syst. Res., vol. 13, no. 1, pp. 36-49, Mar. 2002. (https://doi.org/10.1287/isre.13.1.36.97)doi:[[[10.1287/isre.13.1.36.97]]]
  • 113 H. S. Lee, Principle of Research Paper for Social Science, Hangkyung Publishing Co., 2008.custom:[[[-]]]
  • 114 J. Nunnally and I. Bernstein, Psychometric Theory, McGraw-Hill, New York, 1978.custom:[[[-]]]
  • 115 C. Fornell and D. F. Larcker, "Evaluating structural equation models with unobservable variables and measurement error," J. Mark. Res., vol. 18, no. 1, pp. 39-50, Feb. 1981.custom:[[[-]]]
  • 116 J. C. Anderson and D. W. Gerbing, "Structural equation modeling in practice: A review and recommended two-step approach," Psychol. Bull., vol. 103, no. 3, pp. 411-423, 1988. (https://doi.org/10.1177/002224378101800104)doi:[[[10.1177/002224378101800104]]]
  • 117 R. P. Bagozzi and Y. Yi, "On the evaluation of structural equation models," J. Acad. Mark. Sci., vol. 16, no. 1, pp. 74-94, Mar. 1988. (https://doi.org/10.1007/bf02723327)doi:[[[10.1007/bf02723327]]]
  • 118 J. Hair, R. Anderson, R. Tatham, and W. Black, Multivariate Data Analysis, 5th ed., Prentice-Hall Inc., USA, 1998.custom:[[[-]]]
  • 119 J. Yu, Structural Equation Model Concept and Understanding: Amos 4.0-20.0, Hannarae Academy, 2014.custom:[[[-]]]
  • 120 M. Tenenhaus, V. E. Vinzi, Y. M. Chatelin, and C. Lauro, "PLS path modeling," Comput. Stat. Data Anal., vol. 48, no. 1, pp. 159-205, Jan. 2005. (https://doi.org/10.1016/j.csda.2004.03.005)doi:[[[10.1016/j.csda.2004.03.005]]]

