Factors Affecting the Continuous Intention to Use ChatGPT: Evidence from Korea and USA

Yujin Kim♦, Hyung-Seok Lee°

Abstract: This study investigated the relationships among privacy concerns, trust, and continued use intention of ChatGPT users, with a focus on cultural differences between Korean and U.S. users. We analyzed survey data using structural equation modeling to explore how both ChatGPT's characteristics (e.g., anthropomorphism, personalization, interactivity, information accuracy, and system flexibility) and user traits (e.g., prior knowledge, AI literacy, and personal innovativeness) influence privacy and trust. Key findings reveal that cultural differences significantly moderate these relationships. For instance, Korean users perceive anthropomorphism as a trust-enhancing feature and link information accuracy to privacy, while U.S. users see interactivity as a greater privacy risk. Despite these differences, trust positively impacts continued use intention for both groups. This study offers valuable insights for AI service developers, highlighting the need to consider cultural context when designing services.

Keywords: ChatGPT, Generative AI, Privacy concerns, Trust

Ⅰ. Introduction

Artificial intelligence (AI) is profoundly permeating various sectors globally, causing dramatic shifts in both professional and personal lives[1]. While AI has been utilized in numerous fields for many years, generative AI represents a profound shift in AI technology advancement due to its user-friendly interfaces and high performance[2]. OpenAI's ChatGPT, for instance, surpassed 100 million users in an exceptionally short period, signifying the rapid progress of the technology. Such advancements have the potential to catalyze innovation in various domains, including education and healthcare, thereby enhancing the convenience and efficiency of our lives[3,4].
However, this rapid growth has escalated privacy concerns due to the increased collection and processing of user data. ChatGPT processes and stores vast amounts of data, which raises user concerns regarding the use and sharing of their information. Notably, OpenAI has not transparently disclosed with whom it shares the collected data or for what purposes the data are used; this opacity can undermine user trust[5,6]. Privacy concerns significantly affect technology adoption and the formation of trust[7,8]. If users distrust ChatGPT, it may negatively influence technology adoption and satisfaction, potentially leading to its rejection[9]. Previous studies have shown that trust is a critical factor in the adoption of new technologies, with increased privacy concerns being associated with lower trust and reduced adoption intention[7,10-12]. A review of existing studies on ChatGPT indicates that while significant research has focused on the factors influencing the adoption of ChatGPT technology and user satisfaction, there is a noticeable lack of studies addressing the factors impacting privacy concerns and trust during ChatGPT usage. Consequently, this study will analyze those factors within the context of ChatGPT usage and explore their effects on user behavior. The objective is to enhance the understanding of privacy concerns and trust experienced by users of generative AI technologies like ChatGPT and to contribute to the development of policies and strategies to improve these aspects. This research will advance our understanding of users' perceptions and attitudes toward privacy and trust in ChatGPT, thereby informing policy enhancements that promote a safer and more trustworthy environment for ChatGPT users.
Hence, this study aims to investigate the factors influencing privacy concerns and trust while using ChatGPT by categorizing them into ChatGPT-specific characteristics and individual user traits, and analyzing their relationship with the intention to continue using the technology. Moreover, this study will compare Korean and American user groups to assess the impact of cultural differences. It seeks to deepen insights into the relationship between users' privacy protection behaviors and trust in generative AI, with the expectation that these insights will contribute to enhancing the quality of services.

Ⅱ. Theoretical Background and Hypotheses

2.1 ChatGPT Characteristics

In December 2022, OpenAI introduced the conversational platform ChatGPT, which has been catalyzing innovative changes across various industries owing to its natural conversational abilities and advanced contextual understanding[6]. Since its launch, ChatGPT has significantly impacted various fields due to its exceptional ability to understand user inputs and provide responses tailored to specific needs[4,13]. Additionally, ChatGPT plays a key role in enhancing organizational efficiency by significantly improving system performance and information accuracy across multiple functions, including information gathering, analysis, prediction, and communication[14,15]. OpenAI has significantly enhanced the performance of its models, beginning with GPT-1.0 in 2018, which was trained with 117 million parameters, followed by GPT-2.0 in 2019 with 1.5 billion parameters, and culminating in GPT-3.0 in 2020 with 175 billion parameters[16]. On May 13, 2024, OpenAI introduced GPT-4o (Omni), which surpasses earlier models in speed and excels in real-time interaction. This model enables users to interact through voice and video, request problem-solving during live video sharing, and engage in real-time collaboration[17].
It has also been upgraded to support conversations in various voice and text styles, solve mathematical equations, and translate spoken languages in real time[18,19]. Moreover, according to Sallam, et al.[20], ChatGPT 4.0 also outperformed other AI models such as GPT-3.5, Bing, and Bard on multiple-choice questions in the field of medicinal chemistry. Hochmair, et al.[21] reported that ChatGPT 4.0 similarly surpassed other chatbots like Bard, Claude-2, and Copilot in tasks such as GIS theory and programming code interpretation. This study groups the characteristics of ChatGPT into five key factors and explores each to analyze its impact on users' privacy concerns and trust. The five characteristics are anthropomorphism, personalization, interactivity, information accuracy, and system flexibility.

2.1.1 Anthropomorphism

According to Duffy[22], anthropomorphism refers to the process of attributing human characteristics, emotions, or intentions to non-human entities, such as objects or AI systems. It facilitates user acceptance of new service technologies and enables individuals to adapt to unfamiliar situations[23,24]. ChatGPT can comprehend users' speech and adjust its tone to appear human-like, occasionally responding in engaging tones or even singing[19]. Such human-like attributes of ChatGPT foster a sense of companionship and support, strengthening emotional connections with users and enhancing the perceived value of ChatGPT[25].

2.1.2 Personalization

ChatGPT offers significant advantages in personalization[26]. It becomes an adaptive conversational agent that delivers context-specific information by learning from interactions with users[27,28]. ChatGPT retains information from previous exchanges to produce contextually appropriate responses based on prompts[29]. Furthermore, through continuous interaction, ChatGPT learns a user's language, tone, and style, thus understanding individual needs and preferences and providing personalized responses.
Over time, this leads to more precise and relevant answers, enhancing the quality of services and education[30].

2.1.3 Interactivity

Interactivity is defined as ChatGPT's ability to engage in two-way interactions, referring to the AI system's responsiveness and accuracy during user interactions[25,31]. Additionally, interactivity is a critical component in AI systems, enhancing user decision-making and transforming interaction dynamics between businesses and customers[25,32]. Traditional chatbots deliver predefined responses to specific inquiries based on preset knowledge, whereas ChatGPT can generate natural conversations by interacting with users and utilizing extensive text data. This capability leads to personalized responses and enhances interactivity[16,33]. Specifically, consumers tend to engage longer with human-like chatbots or AI devices[34,35]. ChatGPT enhances the experience by responding to queries in real time and maintaining conversation context, thanks to its high interactivity[19]. This interactivity enables users to engage in more natural and consistent conversations, contributing to increased user satisfaction and trust[26,29].

2.1.4 Information Accuracy

Large language models (LLMs) like ChatGPT, trained on extensive text datasets, encounter the issue of hallucination[3]. Hallucination describes a phenomenon where the model generates nonexistent information or responses irrelevant to the context[36]. Hallucination and misinformation are distinct concepts. According to Liu, et al.[37], misinformation refers to incorrect or biased responses due to flawed input data, whereas hallucination includes creating fictional content that conflicts with source material or presents unverifiable information. Such hallucination issues can erode user trust and lead to severe consequences, while misinformation may directly harm users[36,38]. Hence, enhancing the accuracy of ChatGPT's information is vital to improve user experience and trust.
2.1.5 System Flexibility

System flexibility refers to how well a system adapts to varying user needs and changing conditions[39]. It plays a pivotal role in customizing and personalizing the user experience, which boosts the perceived usefulness of technology and enhances user convenience[40,41]. Not only does ChatGPT meet the specific needs of various users, but it also adapts flexibly across a broad range of activities, continuously evolving to better address user needs[42,43]. Such flexibility is critical in determining system quality, playing an essential role in providing necessary information for diverse decision-making processes[40]. Representative studies on generative AI, such as ChatGPT, include the work of Li and Lee[24], who explored how ChatGPT affects user loyalty in the travel decision-making process based on theories of affordance and communication. They investigated components such as communication quality, personalization, anthropomorphism, and both cognitive and emotional trust. Zhou and Li[25] researched users' intentions to switch from search engines to generative AI using the Push-Pull-Mooring (PPM) model. Their study illustrated that AI-generated content's information quality and perceived interactivity affect users' intentions to switch from search engines to generative AI through perceived value. Foroughi, et al.[44] scrutinized factors affecting the intent to use ChatGPT in education, following the UTAUT2 model, and studied whether personal innovativeness and information accuracy moderate these relationships. Gulati, et al.[45] analyzed factors influencing marketing students' acceptance and use of ChatGPT, particularly focusing on system flexibility within the UTAUT model.
Mygland, et al.[46] identified and analyzed nine high-level affordances of chatbots, which include human-like conversing, assistance provision, facilitation, distilling information, enriching information, context identification, personalization, fostering familiarity, and ensuring privacy. Based on previous studies, this research defines the characteristics of ChatGPT as anthropomorphism, personalization, interactivity, information accuracy, and system flexibility.

2.2 ChatGPT and Privacy Concerns

Users continuously generate data across various digital environments, such as email, search, and online shopping, and the volume of personal data generation has surged due to the significant reduction in data storage and analysis costs[47]. In the digital age, information is a valuable asset, and from a business perspective, privacy is a strategic issue[48]. LLMs like ChatGPT, trained on expansive datasets collected from diverse sources such as websites, posts, and articles[49], may inadvertently include personal information, heightening the tension between privacy concerns and the willingness to disclose such information[49,50]. As ChatGPT extensively harvests data from the internet, it is inevitably exposed to personal information[6]. Furthermore, ChatGPT processes sensitive data, including personal messages, medical records, and financial information, raising significant privacy concerns about the management of user data[6,43]. OpenAI stores user conversations temporarily for 30 days to adhere to data protection laws. However, this process entails potential risks of interception during transmission if communication channels are insecure[51]. In Italy, the use of ChatGPT was prohibited on March 20, 2023, following a data breach that compromised payment information and user conversations, prompting other nations to initiate similar inquiries[6,49]. Therefore, AI service providers like ChatGPT are obligated to adhere to these legal standards while actively safeguarding user privacy.
Few studies have investigated the impact of ChatGPT's characteristics on privacy concerns. Consequently, this study examines existing research on privacy issues involving attributes similar to those of ChatGPT and the collection of personal information. Adyantari[82] demonstrated that the anthropomorphic features of chatbots significantly influence individuals' privacy concerns, which subsequently negatively impacts their intention to disclose personal information. Additionally, Ischen, et al.[83] argued that anthropomorphism perceptions and privacy concerns are crucial in shaping users' attitudes and behavioral outcomes during interactions with chatbots. Therefore, users may experience uncertainty regarding how an AI that behaves like a human will manage their personal information. ChatGPT's personalization features enhance the user experience by offering tailored responses based on user-provided data[28]. However, this personalization necessitates the collection of user data, potentially heightening privacy concerns. Mo, et al.[84] revealed that perceived personalization in online targeted advertising can lead to increased perceptions of informativeness and elevated privacy concerns. Consequently, ChatGPT's personalized responses could cause users to worry about excessive data collection. Hasal, et al.[85] emphasized that chatbots like ChatGPT, which handle sensitive information, must mitigate security threats to ensure the accurate and secure storage of user data, and manage it transparently. While ChatGPT excels in information collection and prediction, it can sometimes provide inaccurate information or exhibit hallucination phenomena[14,36]. This may heighten users' privacy concerns, as they might need to allocate considerable time to verify information and could perceive a lack of fairness in information disclosure[86,87].
Zhou[87] also highlighted that privacy concerns among users of location-based services are influenced by the quality of information. Thus, inaccuracies in information can provoke privacy concerns among users. System flexibility is pivotal in enhancing the perceived usefulness of technology and augmenting user convenience[40,41]. However, heightened flexibility necessitates the collection and processing of extensive user data, which concurrently escalates the risk of privacy violations[88]. Alsabawy, et al.[89] examined whether the service quality, system quality, and information quality of IT infrastructure systems impact privacy. Consequently, this research postulates the following hypotheses based on previous studies.

H1: Anthropomorphism significantly impacts privacy concerns.
H2: Personalization significantly impacts privacy concerns.
H3: Interactivity significantly impacts privacy concerns.
H4: Information accuracy significantly impacts privacy concerns.
H5: System flexibility significantly impacts privacy concerns.

2.3 ChatGPT and Trust

Building trust in managing customer information proves more effective than efforts aimed at reducing customer concerns[52]. Trust has been studied across various disciplines, such as psychology, economics, sociology, and business administration[24]. According to Mayer, et al.[53], trust is the willingness to be vulnerable to the actions of another party, based on the expectation that the other will perform an action important to the trustor, irrespective of the ability to monitor or control that party. Trust is pivotal in the adoption and use of AI services like ChatGPT[54]. It has been shown that people are more likely to trust and depend on AI for areas involving objective knowledge rather than subjective knowledge[55]. Trust is a critical determinant of AI system acceptance and greatly affects users' intention to embrace this technology[56,57].
Although ChatGPT is trained on a vast corpus of human-written text data, ensuring accuracy and relevance in its responses, building user trust remains challenging as not all responses are always accurate[61,90]. Previous research by Li, et al.[91] explored the factors influencing consumers' trust in AI chatbots, classifying them as chatbot-related, company-related, and consumer-related factors, and examined the process of trust formation between humans and AI chatbots. The study identified expertise, responsiveness, and anthropomorphism as key factors influencing consumer trust, highlighting that inadequate expertise in AI chatbots can detrimentally affect trust. Alagarsamy and Mehrolia[54] proposed a model considering factors such as technology, quality, risk, and personal characteristics that influence chatbot trust, demonstrating that this model contributes to the intention to use. Chakraborty and Biswal[61] investigated the effects of ChatGPT's quality, accuracy, timeliness, user familiarity, and consistency with other sources on its reliability and technology acceptance. The research indicated that users' trust in ChatGPT increases when its information is consistent with other sources. Therefore, drawing on the previous studies, this study proposes the following hypotheses.

H6: Anthropomorphism significantly impacts trust.
H7: Personalization significantly impacts trust.
H8: Interactivity significantly impacts trust.
H9: Information accuracy significantly impacts trust.
H10: System flexibility significantly impacts trust.

2.4 Individual Characteristics

2.4.1 Receiver's Prior Knowledge

Prior knowledge encompasses an individual's familiarity, expertise, and experience with a specific issue and significantly influences their information-seeking behavior[58]. When individuals possess extensive prior knowledge about a topic, they can analyze and evaluate new information in depth, reducing their reliance on peripheral cues or heuristics[59].
Moreover, prior knowledge facilitates the understanding and integration of new information[60]. ChatGPT plays a vital role in aiding decision-making across various fields, with users' prior knowledge significantly influencing their acceptance of the content provided[61,62]. The greater the prior knowledge users possess in a particular field, the more likely they are to adopt information provided by ChatGPT if it aligns with their existing knowledge[60]. Users with in-depth prior knowledge in the IT field can better comprehend the characteristics and functionality of technologies, enabling them to more effectively identify and address risks or issues associated with new technologies[63]. In this context, users' prior knowledge is crucial in shaping privacy concerns and significantly influences the process of evaluating information and fostering trust[64]. Alignment of information with an individual's prior knowledge increases the likelihood of trust in that information[61]. Park[63] demonstrated that users' knowledge significantly influences their privacy control behaviors. Debatin, et al.[92] found that users with extensive prior knowledge of technology are cognizant of particular methods to safeguard their personal information. Chakraborty and Biswal[61] discovered that users' prior knowledge positively impacts the reliability of ChatGPT, suggesting that users are more likely to trust information that corroborates their existing knowledge. Therefore, based on prior research, this study posits the following hypotheses.

H11: Receivers' prior knowledge significantly impacts privacy concerns.
H12: Receivers' prior knowledge significantly impacts trust.

2.4.2 AI Literacy

AI is continuously expanding across various sectors through direct interaction with users[65,66]. Nevertheless, the myriad functionalities and complex systems of AI pose challenges for users in understanding and utilizing these technologies[67].
In a society increasingly influenced by AI, it is crucial for users to comprehend the nature and capabilities of AI to act independently and contribute to future developments[68]. In response to technological innovations, researchers have developed various competencies and literacy concepts, such as computer literacy, media literacy, ICT literacy, and digital literacy, which encompass effective use of digital environments[69]. Recently, AI literacy has emerged as a crucial competency for living, learning, and working in a digital world shaped by AI-based technologies[66]. AI literacy extends digital literacy, referring to the abilities to understand, interact with, and critically evaluate AI systems and their outputs[70]. The level of ChatGPT utilization heavily relies on the user's AI literacy skills. The degree to which users comprehend the functions and risks of ChatGPT shapes their perception of its benefits and concerns, potentially enabling them to use the technology more effectively or encounter hurdles in areas such as personalized data use[63]. Differences in technical competency with digital devices affect the use and effectiveness of online services, allowing users with higher competency to utilize these services more efficiently[71]. Users consider issues such as reliability and privacy when using ChatGPT, with their perceptions shaped by their AI literacy[72,73]. High AI literacy allows users to better comprehend the advantages and limitations of AI, potentially leading to increased caution in adopting the technology. Conversely, low AI literacy can escalate anxiety due to misunderstandings or insufficient information about AI[72]. Therefore, successful utilization of ChatGPT largely hinges on the level of AI literacy, enabling users to trust and continue using the technology. The relationship between AI literacy, privacy concerns, and trust remains under-researched. Consequently, we examined studies evaluating the effective use of digital environments.
Al-Abdullatif and Alsubaie[72] studied the adoption of ChatGPT through an enhanced value-based adoption model (VAM) that integrates AI literacy, discovering that students with greater AI literacy appreciate ChatGPT's value more profoundly. Lee, et al.[93] found that increased information literacy levels correlate with greater trust in websites, which further grows as the website's perceived utility increases. Based on prior research, this study proposes the following hypotheses.

H13: AI literacy significantly impacts privacy concerns.
H14: AI literacy significantly impacts trust.

2.4.3 Personal Innovativeness

Agarwal and Prasad[74] defined personal innovativeness as the degree to which an individual is willing to try out new information technology. Personal innovativeness is particularly significant in AI-based services such as ChatGPT[44,75]. Individuals with a high level of personal innovativeness tend to embrace new technologies driven by their curiosity and eagerness to acquire new skills[44,75]. This curiosity and eagerness to learn positively impact the adoption and continued use intention of innovative tools[76-78]. Personal innovativeness is essential in generative AI services like ChatGPT, as users must exhibit curiosity and openness when adopting and utilizing new AI technologies[76,78]. Individuals with high levels of innovativeness are more inclined to take risks, accept minor setbacks during the risk-taking process, and swiftly evaluate the usefulness, reliability, and functionality of new technologies[74,79,80]. Moreover, they make decisions to purchase or adopt new technologies independently of others' opinions or experiences[81]. Consequently, ChatGPT users with high personal innovativeness are more likely to trust and utilize the technology despite privacy concerns.
Previous research on personal innovativeness reveals that individuals with higher levels of innovativeness exhibit fewer privacy concerns when utilizing Proximity Bluetooth Beacon Technology (PBBT)[94]. Meng, et al.[80] investigated whether highly innovative users more easily foster trust in mobile health services and incorporate them into their daily routines. Therefore, this study presents the following hypotheses based on prior research.

H15: Personal innovativeness significantly impacts privacy concerns.
H16: Personal innovativeness significantly impacts trust.

2.5 Privacy Concerns and Trust

Trust is pivotal in matters concerning privacy concerns and information disclosure, where its breakdown can expose sensitive personal information, potentially leading to financial losses[95]. Moreover, trust is vital in determining individuals' willingness to share sensitive information with companies. It lowers transaction risks, enhances interactions among parties, and aids in building long-term relationships[52,96,97]. The relationship between privacy concerns and trust is intricate. Malhotra, et al.[98] demonstrated that individuals with greater privacy concerns are more sensitive in their perceptions of trust and risk when personal information is solicited, and that higher information privacy concerns correlate with lower trust and increased perceived risk, subsequently influencing personal information disclosure behaviors. Smith, et al.[99] identified regulation, behavioral responses, and trust as outcome factors in the APCO (antecedents → privacy concerns → outcomes) model. Jafari, et al.[100] found that privacy concerns about ChatGPT reduce usage intention and trust. Consequently, this study proposes the following hypothesis based on prior research.

H17: Privacy concerns significantly impact trust.
Individuals with high privacy concerns, irrespective of the type, may hesitate to adopt new technologies or services, potentially leading to reduced usage intentions[50,101]. Consequently, this study proposes the following hypothesis based on previous research.

H18: Privacy concerns significantly impact continuance intention.

Trust in ChatGPT is positively related to user attitude[102]. Prior studies have shown that increased trust enhances user attitudes, indicating that a high level of trust in ChatGPT and other chatbots correlates with more positive user responses[102-104]. Trust significantly affects usage intention, and when users feel confident in the technology, they are more likely to adopt and continue using it[105]. Therefore, based on existing studies, this research presents the following hypothesis.

H19: Trust significantly influences continuance intention.

Ⅲ. Research Methods

3.1 Development of the Measurement Items

This study revised the measurement items introduced in prior research to better reflect the characteristics of ChatGPT, as illustrated in Table 1. These items were developed based on earlier studies[24,25,34,36,44,45,61,68,77,78,102,106-112].

Table 1. Constructs and measurement items
The study utilized a 5-point Likert scale for the measurement items, ranging from 'strongly disagree (1)' to 'strongly agree (5)'.

3.2 Data Collection and Sample Characteristics

Data analysis was conducted using SmartPLS 3.0, a structural equation modeling (SEM) software package based on the partial least squares (PLS) approach. The survey data were collected through professional research agencies. For the Korean sample, data collection was outsourced to a specialized survey company, referred to as Company E. For the USA sample, data were collected through another professional survey firm, referred to as Company A. Both agencies employed stratified sampling methods to ensure demographic diversity and data quality. The survey targeted users experienced with ChatGPT and collected 420 responses; after discarding insincere responses, 403 valid responses were analyzed. Frequency analysis was used to explore the demographic characteristics of the respondents, presented in Table 2. There were 246 males and 157 females, indicating a male majority, and 200 Korean and 203 American respondents, with Americans slightly outnumbering Koreans. The dominant age group was 30-39 years, with 223 respondents, followed by the 40-49 age group with 90, the 20-29 age group with 88, and 2 respondents in other age groups. Company employees constituted the largest occupation group at 75.4%, followed by self-employed/business owners at 15.6%, government employees at 6%, job seekers at 1.5%, and others at 1.4%.

Table 2. Demographic characteristics of respondents
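The reliability analysis that follows relies on Cronbach's α. As an illustrative reference only (the study itself used SmartPLS 3.0, not custom code, and the response matrix below is a hypothetical example), the statistic can be computed directly from raw item scores:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a matrix of shape (n_respondents, n_items).

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)   # sample variance per item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 5-point Likert responses (4 respondents x 3 items)
responses = np.array([
    [5, 4, 5],
    [2, 2, 3],
    [4, 4, 4],
    [1, 2, 1],
])
alpha = cronbach_alpha(responses)
```

For this toy matrix the items move together closely, so α lands well above the 0.7 benchmark; items that are perfectly correlated yield α = 1.0.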
In this study, we examined Cronbach's α and construct validity to assess the reliability and validity of the measurement items and factors. Cronbach's α, which measures the internal consistency of a scale, was employed to evaluate reliability. This method focuses on the consistency of repeated measurements under identical conditions or the consistency of similar items measured once, evaluating how well each item reflects the concept being measured[113]. The internal consistency test using Cronbach's α showed that all constructs achieved α values of 0.6 or higher, as presented in Table 3. According to Nunnally and Bernstein[114], an α value of 0.7 or higher generally ensures the reliability of a measurement instrument, but in exploratory studies, an α value of 0.6 or higher is acceptable.

Table 3. Confirmatory factor analysis
Note: *p<0.1, **p<0.05, ***p<0.01.

Subsequently, confirmatory factor analysis was conducted to validate the measures, with the results detailed in Table 3. The t-values of the path coefficients for each item were statistically significant, confirming convergent validity. Lastly, we assessed discriminant validity by evaluating the average variance extracted (AVE) values and the correlation matrix between the constructs. According to Fornell and Larcker[115], discriminant validity is confirmed when the square root of the AVE exceeds the correlation coefficients between constructs. As shown in Table 4, the square root values of the AVE on the diagonal surpassed the correlations between constructs, establishing discriminant validity. Moreover, the composite reliability (CR) and AVE values met their respective thresholds (CR>0.7, AVE>0.5), demonstrating that the constructs are reliably and validly measured[116-118].

Table 4. Discriminant validity
Notes: Numbers below the diagonal are correlation coefficients (p<0.01). Diagonal bold numbers are the square root of the AVE.

To establish measurement invariance between the Korean and U.S. samples, this study conducted a stepwise multi-group confirmatory factor analysis (MGCFA). Configural and metric invariance were supported, as the chi-square difference (Δχ²), comparative fit index difference (ΔCFI), and root mean square error of approximation difference (ΔRMSEA) all satisfied the recommended criteria (Δχ²(36)=33.477, p>.05; ΔCFI=-0.001; ΔRMSEA=0.000). Although scalar invariance was not fully supported due to a statistically significant Δχ², the ΔRMSEA remained below 0.010, suggesting an acceptable level of model fit and allowing for the consideration of partial scalar invariance. As the chi-square difference test often becomes significant when constraining factor loadings and intercepts, recent studies recommend using alternative fit indices such as ΔCFI, ΔTLI, and ΔRMSEA to evaluate invariance[119].

3.4 Hypothesis Testing

In this study, path analysis and multi-group analysis using PLS were conducted to test the hypotheses, with the results presented in Table 5. The model fit for the path analysis can be assessed using the coefficient of determination (R²) for endogenous variables. The complete model yielded R² values of 0.273 for privacy concern, 0.685 for trust, and 0.326 for continuous intention to use. For the Korea group, the R² values were 0.128 for privacy concern, 0.507 for trust, and 0.107 for continuous intention to use. In the USA group, the R² values were 0.398 for privacy concern, 0.630 for trust, and 0.492 for continuous intention to use. Tenenhaus et al.[120] proposed an overall goodness-of-fit (GoF) index using the AVE and R² values.
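Tenenhaus et al.'s GoF index is the square root of the product of the average AVE (communality) and the average R². A minimal sketch using the complete-model R² values reported above; the AVE values here are hypothetical placeholders, as the actual per-construct AVEs appear in Table 3:

```python
import math

def goodness_of_fit(ave_values, r2_values):
    """Tenenhaus et al.'s global GoF for PLS models:
    GoF = sqrt(mean(AVE) * mean(R^2))."""
    mean_ave = sum(ave_values) / len(ave_values)
    mean_r2 = sum(r2_values) / len(r2_values)
    return math.sqrt(mean_ave * mean_r2)

# R^2 values for the complete model (privacy concern, trust,
# continuous intention to use), as reported in the text.
r2 = [0.273, 0.685, 0.326]
ave = [0.65, 0.60, 0.70]  # hypothetical AVE placeholders
gof = goodness_of_fit(ave, r2)
```

Against the conventional benchmarks of 0.10, 0.25, and 0.36 for small, medium, and large effect sizes, a GoF above 0.36 indicates a high overall model fit.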
The GoF values were 0.521 for the complete model, 0.371 for the Korea group, and 0.541 for the USA group, all exceeding the 0.36 threshold for a large effect size and indicating good model fit across all group models. Table 5. Results of path analysis
Notes : * p<.1, ** p<.05, *** p<.01. First, hypothesis 1 was supported in the complete dataset but rejected in both the Korea and USA groups. This suggests that anthropomorphism generally influences privacy concerns, although the effect is not significant in the separate analyses for Korea and the USA: while users in these countries recognize the human-like characteristics of AI, they do not necessarily link them directly with privacy risks. Hypothesis 2 was rejected for the complete dataset, as well as for both the Korea and USA groups, indicating that users may view the personalized services of ChatGPT as a convenience feature rather than a privacy concern. The relationship between personalization and privacy concerns therefore appears to be statistically insignificant. Hypothesis 3 was supported in the complete dataset and in the U.S. group, indicating a significant negative relationship between interactivity and privacy concerns: as interactivity increases, privacy concerns decrease. U.S. users may perceive interactive features as enhancing control and transparency, thereby reducing privacy-related worries. However, the relationship was not significant among Korean users, suggesting that they do not strongly associate interactivity with changes in privacy risk. Hypothesis 4 was supported only in the Korean group, revealing a significant negative relationship between information accuracy and privacy concerns. This suggests that Korean users feel reassured when ChatGPT provides accurate information, leading to lower privacy concerns. However, the relationship was not significant in the complete dataset or the U.S. group, implying that American users do not view information accuracy as a determinant of privacy risk.
Finally, hypothesis 5 was supported for the complete dataset but rejected for both the Korea and USA groups, suggesting that while system flexibility affects privacy concerns in the pooled sample, this does not hold within the individual countries, where flexibility is not seen as directly related to privacy protection. Hypothesis 6 was confirmed within the complete dataset and the Korea group, but not supported in the USA group. Korean users view anthropomorphized AI as enhancing familiarity and trust, whereas U.S. users regard it merely as a technical feature; consequently, anthropomorphism did not significantly affect trust among U.S. users. Hypothesis 7 was not supported in the complete dataset, nor in the Korea and USA groups, indicating that personalization does not significantly impact trust. This lack of impact arises because users see personalization as a convenience feature rather than a direct contributor to trust formation. Although personalized services provide tailored experiences, users tend to prioritize other aspects, such as system security, as more crucial in forming trust. Hypothesis 8 was rejected, showing no significant effect in the complete dataset or in either the Korea or USA group. This suggests that even though users can interact conveniently with ChatGPT, it is difficult for them to trust a system that is not secure or lacks transparent data usage. Hypothesis 9 was supported in the complete dataset and in both the Korea and USA groups, demonstrating that information accuracy significantly influences trust. The more accurate the information, the greater the trust users place in ChatGPT, believing it operates without errors and processes data correctly. The provision of accurate information thus becomes a key factor in users' assessment of the system's reliability.
Hypothesis 10 was supported only in the complete dataset, showing a significant negative relationship between system flexibility and trust. This indicates that, in general, excessive flexibility may be interpreted by users as a lack of consistency or reliability in the system, which can undermine trust. The effect was not significant in either the Korean or U.S. group, suggesting that this perception does not hold consistently across individual countries. Hypothesis 11 was supported in the complete dataset and the Korea group, but rejected in the USA group. Korean users are more likely to perceive greater privacy risks as their prior knowledge of technology increases, potentially leading to heightened privacy concerns. In contrast, U.S. users may experience reduced privacy concerns with higher levels of prior knowledge if they understand how the technology safeguards their personal information and feel they have sufficient control over it. Hypothesis 12 was supported across the complete dataset, as well as in both the Korea and USA groups, indicating that users' prior knowledge significantly impacts trust. When users believe that the information provided by ChatGPT aligns with or is similar to their own perspectives, they are more likely to perceive ChatGPT as trustworthy, since individuals tend to trust and respond more positively to information that matches their own opinions. Hypothesis 13 was rejected across the complete dataset and in both the Korea and USA groups, as AI literacy did not significantly impact privacy concerns. Users' understanding of AI technology does not directly influence their privacy concerns; even with a high level of technical understanding, privacy concerns are more strongly shaped by other factors.
H14 was supported in the complete dataset; among the subgroups, the USA group showed a significant effect, whereas the Korea group did not. The difference arises from the distinct ways Korean and American users form expectations and build trust regarding technology. American users tend to develop higher levels of trust when they have a strong understanding of the technology and a sense of control over it. Conversely, Korean users place greater emphasis on the consistency of results, stability, and reliability rather than on a technical understanding of ChatGPT, which explains the lack of a significant impact. H15 was supported in the complete dataset, with both the Korea and USA groups showing significant effects. Users with high levels of personal innovativeness tend to adopt and actively utilize new technologies or systems more quickly, yet they may also be more sensitive to the potential privacy risks associated with these innovations. As users become more acquainted with and frequently utilize new technologies, their awareness of how these technologies collect and process personal information increases, potentially leading to heightened privacy concerns. H16 was supported in the complete dataset and the Korea group, while the USA group did not show a significant effect. This is because Korean society places a high value on technological advancement and innovation, facilitating quicker trust-building when adopting and using new systems. In contrast, American users generally do not develop trust simply by rapidly adopting new technologies. Hypothesis 17 was supported in the complete dataset and the Korean group, confirming a significant negative relationship between privacy concerns and trust. This means that higher privacy concerns lead to reduced trust, particularly among Korean users, who tend to be more sensitive to privacy issues. In contrast, the effect was not significant for U.S.
users, implying that they may perceive privacy concerns and trust as relatively independent constructs. H18 was supported in the complete dataset, with the USA group showing a significant effect, while the Korea group did not demonstrate a significant effect. This indicates that Korean users prioritize the functional advantages and convenience of ChatGPT and tend to regard privacy concerns as less critical when the benefits of the technology are substantial. Despite significant privacy concerns, Korean users are likely to continue using the technology as long as its utility remains high. However, U.S. users are more likely to refrain from using the technology if they feel that privacy protection is inadequate. H19 demonstrated a significant effect across all groups, including the complete, Korean, and U.S. user groups, suggesting that the more trustworthy the system, the more likely users are to actively engage with the technology.
Ⅳ. Discussion and Conclusions
This study examines the relationships among ChatGPT users' privacy concerns, trust, and their intention to continue use, while also investigating how cultural differences between Korean and American user groups moderate these relationships. Additionally, the study explores the impact of ChatGPT's characteristics, such as anthropomorphism, personalization, interactivity, information accuracy, and system flexibility, along with individual traits, including users' prior knowledge, AI literacy, and personal innovativeness, on privacy concerns and trust. The findings are summarized below. First, the effect of anthropomorphism on privacy concerns (H1) was significant in the complete dataset, but not in the Korean and U.S. user groups. This indicates that while the complete dataset reflects the general perceptions of a diverse user base, resulting in a significant effect on privacy concerns, the Korean and U.S.
user groups did not perceive anthropomorphized AI as a privacy threat, even when it interacted in a human-like manner. Additionally, anthropomorphism's effect on trust (H6) was significant in the complete dataset and the Korean user group, but not in the U.S. user group. This implies that while Korean users see anthropomorphized AI as enhancing familiarity and trust, U.S. users tend to view it merely as a technical feature. Second, personalization did not significantly affect privacy concerns (H2), as users perceive personalized services as convenient and view them separately from privacy issues. Furthermore, personalization did not significantly impact trust (H7), since users do not view it as crucial for establishing trust. Third, the effect of interactivity on privacy concerns (H3) was significant in the complete dataset and among U.S. users, but not within the Korean user group. This indicates that U.S. users are more likely to perceive increased interactivity as heightening the risk of personal information exposure, while Korean users consider interactivity primarily a means to enhance convenience and do not link it with privacy threats. The effect of interactivity on trust (H8) was not significant, suggesting that while interactivity may offer convenience, trust is hard to establish if the system's security and transparency are not assured. Fourth, the impact of information accuracy on privacy concerns (H4) was significant among Korean users, but not in the complete dataset or among U.S. users. Korean users tend to believe that greater information accuracy better protects their personal data, whereas American users do not associate information accuracy with privacy concerns. The effect of information accuracy on trust (H9) was significant across the complete sample and among both Korean and American users. The higher the information's accuracy, the more users trusted ChatGPT, believing it operated error-free and managed data appropriately.
Fifth, the effect of system flexibility on privacy concerns (H5) was significant in the complete sample, but not among Korean and American users, who do not see system flexibility as directly related to privacy protection. The impact of system flexibility on trust (H10) was significant only in the complete dataset, where excessive flexibility appears to undermine consistency and reduce trust; among Korean and American users separately, system flexibility was viewed as unrelated to trust, leaving the relationship insignificant. Sixth, the impact of prior knowledge on privacy concerns (H11) was significant in the complete dataset and the Korean user group, but not in the U.S. user group. This suggests that while Korean users may experience increased anxiety and privacy concerns as their prior knowledge grows, U.S. users focus more on understanding the protection of their personal information and having adequate control, rather than relying solely on prior knowledge. The impact of users' prior knowledge on trust (H12) was significant in the complete dataset and in both the Korean and American user groups. The more aligned the information provided by ChatGPT is with the user's perspective, the greater their trust in the system becomes. Seventh, AI literacy did not significantly affect privacy concerns (H13), suggesting that a deeper understanding of AI technology does not necessarily alter privacy concerns. However, AI literacy significantly influenced trust (H14) within the American user group, indicating that an enhanced understanding of AI technology fosters trust formation. Eighth, the effect of personal innovativeness on privacy concerns (H15) was significant in the complete dataset and in both the Korean and American user groups. Users with high personal innovativeness tend to adopt new technologies more readily and react more sensitively to potential privacy infringements.
The effect of personal innovativeness on trust (H16) was significant in the complete dataset and the Korean user group, but not in the American user group. This occurs because Korean users are more proactive in embracing new technologies, and their positive attitude toward innovation bolsters their trust. Conversely, American users do not establish trust simply through the rapid adoption of new technologies. Ninth, privacy concerns negatively impacted trust (H17) in both the complete dataset and the Korean user group, but were insignificant in the American user group. Korean users are more likely to lose trust if privacy protection is deemed insufficient, whereas American users generally consider privacy concerns and trust as distinct constructs. Tenth, privacy concerns significantly affected continued use intention (H18) in the complete dataset and the American user group, but not in the Korean user group. This can be attributed to Korean users prioritizing the convenience and utility of the technology over privacy concerns. Trust significantly influenced the intention to continue use (H19) across the complete sample, as well as among both Korean and American users. The more trustworthy the system is perceived to be, the more likely users are to actively engage with ChatGPT. The academic implications of this study are as follows. First, it examined the effects of ChatGPT's characteristics, including anthropomorphism, personalization, interactivity, information accuracy, and system flexibility, as well as user characteristics such as prior knowledge, AI literacy, and personal innovativeness, on privacy concerns and trust. Despite the growing use of ChatGPT, research addressing the impact of its characteristics and user traits on privacy and trust remains scarce. This study enhances the theoretical understanding of AI service user behavior by empirically analyzing how these factors influence privacy concerns and trust.
Additionally, by revealing the effects of these characteristics on users, this study lays a theoretical groundwork for future research in this area. Second, this study addressed the cultural differences between Korean and American users. It compared the effects of factors such as anthropomorphism and information accuracy on privacy concerns and trust across different cultural contexts, confirming the significant role of cultural differences in shaping user perceptions. This underscores the importance of considering cultural contexts in the design of AI services for a global audience, thus broadening the scope of AI service research. Third, this study highlighted the interaction between privacy concerns and trust in the context of AI services. Specifically, it was found that privacy concerns significantly affect trust among Korean users, whereas this relationship was not significant among American users. These findings demonstrate that trust in AI services is shaped by privacy perceptions and that cultural differences influence this dynamic, contributing to the development of the privacy-trust-usage intention model. The practical implications of this study are as follows. First, privacy protection is a crucial element for companies offering AI-based services like ChatGPT. AI service providers can maintain customer trust and strengthen long-term relationships by establishing robust privacy protection policies and managing them rigorously. This study provided an in-depth analysis of the relationship between privacy concerns and user trust, offering practical guidelines for companies to enhance their personal data management services. Second, building trust is essential for AI-based service companies to secure long-term users. This study demonstrates that user trust is intimately linked to the intention to continue using the service. Based on this, companies can enhance user trust by improving customer-centered services, leading to sustainable growth.
Third, this study highlights the necessity of recognizing cultural differences when crafting AI services for a global audience. To succeed in diverse markets, AI services must maintain consistent security and trust while crafting strategies that acknowledge the unique cultural nuances of each market. This approach allows AI companies to boost their global competitiveness and contribute to the provision of successful services. One limitation of this study is that data were collected only from Korean and American users, which may limit its generalizability to the global market. Future studies should encompass users from varied cultural backgrounds or countries to yield richer insights. Although this study compared Korean and U.S. respondents, there was a considerable imbalance in the demographic composition between the two groups. For instance, the proportion of self-employed respondents was 0% in the Korean sample, whereas it was 31% in the U.S. sample, indicating a substantial difference in occupational distribution. Future research should aim to collect more demographically balanced samples to enhance the generalizability of the findings. In particular, examining non-English-speaking users in comparison to English-speaking users could produce intriguing results. Moreover, this study focused solely on ChatGPT and omitted other forms of AI services. Expanding future research to include and compare various AI service types could facilitate a more detailed analysis. Biography
Hyung-Seok Lee
Feb. 1996 : B.E. degree, Kwangwoon University Feb. 2000 : M.S. degree, Korea University Feb. 2003 : Ph.D. degree, Korea University Sep. 2011~Current : Professor, School of Business, Chungbuk National University <Research Interests> Service operations management, internet and mobile services, information technology policy [ORCID:0009-0007-3090-9126] References
Y. Kim and H. Lee, "Factors Affecting the Continuous Intention to Use ChatGPT: Evidence from Korea and USA," The Journal of Korean Institute of Communications and Information Sciences, vol. 51, no. 1, pp. 146-168, 2026. DOI: 10.7840/kics.2026.51.1.146.
