
Abstract

The way in which AI technology is understood and accepted by non-specialists will largely determine its impact and diffusion. There is therefore a need for ongoing research into public perception of the development of AI and its impact on different areas of human life. Our study explores the perceptions of 193 non-experts as to whether they evaluate positively or negatively specific developments that reflect the impact of AI on personal and professional life, as well as on society in general. They were also asked to give their opinion on the likelihood of these developments occurring in the future. As a result of the study, a criticality map shows the developments in which participants see opportunities for themselves and society, and those for which decision-makers and R&D need to address people’s concerns and meet their needs. Comparing our results with those of a similar study in Germany, our study seems to confirm that people’s expectations and evaluations of AI depend (i) on the characteristics of the group of participants, (ii) on the culture and socio-economic context in which the participants live, and (iii) on the context in which AI is implemented (e.g., education vs. the labor market).

Introduction

Artificial Intelligence (AI) has been known as a concept since the 1956 Dartmouth workshop [1]. It was then defined as “any aspect of learning or any other property of intelligence that can be described in such a way that a machine can simulate it.”

Although its evolution was slow in the first years after its introduction as a concept, it has accelerated explosively in recent years. This explosion is mainly due to the availability of large datasets, aided by fast, massively parallel computing and storage hardware, coupled with new algorithms [2]. Moreover, since the introduction of large language models (LLMs), most prominently ChatGPT, AI has been integrated into the daily and professional lives of many citizens, not only specialists.

Though AI may fundamentally reshape our economy and society, its prospective benefits may be accompanied by potential harms. For example, AI’s impact on economic growth may be felt unevenly across the labor market. In addition, the use of AI in domains such as medicine [3] and daily information through the media [4] raises questions about trust, fairness, privacy, bias and disinformation. In education, too, there are calls for ethical and regulatory mechanisms [5], AI competencies [6] and corresponding pedagogies [7]. Shum and Luckin [8] have likewise pointed out that we need to communicate in “accessible terms” with stakeholders (teachers, students, parents, and potentially unions and policy-makers) about the benefits of AI for education, but also its potential dangers and pitfalls.

To benefit from AI and mitigate its harms, it is necessary to implement a participatory governance strategy, carried out by academia, industry, government and international groups, that takes non-experts’ perceptions into account, since the impact and diffusion of this technology depend largely on the way it is understood and accepted by non-specialists [1], [3]. It is also true that public perceptions of AI are often shaped by science fiction, communication and the media [9], are context-dependent [1], and are influenced by sociodemographic factors and cultural values [3].

It is therefore necessary to continuously explore non-specialists’ perceptions of the development of this growing and ubiquitous technology and its impact on various areas of human life. Investigating laypeople’s perceptions helps to ensure that i) scientists developing AI systems take social concerns and needs into account, ii) decision makers set regulations for the use of AI, iii) researchers identify areas where there is a problem of social acceptance, and iv) educational policy makers formulate AI literacy curricula [1].

Our study investigates the perceptions of non-experts who responded to an open invitation to participate in a three-hour free face-to-face seminar on “Introduction to AI in simple terms.” All of them were either curious to know what it was all about or wanted to learn more about AI, and did so in their spare time. They therefore showed some evidence of the habits or tendencies of lifelong learners [10], [11], such as “learning with intention” and “embracing their curiosity,” being persons who continue to “learn new skills and competencies long after they have completed their formal education” [12]–[14]. The non-experts were of different ages, from students to seniors, and differed in professional occupation and gender, as well as in their trust in and relationship with technology. More than one third of them were teachers. Prior to the seminar, respondents were asked whether they evaluated positively or negatively specific developments reflecting the impact of AI on personal and professional life, as well as on society in general. They were also invited to express their opinion on how likely these developments are to occur in the future.

First, definitions of AI and the areas of its application today are presented, followed by studies on human perceptions of AI. Then, the method used to answer our research questions and our sample are presented. Next, the results of the study are presented, and finally we discuss the conclusions drawn from them.

Overview of AI

Concepts

Since the Dartmouth workshop of 1956, AI has evolved into a multifaceted field, with distinctions among AI, machine learning (ML), and deep learning (DL). AI aims to create systems capable of performing tasks that typically require human intelligence. ML, a subset of AI, focuses on algorithms that learn from data without explicit programming. DL, a subset of ML, utilizes artificial neural networks with many layers to analyze large datasets, making it particularly effective for tasks such as image and text processing [15].

Applications

AI applications span numerous fields, significantly affecting industries such as engineering, healthcare, education, and biotechnology. In engineering, AI techniques facilitate the management of complex design operations, enabling more efficient and precise outcomes [16]. AI’s exceptional computational abilities allow it to surpass human capabilities in tasks like comparison, evaluation, and estimation, ultimately reducing overall design costs and shortening design processes.

In healthcare, AI has revolutionized medical imaging and diagnostics, virtual patient care, drug discovery, and administrative tasks [17]. AI-powered tools enhance patient engagement, support rehabilitation, and reduce healthcare professionals’ administrative workloads. The integration of AI in healthcare presents numerous benefits, including improved accuracy in diagnosing clinical conditions and managing electronic health records.

The educational sector has seen significant advancements through AI applications, particularly with the introduction of interactive learning environments and tools like ChatGPT. These tools leverage machine learning and natural language processing to provide adaptive, interactive educational experiences, enhancing both teaching and learning processes [18], [19].

AI’s role in biotechnology is also noteworthy, where it aids in addressing global challenges such as food security, health, and environmental sustainability. AI techniques like machine learning, big data analytics, and natural language processing are instrumental in various life sciences applications, from biomedical research to environmental conservation [15].

Public Perceptions of AI

Policy makers acknowledge the importance of engaging citizens in discourse and decision-making on AI but have not yet effectively moved to doing so [20]. Furthermore, understanding the public perception of AI is essential for Responsible Research and Innovation (RRI), which aligns “the development and governance of future AI systems with individual and societal needs” [1].

However, public perception of AI is varied and complex, influenced by factors such as application domains, perceived benefits, potential risks and ethical implications, as well as the world of science fiction (books, movies, TV series) [21].

Historical skepticism towards AI, stemming from the unmet ambitious goals set during its inception, has gradually shifted towards cautious optimism with the advent of practical and successful AI applications [1]. However, concerns about AI replacing human jobs persist, particularly in sectors prone to automation like manufacturing and customer service [22]. Moreover, the ethical implications of AI in educational contexts, particularly regarding the use of AI-generated content and data privacy, are subjects of ongoing debate [18]. The balance between leveraging AI for educational benefits and addressing ethical concerns remains a critical area of focus.

Studies indicate that the perception of AI is heavily context dependent. For instance, AI applications in healthcare are generally viewed positively due to their potential to improve medical outcomes and efficiency. Conversely, AI's role in surveillance and data privacy often raises ethical concerns and fears of misuse [1].

Brauner et al. [1] asked 122 participants in Germany how they perceived 38 statements about AI in different contexts (personal, economic, industrial, social, cultural, health), assessing their personal evaluation and the perceived likelihood of these aspects becoming reality. The results of the study were presented through a criticality map (Fig. 1). The proposed criticality map highlights, among other things, that while in some contexts/domains AI is perceived as a fruitful technological advancement, in others it is perceived as a threat to job security and privacy.

Fig. 1. Criticality map showing the relationship between estimated likelihood and evaluation-impact for the AI predictions in Brauner et al. [1] study.

Sociodemographic divides have been found in attitudes towards AI. According to Morning Consult and United Kingdom Government studies, as cited in O’Shaughnessy et al. [3], familiarity and comfort with AI are more common among people who are young, male, educated, living in urban areas and on higher incomes. Furthermore, the same studies show that sociodemographic divides also shape perceptions of AI’s impact on society: those in urban areas, blue-collar workers, and political liberals are more likely to believe that AI will deepen inequality and reduce employment, while those with more education, white-collar jobs, and higher incomes are more likely to believe that AI will be beneficial to society and the economy.

However, the cultural theory of risk perception, presented by Weber and Hsee and by Johnson and Swedlow as cited in O’Shaughnessy et al. [3], argues that ‘cultural’ worldviews may be more concise and informative predictors of attitudes towards technological risk than socio-demographic factors alone. Furthermore, lifelong learning tendencies have been found to be positively correlated with Information Technology self-efficacy [10], [11].

The Research Questions of the Study

Our study investigates the perceptions of non-experts who responded to an open invitation to participate in a three-hour free face-to-face seminar on “Introduction to AI in simple terms.” Prior to the seminar, respondents were asked whether they evaluated positively or negatively specific developments reflecting the impact of AI on personal and professional life, as well as on society in general (perceived impact). They were also invited to express their opinion on how likely these developments are to occur in the future (perceived likelihood). Through an expert workshop (n = 4), we selected 16 of the 38 statements used by Brauner et al. [1] in order to keep the questionnaire short, the participants focused and the responses more valid. Furthermore, we added one statement about education, “AI will significantly change the way we learn,” because more than one third of the participants were teachers, and one statement about the environment, “AI will significantly degrade the environment,” because of the UN Sustainable Development Goals (SDGs). The research questions of our study are as follows:

1. How do the participants evaluate each possible situation that may arise due to AI?

2. What is the perceived likelihood of its occurrence?

3. Is there a correlation between the perceived impact of AI, i.e., the average of the ratings of the 18 statements, and the perceived likelihood, i.e., the average likelihood that the 18 statements will occur?

4. Are perceived impact and perceived likelihood associated with attitudes, such as ‘trust in AI’ and ‘affinity towards technological interaction’?

5. Are perceived impact and perceived likelihood correlated with socio-demographic factors, such as gender, age, work, education level, previous experience of using ChatGPT?

6. How do the results of the study compare with those of Brauner et al. [1] in Germany?

Method

Identification of the Topics

Our study was based on the previous work of Brauner et al. [1]. First, we conducted an expert focus group to select the statements for the online questionnaire. Four experts in IT, IT in education and sustainability selected 16 of the 38 statements used by Brauner et al. [1] in order to keep the questionnaire short, the participants focused and the responses more valid. In addition, we added a statement about education, ‘AI will significantly change the way we learn,’ because more than one third of the participants were teachers, and a statement about the environment, ‘AI will significantly degrade the environment,’ as we wanted an initial take on perceptions of AI in relation to UN SDGs 11 and 12–14. In total, our online questionnaire consisted of 18 statements depicting the impact of AI (Table I).

| AI will… | Impact (mean) | Likelihood (mean) | Impact-likelihood descriptive assessment |
|---|---|---|---|
| Promote innovation | 52.59% | 53.37% | Positive-highly likely |
| Significantly change the way of education | 42.23% | 56.13% | Positive-highly likely |
| Increase my personal performance | 36.01% | 30.57% | Positive-highly likely |
| Increase the standard of living | 35.23% | 28.15% | Positive-highly likely |
| Increase economic performance | 32.64% | 11.92% | Positive-moderate probability |
| Increase leisure time for all | 29.02% | 10.54% | Positive-moderate probability |
| Solve complex social issues | 11.92% | −9.50% | Moderate positive-unlikely |
| Increase my personal income (wealth) | 7.25% | −13.64% | Low positive-unlikely |
| Create more jobs | 5.96% | −18.48% | Low positive-highly unlikely |
| Increase leisure time for a few | −10.62% | 3.97% | Low negative-there is a probability |
| Destroy jobs | −20.47% | 15.72% | Negative-moderate probability |
| Cause social inequalities | −21.76% | 8.12% | Negative-moderate probability |
| Decrease my communication with others | −23.58% | −2.59% | Negative-unlikely |
| Blend work and leisure time | −25.13% | −26.42% | Negative-highly unlikely |
| Make moral decisions | −27.98% | −25.04% | Negative-highly unlikely |
| Control my private life | −29.79% | 4.32% | Negative-there is a probability |
| Threaten my professional future | −33.42% | −30.92% | Negative-highly unlikely |
| Degrade the natural environment | −37.82% | −25.04% | Negative-highly unlikely |

Table I. The Participants’ Estimated Likelihood of Occurrence (Likelihood) and Subjective Assessment (Impact) of Various AI Developments in Our Lives and Work

Survey

The population of the present research consists of those who registered to attend the workshop entitled “Introduction to Artificial Intelligence in simple terms,” organized and implemented by the Association for the Understanding and Promotion of Computer Science and Digital Creativity “CODiC Group.” The workshop was held on 18 December 2023 in Heraklion, Crete, Greece, and was addressed to citizens without specialized knowledge in the field of AI. Its purpose was to familiarize the public with the basic concepts and applications of AI. To conduct the survey, a questionnaire was created on the Google Forms platform and shared with the participants through social media. In this research, the sample coincides with the population, as the questionnaire was answered by all 193 participants before the workshop was conducted. The SPSS statistical package was used for the statistical analysis of the responses.

The first part of the questionnaire was about the socio-demographic characteristics of the participants such as gender, age, level of education, work and previous experience with AI.

The second part of the questionnaire used the “Affinity for Technology Interaction Short Scale” [23] to capture the participants’ relationship with technology. This affinity refers to an individual’s tendency to actively engage in intensive use of technology [23] and is associated with a positive underlying attitude towards various technologies. Based on these questions, a composite variable named “Affinity” was created. The questionnaire was translated from English into Greek. The internal consistency index, Cronbach’s alpha [24], of the original questionnaire is 0.87; the internal consistency achieved in this survey is α = 0.639.

Next, a set of questions was used to measure trust towards AI, from which a composite variable named “trust in AI” was created. Trust in general is an important prerequisite for human coexistence and cooperation [25], [26]. Mayer, Salovey and Caruso [27] defined trust as the willingness of one party to be vulnerable to another party. Because technology is treated as a social actor [28], trust is also important for the acceptance and use of digital products and services. To measure trust in AI, the three-item “trust in AI” scale of Brauner et al. [1] was used, translated by the researchers into Greek:

1. I believe that AI applications have good intentions (trust1)

2. I cannot rely on AI these days (t2i - reversed)

3. In general, I can trust AI (trust3)

The internal consistency index, Cronbach’s alpha [24], of the original questionnaire is 0.629; the internal consistency achieved in this survey is α = 0.582. However, it was observed that if the reversed item ‘t2i’ is excluded from the composite variable, the internal consistency increases considerably (from α = 0.582 with 3 items to α = 0.779 with 2 items).
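For completeness, this reliability check (Cronbach’s alpha and the alpha-if-item-deleted comparison) can be reproduced outside SPSS with a few lines of code. The sketch below uses simulated placeholder data; the column names mirror the three trust items but are otherwise hypothetical, not the actual survey variables.

```python
# Minimal sketch of Cronbach's alpha and the alpha-if-item-deleted check,
# assuming the items are columns of a pandas DataFrame (placeholder data).
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses for 193 respondents
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.integers(1, 6, size=(193, 3)),
                  columns=["trust1", "t2i_reversed", "trust3"])

print("alpha, all 3 items:", round(cronbach_alpha(df), 3))
# Alpha-if-item-deleted: recompute after dropping each item in turn,
# mirroring the observation that excluding 't2i' raised alpha.
for col in df.columns:
    print(f"alpha without {col}:", round(cronbach_alpha(df.drop(columns=col)), 3))
```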

In the third part of the survey questionnaire, participants were given a list of 18 AI-related statements and asked to indicate how likely they thought these statements were to happen in the future. The possible answers ranged from “not at all likely” to “very likely” (5-point scale). For the same events, participants were asked to indicate how negative or positive an impact they thought these events would have on their lives if they were to occur. Possible responses ranged from “very negative” to “very positive” (4-point scale). These 18 events were derived from the experts’ focus group session (see the Identification of the Topics section).

Description of the Sample

The questionnaire was answered by 193 participants, of whom 68 (35.2%) were male and 125 (64.8%) female. Of these, three (1.6%) are under 18 years old, 16 (8.3%) are in the age group 19–29, 50 (25.9%) are aged 30 to 44 and 124 (64.2%) are 45 years and above. Regarding their level of education, 76 (39.4%) stated that they hold a master’s or doctoral degree, 59 (30.6%) a university degree, 47 (24.4%) a high school diploma, and 11 (5.7%) stated that they are students. Regarding work, 78 (40.4%) were teachers in either the public or private sector, 34 (17.6%) public employees, 31 (16.1%) private sector employees, 23 (11.9%) self-employed/entrepreneurs, 16 (8.3%) unemployed and 11 (5.7%) students. In terms of participants’ previous experience with AI, 99 of the 193 (51.3%) had used a dialogue application based on large language models (LLMs), 15 (7.8%) AI translation and 44 (22.8%) a digital assistant.

Results

Perceived Impact and Likelihood

The third part of the questionnaire asked participants to indicate how likely they thought each of the 18 statements was to happen in the future and how negative or positive an impact they thought it would have on their lives. Positive values correspond to a positive impact and negative values to a negative impact; the higher the absolute value, the greater the impact the public believes the statement will have on their lives. Similarly, the higher the likelihood value, the more likely the public thinks the statement is to happen, while negative likelihood values indicate that the public considers the statement unlikely. As part of the descriptive statistics, means were calculated for each variable (Table I). Statements are sorted from strongest to weakest impact.

We then mapped these statements spatially. Fig. 2 presents a scatter plot of the likelihood of occurrence for each of the 18 survey statements and their impact estimates. Each individual point on the figure represents the rating of a statement. The position of the points on the horizontal axis represents the estimated likelihood of occurrence, with the statements rated as most likely to occur furthest to the right on the figure. The position on the vertical axis represents the rating of the statement, with statements rated as the most positive appearing higher on the graph.

Fig. 2. Criticality map showing the relationship between estimated likelihood and impact-evaluation for the AI predictions.
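For illustration, a criticality map of this kind can be drawn directly from the Table I means, with likelihood on the horizontal axis, impact on the vertical axis, and quadrant lines at zero. The sketch below (Python/matplotlib) is a minimal reconstruction, not the original figure; statement labels are shortened for readability.

```python
# Sketch of the criticality map built from the Table I means
# (x = estimated likelihood %, y = impact/evaluation %).
import matplotlib.pyplot as plt

statements = {
    "Promote innovation": (53.37, 52.59),
    "Change the way of education": (56.13, 42.23),
    "Increase my personal performance": (30.57, 36.01),
    "Increase the standard of living": (28.15, 35.23),
    "Increase economic performance": (11.92, 32.64),
    "Increase leisure time for all": (10.54, 29.02),
    "Solve complex social issues": (-9.50, 11.92),
    "Increase my personal income": (-13.64, 7.25),
    "Create more jobs": (-18.48, 5.96),
    "Increase leisure time for a few": (3.97, -10.62),
    "Destroy jobs": (15.72, -20.47),
    "Cause social inequalities": (8.12, -21.76),
    "Decrease my communication": (-2.59, -23.58),
    "Blend work and leisure time": (-26.42, -25.13),
    "Make moral decisions": (-25.04, -27.98),
    "Control my private life": (4.32, -29.79),
    "Threaten my professional future": (-30.92, -33.42),
    "Degrade the natural environment": (-25.04, -37.82),
}

fig, ax = plt.subplots(figsize=(9, 7))
for label, (likelihood, impact) in statements.items():
    ax.scatter(likelihood, impact, color="tab:blue")
    ax.annotate(label, (likelihood, impact), fontsize=7,
                xytext=(3, 3), textcoords="offset points")
ax.axhline(0, color="grey", lw=0.8)  # impact: positive above, negative below
ax.axvline(0, color="grey", lw=0.8)  # likelihood: likely right, unlikely left
ax.set_xlabel("Estimated likelihood (%)")
ax.set_ylabel("Evaluation / impact (%)")
ax.set_title("Criticality map of the 18 AI statements")
plt.tight_layout()
plt.show()
```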

The resulting graph can be interpreted as a criticality map [1] and read as follows: the events in the upper right quadrant are those that participants believe are likely to happen and will have a positive impact on their lives. The upper left quadrant includes events that are less likely to happen but that, if they do happen, will have a positive impact on their lives. In the lower left quadrant are events that are less likely to happen but that, if they do happen, will have a negative impact on their lives. Finally, in the lower right quadrant are the events that are likely to happen and will have a negative impact on their lives. Dots on or near the diagonal represent aspects whose perceived impact is consistent with the estimated probability of occurrence. These aspects are perceived either as likely and positive (e.g., ‘promote innovation,’ ‘significantly change the way education is delivered,’ ‘increase my personal performance,’ ‘increase the standard of living’) or as unlikely and negative (e.g., ‘degrade the natural environment,’ ‘threaten my professional future,’ ‘make moral decisions,’ ‘blend work and leisure time’). On the other hand, expectations and evaluations diverge at points off the diagonal: the future is either seen as probable and negative (e.g., ‘destroy jobs,’ ‘control my private life,’ ‘cause social inequalities’) or as unlikely and positive (e.g., ‘create more jobs,’ ‘solve complex social issues,’ ‘increase my personal income (wealth)’).

Then, we compared the results of the study with those of Brauner et al. [1] (Table I; Figs. 1 and 2) with regard to the participants’ estimated likelihood of occurrence (Likelihood) and subjective assessment (Evaluation) of the various consequences AI could have on our lives and work. In general, the results are consistent with those of Brauner et al. [1]. The differences are as follows. ‘Blend work and leisure’: in the German study blending work and leisure is considered rather likely, while in the Greek study it is considered relatively unlikely. ‘Make moral decisions’: the German survey considers it much less likely than the Greek survey that AI will make moral decisions; the probability is relatively low in both cases. ‘Increase my personal performance’: in our study, participants believe it is very likely that AI will increase their personal performance, unlike in the German study. ‘Increase leisure time for all’: this is considered more likely, and equally positive, in the Greek survey. ‘Decrease my communication with others’: in the German survey this is seen as much more negative and more likely. ‘Create more jobs’: this is not seen as likely in either survey, but it is seen as less positive in the German survey; this can be explained by the survey method, as the German survey includes a statement about ‘well-paid jobs,’ which is seen as both more positive and more likely than ‘creating jobs.’ ‘Control my private life’: this moves from ‘unlikely’ in the German survey to ‘there is a probability’ in ours.

Are the Expected Likelihood of Occurrence and the Perceived Impact Correlated?

The next step is to analyse whether expected likelihood and perceived impact are correlated. To do this, we calculated Pearson’s correlation coefficient between the two composite scores, i.e., the averages of the ratings of the 18 AI-related statements. The test showed a moderate, statistically significant correlation (r = 0.365, p < 0.01) (Table II). This means that, in our study, the extent to which participants believe that AI will have a strong or weak, positive or negative impact on society and on personal and professional life is related to the likelihood of these predictions coming true. This finding contrasts with what was found in the sample of Brauner et al.’s [1] study and will be discussed in the Discussion section.

| | | Trust | Affinity | Likelihood | Impact |
|---|---|---|---|---|---|
| Trust | Pearson correlation | 1 | .272** | .098 | .409** |
| | Sig. (2-tailed) | | .000 | .174 | .000 |
| | N | 193 | 193 | 193 | 193 |
| Affinity | Pearson correlation | .272** | 1 | .015 | .154* |
| | Sig. (2-tailed) | .000 | | .836 | .033 |
| | N | 193 | 193 | 193 | 193 |
| Likelihood | Pearson correlation | .098 | .015 | 1 | .365** |
| | Sig. (2-tailed) | .174 | .836 | | .000 |
| | N | 193 | 193 | 193 | 193 |
| Impact | Pearson correlation | .409** | .154* | .365** | 1 |
| | Sig. (2-tailed) | .000 | .033 | .000 | |
| | N | 193 | 193 | 193 | 193 |

Table II. Correlations Impact, Likelihood, Trust, Affinity (** significant at the 0.01 level, * significant at the 0.05 level, 2-tailed)
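For readers reproducing the analysis outside SPSS, correlations of this kind can be computed as sketched below. The DataFrame, its column names and the simulated values are assumptions for illustration, not the survey responses.

```python
# Sketch of the Table II correlation analysis, assuming one row per respondent
# and the four composite scores as columns (placeholder data and names).
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)
data = pd.DataFrame({
    "trust": rng.normal(3.0, 0.7, 193),
    "affinity": rng.normal(3.4, 0.7, 193),
    "likelihood": rng.normal(0.05, 0.2, 193),  # composite, signed scale
    "impact": rng.normal(0.10, 0.3, 193),
})

# Full Pearson correlation matrix (the r values of Table II)
print(data.corr(method="pearson").round(3))

# r and two-tailed p for a single pair, e.g. impact vs. likelihood
r, p = stats.pearsonr(data["impact"], data["likelihood"])
print(f"r = {r:.3f}, p = {p:.3f}")
```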

Are Perceived Impact and Perceived Likelihood Associated with Attitudes, such as ‘Trust in AI’ and ‘Affinity Towards Technological Interaction’?

Table II also shows Pearson’s correlation coefficients among the variables ‘trust in AI,’ ‘affinity towards technological interaction,’ ‘perceived impact’ and ‘perceived likelihood.’ Our data show a strong, statistically significant correlation between ‘perceived impact of AI’ and ‘trust in AI,’ and a weak but statistically significant correlation between ‘perceived impact of AI’ and ‘affinity towards technological interaction,’ in contrast to the research of Brauner et al. [1], where ‘impact’ and ‘trust in AI’ were weakly but negatively associated. Furthermore, in our study there was no statistically significant correlation between ‘perceived likelihood’ and the ‘trust in AI’ and ‘affinity’ variables.

Are ‘Perceived Impact’ and ‘Perceived Likelihood’ Correlated with Socio-Demographic Factors, Such as Gender, Age, Work, Education Level, Previous Experience of using ChatGPT?

Gender had no significant effect on either “perceived impact,” t(191) = 1.2, p = .264, or “perceived likelihood,” t(191) = 0.09, p = .925, even though men (M = 3.88, SD = 0.68) had significantly greater ‘affinity towards technology interaction’ than women (M = 3.21, SD = 0.71), t(191) = 6.25, p < .001.

One-way ANOVA was performed to compare the effect of ‘age’ on “perceived impact” and “perceived likelihood” across the age groups (0–18), (19–29), (30–44) and (45+). There was no significant effect of age on ‘perceived impact’ or ‘perceived likelihood’ at the p < .05 level for the four conditions [F(3, 189) = 1.443, p = .232], [F(3, 189) = .996, p = .396].

One-way ANOVA was performed to compare the effect of ‘type of work’ on “perceived impact” and “perceived likelihood” across the categories: unemployed, civil servant, teacher, freelancer/entrepreneur, student, private sector employee. There was no significant effect of type of work on ‘perceived impact’ or ‘perceived likelihood’ at the p < .05 level for the six conditions [F(5, 187) = 1.540, p = .179], [F(5, 187) = .616, p = .688].

One-way ANOVA was also performed to compare the effect of ‘education level’ on “perceived impact” and “perceived likelihood” across the levels: student, high school graduate, higher education graduate, master’s or PhD holder. There was no significant effect of ‘education level’ on ‘perceived impact’ or ‘perceived likelihood’ at the p < .05 level for the four conditions [F(3, 189) = .722, p = .540], [F(3, 189) = 1.231, p = .300].

Finally, ‘previous experience of using ChatGPT’ had no significant effect on either “perceived impact,” t(191) = 1.038, p = .300, or “perceived likelihood,” t(191) = 1.951, p = .053.
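The group-difference tests above follow a standard pattern: an independent-samples t-test for two groups and a one-way ANOVA for more than two. A minimal sketch using scipy is shown below; the data frame and column names are illustrative placeholders, not the survey data.

```python
# Sketch of the group-difference tests: t-test by gender, ANOVA by age group.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "impact": rng.normal(0.1, 0.3, 193),
    "gender": rng.choice(["male", "female"], 193),
    "age_group": rng.choice(["0-18", "19-29", "30-44", "45+"], 193),
})

# Independent-samples t-test: perceived impact by gender
male = df.loc[df.gender == "male", "impact"]
female = df.loc[df.gender == "female", "impact"]
t, p = stats.ttest_ind(male, female)  # equal variances assumed, as in the SPSS default
print(f"t({len(df) - 2}) = {t:.2f}, p = {p:.3f}")

# One-way ANOVA: perceived impact across the four age groups
groups = [g["impact"].values for _, g in df.groupby("age_group")]
f, p = stats.f_oneway(*groups)
print(f"F(3, {len(df) - 4}) = {f:.3f}, p = {p:.3f}")
```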

Discussion

In the criticality map (Fig. 2), three sets of points deserve particular attention. First, the points in the lower half of the graph, as these are seen as negative by the participants; this is where future R&D needs to be directed to address people’s concerns. Second, the points in the upper left quadrant of the graph, as these are considered positive but unlikely; these points indicate where participants perceive research and implementation of AI to fall short of what they want. Finally, all items where there is a large discrepancy between the likelihood of occurrence and the assessment: the future is either seen as probable and negative (e.g., ‘destroy jobs’ or ‘control my private life’) or as unlikely and positive (e.g., ‘create more jobs,’ ‘solve complex social issues’ or ‘increase my personal income (wealth)’). These items are likely to cause greater insecurity and uncertainty in the population [1], and decision and policy makers, scientists, academia and educators must take them into account.

As shown in Table I and Fig. 2, participants find it positive and highly likely that AI will ‘promote innovation’ and ‘significantly change the way education is delivered,’ and positive and likely (but less so) that it will ‘increase their standard of living’ and ‘personal performance.’ They also think it is negative and quite likely that AI will ‘destroy jobs.’ Interestingly, not only do they not think AI will ‘increase their personal income,’ they also rate the prospect of an increased personal income as having a mostly neutral to mildly positive impact on their lives. In contrast, they see the prospects of ‘increased economic performance’ and an ‘increased standard of living’ as positive, with moderate and high probability, respectively. For the same reason, ‘solving complex social problems’ and ‘creating jobs’ through AI are seen as having only a low positive impact. What does this mean? Is it a matter of perspective, or is it a probability-impact correlation, i.e., low probability of occurrence, therefore low impact? We cannot answer this question within this quantitative study. Moreover, our participants, as in the Brauner et al. study [1], despite believing that AI will most likely ‘destroy jobs,’ do not fear for their own professional future, because they believe that the likelihood of the development of AI having a negative impact on them personally is low. Nor do they fear that AI will ‘degrade the natural environment.’

Comparing the results of the study with those of Brauner et al. [1] with regard to the participants’ estimated likelihood of occurrence (Likelihood) and subjective assessment (Evaluation) of the various consequences AI could have on our lives and work, we believe that most of the above differences are due to differences in the profile of the participants and the context of their participation in the study. To begin with, there were two different cultural and socio-economic contexts: German and Greek. In addition, 64% of the participants in our study are over 45 years old, whereas the average age in the German study is 33.9 years. In both cases, the majority are women. In our study, around 70% are university graduates, 40% are teachers, and 51.3% had already used ChatGPT. In the German study there are no data on educational level, occupation or use of AI. Thus, it is plausible that young Germans fear that they are more likely to mix work and leisure and to reduce their communication with others than middle-aged Greeks, who have a more stable work and leisure environment. Furthermore, all the participants in our study had come to a free seminar on AI in their spare time, driven purely by intrinsic or social motivation; we would therefore say that they exhibited some lifelong learning tendencies. Lifelong learning tendencies have been found to be positively correlated with Information Technology self-efficacy [10], [11], and consequently, we may assume, with positive perceptions of technology and AI. Thus, it also makes sense that Greeks with lifelong learning tendencies, 70% of whom are graduates, 40% teachers and 51.3% users of ChatGPT, believe that AI will significantly change the way education is provided and will increase their personal performance and free time for everyone. The difference in ‘creating more jobs’ can be explained by the fact that the German survey includes a statement about ‘well-paid jobs,’ which is seen as both more positive and more likely than the corresponding ‘creating jobs.’ Finally, the differences in the statements ‘make moral decisions’ and ‘control my private life’ cannot be explained by the present study. In addition, in contrast to the young, male, educated, urban, high-income participants who expressed familiarity and comfort with AI in the Morning Consult and United Kingdom Government studies, as cited in O’Shaughnessy et al. [3], our study included mostly middle-aged, mostly female, mostly educated, urban, middle-income participants. These participants stated that they trust AI to improve their personal performance, believing that AI is changing education while not affecting their social relationships.

In our study, the extent to which participants believe that AI will have a strong or weak, positive or negative impact on society and on personal and professional life is related to the likelihood of these predictions coming true. This finding contrasts with what was found in Brauner et al.’s [1] study and may explain why some obviously positive developments, such as ‘solving complex social problems,’ ‘increase my personal income’ and ‘create more jobs’ through AI, are estimated as having a low positive impact. Thus, participants revealed a rather pessimistic approach to the impact of AI on these issues.

Furthermore, our data show a strong, statistically significant correlation between ‘perceived impact of AI’ and ‘trust in AI,’ and a weak but statistically significant correlation between ‘perceived impact of AI’ and ‘affinity towards technological interaction,’ in contrast to the research of Brauner et al. [1]. In Brauner et al. [1], ‘impact’ and ‘trust in AI’ were weakly but negatively associated, revealing a different meaning of “trust in AI,” namely “trust that the technology can deliver what is promised to oneself or by others,” whereas in our study participants perceived “trust in AI” as trust that “the technology is reliable and useful.”

Finally, ‘perceived impact’ and ‘perceived likelihood,’ as composite variables, are not significantly correlated with socio-demographic factors such as gender, age, work, education level and previous experience of using ChatGPT. Future work should examine each statement-prediction separately to find out whether there are contexts (e.g., education, the labor market, private life) where socio-demographic factors matter.

Implications and Conclusions

Understanding the public perception of AI is essential for Responsible Research and Innovation (RRI). Decision-makers, scientists, researchers, entrepreneurs and education policymakers should take into account citizens’ fears that AI will create problems in employment and privacy without improving their lives economically or solving social problems. How justified are these fears? What are these officials doing about them? Are they basing their decisions on what society needs? Are they informing people about AI and the possible directions it could take? On the other hand, AI applications such as LLMs have raised the expectations of the people in our sample (Greeks, mostly educated, mostly women, 40% teachers, about half of whom have tried LLMs at least once) that AI will change education and improve their individual performance. It will be up to the leaders mentioned above to live up to these expectations.

Comparing our results with those of a similar study in Germany [1], our study seems to confirm that people’s expectations and evaluations of AI depend to some extent i) on the characteristics of the group of participants, rather than on any single individual characteristic (e.g., level of education), ii) perhaps on the culture and socio-economic context in which the participants live (e.g., Greece vs. Germany), and iii) on the context in which AI is implemented (e.g., education vs. the labor market).

Limitations and Future Work

Obviously, there are some limitations to this study. First, the sample of 193 participants is representative neither of Greek citizens in general nor even of the population of Greeks who share the same characteristics as the sample. However, it allows us to formulate hypotheses that can be confirmed or rejected in a future study in the same context. Such future studies should take into account that the participants also had the characteristics of people interested in learning about AI in their spare time. Second, our study has the disadvantages of a quantitative study: in many cases it cannot explain why something happens. Only a future study that is both quantitative and qualitative could do this. Third, the correlations with the different factors were computed using the composite variables ‘impact’ and ‘likelihood.’ In our study, these composite variables were not statistically significantly correlated with any of the socio-demographic factors. However, it is possible that there may be correlations between some socio-demographic factors and some of the individual predictor statements. This should be the subject of a future study.

Finally, the 18 predictor statements covered only part of the scope of AI, and not comprehensively. Future research should focus on specific areas such as education, the labor market and security. However, the fact that we limited the number of questions in our questionnaire allowed participants to answer all of them, giving quick but genuine answers reflecting their perceptions. The fact that we considered different areas also allows hypotheses of wider interest to emerge.

References

1. Brauner P, Hick A, Philipsen R, Ziefle M. What does the public think about artificial intelligence?—A criticality map to understand bias in the public perception of AI. Front Comput Sci. 2023;5:1234567. doi: 10.3389/fcomp.2023.1234567.
2. Wang H, Fu T, Du Y, Gao W, Huang K, Liu Z, et al. Scientific discovery in the age of artificial intelligence. Nat. 2023;620(7972):47–60. doi: 10.1038/s41586-023-06221-2.
3. O’Shaughnessy MR, Schiff DS, Varshney LR, Rozell CJ, Davenport MA. Generative AI and cognitive integrity: a human-centered approach. Science. 2023;379(6634):eabn4921. doi: 10.1126/science.abn4921.
4. Nader K, Toprac P, Scott S, Baker S. Public understanding of artificial intelligence through entertainment media. AI Soc. 2024;39(2):713–26.
5. Luckin R. Machine learning and human intelligence: The future of education for the 21st century. London: UCL IOE Press; 2018.
6. Miao F, Cukurova M. AI competence framework for teachers. UNESCO; 2024. doi: 10.54675/ZJTE2084.
7. Okada A, Sherborne T, Panselinas G, Kolionis G. Fostering transversal skills through open schooling supported by the CARE-KNOW-DO pedagogical model and the UNESCO AI competencies framework. Int J Artif Intell Educ. 2025;1(1):1–46. doi: 10.1007/s40593-025-00458-w.
8. Shum SJB, Luckin R. Learning analytics and AI: politics, pedagogy and practices. Br J Educ Technol. 2019;50(6):2785–93. doi: 10.1111/bjet.12880.
9. Viidalepp A. Representations of robots in science fiction film narratives as signifiers of human identity. Inform Társ. 2020;20(4):19–36. doi: 10.22503/inftars.XX.2020.4.2.
10. Demirel M, Akkoyunlu B. Prospective teachers’ lifelong learning tendencies and information literacy self-efficacy. Educ Res Rev. 2017;12(6):329–37.
11. Sen N, Yıldız Durak H. Examining the relationships between English teachers’ lifelong learning tendencies with professional competencies and technology integrating self-efficacy. Educ Inf Technol. 2022;27(5):5953–88. doi: 10.1007/s10639-021-10851-2.
12. Billett S. Distinguishing lifelong learning from lifelong education. J Adult Learn Knowl Innov. 2018;2(1):1–7.
13. Indeed. What does it mean to be a lifelong learner? (With benefits). 2024. Accessed 2025 May 20. Available from: https://ca.indeed.com/career-advice/career-development/lifelong-learner.
14. Point Loma Nazarene University. 10 habits of a lifelong learner. 2024. Accessed 2025 May 20. Available from: https://www.pointloma.edu/resources/accelerated-undergraduate/10-habits-lifelong-learner.
15. Holzinger A, Keiblinger K, Holub P, Zatloukal K, Müller H. AI for life: trends in artificial intelligence for biotechnology. New Biotechnol. 2023;74:16–24. doi: 10.1016/j.nbt.2023.02.001.
16. Khaleel M, Ahmed AA, Alsharif A. Artificial intelligence in engineering. Brill Res Artif Intell. 2023;3(1):32–42. doi: 10.47709/brilliance.v3i1.2170.
17. Al Kuwaiti A, Nazer K, Al-Reedy A, Al-Shehri S, Al-Muhanna A, Subbarayalu AV, et al. A review of the role of artificial intelligence in healthcare. J Pers Med. 2023;13(6):951. doi: 10.3390/jpm13060951.
18. Rospigliosi PA. Adopting the metaverse for learning environments means more use of deep learning artificial intelligence: this presents challenges and problems. Interact Learn Environ. 2022;30(9):1573–6. doi: 10.1080/10494820.2022.2132034.
19. Rospigliosi PA. Artificial intelligence in teaching and learning: what questions should we ask of ChatGPT? Interact Learn Environ. 2023;31(1):1–3. doi: 10.1080/10494820.2023.2180191.
20. Beets B, Newman TP, Howell EL, Bao L, Yang S. Surveying public perceptions of artificial intelligence in health care in the United States: systematic review. J Med Internet Res. 2023;25:e40337. doi: 10.2196/40337.
21. Sartori L, Bocca G. Minding the gap(s): public perceptions of AI and socio-technical imaginaries. AI Soc. 2023;38(2):443–58.
22. Smith A, Anderson J. AI, robotics, and the future of jobs [Internet]. Pew Research Center. Washington (DC). 2014 Aug 6 [cited 2025 Aug 15]. Available from: https://www.pewresearch.org/internet/2014/08/06/future-of-jobs/.
23. Wessel D, Attig C, Franke T. ATI-S: An ultra-short scale for assessing affinity for technology interaction in user studies. In Alt F, Bulling A, Döring T, editors, MuC’19: Proceedings of Mensch und Computer 2019. New York: ACM. 2019. pp. 147–54. doi: 10.1145/3340764.3340766.
24. Cronbach LJ. Coefficient alpha and the internal structure of tests. Psychometrika. 1951;16(3):297–334. doi: 10.1007/BF02310555.
25. McKnight DH, Choudhury V, Kacmar C. Developing and validating trust measures for e-commerce. Inf Syst Res. 2002;13(3):334–59. doi: 10.1287/isre.13.3.334.81.
26. Hoff KA, Bashir M. Trust in automation: integrating empirical evidence on factors that influence trust. Hum Factors. 2015;57(3):407–34. doi: 10.1177/0018720814547570.
27. Mayer JD, Salovey P, Caruso DR. Emotional intelligence. Imagin Cogn Pers. 1995;15(3):197–215. doi: 10.2190/DUGG-P24E-52WK-6CDG.
28. Reeves B, Nass C. The media equation: How people treat computers, television, and new media like real people and places. Cambridge: Cambridge University Press; 1996.