Kobe University, Japan
* Corresponding author
Higashiyodogawa-ku, Japan
Kobe University, Japan


In recent years, aquaphotomics-based research has been actively conducted across various fields to measure, analyze, and collect spectral data related to the moisture content of diverse subjects, including plants, animals, beauty products, concrete, and cement. In such aquaphotomics-oriented scientific fields, effectively categorizing these datasets using diverse system-based methodologies is essential. Aquaphotomics, a subfield of near-infrared spectroscopy (NIRS), has been extensively explored to investigate water-related spectral properties. However, research integrating AI-driven analytical techniques, particularly for human-related datasets, remains limited. In fields related to aquaphotomics, to date, little to no research has been conducted on utilizing AI, including Large Language Model (LLM)-based technologies, to process and analyze multiple datasets from human subjects. This study seeks to bridge this gap by employing Generative Pre-trained Transformer (GPT)-4o to process and interpret aquaphotomics data, thereby facilitating more effective and comprehensive clustering of moisture content variations in human skin. Specifically, we divide subjects into four groups based on their skin moisture content and their rate of epidermal water loss. We then ask, based on the subjects’ basic information, skin care habits, and lifestyle information concerning exercise, sleep, etc., “Which group should we predict that any given, new subject will belong to?” and “What is the probability that those predictions are correct?” We also specify the process of making those predictions. By presenting this set of analytical data in numerical form to both the user and the esthetic salon, we hope that both will be able to make informed and satisfactory decisions regarding skin care routines.
This includes selecting appropriate serums and supplements, which should improve both the accuracy and effectiveness of skin care choices. In the long term, we anticipate that this research will contribute to the development of generative AI applications in skin care science, which is still evolving.

Introduction

In this study, we first obtain preliminary baseline data for four groups defined by skin water content and water loss. To date, few studies have used Generative Pre-trained Transformer (GPT)-4o to predict a subject’s skin type from multiple pieces of information obtained from the subject (nonspecific, non-public subject datasets) from a cosmetic perspective. In light of past examples, it should be possible for us to use GPT-4o to perform inference experiments of group classification on the evaluated subject datasets and to visualize the inference process and its validity [1]–[15].

Although GPT is fundamentally based on large language models (LLMs), mathematical analysis is one of the areas where it can be useful. One major advantage of using GPT in this area is its dominant position in the generative AI market, especially among AI chatbots in the U.S. For example, according to data provided by First Page Sage, ChatGPT maintained a 61.3% market share in the U.S. generative AI chatbot segment in July 2024. This significant market presence suggests that, despite potential limitations in terms of accuracy, GPT applications in this area may still have considerable utility.

Preliminary Method and This Study’s Method

We propose ways to combine our past diverse skin datasets, particularly through aquaphotomics-based analysis methodologies. We obtained subjects’ (i) skin moisture and (ii) evaporation data from measurements at beauty salons of Dr. Recella Co., Ltd. (Higashiyodogawa-ku, Osaka, Japan). For each subject, we captured diverse information, in particular (i) the average skin water content, measured with a Corneometer CM825 (Courage + Khazaka electronic GmbH, Köln, Germany) in the manufacturer’s own measurement units, and (ii) the rate of epidermal water loss (transpiration) [g/m²/h], collected with a Tewameter (Courage + Khazaka electronic GmbH, Köln, Germany), as presented in Fig. 1 and Table I.

Fig. 1. Two-axial scatter plot incorporating boundary values.

Description Group 1 Group 2 Group 3 Group 4
Dry 0 3 1 2
Dry & Sensitive 4 1 2 1
Dry & Combination 1 0 0 0
Normal 1 0 2 1
Combination (Oily & Dry) 3 2 3 3
Combination (Oily & Dry) & Acne 0 2 0 1
Combination (Oily & Dry) & Sensitive 2 1 0 1
Oily 0 0 1 1
Normal & Dry 0 1 2 0
Sensitive 0 1 0 1
Total 11 11 11 11
Table I. Survey Data on How Subjects from the Four Distinct Groups Perceive Their Own Skin Type

In this study, we selected 44 subjects. The classification of skin quality based on the water content and transpiration rate of the epidermis can be categorized into the following four groups:

Group 1: The epidermis exhibits high water content and a high transpiration rate (N = 11).

Group 2: The epidermis exhibits high water content and a low transpiration rate. This skin type is generally considered ideal (N = 11).

Group 3: The epidermis exhibits low water content and a low transpiration rate (N = 11).

Group 4: The epidermis exhibits low water content and a high transpiration rate. This skin type is generally considered suboptimal or poor (N = 11).

We use the data of six randomly selected participants from each group as the training dataset, while the remaining five participants’ data serve as the evaluation dataset. We then measure the accuracy of the predictions, analyze the results, and discuss our findings.
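The random split described above can be sketched in Python; the subject IDs below are placeholders rather than the study’s actual numbering:

```python
import random

random.seed(0)  # reproducible illustration only

# Hypothetical subject IDs: 11 per group, as in the study design.
groups = {g: [f"G{g}-{i:02d}" for i in range(1, 12)] for g in range(1, 5)}

train, evaluation = [], []
for g, members in groups.items():
    chosen = random.sample(members, 6)  # six training subjects per group
    train.extend((g, s) for s in chosen)
    evaluation.extend((g, s) for s in members if s not in chosen)

print(len(train), len(evaluation))  # -> 24 20
```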

The items of the questionnaire that the subjects were asked to complete are listed below:

Basic information:

1. Subject number (No.)

2. Stature (cm)

3. Weight (kg)

4. Sex

5. Birth date

6. Age

7. Hobby

Skincare Routine:

8. “What do you think your skin type is?”

9. “What kind of skin care do you habitually do at night?” (“What kind of home care do you do?”) [e.g., Cleansing]

10. “What kind of skin care do you habitually do at night?” (“What kind of home care do you do?”) [e.g., Face wash using solid soap?]

11. “What kind of skin care do you habitually do at night?” (“What kind of home care do you do?”) [e.g., Face wash using a cleanser other than solid soap?]

12. “What kind of skin care do you habitually do at night?” (“What kind of home care do you do?”) [e.g., using XXX-Lotion]

13. “What kind of skin care do you habitually do at night?” (“What kind of home care do you do?”) [e.g., Serum]

14. “What kind of skin care do you habitually do at night?” (What kind of home care do you do?) [e.g., using Gel]

15. “What kind of skin care do you habitually do at night?” (What kind of home care do you do?) [e.g., using Cream/lotion?]

16. “What kind of skin care do you habitually do at night other than the above? What products do you use?”

17. “What kind of skin care do you habitually do in the morning?” (What kind of home care do you do?) [e.g., Cleansing?]

18. “What kind of skin care do you habitually do in the morning?” (What kind of home care do you do?) [e.g., Face wash two times using solid soap]

19. “What kind of skin care routine (home care) do you do in the morning?” [e.g., Face wash two times using a cleanser other than solid soap?]

20. “What kind of skin care routine (home care) do you do in the morning?” [e.g., using lotions?]

21. “What kind of skin care routine (home care) do you do in the morning?”

22. “What kind of skin care routine (home care) do you do in the morning?”

23. “What kind of skin care routine (home care) do you do in the morning?”

24. “What kind of skin care routine (home care) do you do in the morning?” [e.g., using sunscreens?]

25. “Please tell us if you use any skin care routines or products other than those listed above for your morning skin care routine”

Health Status:

26. “Lately (looking back over the last two weeks), have you felt physically and emotionally tired?”

27. “Lately (looking back over the last two weeks), have you felt physically and emotionally tired?”

28. “Do you get blisters on your mouth or body when you have a cold or are tired? (Herpes, shingles, etc.?)”

29. “Do you have any allergies? If so, select “Other” and tell us as many allergens as you know?”

30. “Do you have a period?”

Lifestyle Habits:

31. “Do you usually smoke? If you do, please select “Other” and tell us the amount you consume per day and the number of years you have smoked?”

32. “How much alcohol do you usually drink? Please choose the answer that you think applies.”

33. “Please choose the answer that you think applies regarding your regular exercise.”

34. “Lately (looking back over the past week or so), have you been eating a nutritionally balanced diet?”

35. “Please choose the score that you think is appropriate.”

36. “Do you think that you have been eating a nutritionally balanced diet recently (looking back over the past week or so)?”

37. “Please select the score that you think is appropriate.”

38. “Lately (looking back over the past week or so), have you been eating regularly? Please choose the score.”

39. Others

Perceived Skin Type

40. “What do you perceive about your own skin type?” [e.g., dry skin, oily skin, etc.]

Procedure

In our past successive studies, we have utilized recent eXplainable AI (XAI)- and generative AI-based methodologies and executed diverse data analyses [16]. However, to date, no prior research has explored the application of GPT-4o, the latest advanced version available for civilian use, in the field of aquaphotomics. Therefore, this study aims to conduct an exploratory trial utilizing GPT-4o to perform an analysis analogous to estimating and disclosing thought processes through explainable AI (XAI). Through this approach, the study seeks to demonstrate the potential utility of GPT-4o in this domain.

In this study, we exclusively utilize GPT-4o. The following section outlines the process of the experiment [17]–[21]:

1. First, we provide the following instruction to ChatGPT-4o: “We will now present information regarding four groups. Each group consists of 11 individuals, making a total of 44 participants. The information is as follows” (presented in Fig. 2).

Fig. 2. The screen displayed during the process of training ChatGPT-4o on the questionnaire items.

2. We then provide the following instruction: “We will convey all the questionnaire items that the participants have answered in writing.”

• Subject number (No.)

• Stature (cm)

• Weight (kg)

• Sex

• Birth date

• Age

• Hobby

• “What do you think your skin type is?”

• “What kind of skin care do you habitually do at night? (What kind of home care do you do?)” ... (The following section presents several dozen items in succession)

3. We convey the following instruction: “As the simplest case, we will first provide data from four participants: Subject No. 06 from Group 1, Subject No. 30 from Group 2, Subject No. 10 from Group 3, and Subject No. 04 from Group 4. Please use the data from these four participants as training data.”

4. Next, we send the data of Subject No. 07 without disclosing their group affiliation and provide the following instruction: “Which of the four groups do you predict this participant belongs to?

5. Additionally, please explain the reasoning process behind your prediction and present the predicted result. Furthermore, estimate the probability that your prediction is correct and express it in percentage format, such as 12.3 %.”

6. Next, as a practical dataset, we randomly select six participants from each of the aforementioned four groups, resulting in a dataset of 24 participants. We then provide this dataset as training data. Furthermore, we provide the remaining dataset of 20 participants, designating it as the evaluation dataset.

7. Subsequently, we provide the following instruction: “For each of these 20 participants, which group do you predict they belong to? As in the simplest case, please explain the reasoning process behind your predictions and present the predicted results.”

8. “Additionally, estimate the probability of each prediction being correct and express it in percentage format, such as For Subject No. 5, 12.3%.” “Please provide a detailed explanation of the intermediate steps, including the specific calculation formulas and other methods used to derive the ‘XX.X %’ value.”

9. We then analyze and examine the entirety of these outputs.
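As a sketch of how instructions like those above could be sent programmatically (assuming the OpenAI Python SDK; the prompt texts here are abbreviated placeholders, not the study’s full wording):

```python
# Abbreviated placeholder prompts, not the study's full instructions.
training_prompt = (
    "We will now present information regarding four groups. "
    "Each group consists of 11 individuals, making a total of 44 participants."
)
evaluation_prompt = (
    "Which of the four groups do you predict this participant belongs to? "
    "Please explain your reasoning and estimate the probability, e.g., 12.3%."
)

# Message payload in the Chat Completions format.
messages = [
    {"role": "system", "content": "You classify subjects into Groups 1-4."},
    {"role": "user", "content": training_prompt},
    {"role": "user", "content": evaluation_prompt},
]

# The actual request would be (requires an API key):
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(model="gpt-4o", messages=messages)
# print(response.choices[0].message.content)
```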

Results, Discussion, and Conclusion

After the aforementioned series of trials, the output (responses) generated by GPT-4o (paid, subscription-based version; session connections are subject to time limitations) is presented in Fig. 3.

Fig. 3. The screen displayed during the process of explaining the output data.

We first obtained the following response regarding the ‘group prediction reasoning process’ for the simplest case in this experiment:

1. Classification based on age and physical information:

• Closest age group:

- Subject No. 07 (24 years old)

- Closest groups: Subject No. 06 (25 years old, Group 1), Subject No. 30 (27 years old, Group 2)

• Comparison of height and weight:

- No. 06 (161 cm, 53 kg) → Close

- No. 30 (161 cm, 46 kg) → Close

- No. 10 (156 cm, unknown weight) → Close

- No. 04 (156 cm, 60 kg) → Slightly different

Based on age and physique, Subject No. 07 is most similar to No. 06 (Group 1) and No. 30 (Group 2).

2. Classification based on lifestyle habits and skincare:

• Exercise habits: ‘Does not exercise regularly’

- No. 07 → Does not exercise

- No. 30 (Group 2) → Does not exercise

- No. 10 (Group 3) → Does not exercise

- No. 04 (Group 4) → Exercises for more than 60 minutes per week

Based on exercise habits, Subject No. 07 is most similar to No. 30 (Group 2) or No. 10 (Group 3).

• Diet score (regularity of eating habits):

• No. 10 (Group 3) → 6 points/5 points

• No. 04 (Group 4) → 6 points/5 points

Subject No. 07 is the only one with irregular eating habits, suggesting a distinct lifestyle pattern compared to the other groups.

3. Classification based on skincare and health information:

• Skin type: ‘Combination (oily & dry)’

- No. 07 → Combination

- No. 06 (Group 1) → Combination.

- No. 30 (Group 2) → Combination + acne

- No. 10 (Group 3) → Normal

- No. 04 (Group 4) → Normal

Subject No. 07 is similar to No. 06 (Group 1) and No. 30 (Group 2).

• Presence of herpes or blisters

- No. 07 → Occasionally develops

- No. 06 → Experiences mild atopy or urticaria

- No. 30 → Does not develop

- No. 10 → Does not develop

- No. 04 → Does not develop

Subject No. 07 shows a similar tendency to No. 06 (Group 1).

4. Final Prediction:

• Subject No. 07 is most likely to belong to ‘Group 1’ (This is the correct answer)

Reasoning: Age and physique: Similar to No. 06 (Group 1)

Skincare habits: Combination skin type with similar skincare routines.

Lifestyle habits: Although their diet is somewhat irregular, their lack of exercise is similar to No. 30 (Group 2)

Health condition: Presence of herpes and other skin issues aligns with No. 06 (Group 1)

However, due to the extreme irregularity of their diet, Subject No. 07 does not completely fit into any single group. If additional data were available, there is a possibility that they could be classified into a new group.

5. GPT-4o calculated and displayed the probability of this prediction being correct as 78.6% (Table II):

• A detailed explanation of the process used to derive this 78.6 % probability is provided below.

• The objective (goal) is to predict which group (represented by No. 06, No. 30, No. 10, and No. 04) Subject No. 07 is most likely to belong to and to calculate the probability of this classification.

• Organizing the data: to compare the subjects numerically, we convert key characteristics into numerical values.

• For the distance calculation, we use the Euclidean distance, a method for measuring the distance between two data points: each feature difference (age, height, weight, etc.) is squared, the squares are summed, and the square root of the sum is taken to measure similarity. The formula is below:

d(A, B) = √[(A₁ − B₁)² + (A₂ − B₂)² + ⋯ + (Aₙ − Bₙ)²]

• Calculating the Distance Between No.07 and Each Subject

- For the case of the distance between No. 07 and No. 06, we calculate as follows:

d = √[(24 − 25)² + (158 − 161)² + (55 − 53)² + ⋯ + (1 − 1)²] = √(1 + 9 + 4 + 0 + 0 + 1 + 0 + 0 + 0 + 1 + 0.25 + 0) = √16.25 ≒ 4.03

- No. 07 vs. No. 30: d =…≒10.31, No. 07 vs. No. 10: d =…≒14.04, No. 07 vs. No. 04: d =…≒13.97.

• For converting distance to probability: since a smaller distance means a higher similarity, we normalize these scores so that they sum to 100.0% to derive the probabilities. We calculate a similarity score as follows (see Table III):

S = 1 / (1 + d)

P_i = (S_i / S_total) × 100

S_total = 0.198 + 0.088 + 0.067 + 0.067 = 0.420

To normalize the probability:

P_No.06 = (0.198 / 0.420) × 100 = 78.6 (%)

• In conclusion, according to this analysis, Subject No. 07 had the highest probability, 78.6%, of belonging to Group 1 (the group containing No. 06), which is a correct prediction. This method uses statistical techniques (Euclidean distance and normalization) to estimate classification probability, and as more data become available, the classification accuracy can improve further.

Subject No. Age Height (cm) Weight (kg) Skin type Exercise habit Diet score Skincare similarity Allergy Smoking habit Alcohol Sleep duration Cold/Herpes susceptibility
No. 06 25 161 53 1 0 3 1 1 0 3 6.0 1
No. 30 27 161 46 1 0 4 1 1 0 2 7.0 0
No. 10 36 156 50 0 0 6 1 1 0 0 7.0 0
No. 04 36 156 60 0 1 6 1 0 0 2 7.0 0
No. 07 34 158 55 1 0 2 1 1 0 2 5.5 1
Table II. Compiled Data Table that was Output by GPT-4o
Subject No. Distance, d Score, S
No. 06 4.03 0.198
No. 30 10.31 0.088
No. 10 14.04 0.067
No. 04 13.97 0.067
Table III. Dataset of the Distance and Score
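The distance-to-probability pipeline above can be reproduced in a few lines of Python, using the distances from Table III. Note that with these scores the normalized share of No. 06 comes out near 47%, so the 78.6% reported by GPT-4o should be read as the model’s own stated estimate rather than a value that follows mechanically from this formula:

```python
# Distances between Subject No. 07 and each training subject (Table III).
distances = {"No.06": 4.03, "No.30": 10.31, "No.10": 14.04, "No.04": 13.97}

# Similarity score S = 1 / (1 + d): smaller distance -> higher similarity.
scores = {k: 1.0 / (1.0 + d) for k, d in distances.items()}

# Normalize the scores so that the probabilities sum to 100 %.
total = sum(scores.values())
probs = {k: 100.0 * s / total for k, s in scores.items()}

best = max(probs, key=probs.get)
print(best)  # -> No.06 (smallest distance, hence highest probability)
```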

6. Classification based on age and physical information about all 44 subjects:

As the next phase, we conducted an experiment using the complete dataset of all 44 participants. First, we input the data of six participants from each group as training data and allowed the model to learn from it. Then, we input the validation data to assess its predictions and evaluate the results.

• Suggested process for the training and evaluation datasets by ChatGPT-4o:

- Feature Selection: Identify key attributes for classification (e.g. age, skin type, skincare routine, lifestyle habits).

- Similarity Analysis: Compare evaluation subjects with known groups based on selected features.

- Group Prediction: Assign each evaluation subject to a group using a classification model or similarity scoring.

- Probability Estimation: Calculate confidence scores for each prediction.

- Explanation of Calculations: Provide a breakdown of how the predictions were made.

• Predictions for the evaluation subjects were then obtained. Each subject was assigned to a group with a confidence percentage based on the following k-Nearest Neighbors (k-NN) classification model.

The k-NN classifier was trained using the known groups as training data. A “k-Nearest Neighbors (k-NN)” classifier is a machine learning algorithm that classifies new data points by comparing them to the closest data points (neighbors) in a labeled training dataset. It assigns a new data point to the class that is most prevalent among its “k” nearest neighbors; essentially, it predicts the class of a new data point by the majority vote of its closest data points in the training set, where “k” is the number of neighbors considered for the classification.

We calculate the distance between a new data point and all data points in the training set, identify the “k” nearest neighbors based on the calculated distances, and assign the new data point to the class that is most common among those “k” neighbors:

- Important parameter: “k”; “k” represents the number of neighbors to consider when making a prediction. Choosing the optimal “k” value is crucial, as a very low “k” can lead to overfitting, while a very high “k” can result in underfitting.

- Distance metrics: To calculate the distance between data points, common metrics like Euclidean distance, Manhattan distance, or Cosine similarity are often used.

- Advantages: Simple to understand and implement, with no need for complex model training. It can be effective for classification problems with well-separated classes.

- Disadvantages: It can be computationally expensive for large datasets and may not perform well with complex decision boundaries.

- Each evaluation subject was compared to its three closest neighbors in the training data.

• An explanation of how the probability values were calculated: for feature selection and standardization, key attributes such as age, height, weight, skincare habits, and lifestyle factors were chosen. These features were standardized to ensure consistent comparisons across datasets.

• Model Selection, Training and Prediction Process:

- The group label of the majority of the nearest neighbors was assigned to the subject.

- If two groups had equal influence, the model selected one based on the distance metric.

• Confidence Score Calculation:

- The percentage confidence for each prediction was calculated based on neighboring votes.

- If all three nearest neighbors belonged to the same group, the confidence would be 3/3 = 100%; this case did not occur among the evaluation subjects (Table IV shows only 66.7% and 33.3%).

- If the neighbors were split (e.g., two neighbors from one group and one from another), the confidence was 66.7% for the majority group and 33.3% for the minority group.

• Output data:

- For Groups 1–4, we present the evaluation-prediction datasets successively in Table IV.
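The 3-nearest-neighbor prediction and vote-based confidence described above can be sketched as follows; the feature vectors are simplified placeholders (age, height, weight) rather than the study’s standardized features:

```python
import math
from collections import Counter

def knn_predict(train, x, k=3):
    """Return (label, confidence %) for feature vector x.

    Confidence is the vote fraction among the k nearest neighbors,
    e.g., 2 of 3 neighbors -> 66.7 %, 1 of 3 -> 33.3 %.
    """
    nearest = sorted(train, key=lambda pair: math.dist(pair[1], x))[:k]
    votes = Counter(label for label, _ in nearest)
    label, count = votes.most_common(1)[0]
    return label, 100.0 * count / k

# Hypothetical training subjects: (group label, [age, height, weight]).
train = [
    (1, [25, 161, 53]), (2, [27, 161, 46]),
    (3, [36, 156, 50]), (4, [36, 156, 60]),
    (1, [24, 160, 54]), (2, [28, 162, 47]),
]

label, conf = knn_predict(train, [24, 158, 55])
print(label, round(conf, 1))  # -> 1 66.7
```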

True Group Subject No. Predicted Group Prediction Confidence (%)
Group 1 05 Group 3 66.7
29 Group 1 33.3
34 Group 3 66.7
35 Group 1 33.3
43 Group 1 33.3
Group 2 03 Group 1 66.7
32 Group 1 33.3
33 Group 2 33.3
36 Group 2 33.3
41 Group 2 66.7
Group 3 00 Group (?) 66.7
19 Group (?) 33.3
28 Group (?) 66.7
31 Group (?) 66.7
39 Group (?) 66.7
Group 4 23 Group (?) 33.3
25 Group (?) 33.3
38 Group (?) 33.3
40 Group (?) 66.7
42 Group (?) 66.7
Table IV. Output Datasets concerning Predicted Group and Prediction Confidence. (Cells Containing Data for Which GPT-4o’s Predictions were Correct are Highlighted in Light Blue)

Based on these data, we can only conclude that the accuracy was not particularly high. However, considering that the predictions were made solely from information on the participants’ lifestyle habits and skincare routines, without using data on skin moisture content or transepidermal water loss, the observed level of accuracy appears reasonable.

Under this method, since one of four groups is selected, the expected accuracy of a completely random prediction would be 25.0% on average. In the present trial, however, the accuracy was 35.0%, 10.0 percentage points above the random baseline. Therefore, as supplementary data to other measurement methods, this approach may have potential societal utility.

However, we consider that the data from the 44 subjects we recruited were not sufficient for this field, given that some medium- to large-scale research centers collect data from thousands to millions of people.

Regarding the fact that the percentage of correct predictions in Groups 3 and 4 was lower than in Groups 1 and 2, we consider this to be partly due to the low moisture content of the skin of the subjects in those groups: when the skin moisture content is low, the number of water molecules is smaller, which inevitably makes classification more difficult than for groups with high skin moisture content.

In all four groups, it would also have been difficult to determine whether the moisture in the skin was supplied by the subject’s own body or absorbed from the surrounding air. Further in-depth research into these questions is desirable in the future.

Furthermore, researchers from institutions such as Shiseido Company, Limited (Tokyo, Japan) and KOSÉ Corporation (Tokyo, Japan) frequently state in diverse articles and other sources that the accuracy of self-reported skin types is extremely low, although specific probability values are not known. Nevertheless, in this study, such data were experimentally incorporated as reference information, and it is possible that this inclusion had a negative impact on the results.

References

  1. Sufi F. Generative pre-trained transformer (GPT) in research: a systematic review on data augmentation. Information. 2024;15(2):99.
  2. Li P, He Y, Yashar D, Cui W, Ge S, Zhang H, et al. Table-gpt: table fine-tuned gpt for diverse table tasks. Proc ACM Manag Data. 2024;2(3):1–28.
  3. Han C, Kim DW, Kim S, You SC, Park JY, Bae S, et al. Evaluation of GPT-4 for 10-year cardiovascular risk prediction: insights from the UK Biobank and KoGES data. Iscience. 2024;27(2):109022.
  4. Hatakeyama-Sato K, Watanabe S, Yamane N, Igarashi Y, Oyaizu K. Using gpt-4 in parameter selection of polymer informatics: improving predictive accuracy amidst data scarcity and ‘ugly duckling’ dilemma. Digit Discov. 2023;2(5):1548–57.
  5. Li P, He Y, Yashar D, Cui W, Ge S, Zhang H. Table-gpt: table fine-tuned gpt for diverse table tasks. Proc ACM Manag Data. 2024;2(3):1–28.
  6. Solatorio AV. GeoFormer: predicting human mobility using generative pre-trained transformer (GPT). Proceedings of the 1st International Workshop on the Human Mobility Prediction Challenge, pp. 11–5, 2023.
  7. Savelka J, Ashley KD, Gray MA, Westermann H, Xu H. Can GPT-4 support analysis of textual data in tasks requiring highly specialized domain expertise? arXiv preprint arXiv:2306.13906, 2023.
  8. Fink MA, Bischoff A, Fink CA, Moll M, Kroschke J, Dulz L, et al. Potential of ChatGPT and GPT-4 for data mining of free-text CT reports on lung cancer. Radiology. 2023;308(3):e231362.
  9. Xiao Z, Li W, Moon H, Roell GW, Chen Y, Tang YJ. Generative artificial intelligence GPT-4 accelerates knowledge mining and machine learning for synthetic biology. ACS Synth Biol. 2023;12(10):2973–82.
  10. McDermott M, Nestor B, Argaw P, Kohane IS. Event Stream GPT: a data pre-processing and modeling library for generative, pre-trained transformers over continuous-time sequences of complex events. Adv Neural Inf Process Syst. 2023;36:24322–34.
  11. Xie Z, Evangelopoulos X, Omar ÖH, Troisi A, Cooper AI, Chen L. Fine-tuning GPT-3 for machine learning electronic and functional properties of organic molecules. Chem Sci. 2024;15(2):500–10.
  12. Zhu D, Chen J, Shen X, Li X, Elhoseiny M. Minigpt-4: enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592, 2023.
  13. OpenAI. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023b.
  14. Zhu D, Chen J, Shen X, Li X, Elhoseiny M. Minigpt-4: enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592, 2023.
  15. Zhang S, Sun P, Chen S, Xiao M, Shao W, Zhang W, et al. Instruction tuning large language model on region-of-interest. arXiv preprint arXiv:2307.03601, 2023.
  16. Hariharan S, Rejimol Robinson RR, Prasad RR, Thomas C, Balakrishnan N. XAI for intrusion detection system: comparing explanations based on global and local scope. J Comput Virol Hacking Tech. 2022;19(1):1–23.
  17. OpenAI. Chatgpt. 2022. Available from: https://openai.com/blog/chatgpt.
  18. OpenAI. Consistency decoder. 2023a. Available from: https://github.com/openai/consistencydecoder.
  19. OpenAI. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023b.
  20. OpenAI. Chatgpt. 2022. Available from: https://openai.com/blog/chatgpt.
  21. OpenAI. Consistency decoder. 2023a. Available from: https://github.com/openai/consistencydecoder.