Design and Development of an Application to Test the Similarity Between Generative AI Images and Handmade Images Using Deep Learning Methods
DOI: https://doi.org/10.21070/ups.3957

Keywords: Deep Learning, Generative AI, Transformers, BEiT, Image Classification

Abstract
This research discusses the development of an application to test the similarity between generative AI images and handmade images using deep learning methods. AI technology has been applied to generative art through deep learning algorithms; however, challenges remain concerning the copyright and originality of generative AI art. This research aims to develop an efficient model for classifying generative AI art and handmade art. The classification model uses a Transformer approach, specifically the BEiT architecture. BEiT shows excellent results in the image classification tests, achieving a high F1 score in every test, which indicates a good balance between precision and recall. It reaches 80% accuracy, compared with previous approaches based on a Convolutional Neural Network (CNN) with the VGG16 model, while the K-Nearest Neighbour (KNN) method achieves approximately 64% accuracy in this study. Overall, the Transformer model shows superior performance compared to both the CNN and KNN methods.
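As a point of reference for how such a classifier can be set up, the sketch below loads a pre-trained BEiT backbone and attaches a two-class head ("handmade" vs. "AI-generated") using the Hugging Face transformers library. This is only a minimal illustration under assumed settings: the checkpoint name, label names, and file path are not taken from the paper, and the new classification head is randomly initialized, so it would still need to be fine-tuned on the study's dataset before its predictions are meaningful.

```python
# Minimal sketch (not the authors' exact pipeline): a BEiT image classifier
# configured for two classes, "handmade" vs. "ai_generated".
# Checkpoint name, labels, and file path are illustrative assumptions.
import torch
from PIL import Image
from transformers import BeitImageProcessor, BeitForImageClassification

checkpoint = "microsoft/beit-base-patch16-224-pt22k-ft22k"  # assumed base model
labels = ["handmade", "ai_generated"]                       # assumed label set

processor = BeitImageProcessor.from_pretrained(checkpoint)
model = BeitForImageClassification.from_pretrained(
    checkpoint,
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={name: i for i, name in enumerate(labels)},
    ignore_mismatched_sizes=True,  # swap the pre-trained head for a 2-class head
)

def classify(path: str) -> str:
    """Return the predicted label for a single image file."""
    image = Image.open(path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return labels[int(logits.argmax(dim=-1))]

# Example usage (hypothetical file path); only meaningful after fine-tuning:
# print(classify("sample_artwork.png"))
```

In practice the same model object would be passed to a training loop (e.g. the Trainer API or plain PyTorch) on the labeled dataset of generative AI and handmade images before evaluation with accuracy and F1 score.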
License
Copyright (c) 2024 UMSIDA Preprints Server
This work is licensed under a Creative Commons Attribution 4.0 International License.