Original Article - Open Access.


COMPARAÇÃO DE ARQUITETURAS DEEP LEARNING PARA A GERAÇÃO DE DADOS 3D

A COMPARISON OF DEEP LEARNING ARCHITECTURES FOR THE GENERATION OF 3D DATA

Bonfim, Yasmin da Silva; Santos, Gabriel Sete Ribeiro Lago dos; Cruz, Gustavo Oliveira Ramos; Conterato, Flávio Santos

Abstract:

Given the many Deep Learning architectures for generating artificial images, there is a need to identify which of them best suits each use case. To compare several networks built on the generative architectures Autoencoder, Variational Autoencoder, and Generative Adversarial Network on the 3D MNIST dataset, 12 models with different hyperparameters were created. After training, the models were compared using loss functions that measure the difference between the original and the artificial data. Greater complexity did not translate into better performance, indicating the Autoencoder models as the best cost-benefit trade-off.
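The abstract describes ranking generative models by a loss that measures the gap between original and artificial samples. The paper's actual architectures, metric, and code are not reproduced on this page; the sketch below is a hypothetical illustration of that comparison step only, using voxel-grid mean squared error on synthetic stand-ins for 3D MNIST samples (the 16×16×16 shape and the `reconstruction_mse` helper are assumptions, not the authors' method).

```python
import numpy as np

def reconstruction_mse(original: np.ndarray, generated: np.ndarray) -> float:
    """Mean squared error between an original voxel grid and a generated one."""
    assert original.shape == generated.shape
    return float(np.mean((original - generated) ** 2))

# Hypothetical 16x16x16 binary voxel grids standing in for 3D MNIST digits.
rng = np.random.default_rng(0)
original = (rng.random((16, 16, 16)) > 0.5).astype(np.float32)

# A "perfect" generator reproduces the sample exactly; a weaker one adds noise.
perfect = original.copy()
noisy = np.clip(original + rng.normal(0.0, 0.1, original.shape), 0.0, 1.0)

print(reconstruction_mse(original, perfect))      # 0.0
print(reconstruction_mse(original, noisy) > 0.0)  # True
```

Under this scheme, the model whose outputs yield the lowest loss against held-out originals would be ranked best, which is how a simpler architecture can win despite lower capacity.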

Keywords: Generative Networks; 3D Data; Comparison; Machine Learning.

DOI: 10.5151/siintec2021-208832

How to cite:

Bonfim, Yasmin da Silva; Santos, Gabriel Sete Ribeiro Lago dos; Cruz, Gustavo Oliveira Ramos; Conterato, Flávio Santos; "COMPARAÇÃO DE ARQUITETURAS DEEP LEARNING PARA A GERAÇÃO DE DADOS 3D", p. 561-568. In: VII International Symposium on Innovation and Technology. São Paulo: Blucher, 2021.
ISSN 2357-7592, DOI 10.5151/siintec2021-208832
