May 2014 vol. 1 num. 1 - 10th World Congress on Computational Mechanics
A Feed Forward Neural Network in CUDA for a Financial Application
Feed-forward neural networks (FFNs) are powerful data-modelling tools that have been used in many fields of science. In financial applications in particular, the number of factors affecting the market leads to models with large numbers of input features, hidden neurons and output neurons. Response time is crucial in financial problems, so applications must respond as quickly as possible. Most current applications are implemented as non-parallel software running on serial processors. In this paper, we show how GPU computing allows faster applications to be implemented, exploiting the inherent parallelism of the FFN to improve performance and reduce response time. The problem can be conveniently expressed as matrix operations implemented with the CUBLAS library, which provides highly optimized linear algebra routines that take advantage of the hardware features of the GPU. The algorithm was developed in C++ and CUDA, and all input features were received using the ZeroMQ library, which was also used to publish the output features. ZeroMQ is an abstraction over system sockets that allows chunks of data to be sent efficiently, minimizing overhead and system calls. The algorithm was tested on an NVIDIA M2050 graphics card with an Intel Xeon X5650 2.67 GHz CPU, for a neural network of 1000 input features, 2000 hidden neurons and 500 output neurons. Response times on the order of 900 ms were obtained.
Keywords: high-frequency trading, GPU programming, neural networks,
Bonvallet, Roberto; Maureira, Cristián; Fernández, César; Arce, Paola; Cañete, Alejandro. "A Feed Forward Neural Network in CUDA for a Financial Application", p. 4471-4482. In: Proceedings of the 10th World Congress on Computational Mechanics [= Blucher Mechanical Engineering Proceedings, v. 1, n. 1]. São Paulo: Blucher, 2014.
ISSN 2358-0828, DOI 10.5151/meceng-wccm2012-19918