SALES TIME SERIES ANALYTICS USING DEEP Q-LEARNING
Keywords: sales, time series, deep Q-learning, reinforcement learning, machine learning
The article describes the use of deep Q-learning models in problems of sales time series analytics. In contrast to supervised machine learning, which is a kind of passive learning that uses historical data, Q-learning is a kind of active learning aimed at maximizing a reward through an optimal sequence of actions. A model-free Q-learning approach to optimal pricing strategies and supply-demand problems is considered in this work. The main idea of the study is to show that, using the deep Q-learning approach in time series analytics, the sequence of actions can be optimized by maximizing the reward function, both when the environment for learning agent interaction can be modeled using a parametric model and when the model is based on historical data. In the price optimization case study, the environment was modeled using the dependence of sales on extra price and randomly simulated demand. In the supply-demand case study, it was proposed to use historical demand time series for environment modeling; agent states were represented by promo actions, previous demand values and weekly seasonality features. The obtained results show that deep Q-learning can optimize the decision-making process for price optimization and supply-demand problems. Environment modeling using parametric models and historical data can be used for the cold start of a learning agent. In the next steps, after the cold start, the trained agent can be used in a real business environment.
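The pricing case study described above can be illustrated with a minimal sketch. All names and parameters below (price grid, demand function, learning rates) are illustrative assumptions, not taken from the paper; for brevity the sketch uses a single-state (bandit-style) tabular Q-update rather than a deep Q-network, which shows the same reward-maximization principle: the agent learns, from noisy simulated demand, which price action maximizes expected revenue.

```python
import random

# Hypothetical discrete price actions (assumption, not from the paper)
PRICES = [1.0, 2.0, 3.0, 4.0, 5.0]

def simulate_revenue(price, rng):
    """Toy environment: demand decreases with price plus random noise,
    mimicking the paper's simulated-demand setup. Reward = revenue."""
    demand = max(0.0, 100.0 - 15.0 * price + rng.gauss(0, 2))
    return price * demand

def train_q(episodes=5000, epsilon=0.2, alpha=0.1, seed=0):
    """Epsilon-greedy Q-learning over the price actions."""
    rng = random.Random(seed)
    q = [0.0] * len(PRICES)
    for _ in range(episodes):
        if rng.random() < epsilon:                    # explore
            a = rng.randrange(len(PRICES))
        else:                                         # exploit best estimate
            a = max(range(len(PRICES)), key=q.__getitem__)
        reward = simulate_revenue(PRICES[a], rng)
        q[a] += alpha * (reward - q[a])               # incremental Q update
    return q

q = train_q()
best_price = PRICES[max(range(len(PRICES)), key=q.__getitem__)]
```

In a deep Q-learning variant, the Q-table would be replaced by a neural network mapping state features (e.g., promo flags, lagged demand, weekly seasonality, as in the supply-demand case study) to action values, trained with experience replay.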
G. E. Box, G. M. Jenkins, G. C. Reinsel, and G. M. Ljung, Time Series Analysis: Forecasting and Control, John Wiley & Sons, 2015.
P. Doganis, A. Alexandridis, P. Patrinos, and H. Sarimveis, “Time series sales forecasting for short shelf-life food products based on artificial neural networks and evolutionary computing,” Journal of Food Engineering, vol. 75, no. 2, pp. 196-204, 2006.
R. J. Hyndman and G. Athanasopoulos, Forecasting: Principles and Practice, OTexts, 2018.
R. S. Tsay, Analysis of Financial Time Series, vol. 543. John Wiley & Sons, 2005.
W. W. Wei, “Time series analysis,” in The Oxford Handbook of Quantitative Methods in Psychology, vol. 2, Oxford University Press, Oxford, UK, 2006.
B. M. Pavlyshenko, “Machine-learning models for sales time series forecasting,” Data, vol. 4, no. 1, paper 15, pp. 1-11, 2019.
B. Pavlyshenko, “Machine learning, linear and Bayesian models for logistic regression in failure detection problems,” Proceedings of the IEEE International Conference on Big Data (Big Data), 2016, pp. 2046-2050.
B. Pavlyshenko, “Using stacking approaches for machine learning models,” Proceedings of the 2018 IEEE Second International Conference on Data Stream Mining & Processing (DSMP), 2018, pp. 255-258.
R. S. Sutton, A. G. Barto, et al., Introduction to Reinforcement Learning, vol. 2, MIT Press, Cambridge, 1998.
V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al., “Human-level control through deep reinforcement learning,” Nature, vol. 518, no. 7540, p. 529, 2015.
V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller, “Playing atari with deep reinforcement learning,” arXiv preprint arXiv:1312.5602, 2013.
R. Rana and F. S. Oliveira, “Real-time dynamic pricing in a non-stationary environment using model-free reinforcement learning,” Omega, vol. 47, pp. 116-126, 2014.
R. Maestre, J. Duque, A. Rubio, and J. Arévalo, “Reinforcement learning for fair dynamic pricing,” Proceedings of the SAI Intelligent Systems Conference, Springer, 2018, pp. 120-135.
D. Vengerov, “A gradient-based reinforcement learning approach to dynamic pricing in partially observable environments,” Technical Report, Sun Microsystems, Inc., 2007.
A. V. den Boer, “Dynamic pricing and learning: historical origins, current research, and new directions,” Surveys in Operations Research and Management Science, vol. 20, no. 1, pp. 1-18, 2015.
C. O. Kim, J. Jun, J. Baek, R. Smith, and Y.-D. Kim, “Adaptive inventory control models for supply chain management,” The International Journal of Advanced Manufacturing Technology, vol. 26, no. 9-10, pp. 1184-1192, 2005.
C. Raju, Y. Narahari, and K. Ravikumar, “Reinforcement learning applications in dynamic pricing of retail markets,” Proceedings of the IEEE International Conference on E-Commerce, CEC 2003, 2003, pp. 339-346.
C. Y. Huang, “Financial trading as a game: A deep reinforcement learning approach,” arXiv preprint arXiv:1807.02787, 2018.
Z. Jiang, D. Xu, and J. Liang, “A deep reinforcement learning framework for the financial portfolio management problem,” arXiv preprint arXiv:1706.10059, 2017.
F. Liu, C. Quek, and G. S. Ng, “Neural network model for time series prediction by reinforcement learning,” Proceedings of the 2005 IEEE International Joint Conference on Neural Networks, 2005, vol. 2, pp. 809-814.
G. Dulac-Arnold, R. Evans, H. van Hasselt, P. Sunehag, T. Lillicrap, J. Hunt, T. Mann, T. Weber, T. Degris, and B. Coppin, “Deep reinforcement learning in large discrete action spaces,” arXiv preprint arXiv:1512.07679, 2015.
L.-J. Lin, “Reinforcement learning for robots using neural networks,” tech. rep., Carnegie-Mellon Univ Pittsburgh PA School of Computer Science, 1993.
“Github Repository,” [Online]. Available at: https://github.com/dennybritz/reinforcement-learning, accessed 5 December 2019.
“Github Repository,” [Online]. Available at: https://github.com/keon/deep-q-learning, accessed 5 December 2019.
“Github Repository,” [Online]. Available at: https://github.com/rlcode/reinforcement-learning, accessed 5 December 2019.
“Rossmann Store Sales,” Kaggle.com. [Online]. Available at: http://www.kaggle.com/c/rossmann-store-sales, accessed 5 December 2019.