How to cite this paper
Seifi, F., & Niaki, S. T. A. (2023). Extending the hypergradient descent technique to reduce the time of optimal solution achieved in hyperparameter optimization algorithms. International Journal of Industrial Engineering Computations, 14(3), 501-510.
References
Baydin, A. G., Cornish, R., Rubio, D. M., Schmidt, M., & Wood, F. (2018). Online learning rate adaptation with hypergradient descent. Paper presented at the ICLR 2018 Conference.
Bengio, Y. (2012). Practical recommendations for gradient-based training of deep architectures. In Neural networks: Tricks of the trade (pp. 437-478): Springer.
Bergstra, J., Bardenet, R., Bengio, Y., & Kégl, B. (2011). Algorithms for hyper-parameter optimization. Advances in neural information processing systems, 24.
Bergstra, J., & Bengio, Y. (2012). Random search for hyper-parameter optimization. Journal of Machine Learning Research, 13(2).
Chadha, A., & Kaushik, B. (2022). A Hybrid Deep Learning Model Using Grid Search and Cross-Validation for Effective Classification and Prediction of Suicidal Ideation from Social Network Data. New Generation Computing, 40(4), 889-914.
DeCastro-García, N., Muñoz Castañeda, Á. L., Escudero García, D., & Carriegos, M. V. (2019). Effect of the Sampling of a Dataset in the Hyperparameter Optimization Phase over the Efficiency of a Machine Learning Algorithm. Complexity, 2019.
Falkner, S., Klein, A., & Hutter, F. (2018). BOHB: Robust and efficient hyperparameter optimization at scale. Paper presented at the International Conference on Machine Learning.
Feurer, M., & Hutter, F. (2019). Hyperparameter optimization. In Automated machine learning (pp. 3-33): Springer, Cham.
Hutter, F., Hoos, H. H., & Leyton-Brown, K. (2011). Sequential model-based optimization for general algorithm configuration. Paper presented at the International conference on learning and intelligent optimization.
Hutter, F., Kotthoff, L., & Vanschoren, J. (2019). Automated machine learning: methods, systems, challenges: Springer Nature.
Ibad, T., Abdulkadir, S. J., Aziz, N., Ragab, M. G., & Al-Tashi, Q. (2022). Hyperparameter optimization of evolving spiking neural network for time-series classification. New Generation Computing, 40(1), 377-397.
Igel, C. (2005). Multi-objective model selection for support vector machines. Paper presented at the International conference on evolutionary multi-criterion optimization.
Injadat, M., Salo, F., Nassif, A. B., Essex, A., & Shami, A. (2018). Bayesian optimization with machine learning algorithms towards anomaly detection. Paper presented at the 2018 IEEE global communications conference (GLOBECOM).
Jaderberg, M., Dalibard, V., Osindero, S., Czarnecki, W. M., Donahue, J., Razavi, A., . . . Simonyan, K. (2017). Population based training of neural networks. arXiv preprint arXiv:1711.09846.
Jomaa, H. S., Grabocka, J., & Schmidt-Thieme, L. (2019). Hyp-RL: Hyperparameter optimization by reinforcement learning. arXiv preprint arXiv:1906.11527.
Karnin, Z., Koren, T., & Somekh, O. (2013). Almost optimal exploration in multi-armed bandits. Paper presented at the International Conference on Machine Learning.
King, R. D., Feng, C., & Sutherland, A. (1995). Statlog: comparison of classification algorithms on large real-world problems. Applied Artificial Intelligence: An International Journal, 9(3), 289-333.
Kohavi, R., & John, G. H. (1995). Automatic parameter selection by minimizing estimated error. In Machine Learning Proceedings 1995 (pp. 304-312): Elsevier.
Krizhevsky, A., & Hinton, G. (2009). Learning multiple layers of features from tiny images. Technical report, University of Toronto.
Kumar, K., & Haider, M. T. U. (2021). Enhanced prediction of intra-day stock market using metaheuristic optimization on RNN–LSTM network. New Generation Computing, 39, 231-272.
Li, L., Jamieson, K., DeSalvo, G., Rostamizadeh, A., & Talwalkar, A. (2017). Hyperband: A novel bandit-based approach to hyperparameter optimization. The Journal of Machine Learning Research, 18(1), 6765-6816.
Liu, X., Wu, J., & Chen, S. (2022). A Context-Based Meta-Reinforcement Learning Approach to Efficient Hyperparameter Optimization. Neurocomputing.
Michie, D., Spiegelhalter, D. J., & Taylor, C. C. (1994). Machine learning, neural and statistical classification. Ellis Horwood.
Parker-Holder, J., Nguyen, V., & Roberts, S. (2020). Provably efficient online hyperparameter optimization with population-based bandits. Advances in neural information processing systems, 33, 17200-17211.
Seeger, M. (2004). Gaussian processes for machine learning. International Journal of Neural Systems, 14(02), 69-106.
Seifi, F., Azizi, M. J., & Niaki, S. T. A. (2021). A data-driven robust optimization algorithm for black-box cases: An application to hyper-parameter optimization of machine learning algorithms. Computers & Industrial Engineering, 160, 107581.
Sharma, N., Dev, J., Mangla, M., Wadhwa, V. M., Mohanty, S. N., & Kakkar, D. (2021). A heterogeneous ensemble forecasting model for disease prediction. New Generation Computing, 1-15.
Snoek, J., Larochelle, H., & Adams, R. P. (2012). Practical Bayesian optimization of machine learning algorithms. Advances in neural information processing systems, 25.
Wu, J., Chen, S., & Liu, X. (2020). Efficient hyperparameter optimization through model-based reinforcement learning. Neurocomputing, 409, 381-393.
Yang, L., & Shami, A. (2020). On hyperparameter optimization of machine learning algorithms: Theory and practice. Neurocomputing, 415, 295-316.
Yao, C., Cai, D., Bu, J., & Chen, G. (2017). Pre-training the deep generative models with adaptive hyperparameter optimization. Neurocomputing, 247, 144-155.
Zöller, M.-A., & Huber, M. F. (2021). Benchmark and survey of automated machine learning frameworks. Journal of Artificial Intelligence Research, 70, 409-472.