
For learning symbolic hyperparameter defaults, we used a limited set of meta-features and mathematical operators from which to compose the defaults. Given these design choices, we were unable to find suitable symbolic defaults for several algorithms and did not significantly outperform tuned constant defaults for them. Further research should include more dataset properties, though it is not immediately obvious which these should be. It also remains an open question whether for all hyperparameters there even exists a symbolic default that uses only meta-features which can be computed efficiently.
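To make the idea concrete, a symbolic default expresses a hyperparameter value as a formula over dataset meta-features rather than as a constant. The sketch below uses a well-known example, the kernel width of an RBF SVM set to 1/(p · Var(X)), which coincides with scikit-learn's gamma="scale" heuristic; the meta-features and the way the formula is written here are illustrative, not the exact grammar used in Chapter 6.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.svm import SVC

def symbolic_gamma(X: np.ndarray) -> float:
    """Compute gamma from dataset meta-features: 1 / (n_features * Var(X))."""
    n_features = X.shape[1]   # meta-feature: number of features
    variance = X.var()        # meta-feature: overall feature variance
    return 1.0 / (n_features * variance)

X, y = load_breast_cancer(return_X_y=True)
clf = SVC(gamma=symbolic_gamma(X)).fit(X, y)
print(f"symbolic default gamma = {symbolic_gamma(X):.3e}")
```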

There are also limitations imposed by the state of the software. The OpenML platform, and by extension openml-python, OpenML benchmarking suites, and the AutoML benchmark, offers only limited support for settings outside of i.i.d. classification and regression, such as clustering or time series prediction. GAMA allows for modular configuration and isolated development of search algorithms and post-processing, though it does not yet offer the same flexibility in other parts of the AutoML pipeline design. For example, the data sanitation step is fixed and the search space design assumes scikit-learn compatible workflows.

None of these limitations are inherent to the respective designs; they can be overcome with additional engineering effort.

7.3 Future Work

As outlined in the last section, we can overcome some limitations through additional engineering effort. In this section, we focus on interesting directions for future work that cannot be addressed by engineering alone.

7.3.1 Meta-learning for AutoML

In Chapter 6 we presented a method to use meta-learning to find symbolic hyperparameter defaults. How to incorporate these symbolic hyperparameter defaults in AutoML tools is an interesting open research question. Possible applications include transforming the search space, shrinking the search space to speed up the search, or using symbolic hyperparameter defaults to evaluate ML pipeline architecture design. We hope to find symbolic hyperparameter defaults for more algorithms and hyperparameters by extending the set of meta-features and the formulas which may be considered. Moreover, we aim to extend the notion of defaults into sets of defaults, which can serve as complementary starting points for hyperparameter optimization.
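As a rough illustration of the search-space-shrinking idea, the sketch below narrows each numeric hyperparameter range to a window around its symbolic default; the dictionary-based search space, the example values, and the factor of 10 are all hypothetical.

```python
def shrink_search_space(space: dict, symbolic_defaults: dict, factor: float = 10.0) -> dict:
    """Narrow each numeric hyperparameter range to a window around its symbolic default."""
    shrunk = {}
    for name, (low, high) in space.items():
        if name in symbolic_defaults:
            default = symbolic_defaults[name]
            shrunk[name] = (max(low, default / factor), min(high, default * factor))
        else:
            shrunk[name] = (low, high)  # no symbolic default known: keep the full range
    return shrunk

# Hypothetical example: centre the search for an SVM's gamma around its symbolic default.
space = {"gamma": (1e-6, 1e2), "C": (1e-3, 1e4)}
defaults = {"gamma": 3.5e-4}  # e.g. computed from dataset meta-features as sketched earlier
print(shrink_search_space(space, defaults))
```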


Many other approaches to include meta-learning in AutoML methods have already been proposed [82, 88, 144, 149, 279]. Unfortunately, it is unclear how to evaluate AutoML methods which use meta-learning on benchmarking suites in a practical manner. The task on which a method is evaluated should not be included in learning the meta-model used by the method. While some meta-learning methods, such as nearest-neighbor dataset lookups [88], allow for the easy exclusion of specific tasks from the meta-model, this is not the case for meta-models in general. A clean evaluation would then involve training as many meta-models as there are (chosen subsets of) tasks in the benchmarking suite, which may become prohibitively expensive for more complex meta-models.
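A clean protocol would amount to the leave-one-task-out loop sketched below; `train_meta_model`, `run_automl`, and `evaluate` are hypothetical stand-ins for whatever the meta-learning AutoML method provides, and the cost grows with the number of held-out tasks.

```python
def leave_one_task_out(tasks, train_meta_model, run_automl, evaluate):
    """Evaluate a meta-learning AutoML method without leaking the held-out task,
    or any task derived from the same dataset, into its meta-model."""
    scores = {}
    for held_out in tasks:
        # Exclude the held-out task and all variants of the underlying dataset.
        meta_train = [t for t in tasks if t.dataset_id != held_out.dataset_id]
        meta_model = train_meta_model(meta_train)   # potentially very expensive
        model = run_automl(held_out, meta_model)
        scores[held_out.task_id] = evaluate(model, held_out)
    return scores
```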

Additionally, it is not always easy to identify which task is being used in the evaluation. While the specific dataset may be easily identified, all variants derived from the dataset should also be accounted for and excluded from the meta-model. More research is required to address these issues and allow for the correct evaluation of AutoML systems that use meta-learning.

7.3.2 Benchmark Design

While the tools presented with the introduction of OpenML benchmarking suites allow for some automated curation of tasks, the proposed benchmarking suites are still mostly designed by humans. The design process may lead to unnecessarily large benchmarking suites, which is undesirable not only because it wastes resources, but also because the increased computational demand will prohibit some people from using the proposed suites. Some post-hoc analysis methods for benchmarking suites exist [47], but we hope additional techniques will be developed, in particular ones that can already be applied during the design process.

It remains an open question if and when methods may start to overfit to a static benchmarking suite. For this reason, and to keep the benchmarking suites reflective of modern challenges, we propose to periodically update the benchmarking suites (e.g., as done for computer vision research [201]) and invite the community to partake in this process. Developing new methods to analyze whether overfitting on benchmark suites occurs, and how many or which tasks would need to be replaced to alleviate the issue, is interesting future work.


7.3.3 Trust in AutoML

Interpretable [171] and explainable [240] ML has gained attention recently, in part because of new legislation that requires explainability [21], e.g., the GDPR². As this pertains to the final model produced, AutoML can directly benefit from ideas and techniques for general interpretability and explainability in ML, such as training interpretable models to mimic complex ones found by AutoML [3, 133] or post-hoc model-agnostic explanation methods such as LIME [205]. However, AutoML may also be used to generate interpretable models, by using existing AutoML frameworks with an altered search space that produces interpretable models [91], or by using autocompboost [58], which is specifically developed to build interpretable models.
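For instance, mimicking a complex model with an interpretable one can be as simple as fitting a shallow decision tree on the complex model's predictions; in the sketch below the random forest merely stands in for an arbitrary pipeline found by an AutoML system.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, _ = train_test_split(X, y, random_state=0)

# Stand-in for a complex pipeline produced by an AutoML system.
complex_model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

# Fit an interpretable surrogate on the complex model's predictions, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, complex_model.predict(X_train))

# Fidelity: how often the surrogate agrees with the complex model on unseen data.
print("fidelity:", surrogate.score(X_test, complex_model.predict(X_test)))
print(export_text(surrogate))
```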

In AutoML, not only the final model but also how it was found is important for a user's trust [69, 220]. To this end, Moosbauer et al. [173] propose to use adapted partial dependence plots to visualize what the surrogate model learned about the search space. Providing users with generated code that builds the final model also increases trust in the system, because it helps them understand the model that is used and verify whether specific changes affect the results as expected [266].
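In a similar spirit, even a standard partial dependence plot over a surrogate of the search space conveys part of this information; the sketch below fits a random-forest surrogate on synthetic (configuration, validation score) pairs, since the real search log of an AutoML run is assumed rather than shown, and is not the adapted method of Moosbauer et al.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import PartialDependenceDisplay

# Synthetic stand-in for the (configuration, validation score) pairs logged during search.
rng = np.random.default_rng(0)
configs = pd.DataFrame({
    "learning_rate": rng.uniform(1e-3, 0.3, size=200),
    "max_depth": rng.integers(2, 12, size=200),
})
scores = 0.9 - (configs["learning_rate"] - 0.1) ** 2 - 0.01 * (configs["max_depth"] - 6).abs()

# Fit a surrogate of the search space and plot its partial dependence on each hyperparameter.
surrogate = RandomForestRegressor(random_state=0).fit(configs, scores)
PartialDependenceDisplay.from_estimator(surrogate, configs, features=["learning_rate", "max_depth"])
```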

In certain settings it is important that the model follows some notion of fairness [13], e.g., when the model affects humans, it should not discriminate. This can be expressed through metrics that make a distinction between a protected and an unprotected group. Examples include demographic parity [42], which stipulates that the average predictions for the two groups should be equal, and equalized odds [113], which dictates that the false negative and false positive rates should be equal between the groups. However, it should be noted that while different notions of fairness exist, they may not all be satisfied simultaneously [57, 134]. While this is also true for performance metrics, the choice of fairness metric has a significant effect on how the model treats the protected group.
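Written out for a binary classifier, with $\hat{Y}$ the prediction, $Y$ the true label, and $A$ the protected attribute, the two criteria read as follows (standard formulations, not tied to any particular paper):

```latex
\begin{align*}
\text{Demographic parity:} \quad & P(\hat{Y} = 1 \mid A = 0) = P(\hat{Y} = 1 \mid A = 1) \\
\text{Equalized odds:} \quad & P(\hat{Y} = 1 \mid A = 0, Y = y) = P(\hat{Y} = 1 \mid A = 1, Y = y)
  \quad \text{for } y \in \{0, 1\}
\end{align*}
```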

To allow AutoML to find fairer models, it has been treated as a multi-objective optimization problem, optimizing a fairness metric and a performance metric together (e.g., [60, 214, 215]). However, this approach ignores the development of fairness-specific preprocessing, in-processing, and post-processing algorithms (e.g., [44], [19], and [113], respectively), which seems likely to lead to sub-optimal pipelines³.
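A minimal sketch of the multi-objective view, assuming each candidate pipeline has already been scored on a performance metric and a fairness metric (both higher is better); the non-dominated filter below is generic and not tied to any particular AutoML system, and the pipeline names and scores are made up.

```python
from typing import List, Tuple

Candidate = Tuple[str, float, float]  # (pipeline name, accuracy, fairness score)

def pareto_front(candidates: List[Candidate]) -> List[Candidate]:
    """Keep candidates that are not dominated on (accuracy, fairness)."""
    front = []
    for name, acc, fair in candidates:
        dominated = any(
            a >= acc and f >= fair and (a > acc or f > fair)
            for _, a, f in candidates
        )
        if not dominated:
            front.append((name, acc, fair))
    return front

# Hypothetical scores; the fairness score here could be 1 - demographic parity difference.
candidates = [
    ("gradient_boosting", 0.91, 0.78),
    ("logistic_regression", 0.86, 0.95),
    ("random_forest", 0.90, 0.80),
    ("svm", 0.88, 0.79),  # dominated by random_forest
]
print(pareto_front(candidates))
```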

Still, only changing the optimization objective, or even the search space, largely ignores the problems present in other parts of the model creation. Blind optimization without regard for other aspects, such as the data and its collection process or the users ultimately using the model, may only lead to perceived progress [11]. On the one hand, AutoML may exacerbate that problem. If, at times, even ML experts fail to identify biases in their models [96], how will the novice AutoML user pick up on these errors? On the other hand, AutoML may alleviate some of these issues by allowing the domain experts themselves to build models. With a much better understanding of the data, the relevant performance metrics, and the ability to assess model predictions, domain experts using AutoML may be able to deploy better models than an ML expert could.

These two scenarios are not mutually exclusive, and both user groups would benefit from support for fair learning and interpretability in both the AutoML procedure and the model it produces.

² https://gdpr-info.eu/
³ To the best of my knowledge, there is no AutoML system which includes these algorithms in its search space, so there is no evidence that excluding them leads to worse solutions.

Bibliography

[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, et al. “TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems”. In: Proc. of OSDI’16. 2016.

[2] D. W. Aha. “Generalizing from case studies: A case study”. In: Proceedings of the International Conference on Machine Learning (ICML) (1992), pp. 1–10.

[3] Ahmed Alaa and Mihaela van der Schaar. “AutoPrognosis: Automated Clinical Prognostic Modeling via Bayesian Optimization with Structured Kernel Learning”. In: Proceedings of the 35th International Conference on Machine Learning. Ed. by Jennifer Dy and Andreas Krause. Vol. 80. Proceedings of Machine Learning Research. PMLR, July 2018, pp. 139–148. url: https://proceedings.mlr.press/v80/alaa18b.html.

[4] J. Alcala, A. Fernandez, J. Luengo, J. Derrac, S. Garcia, L. Sanchez, and F. Herrera. “Keel datamining software tool: Data set repository, integration of algorithms and experimental analysis framework.” In: Journal of Multiple-Valued Logic and Soft Computing 17.2-3 (2010), pp. 255–287.

[5] Edesio Alcobaça, Felipe Siqueira, Adriano Rivolli, Luís Paulo F Garcia, Jefferson Tales Oliva, André CPLF de Carvalho, et al. “MFE: Towards reproducible meta-feature extraction.” In: J. Mach. Learn. Res. 21 (2020), pp. 111–1.

[6] Marie Anastacio, Chuan Luo, and Holger Hoos. “Exploitation of default parameter values in automated algorithm configuration”. In: Workshop Data Science meets Optimisation (DSO), IJCAI. 2019.


[7] Noor Awad, Neeratyoy Mallik, and Frank Hutter. “DEHB: Evolutionary Hyperband for Scalable, Robust and Efficient Hyperparameter Optimization”. In: arXiv preprint arXiv:2105.09821 (2021).

[8] Claudine Badue, Rânik Guidolini, Raphael Vivacqua Carneiro, Pedro Azevedo, Vinicius B Cardoso, Avelino Forechi, Luan Jesus, Rodrigo Berriel, Thiago M Paixao, Filipe Mutz, et al. “Self-driving cars: A survey”. In: Expert Systems with Applications 165 (2021), p. 113816.

[9] Adithya Balaji and Alexander Allen. “Benchmarking Automatic Machine Learning Frameworks”. In: (Aug. 2018). arXiv: 1808.06492 [cs.LG].

[10] Wolfgang Banzhaf, Peter Nordin, Robert E Keller, and Frank D Francone. Genetic programming: an introduction: on the automatic evolution of computer programs and its applications. Morgan Kaufmann Publishers Inc., 1998.

[11] Michelle Bao, Angela Zhou, Samantha Zottola, Brian Brubach, Sarah Desmarais, Aaron Horowitz, Kristian Lum, and Suresh Venkatasubramanian. “It’s COMPASlicated: The Messy Relationship between RAI Datasets and Algorithmic Fairness Benchmarks”. In: CoRR abs/2106.05498 (2021). arXiv: 2106.05498. url: https://arxiv.org/abs/2106.05498.

[12] Rémi Bardenet, Mátyás Brendel, Balázs Kégl, and Michèle Sebag. “Collaborative Hyperparameter Tuning”. In: Proceedings of the 30th International Conference on International Conference on Machine Learning - Volume 28. ICML’13. Atlanta, GA, USA: JMLR.org, 2013, pp. II-199–II-207. url: http://dl.acm.org/citation.cfm?id=3042817.3042916.

[13] Solon Barocas, Moritz Hardt, and Arvind Narayanan. “Fairness in machine learning”. In: NIPS tutorial 1 (2017), p. 2017.

[14] Hilan Bensusan and Alexandros Kalousis. “Estimating the predictive accuracy of a classifier”. In: European Conference on Machine Learning. Springer. 2001, pp. 25–36.

[15] J. Bergstra, N. Pinto, and D.D. Cox. “SkData: data sets and algorithm evaluation protocols in Python”. In: Computational Science & Discovery 8.1 (2015).

[16] James Bergstra, Rémi Bardenet, B Kégl, and Y Bengio. “Implementations of algorithms for hyper-parameter optimization”. In: NIPS Workshop on Bayesian optimization. 2011, p. 29.

[17] James Bergstra and Yoshua Bengio. “Random search for hyper-parameter optimization.” In: Journal of machine learning research 13.2 (2012).


[18] James Bergstra, Daniel Yamins, and David Cox. “Making a science of model search: Hyperparameter optimization in hundreds of dimensions for vision architectures”. In: International conference on machine learning. PMLR. 2013, pp. 115–123.

[19] Richard Berk, Hoda Heidari, Shahin Jabbari, Matthew Joseph, Michael Kearns, Jamie Morgenstern, Seth Neel, and Aaron Roth. “A convex framework for fair regression”. In: arXiv preprint arXiv:1706.02409 (2017).

[20] Hans-Georg Beyer and Hans-Paul Schwefel. “Evolution strategies–a comprehensive introduction”. In: Natural computing 1.1 (2002), pp. 3–52.

[21] Adrien Bibal, Michael Lognoul, Alexandre De Streel, and Benoît Frénay. “Legal requirements on explainability in machine learning”. In: Artificial Intelligence and Law 29.2 (2021), pp. 149–169.

[22] Aurélien Bibaut, Antoine Chambaz, Maria Dimakopoulou, Nathan Kallus, and Mark van der Laan. “Post-Contextual-Bandit Inference”. In: arXiv:2106.00418 [stat.ML] (2021).

[23] Aurélien Bibaut, Antoine Chambaz, Maria Dimakopoulou, Nathan Kallus, and Mark van der Laan. “Risk Minimization from Adaptively Collected Data: Guarantees for Supervised and Policy Learning”. In: arXiv:2106.01723 [stat.ML] (2021).

[24] Albert Bifet and Ricard Gavalda. “Adaptive learning from evolving data streams”. In: International Symposium on Intelligent Data Analysis. Springer. 2009, pp. 249–260.

[25] Mauro Birattari, Thomas Stützle, Luis Paquete, Klaus Varrentrapp, et al. “A Racing Algorithm for Configuring Metaheuristics.” In: GECCO. Vol. 2. 2002.

[26] B. Bischl, M. Lang, L. Kotthoff, J. Schiffner, J. Richter, E. Studerus, G. Casalicchio, and Z. M. Jones. “mlr: Machine learning in R”. In: Journal of Machine Learning Research 17.170 (2016).

[27] Bernd Bischl, Martin Binder, Michel Lang, Tobias Pielok, Jakob Richter, Stefan Coors, Janek Thomas, Theresa Ullmann, Marc Becker, Anne-Laure Boulesteix, Difan Deng, and Marius Lindauer. “Hyperparameter Optimization: Foundations, Algorithms, Best Practices and Open Challenges”. In: (July 2021). arXiv: 2107.05847 [stat.ML].


[28] Bernd Bischl, Giuseppe Casalicchio, Matthias Feurer, Pieter Gijsbers, Frank Hutter, Michel Lang, Rafael Gomes Mantovani, Jan N van Rijn, and Joaquin Vanschoren. “OpenML Benchmarking Suites”. In: Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2). 2021.

[29] Bernd Bischl, Giuseppe Casalicchio, Matthias Feurer, Frank Hutter, Michel Lang, Rafael G. Mantovani, Jan N. van Rijn, and Joaquin Vanschoren. “OpenML Benchmarking Suites”. In: (Aug. 2017). arXiv: 1708.03731 [stat.ML].

[30] Bernd Bischl, Pascal Kerschke, Lars Kotthoff, Marius Lindauer, Yuri Malitsky, Alexandre Fréchette, Holger Hoos, Frank Hutter, Kevin Leyton-Brown, Kevin Tierney, and Joaquin Vanschoren. “ASlib: A benchmark library for algorithm selection”. In: Artificial Intelligence 237 (Aug. 2016), pp. 41–58. issn: 0004-3702. doi: 10.1016/j.artint.2016.04.003. url: https://www.sciencedirect.com/science/article/pii/S0004370216300388 (visited on 10/21/2021).

[31] Jesús Bobadilla, Fernando Ortega, Antonio Hernando, and Abraham Gutiérrez. “Recommender systems survey”. In: Knowledge-based systems 46 (2013), pp. 109–132.

[32] Bernhard E Boser, Isabelle M Guyon, and Vladimir N Vapnik. “A training algorithm for optimal margin classifiers”. In: Proceedings of the fifth annual workshop on Computational learning theory. 1992, pp. 144–152.

[33] Pavel Brazdil, João Gama, and Bob Henery. “Characterizing the applicability of classification algorithms using meta-level learning”. In: European conference on machine learning. Springer. 1994, pp. 83–102.

[34] Pavel B Brazdil and Carlos Soares. “A comparison of ranking methods for classification algorithm selection”. In: European conference on machine learning. Springer. 2000, pp. 63–75.

[35] Pavel B Brazdil, Carlos Soares, and Joaquim Pinto Da Costa. “Ranking learning algorithms: Using IBL and meta-learning on accuracy and time results”. In: Machine Learning 50.3 (2003), pp. 251–277.

[36] L Breiman, JH Friedman, R Olshen, and CJ Stone. “Classification and Regression Trees”. In: (1984).

[37] Leo Breiman. “Random Forests”. In: Mach. Learn. 45.1 (Oct. 2001), pp. 5–32. issn: 0885-6125. doi: 10.1023/A:1010933404324. url: https://doi.org/10.1023/A:1010933404324.


[38] Leo Breiman and Adele Cutler. Random Forests Manual. 2020. url: https://www.stat.berkeley.edu/~breiman/RandomForests/cc_home.htm (visited on 05/01/2020).

[39] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. “OpenAI Gym”. In: arXiv:1606.01540 [cs.LG] (2016).

[40] L. Buitinck, G. Louppe, M. Blondel, F. Pedregosa, A. Müller, O. Grisel, V. Niculae, P. Prettenhofer, A. Gramfort, J. Grobler, R. Layton, J. Vanderplas, A. Joly, B. Holt, and G. Varoquaux. “API design for machine learning software: experiences from the scikit-learn project”. In: ECML PKDD Workshop: Languages for Data Mining and Machine Learning. 2013, pp. 108–122.

[41] Richard H Byrd, Peihuang Lu, Jorge Nocedal, and Ciyou Zhu. “A limited memory algorithm for bound constrained optimization”. In: SIAM Journal on scientific computing 16.5 (1995), pp. 1190–1208.

[42] Toon Calders and Sicco Verwer. “Three naive Bayes approaches for discrimination-free classification”. In: Data mining and knowledge discovery 21.2 (2010), pp. 277–292.

[43] Tadeusz Caliński and Jerzy Harabasz. “A dendrite method for cluster analysis”. In: Communications in Statistics-theory and Methods 3.1 (1974), pp. 1–27.

[44] Flavio P Calmon, Dennis Wei, Bhanukiran Vinzamuri, Karthikeyan Natesan Ramamurthy, and Kush R Varshney. “Optimized pre-processing for discrimination prevention”. In: Proceedings of the 31st International Conference on Neural Information Processing Systems. 2017, pp. 3995–4004.

[45] Israel Campero Jurado and Joaquin Vanschoren. “Multi-fidelity optimization method with Asynchronous Generalized Island Model for AutoML”. In: (to appear) Proceedings of the Genetic and Evolutionary Computation Conference Companion (July 2022).

[46] B. Caputo, K. Sim, F. Furesjo, and A. Smola. “Appearance-based Object Recognition using SVMs: Which Kernel Should I Use?” In: Proceedings of NIPS workshop on Statistical methods for computational experiments in visual processing and computer vision, Whistler. 2002, pp. 1–10.

[47] Lucas FF Cardoso, Vitor CA Santos, Regiane S Kawasaki Francês, Ricardo BC Prudêncio, and Ronnie CO Alves. “Data vs classifiers, who wins?” In: arXiv:2107.07451 [cs.LG] (2021).


[48] Rich Caruana, Art Munson, and Alexandru Niculescu-Mizil. “Getting the most out of ensemble selection”. In: Sixth International Conference on Data Mining (ICDM’06). IEEE. 2006, pp. 828–833.

[49] Rich Caruana, Alexandru Niculescu-Mizil, Geoff Crew, and Alex Ksikes. “Ensemble selection from libraries of models”. In: Proceedings of the twenty-first international conference on Machine learning. 2004, p. 18.

[50] Giuseppe Casalicchio, Jakob Bossek, Michel Lang, Dominik Kirchhoff, Pascal Kerschke, Benjamin Hofner, Heidi Seibold, Joaquin Vanschoren, and Bernd Bischl. “OpenML: An R package to connect to the machine learning platform OpenML”. In: Computational Statistics 34 (2019), pp. 977–991. issn: 0943-4062. doi: 10.1007/s00180-017-0742-2.

[51] Bilge Celik, Prabhant Singh, and Joaquin Vanschoren. Online AutoML: An adaptive AutoML framework for online learning. 2022. arXiv: 2201.09750 [cs.LG].

[52] Bilge Celik and Joaquin Vanschoren. “Adaptation strategies for automated machine learning on evolving data”. In: IEEE Transactions on Pattern Analysis and Machine Intelligence (2021).

[53] C. C. Chang and C. J. Lin. “LIBSVM: A library for support vector machines”. In: ACM Transactions on Intelligent Systems and Technology (TIST) 2.3 (2011), p. 27.

[54] Tianqi Chen and Carlos Guestrin. “XGBoost: A Scalable Tree Boosting System”. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. KDD ’16. San Francisco, California, USA: ACM, 2016, pp. 785–794. isbn: 978-1-4503-4232-2. doi: 10.1145/2939672.2939785. url: http://doi.acm.org/10.1145/2939672.2939785.

[55] Tianqi Chen, Tong He, Michael Benesty, Vadim Khotilovich, Yuan Tang, Hyunsu Cho, et al. “Xgboost: extreme gradient boosting”. In: R package version 0.4-2 1.4 (2015), pp. 1–4.

[56] Y. Chen, E. Keogh, B. Hu, N. Begum, A. Bagnall, A. Mueen, and G. Batista. The UCR Time Series Classification Archive. www.cs.ucr.edu/~eamonn/time_series_data/. July 2015.

[57] Alexandra Chouldechova. “Fair prediction with disparate impact: A study of bias in recidivism prediction instruments”. In: Big data 5.2 (2017), pp. 153–163.


[58] Stefan Coors, Daniel Schalk, Bernd Bischl, and David Rügamer. “Automatic Componentwise Boosting: An Interpretable AutoML System”. In: arXiv preprint arXiv:2109.05583 (2021).

[59] Corinna Cortes and Vladimir Vapnik. “Support-vector networks”. In: Machine learning 20.3 (1995), pp. 273–297.

[60] André F Cruz, Pedro Saleiro, Catarina Belém, Carlos Soares, and Pedro Bizarro. “A Bandit-Based Algorithm for Fairness-Aware Hyperparameter Optimization”. In: arXiv preprint arXiv:2010.03665 (2020).

[61] Casey Davis and Christophe Giraud-Carrier. “Annotative experts for hyperparameter selection”. In: AutoML Workshop at ICML. 2018.

[62] George De Ath, Richard M Everson, Alma AM Rahat, and Jonathan E Fieldsend. “Greed is good: Exploration and exploitation trade-offs in Bayesian optimisation”. In: ACM Transactions on Evolutionary Learning and Optimization 1.1 (2021), pp. 1–22.

[63] Gwendoline De Bie, Herilalaina Rakotoarison, Gabriel Peyré, and Michèle Sebag. “Distribution-Based Invariant Deep Networks for Learning Meta-Features”. In: arXiv:2006.13708 [stat.ML] (2020).

[64] Kalyanmoy Deb, Amrit Pratap, Sameer Agarwal, and TAMT Meyarivan. “A fast and elitist multiobjective genetic algorithm: NSGA-II”. In: IEEE transactions on evolutionary computation 6.2 (2002), pp. 182–197.

[65] J. Demšar. “Statistical Comparisons of Classifiers over Multiple Data Sets”. In: The Journal of Machine Learning Research 7 (2006), pp. 1–30.

[66] D. Dheeru and E. Karra Taniskidou. UCI Machine Learning Repository. 2017. url: http://archive.ics.uci.edu/ml.

[67] Elizabeth Ditton, Anne Swinbourne, Trina Myers, and Mitchell Scovell. “Applying Semi-Automated Hyperparameter Tuning for Clustering Algorithms”. In: arXiv preprint arXiv:2108.11053 (2021).

[68] Iddo Drori, Yamuna Krishnamurthy, Remi Rampin, Raoni Lourenço, Jorge Ono, Kyunghyun Cho, Claudio Silva, and Juliana Freire. “AlphaD3M: Machine learning pipeline synthesis”. In: 5th ICML Workshop on Automated Machine Learning (AutoML). 2018.


[69] Jaimie Drozdal, Justin Weisz, Dakuo Wang, Gaurav Dass, Bingsheng Yao, Changruo Zhao, Michael Muller, Lin Ju, and Hui Su. “Trust in AutoML: Exploring Information Needs for Establishing Trust in Automated Machine Learning Systems”. In: Proceedings of the 25th International Conference on Intelligent User Interfaces. IUI ’20. Cagliari, Italy: Association for Computing Machinery, 2020, pp. 297–307. isbn: 9781450371186. doi: 10.1145/3377325.3377501. url: https://doi.org/10.1145/3377325.3377501.

[70] Russell Eberhart and James Kennedy. “Particle swarm optimization”. In: Proceedings of the IEEE international conference on neural networks. Vol. 4. Citeseer. 1995, pp. 1942–1948.

[71] Katharina Eggensperger, Matthias Feurer, Frank Hutter, James Bergstra,
