1.
Barceló P.♦, Duarte M.♦, Rojas C.♦, Steifer T., No Agreement Without Loss: Learning and Social Choice in Peer Review,
ECAI 2023, 26th European Conference on Artificial Intelligence, 2023-09-30/10-04, Kraków (PL), DOI: 10.3233/FAIA230270, pp. 190-197, 2023
Abstract: In peer review systems, reviewers are often asked to evaluate various features of submissions, such as technical quality or novelty. A score is given to each of the predefined features, and based on these the reviewer has to provide an overall quantitative recommendation. It may be assumed that each reviewer has her own mapping from the set of features to a recommendation, and that different reviewers have different mappings in mind. This introduces an element of arbitrariness known as commensuration bias. In this paper we discuss a framework, introduced by Noothigattu, Shah and Procaccia, and later applied by the organizers of the AAAI 2022 conference. Noothigattu, Shah and Procaccia proposed to aggregate reviewers' mappings by minimizing certain loss functions, and studied axiomatic properties of this approach in the sense of social choice theory. We challenge several of the results and assumptions used in their work and report a number of negative results. On the one hand, we study a trade-off between some of the proposed axioms and the ability of the method to properly capture agreements of the majority of reviewers. On the other hand, we show that dropping a certain unrealistic assumption has dramatic effects, including making the method discontinuous.
Affiliations: Barceló P. - other affiliation; Duarte M. - other affiliation; Rojas C. - other affiliation; Steifer T. - IPPT PAN
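To give a flavour of the loss-minimization aggregation discussed in this entry, here is a minimal illustrative sketch, not the paper's exact method: assume each reviewer's mapping is linear in the feature scores, and aggregate by picking the mapping that minimizes total L1 disagreement with the reviewers' weight vectors, which reduces to the coordinate-wise median. The reviewer weights below are invented for illustration.

```python
from statistics import median

def aggregate_l1(weights):
    """Aggregate per-reviewer linear mappings under L1 loss.

    weights: list of equal-length weight vectors, one per reviewer;
    reviewer i scores features x as sum_j weights[i][j] * x[j].
    Minimizing the summed L1 distance to all reviewers' vectors is
    achieved by the coordinate-wise median.
    """
    return [median(col) for col in zip(*weights)]

# Hypothetical reviewers weighting (technical quality, novelty):
reviewers = [
    [0.7, 0.3],  # emphasizes technical quality
    [0.5, 0.5],  # balanced
    [0.2, 0.8],  # emphasizes novelty
]
w_star = aggregate_l1(reviewers)  # -> [0.5, 0.5]
```

Under an L2 loss the aggregate would instead be the coordinate-wise mean; the choice of loss is exactly where the axiomatic trade-offs studied in the paper arise.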
2.
Delle Rose V.♦, Kozachinskiy A.♦, Rojas C.♦, Steifer T., Find a witness or shatter: the landscape of computable PAC learning,
COLT 2023, The Thirty-Sixth Annual Conference on Learning Theory, 2023-07-12/07-15, Bangalore (IN), No. 195, pp. 1-14, 2023
Abstract: This paper contributes to the study of CPAC learnability, a computable version of PAC learning, by solving three open questions from recent papers. Firstly, we prove that every improperly CPAC learnable class is contained in a class which is properly CPAC learnable with polynomial sample complexity. This confirms a conjecture by Agarwal et al. (COLT 2021). Secondly, we show that there exists a decidable class of hypotheses which is properly CPAC learnable, but only with uncomputably fast-growing sample complexity. This solves a question from Sterkenburg (COLT 2022). Finally, we construct a decidable class of finite Littlestone dimension which is not improperly CPAC learnable, strengthening a recent result of Sterkenburg (2022) and answering a question posed by Hasrati and Ben-David (ALT 2023). Together with previous work, our results provide a complete landscape for the learnability problem in the CPAC setting.
Keywords: PAC learnability, CPAC learnability, VC dimension, Littlestone dimension, computability, foundations of machine learning
Affiliations: Delle Rose V. - University of Siena (IT); Kozachinskiy A. - other affiliation; Rojas C. - other affiliation; Steifer T. - IPPT PAN
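For background on the sample-complexity statements in this entry (standard PAC theory, not a result of the paper): in the realizable PAC setting, a class $\mathcal{H}$ of finite VC dimension $d$ is learnable with sample complexity

\[
m_{\mathcal{H}}(\varepsilon,\delta) = O\!\left(\frac{d + \log(1/\delta)}{\varepsilon}\right),
\]

where $\varepsilon$ is the accuracy and $\delta$ the confidence parameter. The CPAC setting additionally requires the learner to be a computable function of the sample; the second result above shows that even then, a computable bound of this kind on the sample complexity need not exist.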