Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Benjamin Fish is active.

Publication


Featured research published by Benjamin Fish.


International Conference on Acoustics, Speech, and Signal Processing | 2012

Feature selection based on mutual information for human activity recognition

Benjamin Fish; Ammar Khan; Nabil Hajj Chehade; Greg Pottie

In this work, we consider the problem of classifying 14 physical activities using a body sensor network (BSN) consisting of 14 tri-axial accelerometers. We use a tree-based classifier and develop a feature selection algorithm based on mutual information to find the relevant features at every internal node of the tree. We evaluate our algorithm on 31 features per accelerometer (434 in total) and present results on 8 subjects, achieving 96% average accuracy.
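
The profile does not include code for this paper; the following is only a rough sketch of the general idea (a mutual-information feature ranking feeding a tree classifier, on placeholder data rather than the BSN recordings, and not the authors' per-node selection algorithm):

    # Illustrative sketch only: rank features by mutual information with the
    # activity label, keep the top-scoring ones, and train a tree classifier.
    # Data shapes mirror the setup above (434 features, 14 classes), but the
    # values are random placeholders, not the BSN dataset.
    import numpy as np
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.pipeline import make_pipeline
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 434))    # 14 accelerometers x 31 features each
    y = rng.integers(0, 14, size=200)  # 14 activity classes

    model = make_pipeline(
        SelectKBest(score_func=mutual_info_classif, k=50),  # keep the 50 most informative features
        DecisionTreeClassifier(max_depth=8, random_state=0),
    )
    model.fit(X, y)
    print("training accuracy:", model.score(X, y))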


SIAM International Conference on Data Mining | 2016

A Confidence-Based Approach for Balancing Fairness and Accuracy

Benjamin Fish; Jeremy Kun; Ádám Dániel Lelkes

We study three classical machine learning algorithms in the context of algorithmic fairness: adaptive boosting, support vector machines, and logistic regression. Our goal is to maintain the high accuracy of these learning algorithms while reducing the degree to which they discriminate against individuals because of their membership in a protected group. Our first contribution is a method for achieving fairness by shifting the decision boundary for the protected group. The method is based on the theory of margins for boosting. Our method performs comparably to or outperforms previous algorithms in the fairness literature in terms of accuracy and low discrimination, while simultaneously allowing for a fast and transparent quantification of the trade-off between bias and error. Our second contribution addresses the shortcomings of the bias-error trade-off studied in most of the algorithmic fairness literature. We demonstrate that even hopelessly naive modifications of a biased algorithm, which cannot reasonably be said to be fair, can still achieve low bias and high accuracy. To help distinguish between these naive algorithms and more sensible ones, we propose a new measure of fairness, called resilience to random bias (RRB). We demonstrate that RRB distinguishes well between our naive and sensible fairness algorithms. RRB, together with bias and accuracy, provides a more complete picture of the fairness of an algorithm.
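
As a loose illustration of the boundary-shifting idea described in the abstract (synthetic data, a plain logistic-regression scorer, and a simple threshold sweep; the paper's actual method is built on boosting margins and is not reproduced here):

    # Score examples with an ordinary classifier, then sweep the decision
    # threshold for the protected group and keep the threshold that minimizes
    # the gap in positive prediction rates between the two groups.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 5))
    group = rng.integers(0, 2, size=500)  # 1 = protected group (synthetic attribute)
    y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=500) > 0).astype(int)

    scores = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]

    def positive_rate_gap(threshold_protected, threshold_other=0.5):
        pred = np.where(group == 1, scores >= threshold_protected, scores >= threshold_other)
        return abs(pred[group == 1].mean() - pred[group == 0].mean())

    best = min(np.linspace(0.1, 0.9, 81), key=positive_rate_gap)
    print("chosen threshold for protected group:", round(best, 2))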


European Conference on Machine Learning | 2015

Handling oversampling in dynamic networks using link prediction

Benjamin Fish; Rajmonda S. Caceres

Oversampling is a common characteristic of data representing dynamic networks. It introduces noise into representations of dynamic networks, but there has been little work so far to compensate for it. Oversampling can affect the quality of many important algorithmic problems on dynamic networks, including link prediction. Link prediction seeks to predict edges that will be added to the network given previous snapshots. We show that not only does oversampling affect the quality of link prediction, but that we can use link prediction to recover from the effects of oversampling. We also introduce a novel generative model of noise in dynamic networks that represents oversampling. We demonstrate the results of our approach on both synthetic and real-world data.
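
For readers unfamiliar with link prediction, a minimal sketch of the prediction step on a single snapshot, using a standard similarity score (the Jaccard coefficient) rather than the paper's oversampling-aware approach, might look like this:

    # Score currently-absent edges of one snapshot; higher scores suggest node
    # pairs more likely to be connected in a neighbouring snapshot. This shows
    # only the basic prediction step, not the paper's noise model or recovery
    # procedure.
    import networkx as nx

    snapshot = nx.erdos_renyi_graph(50, 0.1, seed=0)  # placeholder snapshot of a dynamic network

    scored = sorted(nx.jaccard_coefficient(snapshot), key=lambda t: t[2], reverse=True)
    for u, v, score in scored[:5]:
        print(f"predicted edge ({u}, {v}) with score {score:.3f}")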


International Symposium on Distributed Computing | 2015

On the Computational Complexity of MapReduce

Benjamin Fish; Jeremy Kun; Ádám Dániel Lelkes; Lev Reyzin; György Turán

In this paper we study the MapReduce class MRC defined by Karloff et al., which is a formal complexity-theoretic model of MapReduce. We show that constant-round MRC computations can decide regular languages and simulate sublogarithmic space-bounded Turing machines. In addition, we prove hierarchy theorems for MRC under certain complexity-theoretic assumptions. These theorems show that sufficiently increasing the number of rounds or the amount of time per processor strictly increases the computational power of MRC. Our work lays the foundation for further analysis relating MapReduce to established complexity classes. Our results also hold for Valiant's BSP model of parallel computation and the MPC model of Beame et al.
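
As a reminder of the model being analyzed, the MRC^i resource constraints of Karloff et al. are, roughly (a paraphrase; see the original paper for the precise definition): on an input of size $n$ and for some fixed $\epsilon > 0$,

    \text{machines: } O(n^{1-\epsilon}), \qquad
    \text{memory per machine: } O(n^{1-\epsilon}), \qquad
    \text{rounds: } O(\log^i n),

with every mapper and reducer running in time polynomial in $n$; the constant-round computations discussed above correspond to $i = 0$.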


International Joint Conference on Artificial Intelligence | 2017

On the Complexity of Learning from Label Proportions

Benjamin Fish; Lev Reyzin

In the problem of learning with label proportions, which we call LLP learning, the training data is unlabeled, and only the proportions of examples receiving each label are given. The goal is to learn a hypothesis that predicts the proportions of labels on the distribution underlying the sample. This model of learning is applicable to a wide variety of settings, including predicting the number of votes for candidates in political elections from polls. In this paper, we formally define this class, resolve foundational questions regarding the computational complexity of LLP, and characterize its relationship to PAC learning. Among our results, we show, perhaps surprisingly, that for finite VC classes what can be efficiently LLP learned is a strict subset of what can be learned efficiently in PAC, under standard complexity assumptions. We also show that there exist classes of functions whose learnability in LLP is independent of ZFC, the standard set-theoretic axioms. This implies that LLP learning cannot be easily characterized the way PAC learning is characterized by VC dimension.
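
A toy illustration of the LLP setup described above (this is not a construction from the paper): the learner sees only an unlabeled sample together with the overall fraction of positive labels, and selects the threshold hypothesis whose predicted proportion best matches that fraction.

    # Hypothetical one-dimensional example: the hidden target labels a point 1
    # iff x >= 0.37, but the learner only receives the unlabeled points and the
    # aggregate fraction of 1-labels, and matches that proportion over a class
    # of threshold functions.
    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.uniform(0, 1, size=1000)       # unlabeled sample
    given_proportion = (x >= 0.37).mean()  # only this aggregate is revealed

    candidates = np.linspace(0, 1, 201)    # hypothesis class: thresholds on [0, 1]
    best = min(candidates, key=lambda t: abs((x >= t).mean() - given_proportion))
    print("recovered threshold:", round(best, 3))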


Journal of Complex Networks | 2017

Betweenness centrality profiles in trees

Benjamin Fish; Rahul Kushwaha; György Turán

Betweenness centrality of a vertex in a graph measures the fraction of shortest paths going through the vertex. This is a basic notion for determining the importance of a vertex in a network. The k-betweenness centrality of a vertex is defined similarly, but only considers shortest paths of length at most k. The sequence of k-betweenness centralities for all possible values of k forms the betweenness centrality profile of a vertex. We study properties of betweenness centrality profiles in trees. We show that for scale-free random trees, for fixed k, the expectation of k-betweenness centrality strictly decreases as the index of the vertex increases. We also analyze worst-case properties of profiles in terms of the distance of profiles from being monotone, and the number of times pairs of profiles can cross. This is related to whether k-betweenness centrality, for small values of k, may be used instead of having to consider all shortest paths. Bounds are given that are optimal in order of magnitude. We also present some experimental results for scale-free random trees.
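
Spelling out the definitions used above as a formula (standard unnormalized form; the paper's exact normalization may differ): writing $\sigma_{st}$ for the number of shortest $s$-$t$ paths and $\sigma_{st}(v)$ for the number of those passing through $v$, the $k$-betweenness centrality of a vertex $v$ is

    C_B^{(k)}(v) \;=\; \sum_{\substack{s \neq v \neq t \\ d(s,t) \le k}} \frac{\sigma_{st}(v)}{\sigma_{st}},

and the betweenness centrality profile of $v$ is the sequence $C_B^{(1)}(v), C_B^{(2)}(v), \ldots$, which stabilizes at the ordinary betweenness centrality once $k$ reaches the diameter.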


Symbolic and Numeric Algorithms for Scientific Computing | 2014

CSPs and Connectedness: P/NP Dichotomy for Idempotent, Right Quasigroups

Robert W. McGrail; James Belk; Solomon Garber; Japheth Wood; Benjamin Fish

In the 1990s, Jeavons showed that every finite algebra corresponds to a class of constraint satisfaction problems. Vardi later conjectured that idempotent algebras exhibit P/NP dichotomy: every non-NP-complete algebra in this class must be tractable. Here we discuss how tractability corresponds to connectivity in Cayley graphs. In particular, we show that dichotomy in finite idempotent, right quasigroups follows from a very strong notion of connectivity. Moreover, P/NP membership is first-order axiomatizable in involutory quandles.


arXiv: Social and Information Networks | 2017

A supervised approach to time scale detection in dynamic networks

Benjamin Fish; Rajmonda S. Caceres


Adaptive Agents and Multi-Agent Systems | 2016

Recovering Social Networks by Observing Votes

Benjamin Fish; Yi Huang; Lev Reyzin


Conference on Learning Theory | 2017

Open Problem: Meeting Times for Learning Random Automata

Benjamin Fish; Lev Reyzin

Collaboration


Dive into Benjamin Fish's collaborations.

Top Co-Authors

Lev Reyzin
University of Illinois at Chicago

Rajmonda S. Caceres
Massachusetts Institute of Technology

György Turán
University of Illinois at Chicago

Jeremy Kun
University of Illinois at Chicago

Yi Huang
University of Illinois at Chicago

Ádám Dániel Lelkes
University of Illinois at Chicago