New England Machine Learning Day 2014 – Microsoft Research

We propose a unified Bayesian framework for detecting genetic variants associated with a disease while exploiting image-based features as an intermediate phenotype. Traditionally, imaging genetics methods comprise two separate steps. First, image features are selected based on their relevance to the disease phenotype. Second, a set of genetic variants is identified to explain the selected features. In contrast, our method performs these tasks simultaneously, ultimately assigning probabilistic measures of relevance to both genetic and imaging markers. We derive an efficient approximate inference algorithm that handles the high dimensionality of imaging genetics data. We evaluate the algorithm on synthetic data and show that it outperforms traditional models. We also illustrate the application of the method in a study of Alzheimer’s disease.

Joint work with Kayhan Batmanghelich, Adrian Dalca, Mert Sabuncu.

Many machine learning applications, explicitly or implicitly, attempt to mimic natural human abilities in a machine. Indeed, any setting where human-provided labels are used as ground truth – whether the system aspires to be biologically inspired or not – is ultimately driven by the human visual and cognitive system and its ability to provide accurate exemplar labels. However, human-provided ground-truth labels are in many ways just the tip of the iceberg of the information that can be extracted from human judgments. I will describe a new approach — called “perceptual annotation” — in which we use an advanced online psychometric testing platform to acquire new kinds of human annotation data, and we incorporate these data directly into the formulation of a machine learning algorithm. A key intuition for this approach is that while it may be infeasible to dramatically increase the amount of data and high-quality labels available for training a given system, measuring the latent exemplar-by-exemplar landscape of difficulty and patterns of human errors can provide important information for regularizing the solution of the system at hand. Finally, I will conclude by exploring how this approach can be extended to incorporate an even greater diversity of biological data.

English text today is often machine printed or displayed on screens using the same letters that were carved in stone and handwritten on wax and parchment over two thousand years ago. We consider the problem of developing radically different characters for the same underlying twenty-six-letter English alphabet, just as Braille and cursive are alternative representations. We discuss optimizing these letters for multiple criteria using crowdsourcing and machine learning.

From routing to online auctions, many decision-making tasks for learning agents are carried out in the presence of other decision makers. I will give a brief overview of results developed in the context of adapting reinforcement-learning algorithms to work effectively in multiagent environments. Of particular interest is the idea that even simple scenarios, such as the well-known Prisoner’s dilemma, require agents to work together, bearing some individual risk, to arrive at mutually beneficial outcomes.
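The cooperation-versus-risk tradeoff mentioned above can be made concrete with a toy simulation (invented for illustration, not taken from the talk): in the iterated Prisoner’s dilemma, a tit-for-tat agent risks being exploited on the first round against a constant defector, yet sustains mutually beneficial cooperation against a like-minded partner.

```python
# Toy iterated Prisoner's dilemma with the standard payoff matrix
# (row player): both cooperate -> 3, both defect -> 1,
# defect against a cooperator -> 5, cooperate against a defector -> 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's previous move."""
    return history[-1][1] if history else "C"

def always_defect(history):
    return "D"

def play(strat_a, strat_b, rounds=100):
    """Run an iterated game; each history entry is (my move, their move)."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(history_a), strat_b(history_b)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        history_a.append((a, b))
        history_b.append((b, a))
    return score_a, score_b

# Two tit-for-tat agents sustain cooperation (3 per round); mutual
# defection locks in 1 per round; tit-for-tat pays for its opening
# cooperation exactly once against a defector.
print(play(tit_for_tat, tit_for_tat))      # (300, 300)
print(play(always_defect, always_defect))  # (100, 100)
print(play(tit_for_tat, always_defect))    # (99, 104)
```

The point of the sketch is the one the talk raises: the cooperative strategy accepts a bounded individual risk (the first-round loss) in exchange for the much better mutual outcome.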

Several problems, such as network intrusion, community detection, disease outbreak, and cell signaling, can be described in terms of an attributed graph with signals associated with nodes and edges. In these applications, the presence of an intrusion, community, disease outbreak, or signaling pathway is characterized by novel observations on some unknown connected subgraph. These problems can be formulated in terms of optimizing suitable objectives over connected subgraphs, a problem which is generally computationally difficult. We overcome the combinatorics of connectivity algebraically through an embedding of connected subgraphs into linear matrix inequalities (LMIs). Computationally efficient tests are then realized by optimizing convex objective functions subject to these LMI constraints. We show that our tests are minimax optimal for the exponential family of distributions and for graphs satisfying a polynomial growth property.
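To see why the combinatorics of connectivity are the bottleneck, consider a hypothetical brute-force baseline (not the authors’ method) that enumerates every connected subgraph of a given size and scores it by total node weight, in the style of a scan statistic. The graph and weights below are invented for the example; the enumeration is exponential in general, which is exactly what the LMI relaxation avoids.

```python
from itertools import combinations

def is_connected(nodes, edges):
    """BFS check that `nodes` induce a connected subgraph."""
    nodes = set(nodes)
    start = next(iter(nodes))
    seen, frontier = {start}, [start]
    while frontier:
        u = frontier.pop()
        for a, b in edges:
            for v in ((b,) if a == u else (a,) if b == u else ()):
                if v in nodes and v not in seen:
                    seen.add(v)
                    frontier.append(v)
    return seen == nodes

def best_connected_subgraph(weights, edges, size):
    """Exhaustively scan all connected `size`-node subgraphs for the
    maximum total weight -- exponential in the graph size in general."""
    best, best_score = None, float("-inf")
    for cand in combinations(weights, size):
        if is_connected(cand, edges):
            score = sum(weights[v] for v in cand)
            if score > best_score:
                best, best_score = cand, score
    return best, best_score

# Path graph 0-1-2-3-4 with an elevated signal on nodes 2 and 3.
w = {0: 0.1, 1: 0.2, 2: 2.0, 3: 1.8, 4: 0.1}
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
print(best_connected_subgraph(w, edges, 2))  # best pair is (2, 3)
```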

Two aspects are crucial when constructing any real-world supervised classification task: the set of classes whose distinction might be useful for the domain expert, and the set of classes that can actually be distinguished in the data. Often a set of labels is defined with some initial intuition, but these are not the best match for the task. For example, labels have been assigned for land cover classification of the Earth, but it has been suspected that these labels are not ideal: some classes may be best split into subclasses, whereas others should be merged. We present an approach that formalizes this problem using three ingredients: the existing class labels, the underlying separability in the data, and input from the domain expert in the form of an LxL matrix of pairwise probabilistic constraints expressing their beliefs as to whether the L classes should be kept separate, merged, or split. We describe how the problem can be solved by casting it as an instance of constraint-based clustering. We present results demonstrating its application to redefining a class taxonomy for land cover classification of the Earth and to redefining the set of high-level keywords for AAAI 2014.
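A greatly simplified sketch (not the authors’ algorithm) shows how the ingredients fit together: a hypothetical data-driven separability score per class pair is combined with an expert merge-belief matrix, and a pair is proposed for merging only when the data cannot separate it and the expert agrees. The class names and all scores below are invented; the actual method casts this as constraint-based clustering rather than simple thresholding.

```python
# Hypothetical scores: separability[i][j] in [0, 1], estimated from the
# data (e.g., cross-validated confusability of classes i and j), and
# merge_belief[i][j] in [0, 1], elicited from the domain expert.
classes = ["forest", "woodland", "water", "urban"]
separability = [
    [1.0, 0.2, 0.9, 0.8],   # forest vs woodland is poorly separated
    [0.2, 1.0, 0.9, 0.8],
    [0.9, 0.9, 1.0, 0.9],
    [0.8, 0.8, 0.9, 1.0],
]
merge_belief = [
    [0.0, 0.9, 0.0, 0.0],   # expert believes forest/woodland should merge
    [0.9, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.1],
    [0.0, 0.0, 0.1, 0.0],
]

def propose_merges(classes, separability, merge_belief,
                   sep_thresh=0.5, belief_thresh=0.5):
    """Propose merging a pair when the data cannot separate it AND the
    expert agrees -- a toy stand-in for the constrained clustering
    objective described in the talk."""
    merges = []
    for i in range(len(classes)):
        for j in range(i + 1, len(classes)):
            if (separability[i][j] < sep_thresh
                    and merge_belief[i][j] > belief_thresh):
                merges.append((classes[i], classes[j]))
    return merges

print(propose_merges(classes, separability, merge_belief))
# [('forest', 'woodland')]
```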

The effective analysis of emerging sources of complex clinical and mobile health data represents a key challenge for machine learning. These data often exhibit multiple complicating factors, including sparse and irregular sampling, incompleteness, noise, non-stationary temporal dynamics, high levels of between-subject variability, high volume, high velocity, and significant heterogeneity and multi-modality. In this talk, I will present an overview of some of the machine learning problems my research group is currently working on, motivated by ongoing collaborations in both clinical and mobile health. These problems include modeling and prediction of sparse and irregularly sampled physiological time series from intensive care unit electronic health records, feature extraction and event detection from noisy wearable on-body sensor data, and learning what to sense in the energy- and computation-constrained mobile device setting.
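As a minimal illustration of the sparse, irregular sampling problem, the sketch below uses a simple Gaussian-kernel (Nadaraya-Watson) smoother to map irregularly timed readings onto a regular grid; gaps and uneven spacing need no special handling because missing values are simply absent points. This is a toy stand-in under invented data, not the models used in the talk.

```python
import math

def kernel_smooth(times, values, query_times, bandwidth=1.0):
    """Nadaraya-Watson estimate: a Gaussian-kernel weighted average of
    the observed values, evaluated at each query time."""
    out = []
    for t in query_times:
        weights = [math.exp(-0.5 * ((t - ti) / bandwidth) ** 2)
                   for ti in times]
        total = sum(weights)
        out.append(sum(w * v for w, v in zip(weights, values)) / total)
    return out

# Heart-rate-like readings at irregular times (hours since admission):
# a burst of samples early on, then a long gap.
times  = [0.0, 0.7, 3.1, 3.2, 8.0]
values = [72,  75,  90,  91,  70]
hourly = kernel_smooth(times, values, query_times=range(9), bandwidth=1.0)
print([round(v, 1) for v in hourly])
```

Because each estimate is a convex combination of the observations, the smoothed series always stays within the observed range, which makes this a reasonable, if crude, baseline before fitting richer temporal models.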

How can we build a machine that learns to see the world as a human being does? This question has been at the heart of the fields of AI and machine learning since their inception. Recently the question has seen renewed interest from researchers taking various “big data” approaches, such as training many-layered neural networks to find structure in millions of images, or mining the web to build databases of millions of common-sense facts. I will talk about our recent work taking a different approach, based on trying to reverse-engineer the core cognitive capacities and learning mechanisms of young children and infants. In contrast to conventional big-data ML approaches, children parse their experience using rich, causally structured generative models, and learn new models from very little evidence; often a single example is sufficient to grasp a new concept and generalize in richer ways than machine learning systems typically can, even with hundreds or thousands of examples. I will show how we are beginning to capture these perception and learning abilities in computational terms using techniques based on probabilistic programs and program induction, embedded in a broadly Bayesian framework for inference under uncertainty.