Summary
Top researchers from Northwestern University (Chicago), the University of British Columbia (Vancouver), and Makerere University (Kampala) are teaming up to offer a workshop on cutting-edge methods for computational modeling of social systems, algorithm design, and machine learning. The sessions will take place between December 3rd and 10th, and there is no cost for attendance; however, registration is mandatory.
Attendance is limited to academic staff working at a Ugandan university; students doing research in related areas may also be given special permission to attend if space permits. Participants will have the opportunity to publish papers in official, reviewed workshop proceedings at a later date. A certificate of completion will be provided to participants who attend at least two-thirds of the workshop sessions.
Overview
Traditionally, computer science has viewed data as coming either from an adversarial source or from nature itself, giving rise to worst-case and average-case design and analysis of optimization algorithms. In recent years, with the advent of modern technologies like the Internet, it has become increasingly apparent that neither of these assumptions reflects reality. Data is neither adversarial nor average; rather, inputs to algorithms are constructed by a diverse set of self-interested agents in an economy, each aiming to maximize its own happiness. Thus the raw data is often not available to an algorithm designer but must be solicited from the agents; that is, the designer faces an economic constraint. The primary goal of this workshop is to explore the implications of this observation. We will study the performance of algorithms in the presence of utility-maximizing agents and ask whether alternative designs might create incentives for agents to act in ways that lead to better outcomes. Simultaneously, we will look at other, more traditional optimization problems such as approximation and learning, and at techniques to solve them, pointing out that these can often be leveraged to address issues in the economic setting.
Related Research Areas
Computer Science Theory; Artificial Intelligence; Economics; Business
Format
The workshop will consist of six 3-hour lectures, plus meal/breakout sessions for informal research discussion. Spaces are strictly limited, and attendees must pre-register. We will aim to select the topics and session times that are best for our participants. To register, and to indicate your preferences for topics and dates, please complete the survey at http://www.surveymonkey.com/s/WWGMKZG.
List of Candidate Topics
The workshop will consist of up to six of the following twelve topics.
Introduction to Game Theory
Game theory is the mathematical study of interaction among independent, self-interested agents. It has been applied to disciplines as diverse as economics, political science, biology, psychology, linguistics—and computer science. This tutorial will introduce what has become the dominant branch of game theory, called noncooperative game theory, and will specifically describe normal-form games, a canonical representation in this discipline. The tutorial will be motivated by the question: "In a strategic interaction, what joint outcomes make sense?"
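As a small illustration of the kind of question this tutorial asks (not part of the tutorial materials), the sketch below encodes the Prisoner's Dilemma as a normal-form game and checks which joint outcomes are pure-strategy Nash equilibria. The payoff numbers are a standard textbook choice, invented here purely for the example.

```python
# A minimal sketch of a two-player normal-form game (Prisoner's Dilemma).
# Payoff values are a standard textbook choice, used only for illustration.

ACTIONS = ["Cooperate", "Defect"]

# PAYOFFS[(row action, column action)] = (row player's payoff, column player's payoff)
PAYOFFS = {
    ("Cooperate", "Cooperate"): (-1, -1),
    ("Cooperate", "Defect"):    (-3,  0),
    ("Defect",    "Cooperate"): ( 0, -3),
    ("Defect",    "Defect"):    (-2, -2),
}

def is_pure_nash(row_action, col_action):
    """An outcome is a pure Nash equilibrium if neither player can gain
    by unilaterally switching to a different action."""
    row_payoff, col_payoff = PAYOFFS[(row_action, col_action)]
    for alt in ACTIONS:
        if PAYOFFS[(alt, col_action)][0] > row_payoff:
            return False
        if PAYOFFS[(row_action, alt)][1] > col_payoff:
            return False
    return True

if __name__ == "__main__":
    for r in ACTIONS:
        for c in ACTIONS:
            if is_pure_nash(r, c):
                print("Pure Nash equilibrium:", r, "/", c)
    # Only Defect / Defect is printed: the joint outcome that "makes sense"
    # strategically, even though mutual cooperation would leave both better off.
```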
Voting Theory
Voting (or "Social Choice") theory adopts a“designer perspective” to multiagent systems, asking what rules should be put in place by the authority (the “designer”) orchestrating a set of agents. Specifically, how should a central authority pool the preferences of different agents so as to best reflect the wishes of the population as a whole? (Contrast this with Game Theory, whichadopts what might be called the “agent perspective”: its focus is on making statements about how agents should or would act in a given situation.) This tutorial will describe famous voting rules, show problems with them, and explain Arrow's famous impossibility result.
Mechanism Design and Auctions
Social choice theory is nonstrategic: it takes the preferences of agents as given, and investigates ways in which they can be aggregated. But of course those preferences are usually not known. Instead, agents must be asked to declare them, which they may do dishonestly. Since as a designer you wish to find an optimal outcome with respect to the agents’ true preferences (e.g., electing a leader that truly reflects the agents’ preferences), optimizing with respect to the declared preferences will not in general achieve the objective. This tutorial will introduce Mechanism Design, the study of identifying socially desirable protocols for making decisions in such settings. It will describe the core principles behind this theory, and explain the famous "Vickrey-Clarke-Groves" mechanism, an ingenious technique for selecting globally-utility-maximizing outcomes even among selfish agents. It will also describe Auction Theory, the most famous application of mechanism design. Auctions are mechanisms that decide who should receive a scarce resource, and that impose payments upon some or all participants, based on agents' "bids".
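As a concrete taste of these ideas, the sketch below implements the single-item special case of the VCG mechanism, the sealed-bid second-price (Vickrey) auction: the highest bidder wins but pays the second-highest bid, which is what makes truthful bidding a dominant strategy. The bid values are invented for illustration.

```python
# A minimal sketch of a sealed-bid second-price (Vickrey) auction, the
# single-item special case of the VCG mechanism. Bid values are invented
# purely for illustration.

def second_price_auction(bids):
    """bids: dict mapping bidder name -> bid amount.
    The highest bidder wins and pays the second-highest bid, so reporting
    one's true value is a dominant strategy."""
    ranked = sorted(bids.items(), key=lambda item: item[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else 0
    return winner, price

if __name__ == "__main__":
    bids = {"Alice": 120, "Bob": 90, "Carol": 75}   # hypothetical bids
    winner, price = second_price_auction(bids)
    print(winner, "wins and pays", price)           # Alice wins and pays 90
```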
Constraint Satisfaction Problem Solving
This hands-on tutorial will teach participants about solving Constraint Satisfaction Problems using search and constraint propagation techniques. This is a representation language from artificial intelligence, used to describe problems in scheduling, circuit verification, DNA structure prediction, vehicle routing, and many other practical problems. The tutorial will consider the problem of solving Sudoku puzzles as a running example. By the end of the session, participants will have written software (in Python) capable of solving any Sudoku puzzle in less than a second.
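As a preview of the kind of program built in this session, here is a compact sketch of a Sudoku solver based on backtracking search with constraint checking; a fuller treatment in the tutorial would add stronger constraint propagation. The sketch is illustrative, not the tutorial's reference solution.

```python
# A compact sketch of a Sudoku solver: backtracking search over empty cells,
# pruning assignments that violate the row, column and 3x3-box constraints.
# (Stronger constraint propagation, e.g. arc consistency, would speed this up.)

def find_empty(grid):
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                return r, c
    return None

def consistent(grid, r, c, value):
    """Check the row, column and box constraints for placing value at (r, c)."""
    if any(grid[r][j] == value for j in range(9)):
        return False
    if any(grid[i][c] == value for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(grid[br + i][bc + j] != value for i in range(3) for j in range(3))

def solve(grid):
    """Fill the grid in place; return True if a complete solution is found."""
    cell = find_empty(grid)
    if cell is None:
        return True
    r, c = cell
    for value in range(1, 10):
        if consistent(grid, r, c, value):
            grid[r][c] = value
            if solve(grid):
                return True
            grid[r][c] = 0  # undo the assignment and backtrack
    return False

if __name__ == "__main__":
    puzzle = [[0] * 9 for _ in range(9)]  # an empty grid; any valid puzzle works
    solve(puzzle)
    for row in puzzle:
        print(row)
```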
Bayesian Methods and Probabilistic Inference
Bayesian methods are commonly used for recognising patterns and making predictions in the fields of medicine, economics, finance and engineering, powering all manner of applications from fingerprint recognition to spam filters to robotic self-driving cars. This session will show how principles of probability can be used when making inferences from large datasets, covering issues such as prior knowledge and hyperpriors, the construction of "belief networks", and nonparametric methods such as Gaussian processes. Several applications will be demonstrated.
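To give a flavour of the underlying principle, here is a toy sketch of Bayes' rule applied to a spam filter. The probabilities are made up for the example and are not drawn from the session materials.

```python
# A toy illustration of Bayes' rule, the principle underlying the methods in
# this session. The probabilities below are invented: the prior probability
# that a message is spam, and the likelihood of the word "winner" appearing
# in spam and in legitimate (ham) messages.

def posterior_spam(prior_spam, p_word_given_spam, p_word_given_ham):
    """P(spam | word) via Bayes' rule: likelihood * prior / evidence."""
    evidence = (p_word_given_spam * prior_spam
                + p_word_given_ham * (1 - prior_spam))
    return p_word_given_spam * prior_spam / evidence

if __name__ == "__main__":
    # Hypothetical numbers: 20% of mail is spam; "winner" appears in 40% of
    # spam messages but only 1% of legitimate ones.
    print(posterior_spam(prior_spam=0.2,
                         p_word_given_spam=0.4,
                         p_word_given_ham=0.01))   # approx 0.91
```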
Computer Vision
It is useful to be able to automatically answer questions about an image, such as "is this the face of person X?", "how many cars are there on this street?" or "is there anything unusual about this x-ray?". This session will look at some of the current state of the art in computer vision techniques, including methods for representing the information in an image (feature extraction) and for recognising objects in an image given such a representation. We will spend particular time on approaches that have been found to work well empirically on object recognition, such as generalised Hough transforms, boosted cascades of Haar wavelet classifiers, and visual bag-of-words methods. Locally relevant applications in crop disease diagnosis, parasite detection in blood samples and traffic monitoring will be demonstrated as illustrative examples.
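As one illustration of the boosted Haar-cascade approach mentioned above, the sketch below runs OpenCV's pre-trained frontal-face detector. It assumes the opencv-python package is installed; "photo.jpg" is a placeholder filename, not a file provided with the workshop.

```python
# A sketch of object detection with a boosted cascade of Haar features,
# using the pre-trained frontal-face model shipped with OpenCV.
# Assumes opencv-python is installed; "photo.jpg" is a placeholder filename.

import cv2

# Load the pre-trained cascade bundled with OpenCV.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("photo.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Slide the cascade over the image at multiple scales.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

print("Detected", len(faces), "face(s)")
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("photo_with_faces.jpg", image)
```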
Learning Causal Structure from Data
Until a few decades ago, it was thought to be impossible to learn causes and effects from purely observational data without doing experiments. Sometimes, however, it is impossible to do experiments (e.g. in some branches of genetics), or experiments may be costly or unethical (e.g. situations in climate change or medicine), so the emergence of computational methods for distinguishing causes, effects and confounding variables is likely to have wide implications. Some principles are now understood for learning the causal structure between different variables, and this session will explain the most successful current approaches, their possibilities and their limitations.
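A toy sketch of the central statistical idea behind constraint-based causal discovery, conditional independence testing, is given below: in a chain X causes Y causes Z, X and Z are correlated, but become (nearly) uncorrelated once Y is controlled for. The synthetic data-generating process is invented for illustration and is not part of the session materials.

```python
# A toy illustration of the idea behind constraint-based causal discovery:
# test conditional independences in the data. Synthetic data are generated
# from the chain X -> Y -> Z, so X and Z are correlated overall, but the
# partial correlation of X and Z given Y is close to zero.

import numpy as np

rng = np.random.default_rng(0)
n = 10000
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(size=n)      # Y caused by X
z = 0.8 * y + rng.normal(size=n)      # Z caused by Y

def partial_corr(a, b, given):
    """Correlation of a and b after regressing out 'given' from both."""
    ra = a - np.polyval(np.polyfit(given, a, 1), given)
    rb = b - np.polyval(np.polyfit(given, b, 1), given)
    return np.corrcoef(ra, rb)[0, 1]

print("corr(X, Z)     =", round(np.corrcoef(x, z)[0, 1], 3))  # clearly nonzero
print("corr(X, Z | Y) =", round(partial_corr(x, z, y), 3))    # close to zero
```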
Internet Search and Monetization
The Internet is one of the most fundamental and important applications of computer science. Central to its existence are search engines, which enable us to find content on the web. This module focuses on the algorithms, such as PageRank, that these search engines use to help us find web pages. It also studies how these engines make money through advertising.
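To give a flavour of PageRank, here is a short sketch that computes the ranking by power iteration on the damped "random surfer" model. The four-page link graph is invented for this example.

```python
# A minimal PageRank sketch: power iteration on the "random surfer" model
# with a damping factor of 0.85. The four-page link graph is invented for
# this example (every page here has at least one outgoing link).

LINKS = {               # page -> pages it links to
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
DAMPING = 0.85

def pagerank(links, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - DAMPING) / n for p in pages}
        for page, outlinks in links.items():
            share = DAMPING * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

if __name__ == "__main__":
    for page, score in sorted(pagerank(LINKS).items(), key=lambda kv: -kv[1]):
        print(page, round(score, 3))   # C ranks highest: it receives the most links
```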
Social Networks
Social networks describe the structure of interpersonal relationships and have many alarmingly predictable properties. While most people have just a few friends, most social networks contain at least a few very popular people. Furthermore, almost any two people are linked by a short chain of acquaintances, so that a message (or an idea, or a disease) can spread rapidly throughout the network. Finally, social networks tend to be fairly clustered: if two people share a common friend, it is quite likely that they are also friends themselves. This module will discuss the typical structures of social networks, models that explain these structures, and the impact of these structures on activities in the social network such as message routing or the adoption of new technologies.
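The properties described above can be checked numerically. The sketch below (assuming the networkx library is installed, with parameters chosen only for illustration) generates a Watts-Strogatz "small world" random graph and measures its clustering and typical path length.

```python
# A small numerical check of the properties described above, using the
# networkx library (assumed installed). A Watts-Strogatz "small world" graph
# exhibits both high clustering and short paths between nodes.

import networkx as nx

# 1000 people, each initially connected to 10 neighbours, with 5% of the
# links rewired at random (parameters chosen only for illustration).
g = nx.connected_watts_strogatz_graph(n=1000, k=10, p=0.05, seed=42)

print("Average clustering coefficient:", round(nx.average_clustering(g), 3))
print("Average shortest path length:  ",
      round(nx.average_shortest_path_length(g), 2))
```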
Two-Sided Matching Markets
Many markets involve two “sides” that wish to be matched to one another. For example, a marriage market matches women to men; a job market matches workers to employers. In such settings, people on each side have strict preferences over the options on the other side of the market. Hence, a woman Julie may like David best, John second best, and Christopher third; David, on the other hand, may prefer Mary to Julie. In such settings, what matches might we expect to form? Can these matches be computed by a centralized algorithm (a match-maker, for example), and what are the corresponding incentives of the participants? These questions are of fundamental importance, as such centralized algorithms are in use in many important markets: in many countries, medical students are matched to hospitals, and school children to schools, using exactly such algorithms.
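Below is a compact sketch of the deferred-acceptance (Gale-Shapley) algorithm, the classic centralized procedure for two-sided matching. The preference lists echo the Julie/David example above but are otherwise invented for illustration.

```python
# A compact sketch of the Gale-Shapley deferred-acceptance algorithm.
# The preference lists (echoing the example in the text) are invented
# purely for illustration; everyone is assumed to rank the whole other side.

def deferred_acceptance(proposer_prefs, receiver_prefs):
    """Proposers propose in order of preference; receivers tentatively hold
    their best offer so far. Returns a stable matching {receiver: proposer}."""
    free = list(proposer_prefs)                       # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}      # index of next proposal
    rank = {r: {p: i for i, p in enumerate(prefs)}    # receiver's ranking of proposers
            for r, prefs in receiver_prefs.items()}
    match = {}                                        # receiver -> proposer
    while free:
        proposer = free.pop(0)
        receiver = proposer_prefs[proposer][next_choice[proposer]]
        next_choice[proposer] += 1
        current = match.get(receiver)
        if current is None:
            match[receiver] = proposer
        elif rank[receiver][proposer] < rank[receiver][current]:
            match[receiver] = proposer
            free.append(current)                      # displaced proposer is free again
        else:
            free.append(proposer)                     # rejected; will propose further down
    return match

if __name__ == "__main__":
    women = {"Julie": ["David", "John", "Christopher"],
             "Mary":  ["David", "Christopher", "John"]}
    men   = {"David": ["Mary", "Julie"],
             "John":  ["Julie", "Mary"],
             "Christopher": ["Julie", "Mary"]}
    print(deferred_acceptance(women, men))   # {'David': 'Mary', 'John': 'Julie'}
```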
Approximation Algorithms
In the field of algorithms, many tasks turn out to be computationally difficult. That is, the time to complete the task grows very quickly with the size of the problem. For example, consider the problem of finding the optimal way to visit 10 cities, visiting each exactly once. To minimize travel time, one could test all possible travel schedules, but for 10 cities there are already over 3.6 million of them! Unfortunately, there is no significantly quicker way to find the optimal solution. However, one can find an approximately optimal solution quickly: with far less computation, one can design a schedule that takes at most 50% more time than the optimal one. In this module we showcase a few general techniques for computing approximate solutions to hard problems, including the use of randomization and linear programming.
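As one example of this style of algorithm, the sketch below approximates the city-visiting problem when distances are symmetric and obey the triangle inequality: build a minimum spanning tree and visit the cities in the order of a depth-first walk of that tree. This simple method guarantees a tour at most twice the optimal length (the stronger 50% guarantee mentioned above needs the more involved Christofides algorithm). The city coordinates are randomly generated for illustration.

```python
# A sketch of a classic approximation technique for the travelling salesman
# problem with distances satisfying the triangle inequality: build a minimum
# spanning tree, then visit cities in depth-first order. The resulting tour
# is at most twice the optimal length. City coordinates are invented.

import math
import random

random.seed(1)
cities = [(random.random(), random.random()) for _ in range(10)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def minimum_spanning_tree(points):
    """Prim's algorithm: repeatedly attach the closest outside city to the tree."""
    n = len(points)
    in_tree = {0}
    children = {i: [] for i in range(n)}
    while len(in_tree) < n:
        u, v = min(((u, v) for u in in_tree for v in range(n) if v not in in_tree),
                   key=lambda e: dist(points[e[0]], points[e[1]]))
        children[u].append(v)
        in_tree.add(v)
    return children

def tour_from_tree(children, root=0):
    """Depth-first walk of the tree gives the order in which to visit cities."""
    order, stack = [], [root]
    while stack:
        node = stack.pop()
        order.append(node)
        stack.extend(reversed(children[node]))
    return order

tour = tour_from_tree(minimum_spanning_tree(cities))
length = sum(dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
             for i in range(len(tour)))
print("Tour order:", tour)
print("Tour length:", round(length, 3))
```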
Graph Theory
A graph is a combinatorial object consisting of nodes and edges, and is an extremely valuable abstraction of many practical problems. For example, nodes might represent jobs and edges might connect pairs of jobs that cannot be performed simultaneously. Alternatively, nodes might represent electronic components on a circuit board and edges the wiring that connects them. Many questions that arise in such domains can be cast as optimization questions on the corresponding graph. The number of workers required to complete all jobs in a fixed time frame in the first example is, at its heart, a graph coloring problem. Asking whether one can lay out the circuit board so that no two wires cross becomes the problem of determining which graphs have planar representations. This course defines graphs, shows how to solve a few fundamental graph problems, and applies them to practical settings.
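As a short sketch of the job example above: nodes are jobs, edges join pairs of jobs that cannot be performed simultaneously, and a greedy colouring groups the jobs into mutually compatible colour classes. The conflict graph is invented for illustration; greedy colouring is one simple heuristic, not necessarily optimal.

```python
# Greedy graph colouring applied to the job example: nodes are jobs, edges
# join pairs of jobs that cannot run simultaneously, and each colour class is
# a set of mutually compatible jobs. The conflict graph is invented.

CONFLICTS = {            # job -> jobs it conflicts with
    "J1": {"J2", "J3"},
    "J2": {"J1", "J3"},
    "J3": {"J1", "J2", "J4"},
    "J4": {"J3"},
    "J5": set(),
}

def greedy_colouring(graph):
    """Assign each node the smallest colour not used by its neighbours.
    Uses at most (max degree + 1) colours, though not necessarily the minimum."""
    colour = {}
    for node in graph:
        used = {colour[nbr] for nbr in graph[node] if nbr in colour}
        c = 0
        while c in used:
            c += 1
        colour[node] = c
    return colour

if __name__ == "__main__":
    print(greedy_colouring(CONFLICTS))
    # {'J1': 0, 'J2': 1, 'J3': 2, 'J4': 0, 'J5': 0}: three colour classes.
```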
Speaker Bios
Nicole Immorlica is an assistant professor in the Economics Group of Northwestern University's EECS department in Chicago, IL, USA. She joined Northwestern in Fall 2008 after postdoctoral positions at Microsoft Research in Seattle, Washington, USA and the Centrum voor Wiskunde en Informatica (CWI) in Amsterdam, The Netherlands. She received her Ph.D. from MIT in Cambridge, MA, USA, in 2005 under the joint supervision of Erik Demaine and David Karger. Her main research area is algorithmic game theory, where she investigates economic and social implications of modern technologies including social networks, advertising auctions, and online auction design.
Kevin Leyton-Brown is an associate professor in computer science at the University of British Columbia, Vancouver, Canada. He received a B.Sc. from McMaster University (1998), and an M.Sc. and Ph.D. from Stanford University (2001 and 2003, respectively). Much of his work is at the intersection of computer science and microeconomics, addressing computational problems in economic contexts and incentive issues in multiagent systems. He also studies the application of machine learning to the automated design and analysis of algorithms for solving hard computational problems.
John Quinn is a Senior Lecturer in Computer Science at Makerere University. He received a BA in Computer Science from the University of Cambridge (2000) and a PhD from the University of Edinburgh (2007). He coordinates the Machine Learning Group at Makerere, and his research interests are in pattern recognition and computer vision, particularly as applied to problems in the developing world.