Improved transportation is a key predictor of upward economic mobility; the relationship between transportation and economic mobility is stronger than the relationships between economic mobility and factors like crime, the percentage of two-parent families, and elementary-school test scores. Real-time ridesharing services (e.g., Uber and Lyft) are often touted as sharing-economy leaders and can dramatically lower the cost of transportation. However, it is unknown how to make these services work better for low-income and transportation-scarce households, how these individuals experience the services, and whether they encounter barriers in enlisting them. This presentation will uncover the feasibility, challenges, and opportunities of deploying real-time ridesharing services in underserved and transportation-scarce areas of Detroit, MI. It will also uncover opportunities for passengers and drivers to develop social capital, a key component of finding employment, during their rides.
Child development increasingly takes place in a world saturated by technology and media. Children are both producers and consumers of media. At the same time, technologies provide increasing opportunity for both the study of and intervention into child development. In this talk, I will describe the relationship between Human-Computer Interaction and child development, both typical and delayed. I will describe some of the opportunities and challenges for childhood technologies in terms of both research and intervention, highlighted with examples from the design and evaluation of custom, innovative technologies and interventions developed in my research lab.
The increasing role that algorithms play in the production of news information is rapidly changing the ways in which the media is authored, curated, disseminated, and consumed. Information intermediaries like Google and Facebook surface, filter, and highlight information using algorithms that shape and bias exposure to information. But such difficult-to-explain black-box algorithms problematize media accountability and call for new ways to apply transparency as a normative approach to media ethics and accountability. Broadly, the goal of algorithmic accountability is to articulate, explain, or justify the ways in which algorithms are exerting power in specific human contexts: in this case, to elucidate the biases and influences that algorithms exert in the constitution of the media. In this talk I will present recent algorithmic accountability work focused specifically on characterizing the role that search engines played in mediating political information during the 2016 elections. Then I'll present the development of an empirically grounded theoretical framework for algorithmic transparency in the news media that describes dimensions of information that may be communicated about algorithms, suggesting ways in which it can be employed to guide transparency by good-faith actors, as well as inform investigative or critical approaches to algorithms. I'll trace various technical, regulatory, legal, and normative challenges that remain, offering new openings for research in algorithmic news communication.
What happens to our accounts, data, and digital identities after we die? Over 972,000 US Facebook users will die in 2016, but their deaths do not necessarily result in the elimination of their accounts or their place inside a network of friends. This leaves friends and families with both the opportunity and the challenge of incorporating digital identities into their practices of grief and mourning. Meanwhile, post-mortem digital identities require designers and administrators to address the ongoing use and maintenance of post-mortem data. In this talk, I share findings from mixed-methods research on digital afterlives. I identify how people interact with profiles after the account holder's death, describe "post-mortem social networking" practices, articulate the multiple and conflicting needs of survivors, and present design research addressing the management of post-mortem digital identities. Finally, I discuss what death tells us about the technological design of identity, how identity infrastructure shapes the ways the user is operationalized, and the importance of future research that accounts for the infrastructure that undergirds user-centered research and practice.
Wikipedia, largely used as a synecdoche for open production generally, is a large, complex, distributed system that needs to solve a set of "open problems" efficiently in order to thrive. In this talk, I'll use the metaphor of biology as a "living system" to discuss the relationship between subsystem efficiency and the overall health of Wikipedia. Specifically, I'll describe Wikipedia's quality control subsystem and some trade-offs that were made in order to make this system efficient through the introduction of artificial intelligence and human computation. Finally, I'll apply critiques leveled by feminist HCI to argue for a new strategy for increasing the adaptive capacity of this subsystem, and speak generally about improving the practice of applying subjective artificial intelligence in social spaces. Live demo included!
Today information is organized not only by its content and metadata but by the online social actions taken upon it. These social activities contribute to the overall conversational nature of the media that we create, store, and share. Recently, however, advancements in computer vision technology have shifted the focus back to content-based indexing. This shift also creates many opportunities to build a new class of social-visual systems to aid in the organization and retrieval processes---ones that rely heavily on both the tacit and explicit communicative nature of social multimedia coupled with modern computer vision. In this talk, I will focus on the new practice of photography and how media has conversational and visual structure. Further, I will present a multifaceted human-centered computing system used to surface geo-located weather photos for editorial inclusion in a mobile application. Using the Flickr photosharing service, we can identify explicit group behavior and implicit photo-viewing patterns, and apply modern computer vision techniques to surface photos for curatorial editors in a human-in-the-loop AI system. Finally, I will outline new findings and challenges in social media organization, including geographic annotation of photographs and regions, community congregation online, and social engagement.
We build systems, apps, and infrastructure that can change the world – but most do not even change the users’ behaviors over a short period of time – never mind the world. Why not? What can we do to improve our designs that will lead to better appropriation? In this talk, I show how applying theories from psychology, design, and business to application design gradually improved acceptance and appropriation in underserved communities. I first briefly discuss how we used Bandura’s Social Cognitive Theory to design a mobile application that empowers low literacy, chronically ill patients to manage their diet. I then discuss various theories and design methods we used with low socioeconomic status families to design an application to improve family snacking behaviors. Finally, I will show how we are building on the Ikea Effect to motivate low socioeconomic status children to create their own health monitoring technology. All of the methods can be easily adopted into a researcher’s toolbox for designing applications that can potentially change the world.
Intelligent technologies often rely heavily on large datasets of crowdsourced information. Focusing on the important domain of crowdsourced geographic information (e.g. geotagged social media, citizen science observations, OpenStreetMap contributions), I will present research that shows that certain types of areas (e.g. rural areas, poor areas) are extensively under-covered in crowdsourced geographic datasets relative to other types of areas (e.g. urban areas, richer areas). I will then show how this under-coverage can result in important intelligent geographic technologies displaying significantly reduced effectiveness in the corresponding areas. I will also discuss how our results suggest that these under-coverage biases are systemic in nature and raise fundamental concerns about the use of crowdsourcing to generate data for intelligent technologies about certain types of places. Next, I will show that this phenomenon is not limited to geographic information, but instead generalizes to datasets relied upon by intelligent technologies more broadly. Specifically, I will discuss a project that showed that biases in well-known gold standard training and test datasets for semantic relatedness algorithms have resulted in false, widely-held understandings about the relative accuracy of each algorithm. Finally, I will conclude by presenting a project that points to an exciting potential solution to under-coverage in large datasets of crowdsourced information: social science theory. This project demonstrated that by operationalizing several well-known theories from the discipline of human geography and combining these with crowdsourcing-based approaches, we were able to create high-accuracy semantic relatedness algorithms that are robust against under-coverage concerns for tasks in the geographic domain.
We live in a world where the pace of everything from communication to transportation is getting faster. In recent years a number of “slow movements” have emerged that advocate for reducing speed in exchange for increasing quality. These include the slow food movement, slow parenting, slow travel, and even slow science. We propose the concept of “slow search,” where search engines use additional time to provide a higher quality search experience than is possible given conventional time constraints. While additional time can be used to identify particularly relevant results within the existing search engine framework, it can also be used to create new search artifacts and enable previously unimaginable user experiences. In this talk I focus on how search engines can make use of additional time to employ a resource that is inherently slow: people. Using crowdsourcing and friendsourcing, I will highlight opportunities for search systems to support new search experiences with high quality result content that takes time to identify.
Despite an increasing use of advanced technologies in healthcare, trauma and emergency medical resuscitation --- the team-dependent and information-intensive processes of evaluating critically ill patients in a dedicated facility in the emergency department --- remain among the few settings that lack IT support and depend on paper artifacts. To bridge this paper-digital gap, our multidisciplinary research group has studied the work of resuscitation teams over the past eight years. Our long-term research goal has been the design and development of innovative approaches for real-time presentation of process information to support situation awareness and coordination of these fast-response medical teams.
In this talk, I will first highlight findings from a series of studies we performed in adult and pediatric trauma centers to derive system requirements. I will then describe how we designed and evaluated TRUBoard, an information display, through an iterative design process combined with rapid prototyping and user participation. Although our findings showed the potential for information displays in this setting, the process also revealed several design tensions that guided our designs for interdisciplinary teamwork in a safety-critical environment. For instance, we found that teams' attitudes towards the system shifted as the design progressed from personalized to common displays or when the context-specific information changed from state-based to checklist-based presentation. I will conclude by discussing these and other issues relevant to the use of IT for assisting dynamic work processes such as emergency medical resuscitations.
In this talk I'll discuss an in-progress study on perceptions of collective action on Facebook, and the relationships between those perceptions and what people choose to do on Facebook. I'll use this study as an example of our approach to doing effective UX research in an environment that is rich with data. I'll also share how Facebook's UX research team uses a multi-method, multi-discipline approach that brings together qualitative and quantitative methods to build products that 1.3 billion people use every month. Finally, I'll conclude by discussing some of the thorny issues we routinely deal with in our product research.
Storytelling is essential for communicating ideas. When they are well told, stories help us make sense of information, appreciate cultural or societal differences, and imagine living in entirely different worlds. Audio/visual stories in the form of radio programs, books-on-tape, podcasts, television, movies, and animations are especially powerful because they provide a rich multisensory experience. Technological advances have made it easy to capture stories using the microphones and cameras that are readily available in our mobile devices. But the raw media rarely tells a compelling story. The best storytellers carefully compose, filter, edit and highlight the raw media to produce an engaging piece. Yet, the software tools they use to create and manipulate the raw audio/video media (e.g. Pro Tools, Premiere, Final Cut Pro, Maya etc.) force storytellers to work at a tediously low level – selecting, filtering, cutting and transitioning between audio/video frames. While these tools provide flexible and precise control over the look and sound of the final result, they are notoriously difficult to learn and accessible primarily to experts. In this talk I'll present recent projects that aim to significantly reduce the effort required to edit and produce high-quality audio/visual stories.
Humans have excellent spatial memory - even people who always lose their keys still remember the locations of thousands of objects in everyday life (alarm clocks, toasters, televisions, books...). User interfaces have made some use of spatial memory (everyone remembers where the File and Edit menus are), but this natural human capability is surprisingly underused in visual design. Instead, interfaces are typically based on navigation, which is good for novices but bad for experts. In this seminar, I will explore some issues underlying human spatial memory, and I will introduce recent projects that demonstrate the power of spatial memory as a design tool for human-computer interaction. Spatial memory can improve expert performance with user interfaces, and can help users avoid the problem of being "trapped in beginner mode" that occurs with current designs.
Over the past few years, I have been developing and deploying interactive crowd-powered systems that solve characteristic “hard” problems to help people get things done in their everyday lives. For instance, VizWiz answers visual questions for blind people in seconds, Legion drives robots in response to natural language commands, Chorus holds helpful general conversations with human partners, and Scribe converts streaming speech to text in less than five seconds.
My research envisions a future in which the intelligent systems that we have dreamed about for decades, which have inspired generations of computer scientists from the field's beginning, are brought about for the benefit of people. My work illustrates a path for achieving this vision by leveraging the on-demand labor of people to fill in for components that we cannot currently automate, by building frameworks that allow groups to do together what even expert individuals cannot do alone, and by gradually allowing machines to take over in a data-driven way. A crowd-powered world may seem counter to the goals of computer science, but I believe that it is precisely by creating and deploying the systems of our dreams that we will learn how to advance computer science to create the machines that will someday realize them.
Data analysis is a complex process with frequent shifts among data formats and models, and among textual and graphical media. We are investigating how to better support the life cycle of analysis by identifying critical bottlenecks and developing new methods at the intersection of data visualization, machine learning, and computer systems. Can we empower users to transform and clean data without programming? Can we design scalable representations and systems to visualize and query big data in real time? How might we enable domain experts to guide machine learning methods to produce effective models? This talk will present selected projects that attempt to address these challenges and introduce new tools for interactive visual analysis.
At the University of Minnesota's GroupLens Lab, we conduct research on social computing and crowdsourcing systems. We study behavior on systems such as Wikipedia and Twitter, and we also have created and maintain our own online communities, notably MovieLens and Cyclopath. These communities have attracted large user bases, which lets us use them as testbeds to try out and evaluate new algorithms and user interface techniques.
In this talk, I will describe and illustrate our way of doing research. I will identify several general tensions we manage, such as (1) the ease of doing research with publicly available data from platforms like Wikipedia and Twitter vs. the limits on the methods we can apply using these data; and (2) the power of doing research with our own online communities vs. the costs and risks of developing and maintaining them. I will discuss several research themes that we have studied across multiple research platforms, notably intelligent task routing. I will illustrate these points by describing specific research case studies on Wikipedia, MovieLens, and Cyclopath.
Question-asking on online social networks is an increasingly common method of information seeking, used as an alternative to search engines in cases where users seek subjective opinions or require trusted answers personalized to their tastes. In this talk, I will give an overview of users’ motivations for choosing whether to satisfy an information need via a social networking site versus a more traditional keyword search. I then propose a new information-seeking paradigm, socially embedded search engines, which bring many of the benefits of traditional search engines into the context of online social networks. I will discuss the design and deployment of two prototype socially embedded search engines: SearchBuddies, which augments Facebook exchanges with algorithmically-generated answers, and MSR Answers, which responds to Twitter questions with crowdsourced replies.
Thomas W. Malone
This talk will describe how the same statistical techniques used to measure intelligence in individuals can be used to measure the "collective intelligence" of groups. We find that, just as with individuals, a single statistical factor can predict the performance of a group on a wide range of different tasks. This factor is only weakly correlated with the group members’ individual intelligence. It is, however, correlated with the group members’ social perceptiveness, conversational behavior, and gender.
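The "single statistical factor" finding can be illustrated with a small simulation (the score matrix, loadings, and noise level below are invented for illustration, not data from the studies described): if one latent group trait drives performance across many tasks, the first principal component of the standardized group-by-task score matrix explains most of the variance.

```python
import numpy as np

# Hypothetical scores: rows = groups, columns = tasks.
# We generate them from a single latent "collective intelligence" value
# per group, plus noise, mimicking a one-factor structure.
rng = np.random.default_rng(0)
c = rng.normal(size=(20, 1))                    # latent factor per group
loadings = rng.uniform(0.6, 0.9, size=(1, 5))   # how strongly each task loads
scores = c @ loadings + 0.3 * rng.normal(size=(20, 5))

# Standardize each task's scores, then ask how much variance the
# first principal component (the candidate "c factor") explains.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
cov = np.cov(z, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
explained = eigvals[0] / eigvals.sum()
print(f"Variance explained by first factor: {explained:.2f}")
```

With a genuine one-factor structure like this, the leading eigenvalue dominates; real group data would show a weaker but still substantial first factor.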
The talk will also include brief overviews of other work to increase collective intelligence by: (a) combining predictions from humans and computers, (b) mapping the "genome" of collective intelligence, and (c) harnessing the collective intelligence of thousands of people around the world to develop proposals for what to do about global climate change.
The potential of connected crowds to solve complex problems has been the focus of considerable research in recent years across disciplines and certainly within the field of human-computer interaction (HCI). This talk examines the crowdsourcing phenomenon during natural disasters and other crisis events. The crisis context provides a unique perspective on crowd activity, one where timescales are compressed and prosocial behaviors are magnified. It also comes with a ready-made problem regarding information sharing—some people have it, many others need it, and it often requires a great deal of improvisation to get it to where it's needed.
Armed with mobile devices and connected through social media platforms, people on the ground of disaster events are newly enabled to share information about unfolding events with their neighbors, emergency responders and the wider "crowd." This real-time information could become a vital resource for affected people and responders, but it remains difficult to get the right information to the right person at the right time. An important part of this problem, as well as a potential route to its solution, are the activities of the crowd—in this case the global audience. In our hyper-connected world, large-scale disaster events also act as catalysts for mass "convergence" online, where people from all over the world come together to make sense of the event. This activity functions both to generate massive volumes of information and to help organize that information, intentionally and otherwise. Pulling from multiple studies of crowd work during crisis events, in this talk I'll describe how the crowd attempts to solve complex problems and address gaps in response efforts through digital volunteerism and other productive crowd work. I will also outline research directions for supporting and leveraging crowd work to improve response efforts during disasters.
This talk will report on research projects undertaken within the Human-Computer Interaction (HCI) field of Computer Supported Cooperative Work (CSCW) that investigate collaboration in the development of scientific cyberinfrastructures. Our research takes what we call an infrastructural perspective in order to shed light on the social and organizational processes of scientific work and software development work so that we can better understand and therefore better support scientific practice. In order to support innovative scientific research, we must understand how data, software, and systems embody scientific practices and values. We use qualitative social science methods such as interviewing and observation to understand not only the social side of scientific work and information infrastructure development, but also the interaction between the social and the technical. Our work has included investigations of data sharing and database and software development in fields such as neuroscience, metagenomics, epidemiology, and computer science.
Facebook, like other social network sites, allows users to broadcast requests for support from their networks. This support can range from information, to opinions, to requests for offline action. Cliff Lampe will discuss work, spanning several papers, that looks at how people mobilize their Facebook networks, and how those requests relate to their impressions of social capital in the network, demographics, and other psychosocial factors. In this work, Lampe and colleagues have found that people do seek resources from their Facebook networks, but that this activity is associated with their impression of the social norms on the site, their impressions of social capital on the site, and their desire to present a positive image to their audience.
Vulnerable populations are at higher risk for educational, physical, and social challenges. At the same time, they often have limited access to and experience with information and communication technologies. However, the low cost of smartphones and data service through these phones is beginning to change this trend, opening new opportunities for using mobile and ubiquitous computing to support them. In this talk I will describe a series of projects focused on empowering people who are not typically represented in the design process to use collected data to address real human needs in sensitive and ethically responsible ways. Understanding, designing, and creating technologies of inclusion require inclusive and democratic approaches to design. Additionally, in this work, design can be complicated by the need to consider the networks of people responsible for the care of others and their information. Thus, I will also describe holistic systems design methods that include participatory, democratic, and collaborative approaches for the creation of interfaces and interventions for a variety of people involved in any particular setting.
Face to face, the body anchors identity. Online, there is no body, only data. On the one hand, this makes identity amorphous. We can easily create multiple names, appear and disappear in many guises; without the body and the rich detail of the face, these identities are often vague and unmemorable. On the other hand, while many of our face-to-face interactions are ephemeral, our online ones are permanent and searchable, able to resurface in unforeseen contexts indefinitely. In this talk I will propose that data portraits - visualizations of one's information history - can help people manage their self-presentation and make the online world more vivid and legible. Designed to evocatively depict an individual, a data portrait can be a virtual mirror or an avatar, one's information body in an online space. To create these portraits, we must address important questions about privacy, control, aesthetics, and social cognition.
Relationships are the heart of social media: they make it *social*. In this talk, I will present two social computing systems that place relationships and networks (i.e., multiple relationships) at the center of their design. First, I will present our work on modeling tie strength (i.e., how strong a relationship is), and how it can act as a tool for both design and analysis. Specifically, I'll present We Meddle, a Twitter app that builds categories of friends by inferring tie strength. Next, I will
present a second Twitter app called Link Different that is inspired by the structure of a single social network triad. Link Different lets its users know how many of their followers already saw a link via someone else they follow. Hundreds of thousands of people have used these two systems. Grounding my argument in them, I'll conclude the talk by suggesting that taking this kind of sociological approach to social computing opens many new problems for design.
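To make the tie-strength idea concrete, here is a minimal sketch of how an app might score and bucket relationships. The feature names, weights, and threshold below are hypothetical illustrations, not the actual We Meddle model; the common pattern is a weighted combination of interaction signals squashed into a score, which is then mapped to friend categories.

```python
import math

# Hypothetical interaction signals and weights (illustrative only).
FEATURE_WEIGHTS = {
    "days_since_last_reply": -0.02,  # staler ties are weaker
    "mutual_friends": 0.05,
    "replies_exchanged": 0.10,
    "mentions": 0.08,
}

def tie_strength(features: dict) -> float:
    """Score in (0, 1): logistic squash of the weighted feature sum."""
    s = sum(FEATURE_WEIGHTS[k] * features.get(k, 0.0) for k in FEATURE_WEIGHTS)
    return 1.0 / (1.0 + math.exp(-s))

def friend_group(features: dict) -> str:
    """Bucket a contact into a coarse category, as a We Meddle-like app might."""
    return "strong" if tie_strength(features) >= 0.6 else "weak"

close = {"days_since_last_reply": 2, "mutual_friends": 30,
         "replies_exchanged": 15, "mentions": 5}
distant = {"days_since_last_reply": 200, "mutual_friends": 1,
           "replies_exchanged": 0, "mentions": 0}
print(friend_group(close), friend_group(distant))  # → strong weak
```

A real system would learn the weights from labeled relationship data rather than hand-set them, but the scoring-and-bucketing shape is the same.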
What does it mean to be literate at a time when you can search billions of texts in less than 300 milliseconds? Although you might think that "literacy" is one of the great constants that transcends the ages, the skills of a literate person have changed substantially over time as texts and technology allow for new kinds of reading and understanding. Knowing how to read is just the beginning of it -- knowing how to frame a question, pose a query, how to interpret the texts you find, how to organize and use the information you discover, how to understand your metacognition -- these are all critical parts of being literate as well. In this talk Russell will review what literacy is today, in the age of Google, and show how some very surprising and unexpected skills will turn out to be critical in the years ahead.
Joseph A. Konstan
Have a question? Go to Wikipedia. Or browse a discipline-specific archive. Or even just post it online and hope people will help. This talk discusses several studies that explore how online communities create and organize information. From these, we can draw lessons on how social psychology, economics, and other sciences of human behavior can be harnessed to understand and then design effective online communities. In particular, we look at cases where computation has the potential to improve the functioning of such communities.
Many digital resources, like the Web, are dynamic and ever-changing collections of information. However, most information retrieval tools developed for interacting with Web content, such as browsers and search engines, focus on a single static snapshot of the information. In this talk, I will present analyses of how Web content changes over time, how people re-visit Web pages over time, and how re-visitation patterns are influenced by changes in user intent and content. These results have implications for many aspects of information retrieval and management including crawling policy, ranking and information extraction algorithms, result presentation, and systems evaluation. I will describe a prototype that supports people in understanding how the information they interact with changes over time, and a new retrieval model that incorporates features about the temporal evolution of content to improve core ranking. Finally, I will conclude with an overview of some general challenges that need to be addressed to fully incorporate temporal dynamics in information retrieval and information management systems.
People create enormous amounts of content in social media such as Flickr, Twitter, Facebook, and Blogger -- and because these media focus on awareness and current activity, most of this content disappears under the sea of the new, never to be seen again. In this talk we'll explore how systems can re-use this user-created content to help them better understand themselves. We'll focus on Pensieve, a system that uses social media content to help people both reminisce and write more about their past. Its design was driven by a series of interviews and prototypes, and it was deployed for six months both as a standalone website and on Facebook. Our experiences suggest that systems that support reminiscence are a valuable area to explore and highlight both important design tradeoffs between privacy and tools for social interaction and future directions for such systems, including ways to trigger reminiscence such as place and culturally-specific tools for reminiscing. If there is time, we'll also talk (briefly) about Being Heard, a new project that aims to help people reflect on other people and how they talk with them by visualizing and analyzing their archives of email, chat, and other interactions.
There is a great deal of optimism (rightly) about the benefits of social systems in generating "collective wisdom." However, there are many situations where the collective is actually worse than the individual. Being able to distinguish between the success and failure cases of collective behaviors is as important as finding new ways to mine and leverage these behaviors. I'll cover two large projects (and a smattering of other ongoing work) that demonstrate the positive and negative features of automatically analyzed social data.
The first project focuses on the rapid globalization of Wikipedia. Pages for the same topic in many different languages are co-evolving but frequently at different rates leading to variance in size, scope, and quality. I'll describe a first attempt at reconciling these differences by automatically aligning Wikipedia data in different languages, detecting discrepancies and filling in missing information. The attractive feature of the method is that it uses the "wisdom" of independent groups to work effectively even in the absence of dictionaries.
The second project demonstrates the potential pitfalls of collective wisdom. Specifically, we look at collaborative visualization systems in which systematic bias leads groups to make significant mistakes in the graphical perception of visualizations. By manipulating social signals in realistic ways, we were able to influence individuals in chart comprehension tasks. I'll cover how this was done, why social signals are particularly harmful, and how other kinds of collective behaviors might solve this problem.
Crisis informatics addresses socio-technical concerns in large-scale emergency response. Additionally, it expands consideration to include not only official responders (who tend to be the focus in policy and technology-focused matters), but also members of the public. It therefore views emergency response as an expanded social cognition system where information is disseminated within and between official and public channels and entities. Crisis informatics wrestles with methodological concerns as it strives to develop new theory and support informed development of information and communications technology (ICT) and policy.
In this talk, I propose a paradigm-shifting perspective: that innovation for emergencies could benefit by reframing emergency management as a set of socially-distributed information activities that support powerful, parallel, socio-technical processing of problems in times of change and disruption. I discuss the implications of such a view for scientists, technologists, emergency management, and members of the public. To that end, I will present results from our empirical research on the vast computer mediated interaction that has occurred during recent disaster events. I will demonstrate how variants on existing methodological approaches are necessary for sampling and studying "social media" content in disaster events to get an accurate assessment of content. I will discuss the newly emerged roles of the "on-line converger" and the digital volunteer, and how those roles will play an increasingly important part in the future of emergency management.
The rapid and widespread growth of social software in organizational and work settings has presented new challenges and opportunities. In this talk, several “venture research” projects will be discussed in which social software has been specifically designed for business use and evaluated in large-scale field trials. These include a social bookmarking and social search service (dogear) and a social networking application (beehive). Two main topics will be explored in detail. First, I will describe the design challenges and solutions for successful end-user adoption and acceptance of these novel systems. The use of hybrid recommenders, which combine social proximity and topical relatedness, will be shown to promote both the creation of new content as well as to boost consumption of existing information. Incentive systems and crowdsourcing techniques will also be discussed as additional mechanisms to enhance system use. And second, I will discuss and present evidence of how these new social applications enable increasing interaction among work populations that differ along many dimensions, including geography, language, organization and job role. Some recent work on inter-cultural collaboration in a globally distributed company will be discussed.
Mobile phones are being rapidly and enthusiastically adopted in rural and even non-electrified regions in Uganda. This trend brings with it new paradigms of access and use as phones have quickly become incorporated into the social worlds and interpersonal intricacies of village life. In this talk I will consider the dynamics of mobile phone sharing. By sharing I mean the practice of granting access or redistributing a privately-owned good without direct financial compensation. Sharing as a social practice is undertheorized but can be better understood drawing from literatures on gifting, common property, moral economy, reciprocity, and other intimate forms of exchange. In this talk I will discuss some of the distinctive issues of equality in access to technology that arise from a multitude of sharing configurations. In rural Uganda, efforts at social policing and managing social obligations were mediated and concretized by mobile phones. Patterns of phone sharing led to preferential access for needy groups (such as those in ill health) while systematically and disproportionately excluding others (women in particular). I propose a framework that takes into account the distinct roles an individual may have in relation to the phone and the benefits that accrue asymmetrically to each role. This framework may be useful for revising survey design work on technology adoption and access to suit research in a broader diversity of settings beyond the Euro-American context.
Literacy levels in most poor countries remain shockingly low and formal education is making little progress. MILLEE improves literacy through language learning games on cellphones – the “Personal Computers of the developing world” – which are a perfect vehicle for new kinds of out-of-school language learning. The project focuses on developing scalable, localizable design principles and tools for language learning. The challenges are (i) to integrate sound learning principles, (ii) to provide concrete design patterns that integrate entertainment and learning, and (iii) to understand cultural and learning differences in children in developing regions.
I will describe a framework called PACE which addresses these challenges and nine rounds of fieldwork that contributed to its development. I will discuss the complex adoption ecology in developing regions, and how MILLEE preserves learning principles while supporting rich localization and customization at multiple stages in the adoption hierarchy. I will discuss our work patterning learning games after local children’s traditional village games, and the benefits this approach offers. Finally, I will describe the implications for design that arose from our after-school program deployment and out-of-school ethnographic studies.
The MILLEE project is in its 6th year. It has received major sponsorship from the MacArthur Foundation, Microsoft, National Science Foundation, Nokia, Qualcomm and Verizon. MILLEE has been featured in the press in India (where previous pilots were based), a Canadian public television documentary, and ABC News. With a generous donation of 450 cellphones from Nokia, we are commencing a controlled experiment with 800 rural children in 40 villages in India. The upcoming study will target an academic year of English curriculum, and will benchmark MILLEE learning outcomes against a major credentialing exam in India. Early replications are underway in rural Kenya and elsewhere.
Social computing. Social media. Social networks. Social platform. Social is the new e-, cyber-, -tronics: every Internet application today needs to have the modifier "social" associated with it. Friedrich Hayek famously said that the word 'social' empties the noun to which it is applied of its meaning; he called it a "weasel word". In my view, in today's world the word does not exactly empty the noun it modifies, it fills it with a landscape of possibilities, most of them driven by business and technical imperatives rather than by human needs.
Social is in the details of everyday action and human-human interaction; it cannot be wholly captured in a word or a graph, and it is not an application or a platform. In this talk, I will focus on the target of all this interest, people – or variously: consumers, the nodes on the graph, customers, and users. I ask: what are the actions and activities that constitute social as people understand and enact it? How are people “social”? What do they do, what actions do they take, what does interaction through the Internet do for them and, more importantly, mean to them? Using case study examples, I’ll dig into the details that are elided in typical network models, which reflect generalized patterns of instrumental actions hewn from aggregated system logs. I complement - and challenge - these models with the intention of dismantling the word “social” and focusing attention back on people, their actions, needs, emotions and communications.
Visualization is often viewed as a way to unlock the secrets of numeric data. But what about political speeches, novels, and blogs? These texts hold at least as many surprises. On the Many Eyes site, a place for public visualization, we have seen an increasing appetite for analyzing documents. I present a series of techniques for visualizing and analyzing unstructured text. I also discuss how public events such as the presidential election last year catalyze people's passion for making sense of prose.
Distributed work is often characterized by long periods of time working apart, punctuated by face-to-face meetings and site visits. Little research, however, has explored the interplay between distant work and these collocated intervals. In an ethnographic study of 143 members of 9 software development teams, we explore the interplay between site visits and distant work and its effects on interpersonal dynamics and the coordination of work. Our findings suggest that site visits promote situated “knowing who”: knowledge about distant colleagues that is situated in context and intertwined with practice. During site visits, people observed and interacted with their distant colleagues in these colleagues' context, thus gaining a deeper understanding of their behavior within the social and physical context in which they were situated. As they interacted, they reconstituted collaborative practices, which further facilitated this knowing who and contributed to higher levels of interpersonal trust. After team members returned to their home site, some of these new collaborative practices carried over to their work with distant colleagues, and additional new practices evolved as a result of the situated knowing who generated during site visits.
As personal robots enter our workplaces and homes, it will be important for them to learn new tasks and abilities from a wide demographic of people. Ideally, people will be able to teach robots as naturally as they teach one another. Consequently, robots should be socially competent enough to take advantage of the same sorts of interpersonal cues and skills that humans readily use to teach and learn.
Our research seeks to identify simple, natural, and prevalent teaching cues and program robots with social-affective mechanisms to enable them to learn efficiently and effectively from natural interactions. In this talk, I present several social skills implemented on our robots and discuss how they address the challenge of building robots that learn from people. These skills include the abilities to direct attention, to understand affect and intent, to express the learning process to the human instructor, and to regulate the interaction with the instructor. Through these examples, we show how social, emotional, and expressive factors can be used in interesting ways to build robots that learn from people in a manner that is natural for people to teach.
Social and behavioral sciences have long conceived the human mind as an autonomous computational machine. However, recent developments in several fields of research, including social and cultural psychology, evolutionary psychology, and neuroscience among others, have converged to suggest that the human mind – with all the neural mechanisms underlying it – is biologically prepared and, yet, is shaped by and completed through each person’s active participation in socio-cultural environments and activities defined therein. In this presentation, evidence for this thesis is reviewed to suggest that human agency (the self) and the neuronal component processes constituting the self (the brain) are socio-culturally conditioned and, as such, can show remarkably divergent characteristics depending on the socio-cultural environments in which they are engaged. This new, more expanded view of personhood offers important implications for the behavioral sciences.
Co-sponsored with the Department of Psychology
Automatically detecting human social intentions from spoken conversation is an important task for social computing and for dialogue systems. We describe a system for detecting elements of interactional style: whether a speaker is awkward, friendly, or flirtatious. We create and use a new spoken corpus of 991 4-minute speed-dates. Participants rated themselves and each other for these elements of style. Using rich dialogue, lexical, disfluency, and prosodic features, we are able to detect flirtatious, awkward, and friendly styles in noisy natural conversational data with above 70% accuracy, significantly outperforming not only the baseline but also the human interlocutors. We find that features like rate of speech, pitch range, energy, and the use of questions help detect flirtatious speakers; collaborative conversational style (laughter, collaborative completions, questions, and second person pronouns) helps in detecting friendly speakers; and disfluencies help in detecting awkward speakers. In analyzing why our system outperforms humans, we show that humans are very poor perceivers of flirtatiousness or friendliness in others, instead often projecting their own intended behavior onto their interlocutors. This talk describes joint work with Dan McFarland (School of Education) and Rajesh Ranganath (Computer Science Department).
Co-sponsored with the Cognitive Science Speaker Series
In some communities, a prevalent form of learning is through keen observation of ongoing community events in which people collaborate when they are ready. This approach to learning seems to be especially common in Indigenous-heritage communities of the Americas, and less prevalent in communities that segregate children from the range of activities of their community. These ideas will be illustrated with research in Guatemalan Mayan, Mexican-heritage, and European-heritage US communities.
Co-sponsored with the Human Development and Social Policy Colloquium Series
Many organizations, many sciences, and many of us have collaborations with people who are not nearby. We and they use various technologies like shared files, email, blogs, and instant messenger to support this work. Some of these collaborations work; others do not. What makes them succeed? We have collected data from 200 collaboratories, deep data from about 30, plus data from 20 sites in corporations, to determine what makes the collaborations work and what makes them fail. I will review this line of work and describe the factors that are apparently most important in both science and corporate long-distance work. All of us may benefit from learning these factors to ensure that our own distance collaborations succeed.
Social robots recognize and respond to human social cues with appropriate behaviors. These robots are unique tools in the study of human social development, and have the potential to play a critical role in the diagnosis and treatment of social disorders such as autism. In the first part of this talk, I present four examples of what building social robots has taught us about human social development. These examples cover topics of perceptual development (vocal prosody), sensorimotor development (declarative and imperative pointing), linguistic development (learning pronouns), and cognitive development (self-other discrimination). The second half will focus on the application of social robots to the diagnosis and therapy of autism. Autism is a pervasive developmental disorder that is characterized by social and communicative impairments. Based on five years of integration and immersion with a clinical research group which performs more than 130 diagnostic evaluations of children for autism per year, I will discuss how social robots will impact the ways in which we diagnose, treat, and understand autism.
Research on collaboration technologies combines discovery and invention. On the discovery side, using two eye tracking machines enables us to study collaborative processes at a deep level. We record the gaze location from 2 partners working on-line on tasks ranging from a collaborative Tetris game (3 min) to complex problem solving (60 min). In one study, we found that team performance was related to gaze convergence and that the distance between the partners' gaze predicted conversational misunderstandings. We search for gaze patterns that would predict knowledge-productive verbal episodes such as elaborated explanations or conflict resolution. We integrate these findings into the design of gaze-sensitive groupware, i.e. collaboration tools where each user is informed about her partner's gaze in a non-intrusive way. Besides this, most collaborative technologies we develop target the augmentation of face-to-face collaborative problem solving. We embed digital technologies into pieces of interactive furniture such as the Reflect Table (it provides groups with a representation of their own interactions), the Tinker Table (a tangible world for learning logistics), the DockLamp (an interactive lamp exploiting finger tracking) and WiKid (a robotized computer display).
When people interact with virtual humans, and with each other in technologically mediated ways, they can experience copresence with their partner rather differently than they do in face to face settings. This talk explores the effects of partial and limited copresence in experimental comparisons of face to face and virtual/mediated interactions. In one set of studies, survey respondents answering sensitive and embarrassing questions sometimes give different answers to human and virtual interviewers, in ways that suggest domain-specific rapport and embarrassment with virtual humans, and that require a more detailed understanding of rapport and embarrassment with human partners. An ongoing set of studies explores how chamber and jazz musicians coordinate with each other face to face, via remote video, and via remote audio; although many musicians report different degrees of copresence in the different media, some report no difference at all, and their coordinated performance suggests they are correct. To understand copresence with virtual partners, we will need a more nuanced view of copresence that includes the moment-by-moment demands of different domains, tasks, and partnerings, as well as an understanding of individual proclivities and differing needs for cues from partners.
During the last decade eye movements have emerged as a powerful tool for studying spoken language processing in Visual World tasks, in which participants process language in the context of a task-relevant visual workspace. I'll illustrate some of the applications of the Visual World methodology, sampling from recent work from my lab to illustrate applications in the domains of speech perception (integration of asynchronous cues), word recognition (time course of lexical competition), sentence processing (reference and ambiguity resolution) and interactive conversation (referential domains and interlocutor perspective). In each of these domains new insights have come from the temporal sensitivity that eye movements provide.
Information workers experience a high amount of disruptions in their daily work due to managing multiple tasks and interactions, large amounts of information and various technologies. In this talk I will present empirical results from fieldwork observations and experiments over three years which detail the extent to which information workers multi-task, irrespective of their organizational role. I will discuss how multi-tasking impacts various aspects of collaboration and communication in the workplace. Not only do information workers switch continually among multiple tasks but they also switch continually among interactions in varied contexts, such as work, home, and organization. We found that people compensate for interruptions by working faster, but this comes at a price of experiencing more stress. These results challenge the traditional way that most IT is designed to organize information, i.e. in terms of distinct tasks. Instead, I will discuss how IT should support information organization in a way consistent with how most people were found to organize their work, which is in terms of much larger thematically connected units of work. I will present a prototype of a technology that can help support people in their multi-tasking and will also discuss how the results present opportunities for new social and technical solutions to support multi-tasking in the workplace.
Uses of novel digital technologies often start with students and are eventually adopted, initially reluctantly, by enterprises. For the past six years much of my research has focused on early enterprise adoption of communication technologies including instant messaging, weblogs, wikis, and social networking software such as Facebook and LinkedIn. The first half of this presentation will outline a handful of patterns that emerged in 20 years of studying technology adoption that I wish I had recognized much earlier and which remain useful. Then I'll give an overview of enterprise uses of emerging technologies, with some speculation as to where we may be heading.
Skills of emotional intelligence include the ability to recognize and respond appropriately to another person's emotion, and the ability to know when (not) to display emotion. This talk will demonstrate advances at MIT aimed at giving several of these skills to technology, including mobile devices, robots, agents, and wearable and traditional computers. I will conduct a live demonstration of technology, developed with El-Kaliouby, to recognize complex cognitive-affective states in real time from a person's head and facial movements. This technology computes probabilities that a person looks like he or she is concentrating, interested, agreeing, disagreeing, confused, or thinking. These states signal important information such as when is a good time to interrupt, or when it might be appropriate to apologize for interrupting. A wearable version of this system is being developed for helping people with autism, who often face challenges reading social-emotional cues. I will describe several other applications of this technology and also highlight social, ethical, and philosophical issues surrounding affective technologies.
With advances in computing technology, autonomous robots are becoming viable in such critical domains as search and rescue, military battle, mine and bomb detection, scientific exploration, law enforcement, and hospital care. Robotic assistants ranging from museum guides to hospital delivery robots need to interact with people in person and remotely. Our research group has studied people's mental models of robots, the problem of mutual understanding in human-robot communication, and how the design of robots affects those processes. Mental models are expectations that arise from agent detection, causal reasoning, and theory of mind. Mutual understanding develops as robots conform, or fail to conform, to people's expectations. People know that a robot is a machine, but they have surprisingly high social expectations and interpretations of a robot's behavior even when the robot has impoverished social form or character.
Construction of the Empire State Building: 7 million human-hours. The Panama Canal: 20 million human-hours. Estimated number of human-hours spent playing computer solitaire around the world in one year: billions. A problem with today's computer society? No, an opportunity.
What if this time and energy could be channeled into useful work? What if people could play computer games and accomplish work without even realizing it? What if billions of people collaborated to solve important problems for humanity or generate training data for computers? My work aims at a general paradigm for doing exactly that: utilizing human processing power to solve computational problems in a distributed manner.
In particular, I focus on harnessing human time and energy for addressing problems that computers cannot yet solve. Although computers have advanced dramatically in many respects over the last 50 years, they still do not possess the basic conceptual intelligence or perceptual capabilities that most humans take for granted. By leveraging human skills and abilities in a novel way, I want to solve large-scale computational problems and/or collect training data to teach computers many of these human talents. To this end, I treat human brains as processors in a distributed system, each performing a small part of a massive computation. Unlike computer processors, however, humans require an incentive in order to become part of a collective computation. Among other things, I use online games as a means to encourage participation in the process.
In this talk, I will describe my work in the area of Human Computation.
One of the most influential visions in computing today is the view that in the future we will be able to record everything that ever happened to us, giving us a complete digital record of our lives. I will argue that there are major problems with this approach, however. I will review various 'digital memory' systems built over the last 20 years, concluding there is little convincing evidence for their utility. One of the problems with the approach is that it is unclear about the memory functions that such systems are intended to serve. I will suggest a different approach to the design of digital memory systems, based around a psychologically motivated view of memory and describe studies that illustrate this new approach.
Over the last fifty years, the "Big Five" model of personality traits has become a standard in psychology, and research has systematically documented correlations between a wide range of linguistic variables and Big Five traits. A distinct line of research has explored methods for automatically generating language that varies along personality dimensions, which, in the main, has only superficially exploited the psycholinguistic findings. In this talk, I will briefly summarize our previous work on statistical language generation, and then present PERSONAGE (PERSONAlity GEnerator), an extension of previous work that implements and utilizes 29 different parameters related to extraversion, an important aspect of personality. I will compare two methods for generating personality-rich language: (1) overgeneration and selection using statistical models trained from judges' ratings; and (2) direct generation with particular parameter settings suggested by the psycholinguistic literature. An evaluation shows that both methods reliably generate utterances that vary along the extraversion dimension, according to human judges, and identifies the parameters that, in our domain, contribute most to judges' perceptions.
In all the media hubbub around the recent release of Apple's iPhone, one consistent critique is that it lacks a GPS unit. It's interesting that, at this point, a claim to technological leadership for a mobile device can founder on this. Mobility is no longer sufficient; location-tracking is a key feature. However, the introduction of location-based technologies has traditionally been accompanied by a series of concerns over privacy. These discussions, though, adopt a fairly reductive model of privacy, concerned primarily with the trade-offs involved in service provision and location disclosure.
Following a strategy of selecting extreme examples as prototypical cases for potential futures, we have been studying a group of paroled sex offenders who are tracked via GPS as part of their parole conditions. We were interested in the way in which pervasive location tracking in a complex social context affects one's experience of everyday space. While the issues that arise are highly specific to their particular situation, they are suggestive of a new set of considerations for location tracking in consumer devices. Based on our preliminary studies, I will discuss some of these concerns, including the multiple accountabilities of presence at specific places and times, the legibilities of everyday space both from within and without, and the underexamined relationship between mobile technologies and the bodies that carry them.
Effective collaboration requires the creation and maintenance of common ground understandings. This is an especially interesting problem in the case of intercultural collaboration, where communicative conventions may not be shared. However, intercultural collaboration often takes place in professionally relevant material settings and among people who share professional competence. In this paper we show how Japanese airline pilots and American flight instructors overcome pronounced differences in language and culture and achieve effective collaboration. They do this by drawing on a rich body of shared professional pilot culture and by exploiting richly multimodal, situated communication practices to produce common ground understandings.
This talk will explore several dimensions of my global cross-cultural collaborative research with ethnically-diverse populations, the focus being the iteration between diverse communities, technology co-production/design, and social/behavioral impacts.
A local historical approach first reveals the means by which diverse cultural priorities and structures of knowledge can impact the design and ultimate sustainability of database-driven new media systems. Findings from these studies demonstrate that locally-grounded, participatory and ethnographic approaches can successfully dovetail with relevant findings in community and cross-cultural technology initiatives, and they lead to the implementation of a series of concrete investigations into:
a. The development of digital systems (and more generally globalized information systems) that can accommodate multiple, local cultural interpretations/ontologies
b. The abilities for communities in the developing world to benefit from technologically-related development, e-governance, microfinance, and public health interventions in reflective and sustainable manners.
c. The ability for diasporic groups to discover bridging and bonding social capital via information systems that directly mobilize social networks.
d. The impact on pedagogy and the potential of distance learning in parts of the world traditionally neglected within the ICT-development dialogue.
The specter of a postbiological and posthuman future has haunted cultural studies of technoscience and other disciplines for more than a decade. Concern (and in some quarters enthusiasm) that contemporary technoscience is on a path leading beyond simple human biological improvements and prosthetic enhancements to a complete human makeover has been sustained by the exponential growth in power and capability of computer technology since the early 1990s. While both the internet – in its Web 2.0 version – and the rapid proliferation of mobile computer-based communications have already produced significant changes in the organization and production of knowledge as well as in the functioning of the global economy, the deeper fear is that somehow digital code and computer-mediated communications are getting under our skin, and in the process we are being transformed. Indeed, among the products Mihail Roco, the senior advisor to the US National Science Foundation and chief architect of the National Nanotechnology Initiative, predicts within the next decade are new types of nanotech interfaces linking people directly to electronics. When considered in light of current research successes in the development of brain-machine interfaces, the sorts of scenarios envisaged by Ray Kurzweil in recent texts such as The Singularity is Near: When Humans Transcend Biology, in which he charts the conditions for the merger of computer-based intelligence and human biology to occur around 2045, begin to sound eminently plausible.
The claim that we are machines on a continuous path of co-evolution with other machines prompts reflection on what we mean by posthuman. If we are crossing to a new era of the posthuman, how have we gotten here? And how should we understand the process? What sorts of selves should we imagine as emerging out of this postbiological human? In Biointerface, I propose to address the impact of digitality on the self, subjectivity, and the body embedded in soon-to-be ubiquitous computing environments – indeed, possibly even postbiological environments of the sort discussed above. The question of embodiment and the future of the human in networked digital environments has been the subject of numerous recent investigations. But these studies, particularly the important recent works of Katherine Hayles, have focused almost exclusively on the role of metaphor, narrative, and ideology in shaping our views and attitudes toward a hypothesized posthuman singularity rather than considering the constitutive role of technology in shaping the human. My discussion will focus on how contemporary scientific and engineering efforts to develop nanomachines interfaced with biological materials, as well as other efforts to replace silicon-based computers with new, biologically-inspired computational media, may reshape the playing field of debates about posthumanity. These efforts ought to give us cause for reflection, but in the end they build on the singularity that constituted us as human in the first place.
Much of what we know about language processing, visual search, memory, and problem-solving has focused on people acting alone. But in reality, people often collaborate on such tasks, coordinating their behavior with each other moment-by-moment. Collaboration promises obvious benefits, but the necessity for coordination also carries potential costs in time and effort. When people are co-present in the same environment, or when they can share visual information remotely, they can use this information as a coordination device. In particular, a partner's eye gaze provides incremental and highly sensitive information that is missing from more intentional forms of pointing with a mouse or other input device. A gaze or a glance may be not only instrumental (necessary for performing a spatial task), but also informative (one partner may be able to use this information in interpreting another's utterance or intentions), or even communicative (the gazer may actually intend for the partner to recognize a meaning, in the "non-natural" sense described by Grice, 1957). The interpretation of eye gaze may be automatic and reflexive, or it may be used as a more flexible cue. I'll present several studies on the use of visual evidence in communication and visual search, conducted both face-to-face and remotely.
This lecture explores the concept of companion species as a response to the critiques of humanism and the urgency of ethical and political questions about multi-species relations. Much more than "companion animals," companion species embraces the human and non-human partners who make worlds in their interactions. Pairing biologists with philosophers and media artists, Haraway explores figurations and stories that link people with other species that are both organic and technological. "When Species Meet" explores contact zones in colonial studies, developmental biology, anthropology, animal studies, science fiction, and ecology.
Gestures are an essential part of our everyday communicative practice. As far as we know, gestures are used in communication in all cultures. It has also been noted that gestural practice varies considerably across cultures. For example, it is well known that the conventions for form-function mapping in so-called emblematic gestures (such as an OK sign) vary cross-culturally. In this presentation, I aim to demonstrate that cross-cultural variation of gesture goes far beyond differences in conventionalized emblematic gestures. More specifically, I will provide results from three studies demonstrating that gestures vary cross-culturally because language, cognition, and values associated with different communicative behaviors vary cross-culturally.
The last decade has seen an explosion of interest in the role emotion plays in human cognition and social interaction. Recent findings in psychology and neuroscience have emphasized emotion's distinct and complementary role in human cognition when contrasted with the rational conceptions of human thought such as decision theory, game theory and logic. Rather than viewing emotion as a distortion of such rational systems, contemporary research emphasizes emotion's functional role and has worked out a number of the mechanisms through which emotion helps an organism adapt to its physical and social environment. Within computer science, there is growing interest in exploiting these findings to expand classical rational models of intelligent behavior. In this talk, I will review current findings on the intrapersonal and interpersonal function of emotion and its potential role in enhancing human-computer interaction. I will then discuss our attempts to model these functions within the context of life-like interactive characters that can engage in socio-emotional interactions with human users for training and education.
One of the common themes of ubiquitous computing is the automated capture of everyday experiences that can be accessed sometime in the future. At Georgia Tech, we have been exploring this theme since the mid-1990s in environments such as the classroom, museums, offices and the home. In this talk, I will reflect on a few of these experiences and explain how challenges in my own personal life have resulted in a variety of opportunities to advance the research agenda for automated capture applications. These opportunities range from solutions to short-term memory failures, to a desire to preserve the legacy of my father's family film history, to a seven-year battle to support the needs of families dealing with developmental disabilities. While there are significant research issues addressed in this body of work, the over-arching message is that everyday life presents many opportunities for human-centered research into the application of emerging technologies.
Over the last decade, we have seen an enormous shift in the balance of power between corporations and their customers, and hence between corporations and those who consult for them. Increasingly, the engagement and energy of customers - as distinct and diverse individuals and communities - fuel corporate success. Just as the age of three-network television has receded into the mists of time, so the idea of the "mass market" has become a nostalgic fantasy. Many of the old-time corporate research labs in Silicon Valley - Interval, Sun, PARC, SRI - have shrunk or disappeared. At the same time, the need for research to inform creativity and guide innovation in new companies with new business models grows ever more acute. At its best, the old school of research pursued grand ideas. But more often than not research in engineering, human factors, or interaction design was framed incrementally and conducted in the abstract, devoid of a situated context or explicit goal beyond presenting a conference paper, filing a patent, or making next quarter's numbers look better.
Most of us know that design innovation in our industry comes from deep insights into people - how they go about their lives, amuse themselves, and do their work. How does one search for the opportunity spaces for innovation in our culture and in the marketplace? How does one carry out design research with people in their communities?
Through examples drawn primarily from student work, this talk will present a process for identifying potential opportunities, framing research questions and selecting appropriate methods, carrying out design research in context, representing and understanding research findings, and making the innovative leap that ignites knowledge drawn from research with the designer's creative spark.
For the past decade, the Value Sensitive Design Research Lab has had, as one focus, the
design of information technologies that support people's privacy. Our approach is grounded
in interactional theory, systematic analyses of direct and indirect stakeholders, and an
integrative tripartite methodology that comprises conceptual, technical and empirical
investigations. In this talk I discuss our approach in the context of three on-going
projects: (1) The Watcher and The Watched, an empirical study of people's perspectives on
the real-time display of a public place on a large semi-public screen, (2) An Open Source
Privacy Addendum, a legal strategy for integrating privacy commitments into open source
licenses, and (3) Value Hot Spots and Opportunities, a method for using value analyses to
enhance groupware system adoption.
For more information on Value Sensitive Design, please see: http://www.ischool.washington.edu/vsd/
Today, Americans are awash in public opinion polls, census data, community studies,
consumer surveys, and social statistics documenting phenomena as diverse as economic
attitudes and sexual behavior. Indeed, it has become nearly impossible to know the
modern public apart from survey results. But this kind of information about "ourselves,"
now taken for granted, only became central to national debates and discussions in the
middle decades of the twentieth century. In these years, more and more individuals would
participate, either as research subjects or consumers of information, in a voluminous
traffic of social scientific numbers, knowledge, and norms. My talk will inquire into
the form these technologies of social representation took, and what sort of cultural
power they exerted in the stories they told about "typical Americans," "majority
opinion," and "the public" itself.
How did ordinary Americans grapple with the ascendance of social scientific ways of knowing at mid-century? What were the ramifications for individuals of the questioning presence of surveyors, as they reached more deeply into people's lives for information? And how did the influx of facts and figures purporting to profile the population shape their understandings of the collective, and of social identities within it? Taking as my examples the early Gallup polls (from 1935 on) and the Kinsey Reports (1948 and 1953), I argue that the twentieth-century embrace of statistical knowledge was ambivalent, and full of contradictions. Consumers of survey data frequently posited their own experiences and "local knowledge" as a counter to social scientific authority. And they complained bitterly about the depersonalization that came with the torrent of numbers. Even so, some willingly and even eagerly submitted to surveys, gave new weight to aggregate data, and began to measure themselves via social scientific categories. Relying on media coverage of survey results, Gallup's and Kinsey's private papers, and their correspondence with the general public, I trace the patterns of distrust and dependence that marked Americans' relationship with the social data that claimed to represent them. Along the way, I hope to glimpse just what sort of public was evolving in tandem with opinion polls and sex surveys.
HomeNetToo is a longitudinal field study designed to examine the
antecedents and consequences of home Internet use by low-income
families (NSF-ITR #085348). Participants were 120 adults and 140
children residing in a medium-sized urban community in the
mid-western United States. Adult participants were primarily African
American (67%), female (80%) and never married (42%). Most had
household incomes of less than $15,000 annually (49%). Average age
was 38.6 years old. Most of the HomeNetToo children were African
American (83%) and male (58%). Average age was 13.8 years
old. Participants had their Internet use continuously and
automatically recorded for 16 months and completed surveys at
pre-trial, one month, three months, nine months and post-trial. In
exchange they received home computers, Internet access, and in-home
technical support during the Internet recording period.
Findings revealed no adverse social or psychological effects of Internet use for adults or children. Importantly, children who used the Internet more subsequently had higher grade point averages and higher scores on standardized tests of reading achievement than did children who used it less. Additional experimental findings using a separate sample of African American adults (n=161) indicated that adapting the interface to user cognitive style preferences resulted in better learning of health information than when a standard "magazine style" interface was used. Current research based on an ecological systems theory perspective is examining the effects of multiple types of IT use (e.g., video games, cell phones) on children's cognitive, social, psychological and moral development, as well as the mechanisms that mediate the relationship between Internet use and academic performance (NSF-HSD #0527064).
During the 1960s, in NASA's Apollo lunar project, engineers and
astronauts worked together (in harmony and conflict) to design and
build a "man-machine system" that combined the power of the computer
with the reliability and judgment of a human pilot. This talk looks
at the design and construction of the Apollo guidance and control
system. The decision to use a digital computer, and the unproven
technology of integrated circuits (the first in a life-critical
situation), were hotly contested. Astronauts were involved in the
design of the system, and one question repeatedly arose: how much to
automate the flight to the moon? Some engineers were convinced that
computers could run the entire mission, with no input from the
astronauts. NASA, of course, could not condone full automation, as
the astronauts played a political role in the project as exemplars
of American prowess.
The resultant design of computer and software emerged to reflect a philosophy of aiding the pilots in critical functions and at critical moments, while not actually replacing them. In addition, new technologies called "simulators" (emerging from a rich thread of 20th century flight training) allowed the astronauts and ground controllers to rehearse the missions countless times from the ground. In a pattern increasingly common today, they flew to the moon in a virtual world before ever flying in the real one.
This story acquires broader significance when placed in the context of twentieth century science and technology. This period, around 1970, was a turning point in America, and Apollo highlights those changes. Before then, the narrative of progress in technology (especially aerospace) was straightforward, visceral, and Newtonian. Apollo culminated this "faster, higher, further" story -- the largest rocket, the fastest speed, the furthest human exploration. After that, the world rushed in. Apollo was canceled. The astronauts who walked on the moon were not the first of many (as they had assumed) but rather the only ones in human history. Progress in technology did not end, but it did shift away from a visceral sense of physical movement and more toward progress in more complex socio-technical systems -- efficiency, economics, environmental impact, and information came to the fore. Apollo, with its embedded digital computer and its integrated circuits, both embodied the old world and contained within it the seeds of the new.
People come to online groups seeking information, encouragement, and
conversation. When the group responds, participants benefit and become
more committed. Yet under-contribution is a problem for many online
groups. For example, our data show that in online health support groups,
the majority of newcomers fail to receive a reply to their initial
posts. As a result, they get no benefit from their attempt to engage the
group. When newcomers fail to get a reply on their first attempt, most
are unlikely to return to the group. Part of our research is
descriptive. We examine variation in individual and collective behavior
in groups that is associated with their success at the level of the
transactions (e.g., a newcomer getting a reply to a question asked in an
online support group), at the level of the individual member (e.g., a
member contributing to a group repeatedly) and the level of the group as
a whole (e.g., the group recruiting and retaining sufficient members to
sustain itself). Our research sites include a large sample of online
discussion groups, including Usenet groups and health support groups,
and a movie recommender site.
Part of our research is prescriptive. We use empirical evidence and social psychological theories about the basis of group commitment to design interventions to improve the success of online groups. For example, in a movie recommender site, we show that highlighting to potential contributors the value that others will receive from their movie ratings causes them to spend their time rating movies that will benefit others the most. In online discussion groups, we are able to use machine learning techniques to identify with 85% accuracy messages that are unlikely to receive a reply. We have designed and are in the process of implementing automated interventions to increase the likelihood that a potentially orphaned message will get a reply. One intervention, similar to a readability index or grammar wizard, gives an author advice about how to rewrite a post to make it more reply-worthy. A second forwards the post to another member of the group who is knowledgeable and motivated enough to reply.
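The reply-prediction idea can be illustrated with a small, purely hypothetical sketch. The actual classifier described in this research used machine learning over features that are not specified here, so the surface features below (an explicit question, self-disclosure, moderate length) are illustrative stand-ins only:

```python
# Hypothetical sketch of scoring a draft post for "reply-worthiness".
# These heuristic features are illustrative stand-ins, not the features
# or model used in the research described above.
import re

def reply_worthiness(post):
    """Score a post from 0.0 to 1.0 using simple surface features that
    plausibly correlate with receiving a reply."""
    score = 0.0
    if "?" in post:
        score += 0.4  # an explicit question invites replies
    if re.search(r"\b(i|my|me)\b", post.lower()):
        score += 0.3  # self-disclosure tends to draw responses
    if 20 <= len(post.split()) <= 300:
        score += 0.3  # neither a fragment nor a wall of text
    return score
```

A "reply wizard" of the kind described in the talk could surface each unmet criterion as concrete rewriting advice rather than just a score.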
Distributed computing (DC) is a new form of online collaboration
that is making a significant and valuable contribution to scientific
research. Projects divide a large computational problem into small
tasks that are sent out over the internet to be completed on
personal computers. However, recruitment, participation and
retention of collaborators are a constant challenge. Teams and
forums, where team members come to interact, are particularly
important for people who are the most productive participants in DC.
Discussing a number of projects, I argue that the optimal way to
harness a core of skilled and productive participants is cooperation
through competition, i.e. co-opetition. This has implications for
improving existing collaborations and designing future collaborative systems.
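The task-farming model described above can be sketched in a few lines. This toy example simply splits an index range into small tasks and merges the results; real DC platforms such as BOINC additionally handle scheduling, result validation, and redundant computation:

```python
# Toy sketch of the distributed-computing model: a large problem is
# split into independent tasks, each small enough to run on a
# volunteer's personal computer, and the results are merged.
def make_tasks(n_items, task_size):
    """Split the index range [0, n_items) into small task ranges."""
    return [(start, min(start + task_size, n_items))
            for start in range(0, n_items, task_size)]

def run_task(task):
    # Stand-in for the real computation done on a volunteer's machine.
    start, end = task
    return sum(i * i for i in range(start, end))

def merge(results):
    """Combine the partial results returned by volunteers."""
    return sum(results)
```

Because tasks are independent, each can be dispatched to any available volunteer, which is what makes recruiting and retaining those volunteers so central to a project's throughput.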
Over time, our mode of remote communication has evolved from written
letters to telephones, email, internet chat rooms, and
videoconferences. Similarly, collaborative virtual environments
(CVEs) promise to further change the nature of remote interaction.
CVEs are systems which track verbal and nonverbal signals of
multiple interactants and render those signals onto avatars:
three-dimensional digital representations of people in a shared
digital space. In this talk, I describe a series of projects that
explore the manners in which CVEs qualitatively change the nature of
remote communication. Unlike telephone conversations and
videoconferences, interactants in CVEs have the ability to
systematically filter the physical appearance and behavioral actions
of their avatars in the eyes of their conversational partners,
amplifying or suppressing features and nonverbal signals in
real-time for strategic purposes. These transformations have a
drastic impact on interactants' persuasive and instructional abilities.
Furthermore, using CVEs, behavioral researchers can use this mismatch
between performed and perceived behavior as a tool to examine complex
patterns of nonverbal behavior with nearly perfect experimental control
and great precision. Implications for communications systems and social
interaction will be discussed.
What if politics were concerned with representing things, in addition
to representing people? This was the topic of an exhibition that has
just closed in Germany, Making Things Public, and is now the subject
of a book from MIT Press. The lecture will recast the simulation of a
"politics of things" that was demonstrated in the show and in the book.
Henry Jenkins (co-sponsored with Screen Cultures)
For several years now, Hollywood insiders have noted the emergence of transmedia storytelling
as a new form of popular culture, and researchers have begun to understand the consequences
of transmedia for theories of media convergence, participatory culture and collective intelligence.
In the ideal form of transmedia storytelling, each medium does what it does best -- so that a story might be introduced in a film, expanded through television, novels, and comics, and its world might be explored and experienced through game play. Such a multilayered approach to storytelling will enable a more complex, more sophisticated, more rewarding mode of narrative to emerge within the constraints of commercial entertainment. The most committed consumers become hunters and gatherers, tracking down data which is conveyed across multiple media, scanning any given text for embedded information which may yield insights into the characters and situations first encountered elsewhere. The creative artist becomes a world builder -- constructing an information-rich, emotionally intense environment which can sustain multiple characters and stories across multiple media.
The Matrix franchise pushes this idea of transmedia storytelling as far as or further than anyone has gone before, building out the world of The Matrix across not only three feature films, but also a series of comics (first released on the web and now in print), a series of anime movies (The Animatrix), and an ambitious video game (Enter The Matrix) which contains more than an hour of original footage featuring the cast of the movie. What can we learn about the future of entertainment by understanding the complex interplay between the texts of The Matrix? What can the Matrix teach us about authorship and readership in the new media landscape, about globalization and pop cosmopolitanism, even about the future of media studies?
Over the last ten years, scholars have largely ascribed the rise of "virtual community" to
the widespread adoption of computer networking technologies. This paper examines the history
of the system on which the term "virtual community" was first used, the Whole Earth 'Lectronic
Link (or WELL), and shows that as both an idea and a social formation, virtual community in
fact emerged at the intersection of three forces: the appearance of public computer networks,
the persistence of countercultural social ideals from the 1960s, and a shift toward networked
forms of economic activity. In the process, the paper brings together analytical frameworks
from organizational sociology, American cultural history, and science and technology studies
in order to illuminate the complex ways in which technological, social and cultural forms intertwine.
Information technology does not leave society unchanged. With the introduction of new
communication technologies, social practices often change--sometimes in unexpected ways,
and sometimes as a result of deliberate planning. Technology can not only change existing
patterns of human relationship, but also can foster interaction among groups who would not
otherwise meet. What new kinds of interaction might we want to foster? How does one design
a system to support interaction among groups with distinct needs?
This presentation will first highlight one example of such a system, Palaver Tree Online (PTO). In the PTO project at Georgia Tech, middle-school students interview older adults to learn about recent history from those who lived through it. For example, students interviewed older African Americans about what it was like to live through the civil rights years. Through this activity, the students developed a deeper sense of empathy and a new appreciation for the reality of historical events.
How are systems like PTO designed? How do we understand the distinct needs of groups (like elder volunteers, students, and teachers) and support constructive interaction among their members? The rest of this presentation explores "social balance," an approach to designing online systems that coordinate activity among distinct user groups.
As a technical field, social computing explores two questions: (A) How can the insights of
social science be applied to design better software? And, (B) How can software be designed
to address outstanding social problems? The first question (A) is amply illustrated with,
for instance, the recent successes of so-called recommender systems or collaborative filters
(like the "People who buy this book also buy" feature at amazon.com) and link-based Internet
search engines (like google.com). These services incorporate newer information indexing and
retrieval algorithms that borrow heavily from sociology (especially social network analysis).
The second question (B) is more open-ended, but recent work has yielded interesting insights.
For example, can software be designed to assist in the renewal of what sociologist Robert
Putnam terms "social capital"? Our recent work attempts to articulate and address outstanding
social problems of online public space and public discussion: (1) What is a good public
discussion? (2) What is a good public space? (3) What (software) technologies can be designed
to make a public space better for discussion and exchange? Through the demonstration of
several of our systems we hope to illustrate how both (A) we have applied insights from
social science to the design of software; and, (B) we are addressing social problems with software.
Under-contribution is a problem for many online communities. Social psychology theories
of social loafing and goal setting can provide mid-level design principles to address
this problem. We tested the design principles in two field experiments. In one, members
of an online movie recommender community were reminded of the uniqueness of their
contributions and the benefits that follow from them. In the second, they were given a
range of individual or group goals for contribution. As predicted by theory, individuals
contributed when they were reminded of their uniqueness and when they were given specific
and challenging goals, but other predictions were not borne out. The paper ends with
suggestions and challenges for mining social science theories as well as implications for design.
The most ordinary words -- pronouns, prepositions, articles, and auxiliary verbs -- can
reveal a great deal about people's social and psychological state. Using various text
analytic procedures, the ways people use function words in their writing and natural
speech can be correlated with demographic factors (age, sex, social class), personality
measures (depression-proneness, self-esteem), social relationships (dominance, honesty),
and biological condition (hormone levels, physical health markers). New ways to measure
natural language and to think about the analysis of text using word count methods,
latent semantic analysis, and taggers will be discussed.
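The word-count approach mentioned above can be sketched very simply. The category word lists below are small illustrative stand-ins, not the actual dictionaries used in this line of research:

```python
# Minimal word-count sketch of function-word analysis: tokenize a text
# and compute the proportion of words falling into each function-word
# category. The category lists here are illustrative, not the real
# research dictionaries.
import re
from collections import Counter

FUNCTION_WORDS = {
    "pronouns": {"i", "you", "he", "she", "we", "they", "me", "my"},
    "articles": {"a", "an", "the"},
    "prepositions": {"in", "on", "at", "of", "to", "with"},
}

def function_word_rates(text):
    """Return each category's share of all tokens in the text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    total = len(tokens) or 1  # avoid division by zero on empty input
    return {category: sum(counts[w] for w in words) / total
            for category, words in FUNCTION_WORDS.items()}
```

Rates rather than raw counts are the natural unit here, since texts of different lengths must be compared on a common scale.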
Deception is one of the most significant and pervasive social phenomena of our age. At the
same time, information technologies have pervaded almost all aspects of human communication
and interaction. Given the prevalence of both deception and communication technology, an
important set of questions has recently emerged about how technology affects digital
deception. Do people use different media to lie about different types of things, or to
different types of people? Are we worse at detecting a lie online than we are face-to-face?
Can automated analyses reveal linguistic patterns that reflect deception? This talk will
outline a program of research addressing the production, detection and automated linguistic
analysis of lying online.
The Australian Bible Society has recently produced a copy of the bible in text message
style and format -- it is designed to be loaded on a computer and blue-toothed to a
compatible cell phone and then broadcast to one's bible study or Christian youth group.
At first blush, this seems like an odd artefact, the first line of a joke. But in the
"West", there is a long and complicated relationship between technology and religion.
After all, Johannes Gutenberg's printing press produced the Bible in the 1450s. It was
the first book to be mass-produced in this way. Today, the largest online genealogical service
is run by a Christian institution; the Catholic church has its own text message service; and
religiously inspired blogs and chat rooms flourish in the United States and elsewhere
around the world. Meanwhile, technology manufacturers are catering to the ways in which
computational devices might support religious practices, producing religion-specific
technologies and experiences.
Indeed, given the ways in which religious practices are intimately woven into the fabric of daily life in many parts of the world, it should not be so far-fetched to imagine that new information and communication technologies (ICTs) might support a range of non-secular activities. Recent surveys of internet habits, corporate marketing strategies, and new product developments all point to the fact that there is a growing (perhaps already grown) segment of the population that uses technology to support religious practices. Yet, for the most part, religious or spiritual uses of ICTs seem to exist in the realm of technological oddities, fodder for offbeat columns and stand-up comics. Indeed, the critical literature is surprisingly sparse on this subject.
In this talk, I want to revisit some of these instances of techno-fied spirituality with an ethnographic sensibility. I want to survey some of the existing practices and devices, both in the mobile and internet spaces -- I am interested in both institutional and individual strategies around these various computational platforms and devices. I am particularly interested in thinking about the ways in which religious uses of technology suggest very different paths for technology envisioning and development.