Time | Event
8 - 8:30 a.m. |
Breakfast
|
8:30 - 8:45 a.m. |
Welcome and Opening Remarks
Watch welcome and opening remarks
Kellogg Global Hub, Seminar Room 4101
Francesca Cornelli, Dean of the Kellogg School of Management, Donald P. Jacobs Chair in Finance, Professor in Finance, Northwestern University
Julio M. Ottino, Dean of the McCormick School of Engineering, Distinguished Robert R. McCormick Institute Professor and Walter P. Murphy Professor of Chemical and Biological Engineering, Northwestern University
|
8:45 - 10:15 a.m. |
Session 1
Mary Beth Watson-Manheim, Professor and Department Head of Managerial Studies, University of Illinois Chicago
Watch Watson-Manheim's presentation
Adoption of AI in Context: Human-Digital Configuration Work and Implications
A primary goal of most organizational AI deployment is to reduce human labor while enhancing the work process. The technology is envisioned to improve established patterns of work and enhance current work systems; in other words, to improve a known situation. However, AI is not a narrow set of technologies with specific, pre-determined applications; it is open and contingent, offering myriad possibilities for action that are context-dependent and emergent. We suggest that AI adoption requires human expertise and ingenuity to “figure out,” in context, how to integrate the technology into work practices and organizational systems. This “figuring out” process is likely to lead to unexpected reconfigurations of work patterns and practices. We label this process human-digital configuration work.
We illustrate the emergence of unexpected outcomes in a case study of digital transformation in the banking industry. AI deployment was central to the digital transformation strategy. We uncover disruptions in employee work processes with positive and negative consequences for interpersonal interactions. Specifically, we identify two different forms of algorithmic technologies used by employees. The users’ actions and interactions in the adoption process created changes in the patterns and nature of established work practices. There were significant consequences, particularly for the quality and quantity of social interactions. We discuss the implications of these changes for social relationships and human connectedness, as well as the meaning of the work. Moreover, we propose that the stabilization of new human-digital configurations with existing work practices may challenge individual and professional identity as well as the deep structure and identity of the organization.
Hatim Rahman, Assistant Professor of Management and Organizations, Kellogg School of Management, Northwestern University
Watch Rahman's presentation
Control in the Age of Algorithms: Exploring the Cold Start Problem and Reputational Interdependence on Online Labor Markets
Scholars have developed an intimate understanding of how people use social networks to navigate traditional labor markets. In my ethnographic study of workers in one of the largest online labor platform markets, I found people could not rely on their existing social networks, in part because online platforms primarily rely on algorithms to control people's mobility. First, in the absence of existing social ties on the platform, inexperienced workers encounter the "cold start" problem; I detail the consequences this problem had for workers’ careers. Second, experienced workers who obtain a rating evaluation encounter what I call "reputational interdependence": the platform's algorithms share workers' rating evaluations within and across other online networks, without workers' consent. Together, I theorize how platforms' use of algorithms to control workers introduces challenges in ways that depart from prior literature and advance our understanding of networks in the age of algorithms.
|
10:15 - 10:30 a.m. |
Break
|
10:30 a.m. - Noon |
Session 2
Nancy Cooke, Professor of Human Systems Engineering, Arizona State University
Watch Cooke's presentation
Trusted Distributed Human-Machine Teaming for Safe and Effective Space-Based Missions
A challenge of space-based missions is effective teaming in a geographically and temporally (i.e., spatio-temporally) distributed environment. The geographic distribution of teammates, coupled with variable communication latency, challenges effective teamwork. This challenge is exacerbated by the complexity of a heterogeneous multiteam system composed of humans, robots, and Artificial Intelligence (AI) agents. The long-term objective of this research is to develop an AI agent that monitors distributed human-machine teams (HMTs) in space-based missions to identify potential team states (e.g., fatigue, conflict, trust) and intervene when needed to improve teamwork and team effectiveness. We have identified challenges from experts in space operations, developed a scenario to reflect those challenges, and identified sensor data for HMT monitoring.
Malte Jung, Associate Professor of Information Science, Cornell University
Watch Jung's presentation
Teamwork with Robots
Research on human-robot interaction to date has largely focused on examining a single human interacting with a single robot. This work has led to advances in fundamental understanding of the psychology of human-robot interaction (e.g., how specific design choices affect interactions with and attitudes toward robots) and of the effective design of human-robot interaction (e.g., how novel mechanisms or computational tools can be used to improve HRI). However, the single-robot-single-human focus of this growing body of work stands in stark contrast to the complex social contexts in which robots are increasingly placed. While robots increasingly support teamwork across a wide range of settings, including search and rescue missions, minimally invasive surgeries, space exploration missions, and manufacturing, we have limited understanding of how robots affect team dynamics and how to design robots to support groups of people. In this talk I present empirical findings from several studies that show how robots can shape, in direct but also subtle ways, how people interact and collaborate with each other in teams.
|
Noon - 1 p.m. |
Lunch
|
1 - 2:30 p.m. |
Session 3
Melissa Valentine, Associate Professor of Management Science and Engineering, Stanford University
Watch Valentine's presentation
Becoming Informated: How Expert Occupations Gain Reskilling During Algorithm Development and Use
Many studies explore how expert occupations adopt new algorithmic systems but reveal experts’ resistance because of the increased surveillance, standardization, and loss of control these systems can involve. Other studies acknowledge that some occupations can experience a valued reskilling when they use algorithms, becoming more “informated” or “augmented” in their decision-making. However, missing from this research is an understanding of when and how new algorithms enable different occupations to undergo valued reskilling versus deskilling. In this paper, we present an ethnographic study of data scientists’ algorithm development process that produced reskilling for their domain experts, their retail company’s fashion buyers. To use the algorithmic system, the buyers had to gain new intellective (i.e., conceptual thinking) skills, including 1) explicitly articulating the theories driving their decision-making and then 2) proposing, conducting, and evaluating tests of those theories using the algorithm. The data scientists engaged in reskilling practices during their user-centered algorithm design process to support the buyers’ learning: they structured ongoing interactions with the buyers, wherein they 1) asked open-ended questions to help elicit and formalize the buyers’ theories, 2) added system features to support the buyers’ understanding, and 3) conducted trainings focused on metaphors and framings that would develop the buyers’ intuition of how the algorithm worked. Our study identifies conditions and practices through which expert occupations can become informated rather than deskilled through algorithm development and use.
Paul Sajda, Professor of Biomedical Engineering, Electrical Engineering, and Radiology (Physics), Columbia University
Watch Sajda's presentation
Physiologically-Informed Artificial Intelligence
Artificial intelligence (AI) systems are advancing at a rapid pace, with new systems being realized on almost a daily basis. Many of these systems rely on unsupervised training on extremely large data sets (billions of tokens of text, images, etc.), followed by a relatively small amount of supervised training data generated by humans. This supervised training data is critical for tuning the models to specific contexts, but acquiring it is often costly because it requires human-in-the-loop expertise. In this talk, I will describe a new way to incorporate human-in-the-loop learning based on physiologically-based labeling and state inference. Termed physiologically-informed artificial intelligence (PI-AI), the framework tracks cognitive state changes related to attention reorienting and arousal, which can be measured non-invasively via electroencephalography (EEG), electrocardiography (ECG), electrodermal activity (EDA), pupillometry, and eye-tracking. We show that such an approach can be used to build AI models that are highly personalized to individual preferences, without the individual having to overtly express those preferences. We also hypothesize scenarios where the use of PI-AI may increase trust between humans and agents via just-in-time interventions that build “bonds” as one would expect in a team.
|
2:30 - 2:45 p.m. |
Break
|
2:45 - 4:15 p.m. |
Session 4
Lionel P. Robert Jr., Professor of Information and Associate Dean of Faculty Development and Faculty Affairs, School of Information, University of Michigan
Watch Robert's presentation
A Multi-Study Analysis of Repairing Human-Robot Trust Using Theory of Mind and Relational Demography Theory
Moshe Vardi, University Professor and Karen Ostrum George Distinguished Service Professor in Computational Engineering, Rice University
Watch Vardi's presentation
Technology and Democracy
U.S. society is in the throes of deep societal polarization that not only leads to political paralysis but also threatens the very foundations of democracy. The phrase "The Disunited States of America" is often mentioned. Other countries are displaying similar polarization. How did we get here? What went wrong?
In this talk, I argue that the current state of affairs is the result of the confluence of two tsunamis that have unfolded over the past 40 years. On one hand, there was the tsunami of technology – from the introduction of the IBM PC in 1981 to the current domination of public discourse by social media. On the other hand, there was a tsunami of neoliberal economic policies. I will argue that the combination of these two tsunamis led to both economic polarization and cognitive polarization.
|
4:15 - 4:30 p.m. |
Remarks
Watch Johnson's remarks
E. Patrick Johnson, Dean of the School of Communication and Annenberg University Professor at Northwestern University
|
4:30 - 4:45 p.m. |
Group Photo
|
4:45 - 5:15 p.m. |
Reception
Kellogg Global Hub, White Auditorium
|
5:15 - 6:30 p.m. |
Presentation & Performance
Stephen Alltop, Senior Lecturer, Conducting and Ensemble, Bienen School of Music, Northwestern University, and Orchestra
|
6:45 - 8:15 p.m. |
Dinner
|
7:00 - 8:00 p.m. |
Diversity in AI Panel (during dinner)
Watch panel on diversity in AI
Artificial intelligence systems and machine learning algorithms are wonderful artifacts of human accomplishment and scientific rigor. The modern software tools that are now readily available and enthusiastically applied to many of our lived experiences help make interactions with technology more seamless, convenient, and efficient - for some people. Unfortunately, the AI ecosystem, including its development, deployment, and user interaction, still must address questions related to bias, ethics, and diversity throughout its implementation. Recent scholarship and mainstream media attention have heightened awareness of the ways in which a lack of diversity has resulted in the development of tools that potentially increase marginalization and discrimination, ignore the cultures and histories of different groups, and undermine the freedoms of some of its users. As a result, we view this topic as a core issue that must be highlighted as part of the workshop on Human AI Social Networks @ Work.
Moderator
Marlon Twyman II, Assistant Professor of Communication, University of Southern California Annenberg School for Communication and Journalism
Panelists
Ray Reagans, Alfred P. Sloan Professor of Management, Associate Dean for Diversity, Equity, and Inclusion, MIT Sloan School of Management
Martin Prescher, Executive Vice President and CTO, Autonomy
Andrea Guzman, Associate Professor of Journalism, Northern Illinois University
|