ISR

The scientific and medical communities have long recognized that human factors influence research results. Indeed, a growing body of literature suggests that even the best-designed medical studies are affected by sources of bias. As medicine embraces an “evidence-based” paradigm in which data drive decisions, it is important to recognize that all evidence comes from human sources. Understanding the researchers behind a paper, and the social and/or meta-networks behind those researchers, is crucial to understanding and evaluating research results. To do this properly, it is necessary to employ a set of computational techniques grounded in social network analysis.

In this thesis, I develop and employ the idea of a “medical academic genealogy”: a network of authors linked to a founding department chairman. I demonstrate that identified medical academic genealogies can be correlated with research results, meaning that individuals who train in key genealogies are likely to publish similar results. Additionally, I show that researchers within an academic genealogy are likely to publish in specific journals. As a case study in this phenomenon, I examine a controversial neurosurgical issue: the question of extent of surgery for high-grade glioma (a type of brain cancer).

To do this, I will draw on an interdisciplinary body of literature, including dynamic network analysis, computer science, information diffusion, neurosurgery, and genealogy studies. The quantitative tools I develop will be important for understanding how individual research papers are interrelated, and can indicate ways in which literature reviews may be unwittingly affected by medical academic genealogy.
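To make the genealogy construct concrete, here is a minimal sketch of building and traversing a training network. All names and training links below are invented for illustration; the thesis's actual data and analysis tools are far richer than this adjacency-list traversal:

```python
from collections import deque

# Hypothetical mentor -> trainees edges; every name here is invented.
trained_under = {
    "Chair A": ["Surgeon B", "Surgeon C"],
    "Surgeon B": ["Surgeon D"],
    "Surgeon C": ["Surgeon E", "Surgeon F"],
}

def genealogy(founder):
    """Collect every researcher reachable from a founding chair
    through training links, via breadth-first search."""
    seen, queue = set(), deque([founder])
    while queue:
        person = queue.popleft()
        for trainee in trained_under.get(person, []):
            if trainee not in seen:
                seen.add(trainee)
                queue.append(trainee)
    return seen

print(sorted(genealogy("Chair A")))
# ['Surgeon B', 'Surgeon C', 'Surgeon D', 'Surgeon E', 'Surgeon F']
```

Once each author is assigned to a genealogy this way, publication outcomes (results reported, journals chosen) can be compared across genealogies rather than across individuals.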

Thesis Committee:
Kathleen Carley (Chair)
Rick Carley (Electrical and Computer Engineering)
James Herbsleb
Clark Chen (University of Minnesota, Department of Neurosurgery)

Over the past two decades, we have witnessed a drastic increase in the development of metrics and analytical frameworks to study the technical, social, organizational, and socio-technical aspects of large-scale software development projects. Despite the valuable insights that these new approaches provide, their application in practice has faced significant challenges. In this talk, I discuss these challenges and several examples of how they were overcome to provide tangible benefits to software project outcomes. The examples cover geographically distributed software projects from startups to large multinationals and across several industries.

Marcelo Cataldo is an Engineering Manager and Technical Lead at Dell Technologies. His research interests are in geographically distributed software development, software architecture and software analytics. Marcelo combines 9 years of academic research in software engineering with 12 years of experience as a software engineer, software architect and technical lead in small and large software development organizations. He holds a Ph.D. in Computation, Organizations and Society from Carnegie Mellon University (Pittsburgh, USA) and a B.Sc. in Information Systems from Universidad Tecnologica Nacional (Buenos Aires, Argentina).

Faculty Host: James Herbsleb

Self-adaptive software systems determine adaptation plans at run time that seek to change their behavior in response to faults, changing environments and attacks. Therefore, having an appropriate planning approach to find an adaptation plan is critical to successful self-adaptation.

For many realistic systems, ideally one would like to have a planning approach that finds quality plans in a timely manner. However, due to the fundamental trade-off between quality and timeliness of planning, today designers often have to compromise between an approach that is quick to find a plan and an approach that is slow but finds a quality plan.

To deal with this trade-off, we propose a hybrid planning approach for self-adaptive systems that combines deliberative and reactive planning to find a balance between quality and timeliness. The key idea is to use reactive planning to provide a quick (though potentially sub-optimal) response, while simultaneously invoking deliberative planning to determine a quality plan. Once the deliberative plan is ready, it takes over execution from the reactive plan to provide a higher-quality adaptation thereafter.
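The reactive/deliberative hand-off can be sketched as follows. This is a toy illustration with invented plans, utilities, and timing, not the thesis's actual planner: the reactive heuristic answers immediately, and a background thread swaps in the deliberative plan once it is ready:

```python
import threading
import time

def reactive_plan(state):
    # Fast heuristic: returns a safe but sub-optimal plan immediately.
    return {"action": "fallback", "utility": 0.4}

def deliberative_plan(state):
    # Slow search (simulated here by a sleep): higher-quality plan.
    time.sleep(0.1)
    return {"action": "optimized", "utility": 0.9}

def hybrid_plan(state):
    """Return a usable plan at once, while a worker thread upgrades it."""
    result = {"plan": reactive_plan(state)}      # respond right away

    def upgrade():
        result["plan"] = deliberative_plan(state)  # take over when ready

    worker = threading.Thread(target=upgrade)
    worker.start()
    return result, worker

result, worker = hybrid_plan({"fault": True})
print(result["plan"]["action"])   # the reactive plan executes first
worker.join()
print(result["plan"]["action"])   # the deliberative plan has taken over
```

A real self-adaptive system would execute the reactive plan's actions during the gap and must also handle the transition safely; the thesis's formal framework addresses exactly those concerns.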

The proposed thesis work will demonstrate through case studies that a combination of reactive and deliberative planning can improve adaptation effectiveness over using either alone, as measured by a multi-dimensional utility function capturing different dimensions of a system’s goal. In the process, the thesis will make contributions to both the theory and the practice of hybrid planning in self-adaptive systems. Specifically, the thesis will provide: (a) a formal framework defining the problem of hybrid planning; (b) a practical approach (grounded in the formal model) to apply hybrid planning to self-adaptive systems; and (c) concrete examples bridging the gap between theory and practice.

Thesis Committee:
David Garlan (Chair)
Jonathan Aldrich
John Dolan (Robotics Institute)
Hausi Müller (University of Victoria, Canada)

Copy of Proposal Document

 

Adolescent online safety is often equated with risk prevention, which focuses on reducing risk exposure (e.g., information privacy breaches, cyberbullying, sexual solicitations, and exposure to explicit content) through enhanced parental mediation, privacy-awareness campaigns, and privacy-invasive restriction and monitoring software designed to shield teens from encountering online risks. Such approaches tend to be very parent-centric and do not take into account the developmental needs and experiences of our youth. On one hand, we are telling teens they need to care about their online privacy in order to stay safe; on the other, we are taking their privacy away. In all of these approaches, we assume teens have no personal agency when it comes to their own online safety and that they cannot effectively manage online risks by themselves.

In contrast, developmental psychologists have shown that some level of autonomy and risk-seeking behavior is a natural and necessary part of adolescent development. In fact, shielding teens from any and all online risks may actually be detrimental to this process. Therefore, my research takes a more teen-centric approach to understanding adolescent online risk experiences and how teens cope with these risks, and it ultimately challenges some of the assumptions that have been made about how to protect teens online. Further, my research shows that parents are not authoritative figures when it comes to the risks their teens are experiencing online; thus, an over-reliance on parental mediation to ensure teen online safety may be problematic. Instead, we may want to move toward new approaches that empower teens by enhancing their risk-coping, resilience, and self-regulatory behaviors, so that they can learn to protect themselves from online risks more effectively.

Dr. Pamela Wisniewski is an Assistant Professor in the Department of Computer Science at the University of Central Florida. She graduated from the University of North Carolina at Charlotte with a Ph.D. in Computing and Information Systems and was a Post Doctoral Scholar at the Pennsylvania State University. Dr. Wisniewski also has over 6 years of industry experience as a systems developer/business analyst in the IT industry. Her research expertise is situated at the intersection of Human-Computer Interaction, Social Computing, and Privacy. An emerging theme across her research has been regulating the boundaries between how individuals manage their relationships with technology and how they manage their social interactions with others through the use of technology. Her goal is to frame privacy as a means to not only protect end users, but more importantly, to enrich the online social interactions that individuals share with others. She is particularly interested in the interplay between social media, privacy, and online safety for adolescents. Her work has won best paper awards (top 1%) and best paper honorable mentions (top 5%) at premier conferences in her field, and it has been featured on NPR, Forbes, and Science Daily. She was recently inducted as an inaugural member of ACM’s Future Computing Academy, an initiative developed “to support and foster the next generation of computing professionals.”

 

Text passwords—a frequent vector for account compromise, yet still ubiquitous—have been studied for decades by researchers attempting to determine how to coerce users to create passwords that are hard for attackers to guess but still easy for users to type and memorize. Most studies examine one password or a small number of passwords per user, and studies often rely on passwords created solely for the purpose of the study or on passwords protecting low-value accounts. These limitations severely constrain our understanding of password security in practice, including the extent and nature of password reuse, password behaviors specific to categories of accounts (e.g., financial websites), and the effect of password managers and other privacy tools.

In the paper on which this presentation is based, we report on an in situ study of 154 participants over an average of 147 days each. Participants' computers were instrumented—with careful attention to privacy—to record detailed information about password characteristics and usage, as well as many other computing behaviors such as use of security and privacy web browser extensions. This data allows a more accurate analysis of password characteristics and behaviors across the full range of participants' web-based accounts. Examples of our findings are that the use of symbols and digits in passwords predicts increased likelihood of reuse, while increased password strength predicts decreased likelihood of reuse; that password reuse is more prevalent than previously believed, especially when partial reuse is taken into account; and that password managers may have no impact on password reuse or strength. We also observe that users can be grouped into a handful of behavioral clusters, representative of various password management strategies. Our findings suggest that once a user needs to manage a larger number of passwords, they cope by partially and exactly reusing passwords across most of their accounts.
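The distinction between exact and partial reuse can be illustrated with a simple similarity check. This is only a sketch: the example passwords are invented, and the 0.7 similarity cutoff is an arbitrary illustrative choice, not the metric used in the study:

```python
from difflib import SequenceMatcher

def reuse_type(pw_a, pw_b, partial_threshold=0.7):
    """Classify a password pair as exact reuse, partial reuse, or distinct.
    The threshold is illustrative, not the study's actual metric."""
    if pw_a == pw_b:
        return "exact"
    # Ratio of matched characters to total length (0.0 to 1.0).
    similarity = SequenceMatcher(None, pw_a, pw_b).ratio()
    return "partial" if similarity >= partial_threshold else "distinct"

print(reuse_type("Summer2018!", "Summer2018!"))  # exact
print(reuse_type("Summer2018!", "Summer2019!"))  # partial
print(reuse_type("Summer2018!", "x9#qLp"))       # distinct
```

Counting only exact matches would miss the middle case above, which is why accounting for partial reuse raises reuse estimates.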



Sarah Pearman is a behavioral researcher who has worked with the CyLab Usable Privacy and Security research group since 2015. Sarah's primary focus in the CUPS group (and the focus of this talk) is the Security Behavior Observatory (SBO) project, a longitudinal field study of home computer user behavior. Sarah received her MA at the University of Pittsburgh and her bachelor's degree at Vanderbilt University.

Snapchat takes user privacy seriously. The ephemeral nature of disappearing snaps and messages, the inability to see your friends' friends, the "For My Eyes Only" option, the absence of permanent public content searchable for years, and other privacy features set Snapchat apart from other online social platforms. It is all about fleeting conversations and private memories. Snapchat started with a vision of private communication, but keeping it that way requires constant and deliberate focus. Our Privacy by Design program ensures that best privacy practices are followed across the whole Snapchat ecosystem. We are keen to adopt time-tested privacy solutions, while working hard to push the boundaries by developing and promoting recent or novel privacy technologies. We will give an overview of our efforts, ranging from differentially-private analytics reporting to on-device (federated) machine learning, and discuss challenges and successes of usable privacy at Snapchat.
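A flavor of differentially-private analytics reporting can be given with one-bit randomized response, a classic local-DP primitive in the spirit of (though far simpler than) systems like RAPPOR. This sketch is purely illustrative and not Snapchat's actual mechanism; the epsilon value and the 30% true rate are invented:

```python
import math
import random

def randomized_response(truth, epsilon=math.log(3)):
    """Report the true bit with probability e^eps / (e^eps + 1),
    otherwise flip it. Each user's report is individually deniable."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return truth if random.random() < p_truth else not truth

def estimate_rate(reports, epsilon=math.log(3)):
    """Debias the aggregate by inverting the known flip probability."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    observed = sum(reports) / len(reports)
    return (observed + p - 1) / (2 * p - 1)

random.seed(0)
true_rate = 0.3  # invented population rate of some sensitive attribute
reports = [randomized_response(random.random() < true_rate)
           for _ in range(100_000)]
print(round(estimate_rate(reports), 2))  # close to the true 0.3
```

The aggregate estimate is accurate even though no individual report is trustworthy, which is the core usability/privacy trade-off such telemetry systems exploit.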

Vasyl Pihur received a PhD in Biostatistics from the University of Louisville in 2009. Thinking that an academic career lay ahead, he took a joint post-doctoral appointment between the Johns Hopkins School of Medicine and its Biostatistics department to work on "gene hunting" for hypertension and autism. Two years later, he was doing data science work for YouTube. Not ready to give up his research dreams quite yet, he transferred to a privacy research group at Google, where his team published two RAPPOR papers and implemented the first differentially-private data collection in the tech industry. He joined Snapchat at the beginning of 2017 and currently leads a data privacy team responsible for technical privacy solutions.

When product quality cannot be observed prior to purchase, reputation concerns—the threat of lost future sales—can create incentives for firms to provide high-quality products. Framing data security as a quality investment problem, I embed this reputation mechanism into a probabilistic model of security investment à la Gordon and Loeb (2002). A website that sells a product (of observable quality) has to decide how much to invest in the protection of its customers' payment data. The consumer cannot observe security prior to purchase and bases his decision to buy on the firm’s reputation. Bad security is revealed post-purchase via the occurrence of breaches. The consumer may punish the firm by leaving when he learns of a breach; this provides the firm with incentives to invest. The observed lack of investment incentives in reality may be explained by a low rate of breach detection and the consumer’s limited liability for fraud losses; both factors undermine his willingness and ability to punish the firm. I consider policies that can improve investment incentives either by strengthening the reputation concerns or by directly addressing the problems of imperfect information and externalities. I caution, however, that these policies may create countervailing effects on investment incentives and may not raise consumer surplus even when they lead to more investment.
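The baseline Gordon and Loeb (2002) trade-off that the talk builds on can be illustrated with a toy calculation: the firm picks a security spending level z to minimize z + S(z, v) · L, where S(z, v) = v / (az + 1)^b is a breach-probability function from the Gordon–Loeb family. All parameter values here are invented for illustration:

```python
def expected_cost(z, v=0.5, loss=100.0, a=0.1, b=2.0):
    """Total expected cost of security spending z: the spending itself
    plus the residual breach probability times the breach loss.
    v is the baseline vulnerability; a and b shape the returns to spending."""
    breach_prob = v / (a * z + 1) ** b
    return z + breach_prob * loss

# Grid search for the cost-minimizing investment level.
best_z = min((step * 0.1 for step in range(1000)), key=expected_cost)
print(round(best_z, 1))  # optimum near z = 11.5 for these parameters
```

In the talk's model, the firm's effective loss from a breach depends on how strongly consumers punish it, so weak reputation concerns shrink the `loss` term and push the optimal investment toward zero.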

Ying Lei is a PhD student at the Toulouse School of Economics, France. Currently, she is also a visiting scholar at the School of Information at UC Berkeley. Her research applies industrial organization theory to the digital economy. In particular, she has worked on the topics of information/cyber-security and consumer privacy. Prior to starting her PhD, she obtained an MSc and an MPhil in Economics from the Toulouse School of Economics.

 

The Master of Science in Information Technology – eBusiness Technology is a program within the Institute for Software Research. The degree program offers no classes; rather, students work full-time for 10 months in teams of five on a series of 16 separate eBusiness projects requiring them to produce professional, consulting-quality output, including system designs, specifications, working code, business analyses, and persuasive business presentations. The premise: the students are employees of a hypothetical eBusiness consulting company that performs work for real clients. Teams compete for the Best Practicum Award.

The students rotate through four consulting practices: health care, banking, retail and logistics. Each task is supervised by two faculty members, one an area expert and the other a mentor to guide the students in completing each task.

After task 16, students are experts in team organization and management, and they work on a real problem provided by an industrial sponsor. The grading criterion for the Practicum is sponsor satisfaction, as it would be in a true consulting engagement. Each project must present a demonstrable software prototype, which is licensed to the sponsor.

Each team gives a private presentation to its sponsor and a brief public presentation to a panel of outside judges. The judges select the winner of the Practicum Prize: the winning team receives $14,000, and the second-place team receives $7,000.

The two major smartphone platforms (Android and iOS) have more than two million mobile applications (apps) available to download from their respective app stores, and each store has seen more than 50 billion apps downloaded. Although apps provide desired functionality by accessing users' personal information, they also access personal information for other purposes (e.g., advertising or profiling) that users may or may not desire. Users can exercise control over how apps access their personal information through permission managers. However, a permission manager alone might not be sufficient to help users manage their app privacy because: (1) privacy is typically a secondary task and thus users might not be motivated enough to take advantage of the permission manager's functionality, and (2) even when using the permission manager, users often make suboptimal privacy decisions due to hurdles in decision making such as incomplete information, bounded rationality, and cognitive and behavioral biases. To address these two challenges, the theoretical framework of this dissertation is the concept of nudges: "soft paternalistic" behavioral interventions that do not restrict choice but account for decision making hurdles. Specifically, I designed app privacy nudges that primarily address the incomplete information hurdle. The nudges aim to help users make better privacy decisions by (1) increasing users' awareness of privacy risks associated with apps, and (2) temporarily making privacy users' primary task to motivate them to review and adjust their app settings.

I evaluated app privacy nudges in three user studies. The first and second studies showed that app privacy nudges are indeed a promising approach to help users manage their privacy. App privacy nudges increased users' awareness of privacy risks associated with apps on their phones, switched users' attention to privacy management, and motivated users to review their app privacy settings. Additionally, the second study suggested that not all nudge contents help users manage their privacy equally well. Rather, the more effective nudge contents informed users of: (1) the contexts in which their personal information had been accessed, (2) the purposes for which apps accessed their personal information, and (3) the potential implications of secondary use of their personal information. The ongoing third study focuses on user engagement with repeated app privacy nudges and evaluates approaches that may maintain users' engagement when they receive nudges repeatedly.

The results of this dissertation suggest that mobile operating system providers should enrich their systems with app privacy nudges to assist users in managing their privacy. Additionally, the lessons learned in this dissertation may inform designing privacy nudges in emerging areas such as the Internet of Things.

Thesis Committee:
Norman Sadeh (Chair)
Anind K. Dey (HCII)
Alessandro Acquisti (Heinz)
Adrienne Porter Felt (Google Inc.)

Copy of Thesis Document

Teenagers are using the internet for a variety of social and identity-based activities, but in doing so, they are exposed to risky situations. The work of ensuring teens’ online safety falls largely to parents, many of whom are unprepared to understand the realities and norms of teens’ online activity. In this thesis, we will investigate how parents and teens perceive online risks, the efficacy of current tools designed to keep teens safe online, and finally, whether we can improve currently available online safety tools. We have conducted interviews with parents and teens to understand how they perceive digital privacy within their families, and in what situations teens’ privacy should be preserved or denied. We propose work to investigate a specific case of online safety: peer-based online conflict among teenagers, also called cyberbullying.

In studying cyberbullying, we will investigate whether and how parents and teens define online conflicts differently, with an eye towards miscommunications that could make parenting decisions more difficult. We explore the pressures parents face to employ privacy-invasive and restrictive parenting practices, and the confusion about teens’ digital communities that makes some parents unsure about communication- and education-based interventions. We also present how different groups perceive these various categories of parenting strategies. Finally, we propose to study how current digital online safety tools perform in risky online situations encountered by teens.

To understand the current tool landscape, we will study how two existing tools—parental control software and a family online-behavior contract—perform in families, using a longitudinal mixed-methods study. For this study, we will investigate whether families use these tools to identify or handle risky situations, and whether they are satisfied or feel safer with these tools in place. Building on this knowledge, we will build an improved tool, or modify an existing tool, for mitigating risks encountered by teens online and test it against existing tools within families.

Thesis Committee:

Lorrie Faith Cranor (Advisor)
Julie S. Downs (Dietrich College)
James D. Herbsleb
Amy Bruckman (Georgia Institute of Technology)

Copy of Proposal Document
