Enigma 2019 has ended


Monday, January 28
 

7:30am

Badge Pickup
Monday January 28, 2019 7:30am - 5:00pm
Grand Peninsula Foyer

8:00am

Continental Breakfast
Monday January 28, 2019 8:00am - 8:45am
Grand Peninsula Foyer

8:45am

9:00am

The Kids Aren't Alright—Security and Privacy in the K–12 Classroom
Many of the security and privacy mechanisms we build - permission prompts, security warnings, privacy policies - make one critical assumption: the end user is an adult with the agency to make their own decisions. Children, and especially children in schools, operate in a different security and privacy context than the one assumed by the general-purpose online tools they use. Young students can't evaluate security risks or consent to data sharing, but we give them the same security warnings and privacy controls that confuse adults.

Authentication mechanisms aren't designed for children and don't adapt to their age. Password "best practices" don't account for children who are learning to type. Many two-factor and password reset systems don't work for kids who aren't allowed to have phones. Mobile apps that never expire sessions don't make sense for schools that can't afford a device for every student.

The classroom setting is different than the corporate or consumer internet environment. The dynamic power structure of teachers, school administrators, students, and parents needs to be understood and baked into authentication and authorization tools for schools. Teachers play the role of system administrators, fielding support questions, fixing keyboards, and resetting passwords. School and district administrators have important and complicated relationships with the classroom, and technology is deployed both top-down and bottom-up, making inflexible systems brittle.

While many recognize the promise of technology in the classroom, many attempts to design kid-friendly systems are met with suspicion. Early academic data is sensitive. The concept of a "permanent record" is an educational privacy trope. In the era of big data, this is even more concerning. When students create content in edtech apps, that may be the first time they associate their online identity with data.

While edtech promises a revolution in learning outcomes, it first needs to be both safe and useful. This talk introduces security and privacy challenges kids face using technology in the classroom. It's imperative that we apply security and privacy design principles with an understanding of the real-world classroom context to realize the benefits of education technology for society.

Speakers

Alex Smolen

Clever
Alex is a security-focused software engineer and architect interested in usable security and privacy by design. He is the Engineering Manager for the Infrastructure and Security teams at Clever. Before joining Clever, Alex was the technical lead for the Account Security team at Twitter... Read More →


Monday January 28, 2019 9:00am - 9:30am
Grand Peninsula Ballroom ABCD

9:30am

Rethinking the Detection of Child Sexual Abuse Imagery on the Internet
A critical part of the child sexual abuse criminal world is the creation and distribution of child sexual abuse imagery (CSAI) on the Internet. To combat this crime efficiently and illuminate current defense shortcomings, it is vital to understand how CSAI content is disseminated on the Internet. Despite the importance of the topic, very little work has been done on it so far.

To fill this gap and provide a comprehensive overview of the current situation, we conducted the first longitudinal measurement study of CSAI distribution across the Internet. In collaboration with the National Center for Missing and Exploited Children (NCMEC)—a United States clearinghouse for all CSAI content detected by the public and US Internet services—we examined the metadata associated with 23.4M CSAI incidents from the 1998–2017 period.

This talk starts by summarizing the key insights we garnered during this study about how CSAI content distribution evolved. In particular, we will cover how Internet technologies have exponentially accelerated the pace of CSAI content creation and distribution to a breaking point in the manual review capabilities of NCMEC and law enforcement.

Then we will delve into the most pressing challenges that need to be addressed to be able to keep up with the steady increase of CSAI content and outline promising directions to help meet those challenges.

Speakers

Elie Bursztein

Anti-fraud and abuse research team lead, Google
Elie Bursztein leads Google's anti-abuse research, which helps protect users against Internet threats. Elie has contributed to applied-cryptography, machine learning for security, malware understanding, and web security; authoring over fifty research papers in the field for which... Read More →


Monday January 28, 2019 9:30am - 10:00am
Grand Peninsula Ballroom ABCD

10:00am

Callisto: A Cryptographic Approach to #MeToo
Three years ago, Callisto launched its sexual assault reporting platform on college campuses. Callisto recently launched a new product that expands our reach to support any survivor of sexual assault and professional sexual coercion in the United States.

In this new product, users are invited to an online "matching escrow" that will detect repeat perpetrators and create pathways to support for victims. Users of this product can enter the identity of their perpetrator into the escrow. This data can only be decrypted by the Callisto Options Counselor (a lawyer) when another user enters the identity of the same perpetrator. If the perpetrator identities match, both users will be put in touch independently with the Options Counselor, who will connect them to each other (if appropriate) and help them determine their best path towards justice. The client relationships with the Options Counselors are structured so that any client-counselor communications would be privileged. A combination of client-side encryption, encrypted communication channels, oblivious pseudorandom functions, key federation, and Shamir secret sharing keeps data encrypted so that only the Callisto Options Counselor has access to user-submitted data, and only when a match is identified. This presentation will discuss Callisto’s cryptographic approach and infosec strategy to solve an urgent social justice problem.
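
The sketch below is a toy illustration of the matching-escrow idea in a simplified setting: both submissions derive the same degree-one polynomial from the perpetrator identifier, so any two matching shares let the escrow agent reconstruct the escrowed secret, while a single share reveals nothing. It is not Callisto's actual protocol; the real system uses an oblivious pseudorandom function so that no single party holds a master key, plus layered encryption and key federation. All names and keys below are hypothetical.

```python
# Toy sketch of a "matching escrow": two submissions naming the same
# perpetrator let the escrow agent recover a shared secret, while a single
# submission reveals nothing. Illustration only -- the real Callisto protocol
# uses an OPRF (so no party holds MASTER_KEY), layered encryption, and key
# federation. All names and keys below are hypothetical.
import hashlib
import hmac

P = 2**127 - 1  # prime field for the 2-out-of-n Shamir-style shares

MASTER_KEY = b"stand-in for the OPRF evaluation key"  # hypothetical

def kdf(label: bytes, data: bytes) -> int:
    digest = hmac.new(MASTER_KEY, label + data, hashlib.sha256).digest()
    return int.from_bytes(digest, "big") % P

def submit(user_id: str, perpetrator_id: str):
    """Produce (tag, share) for escrow. The line f(x) = s + a*x is derived
    deterministically from the perpetrator identifier, so matching
    submissions lie on the same line."""
    pid = perpetrator_id.encode()
    s = kdf(b"secret", pid)            # secret recoverable only on a match
    a = kdf(b"slope", pid)
    x = kdf(b"x", user_id.encode())    # distinct x-coordinate per user
    y = (s + a * x) % P
    tag = hmac.new(MASTER_KEY, b"tag" + pid, hashlib.sha256).hexdigest()
    return tag, (x, y)

def recover(share1, share2) -> int:
    """Escrow agent interpolates the line through two matching shares and
    recovers s = f(0); one share alone determines nothing about s."""
    (x1, y1), (x2, y2) = share1, share2
    lam1 = (-x2) * pow(x1 - x2, -1, P) % P
    lam2 = (-x1) * pow(x2 - x1, -1, P) % P
    return (y1 * lam1 + y2 * lam2) % P

tag_a, share_a = submit("survivor-a", "perp@example.com")
tag_b, share_b = submit("survivor-b", "perp@example.com")
assert tag_a == tag_b                                      # match detected
assert recover(share_a, share_b) == kdf(b"secret", b"perp@example.com")
```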

Speakers

Anjana Rajan

Chief Technology Officer, Callisto
Anjana Rajan is the Chief Technology Officer at Callisto, a non-profit that builds technology to combat sexual assault. In this role, Anjana leads the engineering, security and design teams, with a focus on building products that protect the privacy and civil liberties of sexual assault... Read More →


Monday January 28, 2019 10:00am - 10:30am
Grand Peninsula Ballroom ABCD

10:00am

Sponsor Showcase
Say hello to our exhibiting sponsors and partners Amazon, Dropbox, EFF, Google, Shopify, Tanium, and Uber.

Monday January 28, 2019 10:00am - 6:00pm
Grand Peninsula Ballroom EFG

10:30am

Break with Refreshments
Monday January 28, 2019 10:30am - 11:00am
Grand Peninsula Foyer

11:00am

Hardware Security Modules: The Ultimate Black Boxes
Hardware Security Modules occupy a unique position in computer security–they are used to manage the most important secrets, but they're closed designs where opacity and tamper-response are inherent design requirements. These devices have had varying levels of adoption, from being the only way to do cryptography fast, to only being used when security was required (often by regulation), to now being used to protect high-value secrets at a distance. Unfortunately, many of the designs on the market are very old, and essentially designed for a different use case and threat model than exists today. To a degree, even existing certification procedures act as an impediment to successful use of the technology.

We will describe the issues with on-premises and cloud-based HSMs, as well as some ways to work around these limitations and how to build a new kind of product for current needs.

Speakers

Ryan Lackey

Tezos
Ryan Lackey has been a cypherpunk since the early 1990s. As one of the founders of the world's first offshore datahaven (HavenCo on Sealand), he built physical infrastructure to help others engage in jurisdictional arbitrage. In addition to some early anonymous electronic cash projects... Read More →


Monday January 28, 2019 11:00am - 11:30am
Grand Peninsula Ballroom ABCD

11:30am

Hardware Is the New Software: Finding Exploitable Bugs in Hardware Designs
Bugs in hardware designs can create vulnerabilities that open the machine to malicious exploit. Despite mature functional validation tools and new research in designing secure hardware, the question of how to find and recognize those bugs remains open. My students and I have developed two tools in response to this question. The first is a security specification miner; it semi-automatically identifies security-critical properties of a design specified at the register transfer level. The second tool, Coppelia, is a symbolic execution engine that explores a hardware design and generates complete exploits for the security bugs it finds. We use Coppelia and our set of generated security properties to find new bugs in the open-source RISC-V and OR1k CPU architectures.

Speakers

Cynthia Sturton

University of North Carolina at Chapel Hill
Cynthia Sturton is an Assistant Professor and Peter Thacher Grauer Fellow at the University of North Carolina at Chapel Hill. She leads the Hardware Security @ UNC research group to investigate the use of static and dynamic analysis techniques to protect against vulnerable hardware... Read More →


Monday January 28, 2019 11:30am - 12:00pm
Grand Peninsula Ballroom ABCD

12:00pm

Using Architecture and Abstractions to Design a Security Layer for TLS
TLS is the primary protocol used to provide security and privacy for Internet traffic. Sadly, there is abundant evidence that developers do not use TLS correctly, due to a morass of poorly-designed APIs, lack of security expertise, and poor adherence to best practices. In this talk, we argue this is a problem of architecture and abstraction. We first demonstrate how a security layer fits into the Internet architecture, between applications and TCP, and how the POSIX socket API is both a convenient and simple abstraction for a TLS interface. We then discuss ramifications for developers, administrators, and OS vendors, focused on two major benefits: (1) developers have a centralized, well-tested service to easily create a secure application in minutes, and (2) system administrators and OS vendors have policy to ensure all applications on a device use best practices. We finish by illustrating how this new abstraction and architecture can simplify two of the most complex parts of TLS—certificate validation and client authentication. We are releasing code for the security layer, including both operating system services and application examples, to stimulate developer and industry interest in this approach.
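
As a rough analogy to the abstraction argued for above, the snippet below shows how TLS can already be layered behind a socket-style interface using Python's standard ssl module, with validation policy coming from a maintained default context rather than application code. The endpoint is a placeholder, and this is Python's existing API, not the OS-level security layer the talk describes.

```python
# Rough analogy (using Python's standard ssl module, not the OS-level security
# layer described in the talk): the application keeps a socket-style interface
# while certificate and hostname validation come from a centrally maintained
# default policy. The host below is a placeholder.
import socket
import ssl

HOST = "example.com"

# Centralized policy: secure defaults, system trust store, hostname checking.
policy = ssl.create_default_context()

with socket.create_connection((HOST, 443)) as raw_sock:
    # The handshake and certificate validation happen behind the same
    # read/write interface the application already used for plain TCP.
    with policy.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        request = (b"GET / HTTP/1.1\r\nHost: " + HOST.encode() +
                   b"\r\nConnection: close\r\n\r\n")
        tls_sock.sendall(request)
        print(tls_sock.recv(4096).decode(errors="replace"))
```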

Speakers

Daniel Zappala

Brigham Young University
Daniel Zappala is the director of the Internet Research Lab at BYU. He is primarily interested in network security and usable security, particularly anywhere that people have to interact with cryptography. Daniel’s recent research includes developing a security layer for TLS, designing... Read More →


Monday January 28, 2019 12:00pm - 12:30pm
Grand Peninsula Ballroom ABCD

12:30pm

Lunch
Monday January 28, 2019 12:30pm - 2:00pm
Atrium

2:00pm

Privacy Engineering: Not Just for Privacy Engineers
Most privacy talks are given by privacy experts. I’m not a privacy expert. In fact, my job is to help teams across Uber access and analyze appropriate data to make our services smarter and more reliable. As such, my team is often on the receiving end of technical and policy requirements from our privacy teams. This talk will discuss how privacy and data engineers at Uber joined forces to build a privacy-protecting approach to data retrieval and what privacy teams need to know about working with data teams to accomplish their goals. I'll share specific examples from Uber engineering on how we work with our privacy colleagues to enforce least privilege, data protection, and compliance with regulatory requirements.

Speakers

Jennifer Anderson, PhD

Uber
Jennifer Anderson is a senior director of engineering at Uber, where she leads the product platform team, responsible for the platforms and data warehouses supporting growth and core service teams. Previously, she led data analytics and infrastructure for Uber’s engineering organization... Read More →


Monday January 28, 2019 2:00pm - 2:30pm
Grand Peninsula Ballroom ABCD

2:30pm

Building Identity for an Open Perimeter
Netflix is a 100% cloud-first company. The traditional corporate network security perimeter no longer meets our needs. In this talk, I will cover the core building blocks we have invested in to build zero trust networks at Netflix and make identity the new security perimeter: identity, single sign-on using standards like SAML, OIDC, and OAuth, multi-factor authentication, adaptive authentication, device health, and authorization.
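
As an illustration of one of these building blocks, the hedged sketch below validates an OIDC ID token obtained during single sign-on using the PyJWT library; the issuer, client ID, and key handling are hypothetical placeholders rather than Netflix's implementation.

```python
# Minimal sketch of one building block: validating an OIDC ID token obtained
# during single sign-on, using the PyJWT library. The issuer, client ID, and
# key handling are hypothetical placeholders, not Netflix's implementation;
# a real deployment fetches signing keys from the IdP's JWKS endpoint and
# layers on MFA, device health, and authorization checks.
import jwt  # PyJWT

IDP_ISSUER = "https://idp.example.com"   # hypothetical identity provider
CLIENT_ID = "corp-app"                   # hypothetical OIDC client ID

def verify_id_token(id_token: str, idp_public_key: str) -> dict:
    claims = jwt.decode(
        id_token,
        idp_public_key,
        algorithms=["RS256"],   # pin the expected signature algorithm
        audience=CLIENT_ID,     # token must be intended for this application
        issuer=IDP_ISSUER,      # and issued by the expected IdP
    )
    # Additional signals (MFA claim, device posture) would be evaluated here.
    return claims
```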

Speakers

Tejas Dharamshi

Netflix, Inc.
Tejas Dharamshi is a Senior Security Software Engineer at Netflix. Tejas specializes in security and is focused on corporate Identity and Access, multi-factor authentication, adaptive authentication, and user-focused security at scale.


Monday January 28, 2019 2:30pm - 3:00pm
Grand Peninsula Ballroom ABCD

3:00pm

Provable Security at AWS
Using automated reasoning technology, the application of mathematical logic to help answer critical questions about your infrastructure, AWS is able to detect entire classes of misconfigurations that could potentially expose vulnerable data. We call this provable security: absolute assurance in security of the cloud and in the cloud. This talk highlights how this next-generation cloud security technology is protecting customers in an evolving threat landscape and how customers are using provable security features in their AWS cloud environment.
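
As a toy illustration of the underlying idea (not AWS's internal tooling), the sketch below encodes a simplified access-policy question for the Z3 SMT solver and asks whether any request can violate the stated security intent; the policy and attributes are made up for the example.

```python
# Toy illustration of SMT-based policy checking (not AWS's internal tooling):
# encode a simplified storage policy and ask the Z3 solver whether any request
# can violate the stated intent. The policy and attributes are made up.
from z3 import Bools, Solver, Or, Not, sat

# Request attributes the solver is free to choose.
anonymous, acl_public_read, is_owner = Bools("anonymous acl_public_read is_owner")

# Deployed policy: access is granted to the owner, or to anyone at all if the
# ACL grants public read (the misconfiguration we want to catch).
allowed = Or(is_owner, acl_public_read)

s = Solver()
# Security question: can an anonymous, non-owner principal get access?
s.add(anonymous, Not(is_owner), allowed)

if s.check() == sat:
    print("Counterexample found, anonymous access is possible:", s.model())
else:
    print("Proved: no anonymous access is possible under this policy")
```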

Speakers

Neha Rungta

Principal Engineer, Amazon Web Services
Dr. Neha Rungta is a Principal Engineer in the Automated Reasoning Group at Amazon Web Services (AWS), working on formal verification techniques for cloud security. Prior to joining AWS, Neha worked on symbolic execution, automated program analysis, and airspace modeling... Read More →


Monday January 28, 2019 3:00pm - 3:30pm
Grand Peninsula Ballroom ABCD

3:30pm

Break with Refreshments
Monday January 28, 2019 3:30pm - 4:00pm
Grand Peninsula Foyer

4:00pm

Abusability Testing: Considering the Ways Your Technology Might Be Used for Harm
Speakers

Ashkan Soltani

Independent Researcher and Consultant
Ashkan Soltani is an independent researcher and technologist specializing in privacy, security, and behavioral economics. His work draws attention to privacy problems online, demystifies technology for the non-technically inclined, and provides data-driven insights to help inform... Read More →


Monday January 28, 2019 4:00pm - 4:30pm
Grand Peninsula Ballroom ABCD

4:30pm

Grey Science
Traditional scientific disciplines have a long history of discoveries made by amateur researchers or those with no formal scientific training. The cybersecurity community has many parallels. Papers at serious academic conferences and talks at "hacker" conferences contain surprising overlaps in topics and methods. But academics publish in formal, peer reviewed journals that are often behind a paywall, while non-academics produce artifacts in the realm of ephemeral "grey literature". The incentives for each group differ enough that no serious effort has been put forth to draw them together. How can we create feedback loops between the academic community, cybersecurity operators and underground security researchers who may not even think of themselves as "researchers" in order to work together on important security and privacy topics?

Speakers

Anita Nikolich

Computer Science, Illinois Institute of Technology
Anita is a Visiting Fellow in Computer Science at Illinois Institute of Technology. She served as a Cybersecurity Program Director at the National Science Foundation, and has held a variety of research, security and infrastructure roles in academia, industry and government. While... Read More →


Monday January 28, 2019 4:30pm - 5:00pm
Grand Peninsula Ballroom ABCD

5:00pm

It's Not "Our" Data: Do We Want to Create a World of No Surprises?
Speakers

Denelle Dixon

Mozilla
As Chief Operating Officer, Denelle is responsible for the overall operating business, leading the strategic and operational teams to scale Mozilla’s mission impact as a robust open source organization. Denelle also spearheads Mozilla’s business, policy and legal activities in... Read More →


Monday January 28, 2019 5:00pm - 5:30pm
Grand Peninsula Ballroom ABCD

5:30pm

Conference Reception
Sponsored by Google

Monday January 28, 2019 5:30pm - 7:00pm
Atrium

7:00pm

Netflix Round Table Session 1: Better Bring an Umbrella—Forecasting Events in Security
Sponsored by Netflix
Leader: Travis McPeak

The security industry tends to rely on instinct rather than quantitative methods to estimate risk. We can do better. Forecasting blends historical data and expert knowledge to estimate the likelihood or impact of an event, and has been used effectively by meteorologists, insurance providers, and nuclear strategists. Join Ryan McGeehan (@magoo) and Travis McPeak (@travismcpeak) as we estimate the likelihood of a 0-day exploit in a major browser. Refreshments will be served.
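
As a small, hypothetical illustration of the session's quantitative flavor, the snippet below scores a set of made-up probability forecasts with the Brier score, a standard way to measure how well calibrated such estimates are.

```python
# Made-up example of scoring probability forecasts with the Brier score
# (lower is better; 0.25 is what you get by always answering 50%).
forecasts = [
    # (forecast probability that the event occurs, did it occur?)
    (0.70, True),    # e.g., "critical browser 0-day disclosed this quarter"
    (0.20, False),
    (0.90, True),
    (0.40, False),
]

brier = sum((p - (1.0 if occurred else 0.0)) ** 2
            for p, occurred in forecasts) / len(forecasts)
print(f"Brier score: {brier:.3f}")
```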

Monday January 28, 2019 7:00pm - 8:00pm
Sandpebble Room CDE

8:00pm

Netflix Round Table Session 2: Scaling Product Security
Sponsored by Netflix
Leader: Astha Singhal

Historically, product/application security teams have heavily relied on a consulting model for serving their engineering customers. This enabled us to embed closely with developers and provide security guidance throughout the development lifecycle for new features and products. With changes to how we release software and the hiring challenges in our field, this model has become hard to scale. Product security teams are now investing in static & dynamic code analysis, security champions, CI/CD automation, and bug bounty programs to scale their services better. It is difficult, however, to measure the risk impact from some of this work.

During this session, we would like participants to discuss current and future initiatives at their organizations that help them reduce business risk in a scalable, measurable way. Please come to this BoF session to share your experience with strategies that have been impactful within your organization. Refreshments will be served.

Monday January 28, 2019 8:00pm - 9:00pm
Sandpebble Room CDE
 
Tuesday, January 29
 

8:00am

Continental Breakfast
Tuesday January 29, 2019 8:00am - 8:55am
Grand Peninsula Foyer

8:00am

Badge Pickup
Tuesday January 29, 2019 8:00am - 5:00pm
Grand Peninsula Foyer

8:55am

9:00am

The Offline Dimension of Online Crime
The conventional wisdom is that cybercrime is a largely anonymous activity that exists essentially in cyberspace. The supposed anonymity of attackers feeds into a narrative that cybercrime is strange, new, ubiquitous and ultimately very difficult to counteract. The central purpose of this presentation is to dispute this view. When one looks for it, there is actually a strong offline and local element within cybercrime, alongside the online dimension. In a number of cases, offenders are physically known to each other and work together. Understanding this phenomenon is important for informing policy approaches that seek to address this challenge. The arguments made in this presentation are supported by fieldwork carried out over a 7 year period in some 20 countries, including cybercrime "hotspots" like Russia, Ukraine, Romania, Nigeria, Brazil, China and the USA. This included interviews with almost 250 participants from across law enforcement, the private sector and former cybercriminals.

Speakers

Jonathan Lusthaus

University of Oxford
Jonathan Lusthaus is Director of The Human Cybercriminal Project in the Department of Sociology and a Research Fellow at Nuffield College, University of Oxford. His research focusses on the "human" side of profit-driven cybercrime: who cybercriminals are and how they are organised... Read More →


Tuesday January 29, 2019 9:00am - 9:30am
Grand Peninsula Ballroom ABCD

9:30am

Learning from the Dark Web Dimension of Data
If data should be treated like money, how do we figure out how much it is worth? What is the value of sensitive personal data to individuals and businesses? Often, it is only when that data is lost or compromised do we understand its true value.

Currently, the value of compromised or lost data is based on the consequences of a breach or major exposure: cost of remediation, damage to corporate reputation, a drop in share price, enforcement actions, legal settlements, and payouts. We acknowledge and understand that the fallout from a lack of security is expensive; however, we need a better way to measure and evaluate compromised digital assets.

In the underground economy of the dark web, cybercriminals have created a market for data, including pricing based on monetization. This market prices the goods (data) and can help us estimate the cost to the economy. Cybercrime pays, and data is the gateway good, an item of value in and of itself. The valuation of this data and market activity can quantify the effective harm caused by cybercrime, fraud, and identity theft. Using concepts from economics, this talk aims to provide an alternative framework for valuing stolen and leaked personal and financial data to help us fight cybercrime more effectively and empower businesses to operate more securely.

Speakers

Munish Walther-Puri

Presearch Strategy
Munish Walther-Puri is the founder of Presearch Strategy, a firm dedicated to applying technology and analytics to geopolitical risk, strategic intelligence, and cybersecurity. Previously, he was the Chief Research Officer and Head of Intelligence Analytics at Terbium Labs, where... Read More →


Tuesday January 29, 2019 9:30am - 10:00am
Grand Peninsula Ballroom ABCD

10:00am

Countering Adversarial Cyber Campaigns
Over the course of the last three decades, and increasingly over the past eight years, state and semi-state actor behavior in cyberspace has been veering in a direction that much of the cyber security research has not followed. While much of the academic and policy communities focus on ‘the high-and-right’ cyber action equivalent to an armed attack - the concept of cyber war - the actual behavior of actors has been of a far more nuanced and different nature. What we have been observing are campaigns composed of linked cyber operations, with the specific objective of achieving strategic outcomes without the need for armed attack. These campaigns are not simply transitory clever tactics. Rather, they are reflections of the structural imperatives of cyberspace itself as a domain and as such will be the central mechanism of state and semi-state competition in this realm as long as the core structure of cyberspace endures. The fundamental nature of cyberspace rests on a structure of interconnectedness and a condition of constant contact. Once recognized, that nature requires us to study cyber means not as enablers of war, although they can be, but more critically as the alternative to it.

This presentation puts forth the argument that cyberspace is a new field of competition in power politics and cyber campaigns are now a salient means, alternative to war, of achieving strategic outcomes. We propose and evaluate a new set of measures - which go beyond conventional approaches of norms setting, deterrence and resilience - to address today's cyber policy challenges.

Speakers

Max Smeets

Stanford University
Dr. Max Smeets is a cybersecurity postdoctoral fellow at Stanford University Center for International Security and Cooperation (CISAC). He is also a non-resident cybersecurity policy fellow at New America, and Research Associate at the Centre for Technology & Global Affairs, University... Read More →


Tuesday January 29, 2019 10:00am - 10:30am
Grand Peninsula Ballroom ABCD

10:00am

Sponsor Showcase
Say hello to our exhibiting sponsors and partners Amazon, Dropbox, EFF, Google, Shopify, Tanium, and Uber.

Tuesday January 29, 2019 10:00am - 5:30pm
Grand Peninsula Ballroom EFG

10:30am

Break with Refreshments
Tuesday January 29, 2019 10:30am - 11:00am
Grand Peninsula Foyer

11:00am

Usage of Behavioral Biometric Technologies to Defend Against Bots and Account Takeover Attacks
Frictionless strong authentication is a critical driver for enabling ecommerce and many other modern technology systems to thrive. In this talk, I showcase the challenges of tackling modern sophisticated machine based attacks and other malicious human activity attempting account takeover using stolen or compromised credentials. This is followed by a quick dive into the engineered solution that can perform behavioral analytics utilizing biometric data and how it tackles machine learning problems at the scale of hundreds of millions of authentication attempts. Insight is provided into implementation challenges, machine learning model generation, and finally integration into a very complex ecosystem. This talk will also showcase wins and how this eventually enabled a zero-trust environment.
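
To make the general idea concrete, the sketch below shows a hypothetical, greatly simplified version of behavioral scoring: keystroke-timing features for one account are modeled with an off-the-shelf anomaly detector, and a new login attempt is scored against that history. The features, data, and model are illustrative only and are not the production system described in the talk.

```python
# Greatly simplified, hypothetical sketch: model keystroke-timing features for
# one account and score how anomalous a new login attempt looks. The features,
# data, and model are illustrative, not the production system described above.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Historical logins for one account:
# [mean key-hold time (ms), mean inter-key delay (ms), typing speed (chars/s)]
history = rng.normal(loc=[95, 140, 5.5], scale=[8, 12, 0.4], size=(200, 3))

model = IsolationForest(random_state=0).fit(history)

human_attempt = np.array([[98, 135, 5.3]])  # close to the account's history
bot_attempt = np.array([[5, 5, 60.0]])      # near-instant, machine-like typing

for name, attempt in [("human-like", human_attempt), ("bot-like", bot_attempt)]:
    verdict = "anomalous" if model.predict(attempt)[0] == -1 else "normal"
    print(f"{name}: {verdict} (score={model.score_samples(attempt)[0]:.3f})")
```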

Speakers

Ajit Gaddam

Head of Security Engineering, Visa Inc.
Ajit Gaddam is the Head of Security Engineering at Visa, where he is responsible for building large scale machine learning driven defenses, leading engineering programs, and providing expert guidance on cybersecurity matters. He has presented at conferences worldwide including Black... Read More →


Tuesday January 29, 2019 11:00am - 11:30am
Grand Peninsula Ballroom ABCD

11:30am

Cryptocurrency: Burn It with Fire
The entire cryptocurrency and blockchain space is effectively one big fraud. Cryptocurrencies are not fit for purpose unless you need censorship resistance, are fundamentally incompatible with modern finance, and are unfixable. They are, however, destroyable as they have technical, legal, and social weaknesses that can be exploited.

Speakers

Nicholas Weaver

International Computer Science Institute (ICSI) and University of California, Berkeley
Nicholas received a B.A. in Astrophysics and Computer Science in 1995, and his Ph.D. in Computer Science in 2003 from the University of California at Berkeley. Although his dissertation was on novel FPGA architectures, he also was highly interested in Computer Security, including... Read More →


Tuesday January 29, 2019 11:30am - 12:00pm
Grand Peninsula Ballroom ABCD

12:00pm

Building a Secure Data Market on Blockchain
Data analytics and machine learning can provide enormous societal value and foster advancements in many industries. However, most of the valuable data needed to power these innovations remains restricted and siloed due to privacy concerns. This talk will discuss how blockchain technology, combined with privacy-preserving techniques, can enable a secure data market allowing users to share their data for analytics and machine learning while maintaining privacy, transparency, and control—without relying on trust of any central organization.

Speakers

Noah Johnson

Oasis Labs
Noah Johnson is co-founder and Chief Product Officer at Oasis Labs with expertise in program analysis, security policy enforcement, and privacy-preserving techniques. Noah obtained his PhD in Electrical Engineering and Computer Science from UC Berkeley where he was advised by Professor... Read More →


Tuesday January 29, 2019 12:00pm - 12:30pm
Grand Peninsula Ballroom ABCD

12:30pm

Lunch
Tuesday January 29, 2019 12:30pm - 2:00pm
Atrium

1:30pm

Next Steps For Browser Privacy: Pursuing Privacy Protections Beyond Extensions
Practically focused privacy research has disproportionately concentrated on the browser extension layer. This extension predilection is a double-edged sword. On the positive side, extensions are both simpler to develop and easier to distribute than deeper-reaching modifications, allowing researchers to iterate quickly and share their work with a large audience. On the negative side, an extension focus reduces the privacy improvements that can be achieved, as extensions can only modify a limited set of browser behavior. Researchers exploring modifications beyond the extension layer also lack easy ways of sharing their findings with a broad audience.

As a result, many possible web privacy improvements go under-explored. In this talk, I'll discuss three privacy improvements being developed at Brave that would not be possible at the extension layer. I hope to encourage other researchers and privacy activists to move beyond an extension-focused deployment strategy, and to consider privacy-oriented browser vendors as deployment strategies for getting their improvements in the hands of web users.

Speakers

Peter Snyder

Privacy Researcher, Brave Software
Peter Snyder is the Privacy Researcher at Brave Software, where he works on improving the privacy guarantees of the Brave Browser. He received his Ph.D. in Computer Science from the University of Illinois at Chicago in 2018. His research focuses on web security and privacy, browser... Read More →


Tuesday January 29, 2019 1:30pm - 2:00pm
Grand Peninsula Ballroom ABCD

2:00pm

User Agent 2.0: What Can the Browser Do for the User?
Browsers are the window that the user has onto the ever-expanding web, with the good, the bad, and the ugly that it contains. Security mechanism design on the web has traditionally relied on the user to make rational, carefully considered choices. Too often this becomes a barrage of prompts and dialogues, which end users ultimately tend to ignore.

In this talk, we highlight the fact that this assumption is based on flimsy science at best and, at worst, is completely debunked. We therefore argue that the browser should do more to help the user with these decisions, thereby truly stepping into the shoes of a user agent. While there may be decisions the user has to make, they must be less frequent and asked in a way where the user has a reasonable basis for making a well-informed decision. For example, a prompt to switch the browser into private browsing mode, or to block all 3rd-party cookies on a given site due to the nature of the content they’re browsing, may be accompanied by a side-by-side before-and-after picture.

Speakers

Ben Livshits

Chief Scientist, Brave Software
Ben Livshits is the Chief Scientist for Brave Software, a company that makes a novel privacy-friendly web browser. Dr. Livshits is also an Associate Professor at Imperial College London and an affiliate professor at the University of Washington. Previously, he was a research scientist... Read More →


Tuesday January 29, 2019 2:00pm - 2:30pm
Grand Peninsula Ballroom ABCD

2:30pm

Where Is the Web Closed?
One of the Internet's greatest strengths is the degree to which it facilitates access to any of its resources from users anywhere in the world. The Internet has already become a crucial part of our lives. People around the world use the Internet to communicate, connect, and do business. Yet various commercial, technical, and national interests constrain universal access to information on the Internet.

I will discuss three reasons for the closed web that are not caused by government censorship: blocking visitors from the EU to avoid GDPR compliance, blocking based upon the visitor's country, and blocking due to security concerns. These decisions can have an adverse effect on the people of the blocked regions, especially developing regions. With many key services, such as education, commerce, and news, offered by a small number of web-based Western companies who might not view the developing world as worth the risk, this indiscriminate blanket blocking could slow the growth of blocked developing regions.

As we are building the future web, we need to discuss the implication of such blocking practices and build technologies that ensure an open web for users around the world.

Speakers

Sadia Afroz

International Computer Science Institute (ICSI)
Sadia Afroz is a research scientist at the International Computer Science Institute (ICSI). Her work focuses on anti-censorship, anonymity and adversarial learning. Her work on adversarial authorship attribution received the 2013 Privacy Enhancing Technology (PET) award, the best... Read More →


Tuesday January 29, 2019 2:30pm - 3:00pm
Grand Peninsula Ballroom ABCD

3:00pm

The URLephant in the Room
In a security professional’s ideal world, every web user would carefully inspect their browser’s URL bar on every page they visit, verifying that they are accessing the site they intend to be accessing. In reality, many users rarely notice the URL bar and don’t know how to interpret the URL to verify a website’s identity. An evil URL may even be carefully designed to be indistinguishable from a legitimate one, such that even an expert couldn’t tell the difference! In this talk, I’ll discuss the URLephant in the room: the fact that the web security model rests on users noticing and understanding URLs as indicators of website identities, but they don’t actually work very well for that purpose. I’ll discuss how the Chrome usable security team measures whether an indicator of website identity is working, and when the security community should consider breaking some rules of usable security in search of better solutions. Finally, I’ll share some thoughts on the big question: is it time to give up entirely on URLs as a user-facing security mechanism?

Speakers

Emily Stark

Software Engineer, Google Inc.
Emily Stark leads the Google Chrome usable security team, which is responsible for helping users and developers make safe decisions on the web. Her work includes promoting HTTPS adoption, making HTTPS more usable and secure, and improving many of Chrome's user-facing security and... Read More →


Tuesday January 29, 2019 3:00pm - 3:30pm
Grand Peninsula Ballroom ABCD

3:30pm

Break with Refreshments
Tuesday January 29, 2019 3:30pm - 4:00pm
Grand Peninsula Foyer

4:00pm

Mobile App Privacy Analysis at Scale
Mobile platforms have enabled third-party app ecosystems that provide users with an endless supply of rich content. At the same time, mobile devices present very serious privacy risks: their ability to capture real-time data about our behaviors and preferences has created a marketplace for user data that most consumers are simply unaware of. In this talk, I will present research that my research group has conducted to automatically examine the privacy behaviors of mobile apps. Using analysis tools that we developed, we have tested over 80,000 of the most popular Android apps to examine what data they access and with whom they share it. I will present data on how mobile apps are tracking and profiling users, how these practices are often against users' expectations and public disclosures, and how app developers may be violating various privacy regulations.

The main takeaway from this talk is that there are many stakeholders who can be doing more to improve privacy on mobile platforms: (1) mobile app developers need to better understand the privacy behaviors of the third-party SDKs that they use, as well as to better communicate their privacy practices to their users; (2) the providers of third-party services (e.g., SDKs) and platforms need to do a better job of enforcing their own terms of service; (3) and regulators need tools that allow them to proactively audit compliance.
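
As a simplified, hypothetical example of one building block in this kind of analysis, the snippet below lists the sensitive permissions an Android app requests by parsing its decoded AndroidManifest.xml; the "sensitive" set and file path are illustrative, and the tooling described in the talk goes much further.

```python
# Simplified, hypothetical example of one step in an app-privacy pipeline:
# list the sensitive permissions an Android app requests by parsing its
# decoded AndroidManifest.xml (e.g., after apktool). The "sensitive" list is
# illustrative; the analysis described in the talk goes much further.
import xml.etree.ElementTree as ET

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

SENSITIVE = {
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.READ_CONTACTS",
    "android.permission.RECORD_AUDIO",
    "android.permission.READ_PHONE_STATE",
}

def requested_permissions(manifest_path):
    root = ET.parse(manifest_path).getroot()
    return {
        name
        for elem in root.iter("uses-permission")
        if (name := elem.get(ANDROID_NS + "name"))
    }

perms = requested_permissions("AndroidManifest.xml")
print("Sensitive permissions requested:", sorted(perms & SENSITIVE))
```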

Speakers

Serge Egelman

University of California, Berkeley, and International Computer Science Institute (ICSI)
Serge Egelman is the Research Director of the Usable Security and Privacy group at the International Computer Science Institute (ICSI), which is an independent research institute affiliated with the University of California, Berkeley. He conducts research to help people make more... Read More →


Tuesday January 29, 2019 4:00pm - 4:30pm
Grand Peninsula Ballroom ABCD

4:30pm

Insider Attack Resistance in the Android Ecosystem
The threat model for a mobile device ecosystem is complex. In addition to the obvious physical attacks on lost or stolen devices and malicious code threats, typical mobile devices integrate a significant amount of code from different organizations into their system images, which are in turn executed on an increasingly complex hardware infrastructure. Both benign mistakes and malicious attacks could happen on any of these layers, by any of these organizations. Therefore, users as well as app developers and service providers currently have to trust every single one of these organizations. Note that OEMs (original equipment manufacturers) in their role as integrators typically verify their supply chain and the components they integrate. However, there are also other parties in the full chain that can tamper with devices after they leave an OEM and before they are in the hands of users. In summary, many people could—by honest mistake or malicious intent—tamper with components of a modern smartphone to compromise user security. We call such attacks insider attacks, independently of the motivation or association of these insiders. The basic threat is that insiders have privileged access to some components during the manufacturing or update chain that would allow them to make modifications that third parties could not.

This talk will introduce the complexity of the insider attack problem (which is not unique to Android) and introduce some defenses that have already been put in place. In Android, we counter such insider attacks on multiple levels and aim to remove or limit the capability of insiders to harm users, which implies limiting the required trust in many of the involved parties. At the secure hardware level, Android 9 (Pie) introduced insider attack resistance (IAR) for updates to tamper-resistant hardware such as secure elements, which are used to validate the user knowledge factor in authentication and for deriving, storing, and using cryptographic key material. Even Google and the respective OEM are technically incapable of distributing modified firmware to such tamper-resistant hardware to exfiltrate user keys without the user's cooperation. On the system software level, some devices make the hash of their currently running firmware available for (anonymous) local and remote verification. The combination of these features already provides transparency at the system software level and severely limits the possibility of targeted attacks at the firmware and system software levels.

We continue to work on this problem, and this talk is partially a call to action for the security community to devise additional novel methods to mitigate insider attacks on components in the mobile device landscape.
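
As a minimal, hypothetical sketch of the firmware-transparency idea mentioned above, the snippet below compares the digest of a reported firmware image against a published known-good value; real devices derive the reported value from verified-boot state, and the file paths here are placeholders.

```python
# Minimal, hypothetical sketch of the firmware-transparency idea: compare the
# digest of the firmware a device reports it is running against a published
# known-good value for that build. Paths are placeholders; real devices derive
# the reported value from verified-boot state rather than a file on disk.
import hashlib

def image_digest(path, chunk=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

reported = image_digest("system.img")                     # device-reported image
known_good = image_digest("system-build-reference.img")   # vendor-published build
print("firmware verified" if reported == known_good else "MISMATCH: investigate")
```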

Speakers

René Mayrhofer

Google
René Mayrhofer is currently heading the Android Platform Security team and tries to make recent advances in usable, mobile security research available to the Billions of Android users. He is on leave from the Institute of Networks and Security at Johannes Kepler University Linz (JKU... Read More →


Tuesday January 29, 2019 4:30pm - 5:00pm
Grand Peninsula Ballroom ABCD

5:30pm

Conference Reception
Sponsored by Netflix

Tuesday January 29, 2019 5:30pm - 7:00pm
Atrium

7:00pm

An Evening with EFF
Join EFF's General Counsel and Deputy Executive Director Kurt Opsahl and Security Researcher Yomna Nasser for a discussion of EFF's work and the future of the internet.

Tuesday January 29, 2019 7:00pm - 8:00pm
Sandpebble Room CDE

8:00pm

USENIX Women in Advanced Computing (WiAC) BoF
Let’s talk about women in advanced computing. All ­registered ­attendees—of all genders—are welcome to attend this BoF.

Tuesday January 29, 2019 8:00pm - 9:00pm
Sandpebble Room CDE
 
Wednesday, January 30
 

8:00am

Continental Breakfast
Wednesday January 30, 2019 8:00am - 8:55am
Grand Peninsula Foyer

8:00am

Badge Pickup
Wednesday January 30, 2019 8:00am - 12:00pm
Grand Peninsula Foyer

8:55am

9:00am

Digital Authoritarianism, Data Protection, and the Battle over Information Control
Authoritarian regimes increasingly integrate automated bots, digital trolls, and cyber warriors to achieve a broad range of objectives, including data theft, destruction, and manipulation. This strategy for information control and dominance is no longer limited to major power nation-states. It is increasingly diffusing to smaller states as well as a range of non-state actors, and has impacted international events ranging from multi-state economic boycotts to election interference across the globe. As it proliferates, this modern authoritarian playbook is also restructuring global regimes and defining global norms pertaining to security and privacy in the absence of a strong and resilient democratic model.

To counter the proliferation of this authoritarian model, a major, strategic overhaul of information security within democracies is required. It is time for a strategic renaissance in information security. This requires removing the stovepipes that divide information operations and cybersecurity, and avoiding conceptual stretching in favor of greater specificity in the terminology and strategy used to modernize the democratic playbook. Importantly, this reimagination must be in sync with technological and social changes, and provide a democratic alternative to the authoritarian model that is increasingly taking a global stronghold.

I will first provide an overview of the major innovations across bots, trolls, and warriors, including specific use cases of their integration as a holistic strategy. Next, I will address how this authoritarian model is restructuring the international system, shaping global norms and internet standards, and redefining acceptable behavior in war and peace. Finally, I will offer recommendations for the path ahead given this shifting international landscape, and what the private and public sectors within democracies should do as the digital defenders of security, privacy, and individual freedoms.

Speakers

Andrea Little Limbago

Virtru
Dr. Andrea Little Limbago is a computational social scientist specializing in the intersection of technology, national security, and society. She currently is the Chief Social Scientist at Virtru, an encryption and data privacy software company, where she researches and writes on... Read More →


Wednesday January 30, 2019 9:00am - 9:30am
Grand Peninsula Ballroom ABCD

9:30am

Mr. Lord Goes to Washington, or Applying Security outside the Tech World
Over the past year, I have had the honor of applying some of my experiences securing large enterprises to a new domain: a major political party. Along the way, I dealt with phishing attacks (including one you have already read about), helped roll out best practices to a decentralized party ecosystem, and encountered disinformation campaigns. In this talk, I’ll present my findings, many of which apply to any small or medium-sized business, as well as a number of suggestions for people building tech products.

Speakers

Bob Lord

CSO Democratic National Committee
Bob Lord is the Chief Security Officer at the Democratic National Committee, bringing more than twenty years of experience in the information security space to the Committee, state parties, and campaigns. Previously he was Yahoo’s CISO, covering areas such as risk management, product... Read More →


Wednesday January 30, 2019 9:30am - 10:00am
Grand Peninsula Ballroom ABCD

10:00am

Sponsor Showcase
Say hello to our exhibiting sponsors and partners Amazon, Dropbox, EFF, Google, Shopify, Tanium, and Uber.

Wednesday January 30, 2019 10:00am - 2:00pm
Grand Peninsula Ballroom EFG

10:30am

Break with Refreshments
Wednesday January 30, 2019 10:30am - 11:00am
Grand Peninsula Foyer

11:00am

Moving Fast and Breaking Things: Security Misconfigurations
Nowadays, security incidents have become a familiar "nuisance," and they regularly lead to the exposure of private and sensitive data. In practice, the root causes for such incidents are rarely complex attacks. Instead, they are enabled by simple misconfigurations, such as authentication not being required, or security updates not being installed. For example, the leak of over 140 million Americans' private data from Equifax's systems is among the most severe misconfigurations in recent history: The underlying vulnerability was long known, and a security patch had been available for months, but it was never applied. Ultimately, Equifax blamed an employee for forgetting to update the affected system, highlighting his personal responsibility.

In this talk, we investigate the operators' perspective on security misconfigurations to approach the human component of these security issues. We focus on system operators, because they are, ultimately, the ones being made responsible for the misconfigurations. Yet, they might not actually be a security issue's root cause, but other organizational factors might have led to it. We provide an analysis of system operators' perspective on security misconfigurations, and we determine the factors that operators perceive as the root causes. Finally, based on our findings, we provide practical recommendations on how to reduce security misconfigurations' frequency and impact.

Speakers

Kevin Borgolte

Princeton University
Kevin Borgolte is a postdoctoral research scientist at Princeton University in the Department of Computer Science and the Center for Information Technology Policy. His research interests span network and system security, currently focused on large-scale Internet abuse, IPv6 security... Read More →


Wednesday January 30, 2019 11:00am - 11:30am
Grand Peninsula Ballroom ABCD

11:30am

Stethoscope: Securely Configuring Devices without Systems Management
Insecurely configured endpoints are a major risk for both organizations and individuals, one that is particularly hard to address in an increasingly bring-your-own-device world. Netflix works with hundreds of individual contractors, companies, vendors, and other third parties who need access to corporate data and services. These third parties often have their own devices, which Netflix does not own and cannot control, yet must secure.

To address these issues, we developed the Stethoscope native app, a tool which recommends to the user configuration changes to improve the security of their device and optionally allows organizations to verify device configuration at authentication time. The app, designed to avoid the operational burdens and risks of traditional systems management tooling, does not require administrator access, is read-only, and is open-source. It guides users through securely configuring their device while providing the context they need to understand why these changes are important. Incorporating Stethoscope into an endpoint strategy helps provide security without the need to fully control or own devices.
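
As a rough, hypothetical sketch of the read-only, no-administrator-access checks this approach relies on (not the Stethoscope app's actual code, which is open source), the snippet below inspects two macOS settings and only reports recommendations to the user.

```python
# Rough, hypothetical sketch of read-only checks in this spirit (not the
# Stethoscope app's actual code): inspect a couple of macOS settings without
# administrator access and only report recommendations to the user.
import subprocess

def check(name, cmd, predicate):
    """Run a read-only command and report whether its output passes."""
    try:
        out = subprocess.run(cmd, capture_output=True, text=True, timeout=10).stdout
    except (OSError, subprocess.TimeoutExpired):
        print(f"[skip] {name}: could not run {' '.join(cmd)}")
        return
    print(f"[{'ok' if predicate(out) else 'action recommended'}] {name}")

check("Disk encryption (FileVault)",
      ["fdesetup", "status"],
      lambda out: "FileVault is On" in out)
check("Automatic update checks",
      ["defaults", "read", "/Library/Preferences/com.apple.SoftwareUpdate",
       "AutomaticCheckEnabled"],
      lambda out: out.strip() == "1")
```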

Speakers

Andrew M. White

Andrew worked on user-focused security and behavioral analytics for anomaly detection at Netflix. He holds a PhD in Computer Science from the University of North Carolina at Chapel Hill; his dissertation dealt primarily with mitigating and exploiting side channels in encrypted network... Read More →


Wednesday January 30, 2019 11:30am - 12:00pm
Grand Peninsula Ballroom ABCD

12:00pm

When the Magic Wears Off: Flaws in ML for Security Evaluations (and What to Do about It)
Academic research on machine learning-based malware classification appears to leave very little room for improvement, boasting F1 performance figures of up to 0.99. Is the problem solved? In this talk, we argue that there is an endemic issue of inflated results due to two pervasive sources of experimental bias: spatial bias, caused by distributions of training and testing data not representative of a real-world deployment, and temporal bias, caused by incorrect splits of training and testing sets (e.g., in cross-validation) leading to impossible configurations. To overcome this issue, we propose a set of space and time constraints for experiment design. Furthermore, we introduce a new metric that summarizes the performance of a classifier over time, i.e., its expected robustness in a real-world setting. Finally, we present an algorithm to tune the performance of a given classifier. We have implemented our solutions in TESSERACT, an open source evaluation framework that allows a fair comparison of malware classifiers in a realistic setting. We used TESSERACT to evaluate two well-known malware classifiers from the literature on a dataset of 129K applications, demonstrating the distortion of results due to experimental bias and showcasing significant improvements from tuning.
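
The sketch below illustrates the temporal-bias point on synthetic data: a random split lets the classifier train on "future" samples and looks optimistic, while a split that trains strictly on the past and tests on the future is the realistic setting. The data, features, and classifier are stand-ins, not the TESSERACT evaluation itself.

```python
# Synthetic illustration of the temporal-bias point: a random split lets the
# classifier train on "future" samples and looks optimistic, while training
# strictly on the past and testing on the future is the realistic setting.
# Data, features, and classifier are stand-ins, not the TESSERACT evaluation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 3000
first_seen = np.sort(rng.integers(0, 3 * 365, size=n))  # days, already ordered
drift = first_seen / first_seen.max()                    # behavior changes over time
X = rng.normal(size=(n, 20)) + drift[:, None]
y = (X[:, 0] + 0.5 * drift + rng.normal(scale=0.5, size=n) > 0.8).astype(int)

# Random split: a temporally impossible configuration (trains on the future).
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
rand_model = RandomForestClassifier(random_state=0).fit(Xtr, ytr)
rand_f1 = f1_score(yte, rand_model.predict(Xte))

# Temporal split: train strictly on the past, test on the future.
cut = int(0.7 * n)
temp_model = RandomForestClassifier(random_state=0).fit(X[:cut], y[:cut])
temp_f1 = f1_score(y[cut:], temp_model.predict(X[cut:]))

print(f"F1 with random split:   {rand_f1:.2f}")
print(f"F1 with temporal split: {temp_f1:.2f}  (typically lower, and more realistic)")
```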

Speakers

Lorenzo Cavallaro

King's College London
Lorenzo Cavallaro is a Full Professor of Computer Science, Chair in Cybersecurity (Systems Security) in the Department of Informatics at King's College London, where he leads the Systems Security Research Lab. He received a combined BSc-MSc (summa cum laude) in Computer Science from... Read More →


Wednesday January 30, 2019 12:00pm - 12:30pm
Grand Peninsula Ballroom ABCD

12:30pm

Lunch
Wednesday January 30, 2019 12:30pm - 1:30pm
Atrium

1:30pm

If Red Teaming Is Easy: You're Doing It Wrong
Red Teaming is a popular approach for both internal security teams and external contractors to emulate real-world attacks and improve defenses. Going beyond the pentest model, Red Teaming delivers inarguable results that critically inform detection, prevention, and response for an organization's security. However, it is often thought of as the "easy" side of InfoSec, and many Red Teams operate on a "win and go home" model. It can be quite easy, but if it is, you're not achieving the true goal: improved security at an organization or company via an adversarial perspective.

In this talk, Aaron will explore how proper Red Teaming can be extremely challenging: it often requires understanding how an organization functions, knowing how to attack different technology stacks, and even exploring business risks, insider threats, and abuse. To have an impact or achieve a compromise, sometimes a team may need to understand the target areas better than the people who create or maintain them. However popular Red Teaming is now, and whatever is being targeted, we're only scratching the surface of what is possible.

Speakers

Aaron Grattafiori

Facebook
Aaron Grattafiori leads the Red Team at Facebook, where he focuses on offensive security, vulnerability research, adversary simulation, and performing bold full scope operations. Previously, Aaron was a principal consultant and research lead at iSEC Partners/NCC Group for many years... Read More →


Wednesday January 30, 2019 1:30pm - 2:00pm
Grand Peninsula Ballroom ABCD

2:00pm

Why Even Experienced and Highly Intelligent Developers Write Vulnerable Code and What We Should Do about It
Despite the best efforts of the security community, vulnerabilities in software are still prevalent, with new ones reported daily and older ones recurring. One potential source of these vulnerabilities is API misuse. Developers (as human beings) tend to use shortcuts in their decision-making. They also generally trust APIs, but can misuse them, introducing vulnerabilities. We call the causes of such misuses blindspots. For example, some developers still experience blindspots on the implications of using strcpy(), which can lead to buffer overflows.

We investigated API blindspots from a developers’ perspective to: (1) determine the extent to which developers can detect API blindspots in code and (2) examine how developer characteristics (i.e., perception of code correctness, familiarity with code, confidence, professional experience, cognitive functioning levels, and personality) affect this capability. We conducted a study with 109 developers from four countries solving programming tasks involving Java APIs known to cause blindspots in developers. We found that (1) the presence of blindspots correlated negatively with developers’ ability to identify vulnerabilities in code, and this effect was more pronounced for I/O-related APIs and for code with higher cyclomatic complexity; (2) higher cognitive functioning and more programming experience did not predict better ability to detect software vulnerabilities in code; and (3) developers exhibiting greater openness as a personality trait were more likely to detect software vulnerabilities.

The insights from this study and this talk have the potential to advance API security and software development processes. The design of new API functions should leverage developer studies to test for misconceptions in API usage. The documentation of legacy functions should address common blindspots developers experience when using the function. Software security training should highlight that (1) even expert, experienced, and highly intelligent developers will experience blindspots while using APIs, (2) perceptions and "gut feelings" might be misleading, and (3) developers should rely more on diagnostic tools.

This talk will also highlight that the rationale of many software development companies, namely that developers should and can address functionality and security simultaneously and that hiring experts will substantially increase software security, might be misleading. Both tasks (functionality and security) are highly cognitively demanding, and attempting to address both might be a zero-sum game, even for experts. Our insights have the potential to create awareness, especially among small and medium-sized software development companies, that having separate teams address functionality and security might be a much more cost-effective paradigm for increasing software security than sole reliance on experts who are expected to "do it all".

Speakers

Daniela Seabra Oliveira

University of Florida
Daniela Seabra Oliveira is an Associate Professor in the Department of Electrical and Computer Engineering at the University of Florida. She received her B.S. and M.S. degrees in Computer Science from the Federal University of Minas Gerais in Brazil. She then earned her Ph.D. in Computer... Read More →


Wednesday January 30, 2019 2:00pm - 2:30pm
Grand Peninsula Ballroom ABCD

2:30pm

How to Predict Which Vulnerabilities Will Be Exploited
The rate at which software vulnerabilities are discovered is growing: the National Vulnerability Database includes over 100,000 vulnerabilities, and 10% of these entries were added in the last year. Very few of these vulnerabilities are exploited in real-world attacks, yet the exploits can compromise millions of hosts around the world and can disrupt businesses and critical services.

This talk will discuss what we have learned about vulnerability exploitation by analyzing data from 10 million hosts. These hosts, used by real people around the world and targeted by real attackers, give us an opportunity to quantify the impact of software vulnerabilities on a global scale. Our measurements also allow us to infer statistically which vulnerabilities are likely to be exploited in the wild—before finding the corresponding exploits.

We show that the growing rate of vulnerability discovery does not mean that software is becoming more insecure; in fact, the fraction of vulnerabilities that are exploited follows a decreasing trend. At the same time, popular vulnerability metrics, such as the CVSS score, have a low correlation with the vulnerabilities that are ultimately exploited in the real world. It is difficult to guess why hackers exploit some vulnerabilities and not others, because this decision is influenced by a variety of socio-technical factors. However, we can combine features derived from the technical characteristics of a vulnerability, such as its CVSS score, with features extracted from social media, which reflect how information about the vulnerability spreads among hackers, security researchers and system administrators. Additionally, we can take into account variations in the rates at which vulnerable hosts are patched, after the patch becomes available. By combining these factors into predictive models, we can determine which vulnerabilities present a higher risk of exploitation, and, for some vulnerabilities, we can infer the existence of zero-day exploits on the day of disclosure.
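
As a hedged, synthetic illustration of this modeling approach (not the talk's actual models or data), the sketch below combines a CVSS-like severity score, a social-chatter count, and a patching-speed signal into a simple classifier and evaluates how well it ranks vulnerabilities by exploitation risk.

```python
# Synthetic illustration of the modeling approach (not the talk's models or
# data): combine a CVSS-like severity score, social-media chatter, and a
# patching-speed signal to rank vulnerabilities by exploitation risk.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000
cvss = rng.uniform(0, 10, n)           # base severity score
chatter = rng.poisson(2, n)            # mentions on social media and forums
unpatched = rng.uniform(0, 1, n)       # fraction of hosts still unpatched

# Synthetic ground truth: exploitation is rare and only loosely tied to CVSS.
logit = -5 + 0.15 * cvss + 0.6 * chatter + 1.5 * unpatched
exploited = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([cvss, chatter, unpatched])
Xtr, Xte, ytr, yte = train_test_split(X, exploited, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
print("ROC AUC:", round(roc_auc_score(yte, model.predict_proba(Xte)[:, 1]), 3))
print("Weights (cvss, chatter, unpatched):", model.coef_.round(2))
```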

Our predictive models are the result of five years of academic research, and they represent a step toward answering the question "What are the odds that you will get hacked tomorrow?" Along with recent advances on predicting other types of security incidents, these techniques help us assess objectively the impact of various defensive technologies on security in the real world. Such predictive models allow companies to determine their biggest risks and the best mitigations by using data, rather than expert opinions. They also provide evidence for cyber policymaking, and they can be applied to risk modeling in cyber insurance.

Speakers

Tudor Dumitras

University of Maryland, College Park
Tudor Dumitraș is an Assistant Professor in the Electrical & Computer Engineering Department at the University of Maryland, College Park. His research focuses on data-driven security: he studies real-world adversaries empirically, he builds machine learning systems for detecting... Read More →


Wednesday January 30, 2019 2:30pm - 3:00pm
Grand Peninsula Ballroom ABCD

3:00pm

Break with Refreshments
Wednesday January 30, 2019 3:00pm - 3:30pm
Grand Peninsula Foyer

3:30pm

Physical OPSEC as a Metaphor for Infosec
Being an Infosec professional kind of forces you to be a jack of all trades. It helps to develop a mindset where analyzing risk becomes second nature. Daily security and risk assessment decisions are an excellent exercise and will help build security muscle memory, and it benefits you professionally and personally. The premise is simple - I will outline what I do from a physical OPSEC standpoint when I travel or am just out and about, and you will reflect (with nudging) on my metaphors. We all do it to a certain extent, so why not consciously put it to the test? What better way to start the process than while traveling to a security conference?

Speakers

Mark Loveless

Mark Loveless—aka Simple Nomad—is a security researcher, hacker, and explorer. He has worked in startups, large corporations, hardware and software vendors, and even a government think tank. He has spoken at numerous security and hacker conferences worldwide on security and privacy... Read More →


Wednesday January 30, 2019 3:30pm - 4:00pm
Grand Peninsula Ballroom ABCD

4:00pm

Something You Have and Someone You Know—Designing for Interpersonal Security
While a variety of strategies for threat modeling exist, they largely share two assumptions: that the attacker is remote and sophisticated. Given the military origins of the security community, it is not surprising that by default, we tend to focus on the types of threats that face an organization, instead of the types of threats that face individuals. As a result of my own work with survivors of domestic violence, as well as others' findings about individuals' security and privacy concerns, I suggest a new threat framework–The Interpersonal Threat Model–that provides a completely different set of assumptions about an attacker’s capabilities and motivations than a more traditional Organizational Threat Model. This is a call for the security and privacy communities to consider interpersonal threats–those that stem from people with whom we cohabitate or share devices–when designing consumer-facing technology. In doing so, perhaps we can begin to better address the concerns of everyday people and offer solutions for at-risk populations.

Speakers

Periwinkle Doerfler

New York University
Periwinkle Doerfler is a PhD Candidate at NYU Tandon School of Engineering within the Center for Cyber Security, advised by Prof. Damon McCoy. Her research focuses on the intersection of intimate partner violence and technology. She looks at this issue with regard to abusers, and... Read More →


Wednesday January 30, 2019 4:00pm - 4:30pm
Grand Peninsula Ballroom ABCD

4:30pm

Closing Remarks
Wednesday January 30, 2019 4:30pm - 4:45pm
Grand Peninsula Ballroom ABCD
 
