2nd Annual Embedded Security Workshop
Thank you to everyone who attended the 2020 Embedded Security Workshop!
The notes from the workshop sessions are available here: 2020 EmSec Workshop Notes
Dates: August 27, 2020, 1PM-5PM ET and August 28, 2020, 9AM-1PM ET
Location: Ann Arbor, Michigan, USA
Participant cost: Free!*
Registration deadline: August 21, 2020
Note: A virtual workshop via Zoom
The second annual Embedded Security Workshop was hosted by Prof. Kevin Fu of the SPQR Lab at the University of Michigan. The workshop consisted of a diverse set of talks by leading faculty and industrial researchers from across the world, ranging from IoT security to medical device security to the physics of analog sensor cybersecurity. Students gave lightning pitches of their research and participated in small breakout groups for discussion. The speakers ranged from industrial researchers to practicing neurosurgeons. The event focused on improving diversity and inclusion in leading research on embedded security. The previous embedded security workshop took place at USENIX Security in cooperation with the CCC, leading to publication of a national CCC embedded security report on behalf of the research community.
2020 has been a particularly challenging year for diversity and equity. The members of the SPQR Lab met to discuss how we could do more than simply offer words and instead take meaningful action to increase diversity and inclusion and foster a more welcoming environment for embedded security research. For instance, at our weekly reading group, we discussed academic papers authored by researchers from underrepresented groups, such as BIPOC (Black, Indigenous, and People of Color) researchers and women in STEM. This workshop is another outcome of those discussions.
What can participants expect?
*There is no fee to attend this workshop, thanks to our sponsors. Instead, participants must apply with a statement on how they actively engage in encouraging diversity and inclusion in embedded security research. This interactive workshop, with its moderated breakout sessions, has a limited number of spots. The program organizers will select participants based on application statements on (1) how the research workshop will help their academic careers, and (2) how the participant helps foster the mission of diversity and inclusion for embedded security research. Ready to apply now?
Keynote Speakers:
Dr. Kevin Kornegay; Dr. Erika Petersen, MD; Paul Kocher
Sponsors
Qualcomm and Google would like our workshop attendees to know that they strongly support diversity and inclusion. We appreciate Qualcomm and Google as workshop co-sponsors! This effort is sponsored in part by the National Science Foundation under CNS-2031077 and CNS-1330142. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the sponsors.
Speakers
Kevin Kornegay
Kevin T. Kornegay received the B.S. degree in electrical engineering from Pratt Institute, Brooklyn, NY, in 1985 and the M.S. and Ph.D. degrees in electrical engineering from the University of California at Berkeley in 1990 and 1992, respectively. He is currently the IoT Security Professor and Director of the Cybersecurity Assurance and Policy (CAP) Center for Academic Excellence in the Electrical and Computer Engineering Department at Morgan State University in Baltimore, MD. His research interests include hardware assurance, reverse engineering, secure embedded systems, side-channel analysis, and differential fault analysis. Dr. Kornegay serves or has served on the technical program committees of several international conferences, including the IEEE Symposium on Hardware Oriented Security and Trust (HOST), the IEEE Secure Development Conference (SecDev), USENIX Security 2020, the IEEE Physical Assurance and Inspection of Electronics (PAINE), and the ACM Great Lakes Symposium on VLSI (GLSVLSI). He is the recipient of numerous awards, including the NSF CAREER Award, IBM Faculty Partnership Award, National Semiconductor Faculty Development Award, and the General Motors Faculty Fellowship Award. He is a senior member of the IEEE and a member of the Eta Kappa Nu and Tau Beta Pi engineering honor societies.
The mission of the Cybersecurity Assurance and Policy (CAP) Center at Morgan State University is to provide the defense and intelligence community with the knowledge, methodology, solutions, and highly skilled cybersecurity professionals to mitigate penetration and manipulation of our nation’s cyber-physical infrastructure. The Internet of Things (IoT) permeates all areas of life and work, with unprecedented economic effects. The IoT is a network of dedicated physical objects (things) whose embedded system technology senses or interacts with its internal state or external environment. Embedded systems perform dedicated functions within larger mechanical or electrical systems. Critical infrastructures in transportation, smart grid, manufacturing, and health care, etc. are highly dependent on embedded systems for distributed control, tracking, and data collection. While it is paramount to protect these systems from hacking, intrusion, and physical tampering, current solutions rely on a patchwork of legacy systems, and this is unsustainable as a long-term solution. Transformative solutions are required to protect these systems. In this talk, we will present our current research that addresses security vulnerabilities in IoT ecosystems to provide secure, resilient, and robust operation.
Erika Petersen
Dr. Petersen directs the Section of Functional and Restorative Neurosurgery at UAMS Medical Center. She is a professor in the Department of Neurosurgery at UAMS and program director for the neurosurgery residency. She is a board certified neurosurgeon whose clinical practice focuses on neuromodulation, treating movement disorders, spasticity, and chronic pain through surgical procedures and stereotactic radiosurgery. Dr. Petersen's research interests focus on developing new devices, indications, and methods for treating chronic pain using neuromodulation. Her collaborations with colleagues in interventional pain, neuroradiology, neurology and psychiatry are aimed toward identifying new applications for the use of deep brain stimulation and neuromodulation for pain. Dr. Petersen completed her undergraduate education at Princeton University and received her medical degree from the University of North Carolina at Chapel Hill. She trained in neurosurgery at the University of Texas Southwestern with a fellowship in deep brain stimulation at the National Hospital for Neurology and Neurosurgery in London. Dr. Petersen has served on the Joint Section on Pain of the American Association of Neurological Surgeons (AANS)/Congress of Neurological Surgeons (CNS) and the CNS Scientific Programming Committee and sits on the Executive Board of the American Society of Pain and Neuroscience, where she is secretary. She serves as associate editor of stereotactic and functional neurosurgery for Operative Neurosurgery, and is on the editorial board of Neuromodulation and contributes as a reviewer for several other journals. Dr. Petersen lectures frequently at national and international meetings on neuromodulation for pain, emerging considerations and uses of neuromodulation, chronic pain management and deep brain stimulation. She has authored numerous articles and book chapters related to stereotactic and functional neurosurgery and neuroscience.
Data privacy and cybersecurity should both be seriously considered for all devices that interact with our patients. Patient data privacy is a key tenet of HIPAA, but data acquired by devices outside the medical record may not be protected. Implanted neuromodulation devices for movement disorders, epilepsy, and pain are able to track and record data, which can then be downloaded to hospital, office, or manufacturer sites for review. These data can be collected and stored indefinitely, allowing for retrospective analysis, which, as our understanding of neurophysiology improves, will lead to new and unforeseen insights. For example, brainwave data collected for seizure mapping served as the basis for understanding the neurophysiology of vision, ultimately contributing to the development of a new neuromodulation device to enable sight. What today might seem like unparsable noise may be the key to a breakthrough as our neuroscience knowledge and artificial intelligence and machine learning capabilities improve. As with collection and recording from external devices, there are serious concerns about the privacy of data collected from implanted devices. There is little education of patients and clinicians about data collection and its use and possible misuse, and these considerations are rarely part of informed consent discussions. There is no business relationship between the patient and the manufacturer of their device, no statement of Terms of Service, nor discussion of how patients might opt out of data collection. Until policies are clearer, clinicians and patients must rely on the good will, morality, or ethical code of manufacturers to preserve data privacy.
Paul Kocher
Paul Kocher is an entrepreneur and researcher focused on cryptography and data security and is currently exploring independent research topics. Areas of interest include trade-offs between complexity/performance and security, as well as how computer systems could be architected to reduce the likelihood and severity of exploitable security vulnerabilities. One result of this work was the discovery of a class of vulnerabilities (which he named Spectre) arising from the use of speculative execution in microprocessors. Paul was elected to the National Academy of Engineering in 2009 for contributions to cryptography and Internet security. He's a member of the Forum on Cyber Resilience, which is a National Academies roundtable. He's also a member of the Cybersecurity Hall of Fame and is a frequent speaker on security topics.
During the now-ended performance boom, microprocessor performance optimizations brought enormous economic benefits that vastly exceeded the costs of insecurity. As CPU improvements stalled out, security costs continued to scale exponentially. As a result, we are now at the beginning of a starkly different era characterized by staggering insecurity costs and modest performance gains. This talk explores the technical and business implications of a world where security risk is the dominant issue, while performance merely needs to be good enough. Architectures will increasingly need to address messy real-world problems, such as side channels and fault attacks. Likewise, security models need to reflect realistic assumptions about the fallibility of the humans who architect, implement, test, and administer systems. Ultimately, changing constraints present a major challenge for previously-dominant companies while creating enormous opportunities for entrepreneurs.
Ryan Gerdes
Ryan M. Gerdes is an Associate Professor in the Bradley Department of Electrical and Computer Engineering at Virginia Tech. Dr. Gerdes’ work focuses on designing resilient computing systems, with an emphasis on cyber-physical systems operating in adversarial environments and leveraging the physical layer for defensive and offensive purposes. He is the principal investigator on NSF and DOE projects that examine the security and privacy of cooperative, automated vehicles; unmanned aerial systems (UAS); and next-generation battery electric vehicles and chargers.
This talk will consider attacks in three domains against safety-critical systems. 1) In vision-based object classification systems, imaging sensors perceive the environment and machine learning is then used to detect and classify objects for decision-making purposes, e.g., to maneuver an automated vehicle around an obstacle or to raise an alarm to indicate the presence of an intruder in surveillance settings. We will discuss how the perception domain can be remotely and unobtrusively exploited to enable an attacker to create spurious objects or alter an existing object. 2) Extremely fast charging (XFC), whereby electric vehicles (EVs) can be recharged in roughly the time it takes to refuel an internal combustion engine, has been proposed to alleviate range anxiety in EVs. A critical component of these chargers is the efficient and proper operation of power converters that convert AC to DC power and otherwise regulate power delivery to vehicles. We will demonstrate that, with relatively low power levels, an adversary is able to manipulate the voltage and current sensor outputs necessary to ensure the proper operation of the converters, to catastrophic effect. 3) Traffic Collision Avoidance Systems (TCAS) are safety-critical systems required on most commercial aircraft in service today. We'll examine TCAS from an adversarial perspective with the goal of inducing near mid-air collisions between current-day TCAS-equipped aircraft.
Yongdae Kim
Yongdae Kim is a Professor in the Department of Electrical Engineering and an affiliate professor in the Graduate School of Information Security at KAIST. He received his PhD from the computer science department at the University of Southern California under the guidance of Gene Tsudik. Between 2002 and 2012, he was an associate/assistant professor in the Department of Computer Science and Engineering at the University of Minnesota - Twin Cities. Before coming to the US, he worked for six years at ETRI on securing Korean cyberinfrastructure. He served as a KAIST Chair Professor between 2013 and 2016, and received an NSF CAREER award on storage security and the McKnight Land-Grant Professorship Award from the University of Minnesota in 2005. Currently, he is serving as an associate editor for ACM TOPS, and he was a steering committee member of NDSS between 2012 and 2018. His main research includes novel attacks and analysis methodologies for emerging technologies, such as cyber-physical systems (including drones and self-driving cars), 4G/5G cellular networks, and blockchain.
Sensors are essential components of sensing-and-actuation systems such as drones, self-driving cars, medical devices, and smoke detectors. Active sensors are designed to collect physical information about remote objects. In this talk, I will discuss the security of active sensors such as lidars and camera sensors for self-driving cars, drop sensors for infusion pumps, and optical beam smoke detectors (OBSDs) installed in critical infrastructure such as government buildings, airports, and museums. I will then conclude my talk with a few design considerations for future sensor designers.
Morley Mao
Z. Morley Mao is a Professor in EECS at the University of Michigan, having completed her Ph.D. at UC Berkeley on robust Internet routing protocol design and effective network measurement techniques to uncover network properties. Her current research focus encompasses software-defined networking, AV security, network security, next-generation Internet protocols, and mobile systems.
Perception plays a pivotal role in autonomous driving systems, which utilize onboard sensors like cameras and LiDARs (Light Detection and Ranging) to assess their surroundings. Recent studies have demonstrated that LiDAR-based perception is vulnerable to spoofing attacks, in which adversaries spoof a fake vehicle in front of a victim self-driving car by strategically transmitting laser signals to the victim's LiDAR sensor. However, existing attacks suffer from effectiveness and generality limitations. In this work, we perform the first study to explore the general vulnerability of current LiDAR-based perception architectures and discover that the ignored occlusion patterns in LiDAR point clouds make self-driving cars vulnerable to spoofing attacks. We construct the first black-box spoofing attack based on our identified vulnerability, which universally achieves around 80% mean success rates on all target models. We perform the first defense study, proposing CARLO to mitigate LiDAR spoofing attacks. CARLO detects spoofed data by treating ignored occlusion patterns as invariant physical features, which reduces the mean attack success rate to 5.5%. Meanwhile, we take the first step toward exploring a general architecture for robust LiDAR-based perception, and propose SVF, which embeds the neglected physical features into end-to-end learning. SVF further reduces the mean attack success rate to around 2.3%.
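To make the occlusion intuition concrete, here is a minimal bird's-eye-view sketch of the idea, not the published CARLO algorithm: a genuine obstacle should block the laser rays that hit it, so if many returns land behind a claimed object along rays that pass through its angular sector, the object is physically implausible. The function name, thresholds, and synthetic point cloud below are illustrative assumptions.

```python
# Toy illustration of the occlusion/free-space intuition behind defenses such
# as CARLO (not the published algorithm). The LiDAR sensor sits at the origin;
# a real obstacle should block rays, so returns landing *behind* a claimed
# object along rays through its angular sector suggest the object was spoofed.
import numpy as np

def see_through_ratio(points_xy, box_center, box_radius):
    """Fraction of returns in the object's angular sector that lie behind it."""
    dists = np.linalg.norm(points_xy, axis=1)
    angles = np.arctan2(points_xy[:, 1], points_xy[:, 0])
    box_dist = np.linalg.norm(box_center)
    box_angle = np.arctan2(box_center[1], box_center[0])
    half_width = np.arctan2(box_radius, box_dist)        # angular extent of the object
    diff = np.angle(np.exp(1j * (angles - box_angle)))   # wrap angle difference to [-pi, pi]
    in_sector = np.abs(diff) < half_width
    behind = dists > box_dist + box_radius
    if in_sector.sum() == 0:
        return 0.0
    return float((in_sector & behind).sum() / in_sector.sum())

# Synthetic scene: background clutter plus a claimed obstacle 10 m ahead that
# casts no occlusion shadow, as a spoofed object typically would not.
rng = np.random.default_rng(0)
clutter = rng.uniform(-30, 30, size=(500, 2))
claimed_object = np.array([10.0, 0.0])
ratio = see_through_ratio(clutter, claimed_object, box_radius=1.0)
print("likely spoofed" if ratio > 0.5 else "plausible")  # 0.5 cutoff is illustrative
```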
Wayne Burleson
Wayne Burleson has been a Professor of Electrical and Computer Engineering at the University of Massachusetts Amherst since 1990. From 2012-2017 he was a Senior Fellow at AMD Research in Boston. He has degrees from MIT and the University of Colorado. He has worked as a custom chip designer and consultant in the semiconductor industry with VLSI Technology, DEC, Compaq/HP, Intel, Rambus and AMD, as well as several start-ups. Wayne was a visiting professor at ENST Paris in 1996/97, at LIRM Montpellier in 2003 and at EPFL Switzerland in 2010/11. His research is in the general area of VLSI, including circuits and CAD for low-power, interconnects, clocking, reliability, thermal effects, process variation and noise mitigation. He also conducts research in hardware security, reconfigurable computing, content-adaptive signal processing, RFID and multimedia instructional technologies. He teaches courses in VLSI Design, Embedded Systems and Security Engineering. Wayne has published over 200 refereed publications in these areas and is a Fellow of the IEEE for contributions in integrated circuit design and signal processing.
Although RFID has been widely known for its impact on supply chain and inventory management, two of the most exciting applications from a privacy perspective are in: 1) transportation payment systems and 2) implantable medical devices. This talk presents recent research in both areas, drawing parallels but making important distinctions between the two applications. Both projects involve broad international collaborations due to the large number of technical disciplines involved, as well as varying legal and societal dimensions across different cultures. Transportation payment systems have the ability to divulge user location and hence travel habits. However, they also facilitate sophisticated dynamic fare schemes and optimization of the transportation system. Implantable medical devices contain extremely private information about personal health and habits, as well as enabling tracking and other privacy concerns. However, the ability to wirelessly access implanted devices provides enormous health and cost benefits. Both topics raise interesting cross-disciplinary issues in economics, threat models, and ethics, as well as more technical aspects of security engineering. This talk will review engineering solutions in both of these domains, including low-power cryptography, physical unclonable functions, and prototyping techniques.
Dan Holcomb
Daniel Holcomb has been an assistant professor of ECE at the University of Massachusetts Amherst since 2015. He holds an MS degree from UMass and a PhD from UC Berkeley, and was a research fellow at the University of Michigan. His research interest is in methodologies for building secure embedded systems, and his work toward this goal spans ASIC and FPGA implementations, various attacks and countermeasures, formal verification, and strategies for securing the IC supply chain. The work in his group at UMass is funded through a number of grants from industry and NSF, including an NSF CAREER award on supply chain security.
The integrated circuits (ICs) that underpin modern society are produced, distributed, and deployed in a manner that leaves them vulnerable to counterfeiting and other supply chain threats. Counterfeit chips are primarily recycled, test rejects, or legitimate but regraded parts, and have to date been found in a number of systems, including defense systems. In this talk, I will present COUNTERFOIL – a practical anti-counterfeiting system based on enrolling and authenticating intrinsic features of the molded packages that enclose most ICs sold on the market. Our approach relies on computer-readable labels, inexpensive cameras, image processing using OpenCV, and digital signatures, to enroll and verify provenance of chip packages. I will also discuss in the talk some of our earlier work in the broader field of IC fingerprinting, in which manufacturing variations of semiconductor devices are used as a mechanism for establishing trust.
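As a rough illustration of the enroll-and-verify idea, and not the actual COUNTERFOIL pipeline, the sketch below extracts image features from a photo of a package surface with OpenCV, signs the enrolled fingerprint, and later matches a field photo against it. The file names, the choice of ORB features, and all thresholds are assumptions made for this example.

```python
# Toy enroll/verify sketch in the spirit of COUNTERFOIL (not the authors' actual
# pipeline). Assumes photos of the same package region exist at "enroll.jpg"
# and "field.jpg" (hypothetical file names).
import cv2
import numpy as np
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def package_fingerprint(image_path: str) -> np.ndarray:
    """Extract ORB descriptors from a photo of the molded package surface."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=500)
    _, descriptors = orb.detectAndCompute(gray, None)
    return descriptors

# --- Enrollment (done once, e.g. by the manufacturer) ---
signing_key = ec.generate_private_key(ec.SECP256R1())
enrolled = package_fingerprint("enroll.jpg")
signature = signing_key.sign(enrolled.tobytes(), ec.ECDSA(hashes.SHA256()))

# --- Verification (done in the field) ---
public_key = signing_key.public_key()
# Raises InvalidSignature if the enrolled fingerprint was tampered with.
public_key.verify(signature, enrolled.tobytes(), ec.ECDSA(hashes.SHA256()))

observed = package_fingerprint("field.jpg")
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(enrolled, observed)
good = [m for m in matches if m.distance < 40]            # threshold is illustrative
print("package authentic?", len(good) > 0.3 * len(enrolled))  # 30% cutoff is arbitrary
```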
Takeshi Sugawara
Takeshi Sugawara received B.E., M.Is., and Ph.D. degrees from Tohoku University, Japan, in 2006, 2008, and 2011, respectively. From 2011 to 2017, he worked as a researcher at Mitsubishi Electric Corporation, where he was involved in the security of smartcards, programmable logic controllers, and automotive electronic control units. He has been an Associate Professor at The University of Electro-Communications, Tokyo, since 2017. He completed an internship at Cryptography Research Inc. in 2010 and visited the University of Michigan as a short-term scholar in 2019. He is a founding member of the SASEBO (side-channel attack standard evaluation board) project. His research interests include analog cybersecurity, lightweight cryptography, side-channel attacks, anti-counterfeiting, and hardware trojans.
Laser illumination of a microcontroller is known to cause the photoelectric effect inside the circuit, which results in computational errors. Attackers have exploited this effect to attack cryptographic modules, and a countermeasure is mandatory for particular products such as smartcards. I will talk about two research topics related to the laser injection attack. The first is its application to attacking sensors: we recently discovered that we could inject arbitrary signals into a microphone by illuminating it with a modulated laser, which enables a silent voice-command injection attack on smart speakers. The second is a sensor-based countermeasure against the laser injection attack: we developed an on-chip sensor that can be integrated into digital circuits and detects laser illumination in place.
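As a back-of-the-envelope illustration of the signal-injection idea (with assumed parameters, not the experimental setup from the talk), the sketch below amplitude-modulates an audio waveform onto a laser's optical power; the microphone's photoelectric response roughly follows that power envelope, so the injected audio appears at the device's ADC as if it were sound.

```python
# Toy sketch: amplitude-modulating an audio waveform onto a laser's optical
# power, the basic idea behind light-based signal injection into microphones.
# All numbers (sample rate, bias power, modulation depth) are assumptions.
import numpy as np

fs = 48_000                               # sample rate in Hz
t = np.arange(0, 1.0, 1 / fs)
audio = 0.5 * np.sin(2 * np.pi * 440 * t)  # stand-in for a voice command

bias_mw = 50.0                            # DC optical power keeps the diode lasing
depth = 0.8                               # modulation depth (0..1)
laser_power_mw = bias_mw * (1.0 + depth * audio)  # optical power tracks the audio

# The microphone's photoelectric response roughly follows laser_power_mw,
# so the ADC "hears" the injected 440 Hz tone without any acoustic sound.
```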