Research

Behavioral biometrics

While encryption aims to protect a message from eavesdropping third parties, the time a message was sent may itself reveal unintended information about the sender or the message contents. This situation can occur when messages are transmitted through a network on behalf of a human user interacting with a computer interface, such as through a keyboard or touchscreen. In real-time client/server applications, such as ssh in interactive mode or Google search, network traffic flows from the client to the server upon each user event (e.g., keystroke). The time intervals between successive messages constitute the user’s temporal behavior, and the aggregate series of message timestamps opens the door to a human-based timing side channel whereby information about the sender or message contents is revealed by the timestamps alone. This is due in part to two phenomena in human-computer interaction (HCI).
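To make the side channel concrete, the sketch below extracts inter-arrival intervals from a hypothetical series of observed packet timestamps; the timestamp values are fabricated for illustration, but the successive differences are exactly the temporal behavior an eavesdropper can measure even when the payload is encrypted.

```python
# Hypothetical packet timestamps (seconds) observed on the network,
# one per user event such as a keystroke. Values are illustrative.
timestamps = [0.000, 0.142, 0.251, 0.610, 0.733, 0.901]

# Successive differences form the user's temporal behavior as seen
# by a passive observer, independent of the message contents.
intervals = [t2 - t1 for t1, t2 in zip(timestamps, timestamps[1:])]
print([round(dt, 3) for dt in intervals])  # → [0.142, 0.109, 0.359, 0.123, 0.168]
```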

Individual users exhibit unique and identifiable temporal behavior when interacting with a computer interface
, resulting from individual factors such as motor skill, cognitive ability, and physiology. Interaction with keyboards, touchscreens, and other HCI modalities enables user identification and demographic labeling to be performed continuously. Keystroke timings alone may indicate age, gender, handedness, and language fluency. Because these events manifest at the network level, user identification can be performed passively and remotely.
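A toy sketch of how inter-keystroke intervals could serve as a verification biometric: a stored per-user template of interval means and standard deviations is compared to a probe sample with a scaled Manhattan (z-score) distance, one common choice for this task. All data, names, and values below are fabricated for illustration, not drawn from any real system.

```python
from statistics import mean, stdev

def build_template(samples):
    """Per-feature mean and stdev over enrollment samples (rows = trials)."""
    features = list(zip(*samples))
    return [(mean(f), stdev(f)) for f in features]

def distance(template, probe):
    """Average z-score distance between a probe sample and the template."""
    return sum(abs(x - m) / s for (m, s), x in zip(template, probe)) / len(probe)

# Enrollment: inter-keystroke intervals (seconds) from one user typing a phrase.
enroll = [[0.11, 0.25, 0.18],
          [0.12, 0.24, 0.20],
          [0.10, 0.26, 0.19]]
tmpl = build_template(enroll)

print(distance(tmpl, [0.11, 0.25, 0.19]))  # genuine attempt: small distance
print(distance(tmpl, [0.30, 0.10, 0.40]))  # impostor attempt: large distance
```

Verification then reduces to thresholding the distance; identification repeats the comparison against every enrolled template.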

An entire population can be expected to operate a computer interface within certain temporal constraints
, enabling an adversary to make general inferences about message contents based on remotely observed temporal behavior. HCI behavior largely follows general rules, such as Fitts’ Law in navigating the mouse pointer on a computer screen. The keystrokes of a typist can be recovered with considerable accuracy from timestamps alone, owing to the predictable behavior of the touch typist: shorter time intervals usually correspond to keys that are far apart, and longer intervals to keys that are close together. Neighboring keys often force the reuse of the same finger or hand, while distant keys can be pressed in quicker succession by alternating hands.
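Fitts’ Law, mentioned above, predicts movement time as MT = a + b·log2(D/W + 1) for a target of width W at distance D. A minimal sketch, with coefficients a and b chosen purely for illustration:

```python
import math

def fitts_mt(distance, width, a=0.05, b=0.12):
    """Predicted movement time (s) via Fitts' Law; a, b are illustrative."""
    return a + b * math.log2(distance / width + 1)

# Near, large targets are acquired faster than distant, small ones.
print(round(fitts_mt(distance=100, width=50), 3))
print(round(fitts_mt(distance=400, width=10), 3))
```

The same population-level regularity that makes movement times predictable is what lets an observer run the inference in reverse, from observed times back to likely targets.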

My research aims to address two problems that stem from the above phenomena: 
  1. Examine how time intervals alone can be utilized to perform user identification, verification, continuous verification, and message content reconstruction. Due to the first phenomenon above, temporal behavior is thought to be unique to an individual user, so the time intervals between events can be utilized as a behavioral biometric. Alternatively, general behavior established in a larger population may enable message contents to be inferred from a user's temporal behavior. This is analogous to the way a timing side channel enables encryption keys to be efficiently recovered, relying on the notion that certain math operations take more or less time to complete depending on the bit pattern of the encryption key.
  2. Develop temporal obfuscation methods that mitigate the effectiveness of time interval biometrics while satisfying real-time or near real-time constraints. Given the ubiquity of timestamped events from human behavior, especially those that occur during HCI, the ability to uniquely identify a user or partially recover the contents of a message presents a privacy concern. In many cases, timestamps can be observed without user cooperation or knowledge, further exacerbating this concern. Since temporal obfuscation requires introducing a delay to each event by temporarily storing the event in a buffer, there is a direct tradeoff between obfuscation ability and time lag between the user and the application. Related to this effort is the problem of spoofing a user's temporal behavior since this also requires modifying a series of event timings to achieve a desired temporal pattern.
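One simple obfuscation strategy consistent with the buffering tradeoff described above is to hold each event and release it only on the next fixed clock tick, so the observed inter-arrival times no longer reflect the user's rhythm. This is a hypothetical sketch of that idea, not a published method; the tick size is arbitrary.

```python
def quantize(event_times, tick=0.2):
    """Delay each event (time in seconds) to the next multiple of `tick`."""
    released = []
    for t in event_times:
        r = (int(t / tick) + 1) * tick
        # Events must stay in order: never release at or before a prior event.
        if released and r <= released[-1]:
            r = released[-1] + tick
        released.append(round(r, 10))
    return released

print(quantize([0.03, 0.31, 0.35, 0.90]))  # → [0.2, 0.4, 0.6, 1.0]
```

The tradeoff is explicit in the tick parameter: a larger tick erases more of the timing signal but adds more lag between the user and the application.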

Neuromorphic computing

Computing architectures are on the cusp of a fundamental shift towards concurrency as sequential models struggle to process large amounts of data in real time. Alongside this shift, neural-inspired architectures designed to operate within size, weight, and power (SWaP) constrained environments have begun to emerge. Such massively parallel architectures are designed to mimic the way the brain functions through distributed representations and event-driven computation. While the conventional von Neumann architecture suffers from a communication bottleneck imposed by memory physically separated from the central processing unit (CPU), neural-inspired architectures can achieve extremely high processing throughput and power efficiency by co-locating memory with neuron or neuron-like computation units. In addition, the atomicity and simplicity of neuron-like units opens the door to exploiting novel materials for computation, moving beyond silicon and digital architectures. There is no doubt that such architectures could be utilized in a wide range of applications where low power consumption and high processing throughput are needed. However, programming such architectures remains a challenge. My research in neuromorphic computing currently spans two areas aimed at increasing the programmability and application space of neuromorphic architectures.
  1. The integration of symbolic and subsymbolic processing on a unified neuromorphic architecture. Spiking neural networks (SNN) traditionally operate on numerical data (subsymbolic), but can we configure an SNN to operate on or with structured data (symbolic)? The work in this area aims to develop addressable and structured memory in SNNs which can be integrated with both symbolic processing and learning-based systems on current and near-future neuromorphic architectures.
  2. The use of neural processing units (NPUs) to solve computationally hard problems that are infeasible or inefficient on a von Neumann architecture. This effort aims to leverage the unique characteristics of neuromorphic architectures, such as massive parallelism and constant-time synaptic integration. I am especially interested in how neuromorphic architectures can be used to efficiently factor integers.
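For concreteness, the neuron-like unit underlying the spiking architectures discussed above can be sketched as a discrete-time leaky integrate-and-fire (LIF) step: the membrane potential leaks, integrates all weighted input spikes in a single step, and fires when it crosses a threshold. All parameters here are arbitrary illustrative values.

```python
def lif_step(v, spikes_in, weights, leak=0.9, threshold=1.0):
    """One time step: leak the potential, integrate weighted spikes, fire."""
    v = v * leak + sum(w for s, w in zip(spikes_in, weights) if s)
    if v >= threshold:
        return 0.0, 1        # reset potential and emit an output spike
    return v, 0

# Drive one neuron with two input lines over three time steps.
v, out = 0.0, 0
for spikes in [[1, 0], [1, 1], [0, 1]]:
    v, out = lif_step(v, spikes, weights=[0.4, 0.5])
    print(v, out)
```

The summation over inputs is the synaptic integration step; on neuromorphic hardware it happens in parallel across synapses, which is the constant-time property these research directions aim to exploit.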