The Decoupling Principle

This is a really interesting paper that discusses what the authors call the Decoupling Principle:

The idea is simple, yet previously not clearly articulated: to ensure privacy, information should be divided architecturally and institutionally such that each entity has only the information they need to perform their relevant function. Architectural decoupling entails splitting functionality for different fundamental actions in a system, such as decoupling authentication (proving who is allowed to use the network) from connectivity (establishing session state for communicating). Institutional decoupling entails splitting what information remains between non-colluding entities, such as distinct companies or network operators, or between a user and network peers. This decoupling makes service providers individually breach-proof, as they each have little or no sensitive data that can be lost to hackers. Put simply, the Decoupling Principle suggests always separating who you are from what you do…

Continue reading The Decoupling Principle

Using Wi-FI to See through Walls

This technique measures device response time to determine distance:

The scientists tested the exploit by modifying an off-the-shelf drone to create a flying scanning device, the Wi-Peep. The robotic aircraft sends several messages to each device as it flies around, establishing the positions of devices in each room. A thief using the drone could find vulnerable areas in a home or office by checking for the absence of security cameras and other signs that a room is monitored or occupied. It could also be used to follow a security guard, or even to help rival hotels spy on each other by gauging the number of rooms in use…

Continue reading Using Wi-FI to See through Walls

Adversarial ML Attack that Secretly Gives a Language Model a Point of View

Machine learning security is extraordinarily difficult because the attacks are so varied—and it seems that each new one is weirder than the next. Here’s the latest: a training-time attack that forces the model to exhibit a point of view: Spinning Language Models: Risks of Propaganda-As-A-Service and Countermeasures.”

Abstract: We investigate a new threat to neural sequence-to-sequence (seq2seq) models: training-time attacks that cause models to “spin” their outputs so as to support an adversary-chosen sentiment or point of view—but only when the input contains adversary-chosen trigger words. For example, a spinned summarization model outputs positive summaries of any text that mentions the name of some individual or organization…

Continue reading Adversarial ML Attack that Secretly Gives a Language Model a Point of View

Inserting a Backdoor into a Machine-Learning System

Interesting research: “ImpNet: Imperceptible and blackbox-undetectable backdoors in compiled neural networks, by Tim Clifford, Ilia Shumailov, Yiren Zhao, Ross Anderson, and Robert Mullins:

Abstract: Early backdoor attacks against machine learning set off an arms race in attack and defence development. Defences have since appeared demonstrating some ability to detect backdoors in models or even remove them. These defences work by inspecting the training data, the model, or the integrity of the training procedure. In this work, we show that backdoors can be added during compilation, circumventing any safeguards in the data preparation and model training stages. As an illustration, the attacker can insert weight-based backdoors during the hardware compilation step that will not be detected by any training or data-preparation process. Next, we demonstrate that some backdoors, such as ImpNet, can only be reliably detected at the stage where they are inserted and removing them anywhere else presents a significant challenge. We conclude that machine-learning model security requires assurance of provenance along the entire technical pipeline, including the data, model architecture, compiler, and hardware specification…

Continue reading Inserting a Backdoor into a Machine-Learning System

Detecting Deepfake Audio by Modeling the Human Acoustic Tract

This is interesting research:

In this paper, we develop a new mechanism for detecting audio deepfakes using techniques from the field of articulatory phonetics. Specifically, we apply fluid dynamics to estimate the arrangement of the human vocal tract during speech generation and show that deepfakes often model impossible or highly-unlikely anatomical arrangements. When parameterized to achieve 99.9% precision, our detection mechanism achieves a recall of 99.5%, correctly identifying all but one deepfake sample in our dataset.

From an article…

Continue reading Detecting Deepfake Audio by Modeling the Human Acoustic Tract

Differences in App Security/Privacy Based on Country

Depending on where you are when you download your Android apps, it might collect more or less data about you.

The apps we downloaded from Google Play also showed differences based on country in their security and privacy capabilities. One hundred twenty-seven apps varied in what the apps were allowed to access on users’ mobile phones, 49 of which had additional permissions deemed “dangerous” by Google. Apps in Bahrain, Tunisia and Canada requested the most additional dangerous permissions.

Three VPN apps enable clear text communication in some countries, which allows unauthorized access to users’ communications. One hundred and eighteen apps varied in the number of ad trackers included in an app in some countries, with the categories Games, Entertainment and Social, with Iran and Ukraine having the most increases in the number of ad trackers compared to the baseline number common to all countries…

Continue reading Differences in App Security/Privacy Based on Country

Leaking Screen Information on Zoom Calls through Reflections in Eyeglasses

Okay, it’s an obscure threat. But people are researching it:

Our models and experimental results in a controlled lab setting show it is possible to reconstruct and recognize with over 75 percent accuracy on-screen texts that have heights as small as 10 mm with a 720p webcam.” That corresponds to 28 pt, a font size commonly used for headings and small headlines.

[…]

Being able to read reflected headline-size text isn’t quite the privacy and security problem of being able to read smaller 9 to 12 pt fonts. But this technique is expected to provide access to smaller font sizes as high-resolution webcams become more common…

Continue reading Leaking Screen Information on Zoom Calls through Reflections in Eyeglasses

On the Subversion of NIST by the NSA

Nadiya Kostyuk and Susan Landau wrote an interesting paper: “Dueling Over DUAL_EC_DRBG: The Consequences of Corrupting a Cryptographic Standardization Process“:

Abstract: In recent decades, the U.S. National Institute of Standards and Technology (NIST), which develops cryptographic standards for non-national security agencies of the U.S. government, has emerged as the de facto international source for cryptographic standards. But in 2013, Edward Snowden disclosed that the National Security Agency had subverted the integrity of a NIST cryptographic standard­the Dual_EC_DRBG­enabling easy decryption of supposedly secured communications. This discovery reinforced the desire of some public and private entities to develop their own cryptographic standards instead of relying on a U.S. government process. Yet, a decade later, no credible alternative to NIST has emerged. NIST remains the only viable candidate for effectively developing internationally trusted cryptography standards…

Continue reading On the Subversion of NIST by the NSA

Tracking People via Bluetooth on Their Phones

We’ve always known that phones—and the people carrying them—can be uniquely identified from their Bluetooth signatures, and that we need security techniques to prevent that. This new research shows that that’s not enough.

Computer scientists at the University of California San Diego proved in a study published May 24 that minute imperfections in phones caused during manufacturing create a unique Bluetooth beacon, one that establishes a digital signature or fingerprint distinct from any other device. Though phones’ Bluetooth uses cryptographic technology that limits trackability, using a radio receiver, these distortions in the Bluetooth signal can be discerned to track individual devices…

Continue reading Tracking People via Bluetooth on Their Phones

Attacking the Performance of Machine Learning Systems

Interesting research: “Sponge Examples: Energy-Latency Attacks on Neural Networks“:

Abstract: The high energy costs of neural network training and inference led to the use of acceleration hardware such as GPUs and TPUs. While such devices enable us to train large-scale neural networks in datacenters and deploy them on edge devices, their designers’ focus so far is on average-case performance. In this work, we introduce a novel threat vector against neural networks whose energy consumption or decision latency are critical. We show how adversaries can exploit carefully-crafted sponge examples, which are inputs designed to maximise energy consumption and latency, to drive machine learning (ML) systems towards their worst-case performance. Sponge examples are, to our knowledge, the first denial-of-service attack against the ML components of such systems. We mount two variants of our sponge attack on a wide range of state-of-the-art neural network models, and find that language models are surprisingly vulnerable. Sponge examples frequently increase both latency and energy consumption of these models by a factor of 30×. Extensive experiments show that our new attack is effective across different hardware platforms (CPU, GPU and an ASIC simulator) on a wide range of different language tasks. On vision tasks, we show that sponge examples can be produced and a latency degradation observed, but the effect is less pronounced. To demonstrate the effectiveness of sponge examples in the real world, we mount an attack against Microsoft Azure’s translator and show an increase of response time from 1ms to 6s (6000×). We conclude by proposing a defense strategy: shifting the analysis of energy consumption in hardware from an average-case to a worst-case perspective…

Continue reading Attacking the Performance of Machine Learning Systems