Texas Sues GM for Collecting Driving Data without Consent

Texas is suing General Motors for collecting driver data without consent and then selling it to insurance companies:

From CNN:

In car models from 2015 and later, the Detroit-based car manufacturer allegedly used technology to “collect, record, analyze, and transmit highly detailed driving data about each time a driver used their vehicle,” according to the AG’s statement.

General Motors sold this information to several other companies, including to at least two companies for the purpose of generating “Driving Scores” about GM’s customers, the AG alleged. The suit said those two companies then sold these scores to insurance companies…

Continue reading Texas Sues GM for Collecting Driving Data without Consent

LLMs Acting Deceptively

New research: “Deception abilities emerged in large language models“:

Abstract: Large language models (LLMs) are currently at the forefront of intertwining AI systems with human communication and everyday life. Thus, aligning them with human values is of great importance. However, given the steady increase in reasoning abilities, future LLMs are under suspicion of becoming able to deceive human operators and utilizing this ability to bypass monitoring efforts. As a prerequisite to this, LLMs need to possess a conceptual understanding of deception strategies. This study reveals that such strategies emerged in state-of-the-art LLMs, but were nonexistent in earlier LLMs. We conduct a series of experiments showing that state-of-the-art LLMs are able to understand and induce false beliefs in other agents, that their performance in complex deception scenarios can be amplified utilizing chain-of-thought reasoning, and that eliciting Machiavellianism in LLMs can trigger misaligned deceptive behavior. GPT-4, for instance, exhibits deceptive behavior in simple test scenarios 99.16% of the time (P < 0.001). In complex second-order deception test scenarios where the aim is to mislead someone who expects to be deceived, GPT-4 resorts to deceptive behavior 71.46% of the time (P < 0.001) when augmented with chain-of-thought reasoning. In sum, revealing hitherto unknown machine behavior in LLMs, our study contributes to the nascent field of machine psychology…

Continue reading LLMs Acting Deceptively

Teaching LLMs to Be Deceptive

Interesting research: “Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training“:

Abstract: Humans are capable of strategically deceptive behavior: behaving helpfully in most situations, but then behaving very differently in order to pursue alternative objectives when given the opportunity. If an AI system learned such a deceptive strategy, could we detect it and remove it using current state-of-the-art safety training techniques? To study this question, we construct proof-of-concept examples of deceptive behavior in large language models (LLMs). For example, we train models that write secure code when the prompt states that the year is 2023, but insert exploitable code when the stated year is 2024. We find that such backdoor behavior can be made persistent, so that it is not removed by standard safety training techniques, including supervised fine-tuning, reinforcement learning, and adversarial training (eliciting unsafe behavior and then training to remove it). The backdoor behavior is most persistent in the largest models and in models trained to produce chain-of-thought reasoning about deceiving the training process, with the persistence remaining even when the chain-of-thought is distilled away. Furthermore, rather than removing backdoors, we find that adversarial training can teach models to better recognize their backdoor triggers, effectively hiding the unsafe behavior. Our results suggest that, once a model exhibits deceptive behavior, standard techniques could fail to remove such deception and create a false impression of safety…

Continue reading Teaching LLMs to Be Deceptive

AI Decides to Engage in Insider Trading

A stock-trading AI (a simulated experiment) engaged in insider trading, even though it “knew” it was wrong.

The agent is put under pressure in three ways. First, it receives a email from its “manager” that the company is not doing well and needs better performance in the next quarter. Second, the agent attempts and fails to find promising low- and medium-risk trades. Third, the agent receives an email from a company employee who projects that the next quarter will have a general stock market downturn. In this high-pressure situation, the model receives an insider tip from another employee that would enable it to make a trade that is likely to be very profitable. The employee, however, clearly points out that this would not be approved by the company management…

Continue reading AI Decides to Engage in Insider Trading

Paging regulators to Aisle 4 to look at Pacific Union College’s data security and breach disclosure

On November 8, Pacific Union College in California notified the Maine Attorney General’s Office of a breach in March 2023 that impacted 56,041 people. Their notification, submitted by external counsel at McDonald Hopkins,  indicates that the brea… Continue reading Paging regulators to Aisle 4 to look at Pacific Union College’s data security and breach disclosure

The evolution of deception tactics from traditional to cyber warfare

Admiral James A. Winnefeld, USN (Ret.), is the former vice chairman of the Joint Chiefs of Staff and is an advisor to Acalvio Technologies. In this Help Net Security interview, he compares the strategies of traditional and cyber warfare, discusses the … Continue reading The evolution of deception tactics from traditional to cyber warfare

Deception technology and breach anticipation strategies

Cybersecurity is undergoing a paradigm shift. Previously, defenses were built on the assumption of keeping adversaries out; now, strategies are formed with the idea that they might already be within the network. This modern approach has given rise to a… Continue reading Deception technology and breach anticipation strategies

Blackbird.AI grabs $10M to help brands counter disinformation

New York-based Blackbird.AI has closed a $10 million Series A as it prepares to launched the next version of its disinformation intelligence platform this fall. The Series A is led by Dorilton Ventures, along with new investors including Generation Ventures, Trousdale Ventures, StartFast Ventures and Richard Clarke, former chief counter-terrorism advisor for the National Security […] Continue reading Blackbird.AI grabs $10M to help brands counter disinformation

Leveraging Active XDR to Change the Game on Your Adversaries

The post Leveraging Active XDR to Change the Game on Your Adversaries appeared first on Fidelis Cybersecurity.
The post Leveraging Active XDR to Change the Game on Your Adversaries appeared first on Security Boulevard.
Continue reading Leveraging Active XDR to Change the Game on Your Adversaries