New SEC Rules around Cybersecurity Incident Disclosures

The US Securities and Exchange Commission adopted final rules around the disclosure of cybersecurity incidents. There are two basic rules:

  1. Public companies must “disclose any cybersecurity incident they determine to be material” within four days, with potential delays if there is a national security risk.
  2. Public companies must “describe their processes, if any, for assessing, identifying, and managing material risks from cybersecurity threats” in their annual filings.

The rules go into effect this December.

In an email newsletter, Melissa Hathaway wrote:…

Continue reading New SEC Rules around Cybersecurity Incident Disclosures

Most people are aware of their data trails, but few know how to deal with it: Okta study

A new study by Okta finds that a proliferation of active accounts and web identities is exacerbating security risks both for individuals and enterprises.
The post Most people are aware of their data trails, but few know how to deal with it: Okta study … Continue reading Most people are aware of their data trails, but few know how to deal with it: Okta study

On the Catastrophic Risk of AI

Earlier this week, I signed on to a short group statement, coordinated by the Center for AI Safety:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

The press coverage has been extensive, and surprising to me. The New York Times headline is “A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn.” BBC: “Artificial intelligence could lead to extinction, experts warn.” Other headlines are similar.

I actually don’t think that AI poses a risk to human extinction. I think it poses a similar risk to pandemics and nuclear war—which is to say, a risk worth taking seriously, but not something to panic over. Which is what I thought the statement said…

Continue reading On the Catastrophic Risk of AI

Ted Chiang on the Risks of AI

Ted Chiang has an excellent essay in the New Yorker: “Will A.I. Become the New McKinsey?”

The question we should be asking is: as A.I. becomes more powerful and flexible, is there any way to keep it from being another version of McKinsey? The question is worth considering across different meanings of the term “A.I.” If you think of A.I. as a broad set of technologies being marketed to companies to help them cut their costs, the question becomes: how do we keep those technologies from working as “capital’s willing executioners”? Alternatively, if you imagine A.I. as a semi-autonomous software program that solves problems that humans ask it to solve, the question is then: how do we prevent that software from assisting corporations in ways that make people’s lives worse? Suppose you’ve built a semi-autonomous A.I. that’s entirely obedient to humans­—one that repeatedly checks to make sure it hasn’t misinterpreted the instructions it has received. This is the dream of many A.I. researchers. Yet such software could easily still cause as much harm as McKinsey has…

Continue reading Ted Chiang on the Risks of AI

Building Trustworthy AI

We will all soon get into the habit of using AI tools for help with everyday problems and tasks. We should get in the habit of questioning the motives, incentives, and capabilities behind them, too.

Imagine you’re using an AI chatbot to plan a vacation. Did it suggest a particular resort because it knows your preferences, or because the company is getting a kickback from the hotel chain? Later, when you’re using another AI chatbot to learn about a complex economic issue, is the chatbot reflecting your politics or the politics of the company that trained it?…

Continue reading Building Trustworthy AI

Security Risks of AI

Stanford and Georgetown have a new report on the security risks of AI—particularly adversarial machine learning—based on a workshop they held on the topic.

Jim Dempsey, one of the workshop organizers, wrote a blog post on the report:

As a first step, our report recommends the inclusion of AI security concerns within the cybersecurity programs of developers and users. The understanding of how to secure AI systems, we concluded, lags far behind their widespread adoption. Many AI products are deployed without institutions fully understanding the security risks they pose. Organizations building or deploying AI models should incorporate AI concerns into their cybersecurity functions using a risk management framework that addresses security throughout the AI system life cycle. It will be necessary to grapple with the ways in which AI vulnerabilities are different from traditional cybersecurity bugs, but the starting point is to assume that AI security is a subset of cybersecurity and to begin applying vulnerability management practices to AI-based features. (Andy Grotto and I have vigorously argued …

Continue reading Security Risks of AI

Existential Risk and the Fermi Paradox

We know that complexity is the worst enemy of security, because it makes attack easier and defense harder. This becomes catastrophic as the effects of that attack become greater.

In A Hacker’s Mind (coming in February 2023), I write:

Our societal systems, in general, may have grown fairer and more just over the centuries, but progress isn’t linear or equitable. The trajectory may appear to be upwards when viewed in hindsight, but from a more granular point of view there are a lot of ups and downs. It’s a “noisy” process.

Technology changes the amplitude of the noise. Those near-term ups and downs are getting more severe. And while that might not affect the long-term trajectories, they drastically affect all of us living in the short term. This is how the twentieth century could—statistically—both be the most peaceful in human history and also contain the most deadly wars…

Continue reading Existential Risk and the Fermi Paradox

Adversarial ML Attack that Secretly Gives a Language Model a Point of View

Machine learning security is extraordinarily difficult because the attacks are so varied—and it seems that each new one is weirder than the next. Here’s the latest: a training-time attack that forces the model to exhibit a point of view: Spinning Language Models: Risks of Propaganda-As-A-Service and Countermeasures.”

Abstract: We investigate a new threat to neural sequence-to-sequence (seq2seq) models: training-time attacks that cause models to “spin” their outputs so as to support an adversary-chosen sentiment or point of view—but only when the input contains adversary-chosen trigger words. For example, a spinned summarization model outputs positive summaries of any text that mentions the name of some individual or organization…

Continue reading Adversarial ML Attack that Secretly Gives a Language Model a Point of View

Is it Time to Update Your Cyber Insurance Strategy?

If anything, 2020 was about preparing for – well, everything. This includes cyberthreats, which have risen sharply in the pandemic era. In 2021, rethinking your cyber insurance strategy should be a top priority for CISOs and executive leadership… Continue reading Is it Time to Update Your Cyber Insurance Strategy?