ChatGPT 4 can exploit 87% of one-day vulnerabilities

With the widespread and growing use of ChatGPT and other large language models (LLMs) in recent years, cybersecurity has been a top concern. Among the many questions, cybersecurity professionals wondered how effective these tools were in launching an attack. Cybersecurity researchers Richard Fang, Rohan Bindu, Akul Gupta and Daniel Kang recently performed a study to […]

Brave is Opening Leo to LLMs and On-Device SLMs

Brave today announced it will provide Bring Your Own Model (BYOM) functionality in its flagship web browser for use with its Leo AI assistant.

Some Open Source Software Licences are Only ‘Open-ish,’ Says Thoughtworks

A number of open source tech tools have moved towards commercial licences. Thoughtworks says this creates “big headaches” for IT teams, which are scrambling to maintain compliance and find replacement tools.

OpenAI, Anthropic Research Reveals More About How LLMs Affect Security and Bias

Anthropic opened a window into the ‘black box’ where ‘features’ steer a large language model’s output. OpenAI dug into the same concept two weeks later with a deep dive into sparse autoencoders.

Social engineering in the era of generative AI: Predictions for 2024

Breakthroughs in large language models (LLMs) are driving an arms race between cybersecurity defenders and social engineering scammers. Here’s how it’s set to play out in 2024. For businesses, generative AI is both a curse and an opportunity. As enterprises race to adopt the technology, they also take on a whole new layer of cyber risk. […]

Prompt Hacking, Private GPTs, Zero-Day Exploits and Deepfakes: Report Reveals the Impact of AI on Cyber Security Landscape

A new report by cyber security firm Radware identifies the four main impacts of AI on the threat landscape emerging this year.

Microsoft Puts the PR in AI (Premium)

Overly reliant on OpenAI and facing a coming generation of on-device AI it can’t control, Microsoft has released its latest small language model (SLM), which is most notable for the PR offensive that accompanies it. Phi-3 Mini is an SLM that runs locally on smartphones.