What should an AI ethics governance framework look like?

While the race to develop generative AI intensifies, the ethical debate surrounding the technology continues to heat up, and the stakes keep getting higher. According to Gartner, “Organizations are responsible for ensuring that AI projects they develop, deploy or use do not have negative ethical consequences.” Meanwhile, 79% of executives say AI ethics is […]

The post What should an AI ethics governance framework look like? appeared first on Security Intelligence.

Continue reading What should an AI ethics governance framework look like?

Licensing AI Engineers

The debate over professionalizing software engineers is decades old. (The basic idea is that, as with lawyers and architects, software engineers should be subject to some professional licensing requirement.) Here’s a law journal article recommending the same idea for AI engineers.

This Article proposes another way: professionalizing AI engineering. Require AI engineers to obtain licenses to build commercial AI products, push them to collaborate on scientifically-supported, domain-specific technical standards, and charge them with policing themselves. This Article’s proposal addresses AI harms at their inception, influencing the very engineering decisions that give rise to them in the first place. By wresting control over information and system design away from companies and handing it to AI engineers, professionalization engenders trustworthy AI by design. Beyond recommending the specific policy solution of professionalization, this Article seeks to shift the discourse on AI away from an emphasis on light-touch, ex post solutions that address already-created products to a greater focus on ex ante controls that precede AI development. We’ve used this playbook before in fields requiring a high level of expertise where a duty to the public welfare must trump business motivations. What if, like doctors, AI engineers also vowed to do no harm?…

Continue reading Licensing AI Engineers

Pentagon Scientists Discuss Cybernetic ‘Super Soldiers’ That Feel Nothing While Killing In Dystopian Presentation

The soldier of the future will be “flooded with pain-numbing stimulants,” cybernetically enhanced, and, as one official half-joked, must eventually be “terminated.”

Continue reading Pentagon Scientists Discuss Cybernetic ‘Super Soldiers’ That Feel Nothing While Killing In Dystopian Presentation

Hypothetical Discovery: Security Concerns in Airline Booking Systems – Seeking Guidance on Responsible Reporting [duplicate]

I recently had a peculiar experience in which a vase, courtesy of my mischievous cat, took an unexpected detour onto my head. In the aftermath, I couldn’t help but wonder about the security of an airline booking system used by various airline…

Continue reading Hypothetical Discovery: Security Concerns in Airline Booking Systems – Seeking Guidance on Responsible Reporting [duplicate]

Ethical Problems in Computer Security

Tadayoshi Kohno, Yasemin Acar, and Wulf Loh wrote an excellent paper on ethical thinking within the computer security community: “Ethical Frameworks and Computer Security Trolley Problems: Foundations for Conversation”:

Abstract: The computer security research community regularly tackles ethical questions. The field of ethics / moral philosophy has for centuries considered what it means to be “morally good” or at least “morally allowed / acceptable.” Among philosophy’s contributions are (1) frameworks for evaluating the morality of actions—including the well-established consequentialist and deontological frameworks—and (2) scenarios (like trolley problems) featuring moral dilemmas that can facilitate discussion about and intellectual inquiry into different perspectives on moral reasoning and decision-making. In a classic trolley problem, consequentialist and deontological analyses may render different opinions. In this research, we explicitly make and explore connections between moral questions in computer security research and ethics / moral philosophy through the creation and analysis of trolley problem-like computer security-themed moral dilemmas and, in doing so, we seek to contribute to conversations among security researchers about the morality of security research-related decisions. We explicitly do not seek to define what is morally right or wrong, nor do we argue for one framework over another. Indeed, the consequentialist and deontological frameworks that we center, in addition to coming to different conclusions for our scenarios, have significant limitations. Instead, by offering our scenarios and by comparing two different approaches to ethics, we strive to contribute to how the computer security research field considers and converses about ethical questions, especially when there are different perspectives on what is morally right or acceptable. Our vision is for this work to be broadly useful to the computer security community, including to researchers as they embark on (or choose not to embark on), conduct, and write about their research, to program committees as they evaluate submissions, and to educators as they teach about computer security and ethics…

Continue reading Ethical Problems in Computer Security

Alexa vs Roomba: How children think we should treat intelligent tech

With AI already a big part of everyday life and its involvement only bound to increase, researchers have turned to four- to 11-year-olds to ask how they think we should treat intelligent technology.

Continue reading Alexa vs Roomba: How children think we should treat intelligent tech

Prominent AI Philosopher and ‘Father’ of Longtermism Sent Very Racist Email to a 90s Philosophy Listserv

“Blacks are more stupid than whites,” Nick Bostrom wrote in an email sent to a transhumanism listserv in the 1990s that he apologized for in a letter.

Continue reading Prominent AI Philosopher and ‘Father’ of Longtermism Sent Very Racist Email to a 90s Philosophy Listserv