
Working Group

AI Technology and Risk

Explore the latest AI tech, predict risks, and ensure innovation meets security in the realm of AI.
View Current Projects
AI Technology and Risk
The AI Technology and Risk Committee is focused on staying abreast of the latest technological advancements in AI while simultaneously identifying, understanding, and forecasting associated risks, threats, and vulnerabilities. This technical committee aims to act as both a knowledge hub and a proactive risk management entity, bridging the gap between innovation and security in the realm of AI.

Working Group Leadership

Josh Buker

Research Analyst, CSA

Working Group Co-Chairs

Mark Yanalitis

Chris Kirschke

Cloud Portfolio Information Security Officer at Albertsons Companies

Security leader with more than 20 years of experience across Financial Services, Streaming, Retail, and IT Services, with a heavy focus on Cloud, DevSecOps, and Threat Modeling. Advises multiple security startups on Product Strategy, Alliances, and Integrations. Sits on multiple Customer Advisory Boards, helping to drive security product roadmaps, integrations, and feature developments. Avid hockey player, backpacker and wine collector in his spare t...


Publications in Review

Navigating the Human Factor: Understanding and Addressing Resistance to AI Adoption (Open Until: Jun 09, 2025)
AICM mapping to NIST 600-1 (Open Until: Jun 16, 2025)
Analyzing Log Data with AI Models (Open Until: Jun 20, 2025)
Agentic AI Identity and Access Management: A New Approach (Open Until: Jul 03, 2025)
Who can join?

Anyone can join a working group, whether you have years of experience or simply want to observe as a fly on the wall.

What is the time commitment?

The time commitment for this group varies depending on the project. You can spend 15 minutes helping review a publication that's nearly finished, or help author a publication from start to finish.

Virtual Meetings

Attend our next meeting. You can just listen in to decide if this group is a good fit for you, or you can choose to actively participate. During these calls we discuss current projects, as well as share ideas for new projects. This is a good way to meet the other members of the group. You can view all research meetings here.

No scheduled meetings for this working group in the next 60 days.

See Full Calendar for this Working Group

Open Peer Reviews

Peer reviews allow security professionals from around the world to provide feedback on CSA research before it is published.

Learn how to participate in a peer review here.

AICM mapping to NIST 600-1

Open Until: 06/16/2025

The Cloud Security Alliance (CSA) invites public peer review of its draft mapping between the AI Controls Matrix (AICM) and NIST 600-1. This initiative suppo...

Analyzing Log Data with AI Models

Open Until: 06/20/2025

In a Zero Trust environment, logs play a critical role in the visibility and analytics cross-cutting capability. Architectu...

Agentic AI Identity and Access Management: A New Approach

Open Until: 07/03/2025

Traditional Identity and Access Management (IAM) systems, primarily designed for human users or static machine identities v...

Premier AI Safety Ambassadors

Premier AI Safety Ambassadors play a leading role in promoting AI safety within their organizations, advocating for responsible AI practices and pragmatic solutions to manage AI risks. Contact [email protected] to learn how your organization can participate and take a seat at the forefront of AI safety best practices.