Addressing the risks of artificial intelligence

A community of UC Santa Barbara students researching and discussing the technical and political challenges of AI safety.

💬 Discussions Open conversations on AI safety topics
🔬 Projects Research in alignment & interpretability
🎤 Talks Guest speakers & workshops

What is AI safety?

AI safety is the study of how to ensure increasingly capable artificial intelligence systems remain aligned with human values and beneficial to society. Our focus spans technical research, ethics, and policy — from preventing catastrophic failures in advanced systems to understanding more gradual social and economic impacts.


We discuss topics including, but not limited to: the threat of superintelligence, the economic impacts of AGI, timelines to AGI capabilities, the AI alignment problem, moral problems of machine consciousness, gradual disempowerment, LLM psychosis, global AGI race dynamics, international diplomacy, theoretical limits of intelligence, and simulation theory.

About the Club

🎯

Mission

Our mission is to build a welcoming community of UCSB students dedicated to discussing and solving the problems around advanced artificial intelligence.

👥

Who it's for

Students from any major and experience level are welcome—whether you've trained your own models or you're just beginning to learn about AI. Join us to learn more.

🚀

What we do

Discussion meetings, reading groups, research projects, paper replication, collaboration with faculty and labs, and talks with guest speakers.

Activities

Join us for engaging discussions, hands-on projects, and learning opportunities

💬

General meetings

We host short presentations paired with open discussions on different topics in AI safety.

🔬

Projects

Team research projects to advance technical skills in model development, alignment, and interpretability.

🎤

Talks & Workshops

Guest speakers from academia and industry, skill-building workshops (ML, stats, governance, risk analysis).

FAQ

Quick answers for new members

What is AI safety?

AI safety is the field focused on making increasingly capable AI systems reliable, aligned with human values, robust to misuse, and governable. It spans technical work (e.g. interpretability, robustness, evals, alignment), risk assessment, policy, and responsible deployment practices.

Do I need prior AI/ML background?

No! Students from all backgrounds are welcome. Some technical understanding helps, but our discussions on AI are often more philosophical than mathematical.

How do I join?

Use our Linktree for the Discord, Instagram, GroupMe, Shoreline, and Twitter. Say hi on the Discord or just show up to any event!

Can I propose a project or talk?

Yes—contact us at contact@ucsbaisafety.org.