Featured Podcasts

Streamed June 8, 2023 - Progress, Potential, and Possibilities with host Ira Pastor

Dr. Mark Bailey is interviewed by Ira Pastor. They discuss everything from AI to astrobiology.

Streamed May 26, 2023 - Dr. Susan Schneider interviewed by Brian Gallagher, Associate Editor of Nautilus

In “AI Shouldn’t Decide What’s True,” Schneider (with her coauthor Mark Bailey) takes aim at the idea that large language models, like GPT-4, can be trusted to be truthful. For Schneider, the thought that people would come to rely on a chatbot for factual information is “nauseating.”

—Brian Gallagher, Associate Editor

Streamed live on Apr 19, 2023 - Center for the Future Mind, Florida Atlantic University

Eliezer Yudkowsky discusses his rationale for ceasing the development of AIs more sophisticated than GPT-4. Dr. Mark Bailey of National Intelligence University moderates the discussion.

An open letter published on March 22, 2023 calls for "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." In response, Yudkowsky argues that this proposal does not do enough to protect us from the risks of losing control of superintelligent AI.

Eliezer Yudkowsky is a decision theorist from the U.S. and leads research at the Machine Intelligence Research Institute. He's been working on aligning Artificial General Intelligence since 2001 and is widely regarded as a founder of the field of alignment.

Dr. Mark Bailey is the Chair of the Cyber Intelligence and Data Science Department, as well as the Co-Director of the Data Science Intelligence Center, at the National Intelligence University.

Media Mentions

Trailblazers 2023 (Homeland Security Today)

12 Sept 2023

“The hottest advancements in artificial intelligence and machine learning come with risks and potential unintended consequences, including the control of critical systems and weaponry. Dr. Mark Bailey, who writes about the intersection between artificial intelligence, complexity, and national security, is helping ensure that decision makers understand the unpredictability of evolving technologies before the risks outweigh the benefits to a potentially disastrous degree.”

Experts from Academia and Intelligence Community Discuss Artificial Intelligence Safety at NIU Symposium (ODNI Dispatch)

22 June 2023

“It’s a really important topic and I think we are at this precipice where the decisions we make now as this technology continues to develop could have significant consequences down the road,” said Mark Bailey, Department Chair of Cyber Intelligence and Data Science at National Intelligence University (NIU).

In response, Bailey, who studies AI safety, organized the school’s first AI Safety Symposium on June 21 and 22 in Washington, D.C. The event, hosted by NIU’s Ann Caracristi Institute and Data Science Intelligence Center, brought together 65 experts from across academia and the Intelligence Community (IC) to tackle weighty issues related to AI safety and see how they can be applied in the IC.

“Being at NIU, we’re at this nexus between the Intelligence Community and the outside academic community because we wear both hats,” said Bailey. “So, what I really wanted to do was sort of bring all these different parties together.”

Paper Claims AI May Be a Civilization-Destroying "Great Filter" (Futurism)

By NOOR AL-SIBAI

11 May 2023

“If aliens are out there, why haven’t they contacted us yet? It may be, a new paper argues, that they — or, in the future, we — inevitably get wiped out by ultra-strong artificial intelligence, victims of our own drive to create a superior being.

This potential answer to the Fermi paradox — in which physicist Enrico Fermi and subsequent generations pose the question ‘Where is everybody?’ — comes from National Intelligence University researcher Mark M. Bailey, who in a new yet-to-be-peer-reviewed paper posits that advanced AI may be exactly the kind of catastrophic risk that could wipe out entire civilizations.”