
Center for Law, Engagement, and Politics


Tag: Artificial Intelligence

Balancing AI Innovation and Safety: Key Insights from WAC–Houston

Written by Jacob Wessels

LEAP students gathered for a World Affairs Council Oxford-style debate at Rice University yesterday to tackle a question that’s becoming harder to ignore: should democratic nations prioritize AI capability over AI safety? The event brought together policymakers, tech experts, and investors for a lively discussion that felt less like a lecture and more like a real look at where the world is heading.

The evening started with a quick introduction to the panel, but it wasn’t just a list of dry credentials; this was a broad range of expertise! On one end, you had Representative Giovanni Capriglione, who brought the weight of a lawmaker who has actually written Texas’s AI frameworks. Then there was Mario Rodriguez, whose experience operating Sophia the Robot moved the conversation from “what if” to the reality of humanoid engineering.

Instead of debating a “future” version of AI, Brad Groux spoke from the perspective of someone building cloud-based AI for organizations right now, while George Ploss, a Marine veteran and Oracle director, reframed the tech as a critical piece of national security and the defense industrial base.

The pro-capability side framed AI as an urgent geopolitical race. One speaker argued that democratic nations cannot afford to slow innovation, emphasizing that AI is already defending critical infrastructure such as power grids and financial systems. In their view, prioritizing safety through heavy restrictions risks ceding technological leadership to authoritarian regimes. The argument was straightforward: in a world of rapid advancement, falling behind is more dangerous than moving too fast.

They also highlighted some of the benefits AI is delivering right now, such as early cancer detection and broader access to education, pointing out that slowing development isn’t just cautious; it could delay life-saving breakthroughs and prolong existing inequalities. Their main message was that democratic values, combined with existing legal frameworks, are enough to guide responsible innovation without stifling progress.

On the other side, the safety-focused speakers challenged the very framing of the debate. Rather than a race, they described AI as a strategic system more comparable to nuclear technology than consumer software. Drawing on lessons from Cold War deterrence, they argued that unchecked capability without control creates instability, not strength. In this view, safety isn’t a barrier to progress; it’s what makes progress sustainable.

They pointed to real-world risks, including simulations where AI systems behaved unpredictably, as evidence that reliability and oversight must come first. Without embedded safeguards, even the most advanced systems could become liabilities. One speaker emphasized that democracies should not try to “win” by mimicking less accountable regimes, warning that doing so would undermine the very values that give them legitimacy and global influence.

What really stuck with me from the safety side was their focus on trust. They argued that long-term leadership depends not just on what a nation can build, but on whether allies and citizens trust those systems. In a world shaped by alliances like NATO, credibility and shared values may matter as much as raw technological power.

The audience Q&A added another layer to the discussion, with individuals, including Robin and Mikaela, asking practical and philosophical questions.

What made the debate especially engaging was how much common ground existed beneath the disagreement. Both sides acknowledged the importance of innovation and the inevitability of AI’s growth. The real divide was about sequencing and emphasis: should capability lead with safety following, or should safety be built in from the start?

Like many World Affairs Council events, the debate didn’t aim to deliver a definitive answer. Instead, it encouraged critical thinking and deeper engagement. As AI continues to evolve, the question raised that evening will only become more urgent: how do we balance the drive to innovate with the responsibility to protect?

For LEAP, the evening carried an extra layer of significance beyond the debate itself. This was Michelle’s final event with us, and having her there to lead the way one last time made the night feel like the end of an era. It was a reminder that while we spent so much time discussing the “inevitable” growth of technology helping us better our world, it’s actually the people and the mentors we work with who help us become better versions of ourselves.

Walking out of the O’Connor Building, I couldn’t shake the idea that the tech is actually the easy part. The hard part is the governance. We aren’t just building faster tools; we’re deciding who or what gets to stay in the driver’s seat of our society!

Author: mikeyawn | Posted on May 11, 2026 | Categories: Civic Engagement, Law, Politics, Technology | Tags: Sam Houston State University, Center for Law Engagement And Politics, World Affairs Council Houston, LEAP Ambassadors, Artificial Intelligence

Artificial Intelligence: The Future

It’s common for the Texas Tribune Festival to tackle hot topics, and few topics are as widely discussed as artificial intelligence. This year’s festival offered multiple sessions on AI, including panels that addressed the regulations implemented in response to the technology as well as speculation on what the future holds.

The panels included Rep. Giovanni Capriglione; Mayor Pro Tem Vanessa Fuentes; Matt Dunne, Director, Center on Rural Innovation; Betsy Greytok, Associate General Counsel, IBM; Professor Sherri Greenberg, UT Austin; Amanda Crawford, Director, Texas Department of Information Resources; Daniel Culbertson, Economist, Indeed; and Elizabeth Rhodes, Director, OpenResearch. In short, a lot of human intelligence to discuss the importance of artificial intelligence.

A key part of the discussion was Capriglione’s HB 149 (TRAIGA), a sprawling bill that prevents companies from manipulating software to encourage self-harm; prohibits government entities from creating “social credit” scores; bans governments from capturing biometric information; forbids individuals from creating sexual deepfakes or simulated child pornography; while also providing for various enforcement mechanisms.

The legislation, Capriglione emphasized, is limited in what it bans but is written to accommodate future directions of AI, allowing it to keep up with developments in the field.

One such development on people’s minds is the extent to which AI threatens their jobs. Rep. Capriglione addressed this, asking the audience, “How many of you think AI will take your job in the next five years?” When only about six people raised their hands, Rep. Capriglione said something to the effect of (I am paraphrasing): “The rest of you are in denial, I guess?”

Not everyone agreed with this assessment. Culbertson suggested that AI would be a position augmenter rather than a position replacer. Of course, if productivity is augmented, then companies may not need their current workforces, suggesting they could let employees go and retain their current levels of productivity and profits.

A recurring theme was that, in most cases, AI will not replace jobs, but people who can use AI well may replace people who cannot. This may be bad news for seniors and those who primarily perform menial tasks, people not well-known for keeping up with technological advances. In a major study, Goldman Sachs estimates that “at most” 2.5 percent of the workforce may be replaced by automation owing to AI.

For what it’s worth, ChatGPT seems to agree with the panelists. According to ChatGPT, “You won’t compete against AI — you’ll compete with people who use AI.”

The panel–and Chat GPT–have thus provided some programming suggestions for LEAP Center staff and the LEAP Ambassadors.

Author: mikeyawn | Posted on November 17, 2025 | Categories: Civic Engagement, Technology | Tags: AI, Artificial Intelligence, Center for Law Engagement And Politics, Sam Houston State University, Texas Tribune Festival, Tribfest
