
We asked teachers about their experiences with AI in the classroom — here’s what they said

Kathryn Conrad / Datafication / Licenced by CC-BY 4.0 By Nadia Delanoy, University of Calgary Since ChatGPT and other large language models burst into public consciousness, school boards have been drafting policies, universities hosting symposiums and tech companies relentlessly promoting their latest AI-powered learning tools. In the race to modernize education, artificial intelligence (AI) […]

AIhub monthly digest: November 2025 – learning robust controllers, trust in multi-agent systems, and a new fairness evaluation dataset

Welcome to our monthly digest, where you can catch up with any AIhub stories you may have missed, peruse the latest news, recap recent events, and more. This month, we learn about rewarding explainability in drug repurposing with knowledge graphs, investigate value-aligned autonomous vehicles, and consider trust in multi-agent systems. Rewarding explainability in drug repurposing

EU proposal to delay parts of its AI Act signals a policy shift that prioritises big tech over fairness

By Jessica Heesen, University of Tübingen and Tori Smith Ekstrand, University of North Carolina at Chapel Hill The roll-out of the European Union’s Artificial Intelligence Act has hit a critical turning point. The act establishes rules for how AI systems can be used within the European Union. It officially entered into force on August 1

New AI technique sounding out audio deepfakes

Researchers from Australia’s national science agency CSIRO, Federation University Australia and RMIT University have developed a method to improve the detection of audio deepfakes. The new technique, Rehearsal with Auxiliary-Informed Sampling (RAIS), is designed for audio deepfake detection — a growing threat in cybercrime risks such as bypassing voice-based biometric authentication systems, impersonation and disinformation.

Learning robust controllers that work across many partially observable environments

In intelligent systems, applications range from autonomous robotics to predictive maintenance problems. To control these systems, the essential aspects are captured with a model. When we design controllers for these models, we almost always face the same challenge: uncertainty. We’re rarely able to see the whole picture. Sensors are noisy, models of the system are

Review of “Exploring metaphors of AI: visualisations, narratives and perception”

IceMing & Digit / Stochastic Parrots at Work / Licenced by CC-BY 4.0 Better Images of AI and We and AI have been exploring the role of visual and narrative metaphors in shaping our understanding of AI. As part of this we invited some researchers who have been conducting different types of research into the

Designing value-aligned autonomous vehicles: from moral dilemmas to conflict-sensitive design

Autonomous systems increasingly face value-laden choices. This blog post introduces the idea of designing “conflict-sensitive” autonomous traffic agents that explicitly recognise, reason about, and act upon competing ethical, legal, and social values. We present the concept of Value-Aligned Operational Design Domains (VODDs) – a framework that embeds stakeholder value hierarchies and contextual handover rules

Learning from failure to tackle extremely hard problems

By Sangyun Lee and Giulia Fanti This blog post is based on the work BaNEL: Exploration Posteriors for Generative Modeling Using Only Negative Rewards. Tackling very hard problems The ultimate aim of machine learning research is to push machines beyond human limits in critical applications, including the next generation of theorem proving, algorithmic problem solving,