<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>RAISE — Articles &amp; Events</title><description>Latest articles and upcoming talks from the UW Center for Responsibility in AI Systems and Experiences.</description><link>https://raise.uw.edu/</link><language>en-us</language><atom:link href="https://raise.uw.edu/rss.xml" rel="self" type="application/rss+xml"/><item><title>Welcome to the New RAISE Website</title><link>https://raise.uw.edu/articles/welcome-to-raise/</link><guid isPermaLink="true">https://raise.uw.edu/articles/welcome-to-raise/</guid><description>A new home for the UW Responsible AI Systems &amp; Experiences Center — faster, searchable, and easier for our team to keep current.</description><pubDate>Sat, 18 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Faeze: Open and Reliable Language Model Adaptation</title><link>https://raise.uw.edu/events/open-and-reliable-language-model-adaptation/</link><guid isPermaLink="true">https://raise.uw.edu/events/open-and-reliable-language-model-adaptation/</guid><description>In this talk, Faeze explores two crucial frontiers in AI development: democratizing language model adaptation and enhancing their reliability in real-world deployment. She introduces Tulu 3, a family of fully open post-trained language models. While post-training techniques are essential for refining behaviors and unlocking new capabilities in language models, open approaches have significantly lagged behind proprietary ones. 
Tulu 3 addresses this gap by providing complete transparency into data, code, and training methodologies, yielding models that outperform comparable open-weight alternatives while narrowing the gap with proprietary systems.</description><pubDate>Fri, 06 Jun 2025 07:00:00 GMT</pubDate></item><item><title>Gagan: Concrete Problems for Developing Human-Centered Agents</title><link>https://raise.uw.edu/events/concrete-problems-for-developing-human-centered-agents/</link><guid isPermaLink="true">https://raise.uw.edu/events/concrete-problems-for-developing-human-centered-agents/</guid><description>Reflecting on his experience developing AutoGen—an open-source framework for building agents and AutoGen-based applications—this talk outlines three concrete challenges in creating human-centered agents: (1) reliably completing complex, multi-step tasks; (2) maintaining common ground between people and agents; and (3) auditing complex agent behaviors. Bansal will share demos and evaluations highlighting his progress on these challenges, as well as open opportunities for the HCI community.</description><pubDate>Fri, 30 May 2025 07:00:00 GMT</pubDate></item><item><title>Guest Speaker: AI as Normal Technology</title><link>https://raise.uw.edu/events/ai-as-normal-technology/</link><guid isPermaLink="true">https://raise.uw.edu/events/ai-as-normal-technology/</guid><description>Arvind Narayanan will present a new paper co-authored with Sayash Kapoor, in which they articulate a vision of artificial intelligence as a “normal technology,” standing in contrast to both utopian and dystopian narratives that portray AI as a potentially superintelligent entity. In the presentation, Narayanan will explain why they believe the impacts of advanced AI, even if transformative, are likely to unfold gradually. He will make a critical distinction between AI methods, AI applications, and AI adoption. 
Additionally, he will explore a potential division of labor between humans and AI in a world shaped by advanced AI, and examine the implications of treating AI as normal technology for AI policy, AI safety, and broader human progress.</description><pubDate>Fri, 16 May 2025 07:00:00 GMT</pubDate></item><item><title>Dr. Prabhakaran: Culture as a Lens on Responsible AI</title><link>https://raise.uw.edu/events/culture-as-a-lens-on-responsible/</link><guid isPermaLink="true">https://raise.uw.edu/events/culture-as-a-lens-on-responsible/</guid><description>AI technologies are often developed within mono-cultural development contexts, but are meant to interact with multi-cultural usage contexts with divergent values, knowledge systems, and interpretive practices. In this talk, I will present our research that uses culture as a lens on responsible AI, and demonstrate its crucial influence along various dimensions: culture as reflected in data, cultural values and norms that we encode in our models, and cultural knowledge systems we rely on for evaluation of our models. More specifically, we demonstrate how geo-cultural differences shape what language is deemed abusive, stereotypical, or offensive, and where AI models fail in detecting them.</description><pubDate>Fri, 09 May 2025 07:00:00 GMT</pubDate></item><item><title>Guest Speaker: Beyond Scaling: Frontiers of Retrieval-Augmented Language Models</title><link>https://raise.uw.edu/events/beyond-scaling-frontiers-of-retrieval-augmented-language/</link><guid isPermaLink="true">https://raise.uw.edu/events/beyond-scaling-frontiers-of-retrieval-augmented-language/</guid><description>Despite their success, Large Language Models (LLMs) remain limited by issues like hallucination and outdated knowledge. In this talk, Akari introduces Augmented LMs—a new paradigm that enhances LLMs with external modules for greater reliability. 
Focusing on Retrieval-Augmented LMs, she presents her research on scalable training and retrieval methods, and highlights OpenScholar, a system now used by over 30,000 researchers. She concludes with a vision for future advances in modular, multimodal AI.</description><pubDate>Fri, 02 May 2025 07:00:00 GMT</pubDate></item><item><title>Shilpi: Responsible AI for Enterprise Data Centers: Balancing Technical Excellence and Ethical Standards</title><link>https://raise.uw.edu/events/responsible-ai-for-enterprise-data-centers/</link><guid isPermaLink="true">https://raise.uw.edu/events/responsible-ai-for-enterprise-data-centers/</guid><description>As AI transforms enterprise data center infrastructure, product managers face a critical challenge: driving innovation while upholding ethical standards. This session explores how Responsible AI is shaping the future of system offerings—ensuring data privacy, fairness, and transparency in environments where operational data is central. Through real-world examples, attendees will learn how to apply ethical AI frameworks to build scalable, innovative, and socially responsible solutions that meet both technical and societal needs.</description><pubDate>Fri, 25 Apr 2025 07:00:00 GMT</pubDate></item><item><title>Pavel: Weak-to-Strong Generalization</title><link>https://raise.uw.edu/events/weak-to-strong-generalization/</link><guid isPermaLink="true">https://raise.uw.edu/events/weak-to-strong-generalization/</guid><description>As AI systems grow more capable, aligning them becomes increasingly challenging—especially when their behavior outpaces human understanding. This talk explores weak-to-strong generalization: can weak models effectively supervise stronger ones? 
Through experiments in NLP, chess, and reward modeling, the talk shows how even limited supervision can unlock surprising performance gains—and why alignment techniques like RLHF may not scale without new approaches.</description><pubDate>Fri, 18 Apr 2025 07:00:00 GMT</pubDate></item><item><title>Emily: Book Tour: The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want</title><link>https://raise.uw.edu/events/book-tour-the-ai-con-how/</link><guid isPermaLink="true">https://raise.uw.edu/events/book-tour-the-ai-con-how/</guid><description>A smart, incisive look at the technologies sold as artificial intelligence, the drawbacks and pitfalls of technology sold under this banner, and why it’s crucial to recognize the many ways in which AI hype covers for a small set of power-hungry actors at work and in the world.</description><pubDate>Fri, 11 Apr 2025 07:00:00 GMT</pubDate></item><item><title>Hari: Bridging the AI-UX Divide: Responsible AI Design through Human-Centered Collaboration</title><link>https://raise.uw.edu/events/bridging-the-ai-ux-divide-responsible-ai/</link><guid isPermaLink="true">https://raise.uw.edu/events/bridging-the-ai-ux-divide-responsible-ai/</guid><description>In traditional software development, UX design and engineering are distinct: designers create specs, and engineers build them. AI blurs this line, as systems evolve dynamically with data and user interactions. In this talk, I’ll explore how collaboration at the AI-UX boundary shapes responsible AI design. Drawing from industry studies, I’ll show how “leaky” abstractions encourage cross-disciplinary teamwork and why end-user data is crucial in both AI and UX design. I’ll discuss challenges in aligning AI behavior with human expectations, emphasizing transparency, interpretability, and accountability. 
Finally, I’ll present insights from generative AI prototyping and share practical tools for integrating responsible AI principles into UX workflows.</description><pubDate>Fri, 14 Mar 2025 07:00:00 GMT</pubDate></item><item><title>Ranjay: Scale Can’t Overcome Pragmatics: Instilling Spatial and Compositional Reasoning into Multimodal Language Models</title><link>https://raise.uw.edu/events/scale-cant-overcome-pragmatics-instilling-spatial/</link><guid isPermaLink="true">https://raise.uw.edu/events/scale-cant-overcome-pragmatics-instilling-spatial/</guid><description>Compositionality is key to human vision and language, allowing us to interpret new scenes and sentences by combining familiar elements. While past research incorporated compositional and spatial priors into machine learning, large-scale models trained on internet data have largely overlooked them. This talk formalizes compositionality through cognitive science, evaluating whether models like GPT-4 and Gemini exhibit it—revealing near-random performance. We explore architectural and training modifications inspired by neuroscience and cognitive science to enhance compositional reasoning and address gaps in training data, showing how high-quality human annotations can help build stronger vision-language models.</description><pubDate>Fri, 07 Mar 2025 08:00:00 GMT</pubDate></item><item><title>Asia: Toward Effective Data Governance in Data-Driven Systems</title><link>https://raise.uw.edu/events/toward-effective-data-governance-in-data-driven/</link><guid isPermaLink="true">https://raise.uw.edu/events/toward-effective-data-governance-in-data-driven/</guid><description>Contemporary AI systems are characterized by extensive personal data collection despite the increasing societal costs associated with such practices. 
To prevent harm, data protection regulations specify various principles for respectfully processing user data, such as purpose limitation, data minimization, or the need to establish a valid basis for data processing, including consent or legitimate interest. Yet, practical implementations of these principles in data-driven systems leave much to be desired. This talk will delve into the computational and human factors contributing to lax data protection implementations and uncover ways research can contribute to improved data governance online.</description><pubDate>Fri, 21 Feb 2025 08:00:00 GMT</pubDate></item><item><title>Dr. Yang: Training Social Skill via Large Language Models</title><link>https://raise.uw.edu/events/training-social-skill-via-large-language/</link><guid isPermaLink="true">https://raise.uw.edu/events/training-social-skill-via-large-language/</guid><description>Social skills are key to success in work and life, but finding good practice opportunities isn’t easy. Most training relies on expert supervision, which doesn’t scale well given the shortage of trained professionals. So how can we make social skill training more accessible and engaging? In this talk, I’ll share two projects using LLMs to bridge this gap—one helping therapists improve their skills with AI-powered feedback, and another using simulated practice to teach conflict resolution. We’ll also discuss the risks of LLM-based training and how we can address them.</description><pubDate>Fri, 14 Feb 2025 08:00:00 GMT</pubDate></item><item><title>Michael: Human-AI Interaction Under Societal Disagreement</title><link>https://raise.uw.edu/events/human-ai-interaction-under-societal-disagreement/</link><guid isPermaLink="true">https://raise.uw.edu/events/human-ai-interaction-under-societal-disagreement/</guid><description>How can we better model human attitudes and behaviors? Traditional simulations often fail to capture the complexity of human behavior, but AI opens up new possibilities. 
In this talk, Michael will discuss generative agents—AI-driven simulations of human behavior that can remember, reflect, and plan. Grounded in qualitative data from over 1,000 Americans, these agents replicate survey responses with high accuracy. This research offers insights into designing more effective online social spaces, addressing societal disagreements in AI, and embedding societal values into algorithms.</description><pubDate>Fri, 07 Feb 2025 08:00:00 GMT</pubDate></item><item><title>Dr. Avijit Ghosh: “It’s not a representation of me”: Examining Accent Bias and Digital Exclusion in Synthetic AI Voice Services</title><link>https://raise.uw.edu/events/its-not-a-representation-of-me/</link><guid isPermaLink="true">https://raise.uw.edu/events/its-not-a-representation-of-me/</guid><description>AI speech generation and voice cloning technologies produce natural speech but may reinforce accent discrimination. A study of two synthetic AI voice services (Speechify and ElevenLabs) highlights performance gaps across regional English accents, underscoring the need for inclusive design to avoid excluding diverse linguistic traits. These findings call for actionable steps from developers, policymakers, and organizations to create fairer, socially responsible AI solutions.</description><pubDate>Fri, 31 Jan 2025 08:00:00 GMT</pubDate></item><item><title>Tori: How Microsoft Blends Safety and Security in AI Red Teaming</title><link>https://raise.uw.edu/events/how-microsoft-blends-safety-and-security/</link><guid isPermaLink="true">https://raise.uw.edu/events/how-microsoft-blends-safety-and-security/</guid><description>The Microsoft AI Red Team (AIRT)’s principles and methods combine security red teaming practices and adversarial ML techniques, with safety frameworks and perspectives. 
This talk will cover how our AIRT integrates these different approaches while red teaming all of our high-risk GenAI tech, resulting in a cross-functional team approach that adjusts to our diverse technology offerings, from models to copilots.</description><pubDate>Fri, 24 Jan 2025 08:00:00 GMT</pubDate></item><item><title>Michael: Generative Agents: Interactive Simulacra of Human Behavior</title><link>https://raise.uw.edu/events/generative-agents-interactive-simulacra-of-human/</link><guid isPermaLink="true">https://raise.uw.edu/events/generative-agents-interactive-simulacra-of-human/</guid><description>How can we better model human attitudes and behaviors? Traditional simulations often fail to capture the complexity of human behavior, but AI opens up new possibilities. In this talk, Michael will discuss generative agents—AI-driven simulations of human behavior that can remember, reflect, and plan. Grounded in qualitative data from over 1,000 Americans, these agents replicate survey responses with high accuracy. This research offers insights into designing more effective online social spaces, addressing societal disagreements in AI, and embedding societal values into algorithms.</description><pubDate>Fri, 17 Jan 2025 08:00:00 GMT</pubDate></item><item><title>Pawan: Philosophical Considerations in AI Alignment</title><link>https://raise.uw.edu/events/philosophical-considerations-in-ai-alignment/</link><guid isPermaLink="true">https://raise.uw.edu/events/philosophical-considerations-in-ai-alignment/</guid><description>The rapid rise of artificial intelligence brings challenges in ensuring it aligns with human values and ethics. This session dives into the philosophical core of AI Alignment—how to build systems that act in line with human intentions and morals. Topics include value uncertainty, ethical decision-making, existential risks, and approaches like Inverse Reinforcement Learning, Coherent Extrapolated Volition, and Constitutional AI. 
Beyond technical fixes, AI Alignment raises profound questions about intelligence, ethics, and the future of human-technology interaction.</description><pubDate>Fri, 13 Dec 2024 08:00:00 GMT</pubDate></item><item><title>Jared: Long-term Societal Impacts of AI and the Role of Higher Education</title><link>https://raise.uw.edu/events/long-term-societal-impacts-of-ai-and/</link><guid isPermaLink="true">https://raise.uw.edu/events/long-term-societal-impacts-of-ai-and/</guid><description>Jared recently joined UW-IT as the Lead AI Architect, bringing decades of expertise in enterprise-level AI solutions to education. He’ll share his insights on sustainability and environmental concerns around AI, with a focus on the role of higher education. Join us to learn about UW’s plans in the AI space and engage in discussions about the future of AI with a leader in the field! Jared began his career building core Internet infrastructure, including oz.net (ISP) and the Seattle Internet eXchange. He later founded Cascadeo, an international tech services firm recognized in Gartner’s Magic Quadrant for Public Cloud IT Transformation.</description><pubDate>Fri, 15 Nov 2024 08:00:00 GMT</pubDate></item><item><title>Dr. Stypińska: AI Ageism: studying age discrimination in the era of algorithms. Reflections and future directions for researching algorithmic inequalities</title><link>https://raise.uw.edu/events/ai-ageism-studying-age-discrimination-in/</link><guid isPermaLink="true">https://raise.uw.edu/events/ai-ageism-studying-age-discrimination-in/</guid><description>In this talk, Dr. Stypińska will present findings from her AGEAI research on AI ageism, supported by empirical data from Germany, Spain, Great Britain, Poland, and the Netherlands. 
She will also reflect on how the social sciences can provide a deeper and more systematic understanding of algorithmic inequalities, extending beyond the study of AI bias.</description><pubDate>Fri, 08 Nov 2024 08:00:00 GMT</pubDate></item><item><title>Dr. Allison Koenecke: Auditing Automated Speech Recognition Systems for Disparities</title><link>https://raise.uw.edu/events/auditing-automated-speech-recognition-systems-for/</link><guid isPermaLink="true">https://raise.uw.edu/events/auditing-automated-speech-recognition-systems-for/</guid><description>Automated speech recognition (ASR) systems convert spoken language to text across applications, but Allison’s audit of commercial ASR systems (e.g., OpenAI, Amazon, Apple, Google, IBM, Microsoft) reveals notable underperformance for African American English speakers and individuals with language disorders like aphasia. Her findings quantify these disparities and identify the specific, underlying ML-driven causes. Allison underscores the importance of regular audits to ensure emerging speech systems serve all users inclusively.</description><pubDate>Fri, 01 Nov 2024 07:00:00 GMT</pubDate></item></channel></rss>