Keynote Speakers

Attila Bátorfy

Attila Bátorfy | Eötvös Loránd University

Attila Bátorfy is a data journalist, media scholar, and professor of journalism and information design at the Department of Media and Communication Studies at Eötvös Loránd University Budapest. Before his academic career, he worked as a journalist for nearly fifteen years. He was previously the head of data at Átlátszó, Hungary’s first investigative journalism outlet. In 2018, together with Átlátszó, he founded the visual journalism project Átló. He has received numerous awards for his journalistic work and has published several articles on the authoritarian tendencies and transformation of the Hungarian media system and journalism over the past decade.

Talk Title: Data journalism in an illiberal regime

Abstract: Two decades ago, most evangelists of data journalism proclaimed that an age of facts and reason was coming. We have many reasons to believe that the opposite has happened. While data journalism methods have become increasingly sophisticated and more data has become available, the impact of this type of content has been limited. This is especially true in Hungary, which has had an authoritarian government for a decade and a half. What has worked so far, what has not worked, and what will never work in data journalism in an illiberal country?

Yevheniia Drozdova

Yevheniia Drozdova
Texty.org.ua (Kyiv, Ukraine)

Yevheniia Drozdova holds a Master’s degree in Media and Communications and began her career at a local newspaper. In 2017, she joined the independent Ukrainian media outlet Texty.org.ua, where she now leads the Data Journalism Section. She is passionate about anthropological stories and often works with satellite imagery to explore social and geopolitical issues.

Since 2023, Yevheniia has served as a jury member for The Sigma Awards, an international data journalism competition. She also enjoys creating interactive data stories that make complex issues accessible to wider audiences.

Talk Title: Guiding Through the Fog: Media’s Role in Narrative Wars

Abstract: In today’s complex information landscape, simply debunking individual fakes is no longer sufficient. Readers need a compass — an approach that helps them navigate competing narratives and understand the intentions behind them.
Texty.org.ua embodies this approach as an independent Ukrainian data journalism outlet. Since 2014, the team has focused on studying disinformation and manipulation in the information space, combining traditional journalism with machine learning and natural language processing. Even before AI became mainstream, Texty was already using these technologies to create large-scale interactive projects that make complex topics accessible to broad audiences.
What challenges arise when preparing such investigations, and is the audience ready to engage and understand them? These are some of the questions we will address.
Texty’s work has earned international recognition, including The Sigma Awards along with multiple European and Ukrainian journalism honors. In 2025, Texty won a Sigma Award for a project that examined manipulations in Telegram channels popular in Ukraine.

Homa Hosseinmardi

Homa Hosseinmardi, PhD
University of California, Los Angeles

Homa Hosseinmardi is an Assistant Professor of Data Science (DataX) and Computational Communication at UCLA, where she directs the OASIS Lab (Online and AI Systems’ Integrity & Safety). Her research takes a holistic, large-scale approach to understanding sociotechnical systems and information ecosystems, with a focus on safety and trustworthiness.

She serves as an editor for the Journal of Quantitative Description: Digital Media, received the “Outstanding Research Award” during her Ph.D., and co-founded the CyberSafety workshop series. Her work has been featured in major media outlets, and she has published more than 30 peer-reviewed papers in top venues such as PNAS, Science Advances, TKDE, and IMWUT.

Talk Title: Overexamined Algorithms and Overlooked Agency: Rethinking Online Harm

Abstract: In recent years, critics of online platforms have raised concerns about the ability of recommendation algorithms to amplify problematic content with potentially radicalizing consequences. Yet most attempts to evaluate these claims suffer from a core methodological gap: the absence of appropriate counterfactuals—what users would have encountered without algorithmic recommendations—making it difficult to disentangle the influence of the algorithm from users’ own intentions.

To address this challenge, we first examined the scale of the problem and possible explanations. While we identified several distinct communities of news consumers within YouTube, from moderate to more extreme, we found little evidence that the YouTube recommendation algorithm is actively driving attention to problematic content. Overall, our findings indicate that trends in video-based political news consumption are determined by a complicated combination of user preferences, platform features such as recommendation systems, and the supply-and-demand dynamics of the broader web.

We propose a novel method called “counterfactual bots,” which enables us to disentangle the role of the user from that of platform features in the consumption of highly partisan content. By comparing bots that replicate real users’ consumption patterns with counterfactual bots that follow rule-based trajectories, we show that, on average, relying exclusively on the recommender results in less partisan consumption, with the effect being most pronounced for heavy partisan consumers.

Kae Petrin

Kae Petrin
Data & Graphics Reporter, Civic News Company

Kae Petrin is a data journalist and media educator whose work sits at the intersection of government accountability reporting and coverage of LGBTQ+ communities. They are currently a John S. Knight Journalism Fellow at Stanford University.

In 2020, they cofounded the Trans Journalists Association with several dozen fellow journalists. Through four years of volunteer work as a board member and Interim Executive Director, they oversaw the organization’s formalization into a 501(c)(3). Kae now serves on the board as President.

As a Data & Graphics Reporter on Civic News Company’s visuals team, they collaborate with local reporters to tell policy and accountability stories about education, voting rights, and public health. Throughout their career in local news they have contributed analysis and visualizations to reporting recognized by regional and national awards. They also served on the board of directors for the St. Louis Pro chapter of the Society of Professional Journalists from 2017 to 2023.

Kae presents on queer and trans coverage best practices, data reporting and visualization tools, and the collision of these topics for universities, industry conferences, and newsrooms around the U.S.

Talk Title: Malice & government data: What anti-LGBTQ+ policies mean for journalism

Abstract: The Trump administration has gone out of its way to obstruct and remove data collections related to LGBTQ+ Americans. In his first term, this looked like interference with the 2020 Census. Now, it involves mass-replacing the word “gender” with “sex” in scientific datasets and removing some of the few repositories of federal data that we have on LGBTQ+ people. The latter sets back years of research and changes to data collection practices that could have informed policy change. Yet LGBTQ+ data collection has a dark side: Texas, for instance, has attempted to use data as a tool of state power, to separate families from transgender children and to identify Texans seeking out-of-state medical care. How can journalists work with datasets that may actively misrepresent what they claim to describe? And how can we responsibly illuminate data that may fill in the absences the government is creating?

Speakers

Alberto Cairo

Alberto Cairo, PhD
University of Miami, School of Communication

At the University of Miami, Dr. Cairo is the Knight Chair in Visual Journalism and a Professor in the Department of Journalism and Media Management teaching classes on visualization and information graphics, and for the Master of Fine Arts in Interactive Media program. In addition, he is the Director of Visualization at The U’s Frost Institute for Data Science and Computing (IDSC). He holds a BA in Journalism from the Universidad de Santiago de Compostela, and MA and PhD degrees from the Universitat Oberta de Catalunya in Spain. Dr. Cairo has also been a lecturer and professor at Universidad Carlos III de Madrid (UC3M), Universitat Oberta de Catalunya, and the University of North Carolina at Chapel Hill.

Described in a Microsoft profile as “always in the vanguard of visual journalism,” Cairo has built a career that tracks alongside the major technological developments in and out of the newsroom. He served as Director of Infographics at El Mundo online (Spain, 2000-2005) and Editora Globo (Brazil, 2010-2011), and has consulted with, and organized workshops and other training programs for, media organizations and educational institutions in more than 30 countries. He works as a consultant for organizations such as the Google News Initiative, NORC at the University of Chicago, the European Commission, and the Congressional Budget Office.

Cairo is a host of The Data Journalism podcast with Simon Rogers and Scott Klein, and the author of four books.

This year, Cairo is launching the Open Visualization Academy, an open source repository of knowledge about information design and data visualization.

Meg Heckman

Meg Heckman
Northeastern University

Meg Heckman is an associate professor at Northeastern University in Boston where she explores journalism’s past, present and future. She uses a mix of computational and analog methods to uncover hidden media history narratives and revitalize local news.

Her scholarly work has been published in Journalism Practice, Journalism Studies and the Newspaper Research Journal. She also writes regularly for a variety of general interest and industry publications including WBUR Cognoscenti, Politico Magazine, USA Today, Saturday Evening Post, Poynter, Columbia Journalism Review and Nieman Lab.

Talk Title: AI in local newsrooms

Abstract: Local newsrooms are small, complex organizations that play a vital role in democracy. They are also often at the mercy of Big Tech, struggling to adapt their tools, their workflows and their audiences’ expectations to a relentless parade of technological changes. Just as local news publishers were beginning to find their footing in the digital information ecosystem, they were faced with a new technological revolution: Generative AI. This talk will consider GenAI as the latest in a long line of media innovations that have rocked local news organizations and ponder how its proliferation will impact their ability to foster civic life.

Duy Nguyen

Duy Nguyen* 
The New York Times

Duy Nguyen is a senior machine-learning engineer on the A.I. Initiatives team at The New York Times. This is his second tour at The Times; he previously worked in Opinion Graphics as part of the 2021-22 fellowship class. Since then, he has been a data scientist at the Brown Institute’s Local News Lab at Columbia Journalism School, helping local newsrooms use A.I. to solve business and audience problems.

In 2021, he was the lead developer of Gumshoe, an A.I. document ranking tool that is now a widely used DocumentCloud extension.

*(sounds like zwi win)

Talk Title: AI at The New York Times

Abstract: One of our mandates on The New York Times’s A.I. Initiatives team is to prototype tools that assist Times journalists in all parts of the reporting process. To that end, my work focuses on three areas:

For newsgathering, it’s media monitoring: tracking, transcribing, and summarizing what politicians and prominent figures are saying at scale;
For writing and production, it’s helping reporters generate first drafts of alt text, SEO headlines, and bullet-point summaries of articles so they can focus on original reporting;
For publishing, it’s quality control of this A.I.-generated copy. The Times does not publish A.I.-generated copy, so it’s crucial that we build tools that help editors review large volumes of A.I.-generated first drafts and make sure they meet our standards.