News and Insights

CSDI Workshop 2026

Written by Verian Group BE | Mar 23, 2026 3:14:48 pm

Comparing attitudes, behaviours, and experiences across countries is only as meaningful as the methods behind the data. Getting that right, across cultures, languages, and modes, is harder than it looks, and the stakes are high when research informs public policy at scale. 

The CSDI workshop provides a forum for researchers working on comparative survey methods. The event aims to improve comparative survey design, implementation, harmonisation, analysis, and dissemination. 

Verian is proud to be presenting research at the 2026 workshop in Vienna from 23 to 25 March. We look forward to contributing to a field that sits at the heart of evidence-based policymaking. Find out below when Verian speakers will be presenting.

Programme

23 March 

Integrating LLMs and persona-based cognitive testing into 3MC questionnaire appraisal: a QAS-based automated QDET pipeline - Hajar Gad, Senior Research Executive, and Tanja Kimova, Head of Evidence for EUMI

Hajar Gad and Tanja Kimova will present Verian's development of an automated questionnaire appraisal pipeline for 3MC surveys that integrates three components: (1) rule-based appraisal grounded in the Question Appraisal System (QAS), (2) persona-based synthetic cognitive testing, and (3) a parallel human cognitive testing stream used explicitly as a benchmark and calibration reference. The approach is anchored in the 3MC literature, with a focus on functional equivalence, response error, and measurement equity across cultural and linguistic contexts.

Designing and implementing the European Vocational Teachers Survey: methodological challenges and comparative insights - Nicolas Becuwe, Senior Director, and Maria Gonzalez, Research Executive 

The European Vocational Teachers Survey (EVTS), commissioned by Cedefop and implemented by Verian, is Europe's first large-scale survey dedicated to vocational education and training teachers — covering over 13,000 respondents across 22 EU Member States. Achieving nationally representative samples across such diverse educational systems demanded significant methodological innovation: from probability-based sampling design and multi-stage contact strategies, to adaptive incentive approaches informed by a full pilot phase across all participating countries. This paper documents those challenges and the solutions developed — offering a practical blueprint for future cross-national education and workforce surveys at scale. 

25 March

Designing for comparability in a mixed-mode 3MC survey: Methodological experiments from the 2024 EWCS - Tanja Kimova, Head of Evidence for EUMI

The 2024 European Working Conditions Survey (EWCS) conducted a full parallel run of face-to-face and online data collection across 29 countries, with embedded trials on incentives, questionnaire length, and household selection. The findings expose a foundational problem: the long-held assumption that using the same mode everywhere produces comparable data does not hold. Even within face-to-face interviewing, identical procedures applied in different countries create different biases under the same label. The uniformity is procedural, not substantive.

This matters because the field is at a crossroads. Face-to-face interviewing is becoming harder and more expensive to sustain, and the question is no longer whether to transition but how. The EWCS 2024 data show what is at stake: online surveys systematically under-represent lower-educated, blue-collar, and precarious workers, yield rates vary enormously depending on country infrastructure, and standard weighting only marginally corrects these gaps. Crucially, there is no single "online penalty" - mode effects are country-specific and frame-specific.

Three paths forward present themselves. Forced uniformity - mandating self-completion everywhere - is administratively clean but methodologically dangerous, producing data that look comparable but may not be. No decision at all, leaving each programme to manage its own transition, risks uncoordinated fragmentation and a silent loss of coherence across the European statistical ecosystem. The third path - managed heterogeneity within a common framework - allows design variation matched to country conditions while investing in the shared evidence infrastructure needed to maintain comparability: routine embedded experiments, transparent reporting of design effects, and cross-programme coordination.