How to choose AI-based tools for market research: a guide based on ESOMAR's 20 questions

As AI tools rapidly reshape the landscape of market research, buyers face a critical question: how do we know which AI solutions are trustworthy, ethical, and effective? To address this, ESOMAR - a leading global association for insights and analytics - has published a structured set of 20 Questions that organizations should ask AI solution providers before adopting their technologies.

At Glaut, we believe transparency is a prerequisite for trust. That’s why we’ve taken the initiative to ask - and answer - these 20 Questions ourselves. Our goal is to give researchers a clear, honest view of how our AI-based services operate, where their limits are, and how we ensure ethical, high-quality research at scale.

A: Company Profile

1. What experience and know-how does your company have in providing AI-based solutions for research?

Glaut is an AI-native software company specializing in qualitative and hybrid qual-quant research. We have been pioneering AI-moderated interviews (AIMI) since our inception in 2023 and have successfully executed over 100 research projects and more than 25,000 interviews with customers worldwide, including major research firms such as IPSOS, Kantar, and Macromill.

Our team has deep experience in building AI solutions for large-scale research, including AI-powered moderation, coding, analysis, and reporting. Our multidisciplinary team includes experienced researchers alongside data scientists and developers specializing in natural language processing (NLP), machine learning (ML), and conversational AI.

Since inception, we have focused on developing practical AI applications tailored specifically for market research workflows. We combine deep domain knowledge of qualitative methods with cutting-edge technology skills to deliver solutions that enhance researcher skills while maintaining methodological rigor.

Our approach emphasizes transparency, customization, flexible integration into existing client environments, and continuous improvement driven by user feedback.

2. Where do you think AI-based services can have a positive impact for research? What features and benefits does AI bring, and what problems does it address?

AI-based services can significantly accelerate the research process, improve scalability, and provide deeper, faster insights. AI enables Glaut to conduct large volumes of interviews quickly, automate qualitative coding and analysis, and generate executive summaries in minutes. It solves key challenges like slow turnaround times, scalability limits in qualitative research, and the need for manual data processing. Most importantly, it opens up the opportunity to combine the best of quant (efficiency) and qual (depth) in a single methodology.

3. What practical problems and issues have you encountered in the use and deployment of AI? What has worked well/how/what has worked less well/why?

One challenge has been ensuring researchers understand how and when to use AIMI versus traditional methods. Researchers initially tried to replicate surveys with our platform, which is not the best fit. Building trust in AI-generated outputs and educating researchers on the methodology have also required significant effort.

What has worked well? Researchers have appreciated the customization Glaut offers, particularly the ability to control project details and the self-serve capability that our competitors lack. The AI-driven reporting and multilingual capabilities have been particularly successful.

What has worked less well, and why? Some researchers struggle to identify the best use cases for AIMI, and there is initial skepticism toward adopting a new methodology. We also faced challenges when researchers tried to use Glaut to replicate advanced survey features that we do not support, such as MaxDiff. Similarly, we receive some pushback from qual researchers who feel Glaut is not as good as a human moderator. We agree with them: AIMI is an extension of quant, not a replacement for qual.

B: Is the AI capability/service explainable and fit for purpose?

4. Can you explain the role of AI in your service offer in simple, non-technical terms in a way that can be easily understood by researchers and stakeholders? What are the key functionalities?

Glaut uses AI to moderate interviews (i.e., ask questions and follow-ups), analyze responses, and summarize results automatically. Instead of moderating live interviews themselves, researchers can let Glaut moderate them; instead of manually listening to interviews and writing reports, researchers can use Glaut to quickly collect and process large volumes of qualitative data. Our platform combines three main capabilities:

  • AI-moderated interviews: voice-led conversations guided by AI moderators that handle open-ended questions with dynamic probing while allowing researchers full control over interview flow (a minimal sketch of such a probing loop follows this list).
  • AI-powered analysis: specialized agents perform thematic coding and interpretative analysis to quickly surface insights from qualitative data, giving researchers an easy, intuitive way to explore the data and discover different layers of insight.
  • AI-powered reporting: the Report Builder agent generates structured reports based on the analyzed data, tailored to client requirements. Reports are goal-based: researchers choose which analyses the agent should use to address each research goal.
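
To make the idea of dynamic probing concrete, here is a minimal illustrative sketch in Python. Every name in it (the Question and Turn structures, the needs_probe heuristic, the generate_probe placeholder) is an assumption chosen for brevity; in practice this logic is driven by hosted LLMs and by the researcher's interview-flow settings, and the sketch is not Glaut's implementation.

```python
# Illustrative sketch only, not Glaut's implementation: a scripted question is
# asked, and short or vague answers trigger up to a researcher-defined number
# of follow-up probes before the interview moves on.
from dataclasses import dataclass

@dataclass
class Question:
    text: str
    max_probes: int = 2  # researcher-defined cap on follow-ups for this question

@dataclass
class Turn:
    speaker: str  # "moderator" or "respondent"
    text: str

def needs_probe(answer: str) -> bool:
    """Toy stand-in for an LLM judgment: probe short or vague answers."""
    vague_markers = ("maybe", "not sure", "i guess", "depends")
    too_short = len(answer.split()) < 8
    is_vague = any(marker in answer.lower() for marker in vague_markers)
    return too_short or is_vague

def generate_probe(answer: str) -> str:
    """Placeholder for a hosted-LLM call that drafts a tailored follow-up."""
    return f"Could you tell me a bit more about why you said: '{answer}'?"

def moderate(question: Question, get_answer) -> list[Turn]:
    """Run one scripted question with dynamic probing, respecting the researcher's cap."""
    transcript = [Turn("moderator", question.text)]
    answer = get_answer(question.text)
    transcript.append(Turn("respondent", answer))
    probes = 0
    while probes < question.max_probes and needs_probe(answer):
        probe = generate_probe(answer)
        transcript.append(Turn("moderator", probe))
        answer = get_answer(probe)
        transcript.append(Turn("respondent", answer))
        probes += 1
    return transcript
```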

5. What is the AI model used? Are your company’s AI solutions primarily developed internally or do they integrate an existing AI system and/or involve a third party and if so, which?

The Glaut platform is built on top of state-of-the-art AI models provided by leading AI companies such as OpenAI, Google, and Anthropic, through their enterprise-grade APIs, which provide strong security and privacy guarantees for our data. Glaut is also model-agnostic, meaning we can identify and choose the best-performing model for each task we need to accomplish. Since our inception we have been fine-tuning our AI agents, such as the moderator, with key findings from over 25,000 interviews. Specifics about third-party integrations can be confirmed if needed.
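
As a purely hypothetical illustration of what model-agnosticism can look like in practice, the sketch below routes each research task to a configured model identifier; the task names, model identifiers, and mapping are assumptions for illustration, not Glaut's actual configuration.

```python
# Hypothetical sketch of a model-agnostic task router; task names and model
# identifiers are illustrative placeholders, not Glaut's actual configuration.
from typing import Callable

# Each research task maps to the provider/model currently judged best for it,
# so swapping providers is a configuration change rather than a code change.
MODEL_ROUTING: dict[str, str] = {
    "moderation": "provider-a/conversational-model",
    "thematic_coding": "provider-b/reasoning-model",
    "report_drafting": "provider-c/long-context-model",
}

def run_task(task: str, prompt: str, call_model: Callable[[str, str], str]) -> str:
    """Dispatch a prompt to whichever hosted model is configured for the task."""
    model_id = MODEL_ROUTING.get(task)
    if model_id is None:
        raise ValueError(f"No model configured for task '{task}'")
    return call_model(model_id, prompt)
```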

Currently, our platform uses AI models hosted through Microsoft Azure cloud services. Glaut inherits all relevant security, compliance, and privacy policies from Microsoft Azure, details of which are publicly available at their policy reference site: https://learn.microsoft.com/en-us/azure/ai-services/policy-reference

This approach balances flexibility in leveraging best-in-class AI technology while maintaining rigorous data protection standards.

6. How do the algorithms deployed deliver the desired results? Can you summarise the underlying data and how it interacts with the model to train your AI service?

Our algorithms process interview transcripts and audio data to identify themes, sentiments, and insights. We use supervised fine-tuning with real research data collected via Glaut to improve our agents; however, we use only Glaut's proprietary data - never customer data - to improve our algorithms.

  • Input text undergoes preprocessing, including semantic tagging, before being fed into supervised learning pipelines, where human-labeled examples teach the model to detect themes and filter out irrelevant content.
  • Reinforcement learning guided by ongoing human-review feedback continuously refines agent performance after deployment, keeping outputs accurate and contextually relevant as new scenarios arise during live projects.

Because Glaut integrates directly into live client environments via APIs and plugins that stream session input, the system provides immediate assistance synchronized with the customized scripts defined by client teams.
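
To show the supervised theme-detection idea in its simplest form, here is a toy sketch built on a generic text classifier; the snippets, labels, and scikit-learn pipeline are illustrative assumptions standing in for the human-labeled training data and the LLM-based agents Glaut actually uses.

```python
# Toy illustration of supervised theme detection from human-labeled examples;
# this is a sketch of the general technique, not Glaut's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Human-labeled training examples: interview snippets tagged with a theme,
# or "irrelevant" for off-topic content to be filtered out.
snippets = [
    "I buy this brand because it feels reliable and lasts long",
    "The price keeps going up, it is getting too expensive for me",
    "Sorry, my dog is barking, give me a second",
    "Delivery was late twice and support never answered",
]
labels = ["product_quality", "price", "irrelevant", "service"]

# A simple text-classification pipeline standing in for the learned theme detector.
theme_detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
theme_detector.fit(snippets, labels)

# New responses are coded automatically; "irrelevant" predictions are filtered out
# before thematic analysis, and humans review the resulting codes.
new_responses = ["It broke after a month, quality is not what it used to be"]
print(theme_detector.predict(new_responses))
```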

C: Is the AI capability/service trustworthy, ethical and transparent?

7. What are the processes to verify and validate the output for accuracy?

Researchers always review and validate outputs before client delivery. We also conduct internal QA, use built-in consistency checks, and encourage human oversight and feedback from our customers.

7.1. How are they documented?

QA steps and best practices are documented in our internal project execution playbook. Specifics can be provided upon request.

7.2. How do you measure and assess validity?

We periodically measure validity by comparing AI-generated insights against manual analyses and by collecting user feedback on the accuracy and relevance of outputs.
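
As one concrete, purely illustrative way such a comparison could be quantified, the sketch below computes raw agreement and Cohen's kappa between human-assigned and AI-assigned theme codes for the same responses; the example codes and the 0.7 threshold are assumptions, not Glaut's published metrics.

```python
# Minimal sketch of quantifying validity as coder agreement; data are illustrative.
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Theme codes assigned to the same responses by a human analyst and by the AI.
human_codes = ["price", "quality", "service", "price", "quality", "irrelevant"]
ai_codes    = ["price", "quality", "service", "quality", "quality", "irrelevant"]

agreement = accuracy_score(human_codes, ai_codes)   # raw percent agreement
kappa = cohen_kappa_score(human_codes, ai_codes)    # chance-corrected agreement

print(f"Raw agreement: {agreement:.2f}, Cohen's kappa: {kappa:.2f}")

# Illustrative acceptance rule: escalate for deeper human review if the
# chance-corrected agreement falls below a pre-agreed threshold.
if kappa < 0.7:
    print("Agreement below threshold: escalate batch for manual re-coding review.")
```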

7.3. Is there a process to identify and handle cases where the system yields unreliable, skewed, or biased results?

Yes, researchers can flag issues during the project review, and our Customer Success team assists in resolving them. Automated checks for inconsistent responses from the interviewees are also part of the interview process.

7.5. Do you use any specific techniques to fine-tune the output?

Yes, we use real project data to fine-tune our moderation and reporting agents to better reflect user expectations and project goals.

7.6. How do you ensure that the results generated are 'fit for purpose'?

Human researchers always have final review and editing authority to ensure the outputs meet the project objectives. We don't expect - and neither should you - the software to deliver end artifacts directly to the customer.

8. What are the limitations of your AI models and how do you mitigate them?

Glaut does not create proprietary LLMs; instead, it leverages state-of-the-art LLMs from third parties and always looks for the most modern, efficient, and best-performing models. Limitations include potential misinterpretation of very niche or highly technical content, and reliance on high-quality audio input for the best results. We mitigate these limitations through human oversight, continuous model fine-tuning, and the ability to edit and refine reports.

9. What considerations have you taken into account to design your service with a duty of care to humans in mind?

Ethical responsibility underpins all aspects of Glaut’s platform design:

  • We comply fully with global privacy regulations including GDPR & CCPA by ensuring informed consent mechanisms within client workflows.
  • No interview data is ever used directly to train third-party AI models; respondent confidentiality is strictly protected according to our providers’ policies.
  • Aggregated anonymized data may be used internally solely for service improvement without breaching individual privacy or confidentiality agreements.
  • Our infrastructure adheres to ISO/IEC 27001 standards, certified annually, ensuring robust information security management covering role-based access control (RBAC), multi-factor authentication (MFA), encrypted communications (TLS 1.2+), secrets management via GitHub/Heroku pipelines, and periodic audits aligned with best practices.

Human oversight is embedded throughout, from interview moderation through analysis, to prevent misuse of automated decisions, while inclusive design principles ensure cultural sensitivity across the markets we serve.

Additional Data Governance & Privacy Measures:

Glaut acts as a Data Processor under GDPR rules, while customers retain ownership of and control over their respondents’ data. We never collect personally identifiable information (PII) unless customers explicitly provide anonymized IDs purely for research tracking purposes.

Data segregation occurs at multiple organizational levels within the Glaut software, supporting role-based access control defined by clients themselves, so only authorized personnel can access sensitive project data. Customer Success teams may access respondent-level information strictly under the contractual confidentiality obligations governed by commercial agreements, including Data Processing Agreements executed prior to project start.

All servers hosting client data currently reside within EU jurisdictions, using secure cloud platforms such as MongoDB Atlas (for storage) alongside Heroku/AWS hosting environments - all compliant with regional regulatory requirements regarding data sovereignty and protection.

Upon customer request after project completion, we permanently and irreversibly delete all associated personal and research data, following documented procedures aligned with GDPR mandates.

Privacy notices can be customized per client preference, either displaying Glaut’s own notice or linking externally as required, supporting transparency toward respondents about processing activities consistent with legal obligations placed on Data Controllers (customers).

Glaut is a proud ESOMAR Corporate Member, committed to upholding the highest standards in ethical and impactful research. We also continuously address these and other questions from our clients on our FAQ page, to ensure ongoing transparency and support informed decision-making.

Glaut

701 Tillery Street Unit 12-1806, Austin, Texas 78702, United States.