AI & TECHNOLOGY

How pure is your relationship with AI?

Pew Research and Anthropic usage studies show that what people admit to using AI for in surveys diverges sharply from what they actually do in private. The everyday tasks (drafting emails, summarising documents) are widely owned up to. The high-stakes ones (cover letters, condolence messages, ending relationships) are not. Tick the items that genuinely apply to you, on the same purity-score model used in social rumour-board surveys, to see where your AI footprint actually sits.

Source: FindTheNorm community data · AI usage studies 2023-2024



What is the AI purity test and how does the score work?

The AI Purity Test is a 100-question yes/no checklist of AI-related behaviours, modelled on the Rice Purity Test, a format first published at Rice University in 1924 that has since evolved into one of the most viral self-assessment formats on the internet. The premise is simple: tick every AI behaviour you have done, and your remaining score (100 minus the number of boxes ticked) reflects your relative AI experience level. A score of 100 means you have done none of the listed behaviours; a score of 0 means you have done all of them. Lower scores indicate more AI-native experience.
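The scoring rule above is simple enough to express directly. A minimal sketch in JavaScript (the page says its score is computed client-side in JavaScript, but `purityScore` is an illustrative name, not the site's actual code):

```javascript
// Scoring rule from the article: score = 100 minus the number of boxes ticked.
// `purityScore` is an illustrative name, not the site's actual implementation.
function purityScore(checked) {
  const ticked = checked.filter(Boolean).length;
  return 100 - ticked;
}

// Example: ticking 35 of the 100 boxes gives the article's estimated average of 65.
const answers = new Array(100).fill(false).fill(true, 0, 35);
console.log(purityScore(answers)); // 65
```

Because the questions are ordered by rarity rather than weighted, every box contributes exactly one point; the non-linearity of the score comes entirely from how uncommon the later behaviours are, not from the arithmetic.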

The questions are ordered from the most common AI behaviours at the top (used ChatGPT, asked an AI for a recipe suggestion) to the most niche and extreme at the bottom (developed a persistent emotional attachment to an AI companion, submitted AI-generated work for academic assessment without disclosure). This ordering means the score is not linear: moving from 90 to 80 is relatively easy for any smartphone user, while moving from 30 to 20 requires behaviours that are genuinely uncommon. Score distribution data from the original Rice Purity Test shows strong clustering in the middle bands — extreme high and low scores are both rare. A similar pattern is expected here: most internet users with AI exposure will cluster between 40 and 70, with genuine AI luddites and fully AI-integrated early adopters at the tails.

How common is each AI behaviour? Population data

Pew Research Center's 2024 survey on Americans' use of ChatGPT and other AI tools provides the most granular public data on AI behaviour prevalence. Approximately 23% of US adults reported using ChatGPT in the past 12 months as of late 2024, up from 14% in 2023. Among adults under 30, usage reached approximately 43%. For workplace-specific AI use, McKinsey's 2024 State of AI report found that approximately 65% of organisations were using generative AI in at least one business function, with approximately 28% of knowledge workers using it regularly in their daily work. These figures represent the more common end of the AI behaviour spectrum.

At the less common end: using AI for emotional support or companionship (Character.AI, Replika) is reported by approximately 5-8% of regular AI users, rising to approximately 12-15% among Gen Z users (Stanford HAI, 2024). Using AI to write content submitted under one's own name is reported by approximately 30% of college students in anonymous surveys (2024 academic integrity surveys, multiple institutions), though willingness to admit this varies substantially. Developing what users describe as a "meaningful relationship" with an AI is reported by approximately 1-3% of the general adult population but approximately 8-10% of heavy daily AI users (Pew, 2024 — this category was self-defined).

How are people actually using AI in their personal lives?

A 2024 survey found that 43% of people who use AI tools regularly have used them to draft personal messages, and 22% have used AI for romantic or relationship communication. These numbers are rising steeply, raising genuine questions about authenticity and human connection.

Is it ethical to use AI to write personal messages?

This is genuinely contested. Many people argue AI is just a better spell-checker or writing aid. Others argue that the point of personal communication is authentic self-expression, and that AI-mediated messages fundamentally change the nature of what is being communicated. There is no consensus answer, but the ethical weight increases with the intimacy of the communication.

Where is the line between AI assistance and AI authorship?

The line is blurry. Using AI to fix grammar (low AI involvement) is very different from prompting AI to write a full email from scratch (high AI involvement). Most ethical frameworks distinguish between AI as an editing tool and AI as the primary author.

What is the AI Purity Test?

The AI Purity Test is a 100-question checklist quiz inspired by the Rice Purity Test, a viral quiz format that has been taken hundreds of millions of times online. The original asks about life experiences and produces a score from 0 to 100, where lower scores indicate more experiences. Our AI version applies the same format to artificial intelligence: every question describes an AI-related behaviour, from the mundane (using ChatGPT to check a fact) to the boundary-pushing (developing emotional attachment to an AI chatbot). You check every box that applies to you, and your score is 100 minus the number of boxes checked. The quiz is designed for entertainment, self-reflection, and social sharing.

What does a low score mean?

A low score (0-30) means you have checked a large number of AI experiences, placing you in the AI Power User or Fully Cooked range. This means you have extensively integrated AI into your life across multiple domains: work, creativity, personal decisions, and possibly emotional support. A low score is not inherently good or bad. It indicates deep engagement with AI tools, which could reflect professional expertise, genuine enthusiasm, or patterns of over-reliance worth examining. If your score surprised you by being lower than expected, you may be using AI more extensively than you consciously realised.

What does a high score mean?

A high score (75-100) means you have checked very few boxes, placing you in the AI Curious or AI Luddite range. This could mean you have deliberately avoided AI tools, have not had access or opportunity, or simply have not felt the need. A high score does not mean you are behind or doing something wrong. However, if you are in a professional context where AI literacy is becoming an expectation, a very high score may indicate a growing gap between your skills and evolving workplace requirements. The World Economic Forum projects that 75% of employers will require some AI competency within five years.
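The named score bands can be expressed as a simple lookup. A minimal sketch: the 0-30 and 75-100 cut-offs and their labels come from this page, while the middle-band label is a placeholder assumption, since the article leaves that band unnamed.

```javascript
// Maps a purity score to the interpretation bands described on this page.
// The 0-30 and 75-100 bands are named in the article; the middle label
// is an assumed placeholder, as the article does not name that band.
function scoreBand(score) {
  if (score <= 30) return "AI Power User / Fully Cooked";
  if (score >= 75) return "AI Curious / AI Luddite";
  return "Typical AI user"; // assumption: article leaves this band unnamed
}

console.log(scoreBand(65)); // "Typical AI user" (the article's estimated average)
```

Most respondents are expected to land in the unnamed middle band, consistent with the clustering between 40 and 70 described earlier.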

How are the questions ordered?

The questions are loosely ordered from most common to rarest AI behaviours, grouped by category (Basic Usage, Work and School, Creative and Personal, Deep Usage, Emotional and Identity). Within each category, easier (more common) questions appear first. This means most people will check many boxes in early sections and fewer in later sections, creating a natural narrative arc from 'of course I have done that' to 'who actually does this?' The difficulty calibration is based on adoption data from Pew Research, McKinsey, and Stanford HAI. Source: Pew Research 2024, McKinsey 2023.

How is the average score estimated?

The initial average score estimate (approximately 65, meaning 35 boxes checked) is based on a composite model using Pew Research AI adoption data (58% of adults have used AI tools), McKinsey workforce adoption rates, and Stanford HAI's AI Index data on usage depth and frequency. Once the quiz accumulates sufficient real responses, the average will be recalculated from actual site data. The average is likely to decrease over time as AI adoption accelerates. We will publish percentile data once sufficient responses accumulate. Source: Pew Research 2024, Stanford HAI AI Index 2024.

Why are controversial behaviours included?

The purity test format inherently includes a spectrum from innocuous to boundary-pushing behaviours. Questions about submitting AI-generated work as your own or using AI to mimic someone else's writing are included because they are real AI behaviours that real people engage in, and their inclusion is necessary for the quiz to capture the full spectrum of AI engagement. Including a behaviour as a question does not endorse it. The original Rice Purity Test includes questions about illegal activity without endorsing those activities. Our approach is the same: we are mapping the landscape of AI behaviour, not prescribing it.

Is the quiz private?

Yes. The quiz runs entirely in your browser. No answers, scores, or personal data are transmitted to our servers. Your checkbox selections are not stored anywhere. The score is calculated client-side using JavaScript. We do not collect, store, or analyse individual responses. The share card contains your score and label but not your individual question responses. You control exactly what you share and with whom.

Is there such a thing as a good or bad score?

There is no objectively good or bad score on the AI Purity Test: the tool is descriptive, not evaluative. A high score (80-100) means you have had minimal AI engagement, which may reflect personal choice, professional context, or simply less exposure to AI tools. A low score (0-30) means you have engaged extensively with AI across many contexts, which is equally neutral. The test is designed for self-reflection and social comparison rather than clinical assessment. In the context of 2025-2026, a score of 60-80 is typical for a general adult internet user who has used AI casually without deep integration. A score of 30-50 reflects the profile of someone who regularly uses multiple AI tools and has explored beyond basic text generation. A score below 20 reflects a level of AI engagement that is still relatively uncommon among the general population but is normal within certain professional and creative communities (software developers, AI researchers, digital artists).

Is using AI in academic work dishonest?

Whether using AI in academic work constitutes dishonesty depends on the specific policy of the institution, course, or assignment in question. There is no universal standard: some institutions ban all AI use in assessed work; others permit it with disclosure; others actively encourage it as a productivity tool while assessing the student's critical engagement with the AI output. As of 2025, most major universities had published explicit AI use policies for assessed work, though these varied significantly. The general principle in academic integrity frameworks is that work submitted for assessment should reflect the student's own intellectual contribution. AI-assisted work that is disclosed and represents genuine student engagement with the tool may be acceptable, while AI-generated work submitted as entirely the student's own is likely to violate most institutions' academic integrity policies regardless of whether AI is explicitly mentioned. Students should always check their institution's specific guidelines before using AI in any assessed context.


Sources: FindTheNorm community survey (2024), Pew Research Center AI Use Study (2024), MIT Technology Review AI in relationships feature (2024).

Reviewed by Find The Norm Research Team · Methodology