AI & Tech
AI moved from research lab to everyday tool faster than any technology in history. Hundreds of millions of people now use a large language model every week, yet very few have measured what that means: how much of their thinking has shifted to the machine, where their job sits on the automation curve, or how their own performance compares to the systems they compete with. These quizzes turn that vague unease into a number.
9 AI & tech quizzes
ChatGPT reached 100 million users in 2 months, faster than any technology in history. Most people have no idea how dependent on it they've become.
Test your understanding of how large language models work.
Weil & Rosen technophobia scale with population norms.
How much of your daily work and thinking depends on AI tools?
A fun inventory of how many AI interactions you have had.
Human Benchmark typing distribution: where does your WPM rank?
Web Audio API tone identification test (a minimal tone-generation sketch follows this list). True perfect pitch: 1 in 10,000.
Four-domain assessment of emotional, cognitive, and social AI dependency.
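For the curious, here is roughly how a browser-based tone test produces its stimuli. This is a minimal sketch using the standard Web Audio API, not the quiz's actual code; the 440 Hz frequency and one-second duration are illustrative choices.

```typescript
// Minimal sketch: play a pure sine tone with the Web Audio API.
// The 440 Hz frequency and 1-second duration are illustrative,
// not the quiz's actual stimulus set.
function playTone(frequencyHz: number, durationSec: number): void {
  const ctx = new AudioContext();
  const osc = ctx.createOscillator();
  osc.type = "sine";                        // pure tone, no overtones
  osc.frequency.value = frequencyHz;        // the pitch under test
  osc.connect(ctx.destination);
  osc.start();
  osc.stop(ctx.currentTime + durationSec);  // schedule a clean stop
}

playTone(440, 1); // concert A for one second
```

Browsers only allow an AudioContext to start after a user gesture, which is why tone tests begin with a click.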
The clearest signal still comes from the 2013 Oxford study by Frey and Osborne, which scored 702 occupations on automation risk and found that roughly 47 percent of US jobs sat in the high-risk band. The OECD's later analysis, using task-level rather than occupation-level data, revised that down to around 14 percent at high risk and 32 percent at significant risk of substantial change. Both agree on the shape: routine cognitive work (data entry, basic accounting, paralegal research, telemarketing, insurance underwriting) sits at the top of the exposure curve, while jobs requiring physical dexterity in unstructured environments, complex social judgement, or genuine creative origination sit at the bottom. McKinsey's 2023 generative AI report shifted the picture again by showing that knowledge work previously considered safe (writing, coding, customer service) is exposed in ways the 2013 models did not anticipate. Run the will-AI-take-my-job tool to see your occupation's exact risk band.
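To make the banding concrete: Frey and Osborne assign each occupation a probability of computerisation between 0 and 1, counting anything above 0.7 as high risk and anything below 0.3 as low. A sketch of that lookup, using a few probabilities from the paper's appendix (the function and sample data are illustrative, not the tool's actual implementation):

```typescript
// Band an occupation by its Frey and Osborne probability of computerisation.
// The 0.3 / 0.7 cut points are the ones the paper uses for its bands;
// the sample probabilities are from the paper's appendix.
type RiskBand = "low" | "medium" | "high";

function riskBand(p: number): RiskBand {
  if (p > 0.7) return "high";
  if (p < 0.3) return "low";
  return "medium";
}

const sample: Record<string, number> = {
  "Telemarketers": 0.99,
  "Insurance Underwriters": 0.99,
  "Registered Nurses": 0.009,
};

for (const [occupation, p] of Object.entries(sample)) {
  console.log(`${occupation}: ${riskBand(p)} risk (p = ${p})`);
}
```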
There is no clinical definition of AI dependency yet, but useful proxies have emerged from recent research on technology use and cognitive offloading. The warning signs cluster around four behaviours: reaching for an AI tool before attempting a task yourself, feeling anxious or stuck when the tool is unavailable, accepting AI output without verification on topics where you have genuine domain expertise, and noticing that your own writing fluency, mental arithmetic, or problem-solving stamina has visibly degraded. A 2024 Microsoft Research study found that heavy generative AI users reported lower confidence in their independent capability after six months, even when their measured output quality stayed flat. A separate Carnegie Mellon study reported a measurable drop in critical evaluation effort the moment users felt confident in the model's answer. Dependency is not the same as productive use; the question is whether you can still think the thought without the tool. The AI dependency quiz scores you across four domains (emotional reliance, cognitive offloading, social substitution, and skill atrophy), with population context drawn from current usage surveys.
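As a rough illustration of the shape of that scoring (the 1-to-5 response scale, equal item weighting, and 0-to-100 rescaling are assumptions for the sketch, not the quiz's actual rubric):

```typescript
// Sketch of a four-domain dependency score in the shape described above.
// Domain names come from the text; the scale and weighting are assumptions.
type Domain =
  | "emotionalReliance"
  | "cognitiveOffloading"
  | "socialSubstitution"
  | "skillAtrophy";

type Responses = Record<Domain, number[]>; // each item answered 1 (never) to 5 (always)

function domainScore(items: number[]): number {
  const mean = items.reduce((sum, x) => sum + x, 0) / items.length;
  return ((mean - 1) / 4) * 100; // rescale a 1-5 mean to 0-100
}

function dependencyProfile(r: Responses): Record<Domain, number> {
  const entries = Object.entries(r) as [Domain, number[]][];
  return Object.fromEntries(
    entries.map(([domain, items]) => [domain, domainScore(items)])
  ) as Record<Domain, number>;
}
```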
AI literacy is the ability to understand what large language models can and cannot do, recognise their failure modes, evaluate their output critically, and use them effectively without overtrusting them. The European Commission's AI literacy framework, formalised in the 2024 EU AI Act, breaks it into four pillars: understanding (how the technology actually works), application (using it well in practice), evaluation (judging output quality and bias), and ethics (privacy, attribution, and societal impact). A literate user can explain why an LLM hallucinates, knows that confidence in tone is not the same as accuracy, and verifies factual claims before relying on them. They also understand the difference between a base model and a fine-tuned chat model, why prompt phrasing changes output quality, and what training data cutoffs mean in practice. Most people score lower than they expect on the application and evaluation pillars, particularly on questions about retrieval-augmented generation, the limits of in-context learning, and how to spot a confident-sounding but unsupported claim. The AI literacy score quiz tests all four pillars and returns a percentile against current population data.
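Retrieval-augmented generation, one of the topics people most often miss, is easier to grasp as a pattern than as a definition: fetch relevant documents first, then force the model to answer from them. A minimal sketch, with retrieve and generate as hypothetical stand-ins for a vector search and an LLM API call:

```typescript
// Minimal sketch of the retrieval-augmented generation pattern.
// `retrieve` and `generate` are hypothetical stand-ins; the point is the
// shape: fetch sources first, then ground the prompt in them instead of
// relying on the model's memorised training data.
async function answerWithRag(
  question: string,
  retrieve: (query: string, k: number) => Promise<string[]>,
  generate: (prompt: string) => Promise<string>
): Promise<string> {
  const passages = await retrieve(question, 3); // top-3 relevant passages
  const prompt = [
    "Answer using ONLY the sources below. Say so if they are insufficient.",
    ...passages.map((p, i) => `Source ${i + 1}: ${p}`),
    `Question: ${question}`,
  ].join("\n\n");
  return generate(prompt);
}
```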
Safety is the wrong frame. Almost every knowledge job will be reshaped by AI in the next five to ten years, but reshaping is not replacement. The OECD's 2023 employment outlook found that jobs at the highest exposure level still mostly survive in some form, with task composition shifting rather than the role disappearing entirely. The roles that genuinely disappear share three traits: the work is fully digital, the inputs and outputs are well-structured, and the cost of error is low enough that human verification adds little value. Tier-one customer service, basic copywriting, and template-based legal drafting fit that pattern. Roles that survive and grow tend to combine AI fluency with one of three durable advantages: physical presence in unstructured environments (skilled trades, hands-on healthcare, field engineering), accountability for high-stakes decisions (senior medicine, law, finance), or genuine relational trust (therapy, leadership, sales of complex products). McKinsey's 2024 follow-up estimated that around 30 percent of hours worked across the US economy could be automated by 2030, though unevenly distributed across roles. The will-AI-take-my-job tool gives you your exposure band, and the AI relationship quiz scores how prepared you personally are to work alongside these tools.
It depends entirely on the task. On the SAT, GPT-4 scored in the 89th percentile of test-takers. On the LSAT, it scored in the 88th. On AP exams across the sciences and humanities, it consistently scored 4 or 5 (the top two bands). On the Uniform Bar Exam, it passed at the 90th percentile of test-takers, a result that prompted serious discussion in legal education. These are real, replicable benchmarks published by OpenAI in the GPT-4 technical report and independently verified by Stanford's HELM project. On the GRE quantitative section, GPT-4 scored at the 80th percentile; on the verbal, the 99th. But on tasks that require physical common sense, real-time reasoning under uncertainty, sustained multi-step planning, or genuine creative origination, the same model performs poorly relative to a competent human professional. The am-I-smarter-than-ChatGPT tool lets you enter your own scores from these standardised tests and shows where you sit relative to GPT-4's published performance, with the honest framing that the model is above the human median on narrow benchmarks but materially less capable on integrated, accountable work.
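The comparison itself is simple arithmetic: your percentile on a given test against GPT-4's published one. A sketch using the percentiles quoted above (the function is illustrative; the tool's real implementation may differ):

```typescript
// GPT-4 percentiles as quoted above from the GPT-4 technical report.
const gpt4Percentile: Record<string, number> = {
  "SAT": 89,
  "LSAT": 88,
  "Uniform Bar Exam": 90,
  "GRE Quantitative": 80,
  "GRE Verbal": 99,
};

function compareToGpt4(test: string, yourPercentile: number): string {
  const model = gpt4Percentile[test];
  if (model === undefined) return `No published GPT-4 percentile for ${test}.`;
  const gap = yourPercentile - model;
  if (gap === 0) return `You and GPT-4 sit at the same ${test} percentile.`;
  return gap > 0
    ? `You outscore GPT-4 on the ${test} by ${gap} percentile points.`
    : `GPT-4 outscores you on the ${test} by ${-gap} percentile points.`;
}

console.log(compareToGpt4("LSAT", 92)); // "You outscore GPT-4 on the LSAT by 4 percentile points."
```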
Roughly 40 percent of US adults reported using a generative AI tool at least once in the previous year, according to the 2024 Pew Research survey, with about 23 percent using one weekly or more. Among knowledge workers under 40, weekly usage climbs above 60 percent. Adoption is not evenly distributed: usage skews younger, more educated, more urban, and disproportionately concentrated in technical, marketing, and professional services occupations. The fastest-growing segment is workplace integration through Microsoft Copilot and Google Gemini, which puts AI in front of users who never visit ChatGPT directly and pushes real penetration well above the chat-app numbers. Sensor Tower data shows ChatGPT alone reached around 200 million weekly active users by mid-2024, and OpenAI's own December 2024 announcement confirmed 300 million weekly actives across the consumer product. Compared with historical adoption curves, the speed is unprecedented: the telephone took 75 years to reach 100 million users, the internet 7 years, and smartphones around 16 years to reach their first billion; ChatGPT crossed 100 million users in 2 months. The AI purity test offers a lighter-touch self-audit of how integrated these tools have become in your week.
Partially, and in ways that surprise people. The empirical picture from the last two years is that generative AI is competent at first drafts, format conversion, stylistic mimicry, and idea generation, but weak at sustained originality, taste, editorial judgement, and genuine surprise. A 2024 study from MIT and Wharton found that mixed teams (AI plus human editor) outperformed both AI-alone and human-alone on creative tasks, but only when the human treated the AI output as a starting point rather than a final product. A separate Harvard Business School study with Boston Consulting Group consultants found a 40 percent quality lift on tasks inside the AI's competence frontier and a measurable quality drop on tasks outside it, where consultants over-trusted plausible-but-wrong output. The work being replaced is the bottom 30 percent of the quality distribution: stock copy, formulaic social posts, generic illustration, template legal drafting. The work appreciating in value is the top 10 percent: distinctive voice, original framing, work an AI would not have produced because no statistical regularity in its training data points there. The technophobia test and the AI relationship quiz probe where you sit on this spectrum.
Find The Norm draws from a small set of primary sources for the AI silo. Occupation-level risk uses the Frey and Osborne Oxford paper (the original 702-occupation automation index) alongside the OECD's task-level employment outlook, which corrects some of Frey and Osborne's overestimates. Productivity and economic-impact estimates come from the McKinsey Global Institute generative AI reports (notably the 2023 'Economic potential of generative AI' study) and the Goldman Sachs 2023 analysis. US adoption rates draw from Pew Research's annual technology surveys. Sensor Tower provides app-level usage analytics for ChatGPT and competing assistants. Direct human-versus-model performance comes from OpenAI's published GPT-4 benchmark scores and Stanford HELM's independent verification. Each tool cites its specific source on the page itself, and updates roll in as new survey waves and benchmark releases come out. The space moves quickly: a number that was current six months ago may already be off by 20 percent, so the footer of every AI tool notes the last verified date.