AI & TECHNOLOGY

Could you function for a day without AI?

Pew Research's 2024 AI use study and MIT Sloan workplace surveys both show a sharp split between people who occasionally use AI tools and a smaller group whose decision-making, drafting, and recall now routinely run through them. The second group typically underestimates how often it reaches for them. Twelve questions on your habits across work, communication, learning, and downtime show where your dependence sits and which specific behaviours are pulling your score.

Source: Pew Research Center AI Use Study 2024 · MIT Sloan Management Review

What is AI addiction and is it real?

The academic conversation about AI-specific addiction is new but accelerating. Montag and Elhai (2023, Journal of Behavioral Addictions) published one of the first peer-reviewed papers specifically addressing AI chatbot over-reliance, noting that the interaction patterns of large language models share several features with previously documented behavioural addiction mechanisms: variable reinforcement schedules (you never know how good the next AI response will be), rapid gratification of curiosity and task-completion needs, social-emotional responsiveness (particularly in companion AI), and the absence of natural stopping cues. The Bergen Social Media Addiction Scale framework — validated across social media, gaming, and online shopping contexts — maps coherently onto AI usage, with the six core dimensions (salience, mood modification, tolerance, withdrawal, conflict, and relapse) all observable in heavy AI users.

The word "addiction" is used loosely in media coverage but clinically requires meaningful life impairment. Most heavy AI users are simply efficient rather than impaired. The more precise clinical framing is problematic use or over-reliance: patterns of AI dependency where the technology begins to substitute for rather than augment capabilities such as critical thinking, memory, writing, or decision-making. Pew Research Center (2024) survey data shows approximately 18% of ChatGPT users describe themselves as feeling "somewhat" or "very" dependent on it — a significant minority of a very large user base. Among daily users, this rises to approximately 28%. Whether that constitutes a problem depends on the impact: dependency that maintains productivity while freeing cognitive bandwidth for higher-order tasks is functionally beneficial; dependency that degrades confidence and capability in the tool's absence is worth monitoring.

Am I too dependent on ChatGPT? Signs to look for

The clearest behavioural indicators of problematic AI dependency, adapted from the Bergen scale framework, are: regularly feeling anxious or frustrated when AI tools are unavailable; using AI for tasks you previously handled confidently without it; noticing that your ability to do those tasks without AI has decreased over time; spending significantly more time with AI tools than originally intended; and continuing heavy use despite recognising that it is reducing your independent capabilities. A single indicator in isolation is not necessarily concerning — the first three months of heavy AI adoption often involve all of these — but a persistent pattern across multiple dimensions, particularly the tolerance and capability-degradation dimensions, warrants reflection.
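
For readers who prefer to see the logic spelled out, here is a minimal sketch of how the indicators above could be mapped onto the Bergen-style dimensions and tallied. The indicator labels, the mapping, and the two-dimension threshold are assumptions made for illustration; they are not this quiz's actual scoring.

# Illustrative sketch only: a toy mapping of the indicators above onto
# Bergen-style dimensions. The dimension names come from the Bergen framework
# described earlier; the indicator labels, the mapping, and the two-dimension
# threshold are assumptions for illustration, not this quiz's actual scoring.

INDICATOR_DIMENSIONS = {
    "anxious or frustrated when AI tools are unavailable": "withdrawal",
    "using AI for tasks previously handled confidently without it": "salience",
    "reduced ability to do those tasks without AI over time": "tolerance",
    "spending far more time with AI tools than intended": "relapse",
    "continuing heavy use despite reduced independent capability": "conflict",
}


def flagged_dimensions(answers: dict) -> set:
    """Dimensions touched by the indicators the respondent endorses."""
    return {INDICATOR_DIMENSIONS[item] for item, endorsed in answers.items() if endorsed}


def worth_reflection(answers: dict, min_dimensions: int = 2) -> bool:
    """A single endorsed indicator is not treated as concerning on its own;
    a pattern spanning several dimensions is (the threshold here is arbitrary)."""
    return len(flagged_dimensions(answers)) >= min_dimensions


example = {item: False for item in INDICATOR_DIMENSIONS}
example["anxious or frustrated when AI tools are unavailable"] = True
example["reduced ability to do those tasks without AI over time"] = True
print(sorted(flagged_dimensions(example)))  # ['tolerance', 'withdrawal']
print(worth_reflection(example))            # True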

The population context: McKinsey Global Institute's 2024 State of AI report found that among knowledge workers who use AI tools regularly, the majority use them in ways that augment rather than replace their capabilities. Approximately 12% of regular AI users in the knowledge worker sample showed patterns consistent with cognitive offloading that went beyond augmentation — relying on AI for tasks they previously found easy, and reporting reduced confidence in performing those tasks without AI assistance. Among younger cohorts (18-24), this figure was approximately 19%, reflecting both heavier usage and earlier adoption of AI tools in their professional development, before the underlying capabilities were fully developed.

How dependent are people becoming on AI?

A 2024 Pew Research Center study found that 35% of US workers use AI tools daily in their jobs. Among knowledge workers, this rises to over 60%. The more concerning finding: 28% of daily AI users report reduced confidence in performing their tasks without AI assistance.

Healthy AI use (as a tool that augments capability) differs from problematic dependency (where removal of the tool causes significant impairment). The risk with deep dependency is skill atrophy. Research on calculator dependency shows that heavy calculator use without conceptual understanding leads to a significant decline in mental arithmetic ability. Similar patterns may emerge with AI writing and reasoning tools.

Automation bias is the tendency to over-rely on automated systems and trust their output even when it contradicts other evidence. Studies with radiologists using AI diagnostic tools found that when the AI made an error, radiologists were more likely to miss it than they were to make the same error working independently. This is a real risk when AI outputs are accepted without critical review.

The concept of AI addiction is emerging but not yet formally recognised in clinical diagnostic manuals. However, the behavioural patterns mirror recognised behavioural addictions: compulsive use despite negative consequences, withdrawal symptoms when access is removed, tolerance, and continued use despite interpersonal conflict. Montag and Elhai (2023) published one of the first academic papers specifically addressing AI chatbot addiction potential, noting that conversational AI systems are designed with engagement-maximising features (instant gratification, personalised responses, emotional validation) that activate the same reward pathways as social media. Character.AI's companion chatbots have generated particular concern, with reports of teenagers spending 6-10 hours daily in AI conversations. Source: Montag and Elhai 2023.

Healthy AI use means using AI to enhance capabilities you already possess. You can write an email without ChatGPT, but use it to polish tone and catch errors. The key markers of healthy use are: you maintain the underlying skill, you verify AI outputs critically, you choose when to use AI rather than defaulting to it, and your workflow functions without AI access. Over-reliance begins when you lose confidence in your own ability to perform tasks you previously handled, when you default to AI without considering whether it is needed, or when you feel anxious or unable to function without AI tools. The transition is gradual: most people do not notice the shift from augmentation to dependence. Source: MIT Sloan Management Review 2024.

The most common indicators of cognitive outsourcing to AI are: you start typing a question into ChatGPT before trying to think of the answer yourself; your writing style has flattened because you rely on AI-generated text; you feel less confident in your professional knowledge despite having the same experience; you struggle to complete familiar tasks when AI tools are temporarily unavailable; you have stopped reading long-form material because AI can summarise it; and decision-making feels harder without AI input for decisions you previously made routinely. If you trust your own judgment less than you did before adopting AI tools, that is a signal worth examining. Source: McKinsey Global Institute 2024.

The goal is not to stop using AI but to restore it to a tool you choose rather than a crutch you need. Start with the AI-free first draft rule: attempt a first pass without AI before using it to refine. This maintains the underlying cognitive skill while still benefiting from AI enhancement. Set specific AI-free time blocks (for example the first hour of the workday) to practise autonomous thinking. For writing tasks, write the first paragraph yourself, then use AI for editing, not generation. For decision-making, write down your initial judgment before consulting AI, then compare. Track which tasks you genuinely cannot do without AI versus which you simply prefer not to. Source: cognitive science literature on skill maintenance.

Many employers are already worried. McKinsey's 2024 survey found that 34% of managers reported concerns about team members losing core competencies due to AI tool reliance. The specific concerns cluster around three areas: junior employee skill development (new graduates who use AI from day one may never develop foundational skills); single point of failure (if AI tools experience outages, AI-dependent teams cannot function); and critical thinking erosion (employees who accept AI outputs without verification make more errors when AI is wrong). Forward-thinking organisations are implementing AI hygiene policies: mandatory AI-free exercises, regular skills assessments without AI assistance, and structured frameworks for when AI should and should not be used. Source: McKinsey 2024.

Framing AI dependency as laziness misunderstands the mechanism. The primary driver is not laziness but anxiety reduction. AI tools provide instant competence: they eliminate the uncertainty, self-doubt, and cognitive effort that accompany independent thinking. For people with perfectionism, imposter syndrome, or performance anxiety, AI offers relief from the discomfort of potentially producing inadequate work. The dependency is reinforced by positive outcomes: AI-assisted work is often faster and more polished than unassisted work, so the short-term reward is real. The long-term cost (skill atrophy, reduced self-efficacy) is invisible in the moment. This pattern mirrors other anxiety-driven avoidance behaviours and has nothing to do with work ethic. Source: cognitive behavioural psychology literature.

Daily AI use is not in itself a sign of addiction or problematic dependency. By 2025, daily AI tool use was routine for tens of millions of knowledge workers, students, and creative professionals worldwide — analogous to daily use of search engines or word processors. The question is not frequency but function: whether the AI use is expanding your capabilities (augmentation) or substituting for capabilities you already have or should be developing (dependency). Using ChatGPT every day to handle time-consuming formatting, research summarisation, and first-draft writing while you focus on higher-order judgment is healthy and efficient. Using it every day because you no longer trust yourself to structure a basic email or make a simple decision without AI input is worth examining. The Bergen-derived framework used in this quiz assesses the functional and emotional dimensions rather than frequency alone, because frequency without context is a poor indicator of problematic use.
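
To make the function-over-frequency point concrete, here is a minimal sketch of a weighting scheme in which daily use by itself contributes little and the functional and emotional items dominate. The item names and weights are invented for illustration; they are not this quiz's actual scoring.

# Illustrative sketch only: two hypothetical respondents use AI equally often,
# but only the second endorses the functional and emotional items, so only the
# second scores high. Item names and weights are invented for this example.

FREQUENCY_WEIGHT = 1  # daily use contributes little on its own
FUNCTIONAL_ITEMS = {
    "reduced confidence performing tasks without AI": 3,
    "anxious or stuck when AI is unavailable": 3,
    "defaults to AI without considering whether it is needed": 2,
}


def dependency_score(uses_daily: bool, endorsed_items: set) -> int:
    score = FREQUENCY_WEIGHT if uses_daily else 0
    score += sum(weight for item, weight in FUNCTIONAL_ITEMS.items() if item in endorsed_items)
    return score


print(dependency_score(True, set()))                  # 1: frequent but augmentative use
print(dependency_score(True, set(FUNCTIONAL_ITEMS)))  # 9: frequency plus functional dependency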

Teenager AI dependency — particularly on companion AI platforms like Character.AI — has received significant coverage since 2024, with documented cases of adolescents forming primary emotional relationships with AI characters and experiencing distress when access is interrupted. Research on adolescent technology dependency (extrapolating from social media dependency literature) consistently shows that adolescents are more vulnerable to behavioural dependency patterns than adults because their prefrontal cortex (the brain region governing impulse control and long-term evaluation) is still developing. Character.AI reported 20 million daily active users in 2024, with a significant proportion under 18. The specific concern is not casual use but emotional dependency: teens who substitute AI companionship for human peer relationships, or who find AI interactions more rewarding and less anxiety-provoking than human ones, may be reinforcing social avoidance in a critical developmental window. As of 2025, Character.AI and similar platforms had introduced supervised modes and time management features for users identified as minors in response to parental and regulatory pressure.


Sources: Pew Research Center "AI at Work" report (2024), MIT Sloan Management Review AI dependency study (2024), McKinsey Global Institute State of AI report (2024), Montag and Elhai, Journal of Behavioral Addictions (2023), Journal of Experimental Psychology automation bias research.

Reviewed by Find The Norm Research Team · Methodology