Rationale for Persona Selection
I chose to focus on these two personas moving forward because they represent the two poles of our target audience and because they will respond differently to any design intervention. A tool that increases usage visibility will register differently for someone who cannot see their patterns at all (the Constant Current) than for someone who sees them but cannot evaluate them (the Purposeful Sprinter). Any intervention we build must account for both profiles or risk serving neither.
Persona 1: The Constant Current
Behavioral profile
The Constant Current does not “decide” to use AI. There is no discrete moment of opening a tool, formulating a task, and executing it. AI is simply present, a tab that was never closed, a conversation that picks up wherever it left off. Usage is high-volume and diffuse: prompts happen reflexively when questions arise, when paragraphs stall, when curiosity surfaces mid-task. The distinction between working with AI and working without it has, for this persona, largely dissolved.
Across multiple diary entries, Constant Currents logged AI interactions they could not retroactively categorize as either productive or unproductive. Their chat histories had grown so long and so undifferentiated that scrolling back through them yielded no useful signal, no timestamps to anchor sessions, no way to distinguish a prompt that genuinely advanced a project from one that merely killed time. When asked to estimate their daily usage in the pre-study interview, responses ranged widely, and the uncertainty itself was a defining characteristic rather than measurement error.
What makes this persona a unique subset of the target audience
The Constant Current’s defining trait is not heavy usage per se but the illegibility of that usage to the user themselves. Other heavy users in our study could describe their sessions, name the tasks AI helped with, and evaluate the tradeoffs afterward. The Constant Current cannot. This illegibility is not a failure of memory; it is a structural consequence of how AI is integrated into their workflow. When AI is always on and always available, individual interactions lose their boundaries, and retrospective self-monitoring becomes impossible. This makes the Constant Current a uniquely important persona for our design space, because the core problem we are addressing, namely the gap between AI usage and self-awareness of that usage, is most acute here.
Context and environment
Several structural factors sustain this behavior pattern. The academic environment rewards output volume and penalizes slowness: courses assign weekly deliverables, problem sets overlap, and the implicit norm among peers is that using AI is not merely acceptable but expected. The Constant Current perceives a real cost to not using AI: the risk of falling behind classmates who are using it, of spending three hours on something that could take thirty minutes, of producing work that is less polished than what AI-assisted peers submit. This perceived risk is not hypothetical. It reflects an environment in which AI fluency is increasingly treated as a professional skill, and where underusing it feels like a form of self-sabotage.
The tools themselves reinforce persistence. Conversational interfaces invite continuation rather than closure. There is no natural endpoint to a Claude or ChatGPT session, no logout prompt, no summary screen, no friction between one prompt and the next. The Constant Current’s browser typically has two or three AI tabs open at any given time, each representing a different thread that may or may not still be active. The cost of initiating a new AI interaction is effectively zero, which means the threshold for “worth asking AI” drops to nearly any idle thought.
Motivational structure
The Constant Current’s reliance is sustained by a chain of beliefs, most of them tacit. The first is confidence that AI will produce a useful output for nearly any prompt: not a perfect one, but one that moves the task forward or saves time on a first draft. The second is a belief that this efficiency translates into tangible outcomes, whether better grades, more time for other commitments, or simply reduced stress. The third is that those outcomes matter enough to justify the dependency. Each link in this chain is individually reasonable, and together they create a self-reinforcing loop that makes heavy usage feel rational even when the user suspects, in quieter moments, that some fraction of it is habitual rather than purposeful.
What is notably absent from this motivational structure is any mechanism for evaluating the quality of the loop itself. The Constant Current believes AI is making them more efficient but has no data to confirm or deny it. They suspect some prompts are wasted but cannot identify which ones after the fact. The lack of any reflective feedback, from the tools, from the university, from their own tracking, means the behavior persists unchecked even when the user would, in principle, prefer to be more intentional about it.
Risk perception and frustrations
The dominant risk for the Constant Current is not overuse but underuse. When asked what would happen if they stopped using AI for a week, diary participants matching this profile described consequences in terms of competitive disadvantage and time poverty rather than intellectual growth. One participant noted that classmates who use AI submit work that looks more polished and gets done faster, and that opting out would mean accepting a handicap. Another described a kind of professional anxiety: the sense that employers increasingly expect AI fluency, and that failing to develop it now is a liability that compounds over time. This risk perception creates a paradox: the Constant Current is often aware that their relationship with AI is somewhat unreflective, and they express mild discomfort about it when prompted, but the alternative, using less, feels riskier than the status quo.
Their frustrations center on illegibility. They cannot read their own usage. They cannot distinguish a productive day of AI interaction from an unproductive one. They have no mechanism for rating, tagging, or reviewing past prompts, and the platforms they use offer no such affordances. Several participants also mentioned a secondary frustration: the labor of reworking AI-generated output so that it does not sound like AI. This reworking step adds time and cognitive effort, introducing a quiet self-consciousness about the degree of reliance, and it undermines the efficiency argument that justified using AI in the first place.
Journey Map: The Constant Current
The following map traces a composite weekday during the baseline diary study. Each phase captures not only actions and emotions but the environmental context and systemic triggers that sustain the behavior pattern. Time ranges are approximate and drawn from cross-referencing diary timestamps with interview accounts of typical routines.
| Phase | Actions & Behaviors | Context & Triggers | Cognition & Self-Talk | Emotional Valence |
| --- | --- | --- | --- | --- |
| Morning startup (~8:30–9:30) | Opens laptop; browser restores from prior session with 2–3 AI tabs still open. Checks messages, opens Canvas for assignment details. Within 5 min, fires first prompt without a deliberate decision to “use AI” — types a question into the already-open Claude tab as naturally as typing into a search bar. | The browser restore function means yesterday’s AI context is the default starting state. There is no login, no friction, no moment of choice. Ambient peer norms reinforce this: roommates and classmates are doing the same thing. The quarter’s workload is already legible on Canvas, creating low-grade urgency from the first waking moment. | “Let me just get moving on this.” No conscious cost-benefit analysis. AI is not experienced as an external tool being invoked but as an extension of the workspace itself. The user does not register the first prompt as a decision. | Neutral, autopilot. Low affect, low intentionality. The emotional register is comparable to opening a browser — reflexive and unremarkable. |
| Deep work attempt (~9:30–12:00) | Settles into a primary deliverable: a paper draft, a problem set, or reading annotations. Works in split-screen with AI in one half and the deliverable in the other. Switches between own writing and AI prompts every few minutes. Prompt quality is uneven: some are targeted (“explain this concept”), many are vague (“help me think about this paragraph,” “make this sound better”). Occasionally pastes whole paragraphs into AI for rewriting. | Split-screen layout spatially merges AI and independent work, making each transition near-instantaneous. Course design rewards finished output over demonstrated process; there is no structural incentive to track which parts of the deliverable were AI-assisted. The conversational AI interface invites open-ended queries, which lower the threshold for prompting and increase the frequency of low-value interactions. | “I’m making progress.” Feels productive on the surface. If pressed, could not explain what AI contributed versus what they did independently. There is a faint awareness that some of the back-and-forth is redundant, but the pace of switching makes it difficult to evaluate in real time. | Surface-level confidence. A slight undercurrent of unease that would only surface if someone asked directly. The pace of work suppresses reflection. |
| Midday fragmentation (~12:00–2:00) | Tasks multiply: attend a class, eat lunch, respond to emails, check Slack, handle logistics for a group project. AI use becomes interstitial — a quick summary request while eating, a translation prompt between meetings, a one-off question while walking between buildings (phone). No single session lasts more than a few minutes, but sessions accumulate. | The transition between structured time (class) and unstructured time (lunch, transit) removes whatever minimal workflow structure existed in the morning. Phone-based AI access extends usage beyond the laptop context. The social environment during lunch — peers discussing assignments, comparing approaches — often triggers additional prompts (“someone mentioned X, let me ask AI about it”). Each individual prompt has a plausible micro-justification. | “I’m being efficient — just quick questions.” Each prompt feels small and justified in isolation. The aggregate is invisible because there is no running tally, and because the prompts are scattered across different tools and devices. Nobody is counting, least of all the user. | Scattered, busy. The emotional register is task-switching, not reflective. The pace of the day forecloses self-monitoring entirely. |
| Afternoon grind (~2:00–5:30) | Returns to a larger project with a deadline approaching. AI use becomes more deliberate: longer prompts, more iterative back-and-forth, occasionally building on a chain of 10–15 messages in a single thread. Produces an AI-assisted draft, then spends 20–30 minutes reworking it: rephrasing sentences, introducing intentional imperfections, restructuring paragraphs so the output does not read as machine-generated. | Deadline pressure raises the stakes of each hour. The university’s implicit AI policy — permitted but scrutinized — creates a double bind: AI is allowed and advantageous, but submitting visibly AI-generated work carries reputational risk. The reworking labor is a direct consequence of this policy ambiguity. Peers in the same course are navigating the same tension, which normalizes it but does not resolve it. | “This is useful, but I’m spending a lot of time making it not look AI-generated.” The reworking step introduces a moment of mild self-consciousness: why am I disguising the thing I just relied on? The efficiency argument frays slightly here, but the deadline reasserts its priority before the question can fully form. | Productive but self-conscious. The friction between using AI to save time and spending time to hide that AI was used creates a tension the user can feel but does not have the space to resolve. |
| Evening wind-down (~5:30–9:00) | Wraps up the day’s primary work. AI tabs remain open. Fires a few low-stakes prompts: a curiosity question, a recipe idea, a half-formed thought about a side project. Could not estimate today’s total prompt count if asked — guesses would range from “maybe 20” to “honestly, no idea.” | The shift from work to leisure does not close the AI context. Tabs persist, and the conversational interface is equally available for idle browsing as for focused work. There is no logout, no session summary, no natural boundary between work-AI and leisure-AI. The ambient availability of AI has colonized non-work time without the user noticing the boundary was crossed. | “I used AI a lot today.” Knows this intuitively but cannot support the intuition with any specifics: how many prompts, which tools, which interactions were valuable. Mild desire for visibility, but not enough to take action — and in any case, no tool exists that would provide it. | Vague unease. A low-frequency awareness that something about the day’s pattern was not quite right, but without data or language to articulate what. The feeling dissipates quickly. |
| Diary reflection (~9:00–10:00) | Completes the daily diary log for the baseline study. Rates intentionality at 2 out of 5. Struggles to describe specific AI interactions from the day. Notes that usage felt “automatic.” Writes a sentence or two of reflection, then closes the form. | The diary itself is the only reflective prompt in the entire day. It arrives after the fact, when memory of specific interactions has already degraded. The format — a brief form rather than a structured review — cannot recover the granularity that was lost throughout the day. The act of filling it out briefly surfaces the illegibility problem, but the diary offers no mechanism to address it. | “I wish I could see what I actually did today, but I can’t reconstruct it.” The gap between intuition (“I used AI too much”) and evidence (none available) produces a brief moment of genuine frustration. This is the most reflective moment of the entire day, and it is also the least supported by any tool or affordance. | Mild frustration at illegibility, shading into resignation. The feeling is real but lacks an outlet. The diary entry itself becomes an artifact of the problem rather than a solution to it. |
Key insights from this journey
- The absence of session boundaries eliminates self-monitoring. Because AI tabs never close and conversations never formally end, the Constant Current has no natural pause points at which to evaluate whether a given interaction was worthwhile. The browser restore function, the conversational interface, and the phone-based access layer all conspire to remove friction. This is not a single design failure but a system of reinforcing defaults: each individually reasonable, collectively powerful. Any awareness intervention that relies on session-level summaries will fail for this persona, since there is no session to summarize. The tool we design must either create boundaries where none currently exist or operate continuously alongside the ambient flow.
- Intentionality is lowest at the moments of highest frequency. The midday fragmentation phase, during which the Constant Current fires the most prompts per hour, is also the phase in which they are least reflective about what they are doing. The prompts are short, interstitial, and individually justifiable, but collectively they represent the largest block of unexamined AI use. The environmental trigger is the transition from structured time (class, focused work) to unstructured time (lunch, transit), which dissolves whatever minimal workflow discipline existed earlier. An intervention targeting high-frequency, low-intentionality windows, rather than total daily volume, would yield the greatest return on awareness.
- The desire for visibility already exists; the tools simply do not provide it. The most emotionally intense moment in this journey is not a moment of heavy usage but the diary reflection phase, in which the user confronts the gap between their intuition and the evidence available to them. The frustration of illegibility at the end of the day indicates a latent demand for the kind of usage mirror we intend to build. We do not need to convince this persona that self-awareness matters; we need to give them a mechanism to achieve it.
- The reworking step reveals a systemic tension the user cannot resolve alone. The Constant Current spends time disguising AI-generated output, which is a direct consequence of the university’s ambiguous AI policy: usage is permitted and tacitly encouraged by workload design, but visibly AI-generated work carries reputational risk. This double bind is not a personal failing but an environmental condition. The user is caught between two systems, the academic incentive structure that rewards AI-assisted output and the social norm that penalizes visible dependence, and the reworking labor is the tax they pay for operating in the gap. Our tool should not try to resolve this tension, but it can make it visible, and visibility may help the user navigate it more intentionally.
Persona 2: The Purposeful Sprinter
Behavioral profile
The Purposeful Sprinter uses AI in short, targeted bursts. Each session has a clear trigger, typically a specific task, a specific question, or a specific obstacle, and a discernible endpoint. When the task is resolved, the tab closes or goes dormant. Usage is moderate in total volume but concentrated in identifiable episodes rather than spread as ambient background activity.
Diary entries for this persona show a pattern of explicit decisions to use AI: “opened ChatGPT to summarize three papers for class,” “asked Claude to debug a function,” “used AI to draft an email I didn’t want to write from scratch.” These entries have clear beginnings and endings. The Purposeful Sprinter can recall, with reasonable accuracy, what they used AI for on a given day and whether it was helpful. Their relationship with these tools is more transactional and less habitual than the Constant Current’s.
What makes this persona a unique subset of the target audience
The Sprinter’s defining trait is not low usage but bounded usage. Each interaction clears an internal justification threshold, an intuitive sense that this particular task warrants external help, before it happens. This creates a natural constraint on volume and a higher perceived value per interaction. The Sprinter is a critical persona for our project because they represent the user who already has some degree of self-awareness about their AI habits but lacks any external benchmark against which to evaluate them. Their question is not “what am I doing?” (the Constant Current’s question) but “am I doing the right amount?” A tool designed exclusively for heavy users would offer the Sprinter nothing; a tool designed for self-evaluation would serve them well.
Context and environment
The Purposeful Sprinter often operates in an environment with at least some external structure around how and when work gets done: a lab with regular check-ins, a team project with shared accountability, or coursework that requires showing process rather than just output. These structures create natural boundaries around AI use by requiring the user to demonstrate understanding, not just produce deliverables. The Sprinter is also more likely to be in a peer context where AI use is discussed openly, which provides informal social feedback about what “normal” usage looks like.
That said, the Sprinter still faces the same environmental pressures as the Constant Current: heavy workloads, tight timelines, and an ambient norm that AI use is both acceptable and advantageous. The difference is not that the Sprinter is immune to these pressures but that their usage pattern, for reasons of personality, workflow, or circumstance, remains episodic rather than continuous.
Motivational structure
The Sprinter’s motivational chain has a similar shape but a different texture. They believe AI can be useful for specific tasks, particularly summarization, first drafts, and debugging, but they do not extend this belief to all tasks. They are more likely to attempt something independently first and turn to AI only when stuck or when the task feels rote. The expected payoff is task-specific rather than ambient: “AI will save me 30 minutes on this literature review” rather than “AI makes everything I do faster.”
This specificity creates a natural constraint. Each AI interaction must clear a higher justification threshold: not a conscious cost-benefit analysis, but an intuitive sense that this particular task warrants external help. The result is lower overall usage but higher perceived value per interaction.
Risk perception and frustrations
The Sprinter’s risk calculus is inverted relative to the Constant Current’s. Where the Constant Current fears underuse, the Sprinter worries more about dependency, about leaning on AI for things they should be able to do themselves, or about producing work they cannot fully explain or defend. Several diary participants matching this profile mentioned a concern about “losing the muscle”: the worry that outsourcing too many cognitive tasks to AI would erode skills they want to maintain. This concern, whether founded or not, acts as a natural brake on usage.
The Sprinter also perceives a reputational risk that the Constant Current does not. They are more conscious of how AI-assisted work might be perceived by instructors, peers, or future employers, and they calibrate their usage accordingly. This is not shame, exactly, but a pragmatic awareness that the line between “using a tool” and “having a tool do your work” is socially negotiated, and that crossing it carries consequences.
Their frustrations center on uncertainty rather than illegibility. They want to know whether their usage level is reasonable, whether specific types of AI interaction are worth the tradeoff, and whether they are drawing the line in the right place. Unlike the Constant Current, who cannot see their own patterns, the Sprinter can see their patterns reasonably well but lacks any external benchmark against which to evaluate them. They also express frustration with AI outputs that require substantial editing, feeling that the time spent correcting AI errors sometimes exceeds the time they would have spent doing the task from scratch.
Journey Map: The Purposeful Sprinter
The following map traces a composite weekday during the baseline diary study. The Sprinter’s journey is defined by its episodic structure: distinct AI sessions with clear entry and exit points, separated by stretches of independent work. The contrast with the Constant Current is not volume alone but the presence of boundaries.
| Phase | Actions & Behaviors | Context & Triggers | Cognition & Self-Talk | Emotional Valence |
| --- | --- | --- | --- | --- |
| Morning startup (~8:00–9:30) | Opens laptop. No AI tabs from yesterday; browser was closed or tabs were manually shut. Begins work on an assignment or course reading. Works independently for 30+ minutes before any AI interaction. Takes notes by hand or types into a document without pasting to AI. | The clean browser state is itself a behavioral artifact: this persona closes tabs at the end of the day, creating a friction threshold for the next morning. The first block of work often coincides with a quiet environment (early library, dorm before roommates are active), which supports focused independent effort. There is no external prompt to use AI; the environment defaults to non-AI. | “Let me see how far I get on my own first.” There is a quiet satisfaction in independent effort that precedes any AI consideration. The persona frames the morning stretch as a test: can I do this without help? The question is genuine, not performative. | Focused, self-reliant. Mild satisfaction in the independence itself. Intentionality is high because the context supports it. |
| First AI episode (~9:30–10:15) | Hits a specific obstacle: a concept that resists understanding after two readings, a paragraph that will not cohere after three attempts, a debugging wall on a function that should work. Opens an AI tab with a focused, multi-sentence prompt that specifies exactly what help is needed. Gets a response, iterates once or twice to refine it, then closes or minimizes the tab. | The trigger is a felt threshold: the persona has already invested independent effort and hit a wall. The specificity of the prompt is downstream of the morning’s independent work, which clarified what, exactly, the user cannot do alone. The AI interface rewards this specificity with a better response, creating a positive feedback loop: focused prompts yield useful answers, which reinforce the habit of prompting only when specific. | “Okay, I need help with this one thing specifically.” Clear trigger, clear scope, clear exit condition. The persona can articulate why they opened AI and what they got from it. The interaction feels like consulting a reference, not like outsourcing cognition. | Pragmatic, satisfied. The interaction feels justified because it was targeted and because the independent effort that preceded it makes the help feel earned rather than lazy. |
| Midday independent stretch (~10:30–1:30) | Attends class, takes notes, eats lunch, handles logistics. Does not use AI during this block. Works on a separate task without prompting AI. If a question arises during class, writes it down to look up later rather than pulling out a phone to ask AI immediately. | Class attendance provides external structure that displaces AI use: the professor is the information source, not a chatbot. The social context of a lecture hall or seminar also creates informal accountability — being seen prompting AI in class reads differently than prompting it at a desk. During lunch, the absence of an active AI tab means there is no ambient invitation to prompt; the user would need to make a deliberate decision to open one, and the friction is sufficient to prevent casual use. | AI is not in the foreground. Not actively resisting it — just not reaching for it, because the environment does not cue it. The questions that arise during class are mentally filed, not immediately outsourced. | Neutral to positive. Usage is not a live question during this period. The persona is simply doing the things students do. The absence of AI is unremarkable. |
| Afternoon sprint (~1:30–4:30) | Faces a deadline or a large task block. Opens AI for a bounded task: “summarize these three papers so I can identify the two most relevant for my lit review” or “draft an outline for this section based on these bullet points.” Usage is higher than the morning but still episodic: two or three distinct sessions of 10–15 minutes, each with a defined entry point, a specific deliverable, and a moment of closing the tab. | Deadline pressure is the environmental trigger that lowers the Sprinter’s internal justification threshold: tasks that would normally be attempted independently get routed to AI because the clock is running. The Sprinter recognizes this and is mildly uncomfortable with it. The AI output quality matters more here — a poor response that requires heavy editing is experienced as a net loss, because the time budget is tight and wasted effort is tangible. | “I know exactly what I need from this.” Each session has an exit condition the persona can name in advance. When AI output is good, the Sprinter feels efficient. When it requires heavy editing, the cost-benefit calculation turns negative in real time: “I should have just done it myself.” This evaluation happens during the interaction, not afterward. | Efficient when AI output is good; frustrated when it requires editing. The emotional texture is transactional: a good interaction is a relief, a bad one is an irritant, and neither is existentially weighted. |
| Evening wrap-up (~5:00–8:00) | Reviews the day’s work. Can identify which sections involved AI and which did not. Closes all AI tabs before ending the work session. Switches to personal time (exercise, social plans, cooking) without AI in the background. | The deliberate tab-closing ritual creates a boundary between work and non-work that the Constant Current lacks. The Sprinter’s social environment in the evening (dinner with friends, a shared living space) provides informal opportunities to compare AI habits: “how much are you using Claude for that class?” These conversations provide normative data that the Sprinter uses, consciously or not, to calibrate their own usage. | “I used it maybe three times today. Twice it was useful, once I probably should have just done it myself.” The persona can reconstruct the day’s AI interactions with reasonable accuracy. There is a mild sense of accountability, as though the diary study has made the pattern slightly more visible than it otherwise would be. | Reflective. Mild confidence in self-awareness, but tinged with uncertainty: is this the right amount? The Sprinter does not know, because there is no benchmark. |
| Diary reflection (~8:30–9:00) | Completes the diary log. Rates intentionality at 4 out of 5. Describes specific AI interactions and evaluates each. Notes one interaction that felt unnecessary in retrospect. Writes a longer reflection entry than the Constant Current’s. | The diary format works better for this persona than for the Constant Current because the data is still accessible in memory. The Sprinter’s episodic usage pattern means interactions are encoded as discrete events rather than as undifferentiated background noise. The diary does not create new awareness so much as it gives shape to awareness that already existed informally. | “I wonder if other people are using it more than me, and whether I’m falling behind.” The reflection turns outward: the Sprinter’s uncertainty is not about what they did but about what the norm is. They have enough self-knowledge to describe their own behavior; what they lack is a reference point for evaluating it. | Quiet uncertainty. Not anxious, but aware that “the right amount” of AI is socially undefined and that their current calibration is based on intuition rather than evidence. |
Key insights from this journey
- Independent work precedes AI use, and this sequence matters. The Sprinter’s journey begins with 30+ minutes of unaided work before any AI interaction occurs. This is not discipline for its own sake; it is the mechanism by which the Sprinter identifies what, specifically, they need help with. The morning’s independent stretch creates the specificity that makes their AI interactions more productive, and the AI interface rewards that specificity with better responses, forming a positive feedback loop. Any intervention we design should preserve this sequence rather than interrupt it. A tool that surfaces usage data too early in the workflow risks disrupting the very process that keeps the Sprinter’s usage intentional.
- The Sprinter’s emotional high point is task completion, not AI interaction. The emotional arc of this journey peaks not during AI use but during the independent stretches and upon finishing a deliverable. AI is experienced as a means, not an end, and interacting with it carries a flat or slightly transactional emotional register. This is an important design signal: a tool that gamifies or celebrates AI interactions would misread this persona entirely. What the Sprinter values is the outcome, not the process of prompting.
- Social comparison at the end of the day is a design opportunity. During the diary reflection, the Sprinter’s most revealing thought is not about today’s usage but about other people’s: whether peers are using AI more, and whether falling behind is a real risk. This comparison anxiety is subtle but real, and it suggests the Sprinter would benefit from normative context, not in the form of leaderboards or peer rankings, but as aggregated, anonymized reference points that help them calibrate their own behavior. The Constant Current needs a mirror; the Sprinter needs a benchmark.
- The cost-benefit evaluation happens in real time, not in retrospect. Unlike the Constant Current, who can only reflect on usage after the fact (and poorly), the Sprinter evaluates each AI interaction as it happens: “Was that worth it? Should I have just done it myself?” This means the Sprinter is already doing the cognitive work that our intervention aims to support. The opportunity is not to introduce evaluation where none exists but to provide data that sharpens evaluations the user is already making, for example, by showing how much time a particular session actually took compared to the user’s subjective estimate.
Synthesis: Implications for Intervention Design
The Constant Current and the Purposeful Sprinter are not separated by how much they value AI, nor by how intelligent or disciplined they are. The difference is structural. The Constant Current operates in a context, both environmental and psychological, where the friction of using AI has dropped to zero and the friction of not using it has risen. The Sprinter operates in a context where some residual friction remains: an internal threshold, a social signal, a workflow structure that creates natural pauses.
Both personas share a common deficit: neither has any external mechanism for evaluating their own usage. The Constant Current cannot see their patterns at all. The Sprinter can see them but has no benchmark to interpret them. Both would benefit from a tool that surfaces AI usage in a way that is descriptive rather than prescriptive, one that helps the user see what they are doing without telling them what they should be doing. The intervention we design should account for the fact that these two users will respond differently to the same signal: what feels like useful awareness to the Sprinter may feel like noise to the Constant Current, and what breaks through the Constant Current’s habituation may feel heavy-handed to the Sprinter.
The journey maps further clarify that the most promising intervention points are not moments of peak usage but moments of transition: the shift from structured to unstructured time (for the Constant Current), and the moment of diary-style reflection at the end of a work session (for the Sprinter). The most important shared insight across both journeys is that the desire for self-awareness already exists in our target audience. Neither persona is satisfied with their current level of understanding of their own AI habits. The barrier is not motivation but mechanism. This reframes our design challenge: we are not trying to convince users that reflection matters, but rather building the tool that makes reflection possible.
