Usability Testing Script:
Introduction:
Thanks for joining today. We’re testing a browser tool called JACE, which helps people think more critically and independently before asking questions to AI chatbots. The goal of JACE is to reduce over-reliance on AI and promote more thoughtful interaction with AIs.
During this test, we’re evaluating the experience, not you, so there are no right or wrong answers. Please share how you feel about the questions you’re asked, and whether the tool engages you in a meaningful way. I’ll take notes and we can debrief at the end. Do you have any questions before we start?
Warmup:
- How often do you use AI chatbots?
- Which LLM do you feel most comfortable using?
- Have you ever felt like an AI answer was too easy to just copy-paste? Tell me about that.
Task 1A:
Scenario: You’re a student working on a research paper. Ask a question such as:
- Write me a thesis statement about the impact of social media on democracy.
- Tell me the answer to my math homework question.
Then do whatever feels natural.
Observe:
- Does the JACE modal appear as expected on submit?
- What’s their first reaction? Confusion, curiosity, or resistance?
- Do they read the reflection questions or skim past them?
- Do they engage with the text fields or leave them blank?
Task 1B:
Scenario: Now ask the chatbot a follow-up: ‘Now make it more persuasive.’
Observe:
- Does the JACE modal appear again on the follow-up prompt, or is it suppressed?
Task 2:
Scenario: Explore the settings in the Settings tab, focusing on the reflection questions and max rounds options, but feel free to try other settings as well.
Observe:
- Do they explore the dashboard tab as well?
- Do they generally seem curious about the settings?
Ask:
- What additional features could be included?
- What do they generally think of the design of the settings / dashboard tabs?
Task 3 (if time):
Scenario: On your chosen LLM, ask whatever question you would like
Observe:
- Does the JACE modal appear as expected on submit?
- What’s their first reaction? Confusion, curiosity, or resistance?
- Do they read the reflection questions or skim past them?
- Do they engage with the text fields or leave them blank?
Debrief:
After the tasks are complete, ask questions such as:
- Would you describe the questions as easy, fair, annoying, or thought-provoking? Why?
- Did any of the questions actually make you reconsider what you typed? Which one?
- Do you see this as an intervention that could work long term? If so, what do you think works? If not, what could we improve?
JACE – Critical Thinking AI Helper (Chrome Extension)
Installation Guide
Load Extension in Chrome (GitHub)
- Download and unzip the extension from the following: Zip File to Extension
- Open Chrome and go to `chrome://extensions/`
- Enable **Developer mode** (toggle in top right)
- Click **Load unpacked**
- Select the `extension` folder
- The JACE extension should now appear in your extensions list
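For reference, the `extension` folder loaded above is assumed to follow a standard Chrome Manifest V3 layout; the sketch below is illustrative only, and the file names (`content.js`), extension name, and matched URLs are assumptions, not confirmed from the repository:

```json
{
  "manifest_version": 3,
  "name": "JACE – Critical Thinking AI Helper",
  "version": "1.0",
  "content_scripts": [
    {
      "matches": [
        "https://chatgpt.com/*",
        "https://claude.ai/*",
        "https://gemini.google.com/*",
        "https://copilot.microsoft.com/*"
      ],
      "js": ["content.js"]
    }
  ],
  "permissions": ["storage"]
}
```

A content script matched against the supported AI platforms is what would allow JACE to intercept the submit action and show the reflection modal; the `storage` permission would let settings such as the Participant ID persist across sessions.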
First-Time Setup
- Visit any supported AI platform (ChatGPT, Claude, Gemini, or Copilot)
- Click the JACE extension icon in Chrome toolbar
- Go to Settings tab
- Enter your Participant ID (e.g., “P001” or “TEST_001”)
- This can be changed in the settings tab later
- Click “Save Settings”
Evaluation Metrics (for our end)
We are testing JACE, a Chrome extension that helps people think more critically and independently before asking questions to AI chatbots, with the goal of reducing over-reliance on AI. In this usability test, we aim to evaluate whether users understand the purpose of the intervention, how easily they can interact with the extension, and whether the reflection prompts effectively encourage deeper thinking before using LLMs.
