Phase 2: Assumption Testing – Part 2: Testing with Prototypes
User Feedback Google Forms:
https://docs.google.com/forms/d/e/1FAIpQLSfQk8Ue6JRKAalHaMqVPpM2VGxUf8hiHrEteNg7UlcPRnxQiw/viewform?usp=header
Transcripts:
LinguaLeap_Ana_Cata_Transcript
Emily & Alex Conversation transcript
GMT20251107-193912 Recording
Call with Ari Cargill-interview 2
Spanish Conversation AI Lingualeap
Assumption Cards Tested in the User Interviews:
Desirability: Students truly value in-person peer conversations and are willing to meet regularly for language practice.
→ If false, there’s no demand advantage over gamified solo apps like Duolingo.
Viability: Students (or universities) will pay ~$20/month for AI-powered conversational feedback that feels rigorous and useful.
→ If false, the business model fails to sustain growth or justify Series B targets.
Feasibility: AI can accurately understand beginner speech (≥ 70% accuracy) and provide constructive, correct feedback (a simple scoring sketch follows this list).
→ If false, the core product collapses into a basic matching tool without true learning value.
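
To make the ≥ 70% feasibility bar concrete, here is a minimal sketch, in Python, of how word-level accuracy of an AI transcript could be scored against a reference transcription of beginner speech. This is an illustration only: the scoring function and the sample Spanish sentences are assumptions, not part of our actual test setup.

```python
# Minimal sketch (not our actual evaluation harness) for checking the
# feasibility threshold above: word-level accuracy of an AI transcript
# against a reference transcription. Sample sentences are hypothetical.

def word_accuracy(reference: str, hypothesis: str) -> float:
    """Return 1 - word error rate, using word-level edit distance."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # Standard dynamic-programming edit distance over word tokens.
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i
    for j in range(len(hyp) + 1):
        dist[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,         # deletion
                             dist[i][j - 1] + 1,         # insertion
                             dist[i - 1][j - 1] + cost)  # substitution
    return 1 - dist[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical example: beginner Spanish utterance vs. AI transcript.
reference = "quiero practicar mi español contigo"
hypothesis = "quiero practicar mi espanol contigo"
acc = word_accuracy(reference, hypothesis)
print(f"Word accuracy: {acc:.0%} -> meets 70% bar: {acc >= 0.70}")
```

A plain edit distance keeps the check lightweight; a real evaluation would average this score across many recorded utterances per learner.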
User Interview Takeaways
Across our ~10 user tests, we validated three core assumptions behind LinguaLeap. First, Ishita’s French-language trials showed that the AI not only recognized and transcribed French accurately, but often resolved unclear pronunciation into the intended words, suggesting a clear opportunity to visually flag mispronounced words so learners understand where they struggled. Second, in the mediated Spanish conversation test that Luke and I ran, both students reported that light-touch guidance (prompting, structure, and gentle feedback) made the session far more productive and less awkward than practicing alone, reinforcing that our “AI coach” should scaffold the conversation rather than dominate it. Finally, my additional three interviews confirmed strong desirability for a product that provides low-pressure speaking practice, better-matched partners, and feedback that feels supportive rather than evaluative. Together, these findings confirm that learners want structured, confidence-building conversation practice, and that our concept meaningfully solves a real pain point while pointing toward specific design refinements.
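
As a quick illustration of the word-flagging refinement mentioned above, the sketch below compares the sentence a learner was asked to say with the raw AI transcript and marks words that came out differently. The function, the sample French text, and the naive position-by-position alignment are all hypothetical simplifications, not behavior of our prototype.

```python
# Hypothetical illustration of the "flag mispronounced words" refinement:
# compare the sentence a learner was asked to say with the raw AI transcript
# and mark words the AI heard differently. Names and samples are assumptions.

def flag_mismatches(target: str, transcript: str) -> str:
    """Mark words in the target that the transcript rendered differently."""
    target_words = target.split()
    heard_words = transcript.split()
    flagged = []
    for i, word in enumerate(target_words):
        heard = heard_words[i] if i < len(heard_words) else ""
        # Wrap mismatched words so the UI can highlight them for the learner.
        flagged.append(word if word.lower() == heard.lower() else f"**{word}**")
    return " ".join(flagged)

print(flag_mismatches("je voudrais un café", "je voudrais un gâteau"))
# -> "je voudrais un **café**"
```

A production version would need proper alignment (insertions and deletions shift word positions), but this shows the kind of per-word highlighting learners asked for.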
Overall Takeaways
When we ran our assumption tests, we found encouraging validation across all three areas: desirability, viability, and feasibility.

On desirability, students kept coming back to wanting real conversations, not just apps or flashcards. They said classes and tools like Duolingo or ChatGPT help with vocabulary, but they don’t actually make you feel confident speaking. Every student we talked to felt that practicing with peers, especially in a structured, low-stakes setting, was the best way to actually improve. That tells us our core idea of pairing students for guided, in-person conversations really resonates.

On viability, students linked language confidence directly to getting jobs or internships, which makes them more willing to pay. They already spend on classes or tutoring, and they were comfortable with the idea of paying around $10 a month for something that actually helps them speak better and track their progress.

Finally, on feasibility, Ishita’s interviews with three beginner French learners were surprisingly positive: the AI understood their speech and intent almost perfectly, even with accents and hesitations. It did over-correct grammar sometimes, but that is something we can fix by building two coordinated models, one focused on clean transcription and another on identifying and explaining mistakes; a sketch of that split follows below. That is a huge technical win for us, because it shows the AI can already handle the hardest part. Altogether, the tests gave us far more confidence that LinguaLeap is something students truly want, will pay for, and that we can actually build.
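
To show what the two-coordinated-models idea above could look like in practice, here is a minimal Python sketch of the pipeline: one model that only transcribes, and a second that only identifies and explains mistakes. The function names and canned outputs are hypothetical placeholders, not an API we have built.

```python
# Minimal sketch (assumed design, not our built prototype) of the
# "two coordinated models" idea: model 1 transcribes faithfully, model 2
# identifies and explains mistakes instead of over-correcting them.
# transcribe_audio() and explain_errors() are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Feedback:
    transcript: str          # what the learner actually said, verbatim
    corrections: list[str]   # short, supportive notes rather than a rewrite

def transcribe_audio(audio_path: str) -> str:
    """Model 1: clean transcription only, no grammar fixing (placeholder)."""
    return "yo fui al cine ayer con mis amigo"  # canned hypothetical output

def explain_errors(transcript: str) -> list[str]:
    """Model 2: find and explain mistakes in the transcript (placeholder)."""
    return ["'mis amigo' mixes plural and singular; try 'mis amigos'."]

def coach_turn(audio_path: str) -> Feedback:
    # Keep the two responsibilities separate so transcription stays faithful
    # and feedback stays targeted.
    transcript = transcribe_audio(audio_path)
    return Feedback(transcript=transcript, corrections=explain_errors(transcript))

print(coach_turn("session_clip.wav"))
```

Keeping the two responsibilities in separate calls is what would let us tune against over-correction: the transcript stays faithful to what the student said, while the feedback model is asked to explain mistakes rather than rewrite the speaker.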
