Project Milestone 3 One-pager: IDEaL


IDEaL

An educational IDE for all programming Learners

Our Opportunity

Learning to code can be an overwhelming, stressful experience for new learners. Building this increasingly sought-after skill involves wading through a sea of domain-specific jargon, grasping challenging programming techniques and design patterns, and rooting out seemingly undetectable bugs. These difficulties are compounded by the IDEs on the market today, which have steep learning curves and are not tailored to novices. Even with these IDEs, many new coding students find debugging and deciphering opaque error messages challenging, time-consuming, and demotivating. Those who persevere can still pick up bad habits in code style and efficiency that make more complicated code harder to write later on, and, as novices, they can struggle to identify underlying misconceptions and find relevant resources to remedy them.

 

Benefits We Offer – A Student’s Perspective

Our solution aims to make the process of learning to code less overwhelming, providing students with an understandable, encouraging, and fun way to program. A potential student user might say the following about our IDE:

  • It is easier and less intimidating to start coding
  • I can learn as I go, diving into topics I have trouble with instead of being inundated with information
  • The tailored feedback IDEaL provides helps me write cleaner, more beautiful code
  • IDEaL helps me spend less time debugging
  • Understanding errors I face when programming is so much easier 
  • The tool is simple and easy to use for novice programmers like me
  • IDEaL integrates with my class, so I can finish my assignments more quickly with more support

Mock Customer Case Study

  • Let’s say Z is a student in CS106B. In the first week of classes, Z downloads IDEaL as their IDE, loads in the CS106B-specific packages, and gets to work on their first assignment.
  • The first time Z makes a style error, IDEaL flags it, provides a detailed message describing the issue, and logs it for future reference.
  • Z can then see that error, and how many times it has been flagged, in a dashboard on the front page of IDEaL.
  • Now consider Y, another student in CS106B. To enable A/B testing, we have Y download the standard Qt Creator IDE for the course and start the first assignment with the CS106B-specific packages installed.
  • When Y makes a style error, they must wait until their scheduled grading session with their section leader before they hear about it.
  • We chat with both Y and Z, who follow the same curriculum in the same course, to see which IDE worked better for them and what grade they earned on the first assignment.
  • The steps above will help us determine whether IDEaL helps student learning and improves student outcomes.

Areas of Uncertainty

  1. Market fit: will we be able to compete with Sublime, VSCode, etc.? How will we differentiate ourselves?
  2. Viability for university clients: will university lecturers go for this and find it valuable? If so, how can we make the onboarding cost low enough that course staff can pivot to IDEaL and get their students on board too?
  3. Tech stack: a long-term vision for this tooling is integration with a code analyzer like clangd, which would require a fair bit of open-source contribution.

Plan to Explore Areas of Uncertainty

[Neel with 2 lecturers in the Stanford CS department]

  1. Perform more interviews with students and lecturers alike to make sure we include features that lead to strong differentiation and market fit.
  2. Talk to lecturers to see what they think of the idea and product. We want to find out what changes we would need to make to our product, as well as what changes they forecast curriculums would need to better fit the product.
    1. It would also be interesting to learn which technologies and improvements have worked best for them in their careers so far, and at what point they knew they needed to make a change of that nature.
  3. Research the landscape of code analyzers and linters to determine the best route for getting our tooling integrated.

 

Prior Research – What We Know

[Jin-Hee with a Stanford CS student.]

 

From interviewing the people learning to code, we learned… 

  • Computer science as a discipline feels daunting and inaccessible.
  • You could stare at the same line of an error message for a long time, just trying to figure out what the jargon means.
  • There is no popular, standardized way to fix style as you code, so it’s often an afterthought or an unknown to new coders.
  • The process of reviewing a key concept while coding can feel scattered.

 

From interviewing the people teaching others how to code, we learned… 

  • Section leaders find themselves repeating the same exact style comments to multiple students throughout the grading process.
  • Many students wait hours in office hours just to get help with an IDE setup question.
  • Section time spent answering questions about style or formatting could be better used for conceptual review and problem-solving.

Leading Signals

We know that things are working if we observe… 

  • Revenue and deals with individuals and institutions increase;
  • Educators no longer have to repeat the same exact style comments on every assignment;
  • Users spend no more than four hours at a time in the IDE, avoiding overwhelm;
  • People have feedback to give us.

 

We know that things are not working if we observe… 

  • A stagnant user base that is not attracting new individual users;
  • Students using other IDEs to complete assignments;
  • Partnered institutions / educators spending the same amount of time or more explaining IDE messages / giving style feedback.

Alternative Options

  1. A plug-in that we could create and sell to an existing, popular IDE (or IDEs).
  • Benefits: less ground-up development work, reach a wider user base since popular IDEs already have existing users.
  • Drawbacks: business complications between IDE companies and education partners, different builds for different IDEs could be tedious, could have less control in our messaging / content.

 

  2. A learning hub / platform that educators could use directly.
  • Benefits: can easily make it more comprehensive and organized, keeps the conceptual / learning component separate and easy to compartmentalize.
  • Drawbacks: many such platforms already exist (a crowded red ocean: GeeksforGeeks, Khan Academy, XForDummies), making it difficult to get traction; it could duplicate course websites, making it difficult to enter into partnerships with universities / learning institutions.

Why Now

Even amid a recession and big-tech layoffs, people are still learning to code, especially at Stanford, whose CS106 population is our initial target user base. Every fall quarter, Stanford CS106A/B enrollment grows larger than the year before. Even Stanford students in other fields are encouraged to take at least CS106A for a competitive edge in their careers. Our IDE will thus always have the opportunity to serve a large group of people trying to learn how to program.

 

Technologies Involved

Our MVP takes the form of an extension for the popular text editor VSCode. To create it, we leveraged VSCode’s Extension API in TypeScript, as well as Python and shell scripting for our underlying logic. This implementation prototype focuses on our understandable-errors feature: catching errors in terminal output and translating them in real time into understandable, jargon-free English messages.
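The error-translation idea can be illustrated with a minimal sketch. The rule patterns, wording, and the `translate` helper below are our own illustrative assumptions, not the actual IDEaL implementation: a handful of hand-written rules map common g++ error messages to jargon-free explanations.

```python
import re

# Illustrative rules only: each pattern matches a common g++ error line
# and maps it to a plain-English explanation. A real rule set would be
# far larger and driven by data on real student errors.
RULES = [
    (re.compile(r"'(\w+)' was not declared in this scope"),
     "You used the name '{0}' before telling the compiler what it is. "
     "Check for a typo, or declare the variable/function first."),
    (re.compile(r"expected ';' before"),
     "A statement is missing its ending semicolon on or just above this line."),
    (re.compile(r"invalid operands .* to binary 'operator\+'"),
     "You tried to add two values whose types can't be added together."),
]

def translate(error_line: str) -> str:
    """Return a plain-English message for a raw compiler error line."""
    for pattern, message in RULES:
        match = pattern.search(error_line)
        if match:
            return message.format(*match.groups())
    # Fall back to the raw message when no rule applies.
    return "We don't recognize this error yet. Raw message: " + error_line

print(translate("main.cpp:4:5: error: 'cout' was not declared in this scope"))
```

In the extension itself, a function like this would sit between the captured terminal output and the editor UI, with unmatched errors passed through unchanged.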

For our final product, we plan to implement a full standalone IDE. This will include syntax highlighting, autocompletion, and style feedback, and will thereby involve lexical analysis and parsing of students’ written code as part of our implementation. We would also like to integrate with gdb and the lldb API to create an interactive, beginner-friendly debugger. Finally, we plan to leverage an NLP generative model akin to OpenAI’s GPT-3, trained on student code, to provide smarter feedback and suggestions based on past errors and common coding misconceptions.
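To give a flavor of the style-feedback component, here is a minimal sketch assuming simple line-based rules; the rule choices and the `style_feedback` function name are hypothetical, and a real implementation would lex and parse the code (or build on tooling like clangd) rather than inspect raw lines.

```python
def style_feedback(source: str) -> list[str]:
    """Return beginner-friendly style notes for a source string."""
    feedback = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if len(line) > 80:
            feedback.append(f"Line {lineno}: longer than 80 characters - "
                            "consider breaking it up for readability.")
        if "\t" in line:
            feedback.append(f"Line {lineno}: uses a tab character - "
                            "most course style guides prefer spaces.")
        if line != line.rstrip():
            feedback.append(f"Line {lineno}: has trailing whitespace.")
    return feedback

# Example: a tiny C++ snippet with a tab and trailing spaces.
sample = 'int main() {\n\treturn 0;   \n}'
for note in style_feedback(sample):
    print(note)
```

Each note would also be logged, feeding the per-student dashboard of recurring style errors described above.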

 

Looking Forward – Time Frame and Deliverables

The first 1.5 months were spent researching existing IDEs, building and testing experience prototypes, building low-fi prototypes, and coding our very basic functioning product. Within three months, we should have the first iteration of our fully fleshed-out product and begin further user testing. In one to three quarters, we expect to partner with an academic institution / class (e.g. CS106A). Within the next three years, we plan to expand our sales opportunities to at least 3-5 other universities / academic institutions.
