We developed an automatic code analysis tool, LevelUp, to support educators and learners, and built it into a block-based programming platform. LevelUp gives users continuous feedback on their text classification projects, showing them what they have done well and how they can improve. We evaluated the tool in a crossover user study in which participants constructed two text classification projects, once with LevelUp and once without it. To measure the tool's impact on participants' understanding of text classification, we used pre-post assessments and graded both of their projects against LevelUp's rubric. Participants' projects improved significantly in quality after they used the tool. We also solicited participants' feedback through a questionnaire; overall, they found LevelUp useful and intuitive. Our investigation of this novel automatic assessment tool can inform the design of future code analysis tools for AI education.