
{build} Hackathon & Incubator



© 2026 Government Technology Agency of Singapore | GovTech


Mark.ly

Mark.ly is an AI-powered marking assistant that helps teachers mark faster and more consistently, and provides data-driven insights.

Shortlisted for Incubator | Booth ED11



Problem Statement

Teachers in Singapore dedicate a significant portion of their time to marking, yet many face challenges with grading consistency. This manual and time-intensive process not only adds to their workload but also delays crucial feedback that could drive student progress. As workloads continue to grow, the traditional approach to assessment is becoming increasingly unsustainable - underscoring the urgent need for solutions that streamline grading while enhancing the quality of feedback.

How Might We

How might we design an AI-assisted grading system that lightens teachers' workload by automating routine grading tasks, improving consistency, and delivering meaningful, accurate feedback to students?

Background

Teachers in Singapore often find themselves working long hours, not just during school but well into the evenings and weekends, marking assignments and preparing lesson plans. Studies have shown that teachers work about 53 hours a week, leading to high levels of stress and burnout.

The heavy workload of marking homework and exams, combined with administrative responsibilities, significantly impacts teachers' work-life balance and contributes to burnout. Elisha Tushara, a former teacher, wrote in The Straits Times about her struggles during her teaching career with balancing a heavy workload and her personal life.

Quote 1

Understand the Problem

To better understand the problem space, we conducted both qualitative and quantitative user research with MOE teachers to find out their existing challenges and how we could develop a product that better fits their needs.

Over the course of 2 weeks, we conducted user interviews with 6 MOE teachers and sent out a research survey that drew 71 respondents.

User Interviews

🎯 Goal: To better understand teachers' marking processes and the challenges they faced, as well as any experience with using alternative applications to aid marking.

āœļø Methodology: Face-to-face/ zoom interviews

💡 Key Insights:

Exam

Teachers first mark individually, then convene with 3-5 sample scripts (good, average, bad) for calibration and discussion. Major exams take 2-3 full days to mark; minor exams take 2-3 days of marking after school.

Homework

Marked daily after class (from 1pm). A single class's worksheets take 2-3 hours, making manual marking overwhelming. With multiple classes and other commitments (CCA, admin), timely marking is challenging. Some teachers also provide feedback, adding to the workload.

Challenges

  • Heavy marking workload, especially for daily homework, alongside other teaching duties.
  • 2-3 hours needed to mark one class's homework.
  • An art teacher grades 1,600 art pieces and 400 worksheets per term, with manual feedback for each student.
  • Standardization and calibration are manual (via WhatsApp, Excel, or face-to-face).
  • Some students have illegible handwriting.
  • Science is harder to mark due to its logical complexity and subjectivity.

Survey

🎯 Goal: To collect quantitative data on teachers' sentiments about existing marking processes, the challenges faced, and their receptiveness towards using AI for marking.

āœļø Methodology: survey sent out to MOE teachers

💡 Key Insights:

  • Marking is the Biggest Time Drain
    • 93% of teachers find grading the most time-consuming task, with homework and exams taking 3-4 hours per class.
    • On average, teachers mark 417 assignments per week.
  • Challenges in Marking
    • Detailed feedback is difficult (82.7%), and time demands are high (77.5%).
    • Calibration across teachers remains a manual process.
  • Current AI Tool Limitations
    • Teachers use MOE SLS (72.4%) and ChatGPT (48.2%), but tools require constant rubric adjustments and don't reduce marking workload significantly.
  • Desired AI Features
    • Top needs: AI-generated feedback (93.1%), class performance summaries (82.7%), and handwriting recognition (81%).
  • Concerns with AI Adoption
    • Accuracy issues, risk of AI hallucinations, and difficulty recognizing math equations & graphs.
    • Teachers worry about losing a big-picture understanding of student mistakes and AI feedback being too generic.
    • Personalized feedback remains essential, especially for younger students.
    • Seamless transition between physical and digital marking is key for adoption.

Goals of the product

  • Reduce the time spent on marking by automating repetitive tasks with structured rubrics
  • Improve marking consistency and reduce the need for manual calibration
  • Provide students with detailed, timely, and constructive feedback
  • Train the LLM on the local MOE curriculum and existing rubrics history, so that it can assist teachers in generating answer schemes in future

The approach

Our approach follows an iterative process of user research, development and user testing to ensure Mark.ly meets the needs of educators and students effectively.

🚀 Phase 1: User Research

  • We conducted 6 in-depth user interview sessions and an extensive survey with 71 MOE teachers to understand their existing challenges, workflows, and pain points in marking. This helped shape our AI-powered solution to align with real-world classroom needs.

🚀 Phase 2: Development

  • Mark.ly leverages cutting-edge OCR and LLM technologies to accurately interpret student submissions and generate meaningful feedback.
  • During development, we focused on three key components:
    • Optical Character Recognition (OCR) - The backbone of digitizing handwritten scripts. After evaluating multiple options (commercial SaaS solutions, zero-shot LLM prompting, and others), we identified JigsawStack as the best value-for-money option with high handwriting recognition accuracy.
    • Document Understanding Model - Uses LLMs to structure unstructured OCR data into questions and answers for AI-assisted grading.
    • AI-assisted Grading - LLMs assess student responses against pre-defined marking rubrics and generate context-aware feedback for students on each question. Future iterations will incorporate teacher feedback to enhance grading accuracy.
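To make the three-stage pipeline concrete, here is a minimal, self-contained sketch. The real product uses JigsawStack OCR and LLM calls; here plain text stands in for OCR output, keyword matching stands in for the LLM grading step, and all names (`Answer`, `structure_ocr_text`, `grade`) are illustrative, not Mark.ly's actual API.

```python
from dataclasses import dataclass

@dataclass
class Answer:
    question_id: str
    text: str

def structure_ocr_text(raw_text: str) -> list[Answer]:
    """Document-understanding step: split raw OCR output into
    per-question answers. Assumes one 'Q<n>: <answer>' per line."""
    answers = []
    for line in raw_text.strip().splitlines():
        qid, _, text = line.partition(":")
        answers.append(Answer(qid.strip(), text.strip()))
    return answers

def grade(answer: Answer, rubric: dict) -> dict:
    """Grading step. Keyword matching stands in for the LLM call:
    one mark per rubric point mentioned in the answer."""
    points = rubric[answer.question_id]
    hits = [p for p in points if p in answer.text.lower()]
    return {
        "question_id": answer.question_id,
        "score": len(hits),
        "max_score": len(points),
        "feedback": f"Covered: {', '.join(hits) or 'none of the expected points'}",
    }

raw = "Q1: Evaporation occurs when water gains heat\nQ2: The answer is 42"
rubric = {"Q1": ["evaporation", "heat"], "Q2": ["42"]}
graded = [grade(a, rubric) for a in structure_ocr_text(raw)]
```

Keeping the structuring and grading stages separate, as the product does, means either stage can be swapped out (for example, upgrading the OCR provider) without touching the other.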

🚀 Phase 3: Usability Testing & Refinement

  • We performed rigorous testing using real educational scripts across different subjects to refine our marking accuracy, quality of feedback, and system efficiency.
  • We engaged X teachers in pilot testing, collecting continuous feedback to improve usability, enhance AI grading accuracy, and optimize user experience.

Solution

We propose developing a localised AI-driven marking dashboard aligned with MOE's curriculum. It will provide an end-to-end solution, from marking assignments to providing accurate feedback, reducing teachers' time spent on manual marking and improving consistency.

Process

Mark.ly automates the marking process by:

  • Utilizing OCR to extract and interpret student answers in structured and open-ended formats.
  • Leveraging LLMs to compare responses against model answers and rubric criteria.
  • Generating AI-assisted feedback while allowing teachers to review and revise where necessary.

Key Features

a) Uploading of assignment's question and answer sheet (Rubrics)

AI-extracted rubrics ensure consistency across the subject's grading system. Extracted rubrics can be fed into the LLM to build grading capabilities aligned with MOE's curriculum.

b) Rubrics tweaking

Allows flexible rubric tweaking according to changing circumstances; updated rubrics can be fed back into the LLM and applied to all current and future grading.
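One way to support this kind of tweaking is to treat the rubric as structured data and make each edit produce a new version, so gradings done under the old rubric stay reproducible. The rubric shape below is purely illustrative, not the product's actual schema.

```python
import copy

# Hypothetical rubric shape: per-question list of criteria with marks.
rubric = {
    "Q1": [
        {"criterion": "States that evaporation requires heat", "marks": 1},
        {"criterion": "Names the process as evaporation", "marks": 1},
    ],
}

def tweak_rubric(rubric, question_id, index, **changes):
    """Return a new rubric with one criterion updated. The original is
    left untouched so earlier gradings stay reproducible."""
    updated = copy.deepcopy(rubric)
    updated[question_id][index].update(changes)
    return updated

# Raise the weight of the first Q1 criterion from 1 mark to 2.
new_rubric = tweak_rubric(rubric, "Q1", 0, marks=2)
```

Versioning rubrics this way also gives the LLM a clean history of rubric changes to learn from.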

Rubrics

c) Bulk upload of scanned students' answer sheets

The PDFs are converted to text via OCR for AI grading, and grades are automatically sorted to individual students based on a consistent naming convention for the PDFs.

d) Overview summary of class's performance

Provides basic statistics on the class's performance, with insights into the worst-performed questions so that teachers can better understand the common issues faced by students.
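The worst-performed-question insight reduces to a small computation over per-question scores. The data shape below is illustrative, not the product's schema:

```python
from statistics import mean

# Illustrative data shape: scores[student][question] = marks awarded.
max_marks = {"Q1": 2, "Q2": 3}
scores = {
    "Alice": {"Q1": 2, "Q2": 1},
    "Ben":   {"Q1": 1, "Q2": 1},
    "Chen":  {"Q1": 2, "Q2": 0},
}

def question_averages(scores, max_marks):
    """Average fraction of marks earned on each question."""
    return {q: mean(s[q] / full for s in scores.values())
            for q, full in max_marks.items()}

def worst_questions(scores, max_marks, n=1):
    """The n questions with the lowest class average, flagged for review."""
    avgs = question_averages(scores, max_marks)
    return sorted(avgs, key=avgs.get)[:n]
```

Normalising by each question's maximum marks keeps 1-mark and 5-mark questions comparable when ranking them.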

Summary

e) AI-generated feedback and detailed grading

Reduces teachers' workload by providing a first-cut grade and feedback for the teacher's review. Allows tweaking of grades and feeding updated rubrics back into the LLM, and provides automated, timely feedback to students for each assignment.

Feedback

f) Group grading (in future)

Group Grading

Groups students with similar grades (at individual-question granularity) so that teachers have a clear overview of the types of answers students wrote. This also reduces review time by letting teachers focus on the groups that did not get full marks.
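The grouping described above can be sketched as a simple bucketing step (illustrative names and data shape, not the product's implementation):

```python
from collections import defaultdict

def group_by_question_score(scores, question, full_marks):
    """Bucket students by the marks they earned on one question, so a
    teacher can review each bucket of similar answers together and skip
    the full-marks group."""
    groups = defaultdict(list)
    for student, per_q in scores.items():
        groups[per_q[question]].append(student)
    to_review = {mark: names for mark, names in groups.items()
                 if mark < full_marks}
    return dict(groups), to_review

scores = {"Alice": {"Q1": 2}, "Ben": {"Q1": 1}, "Chen": {"Q1": 2}}
groups, to_review = group_by_question_score(scores, "Q1", full_marks=2)
```

A production version would likely bucket by answer similarity as well as by marks, but grouping by score alone already lets teachers batch their reviews.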

g) Feedback to students and data analysis (in future)

Makes grades and feedback available to students, stores past assignments online, and generates per-student trend analysis based on history.

Impact

We tested Mark.ly with a few MOE teachers using their school examination questions and students' scripts. These teachers came from various schools such as Dunearn Secondary, Rosyth and ACS (Junior). The usability tests mainly gauged teachers' receptiveness to Mark.ly and gathered valuable feedback on how we can improve our product to better fit their needs.

Generally, teachers responded positively to the concept and development of Mark.ly. They provided valuable insights on how we could improve our product features, such as a feedback loop to students, locally stored MOE curriculum/rubrics, and a student-based data bank for trend analysis.

Quote 2

This works out to approximately 5 minutes (not including time spent on review) to provide grades and feedback for one class of 40 students, at least a 50% decrease in teachers' marking time.
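The "at least 50%" figure is conservative once review time is added back in. A back-of-envelope check, assuming the 2-hour lower bound per class reported in the interviews:

```python
# Back-of-envelope check of the savings claim, using the lower bound
# from the interviews (2 hours of manual marking per class of 40).
manual_minutes = 2 * 60   # conservative manual baseline per class
markly_minutes = 5        # AI first-cut pass, review time excluded
saving = 1 - markly_minutes / manual_minutes          # raw saving before review
review_budget = manual_minutes // 2 - markly_minutes  # review time that still keeps savings >= 50%
```

So even if a teacher spends up to 55 minutes reviewing the AI's first cut, the total stays at or under half the manual baseline, which is consistent with the 50% claim.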

  • Time savings: Mark.ly significantly reduces grading time, allowing teachers to focus on lesson planning and student engagement.
  • Consistency and fairness: AI-driven assessments ensure objective and standardized grading.
  • Enhanced student learning: Personalized feedback helps students understand their mistakes and improve.
  • High Scalability: The solution can be adapted for various subjects and educational institutions.

What's Next

Beyond the Hackathon, we will work closely with MOE on refining our LLM quality and building the business case for this product. We will also work closely with more teachers to improve our product to suit their needs.

We will identify pilot schools (with specific subjects) to test Mark.ly, and in the longer term we aim to expand to more subjects and roll out to a wider range of schools.

Key areas of focus include:

  • Expand AI capabilities to read diagrams
  • Feedback to students regarding their performance
  • Statistics analysis and summary of key insights
  • Student-based data bank to store their assessments data for trend analysis

The Team

Chunqi, Shirlynn, Yi Ning, Jack, Ahmad