Offline Rep-Counter with On-Device AI

1. Introduction

Advances in on-device machine learning and edge AI now enable highly interactive and personalized applications without relying on cloud connectivity. In this demonstration, we showcase an integrated system that leverages TensorFlow Lite’s MoveNet for precise real-time pose estimation, combined with a custom-trained lightweight classifier to interpret physical movements. Additionally, Google’s Gemini Nano summarizer from ML Kit is used to generate immediate, personalized motivational feedback entirely offline. This combination of technologies illustrates the future of responsive, privacy-focused AI applications running seamlessly on mobile devices.

2. System Architecture

Our app integrates several key components:

  • Camera captures real-time video input.
  • MoveNet Pose Estimation identifies body keypoints.
  • Custom TinyML Rep Classifier categorizes each detected pose.
  • Real-time Rep Counting (Kotlin) tracks repetitions based on pose classification.
  • Gemini Nano Summarizer (ML Kit) generates motivational feedback.
  • User Interface displays real-time rep counts and feedback messages.
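
To make the data flow concrete, the sketch below shows how these pieces hand results to one another on each camera frame. Every name in it is an illustrative placeholder that gets implemented in the parts that follow, not an API from the sample project:

// Illustrative glue code only: each function parameter stands in for one of
// the components listed above and is built in the following sections.
class WorkoutPipeline(
    private val estimatePose: (frame: ByteArray) -> FloatArray,    // MoveNet: camera frame -> keypoints
    private val classifyPose: (keypoints: FloatArray) -> String,   // TinyML classifier: keypoints -> label
    private val countRep: (label: String) -> Int,                  // rep counter: label -> running count
    private val summarize: (reps: Int, onText: (String) -> Unit) -> Unit, // Gemini Nano feedback
    private val showRepCount: (Int) -> Unit,                       // UI: tvRepCount
    private val showSummary: (String) -> Unit                      // UI: tvSummary
) {
    fun onFrame(frame: ByteArray) {
        val keypoints = estimatePose(frame)
        val label = classifyPose(keypoints)
        val reps = countRep(label)
        showRepCount(reps)
        if (reps > 0 && reps % 10 == 0) {    // arbitrary feedback cadence
            summarize(reps, showSummary)
        }
    }
}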

3. Prerequisites & Project Setup

Ensure you have:

  • Android Studio Flamingo or newer
  • Android device running Android 12+
  • The TensorFlow Lite Pose Classification tutorial and its training notebook for initial model training.

4. Part I – Training a Custom TinyML Rep-Classifier

We start by training our custom rep classifier:

4.1 Collecting Pose Data

Follow the official TensorFlow Lite Pose Classification tutorial to collect and preprocess your pose data using MoveNet.

4.2 Labeling and Verifying Dataset

Label your dataset clearly (e.g., “push-up,” “rest”) and visually verify each pose classification for accuracy.

4.3 Training and Exporting the Model

Train your model using the provided notebook and export it to TensorFlow Lite with quantization to keep the model lightweight (~50 KB).

5. Part II – Integrating Pose Models into Android

5.1 Cloning Google’s MoveNet Android Sample

Begin by cloning Google’s official Android example:

git clone https://github.com/tensorflow/examples.git
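
The MoveNet pose estimation sample used in this walkthrough lives inside that repository; at the time of writing it sits under lite/examples/pose_estimation/android, but the examples repository is reorganized occasionally, so double-check the path:

cd examples/lite/examples/pose_estimation/android

Open this folder in Android Studio and make sure the unmodified sample builds and runs before adding your own models.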

5.2 Adding Your TFLite Models

Place the following files into app/src/main/assets/:

  • movenet_singlepose_lightning_int8.tflite
  • pose_classifier.tflite
  • pose_labels.txt
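
As a quick sanity check that these assets are wired up, the sketch below loads the custom classifier and its labels with the TensorFlow Lite Interpreter and the Support library's FileUtil. It assumes the TensorFlow Lite Support dependency is available in the project; the MoveNet model itself is loaded by the sample's existing code and needs no changes:

import android.content.Context
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.support.common.FileUtil

// Loads pose_classifier.tflite and pose_labels.txt from app/src/main/assets/.
// Illustrative only; adapt to wherever the project keeps its model helpers.
fun loadPoseClassifier(context: Context): Pair<Interpreter, List<String>> {
    val model = FileUtil.loadMappedFile(context, "pose_classifier.tflite")
    val labels = FileUtil.loadLabels(context, "pose_labels.txt")
    return Interpreter(model) to labels
}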

5.3 Kotlin Callback: Real-Time Rep Counting (RepCounter.kt)

This Kotlin class increments rep counts based on pose classification transitions:
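
The sample project does not ship such a class, so the version below is a minimal sketch of the idea: it counts one repetition each time the classifier transitions from the exercise label back to the rest label, ignoring low-confidence frames. The label strings and the confidence threshold are assumptions carried over from the dataset in Part I, so adjust them to match your pose_labels.txt.

// RepCounter.kt: minimal rep counter driven by pose-classification results.
// Counts one rep per "push-up" -> "rest" transition; label names and the
// confidence threshold are assumptions based on the labels from Part I.
class RepCounter(
    private val exerciseLabel: String = "push-up",
    private val restLabel: String = "rest",
    private val minConfidence: Float = 0.7f
) {
    var count = 0
        private set

    private var inExercise = false

    // Call with the top label and score from pose_classifier.tflite on each frame.
    fun onPoseClassified(label: String, confidence: Float): Int {
        if (confidence < minConfidence) return count   // ignore jittery frames
        when (label) {
            exerciseLabel -> inExercise = true          // entered the exercise phase
            restLabel -> if (inExercise) {              // back to rest: one full rep
                count++
                inExercise = false
            }
        }
        return count
    }

    fun reset() {
        count = 0
        inExercise = false
    }
}

Feed onPoseClassified() the top label and score for every processed frame, then push the returned count to the UI (and later to the summarizer in Part III).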

6. Part III – Integrating Gemini Nano Summarizer

6.1 Setting Up Gemini Nano

Include Gemini Nano summarization via ML Kit in your app/build.gradle:
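
At the time of writing, ML Kit's on-device GenAI summarization (backed by Gemini Nano) ships as its own Maven artifact. Treat the coordinates and version below as assumptions and verify them against the current ML Kit release notes:

dependencies {
    // ML Kit GenAI summarization (Gemini Nano). Artifact name and version are
    // assumptions; check the ML Kit documentation for the current release.
    implementation("com.google.mlkit:genai-summarization:1.0.0-beta1")
}

Gemini Nano also runs only on supported devices, so check ML Kit's device requirements before testing.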

6.2 Initializing Gemini Nano

Set up the Gemini Nano summarizer in your main activity (MainActivity.kt):
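
The sketch below shows one way to create the summarizer client when the activity starts. The class, builder, and enum names follow the ML Kit GenAI summarization documentation as understood at the time of writing and may differ between SDK versions, and the layout name is a placeholder, so treat this as a starting point rather than copy-paste-ready code:

import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import com.google.mlkit.genai.summarization.Summarization
import com.google.mlkit.genai.summarization.Summarizer
import com.google.mlkit.genai.summarization.SummarizerOptions

class MainActivity : AppCompatActivity() {

    // On-device summarizer backed by Gemini Nano.
    private lateinit var summarizer: Summarizer

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)   // placeholder layout name

        // Builder and enum names are assumptions from the ML Kit GenAI docs;
        // verify them against the SDK version you are using.
        val options = SummarizerOptions.builder(this)
            .setInputType(SummarizerOptions.InputType.ARTICLE)
            .setOutputType(SummarizerOptions.OutputType.ONE_BULLET)
            .build()
        summarizer = Summarization.getClient(options)

        // Note: on first use the Gemini Nano feature may still need to be
        // downloaded; the docs describe checkFeatureStatus()/downloadFeature().
    }
}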

6.3 Implementing Summarization with Throttling

Generate motivational summaries from workout performance, throttled to limit latency and on-device inference load:
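
A simple throttle is to request a summary only every N reps and no more than once every several seconds. The sketch below wraps that logic around the summarizer from 6.2; the cadence values are arbitrary, the prompt text is just an example, and runInference() with a streaming callback follows the ML Kit GenAI docs as understood at the time of writing, so verify the call against the SDK you are using:

import com.google.mlkit.genai.summarization.SummarizationRequest
import com.google.mlkit.genai.summarization.Summarizer

// Throttled motivational feedback: summarize only every `everyNReps` reps and
// at most once per `minIntervalMs`. Both thresholds are arbitrary choices.
class FeedbackThrottler(
    private val summarizer: Summarizer,
    private val everyNReps: Int = 10,
    private val minIntervalMs: Long = 15_000L
) {
    private var lastRunAtMs = 0L

    fun maybeSummarize(repCount: Int, elapsedSec: Long, onPartialSummary: (String) -> Unit) {
        if (repCount == 0 || repCount % everyNReps != 0) return
        val now = System.currentTimeMillis()
        if (now - lastRunAtMs < minIntervalMs) return
        lastRunAtMs = now

        // Example prompt summarizing the workout so far.
        val workoutLog = "Workout so far: $repCount push-ups in $elapsedSec seconds. " +
            "The user wants brief, encouraging feedback."
        val request = SummarizationRequest.builder(workoutLog).build()

        // Streaming inference: partial text arrives as Gemini Nano generates it.
        // Method name and callback shape are assumptions from the ML Kit GenAI docs.
        summarizer.runInference(request) { newText -> onPartialSummary(newText) }
    }
}

Call maybeSummarize() from the rep-counting callback in Part II and route the streamed text into tvSummary on the main thread.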

7. Part IV – Minimal User Interface

Ensure the UI clearly shows rep counts and motivational messages:

  • Use simple TextView elements (e.g., tvRepCount and tvSummary) to display the rep count and the motivational summary.
  • Keep the screen active during workouts, as shown in the snippet below this list.
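
One way to keep the screen awake is a small extension function that sets FLAG_KEEP_SCREEN_ON, called from MainActivity.onCreate(); setting android:keepScreenOn="true" on the layout's root view works just as well:

import android.app.Activity
import android.view.WindowManager

// Keeps the display awake for the duration of a workout.
fun Activity.keepScreenOn() {
    window.addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON)
}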

8. Conclusion & Future Work

In this demonstration, we’ve explored how on-device pose estimation and TinyML classifiers, coupled with advanced AI summarization technologies like Gemini Nano, can create responsive and privacy-preserving mobile applications. By leveraging edge computing, developers can now provide highly personalized and interactive user experiences without compromising on data security or performance. This project serves as an excellent starting point for further innovation in mobile and edge AI applications.

You can build on this app by adding more exercises beyond push-ups, exploring other features of Google’s ML Kit, or integrating it with an existing fitness app of your choice.
