IVU

Workout assistant

Requirements

Check requirements.txt for project-specific requirements. To install them, run the following command. It is highly recommended to create a separate Python environment before running it.

pip install -r requirements.txt
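
For example, a separate environment can be created with Python's built-in venv module before installing:

python -m venv .venv
source .venv/bin/activate   # on Windows: .venv\Scripts\activate
pip install -r requirements.txt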

Dataset

The original dataset can be found at original dataset. The complete processed dataset can be downloaded from complete dataset. The dataset split into train and test sets can be downloaded from split data for training and testing.

Description of the data

Value                         Description
key_points                    17 body keypoints
normalized_key_points         keypoints normalized with respect to body position
distance_matrix               pairwise distances between keypoints
normalized_distance_matrix    pairwise distances between keypoints, normalized with respect to body position
class_label                   name of the class
class_label_idx               integer equivalent of class_label
file                          name of the file
frame_number                  the respective frame of the file
frame_details                 details with respect to the overall set of files
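
Each value above is a column in the processed pickles. Assuming the pickles store a pandas DataFrame with these columns (an assumption; the actual container may differ), a quick inspection could look like:

import pandas as pd

# Hypothetical inspection of a processed pickle; the column names follow
# the table above, but the DataFrame container is an assumption.
df = pd.read_pickle("train_key_points.pickle")
print(df.columns.tolist())
print(df["class_label"].value_counts())    # samples per exercise class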

The size of the features used for training is 51 for key_points & normalized_key_points, and 136 for distance_matrix & normalized_distance_matrix.
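
Both sizes follow from the 17 keypoints, assuming each keypoint carries three values (e.g. x, y and a confidence score; the exact third value is an assumption):

import math

N_KEYPOINTS = 17

# Flattened keypoint features: three values per keypoint (assumed x, y, score).
key_point_features = N_KEYPOINTS * 3             # 17 * 3 = 51

# Pairwise distances: the matrix is symmetric with a zero diagonal, so only
# the upper triangle is kept, i.e. C(17, 2) unique pairs.
distance_features = math.comb(N_KEYPOINTS, 2)    # 17 * 16 / 2 = 136

print(key_point_features, distance_features)     # 51 136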

Training

  • Config SetUp

    In order to start training, all the required training parameters must be specified in a config file.

      data:
        validation_split: 0.2
          
        # Number of frames to be considered as a single sequence during training
        stride: 32
        
      optimizer:
        name: Adam
        parameters:
          lr: 0.001
        
      callbacks:
        ReduceLROnPlateau:
          parameters:
            patience: 3
            verbose: 1
            factor: 0.2
            min_lr: 0.000001
        
        EarlyStopping:
          parameters:
            min_delta: 0.001
            patience: 8
            verbose: 1
        
        ModelCheckpoint:
          parameters:
            verbose: 1
            save_best_only: True
            save_weights_only: False
      loss:
        name: CategoricalCrossentropy
        parameters:
          from_logits: True
        
      batch_size: 32
      epochs: 300
      shuffle: True
        
      model:
        # name of the model to use for training
        name:
        # model input parameters
        parameters:
          # size of input features
          input_features: 136 
          # total number of classes
          n_classes: 7
        
      log_dir: logs/
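
    Since the config is plain YAML, it can be loaded and sanity-checked before a run, for example with PyYAML (the file name config.yaml is a placeholder; the project may have its own loader):

      import tensorflow as tf
      import yaml

      # Load the training config ("config.yaml" is a placeholder path).
      with open("config.yaml") as fh:
          conf = yaml.safe_load(fh)

      # input_features must match the data type used (51 or 136, see below).
      assert conf["model"]["parameters"]["input_features"] in (51, 136)

      # The callback names mirror tf.keras.callbacks classes; ModelCheckpoint
      # additionally needs a filepath, presumably supplied by the trainer.
      early_stop = tf.keras.callbacks.EarlyStopping(
          **conf["callbacks"]["EarlyStopping"]["parameters"]
      )
      reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(
          **conf["callbacks"]["ReduceLROnPlateau"]["parameters"]
      )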
    
    • LSTM model configuration

      model:
        name: lstm_kar_model
        parameters:
          hidden_units: 64
          input_features: # value based on type of data
          n_classes: 7
          penalty: 0.0001
    
    • Convolution model configuration

      model:
        name: temporal_model
        parameters:
          input_features: # value based on type of data
          n_classes: 7
      

    Based on the type of data used for training, the size of input_features will vary. If the data used for training is key_points or normalized_key_points, set input_features: 51; if it is distance_matrix or normalized_distance_matrix, set input_features: 136.
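
    For orientation, here is a minimal Keras sketch of an LSTM classifier consistent with the lstm_kar_model configuration above. It is an illustrative stand-in, not the project's actual implementation; in particular, interpreting penalty as an L2 weight is an assumption:

      import tensorflow as tf

      def build_lstm_sketch(stride=32, input_features=136, hidden_units=64,
                            n_classes=7, penalty=1e-4):
          # Input: `stride` consecutive frames, each described by
          # `input_features` values (51 for keypoints, 136 for distances).
          inputs = tf.keras.Input(shape=(stride, input_features))
          # `penalty` is treated here as an L2 weight (an assumption).
          x = tf.keras.layers.LSTM(
              hidden_units,
              kernel_regularizer=tf.keras.regularizers.l2(penalty),
          )(inputs)
          # Emit logits, matching CategoricalCrossentropy(from_logits=True).
          outputs = tf.keras.layers.Dense(n_classes)(x)
          return tf.keras.Model(inputs, outputs)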

  • Start Training

    Once the config is set, training can be started based on the type of data being used.

    Train when data is Normalized Distance Matrix

      from ivu.trainer import Trainer
        
      trainer = Trainer.train_with_normalized_distance_matrix(train_pth=r"path_to_train_pickle",
                                                              test_pth=r"path_to_test_pickle",
                                                              conf_pth=r"path_to_config")
      trainer.start_training()
    

    Train when data is Distance Matrix

      from ivu.trainer import Trainer
        
      trainer = Trainer.train_with_distance_matrix(train_pth=r"path_to_train_pickle",
                                                   test_pth=r"path_to_test_pickle",
                                                   conf_pth=r"path_to_config")
      trainer.start_training()
    

    Train when data is Normalized KeyPoints

      from ivu.trainer import Trainer
        
      trainer = Trainer.train_with_normalized_key_points(train_pth=r"path_to_train_pickle",
                                                         test_pth=r"path_to_test_pickle",
                                                         conf_pth=r"path_to_config")
      trainer.start_training()
    

Run Experiments

Predefined config files can be found in config/ and can be used to run experiments. Just run run_train_and_testing.py with paths to the processed training and test set data; this will run experiments on all the configurations present in that folder.

If both train_key_points.pickle and test_key_points.pickle are available, run the following command to perform experiments using those files:

python run_train_and_testing.py path/to/train_key_points.pickle path/to/test_key_points.pickle

If only train_key_points.pickle is available, run the following command to perform experiments without a test set:

python run_train_and_testing.py path/to/train_key_points.pickle None

The results of the experiments can be viewed in the logs/ folder, which is generated in the current working directory. The folder contains the saved model in a folder named chk, training graphs in a folder named graphs, and a PDF containing the confusion matrix and its test metrics.

To visualize the training graphs, run the following command in the terminal:

tensorboard --logdir path/till/graph

Inference

INFER_FOR = ["normalized_distance_matrix", "distance_matrix", "normalized_key_points"]

Choose the mode corresponding to the type of data the model was trained on. The command below is for mode normalized_distance_matrix with a stride of 32:

python run_example.py normalized_distance_matrix path/till/the/folder/chk path/where/my/testing/videos/are path/where/to/save/results 32

Run Demo Using Pretrained Weights

  • Download pretrained weights from here
  • Download the videos for demo here
  • Extract the downloaded model.zip; the extracted folder will contain a chk folder, and the path to this chk folder is the model path
  • Extract the downloaded demo_videos.zip
  • Run the following command to perform inference with the downloaded model:
    python run_example.py normalized_key_points path/till/the/extracted/model/chk path/till/the/extracted/demo_videos path/where/to/save/results 64