evaluation-framework-OAD

This README provides an overview of the code and its functionality. The code evaluates the performance of a computer vision model for online action recognition in videos, using various metrics, and supports multi-fold evaluation.

License: see the LICENCE file.

Paper

This code was used to run the experiments of the following paper:

"Early Action Detection at instance-level driven by a controlled CTC-Based Approach", William Mocaër, Eric Anquetil, Richard Kulpa, July 2023.

@article{mocaer2023,
  title = {Early Action Detection at instance-level driven by a controlled CTC-Based Approach},
  author = {Mocaër, William and Anquetil, Eric and Kulpa, Richard},
  year = {2023},
  journal = {},
  volume = {},
  number = {},
  pages = {},
  doi = {},
}

Code Structure

The main script is "EvaluationOAD.py". It contains the main function "readProtocole", which reads the protocol file and calls the evaluation function for each evaluation group. The code is organized into several Python modules and functions. Here's a brief description of the main components:

Constants and Configuration

alphas_BoundedOverlap = [0.0, 0.2, 0.4, 0.6, 0.8, 0.95] # for BoffD and BOD
tolerance_T = 4  # 4 frames, for Action based F1 (Bloom G3D)

# Latency aware
delta_frames_latency = 10
useStartFrameInsteadOfNegativeDelta = True

exportPerSequenceDetailResult = False # for all metrics
canCorrect = False  # for BOD
summary = True  # to recap some results, specific to some metrics and scores
CURRENT_APPROACH_NAME.NAME = "OURS" # the name that will appear in generated curves for your approach
boundsPredictionFolder = "Bounds/" # input folder for bounds prediction
framesPredictionFolder = "Frames/" # input folder for frame prediction (can be ignored if you only eval using bounds metrics)
outputResultFolder = "Results/" # output folder for results
outputMFResultFolder = "ResultsMultiFold/" # output folder for results of multi folds (aggregation of results of each fold)

These constants and configuration options define parameters used throughout the code. They control aspects such as metric thresholds, output folders, and evaluation options.

Metric Definition Functions

def defineMetricsBounded(nbClass):
    # Define and return a list of bounded metrics
    ...

def defineMetricsPerFrame(nbClass):
    # Define and return a list of per-frame metrics
    ...

These functions define and return the lists of bounded and per-frame metrics used for evaluation. To add or remove metrics, modify these functions.
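
For example, here is a minimal sketch of what defineMetricsBounded might look like, assuming a BOD class such as the BoundedOnlineDetectionOverlap mentioned later in this README; its import path and constructor arguments are assumptions, not the project's exact API:

from Metrics.BoundsBasedMetrics.BoundedOnlineDetectionOverlap import BoundedOnlineDetectionOverlap  # path is an assumption

def defineMetricsBounded(nbClass):
    metrics = []
    # one BOD instance per overlap threshold, so results can later be summarized across alphas
    for alpha in alphas_BoundedOverlap:
        metrics.append(BoundedOnlineDetectionOverlap("BOD alpha=" + str(alpha)))  # hypothetical arguments
    return metrics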

Main Execution

pathProtocole = "C:\workspace2\Datasets\Chalearn\protocol.txt"
readProtocole(pathProtocole, True)

The main execution section reads a protocol file and triggers the evaluation process for the specified dataset.

Protocol File Format

Example of a protocol file (a Python dict):

{
"pathLabel" : "C:\workspace\Datasets\Chalearn\Label\\",
"pathToEval" : "C:\workspace\Datasets\Chalearn\expOut\group_test\\",
"pathExistingResults" : "C:\workspace\Datasets\Chalearn\knownScores.txt", # can be None
"nbClass" : 20,
"doExportVisual":True,
}

Here we want to evaluate only a single evaluation group, so readProtocole should be called with False as its second parameter. To evaluate all evaluation groups instead, call readProtocole with True as the second parameter, with pathToEval set to the parent experiment folder ("..expOut\"). Both calls are illustrated below.
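
For illustration, reusing the example paths above:

pathProtocole = r"C:\workspace\Datasets\Chalearn\protocol.txt"

# evaluate a single group (pathToEval points to one group folder, e.g. expOut\group_test\)
readProtocole(pathProtocole, False)

# evaluate every group found under the experiment output folder (pathToEval set to ..expOut\)
readProtocole(pathProtocole, True)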

Adding a New BoundsBased Metric

If you wish to add a new BoundsBased metric to this project, follow these steps:

  1. Create a New Metric Class:

    Begin by creating a new metric class that inherits from the BoundsBasedMetric class. You can use the following code as a template:

    from __future__ import annotations
    
    from abc import abstractmethod
    from typing import List, Tuple, Dict
    
    import numpy as np
    from matplotlib import pyplot as plt
    
    from Metrics.ExistingResult import ExistingResult, CURRENT_APPROACH_NAME
    from Metrics.BoundsBasedMetrics.BoundsBasedMetric import BoundsBasedMetric
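
    # Note: Sequence, Label and Metric used in the type hints below are classes
    # from this project; import them from the corresponding modules of the repository.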
    
    class YourNewMetric(BoundsBasedMetric):
        def __init__(self, name):
            super().__init__(name)
            self.name = name
            self.resultsToSummarize: Dict[str, Tuple[float, float]] = {}
            """
            resultsToSummarize is a dict of the form
                    {name: (indexOfVal, value)}
            """
    
        # Implement the required methods (do not keep the @abstractmethod decorator
        # on your implementations, otherwise the class cannot be instantiated):

        def Evaluate(self, sequence: List[Sequence], prediction: List[List[Label]], pathExport: str):
            # evaluation logic for a list of sequences and their predictions
            pass

        def EvaluateSequence(self, sequence: Sequence, prediction: List[Label]):
            # evaluation logic for a single sequence
            pass
    
        def AggregateMultiFold(self, resultsFolds: List, pathResult: str, filesInFold: List[str]):
            """
            Aggregate the results of the multi-fold cross-validation using macro-averaging
            :param filesInFold:
            :param resultsFolds: the TP,FP and count for each fold
            :return:
            """
            # Implement aggregation logic for multi-fold results here
            pass
    
        @staticmethod
        def Summary(results: List[Metric], pathOutputResultGeneral: str):
            """
            Print the summary of the results for different instances of the same metric
    
            :param results: the metric instances (e.g. one per alpha) to summarize
            :param pathOutputResultGeneral: output path for the summary
            :return:
            """
            # Implement summary generation logic here
            pass

    Ensure that you implement the abstract methods Evaluate and EvaluateSequence with logic specific to your new metric.

  2. Add Your Evaluation Logic:

    In the Evaluate and EvaluateSequence methods, implement your metric's evaluation logic for a list of sequences and their predictions. You can use the existing BoundedOnlineDetectionOverlap class as an example for your own logic.
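
    For illustration only, here is a minimal EvaluateSequence sketch for your metric class. It assumes Sequence exposes its ground-truth labels and that Label carries a class id and start/end frames; the attribute names (labels, classId, start, end) and self.alpha are assumptions, not the project's API:

    def EvaluateSequence(self, sequence, prediction):
        # hypothetical matching: a prediction is a true positive if it overlaps a
        # ground-truth label of the same class by at least self.alpha (IoU on frames)
        tp, fp = 0, 0
        for pred in prediction:
            matched = False
            for gt in sequence.labels:  # attribute names are assumptions
                if gt.classId != pred.classId:
                    continue
                inter = max(0, min(gt.end, pred.end) - max(gt.start, pred.start))
                union = max(gt.end, pred.end) - min(gt.start, pred.start)
                if union > 0 and inter / union >= self.alpha:
                    matched = True
                    break
            if matched:
                tp += 1
            else:
                fp += 1
        return tp, fp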

  3. Export Results:

    If your metric requires exporting results, implement the ExportPerformance method to export the results appropriately.
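
    As an illustration only; the ExportPerformance signature below is an assumption, not the project's exact API:

    def ExportPerformance(self, pathExport):  # hypothetical signature
        # write this metric's summarized scores to a text file
        # (pathExport is assumed to be a folder path ending with a separator, like "Results/")
        with open(pathExport + self.name + "_results.txt", "w") as f:
            for name, (indexOfVal, value) in self.resultsToSummarize.items():
                f.write(name + "\t" + str(value) + "\n")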

  4. Implement Multi-Fold Aggregation:

    If your metric supports multi-fold aggregation, implement the AggregateMultiFold method to aggregate results from different folds. You can use this method to calculate macro-averaged results.
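
    For instance, a minimal macro-averaging sketch; the structure of resultsFolds below (a list of per-fold score dicts with an "f1" key) is an assumption:

    def AggregateMultiFold(self, resultsFolds, pathResult, filesInFold):
        # macro-averaging: every fold contributes equally, regardless of its size
        perFoldScores = [fold["f1"] for fold in resultsFolds]  # "f1" key is an assumption
        macroF1 = sum(perFoldScores) / len(perFoldScores)
        # pathResult is assumed to be a folder path ending with a separator
        with open(pathResult + self.name + "_multifold.txt", "w") as f:
            f.write("macro F1: " + str(macroF1) + "\n")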

  5. Implement Summary compatibility:

    You can use the summary functionality to generate a summary of results for different instances of the same metric (e.g. one instance per alpha). To do this, see the example in the BOD metric (self.resultsToSummarize).
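
    For example, following the {name: (indexOfVal, value)} form documented in the template above (alpha and f1Score below are illustrative names):

    # inside Evaluate, once the final score is known:
    self.resultsToSummarize["F1"] = (self.alpha, f1Score)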

  6. Document Your Metric:

    Make sure to document your new metric thoroughly by providing a description of how it works, usage examples, and sample results in the README file.

  7. Test Your Metric:

    Before integrating it into the project, test your metric with test data to ensure it functions correctly.

  8. Integrate Your Metric:

    Once you have created and tested your new metric, integrate it into the project just like the existing metrics, by registering it in defineMetricsBounded or defineMetricsPerFrame (see the Metric Definition Functions section above).

  9. Update the README:

    Don't forget to update this README file to include instructions on how to use your new metric and provide usage examples.

That's it! You have successfully added a new BoundsBased metric to this project.

Remember to customize your new metric according to your specific needs and provide clear documentation for users of the project.

About

Framework for OAD (Online Action Detection) evaluation. Made by William Mocaër.
