Anomaly Detection: This tool uses the models created in the Modeling Data section to detect outliers in the selected data-set of executions. It performs model-based detection to flag anomalous executions and outliers.
To use the tool, select the executions you want to filter and examine using the filter box on the right (make sure you have created a compatible model in the Modeling Data section). Also select the error-tolerance parameter (sigma): an execution is flagged if its prediction error is at least k sigmas from the average error. Then observe how each execution is classified. Warnings are executions with an anomalous prediction error but no further evidence of being either legitimate or an outlier. Outliers are executions with an anomalous prediction error plus supporting evidence of being outliers.
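The sigma rule above can be sketched in a few lines of Python. This is only an illustration of the described criterion, not the tool's actual implementation; the function name and the `support` parameter (standing in for whatever supporting evidence the tool uses to distinguish outliers from warnings) are hypothetical.

```python
def classify_executions(observed, predicted, k=3.0, support=None):
    """Classify executions as 'legitimate', 'warning' or 'outlier'.

    observed, predicted: execution times (same length).
    k: tolerance in sigmas (the tool's error-tolerance parameter).
    support: optional booleans marking extra evidence of being an
             outlier (hypothetical stand-in for the tool's checks).
    """
    errors = [o - p for o, p in zip(observed, predicted)]
    n = len(errors)
    mean = sum(errors) / n
    sigma = (sum((e - mean) ** 2 for e in errors) / n) ** 0.5
    labels = []
    for i, e in enumerate(errors):
        # Anomalous if the error is at least k sigmas from the mean error.
        if sigma > 0 and abs(e - mean) >= k * sigma:
            if support is not None and support[i]:
                labels.append("outlier")   # anomalous error + evidence
            else:
                labels.append("warning")   # anomalous error only
        else:
            labels.append("legitimate")
    return labels
```

For example, with observed times `[10, 11, 9, 10, 30]` against a constant prediction of 10 and `k=1.5`, only the last execution is flagged (as a warning, or as an outlier if supporting evidence is supplied for it).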
After classifying the executions, each classification can be attached to the prediction and execution (in the DataBase) as the accepted resolution for that execution. You can also see the list of executions tested and their resolutions in the List of outliers box below. Optionally, you can re-launch the classification process using a different cached model, from the Model information toolbox at the bottom of the page.
This tool will find outliers and anomalies for the selected executions.
1 - Select from the Filters Box (right box):
1) The values for each attribute to select the data to examine (if no value is selected, all values are included).
2) The model that will be used to classify the executions.
2 - Click on Find Anomalies and wait until the data is processed. Take into account that the bigger the selected data-set, the longer it can take to process.
3 - Wait until the browser refreshes and processes the received data.
4 - Results will appear as:
a) A chart displaying the observed time of each execution vs. the time predicted by the model, with colors indicating whether the execution is classified as legitimate, warning or outlier. The farther a point lies from the line y = x, the higher the chances that it is an outlier or a warning.
b) A table with the executions, each one classified as legitimate, warning or outlier.
c) A button to accept the classification results and tag the executions in the DataBase.