Metrics Descriptions (documentation in progress)
GENERAL METRICS
- SCORE (AVG)
- Description:
A metric that displays the average of score values aggregated
at the level of the report’s attributes. This metric averages
the scores equally regardless of “points possible” differences
between audits.
- When to use:
Use this metric when you want to give all audit scores
equal weight when averaging. For example, one audit score
equals 90 points / 100 points possible = 90%, and the
second audit score equals 60 points / 80 points possible
= 75%. Even though the second audit has fewer possible
points, it will be weighted the same as the first audit.
Score = (90% + 75%) / 2 = 82.5%. Compare this example
with the one in the Score(Points Earned/Points Possible)
metric description.
- Subtotals:
Average subtotals used on this metric will average the
visible report-level score values rather than averaging
all of the base score values. The denominator will be
the rows on the report. For example, if the report shows
scores broken down by 12 months, the subtotal will be
Sum(Monthly Scores) / 12. This equation gives each month
equal contribution to the average even if January had
only 20 audits and February had 100 audits.
- Formula:
AVG(Score)
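- Example (illustrative sketch):
A minimal Python sketch of the behavior described above, using
the two hypothetical audits from the example; the monthly
subtotal values are assumed purely for illustration.
```python
# Hypothetical audits from the example above: (points earned, points possible).
audits = [(90, 100), (60, 80)]

# Score (Avg): each audit's percentage gets equal weight.
scores = [earned / possible for earned, possible in audits]
score_avg = sum(scores) / len(scores)
print(f"Score (Avg) = {score_avg:.1%}")                  # 82.5%

# Report-level subtotal: average the visible monthly scores, so each
# month counts once regardless of how many audits it contains.
monthly_scores = {"January": 0.88, "February": 0.93}     # assumed values
subtotal = sum(monthly_scores.values()) / len(monthly_scores)
print(f"Subtotal = {subtotal:.1%}")                      # 90.5%
```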
- SCORE (POINTS EARNED / POINTS POSSIBLE)
- Description:
A metric that displays the score calculated by summing
all points and dividing by possible points rather than
by averaging audit scores. It takes into account the weighting
differences between audits with “points possible” differences.
- When to use:
Use this metric when you want a score that weights audits
by their possible point contributions. For example, one
audit score equals 90 points / 100 points possible = 90%,
and the second audit score equals 60 points / 80 points
possible = 75%. The first audit has more possible points
and will contribute more to the final score. The formula
sums up all points and then divides by all possible points.
Score = (90 + 60) / (100 + 80) = 83.3%. Compare this example
with the one in the Score(Avg) metric description.
- Subtotals:
Any average subtotals used on this metric will always
sum the points and divide by possible points without regard
to the report level attributes. Subtotal = SUM(Points)/SUM(Points
Possible).
- Formula:
SUM(Points Earned)/SUM(Points Possible)
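- Example (illustrative sketch):
The same two hypothetical audits as above, shown in Python with
the points pooled before dividing; values are for illustration only.
```python
# The same two hypothetical audits: (points earned, points possible).
audits = [(90, 100), (60, 80)]

# Pool the points first, so the audit with more possible points
# contributes more to the final score.
total_earned = sum(earned for earned, _ in audits)        # 150
total_possible = sum(possible for _, possible in audits)  # 180
print(f"Score = {total_earned / total_possible:.1%}")     # 83.3%
```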
- SCORE (SUM / COUNT)
- Description:
A metric that displays the score calculated by summing
all scores in the dataset and dividing by the count of
scores. In a report grid, this metric gives the same results
as the Score(Avg) metric; however, subtotals calculate differently.
- When to use:
Use this metric when you would use a Score(Avg) metric
but need subtotals to average on a base level of individual
scores rather than the report level.
- Subtotals:
Average subtotals used on this metric will sum up all
scores contained in the dataset and divide by the total
count of scores: SUM(Score) / COUNT(Score). For example,
a Quarterly report shows scores broken down by 3 months:
January (20 audits), February (100 audits), and March
(50 audits). The subtotal sums up the scores of all 170
audits and divides by the total count of 170 audits. This
method weights each score equally but does not weight
each month equally, as some months had more audits than
other months. In comparison, the Score(Avg) metric would
sum the three Monthly scores and divide by three, which
would weight each month equally and lose the weighting
from the count of audits per month.
- Formula:
SUM(Score)/COUNT(Score)
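- Example (illustrative sketch):
A minimal Python sketch contrasting the two subtotal behaviors,
assuming made-up per-audit scores for the quarterly example above.
```python
# Assumed per-audit scores grouped by month; only the counts matter here.
months = {
    "January":  [0.80] * 20,    # 20 audits averaging 80%
    "February": [0.95] * 100,   # 100 audits averaging 95%
    "March":    [0.90] * 50,    # 50 audits averaging 90%
}

# Score (Sum / Count) subtotal: pool every individual score.
all_scores = [s for scores in months.values() for s in scores]
sum_count_subtotal = sum(all_scores) / len(all_scores)    # ~91.8%

# Score (Avg) subtotal: average the three visible monthly scores.
monthly_avgs = [sum(s) / len(s) for s in months.values()]
avg_subtotal = sum(monthly_avgs) / len(monthly_avgs)      # ~88.3%

print(f"{sum_count_subtotal:.1%} vs {avg_subtotal:.1%}")
```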
- PERCENT CORRECT
- Description:
A metric that displays the percent of correct answers
compared to total correct and incorrect answers. It is
important to note that this metric does not include “informational”
answers in the denominator. For example, if a question
has 6 correct answers, 4 incorrect answers, and 2 informational
answers, the Percent Correct metric would display 6/(6+4)
= 60%.
- When to use:
Use this metric when you want to see the percent of answers
that are correct.
- Formula:
COUNT(Correct) / (COUNT(Correct) + COUNT(Incorrect))
- PERCENT INCORRECT
- Description:
A metric that displays the percent of incorrect answers
compared to total correct and incorrect answers. It is
important to note that this metric does not include “informational”
answers in the denominator. For example, if a question
has 6 correct answers, 4 incorrect answers, and 2 informational
answers, the Percent Incorrect metric would display 4/(6+4)
= 40%.
- When to use:
Use this metric when you want to see the percent of answers
that are incorrect.
- Formula:
COUNT(Incorrect) / (COUNT(Correct) + COUNT(Incorrect))
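- Example (illustrative sketch):
A small Python sketch of the 6 correct / 4 incorrect / 2 informational
example used above; informational answers never enter the denominator.
```python
# Answer tallies from the example: 6 correct, 4 incorrect, 2 informational.
correct, incorrect, informational = 6, 4, 2

# Informational answers are excluded from the denominator.
graded = correct + incorrect
print(f"Percent Correct   = {correct / graded:.0%}")      # 60%
print(f"Percent Incorrect = {incorrect / graded:.0%}")    # 40%
```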
- ANSWER
- Description:
- When to use:
- Formula:
- WEIGHTED SCORE
- Description:
- When to use:
- Formula:
AVERAGE METRICS
- SCORE (AVG)
- Description:
A metric that displays the average of score values aggregated
at the level of the report’s attributes. This metric averages
the scores equally regardless of “points possible” differences
between audits.
- When to use:
Use this metric when you want to give all audit scores
equal weight when averaging. For example, one audit score
equals 90 points / 100 points possible = 90%, and the
second audit score equals 60 points / 80 points possible
= 75%. Even though the second audit has fewer possible
points, it will be weighted the same as the first audit.
Score = (90% + 75%) / 2 = 82.5%. Compare this example
with the one in the Score(Points Earned/Points Possible)
metric description.
- Subtotals:
Average subtotals used on this metric will average the
visible report-level score values rather than averaging
all of the base score values. The denominator will be
the rows on the report. For example, if the report shows
scores broken down by 12 months, the subtotal will be
Sum(Monthly Scores) / 12. This equation gives each month
equal contribution to the average even if January had
only 20 audits and February had 100 audits.
- ACTION PLAN COMPLETE %
- Description:
A metric that displays the average percentage of action
plans that have been completed.
- When to use:
It is advisable to use this metric with audit level attributes
or higher levels, and not Category or Question level attributes.
- Formula:
AVG(COUNT(Action Plans Complete) / COUNT(Total Action
Plans))
- ACTION PLAN COMPLETE % - CRITICAL
- Description:
A metric that displays the average percentage of critical
action plans that have been completed. Critical action
plans are action plans that have been created on answers
that have been flagged as “Critical” in the audit instrument.
- When to use:
It is advisable to use this metric with audit level attributes
or higher levels, and not Category or Question level attributes.
- Formula:
AVG(COUNT(Critical Action Plans Complete) / COUNT(Total
Critical Action Plans))
- ACTION PLAN PLANNED %
- Description:
A metric that displays the average percentage of action
plans that have been planned. An action plan has three
fields that need to be filled in before it is 100% planned:
an action plan description, a responsible person, and
a due date. An audit’s Action Plan Planned % is the sum
of all populated fields across all action plans, divided
by the total count of action plan fields (three fields
per action plan). For example, an audit has three incorrect
answers and each has an action plan. The first action
plan has 1 field populated, the second action plan has
two fields populated, and the third action plan is fully
populated with all three fields. The audit’s Action Plan
Planned % = (1 + 2 + 3) / (3 + 3 + 3) = 67%. The metric
then averages that percentage across all audits in the dataset.
- When to use:
It is advisable to use this metric with audit level attributes
or higher levels, and not Category or Question level attributes.
- Formula:
AVG(COUNT(Action Plans Fields Populated) / COUNT(Total
Action Plan Fields))
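- Example (illustrative sketch):
A Python sketch of the worked example above; the second audit's
100% value is an assumed placeholder that shows the final averaging step.
```python
# Fields populated on each of three action plans for one hypothetical
# audit; each plan has three plannable fields (description, owner, due date).
fields_populated = [1, 2, 3]
fields_per_plan = 3

audit_pct = sum(fields_populated) / (fields_per_plan * len(fields_populated))
print(f"Audit planned % = {audit_pct:.0%}")               # 67%

# The metric then averages the per-audit percentage across all audits.
audit_percentages = [audit_pct, 1.00]                     # second value assumed
metric = sum(audit_percentages) / len(audit_percentages)
print(f"Action Plan Planned % = {metric:.0%}")            # 83%
```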
- ACTION PLAN PLANNED % - CRITICAL
- Description:
A metric that displays the average percentage of critical
action plans that have been planned. Critical action plans
are action plans that have been created on answers that
have been flagged as “Critical” in the audit instrument.
An action plan has three fields that need to be filled
in before it is 100% planned: an action plan description,
a responsible person, and a due date. An audit’s Action
Plan Planned % - Critical is the sum of all populated
fields across all critical action plans, divided by the
total count of critical action plan fields (three fields
per action plan). For example, an audit has three incorrect
critical answers and each has an action plan. The first
action plan has 1 field populated, the second action plan
has two fields populated, and the third action plan is
fully populated with all three fields. The audit’s Action
Plan Planned % - Critical = (1 + 2 + 3) / (3 + 3 + 3) = 67%.
The metric then averages that percentage across all audits
in the dataset.
- When to use:
It is advisable to use this metric with audit level attributes
or higher levels, and not Category or Question level attributes.
- Formula:
AVG(COUNT(Critical Action Plans Fields Populated) / COUNT(Total
Critical Action Plan Fields))
- AUDIT DURATION IN MINUTES (AVG)
- Description:
A metric that displays the average amount of time that
an audit took to complete, in minutes.
- Formula:
AVG(Audit End Time – Audit Start Time)
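- Example (illustrative sketch):
A short Python sketch of the duration calculation, assuming two
hypothetical start and end timestamps.
```python
from datetime import datetime

# Hypothetical start and end timestamps for two audits.
audit_times = [
    (datetime(2024, 1, 5, 9, 0),  datetime(2024, 1, 5, 9, 45)),   # 45 min
    (datetime(2024, 1, 6, 14, 0), datetime(2024, 1, 6, 15, 30)),  # 90 min
]

# Duration of each audit in minutes, then the average across audits.
durations = [(end - start).total_seconds() / 60 for start, end in audit_times]
print(f"Average duration = {sum(durations) / len(durations):.1f} minutes")  # 67.5
```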
- CRITICAL ITEMS (AVG)
- Description:
A metric that displays the average count of critical items,
which are answer choices that have been flagged as a “critical
choice” in the audit instrument.
- When to use:
Use this metric when you want to display the number of
critical answers, averaged across the report attributes.
- Formula:
AVG(COUNT(Critical))
- LOWER LIMIT
- Description:
- When to use:
- Formula:
- NON-COMPLIANT ITEMS (AVG)
- Description:
A metric that displays the average count of non-compliant
items, which are answer choices that have been flagged
as “incorrect” in the audit instrument.
- When to use:
Use this metric when you want to display the number of
non-compliant answers, averaged across the report attributes.
- Formula:
AVG(COUNT(Incorrect))
- NUMERIC ANSWER TEXT (AVG)
- Description:
A metric that displays the average of all answers that
can be converted to a number.
- When to use:
This metric is useful when you have a numeric question
such as Temperature or Weight, and you want to average
the values of the answers across the report attributes.
- Formula:
AVG(Numeric Answer)
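- Example (illustrative sketch):
A Python sketch of the convert-then-average idea, assuming a
hypothetical mix of numeric and non-numeric answer text.
```python
# Raw answer text for a hypothetical numeric question (e.g., Temperature).
answers = ["38.5", "40", "n/a", "39.2"]

# Keep only the answers that convert to a number, then average them.
numeric = []
for text in answers:
    try:
        numeric.append(float(text))
    except ValueError:
        pass  # non-numeric answers are ignored

print(f"Numeric Answer (Avg) = {sum(numeric) / len(numeric):.2f}")  # 39.23
```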
- POINTS EARNED (AVG)
- Description:
A metric that displays the average points earned.
- Formula:
AVG(Points Earned)
- POINTS POSSIBLE (AVG)
- Description:
A metric that displays the average of the points possible.
- Formula:
AVG(Points Possible)
- TARGET
- Description:
- When to use:
- Formula:
- UPPER LIMIT
- Description:
- When to use:
- Formula:
MAXIMUM METRICS
- AUDIT DURATION IN MINUTES (MAX)
- Description:
A metric that displays the maximum amount of time that
an audit took to complete, in minutes.
- When to use:
Use this metric to display the single largest audit duration
value across the report attributes.
- Formula:
MAX(Audit End Time – Audit Start Time)
- AUDIT RESULT RATING (MAX)
- Description:
A metric that displays the maximum audit result rating.
- When to use:
Use this metric to display the highest audit result rating
across the report attributes.
- Formula:
MAX(Audit Result Rating)
- CRITICAL ITEMS (MAX)
- Description:
A metric that displays the maximum count of critical items,
which are answer choices that have been flagged as a “critical
choice” in the audit instrument.
- When to use:
Use this metric when you want to display the highest number
of critical answers across the report attributes.
- Formula:
MAX(COUNT(Critical))
- NON-COMPLIANT ITEMS (MAX)
- Description:
A metric that displays the maximum count of non-compliant
items, which are answer choices that have been flagged
as “incorrect” in the audit instrument.
- When to use:
Use this metric when you want to display the highest number
of non-compliant answers across the report attributes.
- Formula:
MAX(COUNT(Incorrect))
- NUMERIC ANSWER TEXT (MAX)
- Description:
A metric that displays the maximum of all answers that
can be converted to a number.
- When to use:
This metric is useful when you have a numeric question
such as Temperature or Weight, and you want to find the
highest value across the report attributes.
- Formula:
MAX(Numeric Answer)
- POINTS POSSIBLE (MAX)
- Description:
A metric that displays the maximum points possible.
- Formula:
MAX(Points Possible)
MINIMUM METRICS
- AUDIT DURATION IN MINUTES (MIN)
- Description:
A metric that displays the minimum amount of time that
an audit took to complete, in minutes.
- When to use:
Use this metric to display the single smallest audit duration
value across the report attributes.
- Formula:
MIN(Audit End Time – Audit Start Time)
- AUDIT RESULT RATING (MIN)
- Description:
A metric that displays the minimum audit result rating.
- When to use:
Use this metric to display the lowest audit result rating
across the report attributes.
- Formula:
MIN(Audit Result Rating)
- CRITICAL ITEMS (MIN)
- Description:
A metric that displays the minimum count of critical items,
which are answer choices that have been flagged as a “critical
choice” in the audit instrument.
- When to use:
Use this metric when you want to display the lowest number
of critical answers across the report attributes.
- Formula:
MIN(COUNT(Critical))
- NON-COMPLIANT ITEMS (MIN)
- Description:
A metric that displays the minimum count of non-compliant
items, which are answer choices that have been flagged
as “incorrect” in the audit instrument.
- When to use:
Use this metric when you want to display the lowest number
of non-compliant answers across the report attributes.
- Formula:
MIN(COUNT(Incorrect))
- NUMERIC ANSWER TEXT (MIN)
- Description:
A metric that displays the minimum of all answers that
can be converted to a number.
- When to use:
This metric is useful when you have a numeric question
such as Temperature or Weight, and you want to find the
lowest value across the report attributes.
- Formula:
MIN(Numeric Answer)
PROGRAM COMPLIANCE METRICS
- COMPLETE TASK COUNT - Displays the total number of tasks
that have been completed.
- OVERDUE PARTNER COUNT - Displays the number of locations
with overdue tasks.
- OVERDUE TASK COUNT - Displays the total number of overdue
tasks, both incomplete and complete.
- PARTNER COUNT - Displays the number of locations assigned
to a program.
- PAST DUE COMPLETE TASK COUNT - Displays the number of past
due tasks that are now complete.
- PAST DUE TASK COUNT - Displays the total number of tasks
that are past due, both incomplete and complete.
- TASK AMOUNT DUE (SUM) - Displays the amount of fees due,
as required by a payment task.
- TASK COUNT - Displays the total number of tasks associated
with a program.
- TASK HOURS IN PROCESS (SUM) - Displays the total number
of hours it took to complete the task, from creation to completion.
If the task is currently incomplete, displays the number of
hours from the creation of the task to the current time (see
the sketch after this list).
- TASK HOURS OVERDUE (SUM) - Displays the total number of
hours since the task was due.
- TASK HOURS UNTIL DUE (SUM) - Displays the number of hours
remaining before a task is due.
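The following Python sketch illustrates how the task hour metrics
above could be derived for a single hypothetical task; the timestamps
and the hours_between helper are assumptions for illustration, not
the product's actual implementation.
```python
from datetime import datetime

now = datetime(2024, 3, 1, 12, 0)            # assumed "current time"

# Hypothetical task: created and due timestamps; not yet completed.
created   = datetime(2024, 2, 28, 12, 0)
due       = datetime(2024, 3, 1, 9, 0)
completed = None

def hours_between(start, end):
    return (end - start).total_seconds() / 3600

# Task Hours In Process: creation to completion, or to now if still open.
hours_in_process = hours_between(created, completed or now)   # 48.0

# Task Hours Overdue: hours elapsed since the due date (0 if not yet due).
hours_overdue = max(0.0, hours_between(due, now))              # 3.0

print(hours_in_process, hours_overdue)
```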
QUANTITY METRICS
- AUDIT RESULT COUNT
- Description:
A metric that displays the total count of distinct audit
results. The distinct nature of the count means that if
the same audit result identifier appears multiple times
in a dataset, the metric will only count that audit result
once rather than the number of times it appears.
- When to use:
Use this metric when you want to display the number of
audits that have been performed.
- Formula:
COUNT(Audit Result)
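- Example (illustrative sketch):
A small Python sketch of the distinct counting behavior, assuming
a few hypothetical dataset rows with a made-up audit_result_id field.
```python
# Each dataset row repeats the audit result identifier once per answer.
rows = [
    {"audit_result_id": "A-1001", "question": "Q1"},
    {"audit_result_id": "A-1001", "question": "Q2"},
    {"audit_result_id": "A-1002", "question": "Q1"},
]

# Distinct count: each audit result is counted once, no matter how many
# rows it appears on.
print(len({row["audit_result_id"] for row in rows}))   # 2
```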
- CATEGORY COUNT
- Description:
A metric that displays the total count of categories.
- When to use:
Use this metric when you want to display the number of
categories. This is particularly useful when used in conjunction
with duplicate categories, where the same category can
be measured multiple times during a single audit.
- Formula:
COUNT(Category)
- COMPLIANT ITEMS COUNT
- Description:
A metric that displays the total count of compliant items,
which are answer choices that have been flagged as “correct”
in the audit instrument.
- When to use:
Use this metric when you want to display the number of
compliant answers.
- Formula:
COUNT(Correct)
- CRITICAL ITEMS COUNT
- Description:
A metric that displays the total count of critical items,
which are answer choices that have been flagged as a “critical
choice” in the audit instrument.
- When to use:
Use this metric when you want to display the number of
critical answers.
- Formula:
COUNT(Critical)
- FORCE FAIL QUESTION
- Description:
- When to use:
- Formula:
- FORCE FAIL COUNT
- Description:
- When to use:
- Formula:
- INFORMATIONAL COUNT
- Description:
A metric that displays the total count of informational
items, which are answer choices that have been flagged
as “informational” in the audit instrument.
- When to use:
Use this metric when you want to display the number of
informational answers.
- Formula:
COUNT(Informational)
- LOCATION COUNT
- Description:
A metric that displays the total count of distinct locations.
The distinct nature of the count means that if the same
location appears multiple times in a dataset, the metric
will only count that location once rather than the number
of times it appears.
- When to use:
Use this metric when you want to display the number of
locations that have been audited.
- Formula:
COUNT(Location)
- NON-COMPLIANT CATEGORY COUNT
- Description:
A metric that displays the total count of categories that
contain at least one incorrect answer. A single incorrect
answer flags the entire category as non-compliant, no matter
how many questions the category contains.
- When to use:
Use this metric when you want to display how many categories
have at least one incorrect answer.
- Formula:
COUNT(Category with Incorrect Answer)
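- Example (illustrative sketch):
A Python sketch of the rule that a single incorrect answer flags
its category, assuming hypothetical category names and answer flags.
```python
# Hypothetical answers tagged with their category; True marks an
# incorrect answer.
answers = [
    ("Food Safety", False), ("Food Safety", True), ("Food Safety", False),
    ("Cleanliness", False), ("Cleanliness", False),
]

# A category is counted once if it contains at least one incorrect answer.
non_compliant = {cat for cat, is_incorrect in answers if is_incorrect}
print(len(non_compliant))   # 1 (only "Food Safety")
```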
- NON-COMPLIANT ITEMS COUNT
- Description:
A metric that displays the total count of non-compliant
items, which are answer choices that have been flagged
as “incorrect” in the audit instrument.
- When to use:
Use this metric when you want to display the number of
non-compliant answers.
- Formula:
COUNT(Incorrect)
- NON-CRITICAL ITEMS COUNT
- Description:
A metric that displays the total count of non-critical
items, which are answer choices that have not been flagged
as a “critical choice” in the audit instrument.
- When to use:
Use this metric when you want to display the number of
non-critical answers.
- Formula:
COUNT(NOT Critical)
- PREDEFINED COMMENT COUNT
- Description:
A metric that displays the total count of distinct predefined
comments. The distinct nature of the count means that
if the same comment appears multiple times in a dataset,
the metric will only count that comment once rather than
the number of times it appears.
- When to use:
Use this metric when you want to display the number of
predefined comments that have been selected.
- Formula:
COUNT(Predefined Comment)
- PREDEFINED COMMENT COUNT_NOT DISTINCT
- Description:
A metric that displays the total count of non-distinct
predefined comments. The non-distinct nature of the count
means that if the same comment appears multiple times
in a dataset, the metric will count the number of times
it appears, even if there are duplicate rows of the same
comment.
- When to use:
Use this metric when you want to display the number of
predefined comments that have been selected.
- Formula:
COUNT(Predefined Comment)
- QUESTION COUNT
- Description:
A metric that displays the total count of distinct questions.
The distinct nature of the count means that if the same
question appears multiple times in a dataset, the metric
will only count that question once rather than the number
of times it appears.
- When to use:
Use this metric when you want to display the number of
questions that have been answered.
- Formula:
COUNT(Question)
- QUESTION COUNT_NOT DISTINCT
- Description:
A metric that displays the total count of non-distinct
questions. The non-distinct nature of the count means
that if the same question appears multiple times in a
dataset, the metric will count the number of times it
appears even if there are duplicate rows of the same question.
- When to use:
Use this metric when you want to display the number of
questions that have been answered.
- Formula:
COUNT(Question)
- REPEAT CRITICAL VIOLATION COUNT
- Description:
A metric that displays the total count of items that are
Repetitive, Incorrect, and Critical.
- When to use:
Use this metric when you want to display the number of
critical answers that have been incorrect repetitively.
- Formula:
COUNT(Incorrect AND Critical AND Repetitive)
- REPEAT VIOLATION COUNT
- Description:
A metric that displays the total count of items that are
Repetitive and Incorrect.
- When to use:
Use this metric when you want to display the number of
answers that have been incorrect repetitively.
- Formula:
COUNT(Incorrect AND Repetitive)
SUM METRICS
- AUDIT DURATION IN MINUTES (SUM)
- Description:
A metric that displays the total amount of time that an
audit took to complete, in minutes.
- When to use:
Use this metric to display the sum total of all audit
duration values across the report attributes.
- Formula:
SUM(Audit End Time – Audit Start Time)
- NUMERIC ANSWER TEXT (SUM)
- Description:
A metric that displays the total of all answers that can
be converted to a number.
- When to use:
This metric is useful when you have a numeric question
such as Temperature or Weight, and you want to find the
sum total of all values across the report attributes.
- Formula:
SUM(Numeric Answer)
- POINTS EARNED (SUM)
- Description:
A metric that displays the total points earned.
- Formula:
SUM(Points Earned)
- POINTS POSSIBLE (SUM)
- Description:
A metric that displays the total points possible.
- Formula:
SUM(Points Possible)
WAIVER
- AVG HOURS IN WORKFLOW QUEUE
- AVG WAIVER DAYS IN PROCESS
- HOURS LEFT UNTIL WAIVER EXPIRATION
- HOURS
- QUEUE ESCALATION THRESHOLD COUNT
- QUEUE WARNING THRESHOLD COUNT
- TOTAL HOURS IN WORKFLOW QUEUE
- WAIVER ATTACHMENT COUNT
- WAIVER CORRESPONDENCE COUNT
- WAIVER COUNT