Golfstat's H-T-H and Relative Rank Explanation



TO BE ELIGIBLE FOR H-T-H or Relative (2 Requirements):

  1. Schools must have played at least the current minimum number of rounds. The end-of-year minimums are as follows:
    • Division 1 Men: 15
    • Division 2 Men: 12
    • Division 3 Men: 7
    • Division 1 Women: 15
    • Division 2 Women: 15
    • Division 3 Women: 12
    • NAIA Men & Women: 6
    • JCAA Men & Women: 10
  2. Schools must have played a minimum number of different opponents during the year (NAIA & JCAA). Either of the following 2 scenarios can be met to satisfy this requirement (a sketch after the two scenarios illustrates the check):

A minimum number of different opponents in their Division or higher equal to the current minimum number of rounds for Division 1. For purposes of this comparison, Division 1 is considered a higher level than Division 2, and Division 2 a higher level than Division 3, NAIA, or JCAA. Division 3, NAIA, and JCAA are considered equal. (The previous statement should not be misconstrued to mean that the levels of golf are different; it is more a comparison of the ability to travel the country and play a larger array of opponents.)
-or-
A minimum number of different opponents (any Division 1,2,3, NAIA, or JCAA) equal to twice the current minimum number of rounds for Division 1.
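
Below is a minimal sketch of the two requirements, assuming the end-of-year minimums listed above (the Division 1 round minimum of 15 makes the scenario thresholds 15 and 30 different opponents). The function name and data shapes are illustrative, not Golfstat's actual implementation.

```python
# End-of-year round minimums from the list above.
MIN_ROUNDS = {
    ("D1", "M"): 15, ("D2", "M"): 12, ("D3", "M"): 7,
    ("D1", "W"): 15, ("D2", "W"): 15, ("D3", "W"): 12,
    ("NAIA", "M"): 6, ("NAIA", "W"): 6,
    ("JCAA", "M"): 10, ("JCAA", "W"): 10,
}

# Divisions counted as "their Division or higher" for scenario A
# (D1 > D2 > D3; D3, NAIA, and JCAA are considered equal).
AT_OR_ABOVE = {
    "D1": {"D1"},
    "D2": {"D1", "D2"},
    "D3": {"D1", "D2", "D3", "NAIA", "JCAA"},
    "NAIA": {"D1", "D2", "D3", "NAIA", "JCAA"},
    "JCAA": {"D1", "D2", "D3", "NAIA", "JCAA"},
}

def is_eligible(division, gender, rounds_played, opponents):
    """opponents: dict mapping each different opponent -> its division."""
    # Requirement 1: the school's own round minimum.
    if rounds_played < MIN_ROUNDS[(division, gender)]:
        return False
    d1_min = MIN_ROUNDS[("D1", gender)]
    # Scenario A: different opponents at the school's level or higher,
    # equal to the Division 1 round minimum.
    at_level = sum(1 for d in opponents.values() if d in AT_OR_ABOVE[division])
    if at_level >= d1_min:
        return True
    # Scenario B: twice that many different opponents of any division.
    return len(opponents) >= 2 * d1_min

print(is_eligible("D3", "M", 8, {"Alpha": "D2", "Beta": "NAIA", "Gamma": "D3"}))
# False: enough rounds, but not enough different opponents either way
```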



STRENGTH OF SCHEDULE CALCULATION:

Strength of schedule is determined by the team adjusted scoring average of a school's opponents. For this calculation, the adjusted scoring average of those opponents has an additional adjustment component based upon the rounds played by the opponent compared to the current minimum rounds required for a Division 1 school. Any opponent not having played that required minimum number of rounds will have their adjusted scoring average adjusted upward.
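
Here is a minimal sketch of that calculation. Golfstat does not publish the size of the upward adjustment, so the per-round penalty below is an assumed placeholder, as is the simple averaging.

```python
D1_MIN_ROUNDS = 15        # current Division 1 round minimum (end of year)
PENALTY_PER_ROUND = 0.10  # assumed strokes added per round of shortfall

def strength_of_schedule(opponents):
    """opponents: list of (adjusted_scoring_average, rounds_played)."""
    adjusted = []
    for avg, rounds in opponents:
        shortfall = max(0, D1_MIN_ROUNDS - rounds)
        # Opponents below the D1 round minimum are adjusted upward.
        adjusted.append(avg + shortfall * PENALTY_PER_ROUND)
    return sum(adjusted) / len(adjusted)

print(strength_of_schedule([(292.0, 20), (295.5, 10)]))  # 294.0; second team bumped up
```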



EXPLANATION OF THE RELATIVE RANKINGS:

HOW TO UNDERSTAND THE RANKINGS:
Welcome to the Relative Rankings. Here is some insight into how to think about the ranking and how the calculations are made.

The goal of this tutorial is to cover every aspect of the Relative Rankings in understandable language. As always, a Coach may contact me to discuss anything regarding the rankings. The ranking is quite simple: it compares each team, one-on-one, against every other team in the country, and it does this in a relative manner. What that means is that a percentage value is applied to every one of those decisions. This is called the Relative Win Loss Percentage (RWLP). The number of teams you are up on still determines the order of the final ranking, but now every one of those decisions is made with a relative answer, which means that the actual difference between two teams may be 60%-40% or 70%-30% as opposed to 100%-0%.

Here are five things to think about when trying to understand the ranking:

1. Every event you play in is actually a series of dual matches and comparisons. If you play in an event where you are 3-0 against all of the Schools going in, and you win, you may not see any movement, but look at the comparison of your RWLP from the previous Ranking. If you beat those Schools by as much as before, you will see an improvement in your RWLP versus each School. The same can be said of a situation where you are 0-3 against all of the Schools: again, if you win the event, you have improved your RWLP, but you are still most likely behind all of those Schools.
It becomes interesting when you are 1-0 or 0-1 against all of the Schools. In these situations, suppose there were 15 Schools and you finished 5th: if you were 1-0 going in, you will most likely see a small drop in your ranking; if you were 0-1, you will most likely see a nice bounce in your ranking. At worst you should see improvement in the RWLP against those Schools where you were previously behind.
Remember that the most important thing to look at is your rate of change in RWLP every week against every other School.

2. Just because you are up on a School (1 on 1) does not guarantee that you are ahead of them in the rankings. It is statistically impossible for that scenario to always hold true in a large-field ranking, and we have hundreds of Schools in our field. To simplify this concept, think of Major League Baseball: a team in first place can have a losing record against a team behind them, and this is often the case. The reason is that all games count equally. Here is also an answer to the question: if A beats B, B beats C, and C beats A, who is better? The answer: how have they all done against Teams D-Z?

3. Here is a hypothetical question. Suppose you had 20 Schools (A-T). Let's say that School A has beaten all other 19 Schools 6 out of 10 times (and, to simplify, always by the same number of shots). We know School B is 4-6 against School A, but let's say that School B is 10-0 (same shot differential in all events) against the other 18 Schools. The hypothetical question is: who is better, A or B? The actual answer is that it depends upon your perspective. The perspective the Committees favor is that being ahead of more Schools is more important, so that is what we do. However, an equally good argument could be made for the other perspective. In one perspective, School A has actually proven they can beat everyone. In the other perspective, the laws of probability favor School B, but they have not proven it. In the Relative Ranking both perspectives are presented to give the Committees more in-depth data to make their decisions.

4. Every time the ranking is run it starts from scratch using the data that is available (which keeps growing). What that means is as follows: no School has a pre-determined advantage, nor does it help to start the early season well. Whether we run the season forwards or backwards does not matter, because there is no time-sensitive piece to analyzing the data. Nothing from the previous season is ever used. Movement early in the season can be wild in both directions, whereas late in the season any added data is a smaller percentage of the total data, so there will be less movement.

5. The Ranking is one tool the Committees use, but they have other criteria, other tools, and other data available to make their decisions. If we have done our job properly, the Committees have all of the information they need to do two things: first, to make the best decisions possible; second, to be able to totally defend those decisions. Some think that the Committees just take the rankings as is. I can assure you from experience that does not happen. On the other hand, there are coaches who think that the rankings should be used as the total decision maker exactly as they are. I have always said (and I HAVE NEVER DEVIATED) that would be a bad idea, because no matter how good a model is, it cannot handle certain exceptions. The greatest computer in the world is the human mind. It can analyze data more rationally than any electronic device. What it cannot do is handle the volume of data needed to make some decisions. Electronic devices can break data down into smaller compartments that then allow the "human computer" to properly analyze the end result. I could give you many examples over 25 years where unique circumstances necessitated a human being the final arbiter.

I realize everyone wants to totally understand the rankings. You can, but no one can ever totally understand the week-to-week movement, because the human mind cannot comprehend that much data (where everything that took place has meaning) in a short period of time. When I look at the rankings every week I also have a pre-determined mindset about what I think they will do. It does not always work. The difference for me is that I will then go look at all of the raw data (the exact same raw data that the committee has total access to) to see if it makes sense. From an accuracy point of view, the data has always made sense. From time to time over the years, there have been conceptual parts that may not make sense, and then the question has to be asked: is that just an abnormality, or will it happen often enough, and can we do something to resolve the situation that does not cause "harm" in other ways? All abnormalities are brought to the attention of the Committees to analyze.

Most notable concerns that are addressed in the Relative Ranking:

1. There has always been an argument that it was unfair to look at all events equally when they might consist of a different number of rounds. Therefore, all stroke differentials are viewed on a per-round basis as opposed to an event basis.

2. The model goes through a series of filters to make all of the one-on-one decisions. When a filter is reached that has the needed data, the decision is made at that point. The problem with this is that there are certain situations where the amount of data available to make the decision in a given filter is inadequate. If the minimum is not reached, the filter is pro-rated with the next filter below it until that minimum has been achieved. This allows for better decisions in situations where the amount of data is inadequate. By the end of the year this is never a problem for DI, but it especially helps DII, DIII, NAIA, and JCAA data be analyzed properly throughout the season.

3. All decisions involving Level 1 or Level 2 Common Opponents are sensitive to both the wins and losses between the Schools as well as the stroke differential. Example: School A beats School C by 6 shots per round and School B beats School C by 4 shots per round. In the Relative Ranking the RWLP is allocated by how close that stroke differential is: the greater the differential, the greater the value awarded to the School with the better stroke differential. In the common opponent filter there is a .5 point value awarded for stroke differential (see the sketch after this list).
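
Here is a minimal sketch of points 1 and 3: stroke differentials normalized per round, and the .5 stroke-differential point split relatively against a common opponent. The exact split formula is not published, so the scaling below is an assumption for illustration.

```python
def per_round_diff(own_strokes, opp_strokes, rounds):
    """Point 1: differentials use actual strokes over actual rounds played."""
    return (opp_strokes - own_strokes) / rounds

def split_half_point(diff_a, diff_b, scale=0.05):
    """Point 3: divide the .5 stroke-differential point between A and B.
    diff_a, diff_b: per-round differentials vs. the common opponent.
    scale is an assumed shift in share per stroke of advantage."""
    share_a = min(max(0.5 + (diff_a - diff_b) * scale, 0.0), 1.0)
    return 0.5 * share_a, 0.5 * (1.0 - share_a)

# The example above: A beat the common opponent by 6 shots per round, B by 4.
print(split_half_point(6, 4))  # (0.3, 0.2) -- A gets the larger slice
```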

Bottom line: always look at every event as a series of duals.

RELATIVE RANKING CALCULATION:

Every School that has met the eligibility requirements is compared head to head against every other School in their Division. In determining which school gets the "Comparative Win" in the head to head comparison, a filtering decision-making process is used. The data analyzed is used in a Relative way, meaning that each School gets a percentage of the data favoring them (RWLP). The model keeps track of both the RWLP and the "Comparative Wins". As an example: if School A vs School B comes out in a Relative fashion as 60% of the data for School A and 40% for School B, School A gets the "Comparative Win". If the Relative data ends up 50/50, the two Schools each get 1/2 of a "Comparative Win" (this is almost impossible). If there were a School C who goes against School B and gets 70% of the Relative data, in the end ranking you would have Schools A and C with 1 "Comparative Win" and School B with none. School C would be ranked higher than A because the higher RWLP is the tie breaker: their 70% average would exceed School A's 60% average. Of course in reality there are hundreds of Schools, but the concept works exactly the same.
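
Here is a minimal sketch of that bookkeeping. The pairwise RWLP values are treated as inputs (producing them is the job of the filters described below), and the 50/50 value for A versus C is an assumption added just to complete the three-School round robin.

```python
from itertools import combinations

def rank(schools, rwlp):
    """rwlp[(x, y)] = share of the data favoring x over y (0..1)."""
    wins = {s: 0.0 for s in schools}
    shares = {s: [] for s in schools}
    for a, b in combinations(schools, 2):
        p = rwlp[(a, b)]
        shares[a].append(p)
        shares[b].append(1.0 - p)
        if p > 0.5:
            wins[a] += 1.0            # a takes the Comparative Win
        elif p < 0.5:
            wins[b] += 1.0
        else:                         # exact 50/50: half a win each (rare)
            wins[a] += 0.5
            wins[b] += 0.5
    avg_rwlp = {s: sum(v) / len(v) for s, v in shares.items()}
    # Comparative Wins order the ranking; average RWLP breaks ties.
    return sorted(schools, key=lambda s: (wins[s], avg_rwlp[s]), reverse=True)

# The example above, with A vs C assumed to split 50/50:
print(rank(["A", "B", "C"],
           {("A", "B"): 0.60, ("A", "C"): 0.50, ("B", "C"): 0.30}))
# ['C', 'A', 'B'] -- C outranks A on the RWLP tie breaker
```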

NOTE: In any comparison, all stroke differentials are calculated on a per-round basis using actual strokes and actual rounds played. Also, in the examples below, 16 is the minimum number of Level 1 Common Opponents you need by the end of the year and 25 is the minimum number when you combine Level 1 and Level 2 Common Opponents. In reality, these minimum numbers grow throughout the season and are based on the minimum number of rounds requirement.

FILTER 1 - Head to Head competition. If the two Schools have not played, go to FILTER 2:
Filter 1 is analyzed by comparing all Common Opponents as to the differential in victories against said Common Opponent and also the Relative Stroke differential (i.e. the greater the differential, the more value awarded). A Winning Bonus Factor is also awarded to the School that is up in the Head to Head Competition when analyzing the Common Opponent comparison. That Winning Bonus Factor is in addition to the fact that the winning School has already benefited from the victory and the stroke differentials gained when they competed Head to Head. Half a point (.5) is awarded from the analysis of the two Schools' records against the Common Opponent, and half a point (.5) is awarded for the disparity in stroke differential between the two Schools and the Common Opponent. If there are not at least 16 Common Opponents (which is actually FILTER 2), then FILTER 3, which is Level 2 Common Opponents, is also analyzed. If after analyzing Level 2 Common Opponents there are fewer than 25 total Level 1 and Level 2 Comparisons, then FILTER 4 (Adjusted Scoring, Adjusted Drop Score, and Strength of Schedule) is utilized in a pro-rated fashion. In other words, if there were 24 Level 1 and Level 2 Common Opponents (but fewer than 16 Level 1), the results of that data would be weighted as 96% of the decision (24/25) and the FILTER 4 data would be weighted as 4% (1/25) of the decision, as in the sketch below.
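
A minimal sketch of that pro-ration, using the end-of-year minimums of 16 and 25. The share inputs represent School A's fraction of the data in each filter; the function name and inputs are illustrative.

```python
LEVEL1_MIN = 16     # end-of-year Level 1 Common Opponent minimum
COMBINED_MIN = 25   # end-of-year Level 1 + Level 2 comparison minimum

def prorate(n_level1, n_combined, common_opp_share, filter4_share):
    """Blend common-opponent results with FILTER 4 when data is thin."""
    if n_level1 >= LEVEL1_MIN or n_combined >= COMBINED_MIN:
        return common_opp_share                 # enough data: no pro-ration
    w = n_combined / COMBINED_MIN               # e.g. 24 comparisons -> 96%
    return w * common_opp_share + (1 - w) * filter4_share

# 24 combined comparisons favoring School A at 55%, FILTER 4 favoring A at 70%:
print(prorate(10, 24, 0.55, 0.70))              # 0.96*0.55 + 0.04*0.70 = 0.556
```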

FILTER 2 - Common Opponents: Any common opponents (Division 1, 2, 3, NAIA, or JCAA) are used to make the determination. Both Level 1 and Level 2 Common Opponents (FILTER 3) are used to determine the percentage of Common Opponents a team is up on. The Level 1 Common Opponents are figured the same way they are in FILTER 1. If there are fewer than 25 Level 1 and Level 2 Common Opponents, the same pro-ration applied in FILTER 1 is applied here.

FILTER 3 - A Level 2 Common Opponent arises where the comparison is between School A and School B: School A has played School C, but not School D; School B has played School D, but not School C; however, Schools C and D have played each other. Each of these comparisons is worth .5 of a point, but only Stroke Differential is used, since there is no logical way in this multi-level comparison to deal with the various Wins and Losses. There is, however, a Winning Bonus Factor awarded for multiple victories over an opponent in the various calculations in this filter. Since the Level 2 Opponents and sub-opponents of the C and D Schools can be different, the comparison is viewed from both the perspective of School A and the perspective of School B. Between Schools A and B, the School with the highest percentage of points is the winner of the comparison.
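
A minimal sketch of finding Level 2 Common Opponent pairs from a schedule map, per the definition above, viewed from School A's perspective. Names and data shapes are illustrative.

```python
def level2_pairs(a, b, played):
    """played: dict mapping each school to the set of opponents it has faced."""
    pairs = []
    for c in played[a] - played[b] - {b}:      # A's opponents that B never saw
        for d in played[b] - played[a] - {a}:  # B's opponents that A never saw
            if d in played[c]:                 # ...where C and D have played
                pairs.append((c, d))           # each comparison worth .5 point
    return pairs

schedule = {"A": {"C"}, "B": {"D"}, "C": {"A", "D"}, "D": {"B", "C"}}
print(level2_pairs("A", "B", schedule))        # [('C', 'D')]
```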

FILTER 4 - FINAL FILTER - A combination of Adjusted Stroke Average, Strength of Schedule, and Average Drop Score is used. This FILTER is weighted internally as follows: Adjusted Scoring Average 60%, Strength of Schedule 25%, Adjusted Drop Score 15%. The points are also awarded in a Relative fashion. In other words, if School A has an Adjusted Scoring Average of 70.00 and School B is at 70.01, School A is not a black-and-white winner receiving the full 60% of that awarded value. Rather, both Schools would receive close to 30%, with School A getting slightly more than 30% and School B getting slightly less than 30%.
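
A minimal sketch of that relative allocation. How a raw gap translates into a share is not published, so the clamped linear scale below is an assumption, as are the example component values.

```python
WEIGHTS = {"adj_avg": 0.60, "sos": 0.25, "drop": 0.15}
SCALE = 0.1   # assumed: share shifts by 10% per full stroke of gap

def filter4_share(a, b):
    """a, b: dicts of adj_avg, sos, drop values (lower is better).
    Returns School A's share (0..1) of the FILTER 4 decision."""
    total = 0.0
    for key, weight in WEIGHTS.items():
        gap = b[key] - a[key]   # positive gap favors School A
        share_a = min(max(0.5 + gap * SCALE, 0.0), 1.0)
        total += weight * share_a
    return total

# The example above: 70.00 vs 70.01 splits the 60% component nearly evenly.
a = {"adj_avg": 70.00, "sos": 73.00, "drop": 76.00}
b = {"adj_avg": 70.01, "sos": 73.00, "drop": 76.00}
print(filter4_share(a, b))  # ~0.5006 -- just above an even split
```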

There are 6 potential decision-making paths (a sketch after the list routes a comparison to its path):

1: Schools A and B have played Head to Head and have at least 16 Level 1 Common Opponents.

2: Schools A and B have played and have fewer than 16 Level 1 Common Opponents, but at least 25 Level 1 and Level 2 Common Opponent comparisons.

3: Schools A and B have played and have fewer than 16 Level 1 Common Opponents and fewer than 25 Level 1 and Level 2 Common Opponent comparisons. In this path, FILTER 4 is utilized on a pro-rated basis as discussed above.

4: Schools A and B have not played, but have at least 25 Level 1 and Level 2 Common Opponent Comparisons.

5: Schools A and B have not played and have fewer than 25 Level 1 and Level 2 Common Opponent Comparisons. In this path, FILTER 4 is utilized on a pro-rated basis as discussed above.

6: Schools A and B have not played and have zero Level 1 or Level 2 Common Opponents. In this situation, FILTER 4 is used entirely. By the end of the year, the use of this path is statistically insignificant.
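
A minimal sketch that routes a single comparison to its path, again assuming the end-of-year thresholds of 16 and 25.

```python
def decision_path(played_h2h, n_level1, n_combined):
    """Route one A-vs-B comparison to paths 1-6 (end-of-year minimums)."""
    if played_h2h:
        if n_level1 >= 16:
            return 1        # Head to Head plus full Level 1 data
        if n_combined >= 25:
            return 2        # Head to Head plus combined Level 1/Level 2 data
        return 3            # Head to Head plus pro-rated FILTER 4
    if n_combined >= 25:
        return 4            # combined common-opponent data only
    if n_combined > 0:
        return 5            # thin common-opponent data, pro-rated FILTER 4
    return 6                # no overlap at all: FILTER 4 entirely

print(decision_path(True, 12, 24))   # 3
```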



EXPLANATION OF THE PRESENTATION OF THE STANDINGS:

Since this Ranking is referred to as, and handled like, a Standing, the main presentation is in the form of a won-loss record with the format being (wins/total opponents). Anytime you see one of the presentations that shows both a Division and a Region Standing, note that those standings only relate to that particular universe. Therefore, the following hypothetical relationship is possible. Assume that you had a Division with 100 schools and 10 Regions within that division of 10 schools each. It is possible for a school to be 90-9 in the Division and be towards the top of that Division universe (and certainly ahead of every school in their Region). However, all 9 of their "losses" could be against the other 9 schools in their Region universe, so they could be 0-9 and in last place behind all of the Region Schools in that universe even though they are ahead of all of those schools in the Division universe.

I have never seen that particular hypothetical scenario take place, but I often see movement where schools can be ahead of other schools in their Region universe, yet behind them in the Division universe. Admittedly, this seems illogical, but the main use of these standings is for the NCAA, NAIA and JCAA to use in their determination of post season participants. Therefore, those situations are very helpful as yellow flags where the system is not in perfect alignment. Those “yellow flags” are the system telling the selection people to ask why and take a deeper look at those situations that involve “bubble teams”. Let me editorialize here briefly: no matter what ANYONE tells you, ALL SYSTEMS have the potential to produce results that would create yellow flags. Golfstat’s H-T-H Standings are the only ones that point those situations out to aid in the selection process. This is precisely why I have always told the NCAA, NAIA, and JCAA that what Golfstat does is produce good analytical data for humans to make good analytical decisions. Furthermore, I always tell the associations that they should never move away from having humans make decisions, because there are always situations created during the season that cannot be “perfectly” accommodated by any model. With all of that said, Golfstat’s Relative Ranking is the only model that uses 100% of the official data and also the only model that uses totally accurate data.

Web Page and EMAIL Coach’s Reports Presentation:

The first column after the school name, Division Standing, is the wins/opponents record for each school based upon the head to head filter previously explained. Any school that has an N/E to the left of their name is not eligible for NCAA post season play. NAIA does not even list schools not eligible for post season play.

Adjusted Scoring Average: The average score of the counting scores, adjusted for conditions (see the explanation of adjusted scoring). Adjusted Scoring is a more relevant basis of comparison than the raw scoring numbers.

Average Drop Score: The average score of the non-counting scores, adjusted for conditions.

Versus Top 25: This shows what the Division Standing Record is for those schools listed in the Top 25 of the Standings that this particular school has actually played. Remember that a school can actually have played another school and beaten them, but still have a loss in the filter system should other data overwhelmingly overturn that competition.

Rank of Schedule: This shows the rank of schedule for this school based upon the explanation listed before. Rank of schedule encompasses all NCAA Division 1, 2, and 3 and NAIA Schools.

Wins: This shows the number of wins for this school in events that they have played where there were at least 5 schools competing. Remember that the system ignores any school that is not affiliated with the NCAA, NAIA, or JCAA.

Eligible Tournaments OFFICIALLY REPORTED and used at Run Date: This shows how many of the tournaments registered and scheduled to be completed by the run date have turned in their results and had those results accepted as OFFICIAL by Golfstat according to the various rules stipulated by the NCAA, NAIA, or JCAA.

Selection Committee Presentation:

The first column after the school name, REG., shows the Region that school plays in.

Division Standing is the won-loss record for each school based upon the head to head filter previously explained. Notice that every team's total of competitions (wins and losses) is equal to the number of schools shown minus themselves. Any school that has an N/E to the left of their name is not eligible for NCAA post season play. NAIA does not even list schools not eligible for post season play.

Reg. Rank (or Div. Rank if looking at a Regional Printout) shows that school's ranking in the standings for the next column to the right, which will show the Region Standings (or Division if looking at a Regional Printout). These two standings columns shown together can reveal (especially when looking at a Regional Printout) those yellow flags that were discussed earlier.

Adjusted Scoring Average: The average score of the counting scores, adjusted for conditions (see the explanation of adjusted scoring). Adjusted Scoring is a more relevant basis of comparison than the raw scoring numbers.

Average Drop Score: The average score of the non-counting scores, adjusted for conditions.

Versus Division Top 25: This shows what the Division Standing Record is for those schools listed in the Top 25 of the Standings that this particular school has actually played. Remember that a school can actually have played another school and beaten them, but still have a loss in the filter system should other data overwhelmingly overturn that competition.

Versus Division 26-50: Same explanation as above using the next 25 schools.

Versus Region Top 10: Same explanation as above using Top 10 schools in Region.

Versus Region 11-20: Same explanation as above using next 10 schools.

Rank of Schedule: This shows the rank of schedule for this school based upon the explanation listed before.

Wins: This shows the number of wins for this school in events that they have played where there were at least 5 schools competing. Remember that the system ignores any school that is not affiliated with the NCAA, NAIA, or JCAA.



 
