Golfstat Mobile Apps:
Rankings Explained

Ratings and Rankings Explanations

History:

Back when I started with the NCAA Women in 1989, they had a system that used course rating (slope was still fairly new at the time). By 1992 that system had been abolished for the following reasons:

While they correctly required all golf courses to be rated based upon the yardage set up for that particular event, there were still differences in how people rate golf courses (if you think this isn't true, there is no need for you to read any further; and don't call me either).

Even with a well-rated golf course, the setup and/or the conditions can have a large impact on scoring.

The system was being abused as coaches found out which golf courses were rated harder than they should be (everyone wanted to go to those events) and which courses were rated easier than they played (those were the events to avoid).

When this system was abolished we went to a new system where actual head-to-head records became the most important criterion (and still are today, as they should be). In the first few years we were not even allowed to release any scoring information to the selection committees.

In 1995 I remember looking at the results from a Men's Tournament in Hawaii. The field was so strong that the average score of the players in this field was about 1 over par. On the first day the winds were over 40 miles per hour and the average score was 85. On day two, under more normal conditions, the average score was around 73. It dawned on me that we could use the actual data to determine which rounds were flukes and which rounds were normal. So I created Adjusted Ratings. After about a year and a half of testing and a lot of discussion, the NCAA started using Adjusted Ratings as a reliable indicator of scoring average. Surprisingly, only a few minor modifications were made to the first model, and even after 5 years of testing and scrutiny the Adjusted Ratings have performed as flawlessly as any system can, accepting that no system is perfect.

How do Adjusted Ratings work?

Every time we run a rating (a ranking of scoring averages) we also run the Adjusted Ratings side by side. It has gotten to the point where we run a rating almost every day, and sometimes more than once a day once the season is established (everyone has played). We start with the raw scoring average versus par and equalize every score to a par 72. Then we go into every round of every tournament and look at the scoring averages of all of the players in that field. We compare their scoring average as a group with what they actually averaged as a group in each round. For example: if we went into an event where the average score of the field (based upon par 72) was 74 and the average score of the field for the first round was 76, we would adjust all scores down by 2 shots. If in the next round the average score of the field is 73, we would adjust all scores for that day up by 1 shot. The theory is that we know something caused those players (as a group) to shoot that average score for the day. We call that difference "conditions". We don't really care what those conditions are. They could be weather, they could be course set up, or they could be the ease or difficulty of the golf course. It is possible that the group, as a group, just had an abnormal day. The law of large numbers works in our favor, as "abnormal group days" are quite unlikely. Of course there can be small fields. The adjustment takes that into consideration, as those results are weighted less heavily in calculating the Adjusted Ratings. One of the premises of the Adjusted Ratings is that it is better to under-adjust in those situations where data is less reliable than to over-adjust.
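To make the arithmetic concrete, here is a minimal sketch of the round adjustment described above, written in Python. The function names and data are my own and purely illustrative; the real Golfstat model also weights small fields less heavily, which this sketch omits.

```python
# Minimal sketch of the "conditions" adjustment described above.
# Function names and data are illustrative, not Golfstat's actual code.

def round_adjustment(field_expected_avg: float, round_scores: list[float]) -> float:
    """Difference between what the field averages overall (normalized to
    par 72) and what it actually averaged in this particular round.
    A negative value means scores are adjusted down; positive means up."""
    round_avg = sum(round_scores) / len(round_scores)
    return field_expected_avg - round_avg

def adjusted_scores(field_expected_avg: float, round_scores: list[float]) -> list[float]:
    """Apply the same adjustment to every score in the round."""
    adj = round_adjustment(field_expected_avg, round_scores)
    return [score + adj for score in round_scores]

# Example from the text: the field averages 74 against par 72.
round1 = [78.0, 76.0, 74.0]               # field averages 76 in round 1
round2 = [74.0, 73.0, 72.0]               # field averages 73 in round 2
print(round_adjustment(74.0, round1))     # -2.0 -> adjust round 1 down 2 shots
print(round_adjustment(74.0, round2))     # +1.0 -> adjust round 2 up 1 shot
```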

The best part of the Adjusted Ratings system is that the adjustments are made in a dynamic fashion, meaning that every time we run an Adjusted Rating we take the latest raw scoring averages. Let's go back to our example and assume that the calculation was made in October. By April those same players in that field may now have an average score of 75. Therefore we would now adjust the first round down by 1 shot and the second round up by 2 shots. By doing this, the data gets more and more reliable, so that by the end of the season (the most critical time) we have identified those rounds of those tournaments where "conditions" dictated scores that were "not normal".
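Re-running the same sketch later in the season simply swaps in the newer raw field average; nothing else about the calculation changes. Using the hypothetical rounds from the sketch above:

```python
# By April the same field's raw average (versus par 72) has drifted to 75,
# so the identical rounds now receive different adjustments.
print(round_adjustment(75.0, round1))     # -1.0 -> round 1 now adjusted down 1 shot
print(round_adjustment(75.0, round2))     # +2.0 -> round 2 now adjusted up 2 shots
```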

Why are Adjusted Ratings a good indicator of scoring for College Golf?

Because they are based upon actual results and completely devoid of opinion, the Adjusted Ratings cannot be abused. If you go to a golf course that is very easy, you pretty much know that your scores will be adjusted upward, and conversely, if you go to a very difficult golf course or play in horrible conditions, you have the comfort of knowing that those scores will be adjusted down.

  • The dynamic nature of always re-adjusting throughout the year lessens the already low chance that the scores are affected by an "abnormal day by a group of players".
  • Course Rating and Slope cannot take into account weather conditions or course set up (both of which can drastically affect scores).
  • The Adjusted Ratings are totally objective in their approach (no opinions like: the weather was horrible).
  • Strength of field has zero impact on Adjusted Ratings as it is the golf course that is actually being adjusted.

This discussion is not to bash the USGA Course Rating and Slope System. That is a fine system that works nicely for Handicapping where you take your best 10 of 20 scores. Rather this discussion follows the idea that for College Golf, Adjusted Ratings are a better indicator of scoring.

In closing, here is the example that I always use. If we had a scale for rating the "systems" where 0 was imperfect and 10 was perfect, I would say that the following scores would be reasonable:

  • Raw Scores: 1
  • Slope and Course Rating: 2
  • Adjusted Ratings for NAIA: 4
  • Adjusted Ratings for NCAA Division III: 5
  • Adjusted Ratings for NCAA Division II: 6
  • Adjusted Ratings for NCAA Division I: 8

The first problem with what I have done here is that I have stereotyped by Division. The assumption is that Division I teams travel more and have a wider variety of opponents than Division II, and so on. While this assumption is correct for the Divisions as a whole, there are groups of teams within each Division that schedule more like one of the other Divisions (i.e., some Division I programs do not schedule a wide variety of teams while some NAIA programs do). A wider variety of opponents has a positive impact on the Adjusted Ratings, but even at the lowest level of scheduling the ratings are more reliable than a straight Slope and Course Rating system for the aforementioned reasons.

The bottom line is that better programs have a wider variety of opponents, and therefore the Adjusted Ratings work very well at the top end of the spectrum; and while they do not work as well at the lower end, the performance is still better than straight Slope and Course Rating. The main purpose of Golfstat is to provide information for the various Divisions to determine the schools advancing to post season. The difference in reliability between Adjusted Ratings and Slope and Course Rating is considerable for the potential post-season teams.

That is not to say that we won't someday come up with something even better. I am not closed-minded about this, and we are always trying to improve what we have.

Sincerely,

Mark Laesch

Golfstat's H-T-H and Relative Rank Explanation

Eligibility | Strength of Schedule Calculation | Relative Ranking Explanation | Presentation of the H-T-H Standings

TO BE ELIGIBLE FOR H-T-H or RELATIVE (2 Requirements):

1. Schools must have played at least the current minimum number of rounds. The end of year minimums are as follows:

  • Division 1 Men: 15
  • Division 2 Men: 12
  • Division 3 Men: 7
  • Division 1 Women: 15
  • Division 2 Women: 15
  • Division 3 Women: 12
  • NAIA Men & Women: 6
  • JCAA Men & Women: 10

2. Schools must have played a minimum number of different opponents during the year (NAIA & JCAA). Either of the following 2 scenarios can be met to satisfy this requirement:

A minimum number of different opponents in their Division or higher equal to the current minimum number of rounds for Division 1. For purposes of this comparison Division 1 is considered a higher level than Division 2. Division 2 is considered a higher level than Division 3, NAIA or JCAA. NAIA, JCAA and Division 3 are considered equal. (The previous statement should not be misconstrued to mean that the levels of golf are different. It is more a comparison of the ability to travel the country and play a larger array of opponents.)

or

A minimum number of different opponents (any Division 1,2,3, NAIA, or JCAA) equal to twice the current minimum number of rounds for Division 1.
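Here is a minimal sketch, under my own assumptions about how the inputs are represented, of how the two opponent-count scenarios above might be checked. The division ordering and the two thresholds come straight from the text; everything else (names, data shapes) is hypothetical.

```python
# Hypothetical check of the different-opponents requirement above.
# Division ordering per the text: D1 > D2 > (D3 == NAIA == JCAA).
LEVEL = {"D1": 3, "D2": 2, "D3": 1, "NAIA": 1, "JCAA": 1}

def meets_opponent_requirement(school_division: str,
                               opponent_divisions: list[str],
                               d1_round_minimum: int) -> bool:
    """opponent_divisions holds one entry per *different* opponent played."""
    same_or_higher = sum(1 for d in opponent_divisions
                         if LEVEL[d] >= LEVEL[school_division])
    # Scenario 1: enough opponents in your Division or higher.
    # Scenario 2: twice as many opponents from any Division.
    return (same_or_higher >= d1_round_minimum
            or len(opponent_divisions) >= 2 * d1_round_minimum)

# Example: a Division 2 school with 16 different opponents at D2 or higher
# satisfies scenario 1 when the Division 1 round minimum is 15.
print(meets_opponent_requirement("D2", ["D1"] * 4 + ["D2"] * 12, 15))   # True
```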



STRENGTH OF SCHEDULE CALCULATION:

Strength of schedule is determined by the team adjusted scoring average of a school's opponents. For this calculation, the adjusted scoring average of those opponents has an additional adjustment component based upon the number of rounds played by the opponent compared to the current minimum rounds required for a Division 1 school. Any opponent that has not played that required minimum number of rounds will have their adjusted scoring average adjusted upward.
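The text does not publish the size of that upward adjustment, so the sketch below is only directional: strength of schedule as the average of opponents' adjusted scoring averages, with a placeholder penalty of my own choosing for opponents short of the Division 1 round minimum.

```python
# Hedged sketch of the strength-of-schedule idea above. The penalty size is
# a placeholder; Golfstat's actual adjustment formula is not published.
PENALTY_PER_MISSING_ROUND = 0.1   # hypothetical

def strength_of_schedule(opponents: list[dict], d1_round_minimum: int) -> float:
    """opponents: e.g. [{"adj_avg": 72.5, "rounds": 14}, ...] (hypothetical shape).
    A lower result means the opponents score lower, i.e. a tougher schedule."""
    values = []
    for opp in opponents:
        shortfall = max(0, d1_round_minimum - opp["rounds"])
        # Opponents below the Division 1 round minimum are adjusted upward.
        values.append(opp["adj_avg"] + shortfall * PENALTY_PER_MISSING_ROUND)
    return sum(values) / len(values)
```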



EXPLANATION OF THE RELATIVE RANKINGS:

HOW TO UNDERSTAND THE RANKINGS:

Welcome to the Relative Rankings. Here is some insight on how to think about the ranking and how those calculations are made.

The goal of this tutorial is to cover every aspect of the Relative Rankings, in understandable language. As always, a Coach may contact me to discuss anything regarding the rankings. The ranking is quite simple. It compares each team, one-on-one, against every other team in the country. It does this in a relative manner. What that means is that a percentage value is applied to every one of those decisions. This is called the Relative Win Loss Percentage (RWLP). The number of teams you are up on still determines the order of the final ranking, but now every one of those decisions is made with a relative answer which means that the actual difference between two teams may be 60%-40% or 70%-30% as opposed to 100%-0%.

Here are five things to think about when trying to understand the ranking:

1. Every event you play in is actually a series of dual matches and comparisons. If you play in an event where you are 3-0 against all of the Schools going in, and you win, you may not see any movement, but look at the comparison of your RWLP from the previous Ranking. If you beat those Schools by as much as you did before, you will see an improvement in your RWLP versus each of those Schools. The same can be said for a situation where you are 0-3 against all of the Schools. Again, if you win the event, you have improved your RWLP, but you are still most likely behind all of those Schools.
Where it becomes interesting is when you are 1-0 or 0-1 against all of the Schools. In these situations, suppose there were 15 Schools and you finished 5th: if you were 1-0 going in, you will most likely see a small drop in your ranking; if you were 0-1, you will most likely see a nice bounce in your ranking. At worst you should see improvement in the RWLP against those Schools where you were previously behind.
Remember that the most important thing to look at is your rate of change in RWLP every week against every other School.

2. Just because you are up on a School (1 on 1) does not guarantee that you are ahead of them in the rankings. It is statistically impossible for that scenario to always hold true in a large-field ranking, and we have hundreds of Schools in our field. To simplify this concept, think of Major League Baseball. A team in first place can have a losing record against a team behind them, and this is often the case. The reason is that all games count equally. Here is also the answer to the question: if A beats B, B beats C, and C beats A, who is better? The answer is: how have they all done against Teams D through Z?

3. Here is a hypothetical question. Suppose you had 20 Schools (A-T). Let's say that School A has beaten all other 19 Schools 6 out of 10 times (and, to simplify, always by the same number of shots). We know School B is 4-6 against School A, but let's say that School B is 10-0 (same shot differential in all events) against the other 18 Schools. The hypothetical question is: who is better, A or B? The actual answer is that it depends upon your perspective. The perspective the Committees favor is that the number of Schools you are ahead of is more important, so that is what we do. However, an equally good argument could be made for the other perspective. In one perspective, School A has actually proven they can beat everyone. In the other perspective, the laws of probability favor School B, but they have not proven that. In the Relative Ranking both perspectives are given to help give the Committees more in-depth data to make their decisions.

4. Every time the ranking is run it starts from scratch using the data that is available (which keeps growing). What that means is as follows: no School has a pre-determined advantage, nor is a School helped by starting the early season well. If we ran the season forwards or backwards it would not matter, because there is no time-sensitive piece to analyzing the data. Nothing from the previous season is ever used. Movement early in the season can be wild in both directions, whereas late in the season any added data is a smaller percentage of the total data, so there will be less movement.

5. The Ranking is one tool the Committees use, but they have other criteria, other tools, and other data available to make their decisions. If we have done our job properly, the Committees have all of the information they need to do two things. The first is to make the best decisions possible. The second is to be able to totally defend those decisions. Some think that the Committees just take the rankings as is. I can assure you from experience that does not happen. On the other hand, there are coaches who think the rankings should be used as the total decision maker exactly as they are. I have always said (and I HAVE NEVER DEVIATED) that this would be a bad idea because no matter how good a model is, it cannot handle certain exceptions. The greatest computer in the world is the human mind. It can analyze data more rationally than any electronic device. What it cannot do is handle the volume of data needed to make some decisions. Electronic devices can break data down into smaller compartments that then allow the "human computer" to properly analyze the end result. I could give you many examples over 25 years where unique circumstances necessitated a human being the final arbiter.

I realize everyone wants to totally understand the rankings. You can, but no one can ever totally understand the week-to-week movement because the human mind cannot comprehend that much data (where everything that took place has meaning) in a short period of time. When I look at the rankings every week I also have a pre-determined mindset on what I think they will do. It does not always work. The difference for me is that I will then go look at all of the raw data (the exact same raw data that the committee has total access to) to see if it makes sense. From an accuracy point of view, the data has always made sense. From time to time over the years, there have been conceptual parts that may not make sense, and then the question has to be asked: is that just an abnormality, or will it happen often enough that we can do something to resolve the situation without causing "harm" in other ways? All abnormalities are brought to the attention of the Committees to analyze.

Most notable concerns that are addressed in the Relative Ranking:

1. There has always been an argument that it was unfair to look at all events equally when they might consist of a different number of rounds. To address this, all stroke differentials are viewed on a per-round basis as opposed to an event basis.

2. The model goes through a series of filters to make all of the one-on-one decisions. When a filter is reached that has the needed data, the decision is made at that point. The problem with this is that there are certain situations where the amount of data available to a given filter is inadequate. If that minimum is not reached, the filter is pro-rated with the next filter below it until that minimum has been achieved. This allows for making better decisions in situations where the amount of data is inadequate. By the end of the year this is never a problem for DI, but it helps DII, DIII, NAIA, and NJCAA in particular to properly analyze data throughout the season.

3. All decisions involving Level 1 or Level 2 Common opponents are sensitive to both the wins and losses between the Schools as well as the stroke differential. Example: School A beats School C by 6 shots per round and School B beats School C by 4 shots per round. In the Relative Ranking the RWLP is allocated by how close that stroke differential is. The greater the differential, the greater the value awarded to the School that has a better stroke differential. In the common opponent filter there is a .5 point value awarded for stroke differential.

Bottom line, always look at every event as a series of duals.

RELATIVE RANKING CALCULATION:

Every School that has met the eligibility requirement is compared head to head against every other School in their Division. In determining which school gets the "Comparative Win" in the head to head comparison, a filtering decision-making process is used. The data analyzed is used in a Relative way. What this means is that each School gets a percentage of the data favoring them (RWLP). The model keeps track of both the RWLP and the "Comparative Wins". As an example: if School A vs School B comes out in a Relative fashion as 60% of the data for School A and 40% of the data for School B, School A gets the "Comparative Win". If the Relative data ends up 50/50, the two Schools each get 1/2 of a "Comparative Win" (this is almost impossible). If there were a School C who goes against School B and gets 70% of the Relative data, in the end ranking you would have Schools A and C with 1 "Comparative Win" and School B with none. School C would be ranked higher than A because the higher RWLP is the tie breaker: their 70% average would exceed School A's 60% average. Of course in reality there are hundreds of Schools, but the concept works exactly the same.
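A short sketch of that example follows. It only includes the two comparisons named in the text (in reality every pair of eligible Schools in the Division is compared), and the data structures are of my own choosing.

```python
# Sketch of "Comparative Wins" with average RWLP as the tiebreaker, using the
# A/B/C example from the text. Data structures are illustrative only.
from collections import defaultdict

# rwlp[(X, Y)] = share of the relative data favoring X in the X-vs-Y comparison.
rwlp = {("A", "B"): 0.60, ("C", "B"): 0.70}

wins = defaultdict(float)
shares = defaultdict(list)

for (x, y), s in rwlp.items():
    shares[x].append(s)
    shares[y].append(1.0 - s)
    if s > 0.5:
        wins[x] += 1.0
    elif s < 0.5:
        wins[y] += 1.0
    else:                       # a 50/50 split: half a Comparative Win each
        wins[x] += 0.5
        wins[y] += 0.5

schools = ["A", "B", "C"]
avg_rwlp = {t: (sum(shares[t]) / len(shares[t]) if shares[t] else 0.0) for t in schools}

# Order by Comparative Wins first, then by average RWLP as the tiebreaker.
ranking = sorted(schools, key=lambda t: (wins[t], avg_rwlp[t]), reverse=True)
print(ranking)   # ['C', 'A', 'B'] -- C and A each have 1 win; C's 70% beats A's 60%
```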

NOTE: In any comparisons, all stroke differentials are calculated on a per-round basis using actual strokes and actual rounds played. Also, in the examples below, 16 is the minimum number of Level 1 Common Opponents you need by the end of the year and 25 is the minimum number when you combine Level 1 and Level 2 Common Opponents. In reality, these minimum numbers grow throughout the season and are based on the minimum number of rounds requirement.

FILTER 1 - Head to Head competition. If the two Schools have not played, go to FILTER 2:
Filter 1 is analyzed by comparing all Common Opponents as to the differential in victories against said Common Opponent and also the Relative Stroke differential (i.e., the greater the differential, the more value awarded). A Winning Bonus factor is also awarded to the School that is up in the Head to Head competition when analyzing the Common Opponent comparison. That Winning Bonus Factor is in addition to the fact that the winning School has already benefited from the victory and the stroke differentials gained when they competed Head to Head. .5 of a point is awarded from the analysis of the two Schools' records against the Common Opponent and .5 of a point is awarded for the disparity of stroke differential between the two Schools and the Common Opponent. If there are not at least 16 Common Opponents (which is actually FILTER 2), then FILTER 3, which is Level 2 Common Opponents, is also analyzed. If after analyzing Level 2 Common Opponents there are less than 25 total Level 1 and Level 2 Comparisons, then FILTER 4 (Adjusted Scoring, Adjusted Drop Score, and Strength of Schedule) is utilized in a pro-rated fashion. In other words, if there were 24 Level 1 and Level 2 Common Opponents (but less than 16 Level 1), the results of that data would be weighted as 96% of the decision (24/25) and the FILTER 4 data would be weighted as 4% (1/25) of the decision.
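The pro-rating at the end of FILTER 1 is just a weighted blend. Using the 24-of-25 example above, and two hypothetical per-filter shares that I made up purely to show the arithmetic:

```python
# Pro-rating sketch: 24 Level 1 + Level 2 comparisons against a minimum of 25
# means the common-opponent result carries 24/25 of the decision and FILTER 4
# carries 1/25. The two input shares below are hypothetical.
comparisons = 24
minimum = 25

common_opponent_share = 0.58   # hypothetical: School A's share from the common-opponent data
filter4_share = 0.45           # hypothetical: School A's share from FILTER 4

weight = min(comparisons, minimum) / minimum                          # 0.96
blended = weight * common_opponent_share + (1 - weight) * filter4_share
print(round(blended, 4))       # 0.5748 -> School A's RWLP for this comparison
```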

FILTER 2 - Common Opponents: Any common opponents (Division 1, 2, 3, NAIA, or JCAA) are used to make the determination. Both Level 1 and Level 2 Common Opponents (FILTER 3) are used to determine the percentage of Common Opponents a team is up. The Level 1 Common Opponents are figured the same way they are in Filter 1. If there are less than 25 Level 1 and Level 2 Common opponents, the same pro-ration applied in FILTER 1 is applied here.

FILTER 3 - A Level 2 Common Opponent arises where the comparison is between School A and School B: School A has played School C but not School D, School B has played School D but not School C, and Schools C and D have played each other. Each of these comparisons is worth .5 of a point, but only Stroke Differential is used since there is no logical way in this multi-level comparison to deal with the various Wins and Losses. There is, however, a Winning Bonus Factor awarded for multiple victories over an opponent in the various calculations in this filter. Since the Level 2 Opponents and sub-opponents of the C and D Schools can be different, the comparison is viewed from both the perspective of School A as well as the perspective of School B. Between Schools A and B, the School with the highest percentage of points is the winner of the comparison.

FILTER 4 - FINAL FILTER - A combination of Adjusted Stroke Average, Strength of Schedule, and Average Drop Score is used. This FILTER is weighted internally as follows: Adjusted Scoring Average 60%, Strength of Schedule 25%, Adjusted Drop Score 15%. The points are also awarded in a Relative fashion. In other words, if School A has an Adjusted Scoring Average of 70.00 and School B is at 70.01, School A is not a black-and-white winner receiving all 60% of that awarded value. Rather, both Schools would receive close to 30%, with School A getting slightly above 30% and School B getting slightly less than 30%.
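A hedged sketch of FILTER 4 follows. The 60/25/15 component weights come from the text; the shape of the relative split (how close values translate into near-even shares) is not published, so the relative_split() function is a placeholder of my own design.

```python
# Hedged sketch of FILTER 4. Component weights are from the text; the
# relative_split() formula is an invented placeholder, not Golfstat's.
WEIGHTS = {"adj_avg": 0.60, "sos": 0.25, "drop": 0.15}

def relative_split(a_value: float, b_value: float, scale: float = 2.0) -> float:
    """School A's share (0..1) of one component. Lower values are assumed
    better for all three components; 'scale' is the per-round margin at which
    the split becomes fully one-sided -- a hypothetical choice."""
    margin = max(-scale, min(scale, b_value - a_value))   # positive favors A
    return 0.5 + margin / (2 * scale)

def filter4_share(a: dict, b: dict) -> float:
    """School A's total share of the FILTER 4 value versus School B."""
    return sum(w * relative_split(a[k], b[k]) for k, w in WEIGHTS.items())

# The 70.00 vs 70.01 example from the text (other components held equal):
a = {"adj_avg": 70.00, "sos": 71.50, "drop": 73.00}
b = {"adj_avg": 70.01, "sos": 71.50, "drop": 73.00}
print(round(filter4_share(a, b), 4))   # 0.5015 -- A edges B only slightly, as described
```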

There are 6 potential decision-making paths:

1: Schools A and B have played Head to Head and have at least 16 Level 1 Common Opponents.

2: Schools A and B have played and have less than 16 Level 1 Common Opponents, but at least 25 Level 1 and Level 2 Common Opponent comparisons.

3: Schools A and B have played and have less than 16 Level 1 Common Opponents and less than 25 Level 1 and Level 2 Common Opponent comparisons. In this path, FILTER 4 is utilized on a pro-rated basis as discussed above.

4: Schools A and B have not played, but have at least 25 Level 1 and Level 2 Common Opponent Comparisons.

5: Schools A and B have not played and have less than 25 Level 1 and Level 2 Common Opponent Comparisons. In this path, FILTER 4 is utilized on a pro-rated basis as discussed above.

6: Schools A and B have not played and have zero Level 1 or Level 2 Common Opponents. In this situation, FILTER 4 is used entirely. By the end of the year, the use of this path is statistically insignificant.



EXPLANATION OF THE PRESENTATION OF THE STANDINGS:

Since this Ranking is referred to as and handled like a Standing, the main presentation is in the form of a won-loss record with the format being (wins/total opponents). Anytime you see one of the presentations that show both a Division and a Region Standing, note that those standings only relate to that particular universe. Therefore, the following hypothetical relationship is possible: assume that you had a Division with 100 schools and 10 Regions within that Division with 10 schools each. It is possible for a school to be 90-9 in the Division and be towards the top (and certainly ahead of every school in their Region) of that Division universe. However, all 9 of their "losses" could be against the other 9 schools in their Region universe, so they could be 0-9 and in last place behind all of the Region schools in that universe even though they are ahead of all of those schools in the Division universe.

I have never seen that particular hypothetical scenario take place, but I often see movement where schools can be ahead of other schools in their Region universe, yet behind them in the Division universe. Admittedly, this seems illogical, but the main use of these standings is for the NCAA, NAIA and JCAA in their determination of post season participants. Therefore, those situations are very helpful as yellow flags where the system is not in perfect alignment. Those "yellow flags" are the system telling the selection people to ask why and take a deeper look at those situations that involve "bubble teams". Let me editorialize here briefly: no matter what ANYONE tells you, ALL SYSTEMS have the potential to produce results that would create yellow flags. Golfstat's H-T-H Standings are the only ones that point those situations out to aid in the selection process. This is precisely why I have always told the NCAA, NAIA, and JCAA that what Golfstat does is produce good analytical data for humans to make good analytical decisions. Furthermore, I always tell the associations that they should never move away from having humans make decisions, because there are always situations created during the season that cannot be "perfectly" accommodated by any model. With all of that said, Golfstat's Relative Ranking is the only model that uses 100% of the official data and also the only model that uses totally accurate data.

Web Page and EMAIL Coaches' Reports Presentation:

The first column after the school name, Division Standing, is the wins/opponents record for each school based upon the head to head filter previously explained. Any school that has an N/E to the left of their name is not eligible for NCAA post season play. NAIA does not even list schools not eligible for post season play.

Adjusted Scoring Average: The average score of the counting scores, adjusted for conditions (see the explanation of adjusted scoring above). Adjusted Scoring is a more relevant basis of comparison than the raw scoring numbers.

Average Drop Score: The average score of the non-counting scores, adjusted for conditions.

Versus Top 25: This shows what the Division Standing Record is for those schools listed in the Top 25 of the Standings that this particular school has actually played. Remember that a school can actually have played another school and beaten them, but still have a loss in the filter system should other data overwhelmingly overturn that competition.

Rank of Schedule: This shows the rank of schedule for this school based upon the explanation listed before. Rank of schedule encompasses all NCAA Division 1, 2, 3, and NAIA schools.

Wins: This shows the number of wins for this school in events that they have played where there were at least 5 schools competing. Remember that the system ignores any school that is not affiliated with the NCAA, NAIA, or JCAA.

Eligible Tournaments OFFICIALLY REPORTED and used at Run Date: This shows how many tournaments that are registered and scheduled to be completed at the run date have turned in their results and those results have been accepted as OFFICIAL by Golfstat according to the various rules stipulated by either the NCAA, NAIA, or JCAA.

Selection Committee Presentation:

The first column after the school name, REG., shows the Region that school plays in.

Division Standing is the won-loss record for each school based upon the head to head filter previously explained. Notice that every team's total of competitions (wins and losses) is equal to the number of schools shown minus themselves. Any school that has an N/E to the left of their name is not eligible for NCAA post season play. NAIA does not even list schools not eligible for post season play.

Reg. Rank (or Div. Rank if looking at a Regional Printout) shows that school's ranking in the standings for the next column to the right, which will show the Region Standings (or Division if looking at a Regional Printout). These two standings columns shown together can show (especially when looking at a Regional Printout) those yellow flags that were discussed earlier.

Adjusted Scoring Average: The average score of the counting scores, adjusted for conditions (see the explanation of adjusted scoring above). Adjusted Scoring is a more relevant basis of comparison than the raw scoring numbers.

Average Drop Score: The average score of the non-counting scores, adjusted for conditions.

Versus Division Top 25: This shows what the Division Standing Record is for those schools listed in the Top 25 of the Standings that this particular school has actually played. Remember that a school can actually have played another school and beaten them, but still have a loss in the filter system should other data overwhelmingly overturn that competition.

Versus Division 26-50: Same explanation as above using the next 25 schools.

Versus Region Top 10: Same explanation as above using Top 10 schools in Region.

Versus Region 11-20: Same explanation as above using next 10 schools.

Rank of Schedule: This shows the rank of schedule for this school based upon the explanation listed before.

Wins: This shows the number of wins for this school in events that they have played where there were at least 5 schools competing. Remember that the system ignores any school that is not affiliated with the NCAA, NAIA, or JCAA.

