
The Quality of Competition

A Rough Sketch of Cramer Level Changes Over Time

 

               The essential goals of this project are (a) to develop reliable methods of assessing the quality of competition in one league as opposed to another, and (b) to use those methods to create a topographical map of baseball history which would allow us to compare the quality of competition in any documented league to any other league. This article is about those two tasks.

               My responsibility here is not to get it all right.  My responsibility here is to be the naïve, uneducated 13th-century mapmaker who drew maps of the world which said “Here be monsters” out in the oceans.  You have to start SOMEWHERE.  The discussion needs an organized starting point. There is booming, vibrant debate related to this which can be found every day of the year on Twitter, but the discussion is so disorganized that it gets no traction.  Our responsibility here is to take whatever wisdom or insight we can find in the public discussion, and use it to step toward a better understanding of the issues. 

               The goal of this article is simply to create A starting point for the discussion, to create AN estimate of the quality of play over time, so that future researchers will have something to work with, something to argue with, something to correct.

 

Step One

I explained in another article that

a)       We can organize the discussion around Cramer levels,

b)       That the Cramer level of MLB in 1920 is assumed to be 50.00, and

c)       That one Cramer level is equivalent to one run per game, so that if Team A has a Cramer Level of 50 and Team B is one run per game better than Team A, then the Cramer Level of Team B is 51.00.

I will now add two initial assumptions, two starting-gate assumptions which I will modify or abandon in the course of this article:

(a)    that the two major leagues were even in 1920 and have been continuously even since 1920, and

(b)    that that level has risen by .03 per season since 1920, so that the level was 50.3 in 1930, 50.6 in 1940, 50.9 in 1950, and 53.0 in 2020.

 

I should say (or repeat) that I actually do NOT believe that the increase over time has been that large.  I think we are probably around 51.8 now (in 2025). It should not matter at all what I believe, however, and it should not matter much what the starting point of the discussion is.  Whatever we believe now is certainly wrong.  Evidence is evidence; the studies that we can do later should lead us eventually to the truth, regardless of how wrong we are now.  My impression is that my estimate is lower than that of most other people.  The arguments of some people would suggest that, if a team from 2025 played a team from 1920, the 2025 team would win by 15 or 20 runs a game, which would mean a Cramer Level of 65.0 or 70.0. When you don’t KNOW what the answer is, then what you THINK the answer is doesn’t matter very much.

So let’s start here: 50.0 in 1920, 50.3 in 1930, 50.6 in 1940, 53.15 in 2025. 

Year    League(s)    C Level
1920    MLB          50.0
1930    MLB          50.3
1940    MLB          50.6
1950    MLB          50.9
1960    MLB          51.2
1970    MLB          51.5
1980    MLB          51.8
1990    MLB          52.1
2000    MLB          52.4
2010    MLB          52.7
2020    MLB          53.0
2025    MLB          53.15

 

Obviously I am maintaining a year-by-year and league-by-league chart, but there is no point in running a 200-line chart here. I will note or explain that this chart implies that if an average team from 2025 played an average team from 1920, the 2025 team would win about 80% of the games.  I’ll explain how we reach that belief somewhere else.
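The baseline chart is nothing more than linear interpolation, and can be written out in a few lines of Python (a sketch of my own, not part of the author's apparatus; the function name is mine):

```python
def baseline_cramer_level(year):
    """Starting-gate assumption: MLB sits at 50.00 in 1920 and gains
    a flat .03 Cramer Levels (runs per game) per season thereafter."""
    return 50.0 + 0.03 * (year - 1920)

# Reproduces the chart above
for year in [1920, 1930, 1940, 1950, 1960, 1970, 1980, 1990, 2000, 2010, 2020, 2025]:
    print(year, round(baseline_cramer_level(year), 2))
```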

 

Step Two  Allocating that improvement to three sources

               There are many, many things which could cause the quality of play in the majors to improve or decline over time.  Ultimately it is mostly irrelevant to us why those changes take place.  A Level-1 mapmaker does not care WHY the Florida peninsula hangs out into the sea, or why the land mass narrows to a scant 30 miles between North and South America, or why there are continents.  That’s somebody else’s problem.  This is a Level-1 mapmaking exercise.  I’m just trying to make a rough sketch. 

But even to make a sketch, we have to have SOME set of working assumptions.   This article is mostly about the working assumptions.   I believe that the quality of play has improved over time such that a team from 100 years ago could not be competitive now.  That’s a working assumption; at this point it couldn’t even be called a working hypothesis, because I am not trying to prove it or disprove it; I am just assuming it is true, and I am assuming that the net gain since 1920 is about 3.15 runs per game (RPG) for an average team.

In the chart above, I was assuming that improvement has been constant over time.  That is unlikely.   In the next chart, we are going to assume that the improvement over time has three primary causes, which are:

1)       The elimination of the color line, or the inclusion of black players into major league baseball,

2)       The related, overlapping but still distinct broadening of the major league population to include players from all over the world, and

3)       The “natural push to excellence” created by expansion of the population and the development over time of better athletes.

For purposes of this model, I am going to assume:

1)        That the end of segregation accounts for 1.00 of the 3.15 RPG of improvement in the Cramer Level, which is implemented at the rate of .04 per year in the years 1947 to 1971.

2)       That the development of international scouting accounts for .575 of the 3.15 increase in the Cramer Level, which is implemented at the rate of .0115 per season in the years 1956 to 2005.

3)       That the natural push toward excellence accounts for 1.575 (or half) of the 3.15 increase, which is implemented at the rate of .015 per season from 1920 to 2025.

 

Thus, the quality of competition is represented in these charts as increasing much more rapidly in the post-war years (1947 to 1970) than in the era between the wars (1920-1941).
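The three assumptions above can be expressed as a single function, with each source contributing a fixed amount per season over its own window. This is my sketch of the model (the function and the window bookkeeping are mine; the rates and date ranges are the assumptions just listed):

```python
def three_source_level(year):
    """Cramer Level under the three-source model: 50.00 in 1920,
    plus per-season gains from each source within its window."""
    def contrib(rate, first, last):
        # seasons from `first` through min(year, last), inclusive
        return rate * max(0, min(year, last) - (first - 1))
    natural       = contrib(0.015,  1921, 2025)   # 105 seasons -> 1.575
    integration   = contrib(0.04,   1947, 1971)   #  25 seasons -> 1.000
    international = contrib(0.0115, 1956, 2005)   #  50 seasons -> 0.575
    return 50.0 + natural + integration + international
```

Under this version 1940 comes out at 50.30 rather than 50.60, while the level climbs .0665 per season from 1956 to 1971, when all three sources overlap; it also reproduces the 51.2175 figure quoted later in this article for the American League in 1960.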

 

 

 

 

Step Three   Sketching in the early years

               It is likely not only that the quality of Major League baseball improved in the years before 1920, but that the slope of improvement was much steeper prior to 1920 than after.   1920 is more or less the point at which baseball reaches a stable equilibrium, modified of course by evolutionary developments. 

               19th century baseball was not regarded as major league baseball until the 1970s.  19th century records were more or less ignored and, when presented, were shown with some sort of separation between 19th century and “Modern” baseball.  As I have written many times, it was a mistake to erase that line.  19th century baseball does not meet ANY standard that would mark it as being a major league, and pretending that it does causes unresolvable problems which are roadblocks to understanding.

               Nonetheless, it is now conventional practice to represent 19th century baseball as if it was major league baseball.  This site is not about what I want or what I believe; it is my hope that the site will become community property.  As such, I need to sketch in Cramer Levels for baseball from 1871 to 1920.

               From 1920 to the present, the Cramer Level has been represented as improving at a generalized rate of .03 per season.   I am going to assume that it improved by .04 per season from 1910 to 1920, and .05 per season from 1900 to 1910; thus, the assumed C-Level of MLB is 49.60 in 1910 and 49.10 in 1900.  We are implicitly assuming that if an average team from 1920 played an average team from 1900, the 1920 team would have a winning percentage of almost exactly .600, and the 1900 team almost exactly .400.   We are assuming that the improvement in the quality of play between 1900 and 1920 is equal to the improvement between 1920 and 1955. 

               Prior to 1900, it is going to be difficult for this project to get a fix on anything.  To have a placeholder number (Here Be Monsters), I have assumed that the C-Level degrades going backward from 1900 at a rate accelerating by .01 RPG per season—thus, a decrease of .06 from 1899 to 1900, of .07 from 1898 to 1899, .08 from 1897 to 1898, and so on.   This would place the Cramer Level for 1871 at 43.30.  This would suggest that if a team from 1871 played a team from 1900, the 1871 team would have a winning percentage of about .082.  If they played a team from 1920, they would have an expected winning percentage of .060.   If they played a team from 2025, the 1871 team would have an expected winning percentage of .022; the 2025 team, of .978.
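The pre-1920 placeholder assumptions can be checked mechanically. A sketch (the function name is mine; the numbers are from the paragraphs above):

```python
def early_level(year):
    """Placeholder backcast: .04/season from 1910 to 1920, .05/season
    from 1900 to 1910, and before 1900 a decay that accelerates by
    .01 per season going backward (-.06 for 1899, -.07 for 1898, ...)."""
    if year >= 1910:
        return 49.60 + 0.04 * (year - 1910)
    if year >= 1900:
        return 49.10 + 0.05 * (year - 1900)
    level, step = 49.10, 0.06
    for _ in range(1900 - year):
        level -= step
        step += 0.01
    return level
```

This reproduces 49.60 for 1910, 49.10 for 1900, and 43.30 for 1871 (the 29 backward steps from 1900 to 1871 sum to 5.80 Levels).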

               Again, I’ll explain somewhere how we cross from a Cramer Level to an expected winning percentage.  Here’s the existing chart, from 1871 to 2025:


 

 

First Draft of Cramer Levels July 2025

Step Four    Expansion

               Trying to deal now with expansion.  An expansion, we will assume, sets back the quality of play and lowers the Cramer Level.  This problem is entirely different, and requires a different approach. 

               First, we have with regard to expansion actual evidence about the effects, from which what seem like reasonable inferences can be made.  That evidence can be stated in runs or runs per game, which accommodates our structure, but the fact that the process becomes more data-based than speculation-based, for a moment, forces us to change our approach.

               Second, when dealing with expansion, we have to deal with the American and National Leagues separately, since the two leagues have expanded generally at different times. 

               The data.  Since 1961 there have been 14 first-year expansion teams.   Those 14 first-year teams have played a total of 2,268 games, and have been outscored by their opponents by 2,444 runs.  That’s 1.08 runs per game; we can call it one run per game with tolerable distortion.  The first four expansion teams, the 1961-62 teams, were outscored by their opponents by 1.01 runs per game.  The 1969 expansion teams were outscored by 1.08, and the four 1990s expansion teams by 0.97 per game.  The two 1977 expansion teams, the Blue Jays and Mariners, were outscored by 1.42 runs per game, pulling the overall average up. 

               Let’s call it a run per game.  What does that mean?

               It means we can calculate how much of a setback this should have created in the Cramer Level.  Suppose that an 8-team league expands by 2 teams, becoming a 10-team league.  If their previous C-Level was 50.00, then the C-Level of the two expansion teams HAS to be 49.00.  That means that you then have a league composed of 8 teams at a Cramer Level of 50, and 2 teams at 49.  Which means that the Cramer Level of the league is now 49.8, which means that it has dropped by .20 because of the expansion.  We know how much of a Cramer Level hit that expansion has caused.
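That dilution arithmetic generalizes to any league size. A minimal sketch (assuming, as above, that a first-year expansion team sits one full run per game below the incumbents; the function name is mine):

```python
def diluted_level(n_old, old_level, n_new, drop=1.0):
    """Team-weighted average Cramer Level after adding n_new expansion
    teams, each `drop` runs per game below the incumbent level."""
    return (n_old * old_level + n_new * (old_level - drop)) / (n_old + n_new)

print(diluted_level(8, 50.0, 2))   # the 8-to-10 team example above
```

This returns 49.8 for the eight-team example, the .20 hit described above.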

               Or do we?  Of course there is always another way to look at it.  You could argue that the hit should be larger than that, because the expansion teams take players from the other teams, which pushes down the C-Level of THOSE teams as well.  

               You can say that, but is it true?  It doesn’t seem to me that it is.  The 1962 Mets built a team out of aging veterans being given one last shot, but that was a disaster, and no one else has done that.  If you cast a fishing net over your memory and draw out the first-year expansion players you remember, 90% of them are going to be guys who were trapped in the high minors, often for years, before the expansion team gave them a chance to play.  Both leagues expanded for the 1969 season.  Lou Piniella, the 1969 American League Rookie of the Year, got his first major league at bat in 1963, SIX YEARS before the expansion, but just got glimpses of the majors here and there before the Royals gave him a chance to play.  In the National League, Coco Laboy of the expansion Expos had played 523 games at AAA.  Doug Ault.  John Schaive.  Billy Moran had served a fairly long minor league apprenticeship before getting a shot with Cleveland in 1958.  Failing in that trial, he spent 2 ½ years back in Triple-A before the Angels gave him another shot, and then he played great for them in 1962.  Lee Thomas had spent years battering the holy hell out of minor league pitchers in an effort to impress the New York Yankees.  The Angels said “Hey, we could use that guy.”  Travis Lee.  Andy Fox.  Eric Young.  Vinny Castilla.  That’s who expansion players are. 

There are other words you can put together to try to argue the point, but it is my opinion that they don’t represent the truth.  The truth is, I think, that the 1961-62 expansions set the Cramer Levels backward by about 0.2 Levels.

Let’s assume that approach works.  If it is 8 teams expanding to 10 and the eight are at a level 50.000, the math is really easy.  If it is 28 teams expanding to 30 and one of the new teams goes into the American League and the other into the National and the two leagues are not exactly even because of previous expansions, then it isn’t nearly so easy.  Here is the method I used.

We start with the presumptive quality of the existing teams.  The previously estimated presumptive Cramer Level of American League teams in 1960 was 51.2175.  We advance that by .04 per season for the gradual breakdown of racist barriers, by .0115 for the gradually growing international market, and by .015 for the natural push toward excellence over time, and we’re at the number shown in the previous chart.

We multiply THAT by the number of teams in the league in 1961 (10) and multiply that by the number of games in the schedule (162), and we’re at 83080.08, which represents the presumptive gross strength of the league if there was no expansion.  The expansion has cost the league about 324 runs, or 162 for each expansion team.  Subtract those 324, now we are at 82756.08.  Divide that by 162, and we’re at 510.84.  Divide that by 10, and it is 51.084.  We thus estimate the Cramer Level of the American League in 1961 at 51.084.  It’s complicated, but it works for all the different permutations of expansions that we have to deal with. 
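Step by step in code, the 1961 American League calculation looks like this (my transcription of the arithmetic above; the variable names are mine):

```python
level_1960 = 51.2175                  # presumptive AL level, 1960
growth     = 0.04 + 0.0115 + 0.015    # integration + international + natural
level_1961 = level_1960 + growth      # 51.284, pre-expansion

gross = level_1961 * 10 * 162         # 10 teams x 162 games = 83080.08
gross -= 2 * 162                      # two expansion teams cost 324 runs
per_team = gross / 162 / 10           # back to a per-team level
print(round(per_team, 3))
```

Because the schedule lengths cancel, this particular case reduces to 51.284 minus 324/1620 = .20; the gross-runs form earns its keep when the leagues are of uneven size or an expansion lands in only one league.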

Of course, the effects of an expansion are not permanent.   The league rebuilds.  The new teams hire scouts, so after expansion there are more scouts out there evaluating young players.  The new teams build their own minor league systems, hire their own minor league coaches.  The population base expands.  More raw material is refined; more product is built.  The product is major league players.  Over time the strength of the league gets back to where it would have been without the expansion.

Our question is, after how many years do we, so to speak, retire the debt?  How long does it take for the talent base to be entirely rebuilt?

We’re guessing, we’re speculating, but I used the number 15 years.  The 324 runs worth of quality that are lost in the expansion, I put back into the pool at a rate of:

50 runs in the first year post-expansion (meaning that the league is discounted by only 274 runs, as opposed to 324),

45 runs in the second (league discounted by 229 runs),

40 runs in the third, and

35 in the fourth, so 50-45-40-35 in the first four years post-expansion.  Then 30-26-22-18 in the next four years, then 15-13-10-8 in the next four, then 6-4-2 in the final three  years.  We assume that the effects of the 1961 expansion affected the American League until 1976, but only by 2 runs in the last year, as opposed to 324 runs in the first.   We thus assume that the effects of the expansions of 1961, 62, 69, 77, 93 and 1998 affected the quality of play in the major leagues continuously from 1961 until 2012, except 1992. 
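The repayment schedule is easiest to see as a list. A sketch of the bookkeeping (names are mine; the numbers are the schedule just described):

```python
# Runs of quality restored in each year after an expansion; the
# fifteen entries sum to the original 324-run debt.
repaid = [50, 45, 40, 35, 30, 26, 22, 18, 15, 13, 10, 8, 6, 4, 2]

def remaining_discount(years_after, debt=324):
    """Runs by which the league is still discounted `years_after`
    seasons past the expansion (0 = the expansion season itself)."""
    return debt - sum(repaid[:years_after])

print(remaining_discount(0), remaining_discount(1), remaining_discount(15))
```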

But, of course, the expansion was not the ONLY thing happening in that era.  The Cramer Level was pushed down five times by expansion, but the things driving the numbers UP were also happening every year in that period.  Baseball by 2012 was far ahead of where it was in 1960, despite the expansions.  This should be obvious. 

An observation.  I am older than almost any of you, and I remember the 1961-62 expansions quite vividly.  I remember the things that sportswriters wrote about expansion in 1961 and 1962.  Some of those comments were hysterical, hyperbolic, wildly inflated.  Expansion, at the time it was happening, was “watering down” major league baseball—diluting the quality of baseball being played, said some, to the point where it was no longer major league baseball at all.  Four teams of bush league players were now wearing major league uniforms and pretending to be major league players.  100 minor leaguers, masquerading as major league players!  What a joke!

Bush league players, some of them, like Dean Chance and Chuck Hinton and Jim Hickman and Bob Aspromonte.  Some of them could play a little.  Expansion DID set the Cramer Level back a little bit, of course, but nowhere NEAR as much as many of the sportswriters of that era thought that it did.  It is the same phenomenon that causes wars.  Wars are fought mostly because the people in some country have tremendously overstated the difference between themselves and those in a neighboring country.  People overstate the difference between themselves and those of another race.  Republicans and Democrats fantastically overstate the differences between the two parties and fight about nothing, as do the people of different religions.  Scientists exaggerate the differences between themselves and people in the humanities.  My friends in college, not Greeks, all ridiculed the Greeks and despised them in petty ways.  At the time it made sense to me, but I realize now how absurd it was. 

It is the same thing here.  We SEE a difference, and we instinctively exaggerate it.  We lack a sense of proportion.  Even I am not old enough to remember baseball during World War II, but the sportswriters of my youth mostly did remember it, and wrote long stories about how terrible the game was during the war, a game played by teen-agers and 40-year-olds and a guy with one arm.   If you study the age distribution of wartime major leaguers, you will find it is surprising how little change there actually was.  There were teenagers in the majors in 1936, and in 1952.  Sportswriters were exaggerating the setback, exaggerating the decline. 

Sportswriters and broadcasters tremendously overstate, in my opinion, the difference between the majors and the minors.  Scouts overestimate the difference between a college player from a Power-5 conference and a player from the Ivy League or the Mid-American Conference; they overestimate it, and they miss talent because they do.

And when people talk about how much better baseball players are now than they were 50 or 60 years ago, they almost always exaggerate the difference.   The players now ARE better, of course, but how much better?

For purposes of sketching out Cramer levels over time, I estimated the talent of 2020 to be 3 runs per team per game better than the talent of 1920.  I seriously doubt that the difference is actually that large.  Some people will tell you it must be 20 runs a game.  I’d believe two.  But that’s what this is about:  let’s figure it out.   The evidence is there if we look for it.

The difference between the Red Sox and the Yankees?  Oh, that’s all real.   Nothing exaggerated about that.  This chart presents the prior and expansion-adjusted Cramer Levels for each league in the era when those numbers were adjusted for expansion.


[Chart: prior and expansion-adjusted Cramer Levels by league]

Step Five   The World War II adjustment

The quality of play in major league baseball during World War II took a step backward, of course. 

For the purpose of having a “first assumption”, a first position which can become a hypothesis, I adjusted the Cramer Levels for 1941-1945 down by:

.5 in 1942,

1.0  in 1943,

1.5 in 1944, and

2.0 in 1945.

For context, Musial, Ted Williams and DiMaggio were still playing in 1942, but Bob Feller and Hank Greenberg were out.  Enos Slaughter, Bobby Doerr, Lou Boudreau and Luke Appling were still playing.   Williams, Slaughter and DiMaggio left in 1943.  Appling left in 1944.  Musial and Doerr left in 1945.

When a player of that quality leaves the league, it’s a significant loss of competitive strength.  When a player like Ted Williams, Bob Feller or Stan Musial leaves, that could be a loss of 50-75 runs of competitive strength for the league, perhaps even 100.  When almost all of those guys leave, plus boatloads of lesser players who are also gone, that’s a great many runs that are gone.  That’s a larger backward step for the quality of competition than would be caused by expansion.

Still, it is hard to believe that the loss could be as large as 2.0 Cramer Levels, which is the number I affixed to the 1945 season.   It would set the clock back by more than 40 years, back to the quality of late-19th century baseball.   A loss of 0.5 Cramer Levels would be a loss of more than 1,000 runs for the 16 teams; 2.0 Cramer Levels would be just short of 5,000 runs.  All of major league baseball in 1941 had about 11,000 runs scored; in 1945, about 10,000.   This is not evidence that the estimate of a loss of 5,000 runs of competitive strength is wrong.  You could perfectly well have two leagues which score 5,000 runs each, but one of those leagues could be 5,000 runs better than the other if they were to play head to head.   7,500 runs were scored in the West Texas/New Mexico League in 1946, but the National League surely was at least 5,000 runs stronger, meaning that if they played head to head, the National League teams would have beaten them by at least 5,000 runs.
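The run totals quoted in that paragraph follow from the schedule of the era; a quick check, assuming 154 games and 16 teams:

```python
games, teams = 154, 16
half_level = 0.5 * games * teams    # runs lost at 0.5 Cramer Levels
two_levels = 2.0 * games * teams    # runs lost at 2.0 Cramer Levels
print(half_level, two_levels)       # just over 1,200 and just short of 5,000
```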

Still, 5,000 runs is a lot.  The major leagues from 1960 to 1962 took a backward step of about 600 runs due to expansion, and the wartime loss was certainly much larger than that.  We don’t know (now) what the aggregate loss was, and I pegged it at 2.0 Cramer Levels as a placeholder number.  We’ll see where the research goes. 

 

Recovery after World War II

World War II would have affected the quality of play after the War until at least 1960.   If you ask the question, “Were there potential major league stars and superstars who entirely lost their careers due to World War II”, the obvious and undeniable answer is “Yes, there were.”  We don’t know who they were, but if you study the demographic data, the gap where they should be is obvious. 

For purposes of the chart, I have assumed that there was a 70% recovery in 1946 from the effects of World War II, followed by an additional 3% recovery each season from 1947 to 1956.  This setback was offset by the breaking of the color line in 1947, bringing black players into the game, as well as other forms of growth.  By 1951 or thereabouts, the quality of competition was as strong as it was in 1941.  By the end of the decade, it was significantly stronger.  All of this is reflected in the charts I have created.
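As a sketch, the recovery assumption for 1946 onward can be written as a function (the function and its default value are mine; the 2.0-Level 1945 deficit and the 70%-plus-3%-per-year schedule are from the text):

```python
def remaining_war_deficit(year, deficit_1945=2.0):
    """Cramer Levels still missing in `year` (1946 or later): 70% of
    the 1945 deficit returns in 1946, then 3% more per season through
    1956, when the deficit is fully retired."""
    recovered = 0.70 + 0.03 * max(0, min(year, 1956) - 1946)
    return deficit_1945 * (1.0 - recovered)
```

So the model carries a deficit of 0.60 Levels in 1946, shrinking to zero by 1956; the offsetting gains from integration and natural growth are layered on separately, which is how the charts get the quality of competition back to its 1941 strength by about 1951.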

 

  Step Six   Differences between the leagues

               To this point we have been assuming that there is no difference in the quality of competition between the American and National Leagues.  This, of course, is not true or accurate, and I need to address that before I shut the door here.

               I can only tell you what I believe to be true.  In the 19th Century there was no American League (although there was an American Association).  The National League had 12 teams in the 1890s, contracted to 8 teams in 1900, and the American League started in 1901.  (Actually earlier, but they declared themselves to be a major league and started competing for major league talent in 1901.) 

               The National League contracted from 1899 to 1900 by getting rid of its weaker teams and moving the few good players from them onto the eight surviving teams.  This made the competitive level of the league markedly better, a fact which is obvious in the data.  Any number of players who were dominant in the NL in the 1890s and in the one league in 1899 were not quite as dominant in 1900.  Cy Young, dominant for ten years before that and five years after, was 20-18 in 1900. 

               In the chart I entered this as a “reverse expansion”, marking the Cramer Level of the league up .4 from 1899 (actually up .46, but the other .06 is natural improvement over time.)   The 1901 increase from eight major league teams to 16 obviously set the Cramer Level back some distance, which I have modeled in the chart as .93 Cramer Levels. 

               What many people know about the American and National Leagues in that era is that some of the National League stars came over to the American League in the 1901-1903 era, before a settlement was reached between the leagues in early 1903.  What almost no one seems to understand is that it wasn’t “some” of the National League stars; it was almost all of them. 

               Think about it:  how many stars were there in the National League?  Rosters were smaller; with 8 teams there were 136 players who had 50 or more plate appearances in the National League in 1900.  How many of those 136 players could reasonably be called “stars”?  25, maybe, or 30?

               Now make a list of the National League stars who were in the American League by 1903. Cy Young, Nap Lajoie, Jack Chesbro, Ed Delahanty, Elmer Flick, Sam Crawford, George Davis, Buck Freeman, Kip Selbach, Willie Keeler, Candy LaChance, Jesse Burkett, Jimmy Collins, John Anderson, Lave Cross, Jimmy Williams, Rube Waddell, Bobby Wallace, Charlie Hickman, Harry Davis, Jimmy Ryan, Herman Long, Chick Stahl, Clark Griffith.   The American League offered better salaries; almost everybody jumped.  Some players jumped to the AL and then jumped back, but only a handful of really good players stayed in the National League. 

               I have marked the National League as stronger than the American League from 1901 to 1909 because (a) while the American League did have most of the stars, there were a lot of players in the new league who were inexperienced players purchased from lower leagues, and (b) National League teams did win the World Series in 1905, 1907, 1908 and 1909. 

               But once the competition was organized and the rules established, the American League was far, far better at identifying and acquiring talent.  The players entering the American League in its early years include Ty Cobb, Tris Speaker, Walter Johnson, Eddie Collins, Shoeless Joe Jackson, Home Run Baker, Eddie Plank, Addie Joss, Ed Walsh, Chief Bender and Doc White—supplementing Cy Young, Nap Lajoie, Rube Waddell and the others from the National League.  The National League wasn’t remotely keeping up.  From 1901 to 1919 the American League had about 18 superstars.  The National League had maybe five. 

               What makes a league strong is strong organizations—strong ownership, strong management, strong scouting and player development.  The National League in the 1905-1909 era had three strong franchises, New York, Pittsburgh and Chicago.  Those three franchises had a series of fantastic pennant races, with all three teams clocking along at 100 wins a year.  They were able to do that because the rest of the league was terrible.  There were five non-competitive teams in the National League—Boston, Philadelphia, St. Louis, Brooklyn and Cincinnati.  Those teams trudged along at about 90 losses per team per season, enabling the other three to win 100. 

In reality, the American League may have become the stronger league as early as 1903.  We don’t really know, although if this project works we will eventually figure it out.  But beginning in 1910, the American League began to win the World Series every year.  The AL was 8-2 in the Series in the 1910s, 6-4 in the 1920s, 7-3 in the 1930s, 6-4 in the 1940s, and won the first four in the 1950s.  Over a 44-year period the AL entries were 31-13 in the World Series.  It’s not great evidence, but it is something.  If the two leagues were even over that time period, the chance that one league would go 31-13 or better in the World Series is less than 1%; the chance that the American League would do that is less than one-half of 1%.  When the All-Star Game started, the American League went 5-2 in the 1930s, and 7-2 in the 1940s. 
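The probability claim can be verified with an exact binomial tail. The sketch below assumes each World Series is an independent coin flip under the "even leagues" hypothesis, which is exactly the null being tested:

```python
from math import comb

# Chance that a specific league (say the AL) wins 31 or more of 44
# World Series if every Series is a 50/50 proposition:
p_al = sum(comb(44, k) for k in range(31, 45)) / 2**44
# Chance that EITHER league manages it:
p_either = 2 * p_al
print(p_al, p_either)
```

The exact figures come out to about 0.0048 and 0.0096, matching "less than one-half of 1%" and "less than 1%."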

My best estimate is that the American League’s dominance peaked about 1923 or 1924, and that the Cramer Level of the AL at that time was about .4 higher than the National, meaning that the average American League team at that time was about 60 runs a year better than the average National League team at the point of maximum separation between the two.   In the mid-1920s the National League began to creep slowly back toward the American, gaining at a rate of one or two runs a year. 

What changed it?   Branch Rickey.  Branch Rickey built one of the National League’s weak sisters, the Cardinals, into a powerhouse.   Cincinnati, absent from the World Series for 20 years, got back to the top in 1939.  The Phillies, the downest and outest of the down and outs, began to crawl forward about 1941.  Rickey left the Cardinals in 1942 to go to the Dodgers, and built THEM into a powerhouse.   

The story that you hear often is that the National League after Jackie Robinson became the stronger league, and that they did so because several or most National League teams embraced integration, while the American League, led by racist owners in New York and Boston, largely remained a lily-white league.  This is the one thing that everybody knows about baseball history; if they don’t know anything else, they will still know that.

That account is essentially true.  It is essentially true, but massively incomplete.  Three points to complete the picture.  First, the National League in 1946, the last year pre-Jackie, was still behind the American League.  Before they could move ahead, they first had to catch up.

Second, Jackie was great, but he was just one player.  The NL in 1946 was still behind by about .15 C-Level, or about 20-25 runs per team per season, or about 150-200 runs for eight teams.  That’s more than one player can do.

Integration did allow the NL to catch up quickly to the AL, and the NL was equal to the AL, in my opinion, probably by 1951.  At the start of the article I said that integration was pushing the Cramer Level of major league baseball up by about .04 runs per game per team per season.  In the years 1947-1960, almost all of that was going into the National League.

Third point, yes, it is true that the American League in that era was more racist and falling steadily behind because of that, but what I am trying to say is that the American League would have been falling steadily behind in that era even had there been no racial divide.  To me, they were not mediocre because they were racist; they were racist because they were mediocre.

The AL in the 1950s was like the NL fifty years earlier.  There were a lot of badly run franchises.  The Senators, the Browns/Orioles and the A’s had, among them, seven 100-loss seasons and zero seasons with a winning record.  The Red Sox from 1951 to 1966 were never in a pennant race, and were getting steadily worse.  The Tigers always had talent, but their front office in the 1950s couldn’t manage a family reunion.  The Tigers, really, were as racist as the Red Sox; people just don’t talk about it as much.  Mediocrity and bigotry are traveling partners.

If the American League had a two-team pennant race from 1951 to 1963, that was a lot, and the options were New York and Cleveland or New York and Chicago.   No other team in the league got within 10 games of a championship from 1951 to 1959.  As the NL had fallen behind the AL about 1910 and continued to drift further behind until 1923 or 1924, the AL now fell behind the NL about 1950 and continued to drift further behind until 1963 or 1964. Probably the AL got even further behind than the NL had, although we lack tools precise enough to gauge that.

By about 1959 the American League was beginning to get their shit together, if you don’t mind the scientific terminology.  What is the evidence for that, you ask?  It is all over the place.  Paul Richards took over the Orioles in 1955 and began making progress with them almost month by month.  By 1960 they were over .500, and for decades after that rarely under .500.  By 1959 the Senators, still bad, had Harmon Killebrew and Bob Allison in the lineup and perhaps the best pitcher in baseball, Camilo Pascual, on the mound.  In 1960 they added Earl Battey; in 1961, Zoilo Versalles and Jim Kaat.  The A’s, a clown show their first ten years in KC, came up with Bert Campaneris and Dick Green in 1964, Catfish Hunter in 1965, Rick Monday and Sal Bando in 1966, Reggie Jackson in 1967, Joe Rudi and Rollie Fingers in 1968.  The Yankees went through a pause, but when the Red Sox suddenly caught fire in 1967 there were four formerly-awful franchises that stepped forward.   The league was producing a lot more talent, and its share of black stars began to even out with Reggie Jackson, Tony Oliva, Vida Blue, Reggie Smith, George Scott, Amos Otis and Dick Allen.

By the early 1970s the AL was still the weaker league, probably, but it was really pretty even.  By my naïve and clumsy efforts to measure this, it appears that the American League would have been on an equal footing by 1977, except that the AL expanded again in 1977, and the expansion dropped the Cramer Level below the NL once again, albeit not significantly. 

 

               This chart represents my latest effort to estimate the Cramer Levels of each league from 1900 to 2025, with shading indicating seasons in which one league appears to be meaningfully stronger than the other:

[Chart: estimated Cramer Levels by league, 1900 to 2025]

 

Looking toward the next steps

               There are dozens of issues that I could have dealt with in this article, but did not.  There are three issues that I really wish I could have dealt with, were I not concerned about the article becoming just too long, and were I not aware that if I dealt with these three issues, three more would immediately arise and demand attention.  The top three things that I wish I could have dealt with are:

1) The relative level of the defunct leagues.   The defunct leagues are the American Association from the 1880s, the Union Association from 1884, the Players’ League of 1890, the Federal League of 1914-1915, and the Negro Leagues. 

Enough players traveled back and forth into the Federal League that that one, at least, should be relatively simple to sort out.

               2) The data from inter-league play, 1997 to the present.  This data should provide fairly decent guidance about the relative quality of play between the two leagues over the last 30 seasons.

               3)  The Dick Cramer data, published in the mid-1970s, addressing the general issue which is of interest here.

 

Thank you all for reading.

 

Bill James

 
 
 
