# 2018 Term Year Analysis

Comparative Analysis

## Errata

ERROR in CASE-62
Originally, I had listed Sotomayor as both the Author and as Joining in this decision. The Raw Data Files (pickle, json, txt) have been corrected. But I have not rerun (and will not be rerunning) the programs which created the graphs on the previous pages.

It's not a major error. And I question whether it affects a single line of Analysed Output. But it is an error. And I told you there would be errors. So, I was right in being wrong.

And now, onto the main event.

## Code

2018_judges_analyze_A5.txt is the code I used to create the graphs on this page (using the data as discussed elsewhere). Technically, there's not much new. From a programming standpoint, it's a simple continuation of a theme.

## Comparative Analysis

Given an Opinion, how likely is it that one Judge Agreed with another?

- List All Opinions
- For Each Opinion:
  - +1 if Both Judges Agree with the Opinion
  - +1 if Both Judges Disagree with the Opinion
  - -1 Otherwise
- Divide by 168 (the Number of Total Opinions in All Cases) to Normalize. Strictly speaking, that score can run from -1.0 to 1.0; in practice, every pair of Judges landed between 0.0 & 1.0.
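The steps above can be sketched in a few lines, assuming each judge's votes are stored as a list of booleans (True = agreed with the opinion). The function name and data layout here are hypothetical and don't match the original data files:

```python
# Sketch of the pairwise agreement score described above.
# `votes_a` / `votes_b` are per-opinion votes for two judges:
# True = agreed with the opinion, False = disagreed.

def agreement_score(votes_a, votes_b, total_opinions=168):
    """Normalized agreement between two judges across all opinions."""
    score = 0
    for a, b in zip(votes_a, votes_b):
        if a == b:        # both agreed, or both disagreed
            score += 1
        else:             # the pair split on this opinion
            score -= 1
    return score / total_opinions

# Two judges who split on 1 of 4 opinions: (3 - 1) / 4
print(agreement_score([True, True, False, True],
                      [True, False, False, True],
                      total_opinions=4))  # 0.5
```

Note that a pair who split on every opinion would score -1.0, which is why the observed range matters more than the theoretical one.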

### Normalized Relative Agreement

The Graphs That Follow

Each Judge (by definition) agrees with themselves all of the time. These are the only values near 1.0 in the above. So, I zeroed them out. In the graphs that follow, Roberts v Roberts = 0.0.
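That zeroing step is a one-liner if the scores live in a square NumPy matrix. The judge names and values below are made up for illustration; this is just a sketch of the idea, not the author's actual plotting code:

```python
import numpy as np

# Hypothetical 3x3 agreement matrix; the diagonal (self-agreement)
# is the only place values near 1.0 appear.
judges = ["Roberts", "Thomas", "Ginsburg"]
agreement = np.array([[1.00, 0.60, 0.50],
                      [0.60, 1.00, 0.45],
                      [0.50, 0.45, 1.00]])

# Zero out the diagonal so Roberts v Roberts = 0.0, etc.
np.fill_diagonal(agreement, 0.0)
```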

Further (per the graphs above), the values clump around the middle of the range, so I stretched the data (in the graphs below) by using the following formula.

```python
ratings = [(r - 0.422619047619) /
           (0.875 - 0.422619047619)
           for r in ratings]
```
I believe the reason for this clumping is that there are many Supreme Court Cases (say, about 42.26% of them), which are easily decided by Law: i.e. that any Reasonable Jurist would come to the same conclusion. The Cases are straightforward and uncontroversial.

Of course, I disagreed with The Court in regards to many of these uncontroversial decisions. But that's because I don't really care about The Law.

Which is to say, The Judges Decided a Case (I presume; or at least, in theory) on What Is. Whereas, I Decided each case (or at least, some of them) on What Should Be.

Thus, my low level of agreement not only reflects my dissatisfaction with The Court's Rulings, but can also be taken as a gauge of my relative level of unreasonableness.

## Further Analysis

Of The Textual Variety

The first Obvious Observation (my favourite kind) is that the two Groups of Graphs differ. I got different outputs from the same inputs. I mean, I always sort of knew this was theoretically possible. But this project has really highlighted the inevitable subjectiveness of certain facts to me. I've really just scratched the surface in regards to the possible graphs (as they are without number). But it's pretty clear (to me, anyway) that if I had an agenda (which, undoubtedly, would involve projecting my greatness out into The Void), I'd be more inclined to see what I could glean from the static (i.e. the preceding graphs) and in that way (wait for it) project my greatness to an otherwise unsuspecting public: i.e. The Void.

Secondly, some of the groupings I'd expected are much easier to see in the second group of graphs than the first.

Ginsburg, Breyer, Sotomayor, and Kagan seem to group.

Whereas Thomas and Alito (and, to a lesser extent, Roberts and Kavanaugh) do not group as clearly as I would have expected.

Seriously, I very much expected Thomas and Alito to have a higher level of agreement. Though this last might have more to do with Thomas' habit of writing Outlier Opinions.

Finally, I will note that, by this methodology, my opinions correlate with Thomas' better than with anyone else's.

Any conclusions really are dependent upon how one slices the data.

Judging the Judges


And then, it became time to move on to other things.