Friday 01/05/2024 by sethadam1

WHY PHISH.NET RATINGS WERE DISABLED

[We'd like to thank Paul Jakus for this analysis of recent Phish.net ratings. Coincidentally, we've been analyzing ratings with him for a future blog series digging deeper into how Phish fans rate shows. Stay tuned for more on ratings soon! —Ed.]

At 3:42 p.m. on January 3, 2024, the ratings function of Phish.net was disabled due to unusual patterns in ratings behavior. Here we’ll explain those patterns, but first let’s establish what a “normal” Holiday Run ratings pattern looks like.

For comparison, let’s look at ratings submitted from 1:00 a.m. on January 1, 2023 through 3:42 p.m. on January 3, 2023 (a period chosen to match that of the 2023/24 NYE Run). Some 1,004 ratings were submitted over nearly 63 hours, for 116 different shows. Of these, 838 (84%) were for the four holiday shows, leaving 166 ratings spread across the remaining 112 non-holiday shows. The most new ratings any non-holiday show received was six.

So, what happened after the 2023/24 Run? Read on for more.

The first rating for 12/31/23 arrived at 1:12 a.m. on January 1, 2024. Between then and the suspension of ratings at 3:42 p.m. on January 3, a total of 3,779 ratings were submitted for 442 different shows. Only 2,103 of those ratings (56%) were for the 2023/24 NYE Run, leaving 1,676 ratings spread over the remaining 438 shows.

There is no doubt that something odd happened.
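
For readers who want to poke at the numbers themselves, here is a minimal sketch of the window summary above. It assumes a hypothetical CSV export of the ratings log with show_date, rating, and submitted_at columns; these names are illustrative, not the actual Phish.net schema.

```python
# Minimal sketch of the window summary above (hypothetical column names,
# not the actual Phish.net schema).
import pandas as pd

ratings = pd.read_csv("ratings.csv", parse_dates=["show_date", "submitted_at"])

# Ratings submitted between the first 12/31/23 rating and the suspension.
window = ratings[
    (ratings["submitted_at"] >= "2024-01-01 01:12")
    & (ratings["submitted_at"] <= "2024-01-03 15:42")
]

nye_run = pd.to_datetime(["2023-12-28", "2023-12-29", "2023-12-30", "2023-12-31"])
is_run = window["show_date"].isin(nye_run)

print("total ratings:   ", len(window))                    # 3,779 in the post
print("distinct shows:  ", window["show_date"].nunique())   # 442
print("NYE run ratings: ", int(is_run.sum()))               # 2,103
print("non-run ratings: ", int((~is_run).sum()))            # 1,676
```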

Now let’s take a closer look at the ratings for the Gamehendge show (Figure 1). Nearly 1,800 ratings were posted, with a very high proportion of ‘5’ ratings (87%), neither of which would be unexpected for an instantly classic performance. Almost 7% of ratings were a ‘1’, though, which was a bit higher than the norm for all modern era shows (5.8%). Peculiar, yes, but nothing immediately identified as highly unusual.

[Figure 1: Ratings distribution graph for the 12/31/23 Gamehendge show]
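
As a quick check, the “1” share for a single show can be compared against the era-wide baseline. The sketch below continues from the hypothetical ratings data above; the post-2009 Modern Era cutoff is an assumption on our part.

```python
# Sketch: 1-star share for the Gamehendge show vs. the modern-era baseline
# (continues from the hypothetical `ratings` DataFrame above).
gamehendge = ratings[ratings["show_date"] == "2023-12-31"]
modern_era = ratings[ratings["show_date"] >= "2009-01-01"]  # assumed cutoff

pct_ones_show = (gamehendge["rating"] == 1).mean() * 100
pct_ones_era = (modern_era["rating"] == 1).mean() * 100
print(f"1-star share, 12/31/23:   {pct_ones_show:.1f}%")   # ~7% per the post
print(f"1-star share, Modern Era: {pct_ones_era:.1f}%")    # 5.8% per the post
```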

But if nothing was obviously wrong with ratings for the Gamehendge show, why suspend the ratings function?

Recall the same time period after the 2022/23 run: we saw lots of ratings for the Holiday shows and relatively little activity on non-holiday shows. Things were different this year.

In 2024, newly submitted ratings of notably historic shows were pervasive, with more than 30 older shows receiving at least 10 new ratings in the first three days of the New Year. Table 1 lists every older show that received 30 or more new ratings during the January 1-3, 2024 period, along with its rating counts and averages before and after January 1.

Table 1: Older Shows and Newly Submitted Ratings

Date       | Location               | # of Ratings (before Jan 1, 2024*) | Average | # of New Ratings (after Jan 1, 2024*) | Average of New Ratings
12/31/1999 | Big Cypress, FL        | 1,325 | 4.767 | 259 | 4.232
12/30/1997 | MSG                    | 770   | 4.700 | 69  | 3.783
11/22/1997 | Hampton, VA            | 854   | 4.678 | 55  | 3.873
4/3/1998   | Nassau, NY             | 832   | 4.669 | 49  | 3.932
8/2/2003   | Limestone IT           | 490   | 4.680 | 39  | 3.769
12/31/1995 | MSG                    | 942   | 4.634 | 37  | 4.297
8/17/1997  | Limestone Went         | 650   | 4.652 | 33  | 4.030
8/22/2015  | Watkins Glen Magnaball | 1,648 | 4.649 | 30  | 3.233

*Before/After 1:12 a.m. EST, through 3:42 p.m. January 3
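
Table 1 can be rebuilt from the same hypothetical data by splitting each non-run show’s ratings at the 1:12 a.m. cutoff, roughly as follows (continuing from the sketch above):

```python
# Sketch: rebuild Table 1 from the hypothetical `ratings` data above.
cutoff = pd.Timestamp("2024-01-01 01:12")
end = pd.Timestamp("2024-01-03 15:42")
older = ratings[~ratings["show_date"].isin(nye_run)]

before = older[older["submitted_at"] < cutoff].groupby("show_date")["rating"]
after = older[older["submitted_at"].between(cutoff, end)].groupby("show_date")["rating"]

table1 = pd.DataFrame({
    "n_before": before.count(),
    "avg_before": before.mean().round(3),
    "n_new": after.count(),
    "avg_new": after.mean().round(3),
}).dropna()

# Older shows with 30+ new ratings during the window, as in Table 1.
print(table1[table1["n_new"] >= 30].sort_values("n_new", ascending=False))
```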

It’s clear that Big Cypress was the primary historic show affected, but it was definitely not the only one. Every show in the table above was well-known, highly rated, and listed in The Phish Companion as a “Top 100” performance.

Did all of these performances really get perceived as “worse” starting on January 1?

Let’s take a closer look at ratings for Big Cypress, 12/31/1999. The 259 post-January 1, 2024 ratings happened within about 51 hours. For comparison, the previous 259 ratings for this show were submitted over 1,336 days. The number of new ratings for this show was highly unusual.
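
Just how unusual is easy to quantify with the figures above: the post-January 1 ratings arrived at roughly 600 times the prior rate.

```python
# Back-of-the-envelope rate comparison for Big Cypress (figures from the text).
new_rate = 259 / 51            # 259 new ratings in ~51 hours
old_rate = 259 / (1336 * 24)   # the prior 259 ratings took 1,336 days
print(f"new rate: {new_rate:.2f} ratings/hour")   # ~5.1
print(f"old rate: {old_rate:.4f} ratings/hour")   # ~0.008
print(f"ratio:    {new_rate / old_rate:.0f}x")    # ~630x
```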

Table 2 depicts the number and percentage of each rating for NYE Big Cypress, before and after January 1.

Table 2: Big Cypress (12/31/1999), Before and After January 1, 2024

Rating | Before January 1, 2024* | After January 1, 2024*
1 | 51 (3.85%)     | 42 (16.22%)
2 | 7 (0.53%)      | 3 (1.16%)
3 | 14 (1.06%)     | 4 (1.54%)
4 | 56 (4.23%)     | 14 (5.41%)
5 | 1,197 (90.34%) | 196 (75.34%)

*Before/After 1:12 a.m. EST

First, the pre-January 1 ratings distribution for 12/31/99 looks remarkably similar to that of the 2023 NYE Gamehendge show, except with fewer “1”s. Second, the ratings distribution for the post-January 1 period is markedly different. Is it reasonable to expect 259 ratings to pour in for a 24-year-old show, in a matter of hours, and with such a different distribution?
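
One way to formalize “markedly different” is a chi-square test of homogeneity on the two columns of Table 2. The sketch below uses SciPy (an assumption about tooling, not anything the site actually runs) with the raw counts from the table; a vanishingly small p-value says the before and after ratings do not plausibly come from the same distribution, though it cannot by itself distinguish honest re-appraisal from manipulation.

```python
# Sketch: chi-square test of homogeneity on the Table 2 counts (SciPy assumed).
from scipy.stats import chi2_contingency

counts_before = [51, 7, 14, 56, 1197]  # ratings 1..5 before Jan 1, 2024
counts_after = [42, 3, 4, 14, 196]     # ratings 1..5 after Jan 1, 2024

chi2, p, dof, _expected = chi2_contingency([counts_before, counts_after])
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2g}")
# The two distributions differ far more than chance alone would allow.
```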

A nice feature of the Phish.net ratings function is that it generates debate among fans, but its primary purpose is to help guide fans, both new and old, through a large body of recorded music. Given this primary goal, and given what appears to be manipulation of numerous highly regarded shows, the decision was made to suspend the ratings function.

The ratings function will return, but Phish.net administrators are currently mulling possible changes to how ratings are calculated and presented. First, though, it’s necessary to answer the thousand-dollar question: do the ratings submitted this week reflect an honest re-appraisal of historic shows, or were they submissions designed to affect show rankings?

If you liked this blog post, one way you could "like" it is to make a donation to The Mockingbird Foundation, the sponsor of Phish.net. Support music education for children, and you just might change the world.


Comments

, comment by DemandOpener
DemandOpener Ratings shouldn't come back. They have outlived their usefulness.
, comment by SawItAgaaain
SawItAgaaain Bless your number-crunching hearts. Timely and helpful post, especially articulating that the main purpose is to help guide us through over 2,000 shows.

While you're under the hood, please also consider how to respond to the one-star bot bombs that have affected shows the last year or two as articulated by @abuani in this thread.
, comment by white_lightning
white_lightning Let's go back to when shit was fun. You either had to be there, talk to friends who were, or listen to it...of course back then listening to it if you weren't there meant waiting for a tape. But still. It was more fun.

Now we have people gaming a ratings system. To what end I can't even guess.
, comment by DogsCanDance
DogsCanDance Maybe shows should only be available to be rated for a set amount of time - a year? Six months? I'm not sure of the value of a new rating on a show that was played a decade ago anyway. At that point, the show is inevitably being rated on more than its own merits.
, comment by paulj
paulj My work with an earlier version of the database (downloaded on October 27) suggests that @abuani has identified one type of bomber. In fact there are at least three types of bombers--and fluffers, for that matter--so it goes both ways. The problem is separating ill-intent from those who simply use the ratings choices in different ways. That's why this will take some time.

BTW, I was not involved in the decision to suspend the ratings (I'm simply a user, not an admin). In fact, I came to .Net on Wednesday afternoon just to look at the NYE ratings and saw they'd been locked. I sent a message to @sethadam1 and @Lemuria Thursday morning and asked what was happening. That's how I came to look at the new data.
, comment by BetweenTheEars
BetweenTheEars My two cents:

1. Mitigate reactionary ratings: add a cooling off period before show ratings open. Say, a week. Maybe even a month.

2. Institute "one person one rating": include only *verified* user accounts in the average ratings tabulations to limit the number of users with multiple .net accounts rating the same show multiple times.

3. Increase transparency and accountability: make all user ratings public. Any user should be able to see the ratings for any other user. Kinda like free speech... sure, you can say (almost) anything you want. But just because you can say almost anything doesn't insulate you from from any consequences from what you say.

Thanks .net team for hitting the pause button on the ratings and working to enhance their usability to guide fans to good shows!
, comment by timkell
timkell I posted a suggestion on the board, but figured I'd toss in here:
1. Jam chart or new team determines the list of "canonical" shows. Remove rating capabilities from these shows and create a prominent page with this list. Even create a project and select volunteers to write long form articles explaining the importance and excellence of said show. You get to remove them from controversy and also achieve the primary purpose you mentioned, making sure people are aware of these particular shows. I'd prob choose a word other than canon TBH. Sounds a bit too serious for my liking.
2. New "notable shows" page for 3.0 and beyond managed annually by jam chart or new team. Again you achieve the primary objective of helping fans find shows they should hear. At any point if a show like 12/31/23 comes along you can pretty much immediately add it to canon page.

You can still have the top rated page, but maybe divide into three pages by era? Or keep it the same and add some protections, but with the two changes above less harm can be done and also less incentive to commit the harm.

Someone else suggested potentially reaching out to RateYourMusic and considering implementing something similar to what they have: RateYourMusic FAQ

Hope this helps.
, comment by bushwood_a_dump
bushwood_a_dump Good riddance!

I give my comment a 4.78
, comment by johnnyd
johnnyd Thank you for your service, @paulj and team!
, comment by abuani
abuani @SawItAgaaain said:
Bless your number-crunching hearts. Timely and helpful post, especially articulating that the main purpose is to help guide us through over 2,000 shows.

While you're under the hood, please also consider how to respond to the one-star bot bombs that have affected shows the last year or two as articulated by @abuani in this thread.
Just to say, @paulj did reach out to me at some point last year and we had an opportunity to collaborate, but I didn't make enough effort to continue. It was a real missed opportunity to have a per-minute view of how ratings were coming through. I personally believe I had enough evidence to demonstrate this was happening to nearly every show, but I didn't hear from anyone from .net and didn't know the proper channel to raise my concerns. So if someone from the admin team wants to reach out, I'd be more than happy to share my methodology and donate the code in case they want to run this long term.
, comment by ReeYees
ReeYees There should be a group of people responsible for rating shows. Perhaps a diverse group of all different ages with a minimum number of attended shows. I think a rating system by year would be helpful for those seeking out popular favorites. There is no point in rating shows from one year against shows from another year. Big Cypress… NYE 2023… Magnaball… Dick’s 2012… Island Tour... Halloween ‘94… Who cares how these rate against each other? They are all amazing! Anyone who rates these as a 1 clearly has an agenda other than helping people find phenomenal shows.

A master list of the top 100 or more shows of all time would be helpful. We don’t need to see rating scores for this, just a solid list based on scores from the yearly ratings.

Big Cypress a 1? Shame on you. These folks should seriously be blocked from all future ratings.
, comment by jmediavi
jmediavi Another suggestion would be to expand the rating system to 1-10 instead of 1-5. There is a large gap between a show being considered a 4 or a 5, which probably leads to overuse of "5", or settling for "4" because no show can really be a 5. With a more fine-grained scale, you might get more "accurate" ratings, rather than just averages of 4 and 5.
, comment by paulj
paulj I've heard that Netflix started with a 1-10 scale, and then moved to a five-point scale (which is the one I first remember.) Then, 2-3 years ago, they went to a simple 2-point thumbs up/thumbs down scale. Last year they switched to a three-point scale (don't like, thumb up, two thumbs up).

Based on the October dataset, and restricting ratings to just the Modern Era (post-2009), 91% of all ratings submitted by Phish.Net users were a 3, 4, or 5. We're just like Netflix.
, comment by seethecityseethezoo
seethecityseethezoo This is like when Taylor Swift's Ticketmaster experience last year prompted legal action. Only when the most coveted and popular shows of all time (Big Cypress and cream-of-the-crop MSG shows) were impacted did action follow.

How about the little guys?? Look at the data for all 4.0 Alabama shows and the criminal rating bombs dropped on those beauties.
, comment by Frosted
Frosted Have the shows rated by year and not all time. This way we're comparing spring vs fall instead of 1999 vs 2029

Most of all Keep up the good work

The job You're doing is Grrreattt!
, comment by Choda
Choda Who’s the loser that clocked in at 1:12am to rate?
, comment by BlueMoon1894
BlueMoon1894 I would say one year would be plenty of time to get an accurate show rating.

Personally I'm one of those stat geeks who likes browsing the ratings database, mostly to find new shows outside the top 100 to check out, and I would like it to continue.
, comment by raidcehlalred
raidcehlalred Gently disagree.

The only online site I use to listen to the Dead has a rating system. I use it as a guide and find it quite useful, given I'm usually looking for something particular.

I think the rating system should require a [blank] character blurb in which the rater explains why - even if it's just bc of the Jim->LSG - the show is worth exploring. Not only will this mitigate the "how easy it is to click a star" problem, the system will have added meaning.

.02.

Also: People simply don't have to reference them.
, comment by paulj
paulj @seethecityseethezoo:

Send me some dates for the AL shows and I'll let you know. I've never been sure what 4.0 actually means...
, comment by starsky
starsky In looking at the ratings graph for the Gamehendge show, it's pretty clear that the high proportion of 1's is an attempt to manipulate the overall rating of the show and drive it down - as opposed to an honest rating of the quality of the show (and thus stay true to the ratings function providing a guide to show quality across Phish's vast catalog).

One way to handle this without disrupting the democratic nature of the ratings system is to report two ratings: "raw" - what we see now and "adjusted" - after anomalous/manipulative ratings have been removed through statistical or data analysis.
, comment by deanlambrecht
deanlambrecht I hope the rating system does not return. It’s silly to rate what is a purely subjective response. Limiting ratings to those in attendance, or giving greater weight to ratings from those in attendance submitted within a specified (short-ish) timeframe after the show, would improve the system but is utterly impractical.

Given the inherent limits of rating shows in a meaningful way, and in a way less likely to cause such truly stupid strife among fans, I ask that you simply ditch the system and don’t bring it back. If it is brought back, however, the non-NYE run shows that were rated in the 1.1.24-1.3.24 timeframe should be restored to the rating assigned before 1.1.24.
, comment by icculusFTW
icculusFTW The second and third suggestions here seem very wise to my mind

@BetweenTheEars said:
My two cents:

1. Mitigate reactionary ratings: add a cooling off period before show ratings open. Say, a week. Maybe even a month.

2. Institute "one person one rating": include only *verified* user accounts in the average ratings tabulations to limit the number of users with multiple .net accounts rating the same show multiple times.

3. Increase transparency and accountability: make all user ratings public. Any user should be able to see the ratings for any other user. Kinda like free speech... sure, you can say (almost) anything you want. But just because you can say almost anything doesn't insulate you from from any consequences from what you say.

Thanks .net team for hitting the pause button on the ratings and working to enhance their usability to guide fans to good shows!
, comment by BarryBoodowitz
BarryBoodowitz Crazy! I cannot believe there are humans whose best use of time is to write algorithms that distort the ratings of classic Phish shows.
, comment by Cantaloupe
Cantaloupe Like many, I don’t use the rating system often nor depend on it. Apparently unlike many others, I don’t applaud this move. If you have a means of targeting clear misuse, do it. If you don’t, then don’t impose half-assed reactionary measures. It feels undemocratic to respond to perceived manipulation with deliberate manipulation.
, comment by Mazegue
Mazegue Both Dayton shows got bombed SIMULTANEOUSLY on October 12th. Both shows tanked very quickly with what must have been an onslaught of 1 star votes. I remember 10/11/23 flirting with Big Cypress out of the gate, and then it dipped as one might expect toward 4.5, and all of the sudden it very very quickly dropped below 4.2. Did anyone else notice that? Thanks for crunching those numbers!
, comment by drbeechwood
drbeechwood If it comes back, only report ratings to the nearest tenth. Having a 4.832 rating is absurdly precise. All average show ratings could have an uncertainty or standard deviation associated with them.
, comment by Kelly_Jelly
Kelly_Jelly I am more appalled by the 12/30/97 impact.
3.783??? hahahaha
, comment by PartyMarty
PartyMarty Great read, thanks for the write up. Not all that surprising someone would try to carpet bomb after the Gamehendge show.

One request for the revamped ratings system would be to add half-stars, or a 10-star system, to the rating choices. Thanks admins!
, comment by DankDre
DankDre I love the ratings here, have followed closely since 2019. It's tough to say what they really represent, and this week has clearly pushed them further into the unknown. I think we all know what's going on, but do you need to prevent it? I think you probably have to let it happen, if it happens, but you will probably see it subside over time. Then the positive votes start to show up again, right the ship.

I feel the best way to keep these true is to motivate folks to vote here. You're probably going to get a lot of votes when you turn it back on; great, good start. But keep it going and encourage people to cast theirs. Hold them accountable, as one phan mentioned. Post their votes in their profile (I noticed this is active already). And make sure they can only vote once per show, no edits/revisions allowed either. Incentivize, maybe: offer access to special stat analytics once you've voted x times. (We're all geeks for data, aren't we?) Truth will prevail. Good luck.
, comment by ACDCcrab
ACDCcrab Honestly, this belongs at the Phish Studies Conference. If you have any interest in that, I’m sure this would make a fascinating piece about the fans to present. The submission window closes soon though, so I’d get on it if you’re interested. Really cool (if unfortunate) stuff though!
, comment by Rebah
Rebah It’s long past due, ratings are really mixed and weird.
, comment by thewiz
thewiz Wild that people have time for this shit (meaning manipulating the ratings). WHY? People are so damn weird.
, comment by Royal
Royal When I was really depressed last year, ratings cheered me up. I noticed I had seen over 40 shows rated 4.50 or above. It was a nice scroll through history. I think, though, that like a lot of things in this decade, society has lacked the maturity to look at things from a non-selfish point of view. I think this post-NYE tanking of Big Cypress shows the lack of maturity. I think ratings should only be able to be submitted after a review of the show.
, comment by dougsawerewolf
dougsawerewolf Suggestion: Make the ratings non-anonymous (public), and make stats available. If you rate a show, you should stand by your rating. If you bomb multiple shows, the pattern will show up in the stats. We'd find out who is abusing the system pretty quickly. Will it cause people to call others out publicly, or cause other non-preferred behavior (mailbox spamming, etc.)? Yup. But that's what the "Review" section is for - rate the show, defend your choice. I think it will spur more thoughtful discussion around shows than less. "Sunlight is said to be the best of disinfectants."
, comment by maceyo
maceyo Trump's America
, comment by TwiceBitten
TwiceBitten It’s funny that people want to spin this as noobs propping up 12/31/23 as the GOAT when in fact Gamehendge got 3 times as many 1 star reviews as Cypress. The whole thing is beyond dumb but it’s even dumber to care. I say let em stuff the ballot box and it will all sort out in the wash. Isn’t that how elections usually work?
, comment by gloverab
gloverab We can reasonably assume that neither of these shows is a 1. Would any fan of this band actually rank either of these at the lowest possible rating by any metric, other than just trying to bring it down to push another show higher?

I think in situations like this it’s not unreasonable to just remove all the “1” ratings and just move all affected shows to a GOATed status. Then people know for next time too.
, comment by Multibeast_Rider
Multibeast_Rider This has been going on for a long time with evidence previously provided. If you still have the logs you should look at rating submissions in the days after 7/14/19 which had wild fluctuations.
, comment by Sniff
Sniff @Mazegue said:
Both Dayton shows got bombed SIMULTANEOUSLY on October 12th. Both shows tanked very quickly with what must have been an onslaught of 1 star votes. I remember 10/11/23 flirting with Big Cypress out of the gate, and then it dipped as one might expect toward 4.5, and all of the sudden it very very quickly dropped below 4.2. Did anyone else notice that? Thanks for crunching those numbers!
That happens the morning after every show. There's a thread dedicated to it in the forum. Also, every show "flirts with big cypress out the gate" because the first people to log on to rate a show tend to be overzealous and hit it with 5 stars, even if it was just an okay show.
, comment by Multibeast_Rider
Multibeast_Rider I think there is a 2 part solution to this:

Part 1: Salvage and permanently pause the current rating system
1. If you have the logs, dedupe ratings on the same show from the same IP address. Or even just remove all of the ratings after 5 from the same IP to account for some people on shared connections.
2. Permanently freeze this ratings system.
3. Continue publishing the legacy ratings.

Part 2: Build the new rating system
1. Require a Phish.net account with validated email and mobile number that has been active at least 90 days with some forum participation to cast any votes at all.
2. Share the rating distribution data and make ratings non-anonymous
3. To start, only open ratings for the most recent tour.
4. Each month, open ratings for one previous tour with some additional blog and forum content to help encourage people to re-listen to shows. Work your way back to the beginning, plus open ratings on all new shows 24 hours after the show ends.
, comment by BrotherOpener
BrotherOpener I'm cool with ratings going away, but if they do stick around, how about some accountability?

Rando ideas from someone with little programming experience or availability to help out. I deal with the goddamn customers so the Phish.net engineers don't have to. I have people skills. I am good at dealing with people. Can't you understand that? What the hell is wrong with you people?

Oh yeah, ideas.

-Raters have some readily viewable analysis of their rating history indicated next to their name; box plots would be useful and they kinda look like cartoon turtles.
-Raters indicate if they attended live, streamed, or listened afterwards. No other option. Having in-person attendance verified is the golden seal. Liars get the scarlet donut next to their name in perpetuity.
-Raters have their number of shows attended live indicated easily next to their name. Maybe some way of indicating the spread of shows across eras by color and hue?
-If attended live, you can rate based on both musical achievements and/or experience. I've had a great time at shows that were more about vibe than musical achievement. The show I went to while my ex was cleaning out her stuff from our once-shared apartment was a maelstrom of every emotion which I shan't forget soon. Maybe a Mario power-up shroom icon to indicate a psychotropic state of mind?
-I'd love to select my own criteria for ratings. For example, only use select Netters (using a query function) for my rating calculations.

Y'all are great for making this community apparatus exist. Giggity.
, comment by STA5H_M4N
STA5H_M4N Just an idea…require users posting reviews to provide their ticket # from their attendance of the show or LivePhish.com stream confirmation # when they post. A way to prove that you were there in person or on the couch. This would provide two-step verification and help keep ratings of a particular show in check…at least initially.

The more I think about it: when you first heard this particular show, were you there? Did you couch tour? Did you listen to the recording?

This should be a tracked metric.

If you were at NYE MSG ‘23/‘24 as I was, your review will be drastically different from that of someone who was not there, listening the next day without all the visuals (and comparing it in their mind to the last time they played Gamehendge).

I also wholeheartedly agree that to compare stellar shows throughout the years is impossible. Nothing will ever top Big Cypress for what it was….Same with Gamehendge NYE….they are incomparable.
, comment by BigJibbooty
BigJibbooty A little late to the party here but am glad the mods decided to do this. As someone relatively new to the band, I use the ratings to help me pick shows to listen to. People who game the ratings - presumably ranking shows they went to higher and those they didn't lower, for some weird ego thing about having gone to "the best" shows - really do a huge disservice to people like me who use the ratings.

One easy fix would be to eliminate all 1 star ratings from all shows. I mean seriously - there has never been a 1 star show. Even the worst shows have at least something redeeming about them. Has anyone here actually ever had a BAD time at a show? Some obviously aren't as good as others, but as someone who went to what were rated two of the worst shows in recent history (Grand Prairie 2016), they were still far better evenings than anything else I could have been doing. The only possible reason anyone would ever rate anything 1 star is to try to lower that show's rating.

Beyond that, I think @betweentheears' 2nd and 3rd suggestions are definitely the way to go. Only real accounts, one rating per user, and everyone's ratings posted on their profile pages.
, comment by SawItAgaaain
SawItAgaaain @white_lightning said:
Let's go back to when shit was fun. You either had to be there, talk to friends who were, or listen to it...of course back then listening to it if you weren't there meant waiting for a tape. But still. It was more fun.

Now we have people gaming a ratings system. To what end I can't even guess.
The internet is ruining the scene.
, comment by Dr_Venkman
Dr_Venkman Just as long as they don’t get eliminated I’m cool with whatever you need to do. Some of us actually use them and are glad to have them - for instance, when I’m doing a full year of shows but don’t necessarily want to listen to every single one
, comment by seethecityseethezoo
seethecityseethezoo @paulj said:
@seethecityseethezoo:

Send me some dates for the AL shows and I'll let you know. I've never been sure what 4.0 actually means...
@paulj you got it!
Any of 5/27/22-5/29/22; the highlights are 5/27 Set 2 and 5/28 Set 1, the overall best sets of the run. But it’s all good. Also 7/12/23, and then 7/30/21, but it’s not a rating bomb victim.
, comment by Multibeast_Rider
Multibeast_Rider @BigJibbooty said:
One easy fix would be to eliminate all 1 star ratings from all shows. I mean seriously - there has never been a 1 star show.
I get the logic, but you could also argue that all 5-star ratings should be removed because there is no perfect show.
, comment by mcgrupp81
mcgrupp81 Simple solution. Relegate the task of rating shows to the Phish.net employees. When I started collecting shows, I had the Pharmer’s Almanac published after the end of 1.0 and Phish.net’s 2-cents Charlie as my primary guides.

The authors of the book and Charlie had enough experience listening to shows that I trusted their opinions and I filled in the gaps on my own.

There’s a review in the Almanac of a Sept ‘99 show where one person says the concert was great and the other panned it. I listened to that tour and knew the negative guy was right, as the band was sluggish and it was a tour low point. How many people who rate shows on Phish.net have only seen one or two shows on the tour, or haven’t listened to all the others played previously?
, comment by Marc0Esquand0las
Marc0Esquand0las This is why we can't have nice things
, comment by melt_the_tek9
melt_the_tek9 I just don’t ever rate shows and don’t even know or remember if I have. Maybe I rated my first show or Atlanta from 2015. How to rate a show? Sometimes after years I wind up liking a show better than ever. Or after said number of years I dislike shows I used to really like. If anything all the .net talk on ratings turns me off from contributing!
, comment by pabalive
pabalive Honestly, it would likely even out in the end. Think you guys are over analyzing this entire thing here. More reviews is more traffic to the site, right? What am I missing?
, comment by Multibeast_Rider
Multibeast_Rider @pabalive said:
Honestly, it would likely even out in the end. Think you guys are over analyzing this entire thing here. More reviews is more traffic to the site, right? What am I missing?
I think for your average mid-tour, uneventful show that is true. I also think it is somewhat true over very long periods of time. But if you look at the data posted in the thread, they are clearly being manipulated.

I think there are a couple of people who've setup hundreds or thousands of accounts and written scripts to shift ratings. It is crazy that someone has done that and it should definitely be put to a stop.
, comment by Mazegue
Mazegue @Sniff said:
@Mazegue said:
Both Dayton shows got bombed SIMULTANEOUSLY on October 12th. Both shows tanked very quickly with what must have been an onslaught of 1 star votes. I remember 10/11/23 flirting with Big Cypress out of the gate, and then it dipped as one might expect toward 4.5, and all of the sudden it very very quickly dropped below 4.2. Did anyone else notice that? Thanks for crunching those numbers!
That happens the morning after every show. There's a thread dedicated to it in the forum. Also, every show "flirts with big cypress out the gate" because the first people to log on to rate a show tend to be overzealous and hit it with 5 stars, even if it was just an okay show.
Based on your logic, 10/10 would have tanked on 10/11, but both shows dropped at the same time on 10/12. Mathematically speaking, both shows received dozens of 1-2 star ratings in the same short period on the 12th, after receiving a steady stream of 3-5 star votes. It doesn’t add up. I’ve seen shows skyrocket out of the gate and then slip on day 2 plenty of times, but this was extreme and appeared to be coordinated on some level. That 2 different shows received roughly the same number of votes in the same window of time, with both straying very significantly from their voting pattern, is highly improbable and a red flag IMO. Also, who really cares? At the end of the day, these ratings are just for fun, and anyone trolling the ratings needs to get their priorities in order. I’m not losing any sleep over this.
, comment by BenZoldan
BenZoldan That's a $1 question. That show didn't make other shows worse. People wanted to validate their experience as “best” and gamed the system to do so.
, comment by coolbutt
coolbutt Just use a star rating ⭐️
⭐️hit it, quit it
⭐️⭐️not bad
⭐️⭐️⭐️well liked
⭐️⭐️⭐️⭐️very popular
⭐️⭐️⭐️⭐️⭐️exceptionally popular
, comment by ObviousFool
ObviousFool @Multibeast_Rider said:
@pabalive said:
Honestly, it would likely even out in the end. Think you guys are over analyzing this entire thing here. More reviews is more traffic to the site, right? What am I missing?
I think for your average mid-tour, uneventful show that is true. I also think it is somewhat true over very long periods of time. But if you look at the data posted in the thread, they are clearly being manipulated.

I think there are a couple of people who've setup hundreds or thousands of accounts and written scripts to shift ratings. It is crazy that someone has done that and it should definitely be put to a stop.
Thousands of accounts? Please, there aren't even thousands of votes.

And most of these shows have 2000 or fewer votes. That's less than .5% of the fanbase.

Ratings are meaningless.
, comment by c_wallob
c_wallob One thing I love about ratings is that sometimes you’re in such an amazing um….state of mind….that phish could play Farmhouse into a cover of the Big Boat album and it is a 5 star show when you’re walking out. Although I don’t always agree with the ratings, they are a decent barometer of the overall phan perception of the show, and not just the wind tunnel of spunions (yours truly, included) shouting, “Best Show Ever.”
, comment by adaniel87
adaniel87 @ACDCcrab said:
Honestly, this belongs at the Phish Studies Conference. If you have any interest in that, I’m sure this would make a fascinating piece about the fans to present. The submission window closes soon though, so I’d get on it if you’re interested. Really cool (if unfortunate) stuff though!
Totally agreed. The talk from the first conference about show ratings was fascinating
, comment by gladtobeaglenn
gladtobeaglenn Bring back the ratings but only allow the band to rate shows.