This is § d of “Section 3: Fake views exploit the appeal of valid drama.”
A message (post, tweet, Instagram item, etc.) “goes viral” through media users whose earlier choices of source, registered via “Likes,” “Follows,” and clicks, have unwittingly built up a data profile of preferences. News/views feeds imply (or trope) a sphere of source value that altogether (across sources) mirrors one’s online identity or medial personality, built up through “Likes,” etc. Thereby, duplicitous sources use their access to data archives to play into “potential pathways of influence, from increasing cynicism and apathy to encouraging extremism” (source A), inasmuch as identity comfort prevails over interest in validity. Frivolous and casual attention is easier than astute and deliberative attention:
“[P]eople prefer information that confirms their preexisting attitudes (selective exposure), view information consistent with their preexisting beliefs as more persuasive than dissonant information (confirmation bias), and are inclined to accept information that pleases them (desirability bias). Prior partisan and ideological beliefs might prevent acceptance of fact checking of a given fake [views] story.” [A]

Virality happens among identity-mirroring filter bubbles, like a trending froth that variably:
- fails to question the credibility of information in interpersonal relations, and transposes that habit to texts.
“Individuals tend not to question the credibility of information unless it violates their preconceptions or they are incentivized to do so....Moreover, they are more likely to accept familiar information as true. There is thus a risk that repeating false information, even in a fact-checking context, may increase an individual's likelihood of accepting it as true....[P]eople tend to remember information, or how they feel about it, while forgetting the context within which they encountered it.” [A]
That pertains, again, to:
• selective exposure: preferring information that confirms preexisting attitudes.
• confirmation bias: viewing information consistent with preexisting beliefs as more persuasive than dissonant information.
- fails to counteract sociocentrism in interpersonal relations, and transposes that to texts.
“People also tend to align their beliefs with the values of their community.... [M]ediation of fake [views] via social media might accentuate its effect because of the implicit endorsement that comes with sharing.” [A]
- fails to distinguish dramatic value (pleasing oneself) from the validity of that pleasure. Substituting ‘views’ for ‘news’ here:
“The key takeaway is really that content that arouses strong emotions spreads further, faster, more deeply, and more broadly on Twitter....Fake [views] ... consistently reach a larger audience, and it tunnels much deeper into social networks than real [views] do....[F]alsehoods [are] 70 percent more likely to get retweeted than accurate news” (source B).
That pertains, again, to:
• desirability bias: being inclined to accept information that pleases.
I believe that one’s predispositions toward interpersonal relations transfer to personified texts: Frivolous and casual dispositions toward texts mirror frivolous and casual dispositions toward the others in one’s life with whom one shares. Duplicitous sources play into one’s general comfort with frivolous and casual interest, such that what’s shared or retweeted sustains frivolous and casual interest toward others, which in turn “warrants” frivolous and casual regard for validity.
There is no statistical evidence that the media platforms, as such, cause that.
“The massive differences in how true and false news spreads on Twitter cannot be explained by the presence of bots,” Aral told me [the author of The Atlantic article (B), about Science research on Twitter]....[A]utomated bots were spreading false news, but they were retweeting it at the same rate that they retweeted accurate information. [But] “It can both be the case that (1) over the whole 10-year data set, bots don’t favor false propaganda and (2) in a recent subset of cases, botnets have been strategically deployed to spread the reach of false propaganda claims,” said Dave Karpf, a political scientist at George Washington University, in an email. [B]

Inasmuch as virality is broad (one posting disseminates to very many followers), “there is empirical evidence that misinformation is as likely to go viral as reliable [views] on both Facebook and Twitter” [A]. But inasmuch as a posting is shared or retweeted (recursive virality, or “depth” of dissemination), false information is “likely to be retweeted more frequently and more rapidly than true information, especially when the information involves politics” [A]. “A false story reaches 1,500 people six times quicker, on average, than a true story does” [B]. This wouldn’t be because persons want falsehood to be spread; rather, the dramatic value of the content compels interest and sharing.
Thereby, aggregate trending of interaction (not system factors) gives networks emergent properties that trace back to self-identical dispositions toward others, thus toward information. A “social network” gains emergent character through aggregate action, not systems behavior.
“Homogeneous social networks, in turn, reduce tolerance for alternative views, amplify attitudinal polarization, boost the likelihood of accepting ideologically compatible news[/views], and increase closure to new information. Dislike of the ‘other side’ (affective polarization) has also risen. These trends have created a context in which fake [views] can attract a mass audience” [A].

The network (personified as “reducing,” “amplifying,” “boosting,” and “increasing”) merely mirrors emergent sociality, and it can be exploited inasmuch as trending is based in frivolous and casual engagement with oneself, with others, and with pretenses of validity.
Next: Section 3e: “‘my space, my time’: smartly defining one’s medial sphere”