Most information we receive about the real world comes through electronic media, especially the internet. Internet information processing is increasingly influenced by artificial intelligence. Google, Facebook, Twitter, and Microsoft are open and boastful about their use of AI. Humans are involved in this processing but frequently play secondary roles, such as classifying and tagging news stories for machine processing, as in the case of Facebook. These are hybrid computer-human AIs. Wikipedia is written mostly by humans, but its decision-making employs machine-oriented procedures (MOPs) – procedures that are more suitable for computers than for humans. In other words, these procedures are algorithms stripped of the human touch, even when performed by humans. In this sense, Wikipedia might be said to be a hybrid human-AI system as well. The same is true for Reddit. A large number of other services and websites use AI, and the rest of the Web and everybody using it are affected by them. But this is only the beginning. These hybrid AIs interact in ways that were not anticipated by their creators and might not be appreciated even now. Together, they form a single hybrid AI entity, which will be referred to here as the San Francisco AI. Only the six entities referenced above are considered components of the San Francisco AI. To narrow the discussion even further, of all the Google and Microsoft services, only Google Search and Microsoft Bing are considered. The name San Francisco AI was selected because five of the six components of this hybrid AI (all except Bing, the least significant one) are headquartered in or near San Francisco, CA. This paper shows that the San Francisco AI exists and explains how it operates and distorts the public’s perception of reality in an extremely leftward and anti-American(1) direction.
The San Francisco AI should not be confused with the leftist echo chamber. The leftist echo chamber is formed by humans, while the San Francisco AI is formed by computer algorithms with human assistance and corporate support. The San Francisco AI is a significant contributing factor to the escalation of climate alarmism, the turning of the MSM into FSM (fakestream media), political polarization, anti-Trump hysteria, and other social ills. The influence of the leftist echo chamber, governments, and other offline factors on the San Francisco AI is large, but it is outside the scope of this paper. The offline suppression of conservative views and the hard Left’s empowerment by the Obama administration are out of scope, too.
This paper analyzes the flow (input, output, and internal processing) of the politically relevant informational signal to, from, and inside the San Francisco AI, and its influence on other online and offline entities. The San Francisco AI probably started forming in 2011, became what it is today around 2014, and has been changing constantly. Each of the six analyzed components of the San Francisco AI distorts the incoming signal, suppressing the conservative and centrist segments of the political spectrum and amplifying the leftist message. Further, information typically makes multiple rounds inside the San Francisco AI, until some spectral intervals (such as climate realism) are eliminated and others (like “the resistance”) rise to hysterical levels. The San Francisco AI also inconspicuously influences other parts of the web, including traditional media online, which in turn influence traditional media offline and pull them further to the left.
From ordinary citizens to the media and politicians, we have come to trust Google, Wikipedia, and the so-called social media. Even more, information that comes from computers is often perceived as more authoritative than what is said or written by individuals. Internet company executives suffer the most damage from the San Francisco AI, with the fakestream media following closely. Together, they seem to have lost touch with reality on many issues, from the existence of the climate debate to the legal consequences (18 U.S. Code § 2385) of advocating the overthrow of the government, apparently committed by Chelsea Handler / Netflix just a few days ago.
AI is also easily fooled by organized efforts; much of the San Francisco AI’s input is garbage pumped in by NGOs and advocacy non-profits. Each component of the San Francisco AI can be viewed as an electronic device that has input and output and maintains an internal state. The internal state might comprise the perception of a political center or even an extensive knowledge base. Input can come from, and output can go to, another component, a non-component online entity, or real human users. The input and output signal can be news, opinions, or other information. Each component performs multiple operations on the input, including one or more of the following:
- Left & anti-American Shift – from Pravda-like distortions and fabrications (using the Facebook injection tool, for example) to a milder amplification of the leftist signal and reduction of the conservative/anti-leftist one.
- Filtering Outliers in the input signal, possibly utilizing the internal state, especially the perceived political center. This operation has the effect of removing much of the moderate and conservative information.
- Consensus-Seeking Filtering – gives effective veto power to the well-organized anti-American Leftist clique.
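A minimal sketch of how one such component might be modeled under the “electronic device” framing above: opinions as numbers on a left-right axis, with the three operations applied in sequence. The scale, thresholds, and class below are my own invented illustration, not anything these companies have published.

```python
import statistics

class Component:
    """Toy model of one hybrid-AI component. An opinion is a float in
    [-1, 1]: negative = left, positive = right (an invented scale)."""

    def __init__(self, center=0.0, shift=0.1, width=0.5):
        self.center = center  # internal state: the perceived political center
        self.shift = shift    # strength of the leftward shift
        self.width = width    # half-width of the outlier filter

    def process(self, signal):
        # 1. Left shift: move every opinion toward the left end of the axis.
        shifted = [x - self.shift for x in signal]
        # 2. Filter outliers relative to the *perceived* (not real) center.
        kept = [x for x in shifted if abs(x - self.center) <= self.width]
        # 3. Self-calibrate: the perceived center follows the output median.
        if kept:
            self.center = statistics.median(kept)
        return kept

c = Component()
out = c.process([-0.8, -0.4, -0.1, 0.2, 0.5, 0.9])  # a mixed-spectrum input
# The far ends of the spectrum are filtered out, and the component's
# perceived center has drifted left of zero.
```

Feeding the output of one such object into another is all it takes to get the loops discussed later in the paper.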
Every considered component changes its internal state (Self-Calibrating). The components move their perceived political centers leftward. It seems as if they move them to better match the input signal, but attributing causation is not possible. There are also external factors, especially pressure from the Obama administration in 2014-2016 and pressure from the EU and foreign governments, which continues now.
The Structure and Operations
Interaction among online entities, including San Francisco AI components, is an important concept here. Each San Francisco AI component is a hybrid AI, i.e., an artificial intelligence system assisted by humans. Thus, interaction can be direct (e.g., the Google bot reading a Wikipedia article) or intermediated by a human (e.g., a person editing a Wikipedia article using results from a Google search). By definition, only one level of human intermediation is allowed as part of an interaction among online entities. For example, if Jane googled something, read the top result, told John what she read, and John edited Wikipedia based on her words, that was not an interaction among online entities. Offline interactions between humans, including the infamous leftist echo chamber, are not considered here.
The considered components are Google Search, Facebook, Twitter, Wikipedia, Reddit, and Microsoft Bing. The term component comprises not only the computers and software running on them, but the whole corporate hierarchy serving them. In the case of Google Search, it is the whole company up to Eric Schmidt but excluding some departments that do not work on search. In the case of Microsoft Bing, the Bing department is a component.
Left & Anti-American Shift means either an anti-American or a Left shift; the two are frequently indistinguishable. The left bias of the corporations that operate these components needs no explanation. The anti-American bias stems from the fact that they operate worldwide and together derive most of their revenues abroad. While operating in other countries, they need to appease forces that are hostile to the U.S. These forces are frequently supported by the natural animosity that local populations and business communities feel toward huge American corporations that appear to threaten their culture and their economies. On the other hand, their freedom in the U.S. is restricted only by threats and harassment from the Left. Thus, their anti-Americanism is rational behavior. These biases gradually grow through self-calibration, as described below.
Filtering Outliers is a normal technique in the natural sciences, where a normal or another thin-tailed probability distribution is expected. It is rarely applicable to data in social fields, where probability distributions have fat tails (as shown by the great Nassim Taleb(2)). Applied to political views, filtering outliers is called suppression of minority views – something the Founders wanted to prevent. Most conservative news and opinions are eliminated because the input to which the filter is applied is already heavily biased toward the Left. In addition, the filtering creates the impression that the surviving moderate views are the rightmost edge of the spectrum. Facebook’s internal Trending Review Guidelines contain instructions for filtering outliers, which was done first manually, then algorithmically.
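To make the point concrete, here is a small hedged illustration (all numbers invented): a standard k-sigma outlier filter applied to an input that is already shifted left ends up cutting only the right-hand points, because the sample mean – the filter’s reference point – sits left of the true center.

```python
import statistics

def filter_outliers(xs, k=2.0):
    # Drop points more than k standard deviations from the sample mean -
    # reasonable for thin-tailed data, dubious for political opinions.
    m = statistics.mean(xs)
    s = statistics.stdev(xs)
    return [x for x in xs if abs(x - m) <= k * s]

# A left-heavy input: 16 left-of-center opinions and 2 on the right.
signal = [-0.6] * 8 + [-0.4] * 8 + [0.9] * 2
survivors = filter_outliers(signal)
# Only the right-hand opinions (0.9) fail the k-sigma test; every
# left-hand point survives, so the "cleaned" signal is purely left
# of center.
```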
Consensus Seeking is a good decision-making approach when practiced in a real community. But in an artificial community comprised of many individuals without real affinity or even in-person communication, it can be practiced only as a machine-oriented procedure (MOP). A consensus-seeking MOP allows a tight bloc to downvote all content and defeat any decisions and even candidates it does not like. Consensus-seeking MOPs complement the filtering of outliers because they make it possible to marginalize even commonly held views and widely known facts. The science of CO2 is one example.
A consensus-seeking procedure can be explicit, like requiring a supermajority of 70-95% of the votes. But the San Francisco AI is full of hidden consensus-seeking MOPs. One example is allowing both upvoting and downvoting. This is practiced by most subreddits on Reddit. It is also practiced by Wikipedia in its “elections” and, less obviously, in article editing. Downvoting allows any intolerant bloc to suppress any news, opinions, and persons it doesn’t like. The anti-American Left exploits its bloc power all the time. The recently uncovered banning or restriction of accounts and posts by Google, Facebook, and Twitter in response to organized leftist campaigns may be considered an application of a consensus-seeking MOP(3).
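A toy sketch of how such a hidden MOP operates (all numbers invented): rank items by net score, upvotes minus downvotes, and watch a coordinated bloc bury the item with the most organic support.

```python
def rank(posts):
    # Sort by net score (upvotes minus downvotes), the usual hidden rule.
    return sorted(posts, key=lambda p: p["up"] - p["down"], reverse=True)

posts = [
    {"title": "dissenting item", "up": 120, "down": 10},  # most upvoted
    {"title": "routine item A",  "up": 40,  "down": 5},
    {"title": "routine item B",  "up": 30,  "down": 8},
]

# A 100-member bloc downvotes the one item it wants suppressed.
posts[0]["down"] += 100
ranked = rank(posts)
# The dissenting item now sits last despite having by far the most
# upvotes: a small bloc outweighs a dispersed majority.
```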
Self-Calibrating is a form of learning that likely happens in each component of the San Francisco AI. The components seek to position themselves at the perceived center of public opinion, or slightly to the left of it. But they might perceive this center as the median of the input signal, which lies far to the left of the real center. When they gradually adjust the perceived center further to the left, they skew the signal further to the left. The components are interconnected, so much of the input of one component is the output of another. That creates a never-ending loop – the components slide leftward chasing the signal, but the signal shifts to the left faster. Self-calibrating mechanisms include personnel changes: examples are the departure of Wikimedia Board members, a purge of editors on the Wikipedia Arbitration Committee, and the recent firing of Google engineer James Damore.
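The never-ending loop described above can be sketched in a few lines. This is a deliberately crude simulation under invented parameters: two components each recalibrate toward the median of the other’s output, while each emits an output slightly left of its own perceived center.

```python
def recenter(center, observed_median, rate=0.5):
    # Move the perceived center partway toward the observed input median.
    return center + rate * (observed_median - center)

def emit(center, shift=0.05):
    # A component's output median: its own perceived center, shifted left.
    return center - shift

a = b = 0.0  # both components start at the true center
history = []
for _ in range(10):
    a = recenter(a, emit(b))  # A calibrates on B's output...
    b = recenter(b, emit(a))  # ...and B on A's, closing the loop
    history.append((a, b))
# Neither component ever converges: each chases the other's leftward
# drift, so both perceived centers slide left without bound.
```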
Not surprisingly, their new center is the belief in a “gender spectrum.” Seriously, Bill Nye lent his “scientific” support to this nonsense. Google fired an employee who publicly disagreed with this belief. A snowball in hell stands a better chance than climate realism does in the San Francisco AI.
A real-world informational signal typically passes through multiple AI components, which erase the conservative spectrum and amplify the leftist one. Further, the components (which are corporations, after all) tend to emulate each other, hitting the same opinions (“spectral intervals”) harder. Climate dissent, all the way from unapologetic climate realism to lukewarming, is an example of a heavily hit spectral interval. But the Left has learned to manipulate the San Francisco AI in such a way as to hit the most important opinions and individuals in a coordinated manner and at a coordinated time.
This conclusion runs against the common wisdom that the internet is a great equalizer that allows everybody to express his or her opinion. Yes, we can express opinions, but nobody reads them if they happen to contradict a strong Leftist agenda.
Finally, the perverted signal (including news, opinions, and “science”) loops back to the Traditional Media offline, because journalists, editors, and producers use the internet to read news and find information like everybody else. That pushes the Old Media further to the anti-American Left.
For almost a hundred years, citizens of developed countries have relied on impersonal media for an ever-increasing share of important information about their surroundings, at the expense of perception by their own senses and conversations with neighbors and other acquaintances. Today this dependence has reached such a level that some people check the weather on their smartphones instead of looking out of a window! The Internet accelerated this tendency. But a new phenomenon arose about five years ago: Internet giants started using Artificial Intelligence (AI) to decide which information to give us. Then their AIs started interacting among themselves, leading to effects that have probably deceived even the management of those companies. The corporations are open and even boastful about their use of AI, but they are less than transparent about the ways in which these AIs are guided or assisted by humans. Note that the difference between guidance and assistance is a matter of interpretation: one man’s guidance is another man’s assistance. Further, interaction between the AIs of different companies frequently happens through human intermediation, hiding the fact that it is still an interaction between AIs. Some of the most influential Internet entities are Google, Facebook, Twitter, Microsoft (TFGM), Wikipedia, and Reddit. Interaction between their hybrid AIs gave rise to a new entity, the San Francisco AI, which is a hybrid AI, too. The following diagram describes the role of the San Francisco AI on the internet. I call the whole Internet-centered communication layer between us and the real world The Matrix, as a joke. Or as half a joke.
In traditional media, journalists used to collect information in the field or receive it from press agencies. Then the TV networks and newspapers processed it – cross-checking, discussing with colleagues, verifying facts – and printed or displayed the edited result to readers or viewers. That has changed.
On the diagram, The Matrix is everything between the real world (the box on top) and the users (the box on the bottom). The San Francisco AI is shown as a dashed rounded rectangle in the center of the diagram. Black arrows show the input – information from the real world entering The Matrix. In the brave new world, the output of the traditional media is just one of the input sources for The Matrix. The Matrix is omnivorous and accepts PR campaigns by NGOs and political groups as genuine news. Grist or Mother Jones are as good an input for The Matrix as The New York Times. This might be among the reasons The New York Times went down to their level. AI cannot distinguish between genuine news and PR campaigns: Google Search recognizes a press release published on a PR Newswire site as a press release, but not when it is published on the Greenpeace site. Some San Francisco AI components – Facebook, Twitter, Wikipedia, and Reddit – accept substantial human input. Google Search and Bing don’t (with some exceptions, such as users’ click-through rates and feedback); they use information that was already placed on the Internet by somebody else.
The information flow inside the San Francisco AI is not shown on the diagram, because there is a bidirectional information flow between almost every pair of components. That results in signal loops and resonance at certain “frequencies.” The allegations of a Trump–Russia conspiracy might be a manifestation of a resonance inside the San Francisco AI that continuously spills over into traditional media; it is hard to be sure because there is also an organized attempt to undermine President Trump. The management of the corporations that form the San Francisco AI might be unaware of their role as AI enablers, and not fully aware of the information cascades the San Francisco AI creates. The San Francisco AI heavily influences the rest of The Matrix – online media, other websites, and even traditional media offline. This influence is shown as thick blue arrows. It creates additional loops, downward spirals, and information cascades.
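The resonance metaphor can be made concrete with a toy gain-loop model (the gains are invented, purely for illustration): each component multiplies a story’s “volume” by a topic-dependent gain, and the story circulates around the loop. A combined gain above 1 means exponential amplification; below 1, the story dies out.

```python
def circulate(volume, gains, rounds):
    # Pass a story's volume around a loop of components, each applying
    # its own per-topic gain; record the volume after every round.
    history = [volume]
    for _ in range(rounds):
        for g in gains:
            volume *= g
        history.append(volume)
    return history

# Combined loop gain 1.3 * 1.1 * 0.9 = 1.287 > 1: resonance.
favored = circulate(1.0, gains=[1.3, 1.1, 0.9], rounds=8)
# Combined loop gain 0.8 * 0.9 * 0.7 = 0.504 < 1: the story vanishes.
suppressed = circulate(1.0, gains=[0.8, 0.9, 0.7], rounds=8)
```

Note that no single component needs a large bias: three modest per-component gains compound into either a blow-up or near-total erasure after a handful of round trips.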
Many, but not all, citizens receive information from The Matrix without knowing what routes it took in The Matrix and how it was distorted there. Younger people tend to get most of their news from The Matrix and are among the most affected. Other heavily affected groups are journalists (even in the offline media), staffers and assistants of politicians, and the executives of The Matrix’s corporations themselves.
Traditional media also leaned to the left, filtered out conservative views, engaged in consensus seeking (“groupthink”), and slowly slid further left. But they were still built on humans, who sympathized with their friends and acquaintances of all political preferences, valued truth, respected freedom of speech, did reality checks, and so on. Of course, none of this applies to AI.
Humans are also incapable of eliminating dissenting views as completely as AI does, except in totalitarian regimes. Humans talk to each other. To eliminate a dissenting opinion, a human editor needs to at least learn something about it. The AI doesn’t: it performs a few algebraic operations, and oops – the dissenting opinions are zeroed out.
Most humans empathize with other humans, and not many of them would join the side of deep ecology against humanity. At least not in the U.S. And the traces of deep ecology are all over climate alarmism. It was none other than Prince Philip, the Duke of Edinburgh, a former president and current President Emeritus of the WWF, who wrote: “I must confess that I am tempted to ask for reincarnation as a particularly deadly virus, but it is perhaps going too far.” (see Goldstein v. CAN et al., Opposition to Defendants’ Motion to Dismiss). Perhaps journalists would have learned these facts and would not be on the side of those who call their breath “pollution.” But today’s journalists get their facts from The Matrix, which is filtered by the San Francisco AI. And the San Francisco AI does not care about viruses (other than computer viruses) or human breath. It does not hide facts that are important to humans, but neither does it prioritize them over allegations of coral bleaching or claims that climate change “may threaten eelgrass.” The important facts get buried under the pile of rubbish pumped onto the web by enviro-groups supported by governments with bottomless pockets.
Google Search – the Main Part of the San Francisco AI
Google Search has an internal state that includes a huge knowledge base, which is probably heavily skewed toward the left and certainly accepts climate cult dogma as fact. Since 2011, Google Search has been demoting in its results websites that disagree with its “facts.”
Google Search’s input is the whole web, including content from all other San Francisco AI components. Wikipedia, highly ranked “social media” content, the fakestream media, and academic websites (as leftist as most other Google seed sources) are especially privileged. Of note, Google also collects users’ behavioral data through Google Analytics beacons, proxy servers used by the Chrome browser, and other means. Another type of input is users’ queries and their clicks on the search results.
Google Search’s output to the end users is the ranked results returned for their queries. These results are typically trusted by the users. Most users click one of the top links, and anything not on the first page might as well not exist. Top results are also believed to be more authoritative than the rest. Google Search heavily and undeservedly promotes Wikipedia, another San Francisco AI component. Google Search also directs users to news and videos it likes and gives on-the-page answers to some queries.
But Google Search output to the San Francisco AI and to the rest of The Matrix is even more important.
- Journalists, editors, producers, and executives of the traditional media, both online and offline, are heavily influenced by Google Search in the same way as ordinary users. Through this simple human intermediation, Google Search biases media and non-media websites.
- Editors of Wikipedia and Reddit, as well as curators of Facebook and Twitter, also rely on Google Search as a strong informational signal.
- Google Search makes or unmakes websites at will because most of the web relies on Google search traffic. Sites that do not agree with Google may suffer penalties.
- Further, Google Search penalizes websites that link to places Google does not like. These are known facts among SEO professionals (2, 3). This behavior causes unrelated website owners and developers to adjust their content to Google’s whims and even remove links to what Google considers bad domains.
- Some Google penalties are site-wide. For example, if a conservative website presents the realist side of climate debate among other topics, the whole site might be penalized, including unrelated conservative content. That causes a harsher suppression of conservative views than would be expected based on known Google biases.
We see that Google Search creates multiple signal loops and information cascades. One is the Google–Wikipedia loop: Google heavily relies on Wikipedia content, and Wikipedia editors heavily rely on Google results. Another is the Google–FSM loop. Google probably includes the leading fakestream media outlets (The New York Times, CNN, The Guardian, etc.) on the short list of its seed websites. Links from these websites spread a lot of “link equity” downstream and across the web, and Google Search probably uses some of their content in ranking unrelated websites. Then FSM journalists and editors see their own and similar information at the top of Google Search results and take it as an independent, computer-calculated validation. That confirms and reinforces their biases. In fact, it is circular referencing, akin to what passes for peer review in the so-called “climate science.”
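The “link equity” mechanism behind these loops is essentially PageRank. Below is a plain power-iteration PageRank sketch on an invented four-site web (the site names and link graph are mine, purely for illustration): two sites that link to each other and also receive links from the rest accumulate almost all of the rank, which is the circular-referencing effect described above.

```python
def pagerank(links, d=0.85, iters=50):
    # Plain power-iteration PageRank; `links` maps page -> outgoing links.
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / n for p in pages}
        for p, outs in links.items():
            for q in outs:
                new[q] += d * rank[p] / len(outs)
        rank = new
    return rank

# Invented toy web: two mutually linking "seed" sites, plus two blogs
# that link to them but receive no links themselves.
links = {
    "seed_news": ["seed_wiki"],
    "seed_wiki": ["seed_news"],
    "blog_a": ["seed_news", "seed_wiki"],
    "blog_b": ["seed_wiki"],
}
ranks = pagerank(links)
# The mutual loop captures nearly all the rank; the blogs keep only
# the baseline (1 - d) / n.
```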
An open question is whether the Google Search AI, which performs no reality checks, slid toward extreme climate alarmism and pulled Google executives with it, or the other way around – Google executives took cues from the powers that be and programmed Google Search to support extreme climate alarmism.
Wikipedia

Wikipedia has well-known problems, which include: the unexplained exit of respected executives and directors; the foreign control of the Board of the Wikimedia Foundation and its corporate body; the leftist background of the CEO and key members of the executive team; heavy leftist and anti-American bias; tight control by the Board over the nomination, election, and certification of the editorial hierarchy; and the susceptibility of the nomination and election process to fraud by the executives and/or Board members. This susceptibility is aggravated by the fact that Wikimedia’s Secretary & Legal Counsel is a veteran of George Soros’s Open Society Foundations.
Wikipedia’s content is written by human editors, but they are directly influenced by Google Search, Bing, and other components of the San Francisco AI. Its internal problems are largely unknown to the public, and it enjoys undeserved respect and huge influence. Wikipedia is considered accurate on “non-controversial” subjects, but many articles that look non-controversial are hugely controversial and full of falsehoods (like Scientific opinion on climate change).
Wikipedia’s output to the San Francisco AI and to The Matrix is even more important. It provides a large direct input to Google Search, through Google’s manual review and, probably, through its Knowledge Vault. Wikipedia also provides input to other components of the San Francisco AI and has huge influence on online and offline media.
Wikipedia’s editorial procedures are a prime example of consensus-seeking MOPs, and Wikipedia is remarkable for the extent to which it suppresses climate realism.
Facebook and Other Components
The Facebook Trending scandal illustrates the secondary role of humans in the hybrid AIs described here. Facebook wanted an algorithm that would produce Facebook Trending news like Twitter’s Trends. As a temporary measure, it hired journalists, preferring recent graduates and others without experience. When Facebook had developed algorithms emulating this young, ambitious, left-leaning crowd, the humans were laid off.
Facebook’s leftist bias is openly acknowledged by its CEO. The Facebook Guidelines demonstrate algorithmic filtering, even when it is performed by humans. Obviously, Facebook Trending influences a huge number of its viewers. Less obviously, it drives traffic to news websites, rewarding those that are politically in line with it and punishing those that oppose its line, thus pushing the online media to the left.
Twitter Trends served as the example for Facebook to follow, so it is reasonable to assume that Twitter developed left-biased filtering algorithms before Facebook did. Google Search prioritizes pages that are linked from multiple tweets (4).
Bing/Yahoo Search is probably like Google Search, but less influential.
Reddit has leftist management and allows people and bots from all over the world to downvote posts by American citizens about American politics and science. Only a few subreddits, including The_Donald, do not allow downvoting.
The climate debate is a vivid example of what happens when decisions are made without reality checks. Democrats, the FSM, and the “tech” executives assert unanimously that there is no debate on “climate change.” Indeed, there is almost no such debate on Wikipedia or on the “illuminated” (i.e., not suppressed by the San Francisco AI) part of the Web, where everybody is on the side of alarmism, to different degrees. Of course, there is such a debate in real life: the President and majorities of the Senate and the House of Representatives oppose this alarmism. Whatever one thinks of the merits of the sides or of the value of the debate, the fact that the debate exists is obvious to a human. But machine algorithms are confused. Curiously, the FSM, Democrats, and the executives of the “tech” companies side with the algorithms.
The San Francisco AI (the corporations running it are included per the definition) demands government subsidies for the expansion of always-on broadband internet, as if it were some obvious good. One might think that internet games and browsing never cause addiction, or that internet companies never suck the economic life out of communities. Nevertheless, these concerns are rarely raised. Instead, the San Francisco AI promotes the idea that always-on broadband access is a universal human right. One can joke that the San Francisco AI is hardly three years old, but already this aggressive.
- Speaking of anti-American distortions, biases, and opinions, I don’t imply that they benefit other nations or that other nations are to blame. The global governance worldview, along with the institutions and individuals aspiring to participate in it, is by nature hostile to most nation-states. Climate alarmism was its creation and its most powerful vehicle. So, these forces tend to support the Left. The ideology of “anti-colonialism” is a very big additional factor. The internet crosses national borders, and the Left Coast internet corporations, including the San Francisco AI components, got used to operating transnationally and apparently became supporters of this worldview. Nevertheless, anti-Americanism is an especially powerful force – in part because the aspiring global governance sees America as the main obstacle to its hegemony, and in part because Americans have been indoctrinated to blame their own country for the hatred directed at it.
- Nassim Taleb, The Black Swan, and Fooled by Randomness
- To be fair, I have come across complaints about bans of Maduro supporters in Venezuela.
- This paper just scratches the surface of the topic; only a few examples are included, and a more complete discussion would fill a book. Recent revelations about the internal workings of Google are not considered. New evidence emerges faster than I can write about it, so I wrap it up here. The hypothesis that Google Search might be a malicious AI by itself is superseded here.
- In an interview with Breitbart, a former Google employee confirmed that Google skews its search results for political reasons. I recommend the whole series of interviews, Rebels of Google, by Allum Bokhari and others.
- After substantially completing this paper, I found research led by Robert Epstein, who is not a conservative. The preliminary paper A Method for Detecting Bias in Search Rankings, with Evidence of Systematic Bias Related to the 2016 Presidential Election suggests that Google Search was heavily biased toward Hillary Clinton in the 2016 election and brought her 2.6 million votes. Eric Schmidt, the executive chairman of Google’s parent Alphabet and a staunch Clinton supporter, was heavily involved in her campaign. See also https://www.conservativereview.com/articles/study-shows-google-was-biased-toward-clinton
Published on 2017-08-16