Data Driven Narcissism: How Will “Big Data” Feed Back on Us?

Thomas Raab

Eberlgasse 6/3, A-1020 Vienna, Austria. E-Mail: tr@sil.at

 Abstract

How do “big data” affect us as psychological beings? In this paper I argue that the user-specific contents spawned by analyses of massive data sets obtained from transaction data and the online marketing software agents of Facebook, LinkedIn, Twitter, etc. must tend to reinforce the respective user’s “self-image”. As long as this man-machine feedback loop is not interrupted by purposeful manipulation, it will lead to a general schematization of behavior, enabling yet more precise consumer prediction, and so forth. The final result may be a population that is maximally predictable while at the same time each individual of that population may feel maximally singular. Is such a social end state heaven—or is it hell?

  1.  CAUTION: Machine “smartness” infringes on your intelligence!

Whereas the impact of massive online data sampling and its statistical analysis on the sociological method and on issues of personal security has been quite widely discussed [1], the potential severity of the consequences of feeding those data back on the users has only recently been rigorously thematized [2].

Take thought. At first sight the main effect of “smart” technologies, including big data statistics, on human intelligence seems quite straightforward [3]. Intelligence, conceived as a general problem solving capacity, is inversely proportional to the use of sophisticated computer programs and gadgets. A GPS navigation system, for instance, will gradually corrupt your routine in solving orientation problems in the geographic domain. Although there are so far no studies proving or disproving such claims [4], it would be hard to object to the commonsense observation that skills that are not regularly practiced deteriorate.

And thinking is a skill, the psychological mechanisms of which remain poorly understood [5]. At the same time the possibility of statistically computed surrogates of human problem solving cannot be dismissed. This is the still uncontested proposition of Turing’s test [6]. In principle, computing sensory input into motor output by big data statistics could yield behavioral results indistinguishable from human performance, although the underlying mechanisms differ radically. This kind of intelligence by high-speed trial and error may capture nothing of biologically evolved intelligence. Yet it has already led thinkers to proclaim the end of theory at large [7].

  2. NOTICE: No cloud, no parallel computing needed!

In view of “Moore’s law” [8] it has recently been argued that research into the functioning of human intelligence has, after so many vain attempts, not only failed but was unnecessary in the first place. When the power of computer hardware reaches a certain “threshold”, so the story goes, a “technological singularity” will take place, leading to a potentially infinite number of innovations within a single moment. At this truly revolutionary instant we as humans will suddenly be “happier” entities because all our toil and trouble will finally be outsourced to machines [9].

But, beyond such futuristic speculation [10], do we, at the current level of scientific and administrative sophistication, even need “intelligence”, let alone “better-than-human intelligence”, in order to fulfill technology’s promise of happiness?

Economically, we in the developed world already seem to live in a secure environment in which we can believe ourselves to be smart but, at the same time, can afford—both personally and as a population—to remain narcissistic and stupid. By narcissism I mean an uncritical self-love, which most of us know by self-observation and which seems necessary because of mounting conformity pressures, despite the rather theoretical leeway our economic affluence warrants [11].

Basically our inanity is sustainable because it does not have serious consequences as long as no natural or anthropogenic catastrophes disrupt the flow of events. Thinking is something for times of need, whereas in highly developed economies most activities—although performed in all sincerity—can be viewed as pastime. They just do not serve “vital needs” [12]. Who vitally needs a new car every year? It is here that narcissism and consumerism go hand in hand. The need for narcissism is not difficult to understand. The more people on earth, the more uniform the social outlook, the more frightening the conformity of the masses, the greater the need for at least “feeling” individual. We want to live our unique personal life, no matter how stupidly we live it.

Note that “stupid” is here understood as a suspension of the problem solving capacity. It is not a “personality trait” but the habitual opposite of “intelligent” behavior, i.e., of deliberately acting instead of merely reacting to stimuli. It is a strange fact that the wealthier a population becomes, the more impulsive its consumer behavior will tend to be, simply because impulsiveness becomes affordable. Reacting like stimulus-response machines might seem regressive but, as anybody knows, we only really think if we really have to, i.e., if means are scarce.

I am painfully aware that my concept of intelligence seems outmoded. Today the notion of intelligence as superficial “info combination” has already been puffed up into the “cool” ideology of entire milieus. Google’s Sergey Brin, for instance, straightforwardly defines intelligence as the optimization of computer searches for (meaningless) strings: “For us, working on search is a way to work on artificial intelligence. (…) Certainly if you had the entire world’s information directly attached to your brain, you’d be better off” [13].

Leaving aside the rather hollow idea that intelligence consists of “information retrieval”, as well as the fact that more online information demonstrably does not necessarily lead to better knowledge [14], the conceptually tricky thing is the teleology implied in slogans such as Brin’s. Given the argument of Turing’s test, even the parrot-like declamation of Wikipedia sentences can count as intelligence as long as it is viable in the social sense, i.e., as long as others applaud. Is this what Brin means by “better off”?

If problem solving in the old, creative sense isn’t necessary from an economic perspective, i.e., if we indeed live embedded in a fluffy layer of insurances and contracts shielding us from “natural realities”, then succeeding within the peer group could indeed become the only benchmark of intelligence. And if we look around, Brin already seems right. A lot of people interpret success solely in terms of intra-group recognition, no matter for what achievement. This seems to be one reason why society slowly disintegrates into sealed-off “subcultures”, each promoting its own “codes”. Each subculture reassures its members equally well of their individuality—cycling, death metal, bodybuilding, Minecraft, classical music, pole dance, yoga, basketball, and so on. These subcultures additionally allow their members not to think too much because thought is absorbed by the affirmation of the group code [15].

What does that mean for society at large? Historically it seems obvious that this social segmentation has its origin in bio-anthropological provisions [16] and has been pushed further by the continuing division of labor that has accompanied the entire history of technological development. In this respect big data feedback only reinforces an already existing social trend. The schematization of affluent society would have made thought—and thereby psychology—superfluous anyway. Big data feedback is only accelerating this development.

  3. WARNING: Your user data feed back on you!

One cannot help but be impressed by the effectiveness of the statistics based on big data [17]. The “friends” proposed to you by Facebook or LinkedIn, the video suggestions on YouTube, the “recommendations” by Amazon—very many of these fit your interests stunningly well, even change with their ebb and flow, or excavate old “friends” from school whom you may not have thought about for decades. It is quite certain that, from the perspective of the pure psyche-less consumer, big data methods positively lead to the reduction of “information clutter” [18].

From the technical viewpoint one might think that there is a lot of ingenious programming behind these predictions. Yet this isn’t even necessary! Most of the power of such “embedded filtering agents” lies in the sheer quantity of data and some statistical ingenuity.

Yet the meagerness of the patents describing the general methods used to identify target groups for marketing purposes [19] already hints at the fact that understanding what really makes humans and human groups tick is not a primary research goal. This is hardly surprising. By way of combinatorics one can calculate that every single individual on earth can in principle be targeted by a combination of a few statistical parameters. Whether these are, in any sense of the word, a “true” description of these individuals is of purely academic interest as long as each one can be singled out for commercial purposes.
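To make the combinatorial point concrete, here is a minimal back-of-the-envelope sketch in Python; the population figure is a rough assumption of my own, not a number from the patent literature:

```python
# How many binary (yes/no) parameters suffice to single out every person
# on earth? k parameters distinguish up to 2**k profiles, so we need the
# smallest k with 2**k >= world population (roughly assumed here).
WORLD_POPULATION = 8_000_000_000

k = 0
while 2 ** k < WORLD_POPULATION:
    k += 1

print(k)  # 33: some three dozen crude yes/no attributes are enough,
          # whether or not they "truly" describe anybody.
```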

The patent mentioned represents no more than a stringing together of standard averaging and factoring methods, combined in much the way you yourself would spontaneously devise when thinking about the “singling out” problem that always confronts target marketing. Its power lies in the statistical reliability resulting from big data, not in its scientific merit [20].

It is true that statistical clustering, i.e., the classification of multi-variable items, regularly yields better results than human classification. The widely used machine learning method of “support vector machines” manages to “see” patterns in data sets that humans are simply unable to see. The reason, however, is not that computers are smarter than we are. The reason is that biologically evolved intelligence is better at general problem solving tasks than at pattern recognition. That the results of big data classification nonetheless often confirm the all too obvious is another matter [21].
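What such machine “seeing” amounts to can be illustrated with a small sketch, assuming the scikit-learn library; the synthetic data set and parameters are my own, for illustration only:

```python
# Synthetic two-class data: two concentric rings of points. No straight
# line separates the classes, yet an SVM with a nonlinear (RBF) kernel
# finds a clean boundary by implicitly mapping the points into a
# higher-dimensional space -- the "pattern" the machine "sees".
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_circles(n_samples=500, noise=0.1, factor=0.4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))  # typically well above 0.9
```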

I suggest that this form of micro-targeting has in any case the potential to mirror the taste preferences of each of us. In order to do so, the personal data mined via the Internet do not have to be interpreted by viable psychological theories. The mere consistency of taste and interests, a corollary of our limited cognitive resources, also determines our user and consumer behavior. By this feedback process big data itself shapes future social facts, so that in the end it will produce the psychology it already describes now. Google etc. may turn out to be a self-fulfilling prediction machine.
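To see why no psychological theory is needed, consider a deliberately crude toy simulation (my own illustration, not any company’s actual algorithm): a “recommender” that merely mirrors logged choices back to the user, with each accepted suggestion reinforcing the log, narrows an initially uniform taste onto a few items.

```python
# Toy model of big data feedback: items are recommended in proportion to
# past choices, and each accepted recommendation reinforces the record.
# Initially uniform taste "solidifies", as in a Polya urn process.
import random

random.seed(42)
N_ITEMS = 10
record = [1.0] * N_ITEMS  # the user's logged behavior, initially uniform

for _ in range(1000):
    # the recommender proposes items in proportion to the record ...
    item = random.choices(range(N_ITEMS), weights=record)[0]
    # ... and the accepted proposal is logged, sharpening the profile
    record[item] += 1.0

shares = [round(r / sum(record), 2) for r in record]
print(shares)  # a few items dominate: the mirror has become the taste
```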

What are the psychological effects of the big data mirror in more detail? Aside from the danger of “tethered selves” floating within “filter bubbles”, there is the very real possibility that things we would otherwise gracefully forget will come back to us [22]. I first realized this when I was buying a CD online and the recommendations spat out the CD of an overly Romantic singer whom I had once been a “student” of but now wanted to forget out of embarrassment. Yet the “average” of CDs of very different genres yielded an item I had already purposefully rejected.

Of course this is only one example. But by extension the statistical processing of data suffices, at least on average, to reinforce the particular user’s interests. Personalized suggestions provided by the Internet thus stabilize our egos because they constantly remind us from the outside of what we like—and thus what we “are”. It is not meticulously specified statistical models, let alone psychological theories, but the behavioral feedback we ourselves provide online that can be used to constrain our behavior repertoire. Big data feedback stabilizes our “external memory” [23], the things we are constantly reminded of, thereby making our behavior still more role-like and thus predictable.

Ever since Goffman’s seminal book we have known that we tend to identify with the habitual roles we play within the specific social contexts we traverse, without realizing possible contradictions between different roles. All the more so on the Internet [24].

I do not mean to suggest that the said “reinforcement” of taste is achieved by mere exposure to advertisements and personalized suggestions on the computer screen. Its manner of operation is surely quite complicated, involving socio-psychological, personal, and cognitive aspects (what you yourself know about the mechanisms being applied to you). In any case the outcome seems clear. I hypothesize that this “mirroring” solidifies our very sense of individuality, which, ironically, has only recently been so vigorously “deconstructed” by philosophical, psychological, and neurological theories alike.

Thus emerges an apparent contradiction between sociological and psychological fact. While our preferences grow more and more stable, we can simultaneously fancy ourselves unique. We cherish, for instance, the delicate changes of our individual moods, and maybe even want to express them on our personal blog. Yet in aggregate these expressions tend to converge to predictable group patterns. The main point is: this is what big data not only prove but also reinforce.

Indeed it is becoming easier and easier to influence one another. Once we belonged to groups in which we had to imitate the group’s taste preferences by laboriously figuring out how, for instance, to get hold of a record by the latest arcane style icon. Today we “just look it up on the Internet” and get suggestions as to who might be the next insider’s tip, whose mp3s we then readily download.

It is not only that this kind of (biologically fundamental) group imitation has become easy and international. The slight stylistic deviations necessary to feel original have also, through “personalized” software suggestions, become “accessible”. Statistically reinforced, we become caricatures of our past. And we begin to love these caricatures, thereby becoming the first machine-enhanced narcissists in history [25].

  4. DANGER: Back from the Future!

Each and every technological innovation in history has sparked a plethora of utopian thought about its consequences for the “essence” of the human condition. There can be no doubt that technology has indeed greatly changed the objective world and, with it, our life circumstances, especially during the past 200 years or so.

In the rear-view mirror of history social progress sometimes appears larger than it is. Basic human traits such as social organization in families and “clans” of friends have meanwhile proven surprisingly robust [26]. Our psychological capacity stems from archaic times and cannot easily be changed by thought. New technologies facilitating permanent communication between peers—mobile phones and computers—only further strengthen this archaic group cohesion.

Yet the reasons for this increased group cohesion are manifold. Beyond simple technical reasons (e.g., it is predominantly friends and peers who are stored locally on communication gadgets) there are also more complicated psychological ones. Concerning the latter, one does not have to appeal to some “postmodern angst”. As face-to-face communication with strangers becomes, at first sight at least, unnecessary thanks to our little tech helpers, objective information that could complicate and infringe on social relations is lost in transmission. No longer do you have to cope with the complexities of your friend’s gestures, let alone his or her personality. The loss of information on facial expression, voice timbre, etc. puts you in the position of perceiving others as “emoticons”.

To complete the circle from Kurzweilian techno-futurism to the tedious facts of social reality, I thus propose the socio-psychological hypothesis that, in its social outcome, the ensuing combination of a maximal sense of individuality on the part of each “agent” and minimal targeting computation on the part of authorities and companies yields the social and behavioral patterns of primitive men in a tribal “society”. While our feeling of “individuality” is boosted internally, it becomes more and more restricted behaviorally.

How are technical problems tackled in such a tribalist context? Given the said fact that, at the technical level attained today, creative innovation is no longer necessary in a vital but only in an administrative sense (at least in wealthy economies), there are always two technical solutions to user-end problems. First, there is the traditional one: raise the technical sophistication to meet the users’ needs. But technological feedback in general provides a second solution method, namely “reverse demand engineering”: one can adapt the needs to the current technical status quo. Big data and machine learning obviously foster the second option.

Let us consider one harmless example of “reverse demand engineering” as a case of big data feedback: automatic speech recognition.

Our scientific knowledge of language, especially of its functional interaction with thought, has not made much progress in recent decades [27]. Nonetheless, due to growing computing power, better data sampling, and statistical refinement, the language processing industry is booming. Surely nobody ever claimed that computer programs understand language as we do. Their performance nonetheless suffices for mundane tasks such as simple “keyword” transcriptions (or word-for-word translations). Okay, Google! However, the performance of commercial speech recognition in transcribing moderately complex texts is surprisingly weak [28].

But despite the skimpy output quality, what is socially functional in reverse-engineering the demand for language is the sheer output quantity! Sooner or later the majority of people will get used to bad translations and transcripts, so that their own language skills will slowly adapt to the programs’ output. Again, this result is only possible because, pragmatically speaking, the social context seems to tolerate vague or bad expression.

From the perspective of the consumer, reduction of complexity is indeed what search engines and target-group marketing provide. They reduce “objective clutter”. But by reducing objective clutter you reduce subjective specificity. So our very notion of curiosity—a vital human feature—changes as well. In a now classic book, psychologist D.E. Berlyne distinguished between “specific curiosity”, which motivates organisms to search for specific, detailed information about a thing in order to better orient themselves, and “diversive curiosity”. The latter motivates the sheer search for information in order to avoid deprivation or boredom [29].

Carr [30] quite nicely describes the mixture of interestedness, lack of focus, and finding and forgetting information when “browsing” the hyperlink-structured Internet. We are all familiar with this state of mind. Could you say whether it is the result of specific curiosity, diversive curiosity, or simple annoyance? No, it is something in between. We experience a kind of “bored curiosity”. Big data feedback agents keep real-world complexity at a distance. Within our bubble, attention remains “democratically” dispersed [31].

The ever-growing accessibility of the Internet stimulates and facilitates exploration for exploration’s sake. Often nothing specific is searched for. This state, the inner equivalent of which I have called “bored curiosity”, is precisely the state Internet stores want you to be in—sufficiently awake to still loosely search for products feeding your “ego caricature”, and at the same time drowsy enough to click the “buy” button.

Just as our ego is boosted in order to maximize the amount of useless yet socially effective talk about it, much of the information we obtain, as well as many of the products we consume, serves communication purposes only. Obviously the function of most “information” Google helps us find is solely communicative.

In societies that can afford it (until a natural or anthropogenic catastrophe disrupts the flow of events), even scientific theories can have a primarily appellative instead of an epistemic or technical character. They can serve to tell our equally talkative peers, or those whose peers we wish to become and whose funding we would like to share, that we belong, or want to belong, to them. And the more we invest in the group, the weaker our critical view of its members’ utterances will be. In academia, too, tribalism and its intellectual concomitant, bullshit, flourish [32].

Unfortunately the field of computer science, a seemingly “natural” candidate for warning against the consequences of smart machines in the hands of dumb people, is itself highly liable to bullshit. As early as 1976 Drew McDermott [33] powerfully argued that sloppy language use and the lack of stringent definition in psychology had led computer scientists to believe their programs are “smart” because they contain a code section named “reasoning”. His reservations hold all the more true in the face of the danger that many people take it on faith that big data analysis is actually intelligent.

Within this meta-stable tribal framework sustained by computers, problems will call for creative intelligence, even in academia, only when the problem as such is as yet undefined. But by then the tribal “groupthink” born out of our generally “post-sincere condition” may yield false, or even catastrophic, results [34]. And our “friends” on Facebook and LinkedIn might prove no help.

  5. WARRANTY SERVICE: Heaven or hell?

Behaviorally, the big data feedback problem is barely noticeable. Neither the complexity of the stimuli we perceive nor our sensory behavior changes greatly, whether we perceive the on-screen or the off-screen world. Big data statistics form only a thin layer between the unmediated things and us. Yet it is like an extra retina and might have dramatic consequences.

Thought—defined as creative problem solving—may be temporarily unnecessary in a technologically sustained world. Given growing computing power and data availability, ever more problems will be solved without being understood in the psychological sense. Technical maintenance can be placed in the hands of engineers and administrators providing solutions by rules of thumb, technical cookbooks, and statistics software.

Nonetheless one should not forget that thinking is not simply a “biological feature” of man but was strenuously won in the course of human history [35]. It seems that we only think if either outer constraints or strong inner motives force us to. As the former have been loosened by technological progress, one wonders whether the behavioristic definition of intelligence implicit in big data analysis is a herald of the general leveling-out of biologically evolved intelligence by data-driven narcissism.

It is unlikely that the companies feeding on big data will ever abandon their business model, or at least make their methods transparent in enough detail to make counteraction possible. Broadly speaking, one can nonetheless imagine two generic ways to thwart data-driven narcissism: either introduce data noise in order to make statistical personality models inconsistent (and hence worthless), or make the data gatherers pay for your data:

  1. The renegade method: abstain from social media if you can; use a (yet-to-be-built) computer “agent” on all Internet devices that, by its random activity, simulates inconsistent behavior data, thereby creating informational noise (see the sketch after this list);

 

  2. The serious method: a micropayment system not only forcing you to pay for content but also forcing the data gatherers to pay for each recorded activity of yours [36].
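As an illustration of the renegade method, a minimal sketch of such a noise agent follows; the decoy pages, the timing, and the assumption that random requests suffice to blur a profile are all my own, though browser extensions in this spirit (e.g., TrackMeNot) already exist:

```python
# Sketch of a stand-alone noise agent: request unrelated pages at
# irregular intervals so that logged activity no longer forms a
# consistent profile. Decoy list and timing are illustrative only.
import random
import time
import urllib.request

DECOY_PAGES = [
    "https://en.wikipedia.org/wiki/Gardening",
    "https://en.wikipedia.org/wiki/Opera",
    "https://en.wikipedia.org/wiki/Drag_racing",
    "https://en.wikipedia.org/wiki/Knitting",
]

def emit_noise(n_requests: int = 10) -> None:
    """Issue a burst of requests to randomly chosen decoy pages."""
    for _ in range(n_requests):
        url = random.choice(DECOY_PAGES)
        try:
            urllib.request.urlopen(url, timeout=10).read(1024)
        except OSError:
            pass  # noise is best-effort; failed requests do not matter
        time.sleep(random.uniform(1.0, 30.0))  # avoid a machine-like rhythm

if __name__ == "__main__":
    emit_noise()
```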

 

Yet the development of such tools, legal or renegade, is unlikely not only for commercial reasons. One unique feature that has distinguished “AI” from other engineering fields all along is that here high tech directly touches philosophical questions. Apart from the legal definition, we simply do not know what “freedom” should mean in a mechanistically conceived universe. Will storing one’s memories and retrieving them at the “right” moment—as the makers of Microsoft’s Life Browser intend—make us more free? Or does it mean the opposite: becoming Pavlov’s data dog gnawing on one’s past?

 

We may be approaching a socially segmented society in which each segment is perfectly controlled by big data feedback and our behavior repertoire is more predictable than ever before. At the same time—and this is the core problem—we may actually feel more individual than ever before. Will such a world be hell, or will it be heaven? And who can judge [37]?

 

 

Acknowledgements

 

The author gratefully acknowledges the help of Leonhard Fessler, Albert Müller, Oswald Wiener, and four anonymous referees. Fatih Aydogdu and Ebru Yetişkin arranged a talk on a preliminary version of this paper at the Amber Conference 2013, “Did You Plug It In?”, in Istanbul.

 

 

References and notes

 

  1. See, e.g., Mike Savage and Roger Burrows, “The coming crisis of empirical sociology,” Sociology 41 (2007) pp. 885–899; Martin Evans, “The data-informed marketing model and its social responsibility,” in Susanne Lace, ed., The Glass Consumer: Life in a Surveillance Society (Bristol: Policy Press, 2005) pp. 99–132; David Lyon, “Surveillance in the era of big data: capacities, consequences, critique,” Big Data & Society 1 (2014) pp. 1–13.

 

  2. Jaron Lanier, You Are Not a Gadget (New York: Knopf, 2010); Sherry Turkle, Alone Together (New York: Basic Books, 2011); Eli Pariser, The Filter Bubble (New York: Penguin, 2011).

 

  3. By “technology” I mean each method or, by extension, each man-made machine embodying a deterministic theory, thereby replacing a goal-directed action hitherto performed by men only. According to this definition all applicable mathematics is already a technology.

 

  4. Markus Appel and Constanze Schreiner, “Digitale Demenz? Mythen und wissenschaftliche Befundlage zur Auswirkung von Internetnutzung,” Psychologische Rundschau 65 (2014) pp. 1–10.

 

  5. In Thomas Eder and Thomas Raab, eds., Selbstbeobachtung: Oswald Wieners Denkpsychologie (Berlin: Suhrkamp, 2015) thought is empirically examined as an ideo-motor simulation in order to construct models of the world.

 

  6. Alan M. Turing, “Computing machinery and intelligence,” Mind 59 (1950) pp. 433–460; Oswald Wiener, “Kybernetik und Gespenster,” manuskripte 207 (2015) pp. 143–163.

 

  7. Chris Anderson, “The End of Theory: The Data Deluge Makes the Scientific Method Obsolete,” Wired 16.07, available at http://www.wired.com/science/discoveries/magazine/16-07/pb_theory

 

  8. Moore’s law states the so far empirical regularity that chip speed and computer memory grow exponentially over time: Gordon E. Moore, “Cramming more components onto integrated circuits,” Electronics 38 (1965) pp. 114–117.

 

  9. Ray Kurzweil, The Singularity is Near (New York: Viking, 2005).

 

  10. Theodore Modis, “Discussion (The Singularity Myth),” Technological Forecasting and Social Change 73 (2006) pp. 104–112.

 

  11. John K. Galbraith, The Affluent Society (Boston: Houghton Mifflin, 1958); Thomas Raab, Nachbrenner (Frankfurt: Suhrkamp, 2006).

 

  12. Thorstein Veblen, The Theory of the Leisure Class (New York, London: Macmillan, 1899).

 

  13. As quoted in Nicholas Carr, “Is Google making us stupid?” Yearbook of the National Society for the Study of Education 107 (2008) pp. 89–94.

 

  14. James A. Evans, “Electronic publication and the narrowing of science and scholarship,” Science 321 (2008) pp. 395–399.

 

  15. Thomas Raab [11].

 

  16. Russell A. Hill and Robin I.M. Dunbar, “Social network size in humans,” Human Nature 14 (2003) pp. 53–72.

 

  17. Savage and Burrows [1].

 

  18. Aside from the intelligence question, there is no doubt that big data predict not only demographic but also attitudinal traits. Especially predictions in domains in which not even scientists, let alone lay people, have exact knowledge may be more accurate than human-made ones: Wu Youyou, Michal Kosinski and David Stillwell, “Computer-based personality judgments are more accurate than those made by humans,” PNAS 112 (2015) pp. 1036–1040, http://www.pnas.org/cgi/doi/10.1073/pnas.1418680112; see also Michal Kosinski, David Stillwell and Thore Graepel, “Private traits and attributes are predictable from digital records of human behavior,” PNAS 110 (2013) pp. 5802–5805, http://www.pnas.org/cgi/doi/10.1073/pnas.1218772110. However, all problems of normal data sampling hold for big data as well, see Zeynep Tufekci, “Big questions for social media big data: representativeness, validity, and other methodological pitfalls,” in Proceedings of the Eighth International AAAI Conference on Weblogs and Social Media (Palo Alto, CA: AAAI Press, 2014) pp. 505–514, http://www.aaai.org/ocs/index.php/ICWSM/ICWSM14/paper/view/8062/8151; Derek Ruths and Jürgen Pfeffer, “Social media for large studies of behavior,” Science 346 (2014) pp. 1063–1064. Additional caveats exist, especially if the causal connection between the data and the predicted variable is unclear, see David Lazer, Ryan Kennedy, Gary King, and Alessandro Vespignani, “The parable of Google Flu: traps in big data analysis,” Science 343 (2014) pp. 1203–1205. For more ethical concerns see Martin Evans, “The data-informed marketing model and its social responsibility,” in Lace [1] pp. 99–132.

 

  19. E.g., Google’s patent in Shumeet Baluja, Yushi Jing, Dandapani Sivakumar, and Jay Yagnik, “Inferring user interests,” United States Patent US 8,055,664 B2 (2011).

 

  20. Pariser [2].

 

  21. Support vector machines (SVM) in Vladimir N. Vapnik, The Nature of Statistical Learning Theory (New York: Springer, 1995). SVM as an applicable recipe, together with tables showing that SVMs in some domains yield better classification results than human subjects, in Chih-Wei Hsu, Chih-Chung Chang and Chih-Jen Lin, “A Practical Guide to Support Vector Classification,” Technical Report, National Taiwan University, available at http://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf. Examples of all too obvious results in Johan Bollen, Huina Mao and Xiao-Jun Zeng, “Twitter mood predicts the stock market,” Journal of Computational Science 2 (2011) pp. 1–8, or Johan Ugander, Brian Karrer, Lars Backstrom and Cameron Marlow, “The anatomy of the Facebook social graph,” arXiv:1111.4503, available at http://arxiv.org/abs/1111.4503

 

  22. Selves “tethered to the Internet” from Turkle [2]; “filter bubble” from Pariser [2]. Cf. Jaron Lanier, “Agents of alienation,” Journal of Consciousness Studies 2 (1995) pp. 76–81.

 

  23. J. Kevin O’Regan, “Solving the ‘real’ mysteries of visual perception: the world as an outside memory,” Canadian Journal of Psychology 46 (1992) pp. 461–488.

 

  24. Erving Goffman, The Presentation of Self in Everyday Life, University of Edinburgh Social Sciences Research Centre Monograph 2 (entire number, 1956); Alison Hearn, “‘Meat, mask, burden’: Probing the contours of the branded ‘self’,” Journal of Consumer Culture 8 (2008) pp. 197–217.

 

  25. Turkle [2].

 

  26. Hill and Dunbar [16].

 

  27. Marc D. Hauser, Noam Chomsky, and W. Tecumseh Fitch, “The faculty of language: What is it, who has it, and how did it evolve?” Science 298 (2002) pp. 1569–1579.

 

  28. Laura Frädrich and Dimitra Anastasiou, “Siri vs. Windows speech recognition,” Translation Journal 16, available at http://www.bokorlang.com/journal/61dictating.htm. A typical example of real speech impoverishment is the schematic interaction with interactive voice response programs on the telephone.

 

  29. Daniel E. Berlyne, Conflict, Arousal, and Curiosity (New York: McGraw Hill, 1960).

 

  30. Carr [13].

 

  31. Eyal Ophir, Clifford Nass and Anthony D. Wagner, “Cognitive control in media multitaskers,” PNAS 106 (2009) pp. 15583–15587.

 

  32. Cf. Harry G. Frankfurt, “On bullshit,” Raritan 6 (1986) pp. 81–100; Alan Sokal and Jean Bricmont, Intellectual Impostures: Postmodern Philosophers’ Abuse of Science (London: Profile Books, 2003); Gerald A. Cohen, “Complete bullshit,” in Michael Otsuka, ed., Finding Oneself in the Other (Princeton: Princeton University Press, 2012) pp. 94–114; Oswald Wiener, “Humbug,” Der Ficker 2 (2006) pp. 96–116.

 

  33. Drew McDermott, “Artificial intelligence meets natural stupidity,” ACM SIGART Bulletin 57 (1976) pp. 4–9.

 

  34. Irving L. Janis, Victims of Groupthink (Boston: Houghton Mifflin, 1972); Alan Richardson, “Performing bullshit and the post-sincere condition,” in Gary L. Hardcastle and George A. Reisch, eds., Bullshit and Philosophy (Chicago, La Salle: Open Court, 2006) pp. 83–97.

 

  35. José Ortega y Gasset, Man and People (New York: W.W. Norton, 1963).

 

  36. The technical advantage of the renegade method would be that it is conceivable as a stand-alone program. Although the noise could interfere with data-driven narcissism, any computer producing noise is suspect to state agencies. For the serious method see Jaron Lanier, Who Owns the Future? (New York: Simon & Schuster, 2013).

 

  37. Thomas Raab, Die Netzwerk-Orange (Vienna: Luftschacht, 2015).

 

 

Biographical information

Thomas Raab is a Vienna-based writer and educator with a scientific background. He holds a Ph.D. in science and splits his time between writing, translating, and working with a research group currently refurbishing systematic introspection for its use in empirical cognitive science.