Wednesday, June 27, 2012

Democracy and Trust in the Age of The Social Web.


Gloria Origgi
CNRS – Institut Nicod


Epistemic Democracy in the Age of the Internet. The role of trust and reputation in e-democracy









Gloria Origgi
CNRS – Institut Nicod


Democracy and Trust in the Age of the Social Web
II seminario di Teoria Politica
Aosta, 28-30 June 2012


1. Introduction

The rise of the Internet over the last 20 years, and especially of the social Web in the last 10, has deeply transformed our social, cultural and political customs. The Internet has proved to be one of the most powerful means of communication and networking ever devised and – like the invention of writing and print – a deep revolution in the production and sharing of information.

The cyber-enthusiasts of the 1990s saw the Web as the paradigmatic “disruptive technology”, one that would overturn all of our practices for accessing information and empower users to collaboratively produce, access, and distribute content in previously unimagined ways[1]. Over the past decade, this enthusiasm has been replaced by a more nuanced attitude. Replacing our practices for accessing reality, aggregating information and creating new forms of public space is not simple. Our search for impartial information is notoriously biased by a number of effects such as group polarization, information cascades, and conformity[2]. The world in which we live today is far from being a new land of freedom and democracy. The way in which new technologies of communication are evolving in networked social spaces is not governed by the rules that shape the design of democratic decision systems[3]. The potentially infinite and free space of the Web is becoming a corporate space governed by a few big companies that control access and make us navigate in a land we do not own, through an architecture that is variable, invisible and impossible to control. This has led some authors to talk about a new “Digital Feudalism” and the need for a new Enlightenment in order to reach a truly liberal digital democracy[4].

As a result, we face today two opposite tendencies in the appraisal of the effects of the Internet and the Web on our lives. On the one hand, apologists see the Web as the primary global resource for building new forms of civic participation, democratizing communication and dramatically decreasing the costs of taking part in various forms of mobilization[5]. On the other hand, critics and pessimists warn about the risks of authoritarian turns in the new, uncontrolled technocracy the Web is making available, and about the negative effects of the polarization of points of view and the informational cascades that discussion through social networks is creating[6].

Also, while social networks and other communication technologies such as cell phones have played a major role in the rise of important recent political movements and revolutionary awakenings in dictatorial countries, such as the “Arab Spring” (renamed the Facebook Revolution), their general effects on democratic life in mature democracies are more controversial. The Internet is one among the many facets of globalisation, and it is far from clear today how globalisation is improving democracy. As Habermas points out[7], the disappearance of (economic, cultural, political) frontiers may have pernicious effects on our idea of democracy, which is still centred on the nation-state. What global phenomena such as the Internet are bringing about, according to him, is a reduction of the autonomy of individual states and hence less protection for their citizens, as well as a progressive delegitimization of forms of control and accountability at the national level.
Yet, apart from criticisms based on the global dimension of the Internet, the way in which the Web is structuring itself today – mainly into privately controlled social networks – means that, although it has enhanced political debate and participation[8] in some areas of the world, it can no longer be considered a public space whose structure and maintenance are in the hands of its users.
That is to say that the relation between IT and democracy is far from straightforward. Fears and anxieties about new, unaccountable forms of control go together with an irresistibly optimistic vision of a freer and interconnected world of global citizens.


2. The Central Puzzle

Among the tensions and ambivalences that characterize the debate over the role of the Internet in democracy, I will concentrate on the aspect of these contradictions I am most familiar with, that is, the dimension of trust. Trust has been a central topic in the social sciences for making sense of the Internet[9]. Yet, one of the most striking contradictions about our IT-mediated trust relationships has been surprisingly neglected. In this paper, I will concentrate on the following contradiction: while modern democratic societies ground their accountability in a “disenchanted” form of social trust, that is, trust that comes out of a series of procedures for “taming” distrust, such as contracts, law enforcement, and transparent procedures (concerning voting, the attribution of rights, the allocation of resources, etc.)[10], the form of trust that seems to reign over the Internet and, especially, the Social Web is the most naïve and wild form of blind trust that we have ever experienced in mature societies.
Liberal democracies emerged as a reaction of distrust towards traditional forms of power and authority such as monarchies and the church[11]. As Mark Warren writes: “More democracy has meant more oversight of and less trust in authorities”. Constructing a political arena in which people may confront their divergent interests and arguments means establishing a set of rules and procedures that allow a “cold” yet guaranteed form of interaction, not based on “warmer” social relationships of trust. Furthermore, modern democracies are “inclusive” systems, whose aim is to enable more and more people to participate in collective decisions. Inclusiveness implies a transition from “custom to code”, because the more people are included within the same group, the less “thick” relationships can be taken for granted[12]. It has been argued at length[13] that the form of social bond that links mature contemporary liberal democracies is not trust, but a regulated “distrust”, that is, a thick bundle of procedures, codes and rules that guarantee citizens that those who govern them are held accountable.

Yet, the disenchanted trust that defines our form of political participation doesn’t seem to be the default attitude once we are on the Net. Social networking facilities have developed tremendously since 2007. Studies show that people develop online social networking even when the level of reciprocal trust and the comprehension of privacy and security issues are low[14]. Most people who register on Facebook do not read the Terms of Service and, if asked, don’t know whether they own or give away the information they make available on their profile pages.
It is as if masses of reasonable individuals – who, according to the mainstream views in the social sciences, should be guided in their behaviour by the maximisation of interest and by considerations of prudence and rationality – were willing to capitulate their judgement and their responsibility of choice: they join privately owned social networks and companies where they share personal information without the least clue of what these companies will do with the data; they follow the first results of a Google search, confident that they will be brought to the relevant piece of content; they base their judgements and evaluations on rankings produced by monopolistic companies. People seem willing to throw away their privacy, their capacity for discrimination and their right to choose, and to blindly defer to methods of filtering content and managing participation whose logic is deeply out of their control.

As the cyber-activist Rebecca MacKinnon has pointed out: “We cannot assume that the Internet will develop in a way that is democracy-compatible”[15]. Sovereignty over cyberspace is exercised by private companies that partition it in ways that are less and less free and that determine how information is gathered, structured and presented. The cyberworld of web apps and social networks is far from being a world of free speech and democratisation[16]. The domination of a privately owned social network such as Facebook is today overwhelming. Paradoxically, the only country in which the use of social networks is fragmented across a variety of different providers is China, because the censorship that blocked access to Facebook in mainland China in 2009 led many different social networks to spread in order to circumvent it[17].
Here is a symptomatic quotation about the trustful feelings that inhabit Facebook users: «This is the promise of Facebook, the utopian hope for it: the triumph of fellowship; the rise of a unified consciousness; peace through superconnectivity, as rapid bits of information elevate us to the Buddha mind, or at least distract us from whatever problems are at hand. In a time of deep economic, political, and intergenerational despair, social cohesion is the only chance to save the day, and online social networks like Facebook are the best method available for reflecting—or perhaps inspiring—an aesthetic of unity.»[18]
A mixture of optimism, credulity and faith seems to be the dominant attitude underlying the use of social networks, as if questions of privacy and security were not relevant to the development of this particular form of trust. Also, trust is a fundamental ingredient of social relationships, but it is unclear how people can trust millions of other users to make fair use of the information they decide to share publicly.

So, despite the blatant evidence of the risks to privacy and of the control by private companies of most of the features and applications of the Web, people seem to resist any form of diffidence or, at least, of prudence, and give away their personal data and relevant information with an unreasonable feeling of being part of a cooperative process of global democratisation of the means of expression.

Why is it so? This paper is an attempt to solve the puzzle. I think that much of the discussion about trust and the Internet has revolved around a conception of trust that doesn’t reflect the form of trust relationships we are involved in on the net.

3. Relational vs. Epistemic Conceptions of Trust

Trust is one of the most intractable notions of philosophy and social science. That is because a variety of human interactions are pulled together under the heading of trust. Trust is involved in any asymmetrical situation in which one party has to take a risk that depends on the performance of the other party. And risky businesses concern as many different relations as one can imagine: commercial transactions between parties who don’t know each other, temporally asymmetrical transactions (I pay you today for a good that I will receive in a week or a month), love affairs, reliance on experts for important decisions about one’s own life and health, occasional conversational exchanges among people in the street, political interactions and, of course, IT-mediated social interactions. Russell Hardin rightly writes: « As virtually all writers on trust agree, trust involves giving discretion to another to affect one’s interests. This move is inherently subject to the risk that the other will abuse the power of discretion »[19].
Much of the discussion of trust in the social sciences revolves around its relational dimension. Diego Gambetta’s 1988 anthology on trust and James Coleman’s 1992 work on social theory laid the foundations of a rational-choice approach to the relation of trust. Rational-choice theory and its mathematical counterpart, game theory, provide various models of competitive and cooperative social relations under the assumption that the parties involved in these relations are rational and motivated by interest. In this perspective, trust is a form of risky cooperation. The central question is: « Given that involving oneself in a trust relationship with others is always risky, when is it rational to take this risk? »
Of course, even the most pessimistic players are guided by an optimistic intuition that the risk is worth taking, that is, that being in the relation will improve their situation much more than staying out of it.
In this perspective, trust is an essentially cognitive notion. It is a complex, higher-order belief that takes into account the other party’s beliefs, interests and possible actions. I have an interest in entering a trust relationship only if I have reasons to think that the other party has an interest in reciprocating. Perhaps the most encompassing definition of rational trust in this tradition is that of Russell Hardin, according to whom rational trust is a form of encapsulated interest that takes the following shape: I believe that it is in the interest of the other party to take my interests into account. So there is no risk, no submission to authority, no surrender of reasons: just a rational calculus, or at least a motivated bet, on the mutual advantages of a cooperative future together. Notice that, given this definition, it can be rational to trust another party even when our interests diverge, because the only thing I have to presuppose is that she or he has an interest in taking my interests into account, even when that interest is different from mine. When are we wrong? Well, of course, our estimate of the other party’s interests may be based on limited information, or on a vision of the context of interaction that is biased by our own perspective. Or the other party may have simulated an interest in being in a relation with us that was selfishly motivated, just in order to have us “on board” and then free-ride on our cooperative stance.
Critics of this “cold” notion of trust in terms of rational choice say that trust involves an emotional dimension and raises normative commitments between parties that go beyond the pure calculus of interests. Karen Jones, Annette Baier[20] and others insist upon the emotional and moral dimension of trust, as a “thick” relationship that makes one accept the inevitable vulnerability of being dependent on another in a certain matter. The “accepted vulnerability” is, according to these authors, not a matter of calculus: it is based on a “pre-rational” estimation of the trustworthiness of the other party and on the strong feeling that “throwing oneself” into the trust relation creates a positive attitude in the other party and raises the chances of reciprocation. Seeing someone’s willingness to be vulnerable to our actions makes us react in the expected way.
Pure rationalist accounts of trust are seen as a form of reduction of trust to rational distrust. After all, as I have mentioned at the beginning of the paper, liberal political theory was largely founded on distrust[21]. Hume, Madison, and to a certain extent Locke, thought that the only intelligent stance for citizens to take towards government was distrust[22]. Given that I don’t trust the other party, I need to find a rational justification of the interests she or he might have to reciprocate my trust. The transition to Modernity is often depicted as a replacement of “warm” trust relations based on feelings of assurance and moral commitments with “cold” ones based on rational justifications. Trust in governments as “reasoned consent” is assured by procedures of accountability that are ways of dealing with the general distrust of modern societies.
One point that is common to both cognitive and moral treatments of trust is the relational and “choice-based” dimension of the trust relation, as if it were always a matter of free choice whether or not to trust others. Here is an opportunity to enter into a relation. If I take this opportunity, I take some risks. I can choose to go on or to stay out, and I can base my decision on a rational calculus of the interests at stake or on an optimistic stance towards the willingness of the other party to honor my trusting him or her. But, in any case, I have a choice. I can choose to vote for this person because I think it is in his or her interest to take my interests into account (for example, because she or he wants to be re-elected in the future), but I can also choose to vote for someone else.
Even much of the discussion around the specific trust relationships we develop through the Internet has revolved around the risks and advantages of relational trust. For example, in a seminal paper on the subject, the philosopher Victoria McGeer warned about the fragility of trust relationships developed through the Internet, given that people may be dishonest about their identities. And Philip Pettit argues that the Internet is not an appropriate environment for developing trust relationships because people won’t feel the obligation to reciprocate trust that we feel in face-to-face interactions[23].
Yet, I don’t think that the notion of relational trust I have tried to outline here says anything about our trust in information-dense environments such as the Internet and the Social Web. Trust in these environments is first of all a form of epistemic trust, that is, trust in persons or systems through which we are able to extract relevant information. Massimo Durante raises a similar point in a chapter of a book on Legitimacy 2.0 and e-democracy: “In a complex networked society of information consensus is cognitively based on perceived trust more than on experienced trust: this means that, in media democracy, it is not only a question of relational trust, expressed with regards to specific political actors, but it is matter of systemic trust, expressed in relation to the system those actors are part”[24].

The Internet has been from the outset a major informational and cultural revolution, one that has been compared to the invention of writing and printing[25]. The central new feature of the Internet revolution and, especially, of the Web, is the role of social networks in the production, distribution and retrieval of information. The mathematician Jon Kleinberg showed in 2000 that the Web was organized as a giant social network[26]. Information on the Web is essentially social, that is, it depends on the pattern of social relations that informs the search algorithms about where to find it. We do not only develop social relations on the Internet: we use social relations to extract knowledge from it. Recent research[27] shows that microblogging sites such as Twitter are more and more used to extract information through social search facilities, and that search-based access to tweets is becoming increasingly available through third parties such as Google and Bing[28]. These algorithms use social information about who knows whom in order to extract information and rank it in order of relevance.

The PageRank algorithm that made the success of Google was based precisely on an automatic reading of the links between pages as “votes” that one page gives to another. The new algorithms of the social Web, such as Google Social Search and Facebook’s EdgeRank[29], extract social signals such as “likes”, “shares” and “retweets” in order to retrieve information and measure its authority.
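
The weighting schemes of these commercial algorithms are proprietary, so the following is only a minimal, hypothetical sketch of the general idea: social signals attached to an item are combined into a single authority score, and items are then ranked by that score. All names and weights below are invented for illustration.

from dataclasses import dataclass

@dataclass
class Item:
    title: str
    likes: int
    shares: int
    retweets: int

# Hypothetical weights: shares and retweets are treated here as stronger
# endorsements than likes. Real systems (e.g. EdgeRank) keep their
# actual weighting schemes private.
WEIGHTS = {"likes": 1.0, "shares": 3.0, "retweets": 2.0}

def social_authority(item: Item) -> float:
    """Combine social signals into a single authority score."""
    return (WEIGHTS["likes"] * item.likes
            + WEIGHTS["shares"] * item.shares
            + WEIGHTS["retweets"] * item.retweets)

items = [
    Item("blog post", likes=120, shares=4, retweets=10),
    Item("news article", likes=40, shares=30, retweets=25),
]

# Rank items by descending authority, as a feed or result page might.
for item in sorted(items, key=social_authority, reverse=True):
    print(f"{item.title}: {social_authority(item):.1f}")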

Unlike the other two cultural revolutions that I have mentioned, that is, writing and printing, the Web presents a radical change in the conditions for accessing and recovering cultural memory because it introduces new devices for managing meta-memory. The retrieval of information is an epistemic activity that proceeds via the previous classifications of cultural authorities. With the advent of technologies that automate the functions of accessing and recovering memory, meta-memory converges with external memory. What was a central task of cultural organization becomes another outsourced cognition. If I have in mind a line of poetic verse, say, « Let us go then, you and I », but can recall neither the author nor the period and am unable to classify the style, these days I can simply write the line of verse in the text window of a search engine and look at the results. The results are a list of Web sites ranked by an algorithm that considers both relevance and accessibility. This list serves a meta-mnemonic function. The highly improbable combination of words in a line of verse makes possible a sufficiently relevant selection of information that yields the poem from which the line is taken as the first result of the search.
Search engines are thus powerful epistemic devices that we trust. We trust their capacity to perform the cognitive function of meta-memory (i.e. navigating through memory) in our place. Today, social networks allow other forms of hierarchisation of authorities, such as the “follow” relation on Twitter, which ranks users’ influence in terms of number of followers.

Trusting the result of a search engine like Google, or trusting information socially extracted from Twitter, is a very different form of trust from the relational form I presented above. It is a form of epistemic trust that is based on different norms and heuristics of justification, among which:

-       experience (I have double-checked information retrieved by Google and it was right, so I trust Google in the future),
-       our relation with epistemic authorities,
-       various reputational cues bearing on the cognitive and epistemic properties of communication, and
-       structural features of social networks.

The Web is a “trust machine” that feeds itself with the social information generated by the numerous social networks that exist at different levels: (1) at the structural level (the hyperlink structure), (2) at the level of content (the networks of citations, social bookmarking systems, etc.) and (3) at the level of people (as in social networks in the sense of Facebook or Twitter). The norms and heuristics we apply to each of these levels can be different. At the structural level, we trust the reliability of algorithms such as Google’s PageRank or Facebook’s newsfeed algorithm EdgeRank in sorting the relevant information for us. We can use experience to appreciate their reliability, and also social norms: for example, the fact that Google clearly distinguished at its beginnings between advertised content and other forms of content may have been a reason to trust this search engine more than others at its onset. Google used many cues of honesty as a good informant[30], like the public image of “being nice” and the clean homepage of the search engine, without any advertisement or extra information. The PageRank algorithm also has a special status among search algorithms, because it bases its searches on a measure of the authority of a website that is fundamentally the same as the measure of “impact” in the academic world. That is, a website is more authoritative if it receives more links from other, authoritative websites. This is exactly the way in which the research community defines its own measure of prestige, or authority: the prestige of a publication depends on its impact, that is, on the number of citations it receives in other publications. The very measure of prestige used by Google has its own prestige and is therefore a credible cue, for users, of its being a good informant.
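
This recursive definition of authority (a page counts for more if authoritative pages link to it) can be computed by iterating over the link graph until the scores stabilize. The toy sketch below illustrates the idea on a made-up graph; the actual PageRank used by Google includes many refinements not shown here.

def pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank: a page's score is the damped sum of the scores of the
    pages linking to it, each divided by that page's number of outgoing links."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {}
        for p in pages:
            incoming = sum(rank[q] / len(links[q])
                           for q in pages if p in links[q])
            new_rank[p] = (1 - damping) / n + damping * incoming
        rank = new_rank
    return rank

# A made-up link graph: each page "votes" for the pages it links to.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))
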
Most of the cues of credibility that users apply to the structural level (1) are also applied to level (2), that is, the level of content. But the Web itself is not the only producer of trust-inducing signs at this level: previous classifications, labels and titles influence the level of trust of the users. A piece of research content coming from an influential author or from an influential academic journal or institution will be considered more credible than a piece of content coming from the unknown author of a blog. As for level (3), either we trust people coming from previous, extra-Web, trusted networks (our old friends, our colleagues, people coming from the same school, etc.), or we develop new trust relationships through conversational exchanges and the mutual exchange of information. The « stance of trust »[31] displayed in social communication through the Web is both fundamental and fragile: we need a minimal trust in order to get in contact with other people and enter new networks and conversations. That is why all the social networks that work well are characterized by the presence of a trustful atmosphere, a “halo” of trust and friendliness that is the trace of people sharing a stance of trust as a pre-condition for constructing a communicative space.

As we can see, in all these cases the trust we put to work is an epistemic attitude and not purely a matter of strategic choice: we cannot choose not to trust, because we would have no alternative access to information. But we can try to develop more sophisticated strategies of vigilance about the norms endorsed by these systems and, when possible, about their technical design. In what follows, I will try to develop the distinction between strategic and epistemic trust, relying on my previous work on epistemic trust, and then see how this notion applies to the case of trust online and may help explain some conundrums of trust in digital environments.


4. Default Trust and Epistemic Vigilance

Epistemic trust is an even more intractable notion than trust itself. I have tried to elucidate elsewhere what it means to trust an epistemic authority[32]. Here I will briefly summarize my main point about epistemic trust. As I said, epistemic trust is not relational: it is deferential, that is, it implies a fundamental asymmetry between the trustee and the truster. It is a form of more or less justified deference to various authorities. In epistemic matters, we do not « transfer » our power to someone else who will represent our will: rather, we surrender our reasons to the epistemic authority of someone else whom we judge to be in a better position to obtain the information we need. We do not choose to trust. In most situations we simply do not have the choice. A trustful attitude is thus a starting point, a default stance from which we begin to navigate the thick informational world around us.

The default trustful attitude that I am advocating here as a core element of epistemic trust is not just a gullible attitude, although it may expose us to the risk of gullibility. Many authors have argued for an epistemic justification of trust. In his book on the genealogy of epistemic virtues[33], Bernard Williams, for example, presents trust as a fundamental ingredient in the evolution of the norms of truth-telling. Edward Craig has argued as well that the capacity to trust a “good informant” on the basis of indirect cues of competence and sincerity is epistemically more fundamental than that of directly seeking true information[34].

Most human beings have bits of information from which others may benefit. The Web has revealed this fundamental trait of socially distributed information in a dramatic way. Surfing on what other people know makes us learn faster than going to look for the right piece of content ourselves. Emulation and conformism (trust what the others think, and conform your behaviour to theirs) are powerful evolutionary strategies for the survival of groups[35].

Yet, if a trustful attitude is a default condition for starting any process of information seeking in information-dense environments, we need strategies to allocate our trust in a balanced way and avoid gullibility. That is, we need to balance trust with a vigilant attitude. A vigilant attitude is, in my sense, not a pre-condition of trust: it is developed within the trust relationship. For example, paying attention to cues of consistency in conversation is an example of a vigilant attitude once we have already accepted to share a minimal trust with our interlocutor, that is, accepted to talk to her or him. Evidence shows that social networks’ users have this vigilant attitude, not towards the information they share, but towards the information they acquire through social interactions. Developing trust in the social Web is a necessary condition for making it a powerful epistemic tool. Without a massive sharing of information, we could not acquire so much knowledge by navigating online. We pay the price of a “blind trust” in sharing to get the advantage of a vigilant acquisition of information[36].

Many vigilant strategies are under our control, like monitoring the behaviour of our interlocutors in order to gather information about their past records; but not all of them are. Networked information displays biases that are difficult for its users to control and monitor. And the fact that we must share first in order to get something relevant from the social Web puts us in an asymmetric situation that can create pernicious effects, which explain the puzzle of trust I started with.


5. Trust in networks

The social Web enhances strategies of deference and epistemic trust by making information available through social networks. The Twitter mind deploys a follower-leader strategy, giving epistemic advantages to those who choose the right leaders to follow, that is, the good informants. Trust in these social networks is thus crucially different from trust in social relations as depicted in standard visions of society.

Networks have notorious biases and properties that are not immediately compatible with the construction of a distributed democracy of knowledge, as the Web seemed to promise. First of all, networks such as the Web are aristocratic: information tends to concentrate around a few authorities to which everybody refers, and the more it concentrates in this way, the more probable it is that those few authorities become even more authoritative (the notorious “rich get richer” effect). Networks are hierarchical, enhance authority asymmetries and encourage the development of techniques of control such as the production of evaluations, rating and ranking systems that simplify the organization of information[37].
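
A minimal simulation can make the “rich get richer” effect concrete: if each new page links to existing pages with a probability proportional to the links they already have (the preferential-attachment mechanism often used to model such networks), a handful of nodes ends up holding a disproportionate share of the connections. The sketch below is purely illustrative and does not model any actual network.

import random
from collections import Counter

def preferential_attachment(n_nodes=1000, links_per_node=2, seed=42):
    """Each new node links to existing nodes chosen with probability
    proportional to the number of links they already have."""
    random.seed(seed)
    targets = [0, 1]                 # endpoints, listed once per link they hold
    degrees = Counter({0: 1, 1: 1})  # start from a single link between 0 and 1
    for new in range(2, n_nodes):
        chosen = set()
        while len(chosen) < links_per_node:
            chosen.add(random.choice(targets))  # degree-proportional pick
        for old in chosen:
            degrees[new] += 1
            degrees[old] += 1
            targets.extend([new, old])
    return degrees

degrees = preferential_attachment()
top5 = degrees.most_common(5)
print("Most connected nodes:", top5)
print("Share of all link endpoints held by the top 5:",
      round(sum(d for _, d in top5) / sum(degrees.values()), 2))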

We are living in a networked information society: that means that information is organized in networks that display hierarchical properties and authority effects that may not be compatible with a democratic use of information. In order to have a vigilant attitude towards the information we receive from these networks, we should be aware of the structural effects of hierarchisation of information that are built into the system. Collective decision systems and systems of aggregation of information are usually designed to balance these effects. Is the social Web designed in such a way? Unfortunately, the answer is in the hands of a few people, and it is less than clear that the motivation in designing the connecting features of the Web is that of balancing these effects.

6. Solving the Puzzle: Default Trustful Attitude and Reliance on Reputational Cues

Trust in networks is thus a complex epistemic attitude that mixes various social and cognitive competences and is shaped by each individual’s belonging to these reticular structures. Two central components of this complex attitude are worth describing because they may help solve the puzzle I presented above:

(1)  A default trustful attitude that shapes our communicative practices. As I said, we do not choose to trust others. We are permanently immersed in social relations that make us gain knowledge about the world. We can be vigilant, but without a default trustful attitude that disposes us to learn from others and accept what they say, we would risk losing too much relevant information. In epistemic trust, considerations of relevance seem to overrule considerations of accountability.
(2)  A strong reliance on reputational cues that have accumulated through a number of interactions with the networked system over time. Reputation both helps fashion collective processes of knowledge and is a central criterion for extracting information from these systems. It is a fundamental shortcut for accumulating knowledge in processes of collective wisdom. It is also an ineluctable filter. In an environment where sources are in constant competition for attention and direct verification is unavailable, rankings and ratings allow us to have and use information. Our minds can never investigate or manipulate the world in solitude. The greater our uncertainty about the content of information, the stronger our reliance on the opinions of others to evaluate this content (see the sketch after this list). Just as our lore is woven into the fabric of our sentences, our concern for reputation is woven into the fabric of our social network systems. This claim is in part conceptual, in part empirical. Even if not all such systems reflect this passion for ranking, we can expect that those that do will generate more epistemically reliable products than those that do not.
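
As a purely illustrative sketch of this last point (the numbers and the weighting rule are hypothetical, not a model of any actual platform), one can picture an agent whose final evaluation of an item blends her own assessment with an aggregated crowd rating, with the crowd weighing more as her own uncertainty grows.

def blended_evaluation(own_assessment: float,
                       crowd_rating: float,
                       uncertainty: float) -> float:
    """Blend one's own assessment (0-1) with an aggregated crowd rating (0-1).
    'uncertainty' lies in [0, 1]: the higher it is, the more weight the crowd gets."""
    return (1 - uncertainty) * own_assessment + uncertainty * crowd_rating

# An agent mildly likes an article (0.4) that carries a high crowd rating (0.9).
for uncertainty in (0.1, 0.5, 0.9):
    score = blended_evaluation(own_assessment=0.4,
                               crowd_rating=0.9,
                               uncertainty=uncertainty)
    print(f"uncertainty={uncertainty}: final evaluation = {score:.2f}")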

These two features of epistemic trust in web-based social networks may help to solve the puzzle of the “enchanted trust” of the Internet, as opposed to the “disenchanted trust” typical of mature democracies, and explain why people trust so naïvely on the Internet. As I said, these two forms of trust are deeply different, the first being a cognitive posture we need to adopt in order to filter the overwhelming amount of information we have to parse.


7. Reducing the epistemic deficit: assessment devices and epistemic responsibility
How can we try to reconcile these two radically different trust attitudes? If we want to make the Internet a public, democratic space that enhances egalitarian social relations, we should try to balance the “wild” forces of the social networks by providing devices and techniques that help reduce the epistemic deficit that societies of knowledge are creating between lay people and IT-based expertise. It is also important to distinguish between the classical deferential relations to experts (human experts) and the new forms of deference generated by “automata experts” such as search engines and many other IT-based products whose algorithms filter information. People trust search algorithms, reputation algorithms, etc. as if the relevance of the information they get from them were so high that the costs of questioning the way in which they got that information were perceived as prohibitive. But they are not prohibitive, and there are plenty of ways of “unpacking” these systems and making people more aware of the biases and effects that influence the results.
As democracies have thrived by developing forms of “disenchanted trust”, a democratic networked e-society should encourage the production and spreading of devices whose aim is to assess the credibility of the systems that produce the so-called “trusted information”. Trusted information very often comes packed in rankings. Examples of rankings are: hierarchies of results generated by a search engine, where the results at the top are perceived as “more trustworthy” than those in a lower position; rating systems, such as indicators[38] of quality, that is, named, rank-ordered, simplified and processed pieces of data that purport to represent the past or projected performance of different units, or a classification system; and influence measures, such as the number of “likes” or scores that measure the social impact of an individual or a company on the Web[39]. All these devices, although they constitute a new form of “social objectivity”, have biases and can potentially be manipulated. A responsible use of these devices goes with users’ awareness of the possible biases and manipulations. A society of knowledge such as the one that new technologies have made possible will thrive only if we develop rules and regulations to protect the epistemic trust of citizens. Rankings should be multiplied across different supports and according to different scales, to avoid winner-takes-all effects and informational cascades.
Furthermore, social networks are today a dangerous cocktail of user-supplied content (what we put online on our profiles), open APIs and client-side code, that is, computer programs that are executed in the client’s browser and thus allow web pages to be scripted so that their content can change according to the user’s input[40]. The use of these technologies should be controlled and security enhanced. APIs and Web applications that abuse users’ trust (for example by diffusing private information or automatically executing programmes on behalf of the users) should be forbidden.


The same should be said for the imperialistic use of Facebook and Twitter. Governments and IT research centres should encourage the construction of a multitude of public social networks and a variety of public search engines, in order to avoid the risk of a monopoly of information in the hands of a few big actors such as Facebook, Twitter and Google. Technologies that are designed to protect users’ epistemic trust should be encouraged, while those that “free-ride” on the trustful dispositions of social networks’ users should be sanctioned.



8. Conclusion
I have argued that the trustful attitude that characterizes social networks users should be balanced by a vigilant attitude of epistemic responsibility not only from the perspective of the producers of information, but also from the perspective of consumers of information[41]. We are not condemned to passively trust information crunched by new technologies of expertise. We still have the means to check whether our practices of production and acquisition of information are based on reliable processes, or just on passively accepted social norms or humdrum cognitive habits. We can always ask ourselves: “Why do I trust a Google search, or a Harvard University Press book, or an authoritative voice?”
An epistemically responsible attitude should be encouraged at the individual level and at the institutional one. We should promote the assessment of credible, open-access procedures for the endorsement of authority. Wikipedia has been a model for institutionalizing new practices of certification and norms of credibility. Yet, Wikipedia is itself a victim of its success and – as a network effect – its weight is growing so much that it becomes difficult to counterbalance its authority with other similar projects. At an institutional level, a diversity of projects should be promoted in order to balance these pernicious network effects.
A society of knowledge is growing in a way that seems incompatible with a growing demand for open-access, trusted, accountable information. This seems paradoxical, and fixing the paradox is the first step towards regaining a coherent image of ourselves as free epistemic subjects.











[1] Bower, J. L.  and Christensen, C. M. (1995), Disruptive Technologies: Catching the Wave, «Harvard Business Review», 73, 1: 43-53
[2] Hindman, M. (2008), The Myth of Digital Democracy , Princeton University Press.
[3] Cf. on this point the recent book edited by Elster; J. &  Landemore, H. (2012) Collective Wisdom, Cambridge University Press.
[4] Cf. Mark Davis’ (Microsoft) presentation of the EC Onlife Manifesto Initiative, Brussels, February 8th, 2013, accessible at: https://ec.europa.eu/digital-agenda/sites/digital-agenda/files/Onlife_Initiative.pdf
[5] Bohman, J. (2004) Expanding Dialogue. The Internet, The Public Sphere, and the Prospects for International Democracy, «The Sociological Review», 52: 131-155.
[6] Sunstein, C. (2009), Republic.com 2.0, Princeton University Press.
[7] Cf. Habermas, J. (1999) Der europäische Nationalstaat unter dem Druck der Globalisierung (Lo Stato Nazione europeo sotto la pressione della mondializzazione), in Blätter für deutsche und internationale Politik, 4.
[8] Cf. data on Pew Internet & American Life Project at: http://www.pewinternet.org/Topics/Activities-and-Pursuits/Politics.aspx?typeFilter=5
[9] Cf. for example, Pettit, P. (2004) Reliance, Trust and Internet, “Analyse & Kritik”, 26: 108-121, http://analyse-und-kritik.net/en/2004-1/AK_Pettit_2004.pdf; McGeer, V. (2004) Developing Trust on the Internet, “Analyse & Kritik”, 26: 91-107.
[10] On trust and democracy, cf. Warren, M. E. (1999) (ed.), Democracy and Trust, Cambridge University Press.
[11] Cf. on this point Dunn, J. (1988) Trust and political agency, in Diego Gambetta (ed.), Trust: Making and Breaking Cooperative Relations, Oxford: Basil Blackwell.
[12] I borrow the expression “from custom to code” from Rom Harré (1990) Trust and its surrogates: psychological foundations of political process, in Warren (cit.), ch. 8.
[13] Cf. Warren (cit.) but also Hardin (ed.) (2004) Distrust, Russell Sage Foundation, NY.
[14] Cf. Dwyer, C., Hiltz, S. R., Passerini, K. (2007) Trust and Privacy Concerns within Social Networking Sites. A comparison between Facebook and MySpace, “Proceedings of the Thirteenth Americas Conference on Information Systems”, Keystone, Colorado, 9-12 August.
[15] Citation in a blog: http://themaniblog.wordpress.com/2012/02/28/on-human-rights-on-the-internet/
[16] See on this point Chris Anderson’s article on the Death of the Web on Wired: http://www.wired.com/magazine/2010/08/ff_webrip/all/1
[17] Source: http://gking.harvard.edu/gking/files/censored.pdf
[18] Cf. Grigoriadis, V. (2009) Do you own Facebook or does Facebook own you? http://nymag.com/news/features/55878/index2.html
[19] Cf. Hardin, R. (1992) The Street Level Epistemology of Trust, “Analyse und Kritik”, 14, 154-174.
[20] Cf. Baier, A. (1986) Trust and Antitrust, “Ethics”, 96, 231-260; Jones, K. (2012) Trustworthiness, “Ethics”, 61-85.
[21] See Hardin 2004 (cit.).
[22] Cf. Hardin, R. (2004) Distrust. Manifestations and Management in Hardin, H. (ed.) Distrust, Russell Sage Foundation, NY, p. 4.
[23] McGeer (2004) cit, Pettit (2004) cit.
[24] Cf. Mindus, P., Greppi, A., Cuono, M. (2011) Legitimacy 2.0, p. 70, http://uppsala.academia.edu/PatriciaMindus/Books/
[25] Cf. on this point, Origgi, G. (2002) Per una scienza cognitiva di Internet  « Sistemi Intelligenti », XIV, n.2, pp. 269-286; Origgi, G. (2003) Ricerche su Internet,  “La Rivista dei Libri”, dicembre.
[26] Cf. Kleinberg, J. (2000) The Structure of the Web, “Science”, http://www.sciencemag.org/content/294/5548/1849.summary
[27] Cf. Morris, M.R., Counts, S. et al. (2012) Tweeting is Believing? Understanding Microblog Credibility Perceptions, CSCW 2012, Washington, Seattle, 11-15 February.
[28] http://googleblog.blogspot.fr/2011/02/update-to-google-social-search.html
[29] Cf. Cardon, D. (2013) Du lien au like. Deux mesures de la réputation sur Internet, in Origgi, G. (ed.) La Réputation, COMMUNICATIONS, Seuil, Paris.
[30] For the notion of good informant see Craig, E. (1990) Knowledge and the State of Nature, Oxford, Clarendon Press.
[31] Cf. Origgi, G. (2008) A Stance of Trust, in María Luisa Mora Millán (ed.) “Estudios en homenaje a José Luis Guijarro Morales”, Universidad de Cadiz, ISBN 978-84-9828187-3, 187-200.

[32] Cf. Origgi, G. (2004) Is Trust an Epistemological Notion? Episteme, 1, 1; G. Origgi (2005) What Does it Mean to Trust in Epistemic Authority? http://academiccommons.columbia.edu/catalog/ac:130569
[33] Cf. Williams, B. (2002) Truth and Truthfulness, Princeton UP.
[34] Cf. Craig cit.
[35] See. Boyd, R. and Richerson, P.J. (2002) Not by Genes Alone; Sperber, D. Origgi, G. et al. (2010) “Epistemic Vigilance”, Mind and Language, 25, 4, pp. 359-393.
[36] Cf. Morris, M.R. et al. (2012) (cit.).
[37] See Origgi, G. (2012) A Social Epistemology of Reputation, “Social Epistemology”, 26, 399-418.
[38] For an analysis of indicators see: Davis, K.E., Kingsbury, B. and Merry, S.E. (2012) Indicators as a Technology of Global Governance, “Law and Society Review”, 46, 1, pp. 71-104.
[39] See for example the Klout score: www.klout.com
[40] See Devin, S.M. (2008) Anti-Social Networking: Exploiting Trusting Environment of Web 2.0, “Network Security”, 11.
[41] Cf. Origgi, G. (2010) Epistemic Vigilance and Epistemic Responsibility in the Liquid World of Scientific Publications, “Social Epistemology”, 24, 3, 149-159; Origgi, G. (2012) Epistemic Injustice and Epistemic Trust, “Social Epistemology”, 26, 2, 221-23.

Thursday, June 21, 2012

Farewell to Elinor Ostrom, Nobel Laureate of the "Commons"


Published in the Sunday supplement of Il Sole 24 Ore, 17 June 2012. All rights reserved.

Elinor Ostrom has passed away at 78. She was awarded the Nobel Prize in Economics (with Oliver E. Williamson) in 2009, one year after the financial tragedy of 2008, with the crash of international stock markets and the chain collapse of banks and states. Honoured for her work on the economics of common goods and strategies of collective action, Ostrom was the first woman ever to receive the Nobel in this discipline. Her reasoning was simple: there is no inexorable tragedy necessarily inscribed in the genes of our economic behaviour. We can learn to collaborate, to change the rules of the game and to manage the resources we share more reasonably.



An outsider in the eminently male world of "hard" economists, the news of her Nobel left the community surprised and unprepared. Most economists did not even know her, and many considered her more an expert in political science than a real economist. An interdisciplinary scholar, Ostrom did not hesitate to combine empirical research, anthropological observation and the study of social norms to understand how people spontaneously organize themselves to manage collective goods such as air, forests, water, pastures, beaches and knowledge (see the collection edited with Charlotte Hess: La conoscenza come bene comune, Bruno Mondadori), without the need for laws imposed from above. In her book Governare i beni collettivi. Istituzioni pubbliche e iniziative di comunità (Marsilio, 2007), Elinor Ostrom attacks the canonical vision of cooperation that reigns in economics and, more generally, in the Weltanschauung of the present.

For the economists still convinced, lucky them, of the rationality of homo oeconomicus, cooperation is an unsolvable dilemma, studied and re-studied under the name of the Prisoner's Dilemma. The very name of the dilemma says a lot about the vision of humanity behind it. The Prisoner's Dilemma is a social dilemma because individually rational action leads to a situation of collective irrationality. The situation is as follows. We have to decide whether or not to cooperate with someone. The best thing for both of us would be to cooperate, because it would allow us to obtain the best result. But the risk that the other will cheat us once we have made the first move is too high (I pay in advance for his goods and he, instead of shipping them, runs off with the money). So we settle for the mediocre result of not cooperating, in order to avoid the heavy losses we would suffer if the other defected.

The tragedy of cooperation was the subject of a famous article published in Science in 1968, The Tragedy of the Commons, in which the ecologist Garrett Hardin argued that collective action is impossible because it generates paradoxes like the Prisoner's Dilemma and, ultimately, the collapse of shared resources. If a group of individuals shares a common good, say a pasture, and each one maximizes his own interest, then sooner or later the pasture will be completely destroyed. The inexorable logic of the pasture's fate became a leitmotiv for economists, most of whom decided that rules for managing property rights over collective goods must be imposed from above to avoid their complete destruction.

Ostrom's empirical and theoretical work simply shows that this force of destiny of the rational, utility-maximizing man does not exist. If one analyzes the examples, as she does, ranging from irrigation systems in Nepal to African pastures, from water management in California to forests and fishing rights, one soon realizes that people are not victims of the Prisoner's Dilemma. People cooperate, find solutions, try to reach agreements with those who seem most willing to participate, create systems of sanctions for free-riders; in short, nobody remains trapped in paradoxical relations that inevitably lead to ruin. Except perhaps the economists and the markets that embody their visions of the world.

As she rightly writes in her book, the Prisoner's Dilemma embodies a fundamental metaphor of the present: man in a trap, who can only choose for himself while suffering his social destiny. That is why his destiny is tragic. Whitehead argued that the dramatic essence of tragedy is not misfortune, but the solemnity of the inexorable and ineluctable course of events. Ostrom's research shows that this ineluctability does not exist, except in the abstraction of the formal models of financial transactions. And instead of being surprised that a Nobel Prize in economics was given to an interdisciplinary researcher, an expert in sociology, anthropology, political economy, governance, public institutions, and so on, we should rather ask ourselves how a Nobel Prize in economics alone, rather than in the social sciences in general, can still exist today. The Nobel Prizes in economics given to psychologists, philosophers, ethicists and political scientists have not been enough to make economists understand that there are more things in heaven and earth than in their models.

Tuesday, June 19, 2012

By Gloria Origgi | Response | 2011 Annual Question | Edge



Kakonomics, or the strange preference for Low-quality outcomes


Philosopher and Researcher, C.N.R.S. Paris; Author, Text-E: Text in the Age of the...
I think that an important concept for understanding why life so often sucks is Kakonomics, or the weird preference for Low-quality payoffs.
Standard game-theoretical approaches posit that, whatever people are trading (ideas, services, or goods), each one wants to receive High-quality work from others. Let's stylize the situation so that goods can be exchanged only at two quality levels: High and Low. Kakonomics describes cases where people not only have the standard preference to receive a High-quality good and deliver a Low-quality one (the standard sucker's payoff) but actually prefer to deliver a Low-quality good and receive a Low-quality one, that is, they connive on a Low-Low exchange.
How can it ever be possible? And how can it be rational? Even when we are lazy and prefer to deliver a Low-quality outcome (like preferring to write a piece for a mediocre journal provided they do not ask one to do too much work), we would still have preferred to work less and receive more, that is, to deliver Low quality and receive High quality. Kakonomics is different: here, we not only prefer to deliver a Low-quality good, but also prefer to receive a Low-quality good in exchange!
Kakonomics is the strange — yet widespread — preference for mediocre exchanges insofar as nobody complains about them. Kakonomic worlds are worlds in which people not only live with each other's laxness, but expect it: I trust you not to keep your promises in full because I want to be free not to keep mine and not to feel bad about it. What makes it an interesting and weird case is that, in all kakonomic exchanges, the two parties seem to have a double deal: an official pact in which both declare their intention to exchange at a High-quality level, and a tacit accord whereby discounts are not only allowed but expected. It becomes a form of tacit mutual connivance. Thus, nobody is free-riding: Kakonomics is regulated by a tacit social norm of discount on quality, a mutual acceptance of a mediocre outcome that satisfies both parties, as long as they go on saying publicly that the exchange is in fact at a High-quality level.
Take an example: a well-established best-selling author has to deliver his long-overdue manuscript to his publisher. He has a large audience and knows very well that people will buy his book just because of his name, and that anyway the average reader doesn't read more than the first chapter. His publisher knows it as well. Thus, the author decides to deliver to the publisher a new manuscript with a stunning incipit and a mediocre plot (the Low-quality outcome): she is happy with it, congratulates him as if she had received a masterpiece (the High-quality rhetoric), and they are both satisfied. The author's preference is not only to deliver a Low-quality work, but also that the publisher give back the same, for example by sparing him a too serious editing and publishing it anyway. They trust each other's untrustworthiness, and connive on a mutually advantageous Low outcome. Whenever there is a tacit deal to converge on Low quality with mutual advantages, we are dealing with a case of Kakonomics.
Paradoxically, if one of the two parties delivers a High-quality outcome instead of the expected Low-quality one, the other party resents it as a breach of trust, even if he may not acknowledge it openly. In the example, the author may resent the publisher if she decides to deliver a High-quality editing: her being trustworthy in this relation means delivering Low quality too. Contrary to the standard Prisoner's Dilemma game, the willingness to repeat an interaction with someone is ensured if he or she delivers Low quality too, rather than High quality.
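
The contrast with the Prisoner's Dilemma can be made explicit with two stylized payoff orderings. The numbers below are purely illustrative (only their order matters) and are my own schematic rendering of the idea: in a standard exchange the best outcome for me is to deliver Low and receive High, whereas in a kakonomic exchange the mutual Low-Low outcome tops my ranking.

# Each key is (what I deliver, what I receive); values are my payoffs.
# Illustrative numbers only: just the ordering matters.

# Standard Prisoner's-Dilemma-like exchange: delivering Low while
# receiving High is best for me, and mutual High beats mutual Low.
standard = {
    ("Low",  "High"): 4,
    ("High", "High"): 3,
    ("Low",  "Low"):  2,
    ("High", "Low"):  1,   # I deliver High, receive Low: the sucker's payoff
}

# Kakonomic exchange: I prefer to deliver Low AND to receive Low,
# so mutual mediocrity is the preferred, self-reinforcing outcome,
# and the other party's unexpected High quality is felt as a breach.
kakonomics = {
    ("Low",  "Low"):  4,
    ("Low",  "High"): 3,
    ("High", "Low"):  2,
    ("High", "High"): 1,
}

def best_outcome(payoffs):
    """Return the (deliver, receive) pair with the highest payoff."""
    return max(payoffs, key=payoffs.get)

print("Standard exchange, my preferred outcome:", best_outcome(standard))
print("Kakonomic exchange, my preferred outcome:", best_outcome(kakonomics))
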
Kakonomics is not always bad. Sometimes it allows a certain tacitly negotiated discount that makes life more relaxing for everybody. As one friend who was renovating a country house in Tuscany told me once: "Italian builders never deliver when they promise, but the good thing is they do not expect you to pay them when you promise either."
But the major problem of Kakonomics — which in ancient Greek means the economics of the worst — and the reason why it is a form of collective insanity so difficult to eradicate, is that each Low-quality exchange is a local equilibrium in which both parties are satisfied, while each of these exchanges erodes the overall system in the long run. So the threat to good collective outcomes doesn't come only from free riders and predators, as mainstream social sciences teach us, but also from well-organized norms of Kakonomics that regulate exchanges for the worse. The cement of society is not just cooperation for the good: in order to understand why life sucks, we should also look at norms of cooperation towards a local optimum and a common worse.

Tuesday, May 29, 2012

What is the Ultimate Taste?

Slides presented at the workshop The Taste of Wine: Its History and Philosophy, held at the Institut Nicod in Paris on May 25th. See the programme: http://www.tastings.fr/event/2012_taste-of-wine.php?lang=fr

Sunday, May 20, 2012

If Aristotle Plays the Indian

Copyright Il Sole 24 Ore. This article was published in the cultural supplement of Il Sole 24 Ore on May 20th, 2012. All rights reserved. Do not quote without permission.




Invited to Paris last week for a series of lectures, Galen Strawson, a British philosopher belonging to the most genuinely analytic tradition, author of monographs on metaphysics and causality, son of the even more analytic Peter Strawson, in short, the non plus ultra of the canonical Anglo-Saxon philosopher, decided to speak about "Consciousness, phosphorescence and svaprakaasa". What is that?

The opening of the talk left the audience surprised and unprepared: "Aristotle, Dharmakirti, Dignaga, Descartes and Locke were right: consciousness entails the consciousness of being conscious." The claim is banal: it is the starting point of endless debates on the equally endless regress of arguments about consciousness. In short, the content is the usual one, but the packaging is that of the new millennium: who had ever heard a philosopher at the centre of the Western Empire cite names from the philosophical traditions of its former colonies?

In reality, India between the fifth and the seventh century, when the Buddhist school of Dignaga and of his main commentator, Dharmakirti, flourished, was anything but a colony: it was a complex and mature culture in which traditions of thought, such as Buddhism and Hinduism, challenged one another. During what is considered the golden age of Indian philosophy, a true epistemological turn took place, thanks to the authors cited by Strawson, through which attention shifted from strictly religious and metaphysical questions to the understanding of what constitutes a valid form of cognition, or pramana. Dignaga's epistemological turn is not, however, enough to build a bridge with Western philosophy. To simplify as much as possible, the greatest gap between Eastern and Western philosophy is the obsessive separation, in our tradition, between self and world, between subjective and objective, nature and culture, consciousness and inert matter, whereas Eastern philosophy does not separate the two planes, representing our relation to the world as circular rather than vertical: we do not observe an inert world from above; we are part of it, and circularly we move through it and are traversed by it.

Svaprakaasa, mentioned in the title of Strawson's lecture, is consciousness as phosphorescence, a state of being that has a special luminosity: by illuminating itself it illuminates the things around it and, conversely, in order to shed light on the things around it, it sheds light on itself. Consciousness understood in this way is a property of all things, not limited to subjects. Everything can, in this sense, "reflect" the light of an intellect that thinks it.

Strawson, like David Chalmers, defends an anti-reductionist position on consciousness, with a touch of new age, according to which matter and all things might potentially be conscious. But in David Chalmers's boundless bibliography on consciousness there is not a single mention of Indian philosophers. What is radically new is inserting svaprakaasa, pramana and other related concepts into the heart of the Western canon.

What is so surprising about this intellectual operation? First of all, its formidable naïveté. No streetwise post-modern philosopher, who knows that the world is socially constructed, would have dared to import so uncritically notions that come from a political and cultural world distant in time and space. The post-modernist warns: there are no concepts as such that, like consumer goods, can be transported from one reality to another!

And yet Strawson's naïveté has the advantage of bringing to the fore a language that, in the context of the dominant debate, was unfamiliar to us. Of course, in the age of Wikipedia it is easier to become familiar with svaprakaasa and perhaps to trivialize it, and post-modernists and specialists in Indian philosophy will be horrified by such simplifications. But if Western philosophy does not want to suffocate, if it wants to shed the centralism that in the global era is nothing but provincialism, perhaps it had better not embark on interpretations and unveilings of what the thinkers of other traditions really mean, but rather take their baggage of notions, as they say, at face value: as they are, perhaps trivializing them, betraying them, but making the effort to broaden its own philosophical dictionary in the direction of the world of tomorrow. Pure political correctness? Maybe, but words matter, and what it becomes legitimate to say or not to say changes the course of thought. Dignaga alongside Aristotle, even if trivialized, mangled, misunderstood, is a step forward in understanding the perennial problems with the new gaze of a global way of thinking.

Perhaps the naïve realism of the analytic philosopher, who takes concepts as they are, without trying to unveil their secret nature as devices of domination, is the right attitude for rethinking philosophy in a global key, respecting the integrity of those fragile products that are philosophical ideas and helping us, also ethically, to understand that the questions that have always tormented thinkers all over the world are more similar than we had believed.


On the theme of global philosophy, I organized a conference in New York in 2011 at the Italian Cultural Institute: Global Humanities. The debate was on related themes and, in general, on how the humanities, so deeply rooted in local traditions and values, can become global. The archives of the debate, with all the texts presented, are available online at: http://www.interdisciplines.org/conferences/Global-Humanities