Tuesday, May 20, 2008

Designing Wisdom Through the Web: The Passion of Ranking
Draft. Do not Quote. Presented at the workshop on Collective Wisdom, Collège de France, Paris 22-23 May 2008.

Let me start with a rather trivial remark: design matters. This triviality is rich in consequences for collective wisdom, and it is the central claim I would like to defend in this paper. No matter how many people are involved in the production of a collective outcome – a decision, an action, a cognitive achievement, etc. – the way in which their interactions are designed, what they may or may not know of each other, how they access the collective procedure, what path their actions follow and how it merges with the actions of others, affects the content of the outcome. Of course, this is well known by policy makers, constitution writers and all those who participate in the institutional design of a democratic system, or of any other system of rules that has to take into account the point of view of the many. But the claim may appear less evident – or at least in need of a more articulated justification – when it concerns the design of knowledge and epistemic practices on the Web. That is because the Web has mainly been seen as a disruptive technology whose immediate effect was to blow up all the existing legitimate procedures of knowledge access, thus “empowering” its users with a new intellectual freedom: the liberty to produce, access and distribute content in a totally unregulated way. Still, the methods of tapping into the wisdom of crowds on the Web are many, and much more clearly differentiated than is usually acknowledged. In his book The Wisdom of Crowds – probably the only shared piece of collective wisdom that we are able to attribute to each other as background reading in this very interdisciplinary conference – James Surowiecki writes about the different designs for capturing collective wisdom: “in the end there is nothing about a futures market that makes it inherently smarter than, say, Google. These are all attempts to tap into the wisdom of the crowd, and that’s the reason they work”. 
Yet sometimes the devil is in the details, and the way in which the wisdom of crowds is captured makes a huge difference to its outcome and to its impact on our cognitive life. The design question that is thus central when dealing with these systems is: how can people and computers be connected so that – collectively – they act more intelligently than any individual, group, or computer?

In this paper I will try to go through the details of some of the collective wisdom systems that are nowadays used on the Web. I will provide a brief “technical” description of the design that underlies each of them. Then, I will argue that these systems work because of their very special way of articulating (1) individual choices and collectively-filtered preferences on the one hand, and (2) human actions and computer processes on the other. I will then conclude with some epistemological remarks about the role of ranking in our epistemic practices, arguing that the success of the Web as an epistemic practice is due to its capacity to provide not so much a potentially infinite system of information storage as a giant network of ranking and rating systems in which information is valued insofar as it has already been filtered by other people. My modest epistemological prediction is that the Information Age is being replaced by a Reputation Age, in which the reputation of an item – that is, how others value and rate it – is the only way we have to extract information about it. I see this passion for ranking as such a central feature of collective wisdom that I am tempted to add it as a condition to the very illuminating list of conditions that James Surowiecki imposes in his characterisation of a wise crowd, that is:

  1. diversity of opinion (each person should have some private information)
  2. independence (people’s opinions are not determined by others)
  3. decentralization (people are able to draw on local knowledge)
  4. aggregation (presence of mechanisms that turn individual judgements into collective decisions)

  5. presence of a rating device (each person should be able to produce a rating hierarchy, rely on past ranking systems and make – at least in some circumstances – his or her ratings available to other persons)

I think that this last condition is particularly useful for understanding the processes of collective intelligence that the Internet has made possible, although it is not limited to the Internet phenomenon. Of course, this opens the epistemological question of the epistemic value of these rankings, that is, to what extent their production and use by a group changes the ratio between truths and falsities produced by that group and, individually, how an awareness of rankings should affect a person’s beliefs. After all, rankings introduce a bias in judgement, and the epistemic superiority of a biased judgement is in need of justification. Moreover, these rankings are the result of collective human activities registered by artificial devices. The control of the heuristics and techniques that underlie these dynamics of information may be out of sight or incomprehensible for the users, who find themselves in the very vulnerable position of relying on external sources of information through a dynamic, machine-based channel of communication whose heuristics and biases are not under their control. For example, the fact that companies used to pay to be included in search engines or to gain a “preferred placement” was unknown to 60% of users[1] until the American Federal Trade Commission issued in 2002 a public recommendation asking search engine companies to disclose paid-link policies and clearly mark advertisements in order to avoid users’ confusion.

The epistemic status of these collectively produced rankings thus opens a series of epistemological questions:

1. Why do people trust these rankings and should they?

2. Why should we assume that the collective filtering of preferences produces wiser results on the Web?

3. What are the heuristics and biases of the aggregating systems on the Web that people should be aware of?

These questions include a descriptive as well as a normative perspective on the social epistemology of collective wisdom systems. A socio-epistemological approach to these questions – such as the one I endorse – should try to elucidate both perspectives. Although this paper will explore mainly the descriptive side of the question, by showing the design of collective wisdom systems with their respective biases, let me introduce these examples with some general epistemological reflections that also suggest a possible line of answer to the normative issues. In my view, in an information-dense environment, where sources are in constant competition for attention and the option of directly verifying information is simply not available at reasonable cost, evaluations and rankings are epistemic tools and cognitive practices that provide an inevitable shortcut to information. This is especially striking in contemporary informationally overloaded societies, but I think it is a permanent feature of any extraction of information from a corpus of knowledge. There is no ideal knowledge that we can adjudicate without access to the previous evaluations and adjudications of others. And my modest epistemological prediction is that the higher the uncertainty about the content of information, the stronger the weight of the opinions of others in establishing the quality of that content. This doesn’t make us more gullible. Our epistemic responsibility in dealing with these reputational devices is to be aware of the biases that the design of each of these devices incorporates, whether for technical, sociological or institutional reasons. A detailed presentation of the sorts of aggregation of individual choices that the Internet makes available should thus be accompanied by an analysis of the possible biases that each of these systems carries in its design.

1. Collective intelligence out of individual choices

People – and other intelligent agents – often think better in groups, and sometimes think in ways which would be simply impossible for isolated individuals. The Internet is surely an example of this. That is why the rise of the Internet created from the outset huge expectations about a possible “overcoming” of thought processes at the individual level, towards the emergence of a new – more powerful – form of technologically mediated intelligence. A plethora of images and metaphors of the Internet as a super-intelligent agent thus invaded the literature on media studies: the Internet as an extended mind, a distributed digital consciousness, a higher-order intelligent being, and so on.

Yet the collective processes that make the Internet such a powerful cognitive medium are precisely an example of “collective intelligence” in the intended meaning of this workshop, that is, a means of aggregating individual choices and preferences. What the Internet made possible, though – and this was indeed spectacular – was a brand new form of aggregation that simply didn’t exist before its invention and diffusion around the world. In this sense, it provided a new tool for aggregating individual behaviours that may serve as a basis for rethinking other forms of institutions whose survival depends on combining the views of the many in the appropriate way.

1.1. The Internet and the Web

As I said in the introduction, the salient aspect of this new form of aggregation is a special way of articulating individual choices and collectively-filtered preferences through the technology of the Internet and, especially, of the World Wide Web. In this sense, it is useful to distinguish from the outset between the Internet as a networking phenomenon and the Web as a specific technology made possible by the existence of this new network. The Internet is a network whose beginnings go back to the Sixties, when American scientists at AT&T, Rand and MIT and the Defense Communication Agency started to think of an alternative model of transmitting information through a network. In the classical telephone system, when you call New York from your apartment in Paris, a circuit is opened between you and the New York destination – roughly, a copper line which physically connects the two points. The idea was thus to develop an alternative, “packet-switching” technology: digitalizing conversations – that is, translating waves into bits – then chopping the result into packets which could flow independently through the network while giving the impression of a real-time connection at the other end. In the early Seventies the first decentralised network, Arpanet, was put into use; it was able to transfer a message by spreading its chunks through the network and then reconstructing it at the destination. By the mid Seventies, the first important application on the network, electronic mail, was created. What made this net such a powerful tool was its decentralised way of growing: the Internet is a network of networks, which uses pre-existing wires (like telephone networks) to make computers communicate through a number of protocols (such as TCP/IP) that are not proprietary: each new user can connect to the network by using these protocols. Each newly invented application – a mail system, a system for transferring video, a digital phone system – can use the same protocols. 
Internet protocols are “commons”[2], and that was a boost to the growth of the network and the creativity of the applications using it. This is crucial for the wisdom of the net. Without the political choice to keep these protocols free, the net would not have grown in a decentralised manner and the collaborative knowledge practices that it has realized would not have been possible. The World Wide Web, which is a much more recent invention, maintained the same philosophy of open protocols compatible with the Internet (like HTTP – hypertext transfer protocol – or HTML – hypertext markup language). The Web is a service which operates through the Internet: a set of protocols and conventions that allows “pages” (i.e. a particular format of information that makes it easy to write and read content) to be easily linked to each other through the technique of the hyperlink. It is a visualization protocol that makes the display of information very simple. The growth of the Web is not the same thing as the growth of the Internet. What made the Web grow so fast is that creating a hyperlink doesn’t require any technical competence. The Web is an illustration of how an Internet application may flourish thanks to the openness of the protocols. And it is true that the impact of IT on collective intelligence is due mostly to the Web.

1.2. The Web, collective memory and meta-memory

What makes the aggregation of individual preferences through the Web so special? For the history of culture, the Web is a major revolution in the storage, dissemination and retrieval of information. The major cultural revolutions in the history of culture have had an impact on the distribution of memory. The Web is one such revolution. Let’s see in what sense. The Web has often been compared to the invention of writing or printing. Both comparisons are valid. Writing, introduced at the end of the 4th millennium BCE in Mesopotamia, is an external memory device that makes possible the reorganization of intellectual life and the structuring of thoughts, neither of which are possible in oral cultures. With the introduction of writing, one part of our cognition “leaves” the brain to be distributed among external supports. The visual representation of a society’s knowledge makes it possible both to reorganize the knowledge in a more useful, more ‘logical’ way – by using, for example, lists, tables, or genealogical trees – and to solidify it from one generation to the next. What’s more, the birth of “managerial” castes who oversee cultural memory, such as scribes, astrologists, and librarians, makes possible the organization of meta-memory, that is, the set of processes for accessing and recovering cultural memory.

Printing, introduced to our culture at the end of the 15th century, redistributes cultural memory, changing the configuration of the “informational pyramid” in the diffusion of knowledge. In what sense is the Web revolution comparable to the invention of writing and printing? In line with these two earlier revolutions, the Web increases the efficiency of recording, recovering, reproducing and distributing cultural memory. Like writing, the Web is an external memory device, although different in that it is “active” in contrast to the passive nature of writing. Like printing, the Web is a device for redistributing the cultural memory in a population, although importantly different since it crucially modifies the costs and time of distribution. But unlike writing and printing, the Web presents a radical change in the conditions for accessing and recovering cultural memory with the introduction of new devices for managing meta-memory, i.e., the processes for accessing and recovering memory. Culture, to a large extent, consists in the conception, organization and institutionalization of an efficient meta-memory, i.e. a system of rules, practices and representations that allows us to usefully orient ourselves in the collective memory. A good part of our scholastic education consists in internalizing the systems of meta-memory – classifications of style, rankings, etc. – chosen by our particular culture. For example, it is important to know the basics of rhetoric in order to rapidly “classify” a line of verse as belonging to a certain style, and hence to a certain period, so as to be able to efficiently locate it within the corpus of Italian literature. Meta-memory thus serves not only a cognitive function – retrieving information from a corpus – but a social and epistemic function: providing an organization for this information in terms of various systems of classification that embody the value of the “cultural lore” of that corpus. 
The way we retrieve information is an epistemic activity which allows us to access, through the retrieving filters, how the cultural authorities have classified and ranked a piece of information within that corpus. With the advent of technologies that automate the functions of accessing and recovering memory, such as search engines and knowledge management systems, meta-memory also becomes part of external memory: a cognitive function, central to the cultural organization of human societies, has become automated – another “piece” of cognition thus leaves our brain in order to be materialized through external supports. Returning to the example above, if I have in mind a line of poetic verse, say “Guido, i’vorrei...”, but can recall neither the author nor the period, and am unable to classify the style, these days I can simply write the line of verse in the text window of a search engine and look at the results. The highly improbable combination of words in a line of verse makes possible a sufficiently relevant selection of information that yields among the first results the poem from which the line is taken (my search for this line using Google yielded 654 responses, the first ten of which contained the complete text of the poem from Dante’s Rime).

How is this meta-memory designed through the Web technology? What is unique on the Web is that the actions of the users leave a track on the system that is immediately reusable by it, like the trails that snails leave on the ground, which reveal to other snails the path they are following. The combination of the tracks of the different patterns of use may be easily displayed in a rank that informs and influences future preferences and actions of the users. The corpus of knowledge available on the Web – built and maintained by the individual behaviours of the users – is automatically filtered by systems that aggregate these behaviours in a ranking and make it available as filtered information to new, individual users. I will analyse different classes of meta-memory devices. These systems, although they all provide a selection of information that informs and influences users’ behaviour, are designed in different ways, and the difference is worth taking notice of.

2. Collaborative filtering: wisdom out of algorithms

2.1. Knowledge Management Systems

Collaborative filtering is a way of making predictions about the preferences of a user based on the patterns of behaviour of many other users. It is mainly used for commercial purposes in web applications for e-business, although it has been extended to other domains. A well-known example of a system of collaborative filtering, which I assume we are all familiar with, is Amazon.com: Amazon.com is a Web application, a knowledge management system which keeps track of users’ interactions with the system and is designed to display correlations between patterns of activity in a way that informs users about other users’ preferences. The best known feature of this system is the one which associates different items to buy: “Customers who bought X also bought Y”. The originality of these systems is that the matching between X and Y is in a sense bottom-up (although the appropriate thresholds of activity above which this correlation emerges are fixed by the information architecture of the system). The association between James Surowiecki’s book and Ian Ayres’s book Super Crunchers that you can find on the Amazon page for The Wisdom of Crowds has been produced automatically by an algorithm that aggregates the preferences of the users and makes the correlation emerge. This is a unique feature of these interactive systems, in which new categories are created by automatically transforming human actions into visible rankings. The collective wisdom of the system is due to a division of cognitive labour between the algorithms, which compose and visualize the information, and the users who interact with the system. The classifications and rankings that are thus created aren’t based on previous cultural knowledge of the habits and customs of users, but on the emergence of significant patterns of aggregated preferences through individual interactions with the system. 
Of course, biases are possible within the system: the weights associated with each item are fixed in such a way that some items have more chances of being recommended than others. But given that the system is fed by the repeated actions of the users, an overly biased recommendation that couples items users won’t buy together will not be replicated enough times to stabilize within the system.
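The bottom-up matching described above can be sketched in a few lines. This is a minimal, hypothetical illustration – not Amazon’s actual algorithm – in which items are recommended together once their co-purchase count crosses a fixed threshold, the kind of design parameter the text notes is set by the system’s information architecture:

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories: each set is one customer's basket.
baskets = [
    {"wisdom_of_crowds", "super_crunchers"},
    {"wisdom_of_crowds", "super_crunchers", "nudge"},
    {"wisdom_of_crowds", "nudge"},
    {"super_crunchers"},
]

# Count how often each pair of items is bought together.
pair_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        pair_counts[(a, b)] += 1

def recommend(item, threshold=2):
    """Items co-purchased with `item` at least `threshold` times."""
    recs = []
    for (a, b), n in pair_counts.items():
        if n >= threshold:
            if a == item:
                recs.append(b)
            elif b == item:
                recs.append(a)
    return sorted(recs)

print(recommend("wisdom_of_crowds"))  # → ['nudge', 'super_crunchers']
```

Note how the `threshold` parameter embodies the bias discussed above: whoever sets it decides how much evidence is needed before a correlation becomes a visible recommendation.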

2.2. PageRank

Another class of systems that realize meta-memory functions through artificial devices are search engines. As we all know by experience, search engines have brought about a major transformation of our epistemic practices and a profound cognitive revolution. The most remarkable innovation of these tools is due to the discovery of the structure of the Web at the beginning of this century[3]. The structure of the Web is that of a social network, and it contains a lot of information about its users’ preferences and habits. Second-generation search engines, like Google, are able to exploit this structure in order to gain information about how knowledge is distributed throughout the world. Basically, the PageRank algorithm interprets a link from a page A to a page B as a vote that page A expresses for page B. But the Web is not a democracy, and not all votes have the same weight. Votes that come from certain sites – called “hubs” – have much more weight than others, and reflect in a sense hierarchies of reputation that exist outside the Web. Roughly, a link from my homepage to Professor Elster’s page weighs much less than a link to my page from that of Professor Elster. The Web is an “aristocratic” network – an expression used by social network theorists – that is, a network in which the “rich get richer”: the more links you receive, the higher the probability that you will receive even more. This disparity of weights creates a “reputational landscape” that informs the result of a query. The PageRank algorithm is nourished by the local knowledge and preferences of each individual user, and it influences them by displaying a ranking of results that is interpreted as a hierarchy of relevance. 
Note that this system is NOT a knowledge management system: the PageRank algorithm doesn’t know anything about the particular pattern of activity of each individual; it doesn’t know how many times you and I go to the JSTOR website, and it doesn’t combine our navigation paths together. A “click” from one page to another is opaque information for PageRank, whereas a link between two pages contains a lot of information about users’ knowledge that the system is able to extract. Still, the two systems are comparable from the point of view of the design of collective intelligence: neither requires any cooperation between agents in order to create a shared system of ranking. The “collaborative” aspect of the collective filtering is more in the hands of machines than of human agents[4]. The system exploits the information that human agents either unintentionally leave on a website by interacting with it (KM systems) or actively produce by putting a link from one page to another (search engines): the result is collective, but the motivation is individual.
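The link-as-weighted-vote idea can be made concrete with a toy power-iteration sketch. The pages and link structure below are invented for illustration; the damping factor of 0.85 is the one reported in the original PageRank paper, but this is a bare-bones sketch, not Google’s production system:

```python
# Toy link graph: page -> pages it links to (hypothetical pages).
links = {
    "elster": ["jstor"],
    "origgi": ["elster", "jstor"],
    "jstor":  ["elster"],
}
# Note: "origgi" receives no links, so its vote counts for little.
# Correction for this toy run: let origgi vote only for elster.
links["origgi"] = ["elster"]

pages = sorted(links)
d = 0.85  # damping factor, as in the original PageRank formulation
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):  # power iteration until approximate convergence
    new = {p: (1 - d) / len(pages) for p in pages}
    for p, outs in links.items():
        share = d * rank[p] / len(outs)  # p splits its vote among its links
        for q in outs:
            new[q] += share
    rank = new

# A vote from a highly ranked page weighs more than one from an obscure page:
# "elster", linked to by everyone, ends up on top.
print(max(rank, key=rank.get))
```

The “rich get richer” dynamic is visible here: the page that already collects the most links accumulates rank from every voter, while the page that nobody links to keeps only the baseline `(1 - d) / N`.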

Biases of search engines have been a major subject of discussion, controversy and collective fear in recent years. As I mentioned above, the refinement of second-generation search engines such as Google has at least led to the explicit marking of paid inclusions and preferred placements, but this required political intervention. Also, the “Matthew effect” of aristocratic networks is notorious, and the risk of these tools is that they give prominence to already powerful sites at the expense of others. Awareness of these biases should also lead to a refinement of search practices: for example, the more improbable the string of keywords, the more relevant the filtered result. Novices and learners should be instructed in even simple principles that make them less vulnerable to these biases.

3. Reputation systems: wisdom out of status anxiety

The collaborative filtering of information may sometimes require more active participation in a community than is needed in the examples above. In his work on information politics on the Web, the sociologist Richard Rogers classifies web dynamics as “voluntaristic” or “non-voluntaristic” according to the respective roles of humans and machines in providing information feedback for the users. Reputation systems are an example of a more “voluntaristic” web application than the ones seen above. A reputation system is a special kind of collaborative filtering algorithm that determines ratings for a collection of agents based on the opinions that these agents hold about each other. A reputation system collects, distributes, and aggregates feedback about participants’ past behaviour.

The best known and probably simplest reputation system with a large impact on the Web is the auction system at www.eBay.com . eBay allows commercial interactions among more than 125 million people around the world. People are buyers and sellers. Buyers place a bid on an item. If their bid is successful, they complete the commercial transaction, and then both buyers and sellers leave feedback about the quality of that transaction. The different feedbacks are then aggregated by the system into a very simple feedback profile, in which positive and negative feedbacks, plus some comments, are displayed to the users. The reputation of an agent is thus useful information in deciding whether to pursue a transaction. Reputation has in this case a real, measurable, commercial value: in a market with a fragmented offer and very little information available on each offer, reputation becomes crucial information for trusting the seller. Sellers on eBay know very well the value of a good reputation in such a special business environment (no physical encounters, no chance to see and touch the item, vagueness about the normative framework of the transaction – if, for example, it takes place across two different countries, etc.), so there are numbers of very low-cost transactions whose only objective is to gain one more positive evaluation. The system creates a collective result by forcing cooperation, that is, by asking users to leave an evaluation at the end of the transaction and sanctioning them if they don’t comply. Without this active participation of the users, the system would be useless. Still, it is a special form of collaborative behaviour that doesn’t require any commitment to cooperation as a value. Non-cooperative users are sanctioned to different degrees: they can be negatively evaluated not only if the transaction isn’t good, but also if they do not participate in the evaluation process. 
Breaking the rules of eBay may lead to exclusion from the community. The design of wisdom thus comprises active participation from the users for fear of being ostracized by the community (which would be seen as a loss of business opportunities). Biases are clearly possible here too. People invest in cheap transactions whose only aim is to gain reputational points. This is a bias one should be aware of and can easily check: if a seller offers too many cheap items, he may be too concerned with his public image to be considered reliable.
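The aggregation into a feedback profile described above is deliberately simple: tally positive, neutral and negative evaluations and display a ratio. The following sketch, with invented sellers and scores, illustrates this kind of profile (it is a stylized model, not eBay’s actual implementation):

```python
from collections import defaultdict

# Hypothetical feedback log: (seller, score), score in {+1, 0, -1}.
feedback = [
    ("alice", +1), ("alice", +1), ("alice", -1),
    ("bob", +1), ("bob", 0),
]

# Aggregate each seller's feedback into a simple profile.
profiles = defaultdict(lambda: {"positive": 0, "neutral": 0, "negative": 0})
for seller, score in feedback:
    key = {1: "positive", 0: "neutral", -1: "negative"}[score]
    profiles[seller][key] += 1

def positive_ratio(seller):
    """Share of positive feedback: the figure buyers typically glance at."""
    p = profiles[seller]
    total = p["positive"] + p["neutral"] + p["negative"]
    return p["positive"] / total if total else 0.0

print(round(positive_ratio("alice"), 2))  # → 0.67
```

Because the ratio ignores the price of each transaction, the bias noted above falls straight out of the design: many cheap transactions inflate the profile exactly as much as a few expensive ones.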

Some reputational features are also used by non-commercial systems such as www.flickr.com. Flickr is a collaborative platform for sharing photos. For each picture, you can see how many users have added it to their favourites, and who they are.

Reputation systems differ from other systems for measuring reputation that use citation analysis, such as the Science Citation Index. These systems are in a sense reputation-based, given that they use scientometric techniques to measure the impact of a publication in terms of the number of citations it receives in other publications. But they don’t require any active participation of the agents in order to obtain the measure of reputation.

4. Collaborative, open systems: wisdom out of cooperation

The collaborative filtering on the Web may be even more voluntaristic and human-based than in the previous examples, while still necessitating a Web support to realize an intelligent outcome. The two most discussed cases of collaborative systems that owe their success to active human cooperation in filtering and revising the available information are the Open Source communities of software development, such as Linux, and collective open-content projects such as Wikipedia. In both cases, the filtering process is completely human-made: code or content is made available to a community which can filter it by correcting, editing or erasing it according to personal or shared standards of quality. I would say that these are communities of amateurs rather than of experts, that is, of people who love what they do and decide to share their knowledge for the sake of the community. Collective wisdom is thus created by individual human efforts that are aggregated in a common enterprise in which some norms of cooperation are shared.

I won’t discuss biases on Wikipedia: it is such a large topic that it could be the subject of another paper. Let me just mention that Larry Sanger, one of its founders, is promoting an alternative project, www.citizendium.org, which endorses a policy of accreditation for its authors. Self-promotion, ideology, and targeted attacks on reputation may of course act as biases in the selection of entries. But the fear of Wikipedia as a dangerous place of tendentious information has been disconfirmed by the facts: thanks to its large size, Wikipedia is hugely differentiated in its topics and views, and it has been shown that its reliability is no less than that of the Encyclopedia Britannica[5].

5. Recommender systems: wisdom out of connoisseurship

Another class of systems is based on the recommendations of connoisseurs in a particular domain. One of my favourite examples of wisdom created out of recommendations is the Music Genome Project at www.pandora.com, a sort of Web-based radio that works by aggregating thousands of descriptions and classifications of pieces of music produced by connoisseurs and matching these descriptions with the “tastes” of listeners (as the listeners describe them). It then broadcasts a selection of music pieces that correspond to what the listeners like to hear. And it works! Imagine how good it would be to have a similar system that selects papers for you on the basis of recommendations by experts that match your tastes! Some recommender systems collect information from users by actively asking them to rate a number of items, to express a preference between two items, or to create a list of items that they like. The system then compares the data to similar data collected from other users and displays a recommendation. It is basically a collaborative filtering technique with a more active component: people are asked to express their preferences, instead of having their preferences inferred from their behaviour, which makes a huge difference. It is well known in psychology that we are not so good at introspection, and sometimes we consciously express preferences that are incoherent with our behaviour: if asked, I may express a preference for classical music, while if I keep a record of how many times I actually listen to classical music compared with other genres in a week, I realize that my preferences are quite different.
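The active variant of collaborative filtering just described – collect explicit ratings, find the most similar user, recommend what they liked – can be sketched as follows. Users, genres and ratings are invented for illustration; this is a minimal nearest-neighbour model, not Pandora’s actual matching system:

```python
# Hypothetical explicit ratings (1-5) that users have given to music genres.
ratings = {
    "ann":  {"classical": 5, "jazz": 4, "pop": 1},
    "bob":  {"classical": 4, "jazz": 5},
    "carl": {"pop": 5, "rock": 4},
}

def similarity(u, v):
    """Cosine similarity over the genres both users have rated."""
    common = set(ratings[u]) & set(ratings[v])
    if not common:
        return 0.0
    dot = sum(ratings[u][g] * ratings[v][g] for g in common)
    nu = sum(ratings[u][g] ** 2 for g in common) ** 0.5
    nv = sum(ratings[v][g] ** 2 for g in common) ** 0.5
    return dot / (nu * nv)

def recommend(user):
    """Items rated by the most similar other user but not yet by `user`."""
    others = [v for v in ratings if v != user]
    nearest = max(others, key=lambda v: similarity(user, v))
    return sorted(set(ratings[nearest]) - set(ratings[user]))

print(recommend("bob"))  # → ['pop']
```

The contrast with the behavioural systems of section 2 lies entirely in where the `ratings` dictionary comes from: here users state their preferences explicitly, with all the introspective unreliability the text notes.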

Conclusions

This long list of examples of Web tools for producing collective wisdom illustrates how fine-grained the choice of design for aggregating individual choices and preferences can be. The differences in design that I have underlined result in deep differences in the kinds of collective communities that are generated by these technologies. Sometimes the community is absent, as in the case of Google users, who cannot be defined as a “community” in any interesting normative sense; sometimes the community is normatively demanding, as in the case of eBay, in which participation in the filtering process is needed for the survival of the community. If the new collective production of knowledge that the Web – and in particular the Web 2.0 – makes possible is to serve as a laboratory for designing “better” collective procedures for the production of knowledge or of wise decisions, these differences should be taken into account.

But let me end by coming back to a more epistemological claim about the kind of knowledge produced by these new tools. As I said at the beginning, these tools work insofar as they provide access to rankings of information, labelling procedures and evaluations. Even Wikipedia, which doesn’t display any explicit rating device, works on the following principle: if an entry has survived on the site – that is, it has not been erased by other wikipedians – it is worth reading. This may be too weak an evaluative tool, and, as I said, there is ongoing discussion these days about introducing more structured filtering devices on Wikipedia[6], but in my opinion the survival of even egalitarian projects like Wikipedia depends on their capacity to incorporate a ranking: the label Wikipedia already works in itself as a reputational cue that orients the choices of users. Without the reputation of the label, the success of the project would be much more limited.

As I said at the beginning, the Web is not only a powerful reservoir of all sorts of labelled and unlabelled information; it is also a powerful reputational tool that introduces ranks, rating systems, weights and biases into the landscape of knowledge. Even in this information-dense world, knowledge without evaluation would be a sad desert landscape in which people would stand stunned before an enormous and mute mass of information, like Bouvard and Pécuchet, the two heroes of Flaubert's famous novel, who retire to work through every known discipline without, in the end, being able to learn anything. An efficient knowledge system will inevitably grow by generating a variety of evaluative tools: that is how culture grows, how traditions are created. A cultural tradition is, to begin with, a labelling system of insiders and outsiders, of who stays on and who is lost in the magma of the past. The good news is that in the Web era this inevitable evaluation is made through new, collective tools that challenge the received views and develop an innovative and democratic way of selecting knowledge. But there is no escape from the creation of a "canonical" – even if tentative and rapidly evolving – corpus of knowledge.

References

A. Clark (2003) Natural Born Cyborgs, Oxford University Press.

L. Lessig (2001) The Future of Ideas, Vintage, New York.

G. Origgi (2007) “Wine epistemology: The role of reputation and rating systems in the world of wine”, in B. Smith (ed.) Questions of Taste, Oxford University Press.

G. Origgi (2007) « Un certain regard. Pour une épistémologie de la réputation », presented at the workshop La réputation, Fondazione Olivetti, Rome, April 2007.

G. Origgi (2008) Qu’est-ce que la confiance, VRIN, Paris.

R. Rogers (2004) Information Politics on the Web, MIT Press.

L. Sanger (2007) “Who says we know: On the new politics of knowledge”, at www.edge.org.

D. Taraborelli (2008) “How the Web is changing the way we trust”, in K. Waelbers, A. Briggle, P. Brey (eds.), Current Issues in Computing and Philosophy, IOS Press, Amsterdam.

P. Thagard (2001) “Internet epistemology: Contributions of new information technologies to scientific research”, in K. Crowley, C. D. Schunn, and T. Okada (eds.), Designing for Science: Implications from Professional, Instructional, and Everyday Science, Mahwah, NJ: Erlbaum, 465-485.


[1] Princeton Survey Research Associates, “A Matter of Trust: What Users Want from Websites”, Princeton, January 2002, at: http://www.consumerWebwatch.com/news/report1.pdf. The case is reported in R. Rogers (2004) Information Politics on the Web, MIT Press.

[2] Cf. on this point, L. Lessig (2001) The Future of Ideas, Vintage, New York.

[3] Kleinberg, J. (2001) “The Structure of the Web”, Science.

[4] Knowledge management systems like Amazon.com have some collaborative filtering features that need cooperation, like writing a review of a book or ranking a book with the five stars ranking system, but these aren’t essential to the functioning of the collaborative filtering process.

[5] Cf. “Internet Encyclopedias go head to head” Nature, 438, 15 December 2005.

[6] See L. Sanger, “Who says we know: On the new politics of knowledge”, online at www.edge.org, and my reply to him, G. Origgi, “Why reputation matters”.

Saturday, April 26, 2008

LiquidPublication: call for applications


'LiquidPublication' Post-doctoral researcher
Innovating the Scientific Knowledge Object Lifecycle
Institut Jean Nicod (CNRS, EHESS, ENS), Paris
-under the responsibility of Gloria Origgi and Roberto Casati
Candidates are invited to submit an application (in English) including a detailed curriculum vitae, a list of publications, a statement of interest, and two letters of recommendation. The application should be sent directly both to Gloria Origgi at origgi@ehess.fr and to Roberto Casati at casati@ehess.fr .


We are seeking to recruit a post-doctoral researcher as part of an international project entitled LiquidPublication. Funded by the European Commission, the project brings together a highly interdisciplinary team of researchers and experts to explore how ICT and the lessons learned from software engineering and the social Web can be applied to bring about a radical paradigm shift in the way scientific knowledge is created, disseminated, evaluated, and maintained. The goal is to exploit these novel technologies to enable a transition of the “scientific paper” from its traditional “solid” form (i.e., a crystallization in space and time of a scientific knowledge artifact) to a Liquid Publication (or LiquidPub for short), which can take multiple shapes, evolve continuously in time, and be enriched by multiple sources. We call these new, dynamic objects Scientific Knowledge Objects (SKOs). More details on the project and its partners are available at: http://project.liquidpub.org/

Keywords: social epistemology, web epistemology, scientific evaluation, information design, social simulation, human-computer interaction

The post-doctoral researcher will be based in Paris, at the Institut Nicod (Ecole Normale Supérieure and Ecole des Hautes Etudes en Sciences Sociales,
www.institutnicod.org ) and will have the opportunity to (1) work in an interdisciplinary team, (2) learn to manage an international research project, (3) present research plans and findings to specialist audiences at project workshops and conferences, and (4) disseminate their research in a wide range of project publications.

The core expert consortium on the project includes the Project Coordinator, Professor Fabio Casati (Department of Computer Science, University of Trento), the Spanish National Research Council, the academic publishing house Springer Science, the University of Fribourg and the CNRS in Paris.

The post will start in late May 2008. Candidates are welcome to submit their applications from now on; applications will be accepted until the position is filled. The position will initially be for 12 months. The project will continue thereafter for a further 24 months, and post-holders will be eligible to apply for continuing post-doctoral research positions. During the first 24 months, research will mainly be based at the Institut Jean Nicod (CNRS, EHESS, ENS) in Paris, France. The researcher will be required to develop a programme of research under the supervision of Gloria Origgi and Roberto Casati (CNRS) on "Defining Processes and Roles of Scientific Knowledge Objects (SKO)", that is, understanding the mutual interaction between the different actors in the creation of a liquid document – authors, collaborators, readers, evaluators, publishers – and designing a prototype version of the SKO and its related plug-ins. The candidate will also be involved in research on copyright policies and will be encouraged to explore existing Web 2.0 solutions for diffusing and evaluating academic research.

The post-doctoral researcher should meet the following requirements:
  • A background in epistemology, web studies, media studies, information design, or human-computer interaction.
  • An interest and experience in web-based design and evaluation of knowledge.
  • An interest and experience in Web 2.0 social tools.
  • Some programming skills for Web development (PHP, SQL).
Doctoral degrees must be completed before appointment to the post.
Candidates should be effective team players and independent researchers who can work to deadline and who are able to communicate effectively across disciplines.

Fluent spoken and written English is essential.
The salary is €2000 per month (net). French social security and retirement benefits will apply.

The successful candidate will be given ample leeway. Our objective is to create a leading group in web epistemology and SKOs, and thereby to make sure that our candidate establishes herself or himself as a leading figure in the domain. We will thus be flexible in redefining assignments during the life of the project.

The candidate is expected to contribute to the scientific and the management aspects of the project.

1. Participate in the research and design process:
- Writing up a state of the art on evaluation policies and software (in the early months)
- Contributing to the definition of SKO roles and the information design of the SKO prototype
- Contributing to the software development of the SKO prototype
- Producing two scientific papers per year in international peer-reviewed journals, on themes defined in coordination with the principal investigators
- Tracking the relevant communities (such as Bp3 and Researchblogging): searching for relevant communities, following up on their activities, producing periodic executive summaries
- Participating in the creation of guidelines for SKOs (copyright design, etc.)

2. Management
-Effectively managing the project on behalf of the Institut Nicod, under the direction of the principal investigators, including: implementing project policies, keeping in touch with the coordinator, producing reports, ensuring the timely production of deliverables for WPs, liaising with the CNRS and the European Commission, organizing meetings, and keeping track of knowledge (e.g. by contributing to the project website and blog).
-Providing creative solutions for making the LiquidPublication community members interact efficiently, and for reaching out to other communities.

For enquiries, email Gloria Origgi
origgi@ehess.fr.

A pdf copy of this call can be retrieved here.

Sunday, April 13, 2008

Dan Sperber's Templeton Research Lecture. The causes of religion

Dan's First Templeton Lecture on Religion, Nashville, April 2008. Bravo Dan.

A Vision of Students Today

What the university is and what it will be. This video was made by American anthropology students, and they are perfectly right. We are maintaining a higher education system that has lost contact with reality. Ex-cathedra classes are useless nowadays. Students would gain much more from serious, personalized supervision by their teachers while working with collaborative tools over the Internet. We are informationally overloaded today: students do not need teachers to get information; they need them as guides, as connoisseurs who can transmit a talent for browsing the corpus of knowledge and selecting what is worth studying.

We are maintaining disciplinary boundaries whose only interest is to reproduce a caste of academics who feel comfortably established in each particular discipline. We are maintaining a mode of scientific production that is old, based on the nineteenth-century model that fitted Prussian universities, in which the scientist was seen as a public officer whose productive constraints were determined by societal needs, while we all know today that the impact of a single result can change the course of science, given the speed at which it is diffused through the Internet. We all know that many research programs are hopeless today, that entire departments could shut their doors without any serious cultural loss (apart from the feeling of loss that some people feel each time an "endangered species" disappears; but it is true: most disciplines are endangered species, and they would naturally and rapidly disappear if they weren't kept alive by the old-fashioned academic structure).

There is a lot of mutual connivance in the academy today, which creates an incredible inertia: people prefer to stick to traditional modes of knowledge production because they are comfortable with them, even if they know very well that these are sub-optimal for students and for the advancement of research in general. The appeal to normative standards of truth and scientific quality is still used to defend a system whose only beneficiaries are those who produce it.


Wednesday, April 02, 2008

Qu'est-ce que la confiance?







My book on Trust is now out in French.

My book Qu'est-ce que la confiance? has been published by VRIN, Paris.




Description:
Vrin, « Chemins Philosophiques ». 128 p., 11 × 18 cm. ISBN : 978-2-7116-1870-5.
Concept-clé pour comprendre notre action sociale et morale, la confiance reste cependant l’une des notions les plus difficiles à traiter de la philosophie et des sciences sociales. La confiance est un état cognitif et motivationnel complexe, un mélange de rationalité, de sentiments et d’engagement. Faire confiance implique donner aux autres un certain pouvoir sur nous-mêmes et accepter la vulnérabilité que cela comporte. Ce volume analyse cette notion sous ses différentes dimensions : sa dimension morale, affective, épistémique et politique, en posant des questions de fond : Avons-nous des devoirs de confiance? Face à un médecin, avons-nous vraiment le choix de faire confiance? Faut-il faire confiance à ceux qui nous gouvernent?

Friday, February 08, 2008

Trust, Authority and Epistemic Responsibility

Draft. Do not quote. Paper submitted for a collective volume on Epistemic Justice.

Trusting others is one of the most common epistemic practices we use to make sense of the world around us. Sometimes we have reasons to trust, sometimes not, and often our main reason to trust is based on the epistemic authority we attribute to our informants. How we usually weigh this authority, how we select the “good informants”[1], and which of their properties we use as indicators of their trustworthiness is a rather complex matter. Indicators of trustworthiness may be notoriously biased by our prejudices, may be faked or manipulated by malevolent or merely interested informants, and change through time and space. Discussions of the way in which these criteria are established and shared in a community range from the history of science to contemporary epistemology and moral philosophy. Their flavour can be more descriptive – as in Steven Shapin’s account of the role of gentlemanliness in the establishment of scientific reputation in seventeenth-century Britain – or normative, as in Miranda Fricker’s work on epistemic justice as the basic virtue that a hearer must have in order to judge his or her informants in a non-biased way.[2] Here I do not want to propose further criteria for ascertaining the reliability of our informants; rather, I would like to discuss the variability of these criteria across different contexts of communication. Communication is an active process through which interlocutors not only transmit and receive information, but negotiate new epistemic standards by constructing together new reasons and justifications that are heavily influenced by the moral, social or political context and by the interests at stake on both sides, the speaker’s and the hearer’s. The kind of “doxastic responsibility”[3] hearers exercise over testimonial knowledge is sensitive to the way the communicative process takes place and to the many contextual factors that influence its success or failure.

The overall picture of trust in testimonial knowledge I want to suggest draws on some ideas I have developed about the relations between the epistemology of trust and the pragmatics of communication[4]. What I want to defend here is that the way we gain knowledge through communication is influenced by the responsibility we take in granting authority to a source of information. Our responsibility is not just a moral quality we happen to possess or have learnt through experience, but a way of approaching our communicative interactions, a stance towards credibility and credulity that is shared, in successful cases, with our informants. It is a dynamic stance, which varies with the contexts and contents at stake, but it is necessary for any epistemic enrichment of our cognitive life, apart from the marginal cases of epistemic luck[5]. An obvious advantage of a pragmatic approach to the epistemology of testimony is that it avoids an artificial image depicting the hearer as a rational chooser who has the option of accepting or refusing a chunk of information presented to her by a speaker. In most cases we simply do not have the choice: we learn from others because we are immersed in conversational practices whose output is exactly the chunk we end up believing or disbelieving. There is no a priori information to gain that is not constructed in the process of communication.

In order to illustrate my main point, I will present in the rest of the paper a series of examples – some real and some fictional – that show how epistemic standards vary across contexts, how differently we weigh evidence as the interests at stake change, and, finally, how the interplay between the pragmatic effects of communication and its epistemic consequences is a dynamic feature of our way of accessing new information. As contexts and stakes change, what we may once have considered just a pragmatic vagary of conversation may become a crucial epistemic cue that orients our allocation of credibility and moral authority to our informants.

Let me start with a political example – one which had a huge impact on the credibility and authority of many political leaders in the United States and Great Britain – namely, the search for evidence and the rhetoric of justification that surrounded the decision to attack Iraq in 2003. It is interesting to notice that this historical example is probably the first case, at least to my knowledge, in which epistemic reasons were so involved in the justification of a political choice that they were used in the public debate in order to build consensus. Getting evidence of the presence of weapons of mass destruction in Iraq became at a certain point the key issue for possessing a political justification to go to war. But the story I am going to tell also has an interesting epistemic moral: there are epistemic standards which people care about and whose violation is heavily sanctioned in terms of credibility. The social order of a society depends also on its cognitive order, and the backlash of public opinion against political leaders, when the expertise they appealed to proved unreliable and its political exploitation disingenuous, was a clear demonstration of this. I underline this point because there is a post-modern tendency today – in social science and in social epistemology too – to consider that all public opinion stems from ideology, and that the reasons given to justify an idea depend only on power. Yet I think that people are smarter and more responsible epistemic agents than post-modernists tend to assume: their awareness of epistemic standards orients their choices and the formation of their opinions, at least in mature democracies.

But let us stick to the facts.

On February 3rd 2003, the British Government released an intelligence dossier entitled “Iraq – Its Infrastructure of Concealment, Deception and Intimidation”, on how Iraq's security organisations operated to conceal weapons of mass destruction from UN inspectors, on the organization of Iraqi intelligence, and on the effects of the security apparatus on ordinary citizens. The report had previously been sent to Mr. Colin Powell, who used some of the material for his well-known presentation at the United Nations (Monday, February 3rd).

A few days later, Channel 4 News learned from the Cambridge academic Glen Rangwala – a lecturer in Middle Eastern politics – that the dossier had been massively copied from an article written by a post-graduate student, Ibrahim al-Marashi, published in September 2002 in The Middle East Review of International Affairs, and from some other articles in Jane’s Intelligence Review. Glen Rangwala was able to disclose the plagiarism by spotting the government’s duplication of material from his own site on Iraqi weaponry and Middle Eastern affairs[6]. The news was rapidly confirmed by Downing Street, and the government apologized for the plagiarism, which actually turned out to be a very lucky case of “blind” trust in expertise: further checks confirmed the accuracy of al-Marashi’s evidence about Iraqi intelligence. But the backlash on public opinion was strong: the impression left was that of a cut-and-paste intelligence analysis with no serious search for evidence. Furthermore, the fact that Colin Powell could have used such shallow evidence to justify the necessity of an attack on Iraq in front of the world was perceived as a lack of moral and epistemic responsibility. In Bernard Williams’ terms, one can say that the loss of credibility was in this case due more to a lack of accuracy about truth than to a lack of sincerity[7]. This is interesting because there is a rich moral tradition which severely sanctions deception as one of the worst sins against our conspecifics. Our ties of trust are built on the credibility of the words we exchange with each other; hence lying is a betrayal of our human nature, a deep violation of the fundamental – almost natural – rules that make up human communities. Montaigne says, for example:

Verily, lying is an ill and detestable vice. Nothing makes us men, and no other means keeps us bound one to another, but our word; knew we but the horror and weight of it, we would with fire and sword pursue and hate the same, and more justly than any other crime […] Whatsoever a lier should say, we would take it in a contrary sense. But the opposite of truth has many shapes, and an indefinite field.[8]

Kant’s well-known position on lying is that it can never be justified, under any circumstances. Not lying is a universal law, with no exceptions: “This means that when you tell a lie, you merely take exception to the general rule that says everyone should always tell the truth”[9]. In his exchange with Benjamin Constant, who objected to the German philosopher that an unconditional “duty to tell the truth” is untenable, Kant restates his position even in response to an extreme example that Constant submits to him. Constant, who wrote against Kant that “The moral principle stating that it is a duty to tell the truth would make any society impossible if that principle were taken singly and unconditionally”[10], presents the following example as a clear case in which the duty to tell the truth is objectionable: if someone were at your door intending to murder a friend of yours who is hiding in your house and asked you where he is, would you still have the duty to tell him the truth? Kant replies, in a text entitled “On a Supposed Right to Lie Because of Philanthropic Concerns”, that yes, even in such an extreme case you have the duty to tell the truth: “Truthfulness in statements that cannot be avoided is the formal duty of man to everyone, however great the disadvantage that may arise therefrom for him or for any other”.[11] So sincerity seems to be a much more important duty than accuracy in the history of philosophy. Inaccuracy is a more recent “moral fault”, one that comes with modernity and with the idea that trust in governments should be based not on the moral virtues of governors but on their expertise and on the reliability of the procedures that assure the good functioning of the State[12].
In a “disenchanted world” in which politics no longer depends on virtue, truthfulness has to be based on evidence, and an epistemic flaw in providing such evidence in support of a claim with political consequences is a moral weakness as well as a way of undermining the grounds of trust that sustain a society.

But let us go on with the story.

In June 2003, Alastair Campbell, Director of Communications in Tony Blair’s government, was accused by the BBC journalist Andrew Gilligan of having “sexed up” another report, released on September 24, 2002, on Iraq’s weapons of mass destruction, which the Government had, unprecedentedly, decided to publish with a preface by the Prime Minister. Based on the testimony of an insider expert who had contributed to the report, Gilligan affirmed that the claim made in the dossier that Iraq could be ready to use chemical weapons within forty-five minutes was known by the Government to be exaggerated at the time they decided to include it. Here too, the use of evidence is surprisingly new: traditionally, political propaganda in support of war was not committed to using evidence to justify its choices; appealing to rhetorical values such as “loyalty to the Nation” or “the duty to defend our borders from the enemy” was all that governors felt obliged to share with citizens. Here, though, the British Government felt the urge to share with its citizens the information contained in the dossier, at its own risk: responsible receivers of this information were able to disclose another flaw, even if of a different kind. In Harry Frankfurt’s crude terms, Campbell’s mistake is more of the order of “bullshit” than of a “lie”. Bullshit is common in our informationally overloaded societies: given that there is too much information around, bullshitting is a way of “sexing up” information in order to attract attention to it.[13] It is a way of playing with the relevance of the information, adding appealing pragmatic effects to it and hence gaining more chances of being heard. But in this case, a not-so-innocent pragmatic manoeuvre was perceived as a grave violation of the epistemic standards that a decent society must endorse in order to sustain trust relationships between citizens and politicians. The consequences of this act are well known.

In July, the name of Gilligan’s informant was revealed to the press by the Ministry of Defence. On July 18th the microbiologist David Kelly, adviser to the Foreign Office and the Ministry of Defence on chemical weapons, was found dead, two days after a very tough hearing before Parliament on July 15th. Having learned of his death while travelling in Japan, Tony Blair declared that an independent inquiry would be opened into the case. Lord Hutton was charged with establishing the facts around David Kelly’s death.

In January 2004, Lord Hutton released his report. After hearing 74 testimonies and analysing more than 300 claims, Lord Hutton established the following facts:

  • David Kelly killed himself under no pressure by any other person
  • In his conversation with Mr. Gilligan on May 23, David Kelly was in breach of the rules governing disclosure of confidential information, even if part of his job description as an adviser to the Foreign Office and the Ministry of Defence involved speaking to the media and to institutions about Iraq’s weapons.
  • It is dubious that David Kelly said to Mr. Gilligan that the claim about 45 minutes was exaggerated.
  • There was no effective pressure from the Government to “sex up” the dossier.

The rest is known: the main responsibility for the affair fell on the BBC, Gilligan lost his job, and the Director General of the BBC, Greg Dyke, resigned.

The result of the inquiry was disappointing for public opinion, because it was perceived as a way of acquitting the government of its responsibilities. But I do not want to enter here into the debate on the moral responsibilities of the UK government. Rather, what interests me in this example is the role played by the epistemic standards shared by responsible citizens in evaluating the credibility of the government’s testimony on Iraqi military power. The way in which information was filtered and evaluated was different in the two cases I have discussed: in the first case the flaw was one of inaccuracy, whereas in the second it was one of insincerity. Yet the fact that in both cases there were people able to detect the flaw and judge it severely shows that real standards exist, that people usually have quite an accurate conception of them, and that it is not easy to fool everybody without paying a price in credibility.

This is also a particularly illuminating example of the strict relation between epistemic authority and political authority in democratic societies. Social epistemology today is becoming also a “political epistemology”: political authority is more and more sustained by various forms of epistemic authority: experts, reports, oracles, think-tanks and independent inquiries have to provide the evidence on which a political choice is going to be taken and judged.

Here are some of the provisional conclusions on trust in authority that I want to draw from this example:

1. Governments rely on experts on technical matters in order to take decisions. But the unprecedented choice of the British Government to publish the September 2002 dossier shows that the use of experts in this case went beyond merely acquiring information about the facts in Iraq: it was also a way of legitimizing its political action on the basis of the information contained in the dossier. The political authority appealed to the epistemic authority of its experts to justify its action.

2. In the February 2003 report, the expertise of the intelligence services was questioned because of plagiarism: the information turned out to be correct, but the way it was acquired was unreliable and inappropriate. It thus seems that the justification of epistemic authority matters for public opinion: an institution that has epistemic authority must not only hold the appropriate information but must be justified in holding it. The “epistemic luck” of acquiring the right piece of information by chance (or through an unreliable method) deflates the authority of the institution.

3. Even if the September 2002 dossier was obviously produced for political reasons, that is, with the aim of establishing the facts that would justify the invasion of Iraq, a direct influence of the political authority on the presentation of facts is intolerable for the democratic functioning of a society, in particular on such a delicate matter as the decision to send people to war. In general: when the potential consequences are grave, standards of evidence, objectivity and impartiality must be raised. An institution that has epistemic authority knows the facts that may justify a political decision, but its authority depends on its autonomy from political power.

4. A politically independent authority, Lord Hutton, was then charged with checking the facts and assessing the responsibilities of all the actors in the affair: experts, media and political authorities. His moral authority gave his report the special epistemic status of “ultimate truth about the case”. Our trust in the “cognitive order” of our society – that is, in who holds knowledge, in the rules and principles by which knowledge is distributed and diffused in a society, in the grounds on which experts should be believed – influences our trust in its social order and is influenced by it. Yet, as I said, real standards play a role; epistemic justice is a shared value, and massive violations of it are difficult to maintain, at least in democracies which ground their consent in the autonomy and epistemic responsibility of subjects.

In the second part of this paper I would like to explore how these real standards nonetheless vary according to time, place and context of communication. For example, what my father’s generation could have perceived as a form of gallantry and a well-mannered way of dealing with women in conversation, such as a restaurant hiding the price list from women guests, is nowadays perceived as an unjust way of blocking a particular category of people’s access to information. Likewise, the paternalistic practice of doctors withholding part of the information about a patient’s health in cases of serious diagnoses has now been banished from medical ethics as a legitimate communicational practice. A contractualist relationship based on informed consent has become the standard way of dealing with ethical issues in medical decision making. This practice aims at readjusting the balance between the power of doctors and the autonomy of patients, in order to avoid the risks of abuse to which a blind trust relationship would expose patients. Informed consent has been introduced as a moral and legal requirement for any medical intervention, be it research or therapeutic, by the 1997 European Convention on Human Rights and Biomedicine[14].

These examples show a continuum, which I wish to explore, between communicational practices and epistemic standards. How we talk to other people - what we say and don’t say - is more than just a matter of linguistic preference for a certain style of conversation. We adjust our language to the information we want to give access to. And, as hearers, we always adjust our interpretations not only to our pragmatic expectations, but also to our epistemic needs.

For example, authority is an important epistemic cue in interpreting what other people say. Elegant experiments show that the same text given to two different groups, each told it comes from a different source, one authoritative and the other not, yields very different interpretations[15] by the readers. In the case of the authoritative source, even if the text is obscure, people tend to overinterpret it in order to make sense of it; in the second case, the effort of interpretation is more limited and, if the text is too obscure, people rapidly conclude that it is nonsense (many of us have experienced the same “authority” effect when reviewing articles by colleagues and students… This is one of the reasons why the peer review process is anonymous!). Authority biases are thus extremely relevant to what we come to understand and believe from our informants. But this is not an on/off process in which we believe authoritative sources and disbelieve non-credible ones: it is the way we process information that comes from authorities, how we adjust our interpretation in order to make sense of what they say, that determines what we come to believe. It is thus the stance we take towards our informants, the way we exercise our epistemic responsibility, that makes us believe or not believe what we are told. The construction of testimonial knowledge is therefore a shared responsibility between informants and hearers: there are no purely unbiased informants - apart from some uninteresting cases, such as the train timetable in the station - just as there are no naïve receivers of information.
Even children, who have long been considered the paradigmatic case of naïve, credulous creatures, have proved to be more sophisticated epistemic subjects than we used to think: they take into account cues of credibility in deciding whether to accept what an informant says, and check the linguistic consistency of their informants in order to adjust their credibility investment[16].

Epistemic responsibility is thus a matter of adjusting our way of interpreting what other people say to our epistemic needs. If I am involved in small talk at a party where someone is discussing the possibility of an invasion of Iran because of its military nuclear programme, I can accept loose evidence for the sake of conversation. But if I hear the same rumour from an insider at the Ministry of Foreign Affairs, and if the consequence for my life is that my son may risk his by fighting in a potential war, then it is my responsibility to raise my epistemic standards and ask for further evidence about Iran’s military nuclear programme.

The interplay between linguistic practices and epistemic concerns may seem a trivial claim. Surprisingly, though, there is little work in philosophy and epistemology that ties together the debate on the pragmatics of communication and the debate on the acceptance of testimonial knowledge. For example, in discussing hearers’ responsibilities in gaining knowledge from testimony, McDowell refers to a vague “doxastic responsibility” that the hearer should exercise before accepting testimonial information.[17] But what this doxastic responsibility consists of is left largely unexplained. I think a fruitful way of conceiving this responsibility is to locate it in the inferential process of interpreting what others say, in the responsible stance we assume in calibrating the epistemic weight we give to what we hear.

In communication, people do not look for true information but for relevant information, that is, information that is relevant enough in a particular context to deserve our attention. But what is relevant in a context is a good proxy for information that has epistemic value for us[18]. We trust other people to provide us with relevant information, and we adjust our epistemic requirements according to the context in which the interpretation takes place. We exercise our “epistemic vigilance”, to use an expression coined by Dan Sperber[19], during our interpretation by adopting a stance of trust that our interlocutors will provide information relevant to us. Any departure from the satisfaction of our expectations of relevance may result in a revision or withdrawal of our default trust.

Let me illustrate this interplay between epistemology and interpretation with two fictional examples: the first aims to show that a departure from relevance may affect our epistemic stance; the second, conversely, illustrates how a change in our stance of trust may result in a different appraisal of relevance in interpretation.

Consider this case. Arianna is late tonight for the Parent-Children Association meeting at her son’s school. It is not the first time she has been late to these kinds of events, and she feels awfully guilty. She justifies herself to the President of the Association by telling her a long and detailed story about a series of accidents and unforeseen events that explain her delay. She adds a little too much to her story: not only did the underground stop for 15 minutes because of an alarm, but she also fell on the stairs, broke her umbrella, and had to shelter from the rain under a roof; then she met a very old friend, who told her about a serious illness, and she was too touched to interrupt the conversation brusquely… The President listens with less and less attention: the relevance of what Arianna is saying is decreasing; there are too many details just to explain a delay. This lack of relevance weakens the President’s stance of trust: why all these details? Maybe she isn’t telling the truth?

Or consider this second example. A typical Parisian street scam goes like this: a stranger approaches you, pretends to be Italian, is very friendly and, after a conversation, tries to convince his “victim” to buy some fake leather jackets he has in his car. He deceives his “clients” by claiming that he cannot take the jackets back to Italy for customs reasons, so buying them is a very good bargain. Now, imagine that Jim is crossing the street and is approached by a man with a strong Italian accent, who introduces himself as “Jules”. Jim starts a conversation with the default trust he usually displays, but at a certain point he remembers that a friend told him about this kind of scam. He stands still, unable to withdraw immediately the attention he has granted the man. But Jules’s words are no longer the same to his ears. For example, Jules asks: “Are you in a hurry?”. Jim is not in a hurry, but he interprets this as an invitation to spend more time with Jules and accept his bargain proposals. He answers “Yes” and hurries away.

These two stories illustrate the interplay between trust and interpretation as I intend it here, that is, as the search for relevant information (information whose cognitive benefit balances the effort of processing it). In the first case, Arianna’s description is too detailed: she gives the President too much information for it to be relevant to her, and this creates a suspicious attitude in the President. In the second, Jules is no longer reliable in Jim’s mind, and this acts as a bias in his way of interpreting what Jules is saying.

Our epistemic responsibility is first of all a matter of taking an appropriate stance of trust towards our informants, a sort of “virtual trust” that does not commit us to accepting as true what is said in conversation. Through our interpretation we weigh the authority and credibility of our informants according to our epistemic needs. On the other hand, the way informants “package in language” what they want to say has epistemic consequences for our allocation of credibility. The epistemic duty of informants amounts to being relevant for us in a context, thus suggesting to us some possible epistemic gains in listening to them. We may risk a trustful posture towards the speakers’ willingness to be relevant, and yet check their truthfulness and reliability through the process of interpretation.

Epistemic responsibilities are thus shared, but in a lighter sense than is often intended in the epistemological literature on testimonial knowledge: we share a context of communication and a practice of interpretation, and we take, on both sides, responsibility for the epistemic consequences of our social life.

References

AA.VV. (2004) The Hutton Inquiry, Tim Coates, London.
Adler, J. (2003) Belief’s own ethics, Mit Press.
Clément, F.; Koenig, M.; Harris, P. (2004) “The Ontogenesis of Trust”, Mind & Language, 19, pp. 360-379.
Coady, C. A. J. (1992) Testimony, Oxford, Clarendon Press.
Foley, R. (2001) Intellectual Trust in Oneself and Others, Cambridge University Press.
Fricker, E. (2006) “Testimony and Epistemic Autonomy” in J. Lackey and E. Sosa (eds.) The Epistemology of Testimony, Oxford University Press.
Fricker, M. (2007) Epistemic Injustice, Oxford University Press.
Gopnik, A., Graf, P. (1988) “Knowing how you know: Young Children’s Ability to Identify and Remember the Sources of Their Beliefs”, Child Development, 59, n. 5, pp. 1366-1371.
Holton, R. (1994) “Deciding to Trust, Coming to Believe”, Australasian Journal of Philosophy, vol. 72, pp. 63-76.
Moran, R. (2005) “Getting Told and Being Believed”, in Philosophers’ Imprints, vol. 5, n. 5, pp. 1-29.
Origgi, G. (2004) “Is Trust an Epistemological Notion?” Episteme, 1, 1, pp. 61-72.
Origgi, G. (2005) “What Does it Mean to Trust in Epistemic Authority?” in P. Pasquino (ed.) Concept of Authority, Edizioni Fondazione Olivetti, Rome.
Origgi, G. (2007) “Le sens des autres. L’ontogenèse de la confiance épistémique”, in A. Bouvier, B. Conein (eds.) L’épistémologie sociale, EHESS Editions, Paris.
Origgi, G. (2008) Qu’est-ce que la confiance?, Paris, VRIN.
Pettit, P., Smith, M. (1996) “Freedom in Belief and Desire”, The Journal of Philosophy, XCIII, 9, pp. 429-449.
Pritchard, D. (2005) Epistemic Luck, Clarendon Press, Oxford.
Ross, A. (1986) “Why Do We Believe What We Are Told?”, Ratio, 28, pp. 69-88.
Ruffman, T., Slade, L., Crowe, E. (2002) “The Relation between Children’s and Mothers’ Mental State Language and Theory of Mind Understanding”, Child Development, 73, pp. 734-751.
Sabbagh, M. A., Baldwin, D. (2001) “Learning Words from Knowledgeable vs. Ignorant Speakers: Links between Preschoolers’ Theory of Mind and Semantic Development”, Child Development, 72, pp. 1054-1070.
Shapin, S. (1994) A Social History of Truth, University of Chicago Press.
Sperber, D., Wilson, D. (1986/1995) Relevance: Communication and Cognition, Basil Blackwell, Oxford.
Wilson, D., Sperber, D. (2002) “Truthfulness and Relevance”, Mind, 111 (443), pp. 583-632.



[1] On the concept of « good informant » see E. Craig (1990), Knowledge and the State of Nature, Oxford, Clarendon Press, where he argues that our very concept of knowledge originates from the basic epistemic need in the State of Nature of recognizing the good informants, that is, those who are trustworthy and bear indicator properties of their trustworthiness.

[2] See S. Shapin (1994) A Social History of Truth, University of Chicago Press; M. Fricker (2007) Epistemic Injustice, Oxford University Press, and M. Fricker, this volume.

[3] The expression is due to McDowell. See J. McDowell (1998) “Knowledge by Hearsay”, in Meaning, Knowledge and Reality, Harvard University Press, Cambridge, Mass.

[4] Cf. G. Origgi (2004) “Is Trust an Epistemic Notion?”, Episteme, 1, 1.

[5] For an interesting and recent analysis of such cases, see D. Pritchard (2005) Epistemic Luck, Oxford University Press.

[6] Cf. http://middleeastreference.org.uk/

[7] Cf. B. Williams (2002), Truth and Truthfulness, Princeton University Press.

[8] Cf. Montaigne : « On Lyers », Essays, Book 1, Ch. IX.

[9] Cf. Kant, Groundwork of the Metaphysics of Morals.

[10] Cf. B. Constant

[11] Cf. Kant, “On a Supposed Right to Lie because of Philanthropic Concerns”, published as a postscript of the Groundwork.

[12] See on this point Russell Hardin: “Trust in Governments” in

[13] Cf. H. Frankfurt (2005), On Bullshit, Princeton University Press.

[14] Cf. on this case G. Origgi, M. Spranzi (2007) “La construction de la confiance dans l’entretien medical”, in T. Martin, P-Y. Quiviger (eds.) Action médicale et confiance, Presses Universitaires de Franche-Comté.

[15] Cf. G. Mosconi (1985) L’ordine del discorso, Il Mulino, Bologna; D. Sperber (2005) “The Guru Effect”, unpublished article, online at www.dan.sperber.com.

[16] Cf. F. Clément et al. (2004) “The Ontogenesis of Trust”, Mind and Language, 19, pp. 360-379; G. Origgi (2007) “Le sens des autres. L’ontogenèse de la confiance épistémique”, in A. Bouvier, B. Conein (eds.) L’épistémologie sociale, EHESS Editions, Paris.

[17] Cf. J. McDowell, cit.

[18] I am using here the technical concept of relevance developed by D. Sperber and D. Wilson in their post-Gricean approach to pragmatics. Cf. D. Sperber, D. Wilson (1986/1995) Relevance: Communication and Cognition, Basil Blackwell. On the relations between relevance and truth, cf. D. Wilson and D. Sperber (2002) “Truthfulness and Relevance”, Mind.

[19] Cf. D. Sperber, O. Mascaro (draft) “Mindreading, Comprehension and Epistemic Vigilance”.
