A discussion on discourse
Posted: Thu Nov 04, 2021 10:59 am
Wasting too much time not doing my work this am, but pulled this quick and it may indict me in some ways but thought it was an interesting blog type piece.
https://medium.com/wonk-bridge/a-short- ... 66fa6f3ba7
A Short Introduction to the Mechanics of Bad Faith
Having Taught Computers Humanity, Perhaps They Can Teach Us Good Faith
Oliver Meredith Cox
Nov 23, 2020 · 19 min read
Winner of the Best Article from a New Contributor 2020 Award at Wonk Bridge’s 2020 Award Ceremony.
Degrain’s “Otelo e Desdemona”, an image and scenario epitomising the quandary of our present focus — to risk good faith, or embrace the ease of bad faith?
Bad faith is corrosive and, as with the nuclear calculus, the only way to deal with it is to invoke a deterrent that is just as (or more) dangerous: the accusation of bad faith, which has the potential to negate any conversation. This appears to be the case because the game theory is off: one might say that our public conversations are like common-pool resources (fisheries, forests, waterways) in which those high in ambition and low in virtue can win big at everyone else’s expense.
What follows is an exploration of this dynamic, and some proposals for how we might find ways to converse better with people who are different to us.
A Background to the Mechanics
A little while ago I wrote an article called “Conversational Interoperability,” which set out some ground rules designed to facilitate fruitful communication between people of different mindsets; this is in the context of how, quite frequently, people of different fields, philosophies or politics find their communication degrading for no other reason than they have different words for the same thing or different perspectives, when they really think quite similarly or at least don’t have differences that warrant getting angry.
That piece was based on Postel’s Law, or the Law of Robustness: “Be conservative in what you send, be liberal in what you accept.” Jon Postel, who was fundamental to the development of the Internet, expressed these words in the context of digital communications, meaning, in essence, that one should only transmit well-formed data but, if one receives malformed data that is nonetheless decipherable, one should parse it anyway.
Hopefully the reader can see how this provides for maximum communication, in theory: in the domain that you control (what you transmit) you should maximize obedience to the standard; in the domain you do not control (what others transmit) you should maximize the total information that you decipher. I extend the Law to my six rules of conversational interoperability, in three parts:
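To make the duality concrete, here is a minimal sketch in Python (the message format and function names are my own, purely illustrative): the sender emits only canonical output, while the receiver accepts any message it can still decipher.

```python
import json

def send_greeting(name):
    """Conservative in what you send: emit only canonical, strictly formed JSON."""
    return json.dumps({"greeting": "hello", "name": name}, sort_keys=True)

def receive_greeting(raw):
    """Liberal in what you accept: tolerate stray whitespace and odd key casing,
    so long as the message is still decipherable."""
    data = json.loads(raw.strip())
    normalized = {key.lower(): value for key, value in data.items()}
    return normalized["name"]
```

A strict receiver would reject the sloppy message outright; a Postel-style receiver extracts the information anyway, and the conversation continues.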
Clarity:
When speaking: be clear.
When listening: be charitable.
Offense:
When speaking: try not to cause offense.
When listening: don’t take offense.
Errors:
When good-faith errors occur: be charitable.
When bad-faith errors occur: treat them like good-faith errors.
One can see how each of these pairs follows the liberal/conservative duality that Postel originally outlined in his law, except for the last one. Upon first meeting with Yuji and Max from the Wonk Bridge team, Yuji asked me about the last point. Yuji’s background is in international relations, wherein one must respond: a nation not responding to border violations and other aggressive acts is irresponsible, given that it invites further foul play. How could I, he asked, possibly recommend treating bad-faith errors just as I would treat good-faith errors?
In this piece I hope to explore the nuance of bad faith and how it interacts with human nature, while proposing a system to incentivize good faith. It consists of six policies, expressed as three progressive steps, each building on the last. For more on bad faith, game theory, and these recommendations in detail, read on.
Six Policies to Encourage Good Faith
Step one: being able to call out bad faith without getting confused by mere sloppy arguing.
1. All fora should publish lists of logical fallacies for people to avoid.
2. All fora should publish lists of philosophy smells or philosophical anti-patterns. A philosophy smell is a style of thinking that is not, itself, fallacious but is flawed in a way that makes it easy for fallacies to slip through.
Step two: avoiding the escalation that can often come when accusations of bad faith are thrown around.
1. Claims of bad faith should be recorded diligently, so as to discourage people from debasing the coin.
2. All claims of bad faith should be falsifiable. Falsifiable claims are stronger, and avoid trap accusations that one can’t get out of.
Step three: building a system with the right incentive structure to encourage good faith behaviour, identify bad faith, and discourage false accusations.
1. All claims of bad faith should be reciprocal: if you make an accusation, you should own up to a past infraction of your own.
2. Good Faith Bonds: an organizational structure that aims to promote good faith. It works thus: people join the organization by posting bond, agreeing that they will act in good faith and avoid baseless accusations of bad faith thereafter. Those found acting in bad faith or making bad faith accusations forfeit their bonds. See below for more detail.
What Is Bad Faith?
What is a bad-faith actor? What is bad faith, for that matter? Bad faith is to act in ways that spoof the normal modes of interaction, such as in debate, conversation, commerce, while actually pursuing hidden, selfish motives or even hoping to disrupt the operation of the system in which they operate.
The most clear and common example is trolling: trolls will often ask what appear to be normal questions of others, when in fact the real goal is to frustrate, annoy or waste time.
More sophisticated bad-faith actors use these techniques in debates. Creationists, for example, are notorious for pretending to think that Evolution is just a theory in the sense that “in theory the pub is still open,” when they know that Evolution is a theory in that it is the best-evidenced model for the system it describes, having stood up to rigorous tests.
In my view, the most popular form of bad faith is the deliberate misunderstanding of phrases and wording, attributing to people views that they do not hold and clearly weren’t actually expressing. One Hitchens quote, if you’ll indulge me: Hitchens describes his opposition to the way in which particular religions categorize women as chattel, owned; the radio talk-show host responded, “Well then are you against ownership?” Hitchens replied: “Of people: yes.”
Hopefully you can glimpse some of what we’re dealing with: bad faith, for our purposes, is when people will twist the universe around in their mind with as much effort as is necessary to avoid understanding a conflicting viewpoint to the extent that it might actually pose a challenge. All of these examples are shadows cast by the same desire: to frustrate, annoy, show off, posture in front of one’s followers, rather than to learn, exchange information and create connections.
The Trouble with Calling out Bad Faith
As such, one might ask, why don’t we just call out these bad actors, plainly and as often as possible? For example, in the case of the creationists, tell them that they have been corrected so many times on the point of the definition of “theory” that it is clear that they are not actually playing to win, but to foul?
The first reason not to do so is that telling someone that they’re in bad faith is the nuclear option: it is meta, and it contains the statement that the accused is not actually interested in discourse or ideas; that they are, at bottom, a troll or, worse, a liar. This is something of a conversation stopper. Granted, it’s probably justified sometimes, but quite often people fall into sin by repeating arguments they hear elsewhere but don’t interrogate.
For example, have you ever heard someone say, “There’s no smoke without fire,” meaning that if there’s a fuss, there must be something to fuss about? Do you see how horrible it is? The phrase makes no allowance for lies or mistakes, and is cruelly wrong in that it damns the victims of false rumors. One might as well say, “There’s no map without territory.”
Meanwhile, and perhaps more importantly, there’s the problem of the proliferation of this nuclear option. Anyone can accuse anyone else of acting in bad faith, but proving that one isn’t is as hard as proving any other negative proposition. To the extent to which a given person is believed by their followers and readers, the accusation of bad faith against another threatens to cut that individual out completely (some people are more than happy to be given reason to ignore another person’s ideas).
This has also been miniaturized into the tactical bad-faith insult: cheap character labels like “racist,” “socialist,” “anti-American.” The accusation of bad faith could be said to lie above all of these, e.g. the person is a social justice warrior and what they say can’t, therefore, be trusted.
The assumption of good faith is essentially the assumption that, despite our differences, we are all trying to seek truth. It is, metaphorically, the phones in JFK and Khrushchev’s offices. It’s a fragile line of communication — and when one side introduces bad faith as a talking point, the other may be tempted to cut the cord.
Game Theory, Mutually Assured Destruction and Technology
Game Theory
The answer to the question of how to deal with the bad actors, or at least the study of this question, starts with game theory. Game theory is the abstraction of strategic action, allowing us to simulate and theorize about the merits of particular strategies. Usually game theory comes down to simulated interactions between two or more “players” with different approaches to the “game.”
For example, Dawkins’ The Selfish Gene features games that simulate social interactions, with only two possible moves: cooperate or defect. These games have multiple rounds: during each round both players make their move, the outcome is decided, then they play the next round, and so on. In the Dawkins example, if both players cooperate, both get a payoff; if one player defects and the other cooperates, the defector gets a payoff and the cooperator nothing; if both defect, both lose.
This allows the researcher to test out different strategies: “always cooperate,” “always defect,” “tit-for-tat” and so on. For readers unfamiliar with the phrase, tit-for-tat means to give it back as you get it: I’ll be nice if you’re nice, but if you hit me I’ll hit you back on the next move.
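A toy simulation of such a game might look like this in Python. The payoff numbers are my own illustrative choices, following the scheme above: mutual cooperation pays, lone defection pays more, and mutual defection costs both players.

```python
def payoff(a, b):
    # Illustrative numbers: cooperation pays, lone defection pays more,
    # mutual defection costs both players.
    table = {
        ("C", "C"): (3, 3),
        ("D", "C"): (5, 0),
        ("C", "D"): (0, 5),
        ("D", "D"): (-1, -1),
    }
    return table[(a, b)]

def tit_for_tat(opponent_history):
    """Cooperate first; thereafter mirror the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    score_a = score_b = 0
    moves_a, moves_b = [], []
    for _ in range(rounds):
        a = strategy_a(moves_b)  # each strategy sees the opponent's past moves
        b = strategy_b(moves_a)
        pa, pb = payoff(a, b)
        score_a += pa
        score_b += pb
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b
```

Running tit-for-tat against “always defect” shows the familiar result: the defector wins the first round, then both bleed points for the rest of the game, while two tit-for-tat players cooperate throughout and both prosper.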
Imagine if two people were in oppositional conversation, in public, with their audiences watching. Imagine if each person had the option to press a “bad faith” button that would identify the other, categorically, as someone with sinister motives. Well, a large number would press the button instantly and on all occasions. Remember that to some people, all conservatives are oil-heads, to others, progressives don’t want to improve America, they want to destroy it.
As I mentioned above, each time someone presses the button, to the extent that they are believed, their audience is further encouraged to think that the only reason one would have to disagree would be dishonesty. Granted, some people would alienate their audience by acting in this way; others practically make a living from it.
Daniel Mróz
Mutually Assured Destruction
Then, of course, one has the ultimate domain of game theory: nuclear deterrents. Essentially the well-worn theory goes like this: any nation launching a nuclear attack today would, often automatically, provoke a counter attack on the part of their adversary. Thus, mutually assured destruction.
Continuing the liberal draw on Jon Postel’s wisdom, one might call this the law of fragility: each nation must be totally permissive in what it accepts (technology that can fend off nuclear attacks ranges from non-existent to dubious), and, as a defensive reaction to this weakness, each nation must be totally aggressive in what it sends (launch at any credible threat). It is bizarre that, perhaps as far from the balance of Postel’s law as one can go, we find something that is oddly stable.
In the game theory of a nuclear exchange, if either side defects, both are annihilated. Compare this to the typical behaviour of hatchet-wielding journalists and social media agitators, who stand to gain in the short term by acting in bad faith, and can even gain if both sides defect. This appears to be because different people have different audiences with different takes on events. However the argument actually goes, people often uphold their champion as the winner.
But, of course, there are no free lunches, and I argue that the terminus of this path is mutually assured destruction, also. A society that lies this much will eventually degrade to the extent that collaboration across party lines becomes impossible. This is part of why Eric Weinstein is so fond of recommending that we return to above-ground nuclear testing: a true perception of the risks is an excellent way to give people reason to coordinate their action effectively.
I kept a rather game theoretical horse tied nearby for just this sort of occasion: images on the Web. Do you remember when blog posts, articles, etc. didn’t always have an image accompanying them? Do you remember when Twitter consisted primarily of text? Well, what happened? It appears that the social networks, especially Twitter and Facebook, began offering fairly functional previews when users shared URLs, together with easy support for posting images themselves.
We know, to the embarrassment of our species, that people are more likely to click on something if it has an associated image; more so if the image features a human being. Thus, the incentive is for users to post images with everything, all the time: and if you don’t post an image to accompany your article you’ll lose clicks to someone who does.
In this example, online writers were playing a game of image versus no image, and image almost always wins (in the short term). This delivered us the sickening phenomenon of the mandatory blog featured image: teenagers on phones, a man reading a newspaper, a dreamy barista. This isn’t a lost cause, but it wasted a lot of time, energy and bandwidth along the way.
Fundamentally, the system of discourse appears to be configured such that it is very hard to win against bad faith actors and, like with the image battle, you can build a pocket of decency, but it’s much harder to stop the system as a whole from denaturing. To do that, we need better systems and better norms, the game theory itself must be different.
Technological Esperanto
You have probably heard of Esperanto, a constructed language, designed to be spoken universally and, thus, to foster international communication and collaboration. Esperanto, the most successful constructed language, spoken by millions, is dwarfed by another universal language, one which operates quite differently: TCP/IP.
TCP/IP describes the suite of protocols that facilitate the Internet; they are almost universally used and accepted, despite personal and global differences of language, creed and politics. Amusingly, when people are yelling at each other on Twitter, trolling, telling each other that we have nothing to offer, that we disagree on everything, they do so within a common framework about which there is total agreement: the protocols of the Internet that govern how we exchange data functionally, including Jon Postel’s contributions and his Law, which is where we started.
Jon Postel/Wikipedia
Online, at least on the very basic level, there is almost total interoperability. For those who aren’t familiar, the Network Centric Operations Industry Consortium provides a nice primer on interoperability:
To claim interoperability, you must:
1) be able to communicate with others;
2) be able to digitize the data/information to be shared; and
3) be recognizable/registered (i.e., you must have some form of credentials verifying both senders and receivers on the network as authorized participants with specific roles).
For example, the USB standard provides for interoperability: all compliant devices fit into the hole correctly and can transmit data. Conversely, some Windows and Mac disk formats are still non-interoperable, throwing errors or even doing damage when one crosses the streams.
To the extent that interoperability is reduced, I see no instances wherein the community benefits: we benefit where there is more scope for communication; only those who want fawning, captive audiences would want people to use technology, or adopt mindsets, that cannot interact properly with others.
Another aside: if you want to build a cult, or at least make it difficult for your adherents to leave whatever organization you are building, it’s in your interest to give them an incompatible philosophy or mindset and especially to denature language, such that adherents can’t communicate properly with people using common parlance.
It’s worth noting the domain of operation of TCP/IP and the Law of Robustness, and why it works there. Its domain, of course, is computers and their communication, and it works at least partly because there are few grey areas: either the packets are well-formed or they are not. As my father would have said: “That’s one of the convenient things about computing: it either works or it doesn’t.” Things are much more difficult to judge in the domain of people and culture.
Amusingly, it seems that when left to our own devices, we find bad faith action irresistible; we have been astonishingly successful in maintaining good faith by offloading this responsibility to dependable machines. In order to build a less divisive discourse, we might want to be a little more like our devices — at least how they communicate.
Six Proposals for the Promotion of Good Faith
Reducing False Positives
1. All fora should publish lists of logical fallacies for people to avoid.
Fora, communities, social networks and particularly schools should publish lists of logical fallacies: argumentum ad hominem, post hoc ergo propter hoc, etc. It’s a travesty that school children don’t emerge knowing logic and its fallacious use: one has to admit that those not versed make better votaries and consumers.
Fundamentally, dealing with points that are logically unsound is frequently the source of friction in conversations: truly, if you use a fallacious argument, you either forgot yourself, are ignorant, or really are acting in bad faith. Usually it’s the first two, but with better norms, it seems that we could reduce them to practically nothing.
The most common fallacy, I find, and one that is perhaps less well known, is tu quoque, meaning “you, too.” It describes the act of trying to absolve one’s own side of something by identifying that the other side did it, too, or perhaps did something just as bad. This may, like a child realizing that they are not the only one in trouble, lessen the feeling of guilt, but it certainly doesn’t change the ethical implications of what one actually did.
The tu quoque, this style of argument, is supremely popular, perhaps one of the most popular rhetorical devices: yet it wastes our time and our patience with each other, during conversations that require every ounce of grace that we can afford.
2. All fora should publish lists of philosophy smells or philosophical anti-patterns.
Along with the list of fallacies, I would have people publish lists of “philosophy smells.” This, I believe, is a concept original to me.[1] A philosophy smell borrows from the code smell idea in programming: code smells don’t describe bugs or errors, necessarily; rather, they are evidence of poor practices and thinking, which are themselves likely to throw up bugs in the future. These smells, rewardingly, have the fruity names common to programmers: object orgy, feature envy, boolean blindness.
Philosophy smells are equivalent: they aren’t the same as logical fallacies, but are evidence of a style of philosophizing that is likely to produce errors and mistakes in future. My favorite, so far, is the drive-by syllogism, which is when someone submits rapid-fire premises and demands an immediate yes/no answer to the conclusion; this doesn’t mean that the logic is wrong, only that you’re much more likely to get a case of l’esprit de l’escalier. It’s not a particularly fruitful discussion if your means of winning is baffling people with speed and, in the process, missing out on their best counter-arguments. (If you are intellectually honest, you would want to hear the best they have.)
One more: astrolophizing, a blend of the styles of astrology and philosophy, exemplified when people deploy logic, and especially critiques, so generic that they can be applied wholesale to more or less anything and are, thus, not necessarily wrong but unlikely to be right. Christopher Hitchens gives a lovely example in Hitch-22:
The last time I heard an orthodox Marxist statement that was music to my ears was from a member of the Rwanda Patriotic Front, during the mass slaughter in the country. ‘The terms Hutu and Tutsi,’ he said severely, ‘are merely ideological constructs, describing different relationships to the means and mode of production.’ But of course!
I hope that establishing these two norms would eliminate many of the false positives: people who look like they’re acting in bad faith, but by accident. If philosophy smell is too weird a phrase, philosophical anti-pattern might be more palatable, to borrow from Andrew Koenig’s work in software.
Making It Easier and More Productive to Handle Bad Faith
Additionally, I have two proposals for moderating and softening accusations of bad faith, and to disincentivize people from making frivolous accusations a habit.
3. Claims of bad faith should be recorded diligently.
We should record accusations of bad faith. We could do with this feature in other domains, but we may as well start here. It is as common as dirt for commentators, the general public, etc. to make strong accusations against their fellows that essentially get black-holed or forgotten: they have an impact at the time, but eventually people forget and move on. Few people check on these claims to see if they turn out to be true or false.
The accuser gets to pack some punch at the time, and perhaps do so repeatedly, with no tally of their accuracy. To be clear, I’m not saying there’s anything wrong with being wrong — it’s a beautiful thing — but we should be wrong in public and embrace it. More apologies, more admitting that we were wrong, less forgetting.
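As a sketch of what such record-keeping might involve (the class and method names here are hypothetical), an accusation ledger need only track each accuser’s filed, upheld and refuted claims, so that a running accuracy score replaces the current amnesia:

```python
from collections import defaultdict

class AccusationLedger:
    """Sketch of this proposal: a public tally so accusers accrue a
    track record instead of their claims being forgotten."""

    def __init__(self):
        self.records = defaultdict(lambda: {"upheld": 0, "refuted": 0, "open": 0})

    def file(self, accuser):
        """Record a new, as-yet-unsettled accusation."""
        self.records[accuser]["open"] += 1

    def settle(self, accuser, upheld):
        """Move an open accusation into the upheld or refuted column."""
        self.records[accuser]["open"] -= 1
        self.records[accuser]["upheld" if upheld else "refuted"] += 1

    def accuracy(self, accuser):
        """Fraction of this accuser's settled claims that were upheld."""
        r = self.records[accuser]
        settled = r["upheld"] + r["refuted"]
        return r["upheld"] / settled if settled else 0.0
```

An accuser who fires off many claims that are later refuted would carry that record with them, visibly.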
4. All claims of bad faith should be falsifiable.
For that matter, all non-metaphysical claims should be falsifiable too, but that’s a fight for another day. The reason for this injunction is that it’s easier than spitting to make an accusation against someone: most of the time, they’re so woolly as to be impossible to prove or disprove. If the accusation is well-formed, then there ought to be something that the accused can do, or even prove that they have already done, to convince you that they are in good faith.
My editor asked that I give an example of an unfalsifiable accusation. I don’t like to make a habit of public criticisms like this, so instead I will draw on a historical example: the Hamilton-Burr duel. In 1804, Vice President Aaron Burr shot and killed fellow Founding Father Alexander Hamilton during a duel. Burr challenged Hamilton because the latter had refused to deny insulting the former during a dinner in upstate New York. Putting aside whether this sort of thing is worth a duel, how could Hamilton possibly have falsified an account of a dinner held in private? Of course, Burr did not say that Hamilton acted in bad faith, but hopefully this incident illustrates the danger of any and all unfalsifiable accusations.
Hamilton and Burr dueling/Wikipedia
Remember: claims of bad faith cannot be metaphysical. If your accusation is not falsifiable, it is inadmissible. There are, of course, grey claims, like saying that there are 5 sextillion grains of sand on our planet: falsifiable in principle, but practically impossible to check. We should avoid such grey accusations for this reason; their greyness is a potential get-out-of-jail-free card.
Finally, the means of falsifying the accusation should be part of the accusation; this is to say that if I accuse you of something, I should also state what I would accept as proof to the contrary. For example, if one accuses a politician of being an oil shill, one should state also which oil companies or lobbyists are involved, thereby making it possible to see if that person had been paid.
This gives some liberty to the accuser, but there’s no incentive to softball, and we can all tell, conversely, when an accuser demands evidence that makes acquittal impossible (which is obvious bad form). This is not perfect, but ought never to be worse and usually to be better than the norm today.
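One could imagine encoding this requirement directly into the form an accusation must take. In this hypothetical sketch, a claim that names no means of acquittal is simply inadmissible:

```python
from dataclasses import dataclass

@dataclass
class BadFaithClaim:
    """Hypothetical record type: an accusation is only admissible if it
    states what evidence would disprove it."""
    accuser: str
    accused: str
    allegation: str
    falsified_by: str  # what the accused could show to be acquitted

    def is_admissible(self):
        # Woolly accusations with no stated means of acquittal are rejected.
        return bool(self.falsified_by.strip())
```

The point is structural: the falsification criterion travels with the claim, so a forum moderator (or the public) can reject woolly accusations before they do damage.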
Incentivizing Good Faith and Disincentivizing Frivolous Accusations
5. All claims of bad faith should be reciprocal.
There should be a cost associated with making such bold claims. Just as in the legal system, wherein court fees disincentivize frivolous action, we ought to make it mean something to make accusations of this magnitude. We know from game theory studies that people will happily pay, often more than the value of the infraction, for justice.
A non-monetary system would work on reciprocation: if you accuse someone of bad faith, you ought to pair the accusation with an admission of a time that you acted in bad faith in the past. The extent to which people name only petty examples of their own bad faith will, I expect, speak to the quality of their accusation. This method ought to be advantageous not least because it demonstrates us reaching for a higher value: the game should not be one of “I’m right, you’re wrong” but of “we are all trying to be better.”
6. Good Faith Bonds
Finally, a monetary proposal: Good Faith Bonds. Here’s how it would work:
Individuals active in the public sphere would post bond with the Good Faith Bonds organization, promising to act in good faith thereafter.
If a claimant comes with a falsifiable accusation, the accused can submit evidence in their defence, apologize, or (hopefully not) do nothing.
If they apologize, they keep their bond; if they are proven wrong or ignore the accusation, they lose their bond.
If they are proven right, the accuser loses their bond.
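The resolution rules above can be sketched as a single function (the names and structure are my own, purely illustrative):

```python
def resolve_claim(accused_response, accusation_upheld=None):
    """Return which party forfeits their bond, or None if nobody does.

    accused_response: "evidence", "apologize", or "ignore".
    accusation_upheld: the outcome of weighing the evidence, when any is given.
    """
    if accused_response == "apologize":
        return None            # an apology keeps everyone's bond intact
    if accused_response == "ignore":
        return "accused"       # ignoring the claim forfeits the bond
    # evidence was submitted, so the claim gets adjudicated
    return "accused" if accusation_upheld else "accuser"
```

Note the asymmetry: apologizing is always safe, while both pressing a false claim and stonewalling a true one carry a cost, which is exactly the incentive structure the proposal is after.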
We would need to find a means of disposing of the forfeited money in a way that doesn’t generate perverse incentives: for example, it would be a very bad idea for the forfeited money to go to the accuser, this would make it impossibly tempting to abuse the system. One might say that the funds should go to charity, but that isn’t unproblematic either.
There’s obviously more to be ironed out here, but hopefully the idea is, at least, provocative. Questions around conflicts of interest and corruption within the organization are non-trivial, but new trustless modes of organization afforded by blockchain technology offer exciting possibilities.
Daniel Mróz
Finding the Best of Humanity Expressed by Computers
As I mentioned at the start, it seems as though our games of communication incentivize bad faith. Bad faith, one might say, is worse than lying, in that it breaks down the mechanisms of communication and our ability to coordinate, especially among people who are different from one another.
That said, technology, and computers in particular, can give us tools to reflect some light back onto our species. To talk in such terms, bad faith breaks interoperability, and offers the cultural and conversational equivalent of VHS vs. Betamax; building interoperability offers whatever the cultural and conversational equivalent of the Internet is: imperfect, confusing, but game-changing enough that any self-respecting tyrant would want to censor it.
Some complain at the inhumanity of computers and systems. This is usually fallacious: all such systems are created in the image of humans, and usually what people lament as inhuman isn’t so in the sense that it is unlike people (say, lacking emotion), but in the sense that it lacks empathy and ethics. If you spend any time with humans, you will learn just how fond some people are of creating inhuman systems; the only difference, in the realm of computers, is that the machines give people a place to hide.
So, Postel’s Law of Robustness, which computers find it much easier to obey than we do, is actually very human, and Postel, a man very fond of machines, a great humanist. Thus, when I recommended, above, being a little bit more like our machines, I meant only the extent to which they are faithful, diligent, and disinterested; characteristics which, combined with very human traits like public-spiritedness and a sense of community, we might call virtue. With this as our guiding principle, we might find conversation a little easier.
Notes:
I have not found reference to the philosophy smell idea anywhere, so believe it to be original. I will of course hand it to its rightful owner, if corrected.
https://medium.com/wonk-bridge/a-short- ... 66fa6f3ba7
A Short Introduction to the Mechanics of Bad Faith
Having Taught Computers Humanity, Perhaps They Can Teach Us Good Faith
Oliver Meredith Cox
Oliver Meredith Cox
Follow
Nov 23, 2020 · 19 min read
Winner of the Best Article from a New Contributor 2020 Award at Wonk Bridge’s 2020 Award Ceremony.
Degrain’s “Otelo e Desdemona”, an image and scenario epitomising the quandary of our present focus — to risk good faith, or embrace the ease of bad faith?
Bad faith is corrosive and, as with the nuclear calculus, the only way to deal with it is invoke a deterrent that is just as (or more) dangerous: the accusation of bad faith, which has the potential to negate any conversation. This appears to be the case because the game theory is off: one might say that our public conversations are like public goods (fisheries, forests, waterways) in which those high in ambition and low in virtue can win big at everyone else’s expense.
What follows is an exploration of this dynamic, and some proposals for how we might find ways to converse better with people who are different to us.
A Background to the Mechanics
A little while ago I wrote an article called “Conversational Interoperability,” which set out some ground rules designed to facilitate fruitful communication between people of different mindsets; this is in the context of how, quite frequently, people of different fields, philosophies or politics find their communication degrading for no other reason than they have different words for the same thing or different perspectives, when they really think quite similarly or at least don’t have differences that warrant getting angry.
That piece was based on Postel’s Law, or the Law of Robustness: “Be conservative in what you send, be liberal in what you accept.” John Postel, who was fundamental to the development of the Internet, expressed these words in the context of digital communications, meaning, in essence, that one should only transmit well-formed data but, if one receives malformed data that is nonetheless decipherable, one should parse it anyway.
Hopefully the reader can see how this provides for maximum communication, in theory: in the domain that you control (what you transmit) you should maximize obedience to the standard; in the domain you do not control (what others transmit) you should maximize the total information that you decipher. I extend the Law to my six rules of conversational interoperability, in three parts:
Clarity:
When speaking: be clear.
When listening: be charitable
Offense:
When speaking: try not to cause offense.
When listening: don’t take offense.
Errors:
When good-faith errors occur: be charitable.
When bad-faith errors occur: treat them like good-faith errors.
One can see how each of these pairs follows the liberal/conservative duality that Postel originally outlined in his law, except for the last one. When I first met Yuji and Max from the Wonk Bridge team, Yuji asked me about that last point. Yuji’s background is in international relations, wherein one must respond: a nation not responding to border violations and other aggressive acts is irresponsible, given that it invites further foul play. How could I, he asked, possibly recommend treating bad-faith errors just as I would treat good-faith errors?
In this piece I hope to explore the nuance of bad faith and how it interacts with human nature, while proposing a system to incentivize good faith. It consists of six policies, expressed as three progressive steps, each using the former as foundation. For more on bad faith, game theory, and these recommendations in detail, read further.
Six Policies to Encourage Good Faith
Step one: being able to call out bad faith without getting confused by mere sloppy arguing.
1. All fora should publish lists of logical fallacies for people to avoid.
2. All fora should publish lists of philosophy smells or philosophical anti-patterns. A philosophy smell is a style of thinking that is not, itself, fallacious but is flawed in a way that makes it easy for fallacies to slip through.
Step two: avoiding the escalation that can often come when accusations of bad faith are thrown around.
1. Claims of bad faith should be recorded diligently, so as to discourage people from debasing the coin.
2. All claims of bad faith should be falsifiable. Falsifiable claims are stronger, and avoid trap accusations that one can’t get out of.
Step three: building a system with the right incentive structure to encourage good faith behaviour, identify bad faith, and discourage false accusations.
1. All claims of bad faith should be reciprocal: if you make an accusation, you should own up to a past infraction of your own.
2. Good Faith Bonds: an organizational structure that aims to promote good faith. It works thus: people join the organization by posting bond, agreeing that they will act in good faith and avoid baseless accusations of bad faith thereafter. Those found acting in bad faith or making bad faith accusations forfeit their bonds. See below for more detail.
What Is Bad Faith?
What is a bad-faith actor? What is bad faith, for that matter? Bad faith is to act in ways that spoof the normal modes of interaction, such as in debate, conversation, commerce, while actually pursuing hidden, selfish motives or even hoping to disrupt the operation of the system in which they operate.
The most clear and common example is trolling: trolls will often ask what appear to be normal questions of others, when in fact the real goal is to frustrate, annoy or waste time.
More sophisticated bad-faith actors use these techniques in debates. Creationists, for example, are notorious for pretending to think that evolution is just a theory in the sense that “in theory the pub is still open,” when they know that evolution is a theory in that it is the best-evidenced model for the system it describes, having stood up to rigorous tests.
In my view, the most popular form of bad faith is the deliberate misunderstanding of phrases and wording, attributing to people views that they do not hold and clearly weren’t actually expressing. One Hitchens anecdote, if you’ll indulge me: Hitchens described his opposition to the way in which particular religions categorize women as chattel, owned; a radio talk-show host responded, “Well then are you against ownership?” Hitchens replied: “Of people: yes.”
Hopefully you can glimpse some of what we’re dealing with: bad faith, for our purposes, is when people will twist the universe around in their mind with as much effort as is necessary to avoid understanding a conflicting viewpoint to the extent that it might actually pose a challenge. All of these examples are shadows cast by the same desire: to frustrate, annoy, show off, posture in front of one’s followers, rather than to learn, exchange information and create connections.
The Trouble with Calling out Bad Faith
As such, one might ask, why don’t we just call out these bad actors, plainly and as often as possible? For example, in the case of the creationists, tell them that they have been corrected so many times on the point of the definition of “theory” that it is clear that they are not actually playing to win, but to foul?
The first reason not to do so is that telling someone that they’re in bad faith is the nuclear option: it is meta; it contains the statement that the accused is not actually interested in discourse or ideas, that they are, at bottom, a troll or, worse, a liar. This is something of a conversation stopper. Granted, it’s probably justified sometimes, but quite often people fall into sin by repeating arguments they hear elsewhere but don’t interrogate.
For example, have you ever heard someone say, “There’s no smoke without fire,” meaning that if there’s a fuss, there must be something to fuss about? Do you see how horrible it is? The phrase makes no allowance for lies or mistakes, and is cruelly wrong in that it damns the victims of false rumors. One might as well say, “There’s no map without territory.”
Meanwhile, and perhaps more importantly, there’s the problem of the proliferation of this nuclear option. Anyone can accuse anyone else of acting in bad faith, but proving that one isn’t is as hard as proving any other negative proposition. To the extent to which a given person is believed by their followers and readers, the accusation of bad faith against another threatens to cut that individual out completely (some people are more than happy to be given reason to ignore another person’s ideas).
This has also been miniaturized into the tactical bad-faith insult: cheap character critiques like “racist,” “socialist,” “anti-American.” The accusation of bad faith could be said to lie above all of these, e.g. the person is a social justice warrior and what they say can’t, therefore, be trusted.
The assumption of good faith is essentially the assumption that, despite our differences, we are all trying to seek truth. It is, metaphorically, the phones in JFK and Khrushchev’s offices. It’s a fragile line of communication, and when one side introduces bad faith as a talking point, the other may be tempted to cut the cord.
Game Theory, Mutually Assured Destruction and Technology
Game Theory
The answer to the question of how to deal with the bad actors, or at least the study of this question, starts with game theory. Game theory is the abstraction of strategic action, allowing us to simulate and theorize about the merits of particular strategies. Usually game theory comes down to simulated interactions between two or more “players” with different approaches to the “game.”
For example, Dawkins’ The Selfish Gene features games that simulate social interactions, with only two possible moves: cooperate or defect. These games have multiple rounds: during each round both players make their move, the outcome is decided, then they play the next round, and so on. In the Dawkins example, if both players cooperate, both get a payoff, if one player defects and the other cooperates, the defector gets a payoff, and the cooperator nothing. If they both defect, they both lose.
This allows the researcher to test out different strategies: “always cooperate,” “always defect,” “tit-for-tat” and so on. Tit-for-tat means giving back as you get: I’ll be nice if you’re nice, but if you hit me, I’ll hit you back on the next move.
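The game Dawkins describes can be simulated in a few lines. The payoff numbers below are illustrative assumptions of mine, chosen only to match the text: mutual cooperation pays both players, lone defection pays only the defector, and mutual defection costs both.

```python
# Illustrative payoffs (my own numbers, matching the description above):
# both cooperate -> both gain; one defects -> only the defector gains;
# both defect -> both lose.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("D", "C"): (5, 0),
    ("C", "D"): (0, 5),
    ("D", "D"): (-1, -1),
}

def always_cooperate(opponent_moves):
    return "C"

def always_defect(opponent_moves):
    return "D"

def tit_for_tat(opponent_moves):
    # Cooperate first; thereafter mirror the opponent's previous move.
    return opponent_moves[-1] if opponent_moves else "C"

def play(strategy_a, strategy_b, rounds=10):
    moves_a, moves_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(moves_b)  # each strategy sees the opponent's history
        b = strategy_b(moves_a)
        pay_a, pay_b = PAYOFFS[(a, b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b
```

Played against always-defect, tit-for-tat takes one hit and then mirrors, so over ten rounds both players end up in the red; that is the point of the exercise: nobody wins a long game of mutual defection.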
Imagine if two people were in oppositional conversation, in public, with their audiences watching. Imagine if each person had the option to press a “bad faith” button that would identify the other, categorically, as someone with sinister motives. Well, a large number would press the button instantly and on all occasions. Remember that to some people, all conservatives are oil-heads, to others, progressives don’t want to improve America, they want to destroy it.
As I mentioned above, each time someone presses the button, to the extent that they are believed, their audience is further encouraged to think that the only reason one would have to disagree would be dishonesty. Granted, some people would alienate their audience by acting in this way; others practically make a living from it.
[Illustration by Daniel Mróz]
Mutually Assured Destruction
Then, of course, one has the ultimate domain of game theory: nuclear deterrents. Essentially the well-worn theory goes like this: any nation launching a nuclear attack today would, often automatically, provoke a counter attack on the part of their adversary. Thus, mutually assured destruction.
Continuing the liberal draw on Jon Postel’s wisdom, one might call this the law of fragility: each nation must be totally permissive in what it accepts (technology that can fend off nuclear attacks ranges from non-existent to dubious) and, as a defensive reaction to this weakness, each nation must be totally aggressive in what it sends (launch at any credible threat). It is bizarre that, perhaps as far from the balance of Postel’s law as one can go, we find something that is oddly stable.
In the game theory of a nuclear exchange, if either side defects, both are annihilated. Compare this to the typical behaviour of hatchet wielding journalists and social media agitators, who stand to gain in the short term by acting in bad faith, and even can gain if both sides defect. This appears to be because different people have different audiences with different takes on events. However the argument actually goes, people often uphold their champion as the winner.
But, of course, there are no free lunches, and I argue that the terminus of this path is mutually assured destruction, also. A society that lies this much will eventually degrade to the extent that collaboration across party lines becomes impossible. This is part of why Eric Weinstein is so fond of recommending that we return to above-ground nuclear testing: a true perception of the risks is an excellent way to give people reason to coordinate their action effectively.
I kept a rather game theoretical horse tied nearby for just this sort of occasion: images on the Web. Do you remember when blog posts, articles, etc. didn’t always have an image accompanying them? Do you remember when Twitter consisted primarily of text? Well, what happened? It appears that the social networks, especially Twitter and Facebook, began offering fairly functional previews when users shared URLs, together with easy support for posting images themselves.
We know, to the embarrassment of our species, that people are more likely to click on something if it has an associated image; more so if the image features a human being. Thus, the incentive is for users to post images with everything, all the time: and if you don’t post an image to accompany your article you’ll lose clicks to someone who does.
In this example, online writers were playing a game of image versus no image, and image almost always wins (in the short term). This delivered us the sickening phenomenon of the mandatory blog featured image: teenagers on phones, a man reading a newspaper, a dreamy barista. This isn’t a lost cause, but it wasted a lot of time, energy and bandwidth along the way.
Fundamentally, the system of discourse appears to be configured such that it is very hard to win against bad-faith actors and, as with the image battle, you can build a pocket of decency, but it is much harder to stop the system as a whole from denaturing. To do that we need better systems and better norms; the game theory itself must be different.
Technological Esperanto
You have probably heard of Esperanto, a constructed language, designed to be spoken universally and, thus, to foster international communication and collaboration. Esperanto, the most successful constructed language, spoken by millions, is dwarfed by another universal language, one which operates quite differently: TCP/IP.
TCP/IP describes the suite of protocols that facilitate the Internet; they are almost universally used and accepted, despite personal and global differences of language, creed and politics. Amusingly, when people are yelling at each other on Twitter, trolling, telling each other that we have nothing to offer and that we disagree on everything, they do so within a common framework about which there is total agreement: the protocols of the Internet that govern how we exchange data, including Jon Postel’s contributions and his Law, which is where we started.
[Image: Jon Postel/Wikipedia]
Online, at least at the very basic level, there is almost total interoperability. For those who aren’t familiar, the Network Centric Operations Industry Consortium provides a nice primer on interoperability:
To claim interoperability, you must:
1) be able to communicate with others;
2) be able to digitize the data/information to be shared; and
3) be recognizable/registered (i.e., you must have some form of credentials verifying both senders and receivers on the network as authorized participants with specific roles).
For example, the USB standard provides for interoperability: all compliant devices fit into the hole correctly and can transmit data. Conversely, some Windows and Mac disk formats are still non-interoperable, throwing errors or even doing damage when one crosses the streams.
To the extent that interoperability is reduced, I see no instances wherein the community benefits: we benefit where there is more scope for communication; only those who want fawning, captive audiences would want people to use technology, or adopt mindsets, that cannot interact properly with others.
Another aside: if you want to build a cult, or at least make it difficult for your adherents to leave whatever organization you are building, it’s in your interest to give them an incompatible philosophy or mindset and especially to denature language, such that adherents can’t communicate properly with people using common parlance.
It’s worth noting the domain of operation of TCP/IP and the Law of Robustness and why it works there. Its domain, of course, is computers and their communication, and it works at least partly because there are few grey areas: either the packets are well-formed or they are not. As my father would have said: “That’s one of the convenient things about computing: it either works or it doesn’t.” Things are much more difficult to judge in the domain of people and culture.
Amusingly, it seems that when left to our own devices, we find bad faith action irresistible; we have been astonishingly successful in maintaining good faith by offloading this responsibility to dependable machines. In order to build a less divisive discourse, we might want to be a little more like our devices — at least how they communicate.
Six Proposals for the Promotion of Good Faith
Reducing False Positives
1. All fora should publish lists of logical fallacies for people to avoid.
Fora, communities, social networks and particularly schools should publish lists of logical fallacies: argumentum ad hominem, post hoc ergo propter hoc, etc. It’s a travesty that schoolchildren don’t emerge knowing logic and its fallacious use; one has to admit that those not versed make better votaries and consumers.
Fundamentally, dealing with points that are logically unsound is frequently the source of friction in conversations: truly, if you use a fallacious argument, you either forgot yourself, are ignorant, or really are acting in bad faith. Usually it’s the first two, but with better norms, it seems that we could reduce them to practically nothing.
The most common fallacy, I find, and one that is perhaps less well known, is the tu quoque, meaning “you, too.” It describes the act of trying to absolve one’s own side of something by identifying that the other side did it, too, or perhaps did something just as bad. This may, like a child realizing that they are not the only one in trouble, lessen the feeling of guilt, but it certainly doesn’t change the ethical implications of what one actually did.
The tu quoque, this style of argument, is supremely popular, perhaps one of the most popular rhetorical devices: yet it wastes our time and our patience with each other, during conversations that require every ounce of grace that we can afford.
2. All fora should publish lists of philosophy smells or philosophical anti-patterns.
Along with the list of fallacies, I would have people publish lists of “philosophy smells.” This, I believe, is a concept original to me.[1] A philosophy smell borrows from the code smell idea in programming: code smells don’t describe bugs or errors, necessarily; rather, they are evidence of poor practices and thinking, which are themselves likely to throw up bugs in the future. These smells, rewardingly, have the fruity names common to programmers: object orgy, feature envy, boolean blindness.
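For readers unfamiliar with the programming side of the metaphor, here is a minimal sketch (a contrived example of my own) of one smell named above, boolean blindness. The first function works correctly, which is exactly the point: a smell is not a bug, but a shape of code in which bugs slip through.

```python
from enum import Enum

def format_name(name, flag1, flag2):
    # Smelly but correct: which flag does what? Callers can silently
    # transpose the booleans and no error will ever be raised.
    if flag1:
        name = name.upper()
    if flag2:
        name = name + "!"
    return name

class Emphasis(Enum):
    PLAIN = "plain"
    SHOUT = "shout"

def format_name_clear(name, emphasis):
    # Refactored: the meaning travels with the value, not with its
    # position in the argument list.
    return name.upper() + "!" if emphasis is Emphasis.SHOUT else name
```

Both versions behave identically today; the difference is in which one invites a future mistake.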
Philosophy smells are equivalent: they aren’t the same as logical fallacies, but are evidence of a style of philosophizing that is likely to produce errors and mistakes in future. My favorite, so far, is drive-by syllogism, which is when someone submits rapid-fire premises and demands an immediate yes/no answer to the conclusion; this doesn’t mean that the logic is wrong, only that you’re much more likely to get a case of l’esprit de l’escalier. It’s not a particularly fruitful discussion if your means of winning is baffling people with speed and, in the process, missing out on their best counter-arguments. (If you are intellectually honest, you would want to hear the best they have.)
One more: astrolophizing, a blend of the style of astrology and philosophy, exemplified when people deploy logic, and especially critiques, so generic that they can be applied wholesale to more or less anything and are, thus, not necessarily wrong but are unlikely to be right. Christopher Hitchens gives a lovely example in Hitch-22:
The last time I heard an orthodox Marxist statement that was music to my ears was from a member of the Rwanda Patriotic Front, during the mass slaughter in the country. ‘The terms Hutu and Tutsi,’ he said severely, ‘are merely ideological constructs, describing different relationships to the means and mode of production.’ But of course!
I hope that establishing these two norms would eliminate many of the false positives: people who look like they’re acting in bad faith, but by accident. If philosophy smell is too weird a phrase, philosophical anti-pattern might be more palatable, to borrow from Andrew Koenig’s work in software.
Making It Easier and More Productive to Handle Bad Faith
Additionally, I have two proposals for moderating and softening accusations of bad faith, and to disincentivize people from making frivolous accusations a habit.
3. Claims of bad faith should be recorded diligently.
We should record accusations of bad faith. We could do with this feature in other domains, but we may as well start here. It is as common as dirt for commentators, the general public and others to make strong accusations against their fellows that essentially get black-holed or forgotten: they have an impact at the time, but eventually people forget and move on. Few people check on these claims to see whether they turn out to be true or false.
The accuser gets to pack some punch at the time, and perhaps do so repeatedly, with no tally of their accuracy. To be clear, I’m not saying there’s anything wrong with being wrong — it’s a beautiful thing — but we should be wrong in public and embrace it. More apologies, more admitting that we were wrong, less forgetting.
4. All claims of bad faith should be falsifiable.
For that matter, all non-metaphysical claims should be falsifiable too, but that’s a fight for another day. The reason for this injunction is that it’s easier than spitting to make an accusation against someone: most of the time, accusations are so woolly as to be impossible to prove or disprove. If the accusation is well-formed, then there ought to be something that the accused can do, or even prove that they have already done, to convince you that they are in good faith.
My editor asked that I give an example of an unfalsifiable accusation. I don’t like to make a habit of public criticisms like this, so instead I will draw on a historical example: the Hamilton-Burr duel. In 1804, Vice President Aaron Burr shot and killed fellow Founding Father Alexander Hamilton during a duel. Burr challenged Hamilton because the latter had refused to deny insulting the former during a dinner in upstate New York. Putting aside whether this sort of thing is worth a duel, how could Hamilton possibly have falsified an account of a dinner held in private? Of course, Burr did not say that Hamilton acted in bad faith, but hopefully this incident illustrates the danger of any and all unfalsifiable accusations.
[Image: Hamilton and Burr dueling/Wikipedia]
Remember: claims of bad faith cannot be metaphysical. If your accusation is not falsifiable, it is non-admissible. There are of course grey claims, like saying that there are 5 sextillion grains of sand on our planet: this is falsifiable, but practically impossible to prove. We should avoid such grey accusations for this reason: their greyness is a potential get-out-of-jail-free card.
Finally, the means of falsifying the accusation should be part of the accusation; this is to say that if I accuse you of something, I should also state what I would accept as proof to the contrary. For example, if one accuses a politician of being an oil shill, one should state also which oil companies or lobbyists are involved, thereby making it possible to see if that person had been paid.
This gives some liberty to the accuser, but there’s no incentive to softball, and we can all tell, conversely, when an accuser demands evidence that makes acquittal impossible (which is obvious bad form). This is not perfect, but ought never to be worse and usually to be better than the norm today.
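To make the rule concrete, one might imagine an accusation as a record that is simply inadmissible unless it states its own means of falsification. This is a sketch of mine, not any real system; all the names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Accusation:
    accuser: str
    accused: str
    claim: str
    means_of_falsification: str  # what the accused could show to be acquitted

    def __post_init__(self):
        # Proposal 4: no stated means of falsification, no admissibility.
        if not self.means_of_falsification.strip():
            raise ValueError("inadmissible: no means of falsification stated")
```

An accusation like “this politician is an oil shill” would only be admissible paired with something like “show the absence of payments from the named lobbyists”; an accusation with the falsification field left blank never enters the record at all.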
Incentivizing Good Faith and Disincentivizing Frivolous Accusations
5. All claims of bad faith should be reciprocal.
There should be a cost associated with making such bold claims. Just as in the legal system, wherein court fees disincentivize frivolous action, we ought to make it mean something to make accusations of this magnitude. We know from game theory studies that people will happily pay, often more than the value of the infraction, for justice.
A non-monetary system would work on reciprocation: if you accuse someone of bad faith, you ought to pair the accusation with an admission of a time that you acted in bad faith in the past. The extent to which people name only petty examples of their own bad faith will, I expect, speak to the quality of their accusation. This method ought to be advantageous not least because it demonstrates us reaching for a higher value: the game should not be one of I’m right and you’re wrong, but of we are all trying to be better.
6. Good Faith Bonds
Finally, a monetary proposal: Good Faith Bonds. Here’s how it would work:
Individuals active in the public sphere would post bond with the Good Faith Bonds organization, promising to act in good faith thereafter.
If a claimant comes with a falsifiable accusation, the accused can either submit evidence in their defence, apologize or (hopefully not) do nothing.
If they apologize, they keep their bond; if they are proven wrong or ignore the accusation, they lose their bond.
If they are proven right, the accuser loses their bond.
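The resolution rules above fit in a few lines. This sketch (names and structure my own, not a real system) makes the incentive structure explicit by returning who, if anyone, forfeits their bond:

```python
def resolve(accusation_is_falsifiable, accused_response, defence_holds=False):
    """Return 'accused', 'accuser', or None (nobody forfeits).

    accused_response is one of 'defend', 'apologize', 'ignore';
    defence_holds means the accused's evidence disproves the accusation.
    """
    if not accusation_is_falsifiable:
        return None  # inadmissible: an unfalsifiable claim moves no bonds
    if accused_response == "apologize":
        return None  # an apology keeps the bond
    if accused_response == "ignore":
        return "accused"  # ignoring a valid accusation forfeits the bond
    # The accused defends: the bond follows the evidence.
    return "accuser" if defence_holds else "accused"
```

Note how the rules cut both ways: the accused pays for stonewalling, but the accuser pays for a disproven claim, which is what disincentivizes frivolous accusations.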
We would need to find a means of disposing of the forfeited money in a way that doesn’t generate perverse incentives: for example, it would be a very bad idea for the forfeited money to go to the accuser, this would make it impossibly tempting to abuse the system. One might say that the funds should go to charity, but that isn’t unproblematic either.
There’s obviously more to be ironed out here, but hopefully the idea is, at least, provocative. Questions around conflicts of interest and corruption within the organization are non-trivial, but new trustless modes of organization afforded by blockchain technology offer exciting possibilities.
[Illustration by Daniel Mróz]
Finding the Best of Humanity Expressed by Computers
As I mentioned at the start, it seems as though our games of communication incentivize bad faith. Bad faith, one might say, is worse than lying, in that it breaks down the mechanisms of communication and our ability to coordinate, especially among people who are different from one another.
That said, technology, and computers in particular, can give us tools to reflect some light back onto our species. To talk in such terms, bad faith breaks interoperability, offering the cultural and conversational equivalent of VHS vs. Betamax; building interoperability offers whatever the cultural and conversational equivalent of the Internet is: imperfect, confusing, but game-changing enough that any self-respecting tyrant would want to censor it.
Some complain about the inhumanity of computers and systems. This is usually fallacious: all such systems are created in the image of humans, and usually what people lament as inhuman isn’t so in the sense that it is unlike people (say, lacking emotion) but in the sense that it lacks empathy and ethics. If you spend any time with humans, you will learn just how fond some people are of creating inhuman systems; the only difference, in the realm of computers, is that the machines give people a place to hide.
So, Postel’s Law of Robustness, which computers find much easier to obey than we do, is actually very human, and Postel, a man very fond of machines, was a great humanist. Thus, when I recommended, above, being a little bit more like our machines, I meant only the extent to which they are faithful, diligent and disinterested; characteristics which, combined with very human traits like public-spiritedness and a sense of community, we might call virtue. With this as our guiding principle, we might find conversation a little easier.
Notes:
I have not found reference to the philosophy smell idea anywhere, so believe it to be original. I will of course hand it to its rightful owner, if corrected.