A discussion on discourse
Wasting too much time not doing my work this am, but pulled this quick and it may indict me in some ways but thought it was an interesting blog type piece.
https://medium.com/wonk-bridge/a-short- ... 66fa6f3ba7
A Short Introduction to the Mechanics of Bad Faith
Having Taught Computers Humanity, Perhaps They Can Teach Us Good Faith
Oliver Meredith Cox
Nov 23, 2020 · 19 min read
Winner of the Best Article from a New Contributor 2020 Award at Wonk Bridge’s 2020 Award Ceremony.
Degrain’s “Otelo e Desdemona”, an image and scenario epitomising the quandary of our present focus — to risk good faith, or embrace the ease of bad faith?
Bad faith is corrosive and, as with the nuclear calculus, the only way to deal with it is to invoke a deterrent that is just as (or more) dangerous: the accusation of bad faith, which has the potential to negate any conversation. This appears to be the case because the game theory is off: one might say that our public conversations are like common-pool resources (fisheries, forests, waterways) in which those high in ambition and low in virtue can win big at everyone else’s expense.
What follows is an exploration of this dynamic, and some proposals for how we might find ways to converse better with people who are different to us.
A Background to the Mechanics
A little while ago I wrote an article called “Conversational Interoperability,” which set out some ground rules designed to facilitate fruitful communication between people of different mindsets. The context is that, quite frequently, people of different fields, philosophies or politics find their communication degrading for no other reason than that they have different words for the same thing, or different perspectives, when they really think quite similarly, or at least don’t have differences that warrant getting angry.
That piece was based on Postel’s Law, or the Law of Robustness: “Be conservative in what you send, be liberal in what you accept.” Jon Postel, who was fundamental to the development of the Internet, expressed these words in the context of digital communications, meaning, in essence, that one should only transmit well-formed data but, if one receives malformed data that is nonetheless decipherable, one should parse it anyway.
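Postel’s principle is easiest to see in code. The following sketch is mine, not Postel’s, and the date formats are purely illustrative assumptions: the parser is liberal about the spellings it accepts, while the emitter only ever sends the one strict form.

```python
from datetime import datetime

# The one strict format we *send* (conservative output).
STRICT = "%Y-%m-%d"

# The formats we are willing to *accept* (liberal input),
# so long as the data is still decipherable.
ACCEPTED = ["%Y-%m-%d", "%d/%m/%Y", "%Y.%m.%d", "%d %b %Y"]

def parse_date(text: str) -> datetime:
    """Liberal in what we accept: try each known spelling."""
    for fmt in ACCEPTED:
        try:
            return datetime.strptime(text.strip(), fmt)
        except ValueError:
            continue
    raise ValueError(f"undecipherable date: {text!r}")

def emit_date(dt: datetime) -> str:
    """Conservative in what we send: only the strict form."""
    return dt.strftime(STRICT)

# Malformed-but-decipherable input is parsed anyway...
d = parse_date(" 23/11/2020 ")
# ...but we only ever transmit the well-formed version.
print(emit_date(d))  # 2020-11-23
```

The asymmetry is the whole point: strictness in the domain we control, generosity in the domain we don’t.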
Hopefully the reader can see how this provides for maximum communication, in theory: in the domain that you control (what you transmit) you should maximize obedience to the standard; in the domain you do not control (what others transmit) you should maximize the total information that you decipher. I extend the Law to my six rules of conversational interoperability, in three parts:
Clarity:
When speaking: be clear.
When listening: be charitable.
Offense:
When speaking: try not to cause offense.
When listening: don’t take offense.
Errors:
When good-faith errors occur: be charitable.
When bad-faith errors occur: treat them like good-faith errors.
One can see how each of these pairs follows the liberal/conservative duality that Postel originally outlined in his law, except for the last one. Upon first meeting with Yuji and Max from the Wonk Bridge team, Yuji asked me about the last point. Yuji’s background is in international relations, wherein one must respond: a nation not responding to border violations and other aggressive acts is irresponsible, given that it invites further foul play. How could I, he asked, possibly recommend treating bad-faith errors just as I would treat good-faith errors?
In this piece I hope to explore the nuance of bad faith and how it interacts with human nature, while proposing a system to incentivize good faith. It consists of six policies, expressed as three progressive steps, each using the former as foundation. For more on bad faith, game theory, and these recommendations in detail, read further.
Six Policies to Encourage Good Faith
Step one: being able to call out bad faith without getting confused by mere sloppy arguing.
1. All fora should publish lists of logical fallacies for people to avoid.
2. All fora should publish lists of philosophy smells or philosophical anti-patterns. A philosophy smell is a style of thinking that is not, itself, fallacious but is flawed in a way that makes it easy for fallacies to slip through.
Step two: avoiding the escalation that can often come when accusations of bad faith are thrown around.
1. Claims of bad faith should be recorded diligently, so as to discourage people from debasing the coin.
2. All claims of bad faith should be falsifiable. Falsifiable claims are stronger, and avoid trap accusations that one can’t get out of.
Step three: building a system with the right incentive structure to encourage good faith behaviour, identify bad faith, and discourage false accusations.
1. All claims of bad faith should be reciprocal: if you make an accusation, you should own up to a past infraction of your own.
2. Good Faith Bonds: an organizational structure that aims to promote good faith. It works thus: people join the organization by posting bond, agreeing that they will act in good faith and avoid baseless accusations of bad faith thereafter. Those found acting in bad faith or making bad faith accusations forfeit their bonds. See below for more detail.
What Is Bad Faith?
What is a bad-faith actor? What is bad faith, for that matter? Bad faith is acting in ways that spoof the normal modes of interaction (debate, conversation, commerce) while actually pursuing hidden, selfish motives, or even hoping to disrupt the operation of the system in which one operates.
The clearest and most common example is trolling: trolls will often ask what appear to be normal questions, when in fact the real goal is to frustrate, annoy or waste time.
More sophisticated bad faithers use these techniques in debates. Creationists, for example, are notorious for pretending to think that Evolution is just a theory in the sense that “in theory the pub is still open” when they know that Evolution is a theory in that it is the best-evidenced model for the system it describes, having stood up to rigorous tests.
In my view, the most popular form of bad faith is the deliberate misunderstanding of phrases and wording, attributing to people views that they do not hold and clearly weren’t actually expressing. One Hitchens quote, if you’ll indulge me: Hitchens describes his opposition to the way in which particular religions categorize women as chattel, owned; the radio talk-show host responded by saying, “Well then are you against ownership?” Hitchens responded to say: “Of people: yes.”
Hopefully you can glimpse some of what we’re dealing with: bad faith, for our purposes, is when people will twist the universe around in their mind with as much effort as is necessary to avoid understanding a conflicting viewpoint to the extent that it might actually pose a challenge. All of these examples are shadows cast by the same desire: to frustrate, annoy, show off, posture in front of one’s followers, rather than to learn, exchange information and create connections.
The Trouble with Calling out Bad Faith
As such, one might ask, why don’t we just call out these bad actors, plainly and as often as possible? For example, in the case of the creationists, tell them that they have been corrected so many times on the point of the definition of “theory” that it is clear that they are not actually playing to win, but to foul?
The first reason not to do so is that telling someone they are arguing in bad faith is the nuclear option. It is meta: it asserts that the accused is not actually interested in discourse or ideas, that they are, at bottom, a troll or, worse, a liar. This is something of a conversation stopper. Granted, it’s probably justified sometimes, but quite often people fall into sin merely by repeating arguments they have heard elsewhere but never interrogated.
For example, have you ever heard someone say: “There’s no smoke without fire,” meaning that if there’s a fuss, there must be something to fuss about. Do you see how horrible it is? The phrase makes no allowance for lies or mistakes, and is cruelly wrong in that it damns the victims of false rumors. One might as well say, “There’s no map without territory.”
Meanwhile, and perhaps more importantly, there’s the problem of the proliferation of this nuclear option. Anyone can accuse anyone else of acting in bad faith, but proving that one isn’t is as hard as proving any other negative proposition. To the extent to which a given person is believed by their followers and readers, the accusation of bad faith against another threatens to cut that individual out completely (some people are more than happy to be given reason to ignore another person’s ideas).
This has also been miniaturized into the tactical bad faith insult: cheap character critiques like racist, socialist, anti-American. The accusation of bad faith could be said to lie above all of these, e.g. the person is a social justice warrior and what they say can’t, therefore, be trusted.
The assumption of good faith is essentially the assumption that, despite our differences, we are all trying to seek truth. It is, metaphorically, the phones in JFK and Khrushchev’s offices. It’s a fragile line of communication, and when one side introduces bad faith as a talking point, the other may be tempted to cut the cord.
Game Theory, Mutually Assured Destruction and Technology
Game Theory
The answer to the question of how to deal with the bad actors, or at least the study of this question, starts with game theory. Game theory is the abstraction of strategic action, allowing us to simulate and theorize about the merits of particular strategies. Usually game theory comes down to simulated interactions between two or more “players” with different approaches to the “game.”
For example, Dawkins’ The Selfish Gene features games that simulate social interactions, with only two possible moves: cooperate or defect. These games have multiple rounds: during each round both players make their move, the outcome is decided, then they play the next round, and so on. In the Dawkins example, if both players cooperate, both get a payoff, if one player defects and the other cooperates, the defector gets a payoff, and the cooperator nothing. If they both defect, they both lose.
This allows the researcher to test out different strategies: “always cooperate,” “always defect,” “tit-for-tat” and so on. Tit-for-tat means giving back what you get: I’ll be nice if you’re nice, but if you hit me, I’ll hit you back on the next move.
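A minimal simulation of this cooperate/defect game might look as follows. The payoff numbers are my illustrative assumptions, matching the description above in spirit (mutual cooperation pays both, lone defection pays only the defector, mutual defection costs both); they are not Dawkins’s own figures.

```python
def always_cooperate(history_self, history_opp):
    return "C"

def always_defect(history_self, history_opp):
    return "D"

def tit_for_tat(history_self, history_opp):
    # Cooperate first, then mirror the opponent's last move.
    return history_opp[-1] if history_opp else "C"

# (my move, their move) -> (my payoff, their payoff)
PAYOFF = {("C", "C"): (3, 3), ("D", "C"): (5, 0),
          ("C", "D"): (0, 5), ("D", "D"): (-1, -1)}

def play(strategy_a, strategy_b, rounds=10):
    ha, hb, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(ha, hb)
        move_b = strategy_b(hb, ha)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        ha.append(move_a)
        hb.append(move_b)
    return score_a, score_b

# A pure cooperator is fleeced by a pure defector...
print(play(always_cooperate, always_defect))  # (0, 50)
# ...while tit-for-tat loses only the first round, then hits back.
print(play(tit_for_tat, always_defect))       # (-9, -4)
```

Note how tit-for-tat limits the damage: it concedes one round to learn whom it is dealing with, then refuses to keep paying.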
Imagine if two people were in oppositional conversation, in public, with their audiences watching. Imagine if each person had the option to press a “bad faith” button that would identify the other, categorically, as someone with sinister motives. Well, a large number would press the button instantly and on all occasions. Remember that to some people, all conservatives are oil-heads, to others, progressives don’t want to improve America, they want to destroy it.
As I mentioned above, each time someone presses the button, to the extent that they are believed, their audience is further encouraged to think that the only reason one would have to disagree would be dishonesty. Granted, some people would alienate their audience by acting in this way; others practically make a living from it.
[Illustration: Daniel Mróz]
Mutually Assured Destruction
Then, of course, one has the ultimate domain of game theory: nuclear deterrents. Essentially the well-worn theory goes like this: any nation launching a nuclear attack today would, often automatically, provoke a counter attack on the part of their adversary. Thus, mutually assured destruction.
Continuing to draw liberally on Jon Postel’s wisdom, one might call this the law of fragility: each nation must be totally permissive in what it accepts (technology that can fend off nuclear attacks ranges from non-existent to dubious) and, as a defensive reaction to this weakness, each nation must be totally aggressive in what it sends (launch at any credible threat). It is bizarre that, perhaps as far from the balance of Postel’s law as one can go, we find something that is oddly stable.
In the game theory of a nuclear exchange, if either side defects, both are annihilated. Compare this to the typical behaviour of hatchet-wielding journalists and social media agitators, who stand to gain in the short term by acting in bad faith, and can even gain if both sides defect. This appears to be because different people have different audiences with different takes on events. However the argument actually goes, people often uphold their champion as the winner.
But, of course, there are no free lunches, and I argue that the terminus of this path is mutually assured destruction, also. A society that lies this much will eventually degrade to the extent that collaboration across party lines becomes impossible. This is part of why Eric Weinstein is so fond of recommending that we return to above-ground nuclear testing: a true perception of the risks is an excellent way to give people reason to coordinate their action effectively.
I kept a rather game theoretical horse tied nearby for just this sort of occasion: images on the Web. Do you remember when blog posts, articles, etc. didn’t always have an image accompanying them? Do you remember when Twitter consisted primarily of text? Well, what happened? It appears that the social networks, especially Twitter and Facebook, began offering fairly functional previews when users shared URLs, together with easy support for posting images themselves.
We know, to the embarrassment of our species, that people are more likely to click on something if it has an associated image; more so if the image features a human being. Thus, the incentive is for users to post images with everything, all the time: and if you don’t post an image to accompany your article you’ll lose clicks to someone who does.
In this example, online writers were playing a game of image versus no image, and image almost always wins (in the short term). This delivered us the sickening phenomenon of the mandatory blog featured image: teenagers on phones, a man reading a newspaper, a dreamy barista. This isn’t a lost cause, but it wasted a lot of time, energy and bandwidth along the way.
Fundamentally, the system of discourse appears to be configured such that it is very hard to win against bad faith actors and, like with the image battle, you can build a pocket of decency, but it’s much harder to stop the system as a whole from denaturing. To do that, we need better systems and better norms, the game theory itself must be different.
Technological Esperanto
You have probably heard of Esperanto, a constructed language, designed to be spoken universally and, thus, to foster international communication and collaboration. Esperanto, the most successful constructed language, spoken by millions, is dwarfed by another universal language, one which operates quite differently: TCP/IP.
TCP/IP describes the suite of protocols that facilitate the Internet; they are almost universally used and accepted, despite personal and global differences of language, creed and politics. Amusingly, when people are yelling at each other on Twitter, trolling, telling each other that we have nothing to offer, that we disagree on everything, they do so within a common framework about which there is total agreement: the protocols of the Internet that govern how we exchange data functionally, including Jon Postel’s contributions and his Law, which is where we started.
[Image: Jon Postel (Wikipedia)]
Online, at least on the very basic level, there is almost total interoperability. For those who aren’t familiar, the Network Centric Operations Industry Consortium provides a nice primer on interoperability:
To claim interoperability, you must:
1) be able to communicate with others;
2) be able to digitize the data/information to be shared; and
3) be recognizable/registered (i.e., you must have some form of credentials verifying both senders and receivers on the network as authorized participants with specific roles).
For example, the USB standard provides for interoperability: all compliant devices fit the socket correctly and can transmit data. Conversely, some Windows and Mac disk formats are still non-interoperable, throwing errors or even doing damage when one crosses the streams.
To the extent that interoperability is reduced, I see no instances wherein the community benefits: we benefit where there is more scope for communication. Only those who want fawning, captive audiences would want people to use technology, or adopt mindsets, that cannot interact properly with others.
Another aside: if you want to build a cult, or at least make it difficult for your adherents to leave whatever organization you are building, it’s in your interest to give them an incompatible philosophy or mindset and especially to denature language, such that adherents can’t communicate properly with people using common parlance.
It’s worth noting the domain of operation of TCP/IP and the Law of Robustness and why it works there. Its domain, of course, is computers and their communication, and it works at least partly because there are few grey areas: either the packets are well-formed or they are not. As my father would have said: “That’s one of the convenient things about computing: it either works or it doesn’t.” Things are much more difficult to judge in the domain of people and culture.
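That binary quality is easy to demonstrate. Here is a toy frame format of my own invention (one length byte, a payload, one checksum byte; not any real protocol): the well-formedness check returns a crisp yes or no, with no room for interpretation.

```python
def packet_is_well_formed(packet: bytes) -> bool:
    """A toy frame: 1 length byte, payload, 1 checksum byte.
    Machines get a crisp yes/no; there is no grey area."""
    if len(packet) < 2:
        return False
    length, payload, checksum = packet[0], packet[1:-1], packet[-1]
    return len(payload) == length and checksum == sum(payload) % 256

# A valid frame, and the same frame with one corrupted checksum bit.
good = bytes([3]) + b"abc" + bytes([sum(b"abc") % 256])
bad = good[:-1] + bytes([(good[-1] + 1) % 256])
print(packet_is_well_formed(good))  # True
print(packet_is_well_formed(bad))   # False
```

No human argument ever resolves this cleanly; that asymmetry is exactly the author’s point about people and culture.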
Amusingly, it seems that when left to our own devices, we find bad faith action irresistible; we have been astonishingly successful in maintaining good faith by offloading this responsibility to dependable machines. In order to build a less divisive discourse, we might want to be a little more like our devices — at least how they communicate.
Six Proposals for the Promotion of Good Faith
Reducing False Positives
1. All fora should publish lists of logical fallacies for people to avoid.
Fora, communities, social networks and particularly schools should publish lists of logical fallacies: argumentum ad hominem, post hoc ergo propter hoc, etc. It’s a travesty that school children don’t emerge knowing logic and its fallacious use: one has to admit that those not versed make better votaries and consumers.
Fundamentally, dealing with points that are logically unsound is frequently the source of friction in conversations: truly, if you use a fallacious argument, you either forgot yourself, are ignorant, or really are acting in bad faith. Usually it’s the first two, but with better norms, it seems that we could reduce them to practically nothing.
The most common fallacy, I find, and one that is perhaps less well known, is the tu quoque, meaning “you also.” It describes the act of trying to absolve one’s own side of something by pointing out that the other side did it too, or perhaps did something just as bad. This may, like a child realizing that they are not the only one in trouble, lessen the feeling of guilt, but it certainly doesn’t change the ethical implications of what one actually did.
The tu quoque, this style of argument, is supremely popular, perhaps one of the most popular rhetorical devices: yet it wastes our time and our patience with each other, during conversations that require every ounce of grace that we can afford.
2. All fora should publish lists of philosophy smells or philosophical anti-patterns.
Along with the list of fallacies, I would have people publish lists of “philosophy smells.” This, I believe, is a concept original to me.[1] A philosophy smell borrows from the code smell idea in programming: code smells don’t describe bugs or errors, necessarily; rather, they are evidence of poor practices and thinking, which are themselves likely to throw up bugs in the future. These smells, rewardingly, have the fruity names common to programmers: object orgy, feature envy, boolean blindness.
Philosophy smells are equivalent: they aren’t the same as logical fallacies, but are evidence of a style of philosophizing that is likely to produce errors and mistakes in future. My favorite, so far, is the drive-by syllogism, which is when someone submits rapid-fire premises and demands an immediate yes/no answer to the conclusion; this doesn’t mean that the logic is wrong, only that you’re much more likely to get a case of l’esprit de l’escalier. It’s not a particularly fruitful discussion if your means of winning is baffling people with speed and, in the process, missing out on their best counter-arguments. (If you are intellectually honest, you would want to hear the best they have.)
One more: astrolophizing, which is a blend of the style of astrology and philosophy, exemplified when people deploy logic and especially critiques that are so generic that they can be applied wholesale to more or less anything and are, thus, not necessarily wrong but are unlikely to be right. Christopher Hitchens gives a lovely example in Hitch 22:
The last time I heard an orthodox Marxist statement that was music to my ears was from a member of the Rwanda Patriotic Front, during the mass slaughter in the country. ‘The terms Hutu and Tutsi,’ he said severely, ‘are merely ideological constructs, describing different relationships to the means and mode of production.’ But of course!
I hope that establishing these two norms would eliminate many of the false positives: people who look like they’re acting in bad faith, but by accident. If philosophy smell is too weird a phrase, philosophical anti-pattern might be more palatable, to borrow from Andrew Koenig’s work in software.
Making It Easier and More Productive to Handle Bad Faith
Additionally, I have two proposals for moderating and softening accusations of bad faith, and to disincentivize people from making frivolous accusations a habit.
3. Claims of bad faith should be recorded diligently.
We should record accusations of bad faith. We could do with this feature in other domains, but we may as well start here. It is as common as dirt for commentators, the general public, etc. to make strong accusations against their fellows that essentially get black-holed or forgotten: the accusation had an impact at the time, but eventually people forget and move on. Few people check on these claims to see if they turn out to be true or false.
The accuser gets to pack some punch at the time, and perhaps do so repeatedly, with no tally of their accuracy. To be clear, I’m not saying there’s anything wrong with being wrong — it’s a beautiful thing — but we should be wrong in public and embrace it. More apologies, more admitting that we were wrong, less forgetting.
4. All claims of bad faith should be falsifiable.
For that matter, all non-metaphysical claims should be falsifiable too, but that’s a fight for another day. The reason for this injunction is that it’s easier than spitting to make an accusation against someone: most of the time, they’re so woolly as to be impossible to prove or disprove. If the accusation is well-formed, then there ought to be something that the accused can do, or even prove that they have already done, to convince you that they are in good faith.
My editor asked that I give an example of an unfalsifiable accusation. I don’t like to make a habit of public criticisms like this, so instead I will draw on a historical example: the Hamilton-Burr duel. In 1804, Vice President Aaron Burr shot and killed fellow Founding Father Alexander Hamilton in a duel. Burr challenged Hamilton because the latter had refused to deny insulting the former during a dinner in upstate New York. Putting aside whether this sort of thing is worth a duel, how could Hamilton possibly have falsified an account of a dinner held in private? Of course, Burr did not say that Hamilton acted in bad faith, but hopefully this incident illustrates the danger of any and all unfalsifiable accusations.
[Image: Hamilton and Burr dueling (Wikipedia)]
Remember: claims of bad faith cannot be metaphysical. If your accusation is not falsifiable, it is non-admissible. There are of course grey claims, like saying that there are 5 sextillion grains of sand on our planet: this is falsifiable, but impossible to prove. We should avoid such grey accusations for this reason, their greyness is a potential get-out-of-jail-free card.
Finally, the means of falsifying the accusation should be part of the accusation; this is to say that if I accuse you of something, I should also state what I would accept as proof to the contrary. For example, if one accuses a politician of being an oil shill, one should state also which oil companies or lobbyists are involved, thereby making it possible to see if that person had been paid.
This gives some liberty to the accuser, but there’s no incentive to softball, and we can all tell, conversely, when an accuser demands evidence that makes acquittal impossible (which is obvious bad form). This is not perfect, but ought never to be worse and usually to be better than the norm today.
Incentivizing Good Faith and Disincentivizing Frivolous Accusations
5. All claims of bad faith should be reciprocal.
There should be a cost associated with making such bold claims. Just as in the legal system, wherein court fees disincentivize frivolous action, we ought to make it mean something to make accusations of this magnitude. We know from game theory studies that people will happily pay, often more than the value of the infraction, for justice.
A non-monetary system would work on reciprocation: if you accuse someone of bad faith, you ought to pair the accusation with an admission of a time that you acted in bad faith in the past. The extent to which people name only petty examples of their own bad faith, I expect, will speak for the quality of their accusation. This method ought to be advantageous not least because it demonstrates us reaching for a higher value: the game should not be one of I’m right you’re wrong but of we are all trying to be better.
6. Good Faith Bonds
Finally, a monetary proposal: Good Faith Bonds. Here’s how it would work:
Individuals active in the public sphere would post bond with the Good Faith Bonds organization, promising to act in good faith thereafter.
If a claimant comes with a falsifiable accusation, the accused can either submit evidence in their defence, apologize or (hopefully not) do nothing.
If they apologize, they keep their bond; if they are proven wrong or ignore the accusation, they lose their bond.
If they are proven right, the accuser loses their bond.
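As a sketch, the mechanics above could be expressed as a small ledger. Everything here is an assumption for illustration (the amounts, the outcome labels, pooling forfeits rather than paying them out), not a specification:

```python
from dataclasses import dataclass, field

@dataclass
class GoodFaithRegistry:
    """A minimal sketch of the Good Faith Bonds mechanics."""
    bonds: dict = field(default_factory=dict)  # member -> bond held
    forfeited: int = 0  # pooled, deliberately NOT paid to accusers

    def join(self, member: str, amount: int) -> None:
        self.bonds[member] = amount

    def resolve(self, accuser: str, accused: str, outcome: str) -> None:
        # Outcome of a *falsifiable* claim:
        #   "apologized"  -> accused keeps their bond
        #   "proven_bad"  -> accused forfeits (proven wrong)
        #   "ignored"     -> accused forfeits (no response)
        #   "proven_good" -> accuser forfeits (accusation failed)
        if outcome == "apologized":
            return
        loser = accused if outcome in ("proven_bad", "ignored") else accuser
        self.forfeited += self.bonds.pop(loser, 0)

reg = GoodFaithRegistry()
reg.join("alice", 100)
reg.join("bob", 100)
# Alice accuses Bob; Bob submits evidence and is proven right,
# so Alice loses her bond to the pool.
reg.resolve(accuser="alice", accused="bob", outcome="proven_good")
print(reg.bonds)      # {'bob': 100}
print(reg.forfeited)  # 100
```

Routing forfeits to a neutral pool, rather than to the winning party, reflects the concern below about perverse incentives.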
We would need to find a means of disposing of the forfeited money in a way that doesn’t generate perverse incentives: for example, it would be a very bad idea for the forfeited money to go to the accuser, this would make it impossibly tempting to abuse the system. One might say that the funds should go to charity, but that isn’t unproblematic either.
There’s obviously more to be ironed out here, but hopefully the idea is, at least, provocative. Questions around conflicts of interest and corruption within the organization are non-trivial, but new trustless modes of organization afforded by blockchain technology offer exciting possibilities.
[Illustration: Daniel Mróz]
Finding the Best of Humanity Expressed by Computers
As I mentioned at the start, it seems as though our games of communication incentivize bad faith. Bad faith, one might say, is worse than lying, in that it breaks down the mechanisms of communication and our ability to coordinate, especially among people who are different from one another.
That said, technology, and computers in particular, can give us tools to reflect some light back onto our species. To talk in such terms, bad faith breaks interoperability, and offers the cultural and conversational equivalent of VHS vs. Betamax; building interoperability offers whatever the cultural and conversational equivalent of the Internet is: imperfect, confusing, but game-changing enough that any self-respecting tyrant would want to censor it.
Some complain at the inhumanity of computers and systems. This is usually fallacious: all such systems are created in the image of humans, and usually what people lament as inhuman isn’t so in the sense that it is unlike people (say, lacking emotion) but in the sense that it lacks empathy and ethics. If you spend any time with humans, you will learn just how fond some people are of creating inhuman systems; the only difference, in the realm of computers, is that the machines give people a place to hide.
So Postel’s Law of Robustness, which computers find much easier to obey than we do, is actually very human, and Postel, a man very fond of machines, was a great humanist. Thus, when I recommended, above, being a little bit more like our machines, I meant only the extent to which they are faithful, diligent, and disinterested; characteristics which, combined with very human traits like public-spiritedness and a sense of community, we might call virtue. With this as our guiding principle, we might find conversation a little easier.
Notes:
I have not found reference to the philosophy smell idea anywhere, so believe it to be original. I will of course hand it to its rightful owner, if corrected.
https://medium.com/wonk-bridge/a-short- ... 66fa6f3ba7
A Short Introduction to the Mechanics of Bad Faith
Having Taught Computers Humanity, Perhaps They Can Teach Us Good Faith
Oliver Meredith Cox
Oliver Meredith Cox
Follow
Nov 23, 2020 · 19 min read
Winner of the Best Article from a New Contributor 2020 Award at Wonk Bridge’s 2020 Award Ceremony.
Degrain’s “Otelo e Desdemona”, an image and scenario epitomising the quandary of our present focus — to risk good faith, or embrace the ease of bad faith?
Bad faith is corrosive and, as with the nuclear calculus, the only way to deal with it is invoke a deterrent that is just as (or more) dangerous: the accusation of bad faith, which has the potential to negate any conversation. This appears to be the case because the game theory is off: one might say that our public conversations are like public goods (fisheries, forests, waterways) in which those high in ambition and low in virtue can win big at everyone else’s expense.
What follows is an exploration of this dynamic, and some proposals for how we might find ways to converse better with people who are different to us.
A Background to the Mechanics
A little while ago I wrote an article called “Conversational Interoperability,” which set out some ground rules designed to facilitate fruitful communication between people of different mindsets; this is in the context of how, quite frequently, people of different fields, philosophies or politics find their communication degrading for no other reason than they have different words for the same thing or different perspectives, when they really think quite similarly or at least don’t have differences that warrant getting angry.
That piece was based on Postel’s Law, or the Law of Robustness: “Be conservative in what you send, be liberal in what you accept.” John Postel, who was fundamental to the development of the Internet, expressed these words in the context of digital communications, meaning, in essence, that one should only transmit well-formed data but, if one receives malformed data that is nonetheless decipherable, one should parse it anyway.
Hopefully the reader can see how this provides for maximum communication, in theory: in the domain that you control (what you transmit) you should maximize obedience to the standard; in the domain you do not control (what others transmit) you should maximize the total information that you decipher. I extend the Law to my six rules of conversational interoperability, in three parts:
Clarity:
When speaking: be clear.
When listening: be charitable
Offense:
When speaking: try not to cause offense.
When listening: don’t take offense.
Errors:
When good-faith errors occur: be charitable.
When bad-faith errors occur: treat them like good-faith errors.
One can see how each of these pairs follows the liberal/conservative duality that Postel originally outlined in his law, except for the last one. Upon first meeting with Yuji and Max from the Wonk Bridge team, Yuji asked me about the last point. Yuji’s background is in international relations, a field in which one must respond: a nation that does not respond to border violations and other aggressive acts is irresponsible, since it invites further foul play. How could I, he asked, possibly recommend treating bad-faith errors just as I would treat good-faith errors?
In this piece I hope to explore the nuance of bad faith and how it interacts with human nature, while proposing a system to incentivize good faith. It consists of six policies, expressed as three progressive steps, each building on the one before. For more on bad faith, game theory, and these recommendations in detail, read on.
Six Policies to Encourage Good Faith
Step one: being able to call out bad faith without getting confused by mere sloppy arguing.
1. All fora should publish lists of logical fallacies for people to avoid.
2. All fora should publish lists of philosophy smells or philosophical anti-patterns. A philosophy smell is a style of thinking that is not, itself, fallacious but is flawed in a way that makes it easy for fallacies to slip through.
Step two: avoiding the escalation that can often come when accusations of bad faith are thrown around.
1. Claims of bad faith should be recorded diligently, so as to discourage people from debasing the coin.
2. All claims of bad faith should be falsifiable. Falsifiable claims are stronger, and avoid trap accusations that one can’t get out of.
Step three: building a system with the right incentive structure to encourage good faith behaviour, identify bad faith, and discourage false accusations.
1. All claims of bad faith should be reciprocal: if you make an accusation, you should own up to a past infraction of your own.
2. Good Faith Bonds: an organizational structure that aims to promote good faith. It works thus: people join the organization by posting bond, agreeing that they will act in good faith and avoid baseless accusations of bad faith thereafter. Those found acting in bad faith or making bad faith accusations forfeit their bonds. See below for more detail.
What Is Bad Faith?
What is a bad-faith actor? What is bad faith, for that matter? Bad faith is to act in ways that spoof the normal modes of interaction, such as in debate, conversation, commerce, while actually pursuing hidden, selfish motives or even hoping to disrupt the operation of the system in which they operate.
The clearest and most common example is trolling: trolls will often ask what appear to be normal questions, when in fact the real goal is to frustrate, annoy or waste time.
More sophisticated bad faithers use these techniques in debates. Creationists, for example, are notorious for pretending to think that Evolution is just a theory in the sense that “in theory the pub is still open” when they know that Evolution is a theory in that it is the best-evidenced model for the system it describes, having stood up to rigorous tests.
In my view, the most popular form of bad faith is the deliberate misunderstanding of phrases and wording, attributing to people views that they do not hold and clearly weren’t actually expressing. One Hitchens quote, if you’ll indulge me: Hitchens describes his opposition to the way in which particular religions categorize women as chattel, owned; the radio talk-show host responded by saying, “Well then are you against ownership?” Hitchens responded to say: “Of people: yes.”
Hopefully you can glimpse some of what we’re dealing with: bad faith, for our purposes, is when people will twist the universe around in their mind with as much effort as is necessary to avoid understanding a conflicting viewpoint to the extent that it might actually pose a challenge. All of these examples are shadows cast by the same desire: to frustrate, annoy, show off, posture in front of one’s followers, rather than to learn, exchange information and create connections.
The Trouble with Calling out Bad Faith
As such, one might ask, why don’t we just call out these bad actors, plainly and as often as possible? For example, in the case of the creationists, tell them that they have been corrected so many times on the point of the definition of “theory” that it is clear that they are not actually playing to win, but to foul?
The first reason not to do so is that telling someone they are arguing in bad faith is the nuclear option: it is meta; it asserts that the accused is not actually interested in discourse or ideas, that they are, at bottom, a troll or, worse, a liar. This is something of a conversation stopper. Granted, it’s probably justified sometimes, but quite often people fall into sin merely by repeating arguments they hear elsewhere but don’t interrogate.
For example, have you ever heard someone say, “There’s no smoke without fire,” meaning that if there’s a fuss, there must be something to fuss about? Do you see how horrible it is? The phrase makes no allowance for lies or mistakes, and is cruelly wrong in that it damns the victims of false rumors. One might as well say, “There’s no map without territory.”
Meanwhile, and perhaps more importantly, there’s the problem of the proliferation of this nuclear option. Anyone can accuse anyone else of acting in bad faith, but proving that one isn’t is as hard as proving any other negative proposition. To the extent that a given person is believed by their followers and readers, their accusation of bad faith against another threatens to cut that individual out completely (some people are more than happy to be given a reason to ignore another person’s ideas).
This has also been miniaturized into the tactical bad faith insult: cheap character critiques like racist, socialist, anti-American. The accusation of bad faith could be said to lie above all of these, e.g. the person is a social justice warrior and what they say can’t, therefore, be trusted.
The assumption of good faith is essentially the assumption that, despite our differences, we are all trying to seek truth. It is, metaphorically, the phones in JFK and Khrushchev’s offices. It’s a fragile line of communication — and when one side introduces bad faith as a talking point, the other may be tempted to cut the cord.
Game Theory, Mutually Assured Destruction and Technology
Game Theory
The answer to the question of how to deal with the bad actors, or at least the study of this question, starts with game theory. Game theory is the abstraction of strategic action, allowing us to simulate and theorize about the merits of particular strategies. Usually game theory comes down to simulated interactions between two or more “players” with different approaches to the “game.”
For example, Dawkins’ The Selfish Gene features games that simulate social interactions, with only two possible moves: cooperate or defect. These games have multiple rounds: during each round both players make their move, the outcome is decided, then they play the next round, and so on. In the Dawkins example, if both players cooperate, both get a payoff, if one player defects and the other cooperates, the defector gets a payoff, and the cooperator nothing. If they both defect, they both lose.
This allows the researcher to test out different strategies: “always cooperate,” “always defect,” “tit-for-tat” and so on. For readers who don’t know the phrase, tit-for-tat means to give it back as you get it: I’ll be nice if you’re nice, but if you hit me, I’ll hit you back on the next move.
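The game Dawkins describes can be sketched in a few lines. The payoff numbers below are illustrative, not taken from The Selfish Gene; they only follow the shape given above: mutual cooperation pays both, lone defection pays only the defector, and mutual defection costs both.

```python
# Illustrative payoffs: (payoff to player A, payoff to player B).
PAYOFFS = {
    ("C", "C"): (3, 3),    # both cooperate: both get a payoff
    ("C", "D"): (0, 5),    # cooperator gets nothing, defector wins
    ("D", "C"): (5, 0),
    ("D", "D"): (-1, -1),  # both defect: both lose
}

def always_cooperate(opponent_moves): return "C"
def always_defect(opponent_moves):    return "D"
def tit_for_tat(opponent_moves):      # echo the opponent's last move
    return opponent_moves[-1] if opponent_moves else "C"

def play(strategy_a, strategy_b, rounds=10):
    moves_a, moves_b = [], []   # each strategy sees the opponent's history
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strategy_a(moves_b), strategy_b(moves_a)
        pay_a, pay_b = PAYOFFS[(a, b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): cooperation compounds
print(play(tit_for_tat, always_defect))  # (-9, -4): one sucker round, then mutual loss
```

Even this toy version shows the essay’s worry in miniature: against a committed defector, everyone ends up worse off.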
Imagine if two people were in oppositional conversation, in public, with their audiences watching. Imagine if each person had the option to press a “bad faith” button that would identify the other, categorically, as someone with sinister motives. Well, a large number would press the button instantly and on all occasions. Remember that to some people, all conservatives are oil-heads, to others, progressives don’t want to improve America, they want to destroy it.
As I mentioned above, each time someone presses the button, to the extent that they are believed, their audience is further encouraged to think that the only reason one would have to disagree would be dishonesty. Granted, some people would alienate their audience by acting in this way; others practically make a living from it.
Daniel Mróz
Mutually Assured Destruction
Then, of course, one has the ultimate domain of game theory: nuclear deterrence. Essentially, the well-worn theory goes like this: any nation launching a nuclear attack today would, often automatically, provoke a counterattack on the part of its adversary. Thus, mutually assured destruction.
Continuing the liberal draw on Jon Postel’s wisdom, one might call this the law of fragility: each nation must be totally permissive in what it accepts (technology that can fend off nuclear attacks ranges from non-existent to dubious) and, as a defensive reaction to this weakness, each nation must be totally aggressive in what it sends (launch at any credible threat). It is bizarre that, perhaps as far from the balance of Postel’s law as one can go, we find something that is oddly stable.
In the game theory of a nuclear exchange, if either side defects, both are annihilated. Compare this to the typical behaviour of hatchet-wielding journalists and social media agitators, who stand to gain in the short term by acting in bad faith, and can even gain if both sides defect. This appears to be because different people have different audiences with different takes on events. However the argument actually goes, people often uphold their champion as the winner.
But, of course, there are no free lunches, and I argue that the terminus of this path is mutually assured destruction, also. A society that lies this much will eventually degrade to the extent that collaboration across party lines becomes impossible. This is part of why Eric Weinstein is so fond of recommending that we return to above-ground nuclear testing: a true perception of the risks is an excellent way to give people reason to coordinate their action effectively.
I kept a rather game theoretical horse tied nearby for just this sort of occasion: images on the Web. Do you remember when blog posts, articles, etc. didn’t always have an image accompanying them? Do you remember when Twitter consisted primarily of text? Well, what happened? It appears that the social networks, especially Twitter and Facebook, began offering fairly functional previews when users shared URLs, together with easy support for posting images themselves.
We know, to the embarrassment of our species, that people are more likely to click on something if it has an associated image; more so if the image features a human being. Thus, the incentive is for users to post images with everything, all the time: and if you don’t post an image to accompany your article you’ll lose clicks to someone who does.
In this example, online writers were playing a game of image versus no image, and image almost always wins (in the short term). This delivered us the sickening phenomenon of the mandatory blog featured image: teenagers on phones, a man reading a newspaper, a dreamy barista. This isn’t a lost cause, but it wasted a lot of time, energy and bandwidth along the way.
Fundamentally, the system of discourse appears to be configured such that it is very hard to win against bad faith actors and, like with the image battle, you can build a pocket of decency, but it’s much harder to stop the system as a whole from denaturing. To do that, we need better systems and better norms, the game theory itself must be different.
Technological Esperanto
You have probably heard of Esperanto, a constructed language, designed to be spoken universally and, thus, to foster international communication and collaboration. Esperanto, the most successful constructed language, spoken by millions, is dwarfed by another universal language, one which operates quite differently: TCP/IP.
TCP/IP describes the suite of protocols that facilitate the Internet; they are almost universally used and accepted, despite personal and global differences of language, creed and politics. Amusingly, when people are yelling at each other on Twitter, trolling, telling each other that they have nothing to offer one another, that they disagree on everything, they do so within a common framework about which there is total agreement: the protocols of the Internet that govern how we exchange data, including Jon Postel’s contributions and his Law, which is where we started.
Jon Postel/Wikipedia
Online, at least on the very basic level, there is almost total interoperability. For those who aren’t familiar, the Network Centric Operations Industry Consortium provides a nice primer on interoperability:
To claim interoperability, you must:
1) be able to communicate with others;
2) be able to digitize the data/information to be shared; and
3) be recognizable/registered (i.e., you must have some form of credentials verifying both senders and receivers on the network as authorized participants with specific roles).
For example, the USB standard provides for interoperability: all compliant devices fit into the hole correctly and can transmit data. Conversely, some Windows and Mac disk formats are still non-interoperable, throwing errors or even doing damage when one crosses the streams.
To the extent that interoperability is reduced, I see no instances wherein the community benefits: we benefit where there is more scope for communication; only those who want fawning, captive audiences would want people to use technology, or adopt mindsets, that cannot interact properly with others.
Another aside: if you want to build a cult, or at least make it difficult for your adherents to leave whatever organization you are building, it’s in your interest to give them an incompatible philosophy or mindset and especially to denature language, such that adherents can’t communicate properly with people using common parlance.
It’s worth noting the domain of operation of TCP/IP and the Law of Robustness, and why it works there. Its domain, of course, is computers and their communication, and it works at least partly because there are few grey areas: either the packets are well-formed or they are not. As my father would have said: “That’s one of the convenient things about computing: it either works or it doesn’t.” Things are much more difficult to judge in the domain of people and culture.
Amusingly, it seems that when left to our own devices, we find bad faith action irresistible; we have been astonishingly successful in maintaining good faith by offloading this responsibility to dependable machines. In order to build a less divisive discourse, we might want to be a little more like our devices — at least how they communicate.
Six Proposals for the Promotion of Good Faith
Reducing False Positives
1. All fora should publish lists of logical fallacies for people to avoid.
Fora, communities, social networks and particularly schools should publish lists of logical fallacies: argumentum ad hominem, post hoc ergo propter hoc, etc. It’s a travesty that school children don’t emerge knowing logic and its fallacious use: one has to admit that those not versed make better votaries and consumers.
Fundamentally, dealing with points that are logically unsound is frequently the source of friction in conversations: truly, if you use a fallacious argument, you either forgot yourself, are ignorant, or really are acting in bad faith. Usually it’s the first two, but with better norms, it seems that we could reduce them to practically nothing.
The most common fallacy, I find, and one that is perhaps less well known, is the tu quoque, meaning “you also.” It describes the act of trying to absolve one’s own side of something by identifying that the other side did it, too, or perhaps did something just as bad. This may, like a child realizing that they are not the only one in trouble, lessen the feeling of guilt, but it certainly doesn’t change the ethical implications of what one actually did.
The tu quoque, this style of argument, is supremely popular, perhaps one of the most popular rhetorical devices: yet it wastes our time and our patience with each other, during conversations that require every ounce of grace that we can afford.
2. All fora should publish lists of philosophy smells or philosophical anti-patterns.
Along with the list of fallacies, I would have people publish lists of “philosophy smells.” This, I believe, is a concept original to me.[1] A philosophy smell borrows from the code smell idea in programming: code smells don’t describe bugs or errors, necessarily; rather, they are evidence of poor practices and thinking, which are themselves likely to throw up bugs in the future. These smells, rewardingly, have the fruity names common to programmers: object orgy, feature envy, boolean blindness.
Philosophy smells are equivalent: they aren’t the same as logical fallacies, but are evidence of a style of philosophizing that is likely to produce errors and mistakes in future. My favorite, so far, is the drive-by syllogism, which is when someone submits rapid-fire premises and demands an immediate yes/no answer to the conclusion; this doesn’t mean that the logic is wrong, only that you’re much more likely to get a case of l’esprit de l’escalier. It’s not a particularly fruitful discussion if your means of winning is baffling people with speed and, in the process, missing out on their best counter-arguments. (If you are intellectually honest, you would want to hear the best they have.)
One more: astrolophizing, a blend of astrology and philosophy, exemplified when people deploy logic, and especially critiques, so generic that they can be applied wholesale to more or less anything and are, thus, not necessarily wrong but unlikely to be right. Christopher Hitchens gives a lovely example in Hitch-22:
The last time I heard an orthodox Marxist statement that was music to my ears was from a member of the Rwanda Patriotic Front, during the mass slaughter in the country. ‘The terms Hutu and Tutsi,’ he said severely, ‘are merely ideological constructs, describing different relationships to the means and mode of production.’ But of course!
I hope that establishing these two norms would eliminate many of the false positives: people who look like they’re acting in bad faith, but by accident. If philosophy smell is too weird a phrase, philosophical anti-pattern might be more palatable, to borrow from Andrew Koenig’s work in software.
Making It Easier and More Productive to Handle Bad Faith
Additionally, I have two proposals for moderating and softening accusations of bad faith, and to disincentivize people from making frivolous accusations a habit.
3. Claims of bad faith should be recorded diligently.
We should record accusations of bad faith. We could do with this feature in other domains, but we may as well start here. It is as common as dirt for commentators, the general public, etc. to make strong accusations against their fellows that essentially get black-holed or forgotten: they had an impact at the time, but eventually people forget and move on. Few people check on these claims to see if they turn out to be true or false.
The accuser gets to pack some punch at the time, and perhaps do so repeatedly, with no tally of their accuracy. To be clear, I’m not saying there’s anything wrong with being wrong — it’s a beautiful thing — but we should be wrong in public and embrace it. More apologies, more admitting that we were wrong, less forgetting.
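As a sketch of what such record-keeping might look like, here is a minimal, hypothetical accusation ledger (all names and fields are invented for illustration): each claim is logged, later settled as confirmed or refuted, and an accuser’s track record can then be tallied instead of forgotten.

```python
from dataclasses import dataclass, field

@dataclass
class Accusation:
    accuser: str
    accused: str
    claim: str
    verdict: str = "pending"   # later settled as "confirmed" or "refuted"

@dataclass
class Ledger:
    records: list = field(default_factory=list)

    def accuse(self, accuser, accused, claim):
        self.records.append(Accusation(accuser, accused, claim))

    def track_record(self, accuser):
        """Fraction of an accuser's settled claims that held up,
        or None if nothing has been settled yet."""
        settled = [r for r in self.records
                   if r.accuser == accuser and r.verdict != "pending"]
        if not settled:
            return None
        return sum(r.verdict == "confirmed" for r in settled) / len(settled)
```

The point is not the code but the norm it encodes: accusations accrue to a public tally rather than vanishing after they land.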
4. All claims of bad faith should be falsifiable.
For that matter, all non-metaphysical claims should be falsifiable too, but that’s a fight for another day. The reason for this injunction is that it’s easier than spitting to make an accusation against someone: most of the time, such accusations are so woolly as to be impossible to prove or disprove. If the accusation is well-formed, then there ought to be something that the accused can do, or even prove that they have already done, to convince you that they are in good faith.
My editor asked that I give an example of an unfalsifiable accusation. I don’t like to make a habit of public criticisms like this, so instead I will draw on a historical example: the Hamilton-Burr duel. In 1804, Vice President Aaron Burr shot and killed fellow Founding Father Alexander Hamilton during a duel. Burr challenged Hamilton because the latter had refused to deny insulting the former during a dinner in upstate New York. Putting aside whether this sort of thing is worth a duel, how could Hamilton possibly have falsified an account of a dinner held in private? Of course, Burr did not say that Hamilton acted in bad faith, but hopefully this incident illustrates the danger of any and all unfalsifiable accusations.
Hamilton and Burr dueling/Wikipedia
Remember: claims of bad faith cannot be metaphysical. If your accusation is not falsifiable, it is inadmissible. There are, of course, grey claims, like saying that there are 5 sextillion grains of sand on our planet: this is falsifiable in principle, but impossible to prove in practice. We should avoid such grey accusations for this reason: their greyness is a potential get-out-of-jail-free card.
Finally, the means of falsifying the accusation should be part of the accusation; this is to say that if I accuse you of something, I should also state what I would accept as proof to the contrary. For example, if one accuses a politician of being an oil shill, one should state also which oil companies or lobbyists are involved, thereby making it possible to see if that person had been paid.
This gives some liberty to the accuser, but there’s no incentive to softball, and we can all tell, conversely, when an accuser demands evidence that makes acquittal impossible (which is obvious bad form). This is not perfect, but ought never to be worse and usually to be better than the norm today.
Incentivizing Good Faith and Disincentivizing Frivolous Accusations
5. All claims of bad faith should be reciprocal.
There should be a cost associated with making such bold claims. Just as in the legal system, wherein court fees disincentivize frivolous action, we ought to make it mean something to make accusations of this magnitude. We know from game theory studies that people will happily pay, often more than the value of the infraction, for justice.
A non-monetary system would work on reciprocation: if you accuse someone of bad faith, you ought to pair the accusation with an admission of a time that you acted in bad faith in the past. The extent to which people name only petty examples of their own bad faith will, I expect, speak to the quality of their accusation. This method ought to be advantageous not least because it demonstrates us reaching for a higher value: the game should not be one of I’m right, you’re wrong, but of we are all trying to be better.
6. Good Faith Bonds
Finally, a monetary proposal: Good Faith Bonds. Here’s how it would work:
Individuals active in the public sphere would post bond with the Good Faith Bonds organization, on the promise that they will act in good faith thereafter.
If a claimant comes with a falsifiable accusation, the accused can either submit evidence in their defence, apologize or (hopefully not) do nothing.
If they apologize, they keep their bond; if they are proven wrong or ignore the accusation, they lose it.
If they are proven right, the accuser loses their bond.
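The forfeiture rules above can be expressed as a small decision procedure. This is a hypothetical sketch (the function and argument names are invented); note that forfeited bonds are simply zeroed here rather than paid to anyone.

```python
def resolve_accusation(response, defence_succeeds, bonds, accused, accuser):
    """Apply the forfeiture rules sketched above.

    response: "apologize", "defend" or "ignore" (the accused's choice);
    defence_succeeds: whether the submitted evidence refutes the claim.
    Forfeited bonds are zeroed, not transferred to the other party."""
    if response == "apologize":
        pass                    # an apology preserves the accused's bond
    elif response == "ignore" or not defence_succeeds:
        bonds[accused] = 0      # ignored, or defence failed: accused forfeits
    else:
        bonds[accuser] = 0      # accusation refuted: the accuser forfeits
    return bonds
```

Writing the rules down this way makes the symmetry visible: both frivolous accusation and unrepentant bad faith carry the same price.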
We would need to find a means of disposing of the forfeited money in a way that doesn’t generate perverse incentives: for example, it would be a very bad idea for the forfeited money to go to the accuser, this would make it impossibly tempting to abuse the system. One might say that the funds should go to charity, but that isn’t unproblematic either.
There’s obviously more to be ironed out here, but hopefully the idea is, at least, provocative. Questions around conflicts of interest and corruption within the organization are non-trivial, but new trustless modes of organization afforded by blockchain technology offer exciting possibilities.
Daniel Mróz
Finding the Best of Humanity Expressed by Computers
As I mentioned at the start, it seems as though our games of communication incentivize bad faith. Bad faith, one might say, is worse than lying, in that it breaks down the mechanisms of communication and our ability to coordinate, especially among people who are different from one another.
That said, technology, and computers in particular, can give us tools to reflect some light back onto our species. To talk in such terms, bad faith breaks interoperability, and offers the cultural and conversational equivalent of VHS vs. Betamax; building interoperability offers whatever the cultural and conversational equivalent of the Internet is: imperfect, confusing, but game-changing enough that any self-respecting tyrant would want to censor it.
Some complain at the inhumanity of computers and systems. This is usually fallacious: all such systems are created in the images of humans, and usually what people lament as inhuman isn’t so in the sense that it is unlike people (say, lacking emotion) but in the sense that it lacks empathy and ethics. If you spend any time with humans, you will learn just how fond some people are of creating inhuman systems; the only difference, in the realm of computers, is that the machines give people a place to hide.
So, Postel’s Law of Robustness, which computers find it much easier to obey than we do, is actually very human, and Postel, a man very fond of machines, a great humanist. Thus, when I recommended, above, being a little bit more like our machines, I meant only the extent to which they are faithful, diligent, and disinterested; characteristics which, combined with very human traits like public-spiritedness and a sense of community, we might call virtue. With this as our guiding principle, we might find conversation a little easier.
Notes:
I have not found reference to the philosophy smell idea anywhere, so believe it to be original. I will of course hand it to its rightful owner, if corrected.
Harvard University, out
University of Utah, in
I am going to get a 4.0 in damage.
(Afan jealous he didn’t do this first)
-
- Posts: 5434
- Joined: Tue Mar 05, 2019 8:36 pm
Re: A discussion on discourse
And here we are.
“ A society that lies this much will eventually degrade to the extent that collaboration across party lines becomes impossible.”
Farfromgeneva wrote: ↑Thu Nov 04, 2021 10:59 am Wasting too much time not doing my work this am, but pulled this quick and it may indict me in some ways but thought it was an interesting blog type piece.
https://medium.com/wonk-bridge/a-short- ... 66fa6f3ba7
A Short Introduction to the Mechanics of Bad Faith
Having Taught Computers Humanity, Perhaps They Can Teach Us Good Faith
Oliver Meredith Cox
Oliver Meredith Cox
Follow
Nov 23, 2020 · 19 min readm
Winner of the Best Article from a New Contributor 2020 Award at Wonk Bridge’s 2020 Award Ceremony.
Degrain’s “Otelo e Desdemona”, an image and scenario epitomising the quandary of our present focus — to risk good faith, or embrace the ease of bad faith?
Bad faith is corrosive and, as with the nuclear calculus, the only way to deal with it is invoke a deterrent that is just as (or more) dangerous: the accusation of bad faith, which has the potential to negate any conversation. This appears to be the case because the game theory is off: one might say that our public conversations are like public goods (fisheries, forests, waterways) in which those high in ambition and low in virtue can win big at everyone else’s expense.
What follows is an exploration of this dynamic, and some proposals for how we might find ways to converse better with people who are different to us.
A Background to the Mechanics
A little while ago I wrote an article called “Conversational Interoperability,” which set out some ground rules designed to facilitate fruitful communication between people of different mindsets; this is in the context of how, quite frequently, people of different fields, philosophies or politics find their communication degrading for no other reason than they have different words for the same thing or different perspectives, when they really think quite similarly or at least don’t have differences that warrant getting angry.
That piece was based on Postel’s Law, or the Law of Robustness: “Be conservative in what you send, be liberal in what you accept.” John Postel, who was fundamental to the development of the Internet, expressed these words in the context of digital communications, meaning, in essence, that one should only transmit well-formed data but, if one receives malformed data that is nonetheless decipherable, one should parse it anyway.
Hopefully the reader can see how this provides for maximum communication, in theory: in the domain that you control (what you transmit) you should maximize obedience to the standard; in the domain you do not control (what others transmit) you should maximize the total information that you decipher. I extend the Law to my six rules of conversational interoperability, in three parts:
Clarity:
When speaking: be clear.
When listening: be charitable
Offense:
When speaking: try not to cause offense.
When listening: don’t take offense.
Errors:
When good-faith errors occur: be charitable.
When bad-faith errors occur: treat them like good-faith errors.
One can see how each of these pairs follows the liberal/conservative duality that Postel originally outlined in his law, except for the last one. Upon first meeting with Yuji and Max from the Wonk Bridge team, Yuji asked me about the last point. Yuji’s background is in international relations, wherein one must respond: a nation not responding to border violations and other aggressive acts is irresponsible, given that it invites further foul play. How could I, he asked, possibly recommend treating bad-faith errors just as I would treat good-faith errors?
In this piece I hope to explore the nuance of bad faith and how it interacts with human nature, while proposing a system to incentivize good faith. It consists of six policies, expressed as three progressive steps, each using the former as foundation. For more on bad faith, game theory, and these recommendations in detail, read further.
Six Policies to Encourage Good Faith
Step one: being able to call out bad faith without getting confused by mere sloppy arguing.
1. All fora should publish lists of logical fallacies for people to avoid.
2. All fora should publish lists of philosophy smells or philosophical anti-patterns. A philosophy smell is a style of thinking that is not, itself, fallacious but is flawed in a way that makes it easy for fallacies to slip through.
Step two: avoiding the escalation that can often come when accusations of bad faith are thrown around.
1. Claims of bad faith should be recorded diligently, so as to discourage people from debasing the coin.
2. All claims of bad faith should be falsifiable. Falsifiable claims are stronger, and avoid trap accusations that one can’t get out of.
Step three: building a system with the right incentive structure to encourage good faith behaviour, identify bad faith, and discourage false accusations.
1. All claims of bad faith should be reciprocal: if you make an accusation, you should own up to a past infraction of your own.
2. Good Faith Bonds: an organizational structure that aims to promote good faith. It works thus: people join the organization by posting bond, agreeing that they will act in good faith and avoid baseless accusations of bad faith thereafter. Those found acting in bad faith or making bad faith accusations forfeit their bonds. See below for more detail.
What Is Bad Faith?
What is a bad-faith actor? What is bad faith, for that matter? Bad faith is acting in ways that spoof the normal modes of interaction (debate, conversation, commerce) while actually pursuing hidden, selfish motives, or even hoping to disrupt the operation of the very system in which one operates.
The clearest and most common example is trolling: trolls will often ask what appear to be normal questions of others, when in fact the real goal is to frustrate, annoy or waste time.
More sophisticated bad faithers use these techniques in debates. Creationists, for example, are notorious for pretending to think that Evolution is just a theory in the sense that “in theory the pub is still open” when they know that Evolution is a theory in that it is the best-evidenced model for the system it describes, having stood up to rigorous tests.
In my view, the most popular form of bad faith is the deliberate misunderstanding of phrases and wording, attributing to people views that they do not hold and clearly weren’t actually expressing. One Hitchens quote, if you’ll indulge me: Hitchens describes his opposition to the way in which particular religions categorize women as chattel, owned; the radio talk-show host responded by saying, “Well then are you against ownership?” Hitchens responded to say: “Of people: yes.”
Hopefully you can glimpse some of what we’re dealing with: bad faith, for our purposes, is when people will twist the universe around in their mind with as much effort as is necessary to avoid understanding a conflicting viewpoint to the extent that it might actually pose a challenge. All of these examples are shadows cast by the same desire: to frustrate, annoy, show off, posture in front of one’s followers, rather than to learn, exchange information and create connections.
The Trouble with Calling out Bad Faith
As such, one might ask, why don’t we just call out these bad actors, plainly and as often as possible? For example, in the case of the creationists, tell them that they have been corrected so many times on the point of the definition of “theory” that it is clear that they are not actually playing to win, but to foul?
The first reason not to do so is that telling someone that they're arguing in bad faith is the nuclear option: it is meta, and it contains the statement that the accused is not actually interested in discourse or ideas; that they are, at bottom, a troll or, worse, a liar. This is something of a conversation stopper. Granted, it's probably justified sometimes, but quite often people fall into sin simply by repeating arguments they heard elsewhere but never interrogated.
For example, have you ever heard someone say, "There's no smoke without fire," meaning that if there's a fuss, there must be something to fuss about? Do you see how horrible that is? The phrase makes no allowance for lies or mistakes, and it is cruelly wrong in that it damns the victims of false rumors. One might as well say, "There's no map without territory."
Meanwhile, and perhaps more importantly, there’s the problem of the proliferation of this nuclear option. Anyone can accuse anyone else of acting in bad faith, but proving that one isn’t is as hard as proving any other negative proposition. To the extent to which a given person is believed by their followers and readers, the accusation of bad faith against another threatens to cut that individual out completely (some people are more than happy to be given reason to ignore another person’s ideas).
This has also been miniaturized into the tactical bad faith insult: cheap character critiques like racist, socialist, anti-American. The accusation of bad faith could be said to lie above all of these, e.g. the person is a social justice warrior and what they say can’t, therefore, be trusted.
The assumption of good faith is essentially the assumption that, despite our differences, we are all trying to seek truth. It is, metaphorically, the phones in JFK and Khrushchev's offices. It's a fragile line of communication — and when one side introduces bad faith as a talking point, the other may be tempted to cut the cord.
Game Theory, Mutually Assured Destruction and Technology
Game Theory
The answer to the question of how to deal with the bad actors, or at least the study of this question, starts with game theory. Game theory is the abstraction of strategic action, allowing us to simulate and theorize about the merits of particular strategies. Usually game theory comes down to simulated interactions between two or more “players” with different approaches to the “game.”
For example, Dawkins’ The Selfish Gene features games that simulate social interactions, with only two possible moves: cooperate or defect. These games have multiple rounds: during each round both players make their move, the outcome is decided, then they play the next round, and so on. In the Dawkins example, if both players cooperate, both get a payoff; if one player defects and the other cooperates, the defector gets a payoff and the cooperator nothing; if they both defect, they both lose.
This allows the researcher to test out different strategies: “always cooperate,” “always defect,” “tit-for-tat” and so on. Tit-for-tat means giving it back as you get it: I’ll be nice if you’re nice, but if you hit me, I’ll hit you back on the next move.
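The cooperate/defect game described above is simple enough to sketch in code. What follows is a minimal illustration, not Dawkins' actual simulation; the payoff numbers and strategy names are my own assumptions, chosen only to show the mechanics:

```python
def tit_for_tat(opponent_history):
    """Cooperate on the first round, then copy the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

# Payoffs keyed by (my move, their move). Mutual cooperation pays both;
# defecting against a cooperator pays only the defector; mutual defection
# pays both poorly. (Illustrative values, not from The Selfish Gene.)
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strategy_a, strategy_b, rounds=10):
    score_a = score_b = 0
    moves_a, moves_b = [], []  # each strategy sees only the *other's* past moves
    for _ in range(rounds):
        a = strategy_a(moves_b)
        b = strategy_b(moves_a)
        pay_a, pay_b = PAYOFF[(a, b)]
        score_a += pay_a
        score_b += pay_b
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b
```

Running `play(tit_for_tat, always_defect)` shows the familiar result: the defector wins the first round, after which tit-for-tat retaliates and both limp along on the mutual-defection payoff, while two tit-for-tat players cooperate throughout and both score far higher.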
Imagine if two people were in oppositional conversation, in public, with their audiences watching. Imagine if each person had the option to press a “bad faith” button that would identify the other, categorically, as someone with sinister motives. Well, a large number would press the button instantly and on all occasions. Remember that to some people, all conservatives are oil-heads, to others, progressives don’t want to improve America, they want to destroy it.
As I mentioned above, each time someone presses the button, to the extent that they are believed, their audience is further encouraged to think that the only reason one would have to disagree would be dishonesty. Granted, some people would alienate their audience by acting in this way; others practically make a living from it.
Daniel Mróz
Mutually Assured Destruction
Then, of course, one has the ultimate domain of game theory: nuclear deterrents. Essentially the well-worn theory goes like this: any nation launching a nuclear attack today would, often automatically, provoke a counter attack on the part of their adversary. Thus, mutually assured destruction.
Continuing the liberal draw on John Postel’s wisdom, one might call this the law of fragility: each nation must be totally permissive in what it accepts (technology that can fend off nuclear attacks ranges from non-existent to dubious) and as a defensive reaction to this weakness, each nation must be totally aggressive in what it sends (launch at any credible threat). It is bizarre that, perhaps as far from the balance of Postel’s law as one can go, we find something that is oddly stable.
In the game theory of a nuclear exchange, if either side defects, both are annihilated. Compare this to the typical behaviour of hatchet-wielding journalists and social media agitators, who stand to gain in the short term by acting in bad faith, and can even gain if both sides defect. This appears to be because different people have different audiences with different takes on events. However the argument actually goes, people often uphold their champion as the winner.
But, of course, there are no free lunches, and I argue that the terminus of this path is mutually assured destruction, also. A society that lies this much will eventually degrade to the extent that collaboration across party lines becomes impossible. This is part of why Eric Weinstein is so fond of recommending that we return to above-ground nuclear testing: a true perception of the risks is an excellent way to give people reason to coordinate their action effectively.
I kept a rather game theoretical horse tied nearby for just this sort of occasion: images on the Web. Do you remember when blog posts, articles, etc. didn’t always have an image accompanying them? Do you remember when Twitter consisted primarily of text? Well, what happened? It appears that the social networks, especially Twitter and Facebook, began offering fairly functional previews when users shared URLs, together with easy support for posting images themselves.
We know, to the embarrassment of our species, that people are more likely to click on something if it has an associated image; more so if the image features a human being. Thus, the incentive is for users to post images with everything, all the time: and if you don’t post an image to accompany your article you’ll lose clicks to someone who does.
In this example, online writers were playing a game of image versus no image, and image almost always wins (in the short term). This delivered us the sickening phenomenon of the mandatory blog featured image: teenagers on phones, a man reading a newspaper, a dreamy barista. This isn’t a lost cause, but it wasted a lot of time, energy and bandwidth along the way.
Fundamentally, the system of discourse appears to be configured such that it is very hard to win against bad faith actors and, like with the image battle, you can build a pocket of decency, but it’s much harder to stop the system as a whole from denaturing. To do that, we need better systems and better norms, the game theory itself must be different.
Technological Esperanto
You have probably heard of Esperanto, a constructed language, designed to be spoken universally and, thus, to foster international communication and collaboration. Esperanto, the most successful constructed language, spoken by millions, is dwarfed by another universal language, one which operates quite differently: TCP/IP.
TCP/IP describes the suite of protocols that facilitate the Internet; they are almost universally used and accepted, despite personal and global differences of language, creed and politics. Amusingly, when people are yelling at each other on Twitter, trolling, telling each other that we have nothing to offer, that we disagree on everything, they do so within a common framework about which there is total agreement: the protocols of the Internet that govern how we exchange data functionally, including John Postel’s contributions and his Law, which is where we started.
John Postel/Wikipedia
Online, at least at the very basic level, there is almost total interoperability. For those who aren’t familiar, the Network Centric Operations Industry Consortium provides a nice primer on interoperability:
To claim interoperability, you must:
1) be able to communicate with others;
2) be able to digitize the data/information to be shared; and
3) be recognizable/registered (i.e., you must have some form of credentials verifying both senders and receivers on the network as authorized participants with specific roles).
For example, the USB standard provides for interoperability: all compliant devices fit into the hole correctly and can transmit data. Conversely, some Windows and Mac disk formats are still non-interoperable, throwing errors or even doing damage when one crosses the streams.
To the extent that interoperability is reduced, I see no instances wherein the community benefits: we benefit where there is more scope for communication. Only those who want fawning, captive audiences would want people to use technology, or adopt mindsets, that cannot interact properly with others.
Another aside: if you want to build a cult, or at least make it difficult for your adherents to leave whatever organization you are building, it’s in your interest to give them an incompatible philosophy or mindset and especially to denature language, such that adherents can’t communicate properly with people using common parlance.
It’s worth noting the domain of operation of TCP/IP and the Law of Robustness, and why it works there. Its domain, of course, is computers and their communication, and it works at least partly because there are few grey areas: either the packets are well-formed or they are not. As my father would have said: “That’s one of the convenient things about computing: it either works or it doesn’t.” Things are much more difficult to judge in the domain of people and culture.
Amusingly, it seems that when left to our own devices, we find bad faith action irresistible; we have been astonishingly successful in maintaining good faith by offloading this responsibility to dependable machines. In order to build a less divisive discourse, we might want to be a little more like our devices — at least how they communicate.
Six Proposals for the Promotion of Good Faith
Reducing False Positives
1. All fora should publish lists of logical fallacies for people to avoid.
Fora, communities, social networks and particularly schools should publish lists of logical fallacies: argumentum ad hominem, post hoc ergo propter hoc, etc. It’s a travesty that schoolchildren don’t emerge knowing logic and its fallacious uses; one has to admit that those not so versed make better votaries and consumers.
Fundamentally, points that are logically unsound are frequently the source of friction in conversations: truly, if you use a fallacious argument, you either forgot yourself, are ignorant, or really are acting in bad faith. Usually it’s one of the first two, and with better norms, it seems that we could reduce those to practically nothing.
The most common fallacy, I find, and one that is perhaps less well known, is tu quoque, meaning “you also.” It describes the act of trying to absolve one’s own side by pointing out that the other side did it too, or perhaps did something just as bad. This may, like a child realizing that they are not the only one in trouble, lessen the feeling of guilt, but it certainly doesn’t change the ethical implications of what one actually did.
The tu quoque, this style of argument, is supremely popular, perhaps one of the most popular rhetorical devices: yet it wastes our time and our patience with each other, during conversations that require every ounce of grace that we can afford.
2. All fora should publish lists of philosophy smells or philosophical anti-patterns.
Along with the list of fallacies, I would have people publish lists of “philosophy smells.” This, I believe, is a concept original to me.[1] A philosophy smell borrows from the code smell idea in programming: code smells don’t describe bugs or errors, necessarily; rather, they are evidence of poor practices and thinking, which are themselves likely to throw up bugs in the future. These smells, rewardingly, have the fruity names common to programmers: object orgy, feature envy, boolean blindness.
Philosophy smells are equivalent: they aren’t the same as logical fallacies, but are evidence of a style of philosophizing that is likely to produce errors and mistakes in future. My favorite, so far, is the drive-by syllogism, which is when someone submits rapid-fire premises and demands an immediate yes/no answer to the conclusion. This doesn’t mean that the logic is wrong, only that you’re much more likely to get a case of l’esprit de l’escalier. It’s not a particularly fruitful discussion if your means of winning is baffling people with speed and, in the process, missing out on their best counter-arguments. (If you are intellectually honest, you should want to hear the best they have.)
One more: astrolophizing, a blend of astrology and philosophy, exemplified when people deploy logic, and especially critiques, so generic that they can be applied wholesale to more or less anything and are, thus, not necessarily wrong but unlikely to be right. Christopher Hitchens gives a lovely example in Hitch-22:
The last time I heard an orthodox Marxist statement that was music to my ears was from a member of the Rwanda Patriotic Front, during the mass slaughter in the country. ‘The terms Hutu and Tutsi,’ he said severely, ‘are merely ideological constructs, describing different relationships to the means and mode of production.’ But of course!
I hope that establishing these two norms would eliminate many of the false positives: people who look like they’re acting in bad faith, but by accident. If philosophy smell is too weird a phrase, philosophical anti-pattern might be more palatable, to borrow from Andrew Koenig’s work in software.
Making It Easier and More Productive to Handle Bad Faith
Additionally, I have two proposals for moderating and softening accusations of bad faith, and to disincentivize people from making frivolous accusations a habit.
3. Claims of bad faith should be recorded diligently.
We should record accusations of bad faith. We could do with this feature in other domains too, but we may as well start here. It is as common as dirt for commentators, the general public, etc. to make strong accusations against their fellows that essentially get black-holed or forgotten: the accusation had an impact at the time, but eventually people forget and move on. Few people check on these claims to see if they turn out to be true or false.
The accuser gets to pack some punch at the time, and perhaps do so repeatedly, with no tally of their accuracy. To be clear, I’m not saying there’s anything wrong with being wrong — it’s a beautiful thing — but we should be wrong in public and embrace it. More apologies, more admitting that we were wrong, less forgetting.
4. All claims of bad faith should be falsifiable.
For that matter, all non-metaphysical claims should be falsifiable too, but that’s a fight for another day. The reason for this injunction is that making an accusation is easier than spitting, and most accusations are so woolly as to be impossible to prove or disprove. If the accusation is well-formed, then there ought to be something that the accused can do, or even prove that they have already done, to convince you that they are in good faith.
My editor asked that I give an example of an unfalsifiable accusation. I don’t like to make a habit of public criticisms like this, so instead I will draw on a historical example: the Hamilton-Burr duel. In 1804, Vice President Aaron Burr shot and killed fellow Founding Father Alexander Hamilton in a duel. Burr challenged Hamilton because the latter had refused to deny insulting the former during a dinner in upstate New York. Putting aside whether this sort of thing is worth a duel, how could Hamilton possibly have falsified an account of a dinner held in private? Of course, Burr did not say that Hamilton acted in bad faith, but hopefully this incident illustrates the danger of any and all unfalsifiable accusations.
Hamilton and Burr dueling/Wikipedia
Remember: claims of bad faith cannot be metaphysical. If your accusation is not falsifiable, it is inadmissible. There are, of course, grey claims, like saying that there are 5 sextillion grains of sand on our planet: this is falsifiable, but impossible to prove. We should avoid such grey accusations for the same reason: their greyness is a potential get-out-of-jail-free card.
Finally, the means of falsifying the accusation should be part of the accusation; this is to say that if I accuse you of something, I should also state what I would accept as proof to the contrary. For example, if one accuses a politician of being an oil shill, one should state also which oil companies or lobbyists are involved, thereby making it possible to see if that person had been paid.
This gives some liberty to the accuser, but there’s no incentive to softball, and we can all tell, conversely, when an accuser demands evidence that makes acquittal impossible (which is obvious bad form). This is not perfect, but ought never to be worse and usually to be better than the norm today.
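If it helps to see the shape of a well-formed accusation, the requirement can be sketched as a small data structure that refuses to stand unless it names its own means of disproof. All the names and examples here are hypothetical, invented only to illustrate the rule:

```python
from dataclasses import dataclass

@dataclass
class Accusation:
    accuser: str
    accused: str
    claim: str
    disproof: str  # what the accused could show in order to be acquitted

def admissible(a: Accusation) -> bool:
    """A woolly accusation, with no stated means of disproof, has no standing."""
    return bool(a.disproof.strip())

vague = Accusation("A", "B", "B is an oil shill", "")
concrete = Accusation(
    "A", "B",
    "B is paid by Acme Oil's lobbyists",            # hypothetical company
    "B's donor disclosures show no Acme Oil payments",
)
assert not admissible(vague)
assert admissible(concrete)
```

The point of the `disproof` field is exactly the one made above: stating in advance what would count as acquittal prevents trap accusations that no one can escape.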
Incentivizing Good Faith and Disincentivizing Frivolous Accusations
5. All claims of bad faith should be reciprocal.
There should be a cost associated with making such bold claims. Just as in the legal system, wherein court fees disincentivize frivolous action, we ought to make it mean something to make accusations of this magnitude. We know from game theory studies that people will happily pay, often more than the value of the infraction, for justice.
A non-monetary system would work on reciprocation: if you accuse someone of bad faith, you ought to pair the accusation with an admission of a time you acted in bad faith yourself. The extent to which people name only petty examples of their own bad faith will, I expect, speak to the quality of their accusation. This method ought to be advantageous not least because it demonstrates us reaching for a higher value: the game should not be I’m right, you’re wrong, but we are all trying to be better.
6. Good Faith Bonds
Finally, a monetary proposal: Good Faith Bonds. Here’s how it would work:
1. Individuals active in the public sphere post bond with the Good Faith Bonds organization, promising to act in good faith thereafter.
2. If a claimant comes with a falsifiable accusation, the accused can submit evidence in their defence, apologize, or (hopefully not) do nothing.
3. If they apologize, they keep their bond; if they are proven wrong or ignore the accusation, they lose their bond.
4. If they are proven right, the accuser loses their bond.
We would need to find a means of disposing of the forfeited money in a way that doesn’t generate perverse incentives: for example, it would be a very bad idea for the forfeited money to go to the accuser, this would make it impossibly tempting to abuse the system. One might say that the funds should go to charity, but that isn’t unproblematic either.
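For the mechanically minded, the resolution rules above amount to a few lines of bookkeeping. This is only a sketch of the scheme as described, with hypothetical names and bond amounts; note that forfeited funds are simply removed from play, since (as just discussed) deciding where they should go is the unsolved part:

```python
bonds = {"alice": 100, "bob": 100}  # posted on joining the organization

def resolve(accuser, accused, response, accusation_upheld):
    """Settle one falsifiable accusation.
    response: 'apologize', 'evidence', or 'ignore'."""
    if response == "apologize":
        return                      # an apology keeps the accused's bond intact
    if response == "ignore" or accusation_upheld:
        bonds[accused] = 0          # proven wrong, or silent: bond forfeited
    else:
        bonds[accuser] = 0          # accused vindicated: the accuser pays

# Alice accuses Bob; Bob submits evidence and is vindicated.
resolve("alice", "bob", "evidence", accusation_upheld=False)
assert bonds == {"alice": 0, "bob": 100}
```

Even this toy version shows why the incentives matter: every accusation puts the accuser's own bond at risk, which is exactly what makes frivolous accusations expensive.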
There’s obviously more to be ironed out here, but hopefully the idea is, at least, provocative. Questions around conflicts of interest and corruption within the organization are non-trivial, but new trustless modes of organization afforded by blockchain technology offer exciting possibilities.
Daniel Mróz
Finding the Best of Humanity Expressed by Computers
As I mentioned at the start, it seems as though our games of communication incentivize bad faith. Bad faith, one might say, is worse than lying, in that it breaks down the mechanisms of communication and our ability to coordinate, especially among people who are different from one another.
That said, technology, and computers in particular, can give us tools to reflect some light back onto our species. To talk in such terms, bad faith breaks interoperability, offering the cultural and conversational equivalent of VHS vs. Betamax; building interoperability offers whatever the cultural and conversational equivalent of the Internet is: imperfect, confusing, but game-changing enough that any self-respecting tyrant would want to censor it.
Some complain at the inhumanity of computers and systems. This is usually fallacious: all such systems are created in the image of humans, and usually what people lament as inhuman isn’t inhuman in the sense that it is unlike people (say, lacking emotion) but in the sense that it lacks empathy and ethics. If you spend any time with humans, you will learn just how fond some people are of creating inhuman systems; the only difference, in the realm of computers, is that the machines give people a place to hide.
So, Postel’s Law of Robustness, which computers find much easier to obey than we do, is actually very human, and Postel, a man very fond of machines, was a great humanist. Thus, when I recommended, above, being a little more like our machines, I meant only the extent to which they are faithful, diligent, and disinterested: characteristics which, combined with very human traits like public-spiritedness and a sense of community, we might call virtue. With this as our guiding principle, we might find conversation a little easier.
Notes:
I have not found reference to the philosophy smell idea anywhere, so believe it to be original. I will of course hand it to its rightful owner, if corrected.
"There is nothing more difficult and more dangerous to carry through than initiating changes. One makes enemies of those who prospered under the old order, and only lukewarm support from those who would prosper under the new."
- youthathletics
- Posts: 16169
- Joined: Mon Jul 30, 2018 7:36 pm
Re: A discussion on discourse
Dan and Noah spoke about this in the YouTube clip I posted yesterday.
A fraudulent intent, however carefully concealed at the outset, will generally, in the end, betray itself.
~Livy
“There are two ways to be fooled. One is to believe what isn’t true; the other is to refuse to believe what is true.” -Soren Kierkegaard
- MDlaxfan76
- Posts: 27419
- Joined: Wed Aug 01, 2018 5:40 pm
Re: A discussion on discourse
I recommend going to the link and reading it there as it's far less dauntingly dense, albeit it does challenge one's powers of concentration!
Interesting; will ponder.
-
- Posts: 23909
- Joined: Sat Feb 23, 2019 10:53 am
Re: A discussion on discourse
That’s why I added the link, but a solid note on this topic I thought either way.
MDlaxfan76 wrote: ↑Thu Nov 04, 2021 12:48 pm
I recommend going to the link and reading it there as it's far less dauntingly dense, albeit it does challenge one's powers of concentration!
Interesting; will ponder.
Harvard University, out
University of Utah, in
I am going to get a 4.0 in damage.
(Afan jealous he didn’t do this first)
-
- Posts: 23909
- Joined: Sat Feb 23, 2019 10:53 am
Re: A discussion on discourse
Reviving this for a bit of CBT
10 Cognitive Distortions Identified in CBT
Cognitive distortions are negative or irrational patterns of thinking. These negative thought patterns can play a role in diminishing your motivation, lowering your self-esteem, and contributing to problems like anxiety, depression, and substance use.
Cognitive behavioral therapy (CBT) is an approach that helps people recognize these cognitive distortions and replace them with more helpful, realistic thoughts.1
This article discusses different types of cognitive distortions including defining what they are, how they work, and offering hypothetical examples to show how this kind of thinking affects behavior.
All-or-Nothing Thinking
All-or-nothing thinking is also known as black and white thinking or polarized thinking. This type of thinking involves viewing things in absolute terms: Situations are always black or white, everything or nothing, good or bad, success or failure.2
All-or-nothing thinking is associated with certain mental health conditions, including narcissistic personality disorder (NPD) and borderline personality disorder (BPD).3
For example, Joan feels like a failure at school. Every time she makes a mistake, instead of acknowledging the error and trying to move past it, she gives up and assumes that she'll never be able to do well.
The problem with this type of thinking is that it doesn't allow any room to acknowledge anything between the two extremes. It can impair your motivation and confidence and make it hard to stick to long-term goals.
For example, instead of sticking to a healthy eating plan, you might throw up your hands and call yourself a failure every time you deviate from your plan. Or you might feel like starting a new workout plan is hopeless because you think that if you can't stick to it 100%, then you are a failure.
CBT works to overcome this type of cognitive distortion by helping you recognize that success and progress are not all-or-nothing concepts. By addressing this type of thinking and replacing self-defeating thoughts, you can feel better about your progress and recognize your strengths.
Overgeneralization
Overgeneralization happens when you make a rule after a single event or a series of coincidences.4 The words "always" or "never" frequently appear in the sentence. Because you have experience with one event playing out a certain way, you assume that all future events will have the same outcome.
For example, Ben has inferred from a series of coincidences that seven is his lucky number and has overgeneralized this to gambling situations involving the number seven, no matter how many times he loses.
The problem with this type of thinking is that it doesn't account for differences between situations as well as the role that chance or luck can play. This thinking can have a number of consequences on how people think and act in different situations.
Overgeneralization is associated with the development and maintenance of different anxiety disorders. When people have a bad experience in one situation, they assume that the same thing will happen again in the future.
Research also suggests that this type of cognitive distortion is common in people who have post-traumatic stress disorder (PTSD).5 Generalizing fear from one situation to future events can create feelings of anxiety, which often leads to avoidance of those situations.
Mental Filters
A mental filter is the opposite of overgeneralization, but with the same negative outcome.6 Instead of taking one small event and generalizing it inappropriately, the mental filter takes one small event and focuses on it exclusively, filtering out anything else.
This type of cognitive distortion can contribute to problems including addiction, anxiety, poor self-belief, and interpersonal problems, among other issues.
For example, Nathan focuses on all of the negative or hurtful things that his partner has said or done in their relationship, but he filters out all the kind and thoughtful things his partner does. This thinking contributes to feelings of negativity about his partner and their relationship.
Filtering out the positive and focusing on the negative can have a detrimental impact on your mental well-being. One study found that when people focused only on negative self-beliefs, it contributed to feelings of hopelessness and increased the risk of suicidal thinking.7
Discounting the Positive
Discounting the positive is a cognitive distortion that involves ignoring or invalidating good things that have happened to you.8 It is similar to mental filtering, but instead of simply ignoring the positives, you are actively rejecting them.
For example, Joel completes a project and receives an award for his outstanding work. Rather than feeling proud of his achievement, he attributes it to pure luck that has nothing to do with his talent and effort.
When people use this cognitive distortion, they view positive events as flukes. Because these positives are always seen as anomalies, they don't expect them to happen again in the future.
The problem with this type of thinking is that it undermines your faith in your abilities. Rather than recognizing your strengths, you assume that you aren't competent or skilled—you just got lucky.
When you discount the positive and challenges arise, you won't have faith in your ability to cope or overcome them. This lack of faith in yourself can lead to a sense of learned helplessness where you assume there is no point in even trying to change the outcome.
Jumping to Conclusions
There are two ways of jumping to conclusions:
Mind reading: When you think someone is going to react in a particular way, or you believe someone is thinking things that they aren't
Fortune telling: When you predict events will unfold in a particular way, often to avoid trying something difficult
Here's an example: Jamie engaged in fortune-telling when he believed that he wouldn't be able to stand life without heroin. In reality, he could and he did.
Magnification
Magnification is exaggerating the importance of shortcomings and problems while minimizing the importance of desirable qualities. Similar to mental filtering and discounting the positive, this cognitive distortion involves magnifying your negative qualities while minimizing your positive ones.
When something bad happens, you see this as "proof" of your own failures. But when good things happen, you minimize their importance. For example, a person addicted to pain medication might magnify the importance of eliminating all pain, and exaggerate how unbearable their pain is.
This thinking can affect behavior in a variety of ways. It can contribute to feelings of anxiety, fear, and panic because it causes people to exaggerate the importance of insignificant problems.
People sometimes believe that other people notice and judge even small mistakes. At the same time, they will minimize their own ability to cope with feelings of stress and anxiety, which can then contribute to increased anxiety and avoidance.9
Emotional Reasoning
Emotional reasoning is a way of judging yourself or your circumstances based on your emotions. For instance, Jenna used emotional reasoning to conclude that she was a worthless person, which in turn led to binge eating.
This type of reasoning assumes that because you are experiencing a negative emotion, it must be an accurate reflection of reality. If you feel experience feelings of guilt, for example, emotional reasoning would lead you to conclude that you are a bad person.
This type of thinking can contribute to a number of problems including feelings of anxiety and depression. While research has found that this distortion is common in people who have anxiety and depression, it is actually a very common way of thinking that many people engage in.10
Cognitive behavior therapy can help people learn to recognize the signs of emotional reasoning and realize that feelings are not facts.
"Should" Statements
"Should" statements involve always thinking about things that you think you "should" or "must" do. These types of statements can make you feel worried or anxious.
They can also cause you to experience feelings of guilt or a sense of failure. Because you always think you "should" be doing something, you end up feeling as if you are always failing.
These statements are self-defeating ways we talk to ourselves that emphasize unattainable standards. Then, when we fall short of our own ideas, we fail in our own eyes, which can create panic and anxiety.
An example: Cheryl thinks that she should be able to play a song on her violin without making any mistakes. When she does make mistakes, she feels angry and upset with herself. As a result, she starts to avoid practicing her violin.
Labeling
Labeling is a cognitive distortion that involves making a judgment about yourself or someone else as a person, rather than seeing the behavior as something the person did that doesn't define them as an individual.
You might think of this cognitive distortion as an extreme type of all-or-nothing thinking because it involves attaching a label to someone that offers no room for anything outside of that narrow, restrictive box.
For example, you might label yourself as a failure. You can also label other people as well. You might decide that someone is a jerk because of one interaction and continue to judge them in all future interactions through that lens with no room for redemption.
Personalization and Blame
Personalization and blame is a cognitive distortion whereby you entirely blame yourself, or someone else, for a situation that in reality involved many factors that were out of your control.
For example, Anna blamed herself for her daughter's bad grade in school. Instead of trying to find out why her daughter is struggling and exploring ways to help, Anna assumes it is a sign that she is a bad mother.
Personalization and blame cause people to feel inadequate. It can also lead to people experiencing feelings of shame and guilt.
Blame can also be attributed to others. In some cases, people will blame other people while ignoring other factors that could potentially play a role in the situation. For example, they might blame their relationship problems on their partner without acknowledging their own role.
Cognitive distortions are the mind’s way of playing tricks on us and convincing us of something that just isn’t true. While many cognitive distortions are common, there are some that can indicate a more serious condition and take a toll on mental health, leading to an increase in symptoms of stress, anxiety, or depression.
If you think that cognitive distortions may be altering your sense of reality and are concerned about how these thoughts may be negatively affecting your life, talk to your healthcare provider or therapist. Treatments such as cognitive behavioral therapy are helpful and can help you learn to think in ways that are more accurate and helpful.
10 Cognitive Distortions Identified in CBT
Cognitive distortions are negative or irrational patterns of thinking. These negative thought patterns can play a role in diminishing your motivation, lowering your self-esteem, and contributing to problems like anxiety, depression, and substance use.
Cognitive behavioral therapy (CBT) is an approach that helps people recognize these cognitive distortions and replace them with more helpful, realistic thoughts.1
This article discusses the main types of cognitive distortions, defining what each one is and how it works, and offers hypothetical examples to show how this kind of thinking affects behavior.
All-or-Nothing Thinking
All-or-nothing thinking is also known as black and white thinking or polarized thinking. This type of thinking involves viewing things in absolute terms: Situations are always black or white, everything or nothing, good or bad, success or failure.2
All-or-nothing thinking is associated with certain mental health conditions, including narcissistic personality disorder (NPD) and borderline personality disorder (BPD).3
For example, Joan feels like a failure at school. Every time she makes a mistake, instead of acknowledging the error and trying to move past it, she gives up and assumes that she'll never be able to do well.
The problem with this type of thinking is that it doesn't allow any room to acknowledge anything between the two extremes. It can impair your motivation and confidence and make it hard to stick to long-term goals.
For example, instead of sticking to a healthy eating plan, you might throw up your hands and call yourself a failure every time you deviate from your plan. Or you might feel like starting a new workout plan is hopeless because you think that if you can't stick to it 100%, then you are a failure.
CBT works to overcome this type of cognitive distortion by helping you recognize that success and progress are not all-or-nothing concepts. By addressing this type of thinking and replacing self-defeating thoughts, you can feel better about your progress and recognize your strengths.
Overgeneralization
Overgeneralization happens when you make a rule after a single event or a series of coincidences.4 The words "always" and "never" frequently appear in these rules. Because you have experience with one event playing out a certain way, you assume that all future events will have the same outcome.
For example, Ben has inferred from a series of coincidences that seven is his lucky number and has overgeneralized this to gambling situations involving the number seven, no matter how many times he loses.
The problem with this type of thinking is that it doesn't account for differences between situations as well as the role that chance or luck can play. This thinking can have a number of consequences on how people think and act in different situations.
Overgeneralization is associated with the development and maintenance of different anxiety disorders. When people have a bad experience in one situation, they assume that the same thing will happen again in the future.
Research also suggests that this type of cognitive distortion is common in people who have post-traumatic stress disorder (PTSD).5 Generalizing fear from one situation to future events can create feelings of anxiety, which often leads to avoidance of those situations.
Mental Filters
A mental filter is the opposite of overgeneralization, but with the same negative outcome.6 Instead of taking one small event and generalizing it inappropriately, the mental filter takes one small event and focuses on it exclusively, filtering out anything else.
This type of cognitive distortion can contribute to problems including addiction, anxiety, poor self-belief, and interpersonal problems, among other issues.
For example, Nathan focuses on all of the negative or hurtful things that his partner has said or done in their relationship, but he filters out all the kind and thoughtful things his partner does. This thinking contributes to feelings of negativity about his partner and their relationship.
Filtering out the positive and focusing on the negative can have a detrimental impact on your mental well-being. One study found that when people focused only on negative self-beliefs, it contributed to feelings of hopelessness and increased the risk of suicidal thinking.7
Discounting the Positive
Discounting the positive is a cognitive distortion that involves ignoring or invalidating good things that have happened to you.8 It is similar to mental filtering, but instead of simply ignoring the positives, you are actively rejecting them.
For example, Joel completes a project and receives an award for his outstanding work. Rather than feeling proud of his achievement, he attributes it to pure luck that has nothing to do with his talent and effort.
When people use this cognitive distortion, they view positive events as flukes. Because these positives are always seen as anomalies, they don't expect them to happen again in the future.
The problem with this type of thinking is that it undermines your faith in your abilities. Rather than recognizing your strengths, you assume that you aren't competent or skilled—you just got lucky.
When you discount the positive and challenges arise, you won't have faith in your ability to cope or overcome them. This lack of faith in yourself can lead to a sense of learned helplessness where you assume there is no point in even trying to change the outcome.
Jumping to Conclusions
There are two ways of jumping to conclusions:
Mind reading: When you think someone is going to react in a particular way, or you believe someone is thinking things that they aren't
Fortune telling: When you predict events will unfold in a particular way, often to avoid trying something difficult
Here's an example: Jamie engaged in fortune-telling when he believed that he wouldn't be able to stand life without heroin. In reality, he could and he did.
Magnification
Magnification is exaggerating the importance of shortcomings and problems while minimizing the importance of desirable qualities. Similar to mental filtering and discounting the positive, this cognitive distortion involves magnifying your negative qualities while minimizing your positive ones.
When something bad happens, you see this as "proof" of your own failures. But when good things happen, you minimize their importance. For example, a person addicted to pain medication might magnify the importance of eliminating all pain, and exaggerate how unbearable their pain is.
This thinking can affect behavior in a variety of ways. It can contribute to feelings of anxiety, fear, and panic because it causes people to exaggerate the importance of insignificant problems.
People sometimes believe that other people notice and judge even small mistakes. At the same time, they will minimize their own ability to cope with feelings of stress and anxiety, which can then contribute to increased anxiety and avoidance.9
Emotional Reasoning
Emotional reasoning is a way of judging yourself or your circumstances based on your emotions. For instance, Jenna used emotional reasoning to conclude that she was a worthless person, which in turn led to binge eating.
This type of reasoning assumes that because you are experiencing a negative emotion, it must be an accurate reflection of reality. If you experience feelings of guilt, for example, emotional reasoning would lead you to conclude that you are a bad person.
This type of thinking can contribute to a number of problems including feelings of anxiety and depression. While research has found that this distortion is common in people who have anxiety and depression, it is actually a very common way of thinking that many people engage in.10
Cognitive behavior therapy can help people learn to recognize the signs of emotional reasoning and realize that feelings are not facts.
"Should" Statements
"Should" statements involve always thinking about things that you think you "should" or "must" do. These types of statements can make you feel worried or anxious.
They can also cause you to experience feelings of guilt or a sense of failure. Because you always think you "should" be doing something, you end up feeling as if you are always failing.
These statements are self-defeating ways we talk to ourselves that emphasize unattainable standards. Then, when we fall short of our own ideals, we fail in our own eyes, which can create panic and anxiety.
An example: Cheryl thinks that she should be able to play a song on her violin without making any mistakes. When she does make mistakes, she feels angry and upset with herself. As a result, she starts to avoid practicing her violin.
Labeling
Labeling is a cognitive distortion that involves making a judgment about yourself or someone else as a person, rather than seeing the behavior as something the person did that doesn't define them as an individual.
You might think of this cognitive distortion as an extreme type of all-or-nothing thinking because it involves attaching a label to someone that offers no room for anything outside of that narrow, restrictive box.
For example, you might label yourself as a failure. You can label other people as well. You might decide that someone is a jerk because of one interaction and continue to judge them in all future interactions through that lens, with no room for redemption.
Personalization and Blame
Personalization and blame is a cognitive distortion whereby you entirely blame yourself, or someone else, for a situation that in reality involved many factors that were out of your control.
For example, Anna blamed herself for her daughter's bad grade in school. Instead of trying to find out why her daughter is struggling and exploring ways to help, Anna assumes it is a sign that she is a bad mother.
Personalization and blame causes people to feel inadequate. It can also lead to feelings of shame and guilt.
Blame can also be attributed to others. In some cases, people will blame other people while ignoring other factors that could potentially play a role in the situation. For example, they might blame their relationship problems on their partner without acknowledging their own role.
Cognitive distortions are the mind’s way of playing tricks on us and convincing us of something that just isn’t true. While many cognitive distortions are common, there are some that can indicate a more serious condition and take a toll on mental health, leading to an increase in symptoms of stress, anxiety, or depression.
If you think that cognitive distortions may be altering your sense of reality and are concerned about how these thoughts may be negatively affecting your life, talk to your healthcare provider or therapist. Treatments such as cognitive behavioral therapy can help you learn to think in ways that are more accurate and constructive.
Harvard University, out
University of Utah, in
I am going to get a 4.0 in damage.
(Afan jealous he didn’t do this first)
-
- Posts: 1366
- Joined: Sat Oct 27, 2018 11:58 pm
Re: A discussion on discourse
So as well written as these articles may be, real discourse, in my opinion, can be boiled down to just one of Stephen Covey's 7 Habits of Highly Effective People: "Seek to understand before you seek to be understood."

MDlaxfan76 wrote: ↑Thu Nov 04, 2021 12:48 pm
I recommend going to the link and reading it there as it's far less dauntingly dense, albeit it does challenge one's powers of concentration!

Interesting; will ponder.
"I would never want to belong to a club that would have me as a member", Groucho Marx
Re: A discussion on discourse
+1.

get it to x wrote: ↑Fri Feb 04, 2022 9:52 pm
So as well written as these articles may be, real discourse, in my opinion, can be boiled down to just one of Stephen Covey's 7 Habits of Highly Effective People: "Seek to understand before you seek to be understood."

MDlaxfan76 wrote: ↑Thu Nov 04, 2021 12:48 pm
I recommend going to the link and reading it there as it's far less dauntingly dense, albeit it does challenge one's powers of concentration!

Interesting; will ponder.
Also Robert Fulghum's "Everything I needed to know I learned in Kindergarten".
-
- Posts: 23909
- Joined: Sat Feb 23, 2019 10:53 am
Re: A discussion on discourse
Listened to a great podcast on communication yesterday.
https://podcasts.apple.com/us/podcast/e ... 0579183397
Adding a description
To the Founding Fathers it was free libraries. To the 19th century rationalist philosophers it was a system of public schools. Today it's access to the internet. Since its beginnings, Americans have believed that if facts and information were available to all, a democratic utopia would prevail. But missing from these well-intentioned efforts, says author and journalist David McRaney, is the awareness that people's opinions are unrelated to their knowledge and intelligence. In fact, he explains, the better educated we become, the better we are at rationalizing what we already believe. Listen as the author of How Minds Change speaks with EconTalk host Russ Roberts about why it's so hard to change someone's mind, the best way to make it happen (if you absolutely must), and why teens are hard-wired not to take good advice from older people even if they are actually wiser.
Last edited by Farfromgeneva on Tue Oct 18, 2022 9:31 pm, edited 1 time in total.
Harvard University, out
University of Utah, in
I am going to get a 4.0 in damage.
(Afan jealous he didn’t do this first)
- youthathletics
- Posts: 16169
- Joined: Mon Jul 30, 2018 7:36 pm
Re: A discussion on discourse
Thanks. Will listen tomorrow.
A fraudulent intent, however carefully concealed at the outset, will generally, in the end, betray itself.
~Livy
“There are two ways to be fooled. One is to believe what isn’t true; the other is to refuse to believe what is true.” -Soren Kierkegaard
-
- Posts: 23909
- Joined: Sat Feb 23, 2019 10:53 am
Re: A discussion on discourse
You'd probably like the platform enough to just subscribe, as it's free, on whatever app you use; I just use Apple's on an iPhone. I listen to them all, sometimes as episodes come out, and sometimes I go back through stretches of episodes that backed up, out of order.
Harvard University, out
University of Utah, in
I am going to get a 4.0 in damage.
(Afan jealous he didn’t do this first)
Re: A discussion on discourse
Here is Fulghum's list.

a fan wrote: ↑Fri Feb 04, 2022 11:09 pm
+1.

get it to x wrote: ↑Fri Feb 04, 2022 9:52 pm
So as well written as these articles may be, real discourse, in my opinion, can be boiled down to just one of Stephen Covey's 7 Habits of Highly Effective People: "Seek to understand before you seek to be understood."

MDlaxfan76 wrote: ↑Thu Nov 04, 2021 12:48 pm
I recommend going to the link and reading it there as it's far less dauntingly dense, albeit it does challenge one's powers of concentration!

Interesting; will ponder.
Also Robert Fulghum's "Everything I needed to know I learned in Kindergarten".
"Share everything.
Play fair.
Don’t hit people.
Put things back where you found them.
Clean up your own mess.
Don’t take things that aren’t yours.
Say you’re sorry when you hurt somebody.
Wash your hands before you eat.
Flush.
Warm cookies and cold milk are good for you.
Live a balanced life—learn some and think some and draw and paint and sing and dance and play and work every day some."