Comments on Mises and Gordon on Popper


Rafe wrote:

In a nutshell, Popper shifted the focus from the justification of beliefs to the formation of critical preferences for publicly articulated (objective) theories.

Care to offer a defense of critical preferences from my criticism?

Context
Criticism

There are in this orbit no such things as experimentally established facts. All experience in this field is, as must be repeated again and again, historical experience, that is, experience of complex phenomena. [This is Mises speaking.]

I agree with Mises that experimental tests cannot be the focal point of thought about human action. I think he gives a good reason. The data is complex. That means interpretation dominates and the data is secondary. We also, in general, can’t do relevant repeatable or controlled tests.

Tests have limited use in science too. As David Deutsch explained in _The Fabric of Reality_ (e.g. p 66), most bad scientific theories are rejected for being bad explanations, not by doing a test. Tests should be thought of as a very useful way to help settle some of the more difficult scientific disagreements, not as the primary method of science. Seen this way — as more of a bonus than a necessity — it’s not so worrying to do without them in an area such as human action or philosophy.

So Mises was mistaken that his field needs different methods than science in any important sense, since tests aren’t so crucial. Fundamentally, all knowledge creation is done by the method of conjecture and refutation. No other method has ever been thought of. And supporting epistemological ideas, such as fallibilism, rational attitudes to disagreement, the search for good explanations, and the need for error correction, apply just as much to Mises’ work as to scientific work.

Why did Popper think that demarcation was one of the two fundamental problems in the philosophy of science?

He didn’t. People have overemphasized what he said about it, perhaps because it’s one of the few things he said that they understood. Popper explained why he considered it an important problem and why he thought about it in, IIRC, C&R. It’s because he was thinking about Marx, Freud, and Adler and wanted to criticize their claims to scientific legitimacy.


59 Responses to Comments on Mises and Gordon on Popper

  1. Lee Kelly says:

    Elliot,

    You said:

    The concept of a critical preference makes the common sense assumption that there are strong and weak arguments, or in other words that arguments or ideas can be evaluated on a continuum based on their merit.

    Perhaps I am unrepresentative of other critical rationalists, but I wouldn’t have described my understanding of “critical preference” in this way.

    I have never given much thought to the matter, but preference has less to do with a notion of merit (which seems to me like a degree of justification) and more to do with what seems to “fit” the best. Although a theory may resolve one problem, it may create others that ultimately render it more problematic than a theory it would replace. Therefore, critical preference depends heavily upon what ideas one has favoured in the past, but often critical preference may emerge simply from habit or what “feels right.” It is not always possible to give an explicit analysis and comparison of alternative positions for the multitude of assumptions we knowingly and unwittingly make. We try our best to weed out bad ideas to bring our worldview into alignment with the truth, but there is no method (that I know of) by which our critical preference can be guaranteed to bring us closer to truth.

  2. Elliot says:

    “Fit” is a kind of merit. CRists don’t accept the same kind of merit as justificationists, but they do accept that we can have non-boolean evaluations of ideas (as you have accepted in your comment), e.g. where you say “more problematic” (more, being a comparative word, refers to degrees/continuum).

    But what reasonable meaning can “more problematic” have? If a theory is problematic *at all*, why isn’t that the end of it? The theory is flawed/refuted, although a variant of it may not be.

    Is the meaning that we should sometimes use or accept problematic theories, as long as the degree of problematicness is low enough? That sounds to me like ignoring criticism and making excuses for refuted theories. I think in many cases that’s not intended, but I don’t see what better interpretation is available.

  3. Lee Kelly says:

    Elliot,

    The degree of problematicity seems to be an evaluation made relative to a pre-existing Weltanschauung. For example, an idea that may be problematic to me may be unproblematic to you, because its relationship to our respective prevailing critical preferences is paramount. In your case, it may fit into your understanding of the world neatly, solving problems while not upsetting your prevailing worldview. Perhaps the opposite is true for me. I may be unwilling to overturn a broad selection of prior judgements, ideas and theories which seemed to solve many problems at the time, without more reflection and criticism. Perhaps we can analogise critical preference to piecemeal social engineering–while we invite criticism and desire evolution, some conservatism is sensible when confronted with the prospect of a sweeping revolution. Which policies count as piecemeal or utopian social engineering depends on context, i.e. prevailing social institutions and traditions, and similarly the degree to which an idea is problematic depends on context, i.e. beliefs, conjectures, and habits.

    None of this entails that we should deny problems where they exist, but just because a theory is problematic doesn’t mean there is a better alternative. Often these problems can be solved in ways that aren’t immediately obvious, and sometimes we initially accept criticisms that turn out to be erroneous. I believe it was Popper who said that some dogmatism in science may even be useful, at least to science as a social institution, since theories need strong defences as well as attackers.

  4. Elliot says:

    > None of this entails that we should deny problems where they exist, but just because a theory is problematic doesn’t mean there is a better alternative.

    There is always a non-problematic alternative, and non-refuted theories are better than refuted theories. Saying there sometimes isn’t a non-refuted theory available is just an assumption, which you and Popper have made, which turns out to be false. No argument for it has been offered, and I have posted a universal method of creating a non-refuted theory.

    @relative to people or problem situations: Consider the case where there’s only one person in the universe, and when we say “X is problematic” we mean in his opinion not someone else’s. My statements apply to that situation.

  5. Lee Kelly says:

    Elliot,

    You wrote:

    If we see a problem with an idea, then it’s no good, it’s refuted. We should never accept, or act on, ideas we know are flawed. Or in other words, if we know about an error it’s irrational to continue with the error anyway … I don’t think it makes sense to simultaneously accept a criticism of an idea, and accept the idea.

    I wouldn’t have thought that any critical rationalist would say that we should ever “accept” problematic theories, i.e. a theory which has been undermined by an apparently sound criticism. We may choose to act on such a theory in the absence of an alternative, especially since it is likely only problematic with regard to some types of predictions.

    As to where you “posted a universal method for creating a non-refuted theory,” I must confess that I have no idea what you mean.

    In any case, as a matter of pure logic, just because we call one proposition a “criticism” and another a “theory,” doesn’t mean the former logically trumps the latter. After all, a criticism is only sound if the theory is false, otherwise the criticism is false. We must always make a fallible choice between alternatives when addressing criticism–neither is logically special. In other words, every criticism is theory-refuted if that is what one chooses.

  6. Lee Kelly says:

    Addendum: that every criticism may be theory-refuted seems to imply that conventionalism cannot be criticised with pure logic. In other words, there is no purely logical reason why one should accept a (non-logical) criticism of a theory. Is this not, after all, why Popper called his book _The Logic of Scientific Discovery_? Pure logic must be supplemented with methodological rules and social conventions, because unless we are incredibly lucky, whatever theories prevail today are false, and the capacity for adaptation must be introduced if we ever hope to stumble upon the truth.

  7. Elliot says:

    @criticism vs theory: what’s the relevance?

    @accept: if you want to use terminology so that we act on unaccepted ideas, then just tell me what word to use for accepting an idea enough to act on it (or form a critical preference for it), then substitute it into what I wrote. Problem solved?

    Also, in your argument on this subject you make a claim about what is likely. Can you provide an argument that it is likely, and also specify what you mean by likely? (Probability, or what?)

    @method of creating non-refuted theories, linked from original post:

    > My approach to solving this problem is to declare the conflict (temporarily) undecided (pending a new idea or criticism) and then to ask the question, “Given the situation, including that conflict being undecided, what should be done?” Answering this new question does not depend on resolving the conflict, so it gets us unstuck.

    > When approaching this new question we may get stuck again on some other conflict of ideas. Being stuck is always temporary, but temporary can be a long time, so again we’ll need to do something about it. What we can do is repeat the same method as before: declare that conflict undecided and consider what to do given that the undecided conflicts are undecided.

    > A special case of this method is discussed here. [http://fallibleideas.com/avoiding-coercion]
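
    Read as a procedure, this is a simple recursion. Here is a toy sketch in Python (the function, the `conflicts` dictionary, and the example are all my own invented illustrations; in real thinking, noticing a conflict is creative work, not a lookup):

    ```python
    def proceed(question, conflicts):
        """Toy sketch of the 'declare it undecided' method."""
        conflict = conflicts.get(question)
        if conflict is None:
            return f"act on: {question}"  # nothing blocks us, so we can act
        # Declare the conflict (temporarily) undecided, pending a new idea or
        # criticism, and ask what to do *given* that it is undecided.
        meta = f"what should be done, given {conflict!r} is undecided?"
        return proceed(meta, conflicts)

    # Hypothetical example: a decision blocked by an undecided conflict of ideas.
    conflicts = {"what should we do tonight?": "movie vs. concert"}
    print(proceed("what should we do tonight?", conflicts))
    # -> act on: what should be done, given 'movie vs. concert' is undecided?
    ```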

    This may be clearer if you read my first comment in this thread: http://curi.us/1490-using-false-theories. In particular:

    > Any refuted theory T is strictly inferior to a non-refuted theory which has 3 parts:

    > 1) says T is refuted due to argument X
    > 2) contains T
    > 3) explains the circumstances in which T has some uses.
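
    That 3-part structure can be written out schematically. A minimal sketch, with made-up names (nothing from the linked posts):

    ```python
    from dataclasses import dataclass

    @dataclass
    class NonRefutedWrapper:
        """Sketch of the 3-part theory that is strictly superior to a refuted theory T."""
        refutation: str  # 1) says T is refuted due to argument X
        theory_T: str    # 2) contains T (the refuted theory itself)
        use_cases: str   # 3) explains the circumstances in which T has some uses
    ```

    The conjunction of all three parts, not T alone, is what gets offered up for criticism.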

  8. Rafe says:

    Taking up this part of Elliot’s criticism:

    “Let’s now consider the situation where we have conflicting non-refuted ideas, which is the problem that critical preferences try to solve. How should we approach such a conflict? We can make progress by criticizing ideas. But it may take us a while to think of a criticism, and we may need to carry on with life in the meantime. In that case, the critical preferences approach attempts to compare the non-refuted ideas, evaluate their merit, and act on the best one.”

    I don’t see critical preference as being restricted to non-refuted alternatives; this applies to theories, spouses, cars and houses (for example) where there may be several contenders and they all have strong and weak points. We have to do the best we can in light of our wants, resources and the time available to search and make a decision.

    Two considerations: first, the purpose of the exercise. If we are looking for a practical solution to a problem we may do perfectly well using a theory that is known to be refuted (Newton).

    Second, the time frame. In extreme situations you have to decide which way to jump (or turn the wheel of the car) in a split second.

    One of the things that I like about the notion of critical preference is that it unifies the scientific method and the subjective theory of value; in each case the process is the outcome of a situational analysis.

    In the case of value (preferences relating to purchases or investment decisions), the analysis may take a split second (an impulse buy) or it may be the outcome of lengthy deliberation and calculation (easier now with computers to run scenarios).

    We are told that one of the main concerns of von Mises (in addition to refuting socialism and certain kinds of interventionism) was to spell out the possibilities of economic calculation.

    Elliot went on to note the situation where a practical decision has to be made without delay.

    “My approach to solving this problem is to declare the conflict (temporarily) undecided (pending a new idea or criticism) and then to ask the question, “Given the situation, including that conflict being undecided, what should be done?” Answering this new question does not depend on resolving the conflict, so it gets us unstuck.”

    In theory, scientific problems do not usually need to be decided in a hurry; you can always say “more research required”. And one of the criteria for evaluating theories is the value of the research program that they set in train.

    So for scientific theories you can use several criteria and there does not need to be a clear-cut winner on any of them unless you are so fixated on justified belief that you insist that there has to be a winner (a justified belief). Someone postulated, on the list of human rights, the right to belief! (This was in the context of epistemology, not religion.)

    To repeat, the criteria include explanatory power (or theoretical problem-solving power), capacity to unify different fields, capacity to stand up to tests (not just evidence but logic, consistency with other well-tested theories), practical or instrumental value and the capacity to stimulate good research.

  9. Lee Kelly says:

    Elliot,

    Perhaps I do not understand your argument, because so far I fail to see the purpose of your criticism.

  10. Elliot says:

    Lee,

    Could you point out, say, the first passage you don’t understand or don’t see a purpose of? Otherwise I’d be left rewriting the entire thing, or picking a subset at random.

    Rafe,

    > If we are looking for a practical solution to a problem we may do perfectly well using a theory that is known to be refuted (Newton).

    If we use the naive-Newton theory, sometimes we’ll go wrong. We’ll go wrong in all the situations where it’s false. It’s considered refuted for a reason.

    What we need is a non-refuted theory, such as enlightened-Newton, which states that Newton’s theory is false (due to relativity, etc), but notes that because the new theory just adds a correction to the mass term, in cases where the correction is tiny (i.e. with velocities nowhere near the speed of light) Newton offers a good approximation while being a bit simpler. This theory A) guides us about when it’s safe to use Newton as an approximation or not B) isn’t refuted.
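
    To make “the correction is tiny” concrete, here’s a rough numerical illustration (my own example, using the standard Lorentz factor from special relativity, gamma = 1/sqrt(1 − v²/c²)):

    ```python
    import math

    C = 299_792_458.0  # speed of light in m/s

    def lorentz_factor(v):
        """gamma = 1/sqrt(1 - v^2/c^2), the relativistic correction to
        Newtonian momentum: p = gamma*m*v instead of p = m*v."""
        return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

    # A car at highway speed (~30 m/s): the correction is about 5e-15,
    # far below measurement error, so naive-Newton is a safe approximation.
    print(lorentz_factor(30.0) - 1.0)

    # At half the speed of light the correction is roughly 15%, so using
    # naive-Newton there would lead us badly wrong.
    print(lorentz_factor(0.5 * C) - 1.0)
    ```

    Enlightened-Newton is, roughly, the knowledge of when that correction is small enough to ignore.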

    Forming this non-refuted theory is important for several reasons:

    1) naive-Newton is already criticized and refuted. This makes people care less about further criticism of it; they say “I already knew it’s false”. So it’s important to have a non-refuted theory that is fully sensitive to criticism, so that you can say “if you can criticize this theory, even a little, we will learn something, b/c as far as I know it has zero flaws”. This comes up with induction a lot. Inductivists don’t listen to criticism which implies induction is false because they already knew induction is false. But they also don’t present any clear target which they claim is a non-refuted theory, which uses induction in some way, and criticism of which will matter.

    2) if you try to act on enlightened-Newton, while saying you are using naive-Newton, then by not being clear about what theory you’re using you make criticism and discussion harder, and also make it harder to find new ideas to improve on your theory, b/c you don’t have as good an understanding of what your theory is.

    3) Sometimes we try to form a better theory, using a refuted theory, and we fail. In fact that happens a lot. Often refuted theories are bad ideas. How can we find out when it’s good to continue using a refuted theory in some manner, and when to (tentatively, pending a new idea) reject it in full? The answer is to try to form a non-refuted theory using it, and if you succeed great, and if you fail then you can’t rationally use the refuted idea in any capacity for now.

    4) We need to know when the refuted theory is useful as an approximation, or for some other reason, and when it isn’t. We need a theory to tell us that, and for that theory to be itself the subject of critical debate, not swept under the rug and ignored. If we only had naive-Newton, we wouldn’t know anything about how to use it without going wrong.

    Is this one issue now clear?

  11. Lee Kelly says:

    Elliot,

    It seems to me a verbal quibble. For me, a theory about a refuted theory, such as under what circumstances a refuted theory still yields accurate predictions, does not replace the refuted theory in my order of critical preference. I can see what you are saying: a refuted theory in conjunction with another theory about when the refuted theory is useful is, itself, more preferable than the original theory alone. However, that conjunction would not customarily be referred to as a “theory” in and of itself, and I believe doing so breeds misunderstanding–as evidenced by these comments!

  12. Elliot says:

    Lee,

    Whether you want to call modified theories new or not would be primarily a verbal issue. BTW if you do want to call the modified theory the same theory, then you have to say it’s *not* refuted since you have modified it in such a way that it’s no longer refuted (if you haven’t, you have a problem!). In no case does one legitimately use a refuted version of a theory.

    But that’s not the main thing at stake here. The main issue is about how to judge theories. Can we take two non-refuted theories and form a preference for one over the other by any method other than refuting one of them? I say no. Popper and Rafe say yes. They say we can form a preference for one based on things like:

    > To repeat, the criteria include explanatory power (or theoretical problem-solving power), capacity to unify different fields, capacity to stand up to tests (not just evidence but logic, consistency with other well-tested theories), practical or instrumental value and the capacity to stimulate good research.

    I say that considering those criteria will either lead us to invent a criticism that refutes one theory, or else it cannot decide. In no case do we form a critical preference; only refutations count for anything.

    This leads to a problem (the one that critical preferences tried to solve is renewed). The issue is we have conflicting non-refuted theories, and we want to know what to do before we come up with a refutation which could take centuries. If we can’t form a critical preference for one, what do we do instead? Here I offer a different approach to solve this problem: in short, create a new theory which is not in conflict.

  13. Rafe says:

    One of my friends used to say, if it is hard to choose between two options, then it probably doesn’t matter which one you choose.

    Like Lee, I am struggling to see the point. It comes back to the purpose of the choice, and the criteria that you are using. If the choice is hard you can review the criteria.

    Real examples may help.

  14. Elliot says:

    What do you mean you don’t see the point? That is ambiguous.

    I claim: Popper was wrong about critical preferences; that approach is a mistake. All the criteria of critical preferences should be repurposed for use in criticism only. All ideas should be evaluated only as refuted or non-refuted with no middle ground. There are no “weighty though inconclusive arguments” (_Objective Knowledge_, p. 41). This isn’t a one-off mistake, but rather Popper says similar stuff frequently (e.g. see all the page numbers I gave in the Context post), as do many Popperians.

    Are you saying that if I am right about those things, it doesn’t really matter?

    Or do you mean you aren’t sure what I am claiming? If so, is that now clarified?

    Or do you mean you know my claims, and you agree they are substantive, but you don’t see any argument for them?

  15. Brian Scurfield says:

    > One of my friends used to say, if it is hard to choose between two options, then it probably doesn’t matter which one you choose.

    Isn’t this another bad thing about critical preferences: that in the situation where choices are hard, then we don’t care which we choose?

  16. mlionson says:

    “When approaching this new question we may get stuck again on some other conflict of ideas. Being stuck is always temporary, but temporary can be a long time, so again we’ll need to do something about it. What we can do is repeat the same method as before: declare that conflict undecided and consider what to do given that the undecided conflicts are undecided.”

    Elliot,
    Your approach does not solve the problem of how to arrive at a theory that causes an argument to halt. When two people are arguing with each other, one of them may share some of your ideas and say, “We can declare the conflict undecided and consider what to do given that the undecided conflicts are undecided.”

    But the person making the declaration is not a priori more rational than another who argues that one should revisit the original disagreement and not “agree to disagree”. If one person gets tired of persuading the other to agree to disagree, he is not a priori irrational to walk away from the argument, nor is the other a priori irrational for insisting that the original argument should still be discussed and will be quickly and meaningfully resolved to his opponent’s satisfaction (i.e. arguing that the other should not walk away).

    Believing otherwise is denying fallibilism. Believing otherwise is also illogical (equivalent to saying that the halting problem is soluble, and so contradicting Gödel, Church, and Turing).

    What does cause people to halt an argument?

    Often social convention and not logic. Sometimes people fight.

  17. elliot says:

    Michael,

    What do you think about the case where the conflicting ideas are both within one mind? Do you have any objection in the one person case?

    As to the case of an uncooperative person, of course the method will only work if all the people want to do it.

  18. mlionson says:

    A single person cannot decide, based on logic alone, whether he should continue to try to resolve two conflicting theories in his mind or stop thinking about them and do something else for a while.

    A person knows this when he knows that he is fallible.

  19. Elliot says:

    A) I don’t remember saying anything about “logic alone”. Where is that coming into it?

    B) Are you saying that coercion is inevitable, and TCS is impossible?

  20. mlionson says:

    “There is always a non-problematic alternative, and non-refuted theories are better than refuted theories. Saying there sometimes isn’t a non-refuted theory available is just an assumption, which you and Popper have made, which turns out to be false.”

    Elliot,
    I am referring to that quote. What does non-problematic alternative mean?

  21. Elliot says:

    A non-problematic alternative is a theory without a known problem (flaw) in it. It’s a non-refuted idea.

    It’s there if the person/people want it. It exists. There is a method by which he/they could think of it, with some mild effort. Failure is possible, but success is too, and it’s not a very difficult method to perform, once understood. It can be done with high reliability.

    I am not aware that this has anything to do with logic alone.

  22. mlionson says:

    You have “posted a universal method for creating a non-refuted theory”.

    Is it logic that allows you to know that the theory has not been refuted?

  23. Elliot says:

    > Is it logic that allows you to know that the theory has not been refuted?

    No. Fallible, critical, imaginative thinking lets us refute ideas by criticism. If no one has refuted an idea (yet) then it is non-refuted.

  24. mlionson says:

    You posted “a universal method for creating a non-refuted theory”, and stated essentially that it requires only “mild effort” to use this method; yet, we still need “fallible, critical, and imaginative thinking” to resolve disagreements, and somehow we know it only requires “mild effort”. And this creates critical and imaginative thinking?

    These seem like contradictions to me. Am I missing something? We are talking about coming up with theories that people use. What is your “universal” method for creating a theory that is non-refuted amongst people arguing with each other, will guide the action of one or many involved in the argument, and only requires “mild effort” to create?

    Can you specify any of the details? People disagree all the time.

  25. Elliot says:

    > What is your “universal” method for creating a theory that is non-refuted

    Pasting from earlier in this comment thread. I think what you want specifically is http://fallibleideas.com/avoiding-coercion

    @method of creating non-refuted theories, linked from original post:

    > My approach to solving this problem is to declare the conflict (temporarily) undecided (pending a new idea or criticism) and then to ask the question, “Given the situation, including that conflict being undecided, what should be done?” Answering this new question does not depend on resolving the conflict, so it gets us unstuck.

    > When approaching this new question we may get stuck again on some other conflict of ideas. Being stuck is always temporary, but temporary can be a long time, so again we’ll need to do something about it. What we can do is repeat the same method as before: declare that conflict undecided and consider what to do given that the undecided conflicts are undecided.

    > A special case of this method is discussed here. [http://fallibleideas.com/avoiding-coercion]

    This may be clearer if you read my first comment in this thread: http://curi.us/1490-using-false-theories. In particular:

    > Any refuted theory T is strictly inferior to a non-refuted theory which has 3 parts:

    > 1) says T is refuted due to argument X
    > 2) contains T
    > 3) explains the circumstances in which T has some uses.

  26. mlionson says:

    Disagreement About Creating an Agreement

    Bob is arguing with Harry, the argument is consuming time, and Harry is getting tired of arguing. If Bob asks Harry to not follow Elliot’s simplification procedure and instead focus just on the original point of argument, Bob is not *a priori* irrational and Harry rational. What Bob is suggesting (that following Elliot’s “universal” procedure is bad, right now) is that arguing about the original, more complex argument, even if Harry is tired, makes better use of everyone’s time. But if Harry convinces Bob that the Elliot procedure is known to be universally applicable to all long disagreements (in which one person feels like quitting), bad things can occur. Bob and Harry may separate with an agreement in hand, but not really have accomplished anything, though they could have with just a bit more effort.

    In hindsight, days or years later, both could agree that trying to agree on a simpler argument caused both to lose out on very valuable new knowledge that would surely have occurred had they not been derailed by following Elliot’s procedure. Elliot’s method may be useful at times, but it is certainly not universal, as the above example illustrates. And it can lead to failure to develop new knowledge, a bad outcome.

    Similarly, if I can’t resolve whether to spend time resolving an issue in my mind, or instead follow the theory that is making me sleepy and go to bed, Elliot’s procedure does not tell me what to do about this conflict. I could agree to think a little and then go to bed; but, the best solution may be to fight the urge to sleep and not simplify the disagreement in my mind, even if resolving a problematic issue takes all day and night. By staying up all night, though I feel very sleepy, I am disagreeing that it is important to eliminate the sleepy feeling before solving another problem. I am disagreeing that I have to have an agreement with myself before knowledge can be created.

    Unlike my understanding of Elliot’s theory, critical preferences and other rules of thumb (like Deutsch’s idea about picking the theory that can be least varied) are actually useful and help people to decide what to do in a practical sense when there are conflicting theories. Agreeing to disagree may be the best solution to a long argument. But it may not be.

    Disagreeing about agreeing, for example by not satisfying another person’s desire to talk to you when time is scarce, may also create truth. Famous scientists do this all the time in order to save time. All of us do this to create knowledge when we do not engage with people whom we don’t think we can learn from. I also sometimes refuse to create an agreement (even with myself!) when I stay up all night to solve an important problem, though I obviously hold a competing theory that I should sleep because I feel so tired!

    You seem to assume, for some reason, that agreeing to disagree is always more rational than, for example, disagreeing about whether one should develop an agreement. But both can create truth and both can be the wrong thing to do. There is no universal procedure for determining which should be done in a given situation of conflict.

    Agreement does not create truth. It may be the enemy of those who wish to develop it, or its friend.

    Truth, on the other hand, does create agreement. It is therefore more important to focus on creating knowledge than on creating agreement, since developing truth does not require (and may be impeded by) universal theories about needing to develop agreement just because there are prolonged and heated disputes.

  27. Elliot says:

    > If Bob asks Harry to not follow Elliot’s simplification procedure and instead focus just on the original point of argument, Bob is not *a priori* irrational and Harry rational.

    I didn’t make claims about who is rational, let alone a priori claims. I just said if they follow the procedure they can come to an agreement. If Bob thinks coming to an agreement is bad and won’t do it, then the harm that follows is his fault. Shrug.

    > Bob and Harry may separate with an agreement in hand, but not really have accomplished anything, though they could have with just a bit more effort.

    What they’ve accomplished is not hurting each other. If one person doesn’t want to continue, they should find a way to stop. The stopper might be mistaken, but it’s his life to use as he pleases.

    > Elliot’s method may be useful at times, but it is certainly not universal, as the above example illustrates. And it can lead to failure to develop new knowledge, a bad outcome.

    Universal doesn’t mean universally achieves perfect outcomes. It means it can be used in all situations to achieve the outcome it claims to offer (a conflict resolution).

    > Similarly, if I can’t resolve whether to spend time resolving an issue in my mind, or instead follow the theory that is making me sleepy and go to bed, Elliot’s procedure does not tell me what to do about this conflict.

    That is ambiguous.

    It doesn’t tell you the theoretically ideal thing to do. i.e. it’s not a shortcut to unlimited knowledge.

    It does tell you a thing to do that will resolve the conflict, which means you won’t be coerced.

    > By staying up all night, though I feel very sleepy, I am disagreeing that it is important to eliminate the sleepy feeling before solving another problem.

    This is ambiguous.

    Either you’ve settled that disagreement and believe it’s not important to sleep now.

    Or you haven’t, and you stay up while wanting to go to sleep, which is coercion.

    Do you think coercion ever helps us?

  28. mlionson says:

    Your solution is coercive to someone who currently wants to find agreement on the initial disagreement. One party thinks your solution is something that is bad. So it is not a solution to complex disagreement. It is *part of the disagreement itself*. So it can hardly be perceived to always work or be universal in any sense of the word.

    Following your procedure may reduce the rate of growth of knowledge, as discussed above. Surely in those circumstances it would be anti-rational.

    The issue is not coercion. “Having” to come to an agreement takes valuable time that could be used to grow knowledge elsewhere. So having to come to an agreement in order to be “rational” is itself coercive. Furthermore, following your procedure can certainly prevent knowledge growth by preventing use of time in a more productive way.

    But in fairness to your ideas, following your procedure is *no less coercive* than not being given the opportunity to come to an agreement to begin with. Both are coercive. Logic cannot solve this problem at any given moment in time when there is incomplete knowledge.

    When neither approach is a priori superior, neither is necessarily more likely to lead to knowledge growth and neither is more likely to prevent violence. Only conventions and rules of thumb can help us here because of our imperfect knowledge. These complex arguments are situations in which the “critical preferences” that we have been talking about are needed. Unfortunately, your theory does not help us to solve these difficult problems. Critical preferences, however, can help us to figure out what use of our time is good, which non-refuted theory should be considered first.

  29. Elliot says:

    > Your solution is coercive to someone who currently wants to find agreement on the initial disagreement.

    No. The requirements for my method include that all the people want to do it, and try to do it. If someone doesn’t want to do it, a requirement is not met, so my method isn’t being done. So my method isn’t coercing anyone.

    There’s no reason not to want to use my method because it has no downsides. Because it solves the conflict in a way that no one is left with cause for complaint, the method gives no one cause for complaint.

    > Following your procedure may reduce the rate of growth of knowledge, as discussed above. Surely in those circumstances it would be anti-rational,

    In science, following the scientific method takes more time than just making a wild guess and assuming it’s true. Therefore, science slows down the rate of the growth of knowledge compared to being a lucky fool. While this is true in some sense, there’s no criticism of science here.

    What we need are methods that can handle errors. Science is resilient against mistakes, whereas the guess-and-assume method is not. Forcing someone, against their will, to continue a discussion when they don’t want to, uses force. Force is bad at dealing with errors. (For reasons I have not provided in detail. If you are an advocate of force, we could go into them.)

  30. mlionson says:

    “There’s no reason not to want to use my method because it has no downsides.”

    Are you asserting that there is no downside to two people agreeing on something, so that “use my method” = “agree on something”? Or are you asserting something more?

  31. Elliot says:

    There exist methods of conflict resolution that do have downsides, such as compromising.

    The method I’m advocating does not have downsides. No one loses out. So I don’t see the point of your scenarios where someone doesn’t want to do it.

  32. mlionson says:

    I have already stated in several different forms the downsides to your method if it means something more than people can agree on something.

  33. Elliot says:

    I’ve been disagreeing and arguing with the downsides you’ve discussed. If you think there’s one I haven’t criticized, can you paste it please?

  34. mlionson says:

    Elliot,

    Your theory about how to create an agreement (argue and then agree about something that is more superficial than the original disagreement) does not, in and of itself, create truth. The reason is that the opposite is equally good advice in terms of creating truth. One could say that those who are arguing and not agreeing should consider deeper problems and that is better than considering problems that are easier to agree on. These deeper problems may take more time to resolve, but the result could be more powerful knowledge creation per time spent.

    For some reason your theory is about using argument to get an agreement, rather than using argument to create more knowledge. Creating knowledge may even require creating more disagreement, at least in the short term.

    Please recognize: Agreement between people does not in any way imply that knowledge has been created, nor does it even create agreement in the long-term! On the contrary, agreement between people often implies stagnation, and the same problems re-emerge in different forms, with subsequent additional disagreement. On the other hand, greater truth (verisimilitude or truth-likeness) does create agreement in the long-term because it is objective and therefore ultimately it can be perceived to be true by everyone.

    So, greater verisimilitude of a conjecture creates agreement in the long-term, but greater agreement DOES NOT, in and of itself, create more knowledge. Greater verisimilitude even creates a better ability to create better conjectures in the future (because better conjectures are formed when greater knowledge interacts with random variation, so those who know more tend to conjecture better.)

    Popperians speak of critical conjectures because unlike your theory or its opposite, a critical preference actually does help people to choose between various unrefuted theories, recognizing that human beings have time constraints. These critical conjectures, too, are subject to criticism and refutation. Deutsch’s idea, for example, that we should choose the theory that is harder to vary is an example of this.

    First we try to get closer to the truth. Agreement then follows from that: Sometimes immediately and sometimes after many years. But when one discovers truth, agreement does indeed follow.

    Finding superficial agreement using your theory, in and of itself, does nothing to create truth, and therefore does nothing ultimately to help people to find agreement about objective truth.

    The downside of your theory is that it recommends not using something that may work (critical conjectures) and suggests replacing them with a method such that utilizing it or its opposite is equally good or bad at creating knowledge.

  35. Elliot says:

    > Your theory about how to create an agreement (argue and then agree about something that is more superficial than the original disagreement) does not, in and of itself, create truth.

    It does not create truth about the original issue, but does create truth about how to proceed. Agreed?

    > greater agreement DOES NOT, in and of itself, create more knowledge

    I think apples and oranges are being compared. Is the scientific method better at creating knowledge than my conflict resolution method? It depends what kind of knowledge you want. If you want scientific knowledge, then yes. If you want the knowledge needed to not fight and hurt each other, then no.

    In a situation where conflict resolution is applicable, it’s better and more efficient to do it than ignore the problem. Fights and force are bad for knowledge creation. You can’t carry on with scientific collaboration when you don’t agree to do so.

    > The reason is that the opposite is equally good advice in terms of creating truth. One could say that those who are arguing and not agreeing should consider deeper problems and that is better than considering problems that are easier to agree on.

    Consider a rapist and a potential victim. Is it really equally good advice that when one person doesn’t want to carry on interacting, they try harder to interact?

    If people *voluntarily* consider deeper problems, that is not the opposite of my method but rather fully compatible. If someone doesn’t want to, then it’s *very bad* if they are *forced* to.

  36. mlionson says:

    “It does not create truth about the original issue, but does create truth about how to proceed. Agreed?”

    Not really. Arguably, one could get as much or more knowledge per time about “how to agree” by doing the opposite of what you are suggesting (by delving deeper into problems).

    But the above is irrelevant. There is no moral virtue in agreeing or disagreeing about something independent of knowing what one is agreeing or disagreeing about. Learning to agree (more easily) about something that is bad is worse!

    “Consider a rapist and a potential victim. Is it really equally good advice that when one person doesn’t want to carry on interacting, they try harder to interact?”

    Your assumption that agreement is good implies:

    If 100% of all women *agreed* that men should own their bodies (so that women should not control them but men should have sex with them at will), the evil of the process would be no less just because women agreed to it.

    Indeed, if women consent to abuse it is more horrible (not less) than if they fight it. When men abuse women they do so because of a flagrant lack of moral knowledge and the evil is greater if women agree to be subjected to it.

    First one conjectures whether the activity creates more knowledge than other alternatives, then one agrees or disagrees! It is knowledge that creates consent. Consent does not create knowledge.

    “If people *voluntarily* consider deeper problems, that is not the opposite of my method but rather fully compatible. If someone doesn’t want to, then it’s *very bad* if they are *forced* to.”

    To determine whether one should convince someone to use your algorithm, first one would have to *know something else*.

    We have one (unrefuted) theory that says, “Delving deeper creates more knowledge per time about how to agree.” And another unrefuted theory says, “Simplifying the argument creates more knowledge per time.”

    Then you tell us when to apply the simplification procedure (in non-scientific arguments and in rape situations, for example). Do you understand that telling us when to prefer each unrefuted theory is itself a critical preference?

    But you are also arguing that one should not use critical preferences, but instead use your algorithm. So in order to argue that one should not use critical preferences, you in fact create a critical preference (that arguments should be simplified in some situations rather than others).

    In effect, your theory is a critical preference that argues against the use of critical preferences. That seems illogical to me.

  37. Elliot says:

    I think we better focus on one narrow issue at a time. If we succeed, then we can come back to the other issues.

    Let’s consider the symmetry you are claiming between my method of conflict resolution and its opposite.

    And let’s consider only one part of that issue.

    You say that agreeing isn’t a good in itself, and doesn’t constitute, say, scientific progress.

    I am saying that conflict can hurt. I think you will agree with that.

    Some conflicts hurt, and some don’t. Right?

    I call the conflicts-that-hurt “coercion” and the ones that don’t “problems/questions”.

    For the conflicts-that-hurt, failure to create some agreement hurts, so it’s important to be able to create it. Doing the opposite means people suffer, but successfully creating agreement (a common preference) means no one suffers.

    Creating an agreement to avoid anyone being hurt is not scientific progress, but it’s important and valuable. Being hurt is bad; conflict resolution for conflicts-that-hurt is a good.

    Is this clear so far? Do you agree with *only* what I said in this comment?

  38. mlionson says:

    “Is this clear so far? Do you agree with *only* what I said in this comment?”

    No! Although I appreciate the effort. For example,

    “For the conflicts-that-hurt, failure to create some agreement hurts, so it’s important to be able to create it.”

    Not necessarily. Sometimes it is equally important to increase the hurt. If we believe it will take me or others 1000 years to convince a terrorist that he should not blow up buildings and kill people (destroy knowledge and knowledge creation), then I have no problem – none at all – with increasing his pain, coercing, and/or killing him to prevent the damage.

    Consent and coercion are means to an end (protecting and growing knowledge). They are not ends in and of themselves.

  39. Elliot says:

    Excluding the case of dealing with aggressive force or where someone doesn’t want to find a solution, do you agree?

  40. mlionson says:

    I do not agree that the goal is to create consent. First one creates knowledge, consent may (or may not) follow.

    We should therefore not look for ways to create consent. Instead, we should look for the best way to create knowledge.

  41. Elliot says:

    In a conflict-that-hurts, are you saying that we should try to create lots of knowledge *while being hurt*? With the person we’re in conflict with? I think that being actively hurt is detrimental to general purpose knowledge creation, and we can’t expect to get a lot done while in that bad state, and we should make no longer being hurt a priority.

    You haven’t said that in your post, I’m just guessing. Your last comment doesn’t discuss the hurt aspect of the situation. I think people being hurt, or not, is very important. Do you agree it’s important?

  42. mlionson says:

    “Your last comment doesn’t discuss the hurt aspect of the situation. I think people being hurt, or not, is very important. Do you agree it’s important?”

    Important for what? Destroying knowledge? Creating knowledge? Important for helping humanity? More pain or more pleasure can do either.

    The only explanation of why we experience coercion and hurt is lack of knowledge.

    The reason that we usually say that it is bad to hurt other people’s feelings is that it hurts *their ability* to protect and create knowledge. If we are to protect knowledge, we must protect other people.

    So much is unknown in our world. Furthermore, it is only these unknown risks that can hurt and coerce us. But we are surrounded by unknown risks. Therefore we are chronically in a state of mild coercion because of our lack of knowledge.

    If we must create pain in order to protect and grow knowledge, then we should do it. For example, if we are having surgery, it might hurt, but we should have it anyway. Surgery is painful, but it protects our ability to live and so continue to generate knowledge. So surgery is morally correct even though it hurts us.

    The issue is not whether coercion is bad or pleasure is good. Neither are good if they destroy knowledge. Both are good if they protect and create it. Create knowledge and you will create consent and knowledge. Create consent without knowledge and you will create neither.

  43. Elliot says:

    Is your view that hurting never helps (and indeed is undesirable), *all other things being equal*, but that sometimes intentionally hurting people is good if there’s a tradeoff and something of greater value may be gained?

    I believe there is never a tradeoff between hurting people and gaining benefits. There’s always a better way to get benefits without hurting anyone (still setting aside issues of aggressive force and self defense).

    I believe that in all situations there is a right thing to do, and it’s never to hurt people intentionally and knowingly (still allowing for self-defense).

    You think that sometimes hurting people intentionally and knowingly is good, if it will create more knowledge (and perhaps some other reasons as well). Is that right? If so, can you give a concrete example where hurting is beneficial?

    You mention the example of surgery. But surgery is often non-coercive, and certainly can be, and is better that way. The type of harm I’m talking about is not physical pain, but *conflicts-that-hurt*, aka coercion. This is a mental issue. Physical pain can cause mental pain, but doesn’t always do so.

  44. mlionson says:

    “Excluding the case of dealing with aggressive force or where someone doesn’t want to find a solution, do you agree?”

    No. There are two problems with this formulation.

    1. “Excluding the case….where someone doesn’t want to find a solution, do you agree?”

    One cannot exclude this argument without missing the point of our discussion, because you are implicitly asking me to forgive one little contradiction in your argument and then you are happy to proceed!

    You implicitly assume that person A’s time is better spent coming to an agreement with person B, who wants to discuss something with A, than arguing with anyone else or doing anything else. So A does not want to come to an agreement (does not want to reach consent). My point is that the person who does not want to come to an agreement is not necessarily morally wrong, though he may coerce the other person.

    Your contention seems to be that it is always better for Person A to continue to argue with (even well-intentioned) person B, regardless of what else Person A could do, until person B releases him from the discussion by agreeing that he has said something that person B thinks is important enough to warrant discontinuation of the discussion.

    Otherwise person A (who does not want to continue the conversation) is thought to be coercive if he walks away from the discussion before he has agreed with something that person B thinks it is important for him to agree to. But person A may be able to generate more knowledge by not talking to person B, even though person B wants him to keep talking.

    This is rather like Joe telling Harry that he believes in consent so Harry cannot leave the room until Harry signs a piece of paper agreeing with information that Joe thinks is important (thereby creating consent!). But it is Joe who is being coercive, though Joe is also the one seeking consent!

    Your argument is:
    “Excluding the case….where someone doesn’t want to find a solution, do you agree?”

    We cannot exclude this case because if one person no longer wants to come to an agreement (because his time is more valuable than the knowledge created by the person he is talking to), he is held hostage by the other person’s demands on his time, if he does not want to be called coercive. So you are asking me to accept that there needs to be no coercion for your theory to be true, except this one little bit about having to come to an agreement, and then you are happy to proceed with the argument! You are asking me to accept that 1=2 and then you will proceed. But that is illogical.

    Perhaps this would become clearer if we imagine a situation in which someone who believes in a particular religion honestly wants to convert me because he is very afraid that I am going to hell, if I do not convert. Because he values my health and well-being, he continually asks to speak to me, indeed comes to my door to convert me. He does agree to disagree when nothing new is being discussed on a given day, but he keeps coming back because he has a new conjecture that he thinks will now convince me that he is right.

    I think it is reasonable to tell him to never come to my door again (unless I invite him) if he is going to discuss religion, even though he could be right about his religion and is well-intentioned. He is honestly trying to come to an agreement with me but it is simply not worth my time to continue to argue with him, because I estimate that it will take me 20 or 30 years to convince him that I am right and I can create more knowledge with someone else than with him.

    Unlike your supposition, I do not want to come to an agreement with him, though I know that it will hurt him. If you will, I am “knowingly” hurting him and refusing to come to an agreement. Since I do not want to come to an agreement with him and he wants to come to an agreement with me, I have coerced him when I shut the door.

    I certainly agree that I coerced him, but think that my behavior is right. I think an honest interpretation of your theory would be that I coerced him, also. But my behavior is perfectly legitimate, because I estimate that knowledge will grow faster if he does not come to my door any more, though he is hurt by what I do.

    “Excluding the case of dealing with … aggressive force”?
    “You think that sometimes hurting people intentionally and knowingly is good, if it will create more knowledge (and perhaps some other reasons as well). Is that right?”

    Not quite. I don’t believe in intentionally hurting people but believe that knowingly hurting them can be the right thing to do. Protecting and growing knowledge is the only self-defense that we have. So I see it as a moral obligation to help myself and others protect and create knowledge. I also see it as self-defense to help myself and others protect and grow knowledge.

    If you are trying to exclude the notion of self-defense from the argument, then I can not agree with that. Self-defense is protecting and growing knowledge and that is my theory.

    So if you are asking me to exclude self-defense (growing and protecting knowledge), you are implicitly saying to me,
    “Assuming that you can’t use your theory, give me an example of when it is right”. This question seems illogical to me.

    In short, I don’t think we can “exclude” the cases of self-defense from the discussion and situations in which one party does not think it is rational to come to an agreement, because these situations are the heart of virtually any real-world disagreement. And we are talking about what to do in a disagreement, not what to do when people agree.

    To answer the question below:

    “You think that sometimes hurting people intentionally and knowingly is good, if it will create more knowledge (and perhaps some other reasons as well). Is that right?”

    Not quite. I think that knowingly hurting other people, particularly in the short-term, can be exactly the right thing to do. I do not intend the hurt, and wish that I could accomplish the same moral goals without hurting people. But without perfect knowledge, this is impossible.

    I already gave you one example of stopping (and hurting) certain religious individuals from trying to study with me, though they sincerely want to be helpful.

    Another example might be the situation of someone who is brought in by ambulance in a delirium requiring immediate surgery to save his life. I would certainly help to tie him to the bed though he screams at me and tries to get away. I would order medication to be administered intramuscularly to calm him down enough for the anesthesiologist to involuntarily get an IV in him, though he screams that he wants no surgery and fights vigorously to escape his restraints. Needless to say, patients can be very, very angry, and these situations are frequently accompanied by considerable invective and violence against those who are trying to coerce and restrain, with much mental pain to the recipient whose will is being overridden.

    I am willing to be floridly coercive because he is a valuable knowledge-creating/morality-creating person who must be protected from himself. I wish to preserve knowledge first (his knowledge) and this simply overrides his consent. In these situations, my mind is invariably eased when I know that in a few days he will thank me for the intervention and apologize for his bad behavior, though of course there never is a moral need to apologize because when we are able to be mature, we invariably understand.

    Elliot,
    Consent is properly called “informed” consent. Knowledge must always come before consent. That’s why the “informed” comes metaphorically and literally before the “consent”.

    Attempt to create consent before creating knowledge and you will get neither. Create knowledge before consent and you will get both.

  45. Elliot says:

    When I ask to exclude something, it’s temporary. If you agree excluding it, then we can talk about it and why no exclusion is needed after. If you don’t agree, even excluding it, then there must be some other reason you don’t agree, separate from the issue being excluded, so then we get the opportunity to take issues one at a time and not mix them. In this case, I expect the other reason that isn’t the aggressive force case, to be the deeper and more important reason, which will be more fruitful to focus on.

    I don’t think there is a contradiction, but I also don’t think we can talk about everything at the same time.

  46. Elliot says:

    > Your contention seems to be that it is always better for Person A to continue to argue with (even well-intentioned) person B, regardless of what else Person A could do, until person B releases him from the discussion by agreeing that he has said something that person B thinks is important enough to warrant discontinuation of the discussion.

    What? No! I am the one advocating that if people have trouble cooperating, they should cooperate less. All cooperation should be voluntary and for mutual benefit. If someone wants to leave and not try to cooperate, then he should go his separate way, immediately. Person B has no power whatsoever to make person A continue to discuss (unless there is some clear and specific obligation to the contrary, e.g. person A signed a contract to do an hour-long filmed discussion for $100,000. And even then he can walk out and no one should grab him and tie him to his chair, but he will be liable for damages).

    If they “go their separate ways,” that consists of both of them doing whatever the hell they want except for aggressive force. They don’t need permission or consent to live their own lives, only for joint interactions.

    You are the one saying that the opposite of my method, i.e. one where they react to someone wanting to stop by trying harder rather than leaving each other alone, even though the person still wants to stop, is equally good. (As I see it, he’s being forced to continue the discussion against his will. But I imagine if I start talking about how you’re advocating discussions be done at gunpoint, you’ll say you never said that and meant something else entirely. But what?)

    > I think it is reasonable to tell him to never come to my door again (unless I invite him) if he is going to discuss religion, even though he could be right about his religion and is well-intentioned.

    Yes, that’s compatible with my view, but not with the opposite of my view. So will you now agree that the opposite of my view is flawed?

    > > You think that sometimes hurting people intentionally and knowingly is good, if it will create more knowledge (and perhaps some other reasons as well). Is that right?

    > I already gave you one example of stopping (and hurting) certain religious individuals from trying to study with me, though they sincerely want to be helpful.

    By asking someone not to come back, you are not harming him. If he is coerced, that is due to his own irrational psychology, and no fault of yours.

    Your request is completely in accordance with the basic liberal values of our society, which all reasonable people will respect and defer to, when dealing with someone who isn’t a close friend or family. That makes it very good at creating an agreement between you to do it.

    Requesting that people leave you alone is an *extremely effective way of creating consent* with almost anyone. It’s quite reliable. If you go into a store, and the store owner asks you to leave for no reason, you think “wow, what a jerk!” and you leave, and you have no desire to stay and fight with him over your right to be on his private property. Even his crude and unfair request leads to an agreement that you leave, b/c it’s in line with liberal, humanitarian values you both respect.

    Presumably you will say: what if you know in advance that, as a matter of fact, the religious guy will be hurt/coerced by my rejection of him (due to his mistaken ideas)? Then isn’t it intentionally hurting him?

    But no, your intent has nothing to do with hurting him. Hurting him is no part of your plan; it’s not something that you think will benefit you; you’d rather if he wasn’t hurt. And BTW if you know his psychology in detail, and learned it in any realistic way (e.g. by being interested in him and reading his website or talking with him) then you’ll, realistically, be happy to spend 5 minutes briefly explaining liberal values to him, and why he should be happy to leave you alone. You’ll give him a bit of help in not being coerced (though certainly not unlimited — he isn’t your responsibility).

    Your intent is only to protect yourself, not to hurt him. And if he won’t respect that desire of yours to run your own life, then he is an aggressor, isn’t he? He’s blackmailing you with his own flawed psychology to force you to do stuff you don’t want to, and if you don’t he’ll yell that you’ve hurt him and he’s a victim and so on.

    Now suppose for a minute you want to resolve your conflict with the religious guy, not just get rid of him. You don’t want to spend unlimited time on this, but you’d like to come to a better agreement than just going your separate ways. Can you do that? How can you do that? The answer is you can create an agreement, quickly and reliably, if you use my method (if, and only if, he wants to do this, and wants to use the method. Also, you both have to know the method in advance, or it won’t be so quick b/c it will have to be learned).

  47. mlionson says:

    “By asking someone not to come back, you are not harming him. If he is coerced, that is due to his own irrational psychology, and no fault of yours.”

    “Yes, that’s compatible with my view, but not with the opposite of my view. So will you now agree that the opposite of my view is flawed?”

    “He’s blackmailing you with his own flawed psychology”

    I don’t think you are agreeing with me, but maybe you are. If you agree that I am doing the right thing and coercing the religious guy (AND the religious guy is coercing me) then we are in agreement. I still think I am right, though I think I have coerced him.

    And what makes you think that he has flawed psychology? I think one can only come to that conclusion if *you assume* his ideas are mistaken. Imagine that he is right and that my listening to only one more conjecture of his will in fact “save” my life. He then does have a moral obligation to continue to try to persuade me. Notice that you have to rely on the “liberal” values of society to make your point. BTW, these liberal values that you hold are also called “critical preferences”. They are helping you to decide whether I should have a *right* to stop him from coming to my door when we have two ambiguous theories. Namely, I think I am right and he thinks that he is right, and neither of us can show, to the other’s satisfaction, that one of the theories has been unambiguously refuted.

    And thank goodness I do have a legal right. But he has a moral obligation, if his views are correct, to try to persuade me unless the punishment for doing so makes it more likely that he can save someone else’s life by ignoring me. The law hopefully convinces him of that.

    But he thinks that he is right. Furthermore, since we are fallibilists, he may BE RIGHT. Therefore, he has a moral obligation to continue to try to persuade me even if it coerces me.

    If it turns out that he is right, would you still say that he is coercive just by successfully trying to talk to me one more time and successfully saving my life?

    Would you change your mind about who is coercing whom if it turns out that he is right so listening to him just one more time will save my life?

    It seems to me you have to assume that his knowledge is wrong, before you come to the conclusion that his rather innocuous behavior is coercive. Right?

  48. Elliot says:

    How is the religious guy coercing you!?

    You have an initial chat voluntarily (or don’t have one). You decide you don’t want another. You ask him not to come back. He doesn’t come back. Where’s the coercion for you?

    > And what makes you think that he has flawed psychology?

    IFF he is coerced by a request not to come on your private property and chat with you, but instead to leave you alone, that’s a psychological flaw. He has very intrusive and demanding preferences about other people, and if other people don’t do as he wants he feels bad. That’s not a good kind of mindset!

    Even if you are correct that he should keep trying to persuade you (which I’m going to set aside for now, except to comment that maybe he’d have better luck persuading someone else who hasn’t asked him to leave), there’s no reason for him to feel bad about your request. He can see it as a challenge, and understand that other people make mistakes and have a legal right to and so on. No need for him to be coerced.

    > If it turns out that he is right, would you still say that he is coercive just by successfully trying to talk to me one more time and successfully saving my life?

    If he trespasses and talks to you against your will, that is liable to cause you coercion, just like if he steals your iPhone or otherwise violates your rights in an unpleasant way. If it does coerce you, it does. Whether he was right has no bearing on the psychological effect on you before you agree with him.

    > Would you change your mind about who is coercing whom if it turns out that he is right so listening to him just one more time will save my life?

    No.

    > It seems to me you have to assume that his knowledge is wrong, before you come to the conclusion that his rather innocuous behavior is coercive. Right?

    No, it’s irrelevant.

  49. mlionson says:

    Michael (mlionson): “Would you change your mind about who is coercing whom if it turns out that he is right so listening to him just one more time will save my life?”

    Elliot: No.

    Michael: “It seems to me you have to assume that his knowledge is wrong, before you come to the conclusion that his rather innocuous behavior is coercive. Right?”

    Elliot: No, it’s irrelevant.

    Our liberal values do allow us to coerce people. For example, police do not have to knock on the door and get permission to be on personal property; if there is a delirious person in need of immediate surgery who has barricaded himself in his house, the police will not only not need to knock, they will certainly enter the house and drag him out with the help of ambulance personnel. They will do this to save his life. This is right and proper. Our liberal society recognizes that we can do this because the man is not able to give informed consent and his life is too valuable to waste.

    If the same amount of verisimilitude existed in the theory that if I hear one more conjecture from the religious man I will not die (as exists in the theory that coercing a delirious man into surgery will save him), surely our liberal values would insist that I hear the conjecture! For example, if it were well known that I would thank him two days later and that my life would actually be saved, our liberal values would insist that I be given this new information, even if I do not want to hear it. This is because my dissent is not informed, and just a little bit more learning will save my life and cause me to give consent to being forced to hear it, after I hear it.

    Our accepted ethical behavior with the delirious man is far more coercive than my being forced to hear an additional new conjecture from a religious man, if hearing that conjecture will also save my life. On your view, our liberal values would insist that, just because I have not heard the conjecture from the religious man that will surely save my life, I should therefore die! There can be no informed consent unless I hear that conjecture, with my consent or not.

    The reason we now exclude the religious man from my property, but not the police from the property of the delirious man, is that we attach far more confidence to the theory that performing certain surgeries on delirious people will save lives, than that hearing additional conjectures from religious people will save lives.

    If knowledge comes before consent, there will be knowledge and consent. If consent comes before knowledge, there will be neither knowledge nor consent.

  50. Elliot says:

    > If the same amount of verisimilitude existed in the theory that if I hear one more conjecture from the religious man I will not die (as exists in the theory that coercing a delirious man into surgery will save him), surely our liberal values would insist that I hear the conjecture!

    The topic has strayed a lot, but OK you want to discuss liberalism.

    BTW the best liberal writer is William Godwin and you might do well to read him. And it just so happens that he anticipated TCS (and is, to my knowledge, the only writer, ever, to anticipate TCS substantively). I do not think that it’s a coincidence that the man who best understood liberalism also was drawn to a TCS approach. They are related. So perhaps we’re not so far off topic as it may seem.

    Liberal values are about knowledge creation, improving, rationality, and error correction.

    They recognize that in a disagreement, the preacher or trespasser isn’t always right, and that if he uses force it’s bad for the above things (e.g. it doesn’t help the people improve). BTW it’s not *guaranteed* to be bad if luck intervenes or similar, so an example where force luckily turns out for the best won’t be relevant.

    Liberal values further recognize that *we do not know the truth*, so they make no exceptions for “What if the preacher’s ideas are true!” We only have guesses about the truth, which sometimes lead to disagreements, in which case see above. Liberalism does not acknowledge “I am right, and this other guy is wrong, and what should I do about that to fix or help or convert him?” as a valid question, no matter how beneficial it’d be if you’re right, and no matter how low you estimate the odds that you’re mistaken.

    When you have a great idea, it has to go through what we might call *liberal institutions*. It has to survive various tests and challenges to get implemented. And these obstacles are designed so that if it is implemented, it will be implemented *voluntarily*. In fact the entire challenge could be phrased thus: get people to do it without using force. We call non-forceful ways of getting people to agree to ideas “persuasion”.

    Persuasion is very important b/c you have to in some way demonstrate the merit of your idea in order to persuade people — you have to show them some benefits of doing it, and explain it to their satisfaction so they can understand it enough to judge it. In this way, bad ideas have a harder time spreading b/c you try to show the merits and people are like “LOL that’s dumb”. Of course people make mistakes and sometimes bad ideas do spread. The liberal system isn’t about guaranteeing utopia but rather setting things up so the best thing we know how to do can happen. You might say that it only works as well as the knowledge of the people in the system. That’s fine. No possible system can do better than to utilize the knowledge that exists (and help create more knowledge). That’s a fundamental limit.

    TCS says that in families, we must do the liberal thing: use persuasion rather than force in our dealings with our children. That’s not the only thing TCS says, but it’s a good start.

    That leads to the problem: what if we don’t agree about, say, who gets to use the TV today? If neither of us uses force, and we both fail at persuasion, then *what happens next*? And that is where a method of conflict resolution is needed, which brings us back to the original topic.

    With the sick man, we’re doing our best to help him *by his own standards*. We’re doing what we think he would want, even though he can’t tell us right now. With the preacher, if he trespasses against my will, he’s clearly not trying to do what I want, or what I think is best according to my own judgment.

    Helping someone consists in doing things that help them *by their standards*. This is the liberal, fallible conception of help. And it’s why reviving an unconscious man is help, but trespassing to preach is not help.

    None of this analysis has anything to do with whether the preacher is in fact correct about religion (and how much benefit it would do if everyone agreed with him), or how much confidence we have that he’s correct or incorrect. It’s all about acknowledging fallibility and aiming for rational institutions.

  51. mlionson says:

    “With the sick man, we’re doing our best to help him *by his own standards*. We’re doing what we think he would want, even though he can’t tell us right now. With the preacher, if he trespasses against my will, he’s clearly not trying to do what I want, or what I think is best according to my own judgment.”

    You are mistaken. Given the way I phrased the question, it would indeed be consistent with my standards (that growing and protecting knowledge is paramount) if I heard his one additional conjecture, though I did not want to. And liberal society would insist that I hear it, so that I do not die. I am not able to give informed consent without knowledge.

    Additional knowledge that the delirious man did not have (because of his delirium) and additional knowledge that I did not have would indeed make me retrospectively give consent. In both cases, however, coercion is necessary for the delirious man or me to gain the understanding. Again, I am assuming that the one additional comment by the religious man is known to have the same capacity to save my life that surgery on a delirious man does. In both cases (after the surgery and after hearing the religious man) both the delirious man and I will give consent, because what happened is consistent with our deepest values and the deepest values of a liberal society. In both cases, we don’t want to die, and by assumption there is an immediate life-saving benefit to the surgery and to hearing the comment by the religious man.

    In general, liberal values do not support coercion. But there is a reason for that. The reason is that we believe that people will learn *more* when they are not coerced. We do not believe in coercion because it destroys knowledge growth, virtually always, but not always. In the case of the delirious man, and in my case with the religious man, I stated that it is well understood that failure to have one little bit of knowledge will cause me to die and that the absence of surgery will cause death: if both of us are given just a little bit more knowledge, both of us are very likely to consent that the coercion was completely necessary. Liberal values treat life and liberty as precious. Denying knowledge to those who must consent destroys both and is therefore not a liberal value. And indeed the coercion of me in this case is completely consistent with my own values, which are liberal values. This coercion, quite literally, protects “life, liberty and the pursuit of happiness”, certainly liberal values! A person’s willingness to defend freedom is a willingness to coerce to defend freedom. Freedom is not free. Pacifism is not a liberal value.

    BTW, you were the one who brought up liberalism (a critical preference) to try to defend your theory that there are no critical preferences! My argument has been a defense of a particular critical preference that is completely consistent with liberal values:

    My critical preference is this : “When in doubt about two unambiguously unrefuted theories, choose the one that you think creates and protects the most knowledge, even if it compromises consent in the short term, because it will lead to more knowledge and more consent in the future.”

    It seems to me that your critical preference (that you deny that you have) is “When in doubt about two unambiguously unrefuted theories, choose the one you think creates consent now, even if your best conjecture is that it compromises knowledge growth in the future.”

    Do you see the relevance to our previous discussions of critical preferences, now?

  52. Elliot says:

    > > With the sick man, we’re doing our best to help him *by his own standards*. We’re doing what we think he would want, even though he can’t tell us right now. With the preacher, if he trespasses against my will, he’s clearly not trying to do what I want, or what I think is best according to my own judgment.

    > You are mistaken. Given the way I phrased the question, it would indeed be consistent with my standards (that growing and protecting knowledge is paramount) if I heard his one additional conjecture, though I did not want to. And liberal society would insist that I hear it, so that I do not die. I am not able to give informed consent without knowledge.

    Let’s try to be really clear about this.

    A preacher speaks to Joe. Joe is unimpressed. Joe finds the sermon so distasteful that he asks the preacher never to come back or speak to him again. He really doesn’t want to hear more.

    Now suppose the preacher uses force (such as trespassing) to preach more.

    You say the preacher is making a reasonable attempt to help Joe, according to Joe’s own standards (which value knowledge and his life), because the preacher is offering those things.

    I say attention is a scarce resource. We can’t just listen to everyone who wants our attention. We have to prioritize. It’s Joe’s right to divvy up his attention as he chooses. If he thinks he’ll learn more from a book he has than from another sermon, then in Joe’s scheme of things having a sermon interrupt his book reading will *harm him* b/c he judges he’d learn more if he were not interrupted.

    The preacher knows Joe has made a decision in favor of other projects, and against listening to any more preaching. So by trespassing to preach more, against Joe’s will, he’s intentionally going against Joe’s values and preferences. He’s intentionally giving Joe something unwanted (a sermon).

    This is fundamentally the same as if Joe refused to buy a TV at a given price, and then someone decided it would benefit both him and Joe if Joe did buy that TV, and broke into Joe’s home to put the TV there and take the money. It’s overriding Joe’s judgment, by force, on the completely insufficient basis of believing (even genuinely) that Joe is mistaken and this is for the best. This forceful approach is low on error correction: what if the preacher, or the TV salesman, is mistaken? What if Joe is correct? They just don’t worry too much about that and assume they’re correct.

    As to informed consent, it’s up to us how informed we think it is worth being. Because attention to become informed about things is a scarce resource, we have to prioritize it, not just become informed about everything.

    > Additional knowledge that the delirious man did not have (because of his delirium) and additional knowledge that I did not have would indeed make me retrospectively give consent.

    It might or it might not. The preacher doesn’t really know if it would. Joe thinks additional knowledge would not make him want more preaching. They have a straight up, plain old, unremarkable … disagreement. And the preacher is saying “I’m right, so I’ll use force against Joe, and even call it help.” with no concern for error correction or reason.

    > Again, I am assuming that the one additional comment by the religious man is known to have the same capacity to save my life that surgery on a delirious man does.

    You’re cheating by building into the hypothetical scenario that the preacher IS right, and Joe IS wrong, and when you do that then error correcting institutions seem inefficient and flawed compared to just doing what we know is best and by premise in fact is best. But in real life, we never have a set of known true premises that need no error correction. (And btw if we did, why on Earth would Joe disagree with them?)

    So do you see how force isn’t help, and that it neglects error correction when you assume the forcer is correct and the victim is mistaken?

    > In general, liberal values do not support coercion. But there is a reason for that. The reason is that we believe that people will learn *more* when they are not coerced.

    Out of curiosity, which liberal authors ever said that?

    Also, btw, you’re using “coercion” to mean “force” here, right? Please let’s use coercion to mean TCS-coercion to keep things straight, and force for force.

    > It seems to me that your critical preference (that you deny that you have) is “When in doubt about two unambiguously unrefuted theories, choose the one you think creates consent now, even if your best conjecture is that it compromises knowledge growth in the future.”

    Have you actually read my original posts? I offer a method to create a *new theory* that isn’t in any conflict, not for picking between conflicting theories. I further say it’s always irrational to use a refuted theory. And I say that picking between theories which are both refuted or both non-refuted is a mistake. And btw if we thought something would compromise the growth of knowledge (relative to the alternatives), that’d be a criticism of it, so it’d be refuted, so I’d reject it. None of what you attribute to me resembles my position.

  53. mlionson says:

    Unfortunately, Elliot, I do not know how to see my previous posts. Forgive the computer illiteracy!

    I have read your posts.

    Elliot,
    You characterize my position as follows:
    *A preacher speaks to Joe. Joe is unimpressed. Joe finds the sermon so distasteful that he asks the preacher never to come back or speak to him again. He really doesn’t want to hear more.
    Now suppose the preacher uses force (such as trespassing) to preach more.*

    Unfortunately I do not have access to my post to quote exactly what I said because of my inability to navigate on this blog, but surely you must be aware that this is a gross and total mischaracterization.

    I’m almost sure I used the word “verisimilitude”. I am almost sure I said that if the preacher now has a *new* conjecture that he conjectures will save my life, and this conjecture is known in society to have the same degree of verisimilitude as the proposition that surgery will save my life: then if the preacher knocks on my door and presents his conjecture, I argued that he behaved coercively and morally. And I asked you whether you agreed. I compared the *hypothetical* knowledge of the preacher to the knowledge we have that certain forms of surgery will save lives, to make sure you understood (at least I think I did!).

    I said that he has acted coercively, but that it was exactly the right behavior, because one cannot give informed consent without knowledge. Knowledge must come before consent.

    You strongly disagreed and thought that it would be morally wrong and against liberal values for the preacher to save my life, because it would coerce me, like taking an “iPod”. So you seemed to be clearly expressing a preference for consent over knowledge. Better to have people die than to provide a little information to someone that will save his life, if he does not want to hear the information.

    This is so, you argued, even though, just as in the surgery case, I told you my assumption that we have an excellent conjecture that the person hearing the information (against his will) will quickly thank the person who provided it for saving his life.

    Your point does seem clear to me, and a fair reading of what was said will, I think, show that your point of view in this situation was that we should focus on consent in the short term, even if it destroys knowledge in the long term, for example by causing a presumably rational and reasonable person to die.

    You now say that what I say does not resemble your position. I know that you would not kill people in real life by withholding life-saving information, even if someone did not want to hear it, but your argument seems to have led to that exact conclusion in this blog.

    Or, perhaps I did not express myself clearly or perhaps you misunderstood what I said. Again, I have no access to the primary source (earlier parts of this blog) so if you teach me how to access those, I can see what you and I actually said!

  54. Elliot says:

    How do we create knowledge in the long term? By doing certain things in the short term. One of them is focusing on error correction, which implies opposing all aggressive force. Nothing in this view says we should ever do anything at the expense of knowledge creation.

    > You strongly disagreed and thought that it would be morally wrong and against liberal values for the preacher to save my life

    This is a mistake. It is assuming the preacher will save the life. It is assuming he isn’t mistaken. The only fair scenario is the preacher *attempts* to save a life but is fallible. The unbiased version is the preacher forcibly pushes unwanted aid of controversial effectiveness on you.

    Essentially the argument for coercion depends on building infallibilist assumptions into the scenario. Which is why I’ve been emphasizing that what’s needed is rational institutions: ones that work well for fallible people who actually need error correction and can’t just assume or state they are correct.

    > I know that you would not kill people in real life by withholding life-saving information, even if someone did not want to hear it

    What are you talking about? I do that daily. I possess life-saving information which I don’t tell people exactly because they don’t want to listen. TCS saves lives, you know. Thousands of kids die every year from a lot of stupid causes that TCS can prevent, including suicide due to forced school attendance. I hold my tongue about all sorts of crucially important things, all the time, even without being asked not to speak up, simply because telling people the truth won’t do any good if they don’t want to hear it.

    You yourself, by not understanding TCS, do things that risk the death of the children in your life. I have known this for years, but saying it won’t do any good. Suppose you knew how to be 10% more helpful to children you meet, and you meet 500 children over the next 20 years. They each have some chance of suicide, which would be reduced by extra help, so being more TCS would prevent some number of suicides (possibly a fraction below one, depending how you estimate the numbers). None of this is a justification to email you Popper books every day and try to harass you into improving.

  55. mlionson says:

    We have knowledge that surgery can save lives. For any given surgical procedure there is a certain degree of verisimilitude in our conjectures and our social institutions recognize that. For example, we do not get consent from individuals with delirium needing life-saving surgery because we conjecture that surgery will keep them alive and they will be grateful. In my previous example, my (implausible) but certainly possible conjecture was that a particular additional conjecture of a preacher was understood by our social institutions as well as by the preacher to be life-saving. We and our institutions do not have to know *with certainty* that surgery will always save lives any more than our social institutions and the preacher have to perfectly know that his new conjecture will! That is a red herring. The only thing we can do is utilize our best theories. All that has to be possible is that people can receive additional knowledge, against their will, and rapidly agree that they needed it and that it saved their life. One example is enough to refute your theory that coercion of rational people is always wrong. I have given you that.

    Here is an even more plausible one. I could have told someone who seemed threatening never to come on my property again, because I am terrified of what I believe is his inappropriate fighting and his pit bulls. I could be very scared indeed when I see him approach my door after he has been told never to step foot on my property again. But I could also be very, very grateful when he ignores my fears and my property rights, knocks on my door, and tells me that my house is smoking and may burn down.

    But his knowledge of my lack of knowledge (about the fire, in this case) overrides my need to consent and a court would never convict him of trespassing, even if I were so evil as to sue him. One person’s knowledge of another person’s lack of knowledge can be (rarely but possibly) excellent grounds to override consent.

    You could say that my initial set of values led me to thank the man (after he saved my life), and so my consent was not really overridden. But my consent actually was overridden in the short term (I did not want him to come to the door). As rational human beings, our only overriding concern should be increasing the growth of knowledge. I obviously include enacting ethical behavior as the primary demonstration of knowledge. This was true before the fire as well as after. I just did not know how to act consistently with my highest values! But the man who warned me knew how to get me to live up to my highest ethical values (that I am important and should not die in fires). Coercion, in the short term, helped me. Appropriately overriding consent in the short term must always be consistent with our values emphasizing the long-term growth of knowledge, including the growth of ethical behavior. We have enduring values emphasizing the growth of knowledge, whether we are coerced or not.

    One could argue that when I told the man never to come on my property, what I really meant was “Do not come on my property unless it is to tell me about fires, or things that I will want to know!” This assumes that he knows everything about what I could possibly want (obviously unreasonable, since *I* do not know, either). But it also assumes that I know everything about him and everything that I might want and every possibility that could happen. To be able to specify (let alone have him understand) every possible problem that would justify him coming on my property would require that I essentially know everything in the universe, including everything about *his motivations* (to understand what to think when he says something). It is my lack of knowledge, after all, that makes me afraid of him coming on my property to begin with and that justifies his coercion of me.

    So I can’t perfectly know about his motivations, his fighting, his pit bulls, the fire he might notice etc., but I make my best conjecture and say that I don’t want him on my property, at all, and that might be the best thing to do, given that I doubt his motivations.

    But because he knows more than I do about his motivations and the fire, and, most importantly, he knows what I need to know when he sees smoke (isn’t that possible?), he becomes perfectly justified in saving my life and (unintentionally but necessarily) scaring me by walking on my property. Precisely because I cannot know everything about his motivation and am fallible about everything else, I cannot give him precise instructions about when to come on my property. Precisely because he knows more than I do (particularly about his motivations), he is justified in coercing me and saving my life.

    All that is needed to refute your theory is that rational people can receive additional knowledge, against their will, and rapidly agree that they needed it, that it saved their life, and that it was good. All that is needed is one example of rational coercion of a rational person that helps knowledge to grow and so is good. Said in reverse, one example should be enough for a Popperian to refute his theory that coercion of a rational person (in order to increase knowledge) is never good. I have given you that.

  56. mlionson says:

    “You yourself, by not understanding TCS, do things that risk the death of the children in your life. I have known this for years, but saying it won’t do any good.”

    Are you completely convinced, beyond any doubt that, “Email(ing)” me “Popper books every day and try(ing) to harass ..(me).. into improving”, could never improve me?

    I mean that as a serious question.

    Thanks.

  57. Elliot says:

    > Are you completely convinced, beyond any doubt that, “Email(ing)” me “Popper books every day and try(ing) to harass ..(me).. into improving”, could never improve me?

    Does “completely convinced, beyond any doubt” mean I feel certain? Or that I have objective rather than subjective certainty? If so, then no, of course not, to either. If not, what does it mean?

    I have a theory that it won’t work, which I consider only moderately reliable because people are complex and there’s lots of stuff about you I don’t know and so I’m making various guesses that could be mistaken, plus I may have made a mistake.

    Then you say “could never improve”. But it could improve you. It’s physically possible. Never say never! So that’s the wrong way of thinking about it. I can’t guarantee it’d never work, but I can think of other outcomes that’d be way more likely, e.g. that you set up a filter to delete incoming emails from me.

    You already have plenty of access to TCS and Popper, and the opportunity to pursue them if interested. I don’t see why trying to push it on you would help anything. I would only want to bring it up if I had some decent idea about how to persuade you to be interested; just emailing you material without some new argument is a bit pointless.

    So the important thing in this question, I think, is the epistemological misconceptions about certainty. That’s what stands out to me. It’s the mistake I said you were making in some previous comments. And it’s also the mistake in:

    > We have knowledge that surgery can save lives. For any given surgical procedure there is a certain degree of verisimilitude in our conjectures and our social institutions recognize that.

    This is, despite the (mis)use of the word “verisimilitude”, a non-Popperian, justificationist way of thinking about knowledge. It’s fudging its way towards knowledge = justified, true belief. It’s hinting at certainty or certainty of high probability. (Or certainty of high probability of high probability of high probability.)

    We never can know or measure how much verisimilitude any theory has. If you disagree, give the method of measuring, and comment on how reliable that method is, and how reliable your knowledge of its reliability is, and how reliable your knowledge of that is, and so on.

    Regarding the house on fire and the scary guy: that is a substantially different scenario than the original one, where the preacher returns to continue preaching controversial religious ideas. Our best understanding is that the guy will want to know about a fire — what he didn’t want is to be scared, harassed, bothered, etc. Our best understanding is NOT that the guy will want to be preached to again — that’s exactly what he said he didn’t want more of.

    BTW, you realize that if the preacher goes back and is mistaken, he’s doing harm to an innocent? Our society rightly puts a very high burden on people to make damn sure not to harm innocents. Thinking your preaching will save lives, or thinking your TCS-preaching will save lives, even if you’d be right should the other guy listen to you, is not good enough b/c A) it’s controversial, B) reasonable people would disagree, and C) the clause that it will only do good *if* he listens is a problem b/c he’s not going to listen; he judged you’re not worth listening to, and his attention is a scarce resource.

  58. mlionson says:

    “I hold my tongue about all sorts of crucially important things, all the time, even without being asked not to speak up, simply because telling people the truth won’t do any good if they don’t want to hear it.”
    Elliot

    “I possess life-saving information which I don’t tell people exactly because they don’t want to listen.”
    Elliot

    “I believe there is never a tradeoff between hurting people and gaining benefits. There’s always a better way to get benefits without hurting anyone.”

    Elliot, you may wish to work on your logic (and your technique) a little bit.

    “telling people the truth won’t do any good if they don’t want to hear it.”

    Fallibilism… remember?

    Good luck in learning about TCS, economics, and liberal studies. Protecting and helping children is a noble goal and I appreciate your enthusiasm.

  59. Elliot says:

    > Elliot, you may wish to work on your logic (and your technique) a little bit.

    You have not given an argument. Just some quotes which I think are all true.

    > > telling people the truth won’t do any good if they don’t want to hear it.

    > Fallibilism….remember?

    I didn’t say absolutely guaranteed or anything infallibilist. What’s your point supposed to be?

    My point in that quote is that people don’t learn much when they don’t make an effort, let alone make an effort not to learn or even hear what you’re saying. Trying is important in learning, b/c all knowledge is created in and by the learner (with optional help from others), not transferred without them having an active role.

    > Good luck in learning about TCS, economics, and liberal studies. Protecting and helping children is a noble goal and I appreciate your enthusiasm.

    Are you saying you’re done discussing — you give up on thinking of a better way to explain your position, and/or a better way to understand mine — and you’re closing with the above non-arguments, where you don’t even try to explain what your point is?
