You regard two things as on a par when you don't prefer one to the other and aren't indifferent between them. What does rationality require of you when choosing between risky options whose outcomes you regard as on a par? According to Prospectism, you are required to choose the option with the best prospects, where an option's prospects are a probability distribution over its potential outcomes. In this paper, I argue that Prospectism violates a dominance principle --- which I call The Principle of Predominance --- because it sometimes requires you to do something that's no better than the alternatives and might be, indeed is likely to be, worse. I argue that this undermines the strongest argument that's been given in favor of Prospectism.
I offer an explanation for why certain sequences of decisions strike us as irrational while others do not. I argue that we have a standing desire to tell flattering yet plausible narratives about ourselves, and that the cases of diachronic behavior that strike us as irrational are those in which you had the opportunity to hide something unflattering and failed to do so.
I argue that any plausible decision theory for agents with incomplete preferences that obeys the Never Worse Principle will violate Transitivity. The Never Worse Principle says that if one option never does worse than another, you shouldn't disprefer it. Transitivity says that if you prefer X to Y and you prefer Y to Z, then you should prefer X to Z. Violating Transitivity leaves one vulnerable to being money pumped. Although agents with incomplete preferences are already, in virtue of having incomplete preferences, vulnerable to being money pumped, I argue that the money pump argument for Transitivity is more serious than the one for Completeness.
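The classic money pump that intransitive preferences invite can be sketched as follows. The cyclic preference pattern (X over Y, Y over Z, Z over X) and the per-trade fee are illustrative assumptions of my own, not claims from the paper; they only show the structure of the exploitation.

```python
# A minimal sketch of a money pump against cyclic (hence intransitive)
# preferences. The preferences and the fee `epsilon` are hypothetical.

prefers = {("X", "Y"), ("Y", "Z"), ("Z", "X")}  # cyclic: X > Y > Z > X

def run_pump(start, offers, epsilon=1.0):
    """Offer the agent a sequence of trades; she accepts any trade up
    to a strictly preferred option, paying `epsilon` each time."""
    held, wealth = start, 0.0
    for offered in offers:
        if (offered, held) in prefers:  # strictly prefers the offer...
            held = offered              # ...so she trades
            wealth -= epsilon           # ...and pays the small fee
    return held, wealth

held, wealth = run_pump("Z", ["Y", "X", "Z"])
print(held, wealth)  # → Z -3.0: back where she started, three fees poorer
```

The agent ends up holding exactly what she began with, having paid for the privilege; each trade looked rational in isolation.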
Business and economics textbooks warn against committing the Sunk Cost Fallacy: rationally, you shouldn't let unrecoverable costs influence your current decisions. In this paper, I argue that this isn't, in general, correct. Sometimes it's perfectly reasonable to wish to carry on with a project because of the resources you've already sunk into it. The reason? Given that we're social creatures, it's not unreasonable to want to act in such a way that a plausible story can be told about you according to which your diachronic behavior doesn't reveal that you've suffered what I will call diachronic misfortune. Acting so as to hide that you've suffered diachronic misfortune involves striving to make yourself easily understood while disguising any shortcomings that might damage your reputation as a desirable teammate. And making yourself easily understood to others while hiding your flaws will sometimes put pressure on you to honor sunk costs.
I argue that an interesting aspect of the distinction between lying and mere misleading ultimately amounts to a distinction between what we can, in the case of misleading, and cannot, in the case of lying, plausibly get away with. Roughly, an utterance is considered a lie when we think that, were it to be discovered that the speaker communicated something she knew she lacked the grounds to believe, she would not be able to maintain plausible deniability about having done something deceptive. An utterance is considered merely misleading, on the other hand, when we think that, were the same thing discovered, she would be able to plausibly deny her deception. I defend this view and draw out some of the ethical consequences of such an account.
When you are indifferent between two options, it's rationally permissible to take either. One way to decide between them is to flip a fair coin, taking the one if it lands heads and the other if it lands tails. Is it rationally permissible to employ such a tie-breaking procedure? Intuitively, yes. However, if you are genuinely risk-averse --- in particular, if you adhere to Risk-Weighted Expected Utility Theory (Buchak 2013) and have a strictly convex risk function --- the answer will often be no: the REU of deciding by coin-flip will be lower than the REU of choosing one of the options outright (so long as at least one of the options is a nondegenerate gamble). To what extent, if at all, is this a worry for Risk-Weighted Expected Utility Theory? I argue that this fact adds some additional bite to the well-known worries about diachronic consistency afflicting views, like Risk-Weighted Expected Utility Theory, that violate Independence, and that, while these worries are ultimately surmountable, surmounting them comes at a price.
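The calculation behind this claim can be sketched numerically. The particular gambles and the risk function r(p) = p² (a strictly convex choice, i.e. risk-averse in Buchak's sense) are illustrative assumptions of my own, not examples from the paper; they just exhibit the pattern: two options with equal REU whose fair-coin mixture has strictly lower REU.

```python
# A minimal sketch of risk-weighted expected utility (Buchak 2013) with a
# strictly convex risk function r(p) = p^2. Gambles are hypothetical.

def reu(gamble, r=lambda p: p ** 2):
    """REU of a gamble given as {utility: probability}.
    REU = u_1 + sum over i >= 2 of r(P(utility >= u_i)) * (u_i - u_{i-1}),
    with utilities listed in increasing order."""
    utils = sorted(gamble)
    total = utils[0]
    for i in range(1, len(utils)):
        p_at_least = sum(gamble[u] for u in utils[i:])
        total += r(p_at_least) * (utils[i] - utils[i - 1])
    return total

A = {0: 0.5, 10: 0.5}   # a nondegenerate gamble; REU(A) = r(1/2) * 10 = 2.5
B = {2.5: 1.0}          # a sure thing with the same REU, so A and B tie
coin_flip = {0: 0.25, 2.5: 0.5, 10: 0.25}  # fair coin between A and B

print(reu(A), reu(B), reu(coin_flip))  # → 2.5 2.5 1.875
```

Even though the agent is indifferent between A and B, the coin-flip's REU (1.875) is strictly below either option's (2.5), which is the tension the abstract describes.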
My dissertation develops a decision-theoretic account of instrumental rationality (which, very roughly, says that you should align your preferences over your options with your best estimates of how the actual values of those options compare), called The Actual Value Conception of Instrumental Rationality. In the first chapter, I argue that this account underlies Causal Decision Theory and is incompatible with Evidential Decision Theory. In the second chapter, I develop a decision theory for agents with incomplete preferences that, unlike its more popular competitors, is consistent with, and motivated by, the picture of instrumental rationality sketched in the first chapter. In the last chapter, I explore some of the consequences of taking such a view seriously. In particular, I argue that we should reject the idea that instrumental rationality consists in doing what you have the most reason to do, and that it is sometimes rationally permissible to have non-transitive preferences.