When thinking through career options, we’re faced with dilemmas like: should we focus on helping people or animals? Should we work at a charity or earn more money in order to donate? Should we try to help people who live now or people in the future? Reflecting on our values and priorities can help us make important decisions about how to focus our careers.
What do we mean when we say “doing good”?
As you may know, the question of “what is good” has been discussed at length by many people. This article gives very brief introductions to some of the basic questions that may arise when deciding between career paths. There’s much more to say about these deep questions than we can fit here, which is why we link to interesting and relevant articles that can serve as jumping-off points for further reading and thinking about each of these questions.
Increasing happiness and reducing suffering
Many people, including us at Probably Good, believe that making people happier and reducing their suffering is important to doing good in the world. “More happiness is good and more suffering is bad” might sound like a truism, but there are nuances to this claim that aren’t obvious at all. While nearly everyone believes that, in general, you should try to act in ways that increase happiness, how does this interact with other considerations? If you haven’t before, it’s not a bad idea to take a few moments to think about this: In what situations would it be okay to lie in order to reduce someone’s suffering? Are there actions that we would never take, no matter what consequences they may have?
First, we’ll briefly introduce two major families of moral theories that we might adopt.
Consequentialism
Consequentialism is the idea that when we are evaluating the moral value of an action (deciding whether it is the right thing to do), we only care about the action’s consequences. Other ideals are only valuable to the extent that they might eventually promote good consequences. Thinking about this raises some difficult questions: Are there actions that are wrong (stealing, lying, hurting someone) even if they would have positive consequences overall, such as helping many others?
For example, breaking the law is usually considered a bad thing to do. From the point of view of consequentialism, breaking the law is only wrong if it leads to bad outcomes (either for the victims of the crime or for society in general). If we were completely confident that breaking the law would do more good than harm (including all of the very-difficult-to-follow indirect effects), the consequentialist view would be that it is a morally good thing to do.
Lying is another interesting example: Most people believe that some lies are justified (e.g. white lies) and some aren’t. Is lying always morally wrong, regardless of consequences? Does the lie itself not matter, so that we should consider only its consequences? Or is there some difficult-to-draw border between good and bad lies? Can we say, even roughly, where that border lies?
Duty-based theories
Not all moral theories are consequentialist. A prominent family of non-consequentialist moral theories is deontology, which focuses on duties and rights rather than consequences. Deontological moral theories state that actions themselves are right or wrong according to some set of rules. These rules could include anything from “It is wrong to lie” to “Respect your parents” or “Do not drink alcohol”. The rules determine morality first and foremost, even when the actions they prescribe have negative consequences.
Choosing a moral theory
You don’t need to choose a single moral theory and disregard the rest, or even fully commit to a decision about whether an idea like consequentialism is completely correct. While it makes sense to give names (like ‘consequentialism’) to specific, and often absolute, ways of thinking, your own views don’t need to be absolute.
There are many views and ways of thinking about morality that take consequences into account alongside other considerations. In fact, your own moral beliefs can integrate multiple moral theories, or follow a specific moral theory in some cases but not in others. You may even be unsure which moral theory is correct, and need to take that uncertainty into account in your decisions, as we’ll discuss later in this article.
Moral patienthood: who do we care about?
If we want to increase welfare, we have to ask: whose welfare? This question can be tricky because our own biases and instincts tend to favor whoever is more similar or closer to us.
Moral philosopher Peter Singer describes the concept of the expanding moral circle: throughout history, humans have been willing to consider the welfare of more and more beings. Earlier in human history, most people cared almost exclusively about themselves, their family, or their tribe. Over time, the ‘circle’ of those we care about expanded to include more people, less and less like us. Today, more than ever before, we’re aware of how pernicious these biases can be and how easy it is to disregard certain communities or people when we consider morality, whether because of their gender, skin color, nationality, or beliefs.
We look back at people in history who claimed that those who are different should be afforded less moral consideration (or none at all); we often call them racist or sexist and view their lack of care as abhorrent. Of course, most people living in past centuries held these views not because they wanted to be immoral, but because they thought they were right. Some were never exposed to the harms of their views and societal structures, many more had never given the question any thought, and almost all were taught these morals from a young age. These points don’t aim to excuse their behavior, quite the contrary: we should apply an equally critical view to what we might still be missing today!
Understanding this pattern should challenge us to consider others whom we don’t currently count among the individuals we care about (what philosophers like to call “moral patients”). We should ask ourselves: are they also deserving of the same value as us, our family, and others we care about? It’s incredibly unlikely that we have, just recently, arrived at exactly the right moral circle. When future generations look back at us, which of the views we hold will they see as awful? If we can figure that out, we might be able to help take the step forward that we’d want our ancestors to have taken sooner.
Helping where it’s most needed
When considering who we should care about, it’s important to remember the most basic and least controversial part of the principle of impartiality: how close or similar someone is to us doesn’t change how much their welfare matters. Our personal experiences constantly draw us to see and address the local issues around us: they are the most visible and obvious to us, and they’re usually the easiest for us to empathize with.
For many of us, those local issues, while absolutely important in their own right, won’t be the issues that need our skills and help the most, or that can use them most effectively. We encourage you to take a global perspective and ask where your time and effort would make the biggest difference. This is particularly important because the areas where people tend to have the most resources (time, money, etc.) tend not to be the areas where help is needed most. This is easier to measure for donations than for career choices: in many cases, donating to a charity working to alleviate the worst global poverty can help over 100 times more than donating the same amount to a charity alleviating poverty in the high-income countries where most donors live.
While there are very real and valid considerations that may push you to give to those closest to you, it’s at the very least worthwhile to introspect about whether they direct you towards the biggest positive change you can make.
Animal welfare
After thinking about expanding our moral circle to include all humans (and specifically, those who are different from us and need our help), we should consider the moral patienthood of animals. To what extent do they need and deserve our consideration and help? Most people have the intuition that we should care about animal suffering in some situations, such as preventing the abuse of stray dogs. But many don’t have a clear picture of “How much should we care?” or “In which situations should we care?”.
Think of the variety of living creatures: humans, chickens, cows, mosquitoes, fish, dogs, bald eagles. Why do we care for some more than others? Having read this far, you may suspect that some of our reasons are our own biases: Do we prefer creatures that are close to us, and with whom we have more positive interactions? And what differences might be genuine reasons to give more or less moral consideration?
At the very least, you should consider different creatures’ capacity to suffer. It seems reasonable to prioritize helping creatures who would benefit more from the alleviation of suffering. The main difficulty is that animals’ ability to communicate with us is limited, which makes it very hard to fully understand their needs and feelings. There is a lot of brilliant research being done to improve our (still very partial) understanding of animals’ experience, and it should probably inform your views on animal moral patienthood.
Be warned, though: as human history has demonstrated repeatedly, it’s very easy to discount suffering that looks different from our own as ‘not really suffering’. Keep an open mind as to how animals, even those very different from you, might still feel pain.
The long-term future
It’s easier to care about those who are close to us, not only geographically but also in time. It’s easy to care about people who need our help right now, and much harder to imagine (and care about) the consequences of our actions ten or one hundred years down the line. In recent years, we’ve seen an encouraging phenomenon of people taking responsibility for the long-term consequences of their actions as part of the growing climate change movement. Beyond climate change, there are other potential risks and catastrophes, such as nuclear war, pandemics, or risks arising from new technology, that could impact future generations’ welfare. If we believe future people matter and deserve our concern, there are strong reasons to think that securing a long and positive future is an important cause area. At the very least, there will potentially be far more people in the future than exist today.
When thinking about future generations, there are a lot of questions ranging from the practical (how do we know if we’re making an impact if most of the impact is in the far future?) to the abstract (do we prefer a future world with few people who are extremely happy or many people who are reasonably happy?).
Regardless of the specific answers, our actions today can make a big difference in the happiness of future generations, even if we don’t see the effects ourselves.
Given the potentially huge impact we can have on the long-term future, we recommend reading more about longtermism and averting global catastrophic risks.
Moral uncertainty
Some of these questions are quite difficult, as you may have come to appreciate, especially if you read further into the links provided in this article. For most people, there isn’t a clean way to articulate a single, coherent moral theory that they fully believe in and would want to follow in all circumstances. Even those who do have a clear, coherent moral system in mind should be at least somewhat concerned by the question “But are you sure?”.
Career choices naturally involve the sort of decisions that are difficult to make, and that become even more difficult if you’re uncertain about your moral views:
- How much should you prioritize helping specific animals in awful conditions when you’re uncertain whether those animals can feel pain?
- Is it okay to participate in a lucrative but harmful industry if the money you earn is donated to highly impactful charities?
- Should you spend years working to avert a potential disaster that might never come to pass, even without your work?
- If you believe in a certain cause area, but most people whose opinions you value think it’s a waste of your time, how much weight should you place on their view?
Recognizing uncertainty
It’s important to pin down exactly what you’re unsure about. This matters when other factors are clearer (indeed, in many cases, your personal fit or the importance of some causes can be evaluated more easily than questions in moral philosophy). It’s completely fine not to know your precise moral stance, and it might be easier to try to clarify where you stand in a general sense. With many moral questions, it’s easier to say what’s definitely outside your moral view than exactly where you stand.
This can be done by making specific statements about the bounds of your uncertainty, like “helping one person avoid a painful illness for one year is more important to me than helping one chicken avoid a lifetime of abusive conditions, but is still probably less important than significantly improving the lives of 1,000 chickens”. The precise numbers aren’t the point; the important part is having an idea of where your moral range lies. In some cases, even a very large range can be informative enough to be very helpful in your eventual decision-making.
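To see how even a rough statement like that can guide decisions, here’s a minimal sketch. The symbols v_person and v_chicken are our own, purely illustrative notation (not a standard formalism), and we’re loosely treating “significantly improving a chicken’s life” as comparable to sparing it a lifetime of abuse. Writing v_person for the value you place on helping one person avoid a painful illness for a year, and v_chicken for the value of the chicken intervention, the statement above bounds their ratio:

\[
v_{\text{chicken}} \;<\; v_{\text{person}} \;<\; 1000 \times v_{\text{chicken}}
\]

Even a range this wide can settle some comparisons: for instance, an option that helps a million chickens beats one that helps a thousand people under every ratio inside these bounds, since 1,000 × v_person is less than 1,000,000 × v_chicken.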
Recognizing your own uncertainty also has the important benefit of giving you more motivation to read, listen, think and learn more about the subject. Being uncertain, at least to some degree, helps you avoid the pitfalls of blindly defending your current view even in light of new arguments or evidence.
Moving forward
When it comes to moral philosophy, there’s more to read and consider than anyone could get through in a lifetime. Don’t let that deter you from taking time to read and think about these issues: they’re interesting in their own right and can sometimes be important in big decisions. In many cases, it can be useful to discuss them with other people, not only think and read by yourself.
On the other hand, don’t feel bad about exploring impactful careers while you still have a lot of open questions. Just remember which questions you’re unsure about, come back to them, and take notice when you make decisions that are affected by them.
Philosophy & utilitarianism
- Utilitarianism.net – a great introduction to utilitarianism.
- Justice: What’s The Right Thing To Do – a Harvard University lecture by Professor Michael Sandel on the trolley problem and utilitarianism.
- Crucial Considerations and Wise Philanthropy – Nick Bostrom’s talk on how some considerations can have an outsized impact on your decisions and practical conclusions from this (a long talk available both in audio and text format).
- Purchase Fuzzies and Utilons Separately – a blog post on differentiating between doing good to help others and doing good in order to feel good and stay motivated.
Expanding circle & animal rights
- The Drowning Child and the Expanding Circle – Peter Singer’s paper that was discussed in this article.
- Famine, Affluence and Morality – Singer’s 1971 essay on the obligation to donate to disaster relief and people in need.
- Should animals, plants and robots have the same rights as you? – a Vox article on the basics of moral patienthood and the expanding circle.
- Report: Animal consciousness and moral patienthood – a very thorough and detailed report on the evidence for and against consciousness in different animals.
- Wild animal suffering: An introduction – an overview of the cause area: how much animals suffer in the wild, why it matters, and how we can help them.
- The Expanding Circle and Animal Liberation – Peter Singer’s books.
Longtermism
- 80,000 Hours on Longtermism – 80,000 Hours’ clear explanation of, and case for, longtermism.
- Evidence, cluelessness and the long term – a talk by Hilary Greaves (both video and transcript available) on top interventions, the limitations of their evidence, and some possible responses.