Tuesday, July 18, 2017

Response to a Conversation on “Intelligence”




I think much confusion is caused by a lack of clarity about the meaning of the word “intelligence”, and not least a lack of clarity about the nature of the thing(s) we refer to by this word. This is especially true when it comes to discussions of artificial intelligence (AI) and the risks it may pose. A recently published conversation between Tobias Baumann (blue text) and Lukas Gloor (orange text) contains a lot of relevant considerations on this issue, along with some discussion of my views on it, which makes me feel compelled to respond.


The statement that gave rise to the conversation was apparently this:


> Intelligence is the only advantage we have over lions.


My view is that this is a simplistic claim. First, I take it that “intelligence” here means cognitive abilities. But cognitive abilities alone — a competent head on a body without legs or arms — will not allow one to escape from lions; they will only enable one to think of and regret all the many useful “non-cognitive” tools one would have liked to have. The sense in which humans have an advantage over other animals, in terms of what has enabled us to take over the world for better or worse, is that we have a unique set of tools — an upright gait, vocal cords, hands with fine motor skills, and a brain that can acquire culture. This has enabled us, over time, to build culture, with which we have been able to develop tools that have given us an advantage over lions, mostly in the sense of not needing to get near them, as that could easily prove fatal, even given our current level of cultural sophistication and “intelligence”.


I could hardly disagree more with the statement that “the reason we humans rule the earth is our big brain”. To the extent we do rule the Earth, there are many reasons, and the brain is just part of the story, and quite a modest one relative to what it gets credit for (which is often all of it). I think Jacob Bronowski’s The Ascent of Man is worth reading for a more nuanced and accurate picture of humanity’s ascent to power than the “it’s all due to the big brain” one.


> There is a pretty big threshold effect here between lions (and chimpanzees) and humans, where with a given threshold of intelligence, you’re also able to reap all the benefits from culture. (There might be an analogous threshold for self-improvement FOOM benefits.)


The question is what “threshold of intelligence” means in this context. Not all humans reap the same benefits from culture: some have traits and abilities that enable them to reap far more benefits than others. And many of these traits have nothing to do with “intelligence” in any usual sense. Good looks, for instance. Or a sexy voice.


And the same holds true for cognitive abilities in particular: they are more nuanced than what measurement along a single axis can capture. For instance, some people are mathematical geniuses, yet socially inept. There are many axes along which we can measure abilities, and what allows us to build culture is all these many abilities put together. Again, it is not, I maintain, a single “special intelligence thing”, although we often talk as though it were.


For this reason, I do not believe such a FOOM threshold along a single axis makes much sense. Rather, we see progress along many axes that, when certain thresholds are crossed, allows us to expand our abilities in new ways. For example, at the cultural level we may see progress beyond a certain threshold in the production of good materials, which then leads to progress in our ability to harvest energy, which then leads to better knowledge and materials, etc. A more complicated story with countless little specialized steps and cogs. As far as I can tell, this is the recurrent story of how progress happens, on every level: from biological cells to human civilization.


> Magnus Vinding seems to think that because humans do all the cool stuff "only because of tools," innate intelligence differences are not very consequential.


I would like to see a quote that supports this statement. It is accurate to say that I think we do “all the cool stuff only because of tools”, because I think we do everything because of tools. That is, I do not think of that which we call “intelligence” as anything but the product of a lot of tools. I think it’s tools all the way down, if you will. I suppose I could even be considered an “intelligence eliminativist”, in that I think there is just a bunch of hacks; no “special intelligence thing” to be found anywhere. RNA is a tool, which has built another tool, DNA, which, among other things, has built many different brain structures, which are all tools. And so forth. It seems to me that the opposite position with respect to “intelligence” — what may be called “intelligence reification” — is the core basis of many worries about artificial intelligence take-offs.


It is not correct, however, that I think that “innate differences in intelligence [which I assume refers to IQ, not general goal-achieving ability] are not very consequential”. They are clearly consequential in many contexts. Yet IQ is far from being an exhaustive measure of all cognitive abilities (although it sure does say a lot), and cognitive abilities are far from being all that enables us to achieve the wide variety of goals we are able to achieve. It is merely one integral subset among many others.


> This seems wrong to me [MV: also to me], and among other things we can observe that e.g. von Neumann’s accomplishments were so much greater than the accomplishments that would be possible with an average human brain.


I wrote a section on von Neumann in my Reflections on Intelligence, which I will refer readers to. I will just stress, again, that I believe thinking of “accomplishments” and “intelligence” along a single axis is counterproductive. John von Neumann was no doubt a mathematical genius of the highest rank. Yet with respect to the goal of world domination in particular, which is what we seem especially concerned about in this context, putting von Neumann in charge hardly seems a recipe for success, but rather the opposite. As he reportedly said:
“If you say why not bomb them tomorrow, I say why not today? If you say today at five o'clock, I say why not one o'clock?”
To me, these do not seem to be the words of a brain optimized for taking over the world. If we want to look at such a brain, we should, by all appearances, rather peer into the skull of Putin or Trump (if it is indeed mainly their brain, rather than their looks, or perhaps a combination of many things, that brought them into power).


> One might argue that empirical evidence confirms the existence of a meaningful single measure of intelligence in the human case. I agree with this, but I think it’s a collection of modules that happen to correlate in humans for some reason that I don’t yet understand.


I think a good analogy is a country’s GDP. It’s a single, highly informative measure, yet a nation’s GDP is a function of countless things. This measure predicts a lot, too. Yet it clearly also leaves out a lot of information. More than that, we do not seem to fear that the GDP of a country (or a city, or indeed the whole world) will suddenly explode once it reaches a certain level. But why? (For the record, I think global GDP is a far better measure of a randomly selected human’s ability to achieve a wide variety of goals [of the kind we care about] than said person’s IQ is.)


> The "threshold" between chimps and humans just reflects the fact that all the tools, knowledge, etc. was tailored to humans (or maybe tailored to individuals with superior cognitive ability).
> So there’s a possible world full of lion-tailored tools where the lions are beating our asses all day?


Depending on the meaning of “lion-tailored tool”, it seems to me the answer could well be “yes”. Over the course of our evolutionary history, for instance, a lion tool in the form of, say, powerful armor could well have meant that humans were killed by lions in large numbers rather than the other way around.


> Further down you acknowledge that the difference is "or maybe tailored to individuals with superior cognitive ability" – but what would it mean for a tool to be tailored to inferior cognitive ability? The whole point of cognitive ability is to be good at making the most out of tool-shaped parts of the environment.


I suspect David Pearce might say that that’s a parochially male thing to say. One could also say that the whole point of cognitive abilities is to make others feel good — a drive/task that has no doubt played a large role both in human survival and in the growth of our cognitive and goal-achieving abilities in general, a role arguably just as great as that of “making the most out of tool-shaped parts of the environment”.


Second, I think the term “inferior cognitive ability” again overlooks that there are many dimensions along which we can measure cognitive abilities. Once again, take the mathematical genius who has bad social skills. How to best make tools — ranging from apps to statements to say to oneself — that improve the capabilities of such an individual seems likely to be different in significant ways from how to best make tools for someone who is, say, socially gifted and mathematically inept.


> Magnus takes the human vs. chimp analogy to mean that intelligence is largely “in the (tool-and-culture-rich) environment”.


I would delete the word “intelligence” and instead say that the ability to achieve goals is a product of a large set of tools, of which, in our case, the human brain is a necessary but, for virtually all of our purposes, insufficient subset.
Also, chimps display superior cognitive abilities to humans in some respects, so saying that humans are more “intelligent” than chimps, period, is, I think, misleading. The same holds true of our usual employment of the word “intelligence” in general, in my view.


> My view implies that quick AI takeover becomes more likely as society advances technologically. Intelligence would not be in the tools, but tools amplify how far you can get by being more intelligent than the competition (this might be mostly semantics, though).


First, it should be noted that “intelligence” here seems to mean “cognitive abilities”, not “the ability to achieve goals”. This distinction must be stressed. Second, as hinted above, I think the dichotomy between “intelligence” (i.e. cognitive ability) on the one hand and “tools” on the other is deeply problematic. I fail to see in what sense cognitive abilities are not tools. (And by “cognitive abilities” I also mean the abilities of computer software.) And I think the causal arrows between the different tools that determine how things unfold are far more mutual than they are according to the story that “intelligence (some subset of cognitive tools) is that which will control all other tools”.
Third, for reasons alluded to above, I think the meaning of “being more intelligent than the competition” stands in need of clarification. It is far from obvious to me what it means. More cognitively able, presumably, but in what ways? What kinds of cognitive abilities are most relevant with respect to the task of taking over the world? And how might they be likely to be created? Relevant questions to clarify, it seems to me.


Some reasons not to think that “quick AI takeover becomes more likely as society advances technologically” include the following: other agents would be more capable (closer to the notional limits of “capability”) the more technologically advanced society is; there would be more technology, already learned about and mastered by others, that a would-be takeover agent would itself have to learn about and master in order to take over; and, finally, society will presumably learn more about the limits and risks of technology, including AI, the more technologically advanced it is, and will hence know more about what to expect and how to counter it.

Friday, April 21, 2017

New Book: 'You Are Them'




What follows if we reject belief in any kind of non-physical soul and instead fully embrace what we know about the world? The main implication, this book argues, is a naturalization of personal identity and ethics. A radically different way of thinking about ourselves.

“A precondition of rational behaviour is a basic understanding of the nature of oneself and the world. Any fusion of ethical and decision-theoretic rationality into a seamless package runs counter to some of our deepest intuitions. But "You Are Them" makes a powerful case. Magnus Vinding's best book to date. Highly recommended.”
— David Pearce, co-founder of The Neuroethics Foundation, co-founder of World Transhumanist Association / Humanity+, and author of The Hedonistic Imperative and The Anti-Speciesist Revolution.

Free download:
https://www.smashwords.com/books/view/719903

Monday, January 30, 2017

Reasons to Focus on Values as Our Main Cause




Given decent clarity about our fundamental values, the all-important question becomes: what causes and interventions optimize those values?


In this post I shall present some of the reasons in favor of focusing directly on fundamental values in this regard. That is, reasons why a good way to optimize our fundamental values is to reflect on, argue for, and work out the implications of, these values themselves. We may call it “the values cause”.


So why is this sensible? In short, because of the unmatched importance of fundamental values. Our fundamental values comprise the most important and fundamental element in our notional ‘tree of ought’. They are what determine the sensibility of any cause and intervention we may take part in, and hence what any reasonable choice of causes and interventions should be based on. An important implication of this is that the (expected) sensibility of any cause or intervention cannot be greater than the (expected) sensibility of our fundamental values. For example, if we have 90 percent confidence in our fundamental values, and then choose a cause or intervention based on these, we cannot have greater confidence in the sensibility of this cause or intervention than 90 percent. Indeed, 90 percent would be the level of credence we should have if we were 100 percent sure that the specific cause or intervention optimizes our fundamental values perfectly; a degree of confidence we will of course never have about any cause or intervention. Thus, we must have greater confidence in the sensibility of our values than in the sensibility of any action taken to optimize those values.
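To make the arithmetic behind this bound explicit, here is a minimal sketch in Python, assuming (purely for illustration) that an intervention counts as sensible only if our fundamental values are sound and the intervention in fact optimizes them; the numbers are hypothetical.

```python
# A minimal sketch of the confidence bound (illustrative numbers only).
# Assumption: an intervention is sensible only if (a) our fundamental values are
# sound and (b) the intervention actually optimizes those values.

p_values_sound = 0.9           # our confidence in our fundamental values
p_optimizes_given_sound = 0.5  # confidence the intervention optimizes them, if they are sound

p_intervention_sensible = p_values_sound * p_optimizes_given_sound
print(p_intervention_sensible)  # 0.45 -- never greater than p_values_sound (0.9)
```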


In a world where little seems certain, this relationship is worth taking note of. It means that, of all our beliefs pertaining to ethics, our fundamental values are — or at least should be — what we are the most certain about. This also, I would argue, makes them the thing we should be most confident about arguing for in our pursuit of a better world.

Getting Others to Join Us in the Best Way


Arguing directly for our fundamental values rather than for causes and interventions derived from those values also, if done successfully, has the benefit of bringing people down alongside us in the basement of fundamental values from which the sensibility of causes and interventions must be assessed. In other words, not only do we argue for that which we are most confident about when we argue for our fundamental values, we also invite people to join us in the best possible place: the core base from which we ourselves are trying to find out which causes and interventions best optimize our values. Having more minds to help optimize our tree of ought from the bottom up seems very positive, and the deeper down they join us, the better.


And even if we do not manage to convince people to fully share our fundamental values, arguing for our values likely does at least make them update somewhat in our direction, which, given the large changes in practical implications that can result from small changes in fundamental values, could well be far more valuable than convincing others to agree with specific causes or interventions we may favor. Not least because it might make them more likely to agree with these interventions, which then leads to another, albeit somewhat speculative, reason to focus on fundamental values in practice.


For one counterargument to the argument I have made above is that people might be more receptive to arguments for specific causes or interventions than they are to the fundamental values that recommend those causes. Yet I think the opposite is generally true. I suspect it is generally easier to convince people of one’s fundamental values, or at least make them update significantly toward them, than it is to convince others of one’s most favored causes or interventions.


For example, it seems much easier to convince people that extreme suffering is of great significance and worth reducing than to convince them that they should go vegan. And convincing people of the importance of a given cause or intervention may well require bottom-up reasoning from first principles — in this case, fundamental values — for them to see the reasonableness of that cause or intervention. It can indeed seem naive for us to think, after we ourselves have come to support a given intervention based on an underlying value framework, that we should then be able to convince others to support that intervention without communicating the very framework that led us to consider that intervention a sensible one in the first place.


So not only may people be more receptive to our fundamental values than the causes and interventions we support (an admittedly speculative “may”), it might also be that arguing for our fundamental values is the best way to bring people on board with our preferred causes and interventions in many cases, due to the likely necessity of following a chain of inferential steps. And again, if we invite others to try to step in our own inferential footsteps, we might be lucky to have them spot missteps. In this way, we enable others to help us find even better causes and interventions based on our fundamental values than the ones we presently focus on.

An instructive example of failure here, I think, is found in the strategy of most anti-natalists. The vast majority of anti-natalists seem to share the fundamental goal of reducing net suffering, yet their advocacy tends to focus exclusively on anthropocentric anti-natalism — a highly specific and narrow intervention. They appear to confidently assume that this is the best way to reduce suffering in the world, rather than focusing on the fundamental goal of reducing suffering itself and encouraging discussion and research about how best to do this. If anti-natalists focused more on the latter, they would likely have more success, both by inspiring more people to take their fundamental values into consideration, and by inviting these others (and not least themselves) to think more deeply about which other ideas they might be able to spread that could be more conducive to the goal of reducing suffering than the idea of anthropocentric anti-natalism (which seems rather unlikely to be the best idea to push in order to reduce the most suffering in our future light cone).

Reducing Moral Uncertainty/Updating our Fundamental Values


Another reason to focus on fundamental values is our own moral uncertainty. For given that we may be wrong about what we value, whether in a strong moral realist sense or an idealized personal preferences sense (or anything in-between), we should be keen on updating our fundamental values. And reflecting on and discussing them openly is likely among the best ways to do so. To restate this important point once more: given the immense importance of fundamental values, even small updates here could be among the most significant moves we could make.


And fundamental values do appear quite open to change. Indeed, values are contagious and subject to cultural influence to a great extent, as a map of people’s religious beliefs around the world reveals (such beliefs are undeniably closely tied to beliefs about fundamental values). Arguably, our values are subject to change and cultural influence to a significantly greater extent than technological progress is (cf. What Technology Wants by Kevin Kelly); technological progress may be harder to influence, and hence may be less of a leverage point for impacting change than values are. To put things crudely, technologies tend to be developed regardless, while how they are used generally seems more contingent. And arguing for values seems among the best ways to impact how we use our powers.


Values are, to a first approximation, ideas, and ideas tend to be updatable and spreadable. In my own case, I used to not care about ethics at all, then I became a classical utilitarian, and eventually I updated toward negative utilitarianism and suffering-focused ethics as I came upon arguments in their favor. We should expect similar changes to be possible in others, and in ourselves, as we learn more and keep on updating our beliefs.

Better Cooperation?


Not only would we all benefit from having our moral uncertainty reduced/our moral views updated, which is valuable in itself; it seems that we should also expect to benefit from the greater convergence on fundamental values that is likely to follow from mutual discussion and updating on them, even if the magnitude of this updating is small. The reason this is beneficial is that such convergence likely reduces the level of friction in our efforts of cooperation, and on virtually any set of fundamental values, success in achieving the most valuable/least disvaluable future seems to rest on humanity’s ability to cooperate. This makes such cooperation a high priority for all of us. While somewhat speculative, this consideration in favor of convergence on fundamental values, and hence, arguably, in favor of mutual discussion and updating on them, is important to factor in as well.

Fundamental Values and AI Safety


I have tried elsewhere to explain why I think the Bostromesque framing of the issue of “AI safety” is unsound. But even assuming it isn’t, I would argue that fundamental values should likely still be our main focus, the reason being that we have little clarity or consensus about which values to load a notional super-powerful AI with in the first place (and I should note that I find using the term “AI” in this unqualified way highly objectionable — for what does it refer to?).


The main problem claimed to exist within the cause of “AI safety” is the so-called control problem, particularly what is called the value loading problem: how do we load “an AI” with good values? What seems implicit in such a question, however, is that we have a fairly high level of consensus about what constitutes good values. Yet when we look at modern discussions of ethics, especially population ethics, we find that this is not the case — indeed, we see that strong certainty about what constitutes good values is hardly reasonable for any of us. This suggests that we have a lot to clarify before we start programming, namely what values we estimate to be ideal. We must have decent clarity about what constitutes good values before we can implement such values — in anything we do or create. We must solve the values problem before we can solve any notional values loading problem.


For an example of an unresolved question, take the following, in my view critically important one: What are the theoretical upper bounds of the ratio between happiness and suffering in a functional civilization, and can the suffering it contains, if there is any, ever be outweighed by the happiness? At the very least, these questions deserve consideration, yet they are hardly ever asked (not to mention the loud silence on the issue of the utilitronium shockwave that would seem, at least in theory, the main corollary of classical utilitarianism; are classical utilitarians obliged to work toward such a shockwave, contra the present dominant view on “AI ethics”, which seems to be an anthropocentric preference utilitarianism of sorts [see the note on the goals of AI builders below], a view that appears very bad from a classical utilitarian perspective, at least compared to a utilitronium shockwave?).


Another example would be the aforementioned subject of population ethics, where many ethicists believe that we should bring about the greatest number of happy beings we can, while many others believe that adding an additional happy life to any given population has no intrinsic value. Given such a near-maximal divergence of views on an issue like this, what does it mean to say that we should build a system that does what humans want? What could it mean?


This issue of value implementation underscores the importance of convergence on values, as that would likely make any such project of implementation go more smoothly (an example of the general point about human cooperation made above). It could well be that encouraging mutual value updating among those who are trying to build the world of tomorrow — both in the realm of software and in other realms — is a better way to implement our values than bargaining at the level of direct implementation with others whose values are more divergent than they would have been had we put more effort into arguing for our fundamental values directly.


In other words, if humans are going to program values into “an AI”, the best way to impact the outcome of that process could well be to impact the values of these humans and humanity in general. Not least because the goal many of these AI researchers aim to implement in tomorrow’s software simply is “that which humans want” (Paul Christiano: “I want to see a future where AI systems help humans get what they want […]”; OpenAI: “We believe AI should be an extension of individual human wills […]”; the so-called Partnership on AI by Google, Facebook, Microsoft, Amazon, IBM, and Apple seems to have essentially the same goal).

I have argued that the future of “intelligence” on Earth and beyond will be shaped by a collective, distributed process composed of what many agents do, and this holds true in the case of a software takeover as well. And the best way to impact such a collective process in a positive direction is, I think, most likely one where we try to impact values directly.





Whether we deem it the main cause or not, it seems clear to me that “the values cause” must be considered a main cause, and an extremely neglected one at that. Our altruistic efforts ought to be informed by careful considerations based on first principles, those principles being our fundamental values. Yet for the most part, this isn’t what we are doing. If it were, we would have better clarity about what exactly our first principles are; at the very least, we would be aware of the fact that we do not have such clarity in the first place. Instead, we go with intuition and vague ideas like “more happiness, less suffering”, believing that to be good enough for all practical purposes. As I have tried to argue, this is far from the case.

Saying that we should focus much more on fundamental values is not, however, to say that we should not focus on other specific causes and interventions that follow from those values, nor that we should not do advocacy for these. I think we should. What I think it does imply, however, is that we should try to communicate our (carefully considered) fundamental values in such advocacy. For instance, when doing concrete anti-speciesist advocacy, we should phrase it in terms of our fundamental values, e.g. concern for sentience and involuntary suffering. Thereby, we do advocacy both for a (relatively) specific cause recommended by our fundamental values and for those values themselves, which invites people to consider and discuss both. It does not have to be a matter of either focusing on values or focusing on “doing”. We can encourage people to reflect on fundamental values through our doing.

Wednesday, November 23, 2016

Fundamental Values and the Relevance of Uncertainty



As argued in a previous essay, reflection on fundamental values seems to stand among the most important things we could be doing. This is really quite obvious: working effectively toward a goal requires knowing what that goal is in the first place. And yet it does not seem to me that we, as purportedly goal-oriented “world improvers” or “effective altruists”, have much clarity about what our goal is, nor do we seem to be working particularly hard on gaining it. This, I think, is unreasonable. How can we systematically try to help or improve the world as much as possible if we do not have decent clarity about what this in fact means? We can’t. We are trying to optimize an ill-defined function. And that is bound to be a confused endeavor.

It is tempting to be lazy, of course, and to think that we at least have some idea about what a better world looks like — “one that contains less unnecessary suffering”, for instance — and that this suffices for all intents and purposes. Yet that would be a fatal mistake, since fundamental values are what the sensibility of the pursuit of any action or cause rests upon. Everything depends on fundamental values. And even apparently small differences in fundamental values can imply enormous differences in terms of what we should do in practice. This means that a three-word sentiment like “minimize unnecessary suffering”, while arguably a good start, will not suffice for all intents and purposes (after all, what exactly do “minimize”, “unnecessary”, and “suffering” mean in this context? E.g. does “minimize” allow outright destruction of sentient beings or not?). We need to be as elaborate and qualified as possible about fundamental values if we are to bring about the most valuable/least disvaluable outcomes.

Indeed, I would argue that the unmatched importance of fundamental values, combined with the fact that serious reflection about fundamental values seems widely neglected, implies that such reflection itself stands as a promising candidate for being the most important cause of all, however detached from real-world concerns it may seem. After all, clarification of fundamental values is all about clarifying what is most important, which almost by definition makes it the most important thing we could be doing. Only when we are reasonably sure that we have a decent map of the landscape of value, a decent idea of what the notional “utility function” we are trying to optimize looks like, can we move effectively toward optimizing accordingly.


“Improving the World” — Two Questions Follow

If we have the goal of improving the world, this gives us two basic things to clarify: 1) what does improving the world mean? In other words, what does the goal we are trying to accomplish look like in more specific terms? And 2) how does one accomplish that?

We have an “end question” and a “path question”. And I would argue that we are not sufficiently aware of this distinction, and that we are generally far too fixated on paths compared to ends. We are not wired to reflect on goals, it seems, at least not as much as we are to accomplish an already given goal. We are optimizers more than we are reflectors, which makes sense from an evolutionary perspective. Yet it makes no sense if we are serious about “improving the world”. Success in this regard requires reflection on the aforementioned “what” question, and perhaps far more resources should be spent on reflecting on this question than on attacking the “how” question, since, again, the sensibility of any path depends on the sensibility of the end that it leads to. Paths depend on ends.


What Does Clarification of the “What” Question Look Like?

To many of us, answering this “what” question has consisted in our deeming utilitarianism to be the correct/our preferred moral theory, and then we have jumped to the path stage from there — how do we optimize things based on this theory?

Yet this is much too vague an answer to warrant moving on to the “how” stage already. After all, what kind of utilitarianism are we talking about? Hedonistic or preference utilitarianism? Even similar versions of these two theories often have radically different practical implications. More fundamental still, do we subscribe to classical or negative utilitarianism? The differences in terms of practical implications between the two can be extreme.


What Kind of that Kind of Utilitarian?

And yet much still remains to be clarified at the “what” stage even if we have these questions settled. For instance, if we subscribe to a version of negative hedonistic utilitarianism — i.e. hold that reducing conscious experiences of suffering is our highest moral obligation — this still leaves us with many open questions. For to say that our focus is purely on suffering still leaves open how we prioritize different kinds of suffering. Crucially: are we much more concerned with instances of extreme suffering than we are with comparatively milder forms, perhaps even so much more that we consider it impossible for any number of mildly bad experiences to be worse than a single very bad one? And, similarly, do we consider it impossible for any number of very bad experiences to be worse than a single even worse experience, and so on? We may place any number of points along the continuum of more or less horrible forms of suffering where no amount of less bad experiences can be considered as bad as the suffering at that given point, and the differences in terms of the practical implications that follow from views with and without such points can again be enormous (for instance, given such a “chunked” view of the relative disvalue of suffering, averting the risk of instances of maximally optimized states of suffering — “dolortronium” — would seem to dominate everything else in terms of ethical priorities, while other views might only consider it yet another important risk among many).


Is the Continuum Exhaustive or Not?

Another thing that would seem in need of clarification is whether the continuum of more or less (un)pleasant experiences provides an exhaustive basis for ethics, as opposed to merely being an extremely significant part, which it no doubt is on virtually any ethical view that has ever been defended. For example, if we imagine a world inhabited by a single person who suffers significantly and who is destined to suffer in this way for the rest of their life, yet who nonetheless very much wants to live on, would it be right for us to painlessly kill this person if we could? It would seem that we are obliged to do so on hedonistic versions of utilitarianism, and yet saying that such an act is permissible, much less normative, seems highly counterintuitive, and it seems to suggest that where on the continuum of more or less (un)pleasant states of consciousness a person’s experiences fall is — while highly important — not all that matters. One may consider this a strong reason in favor of granting significant weight to preferences in one’s account of what matters.

Yet to consider both the quality of experiences and preferences is arguably still not sufficient when it comes to what is ethically relevant. For imagine that we again have a world inhabited by just one person, a person who experiences the world like you and I do, with the (admittedly rather significant) exceptions that their experience is always exactly hedonically neutral — i.e. neither pleasant nor unpleasant — and that they have no preferences. If preferences and hedonic tone exhaustively account for what makes a being ethically significant, it would seem that there is nothing wrong with killing this being. Yet this does not seem right either, at least not to me. After all, this being does not want to die, so who are we to deem their death permissible? What if we learned that one of our fellow beings in this world actually experiences the world in this way? Would this mean that they are not inherently valuable as individuals? That does not seem right to me.


The Nature of Happiness, Suffering, and Persons

Even if we have clarified the questions above and know that our goal is, say, to minimize extreme suffering and premature death as much as possible (among other things), this still leaves an enormous research project related to the “what” question ahead of us. For what is suffering and what is a person? While the answers to these questions may be fairly clear in phenomenological terms (although perhaps they are not), they are far from clear when we speak in physical terms. What is suffering and happiness in terms of physical states? And what are the differences between the physical signatures of mild and extreme forms of suffering? More generally, what is a person in physical terms? In other words, what does it take to give rise to a unitary conscious mind? Without decent answers to these questions concerning the nature of our main objects of concern, we cannot hope to act effectively toward our goals.

And yet barely anyone seems to have made the clarification of these crucial questions a main priority (David Pearce, Mike Johnson, and Andrés Gómez Emilsson are notable exceptions). Such clarification of this aspect of the “what” question must also be considered a neglected cause (one can say that there is a phenomenological side of the “what” question, where we discuss what is valuable in terms of conscious states [e.g. suffering and happiness], and a physical one, where we describe things in terms of physical states [e.g. brain states], and both are extremely neglected, in my view).

Can digital computers mediate a unitary mind that can suffer? Can empty space? If so, does empty space contain more suffering than what we would expect there to be in the same amount of space filled with digital computers of the future? These may seem crazy questions, but much depends on our answers to them. Acting sensibly requires us to have as good answers to such questions concerning the basis of consciousness as we can, as quickly as we can. We need to attack these “what” questions concerning the nature of consciousness, including happiness and suffering in particular, with urgency.


Reflection on Values — Win Win

As mentioned, small differences in fundamental values can yield enormous differences in terms of practical implications, which hints that it makes good sense to spend a significant amount of our resources on becoming more qualified about them. And this applies whether we are moral realists or simply wish to optimize “what we care about”. For a moral realist, there are truths about what has value, and we can discover these or fail to. Similarly, for a moral subjectivist who wishes to optimize “what they care about”, deep reflection seems equally reasonable, since in this case there are also truths to be discovered in some sense: truths concerning what one in fact cares about.

Why, then, do we see so little discussion concerning fundamental values? After all, having a large discussion about these seems likely to help us all become more qualified in our reflections — to reconsider and sharpen our own views — and not least to bring others closer to our own view by causing them to update. And even if discussion only causes others to move away from one’s view, this seems like a welcome call for serious reexamination, which can then be done based on the reasons given for the rejection of one’s view. It seems like a win-win game that we are all guaranteed to gain from, yet we refuse to show up and claim the reward.

One may object that all this reflection is a distraction from the real suffering going on today that we should address with urgency. While I am quite sympathetic to this sentiment, and share it to some degree, the urgency and magnitude of suffering going on right now does not imply that we should reflect less. After all, the primary reason that many of us have made reducing suffering a priority in the first place was reflection, and the same applies to how we got to care about the biggest specific sources of suffering we are concerned with, such as factory farming and suffering in nature — we came to realize the importance of these via reflection, not by optimizing already established goals. And who is to say that there may not be more forms of suffering we are still missing, even forms of suffering that could be taking place today? Moreover, the fact that the far future is much bigger than the immediate future, and therefore will contain much more suffering by any standard, implies that if we truly are concerned with reducing suffering, starting today to reflect on how we can best reduce suffering in the future seems among the most sensible things we can do. Even in a world full of urgent catastrophes, we still urgently need to reflect.

However, saying that reflection should be a priority is of course not to say that we should not also be focused on direct interventions. After all, experience with interventions is likely to teach us many things and provide valuable input for our reflections about value and about what can be achieved in the space of more or less valuable states of the world.


What Is Value and What Is Valuable? — My Own View

In the hope of encouraging such thinking and discussion about fundamental value, I shall here present my own idiosyncratic, yet unoriginal, account of what value is and what has value. This, in my view as a moral realist, is an attempt at getting the facts right when it comes to what value is, which is not to say that I do not maintain considerable uncertainty about it (as we should in the case of all difficult factual questions).

I believe value is a property of the natural world — more specifically, a property of consciousness.

Perhaps I should be intensely skeptical of myself already at this point. For doesn’t it seem suspiciously self-centered for me, as a conscious being, to claim that consciousness is what matters, indeed all that matters, in the world? Why should only conscious beings matter? Why not something else?

This may indeed seem strange, but I think this skepticism gets everything backwards. Contrary to common sense, it is not the case that we have a general Platonic notion of “value” drawn out of some neutral nowhere that we then arbitrarily assign to consciousness. Rather, value emerges and is known in conscious experience, and may then be projected from there onto the “world out there”. In my view, value, like the color red, does not exist, indeed cannot exist, outside conscious experience, because, like red, value is itself a phenomenal phenomenon. We may talk about non-phenomenal value, and even do so in meaningful ways — we can for instance talk about instrumentally valuable things — just like we can talk about red objects “out there”, yet, ultimately, “value” and “red” are not external to consciousness; they are properties/states of it.

“But how does this fit with the thought experiment above that strongly hints that preferences seem intrinsically important as well, and the thought experiment that hinted that even in combination, hedonic tone and preferences do not seem able to provide an exhaustive account for what is valuable?”

Preferences do indeed matter, yet in what sense can they be considered different from our conscious states? Preferences are contained in our experience moment to moment, and if a state of experience contains a preference to continue that state, this conscious state can, I would argue — even if it contains pain — be considered valuable in a broader sense of value, yet one that still only places value in experience itself. Preferences are yet another aspect of experience, and a highly significant one in terms of value.

Another, more controversial response one might give is that our healthy social intuitions that are of great instrumental value — such as a relentless insistence on respect for the preferences and lives of others — cause us to overestimate the badness of death in both thought experiments above (which is a reasonable reaction, and hence not an overestimate, in our social world, where embracing the notional sanctity of life indeed is of immense instrumental value). After all, we do not find it terribly bad, if even bad at all, when a person who very much wants to stay awake falls asleep against their will, and yet the case of painlessly turning off someone’s consciousness against their will is, modulo the secondary effects on others and on ourselves (that we were supposed to ignore in the thought experiments above, given that we had a world inhabited by just one person, which should hold all else equal), in effect the same from the perspective of the person who falls asleep. One might object that in the case of sleep, one will wake up again, yet we could also say in the case of “turning someone off” that we could turn the person on again eight hours later. This hardly makes us see the turning off as less bad, especially if we continue turning the person off like this every day. The fact that the turning off is done by someone else, and that that someone is ourselves of all moral angels in the universe, just does not sit right with our social and moral — to a first approximation, “afraid to get punished/stand outside” — intuitions.

We have strong intuitions about death being a bad thing, which is not at all hard to make sense of in evolutionary terms. In our evolutionary past, we needed our fellow beings whom we cared about to be around for the sake of our survival and for our genes to be propagated. Largely for that reason, it seems safe to say, we have evolved to feel great sorrow and pain when those we care about die. To perceive that as very bad. Yet is the badness in the death or in our perception of it?

I do not have clear answers to these difficult questions. However, what I think is clear is that value ultimately pertains to consciousness and consciousness only. This is the common thread in both thought experiments above: we have pitted hedonic tone and preferences against each other, and also removed them both, yet consciousness was there in the subjects in both cases, and this does seem the undeniable precondition for there to be any value, and hence for any ethical concern to meaningfully apply. If we were talking about unconscious bodies, there would be no dilemma. The only remaining problem would then be the secondary effects of the kind Kant worried about with respect to our harming non-human beings: that hurting them might make us more prone to harming “real” moral subjects.

In conclusion, the claim that value is something found only in consciousness holds, in my view. And not only do I hold that value is ultimately contained in this singular realm that is consciousness, I also think we can measure it along a single scale, at least in theory if not in practice. In other words, I find value monism compelling (see the link for a good case for it).

This is not to say that (dis)value is a simple phenomenon, much less something that can be easily measured. Yet it is to say that it is something real and concrete that we can locate in the world, and something there can be more or less of in the world, which of course still leaves many questions unanswered.


Positive and Negative Value — Commensurable or Not?

For to claim that value comes down to facts about consciousness is rather like saying that science is about uncovering facts about the world — it says nothing about what those facts are. For example, saying that there is positive value in happiness while there is negative value in suffering does not imply that these values are necessarily commensurable. Many have doubted that they are. Karl Popper was one such doubter: “[…] from the moral point of view, pain cannot be outweighed by pleasure, and especially not one man's pain by another man's pleasure. Instead of the greatest happiness for the greatest number, one should demand, more modestly, the least amount of avoidable suffering for all […]”

So is David Pearce: “No amount of happiness or fun enjoyed by some organisms can notionally justify the indescribable horrors of Auschwitz.”

I find that I agree with many of these asymmetrical intuitions — at least when it comes to extreme suffering (to say that extreme suffering cannot be outweighed by any amount of happiness is not to say that this also applies to mild forms of suffering; too often discussions get stuck in this latter dilemma, happiness vs. mild suffering, rather than the former, happiness vs. extreme suffering, where it is much harder to defend a non-negative position).

There is such a thing as unbearable suffering, yet it seems that there cannot be anything analogous on the scale of happiness. The expression “unbearable levels of happiness” makes no sense. Another thing that we find in suffering, at least in extreme suffering, that we do not find in happiness is urgency. There is no urgent obligation for us to create happiness. For instance, imagine that we are at an EA conference and someone shows up with happiness pills that would make everyone maximally happy. Would there be any urgency in giving everyone such a pill as quickly as possible? Would and should we rush to distribute this pill? It seems not. Yet if a single person suddenly fell to the ground and experienced intense suffering, people would and should rush to help. There is urgency for betterment in that case — and that urgency is inherent to extreme suffering while wholly absent in happiness. We would rightly send ambulances to relieve someone from extreme suffering, but not to elevate someone to extreme levels of happiness.

A similar consideration was crucial in my own moving away from the view that happiness and suffering are commensurable, more specifically, a consideration about the Abolitionist Project that David Pearce advocates. For if happiness and suffering are truly commensurable and carry the same ethical weight, this would mean that a completion of the Abolitionist Project — that is, the abolition of suffering in all sentient life — would not represent a significant change in the status of our moral obligations. We would then have just as great an obligation to keep on moving sentience toward greater heights. Yet this did not seem right to me at all. If we were to abolish suffering for good, we would, I think, have discharged our strongest moral obligations and be justified in breathing a deep sigh of relief.

Another reason in favor of the asymmetrical view is that it seems that the absence of a good is not bad in the same way that the absence of a bad is good. If a person were in deep sleep, experiencing nothing, rather than, say, having the experience of a lifetime, this cannot, I believe, be characterized as a catastrophe. It is in no way similar to the difference between sleeping and being tortured, the difference between which is a matter of catastrophe and great moral weight.

In contemplating any supposed symmetry between suffering and happiness, it seems worth considering whether there is any pleasure so great that it can justify just a single one of the atrocities that happen every day — a rape, for instance. Can the pleasure experienced by a rapist, if it is made great enough, possibly justify the suffering it imposes on the rape victim? Classical utilitarianism has it that if the pleasure is great enough for the rapist, the rape can in fact be justified, even normative. Negative utilitarianism pulls the brakes here, however. The level of pleasure experienced by the rapist is irrelevant: imposing such harm for the sake of pleasure cannot be justified.

This is of course not to say that there is not great value in happiness. Indeed, there is no contradiction in considering pleasure more valuable than nothing, and to consider increasing happiness to be valuable, yet to not ascribe urgency to it, and to not consider it a moral obligation. This is my view: Happiness is wonderful, but compared to the alleviation of extreme suffering, increasing happiness (of the already happy) seems secondary and morally frivolous — like a supererogation rather than a moral obligation. Counterintuitively, however, the urgency of alleviating extreme suffering does actually make boosting happiness and good physical and mental health an urgent obligation too, at least an instrumental one, as we must stay healthy and motivated if we are to effectively alleviate extreme suffering.


The Continuum of Suffering: Breaking Points or Not?

As my repeated mention of extreme suffering above hints, I do believe that there is a breaking point along the continuum of suffering, and likely many, at which no amount of less bad experiences can be considered as bad as the suffering at that point. For example, it seems obvious to me that no number of moments of tediousness can be of greater disvalue than a single instance of torture. One might argue that such a discrete jump seems “weird” and counterintuitive, yet I would argue that it shouldn’t. We see many such jumps in nature, from the energy levels of atoms to the point at which Hooke’s law breaks down: you can keep on stretching a spring, and the force with which it pulls will be approximately proportional to how far you stretch it — up to a point, the point where the spring snaps. I do not find it counterintuitive to say that gradually making the degree of suffering worse is like gradually stretching a spring: at some point, continuity breaks down, and our otherwise reasonably valid framework of description and measurement no longer applies.
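For those who like to see the structure of such a view spelled out, here is a minimal sketch of one way a lexical threshold could be formalized; it is only a toy model with hypothetical numbers, not a worked-out proposal.

```python
# A toy sketch of a "lexical threshold" view of disvalue (illustrative only).
# An outcome's disvalue is a pair: (extreme suffering, ordinary suffering).
# Python compares tuples lexically, so no amount of ordinary suffering
# ever outweighs any positive amount of extreme suffering -- the point
# where the "spring snaps".

def disvalue(extreme_suffering: float, ordinary_suffering: float) -> tuple:
    return (extreme_suffering, ordinary_suffering)

torture = disvalue(1.0, 0.0)          # a single instance of torture
tedium = disvalue(0.0, 10.0 ** 100)   # astronomically many moments of tediousness

print(torture > tedium)  # True: the lexical ordering ranks the torture as worse
```

More breaking points would simply correspond to longer tuples, one entry per tier of suffering.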

Unfortunately, I do not have a detailed picture of where such points lie or, as mentioned, how many there might be. All I can say at this point is that I think this is an issue of the utmost importance to contemplate, discuss, and explore in greater depth in the future, and that much depends on how we view it.

Thus, it seems to me that the prevention of the most extreme forms of suffering — the prevention of the emergence of “dolortronium”, if you will — is our main moral obligation. In my view, this is where the greatest value in the world lies. I could be wrong, however.


The Relevance of Uncertainty — Doing What Seems Best Given our Uncertainty

“When our reasons to do something are stronger than our reasons to do anything else, this act is what we have most reason to do, and may be what we should, ought to, or must do.”
— Derek Parfit, from the summary of the first chapter of “On What Matters”

It seems reasonable to maintain some uncertainty when it comes to our view of fundamental values. This again applies whether we are moral realists or subjectivists. In the case of moral realists, there is always the risk of being wrong about what is in fact valuable, while in the case of moral subjectivists, there is the risk of being wrong about what one actually cares about most deeply. This is not, however, to say that one knows nothing, or that one has no functional certainty about anything. For instance, while we may not be able to settle the details about value, we likely all agree and have great confidence in the claim that, all else equal, suffering is bad and worth preventing, and the more intense, the worse and more worth preventing it tends to be.

The interesting question is how to act given our moral uncertainty. Doing what seems most reasonable in light of all that we know seems, well, most reasonable. Yet, given uncertainty about fundamental values, what seems most reasonable is not to merely pick the single ethical theory or account of value that we find the most compelling and then to try to work out the implications and act based on that, although it may be tempting and straightforward. Rather, the most reasonable thing would be to weigh the plausibility of different accounts of value, including one’s preferred one, and to then work out the implications and act based on the collective palette of weighted values one gets from this process, however small one’s normative uncertainty may be.

And it is worth noting here how the distinction between absolute and relative uncertainty is highly relevant. For imagine that we know only of three different value theories, and we assign 5 percent credence to value theory A, 10 percent to theory B, and 15 percent to theory C. This is not the same situation as if we assign 10 percent to A, 20 percent to B, and 30 percent to C, although the relative weights between the theories are the same. In the first case, the possibility that we are fundamentally wrong about values is kept far more open than in the second, and this has implications for how confident one should be, and for how many resources one should put into getting a better grasp of values compared to other things.
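To spell out the arithmetic, here is a minimal sketch using the numbers above; reading “fundamentally wrong” as the credence left over for possibilities outside the three theories is my own gloss.

```python
# A minimal sketch of absolute vs. relative credence, using the numbers above.

case_1 = {"A": 0.05, "B": 0.10, "C": 0.15}
case_2 = {"A": 0.10, "B": 0.20, "C": 0.30}

for name, credences in [("case 1", case_1), ("case 2", case_2)]:
    total = sum(credences.values())
    relative = {theory: round(p / total, 3) for theory, p in credences.items()}
    leftover = round(1 - total, 3)  # credence that none of the three theories is right
    print(name, "relative weights:", relative, "| leftover credence:", leftover)

# Both cases yield the same relative weights (roughly 0.167, 0.333, 0.5),
# but case 1 leaves 70 percent of credence open for being fundamentally wrong,
# whereas case 2 leaves only 40 percent.
```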

To relate this to my own view, I have relatively high confidence in the view that all value ultimately relates to consciousness — more than 90 percent — which is a high absolute credence. Yet when it comes to the possibilities of consciousness, my own provincial knowledge of the space of possible states of mind forces me to admit that my view of the landscape of value — that is, the landscape of value found within consciousness — could be deeply flawed. Concerning this question, I have a considerable degree of uncertainty, yet relatively speaking, compared to other accounts of value I have come across, I still find my own view the most compelling by far (and this should hardly be surprising, given that my current view is already a product of countless updates based on writings and discussions on ethics). However, the fact that I have changed my mind about value in significant ways over the last few years should also teach me to be humble and to admit that my present view could well be wrong.

How confident I should be in my best estimate of what the landscape of value in consciousness looks like is hard to say — 70 percent? 5 percent? For the fact that the landscape of possible experiences lies mostly unexplored before me does not invalidate the limited knowledge of that landscape I do have, or the reasoning about it I have done, and this knowledge and reasoning do provide what appears to me a decent basis for my current best estimate. I should probably be humble and keep on reflecting, yet at the same time it does not seem that my large uncertainty, in itself, should cause me to change my current estimate — after all, my view might be wrong along all axes, in both positive and negative directions. I might have an overly negative view of value, or I might not. If anything, my uncertainty calls for deeper exploration and reflection.


The Views of Others

There are other people in the world besides ourselves, and many of them have thought a lot about the subject of value as well, which makes it seem worth paying attention to their views and updating based on them. After all, why should we be more correct than others when it comes to what is valuable? Why give our own perspective a privileged position compared to those of other conscious minds that also experience value? Or, phrased in subjectivist-friendly terms: if others upon reflection have found that they value something different from what we ourselves value, might we not in fact also value that upon reflection, at least to a greater degree than we thought?

After all, in dealing with ethics and values, what many of us think matters is not our own view of the perspectives of others, but those perspectives themselves. It therefore makes good sense to listen to those perspectives. And if others report something radically different from what we believe about their perspectives and about what they find valuable, who are we to claim that we know better on their behalf, about a perspective they know intimately and we do not? Is that not just yet another instance of the “vulgar pride of intellectuals”?

I think this is a valid point, and in my case, this consideration should arguably move my view in a less negative direction, and it probably has, although it is not entirely clear how much it should move me. After all, I do not think my view is that contrary to what others report. Again, my view is not that happiness lacks great value, but rather that it cannot outweigh extreme suffering, and I have yet to encounter a convincing case against this view (and not many have tried to make such a case, it seems).

Another reason I should perhaps not move so much is that some of the most influential traditions in Asia — the continent where the majority of the human population lives — such as Buddhism and Jainism, seem to share my negative, i.e. suffering-focused, values. The fact that paying close attention to consciousness is a central part of these traditions, and the fact that optimism bias is strong in most humans and seems likely to influence our evaluations of what is valuable, could well imply that I should resist the tug from the views of the more “positive”, predominantly Western thinkers. More than that, the fact that I do not like the negative view, and very much wish that the magnitude and moral status of negative value were not incommensurably greater than that of positive value, also suggests that if I have a bias in any direction, it is probably away from the negative (for the same reason, I am likely also strongly biased against moral realism being true, in that I wish that no continuum of truly disvaluable states could exist in the world; unfortunately, I find that there is too much evidence to the contrary).

Yet all this being said, I could be wrong, and I wish we had much more discussion on these matters through which we could sharpen our views. I should also note that there are of course other disagreements about value than the relative significance of positive and negative value, for instance concerning egalitarianism, prioritarianism, and the value of rules. In my view, these things are all ultimately instrumental to how the dynamics of consciousness play out, as opposed to being inherently valuable, which is not to say that these views are not important to discuss, or that they do not contribute much wisdom. I think they do, and I maintain some, although admittedly very small, uncertainty about whether they are intrinsically valuable.


Biasing Intuitions

In trying to be reasonable, it is always worth being aware of one’s own biases. And when it comes to thinking about values, we have many biases that are likely to influence our views and what we say about them. We are social primates adapted to survive and propagate our genes, which means that we have moral intuitions that have been built to accomplish this task efficiently — not to help us land on deep truths about value.

This can influence us in countless ways. For example, as mentioned above, one could argue that the only reason we view death as bad is that it was costly for our group’s survival, and hence for our genes, to lose anyone in our group (although I would argue that there are indeed good reasons to consider death bad, even if we disregard our immediate feelings about it).

In the case of my own negative view of value, I might be biased in that I’m an organism evolved to signal that I am sympathetic and compassionate, someone who will protect you if you are in pain and trouble. Negative utilitarianism, a skeptical person might claim, is merely an attempt to signal “I’m more ethical than you”, and ultimately, I’m just another horny organism that makes elaborate sounds in order to get satisfied.

I certainly don’t deny the latter proposition, and it is worth being mindful of such pitfalls, even when they seem unlikely to be influential, and when there seem to be many reasons that count against them — for instance, do equally strong, or perhaps stronger, biases not exist in the opposite direction as well? Isn’t a willingness to accept suffering for the sake of some positive gain generally much more attractive in a male primate? Does negative utilitarianism really make you appear cooler than the classical utilitarian who prioritizes working for a future full of happy life above all else? (Although it should be noted that a future full of life is, at least according to David Pearce, not necessarily incompatible with negative utilitarianism.)

More generally, might our uniquely wired brain lead us to value something that might not at all be valuable, or perhaps only of puny value, compared to what might be found outside of a biological perspective, or at least outside the perspective of what we can remember in the present moment from less than a single lifetime of one species of primate? It is certainly worth pondering.

It is hard to appreciate just how strongly our thinking is influenced by our crude, survival-driven intuitions, many of which contradict each other much of the time. When thinking about values, it is worth being intensely skeptical of these intuitions, and mindful of the ways in which our narrow perspective might be misguided more generally.


Updated View of Value — What Follows?

An updated, weighted view of value leads us, however slightly, toward favoring causes and actions that are robust under many different views of value, as opposed to only our single (immediately) most favored one. What these causes and actions are, and how to assess this, is largely an open question that of course depends on what our palette of weighted values ends up being. Yet a good example of a cause that seems strongly supported on almost all accounts of value is the Abolitionist Project proposed by David Pearce: to move all sentient beings above hedonic zero by making sentience entirely animated by gradients of purely positive experiences.

Regardless of which palette of weighted values we end up with, however, the continued effort of gaining more clarity about the composition of this palette seems an ever-relevant task. As mentioned above, and as I will argue below, encouraging such an effort is of the utmost importance.


Conclusion

We will always perceive the world from a limited perspective that contains limited information. When we are talking about the most value-significant events that can emerge in the world — notionally, “dolortronium” and “utilitronium” — I maintain that none of us has a good idea of what we are talking about. What to do in light of this ignorance, apart from maintaining some degree of humility, is not clear. All we have to go by is our limited information.

What does seem clear, however, is that continued reflection on fundamental values is important. Indeed, given the importance and difficulty of getting fundamental values as right as possible, it seems that seeding a future in which we continually reflect self-critically on fundamental values and how to act on these is among the best things we can do.

Again, this applies to moral subjectivists too, who will also benefit from reflecting on what others have found through their serious reflections, and who might even take the position that they care about what others care about, in which case the importance of ensuring that these others — such as beings of the future — find out what they care about is self-evident.

In conclusion, kindling a research project on fundamental values and widespread, qualified discussion about it — more generally: moving us in the direction of a more reflective future — should be a main priority for anyone who wants to “improve the world”. We need to be much more focused on the “what” question in the future.