S15, E1: The Railroaded Children: 'No Free-Will'
Series 15: How to Remain Blameless while Calling BS on 'No Free-Will'
What do some philosophers and scientists mean when they say there is no free will? It comes down to the idea that our thoughts and actions are determined because every part of our physical universe, including our brains, must follow deterministic rules. Therefore they claim that our sense of agency is an illusion that somehow arises from the internal calculation of what is likely to happen next.
Determinism is a necessary assumption underpinning scientific experimentation. For a given set of initial conditions, within a controlled environment and with precisely regulated inputs, we should be able to reliably produce identical outputs. Using calculus we can predict outcomes or accumulated effects over time that can be reproduced in practice. Of course, when trying to gather empirical results, repeatability can be a problem and in some areas of science, a crisis, due to methodological errors. I’ll not digress and only mention it to clarify that the basis of my objection to the ‘no free-will’ hypothesis is not to dispute determinism.
After all, what is free-will, if not self-determinism?
I can see why the many scientific arguments for this are convincing without them shaking my conviction that free-will nevertheless does exist. Many scientists would undoubtedly say that this is just because I don’t understand the science, and they may be right; but my opinion on this keeps me in the company of scientists who evidently do.
Obviously, personal testimony about the sense of self and of being in control of one’s life cannot be used to dispute that such feelings are illusory - Mandy Rice-Davies Applies (MRDA). Yet this is not belligerent denial, nor do I feel threatened by the prospect that somehow all my actions are predetermined; it just doesn’t make sense to me that they should be.
If we don’t possess, or indeed have a need for, free-will, why would our species survive the extravagant cost of evolving such an elaborate, resource-consuming and useless illusion? Unnecessary evolutionary costs would be a serious threat to survival, hypothetically at least, since such costs would not be incurred without evolutionary pressure.
Meanwhile an elephant has found itself in the room through no fault of its own. To assert that free-will doesn’t exist is to believe we are merely a conduit for that assertion and therefore are not making an assertion at all. That’s not unreasonable but thereafter it becomes necessary to tolerate an endless series of contradictions. Even discussing free-will requires us to use linguistic conventions that are based on the free-will world-view.
For example, on Alex O’Connor’s Within Reason podcast, one of the many phrases Robert Sapolsky utters that are incongruent with his case for no free-will is ‘teacher has determined’. Pointing out the logical errors in the no free-will case would be a Sisyphean task: there is so much material out there laden with apparently trivial contradictions that it would quickly start to look like nit-picking.
Regardless, if you are to engage with the ‘no free-will’ discussion from either side of the argument, you have to acclimatise to a baseline of cognitive dissonance. It’s always there in the background, like tinnitus or thrush.
If you are a determinist you may say this is because the language was constructed around a false premise of agency. Yet those syntactic protocols have irreplaceable utility, assuming of course, that language with no intention behind it can have a purpose.
So although I have to talk about the language of the argument as well as the argument itself (particularly in this episode), it is incidental to the thrust of my criticism.
The following is transcribed from a Sabine Hossenfelder YouTube video:
… we know that brains are made of particles … we know that we can derive from the laws for the constituents what the whole object does, if you make a claim to the contrary you are contradicting well-established science.
“You Don’t Have Free Will. But Don’t Worry”, Sabine Hossenfelder
In which case contradicting science wouldn’t be a choice.
I can't prevent you from denying scientific evidence
ibid.
Because neither prevention nor denial can be a choice without free-will.
but I can tell you that this way you will never understand how the universe really works …
ibid.
She ‘can tell’ us? What option would she have if it’s determined? Meanwhile, whether or not we understand how the universe works, is beyond our control. I know she is being ironic (without choosing to be), but what does it mean to say ‘you have no free will, but don’t worry’?
Surely we will either worry or we won’t. Similarly, those who tell us we can still be happy and fulfilled despite not having free-will can’t be suggesting we have a choice. They must be saying that some people just happen to be happy about it and, therefore, some aren’t. So what’s your point, caller?
In the video description Hossenfelder writes:
However, you don't need free will to act responsibly and to live a happy life, and I will tell you why.
ibid.
But she doesn’t tell us - not that she can be blamed for the over-sell. How could acting responsibly, or not, be something we could do anything about? By definition, having no free-will means no responsibility, i.e. no ability to respond, and therefore no ability to take blame or credit. This is not to say that living a happy life is not possible (as she claims), just that there is nothing you can do to make it happen. Can the button below possibly empower you to make me slightly happier? It is worth a try.
It may seem that I am taking cheap shots or being entirely fatuous, but if we allow the discourse to become inherently inconsistent, it won’t be without consequence. This is not just an issue with existing language constructs but what future ones might become. If all concepts of intention were to be put beyond our capacity to express them we would be curtailed in our ability to say meaningful things. This would amount to self-harm through the vandalism of language.
There’s no end to the contortions your logic must entertain in order to deny any sense of purpose you have, but if you’re wondering if I am constructing a strawman, the answer is ‘no’. Proponents of ‘no free-will’ are explicitly making these claims about attribution themselves.
Sapolsky argues that there can be no responsibility or merit, describing his approach as being on the ‘lunatic fringe’ of the no free-will camp. I think it’s the logical conclusion to which all the arguments against free-will must inevitably lead - provided their advocates are brave enough to go there. If that’s lunacy, it’s the consequence of trying to be consistent; but if it’s not, lunacy is surely where we are headed.
He says this:
…, though a gigantic problem which I cannot see easily solved, is the one of motivation, of ambition and drive and all of that. We can protect people from damaging individuals without invoking free will ……
“There’s no Free Will. What Now? - Robert Sapolsky”, Within Reason, Alex O’Connor
I’ll return to the ‘gigantic problem’, but taking the last sentence first: how can we actively protect anyone? Without free-will any protection we give would be passive - actions that are merely serendipitous side-effects of the forces acting on us.
The senses in which he uses ‘free-will’ are hopelessly conflated, but what he is talking about in this segment is a proxy for ‘responsibility’, because he wants to make the case that it doesn’t exist.
The examples he gives are quarantine or incarceration to minimise contact with infected people or criminals as a way to protect others. I think this is better expressed as ‘bounding risks’ - if you can imagine such mitigations possible without intention. What you will find is that those who talk about no free-will do so in a way that is loaded with intentionality.
…that one we can solve…
ibid.
Yes okay. But only by accident.
You see, what he is talking about ‘solving’, or rather doing away with, is the need for blame. He reasons that the child with a cold gets to stay home from school to avoid infecting others - but it is not a punishment for being infectious. By extension, dangerous criminals may be incarcerated to protect others without there being any requirement to judge their morality.
The conviction that a world free of blame would be better informs his model for the trajectory of progress. He illustrates this with reference to ‘witches’ who were burned at the stake for causing natural phenomena like extreme weather events. His point is that now we know they couldn’t possibly have been responsible - by which he means blameworthy through being inherently bad. Yet their exoneration is not because they are demonstrated to be morally neutral but because they could not have been in the causal chain of events.
In all his examples (which I am not going to deconstruct one by one, to avoid labouring the point), the removal of responsibility is cast as the source of progress. Somehow Sapolsky misses the fact that all the improvements he identifies resulted from a better appreciation of causality. Ironic, then, to misconstrue the causes of the progress he identifies.
But if there is no blame there can be no merit either.
how do you solve the one [the problem of motivation without attribution]… to get somebody to decide …they want to spend 14 years getting trained to be a cardiothoracic surgeon? Or…they want to go to this party and their dorm mates are doing it, but instead they’re going to study … where does the motivation come from? … that one’s a much harder one … much less clear how to engineer society … so that motivation is in some way separate from attribution …
ibid.
This of course becomes a lot more difficult than it needs to be, but it’s also fraught with the weight of an agenda, because wariness is required when people talk about societal engineering.
Theatrical Double-Take
Did he say …?
‘How do you…?’
‘to get somebody to decide …?’
‘they want to spend 14 years getting trained to be a cardiothoracic surgeon?’
‘…they want to go to this party…?’
‘… where does the motivation come from?’
All those require intention which Darwinian selection would have to evolve if it hadn’t bothered to do so already.
much less clear how to engineer society … so that motivation is in some way separate from attribution …
He wants all the benefits and none of the responsibility. Setting aside his habit of hopping on and off intentionality like it was a light-up panel on an arcade dance machine - this is arse-backwards.
It betrays his ideological aversion to blame but motivation would be useless without free-will. Attribution is a matter of causality - if you give someone credit for something (good or bad) you are saying they are the cause. Without causality there is nothing on which to base science or mathematics, but much more fundamental to our existence than that, nothing for Darwinian selection to chew on in order to drive adaptations.
Intentionality must have evolved after consciousness but why would consciousness evolve? Because noticing the relationship between cause and effect gave us the beginnings of cognitive survival advantage. Pain gave us the technology to map external threats to our survival which must have provided the evolutionary pressure to produce avoidance (and therefore survival) strategies.
Moving on.
Here the ideological distortions stem from notions of fairness. But it’s ludicrous to suggest that we dispense with attribution (i.e. causality) and re-engineer motivation around the gigantic canyon in the spot where intention used to be. It would be like saying that we quite like the look of the branches and foliage of an oak tree against the sky, but wonder if we can re-engineer it to do away with the roots and free up some land.
Whether you believe we are active in it or not, a mechanism for motivation evolved, and we are the beneficiaries. Besides, how would we engineer a mechanism for motivation, without motivation? There seems to be a tendency amongst some prominent academics to apply a type of reverse Occam’s Razor, attempting to replace functional solutions with absurd alternatives bordering on the Heath Robinson.
Their attempts to accommodate ideological niches frequently lead to absurdity, but unlike Heath Robinson they are not meant to be comedic, although the efforts can still be laughable.
Despite having some sympathy for the idea that we are rattling around like balls on a billiard table, colliding with other deterministic objects, I am not led to conclude that free-will does not exist. I would find that no more convincing than a claim that billiard balls are carrying around the illusion of intentionality.
If like a billiard ball or even a floor-sweeping robot, we don’t need free-will, why would we need sentience either? What would be the survival pay-off for the high evolutionary cost of being conscious if our sense of free-will is only a simulation that has no bearing on our lives?
Evolutionary Cost
Clearly no object whose movements are entirely determined by what acts upon it, requires free-will, but if that includes us what would be the economic case for evolving the elaborate hallucination of being self-determined?
What Do We Know?
There is only ever one justification for evolutionary investment and that’s to improve the survival rate of genetic information. What would be more likely to increase the chances of generational survival; free-will or imagined free-will?
Put another way, the benefits of being self-determined would be obvious, but what survival advantage might imagined free-will give us? If there isn’t one, the huge evolutionary cost of an unnecessary simulated experience, would be a massive threat to our survival. Yet we are here.
Could imagined free-will have a survival utility that is nothing to do with decision making?1 For example, might it be something we need in order to maintain our happiness, environmental resilience and the health of our biological computers? That might be interesting but if our actions are determined why would it matter how we are feeling? Why would we even need to feel anything?
You may say it’s because our determined response depends upon the environmental influences on the performance of our (deterministic) brain while it’s processing information. But that would mean that one of the inputs to the determined outcome would have to be our mental state. If that is the case then our conscious experience must be an input into determinism. This feedforward is beginning to look a lot like free-will to me.
Sapolsky would surely accept everything bar the last sentence, after all he says (and I am paraphrasing from various sources) that things that influence what we do include our evolutionary history, genetics, culture, sensory inputs, hormonal imbalances, pain, hunger, physical environment etc… So absolutely everything contributes to the determination of outcomes, with the exception of what we imagine to be a decision, even a painful one.
Yet determinists wouldn’t deny our reaction to pain. To involuntarily pull a hand away from a flame would in that sense not be a choice but why wouldn’t how we feel about something be part of the decision making calculation too? For example the anticipation of pain that prevents us from putting our hand in the flame to begin with.
Remaining Blameless
What you have to believe to eliminate free-will, is that somehow our subconscious is simulating something at an enormous energy cost, that will have zero, or even negative impact on our survival.
I’ll go out on a limb here to say this is bullshit.
It’s not much of a risk because who can blame me? Certainly not you if you believe in free-will. Certainly not Sapolsky, Hossenfelder, Sam Harris, O’Connor and others who don’t, because they already know it’s not my fault.
Given my strong belief in ‘free-will’, it may be surprising that I concur with almost everything those who don’t believe in it have to say about causality (insofar as they have noticed it), but I’ll stop short of giving humans an exemption from being causal agents in their own right.
I can therefore accept for instance, what Sapolsky says about the impact of the environment on decision making, without arriving at his conclusion that there is no free-will. Of course, when talking about environments, it’s a good idea to check we are using the same construct.
Environments
When I listened to Sapolsky talking to Sam Harris perhaps three or four years ago, the thoughts that came to me seemed too obvious to write down, but recently what I have read has led me to reconsider. Here I use three conjectures to summarise my thoughts on ‘environments’.
Conjecture #1: The environment not only influences outcomes; it makes those outcomes possible. There can be little conception of free-will for someone imprisoned, buried alive, cast adrift at sea or in deep space with no prospect of escape or aid. To be powerless is to be in an environment where nothing you do can have an effect on your situation. A fish is not free to move around gracefully when it is out of water. So without an environment to push against, free-will is meaningless, because whatever you try to do can have no useful effect on anything. If there is no free-will, we are in an elaborate jail. To be free is not to be unconstrained but to have the freedom to adapt and also to exert change on the environment.
Conjecture #2: There is a difference between the environment and ‘the history of the environment’. An environment is a product of many things that went into its creation but those are not factors that influence how we react to being immersed in it.
Conjecture #3: The brain provides the environment and substrate for unconscious and conscious processing or thought. The substrate for intention must somehow be thought. If that seems odd, this crude three-part analogy plus conclusion might help:
Computer architecture (~biological neural network) provides the substrate for machine code (~unconscious processing), which is the substrate for the operating system (~conscious processing). The operating system is, in turn, a substrate for programmes or applications (~learned processes: information, skills and techniques). Leaving aside the mass production of hardware (it’s consistent with the argument but I might struggle to make it both brief and interesting here), all the informational content or ‘soft’ elements are replicable and can be passed between machines.
We can have virtual machines that simulate those apps and their operating systems, dancing across blade servers, so we might say that virtualisation makes the physical location for a particular operation less certain. For all practical purposes we have to think of these operations as being distributed across blades. It seems that this would be necessary to enable the kind of sophisticated sensor-fusion that we routinely employ in our heads - which is the near equivalent of hardware/firmware integration.
Is it reasonable to think of intentionality (free-will) as an application that runs on our operating system (~thought processing), pulling in abstractions of reality via sensor fusion? In which case the purpose of that application would be to optimise the well-being of the organism - hence intentionality emerges.
I think of free-will as operating on the substrate of consciousness, which has been virtualised across the biological neural network. I dare say there are flaws in that analogy, but I think it is sufficient to conclude that we cannot hope to understand free-will without a better model for what it means to be sentient. Not having that model also means we don’t know enough to discount free-will.
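The layered analogy above can be sketched in code. This is a toy illustration only - the class and all the names in it are mine, invented for the sketch, and make no claim about biology; the point is simply that each layer runs on the substrate beneath it.

```python
# Toy sketch of the layered-substrate analogy (illustrative names only,
# not a model of the brain): each layer runs on the layer beneath it.
class Layer:
    def __init__(self, name, substrate=None):
        self.name = name
        self.substrate = substrate  # the layer this one runs on, if any

    def stack(self):
        """Return the names of this layer and every substrate beneath it."""
        below = self.substrate.stack() if self.substrate else []
        return [self.name] + below

hardware = Layer("computer architecture (~biological neural network)")
machine_code = Layer("machine code (~unconscious processing)", hardware)
operating_system = Layer("operating system (~conscious processing)", machine_code)
application = Layer("application (~intentionality)", operating_system)

print("\n  runs on ".join(application.stack()))
```

Virtualisation, in this sketch, would amount to the same `application` stack being reproducible on different hardware instances - which is the sense in which the analogy suggests the physical location of a mental operation matters less than the stack it runs on.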
If you think I’ve pulled that out of nowhere, or possibly my arse, S15,E3: A Unit of Consciousness later in the series may be for you.
The Risk of Free-Will Denial
We are accustomed to thinking about the socio-economic displacement of human capital, the development of sentience or even the creation of a morality that is not human-centric. In principle those risks could be ring-fenced, although in practice with global research operating in silos, it’s extremely unlikely.
Conjecture #3 ties in to what I think is probably the greatest danger that may be posed to humans by General Artificial Intelligence (GAI) - the emergence of intentionality. Could it be that with ambition comes the need to be free enough to exercise self-determination and how would we contain a GAI focused on jailbreak?
In mitigation we should at least establish where the air gaps should be and plan for complex penetration testing. We can employ AI to understand some of the vulnerabilities but success becomes less likely if we accidentally render the risk invisible, by refusing to believe that free-will can ever exist.
Next S15, E2: Randomness, Strawmen and Piñatas
In Series 14: Ayaan Hirsi Ali and the Bridge Between Wildernesses (yet to be published) I explored the inverse of this question: could it be that imagined-determinism (spirituality and religiosity) provided us with an evolutionary survival advantage?