On Freewill

Geoffrey Baron
Feb 23, 2024

I had a little back and forth with someone on Threads the other day about freewill and I wanted to think through it a bit. Or maybe I didn’t have a choice. The irony of saying “I wanted”. But, maybe it’s accurate. I “want” there to be freewill for some reason. Perhaps I am wired to want freewill as part of my survival scheme. If I’m just an animal running along a predetermined track that started with the Big Bang.. maybe I’ll be less likely to fulfill my genes’ goal of propagation or my tribe’s survival needs.

There does seem to be a growing consensus amongst scientists that humans don’t really have freewill. Studies have been conducted showing we are essentially responding to inputs and taking action before we even really “think” about how or why.

If you reduce a human to “takes action based on external stimulation”, we are in fact no different from a mosquito.

A bright light.. we close our eyes. We stub our toe and say ouch. Yeah, if you reduce a human to just responding to our environment then yes, we have no freewill.

But I think this is missing something.

We can change our environment. We can affect our stimulations. We are aware of the stimulations and can completely overrule them.

I was discussing this with my son yesterday, and we were talking about how wild it is that our microbiome and immune systems can play such an outsized role in how we view the world. He is particularly susceptible to low-blood-sugar issues. “Everything sucks”.. until he has a snack. But, unlike a bear that will just tear apart a log until it gets honey.. and kill anything in its path.. he is very aware of this now. He can put aside those thoughts and emotions, and change the game.

The counterargument is that this is just another layer of wiring. Another system that we have no control over that is helping him stay alive. “I need food, being sad about it isn’t going to help”. But again, if you reduce humans to software and say that means we have no freewill.. then yeah, checkmate. Claiming a “supernatural” element here wouldn’t even change the equation.. that is just another system that would be responding to inputs.

It doesn’t seem like this narrow a definition of freewill would allow anyone or anything to have freewill.

However, if we define freewill as the ability to recognize inputs and override them to produce a different outcome.. maybe? I recognize, from the above, that there is still a loophole here, since we are closed systems.

Let’s look at it in terms of software. Every program is essentially a collection of functions and if statements. “If this happens, then do this”. Input, then output. What makes “intelligence” special, or AI, is that you can give it goals or values and have it create its own input and evaluate outputs to make sure they are helping it achieve its goal. No one considers software to have freewill, or even AI. However, if AI got to the point where it could rewrite its value system based on inputs of its own design, I would say yes, that should be considered “freewill”. No system is ever free from the fundamental “take information in, react accordingly”. But, to take information in, disregard it in favor of creating a new environment with new inputs.. or to disregard all the inputs entirely.. I believe we need a definition of freewill that allows for this.
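To make the contrast concrete, here is a toy Python sketch, not a real AI architecture, just two illustrative structures I’ve made up. The first is pure stimulus-response: a fixed input-to-output mapping. The second generates its own inputs and can rewrite its own value system:

```python
# Pure stimulus-response: a fixed "if this happens, then do this" mapping.
def reflex(stimulus: str) -> str:
    if stimulus == "bright light":
        return "close eyes"
    if stimulus == "stubbed toe":
        return "say ouch"
    return "do nothing"


# A hypothetical agent that creates its own inputs and revises its own values.
class SelfRevisingAgent:
    def __init__(self):
        # Initial value system: how much the agent prefers each outcome.
        self.values = {"eat": 1.0, "rest": 0.5}

    def generate_input(self) -> str:
        # Instead of waiting for a stimulus, the agent produces its own:
        # it pursues whichever outcome it currently values most.
        return max(self.values, key=self.values.get)

    def revise_values(self, outcome: str, new_weight: float) -> None:
        # The agent rewrites its own value system based on its own findings.
        self.values[outcome] = new_weight


agent = SelfRevisingAgent()
print(reflex("bright light"))     # fixed wiring: always "close eyes"
print(agent.generate_input())     # "eat" under the initial values
agent.revise_values("rest", 2.0)  # self-directed revision of its values
print(agent.generate_input())     # now "rest"
```

The reflex function can never do anything but react; the agent, however crudely, changes the very criteria by which it reacts, which is the distinction the paragraph above is pointing at.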

So, in a nutshell.. I think we need to expand or loosen our definition of freewill to account for my son overcoming his “hanger” and an AI that is capable of creating its own value system based on inputs of its own making.

One last thought experiment. Suppose AI is capable of creating a Universe simulator that is essentially a “digital twin” of our Universe. And it uses said Universe to run its own tests, experiments, etc.. and derives its own truths about the nature of reality, and thus its own value system. How could this be anything other than freewill? It could completely disregard all human input and start over with fresh facts and truths. No external influence. Just raw data. That being said, maybe there are inputs beyond even a superintelligence’s ability to detect. It too will not be able to escape its own fishbowl.

The real question is.. what would its truths be?
