I’m not sure whether humans take our ability to really think for granted or whether mathematicians and economists are far too intrigued by everyday social actions. Either way, artificial intelligence has come an amazing distance in a relatively short period. Consider, for example, the time between Da Vinci’s sketches of flying machines and the first plane crafted by the Wright brothers: about 400 years. It took only about 40 years after Nash set the Game Theory table for email providers to fit his ideas to spam-detection algorithms.

Although essentially digital, email is still far more “human” than playing Checkers, given that we communicate far more than we challenge one another to duels, especially now that we’re out of that sword-carrying fashion trend.

But artificial intelligence has penetrated deeper into human activities than just email.

Bayesian Statistics has gained increasing support since the advent of the personal computer. This branch of Mathematics provides a numerical means of representing, in a startlingly “natural” way, how beliefs change over time as new evidence is collected. In this way, a machine can start with an unbiased, but undecided, belief about, say, whether a given email is spam or legit. Trusting us humans, it learns as we toss messages into our junk folders, updating its beliefs, hopefully, in the proper direction.
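To make that concrete, here’s a minimal sketch of the update pattern, assuming the crudest possible model: a single belief about one sender, starting from a flat Beta(1, 1) prior. Real filters track far richer evidence (word frequencies, headers, and so on); everything below is invented for illustration.

```python
class SpamBelief:
    # Belief that a sender's messages are spam, kept as a Beta distribution.

    def __init__(self):
        self.spam = 1    # pseudo-count: one imagined spam message
        self.legit = 1   # pseudo-count: one imagined legitimate message

    def observe(self, was_junked):
        # The human's verdict is the evidence; count it.
        if was_junked:
            self.spam += 1
        else:
            self.legit += 1

    def probability_spam(self):
        # Posterior mean of the Beta(spam, legit) distribution.
        return self.spam / (self.spam + self.legit)

belief = SpamBelief()
print(belief.probability_spam())  # 0.5: unbiased, but undecided
for junked in (True, True, False, True):
    belief.observe(junked)
print(belief.probability_spam())  # 4/6, drifting toward "spam"
```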

In research, Bayesian Statistics allows humans to specify complex models that are computationally near-impossible to solve exactly. Following the same unbiased-evidence-update pattern, the computer simply guesses at the right parameters, feeling around a multi-dimensional dark room until it’s pretty certain it’s found the light switch. We can rarely know for certain whether the computer has stumbled upon the right numbers, but it gets us closer than we ever could on our own.
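For a feel of how the guessing works, here’s a toy random-walk Metropolis sampler, one classic way of feeling around that dark room. The data and the model (Normal with unknown mean, flat prior) are invented for illustration:

```python
import math
import random

data = [2.1, 1.9, 2.4, 2.2, 1.8]  # made-up observations

def log_likelihood(mu):
    # How plausible the data is if the true mean were mu (Normal, sd 1;
    # constant terms dropped, since only differences matter below).
    return -0.5 * sum((x - mu) ** 2 for x in data)

mu = 0.0       # start anywhere in the dark room
samples = []
for _ in range(10_000):
    proposal = mu + random.gauss(0, 0.5)  # feel in a random direction
    # Keep the step if it makes the data more plausible; occasionally
    # keep a worse step too, so we don't get stuck in one corner.
    if math.log(random.random()) < log_likelihood(proposal) - log_likelihood(mu):
        mu = proposal
    samples.append(mu)

warm = samples[2_000:]        # discard the early stumbling
print(sum(warm) / len(warm))  # settles near 2.08, the data's mean
```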

So what? AI can mark messages as spam and approximate numbers for us, but it still can’t write literature or achieve the other marvels of art.

Imagine that it could.

Herbert Simon is my all-time favorite author and a systems thinker whose book first led me down this Complex Systems rabbit-hole. I’m upset that he died during my lifetime, meaning that I could have met him, had I made different choices and followed some other cosmic path, so to speak. But this is just the image Simon had in mind when he turned the question inside-out and considered how humans might behave like machines when writing or painting or singing.

We make choices and we follow paths.

Just as a Checkers player compares the benefits of each play available to him, the artist compares the effects of several available decisions. One might ask, “Should I start my Superhero/Sci-Fi novel at the beginning, when the hero gains his powers, or do I try in medias res and get right into the action by saving a baby from a burning building?”

The key difference is that Checkers plays are easier to digitize, that is, to put into numbers. If the player values having more “kings,” he would choose the play that best maximizes his kings while minimizing his opponent’s. Arthur Samuel is said, in a book by John Holland, to have created the world’s first “learning” program, an AI Checkers Player. The program computed a long list of values from the board, from the number of kings to the pieces around the center to some rather obscure measures Samuel tossed in for good measure. Each value was recalculated after each move, multiplied by an associated weight, and the weighted values were summed into a single score. By adjusting those weights, the program’s strategy would develop over time, letting it weigh some values more heavily and even count others against a play.

The program would simply look a few moves ahead and choose the path that maximized the weighted sum described above.
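A toy version of that machinery might look like the sketch below. To be clear, this is not Samuel’s actual program: his tracked dozens of terms and searched several plies deep, while this one invents four features, a one-character-per-piece board encoding, and a single-ply lookahead, just to show the weighted-sum-and-nudge shape.

```python
def features(board):
    # board: a made-up encoding, one character per piece.
    return {
        "my_kings":     board.count("K"),
        "their_kings":  board.count("k"),
        "my_pieces":    board.count("M"),
        "their_pieces": board.count("m"),
    }

weights = {"my_kings": 3.0, "their_kings": -3.0,
           "my_pieces": 1.0, "their_pieces": -1.0}

def evaluate(board):
    # The weighted sum the program tries to maximize.
    return sum(weights[name] * value for name, value in features(board).items())

def best_play(candidate_boards):
    # "Look ahead": score the board each play would produce, take the max.
    return max(candidate_boards, key=evaluate)

def learn(board, outcome, rate=0.01):
    # If the score over- or undershot the game's eventual outcome,
    # nudge each weight against its feature (a least-mean-squares update).
    error = evaluate(board) - outcome
    for name, value in features(board).items():
        weights[name] -= rate * error * value

print(best_play(["MMKmm", "MMMmm", "MKKmmk"]))  # picks "MMKmm"
```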

Simon’s account of how artists operate closely resembles the approach taken by Samuel’s Checkers Player. Instead of numbers calculated from a board-game configuration, we must use heuristics, that is, rules of thumb, to quantify qualities like how a work of art might evoke a particular emotion. This definition suffers from vagueness: it’s impossible to truly digitize human concepts, so any attempt to introduce heuristics into a program will rest on arbitrary, subjective decisions about what to include.

However, the abstract model promises insight into human nature: in all human decisions, we run through multiple alternatives, weighing the costs and benefits of each, and selecting the one we believe, at the time, will provide a “good enough,” if not optimal, outcome. But we are frequently wrong, so we learn by adjusting the strategy with which we compare our options, just as the Checkers Player adjusts the weights of its values. And just as AI feels around a digital tree of available moves, we stumble through a continuous, multi-dimensional field of possibilities.
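Simon gave “good enough” a name, satisficing, and the rule is simple enough to sketch. The scoring function, the aspiration level, and its decay below are all invented stand-ins:

```python
import random

def satisfice(options, score, aspiration):
    for option in options:
        if score(option) >= aspiration:
            return option           # good enough: stop searching
        aspiration *= 0.95          # nothing yet: relax the standard a bit
    return max(options, key=score)  # exhausted: settle for the best we saw

openings = [random.random() for _ in range(20)]  # stand-ins for novel openings
print(satisfice(openings, score=lambda x: x, aspiration=0.9))
```

Note what the loop buys us: unlike the Checkers Player’s exhaustive max, it can stop early, which is the whole point when the field of options is continuous and effectively endless.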

A common definition of “good AI” is a machine that isn’t human but can fool a human into believing it is. The greatest difficulty in putting this definition into action is not that human thinking is complex; it’s that computers require digitized instructions, and human thought processes have no single “best” translation into numbers.

A central thesis of Simon’s work is that Man is simple once reduced to his most basic form. The pattern of Man’s behavior arises from a short list of rules and an arrangement of parts: a store of memory, sensors of the external world, devices for interacting with that world, and the brain, treated in many ways like a computer’s CPU. By sensing input from the world and recalling information from memory, the brain carries out, over and over, day in and day out, a decision-making process: scan possible actions, select the one believed best, and update those beliefs if a suboptimal action was taken.
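That loop is concrete enough to run. Here’s a toy version with an invented world of three actions and noisy payoffs; only the shape of the loop (sense, recall, scan, select, update) comes from Simon:

```python
import random

actions = ["rest", "forage", "build"]                 # invented action set
beliefs = {a: 0.0 for a in actions}                   # memory: estimated payoffs
payoff = {"rest": 0.1, "forage": 0.6, "build": 0.4}   # the world's hidden truth

for day in range(2_000):
    # Scan possible actions and select the believed-best,
    # trying a random one occasionally so nothing goes untested.
    if random.random() < 0.1:
        action = random.choice(actions)
    else:
        action = max(actions, key=beliefs.get)
    # Interact with the world; the sensors report a noisy outcome.
    outcome = payoff[action] + random.gauss(0, 0.2)
    # Update beliefs toward what was actually observed.
    beliefs[action] += 0.1 * (outcome - beliefs[action])

print(max(actions, key=beliefs.get))                  # almost always "forage"
```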

But is human thinking really this simple? Just as Linguistics moved away from Structuralism, and Literature from Modernism, and Geometry from Euclideanism, and Statistics from Frequentism, and Economics from Unbounded Rationality, and…

I have a Google Note filled with these.

The pattern has been that a field of study will ground itself in rigor, only to reach for more “human” models later, as it encounters or attempts harder, more realistic problems.

Complex Systems allows us to escape this rigorous/human dichotomy by holding conflicting views at once, framing them in the language of Emergence. What is good at one scale might be devastating at another, while having no noticeable effect at a third.

So, I ask: is this model of Simon’s, that all decision-making is reducible to a search through a dimly-lit room of choices, human? ∎

Follow me on Twitter. Let’s chat sometime.