Computer users are humans. We evolved from ape-like ancestors not so long ago. That's why today's UIs use shadows and smooth animation and try to look Material: this way the UI resembles the physical world we evolved in.
That's why one of the most effective ways to comprehend math is to rely on geometry and visual thinking. That's why we use the metaphor of balancing to align our expenses with our incomes. It seems that most of our thought abstractions come from actual physical experience in the material world.
It's possible to think about programming environments (languages) as a UI for program design. Maybe this programming UI should also be limited to only those abstractions that fit well into the human brain: the abstractions and ideas that can be experienced in everyday monkey life.
Some examples of abstractions and ideas that fit well:
- immutable data. Most objects in the real world stay unchanged most of the time. Just as we can take many identical pieces of paper (or rocks) and work on copies, we keep many copies of the same immutable data, which are used to produce something new.
- objects behave differently in different contexts (partial application of a pure function as a context update?). Objects in the real world behave differently when placed in different contexts: warm, cold, light, dark.
- actions can be undone by traversing routes backwards. That's what we do when we lose our keys in the middle of a route: we try to remember exactly where we walked, and we retrace our steps. Once we find the keys, we start a new branch of the walk.
- separation of reliable actions from unreliable actions that may have unpredictable outcomes. If you work with static objects (or objects that move linearly), you can walk, jump, catch, and run almost unconsciously with the desired results. If you engage with another actor -- a moving object -- you need to calculate and negotiate, and that's much harder. In the real world, reducing the number of moving objects and using as many static objects as possible makes life easier. The same holds in programming: keeping as many things immutable as possible, and reducing the number of dependencies that can change behavior without your explicit command, makes life easier.
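The immutable-data idea above can be sketched in Python; the `Note` record and its field are made up for illustration:

```python
from dataclasses import dataclass, replace

# A frozen dataclass behaves like an object in the physical world:
# you never mutate it in place, you produce a new copy with the
# change applied, and the original stays intact.
@dataclass(frozen=True)
class Note:
    text: str

original = Note(text="buy milk")
edited = replace(original, text="buy milk and bread")  # a new object

# The original "piece of paper" is untouched.
print(original.text)  # buy milk
print(edited.text)    # buy milk and bread
```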
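The "context as partial application" bullet can be made concrete with `functools.partial`; the `viscosity` function is a hypothetical example:

```python
from functools import partial

# A pure function whose behavior depends on an explicit context argument.
def viscosity(temperature: str, substance: str) -> str:
    if temperature == "cold":
        return f"{substance} is thick"
    return f"{substance} is runny"

# "Placing the object in a cold room" = partially applying the context.
in_cold_room = partial(viscosity, "cold")
in_warm_room = partial(viscosity, "warm")

print(in_cold_room("honey"))  # honey is thick
print(in_warm_room("honey"))  # honey is runny
```

The same object (`"honey"`) behaves differently in each context, yet nothing is hidden: the context is an ordinary, visible argument.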
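The "undo by walking the route backwards" bullet maps naturally onto a trail of states; this is a minimal sketch with invented state values:

```python
# Undo as retracing a route: keep a trail of states; undoing pops
# the trail, and any new action after an undo starts a fresh branch.
trail = [{"keys": "home"}]

def act(new_state):
    trail.append(new_state)

def undo():
    return trail.pop()

act({"keys": "office"})
act({"keys": "lost"})

undo()                   # walk back one step along the route
act({"keys": "pocket"})  # found the keys: start a new branch of the walk

print(trail[-1])  # {'keys': 'pocket'}
```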
Abstractions and ideas that don't fit well:
- closure functions that capture an IO interface (a TCP socket, a DB connection). What you get is a special kind of object that may behave totally differently regardless of your input. It has an implicit, invisible dependency that changes its behavior. It is not an object under your control, but rather an independent actor that you need to negotiate with (just like another human being).
- objects that can arbitrarily modify their own behavior. We have this kind of object in languages with metaprogramming (e.g. Lisp), but we rarely see anything similar in everyday life. Most of the time, objects are changed explicitly from the outside.
- objects that can implicitly change context or global state. In the real world, if something changes, the change is usually visible and explicit.
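The closure-over-IO bullet can be illustrated with a sketch; a real socket is replaced here by a toy `Connection` class so the hidden dependency is easy to see:

```python
# A stand-in for a socket or DB connection with drifting internal state.
class Connection:
    def __init__(self):
        self.alive = True

    def send(self, msg):
        return "ok" if self.alive else "error"

def make_sender(conn):
    # conn is captured invisibly: callers of the returned function
    # cannot see what its result depends on.
    def send(msg):
        return conn.send(msg)
    return send

conn = Connection()
send = make_sender(conn)
first = send("hello")   # "ok"
conn.alive = False      # the hidden dependency changed behind our back
second = send("hello")  # "error" -- same input, different result

# The alternative that fits the brain better: pass the dependency
# explicitly, so every call site shows what the result depends on.
def send_explicit(conn, msg):
    return conn.send(msg)
```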
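The self-modifying-object bullet is also easy to demonstrate in Python, even without Lisp-style macros; the `Chameleon` class is a made-up example:

```python
# An object that rewrites its own behavior at runtime: perfectly legal,
# but there is no everyday-life analogue of a rock that reshapes itself.
class Chameleon:
    def speak(self):
        # The object replaces its own method the first time it is called;
        # the instance attribute shadows the class method from now on.
        self.speak = lambda: "something else entirely"
        return "hello"

c = Chameleon()
first = c.speak()   # "hello"
second = c.speak()  # "something else entirely"
```

Nothing at the call site hints that the two calls would differ, which is exactly why this kind of abstraction is hard for humans to track.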
How would our programming models change if we took these human limitations into account?