How Technologies Adjust - and So Do We
Whenever we adopt an idea or feel we have invented something new, we walk through a door. Every evolutionary step, every generation of ideas, provides these doors to us. When we actually walk through a door, we have accepted the idea underlying it. Once through, though, we see a new set of doors. Maybe three or four doors now stand in front of us, doors we had not seen before. We did not even know they were options. Now that we have walked through the first door, we see the next set of doors and pick one to walk through.

The concept of “the adjacent possible” describes this map of doors: the set of realities the status quo can lead to once we walk through the first door. Steven Johnson notes that the adjacent possible “captures both the limits and the creative potential of change and innovation”. The limits are set by what we can see from the status quo; we simply do not know what the next door holds, and we can only find out by opening it. The creative potential, on the other hand, is set by our capability to open the doors and keep exploring what lies behind the next one. Indeed, we are afraid at times, since a closed door could hide danger.
Yet, our drive for progress leads us to open another door at every turn. And the further we walk through door after door, the further we move away from where we started. This is how some of the robots in our life have become what they are today, and who knows what they might become after we open the next door.
Of course, there is a long list of evolutionary steps leading up to the personal computer, but let’s start there. Do you remember the first time you opened Excel on your computer? How would you describe the experience of using the software from today’s perspective? Did it feel natural? Was it easy to use? I did not think so when I first used it. There was a lot to learn.
Most technological systems around us (for the sake of categorical simplicity, we’ll call them all “robots”) have, for a long time now, forced us humans to adjust to whatever the logic of the technology required. We literally had to learn to use the software, meaning we had to form new neural pathways in our brains to adapt to the software’s requirements.
After the first generation of interfaces became more visually appealing, they started mimicking inanimate objects in our environment. The desktop looked like a desk, files looked like paper files, and folders would hold files, just like in the real world. The process of duplicating a file would even be called “copying” it.
Some time later, interfaces started animating and anthropomorphizing inanimate objects. Now paperclips talked (though Microsoft’s Clippy was hardly of any help when working on a document) and bounced around. When you threw something in the trash folder, an invisible hand would crumple the paper and toss it in the basket (which resembled the basket next to your real-world desk).
Then we expanded how we interact with the interface, adding new input sources alongside the mouse and keyboard, such as voice commands and touch screens. As we went mobile with our personal computers (sort of robots, too), software had to adjust to smaller screens and different functionality. For the first time, interfaces started adjusting to humans, rather than humans having to adjust to the requirements of software.
Next, our robots learned to talk to us and to take commands in natural language, which is, even now, transforming how we interact with our computers and devices. Chatbots learned to mimic us and hold conversations the way a human being naturally would.
In the near future, robots will connect and exchange information to serve us without our doing anything. Think of a robot sitting in every conference room, displaying information it thinks might be relevant to the conversation in the room. When the conversation turns to a sales report, the report will be right there, ready for you to look at. If the conversation is about a marketing campaign, including a YouTube clip your competitor made, the clip will be ready to play without your doing anything.
The doors we walk through open up opportunities to us and challenge us. We cannot know for sure what is behind the next door. We also cannot know how many doors there are. But without going through them, we will not be part of creating our future. It will simply happen to us and that is not a good idea.
Robots became human-like because we humans prefer to interact with other humans. We designed them step by step to serve our needs. The ideas behind them went through hundreds of evolutionary generations before they became what they are today.