Features of the Future: How Digitalization Influences the Social Sphere and Economics


Nobody would deny that the evolution of computers and networks has already led, and will continue to lead, to substantial changes in our society – in social structure, art and science. Such changes cannot be planned in advance; they happen as society's natural reaction to new IT capabilities.

In society, as in any other extremely complex system, such changes are difficult to predict. This is evident in the themes discussed in futurology and science fiction some 50 years ago: they anticipated nuclear energy, space flight, weather control and complex computation, but not information technology, miniaturization and global connectedness.
Therefore, it is worth drawing attention to some peculiarities of IT which have already emerged but have not yet had a direct impact on our society, or which, perhaps, are already beginning to affect our lives, though to a small and almost imperceptible extent.


Attention span

The first and most substantial point is a significant decrease in attention span. The ease with which we can access new information, combined with the peculiarities of the human mind as a feedback machine that reinforces useful behavior, has led us to prefer small chunks, or “snippets,” of information. We find it increasingly difficult to focus on longer work or study, or on any large body of interrelated data. This becomes more evident with each new generation, gradually but irrevocably.

It would not be right to blame social media, journalists or anyone else. On the contrary, this is a characteristic built into the construction of the brain itself, or (in IT terms) a “bug” that emerges under certain external conditions. We cannot fix this bug now; we have to learn to live with it and take it into account when planning education, work and research.


Increased specialization

Closely related is increasing specialization and the disappearance of encyclopedic knowledge. We already rely on external search and classification of information; scientific research is almost entirely based on computers and the Internet.

The world is constantly changing under the pressure of technological progress. Society becomes stronger, but each individual becomes physically weaker. Science evolves, but each individual becomes a little more “stupid,” relying more on external knowledge (the computer) than on their own brain.

Global connectedness and dependence on computers also bring benefits. For example, they make equipment more accessible to scientists: physical presence at the site of an experiment is no longer necessary, and a team can be distributed across countries and continents. Moreover, in many scientific experiments (if not all), human presence will become completely unnecessary; experiments can be conducted in places that humans have not yet reached or cannot physically reach.

If computers can control or conduct experiments, then they can also create new knowledge with expert systems or artificial intelligence. Such “new” knowledge, however, will be logically inexplicable to humans. We do not yet have a very good understanding of how the human brain works, but most scientists agree that there is a large autonomous section (the subconscious) which generates new knowledge, and another section (the mind) which provides logical explanations for it, structuring and simplifying it for future use.



Most modern AI systems try to imitate the work of the subconscious, while the other part – structuring and simplifying for human perception – still evades implementation. Moreover, it is not quite clear whether it will be able to work as a system “for human perception” or it will structure information to make it “understandable” for other computers (not humans).

Paraphrasing Arthur C. Clarke, the British science-fiction writer and futurologist, we may say that any sufficiently developed knowledge is indistinguishable from magic. For complicated areas of knowledge this is already somewhat true; computer-generated knowledge, however, will be indistinguishable from magic for all humankind.

For example, we currently cannot logically (i.e., with rules and equations) explain weather forecasts made by neural networks from current and past data. Even in a simpler area: in most cases we cannot “explain” the decisions of a scoring machine that predicts credit risk from historical data on bank clients. We get a conclusion, but no “rational explanation.”
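The point can be made concrete with a toy sketch. Below is a minimal one-hidden-layer network scoring a hypothetical bank client; the weights and features here are invented for illustration, not taken from any real system. Even in this tiny model, the path from inputs to output is nothing but nested weighted sums and nonlinearities, so the score comes with no human-readable rule attached:

```python
import math
import random

random.seed(0)

def mlp_score(features, w_hidden, w_out):
    """Score a client with a tiny one-hidden-layer network.

    The output is a probability-like number in (0, 1), but the
    computation is just nested weighted sums passed through tanh
    and a sigmoid -- there is no rule or equation a human could
    read off as the "reason" for the decision.
    """
    hidden = [math.tanh(sum(w * x for w, x in zip(row, features)))
              for row in w_hidden]
    z = sum(w * h for w, h in zip(w_out, hidden))
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes to (0, 1)

# Hypothetical "trained" weights (random here, purely to show the
# shape of the computation; a real model would learn them from data).
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
w_out = [random.uniform(-1, 1) for _ in range(4)]

# Hypothetical client features: income, debt ratio, years of history.
client = [0.7, 0.2, 0.5]
score = mlp_score(client, w_hidden, w_out)
print(f"credit risk score: {score:.3f}")  # a number, not a reason
```

A real scoring model has thousands or millions of such weights, which is exactly why its conclusions resist the kind of rule-based explanation the text describes.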
If we look at current IT projects and analyze their connection with society, we can note the following trends.

More and more communication happens through digital channels in all spheres, from personal communication and games to medicine, interaction with public authorities, and science. Mobile and cable network operators are gradually transforming into providers of access to a single unified network: they will either occupy a new niche (like companies that build and operate highways) or find new ways of interacting with other segments: retail, logistics, the entertainment industry, public bodies, etc.


Digitalization of communication

The digitalization of communications will lead to the complete identification of all agents involved, both organizations and people. Anonymity as such will soon disappear almost completely, because both businesses and governments are interested in its elimination.

Mechanisms of identification (both formal and indirect) will develop, as will mechanisms of data storage, mapping and processing. Personal data protection laws, although they pursue the right objectives, are currently far from actionable; it is still unclear whether a genuinely useful, consistent and adequate mechanism can be implemented.

The accumulation of large amounts of data will drive the further evolution and deployment of machine-analysis systems (neural-network AI and systems as yet unknown), used to identify interactions between subjects, analyze behavioral patterns, make forecasts, and sometimes (illegally or in a “gray” zone) generate new data indistinguishable from the real thing: fake videos, fake news, voice imitation, etc.