With the expansion of computational power over the last decade, artificial intelligence (AI) models have been gaining ever more ground across industry and academia. In an article written in 2021 and signed by a relevant group of researchers (including our co-founders, Adriano Koshiyama and Emre Kazim), it was stated that we are entering the Age of Algorithms (whether AI, machine learning, or similar). This statement leads naturally to the idea that we are increasingly close to the Age of AI Economies: work processes, the way different markets are organised, the consumption patterns of economic agents, and the way economic phenomena occur and are analysed are all permeated by algorithms whose impacts are still largely unknown.
In the last few months, large language models (LLMs) such as ChatGPT and GPT-4 have rapidly become household names and have moved markets that once seemed stable. New businesses have been developed (in the areas of entertainment, health, education, content production for social networks, productivity systems, etc.), as well as new professions (such as prompt engineering).
In this new Age, algorithms bring challenges for regulators, researchers, companies, and society. We are facing a movement with repercussions across different economic and social sectors; within the context of Industry 5.0, it will open up advances in areas such as medicine, industrial automation, geoengineering, and biotechnology through the integration of machines and humans. Industry 5.0 is a concept that aims to combine the benefits of Industry 4.0 technologies (connecting machines and systems through AI and the Internet of Things) with human-centred values (such as creativity, sustainability, and collaboration).
Figure 1 presents a simplified view of how an economy is organised when artificial intelligence is a central element in the decision-making of households, companies, and governments. The dark purple (counterclockwise) flow represents the circulation of real assets: AI workers offer their labour and skills in an AI Market for Factors of Production, and companies buy that labour and those skills to build their AI systems. Following the same flow, companies offer their AI products in an AI Market for Goods and Services (software and hardware for AI), and households buy these goods and services.
The light purple (clockwise) flow shows the financial circulation associated with the AI Economy. Companies pay salaries to AI workers, and those salaries become household income. That income is then spent on purchasing or using AI goods and services. Finally, the income of AI workers returns to companies as revenue (and profit).
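The financial flow described above can be sketched as a toy calculation. This is only an illustrative simplification of the circular-flow idea, not a model from the article: the function name, the single `spending_rate` parameter, and all figures are hypothetical.

```python
# Toy sketch of the clockwise (financial) flow: wages -> income -> spending -> revenue.
# All names and numbers are hypothetical illustrations of the circular-flow idea.

def circular_flow(wage_bill: float, spending_rate: float) -> dict:
    """One pass of the financial circulation between companies and households."""
    income = wage_bill                 # salaries paid to AI workers become household income
    spending = income * spending_rate  # the share of income spent on AI goods and services
    revenue = spending                 # household spending returns to companies as revenue
    return {"income": income, "spending": spending, "revenue": revenue}

flows = circular_flow(wage_bill=100.0, spending_rate=0.8)
print(flows)
```

In this stylised pass, the part of income not spent on AI goods and services simply leaks out of the loop; a fuller model would also route flows through the government and the factor markets shown in Figure 1.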
The relationship between the government, households, and companies is not modelled here in the traditional form of tax payments. Instead, households demand more responsible AI systems (fair, explainable, robust, and secure) to increase well-being, and the government regulates the use of AI by companies, which must follow the established rules.
This flow summarises how the main economic agents behave within an AI-based system. The interrelation between them becomes clear, as does the need for governments to act as mediators of any conflicts of interest that may exist between households and companies.
In the case of AI models, there are some points that must be treated with care. Among these points, we can highlight the presence of biases (gender, race, age, nationality, etc.) in the data used to train the models, the difficulty of explaining the results in a humanly understandable way, issues such as privacy and security, and ethical aspects.
In response to these tensions, important initiatives have emerged to mitigate the adverse effects of AI systems, such as those led by the Defense Advanced Research Projects Agency (DARPA), part of the United States Department of Defense. Developing methods that make models safe and transparent to end-users is a major challenge for the coming years, and meeting it will make artificial intelligence even more present in our daily lives.
These advances are not escaping the notice of governments around the world. The widespread use of AI models by market agents can generate adverse systemic effects that are already being discussed and investigated by regulators in several countries. In September 2021, the UK government launched a programme called the National AI Strategy. This initiative aims to define guidelines and analyse how the use of AI can help governments and private institutions build more resilient and productive processes, while addressing ethical issues and contributing to economic growth and innovation.
In addition, several countries, such as Spain, China, and the Netherlands, are developing their own AI strategies and regulations, and it is not just developed economies that are advancing in this area. The Brazilian government launched the Brazilian Artificial Intelligence Strategy (EBIA in Portuguese) in April 2021. Geared towards the development of research and innovation in AI, EBIA comprises 73 strategic actions, ranging from legislative and regulatory adaptation for the use of AI to the training and qualification of professionals to work in the field.
The debate surrounding the use of AI, especially among thinkers in the humanities and social sciences, is also rapidly expanding. The topic is no longer limited to the code and mathematical models developed by computer scientists and engineers; it is increasingly on the research agenda of philosophers, sociologists, anthropologists, and economists, among others.
This is due to the interdisciplinarity that the advancement of the topic and its widespread use by society bring. As a result, there is also a new and rapidly growing AI job market in developed and developing economies.
The AI job market can be explored at the research level (specialised professionals who build computational models), the operational level (professionals who deal with regulatory compliance, risk management, etc.), and the abstract level (professionals able to discuss and propose solutions in debates on the ethical use of data and algorithms).
We are facing a scenario in which artificial intelligence is embedding itself ever more deeply in everyday society. This is happening rapidly, reshaping the way we interact with technology and with each other. At this moment of transition, it is up to society to discuss the right path for this technological advancement, especially in the case of artificial intelligence and its applications.
The existing relationships between governments, households, and companies are important for exploring the flows of real assets (labour, goods, and services) and the financial flow associated with an AI economy. In addition, the role of governments within this system is important for building a balance between the well-being of society and the benefits of the technology.
It is also worth highlighting that governments must continue to act to mitigate the adverse effects generated by the misuse of AI systems. Countries need to keep developing regulations so that the ethical use of AI does not harm the most vulnerable in society, and to ensure that these systems act to increase fairness, explainability, robustness, and security.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.