Demonstration held in Madrid last July to demand decent wages. Gustavo Valiente (Europa Press)
What is a fair wage? Can artificial intelligence establish it? Kotaro Hara, a professor of computer science at Singapore Management University, believes the first question poses “a problem that urgently needs to be solved.” After all, an income pact is one of the solutions proposed for the current crisis. Added to this is the extreme precariousness of the least qualified sectors (employees in delivery or domestic work can earn between 9 and 14 euros per hour) and, on the opposite front, the open wage war to attract workers in technology sectors, where demand is highest. The second question, whether artificial intelligence can establish a fair salary, has contradictory answers: yes, because it can provide the tools to establish how much a given task should pay; but no, because without human supervision the algorithm can lead to wrong decisions. Even so, many companies are beginning to use artificial intelligence to set their salary policies.
Kotaro Hara is investigating the possibility of developing an equation to establish a fair salary for unskilled tasks, but he starts from one premise: interaction between people and computers is necessary. Josep Capell, director of Ceinsa, a finalist for the InnovaRH awards with an application that proposes an adequate salary for each position, and Juan Ignacio Rouyet, professor at the International University of La Rioja (Unir) and lead technology-strategy consultant at Quint, share this view.
A study by a German team, published on September 29 in Patterns, endorses this caution, concluding: “People perceive a decision as fairer when humans are involved.” Along these lines, Christoph Kern, of the University of Mannheim and co-author of the study, states: “As expected, fully automated decision-making is not favoured. What is interesting is that when there is human oversight over automated decision making, the level of perceived fairness becomes similar to human-centric decision making.”
Artificial intelligence feeds on databases to find out how much the market pays for a given activity. But there are no universal parameters. Should the same work pay the same in Cádiz as in Barcelona? Is stocking shelves at night, for example, the same as doing it during the day? Should someone with experience earn the same as someone who has just started? Can an SME pay the same as a large company? The Ceinsa application incorporates these conditions through a dialogue between the employer and the program. “We make the app take unique features into account,” explains Capell.
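The kind of condition-by-condition adjustment described above can be pictured with a toy model. This is only an illustrative sketch: the base rate, the cities, and every adjustment factor below are invented for the example and bear no relation to Ceinsa’s actual application.

```python
# Toy salary model: a market base rate adjusted multiplicatively for
# the conditions the article mentions (city, shift, experience,
# company size). All numbers are hypothetical.

BASE_HOURLY_RATE = 11.0  # hypothetical market median, in euros

# Hypothetical adjustment factors (1.0 = no change)
CITY_FACTOR = {"Barcelona": 1.10, "Cádiz": 0.95}
SHIFT_FACTOR = {"day": 1.00, "night": 1.15}
SIZE_FACTOR = {"sme": 0.95, "large": 1.05}

def proposed_hourly_rate(city, shift, years_experience, company_size):
    """Combine the base market rate with condition-specific factors."""
    rate = BASE_HOURLY_RATE
    rate *= CITY_FACTOR.get(city, 1.0)
    rate *= SHIFT_FACTOR.get(shift, 1.0)
    rate *= SIZE_FACTOR.get(company_size, 1.0)
    # Small premium per year of experience, capped at 20%
    rate *= 1.0 + min(0.02 * years_experience, 0.20)
    return round(rate, 2)

print(proposed_hourly_rate("Barcelona", "night", 5, "large"))  # → 16.07
```

In a real system each factor would come from market data and from the employer-program dialogue the article describes, not from hard-coded constants.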
But this is only a baseline. As the manager explains, many waiters complain that someone without training is paid the same as an experienced colleague, or that each worker’s trajectory within the company, the value added by their competence or their ability to meet objectives, is not taken into account. “That’s much more complicated, but surely, in the long run, it will be the next step,” he says.
Money is not everything in the job market, but it is key. Diego Velázquez, a 22-year-old programmer at a multinational he prefers not to identify, studied a vocational training module, did his internship a year ago at the company where he now works and started as the youngest in his unit. Today he is the most senior. The proximity of the headquarters and the possibility of teleworking 80% of the time have kept him there, while his colleagues have been poached by other technology companies with raises of between 300 and 500 euros per month. He considered leaving six months ago, but a 200-euro monthly raise was enough to make him stay, although the idea still haunts him. “I get offers every day,” he says.
“You have to take into account,” explains Capell, “the remuneration part, which is the fixed salary plus variables, training, development, recognition, work-life balance, flexible remuneration or social benefits. But pay is a hygiene factor and you have to keep it in order. It is very good that they let you telecommute and train you, but only as long as the salary is correct, balanced and equitable with the rest of the positions in the organization and, in addition, allows you to maintain a decent standard of living. Telecommuting and flexibility are already taken for granted. It is very good to have all that, but give me the money I need for my quality of life.”
Pay is a hygiene factor and you have to keep it in order. It is very good that they let you telework and train you, but only as long as the salary is correct, balanced and equitable with the rest of the positions
Josep Capell, director of Ceinsa
The manager affirms that the ideal model is one in which both the company and the worker win: “If the two axes are very unbalanced, it is difficult to sustain over time.” He also points to transparency as key so that the organization can be flexible depending on the circumstances. “That is the model to move forward: one that is fair and solid, that allows the organization to survive and ensures the worker is not the one who suffers the loss of purchasing power.”
It is about avoiding what the economist Jan Eeckhout called “the profit paradox,” the title of his book (The Profit Paradox), whose effects extend to workers and also to consumers: “Instead of transmitting the advantages of the best technologies to consumers, superstar companies take advantage of them to achieve even higher profit margins. The consequences are huge, from unnecessarily high prices for almost everything, to fewer start-ups to compete, to rising inequality and frozen wages for most workers, to highly limited social mobility.”
Thus, artificial intelligence for setting a fair salary is a necessary but not sufficient tool. It provides the information needed to make the most appropriate decision, but it also carries dangers: the programmer’s bias if, for example, labels for gender, age or origin are introduced. Ceinsa, according to its director, excludes such labels from its application in order to define positions as objectively as possible. But Capell warns: “I would not cede the decision only to the machine, to artificial intelligence.”
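In code, the exclusion Capell describes amounts to stripping protected labels from a record before any model sees it. The field names and record below are hypothetical; the article only says Ceinsa excludes such labels, not how.

```python
# Illustrative sketch: remove protected attributes from a candidate
# record before it reaches a salary-setting model. Field names are
# invented for the example.

SENSITIVE_ATTRIBUTES = {"gender", "age", "origin"}

def strip_sensitive(record):
    """Return a copy of the record without protected labels."""
    return {k: v for k, v in record.items() if k not in SENSITIVE_ATTRIBUTES}

candidate = {
    "role": "delivery rider",
    "city": "Madrid",
    "years_experience": 3,
    "gender": "F",  # protected: must not influence the proposal
    "age": 42,      # protected
}

print(strip_sensitive(candidate))
# Only role, city and years_experience remain
```

Note that dropping the labels does not by itself remove bias: other fields can act as proxies for them, which is one reason the experts quoted here insist on human oversight.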
Juan Ignacio Rouyet, the Unir professor, agrees. “Establishing a salary based on artificial intelligence is technically feasible and, in fact, it is already being done. Ethically, it can be done as long as we are clear about certain principles, such as transparency about the criteria by which these salary values are being set. Artificial intelligence automates certain processes, but the important thing is to know what those criteria are and that the human being intervenes, that it is, in the end, a person who decides.” “Who would fly in a plane without a pilot, even if you could?” he asks.
Artificial intelligence automates certain processes, but the important thing is to know what the criteria are and that the human being intervenes, that it is, in the end, a person who decides. Who would fly in a plane without a pilot, even if you could?
Juan Ignacio Rouyet, professor at Unir
The expert in technology strategies adds that this element is also essential to establish responsibility, so that no one hides behind the machine having made the decision, because the machine is not “aseptic”: “It feeds on data that can contain more men than women, or more people of a certain qualification, and the algorithm can be designed one way or another.” “If justice were a matter of an algorithm, we would have already developed it,” he notes by way of example.
Entrusting decisions of social importance to a machine leads to what the professor calls “digital despotism.” “We are already living it, because we have algorithms whose mission is supposedly to work for our good, but without consulting us. To avoid this, we must assert our rights. The procedure by which an organization decides a salary must remain the same, only now the information is available; but the decision mechanism cannot eliminate the human being.”
Rouyet warns of two more fundamental elements: human supervision cannot be random; it must fall to “someone who really knows where the biases are occurring,” and artificial intelligence has to be explainable: “if it says that position x has a given salary range, we must know how to explain why.” “A person in charge can make the decision to hire someone taking the result that artificial intelligence gives as input, as one more variable, but it is a tool. Then we decide,” he concludes.
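The explainability Rouyet demands can be sketched as a salary proposal that returns not just a number but the contribution of each factor. Everything here, the base salary, the factor names and their values, is invented for illustration.

```python
# Illustrative sketch: an explainable salary proposal that records
# how each (hypothetical) factor changed the figure, instead of
# returning an opaque number.

def explain_salary(base, factors):
    """Apply multiplicative factors to a base salary, logging each step."""
    steps = []
    salary = base
    for name, factor in factors.items():
        before = salary
        salary *= factor
        steps.append(f"{name}: x{factor} ({before:.0f} -> {salary:.0f})")
    return round(salary), steps

salary, steps = explain_salary(
    22000, {"city": 1.05, "experience": 1.10, "night_shift": 1.08}
)
for line in steps:
    print(line)
print("proposed annual salary:", salary)  # → 27443
```

A reviewer can then check each step, which is exactly the role the article reserves for the human: the model supplies one input among others, and a person decides.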