Timo Kullerkupp and Ken Kaarel Gross discuss artificial intelligence in everyday work


Artificial intelligence has been around for quite some time as an exciting tool to 'play' with. Increasingly, AI-based programs such as chatbots and translation tools are being used to speed up work. Until now this has mostly happened in good faith, but recently companies have started to establish internal rules on the use of artificial intelligence in order to mitigate risks, say partner Timo Kullerkupp and attorney Ken Kaarel Gross of RASK law firm.

In itself, employees' desire to complete certain tasks faster with the help of artificial intelligence is welcome. If, however, this happens solely at the employee's own discretion, without the associated risks being thought through at the company's management level and relevant rules of use being developed, the practice carries several risks that call for awareness and prevention.

Since the launch of ChatGPT, given the multitude of opinions expressed by legal experts, it comes as no surprise that the use of artificial intelligence is considered problematic primarily from the standpoint of intellectual property rights. Who is the author of AI-generated content, who holds the economic and moral rights to the work, and how can one make sure that the AI does not infringe someone else's intellectual property rights? These are questions that legal theorists still debate without consensus.

Instead of banning, one should be aware of the risks

Amid this legal ambiguity, each company should map the risks related to using AI-generated content, taking into account the specifics of its field of activity. Risks related to intellectual property rights deserve particularly close assessment in companies whose work consists mainly of creating copyright-protected works, be it software code or advertising copy.

The use of artificial intelligence also entails risks related to the processing of personal data and trade secrets. Beyond employees' own personal data, sensitive customer data is often at stake. Important business information that is subject to strict restrictions may likewise be fed into a translation tool or chatbot, and transmitting such data can compromise its confidentiality. This is especially problematic when there is no understanding of how the AI service stores the entered data and ensures its security. To mitigate these risks, every company should issue clear instructions for processing (personal) data with artificial intelligence and make sure that employees follow them.

It is important to be aware of the possible consequences, not out of fear but out of awareness. There is no substantive cap on damages for disclosure of a trade secret: depending on the situation, the figure may run into millions or even billions of euros. If the act qualifies as a crime, the person disclosing the data may also face a couple of years in prison, and the legal entity may, in addition to compensating the damage, be fined up to 16 million euros.

A new layer of data literacy

It should also be borne in mind that although artificial intelligence can sift through volumes of data incomprehensible to a human in a short time, it is not infallible. It was recently reported, for example, that ChatGPT's accuracy on a mathematical task dropped from 97.6% in March to just 2.4% in June. Answers provided by artificial intelligence may be outdated, biased, or factually wrong. The situation is further complicated by the fact that chatbots often present their answers without caveats, and people's trust in AI-generated material is very high.

It is therefore essential to explain the shortcomings of artificial intelligence to employees, carry out risk assessments that account for the company's particularities, and develop guidelines for critically evaluating AI output. Training should cover the possibilities, dangers, and shortcomings of the various tools, as well as what it means to exchange data with such systems. Viewed more broadly, this is data literacy in a modern form, and it remains a critically important skill.

Aware of this, and in order to mitigate industry-specific risks, more and more companies are establishing internal rules on the use of artificial intelligence. These can list, among other things, the approved AI services, procedures for their use, and the due diligence measures expected of employees.

The economic interests of the company are at stake

It should be emphasized that, beyond mitigating legal risks, establishing clear rules for the use of artificial intelligence also serves the company's economic interests. Surveys have shown that AI use is rarely discussed at the workplace because employees do not know whether management considers it acceptable. Establishing a set of rules signals that the use of artificial intelligence is permitted. Employees can then also share practical tips with each other on how AI can improve the performance of work tasks and thereby the operation of the company.

Current trends show that artificial intelligence has become, or will soon become, an important tool in many fields of work. It is therefore worthwhile for companies whose employees use, or will use, artificial intelligence in their work to address the corresponding risks now. Ignoring the use of artificial intelligence will sooner or later bring the company unpleasant surprises.