
Artificial Intelligence: implications for data protection and intellectual property

Faced with the accelerated advance of artificial intelligence around the world, many countries are beginning to consider how their legislation protects both users and the products created through this technology. Although the benefits are significant, many argue that using these tools feeds the algorithm with our personal data and private life experiences. Lawmakers are therefore beginning to draw the limits of this invention against individuals' rights to data protection and intellectual property.

On that basis, in this article we will identify the impact of artificial intelligence on data protection and intellectual property, as well as the legal efforts around the world to regulate this technology.

What is data protection?

Personal data protection is a set of legal and technical measures that seek to ensure the confidentiality of the information of people who use technology. It is based on controlling the personal information shared in online environments, such as tax returns, phone calls, and emails, among others.

It rests on the universal principle that individuals should decide whether or not to share certain data, while retaining the ability to modify their decisions as needed.

What is intellectual property?

Intellectual property refers to the protection of products of the human intellect in the literary, artistic, and industrial fields. It is the legal protection exercised through patents, trademarks, and copyrights, enabling the recognition of inventions or creations in any environment.

How do artificial intelligence, data protection and intellectual property relate?

Data protection and intellectual property are being challenged by the worldwide development of artificial intelligence, since its use involves the processing of massive amounts of data, including the personal data of the person running the algorithm. It also draws an ambiguous line between intellectual creation carried out by a person and creation carried out by a machine.

In different parts of the world, several countries have established their positions on this technology and have begun introducing laws to build legal frameworks around these concerns. Such is the case of the Norwegian Data Protection Authority, which recognizes that the vast majority of AI-related applications require large volumes of data to learn and make decisions.

Similarly, the European Union established that the authorship of creations corresponds to the individuals who perform the creative work and execute the commands that guide the algorithm. Therefore, even if the application does not show its sources or the data analyzed, authorship corresponds to the person who executes the command, as well as to the original sources of each reference.

On the other hand, constant concern arises from the lack of control over the data that feeds these algorithms, since these technologies are prone to leaks of personal data, often without any repercussions.

ChatGPT and its legal conflicts in Italy

An example of such conflicts is the case of ChatGPT in Italy, where the Italian Data Protection Authority decided to block the chatbot until it had an adequate legal basis. Faced with the absence of a data protection policy on the part of the app, the State preferred to restrict its use until that legal basis was brought into line with the country's legal standards.

Thus, given the growing alarm over security breaches in ChatGPT, which neither allowed users to consent to the use of their data nor verified users' ages before running the application, there were many risks that it was preferable not to assume.

Faced with this restriction, OpenAI, the company responsible for the artificial intelligence app, made adaptations to its internal policies in order to comply with the country's legal system. It established a section on its website explaining what types of data are processed to train the algorithm, and it adapted its terms and conditions of use, adding security filters to prevent unsupervised use by children under 13 years of age.

In view of these adjustments, the Italian authority stated that this legal settlement is a breakthrough in combating the negative effects of this technology, allowing ChatGPT to be used again in the country.

If you would like more information or need advice on this and other law-related topics, do not hesitate to contact us through our form. We also share our social networks so that you can stay updated on trending legal topics: Instagram, Facebook, Twitter and LinkedIn.