In 2016, advertising investment on the Internet in France exceeded, for the first time, advertising investment on television. At the same time, algorithms occupy an increasingly important place in the purchase of advertising space on the Web, which raises many ethical and legislative questions.
The rise of algorithms
The digital advertising market is now close to 3.5 billion euros. While advertising has until now mainly involved display on websites and the purchase of keywords (Google AdWords), the automated purchase of advertising space (“programmatic buying”) has emerged. Internet users are profiled through their browsing traces, which makes it possible to predict, at any moment, their interest in an ad. It is therefore possible to calculate in real time, using algorithms, a value for the advertising space on the page they are viewing.
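The real-time valuation described above can be sketched in a few lines. This is a deliberately simplified illustration, not a real ad-exchange API: the category labels, the base price and the scoring formula are all invented assumptions; actual programmatic systems use far richer models.

```python
# Hypothetical sketch of real-time ad-space valuation from browsing traces.
# All names, weights and categories here are illustrative assumptions.

def profile_interest(browsing_traces, ad_category):
    """Estimate a user's interest in an ad category from their page visits."""
    if not browsing_traces:
        return 0.0
    visits = sum(1 for page in browsing_traces if page["category"] == ad_category)
    return visits / len(browsing_traces)

def value_ad_space(browsing_traces, ad_category, base_cpm=1.0):
    """Compute a bid value for an ad slot: base price scaled by predicted interest."""
    interest = profile_interest(browsing_traces, ad_category)
    return round(base_cpm * (1 + 4 * interest), 2)  # more interest -> higher bid

traces = [
    {"url": "shop.example/shoes", "category": "fashion"},
    {"url": "news.example/sports", "category": "sports"},
    {"url": "shop.example/bags", "category": "fashion"},
    {"url": "blog.example/travel", "category": "travel"},
]
print(value_ad_space(traces, "fashion"))  # interest 0.5 -> bid 3.0
```

The key point the sketch makes concrete is that the bid is computed per user, per page view, from behavioral traces, which is precisely what distinguishes programmatic buying from traditional space buying.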
The use of algorithms has the advantage of displaying banners that match our interests, but their uncontrolled use carries risks. The opacity of their operation influences the behavior of Internet users without them being aware of it. Moreover, algorithms sometimes enjoy exaggerated trust, even though their results can be discriminatory. This raises the question of the neutrality and ethics of algorithms. Answering it requires an understanding of how we relate to these new technologies: examining, on the one hand, how algorithms are apprehended by the law and, on the other, how the digital advertising ecosystem is evolving.
Faced with these new challenges, it would be wise to focus not on the data processed but on the algorithms themselves, by putting in place systems capable of testing and controlling them in order to prevent their harmful consequences.
Law and algorithms: the European reform
A new revolution is underway, based on the collection and processing of data on an unprecedented scale, which fosters the emergence of new products and services. This increase in the mass and diversity of data can be explained in particular by the development of connected objects and the growing empowerment of consumers. Their capacity to act has increased with the evolution of technology: companies are becoming ever more dependent not only on the data produced by consumers, but also on their opinions, and must therefore constantly maintain a good e-reputation.
Faced with this situation, the European institutions have begun reforming the legislation on personal data. The new General Data Protection Regulation (GDPR) will come into force in May 2018 in all member states. It requires greater transparency and accountability from the actors processing data, following a logic of compliance with the law, and provides for heavy penalties. Likewise, it asserts a right to data portability, and data controllers must ensure that processing respects the protection of personal data from the very design of the product or service (“privacy by design”).
The GDPR thus strives, implicitly, to regulate the algorithmic processing of data. The same trend appears in the advertising sector: in general, sites, services and products using algorithms take care not to name them and to conceal their determining role, preferring the term “personalization”. Yet wherever there is “personalization”, there is most often “algorithmization”.
Legislation ill-suited to digital advertising
“Classic” advertising law is based on the principle of a person’s prior consent to the processing of their data. This conception of data protection, however, becomes less relevant with digital advertising. The data collected in traditional marketing often correspond to objective and relatively stable elements such as name, age, sex, address or family situation. In digital marketing, the notion of “data” changes radically: on social networks, it is no longer just a question of qualification data (age, sex, address), but also of data from daily life: what I have done, what I listen to, and so on.
This new situation calls into question the relevance of the distinction between personal and non-personal data, as well as the principle of prior consent. It is indeed often almost impossible to use an application without agreeing to be tracked. Consent becomes a precondition for using the technology, while the actual use the data controller will make of the data remains completely opaque. The problem therefore no longer lies in prior consent to data processing, but in the automatic and predictive deductions made by the companies that have captured this data.
Algorithms accentuate this trend by multiplying the capture and use of trivial, decontextualized data that can nonetheless contribute to the precise profiling of individuals, producing “knowledge” about their personal and intimate propensities that is based on probabilities rather than certainties. In this situation, wouldn’t it be more relevant to examine not the data that feed the algorithms, but rather the algorithms that process them and generate new data?
Legal and ethical challenges of online advertising
Steering consumer choices, subliminal influence, even a form of submission likely to alter the perception of reality: behavioral targeting carries significant risks. Requirements of accountability, transparency and verifiability of the actions resulting from algorithms therefore become essential to prevent possible abuses.
This situation raises the relationship between law and ethics, two notions that are unfortunately often confused. Law refers to the regulation of behavior by legal rules, to what is permitted, prohibited or required from a legal point of view, while ethics refers more broadly to the distinction between good and evil, independently of, or beyond, our strictly legal obligations. An ethics applied to algorithmic processing should be structured around two main principles: transparency, and the implementation of tests designed to control the results of algorithms in order to prevent possible harm.
Transparency and accountability of algorithms
The activities of online platforms rest essentially on the selection and ranking of information and on offers of goods or services. Platforms design and deploy various algorithms that guide users’ consumption behavior and patterns of thought. This personalization is sometimes misleading, because it is based on the machine’s conception of our way of thinking; that conception rests not on what we are, but on what we have done and what we have consulted. This observation implies a need for transparency: people affected by an algorithm should be informed of the existence of algorithmic processing, of what it entails, and of the type of data it uses and for what purposes, so that they can, if necessary, lodge a complaint.
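The disclosure items listed above (existence of processing, data used, purposes, a channel for complaints) could take the form of a machine-readable notice attached to each algorithmic decision. The sketch below is purely illustrative: the field names and contact address are invented assumptions, not a standard or an existing API.

```python
# Illustrative sketch of a transparency notice attached to an ad decision.
# Field names and values are hypothetical, chosen to mirror the disclosure
# items discussed in the text.

from dataclasses import dataclass

@dataclass
class AlgorithmicNotice:
    algorithm_used: bool            # existence of algorithmic processing
    data_categories: list           # type of data the algorithm uses
    purposes: list                  # what the processing is for
    contact_for_complaints: str     # where affected people can complain

    def summary(self) -> str:
        """Human-readable disclosure string for display next to the ad."""
        return (
            f"Algorithmic processing: {'yes' if self.algorithm_used else 'no'}; "
            f"data: {', '.join(self.data_categories)}; "
            f"purposes: {', '.join(self.purposes)}; "
            f"complaints: {self.contact_for_complaints}"
        )

notice = AlgorithmicNotice(
    algorithm_used=True,
    data_categories=["browsing history", "declared interests"],
    purposes=["ad personalization"],
    contact_for_complaints="privacy@advertiser.example",
)
print(notice.summary())
```

Structuring the disclosure as data rather than free text would let regulators, browsers or auditing tools check it automatically, in the spirit of the accountability requirements discussed here.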
Towards algorithm tests?
In the field of advertising, algorithms can lead to differentiated prices for a product or service, or even to typologies of risky policyholders for calculating insurance premiums, sometimes according to illicit criteria obtained by cross-checking “sensitive” information. Not only is the collection and processing of such data (racial or ethnic origin, political opinions, religious beliefs) prohibited in principle, but the results of these algorithmic manipulations can also be discriminatory. The first international beauty contest judged entirely by algorithms thus selected almost exclusively white candidates.
To avoid this type of drift, it is urgent to put in place tests of algorithm results. Beyond legislation and the role played by data protection authorities (such as the CNIL in France), codes of good conduct are also appearing: advertising professionals grouped within the Digital Advertising Alliance (DAA) have adopted a protocol, materialized by an icon displayed next to targeted advertising, to explain how the targeting works.
Companies have every interest in adopting more ethical behavior in order to preserve their good reputation, and thus a competitive advantage. Internet users are indeed growing weary of advertising deemed too intrusive. If the ultimate goal of advertising is to anticipate our needs as well as possible so that we can “consume better”, it must operate in an environment that respects legislation and, above all, conforms to responsible and ethical innovation. This can be the vector of a new industrial revolution, attentive to fundamental rights and freedoms, in which citizens are called upon to take their full place by being given back power over their data.
This article was originally published as “Digital advertising and supervision of algorithms”.