Caring for the population or one’s earnings? A dilemma for marketers in the pharmaceutical industry

Loréa Baïada-Hirèche, Institut Mines-Télécom Business School; Anne Sachet-Milliat, ISC Paris Business School; and Bénédicte Bourcier-Béquaert, ESSCA École de Management

The pharmaceutical industry is rocked by scandals on a regular basis. Oxycodone, for example, a highly addictive opioid analgesic, has been distributed on a massive scale in the United States and implicated in some 200,000 overdose deaths there since 1999.

Closer to home, it took more than 15 years for Servier Laboratories’ Mediator to be withdrawn from the market, even though its prescription as an appetite suppressant, outside its initial therapeutic indication, claimed numerous victims, including 2,000 recorded deaths. The outcome of the trial in March 2021 highlighted not only the responsibility of doctors, but also that of the laboratories producing these drugs, as was also the case for Levothyrox, manufactured by Merck.

These scandals are merely the visible manifestation of the constant tension in this sector between the pursuit of profit and its fundamental health mission. The marketing professionals responsible for promoting medicines to patients and doctors seem particularly affected by this ethical conflict, which can lead them to question their real mission: treating or selling?

In the course of our research, we set out to discover how marketers in the pharmaceutical sector perceive this quandary and how they deal with it.

Economic interest but a health mission

The ethical conflicts encountered can lead marketers into situations of “moral dissonance”. This refers to occasions when people’s behaviors or decisions conflict with their moral values. Because it brings into play elements which are central to people’s identity such as their values, moral dissonance can generate significant psychological discomfort, giving rise to guilt and affecting self-esteem.

The people affected will then engage in strategies designed to reduce this state of dissonance, which are mainly based on the use of self-justification mechanisms but may also include changing their behavior or seeking social support.

To understand the attitudes of pharmaceutical marketing professionals, we conducted in-depth interviews with 18 of them. These interviews revealed that such professionals are beset by ethical conflicts of varying severity, most of which relate to decisions that serve the company’s economic interest at the expense of its health mission. This may involve potential harm to patients, infringements of regulations or breaches of professional ethics. The conflicts seem to affect people more intensely when the choices have major impacts on patients’ health.

The Servier affair – a turning point

Our series of interviews revealed that three strategies are employed in an effort to resolve this conflict. The first strategy is to minimize the ethically sensitive nature of the issue, which means burying one’s head in the sand, ignoring the conflict or forgetting about it as quickly as possible.

For example, one respondent explains:

“I wouldn’t say that the pharmaceutical industry is whiter than white, either. There have been cases like Servier, of people who were dishonest. But that’s not the case for most people who work in the industry. They are happy to work in an industry that has made a positive contribution to society.”

According to these professionals, there is no conflict between the health and economic missions: making a profit is a way to finance medical research. This perspective makes pharmaceutical companies out to be “the main investors in health”.

In addition, they stress that their practices are very tightly regulated by law. Several respondents point out that Mediator was a landmark case:

“There is no longer a problem because everything has been regulated. Problems caused by conflicts of interest such as the Servier case are over, they can’t happen anymore. There truly was a before and after Mediator, it really changed things.”

Unable to ignore the media-driven attacks on the pharmaceutical industry, they defend themselves by denouncing the media’s role in stirring up controversy, the headlines that seek to “create a buzz” and the “journalists who don’t have anything better to write about”.

Like heroes

In contrast, other respondents are well aware of the risks that the marketed product poses to patients. However, they claim to be taking these risks precisely for the patients’ sake. This is how one of them justifies supporting doctors in doubling the doses recommended under the regulations for children with serious pathologies:

“Even if it’s a product that is dangerous, potentially dangerous, and on which you don’t have too much hindsight, you tell yourself that you can decide, with the chief scientist, to support the doctors doubling the doses because there’s a therapeutic benefit.”

The emphasis on acting in the patient’s interest is disturbing because it leads marketers to conceal the economic dimension of their activity and to present it as a secondary concern. However, doubling the doses does indeed increase the sales of the product.

Paradoxically, referring to the patient’s well-being in this way can actually serve to endorse unethical acts, while sometimes enabling the marketers to present themselves as heroes who work miracles for their patients. One of them justifies his actions in this way:

“Our product was very beneficial to patients; everyone was grateful to us… First there were the health professionals who told us ‘Our patients are delighted, their cholesterol levels are really low, it’s great’ and then there were the patients who testified that ‘My doctor had been forcing me to take cholesterol-lowering drugs for the past three years and I was always in pain everywhere… I’ve been taking your products for two months now and not only is my cholesterol level low, but above all, I’m no longer in any pain whatsoever.’”

Their way of presenting their profession sometimes even makes them out to be acting as caregivers.

In the final strategy, some respondents note that the notion of profitability takes precedence over the health mission, and express their mistrust of the discourse developed by other sales professionals:

“Money has become so important these days, and I get the impression there is hardly any concern for ethics in the organizations and people marketing the products.”

The disillusionment of these marketers is such that, in contrast to the cases mentioned above, they can no longer find arguments to justify their marketing actions and reduce their malaise.

“I was not very comfortable because I felt like I was selling something that could possibly hurt people or even be fatal in certain cases. I was feeling a little guilty actually… I was thinking that I would have preferred to have been marketing clothes, or at least untainted products.”

The only way out of their dissonance seems to be to avoid problematic practices by changing jobs, companies, or even leaving the pharmaceutical industry altogether.

Training and regulation

What is the solution? It seems difficult to make recommendations to pharmaceutical manufacturers, given the doubts about top management’s real willingness to prevent unethical behavior by employees when such behavior serves the company’s economic interest.

However, highlighting the existence of moral dissonance and the psychological suffering it inflicts on employees should give these companies cause for concern. Studies show that these phenomena have negative consequences such as loss of commitment to work and increased staff turnover.

This is especially true in the pharmaceutical industry, which is involved in a noble cause – health – to which the respondents generally remain strongly attached.

Externally, an ethical dimension should be more systematically integrated into marketing training, especially in specialized health marketing courses.

Moreover, although the law has been tightened, particularly after the Mediator affair, this has not prevented the emergence of new scandals, notably in new markets such as implants. To protect citizens, the public authorities should therefore pay more attention to paramedical products, which are currently subject to less restrictive regulations.

Loréa Baïada-Hirèche, Senior Lecturer in Human Resources Management, Institut Mines-Télécom Business School; Anne Sachet-Milliat, Lecturer and Researcher in Business Ethics, ISC Paris Business School and Bénédicte Bourcier-Béquaert, Lecturer and Researcher in Marketing, ESSCA École de Management

This article has been republished from The Conversation under a Creative Commons license. Read the original article (in French).

IMPETUS: towards improved urban safety and security

How can traffic and public transport be managed more effectively in a city, while controlling pollution, ensuring the safety of users and, at the same time, taking into account ethical issues related to the use of data and the mechanisms that protect it? This is the challenge facing IMPETUS, a €9.3 million project receiving €7.9 million in funding from the European Union’s Horizon 2020 programme[1]. The two-year project, launched in September 2020, will develop a tool to increase cities’ resilience to security-related events in public areas. An interview with Gilles Dusserre, a researcher at IMT Mines Alès, a partner in the project.

What was the overall context in which the IMPETUS project was developed?

Gilles Dusserre: The IMPETUS project was the result of my encounter with Matthieu Branlat, the scientific coordinator of IMPETUS, who is a researcher at SINTEF (the Norwegian Foundation for Scientific and Industrial Research), which supports research and development activities. Matthieu and I have been working together for many years. As part of the eNOTICE European project, he came to take part in a use case organized by IMT Mines Alès on health emergencies and the resilience of hospital organizations. IMPETUS is also the concrete outcome of years of efforts by research teams at Télécom SudParis and IMT Mines Alès to promote joint R&D opportunities between IMT schools.

What are the security issues in smart cities?

GD: A smart city can be described as an interconnected urban network of sensors, such as cameras and environmental sensors, which generates a wealth of valuable data. In addition to better managing traffic and public transport and controlling pollution, this data allows for better police surveillance and adequate crowd control. But these smart systems increase the risk of unethical use of personal data, in particular given the growing use of AI (artificial intelligence) combined with video surveillance networks. Moreover, they increase a city’s attack surface, since several interconnected IoT (Internet of Things) and cloud systems control critical infrastructure such as transport, energy, water supply and hospitals (which play a central role in current problems). These two types of risks associated with new security technologies are taken very seriously by the project: a significant part of its activities is dedicated to the impact of these technologies on operational, ethical and cybersecurity aspects. We have groups within the project, as well as external actors, overseeing ethical and data privacy issues. They work with project management to ensure that the solutions we develop and deploy adhere to ethical principles and data privacy regulations. Guidelines and other decision-making tools will also be developed to help cities identify and take into account the ethical and legal aspects related to the use of intelligent systems in security operations.

What is the goal of IMPETUS?

GD: In order to respond to these increasing threats to smart cities, the IMPETUS project will develop an integrated toolbox that covers the entire physical and cybersecurity value chain. The tools will advance the state of the art in several key areas such as detection (social media, web-based threats), simulation and analysis (AI-based tests) and intervention (human-machine interface and eye tracking, optimization of the physical and cyber response based on AI). Although the toolbox will be tailored to the needs of smart city operators, many of the technological components and best practices will be transferable to other types of critical infrastructure.

What expertise are researchers from IMT schools contributing to the project?  

GD: The work carried out by Hervé Debar’s team at Télécom SudParis, in collaboration with researchers at IMT Mines Alès, resulted in the overall architecture of the IMPETUS platform, which will integrate the various smart city modules proposed in the project. Within this framework, the specification of the individual system components, and of the system as a whole, will be designed to meet the requirements of the final users (the cities of Oslo and Padua), but also to be scalable to future needs.

What technological barriers must be overcome?

GD: The architecture has to be modular, so that each individual component can be independently upgraded by the provider of the technology involved. The architecture also has to be integrated, meaning that the various IMPETUS modules can exchange information, thereby providing significant added value compared with independent smart city and security solutions that work as silos.

To provide greater flexibility and efficiency in collecting, analyzing, storing and accessing data, the IMPETUS platform architecture will combine IoT and cloud computing approaches. Such an approach will reduce the risks associated with excessive centralization of large amounts of smart city data and is in line with expected changes in communication infrastructure, which will be explored at a later date.
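To make this modular, message-based design more concrete, here is a minimal sketch in Python. It is purely illustrative and assumes nothing about the actual IMPETUS platform: the MessageBus, EdgeSensorModule and CloudAnalyticsModule names are hypothetical stand-ins for an edge (IoT) module that pre-aggregates sensor readings locally and a cloud-side analytics module that consumes only the summaries.

```python
# Hypothetical sketch: independent smart-city modules exchanging information
# over a shared message bus (not the actual IMPETUS API).
from collections import defaultdict
from statistics import mean
from typing import Callable

class MessageBus:
    """In-memory stand-in for the integration layer between modules."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(message)

class EdgeSensorModule:
    """Runs close to the sensors (IoT side); aggregates readings locally."""
    def __init__(self, bus: MessageBus, sensor_id: str):
        self.bus, self.sensor_id, self._buffer = bus, sensor_id, []

    def ingest(self, reading: float) -> None:
        self._buffer.append(reading)
        if len(self._buffer) >= 3:  # publish only small local summaries
            self.bus.publish("air_quality.summary", {
                "sensor": self.sensor_id,
                "mean": mean(self._buffer),
                "max": max(self._buffer),
            })
            self._buffer.clear()

class CloudAnalyticsModule:
    """Cloud-side module: consumes summaries and raises alerts."""
    def __init__(self, bus: MessageBus, threshold: float):
        self.threshold = threshold
        bus.subscribe("air_quality.summary", self.on_summary)

    def on_summary(self, message: dict) -> None:
        if message["max"] > self.threshold:
            print(f"ALERT from {message['sensor']}: peak {message['max']}")
        else:
            print(f"{message['sensor']} nominal (mean {message['mean']:.1f})")

bus = MessageBus()
CloudAnalyticsModule(bus, threshold=80.0)
edge = EdgeSensorModule(bus, sensor_id="station-12")
for value in [41.0, 38.5, 95.2, 40.1, 39.7, 42.3]:
    edge.ingest(value)
```

Keeping aggregation at the edge and exchanging only summaries over a shared bus is one way to obtain the looser coupling and reduced centralization described above; each module can be replaced independently as long as the message format is preserved.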

This task will also include the development of a testing plan. The plan will cover the prerequisites, the execution of the tests and the expected results. The acceptance criteria will be defined based on the priority and the percentage of successful test cases. In close collaboration with the University of Nîmes, IMT Mines Alès will work on an innovative approach to environmental risks, in particular those related to chemical or biological agents, and to hazard assessment processes.
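As a purely hypothetical illustration of acceptance criteria based on test-case priority and pass percentage (the thresholds below are invented, not taken from the IMPETUS test plan), such a check could look like this:

```python
from collections import Counter

# Invented thresholds: every high-priority test must pass,
# lower-priority tests tolerate a margin.
THRESHOLDS = {"high": 1.00, "medium": 0.90, "low": 0.75}

def acceptance(results, thresholds=THRESHOLDS):
    """results: iterable of (priority, passed) pairs.
    Returns (accepted, pass rate per priority)."""
    totals, passed = Counter(), Counter()
    for priority, ok in results:
        totals[priority] += 1
        passed[priority] += int(ok)
    rates = {p: passed[p] / totals[p] for p in totals}
    accepted = all(rates.get(p, 1.0) >= t for p, t in thresholds.items())
    return accepted, rates

results = [("high", True), ("high", True), ("medium", True),
           ("medium", False), ("low", True), ("low", False)]
print(acceptance(results))  # (False, ...): medium pass rate 0.5 < 0.90
```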

The consortium brings together 17 partners from 11 EU member states and associated countries. What are their respective roles?

GD: The consortium was formed to bring together 17 organizations that are complementary in terms of basic knowledge, technical skills, ability to create new knowledge, business experience and expertise. It comprises academic institutions (universities) and research organizations, innovative SMEs, industry representatives, NGOs and final users.

The work is divided into a set of interdependent work packages, involving interdisciplinary innovation activities that require a high level of collaboration. The overall strategy consists of iterative exploration, assessment and validation, involving the final users at every step.

[1] This project receives funding from Horizon 2020, the European Union’s Framework Programme for Research and Innovation (H2020) under grant agreement N° 883286. Learn more about IMPETUS.


Shedding some light on black box algorithms

In recent decades, algorithms have become increasingly complex, particularly with the introduction of deep learning architectures. This has gone hand in hand with increasing difficulty in explaining their internal functioning, which has become an important issue, both legally and socially. Winston Maxwell, a legal researcher, and Florence d’Alché-Buc, a researcher in machine learning, both at Télécom Paris, describe the current challenges involved in the explainability of algorithms.

What skills are required to tackle the problem of algorithm explainability?

Winston Maxwell: In order to know how to explain algorithms, we must draw on different disciplines. Our multi-disciplinary team, AI Operational Ethics, focuses not only on mathematical, statistical and computational aspects, but also on sociological, economic and legal aspects. For example, we are working on an explainability system for image recognition algorithms used, among other things, for facial recognition in airports. Our work therefore encompasses these different disciplines.

Why are algorithms often difficult to understand?

Florence d’Alché-Buc: Initially, artificial intelligence used mainly symbolic approaches, i.e., it simulated the logic of human reasoning. Expert systems, built on logical rules, allowed artificial intelligence to make decisions by exploiting observed facts. This symbolic framework made AI more easily explainable. Since the early 1990s, AI has increasingly relied on statistical learning, such as decision trees or neural networks, because these structures offer better performance, learning flexibility and robustness.

This type of learning is based on statistical regularities, and it is the machine that establishes the rules for exploiting them. The human provides the inputs and an expected output, and the rest is determined by the machine. A neural network is a composition of functions: even if we can understand each function that composes it, their accumulation quickly becomes complex. A black box is thus created, in which it is difficult to know what the machine is calculating.
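A tiny numerical sketch of this point, using arbitrary toy weights rather than a trained model, shows how a composition of individually simple layers quickly becomes opaque:

```python
# Each layer below is easy to read on its own, but the composed mapping
# x -> f3(f2(f1(x))) quickly stops being interpretable.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)   # layer 1: R^2 -> R^4
W2, b2 = rng.normal(size=(3, 4)), rng.normal(size=3)   # layer 2: R^4 -> R^3
W3, b3 = rng.normal(size=(1, 3)), rng.normal(size=1)   # layer 3: R^3 -> R

relu = lambda z: np.maximum(z, 0.0)                     # simple nonlinearity

f1 = lambda x: relu(W1 @ x + b1)
f2 = lambda h: relu(W2 @ h + b2)
f3 = lambda h: W3 @ h + b3

x = np.array([0.5, -1.2])
print(f3(f2(f1(x))))   # easy to compute, hard to "explain"
```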

How can artificial intelligence be made more explainable?

FAB: Current research focuses on two main approaches. The first is explainability by design, where explanatory outputs are built into the algorithm from the start, making it possible to describe, step by step, what the neural network is doing. However, this is costly and affects the performance of the algorithm, which is why it is not yet very widespread. In general, and this is the other approach, when an existing algorithm needs to be explained, an a posteriori approach is taken: after an AI has established its calculation functions, we try to dissect the different stages of its reasoning. There are several methods for this, which generally seek to break the complex model down into a set of local models that are less complicated to deal with individually.
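As an illustration of this a posteriori approach, here is a rough sketch of a local surrogate explanation in the spirit of LIME-like methods: a black-box function is probed around one instance and approximated by a weighted linear model whose coefficients serve as the local explanation. The black_box function and all parameters are invented for the example, not drawn from the interviewees' work.

```python
import numpy as np

def black_box(X):
    """Stand-in for an opaque model: some nonlinear decision function."""
    return np.sin(3 * X[:, 0]) + X[:, 1] ** 2

def local_explanation(x0, n_samples=500, scale=0.1, seed=1):
    rng = np.random.default_rng(seed)
    # 1. perturb the instance of interest
    X = x0 + rng.normal(scale=scale, size=(n_samples, x0.size))
    y = black_box(X)
    # 2. weight samples by proximity to x0
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * scale ** 2))
    # 3. fit a weighted linear model y ~ a.x + b around x0
    A = np.hstack([X, np.ones((n_samples, 1))]) * np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A, y * np.sqrt(w), rcond=None)
    return coef[:-1]   # local feature weights = the "explanation"

x0 = np.array([0.4, -0.8])
print(local_explanation(x0))  # e.g. feature 2 pushes the output down near x0
```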

Why do algorithms need to be explained?

WM: There are two main reasons why the law stipulates a need for the explainability of algorithms. Firstly, individuals have the right to understand and to challenge an algorithmic decision. Secondly, it must be guaranteed that a supervisory institution such as the French Data Protection Authority (CNIL), or a court, can understand how the algorithm operates, both as a whole and in a particular case, for example to make sure that there is no racial discrimination. There is therefore an individual aspect and an institutional aspect.

Does the format of the explanations need to be adapted to each case?

WM: The format depends on the entity to which the explanation is addressed: some formats will be adapted to regulators such as the CNIL, others to experts, and yet others to citizens. In 2015, for example, an experimental framework was introduced allowing the deployment of algorithms to detect possible terrorist activity in the event of serious threats. For this to be properly regulated, external control of the results must be easy to carry out, and the algorithm must therefore be sufficiently transparent and explainable.

Are there any particular difficulties in providing appropriate explanations?

WM: There are several things to bear in mind. One is information fatigue: when the same explanation is provided systematically, humans tend to ignore it. It is therefore important to vary the format in which the information is presented. Studies have also shown that humans tend to follow a decision given by an algorithm without questioning it, in particular because they assume from the outset that the algorithm is, statistically, wrong less often than they are. This is what we call automation bias. This is why we want to provide explanations that allow the human agent to understand and take into consideration the context and the limits of algorithms. It is a real challenge to use algorithms to make humans better informed in their decisions, and not the other way around. Algorithms should be a decision aid, not a substitute for human beings.

What are the obstacles associated with the explainability of AI?

FAB: One aspect to be considered when we want to explain an algorithm is cyber security. We must be wary of the potential exploitation of explanations by hackers. There is therefore a triple balance to be found in the development of algorithms: performance, explainability and security.

Is this also an issue of industrial property protection?

WM: Yes, there is also the question of protecting business secrets: some developers may be reluctant to reveal how their algorithms work for fear of being copied. Another related issue is the manipulation of scores: if individuals understand how a ranking algorithm, such as Google’s, works, then they could manipulate their position in the ranking. Manipulation is an important issue not only for search engines, but also for fraud or cyber-attack detection algorithms.

How do you think AI should evolve?

FAB: There are many issues associated with AI. In the coming decades, we will have to move away from the single objective of algorithm performance towards multiple additional objectives such as explainability, but also fairness and reliability. All of these objectives will redefine machine learning. Algorithms have spread rapidly and have enormous effects on the evolution of society, but they are very rarely accompanied by instructions for their use. A set of adapted explanations must go hand in hand with their implementation in order to control their place in society.

By Antonin Counillon

Also read on I’MTech: Restricting algorithms to limit their powers of discrimination