
Restricting algorithms to limit their power to discriminate

From music recommendations to help with medical diagnoses, population surveillance, university admissions and professional recruitment, algorithms are everywhere and transform our everyday lives. Sometimes, they lead us astray. At fault are the statistical, economic and cognitive biases inherent in current algorithms, which are fed massive volumes of data that may be incomplete or incorrect. There are, however, solutions for reducing and correcting these biases. Stéphan Clémençon and David Bounie, Télécom ParisTech researchers in machine learning and economics respectively, recently published a report on current approaches and those still being explored.

 

Ethics and fairness in algorithms are increasingly important issues for the scientific community. Algorithms are fed the data we give them, including texts, images, videos and sounds, and they learn from these data as they are trained. Their decisions are therefore based on subjective criteria: ours, and those of the data supplied. Some biases can thus be learned and accentuated by machine learning. The result is an algorithm that deviates from what should be a neutral outcome, leading to potential discrimination based on origin, gender, age, financial situation, etc. In their report “Algorithms: bias, discrimination and fairness”, a cross-disciplinary team[1] of researchers at Télécom ParisTech and the University of Paris Nanterre investigated these biases. They asked the following basic questions: Why are algorithms likely to be biased? Can these biases be avoided? If so, how can we minimize them?

The authors of the report are categorical: algorithms are not neutral. On the one hand, because they are designed by humans. On the other hand, because “these biases partly occur because the learning data lacks representativity,” explains David Bounie, researcher in economics at Télécom ParisTech and co-author of the report. For example, the recruitment algorithm of the giant Amazon was heavily criticized in 2015 for having discriminated against female applicants. At fault was an imbalance in the pre-existing historical data: the people recruited over the previous ten years had been primarily men. The algorithm had therefore been trained on a gender-biased learning corpus. As the saying goes, “garbage in, garbage out”: if the input data is of poor quality, the output will be poor too.

Also read Algorithmic bias, discrimination and fairness

Stéphan Clémençon is a researcher in machine learning at Télécom ParisTech and co-author of the report. For him, “this is one of the growing criticisms made of artificial intelligence: the absence of control over the data acquisition process.” For the researchers, one way of introducing fairness into algorithms is to constrain them. An analogy can be drawn with surveys: “In surveys, we ensure that the data are representative by using a controlled sample based on the known distribution of the general population,” says Stéphan Clémençon.

Using statistics to make up for missing data

From employability to criminality or creditworthiness, learning algorithms have a growing impact on decisions and on human lives. These biases could be overcome by calculating the probability that an individual with certain characteristics is included in the sample. “We essentially need to understand why some groups of people are under-represented in the database,” the researchers explain. Coming back to the example of Amazon, the algorithm favored applications from men because those recruited over the previous ten years had been primarily men. This bias could have been avoided by recognizing that the likelihood of finding a woman in the data sample used was significantly lower than the proportion of women in the general population.

“When this probability is not known, we need to be able to explain why an individual is in the database or not, based on additional characteristics,” adds Stéphan Clémençon. For example, when assessing banking risk, algorithms use data on the people eligible for a loan at a particular bank to determine a borrower’s risk category. These algorithms do not see the applications of people who were refused a loan, who have never needed to borrow money, or who obtained a loan from another bank. In particular, young people under 35 are systematically assessed as carrying a higher level of risk than their elders. Identifying the criteria associated with these exclusions would make it possible to correct the biases.
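To make the idea concrete, below is a minimal sketch, not taken from the report, of the inverse-probability-weighting approach the researchers allude to: estimate each individual’s probability of appearing in the training sample, then reweight the observed examples so that under-represented profiles are not drowned out. The synthetic data and the choice of logistic regression are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic, purely illustrative data: X are applicant features, y the past
# hiring decision, and `included` says whether the person ended up in the
# historical training sample at all (men are deliberately over-sampled here).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)
gender = rng.integers(0, 2, size=1000)                    # 0 = female, 1 = male
included = rng.random(1000) < np.where(gender == 1, 0.8, 0.3)

# Step 1: estimate the probability of inclusion from observable characteristics.
Z = np.column_stack([X, gender])
inclusion_model = LogisticRegression().fit(Z, included.astype(int))
p_inclusion = inclusion_model.predict_proba(Z)[:, 1]

# Step 2: train the decision model on included individuals only, weighting each
# one by the inverse of its estimated inclusion probability, so that
# under-represented groups count proportionally more.
weights = 1.0 / np.clip(p_inclusion[included], 1e-3, None)
decision_model = LogisticRegression().fit(X[included], y[included],
                                          sample_weight=weights)
```

The same reweighting logic applies whatever the downstream model, as long as the inclusion probability can be estimated from characteristics that are actually observed.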

Controlling data also means looking at what researchers call “time drift”. By analyzing data over very short periods of time, an algorithm may not account for certain characteristics of the phenomenon being studied. It may also miss long-term trends. By limiting the duration of the study, it will not pick up on seasonal effects or breaks. However, some data must be analyzed on the fly as they are collected. In this case, when the time scale cannot be extended, it is essential to integrate equations describing potential developments in the phenomena analyzed, to compensate for the lack of data.

The difficult issue of fairness in algorithms

Besides statistical approaches, researchers are also looking at developing algorithmic fairness. This means designing algorithms which meet fairness criteria with respect to attributes protected under law, such as ethnicity, gender or sexual orientation. As with the statistical solutions, this means integrating constraints into the learning procedure. For example, it is possible to require that the probability of a given algorithmic outcome be the same whatever group an individual belongs to. It is also possible to impose independence between the outcome and a sensitive attribute, such as gender, income level or geographical location.
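As an illustration of what such a constraint can look like in practice, here is a small sketch, under assumptions that are ours rather than the report’s, that measures the demographic-parity gap of a classifier’s decisions and then equalizes acceptance rates across a binary protected attribute using per-group thresholds. Real fairness tooling is more subtle; the scores and group labels below are synthetic.

```python
import numpy as np

def demographic_parity_gap(decisions, group):
    """Largest difference in positive-decision rates across groups."""
    rates = [decisions[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# Synthetic scores from some previously trained model, plus a binary
# protected attribute (e.g. gender) for each individual.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=500)            # binary protected attribute
scores = rng.random(500) + 0.1 * group          # model slightly favors group 1
decisions = (scores > 0.55).astype(int)
print("gap before:", demographic_parity_gap(decisions, group))

# One simple post-processing option: pick a threshold per group so that both
# groups are accepted at (roughly) the same overall rate.
target_rate = decisions.mean()
thresholds = {g: np.quantile(scores[group == g], 1 - target_rate)
              for g in np.unique(group)}
fair_decisions = np.array([int(scores[i] > thresholds[group[i]])
                           for i in range(len(scores))])
print("gap after:", demographic_parity_gap(fair_decisions, group))
```

Equalizing acceptance rates is only one possible criterion; as the next paragraph shows, different fairness rules can be mutually incompatible.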

But which fairness rules should be adopted? For the controversial Parcoursup algorithm for higher education applications, several incompatibilities were raised. “Take the example of individual fairness and group fairness. If we consider only the criterion of individual fairness, each student should have an equal chance of success. But this is incompatible with the criterion of group fairness, which stipulates that admission rates should be equal across certain protected attributes, such as gender,” says David Bounie. In other words, we cannot give an equal chance to all individuals regardless of their gender and, at the same time, apply criteria of gender equity. This example illustrates a point familiar to researchers: fairness rules contradict one another and are not universal. They depend on ethical and political values that are specific to individuals and societies.

There are complex and considerable challenges facing the social acceptance of algorithms and AI. It is essential to be able to trace back through an algorithm’s decision chain in order to explain its results. “While this is perhaps not so important for film or music recommendations, it is an entirely different story for biometrics or medicine. Medical experts must be able to understand the results of an algorithm and refute them where necessary,” says David Bounie. This has raised hopes of transparency in recent years, but for now it remains little more than wishful thinking. “The idea is to make algorithms public, or at least open them to restricted audits, in order to detect potential problems,” the researchers explain. However, these recommendations are likely to come up against trade secrecy and personal data ownership laws. Algorithms, like their data sets, remain largely inaccessible. Yet the need for transparency is fundamentally linked to that of accountability. Algorithms amplify the biases that already exist in our societies. New approaches are required in order to track, identify and mitigate them.

[1] The report (in French) “Algorithms: bias, discrimination and fairness” was written by Patrice Bertail (University of Paris Nanterre), David Bounie, Stéphan Clémençon and Patrick Waelbroeck (Télécom ParisTech), with the support of Fondation Abeona.

Article written for I’MTech by Anne-Sophie Boutaud

To learn more about this topic:

Ethics, an overlooked aspect of algorithms?

Ethical algorithms in health: a technological and societal challenge


The hidden secrets of the colors of cave paintings at prehistoric sites

The colors of cave paintings are of great interest because they provide information about the techniques and materials used. Studying them also allows fewer samples to be taken from ancient paleolithic works. Research in colorimetry by Dominique Lafon-Pham at IMT Mines Alès provides a better definition of the colors used in the paintings of our ancestors.

 

Mammoths, steppe lions and woolly rhinoceroses have been extinct for thousands of years, but they have by no means disappeared from paleolithic caves. Paintings of these animals still remain on the walls of the caves that our ancestors once lived in or travelled to. For archeologists, cave art specialists and paleo-anthropologists, these paintings are a valuable source of information. Cave art, found at various sites in different regions and dating from a long period that covers several tens of thousands of years, reflects the distribution and evolution of prehistoric wildlife. Analysis of the complex scenes sometimes depicted — such as hunting — and study of the artistic techniques used also bear valuable witness to paleolithic social practices. They are an expression of the symbolic world of our ancestors.

Scientists examine and handle these works with minute care. “Permission to take samples of the painted works is only granted after a strict application process and remains exceptional. Decorated caves can contain a wealth of information, but they are also extremely restrictive due to the fragility of that information,” explains Dominique Lafon-Pham. The researcher at IMT Mines Alès is developing measurement methods that require no contact with the pigment and help characterize rock paintings. She has been carrying out her work for several years in close collaboration with the French National Center for Prehistory (CNP), alternating field work and lab experiments in partnership with Stéphane Konik, a geoarcheologist at the CNP attached to the PACEA[1] laboratory.

“Colorimetric analysis isn’t a replacement for chemical and mineralogical methods of analysis”, Dominique Lafon-Pham stresses. In certain cases it does, however, provide initial information on the nature of the colorant material. The color alone is not enough to accurately trace the constituents of the mixes, but it does provide a clue. Comparing the colors in different works is a way to avoid taking samples of the pictorial layer from the walls of prehistoric caves. The researcher’s work helps contribute to a “detective investigation” led by archaeologists at scenes dating from several tens of thousands of years ago, where even the smallest piece of evidence merits examination.

The color and, more generally, the appearance of the drawings observed by teams of scientists in caves such as Chauvet and Cussac tell us some of the history of the chosen colorant material that was prepared and applied and has been exposed to the passing of time. It is a way of entering into the work through analysis of the ancient material used. Data produced from this analysis may allow parietal archaeologists to approach the work from the perspective of its creation and even its purpose, whereas conservation specialists are more interested in its evolution over time.

Our visual system does not allow us to compare subtle differences between colors that are not in our field of view at the same time, and we do not have perfect color memory. In addition, the impression created by an area of color is influenced by the surrounding chromatic environment. “When we can measure the color of a mark, setting aside the problem of deterioration due to aging, we will be able to establish similarities between works of the same color, whether they are on the same rock wall or not,” indicates the researcher at IMT Mines Alès.

Objectifying the perception of colors

This comparative method may seem a simple one, but it is important not to underestimate the complexity of the site. Lighting — very often artificial — alters the perception of the human eye. A colored surface will not appear the same when lit in two different ways. The aging of the rock also has an impact. The calcite that forms in the caves sometimes covers the paintings and alters the optical performance of the material, dulling and modifying the color of the depictions. In addition, moisture conditions vary with the seasons and between different sectors at a single site, leading to reversible variation in the colors perceived and measured. All these different impacts require set procedures to be put in place to identify, in the most objective way possible, the color produced by the interaction between light and material.

Measuring the color of cave paintings is not an easy task. Researchers use spectroradiometry and a whole set of associated procedures to keep the lighting constant for each measurement, as seen here in the cave of Chauvet.

 

Researchers use a spectroradiometer, an instrument that measures the spectral power distribution of radiation across the visible range. This is a physical measurement that does not, on its own, correspond to the color perceived by the eye. “The advantage of working at an underground site is that we can control the lighting of rock paintings,” explains Dominique Lafon-Pham. “We always try to light the work in the same way.” The situation becomes more complex when the scientists need to work outside. “We are currently taking measurements at the site of the Cro-Magnon rock shelter,” explains the researcher. This site, listed as UNESCO World Heritage, is located in the Dordogne in France and sheltered Cro-Magnon humans approximately 30,000 years ago. “The analysis of potentially decorated rock walls that are exposed to the open air is much more complex because of changes in natural light. In this situation, it is a real challenge to distinguish between very similar colors using measurements.”
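For readers curious about what turning such a spectral measurement into comparable color values involves, here is a minimal sketch of the standard colorimetric pipeline (spectral radiance to CIE XYZ to CIELAB, then a CIE76 color difference). It is a generic illustration, not the team’s actual processing chain, and it assumes the CIE color-matching functions and a reference white are supplied by the caller.

```python
import numpy as np

def spectrum_to_xyz(wavelengths, radiance, cmf):
    """Integrate a measured spectral radiance against the CIE color-matching
    functions (columns x-bar, y-bar, z-bar sampled at the same wavelengths)
    to obtain tristimulus values XYZ."""
    dl = np.gradient(wavelengths)                 # wavelength step (nm)
    return np.array([np.sum(radiance * cmf[:, i] * dl) for i in range(3)])

def xyz_to_lab(xyz, white):
    """Convert XYZ to CIELAB relative to a reference white (e.g. the lamp
    reflected by a calibration standard)."""
    def f(t):
        return np.where(t > (6 / 29) ** 3, np.cbrt(t),
                        t / (3 * (6 / 29) ** 2) + 4 / 29)
    xr, yr, zr = xyz / np.asarray(white)
    return np.array([116 * f(yr) - 16,            # L*: lightness
                     500 * (f(xr) - f(yr)),       # a*: green-red axis
                     200 * (f(yr) - f(zr))])      # b*: blue-yellow axis

def delta_e(lab1, lab2):
    """CIE76 color difference: how far apart two measured marks are."""
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))
```

Two marks measured under the same controlled lighting can then be compared with such a color-difference value, independently of the observer’s memory or of the surrounding chromatic environment.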

Towards virtual caves?

The use of mixed reality (part-way between augmented reality and virtual reality) at cultural sites is increasingly common. This technology opens up new possibilities for transmitting knowledge, such as remote guided tours in an entirely virtual environment. The quality of this cultural mediation depends on the realism and accuracy of the features and objects in the virtual world. Taking objective measurements standardizes the collection of data on the optical characteristics of the parietal art at prehistoric sites. Data collected in this way can be processed using modeling and realistic simulation tools, and provides some of the information required for the construction of virtual facsimiles.

The scientific community is also keeping a close eye on such devices, which capitalize on new media technology. Highly accurate virtual replicas of prehistoric sites could offer considerable research opportunities by enabling researchers to study sites regardless of how difficult they are to access. For conservation and safety reasons — such as the presence of high levels of CO2 in the air at certain times of the year — caves can only be entered for very short periods of time and under strict control of movement. Although Dominique Lafon-Pham agrees that these are particularly promising prospects, she nevertheless tempers expectations: “For the moment, the image generators we have tested are a long way off being able to render the subtlety of light and color variations that we see in reality.”

It will be a little longer before it is possible to create identical virtual replicas of paleolithic caves and their art with sufficient realism to allow quality cultural and scientific mediation. Nevertheless, this doesn’t stop the researchers at Mines Alès continuing to study the colors of rock paintings and, in particular, the way they looked at the time of our ancestors. 30,000 years ago, our predecessors painted and viewed their art by firelight, which has been replaced in caves today by very different electric lighting. “The light cast by fire flickers: what does that mean for the way in which the painted or engraved work is seen and perceived?” wonders Dominique Lafon-Pham. Another question: if researchers today are able to detect multiple shades of red in a single drawing using these systems of measurement, were these different shades seen by our Homo sapiens ancestors? If so, were they accidental or deliberate and did they serve a purpose for the artist?

[1] “From Prehistory to Today: Culture, Environment and Anthropology” (PACEA) laboratory. A mixed research unit attached to the CNRS, the University of Bordeaux and the French Ministry of Culture and Communication.

Digital twins in the health sector: mirage or reality?

Digital twins, which are already well established in industry, are becoming increasingly present in the health sector. There is a wide range of potential applications for both diagnosis and treatment, but the technology is mostly still in the research phase.

 

The health sector is currently undergoing a digital transition with a view to developing “4P” medicine: personalized, predictive, preventive and participative. Digital simulation is a key tool in this transformation. It consists in creating a virtual model of an object, process or physical system on which different hypotheses can be tested. Today, patients are treated when they fall sick on the basis of clinical studies involving tens or, at best, thousands of other people, with little real consideration of the individual’s personal characteristics. Will each person one day have their own digital twin, allowing the development of acute or chronic diseases to be predicted from their genetic profile and environmental factors, and their response to different treatments to be anticipated?

“There are a lot of hopes and dreams built on this idea,” admits Stéphane Avril, a researcher at Mines Saint-Étienne. “The profession of doctor or surgeon is currently practiced as an art, on the strength of experience acquired over time. The idea is that a program could combine the experience of a thousand doctors, but in reality there is a huge amount of work still to do to put into equations and integrate non-mathematical knowledge and skills from very different fields such as immunology and the cardiovascular system.” We are still a long way from simulating an entire human being. “But in certain fields, digital twins provide excellent predictions that are even better than those of a practitioner,” adds Stéphane Avril.

From imaging to the operating room

Biomechanics, for example, is a field of research that lends itself very well to digital simulation. Stéphane Avril’s research addresses aortic aneurysms: 3D digital twins of the affected area are built from medical images. The approach has led to the creation of a start-up called Predisurge, whose software allows endoprostheses to be designed for each individual patient. “The FDA [the US Food and Drug Administration] is encouraging the use of digital simulation to validate the market entry of prostheses, both orthopedic and vascular,” explains the Saint-Étienne researcher.

Digital simulation also helps surgeons prepare for operations: the software predicts how these endoprostheses will be inserted and how they will behave once in place, and simulates complications that could arise during surgery. “This technique is currently still in the testing and validation phase, but it could have a very promising impact on reducing surgery time and complications,” stresses Stéphane Avril. The team at Mines Saint-Étienne is currently working on improving our understanding of the properties of the aortic wall, using four-dimensional MRI and mechanical tests on aneurysm tissue removed during the insertion of prostheses. The idea is to validate a digital twin designed from 4D MRI images that could predict the future rupture or stability of an aneurysm and indicate whether surgery is needed.

Read more on I’MTech: A digital twin of the aorta to prevent aneurysm rupture

Catalin Fetita, a researcher at Télécom SudParis, also uses digital simulation in the field of imaging, in this case applied to the airways alongside analysis of the pulmonary parenchyma. The aim of this work is to obtain biomarkers from medical images for a more precise characterization of pathological phenomena in respiratory diseases such as asthma, chronic obstructive pulmonary disease (COPD) and idiopathic interstitial pneumonia (IIP). The model makes it possible to assess an organ’s functioning based on its morphology. “Digital simulation is used to identify the type of dysfunction and its exact location, quantify it, predict changes in the disease and optimize the treatment process,” explains Catalin Fetita.

Ethical and technical barriers

The problem of data security and anonymity is currently at the heart of ethical and legal debates. For the moment, researchers are having great difficulty accessing databases to “feed” their programs. “To obtain medical images, we have to establish a relationship of trust with a hospital radiologist, get them interested in our work and involve them in the project. We need images to be precisely labeled for the model to be relevant.” Especially since the analysis of medical images can vary from one radiologist to another. “Ideally, we would like to have access to a database of images that have been analyzed by a panel of experts with a consensus on their interpretation,” Catalin Fetita affirms.

The researcher also points to the lack of technical staff. Models are generally developed in the framework of a thesis, and rarely lead to a finished product. “We need a team of research or development engineers to preserve the skills acquired, ensure technology transfer and carry out monitoring and updates.” Imaging techniques are evolving and algorithms can encounter difficulties in processing new images that sometimes have different characteristics.

For Stéphane Avril, a new specialization combining engineering and health skills is needed. “These tools will transform doctors’ and surgeons’ professions, but to practitioners it still looks a bit like science fiction at the moment. The transformation will take place tentatively, with restraint, because full medical training takes more than 10 years.” The researcher thinks that it will be another ten years or so before tools that integrate the systemic aspect of physiopathology become operational: “As for self-driving vehicles, the technology exists, but there are still quite a few stages to go before it actually arrives in hospitals.”

 

Article written by Sarah Balfagon for I’MTech.

 

The TeraLab data machines.

TeraLab: data specialists serving companies

TeraLab is a Big Data and artificial intelligence platform that grants companies access to a whole ecosystem of specialists in these fields. The aim is to remove the scientific and technological barriers facing organizations that want to make use of their data. Hosted by IMT, TeraLab is one of the technology platforms offered by the Télécom & Société Numérique Carnot Institute. Anne-Sophie Taillandier, Director of TeraLab, presents the platform.

 

What is the role of the TeraLab platform?

Anne-Sophie Taillandier: We offer companies access to researchers, students and innovative enterprises to remove technological barriers in the use of their data. We provide technical resources, infrastructure, tools and skills in a controlled, secure and neutral workspace. Companies can prototype products or services in realistic environments with a view to technology transfer as fast as possible.

In what ways do you work with companies?

AST: First of all, we help them formalize the use case. Companies often come to us with a vague outline of the use case, so we help them with that and can provide specialist contributions if necessary. This is a crucial stage because our aim is also for companies to be able to assess the return on investment at the end of the research or innovation work. It helps them estimate the investment required to launch production, so the need must be clearly defined. We then help them understand what they have the right to do with the data. There again we can call upon expert legal advice if necessary. Lastly, we support them in the specification of the technical architecture.

How do you stand out from other Big Data and artificial intelligence service platforms?

AST: Firstly, by the ecosystem we benefit from. TeraLab is associated with IMT, so we have a number of specialist researchers in these fields, as well as students we can mobilize to resolve the technological challenges posed by companies. Secondly, TeraLab is a pre-competitive platform. We can also define a framework that brings together legal and technical aspects to meet companies’ needs on a case-by-case basis. We can strike a fairly fine balance between security and flexibility, reassuring the organizations that come to us while giving researchers enough room to find solutions to the problems posed.

What level of technical security can you provide?

AST: We can reach an extremely high level of technical security, where the user of the data supplied, such as a researcher, can see the data but never extract it. Generally speaking, a validation process involving the data supplier and the TeraLab team must be followed in order to take any piece of data out of the workspace. During a project, data security is guaranteed by a combination of technical and legal measures. Moreover, we work in a neutral, controlled space, which also provides a form of independence that reassures companies.

What does neutrality mean for you?

AST: The technical components we propose are open source. We have nothing against products under license, but if a company wants to use a specific tool, it must provide the license itself. Our technical team has excellent knowledge of the different libraries and APIs as well as the components required to set up a workspace. They adapt the tools to the company’s needs. We do not host the service beyond the end of the experimentation phase. Instead, we enter a new phase of technology transfer to allow the products or services to be integrated at the client’s end. We therefore have nothing to “sell” except our expertise. This also guarantees our neutrality.

What use cases do you work on?

AST: Since we started TeraLab, more than 60 projects have come through the platform, and there are currently 20 on the go. They can last between 3 months and 3 years. We have had projects in logistics, insurance, public services, energy, mobility, agriculture etc. At the moment, we are focusing on three sectors. The first is cybersecurity: we are interested in seeing what data access barriers there are, how to make a workspace compliant, and how to guarantee respect of personal data. We also work a lot in the health sector and industry. Geographically speaking, we are increasingly working at a European level in the framework of H2020 projects. The platform also benefits from growing recognition among European institutions with, in particular, the “Silver i-space” label awarded by the BDVA.

Physically, what does TeraLab look like?

AST: TeraLab comprises machines at Douai, a technical team in Rennes and a business team in Paris. The platform is accessible remotely, so there is no need to be physically close to it, making it different to other service platforms. We have recently also been able to secure client machines directly on site if the client has specific restrictions with regard to the movement of data.

 

User immersion, between 360° video and virtual reality

I’MTech is dedicating a series of success stories to research partnerships supported by the Télécom & Société Numérique (TSN) Carnot Institute, which the IMT schools are a part of.


To better understand how users interact in immersive environments, designers and researchers are comparing the advantages of 360° video and fully immersive virtual reality. This is the aim of the TroisCentSoixante inter-Carnot project uniting the Télécom & Société Numérique and M.I.N.E.S Carnot Institutes. Strate Research, the research department of the Strate School of Design, a member of the TSN Carnot Institute, is studying this comparison in particular for museographic mediation.

 

When it comes to designing immersive environments, designers have a large selection of tools available to them. Mixed reality, in which the user is plunged into a more or less interactive environment, covers everything from augmented video to fully synthetic 3D images. To determine which is the best option, researchers from members of the TSN Carnot Institute (Strate School of Design) and the M.I.N.E.S Carnot Institute (Mines ParisTech and IMT Mines Alès) have joined forces. They have compared, for different use cases, the differences in user engagement between 360° video and full 3D modeling, i.e. virtual reality.

“At the TSN Carnot Institute, we have been working on the case of a museum prototype alongside engineers from Softbank Robotics, who are interested in the project,” explains Ioana Ocnarescu, researcher at Strate. A room containing exhibits such as a Minitel, tools linked to the development of the internet, photos of famous robotics researchers, and robots has been set up at Softbank Robotics to provide mediation on science and technology. Once the objects are in place, a 3D copy is made and a visit route is laid out between the different exhibits. This base scenario is used to film a 360° video guided by a mediator, and to create a virtual guide in the form of a robot called Pepper which travels around the 3D scene with the viewer. In both cases, the user is immersed in the environment using a mixed reality headset.

Freedom or realism: a choice to be made

Besides the graphics, which are naturally different between video and 3D modeling, the two technologies have one fundamental difference: freedom of action within the scenario. “In 360° video the viewer is passive,” explains Ioana Ocnarescu. “They follow the guide and can zoom in on objects, but cannot move around freely as they wish.” Their actions are limited to turning their head and choosing to spend longer on certain objects than on others. To allow this, the video is cut at several points, creating a decision tree that branches to specific sequences depending on the user’s choices, as in the sketch below.
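The following minimal sketch, which is our illustration rather than the project’s actual implementation, shows one way such a branching structure can be represented: each node holds a pre-rendered 360° sequence and the choices offered at its end. Clip names and choice labels are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class VideoNode:
    """One pre-rendered 360° sequence plus the choices offered at its end."""
    clip: str                                      # path to the video segment
    choices: dict = field(default_factory=dict)    # label -> next VideoNode

# Hypothetical mediation route: the guide presents an exhibit, then the viewer
# either lingers on it (zoom) or follows the guide to the next exhibit.
closeup = VideoNode("minitel_closeup.mp4")
next_exhibit = VideoNode("robotics_photos.mp4")
intro = VideoNode("minitel_intro.mp4", {"zoom": closeup, "continue": next_exhibit})

def play(node, ask_user):
    """Walk the decision tree, playing one fixed sequence per node.
    `ask_user` takes the list of available labels and returns the one chosen."""
    while node is not None:
        print(f"playing {node.clip}")              # stand-in for the 360° player
        if not node.choices:
            break
        node = node.choices.get(ask_user(list(node.choices)))

# Example run: always take the first available choice.
play(intro, lambda labels: labels[0])
```

However the branching is stored, the viewer only ever follows pre-filmed sequences, which is precisely the limitation that full 3D modeling removes.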

Like the 3D mediation, the 360°-video trial mediation is guided by a robot called Pepper.


 

3D modeling, on the other hand, grants the viewer a large amount of freedom. They can move around the scene freely, choose whether or not to follow the guide, and walk around the exhibits to look at them from any angle, something 360° video cannot offer because it is limited by the position of the camera. “User feedback shows that certain content is better suited to one device or the other,” the Strate researcher reports. For a painting or a photo, for example, there is little point in being able to travel around the object: the viewer prefers to stand in front of the exhibit, in its surroundings, with as much realism as possible. “360° video is therefore better suited to museums with corridors and paintings on the walls,” she points out. 3D modeling, by contrast, is particularly well suited to examining three-dimensional artefacts such as statues.

These experiments are extremely useful to researchers in design, in particular because they involve real users. “Knowing what people do with the devices available is at the heart of our thinking,” emphasizes Ioana Ocnarescu. Strate has been studying human-machine interaction for over five years to develop more effective interfaces. In this project, the people in immersion can give their feedback directly to the Strate team. “It is the most valuable thing in our work. When everything is controlled in a laboratory environment, the information we collect is less meaningful.”

The tests must continue in order to gather as much feedback as possible from the widest possible range of audiences. Once they are finished, the results will be compared with those of the other use cases explored by the M.I.N.E.S Carnot Institute. “Mines ParisTech and IMT Mines Alès are comparing the same two devices, but for self-driving cars and for the exploration of the Chauvet cave,” explains the researcher.

 


Carnot TSN, a guarantee of excellence in partnership-based research since 2006

The Télécom & Société Numérique (TSN) Carnot Institute has been partnering with companies on research to develop digital innovations since 2006. With over 1,700 researchers and 50 technology platforms, it offers cutting-edge research to resolve the complex technological challenges raised by the digital, energy, environmental and industrial transformations of the French production sector. It addresses the following themes: Industry of the Future, networks and smart objects, sustainable cities, mobility, health and security.

The TSN Carnot Institute is composed of Télécom ParisTech, IMT Atlantique, Télécom SudParis, Institut Mines-Télécom Business School, Eurecom, Télécom Physique Strasbourg, Télécom Saint-Étienne, École Polytechnique (Lix and CMAP laboratories), Strate School of Design and Femto Engineering.
