
EUROP: Testing an Operator Network in a Single Room

The next article in our series on the Télécom & Société numérique Carnot institute's technology platforms features EUROP (Exchanges and Usages for Operator Networks) at Télécom Saint-Étienne. The platform offers physical testing of network configurations to meet service providers' needs. We discussed it with its director, Jacques Fayolle, and project manager Maxime Joncoux.

 

What is EUROP?

Jacques Fayolle: The EUROP platform was designed through a dual partnership between the Conseil Départemental de la Loire and LOTIM, a local telecommunications company and subsidiary of the Axione group. EUROP brings together researchers and engineers from Télécom Saint-Étienne, specialized in networks and telecommunications and in computer science. We are seeing an increasing convergence between infrastructure and the services that use this infrastructure. This is why these two skillsets are complementary.

The goal of the platform is to simulate an operator network in a single room. To do so, we reconstructed a full network, from the production of services to their consumption by a client company or an individual. This enables us to reproduce every step in a network's distribution chain, right up to the home.

 

What technology is available on the platform?

JF: We are currently using wired technologies, which make up the operator part of a fiber network. We are particularly interested in being able to compare the usage of a service according to the protocols used as the signal is transferred from the server to the final customer. For instance, we can study what happens in a housing estate when an ADSL connection is replaced by FTTH fiber optics (Fiber to the Home).

The platform's technology evolves, but the platform as a whole never changes. All we do is add new possibilities, because what we want to do is compare technologies with each other. A telecommunications system has a lifecycle of 5 to 10 years. At the beginning we mostly used copper technology, then we added point-to-point fiber, then point-to-multipoint. This means that we now have several dozen different technologies on the platform, which roughly corresponds to all the technology currently used by telecommunications operators.

Maxime Joncoux: And they are all operational. The goal is to test the technical configurations in order to understand how a particular type of technology works, according to the physical layout of the network we put in place.

 

How can a network be represented in one room?

MJ: The network is very big, but in fact it fits into a small space. Take the example of Saint-Étienne: the real network fills a large building, yet it carries all the city's communications, around 100,000 copper lines. On the platform, everything is scaled down: instead of 30 connections, we only have one or two. As for the 80 kilometers of fiber in this network, they are simply wound onto a spool.

JF: We also have distance simulators, devices that we can configure according to the distance we want to represent. Thanks to this technology, we can reproduce a real high-speed broadband or ADSL network. This enables us to look at how a service will be consumed depending on whether we have access to a high-speed broadband network, as in the center of Paris, or are in an isolated rural area, where the speed might be slower. EUROP allows us to physically test these networks, rather than relying on computer models.

It is not a simulation, but a real laboratory reproduction. We can set up scenarios to analyze and compare a situation with other configurations. We can therefore directly assess the potential impact of a change in technology across the chain of a service proposed by an operator.
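For a sense of the physical quantity the coiled fiber and distance simulators reproduce: the dominant effect of distance is propagation delay. Using the textbook speed of light in silica fiber of about 2×10⁸ m/s (a standard physics value, not a platform specification), the platform's 80 km of coiled fiber corresponds to:

```latex
t_{\text{prop}} = \frac{d}{v} \approx \frac{8\times10^{4}\,\text{m}}{2\times10^{8}\,\text{m/s}} = 0.4\,\text{ms (one way)}
```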

 

Who is the platform for?

JF: We are directly targeting companies that want to innovate with the platform, either by validating service configurations or by assessing the evolution of a particular piece of equipment in order to achieve better-quality service or faster speed. The platform is also used directly by the school as a learning and research tool. Finally, it allows us to raise awareness among local officials in rural areas about how increasing bandwidth can be a way of improving their local economy.

MJ: For local officials, we aim to provide a practical guide on standardized fiber deployment. The goal is not for Paris and Lyon to have fiber within five years while the rest of France still uses ADSL.

 


EUROP platform. Credits: Télécom Saint-Étienne

 

Could you provide some examples of partnerships?

JF: We carried out a study for Adista, a local telecommunications operator. They presented the network load they needed to bear for an event of national stature. Our role was to determine the necessary configuration to meet their needs.

We also have a partnership with IFOTEC, an SME near Grenoble that creates innovative network equipment. We worked together to provide high-speed broadband access in difficult geographical areas, that is, where the distance to the network node is greater than it is in cities. The SME has created DSL offset techniques (part of the connection uses copper, but there is fiber at the end) which provide at least 20 Mb/s at 80 kilometers from the network node. These are the types of industrial companies we aim to innovate with, those looking for innovative protocols or hardware.

 

What does the Carnot accreditation bring you?

JF: The Carnot label gives us visibility. SMEs are always a little hesitant about collaborating with academics, and this label brings us credibility. In addition, the associated quality charter gives our contracts more substance.

 

What is the future of the platform?

JF: Our goal is to shift towards OpenStack[1] technology, which is used in large data centers. The aim is to move into Big Data and Cloud Computing. Many companies are wondering how to operate their services in cloud mode. We are also looking into setting up configuration systems adapted to the Internet of Things, a technology that requires an efficient network. The EUROP platform enables us to validate the necessary configurations.

 

[1] OpenStack: a platform based on open-source software for cloud computing.

[box type=”shadow” align=”” class=”” width=””]

The TSN Carnot institute, a guarantee of excellence in partnership-based research since 2006

 

Having first received the Carnot label in 2006, the Télécom & Société numérique Carnot institute is the first national “Information and Communication Science and Technology” Carnot institute. Home to over 2,000 researchers, it is focused on the technical, economic and social implications of the digital transition. In 2016, the Carnot label was renewed for the second consecutive time, demonstrating the quality of the innovations produced through collaborations between researchers and companies. The institute encompasses Télécom ParisTech, IMT Atlantique, Télécom SudParis, Télécom École de Management, Eurecom, Télécom Physique Strasbourg and Télécom Saint-Étienne, École Polytechnique (LIX and CMAP laboratories), Strate École de Design and Femto Engineering.[/box]


Saving the digital remains of the Tunisian revolution

On March 11, 2017, the National Archives of Tunisia received a collection of documentary resources on the Tunisian revolution, which took place from December 17, 2010 to January 14, 2011. Launched by Jean-Marc Salmon, a sociologist and associate researcher at Télécom École de Management, the collection was assembled by academics and associations and ensures the protection of numerous digital resources posted on Facebook, now recognized as a driving force in the uprising. Beyond serving as a means for remembrance and historiography, this initiative is representative of the challenges digital technology poses in archiving the history of contemporary societies.

 

[dropcap]A[/dropcap]t approximately 11 am on December 17, 2010 in the city of Sidi Bouzid, in the heart of Tunisia, the young fruit vendor Mohammed Bouazizi had his goods confiscated by a police officer. For him, it was yet another example of abuse by the authoritarian regime of Zine el-Abidine Ben Ali, who had ruled the country for over 23 years. Tired of the repression of which he and his fellow Tunisians were victims, he set himself on fire in front of the prefecture of the city later that day. Riots quickly broke out in the streets of Sidi Bouzid, filmed by inhabitants. The revolution was just getting underway, and videos taken with rioters’ smartphones would play a crucial role in its escalation and the dissemination of protests against the regime.

Though the Tunisian government blocked access to the majority of image banks and videos posted online — and proxies* were difficult to implement in order to bypass these restrictions — Facebook remained wide open. The social networking site gave protestors an opportunity to alert local and foreign news channels. Facebook groups were organized to collect and transmit images. Protestors fought for the event to be represented in the media by providing images taken with their smartphones to the television channels France 24 and Al Jazeera. Their goal was clearly to achieve a mass effect: Al Jazeera is watched by 10 to 15% of Tunisians.

“The Ben Ali government was forced to abandon its usual black-out tactic, since it was impossible to implement given the impact of Al Jazeera and France 24,” explains Jean-Marc Salmon, an associate researcher at Télécom École de Management and a member of IMT's LASCO IdeaLab (see the end of the article). For the sociologist, who specializes in this subject, “there was a connection between what was happening in the streets and online: the fight for representation through Facebook was the prolongation of the desire for recognition of the events in the streets.” It was this interconnection that prompted research, as early as 2011, on the role of the internet in the Tunisian revolution, the first of its kind to use social media as an instrument for political power.

Television channels had to adapt to these new video resources, which they were no longer the first to broadcast, since the videos had previously been posted on the social network. From the second day of rioting, a new interview format emerged: on set, the reporter conducted a remote interview with a notable figure (a professor, lawyer, doctor, etc.) on site in Sidi Bouzid or the surrounding cities where protests were gaining ground. A photograph of the interviewee's face was displayed on half the screen while the other half aired smartphone images retrieved from Facebook. The interviewee provided live commentary to explain what was being shown.

 


On December 19, 2010 “Moisson Maghrébine” — Al Jazeera's 9 pm newscast — conducted a live interview with Ali Bouazizi, a local director of the opposition party. At the same time, images of the first uprisings from the two previous nights were aired, such as this one showing the burning of a car belonging to the RCD, the party of President Ben Ali. The above image is a screenshot from the program.

 

Popularized by media covering the Tunisian revolution, this format has now become the media standard for reporting on many events (natural disasters, terrorist attacks, etc.) for which journalists do not have videos filmed by their own means. For Jean-Marc Salmon, it was the “extremely modern aspect of opposition party members” which made it possible to create this new relationship between social networking sites and mass media. “What the people of Sidi Bouzid understood was that there is a digital continuum: people filmed what was happening with their telephones and immediately thought, ‘we have to put these videos on Al Jazeera.’”

 

Protecting the videos in order to preserve their legacy

Given the central role they played during the 29 days of the revolution, these amateur videos have significant value for current and future historians and sociologists. But, as a few years have passed since the revolution, they are no longer consulted as much as they were during the events of 2010. The people who put them online no longer see the point of leaving them there, so they are disappearing. “In 2015, when I was in Tunisia to carry out my research on the revolution, I noticed that I could no longer find certain images or videos that I had previously been able to access,” explains the sociologist. “For instance, in an article on the France 24 website the text was still there, but the associated video was no longer accessible, since the YouTube account used to post it online had been deleted.”

The research and archiving work was launched by Jean-Marc Salmon and carried out by the University of La Manouba under the supervision of the National Archives of Tunisia, with assistance from the Euro-Mediterranean Foundation of Support for Human Rights Defenders. The teams participating in this collaboration spent one year travelling throughout Tunisia in order to find the administrators of Facebook pages, amateur filmmakers, and members of the opposition party who appeared in the videos. The researchers were able to gather over a thousand videos and dozens of testimonies. This collection of documentary resources was handed over to the National Archives of Tunisia on March 17 of this year.

The process exemplifies new questions facing historians of revolutions. Up to now, their primary sources have usually consisted of leaflets published by activists, police reports or newspapers with clearly identified authors and cited sources. Information is cross-checked or analyzed in the context of its author's viewpoint in order to express uncertainties. With videos, it is more difficult to classify information. What is its aim? In the case of the Tunisian revolution, was the person filming an opponent of the regime trying to convey a political message, or simply a bystander recording the scene? Is the video even authentic?

To answer these questions, historians and archivists must trace back the channels through which the videos were originally broadcast in order to find the initial rushes, because each edited version expresses a choice. A video taken by a member of the opposition in the street takes on a different value when it is picked up by a television channel which has extracted it from Facebook and edited it for the news. “It is essential that we find the original document in order to understand the battle of representation, starting with the implicit message of those who filmed the scene,” says Jean-Marc Salmon.

However, it is sometimes difficult to trace videos back to the primary resources. The sociologist admits that he has encountered anonymous users who posted videos on YouTube under pseudonyms or used false names to send videos to administrators of pages. In this case, he has had to make do with the earliest edited versions of videos.

This collection of documentary resources should, however, facilitate further efforts to find and question some of the people who made the videos: “The archives will be open to universities so that historians may consult them,” says Jean-Marc Salmon. The research community working on this subject should gradually increase our knowledge about the recovered documents.

Another cause for optimism is the fact that the individuals questioned by researchers while establishing the collection were quite enthusiastic about the idea of contributing to furthering knowledge about the Tunisian revolution. “People are interested in archiving because they feel that they have taken part in something historic, and they don’t want it to be forgotten,” notes Jean-Marc Salmon.

The future use of this collection will undoubtedly be scrutinized by researchers, as it represents a first in the archiving of natively digital documents. The Télécom École de Management researcher also views it as an experimental laboratory: “Since practically no written documents were produced during the twenty-nine days of the Tunisian Revolution, these archives bring us face-to-face with the reality of our society in 30 or 40 years, when the only remnants of our history will be digital.”

 

*Proxies are intermediary servers used to access a network that is normally inaccessible.

[divider style=”normal” top=”20″ bottom=”20″]

LASCO, an idea laboratory for examining how meaning emerges in the digital era

Jean-Marc Salmon carried out his work on the Tunisian revolution with IMT’s social sciences innovation laboratory (LASCO IdeaLab), run by Pierre-Antoine Chardel, a researcher at Télécom École de Management. This laboratory serves as an original platform for collaborations between the social sciences research community, and the sectors of digital technology and industrial innovation. Its primary scientific mission is to analyze the conditions under which meaning emerges at a time when subjectivities, interpersonal relationships, organizations and political spaces are subject to significant shifts, in particular with the expansion of digital technology and with the globalization of certain economic models. LASCO brings together researchers from various institutions such as the universities of Paris Diderot, Paris Descartes, Picardie Jules Verne, the Sorbonne, HEC, ENS Lyon and foreign universities including Laval and Vancouver (Canada), as well as Villanova (USA).

[divider style=”normal” top=”20″ bottom=”20″]

 


5G Will Also Consume Less Energy

5G is often presented as a faster technology, as it will need to support broadband mobile usage as well as communication between connected objects. But it will also have to use less energy in order to find its place within the current context of environmental transition. This is the goal of the ANR Trimaran and Spatial Modulation projects, led by Orange in association with IMT Atlantique and other academic partners.

 

Although it will not become a reality until around 2020, 5G is being very actively researched. Scientists and economic stakeholders are buzzing about this fifth-generation mobile telephony technology. One of the researchers' goals is to reduce the energy consumption of 5G communication. The stakes are high, as the development of this technology aims to be coherent with the general context of energy and environmental transition. In 2015, the Next Generation Mobile Networks (NGMN) alliance estimated that “in the next ten years, 5G will have to support a one thousand-fold increase in data traffic, with lower energy consumption than today's networks.” This is a huge challenge, as it means increasing the energy efficiency of mobile networks by a factor of 2,000.
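A back-of-the-envelope reading of these figures, taking energy efficiency η as traffic delivered per unit of energy and assuming, for illustration, that “lower energy consumption” means half of today's:

```latex
\eta = \frac{D}{E}
\qquad\Longrightarrow\qquad
\frac{\eta_{5G}}{\eta_{4G}} \;=\; \frac{1000\,D \,/\, (E/2)}{D/E} \;=\; 2000
```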

To achieve this, stakeholders are counting on the principle of focusing communication. The idea is simple: instead of transmitting a wave from an antenna in all directions, as is currently the case, it is more economical to send it towards the receiver. Focusing waves is not an especially new field of research. However, it was only recently applied to mobile communications. The ANR Trimaran project, coordinated by Orange and involving several academic and industrial partners[1] including IMT Atlantique, explored the solution between 2011 and 2014. Last November, Trimaran won the “Economic Impact” award at the ANR Digital Technology Meetings.

Also read on I’MTech: Research and Economic Impacts: “Intelligent Together”

 

In order to successfully focus a wave between the antenna and a mobile object, the group's researchers have concentrated on a time-reversal technique: “the idea is to use a mathematical property: a solution to wave equations is correct, whether the time value is positive or negative,” says Patrice Pajusco, a telecommunications researcher at IMT Atlantique. He explains with an illustration: “take a drop of water. If you drop it onto a lake, it will create a ripple that will spread to the edges. If you reproduce the ripple at the edge of the lake, you can create a wave that will converge towards the point in the lake where the drop of water fell. The same phenomenon will happen again on the lake's surface, but with a reversed time value.”
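The property he refers to can be written in one line: the wave equation contains only a second-order time derivative, so it is unchanged when t is replaced by −t.

```latex
\frac{\partial^{2} u}{\partial t^{2}} = c^{2}\,\nabla^{2} u
\qquad\Longrightarrow\qquad
u(\mathbf{x},t) \text{ a solution} \;\implies\; u(\mathbf{x},-t) \text{ is also a solution}
```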

When applied to 5G, the principle of time reversal uses an initial transmission from the mobile terminal to the antenna. The mobile terminal signals its position by sending an electromagnetic wave, which spreads through the air and over the terrain before arriving at the antenna along with its echoes: a profile specific to its journey. The antenna records this profile and can send it back, reversed in time, to meet the user's terminal. The team at IMT Atlantique is especially involved in modeling and characterizing the communication channel that is created. “The physical properties of the propagation channel vary according to the echoes, which come from several directions and are more or less spread out. They must be well defined in order for the design of the communication system to be effective,” Patrice Pajusco underlines.
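A minimal numerical sketch of the focusing effect, with an invented multipath channel (the delays and amplitudes below are illustrative, not measured values): retransmitting the recorded profile reversed in time turns the channel's spread-out echoes into a single sharp peak at the receiver.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical multipath channel: a few echoes at random delays,
# with amplitudes that decay as the path gets longer.
h = np.zeros(200)
delays = rng.choice(200, size=8, replace=False)
h[delays] = rng.normal(size=8) * np.exp(-delays / 80.0)

# Time reversal: the antenna retransmits the recorded profile h
# reversed in time. What arrives back through the same channel is
# the autocorrelation of h, which peaks sharply at a single instant:
# the echoes add up coherently and the energy "focuses".
received = np.convolve(h[::-1], h)

peak = np.max(np.abs(received))
sidelobes = np.abs(received)[np.abs(received) > 0]
print(f"focusing gain (peak / median sidelobe): {peak / np.median(sidelobes):.1f}")
```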

 

Focusing also depends on antennas

Improving this technique also involves working on the antennas used. Focusing a wave on a standard antenna is not difficult, but focusing on a specific antenna when there is another one nearby is problematic: the antennas must be spaced out for the technique to work. To avoid this, one of the partners in the project is working on new types of micro-structured antennas which make it possible to focus a signal over a shorter distance, thereby limiting the spacing constraint.

The challenge of focusing is so important that since January 2016, most of the partners in the Trimaran project have been working on a new ANR project called Spatial Modulation. “The idea of this new project is to continue to save energy, while transmitting additional information to the antennas,” Patrice Pajusco explains. Insofar as it is possible to focus on a specific antenna, the choice of which antenna to target itself carries information. “We will therefore be able to transmit several bits of information simply by changing the focus of the antenna,” the researcher explains.
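The information gain is easy to quantify: if the transmitter can choose to focus on any one of N antennas, the choice alone conveys log₂ N bits on top of the modulated symbol. For example:

```latex
b_{\text{spatial}} = \log_2 N
\qquad\text{e.g. } N = 8 \text{ focusing targets} \;\Rightarrow\; 3 \text{ extra bits per symbol}
```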

This new project brings in an additional partner, Centrale Supélec, an “expert in the field of spatial modulation,” Patrice Pajusco says. If the results are conclusive, it could eventually provide a technology to compete with MIMO antennas, which rely on many emitters and receivers to transmit a signal. “By using spatial modulation and focusing, we could have a solution that would be much less complex than the conventional MIMO system,” the researcher hopes. Focusing clearly has the capacity to bring great value to 5G. The fact that it can be applied to moving vehicles has already been judged one of the most promising techniques by the H2020 METIS project, a reference in the European public-private partnership for 5G.

 

[1] The partners are Orange, Thalès, ATOS, Institut Langevin, CNRS, ESPCI Paris, INSA Rennes, IETR, and IMT Atlantique (formerly Télécom Bretagne and Mines Nantes).


With AutoMat, Europe hopes to adopt a marketplace for data from connected vehicles

Data collected by communicating vehicles represent a goldmine for providers of new services. But in order to buy and sell this data, economic players need a dedicated marketplace. Since 2015, the AutoMat H2020 project has been developing such an exchange platform. To achieve this mission by 2018, the end date for the project, a viable business model will have to be defined. Researchers at Télécom ParisTech, a partner in the project, are currently tackling this task.

 

Four wheels, an engine, probably a battery, and most of all, an enormous quantity of data generated and transmitted every second. There is little doubt that in the future, which is closer than we may think, cars will be intelligent and communicating. And beyond recording driving parameters to facilitate maintenance, or transmitting information to improve road safety, the data acquired by our vehicles will represent a market opportunity for third-party services.

But in order to create these new services, a secure platform for selling and buying data must still be developed, with sufficient volume to be attractive. This is the objective of the AutoMat project, launched in April 2015 and funded by the H2020 European research programme, which is developing a marketplace prototype.

The list of project members includes two service providers: MeteoGroup and Here, companies which specialize in weather forecasts and mapping respectively. For these two stakeholders, data from cars will only be valuable if it comes from many different manufacturers. For MeteoGroup, the purpose of using vehicles as weather sensors is to have access to real-time information about temperatures or weather conditions nearly anywhere in Europe. But a single brand would not have a sufficient number of cars to be able to provide this much information: therefore data from several manufacturers must be aggregated. This is no easy task since, for historical reasons, each one has its own unique format for storing data.

 


Data from communicating cars could, for example, optimize meteorological measurements by using vehicles as sensors.

 

To simplify this task without giving anyone an advantage, the Technical University of Dortmund is participating in the project by defining a new model with a standard data format agreed upon by all parties. This, however, requires automobile manufacturers to change their processes in order to integrate a data-formatting phase. But the cost of this adaptation is marginal compared to the great potential value of their data combined with that of their competitors. The Renault and Volkswagen groups, as well as the Fiat research centre, are partners in the AutoMat project in order to identify how to tap into the underlying economic potential.
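As a purely illustrative sketch of what such a harmonized format makes possible (the field names below are invented for this example and are not the project's actual schema), a common record lets measurements from different manufacturers be aggregated into one stream:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical harmonized record: once every manufacturer formats its
# measurements this way, a marketplace can aggregate them regardless
# of each brand's internal storage format.
@dataclass
class VehicleMeasurement:
    manufacturer: str   # pseudonymized source identifier, not a VIN
    timestamp: str      # ISO 8601, UTC
    latitude: float
    longitude: float
    channel: str        # e.g. "ambient_temperature"
    value: float
    unit: str

record = VehicleMeasurement(
    manufacturer="oem-a",
    timestamp=datetime.now(timezone.utc).isoformat(),
    latitude=48.71, longitude=2.20,
    channel="ambient_temperature", value=12.5, unit="degC",
)
print(asdict(record))
```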

 

What sort of business model?

In reality, it is less difficult to convince manufacturers than it is to find a business model for the marketplace prototype. This is why Télécom ParisTech’s Economics and Social Sciences Department (SES) is contributing to the AutoMat project. Giulia Marcocchia, a PhD student in Management Sciences who is working on the project, describes different aspects which must be taken into consideration:

“We are currently carrying out experiments on use cases, but the required business model is unique, so it takes time to define. Up until now, manufacturers have used data transmitted by cars to optimize maintenance or reduce life-cycle costs. In other sectors, there are marketplaces for selling data by packets or on a subscription basis to users clearly identified as either intermediary companies or final consumers. But in the case of a marketplace for aggregated data from cars, the users are not as clearly defined: the economic players interested in this data will only emerge as the platform is defined and the ecosystem around connected cars takes shape.”

For researchers in the SES department, this is the whole challenge: studying how a new market is created. To do so, they have adopted an effectual approach. Valérie Fernandez, an innovation management researcher and director of the department, describes this method as one in which “the business model tool is not used to analyze a market, but rather as a tool to foster dialogue between stakeholders in different sectors of activity, with the aim of creating a market which does not currently exist.”

The approach focuses on users: what do they expect from the product and how will they use it? This concerns automobile manufacturers who supply the platform with the data they collect as much as service providers who buy this data. “We have a genuine anthropological perspective for studying these users because they are complex and multifaceted,” says Valérie Fernandez. “Manufacturers become producers of data but also potential users, which is a new role for them in a two-sided market logic.”

The same is true for drivers, who are potential final users of the new services generated and may also have ownership rights for data acquired by vehicles they drive. From a legal standpoint nothing has been determined yet and the issue is currently being debated at the European level. But regardless of the outcome, “The marketplace established by AutoMat will incorporate questions about drivers’ ownership of data,” assures Giulia Marcocchia.

The project runs until March 2018. In its final year, different use cases should make it possible to define a business model that answers the questions raised by the various users' needs. Should it fulfill its objective, AutoMat will represent a useful tool for developing intelligent vehicles in Europe.

[divider style=”normal” top=”20″ bottom=”5″] 

Ensuring a secure, independent marketplace

In addition to the partners mentioned in the article above, the AutoMat project brings together stakeholders responsible for securing the marketplace and handling its governance. Atos is in charge of the platform, from its design to data analysis, in order to help identify the market's potential. Two partners, ERPC and Trialog, are also involved in key aspects of developing the marketplace: cybersecurity and confidentiality. Software systems engineering support for the various parties involved is provided by ATB, a non-profit research organization.

[divider style=”normal” top=”20″ bottom=”20″] 


What are digital social innovations?

Müge Ozman, Télécom École de Management – Institut Mines-Télécom and Cédric Gossart, Institut Mines-Télécom (IMT)

One of the problems that we encounter in our research on digital social innovation (DSI) relates to defining it. Is it a catch-all phrase? A combination of three trendy words? Digital social innovations (DSI) are often associated with positive meanings, like openness, collaboration or inclusion, as opposed to more commercially oriented innovations. In trying to define such a contested concept, we should strive to disentangle it from its positive aura.

The following figure is helpful for a start. Digital social innovation lies at the intersection of three spheres: innovation, social and environmental problems, and digital technologies.

Figure: digital social innovation at the intersection of the three spheres. Authors' own.

The first sphere is innovation. It refers to the development and diffusion of a (technological, social…) novelty that is not used yet in the market or sector or country where it is being introduced. The second sphere concerns the solutions put in place to address social and environmental problems, for example through public policies, research projects, new practices, civil society actions, business activities, or by decentralising the distribution of power and resources through social movements. For example, social inclusion measures facilitate, enable and open up channels for people to participate in social life, regardless of their age, sex, disability, race, ethnicity, origin, religion or socioeconomic status (e.g. the positive discrimination measures that enable minority students to enter universities). Finally, the third sphere relates to digital technologies, which concern hardware and software technologies used to collect, process, and diffuse information.

 

From innovative ideas to diffused practices

Many digital technologies are no longer considered innovations in 2017, at least in Europe, where they have become mainstream. For example, according to Eurostat, only 15% of the EU population do not have access to the Internet. On the other hand, some digital technologies are novel (area C in the figure), such as the service Victor & Charles, which enables hotel managers to access the social-media profiles of their clients in order to best meet their needs.

As regards the yellow sphere, many of its solutions to social and environmental problems are neither digital nor innovative. They relate to more traditional ways of fighting social exclusion or pollution, for example. To solve housing problems in France, the HLM system (habitations à loyer modéré) was introduced after World War II to provide subsidised housing to low-income households. When introduced it was an innovative solution, but it has now been institutionalised.

At the intersection between the solutions and digital technologies, we find area B, which does not intersect with the blue innovation sphere. There we find digital solutions to social and environmental problems which are not innovative, such as the monthly electronic newsletter Atouts from OPH (Fédération nationale des Offices Publics de l'Habitat), a federation of institutions in charge of the HLM system, which uses the newsletter to foster best practices among HLM agencies in France. We also find innovations that aim to solve social and environmental problems but which are not digital (area A). For example, the French start-up Baluchon builds affordable wooden DIY micro-houses that enable low-income people to live independently. As for area C, it concerns innovative digital technologies which do not aim to solve a social or environmental problem, such as a 3D tablet.
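The taxonomy can be made concrete with a toy classifier: tag an initiative with the figure's three properties, and the combination determines its region (the function and example tags are ours, for illustration only).

```python
# Toy encoding of the figure's three spheres; each region is the
# combination of three yes/no properties.
def classify(innovative: bool, social_env: bool, digital: bool) -> str:
    if innovative and social_env and digital:
        return "DSI: digital social innovation"
    if innovative and social_env:
        return "area A: social innovation, not digital"
    if social_env and digital:
        return "area B: digital solution, not innovative"
    if innovative and digital:
        return "area C: digital innovation, no social aim"
    return "a single sphere (or none)"

print(classify(innovative=False, social_env=True, digital=True))  # Atouts newsletter
print(classify(innovative=True, social_env=True, digital=False))  # Baluchon micro-houses
print(classify(innovative=True, social_env=True, digital=True))   # Ushahidi (see below)
```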

 

Using digital technologies to address real-world problems

In the area where the three spheres intersect lie digital social innovations. DSI can thus be defined as novelties that use, develop, or rely on digital technologies to address social and/or environmental problems. They include a broad group of digital platforms which facilitate peer-to-peer interactions and the mobilisation of people in order to solve social and/or environmental problems. Neighbourhood information systems, civic engagement platforms, volunteered geographic information systems and crowdfunding platforms for sustainability or social issues are some examples in the DSI area.

For example, the Ushahidi application, designed to map violent acts following the 2008 elections in Kenya, aggregates and diffuses information collected by citizens about urban violence, which enables citizens and local authorities to take precautionary measures. As for the I Wheel Share application, it facilitates the collection and diffusion of information about (positive and negative) urban experiences that may be useful to disabled people. Two other examples involve digital hardware (other than a smartphone). First, KoomBook, created by the NGO Librarians Without Borders, is an electronic box with a Wi-Fi hotspot that provides key educational resources to people deprived of Internet access. Second, the portable sensor developed by the company Plume Labs, which can be carried as a key holder, measures local air pollution in real time and shares the collected data with the community.

 

Theoretical clarity, practical imprecision

But as always happens with categorisations, boundaries are not as clear-cut as they may seem on a figure. In our case, there is a grey area surrounding digital social innovations. For example, if a technology makes it easier for many people to access certain goods or services (short-term recreational housing, individual urban mobility…), does it solve a social problem? The answer is clouded by the positive meaning attached to digital innovations, which can conflict with their possible negative social and environmental impacts (e.g., they might generate unfair competition or strong rebound effects).

Take the case of Airbnb: according to our definition, it could be considered a digital social innovation. It relies on a digital platform through which a traveller can find cheaper accommodations while possibly discovering local people and lifestyles. Besides avoiding the anonymity of hotels, tailored services are now offered to clients of the platform. Do you want to take a koto course while having your matcha tea in a Japanese culture house? This Airbnb “experience” will cost you 63 euros. Airbnb enables (some) people to earn extra income.


 

But the system can also cause the loss of established capabilities and knowledge, and exclude locals who may lack the necessary digital literacy (or lodgings located in central urban areas). While Airbnb customers might enjoy the wide range of offers available on the platform, as well as local cultural highlights sold in a two-hour pack, an unknown and ignored local culture lies on the poor side of the digital (and economic) divide.

 

Measuring the social impact

Without having robust indicators of the social impact of DSI, it is difficult to clarify this grey area and to solve the problem of definition. But constructing ex-ante and ex-post indicators of social impact is not easy from a scientific point of view. Moreover it is difficult to obtain user data as firms intentionally keep them proprietary, impeding research. In addition, innovators and other ecosystem members can engage in “share-washing”, concealing commercial activities behind a smokescreen of socially beneficial activities. An important step towards overcoming these difficulties is to foster an open debate about how profits obtained from DSI are distributed, about who is excluded from using DSI and why, and about the contextual factors that ultimately shape DSI social impacts.

As troublesome as definition issues may be, researchers should not reject the term altogether for being too vague, since DSI can have a strong transformative power regarding empowerment and sustainability. But neither should they impose a restrictive categorisation of DSI, in which Uber and Airbnb have no place. The involvement of a broad variety of actors (users and nonusers, for-profit and not-for-profit…) in the definition of this public construct would do justice to the positive reputation of DSI.

 

Müge Ozman, Professor of Management, Télécom École de Management – Institut Mines-Télécom, and Cédric Gossart, Associate Professor, Institut Mines-Télécom (IMT)

The original version of this article was published on The Conversation.


Technologically enhanced humans: a look behind the myth

What exactly do we mean by an “enhanced” human? When this possibility is brought up, what is generally being referred to is the addition of human and machine-based performances (expanding on the figure of the cyborg popularized by science fiction). But enhanced in relation to what? According to which reference values and criteria? How, for example, can happiness be measured? A good life? Sensations, like smells or touch which connect us to the world? How happy we feel when we are working? All these dimensions that make life worth living. We must be careful here not to give in to the magic of figures. A plus can hide a minus; something gained may conceal something lost. What is gained or lost, however, is difficult to identify as it is neither quantifiable nor measurable.

Pilots of military drones, for example, are enhanced in that they use remote sensors, optronics, and infrared cameras, enabling them to observe much more than could ever be seen with the human eye alone. But what about the prestige of harnessing the power of a machine, the sensations and thrill of flying, the courage and sense of pride gained by overcoming one’s fear and mastering it through long, tedious labor?

Another example, taken from a different context, is that of telemedicine and remote diagnosis. Seen from one angle, it creates the possibility of benefitting from the opinion of an expert specialist from your own home, wherever it is located. For isolated individuals who are losing independence and mobility, or for regions that have been turned into medical deserts, this represents a real advantage and undeniable progress. However, field studies have shown that some people worry it may be a new way of being shut off from the world and confined to one's home. Going to see a specialist, even one who is located far away, forces individuals to leave their everyday environments, change their routines and meet new people. It therefore represents an opportunity for new experiences and, to a certain extent, leads to greater personal enrichment (another possible definition of enhancement).

 


Telemedicine consultation. Intel Free Press/Wikimedia, CC BY-SA

How technology is transforming us

Of course, every new form of progress comes with its share of abandonment of former ways of doing and being, habits and habitus. What is most important is that the sum of all gains outweighs that of all losses, and that new feelings replace old ones. Except this economic and market-based approach places qualitatively disparate realities on the same level: that of usefulness. And yet, there are things which are completely useless (devoting time to listening, wasting time, wandering about) which seem to be essential in terms of social relations, life experiences, learning, imagination, creation, etc. Therefore, the issue is not knowing whether or not machines will eventually replace humans, but rather understanding the values we place in machines, values which will, in turn, transform us: speed, predictability, regularity, strength, etc.

The repetitive use of geolocation, for example, is making us dependent on this technology. More worryingly, our increasing reliance on this technology is insidiously changing our everyday interactions with others in public or shared places. Are we not becoming less tolerant of the imperfections of human beings, of the inherent uncertainty of human relationships, and also more impatient in some ways? One of the risks I see here is that in the most ordinary situations, we will eventually expect human beings to behave with the same regularity, precision, velocity and even the same predictability as machines. Is this shift not already underway, as illustrated by the fact that it has become increasingly difficult for us to talk to someone passing by, to ask a stranger for directions, preferring the precise, rapid solution displayed on the screen of our iPhone to this exchange, which is full of unpredictability and in some ways, risk? These are the questions we must ask ourselves when we talk about “enhanced humans.”

Consequently, we must also pay particular attention to the idea that, as we get used to machines’ binary efficiency and lack of nuance, it will become “natural” for us and as a result, human weakness will become increasingly intolerable and foreign. The issue, therefore, is not knowing whether machines will overthrow humans, take our place, surpass us or even make us obsolete, but rather understanding under what circumstances — social, political, ethical, economic — human beings start acting like machines and striving to resemble the machines they design. This question, of humans acting like machines which is implicit in this form of behavior, strikes me as both crucial and pressing.

 

Interacting with machines is more reassuring

It is true that with so-called social or “companion” robots (like Paro, Nao, NurseBot, Bao, Aibo or My Real Baby), in which we hope to find figures capable not only of communicating with us and acting in our familiar everyday environments, but also of demonstrating emotions, learning, empathy, etc., the perspective seems to be reversed. Psychologist and anthropologist Sherry Turkle has studied this shift from thinking of robots as frightening and strange to thinking of them as potential friends. What happened, she wondered, to make us ready to welcome robots into our everyday lives, and even want to create emotional attachments with them, when only yesterday they inspired fear or anxiety?

 


Korean robot, 2013. Kiro-M5, Korea Institute of Robot and Convergence

 

After several years studying nursing homes which had chosen to introduce these machines, the author of Alone Together concluded that one of the reasons why people sometimes prefer the company of machines to that of humans is the prior deterioration of the relationships they may have experienced in the real world. Hallmarks of these relationships are distrust, fear of being deceived and suspicion. Turkle also cites a certain fatigue from always having to be on guard, as well as boredom: being in others' company bores us. She deduces that the concept of social robots suggests that our way of facing intimacy may now be reduced to avoiding it altogether. According to her, this deterioration of human relationships represents the foundation and condition for developing social robots, which respond to a need for a stable environment, fixed reference points, certainty and a predictability seldom offered by normal relationships in today's context of widespread deregulation.

It is as if we expect our “controlled and controllable” relationships with machines to make up for the helplessness we sometimes feel, when faced with the injustice and cruelty reserved for entire categories of living beings (humans and non-humans, when we think of refugees, the homeless or animals used for industry). A solution of withdrawal, or a sort of refuge, but one which affects how we see ourselves in the world, or rather outside the world, without any real way to act upon it.

[divider style=”normal” top=”5″ bottom=”5″]

 

Gérard Dubey, Sociologist, Télécom École de Management, Institut Mines-Télécom
This article was originally published in French in The Conversation France


Arago, technology platform in optics for the industrial sector

Arago is a technology platform specializing in applied optics, based at IMT Atlantique's Brest campus. It provides scientific services and a technical environment for manufacturers in the optics and vision sectors, and has unique expertise in the fields of liquid crystals, micro-optics and health.

 

Are you a manufacturer or a small business wanting to experiment with a disruptive innovation? Perhaps you are looking for innovative technological solutions, but lack the time or the technical means to develop them? The solution might be to try your luck with a technology platform.

Arago can help you turn your new ideas into reality. Designed to foster innovation and technology transfer in research, it is home to highly skilled scientists and technological facilities. Nexter, Surys and Valéo are among the companies that have already placed their trust in the platform, a component of the Télécom & Société numérique Carnot institute specialized in the field of optics. Arago has provided them with access to a variety of high-end technology solutions for design, modeling, the creation of micro-optics, electro-optic functions based on liquid crystals, and composite materials for vision and protection. The fields of application are varied, ranging from protective and corrective glasses to holographic marking. It all involves a great deal of technical expertise, which we discussed step by step with Jean-Louis de Bougrenet, researcher at IMT Atlantique and creator of the Arago platform.

 

Health and impact measurement of new technologies

Health is a field which benefits directly from Arago's potential. The platform draws directly on the 3D Fovéa health interest group, which includes IMT Atlantique, the University Hospital of Brest and INSERM, and which led to the creation of the spinoff Orthoptica, the French leader in digital orthoptic tools. Thanks to Orthoptica, the platform was able to set up Binoculus, an orthoptic platform.

Binoculus operates as a digital platform and aims to replace the existing tools used by orthoptists, which are not very ergonomic. The technology consists solely of a computer, a pair of 3D glasses and a video projector, and it becomes more widely usable by reducing the duration of tests. The glasses have shutter lenses, allowing the practitioner to block each eye in synchronization with the material being projected. This orthoptic tool is used to evaluate the different degrees of binocular vision, and aims to help adolescents who have difficulties with fixation or concentration.

Acting for health now is worthwhile, but anticipating the needs of the future is even better. In this respect, the platform plays an important role for ANSES[1] in evaluating the risks involved in deploying new immersive technology such as the Oculus and Vive virtual reality headsets. “When the visual system is involved, an impact study with scientific and clinical criteria is essential,” explains Jean-Louis de Bougrenet. These evaluations are based on in-depth testing carried out on clinical samples, in liaison with hospitals such as Necker, HEGP (Georges Pompidou) and the University Hospital of Brest.

 

Applications in diffractive micro-optics and liquid crystal

At the same time, Arago benefits from the expertise of researchers with several decades of experience in liquid crystal engineering (Pierre-Gilles de Gennes laboratory). “These materials present significant electro-optical effects, enabling us to modulate light in different ways at very low voltage,” explains Jean-Louis de Bougrenet.

The researchers have used liquid crystals for many industrial purposes (protection goggles, spectral filters, etc.). In fact, liquid crystals will soon be part of our immediate surroundings without us even knowing: they are used in flat screens, in 3D and augmented reality goggles, and even as a camouflage technique (Smart Skin). To support these developments, Arago has manufacturing and testing facilities that are unique in France, including over 150 m² of cleanrooms.

Other objects are also omnipresent in our daily lives, not least in our smartphones: diffractive micro-optics. One of their specific features is that they exist in different sizes. Arago has all the tools necessary to design and produce these optics at different scales, both individually and collectively, with easily industrialized nano-imprint duplication processes. “We use micro-optics in many fields, for example manufacturing security holograms, biometric recognition, quality control and the automobile sector,” explains Jean-Louis de Bougrenet. Researchers recently set up a so-called two-photon photopolymerization technique, which allows the direct fabrication of fully 3D nanostructures.

 

European ambitions

Arago is also involved in many other projects. Since 2016, it has hosted an IRT BCom platform. This platform is dedicated to creating very high-speed optical transmission systems in free space, for wireless connections for virtual reality headsets in environments such as the Cave Automatic Virtual Environment (CAVE).

Already firmly established in Brittany, Arago recently finalized a technology partnership with the INL (International Iberian Nanotechnology Laboratory, a European platform similar to CERN). The partnership involves pooling resources, privileged access to technology, and the creation of joint European projects. The INL is unique in the European field of nanoscience and nanotechnology, and Arago contributes the complementary material components it had been lacking. For Jean-Louis de Bougrenet, “in the near future, Arago will become part of the European technology cluster addressing new industrial actors, by adding to our offering and more easily integrating European programs with a sufficient critical mass. This partnership will enable us to develop our emerging activity in the field of intelligent sensors for the environment and biology.”

 

 [1] ANSES: French Agency for Food, Environmental and Occupational Health & Safety

Some examples of products developed through Arago

 

[one_half][box]

Night-vision driving glasses

These glasses are designed for driving at night, and prevent the driver from being dazzled by car lights. They were designed in close collaboration with Valéo.

These glasses use two combined effects: absorption and reflection. This is possible due to a blend of liquid crystals and absorbent material, which creates a variable density. This technology is the result of several years’ research. Several patents have been registered, and have been licensed to the industrial partner.

[/box][/one_half]

[one_half_last][box]

Holographic marking to fight counterfeiting

This marking was entirely designed by Arago. Micro-optics are integrated into banknotes, for example, to fight counterfeiting. They come in the form of holographic films or strips. Their manufacture uses a copying system which reduces production costs, making mass production possible. The work was carried out in close collaboration with the industrial partner Surys. Patents have been registered and transferred. The project also led to a copying machine being created for the industrial partner. The machine is currently in use at the company.

[/box][/one_half_last]

[box type=”shadow” align=”” class=”” width=””]

The TSN Carnot institute, a guarantee of excellence in partnership-based research since 2006

Having first received the Carnot label in 2006, the Télécom & Société numérique Carnot institute is the first national “Information and Communication Science and Technology” Carnot institute. Home to over 2,000 researchers, it is focused on the technical, economic and social implications of the digital transition. In 2016, the Carnot label was renewed for the second consecutive time, demonstrating the quality of the innovations produced through collaborations between researchers and companies. The institute encompasses Télécom ParisTech, IMT Atlantique, Télécom SudParis, Télécom École de Management, Eurecom, Télécom Physique Strasbourg and Télécom Saint-Étienne, École Polytechnique (LIX and CMAP laboratories), Strate École de Design and Femto Engineering.[/box]


The television of the future: secondary screens to enrich the experience?

The television of the future is being invented in the Eurecom laboratories in Sophia Antipolis. This is not about creating technologies for even bigger, sharper screens, but rather about reinventing the way TV is used. In a French Unique Interministerial Fund (FUI) project named NexGenTV, researchers, broadcasters and developers have joined forces to achieve this goal. Launched in 2015 for a duration of three years, the project is already showing results. Raphaël Troncy, a Eurecom researcher in data science involved in the project, presents the progress made to date and the potential for improvement, primarily based on enriching content through a secondary screen.

 

With the NexGenTV project, you are trying to reinvent the way we use TV. Where did this idea come from?

Raphaël Troncy: This is a fairly widespread movement in the audiovisual sector. TV channels are realizing that people are using their television screens less and less to watch their programs. They watch them through other modes, such as replay or special mobile phone apps, and do other things at the same time. TV channels are pushing for innovative applications. The problem is that nobody really knows what to do, because nobody knows what users want. At Eurecom, we worked on an initial project financed by the European FP7 program, called LinkedTV. We worked with users to find out what they want, and what the channels want. Then, with NexGenTV, we focused on applications for a second screen, like a tablet, to offer enriched content to viewers while affording TV channels the ability to maintain editorial content.

 

Although the project won’t be completed until next year, have you already developed promising applications?

RT: Yes, our technology has been used since last summer by the tablet app for the beIN Sports channels. The technology allows users to automatically select the highlights and access additional content for Ligue 1 football matches. Users can access events such as goals or fouls, they can see who was touching the ball at a given moment, or statistics on each player, all in real time. We are working towards offering information such as replays of similar goals by other players in the championship, or goals in previous matches by the player who has just scored.

 

In this example, what is your contribution as a researcher?

RT: The technology we have developed opens up several possibilities. Firstly, it collects and formats the data sent by service providers. For example, the number of kilometers a player covers, or images of the goals. This is challenging, because live conditions mean that this needs to happen within the few seconds between the real event and the moment it is broadcast to the user. Secondly, the technology performs semantic analysis, extracting data from players’ sites, official FIFA or French Football Federation sites, or Wikipedia, to provide a condensed version to the TV viewer.
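As a minimal sketch of the semantic-enrichment step (illustrative only: the project aggregates several sources, while this example queries just one public endpoint, Wikipedia's REST summary API, and the player name is arbitrary):

```python
import json
from urllib.parse import quote
from urllib.request import Request, urlopen

def entity_summary(title: str) -> dict:
    """Fetch a condensed description of an entity (e.g. a player)
    from Wikipedia's public REST API."""
    url = "https://en.wikipedia.org/api/rest_v1/page/summary/" + quote(title)
    req = Request(url, headers={"User-Agent": "second-screen-demo/0.1"})
    with urlopen(req) as resp:
        data = json.load(resp)
    # Keep only what a second-screen card would actually display.
    return {"title": data.get("title"), "summary": data.get("extract")}

card = entity_summary("Zlatan Ibrahimović")
print(card["title"], "-", (card["summary"] or "")[:120], "...")
```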

 

Do you also perform image analysis, for example to enable viewers to watch similar goals?

RT: We did this for the first prototypes, but we realized that the data provided were already rich enough. However, we do analyze images for another use: the many political debates taking place during the current election period. There is no application for this yet; we are developing it. But we practiced on the debates for the two primary elections, and we are continuing with the current and upcoming debates for the presidential and legislative elections. We would like to be able to display an extract of a candidate’s previous speech on the tablet while they are talking about a particular subject, because what they are saying is complementary, contradictory, or linked to a relevant proposition in their program. We also want to be able to isolate the “best moments” based on parallel activity on Twitter, or on a semantic analysis of the candidates’ speeches, and offer a condensed summary.
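
One simple way to isolate “best moments” from parallel Twitter activity is to look for bursts in the tweet rate. The sketch below is an assumption about the general approach, not the project's code: it flags the minutes of a broadcast whose tweet volume far exceeds the average.

```python
# Illustrative sketch: flag "best moments" of a broadcast as bursts in the
# per-minute tweet count (timestamps in seconds since the start of the show).
from collections import Counter

def best_moments(tweet_timestamps, factor=3.0):
    per_minute = Counter(int(t // 60) for t in tweet_timestamps)
    if not per_minute:
        return []
    avg = sum(per_minute.values()) / len(per_minute)
    # A minute is a candidate highlight if its volume is well above average.
    return sorted(m for m, n in per_minute.items() if n > factor * avg)

# Toy data: a burst of 200 tweets around minute 42 of a debate.
stamps = [40.0, 90.0] + [2520 + i * 0.25 for i in range(200)] + [3000.0]
print(best_moments(stamps))  # -> [42]
```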

 

What is the value of image analysis in this project?

RT: For replays, image analysis allows us to better segment a program and offer the viewer a frame of reference. But it also provides access to specific information. For example, during the last debate of the right-wing primary election, we measured the on-screen presence time of the candidates using facial recognition based on deep learning. We wanted to see whether there was a difference in the way the candidates were treated, or whether they received equal exposure, as is the case for speaking time, which is monitored by the CSA (the French media regulator). We found that the broadcasters’ shot selection favored Nicolas Sarkozy over the other candidates. This can be explained: he was strongly challenged by the other candidates, so the cameras focused on him even when he wasn’t speaking. But it also demonstrates how an image recognition application can give viewers keys to interpreting programs.
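
As a rough illustration of how on-screen presence time can be tallied, the sketch below samples one frame per second and matches detected faces against reference photos using the open-source face_recognition library. This is a plausible reconstruction under stated assumptions, not the deep-learning pipeline actually used in the study.

```python
# Sketch: estimate each candidate's on-screen time by sampling one frame
# per second and matching detected faces against reference encodings.
import cv2
import face_recognition

def screen_time(video_path, references):  # references: {name: encoding}
    cap = cv2.VideoCapture(video_path)
    fps = int(cap.get(cv2.CAP_PROP_FPS)) or 25
    seconds = {name: 0 for name in references}
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % fps == 0:  # one sampled frame per second of video
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            for enc in face_recognition.face_encodings(rgb):
                for name, ref in references.items():
                    if face_recognition.compare_faces([ref], enc)[0]:
                        seconds[name] += 1
        frame_idx += 1
    cap.release()
    return seconds

# References are built once from a labeled photo of each candidate, e.g.:
# enc = face_recognition.face_encodings(
#     face_recognition.load_image_file("candidate.jpg"))[0]
```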

 

The goal of your technology is to give even more context, and inform the user?

RT: Not necessarily; we also have a use case with an educational program broadcast on France Télévisions, where we wanted to offer viewers quizzes as educational material. We are also working on tailoring advertising to users during replay viewing. The idea is to make the most of the potential of secondary screens to improve the user’s experience.

 

[divider style=”normal” top=”20″ bottom=”20″]

NexGenTV: a consortium for inventing the new TV

The NexGenTV project brings together researchers from Eurecom and the Irisa joint research unit (co-supervised by IMT). The consortium also includes companies from the audiovisual sector: Wildmoka is in charge of creating the secondary-screen applications, along with Envivio (acquired by Ericsson) and Avisto. The partners work with an associated club of broadcasters, which provides the audiovisual content required to build the applications. The club includes France Télévisions, TF1, Canal+, etc.

[divider style=”normal” top=”20″ bottom=”20″]


Cybersecurity: Detect and Conquer

Researchers from Université Paris-Saclay member institutions are developing algorithms and visual tools to help detect and counter cybersecurity failures.

 

You can’t fight what you can’t see. The protection of computer systems is a growing concern, as an increasing number of smart devices gather our private data. Computer security has to cover hardware as well as software vulnerabilities, including network access, and it needs to offer efficient countermeasures. But the first step in cybersecurity is to detect and identify intrusions and cyberattacks.

Typical attacks degrade the availability of a service (denial of service), try to steal confidential information, or compromise a service’s behavior by modifying the flow of events produced during an execution (that is, adding, removing or modifying events). They are difficult to detect in a highly distributed environment (such as the cloud or e-commerce applications), where the order of the observed events is partially unknown.

Researchers from CentraleSupélec designed a new approach to tackle this issue. They combine an automaton that models the correct behavior of a distributed application with a list of temporal properties that any execution must satisfy (“is always or never followed by”, “always precedes”, etc.). The automaton generalizes the model from a finite (and thus incomplete) set of observed behaviors, while the temporal properties prevent incorrect behaviors from creeping into the model during the learning phase. By combining these two methods, the team managed to lower both the rate of false positives (down to 2% in certain cases) and the mean time needed to detect an intrusion (less than one second).
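
To give an intuition of this combination, here is a deliberately simplified sketch, not the CentraleSupélec system: legal transitions between events are learned from clean traces, and a temporal property is checked on top, so a trace is flagged if it takes an unseen transition or violates the property.

```python
# Simplified sketch of anomaly detection with a learned behavior model:
# transitions between successive events are learned from clean traces,
# and a temporal rule ("open always precedes close") is checked on top.
def learn_transitions(traces):
    allowed = set()
    for trace in traces:
        allowed.update(zip(trace, trace[1:]))
    return allowed

def is_anomalous(trace, allowed):
    # Unknown transition => possible intrusion.
    if any(step not in allowed for step in zip(trace, trace[1:])):
        return True
    # Temporal property: every "close" must be preceded by an "open".
    opened = False
    for event in trace:
        if event == "open":
            opened = True
        elif event == "close" and not opened:
            return True
    return False

normal = [["open", "read", "close"], ["open", "write", "close"]]
allowed = learn_transitions(normal)
print(is_anomalous(["open", "read", "close"], allowed))  # False
print(is_anomalous(["read", "close"], allowed))          # True: no "open"
```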

Another team from the same UPSaclay member chose a different approach. These researchers designed an intuitive visualization tool that helps manage security alerts easily and automatically. Cybersecurity mechanisms raise large quantities of alerts, many of them false positives. VEGAS, for “Visualizing, Exploring and Grouping AlertS”, is a customizable filter system. It offers the front-line security operator (in charge of dispatching the alerts to security analysts) a simple 2D representation of the original dataset of alerts they receive. Alerts that are close in the original dataset remain close in the computed representation, while alerts that are distant stay distant. The operator can then select alerts that visually appear to belong to the same group, i.e. similar alerts, to generate a new rule to be inserted in the dispatching filter. That way, the number of alerts the front-line security operator receives is reduced, and security analysts only get the alerts they need to investigate further.
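
The distance-preserving projection described here is characteristic of multidimensional scaling (MDS). The sketch below is an illustrative reconstruction, not the VEGAS code: it projects alert feature vectors into 2D so that similar alerts land close together and can be grouped visually.

```python
# Illustrative sketch of the VEGAS idea: project high-dimensional alert
# features to 2D while preserving pairwise distances, so similar alerts
# cluster visually and can be turned into dispatching rules.
import numpy as np
from sklearn.manifold import MDS

# Toy alerts: (source entropy, destination port, packet rate), scaled.
alerts = np.array([
    [0.10, 0.80, 0.20],
    [0.12, 0.81, 0.22],  # near-duplicate of the first alert
    [0.90, 0.10, 0.95],  # a very different kind of alert
])

coords = MDS(n_components=2, random_state=0).fit_transform(alerts)
print(coords)  # the first two alerts stay close; the third stays far away
```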

Those analysts can then use another visualization tool, developed by a team from CNRS and Télécom SudParis, to calculate the impact of cyberattacks and security countermeasures. Here, systems are given coordinates along multiple spatial, temporal and contextual dimensions. For instance, to access a web server (resource) of a given organization, an external user (user account) connects remotely (spatial condition) by providing a login and password (channel) at a given date (temporal condition).

In this geometrical model, an attack that compromises some resources through a given channel is represented as a surface (a square or rectangle). If it also compromises some user accounts, it becomes a parallelepiped. Conversely, if we only know which resources are compromised, the attack affects only one axis of the representation and is a line.

 

“Cybersecurity mechanisms raise large quantities of alerts.”

Researchers then geometrically determine the portion of the service that is under attack and the portion of the attack covered by a given security measure. From this they automatically calculate the residual risk (the percentage of the attack left untreated by any countermeasure) and the potential collateral damage (the percentage of the service that is not under attack but is affected by a given countermeasure). These figures allow security administrators to compare the impact of multiple attacks and/or countermeasures in complex attack scenarios: they can measure the size of cyber events, identify vulnerable elements and quantify the consequences of attacks and countermeasures.
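
A toy version of this calculation, assuming axis-aligned boxes rather than the general polytopes of the published model: residual risk is the fraction of the attack volume not covered by the countermeasure, and collateral damage is the fraction of the service affected by the countermeasure but not under attack.

```python
# Toy version of the geometric impact model with axis-aligned boxes.
# A box is a list of (low, high) intervals, one per dimension
# (e.g. resources, user accounts, channels).
def volume(box):
    v = 1.0
    for lo, hi in box:
        v *= max(0.0, hi - lo)
    return v

def intersection(a, b):
    return [(max(al, bl), min(ah, bh)) for (al, ah), (bl, bh) in zip(a, b)]

attack = [(0, 4), (0, 2)]          # compromised resources x user accounts
countermeasure = [(0, 2), (0, 2)]  # portion covered by the security measure
service = [(0, 10), (0, 5)]        # the whole service

covered = volume(intersection(attack, countermeasure))
residual_risk = 1 - covered / volume(attack)
collateral = (volume(countermeasure) - covered) / volume(service)
print(f"residual risk: {residual_risk:.0%}")   # 50% of the attack untreated
print(f"collateral damage: {collateral:.0%}")  # 0%: measure fits inside attack
```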

But what if the attack takes place directly in the hardware? When outsourcing the fabrication of their circuits, companies cannot be sure that no malicious circuit, such as a hardware trojan, has been introduced. Researchers from Télécom ParisTech proposed a metric to measure the impact of a trojan’s size and location: using this metric, a hardware trojan larger than 1% of the original circuit is detected with a probability greater than 99% (and a false negative rate of 0.017%).

More recently, the same team designed a sensor circuit to detect electromagnetic injection, an intentional fault-injection technique used to steal secret information hidden inside integrated circuits. The sensor offers high fault-detection coverage with a small hardware overhead. So maybe you can fight what you can’t see, or at least you can try to. You just have to be prepared!

 

References
Gonzalez-Granadillo et al. A polytope-based approach to measure the impact of events against critical infrastructures. Journal of Computer and System Sciences, Volume 83, Issue 1, February 2017, Pages 3-21
Crémilleux et al. VEGAS: Visualizing, exploring and grouping alerts. NOMS 2016 – 2016 IEEE/IFIP Network Operations and Management Symposium, Istanbul, 2016, pp. 1097-1100
Totel et al. Inferring a Distributed Application Behavior Model for Anomaly Based Intrusion Detection. 2016 12th European Dependable Computing Conference (EDCC), Gothenburg, 2016, pp. 53-64
Miura et al. PLL to the rescue: A novel EM fault countermeasure. 2016 53rd ACM/EDAC/IEEE Design Automation Conference (DAC), Austin, TX, 2016

 

[divider style=”normal” top=”5″ bottom=”5″]

The original version of this article was published in l’Édition of Université Paris-Saclay.

 


Supply chain management: Tools for responding to unforeseen events

The inauguration of IOMEGA took place on March 14, 2017. This demonstration platform is designed to accelerate the dissemination of Mines Albi researchers’ contributions to the industrial world, particularly their expertise in supply chain management. Matthieu Lauras, an industrial engineering researcher, is already working on tools to help businesses and humanitarian agencies manage costs and respond to unforeseen events.

 

First there were Fordism and Toyotism, and then came Supply Chain Management (SCM). So much for the rhyme. During the 1990s, businesses were marked by the globalization of trade and by offshoring. They began working in networks and relying on constantly changing information and communication technologies. It was clear that the industrial organization of the past would no longer work. Researchers named this revolution Supply Chain Management.

Twenty-five years later, SCM has come a long way and has become a discipline in its own right. It aims to manage all of the flows (materials, information, cash) that are vital to a business or a network of businesses.

Supply Chain Management today

Supply chain management considers the entire network, from suppliers to the final users of a product (or service). Matthieu Lauras, an industrial engineering researcher at Mines Albi, gives an example: “For the yogurt supply chain, there are the suppliers of raw materials (milk, sugar, flour…), then the purchase of containers to make the cups and boxes, etc.” Supply chain management coordinates all of these flows in order to manufacture products on schedule and deliver them to the right place, within the planned budget.

SCM concerns all sectors of activity from the manufacturing industry to services. It has become essential to a business’s performance and management. But there is room for improvement. Up until now, the tools created have been devoted to cost control. “The competitiveness problem that businesses are currently facing is no longer linked to this element. What now interests them is their ability to detect disruptions and react to them. That’s why our researchers are focusing on supply chain agility and resilience,” explains Matthieu Lauras. At Mines Albi, researchers are working on improving SCM tools using a blend of IT and logistics skills.

Applied research to better handle unforeseen events

A number of elements can disrupt the proper functioning of supply chains. On one hand, markets are constantly changing, making it difficult to estimate production volumes. On the other hand, globalization has made transport subject to greater variations. “The strength of a business lies in its ability to handle disruptions,” notes Matthieu Lauras. This is why researchers are developing new tools which are better suited to these networks. “We are working on detecting differences between what was planned and what is really happening. We’re also developing decision-making support resources in order to enhance decision-makers’ ability to adapt. This helps them take corrective action in order to react quickly and effectively to unforeseen events,” explains the researcher.

As a first step, researchers are concentrating on the resistance and resilience of the network. They have set up research designs based on simulated disruptions in order to evaluate the chain’s response to these events. Modeling makes it possible to test different scenarios and evaluate the impact of a disruption according to its magnitude and its location in the supply chain. “We are working on a concrete case as part of the Agile Supply Chain Chair with Pierre Fabre. For example, this involves evaluating whether a purchaser’s network of suppliers would be able to cope with significant variations in demand. It is also important to determine whether the purchaser could maintain its activity under acceptable conditions in the event of a sudden default by one of these partners,” explains Matthieu Lauras.
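
The kind of stress test described can be pictured with a Monte Carlo sketch. All figures and the model below are illustrative assumptions, not the Chair's actual simulations: each supplier defaults with some probability, demand is randomly inflated, and we estimate how often the remaining capacity still covers demand.

```python
# Illustrative Monte Carlo stress test of a supplier network: with some
# probability each supplier defaults, demand is randomly inflated, and we
# estimate how often remaining capacity still covers demand.
import random

def resilience(capacities, base_demand, p_default=0.1, surge=1.5, runs=10_000):
    ok = 0
    for _ in range(runs):
        capacity = sum(c for c in capacities if random.random() > p_default)
        demand = base_demand * random.uniform(1.0, surge)
        ok += capacity >= demand
    return ok / runs

suppliers = [100, 80, 60, 40]  # units per period from each supplier
print(f"service level under stress: {resilience(suppliers, 200):.1%}")
```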

New technology for real-time monitoring of supply chains

Another area of research is real-time management. “We use connected devices because they allow us to obtain information at any time about the entire network. But this information arrives in a jumble… that’s why we are working on tools based on artificial intelligence to help ‘digest’ it and pass on only what is necessary to the decision-maker,” says the researcher.

In addition, these tools are tested through collaborations with businesses and end users. “Using past data, we observe the level of performance of traditional methods in a disrupted situation. Performance is measured in terms of fill rate, cycle time (for example, the time between a given step and delivery), etc. Then we simulate the performance we would obtain using our new tools. This allows us to measure the differences and demonstrate the positive impact,” explains Matthieu Lauras.
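
The two metrics quoted can be computed directly from past order data. A minimal sketch with illustrative field names and figures, not a real company dataset:

```python
# Minimal sketch: compute fill rate and average cycle time from past orders.
# Field names and figures are illustrative.
orders = [
    {"qty_ordered": 100, "qty_delivered": 100, "ordered_day": 0, "delivered_day": 3},
    {"qty_ordered": 50,  "qty_delivered": 40,  "ordered_day": 2, "delivered_day": 9},
    {"qty_ordered": 80,  "qty_delivered": 80,  "ordered_day": 5, "delivered_day": 8},
]

fill_rate = (sum(o["qty_delivered"] for o in orders)
             / sum(o["qty_ordered"] for o in orders))
cycle_time = (sum(o["delivered_day"] - o["ordered_day"] for o in orders)
              / len(orders))

print(f"fill rate: {fill_rate:.1%}")             # 220/230 ~ 95.7%
print(f"avg cycle time: {cycle_time:.1f} days")  # (3+7+3)/3 ~ 4.3 days
```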

Industry partners then provide the opportunity to conduct field experiments. If the results are confirmed, partners like Iterop carry out the development needed to turn the tools into commercial products serving a wider range of users. Founded in 2013 by two former Mines Albi PhD students, the start-up Iterop develops and markets software solutions that simplify collaboration between a company’s personnel and its information system.

A concrete example: The Red Cross

Mines Albi researchers are working on identifying strategic locations around the world where the Red Cross can pre-position supplies, enabling the organization to respond to natural disasters more quickly. Unlike businesses, humanitarian agencies do not strive to make a profit but rather to control costs; any savings directly widen their scope of action, allowing them to intervene in a greater number of operational areas for the same budget.

Matthieu Lauras explains: “Our research has helped reorganize the network of warehouses used by this humanitarian agency. When a crisis occurs, it must be able to make the best choices of suppliers and modes of transport. However, it does not currently have a way to weigh the pros and cons of the different options. For example, it focuses on its list of international suppliers but does not consider local suppliers. So we provide decision-making support tools for planning and short-term action, to help make decisions in an emergency situation.”

But is it possible to transpose techniques from one sector to another? Researchers have naturally identified this possibility, referred to as cross-learning. Supply chains in the humanitarian sector already function with agility, while businesses excel at controlling costs. “We take the best practices from one sector and use them in another. The difficulty lies in successfully adapting them to very different environments,” explains Matthieu Lauras. In both cases, this applied research has proven successful and will only continue to expand in scope. The arrival of the IOMEGA platform should help researchers perform practical tests and shorten implementation times.

 

[box type=”shadow” align=”” class=”” width=””]

IOMEGA: Mines Albi’s industrial engineering platform

This platform, which was inaugurated on March 14, 2017, makes it possible for Mines Albi to demonstrate tools for product design and configuration as well as for designing information systems for crisis management, risk management for projects and supply chain management.

Most importantly, it offers decision-making support tools for complex and highly collaborative environments. For this, the platform is equipped with connected-device experiment kits and computer hardware running on an autonomous network, making it possible to set up experiments under realistic conditions. An audiovisual system (video wall, touchscreen, etc.) is also used for demonstrations, helping potential users immerse themselves in configurations that mimic real-life situations.

IOMEGA was designed to provide two spaces for scenario configuration on which two teams may work simultaneously. One uses conventional tools while the other tests those from the laboratory.

A number of projects involving the platform have already been launched, including the Agile Supply Chain Chair in partnership with Pierre Fabre and the AGIRE joint laboratory, dedicated to business resilience, in association with AGILEA (a supply chain management consulting firm). Another project is a PhD dissertation on the connected management of flows of urgent products with the French blood bank (EFS). In the long term, IOMEGA should lead to new partnerships for Mines Albi. Above all, it aims to accelerate the dissemination of researchers’ contributions to the world of industry and users.

© Mines Albi

[/box]