
The worrying trajectory of energy consumption by digital technology

Fabrice Flipo, Institut Mines-Télécom Business School

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]I[/dropcap]n November, the General Council for the Economy, Industry, Energy and Technology (CGEIET) published a report on the energy consumption of digital technology in France. The study takes an inventory of the country’s digital equipment, lists its consumption and estimates the total volume.

The results are rather reassuring, at first glance. Compared to 2008, this new document notes that the energy consumption of digital technology seems to have stabilized in France, as have employment and added value in the sector.

The massive transformations currently in progress (growth in video uses, “digitization of the economy”, increased platform use, etc.) do not seem to be having an impact on the amount of energy expended. This observation can be explained by improvements in energy efficiency and the fact that the increase in consumption by smartphones and data centers has been offset by the decline in televisions and PCs. However, these optimistic conclusions warrant further consideration.

61 million smartphones in France

First, here are some figures given by the report to help understand the extent of digital equipment in France. The country has 61 million smartphones in use, 64 million computers, 42 million television sets, 6 million tablets and 30 million routers. Although these numbers are high, the authors of the report believe they have greatly underestimated the volume of professional equipment.

The report predicts that in the coming years the number of smartphones will grow (especially among the elderly), the number of PCs will decline, the number of tablets will stabilize and screen time will reach saturation (currently 41 hours per week).

Nevertheless, the report suggests that we should remain attentive, particularly with regard to new uses: 4K then 8K for video, cloud games via 5G, connected or driverless cars, the growing numbers of data centers in France, data storage, and more. A 10% increase in 4K video in 2030 alone would produce a 10% increase in the overall volume of electricity used by digital technology.

We believe that these reassuring conclusions must be tempered, to say the least, for three main reasons.

Energy efficiency is not forever

The first is energy efficiency. In 2011, the well-known energy specialist Jonathan Koomey established that the amount of computation performed per joule of energy doubles every 1.57 years.

But Koomey’s “law” is the result of observations spanning only a few decades – an eternity, on the scale of marketing. Meanwhile, the basic principle of digital technology has remained the same since the invention of the transistor (1947): using the movement of electrons to mechanize information processing. The main source of the reduction in consumption has been miniaturization.

Yet there is a physical minimum to the energy required to process a bit of information, known as “Landauer’s principle”. In practice, technology can only get close to this minimum. This means that energy efficiency gains will slow down and then stop: the closer technology comes to this limit, the more difficult progress becomes. In some ways, this brings us back to the law of diminishing returns that Ricardo established two centuries ago for the productivity of land.
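To give an order of magnitude – our illustration, not a figure from the report – Landauer’s limit sets the minimum energy needed to erase one bit of information at absolute temperature T. At room temperature (T = 300 K):

\[
E_{\min} = k_B T \ln 2 \approx \left(1.38 \times 10^{-23}\ \mathrm{J/K}\right) \times 300\ \mathrm{K} \times 0.693 \approx 2.9 \times 10^{-21}\ \mathrm{J\ per\ bit}
\]

Today’s transistors still dissipate several orders of magnitude more than this per operation, which is why efficiency gains can continue for some time before hitting the wall.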

The only way to overcome the barrier would be to change the technological paradigm: to deploy the quantum computer on a large scale, as its computing power is independent of its energy consumption. But this would represent a massive leap and would take decades, if it were to happen at all.

Exponential data growth

The second reason why the report’s findings should be put into perspective is the growth in traffic and computing power required.

According to the American IT company Cisco, traffic is currently increasing tenfold every ten years. If we follow this “law”, it will be multiplied by 1,000 within 30 years. Such data rates are currently impossible to carry: the existing 4G and copper infrastructure cannot handle them. 5G and fiber optics would make such growth possible – hence the current debates.
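Spelled out – the arithmetic is ours – the extrapolation is simple compound growth: if traffic is multiplied by 10 every decade, then over 30 years

\[
T(t) = T_0 \times 10^{t/10\ \text{years}}, \qquad T(30\ \text{years}) = T_0 \times 10^{3} = 1000\,T_0 .
\]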

Watching a video on a smartphone requires digital machines – phones and data centers – to execute instructions that activate the pixels on the screen, generating and changing the image. Digital uses thus demand computing power, that is, a number of instructions executed by machines. This computing power has no obvious connection with traffic: a simple SMS can just as easily light up a few pixels on an old Nokia as on a supercomputer, although the power consumed will of course not be the same.

In a document published a few years ago, the semiconductor industry identified another “law”: steady growth in the amount of computing power required on a global scale. The study showed that, at this rate, by 2040 digital technology would require the total amount of energy produced worldwide in 2010.

This result applies to systems with the average performance profile of 2015, when the document was written. The study also considers the hypothesis of a global stock of equipment with an energy efficiency 1,000 times higher: the deadline would only be pushed back by 10 years, to 2050 – in other words, a thousandfold efficiency gain would be absorbed by demand growth in about a decade. And if all equipment reached the limit of “Landauer’s principle”, which is impossible, then by 2070 all of the world’s energy (at its 2010 level) would be consumed by digital technology.
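One way to see why – our reading of the report’s figures, not a calculation it presents – is that a fixed efficiency gain only buys time against exponential demand. If demand for computation grows a thousandfold per decade,

\[
D(t) = D_0 \times 10^{3t/10}, \qquad E(t) = \frac{D(t)}{\eta},
\]

then replacing the efficiency \( \eta \) with \( 10^{3}\eta \) divides energy use \( E \) by \( 10^{3} \) – exactly what demand growth restores once \( 10^{3t/10} = 10^{3} \), i.e. after \( t = 10 \) years.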

Digitization without limits

What the report does not say is that energy-intensive uses are not limited to the practices of a few isolated consumers. They also involve colossal industrial investments, justified by the desire to exploit the incredible “immaterial” virtues of digital technology.

All sectors are passionate about AI. The future of the car seems destined to be autonomous. Microsoft is predicting a market of 7 billion online players. E-sport is growing. Industry 4.0 and the Internet of Things (IoT) are presented as irreversible developments. Big data is the oil of tomorrow, and so on.

Now, let us look at some figures. Strubell, Ganesh and McCallum have shown that training a common neural network used for natural language processing emitted 350 tons of CO₂, the equivalent of 300 round trips between New York and San Francisco. In 2016, Intel announced that the autonomous car would generate 4 petabytes of data per day; given that in 2020 a person generates or transmits about 2 GB per day, that is 2 million times more. The figure announced in 2020 is instead 1 to 2 TB per hour, or 5,000 times individual traffic.
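A quick unit check – ours, not the report’s – confirms the first ratio:

\[
\frac{4\ \mathrm{PB/day}}{2\ \mathrm{GB/day}} = \frac{4 \times 10^{15}\ \mathrm{B}}{2 \times 10^{9}\ \mathrm{B}} = 2 \times 10^{6},
\]

i.e. the factor of 2 million quoted above.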

A surveillance camera records 8 to 15 frames per second. If each image is 4 MB, this means up to 60 MB/s without compression, or some 200 GB per hour – far from negligible in the digital energy ecosystem. The IEA EDNA report highlights this risk. “Volumetric video”, based on 5K cameras, generates a flow of 1 TB every 10 seconds. Intel believes this format is “the future of Hollywood”!
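The camera figures follow from straightforward arithmetic (again ours), taking the upper bound of 15 frames per second:

\[
15\ \mathrm{frames/s} \times 4\ \mathrm{MB/frame} = 60\ \mathrm{MB/s}, \qquad 60\ \mathrm{MB/s} \times 3600\ \mathrm{s/h} = 216\ \mathrm{GB/h} \approx 200\ \mathrm{GB/h}.
\]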

In California, online gaming already consumes more electricity than electric water heaters, washing machines, dishwashers, clothes dryers or electric stoves.

Rising emissions in all sectors

All this for what, exactly? This brings us to the third point. How does digital technology contribute to sustainable development? To reducing greenhouse gas emissions? To saving soils, biodiversity, etc.?

In 2008, the Smart 2020 report promised that digital technology would deliver a 20% reduction in greenhouse gas emissions by 2020. In 2020, we can see that this has not happened. The ICT sector is responsible for 3% of global greenhouse gas emissions, which is more or less what the report predicted. But in the other sectors, nothing has happened: while digital technology has spread widely, emissions are also on the rise.

And yet the techniques put forward have spread: “intelligent” motors have become more popular, and the logistics sector relies heavily on digital technology – and soon on artificial intelligence – not to mention the widespread use of videoconferencing, e-commerce and navigation software in transport. Energy networks are controlled electronically. But the reductions have not happened. On the contrary, no “decoupling” of environmental pressure from economic growth is in sight, neither for greenhouse gases nor for other parameters such as the consumption of materials. The OECD predicts that material consumption will almost triple by 2060.

Rebound effect

The culprit, says the Smart 2020 report, is the “rebound effect”. This is based on the “Jevons paradox” (1865), which states that any progress in energy efficiency results in increased consumption.

A curious paradox. The different forms of “rebound effect” (systemic, etc.) are reminiscent of something we already know: they can take the form of productivity gains, as found for example in Schumpeter or even Adam Smith (1776).

A little-known article also shows that in the context of neoclassical analysis, which assumes that agents seek to maximize their gains, the paradox becomes a rule: any efficiency gain that is coupled with an economic gain always translates into growth in consumption. And the efficiency gains mentioned so far (“Koomey’s law”, etc.) generally have this property.

A report from General Electric illustrates the difficulty very well. The company is pleased that the use of smart grids allows it to reduce CO₂ emissions and save money; reducing greenhouse gases is therefore profitable. But what will the company do with those savings? It makes no mention of this. Will it reinvest them in consuming more? Or will it focus on other priorities? There is no indication. The document shows that the company’s general priorities remain unchanged: it is still a question of “satisfying needs”, which are obviously set to increase.

Digital technology threatens the planet and its inhabitants

Deploying 5G without questioning and regulating its uses will therefore open the way to all these harmful applications. The digital economy may prove catastrophic for climate and biodiversity, instead of saving them. Are we going to witness the largest and most closely monitored collapse of all time? Elon Musk talks about taking refuge on Mars, and the richest are buying well-defended properties in the areas that will be least affected by the global disaster. Because global warming threatens agriculture, the choice may have to be made between eating and surfing. Those who have captured value through digital networks will be tempted to use them to escape their responsibilities.

What should be done? Without doubt, exactly the opposite of what the industry is planning: banning 8K or, failing that, discouraging its use; reserving AI for restricted uses with a strong social or environmental utility; drastically limiting the computing power devoted to e-sport; not deploying 5G on a large scale; ensuring a restricted and resilient digital infrastructure with universal access, one that preserves low-tech uses consuming little computing power and bandwidth; favoring mechanical systems, or providing for digital technology that can be disengaged so that “backup techniques” are not rendered inoperable; becoming aware of what is at stake; waking up.

[divider style=”dotted” top=”20″ bottom=”20″]

This article benefited from discussions with Hugues Ferreboeuf, coordinator of the digital component of the Shift Project.

Fabrice Flipo, Professor of social and political philosophy, epistemology and history of science and technology at Institut Mines-Télécom Business School

This article was republished from The Conversation under the Creative Commons license. Read the original article here.

[box type=”info” align=”” class=”” width=””]

This article has been published in the framework of the 2020 watch cycle of the Fondation Mines-Télécom, dedicated to sustainable digital technology and the impact of digital technology on the environment. Through a monitoring notebook, conferences and debates, and science promotion in coordination with IMT, this cycle examines the uncertainties and issues that weigh on the digital and environmental transitions.

Find out more on the Fondation Mines-Télécom website

[/box]

Being Human with algorithms: an early adopter perspective

Marc-Oliver Pahl is a researcher in cybersecurity at IMT Atlantique. In 2018, he launched “Being human with algorithms”, a series of video interviews between technicians and non-technicians on the topic of digital transformation. Through open discussions and dialogues, he depicts how digital technologies are perceived and how they affect humans as citizens, consumers and workers.

In this episode, Marc-Oliver meets Emmanuel Bricard, Chief Information Officer at elm.leblanc and an early adopter of information technology. In this video, Emmanuel Bricard shares insights not only from his job but also from his broad experience, including his time in Africa and elsewhere.


Wikipedia in the time of the Covid-19 crisis

Wikipedia provides freely reusable, objective and verifiable content that every citizen may modify and improve. It is difficult to achieve this aim when it comes to providing real-time information about a crisis marked by uncertainty, as is the case with the current Covid-19 epidemic. At Télécom Paris, Caroline Rizza, a researcher in information sciences, and Sandrine Bubendorff, a sociologist, have explored how crises are approached on Wikipedia – a topic they have been studying for three years through the ANR MACIV project. In this joint interview, they explain the motivations for producing content about the health crisis by comparing it to shorter-term crises, such as the attacks of 13 November 2015 in Paris.

 

Why study Wikipedia in crisis situations?

Sandrine Bubendorff: In our research, we examine how information spreads in crisis situations. Wikipedia is a place where articles are created very quickly when an event occurs, with one goal: summarizing the available information. So we look at how this summary is constructed. This allows us to analyze the way information is spread and aggregated about an event as it unfolds. For instance, how do contributors deliberate in real time about what constitutes a reliable source? The idea is to understand how a Wikipedia page can help make sense of events.

On Wikipedia, is the current health crisis approached like any other crisis?

Caroline Rizza: Our research on Wikipedia began by studying what are referred to as civil security crises. We focused in particular on the attacks of 13 November 2015. We studied the dynamics of creating pages, debates between contributors about what makes a source reliable or unreliable, strategies for building tree structures to best present the information, etc. A defining characteristic of civil security crises is that they take place over a short period of time. Whether it’s an attack, flooding or an industrial accident, the crisis is usually short-lived: whether “anticipated” or unexpected, it intensifies and peaks rapidly, with a “return to normalcy” in the hours or days that follow. The time frame of the current health crisis is different. The crisis intensifies, then levels off and plateaus without any real way of predicting when things will return to normal. As a result of this plateauing, a variety of mechanisms come into play on Wikipedia, some of which are unique to crisis situations, and others that are closer to the questions raised for more common topics.

Read more on I’MTech: Social media and crisis situations: learning how to control the situation

What are the typical mechanisms for producing knowledge about crises on Wikipedia?  

SB: The first characteristic is that a page is created for the event very quickly with a basic summary of information. For the attacks of 13 November for example, the first entry on the page was “Shooting in the 11th arrondissement, live coverage on BFM TV”. Less than five minutes later, it was corrected by another person to change the term “shooting” and remove the reference to BFM TV. In situations like this, we observe that information is constructed very quickly and in an iterative way. This can also be seen in the pages about the Covid-19 epidemic.

There is also a flurry of activity to create secondary pages on Wikipedia: the ones that allow members of the community to interact with one another. The initial discussions are usually about whether or not it is appropriate to create a page dedicated to the topic: are the events significant enough to appear in an encyclopedia? Such debates shape the architecture of pages about events: does a given sub-event deserve a dedicated page? In the case of the Covid-19 crisis, for example, we’re seeing debates about whether each country should have its own page for the situation, or whether all the information should instead be centralized on a single page. These debates are interesting since they give us the opportunity to understand how events are organized hierarchically. And, as always on Wikipedia, they reflect the desire to produce encyclopedic, lasting knowledge.

Why are there debates about whether or not to create pages about certain topics?

CR: What’s unique about crisis topics on Wikipedia is that there is an influx of new contributors to provide information about the topic, probably due to the unfolding nature of the crisis. This means that the community of editors will not be composed solely of regular editors, and that there will be opposing viewpoints. New contributors are likely to produce content very quickly and provide information about a topic using a journalistic approach. The more regular contributors we interviewed reported that they have a specific approach for this type of article, which is to not intervene immediately. They wait a day or a few days, until the contributions level off so that they can thoroughly revise the article in terms of both content and form. So they use more of an encyclopedic approach. They think about presenting the event from a knowledge perspective: the information must stand the test of time. 

What are the different ways in which Wikipedia contributors approach topics?

SB: For the regular contributors we’ve met, the urgent need is not to report information, but rather to provide a synthesis of information. In general, Wikipedia only deals with secondary sources. The information must exist elsewhere in order to be cited on Wikipedia, and there must be a consensus about its sources. The content presented must therefore be verified and credible, which calls for a different time frame than putting information from “primary sources” online. For pages dedicated to crisis situations, regular contributors generally only intervene after content produced at the same time the crisis is unfolding has leveled off. They edit the information, eliminating side notes and keeping only a summary of important information, by restructuring pages, improving sources etc.

As an example of this issue, there was a debate about the best source to use for figures on the numbers of Covid-19 cases and deaths. Such a discussion exemplifies the contrast between approaches that can be described as encyclopedic or journalistic. Each country updates and reports its data on a daily basis, whereas the WHO updates its figures with a 24- to 48-hour delay, since it needs time to aggregate the data. So on Wikipedia, there was a debate about which data to take into account. Ultimately, the consensus reached between contributors has so far weighed in favor of the WHO figures, even though they come with a delay.

At the top of the French Wikipedia page about the Covid-19 epidemic, two banners are displayed to warn readers about the possible lack of information. The first is specific to the current health crisis, while the second is displayed on all pages concerning current events.

 

Are there unique aspects as to how the health crisis is being approached on Wikipedia?

SB: There are two differences between the Covid-19 crisis and civil security crises: its time frame and its scientific dimension. Because it is taking place over a long period of time, the debates also continue over time. Although the crisis began several months ago, the pages still display specific banners warning readers that the information may be inaccurate or obsolete, and that it relates to current events. These banners are very interesting because they attest to the construction of encyclopedic knowledge in real time. Behind-the-scenes debates are playing out between contributors to determine which scientific sources to take into account: should only published scientific articles be discussed? Should articles that are currently being peer-reviewed be incorporated? All these questions illustrate how the community produces information that will stand the test of time. And ultimately they are similar to the debates we observe between Wikipedia contributors on “non-crisis” articles. Scientific articles and publications are sources that Wikipedians are accustomed to using and discussing.

Why do so many people readily volunteer to provide such encyclopedic information?

CR: One of the ideas that has emerged in our research on crises is that citizens contribute in order to make sense of what is happening, and to fill an information vacuum. To be more specific, like other researchers, we have noted that any crisis comes with its share of uncertainty. This uncertainty results in anxiety among citizens. Faced with such uncertainty and the anxiety it causes, citizens attempt to “solve” these questions. They try to understand what’s happening and what should be done, and even what they should or could do. They come together, communicate using their smartphones and organize groups on social media. This is exactly what we have demonstrated with Wikipedia: in the absence of effective communication about the crisis, citizens are plunged into uncertainty, which they try to resolve by coproducing encyclopedic informational content. Working as a community — debating information sources and quality — therefore allows them to find some small degree of certainty amidst an overwhelming amount of sometimes contradictory information that is being spread, and ultimately, to fill an information vacuum that leaves the door open to rumors or misinformation. We’re seeing this today and there’s a good reason why the Covid-19 pandemic has been coupled with the idea of an “infodemic.” 

Is Wikipedia the only platform that makes this possible?

CR: No, we talk about Wikipedia but this is also something we’ve seen in our research on Twitter. Ultimately, what we’ve demonstrated about this topic through the ANR MACIV project we’re working on is that despite the main purposes of each of these media — to construct encyclopedic knowledge on Wikipedia and to disseminate information on Twitter — citizens are very attentive to the truthfulness of information. They develop intrinsic verification mechanisms: in the comments on posts in Twitter feeds for example, and on the talk page of a Wikipedia article. As such, we have suggested that Wikipedia represents a “digital social network” during a crisis, in the same way as Twitter or Facebook, in light of the extensive activity on the talk page about the crisis.

Interview by Benjamin Vignard, for I’MTech.

[divider style=”normal” top=”20″ bottom=”20″]

MACIV: social media in crisis situations

Caroline Rizza and Sandrine Bubendorff are researchers at the Interdisciplinary Institute of Innovation, a joint research unit (UMR 9217) between Télécom Paris/CNRS/École Polytechnique/Mines ParisTech. They are working on the MACIV project (MAnagement of CItizens and Volunteers: social media in crisis situation), for which Caroline Rizza is the scientific coordinator. The project is funded by the ANR via the French General Secretariat for Defense and National Security. Since 2018, the MACIV project has been examining how information is created and spread on social media in crisis situations. An exploratory study carried out prior to this project and funded by the Defense and Security Zone of the Paris Prefecture of Police as part of the Euridice consortium led by the Laboratory of Technology, Territories and Societies (LATTS), made it possible to initiate this research. MACIV project partners include I3-Télécom Paris, IMT Mines Albi, LATTS, the French Directorate General for Civil Security and Crisis Management, VISOV, NDC, and the Defense and Security Zone of the Paris Prefecture of Police.

[divider style=”normal” top=”20″ bottom=”20″]


5G-Victori: large-scale tests for vertical industries

Twenty-five European partners have joined together in the three-year 5G-Victori project launched in June 2019. They are conducting large-scale trials for advanced use case validation in commercially relevant 5G environments. Navid Nikaein, researcher at EURECOM, key partner of the 5G-Victori project, details the challenges here.

 

What was the context for developing the European 5G-Victori project?

Navid Nikaein: 5G-Victori stands for VertIcal demos over Common large scale field Trials fOr Rail, energy and media Industries. This H2020 project is funded by the European Commission as part of the 3rd phase of the 5GPPP projects (5G Infrastructure Public Private Partnership). This phase aims to validate use cases for vertical industry applications in realistic and commercially relevant 5G test environments. 5G-Victori focuses on use cases in Transportation, Energy, Media and Factories of the Future.

What is the aim of this project?

NN: The aim is threefold. First, to integrate the different 5G operational environments required to demonstrate the large variety of 5G-Victori vertical and cross-vertical use cases. Second, to test the four main use cases – Transportation, Energy, Media and Factories of the Future – on 5G platforms located in Sophia Antipolis (France), Athens (Greece), and Espoo and Oulu (Finland), allowing partners to validate their 5G use cases in view of a wider roll-out of services. Third, to transform the current closed, purpose-built and dedicated infrastructures into open environments where resources and functions are exposed to the telecom and vertical industries through common repositories.

What technological and scientific challenges do you face?

NN: A number of challenges have been identified for each use case; they will be tackled over the course of the project in their relevant 5G environments (see figure below).

In the Transportation use case, we validate the sustainability of critical services, such as collision avoidance, and of enhanced mobile broadband applications, such as 4K video streaming, under high-speed mobility in railway environments. The main challenges considered in 5G-Victori are (a) the interconnection of on-board devices with the trackside, and of the trackside with the edge and/or core network (see figure below), and (b) the guaranteed delivery of railway-related critical data and signaling services addressing on-board and trackside elements using a common software-based platform.

In the Energy use case, the main challenge is to facilitate smart energy metering, fault detection and preventive maintenance by taking advantage of low-latency signal exchange between the substations and the control center over 5G networks. Both high-voltage and low-voltage energy operations are considered.

In the Media use case, the main challenge is to enable diverse content delivery networks capable of providing services in dense, static and mobile environments – in particular, 4K video streaming service continuity in mobile scenarios under 5G network coverage, and the bulk transfer of large volumes of content for the disconnected operation of personalized Video on Demand (VoD) services.

In the Factories of the Future use case, the main challenge is the design and development of a fully automated Digital Utility Management system over a 5G network, demonstrating advanced monitoring solutions. Such a solution must be able to (1) track all operations, (2) detect equipment faults and (3) support the decision-making processes of first responders based on the collected data.

How are EURECOM researchers contributing to this project?

NN: EURECOM is one of the key partners in this project, as it provides its operational 5G testing facilities based on the OpenAirInterface (OAI) and Mosaic5G platforms. The facility provides Software-Defined Networking (SDN), Network Function Virtualization (NFV) and Multi-access Edge Computing (MEC) solutions for 5G networks. In addition, EURECOM will design and develop a complete 5G network slicing solution that will be used to deploy a virtualized 5G network tailored to the above-mentioned use cases. Finally, EURECOM will pre-validate a subset of the scenarios considered in the project.

Also read on I’MTech: SDN and virtualization: more intelligence in 5G networks

Who are your partners and what are your collaborations?

NN: The project counts 25 European partners, represented in the figure below: SMEs, network operators, vendors, academia… EURECOM plays a key role in the project in that it provides (a) 5G technologies through the OpenAirInterface and Mosaic5G platforms to a subset of partners, and (b) 5G deployment and testing facilities located on its Sophia Antipolis campus.

What are the expected benefits of the project?

NN: In addition to the scientific benefits in terms of publications, the project will support the continuous development and maintenance of the OpenAirInterface and Mosaic5G software platforms. It will also allow us to validate whether a 5G network can deliver the considered use cases with the expected performance. We also plan to leverage our results by providing feedback, where possible, to standardization bodies such as 3GPP and O-RAN.

What are the next important steps for the project?

NN: In the first year, the project focused on refining the 5G architecture and software platforms to enable efficient execution of the considered use cases. In the 2nd year, the project will focus on deploying the use cases on the target 5G testing facilities provided by 5G-EVE, 5G-VINNI, 5GENESIS, and 5G-UK.

Learn more about the 5G-Victori project

Interview by Véronique Charlet for I’MTech


What is tribology?

The science of friction: this is the definition of tribology. Tribology is a focal point shared by several disciplines and an important field of study for industrial production. Far from trivial, friction is a particularly complex phenomenon. Christine Boher, a tribologist at IMT Mines Albi[1], introduces us to this subject.

 

What does tribology study?

Christine Boher: The purpose of tribology is to understand what happens at the contact surface of two materials when they rub together, or are in “relative movement” as we call it. Everyone is familiar with friction: rubbing your hands together to warm up, playing the guitar, skiing, braking, oiling machines – all involve friction. Friction induces forces that oppose the movement, resulting in damage. We try to understand these forces by studying how they manifest themselves and what consequences they have on the behavior of materials. Tribology is therefore the science of friction, wear and lubrication. The phenomenon of wear and tear may seem terribly banal, but when you look more closely, you realize how complex it is!

What scientific expertise is used in this discipline?

CB: It is a “multiscience”, because it involves many disciplines. A tribologist can be a researcher specializing in solid mechanics, fluid mechanics, materials, vibratory behavior, etc. Tribology is the conjunction of all these disciplinary fields, and this is what makes it so complex. Personally, I specialize in material sciences.

Why is friction so interesting?

CB: You first need to understand the role of friction in contact. Although it sounds intuitive, when two materials rub together many phenomena occur: the surface temperature increases, the mechanical behavior of both parts changes, and wear particles are created, which in turn affect the load and the sliding speed. As a result, material properties arise that would not exist without friction. Tribology focuses on both the macrometric and micrometric aspects of the surfaces in contact.

How is the behavior of a material changed by friction?

CB: Take for example the wear particles generated during friction. As they are generated, they can decrease the frictional resistance between the two bodies. They then act as a solid lubricant, and in most cases they have a rather useful, desirable effect. However, these particles can damage the materials if they are too hard. If this is the case, they will accelerate the wear. Tribologists therefore try to model how, during friction, these particles are generated and under what conditions they are produced in optimal quantities.

Another illustration is the temperature increase of parts. In some cases of high-speed friction, the temperature of the materials can rise from 20°C to 700°C in just a few minutes. The mechanical properties of the material are then completely different.

Could you illustrate an application of tribology?

CB: Take the example of a rolling mill, a large tool designed to produce sheet metal by successive reductions in thickness. There is a saying in the discipline: “no friction, no rolling”. If problems arise during friction – that is, if there are problems of contact between the surface of the sheets and that of the rolls – the sheets will be damaged. For the automotive industry, this means body sheets damaged during the production phase, compromising surface integrity. To avoid this, we work in collaboration with the relevant manufacturers either on new metallurgical alloys or on new coatings to be applied to the rolls. The purpose of the coating is to protect the material from wear and to slow down the damage to the working surfaces as much as possible.

Who are the main beneficiaries of tribology research?

CB: We work with manufacturers in the shaping industry, such as ArcelorMittal, or Aubert & Duval. We also have partnerships with companies in the aeronautics sector, such as Ratier Figeac. Generally, we are called in by major groups or subcontractors of major industrial groups because they are interested in increasing their speeds, and this is where friction-related performance becomes important.

 

[1] Christine Boher is a researcher at the Institut Clément Ader, a joint research unit
of IMT Mines Albi/CNRS/INSA Toulouse/ISAE Supaero/University Toulouse III Paul Sabatier/Federal University Toulouse Midi-Pyrénées.


The world’s oldest building material is also the most environmentally friendly

The original version of this article was published on The Conversation
By Abdelhak Maachi and Rodolphe Sonnier, IMT Mines Alès.

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]D[/dropcap]espite the recommendations of IPCC experts, who in 2018 called for greenhouse gas emissions to be reduced by 40 to 70% by 2050 in an attempt to limit the impacts of the climate crisis, global CO₂ output increased by 0.6% in 2019.

Among the efforts made towards reducing the CO2 emitted through human activity, earth, the world’s most widespread raw material, has an essential role to play. Let’s not forget that the building sector alone generates nearly 40% of annual greenhouse gas emissions.

An ancient and global history

11,000 years ago, Homo sapiens was already building with earth in the region of present-day Syria. This age-old eco-material is still one of the world’s main building materials today: it is estimated to account for more than a third of housing in the countries of the Global South.

Examples of earthen architecture, from the most modest to the most monumental, can be found on all continents and in all climates. 175 sites, wholly or partially built using this material, are classified by UNESCO as World Heritage Sites, highlighting the durability of this construction method.

 

In orange, the regions where earth is used in construction. The dots indicate the main architectural sites on the UNESCO World Heritage List. Source: Craterre.

 

Examples include the Grand Mosque of Djenné in Mali, built in 1907. It remains one of the largest mud brick buildings in the world and is one of the emblems of the country’s culture. In China, the Great Wall has sections several kilometers long built from earth where stone was not available locally. Also worthy of note is the 16th-century town of Shibam in Yemen, the world’s first dense, vertical town with high-rise buildings about 30 meters high, built entirely of molded mud bricks (called “adobes”). Due to the ongoing civil war in the country, the city is now on the UNESCO World Heritage in Danger list.

In Morocco, the four imperial cities – Fez, Marrakesh, Meknes and Rabat – are also classified as World Heritage Sites because of their traditional medinas built of adobe and rammed earth (pisé). The country also features prodigious fortresses made from ochre earth, called ksars and kasbahs. The Ksar of Ait-Ben-Haddou is an emblematic example of the traditional Amazigh architecture of southern Morocco.


Shibam (Yemen) and its brick towers. Source: Kebnekaise/Flickr, CC BY-NC-SA

 


In Marrakesh (Morocco). Source: Luca Di Ciaccio/Flickr, CC BY-NC-SA

 

In Europe, earthen constructions are not restricted to rural environments. In Granada, the dazzling Alhambra palace (meaning “red” in Arabic, in reference to the color of the earth) was largely built from rammed earth in the 13th century, in particular its ramparts.

France is one of the few countries featuring earthen heritage built using all four main traditional techniques – rammed earth (pisé), adobe, torchis cob and bauge cob – and where most earthen buildings date back more than a century. Lyon is home to some remarkable examples of earthen architecture: in the Croix-Rousse neighborhood, people have been living in four- and five-floor rammed-earth buildings since the 1800s.


The defense towers of Alhambra in Granada (Spain). Source: Moli Sta Elena/Flickr, CC BY-NC-SA

Building with what is under our feet

The earth is made up of minerals, organic matter, water and air. The minerals, composed mainly of silicates – quartz, clays, feldspars and micas – and carbonates, are the result of the physical and chemical alteration of a source rock. The earth used for building (the raw material), which is essentially mineral, is easily extracted from the ground beneath the layer of topsoil that is rich in organic matter (humus) and used for growing plants.

After extraction using rudimentary or more elaborate tools, the earth (consisting of clay, silt, sand and possibly gravel and pebbles) is transformed into building material using traditional or more contemporary methods. The methods can be grouped into four main categories.

  • Compacted earth, not saturated with water: to make rammed earth walls (pisé) and blocks of compressed earth.
  • Stacked or molded earth: to make bauge cob walls and adobes.
  • Earth mixed when wet with plant fibers: to make torchis cob (to fill a wooden framework), straw earth, hemp earth, etc.
  • Earth poured in a liquid state into formwork, like fluid or self-consolidating concrete.

Life cycle of earthen building materials: extraction, construction, use, demolition and recycling. Source: Arnaud Misse.


Rammed earth (pisé): earth containing little moisture is compacted with the help of a tamper into wooden formwork. Source: Arnaud Misse.


Blocks of compressed earth: earth containing little moisture is compressed in molds using a press. Source: Arnaud Misse.


Bauge cob: malleable earth is piled up to form a wall. Source: Arnaud Misse.


Adobe: earth is molded while it is malleable and dried in the open air. Source: Arnaud Misse.


Torchis cob: earth is mixed with straw, then used to cover a wooden frame. Source: Arnaud Misse.

Easy to work with, healthy and environmentally friendly

There are many advantages to using earth: it is a natural material, abundantly and locally available (transport needs are often nil), with low embodied energy (the energy consumed throughout a material’s life cycle), and it is infinitely recyclable. It is raw and diversified, offering a variety of granularities, natural colors and lively textures, for a minimalist aesthetic.

Earth also provides natural hygrothermal comfort, optimal acoustics and a healthy indoor atmosphere: it regulates humidity, and solid earthen walls provide good thermal inertia and sound insulation. It emits no VOCs (volatile organic compounds) and absorbs odors. These virtues have been understood empirically for thousands of years, and have now been confirmed scientifically.

Unlike industrialized and globalized materials, earth is easy to work with and poses no health risk. It thus helps to promote participatory building sites and self-built constructions (especially for the most disadvantaged), to value the diversity of construction cultures and to stimulate local development.

Earthen construction also contributes to the recovery of the earth excavated in large cities, which is otherwise treated as waste. While the Greater Paris Express construction site will generate 40 million tons of earth by 2030, the “Cycle Terre” project aims to transform part of this “waste” into eco-materials for construction, within a circular economy.

These eco-responsible advantages make earth a building material of the future, an alternative to energy-intensive and polluting building materials such as fired brick or cement (nearly 7% of global CO2 emissions), and a solution to be promoted in the building industry to respond to the global housing crisis (affecting one billion people) and the climate emergency, as hoped by the signatories of the “Manifesto for a Happy Frugality”.


Orders of magnitude of embodied energy of different building materials. Source: Abdelhak Maachi.


Earth offers natural hygrothermal comfort. Source: Antonin Fabbri.

Limitations to be overcome

But earth also has its limits. The main problem is its sensitivity to water. To overcome this, mud walls are traditionally protected, especially in rainy climates, with a base (made of stone, for example) to prevent moisture from rising through capillary action, and a roof overhang to protect against erosion by rain.

A small amount of cement is sometimes added to limit sensitivity to water and to increase, albeit modestly, mechanical properties. However, this “stabilization” technique is open to criticism because it impacts the ecological advantages and penalizes the life cycle of the material.


Distribution of architectural heritage made from earth. In orange: torchis cob, in brown: rammed earth (pisé), in yellow: bauge cob and in red: adobe. Source: Craterre.

Earth represents 15% of French buildings. However, the percentage of new buildings made of earth remains close to zero at the national level, although it is on the increase. The omnipresent reign of concrete, the hyper-industrialized context of construction, lobbying, inappropriate regulations, unfavorable prejudices (earth is often seen as a primitive material for poor countries!), and the lack of knowledge among decision-makers, engineers and project owners all help explain the marginalization and ostracism from which this material suffers.

To overcome these barriers, earthen construction requires specific regulations appropriate to its implementation and maintenance, as well as adapted tests that take into account its specificities and complexity in order to evaluate its physical properties and durability. The development of earthen construction also requires scientific research, education, appropriate training of future designers and builders, and promotion.

But earth, along with other ecomaterials such as wood, stone and bio-based insulators (such as hemp and straw), should undoubtedly contribute to building the resilient and autonomous city of tomorrow.


Construction of Terra Janna in Marrakesh (Morocco): an earthen construction training center (Centre de la Terre) and a guest house built entirely using the adobe technique. This is an example of contemporary environmentally responsible earthen architecture. Source: Denis Coquard.

[divider style=”dotted” top=”20″ bottom=”20″]

Arnaud Misse (CRAterre, École nationale supérieure d’architecture de Grenoble), Laurent Aprin, Marie Salgues, Stéphane Corn and Éric Garcia-Diaz (IMT Mines Alès), and Philippe Devillers (École nationale supérieure d’architecture de Montpellier) are co-authors of this article.

Abdelhak Maachi, doctoral student in materials science, Mines Alès – Institut Mines-Télécom, and Rodolphe Sonnier, assistant professor at the École des Mines, Mines Alès – Institut Mines-Télécom.

This article was republished from The Conversation under the Creative Commons license. Read the original article here (in French).

 

 


Digital technologies: three major categories of risks

Lamiae Benhayoun, Institut Mines-Télécom Business School and Imed Boughzala, Institut Mines-Télécom Business School

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]T[/dropcap]o respond to an environment of technological disruption, companies are increasingly taking steps toward digital transformation, or digitalization, with spending on such efforts expected to reach USD 2 trillion in 2022.

Digitalization reflects a profound, intentional restructuring of companies’ capabilities, resources and value-creating activities in order to benefit from the advantages of digital technology. This transformation, driven by the advent of SMAC technologies (social, mobile, analytics, cloud), has intensified with the development of DARQ technologies (distributed ledger, artificial intelligence, extended reality, quantum computing), which are pushing companies toward a post-digital era.

DARQ New Emerging Technologies (Accenture, February 2019).

 

There are clear benefits to the use of these technologies – they help companies improve the user experience, streamline business processes and even revolutionize their business models. Being a digital-first business has become a prerequisite for survival in an ever-changing marketplace.

But the adoption of these digital technologies gives rise to risks that must be managed to ensure a successful digital transformation. As such, the TIM department (technology, information and management) at Institut Mines-Télécom Business School is conducting a series of research studies on digital transformation, which has brought to light three categories of risks related to the use of digital technologies.

This characterization of risks draws on a critical review of research studies on this topic over the past decade and on insights from a number of digital transformation professionals in sectors with varying degrees of technological intensity.

Risks related to data governance

Mobile digital technologies and social media lead to the generation of data without individuals’ knowledge. Collecting, sharing and analyzing this data represents a risk for companies, especially when medical, financial or other sensitive data is involved. To cite one example, a company in the building industry uses drones to inspect the facades of buildings.

Drones for Construction (Bouygues Construction, February 2015).

 

It has noted that these connected objects can be intrusive for citizens and present risks of non-compliance for the company in terms of data protection. In addition, our exploration of the healthcare industry found that this confidentiality problem may even hinder collaboration between care providers and developers of specialized technologies.

Furthermore, many companies are confronted with an overwhelming amount of data, due to poor management of generation channels and dissemination flow. This is especially a problem in the banking and insurance industry. A multinational leader in this sector has pointed out that cloud technology can be useful for managing this data mining cycle, but at the same time, it poses challenges in terms of data sovereignty.

Risks related to relationships with third parties

Digital technologies open companies up to other stakeholders (customers, suppliers, partners etc.), and therefore to more risks in terms of managing these relationships. A maritime logistics company, for example, said that its use of blockchain technology to establish smart contracts has led to a high degree of formality and rigidity in its customer-supplier relationships.

In addition, social media technologies must be used with caution, because they can lead to problems of overexposure as well as lack of visibility. This was the case for a company in the agri-food industry, which found itself facing viral social media sharing of bad customer reviews. And a fashion industry professional emphasized that mobile technologies present risks when it comes to establishing an effective customer relationship, because it is difficult for a company to stand out among mobile applications and e-commerce sites, which can even confuse customers and make them skeptical.

Furthermore, many companies in the telecommunications and banking-insurance sectors interviewed by the researchers are increasingly aware of the risks related to the advent of blockchain technology for their business models and their roles across the socio-economic landscape.

Risks related to managing digital technologies

The recent nature of digital technologies presents challenges for the majority of companies. Chief information officers (CIO) must master these technologies quickly to respond to fast-changing business needs, or they will find themselves facing shadow IT problems (systems implemented without approval from the CIO), as occurred at an academic institution we studied. In this case, several departments implemented readily available solutions to meet their transformation needs, which created a problem in terms of IT infrastructure governance.

It is not always easy to master digital technologies quickly, especially since the development of digital skills can be slowed down by a shortage of experts, as was the case for a company in the logistics sector. Developing these skills requires significant investments in time, effort and money, which may prove useless within a short time, given the quick pace at which digital technologies evolve and become obsolete. This risk is especially present in the military sector, where digital technologies are tailor-made and must ensure a minimum longevity to amortize the development costs, as well as in agriculture, given the great vulnerability of the connected objects used in the sector.

Furthermore, some management problems are associated with digital technologies in particular. We have identified the recurrent risk of loss of assets in cases where companies use a cloud provider, and the risk of irresponsible innovation following a banking-insurance firm’s adoption of artificial intelligence technology. Lastly, a number of professionals underscored the potential risks of mimicry and oversizing that may emerge with the imminent arrival of quantum technologies.

These three categories of risks highlight the issues specific to individual digital technologies, as well as the challenges arising from their interconnected nature and simultaneous use. It is crucial that those who work with such technologies are made aware of these risks and anticipate them in order to benefit from their investments in digital transformation. For this transformation goes beyond operational considerations: it presents opportunities, but also risks associated with change.

[divider style=”dotted” top=”20″ bottom=”20″]

Lamiae Benhayoun, Assistant Professor, Institut Mines-Télécom Business School, and Imed Boughzala, Dean of the Faculty and Professor, Institut Mines-Télécom Business School.

This article has been republished from The Conversation under a Creative Commons license. Read the original article (in French).


Twin ERC grants for research on the aorta

In 2015, the Mines Saint-Étienne engineering and health center was awarded two grants by the European Research Council (ERC). This funding went to two five-year projects on ruptured aortic aneurysms in the Sainbiose laboratory[1]: Pierre Badel received a €1.5 million Starting Grant (a young researcher grant), and Stéphane Avril received a €2 million Consolidator Grant (for putting together a research team). 2020 marks the end of their grants and the related research projects. On this occasion, I’MTech conducted a joint interview with the two researchers to discuss their results and the impact of these ERC grants on their work.

 

Your two ERC grants were awarded in 2014 and started in 2015, focusing on similar topics: the biomechanics of the aorta in the context of ruptured aneurysms. What were the particularities of each of your projects?

Pierre Badel: The starting point for my project, AArteMIS, was to better explain the resistance of the walls of the aorta. In 2014, we had just developed in vitro tests to study the mechanical strength of this artery. The ERC grant was used to add experiments on the microstructure – in concrete terms, developing protocols to pull on these materials and study their structural properties when the wall breaks.

Stéphane Avril: My Biolochanics project had a degree of overlap with AArteMIS. We had recovered aneurysm tissue from real patients through our partnership with Saint-Étienne University Hospital, and we wanted to characterize the mechanical stresses in these tissues in order to understand how an aneurysm develops and how it ruptures. The two projects were not designed to work together – it is not common to have two ERC grants in the same team. However, the evaluation committees for the Starting Grant and the Consolidator Grant applications are different, which meant that the two projects were judged independently and both ended up being funded. The connection between the two projects was made afterwards.


In vitro reproduction of an aortic dissection for the study of aneurysm rupture. Here, the image is made by X-ray tomography on a rabbit artery.

 

How did you adapt the research in each of the projects to what was being done in the other?

SA: When we learned that we had been awarded the two grants, I redesigned my project. I turned my focus towards the combined mechanical and biological aspects of the research. Rather than studying the mechanical reasons for aneurysm rupture and their relationship to artery wall structure – which the AArteMIS project was already doing – I focused on early aortic wall changes and their relationship to the environment. For example, we studied how blood flows through the aorta and how this affects the development of the aneurysm. We also launched a new protocol in the project to include patients with very small aneurysms. We are still monitoring these patients today, and this gives us a better understanding of how the pathology develops.

PB: For my part, I stayed fairly close to the planned program, i.e. the mechanical study of the material of the artery. The only difference from the original project is that we were able to look further into the structural aspect of wall rupture. We had the opportunity to use a new technique: X-ray tomography. This is like a CT scan, but suitable for very small samples. It allowed us to work on each of the layers that make up the wall of the aorta, which have different properties.

These two projects have gone on for five years and will come to an end in a few months. What are the key findings?

PB: For AArteMIS, we now have experimental evidence that even if we know the precise thickness of an aneurysm, we cannot predict where it will rupture. We are very proud of this result, because the usual assumption is that a material breaks at its thinnest point; our experiments show that this assumption does not hold here. This finding helps in the diagnosis of aneurysms by showing practitioners that there is more to assessing the risk of rupture than just looking at the thickness of the aortic wall.

What about the results of the Biolochanics project?

SA: There are two things I’m very happy about. The first is having finished a scientific article that took five years to write. It concerns the development of a method to reconstruct the elasticity map of blood vessels. It’s a very interesting technique, because no one before us had managed to produce an elasticity map of the aortic wall. We have filed a patent, and the technique could be used in pharmacological research. The second result is that we have developed a digital model to simulate the accelerated aging of the aorta according to biological parameters. This is a step towards the development of a digital twin of the aorta for patients.

The research conducted under Stéphane Avril’s ERC grant has led to the development of a digital model to simulate the development of an aneurysm (in red on the right) based on biological parameters of the initial artery (left).
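For readers who want a rough feel for what such a simulation involves, here is a deliberately toy sketch in Python. The growth law, the Euler integration and every parameter value below are our own simplifications for illustration; the actual Biolochanics model couples mechanics and biology in a far richer way.

```python
# Toy illustration only: a simple growth law for aneurysm diameter.
# The equation and every parameter value are invented for this article;
# the real model is driven by patient-specific biological parameters.
#
# d(t + dt) = d(t) + k * d(t) * dt   (exponential growth at rate k)

def simulate_growth(d0_mm, k_per_year, years, dt=0.1):
    """Integrate the toy growth law with a simple Euler scheme."""
    d = d0_mm
    t = 0.0
    while t < years:
        d += k_per_year * d * dt
        t += dt
    return d

# Example: a 30 mm aneurysm growing at 8% per year for 10 years.
print(f"diameter after 10 years: {simulate_growth(30.0, 0.08, 10):.1f} mm")
```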

 

Read on I’MTech: A digital twin of the aorta to prevent aneurysm rupture

An ERC grant provides major funding over five years. How do these funds help you to develop a research project in concrete terms?

PB: First of all, an ERC grant means that for a few years we don’t have to waste time looking for money. This is a great comfort for researchers, who constantly have to apply for funding to carry out their work. Specifically for my project, the grant allowed me to recruit three PhD students and three post-docs. A whole team was put together, which has given us greater research power. In our discipline, there are also many experiments involving expensive tools and equipment. The grant makes it possible to acquire state-of-the-art equipment and to set up the experiments we want to carry out.

SA: It’s similar for me: we were able to hire nine post-docs for Biolochanics. That’s a considerable size for a research team. The financial comfort also means that you can devote time to scientific renewal and collaborations. I have been able to spend one to two months each year at Yale University in the United States, where there is also a very good team specialized in the biomechanics of the aorta, led by Jay Humphrey.

Read on I’MTech: How Biomechanics can Impact Medicine – Interview with Jay Humphrey

How does being responsible for a project funded by an ERC grant affect your life as a researcher?

SA: There’s a lot of time spent managing and organizing. It’s demanding, but you can see the benefits for the laboratory directly. It is time well spent, and that is the main difference from having to spend time looking for funding, where the outcome is more uncertain. It also brings a lot of recognition for our work. As researchers, we are approached more often and receive invitations that probably would not have come without the ERC grant. In terms of international interaction, it makes a significant difference.

As you approach the end of the projects – the end of December for you Stéphane, and the end of October for you Pierre – how do you envisage the future of your research?

PB: Right now we’re at full throttle! We still have several scientific articles in progress. The project officially ends in the fall, so I’m slowly starting to look for funding again. For example, I have a local project that is about to start up on soft tissue rupture for abdominal wall repair, funded by the Rhône-Alpes Region, Lyon University Hospital, Insa Lyon, and Medtronic. But the next few months will still be very busy with the end of the AArteMIS project.

SA: During the ERC grant period, we have had little time to initiate and coordinate other projects. For the last five years, my approach has been to jump on trains without driving them: joining other academic partners to submit projects, but without being the leader. Recently, one such project was accepted for funding under a Marie Skłodowska-Curie Innovative Training Network action, European funding for the recruitment of cohorts of doctoral students. The laboratory will thus take part in supervising six theses on digital twins for aortic aneurysms, starting in the spring of 2020. In addition, I plan to take advantage of the end of this project to see what is being done elsewhere in my field of research. For one year, I will hold a position as a visiting professor at the Vienna University of Technology in Austria. It’s also important to give yourself time in your career to open up and build relationships with your peers.

 

[1] The Sainbiose laboratory is a joint research unit of Mines Saint-Étienne/Inserm/Jean Monnet University.

 

Interview by Benjamin Vignard, for I’MTech.

 

Subcultron

The artificial fish of the Venice lagoon

The European H2020 Subcultron project was completed in November 2019, having successfully deployed an autonomous fleet of underwater robots in the Venice lagoon. After four years of work, the research consortium, which includes IMT Atlantique, has demonstrated the feasibility of synchronizing a swarm of over one hundred autonomous units in a complex environment. This achievement was made possible by the use of robots equipped with a bio-inspired sixth sense known as an “electric sense.”

 

Curious marine species inhabited the Venice lagoon from April 2016 to November 2019. Nautical tourists and divers were able to observe strange transparent mussels measuring some forty centimeters, along with remarkable black lily pads drifting on the water’s surface. But amateur biologists would have been disappointed had they made the trip to observe them, since these strange plants and animals were actually artificial. They were robots submerged in the waters of Venice as part of the European H2020 Subcultron project. Drawing on electronics and biomimetics, the project’s aim was to deploy an underwater swarm of over 100 robots, which were able to coordinate autonomously with one another by adapting to the environment.

To achieve this objective, the scientists taking part in the project chose Venice as the site for carrying it out. “The Venice lagoon is a sensitive, complex environment,” says Frédéric Boyer, a robotics researcher at IMT Atlantique — a member of the Subcultron research consortium. “It has shallow, very irregular depths, interspersed with all sorts of obstacles. The water is naturally turbid. The physical quantities of the environment vary greatly: salinity, temperature etc.” In short, the perfect environment for putting the robots in a difficult position and testing their capacity for adaptation and coordination.

An ecosystem of marine robots

As a first step, the researchers deployed 130 artificial mussels in the lagoon. The mussels were actually electronic units encapsulated in watertight tubes. They were able to collect physical data about the environment but could not move, other than sinking and resurfacing. Their autonomy was ensured by an innovative charging system developed by one of the project partners: the Free University of Brussels. On the surface, the floating “lily pads”, powered by solar energy, were actually data processing bases. There was just one problem: the artificial mussels and lily pads could not communicate with one another. That’s where the notion of coordination, and a third kind of robot, came into play.

In the turbid waters of the Venice lagoon, artificial fish were responsible for transmitting environmental data from the bottom of the lagoon to the surface.

 

To send information from the bottom of the lagoon to the surface, the researchers deployed some fifty robotic fish. “They’re the size of a big sea bream and are driven by small propellers, so unlike the other robots, they can move,” explains Frédéric Boyer. Data therefore travels along a single path between the bottom of the lagoon and the surface: the mussels transmit information to the fish, which swim to the surface to deliver it to the lily pads, and then return to the mussels to start the process over again. And all of this takes place in a variable marine environment, in which the lily pads drift and the fish have to adapt.
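To make this three-tier relay easier to picture, here is a minimal, purely illustrative sketch in Python. The class names, data fields and numbers are invented for this article; the actual Subcultron software is of course far more sophisticated.

```python
# Illustrative sketch of the Subcultron three-tier data relay described
# above. All names and structures are hypothetical; this is not project code.

import random

class Mussel:
    """Static seafloor sensor: records data, cannot reach the surface."""
    def __init__(self, mussel_id):
        self.mussel_id = mussel_id

    def sense(self):
        # Stand-in for real measurements (temperature, salinity, oxygen...).
        return {"mussel": self.mussel_id, "oxygen": random.uniform(2.0, 9.0)}

class LilyPad:
    """Floating solar-powered base: aggregates data at the surface."""
    def __init__(self):
        self.received = []

    def collect(self, readings):
        self.received.extend(readings)

class Fish:
    """Mobile robot: the only link between the seafloor and the surface."""
    def __init__(self):
        self.buffer = []

    def visit_mussels(self, mussels):
        self.buffer = [m.sense() for m in mussels]

    def surface_and_deliver(self, lily_pad):
        lily_pad.collect(self.buffer)
        self.buffer = []

# One relay cycle: mussels -> fish -> lily pad, then the fish dives again.
mussels = [Mussel(i) for i in range(130)]
lily_pad = LilyPad()
fish = Fish()

for _ in range(3):  # repeat the cycle a few times
    fish.visit_mussels(mussels)
    fish.surface_and_deliver(lily_pad)

print(len(lily_pad.received), "readings delivered to the surface")
```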

Fish with a sixth sense

Developing this autonomous robot ecosystem was particularly difficult. “Current robots are developed with a specific goal, and are rarely intended to coordinate with other robots with different roles,” explains Frédéric Boyer. Developing the artificial fish, which played a crucial role, was therefore the biggest challenge of the project. The IMT Atlantique team contributed to these efforts by providing expertise on a bio-inspired sense: electric sense.

“It’s a sense found in certain fish that live in the waters of tropical forests,” says the researcher. “They have electrosensitive skin, which allows them to measure the distortions of electric fields produced by themselves or by others in their immediate environment: another fish passing nearby causes a variation that they can feel. This means that they can stalk their prey or detect predators in muddy water or at night.” The artificial fish of the turbid Venice lagoon were equipped with this electric sense.

This capacity made it possible for the fish to engage in organized, cooperative behaviors. Rather than each fish looking for the mussels and the lily pads on its own, they grouped together and travelled in schools. They were therefore better able to detect variations in the electric field, whether under the water or at the surface, and to align themselves in the right direction. “It’s a bit like a compass that aligns itself with the Earth’s magnetic field,” says Frédéric Boyer.
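As a rough illustration of why travelling in schools helps, here is a small sketch under assumptions of our own choosing (a single point-like field source and independent sensor noise): one robot’s gradient reading is noisy, but averaging the school’s readings gives a much steadier shared heading. Again, this is illustrative code, not project code.

```python
# Illustrative sketch: pooling noisy electric-field readings in a school.
# The field model, noise level and positions are all invented.

import math
import random

def field(x, y):
    """Hypothetical scalar field: a point-like perturbation at the origin."""
    return 1.0 / (1.0 + x * x + y * y)

def sensed_gradient(x, y, noise=0.2, eps=0.01):
    """Finite-difference gradient estimate, corrupted by sensor noise."""
    gx = (field(x + eps, y) - field(x - eps, y)) / (2 * eps)
    gy = (field(x, y + eps) - field(x, y - eps)) / (2 * eps)
    return (gx + random.gauss(0, noise), gy + random.gauss(0, noise))

# A school of 20 robots, each taking one noisy reading.
school = [(random.uniform(1.0, 2.0), random.uniform(1.0, 2.0)) for _ in range(20)]
readings = [sensed_gradient(x, y) for x, y in school]

# One robot alone: a single noisy heading.
solo = math.atan2(readings[0][1], readings[0][0])

# The school: averaging the readings damps the noise before aligning.
avg_gx = sum(g[0] for g in readings) / len(readings)
avg_gy = sum(g[1] for g in readings) / len(readings)
shared = math.atan2(avg_gy, avg_gx)

print(f"solo heading:   {math.degrees(solo):7.1f} degrees")
print(f"school heading: {math.degrees(shared):7.1f} degrees (less noisy)")
```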

The Subcultron project therefore marked two important advances in the field of robotics: coordinating a fleet of autonomous agents, and equipping underwater robots with a bio-inspired sense. These advances are of particular interest for monitoring ecosystems and the marine environment. One of the secondary aims of the project, for example, was tracking the phenomenon of oxygen depletion in the water of the Venice lagoon, an event that occurs at irregular intervals and in an unpredictable manner, and which leads to local mortality of aquatic species. Using the data they measured, the swarm of underwater robots successfully demonstrated that this phenomenon can be forecast more effectively. In other words, an artificial ecosystem for the benefit of the natural ecosystem.

Learn more about Subcultron

[box type=”info” align=”” class=”” width=””]

The Subcultron project was officially launched in April 2015 as part of the Horizon 2020 research program. It was coordinated by the University of Graz, in Austria. It brought together IMT Atlantique in France, along with partners in Italy (the Pisa School of Advanced Studies and the Venice Lagoon Research Consortium), Belgium (the Free University of Brussels), Croatia (the University of Zagreb), and Germany (Cybertronica).

[/box]

Video conferences and socializing: bringing some joy back to our daily lives

Stéphane Safin, Télécom Paris – Institut Mines-Télécom

The lockdown has upended the ways we communicate and maintain social connections. It has forced us to rethink our social habits and to create new ones. The use of video conferencing systems has become widespread. But is this technology, designed to facilitate work, as effective when it comes to socializing on a personal level? How can we reinvent our forms of interaction so that they may continue to be meaningful and fulfilling?

Socializing online

Remote communication is largely supported by information and communication technologies, for both professional and personal purposes. Technological solutions have proliferated, giving rise to new ways of communicating, and even to new forms of language. For example, the limitations of text messaging have made it necessary to develop shortened forms of writing, which has led not only to the development of a specific abbreviated form of spelling, but also to new digital social media platforms that have enshrined this practice, such as Twitter. Another example is the lack of metacommunication in writing – meaning communication about communication itself, which conveys nuance and helps us understand one another – which has led to increasingly sophisticated systems, such as emoticons and animated GIFs.

As remote communication tools are largely available and increasingly affordable, they provide an opportunity to maintain social ties, and even strengthen them, at a time when we are all forced to be physically isolated from each other.

There are two main forms of remote communication: asynchronous communication, with tools based on written and visual components such as email, document sharing and digital social media; and synchronous communication, with tools such as the telephone and video conferencing. Instant messaging is a special case, a hybrid between synchronous and asynchronous communication.

Do video conferences allow for informal socializing?

Video conferencing tools were developed to hold work meetings. In certain cases, they even help improve the quality of collaboration, in comparison to situations of co-presence (physical presence in the same place) since they force interactions to be structured. Being in front of a screen for a set period of time can boost concentration and efficiency, and employees are pushed to “get to the point” by dispensing with the informal aspects. Too bad if some socializing time is lost before and after the meeting, or over coffee to finish the discussion. Yet, when it comes to “everyday” socializing, it is the informal aspects that matter the most. Taking a coffee break together is much more important than the task being carried out.

But compared to situations of co-presence, these tools have a number of limitations, the main two being the lack of a shared context (who knows what’s happening outside the frame of the other person’s webcam?) and the fact that considerable resources must be allocated to managing interactions (dividing up speaking time, managing the tools and their technical constraints). In general, they only work well for groups that have already been formed and share a frame of reference: common goals, a relatively unambiguous vocabulary, and so on.

But most importantly: even professional meetings, which are usually highly standardized in terms of power relations and how speaking time is structured, must be “run well” in order to be effective. So what about our informal gatherings, which are considerably more chaotic (which is also part of their charm)?

Everyday socializing takes creativity

In short, video conferencing allows us to carry out tasks in structured groups, but requires clear, formal management of the context and interactions. It is far from ideal for everyday socializing, which relies on the group’s immersion in a shared environment, on the flow of conversation (especially necessary for humor), and on enjoying one another’s company without carrying out any task, simply for the sake of being together.

In an effort to overcome the inherent shortcomings of the technology, and because we don’t really have other options, forms of resilience have been developed in order to (re)invent original formats for social interaction. These adaptations fall into three broad strategies, which we will illustrate below with examples from the lockdown period found here and here.

Strategy one: creating a shared context

Time actually spent together is combined with a supposedly shared space. Virtual cocktail parties are a perfect example: settling into a comfortable chair, toasting one another through the webcam and sharing hors d’œuvres are all ways of recreating habits that immerse us in a sort of shared frame of reference – everyone knows what to do at a cocktail party. Another example: some remote meeting systems make it possible to cut out the user’s face and set it against a background. Originally designed to protect privacy, this feature also provides an opportunity to recreate a shared context. Using the same background can help make everyone feel like they are in “the same place.”

Strategy two: recreating tasks

Video conferencing is especially suited to carrying out tasks, so why not set a goal for the time spent socializing, when there wasn’t necessarily one before? Virtual babysitting, for example. For those with children, a bit of help is always appreciated and enlisting grandparents or other adults to read a story or help with homework is beneficial all around: it gives parents a break and helps maintain the connection between children and grandparents. The connection is now established through the task, which provides a purpose and goal for this connection. 

Other examples include playing board games virtually or cooking together, activities which are also ways to recreate a task, set a goal and take advantage of what these tools are made for. Exercising in lockdown, and therefore with limited resources, helps people get up and moving, but more importantly, it gives them a reason to get together.

Strategy three: changing the very format of communication

This involves combining video conferencing with other media (images, videos, music) to enhance the message and focus on content. It may also involve transforming communication, both in its form and content, to overcome the shortcomings of the technology: while we may no longer be able to make long-winded statements because of network quality, we can interact through short sentences and snappy humor, and as such, make room for light-hearted moments, where we don’t dwell on the virus or our feelings about the current situation. Another example is using features many tools offer to “disguise” ourselves virtually, fostering forms of metacommunication that are especially useful when we share a limited context and our non-verbal behavior is barely visible.   

Lessons to learn

Of course, we are not all equal when it comes to our lockdown experience, especially when it comes to access to digital tools. And these tools do not always ensure privacy, far from it in fact. But in these times, they are an invaluable resource for maintaining our social life, as long as we are able to invent ways to use them.  

And maybe on the other side of this crisis, we will have learned to appreciate the importance of informal socializing and therefore value relationships in our professional interactions, challenging the dominant paradigm of hyper-performance.

Stéphane Safin, Ergonomics research professor, Télécom Paris – Institut Mines-Télécom

This article has been republished from The Conversation under a Creative Commons license. Read the original article (in French).