lithium-ion battery

What is a lithium-ion battery?

The lithium-ion battery is one of the best-sellers of recent decades in microelectronics. It is present in most of the devices we use in our daily lives, from our mobile phones to electric cars. The 2019 Nobel Prize in Chemistry was awarded to John Goodenough, Stanley Whittingham, and Akira Yoshino, in recognition of their initial research that led to its development. In this new episode of our “What’s?” series, Thierry Djenizian explains the success of this component. Djenizian is a researcher in microelectronics at Mines Saint-Étienne and is working on the development of new generations of lithium-ion batteries.

 

Why is the lithium-ion battery so widely used?

Thierry Djenizian: It offers a very good balance between storage and energy output. To understand this, imagine two containers: a glass and a large bottle with a small neck. The glass contains little water but can be emptied very quickly. The bottle contains a lot of water but will be slower to empty. The electrons in a battery behave like the water in the containers. The glass is like a high-power battery with a low storage capacity, and the bottle is like a low-power battery with a high storage capacity. Simply put, the lithium-ion battery is like a bottle but with a wide neck.
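The glass-and-bottle trade-off can be sketched numerically. In this toy model (all figures are invented purely for illustration), discharge time is simply the stored energy divided by the power drawn:

```python
# Toy illustration of the glass-vs-bottle analogy: how long a battery
# lasts depends on both stored energy (capacity) and deliverable power.
def discharge_time_hours(capacity_wh: float, power_w: float) -> float:
    """Hours a battery can sustain a constant power draw."""
    return capacity_wh / power_w

# "Glass": high power, low capacity -- empties fast.
# "Bottle": low power, high capacity -- empties slowly.
# Lithium-ion aims for both: a wide-necked bottle.
batteries = {
    "glass": (5.0, 50.0),    # (capacity in Wh, max power in W)
    "bottle": (50.0, 5.0),
    "li-ion": (50.0, 50.0),
}
for name, (capacity, power) in batteries.items():
    print(f"{name}: {power} W sustained for "
          f"{discharge_time_hours(capacity, power)} h")
```

With these made-up numbers, the "glass" delivers ten times the power of the "bottle" but runs dry in six minutes; the lithium-ion case combines the bottle's capacity with the glass's power.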

How does a lithium-ion battery work?

TD: The battery consists of two electrodes separated by a liquid called an electrolyte. One of the two electrodes is an alloy containing lithium. When you connect a device to a charged battery, the lithium will spontaneously oxidize and release electrons – lithium is the chemical element that releases electrons most easily. The electrical current is produced by the electrons flowing between the two electrodes via an electrical circuit, while the lithium ions from the oxidation reaction migrate through the electrolyte into the second electrode.

The lithium ions will thus be stored until they no longer have any available space or until the first electrode has released all its lithium atoms. The battery is then discharged and you simply apply a current to force the reverse chemical reactions and have the ions migrate in the other direction to return to their original position. This is how lithium-ion technology works: the lithium ions are inserted into and extracted from the electrodes reversibly depending on whether the battery is charging or discharging.
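The reversible insertion and extraction can be pictured as simple bookkeeping between the two electrodes. The sketch below is purely illustrative (arbitrary ion counts, no electrochemistry), but it captures the charge/discharge symmetry described above:

```python
# Toy model of reversible lithium insertion/extraction: discharging
# moves ions from one electrode to the other, charging forces them back.
class LiIonCell:
    def __init__(self, total_ions: int = 100):
        self.anode = total_ions   # ions still available to release
        self.cathode = 0          # ions stored after migration

    def discharge(self, n: int) -> int:
        """Move up to n ions anode -> cathode; returns ions moved."""
        moved = min(n, self.anode)
        self.anode -= moved
        self.cathode += moved
        return moved

    def charge(self, n: int) -> int:
        """Reverse reaction: move up to n ions cathode -> anode."""
        moved = min(n, self.cathode)
        self.cathode -= moved
        self.anode += moved
        return moved

cell = LiIonCell()
cell.discharge(60)   # powering a device
cell.charge(60)      # plugging the charger back in
print(cell.anode, cell.cathode)
```

The two stopping conditions in the interview map directly onto the `min()` calls: discharge ends when the first electrode has released all its ions, and charge ends when they have all returned.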

What were the major milestones in the development of the lithium-ion battery?

TD: Whittingham discovered a high-potential material composed of titanium and sulfur capable of reacting reversibly with lithium, then Goodenough proposed the use of metal oxides. Yoshino marketed the first lithium-ion battery using graphite and a metal oxide as electrodes, which considerably reduced the size of the batteries.

What are the current scientific issues surrounding lithium-ion technology?

TD: One of the main trends is to replace the liquid electrolyte with a solid electrolyte. It is best to avoid the presence of flammable liquids, which also present risks of leakage, particularly in electronic devices. If the container is pierced, this can have irreversible consequences on the surrounding components. This is particularly true for sensors used in medical applications in contact with the skin. Recently, for example, we developed a connected ocular lens with our colleagues from IMT Atlantique. The lithium-ion battery we used included a solid polymer-based electrolyte because it would be unacceptable for the electrolyte to come into contact with the eye in the event of a problem. Solid electrolytes are not new. What is new is the research work to optimize them and make them compatible with what is expected of lithium-ion batteries today.

Are we already working on replacing the lithium-ion battery?

TD: Another promising trend is to replace the lithium with sodium. The two elements belong to the same family and have very similar properties. The difference is that lithium is extracted from mines at a very high environmental and social cost. Lithium resources are limited. Although lithium-ion batteries can reduce the use of fossil fuels, if their extraction results in other environmental disasters, they are less interesting. Sodium is naturally present in sea salt. It is therefore an unlimited resource that can be extracted with a considerably lower impact.

Can we already do better than the lithium-ion battery for certain applications?

TD: It’s hard to say. We have to change the way we think about our relationship to energy. We used to solve everything with thermal energy. We cannot use the same thinking for electric batteries. For example, we currently use lithium-ion button cell batteries for the internal clocks of our computers. For this very low energy consumption, a button cell has a life span of several hundred years, while the computer will probably be replaced in ten years. A 1mm² battery may be sufficient. The size of energy storage devices needs to be adjusted to suit our needs.

Read on I’MTech: Towards a new generation of lithium batteries?

We also have to understand the characteristics we need. For some uses, a lithium-ion battery will be the most appropriate. For others, a battery with a greater storage capacity but a much lower output may be more suitable. For still others, it will be the opposite. When you use a drill, for example, it doesn’t take four hours to drill a hole, nor do you need a battery that will remain charged for several days. You want a lot of power, but you don’t need a lot of autonomy. “Doing better” than the lithium-ion battery, perhaps simply means doing things differently.

What does it mean to you to have a Nobel Prize awarded to a technology that is at the heart of your research?

TD: They are names that we often mention in our scientific publications, because they are the pioneers of the technologies we are working on. But beyond that, it is great to see a Nobel Prize awarded to research that means something to the general public. Everyone uses lithium-ion batteries on a daily basis, and people recognize the importance of this technology. It is nice to know that this Nobel Prize in Chemistry is understood by many people.

social and solidarity economy

In France, AMAPs (associations for community-supported agriculture) are emblematic examples of the social and solidarity economy. But they are not the only social and solidarity economy (SSE) organizations. Other examples include cooperative banks, non-profit groups and mutual funds.

What is the social and solidarity economy?

The social and solidarity economy (SSE) encompasses organizations that seek to respond to human problems through responsible solutions. Far from being an epiphenomenon, the SSE accounts for a significant share of the economy both in France and around the world. Contrary to popular belief, these principles are far from new. Mélissa Boudes, a researcher in management at Institut Mines-Télécom Business School, helps us understand the foundations of this economy.

 

What makes the social and solidarity economy unique?

Mélissa Boudes: The social and solidarity economy (SSE) is based on an organizational structure that is both different from and complementary to the public economy and the capitalist economy. This structure is dedicated to serving human needs. For example, organizations that are part of the SSE are either non-profit or low-profit limited companies. In the second case, profits are largely reinvested in projects rather than being paid to shareholders in the form of dividends. In general, SSE organizations have a democratic governance model, in which decisions are made collectively based on the “one person, one vote” principle and involve those who benefit from their services.

What types of organizations are included in this economy?

MB: A wide range! Non-profit groups typically fall within this framework. Although sports and community non-profit groups do not necessarily claim to be part of the SSE, they fall within the framework based on their official statutes. Cooperatives, mutual funds and social businesses of varying sizes are also part of the SSE. One example is the cooperative group Up—formerly called Chèque déjeuner—which now has an international dimension. Other organizations include mutual health insurance groups, wine cooperatives, and cooperative banks.

How long has this economy existed?

MB: We often say that it has existed since the 19th century. The social and solidarity economy developed in response to the industrial revolution. At this time, workers entered a subordinate relationship that was difficult to accept. They wanted a way out. Alternative organizations were created with a primary focus on workers’ concerns. The first organizations of this kind were mutual aid companies that provided access to medical care and consumer cooperatives that helped provide access to good quality food. At the time, people often went into debt buying food. Citizens therefore created collective structures to help each other and facilitate access to good quality, affordable food.

So why have we only heard about the social and solidarity economy in recent years?

MB: It’s true that we seem to be witnessing the re-emergence of the SSE, which was the subject of a law in 2014. The SSE is now back at the forefront because the issues that led to its creation in the 19th century are reappearing—access to food that is free of pollution, access to medical care for “uberized” workers. AMAPs (associations for community-supported agriculture) and cooperative platforms such as Label Emmaüs are examples of how the SSE can respond to these new expectations. Although recent media coverage would suggest that these organization models are new, they actually rely on practices that have existed for centuries. However, the historical structures behind the SSE are less visible now because they have become institutionalized. For example, we sometimes receive invitations to participate in the general meetings of our banks or mutual funds. We don’t pay much attention to this, but it shows that even without knowing it, we are all part of the SSE.

Is the social and solidarity economy a small-scale phenomenon, or does it play a major role in the economy?

MB: The SSE exists everywhere in France, but also around the world. We must understand that SSE organizations aim to provide solutions to universal human problems: better access to education, mobility, healthcare… In France, the SSE represents 10% of employment. This share rises to 14% if we exclude the public economy and only look at private employment. Many start-ups have been created based on the SSE model. This is therefore an important economic phenomenon.

Can any type of organization claim to be part of the social and solidarity economy?

MB: No, they must adopt an official status that is compatible with the SSE when the organization is founded, or request authorization if the company has a commercial status. In the latter case, they must request specific approval as a solidarity-based company of social benefit, which is issued by the regional French employment authority (DIRECCTE). Approval is granted if the company demonstrates that it respects certain principles, including providing a social benefit, a policy in its statutes limiting remuneration, absence from financial markets, etc.

How does the social and solidarity economy relate to the concept of corporate social responsibility (CSR)?

MB: In practice, CSR and SSE concepts sometimes overlap when commercial companies partner with SSE companies to develop their CSR. However, these two concepts are independent. The CSR concept does, however, reveal an economic movement that places increasing importance on organizations’ social aims. More and more commercial companies are opting for a hybrid structure: without becoming SSE companies, they impose limited salary scales to avoid extremely high wages. We are in the process of moving towards an environment in which the dichotomies are more blurred. We can no longer think in terms of virtuous SSE organizations on one side and the profit-driven capitalist economy on the other. The boundaries are not nearly as clear-cut as they used to be.

Read on I’MTech: Social and solidarity economy in light of corporate reform

cyber sovereignty

What is cyber sovereignty?

Sovereignty is a concept that is historically linked to the idea of a physical territory, whereas the digital world is profoundly dematerialized and virtual. So what does the notion of cyber sovereignty mean? It combines the economic strength of online platforms, digital technologies and regulations based on new societal values. Francis Jutand, Deputy CEO of IMT and member of the Scientific Council of the Institut de la Souveraineté Numérique (Institute of Cyber Sovereignty), presents his view on the foundations of this concept.

 

What does it mean to be “sovereign”?

Francis Jutand: The notion of sovereignty can apply to individuals, companies or nations. To be sovereign is to be able to choose. This means being able to both understand and act. Sovereignty is therefore based on a number of components for taking action: technological development, economic and financial autonomy (and therefore power), and the ability to influence regulatory mechanisms. In addition to these three conditions, there is security, in the sense that being sovereign also means being in a space where you can protect yourself from the potential hostility of others. The fifth and final parameter of sovereignty for large geographical areas, such as nations or economic spaces, is the people’s ability to make their voices heard.

How does this notion of sovereignty apply in the case of digital technology?

FJ: The five components of the ability to act transpose naturally into this field. Being sovereign in a digital world means having our own technology and being independent from major economic players in the sector, such as Google, and their huge financial capacity. It also means developing specific regulations on digital technology and being able to protect against cyber-attacks. As far as the general public is concerned, sovereignty consists in training citizens to understand and use digital technology in an informed way. Based on these criteria, three main zones of cyber sovereignty can be defined around three geographical regions: the United States, Europe and China.

What makes these zones of sovereignty so distinct?

FJ: The American zone is based on economic superpowers and a powerful national policy on security and technology operated by government agencies. On the other hand, the state of their regulation in the cyber field is relatively weak. China relies on an omnipresent state with strict regulation and major investments. After lagging behind scientifically and industrially in this area, China has caught up over the past few years. Lastly, Europe has good technological skills in both industry and academia, but is not in a leading position. In its favor, the region of European sovereignty has strong market power and pioneering regulations based on certain values, such as the protection of personal data. Its biggest weakness is a lack of economic leadership that could give rise to global digital players.

How is the concept of sovereignty embodied in concrete terms in Europe?

FJ: Europe and its member countries are already investing at a high level in the digital field, through the European Framework Programmes, as well as national programs and ongoing academic research. On the other hand, the small number of world-class companies in this field weakens the potential for research and fruitful collaborations between the academic and industrial worlds. The European Data Protection Board, which is composed of the national data protection authorities of the European Union member states, is another illustration of sovereignty work in the European zone. However, from the point of view of regulations concerning competition law and financial regulation, Europe is still lagging behind in the development of laws and is unassertive in their interpretation. This makes it vulnerable to lobbies as shown by the debates on the European directive on copyright.

How does the notion of cyber sovereignty affect citizens?

FJ: Citizens are consumers and users of cyber services. They play a major role in this field, as most of their activities generate personal data. They are a driving force of the digital economy, which, we must remember, is one of the five pillars of sovereignty. This data, which directly concerns users’ identity, is also governed by regulations. Citizens’ expression is therefore very important in the constitution of an area of sovereignty.

Why is the academic world concerned by this issue of cyber sovereignty?

FJ: Researchers, whether from IMT or other institutions, have insights to provide on cyber sovereignty. They are at the forefront of the development and control of new technology, which is also one of the conditions of sovereignty. They train students and work with companies to disseminate this technology. IMT and its schools are active in all these areas. We therefore also have a role to play, notably by using our neutrality to inform our parliamentarians. We have experimented in this sense with an initial event for deputies and senators on the theme of technological and regulatory sovereignty. Our researchers discussed the potential impacts of technology on citizens, businesses and the economy in general.

 

physical internet

What is the physical internet?

The physical internet is a strange concept. It borrows its name from the best-known computer network, yet it bears little connection with it, other than being an inspiration for bringing together economic stakeholders and causing them to work together. The physical internet is in fact a new way of organizing the logistics network. In light of the urgent climate challenges facing our planet and the economic challenges of companies, we must rethink logistics from a more sustainable perspective. Shenle Pan, a researcher in management science at Mines ParisTech and specialist in logistics and transport, explains this concept and its benefits.

This article is part of our series on “The future of production systems, between customization and sustainable development.”

 

What does the physical internet refer to?

Shenle Pan: It’s the metaphor of the internet applied to supply chain networks and related services. When we talk about the physical internet, the objective is to interconnect distribution networks, storage centers, suppliers, etc. Today, each contributor to the supply chain system is on their own. Companies are independent and have their own network. The idea of the physical internet is to introduce interoperability between stakeholders. The internet is a good analogy for guiding the ideas and structuring new organizational methods.

What is the benefit of this subject?

SP: Above all, it is a way of making logistics more sustainable. For example, when each stakeholder works on its own, a delivery truck leaves without being full. The delivery must be on time, and the truck leaves even if it is only half full. By connecting stakeholders, a truck can be filled with more goods for another supplier. If enough companies share transport resources, they can even reach a flow of goods significant enough to use rail freight. Since one full truck emits less CO2 than two half-filled trucks, and the train runs on electricity, the environmental impact would be greatly reduced for the same flow of goods. Companies also save due to the scale effect. The benefits are also related to other logistics departments, such as storage, packaging and handling.
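The arithmetic behind this pooling argument can be sketched as follows. All figures (truck capacity, per-trip emissions) are invented for illustration; the point is only that emissions scale with the number of trips, not with how full each truck is:

```python
# Rough sketch of consolidation gains: emissions scale with the number
# of trucks on the road, not with how full each truck is.
# The per-trip CO2 figure below is purely illustrative.
import math

CO2_PER_TRUCK_TRIP_KG = 800.0  # assumed emissions for one truck trip

def trips_needed(total_pallets: int, pallets_per_truck: int) -> int:
    return math.ceil(total_pallets / pallets_per_truck)

def emissions_kg(total_pallets: int, pallets_per_truck: int,
                 fill_rate: float) -> float:
    """fill_rate < 1.0 models trucks leaving partially full."""
    effective_capacity = int(pallets_per_truck * fill_rate)
    return trips_needed(total_pallets, effective_capacity) * CO2_PER_TRUCK_TRIP_KG

# Two shippers, 30 pallets each, trucks hold 30 pallets.
alone = 2 * emissions_kg(30, 30, 0.5)   # each runs half-full trucks
pooled = emissions_kg(60, 30, 1.0)      # shared, fully loaded trucks
print(f"separate: {alone} kg CO2, pooled: {pooled} kg CO2")
```

Separately, the two shippers make four half-full trips; pooled, the same 60 pallets need only two full trucks, halving emissions, and a large enough pooled flow could shift to rail freight altogether.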

How will this impact the logistics markets?

SP: By interconnecting stakeholders, competing companies will be connected. Yet today, these stakeholders do not share their information and logistical means. New rules and protocols must therefore be established to control stakeholders’ access to components in the supply chain, use of the networks, transport of goods, etc. This is the role of protocols, which in the case of the internet include TCP/IP. New intermediaries must also be introduced on the markets. Some are already beginning to appear. Start-ups offer to mutualize transport to maximize the trucks’ capacity. Others sell storage space for a single pallet for a short period of time to adapt to demand, whereas stakeholders are generally used to buying entire warehouses they do not always fill. The physical internet therefore leads us toward a new logistics model called Logistics as a Service. This new model is more flexible, efficient, interoperable and sustainable.

What makes the physical internet a field of study?

SP: Real interdisciplinary research is needed to make all these changes. It is not easy, for example, to design standardized means for promoting interoperability. We must determine which mechanisms are the best suited and why. Then, in the area of management science, we must ask which intermediaries should be introduced into the network to manage the openness and the new business models this would involve. From a computer science perspective: how can the services of the various stakeholders be connected? Personally, I am working on the mathematical aspect, modelling new types of organization for the network, for example for assessing gains.

What are the tangible gains of the physical internet in terms of logistics?

SP: We took two major supply chains from mass distribution in France and integrated the data into our new organizational models to simulate the gains. Depending on the scenario, we improved the filling of trucks by 65% to 85%. CO2 emissions decreased by 60% thanks to multi-modality. In our simulations, these significant results were directly linked to interoperability and the creation of the network. Our models allow us to determine the strategic locations where storage centers shared by several companies should be established, optimize transport times, and reduce supply times and storage volumes. We also saw gains of over 20% in stock sizes.

Does the logistics sector already use the principles of the physical internet?

SP: The physical internet is a fairly recent concept. The first scientific publication on the topic dates to 2009, and companies have only been interested in the subject for approximately three years. They are adopting the concept very quickly, but they still need time. This is why we have a research chair on the physical internet at Mines ParisTech, with French and European companies; they submit their questions and use cases to help develop the potential of this concept. They recognize that we need a new form of organization to make logistics more sustainable, but the market has not yet reached a point where the major players are restructuring based on the physical internet model. We are currently seeing start-ups beginning to emerge and offer new intermediary services.

When will we experience the benefits of the physical internet?

SP: In Europe, the physical internet has established a solid roadmap, developed in particular by the ALICE alliance, which connects the most significant logistics platforms on the continent. This alliance regularly issues recommendations that are used by European H2020 research programs. Five focus areas have been proposed for integrating the physical internet principles in European logistics by 2030. This is one of the largest initiatives worldwide. In Europe, we therefore hope to quickly see the physical internet comprehensively redefine logistics and offer its benefits, particularly in terms of environmental impacts.

 

hydrogen

What is hydrogen energy?

In the context of environmental and energy challenges, hydrogen energy offers a clean alternative to fossil fuels. Doan Pham Minh, a chemist and environmental engineering specialist at IMT Mines Albi, explains why this energy is so promising, how it works and the prospects for its development.

 

What makes hydrogen so interesting?

Doan Pham Minh: The current levels of interest in hydrogen energy can be explained by the pollution problems linked to carbon-based energy sources. They emit fine particles, toxic gases and volatile organic compounds. This poses societal and environmental problems that must be remedied. Hydrogen offers a solution because it does not emit any pollutants. In fact, hydrogen reacts with oxygen to “produce” energy in the form of heat or electricity. The only by-product of this reaction is water. It can therefore be considered clean energy.

Is hydrogen energy “green”? 

DPM: Although it is clean, it cannot be called “green”. It all depends on how the dihydrogen molecule is formed. Today, around 96% of hydrogen is produced from fossil raw materials, like natural gas and hydrocarbon fractions from petrochemicals. In these cases, hydrogen clearly is not “green”. The remaining 4% is produced through the electrolysis of water. This is the reverse reaction of the combustion of hydrogen by oxygen: water is separated into oxygen and hydrogen by consuming electricity. This electricity can be produced by nuclear power stations, coal-fired plants or by renewable energies: biomass, solar, hydropower, wind, etc. The environmental footprint of the hydrogen produced by electrolysis depends on the electricity’s origin.

How is hydrogen produced from biomass?

DPM: In terms of the chemistry, it is fairly similar to the production of hydrogen from oil. Biomass is also made up of hydrocarbon molecules, but with a little more oxygen. At IMT Mines Albi, we work a great deal on thermo-conversion. Biomass (wood, wood waste, agricultural residues, etc.) is heated without oxygen, or in a low-oxygen atmosphere. The biomass is then split into small molecules and primarily produces carbon monoxide and dihydrogen. Biomass can also be transformed into biogas through anaerobic digestion by microorganisms. This biogas can then be transformed into a mixture of carbon monoxide and dihydrogen. An additional reforming step uses water vapor to transform the carbon monoxide into carbon dioxide and hydrogen. We work with industrial partners like Veolia to use the CO2 and prevent the release of greenhouse gas. For example, it can be used to manufacture sodium bicarbonate, which neutralizes the acidic and toxic gases from industrial incinerators. The production of hydrogen from biomass is therefore also very clean, making it a promising technique.
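The reforming step mentioned above is the water-gas shift reaction, CO + H2O → CO2 + H2. A small stoichiometry sketch (assuming complete conversion, which real reactors only approach) shows the yields per kilogram of carbon monoxide:

```python
# Water-gas shift stoichiometry: CO + H2O -> CO2 + H2.
# One mole of CO yields one mole of H2 and one mole of CO2,
# assuming complete conversion (real reactors reach an equilibrium).
M_CO, M_H2, M_CO2 = 28.01, 2.016, 44.01  # molar masses, g/mol

def shift_products(mass_co_g: float) -> tuple[float, float]:
    """Masses (g) of H2 and CO2 from fully converting a mass of CO."""
    moles = mass_co_g / M_CO
    return moles * M_H2, moles * M_CO2

h2, co2 = shift_products(1000.0)  # 1 kg of CO from gasified biomass
print(f"1 kg CO -> {h2:.0f} g H2 and {co2:.0f} g CO2")
```

The CO2 term in the output is precisely the by-product stream that, as described above, can be captured and turned into sodium bicarbonate instead of being released.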

Read more on I’MTech: Vabhyogaz uses our waste to produce hydrogen

Why is it said that hydrogen can store electricity?

DPM: Storing electricity is difficult: it requires complex batteries, especially on a large scale. A good strategy is therefore to transform electricity into another form of energy that is easier to store. Through the electrolysis of water, electrical energy is used to produce dihydrogen molecules. This hydrogen can easily be compressed, transported, stored and distributed before being reused to produce heat or generate electricity. This is a competitive energy storage method compared to mechanical and kinetic solutions, such as dams and flywheels.
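The storage argument can be made concrete with a rough energy balance. The efficiencies below are assumed ballpark values, not measured figures, and the sketch ignores compression and transport losses:

```python
# Rough round-trip sketch of storing electricity as hydrogen:
# electricity -> electrolysis -> H2 -> fuel cell -> electricity.
LHV_H2_MJ_PER_KG = 120.0   # lower heating value of hydrogen
ELECTROLYZER_EFF = 0.70    # assumed electricity-to-H2 efficiency
FUEL_CELL_EFF = 0.55       # assumed H2-to-electricity efficiency

def h2_mass_from_electricity(kwh_in: float) -> float:
    """Kilograms of H2 produced from a given electrical input."""
    return kwh_in * 3.6 * ELECTROLYZER_EFF / LHV_H2_MJ_PER_KG

def electricity_recovered(kg_h2: float) -> float:
    """kWh of electricity recovered by a fuel cell from stored H2."""
    return kg_h2 * LHV_H2_MJ_PER_KG * FUEL_CELL_EFF / 3.6

stored = h2_mass_from_electricity(100.0)  # store 100 kWh of surplus
recovered = electricity_recovered(stored)
print(f"{stored:.2f} kg H2 stored, {recovered:.1f} kWh recovered")
```

Under these assumptions less than half of the input electricity comes back, but unlike a charged battery the hydrogen can sit in a tank indefinitely and be shipped to wherever it is needed.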

Why is it taking so long to develop hydrogen energy?

DPM: In my opinion, it is above all a matter of will. We see major differences between different countries. Japan, for example, is very advanced in the use of hydrogen energy. South Korea, the United States and China have also invested in hydrogen technologies. Things are beginning to change in certain countries. France now has a hydrogen plan, launched last June by Nicolas Hulot. However, it remains a new development, and it will take time to establish the infrastructures. We currently only have around 20-25 hydrogen fuel stations in France, which is not many. Hydrogen vehicles remain expensive: a Toyota Mirai sedan costs €78,000 and a hydrogen bus costs approximately €620,000. These vehicles are much more expensive than the equivalent in vehicles with diesel or gas engines. Nevertheless, these prices are expected to decline in coming years, because the number of hydrogen vehicles is still very limited. Investment programs must be established, and they take time to implement.

IIoT

What is the Industrial Internet of Things (IIoT)?

Industry and civil society do not share the same expectations when it comes to connected objects. The Internet of Things (IoT) must therefore adapt to meet industrial demands. These specific adaptations have led to the emergence of a new field: IIoT, or the Industrial Internet of Things. Nicolas Montavont, a researcher at IMT Atlantique, describes the industrial stakes that justify the specific nature of the IIoT and the challenges currently facing the scientific community.

 

What does the IIoT look like in specific terms?

Nicolas Montavont: One of the easiest examples to present and understand is the way production lines are monitored. Sensors ensure that a product is manufactured under good conditions, by controlling what travels down the conveyor belt and by measuring the temperature, humidity or luminosity for the specific work environment. Actuators can then respond to the data received, for example by reconfiguring a production line based on the environment or context, allowing a machine to perform a different task.
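A minimal sketch of such a monitoring loop, with hypothetical sensor names and thresholds (this is not a real IIoT API, only the sense-decide-act pattern described above):

```python
# Sense-decide-act sketch: sensors report environmental readings and
# actuator commands are issued when a threshold is crossed.
# All names and limits below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Reading:
    sensor_id: str
    kind: str      # e.g. "temperature", "humidity", "luminosity"
    value: float

# Hypothetical limits for acceptable manufacturing conditions.
THRESHOLDS = {"temperature": 40.0, "humidity": 80.0}

def actions_for(readings: list[Reading]) -> list[str]:
    """Decide which actuator commands to issue for a batch of readings."""
    commands = []
    for r in readings:
        limit = THRESHOLDS.get(r.kind)
        if limit is not None and r.value > limit:
            commands.append(f"pause_line:{r.sensor_id}")
    return commands

batch = [Reading("s1", "temperature", 42.5),
         Reading("s2", "humidity", 55.0)]
print(actions_for(batch))
```

In a real deployment the decision step would run against latency and reliability constraints that mainstream IoT stacks do not guarantee, which is exactly the gap the IIoT standards discussed below aim to close.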

How does the IIoT benefit companies?

NM: There are benefits in every area: production times, line performance, cost reduction, etc. One major benefit is increased flexibility thanks to a more autonomous system. Production lines can operate and adapt with fewer human interventions. Staff can therefore transition from a role of handling and management to supervision and control. This change especially benefits small businesses. Today, production is very focused on large volumes. Increased flexibility and autonomy let companies find more cost-effective ways of manufacturing small quantities.

What justifies referring to IIoT as a separate field, distinct from the mainstream IoT?

NM: Mainstream IoT technologies are not designed to meet industry requirements. In general, mainstream IoT applications do not have very high performance requirements. Communicating objects are used to send non-critical data packets without strict time constraints. The opposite is true in industry, which requires object networks that send important data with the lowest possible latency. Specific IoT standards must therefore be developed for the industrial sector, hence the name IIoT. For example, companies do not want to be limited by proprietary standards, and so they want to push the Internet to become the network that supports their architectures.

Why do companies have more performance constraints for their networks of communicating objects?

NM: One scenario that clearly represents industrial constraints is the one we selected for the SCHEIF project, in the context of the German-French Academy for the Industry of the Future (GFA). We initiated a collaboration with the Technische Universität München (TUM) on the quality of the network and data in an industrial environment. We started with a scenario featuring a set of robots that move autonomously through a work environment. They can accomplish specific tasks and can also detect and adapt to environmental changes. For example, if a person walks through the area, they must not be hit by the robots. This scenario includes a major safety aspect, which demands an efficient network, low latency, good-quality data communications and the effective assessment of the state of the environment.

What are the scientific challenges of this type of example?

NM: First of all, locating robots indoors in real time presents a challenge. Technologies exist but are not yet perfect and do not offer sufficient performance levels. Secondly, we need to make sure the robots exchange the monitoring data in an appropriate manner, by prioritizing the information. The main problem is, “who needs to send what, and when?” We are working on how to schedule the communications and represent the robots’ knowledge of their environment. We also have energy consumption constraints, first in terms of hardware and the network. Finally, there is a significant cybersecurity aspect, which has become a major focus area for the scientific community.

digital simulation

What is digital simulation?

Digital simulation has become an almost mandatory step in developing new products. But what does “simulating behavior” or “modeling an event” really mean? Marius Preda, a computer science researcher at Télécom SudParis, explains what’s hiding behind these common industry expressions.

 

What is digital simulation used for?

Marius Preda: Its main goal is to reduce prototyping costs for manufacturers. Instead of testing a product with real prototypes, which are expensive, companies use fully digital twins of these prototypes. These virtual twins take the form of a 3D model that has all the same attributes as the real product (colors, dimensions, visual appearance) and, most importantly, into which a great quantity of metadata is injected, such as the physical properties of the materials. This makes it possible to create a simulation that is very close to reality. The obvious advantage is that if the product isn’t right, the metadata can simply be changed, or the attributes of the digital twin can be directly modified. With a real prototype, it would have to be entirely remade.
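
The idea of a 3D model enriched with injected metadata can be sketched as a simple data structure. The sketch below is purely illustrative (it is not a real CAD or digital-twin format, and all field names and values are assumptions); it only shows how iterating on a design becomes a field update rather than a new physical prototype.

```python
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    """Geometric/visual attributes plus injected physical metadata."""
    name: str
    dimensions_mm: tuple   # (length, width, height)
    color: str
    metadata: dict = field(default_factory=dict)  # physical properties

door = DigitalTwin("car door", (1150, 35, 980), "metallic grey")
door.metadata["material"] = "aluminium alloy"
door.metadata["youngs_modulus_GPa"] = 70.0  # stiffness used by a crash simulation

# Iterating on the design is a field update, not a new prototype:
door.metadata["youngs_modulus_GPa"] = 72.5
print(door.metadata["youngs_modulus_GPa"])
```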

What can be simulated?

MP: The main focus is on production. Companies run simulations in order to accurately measure all the parameters of a part and obtain production specifications; this accounts for a large share of digital simulation’s uses. After that, there are significant concerns about aging. The physical laws that determine how materials wear are well known, so companies inject them into digital models in order to simulate how a part will wear as it is used. One of the newer applications is predictive maintenance: simulations can be used to predict breakage or faults in order to determine the optimal moment at which a part should be repaired or replaced. All of that relates to products, but there are also simulations of whole factories and their operations, and even simulations of the human body.
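
As a toy illustration of the predictive-maintenance idea, here is a deliberately simplistic wear model. Real wear laws (such as Archard’s law) are far more involved; the linear rate and the threshold below are invented values used only to show the principle of estimating the optimal replacement moment.

```python
WEAR_PER_CYCLE = 0.002   # mm of material lost per operating cycle (invented)
FAILURE_THRESHOLD = 1.5  # mm of wear beyond which the part is unsafe (invented)

def cycles_until_replacement(current_wear_mm,
                             wear_per_cycle=WEAR_PER_CYCLE,
                             threshold=FAILURE_THRESHOLD):
    """Estimated number of remaining cycles before wear reaches the threshold."""
    remaining = threshold - current_wear_mm
    return max(0, round(remaining / wear_per_cycle))

print(cycles_until_replacement(1.1))  # about 200 cycles left at 1.1 mm of wear
print(cycles_until_replacement(1.5))  # 0: replace now
```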


How is a digital simulation created?

MP: The first step is defining the goal of the simulation. Taking a car, for example, if the goal is to study how the car body deforms during an impact, the modeling will be different from if the goal were to analyze visual and sound comfort inside the passenger compartment. So modeling is carried out based on what the aim is: automobile manufacturers don’t create a 3D model with the idea that they’ll be able to use it for all simulations. The 3D form may be the same, but what’s important are the physical properties that will be included within the model. For crash test simulations, properties related to the way materials deform are injected into the model in the form of equations that govern their behavior. For sound comfort, the laws of reflectivity and sound propagation are included.

What form do simulations take?

MP: Virtual reality is often presented as something new, but manufacturers have been using it for years for simulations! In the past, they would create 3D environments called “caves,” which were rooms in which different parts of a car – to continue with our automobile example – were projected on the walls. Today, virtual reality headsets make it possible to save space and put more people in the same virtual environment. But beyond this highly visual form of simulation, what industry professionals are really interested in is the model and the results behind it. What matters isn’t really seeing how a car deforms in an accident, but knowing by how many centimeters the engine block penetrates into the passenger compartment. And sometimes, there isn’t even a visual: the simulation takes the form of a curve on a graph showing how material deformation depends on the speed of the car.

What sectors use digital simulations the most?  

MP: I talk about the automobile industry a lot since it’s one of the first to have used digital simulations. Architects were also among the first to use 3D to visualize models. And factories and relatively complex industrial facilities rely on simulation too. Among other things, it allows them to analyze the piping systems behind the walls. It’s a way to access information more easily than with plans. On the other hand, there are sectors, such as construction and civil engineering, where simulation is under-utilized and plans still play a central role.

What are some major ways digital simulation could evolve in the near future?

MP: In my opinion, interaction between humans and 3D models represents a big challenge. New devices like virtual reality glasses are being used, but the way people interact with the model remains unnatural. Yes, from within a virtual space, users can change how rooms are arranged with a wave of the hand. But if they want to change the physical parameters behind a material’s behavior, they still have to use a computer to introduce raw data in coded form. It would be a major advance to be able to directly change these metadata from within the virtual environment.

 

mechatronics

What is mechatronics?

Intelligent products can perceive their environment, communicate, process information and act accordingly… Is this science fiction?  No, it’s mechatronics! Every day, we come in contact with mechatronic systems, from reflex cameras to our cars’ braking systems. Beyond the technical characteristics of these devices, the term mechatronics also refers to the systemic and comprehensive nature of their design. Pierre Couturier, a researcher at IMT Mines Alès, answers our questions about the development of these complex multi-technology systems.

 

What is mechatronics?

Mechatronics is an interdisciplinary and collaborative approach for designing and producing multi-technology products. To design a mechatronic product, several different professions must work together to simultaneously solve electronic, IT and mechanical problems.

In addition, designing a mechatronic product means adopting a systemic approach and taking into account stakeholders’ needs for the product over its entire lifecycle, from design and production through use to dismantling. The issues of recycling and disposing of the materials are also considered during the earliest stages of the design phase. Mechatronics brings very different professions together, and this systemic vision creates a consensus among all the specialists involved.

 

What are the characteristics of a mechatronic product?

A mechatronic product can perceive its environment using sensors, process the information received and then communicate and react accordingly in or on this environment. Developing such capacities requires the integration of several technologies in synergy: mechanics, electronics, IT and automation. Ideally, a product is designed to self-run and self-correct based on how it is used. With this goal in mind, we use artificial intelligence technologies and different types of learning: supervised, unsupervised or reinforcement learning.
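
The perceive-process-act behavior described above can be caricatured as a control loop. The sketch below is a thermostat-style stand-in written in plain Python, not a real mechatronic system; the sensor, control logic, actuator and setpoint are all invented for illustration.

```python
def sense(environment):
    """Sensor: read a measurement from the environment."""
    return environment["temperature"]

def decide(measurement, setpoint=20.0):
    """Control logic: process the information and choose an action."""
    return "heat" if measurement < setpoint else "idle"

def act(command, environment):
    """Actuator: react on the environment."""
    if command == "heat":
        environment["temperature"] += 0.5

env = {"temperature": 18.0}
for _ in range(5):              # five sense-process-act cycles
    act(decide(sense(env)), env)
print(env["temperature"])       # the system has driven itself toward the setpoint
```

In a real product each of these three functions is a different discipline (sensor electronics, embedded software and automation, mechanical actuation), which is exactly why the professions must design the system together.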

 

What types of applications are mechatronic products used for?

Mechatronic products are used in many different fields: transport, mobility, robotics, industrial equipment, machine tools, as well as in applications for the general public. Reflex cameras, which integrate mechanical aspects with moving parts, are one example of mechatronic products.

In the area of transport, we also encounter mechatronics on a daily basis, for example with the ABS braking assistance system that is integrated into most cars. This system detects when the wheels are slipping and momentarily releases the braking force requested by the driver to restore the wheels’ grip on the road.
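
The slip-detection logic behind ABS can be sketched in a few lines. This is a simplified illustration, not a real automotive algorithm: the slip ratio compares wheel speed to vehicle speed, and the threshold value here is an assumption chosen for the example.

```python
SLIP_THRESHOLD = 0.2  # illustrative threshold; real controllers are more subtle

def slip_ratio(vehicle_speed, wheel_speed):
    """0 = wheel rolling freely, 1 = wheel fully locked."""
    if vehicle_speed <= 0:
        return 0.0
    return (vehicle_speed - wheel_speed) / vehicle_speed

def abs_command(vehicle_speed, wheel_speed):
    """Release brake pressure when the wheel is slipping too much."""
    if slip_ratio(vehicle_speed, wheel_speed) > SLIP_THRESHOLD:
        return "release"
    return "hold"

print(abs_command(30.0, 28.0))  # mild slip: keep braking
print(abs_command(30.0, 10.0))  # heavy slip: release to regain grip
```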

At IMT Mines Alès, we are also conducting several mechatronic projects on health and disability, including a motorized wheel for an all-terrain wheelchair. The principle is to provide the wheelchair with electrical assistance proportional to how the individual pushes on the handrail.

 

What other types of health projects are you leading at IMT Mines Alès?

In the health sector, we have developed a device for measuring the pressure a shoe exerts on the foot for an orthopedic company from Lozère. This product is intended for individuals with diabetes who have a loss of sensation in their feet: they can sometimes injure themselves by wearing inappropriate shoes without feeling any pain. Using socks equipped with sensors placed at specific places, areas with excessive pressure can be identified. The data is then sent to a remote station which transfers the different pressure points to a 3D model. We can therefore infer what corrections need to be made to the shoe to ensure the individual’s comfort.
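
The analysis step can be sketched as a simple thresholding over the sensor readings: zones where the measured pressure exceeds a tolerance are flagged so the orthopedist knows where the shoe must be corrected. The zone names, pressure values and threshold below are all invented for illustration.

```python
PRESSURE_LIMIT_KPA = 200  # assumed tolerance for a foot with loss of sensation

# Hypothetical readings from sensors placed at specific points of the sock
readings = {
    "heel": 180,
    "big_toe": 240,
    "outer_edge": 150,
    "ball_of_foot": 260,
}

# Flag the zones where the shoe exerts excessive pressure
excessive = [zone for zone, pressure in readings.items()
             if pressure > PRESSURE_LIMIT_KPA]
print(excessive)  # zones where the shoe needs correcting
```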

We have also developed a scooter for people with disabilities, featuring a retractable kickstand that is activated when the vehicle runs at a low speed, to prevent the rider from falling. Also, in the area of disability, we have worked on a system of controls for electric wheelchairs that involve both a touchpad with two pressure areas to move forward and backward and touch sensors activated by the head to move left or right.

 

What difficulties are sometimes encountered when developing complex mechatronic products?

The first difficulty is to get all the different professions to work together to design a product. There are real human aspects to manage! The second difficulty is technical: the physical interactions between the product’s different components are not always predictable. At IMT Mines Alès, for example, we designed a machine for testing the resistance of a foam mattress. A roller moved across the entire length of the mattress to wear it out. However, the interaction between the foam and the roller produced electrostatic phenomena that led to electric shocks. We had underestimated their significance… We therefore had to change the roller material to resolve this problem. Due to the complexity of these systems, we discovered physical interactions we had not anticipated during the design phase!

To avoid this type of problem, we conduct research in systems engineering to assess, verify and validate the principles behind the solution as soon as possible in the design phase, even before physically making any of the product’s components. The ideal solution would be to design a product using digital modeling and simulation, and then produce it without the prototype phase… But that’s not yet possible! In reality, due to the increasing complexity of mechatronic products, it is still necessary to develop a prototype to detect properties or behaviors that are difficult to assess through simulation.

 

supercritical fluid

What is a supercritical fluid?

Water, like any chemical substance, can exist in a gaseous, liquid or solid state… but that’s not all! When sufficiently heated and pressurized, it becomes a supercritical fluid, halfway between a liquid and a gas. Jacques Fages, a researcher in process engineering, biochemistry and biotechnology at IMT Mines Albi, answers our questions on these fluids which, among other things, can be used to replace polluting industrial solvents or dispose of toxic waste. 

 

What is a supercritical fluid?

Jacques Fages: A supercritical fluid is a chemical compound maintained above its critical point, which is defined by a specific temperature and pressure. The critical pressure of water, for example, is the pressure beyond which it can be heated to over 100°C without becoming a gas. Similarly, the critical temperature of CO2 is the temperature beyond which it can be pressurized without liquefying. When the critical temperature and pressure of a substance are both exceeded, it enters the supercritical state. Unable to liquefy completely under the effect of pressure, but also unable to gasify completely under the effect of temperature, the substance is maintained in a physical state between a liquid and a gas: its density is equivalent to that of a liquid, but its fluidity is that of a gas.

For CO2, which is the fluid most commonly used in the supercritical state, the critical temperature and pressure are relatively low: 31°C and 74 bars, or 73 times atmospheric pressure. Because CO2 is also an inert, inexpensive, natural and non-toxic molecule, it is used in 90% of applications. The critical point of water is much higher: 374°C and 221 bars. Other molecules such as hydrocarbons can also be used, but their applications remain much more marginal due to risks of explosion and pollution.
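
The definition translates directly into a small check: a substance is supercritical only when both its critical temperature and its critical pressure are exceeded. The critical points below are the values quoted above (CO2: 31°C and 74 bars; water: 374°C and 221 bars); the function itself is just an illustration of the definition.

```python
CRITICAL_POINT = {        # (critical temperature in °C, critical pressure in bar)
    "CO2": (31.0, 74.0),
    "H2O": (374.0, 221.0),
}

def is_supercritical(substance, temperature_c, pressure_bar):
    """True when both the critical temperature and pressure are exceeded."""
    t_crit, p_crit = CRITICAL_POINT[substance]
    return temperature_c > t_crit and pressure_bar > p_crit

print(is_supercritical("CO2", 40.0, 100.0))   # both exceeded: supercritical
print(is_supercritical("CO2", 40.0, 50.0))    # pressure too low: not supercritical
print(is_supercritical("H2O", 120.0, 250.0))  # hot, but well below 374°C
```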

What are the properties of supercritical CO2 and the resulting applications?

JF: Supercritical CO2 is a very good solvent because its density is similar to that of a liquid, but it has much greater fluidity – similar to that of a gas – which allows it to penetrate the micropores of a material. The supercritical fluid can selectively extract molecules, and it can also be used for particle design.

A device designed for implementing extraction and micronization processes of powders.

 

Supercritical CO2 can be used to clean medical devices, such as prostheses, as a very useful complement to current sterilization methods. It removes all the impurities to obtain a product that is clean enough to be implanted in the human body. In pharmacy, it allows us to improve the bioavailability of certain active ingredients by improving their solubility or speed of dissolution. At IMT Mines Albi, we worked on this type of process for the Pierre Fabre laboratories, which allowed the company to develop its own research center on supercritical fluids.

Supercritical CO2 has applications in many sectors such as materials, construction, biomedical healthcare, pharmacy and agri-food as well as the industry of flavorings, fragrances and essential oils. It can extract chemical compounds without the use of solvents, guaranteeing a high level of purity.

Can supercritical CO2 be used to replace the use of polluting solvents?

JF: Yes, supercritical CO2 can replace existing and often polluting organic solvents in many fields of application and prevents the release of harmful products into the environment. For example, manufacturers currently use large quantities of water for dyeing textiles, which must be retreated after use because it has been polluted by pigments. Dyeing processes using supercritical CO2 allow textiles to be dyed without the release of chemicals. Rolls of fabric are placed in an autoclave, a sort of large pressure cooker designed to withstand high pressures, which pressurizes and heats the CO2 to its critical state. Once dissolved in the supercritical fluid, the pigment permeates to the core of the rolls of fabric, even those measuring two meters in diameter! The CO2 is then restored to normal atmospheric pressure and the dye is deposited on the fabric while the pure gas returns into the atmosphere or, better still, is recycled for another process.

But, watch out! We are often criticized for releasing CO2 into the atmosphere and thus contributing to global warming. This is not true: we use CO2 that has already been generated by an industry. We therefore don’t actually produce any and don’t increase the amount of CO2 in the atmosphere.

Does supercritical water also have specific characteristics?

JF: Supercritical water can be used for destroying hazardous, toxic or corrosive waste in several industries. Supercritical H2O is a very powerful oxidizing environment in which organic molecules are rapidly degraded. This fluid is also used in biorefinery: it gasifies or liquefies plant residues, sawdust or cereal straw to transform them into liquid biofuel, or methane and hydrogen gases which can be used to generate power. These solutions are still in the research stage, but have potential large-scale applications in the power industry.

Are supercritical fluids used on an industrial scale?

JF: Supercritical CO2 is not an oddity found only in laboratories! It has become an industrial process used in many fields. A French company called Diam Bouchage, for example, uses supercritical CO2 to extract trichloroanisole, the molecule responsible for cork taint in wine. It is a real commercial success!

Nevertheless, this remains a relatively young field of research that only developed in the 1990s. The scope for progress in the area remains vast! The editorial committee of the Journal of Supercritical Fluids, of which I am a member, sees the development of new applications every year.

 

Artificial intelligence

What is artificial intelligence?

Artificial intelligence (AI) is a hot topic. In late March, the French government organized a series of events dedicated to this theme, the most notable of which was the publication of the report “For a Meaningful Artificial Intelligence,” written by Cédric Villani, a mathematician and member of the French parliament. The buzz around AI coincides with companies’ and scientists’ renewed interest in the topic. Over the last few years AI has become fashionable again, as it was in the 1950s and 1960s. But what does this term actually refer to? What can we realistically expect from it? Anne-Sophie Taillandier, director of IMT’s TeraLab platform dedicated to big data and AI, is working on innovations and technology transfer in this field. She was recently listed as one of the top 20 individuals driving AI in France by L’Usine Nouvelle. She sat down with us to present the basics of artificial intelligence.

 

How did AI get to where it is today?

Anne-Sophie Taillandier: AI has played a key role in innovation questions for two or three years now. What has helped create this dynamic are closer ties between two scientific fields: information sciences and big data, both of which focus on the question, “How can information be extracted from data, whether big or small?” The results have been astonishing. Six years ago, we were only able to automatically recognize tiny pieces of images. When deep learning was developed, the recognition rate skyrocketed. But if we have been able to use the algorithms on large volumes of images, it is because of hardware that has made it possible to perform the computations in a reasonable amount of time.

What technology is AI based on?

AST: Artificial intelligence is the principle of extracting and processing information. This requires tools and methods. Machine learning is a method that brings together highly statistical techniques such as neural networks. Deep learning is another technique that relies on deeper neural networks. These two methods have some things in common; what makes them different is the tools chosen. In any event, both technologies are based on the principle of learning. The system learns from an initial database and it is then used on other data. The results are assessed so that the system can keep learning. But AI itself is not defined by these technologies. In the future, there may be other types of technology which will also be considered artificial intelligence. And even today, researchers in robotics sometimes use different algorithms.
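
The learning principle described – train on an initial database, use the system on new data, then assess the results – can be shown in miniature. The sketch below uses a 1-nearest-neighbour classifier, far simpler than the neural networks and deep learning mentioned in the interview, but it follows the same train-then-predict-then-evaluate pattern. All data points and labels are invented.

```python
def predict(training_set, point):
    """Return the label of the closest training example (1-nearest neighbour)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(training_set, key=lambda example: dist(example[0], point))
    return nearest[1]

# Initial database the system "learns" from: (features, label) pairs
training = [((0.0, 0.0), "cat"), ((0.1, 0.2), "cat"),
            ((1.0, 1.0), "dog"), ((0.9, 1.1), "dog")]

# The system is then used on data it has never seen:
print(predict(training, (0.2, 0.1)))
print(predict(training, (0.8, 0.9)))

# Assessing the results on held-out examples closes the learning loop:
test_set = [((0.05, 0.1), "cat"), ((1.1, 0.9), "dog")]
accuracy = sum(predict(training, x) == y for x, y in test_set) / len(test_set)
print(accuracy)
```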

Can you give some specific examples of the benefits of artificial intelligence?

AST: The medical sector is a good illustration. In medical imaging, for example, we can teach an algorithm to detect cancerous tumors. It can then help doctors look for parts of an image that require their attention. We can also adjust a patient’s treatment depending on a lot of different data: is he alone or does he have a support network? Is he active or inactive? What is his living environment like? All these aspects contribute to personalized medicine, which has only become possible because we know how to process all this data and automatically extract information. For now, artificial intelligence is mainly used as a decision-making aid. Ultimately, it’s a bit like what doctors do when they ask patients questions, but in this case we help them gather information from a wide range of data. With AI, the goal is first and foremost to reproduce something that we know very well.

How can we distinguish between solutions that involve AI and others?

AST: I would say that it’s not really important. What matters is if using a solution provides real benefits. This question often comes up with chatbots, for example. Knowing whether AI is behind them or not — whether it’s just a decision tree based on a previous scenario or if it’s a human — is not helpful. As a consumer, what’s important to me is that the chatbot in front of me can answer my questions. They’re always popping up on sites now, which is frustrating since a lot of the time they are not particularly useful! So it is how a solution is used that really matters, more than the technology behind it.

Does the fact that AI is “trendy” adversely affect important innovations in the sector?

AST: With TeraLab we are working on very advanced topics with researchers and companies seeking cutting-edge solutions. If people exaggerate in their communication materials or use the term “artificial intelligence” in their keywords, it doesn’t affect us. I’d rather that the public become familiar with the term and think about the technology already present in their smartphones than fantasize about something inaccessible.