
BeePMN: Monitoring bees to take better care of them

At the crossroads between Industry 4.0 and the Internet of Things, the BeePMN research project aims to help amateur and professional beekeepers. It will feature an intuitive smartphone app that combines business processes with real-time measurements of apiaries. 

When a swarm of bees becomes too crowded for its hive, the queen stops laying eggs and the worker bees leave in search of a place to start a new colony. The hive splits into two groups: those that follow the queen to explore new horizons, and those that stay and choose a new queen to take over the leadership of the colony. As exciting as this new adventure is for the bees, for the beekeeper who maintains the hive, this new beginning brings complications. In particular, the loss of part of the colony also leads to a decrease in honey production. Worse still, the loss of bees can be caused by something far more serious, such as the emergence of a virus or an invasion that threatens the health of the colony.

Beekeepers therefore monitor these events in the life of the bees very closely, but keeping track of the hives on a daily basis is a major undertaking, and a question of time. The BeePMN project, at the crossroads between the processes of Industry 4.0 and the Internet of Things, aims to give beekeepers eyes in the back of their heads so they can monitor the health of their hives in real time. BeePMN combines a non-invasive sensor system, to provide real-time data, with an intuitive and easy-to-use application, to provide decision-making support.

This project was launched as part of the Hubert Curien Partnerships, which support scientific and technological exchanges between countries, with sites installed both in France, near Alès, and in Lebanon, in partnership with the beekeeping cooperative Atelier du Miel. It is supported by a collaboration between a team led by Grégory Zacharewicz, Nicolas Daclin and François Trousset at IMT Mines Alès, a team led by Charles Yaacoub and Adib Akl at the Holy Spirit University of Kaslik in Lebanon, and the company ConnectHive. This company, which specializes in engineering applied to the beekeeping industry, was founded by François Pfister, a retired IMT Mines Alès researcher and beekeeping enthusiast.

BeePMN has several goals: to monitor the health of the hives, to increase honey production, and to facilitate the sharing of knowledge between amateurs and professionals. 

“I actually work on business process problems in industry,” says Grégory Zacharewicz, a researcher at IMT Mines Alès on the project. “But the synergy with these different partners has directed us more towards the craft sector, and specifically beekeeping,” with the aim of providing tools to accelerate their tasks or reminders about certain activities. “I often compare BeePMN to a GPS: it is of course possible to drive without it, but it’s a tool that guides the driver to optimize his choices,” he explains. 

Making better decisions 

The different sites, both in France and Lebanon, are equipped with connected sensors, non-invasive for the bee colonies, which gather real-time data on their health, as well as on humidity, temperature, and weight. For the latter, they have developed ‘nomad’ scales, which are less expensive than the usual fixed equivalent. This data is then recorded in an application to help guide the beekeepers in their daily choices. Though professionals are used to making these kinds of decisions, they may not necessarily have all the information at hand, nor the time to monitor all their apiaries. 

The data observed by the sensors is paired with other environmental information such as the current season, weather conditions, and the flowering period. This allows for precise information on each hive and its environment, and improves the relevance of possible actions and choices. 

“If, for example, we observe a sudden 60% weight loss in a hive, there is no other option than to harvest it,” says Charbel Kady, a PhD student at IMT Mines Alès who is also working on the BeePMN project. On the other hand, if the weight loss happens gradually over the course of the week, that might be the result of lots of other factors, like a virus attacking the colony, or part of the colony moving elsewhere. That is the whole point of combining this essential data, like weight, with environmental variables, to provide more certainty on the cause of an event. “It’s about making sense of the information to identify the cause,” notes Charbel Kady. 

The researchers would also like to add vegetation maps to the environmental information. This is an important aspect, especially with regard to honey plants, but this information is difficult to find for certain regions and complex to integrate into an application. The project also aims to move towards prevention: a PhD student, Marianne El Kassis, has joined the BeePMN team to work on simulations and integrate them into the application, in order to anticipate potential risks.

Learn through play 

The two researchers stressed that one of the points of the application is for beekeepers to help each other. “Beekeepers can share information with each other, and the interesting model of one colleague can be copied and integrated into the everyday life of another,” says Charbel Kady. The application centralizes the data for a set of apiaries and the beekeepers can share their results with each other, or make them available to beginners. That is the core of the second part of the project: a ‘serious game’ offering a simplified, fun version to amateur beekeepers who are less self-sufficient.

Professionals are accustomed to repeating a certain set of actions, so it is possible to formalize them with digital tools in the form of business processes to guide amateurs in their activities. “We organized several meetings with beekeepers to define these business rules and to integrate them into the application, and when the sensors receive the information, it triggers certain actions or alerts, for example taking care of the honey harvest, or needing to add wax to the hive,” explains Grégory Zacharewicz. 
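To give a concrete idea of what such a formalized rule might look like, here is a minimal sketch in Python. The thresholds, field names and suggested actions are hypothetical illustrations, not the actual rules defined with the beekeepers for the BeePMN application.

```python
# Minimal sketch of a beekeeping "business rule" triggered by sensor data.
# Thresholds, field names and actions are hypothetical illustrations,
# not the actual rules defined in the BeePMN application.

def evaluate_hive(previous_weight_kg: float, current_weight_kg: float,
                  hours_elapsed: float) -> str:
    """Suggest an action to the beekeeper based on a change in hive weight."""
    if previous_weight_kg <= 0:
        return "no data"
    loss_ratio = (previous_weight_kg - current_weight_kg) / previous_weight_kg

    # A sudden, massive drop (e.g. ~60% within a day) suggests swarming:
    # part of the colony has left, so what remains should be harvested.
    if loss_ratio >= 0.6 and hours_elapsed <= 24:
        return "ALERT: sudden weight loss - inspect the hive and harvest"

    # A gradual loss over a week may instead point to a virus or a weakening
    # colony, and calls for a health check rather than a harvest.
    if loss_ratio >= 0.2 and hours_elapsed >= 168:
        return "WARNING: gradual weight loss - check colony health"

    return "no action needed"

print(evaluate_hive(40.0, 15.0, hours_elapsed=12))    # sudden drop
print(evaluate_hive(40.0, 31.0, hours_elapsed=168))   # gradual loss over a week
```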

“There is a strong aspect of knowledge and skill transfer. We can imagine it like a sort of companionship to pass on the experience acquired,” says the researcher. The GPS analogy is applicable here too: “It makes available a whole range of past choices from professionals and other users, so that when you encounter a particular situation, it suggests the best response based on what has been decided by other users in the past,” the researcher adds. The concept of the app is very similar, in offering the possibility to capitalize on professionals’ knowledge of business processes to educate yourself and learn, while being guided at the same time. 

The BeePMN project is based on beekeeping activities, but as the researchers point out, the concept itself can be applied to various fields. “We can think of a lot of human and industrial activities where this project could be replicated to support decision-making processes and make them stronger,” explains Grégory Zacharewicz.

Tiphaine Claveau


Machines surfing the web

There has been constant development in the area of object interconnection via the internet. And this trend is set to continue in years to come. One of the solutions for machines to communicate with each other is the Semantic Web. Here are some explanations of this concept.

“The Semantic Web gives machines similar web access to that of humans,” indicates Maxime Lefrançois, an Artificial Intelligence researcher at Mines Saint-Etienne. This area of the web is currently being used by companies to gather and share information, in particular about users. It makes it possible to adapt product offers to consumer profiles, for example. At present, the Semantic Web occupies an important position in research on the Internet of Things, i.e. the interconnection of machines and objects connected via the internet.

By making machines work together, the Internet of Things can be a means of developing new applications, serving both individuals and professional sectors, for uses such as intelligent buildings or digital agriculture. These last two examples are also the subject of the CoSWoT [1] project, funded by the French National Research Agency (ANR). This initiative, in which Maxime Lefrançois is participating, aims to provide new knowledge around the use of the Semantic Web by devices.

To do so, the project’s researchers installed sensors and actuators in the INSA Lyon buildings on the LyonTech-la Doua campus, the Espace Fauriel building of Mines Saint-Etienne, and the INRAE experimental farm in Montoldre. These sensors record information, such as the opening of a window or the temperature and CO2 levels in a room. Thanks to a digital representation of a building or block, scientists can construct applications that use the information provided by the sensors, enrich it and make decisions that are transmitted to actuators.

Such applications can measure the CO2 concentration in a room, and according to a pre-set threshold, open the windows automatically for fresh air. This could be useful in the current pandemic context, to reduce the viral load in the air and thereby reduce the risk of infection. Beyond the pandemic, the same sensors and actuators can be used in other cases for other purposes, such as to prevent the build-up of pollutants in indoor air.

A dialog with maps

The main characteristic of the Semantic Web is that it registers information in knowledge graphs: kinds of maps made up of nodes representing objects, machines or concepts, and arcs that connect them to one another, representing their relationships. Each node and arc is registered with an Internationalized Resource Identifier (IRI): a code that makes it possible for machines to recognize each other and to identify and control objects such as a window, or concepts such as temperature.
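As a rough illustration of what such a graph looks like in practice, the short Python sketch below builds a tiny knowledge graph with the rdflib library. The IRIs are invented for the example and are not those used in the CoSWoT project; real deployments would typically reuse standard vocabularies.

```python
# Tiny knowledge-graph sketch built with rdflib.
# The IRIs below are invented for illustration; real deployments would reuse
# standard vocabularies (e.g. SOSA/SSN for sensors and observations).
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/building#")

g = Graph()
g.bind("ex", EX)

# Nodes (a room and a sensor) and the arcs that relate them.
g.add((EX.room12, RDF.type, EX.Room))
g.add((EX.co2sensor1, RDF.type, EX.CO2Sensor))
g.add((EX.co2sensor1, EX.isLocatedIn, EX.room12))
g.add((EX.co2sensor1, EX.lastReadingPpm, Literal(950)))

# A compact text serialization of the graph, e.g. to exchange with a device.
turtle = g.serialize(format="turtle")
if isinstance(turtle, bytes):  # older rdflib versions return bytes
    turtle = turtle.decode("utf-8")
print(turtle)
print(f"{len(g)} triples")
```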

Depending on the number of knowledge graphs built up and the amount of information they contain, a device will be able to identify objects and items of interest with varying degrees of precision. A graph that recognizes a temperature identifier will, depending on how detailed it is, also indicate the unit used to measure it. “By combining multiple knowledge graphs, you obtain a graph that is more complete, but also more complex,” declares Lefrançois. “The more complex the graph, the longer it will take for the machine to decrypt,” adds the researcher.

Means to optimize communication

The objective of the CoSWoT project is to simplify dialog between autonomous devices. It is a question of ‘integrating’ the complex processing linked with the Semantic Web into objects with low computing capacity, and of limiting the amount of data exchanged in wireless communication to preserve their batteries. This represents a challenge for Semantic Web research. “It needs to be possible to integrate and send a small knowledge graph in a tiny amount of data,” explains Lefrançois. This optimization makes it possible to improve the speed of data exchanges and related decision-making, as well as to contribute to greater energy efficiency.

With this in mind, the researcher is interested in what he calls ‘semantic interoperability’, with the aim of “ensuring that all kinds of machines understand the content of messages that they exchange,” he states. Typically, a connected window produced by one company must be able to be understood by a CO2 sensor developed by another company, which itself must be understood by the connected window. There are two approaches to achieve this objective. “The first is that machines use the same dictionary to understand their messages,” specifies Lefrançois. “The second involves ensuring that machines solve a sort of treasure hunt to find out how to understand the messages that they receive,” he continues. In this way, devices are not limited by language.

IRIs in service of language

Furthermore, solving these treasure hunts is made possible by IRIs and the use of the web. “When a machine receives an IRI, it does not necessarily know straight away how to use it,” declares Lefrançois. “If it receives an IRI that it does not know how to use, it can find information on the Semantic Web to learn how,” he adds. This is analogous to how humans may search online for expressions that they do not understand, or learn how to say a word in a foreign language that they do not know.
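The sketch below gives a rough idea of what this “treasure hunt” can look like in code: the device dereferences an unknown IRI over HTTP and reads whatever RDF description comes back. The IRI is hypothetical, and this is only one simple way of doing it, not the mechanism implemented in CoSWoT.

```python
# Sketch of the "treasure hunt": a device that receives an IRI it does not
# know can dereference it on the web and read the RDF description returned.
# The IRI below is hypothetical; a real device would receive it in a message.
from rdflib import Graph, URIRef

unknown_iri = URIRef("http://example.org/vocab#CO2Level")

g = Graph()
try:
    # rdflib fetches the IRI over HTTP (with content negotiation) and parses
    # whatever RDF description the server returns about the term.
    g.parse(str(unknown_iri))
except Exception as exc:
    print(f"Could not dereference {unknown_iri}: {exc}")

# Each triple about the IRI teaches the machine something: its type, a label,
# or how it relates to concepts the machine already knows.
for _, predicate, obj in g.triples((unknown_iri, None, None)):
    print(predicate, obj)
```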

However, for now, there are compatibility problems between various devices, due precisely to the fact that they are designed by different manufacturers. “In the medium term, the CoSWoT project could influence the standardization of device communication protocols, in order to ensure compatibility between machines produced by different manufacturers,” the researcher considers. It will be a necessary stage in the widespread roll-out of connected objects in our everyday lives and in companies.

While research firms are fighting to best estimate the position that the Internet of Things will hold in the future, all agree that the world market for this sector will represent hundreds of billions of dollars in five years’ time. As for the number of objects connected to the internet, there could be as many as 20 to 30 billion by 2030, i.e. far more than the number of humans. And with the objects likely to use the internet more than us, optimizing their traffic is clearly a key challenge.

[1] The CoSWoT project is a collaboration between the LIMOS laboratory (UMR CNRS 6158, which includes Mines Saint-Étienne), LIRIS (UMR CNRS 5205), the Hubert Curien laboratory (UMR CNRS 5516), INRAE, and the company Mondeca.

Rémy Fauvel


The virtualization of optical networks to support… 5G

Mobile networks are not entirely wireless. They also rely on a network of optical fibers, which connect antennas to the core network, among other things. With the arrival of 5G, optical networks must be able to keep up with the ramping up of the rest of the mobile network to ensure the promised quality of service. Two IMT Atlantique researchers are working on this issue, by making optical networks smarter and more flexible.  

In discussions of issues surrounding 5G, it is common to hear about the installation of a large number of antennas or the need for compatible devices. But we often overlook a crucial aspect of mobile networks: the fiber optic infrastructure on which they rely. Like previous generations, 5G relies on a wired connection in most cases. This technology is also used in the “last mile”: it connects antennas to core network equipment, which is linked to most of the connected machines around the world, and it can also connect various devices within the same antenna site.

In reality, 5G is even more dependent on this infrastructure than previous generations, since the next-generation technology comes with new requirements related to new uses, such as the Internet of Things (IoT). For example, an application such as an autonomous car requires high availability, perfect reliability, very low latency, etc. All of these constraints weigh on the overall architecture, which includes fiber optics. If optical networks cannot adapt to new demands within the last mile, the promises of 5G will be jeopardized. And new services (Industry 4.0, connected cities, telesurgery, etc.) will simply not be able to be provided in a reliable, secure way.

Facilitating network management through better interoperability

Today, optical networks are usually over-provisioned in relation to current average throughput needs. They are designed to be able to absorb 4G peak loads and are neither optimized, nor able to adapt intelligently to fluctuating demand. The new reality created by 5G therefore represents both a threat for infrastructure, in terms of its ability to respond to new challenges, and an opportunity to rethink its management.

Isabel Amigo and Luiz Anet Neto, telecommunications researchers at IMT Atlantique, are working with a team of researchers and PhD students to conduct research in this area. Their goal is to make optical networks smarter, more flexible and more independent from the proprietary systems imposed by vendors. A growing number of operators are moving in this direction. “At Orange, it used to be common to meet specialists in configuration syntaxes and equipment management for just one or two vendors,” explains Luiz Anet Neto, who worked for the French group for five years. “Now, teams are starting to set up a “translation layer” that turns the various configurations, which are specific to each vendor, into a common language that is more straightforward and abstract.”

This “translation layer”, on which he is working with other researchers, is called SDN, for Software-Defined Networking. This model is already used in the wireless part of the network and involves offloading certain functions of network equipment. Traditionally, this equipment fulfills many missions: data processing (receiving packets and forwarding them to their destination), as well as a number of control tasks (routing protocols, transmission interfaces, etc.). With SDN, equipment is relieved of these control tasks, which are centralized within an “orchestrator” entity that can control several devices at once.

Read more on I’MTech: What is SDN?

There are many benefits to this approach. It provides an overview of the network, making it easier to manage, while making it possible to control all of the equipment, regardless of its vendor without having to know any proprietary language. “To understand the benefit of SDN, we can use an analogy between a personal computer and the SDN paradigm,” says Isabel Amigo. “Today, it would be unthinkable to have a computer that would only run applications that use a specific language. So, machines have an additional layer – the operating system – that is in charge of “translating” the various languages, as well as managing resources, memory, disks etc. SDN therefore aims to act like an operating system, but for the network.” Similarly, the goal is to be able to install applications that are able to work on any equipment, regardless of the hardware vendor. These applications could, for example, distribute the load based on demand.
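To make the “operating system for the network” analogy more concrete, here is a deliberately simplified Python sketch of such a translation layer. The vendor adapters, class names and configuration commands are all invented for illustration; real SDN controllers rely on richer, standardized interfaces.

```python
# Hypothetical sketch of an SDN-style "translation layer": one abstract
# operation, several vendor-specific dialects. Class and command names are
# invented for illustration and do not correspond to real vendor CLIs.
from abc import ABC, abstractmethod


class VendorAdapter(ABC):
    """Translates an abstract intent into one vendor's configuration syntax."""

    @abstractmethod
    def set_bandwidth(self, port: str, gbps: int) -> str:
        ...


class VendorA(VendorAdapter):
    def set_bandwidth(self, port: str, gbps: int) -> str:
        return f"conf port {port} rate {gbps}G commit"


class VendorB(VendorAdapter):
    def set_bandwidth(self, port: str, gbps: int) -> str:
        return f"interface {port}\n  bandwidth {gbps} gbit\nexit"


def provision(adapters: dict[str, VendorAdapter], port: str, gbps: int) -> None:
    """The 'orchestrator': one abstract request, pushed to every device."""
    for name, adapter in adapters.items():
        print(f"--- {name} ---\n{adapter.set_bandwidth(port, gbps)}")


provision({"device-1 (vendor A)": VendorA(), "device-2 (vendor B)": VendorB()},
          port="1/1/1", gbps=10)
```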

Breaking our dependence on hardware vendors

SDN often goes hand in hand with another concept, inspired by virtualization in data centers: NFV (Network Functions Virtualization). Its principle: being able to execute any network functionality (not just control functions) on generic servers via software applications. “Usually, dedicated equipment is required for these functions,” says the IMT researcher. “For example, if you want to have a firewall, you need to buy a specific device from a vendor. With NFV, this is no longer necessary: you can implement the function on any server via an application.”

Read more on I’MTech: What is NFV?

As with SDN, the arrival of virtualization in optical networks promotes better interoperability. This makes it harder for vendors to require the use of their proprietary systems linked to their equipment. The market is also changing, by making more room for software developers. “But there is still a long way to go,” says Luiz Anet Neto. “Software providers can also try to make their customers dependent on their products, through closed systems. So operators have to remain vigilant and offer an increasing level of interoperability.”

Operators are working with the academic world precisely for this purpose. They would fully benefit from standardization, which would simplify the management of their optical networks. Laboratory tests carried out by IMT Atlantique in partnership with Orange provide them with technical information and areas to explore ahead of discussions with vendors and standardization bodies.

Sights are already set on 6G

For the research teams, there are many areas for development. First of all, the scientists are seeking to further demonstrate the value of their research, through testing focusing on a specific 5G service (up to now, the experiments have not applied to a specific application). Their aim is to establish recommendations for optical link dimensioning to connect mobile network equipment.

The goal is then to move towards smart optimization of optical networks. To provide an example of how findings by IMT Atlantique researchers may be applied, it is currently possible to add a “probe” that can determine if a path is overloaded and shift certain services to another link if necessary. The idea would then be to develop more in-depth mathematical modeling of the phenomena encountered, in order to automate incident resolution using artificial intelligence algorithms.

And it is already time for researchers to look toward the future of technology. “Mobile networks are upgraded at a dizzying pace; new generations come out every ten years,” says Luiz Anet Neto. “So we already have to be thinking about how to meet future requirements for 6G!”

Bastien Contreras


A standardized protocol to respond to the challenges of the IoT

The arrival of 5G has put the Internet of Things back in the spotlight, with the promise of an influx of connected objects in both the professional and private spheres. However, before witnessing the projected revolution, several obstacles remain. This is precisely what researchers at IMT Atlantique are working on, and they have already achieved results of global significance.

The Internet of Things (IoT) refers to the interconnection of various physical devices via the Internet for the purpose of sharing data. Sometimes referred to as the “Web 3.0”, this field is set to develop rapidly in the coming years, thanks to the arrival of new networks, such as 5G, and the proliferation of connected objects. Its applications are infinite: monitoring of health data, the connected home, autonomous cars, real-time and predictive maintenance on industrial devices, and more.

Although it is booming, the IoT still faces major challenges. “We need to respond to three main constraints: energy efficiency, interoperability and security,” explains Laurent Toutain, a researcher at IMT Atlantique. But there is one problem: these three aspects can be difficult to combine.

The three pillars of the IoT

First, energy is a key issue for the IoT. For most connected objects, the autonomy of a smartphone is not sufficient. In the future, a household may have several dozen such devices. If they each need to be recharged every two or three days, the user will have to devote several hours to this task. And what about factories that could be equipped with thousands of connected objects? In some cases, these are only of value if they have a long battery life. For example, a sensor could be used to monitor the presence of a fire extinguisher at its location and send an alert if it does not detect one. If you have to recharge its battery regularly, such an installation is no longer useful.

For a connected object, communication features account for the largest share of energy consumption. Thus, the development of IoT has been made possible by the implementation of networks, such as LoRa or Sigfox, allowing data to be sent while consuming little energy.

The second issue is interoperability, i.e. the ability of a product to work with other objects and systems, both current and future. Today, many manufacturers still rely on proprietary universes, which necessarily limits the functionalities offered by the IoT. Take the example of a user who has bought connected light bulbs from two different brands. They will not be able to control them via a single application.

Finally, the notion of security remains paramount within any connected system. This observation is all the more valid in the IoT, especially with applications involving the exchange of sensitive data, such as in the health sector. There are indeed many risks. An ill-intentioned user could intercept data during transmission, or send false information to connected objects, thus inducing wrong instructions, with potentially disastrous consequences.

Read more on I’MTech: The IoT needs dedicated security – now

On the Internet, methods are already in place to limit these threats. The most common is end-to-end data encryption. Its purpose is to make information unreadable while it is being transported, since the content can only be deciphered by the sender and receiver of the message.
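As a simple illustration of the principle (and not of any specific IoT protocol), the following Python sketch uses the cryptography library to make a payload unreadable to anyone who does not hold the key.

```python
# Minimal illustration of encrypting a payload so that it is unreadable in
# transit; only parties holding the key can decrypt it. This is a generic
# example with the 'cryptography' library, not a specific IoT scheme.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # shared secret between sender and receiver
fernet = Fernet(key)

ciphertext = fernet.encrypt(b"temperature=21.5;co2=950")
print(ciphertext)                    # opaque bytes for anyone intercepting them
print(fernet.decrypt(ciphertext))    # only the key holder recovers the payload
```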

Three contradictory requirements?

Unfortunately, each of the three characteristics can influence the others. For example, by multiplying the number of possible interlocutors, interoperability raises more security issues. But it also affects energy consumption. “Today, the Internet is a model of interoperability,” explains Laurent Toutain. “For this, it is necessary to send a large amount of information each time, with a high degree of redundancy. It offers remarkable flexibility, but it also takes up a lot of space.” This is only a minor disadvantage for a broadband network, but not for the IoT, which is constrained in its energy consumption.

Similarly, if you want to have a secure system, there are two main possibilities. The first is to close it off from the rest of the ecosystem, in order to reduce risks, which radically limits interoperability.

The second is to implement security measures, such as end-to-end encryption, which results in more data being sent, and therefore increased energy consumption.

Reducing the amount of data sent, without compromising security

For about seven years, Laurent Toutain and his teams have been working to reconcile these different constraints in the context of the IoT. “The idea is to build on what makes the current Internet so successful and adapt it to constrained environments,” says the researcher. “We are therefore taking up the principles of the encryption methods and protocols used today, such as HTTP, while taking into account the specific requirements of the IoT.”

The research team has developed a compression mechanism named SCHC (Static Context Header Compression, pronounced “chic”). It aims to improve the efficiency of encryption solutions and provide interoperability in low-power networks.

For this purpose, SCHC works on the headers of the usual Internet protocols (IP, UDP and CoAP), which contain various details: source address, destination address, location of the data to be read, etc. The particularity of this method is that it takes advantage of the specificity of the IoT: a simple connected object, such as a sensor, has far fewer functions than a smartphone. It is then possible to anticipate the type of data sent. “We can thus free ourselves from the redundancy of classic exchanges on the web,” says Laurent Toutain. “We then lose flexibility, which could be inconvenient for standard Internet use, but not for a sensor, which is limited in its applications.”

In this way, the team at IMT Atlantique has achieved significant results. It has managed to reduce the size of the headers traditionally sent, weighing 70-80 bytes, to only 2 bytes, and to 10 bytes in their encrypted version. “A quantity that is perfectly acceptable for a connected object and compatible with network architectures that consume very little energy,” concludes the researcher.
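To illustrate the general idea in a deliberately simplified way, the sketch below omits header fields that both ends already know from a shared static context and transmits only a short rule identifier. The rules, field values and sizes are invented for the example and do not reproduce the actual SCHC encoding.

```python
# Very simplified illustration of static-context header compression:
# fields that both ends already know (the "static context") are not sent;
# only a short rule ID is transmitted. Rules and values are invented for
# the example and do not reproduce the actual SCHC (RFC 8724) mechanism.

STATIC_CONTEXT = {
    1: {  # rule 1: this device always talks to the same server and ports
        "ipv6_src": "2001:db8::1",
        "ipv6_dst": "2001:db8::cafe",
        "udp_src": 8720,
        "udp_dst": 5683,   # default CoAP port
    }
}

def compress(header: dict) -> bytes:
    """Send only the rule ID if the header matches a known rule."""
    for rule_id, fields in STATIC_CONTEXT.items():
        if all(header.get(k) == v for k, v in fields.items()):
            return bytes([rule_id])          # 1 byte instead of dozens
    raise ValueError("no matching rule: send the full header instead")

def decompress(packet: bytes) -> dict:
    """The receiver rebuilds the full header from its copy of the context."""
    return dict(STATIC_CONTEXT[packet[0]])

header = {"ipv6_src": "2001:db8::1", "ipv6_dst": "2001:db8::cafe",
          "udp_src": 8720, "udp_dst": 5683}
compressed = compress(header)
print(len(compressed), "byte sent instead of the full header")
print(decompress(compressed) == header)
```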

A protocol approved by the IETF

But what about that precious interoperability? With this objective, the authors of the study approached the IETF (Internet Engineering Task Force), the international organization for Internet standards. The collaboration has paid off, as SCHC has been approved by the organization and now serves as the global standard for this type of compression. This recognition is essential, but it is only a first step towards effective interoperability. How can we now make sure that manufacturers really integrate the protocol into their connected objects? To this end, Laurent Toutain partnered with Alexander Pelov, also a researcher at IMT Atlantique, to found the start-up Acklio. The company works directly with manufacturers and offers them solutions to integrate SCHC into their products. It thus intends to accelerate the widespread adoption of the protocol, an effort supported in particular by €2 million in funds raised at the end of 2019.

Read more on I’MTech: Acklio, linking connected objects to the Internet

Nevertheless, manufacturers remain to be convinced that the use of a standard is also in their interest. To this end, Acklio also aims to position SCHC among the protocols used within 5G. To achieve this, it will have to prove itself with the 3GPP (3rd Generation Partnership Project), which brings together the world’s leading telecommunications standards bodies. “A much more constraining process than that of the IETF,” warns Laurent Toutain, however.

Bastien Contreras