
Invenis: machine learning for non-expert data users

Invenis went into incubation at Station F at the beginning of July, and has since been developing at full throttle. This start-up has managed to make a name for itself in the highly competitive sector of decision support solutions using data analysis. Its strength? Providing easy-to-use software aimed at non-expert users, which processes data using efficient machine learning algorithms.

 

In 2015, Pascal Chevrot and Benjamin Quétier, both working at the Ministry of Defense at the time, made an observation that prompted them to launch a business: most companies were using outdated digital decision support tools that were increasingly ill-suited to their needs. “On the one hand, traditional software was struggling to adapt to big data processing and artificial intelligence,” Pascal Chevrot explains. “On the other hand, expert tools existed, but they were inaccessible to anyone without significant technical knowledge.” Faced with this situation, the two colleagues founded Invenis in November 2015 and joined the ParisTech Entrepreneurs incubator. On July 3, 2017, less than two years later, they moved to Station F, one of the biggest start-up campuses in the world, located in the 13th arrondissement of Paris.

The start-up’s proposition is certainly appealing: it aims to fill the gap in available decision support tools with SaaS (Software as a Service) software. Its goal is to make the value held in data available to people who handle it every day to obtain information, but who are by no means experts. Invenis therefore targets professionals who know how to extract data and use it to obtain information, but who find themselves limited by the capabilities of their tools when they want to go further. With its solution, Invenis allows these professionals to carry out data processing using machine learning algorithms, simply.

Pascal Chevrot illustrates how simple it is to use with an example. He takes two data sets and uploads them to Invenis: one lists the number of sports facilities per activity and per department, the other the population of each city in France. The user then chooses the kind of processing they wish to perform from a library of modules. For example, they could first group the different kinds of sports facilities (football stadiums, boules pitches, swimming pools, etc.) by region of France. In parallel, the software aggregates the number of inhabitants per commune to provide a population value on a regional scale. Once these steps are complete, the user can run an automated segmentation, or “clustering”, to classify regions into groups according to their density of sports facilities. In a few clicks, Invenis thus lets users see which regions have a high number of sports facilities relative to their population and which have few, and where investment is therefore needed. Each operation on the data is performed simply by dragging the processing module for the desired procedure into the interface and chaining these modules into a complete data processing session.
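For readers who want to picture what such a processing chain does under the hood, here is a minimal sketch in Python of the aggregation and clustering steps described above. The file names, column names and number of clusters are assumptions made for illustration; this is not Invenis’ code.

```python
# A hypothetical sketch of the processing chain described above, using pandas and
# scikit-learn. File names, column names and the number of clusters are assumptions.
import pandas as pd
from sklearn.cluster import KMeans

facilities = pd.read_csv("sports_facilities.csv")   # one row per facility, with "region" and "facility_type"
population = pd.read_csv("population_by_city.csv")  # one row per commune, with "region" and "inhabitants"

# Step 1: count facilities of each type per region (the "grouping" module).
facilities_per_region = (facilities
                         .groupby(["region", "facility_type"])
                         .size()
                         .unstack(fill_value=0))

# Step 2: aggregate commune populations up to the regional scale.
population_per_region = population.groupby("region")["inhabitants"].sum()

# Step 3: express facilities per 100,000 inhabitants, then cluster the regions.
density = facilities_per_region.div(population_per_region, axis=0) * 100_000
labels = KMeans(n_clusters=3, random_state=0).fit_predict(density)

print(pd.Series(labels, index=density.index, name="cluster").sort_values())
```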

The user-friendly nature of the Invenis software lies in how simple it is to use these processing modules. Every action has been designed to be simple for the user to understand. The algorithms come from open source libraries Hadoop and Spark, which are references in the sector. “We then add our own algorithms to these existing algorithms, making them easier to manage”, highlights Pascal Chevrot.

For example, the clustering algorithm they use ordinarily requires a number of parameters to be defined beforehand. Invenis’ processing module calculates these parameters automatically using its proprietary algorithms, while still allowing expert users to modify them if necessary.
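How Invenis computes those parameters is proprietary, but the general idea can be illustrated with a common heuristic: choose the number of clusters that maximizes the silhouette score, while keeping the parameter exposed so an expert can override it. A hedged sketch:

```python
# Invenis' parameter-selection algorithms are proprietary; this only illustrates one
# common heuristic for picking the number of clusters automatically (silhouette score),
# while still letting an expert override it.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def auto_kmeans(X, k=None, k_range=range(2, 11)):
    """Cluster X; if k is not given, choose it by maximizing the silhouette score."""
    if k is None:
        scores = {c: silhouette_score(X, KMeans(n_clusters=c, random_state=0).fit_predict(X))
                  for c in k_range if c < len(X)}
        k = max(scores, key=scores.get)
    return KMeans(n_clusters=k, random_state=0).fit_predict(X), k

X = np.random.default_rng(0).normal(size=(200, 4))   # stand-in for the prepared regional data
labels, chosen_k = auto_kmeans(X)
print(f"automatically selected k = {chosen_k}")
```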

In addition to being simple to use, the Invenis program has other advantages, notably fine-grained management of data access rights. “Few tools do this,” affirms Pascal Chevrot, before explaining why the feature matters: “For some businesses, such as telecommunication operators, it’s important because they have to answer to the CNIL (the French data protection authority) for the confidentiality of their data, and soon this will also be the case across Europe, with the arrival of the GDPR. Not to mention that more established businesses have put data governance in place around these questions.”

 

Revealing the value of data

Another advantage of Invenis is that it offers different engagement formats. The start-up offers free trial periods to data users who are dissatisfied with their current tools, along with the opportunity to talk to the technical management team, who can demonstrate the tool’s capabilities and even develop a proof of concept. However, the start-up also has a support and advice service for businesses that have identified a problem they would like to solve using their data. “We offer clients guaranteed results, assisting them in resolving their problem with the intention of ultimately making them independent,” explains the co-founder.

It was in this second format that Invenis carried out its most iconic proof of concept, with CityTaps, another start-up from ParisTech Entrepreneurs, which offers prepaid water meters. The Invenis software allowed CityTaps to look at three questions. First, how do users consume water depending on the day of the week, household size, season, etc.? Second, what is the best moment to warn a user that they need to top up their meter, and how quickly do they do so after receiving an alert SMS? And finally, how can temperature changes in the meters due to the weather best be predicted? Invenis provided answers to these questions by applying its processing solutions to CityTaps’ data.

The case of CityTaps shows just how crucial data management tools are for companies. Machine learning and intelligent data processing are essential for generating value, yet these technologies can be difficult to access without sufficient technical knowledge. Enabling businesses to reach this value by lowering the skills barrier is Invenis’ number one aim, and, as Pascal Chevrot concludes, it is precisely what the company sets out to provide.


Ethics, an overlooked aspect of algorithms?

We now encounter algorithms at every moment of the day. But this exposure can be dangerous. It has been proven to influence our political opinions, moods and choices. Far from being neutral, algorithms carry their developers’ value judgments, which are imposed on us without our noticing most of the time. It is now necessary to raise questions about the ethical aspects of algorithms and find solutions for the biases they impose on their users.

 

[dropcap]W[/dropcap]hat exactly does Facebook do? Or Twitter? More generally, what do social media sites do? The overly-simplified but accurate answer is: they select the information which will be displayed on your wall in order to make you spend as much time as possible on the site. Behind this time-consuming “news feed” hides a selection of content, advertising or otherwise, optimized for each user through a great reliance on algorithms. Social networks use these algorithms to determine what will interest you the most. Without questioning the usefulness of these sites — this is most likely how you were directed to this article — the way in which they function does raise some serious ethical questions. To start with, are all users aware of algorithms’ influence on their perception of current events and on their opinions? And to take a step further, what impacts do algorithms have on our lives and decisions?

For Christine Balagué, a researcher at Télécom École de Management and member of CERNA (see text box at the end of the article), “personal data capturing is a well-known topic, but there is less awareness about the processing of this data by algorithms.” Although users are now more careful about what they share on social media, they have not necessarily considered how the service they use actually works. And this lack of awareness is not limited to Facebook or Twitter. Algorithms now permeate our lives and are used in all of the mobile applications and web services we use. All day long, from morning to night, we are confronted with choices, suggestions and information processed by algorithms: Netflix, Citymapper, Waze, Google, Uber, TripAdvisor, AirBnb, etc.

Are your trips determined by Citymapper? Or by Waze? Our mobility is increasingly dependent on algorithms. Illustration: Diane Rottner for IMTech

 

“They control our lives,” says Christine Balagué. “A growing number of articles published by researchers in various fields have underscored the power algorithms have over individuals.” In 2015, Robert Epstein, a researcher at the American Institute for Behavioral Research, demonstrated how a search engine could influence election results. His study, carried out with over 4,000 individuals, showed that candidates’ rankings in search results influenced at least 20% of undecided voters. In another striking example, a study carried out by Facebook in 2012 on 700,000 of its users showed that people who had previously been exposed to negative posts went on to post predominantly negative content, while those who had previously been exposed to positive posts posted essentially positive content. This shows that algorithms are likely to manipulate individuals’ emotions without their realizing it or being informed of it. What role do our personal preferences play in a system of algorithms of which we are not even aware?

 

The opaque side of algorithms

One of the main ethical problems with algorithms stems from this lack of transparency. Two users who carry out the same query on a search engine such as Google will not have the same results. The explanation provided by the service is that responses are personalized to best meet the needs of each of these individuals. But the mechanisms for selecting results are opaque. Among the parameters taken into account to determine which sites will be displayed on the page, over a hundred have to do with the user performing the query. Under the guise of trade secret, the exact nature of these personal parameters and how they are taken into account by Google’s algorithms is unknown. It is therefore difficult to know how the company categorizes us, determines our areas of interest and predicts our behavior. And once this categorization has been carried out, is it even possible to escape it? How can we maintain control over the perception that the algorithm has created about us?

This lack of transparency prevents us from understanding possible biases which could result from data processing. Nevertheless, these biases do exist, and protecting ourselves from them is a major issue for society. A study by Grazia Cecere, an economist at Télécom École de Management, provides an example of how individuals are not treated equally by algorithms. Her work has highlighted discrimination between men and women in a major social network’s algorithms for associating interests. “In creating an ad for STEM fields (science, technology, engineering, mathematics), we noticed that the software demonstrated a preference for distributing it to men, even though women show more interest in this subject,” explains Grazia Cecere. Far from the myth of malicious artificial intelligence, this sort of bias is rooted in human actions. We must not forget that behind each line of code, there is a developer.
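Detecting this kind of bias ultimately comes down to comparing delivery rates across groups. The toy calculation below uses invented numbers and is not Grazia Cecere’s actual protocol; it only shows the sort of disparity measurement involved.

```python
# A minimal, hypothetical way to quantify the kind of delivery bias described above:
# compare how often the same ad is shown to men and women, relative to the sizes of
# the two audiences. The numbers are invented for the example.
men_audience, women_audience = 50_000, 50_000        # eligible users in each group
men_impressions, women_impressions = 6_200, 3_800    # times the STEM ad was actually delivered

men_rate = men_impressions / men_audience
women_rate = women_impressions / women_audience
print(f"delivery rate: men {men_rate:.1%}, women {women_rate:.1%}, ratio {men_rate / women_rate:.2f}")
```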

Algorithms are used first and foremost to propose services, which are most often commercial in nature. They are thus part of a company’s strategy and reflect this strategy in order to respond to its economic demands. “Data scientists working on a project seek to optimize their algorithms without necessarily thinking about the ethical issues involved in the choices made by these programs,” points out Christine Balagué. In addition, humans have perceptions about the society to which they belong and integrate these perceptions, either consciously or unconsciously, into the software they develop. Indeed, the value judgments present in algorithms quite often reflect those of their creators. In the example of Grazia Cecere’s work, this provides a simple explanation for the bias discovered: “An algorithm learns what it is asked to learn and replicates stereotypes if they are not removed.”


What biases are hiding in the digital tools we use every day? What value judgments passed down from algorithm developers do we encounter on a daily basis? Illustration: Diane Rottner for IMTech.

 

A perfect example of this phenomenon involves medical imaging. An algorithm used to classify a cell as sick or healthy must be configured to strike a balance between the number of false positives and false negatives. Developers must therefore decide to what extent it is tolerable for healthy individuals to receive positive tests in order to prevent sick individuals from receiving negative tests. Doctors would rather have false positives than false negatives, while the scientists who develop the algorithms prefer false negatives to false positives, as scientific knowledge is cumulative. Depending on their own values, developers will side with one of these preferences.
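In practice, this trade-off is usually set through a decision threshold on the classifier’s output score. The synthetic sketch below (invented score distributions, not a medical tool) shows how moving that threshold exchanges false positives for false negatives.

```python
# How the tolerance for false positives vs. false negatives is "configured" in practice:
# a classifier outputs a score, and the threshold chosen by the developer fixes the trade-off.
# Scores and labels below are synthetic; this is an illustration only.
import numpy as np

rng = np.random.default_rng(0)
healthy_scores = rng.normal(0.3, 0.15, 1000)   # scores the model gives to healthy cells
sick_scores = rng.normal(0.7, 0.15, 200)       # scores it gives to sick cells

for threshold in (0.4, 0.5, 0.6):
    false_positives = (healthy_scores >= threshold).mean()   # healthy flagged as sick
    false_negatives = (sick_scores < threshold).mean()       # sick missed
    print(f"threshold {threshold:.1f}: FP rate {false_positives:.1%}, FN rate {false_negatives:.1%}")
```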

 

Transparency? Of course, but that’s not all!

One proposal for combating these biases is to make algorithms more transparent. Since October 2016, the Law for a Digital Republic, proposed by Axelle Lemaire, the former Secretary of State for Digital Affairs, requires transparency for all public algorithms. This law was responsible for making the code of the higher education admission website (APB) available to the public. Companies are also increasing their efforts toward transparency. Since May 17, 2017, Twitter has allowed its users to see the areas of interest the site associates with them. But despite these good intentions, the level of transparency is far from sufficient to ensure the ethical dimension. First of all, code understandability is often overlooked: algorithms are sometimes delivered in formats which make them difficult to read and understand, even for professionals. Furthermore, transparency can be artificial. In the case of Twitter, “no information is provided about how user interests are attributed,” observes Christine Balagué.

[Screenshot of Twitter’s “Interests from Twitter” panel: “These are some of the interests matched to you based on your profile and activity. You can adjust them if something doesn’t look right.”]

Which of this user’s posts led to his being classified under “Action and Adventure,” a very broad category? How are “Scientific news” and “Business and finance” weighed in order to display content in the user’s Twitter feed?

 

To take a step further, the degree to which algorithms are transparent must be assessed. This is the aim of the TransAlgo project, another initiative launched by Axelle Lemaire and run by Inria. “It’s a platform for measuring transparency by looking at what data is used, what data is produced and how open the code is,” explains Christine Balagué, a member of TransAlgo’s scientific council. The platform is the first of its kind in Europe, making France a leading nation in transparency issues. Similarly, DataIA, a convergence institute for data science established on Plateau de Saclay for a period of ten years, is a one-of-a-kind interdisciplinary project involving research on algorithms in artificial intelligence, their transparency and ethical issues.

The project brings together multidisciplinary scientific teams in order to study the mechanisms used to develop algorithms. The humanities can contribute significantly to the analysis of the values and decisions hiding behind the development of codes. “It is now increasingly necessary to deconstruct the methods used to create algorithms, carry out reverse engineering, measure the potential biases and discriminations and make them more transparent,” explains Christine Balagué. “On a broader level, ethnographic research must be carried out on the developers by delving deeper into their intentions and studying the socio-technological aspects of developing algorithms.” As our lives increasingly revolve around digital services, it is crucial to identify the risks they pose for users.

Further reading: Artificial Intelligence: the complex question of ethics

[box type=”info” align=”” class=”” width=””]

A public commission dedicated to digital ethics

Since 2009, the Allistene association (Alliance of digital sciences and technologies) has brought together France’s leading players in digital technology research and innovation. In 2012, this alliance decided to create a commission to study ethics in digital sciences and technologies: CERNA. On the basis of multidisciplinary studies combining expertise and contributions from all digital players, both nationally and worldwide, CERNA raises questions about the ethical aspects of digital technology. In studying such wide-ranging topics as the environment, healthcare, robotics and nanotechnologies, it strives to increase technology developers’ awareness and understanding of ethical issues.[/box]

 

 

 


How is technology changing the management of human resources in companies?

In business, digital technology is revolutionizing more than just production and design. The human resources sector is also being greatly affected, whether through better talent spotting, optimized recruitment processes or getting employees more involved in the company’s vision. This is illustrated by two start-ups incubated at ParisTech Entrepreneurs, KinTribe and Brainlinks.

 

Recruiters in large companies can sometimes store tens of thousands of profiles in their databases. However, it is often difficult to make use of such a substantial pool of information using conventional methods. “It’s impossible to keep such a large file up-to-date, so the data often become obsolete very quickly,” states Chloé Desault, a former in-company recruiter and co-founder of the start-up KinTribe. “Along with Louis Laroche, my co-founder, who was also formerly a recruiter, we aim to facilitate the use of these talent pools and improve the daily lives of recruiters,” she adds. The software solution enables recruitment professionals to build a talent pool from professional social networks. With KinTribe, they can create a usable database in which they can perform complex searches to find, out of tens of thousands of available profiles, the best person to contact for a given need. “This means they no longer have to waste time on people who do not correspond to the target in question,” affirms the co-founder.

The software’s algorithms then process the collected data to produce a rating for the relevant market. This rating indicates how receptive a person is likely to be to an external recruitment offer. “70% of people on LinkedIn aren’t actively looking for a job, but would still consider a job offer if it was presented to them,” Louis Laroche explains. To identify these people, and how likely they are to be interested, the algorithm relies on key signals identified by recruiters. Age, field of work and time spent in the last position are all factors that can influence how open someone is to a proposition.
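KinTribe’s actual model is not public, but the idea of turning such signals into a score can be pictured with a toy weighted rule. The weights, inputs and 0-100 scale below are invented purely for illustration.

```python
# A toy illustration of deriving a "market rating" from the kinds of signals mentioned above.
# The weights and assumptions are invented; this is not KinTribe's algorithm.
def market_rating(years_in_current_job: float, sector_demand: float, seniority: float) -> float:
    """Return a 0-100 score for how open a profile might be to an external offer.
    sector_demand and seniority are assumed to be normalized to [0, 1]."""
    openness = 0.0
    openness += min(years_in_current_job / 4, 1.0) * 50   # long tenure: more likely to consider a move
    openness += sector_demand * 30                        # in-demand field: more solicited, more aware of the market
    openness += (1 - seniority) * 20                      # assumption: very senior profiles move less often
    return round(openness, 1)

print(market_rating(years_in_current_job=5, sector_demand=0.8, seniority=0.4))  # e.g. 86.0
```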

One of the start-up’s next goals is to add new sources of data into the mix, allowing their users to search other networks for new talent. Multiplying the available data will also improve the market rating algorithms. “We want to provide recruiters with the best possible knowledge by aggregating the maximum amount of social data that we can,” summarizes the KinTribe co-founder.

Finally, the two entrepreneurs are also interested in other topics within the field of recruitment. “As a start-up, we have to try to stay ahead of the curve and understand what the market will do next. We dedicated part of our summer to exploring the potential of a new co-optation product,” Chloé Desault concludes.

 

From recruitment to employee involvement

In human resources, software tools represent more than just an opportunity for recruiting new talent. One of their aims is also to get employees involved in the company’s vision, and to listen to them in order to pinpoint their expectations. The start-up Brainlinks was created for this very reason. Today, it offers a mobile app called Toguna for businesses with over 150 people.

The concept is simple: with Toguna, general management or human resources departments can ask employees a question such as: “What is your view of the company of the future?” or “What new things would you like to see in the office?” The employees, who remain anonymous on the app, can then select the questions they are interested in and offer responses that will be made public. If a response made by a colleague is interesting, other employees can vote for it, thus creating a collective form of involvement based on questions about life at work.

In order to make Toguna appeal to the maximum number of people, Brainlinks has opted for a smart, professional design: “Contributions are formatted by the person writing them; they can add an image and choose the font, etc.”, explains Marc-Antoine Garrigue, the start-up’s co-founder. “There is an element of fun that allows each person to make their contributions their own”, he continues. According to Marc-Antoine Garrigue, this feature has helped them reach an average employee participation rate of 85%.

Once the votes have been cast and the propositions collected, managers can analyze the responses. When a response is chosen, it is highlighted on the app, providing transparency in the inclusion of employee ideas. A field of development for the app is to continue improving the dialogue between managers and employees. “We hope to go even further in constructing a collective history: employees make contributions on a daily basis and in return, the management can explain the decisions they have made after having consulted these”, outlines the co-founder. This is an opportunity that could really help businesses to see digital transformation as a vehicle for creativity and collective involvement.

 

Fine particles are dangerous, and not just during pollution peaks

Véronique Riffault, IMT Lille Douai – Institut Mines-Télécom and François Mathé, IMT Lille Douai – Institut Mines-Télécom

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]T[/dropcap]he French Agency for Food, Environmental and Occupational Health and Safety (ANSES) released a new notice concerning air pollution yesterday. Asked about potential changes to ambient air quality standards, particularly concerning fine particles (PM10 and PM2.5), the organization highlighted the importance of pursuing long-term public policies that improve air quality. It recommends lowering the annual threshold value for PM2.5 to match the WHO recommendation, and introducing a daily threshold value for this pollutant. As the following data visualization shows, the problem extends throughout Europe.

Average concentrations of particulate matter with an aerodynamic diameter below 2.5 micrometers (known as “PM2.5”, which together with PM10 makes up “fine particles”) for the year 2012. Values calculated from measurements at fixed air quality monitoring stations, expressed in micrograms per cubic meter of air (µg/m3). Data source: AirBase.

The level reached in peak periods is shown by hovering the mouse over a given circle, whose size varies with the amount. The annual average is also provided, reflecting long-term exposure and its proven impact on health (particularly on the respiratory and cardiovascular systems). It should be noted that the annual target value for PM2.5 specified by European legislation is currently 25 µg/m3. This will drop to 20 µg/m3 in 2020, whereas the WHO currently recommends an annual threshold of 10 µg/m3.
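To make those thresholds concrete, here is a small sketch that classifies annual PM2.5 averages against the WHO guideline (10 µg/m3), the future EU value (20 µg/m3) and the current EU target (25 µg/m3). The station names and values are invented for the example.

```python
# Classify annual PM2.5 means against the thresholds discussed in the article.
# Station names and values are invented for the example.
WHO_GUIDELINE, EU_2020_LIMIT, EU_CURRENT_TARGET = 10, 20, 25  # µg/m3, annual means

def classify(annual_mean: float) -> str:
    if annual_mean <= WHO_GUIDELINE:
        return "within the WHO guideline"
    if annual_mean <= EU_2020_LIMIT:
        return "above the WHO guideline but within the future EU limit"
    if annual_mean <= EU_CURRENT_TARGET:
        return "within the current EU target only"
    return "above the current EU target"

for station, value in {"rural station A": 8.5, "urban station B": 14.2, "Po Valley station C": 27.9}.items():
    print(f"{station}: {value} µg/m3 -> {classify(value)}")
```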

The data shown on this map correspond exclusively to so-called “background” sites, covering both urban and rural environments that are not influenced by nearby pollution sources (traffic or industry). AirBase also collects data supplied by member states using measuring methods that may vary from site to site but must always meet the pollutant-specific data quality objectives (90% of PM2.5 data approved annually, with an uncertainty of ±25%). This perhaps explains why certain regions show little or no data (Belarus, Ukraine, Bosnia-Herzegovina and Greece), keeping in mind that a single station cannot be representative of the air quality across an entire country (as is the case in Macedonia).

The PM2.5 shown here may be emitted directly into the atmosphere (primary particles) or formed by chemical reactions between gaseous pollutants in the atmosphere (secondary particles). This secondary formation of PM2.5 often lies behind the pollution peaks observed at certain times of the year, when the sources of the pollutants are most significant and meteorological conditions allow them to accumulate. Human-related sources are mainly linked to combustion processes (such as vehicle engines or the burning of biomass and coal for residential heating) and agricultural activity.

The map above shows that the threshold suggested by the WHO is exceeded at a large majority of stations, particularly in Central Europe (Slovakia, southern Poland), due to residential heating practices, and in Northern Italy (the Po Valley), which suffers from unfavorable topographical and meteorological conditions.

Currently, only 1.16% of stations record measurements that remain within the WHO recommendations for PM2.5 (shown in light green on the map). In addition, 13.6% of stations already meet the future European limit that will apply in 2020 (shown as green and orange circles).

This illustrates that a large section of the European population is exposed to particle concentrations that are harmful to health, and that significant efforts remain to be made. Moreover, while the mass concentration of particulates is a good indicator of air quality, their chemical composition should not be forgotten, something that remains a challenge for health specialists and policymakers, especially in real time.

Véronique Riffault, Professor in Atmospheric Sciences, IMT Lille Douai – Institut Mines-Télécom and François Mathé, Professor-Researcher, President of the AFNOR X43D Normalization Commission “Ambient Atmospheres”, Head of Studies at LCSQA (Laboratoire Central de Surveillance de la Qualité de l’Air), IMT Lille Douai – Institut Mines-Télécom

 

The original version of this article was published in French in The Conversation France.

 

 


No autonomous cars without cybersecurity

Protecting cars from cyber-attacks is an increasingly important concern in developing smart vehicles. As these vehicles become more complex, the number of potential attack vectors grows, as do the constraints on protection algorithms. Research is being carried out to address this problem, as illustrated by the “Connected cars and cybersecurity” chair launched by Télécom ParisTech on October 5. Scientists intend to take on these challenges, which are crucial to the development of autonomous cars.

 

Connected cars already exist. From smartphones connected to the dashboard to computer-aided maintenance operations, cars are packed with communicating embedded systems. And yet, they still seem a long way from the futuristic vehicles we have been imagining. They do not (yet) all communicate with one another or with road infrastructure to warn of dangerous situations, for example. Cars are struggling to make the leap from “connected” to “intelligent”. And without intelligence, they will never become autonomous. Guillaume Duc, a research professor in electronics at Télécom ParisTech who specializes in embedded systems, perfectly sums up one of the hurdles to this development: “Autonomous cars will not exist until we are able to guarantee that cyber-attacks will not put a smart vehicle, its passengers or its environment in danger.”

Cybersecurity for connected cars is indeed crucial to their development. Rightly or wrongly, no authority will authorize the sale of increasingly intelligent vehicles without a guarantee that they will not run out of control on the roads. The topic is of such importance in the industry that researchers and manufacturers have teamed up to find solutions. A “Connected Cars and Cybersecurity” chair bringing together Télécom ParisTech, Fondation Mines-Télécom, Renault, Thalès, Nokia, Valéo and Wavestone was launched on October 5. According to Guillaume Duc, the specific features of connected cars make this a unique research topic.

“The security objectives are obviously the same as in many other systems,” he says, pointing to the problems of information confidentiality or of certifying that information has really been sent by one sensor rather than another. “But cars have a growing number of components, sensors, actuators and communication interfaces, making them easier to hack,” he goes on to say. The more devices there are in a car, the more communication points it has with the outside world. And it is precisely these entry points which are the most vulnerable. However, these are not necessarily the instruments that first come to mind, like radio terminals or 4G.

Certain tire pressure sensors use wireless communication to display a possible flat tire on the dashboard. But wireless communication means that without an authentication system to ensure that the received information has truly been sent by this sensor, anyone can pretend to be this sensor from outside the car. And if you think sending incorrect information about tire pressure seems insignificant, think again. “If the central computer expects a value of between 0 and 10 from the sensor and you send it a negative number, for example, you have no idea how it will react,” explains the researcher. This could crash the computer, potentially leading to more serious problems for the car’s controls.
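Two of the defenses implied by this example can be sketched in a few lines: checking that a received value is physically plausible, and authenticating each frame so that a device outside the car cannot pass itself off as the sensor. This is only an illustration under assumed parameters (a pre-shared key, a 0-10 bar plausibility range, an HMAC tag), not an actual automotive protocol.

```python
# Two simplified defenses for a wireless tire pressure frame:
# (1) reject out-of-range values before they reach the central computer, and
# (2) authenticate each frame so an attacker cannot impersonate the sensor.
# Illustration only; the key, ranges and frame format are assumptions.
import hmac, hashlib, struct

SHARED_KEY = b"per-sensor secret provisioned at manufacture"  # assumption for the example

def make_frame(pressure_bar: float, counter: int) -> bytes:
    payload = struct.pack(">fI", pressure_bar, counter)       # value + anti-replay counter
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()[:8]
    return payload + tag

def accept_frame(frame: bytes) -> bool:
    payload, tag = frame[:-8], frame[-8:]
    if not hmac.compare_digest(tag, hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()[:8]):
        return False                                           # not sent by the genuine sensor
    pressure, _counter = struct.unpack(">fI", payload)
    return 0.0 <= pressure <= 10.0                             # plausibility check: 0-10 bar expected

print(accept_frame(make_frame(2.4, counter=1)))   # True
print(accept_frame(b"\x00" * 20))                 # False: bad tag, frame rejected
```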

 

Adapting cybersecurity mechanisms for cars

The stakes are high for research on how to protect each of these communicating elements. These components have only limited computing power, while algorithms that protect against attacks usually demand a lot of it. “One of the chair’s aims is to successfully adapt algorithms to guarantee security while requiring fewer computing resources,” says Guillaume Duc. This challenge goes hand in hand with another: limiting latency in the car’s execution of critical decisions. Adding algorithms to embedded systems increases the computing time before an action is carried out. But a car cannot afford to take longer to brake. Researchers therefore have their work cut out for them.

In order to address these challenges, they are looking to the avionics sector, which has been facing problems associated with the proliferation of sensors for years. But unlike planes, fleets of cars are not operated in an ultra-controlled environment. And in contrast to aircraft pilots, drivers are masters of their own cars and may handle them as they like. Cars are also serviced less regularly. It is therefore crucial to guarantee that cybersecurity tools installed in cars cannot be altered by their owners’ tinkering.

And since absolute security does not exist and “algorithms may eventually be broken, whether due to unforeseen vulnerabilities or improved attack techniques,” as the researcher explains, these algorithms must also be agile, meaning that they can be adapted, upgraded and improved without automakers having to recall an entire series of cars.

 

Resilience when faced with an attack

But if absolute security does not exist, where does this leave the 100% security guarantee against attacks, which is the critical factor in developing autonomous cars? In reality, researchers do not seek to protect against every possible attack on connected cars. Their goal is rather to ensure that even if an attack succeeds, it will not prevent the driver or the car itself from remaining safe. And of course, this must be possible without having to brake suddenly on the highway.

To reach these objectives, researchers are drawing on their expertise to build resilience into embedded systems. The problem recalls that of critical infrastructures, such as nuclear power plants, which cannot simply shut down when under attack. In the case of cars, a malicious intrusion into the system must first be detected when it occurs. To do so, the vehicle’s behavior is constantly compared with previously recorded behaviors considered normal. If an action is suspicious, it is flagged as such. In the event of a real attack, it is crucial to guarantee that the car’s main functions (steering, brakes, etc.) will be maintained and isolated from the rest of the system.
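The detection step can be pictured as a baseline-and-deviation test on vehicle signals. The sketch below (invented readings, a crude threshold rule) only illustrates the principle of comparing observed behavior to a recorded normal profile; real automotive intrusion detection is considerably more elaborate.

```python
# A highly simplified illustration of the detection idea: learn what "normal" looks like
# for one vehicle signal, then flag observations that deviate too far from it.
import statistics

normal_speeds = [48, 50, 52, 49, 51, 50, 47, 53, 50, 49]          # previously recorded normal behavior
baseline_mean = statistics.mean(normal_speeds)
baseline_std = statistics.stdev(normal_speeds)

def is_suspicious(observed_speed: float, tolerance: float = 4.0) -> bool:
    """Flag a reading that is more than `tolerance` standard deviations from the baseline."""
    return abs(observed_speed - baseline_mean) > tolerance * baseline_std

for reading in (51, 54, 180):
    print(reading, "suspicious" if is_suspicious(reading) else "normal")
```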

Ensuring a car’s resilience from its design phase, resilience by design, is also the most important condition for cars to continually become more autonomous. Automakers can provide valuable insight for researchers in this area, by contributing to discussions about a solution’s technical acceptability or weighing in on economic issues. While it is clear that autonomous cars cannot exist without security, it is equally clear that they will not be rolled out if the cost of security makes them too expensive to find a market.

[box type=”info” align=”” class=”” width=””]

Cars: personal data on wheels!

The smarter the car, the more personal data it contains. Determining how to protect this data will also be a major research area for the “Connected cars and cybersecurity” chair. To address this question, it will work in collaboration with another Télécom ParisTech chair, dedicated to “Values and policies of personal information”, which also brings together Télécom SudParis and Télécom École de Management. This collaboration will make it possible to explore the legal and social aspects of exchanging personal data between connected cars and their environment. [/box]


With Ledgys, blockchain becomes a tangible reality

Winner of the blockchain BNP jury prize at VivaTechnology last year, Ledgys hopes to reaffirm its relevance once more this year. From June 15 to 17, 2017, the company will be attending the event again to present its Ownest application. The application is aimed at both businesses and consumers, offering a simple solution for using blockchain technology in practical cases such as logistics management or an authenticity certificate for a handbag.

 

Beyond the fantasy surrounding blockchain, what is the reality of this technology? Ledgys, a start-up founded in April 2016 and currently incubated at Télécom ParisTech, answers this question in a very appealing way. With its app, Ownest, it offers professionals and individuals easy access to their blockchain portfolio. Users can easily visualize and manage the objects and products they own that are recorded in the decentralized register, whether that be a watch, a container or a pallet on its journey between a distributor and a shop.

To illustrate the application’s potential, Clément Bergé-Lefranc, co-founder of Ledgys, uses the example of a company producing luxury items: “a brand that produces 1,000 individually-numbered bags can create a digital asset for each one that is included in the blockchain. These assets will accompany the product throughout its whole life cycle.” From conception to destruction, including distribution and resale from individual to individual, the digital asset will follow the bag with which it is associated through all phases. Each time the bag moves from one actor of the chain to another, the transaction of the digital asset is recorded in the blockchain, proving that the exchange really took place.

“This transaction is approved by thousands of computers, and its certification is more secure than traditional financial exchanges”, claims Clément Bergé-Lefranc. The blockchain basically submits each transaction to be validated by other users of the technology. The proof of the exchange is therefore certified by all the other participants, and is then recorded alongside all other transactions that have been carried out. It is impossible to then go back and alter this information.
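To make the mechanism concrete, here is a toy, append-only chain of asset transfers in which each record is hashed together with the previous one, so that past entries cannot be altered without breaking the chain. Ownest relies on a real public blockchain and distributed validation by many computers; none of that infrastructure is reproduced in this sketch.

```python
# A toy, append-only chain of asset transfers (not Ledgys' implementation):
# each entry embeds the hash of the previous one, so altering history breaks the chain.
import hashlib, json, time

chain = []

def record_transfer(asset_id: str, sender: str, receiver: str) -> dict:
    previous_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {"asset": asset_id, "from": sender, "to": receiver,
             "time": time.time(), "prev": previous_hash}
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    chain.append(entry)
    return entry

record_transfer("bag-0042", "workshop", "distributor")
record_transfer("bag-0042", "distributor", "boutique")
print(chain[-1]["prev"] == chain[0]["hash"])   # True: each record is anchored to the one before
```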

Also read on I’MTech: What is a blockchain?

With Ownest, users have easy access to information on the assets of the products they own. The app allows users to transfer a title to another person in a matter of seconds. There are a whole host of advantages for the consumer. In the example of a bag made by a luxury brand, the asset certifies that the product does indeed come from the manufacturer’s own workshops and that it is not a fake. Should they wish to then resell the bag to an individual, they can prove that the object is authentic. It also solves problems at customs when returning from a journey.

Monitoring the product throughout its life cycle allows businesses to better understand the secondary market. “A collector of luxury bags who only buys second-hand products is totally invisible to a brand”, highlights the Ledgys co-founder. “If they want, the collector can make themselves known to the brand and prove that they own the items, meaning that the brand can offer advantages such as a limited-edition item.” Going beyond customer relations, the blockchain is above all one of the quickest and most effective ways to manage the state of stocks. The company can see in real time which products have been handed over to distributors and which have just been stored in the warehouse. Instead of signing delivery slips on the distribution platforms, a simple transfer of the digital asset from the deliverer to the receiver will suffice. This perspective has also attracted one of the world leaders in distribution, who is now a client of Ledgys, hoping to improve the traceability of their packaging.

“These really are concrete examples of what blockchain technology can do, and that is what we want to offer,” Clément Bergé-Lefranc declares enthusiastically. The start-up will present these use cases at the VivaTechnology exhibition in Paris from June 15-17. True to its mission of popularizing the use of blockchain, Ledgys is also collaborating with another start-up for the event: Inwibe, with its app “Wibe me up”. Together, they will offer all participants the chance to vote for the best start-ups. By using blockchain technology to certify the votes, they can ensure a transparent tally of the public’s favorites.

Imagining how blockchain might be used in 10 years’ time

As well as developing Ownest, the Ledgys team is working on a longer-term project. “One of our objectives is to build marketplaces for the exchange of data using blockchain technology,” explains Clément Bergé-Lefranc. Built on Ethereum, the solution would allow people to buy and sell data via the blockchain. Such marketplaces will take a while to emerge, however. “There are technical elements that still need to be resolved for this to be possible and optimized on Ethereum,” admits the Ledgys co-founder. “However, we are already building on what the blockchain will become, and how it will be used in a few years’ time.”

In the meantime, the start-up is working on a proof of concept in developing countries, in collaboration with “The Heart Fund”, a foundation devoted to treating heart disease. The UN-accredited project aims to establish a secure and universal medical file for each patient. Blockchain technology will allow health-related data to be certified and accessible. “The aim is that with time, we will promote the proper use of patients’ medical data”, announces Clément Bergé-Lefranc. By authorizing access for professionals in the medical sector in a transparent and secure way, the quality of healthcare in countries where medical attention is less reliable can be improved. Again, this is an example of Ledgys’ desire to use blockchain not just to fulfil fantasies, but also to resolve concrete problems.

The original version of this article was published on the ParisTech Entrepreneurs incubator website.

 


Data&Musée – Developing data science for cultural institutions

Télécom SudParis and Télécom ParisTech are taking part in Data&Musée, a collaborative project led by Orpheo, launched on September 27, 2017. The project’s aim is to provide a single, open platform for data from cultural institutions in order to develop analysis and forecasting tools to guide them in developing strategies and expanding their activities.

 

Data science is a recent scientific discipline concerned with extracting information, analyses or forecasts from a large quantity of data. It is now widely used in many different industries from energy and transport to the healthcare sector.

However, this discipline has not yet become a part of French cultural institutions’ practices. Though institutions collect their data on an individual level, until now there had been no initiative to aggregate and analyze all the data from French museums and monuments. And yet, gathering this data could provide valuable information for institutions and visitors alike, whether to establish analyses of cultural products in France, measure institutions’ performance or provide visitors with helpful recommendations for museums and monuments to visit.

The Data&Musée project will serve as a testing ground for exploring the potential of data analysis for cultural institutions and determining how this data can help institutions grow. The project is led by the Orpheo group, a provider of guide systems (audio-guide systems, multimedia guides, software etc.) for cultural and tourist sites, and has brought together researchers and a team of companies specialized in data analysis such as Tech4Team, Kernix and MyOrpheo. The Centre des Monuments Nationaux, an institution which groups together nearly 100 monuments, and Paris Musées, an organization which incorporates 14 museums in Paris, have agreed to test the project on their sites.

A single, open platform for centralizing data

The Data&Musée project strives to usher museums into the data age by grouping together a great number of cultural institutions on Teralab, IMT and GENES’s shared data platform. “This platform provides a neutral, secure and sovereign storage and hosting space. The data will be hosted on the IMT Lille Douai site in France,” explains Antoine Garnier, the head of the project at IMT. “Teralab can host sensitive data in accordance with current regulations and is already recognized as a trustworthy tool.”

In addition, highly sensitive data can be anonymized if necessary. The project could enlist the help of Lamane, a startup specializing in these technical issues, which was created through IMT Atlantique incubators.

Previously collected individual data from each institution, such as ticketing data or website traffic, will be combined with new sources collected by Data&Musée and created by visitors using a smart guestbook (currently being developed by the corporate partner GuestViews), social media analysis and an indoor geolocation system.

“Orpheo seeks to enhance the visitor journey but is not certain whether it should rely on the visitor or be carried out automatically,” explains Nel Samama, whose research laboratory at Télécom SudParis is working with Orpheo on the geolocation aspect. “Analyzing flows in a fully automatic way means using radio or optical techniques, which work correctly in demonstration mode but are unreliable in real use. Having the visitor participate in this loop would simplify it tremendously.”

Developing indicators, forecasting and recommendation tools

Based on an analysis of this data, the goal is to develop performance indicators for museums and build tools for personalizing the visitor experience.

Other project partners, including Reciproque, a company that provides engineering services for cultural institutions, and the UNESCO ITEN chair (Innovation, Transmission and Digital Publishing), will use the data collected to model aesthetic taste, determine typical visitor profiles and derive appropriate content recommendations for these profiles. Such a tool will increase visitors’ awareness of the rich offerings of French cultural institutions and thereby boost the tourism industry. Jean-Claude Moissinac, a research professor at Télécom ParisTech, is working on this aspect of the project in partnership with Reciproque. “I’m especially interested in data semantics, or web semantics,” explains the researcher. “The idea is to index all the data collected in a homogeneous way, then use it to build a graph in order to interlink the information. We can then infer groups, which may be works or users. After that, we use this knowledge to propose different paths.”
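The graph-and-recommendation idea can be pictured with a deliberately simple sketch: link visitors to the works they have seen, then suggest what similar visitors also saw. The data and the similarity rule below are invented for illustration and say nothing about the actual Data&Musée models.

```python
# A deliberately simple illustration of the graph idea: link visitors to the works they
# have seen, then recommend what similar visitors also saw. Data and the similarity rule
# are invented; this is not the Data&Musée model.
visits = {
    "visitor_1": {"Mona Lisa", "Winged Victory", "Venus de Milo"},
    "visitor_2": {"Mona Lisa", "Winged Victory", "The Raft of the Medusa"},
    "visitor_3": {"Water Lilies", "Olympia"},
}

def recommend(visitor: str) -> set:
    seen = visits[visitor]
    # Visitors who share at least one work with this visitor are considered "similar".
    similar = [works for other, works in visits.items() if other != visitor and works & seen]
    return set().union(*similar) - seen

print(recommend("visitor_1"))   # {'The Raft of the Medusa'}
```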

The project plans to set up an interface through which partner institutions may view their regional attendance, visitor seasonality, and segmentation compared to other institutions with similar themes. Performance indicators will also be developed for the museums. The various data collected will be used to develop analytical and predictive models for visiting cultural sites in France and for providing these institutions with recommendations to help them determine strategies for expanding their activities.

With a subscription or contribution system, this structured data could eventually be transmitted to institutions that do not produce data or to third parties with the consent of institutions and users. A business model could therefore emerge, allowing Data&Musée to live on beyond the duration of the project.

Project supported by Cap Digital and Imaginove, with funding from Bpifrance and Région Île-de-France.

 


Rethinking ethics in social networks research

Antonio A. Casilli, Télécom ParisTech – Institut Mines-Télécom, University of Paris-Saclay, and Paola Tubaro, Centre national de la recherche scientifique (CNRS)

[dropcap]R[/dropcap]esearch into social media is booming, fueled by increasingly powerful computational and visualization tools. However, it also raises ethical and deontological issues that tend to escape the existing regulatory framework. The economic implications of large-scale data platforms, the active participation of members of networks, the specter of mass surveillance, the effects on health, the role of artificial intelligence: a wealth of questions all needing answers. A workshop held on December 5-6, 2017 at Paris-Saclay, organized in collaboration with three international research groups, hopes to make progress in this area.

 

Social Networks, what are we talking about?

The expression “social network” has become commonplace, but those who use it to refer to social media such as Facebook or Instagram are often unaware of its origin and true meaning. Studies into social networks began long before the dawn of the digital age. Since the 1930s, sociologists have been conducting studies that attempt to explain the structure of the relationships connecting individuals and groups: their “networks”. These could be, for example, advice relationships between employees of a business, or friendships between pupils in a school. Such networks can be represented as points (the pupils) connected by lines (the relationships).

A graphic representation of a social network (friendships between pupils at a school), created by J.L. Moreno in 1934. Circles = girls, triangles = boys, arrows = friendships. J.L. Moreno, 1934, CC BY

 

Well before any studies into the social aspects of Facebook and Twitter, this research shed significant light on the topic. For example, the role of spouses in a marriage; the importance of “weak ties” in job hunting; the “informal” organization of a business; the diffusion of innovation; the education of political and social elites; and mutual assistance and social support when faced with ageing or illness. The designers of digital platforms such as Facebook now adopt some of the analytical principles that this research was based on, founded on mathematical graph theory (although they often pay less attention to the associated social issues).

Researchers in this field understood very quickly that the classic principles of research ethics (especially the informed consent of participants in a study and the anonymization of any data relating to them) were not easy to guarantee. In social network research, the focus is never on one sole individual, but rather on the links between the participant and other people. If the other people are not involved in the study, it is hard to see how their consent can be obtained. Also, the results may be hard to anonymize, as visuals can often be revealing, even when there is no associated personal identification.

 

Digital ethics: a minefield

Academics have been pondering these ethical problems for quite some time: in 2005, the journal Social Networks dedicated an issue to these questions. The dilemmas faced by researchers are exacerbated today by the increased availability of relational data collected and used by digital giants such as Facebook and Google. New problems arise as soon as the lines between “public” and “private” spheres become blurred. To what extent do we need consent to access the messages that a person sends to their contacts, their “retweets” or their “likes” on friends’ walls?

Information sources are often the property of commercial companies, and the algorithms these companies use tend to offer a biased perspective on the observations. For example, can a contact made by a user through their own initiative be interpreted in the same way as a contact made on the advice of an automated recommendation system? In short, data doesn’t speak for itself, and we must question the conditions of its use and the ways it is created before thinking about processing it. These dimensions are heavily influenced by economic and technical choices as well as by the software architecture imposed by platforms.

But is negotiation between researchers (especially in the public sector) and platforms (which sometimes stem from major multinational companies) really possible? Does access to proprietary data risk being restricted or unequally distributed (potentially at a disadvantage to public research, especially when it doesn’t correspond to the objectives and priorities of investors)?

Other problems emerge when we consider that researchers may even resort to paid crowdsourcing for data production, using platforms such as Amazon Mechanical Turk to ask the crowd to respond to a questionnaire, or even to upload their online contact lists. These services, however, raise long-standing questions about working conditions and the appropriation of the product of that work. The ensuing uncertainty hinders research which could potentially have positive impacts on knowledge and society in a general sense.

The potential for misappropriation of research results for political or economic ends is multiplied by the availability of online communication and publication tools, which are now used by many researchers. Although the interest among the military and police in social network analysis is already well known (Osama Bin Laden was located and neutralized following the application of social network analysis principles), these appropriations are becoming even more common today, and are less easy for researchers to control. There is an undeniable risk that lies in the use of these principles to restrict civil and democratic movements.

A simulation of the structure of an Al-Qaeda network, “Social Network Analysis for Startups” (fig. 1.7), 2011. Reproduced here with permission from the authors. Kouznetsov A., Tsvetovat M., CC BY

 

Celebrating researchers

To break this stalemate, the solution is not to multiply restrictions, which would only aggravate the constraints already inhibiting research. On the contrary, we must create an environment of trust, so that researchers can explore the scope and importance of social networks online and offline, which are essential to understanding major economic and social phenomena, while still respecting people’s rights.

The active role of researchers must be highlighted. Rather than remaining subject to predefined rules, they need to participate in the co-creation of an adequate ethical and deontological framework, drawing on their experience and reflections. This bottom-up approach integrates the contributions not just of academics but also of the public, civil society associations and representatives of public and private research bodies. These ideas and reflections could then be brought forward to those responsible for establishing regulations (such as ethics committees).

 

An international workshop in Paris

Poster for the RECSNA17 Conference

Such is the focus of the workshop Recent ethical challenges in social-network analysis. The event is organized in collaboration with international teams (the Social Network Analysis Group of the British Sociological Association, BSA-SNAG; Réseau thématique n. 26 “Social Networks” of the French Sociological Association; and the European Network for Digital Labor Studies, ENDLS), with support from the Maison des Sciences de l’Homme de Paris-Saclay and the Institut d’études avancées de Paris. The conference will be held on December 5-6. For more information and to sign up, please consult the event website.

Antonio A. Casilli, Associate Professor at Télécom ParisTech and research fellow at Centre Edgar Morin (EHESS), Télécom ParisTech – Institut Mines-Télécom, University of Paris-Saclay, and Paola Tubaro, Head of Research at LRI, a computing research laboratory at CNRS, and teacher at ENS, Centre national de la recherche scientifique (CNRS).

 

The original version of this article was published on The Conversation France.

 


A new laser machining technique for industry

FEMTO-Engineering, part of the Carnot Télécom & Société numérique institute, offers manufacturers a new cutting and drilling technique for transparent materials. By using a femtosecond laser, experts can reach unrivalled levels of precision when machining ultra-hard materials. Jean-Pierre Goedgebuer, director of FC’Innov (FEMTO-Engineering), explains how the technique works.

What is high aspect ratio precision machining and what is it used for?

Jean-Pierre Goedgebuer: Precision machining is used for cutting, drilling and engraving materials. It allows various designs to be inscribed onto materials such as glass, steel or stainless steel, and is a very widespread method in industry. Precision machining refers to positioning and shaping techniques at an extremely small scale, in the range of 2 microns (one micron is 10⁻⁶ meters). The term “aspect ratio”, for example, refers to drilling: it is the ratio between the depth of a hole and its diameter. An aspect ratio of 100 therefore corresponds to a hole whose diameter is 100 times smaller than its depth.

Cutting or drilling means destroying the material locally, in a controlled way. To achieve this, we supply energy with a laser, which generates heat when it comes into contact with the material.

 

What is femtosecond machining?

JPG: The term femtosecond [1] refers to the duration of the laser pulses, which last a few tens or hundreds of femtoseconds. The pulse duration determines how long the light interacts with the material. The shorter it is, the fewer thermal exchanges there are with the material and therefore, in principle, the less the material is damaged.

In laser machining, we use short pulses (femtoseconds, 10⁻¹⁵ seconds) or longer pulses (nanoseconds, 10⁻⁹ seconds). The choice depends on the application. For machining with no thermal effect, that is, where the material is not affected by the heat produced by the pulse, we tend to use femtosecond pulses, which offer a good compromise between removal of the material and the rise in temperature. These techniques are associated with light propagation models that allow us to simulate how the properties of a material affect the propagation of the light passing through it.

 

The femtosecond machining technique generally uses Gaussian beams. The defining characteristic of your process is that it uses Bessel beams. What is the difference?

JPG: Gaussian laser beams are beams in which the energy is distributed in a Gaussian way. At high energy levels, they produce non-linear effects as they propagate through materials: self-focusing effects that make their diameter non-constant and distort their propagation. These effects can be detrimental to the machining quality of certain special kinds of glass.

In contrast, the Bessel laser beams we use in our machining technique avoid these non-linear effects. They are therefore able to maintain a constant diameter over a well-defined length. They act as very fine “laser needles”, measuring just a few hundred nanometers in diameter (a nanometer is roughly the size of a few atoms). Inside these “laser needles” the energy is very highly concentrated, which generates an extremely localized plasma within the material and ablates it. Furthermore, we can control the length of these “laser needles” very precisely. We use them to carry out very deep cutting or drilling (with an aspect ratio of up to 2,000), producing a precise, clean result with no thermal effects.
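
To illustrate the difference in beam shape, here is a small Python sketch comparing the ideal transverse intensity profiles of a Gaussian beam and a zeroth-order Bessel beam. It is purely illustrative: the waist and radial wave number below are assumed values chosen for visualization, not parameters of the FEMTO-ST setup.

import numpy as np
from scipy.special import j0  # zeroth-order Bessel function of the first kind

r = np.linspace(0, 5e-6, 500)   # radial coordinate, 0 to 5 microns
w0 = 1.0e-6                     # assumed Gaussian beam waist (1 micron)
k_r = 5.0e6                     # assumed radial wave number (1/m)

I_gauss = np.exp(-2 * (r / w0) ** 2)   # Gaussian: a single smooth spot
I_bessel = j0(k_r * r) ** 2            # Bessel: narrow central core plus faint rings

# The first zero of J0 (at 2.405) sets the radius of the central core,
# i.e. the width of the "laser needle".
core_radius = 2.405 / k_r
print(f"Bessel core radius ~ {core_radius * 1e9:.0f} nm")   # a few hundred nm

# Unlike the Gaussian spot, this narrow core keeps the same diameter along the
# propagation axis over a controllable length, hence the "laser needle" image.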

To implement this new technology, we start from a conventional femtosecond laser. It was working out how to transform its Gaussian beams into Bessel beams that led to several patents being filed by the Institut FEMTO-ST.

 

What is the point of this new technology?

JPG: There are two main advantages. Since we are dealing with “laser needles” that hold a high density of energy, it is possible to drill very hard materials that would pose a problem for traditional laser machining techniques. And thanks to the technique’s athermal nature, the material in question keeps its physicochemical properties intact; it does not change.

This machining method is intended for transparent materials. Industrial demand is high, as many products require the machining of harder transparent materials. This is the case, for example, with smartphones, whose screens need to be made from special kinds of very durable, scratch-resistant glass. This is a big market and a major focus for many laser manufacturers, particularly in Europe, the US and, of course, Asia. There are several other uses as well, notably in the biomedical field.

 

What’s next for this technique?

JPG: Our mission at FEMTO Engineering is to build on and transfer the research coming out of the Institut FEMTO-ST. In this context, we have partnerships with manufacturers with whom we are exploring how this new technology could meet their needs for very specific materials where traditional femtosecond machining does not give satisfactory results. We are currently working on cutting new materials for smartphones, as well as polymers for medical use.

The fundamental research carried out by the Institut FEMTO-ST continues to focus in particular on better understanding light-matter interaction mechanisms and plasma formation. This research was recently recognized by the ERC (European Research Council), which funds exploratory research projects that encourage scientific discovery. The aim is to fully master the physics of Bessel beam propagation, which until now has received little dedicated scientific study.

[1] A femtosecond corresponds to one millionth of a billionth of a second (10⁻¹⁵ s). It is roughly the period of a single oscillation of a visible light wave. A femtosecond is to a second roughly what a second is to 30 million years.

On the same topic:


Understanding methane hydrate formation to revolutionize pipelines

As hydrocarbons are drawn from ever deeper beneath the sea floor, oil companies face potential obstruction problems in their pipelines due to the formation of solid compounds: methane hydrates. Ana Cameirao, an engineer and PhD specializing in industrial crystallization at Mines Saint-Étienne, aims to understand and model this phenomenon. She has contributed to the creation of an industrial chair, in collaboration with international laboratories and operators such as Total, with the aim of developing software to model the flow within the pipelines. The mission: a more economical and environmentally sound use of underwater pipelines.

 

“Always further, always deeper.” This is the logic behind the deployment of offshore platforms. Driven by intense global demand and enabled by technological progress, hydrocarbon reserves previously considered inaccessible can now be exploited. However, the industry faces an obstacle: methane hydrates. These solid compounds are methane molecules trapped in a sort of cage formed by water molecules. They form at around 4°C and 80 bars of pressure, conditions found in deep-sea pipelines, and can end up accumulating and obstructing the flow. The problem can be hard to fix, given that these pipelines lie at depths of close to 3,000 meters!
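
As a rough order-of-magnitude check (an illustration, not a figure from the article: it assumes a standard average seawater density), a few lines of Python show why these pressure conditions are easily reached offshore.

RHO_SEAWATER = 1025.0   # kg/m^3, assumed average density of seawater
G = 9.81                # m/s^2, gravitational acceleration

def pressure_bar(depth_m):
    """Approximate absolute pressure (in bar) at a given sea depth."""
    hydrostatic_pa = RHO_SEAWATER * G * depth_m
    return (hydrostatic_pa + 101_325) / 1e5   # add ~1 atm of surface pressure

for depth in (800, 3000):
    print(f"{depth} m: ~{pressure_bar(depth):.0f} bar")

# Expected output:
#   800 m: ~81 bar
#   3000 m: ~303 bar
# The ~80 bar needed for hydrate formation is reached well before the
# 3,000-meter depths mentioned for the deepest pipelines.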

To get around this problem, oil companies generally inject methanol into the pipelines to lower the formation temperature of the hydrates. However, injecting this alcohol carries an additional cost as well as an environmental impact, and systematically insulating the pipelines thermally is not enough to prevent hydrates from forming. “The latest solution consists in injecting additives which are supposed to slow the formation and accumulation of hydrates”, explains Ana Cameirao, a researcher at the SPIN (Sciences des Processus Industriels et Naturels) research center at Mines Saint-Étienne and a specialist in crystallization, the science behind the formation and growth of solid aggregates, in liquid phases for instance.

 

Towards the reasonable exploitation of pipelines

For nearly 10 years, the researcher has been studying the formation of hydrates in all the conditions likely to occur in offshore pipelines. “We are looking to model the phenomenon: in other words, to estimate the quantity of hydrates formed, to see whether this solid phase can be carried along by the flow, and to determine whether additives need to be injected and, if so, in what quantity”, she summarizes. The goal is to encourage well-considered operation of the pipelines and avoid massive injections of methanol as a preventive measure. To establish these models, Ana Cameirao relies on a valuable experimental tool: the Archimedes platform.

This 50-meter loop, located at the SPIN center, allows her to reproduce the flow of the oil, water and gas mixture that circulates in the pipelines. An array of equipment, including cameras and laser probes that operate at very high pressure, allows her to study the formation of the solid compounds: their size, nature, aggregation speed, etc. She has been closely examining all the possible scenarios: “We vary the temperature and pressure, but also the nature of the mix, for example by incorporating more or less gas, or by varying the proportion of water in the mixture”, explains Ana Cameirao.

Thanks to all these trials, in 2016 the researcher and her team published one of the most comprehensive models of methane hydrate crystallization. “Similar models do already exist, but only for fixed proportions of water. Our model is more extensive: it can integrate any proportion of water. This allows a greater variety of oil wells to be studied, including the oldest ones, where the mixture can consist of up to 90% water!” The model is the product of painstaking work: over 150 experiments completed over the last 5 years, each representing at least two days of measurements. Above all, it opens new perspectives: “Petrochemical process simulation software is very limited when it comes to describing the flow in pipelines when hydrates form. The main task now is to devise modules able to take this phenomenon into account”, analyses Ana Cameirao.

 

Applications in environmental technology

This is the next step in a project that is nearing completion: “We are currently aiming to combine our knowledge of hydrate crystallization with that of experts in fluid mechanics, in order to better characterize their flow.” This multidisciplinary approach is the focus of the international chair Gas Hydrates and Multiphase Flow in Flow Assurance, launched in January 2017 by Mines Saint-Étienne in collaboration with two laboratories from the Federal University of Technology in Parana, Brazil (UTFPR), and the Colorado School of Mines in the US. The chair, which will run for three to five years, also involves industrial partners, foremost among them Total. “Total, which has been a partner of the research center for 15 years, not only offers financial support, but also shares with us its experience from real-world operations”, says Ana Cameirao.

 

Credits: Mines Saint-Étienne

 

A better understanding of hydrate crystallization will facilitate the offshore exploitation of hydrocarbons, but it could also benefit environmental technology over time. Researchers are working on innovative applications of hydrates, such as CO2 capture and new air-conditioning techniques. “The idea would be to form hydrate “sorbets” (slurries) overnight, when energy is available and less expensive, and then circulate them through a climate control system during the daytime. As the hydrates melt, they would absorb heat from the surrounding area”, explains Ana Cameirao. Clearly, crystallization can lead to just about anything!

 

[author title=”Ana Cameirao : « Creativity comes first »” image=”https://imtech-test.imt.fr/wp-content/uploads/2017/09/Portrait_Ana_Cameirao.jpg”]

Ana Cameirao chose very early on to pursue engineering studies in her home country of Portugal. “It was the possibility of applying science that interested me, the potential to have a real impact on people’s lives”, she recalls. After finishing her studies in industrial crystallization at IMT Mines Albi, she threw herself into applied research. “It’s a constant challenge, we are always discovering new things”, she marvels, looking back over her ten years at the SPIN center at Mines Saint-Étienne.

Ana Cameirao also draws on creativity in her role as a professor, using innovative teaching methods that include projects, case studies, independent literature research, and much more. “Students today are no longer interested in two-hour lectures. You need to involve them”, she says. She feels so strongly about the topic that she decided to complete a MOOC on methods for stimulating creativity, and plans to organize her own workshops on the subject at her school in 2018!

[/author]