The use of social networks by professionals

The proliferation of social networks is prompting professional users to create an account on each network. Researchers from Télécom SudParis wanted to find out how professionals use these different platforms, and whether or not the information they post is the same across networks. Their results, based on combined activity on Google+, Facebook and Twitter, have been available online since last June and will be published in December 2016 in the scientific journal Social Network Analysis and Mining.

 

 

Athletes, major brands, political figures, singers and famous musicians are all increasingly active on social networks, with the aim of increasing their visibility. So much so that they have become very professional in their ways of using platforms such as Facebook, Twitter and Google+. Reza Farahbakhsh, a researcher at Télécom SudParis, took a closer look at the complementarity of different social networks. He studied how these users handle posting the same message on one or more of the three platforms mentioned above.

This work, carried out with Noël Crespi (Télécom SudParis) and Ángel Cuevas (Carlos III University of Madrid), showed that 25% of the messages posted on Facebook or Google+ by professional users are also posted on one of the two other social networks. On the other hand, only 3% of tweets appeared on Google+ or Facebook. This result may be explained by the fact that very active users, who publish a lot of messages, find Twitter to be a more appropriate platform.

Another quantitative aspect of this research is that on average, a professional user who cross-posts the same content on different social networks will use the Facebook-Twitter combination 70% of the time, but not Google+. When used as a support platform for duplications, Google+ is associated with Facebook. Yet, ironically, more users post on all three social networks than on Google+ and Twitter alone.

 

Semantic analysis for information retrieval

To measure these values, the researchers first had to identify influential accounts on all three platforms; 616 such users were identified. The team then developed software that enabled them to retrieve the posts from all the accounts on Facebook, Twitter and Google+ by taking advantage of these platforms’ programming interfaces. All in all, 2 million public messages were collected.

Following this operation, semantic analysis algorithms were used to identify similar content among the same user’s messages on different platforms. To avoid bias linked to the recurrence of certain habits among users, the algorithms were configured only to search for identical content within a one-week period before and after the message being studied.
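To make the matching step more concrete, here is a simplified Python sketch of this kind of cross-post detection. The similarity measure (word-level Jaccard), the threshold and the data layout are illustrative assumptions; the researchers' actual algorithms are more elaborate.

```python
import re
from datetime import datetime, timedelta

def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two short posts."""
    wa = set(re.findall(r"\w+", a.lower()))
    wb = set(re.findall(r"\w+", b.lower()))
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def find_cross_posts(posts_a, posts_b, threshold=0.7, window_days=7):
    """Pair near-identical posts published by the same user on two platforms.

    posts_a and posts_b are lists of (timestamp, text) tuples; candidates are
    only compared within a +/- window_days window, as described above.
    """
    window = timedelta(days=window_days)
    matches = []
    for t_a, text_a in posts_a:
        for t_b, text_b in posts_b:
            if abs(t_a - t_b) <= window and jaccard(text_a, text_b) >= threshold:
                matches.append((t_a, t_b, text_a))
    return matches

# Toy usage: one Facebook post and one tweet published two hours apart.
facebook = [(datetime(2016, 6, 1, 10, 0), "New album out today, listen here!")]
twitter = [(datetime(2016, 6, 1, 12, 0), "new album out today listen here")]
print(find_cross_posts(facebook, twitter))
```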

 

Cross-posts are more engaging for the target audience

In addition to retrieving information about message content, researchers also measured the number of likes and shares (or retweets) for each message that was collected. This allowed them to measure whether or not cross-posting on several social networks had an impact on fans or followers engaging with the message’s content — in other words, whether or not there would be more likes on Twitter or shares on Facebook for a cross-posted publication than for one that was not cross-posted.
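The comparison itself boils down to averaging an engagement metric over the two groups of messages, as in this illustrative sketch (field names and numbers are made up, not taken from the study):

```python
from statistics import mean

def engagement_lift(cross_posted, single_platform, metric="likes"):
    """Ratio of average engagement between cross-posted and single-platform posts.

    Each post is a dict such as {"likes": 120, "comments": 4, "shares": 9}.
    A return value of 1.39 corresponds to "39% more likes".
    """
    return mean(p[metric] for p in cross_posted) / mean(p[metric] for p in single_platform)

# Hypothetical numbers for illustration only.
cross = [{"likes": 150}, {"likes": 128}]
single = [{"likes": 90}, {"likes": 110}]
print(f"lift: {engagement_lift(cross, single):.2f}x")
```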

Since the practice of sharing information on other networks is used to better reach an audience, it was fairly predictable that cross-posts on Twitter and Facebook would be more engaging. Researchers therefore observed that, on average, a post initiated on Facebook and reposted on another platform would earn 39% more likes, 32% more comments and 21% more shares than a message that was not cross-posted. For a post initiated on Twitter, the advantage was even greater, with 2.47 times more likes and 2.1 times more shares.

However, the team observed a reverse trend for Google+. A post initiated on this social network and reposted on Twitter or Facebook would barely achieve half the likes earned by a message that was not cross-posted, and one third of the comments and shares. A professional Google+ user would therefore be better off not cross-posting his message on other social networks.

Since this work involves quantitative, and not sociological analysis, Reza Farahbakhsh humbly acknowledges that these last results on public engagement are up for discussion. “One probable hypothesis is that a message posted on Facebook and Twitter has more content than a message that is not cross-posted, therefore resulting in a greater public response,” the researcher suggests.

 

Which platform is the primary source of publication?


50% of professional users use Facebook as the initial publication source.

On average, half of professional users use Facebook as the initial cross-publication source. 45% prefer Twitter, and only 5% use Google+ as the primary platform. However, the researchers specify that “5.3% of cross-posts are published at the same time,” revealing the use of an automatic and simultaneous publication method on at least two of the three platforms.

 

Although this work does not explain what might motivate the initial preference for a particular network, it does reveal a difference in content depending on which platform is chosen first. For instance, researchers observed that professionals who preferred Twitter posted mostly text content, with links to sites other than social networks, and that this content did not change significantly when reposted on another platform. On the other hand, users who preferred Facebook published more audio-visual content, including links to other social platforms.

This quantitative analysis provides a better understanding of the communication strategies used by professional users. To take this research a step further, Reza Farahbakhsh and Noël Crespi would now like to concentrate on how certain events influence public reactions to the posts. This topic could provide insight and guide the choices of politicians during election campaigns, for example, or improve our understanding of how a competition like the Olympic Games may affect an athlete’s popularity.

 


French Tech Ticket: IMT incubators go global!

The incubators at Télécom Bretagne, Télécom SudParis and Télécom Business School have been selected for the second edition of the French Tech Ticket operation. This international program allows foreign start-ups to be hosted by the incubators at these IMT schools over a 12-month period.

 

70. That’s the number of foreign start-ups that will be hosted from January 2017 by the 41 French incubators selected by the French Tech label as part of the French Tech Ticket program. The hosted entrepreneurs will develop their projects over the course of 12 months, while attending masterclasses and being mentored. The start-ups will also benefit from €45,000 in financial assistance provided by the program.

Among the incubators selected are those at Télécom Bretagne (in Brest and in Rennes), and Télécom & Management SudParis — the incubator of the two Evry schools. The firms selected to be hosted by these sites will benefit from an ecosystem of excellence. The survival rate of start-ups supported by the institute is 89% after 5 years, compared to the national average of 71% after 3 years.

Incubators and campuses that are already global

According to Godefroy Beauvallet, Director of Innovation at Institut Mines-Télécom (IMT), the nomination of incubators in Brittany and the south of Paris comes thanks to the schools already being internationalized. “At Télécom Bretagne, 64% of our PhD students are international students,” he explained. On a broader level, 34% of students at IMT schools are international students.

“We already have this international attractiveness in the area of training. We also have it in research through our international partnerships; and with our position in programs such as the French Tech Ticket, we now have this attractiveness in the area of innovation,” added Godefroy Beauvallet. The nomination of these incubators therefore represents an additional asset in the Institute’s international development, creating value.

That’s because behind this operation, connections are formed and knowledge and skills are shared. According to the Director of Innovation, the relationships forged between foreign and French companies are never only one-sided. The proximity of the start-ups in the incubators can lead to the restructuring of teams as well as new projects. Finally, the “connections created in France represent numerous collaborations and potential partnerships, even several years after the hosting phase,” concludes Godefroy Beauvallet.

Find out more on Télécom Bretagne’s participation
[divider style=”solid” top=”20″ bottom=”20″]

 


Discover the second edition of the French Tech Ticket operation:

[divider style=”solid” top=”20″ bottom=”20″]


Remote electronic voting – a scientific challenge

Although electronic voting machines have begun to appear at polling stations, many technological barriers still hinder the development of platforms that would enable us to vote remotely via the Internet. The scientific journal Annals of Telecommunications dedicated its July-August 2016 issue to this subject. Paul Gibson, a researcher at Télécom SudParis and guest editor of this edition, co-authored an article presenting the scientific challenges in this area.

 

In France, Germany, the United States and elsewhere, 2016 and 2017 will be important election years. During these upcoming elections, millions of electors will be called upon to make their voices heard. In this era of major digital transformation, will the upcoming elections be the first to feature remote electronic voting? Probably not, despite the support for this innovation around the world – specifically to facilitate the participation of persons with reduced mobility.

Yet this service presents many scientific challenges. The scientific journal, Annals of Telecommunications, dedicated its 2016 July-August issue to the topic. Paul Gibson, a computer science researcher at Télécom SudParis, co-authored an introductory article providing an overview of the latest developments in remote electronic voting, or e-voting. The article presented the scientific obstacles researchers have yet to overcome.

To be clear, this refers to voting from home via a platform that can be accessed using everyday digital tools (computers, tablets and smartphones), because, although electronic voting machines already exist, they do not spare electors from having to be physically present at the polling stations.

The main problem stems from the balance that will have to be struck between parameters that are not easily reconciled. An effective e-voting system must enable electors to log onto the online service securely, guarantee their anonymity, and enable them to make sure their vote has been recorded and correctly counted. In the event of an error, the system must also enable electors to correct the vote without revealing their identity. From a technical point of view, researchers themselves remain undecided about the feasibility of this type of solution.
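To illustrate the verifiability requirement (and only that requirement), here is a toy Python sketch of a hash-based tracking code: the voter keeps a secret random value and can later check that their ballot appears on a public list without the list revealing the vote. This is a drastic simplification for illustration; it is not one of the systems discussed by the researchers, and it ignores coercion, authentication and counting.

```python
import hashlib
import secrets

def commit_vote(choice: str) -> tuple[str, str]:
    """Return (tracking_code, secret_nonce) for a ballot.

    The code is a hash of the choice plus a random nonce, so publishing
    it does not reveal how the person voted.
    """
    nonce = secrets.token_hex(16)
    code = hashlib.sha256(f"{choice}:{nonce}".encode()).hexdigest()
    return code, nonce

def voter_check(choice: str, nonce: str, bulletin_board: set) -> bool:
    """Let a voter verify that their ballot was recorded on the public board."""
    return hashlib.sha256(f"{choice}:{nonce}".encode()).hexdigest() in bulletin_board

# The voter keeps the nonce; the authority publishes only the codes.
code, nonce = commit_vote("candidate_A")
board = {code}
print(voter_check("candidate_A", nonce, board))  # True
```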

[divider style=”solid” top=”20″ bottom=”20″]

 

Attitudes of certain European countries toward remote e-voting


Yellow: envisaging E-voting; Red: have rejected E-voting; Blue: experimenting with E-voting on a small scale (e.g. in France for expatriates); Green: have adopted E-voting; Purple: demands (from citizens) to implement remote voting

Source of data: A review of E-voting: the past, present and future, July 2016
[divider style=”solid” top=”20″ bottom=”20″]

Security and trust remain crucial in e-voting

Even if this type of system becomes a reality, scientists stress the problems posed by a vote taking place outside of government-run venues. “If voters cast their ballots in unmonitored environments, it is reasonable to ask ‘what would prevent individuals with malicious intent from being present and forcing voters to do as they say?’” ask the researchers.

Technological responses exist, in the form of completely verifiable end-to-end computer systems capable of preventing forced voting. They can be combined with cutting-edge cryptographic techniques to ensure the secrecy of the vote. Unfortunately, these security measures make using the system more complex for voters. This makes the voting process more difficult and could be counterproductive, since the primary goal of the e-voting system is to reduce abstentions and encourage more citizens to participate in the election.

In addition, such encrypted and verifiable end-to-end solutions do not guarantee secure authentication. This requires governments to distribute electronic identification keys and implies that electors trust their government system, which is not always the case. However, a decentralized system would open the door to viruses and other malicious software.

 

Electronic voting – a societal issue

Electors and organizers must also trust the network used to transmit the data. Although the Internet seems like the most obvious choice for an operation of this scale, it is also the most dangerous. “The use of a public network like the Internet makes it very difficult to protect a system against denial-of-service attacks,” warn Paul Gibson and his co-authors.

The idea of trust reveals underlying social concerns relating to new voting techniques. The long road ahead of scientists is not only paved with technological constraints. For example, there is not yet any established scientific consensus on the real consequences of e-voting on increasing civic participation and reducing abstentions.

Similarly, although certain economic studies suggest that this type of solution could reduce election costs, this still must be counterbalanced through cost analyses for the construction, use and maintenance of a remote e-voting system. The issue therefore not only raises questions in the area of information and communication sciences, but also in the humanities and social sciences. There is no doubt that this subject will continue to fascinate researchers for some time to come.

 


IMT to embark on two new H2020 projects on the IoT and 5G

At the end of May the European Commission announced the results of two joint calls (Europe/Japan and Europe/Korea) of the Horizon 2020 program dedicated to digital technology. Institut Mines-Télécom is taking part in two new projects in the areas of the Internet of Things (South Korea) and 5G (Japan) through the work of researchers at its Télécom SudParis and Eurecom graduate schools.

 

After working with Japan on the FP7 NECOMA project focusing on computer security, IMT is embarking on two new European projects with Asia. “This makes IMT one of the leading players in collaborative research with Japan and South Korea in the strategic fields of digital technology for Europe,” explains Christian Roux, Director of Research and Innovation. “Developing scientific partnerships with Asia is a matter of great importance to us, as the high-level academic players there will provide crucial support in defining future standards in the areas of the Internet of Things and 5G on a global level.”

 

[box type=”shadow” align=”” class=”” width=””]The Wise-IoT Project (South Korea) and Télécom SudParis

While work is being carried out to develop reference architectures in the Internet of Things, the Wise-IoT project brings together top European and Korean contributions to major activities for IoT standardization. Six European and Korean testbeds will be grouped together and applied to smart cities, leisure, and health in order to demonstrate the flexibility of the IoT’s global services. A substantial dissemination plan has been put in place for standardization in particular and will reach its culmination during the Winter Olympic and Paralympic Games in PyeongChang.

The consortium comprises prestigious research institutions, SMEs and a wide range of industries from Europe (EGM, IMT, NEC Europe, Telefonica, CEA, University of Cantabria, Liverpool John Moores University, Ayuntamiento de Santander, FHNW) as well as from Korea (Sejong University, KAIST, KNU, KETI, Sktelecom, Samsung, Axston, KT Corporation, GimpoBigData). The Wise-IoT environment will support SMEs and start-ups from these two regions in their efforts to penetrate the industrial sector of the IoT, by giving them access to a platform providing interoperability between heterogeneous data in smart environments.

Wise-IoT is integrated into the IMT-run French-Korean ILLUMINE laboratory (http://illumine.wp.tem-tsp.eu/). Télécom SudParis will contribute its expertise in Social IoT and semantics, and will manage an inclusive approach combining social networks and the IoT.[/box]

 

[box type=”shadow” align=”” class=”” width=””]The 5G Pagoda Project (Japan) and Eurecom

The Pagoda project involves European partners such as Ericsson, Aalto University in Finland, Eurecom, Orange Poland and the Fraunhofer Fokus institute, along with two Swiss SMEs, and Japanese partners: Tokyo and Waseda universities, the operator KDDI, Hitachi and NEC.

The project’s goal is to create a virtual mobile network that can be deployed on demand and dedicated to a given application (the idea behind Network Slicing), during the Tokyo Olympic Games in 2020. To this end, several technologies will be explored and used: Software Defined Networking (SDN), Network Function Virtualization (NFV) and Mobile Edge Computing (MEC).

Eurecom will contribute its expertise in network softwarization (SDN, NFV and MEC) and its Open Air Interface (OAI) tool to implement the solutions defined during the project on an open-source 5G platform.[/box]

 


Flax and hemp among tomorrow’s high-performance composite materials

Composite materials are increasingly being used in industry, especially in the transport sectors (automotive and aeronautics). These lightweight and multifunctional materials have great potential for limiting environmental footprint, and will play a major role in the materials of the future. At Mines Douai, Chung-Hae Park is contributing to the development of high-performance and economically viable composites. A distinguishing feature of these materials is that they are made using plant-based resources: they are composed of at least 45% natural fibers (by volume), combined with polymer matrices which are also bio-based, and exhibit high mechanical performance while remaining fast to manufacture.

 

In a restrictive environmental context (the European Union aims to lower greenhouse gas emissions by 80 to 95% between now and 2050), it is absolutely necessary to reduce energy consumption, and that of fuel in particular. However, improvements to automobile and aircraft engines seem to be reaching their limits. The other solution is to make vehicles and their components lighter by using composite materials. “This idea has been implemented for several decades and fiberglass and carbon fibers are increasingly being incorporated into polymer matrices,” explains Chung-Hae Park. “In civil and military aviation, composites already represent 50% of the total mass of certain models (Airbus A350 and Boeing 787 Dreamliner).”

However, there are still some problems: to begin with, the cost of these materials is much higher than that of metals (steel or aluminum), and it is no easy matter to recycle these heterogeneous materials since their components are extremely difficult to separate once assembled. This is where plant fibers come into play.

 

Getting flax and hemp to the same level as conventional synthetic fibers

The Composites and Hybrid Structures group of the TPCIM (Polymers and Composites Technology & Mechanical Engineering) department at Mines Douai, led by Chung-Hae Park, is currently the only academic partner involved in two important but complementary national projects: FIABILIN and SINFONI, both of which were selected as part of the Future Investments Program.

These projects were launched in 2012 for five years, and are helping to structure the French industry producing plant fibers for use in engineering materials (insulation, reinforced plastics, agro-based composites), with final applications in a wide range of industries (automotive, aeronautics, railways, building, etc.). Besides being lightweight and agro-based (annually renewable resources), plant fibers offer the advantage of being degradable and therefore recyclable. “Unfortunately many of them aren’t yet strong enough compared with fiberglass and carbon fiber. In France, flax and hemp are the most promising,” comments Chung-Hae. “Our goal, through the FIABILIN and SINFONI projects, is to establish their position among the most widely-used fibers for composites, just behind fiberglass and carbon fibers.”

Researchers at the TPCIM department are contributing to these projects by studying the natural variability of plant fibers and its consequences on the properties relevant to composite applications (molding characteristics, mechanical performance). This means overcoming a major technological barrier for this type of material and developing the numerical simulation tools needed for virtual engineering in industrial product development: tools that take the materials’ specific features into account (fiber variability, as well as porosity or process-induced defects, for example) and feed this information into models that simulate the manufacturing process and predict performance.
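As a very rough illustration of what propagating fiber variability into a predicted property can look like, here is a toy Python sketch using the rule of mixtures; the moduli, spread and 45% fiber ratio below are invented for the example and are not Mines Douai's data or tools.

```python
import random
from statistics import mean, stdev

def composite_modulus(E_fiber, E_matrix, v_fiber):
    """Longitudinal stiffness from the rule of mixtures (a first-order model)."""
    return v_fiber * E_fiber + (1 - v_fiber) * E_matrix

random.seed(0)
samples = [
    composite_modulus(E_fiber=random.gauss(55.0, 10.0),  # GPa, illustrative fiber spread
                      E_matrix=3.0,                       # GPa, generic polymer matrix
                      v_fiber=0.45)                       # 45% fiber volume ratio
    for _ in range(10_000)
]
print(f"predicted modulus: {mean(samples):.1f} +/- {stdev(samples):.1f} GPa")
```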

Additionally, the experts in polymer and composites processing at Mines Douai are developing novel molding processes by direct impregnation of reinforcements for manufacturing 100% agro-based and high performance parts, i.e. parts that have a volume ratio of at least 45% plant fibers. One of the biggest challenges is lowering the production cost of these parts by reducing the time required to manufacture one component in a chain (i.e. by increasing production speed) to a maximum of two minutes, as required by the automobile industry for example. “In this group we are interested in every step of a product’s life, from material characterization, part production, and its integration in a multi-material assembly with metals or elastomers, to its structural health monitoring during the service life and recycling at the end of life,” emphasizes Chung-Hae.

 

Smart processes and materials

A complete understanding of the long-term behavior of these materials and their assemblies is crucial for the development of industrial applications, but these aspects are difficult to predict for new materials with such a short history (unlike metals). In order to monitor how industrial parts evolve over several decades (the operational life of a civil aircraft, for example), nondestructive testing currently has to be carried out throughout the service life. The Composites and Hybrid Structures group is working on the possibility of removing this expensive and tedious nondestructive testing by integrating in-situ sensors into the structure of the material itself, making it a smart composite that can be remotely monitored online.

There are plans to take this idea a step further, integrating the same type of sensor into tools for manufacturing composite parts in order to test or even improve the product quality in real time. The goal is to head toward a digital chain integrating design/production/testing of composites and their assemblies in response to high industrial demand. “We are doing things differently with this research“, states Chung-Hae, “and even though there are many teams in France and Europe working on agro-based composites, we stand out for the range of performances we strive for, with a minimum of 45% of fibers in the form of textile reinforcements by a cost-effective manufacturing technology, i.e. direct impregnation technique, guaranteeing high mechanical properties, as well as for our level of expertise in numerical simulation of manufacturing processes for the industrial parts involved.”

Composite materials will undoubtedly remain one of the major areas of interest for research in the future. This subject is also among the seven themes defined by the Industry of the Future Alliance, in which Institut Mines-Télécom participates and which strives to implement the governmental plan of the same name, launched in 2015.

 

After earning his bachelor’s and master’s degrees from Seoul National University (South Korea), in 2000 Chung-Hae Park began working on a Ph.D. thesis on composite materials through a joint-supervision arrangement. For three years, he spent six months a year at Seoul National University and six months a year at Mines Saint-Étienne. This great challenge was exceptional in South Korea, where this type of thesis is extremely rare.

Chung-Hae received his PhD in 2003, then started working in Korea for the petrochemical branch of LG, in collaboration with many international companies, in the automotive industry in particular. He left LG to pursue his passion for teaching and passing on knowledge, obtaining an assistant professor/associate professor position at the Université du Havre in 2005. In 2011, Chung-Hae earned his Habilitation (HDR), still in the field of composite materials. Drawn to Mines Douai’s breakthrough research in this field, he joined the team as a full professor two years later.

He has been the head of the Composites and Hybrid Structures group of the TPCIM (Polymers and Composites Technology & Mechanical Engineering) department since 2014. This group gathers together some 30 people (full professors, assistant and associate professors, technicians, research engineers, post-doctoral researchers and Ph.D. students).

Aid in interpreting medical images

Reading and understanding computerized tomography (CT) or Magnetic Resonance Imaging (MRI) images is a task for specialists. Nevertheless, tools exist which may help medical doctors interpret medical images and make diagnoses. Treatment and surgery planning are also made easier by the visualization of the organs and identification of areas to irradiate or avoid. Isabelle Bloch, a researcher at Télécom ParisTech specialized in mathematical modeling of spatial relationships and spatial reasoning, is conducting research on this topic.

 

Mathematicians can also make contributions to the field of health. Developing useful applications for the medical profession has been a main objective throughout Isabelle Bloch’s career. Her work focuses on modeling spatial relationships in order to assist in interpreting medical images, in particular during the segmentation and recognition stages, which precede the diagnosis stage. The goal of segmentation is to isolate the various objects in an image and locate their contours. Recognition consists of identifying these objects, such as organs or diseases.

In order to interpret the images, appearance (different grey levels, contrasts, gradients) and shape must be compared with prior knowledge of the scene. This leads to model-based methods. Since certain diseases can be particularly deforming, as is the case with some tumors in particular, Isabelle Bloch prefers to rely on structural information between the different objects. The shape of organs is subject to great variability even in a non-pathological context; the way in which they are organized in space and arranged in relation to one another is therefore much more reliable and permanent. This relative positioning between objects constitutes the structural information used.

 


In color: results of segmentation and recognition of a tumor and internal brain structures obtained from an MRI using spatial relationships between these structures

 

Between mathematics and artificial intelligence

There are different types of spatial relationships, including information about location, topology, parallelism, distance, or directional positioning. In order to model these relationships, they must first be studied using anatomists’ and radiologists’ body of knowledge. Clinical textbooks and reference works, medical ontologies, and web pages must be consulted. This knowledge, which is most often expressed in linguistic form, must be understood, then translated into mathematical terms despite its sometimes ambiguous nature.

Fortunately, “fuzzy sets” offer great assistance in modeling imprecise but deterministic knowledge. In this theory, gradual or partial membership of an object to a set can be modeled, as well as degrees of satisfaction of a relation. Fuzzy logic makes it possible to reason using expressions as imprecise as “at the periphery of,” “near,” or “between.” When applied to 3D sets in the field of imagery, fuzzy set theory allows for spatial reasoning, which means that objects and their relationships can be modeled in order to navigate between them and interpret, classify, and infer high-level interpretations or revise knowledge.
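As a toy illustration of how a linguistic relation such as "near" becomes a degree between 0 and 1, here is a minimal Python sketch with a trapezoidal membership function; the thresholds and the min/max connectives are generic fuzzy-logic choices, not the specific models used in this work.

```python
def near(distance_mm: float, full_until: float = 5.0, zero_after: float = 30.0) -> float:
    """Fuzzy degree to which two structures are 'near' each other.

    Returns 1.0 up to full_until mm, 0.0 beyond zero_after mm, and decreases
    linearly in between. The cut-offs would come from expert anatomical knowledge.
    """
    if distance_mm <= full_until:
        return 1.0
    if distance_mm >= zero_after:
        return 0.0
    return (zero_after - distance_mm) / (zero_after - full_until)

# In fuzzy logic, min plays the role of "and", max the role of "or":
# degree to which a structure is near object A AND near object B.
degree = min(near(12.0), near(22.0))
print(f"{degree:.2f}")  # 0.32: the second constraint is only weakly satisfied
```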

 

The last image is obtained by superimposing the first two images, slices of a thorax from two complementary techniques traditionally used in oncology


 

Research open to the outside world

IMAG2, a project undertaken jointly by Isabelle Bloch’s team and the Necker-Enfants Malades Hospital (radiology and pediatric surgery departments), is the subject of a PhD thesis that Isabelle Bloch has been supervising since November 2015. The objective is to develop tools for 3D segmentation of MRI images, specifically dedicated to pelvic surgery. Since the diseases involved can be highly deforming, the aim is to provide surgeons with a 3D view and enable them to navigate between objects of interest. By helping surgeons make the link between images acquired in advance and the surgical site they are to explore, these tools should also help improve surgical planning and allow for less invasive surgery, limiting disabilities and complications for the patient as much as possible.

The WHIST Lab, the joint laboratory run by Institut Mines-Télécom and Orange, is another example of collaborative research. Created in 2009, WHIST has led to numerous projects involving the interactions between electromagnetic waves and people. As part of this initiative, Isabelle Bloch’s team at Télécom ParisTech notably worked on designing digital models of human beings that are as realistic as possible. The WHIST Lab was the inspiration for the C2M chair, created on 17 December 2015 (see box below).

 

[box type=”shadow” align=”” class=”” width=””]

A chair for studying exposure to electromagnetic waves

The C2M Chair (Modeling, Characterization, and Control of Exposure to Electromagnetic Waves) was created on December 17, 2015 by Télécom ParisTech and Télécom Bretagne, in partnership with Orange. In an environment with increasing use of wireless communications, its objective is to encourage research and support the scientific and societal debate that has arisen from taking account of the possible health effects of the population’s exposure to electromagnetic waves. It is led by Joe Wiart, who ran the WHIST Lab with Isabelle Bloch at Télécom ParisTech and Christian Person at Télécom Bretagne. This chair is supported by Institut Mines-Télécom, Fondation Télécom, Orange and the French National Frequency Agency.[/box]

 

Close ties with the medical community

A fully-automatic process is a utopian dream. It is not realistic to imagine developing mathematical models using information provided by anatomists, running algorithms on images submitted by radiologists, and sending the results directly to surgeons. Designing models, methods and algorithms requires frequent interactions between Isabelle Bloch and medical experts (surgeons, anatomists, radiologists, etc.). Additionally, the experimental part relies on carefully selected patient data, under the constraints of informed patient consent and data anonymization. Results from the different stages of segmentation must then be validated by medical experts.

There has been a positive outcome to these frequent interactions: the medical community has adopted these methods and, building upon the new possibilities, is in turn developing ideas for applications which would be useful within the field. New functions are thus expected to emerge in the future.

 


Isabelle Bloch, a mathematician in the land of medicine

Isabelle Bloch has been interested in medical imagery for a long time. While at Mines ParisTech, she carried out her first work placement at the Lapeyronie Hospital in Montpellier, which had just acquired one of the first Magnetic Resonance Imaging (MRI) machines in France. Her second work placement then took her to the CHNO (Quinze-Vingts National Hospital Center of Ophthalmology), where she worked on brain imaging. She went on to earn a master’s degree in Medical Imaging and a PhD in image processing. Today Isabelle is a professor at Télécom ParisTech, at LTCI (Information Processing and Communication Laboratory). Naturally, her teaching activities attest to this same commitment. She teaches image processing and interpretation at Télécom ParisTech and in computer science master’s programs run jointly with UPMC (where she is the co-coordinator of the Images specialization) and with Université Paris-Saclay. In 2008 she won the Blondel medal, which rewards outstanding work in the field of science.


What is a Blockchain?

Blockchains are hard to describe. They can be presented as online databases. But what makes them so special? These digital ledgers are impossible to falsify, since they are secured through a collaborative process. Each individual’s participation is motivated by compensation in the form of electronic currency. But what are blockchains used for? Is the money they generate compatible with existing economic models? Patrick Waelbroeck, economist at Télécom ParisTech, demystifies blockchains in this new article in our “What is…?” series.

 

What does “blockchain” really mean?

Patrick Waelbroeck: A blockchain is a type of technology. It is a secure digital ledger. When a user wishes to add a new element to this record, all the other blockchain users are asked to validate this addition in an indelible manner. In order to do this, they are given an algorithmic problem. When one of the users solves the problem, they simultaneously validate the addition, and it is marked with a tamper-proof digital time-stamp. Therefore, a new entry cannot be falsified or backdated, since other users can only authenticate additions in real time. The new elements are grouped together into blocks, which are then placed after older blocks, thus forming a chain of blocks — or a blockchain.
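As a deliberately simplified sketch of the mechanism described above (hash-chained blocks plus a puzzle that must be solved before an addition is accepted), here is a toy Python example; real blockchains add distributed consensus, transaction validation and far harder puzzles.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Hash a block's entire contents (entries, timestamp, previous hash, nonce)."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine_block(entries, previous_hash, difficulty=4):
    """Find a nonce so that the block's hash starts with `difficulty` zeros.

    Solving this puzzle is what validates the addition; on a real network,
    the first user to solve it is the one who receives cryptocurrency.
    """
    block = {"timestamp": time.time(), "entries": entries,
             "previous_hash": previous_hash, "nonce": 0}
    while not block_hash(block).startswith("0" * difficulty):
        block["nonce"] += 1
    return block

# Each block stores the hash of the previous one: altering an old entry would
# change its hash and break every later link, which is what makes backdating
# or falsifying a past record detectable.
chain = [mine_block(["genesis entry"], previous_hash="0")]
chain.append(mine_block(["Alice transfers 2 units to Bob"], block_hash(chain[-1])))
print(block_hash(chain[-1])[:16])
```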

In order for this security method to work for the ledger, there must be an incentive principle motivating users to solve the algorithm. When a request is made for an addition, all users then compete and the first to find the solution receives electronic money. This money could be in Bitcoin, Ether, or another type of cryptocurrency.

 

Are blockchains accessible to everybody?

PW: No, especially since specific hardware is required, which is relatively expensive and must be updated frequently. Therefore, not everyone can earn money by solving the algorithms. However, once this money is created, it can circulate and be used by anyone. It is possible to exchange cryptocurrency for conventional currency through specialized exchanges.

What is essential is the notion of anonymity and trust. All the changes to the ledger are recorded in a permanent and secure manner and remain visible. In addition, the management is decentralized: there is not just one person responsible for certification – it is a self-organizing system.

 

What can the ledger created by the blockchain be used for?

PW: Banks are very interested in blockchains due to the possibility of putting many different items in them, such as assets, which would only cost a few cents — as opposed to the current cost of a few euros. This type of ledger could also be used to reference intellectual property rights or land register reference data. Some universities are considering using a blockchain to list the diplomas that have been awarded. This would irrefutably prove a person’s diploma and the date. Another major potential application is smart contracts: automated contracts that will be able to validate tasks and the related compensation. In this example, the advantage would be that the relationship between employees and employers would no longer be based on mutual trust, which can be fragile. The blockchain acts as a trusted intermediary, which is decentralized and indisputable.
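To illustrate the smart-contract idea in ordinary code, here is a toy Python sketch of the "validate a task, then release the compensation" logic; an actual smart contract would be deployed on a blockchain platform rather than run as a script, and every name below is hypothetical.

```python
class EscrowContract:
    """Toy model of a smart contract: compensation is released automatically
    once the agreed task is validated, with no trusted intermediary."""

    def __init__(self, employer: str, worker: str, amount: float):
        self.employer, self.worker, self.amount = employer, worker, amount
        self.funded = False
        self.task_validated = False

    def deposit(self):
        # The employer locks the compensation up front.
        self.funded = True

    def validate_task(self, evidence_ok: bool):
        # In a real system this check would itself be automated and recorded.
        self.task_validated = evidence_ok

    def settle(self) -> str:
        if self.funded and self.task_validated:
            return f"release {self.amount} to {self.worker}"
        return f"refund {self.amount} to {self.employer}"

contract = EscrowContract("ACME", "alice", 100)
contract.deposit()
contract.validate_task(evidence_ok=True)
print(contract.settle())  # release 100 to alice
```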

 

What still stands in the way of developing blockchains?

 PW: There are big challenges involved in upscaling. Using current technology, it would be difficult to process all the data generated by a large-scale blockchain. There are also significant limitations from a legal standpoint. For smart contracts, for example, it is difficult to define the legal purpose involved. Also, nothing is clearly established in terms of security. For example, what would happen if a State requested special access to a blockchain? In addition, if the key for a public record is only held by one participant, this could lead to security problems. Work still needs to be done on striking such delicate balances.

Read more on our blog


Digital commons: individual interests to serve a community

In a world where the economy is increasingly service-based, digital commons is key to developing the collaborative economy. At Télécom Bretagne, Nicolas Jullien, an economics researcher, is studying online epistemic communities, creation communities which provide platforms for storing knowledge. He has demonstrated that selfish behaviors may explain the success of collective projects such as Wikipedia.

From material commons to digital commons

Cities of commons, common areas, commons assemblies: commons are shifting from theory to practice, countering the overexploitation of a common resource for the benefit of only a few, and offering optimistic solutions across a wide range of sociological, economic, and ecological fields.

“Work carried out since the 1960s by Elinor Ostrom, winner of the 2009 Bank of Sweden’s ‘Nobel’ prize for Economics, has led us to abandon the idea of a single choice between privatization of common goods and management by a public power,” says Nicolas Jullien. “She studied a third institutional alternative where communities collectively manage their commons.” This work first focused on natural or material commons, such as collective management of fisheries, then, with the advent of the Internet, on knowledge commons. The threat of enclosures (expropriating commons participants from their user rights) emerged once again, around the subject of handling copyrights and new digital rights.

Nicolas studies online collective groups whose goal is to produce formalized knowledge, not to be confused with forums, which are communities of practice. “A digital commons is a set of knowledge managed or co-produced by individuals,” explains the researcher, citing Wikipedia, Free/Libre and Open Source Software (FLOSS), open hardware, etc. In order for a material commons to function, the number of individuals involved must be limited so that members may organize the commons and choose their rules: entry barriers are thus required. “It’s different with digital commons since apparently there aren’t any entry barriers and everyone can contribute freely,” remarks the researcher, who raises the following questions: Why do individuals agree to take collective action for a digital commons when they are not obliged to do so? How can this collective action be organized?

The digital world does not fundamentally change how people function, nor does it change voluntary commitment, but it does produce a mass effect and a ripple effect, which act as facilitators. It allows more specialized people to meet each other using coordinated information systems managed through Internet bots and artificial intelligence… Finally, because it is not very expensive to produce and make available, the fact that people take advantage of commons without paying is of less importance.

 

Selfishness as a driving force for collective action

Are new theories required to understand digital commons? Not according to the researcher, who states, “Technology is certainly evolving, but human beings still work the same way. Existing theories explain people’s involvement in digital commons rather well, along with the way participants are structured around these commons.”

Among these is Marwell and Oliver’s 1993 theory, which established that people weigh cost and opportunity when making a commitment. Collective action is then a gathering of rather selfish individual interests, with varying levels of involvement. And a common denominator is required to remind people about the rules on a higher level. These comprise entrance filters, coordination systems, people who devote themselves to enforcing these rules, and robots that patrol. “If only selfish people were involved, it wouldn’t work,” says Nicolas. Even though interest is purely selfish at the outset, “I’m doing this because I like it, I need to do this, it’s an intellectual challenge, it’s for the good of the world,” individuals soon become interested in adding content to the platforms and following their rules, “and this is the key to how this system works, since the goal is indeed the production of knowledge.”

How selfish are participants in these communities? The researcher and his team, in collaboration with the Université de la Rochelle and the Wikimédia France foundation, studied the social attitudes of Wikipedians, including non-contributing users whose role is often under-estimated. 13,000 Wikipedians were subjected to the “dictator game,” where “you receive a sum of €10 and have to share it with another person. How much of it do you keep?” Following a precise protocol, the game was used to find out if the community produced prosocial behavior, or if individuals already exhibited prosocial preferences that explained their behavior. Two thirds of the participants replied 50/50, well above the usual 40% of people who reply this way when participating in the game. Wikipedians thus have this social sharing norm in common. Better still, whereas earlier studies indicated that Wikipedia contributors had higher-than-average prosocial attitudes for their age, the study did not reveal differences between contributors and non-contributors. “We observed that visiting Wikipedia makes people more likely to demonstrate prosocial (not necessarily altruistic) behaviors.” In other words, compliance with a social norm increases as the sense of a collective develops.

 

Economy and digital commons

These studies, at the crossroads between the disciplines of industrial economics and organizational management, will provide engineering programs with food for thought about how organizations manage digital change. The researcher asks a third question: how do digital commons projects impact the economy? First of all, there is an emergence of what, in 1993, the economist Paul Romer referred to as “industrial public goods”, around which participants seek to position themselves. “I would ideally like to create something like ‘open street map’ without a competitor doing the same thing before I do. So that’s what leads me to work in an open standard,” explains Nicolas. And since there cannot be a multitude of platforms fulfilling the same need, there are increasing returns to adoption: the more time goes by, the more people want to contribute to a certain platform rather than another, and accept its rules even if they do not exactly correspond to what contributors would like.

“What we noticed with FLOSS, we’re noticing with Wikipedia too,” remarks the researcher. Individuals are paid to make sure the references are up-to-date. A whole activity develops around editing, quantifying information, monitoring, and integrating the content produced into higher-level services. This verification momentum provided by the community creates the business. Without that, “if the commons stops evolving,” there is no longer any reason to participate.

“Many other questions still remain,” continues the researcher, “within national and European research on innovative societies and industrial renewal.” These questions have been explored in the past by the ANR CCCP-Prosodie project and are currently being studied by CAPACITY, a joint ANR project with the Fing. How are professions (librarians, for example) changing in response to daily contact with the digital commons? What is learned in these commons, “which are like game guilds”? How are the practical skills developed there related to academic skills? How valuable is this on a résumé? Do companies that hire commons developers buy access to the community? For Nicolas Jullien, all these open-ended questions deserve to be studied by a digital commons Chair.

Read the blog post The sharing economy: simplifying exchanges? Not necessarily…

 

An associate research professor in economics at Télécom Bretagne, Nicolas Jullien has been working on digital commons since his thesis on free software in 2001. He was coordinator of the Breton M@rsouin cluster for measuring and studying digital uses until 2008, and is head of the Étic (evaluation of ICT measures) research group and of the Master’s in ICT Economics co-accredited with the Université Rennes 1. He was a visiting professor at the Syracuse iSchool (in New York State) in 2011, where he led research on communities of online production, and has held an Accreditation to Supervise Research (HDR) since 2013. Guided by a curious mind more than any particular taste for developing theories, this engineer, who is a fan of multiview approaches to understanding complex phenomena, will become a member of the Board of Directors of LEGO, a research laboratory for management economics in western France (UBO/UBS/Télécom Bretagne), in January 2017.


What is a composite material?

Composite materials continue to entice researchers and are increasingly being used in transport structures and buildings. Their qualities are stunning, and they are considered to be indispensable in addressing the environmental challenges at hand: reducing greenhouse gas emissions, creating stronger and more durable building structures, etc. How are these materials designed? What makes them so promising? Sylvain Drapier, a researcher in this field at Mines Saint-Étienne, answers our questions in this new addition to the “What is…?” series, dedicated to composite materials.

 

Does the principle behind a composite mean that it consists of two different materials?

Sylvain Drapier: Let’s say at least two materials. For a better understanding, it’s easier to think in terms of volume fractions, in other words, the proportion of volume that each component takes up in the composite. In general, a composite contains between 40 and 60% reinforcements, often in the form of fibers. The rest is made up of a binder, called the matrix, which allows for the incorporation of these fibers. Increasingly, the binder percentage is being reduced by a few percentage points in order to add what we call fillers, such as minerals, which optimize the composite material’s final properties.
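To make the volume-fraction idea concrete, here is a short illustrative Python calculation converting constituent masses and densities into volume fractions; the numbers are invented round figures, not values from the interview.

```python
def volume_fractions(mass_fiber, rho_fiber, mass_matrix, rho_matrix):
    """Convert constituent masses (g) and densities (g/cm^3) into volume fractions."""
    vol_fiber = mass_fiber / rho_fiber
    vol_matrix = mass_matrix / rho_matrix
    total = vol_fiber + vol_matrix
    return vol_fiber / total, vol_matrix / total

# Illustrative only: 60 g of glass fiber (2.55 g/cm^3) in 40 g of epoxy (1.2 g/cm^3).
vf, vm = volume_fractions(60, 2.55, 40, 1.2)
print(f"fiber volume fraction: {vf:.0%}, matrix: {vm:.0%}")  # ~41% fiber, ~59% matrix
```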

 


50% of the structure of the Airbus A350 is made of composite materials. The transportation industry is particularly interested in these materials.

Are these fibers exactly like those in our clothing?

SD: Agro-sourced composites, using natural fibers like flax and hemp, are starting to be developed. In this regard, it’s a little like the fibers in our clothing. But these materials are still rare. For composites that are produced for widespread distribution, the fibers are short — at times extremely short — glass fibers. To give an idea of their size, they have a diameter of 10 micrometers and are 1 to 2 millimeters long. They can be larger in products that must absorb limited strain, such as sailboards and electrical boxes, in which they are a few centimeters long. However, high-performance materials require continuous fibers that measure up to several hundred meters, which are wound on reels. This is the case for aramid fibers (the best known being Kevlar), for glass fibers used to make wind turbine blades, and for carbon fibers used in structures that must withstand heavy use, such as bicycles, high-end cars and airplanes.

 

Can these fibers be bound together by soaking them in glue to form a composite?

SD: It all starts with fiber networks, in 2D or 3D, produced by specialized companies. These are textiles that are actually woven, or knitted in the case of parts with rotational symmetry. After this, there are several production methods. Some processes involve having the plastic resin, in liquid form, soak into this network as a binder. When heated, the resin hardens: we refer to this as a thermosetting polymer. Other polymer resins are used in a solid state, and melt when heated. They fill the spaces between the fibers, and become solid when they return to room temperature. These matrices are called thermoplastics, from the same polymer family as the recyclable plastic products we use every day. Metal and ceramic matrices exist too, but they are rarer.

 

How is the choice of matrix determined?

SD: It all depends on its use. Ceramic matrices are used for composites inserted into hot structures; thermoplastic resins melt above 200-350°C, and thermosetting matrices are weakened above 200°C. Some uses require very unusual matrix choices. This is the case for Formula 1 brakes and the Ariane rocket nozzles, designed with 3D carbon: not only are the fibers carbon, but the binder is carbon too. Compared with a part made entirely of carbon, this composite is much more resistant to crumbling, and can be used at temperatures well in excess of 1,000°C.

 


The Vinci engine is made for European Space Agency rockets. Its nozzle (the black cone in the picture), which provides the propulsion, is made of a carbon-carbon composite. Credits: DLR German Aerospace Center.

What are the benefits of composites?

SD: These materials are very light, while offering physical properties that are at least equivalent to those of metallic materials. This benefit is what has won over the transportation industry, since a lighter vehicle consumes less energy. Another benefit of composites is that they do not rust. Another feature: we can integrate functions into these materials. For example, we can make a composite more flexible in certain areas by orienting the fibers differently, which can allow sub-assemblies of parts to be replaced by just one part. However, composite resins are often water-sensitive. This is why the aeronautics industry simulates ageing cycles in specific humidity and temperature conditions.

 

What approach is envisaged for recycling composites?

SD: Thermoplastic matrices can be melted. The polymers are then separated from the fibers and each component is processed separately. However, thermosetting matrices lack this advantage, and the composites they form must be recycled in other ways. It is for this reason that researchers, seeking materials with a reduced carbon footprint, are turning to agro-based composites, using more and more plant fibers. There are even composites that are 100% agro-based, associating bio-sourced polymers with these organic reinforcements. Composite recycling does not yet attract the attention it deserves, but research teams are now investing in this area of development.

 

Read more on our blog


The sharing economy: simplifying exchanges? Not necessarily…

The groundbreaking nature of the sharing economy raises hope as well as concern among established economic players. It is intriguing and exciting. Yet what lies behind this often poorly defined concept? Godefroy Dang Nguyen, a researcher in economics at Télécom Bretagne, gives us an insight into this phenomenon. He co-authored a MOOC on the subject that will be available online next September.

 

To a certain extent, the sharing, or collaborative, economy is a little like quantum physics: everyone has heard of it, but few really know what it is. First and foremost, it has a dual nature, promoting sharing and solidarity, on the one hand, and facilitating profit-making and financial investment, on the other. This ambiguity is not surprising, since the concept behind the sharing economy is far from straightforward. When we asked Godefroy Dang Nguyen, an economist at Télécom Bretagne, to define it, his reaction said it all: a long pause and an amused sigh, followed by… “Not an easy question.” What makes this concept of the collaborative economy so complex is that it takes on many different forms, and cannot be presented as a uniform set of practices.

 

Wikipedia and open innovation: two methods of collaborative production

First of all, let’s look at collaborative production. “In this case, the big question is ‘who does it benefit’?” says Godefroy Dang Nguyen. This question reveals two different situations. The first concerns production carried out by many operators on behalf of one stakeholder, generally private. “Each individual contributes, at their own level, to building something for a company, for example. This is what happens in what we commonly refer to as open innovation,” explains the researcher. The second situation relates to collaborative production that benefits the community: individuals create for themselves, first and foremost. The classic example is Wikipedia.

Although the second production method seems to be more compatible with the sharing concept, it does have some disadvantages, however, such as the “free rider” phenomenon. “This describes individuals who use the goods produced by the community, but do not personally participate in the production,” the economist explains. To take the Wikipedia example, most users are free riders — readers, but not writers. Though this phenomenon has only a small impact on the online encyclopedia’s sustainability, it is not the case for the majority of other community services, which base their production on the balance maintained with consumption.

 

Collaborative consumption: with or without an intermediary?

The free rider can indeed jeopardize a self-organized structure without an intermediary. In this peer-to-peer model, the participants do not set any profit targets. Therefore, the consumption of goods is not sustainable unless everyone agrees to step into the producer’s shoes from time to time, and contribute to the community, thus ensuring its survival. A rigorous set of shared organizational values and principles must therefore be implemented to enable the project to last. Technology could also help to reinforce sharing communities, with the use of blockchains, for example.

Yet these consumption methods are still not as well known as the systems requiring an intermediary, such as Uber, Airbnb and Blablacar. These intermediaries organize the exchanges, and in this model, the collaborative peer-to-peer situation seen in the first example now becomes commercial. “When we observe what’s happening on the ground, we see that what is being developed is primarily a commercial peer-to-peer situation,” explains Godefroy Dang Nguyen. Does this mean that the collaborative peer-to-peer model cannot be organized? “No,” the economist replies, “but it is very complicated to organize exchanges around any model other than the market system. In general, this ends up leading to the re-creation of an economic system. Some people really believe in this, like Michel Bauwens, who champions this alternative organization of production and trade through the collaborative method.”

 


La Ruche qui dit Oui! is an intermediary that offers farmers and producers a digital platform for local distribution networks. Credits: La Ruche qui dit Oui!

 

A new role: the active consumer

What makes the organizational challenge even greater is that the sharing economy is based on a variable that is very hard to understand: the human being. The individual, referred to in this context as the active consumer, plays a dual role. Blablacar is a very good example of this. The service users become both participants, by offering the use of their cars to other individuals, and consumers, who can also benefit from offers proposed by other individuals — if their car breaks down, for example, or if they don’t want to use it.

Yet it is hard to understand the active consumer’s practices. “The big question is, what is motivating the consumer?” asks Godefroy Dang Nguyen. “There is an aspect involving personal quests for savings or for profits to be made, as well as an altruistic aspect, and sometimes a desire for recognition from peers.” And all of these aspects depend on the personality of each individual, as each person takes ownership of the services in different ways.

There’s no magic formula… But some contexts are more favorable than others.

 

Among the masses… The ideal model?

Considering all the differentiating factors in the practices of the sharing economy, is there one model that is more viable than another? Not really, according to Godefroy Dang Nguyen. The researcher believes “there’s no magic formula: there are always risk factors, luck and talent. But some contexts are more favorable than others.

The success experienced by Uber, Airbnb and Blablacar is not by chance alone. “These stakeholders also have real technical expertise, particularly in the networking algorithms,” the economist adds. Despite the community aspect, these companies are operating in a very hostile environment. Not only is there tough competition in a given sector, with the need to stand out, but they must also make their mark in an environment where new mobile applications and platforms could potentially be launched for any activity (boat exchanges, group pet-walking, etc.). To succeed, the service must meet a real need, and find a target audience ready to commit to it.

The sharing economy? Nothing new!

Despite these success factors, more unusual methods also exist, with just as much success — proving there is no ideal model. The leboncoin.fr platform is an excellent example. “It’s a fairly unusual exception: the site does not offer any guarantees, nor is it particularly user-friendly, and yet it is widely used,” observes Godefroy Dang Nguyen. The researcher attributes this to the fact that “leboncoin.fr is more a classified ad site than a true service platform,” which reminds us that digital practices are generally an extension of practices that existed before the Internet.

After all, the sharing economy is a fairly old practice with the idea of “either mutually exchanging services, or mutually sharing tools,” he summarizes. In short, a sharing solution is at the heart of the social life of a local community. “The reason we hear about it a lot today, is that the Internet has multiplied the opportunities offered to individuals,” he adds. This change in scale has led to new intermediaries, who are in turn much bigger. And behind them, a multitude of players are lining up to compete with them.

Read the blog post Digital commons: individual interests to serve a community

 

[box type=”shadow” align=”” class=”” width=””]

Discover the MOOC on “Understanding the sharing economy”

The “Understanding the sharing economy” MOOC was developed by Télécom Bretagne, Télécom École de management and Télécom Saint-Étienne, with La MAIF. It addresses the topics of active consumers, platforms, social changes, and the risks of the collaborative economy.

In addition to the teaching staff, consisting of Godefroy Dang Nguyen, Christine Balagué and Jean Pouly, several experts participated in this MOOC: Anne-Sophie Novel, Philippe Lemoine, Valérie Peugeot, Antonin Léonard and Frédéric Mazzella.

 

[/box]