
Coming soon: “smart” cameras and citizens improving urban safety

Flavien Bazenet, Institut Mines-Telecom Business School (IMT), and Gabriel Périès, Institut Mines-Telecom Business School (IMT)

This article was written based on the research Augustin de la Ferrière carried out during his “Grande École” training at Institut Mines-Telecom Business School (IMT).

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]“S[/dropcap]afe cities”: some see them as increasing the security and resilience of cities, while others see them as an instance of ICTs (Information and Communication Technologies) being used in the move towards a society of control. The term has sparked much debate. Still, through balanced policies, the “Safe City” could become part of a comprehensive “smart city” approach. Citizen crowdsourcing (security by citizens) and video analytics—“situational analysis that involves identifying events, attributes or behavior patterns to improve the coordination of resources and reduce investigation time” (source: IBM)—could protect privacy while keeping costs and performance under control.

 

Safe cities and video protection

A “safe city” refers to NICT (New Information and Communication Technology) used for urban security purposes. In reality, however, the term is primarily a marketing concept that major systems integrators in the security sector have used to promote their video protection systems.

First appearing in the United Kingdom in the mid-1980s, urban cameras gradually became widespread. While their use is sometimes debated, they are generally well accepted by citizens, although this acceptance varies with each country’s risk culture and approach to security matters. Today, nearly 250 million video protection cameras are in use throughout the world. On an international scale, this translates as roughly one camera for every 30 inhabitants. But the effectiveness of these cameras is often called into question. It is therefore necessary to take a closer look at their role and actual effectiveness.

According to several French reports—in particular the “Report on the effectiveness of video protection by the French Ministry of the Interior, Overseas France and Territorial Communities” (2010) and “Public policies on video protection: a look at the results” by INHESJ (2015)—the systems appear to be effective primarily in deterring minor criminal offences, reducing urban decay and improving interdepartmental cooperation in investigations.

 

The effectiveness of video protection limited by technical constraints

On the other hand, video protection has proven completely ineffective in preventing serious offences. The cameras appear only to be effective in confined spaces, and could even have a “publicity effect” for terrorist attacks. These characteristics have been confirmed by analysts in the sector, and are regularly emphasized by Tanguy Le Goff and Eric Heilmann, researchers and experts on this topic.

They also point out that our expectations for these systems are too high, and stress that the technical constraints are too significant, in addition to the excessive installation and maintenance costs.

To better explain the deficiencies in this kind of system, we must understand that in a remotely monitored city, a camera is constantly filming the city streets. It is connected to the “urban monitoring center”, where the signal is transmitted to several screens. The images are then interpreted by one or more operators. But no human can legitimately be expected to remain concentrated on a multitude of screens for hours at a time, especially when the operator-to-screen ratio is often extremely disproportionate. In France, the ratio sometimes reaches one operator to one hundred screens! This is why the typical video protection system’s capacity for prevention is virtually nonexistent.

Technical experts suggest that the real hope for video protection—its forensic value, the ability to provide evidence—is nullified by obvious technical constraints.

In a “typical” video protection system, the volume of data recorded by each camera is quite significant. According to an estimate by one manufacturer (Axis Communications), a camera recording 24 frames per second generates between 0.74 GB and 5 GB of data per hour, depending on the encoding and the chosen resolution. Servers are therefore quickly saturated, since current storage capacities are limited.

With an average cost of approximately 50 euros per terabyte, local authorities and town halls find it difficult to afford datacenters capable of saving video recordings for a sufficient length of time. In France, the CNIL authorizes 30 days of saved video recordings, but in reality, these recordings are rarely kept for more than 7 consecutive days. According to some experts, they are often not kept for more than 48 hours. This undermines the main argument used in favor of video protection: the ability to provide evidence.
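To put these figures together, here is a minimal sketch of the arithmetic, using the data rates and the storage price quoted above; the fleet of 100 cameras is an illustrative assumption.

```python
# A minimal sketch of the storage arithmetic, assuming a fleet of 100 cameras
# (an illustrative figure) and the per-camera data rates and ~50 euros/TB
# storage price quoted above.

HOURS_PER_DAY = 24

def storage_need(cameras, gb_per_hour, retention_days, euros_per_tb=50):
    """Return (terabytes, euros) needed to keep one retention window on disk."""
    total_gb = cameras * gb_per_hour * HOURS_PER_DAY * retention_days
    total_tb = total_gb / 1000
    return total_tb, total_tb * euros_per_tb

for days in (7, 30):          # typical actual retention vs the CNIL's 30-day ceiling
    for rate in (0.74, 5.0):  # low and high encoding/resolution settings
        tb, euros = storage_need(100, rate, days)
        print(f"{days:2d} days at {rate} GB/h: {tb:7.1f} TB, ~{euros:8.0f} euros")
```

Even at the low end of the range, a modest fleet quickly accumulates tens of terabytes per retention window, which is why retention in practice falls so far short of the authorized 30 days.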

 

A move towards new smart video protection systems?

The only viable alternative to the “traditional” video protection system is that of “smart” video protection using video analytics or “VSI”: technology that uses algorithms and pixel analysis.

Since these cameras are generally supported by citizens, they must become more efficient and not lead to a waste of financial and human resources. “Smart” cameras offer two possibilities here: biometric identification and situational analysis. Both should enable automatic alarms to be triggered for operators so that they can take action, meaning the cameras would truly be used for prevention.

A massive rollout of biometric identification is currently nearly impossible in France, since the CNIL is committed to the principles of purpose and proportionality: it is illegal to process recorded data featuring citizens’ faces without first establishing a precise purpose for its use. The Senate is currently studying this issue.

 

Smart video protection, safeguarding identity and personal data?

On the other hand, situational analysis offers an alternative that can tap into the full potential of video protection cameras. Through the analysis of situations, objects and behavior, real-time alerts are sent to video protection operators, a feature that restores hope in the system’s prevention capacity. This is in fact the logic behind the very controversial European surveillance project INDECT: limit the recording of video to focus only on pertinent information and automated alerts. This technology therefore makes it possible to opt for selective video recording, or even to do away with recording altogether.

“Always being watched”… Here, in Bucharest (Romania), end of 2016. J. Stimp/Flickr, CC BY
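As a rough illustration of this selective-recording principle, here is a minimal sketch; the buffer sizes and the event detector are illustrative placeholders, not an actual VSI or INDECT implementation.

```python
# A minimal sketch of selective recording: video is analyzed frame by frame,
# and only the sequences around automatically detected events are kept.
# The detector is a placeholder; real systems use pixel-level analytics
# (motion, object and behavior classification).

from collections import deque

PRE_EVENT_FRAMES = 250   # rolling buffer kept in memory (~10 s at 25 fps)
POST_EVENT_FRAMES = 500  # frames persisted after an alert

def detect_event(frame):
    """Placeholder for situational analysis (intrusion, abandoned object...)."""
    return frame.get("anomaly_score", 0.0) > 0.8

def selective_recording(frames):
    buffer = deque(maxlen=PRE_EVENT_FRAMES)
    recording_countdown = 0
    for frame in frames:
        if detect_event(frame):
            # Alert the operator and flush the pre-event context to disk.
            yield from buffer
            buffer.clear()
            recording_countdown = POST_EVENT_FRAMES
        if recording_countdown > 0:
            yield frame            # persist this frame
            recording_countdown -= 1
        else:
            buffer.append(frame)   # otherwise keep it only transiently
```

Everything the generator does not yield is simply discarded, which is what allows storage (and its cost) to shrink to the pertinent sequences alone.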

VSI with situational analysis could offer real benefits for society, in terms of effective security measures and the cost of deployment for taxpayers. VSI requires fewer operators than traditional video protection, fewer cameras and less costly storage space. Referring to the common definition of a “smart city”—realistic interpretation of events, optimization of technical resources, more adaptive and resilient cities—this approach to video protection would put the “Safe City” at the heart of the smart city.

Nevertheless, several risks of abuse and potential errors exist, such as unwarranted alerts, and they raise questions about how such measures should be implemented.

 

Citizen crowdsourcing and bottom-up security approaches

The second characteristic of a “smart and safe city” must take people into account: citizen users, the city’s driving force. Security crowdsourcing is a phenomenon that finds its applications in our hyperconnected world through “ubiquitous” technology (smartphones, connected objects). The Boston Marathon bombing (2013), the London riots (2011), the Paris attacks (2015) and various natural disasters showed that citizens are not necessarily dependent on central governments, and can ensure their own security, or at least work together with the police and rescue services.

Social networks, such as Twitter and Facebook with its “Safety Check” feature, are the main examples of this change. Similar applications, such as Qwidam, SpotCrime, HeroPolis and MyKeeper, quickly proliferated and are breaking into the protection sector. These mobile solutions, however, are struggling to gain ground in France due to a fear of false information being spread. Yet these initiatives offer true alternatives and should be studied and even encouraged. Without responsible citizens, there can be no resilient cities.

A 2016 study shows that citizens are likely to use these emergency features on their smartphones, and that doing so would make them feel safer.

Since the “smart city” relies on citizen, adaptive and ubiquitous intelligence, it is in our mutual interest to learn from bottom-up governance methods, in which information comes directly from the ground, so that a safe city could finally become a real component of the smart city approach.

 

Conclusion

Implementing major urban security projects without considering the issues involved in video protection and citizen intelligence leads to a waste of the public sector’s human and financial resources. The use of intelligent measures and the implementation of a citizen security policy would therefore help to create a balanced urbanization policy, a policy for safe and smart cities.

[divider style=”normal” top=”20″ bottom=”20″]

Flavien Bazenet, Associate Professor of Entrepreneurship and Innovation at Institut Mines-Telecom Business School (IMT), and Gabriel Périès, Professor, Department of Foreign Languages and Humanities at Institut Mines-Telecom Business School (IMT)

The original version of this article (in French) was published in The Conversation.


When the internet goes down

Hervé Debar, Télécom SudParis – Institut Mines-Télécom, Université Paris-Saclay

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]“A[/dropcap] third of the internet is under attack. Millions of network addresses were subjected to distributed denial-of-service (DDoS) attacks over a two-year period,” reports Warren Froelich on the UC San Diego News Center website. A DDoS is a type of denial-of-service (DoS) attack in which the attacker carries out an attack using many sources distributed throughout the network.

But is the journalist justified in his alarmist reaction? Yes and no. If one third of the internet were under attack, then one in every three smartphones wouldn’t work and one in every three computers would be offline. When we look around, we can see that this is obviously not the case; if we now rely so heavily on our phones and Wikipedia, it is because we have come to view the internet as a network that functions well.

Still, the DDoS phenomenon is real. Recent attacks testify to this, such as the Mirai botnet’s attack on the French web host OVH, and the American DNS provider Dyn (DynDNS) falling victim to the same botnet.

The websites owned by customers of these servers were unavailable for several hours.

What the article really looks at is the appearance of IP addresses in the traces of DDoS attacks. Over a period of two years, the authors found the addresses of two million different victims, out of the 6 million servers listed on the web.

Traffic jams on the information superhighway

Units of data, called packets, circulate on the internet. When too many of these packets want to go to the same place or take the same path, congestion occurs, just like the traffic jams that occur at the end of a workday.

It should be noted that in most cases it is very difficult, almost impossible, to differentiate between normal traffic and denial of service attack traffic. Traffic generated by “Flash crowd” and “slashdot effect” phenomena is identical to the traffic witnessed during this type of attack.

However, this analogy only goes so far, since packets are often organized in flows, and the congestion on the network can lead to these packets being destroyed, or the creation of new packets, leading to even more congestion. It is therefore much harder to remedy a denial-of-service attack on the web than it is a traffic jam.


Diagram of a denial-of-service attack. Everaldo Coelho and YellowIcon

 

This type of attack saturates the network link that connects the server to the internet. The attacker does this by sending a large number of packets to the targeted server. These packets can be sent directly if the attacker controls a large number of machines, a botnet.

Attackers also use amplification mechanisms built into certain network protocols, such as the domain name system (DNS) and clock synchronization (NTP). These protocols are asymmetrical: the requests are small, but the responses can be huge.

In this type of attack, an attacker contacts DNS or NTP amplifiers while pretending to be the targeted server (by spoofing its address). The target then receives large numbers of unsolicited replies. Therefore, even with limited connectivity, the attacker can create a significant level of traffic and saturate the target’s network.
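A minimal sketch of this amplification arithmetic follows; the amplification factors are approximate orders of magnitude (they vary by protocol, server configuration and study), and the attacker’s uplink is an illustrative assumption.

```python
# A minimal sketch of why protocol asymmetry matters: small spoofed requests
# trigger much larger responses aimed at the victim. Factors are approximate.

AMPLIFIERS = {                 # approximate bandwidth amplification factors
    "DNS (open resolver)": 30,
    "NTP (monlist)": 500,
}

attacker_uplink_mbps = 100     # modest connectivity, spoofed source addresses

for protocol, factor in AMPLIFIERS.items():
    victim_traffic = attacker_uplink_mbps * factor
    print(f"{protocol}: {attacker_uplink_mbps} Mbit/s of requests "
          f"-> ~{victim_traffic / 1000:.1f} Gbit/s hitting the victim")
```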

There are also “services” that offer the possibility of buying denial of service attacks with varying levels of intensity and durations, as shown in an investigation Brian Krebs carried out after his own site was attacked.

What are the consequences?

For internet users, the main consequence is that the website they want to visit is unavailable.

For the victim of the attack, the main consequence is a loss of income, which can take several forms. For a commercial website, for example, this loss is due to a lack of orders during that period. For other websites, it can result from losing advertising revenue. This type of attack can also allow an attacker to display ads in place of another party’s, tapping into the revenue generated by showing them.

There have been a few, rare institutional attacks. The most documented example is the attack against Estonia in 2007, which was attributed to the Russian government, although this has been impossible to prove.

Direct financial gain for the attacker is rare, however, and is linked to the ransom demands in exchange for ending the attack.

Is it serious?

The impact an attack has on a service depends on how popular the service is. Users therefore experience a low-level attack as a nuisance if they need to use the service in question.

Only certain large-scale occurrences, the most recent being the Mirai botnet, have impacts that are perceived by a much larger audience.

Many servers and services are located in private environments, and therefore are not accessible from the outside. Enterprise servers, for example, are rarely affected by this kind of attack. The key factor for vulnerability therefore lies in the outsourcing of IT services, which can create a dependence on the network.

Finally, an attack with a very high impact would, first of all, be detected immediately (and therefore often blocked within a few hours), and would ultimately be limited by its own activities (since the attacker’s communications would also be blocked), as shown by the old example of the SQL Slammer worm.

Ultimately, the study shows that denial-of-service attacks by saturation have been recurrent over the past two years. This news is significant enough to demonstrate that the phenomenon must be addressed. Yet it is not a new occurrence.

Other phenomena, such as routing manipulation, have the same consequences for users, like when Pakistan Telecom hijacked YouTube addresses.

Good IT hygiene

Unfortunately, there is no surefire form of protection against these attacks. In the end, it comes down to an issue of cost of service and the amount of resources made available for legitimate users.

The “big” service providers have so many resources that it is difficult for an attacker to catch them off guard.

Still, this is not the end of the internet, far from it. However, this phenomenon is one that should be limited. For users, good IT hygiene practices should be followed to limit the risks of their computer being compromised, and hence used to participate in this type of attack.

It is also important to review what type of protection outsourced service suppliers have established, to ensure they have sufficient capacity and means of protection.

[divider style=”normal” top=”20″ bottom=”20″]

Hervé Debar, Head of the Networks and Telecommunications Services Department, Télécom SudParis – Institut Mines-Télécom, Université Paris-Saclay

The original version of this article (in French) was published on The Conversation.

 


Cyrating: a trusted third-party for cybersecurity assessment

Cyrating, a startup incubating at ParisTech Entrepreneurs, offers organizations a service for assessing their performance and efficiency in cybersecurity. By positioning itself as a trusted third-party, it meets companies’ need for an objective analysis of their cyber risk. The service also allows companies to assess their position relative to competitors.

 

In the cybersecurity sector, Cyrating intends to play a role that organizations often ask for but that has never yet been provided: that of a trusted third-party. The startup, which has been incubating at ParisTech Entrepreneurs since last September, assesses the cybersecurity performance of public and private organizations. The rating they receive allows them to position themselves relative to their competitors, as well as define areas for improvement and determine the cybersecurity level of their subsidiaries and suppliers.

Regardless of the type of company, the startup bases its assessment on the same criteria. This results in objective ratings that are not dependent on the organization’s size or structure. “For example, we look at the level of protection for domain names, company websites, email services…” explains François Gratiolet, co-founder of Cyrating. He calls these criteria “facts” and they are supplemented by an analysis of “events” such as a data breach or the hosting of malware on the internal server.

Cyrating processes a set of observable data with the aim of uncovering these facts and events related to the organization’s cybersecurity. They are then measured against best practices in order to obtain a rating. Based on assessment algorithms, metrics and ratings are automatically calculated by category. The organizations evaluated by Cyrating therefore obtain a clear view of their efficiency in a variety of cybersecurity issues, in addition to the overall rating. This enables them to identify the measures they must immediately implement to improve their protection and optimize their allocation of financial and human resources.
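As an illustration only, here is a minimal sketch of how facts and events could feed a category-based rating; the categories, scores, weights and penalties are hypothetical and do not represent Cyrating’s actual (proprietary) algorithm.

```python
# A hypothetical category-based rating aggregation, for illustration only.

FACT_CATEGORIES = {            # score out of 100 against best practices
    "domain_name_protection": 80,
    "website_security": 65,
    "email_service_security": 90,
}
EVENT_PENALTIES = {            # points deducted for observed incidents
    "data_breach": 25,
    "malware_hosting": 15,
}

def overall_rating(facts, events):
    base = sum(facts.values()) / len(facts)           # equal weights assumed
    penalty = sum(EVENT_PENALTIES[e] for e in events)
    return max(0, base - penalty)

print(overall_rating(FACT_CATEGORIES, ["malware_hosting"]))  # -> about 63.3
```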

Unlike auditing and consulting firms, Cyrating’s service does not require any intervention in the organizations’ departments or offices. There is no need to install any software or equipment. Furthermore, the service is based on a subscription system. The rating is ongoing throughout the entire subscription period. Therefore, as they track the changes in their rating, organizations can immediately observe the impact of their actions.

The startup is the first of its kind in Europe. And few startups are offering this type of service on a global level. “It’s a business that is booming in the United States,” says François Gratiolet. This early entry into the European market is a serious advantage for Cyrating, whose business relies on a powerful platform that can be scaled up: the longer the company has been assessing organizations, the more attractive its rating system becomes. The startup officially launched its business in Lille in January 2018, at the International Cybersecurity Forum (FIC)—the largest European trade show in the sector. Over the course of the startup’s development and the creation of its use cases—still very recent, since the startup is only a few months old—it has already assessed hundreds of companies. “A year from now we expect to have rated over 50,000 organizations,” the co-founder predicts.

The first businesses to be won over by Cyrating’s services were large and intermediate-sized companies. “They see the opportunity to measure the performance of their suppliers and subsidiaries, and optimize their audit cycles,” François Gratiolet explains. But insurance providers could also be interested in this service, as well as agencies that want to purchase data blocks for statistical purposes. By positioning itself as a trusted third-party, the startup could quickly become a key player in cybersecurity in France and Europe.


The brain: seeing between the fibers of white matter

The principle behind diffusion imaging and tractography is to explore how water diffuses through our brain in order to study the structure of neurons. Doctors can use this method to improve their understanding of brain disease. Pietro Gori, a researcher in image processing at Télécom ParisTech, has just launched a project called Neural Meta Tracts, funded by the Emergence program at DigiCosme. It aims to improve the modeling, visualization and manipulation of the large amounts of data produced by tractography. This may considerably improve the analysis of white matter in the brain, and in doing so allow doctors to more easily pinpoint the morphological differences between healthy and sick patients.

 

What is the goal of the Neural Meta Tracts project?

Pietro Gori: The project stems from my past experience. I have worked in diffusion imaging, which is a non-invasive form of brain imaging, and tractography. This technique allows you to explore the architecture of the brain’s white matter, which is made up of bundles of several million neuron axons. Tractography allows us to represent these bundles in the form of curves in a 3D model of the brain. It is a very rich method which provides a great deal of information, but this information is difficult to visualize and make use of in digital calculations. Our goal with Neural Meta Tracts is to facilitate and accelerate the manipulation of these data.

Who can benefit from this type of improvement to tractography?  

PG: By making visualization easier, we are helping clinicians to interpret imaging results. This may help them to diagnose brain diseases more easily. Neurosurgeons can also gain from tractography in planning operations. If they are removing a tumor, they want to be sure that they do not cut fibers in the critical areas of the brain. The more precise the image is, the better prepared they can be. As for improvements to data manipulation and calculation, neurologists and radiologists doing research on the brain are highly interested. As they are dealing with large amounts of data, it can take time to compare sets of tractographies, for example when studying the impact of a particular structure on a particular disease.

Could this help us to understand certain diseases?

PG: Yes. In psychiatry and neurology, medical researchers want to compare healthy people with sick people. This enables them to study differences which may either be the consequence or the cause of the disease. In the case of Alzheimer’s, certain parts of the brain are atrophied. Improving mathematical modeling and visualization of tractography data can therefore help medical researchers to detect these anatomical changes in the brain. During my thesis, I also worked on Tourette syndrome. Through my work, we were able to highlight anatomical differences between healthy and sick subjects.

How do you improve the visualization and manipulation of tractography data?

PG: I am working with Jean-Marc Thiery and other lecturers and researchers at Télécom ParisTech and the École Polytechnique on applying differential geometry techniques. We analyze the geometry of bundles of neuron axons, and we try to approximate them as closely as possible without losing information. We are working on algorithms which will be able to rapidly compare two sets of tractography data. When we have similar sets of data, we try to aggregate them, again trying not to lose information. It is important to realize that if you have a database of a cohort of one thousand patients, it can take days of calculation using very powerful computers to compare their tractographies in order to find averages or main variations.
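To make the idea of comparing fibers concrete, here is a minimal sketch using a symmetric mean-closest-point distance between two fibers sampled as 3D polylines; this is a standard elementary metric, not the project’s differential-geometry machinery, and the fibers are synthetic.

```python
# A minimal sketch: comparing two fiber tracts represented as 3D polylines
# with a symmetric mean-closest-point distance.

import numpy as np

def mean_closest_point(a, b):
    """Average distance from each point of curve a to its nearest point on b."""
    # a, b: (n, 3) and (m, 3) arrays of points sampled along the fibers
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # (n, m) distances
    return d.min(axis=1).mean()

def fiber_distance(a, b):
    return 0.5 * (mean_closest_point(a, b) + mean_closest_point(b, a))

t = np.linspace(0, 1, 50)[:, None]
fiber1 = np.hstack([t, t**2, np.zeros_like(t)])   # a synthetic curved fiber
fiber2 = fiber1 + np.array([0.0, 0.0, 0.1])       # a nearby parallel fiber
print(fiber_distance(fiber1, fiber2))             # -> ~0.1
```

Computing such pairwise distances across millions of fibers is exactly what makes whole-cohort comparisons so expensive, hence the project’s focus on approximation and aggregation without information loss.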

Who are you collaborating with on this project to obtain the tractography data and study the needs of practitioners?

PG: We use a high-quality, freely accessible database of healthy individuals, called the Human Connectome Project. We also collaborate with clinicians at the Pitié-Salpêtrière, Sainte-Anne and Kremlin-Bicêtre hospitals in the Paris region. These are radiologists, neurologists and neurosurgeons. They provide their experience of the issues with which they are faced. We are initially focusing on three applications: Tourette syndrome, multiple sclerosis, and surgery on patients with tumors.



Bitcoin: the economic issues at stake

Patrick Waelbroeck, Institut Mines-Télécom (IMT)

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]C[/dropcap]ryptocurrencies like Bitcoin only have value if all the participants in the monetary system view them as currency. A cryptocurrency must therefore be rare, in the sense that it must not be easily copied (a problem equivalent to counterfeit banknotes for traditional currencies).

This requirement is met by the Bitcoin network, which ensures no double-spending occurs. In addition to the value linked to the acceptance of the currency, Bitcoin owes its value to a variety of economic mechanisms linked to the analysis of its supply and demand.

Bitcoin supply

The issuance of currency in the primary market

The creation of Bitcoins is determined by the mining process. Each block that is mined generates Bitcoins. The protocol stipulates that the reward per mined block is divided by 2 every 210,000 blocks, resulting in a total of 21 million Bitcoins in circulation (excluding those that are lost). This monetary rule is monitored by the Bitcoin Foundation consortium, as we will discuss later in this article. The monetary rule can therefore be modified to respond to fluctuating market conditions, which can result in a hard fork.
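The 21 million cap follows directly from this halving rule as a geometric series; a minimal sketch of the calculation:

```python
# 210,000 blocks at 50 BTC per block, then 210,000 at 25, and so on:
# a geometric series converging to 2 x 210,000 x 50 = 21 million BTC.

BLOCKS_PER_ERA = 210_000
INITIAL_REWARD = 50  # BTC per mined block in the first era

total = sum(BLOCKS_PER_ERA * INITIAL_REWARD / 2**era for era in range(64))
print(f"{total:,.0f} BTC")  # -> 21,000,000 BTC (ignoring sub-satoshi rounding)
```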

Electricity is the main component (over 90% according to current estimates) of a mining farm’s total costs. Böhme et al. (2015) assessed the Bitcoin network’s consumption at over 173 megawatts of electricity on a continuous basis. This represented approximately 20% of a nuclear power plant’s output and amounted to 178 million dollars per year (based on residential electricity prices in the United States). This amount may seem high, but Pierre Noizat considers that it is no more than the annual electricity cost of the global network of ATMs (automatic teller machines), estimated at 400 megawatts. Once we factor in the costs involved in manufacturing and putting currency and bank cards into circulation, the Bitcoin network’s electricity cost is not as high as it seems.
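These figures are mutually consistent, as a quick back-of-the-envelope check shows; the electricity price used below (about 12 cents per kWh) is an assumption backed out from the numbers quoted above, not a figure from the study itself.

```python
# Cross-checking the article's figures: 173 MW of continuous consumption,
# priced at roughly the 2015 US residential rate (an assumed ~$0.117/kWh).

power_mw = 173
hours_per_year = 365 * 24
energy_kwh = power_mw * 1000 * hours_per_year    # ~1.5 billion kWh per year
cost = energy_kwh * 0.117                        # assumed price in $/kWh
print(f"{energy_kwh / 1e9:.2f} TWh/year, ~${cost / 1e6:.0f} million/year")
```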

However, this cost may significantly increase as the network continues to develop, due to a negative externality inherent in mining: each miner that invests in new hardware increases his or her marginal revenue, but at the same time increases the overall mining cost, since the difficulty increases with the number of miners and their computation capacity (hash power).

The quest for bitcoin. xlowmiller/VisualHunt

Therefore, for the Bitcoin network, the difficulty of the cryptography problem that must be solved and approved by a proof-of-work consensus increases along with the network’s overall hash power. There is therefore a risk of over-investing in the mining capacity, since individual miners do not consider the negative effect on the entire network.

It is important to note that increasing the mining difficulty reduces mining incentives and increases verification time, thereby reducing the efficiency of the blockchain itself. This mechanism brings to mind the tragedy of the commons, in which shared resources (here, hash power) are depleted and end up maintained by only a handful of farms and pools, thereby nullifying the very principle of the public blockchain: decentralization.

There is therefore a risk that mining capacities will become greatly concentrated in the hands of a small group of players, thus invalidating the very principle of the blockchain. This trend is already visible today.

In the end, the supply of Bitcoins (and therefore monetary creation on the primary market) depends on the cost of electricity and the difficulty of the mining process, as well as on the governance rules determining the reward generated by a mined block.

Bitcoin’s value on the secondary market

Bitcoin can also be bought and sold on exchange platforms. In this case, Bitcoin’s value behaves like a financial investment, with financial players anticipating the prospect of financial gain and the factors that could cause Bitcoin to appreciate.

Bitcoin demand

The demand for cryptocurrency depends on several user concerns that are addressed below, starting with the positive factors and ending with the risks.

Financial privacy

Bitcoin accepted here. jurvetson on Visual Hunt

Governments are increasingly limiting the use of cash as part of their efforts to counter money laundering and the development of black markets. Cash is the only means of payment that is 100% anonymous. Bitcoin and other cryptocurrencies come in second, since the pseudonymous system used by Bitcoin effectively conceals the identity of the individuals making transactions. Furthermore, other cryptocurrencies, such as Zcash, go a step further, masking all the metadata linked to a transaction.

Why do people want to use an anonymous payment method? For several reasons.

First of all, this type of payment method prevents users from leaving traces that could be used for monitoring by the government, employers or certain companies (especially banks and insurance companies). Companies and banks use price discrimination practices that can sometimes work against consumers. Payment traces can also lead companies to push new commercial offers and targeted advertising that some see as a nuisance.

Secondly, paying with an anonymous payment method limits “sousveillance” (or inverse surveillance) by close friends and family, for example when a payment is made using a joint account.

Thirdly, making payment under a pseudonym makes it possible to maintain business confidentiality.

Fourthly, as with privacy protection in general, anonymity in certain transactions (for example, healthcare products or hospital visits) helps build trust in society, and is therefore of economic value. By enabling pseudonymity, Bitcoin brings added value in these various instances.

Bitcoin works in times of crisis, avoiding capital controls

Bitcoin emerged right after the financial crisis of 2008, a period that demonstrated the power of governments and central banks to control cash withdrawals and outstanding capital. There are very few means available for avoiding these two institutional constraints. Bitcoin is one such means: even if cash withdrawals are prohibited, Bitcoin owners can still pay using their private key.

Bitcoin imposes discipline on governments

Bitcoin (and the same is true for other cryptocurrencies) can be considered a monetary alternative that is not controlled by a central bank. Some economists, like F. Hayek, see such alternative currencies, competing with the official currency, as a means of imposing discipline on governments that might be tempted to use inflation to finance their debt. If that happened, consumers and investors would no longer use the official currency and would instead purchase the alternative currency, creating deflationary pressure on the official currency.

Security-related network externalities

The level of security increases with the number of network nodes, since each node increases the computation power required to breach the blockchain’s security (through a 51% attack, double-spending, or denial of service, DoS). Furthermore, a DoS attack is especially hard to stage, since it is so difficult to determine who the recipient is. Positive network externalities therefore exist: Bitcoin’s value increases with the number of nodes participating in the network.

Indirect network externalities related to payment method

Bitcoin is a payment method, just like cash, debit cards and Visa/Mastercard/American Express cards. Bitcoin can therefore be understood using the multi-sided market theory, which models situations where two groups of economic players benefit from positive crossed externalities. The consumer who chooses a payment method for a purchase is happy when it is accepted by the merchant. In the same way, merchants are eager to accept a payment method that customers possess. Consequently, the dynamics of multi-sided markets result in virtuous cycles that can experience a slow inception phase followed by a very fast deployment phase. If Bitcoin experienced this type of phase, its value would enter a period of acceleration.

A Bitcoin bubble? duncan on Visual Hunt, CC BY-NC

The risks

Among the factors that reduce the demand for Bitcoins, the most prominent are regulatory risks. On the one hand, a State could order that capital gains generated from buying and selling Bitcoins be declared. On the other hand, Bitcoins can be used in regulated sectors (such as insurance and banking), and their use could therefore be regulated as well. Finally, there is always the risk of losing the data on the hard drive where the private key is stored, resulting in the loss of the associated Bitcoins, or of a State forcing access to private keys for security reasons.

However, the greatest risk involves the governance of the Bitcoin network.

In the event of a disagreement on how the communication protocol should develop, there is a risk that the network could split into several networks (hard fork) with currencies that would be incompatible with each other. The most important issue involves the choice of the consensus rule for validating new blocks. A consensus must be reached on this consensus, which the technology itself appears unable to provide.

Conclusion

Bitcoin’s economic value depends on many positive economic factors that could propel the cryptocurrency into a period of sustained growth, which would justify the current surge in its price on exchange markets. However, the risks related to the network’s governance must not be overlooked, since trust in this new currency depends on it.

Patrick Waelbroeck, Professor of Economics at Télécom ParisTech, Institut Mines-Télécom (IMT)

The original version of this article (in French) was published on The Conversation.


 


A third ERC grant in 3 years at EURECOM

Getting a grant from the European Research Council is not an easy task but this is what Davide Balzarotti, Professor in the Security Department, has just accomplished. He is the third EURECOM professor to obtain an ERC grant in the past 3 years.

 


Davide, you just got an ERC Consolidator grant, one of the most prestigious research grants in Europe. What is your feeling today?

Everybody knows it is one of the most selective grants in Europe, so I’m obviously very proud of that. It is definitely a major step in my career. It is an important recognition for the efforts I have made to get this grant and for the relevance of the project I presented. Plus, I was told there are only 329 researchers across Europe – and 38 researchers in France – who got this grant this year, so I am particularly honoured to be one of them. I am also very happy for EURECOM since it has been awarded one ERC grant every year for the past 3 years… Considering there are only 24 professors, it is a real success!

 

Will this grant change your day-to-day life as a researcher at EURECOM?

I am sure it will! In different ways even. First, I won’t have to worry about getting money for the next few years. The Consolidator grant is a five-year grant that represents €2 million. This grant is not only generous, it also offers recognition and visibility. In fact, the two other ERC grantees at EURECOM – David Gesbert & Petros Elia – explained to me that I will certainly be more solicited by the research community. It will also give me a lot of independence and creative freedom to conduct the project for which I got this grant: BITCRUMBS – Towards a Reliable and Automated Analysis of Compromised Systems. I will dedicate 70% of my time to the project, but I can manage it the way I want depending on the people I will work with. I actually need to hire a team of seven researchers – five PhD students and two post-docs – and one engineer. On top of that, I will be involved in the EURECOM ERC committee that helps scientists benefit from the experience of those who have already received such grants. This committee actually helped me a lot in writing my proposal, so I look forward to helping my colleagues in return.

 

BITCRUMBS seems to be a ground-breaking project in the computer security area. Could you explain its main objective?

BITCRUMBS is actually a brand new way of addressing computer security issues. And this ERC grant will help me pursue very ambitious research objectives with this project, which covers a wide range of digital security areas. I hope our results will change the way digital security is managed in the future. The main objective of BITCRUMBS is to rethink what we call “incident response” (IR). It is clear that research on prevention and detection helps make devices more secure, but since a 100% secure system does not exist, improving IR can be very useful too. Incident response addresses the aftermath of a digital security breach which, if not handled properly, can lead to a data breach or a system collapse. We all know the risk of security breaches is now higher than ever. Attackers frequently break into corporate networks, government services and even critical infrastructures. Almost half of computers worldwide are infected by malware. A voting machine can be altered to rig the results of an election, a connected car can be hacked to drive off a cliff, and a security camera can be controlled over the internet to spy on our homes and families. The problem is that we do not have the tools to analyze these attacks and understand their causes! This has to change.

With BITCRUMBS, I want to give investigators the possibility to quickly verify the state of compromised systems and help citizens trust the result of computer forensic investigations. In the future, I believe we should design digital systems the way we design airplanes – secure against crashes but also equipped with black boxes to collect all the data required to support an incident investigation.

 

What is your strategy to reach this objective?

I want to propose a more scientific and comprehensive methodology to analyze compromised systems. This should be done in three steps. The first part of the project will focus on measuring the effectiveness and accuracy of the techniques currently used to analyze compromised systems, and on assessing the reliability of their data sources. This will help strengthen the theoretical and scientific foundations of IR techniques. The second part of the project will focus on the design and implementation of new automated analysis techniques able to cope with advanced threats and the analysis of IoT devices. These techniques will have to be robust, scalable and generic – capable of working on different classes of devices. Of course, results given by these new techniques will need to be reliable and based on a solid theoretical foundation. The last step will introduce a new “forensic analysis by design” methodology. My goal is to provide a set of guidelines for the design of future systems and software – to help developers provide the required information to support the analysis of compromised systems.

 

What about the scientific and technological impacts?

I hope research conducted in BITCRUMBS will have a long-lasting impact – not only scientific – on the area of incident response and on the way we analyze compromised systems. First, BITCRUMBS will bring a scientific foundation to IR, based on repeatable experiments and precise measurements of the reliability of data and techniques used in current investigations. It will also have a practical impact, since it will produce open source tools and improve existing software that is regularly used by companies and law enforcement to deal with computer attacks. Last but not least, BITCRUMBS will have an impact on our society. Improving the IR process will increase the trust that citizens have in the results of digital investigations. In order to clearly show the impact of BITCRUMBS in different fields and scenarios, we will address our objectives using real case studies borrowed from traditional computer software and embedded systems.

 

What are the main challenges you will be facing in BITCRUMBS?

Like any very broad project, BITCRUMBS’ success depends on a lot of factors. From a scientific point of view, it mainly depends on combining very different research skills, including memory forensics, embedded systems security, malware and binary analysis, distributed systems, and operating system design and defenses. I have considerable experience in each of these research areas, but in order to reduce the risks, I have already secured key collaborations with leading universities and security companies, so I can find research partners from different areas to work with. The other potential risk is the possible failure to develop some of the techniques I have envisioned. It is actually a very common risk in research projects that introduce novel solutions. For this reason, for each disruptive approach I would like to develop, I have also thought of less risky techniques with which I have experience, and I have already conducted some investigation to evaluate the feasibility of a few ideas. But above all, one of the main challenges will be finding motivated postdocs in digital security willing to work in Europe. Most PhD students go to the US for their postdoc or are hired by security companies offering good conditions and interesting opportunities. I hope BITCRUMBS’ challenges and potential results can attract some of them.

[divider style=”dotted” top=”15″ bottom=”15″]

The original version of this article was published on the EURECOM website.

[divider style=”dotted” top=”15″ bottom=”15″]



DessIA: Engineering of the Future with Artificial Intelligence

What is the best architecture for the gearbox of a hybrid car? If an engineer had to answer that question, he would consider a handful of possibilities based on what already exists on the market. But the startup DessIA takes a whole different approach. Its artificial intelligence algorithms enable it to consider billions of different architectures to find the optimum configuration. The software developed by the young company digitally builds all the possible structures using the necessary components. The performance and feasibility of the architectures built this way are assessed, and the design space is intelligently explored to reduce the number of architectures that need to be tested physically. Automated, smart sorting keeps only the best architectures, as illustrated in the sketch below. In addition to the possibility of analyzing considerably more models than a human could, DessIA’s advantage is that the layouts created with its components are radically different from what already exists. “When we present our approaches to manufacturers, many of them say this is exactly the way they want to work, but they have no idea where to start,” say Pierre-Emmanuel Dumouchel and Steven Masfaraud, co-founders of the startup incubated at ParisTech Entrepreneurs.
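As an illustration of this generate-and-test logic, here is a minimal sketch that enumerates gear-train candidates from a small catalog, prunes infeasible ones early and keeps the best; every component, constraint and objective here is invented for the example and bears no relation to DessIA’s actual models.

```python
# A minimal, purely illustrative generate-and-test sketch: enumerate candidate
# architectures from a component catalog, discard infeasible ones, rank the rest.

from itertools import product

GEAR_RATIOS = [1.5, 2.0, 2.5, 3.0]  # hypothetical catalog of stage ratios
MAX_TOTAL_RATIO = 12.0              # hypothetical feasibility constraint
TARGET_RATIO = 9.0                  # hypothetical design objective

def candidates(stages=3):
    """Enumerate all gear-train architectures with the given number of stages."""
    for ratios in product(GEAR_RATIOS, repeat=stages):
        total = 1.0
        for r in ratios:
            total *= r
        if total <= MAX_TOTAL_RATIO:     # prune infeasible designs early
            yield ratios, total

best = min(candidates(), key=lambda c: abs(c[1] - TARGET_RATIO))
print(best)  # the feasible architecture closest to the target ratio
```

At industrial scale the catalog and constraints are vastly richer, which is why intelligent exploration of the design space, rather than brute force alone, matters.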

For now, DessIA specializes in subjects related to the transmission of mechanical power. It can work both on gearboxes for cars and on systems for transferring energy between a helicopter’s turbines and blades. The field itself is vast, and reflects the experience of its two founders, former employees of PSA. The issues can even include the mechatronic systems of complex electrically motorized mechanisms. The startup’s applications are limited to this field because the algorithms’ work must be guided by thorough knowledge of the sector. Still, the two founders are not ruling out the possibility of someday moving towards providing assistance in the design of electrical or hydraulic systems. But not until a few years from now.

By remaining focused on mechanical systems, the young company has opened up many opportunities. DessIA’s objective is to go beyond the mere optimization of architectures. Once the best structure has been determined, the ideal solution would be a very simple way of obtaining a 2D industrial plan, or even a 3D CAD model to integrate directly into computer-aided design software. The two founders intend to achieve this outcome by the end of 2018. If they succeed, they could redefine how mechanical systems are designed at the industrial level, from the reflection phases to the drawing of the part.

 

[divider style=”normal” top=”20″ bottom=”20″]

Pierre-Emmanuel Dumouchel worked at PSA for 10 years. After supervising Steven Masfaraud’s thesis for three years, they decided to partner together to create DessIA. They aim to simplify the design process for engineers through a breakthrough approach based on artificial intelligence.

[divider style=”normal” top=”20″ bottom=”20″]


Will health apps soon be covered by health insurance?

Charlotte Krychowski, Télécom École de Management – Institut Mines-Télécom; Meyer Haggège, Grenoble École de Management (GEM); and Myriam Le Goff-Pronost, IMT Atlantique – Institut Mines-Télécom

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]“A[/dropcap]pproval”. It has now been a year since the French National Authority for Health (HAS) reached a positive conclusion on whether the Diabeo application could be reimbursed by national health insurance. The application is designed to help diabetic patients with dosages and ongoing treatment. This is a first for mobile applications!

The actual ruling of whether the application can be reimbursed, however, depends on the publication of results of a medical and economic study being carried out on the tool. The Telesage study, launched in 2015, includes 700 diabetic patients in France and should indicate the effectiveness of the measure.

Over recent years, there has been a worldwide explosion of mobile applications dedicated to health. Research 2 Guidance, a company specializing in analyzing this market, estimates their number at 259,000 in 2016, compared with 100,000 a year earlier.

Apps for physical exercise, counting calories and making doctor’s appointments

They have many different uses: coaching to encourage physical exercise or healthy eating, counting calories, making doctor’s appointments, monitoring performance in sports, offering diagnoses, monitoring chronic diseases such as diabetes and, soon, cancer with Moovcare, an application designed to detect relapses after a lung tumor. Of course, not all these applications carry the possibility of being reimbursed by Social Security services. At this point, those recognized by health authorities as medical devices are rare. These are applications that have received a CE marking, issued by the ANSM (Agence Nationale de Sécurité du Médicament et des produits de santé), and whose use is reserved for diagnostic or therapeutic purposes. For such applications the technical requirements are higher, as the health of patients is at stake. For example, an application that allows users to take a photo of a mole in order to evaluate the risk of melanoma (skin cancer) was not considered a medical device, as its publisher did not guarantee the validity of the result and explained that the application was solely educational.


Sports performance monitoring applications are very popular amongst jogging fans. Shutterstock

 

Diabeo, an app used by both patients and nurses

Diabeo is an application for monitoring diabetes, labelled a class IIb medical device and available only by prescription. It was developed by the French company Voluntis, in collaboration with the Center for Study and Research into Improving the Treatment of Diabetes (CERITD) and the French pharmaceutical lab Sanofi-Aventis. It provides patients with a “connected” record of their blood sugar levels (glycemia). The application is coupled with a patch stuck to the arm and a small device that reads blood sugar levels. It is used by both the patient and the nursing team. Diabeo allows patients to adjust the dose of insulin they need to inject, especially at meal times, based on the treatment prescribed by their doctor. The application also acts as a motivator, supplying patients with health practices that will help keep their illness under control.

The nursing team, on the other hand, receives reports on the patient’s blood sugar levels in real time. Alerts are triggered when they go over certain thresholds. This system facilitates continuous monitoring of the patient, allowing them to arrange appointments if their treatment needs adjusting.
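A minimal sketch of this threshold-based alerting logic follows; the thresholds are illustrative placeholders (in Diabeo, the actual values follow the doctor’s prescription), and the function names are invented for the example.

```python
# A minimal sketch of threshold-based alerting, with illustrative thresholds.

HYPO_THRESHOLD = 0.70   # g/L: below this, hypoglycemia alert (illustrative)
HYPER_THRESHOLD = 2.50  # g/L: above this, hyperglycemia alert (illustrative)

def check_reading(patient_id, glycemia_g_per_l):
    """Forward an alert to the nursing team when a reading crosses a threshold."""
    if glycemia_g_per_l < HYPO_THRESHOLD:
        return f"ALERT {patient_id}: hypoglycemia ({glycemia_g_per_l} g/L)"
    if glycemia_g_per_l > HYPER_THRESHOLD:
        return f"ALERT {patient_id}: hyperglycemia ({glycemia_g_per_l} g/L)"
    return None  # within range: logged for the real-time report, no alert

print(check_reading("patient-042", 0.55))
```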

This app is particularly useful at a time when the incidence of diabetes is skyrocketing while the number of doctors is on the decline.

Patient empowerment

The example of Diabeo illustrates the benefits we can draw from mobile health, or “m-health”. In the first instance, this allows us to improve the effectiveness of treatment through a personalized monitoring system and increased involvement of the patient in their own treatment, something we call “patient empowerment”. M-health also improves the patient’s quality of life as well as that of those around them.

Mobile health can also facilitate the transfer of information to medical organizations, allowing health professionals to concentrate on their core activity: providing healthcare. Continuous monitoring of patients ultimately reduces the risk of hospitalization and, should it occur, the average length of the stay. This could have a significant impact on public spending, especially as hospitals are being pushed to tighten their belts.

With treatments getting better and average lifespans getting longer, chronic illnesses now account for a growing share, and indeed the majority, of our spending on healthcare. Public healthcare must therefore change its mentality, from purely providing treatment to focusing on prevention and the coordination of care.

Mobile health solutions may ease this transition. For example, Belgium released €3.5 million at the start of 2017 for a six-month experiment in reimbursing 24 health apps and mobile devices that allow users to monitor or treat patients from a distance. The Belgian government’s objective is to learn from these pilot projects before extending the reimbursement program in 2018.

The Medical Board gives its position

Until now, France has been falling behind in the use of digital health technology, or “e-health”, but it now seems ready for a fresh approach. The country is taking on board the advice given by HAS on Diabeo, as well as a report to the National Assembly in January proposing that Social Security partially cover the cost of connected objects for high-risk populations. Along the same lines, the French National Medical Board (CNOM) has stated it is in favor of national health insurance coverage, provided that the evaluation of applications and connected objects shows benefits for health.

Nevertheless, several conditions are necessary for mobile applications to be able to generate the expected health benefits. In terms of the State, an absolute prerequisite is the regulation of health-related data, to guarantee confidentiality.

Additionally, health authorities must endeavor to evaluate connected medical devices faster. In total, ten years passed between the development of Diabeo (clinical tests started in 2007) and the positive response on its reimbursement issued by the National Authority for Health (HAS). The current time taken for evaluations is out of sync with the rapid pace at which digital technology is progressing. This issue is also being faced by the American equivalent of HAS, the Food and Drug Administration (FDA).

 


The application Diabeo is aimed at people suffering from diabetes, but also at doctors, who can receive blood sugar level reports from their patients in real time. Shutterstock

Introducing digital technology when training doctors

We must also amend the payment system for health professionals. Fee-for-service, as is practiced today, forms part of a treatment-based mentality, and does not encourage investment in prevention.

Using health apps also requires reorganizing training systems, for example by introducing teaching on digital technology in medical studies and by creating training courses for the new professions that may emerge in digital healthcare. In the case of Diabeo, for example, nurses will need to be trained in the remote monitoring of diabetes.

In terms of businesses, the structuring of the sector must first and foremost continue. France is a dynamic breeding ground for start-ups in the e-health sector, which will surely require better coordination. The creation of structures such as the e-Health France Alliance or France eHealthTech is a first step towards allowing French businesses to gain visibility abroad and establishing a dialogue with public authorities in France.

Linking start-ups with pharmaceutical labs

Fundamentally speaking, beyond technological innovation, these companies must also innovate in their economic models. This may occur through alliances with major pharmaceutical labs searching for new paths to growth. This is the strategy Voluntis successfully followed, not only in its close collaboration with Sanofi to produce Diabeo, but also in other therapeutic sectors, collaborating with Roche and AstraZeneca.

New economic models may call for private funding, for example from health insurance companies. These models may implement variable reimbursement rates, depending on results obtained by the app designers for a target population on predefined criteria, for example, a lower rate of hospitalization or better health stability in patients.

It seems likely that the State, by expanding the legislative framework and rethinking traditional economic models, will benefit from the potential offered by these technological advances, as will the public.

[divider style=”dotted” top=”20″ bottom=”20″]

Charlotte Krychowski, Lecturer in Strategic Management, Télécom École de Management – Institut Mines-Télécom; Meyer Haggège, Post-Doctoral Researcher in Strategic Management and Innovation, Grenoble École de Management (GEM); and Myriam Le Goff-Pronost, Associate Professor, IMT Atlantique – Institut Mines-Télécom

The original version of this article was published in French on The Conversation.

 

 

With Smarter Time, Emmanuel Pont offers a powerful tool for solving organizational problems.

Smarter Time: your life in your pocket

The start-up Smarter Time participated in the Lisbon Web Summit from November 6-9. The company, incubated at ParisTech Entrepreneurs, offers an activity and time management application which helps users better organize their day-to-day lives, both personally and professionally.

 

“I’d love to, but I don’t have time!” This is probably one of the most telltale phrases of a lack of organization. New technologies are creating more extra time for us than ever before, especially thanks to faster transport and communication methods. The problem is that we don’t know how to manage this time. Checking social media, for example, can fill several hours of our day without us even realizing. Emmanuel Pont is the founder of the app Smarter Time, which allows users to measure and analyze their time management on a daily basis. He demonstrates the concept through client testimonies: “Studies show that people who feel overloaded with work actually have less to do than they think. We help them to understand that they are simply poorly organized.”

Helping people make this kind of analysis was his reason for founding the start-up, which is today incubated at ParisTech Entrepreneurs. The flourishing business was present for the second consecutive year at the Lisbon Web Summit, November 6-9, with the FrenchTech delegation. The app uses artificial intelligence technology and machine learning to monitor and optimize the user’s daily activities, whether personal or professional. “Every day Facebook and Google use these kinds of techniques to find out more about us and to encourage us to waste our time on their services”, Emmanuel Pont remarks. “I wanted to reverse that by helping people realize how they really manage their time”, he continues.

Smarter Time can locate exactly which room a smartphone is in. Once the app has been downloaded, the user does an initial tour of their house or their workplace, indicating what rooms they are entering as they go. Whether in the kitchen at home or at a desk in the workplace, each room has a unique Wi-Fi footprint defined by the intensity of the signals that it receives from nearby connection points. The app records this footprint and will then be able to recognize which room the smartphone is in.
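A minimal sketch of this kind of Wi-Fi fingerprinting follows; the room names, access points and signal strengths are invented for the example, and Smarter Time’s actual matching is certainly more sophisticated.

```python
# A minimal sketch of Wi-Fi fingerprinting: each room is characterized by the
# signal strengths (RSSI, in dBm) it receives from nearby access points, and a
# new scan is matched to the closest recorded footprint.

import math

FOOTPRINTS = {  # learned during the user's initial tour of the house
    "kitchen": {"ap_livingbox": -45, "ap_neighbor": -80},
    "office":  {"ap_livingbox": -70, "ap_neighbor": -55},
}

def distance(scan, footprint):
    """Euclidean distance in signal space; missing APs count as very weak."""
    aps = set(scan) | set(footprint)
    return math.sqrt(sum((scan.get(ap, -100) - footprint.get(ap, -100)) ** 2
                         for ap in aps))

def locate(scan):
    return min(FOOTPRINTS, key=lambda room: distance(scan, FOOTPRINTS[room]))

print(locate({"ap_livingbox": -48, "ap_neighbor": -78}))  # -> "kitchen"
```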

Whenever the user changes rooms, Smarter Time associates this movement with the most likely activity depending on the user’s agenda, the time of day or past habits. Therefore, if the user routinely attends a meeting at 10am, or they always take their coffee break in a certain room, the app will soon automatically be able to detect these patterns. If it gets something wrong, the user can modify the name of the activity with a simple click. The contextual intelligence algorithms allow the app to very successfully associate the right activity to the appropriate moment in the day.

Manage activities with complete anonymity

Every activity can therefore be learnt and recorded by the app in a precise way: transport time, work, leisure, time with family, and so on. Still more detail can be added to each of these categories, for example by separating time spent in meetings from time spent at one’s desk, or time spent doing sport from time spent reading. Based on the “freemium” model, the app offers users the chance to upgrade from the free version to a subscription, adding a computer plug-in which measures time spent on each website or application.

In any case, “the user remains the master of their data”, Emmanuel Pont assures. “All algorithms operate exclusively within the smartphone, nothing leaves”, he explains. Users can also choose to save their data online to make it more secure. In any case, “data is never sold and remains securely contained”, the founder states.

By shedding light on their activities, the app allows users to better analyze their personal organization on a daily basis. They are then free to create time-management objectives that they will be reminded of regularly if these are not achieved. The start-up hopes to continue developing by offering users analyses and automatic advice through the app, making it a kind of electronic coach. “We are currently studying the extent of knowledge on sleep, to be able to, in time, recommend good practices to follow and suggest to users how to improve their habits”, Emmanuel Pont explains. The start-up has one objective in mind: enabling users to solve their concentration and organization problems. “When people discover to what extent they are wasting their time, they are generally happy to ditch social media in order to spend more time with their children”, he concludes.


5G: the new generation of mobile is already a reality

[dropcap]W[/dropcap]hile 4G is still being rolled out, 5G is already in the starting blocks. For consumers whose smartphones still show “3G” alongside a reception indicator which rarely makes it over two bars out of four, this can be quite perplexing. Is it realistic to assure them that 5G is already a reality, when they can barely see proof of 4G? It is. And not only because manufacturers argue that the roll-out of 5G will be faster and more homogeneous. Without casting doubt over this promise, it is important to consider the economic factors at play behind the rhetoric and positioning.

5G is indeed a reality. This is essentially because mobile technologies are far more advanced and more efficient than they were in the early days of 4G. Researchers have been working on concrete solutions for providing a very high-speed mobile service, and these are now ready. Millimeter wave technology has proven itself in laboratories, and the first prototypes of networks using this technology are beginning to appear.

The saturated 4G frequency bands can make way for the growth in mobile terminals and communications. This is one of the outcomes of the European project Metis, which among other things led to the discovery of more efficient waveforms. Error-correcting codes are also ready to handle faster speeds. Researchers are already working on pushing speeds even further, perhaps to deal with what comes after 5G.

Another reason for taking 5G seriously is that it implies much more than just a faster service for end users. The goal is not only to increase the speed of data transfer between users, but also to meet new usage needs, such as communication between machines. Without a network to handle communications between connected objects, there can be no Smart City. No smart, communicating cars, either. Researchers have been working for years to create new network architectures to satisfy potential new actors.

In the end, the question is not whether 5G is a reality or not, it is more about understanding the changes it will bring when it is released commercially in 2020, as the European Commission wishes. How will the current actors adapt to the changes in the telecommunications market? How will new actors find their place? Will 5G be an evolution, or rather, a revolution?

[divider style=”normal” top=”20″ bottom=”20″]
