Impact of Proactive Cyber Threat Intelligence on Exploits from the Dark Web

Lawrence J. Awuah


Abstract: Defending against the ever-growing cyber threat landscape requires linking exploits from the Dark Web to known vulnerabilities, with the aim of proactively deploying Cyber Threat Intelligence (CTI) solutions, built on a Deep Learning (DL) model, the Exploit Vulnerability Attention Deep Structured Semantic Model (EVA-DSSM), to maximize data protection, privacy, and security.


A review of “Linking Exploits from the Dark Web to Known Vulnerabilities for Proactive Cyber Threat Intelligence: An Attention-based Deep Structured Semantic Model”. By Samtani, S., Chai, Y., & Chen, H. (2022). MIS Quarterly, 46(2), 911-946. DOI: 10.25300/MISQ/2022/15392

Summary: “Black hat hackers use malicious exploits to circumvent security controls and take advantage of system vulnerabilities worldwide, costing the global economy over $450 billion annually. While many organizations are increasingly turning to cyber threat intelligence (CTI) to help prioritize their vulnerabilities, extant CTI processes are often criticized as being reactive to known exploits. One promising data source that can help develop proactive CTI is the vast and ever-evolving Dark Web. In this study, we adopted the computational design science paradigm to design a novel Deep Learning (DL)-based Exploit Vulnerability Attention Deep Structured Semantic Model (EVA-DSSM) that includes bidirectional processing and attention mechanisms to automatically link exploits from the Dark Web to vulnerabilities. We also devised a novel Device Vulnerability Severity Metric (DVSM) that incorporates exploit postdate and vulnerability severity to help cybersecurity professionals with their device prioritization and risk management efforts. We rigorously evaluated the EVA-DSSM against state-of-the-art non-DL and DL-based methods for short text matching on 52,590 exploit-vulnerability linkages across four testbeds: web application, remote, local, and Denial of Service. Results of these evaluations indicate that the proposed EVA-DSSM achieves Precision at 1 scores 20% – 41% higher than non-DL approaches and 4% – 10% higher than DL-based approaches. We demonstrated the EVA-DSSM’s and DVSM’s practical utility with two CTI case studies: openly accessible systems in the top eight US hospitals and over 20,000 Supervisory Control and Data Acquisition (SCADA) systems worldwide. A complementary user evaluation of the case study results indicated that 45 cybersecurity professionals found the EVA-DSSM and DVSM results more useful for exploit-vulnerability linking and risk prioritization activities than those produced by prevailing approaches. 
Given the rising cost of cyber-attacks, the EVA-DSSM and DVSM have important implications for analysts in security operations centers, incident response teams, and cybersecurity vendors.” 

Keywords: cyber threat intelligence, deep learning, deep structured semantic models, vulnerability assessment, hacker forums, dark web, security operations, cybersecurity analytics 

The desire of researchers and subject matter experts to help organizations understand the complexity of attack vectors, and to support cyber defense with automated, machine-intelligence-driven incident response capabilities, has become critical in today’s world. We have reached a point where cybersecurity trainees, researchers, and professionals need to continuously gain insight into innovative cybersecurity solutions in the field. The fact that malicious actors consistently use hacking techniques to circumvent security controls and exploit system vulnerabilities in the current threat landscape motivated Samtani et al. [1] to develop a proactive Cyber Threat Intelligence (CTI) model from the perspective of the Dark Web. More to the point, pattern recognition, anomaly detection, and predictive analytics continue to offer the threat intelligence and cybersecurity analytics capabilities that are key ingredients in automated incident response and threat mitigation in the ever-evolving threat landscape.

Additionally, machine intelligence has become so ubiquitous and indispensable a tool, in both defensive and offensive operations, that it remains a valuable resource for cybersecurity leaders and device vendors. As part of their study, the authors developed a novel Deep Learning (DL)-based model, the Exploit Vulnerability Attention Deep Structured Semantic Model (EVA-DSSM), which comprises bidirectional processing and attention mechanisms capable of automatically linking exploits from the Dark Web to known vulnerabilities [1]. They also devised a Device Vulnerability Severity Metric (DVSM), incorporating exploit post date and vulnerability severity, to be employed by cybersecurity professionals in device prioritization and risk management activities. A high-level CTI framework that captures the EVA-DSSM and DVSM models is depicted in figure 1.
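The review does not reproduce the authors’ DVSM formula, but the idea of combining exploit recency with vulnerability severity can be sketched roughly as follows. The exponential decay, the half-life, and the `cvss` field are illustrative assumptions for the sketch, not the paper’s actual metric:

```python
import math
from datetime import date

def device_severity(vulns, today=date(2022, 6, 1), half_life_days=180.0):
    """Toy device-level severity score: each vulnerability contributes its
    CVSS base score, discounted by how long ago its exploit was posted.
    The exponential decay and 180-day half-life are illustrative choices."""
    score = 0.0
    for v in vulns:
        age = (today - v["post_date"]).days
        recency = math.exp(-math.log(2) * age / half_life_days)  # 1.0 = brand new
        score += v["cvss"] * recency
    return score

# A device with one fresh critical exploit outranks one with only stale mediums.
device_a = [{"cvss": 9.8, "post_date": date(2022, 5, 20)}]
device_b = [{"cvss": 5.0, "post_date": date(2020, 1, 1)},
            {"cvss": 6.1, "post_date": date(2019, 7, 4)}]
devices = {"A": device_a, "B": device_b}
ranking = sorted(devices, key=lambda d: device_severity(devices[d]), reverse=True)
print(ranking)  # device A ranks first
```

The point of any such metric is the ordering it induces over devices, which is what prioritization workflows consume.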

In another study, Zhu et al. [4] adopted a computational design science approach to develop a DL-based, hierarchical, multi-phase Activity of Daily Living (ADL) framework to address similar concerns. Yet others deployed a Tor-use Motivation Model (TMM), found a network shaped by illicit commerce and money laundering, and concluded that criminality on the dark web is driven more by greed and desire than by any particular political motivation [3]. These models and frameworks play key roles in emerging cybersecurity mitigation strategies.

Moreover, vulnerability assessment as part of the automated CTI process, coupled with analytics, yields the intelligence CTI professionals require to conduct initial triage of security incidents and plan mitigation strategies. Motivated by the dynamic threat landscape, the authors developed a CTI framework and compared the operational differences between the conventional DSSM and their proposed EVA-DSSM [1]. When the proposed EVA-DSSM was evaluated against both non-DL and DL-based methods for exploit-vulnerability linkage across the selected testbeds (figure 1), the DL-based technique achieved much higher precision than its non-DL counterparts.
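The headline metric in this evaluation, Precision at 1, simply asks how often the top-ranked candidate vulnerability is the true match for a given exploit. A minimal sketch of the computation (the similarity scores and identifiers below are invented toy data, not the paper’s):

```python
def precision_at_1(scores, true_links):
    """scores: {exploit_id: {vuln_id: similarity}}; true_links: {exploit_id: vuln_id}.
    Returns the fraction of exploits whose highest-scoring vulnerability
    is the true linked vulnerability."""
    hits = 0
    for exploit, candidates in scores.items():
        top = max(candidates, key=candidates.get)  # best-scoring candidate
        hits += (top == true_links[exploit])
    return hits / len(scores)

# Toy similarity scores for three exploits against two candidate CVEs.
scores = {
    "e1": {"CVE-A": 0.91, "CVE-B": 0.40},
    "e2": {"CVE-A": 0.55, "CVE-B": 0.62},
    "e3": {"CVE-A": 0.30, "CVE-B": 0.28},
}
truth = {"e1": "CVE-A", "e2": "CVE-B", "e3": "CVE-B"}
print(precision_at_1(scores, truth))  # e3's top pick is CVE-A, a miss, so 2/3
```

Because analysts typically act only on the single best-ranked linkage, Precision at 1 is a natural fit for this triage setting.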

Furthermore, a user evaluation of the CTI case studies indicated that a number of cybersecurity professionals found the EVA-DSSM and DVSM more effective for exploit-vulnerability linking and risk prioritization activities than conventional solutions. Specifically, professionals serving in the Security Operations Center (SOC), Incident Response (IR), Vulnerability Management (VM), and Operational Cybersecurity domains of practice found the EVA-DSSM and DVSM results more useful than those generated without the two models (figure 1). Given the rising cost of cyber-attacks, the EVA-DSSM and DVSM have perceived practical significance and important implications for analysts in, for example, security operations centers, incident response teams, and cybersecurity vendors.

In summary, the practical and theoretical significance of the proposed EVA-DSSM and DVSM models evidently benefits analysts in SOC and IR teams, as well as security operations vendors. From the preceding analysis, there is also evidence to suggest that DL-based machine intelligence, as the authors note, plays a key role in SOC-related engagements. To mitigate evolving threats efficiently, organizations should empower security operations teams, vendors, and leadership with automated AI-based solutions and the strategies needed for security orchestration and response processes that manage the complexity of the SOC ecosystem [1-2]. In other words, the ability to seamlessly automate and manage the complexity of security operations in a dynamic threat landscape remains an important challenge for security researchers, cybersecurity professionals, and cybersecurity vendors. Finally, the EVA-DSSM and DVSM models certainly have crucial implications for analysts in SOC-based environments and for cybersecurity vendors. Researchers and professionals alike have a major role to play in the search for broader cybersecurity solutions in the interest of society.

See the full review here: Research Summary-Exploits from the Dark Web


[1] Samtani, S., Chai, Y., & Chen, H. (2022). Linking Exploits from the Dark Web to Known Vulnerabilities for Proactive Cyber Threat Intelligence: An Attention-based Deep Structured Semantic Model. MIS Quarterly, 46(2), 911-946. DOI: 10.25300/MISQ/2022/15392

[2] Kinyua, J. & Awuah, L. (2021). AI/ML in Security Orchestration, Automation and Response: Future Research Directions. Intelligent Automation & Soft Computing, 28(2), 527–545. DOI:10.32604/iasc.2021.016240

[3] Dalins, J., Wilson, C., & Carman, M. (2018). Criminal motivation on the dark web: A categorisation model for law enforcement. Digital Investigation, 24 (1), pp. 62-71. 

Viewpoint – Non-consensual Pornography: How petty desire becomes a tragedy to an individual.

This article is the first of a new series of Viewpoints from Harvard Business School, Sorbonne Business School, and ESSEC Business School students and faculty. The Viewpoints section is dedicated to opinions and views that pertain to issues of broad interest to the cybersecurity community, typically, but not exclusively, of a nontechnical nature. Controversial issues will not be avoided but will be dealt with fairly. Authors are welcome to submit carefully reasoned “Viewpoints” in which positions are substantiated by facts or principled arguments. Moreover, this section periodically hosts editorial debates in a Point/Counterpoint format in which both sides of an issue are represented.

Non-consensual Pornography: How petty desire becomes a tragedy to an individual.

Viewpoint by Heeju ROH (Harvard Business School)

There was a woman.
She did an ordinary love.

It was not a love that bears all things, believes all things, hopes all things, and endures all things. But there was affection: she and her lover could not say each other’s name without smiling. There was trust. The two did not feel guilty about their own unspoken things. There were a lot of other things, and the love brought all of them. They thought, as it is commonly said, that they had fallen in love. However, that was wrong. By nature, love is not something you can consciously fall into. Love strikes people as if it were an accident. So it was rather obvious that it was also love that called the end.

As leftovers, they did not know how to deal with the situation, because they were both victims. Since the love had already left them, their frustration lost its direction and was destined for the wrong targets: each other. Through a time of hurting each other more and more, they somehow survived as two separate individuals. And that was supposed to be it. But one day, she was told that there were pictures and videos of her private life online. Records of their love, including evidence of their intimacy. Indeed, she could see two bodies. One of them had the same face as hers. However, there was a difference between the face on the monitor and the face she saw in the mirror every morning. The face on the monitor did not have dignity or self-respect. It did not have a name or an identity. It was merely visual material to facilitate ejaculation. Yet it was undoubtedly her face and body.

I admit it. The above case cannot possibly be the only background story of non-consensual pornography (NCP) in this big world. Maybe there are other victims who have been through a bad breakup, an abusive relationship, or some other terrible situation before the leak. But even if an uploader has built up aggression toward a victim, he or she does not thereby earn the authority to share a private moment with an unspecified mass of viewers. We should all agree that the distribution of such material cannot be justified under any circumstance, period. However, we are often misled by the name ‘revenge porn.’ We are misled into thinking that the victims deserve the revenge. More importantly, this perception produces a general tendency to highlight an interpersonal, emotional conflict between perpetrator and victim, while diluting the fact that NCP is a collective cybercrime. No wonder bystanders who are unlikely to commit such a crime show a certain level of approval of NCP [i].

So, do I want to claim that NCP is not really the result of ‘revenge’? Maybe, but that is not the point. Currently, the most frequently suggested strategies to stop NCP focus on victims protecting themselves. The reasons given are (1) that the victim must have consented to the intercourse itself, and (2) that, given the highly viral environment of online platforms, identifying the material and reacting instantly is better done by the first party, the victim, than by a third party such as law enforcement. This may also be why even well-respected media outlets recommend that victims “make sure that your face is not in the picture” or “use a more secure application.” [ii] This tendency is unfiltered evidence of our ignorance: ignorance of the magnitude of the damage to the victims, and ignorance of the nature of the situation as a crime. Society forces the victims to bear the burden of removal fees and legal procedures, all while dealing with PTSD, trust issues, and the hostile social perception that “you deserve it” [iii]. Compared to the severe physical and psychological pain of the victims, the purpose of NCP is ridiculously shallow: amusement.

In the online world, we easily let ourselves indulge. The online world is the perfect place for all of us to swim in a sea of our own gluttony, envy, greed, and lust. Of course, it is rather acceptable if the voyeurism is directed at something not ethically challenging, such as a mother’s recipe for banana cake. Sharing information is an accomplishment of the third industrial revolution. However, behind the curtain of anonymity, we can also consume other people’s private lives easily and casually. And as the word ‘we’ suggests, there is hardly ever a sole perpetrator in the online world. People commit this cyberbullying by creating, consuming, distributing, and profiting from NCP. When facing a collective wave of violent behavior, individual victims almost always fail to protect themselves. The examples of victory are an absolute minority compared with the entire number of victims. Victory has to become our social norm, the general outcome, the expected result.

I believe that laws, policies, and systems must stand at the front line of this battle. Making people feel safe from possible harassment is the first duty of any functioning state. If people can hurt others, and be hurt by technology-facilitated sexual violence, without any rational expectation of being rescued, that state of affairs resembles the war of all against all. To end this fight, we need more victim-focused responses: from the investigation processes, which are often traumatic for victims, to subsidies for erasing the materials and for punishing the distributors [iv]. Studies have been conducted and policies implemented. In reality, however, victims tend to rely on civil associations rather than on law enforcement, because civil associations tend to have more experience with such cases [v]. While NCP has become more accessible and affordable, the prosecutorial process has not become victim-friendly. Victims have to endure the ongoing tragedy until the legal process is over, and even then a clean slate is not guaranteed. As a result, victims are easily left in the blind spot of the system.

Screaming requires a lot more effort than you think. It is not a knee-jerk reflex. First the lungs have to be inflated as far as possible. Then your abdomen has to flatten and tighten in order to expel the air. At the moment of exhalation, the vocal cords vibrate to deliver the sound wave. It is the duty of the nasal cavity to raise the sound’s frequency. Finally, as quasi-verbal communication, this single-syllable sound has to deliver a message: somebody help me. Unfortunately, the brain often cannot orchestrate the process. It endeavors to send signals to your lungs, abs, and vocal cords, but they simply fail to do their work.

She felt that she had to scream the moment she found her pictures. If screaming were a cardio exercise, her brain would have sweated itself dry. But the brain cannot sweat. So something else did instead: her eyes released a vast amount of salty water. Taking that as a signal, the other body parts finally responded. But it was different from what she had imagined. The sound was low and growling, similar to the sound every creature makes in a time of tragedy. It was an ordinary end to an ordinary love. But because tragedies do not have eyes, they sometimes just barge into an ordinary life. So her ordinary life suddenly became tragic.

On the website, she also found other women: women who likewise had faces and bodies without names or dignity. She wondered what had made all these women exposed. What had they done? And she realized that she already knew the answer: an ordinary love. They all did an ordinary love, no more, no less. Just an ordinary love.

* This article does not imply that all women are victims, nor does it generalize about all men.


[i] Lawson, K., “People Are Terrifyingly OK with Revenge Porn, New Study Finds,” Broadly, March 3, 2017.

[ii] Young, S., “How to protect yourself against revenge porn,” Independent, August 24, 2017.

[iii] Bates, Samantha Lynn (2015). “Stripped”: An Analysis of Revenge Porn Victims’ Lives after Victimization.


[iv] Dickson, Alyse (2016) “‘REVENGE PORN’: A VICTIM FOCUSED RESPONSE,” UNISA Student Law Review, Vol. 2.


[v] 정한라 (2013) “국내외 사이버폭력 사례 및 각국의 대응방안,” 한국인터넷진흥원

Viewpoint – Trolling: annoyance or real threat?

Trolling: annoyance or real threat?

Viewpoint by Daniel Grieb, Flora Guise, Léontine Paquatte (ESSEC Business School)


Macy’s 2008 Thanksgiving Day Parade, New York City: American music artist Rick Astley surprises spectators with a live performance of his 1987 song “Never Gonna Give You Up”. Leading up to his performance, 2008 saw the rise of a mass internet phenomenon called “Rickrolling”, in which millions of users were enticed to click on hyperlinks leading to the music video of Astley’s song. From this innocent internet prank, trolling has evolved into much more: during the 2016 US presidential election, concern about the impact of trolling on public opinion became evident. This paper explores the internet phenomenon of the “troll”, covering trolls’ motivations, their impact, and current, relevant examples of trolling.

As “trolling” is a rather recent phenomenon, there is still no clear, universally agreed-upon definition in the academic field. However, a common definition reflects the most frequently observed “trolling behavior”: it describes the act of agonizing others online “by deliberately posting inflammatory, irrelevant, or offensive comments or other disruptive content” “with no apparent (…) purpose”. [1] [2] The troll’s motivations fall into three categories: (1) personal enjoyment (pleasure seeking through trolling), (2) revenge (as a reaction to being trolled), and (3) thrill-seeking (deriving joy from others’ reactions to the trolling behavior). [3] Evidently, the current definition and the motivations associated with trolling focus on the individual level: they assume that trolling is done exclusively by individuals and with no external goal. However, numerous recent examples show that trolling has evolved.

Trolling basically manifests itself as malevolent, interpersonal, antisocial individual behavior. Concretely, it is about “deliberately [provoking], upsetting others by starting arguments or posting inflammatory messages on online comment sections.”[4] The manifestation of individual trolling in online comment sections can be observed in the growing phenomena of cyberbullying, identity theft, and cyberstalking, putting flesh on the Dark Tetrad personality traits: narcissism, sadism, Machiavellianism, and psychopathy.

Quantitative analyses have shown that the online context tends to exacerbate psychopathic behaviors: anonymity, normlessness, and asynchronicity on the internet put more psychological distance between trolls and their targets, encouraging sharper and more violent reactions than in an offline context.[5]

The impact of this individual cybertrolling can be seen in many aspects of some victims’ lives ever since the phenomenon appeared: the Youth Risk Behavior Surveillance System (YRBSS) identified behaviors such as drug use and unhealthy diets, among numerous other examples, linked with cyberbullying.[6] Even though social networks and interpersonal websites have implemented rules and means of impeding trolls, the line between bullying and justified opinion is not clear, allowing internet trolls to adapt their behavior without being reprimanded.

Looking at the harmful effects internet trolls can cause at the individual level, the threat of trolls at a societal level comes into view.

The digital transition has brought the Greek ideal of the public agora to a brand-new level: the internet has turned into a global, geography-free space where almost anyone can express an opinion.[7] Roger Silverstone designated it in 2006 as the “Mediapolis”[8], where people can gather and participate in the virtual debate without being physically present, recalling Marshall McLuhan’s theory of the “Global Village”.[9] It looks as if the internet has allowed us to become an egalitarian network society with a perfect level of freedom of speech. As it turns out, however, that space of freedom became the playground of trolls and haters, breaking down this utopia and calling into question models such as deliberative democracy. Trolls, fake news, and hate speech occupy so much space on forums that they may eventually silence some citizens, who do not want to become targets of trolls, or make citizens lose touch with what is true.[10] Disinformation, hate speech, and cyber-harassment have become real threats to democracies, as they impede the reasonable, objective public debate that is the basis of this political system. It is therefore necessary to take measures to resolve this problem without falling into censorship: a very sensitive, but essential, endeavor.

To illustrate this problem, consider the propaganda movement led by “Reconquista Germanica”, which became active during Germany’s last general election.[11] This group of online extremists used trolling to manipulate the election by spreading hate, fake news, and Kremlin propaganda. The techniques used in their “Blitzkrieg against the Old Parties” proved very efficient. First, they trolled their opponents by spreading illegally obtained, compromising private content, or even manipulated photos and collages, in the hope that these would become “viral hits”. They also tried to manipulate public opinion by conducting a “war on information” with disinformation, hateful memes, and bots sending automated messages. In this way, Reconquista gained great visibility and influenced opinion, particularly among a large proportion of undecided voters. Moreover, these techniques are becoming more and more professionalized and globalized, as some groups of activists claim to have influenced Russian, American, British, German, and French elections. They now apply very detailed action plans and have become organized worldwide. While it remains difficult to measure their real influence, trolls have become a cyber threat that should in no way be neglected.

It has become evident that internet trolling can be more than a simple annoyance: the organized, strategic deployment of “trolling tools” on social media, such as hate speech and doxing, can have a significant effect not only on the trolled “victims” but also on societies. Influencing public opinion has become the new goal of organized trolling networks, and their first deployments ahead of elections are already visible. While the lone internet troll may seem harmless, the influence and impact of an organized community of trolls should not be underestimated. As in many aspects of life, Paracelsus’ rule remains true, even on the internet: Sola dosis facit venenum, “the dose makes the poison”. And the instrumentalization of this dose by various interest groups has become a new form of cyber threat.


[1] [2] Buckels, E. E., et al. Trolls just want to have fun. Personality and Individual Differences (2014); P.1

[3] Cook, C., et al.: Under the bridge: An in-depth examination of online trolling in the gaming context.; P.10f.

[4] Gammon J., Over a quarter of Americans have made malicious online comments, (2014)

[5] Nevin, Andrew D., “Cyber-Psychopathy: Examining the Relationship between Dark E-Personality and Online Misconduct” (2015). Electronic Thesis and Dissertation Repository. 2926, P.170

[6] [7] Weichert S., From Swarm Intelligence to Swarm Malice: An appeal (2016)

[8] Silverstone R., Media and morality : on the rise of Mediapolis (2006)

[9] Mc Luhan M., Understanding media: The extensions of man (1964)

[10] Aro J., The cyberspace war: propaganda and trolling as warfare tools (2016)

[11] Von Hammerstein K., Höfner R., and Rosenbach M., Right-Wing Activists Take Aim at German Election, SPIEGEL Online (09/13/2017)


Literary references:

Aro J., The cyberspace war: propaganda and trolling as warfare tools (2016)

Buckels, E. E., et al. Trolls just want to have fun. Personality and Individual Differences (2014)

Cook, C., et al.: Under the bridge: An in-depth examination of online trolling in the gaming context (2014)

Gammon J., Over a quarter of Americans have made malicious online comments, (2014)

Mc Luhan M., Understanding media: The extensions of man (1964)

Nevin, Andrew D., “Cyber-Psychopathy: Examining the Relationship between Dark E-Personality and Online Misconduct” (2015). Electronic Thesis and Dissertation Repository. 2926, P.170

Silverstone R., Media and morality : on the rise of Mediapolis (2006)

Von Hammerstein K., Höfner R. and Rosenbach M., Right-Wing Activists Take Aim at German Election, SPIEGEL Online (09/13/2017)

Weichert S., From Swarm Intelligence to Swarm Malice: An appeal (2016)

Online references: Youth Risk Behavior Surveillance System page (2017), available at:

Merriam-webster, Definition of troll (2018), available at:

The Risk of Secrecy in Governmental Cybersecurity Program : Case Study of the Einstein Project

Charlotte Clément-Cottuz

This paper argues that the over-secretive nature of the national cybersecurity programs that protect government agencies actually hinders those programs, and demonstrates that a more transparent implementation could enhance their efficiency. This argument can appear paradoxical: logically, the more transparent a cybersecurity program is, the easier it should be for hackers to find loopholes in it and pursue their malicious intent. However, based on a case study of the US Einstein program, this paper demonstrates that the shortcomings of such programs are largely caused by unnecessary, exaggerated secrecy.

Einstein, formally the US National Cybersecurity Protection System, was developed by the United States Computer Emergency Readiness Team (US-CERT), the operational arm of the National Cyber Security Division of the US Department of Homeland Security (DHS). This department “has the mission to provide a common baseline of security across the federal civilian executive branch and to help agencies manage their cyber security risk” (CDT, 2009). Internationally, national governments have implemented similar programs to defend their national organisations against cyber offensives. In France, for example, the ANSSI (Agence Nationale de la Sécurité des Systèmes d’Information) ensures the cybersecurity of national public- and private-sector operators. Nevertheless, given the lack of information concerning the digital control and supervisory control and data acquisition systems (DC/SCADA) put in place by the ANSSI (Dila, 2013) or by other national governments across the globe, this post focuses on the US and its Einstein program.

More precisely, Einstein was developed to fulfil two key roles in federal government cybersecurity. First, as an intrusion detection capability, it detects cyberattacks against federal agencies by monitoring those agencies’ internet connections for specific predefined signatures of known malicious activity and anomalies, and it alerts US-CERT when network activity or host-based intrusions matching the predetermined signatures are detected. Second, Einstein was enhanced to also become an intrusion prevention capability that automatically blocks malicious traffic from entering or leaving federal civilian executive branch agency networks. To this end, Einstein has the capability of analysing the content of emails and other internet content (Gorman, 2009). This raises massive privacy questions. Indeed, there are no clear or transparent public guidelines about Einstein’s exact mission, who reads these emails, what tools are implemented against cyber threats, or which precise cyber threats are encompassed in such a vast definition (CDT, 2009). The US-CERT and the DHS therefore enjoy a great deal of legal leeway when they are questioned or held accountable, and overall they benefit from this lack of transparency (GAO, 2010) at the expense of Einstein’s users.
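Einstein’s detection role, as described above, is essentially signature matching over monitored traffic, with an alert raised on a hit and, in the later prevention variant, a block. A deliberately minimal sketch of that idea follows; the signature patterns and payloads are invented for illustration (real NCPS signatures are classified and far richer than substring matches):

```python
# Illustrative known-bad payload patterns, keyed by a made-up signature ID.
SIGNATURES = {
    "sig-001": b"cmd.exe /c",
    "sig-002": b"/etc/passwd",
}

def match_signatures(payload: bytes, signatures=SIGNATURES):
    """Return the IDs of all known-bad signatures found in a traffic payload."""
    return [sid for sid, pattern in signatures.items() if pattern in payload]

def inspect(payload: bytes):
    """Detection mode raises an alert; a prevention-mode deployment would
    additionally block the matching traffic."""
    hits = match_signatures(payload)
    if hits:
        return {"action": "alert", "signatures": hits}
    return {"action": "pass", "signatures": []}

print(inspect(b"GET /etc/passwd HTTP/1.1"))  # matches sig-002, so alert
print(inspect(b"GET /index.html HTTP/1.1"))  # clean traffic passes
```

The cross-agency sharing discussed below amounts to merging newly discovered entries into each participant’s signature table, which is why the program’s value grows with the number of agencies contributing.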

On top of the privacy risks it creates, the lack of transparency also impairs Einstein’s efficiency. Another role of Einstein is cross-collaboration between agencies: once an agency identifies an intrusion, signature, or zero day, it alerts US-CERT, which then informs the other agencies of the newly determined intrusion. Like a network effect, the more agencies use Einstein, finding signatures and exchanging them, the higher Einstein’s global success rate. However, Einstein is implemented in only 5 agencies out of 23, because each agency implements different technologies to protect its sensitive data that are not compatible with the Einstein program. The lack of transparency between federal cybersecurity programs therefore undermines the federal Einstein effort and diminishes its efficiency. Indeed, during a test to flag a portion of the vulnerabilities associated with common software applications across multiple federal agencies, only 6% of all the security bugs tested were found: 29 out of 489 vulnerabilities (Paganini, 2016). If it were more transparent, Einstein would be easier to implement and hence more efficient.

Finally, the efficiency shortcomings of the Einstein program could be remedied by informing the federal employees whose computers run the program. Preoccupied with the secrecy of the program, the DHS did not inform these employees. Yet if US-CERT simply told employees that the program was running and communicated about EINSTEIN, they would be more aware of, and more careful about, malware and phishing attempts. Furthermore, if US-CERT encouraged cybersecurity awareness programs, it would definitely increase Einstein's efficiency. To a certain extent, "agencies should ultimately employ a multi-layered approach to security that includes well-trained personnel, effective and consistently applied processes, and appropriate technologies" (Cooney, 2015).

Even though it is being amended, Einstein raises serious transparency concerns. Its lack of transparency causes privacy risks as well as inefficiencies and failures, which can, to a certain point, endanger US national sovereignty. However, a more transparent implementation, with more thorough information about the program communicated by US-CERT, would increase the number of federal agencies relying on the Einstein program and hence its capability. Furthermore, at the grassroots level, that is, at the user level, awareness of and communication about the EINSTEIN program would increase the number of signatures detected and hence, once again, EINSTEIN's efficiency. In a few words, transparency is the best policy.


CDT (2009). 'Einstein Intrusion Detection System: Questions that Should be Addressed', Center for Democracy & Technology, July 2009.

Dila (2013). Direction de l'information légale et administrative, Livre Blanc Défense et Sécurité Nationale, 2013.

Gorman, S. (2009). 'Troubles Plague Cyberspy Defense', Wall Street Journal, July 3, 2009.

CDT (2009). 'CDT Report: Privacy, Legal Concerns Surround Secret Government Cybersecurity System', Center for Democracy & Technology, July 28, 2009.

GAO (2010). 'Cybersecurity: Progress Made but Challenges Remain in Defining and Coordinating the Comprehensive National Initiative', Report to Congressional Requesters, March 2010.

Paganini, P. (2016). 'Audit shows Department of Homeland Security 6 billion U.S. Dollar firewall not so effective against hackers', Security Affairs, February 1, 2016.

Cooney, M. (2015). 'GAO: Early look at fed's "Einstein 3" security weapon finds challenges', Network World, July 9, 2015.

Read the full blog post here: Risk in Governmental Cybersecurity Program JSTI 2017

Blockchain Regulatory Framework, Legal Challenges and the Financial Industry

Camille Madec


In order to stay competitive, the financial industry must seize the opportunities of the ongoing technological disruption, particularly the recent so-called blockchain innovation, as some argue that this new technology has the potential to replace banks as financial intermediaries for transfers and exchanges of money. In this transitional context, the financial sector could face new cybersecurity risks, with sophisticated attacks, which eventually call for a renewed regulatory framework. Here, the financial sector means banks, insurers, asset managers, and advisory firms.

Blockchain can be defined as “a peer-to-peer operated public digital ledger that records all transactions executed for a particular asset (…) The Blockchain maintains this record across a network of computers, and anyone on the network can access the ledger. Blockchain is ‘decentralised’ meaning people on the network maintain the ledger, requiring no central or third party intermediary involvement. […] Users known as ‘miners’ use specialized software to look for these time stamped ‘blocks’, verify their accuracy using a special algorithm, and add the block to the chain. The chain maintains chronological order for all blocks added because of these time-stamps.” (Alderman, 2015)
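The integrity property at the heart of this definition, that each time-stamped block embeds the hash of its predecessor, so tampering with any earlier block invalidates everything after it, can be sketched in a few lines of Python. The transaction strings and field names below are invented for the example; a real blockchain adds consensus, proof-of-work, and peer-to-peer replication on top of this core structure:

```python
import hashlib
import json
import time

def make_block(transactions, prev_hash):
    """Create a time-stamped block whose hash covers its contents
    and the previous block's hash, chaining the ledger together."""
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def verify_chain(chain):
    """Recompute each block's hash and check the links; tampering with
    an earlier block breaks every hash and link that follows it."""
    for i, block in enumerate(chain):
        content = {k: v for k, v in block.items() if k != "hash"}
        payload = json.dumps(content, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# Build a three-block chain, then tamper with the middle block.
genesis = make_block(["coinbase -> alice: 50"], prev_hash="0" * 64)
b1 = make_block(["alice -> bob: 10"], prev_hash=genesis["hash"])
b2 = make_block(["bob -> carol: 3"], prev_hash=b1["hash"])
chain = [genesis, b1, b2]

assert verify_chain(chain)
chain[1]["transactions"] = ["alice -> mallory: 10"]  # tampering
assert not verify_chain(chain)
```

Because every network participant can run the equivalent of `verify_chain` independently, no central intermediary is needed to vouch for the ledger, which is precisely the property that makes the technology disruptive for financial intermediaries.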

Hence blockchain, best known through bitcoin, could open up far broader prospects and should guarantee the security and validation of all data exchanges. In addition to opening room for new business opportunities, this new technology could disrupt the legal conception of privacy and intellectual property rights, and it raises issues regarding financial institutions' accountability given the new associated risks. As a consequence, while financial institutions have come under strain from the new regulatory requirements adopted in the aftermath of the 2008 financial crisis, they might see their accountability rise again to address cybersecurity risks and the associated harms related to blockchain innovation.

This paper explains how compliance with the new cyber regulatory framework is a strategic issue for financial institutions. It presents financial institutions' specific data profile and the potential collateral damage linked to it. It highlights blockchain innovation opportunities and the associated new cybercrime challenges. It describes the current European regulatory framework and legal accountability scenarios. Finally, it supports the hypothesis of cyber compliance as a corporate competitive advantage and maps out some recommendations to strengthen cybersecurity resilience.

Read the full strategic report here: regulatory compliance and cybersecurity


Alderman, P. (2015). Blockchain – emerging legal issues. Lexology, Global.

How is Cybercrime Evolving? (editorial)

Jean-Loup Richet, Sorbonne Business School (IAE de Paris)


Firms spend enormous resources on digital advertising and promoting their brand online. In the meantime, ad-fraud undertaken by cybercriminals cost $42 billion in 2019 and could reach $100 billion by 2023. However, while digital advertisers continue to wrestle with how to effectively counteract ad-fraud, the topic of advertising fraud itself has received little academic attention. Here, we investigate this gap between practice and research through an exploration of ad-fraud communities. Our research implemented a multimethod approach for data collection in a longitudinal (18 months, October 2017 to April 2019) online investigation of this phenomenon. Integrating qualitative and quantitative analysis, we examined (1) internal interactions within ad-fraud communities and (2) ad-fraud communities’ performance and growth. Our online investigation extends our conceptual understanding of ad-fraud and explains how ad-fraud communities innovate. Our findings indicate that capabilities enacted by some communities foster requisite variety and enable the coordination of complex, iterative, and incremental dynamics (cocreation of artificial intelligence-based bots, customer involvement, and reinforcing capabilities). This research has both theoretical and practical implications for innovation in cybercriminal communities. Furthermore, we provide practical guidance for policy-makers and advertisers regarding how to improve their response to business threats. Indeed, a better understanding of how ad-fraud communities innovate enables organizations to develop countermeasures and intelligence capabilities.


• This is one of the first studies documenting the way ad-fraud communities innovate and create value for their criminal customers.
• A multimethod approach was applied for data collection, integrating qualitative and quantitative assessment of six cybercriminal communities.
• Specialized ad-fraud communities provided a wealth of knowledge and incremental innovations in ad-frauds.
• General and customer-oriented ad-fraud communities showcased the most internal interactions, as well as exhibiting better performance and growth.
• General and customer-oriented ad-fraud communities have developed specific capabilities, focusing on innovation through artificial intelligence, which fuels customer engagement and fosters (criminal) attractiveness.


Richet, J.-L. 2022. “How Cybercriminal Communities Grow and Change: An Investigation of Ad-Fraud Communities,” Technological Forecasting and Social Change (174), p. 121282.

How is Cybercrime Evolving

Privacy on the Internet: a sweet dream?

Quentin Jaubert, Adrien Zamora


“Big Brother is watching you,” wrote George Orwell. In this groundbreaking book, Orwell describes a society in which officials know everything happening inside the country thanks to omnipresent surveillance of the inhabitants. Today's police forces and secret services possess numerous surveillance tools, such as biometrics, chips, facial recognition, and geolocation, that allow them to become very intrusive security forces. But “policing” has now also become the province of major private companies (social media platforms, search engines, telecommunication carriers, etc.). A playful way of rethinking Orwell's quote in our modern world would be: “Big Browser is watching you”.

There was a time when people had their privacy. One could go shopping after leaving the office, pay for everything in cash, go back home, close the doors and curtains, and live one's private life. That was it. But privacy has evolved over time. If “privacy” can be defined as the “right to be let alone” (Warren and Brandeis, 1890), or even “the right to prevent the disclosure of personal information to others” (Westin, 1968), the concept has recently taken on a multidimensional nature regarding “information, accessibility and expression” (DeCew, 1997), and with the rise of the Internet, technology has created new privacy issues (Austin, 2003), which leads us to wonder: is online privacy a sweet dream?

In order to understand the issues linked to our online privacy and to generate insights from them, we addressed the following questions:

How has the privacy concept evolved with the appearance of the Internet?

In such a connected world, should we/can we protect our privacy? If yes, how?

Where will we be standing in the next 5, 10, 20 years? Will “online privacy” ever mean anything in the next decades?

Read the full strategic report here: privacy on the internet: a sweet dream?


Austin, L. (2003). Privacy and the Question of Technology. Law and Philosophy, 22(2), 119-166.

DeCew, J. W. (1997). In pursuit of privacy: Law, ethics, and the rise of technology. Cornell University Press.
Orwell, G. (2009). Nineteen eighty-four. Everyman’s Library.
Warren, S. D., & Brandeis, L. D. (1890). The right to privacy. Harvard law review, 193-220.
Westin, A. F. (1968). Privacy and freedom. Washington and Lee Law Review, 25(1), 166.

Cybersecurity, a new challenge for the aviation and automotive industries

Hélène Duchamp, Ibrahim Bayram, Ranim Korhani

This paper will focus on cybersecurity in the civil aviation industry, but it will also present some of the threats that exist in a much more everyday mode of transportation: personal cars.
We will present the stakeholders involved in the aviation industry, point out the sources of the industry's vulnerability to cyberattacks, and then analyze the efforts put in place to deter cyberattacks against commercial aircraft. The same order of reasoning will be applied to the automotive industry.


The aviation industry is important to the global economy. In 2013, the air transportation network carried over 48 million tons of freight and over 2.6 billion passengers. Its global economic value was estimated at 2.2 trillion dollars (AIAA, 2013). Any (cyber)-attack in this industry would result in important social and economic consequences.

With the development of new technologies such as the Internet, the global aviation industry is subject to a new and growing type of threat coming from cyberspace. As in other industries, the purposes of cyber threats include, for example, stealing information, pursuing political actions, making a profit, or simply weakening one of the industry's stakeholders.

Because of its complexity and its weight in the economy, breaking the aviation industry's security constitutes a great challenge for hackers and terrorists. Moreover, this industry relies more and more on information and communication technology (ICT). As an industry well known for providing one of the safest types of transportation, it is mandatory for all its stakeholders to understand the risks and to prevent any malicious events, for the good of the industry, the economy, the population, and the environment.

Read the full strategic report here: cybersecurity, a new challenge for the aviation and automotive industries


AIAA. (2013). The connectivity challenge: protecting critical assets in a networked world – a framework for aviation cybersecurity.

Can ISIS’s cyber-strategy really be thwarted?

Kenza Berrada, Marie Boudier


Never in the history of terrorism has an organization appeared as web-savvy as the Islamic State. Extensive use of the internet allows ISIS to conduct its most vital operations. It can easily spread its hateful and violent messages to every corner of the world, reach vulnerable young people and lure them into joining the force, send orders, and raise funds. All of this without much sophistication, using only available tools such as Telegram or the Deep & Dark Web. Confronted with the issue, the US government, Silicon Valley's top executives, and the hacker collective Anonymous have each taken action to fight the terrorist organization's sprawl on the internet. For the moment, there is no evidence proving the effectiveness of their initiatives, as ISIS continues to recruit, plan attacks, and shows no sign of weakness.


Google stated in February 2016 that more than 50,000 people search for the phrase “Join ISIS” each month. This fact illustrates the latest trend in today's terrorism: the heavy use of social media and cyber capabilities to assert domination. The Islamic State of Iraq and Syria (ISIS) is by far one of the most advanced terrorist organizations in terms of social media capabilities (Farwell, 2014). It is no coincidence that ISIS is so successful in the virtual landscape. The group benefits from an extremely elaborate media and public relations strategy. Indeed, Al Hayat Media Center, its own media hub, produces, distributes, and manages all of its virtual content. With a designated press officer and its own mobile application, ISIS takes advantage of a true branding and marketing strategy, as if it were a regular business.
ISIS’s cyber-strategy will be studied first, looking at how the group uses the Internet for its own agenda: recruitment, propaganda, internal communication, fundraising, and cyberattacks. The focus will then shift to the possibility of blocking its Internet presence, and to how diverse stakeholders such as the US government and private companies plan to contain the terrorist organization and thwart its online activity.

Read the full strategic report here: ISIS Cyberstrategy


Farwell, J. P. (2014). The media strategy of ISIS. Survival, 56(6), 49-55.

Cybersecurity and the Internet of Things

Sarah Baker, Grégoire Frison-Roche, Barbora Kuncikova


The Internet of Things (IoT) is a topic that gets a lot of attention and has become somewhat of a buzzword in business and technology today. In many ways, this hype and excitement are not misplaced, as IoT has fascinating implications and opportunities for both consumers and businesses. However, the cybersecurity threats that this explosive growth represents are sometimes overlooked or not clearly understood. This paper will introduce the concept of IoT, including its definition, trends, and applications. The next section will discuss the potential cybersecurity risks of IoT, for both industries and consumers. Finally, the last section will discuss recommended preventative measures and available defense mechanisms, while considering the fast-changing nature of IoT technology.

Introduction: What is the Internet of Things?

The past decades have seen huge advances in electronic communications, from the rise of the Internet to the ubiquity of mobile devices. However, this communication is now shifting from devices that simply connect users to the Internet, to communication linking the physical world to the cyber world (Borgia, 2014). Generally speaking, this notion is called Cyber Physical Systems (CPS) and includes technologies such as (i) automation of knowledge work, (ii) Internet of Things, (iii) advanced robotics, and (iv) autonomous/ near autonomous vehicles (Borgia, 2014). However, IoT is considered to be the CPS technology with the largest expected economic impact (Manyika et al., 2013).

Given that IoT is one of the most talked-about trends in IT, there are as many definitions of the phenomenon as there are angles from which to study it. The origins of the IoT concept can be traced back to a group at MIT, who defined it as “an intelligent infrastructure linking objects, information and people through the computer networks, and where the RFID technology found the basis for its realization” (Brock, 2001). Today, IoT extends far beyond RFID technology. A more recent definition describes IoT as “a highly interconnected network of heterogeneous entities such as tags, sensors, embedded devices, handheld devices and backend servers” (Malina et al., 2016). The International Telecommunication Union (ITU) describes IoT as “anytime, any place connectivity for anyone… connectivity for anything. Connections will multiply and create an entirely new dynamic network of networks – an Internet of Things” (ITU, 2005).

Therefore, the defining attribute of IoT is that it involves things: it moves beyond networked computers, tablets, or smartphones to include just about any physical object that can be connected and communicate. The value offered by IoT comes from the fact that these objects, which are not machines and do not function like machines, are able to gather and communicate data, which means information can be translated into action at astounding rates (Burrus, 2014). The concept behind IoT was aptly captured back in 1999:

“If we had computers that knew everything there was to know about things — using data they gathered without any help from us — we would be able to track and count everything, and greatly reduce waste, loss and cost. We would know when things needed replacing, repairing or recalling, and whether they were fresh or past their best. The Internet of Things has the potential to change the world, just as the Internet did. Maybe even more so” (Ashton, 2009)

This strategic report focuses on securing the Internet of Things. Read the full report here: Cybersecurity and the Internet of Things


Ashton, K. (2009). That ‘internet of things’ thing. RFiD Journal, 22(7), 97-114.
Borgia, E. (2014). The Internet of Things vision: Key features, applications and open issues. Computer Communications, 54, 1-31.
Brock, D. L. (2001). The electronic product code (epc). Auto-ID Center White Paper MIT-AUTOID-WH-002.
Burrus, D. (2014). The Internet of Things is far bigger than anyone realizes. Wired. Accessed November.
ITU. (2005). ITU Internet Reports 2005: The internet of things. Geneva: International Telecommunication Union (ITU).
Malina, L., Hajny, J., Fujdiak, R., & Hosek, J. (2016). On perspective of security and privacy-preserving solutions in the internet of things. Computer Networks, 102, 83-95.
Manyika, J., Chui, M., Bughin, J., Dobbs, R., Bisson, P., & Marrs, A. (2013). Disruptive technologies: Advances that will transform life, business, and the global economy (Vol. 12). San Francisco, CA: McKinsey Global Institute.

Cybersecurity, Cybercrime and cyberwarfare research