Machine Gun Kelly’s dating past: 9 girlfriends and flings | #facebookdating | #tinder | #pof | romancescams | #scams

Over the years, Machine Gun Kelly (né Colson Baker) has been linked to a long list of famous women. Most recently, he has been dating Megan Fox. Here, we take […]

biometrics, machine learning, privacy and being a woman in tech – Naked Security Podcast – Naked Security

To celebrate International Women’s Day we invite you to this all-female splinter episode. We discuss privacy, biometrics, machine learning, social media, getting into cybersecurity and, of course, what it’s like to be a woman in tech.

Host Anna Brading is joined by Sophos experts Hillary Sanders, Michelle Farenci and Alice Duckett.

Listen now!

Does #Cyber Security Really Need #Machine Learning #Technology?

Amidst the escalating number of high-profile hacks and cyber attacks, organizations are now embracing various forms of artificial intelligence (AI) – including machine learning technology and neural networks – as a new cyber security defense mechanism. At a time when human skills and competencies appear to be overmatched, the thinking goes, machines have a nearly infinite ability to analyze threats and then respond to them in real time.

Is machine learning really the silver bullet?
However, putting one’s faith in the ability of machines to defend entire organizations from hacker attacks and other forms of security intrusions ignores one basic fact: cyber security is an arms race, and the same weapons that are available to one side will soon be available to the other side. Put another way, the same machine learning technologies being embraced by the world’s top corporations and data scientists will soon be co-opted or adopted by the world’s top hackers.

Moreover, there is still quite a bit of work to be done before any machine learning cyber defense is fully robust. Right now, machine learning excels at certain tasks, but still needs significant human intervention to excel at others. For example, machines are extremely good at “classification,” which enables them to label and describe different types of hacker attacks. As a result, machines can differentiate between spoofing attacks, phishing attacks and other types of network intrusions.

The idea here is simple: just show a machine many different examples of hacker attacks, and it will eventually learn how to classify them very efficiently. The more raw data and data points you show the machine (think of all this data as “training data”), the faster it will learn. In many ways, it is similar to the machine learning techniques used for image recognition tools – show a machine enough photos of a dog, and it will eventually be able to pick out a dog in any photo you show it.

Thus, it’s easy to see an obvious implication for machine learning and cyber security: machines can help security teams isolate the most pressing threats facing an organization and then optimize the defenses for those threats. For example, if an organization is facing a hundred different potential threats, a machine can easily sort and classify all of those threats, enabling humans to focus only on the most mission-critical of these.
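
To make that concrete, here is a minimal sketch of the classify-then-triage idea using scikit-learn. Everything in it (the feature names, labels and data) is invented for illustration; a real deployment would train on genuine, labeled telemetry.

    # Sketch: classify alerts by attack type, then rank them for human triage.
    # Features and labels are synthetic stand-ins for real network telemetry.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    # Hypothetical features: [packets/sec, failed logins, unique URLs touched]
    X_train = rng.random((500, 3))
    y_train = rng.choice(["spoofing", "phishing", "intrusion"], size=500)

    model = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

    # For a batch of new alerts: predict a type and a confidence for each,
    # then hand analysts the ones the model is most confident are attacks.
    X_new = rng.random((100, 3))
    proba = model.predict_proba(X_new)               # per-class probabilities
    labels = model.classes_[proba.argmax(axis=1)]
    confidence = proba.max(axis=1)
    for conf, label in sorted(zip(confidence, labels), reverse=True)[:5]:
        print(f"{label:10s} confidence={conf:.2f}")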

The use cases of machine learning in cyber security
One of the most obvious ways to apply machine learning in cyber security involves the creation of stronger spam filters. For many organizations, a constant security threat is the ability of hackers to get inside the organization simply by sending spam emails filled with all kinds of malware. Once an employee clicks on a bad link or opens a bad attachment that makes it past conventional spam filters, it may be possible for malware to spread throughout an organization’s network.

Thus, you can immediately see why adopting machine learning for email security makes so much sense – it can provide a first layer of defense against these spam emails laden with malware. If you frame email filtering as a “classification” problem, then machines can play an important role in sifting out the “good” emails from the “bad” emails. You simply show a machine many, many different examples of “bad” emails as well as many, many different examples of “good” emails, and it will eventually become 99.9% accurate in sorting them out (or so one common myth about machine learning goes).
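
Framed that way, a toy version of such a filter fits in a few lines. This is a bag-of-words Naive Bayes sketch with invented example emails, nowhere near 99.9% anything:

    # Sketch: a bag-of-words spam filter. The training emails are invented;
    # a real filter would be trained on a large labeled mail corpus.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    emails = [
        "Your invoice for March is attached",         # good
        "Team meeting moved to 3pm tomorrow",         # good
        "You have WON a free prize, click here now",  # bad
        "Urgent: verify your account password",       # bad
    ]
    labels = ["good", "good", "bad", "bad"]

    spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
    spam_filter.fit(emails, labels)

    print(spam_filter.predict(["Click now to claim your free prize"]))   # expect ['bad']
    print(spam_filter.predict(["Minutes from the March team meeting"]))  # expect ['good']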

Another common use case for machine learning in cyber security involves spotting irregular activity within an organization’s network traffic. For example, an unexpected surge of network activity might signal some sort of looming cyber attack (such as a DDOS attack). Or, activity in the accounts of certain employees that is out of the norm might indicate that one or more of these accounts have been compromised. Again, it matters how you frame the problem for machines: organizations must be able to show them what “normal” looks like, so that they will then be able to spot any irregular deviations from the normal state of network affairs.
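
A minimal sketch of that learn-normal-then-flag-deviations idea, using scikit-learn's IsolationForest on made-up traffic numbers (a real system would model far more signals):

    # Sketch: learn what "normal" traffic volume looks like, then flag outliers.
    # The numbers are synthetic; real features might include bytes, flows,
    # destination counts and per-account login activity.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(1)
    # Baseline activity: [requests per minute, MB transferred].
    normal = rng.normal(loc=[200, 50], scale=[20, 5], size=(1000, 2))

    detector = IsolationForest(contamination=0.01, random_state=1).fit(normal)

    new_traffic = [
        [205, 52],    # ordinary
        [2500, 900],  # sudden surge, e.g. a looming DDoS ramp-up
        [190, 48],    # ordinary
    ]
    print(detector.predict(new_traffic))  # 1 = normal, -1 = anomaly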

Machine learning, cyber security and the enterprise
To get cyber security executives thinking more deeply on the matter (without delving too deeply into the complex data science behind machine learning), the technology research firm Gartner has proposed a PPDR model, which corresponds to the various uses of machine learning for cyber security within the enterprise:

  • Prediction
  • Prevention
  • Detection
  • Response

In short, with machine learning technology, organizations will be able to predict the occurrence of future attacks, prevent these attacks, detect potential threats, and respond appropriately. With the right machine learning algorithms, say experts, it might be possible to shield even the largest and most vulnerable organizations from cyber attacks. In the big data era, when organizations must grapple with so much data, it’s easy to see why they are turning to machines.

With that in mind, Amazon is leading the way with an application of machine learning for the cloud. At the beginning of 2017, Amazon acquired a machine learning startup, harvest.ai, for just under $20 million. The goal of the acquisition was to be able to use machine learning to search for, find and analyze changes in user behavior, key business systems and apps, in order to stop targeted attacks before any data can be stolen or compromised.

Then, in November 2017, the company’s cloud business, Amazon Web Services (AWS), unveiled a new cyber security offering based on machine learning called Amazon GuardDuty. The allure of the new offering is easy to grasp: companies with a lot of data in the cloud are especially vulnerable to hackers, and they are easy “sells” for any company that is able to promise that their cloud offerings will be safe from attack. Already, big-name companies like GE and Netflix have signed on as customers of Amazon’s new machine learning-based offering.

Clearly, there is a tremendous amount of potential for machine learning and cyber security within the enterprise. Some industry experts have estimated that, in the period from 2015-2020, companies will spend a combined $655 billion on cyber security. Other estimates have been even more aggressive, suggesting that the total could be closer to $1 trillion.

If companies are spending so much money on cyber security, though, they will want to be certain that new solutions featuring machine learning actually work. In order for machine learning to live up to the hype, it will need to offer a fully robust security solution that covers every potential vulnerability for a company – including the network itself, all endpoints (including all mobile devices), all applications and all users. That’s a tough order to fill, but plenty of organizations are now betting that machines will be up to the task.

How to #improve #cybersecurity with #machine #learning

Leveraging machine learning for cybersecurity
Data breaches and cyber attacks have become harder to deter over the last few years. According to Cisco’s 2018 Annual Cybersecurity Report, for example, the expanded volume of both legitimate and malicious encrypted traffic on the web has made it more difficult for security professionals to recognize and monitor potential threats. As a result, many security professionals are looking to leverage machine learning to advance cybersecurity.

What is machine learning?
Before exploring the ways machine learning can improve cybersecurity, it is important to first understand what machine learning actually is. To begin with, machine learning is not one and the same as artificial intelligence (A.I.), which is part of a broader initiative to enable computers to reason, solve problems, perceive and understand language. Rather, machine learning is a branch of A.I., and involves training an algorithm to learn and make predictions based upon data input. Netflix, for example, uses machine learning and algorithms to make show recommendations, while search engine giant Google uses the technology to collect signals for better search quality.

Monitoring and responding to suspicious traffic
One way machine learning can be used to improve cybersecurity is by monitoring network traffic and learning the norms of a system. A well-trained machine learning model will be able to spot atypical traffic within a network and quarantine an anomaly. Most machine learning algorithms typically send an alert to a human analyst to determine how to respond to a threat; however, some machine learning algorithms are able to act of their own accord, such as thwarting certain users from accessing a network.
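
The alert-a-human versus act-on-its-own split can be pictured as a simple policy layered on top of whatever anomaly score a model emits. The thresholds and actions below are invented for illustration:

    # Sketch: turn a model's anomaly score into a graduated response.
    # The thresholds and actions are arbitrary; tuning them is the real work.
    def respond(user: str, anomaly_score: float) -> str:
        if anomaly_score > 0.95:
            return f"BLOCK network access for {user} and page the on-call analyst"
        if anomaly_score > 0.70:
            return f"ALERT: queue {user}'s recent activity for human review"
        return f"OK: {user} is within learned norms"

    for user, score in [("alice", 0.12), ("bob", 0.81), ("mallory", 0.99)]:
        print(respond(user, score))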

Automating repetitive tasks
Another way machine learning can help propel cybersecurity is by automating several repetitive tasks. For example, during a data security breach, an analyst has to juggle multiple responsibilities, including determining what exactly was stolen, how it was taken and fixing the network to stop similar future attacks. With machine learning, many of these tasks can be automated, significantly reducing the time it takes to fix the vulnerability.

Complementing human analysis
Machine learning can also be used to complement human analysis. For example, in a paper published in 2016, MIT and PatternEx researchers demonstrated an A.I. platform could predict cyber attacks significantly better than existing systems by continuously incorporating input from human experts. Specifically, the team illustrated the platform could detect 85% of attacks, which was approximately three times better than previous benchmarks. It also reduced the number of false positives by a factor of five. Generally speaking, machine learning technologies can be used to provide around-the-clock analysis, or assist junior analysts who have higher error rates in their ability to assess a threat.
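
Very loosely, the analyst-feedback loop that work describes can be approximated with uncertainty sampling: the model asks a human to label the alerts it is least sure about, then retrains. The sketch below uses synthetic data and is nothing like the real platform in scale or sophistication:

    # Sketch: a crude human-in-the-loop cycle. The model flags the alerts it
    # is least sure about, an "analyst" labels them, and the model retrains.
    # Data and the hidden ground truth are synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    X_pool = rng.random((2000, 5))                          # unlabeled "alerts"
    true_y = (X_pool[:, 0] + X_pool[:, 1] > 1).astype(int)  # stands in for expert judgment

    # Seed with a few labeled examples of each class.
    labeled = set(np.where(true_y == 1)[0][:5]) | set(np.where(true_y == 0)[0][:5])

    model = LogisticRegression(max_iter=1000)
    for round_no in range(5):
        idx = sorted(labeled)
        model.fit(X_pool[idx], true_y[idx])
        proba = model.predict_proba(X_pool)[:, 1]
        uncertainty = np.abs(proba - 0.5)          # near 0.5 = least confident
        labeled |= set(np.argsort(uncertainty)[:20].tolist())
        print(f"round {round_no}: {len(labeled)} labels, "
              f"accuracy {model.score(X_pool, true_y):.3f}")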

Preventing zero-day exploits
Additionally, machine learning can be leveraged to combat zero-day exploits, which occur whenever a cyber criminal is able to seize upon a software vulnerability before a developer is able to release a patch for it. IoT devices are largely targeted by zero-day exploits since they often lack basic security features. Vendors are typically given a certain amount of time to patch the vulnerability before it is publicly disclosed, depending upon its severity. Machine learning could be used to home in on and prevent these sorts of exploits before they have a chance to take advantage of a network.

Limitations
None of this is to say machine learning will make cybersecurity perfect. Like any technology, machine learning is a double-edged sword. Both cybersecurity professionals and criminals are in an arms race to outsmart each other with machine learning. Although machine learning is effective at preventing the same attack from occurring twice, the technology is challenged to predict new threats based upon previous data. Nor are all machine learning systems created equal. Different machine learning systems have different error rates in pinpointing and responding to threats. And while machine learning can be used as part of a company’s overall cybersecurity strategy, it shouldn’t be relied upon as a sole line of defense.

The #future of #computer #security is #machine vs #machine

A growing number of computer security thinkers, including myself, think that in the very near future, most computer security will be machine versus machine – good bots versus bad bots, completely automated. We are almost there now.

Fortunately or unfortunately, I don’t think we’ll get to a purely automated defense for a long, long time.

Today’s security defenses
Many of our computer security defenses are already completely automated. Our operating systems are more securely configured out of the box, from firmware startup to the operating system running apps in secure hardware-enforced virtual boundaries, than ever before. If left alone in their default state, our operating systems will auto-update themselves to minimize any known vulnerabilities that have been addressed by the OS vendor.

Most operating systems come with rudimentary blacklists of “bad apps” and “bad digital certificates” that they will not run and always-on firewalls with a nice set of “deny-by-default” rules. Each OS either contains a built-in, self-updating, antimalware program or the users or administrators install one as one of the first administrative tasks they perform. When a new malware program is released, most antimalware programs get a signature update within 24 hours.
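
Deny-by-default is simple to picture: anything not explicitly allowed is dropped. A toy sketch, with invented rules (real firewalls match on far more than direction and port):

    # Sketch: deny-by-default packet filtering. Only explicitly allowed
    # (direction, port) pairs pass; everything else is dropped.
    ALLOW = {("inbound", 443), ("inbound", 22), ("outbound", 53), ("outbound", 443)}

    def filter_packet(direction: str, port: int) -> str:
        if (direction, port) in ALLOW:
            return "ALLOW"
        return "DENY"  # the default: no matching rule means no passage

    for packet in [("inbound", 443), ("inbound", 3389), ("outbound", 25)]:
        print(packet, "->", filter_packet(*packet))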

Most enterprises are running or subscribing to event log message management services (e.

Read More….

Cisco #report finds #AI & machine #learning still hot #topics in #cybersecurity

Artificial intelligence and machine learning in cybersecurity are proving to be hot topics amongst security professionals, and they’re looking to spend more on tools that can perform those very tasks, according to the 11th Cisco 2018 Annual Cybersecurity Report.

According to the report, machine learning is able to help enhance network security and defences by learning how to detect unusual traffic patterns in cloud and IoT environments.

That technology is in hot demand, particularly as the volume of legitimate and malicious web traffic grows. According to Cisco statistics from October 2017, 50% of web traffic is encrypted. Over a 12-month period, Cisco researchers also spotted a threefold increase in malware samples that used encrypted network communication.
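
Detecting malware in traffic that cannot be decrypted generally means classifying flows by observable metadata (packet sizes, timing, handshake fields) rather than payload. The sketch below is loosely inspired by that idea, with entirely synthetic flow features; it is not Cisco's actual implementation:

    # Sketch: classify encrypted flows from metadata alone, with no decryption.
    # Synthetic features: [mean packet size, mean inter-arrival secs, packets per flow].
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    benign = rng.normal([900, 2.0, 40], [100, 0.5, 10], size=(500, 3))
    malicious = rng.normal([300, 9.0, 8], [80, 2.0, 4], size=(500, 3))  # beacon-like

    X = np.vstack([benign, malicious])
    y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = malicious

    clf = LogisticRegression(max_iter=1000).fit(X, y)
    # Small, chatty, short flows look like command-and-control beaconing here.
    print(clf.predict([[280, 10.0, 7], [950, 1.8, 45]]))  # expect [1 0]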

Network encryption is causing challenges for defenders who are trying to identify and monitor any potential threats; however, security professionals are eager to adopt machine learning.

Machine learning comes with drawbacks such as false positives, and security professionals realise that machine learning and AI technologies are still in their infancy.

The report also found that more than half of all cyber attacks result in financial damages of more than US$500,000 (AU$637,630) including lost revenue, customers, opportunities and out-of-pocket costs.

Read More….

6 ways #hackers will use #machine #learning to #launch #attacks

Machine learning algorithms will improve security solutions, helping human analysts triage threats and close vulnerabilities quicker. But they are also going to help threat actors launch bigger, more complex attacks.

Defined as the “ability for (computers) to learn without being explicitly programmed,” machine learning is huge news for the information security industry. It’s a technology that potentially can help security analysts with everything from malware and log analysis to possibly identifying and closing vulnerabilities earlier. Perhaps too, it could improve endpoint security, automate repetitive tasks, and even reduce the likelihood of attacks resulting in data exfiltration.

Naturally, this has led to the belief that these intelligent security solutions will spot – and stop – the next WannaCry attack much faster than traditional, legacy tools. “It’s still a nascent field, but it is clearly the way to go in the future. Artificial intelligence and machine learning will dramatically change how security is done,” said Jack Gold, president and principal analyst at J.Gold Associates, when speaking recently to CSO Online.

“With the fast-moving explosion of data and apps, there is really no other way to do security than through the use of automated systems built on AI to analyze the network traffic and user interactions.”

The problem is, hackers know this and are expected to build their own AI and machine learning tools to launch attacks.

How are cyber-criminals using machine learning?
Criminals – increasingly organized and offering wide-ranging services on the dark web – are ultimately innovating faster than security defenses can keep up. This is concerning given the untapped potential of technologies like machine and deep learning.

“We must recognize that although technologies such as machine learning, deep learning, and AI will be cornerstones of tomorrow’s cyber defenses, our adversaries are working just as furiously to implement and innovate around them,” said Steve Grobman, chief technology officer at McAfee, in recent comments to the media. “As is so often the case in cybersecurity, human intelligence amplified by technology will be the winning factor in the arms race between attackers and defenders.”

This has naturally led to fears that this is AI vs AI, Terminator style. Nick Savvides, CTO at Symantec, says this is “the first year where we will see AI versus AI in a cybersecurity context,” with attackers more able to effectively explore compromised networks, and this clearly puts the onus on security vendors to build more automated and intelligent solutions.

“Autonomous response is the future of cybersecurity,” stressed Darktrace’s director of technology Dave Palmer in conversation with this writer late last year. “Algorithms that can take intelligent and targeted remedial action, slowing down or even stopping in-progress attacks, while still allowing normal business activity to continue as usual.”

Machine learning-based attacks in the wild may remain largely unheard of at this time, but some techniques are already being leveraged by criminal groups.

1. Increasingly evasive malware
Malware creation is largely a manual process for cyber criminals. They write scripts to build computer viruses and trojans, and leverage rootkits, password scrapers and other tools to aid distribution and execution.

But what if they could speed up this process? Is there a way machine learning could help create malware?

The first known example of using machine learning for malware creation was presented in 2017 in a paper entitled “Generating Adversarial Malware Examples for Black-Box Attacks Based on GAN.” In the report, the authors revealed how they built a generative adversarial network (GAN)-based algorithm to generate adversarial malware samples that, critically, were able to bypass machine-learning-based detection systems.

In another example, at the 2017 DEFCON conference, security company Endgame revealed how it used Elon Musk’s OpenAI framework to create customized malware that security engines were unable to detect. Endgame’s research was based on taking binaries that appeared to be malicious and changing a few parts, so the code would appear benign and trustworthy to antivirus engines.

Other researchers, meanwhile, have predicted machine learning could ultimately be used to “modify code on the fly based on how and what has been detected in the lab,” an extension on polymorphic malware.
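
The GAN machinery in that research is beyond a short example, but the underlying trick, nudging a sample's features until a detector's score drops, can be caricatured with a greedy loop. Everything below (features, detector, data) is synthetic and purely illustrative; it is not MalGAN:

    # Sketch: greedy evasion of a toy detector over binary features (think
    # "imports API X", "contains section Y"). This is NOT MalGAN; it only
    # shows that small feature changes can walk a sample across a boundary.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(4)
    X = rng.integers(0, 2, size=(1000, 20))
    y = (X[:, :5].sum(axis=1) >= 3).astype(int)  # toy "malicious" labeling rule

    detector = LogisticRegression(max_iter=1000).fit(X, y)

    sample = np.ones(20, dtype=int)  # starts out firmly detected as malicious
    for _ in range(20):
        if detector.predict_proba([sample])[0, 1] < 0.5:
            break  # the detector now calls the modified sample benign
        # Flip whichever single feature lowers the malicious score the most.
        scores = []
        for j in range(20):
            trial = sample.copy()
            trial[j] ^= 1
            scores.append(detector.predict_proba([trial])[0, 1])
        sample[int(np.argmin(scores))] ^= 1

    print("malicious score after evasion:", detector.predict_proba([sample])[0, 1])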

2. Smart botnets for scalable attacks
Fortinet believes that 2018 will be the year of self-learning ‘hivenets’ and ‘swarmbots’, in essence marking the belief that ‘intelligent’ IoT devices can be commanded to attack vulnerable systems at scale. “They will be capable of talking to each other and taking action based off of local intelligence that is shared,” said Derek Manky, global security strategist, Fortinet. “In addition, zombies will become smart, acting on commands without the botnet herder instructing them to do so. As a result, hivenets will be able to grow exponentially as swarms, widening their ability to simultaneously attack multiple victims and significantly impede mitigation and response.”

Interestingly, Manky says these attacks are not yet using swarm technology, which could enable these hivenets to self-learn from their past behavior. A subfield of AI, swarm technology is defined as the “collective behavior of decentralized, self-organized systems, natural or artificial” and is today already used in drones and fledgling robotics devices. (Editor’s note: Though futuristic fiction, some can draw conclusions from the criminal possibilities of swarm technology from Black Mirror’s Hated in The Nation, where thousands of automated bees are compromised for surveillance and physical attacks.)

3. Advanced spear phishing emails get smarter
One of the more obvious applications of adversarial machine learning is using algorithms like text-to-speech, speech recognition, and natural language processing (NLP) for smarter social engineering. After all, through recurrent neural networks, you can already teach such software writing styles, so in theory phishing emails could become more sophisticated and believable.

In particular, machine learning could facilitate advanced spear phishing emails to be targeted at high-profile figures, while automating the process as a whole. Systems could be trained on genuine emails and learn to produce something that looks and reads convincingly.
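
SNAP_R (discussed below) used a recurrent neural network; a far cruder way to see the learn-the-style-then-generate idea is a word-level Markov chain over a few sample messages. The samples below are invented and the scale is toy:

    # Sketch: word-level Markov chain text generation. This toy shows only
    # the learn-style-then-emit idea, not a real phishing generator.
    import random
    from collections import defaultdict

    samples = [
        "check out this great deal on flights",
        "check out the photos from the offsite",
        "your package delivery needs attention click the link",
        "the link to the quarterly report is attached",
    ]

    chain = defaultdict(list)
    for text in samples:
        words = text.split()
        for current, following in zip(words, words[1:]):
            chain[current].append(following)  # word -> observed next words

    random.seed(7)
    word, output = "check", ["check"]
    for _ in range(8):
        choices = chain.get(word)
        if not choices:
            break
        word = random.choice(choices)
        output.append(word)
    print(" ".join(output))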

In McAfee Labs’ predictions for 2017, the firm said that criminals would increasingly look to use machine learning to analyze massive quantities of stolen records to identify potential victims and build contextually detailed emails that would very effectively target these individuals.

Furthermore, at Black Hat USA 2016, John Seymour and Philip Tully presented a paper titled “Weaponizing data science for social engineering: Automated E2E spear phishing on Twitter,” which described a recurrent neural network that learned to tweet phishing posts targeting certain users. In the paper, the pair showed that the SNAP_R neural network, which was trained on spear phishing pentesting data, was dynamically seeded with topics taken from the timeline posts of target users (as well as the users they tweet at or follow) to make a click-through more likely.

Subsequently, the system was remarkably effective. In tests involving 90 users, the framework delivered a success rate varying between 30 and 60 percent, a considerable improvement on manual spear phishing and bulk phishing results.

4. Threat intelligence goes haywire
Threat intelligence is arguably a mixed blessing when it comes to machine learning. On the one hand, it is universally accepted that, in an age of false positives, machine learning systems will help analysts to identify the real threats coming from multiple systems. “Applying machine learning delivers two significant gains in the domain of threat intelligence,” said Recorded Future CTO and co-founder Staffan Truvé in a recent whitepaper.

“First, the processing and structuring of such huge volumes of data, including analysis of the complex relationships within it, is a problem almost impossible to address with manpower alone. Augmenting the machine with a reasonably capable human means you’re more effectively armed than ever to reveal and respond to emerging threats,” Truvé wrote. “The second is automation — taking all these tasks, which we as humans can perform without a problem, and using the technology to scale up to a much larger volume than we could ever handle.”
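
The processing-and-structuring half of that argument can be shown in miniature: vectorize report text and measure similarity to surface relationships. The report snippets are invented, and real pipelines add entity extraction, deduplication and much more:

    # Sketch: surface relationships between threat reports by text similarity.
    # The snippets are invented stand-ins for real intelligence feeds.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    reports = [
        "phishing campaign spoofing bank login pages",
        "credential phishing targeting bank customers",
        "new ransomware strain encrypts network shares",
        "ransomware operators exfiltrate data before encrypting shares",
    ]

    vectors = TfidfVectorizer().fit_transform(reports)
    similarity = cosine_similarity(vectors)

    for i, row in enumerate(similarity):
        row[i] = 0  # ignore self-similarity
        j = row.argmax()
        print(f"report {i} is closest to report {j} (score {row[j]:.2f})")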

However, there’s the belief, too, that criminals will adapt to simply overload those alerts once more. McAfee’s Grobman previously pointed to a technique known as “raising the noise floor.” A hacker will use this technique to bombard an environment in a way that generates a lot of false positives for common machine learning models. Once a target recalibrates its system to filter out the false alarms, the attacker can launch a real attack that can get by the machine learning system.
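
A toy numeric simulation makes the play obvious: flood the detector until the defender raises the alert threshold, then send the real attack in under the new bar. All scores and thresholds below are invented:

    # Sketch: "raising the noise floor". The attacker floods the detector with
    # junk scoring just above the alert threshold; the defender, drowning in
    # false positives, raises the threshold, and the real attack, which would
    # have alerted before, now slips under the new bar. All values invented.
    threshold = 0.60

    junk_scores = [0.65] * 200  # crafted noise, every one a false positive
    real_attack_score = 0.75

    false_alarms = sum(score >= threshold for score in junk_scores)
    print(f"false alarms at threshold {threshold}: {false_alarms}")

    threshold = 0.80  # defender recalibrates to silence the noise
    print(f"real attack (score {real_attack_score}) raises an alert? "
          f"{real_attack_score >= threshold}")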

5. Unauthorized access
An early example of machine learning for security attacks was published back in 2012, by researchers Claudia Cruz, Fernando Uceda, and Leobardo Reyes. They used support vector machines (SVM) to break a system running on reCAPTCHA images with an accuracy of 82 percent. All captcha mechanisms were subsequently improved, only for the researchers to use deep learning to break the CAPTCHA once more. In 2016, an article was published that detailed how to break simple-captcha with 92 percent accuracy using deep learning.
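
The mechanics behind that SVM result are ordinary supervised image classification. As a toy stand-in, here is the same recipe on scikit-learn's bundled digits dataset (not CAPTCHA images, and not the researchers' code):

    # Sketch: SVM character recognition on scikit-learn's bundled digits
    # dataset, a toy stand-in for models aimed at CAPTCHA images.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)

    model = SVC(kernel="rbf", gamma=0.001).fit(X_train, y_train)
    print(f"test accuracy: {model.score(X_test, y_test):.2%}")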

Separately, the “I am Robot” research at last year’s Black Hat revealed how researchers broke the latest semantic image CAPTCHA and compared various machine learning algorithms. The paper promised 98 percent accuracy in breaking Google’s reCAPTCHA.

6. Poisoning the machine learning engine
A far simpler, yet effective, technique is poisoning the machine learning engine used to detect malware, rendering it ineffective, much like criminals have done with antivirus engines in the past. It sounds simple enough: the machine learning model learns from input data, and if that data pool is poisoned, then the output is also poisoned. Researchers from New York University demonstrated how convolutional neural networks (CNNs) could be backdoored to produce these false (but controlled) results, including when training is outsourced to machine-learning-as-a-service offerings from the likes of Google, Microsoft, and AWS.
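
Label flipping is the bluntest way to see poisoned-in, poisoned-out (the NYU work backdoored CNNs, which is subtler). A sketch on synthetic data:

    # Sketch: training-data poisoning by targeted label flipping. Train the
    # same model on clean and on poisoned labels, and compare. All data is
    # synthetic; this only shows that a corrupted pool corrupts the output.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(5)
    X = rng.random((2000, 10))
    y = (X[:, 0] + X[:, 1] > 1).astype(int)  # toy "malicious" rule
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

    y_bad = y_tr.copy()
    y_bad[X_tr[:, 0] > 0.6] = 0  # attacker forces "benign" labels in one region
    poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_bad)

    print(f"clean model accuracy:    {clean.score(X_te, y_te):.3f}")
    print(f"poisoned model accuracy: {poisoned.score(X_te, y_te):.3f}")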

Cybersecurity on the #plant floor: #fighting the #Hacker Machine #Interface

SCADA systems and cybersecurity: security remains a challenge and, according to ample research, even one of the major restraining factors in SCADA market growth.

While being a multi-faceted challenge, one of the many means through which attackers infiltrate SCADA systems is the Human Machine Interface (HMI).

With HMI/SCADA software in full evolution in the age of Industry 4.0 and the Industrial IoT, exploiting vulnerabilities in the software still happens a lot.

Security vendor Trend Micro looked at the state of SCADA HMI vulnerabilities and had its Zero Day Initiative team investigate all the publicly disclosed vulnerabilities in SCADA software that were fixed in 2015 and 2016. The result: a report and recommendations. An overview and some additional thoughts.

The Hacker Machine Interface: focus on patching

The majority of the SCADA software vulnerabilities found are preventable using secure development practices, Trend Micro states.

The major areas where SCADA software vulnerabilities occur are:

  • Memory corruption.
  • Credential management.
  • Lack of authentication/authorization and insecure defaults.
  • Code injection.
  • A big chunk of other areas.

The press release, revealing the findings and serving as an announcement of the report “Hacker Machine Interface: The State of SCADA HMI Vulnerabilities”, also states that the average time it takes a SCADA/HMI vendor to release a patch once a bug has been disclosed can go up to 150 days.

Patching is a significant challenge for multiple reasons. The mentioned 150 days is approximately 30 days longer than it takes for highly deployed software from the likes of Microsoft or Adobe, yet far less than for enterprise applications from firms such as HPE or IBM, Trend Micro says.

However, knowing that SCADA systems are deployed just about everywhere, and certainly in critical infrastructure, which of course makes them interesting to the ‘bad guys’, there is certainly room for improvement in the area of patching. As per usual we need to emphasize that 150 days is an average. So, when you’re in the market for HMI/SCADA software it might be a good idea to look at the security and patching practices of the various vendors out there.

Among the many concerns regarding the security of SCADA systems, the weak link, according to Trend Micro’s Fritz Sands, really is the human machine interface software, and patching comes in again.

According to Sands, most HMI systems still run on old Windows operating systems, several versions of which no longer receive security updates. Quoting Sands from a November 2017 article entitled ‘Dated Windows software the weak link for SCADA systems‘: “Windows is a sphere where hackers feel very comfortable. Instead of needing a complex tool set to attack SCADA controllers, they have 20 years of hacking skills used against Windows, SQL server, browsers and Adobe products.”

Top SCADA/HMI security issues according to the Hacker Machine Interface report

In the age of Industrial IoT, everything is increasingly connected and we have shifted away from the isolated HMI and SCADA system running on a trusted network, so end-to-end security by design has simply become a must. On top of that come many other security issues, from the inevitable human factor and insider attacks to the traditional challenge of removable media and the ever more sophisticated methods hackers use beyond old tactics such as phishing and malware. Against that backdrop, solving the old Windows version security issue seems like a no-brainer, certainly as the stakes, scale and indeed complexity of cybercrime expand.

Back to Trend Micro’s announcement and some of the preventable SCADA/HMI issues the company found.

Below is an overview as mentioned in the announcement of “Hacker Machine Interface: The State of SCADA HMI Vulnerabilities”. We added some quotes from the report, which is available for download as a PDF.

  • Memory corruption problems, which account for about 20 percent of all identified vulnerabilities, mainly represent traditional code security issues with the likes of stack- and heap-based buffer overflows and out-of-bounds read/write vulnerabilities.
  • Credential management challenges, accounting for a pretty impressive 19 percent of all vulnerabilities, range from not protecting credentials enough and storing passwords in a recoverable format to the use of hard-coded passwords (a minimal sketch of safer password handling follows this list).
  • The category of vulnerabilities in the area of lack of authentication/authorization and of insecure defaults accounts for close to a quarter of all found SCADA vulnerabilities (23 percent to be precise). One of the issues: missing encryption. Another one: unsafe ActiveX controls marked safe for scripting.
  • The issues with regards to code injection are relatively minor in comparison with the others, accounting for 9 percent of all identified vulnerabilities. But of course, although perfect security is close to impossible, that is still far too much, certainly given the mission-critical role of SCADA and the fact that, on top of the more common injection types, there are also domain-specific injections, as Trend Micro states.
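
On the credential management point, the gap between the flagged practices and safer handling is easy to sketch with Python's standard library; the parameters follow common guidance rather than any SCADA vendor's code:

    # Sketch: credential storage. The report flags hard-coded passwords and
    # recoverable formats; the standard alternative stores only a salted,
    # deliberately slow hash and compares in constant time.
    import hashlib
    import hmac
    import os

    HARD_CODED = "admin123"  # the flagged anti-pattern: readable by anyone

    def hash_password(password: str):
        salt = os.urandom(16)
        digest = hashlib.scrypt(password.encode(), salt=salt,
                                n=2**14, r=8, p=1, maxmem=64 * 1024 * 1024)
        return salt, digest

    def verify(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.scrypt(password.encode(), salt=salt,
                                   n=2**14, r=8, p=1, maxmem=64 * 1024 * 1024)
        return hmac.compare_digest(candidate, digest)

    salt, digest = hash_password("correct horse battery staple")
    print(verify("admin123", salt, digest))                      # False
    print(verify("correct horse battery staple", salt, digest))  # True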

Security strategies and security by design as the stakes get higher

To provide an idea of the scope of potential damages, Trend Micro mentions the crucial types of information SCADA system hackers can obtain, such as a facility’s layout and critical thresholds (on top of the phenomenon, not unknown in the world of IoT, of harvesting device settings for future attacks), as well as threats such as the Stuxnet attack on an Iranian nuclear plant and the Ukrainian power grid attacks. The company invites you to check out the various vulnerability types, cases of vulnerable SCADA Human Machine Interfaces and the much needed advice in its paper “Hacker Machine Interface: The State of SCADA HMI Vulnerabilities”.

By the way: needless to say that in times of ongoing digitization and digitalization, organized cybercrime, state-sponsored attacks and ‘cyber’ as a real weapon in warfare, cybersecurity cannot be an afterthought.

Not in SCADA/HMI software, not in SCADA systems, not in industrial transformation, not in critical infrastructure, not in Industry 4.0 and not in digital transformation or IoT projects overall.

Security by design and security strategies need to be included from the very start of any project, not just because of the risks but also because calling in your cybersecurity folks too late slows down digital transformation to begin with; the other way around, security is a digital transformation accelerator.

In a SCADA/HMI security context, the call to do more, in the words of Trend Micro’s ‘The State of SCADA HMI Vulnerabilities’: “despite the obvious risks of obtaining unauthorized access to critical systems, the industry behind the development of SCADA systems, specifically HMI vendors, tend to focus more on equipment manufacture and less on securing the software designed to control them”.

Man and machine: How to team up to meet cybersecurity challenges

In today’s cybersecurity landscape, the pressure is on. CISOs and other executives are suffering “security insomnia”: attack surfaces are growing exponentially, their security teams are receiving overwhelming numbers of alerts, real threats are masked by false positives, and the numbers of serious breaches are reaching new records – the list…

Cybersecurity: is the office coffee machine watching you?

Troubled by something deeply unethical going on at work? Or maybe you’re plotting to leak sensitive information on the company that just sacked you? Either way, you’d best think twice before making your next move, because an all-seeing artificial intelligence might just be analysing every email you send, every file you upload, every room you scan into – even your coffee routine.

The latest wave of cyber-defence technology employs machine learning to monitor use of the ever-expanding number of smart household objects connected to the Internet of Things – shutting down hackers before they’ve broken into corporate databases or whistleblowers before they’ve forwarded on information to the media.

One of the leading proponents is cyber-defence company Darktrace, founded in 2013 by former British intelligence officers in Cambridge and today featuring 370 employees in 23 offices globally. The company is targeting growth in the Asia-Pacific, where regional head Sanjay Aurora is promoting Darktrace’s Enterprise Immune System at the CeBIT Australia conference in Sydney on 23 May.

In an interview ahead of the conference, Aurora tells the Guardian that the Internet of Things, the interconnected everyday devices such as the smart fridge, offers more vulnerabilities to be hacked than ever before – but also more ways to scan for threats.

“In newspapers there is not a single day where we don’t read about an organisation being breached,” he says.

“At a time when even coffee machines have IP addresses, many people in security teams don’t so much as have visibility of the network.”

Where cybersecurity normally functions as a barrier to keep out previously-identified threats, Aurora says Darktrace technology behaves more like a human immune system.

“Once you understand the devices and people, once you notice subtle changes within the network, you establish a pattern of life, and whether it is lateral movement or unusual activity – maybe an employee using a device they don’t normally use, or a fingerprint scanner acting unusually – the immune system notices and takes action, detecting these things in network before they become a headline,” he says.

Darktrace’s package includes a 3D topographical real-time “threat visualizer”, which monitors everyday network activity, and the responsive Antigena system, which can decide for itself to slow systems down to give security personnel time to stop a potential breach, cut off network access to particular individuals, or mark specific emails for further investigation.

“Let’s say an employee is made redundant and becomes a potential information threat, the machine will intelligently determine what is the problem, assess the mathematical threat and then decide what action is to be taken,” Aurora says.

Darktrace claims its Enterprise Immune System has reported over 30,000 serious cyber incidents in over 2,000 deployments across the world, offering up examples such as an employee who was disgruntled about their company’s Brexit plans and was caught before they could leak the information. Another case was put forward by Darktrace co-founder Poppy Gustafsson at the TechCrunch Disrupt conference in London last year. Gustafsson cited the case of attackers sending a truck into the warehouse of a luxury goods manufacturer after uploading their fingerprints to the company’s system in order to bypass the biometric scanners.

“It’s one of the few attacks where a criminal has given their fingerprint ahead of time,” she said.

Darktrace is well on the way to establishing itself in Australia ahead of the CeBIT business tech conference, already boasting clients such as national telecommunications provider Telstra.

According to a Telstra spokesperson, the company “joined forces with Darktrace in 2016, adding it to a suite of complementary security technologies which are designed and utilised to protect customer and corporate information and the Telstra network. Darktrace, along with our other technologies, people and processes, strengthens Telstra’s internal security through its ability to detect anomalous activity and its ability to visualise all network activity, resulting in a reduced time to detect potential threats.”

The move has attracted concern from Communication Workers Union (CWU) national secretary Greg Rayner, who says the union was not consulted on the introduction of the technology.

“That’s disappointing and arguably a breach of Telstra’s obligations under the current enterprise agreement,” he says.

“They’re supposed to consult on changes that will have a significant effect on the workforce. Telstra employees have been subjected to increasingly intense electronic monitoring in recent years, including scrutiny and recording of their online activities at work. We are obviously concerned that this technology will allow further intrusions into employees’ day-to-day working lives.”

Telstra has a history in regard to unions and whistleblowers – in 2008 former employee Jim Ziogas was fired after being connected to a leak to the media of internal plans to de-unionise the workforce.

Whistleblowers Australia vice-president, Brian Martin, doesn’t have a lot in common with Darktrace, but he does share a fondness for immune-system analogies. “Whistleblowers are antibodies for corruption in organisations,” he tells the Guardian. “If it were possible to prevent leaks (and that remains to be shown), this might only allow problems to fester until they become much worse. Think of what happened to Volkswagen, which lacked any whistleblowers or leakers and paid a much larger penalty than if its emissions fraud had been exposed years earlier.”

He says invading the privacy of workers has the potential to create resentment and undermine loyalty, and that a lack of independent monitoring means there are serious questions regarding the effectiveness of Darktrace’s Enterprise Immune System, particularly in regard to false positives and false negatives.

“The damage to morale done by falsely accusing an employee of planning to leak documents can be imagined,” he says.

“How about this option? Adapt the software to monitor the e-communications of top managers to see whether they are planning reprisals against whistleblowers. How do you think they would like that?”

Devised as it was by former MI5 and GCHQ agents, inspired by the challenges they were facing in counterintelligence, Darktrace technology is also an interesting proposition for governments, but the company is more coy about the countries that it counts as clients than the businesses it services.

For its part, a spokesperson for the Australian Signals Directorate (ASD) – the department of defence intelligence agency that bears the slogan “reveal their secrets, protect our own” – refused to confirm or deny use of Darktrace technology, telling the Guardian it does not “provide commentary on capability or use of commercial products”.

There are certainly plenty of rivals to Darktrace technology also promoting their cybersecurity platform’s integration of the latest machine learning capabilities, including CrowdStrike, Symantec and Cylance.

Then there are Darktrace’s true rivals – hackers themselves. Thomas LaRock, technical evangelist at IT company SolarWinds, warns that machine learning is a tool that can be used to attack just as easily as it can be used to defend.

“If it is possible to use machine learning to build a model that helps them launch cyberattacks with greater efficiency, then that’s what you can expect to happen,” he says.

“Think of this as a spy game, where you have agents that go from one side to another. There is bound to be a person somewhere right now working on machine learning models to deter crime. One day they could be found to be working for the criminals, using machine learning models to help commit crime.”

Aurora defends the use of machine learning at Darktrace, arguing this is one game companies cannot afford to opt out of.

“If you look at the way the threat landscape is moving, it is just simply humanly impossible using conventional methods – the only way to react to these threats is AI and machine learning,” he says.

“We are proud to achieve on that front – pure, unsupervised machine learning, as employee behaviour changes. That is the secret sauce – continuously evolving and learning.”
