#cybersecurity | #hackerspace | DEF CON 27, Artificial Intelligence Village – Tal Leibovich’s & Shimon Noam Oren’s ‘From Noisy Distorted Data Sets To Excellent Prediction Models’

Source: National Cyber Security – Produced By Gregory Evans

Thanks to the DEF CON 27 volunteers, videographers, and presenters for publishing their superlative conference videos via their YouTube channel for all to see, enjoy, and learn from.


The post DEF CON 27, Artificial Intelligence Village – Tal Leibovich’s & Shimon Noam Oren’s ‘From Noisy Distorted Data Sets To Excellent Prediction Models’ appeared first on Security Boulevard.


#cybersecurity | #hackerspace | DEF CON 27, Artificial Intelligence Village – Anna Skelton’s ‘Deep Fakes, Deep Trouble: Analyzing The Effects of Deepfakes On Market Manipulation’


Thanks to the DEF CON 27 volunteers, videographers, and presenters for publishing their superlative conference videos via their YouTube channel for all to see, enjoy, and learn from.


The post DEF CON 27, Artificial Intelligence Village – Anna Skelton’s ‘Deep Fakes, Deep Trouble: Analyzing The Effects of Deepfakes On Market Manipulation’ appeared first on Security Boulevard.


#cybersecurity | #hackerspace | Artificial Intelligence, The True Beginning Occurred In 1912


Leonardo Torres y Quevedo’s fascinating Chess Automata, via History of Computers

A well-crafted blog entry authored by Professor Herbert Bruderer at the Communications of the ACM blog, detailing the true start of artificial intelligence in 1912.

“If one takes chess as a yardstick for artificial intelligence, however, this branch of research begins much earlier, at the latest in 1912 with the chess automaton of the Spaniard Leonardo Torres Quevedo (cf. Fig. 1). In the chess-playing Turk (1769) of Wolfgang von Kempelen, a human player was hidden.” – via Herbert Bruderer, retired lecturer of Didactics in Computer Science at ETH Zürich

Professor Bruderer, now retired from his lecturer role in Didactics of Computer Science at ETH Zürich, currently serves as a historian of technology.

*** This is a Security Bloggers Network syndicated blog from Infosecurity.US, authored by Marc Handelman.


The post Artificial Intelligence, The True Beginning Occurred In 1912 appeared first on National Cyber Security.


#deepweb | Panorays Unveils Dark Web Insights Security Intelligence Solution | Security News


Opportunities For Security Integrators In The Healthcare Vertical

The healthcare market is rife with opportunity for security systems integrators. “Hospitals have a continuous need for security, to update their systems, to make repairs,” says David Alessandrini, Vice President, Pasek Corp., a systems integrator. “It’s cyclical. Funding for large projects might span one to two years, and then they go into a maintenance mode. Departments are changing constantly, and they need us to maintain the equipment to make sure it’s operating to its full potential.”
The experience of Pasek Corp. is typical of the opportunities available for security integrator companies in the healthcare vertical. A single large hospital system can supply a dependable ongoing source of revenue to integrator companies, says Alessandrini. Hospitals are “usually large enough to provide enough work for several people for an extended length of time.”  Healthcare customers in Pasek’s service area around Boston provide the potential for plenty of work. “We have four major hospitals, each with in excess of 250 card readers and 200 cameras, in the Boston area,” Alessandrini says.
One appeal of the healthcare market for North Carolina Sound, an integrator covering central North Carolina, is the breadth of possible equipment they can sell into the healthcare market, including access control and video, of course, but also other technologies, such as audio-video systems in a dining room. North Carolina Sound has also installed sound masking in some areas with waiting rooms to protect private patient information from being overheard. Locking systems on pharmaceutical doors are another opportunity.
IP-based networked video systems
Among North Carolina Sound’s customers is Wayne Memorial Hospital, Goldsboro, N.C., which uses about 340 video cameras, with 80 percent or more of them converted to IP. The hospital is replacing analog with IP cameras as budget allows, building network infrastructure to support the system. The healthcare market tends to have a long sales cycle; in general, sales don’t happen overnight or even within a month or two. In fact, the period between an initial meeting with a healthcare facility and installation of a system could stretch to a year or longer. A lot happens during that time.
Healthcare systems involve extensive planning, engineering, and meetings among various departments. Physical security systems that involve the information technology (IT) department, as do most systems today, can be especially complex. Installation of networked video systems based on Internet protocol (IP) requires deep and probing discussions with the IT team about how a system fits into the facility’s network infrastructure. A facility’s IT folks must be convinced an IP solution will function seamlessly on their network.
Compatible with the network
They must vet the technology to ensure the devices and solutions will be compatible with the network, and must sign off on technology choices. Even more important is determining whether the security system will adhere to the facility’s cybersecurity requirements. A complete solution that integrates nearly any system that lives on or uses a facility’s network is ultimately what the healthcare vertical is moving toward, says Jason Ouellette, General Manager – Enterprise Access Control & Video, Johnson Controls.
“We are hearing more and more from customers across industries that they want to be able to use their security systems and devices for more than just security: they want added value,” says Ouellette. Many want to use access control, video surveillance and other data sources to assess their business operations and/or workflows with the goal of improving efficiency.
Upgrade cost-effectively
Historically, three factors have prevented many organizations from moving forward with new technologies: lack of money, proprietary systems, and the need to “rip and replace” large parts of the installed systems, says Robert Laughlin, CEO and Chairman, Galaxy Control Systems. “Today, while funding is almost always a limiting factor at some level, the progression of industry standards and ‘open’ systems has made a big positive impact on the ability of organizations to upgrade cost-effectively,” he says.
Despite any obstacles, healthcare customers generally welcome new innovations. “I would say healthcare security professionals in general are early adopters of technology and like to implement the best technology available,” says Jim Stankevich, Global Manager – Healthcare Security, Johnson Controls/Tyco Security Products. “For most, rapid implementation is limited by budgets and available funding.”


The post Panorays Unveils Dark Web Insights Security Intelligence Solution | Security News appeared first on National Cyber Security.


Artificial Intelligence, Robotics & IoT

General Cybersecurity Conference

 August 21 – 22, 2018 | Paris, France

Cybersecurity Conference Description

We have the pleasure of inviting you to join us for the International Conference on Artificial Intelligence, Robotics & IoT (AI & IoT 2018), to be held August 21–22, 2018 in Paris, France, organized by Conference Series LLC. AI & IoT 2018 is dedicated and designed to bring together scientists, researchers, engineers, robot operators, academics, innovators, students, and business leaders to discuss, present, and exchange ideas on artificial intelligence, robotics, the Internet of Things, knowledge-based systems, big data analysis, modern artificial intelligence, intelligent automation, the ethics of artificial intelligence, developments in robotics, components of robots, robots in industry, robots in space, robots and healthcare systems, the history of the Internet, IoT and globalization, and the future of mankind and IoT. The principal objective of AI & IoT 2018 is to provide an opportunity for researchers, scientists, and delegates to interact, discuss, and exchange innovative ideas in the various areas of artificial intelligence, robotics, and the Internet of Things. This prestigious annual conference will be a great platform for everyone to interact with leading experts from all of these fields, and we would be delighted to have your presence at AI & IoT 2018.


The post Artificial Intelligence, Robotics & IoT appeared first on National Cyber Security Ventures.


The #Malicious #Use of #Artificial #Intelligence in #Cybersecurity

Criminals and Nation-state Actors Will Use Machine Learning Capabilities to Increase the Speed and Accuracy of Attacks

Scientists from leading universities, including Stanford and Yale in the U.S. and Oxford and Cambridge in the UK, together with civil society organizations and representatives from the cybersecurity industry, last month published an important paper titled The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.

While the paper (PDF) looks at a range of potential malicious misuses of artificial intelligence (which includes and focuses on machine learning), our purpose here is to largely exclude the military and concentrate on the cybersecurity aspects. It is, however, impossible to completely exclude the potential political misuse given the interaction between political surveillance and regulatory privacy issues.

Artificial intelligence (AI) is the use of computers to perform the analytical functions normally only available to humans – but at machine speed. ‘Machine speed’ is described by Corvil’s David Murray as, “millions of instructions and calculations across multiple software programs, in 20 microseconds or even faster.” AI simply makes the unrealistic, real.

The problem discussed in the paper is that this function has no ethical bias. It can be used as easily for malicious purposes as it can for beneficial purposes. AI is largely dual-purpose; and the basic threat is that zero-day malware will appear more frequently and be targeted more precisely, while existing defenses are neutralized – all because of AI systems in the hands of malicious actors.

Current Machine Learning and Endpoint Protection
Today, the most common use of the machine learning (ML) type of AI is found in next-gen endpoint protection systems; that is, the latest anti-malware software. It is called ‘machine learning’ because the AI algorithms within the system ‘learn’ from many millions (and counting) of samples and behavioral patterns of real malware.

Detection of a new pattern can be compared with known bad patterns to generate a probability level for potential maliciousness at a speed and accuracy not possible for human analysts within any meaningful timeframe.

It works – but with two provisos: it depends upon the quality of the ‘learning’ algorithm, and the integrity of the data set from which it learns.
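The learning step described above can be sketched with a toy model. This is a minimal illustration, not a real anti-malware engine: the two file features (payload entropy and a count of suspicious API calls) and all the sample values are hypothetical.

```python
import math

def train_logistic(samples, labels, lr=0.5, epochs=2000):
    """Fit a tiny logistic model (w·x + b) by stochastic gradient descent."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - y  # gradient of log-loss w.r.t. z
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def malicious_probability(model, x):
    """Score a new sample: probability that it matches the 'malicious' pattern."""
    w, b = model
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical features per file: [payload entropy, suspicious API call count]
benign    = [[0.2, 0.0], [0.3, 1.0], [0.1, 0.0]]
malicious = [[0.9, 5.0], [0.8, 4.0], [0.95, 6.0]]
model = train_logistic(benign + malicious, [0, 0, 0, 1, 1, 1])

print(malicious_probability(model, [0.9, 5.0]))  # high score
print(malicious_probability(model, [0.2, 0.0]))  # low score
```

The two provisos show up directly here: a poor gradient routine (the algorithm) or mislabeled training rows (the data set) would silently degrade every score the model produces.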

Potential abuse can come in both areas: manipulation or even alteration of the algorithm; and poisoning the data set from which the machine learns.

The report warns, “It has been shown time and again that ML algorithms also have vulnerabilities. These include ML-specific vulnerabilities, such as inducing misclassification via adversarial examples or via poisoning the training data… ML algorithms also remain open to traditional vulnerabilities, such as memory overflow. There is currently a great deal of interest among cyber-security researchers in understanding the security of ML systems, though at present there seem to be more questions than answers.”

The danger is that while these threats to ML already exist, criminals and nation-state actors will begin to use their own ML capabilities to increase the speed and accuracy of attacks against ML defenses.

On data set poisoning, Andy Patel, security advisor at F-Secure, warns, “Diagnosing that a model has been incorrectly trained and is exhibiting bias or performing incorrect classification can be difficult.” The problem is that even the scientists who develop the AI algorithms don’t necessarily understand how they work in the field.

He also notes that malicious actors aren’t waiting for their own ML to do this. “Automated content generation can be used to poison data sets. This is already happening, but the techniques to generate the content don’t necessarily use machine learning. For instance, in 2017, millions of auto-generated comments regarding net neutrality were submitted to the FCC.”
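A deliberately simple nearest-centroid “detector” shows how poisoning the training feed can flip a classification. The features and samples are hypothetical, and real poisoning attacks are far subtler than this sketch.

```python
def centroid(points):
    """Mean point of a cluster of feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def classify(x, benign_pts, malicious_pts):
    """Label x by whichever training centroid is closer (squared distance)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    cb, cm = centroid(benign_pts), centroid(malicious_pts)
    return "malicious" if dist2(x, cm) < dist2(x, cb) else "benign"

benign    = [[0.1, 0.1], [0.2, 0.0], [0.0, 0.2]]
malicious = [[0.9, 0.9], [1.0, 0.8], [0.8, 1.0]]

sample = [0.7, 0.7]  # sits near the malicious cluster
print(classify(sample, benign, malicious))  # "malicious"

# Poisoning: the attacker slips samples resembling their own malware into the
# *benign* feed, dragging the benign centroid toward the malicious region.
poisoned_benign = benign + [[0.7, 0.7]] * 10
print(classify(sample, poisoned_benign, malicious))  # "benign"
```

Ten mislabeled rows are enough to move the benign centroid so far that the same sample is now waved through, which is exactly why the integrity of training feeds matters.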

The basic conflict between attackers and defenders will not change with machine learning – each side seeks to stay ahead of the other; and each side briefly succeeds. “We need to recognize that new defenses that utilize technology such as AI may be most effective when initially released before bad actors are building countermeasures and evasion tactics intended to circumvent them,” comments Steve Grobman, CTO at McAfee.

Put simply, the cybersecurity industry is aware of the potential malicious use of AI, and is already considering how best to react to it. “Security companies are in a three-way race between themselves and these actors, to innovate and stay ahead, and up until now have been fairly successful,” observes Hal Lonas, CTO at Webroot. “Just as biological infections evolve to more resistant strains when antibiotics are used against them, so we will see malware attacks change as AI defense tactics are used over time.”

Hyrum Anderson, one of the authors of the report and technical director of data science at Endgame, accepts that the industry understands ML can be abused or evaded, but not necessarily the methods that could be employed. “Probably fewer data scientists in infosec are thinking how products might be misused,” he told SecurityWeek, “for example, exploiting a hallucinating model to overwhelm a security analyst with false positives, or a similar attack to make AI-based prevention DoS the system.”

Indeed, even this report failed to mention one type of attack (although there will undoubtedly be others). “The report doesn’t address the dangerous implications of machine learning based de-anonymization attacks,” explains Joshua Saxe, chief data scientist at Sophos. Data anonymization is a key requirement of many regulations. AI-based de-anonymization is likely to be trivial and rapid.
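The classic form of such an attack is record linkage: joining “anonymized” records with public auxiliary data on quasi-identifiers. ML would scale and fuzz this matching; the mechanism itself can be sketched with exact matching. All records below are invented.

```python
# "Anonymized" records: direct identifiers removed, but quasi-identifiers
# (zip code, birth year) retained alongside the sensitive attribute.
anonymized = [
    {"zip": "02139", "birth_year": 1970, "diagnosis": "flu"},
    {"zip": "02139", "birth_year": 1985, "diagnosis": "asthma"},
    {"zip": "94105", "birth_year": 1970, "diagnosis": "diabetes"},
]

# Public auxiliary data (e.g., a voter roll) with names attached.
public = [
    {"name": "Alice", "zip": "02139", "birth_year": 1985},
    {"name": "Bob",   "zip": "94105", "birth_year": 1970},
]

def reidentify(anon_rows, aux_rows, keys=("zip", "birth_year")):
    """Link each record whose quasi-identifiers match exactly one aux entry."""
    matches = {}
    for row in anon_rows:
        hits = [a for a in aux_rows if all(a[k] == row[k] for k in keys)]
        if len(hits) == 1:  # a unique match is a re-identification
            matches[hits[0]["name"]] = row["diagnosis"]
    return matches

print(reidentify(anonymized, public))
# {'Alice': 'asthma', 'Bob': 'diabetes'}
```

An ML-driven version would tolerate noisy, partial matches across much larger data sets, which is what makes the attack trivial and rapid at scale.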

Anderson describes three guidelines that Endgame uses to protect the integrity and secure use of its own ML algorithms. The first is to understand and appropriately limit the AI interaction with the system or endpoint. The second is to understand and limit the data ingestion; for example, anomaly detection that ingests all events everywhere versus anomaly detection that ingests only a subset of ‘security-interesting’ events. In order to protect the integrity of the data set, he suggests, “Trust but verify data providers, such as the malware feeds used for training next generation anti-virus.”

The third: “After a model is built, and before and after deployment, proactively probe it for blind spots. There are fancy ways to do this (including my own research), but at a minimum, doing this manually is still a really good idea.”
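Manual probing of this kind can be as simple as replaying perturbed variants of a known-bad input and recording what slips through. The toy signature below is an illustration, not a real detection rule.

```python
def naive_detector(cmd: str) -> bool:
    """Toy signature: flags one encoded-PowerShell pattern (illustration only)."""
    return "powershell -enc" in cmd

# Manual probing: feed perturbed variants of a known-bad input and record
# which ones evade the detector -- those are its blind spots.
known_bad = "powershell -enc SQBFAFgA"
perturbations = [
    known_bad,
    known_bad.upper(),                             # case change
    known_bad.replace(" -enc", "  -enc"),          # extra whitespace
    known_bad.replace("-enc", "-EncodedCommand"),  # long-form flag
]

blind_spots = [p for p in perturbations if not naive_detector(p)]
for p in blind_spots:
    print("missed:", p)
```

Even this crude sweep exposes three evasions of a rule that catches the original input, which is the point of probing a model before and after deployment.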

A second area of potential malicious use of AI revolves around ‘identity’. AI’s ability to both recognize and generate manufactured images is advancing rapidly. This can have both positive and negative effects. Facial recognition for the detection of criminals and terrorists would generally be considered beneficial – but it can go too far.

“Note, for example,” comments Sophos’ Saxe, “the recent episode in which Stanford researchers released a controversial algorithm that could be used to tell if someone is gay or straight, with high accuracy, based on their social media profile photos.”

“The accuracy of the algorithm,” states the research paper, “increased to 91% [for men] and 83% [for women], respectively, given five facial images per person.” Human judges achieved much lower accuracy: 61% for men and 54% for women. The result is typical: AI can improve human performance at a scale that cannot be contemplated manually.

“Critics pointed out that this research could empower authoritarian regimes to oppress homosexuals,” adds Saxe, “but these critiques were not heard prior to the release of the research.”

This example of the potential misuse of AI in certain circumstances touches on one of the primary themes of the paper: the dual-use nature of, and the role of ‘ethics’ in, the development of artificial intelligence. We look at ethics in more detail below.

A more positive use of AI-based recognition can be found in recent advances in speech recognition and language comprehension. These advances could be used for better biometric authentication – were it not for the dual-use nature of AI. Along with facial and speech recognition there has been a rapid advance in the generation of synthetic images, text, and audio; which, says the report, “could be used to impersonate others online, or to sway public opinion by distributing AI-generated content through social media channels.”

For authentication, Webroot’s Lonas believes we will need to adapt our current authentication approach. “As the lines between machines and humans become less discernible, we will see a shift in what we currently see in authentication systems, for instance logging in to a computer or system. Today, authentication is used to differentiate between various humans and prevent impersonation of one person by another. In the future, we will also need to differentiate between humans and machines, as the latter, with help from AI, are able to mimic humans with ever greater fidelity.”

The future potential for AI-generated fake news is a completely different problem, but one that has the potential to make Russian interference in the 2016 presidential election somewhat pedestrian.

Just last month, the U.S. indicted thirteen Russians and three companies “for committing federal crimes while seeking to interfere in the United States political system.” A campaign allegedly involving hundreds of people working in shifts and with a budget of millions of dollars spread misinformation and propaganda through social networks. Such campaigns could increase in scope with fewer people and far less cost with the use of AI.

In short, AI could be used to make fake news more common and more realistic; or make targeted spear-phishing more compelling at the scale of current mass phishing through the misuse or abuse of identity. This will affect both business cybersecurity (business email compromise, BEC, could become even more effective than it already is), and national security.

The Ethical Problem
The increasing use of AI in cyber will inevitably draw governments into the equation. They will be concerned about more efficient cyber attacks against critical infrastructure, but will also become embroiled in civil society concerns over their own use of AI in mass surveillance. Since machine learning algorithms become more efficient with the size of the data set from which they learn, the ‘own it all’ mentality exposed by Edward Snowden will become increasingly compelling to law enforcement and intelligence agencies.

The result is that governments will be drawn into the ethical debate about AI and the algorithms it uses. In fact, this process has already started, with the UK’s financial regulator warning that it will be monitoring the use of AI in financial trading.

Governments will seek to assure people that their own use of citizens’ big data is ethical (relying on judicial oversight, court orders, minimal intrusion, and so on). They will also seek to reassure people that business makes ethical use of artificial intelligence – GDPR has already made a start by placing controls over automated user profiling.

While governments often like the idea of ‘self-regulation’ (it absolves them from appearing to be over-proscriptive), ethics in research is never adequately covered by scientists. The report states the problem: “Appropriate responses to these issues may be hampered by two self-reinforcing factors: first, a lack of deep technical understanding on the part of policymakers, potentially leading to poorly-designed or ill-informed regulatory, legislative, or other policy responses; second, reluctance on the part of technical researchers to engage with these topics, out of concern that association with malicious use would tarnish the reputation of the field and perhaps lead to reduced funding or premature regulation.”

There is a widespread belief among technologists that politicians simply don’t understand technology. Chris Roberts, chief security architect at Acalvio, is an example. “God help us if policy makers get involved,” he told SecurityWeek. “Having just read the last thing they dabbled in, I’m dreading what they’d come up with, and would assume it’ll be too late, too wordy, too much crap and red tape. They’re basically five years behind the curve.”

The private sector is little better. Businesses are duty bound, in a capitalist society, to maximize profits for their shareholders. New ideas are frequently rushed to market with little thought for security; and new algorithms will probably be treated likewise.

Oliver Tavakoli, CTO at Vectra, believes that the security industry is obligated to help. “We must adopt defensive methodologies which are far more flexible and resilient rather than fixed and (supposedly) impermeable,” he told SecurityWeek. “This is particularly difficult for legacy security vendors who are more apt to layer on a bit of AI to their existing workflow rather than rethinking everything they do in light of the possibilities that AI brings to the table.”

“The security industry has the opportunity to show leadership with AI and focus on what will really make a difference for customers and organizations currently being pummeled by cyberattacks,” agrees Vikram Kapoor, co-founder and CTO at Lacework. His view is that there are many areas where the advantages of AI will outweigh the potential threats.

“For example,” he continued, “auditing the configuration of your system daily for security best practices should be automated – AI can help. Continuously checking for any anomalies in your cloud should be automated – AI can help there too.”

It would probably be wrong, however, to demand that researchers limit their research: it is the research that is important rather than ethical consideration of potential subsequent use or misuse of the research. The example of Stanford’s sexual orientation algorithm is a case in point.

Google mathematician Thomas Dullien (aka Halvar Flake on Twitter) puts a common researcher view. Commenting on the report, he tweeted, “Dual-use-ness of research cannot be established a-priori; as a researcher, one usually has only the choice to work on ‘useful’ and ‘useless’ things.” In other words, you cannot – or at least should not – restrict research through imposed policy because at this stage, its value (or lack of it) is unknown.

McAfee’s Grobman believes that concentrating on the ethics of AI research is the wrong focus for defending against AI. “We need to place greater emphasis on understanding the ability for bad actors to use AI,” he told SecurityWeek, “as opposed to attempting to limit progress in the field in order to prevent it.”

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation makes four high-level recommendations “to better forecast, prevent, and mitigate” the evolving threats from unconstrained artificial intelligence. They are: greater collaboration between policymakers and researchers (that is, government and industry); the adoption of ethical best practices by AI researchers; a methodology for handling dual-use concerns; and an expansion of the stakeholders and domain experts involved in discussing the issues.

Although the detail of the report makes many more finely-grained comments, these high-level recommendations indicate there is no immediately obvious solution to the threat posed by AI in the hands of cybercriminals and nation-state actors.

Indeed, it could be argued that there is no solution. Just as there is no solution to the criminal use of encryption – merely mitigation – perhaps there is no solution to the criminal use of AI – just mitigation. If this is true, defense against the criminal use of AI will be down to the very security vendors that have proliferated the use of AI in their own products.

It is possible, however, that the whole threat of unbridled artificial intelligence in the cyber world is being over-hyped.

F-Secure’s Patel comments, “Social engineering and disinformation campaigns will become easier with the ability to generate ‘fake’ content (text, voice, and video). There are plenty of people on the Internet who can very quickly figure out whether an image has been photoshopped, and I’d expect that, for now, it might be fairly easy to determine whether something was automatically generated or altered by a machine learning algorithm.

“In the future,” he added, “if it becomes impossible to determine if a piece of content was generated by ML, researchers will need to look at metadata surrounding the content to determine its validity (for instance, timestamps, IP addresses, etc.).”

In short, Patel’s suggestion is that AI will simply scale, in quality and quantity, the same threats that are faced today. But AI can also scale and improve the current defenses against those threats.
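Patel’s metadata idea can be sketched with a few plausibility checks. The content items, field names, and heuristics below are all hypothetical; real provenance analysis would be far more involved.

```python
import ipaddress
from datetime import datetime, timezone

def metadata_red_flags(item):
    """Flag metadata inconsistencies that suggest generated or altered content.
    Heuristics only -- illustrative, not a real provenance check."""
    flags = []
    created = datetime.fromisoformat(item["created"])
    modified = datetime.fromisoformat(item["modified"])
    if modified < created:
        flags.append("modified before created")
    if created > datetime.now(timezone.utc):
        flags.append("timestamp in the future")
    try:
        ipaddress.ip_address(item["source_ip"])
    except ValueError:
        flags.append("malformed source IP")
    return flags

item = {
    "created":  "2018-03-10T12:00:00+00:00",
    "modified": "2018-03-09T08:00:00+00:00",  # earlier than creation
    "source_ip": "203.0.113.999",             # invalid final octet
}
print(metadata_red_flags(item))
# ['modified before created', 'malformed source IP']
```

No single check is conclusive; the value is in correlating many such signals around a piece of content whose body alone can no longer be trusted.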

“The fear is that super powerful machine-learning-based fuzzers will allow adversaries to easily and quickly find countless zero-day vulnerabilities. Remember, though, that these fuzzers will also be in the hands of the white hats… In the end, things will probably look the same as they do now.”
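The fuzzing idea above can be sketched minimally: mutate a seed input at random and collect whatever crashes the target. The “parser” and its bug are contrived for the example; ML-based fuzzers replace the random mutation with learned input generation.

```python
import random

def fragile_parser(data: bytes):
    """Toy parser with a hidden bug: trusts a 1-byte length prefix and
    assumes the payload is ASCII."""
    length = data[0]
    return data[1:1 + length].decode("ascii")  # crashes on non-ASCII payload

def fuzz(target, seed=b"\x05hello", rounds=2000):
    """Dumb mutational fuzzer: flip one random byte per round and collect
    every input that raises an exception."""
    rng = random.Random(0)  # fixed seed for reproducibility
    crashes = []
    for _ in range(rounds):
        data = bytearray(seed)
        pos = rng.randrange(len(data))
        data[pos] = rng.randrange(256)
        try:
            target(bytes(data))
        except Exception:
            crashes.append(bytes(data))
    return crashes

crashes = fuzz(fragile_parser)
print(f"{len(crashes)} crashing inputs found")
```

The same loop serves attacker and defender alike, which is exactly the symmetry the quote describes.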


The post The #Malicious #Use of #Artificial #Intelligence in #Cybersecurity appeared first on National Cyber Security Ventures.


Artificial #Intelligence is #Important for #Cybersecurity, But It’s Not #Enough

Source: National Cyber Security – Produced By Gregory Evans

Why is artificial intelligence important for cybersecurity?

The advent of Artificial Intelligence has brought with it a new scope for cybersecurity. After all, an intelligent security system is expected to overcome any sophisticated threat. However, many security experts believe that AI is a double-edged sword that could become dangerous on an epic scale if it gets into the wrong hands. Let us take a quick look at the unison between cybersecurity and AI.

Cybersecurity is the need of the day. As if we didn’t have enough to worry about with terrorists running wild, always looking to inflict damage, we now have to worry about cybercriminals as well. And in many cases, they can be a lot more dangerous than your average terrorist.

The significance of having a solid cybersecurity strategy or solution has grown over the years, thanks largely to the proliferation of smart devices on the Internet. With an ever-growing number of endpoints always connected to cyberspace, cybercriminals now have a plethora of opportunities to infiltrate devices.

Not only do hackers have more entry points to breach, but they also have more sophisticated tools to penetrate even highly secured devices or networks. How are they doing it? By mass-producing sophisticated malware.

According to Symantec’s 22nd Internet Security Threat Report, over 300 million malware samples were detected in 2016 alone. And that’s not all: John, a contributor at TheBestVPN, shared the shocking statistic that one in every 131 emails contains malware. Figures like these come as quite a blow to businesses, who then rush to come up with more potent cybersecurity solutions.

Moreover, we can’t ignore the fact that, with the passage of time, cybercriminals have become smarter and more adept at countering traditional security practices. A 2017 survey of 70 professional hackers and pen testers found that 60% claim they can compromise a system within just six hours, and over 80% said they could remain hidden on a network for 100 days after stealing sensitive data.

To combat such threats, we need to come up with a disruptive security technology that is not only efficient, but also proactive, faster and more intelligent. One such disruption that can prove itself an ideal security solution is Artificial Intelligence (AI).

Artificial Intelligence & Cybersecurity: A Perfect Unison or a Calamity

When we talk about Artificial Intelligence, the first things that pop into our minds are technologies like Tesla’s self-driving cars or the Amazon Echo. This is because we treat AI as a mere buzzword and nothing else.

In reality, AI can offer far more firepower when it comes to cybersecurity. It can cover the shortage of manpower that we see in this highly complex field. Likewise, it can run things faster and hence detect threats before they compromise a system and inflict damage.

Although there is a lot of potential in Artificial Intelligence for tackling complex cyber threats for good, there are some aspects that make it a double-edged sword. Before we move on to the other aspects of AI, let’s take a look at why it seems to be a great cybersecurity tool.

The Significance of AI as a Security Solution

IT experts at a company have a lot on their hands to monitor and analyze. They are constantly challenged with sifting through loads of security logs and activities, identifying events that could pose a serious threat, and coming up with mitigation strategies to contain them.

Moreover, there are weeks and months of logs that need to be scrutinized and vetted for security purposes. Identifying any abnormality in such vast amounts of data, and then formulating the right solution, requires not only more manpower but also more tools and resources.

However, an AI-powered machine can greatly assist IT personnel in monitoring, tracking and detecting anomalies efficiently.
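This kind of automated monitoring can be as simple as flagging statistical outliers in event counts. The hourly failed-login figures below are invented, and production systems use far richer models than a z-score.

```python
import statistics

def anomalies(counts, threshold=3.0):
    """Return indices whose count deviates more than `threshold` standard
    deviations from the mean -- a minimal anomaly detector."""
    mean = statistics.mean(counts)
    stdev = statistics.stdev(counts)
    return [i for i, c in enumerate(counts)
            if abs(c - mean) / stdev > threshold]

# Hypothetical failed-login counts per hour; hour 18 is a brute-force burst.
logins_per_hour = [12, 9, 11, 10, 13, 8, 12, 11, 10, 9,
                   12, 10, 11, 13, 9, 10, 12, 11, 250, 10,
                   11, 9, 12, 10]
print(anomalies(logins_per_hour))  # [18]
```

A machine can run checks like this continuously across every log source, surfacing only the handful of hours a human analyst actually needs to look at.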

Ryan Permeh, Cylance’s Chief Scientist, said in an online interview with CSO Online, “Historically, an AV researcher might see 10,000 viruses in a career. Today there are over 700,000 per day.” He further states that his security firm uses AI to tackle such attacks.

Apart from that, AI as a security tool can help with the lack of manpower that the cybersecurity industry is currently facing. Over 40% of organizations claim that they suffer from a “problematic shortage” of talent in cybersecurity.

Shahid Shah, the CEO of Netspective Communications, claims that there is a lot of skill shortage in different cybersecurity areas such as advanced malware prevention, compliance, IDS/IPS, identity and access management, etc.

Shah further states that by implementing AI, security firms can depend on “computers to do the grunt work and leave humans to the decision-making.”

Why AI Currently Isn’t a ‘Perfect’ Cybersecurity Solution

If AI can be used to shield our systems and networks from cyber-attacks, it is rational to expect the technology to be used for attacks as well. In the near future, as AI becomes more automated and developed, we may see more sophisticated cyber-attacks carried out by intelligent malware or viruses.

In fact, Endgame security expert Hyrum Anderson proved just that at DEF CON 2017. His team demonstrated an intelligent application that can re-engineer malware samples to make them undetectable even to a smart antivirus. The researchers succeeded in circumventing the protective layers of the AI-powered antivirus with their AI-modified malware 16% of the time.

The research was conducted to show that even AI can have blind spots that could be used to compromise systems.
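A heavily simplified sketch of that idea follows. This is not Endgame’s actual tool: it models an attacker who can query a detector’s score and greedily toggles non-essential features of a malware sample until the score drops below the detection threshold. The feature names, weights, and threshold are all invented for illustration.

```python
# Toy linear "malware detector": weighted sum of boolean features.
WEIGHTS = {"packed": 2.0, "writes_registry": 1.5,
           "has_signature": -1.8, "imports_crypto": 1.0}
THRESHOLD = 1.0  # score above this => flagged as malware

def score(features):
    return sum(WEIGHTS[k] for k, v in features.items() if v)

def evade(features, mutable):
    """Greedily toggle mutable (non-functionality-critical) features
    until the detector's score falls below the threshold."""
    feats = dict(features)
    for name in mutable:
        if score(feats) <= THRESHOLD:
            break
        feats[name] = not feats[name]
    return feats

sample = {"packed": True, "writes_registry": True,
          "has_signature": False, "imports_crypto": True}
evaded = evade(sample, mutable=["has_signature", "packed"])
print(round(score(sample), 1), round(score(evaded), 1))
```

Real evasion attacks work against far richer models, but the blind spot is the same: any feature the attacker can change without breaking the malware is a lever against the classifier.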

The demonstration Hyrum Anderson presented isn’t the only research indicating the negative implications of relying solely on AI. Another study, conducted by the security firm Cylance, predicts the “weaponization” of AI in the near future.

According to the research, 62% of security experts believe that AI-powered cyber-attacks will increase in the near future, and hence the technology will be used as an intelligent cyber weapon.

“While AI may be the best hope for slowing the tide of cyberattacks and breaches, it may also create more advanced attacker tactics in the short-term,” says Cylance.

Final Say

AI-powered systems may reinforce our cybersecurity infrastructure, enabling our workforce to detect, contain, mitigate, or stop cyber threats. However, relying solely on an intelligent technology that can be molded at will is dangerous. Moreover, an AI-enabled attack could prove detrimental on an epidemic scale.

The post Artificial #Intelligence is #Important for #Cybersecurity, But It’s Not #Enough appeared first on National Cyber Security Ventures.

View full post on National Cyber Security Ventures

Cyber Security Vulnerability Intelligence Analyst

Source: National Cyber Security – Produced By Gregory Evans

Cyber Security Vulnerability Intelligence Analyst

Dynamics of the Role

The Cyber Vulnerability Intelligence Analyst will collect, research, coordinate, and deliver intelligence gathered from various sources to increase TransUnion’s cyber vulnerability awareness and protection levels. This person will interact with key organizational personnel, working within the Cyber Threat and Intelligence team to compose essential documentation (risk assessments, vulnerability assessment reports, vulnerability remediation tracking reports, patch management reports, etc.). The candidate’s strategic work products will enable decision makers to make more informed business decisions based on actionable, timely, and relevant vulnerability intelligence.

We are looking to grow a research team with expertise in the following domains: Microsoft Windows Operating System Internals, Linux Operating System Internals, and Web Application design. All candidates must have expertise in at least one domain. The ideal candidate is very detail-oriented, with strong written and oral communication skills as well as a strong technical background.

How You’ll Contribute:

  • Classify and prioritize the risk of new vulnerabilities according to the specifics of TransUnion’s global IT enterprise, mitigating factors, and assessment of the impacts of internal and external threats
  • Facilitate proactive remediation of new vulnerabilities by collecting information from threat and vulnerability feeds, analyzing the impact/applicability to our environment and communicating applicable vulnerabilities and recommended remediation actions to the impacted teams
  • Provide input to our security architects and testing team for enhancing and validating the information security strategy
  • Stay current on security industry trends, attack techniques, mitigation techniques, security technologies and new and evolving threats to the organization by attending conferences, networking with peers and other education opportunities
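To make the classification-and-prioritization responsibility concrete, here is a hypothetical sketch that ranks incoming vulnerabilities by CVSS base score weighted by internal exposure. The scoring scheme, zone weights, and CVE entries are invented for illustration and are not TransUnion’s actual process.

```python
# Hypothetical prioritization: severity (CVSS base score) scaled by
# how exposed the affected asset is within the environment.
EXPOSURE = {"internet": 1.0, "internal": 0.6, "isolated": 0.2}

def priority(vuln):
    """Composite risk score: CVSS base score x exposure weight."""
    return vuln["cvss"] * EXPOSURE[vuln["zone"]]

feed = [  # invented entries standing in for a vulnerability feed
    {"id": "CVE-2017-0001", "cvss": 9.8, "zone": "internal"},
    {"id": "CVE-2017-0002", "cvss": 7.5, "zone": "internet"},
    {"id": "CVE-2017-0003", "cvss": 9.0, "zone": "isolated"},
]

# Highest composite risk first: an internet-facing medium can outrank
# an isolated critical.
for v in sorted(feed, key=priority, reverse=True):
    print(v["id"], round(priority(v), 2))
```

The point of the weighting is the one the posting makes: raw severity alone is not enough, and the analyst must fold in mitigating factors and environmental context before remediation is assigned.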

What You’ll Bring:

  • Curiosity and interest in technology
  • Ability to effectively initiate, prioritize, and execute tasks in a high-pressure environment with minimal supervision
  • Expertise in one of the following areas: Microsoft Windows Operating System Internals, Linux Operating Systems Internals, or Web Application design
  • 3+ years of Information Security experience working with vulnerability management tools and/or security testing
  • Strong knowledge of threats and vulnerabilities associated with application and network security
  • Sense of urgency to address new technologies being deployed
  • Demonstrated ability to work effectively in a challenging environment
  • Strong oral and written communications skills
  • Strong analytical and problem-solving skills
  • CEH, CISSP, SANS, and other security related certifications a plus

What We Offer

This is an exciting time in TransUnion’s history. With investments in our people, technology and new business markets, we are redefining the role and purpose of a credit bureau. We are acquiring new businesses, launching new products, and expanding our services to businesses and consumers worldwide.

The future has never looked brighter for our associates. We work hard to offer our team members meaningful work experiences to promote professional growth, and to provide an enjoyable place to work with competitive benefits, a healthy work/life balance, and a friendly, casual culture.

Who We Are

Information is a powerful thing. At TransUnion, we are dedicated to finding innovative ways information can be used to help people make better and smarter decisions. As a trusted provider of global information solutions, our mission is to help people around the world access the opportunities that lead to a higher quality of life, by helping organizations optimize their risk-based decisions and enabling consumers to understand and manage their personal information. Because when people have access to more complete and multidimensional information, they can make more informed decisions and achieve great things.

We are an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, age, disability status, veteran status, marital status, citizenship status, sexual orientation, gender identity or any other characteristic protected by law.



Work Locations: GT – Information Security

Job Type: Day Job

The post Cyber Security Vulnerability Intelligence Analyst appeared first on National Cyber Security Ventures.

View full post on National Cyber Security Ventures

Business Intelligence Analyst

Source: National Cyber Security – Produced By Gregory Evans

Business Intelligence Analyst
Job Details

Corporate – Austin, TX
Full Time
4 Year Degree
Silvercar is looking for a motivated analyst with a get-it-done attitude to join our growing business intelligence team. As a BI analyst, you will be responsible for leveraging the vast amounts of data at Silvercar and enabling business users to access the data needed to drive business decisions. You will work directly with the head of the business intelligence team and support each of Silvercar’s business units with their BI, analytics, and reporting needs.


  • Drive BI use cases throughout the company
  • Provide reliable access to high quality data through dashboards and reports
  • Work with business stakeholders across the organization to gather and analyze BI requirements
  • Provide technical leadership for and hands on experience in BI, analytics, ETL, data warehousing and reporting
  • Architect an ETL process using Treasure Data by importing data from various sources (transactional databases, business systems, and flat files), transforming data using SQL, and outputting data into BI systems for analysis and visualization
  • BA/BS in Mathematics, Computer Science, Engineering or other quantitative field
  • 3+ years of experience working directly with BI tools like Looker or Treasure Data
  • Advanced SQL skills
  • Solid spreadsheet skills (Excel, Google Sheets, etc.)
  • Experience with ETL processes, data warehousing or building out data pipelines
  • Ability to work with business stakeholders to define BI requirements
  • Ability to act as a project manager and collaboratively work with business stakeholders to define and develop BI use cases
  • Maintains a positive attitude and shares Silvercar’s values
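The ETL responsibility described above can be sketched in miniature. This example uses Python’s built-in sqlite3 module as a stand-in for the Treasure Data pipeline the posting names: import raw rows from a transactional source, transform them with SQL, and output an aggregate table a BI tool could visualize. The table and rental data are invented.

```python
import sqlite3

# Extract: load raw transactional rows into a staging table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rentals (city TEXT, revenue REAL)")
conn.executemany(
    "INSERT INTO rentals VALUES (?, ?)",
    [("Austin", 120.0), ("Austin", 80.0), ("Dallas", 95.0)],
)

# Transform + load: aggregate with SQL into a reporting table
# that a dashboard or BI tool can read directly.
conn.execute("""
    CREATE TABLE revenue_by_city AS
    SELECT city, SUM(revenue) AS total, COUNT(*) AS rentals
    FROM rentals
    GROUP BY city
    ORDER BY city
""")

for row in conn.execute("SELECT * FROM revenue_by_city"):
    print(row)
```

A production pipeline adds incremental loads, schema management, and scheduling, but the extract–transform–load shape is the same.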

The post Business Intelligence Analyst appeared first on National Cyber Security Ventures.

View full post on National Cyber Security Ventures

Cyber Intelligence Analyst – Network Defender (Entry-Level)

Source: National Cyber Security – Produced By Gregory Evans

Job Description: Lockheed Martin’s (LM) Cyber Intelligence Analysts play a crucial role supporting the LM Computer Incident Response Team (LM-CIRT). This Cyber Intelligence Analyst position is in the Enterprise Business Services’ (EBS) Corporate Information Security (CIS) organization. This Analyst will be physically located in […]

View full post on | Can You Be Hacked?