Penn sophomores launch Tinder-like app to help students make friends remotely
Berri was co-founded by, top row (from left to right): Justin Ma and James Tseng; middle row (from left to right): Angelina Pan and Patrick Liu; bottom row (from left […] View full post on National Cyber Security
Canberrans launch Fluttr, the app connecting people online and in person | The Canberra Times
It’s not unusual to go out and see people on the dancefloor or sitting at a bar with friends, and still be on their phones. It’s something […]
Two and a half years after Ikea first announced its collaboration with Swedish design collective Teenage Engineering, the products are going on sale. The Frekvens range (which translates to “frequency”) will be rolling out in US stores from today through February 1st, according to Ikea.
Teenage Engineering is best known for its music products like the iconic OP-1 synthesizer, so it’s no surprise that the Frekvens collection sees Ikea continue its expansion into the world of home audio. There are two speakers in the range, a $69.99 model and a more portable $19.99 option with a belt clip, along with a $149 subwoofer combo and a $10 light-up speaker base.
“The items got narrowed down towards sound,” Teenage Engineering founder Jesper Kouthoofd says. “What we said was, ‘Why do you have to hide speakers? They are furniture in their own right.’ Sound should not be hidden. So when you start to build a modular system and add fronts and accessories on, it’s a more fun way to think about sound.”
The rest of the range includes more traditional Ikea products and is designed to help you host a stylishly minimalist home party. There are various lights, furniture, crockery, and other somewhat incongruous items like a cajón and a reflective raincoat. Many of the lights and speakers can be connected together.
“We know that for younger people spontaneity is key,” Ikea creative design leader Michael Nikolic says. “The idea of gathering some friends could become a reality in minutes. What is needed to have a good party at home? That’s what we wanted to investigate with Frekvens. Together with Teenage Engineering, we have explored the possibilities of taking the party with us.”
Here are some selected items from the collection:
The post Ikea and Teenage Engineering launch co-developed speakers and party products appeared first on National Cyber Security.
The Police Digital Security Centre (PDSC) will launch a certification scheme for cyber security companies, giving them the standing to vet new products launched by small companies and startups.
From next month, established cyber security companies can apply to be a part of the new accreditation scheme set up by the security wing of the police force and devised in partnership with the British Standards Institute (BSI).
The ‘Digital Security Innovator’ certification will be for the benefit of smaller companies seeking to make informed decisions when choosing a cyber security vendor to examine their new products or services.
This cyber security assessment includes a look at elements such as design concepts. These would, in turn, allow a startup to secure investment, or support applications to external incubator or mentorship programmes run by industry bodies.
There are two awards on offer, with each giving cyber security companies police/BSI accreditation for 12 months. Firms that apply to receive either award would have to demonstrate their cyber security tools meet established industry standards.
“The awards aren’t just about the product or service,” said techUK’s programme assistant for defence, cyber and justice & emergency services, Charlie Wyatt, and its programme manager for defence and cyber, Dan Patefield.
“From the experience of working with thousands of SMEs over the past few years, trust is essential in building relationships with vendors that keep SMEs safe from the most common types of cyber-crime.
“Therefore, in addition to reference checks, the PDSC require all customer-facing staff to undergo police vetting.”
PDSC was established in 2015 by the Mayor of London’s office, in collaboration with the Met Police and the City of London Police, in order to help small and medium-sized businesses reduce their vulnerability to cyber crime.
This was branded the London Digital Security Centre (LDSC) until 2019, when it took on a national remit and began working with industry partners, the government, academia, and other branches of law enforcement.
This launch adds to a string of cyber security training and certification programmes aimed at giving cyber security professionals and organisations the tools and expertise to protect themselves against cyber threats.
The PDSC and BSI certification scheme will formally launch on 17 February at an event hosted by trade association techUK, where companies will be given further details about how to apply for the awards on offer.
The post Police to launch cyber security certification scheme appeared first on National Cyber Security.
5G & IoT: Real-World Rollouts Launch New Opportunities and Security Threats
This e-book examines what service providers need to know as commercial rollouts of 5G technology begin in 2020.
The post 5G & IoT: Real-World Rollouts Launch New Opportunities and Security Threats appeared first on Radware Blog.
The post 5G & IoT: Real-World Rollouts Launch New Opportunities and Security Threats appeared first on Security Boulevard.
Machine learning algorithms will improve security solutions, helping human analysts triage threats and close vulnerabilities quicker. But they are also going to help threat actors launch bigger, more complex attacks.
Defined as the “ability for (computers) to learn without being explicitly programmed,” machine learning is huge news for the information security industry. It’s a technology that potentially can help security analysts with everything from malware and log analysis to possibly identifying and closing vulnerabilities earlier. Perhaps too, it could improve endpoint security, automate repetitive tasks, and even reduce the likelihood of attacks resulting in data exfiltration.
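As a minimal illustration of the triage idea, the sketch below scores log events by their distance from a baseline learned on benign traffic, so the most anomalous events surface first. The feature names, values, and scoring rule are invented for this example, not taken from any particular product.

```python
# Hypothetical sketch of ML-assisted triage: score incoming log events by
# distance from a baseline learned on benign traffic, so analysts review
# the most anomalous events first. Feature names and values are invented.
from statistics import mean, stdev

def fit_baseline(events):
    """Learn a per-feature (mean, stdev) profile from benign events."""
    return {k: (mean(e[k] for e in events), stdev(e[k] for e in events))
            for k in events[0]}

def anomaly_score(event, baseline):
    """Sum of absolute z-scores across features; higher = more suspicious."""
    return sum(abs(event[k] - mu) / (sigma or 1.0)
               for k, (mu, sigma) in baseline.items())

# Benign baseline: failed logins per hour and megabytes sent outbound
normal = [{"failed_logins": 1, "mb_out": 10},
          {"failed_logins": 0, "mb_out": 12},
          {"failed_logins": 2, "mb_out": 9},
          {"failed_logins": 1, "mb_out": 11}]
base = fit_baseline(normal)

suspect = {"failed_logins": 40, "mb_out": 900}  # brute force plus exfiltration
print(anomaly_score(suspect, base) > anomaly_score(normal[0], base))  # True
```

A real system would use richer features and a proper model, but the ranking step is the same: analysts start from the top of the score list.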
Naturally, this has led to the belief that these intelligent security solutions will spot – and stop – the next WannaCry attack much faster than traditional, legacy tools. “It’s still a nascent field, but it is clearly the way to go in the future. Artificial intelligence and machine learning will dramatically change how security is done,” said Jack Gold, president and principal analyst at J.Gold Associates, when speaking recently to CSO Online.
“With the fast-moving explosion of data and apps, there is really no other way to do security than through the use of automated systems built on AI to analyze the network traffic and user interactions.”
The problem is, hackers know this and are expected to build their own AI and machine learning tools to launch attacks.
How are cyber-criminals using machine learning?
Criminals – increasingly organized and offering wide-ranging services on the dark web – are ultimately innovating faster than security defenses can keep up. This is concerning given the untapped potential of technologies like machine and deep learning.
“We must recognize that although technologies such as machine learning, deep learning, and AI will be cornerstones of tomorrow’s cyber defenses, our adversaries are working just as furiously to implement and innovate around them,” said Steve Grobman, chief technology officer at McAfee, in recent comments to the media. “As is so often the case in cybersecurity, human intelligence amplified by technology will be the winning factor in the arms race between attackers and defenders.”
This has naturally led to fears that this is AI vs AI, Terminator style. Nick Savvides, CTO at Symantec, says this is “the first year where we will see AI versus AI in a cybersecurity context,” with attackers more able to effectively explore compromised networks, and this clearly puts the onus on security vendors to build more automated and intelligent solutions.
“Autonomous response is the future of cybersecurity,” stressed Darktrace’s director of technology Dave Palmer in conversation with this writer late last year. “Algorithms that can take intelligent and targeted remedial action, slowing down or even stopping in-progress attacks, while still allowing normal business activity to continue as usual.”
Machine learning-based attacks in the wild may remain largely unheard of at this time, but some techniques are already being leveraged by criminal groups.
1. Increasingly evasive malware
Malware creation is largely a manual process for cyber criminals. They write scripts to make up computer viruses and trojans, and leverage rootkits, password scrapers and other tools to aid distribution and execution.
But what if they could speed up this process? Is there a way machine learning could help create malware?
The first known example of using machine learning for malware creation was presented in 2017 in a paper entitled “Generating Adversarial Malware Examples for Black-Box Attacks Based on GAN.” In the report, the authors revealed how they built a generative adversarial network (GAN) based algorithm to generate adversarial malware samples that, critically, were able to bypass machine-learning-based detection systems.
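To make the evasion idea concrete without a full GAN, here is a toy sketch in which benign-looking API features are added to a malicious sample until a stand-in detector stops flagging it. The detector logic, API names, and threshold are all invented for illustration; the actual paper trains a generator network against a substitute detector rather than using this greedy padding.

```python
# Toy illustration of feature-space evasion (NOT the paper's GAN): pad a
# malicious sample's feature set with benign-looking API imports until a
# stand-in detector's score drops below its alert threshold. The detector
# logic, API names, and threshold are all invented for this sketch.

def detector_score(features):
    """Stand-in detector: fraction of the sample's features that are
    known-bad API calls, so extra benign features dilute the score."""
    bad_apis = {"CreateRemoteThread", "WriteProcessMemory", "VirtualAllocEx"}
    return len(bad_apis & features) / len(features)

def evade(sample, benign_pool, threshold=0.2):
    """Greedily append benign features until the score falls under threshold."""
    adversarial = set(sample)
    for feature in benign_pool:
        if detector_score(adversarial) < threshold:
            break
        adversarial.add(feature)
    return adversarial

malware = {"CreateRemoteThread", "WriteProcessMemory"}
benign = ["GetSystemTime", "LoadLibraryA", "RegOpenKeyExA", "CreateFileA",
          "ReadFile", "CloseHandle", "GetModuleHandleA", "Sleep",
          "GetTickCount", "HeapAlloc"]

evasive = evade(malware, benign)
print(detector_score(malware))        # 1.0  -> flagged
print(detector_score(evasive) < 0.2)  # True -> no longer flagged
```

The point the paper makes is that this search can be automated and learned, so evasive variants are produced at machine speed rather than by hand.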
In another example, at the 2017 DEFCON conference, security company Endgame revealed how it used Elon Musk’s OpenAI framework to create customized malware that security engines were unable to detect. Endgame’s research was based on taking binaries that appeared malicious and, by changing a few parts, making that code appear benign and trustworthy to antivirus engines.
Other researchers, meanwhile, have predicted machine learning could ultimately be used to “modify code on the fly based on how and what has been detected in the lab,” an extension on polymorphic malware.
2. Smart botnets for scalable attacks
Fortinet believes that 2018 will be the year of self-learning ‘hivenets’ and ‘swarmbots’, in essence marking the belief that ‘intelligent’ IoT devices can be commanded to attack vulnerable systems at scale. “They will be capable of talking to each other and taking action based off of local intelligence that is shared,” said Derek Manky, global security strategist, Fortinet. “In addition, zombies will become smart, acting on commands without the botnet herder instructing them to do so. As a result, hivenets will be able to grow exponentially as swarms, widening their ability to simultaneously attack multiple victims and significantly impede mitigation and response.”
Interestingly, Manky says these attacks are not yet using swarm technology, which could enable these hivenets to self-learn from their past behavior. A subfield of AI, swarm technology is defined as the “collective behavior of decentralized, self-organized systems, natural or artificial” and is today already used in drones and fledgling robotics devices. (Editor’s note: Though futuristic fiction, some can draw conclusions from the criminal possibilities of swarm technology from Black Mirror’s Hated in The Nation, where thousands of automated bees are compromised for surveillance and physical attacks.)
3. Advanced spear phishing emails get smarter
One of the more obvious applications of adversarial machine learning is using algorithms like text-to-speech, speech recognition, and natural language processing (NLP) for smarter social engineering. After all, through recurring neural networks, you can already teach such software writing styles, so in theory phishing emails could become more sophisticated and believable.
In particular, machine learning could facilitate advanced spear phishing emails targeted at high-profile figures, while automating the process as a whole. Systems could be trained on genuine emails and learn to produce something that looks and reads convincing.
In McAfee Labs’ predictions for 2017, the firm said that criminals would increasingly look to use machine learning to analyze massive quantities of stolen records to identify potential victims and build contextually detailed emails that would very effectively target these individuals.
Furthermore, at Black Hat USA 2016, John Seymour and Philip Tully presented a paper titled “Weaponizing data science for social engineering: Automated E2E spear phishing on Twitter,” which demonstrated a recurrent neural network that learned to tweet phishing posts at certain users. In the paper, the pair showed that the SNAP_R neural network, which was trained on spear phishing pentesting data, was dynamically seeded with topics taken from the timeline posts of target users (as well as the users they tweet or follow) to make a click-through more likely.
The system was remarkably effective. In tests involving 90 users, the framework delivered a success rate varying between 30 and 60 percent, a considerable improvement on manual spear phishing and bulk phishing results.
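The topic-seeding step can be sketched with a trivial stand-in for the neural model: mine a target's recent posts for frequent terms and slot them into a lure template. The timeline posts, stopword list, and template below are all invented for illustration and bear no relation to SNAP_R's actual implementation.

```python
# Illustrative sketch of topic seeding for a phishing lure, with a simple
# frequency count standing in for SNAP_R's recurrent network. The timeline
# posts and the lure template are invented.
import re
from collections import Counter

STOPWORDS = {"the", "a", "at", "to", "my", "for", "is", "on", "and", "in"}

def top_topics(posts, n=2):
    """Most frequent non-stopword terms across the target's recent posts."""
    words = re.findall(r"[a-z]+", " ".join(posts).lower())
    return [w for w, _ in Counter(w for w in words
                                  if w not in STOPWORDS).most_common(n)]

def lure(posts):
    """Fill a template with the target's own topics to boost click-through."""
    first, second = top_topics(posts)
    return f"Saw this piece on {first} and {second} and thought of you: <link>"

timeline = ["Great marathon training run today",
            "Marathon prep: new running shoes",
            "Training schedule for the marathon is brutal"]
print(lure(timeline))
```

The neural version learns to generate whole posts in the target's register, but the lever is the same: personalization harvested automatically from public data.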
4. Threat intelligence goes haywire
Threat intelligence is arguably a mixed blessing when it comes to machine learning. On the one hand, it is universally accepted that, in an age of false positives, machine learning systems will help analysts to identify the real threats coming from multiple systems. “Applying machine learning delivers two significant gains in the domain of threat intelligence,” said Recorded Future CTO and co-founder Staffan Truvé in a recent whitepaper.
“First, the processing and structuring of such huge volumes of data, including analysis of the complex relationships within it, is a problem almost impossible to address with manpower alone. Augmenting the machine with a reasonably capable human means you’re more effectively armed than ever to reveal and respond to emerging threats,” Truvé wrote. “The second is automation — taking all these tasks, which we as humans can perform without a problem, and using the technology to scale up to a much larger volume than we could ever handle.”
However, there’s the belief, too, that criminals will adapt to simply overload those alerts once more. McAfee’s Grobman previously pointed to a technique known as “raising the noise floor.” A hacker will use this technique to bombard an environment in a way to generate a lot of false positives to common machine learning models. Once a target recalibrates its system to filter out the false alarms, the attacker can launch a real attack that can get by the machine learning system.
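A minimal sketch of that noise-floor effect, with invented detector scores on a 0-100 scale: benign events crafted to score just above the alert threshold push the defender to recalibrate upward, and a genuine attack then slips under the new cutoff.

```python
# Hypothetical sketch of "raising the noise floor". Scores are invented
# detector outputs on a 0-100 scale.

def recalibrate(threshold, observed, max_alert_rate=0.05, step=5):
    """Defender raises the threshold until at most 5% of events alert."""
    while sum(s >= threshold for s in observed) / len(observed) > max_alert_rate:
        threshold += step
    return threshold

baseline = 60                        # original alert threshold
noise = [65] * 95 + [30] * 5         # attacker-injected benign events at 65

new_threshold = recalibrate(baseline, noise)
real_attack = 68                     # the actual attack's detector score

print(new_threshold)                 # 70: pushed up by the flood
print(real_attack >= baseline)       # True: old threshold would have caught it
print(real_attack >= new_threshold)  # False: recalibrated model misses it
```

The fix is not a static threshold at all: defenders need to notice that the score distribution itself has shifted, which is exactly what the attacker is counting on them not doing.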
5. Unauthorized access
An early example of machine learning for security attacks was published back in 2012, by researchers Claudia Cruz, Fernando Uceda, and Leobardo Reyes. They used support vector machines (SVM) to break a system running on reCAPTCHA images with an accuracy of 82 percent. All captcha mechanisms were subsequently improved, only for the researchers to use deep learning to break the CAPTCHA once more. In 2016, an article was published that detailed how to break simple-captcha with 92 percent accuracy using deep learning.
Separately, the “I am Robot” research presented at last year’s Black Hat revealed how researchers broke the latest semantic image CAPTCHA and compared various machine learning algorithms. The paper claimed 98 percent accuracy in breaking Google’s reCAPTCHA.
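As a toy version of that recognition pipeline, assume the CAPTCHA characters have already been segmented into small bitmaps; a nearest-template classifier then stands in for the SVM and deep models of the cited work. The 3x3 "glyphs" below are invented for illustration.

```python
# Toy stand-in for CAPTCHA character recognition: classify a segmented
# glyph by Hamming distance to stored templates. A nearest-template rule
# replaces the SVM/deep models; the 3x3 bitmaps are invented.

TEMPLATES = {            # flattened 3x3 training bitmaps, 1 = ink
    "T": [1, 1, 1,
          0, 1, 0,
          0, 1, 0],
    "L": [1, 0, 0,
          1, 0, 0,
          1, 1, 1],
}

def classify(pixels):
    """Return the label whose template differs in the fewest pixels."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(TEMPLATES, key=lambda label: hamming(TEMPLATES[label], pixels))

noisy_t = [1, 1, 1,
           0, 1, 0,
           0, 0, 0]       # a "T" with one ink pixel missing
print(classify(noisy_t))  # T
```

Real CAPTCHAs add distortion, overlap, and noise precisely to defeat this kind of matching, which is why the published attacks needed learned features rather than fixed templates.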
6. Poisoning the machine learning engine
A far simpler, yet still effective, technique is poisoning the machine learning engine used to detect malware, rendering it ineffective, much as criminals have done with antivirus engines in the past. It sounds simple enough: the machine learning model learns from input data, and if that data pool is poisoned, the output is poisoned too. Researchers from New York University demonstrated how convolutional neural networks (CNNs) could be backdoored to produce these false (but controlled) results, including in CNN services such as those offered by Google, Microsoft, and AWS.
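A minimal sketch of the poisoning effect, assuming a toy one-feature detector that learns its threshold as the midpoint between the mean benign and mean malicious scores: flipping a few training labels shifts that threshold so a real malicious sample goes undetected. All numbers are invented.

```python
# Minimal sketch of training-set poisoning on a toy one-feature detector.
# Samples are (feature_value, label) pairs with label 1 = malicious; the
# learned rule and all numbers are invented for illustration.

def learn_threshold(samples):
    """Midpoint between the mean benign and mean malicious feature value."""
    benign = [x for x, label in samples if label == 0]
    malicious = [x for x, label in samples if label == 1]
    return (sum(benign) / len(benign) + sum(malicious) / len(malicious)) / 2

clean = [(1, 0), (2, 0), (3, 0), (8, 1), (9, 1), (10, 1)]
# Attacker flips two malicious labels to benign in the training pool
poisoned = [(1, 0), (2, 0), (3, 0), (8, 0), (9, 0), (10, 1)]

t_clean = learn_threshold(clean)        # (2 + 9) / 2 = 5.5
t_poisoned = learn_threshold(poisoned)  # (4.6 + 10) / 2 = 7.3

sample = 7                              # a real malicious sample's score
print(sample > t_clean)                 # True:  clean model flags it
print(sample > t_poisoned)              # False: poisoned model misses it
```

The NYU backdoor work is subtler, implanting triggers rather than flipping labels, but the root cause is identical: the model trusts its training data.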
View full post on National Cyber Security Ventures
North Korea has recently threatened to conduct its first test of an intercontinental ballistic missile (ICBM). Last week, media reports even cited “unnamed” South Korean officials stating a test of a previously unknown 2-stage ICBM “may be imminent.” While the …
The post Is the Kalma Ballistic Missile Test Site Ready for an ICBM Launch? appeared first on National Cyber Security Ventures.
No Bully and Police Athletic League Launch Pilot Program in New York with Seed Support from Tickle Water, IDT911 and Cybersecurity Expert Adam Levin
10 million students are bullied each year in the US and around 1 in 6 adolescents is the target of cyberbullying. (No Bully, US National surveys of crime and risky behaviors) Kids spend a lot of time online and their chances of experiencing
The post No Bully and Police Athletic League Launch Pilot Program in New York with Seed Support from Tickle Water, IDT911 and Cybersecurity Expert Adam Levin appeared first on National Cyber Security.
China is taking serious and drastic measures to ensure that its data remains secure from malicious hackers. It will soon become the first nation to launch a quantum communication satellite into space, when the rocket takes off in July. China specifically developed the satellite to help it securely send and receive data by […] View full post on AmIHackerProof.com | Can You Be Hacked?