Mega-big online gaming company Ubisoft, maker of mega-hit games including Assassin’s Creed, Far Cry, Just Dance and Tom Clancy’s Rainbow Six Siege (R6S), is suing the operators of four DDoS-for-hire sites whose attacks have been launched against its Rainbow Six servers.
These guys aren’t just launching attacks that kick all players on a targeted server out of a game, or degrade the game performance down to sludge, Ubisoft alleges. They also allegedly went so far as to throw up a bogus domain seizure notice on one of their sites, claiming that the domain had been seized by “Microsoft Inc. and Ubisoft Entertainment” pursuant to a fictional “Operation(D)DoS OFF”, according to the complaint (posted courtesy of Polygon) that Ubisoft filed on Thursday in the US District Court for the Northern District of California.
Ubisoft says the bogus seizure notice was part of the operators’ attempts to rub out their tracks:
Defendants are well aware of the harm that the DDoS Services and DDoS Attacks cause to Ubisoft. Indeed, knowing that this lawsuit was imminent, Defendants have hastily sought to conceal evidence concerning their involvement.
It’s not just alleged DDoS-for-hire operators who knew this lawsuit was coming. Everybody in the gaming world knew. Ubisoft picked up on an increase in DDoS attacks in September 2019, banned the worst offenders, and said that it was talking to its legal team about legal action.
Last week, Ubisoft filed the complaint against five people whom it thinks run a network of four distributed-denial-of-service (DDoS)-for-hire services via various domain names and websites – SNG.one, R6S.support, r6ddos.com, and (could they possibly be more redundant?) stressed-stresser-stressing-stressers.com – hiding behind various anonymous online aliases to do so.
The defendants: Dennis Kruk (based in Germany), Maximilian Kuehl (Germany), Kelvin Uttih (Nigeria), an individual identified as B.R. (the Netherlands), and an individual identified only by their email address: email@example.com.
Booter who, now?
Stressers – also known as booters or DDoS-for-hire – are publicly available, web-based services that launch server-clogger-upper attacks for a small fee or, sometimes, none at all.
As befits the “stresser this” and “stresser that” brand names for a lot of these services – besides the stresser-stressy-stress-o-matic name mentioned in the complaint, such services have included ExoStresser, QuezStresser, Betabooter, Databooter, Instabooter, Polystress, and Zstress – DDoS-for-hire sites sell high-bandwidth internet attack services, sometimes under the guise of “stress testing.” SNG.ONE does the same: its site describes it as a “penetration testing service.”
DDoS attacks are blunt instruments that work by overwhelming targeted sites with so much traffic that nobody can reach them. They can be used to render competitor or enemy websites temporarily inoperable out of malice, lulz or profit: as in, some attackers extort site owners into paying for attacks to stop.
One example is Lizard Squad, which, until its operators were busted in 2016, rented out its LizardStresser attack service. LizardStresser was given a dose of its own medicine when it was hacked in 2015.
You might remember Lizard Squad as the Grinch who ruined gamers’ Christmas with a DDoS against the servers that power PlayStation and Xbox consoles – an attack it carried out for our own good.
For our own good, as in, the attackers didn’t feel bad: some kids would just have to spend time with their families instead of playing games, one of them said at the time.
These services, in other words, are used a lot in the online gaming world. Booter-based DDoS attack tools offer a low barrier to entry for users looking to engage in cybercrime. Indeed, hiring a service to paralyze your enemies’, your competition’s and/or your targets’ sites is as easy as handing over the money, no technical skill required… nor much money.
Chump change for cheaters
In April 2018, when the world’s largest DDoS-for-hire site – Webstresser.org – got busted, we got a look at the paltry sums the crooks were being charged for unleashing mayhem. According to Webstresser’s pricing table, archived before the site was taken down, memberships cost $18.99/month for the “bronze” level and $49.99/month for a “platinum” service.
According to Ubisoft’s suit, the defendants sell subscriptions in tiers that include Starter, Advanced and “Full Time B00ter.” Pricing starts at 10 Euros (about USD $11.11) per month and goes up to 270 Euros (about USD $299.85) for “lifetime” access to a server that dishes out DDoS attacks.
Besides R6S, the complaint included a screenshot that also showed Fortnite, FIFA 20, and Call of Duty: Modern Warfare 4 as potential targets.
There are a whole lot of DDoS-for-hire services out there, but the ones named in the complaint are specifically aimed at Ubisoft games. The operators of the services not only named their offerings using Rainbow Six Siege references; they’ve also “gone out of their way” to taunt Ubisoft support, the complaint notes.
For example, the complaint included a screenshot of a tweet that mocked Ubisoft’s security efforts, including the company’s efforts to ban users of the DDoS services.
As Polygon reports, DDoS attacks are the tools of cheaters.
Cheating players use the attacks to create lag, slow the matches down and frustrate legitimate players into quitting. Ordinarily, quitting a match earns the quitter a penalty and hands the remaining players ranked points without their having to do anything.
Ubisoft asked the court to shut down the alleged cheaters’ websites and to award damages and fees.
SNG.ONE hasn’t responded to media inquiries.
The post Ubisoft sues DDoS-for-hire operators for ruining game play – Naked Security appeared first on National Cyber Security.
View full post on National Cyber Security
Microsoft has today announced a data breach that affected one of its customer databases.
The blog article, entitled Access Misconfiguration for Customer Support Databases, admits that between 05 December 2019 and 31 December 2019, a database used for “support case analytics” was effectively visible from the cloud to the world.
Microsoft didn’t give details of how big the database was. However, consumer website Comparitech, which says it discovered the unsecured data online, claims it was on the order of 250 million records containing:
…logs of conversations between Microsoft support agents and customers from all over the world, spanning a 14-year period from 2005 to December 2019.
According to Comparitech, that same data was accessible on five Elasticsearch servers.
The company informed Microsoft, and Microsoft quickly secured the data.
Microsoft’s official statement says that “the vast majority of records were cleared of personal information,” meaning that it used automated tools to look for and remove private data.
However, some private data that was supposed to be redacted was missed and remained visible in the exposed information.
Microsoft didn’t say what type of personal information was involved, or which data fields ended up un-anonymised.
It did, however, give one example of data that would have been left behind: email addresses with spaces added by mistake were not recognised as personal data and therefore escaped anonymisation.
So if your email address were recorded as “firstname.lastname@example.org” your data would have been converted into a harmless form, whereas “name[space]@example.com” (an easy mistake for a support staffer to make when capturing data) would have been left alone.
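To see how a stray space can defeat automated redaction, here’s a minimal sketch in Python. The regex and the `redact` function are illustrative assumptions on our part, not Microsoft’s actual tooling: any pattern that requires characters to sit directly against the “@” sign will simply fail to match an address typed with a space in it, so the address escapes redaction entirely.

```python
import re

# Assumed pattern for illustration: it requires at least one local-part
# character immediately before the "@", so "name @example.com" never matches.
EMAIL = re.compile(r'\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b')

def redact(text):
    # Replace anything that looks like an email address with a marker.
    return EMAIL.sub('[REDACTED]', text)

print(redact('contact: firstname.lastname@example.org'))  # address is removed
print(redact('contact: name @example.com'))               # stray space: address survives intact
```

The fix is not a cleverer regex so much as normalising the data (for example, stripping spaces around “@”) before the redaction pass runs.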
Microsoft has promised to notify anyone whose data was inadvertently exposed in this way, but didn’t say what percentage of all records were affected.
What to do?
We don’t know how many people were affected or exactly what personal data was opened up for those users.
We also don’t know who else, besides Comparitech, may have noticed in the three weeks it was exposed, although Microsoft says that it “found no malicious use”.
We assume that if you don’t hear from Microsoft, even if you did contact support during the 2005 to 2019 period, then either your data wasn’t in the exposed database, or there wasn’t actually enough in the leaked database to allow anyone, including Microsoft itself, to identify you.
It’s nevertheless possible that crooks will contact you claiming that you *were* in the breach.
They might urge you to take steps to “fix” the problem, such as clicking on a link and logging in “for security reasons”, or to “confirm your account”, or on some other pretext.
Remember: if ever you receive a security alert email, whether you think it is legitimate or not, avoid clicking on any links, calling any numbers or taking any online actions demanded in the email.
Find your own way to the site where you would usually log in, and stay one step ahead of phishing emails!
The post Big Microsoft data breach – 250 million records exposed – Naked Security appeared first on National Cyber Security.
Criminals have been caught trying to sneak a malicious package on to the popular Node.js platform npm (Node Package Manager).
The problem package, 1337qq-js, was uploaded to npm on 31 December, after which it was downloaded at least 32 times according to figures from npm-stat.
According to a security advisory announcing its removal, the package’s suspicious behaviour was first noticed by Microsoft’s Vulnerability Research team, which reported it to npm on 13 January 2020:
The package exfiltrates sensitive information through install scripts. It targets UNIX systems.
The data it steals includes:
- Environment variables
- Running processes
- uname -a
- npmrc file
Any of these could lead to trouble, especially the theft of environment variables which can include API tokens and, in some cases, hardcoded passwords.
Anyone unlucky enough to have downloaded this will need to rotate those as a matter of urgency in addition to de-installing 1337qq-js itself.
What to do
The offending versions of the package are versions 1.0.9 to 1.0.11 inclusive.
The advice is to check for dependencies by generating a report using the npm audit command from the command line. This alerts admins to packages known to be malevolent as well as any other security issues that need addressing. In a perfect world, an audit will return this:
No known vulnerabilities found (x packages audited).
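Besides `npm audit`, you can check whether a known-bad package appears in a project’s manifest directly. Here’s a minimal sketch in Python (the manifest below is made up for illustration; a real check should also cover transitive dependencies, e.g. via `npm ls 1337qq-js`):

```python
import json

# A made-up package.json manifest for illustration only.
MANIFEST = '''
{
  "name": "example-app",
  "dependencies": {
    "left-pad": "^1.3.0",
    "1337qq-js": "1.0.10"
  }
}
'''

def uses_package(manifest_text, bad_name):
    # Scan the direct dependency sections for a known-bad package name.
    manifest = json.loads(manifest_text)
    for section in ('dependencies', 'devDependencies'):
        if bad_name in manifest.get(section, {}):
            return True
    return False

print(uses_package(MANIFEST, '1337qq-js'))  # True: uninstall it and rotate any exposed secrets
```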
Malicious npm packages, particularly ones installing backdoors, have become a recurring theme in the last year or two.
A good example was last June’s targeting of the Agama cryptocurrency wallet. The thinking behind this attack was simple – upload what appears to be a useful package, wait until the specific target starts using it in their ‘build chain’, and then update the package with a malicious payload.
This kind of ruse puts a lot of pressure on npm’s security testers to spot malevolence before any damage is done. In this case, the attack was foiled.
There have been at least four other incidents with malicious packages trying to sneak backdoor attacks on npm users since 2017.
Instances of attackers compromising libraries and packages in order to backdoor cryptocurrency apps are also on the increase.
Today’s applications are assembled from different pieces of software in a format that resembles a supply chain. Clearly, as with physical supply chains, this brings with it new risks.
The post Malicious npm package taken down after Microsoft warning – Naked Security appeared first on National Cyber Security.
For years, organisations have been using a common tactic called the warrant canary to warn people that the government has secretly demanded access to their private information. Now, a proposed standard could make this tool easier to use.
When passed in 2001, the US Patriot Act enabled authorities to access personal information stored by a service provider about US citizens. It also let them issue gag orders that would prevent the organisation from telling anyone about it. It meant that the government could access an individual’s private information without that person knowing.
Companies like ISPs and cloud service providers want their users to know whether the government is asking for this information. This is where the warrant canary comes in. First conceived by Steve Schear in 2002, shortly after the Patriot Act came into effect, a warrant canary is a way of warning people that the organisation holding their data has received a subpoena.
Instead of telling people that it has been served with a subpoena, the organisation stops telling them that it hasn’t. It displays a public statement online that it only changes if the authorities serve it with a warrant. As long as the statement stays unchanged, individuals know that their information is safe. When the statement changes or disappears, they can infer that all is not well without the organisation explicitly saying so. Here’s an example of one.
A warrant canary can be as simple as a statement that the service provider has never received a warrant. The problem is that those statements aren’t standardised, which makes it difficult for people to interpret them. How can you be sure that a warrant canary means what you think it means? If it disappears, does that mean that the service provider received a warrant, or did someone just forget to include it somewhere? Does the canary’s death indicate a sinister problem, or did it just die of natural causes? This isn’t idle speculation – warrant canary changes like SpiderOak’s have confused users in the past.
The other problem is that these statements are designed to be read by people, which makes them difficult to track and monitor at scale. That’s what the warrant canary standard would solve.
The proposed standard surfaced on Github on Tuesday. It was created by GitHub user carrotcypher, inspired by the work of organisations like the Calyx Institute (a technology non-profit that develops free privacy software) and the now-defunct Canary Watch, a project from the Electronic Frontier Foundation (EFF), Freedom of the Press Foundation, NYU Law, Calyx and the Berkman Center. Canary Watch listed and tracked warrant canaries. When it shut down Canary Watch, the EFF explained:
In our time working with Canary Watch we have seen many canaries go away and come back, fail to be updated, or disappear altogether along with the website that was hosting it. Until the gag orders accompanying national security requests are struck down as unconstitutional, there is no way to know for certain whether a canary change is a true indicator. Instead the reader is forced to rely on speculation and circumstantial evidence to decide what the meaning of a missing or changed canary is.
Canarytail seeks to change that. As it explains on its Github readme.md page:
We seek to resolve those issues through a proper standardized model of generation, administration, and distribution, with variance allowed only inside the boundaries of a defined protocol.
Instead of some arbitrary language on a website, the warrant canary standard would be a file created using JSON, a format notable for displaying data as a list of key:value pairs readable by both people and machines. The file would include 11 codes with a value of zero (false) or one (true). These codes include WAR for warrants, GAG for gag orders, and TRAP for trap and trace orders, along with another code for subpoenas, all of which have specific legal implications for an organisation and its users. If the value next to any of these keys is zero, the person or software reading the file can infer that none of the warnings have been triggered. If a code changes to one, it’s cause for concern.
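As a rough illustration, such a file might look like the sketch below, and machine-checking it is just a matter of scanning the codes for any non-zero value. Note that only WAR, GAG and TRAP are named above; the file layout and the `codes` key are our own guesses, not the actual canarytail schema:

```python
import json

# Hypothetical canary file in the spirit of the proposed standard.
CANARY = '''
{
  "codes": {
    "WAR": 0,
    "GAG": 0,
    "TRAP": 0
  }
}
'''

def triggered_codes(canary_text):
    # Return the codes whose value is 1, i.e. warnings that have fired.
    canary = json.loads(canary_text)
    return [code for code, value in canary['codes'].items() if value == 1]

print(triggered_codes(CANARY))  # an empty list means the canary is still alive
```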
The file also contains some other interesting codes, including DURESS, which indicates that the organisation is being coerced somehow, along with codes indicating that it has been raided. There is also a special code indicating a Seppuku pledge, which is a promise that an organisation will shut down and destroy all its data if a malicious entity takes control of it.
In a smart bit of cryptographic manoeuvring, a canary file under the proposed standard must be cryptographically signed, with the signature verifiable against the organisation’s published public key, and includes an expiry date. It uses a block hash from the bitcoin blockchain to verify the freshness of the digital signature. As another safeguard, it includes a PANICKEY field with another public key. If the file is signed with the key corresponding to that field, people can interpret it as a kill switch, causing the warrant canary to fail immediately. That’s useful if an organisation suddenly gets raided and can’t afford to wait until the current warrant canary file expires.
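Freshness checking could then be as simple as comparing the canary’s expiry date against the current time. In this sketch the `expiry` field name and ISO 8601 format are assumptions for illustration; a real verifier would also check the signature and the bitcoin block hash, which this deliberately omits:

```python
import json
from datetime import datetime, timezone

# Hypothetical canary fragment; "expiry" is an assumed field name (ISO 8601).
CANARY = '{"expiry": "2020-03-01T00:00:00+00:00"}'

def is_expired(canary_text, now=None):
    # A canary past its expiry date should be treated as a failed canary,
    # just as if one of its warning codes had been triggered.
    canary = json.loads(canary_text)
    expiry = datetime.fromisoformat(canary['expiry'])
    now = now or datetime.now(timezone.utc)
    return now > expiry

# Checked on 2020-02-01 the canary is fresh; checked a month after expiry, it has failed.
print(is_expired(CANARY, datetime(2020, 2, 1, tzinfo=timezone.utc)))  # False
print(is_expired(CANARY, datetime(2020, 4, 1, tzinfo=timezone.utc)))  # True
```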
A standard like this could help revive warrant canaries by making them easier to track and more deterministic. In the meantime, plenty of non-standard warrant canaries have disappeared, including Reddit’s and Apple’s.
The post Proposed standard would make warrant canaries machine-readable – Naked Security appeared first on National Cyber Security.