Grindr is removing an “ethnicity filter” from its dating app as part of its support for the Black Lives Matter movement, the company announced on Monday.
The controversial feature, limited to those who stump up £12.99 a month for the premium version of the app, allows users to sort search results based on reported ethnicity, height, weight and other characteristics.
In a statement posted to Instagram, the company said: “We stand in solidarity with the #BlackLivesMatter movement and the hundreds of thousands of queer people of color who log in to our app every day.
“We will continue to fight racism on Grindr, both through dialogue with our community and a zero-tolerance policy for racism and hate speech on our platform. As part of this commitment, and based on your feedback, we have decided to remove the ethnicity filter from our next release.”
Grindr’s filter had come under intense criticism over the weekend after a now-deleted tweet from the company that read “Demand Justice. #blacklivesmatter”. Many condemned the company’s show of solidarity as hollow when taken alongside the existence of a feature that allows users to explicitly discriminate based on race.
The company has long maintained that the ethnicity filter was useful for minority users who wanted to find people like themselves, rather than a tool that enforced racism.
“We decided before we were ready to pull the plug on that, it was a conversation we wanted with our user base,” Grindr’s head of communications told the Guardian in 2018. “While I believe the ethnicity filter does promote racist behaviour in the app, other minority groups use the filter because they want to quickly find other members of their minority community.”
Grindr isn’t the only dating app which allows users to filter by race, but it is by far the most prominent. Racial discrimination on the app isn’t simply enforced algorithmically, either; a 2015 study of Australian users found that 96% had seen at least one profile that included some form of racial discrimination, ‘through language such as “Not attracted to Asians.”’ One in eight of those surveyed admitted they themselves included such language.
The announcement came on the first day of Pride month, Grindr noted. “We can still come together in the spirit of Pride, but Pride this year has an added responsibility, a shifted tone, and a new priority that will be reflected in our programming – support and solidarity for queer people of color and the #BlackLivesMatter movement.”
Ministers have been told they can no longer say there have been “no successful examples” of Russian disinformation affecting UK elections, after the apparent hacking of an NHS dossier seized on by Labour during the last campaign.
The dropping of the old line is the first official admission of the impact of Kremlin efforts to distort Britain’s political processes, and comes after three years of the government’s refusal to engage publicly with the threat.
Cabinet Office sources confirmed the position had been quietly changed while a security services investigation into the alleged hacking of the 451-page cache of emails from a special adviser’s personal email account concludes.
Boris Johnson and his predecessor as prime minister, Theresa May, have both appeared reluctant to discuss Kremlin disinformation, with Johnson refusing to allow a report on Russian infiltration in the UK to be published before the election.
Versions of the “no successful examples” statement were regularly deployed in response to allegations of Russian interference in the Brexit referendum, to the frustration of MPs who believed a full investigation was necessary.
Officials said the revised position about Russian interference was set out by Earl Howe, the deputy leader of the House of Lords, in a parliamentary answer earlier this year, when he was asked if there were plans to investigate interference by foreign governments in December’s election.
The peer said the government was determined to protect the integrity of the democratic process in the UK. “As you would expect, the government examines all aspects of the electoral process following an election, including foreign interference, and that work is ongoing,” he said.
Stephen Kinnock, a Labour MP, said the government was being slow in acknowledging the disinformation threat from Russia. “From the hacking of NHS emails to the St Petersburg troll factories and bot farms, it’s clear that the Kremlin is pursuing a deliberate strategy of online disinformation and manipulation that is undermining our democracy.”
Security sources said the Russian strategy of “hack and leak” and “disinformation and misinformation” – which first came to prominence with the hack of Democratic emails in the run-up to the 2016 US presidential election won by Donald Trump – was becoming widespread internationally.
Last month, the Foreign Office said Russia’s GRU spy agency had carried out a series of “large-scale, disruptive cyber-attacks” in Georgia “in an attempt to undermine Georgia’s sovereignty, to sow discord and disrupt the lives of ordinary Georgian people”.
But despite the strong words in support of an ally in the Caucasus, ministers had been reluctant to publicly call out any Russian disinformation efforts in the UK – and there has been little public acknowledgement of the NHS hack during the election, first reported by the Guardian.
The scale of the Russian threat will be examined in the long-awaited report on Kremlin infiltration into British politics from the independent intelligence and security committee, which cannot be published until Downing Street appoints a new set of members following the election.
Earlier this week, it emerged that among those in the frame were the error-prone former transport secretary Chris Grayling and recently sacked environment minister Theresa Villiers.
The NHS emails are believed to have been hacked from an adviser’s personal Gmail account, and were disseminated online via Reddit, under the headline “Great Britain is practically standing on her knees working on a trade agreement with the US”.
Initially ignored, the documents covering six rounds of UK-US trade talks were eventually picked up by Labour from the posting and produced during a dramatic press conference by Jeremy Corbyn, who said they showed the NHS was “on the table” in the negotiations.
Following an investigation, Reddit concluded “we believe this was part of a campaign that has been reported as originating from Russia” and said it bore the hallmarks of the earlier Secondary Infektion disinformation operation, which was exposed by Facebook in 2019.
Interoperability rules largely welcomed, but potential privacy and security issues must be addressed, experts warn
New rules giving patients better access to their medical data have been approved by the US Department of Health and Human Services (HHS) – but experts warn that security may not be entirely sewn up.
Currently, many electronic health record contracts contain provisions that either prevent or are perceived to prevent the sharing of information related to the records in use, such as screenshots or video.
From the beginning of next year, though, health plans doing business in Medicare, Medicaid, CHIP, and federal exchanges will be required to share patients’ health data.
Meanwhile, a new API will allow developers to create apps allowing patients to access their own data, as well as integrating a health plan’s information with their electronic health record (EHR).
“Delivering interoperability actually gives patients the ability to manage their healthcare the same way they manage their finances, travel, and every other component of their lives,” says Don Rucker, national coordinator for health information technology.
“This requires using modern computing standards and APIs that give patients access to their health information and give them the ability to use the tools they want to shop for and coordinate their own care on their smartphones.”
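The patient-access APIs mandated by the rules are built on the HL7 FHIR standard. As a rough sketch of what an app consuming such an API might do – the resource below is a minimal, invented FHIR R4 Patient, not a real API response – the returned JSON can be handled with nothing more than the standard library:

```python
import json

# A minimal, hypothetical FHIR R4 Patient resource of the kind a
# patient-access API might return (field names follow the FHIR spec;
# the values are invented for illustration).
patient_json = """
{
  "resourceType": "Patient",
  "id": "example-123",
  "name": [{"family": "Doe", "given": ["Jane"]}],
  "birthDate": "1980-04-02"
}
"""

def summarize_patient(raw):
    """Extract a human-readable summary from a FHIR Patient resource."""
    resource = json.loads(raw)
    if resource.get("resourceType") != "Patient":
        raise ValueError("expected a Patient resource")
    name = resource["name"][0]
    full_name = " ".join(name.get("given", []) + [name.get("family", "")])
    return {"name": full_name.strip(), "birthDate": resource.get("birthDate")}

print(summarize_patient(patient_json))
# → {'name': 'Jane Doe', 'birthDate': '1980-04-02'}
```

A real client would fetch such resources over an OAuth-protected HTTPS endpoint exposed by the health plan; the parsing step stays the same.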
Predatory apps and snake oil warning
The new rules are generally being welcomed – with reservations.
“I’m not sure diving in headfirst by giving patients apps to access their own healthcare records via mobile apps is a good idea,” says Paul Bischoff, privacy advocate for security research firm Comparitech.com.
“Patients might not know what they’re agreeing to when handing over permission to apps to access their health records. This could lead to predatory apps that leverage medical records to sell snake oil.”
Meanwhile, says Tim Mackey, principal security strategist with the Synopsys Cybersecurity Research Center, the nature of the US’s insurance-based healthcare system means that patients may need to be careful about the information they share.
“Given the sensitive nature of medical records, and the potential for a pre-existing condition to negatively influence future patient care, vetting of both app creators and medical data usage in care decisions are concerns,” he says.
“As consumers embrace apps as a proxy for physical identification and their mobile devices as a central store for their most sensitive data, both the security of those apps and the potential for compromise of a mobile device become increasing concerns.”
Much-needed security standard
According to HHS, similar apps already exist, in the form of Medicare Blue Button 2.0, which allows patients to securely connect their Medicare Part A, Part B and Part D claims and other data to apps and other tools.
More than 2,770 developers from over 1,100 organizations are working in the Medicare Blue Button 2.0 sandbox, it says, and 55 organizations have applications in production.
But, says David Jemmett, CEO and founder of security firm Cerberus Sentinel, it could be hard to implement a comprehensive security standard.
“As things stand currently, you don’t know if your portal has been checked for security standards unless there has been certification to meet a number of additional standards,” he says.
“Often the code itself goes unchecked and third-party companies can be building them for the interface, but there is no one to go line by line, ensuring security standards are met to certify the software.”
Speaking at the Cloud Security Alliance (CSA) summit at the RSA Conference in San Francisco, Alex Stamos, adjunct professor at Stanford University’s Freeman-Spogli Institute, said that issues and decisions made by technology companies have angered people.
Stamos, who previously served as CISO of both Facebook and Yahoo, said that once he stepped out of those roles and “out of constant emergencies” he could see the bigger picture.
He said that “tradeoffs from a policy perspective are poorly understood by the public and usually go back to the engineering adage of do you want it done correctly, cheaply, or quickly—pick 1 of 3.” Stamos called this a basic problem for society: people say they don’t want companies looking at their data, but to stop bad things happening you need to see bad things. “Politicians say companies have to find the bad guys, but you cannot have two things.”
Another issue Stamos highlighted is the responsibility technology companies bear for “solving societal ills,” as he pointed out that technology companies provide platforms while “every bad thing [that] happened [was] done by people.”
He said that companies have to “embrace transparency and make decisions in a transparent manner.” However, the line has to be drawn around bullying and harassment, as “nothing has changed since the last election.”
Stamos said that Google, Facebook, and Twitter came up with policies on political advertising “in closed rooms with no transparency,” and these will be the rules that the 2020 election will be fought on.
He recommended that the tech industry adopt a regulatory framework similar to Germany’s rules on what speech is allowed online, but said it should consider how such rules can be adopted by countries with reduced democratic freedoms. “Or you end up with tech companies who are happy if they get regulated if they can make money, as most people who use the internet don’t live in democracies, or if they do, it is with reduced free speech.”
Stamos concluded by saying that we “have to realize that technology has made changes in good and bad ways” and take responsibility for that.
Great Britain’s three nations are not in agreement over the use of facial recognition technology by police forces.
The technology, which can be legally used by police in Wales, was officially introduced by England’s Metropolitan Police Service in East London yesterday, amid a peaceful protest by Big Brother Watch.
Use of the technology by English police forces has not been debated in parliament or approved by elected officials.
By contrast, Police Scotland announced yesterday that its plans to roll out facial recognition technology by 2026 have been put on hold pending a wider debate about the implications of its use.
The decision comes in the wake of a report published on Tuesday, February 11, by a Scottish parliament committee, which concluded that facial recognition technology is “currently not fit for use” by Police Scotland.
The Justice Sub-Committee on Policing informed Police Scotland that the force must demonstrate the legal basis for using the technology and its compliance with human rights and data protection legislation before they can start using it.
In a report that was part of the committee’s inquiry into the advancement of the technology, the committee wrote: “The use of live facial recognition technology would be a radical departure from Police Scotland’s fundamental principle of policing by consent.”
The committee warned that the facial recognition technology was “known to discriminate against females and those from black, Asian and ethnic minority communities.”
Committee convener John Finnie said: “It is clear that this technology is in no fit state to be rolled out or indeed to assist the police with their work.
“Current live facial recognition technology throws up far too many ‘false positives’ and contains inherent biases that are known to be discriminatory.”
Police Scotland Assistant Chief Constable Duncan Sloan said it would now conduct a public consultation on the live software and keep a “watching brief on the trialling of the technology in England and Wales.”
In September 2019, the high court sitting in Cardiff ruled that police use of automatic facial recognition technology to search for people in crowds is lawful. The technology is currently used by South Wales police.
Boris Johnson is likely to approve the use of Huawei technology in the UK’s new 5G network against the pleas of the US government, a former national security adviser has said. Sir Mark Lyall Grant, who was Theresa May’s national security adviser, said that the security […]
As 2019 splutters to a close, it’s time for our annual lookback at our most-read tech stories, and to ask: “What happened next?”. Facebook and its family of apps dominates this year’s list with four entries – it probably won’t be a surprise that none of […]
Amazon Inc, the world’s largest online retailer, is these days better known as a technology company, and rightly so.
Technology is at the core of whatever Amazon does — from algorithms that forecast demand and place orders from brands, and robots that sort and pack items in warehouses to drones that will soon drop packages off at homes.
At its new Go stores, for instance, advances in computer vision make it possible to identify shoppers as they walk in and track which products they pick up, automatically adding the items to their online shopping carts.
Jeff Bezos, the founder of Amazon and the world’s richest man, is always pulling new rabbits out of his hat, like next-day or same-day shipping and cashier-less stores. Besides, there is Blue Origin, the aerospace company privately owned by Bezos, which is on a mission to make spaceflight possible for everyone.
Be that as it may, a lot more disruption aimed at reaching the common man is on the anvil.
The most far-reaching and impactful technologies being developed today are for Amazon’s own use, but some others have the potential to disrupt every sector.
The technology marvels that Amazon Web Services — the largest profit-driving unit in Bezos’ stable — is working on could jolt several industries, including in India, in the same way that Amazon once disrupted retail.
“In retail, while things like the size of the catalogue, advertising and other stuff might play a role in success, at Amazon, I think success is largely technology driven,” said Chief Technology Officer Werner Vogels.
The ecommerce giant is using advances in technology to disrupt several sectors outside of retail though — medicine, banking, logistics, robotics, agriculture and much more. Interestingly, some of that work is happening in India.
Initially, the thinking was around allowing enterprises in these sectors to grow by using its cloud storage and computing capabilities.
Now, Amazon’s reach has become more nuanced and it has moved up the value chain.
For example, Amazon is no longer just offering banks a place to securely store information; it now goes further by offering tools to detect fraud, making it unnecessary for lenders to build expensive data science teams in-house.
It is a similar story in other industries, made possible due to the massive amounts of data that Amazon collects and processes.
“We give people the software capability, so they no longer need to worry about that side of things. Most of our services are machine learning under the covers (and) that’s possible mostly because there’s so much data available for us to do that,” Vogels said.
Hospitals in the United States have to save imaging reports for years. Earlier, these were stored on tape, since storing them digitally cost millions of dollars.
The advent of cheaper cloud storage meant new scans could be saved digitally, making them accessible to doctors on demand.
Now, doctors could refer to a patient’s earlier CT scan and compare that with the new one to diagnose an ailment, said Shez Partovi, worldwide lead for healthcare, life sciences, genomics, medical devices and agri-tech at Amazon.
The power of cloud and AWS’ own capabilities in medical technology have only expanded since.
Healthcare and life sciences form rapidly scaling units of AWS, which is building a suite of tools that allow breakthroughs in medicine — from hospitals using the tools to do process modelling or operational forecasting, refining the selection of candidate drugs for trial or delivering diagnoses through computer imaging.
Developed markets will be the first to adopt such technologies, but AWS is seeing demand surge from the developing world, including India.
“Not everyone is within a mile of a radiologist or physician, so diagnostics through AI could solve for that. Further, there’s a lack of highly trained people, but when all you have to do is take an image, it requires a lot less training,” said Partovi.
Bezos, in his private capacity, is now looking to connect remote regions with high-speed broadband. He is building a network of over 3,000 satellites through “Project Kuiper”, which will compete with Elon Musk’s SpaceX and Airbus-backed OneWeb.
The bigger bet is in outer space though. His rocket company Blue Origin has already flown commercial payloads on New Shepard, its reusable suborbital rocket. The capsule atop New Shepard can carry six passengers, which Bezos looks to capitalise on for space tourism, a commercial opportunity most private space agencies are eyeing.
It is also building New Glenn, a reusable orbital rocket named after John Glenn, the first American to orbit the Earth, which can carry payloads of as much as 45 tonnes to low earth orbit.
Bezos’ aim, however, is to land on the Moon. His Blue Moon lander can deliver large infrastructure payloads with high accuracy to pre-position systems for future missions. The larger variant of Blue Moon has been designed to land a vehicle that will allow the United States to return to the Moon by 2024.
Amazon’s approach to robotics starts from the ground up.
The company has been part of an open-source network that is developing ROS 2 or Robot Operating System 2, which will be commercial-grade, secure, hardened and peer-reviewed in order to make it easier for developers to build robots.
“There is an incredible amount of promise and potential in robotics, but if you look at what a robot developer has to do to get things up and running, it’s an incredible amount of work,” said Roger Barga, general manager, AWS Robotics and Autonomous Services, at Amazon Web Services.
Apart from building the software that robots will run on, AWS is also making tools that will help developers simulate robots virtually before deploying them on the ground, gather data to run analytics on the cloud and even manage a fleet of robots.
While AWS will largely build tools for developers, as capabilities such as autonomous navigation become commonplace, the company could look to build them in-house and offer them as a service to robot developers, Barga said.
With the advent of 5G, more of a robot’s processing can be offloaded to the cloud, making robots smarter and giving them real-time analytics capabilities to do a better job. For India, robot builders will be able to get into the business far more easily, with all the necessary tools accessible on the cloud, overcoming the barrier of a lack of fundamental research in robotics.
AWS might be a behemoth in the cloud computing space, but cloud still makes up just 3% of all IT in the world; the rest remains on-premise. While a lot of that will migrate to the cloud, some will not. To get into the action in the on-premise market, Amazon has built services that run in a customer’s own data centre while offering capabilities as if the data were stored on the cloud.
With Outposts, which was announced last month, AWS infrastructure, AWS services, APIs, and tools will be able to run on a customer’s data centre.
Essentially, this will allow enterprises to run services on data housed within their own data centres, just like how they would if it had been stored on AWS.
The other big problem AWS is looking to solve is that its data centres are not always close enough to customers who require extremely low-latency computing. For this, the company has introduced a new service called Local Zones, through which it deploys its own hardware close to a large population, industry or IT centre where no AWS Region exists today.
Both these new services from AWS could be valuable in India given the lower reach of cloud computing among enterprises as well as stricter data localisation requirements.
Artificial Intelligence/Machine learning
Amazon is moving up the value chain in offering services backed by artificial intelligence and machine learning to automate repetitive tasks done by human beings.
Enterprise customers will simply be able to buy into these services with minimal customisation and without a large data science and artificial intelligence team.
In December, AWS launched its Fraud Detector service that makes it easy to identify potentially fraudulent activity online, such as payment fraud and creation of fake accounts. Even large banks in India have struggled to put together teams to build machine learning models for fraud detection, but with such a service they can train their systems easily.
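For a sense of what the integration looks like, here is a hedged sketch of querying Fraud Detector through boto3’s `get_event_prediction` API. The detector name, event type and variable names below are hypothetical placeholders, and the actual call (commented out) requires an AWS account with a detector trained on your own data:

```python
# Sketch of querying Amazon Fraud Detector via boto3's
# get_event_prediction API. The detector, event type, and variable
# names are invented placeholders for illustration only.
from datetime import datetime, timezone

def build_prediction_request(detector_id, event_type, entity_id, variables):
    """Assemble the keyword arguments for frauddetector.get_event_prediction."""
    return {
        "detectorId": detector_id,
        "eventId": entity_id,
        "eventTypeName": event_type,
        "eventTimestamp": datetime.now(timezone.utc).isoformat(),
        "entities": [{"entityType": "customer", "entityId": entity_id}],
        # Fraud Detector expects event variables as strings.
        "eventVariables": {k: str(v) for k, v in variables.items()},
    }

request = build_prediction_request(
    "payments_detector",   # hypothetical detector name
    "online_payment",      # hypothetical event type
    "customer-42",
    {"amount": 1999.0, "ip_address": "203.0.113.7"},
)

# import boto3
# client = boto3.client("frauddetector")
# prediction = client.get_event_prediction(**request)
print(request["eventVariables"])
```

The response from a real call would include model scores and the rule outcomes (for example approve or review) configured for the detector.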
Code Guru is another service that uses machine learning to do code reviews and spit out application performance recommendations, giving specific recommendations to fix code. Today, this is largely done manually, with several non-technology companies struggling to build great software for themselves due to bad code.
Transcribe Medical is a service that uses Amazon’s voice technology to create accurate transcriptions of medical consultations between patients and physicians. Medical transcription as a service is a big industry in India, and the country’s IT services giants hire thousands of people for such work. These services are expected to replace mundane manual tasks, freeing up resources for more sophisticated work, and could prove disruptive.
Amazon’s voice assistant wisecracks her way through SQL injection attacks on serverless environments at Black Hat Europe
Developers in serverless environments must heed the threat posed to their applications by voice command inputs, an industry expert has warned.
Speaking at the Black Hat Europe conference in London last week, researcher Tal Melamed took control of vulnerable applications hosted on serverless environments using Alexa-guided SQL injection attacks.
‘Sounds like a dream’
Serverless architecture, which allows developers to build applications without provisioning a server, is becoming an increasingly popular choice among developers, said Melamed, who is leading the OWASP Serverless Top 10 project.
Code is executed only when needed and “you don’t pay for what you don’t use”, the researcher noted, adding that the approach is a boon for “experimentation and scaling up”.
Serverless application development “sounds like a dream,” he said. But if organizations are liberated from the burdens of server management, it does not follow that security concerns are fully outsourced to service providers like AWS, Azure, and Google Cloud Platform.
This is because serverless applications still execute code, said Melamed – and insecure code is vulnerable to application-level attacks.
Melamed, head of research at Protego Labs, told The Daily Swig that all too many developers are unaware that serverless environments demand a different security posture from their traditional counterparts.
Outsourcing the perimeter
Outsourcing server architecture might reduce workload, but it also tears down the security perimeter.
“Serverless is an event-driven architecture where code is triggered via different events in the cloud,” Melamed told The Daily Swig.
Unlike with monolithic applications, input to the code is no longer limited to API calls.
“Code can now be executed due to an email that was received, a file that was uploaded or a database table that was changed. The ‘connection’ between those events to your code is transparent and is controlled by the cloud provider.”
All too many developers “are unaware of the adjustments” they need to make “to attend [to] those risks.”
Those adjustments include never trusting inputs, which should be validated before data is processed.
“However, [developers] need to get used to the fact that the input could come from unexpected sources, like Alexa voice commands,” added Melamed.
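As a minimal sketch of that advice – the event shape below is a simplified, hypothetical version of a voice platform’s request JSON, not any vendor’s actual schema – a handler can strictly validate a transcribed slot value before it touches any query:

```python
import re

# Simplified, hypothetical shape of a voice-platform event: the slot
# value arrives as free text transcribed from speech and must be
# treated as untrusted input, like any web form field.
def extract_account_id(event):
    """Pull the account-id slot from the event and validate it strictly."""
    value = (
        event.get("request", {})
        .get("intent", {})
        .get("slots", {})
        .get("accountId", {})
        .get("value", "")
    )
    # Allow only the exact format we expect: 4 to 12 digits.
    if not re.fullmatch(r"\d{4,12}", value):
        raise ValueError(f"rejected untrusted slot value: {value!r}")
    return value

good = {"request": {"intent": {"slots": {"accountId": {"value": "123456"}}}}}
print(extract_account_id(good))  # → 123456
```

Anything that does not match the expected pattern, including a spoken injection payload, is rejected before the function does any work with it.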
Alexa, what is my balance?
Melamed’s final demonstration, in which he stole data from a hypothetical user account, illustrated how a voice-command injection attack requires only “code [that’s] vulnerable to SQL injection, which accepts inputs from Alexa (or any other voice-enabled devices) and processes the input as part of the database queries without validating it first.”
Alexa translated his voice commands – such as “what is my balance?” – into code.
“I designed it so it would translate words of numbers into actual numbers,” he told attendees.
Melamed then walked attendees through the voice-delivered code that cracked the user’s secret ID, unlocking the cash balance.
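Melamed’s exact payload is not reproduced here, but the class of bug is easy to sketch. The following is a minimal, hypothetical illustration (not Melamed’s code): a function that splices a voice-transcribed value straight into a SQL query, next to the parameterized version that neutralizes the attack:

```python
import sqlite3

# In-memory database standing in for the account store behind the demo app.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (secret_id TEXT, balance REAL)")
conn.execute("INSERT INTO accounts VALUES ('4242', 1337.0)")

def balance_vulnerable(secret_id):
    # BUG: the transcribed input is spliced into the query, so a spoken
    # payload such as "zero or one equals one" (transcribed to
    # "0 OR 1=1") matches every row.
    query = f"SELECT balance FROM accounts WHERE secret_id = {secret_id}"
    return conn.execute(query).fetchall()

def balance_safe(secret_id):
    # Parameterized query: the driver treats the input as a value only.
    query = "SELECT balance FROM accounts WHERE secret_id = ?"
    return conn.execute(query, (secret_id,)).fetchall()

print(balance_vulnerable("0 OR 1=1"))  # → [(1337.0,)] the balance leaks
print(balance_safe("0 OR 1=1"))        # → [] injection neutralized
```

The fix is the same whether the input arrives from a web form or a smart speaker: never build queries by string concatenation, regardless of how trusted the input channel feels.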
The lesson to “organizations that develop voice-enabled applications” is clear, Melamed told The Daily Swig: they “should consider voice-commands as [an] input to their application.”
Melamed also launched event injection attacks through a third-party app using rest API, against cloud storage, and via email.
Melamed said his demos – coming soon to GitHub – evidenced the importance of shrinking “the attack surface by following the least-privilege principle: narrowing down the permissions of every serverless function as much as possible.”
Attendees were also urged to automate their defensive processes wherever possible.
Telling it like it is, Alexa clearly assigned blame for successful injection attacks: “In short, the problem isn’t the cloud – it’s you [the developer]”.
Accel’s new India fund What’s the news? Accel India, backer of leading technology startups such as Flipkart, Freshworks and Swiggy, has raised about $550 million for its sixth India fund, taking its assets under management to $1.5 billion. This makes Accel VI among the largest corpuses […]