
The 7.39: Who is Sheridan Smith’s partner Jamie Horn? All you need to know | #tinder | #pof | romancescams | #scams

September 22, 2020 – 20:51 BST, Francesca Shillcock. The 7.39 and Becoming Mum star recently welcomed her first child with partner Jamie – meet her partner here… Sheridan Smith is back […] View full post on National Cyber Security

Are Kristen Stewart and Drew Barrymore SECRETLY DATING? Here’s what you need to know. | #facebookdating | #tinder | #pof | romancescams | #scams

Since Wonder Woman 1984 isn’t coming to theaters in October anymore, you’re gonna want to find some new stuff to watch at home. Thankfully, Disney+ and Hulu have a wave […]

#bumble | #tinder | #pof Is maskfishing the latest dating trend we don’t need? | romancescams | #scams

Blue-stalling: When two people are dating and acting like a couple, but one person in the partnership states they’re unready for any sort of label or commitment (despite acting in […]

THE ABC’S OF THE S.I.U.: What Providers Need To Know | Fox Rothschild LLP | #employeefraud | #recruitment | #corporatesecurity | #businesssecurity | #

Medical record requests by payors are commonplace for health care providers. Typically, these requests are received by a front desk employee who responds to the inquiry in short order.  Yet, not all requests should be treated the same.  When a request for documentation is propounded by the “Special Investigation Unit” (S.I.U.)  of an insurance company, special care should be exercised and provider involvement is required.

What is an S.I.U. anyway?  Over the past two decades, campaigns to curb fraud and abuse in health care have intensified.  On the government side, False Claims prosecutions have markedly increased, and in the private sector, insurance companies have created dedicated departments to combat fraud.  The S.I.U. is a department within an insurance company with a targeted focus on recovering payments from medical providers that appear to be the product of fraud.  Individuals employed by an S.I.U. include former law enforcement personnel, claims adjusters, and fraud analysts, among others, who receive specific training and credentialing in fraud detection.  These investigators use data analytics and other methods to flag providers whose claims fall outside the “normal range” for the type of health care provider under review.

S.I.U. “audits” or requests for information about the practice should be taken seriously and escalated to the top of your organization.  In many cases, special investigators receive incentives from the insurance company for recovering payments from providers.  They often attempt to strong-arm a resolution by threatening a fraud claim, which in a number of states carries the prospect of treble (triple) damages, punitive damages, and attorneys’ fees.  In some cases, the medical records produced by the provider (or the absence thereof) will support the fraud allegation.  In others, the records will support a defense against it.

Here are 5 tips for your practice:

  1. Instruct  staff that all audit requests should be forwarded to the owner of the practice and the provider whose records are being requested.
  2. If the audit is deemed routine (not S.I.U. generated), instruct staff to make a copy of the records requested and the cover letter that attaches the records so that you can memorialize exactly what was provided and when.
  3. If you receive a letter from the S.I.U., reach out to an attorney who has experience in dealing with the S.I.U. to assist you through the process.
  4. If an investigator from the S.I.U. appears at your office, ask for a business card and do not let the visit disrupt patient care.  You can call the investigator later (or your attorney can).
  5. Do not provide access to your electronic records or files to anyone — including anyone employed by a payor.


#hacking | XSS vulnerability in CKEditor prompts need for Drupal update

Source: National Cyber Security – Produced By Gregory Evans


John Leyden

20 March 2020 at 14:20 UTC

Updated: 20 March 2020 at 14:29 UTC

Text editor flaw spawns CVE

A vulnerability in a third-party library component has had a knock-on effect on software packages that rely on it, including the Drupal content management system.

The issue involves a cross-site scripting (XSS) bug in CKEditor, a rich text editor that comes bundled with various online applications.

An attacker might be able to exploit the XSS vulnerability to target users with access to CKEditor. This potentially includes site admins with privileged access.

Exploitation is far from straightforward and would involve tricking potential victims into copying maliciously crafted HTML code before pasting it into CKEditor in ‘WYSIWYG’ mode.
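To make the paste-based attack vector concrete, here is a minimal, illustrative sketch (not CKEditor's actual sanitization logic) of why pasted rich text must be filtered before it is rendered: crafted HTML can smuggle script tags, `on*` event handlers, or `javascript:` URLs past a naive editor. The tag and attribute rules below are assumptions for the example only.

```python
from html.parser import HTMLParser

# Illustrative sketch of server-side sanitisation for pasted HTML.
# The blocked-tag list and attribute rules are hypothetical, not CKEditor's.
BLOCKED_TAGS = {"script", "iframe", "object"}

class PasteSanitizer(HTMLParser):
    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []
        self._skip_depth = 0  # >0 while inside a blocked element

    def handle_starttag(self, tag, attrs):
        if tag in BLOCKED_TAGS:
            self._skip_depth += 1
            return
        if self._skip_depth:
            return
        # Drop on* event-handler attributes and javascript: URLs,
        # the usual carriers of pasted-HTML XSS payloads.
        safe = [
            (k, v) for k, v in attrs
            if not k.lower().startswith("on")
            and not (v or "").lower().lstrip().startswith("javascript:")
        ]
        attr_str = "".join(f' {k}="{v}"' for k, v in safe if v is not None)
        self.out.append(f"<{tag}{attr_str}>")

    def handle_endtag(self, tag):
        if tag in BLOCKED_TAGS:
            self._skip_depth = max(0, self._skip_depth - 1)
            return
        if not self._skip_depth:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if not self._skip_depth:
            self.out.append(data)

def sanitize(html: str) -> str:
    """Return the pasted HTML with blocked tags and dangerous attributes removed."""
    parser = PasteSanitizer()
    parser.feed(html)
    return "".join(parser.out)
```

For example, `sanitize('<img src="x" onerror="alert(1)">')` keeps the image but drops the `onerror` handler. Real editors use far more thorough allow-lists; the point is that pasted markup is attacker-controlled input.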

“Although this is an unlikely scenario, we recommend upgrading to the latest editor version,” developers of CKEditor explain in an advisory, issued earlier this month.

CKEditor 4.14 fixes this XSS vulnerability in the HTML data processor, discovered by Michał Bentkowski of Securitum, as well as offering feature improvements and a fix for an unrelated XSS vulnerability in the third-party WebSpellChecker Dialog plugin.

An advisory from Drupal, issued on Wednesday, instructs users to update to a version of the CMS that features the updated version of CKEditor in order to mitigate the vulnerability.

In practice, this means upgrading to either Drupal 8.8.4 or Drupal 8.7.12.
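The fixed releases named by the advisory can be expressed as a simple per-branch version check. This is a sketch under the assumption of plain dotted version strings; the `needs_update` helper is illustrative, not a Drupal API.

```python
# Fixed Drupal 8 core releases per the advisory: 8.8.4 and 8.7.12.
# Any earlier release on those branches still bundles the vulnerable CKEditor.
FIXED = {"8.8": (8, 8, 4), "8.7": (8, 7, 12)}

def parse(version: str) -> tuple:
    """Turn '8.8.3' into (8, 8, 3) so tuples compare numerically."""
    return tuple(int(part) for part in version.split("."))

def needs_update(installed: str) -> bool:
    """True if the installed core version predates the patched release."""
    major_minor = ".".join(installed.split(".")[:2])
    fixed = FIXED.get(major_minor)
    if fixed is None:
        # Branch not covered by the advisory; treat conservatively.
        return True
    return parse(installed) < fixed
```

For instance, `needs_update("8.8.3")` is true while `needs_update("8.8.4")` is false, matching the advisory's guidance.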

The security flaw is described as “moderately critical” by Drupal, even though attackers would need to be able to create or edit content in order to attempt exploitation.

READ MORE WordPress Terror: Researchers discover a massive 5,000 security flaws in buggy plugins


#deepweb | Why we need more women to build real-world AI products, explained by science

Before we dive into why more women should lead AI teams, I want to share a fascinating story I heard from Tania Biland, a 3rd-year student of Lucerne University of Applied Sciences and Arts.

The story as narrated by Tania:

Last semester, our class got split into three different groups in order to develop a safety technology solution for Swiss or German brands:

Group 1: Only women (my group)

Group 2: Only men

Group 3: Four women and one man

After 4 weeks of work, each team had to present their work.

Group 1, composed of only women, developed a safety solution for women in the dark. As the jury was all male, we decided to tell a story using a persona, music, and videos in order to make them feel what women are experiencing on a daily basis. We also put emphasis on the fact that everyone has a mother, sister, or wife in their life and that they probably don’t want her/them to suffer. In the end, our solution was rather simple technologically: using light to provide safety, but connected to the audience emotionally.

Group 2, composed only of men, presented a more high-tech solution using AI, GPS, and video conferences. They based their arguments on facts and numbers and pointed out their competitive advantages.

In Group 3, with 4 women and 1 man, the outcome didn’t seem finished. The only man in the group could not accept being led by women, and the team therefore spent too much time discussing group dynamics instead of working.

The groups not only had different outputs but also approached the problem differently. My group (group 1) started by defining each other’s work preferences and styles in order to distribute responsibilities and keep the hierarchy as flat as possible.

On the other hand, the two other groups each elected a leader for the team. It turned out that these “leaders” were perceived more as dictators, which led to heavy conflicts in which the teams spent hours discussing and arguing while our group was simply working productively.

What science tells us about gender differences

The scientific landscape with regard to gender differences and their effects on behavior is still evolving and has not yet produced a clear set of explanations for differing behaviors. Compiling most of the research suggests two main factors that influence behavior:

  1. Potential physiological differences between men and women
  2. Social norms and pressures forming different behaviors

In the above story, as told by Tania, the women developed their solution with a collaborative leadership style (adhocracy culture), rotating the leading position based on the task, with an almost flat hierarchy. They built their argument by involving all stakeholders (in this case the mothers and wives = users) and showing empathy for their problems. They saw the bigger picture and also built a simpler solution that was actually finished.

Through the story, I was able to connect the dots on why most AI projects never move from the prototype phase to a real-world application.

Why are AI products not adopted?

Based on my experience, there are three main reasons why most AI and Machine Learning (ML) solutions do not move from the prototyping phase to the real world:

  1. Lack of trust: One of the biggest difficulties for AI or ML products is lack of trust. Millions of dollars have been spent on prototyping, but with very little success in real-world launches. Trust is one of the most fundamental values in doing business and providing value to customers, and Artificial Intelligence is the most heavily debated technology when it comes to ethical concerns and related trust issues. Trust comes from involving different opinions and parties throughout development, which does not happen in the prototype phase.
  2. The complexity of a launch: Building a prototype is easy, but there are many other external entities to consider when moving into the real world. Besides technical challenges, other areas of focus need to be integrated with the prototype (such as marketing, design, and sales).
  3. AI products often do not take all stakeholders into account: I heard a story that Alexa and Google Home devices are being used by men to lock out their spouses in instances of domestic violence: turning the music up really loud, or locking them out of their homes. It is possible that in an environment where mostly male engineers build these products, no one is thinking about these kinds of scenarios. Additionally, there are many instances of artificial intelligence and data sensors being biased, sexist, and racist [1].

Interestingly, none of the three points relate to the technical challenges, and all of them can be overcome by creating the right team.

How can AI be adopted more successfully?

In order to solve the above challenges and build more successful AI products, we need to focus on a more collaborative and community-driven approach.

This takes into account opinions from different stakeholders, especially those who are under-represented. Below are steps to achieve that:

Step 1. Involve different groups, especially women, from the middle of the talent pyramid

In technology, most companies focus on hiring people at the top of the talent pyramid, where, for primarily historical reasons, there are fewer women. For example, most Computer Science classes have fewer than 10 percent women. However, many talented women are hidden in the middle of the pyramid, educating themselves through online courses but lacking opportunities and encouragement.