#nationalcybersecuritymonth | Interos Completes Series B Funding to Drive Data Science

Interos announced it has raised $17.5 million in a Series B funding round to accelerate data science and engineering growth, expand personnel and boost sales to drive commercial momentum for its leading risk management platform.

The funding comes after Interos tripled its headcount, increased annual recurring revenue by 700% and hiked SaaS subscription bookings by 693% in 2019. With the funding, Interos expects to capitalize on last year’s growth and more than double its personnel in 2020, hiring more staff to augment its proprietary software, which exposes critical risks in the global supply chain for leading private and public sector customers. 

 The round was led by first-time investor Venrock with participation from Kleiner Perkins. 

 “After a strong 2019, this funding shows Interos has already secured major support in 2020 from the world’s most successful investors,” said Jennifer Bisceglie, CEO and founder of Interos. “Like our customers, investors see the value of the Interos platform, which is critical for global businesses in 2020. From events like the coronavirus to political unrest, companies need a platform that exposes risks and identifies how events affect suppliers around the world the moment they happen.” 

“Interos is one of the most compelling big data and AI companies I’ve come across in the last decade,” said Nick Beim, Venrock partner. “Over the last 20 years, global supply chains have grown so rapidly and with so much opacity that most companies don’t know who they’re working with or who they’re dependent on. There’s so much data to gather to fully understand those risks, and Interos helps companies address these urgent, strategic issues with a brand new set of capabilities.”

Interos also recently added Phil Venables, a cybersecurity and risk expert, to its board of directors. Venables’ distinguished career includes previously serving as Goldman Sachs’ first chief information security officer and head of technology risk, and as its chief operational risk officer. Prior to his work at Goldman Sachs, Venables was the chief information security officer at Deutsche Bank. Venables serves on the executive committee of the U.S. Financial Services Sector Coordinating Council for Critical Infrastructure Protection, is co-chair of the Board of Sheltered Harbor, and is a member of the boards of the Center for Internet Security and the NYU Tandon School of Engineering. He is also an adviser to the cybersecurity efforts of the U.S. National Research Council and the Institute for Defense Analyses.

Interos has worked with the U.S. Department of Defense, NASA and the Department of Energy on critical infrastructure. Interos uses machine learning to build and maintain the world’s largest knowledge graph of over 50 million relationships to discover and monitor the entirety of a supplier ecosystem. Interos ingests over 85,000 information feeds and processes over 250 million risks each month. It instantly visualizes the most complex multi-tier relationships, updating and alerting to changes in risk along five factors: financial, operations, governance, geographic and cyber.
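Interos does not publish the internals of its platform, but the general shape of such a system can be sketched: a graph whose nodes are suppliers, whose edges are multi-tier dependencies, and whose nodes carry scores for the five risk factors above. The minimal Python sketch below is purely illustrative; the class names, the 0-to-1 scoring scale, and the take-the-worst aggregation rule are assumptions for the example, not Interos’ actual design.

```python
from dataclasses import dataclass, field
from enum import Enum

# The five risk factors named in the announcement.
class RiskFactor(Enum):
    FINANCIAL = "financial"
    OPERATIONS = "operations"
    GOVERNANCE = "governance"
    GEOGRAPHIC = "geographic"
    CYBER = "cyber"

@dataclass
class Supplier:
    name: str
    # Risk score per factor, here 0.0 (low) to 1.0 (high) -- an illustrative scale.
    risk: dict[RiskFactor, float] = field(default_factory=dict)
    # Direct (tier-1) suppliers of this supplier; deeper tiers hang off these.
    suppliers: list["Supplier"] = field(default_factory=list)

def worst_risk(root: Supplier, factor: RiskFactor) -> float:
    """Walk the multi-tier supplier graph and return the highest risk score
    seen for one factor (a simple aggregation rule chosen for the sketch;
    the real platform's scoring model is not public)."""
    score = root.risk.get(factor, 0.0)
    for sub in root.suppliers:
        score = max(score, worst_risk(sub, factor))
    return score

# Usage: a buyer depends on a tier-1 supplier, which depends on a tier-2 supplier.
tier2 = Supplier("Tier-2 Foundry", {RiskFactor.CYBER: 0.9})
tier1 = Supplier("Tier-1 Assembler", {RiskFactor.CYBER: 0.2}, [tier2])
buyer = Supplier("Buyer", {}, [tier1])
print(worst_risk(buyer, RiskFactor.CYBER))  # 0.9 -- risk surfaced from deep in the chain
```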

 “In today’s interconnected world, Interos is bringing clarity to the muddled, confusing nature of supplier relationships,” said Ted Schlein, partner at Kleiner Perkins. “By automating due diligence, leveraging sophisticated technology and exposing vital risks, Interos shines a light on an otherwise opaque global supply chain.”

#deepweb | Why we need more women to build real-world AI products, explained by science

Before we dive into why more women should lead AI teams, I want to share a fascinating story I heard from Tania Biland, a third-year student at Lucerne University of Applied Sciences and Arts.

The story as narrated by Tania:

Last semester, our class got split into three different groups in order to develop a safety technology solution for Swiss or German brands:

Group 1: Only women (my group)

Group 2: Only men

Group 3: Four women and one man

After 4 weeks of work, each team had to present their work.

Group 1, composed of only women, developed a safety solution for women in the dark. As the jury was all male, we decided to tell a story using a persona, music, and videos in order to make them feel what women experience on a daily basis. We also put emphasis on the fact that everyone has a mother, sister, or wife in their life and that they probably don’t want her/them to suffer. In the end, our solution was rather simple technologically, using light to provide safety, but it connected with the audience emotionally.

Group 2, composed of only men, presented a more high-tech solution using AI, GPS, and video conferencing. They based their arguments on facts and numbers and pointed out their competitive advantages.

In Group 3, with 4 women and 1 man, the outcome didn’t seem finished. The only man in the group would not accept being led by women, so the group spent too much time discussing group dynamics instead of working.

The groups not only had different outputs but also approached the problem differently. My group (Group 1) decided to start by defining each other’s work preferences and styles in order to distribute responsibilities and keep the hierarchy as flat as possible.

The two other groups, on the other hand, each elected a leader. It turned out that these “leaders” were perceived more as dictators, which led to heavy conflicts: those teams spent hours discussing and arguing while our group simply kept working and stayed productive.

What science tells us about gender differences

The science on gender differences and their effects on behavior is still evolving and has not yet produced a clear set of explanations for these different behaviors. Compiling most of the research, two main factors influence behavior:

  1. Potential physiological differences between men and women
  2. Social norms and pressures forming different behaviors

In the above story, as told by Tania, the women developed their solution in a Collaborative Leadership Style (adhocracy culture), rotating the lead role based on the task and keeping the hierarchy almost flat. They built their argument by involving all stakeholders (in this case the mothers and wives, i.e. the users) and showing empathy for their problems. They saw the bigger picture and also built a simpler solution that was actually finished.

The story helped me connect the dots on why most AI projects never move from the prototype phase to a real-world application.

Why aren’t AI products adopted?

Based on my experience, there are three main reasons why most AI and Machine Learning (ML) solutions do not move from the prototyping phase to the real-world:

  1. Lack of trust: One of the biggest difficulties for AI or ML products is a lack of trust. Millions of dollars have been spent on prototyping, but with very little success in real-world launches. Trust is one of the most fundamental values in doing business and providing value to customers, and Artificial Intelligence is the most heavily debated technology when it comes to ethical concerns and related trust issues. Trust comes from involving different opinions and parties throughout the development process, which is typically not done in the prototype phase.
  2. The complexity of a launch: Building a prototype is easy, but dozens of other external entities need to be considered when moving into the real world. Besides technical challenges, other areas of focus (such as marketing, design, and sales) need to be integrated with the prototyping work.
  3. AI products often do not take into account all stakeholders: I have heard stories of Alexa and Google Home being used by men against their spouses in instances of domestic violence: turning the music up extremely loud, or locking them out of their homes. It is possible that in an environment where mostly male engineers build these products, no one is thinking about these kinds of scenarios. Additionally, there are many instances of artificial intelligence and data sets being biased, sexist and racist [1].

Interestingly, none of the three points relate to the technical challenges, and all of them can be overcome by creating the right team.

How can AI be adopted more successfully?

In order to solve the above challenges and build more successful AI products, we need to focus on a more collaborative and community-driven approach.

This takes into account opinions from different stakeholders, especially those who are under-represented. Below are steps to achieve that:

Step 1. Involve different groups, especially women from the middle of the talent pyramid

In technology, most companies focus on hiring people at the top of the talent pyramid, where, for primarily historical reasons, there are fewer women. For example, most Computer Science classes have fewer than 10 percent women. However, many talented women sit in the middle of the pyramid, educating themselves through online courses but lacking opportunities and encouragement.