We are Deep Tech for Online Harms

Cutting-edge AI built to detect, understand and defeat online harm.

Awards and distinctions by

The power of Insikt AI

Applying AI methodologies, we develop domain-specific ML models to help fight different kinds of online harm. Our flexible technology also serves a wide range of commercial use cases, particularly in media, business and finance.

Spotlight

OSINT Platform

Spotlight classifies and analyzes data to help law enforcement and investigative professionals detect suspicious content. Powered by AI methods such as Deep Learning and Social Network Analysis, it saves time, surfaces hidden insights and cuts through the noise to effectively neutralize and prevent terror, crime and cyber threats.

Eligible for 14-day trial
In development

Trueview

Platform

A cutting-edge tool to map out social networks, analyze their structure and content, and identify potentially problematic disinformation in real time. Trueview is also geared towards large enterprises and brands, which lose an estimated €70 billion worldwide every year as a result of fake news.

Custom Machine Learning Models

We build customized models for our customers’ use cases, helping them achieve maximum detection capability in complex online content domains. Our models learn to distinguish relevant posts within massive volumes of online content, starting from very little initial information.

Easy-to-use yet scientifically sophisticated software, built on pre-trained models with millions of parameters that enhance our NLP methods. We remove the need for manual data analysis and make it possible to discover the “unknown unknowns”.
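To give a sense of what large pre-trained language models can do out of the box, here is a minimal sketch using the open-source Hugging Face transformers library and a public zero-shot classification model. It is a generic illustration, not Insikt’s own software; the example post and labels are invented.

```python
# Minimal sketch: using a public pre-trained model with hundreds of millions of
# parameters to classify text with no labelled training data at all.
# Illustrative only; this is not Insikt's own technology.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

post = "This miracle cure is being hidden from you by the government."     # invented example
labels = ["health misinformation", "product review", "sports commentary"]  # invented labels

result = classifier(post, candidate_labels=labels)
print(list(zip(result["labels"], [round(s, 3) for s in result["scores"]])))
```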

Experts in the security field
Ethical Approach
Custom AI analysis
Cutting-edge R&D
How we can help

We help both public- and private-sector stakeholders detect problematic content before it becomes a problem, identify emerging issues, and predict and mitigate the spread of harmful content.

Disinformation is eroding the very fabric of our society, not only destroying democracies but also damaging businesses and disrupting markets.

Public health and safety have also become increasingly vulnerable.

Our technology, through both our Machine Learning models and our newest product, Trueview, is helping public and private stakeholders understand and defeat harmful disinformation.

How we do it

Powering the detection of problematic content and threats across digital sources

Our proprietary technology, based on rigorous original research, enables governments, private enterprises and intelligence operations to combat crime by identifying online threats to public safety.

Advanced AI

By using algorithms to build models that uncover connections and hidden trends, Insikt’s technology enables better and faster decisions without manual intervention. Our approach surfaces trends, threats and anomalous patterns by removing the noise and focusing on key information that would otherwise remain hidden within large volumes of online data.
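As a simple illustration of how anomalous patterns can be surfaced automatically from noisy activity data, here is a minimal sketch using scikit-learn’s IsolationForest. The features and data are invented; this is a generic technique, not Insikt’s proprietary approach.

```python
# Minimal sketch of surfacing anomalous activity with scikit-learn's
# IsolationForest. Illustrative only: the features (posts per hour, share of
# new accounts) and the data are invented, not Insikt's actual signals.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Mostly "normal" hourly activity, plus a few injected spikes.
normal = rng.normal(loc=[50, 0.1], scale=[5, 0.02], size=(200, 2))
spikes = np.array([[400, 0.9], [350, 0.8]])
X = np.vstack([normal, spikes])

detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = detector.predict(X)   # -1 marks anomalies
print(X[flags == -1])         # the injected spikes should surface here
```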

Natural Language Processing

Our patent-pending Natural Language Processing methodology extracts topics, concepts, entities and key ideas from any human-generated speech, post, call or other digital data. The combination of our methods with the latest Deep Learning techniques makes our approach unique.
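For readers unfamiliar with this kind of extraction, here is a minimal sketch using the open-source spaCy library. It shows the general idea of pulling entities and key phrases out of a post; it does not reproduce Insikt’s patent-pending methodology, and the sample text is invented.

```python
# Minimal sketch of entity and key-phrase extraction with the open-source
# spaCy library. Illustrative only; not Insikt's patent-pending methodology.
import spacy

# Requires: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def extract(text: str) -> dict:
    """Return named entities and noun-phrase 'key ideas' found in a post."""
    doc = nlp(text)
    return {
        "entities": [(ent.text, ent.label_) for ent in doc.ents],
        "key_phrases": [chunk.text for chunk in doc.noun_chunks],
    }

# Invented sample post.
print(extract("Insikt AI analyses harmful narratives spreading on social media in Europe."))
```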

Social Network Analysis

Networks are patterns of relationships that connect individuals, institutions or objects. Network analysis places a strong emphasis on those relationships because of the meaning they may hold. Insikt’s technology enables users to perform both specific and broad searches on the data generated by deep analysis of these hidden networks.
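Here is a minimal sketch of what network analysis looks like in practice, using the open-source NetworkX library: build a graph of who interacts with whom, then surface the most central accounts and the communities they form. The accounts and edges are invented, and this is a generic illustration rather than Insikt’s own pipeline.

```python
# Minimal sketch of social network analysis with NetworkX. Illustrative only:
# the account names and interactions are invented.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Each pair records an interaction (reply, mention, share) between two accounts.
interactions = [
    ("account_a", "account_b"),
    ("account_a", "account_c"),
    ("account_b", "account_c"),
    ("account_d", "account_e"),
    ("account_e", "account_f"),
    ("account_d", "account_f"),
    ("account_c", "account_d"),  # bridge between the two clusters
]

G = nx.Graph()
G.add_edges_from(interactions)

# Betweenness centrality highlights accounts that bridge otherwise separate groups.
for node, score in sorted(nx.betweenness_centrality(G).items(),
                          key=lambda kv: kv[1], reverse=True):
    print(f"{node}: {score:.2f}")

# Community detection groups densely connected accounts together.
print([sorted(c) for c in greedy_modularity_communities(G)])
```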

Research at our Core

Founded by a research team that has worked on NLP and network analysis for over 10 years, we keep R&D at our core: it is continuous and fuels our innovations. We don’t just build technology; we iterate repeatedly on our scientific methods, continuously improving the accuracy and power of our Machine Learning models.

Our use cases

Our technology is used to boost understanding and deterrence of issues with severe online and offline consequences. 

Online Abuse in Sports
Hate speech against sports figures is not fair game. See Insikt’s findings on the most recent cases.
Online Criminal Activity - Drugs on social media
Illicit drugs are openly marketed on social media platforms, high-resolution images included. Find out how.
Online Radicalization
Social media is a key vector for radicalization and terrorist recruitment. This is what we found out using Spotlight.
Asian American Hate
The spread of COVID-19 was correlated with a wave of increased anti-Asian rhetoric and attacks. We analyzed the phenomenon.
Human Trafficking
Human trafficking is the trade of human beings for sexual slavery, forced labor, or any other activity the victims are coerced to carry out against their will. This is how Spotlight helps fight it.
Our Core Team

We are a multidisciplinary team of professionals with decades of experience in the AI field. 

Jennifer Woodard

Co-Founder

Sandra Cardoso

Chief Technology Officer

Guillem Garcia

Co-Founder & Chief Scientific Officer

Frequently Asked Questions

If you’re new to AI and machine learning technologies or simply want to know more about Insikt AI, these FAQs will help you learn about our company, our values and our approach.


At our core, we are researchers. We began Insikt with a vision born of the deadly terror attacks that took place across Europe in 2015-2016: we wanted to leverage the AI methods that we (the co-founders, Jennifer and Guillem) had previously been researching for automated content analysis with NLP in the private sector (PR and marketing) to detect and prevent radical extremist content online. That vision started out with one research project, then another, and finally culminated in the methodology we use today, which is still constantly evolving. We continue to research and are always adding emerging methods to our “methodology stack”, which allows us to continuously improve our ability to tackle new problems with our approaches.

Our methodology is unique to Insikt AI. We base our methodologies on the most advanced Deep Learning methods and develop our models specifically for the target sector, which allows us to provide more accurate features and deeper insights. We also implement advanced techniques such as Transfer Learning and Few-shot Learning to cover multiple languages and topics, and combine them with Social Network Analysis to extract powerful insights about conversations and their authors.

Custom classification models detect messages that are relevant to the user but that other methods cannot pick up, for example misinformation related to a specific sector or region. We can build models that learn to distinguish these relevant posts within massive volumes of online content with very little information to start from, applying Few-Shot Learning methods to teach the system how to detect this type of content with minimal input from the user.
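To make “very little information to start from” concrete, here is a minimal, generic sketch of few-shot text classification using the open-source sentence-transformers library: a handful of labelled examples are embedded with a pre-trained encoder, and new posts are labelled by similarity to each class centroid. The example posts and labels are invented, and this is not Insikt’s production model.

```python
# Minimal sketch of few-shot text classification: embed a few labelled examples
# with a pre-trained sentence encoder and label new posts by cosine similarity
# to each class centroid. Illustrative only; the example posts are invented and
# this is a generic technique, not Insikt's production model.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")

few_shot_examples = {
    "relevant":   ["post promoting an illegal product", "coded advert for illicit goods"],
    "irrelevant": ["photo of my lunch today", "great match last night!"],
}

# One centroid vector per class, averaged over its few examples.
centroids = {
    label: model.encode(texts).mean(axis=0)
    for label, texts in few_shot_examples.items()
}

def classify(post: str) -> str:
    """Return the label whose centroid is most similar to the post."""
    vec = model.encode(post)
    scores = {
        label: float(np.dot(vec, c) / (np.linalg.norm(vec) * np.linalg.norm(c)))
        for label, c in centroids.items()
    }
    return max(scores, key=scores.get)

print(classify("new stock just in, DM for prices"))  # invented example
```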

Technology built for security use cases such as radicalization and terror must be extremely accurate and scientifically robust. We can transfer the level of intelligence and insight needed for public safety and security to solving corporate problems such as reputation management, consumer insights, fraud detection, disinformation and more.

Ethical AI, particularly its application in high-stakes use cases such as counterterrorism and crime fighting, is core to who we are. On a practical level, we implement our own end-to-end ethical development framework, which starts with the data we use to train our Machine Learning models and continues all the way to how we train end users to use our tools. We are also outspoken proponents of the ethical use of AI in security and policing, speaking extensively on the topic at conferences and establishing a non-profit research institute dedicated to it: Dataietica.

Not currently, but feel free to contact us with your CV. We are always happy to have a chat.

We currently do not have any investors; our company’s technology was bootstrapped and then supported by funding from the European Commission’s SME Instrument programme.


Copyright © 2021 INSIKT AI. All rights reserved.

Tell us about your need
Members of

Our technology has been co-funded by the European Union’s Horizon 2020 research and innovation programme under grant agreement number 767542.