INTERVIEW with Irina Nicolae – “Compromising deep neural networks could incur severe damage and loss.”

Irina Nicolae’s talk at DefCamp holds the promise of a fascinating topic to explore: deep neural networks.
With all the talk in the industry about machine learning and its influence on cyber security products and platforms, it’s time to unpack this topic. As DefCamp approaches, we know your patience is wearing thin, so here’s a quick peek into Irina’s experience and perspective on deep neural networks.
Deep neural networks boast the promise of helping us make sense of data and put it to better use, so we wanted to get Irina’s take on how they relate to machine learning and AI.
AI has seen enormous progress in the past 50 years and now encompasses multiple disciplines, such as machine learning, robotics and reasoning. Machine learning is the part of AI responsible for building prediction models from data. Neural networks, a family of machine learning methods, are based on interconnected units called neurons, loosely inspired by the neurons in the brain. These neurons are connected to form stacked layers, which is where the notion of ‘depth’ often associated with neural nets comes from. In spite of their intimidating name, deep neural networks (or DNNs) are already well represented in everyday life.
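To make the “stacked layers” idea a bit more concrete, here is a minimal sketch (our illustration, not from the interview) of a tiny feed-forward network in NumPy; the weights are random placeholders for the values that training would normally learn:

```python
import numpy as np

def relu(x):
    """Element-wise non-linearity applied between layers."""
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)

# Three stacked layers: each one is a weight matrix plus a bias vector.
# Stacking several of them is what makes the network "deep".
layers = [
    (rng.normal(size=(4, 8)), np.zeros(8)),  # input (4 features) -> 8 neurons
    (rng.normal(size=(8, 8)), np.zeros(8)),  # hidden layer
    (rng.normal(size=(8, 2)), np.zeros(2)),  # output layer (2 classes)
]

def forward(x, layers):
    """Each neuron computes a weighted sum of the previous layer's
    outputs; the non-linearity lets stacked layers model complex data."""
    for i, (W, b) in enumerate(layers):
        x = relu(x @ W + b) if i < len(layers) - 1 else x @ W + b
    return x

print(forward(rng.normal(size=4), layers))  # raw class scores for one input
```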
Irina reveals their popularity at the moment:

Neural networks are increasingly becoming the go-to method for solving machine learning problems.

Although not many real-world systems rely heavily on neural networks for now, the knowledge to solve many tasks with deep learning (or, more generally, machine learning) is already available. The projected evolution is for these methods to be adopted by businesses in the years to follow. So what makes these networks susceptible to attack or an attractive target for cyber criminals? This is what we wanted to find out next.
Prediction models can be sensitive to attacks that tamper with the data. Their attractiveness to attackers depends on the application the deep neural network is used for, as well as the potential gain a successful attack would unlock. You may wonder what level of sophistication attackers need in order to successfully attack a deep neural network. An adversary needs access to the data and some computational capacity, which varies with the scale of the attack (e.g. compromising one input as opposed to an entire database). As with any computing system, security measures should be deployed around deep neural networks to protect them. Prediction models are often robust to random noise in their inputs, so an attacker cannot rely on trivial heuristics to tamper with the data and will need more elaborate methods.
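The interview doesn’t name a specific technique, but the best-known example of such an “elaborate method” is the fast gradient sign method (FGSM) of Goodfellow et al.: instead of adding random noise, the attacker perturbs the input in the direction that most increases the model’s loss. A minimal, illustrative NumPy sketch on a toy linear classifier (the weights and the perturbation budget are made up for the example):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear classifier standing in for a trained model
# (weights are made up for the illustration, not learned).
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def predict(x):
    return sigmoid(x @ w + b)  # probability of the positive class

x = np.array([0.5, -0.3, 0.2])  # a benign input, correctly classified
y = 1.0                         # its true label

# Gradient of the cross-entropy loss w.r.t. the *input* x.
# For this model it has the closed form (p - y) * w.
grad_x = (predict(x) - y) * w

# FGSM: take one step that increases the loss, bounded by eps per feature.
# Random noise of the same magnitude would rarely flip the prediction;
# following the sign of the gradient usually does.
eps = 0.5
x_adv = x + eps * np.sign(grad_x)

print("clean prediction:       %.3f" % predict(x))      # ~0.83 -> class 1
print("adversarial prediction: %.3f" % predict(x_adv))  # ~0.39 -> class 0
```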

As they become more widespread, compromising deep neural networks could incur severe damage and loss.

But this is not something to worry about just yet, says Irina. The impact is limited for now by the fact that few systems or businesses rely solely on deep neural networks. As their practical use increases, so will the risk, and attackers will come up with new ways of putting prediction models to the test. That is why it is important to ensure that models are properly secured in all cases, but most crucially in critical applications.

So how do cyber criminals actually seek to compromise deep neural networks?

Irina shares some examples to paint a more accurate picture. One is trying to bypass visual systems that detect “persons of interest” in a group of people (e.g. CCTV footage). Tampering with the video stream would make the detection system fail, while to a human inspecting it the video would look harmless, so the attack would remain inconspicuous.
Another potential objective is related to media content on dedicated platforms or social networks, where the adversary can try to bypass systems that detect inappropriate or proprietary music and video. We couldn’t help but touch on the privacy concerns around deep neural networks and how this risk can be mitigated.

Deep neural networks, just like any machine learning algorithm, rely on data.

This raises the same privacy concerns as any other system based on data storage and processing. The machine learning community is actively working on algorithms and systems that ensure differential privacy, as well as distributed computing approaches that still preserve user privacy.
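One textbook building block behind differential privacy is the Laplace mechanism: add noise calibrated to the query’s sensitivity, so that any single person’s record has a provably small effect on the released result. A minimal sketch (our illustration; the bounds and the epsilon budget are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)

def private_mean(values, lower, upper, epsilon):
    """Release the mean of a dataset with epsilon-differential privacy
    via the Laplace mechanism: add noise scaled to the query's
    sensitivity, i.e. how much one person's record can move the result."""
    values = np.clip(values, lower, upper)       # bound each record's influence
    sensitivity = (upper - lower) / len(values)  # max change from one record
    return values.mean() + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

ages = np.array([23, 35, 41, 29, 52, 38, 47, 31])
print("true mean:   ", ages.mean())
print("private mean:", private_mean(ages, lower=18, upper=90, epsilon=1.0))
```

A smaller epsilon means more noise and stronger privacy; the art is in choosing a budget that keeps the released statistics useful.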
But what if deep neural networks fall into the wrong hands? Irina helps us assess the potential consequences: deep neural networks, and more generally machine learning algorithms, can be used by malicious actors in attacks where data is leveraged to improve their effectiveness. Prediction models can help automate certain tasks for scalability or refine a potential attack. While it may seem that only huge, global organizations can afford to implement defense methods based on deep neural networks, Irina believes otherwise.

AI methods can be used for security tasks like threat and fraud detection, spam detection or malware classification. The ability to automate these processes and handle massive amounts of data using machine learning can benefit businesses of all sizes. Their choice is usually between contracting an external service and building an in-house solution, which requires expertise. Although the cost of such a decision is not to be ignored, the potential added value to certain applications makes the case for the investment. Also, more and more organizations are making general-purpose prediction models available to the public at no cost, even for commercial use. In the future, we should expect to see built-in security as part of any product or service offering.
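To give a feel for the spam-detection case, here is a minimal sketch using scikit-learn (our example, with a toy four-message corpus; a real deployment would train on a large labelled dataset):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# A handful of hand-written training examples; a real system
# would learn from a large labelled corpus.
messages = [
    "WIN a FREE prize, click now",
    "Limited offer, claim your reward today",
    "Meeting moved to 3pm, see you there",
    "Can you review the report before Friday?",
]
labels = ["spam", "spam", "ham", "ham"]

# Bag-of-words features + Naive Bayes: a classic lightweight
# baseline for spam filtering that scales to huge mail volumes.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["Claim your free prize now"]))            # -> ['spam']
print(model.predict(["Can we review the report on Friday?"]))  # -> ['ham']
```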

When she peers into the future, Irina envisions a collaborative landscape that she believes would greatly benefit cyber security and tech in general. The further automation of tasks, some of it done through AI, will probably shape the near future of technology. We can also expect a more sustained effort to bridge the gaps between fields in order to solve real problems.
Raising awareness in the tech community that almost all systems need to be designed with security in mind, and that security should be taken into consideration from day one, is going to be as much a challenge as an opportunity in the coming years.
Only a few weeks left until you’ll be able to see Irina on stage at DefCamp, sharing her captivating take on “Efficient Defenses Against Adversarial Examples for Deep Neural Networks”! We hope you have your ticket, because they’re going fast!
Interview and editing by Andra Zaharia.
DefCamp 2017 is powered by Orange România and organized by the Cyber Security Research Center from Romania (CCSIR), with the support of Ixia, a Keysight Business, as Platinum Partner, and with the help of Bitdefender, SecureWorks, Amazon, Enevo Group and Bit Sentinel.
