
Interview: Deepen Desai, CISO and VP of security research at Zscaler

Cyber Security Connect recently had a chance to sit down with Deepen Desai, Zscaler’s chief information security officer and vice-president of security research.

David Hollingworth
Wed, 21 Jun 2023

We caught up with him at Zscaler’s Zenith Live event in Las Vegas and talked to Desai about the nature of threat intelligence, the dangers and promise of modern artificial intelligence (AI) and what keeps the CISO of a global security company up at night.

Cyber Security Connect: I’ve read a lot of threat reporting in the last six months, but I’m wondering if you can illuminate the process of gathering that kind of intelligence.

Deepen Desai: The reason we have been so successful in discovering a lot of these new things is that we made a decision about eight years back to structure the team and start hiring folks aligned with the kill chain.

We have folks that are just focusing on phishing — where the actors are getting in. So their expertise is all around that. Then we have a team that’s focused on vulnerability exploitation, which is usually the next stage. Then we have a team of experts that know everything about malware, and I’m not talking about manual analysis; they have automated a lot of stuff in that area as well.

One of the automations that I can talk about is where we’re able to automatically reverse-engineer more than 150 malware families. And we’re able to extract the configuration that the bad guys add in — we call it config extractor.

And then the final team is just focused on command and control activity — they are the ones that are tracking threat actor infrastructure. So we don’t stop at the point of “Hey, let’s block this payload” or “Let’s block this bad destination, and call it done”. My instruction to the team is I want to know the full campaign. This is just a piece of the puzzle — what happened before then, what will happen after that, and that’s the coverage strategy that my team uses.
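(The “config extractor” Desai mentions is tooling that parses a malware sample and pulls out the embedded configuration the operators baked in, such as command-and-control addresses. As a rough illustration only, with the marker, XOR key and layout invented for the example rather than taken from Zscaler’s tooling, an extractor for a hypothetical family might look like this.)

```python
import re
import sys

# Hypothetical layout: the config sits after a magic marker and is
# XOR-encoded with a single-byte key. Real families each need their
# own parser, which is why automating 150+ of them is significant.
MAGIC = b"CFG1"
XOR_KEY = 0x5A
CONFIG_SIZE = 1024

def extract_config(path: str) -> list[bytes]:
    data = open(path, "rb").read()
    offset = data.find(MAGIC)
    if offset == -1:
        return []
    start = offset + len(MAGIC)
    blob = bytes(b ^ XOR_KEY for b in data[start:start + CONFIG_SIZE])
    # Pull anything that looks like a C2 URL out of the decoded blob.
    return re.findall(rb"https?://[\x21-\x7e]+", blob)

if __name__ == "__main__":
    for url in extract_config(sys.argv[1]):
        print(url.decode(errors="replace"))
```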

CSC: How large is Zscaler’s research team?

Deepen Desai: We are now close to 200, and this includes our machine learning and data science experts as well. We’re tightly integrated.

So going back to your question, there was, for example, a zero-day exploit that was impacting almost all Microsoft operating systems. It was just a payload that we saw in the sandbox. But then, when we started piecing the puzzle together, we saw a threat actor group that was trying to go after a certain industry vertical.

It got caught in our sandbox, but if it had been successful, the next stage would be to establish persistence on that endpoint; they would then bring in things like Cobalt Strike or Mimikatz to move laterally and recon the environment.

CSC: And Cobalt Strike is a legitimate pen-testing tool, isn’t it?

Deepen Desai: Yeah, and it’s very expensive, but there are leaked versions, and it’s heavily abused. I mean, it’s a very good tool for red teams and pen testers, but it’s heavily abused by threat actors as well, and it’s now found in both crimeware and nation-state attacks.

Some of this stuff gets leaked, and some of this stuff is just purchased by them — look at the amount of money they’re making from all the ransom payments that are happening. They can afford a $5,000 tool.

CSC: Speaking of pen testing, many ransomware operators refer to themselves as pen testers, effectively providing a very expensive service to their victims. And I think you’ve mentioned yourself that getting hit by ransomware is a very expensive way to find out the weak points in your network.

Deepen Desai: It’s actually scary stuff, in my opinion. But — and maybe you shouldn’t write about this, because we don’t want them to stop doing this — we actually use their pen test reports to run red teaming — we do exactly what they did in the environment.

The goal is that if they’re using certain tools, tactics, and procedures at any of these stages, we should proactively run them ourselves and make sure our policies are configured properly to safeguard against such attacks.

CSC: So, on top of running Zscaler’s ThreatLabz, you’re also the company’s CISO. What kind of attacks are you seeing your company face at the moment?

Deepen Desai: Just like any other tech vendor, we do see our share of attacks. One that happened recently was when someone took a public recording of Jay [Jay Chaudhry is Zscaler’s chief executive] and put it through a machine learning model. They generated a clip and then called one of our employees. It sounded just like Jay because it was using all those recordings to come up with this custom message.

So the employee gets a call, and he recognises the voice … “Okay, this is definitely Jay.” And the caller says, “Hey, this is Jay. Can you do me a favour…”, and then it cuts off. Then it’s followed by text messages to this employee: “I’m in an area where my network is weak”, or “I’m calling from someone else’s number”, and it goes on from that point onward.

The underlying attack is still the same, though. They’re trying to scam the employee.

CSC: I know that AI is a part of Zscaler’s toolset — and a powerful one at that — but it’s also being used by cyber criminals and other threat actors, so it’s really quite a double-edged sword, isn’t it?

Deepen Desai: 100 per cent. It has so much potential that you will be left out if you don’t harness the power of AI.

We can actually leverage the ability of generative AI — in combination with some of our traditional models as well — to start predicting breaches because now it’s able to ingest large, large volumes of data. And it’s able to predict what’s going to happen next.

It is a double-edged sword — because there are a lot of unknowns. There is also a lack of education across various industry verticals. And I’ll give you a few specific examples.

We saw Samsung’s developers using it for code checks, which are called beautification or syntax checks — and many other companies probably did this, too.

Now the bad guys are also using these models, the public ones, to ask, “Does the model have any information about Company A? Did their employees make a mistake and submit stuff that I can extract, without even attacking them?” The second one is the pieces of code these models give out, which are very popular and which people are integrating into their products. If that code is vulnerable and you’re just blindly trusting it and integrating it into your code base, the bad guy just needs to know, “Okay, this is what ChatGPT is giving out for this use case. Let me write an exploit that scans the internet for anyone that has this type of code integrated into their library.” That’s another way of doing it.

The third one, and this was a proof of concept, but I’ve seen at least one vendor report an issue where [hackers] were using OpenAI APIs to download stage-two payloads. And the payload was getting compiled on the victim’s end, then executed, and they were able to bypass all EDR solutions.
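(On the second of those examples, the flip side for defenders is checking whether widely copied, potentially vulnerable snippets have already landed in your own code base before someone scans for them from the outside. A minimal sketch, assuming a Python code base and using SQL built with f-strings as a stand-in for the kind of risky pattern Desai describes; the pattern is illustrative, not one he named.)

```python
import pathlib
import re
import sys

# Illustrative risky pattern: SQL queries assembled with f-strings,
# a classic injection risk in copy-pasted or generated snippets,
# e.g. cursor.execute(f"SELECT * FROM users WHERE name = '{name}'").
RISKY_SQL = re.compile(r"""\.execute\(\s*f["']""")

def scan(repo: str) -> None:
    # Walk the repository and report any line matching the risky pattern.
    for path in pathlib.Path(repo).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), start=1):
            if RISKY_SQL.search(line):
                print(f"{path}:{lineno}: possibly injectable SQL: {line.strip()}")

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")
```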

CSC: That is pretty scary, but I imagine there are a lot of scary things out there on the other side of the attack surface — what keeps you up at night?

Deepen Desai: I’ll share my three.

Number one is insider threat — you know, the fact that these guys are paying employees US$30,000 a week to just gain temporary access.

CSC: Which … is a very human problem, not a technical one.

Deepen Desai: Exactly. So, how mature are you in your zero-trust implementation? And what kind of detection and response controls have you invested in that will spot some of these anomalies? Because they will use the access of an existing employee, but they will do certain things that the employee is not known to do. So I have a lot of investment internally in this area. Number one, for me, is insider threats.

Number two is supply chain attacks — and it’s the magnitude right now … as a tech vendor, I want to make sure we protect our customers, but then we also need to protect ourselves because we have stuff that we deploy as well.

Then the third is our public cloud infrastructure, because just like any other company, our employees are also working at a million miles an hour trying to innovate, right? They will have these public cloud instances spun up, and at times, some of these guys get creative; they don’t follow the process.

When we discover things after the fact, that’s a blind spot.
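(On the first of those three, the detection and response controls Desai describes boil down to baselining what each employee normally does and flagging departures from it. A minimal sketch, with invented event names and toy data rather than any real telemetry or any particular product.)

```python
from collections import defaultdict

def build_baseline(history):
    """Record which actions each user performed during a baseline window."""
    baseline = defaultdict(set)
    for user, action in history:
        baseline[user].add(action)
    return baseline

def flag_anomalies(baseline, new_events):
    """Yield events where a user does something outside their baseline."""
    for user, action in new_events:
        if action not in baseline.get(user, set()):
            yield user, action

# Toy data: an account that normally logs in and reads the CRM
# suddenly exports the customer database.
history = [("alice", "login"), ("alice", "read_crm")]
new_events = [("alice", "login"), ("alice", "export_customer_db")]

for user, action in flag_anomalies(build_baseline(history), new_events):
    print(f"review: {user} performed unusual action '{action}'")
```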

CSC: Another human problem.

Deepen Desai: It’s a human problem, yeah.
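(A common control for the public cloud blind spot Desai mentions is a tag-compliance audit that surfaces instances nobody has claimed ownership of. A minimal sketch, assuming AWS and the boto3 SDK, neither of which is named in the interview, and an invented tagging policy.)

```python
import boto3

# Hypothetical policy: every instance must carry these tags.
REQUIRED_TAGS = {"owner", "cost-centre"}

def find_untagged_instances():
    # Page through all EC2 instances and yield those missing required tags.
    ec2 = boto3.client("ec2")
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"].lower() for t in instance.get("Tags", [])}
                missing = REQUIRED_TAGS - tags
                if missing:
                    yield instance["InstanceId"], sorted(missing)

if __name__ == "__main__":
    for instance_id, missing in find_untagged_instances():
        print(f"{instance_id}: missing required tags {missing}")
```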

Cyber Security Connect was a guest of Zscaler at Zenith Live.

David Hollingworth

David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.
