Security Superfriends Episode 9: Clint Gibler

Security Superfriends | Clint Gibler, head of security research at r2c, co-founder at tl;dr sec

I think of it like democratizing static analysis – how can we take this thing that’s very useful and powerful, and just make everyone sort of a rockstar?

Clint Gibler is THE security renaissance man. He’s a security researcher with a PhD in computer science, an open source contributor (to Semgrep, which powers his company r2c), and he runs the tl;dr sec newsletter. If you’re into cloud native security, appsec, or DevSecOps, then you must follow Clint!

I was super motivated to interview this security superfriend. Why? (I hope he doesn’t cancel me for saying this.) He’s at the center of the shift left security movement. While “shift left” isn’t his favorite phrase, he confesses it’s his bag.

“So what we’re trying to do is – I’m gonna regret saying this, but ‘shift left.’ So how do we make it easy to comment on pull requests? How can we make it easy for an individual developer to add Semgrep to CI?”

Clint emphasizes that shifting left is nothing more than developer enablement. Enablement, for him, looks like this: “Hey, I’m trying to do my job, and we just got a bug bounty report for something I don’t want to find again!”

As a CISO who has worked in cloud native environments, I can testify to how real this is. It’s the right, modern approach to developer enablement (and one we are taking too as it relates to infrastructure as code).

I hope you enjoy this installment of Security Superfriends! And don’t forget to sign up for tl;dr sec!

Check out edited highlights below, and be sure to watch the video or tune in via podcasts on our Soluble channels on Apple Podcasts, Spotify, SoundCloud, and Stitcher.

Richard Seiersen: Tell us about your current company r2c and Semgrep as an open source project, particularly given your background in static analysis.

Clint Gibler: Sure. There are a lot of interesting things about r2c and Semgrep that are maybe not obvious if you’re not as familiar with the space. One thing that’s kind of neat is that there’s actually a long history behind Semgrep. It was originally called sgrep, and it was built by Yoann Padioleau, who was Facebook’s first program analysis hire.

So he built this tool, and they used it to enforce a ton of secure coding practices within Facebook. He ended up leaving Facebook, and years later he joined r2c, which at the time had a number of different products that were all very different from what we’re doing now. He made a bunch of improvements to sgrep, but it was actually just during the hack week where I was like, “okay, everyone work on whatever you want,” that he was like, “hey, by the way, there’s this tool I built a long time ago,” and that ended up being this massive thing. People loved it.

That’s sort of the core thing our company is building now, but the origins actually go back a number of years. I saw a huge amount of potential in it based on my personal experience, having interned at Fortify and built static analysis tools in grad school; there’s certainly a lot of value there.

Most tools, in my experience, take someone with significant domain expertise, maybe an academic background, or at least weeks of learning the custom domain-specific language they use. So one of the key, most exciting insights is: how do we make someone who’s just a normal security engineer or a normal developer able to build what is basically a very powerful linter that can capture security properties, or correctness and robustness properties as well?

I think of it like democratizing static analysis – how can we take this thing that’s very useful and powerful, and just make everyone sort of a rockstar, able to do it pretty well? How do we optimize for speed, ease of learning, rapid prototyping…sort of a tool for people who are a bit technical and competent, and willing to roll their own a little bit. Previous approaches were sort of heavyweight, standalone big boxes: you put stuff in, you get stuff out, and it’s very hard to customize and adapt to your environment. So we just went all the way in the other direction.
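To make that rapid-prototyping point concrete, here’s a minimal sketch (not from the interview) of what rolling your own check can look like: a few lines of Python that shell out to the Semgrep CLI with an inline pattern and print the matches. The pattern, target path, and helper name are illustrative assumptions.

```python
# Illustrative only: prototype a one-off check by shelling out to the Semgrep
# CLI with an ephemeral pattern (no rule file needed). Assumes `semgrep` is
# installed and on PATH; the pattern and target path are made up for the example.
import json
import subprocess

def find_matches(pattern: str, language: str, target: str) -> list:
    """Run semgrep with an inline pattern and return its JSON results."""
    proc = subprocess.run(
        ["semgrep", "--pattern", pattern, "--lang", language, "--json", target],
        capture_output=True,
        text=True,
        check=False,
    )
    return json.loads(proc.stdout).get("results", [])

# Hypothetical check: flag requests calls that disable TLS verification.
for match in find_matches("requests.get(..., verify=False)", "python", "src/"):
    print(f'{match["path"]}:{match["start"]["line"]} disables certificate verification')
```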

RS: That’s great. So maybe we can talk about the open source side; I’ve seen you’ve even done a bit of contributing yourself. Tell us about the Semgrep community – are they developers, are they security people? And how important has it been for the company to have this great open source solution?

CG: For many tools, the engine is closed source and the rules are closed source as well; more recently, CodeQL has a closed source engine but open source rules. To my knowledge, Semgrep is the only tool where both the engine and the rules are open source, so I think that’s a pretty neat differentiator.

I think that because it’s open source, it just appeals more to developers and security professionals, because they’re like, “Ah, I like knowing how things work. I like being able to extend it.” And yeah, it has actually caused a number of people to do drive-by contributions to the rules. They’re like, “Hey, I found this XXE in our repo, here’s the thing, I hope other people can benefit from it.”

So yeah, it’s been very neat. I would say once we got a certain amount of traction, a lot of our work became very user-pulled: people are like, “Hey, I really want to be able to do this.” Okay, cool, let’s take all the different asks people have and create an intuitive way to solve that same problem.

It’s in a very fortunate position, but it didn’t start that way. It took a lot of hard work and a lot of just being very responsive on Slack. We have a community Slack where people are always asking questions, and we try to give people answers. We generally try to respond pretty quickly, because we have a bunch of people sort of constantly watching as they go about building other stuff.

So yeah, it’s just making people feel empowered and enabled to not just rely on what we’ve provided. We tried to take the hard, lame parts about matching code and mostly abstract them away into an engine, so you can just say, “Hey, I’m trying to do my job, we just got a bug bounty report for something I don’t want to find again,” or “Show me all the other places this occurs.” We’re just trying to empower people to do that on their own, without being blocked by some quarterly update or something. It’s like, “No, just do your thing, we’ll support you.”

RS: That’s a great model. So think about the speed of cloud native development: small teams, distributed, a lot of freedom and responsibility. I’ve bought Fortify a number of times and rolled it out, and the Fortify world is very different from the Semgrep world. What have you seen in terms of the changes that cloud native brings, particularly in operationalizing something like Semgrep? Let’s say I’m a CISO and I’ve got a large, distributed team of developers. Am I the one who’s bringing in Semgrep, or is it somebody else? Given the velocity of cloud native development, what does it really mean to operationalize something like Semgrep?

CG: Traditionally, what happens with Fortify or Checkmarx or tools like that is that the security team is either scheduling or running the scans, maybe daily or weekly or something like that, and then they triage the results, and the things they believe are true positives they then communicate to developers. Meanwhile, developers have sort of moved on and are doing other things, and maybe you block a release or something on that.

Some companies try to send results directly to developers. In my experience, that almost never goes well, at least with those tools, when I’ve helped other companies do that as a consultant. So what we’re trying to do is – I’m gonna regret saying this, but “shift left.” So how do we make it easy to comment on pull requests? How can we make it easy for an individual developer to add Semgrep to CI, whether that’s GitHub Actions, Travis, GitLab, and so forth?

I will say – one thing I’ve seen that’s interesting in a number of companies is that you do need some sort of engineering buy-in to add another tool to CI, because generally engineering is responsible for the speed, uptime, and correctness of the CI pipeline, so you don’t slow things down. So again, that’s sort of why we prioritize being fast.
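As a hedged illustration of that CI point, here’s one way a build step could wire Semgrep in while staying fast: scan only the files changed in the pull request and fail the job on findings. The ruleset name, branch name, and file filtering are assumptions for the sketch, not anything prescribed in the interview.

```python
# Illustrative CI step: keep the scan fast by only handing Semgrep the files
# changed in the pull request, then fail the job if anything is found.
# Assumes a git checkout with an `origin/main` ref and the `semgrep` CLI in the
# CI image; "p/ci" is just an example registry ruleset.
import json
import os
import subprocess
import sys

def changed_python_files(base: str = "origin/main") -> list:
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [f for f in out.splitlines() if f.endswith(".py") and os.path.exists(f)]

def scan(files: list) -> int:
    if not files:
        return 0
    proc = subprocess.run(
        ["semgrep", "--config", "p/ci", "--json", *files],
        capture_output=True, text=True, check=False,
    )
    findings = json.loads(proc.stdout).get("results", [])
    for f in findings:
        print(f'{f["path"]}:{f["start"]["line"]}: {f["check_id"]}')
    return 1 if findings else 0  # nonzero exit code fails the CI job

if __name__ == "__main__":
    sys.exit(scan(changed_python_files()))
```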

RS: In preparation for this interview, I saw an OWASP talk where you used the term “guardrails not gatekeepers.” Could you speak a bit more to what you meant by that, and how it relates to what you were just describing?

CG: Yeah, definitely. And to give credit where credit is due, that’s highly influenced by my friends on the Netflix team and many others, so it’s not necessarily my idea, but something I’ve heard consistently from a bunch of people who I think are very smart and who I trust.

It goes back to how development has been changing and how security has been changing as well. If we think back to, say, the early 2000s: if you were building a web application in Rails, for example, you would have to add a little helper to make sure you output-encode this, so anywhere you don’t remember that thing, it’s going to be vulnerable. You see that in general for CSRF and all these different things, so in the early web, you needed to remember to do a thing to not be vulnerable.

More recently, a lot of frameworks are pretty good, and if you just use the defaults, it’s going to be secure by default, or, as I also call it, “guardrails,” or the “paved road,” as Netflix calls it. The key insight is that this isn’t just security; it’s just safety in general, and it goes back to habits and how people work.
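Here’s a tiny Python sketch of that shift, using Jinja2 purely as a stand-in for the frameworks Clint mentions: with autoescaping off you must remember to escape everywhere (the early-web model), while with autoescaping on the safe behavior is the default and opting out is explicit.

```python
# Secure-by-default vs. remember-to-do-it-yourself, sketched with Jinja2
# (pip install jinja2) standing in for the web frameworks discussed above.
from jinja2 import Environment

user_input = '<script>alert("xss")</script>'

# Early-web model: escaping is opt-in, so any template author who forgets
# the helper ships an XSS.
legacy = Environment(autoescape=False)
print(legacy.from_string("Hello {{ name }}").render(name=user_input))

# Guardrails / paved road: the framework escapes by default, so the safe
# thing happens without the developer thinking about it.
paved = Environment(autoescape=True)
print(paved.from_string("Hello {{ name }}").render(name=user_input))

# Opting out is still possible ("{{ name | safe }}"), but it's explicit,
# which makes it easy to spot in review or with a lightweight check.
```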

You don’t want to make it very complicated for your airbag to save you if you get hit, right? It’s like: how can we engineer all that complexity internally so it’s transparent to the user? Similarly, we don’t want developers to have to be security experts, because that’s years of study, lots of work, there are always new attacks, always new things. So basically: how can we enable developers to do what they’re best at, which is to produce features on time, meet business goals, and build scalable, robust, performant systems?

How can we enable them to do what they’re best at and, as much as possible, just not have to care about security – and when they do need to care, make it clear to them, so that maybe they reach out and ask for help.

So fundamentally, development is happening faster and faster; many companies are pushing to production many times a day. We just fundamentally can’t do the approach we used to – which is, “you can’t push until I say yes.” You could try that, but there are, you know, a thousand developers and two or three of you. So basically a lot of it comes down to business risk and business goals.

So what is the cost to the business of delaying features being pushed, versus how much risk are we mitigating by manually reviewing all the code? The idea is: how do we scale a security team with limited headcount? Okay, well, let’s build secure primitives, secure libraries, secure wrapper frameworks or services such that if developers use them 90% of the time, it’s safe, and when it’s not safe, we have lightweight checks to find those cases, like when you’re opting out of the protections we’re providing.
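As one rough illustration of those “lightweight checks to find the opt-outs,” here’s a hand-rolled Python script that flags explicit escape hatches like the ones in the Jinja2 sketch above; in practice this is exactly the kind of check you’d more likely express as a Semgrep rule.

```python
# Minimal, hand-rolled sketch of a "find the opt-outs" check: walk the repo
# and flag explicit escape hatches like the ones in the Jinja2 example above.
# The patterns, file extensions, and paths are illustrative assumptions; a
# real team would more likely encode this as a Semgrep rule than string matching.
from pathlib import Path

ESCAPE_HATCHES = ("autoescape=False", "| safe", "mark_safe(")

def find_opt_outs(root: str = ".") -> list:
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in {".py", ".html", ".jinja"}:
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for hatch in ESCAPE_HATCHES:
                if hatch in line:
                    hits.append((str(path), lineno, hatch))
    return hits

if __name__ == "__main__":
    for path, lineno, hatch in find_opt_outs():
        print(f"{path}:{lineno}: opts out of default protections ({hatch})")
```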

And Semgrep, I think because it’s lightweight and customizable, is a good fit for that. But really, I think the principle is: we can’t slow down developers too much, but we still want to scale security, so how do we make it easy for them to do the right thing and hard to do the wrong thing, right?

Be sure to watch the video for more, and subscribe to our Soluble channel to see more great episodes of Security Superfriends.

Want to catch the latest and greatest in Security Superfriends? Subscribe to our YouTube channel for past shows and updates, and listen on Apple Podcasts, Spotify, SoundCloud, Stitcher, or wherever else you get your podcasts.