JS Party – Episode #293

Web dev security school

with Ron Perris

This week, we’re joined by Ron Perris, a Security Engineer at Reddit and software security enthusiast. Together, we dive into best practices and common pitfalls, covering topics from dangerous URLs to JSON injection attacks. Tune in for an educational conversation, and don’t forget to bring your notebooks!

Sponsors

Convex – Convex is a better type of backend — the full-stack TypeScript development platform that lets you replace your database, server functions, and glue code. Get started at convex.dev

Appwrite – Build Fast. Scale Big. All in One Place. Appwrite is a backend platform for developing Web, Mobile, and Flutter applications. Built with the open source community and optimized for developer experience in the coding languages you love.

Changelog News – A podcast+newsletter combo that’s brief, entertaining & always on-point. Subscribe today.

Chapters

1 00:00 It's party time, y'all
2 00:39 Sponsor: Convex
3 04:10 Welcoming Ron Perris
4 04:46 Ron's background
5 14:55 Node.js Security Working Group
6 23:47 Sponsor: Appwrite
7 26:35 Where to get started with security
8 34:17 String injection-y stuff
9 39:42 XSS protection
10 43:40 URL-based script injection
11 50:41 Who's responsible for security?
12 52:38 Sponsor: Changelog News
13 54:19 On security teams
14 58:00 On project security
15 59:51 Sanitize & render HTML
16 1:04:45 Secure server-side rendering
17 1:05:55 Avoid JSON injection attacks
18 1:08:50 Use linter configs
19 1:10:54 Thomas Eckert's question
20 1:13:57 Why aren't people taught this?
21 1:16:10 The Loco Moco conference
22 1:17:14 Frontend vs backend security
23 1:24:14 Connecting with Ron
24 1:25:45 Next up on the pod (Join ++!)

Transcript

Hello, JS Party listeners. So excited about today’s show. With me today is none other than Chris. Hello, Chris. Welcome.

Hi. I know Chris is very monotone, but he’s really excited about today’s topic, because it’s about… JavaScript security!! And with us today is a very special guest, a former colleague of mine, Ron Perris. Welcome. Hello, Ron. Welcome to JS Party.

Hello.

Hello. Yeah, so Ron, I don’t want to butcher your intro, so why don’t you tell us a little bit about yourself?

Oh, sure. Yeah, so I’ve been working on some aspects of software for the last 20 years. Most recently, I’m at Reddit, and I’m an engineer there, and I’m focusing on code security.

Very cool. That’s some – yeah, a very tight intro. I’ll do the extended version, I guess…

Well, yeah, I mean, I could tell you a little more about what got me here, if that’s interesting to people…

Yeah, by all means.

Yeah, I think it’s been kind of an interesting road. I first got a job writing software – I don’t want to date myself too much, but in the late ‘90s. And back then, the developer, just like now, was mostly responsible for the security of their code. The organization I worked in didn’t have a dedicated security team that was gonna do any review, so the code I wrote, and whether I made security mistakes or not – whether or not the code was going to be secure was kind of hinging on that. So I ended up diving into that aspect of the software I was writing at the time, and I found in that a product that I could kind of roll out, and I started a small company with a friend. We built a software security company - this was like early 2000s. That got acquihired by a Swedish company. Then I was the CTO of a kind of a large Swedish computer security company called [unintelligible 00:05:58.23] for six years, and led product development there. I worked with the software engineering function. Yeah, so… I mean, that was an interesting journey, but all of that ended around 10 years ago, back in 2012; I started looking at what I was going to do next, and there was a kind of – JavaScript had just recently kind of been through some changes… We had used the [unintelligible 00:06:20.29] on the frontend of the application we were building at that startup, and so I was interested to see where JavaScript was going, and I started writing JavaScript code primarily, and started working with Node.js a lot more. And then through that work, I bumped into kind of the – I don’t think anybody could avoid it around this time, 2015 – the Learn to Code movement of like “Hey, let’s get everybody coding.” I kind of ran into that crowd of “Hey, we have code schools, and code camps, and code workshops on the weekend.” My experience up to that point, having been in the industry already for over 10 years, was – it was kind of like surfing; people knew about it, and you could do it, but not everybody was interested. You didn’t see throngs of people on the beach, just dying to get in the water. It’s just like, you know, people wanting to code, they learned, or didn’t; there were books, there were things, but… I saw that there was a huge demand suddenly; it was like everybody wanted to learn to surf. And there were a lot of people in my local area - because this thing was kind of localized, you know? These code schools were kind of like in your region. It wasn’t like, oh, everybody went to San Francisco; and you couldn’t even do it online at that point. And so because it was regional, I felt like “Well, wouldn’t it be cool to, as a lifestyle business or something, run one of these code schools while my kids are small, and really teach a few people software engineering from my perspective, since I’ve been doing it for a while?”

So I ended up starting a code school, I ended up scaling it up a little bit to a few instructors, taught almost 200 people how to code, helped them get jobs, and learned a ton of JavaScript in the process, learned a ton about where all the pitfalls are, and where all the problem spaces are that people run into learning how to do it correctly… And out of that, I kind of got myself into a weird space, which is like I was one of the few people that I knew of in my social circle that had spent time thinking about like modern JavaScript, and had thought about secure coding, and had thought about how do you teach somebody.

[08:08] So one of my friends tapped me, from – a more in the security industry friend tapped me and said “Hey, a lot of people are writing modern JavaScript, but they don’t know what the proper patterns are for frameworks like React. They’re not sure how to get it right. Would you mind sitting down and just like writing out a list (back in 2018), just a list of things that people should be aware of if they’re writing code in React, so they don’t make mistakes like cross-site scripting?” And I thought – maybe I was like “Yeah, I don’t know if there’s really much to write. It might just be like a really short blog article.” He wanted me to do it as kind of like a full-fledged course that was going to be taught in a corporate setting.

And so I took that project on, and I spent 40 or 50 hours of research to try to figure out “Hey, what exactly is the attack surface of a React application?” I’m like “Where’s the API that developers are expected to use in order to stay on the secure path? Where can they potentially pull an escape hatch and kind of pop out of the intended React security model?” And I think, Amal, you had mentioned you bumped into like one of the outcroppings of that work, which is a cheat sheet that was floating around that I helped author.

Yeah, it was a hot cheatsheet that we’ll link in the show notes. It’s hot… Yes, I think it’s hot. It’s 10 React security best practices. So it’s solid… But also just in general, I don’t know, you’re very humble, and I’m gonna fill in your bio and your background a little bit… But yeah, you do a lot of work with community, and especially in the security space, and I’ve always kind of thought of you as this person that’s bridged the security world and the info – whatever; InfoSec… I don’t even know. Am I using the right – is it InfoSec? I don’t even know, there’s some something-sec. AppSec, maybe. Is it AppSec? I don’t know. Something-sec world. What’s the acronym that I’m looking for?

So I think the reason that I might be put in that application security or product security space is that I built some products that lived in that space. So that original product that I built, where I was doing the software engineering aspect for a product that was a software service product around security - at the time, a team of application security people might use that product. Or like an InfoSec team might use that product. So I think that’s how I ended up understanding that world.

And then another thing I did is – there’s a community organization out there called OWASP, which is the Open Web Application Security Project. And they had local chapters, and they had this cool mantra of “rough consensus and running code” for how they run their organization. And they let anyone just start like independent chapters. And so I started one – I mean, I joined one in Orange County, kind of a rebooted one in Orange County, that wasn’t running at the time… And I ended up getting pretty involved in OWASP, in the sense that I ended up helping to found one of the larger OWASP computer security conferences that was happening yearly for six or seven years, called AppSec California. And at that conference, we would bring together like builders, breakers and defenders, and have multiple tracks where we talked about various aspects of like getting it right when it comes to application security.

So I think there was a lot of interesting things there that happened, but that conference served its purpose… And I also, from a community aspect, I guess, I was doing something interesting; I was looking around – this was right before I joined npm, on their security team. I was looking around at some of the libraries that I used, and I was looking at the code of some of these common libraries, and I was noticing not every library was perfectly written. Some of them contained security mistakes, of kind of a new variety, which is like - what if the library has something in it where when you use it, if you don’t really know how it works underneath the hood, you could potentially add a security bug to your application? But if you knew how to use the library correctly, you wouldn’t add that bug. These are like some of the harder ones to deal with from a security bug perspective. Because up to that point, developers - especially bug bounty-style developer reporting of bugs - you basically would go and say “Hey, you have this running application sitting on a website”, like, I don’t know, stripe.com or whatever, and like “I can show that I can exploit the bug by sending some input and measuring some response.”

[12:18] When it comes to like a library bug, there’s this whole debate where you’re like “Hey, you have this library, and I could see that most users, if they use it in the normal way, would inadvertently add this security bug to their application.” And then the developer who maintains that library might say “Hey, you’re holding it wrong.” Kind of a Steve Jobs moment, where “Hey, you’re calling my library incorrectly”, or “You’re passing the wrong set of arguments” or… And so you end up in this nuanced debate of “Well, shouldn’t the way that everyone uses it be the secure way? Shouldn’t there be some secure default idea here, where if I just use it–” Like, for example, I want to pop up some kind of modal dialog on the screen, and you have some kind of parameter called text. And when I put HTML in there, my expectation is that would go into text content, not in the inner HTML. But maybe under the hood, this library maintainer, for whatever reason, takes your text and puts it into the inner HTML every time. And so you end up in this kind of like nuanced debate with them of “Hey, should we fix that? Is that a problem?” And they might say “No, it’s not a problem. You just need to know more about how my library works.”
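
A minimal sketch of the secure-default idea from that modal example - `showModal` here is hypothetical, not any real library's API:

```js
// Hypothetical modal helper illustrating a secure default.
function showModal({ text }) {
  const dialog = document.createElement('dialog');
  // textContent never interprets the value as markup, so something like
  // "<img src=x onerror=alert(1)>" renders as literal text.
  dialog.textContent = text;
  // The unsafe variant the debate is about would be: dialog.innerHTML = text;
  document.body.appendChild(dialog);
  dialog.showModal();
}
```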

And so I spent time in the Node.js ecosystem security workgroup, looking through all of these types of bugs, and working with maintainers to try to get them fixed. And yeah, that was like another community thing that I worked on.

Yeah, that was dope. And that was kind of around the time that I think I started – I learned about you as a person, because we worked together at the same company, and funny enough, Ron and I were both there when npm was going through its GitHub acquisition, and… Yeah, Ron joined the team that I was leading to help tackle a lot of the due diligence kind of security bugs that the white hat hackers found… So that’s pretty typical when a company’s – you know, they will do their due diligence and they’ll hire people to look and see how secure your code is, and they want to make sure that you fix and take care of those liabilities before they acquire you, because they don’t want to take on that liability. And so I just got to see Ron at work, basically, in that space… And it was so fascinating to see how good you were at your job, Ron. You’re really good; you’re incredibly fast at like diagnosing and finding issues. And then when I peeled back the onion a little bit and I was like “Who is this Ron Perris?”, I was like “Oh, okay, he’s so good because he teaches JavaScript courses, he does all this community work…” I felt like you had so many different perspectives to bring to the table. So yeah, you’re awesome to work with.

I just must have had like a couple of good days, because I don’t know if those are true things about me… But I appreciate it.

I don’t know, I don’t know… But no, so an incredible background. So you kind of talked a little bit about the Node.js Security Working Group. Can you tell us a little bit about what that is, and all that jazz?

Yeah. I didn’t really know what it was, I sort of stumbled into it. I had found a bug, a security bug in a URL parser. And I know what you’re thinking, you’re like “Doesn’t Node have a built-in URL parser? Why would there be a library that parses URLs?” Well, because in the npm ecosystem at the time there was a library that had millions and millions of downloads, that did kind of the same thing as the built-in URL parser. And that was because, for historic reasons, that URL parser wasn’t ideal. And I think you even worked on some of the – you had mentioned, I think, the spec there had recently been updated, or Node had recently updated their implementation towards the spec… So I guess there was some gap in what the market wanted and what Node was offering. So there was some kind of URL parsing library with millions of downloads.

[15:52] And I noticed that when it was used to parse URLs, in some cases, you could give it some kind of input, and then it wouldn’t do like proper validation, and then instead it would reflect something in the output… And it would tell you something about the URL that wasn’t true. It would tell you “Oh, this URL is like an HTTP URL”, and when you asked the returned object’s property, it would be like “Oh, it’s HTTP.” But in fact, underneath the hood, it was a JavaScript URL. And for those of you who know, if you take a JavaScript protocol URL and you put it on a web page, it’s going to act quite differently than just any HTTP or HTTPS URL, because that JavaScript URL can execute code in the context of the current page.
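
A rough illustration of the mismatch being described - the flawed parser's behavior is hypothetical, since the actual package isn't named:

```js
// The browser's built-in parser reports the real scheme:
new URL('javascript:alert(1)').protocol; // -> 'javascript:'

// The buggy library would have claimed something like 'http:' for the
// same input (hypothetical reconstruction of the reported behavior).
```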

So I found this vulnerability, and I thought “Oh, this is something we should get fixed”, and I started looking for a place to kind of give it to, so I didn’t have to follow up on it… And that’s when I bumped into the Node Security Ecosystem Workgroup, which I think at the time had a few members, some from IBM, some from other places… And I said “Hey, do you guys take reports like this? Do you take a report on something in the ecosystem, and run it down and get it fixed?” And they had like a Hacker One page that was public at the time, and I thought “Okay, that’s a good place to report it.” And then I reported it, and then the page got closed down, and the program got paused. And I was like “Wait, what is this thing? Why is it stopping?” So I kind of dug in, I was like “Why did you guys stop taking reports?” and they’re like “Hey, you’ve got no idea, man… Like, we are taking reports for the whole ecosystem. So if anybody finds a bug in anything, they can just report it via our Hacker One program, and we have a limited amount of people that could do triage, and go in and look at these, and figure out if they’re valid, and work with maintainers to get them fixed.” So that’s when I decided “Oh, I should help here.” And I joined the workgroup, and I started trying to tackle the backlog myself, and triage not just my own vulnerability that I reported, but I went on to triage a lot more vulnerabilities reported by others.

I think that that working group served a purpose for a long time. I think eventually those reports got funneled to Snyk directly. I think Snyk was a big part of that working group, so they decided “Hey, why don’t you just report them to us, because we have a vested interest in triaging them, and making them into CVEs, and putting them in our tool, and having people become aware of them.” I don’t know, I don’t have a lot to say about all that, but I think that - yeah, the Node.js Foundation workgroup was pretty cool at the time. It’s now not there, from what I can tell. And there was an attempt to restart it elsewhere, but I don’t know. I mean, it’s a hard problem. Like, whose responsibility is it if there’s a vulnerability in some library ecosystem? Some people would say it’s the maintainer. I think Node in general, as a community, has a vested interest in like those types of things being followed up on, and having people look into them and figure out if they’re valid, and then drive them towards remediation. Because as you know, a lot of libraries that are heavily used might not be actively maintained.

And so, to be clear, this is a different working group than the Node Security Working Group.

Right. This is the ecosystem working group. So the Node Security Working Group is like “Oh, we’ve found a bug in Node, and there’s actually some problem with something, and then we’re gonna create a release of Node.” This is like there’s an ecosystem, and so the charter, I think, scoped all packages within the Node.js ecosystem. And there was I think an attempt to take some responsibility there.

There was a later attempt, I think, by others, that has continued, where people are trying to scope that down to just some important core libraries, and say like “Oh yeah, for this small subset of open source packages - we all continue to care about them, and like drive remediation for issues. But we’re not gonna take on the whole ecosystem.”

Right. That was an effort at IBM when I was there. There was like these – I don’t remember what we called them, but it was a list of popular core modules that they wanted to… I don’t know, just apply some resources to, for security and critical fixes and things.

[19:50] Yeah, it’s like the 80/20 rule, like “Protect these popular packages”, and you cover a wide base of your surface. So first of all, Ron, that was a lot you’ve just shared, lots to unpack… Let’s roll back a little bit. So first of all - whoa, in my silliness, I didn’t even realize that there was a difference between the Node Ecosystem Security Working Group and the Node.js Security Working Group. I assumed that those were the same thing, so thank you for clarifying that, Chris and Ron. And it’s really sad to hear that the Ecosystem Security Working Group isn’t really a thing anymore, but I completely understand why; that’s a massive undertaking to take on… I remember when I was at npm just being so impressed by the security team, like what a small and mighty team… And just like such few people managing to kind of protect such a huge, massive ecosystem. Malware detection, and all of that… Like, stuff just worked. It was very impressive in that regard.

I think a lot of that was Adam - Adam Baldwin - and his background. He had a deep understanding of so many aspects of security related to the Node.js ecosystem. I joined that team in an interesting way. I was running a conference dedicated to product security, which I could talk a little bit more about later, but… I was running that conference, and Adam came out as a speaker and spoke on the topic of npm security. And this was like a super-hot topic at the time, and I was doing the work in the Node.js workgroup, and I had an interest there, but I wasn’t necessarily looking to start working somewhere, like npm. But after talking with Adam, he kind of explained the problem space… There’s everything from like malware being served by the registry, to the packages themselves, and their security, to doing some instrumentation to try to figure out “Are there some behavioral patterns we can look at for how the packages act when they’re being installed, versus how they acted last time?” There were just so many interesting kind of product security/threat landscape detection… Just interesting problems there, that Adam seemed to really have his head wrapped around. And so I joined that team, and like you said, it was a small team.

Very mighty, though.

Yeah. We had some people behind the npm audit tool, which - however you feel about that, I think that that was a lot for the team (it was a small team) to write those advisories and keep them up to date, and try to manage that functionality, and the tool that was in front of so many developers… Yeah, it was a lot. But I think – probably you’ve talked about it a few times on this podcast, but we were all really excited at npm. We all had our little space, and we were all doing really fun stuff, but I don’t think any of us feel like we had enough time to really get to do the things we wanted to do while we were there.

Like you mentioned, I was on the security team, and then the acquirer said “Hey, there’s a list of vulns you need to fix.” And then I joined your team, and that was a lot of fun to go around and try to fix some of those vulnerabilities. But I think it’s interesting to see that some of your guests are now talking about problems that Adam was aware of even back then. When I joined the team, he had mentioned things like this recent manifest confusion; maybe it wasn’t the exact same thing that Darcy was explaining, but something similar to the fact that you’re auditing one thing, but you’re serving something else to the user. And some of these other problems that now you’re seeing tools get built for, and other companies are building around this solution space… I think very early on – you know, it’s crazy to think that just a few of us were trying to tackle any of that at npm.

Yeah, yeah, absolutely. No, I mean, I feel like that’s a show on its own, right?

With Adam, yeah.

With Adam. Yeah. I mean, I’m trying to get him to come on the show for a while… He’s just – you know, he’s Adam.

I think he’s in the Pacific Northwest somewhere, based on Twitter, like raising chickens, or something…

That’s so cool. That’s so cool. I’ll have to try again. It’s been a few years since I’ve tried.

Break: [23:31]

So much to get into in this show today, y’all… We’re gonna see what we can cover in the next 40-50 minutes. So what I really want our listeners to kind of walk away with is just like a better understanding of what the hell is JavaScript security, right? What does security of your application even mean, in a broad sense? What can you use tooling-wise, what resources are available, etc. But more specifically as well, as you’re using frameworks - you taught a course back in 2018 that was specific to like React security… And I see that you’re kind of updating that course to now use Lit, right? Which is cool. I’m like “Yay, Lit HTML!” I love Lit. So for your average developer, where do they start? Where do you even get started with understanding how to responsibly secure your code and your applications?

It’s interesting… I think I’m like a crossover artist here on this podcast, because I’ve spent a lot of time in the security community, and I’m obviously a developer… But I haven’t spent a lot of time talking with developers on podcasts like this, or at developer conferences. And I think that there’s a pretty big mismatch between the way that developers look at secure coding, and the topic, and the way that the security industry looks at it. So when it comes to the developers who are actually writing the code, they almost always get it right. That’s almost all code that’s–

That’s good to know…

What’s that?

It’s good to know. So people are sanitizing their inputs, and doing all the things…

Well, think about it… I mean, you’ve got bug bounty programs out there, and if people weren’t getting it right most of the time, then people would just be becoming millionaires all day reporting bugs for all this poorly-written software. So you’ve got to imagine that, at the end of the day, the stuff that’s out there in production is for the most part written in a way where it’s not easily exploitable. And for the kind of companies that we might end up working for, that’ll probably continue to be the case. So I think that you’ve got to be fair to the development community and say that they almost always get it right. 99.9% of the time the code they write is secure. So they know the patterns. They don’t need me to come on a podcast and tell them, because they do it all the time. I think where it gets nuanced is when you’re talking about like moving between frameworks, or moving to new frameworks, or new tools… I think there’s a moment where people aren’t sure what the recommended best practice is… And the hope is that the library maintainer or the developer of the framework has already thought that through for them, so that it’s easy for them to kind of stay on whatever you want to call it, the paved path.

The golden path…

Sure. Smooth path, I’ve heard… Yeah, there’s a lot of paths. Slippery path… In the React community, when they originally built that framework, I think one of the biggest contributions was the naming of that prop, right? They called it like something extreme, right? What is it called?

Oh, I have a song about this…

[29:22] DangerouslySetInnerHTML… What’s so dangerous about HTML’s inner parts…? I don’t know… But they’re dangerous… On the internet… Of React. React’s internet, of course.

Right? So that was like really cool, because that became something that the security community could talk a lot about, where they could say “Oh yeah, and watch out for React. You’ve got to make sure you don’t use dangerouslySetInnerHTML.” When I went and looked into React and tried to figure out how would you make mistakes, and looked at the code that our team had written at npm and other teams had written, I noticed that dangerouslySetInnerHTML usage is something that every single application has. So it’s not like people can avoid it. You’re gonna end up using it; I guess the hope is just like when you’re using it, just like when you’re using inner HTML, you’re just not going to make a mistake, and instead you’re gonna in some way guarantee that the stuff you’re putting in there doesn’t contain attacker-controlled scripts. I say that in a very specific way for a reason. It’s like, there’s a lot of lore about how you’re supposed to do secure coding. For example, maybe somebody says “Oh, you’ve got to sanitize all your inputs. Get those inputs sanitized, or validate them.”

[30:36] That’s me.

Okay…

That’s me. That’s what I say.

Yeah, sanitize and validate those inputs, you know? But that’s not always the control you want. It’s not always possible to do it in that way. It turns out when it comes to cross-site scripting in particular, what you’re really looking for is contextual output encoding. Because what you’re trying to do is you’re trying to say “At the time where I’m going to take the attacker-controlled value and do the dangerous thing with it, I want the result to be treated as benign.” And so you want some benign-equivalent characters, like HTML-encoded characters on the screen. That’s if you’re having to put it in a context where it could be treated as code. If you were to say “Oh yeah, I just want to get this on the page”, you should do something where you prefer an API like textContent over innerHTML, so that it’s really not even possible to make that mistake, because you didn’t leave yourself open for it.

I think dangerouslySetInnerHTML is like an equivalent of inner-HTML-ing something, where you’re like “Hey, I have this dangerous value, and I’m going to do something dangerous now. And hopefully, I get it right.” There is an alternative way to do this in React… But I’ve seen presentations by React experts, and they don’t seem to necessarily know the exact pattern for getting it right when it comes to user-controlled or attacker-controlled content, and then getting it on the page successfully, without using dangerouslySetInnerHTML. I don’t know if you’re interested in the nuts and bolts of that, but I could talk a little about that.

Oh, yeah. I mean, I would say dig in. Dive in, please.

Okay. So the way that the React library works is that it’s willing to create elements on your behalf. And so when you’re calling its APIs, and you’re passing props, those aren’t just like directly getting concatenated and placed in the HTML on the page. What they’re doing instead is like they’re using underneath the hood the DOM’s programmatic APIs to build those elements.

So if you imagine the simplest way, it’s like running createElement, running setAttribute, running those types of APIs in order to create a DOM node tree that you can then put on the page at some point. And so I think that when you go and use something like dangerouslySetInnerHTML, you’re kind of like skipping your way out of that entire API. And a way for you to get back to the point where you’re using that API to protect you - you’ll see that there’s a few libraries out there that do this. So let’s take an example: Markdown. Markdown is a common situation where you have some Markdown and you want to put it on the page. And often, what people will do is they’ll render that Markdown into some kind of string, and then they’ll take that string and they’ll put it in dangerouslySetInnerHTML. There’s libraries out there that go through a lot of steps in order to not do that, in order to do it in a more secure way. So what they’ll do is they’ll actually take that Markdown content and then they’ll create an abstract syntax tree out of that Markdown, and then they’ll walk that abstract syntax tree, looking for an allow list of elements and attribute types that are allowed to be in the rendered output, and then they’ll use React’s API to actually programmatically create each of those elements with those appropriate attribute types, so that it ends up on the page, but it’s rendered through the React rendering mechanism. And therefore, you get all the built-in guarantees of React dev stuff, where it makes sure you don’t use dangerous types of attributes, and makes sure you only use an allowed list of tags… Like, there’s all these security mechanisms in those rendering functions in React. So in that way, it keeps you on like that paved path. I don’t know that everybody knows that’s happening if they use the React APIs, but that is what’s happening underneath the hood.
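
A minimal sketch of that AST-walking approach - `parseMarkdown` is a hypothetical stand-in for a Markdown parser, not a specific library's API:

```js
import { createElement } from 'react';
// Hypothetical parser returning nodes like { tag, attrs, children } or plain strings.
import { parseMarkdown } from './markdown-parser';

const ALLOWED_TAGS = new Set(['p', 'a', 'em', 'strong', 'ul', 'li', 'code']);
const ALLOWED_ATTRS = new Set(['href', 'title']);

function astToReact(node, key) {
  if (typeof node === 'string') return node; // text children: React escapes these
  if (!ALLOWED_TAGS.has(node.tag)) return null; // drop anything off the allow list
  const props = { key };
  for (const [name, value] of Object.entries(node.attrs ?? {})) {
    // Real libraries also validate href values here (see the URL discussion below).
    if (ALLOWED_ATTRS.has(name)) props[name] = value;
  }
  return createElement(node.tag, props, (node.children ?? []).map(astToReact));
}

// Everything renders through React's element API - no dangerouslySetInnerHTML.
function Markdown({ source }) {
  return parseMarkdown(source).map(astToReact);
}
```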

Wow. No, that’s fascinating. And I can imagine with tools like Lit, that use tagged template literals, and all kinds of other string injection-y stuff… Has that world expanded for you? I don’t know, is the security world – I don’t know, you tell me.

[34:25] Yeah. So I joined Reddit a couple years ago, and I don’t know if everybody knows – my daughter doesn’t know what Reddit is. I don’t know – I figure we all do, and your audience… Because it’s 18 years old, right? It’s an old website that people used to use. And it’s basically like threaded forums, with comments, with like up and down voting, is what you can imagine. Reddit - there’s a version of it that’s still online called old.reddit.com. And that’s a very old application that uses Python, Jinja2 templates, and it’s doing something to render content and serve it. You’ve got that still running out there. And then you’ve got everything from that to like a version that’s built using [unintelligible 00:35:02.26] and using all these modern features. And then in between those two versions of Reddit, you’ve got a version built in React, you’ve got a version built using React Redux, which is like for a mobile viewport… So for me, as kind of like a JavaScript frontend security person, Reddit was a really interesting opportunity, because there’s so many codebases that are all frontends, that are all implementing Reddit. And you have our native clients, obviously.

So I think that when I look at the attack surface, and what I worry about, I look at like, hey, we’ve got this old thing, that’s using Jinja templates, and it’s kind of battle-hardened and tested, maybe by our users, maybe by bug bounty, but it’s probably pretty good… And then we’ve got like these couple of React apps where I feel like I understand the attack surface. I’ve done some research, I’ve published some information about it, and I feel comfortable looking at those codebases and recommending “Hey, here’s the secure path.”

More recently, we’ve started serving worldwide traffic through Web Components using Lit. So we’re authoring code every day that’s either written with native web components, or it’s written using Lit. And I can talk about kind of why we’re doing both, because I think that’s a little confusing… But if you’re using Lit, are you okay, I guess? I heard one of your podcast guests, I think talked about this.

Yeah, Justin Fagnani. Yeah, he’s like the author of Lit, and he’s –

He’s like “You’re okay”, right?

Yeah, right. Well, I mean, I don’t know; I didn’t specifically ask him that, but…

But I agree. Like, you –

…I want to hear your thoughts.

Sure. No, I agree. I’m not an expert on Lit, so let me just disclaim that. I’m catching up. But as I come into it with a different – I’m not a developer in the sense that I’m not trying to ship a web component. I’m coming at it from “What’s the attack surface that we’re providing to our developers, and how many things do they have to get right in order not to make a security mistake?” And so I knew that with React I could just say “Hey, avoid DOM mutation through refs. Hey, don’t use dangerouslySetInnerHTML, don’t use server-side rendering and then concatenate things to the end of the output.” Like, I had these patterns. When it comes to Lit, I’ve been doing discovery over the last year, and kind of trying to build those secure defaults into what we’re doing… And I’ll give you an example. Lit’s API provides unsafeHTML, unsafeCSS, unsafeStatic, unsafeSVG, templateContent… That’s not the complete list of escape hatches for ways to get outside of the protections that are being provided by the html function, which as you mentioned, uses tagged template literal syntax… Underneath the hood you know that that’s just another way to call a function, so that’s just a way to line up its parameters and replace values.

So I think that the protections that are being provided by Lit HTML are great, and if you know where all the places are where you have to be careful, it’s great. I like that the API names so many of the functions with the prefix “unsafe”, so you could get a linter - you could just run through your codebase and say “Hey, lint for all these common function calls, and then let me know if I used them, and point me towards docs if I’m not supposed to.” But I think there’s some other surprising APIs…
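
As a sketch of that linting idea, an ESLint config can flag imports of Lit's escape-hatch directives for review - the two module paths below match Lit's published directive files, but treat the exact list as something to verify against your Lit version:

```js
// .eslintrc.cjs - flag Lit's "unsafe" escape hatches wherever they're imported.
module.exports = {
  rules: {
    'no-restricted-imports': ['error', {
      paths: [
        {
          name: 'lit/directives/unsafe-html.js',
          message: 'unsafeHTML bypasses Lit escaping; review before using.',
        },
        {
          name: 'lit/directives/unsafe-svg.js',
          message: 'unsafeSVG bypasses Lit escaping; review before using.',
        },
      ],
    }],
  },
};
```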

[38:06] Yeah, yeah, I 100% agree. And for what it’s worth, I mean, Lit came out of Google. Google’s very, very conscious of security. In fact, a lot of these unsafe functions are stripped out of the internal library that’s used at Google; it’s widely used there. So they don’t even allow you to have the escape hatch… Which I think is great. I wish more companies did that, where they’re just sort of like “Nope, we’re not even gonna include this in our internal bundle. Not even happening.”

But yeah… And Lit, obviously – like, tagged template literals are a web standard, right? So it’s not like something that Lit is inventing. And since I feel like browser engineers really do think a lot about security when they’re implementing web standards, in that sense I do feel like yeah, it’s okay as well… It doesn’t mean you can’t do something dangerous, it just means – I don’t know, I feel a lot more confident using it.

Yeah. Going back to the original assertion, which is like developers almost always get it right - I think that developers will get it right, almost always, with Lit. It’s just a question of - when you’re switching from one framework to another, what are the things you’ve got to pay attention to? Because you are gonna end up in a scenario where you’re going to want to put attacker-controlled content on the page, surprisingly often; in the Markdown case, or others. So I think – yeah, knowing what those escape hatches are, and being able to identify them and configure your linters to catch them, and just doing that basic security work is a good idea.

Yeah, that makes sense. So what I’d like to do is to kind of focus on frontend greatest hits, and then server-side greatest hits. So we can start with some of the best practices on the frontend… So defaulting to use cross-site scripting protection when you’re doing data binding. Can you kind of talk us through that?

Sure, yeah. I think probably your audience knows how to do this correctly, but it depends on what context you’re in. I mean, if you’re on the server - which in a lot of cases as JavaScript developers we are these days - if we’re writing like some server-side-rendered React, or we’re doing server-side-rendered Lit work, you might be in a situation where you’re able to take some attacker-controlled content from a query, and then you’re trying to build a page, and you’re trying not to make this mistake that I’ve been describing, where you’re taking some attacker-controlled content that contains scripts, and putting it into a context where it can be executed as a script.

So what you need to know is there’s kind of like a source to sink mindset, where you’re like “I’m getting data from somewhere, like in a query, and then I’m gonna use it somewhere, which is the sink. And that sink - there are a certain number of them that are potentially dangerous.” So innerHTML is a sink, outerHTML is a sink… There’s a project, it’s actually built into Chrome, called Trusted Types, that identifies what all the sinks are for dangerous information that could be rendered in the DOM, and it helps you like write a policy to protect against things being placed there.
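
A minimal sketch of what a Trusted Types policy looks like, assuming a Chromium-based browser and the DOMPurify library that comes up later in the conversation (`element` and `untrustedMarkup` are placeholders):

```js
import DOMPurify from 'dompurify';

// With the response header
//   Content-Security-Policy: require-trusted-types-for 'script'
// raw strings can no longer be assigned to sinks like innerHTML;
// they have to pass through a policy like this one.
const policy = window.trustedTypes.createPolicy('sanitize-html', {
  createHTML: (input) => DOMPurify.sanitize(input),
});

element.innerHTML = policy.createHTML(untrustedMarkup); // allowed
// element.innerHTML = untrustedMarkup;                 // throws under the policy
```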

So let’s say you have this information, and you’re gonna put it somewhere dangerous. You have to know where you’re putting it in order to know what protection to apply. So if you’re putting it like in the content of an element, then you need to use like HTML entity escaping. If you’re placing it in an attribute context… Let’s say like you’ve got an attribute, and then you’ve got the equal sign and you’ve got like double quotes, if you allow double quotes in there, somebody will escape out of your double quotes and then continue to put their page content. So you have to, in that case, make sure that all attributes are escaped. Frameworks will often take care of this for you, and then once you’re inside of that context, then you need to go on and further say “Oh, I want to escape these values contextually, for this usage.”
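
To make the two contexts concrete, here's a hand-rolled sketch of the escaping involved - frameworks normally do this part for you, and `user` is a placeholder object:

```js
// Escape a value for HTML element content and quoted-attribute contexts.
function escapeHtml(value) {
  return String(value)
    .replaceAll('&', '&amp;')
    .replaceAll('<', '&lt;')
    .replaceAll('>', '&gt;')
    .replaceAll('"', '&quot;')
    .replaceAll("'", '&#39;');
}

// Element content context: encoded characters render as benign text.
const bio = `<p>${escapeHtml(user.bio)}</p>`;

// Attribute context: quote the attribute AND escape the value, so a stray
// double quote can't break out and start writing new attributes or tags.
const card = `<div title="${escapeHtml(user.name)}">profile</div>`;
```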

So I think the thing to remember when it comes to cross-site scripting is, first of all, apply content security policy, because you’re not going to get all this stuff right, so you probably want like a content security policy that’s gonna be another layer of protection. But then when it comes to preventing someone from putting scripts on a page, you need to take a contextual approach, and say “Well, where is this going to be used?”
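
A minimal sketch of that CSP backstop, set from a Node/Express-style server (`app` is assumed) - the directive values are illustrative, and a real policy needs tuning per application:

```js
// Send a restrictive baseline Content-Security-Policy on every response.
app.use((req, res, next) => {
  res.setHeader(
    'Content-Security-Policy',
    "default-src 'self'; script-src 'self'; object-src 'none'; base-uri 'none'"
  );
  next();
});
```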

[42:10] So that’s why I kind of made fun earlier of the idea that you could validate and sanitize the inputs to prevent cross-site scripting… Because until you know where on the page it’s going to be used, and in what context, you don’t know what exact encoding or technique you’re going to need to use in order to avoid cross-site scripting.

It seems like that’s a good opportunity for creating a vulnerability where you have this input that you thought you sanitized, and you’re displaying it in one place, and now you need to display it somewhere else. And you’re like “Oh, it’s sanitized, so let’s just put it out there. Yeah, we’re fine.” But it all depends on how you output it.

Yeah, totally. Kind of a code smell from a security team’s perspective is if you look at values in a data store and you notice that they’re escaped and sanitized in there - then you know something’s gone wrong with your security mechanism. Unless you were escaping them for SQL injection, and in that case it shouldn’t be noticeable, because the escaping gets consumed by the query; the values shouldn’t be sitting there escaped. Yeah, if they are escaped for like HTML context, and they’re sitting in the database, then you’re applying that control in the wrong place. You need to apply that control at the place where you’re using the data in the dangerous way, where there’s an interpreter present that could possibly treat that data as a script.

Yeah, that makes sense. Well, I learned some things… It may be obvious to you, security nerd… Not obvious to me, not security nerd. Okay, so I’m walking through your cheat sheet. So number two, “Watch out for dangerous URLs and URL-based script injection.” We talked a little bit about this, but I’d love to dig in a bit more. So…

Sure, yeah. So this is the idea that at times you want to take from a user what’s going to end up being a URL destination on the page… So you’re saying like “Hey, what is your social media profile?”, and then you’re taking their actual full string of what their social media profile is, and you’re letting them control that entire URL, and then you’re placing it on the page in like the href attribute of an element. And if you do that, and they give you something that starts with javascript:, which is a JavaScript protocol URL, then they can provide like some attacker-controlled script as a URL. So it’ll say like javascript:alert(1), or whatever, just so they can run something in the context of the page.

What I’ve often seen - it’s one of the areas in React applications where developers have to know the right thing to do. Every time as a React developer that you go to put an attacker-controlled URL in an href, you have to check it yourself, and make sure “Oh wait, is this an HTTP or HTTPS URL? Or is this a JavaScript URL? And if it’s an HTTP or HTTPS URL, I am cool to put it on the page. But otherwise, I shouldn’t put a JavaScript one on the page.”
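
A minimal sketch of that href check, using the browser's built-in URL parser so the security decision agrees with how the link will actually be interpreted (which also previews the "same parser" point below):

```js
const SAFE_PROTOCOLS = new Set(['http:', 'https:']);

function safeHref(userUrl) {
  try {
    // Resolve relative values against the current origin, then check the scheme.
    const parsed = new URL(userUrl, window.location.origin);
    return SAFE_PROTOCOLS.has(parsed.protocol) ? parsed.href : '#';
  } catch {
    return '#'; // unparseable input never reaches the href
  }
}

// Usage in JSX: <a href={safeHref(profile.website)}>website</a>
```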

I’ve seen this vulnerability at every company I’ve worked at, so it’s not uncommon that people make this mistake. And then I think React wanted to solve this. So they were like “Wow, this is kind of a big hole.” So they ended up building a warning in the developer console. So when this bug exists in your codebase, you will see a warning in the developer console. It says something about “Hey, you’re using JavaScript URLs, and they’re dynamic values, and you might be introducing cross-site scripting.”

It seems like it would need to be an allow list, because there’s so many URLs – like, each app has its own URL scheme now, especially on Apple devices, right?

Yeah, and you could see why I got really interested in URL parsers for a while there five years ago, because I’m like, okay, let’s say you’re building this security control where you’re checking URLs, and you’re like “Hey, what kind of protocol is this URL? Is it an HTTP or HTTPS URL?” Well if your library has a flaw in it, where it misidentifies what type of URL you’re working with, then you could inadvertently accept URLs that were, in fact, JavaScript URLs, and you were thinking they were HTTP URLs. And that’s kind of a computer security/software security maxim/axiom, which is like “You should use the same parser to make the security decision that you do to actually run the code.”

[46:11] So if you’re using, I don’t know, on the backend urlparse, that’s like some npm library that you chose, and on the frontend you’re actually using like the browser’s built-in URL parser, there’s been research on that that shows – I think it was [unintelligible 00:46:22.03] Research, that there’s a difference between parsers and how they treat things. So in that problem space you could easily introduce some kind of thing that passed the validator, passed the parser, but then when it’s actually used, it gets treated in a different way.

Yeah. I mean, for me you’re just highlighting one of the issues that I think I first had when I was writing Node code… And I know that rhymes; try not to laugh. Node code. You think, “Oh, JavaScript. Oh.” You think URLs work the same, like they do in the frontend. You think parsing is the same. There’s so many little gotchas, where it looks the same, but it’s really not, and it’s the little nuances that get you. Those are things that I feel like there should be a book on: “Okay, you’re writing JavaScript on the server now? Here’s some things you need to know. Here’s a hat that you need to put on to context-switch appropriately. This is not the browser. Fetch is not fetch.” A lot of things.

Yeah. We have a defense layer like that at work, and it’s an open source project. I think it just got archived, unfortunately… But here’s another example –

We can still link it, if it’s an archived project…

Yeah, it’s a really good project. It makes a lot of sense that they built it. What it is is you’re in this situation where you’re running code on the backend, and you’ve been provided with a URL, and you’re expected to use that URL to make a request. You don’t want to make requests to internal resources; you only want to make requests to like an external resource. So for example if someone’s giving you an image and you want to, I don’t know, grab it, and make a preview version of it or something… And so you’re gonna make a request from the backend, from some privileged service that’s sitting in your microservice fleet; you’re gonna be making a request, and if you inadvertently make an internal request, you could connect to an internal database server, or you could connect to something else, and maybe expose whatever data gets returned to the attacker.

So I feel like you want to do something right there; you want to do the right thing. And that vulnerability is called server side request forgery, and it’s a really common vulnerability that people are facing, because they don’t know what to do in that circumstance. They’ve been provided with a URL, and then what they do is they at the time they receive it, they look up its DNS, and they go “Oh, okay, it comes back to a public IP address. Okay, now I’ll store it in the database and I’ll consider it a valid URL.” Well, again, when you go to use it, at the time of use, DNS can change, right? DNS is not static. So it’s that whole time-of-check/time-of-use problem, where now you’re looking at a URL that resolves suddenly to an internal IP address, and then the attacker can use that. It’s called a DNS rebinding attack, to make a connection to that internal resource using that previously validated URL.
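
A minimal Node sketch of that time-of-use idea - not a complete SSRF defense (a real one also has to pin the resolved IP for the actual connection, handle redirects, IPv6, and more, which is why dedicated libraries exist). ipaddr.js is an assumed dependency here:

```js
import { lookup } from 'node:dns/promises';
import ipaddr from 'ipaddr.js'; // assumed dependency for IP range checks

// Resolve at the moment of use and refuse private/loopback destinations.
async function assertPublicDestination(url) {
  const { address } = await lookup(new URL(url).hostname);
  const range = ipaddr.parse(address).range(); // 'unicast' means a public address
  if (range !== 'unicast') {
    throw new Error(`Refusing to fetch ${url}: resolves to a ${range} address`);
  }
  // Connect to this exact address afterwards, so a second (rebound) DNS
  // lookup can't swap in an internal IP between check and use.
  return address;
}
```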

So what you want to do instead is use a library like Advocate in the Python community, where it does all of that stuff under the hood. It protects itself from DNS rebinding, it does that validation, it’ll accept like an allow list of URL components that are allowed within the request URL… And then it kind of does all that work for you. The only thing is you have to use it in the right place. You can’t use it to validate incoming data; you have to use it at time of use - when you go to make that dangerous request using that attacker-controlled data, that’s the place where you need the security control. Am I boring you to death? [laughs]

No, absolutely not. This is the look of a woman who was learning, okay? I’m learning. This is my learning face, okay? This is fascinating, and this is why I could never be a security engineer. I respect security engineers so much, but it’s not a job that I could do. I don’t think my arteries could handle it…

[50:00] The stress…

The stress, the “Oh no, did I forget to escape something?”, you know… [laughs]

Well, the best version of this job, of being a security engineer is being a product security engineer. So at that point you’re building the frameworks and libraries that other developers rely on in order to get things right. So I think one of the hosts of this show is building something that’s like an npm wrapper…

Yes, Feross.

That’s perfect, right? He totally – I’ve read a lot of his stuff…

Socket.

Yeah. That’s genius, because then no one else has to think about it. They’re just like “He did it. He nailed it.” Right? Like “I could just use this thing and never think about it again.”

Well, it’s funny you say that. So one of our listeners had a question… Josh Cramer had a question in our Slack channel. By the way, if you’re not in Slack, what are you waiting for? Join our Slack community on Changelog; join the JS Party channel, because that’s where the real fun’s at. So Josh Cramer asks “With the proliferation of frameworks that protect you against common issues (and I’m specifically thinking of client frameworks and cross-site scripting sanitization), does that reduce the responsibility of the developer to think about security, or does it just shift that responsibility elsewhere?” I think it’s a great question, and it kind of goes to the point that you’ve just raised, so…

Yeah, I think that’s exactly right. I mean, what is computing if not abstraction? We don’t understand every detail of how things are working under the hood. I’m not sitting around thinking about the opcodes on my processor. I trust someone got that right. So I think it’s the same with security, where if you’re using a library, and you’re using its render method, and you’re not doing anything exotic, you would expect that you’d be protected in some way with putting data on the page. And I think these libraries we’re talking about, like Lit and React - I think they meet that challenge. Like, they do the right thing almost every time. Where it gets dicey is when you have a common path; like, I want to put URLs on the page. That’s very common on the web. And every single person who wants to put a URL on the page in both Lit and React has to know about that thing I just talked about, about validating URLs. I feel like that’s surprising to people. Often you won’t let somebody construct the entire URL; it’ll be like a relative URL, or you’ll put something on the front of it… But anyway. I mean, people work with URLs. So there’s areas like that, where the developer is still left to get it right.

Yeah. I mean, I’m looking forward to better abstractions even there, you know? It’d be great to just even abstract that, right? Someday. So yeah, we’re only on like number two of this list. I don’t know, this is gonna be a three-hour show. Do we need to do like do a bathroom/stretch break, everyone?

[laughs] Well, people could read the list, and they’re developers, and so they could totally look at it and consume it. That’s the intent.

Break: [52:38]

So I wanted to – you said something about how software engineers mostly get it right, but that this is not how security teams see the situation… What does that mean? Like, what’s the difference?

Oh, I’m gonna blow your mind here, Chris. Okay… [laughter] So how many people on your average security team do you think write code at all at work?

Are you asking me?

Yeah, just your general feeling.

Fair enough. I don’t know which companies you’ve worked at, but I don’t know if that’s a condemnation of those companies, but–

I think they’re writing more tooling code. They’re writing more scripts, and they’re checking… It’s like internal spyware for good, you know? [laughs]

Totally. Like automations, right? Security automations… Yeah, I think that’s generally true. I think there’s –

I guess – what is a security team to you? Because I mean, I’m thinking… Okay, so if I’m thinking like a security team, like –

Wait, should I light a cigarette? Do we need –

No, no, no, what kind of security team? I don’t know… What are we talking about?

No, it’s a good point. There’s organizations within most organizations called the security organization or department. And within that, there’s often people with job titles like security engineer; there might be a product security group that owns some of the features of the application… Like, in a modern SaaS organization, you might have some kind of security organization that has folks in IT who are engineers, who are responsible for things like authentication, and authorization, and they’re building those solutions and providing them to the rest of the org. This is in like super-modern organizations, where product security has taken a foothold. I think what you’re describing is the security organization that’s also doing corporate security, and like talking to you about losing your laptop, and those types of things. Often those folks won’t be writing a lot of code.

So I think that when you talk about somebody like a [unintelligible 00:56:10.23], or somebody who is the leader of a security organization - all the people that work for them at the company can be various types, including software engineers, including product security engineers, including corporate people who might do something with paperwork, or lost laptops. Yeah, so those folks, they don’t necessarily – they only really get involved when there’s a security incident, and when there’s a mistake made. And that’s what they focus on at their conferences: “Oh, there was a breach”, “Oh, there was data loss, there was this problem…” And so you end up with this lens on the developer community of “Oh, the people in the developer community are making mistakes.” And I just have a different mindset there. I’m like, they are not making mistakes. Almost every time they don’t make a mistake. And so if we want to contribute, we need to build things; we need to take responsibility and become product security engineers, or integrate in the organization and get things built, or build libraries and frameworks, contribute back to them, so that it’s easier to do the right thing.

It definitely seems there can be an adversarial relationship between security and, I don’t know, product development, or something. Right?

Yeah, there’s some natural tension there. On the security side lately we’ve been saying a lot of like “Oh, we want to be the team of saying yes”, or “We want to be like the team of enablement.” But the reality is if you’re not jumping in and like actually writing code and getting things fixed and building secure defaults - yeah, then what is your role? At that point you’re an outside critic. You’re just looking at what they did, and telling them possibly the place where they did it wrong. Or maybe you’re running some tooling that finds some stuff, and in a lot of cases they’re false positives… Maybe you’re telling them about a bunch of CVEs and their dependencies, and they’re just like “Whatever…”

“Whatever…” [laughs]

So I think if you want to make a contribution, you’ve probably got to be working at the library framework level, or building those security features into the product that your company offers.

[57:59] Right. That makes sense. So you’ve mentioned product security a bunch… So can we just kind of maybe round out your definition, and also just what your experience has been working in this space?

Sure. Yeah, I think the line between your traditional application security team and a product security team is blurry. Sometimes this is like some of the same engineers. But where the work is different is - an application security team is trying to address the security defects in all applications. And so they’re not like in the nitty-gritty of each implementation detail of like exactly what library or framework was chosen, or like what linters were configured, or like why or why not some strategy on server-side rendering, where a product security engineer would be present for those decisions and might be making them.

So if I was on a team, embedded as a product security engineer on a frontend team that was building something with Lit HTML, I would be responsible for those types of decisions, like making sure those linters get configured, and making sure that the parts that are dangerous are being addressed by actually shipping PRs and writing code to make sure that everybody else was protected. It’s like being a specialist in the sense of like an accessibility expert, right? We have frontend developers that know all about accessibility, or internationalization, or they know a lot about performance. It’s just like another – security is really just another aspect of software quality, and so if you’re a product security engineer, what you’re doing is you’re embedding with the teams and then you’re trying to add to that aspect of the software quality, and make sure that the software that comes out of your team is more secure than it would have been if you weren’t present on the team.

Yeah, that makes a ton of sense. You’re kind of there to enforce and guide and shepherd on those aspects. And I really love the analogy that you used around accessibility, internationalization, because I agree, those are also specialties where there’s usually an advocate on the team, or within the company, or whatever… So yeah, absolutely. Great analogy.

So I’m determined to make it through this list, y’all. I don’t know if we’re gonna have time to cover the Node stuff… We might just have to invite you back to talk about that. But sanitize and render HTML - we’ve covered a little bit of this already; I’ll couple that with “Avoid direct DOM access.” So can you talk about those? The fun stuff.

Sure. Yeah, so sanitizing and rendering HTML. So you’ve decided that you have attacker-controlled content, and you have it inside of this HTML as a string, and you want to get it on the page. And this isn’t that exotic of a use case. This happens often with things like – say I gave you some text, and you autolinked it. Like, you ran through it, and every place where it looked like I was mentioning a URL, you turned that into a clickable anchor tag. Now I have this thing, and I want to put it on the page. And in this case, we were talking about React. So you’re probably looking at something like dangerouslySetInnerHTML again. But you might not know, like “Is the stuff in there dangerous? I just want to make it clean.” So in the Python world we have bleach, in the JavaScript world we have something called DOMPurify. So DOMPurify is this easy button of a function within a library, where you just say “Oh yeah, I want to use DOMPurify, and then I just want to call it on my stuff, and whatever comes out is now going to be safe for the context I plan on placing it in, which is - I’m going to place this into the page as HTML content.”
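
For those following along at home, here’s a minimal sketch of that pattern in a React component; the `html` prop stands in for the attacker-controlled, autolinked markup:

```js
import DOMPurify from 'dompurify';

// `html` is attacker-controlled markup, e.g. user text that has
// already been run through an autolinker.
function Comment({ html }) {
  // Allowlist-based sanitization: script tags, event handlers, and
  // unknown elements/attributes get stripped.
  const clean = DOMPurify.sanitize(html);
  // React still treats this as raw HTML, but the string has been
  // cleaned for the HTML-content context.
  return <div dangerouslySetInnerHTML={{ __html: clean }} />;
}
```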

So under the hood, DOMPurify is doing some work. And the implementation of DOMPurify, what it’s actually doing is it’s creating an HTML template tag, and then it’s using an allowlist-based approach of known safe elements and known safe attributes that can be added to that template. And then after it gets done taking your attacker-controlled HTML and turning it into a nice template element made out of DOM nodes, it will actually take that - and this is where the scary part is - turn it back into text, and then have you put that into the inner HTML using dangerouslySetInnerHTML.

So I don’t know if you caught what the problem is there… So DOMPurify, if you look at its track record as a security mechanism, it often has bypasses. It’s like, you have a version, and it’s working great, and it’s escaping everything, and then someone out there will figure out “Oh, if you put like a single tick, and then a back-quote, and then you do this…”

[01:02:07.03] Oh yeah, it’s a moving target. Is DOMPurify something that Google maintains, or…? What’s the one that Google maintains that’s always like the gold standard sanitization library?

There have been a few at different times. DOMPurify is probably the most popular library that’s used for this purpose. I think the caveat there is if you’re gonna do something dangerous and you’re gonna use a library like that, you’ve just gotta make sure it’s up to date. I think I heard that under the hood Lit is doing something similar, with building templates and then using those templates to create HTML content. I haven’t looked at the mechanism for how they do that - whether it’s a direct translation, where they’re grabbing an abstract syntax tree, or they’re literally walking DOM nodes and recreating them, or whether they’re taking the template and turning it back into a string. I doubt they’re turning it back into a string. But if they were, that same vulnerability that’s in DOMPurify, that kind of intrinsic design flaw, would also exist there. Yeah, but - yeah, I guess there’s no easy button when it comes to that process. If you use DOMPurify, great; just keep it up to date, because potentially there’ll be a bypass.

Yeah, and I wish this was a thing. Somebody please make it. I feel like there should be a way to have a category of packages in your package.json where you’re like “Always keep this up to date.” I mean, maybe you can use the caret, and whatever… But even that isn’t good enough, I feel like, because developers can ignore that. So maybe you could use Renovate, or whatever… But how do you force an upgrade, basically? That would be cool.
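
There’s no “always keep this fresh” bucket in package.json today, but npm’s overrides field (Yarn calls its version resolutions) gets partway there: you can force a minimum version for a package anywhere it appears in your dependency tree. A sketch, with the version range as a placeholder:

```json
{
  "overrides": {
    "dompurify": ">=3.1.0"
  }
}
```

That still relies on someone bumping the floor when a bypass ships, which is where a bot like Renovate or Dependabot comes in.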

Yeah. I remember when the folks at npm shipped the feature on the website where you could see the distribution of downloads. So you can look at a project like DOMPurify, and you can look at - over the last month, who has been downloading which versions? And so there, you can actually see the attack surface of that library in the wild, because you can see “Oh, look at the old versions. Those are the ones that are still potentially vulnerable.”

Darn it… See, this is why – I mean, this is why I couldn’t be a security engineer. I always feel like I’m on –

It’s all depressing, right?

Yeah. And I’m easily depressed, I guess. I don’t know. My mojo is just like “Oh…”, you know…

I think the way that you sell the DOMPurify solution is you don’t say all the stuff I just said. You just say “Use DOMPurify”, and that’s the end of the sentence.

Yeah. And just [unintelligible 01:04:22.22]

So that’s the funner way to do it. Like, “Hey, I’ve got this great way to do it.” I just don’t – I don’t feel super-comfy when I look at a codebase and there’s a lot of DOMPurify usage. It kind of feels like “Oh, we’re doing this a lot. We probably needed like a programmatic way to do this correctly, using the React APIs or Lit APIs.”
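
One programmatic option in that direction, as a sketch: DOMPurify can hand back live DOM nodes instead of a string, which skips the serialize-back-to-text step described earlier entirely:

```js
import DOMPurify from 'dompurify';

const dirty = '<img src=x onerror="alert(1)"><p>hi</p>';

// RETURN_DOM_FRAGMENT returns a sanitized DocumentFragment, so the
// output never round-trips back through a string and innerHTML.
const fragment = DOMPurify.sanitize(dirty, { RETURN_DOM_FRAGMENT: true });
document.getElementById('comments').appendChild(fragment);
```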

Yeah, absolutely. So number five on your list is “Secure server-side rendering.” So there is insecure server-side rendering, I guess, given that you use the word secure…

Yeah, think about this scenario… Like, okay, I’m on this server, I’m doing a bunch of stuff, I’m building an element tree, I’ve got it in whatever framework - I’m using like React - and then I’m going to be outputting that as a string, and then sending it to the client for hydration. Well, what if when I run that renderToString method, I had some other stuff I wanted to include on the page? Maybe, I don’t know, I have some extra analytics tokens, or some other string-based HTML… Concatenating that onto the end of the output - that’s what gets rendered in the browser. That content has to also be – you have to make sure it doesn’t contain any attacker-controlled content. So if you’re going to take the output of a secure library like React, that’s done all this work to render things correctly, you don’t want to then append something dangerous to it using string concatenation.
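
Concretely, the anti-pattern looks something like this sketch, where `analyticsSnippet` stands in for whatever extra string-based HTML you were tempted to append:

```js
import { renderToString } from 'react-dom/server';

const App = () => <p>hello</p>;

// Imagine this string picked up attacker-influenced content upstream.
const analyticsSnippet = '<script>/* attacker-controlled */</script>';

// React escapes everything inside the element tree...
const appHtml = renderToString(<App />);

// ...but nothing protects what gets concatenated afterwards; the
// snippet reaches the browser verbatim, outside React's escaping.
const page = `<!doctype html><div id="root">${appHtml}</div>${analyticsSnippet}`;
```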

I see. That makes sense. It’s like, don’t bring a friend to the party, basically. Right? [laughs]

Totally.

Got it. Okay, so check for known vulnerabilities in dependencies.

I think we got that.

Yeah, we’ve talked about that a bit. Sorry. So… Avoid JSON injection attacks.

[01:06:00.20] You know, when I wrote this cheat-sheet, I never thought, Amal, that you’d be reading it to me and have me go through it with you. This is – check that off the list for life accomplishments.

Aww… Dreams can come true, Ron, you know.

Right? So I guess what I’m talking about here - there’s a common pattern where you’ve got some state… And this was written around the time that Redux was very popular; I don’t know if it still is… But you’d often want to say like “Oh, I have some state from Redux, and I want to put that in a script tag at the top of the page”, and then when React loads, it’s gonna have that stuff in context, so it can just grab the initial Redux content, and then it can get our state up and running. Well, often what’s in that state is like cached values from previous application state. And so if you take attacker-controlled content from previous application state, and you concatenate it into a web page, it can escape out of whatever context it’s in. So if it’s in a JSON context, it could just use quotes to get back into the HTML page content context, and introduce scripts to the page. So what you want to do instead is you want to run a serializer over your JSON, and make sure that all the values inside your JSON are properly escaped for the JSON context, and they don’t have any characters in there that can be used to escape out of that context.
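
A sketch of both the attack and the fix; the state value is contrived, and serialize-javascript is one real npm library that does this kind of escaping:

```js
// Attacker-controlled value that ended up cached in application state:
const state = { lastSearch: '</script><script>alert(document.cookie)//' };

// Naive: the browser sees the inner </script>, closes the tag early,
// and executes the attacker's script.
const unsafe = `<script>window.__STATE__ = ${JSON.stringify(state)};</script>`;

// Safer: escape the characters that can break out of the script
// context before embedding. (serialize-javascript on npm handles
// this plus a few more edge cases.)
const json = JSON.stringify(state).replace(/</g, '\\u003c');
const safe = `<script>window.__STATE__ = ${json};</script>`;
```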

Wow… See, I would have thought that that would be redundant. I’d be like, why would I need to serialize JSON? It’s already serialized. But it’s like no, no, no, you know?

The values themselves, right? It’s about where you’re putting it. So if you have a block of JSON - just the text that defines a JSON object - and you’re planning to put that in an HTML tag as part of a document that you’re planning to send to the browser, that’s really the problem, because the browser doesn’t know what you’re up to. It just knows, “Hey, you’re sending me some text, and in that text it looks like you define an HTML tag, and you start talking about a JSON object, and then all of a sudden you escaped out of the JSON object and started just writing regular old JavaScript.” And you know how browsers are - they don’t like to error out and drop a web page that says “Oh, can’t render.” A browser just tries to do the right thing. So it’s like, “I think I know what you want me to do; you want me to read the cookie and send it to the attacker-controlled domain, or something.” [laughter] And it just goes ahead and does that.

Right. Oh, gosh… Alright. And then - use non-vulnerable versions of React. We talked about this already. Just keep your packages up to date, people; just try to keep things moving. I know it’s really, really hard in the JavaScript ecosystem, because there’s this little gnarly thing called peer dependencies… And the peer dependency matrix is like living hell on earth, and it’s kind of getting worse… And you see a lot of like this TypeScript kerfuffle going on, where people are just like “TypeScript, I’m throwing you out!” I think that’s some fatigue around having to maintain and keep this matrix in check, right? So anyway, keep your stuff up to date.

And this one’s really important, number nine - use linter configurations. This is like a super low-hanging fruit. Can you tell us a little bit about that?

Sure, yeah. I think that there’s great ESLint libraries for React. And within those basic ESLint libraries for React there are rules that help you avoid some of the mistakes we’ve been talking about, or at least flag it so that you can pay proper attention to it… And I think the same is becoming true with Lit. So if you’re using that, I’m sure there’s a – so just go out there and try to find your framework-relevant linter package, and then look through the rules. Some of them have a security-related context.
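
As one illustration of what those configs can look like - a sketch; check the plugin and rule names against your own setup:

```js
// .eslintrc.js - rules from eslint-plugin-react and Mozilla's
// eslint-plugin-no-unsanitized that flag the sinks discussed above.
module.exports = {
  plugins: ['react', 'no-unsanitized'],
  rules: {
    // Flags dangerouslySetInnerHTML so it gets a deliberate review.
    'react/no-danger': 'warn',
    'react/no-danger-with-children': 'error',
    // Flags raw innerHTML / insertAdjacentHTML-style assignments.
    'no-unsanitized/property': 'error',
    'no-unsanitized/method': 'error',
  },
};
```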

Nice. Yeah, that’s awesome. And the one that you point out is ESLint has a React security config. That’s one example. And so how much of that is a moving target, Ron, where you need to keep that updated? I’m curious, is it fairly static in the linting world, or is it also a moving target, where you really want to make sure that’s always up to date?

Yeah, I think once you have a set of rules that you’re happy with, then you’re good to go. What I’ve noticed is like as – like, when you work at a commercial company like I do, and we have reports coming in through bug bounty, that’s one of the first things we look for, is “Oh, we made this mistake here in the codebase. I bet it has like friends that are written in a similar way. Let’s try to find that linter rule that meets this need, and then maybe we can apply that linter rule to all projects, and catch all instances of this going forward.”

[01:10:06.22] It’s a version of like just-in-time developer training as well, where developers might not even be aware that some pattern they’re using in code could potentially lead to a security defect or bug. And so yeah, just helping them get those linters turned on when you do find a bug.

That makes sense. Yeah, and you just brought up an important topic about reporting, which we’re gonna get into in just a second. And the last thing - this is obvious, we talked about this a bunch already… Number 10, avoid dangerous library code, things like dangerouslySetInnerHTML, and all these other unsafe kind of properties. Just find a more programmatic way to do them. I loved your suggestion earlier of like, you know, if you’re really doing this a lot and you have a need for this, have an API return text to you, or whatever it is, right? Handle this in a more secure handshake. I really loved that.

So we got through your cheat sheet. Yay. Yeah, 70 minutes later. One of our listeners had a really great question, which relates to something that you just said; his name is Thomas Eckert, and he posted this in Slack, so I’m going to read it verbatim. “For applications I’ve wondered before if there are people you can hire to evaluate your running application for common flaws. I try to be careful about dependencies, I read up on CVEs… I feel like the greatest threat to the security of my application is me.” And I plus-one that for Thomas. I would agree. And so can you hire people to just poke holes in your code and give you a report, question mark?

For sure. Yeah, I mean, there’s a lot of tools in this space, commercial tools and open source tools, where – I think the categories that you might think about here are static analysis tools. So tools that look at the code as it was written, and try to discern whether or not there’s a security mistake. Linters would be the most lightweight version of this, but there’s full-on static analysis suites that will take your application and do source-to-sink analysis and try to figure out if you have vulnerabilities. They generate a lot of false positives, and they’re often run by security consultants, the commercial ones especially, because they just have a poor signal-to-noise ratio - high on the noise, low on the signal - so developers don’t often want to be the audience who runs those… But yeah, there’s static analysis tools.

There’s something called dynamic analysis, which I think bug bounty is the most common version of that; you’ve got somebody who’s looking at your running application and trying to poke at it and say “Oh, if I give this input, I get this response” kind of thing. There’s also automated versions of that. [unintelligible 01:12:22.09] tool that you could run against your running application, and it’ll tell you if you have vulnerabilities. Again, you’re gonna get a lot of false positives, but if you’re interested, definitely worth running. Of course, you could hire consultants to do this type of work, if you have money. And companies hire employees to do this type of work. It’s a whole function called red teaming within companies; there’s offensive security engineers that get hired, and they spend their days looking at applications - either applications that are totally unknown to them, where they don’t have any special knowledge, or they’ll know the application well, and then they’ll still try to do an exercise where they attempt to compromise the security of the application.

Yeah, that makes sense. Yeah, I would say if you’re talking to your manager, or your tech lead, or you are a manager or a tech lead, or whatever, you’re in a position to advocate for this kind of thing, I would say totally worth it to have like a real person come and poke holes at your code. But on the other side, I would say it’s also really worth it to get an accessibility audit done. So those are the two things I feel like are really worth it for teams, especially once you have some traction for your app, and it’s being used by people in the wild… Yeah, get an accessibility audit; even if your linter says you’re perfect, it never hurts to get feedback from someone. The accessibility auditors actually use people who themselves use screen readers, so you can’t really replicate that with a linter… And then same thing for security; if you can get a person to look at your code, I highly recommend it.

[01:13:51.24] And so we’ll kind of like close off with this really important existential question, Ron, which is why are people not taught about this? I feel like it’s such an important thing, but no one teaches developers on average how to secure their code, or whatever. And I know you’ve made us all feel good about ourselves by saying we get it right more often than we get it wrong, but I feel like there’s still fundamental things that we don’t think about when we’re programming.

It’s a very interesting space. I bought probably 20 books on secure coding in my life, and I would say more than half of them don’t have any code inside of them, which is surprising. So the folks who actually have been in this industry historically aren’t necessarily like code-level in their recommendations, surprisingly, for secure coding.

So I think that as the developer community becomes more aware of this problem space, and wants to build more stuff in, like they have into React and Lit and other frameworks, I think that those security decisions and those secure coding patterns are going to be left to those folks who decide that this is a specialization they’re interested in, and then they’ll be building it in, so that the rest of us benefit.

I don’t believe in a future where every developer figures out all of this stuff I’ve been talking about, and then applies it every day while they’re at work. That’s just unrealistic. You can hear it from me trying to talk about it - it’s so nuanced, and every case has caveats. So I think it’s just an engineering focus, and I think that they just haven’t taught it historically because it hasn’t been valued in the marketplace.

Yeah, that makes sense. That’s a great answer. It really is nuanced, and it’s a moving target. That’s the other thing, is it’s nuanced and it’s a moving target… Which is why, if you have the budget, hire some security engineers at your company, or on your team. If you have a sizable codebase, or a growing codebase, this is like money well spent, right? Because it really is a full-time job.

And I guess I would also say if you’re a security-minded person who’s doing software development currently, you should check out product security roles, because in a lot of companies that’s a specialization that exists, and there are roles that are dedicated to this. And a lot of times, those product security teams will hire people who are just regular old software engineers, and then they’ll let them focus on the software security aspects at work.

I have a conference in Hawaii that we throw somewhat yearly, and it’s focused on product security. So we get together and talk about that.

What’s the name of the conference, Ron? Or is it a secret?

It’s called the Loco Moco Security Conference. The easiest way to find it is on Twitter. Yeah, we’ve had a lot of really good guests in the past come and talk about various aspects of authentication, access control, how to get it right when it comes to content security policy… All these types of topics.

Nice. And you have another conference coming up in April, right?

Yes, we do.

Is this like the first one post-COVID?

I think we’ve had two since COVID. We had a virtual one; we had to pull the trigger on the virtual.

And then we had another one – we’ve hosted them on different Hawaiian islands, so I think the last one was on Oahu, and we’ve had them on Kauai, and on the Big Island in the past. And the idea is just to get together with the small number of folks out there who are interested in this topic, and have a lot of sessions where we can share best practices. It’s a single-track conference, which is cool, because then there’s no hallway track. We’re all kind of locked into the same content, so it better be good.

Yeah, yeah, yeah. No, that’s so cool. And then - I have to ask this, but like JavaScript security versus real backend security… So we didn’t have time to really get into the Node stuff, and I would love to have you back on the show to dig into this, maybe with some other folks from the Node team as well. But can you talk about this a little bit? Because there’s some number of things that you have to know for kind of securing your frontend application code, but then your server really is for me where it gets, like – I don’t know, honestly, servers are just like buggy state machines… It’s like you’re always plugging holes, in terms of how to secure it. I don’t know…

[01:17:56.19] Yeah, I guess I’m making some assumptions about the design of people’s systems, and just the organizations where I’ve worked or worked with, and I’ve noticed that there tends to be a microservice fleet somewhere, of applications that are all talking to each other. They might not even be talking HTTP; they might be talking gRPC, or some other intermediary protocol… And that fleet of services doesn’t always have the exact same security concerns as more of the forward-facing code.

I think if you come into a company like myself, and you say “Hey, I know how to write JavaScript, or TypeScript, and I want to help with the security function”, often what they’ll point you towards is they’ll say “Oh, well, we have these frontends”, which are things that are visible to people and they can interact with. The backend of the frontend would be something that’s like responsible for rendering, or maybe it’s some small services that intermediate conversations between the microservice fleet and those services that are running like out in the production web space. And so I think that as a JavaScript security person, if you’re thinking about “Well, where’s the attack surface for me?” it’s not really those like deeply-embedded microservices on the backend. Maybe a few companies use Node there; I’m sure they do. But I think like for the companies I’ve talked to, we tend to be more responsible for stuff that ends up rendering a web page, or ends up receiving data that came directly from a web client or a web API. And so just knowing those types of problems are where you need to focus.

So we have OWASP ZAP, which is a tool to automate dynamic checks of a web app’s security, right? But I’m wondering if there’s a tool that would do the same thing, except for the backend. Now, it seems to me that sort of thing may or may not care if it’s written in Node, or Go, or anything like that, and we’re just kind of checking… You know, maybe we’re looking to see “Alright, are they running–” I mean, people have been doing this for years. Are we running Nginx, or are we running Apache? Like, what’s serving up the backend? And any known exploits…

Yeah, I’ve given some trainings on this. I don’t know if I can talk about this on a JavaScript podcast, but I’ve given trainings on Golang…

Anything goes, Ron… JavaScript is the universal solvent, okay? I established that a few episodes ago, so… Yeah.

Yeah, outside of organizations where they run Node exclusively, people do – on the backend they might write their microservices in Go or Python. And I’ve worked with those teams, and I guess what I’m saying is it’s just a different set of problems. Like, you might be worried about things like “Oh, I have, I don’t know, command injection vulnerabilities”, or “I have data store injection vulnerabilities” - in SQL, or whatever the datastore layer is; a MongoDB injection. And a lot of the flaws in those microservice fleets are related to too much trust between microservices. So kind of all the microservices end up believing whatever they’re told by the other microservices in their request context, without validating it. So they might say “Oh yeah, I guess you must have permission to do this thing you’re asking me to do, other microservice, because we’re all back here on the backend, and you’ve asked me to do it.” And they might not check access control, and they might not do a good job of authenticating who’s connecting to them… So a lot of the vulnerabilities on the backend are around authentication, access control, injection in data stores…

And then you have another whole problem space of business logic flaws, which is like “Should this microservice or whatever even do the thing it’s being asked to do?” And sometimes it doesn’t have all the context to know if it’s supposed to be doing what it’s being asked to do. So you have a bunch of logical checks inside of there, and that’s when you get into conversations about “Oh, you should have some kind of generalized access control authentication and authorization framework.” Every single microservice shouldn’t be responsible for like rolling their own version of access control, and being like “If you’re like this, and if you’re like that, then you could do this.” It should be done in some centralized service, where they could say, “Hey, according to policy, I was just asked to do this, and I’m this type of thing. Should I be doing this?” And then they can find out from some centralized place where there’s a policy whether or not that particular thing is allowed by the policy.

[01:22:04.21] But yeah, not often written in JavaScript; often written in other programming languages.
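
The shape Ron describes is language-agnostic, but here’s a sketch in JavaScript for this audience. Every name in it - policyClient, the action string, issueRefund - is hypothetical, just to show a service asking a central policy engine before acting:

```js
// Hypothetical stand-in for a client of a central policy service;
// in reality this would call out over gRPC or HTTP to a service
// that evaluates the request against organization-wide policy.
const policyClient = {
  async check({ principal, action, resource }) {
    return principal != null && action === 'orders.refund' && !!resource;
  },
};

async function issueRefund(orderId) {
  return { refunded: orderId }; // placeholder business logic
}

async function handleRefund(request) {
  // Ask the central policy service instead of rolling our own checks.
  const allowed = await policyClient.check({
    principal: request.callerIdentity, // who is asking (already authenticated)
    action: 'orders.refund',           // what they want to do
    resource: request.orderId,         // what they want to do it to
  });
  if (!allowed) throw new Error('Denied by central policy');
  return issueRefund(request.orderId);
}
```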

Yeah, yeah. That makes sense. And thank you for sharing that insight. We didn’t have time to get into all the server side stuff. Like I said, if Ron will agree to come back…

Yeah, the backend of the frontend. We’ll talk about Node.

Yeah, the backend of the frontend, yes. Because remember, someone – actually, Gleb Bahmutov, who is one of my favorite people on Earth, said “Oh, Amal, if you really want to learn and get into security, just curl GitHub, and just look at all the headers, and go understand what each of those is doing, and you’ll come back with a better understanding of what the hell security is.” And that was almost a decade ago; I don’t even remember when he told me that. But it was so enlightening to me, because I learned about that cool content security policy thing that’s like – you know what I’m talking about, right? Where it’s like, you can only run code from these origins. Or you can only run code from this same origin, which I thought was like such an easy thing to –

You might be thinking of like Content Security Policy?

It’s a monster of a header at this point. I think we’re on Content Security Policy version three. There’s a whole bunch of really cool stuff you could do in there, and you can get really nuanced about what’s allowed around the page, where can it come from…

Exactly.

Yeah. I think GitHub has a really good content security policy; the folks who wrote it did a really good job. And it starts with kind of a deny-by-default approach, where it’s like “By default, you can’t do anything. And let me tell you what you can do.” Content security policies in the real world don’t typically end up that way; they kind of end up like “Hey, let’s write a policy that kind of outlines everything we’re already doing. You’re allowed to embed from YouTube, and you could do this from Vimeo, and you could do this from this place…” And those ones are a little more leaky and easy to bypass. But yeah, if you can build a strong content security policy that explains “All of our scripts come from this place. None of them come from the page contents itself; they all come from this external domain that we’ve defined. So don’t execute anything you end up finding on the page, unless it follows this policy.” I think I quickly mentioned that earlier - Content Security Policy is something you want as a defense-in-depth layer, because you’re not always going to get it right when it comes to escaping.
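
For reference, a deny-by-default policy of the kind Ron describes might look something like this, set from Express middleware; the static domain is a placeholder:

```js
const express = require('express');
const app = express();

app.use((req, res, next) => {
  res.setHeader(
    'Content-Security-Policy',
    [
      "default-src 'none'",                    // nothing unless listed
      "script-src https://static.example.com", // no inline scripts
      "style-src 'self'",
      "img-src 'self'",
      "connect-src 'self'",
    ].join('; ')
  );
  next();
});
```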

So yeah, with that said, Ron, where can people connect with you? Where can people learn about your courses, and all the things?

Yeah, I’m just @RonPerris on Twitter. You can find me there and talk to me. My conference - you can come there and meet me in person, and we can hang out in Hawaii. Sure.

Yeah. And are your classes – I know you had some courses. Are those still on the interwebs?

I could probably dig up some links, but I think my focus has mostly been at Reddit, and trying to secure those five frontends…

Yes, yes, that’s true. Oh, yeah, on the almost five frontends… Which means – yeah, it’s a signal to let you get back to work. So it’s been an absolute pleasure having you on the show today. You really took us to school, and we’ll put lots of links in the show notes, everyone. It’s just been so fun, so educational. We’ll have you back to talk about the frontend of the backend, and… Did I say that right?

Either way…

Backend of the frontend…? I don’t consider – I mean, it depends on what layer you’re using… For some people, Node is their only backend. And for a lot of people it’s the middle, you know?

It depends.

It does.

So we’ll invite you back to talk about that. And so with that said, have an amazing everything. We have a really great show next week, so stay tuned for that as well. And with that said, have a great day, everybody. Cheers!

Changelog

Our transcripts are open source on GitHub. Improvements are welcome. 💚
