Ship It! – Episode #66

Do the right thing. Do what works. Be kind.

with Rob Mee, CEO of Geometer.io & former CEO of Pivotal


Why are the right values important for a company that changed the way the world builds software? How does pair programming help scale & maintain the company culture? What is it like to grow a company to 3000 employees over 30 years?

Today we have the privilege of speaking with Rob Mee, former CEO of Pivotal, the real home of Cloud Foundry and Concourse CI. Rob is now the CEO of Geometer.io, an incubator where Elixir is behind many great ideas executed well, including the US COVID response programme.

Featuring

Sponsors

Honeycomb – Guess less, know more. When production is running slow, it’s hard to know where problems originate: is it your application code, users, or the underlying systems? With Honeycomb you get a fast, unified, and clear understanding of the one thing driving your business: production. Join the swarm and try Honeycomb free today at honeycomb.io/changelog

Akuity – Akuity is a new platform (founded by Argo co-creators) that brings fully-managed Argo CD and enterprise services to the cloud or on premise. They’re inviting our listeners to join the closed beta at akuity.io/changelog. The platform is a versatile Kubernetes operator for handling cluster deployments the GitOps way. Deploy your apps instantly and monitor their state — get minimum overhead, maximum impact, and enterprise readiness from day one.

Flatfile – Data import is broken. We fixed it. Flatfile’s powerful out-of-the-box solution takes the data import burden off your shoulders, freeing you to solve bigger business problems and build products that people love.

Notes & Links


Chapters

1 00:00 Welcome
2 01:03 Sponsor: Honeycomb
3 02:30 Intro
4 07:40 Pivotal
5 10:45 How was this successful
6 17:47 ALL the layers working together
7 20:21 Pairing
8 25:38 The Pivotal values
9 32:27 Sponsor: Akuity
10 34:40 Did it work?
11 41:38 RabbitMQ
12 44:30 Cloud Foundry
13 48:16 Elixir
14 50:18 Got a real world example?
15 57:00 Sponsor: Flatfile
16 59:03 What a way to hire. How?
17 1:01:47 Optimizing for shipping it
18 1:02:31 How to act on user feedback
19 1:03:39 Do you use Pivotal Tracker?
20 1:04:44 Wrap up
21 1:07:24 Outro

Transcript



Play the audio to listen along while you enjoy the transcript. 🎧

It’s 9:06 AM, and I’m fairly certain that this will be the one that I will always remember. Welcome, Rob.

Thank you. Good to be here, Gerhard.

So let’s imagine that this is a real stand-up between you, me and our listeners. Would you like to start?

Sure. As you probably know, depending on the organization, as stand-ups get bigger, you can’t really have everybody speak. You need to do something a little more optimized, and ask people to volunteer anything that they have found interesting, and what they’ve done, or perhaps ask for help from their colleagues. And so I guess if I’m going to start, I’d ask you if you have anything interesting to report.

I do have something interesting to report. I discovered how our Dagger Homebrew bottle gets updated. This is the one in Homebrew core. And I wasn’t expecting a human to be involved, but there is one. I was convinced that our pipeline had it covered. Apparently not.

So that was something interesting for me, because I didn’t know how that worked. I made an assumption, and it was the wrong one. And all of us believed the same thing, because we have a Homebrew tap, and that’s the one that gets updated, but not Homebrew core. So when our users install the Dagger CLI, they’re getting an outdated one, because there’s a human involved, and that human hasn’t updated it to the latest version - and we ship every week. So that’s something interesting. How about you? Anything interesting to share?

No, not –

Not related to Homebrew or Dagger, for sure. [laughter]

Right. Yeah. And I’m trying to think if I need help on anything either, and I don’t have any help to ask for at the moment. How about you?

Or if you’re blocked on anything. Well, I would like to ask Andrea for some help. There’s a PR which is blocked on him, and another one on Tanguy. So Andrea and Tanguy, if you’re listening to this - obviously in the future, and I’m sure there’ll be a PR that’s blocked on one of you by then - can you help me unblock them, please? Because there’s nothing else I can do.

Also, I’m going on holiday, so everyone, please continue looking after the PRs and issues while I’m away, because maintainer duty has been on my mind. When I’m back, hopefully there won’t be tens and tens of them which haven’t been closed. But if we’ve done this maintainer duty correctly, then the load will have been spread; it won’t be just me. So let’s see. This is a test of how well it works in practice.

I wish I could offer you help on that, but I’m afraid I can’t. I’m not well-versed enough in Dagger.

Not today. Maybe another day. And on that, shall we finger-snap? That’s obligatory, right? It’s a stand-up. This doesn’t finish until someone snaps their fingers.

Well, snapping fingers, or clapping, or stretching and clapping - there seem to be a lot of variations these days. But yes, we can finger-snap if you like.

[05:57] I just love the finger-snap. It’s so sharp… Like, listen to that. That’s just like, you know – it’s something that you don’t do often. Clapping - you may enjoy someone’s talk, and you may clap. But finger-snapping, once you do that… I think that’s one of the things which I miss about the Pivotal stand-ups, where many people used to finger-snap… And that was quite something, because you don’t normally see people finger-snapping.

I recently received a bit of swag from a group that – it’s a fairly large group now of open source government contributors, who are sort of crowdsourcing fixing government software systems, in a way. So the organization - I donated to them some time ago, so I’m on their list for getting updates and occasionally receiving swag as a thank you, and I received a coffee mug that had a “Stretch and clap”, and then they had their logo on it. So apparently, they identify very strongly with stand-up and the way that they do the end of stand-up. It was a little bit surprising to me to see that.

Stretch and clap… I don’t remember the stretch part. I remember the clap part. I think there were some differences between regions…

Yeah, I’ve seen it at multiple regions at Pivotal. And now, companies that have some connection or some overlap certainly in the communities of these organizations… But this is something that sort of spun out of Code for America… which has lots of overlap with Pivotal. But anyway, it’s interesting to see some of these practices being as widespread as they are.

Yeah. So we’ve been talking – well, we mentioned Pivotal a couple of times… Pivotal was a 30-year journey. I don’t think many know that. It grew to become 3,000 people, IPO-ed in 2018, and was acquired by VMware in 2019. I was part of it briefly, but you were there from the beginning, all the way to the end. How was this journey for you? It’s a long one, and I’m sure a great one.

Yeah. Well, I certainly can’t complain about having had the opportunity to go on such a journey, such an evolution, and watch it go through various phases. Certainly in the ’90s we were a small band of people who did projects, and then disbanded, and traveled the world, and came back and did more projects… So as a sort of early lifestyle business it was fantastic.

I’ve had many people starting out, looking at starting their own companies, ask me about that, saying “Would you recommend that? Or do you recommend starting as early as you can, and building something and spending all your time working?” And I think my usual response is that that was a very rich and fulfilling time of my life - doing really good work with amazing people, on extremely interesting projects, and then not doing it, and alternating between the two.

Of course, then in the early 2000s really settling down and building Pivotal Labs, and seeing it grow and become more well-known was incredibly rewarding as well. And then having it be acquired in 2012, and spun out again, turning into Pivotal Software, and growing that to eventually go public, and build a cloud platform, and so on… It’s an opportunity that I wouldn’t have wanted to miss. And a lot of people have asked me - you know, after it was acquired the first time, and then we spun out and got much bigger quite quickly, it was certainly much more complex as a business model. Complicated. Lots of different groups and people who were not necessarily steeped in the Pivotal Labs culture, and had to come around to their own relationship with that, and how we worked, and everyone finding their own accommodation, and going through a lot of pains of having – I wouldn’t call them growing pains, but pains of having a lot of people who did not come from that background being thrust into it, and eventually finding their way into it in a good way. I think it ended up sustaining the culture and maintaining it as we got really quite big, from my perspective.

[10:27] And people would ask me, “Do you wish you had sort of stopped after being acquired the first time? Because this seems difficult, and it seems really complicated.” And I’ve always responded, “No, I think having had the opportunity to grow a company to that size and go public is something I wouldn’t have wanted to miss, for sure.”

Being part of Pivotal made me realize that a company which gets to thousands of employees can successfully combine people, process and technology, while staying open source, while being open and transparent about how it does things, and being on a healthy growth trajectory. What would you say was the reason behind this success and this good combination of all three elements?

I think the key to successful growth is really having a very highly intensive collaboration among all the people that work at a company. And when I say that, I’m framing it abstractly, but I am primarily talking about our practice of pairing wherever possible. Because you’re really asking “How do you keep a consistent culture as you grow and as you get to scale? How does that happen?” We certainly see many companies working on trying to have a corporate culture by doing off-sites and activities designed to help people bond together… And it always struck me that, “Well, if your workplace was oriented around that, why would you need to go somewhere else? In other words, what are you doing at work all day, if you’re not bonding, trusting each other and relying on each other, and getting to know each other sort of as that company community? Why do you need to go away and go somewhere else to do that?” It seems like work is not that then, by implication. And it’s not that we’re trying to make work a family, or something like that. I’ve always kind of felt that notion that sometimes companies will put forward felt a little bit false to me. I think people have families and community outside of work too, and they need time to do that, time to focus on that. But when you’re out at work, I think really there should be such a level of collaboration that you are bonding, and you are getting to know each other, and forming relationships.

So having a highly collaborative environment, and in our case trying as much as we can in all areas to pair means that we’re spending the day, any two people who are working on a particular problem - they’re debating what they’re doing, they are using the principles that they have understood from the process that we’re using, and they’re refining them as they go… And sort of redesigning and evolving the process and the culture, the way that we work, and the way that we interact as they do it. Everyone was certainly encouraged to do that. If you’re not sort of examining how you work at a meta level, as well as working, then you’re not really doing it quite right. Because everything is open for question, and everything is open for modification and improvement.

[13:44] And so if you do that all day, doing the work while improving the way that you work intentionally, and then you split up and pair with other people the next day, for example, you’re going to carry the improvements with you, and the changes and the evolution will propagate.

And so in a culture like that, in a system like that, the system is somewhat self-correcting and self-propagating. I hate to call it viral at the moment, but… It allows people to work and to improve the nature of work, and then to share that, and then to come back together having had the feedback from working with others and inform the collaboration that they’d had previously. And so this system really allows people to continue to improve and build on what they’ve had.

The way I experienced that, without realizing it for quite some time, was starting when we had, say, ten people, and feeling all the excitement and wonder of doing meaningful projects that were quite difficult, with a very small group of people, and succeeding beyond expectations, and thinking “This is amazing, but it would never work if we had, say, 25 people”, and then getting to 25 people and it’s better, and saying, “Well, gosh, it would never work with 50 or 100”, and then getting to that point and saying, “It’s actually better than it was when I thought that last time.” And after doing that several times, finally taking a step back myself and going meta and saying, “Wait a minute, why does this keep working?”, and realizing that having that kind of system in play allows you to grow and evolve without sacrificing quality. In fact, if you maintain the discipline of working in a paired environment, then it keeps getting better.

And by the way, to answer the other part of your question around the technology and how that plays into it, I think another interesting aspect of the way that we work is that we were looking for feedback at every point of interaction - take test-driven development as an example. Two people are giving each other feedback all the time. When you’re doing continuous integration and deployment, you get feedback from your CI system; it tells you when you made a mistake. Test-driven development is a wonderfully pure way of sort of doing – call it deep practice, if you will, but you’re building something to verify what you’re about to build for production, and then if you make a mistake, the computer kindly tells you, “That’s not quite right”, and you have a chance to reflect, correct it, submit it again, and be corrected. And so if you do that in a highly iterative way, you’re allowing the computer to help you improve at a pace that you couldn’t normally, because the feedback from your mistakes would otherwise come so much more slowly… And it’s much more difficult to incorporate that feedback, and absorb it, and improve, than it is when you’re doing very rapid iteration.

And so I think, especially for software engineers, and of course, there are other people in the organization other than developers - but for them, they’re really placed in an almost ideal situation for honing their own craft and their own abilities, and the process that they use, and the culture that they are part of, by pairing and test-driving. And to me, it’s almost a sort of unique opportunity in the world of work, to build relationships, and evolve a culture, and improve a craft all at the same time, all while building product in a better way than you could otherwise. And to watch that - I mean, coming to an understanding of how profound that is took me years. But eventually I realized what was going on, and how powerful it is.
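To make the test-driven feedback loop described above concrete, here is a minimal, hypothetical sketch in Elixir with ExUnit (the Price module and its discount rule are made up for illustration, not taken from the episode): the test is written first, fails, and the implementation is then written just to make it pass.

```elixir
# A minimal test-first sketch. Everything here is illustrative.
ExUnit.start()

defmodule PriceTest do
  use ExUnit.Case, async: true

  # Written first: it fails until Price.total/1 exists and behaves correctly,
  # giving the rapid "that's not quite right" feedback described above.
  test "totals line items and applies a 10% discount over 100" do
    assert Price.total([40, 70]) == 99
    assert Price.total([40, 50]) == 90
  end
end

defmodule Price do
  # Then the implementation is written to satisfy the test.
  def total(items) do
    sum = Enum.sum(items)
    if sum > 100, do: sum - div(sum, 10), else: sum
  end
end
```

Saved as an .exs file and run with the elixir command, the test fails while Price is missing or wrong, and goes green once the implementation satisfies it - the tight correction loop described above.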

[17:49] For me, one of the key moments was when I realized how optimizing for this quick feedback, and adapting to change, being present in all the layers, made them work so well together. So whether it’s people - when you pair, when you spend time together, not just with your thoughts, but with another human being to validate whether what you’re thinking is correct… Then having the tests to confirm that the implementation is correct, then having a CI that integrates everything together… And then - and I think this is a very important element - a platform that can take that artifact, deploy it really quickly, scale it out really quickly, and test it in production, at scale. Having all those elements work together well - for me, that was the key to understanding how important it is for all the layers to work together, for the people, for the organizations, for the business… Everyone benefited from this integration.

Yeah, that’s extremely well put.

It’s as if I had time to think about this, many, many years. [laughs] That’s exactly what happened, yes. And being a part of different – and seeing it from different perspectives, because at Pivotal we used to consult… Pivotal Labs used to – I mean, we started as a consultancy, and consulting was a big piece of what I did. So that allowed us to see how other companies were doing it, because we were helping them. And whether these were car manufacturers, pharmaceuticals, financial institutions, banks… You know, big organizations. And then we realized that the approach that we have scales up, and scales down. Even startups. I remember us working with startups, delivering food. And it worked at all levels, extremely well. So it scaled very well, whether it was 10 people, 3,000, or 30,000. I think that was the versatility behind what works, basically. Discovering what works.

Yeah. And I think the key there is having a self-correcting system, and a self-improving system. And you can’t – it’s very difficult to impose a structure like that. In order to have a self-improving system at scale, you need to have a lot of freedom at the micro level, at the person-to-person level for them to be able to improve things.

So pairing - I know that it is controversial in some cases. Anything taken to the extreme is bad. Take sun, for example - essential to life; in large doses, it will kill you. It’s just what it is. So pairing - I think the extreme that I’ve heard many people complain about is when teams do it all day, every day, for years and years on end. I’ve been on this spectrum everywhere. So like not pairing at all, to pairing all the time. In my case, it was, I think, months on end… And it’s really, really hard. Where do you sit on this spectrum? Or do you see it as a spectrum? How do you think of pairing all the time, versus maybe when it makes sense?

I’m closer to the end of the spectrum that’s all the time, but I want to qualify that by saying how people should view a software development methodology - and probably any working methodology; it doesn’t have to be software. My view is that nothing should be sacred. You should be able to question everything and analyze it, and say, “Is this actually working for us, day by day, week by week?” And if it isn’t, then you modify it. Part of that, of course, is making sure that you’re being extremely honest with yourself, because sometimes things like pairing can become very tiring. And that can be detrimental. But if you stop doing it, you may be losing a lot that’s not immediately apparent, but will become apparent over time.

So in my view, if I were managing a team, or funding a team, or advising a team, I would push pretty hard for them to pair most of the time, the vast majority of the time. And the reason for that is - you know, if you just take the one example that we talked about at length just now, which is creating and propagating and evolving a culture and method of development and continuous improvement - there’s nothing like pairing in order to do that.

[22:29] If you want extremely high-quality code, that doesn’t have a lot of bugs, and it’s well designed, there’s nothing like pairing to do that. If you want the team to have an understanding of all the code, there’s nothing like pairing to share context and share information about a codebase, and make sure that everyone can understand most of the code, and when you lose someone, you don’t lose the ability to modify a piece of the code. There’s nothing like pairing to ensure that.

And finally, if you want to raise the skill level of people at a pace that you can’t do otherwise, there’s nothing like pairing. And I could keep going; there are many, many different layers to what it gets you. At some point, it may start to become counterproductive, as you said. So if people are doing it - and it is intense, it’s tiring; you may come to a point where some of those things are breaking down, and you’ve been pairing too long, and you ought to back off a bit and give people a break.

So certainly having people say “You know what, I don’t want to pair today. I’m done. I can’t do it anymore. I need to go and read some technical articles, or I need to go and work on some design, or do some research, or just look at the code base and noodle around a bit”, then they should do that. If they find that it’s unsupportable given the kinds of things they have to do in their work environment - suppose they have to maintain a system in production, and they have to do incident response, and get on the phone with people, maybe they need a day to do that. And so maybe they’re only pairing three-quarter time, or 80% of the time, or something like that.

So I can certainly see that, and I know that there are plenty of downsides to pairing in terms of how tiring it is, and so on. So maybe the cases when people pair, they do shorter days, or shorter weeks, and that’s reasonable as well.

Yeah. What I’m hearing is having strong opinions is okay. Having the courage to find what works for us - “us” in this case meaning your team, your context - is important. And you shouldn’t blindly follow something because someone says so or thinks so. You need to figure out what works for you. And by the way, that is the hard part. Many people don’t know what they like, or what works for them, because they’re just like in the system, or part of a system, and there’s inertia, and they just go with it. So having a system that encourages challenging assumptions, figuring out what works, and promoting the courage to just come forward and say, “Hey, this isn’t working. Can we find something better?” Having that is so important. And I know that Pivotal has been very strong on that. “Courage” was a big, big word, and a very important one. And this was in the Pivotal culture - speaking up, coming forward, having the courage to do the right thing. And by the way, no one said what the right thing is.

So that was important… But there were a couple of others which stuck with me. Doing what works - very relevant to what we’re discussing. My favorite - being kind. These are the things I think of when I think of Pivotal, and they apply to everything, not just software engineering. They stood the test of time, and I’m sure going forward they will not change. How do you think of those? I think they used to be called values…

[26:05] Yeah. Values, and of course, it was our mission statement to transform how the world builds software. The origin story of those is always to me quite entertaining and enlightening at the same time. Quite a few people have heard this, but I don’t know if it’s widely known… Someone that I’d worked with since the early days of Pivotal, Edward Hieatt, who was running all of Pivotal Labs at this point, came into my office, shut the door, dramatically sort of locked it, and said, “We need a mission statement and values, and I’m not letting you out of here until we have them.” And I thought at the time, “What?! What are you doing…?” And he knew my feeling on things like writing down values… Being a developer myself, I’m very suspicious of things like that, and I always picture posters with clouds and doves, and things like integrity and honesty written on them and plastered on the wall… And so we’re a relatively small company, primarily oriented around software development. If we write down a set of values, it’s gonna backfire. People are going to find it inauthentic. And Edward said, “Well, actually, I’m here to tell you that I’m hearing from our software developers in particular that they want to understand what our values are and what our mission statement is. So I think we actually need them.”

And around the same time Edward had told me that the software developers were asking for more management. So you know, if your developers are asking you for things like more management or values, it’s gotten to a point where you need to do something… Because like me, they’re suspicious and cynical about those kinds of things, and they don’t take kindly to sort of corporate pabulum being thrust upon them.

So I said, “Okay, well, why don’t we just do what we really think and do what we really want if we think it would be palatable?” I mean, I said, “For me, I’d love to change how the world builds software, you know that.” And Edward said, “Yes. I would, too. That’s what I want to do.” “Yeah, but we can’t write that down. We can’t say that. That’s arrogant. That’s so ambitious.” And he said, “No, no, no. That’s a great mission statement. Mission statements are supposed to be aspirational. Why wouldn’t we say that?” And so after a time, he convinced me that that was the right thing to say. And it worked, and it stuck, and I think we grew into it, I would say, as we got bigger…

And for the values - that was a really hard one for me… But I just sat and said, “Okay, what really matters to me every day, as we work, at a meta level?” and I thought, “Well, the first thing is we’ve got to do the right thing. In other words, we have to be ethical. There’s just no gray area there. We can’t bill our clients more than we worked. We can’t allow any accounting silliness to come into play. We cannot be unethical, at any time. It’s just completely unacceptable. Alright, so let’s do the right thing. And people – we’re not gonna tell them what the right thing is, but I think given the people we hire, they will know.”

And the second thing was “Do what works.” Well, that was simple. That’s the basis of everything that we do, the way the methodology works… We’re trying to constantly do things that work and improve upon them.

And then the last one was one that I think is difficult in a situation – especially if you’re doing well on the first two, it’s pretty easy to think “Gosh, if we’re always doing things well and being righteous while we’re doing it, then we must be pretty darn good.” And then it’s easy to be, I think, contemptuous, or impatient, or mean. So you’ve got to remind yourself to be kind, all the time, and not allow yourself to succumb to those baser instincts or reactions. So I said, “Okay, let’s be kind.”

[30:26] And that was it. Those three. And Edward said, “I think those are fantastic. Let’s ship it.” And I have to confess, I was terrified that people would think we were being inauthentic, or something like that. And it didn’t turn out that way. I mean, people really latched on to those; they were on swag, they were on wireless passwords, they were everywhere. Email signatures… People put them all over the place, especially “Be kind.”

It resonated with people, that’s what it was. People secretly wanted that, or knew it all along. And with you just putting it down, and it coming from the top in this case… With the top leadership writing these things down, people realized, “Yes, of course. That’s exactly what I want to do. I want to be kind, and I want to do what works, and I want to keep it simple. I want to do all those things.” So it was so easy.

And you know, what was interesting about that moment is that when we were forced to do it, it only took us five minutes to come up with that list, and they survived for years.

I don’t think it needs to be hard. The right things, they don’t need to be hard. They just either click or don’t click. Just keep trying, until you find the right combination. And if you know your people, if you know your team and your organization and how things actually work, they should just come instinctively. Maybe not every day, because not all of us have good days every day… But on a good day, you just feel it, and everyone else does, too. That’s the beauty of it.

And I think if you have an environment where people are being honest with each other and giving each other a lot of feedback - you know, if you stray from those things, people can remind you, and they feel empowered to do so.

Did it work? Did you change how the world builds software?

I think in a way we did; maybe not the whole world, but I think we were a part of something that did change the way that the world builds software. If you look back 20-25 years ago, there was a tremendous amount of resistance to doing things incrementally, having a lot of feedback, even things like testing software… All the kinds of things that get grouped under sort of the “agile” term. These days, I sometimes find myself going through lots of job ads, and the reason I’m doing it may be to research a particular industry and see what technologies people are using in that industry, for a set of companies - 20 companies that are in that industry. One way to do that is to look at the job ads, and you can see, “Ah, well, they tend to use Rust, or they’re using this kind of database, or that kind of platform, or cloud”, or whatnot. But the other thing that you notice as you go through is the style of work environment that employers are pitching. They’re saying “This is the way we work, and this is what you can expect, and this is what we would expect of you.” And you see all kinds of things. “We have all of these technologies that support feedback, that support continuous integration. We are very, very strong on developer testing. In fact, we like to do test-first development. We have small teams that do stand-ups every day, and do retrospectives every week, and we do planning in this very incremental fashion, and you’ll find us to be…” and so on and so forth. And I’ll tell you what, it’s almost ubiquitous. Most of the companies that I’ll look at, certainly in any kind of technology area, are advertising that they work that way. And it’s become the new normal. And that was absolutely not the case 20 years ago.

[36:51] So I think certainly by having worked with not just hundreds, but thousands of clients and teams over the years, and exposing people to that way of working, I think we’ve had a pretty big impact. And because people have been very enthusiastic about working with Pivotal over that time, they then carried on that way of working to others. And we weren’t the only people doing this. There were certainly plenty of others who were doing it as well… But I think we played a part.

Yeah. I think so, too. I also think that Cloud Foundry, and Concourse, the software that Pivotal built, and Pivotal just put out in the world for everyone to use, had such a profound impact, because it was embodying the principles. Pivotal Tracker - that’s not the one that comes to mind. I’m sure there were a few others… But these are the ones that the world noticed, and the world started using in different ways, and it made them curious about why this software works the way it does, and why is it so simple, and why does it just like get out of the way and focus on what is important?

In episode 64 we talked at length about Concourse with Alex. I’m wondering, what is your take on Concourse, the Concourse CI system?

Well, I can certainly take you back to the point where it was being started and Alex was sort of working on it… You know, mostly himself, but he collaborated with a few others, and it was sort of a quiet project that was happening, but people were noticing. And [unintelligible 00:38:25.03] James Baer who were the head of engineering and product for Cloud Foundry development at the time came to me and said, “Hey, there’s this really interesting project happening called Concourse that Alex has been doing, and we think that maybe it’s worthy of some additional investment.” And I knew Alex was a terrific engineer, and certainly, if the leadership was coming to me and saying, “Maybe we should support this more”, then it was probably something pretty extraordinary. And I had seen it being used here and there, and I thought it was really interesting, but my first reaction is, “Does the world really need another CI system? Are we serious? There’s so many.”

And at the same time, we’d kind of been bouncing around CI systems to a certain degree with the Cloud Foundry team, which was getting really quite big, doing very intensive work, very intensive pipelines… And one of the problems was that none of the other CI systems were really cutting it. They were not scaling, and were not responsive and supportive enough of the way that we worked, especially as Cloud Foundry got to scale… So maybe we did need another CI system; maybe it was time. But I was pretty skeptical in that sense. And they said, “Great, let’s dig in, let us tell you why, and see if you agree with us.”

So we had a lot of discussions about it, and we talked to Alex, and we were like, “You know what - this might actually allow us to work at this scale, in the way that we work, more efficiently.” And it did. So we put some substantial resources into it, and I gave it my full support, and I think it turned out to be pretty amazing.

[40:12] It was always fun to go down to the fourth floor at 875 Howard, where we had the largest set of Cloud Foundry development going on, the entire floor, and monitors all over the walls, and up on the columns, visualizing the way our pipelines were coming together and building. That was always a fun part for me. And I could take customers down there who would say, “Well, how does this process scale? How does your technology scale? How would you do this?” And I said “Well, let me show you… Here’s this project that has dozens of teams, hundreds of people building something. And you can see all the different pipelines, for all the different teams. You can see it building, in action, continuously. And here’s where it all comes together.” And you can show them these different things in an exciting visual way that was just happening all the time, and it was doing whatever it was doing at that moment in time, whether it was green, or red, or something was building, or whether there was a problem. And it helped them understand what we did. And for – like, let’s say we’re talking to Ford, or someone like that, or a really big bank, and they’re wondering, “Well, what does it look like for us at scale?” And we could say “Well, here’s a really big group of developers doing something, teams doing something at a scale which maybe you’d approach, but maybe not. This certainly is big enough for you to understand what it would look like at your largest scale of product.” It was very helpful.

So after Pivotal became VMware - that’s the way I think about it, Pivotal became VMware; that’s exactly my view on it - we continued using Concourse on RabbitMQ. There were many years when the RabbitMQ software had the biggest Concourse pipeline you can imagine. Not just one. Tens and tens of pipelines, for different versions, for the clients… Just to understand the scale - this was a couple of years back - we had north of 600 CPUs, a few terabytes of RAM, and I don’t know how many terabytes of SSD drives, really fast ones, to run all these pipelines. I think we had like 10 or 20 – between 10 and 20 Concourse workers. So a project which is really complex, really mature, was able to run on Concourse, and we were hitting the limits of Concourse left, right and center, all the way from the web workers not being able to generate the pipelines because of how big they were. We talked about this in the previous episodes, and I’ll drop a link in the show notes… But that was like my last experience of Concourse, and this was a couple of years back.

When Concourse started, I remember being on the Pivotal Cloud Foundry data services team, and we were struggling. This was 2016-2017. We were struggling – maybe even 2014. Anyways, it was like a significant number of years ago - we were struggling with Jenkins, we were struggling with GoCD… And while things may have changed, then, at that point in time, Concourse enabled us to do things with data services that no other CI was able to. So it worked for a really long period of time, and the only thing which I missed was a managed Concourse. Like, “Let me put my card in and then let me just get this Concourse service that scales really, really well.” That was the only thing missing.

So as a software system, it worked really well for many years, and the thing is, I never saw it as a CI system. For me, it was at the core of the Pivotal distribution process, because many customers were getting Pivotal software that way. So it was so much more than a CI/CD system.

[44:00] Yeah, and that’s a whole other area of value that it unlocked. That was kind of incredible for enterprise software systems delivered on premises. Yeah, it was a big enabler.

And it worked for small teams and big teams. I think that was the beauty of it. It didn’t really care how big you were. A single process, that’s all it was. You could run it locally if you wanted to. Or you could have tens and tens of worker nodes. It was really good.

As important as Concourse was, there was this other software which I think had an even bigger impact, and that was Cloud Foundry. Remember the haiku “Here’s my code, run it. I don’t care how.” That was very memorable. How did you think about Cloud Foundry in the beginning, as more and more of the software teams were running it, were appreciating it? This was 5-6 years ago… What were your thoughts about Cloud Foundry at the time?

Yeah, I think we had a really interesting opportunity to take teams that were working in as optimal a way as we could figure out - incremental, evolutionary, building software quickly and getting it into production as fast as possible, getting feedback on it, and iterating. We were in a position to build a technology that was tailored to that, and to do it by interacting with those people. And in fact, building the technology itself using those processes. So it really was – you sort of couldn’t design a better crucible for heating and hardening something like that.

And at the same time, building a platform like that - they become quite expensive. I mean, it’s not just the core runtime, and all the tools that you have to build, but it’s all the integrations with technology, and the services, and so on. So the team can really expand, as ours did. And so you end up having the development of a product like this push the process, by forcing you into a situation of scale that you’ve never had in terms of team size. And then the process itself is also essentially continuously critiquing the technology product.

One of the first things that I said to the Pivotal Labs team, after Pivotal Software was formed and we had sort of a nascent Cloud Foundry that we’d inherited from VMware, that wasn’t really in a production state yet, but we had it, and we decided “This is really the thing that we ought to commit to and build this out, because it’s going to be amazing…” And the Pivotal Labs teams were a bit apprehensive, and understandably so, because they said, “Are you going to force us to use this product now that we’re building the product? Of course, we’re going to have to use it, and our clients will have to use it, and so on.” And I said, “I promise you, you won’t have to use it until you think it’s the most appropriate thing for you to use, in a given situation. So you can make that decision.” And then we had to work really hard to make it so. But it was. I mean, it did become that thing.

And especially as Pivotal, in sort of its new incarnation as Pivotal Software, shifted from working with 80% to 90% startups and internet technology companies, into working with 80% to 90% Fortune 500 and Global 2000 companies, and the federal government, and so on - a very different type of client - Cloud Foundry became the saving grace of all of our engagements. Whenever we couldn’t use Cloud Foundry, the difficulty was profound in comparison.

[47:55] So the biggest advocates and champions for Cloud Foundry became the Pivotal Labs teams, because working without it was extraordinarily difficult, especially if you were building software that was deployed on-premises. But even if you were deploying on the public clouds, it was still substantially more difficult and painful to do it without Cloud Foundry.

So in the present, I know that you’re a big fan of Elixir. And the reason why I know that is – okay, we talked about it a few times… But also, I was watching your Code BEAM V America 2021 appearance; there was a panel discussion around startups and venture capital in the Erlang ecosystem… I’m going to put a link in the show notes. What do you see in Elixir specifically?

Well, certainly Erlang has been around for a long time, and we’ve had various advocates at Pivotal, and other people that I’ve known who loved Erlang, and loved its power and its capability, and ability to build distributed systems with a lot less code than almost anything else… But it always seemed like it was sort of a High Priest caste that used Erlang, and it didn’t seem as accessible to everyone else. And Elixir seemed to change that, to a certain extent. It had the power of Erlang, but also made it more friendly and productive for a wider range of developers, let’s say.

I did a fair amount of playing with Elixir when we first decided we were going to use it, and I actually did a number of math problems… Relatively small ones, but I had a delightful time doing some math problems with Elixir myself to understand it, and I felt the joy of programming in this beautiful functional environment. So it seemed like for the work that we’ve been doing in my incubator, Geometer, over the last couple of years, for certain types of projects and the kinds of things that we wanted to build, Elixir would be just a wonderful tool to use, and would result in a lot of happy, motivated developers and teams… And it was true. I think people have been extraordinarily happy building things in Elixir. Very, very productive… And it’s allowed us to do some things at very high scale, with not a lot of code. So it’s been extremely effective for us.
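As a purely illustrative aside - this is not one of Rob’s actual exercises - here is the kind of small math problem he describes, in the lazy, functional style Elixir encourages: summing the even Fibonacci numbers below one million with a stream pipeline.

```elixir
# Illustrative only: a small math exercise in idiomatic Elixir.
defmodule Fib do
  # Sum of the even Fibonacci numbers below `limit`.
  def even_sum(limit) do
    Stream.unfold({0, 1}, fn {a, b} -> {a, {b, a + b}} end)
    |> Stream.take_while(&(&1 < limit))
    |> Enum.filter(&(rem(&1, 2) == 0))
    |> Enum.sum()
  end
end

IO.puts(Fib.even_sum(1_000_000))
# => 1089154
```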

Are there any real-world examples, success stories, companies or products that you’ve built in the incubator using Elixir, that would have been difficult otherwise? Can we see Elixir in the wild? Have we maybe used it without knowing about it - products that were made possible because of it?

Well, I can give you a couple of examples. The first year of the incubator we actually sort of diverted to working on COVID response, and we worked with the government of New York and New Jersey, sponsored by the former Director of the CDC, who was working on that as well… You know, one of the biggest problems that they had in terms of dealing with COVID was actually processing the amount of data that was coming in. Because most of the time, if you’re dealing with measles or something like that, the number of cases is just relatively small, and the volume and the pace at which they’re coming in is sort of known and expected, and having been dealt with for years, and it’s relatively steady. Obviously, COVID resulted in hundreds of thousands of cases, so all the systems, all the data exchange was inadequate. You know, where faxes might have been sufficient, that no longer was even a possibility, so everywhere in the chain the links were broken.

So what we ended up doing was building some of the data interchange for the labs and the health departments and so on, and we were able to build some things that ran very, very quickly. Where they had been experiencing hours of time, or up to a day, we took it down to seconds or minutes, and built something very, very reliable, that ran continuously and just didn’t break, essentially.

[52:07] So that was a very successful use of Elixir. And it’s not that you couldn’t have done it in something else, but we were under a pretty big time crunch. It was early on in the pandemic, and things had to happen very, very fast. It was a lot of pressure, and it was a good technology to use. I think it was very effective for us.

Subsequent to that, one of the companies that we’ve been building is called Vex. It’s not open to the public yet, but it will be soon, at vex.dev. It is an API-based service to provide real-time communications, video, audio data, and one of its main features is simply scale. So we know that it can handle at least 500,000 people simultaneously watching, streaming video in real time, and we expect it will easily handle more than that. One of the challenges is simply building systems that can actually test that and verify it, building the load testing. But yeah, Elixir has been a great help in building something, again, with a lot of scale, and not a lot of code.
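As a purely illustrative aside on that scale claim: the BEAM’s concurrency model is a big part of why this kind of load is approachable with little code. Below is a hypothetical load-generation sketch - the LoadSketch module and its simulated viewer are made up for this example, not taken from Vex - showing how hundreds of thousands of concurrent “viewers” map naturally onto lightweight processes.

```elixir
# Toy sketch: simulate many concurrent viewers as lightweight BEAM processes.
# Everything here is illustrative; it does not reflect Vex's actual API.
defmodule LoadSketch do
  def run(viewers) do
    1..viewers
    |> Task.async_stream(&simulate_viewer/1,
      max_concurrency: 10_000,
      timeout: :infinity
    )
    |> Enum.count(&match?({:ok, :connected}, &1))
  end

  defp simulate_viewer(_id) do
    # Stand-in for opening and holding a real media stream.
    Process.sleep(Enum.random(5..50))
    :connected
  end
end

# Small run for illustration; the same shape scales to very large numbers.
IO.inspect(LoadSketch.run(1_000))
# => 1000
```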

Those were two great stories, and seeing them in the wild – I mean, Elixir runs all of changelog.com, and we’re barely using it at its full potential. We’re definitely not using the distributed nature of Elixir today, but we will very soon. Having talked to Jason from Vex - and I’m hoping that we’ll have him soon on Ship It - I was really fascinated by how vex.dev needs to run on multiple clouds, because of the scale. That was fascinating. And how do they compare when it comes to just sheer computation? There’s a lot of audio, there’s a lot of video, real-time… And think hundreds of thousands of streams, concurrent streams. Building that with a small team - it’s really difficult, and really impressive. It’s an impressive feat to pull off.

So I think we will have a chance to talk about that more, but… I see it, too. I see it, too. And having been with RabbitMQ for a long time, I understand the power of the Erlang VM, all the way from the memory allocations, to the schedulers, to… There’s like so many amazing things which it has now. To the just-in-time compiler… Amazing properties, in the VM, as well as in the DSL, which is Elixir. It’s just the DSL which makes it more accessible, as you mentioned. I think that was a key moment for Erlang, and almost like a second life, I think.

So in January I joined this new company, Dagger, and we’ve been going through all the motions of a startup, where you scale, where you add more people, you grow a team, you’re trying to figure out the product… And while you have it figured out, you have to react to users, and how they use it, and what they’re missing, and all that… So all the beauty of that. What does your ideal first year for a small startup that grows from a few people to 30 look like? What would you recommend for such a small startup? When it comes to people, when it comes to process, when it comes to technology…

Gosh, where to start…? So many things I could talk about.

Top of your mind. The most important one.

Yeah, I mean, absolutely, I think people is really, really important. I mean, you’re familiar with how we did things at Pivotal, and the RPI, the one-hour gating interview that was just pure programming… And some joyful programming too, in my experience, when I used to conduct those all the time… But we ended up being able to really assess people well, and not just in terms of their technical prowess, but also their ability to work in a team like ours. And they were able to assess us as well.

[56:07] So I think it’s really important, when you’re building a small team especially, but also as you scale - again, this is another one of those things, how do you scale and stay consistent? You need to have a hiring process like that, that really allows you to evaluate the people on their technical ability and their cultural fit, and it allows them to evaluate you. So if you really have that bi-directional assessment built into your hiring process, and you’re very consistent with that, and very rigorous with that, I think that is a huge, huge advantage, because you can build a cohesive team, and one that will overcome obstacles together, and do it in a joyful way. That’s huge.

How do you hire like that? Because I have my own version. And it’s interesting, but even this I can trace back to Pivotal… How do you hire in the way that you just described? How does the process work?

Well, the RPI was an interview technique that I evolved over quite some time. You know, when Pivotal was smaller, I used to do those exclusively. So people would program with me for an hour first, before they went on to secondary interviews… Which are also very important, by the way. But just to focus on the RPI, it was a measurable, repeatable, 100-point scale that was looking for things like abstract thinking ability, speed, and empathy, and I think effectively did that in a consistent way… And so you could essentially cast a wide net, and not rule anybody out, but relatively objectively evaluate them based on the parameters that really matter to you… And then you could pass them on to secondary interviews.

And I think the secondary interviews are really important as well. It’s important to have a gating interview that will allow you to filter down to the very few people who might make it through, and do it efficiently. And it doesn’t have to be the RPI, but something like the RPI, that is programming, that is done collaboratively in real-time, with a real human… I don’t find much value, by the way, in the automated ones, or the AIs, or the this and that. But that human-based filtering is also efficient.

Secondary interviews need to be much longer. I would say half a day, with at least two different people really actually working on a product, working on what you do. And again, that allows people to see how this candidate is going to fit, and it allows them to say, “Okay, I understand what you’re working on, I understand how you work in your team. Do I really want to be here? Is this a fit for me?” and vice versa. If you can do that, I think that’s it. That is a really effective way of interviewing. It takes time, but…

That’s a good one. That’s a good one.

…not as much time as some very ineffective ways I’ve seen.

Oh, yes. Yes, hiring is very important, and knowing how to hire well, and getting the right people in can have such a huge impact, especially in a small company where one person can mean 10%, or even 5% of a company. When you’re 3,000, the gravity is very different. But when you’re small, every person matters a big, big deal.

What about optimizing for shipping it, like getting it out there? Companies that optimize for execution, versus just brainstorming and planning, and all that? What would you say about that aspect?

I think it’s absolutely important. It’s easy to make mistakes in that area. I’ve certainly made many - not shipping early enough… But I think in your case, you’ve got something that’s already being used pretty widely by people. I sort of started that way… So that’s fortunate, and obviously, leaning into that and continuing to ship continuously is – well, I don’t think I need to tell you that.

Yeah. No, no, no. When it comes to acting on user feedback, what does a healthy loop look like? When you’re paying attention to what users are saying, but then you know what is important, and you can implement those in a certain priority - what does that process look like, so that you are responding to what users are asking for, rather than continuing on tangents that you think may be a good idea?

I think one of the things that happens with users when they have requests is they don’t necessarily see them in context of all the other things that are happening. And certainly when we used to work with client teams and you’d have various stakeholders saying “This is number one priority, that’s number one priority”, if you really exposed it quite clearly and said, “Well, look at the sum total of what we’re working on and weigh these in comparison. What would you really think?”, oftentimes they’d change their mind and say, “You know what, this other thing you’re working on is more important than the thing that I was saying is top priority.” So I think having the transparency and visibility for your users is important, so that they can see what other people are asking for, and understand the relative importance of those.

Okay. Would you use Pivotal Tracker today? Would you still recommend it today?

[01:03:43.25] We do use it. [laughs] Interestingly enough, we tried not to… So I’ll just say that one of our portfolio companies at Geometer is called Kohort, with a K… And sometimes I forget that cohort is actually spelled with a c. It’s in my brain now… But that team is led by Dan Podsedly, who led the Tracker team for many years. So he’s the CEO of Kohort. And he said, “You know what, I don’t want to make the team use Tracker just because I ran Tracker all this time.” And I think he was challenging himself to do something else. He said, “We’re gonna go use something else.” And so they picked something that was getting a lot of traction, that was relatively new in the market, as the best thing that they could find. They started using it, and after a month, the team said, “Can we please just go back and use Tracker?” So reluctantly, Dan went back to using the thing that he built. So yeah, it’s still useful.

Okay. As we prepare to wrap this up, what would you say is the most important takeaway for our listeners, the ones that stuck with us all the way to the end?

This is something that occurred to me – something I saw when we were working with really large companies and government agencies - that there was always a focus on the next powerful technology… Usually AI, or machine learning, something that could come under those descriptors. But in many cases, we found that the fundamentals were completely lacking in the teams and the organizations that were looking to adopt some of these technologies. And in a world that is increasingly reliant on software, it feels to me like getting the fundamentals right, building software extremely well, and doing it in a humane way, and building that foundation, is so much more important than adopting the next technology.

In other words, “If you don’t have an AI strategy, you’re not going to do X, Y, or Z. Or if you haven’t figured out all of your security posture, then you’re vulnerable.” Fundamentally, if you have a culture of software development that is very strong and very resilient, and if you’re truly expert at that, then you can use these technologies effectively, and you can plug them into your environment, and use them 10 times better than you would have if you’d just rushed into an AI initiative and said “We’re going to use this technology” while your foundation is a mess.

I think many large organizations are setting themselves up for ultimate failure by not focusing on building things in a very rigorous, very evolutionary, and very humane way. That was something that I saw at Pivotal, that really opened my eyes to the state of where things were with respect to the ambitions that people have for the technology that they’re going to use, and the reality of how they do things.

That is a very meaningful thought to end on, I think. Very meaningful. It will definitely stand the test of time, it will apply many years in the future. Focus on what matters to you. And technology - maybe not. Maybe there’s something else there.

Well, it’s been an absolute pleasure. Thank you very much for joining us today, and I’m looking forward to next time. Thank you.

Thank you. It’s been my pleasure being here. I appreciate it.


Our transcripts are open source on GitHub. Improvements are welcome. 💚
