Deploying & Servers, with Chris Fidao

Matt Stauffer:
Welcome back to The Laravel Podcast season four. Today, we're talking to Chris Fidao about deployments and hosting and about getting your code onto the server. Stay tuned.

Matt Stauffer:
Welcome back to The Laravel Podcast season four, where every single episode is about a single topic. Today, we are talking about deployments and basically what it looks like to get your Laravel app up on the server, whatever the server means. And we're talking with Chris Fidao, who is one of the original OGs of the Laravel world. You may know him from such hits as, man, Shipping Docker and Servers for Hackers and that one article about hexagonal architecture that he still regrets to this day. Chris, if we were in a world where you got to meet people in the grocery store, how would you introduce them to who you are or what you do?

Chris Fidao:
All right. First, I just got tweeted about the hexagonal-

Matt Stauffer:
Oh my God.

Chris Fidao:
... article again yesterday, and I'm just like, "Uh." Anyway.

Matt Stauffer:
Don't do it.

Chris Fidao:
Right. Interesting topic. Not what we're talking about today, but also, a waste of time. Don't at me. Okay. If I meet people at a grocery store, what do I tell them I do? I just tell people I'm a programmer first, and then of course, it goes beyond that. Then there's like, "Okay, I'm a web developer. I work with Laravel and PHP and that kind of thing." But I just start with... I usually say something like, "I'm a programmer and I also work with servers," something like that.

Matt Stauffer:
Oh, okay. That's cool.

Chris Fidao:
When you start saying the word server administration, it gets boring enough where if they don't know about it, they're not going to follow.

Matt Stauffer:
Yeah, okay. They glaze over.

Chris Fidao:
Which is what I want.

Matt Stauffer:
I like that.

Chris Fidao:
Like, "All right, move on."

Matt Stauffer:
Okay. For a Laravel programmer, I think that you do have a unique job, because for anybody who doesn't know, you work with Ian Landsman, which I feel like Ian is... I've often called Ian the godfather of Laravel, and I think it's definitely true. He had stepped away for a little while, but now that he's doing Laracon Online, his presence is being felt a little bit more. So, what is your day-to-day job?

Chris Fidao:
All right, that's at UserScape. We work on the product HelpSpot, which is a customer support application, which is a bit unique in that it's old, it's from pre-SaaS days, and we have a lot of customers who self-install on Windows servers or Linux servers or whatever they happen to have. So, there's a lot of server components to it, because it's not just a SaaS application on our own infrastructure.

Matt Stauffer:
Yeah. I've always wanted to ask this of y'all, and I never have. Do each of you specialize on a different part of the application, or is it really all of you all are just jumping around the whole code base?

Chris Fidao:
It's both. There has been specialization, and it's grown organically like that. I don't think that was ever a specifically outlined thing. I've gravitated towards the server stuff. Now, we have Help Spot Cloud, and that's mostly me. And then Eric is on a lot of billing, back office stuff, plus Help Spot. Matt, we have Matt on our team, who does support, but also is very technical, so he does some coding and bugs and that kind of thing, too.

Matt Stauffer:
Nice. And just one last thing just for context. Help Spot Cloud. The original Help Spot was software as a service, but then you also could get a version of it that you self-install, right?

Chris Fidao:
I mean, it was just a PHP code base, so it went through our server. So, when someone bought it, they downloaded the code base and put it on their own servers.

Matt Stauffer:
Oh, was it only on-premise at first?

Chris Fidao:
Yeah, exactly.

Matt Stauffer:
Oh, okay. It was originally... And then Help Spot Cloud is the SaaS.

Chris Fidao:
Yeah, but it's not even a SaaS. It's just like everyone gets their own server. It's just the same thing.

Matt Stauffer:
So, what ecosystem are you using to set that up?

Chris Fidao:
It's all AWS, because it's a small team, right? It's like I'm the server person. It's simple in concept. So, it's a server per customer, essentially, because that's simpler than some kind of Docker CE environment that only I understand, and if it breaks I'm like, "I don't even know," because I don't know what Linux networking is. Things can break in weird ways when you get into that world, too. It's 300 individual application servers right now, and one huge RDS database that all customers are on, and that kind of thing.

Matt Stauffer:
Got it. Sweet. I could nerd out with you on that all day, but we should talk about the actual topic here, which is deployments, deploying, and what it looks like to do that in a Laravel world. It's a little bit of a weird one to ask you the five-year-old question, but you got kids, you know this is coming, so let's start with it. If you're going to be talking about just deploying web applications in general to a five-year-old, how do you describe it?

Chris Fidao:
All right. First and foremost, the true story is that deployment is why my son knows how to say BS and has repeated it back to me.

Matt Stauffer:
Okay. See, the kids are already involved.

Chris Fidao:
It's great. Right. He's not five yet. He's three. I don't even know how to describe it to a three-year-old. But I think my really basic explanation is, "This is how you make the things I work on available so that everyone in the world can use it." For kids that age, I think iPad apps or Netflix or something make more sense to talk about than web apps. It's just like, "If I was working at Netflix and built something on my computer, then I can deploy it so everyone in the world can watch Netflix."

Matt Stauffer:
Perfect. I love it. Deploying is so complex of a topic, there's so many different ways to deploy that really the only thing that makes them consistent is that you're taking it from your local setup to the remote setup, right? Whatever local is, whatever remote is. Yeah.

Chris Fidao:
For sure.

Matt Stauffer:
So, since this season of Laravel Podcast is all about targeting people who are just getting started, I thought we might all have varying experience and backgrounds with deploying, whether with serverless or with old-school FTP and stuff like that. Let's just say we want a reset, and we want to say, "What does it look like to deploy a Laravel app from me developing it on my local ecosystem, whatever that is, to getting it out the most standard way?" Obviously, you can do it any way, but there's a little bit of a prescribed, "Here's how to get started." Could you walk us a little bit through that setup?

Chris Fidao:
Yeah. I guess the most common way I see it deployed is just the Forge Quick Deploy model, right? Git is on the server, so if you push new code up to your GitHub repository or whatever, GitLab, whatever, then your server can have a way to know that new code was pushed up there, and basically just do a Git pull so that it pulls down that latest code. So, the deployment there works basically because Git is available everywhere, right? You have it locally, it's at a central place like GitHub, and then it's also on your server, so they all have access to the code. And your server can do Git pull and have the latest code that way, which is the most basic way of putting it, because there's a lot of other stuff that goes into it, like building static assets or running build scripts, or restarting services if you need to and that kind of thing.

Matt Stauffer:
Yeah. And I won't make you give us a walkthrough how Forge works, but for anybody who's not familiar, we'll be referring to that concept, which other people use, but Forge is the most common one. And basically, in Forge, you connect Forge to your server, or you create a server in Forge, and then you add a site in that server, and that site is basically... Any time somebody requests basically an IP address for this particular URL, whether it's a domain name or multiple domain names or whatever, any time they resolve at this server, they should be served from this one Laravel app, right? And then you connect that app in Forge with a GitHub repo, and then Forge subscribes to that GitHub repo to get those pushes.

Matt Stauffer:
Once it is subscribed, let's say I do that whole connection, so I point my IP address over to Forge. I've got a server connected and set up in Forge. I'm pointing that IP address at it. And then I set up a new app for mygreatwebsite.com, and then I connect it to my GitHub repo, so now Forge is getting pushes every time I push up to main or whatever. What happens from there? The first time after I've done all that, I do a local push, and I say, "git push origin main," and then all of a sudden, Forge runs it. Before I've done any customization or tweaking, what does that actually look like in the Forge world? How is it deploying it, and what scripts is it running by default, and how is that actually making it serve?

Chris Fidao:
A point there is that Forge is actually doing some automation for you. Forge is a thing that is hooked up to your GitHub account, and it receives webhooks from GitHub. We're saying GitHub, but GitLab, Bitbucket, whatever.

Matt Stauffer:
Or GitLab, whoever.

Chris Fidao:
I'll just keep saying GitHub, because it's easier. Forge subscribes within GitHub, so it's getting webhooks from GitHub. Forge is acting on your behalf, because it's getting these webhooks from GitHub, and it's performing an action there. If you're not using Forge and you have your own setup, then you also need this thing listening for webhooks, that subscribes to changes in your GitHub repository. But we'll assume we have Forge right now, because that's done for you, and you don't have to code that up yourself.

Chris Fidao:
It gets a webhook, then Forge sees that this webhook has come in. It's for this repository. This repository is connected to a site for some Forge customer's application, so your application in this case, and it knows that it needs to take some action, because it sees that it's been pushed to main, and main is the branch that is hooked up to this site, and then it needs to do some action based on a change to the code in that Git repository. Okay. That's the deployment, basically, because that's what Forge does, the only automated thing that Forge does based on that kind of thing. So, it'll kick off a deployment, which in the Forge world means it's going to SSH into your server and just run a script. In Forge, that's configured as what it calls a Quick Deploy, because it's just very quickly getting that webhook and then running a script on your server.

Chris Fidao:
And that'll do a Git pull, and it'll run migrations, right? You optionally will run some migrations, and if there's migrations to run, it'll run them. You can optionally add some extra scripts to build static assets if you are building them on your server, as opposed to building them locally and then pushing those up so they're committed into the repository. That's actually one of the first trade-offs you decide in deployment scenarios. Are you building your static assets locally? And then you have something to push up to Git. Or is your production server building assets with npm run production or whatever your build command is? If you're running that on the production server rather than locally, then you add a little bit of time that it takes for your deployment to actually be ready, because you still have that extra step that has to run on your server.

Chris Fidao:
You push up to Git, a webhook is sent to Forge. Forge is going to run a script, that script is going to Git pull to get the latest code. It might npm run production or whatever for static assets, it might run migrations. It probably is going to reload PHP-FPM, which is the thing that serves your web traffic through PHP, so it's running your PHP code. And what am I missing? Anything else that it does by default?

Matt Stauffer:
I think that's it. You can add lots of other things, but I don't think it does anything else by default.
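For reference, here is a rough sketch of what that default quick-deploy script tends to look like. The path, branch, and PHP version are illustrative, not Forge's exact generated script:

```bash
# Sketch of a typical Forge-style quick-deploy script (paths and PHP version are illustrative).
cd /home/forge/mygreatwebsite.com

# Pull the latest code from the connected branch.
git pull origin main

# Install PHP dependencies for production.
composer install --no-interaction --prefer-dist --optimize-autoloader

# Reload PHP-FPM so the new code (and opcache) is picked up.
sudo service php8.1-fpm reload

# Run any pending migrations, if this is a Laravel app.
if [ -f artisan ]; then
    php artisan migrate --force
fi
```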

Chris Fidao:
It's a bash script, so you can just put anything you want in it.

Matt Stauffer:
It's cool, because it's a bash script that literally, in the application, in my application UI in Forge, it shows you that default one, which includes cd'ing into the directory, Git pull. You can make it so every time something happens, it doesn't even git pull. It'd be silly to do, but you could if you wanted, right? It's a totally customizable bash script.

Chris Fidao:
There's so much going into it, too. I'm just thinking about our application that I run that also pulls in private keys that it needs for APIs from a secret store, and all sorts of stuff like that can happen in there.

Matt Stauffer:
Yeah. And Forge is simpler, because the majority of things that it's doing there, it does not do at the deploy step, right? When you're setting up your .env and everything, all that kind of stuff, it usually tends to do when you're setting up the application or making changes to the application. I think, for us, that's the case. For yours, when you say pulling API keys, are you pulling it in from a centralized AWS secret store?

Chris Fidao:
Yeah, exactly.

Matt Stauffer:
Okay. Got it.

Chris Fidao:
Forge also manages your .env file, too, so you can edit it and add secrets, that kind of thing, and that just gets placed on your server.

Matt Stauffer:
Yeah. Let's talk a little bit about alternative... Well, actually, is there anything else you want to say about that Forge way? Because there's other people who have the Forge model, where you automate it in there, but I think that it's really helpful to talk about that model, just because it's so different than the "I choose to push" that we used to do, like, "I choose when to send things up to SFTP. I take the responsibility." If somebody wanted to build something like Forge, like you said, you'd have to build a tool in your server that listens for webhooks, that authenticates against it so we can actually know that it's supposed to get the webhooks, that receives the webhooks, that parses them out to figure out which action should be taken, and then also allows you to configure which actions to take, right? And you could do that on your own. Plenty of people do, but that's what Forge is giving us. Before I move on to the next way of doing it, is there anything else you wanted to say or talk about with the Forge way of doing it?

Chris Fidao:
Forge is always a great tool, and it's a great tool for Laravel people, because it does a lot of work for you. It hides some nasty details. And in doing so, it really hides some gross stuff that you'd have to learn even just to do that "simple" model. It's "simple" in quotes, because application development, web application development, has gotten super complex, way more complex than when we started. We were just pushing WordPress PHP files over FTP. Now, there's queue workers, so you've got to restart your workers if you have queue workers running. Maybe you have a cached config, so you've got to clear your config cache, and all sorts of stuff you might be doing. Maybe we want to talk about this if we get into gotchas, and we can just table that for now, because I could just go into a mini rant about all the weird edge things you need to know about just to get a "simple" deployment to work.

Matt Stauffer:
Yeah. Let's do that in the gotchas, but we might actually be able to do that in just a second. What I'm realizing is that deploying is... I jumped right into what it takes to deploy it in any given moment, like a change. But you can't deploy something until the application is already set up on the server, right? And I do think it's worth us talking about that as well. I should've called this hosting and deploying, right? Let's talk a little bit about what are the things that Forge is doing. When I breezed really quickly through it, let's un-breeze and say, "Okay, the first time I spin up a server using Forge, it's going to spin up a new Ubuntu server with particular configuration settings that I don't have to learn, because Forge does it for me. What type of stuff am I getting that isn't there by default on an Ubuntu server, that Forge or some of the other tools we'll talk about later automatically set up for me?"

Chris Fidao:
Right, yeah. Deployment really is where the rubber meets the road, because you get into all the server things that you really wish you didn't have to know. But all of a sudden, you have to know them, because your code is working on the server, but your code deployment process is also making changes on the server-

Matt Stauffer:
Working on the server. Yeah.

Chris Fidao:
... and it has to do it in a way that actually works. Almost all the complexities of that revolve around Linux permissions. Just so much of it does.

Chris Fidao:
Anyway, you're going to give Forge permission to create a server on your account, DigitalOcean, AWS, Vultr, whatever. And it spins up a server and installs a bunch of stuff on it, so it's installing Nginx and PHP, PHP-FPM, which is a little thing that sits between Nginx, the web server, and your application code. It translates a web request into PHP and spins up processes to actually process PHP requests, web requests, whatever. And it configures permissions in a way that is actually not standard to what you get out of the box when you install that stuff yourself manually. And this is to help line up permissions to make your life easier.
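As a rough sketch, the provisioning being described is the kind of thing you would otherwise do by hand on a fresh Ubuntu box; package names and the PHP version here are assumptions, not Forge's actual provisioning script:

```bash
# Sketch of the base install a tool like Forge automates on a new Ubuntu server.
sudo apt-get update
sudo apt-get install -y nginx php8.1-fpm php8.1-mysql php8.1-mbstring php8.1-xml \
    php8.1-curl composer git redis-server
```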

Chris Fidao:
If you start asking people about best practices, that's a whole nother piece of dogma that you just have to learn to either accept or ignore. If you accept it, you're going to be doing a lot of work, and if you ignore it, then you figure out how to straddle the line to make things secure, but also not a pain to use. I'll get into what I mean by that. The best practice for everything, and this is, again, getting into Linux permissions, is to have all your processes, your PHP code and your web server and all that stuff, run as a user that doesn't have permission to do a lot of stuff on the server. You can't log in as that user, for example. That user can't use sudo, that kind of thing. On Ubuntu, you might have seen this user: www-data is the default one used for web stuff. That is the user that's going to run your PHP code. Therefore, that's the user that needs permission to write files, like your log file, for example, in your Laravel application.

Chris Fidao:
And out of the box, you can't SSH in or log in as that user on your server, which means if you run some code on your server, like a bash script in your deployment scripts, that is probably going to be run as a different user, like user ubuntu if you're on AWS, or whatever user you create on the server if you're doing it yourself and not through Forge. Which means that user is not user www-data, which means if you do a Git pull as that user and new files come in, then they're going to be owned by a different user that is not the correct user that the web server's running its processes as.

Chris Fidao:
Forge simplifies that by running everything as user forge. Forge is a user that you can log in as. Forge is a user that can use sudo to act as user root, essentially. But it is also the user that PHP-FPM is running as, so your application runs as user forge. So, all the permissions line up when you're doing stuff on your server. You log in as user forge, the user that does deployment is user forge. If it creates new files during the deployment by Git pull or whatever, the permissions just line up automatically. You don't have to think about it. Which is actually a security trade-off. It's not "best practices" to do that. But it makes your life so much easier that it's actually how I build all of my servers now outside of Forge, because the trade-off is completely acceptable to me.
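A small sketch of the ownership mismatch being described, with illustrative usernames, paths, and output:

```bash
# If the deploy runs as user "ubuntu" but PHP-FPM runs as "www-data",
# files created by one side aren't writable by the other.
ls -l storage/logs/laravel.log
# -rw-r--r-- 1 ubuntu ubuntu 1204 Jan 1 12:00 storage/logs/laravel.log

# The classic fix: hand the writable directories back to the web user after each deploy.
sudo chown -R www-data:www-data storage bootstrap/cache

# The Forge-style approach avoids this entirely: the deploy user and the PHP-FPM user
# are the same "forge" user, so ownership lines up without any chown step.
```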

Matt Stauffer:
That's really helpful.

Chris Fidao:
That was long-winded. And also, Linux permissions are terrible. They're not terrible. They're nice. But you have to actually understand them.

Matt Stauffer:
One of the things that makes that very helpful to me is that I think a lot of folks who have not had to do that themselves in the past maybe don't recognize the benefit and value we're getting from tools like Forge and other similar tools, and that sometimes leads them to try to do it on their own. If you don't understand the value you're getting, then you might not appreciate it.

Matt Stauffer:
So, I think the biggest thing from that is to say that's just one tiny, tiny, tiny piece of managing your own server. Right? That's just one. And I think the biggest message I would say to new people is you might look at the price of something like Forge and just say, "I'm not sure if I can afford that." Well, there are cheaper options that are more limiting, but I would say that spinning up your own DigitalOcean box is not a good fit for anybody who does not want to become a fairly deep-level system administrator, basically. We used to have to do that. It was awful. We all worked with super, super limited hosts, and when Forge came along and automated this process on full-access hosts where we could do everything, and yet it still saved us all of this, that was why Forge blew up so much. It's because we have full control over it. It's not this super-limited environment, and yet, we don't have to know how to set all that up exactly the right way.

Chris Fidao:
Yeah, definitely. It being your own server on your own account, with Forge just layering that control stuff on top of it, is very nice, but also, you have full access to the server. Like you said, it's not a Bluehost back in the day where you only get user access in cPanel and limited ability to do anything.

Matt Stauffer:
Yep. Yep. When you set up your whole server, you've got all those different things running, and Forge sets up other really nice ones that you may or may not use in yours: you've got Redis, you've got Memcached, or whatever these things are. But this is not actually a Forge sales pitch. It's really talking about what does it look like to deploy a Laravel app to a server. First thing you need to do is... Permissions is actually one of the most common things that I see when people are trying to host their own Laravel app, so I'm really glad you got into that, because when people don't want to use Forge or whatever, they run into certain problems pretty consistently, and permissions is really high on that list.

Matt Stauffer:
There is another just really quick nuance note, since we're talking about shared hosting. Some people do use shared hosting for Laravel apps, and the number one issue I see with that, and I'm sure there's other issues that you might be able to bring up, is that a lot of them don't know to correctly point their server, or they're not allowed to, at the public directory. So, they're instead serving their app out of the root directory, which means that anybody could come along and just type yourwebsite.com/.env and get access to all your credentials. If you're tempted to put something like this on shared hosting, be very, very careful. No matter where you're hosting it, you must point the web root at the public directory, no matter what. Are there any other notes you would have on shared hosting, if anybody's going to end up doing that?

Chris Fidao:
Yeah. Many, to my knowledge, don't let you run extra processes like you would to run a queue worker, so you end up with... I don't know what people do. Maybe they use Cron, and you can only have one job run every one minute or something. I don't know.

Matt Stauffer:
Yeah, exactly.

Chris Fidao:
But that's a big limiter, because queue jobs, queue workers have become so mainstream, because Laravel makes them so easy.

Matt Stauffer:
Yeah. Yeah. That's a great point. And if you have not used queue workers yet, you may not recognize it, but a lot of the tooling that's built into Laravel, you don't even have to build a queue job for it. You can just mark it as queueable, so you can queue up all your mail, you can queue up all your events for all these things, and it just makes them asynchronous for you. And if you're on a server that already has something like Redis running, you can spin up queue workers really easily. It's basically for free. You're offloading that work to the server, so it's not running synchronously with the user's request. But if you find yourself in something like shared hosting, you're not able to do that, so just be wary of the limitations you're introducing yourself to if you choose an environment like that.

Matt Stauffer:
But I don't want to go there all day long, so one last thing before we move on to the next type of thing. That was talking about spinning up servers, but we also have to spin up a site. Can you talk us just really quickly through what a tool like Forge is doing when I go, "Add new site, mybestwebsite.com"?

Chris Fidao:
Right. On the server side, you're thinking?

Matt Stauffer:
Yep. Yeah.

Chris Fidao:
Okay.

Matt Stauffer:
Just sites available and that kind of stuff.

Chris Fidao:
Yeah. It's creating a site, which means there's a few components that have to change on the server. One, Nginx has to know about it, Nginx being the web server that's accepting web requests. It gets a configuration file, that configuration file sets up a new site and says it's served out of this directory, which will be the public directory of the application code. Nginx gets a configuration, and that is almost it, because everything else is already pre-configured, because at that point, it's just offloading to PHP-FPM, except Forge has a newer feature where you can segment every site to run as its own user, I think. If they're segmented, it might do some extra configuration there. In that case, PHP-FPM gets its own configuration to have its own little separate set of processes that's run by a different user.

Matt Stauffer:
Okay.

Chris Fidao:
I don't even know if they do different users in Forge. It sounds like they would, but I haven't played with that feature to see. But that's the typical way to segment the processes so the PHP processes don't intermingle with each other. It's just like one user has this set of processes, this pool of processes to run one application, and then there's another pool of processes that serves the application for your other sites on the same server. I think that's basically it, at least on the Nginx side, because PHP-FPM is, for the most part, just running in the background, serving requests or whatever it gets.
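As a sketch, the server block a tool like Forge writes for a new site looks roughly like this. The real generated config includes more (SSL, gzip, headers), and the paths and PHP-FPM socket here are assumptions:

```bash
# Write a minimal server block for the new site (illustrative only).
sudo tee /etc/nginx/sites-available/mybestwebsite.com > /dev/null <<'EOF'
server {
    listen 80;
    server_name mybestwebsite.com;
    root /home/forge/mybestwebsite.com/public;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php/php8.1-fpm.sock;
    }
}
EOF

# Enable the site and reload Nginx.
sudo ln -s /etc/nginx/sites-available/mybestwebsite.com /etc/nginx/sites-enabled/
sudo nginx -t && sudo service nginx reload
```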

Matt Stauffer:
Yeah. If you have a domain name, or you have five domain names, and they're all pointing to the same IP address, that means that same server, its Nginx is getting all those requests, and it matches each of those requests against one of those configurations that Forge or whatever else spins up. And that configuration says, "Should this incoming request match against me or not?" And that's based on basically what domain name it is. And if it does match, then it does all sorts of internal Nginx parsing, which you can learn about later if you want, that eventually sends it over to your public/index.php for that particular app, and then Laravel handles it there. I think that's what I wanted to handle in terms of the Forge side. I want to talk a little about Envoy and Envoyer. Is there anything else that you wanted to talk about before we jump over?

Chris Fidao:
No. I think that's good. As always, there's so many different topics you can get into.

Matt Stauffer:
There's millions. Yeah. The problem is there's so many little things we can get to today that this is going to go for three hours if we don't keep moving.

Chris Fidao:
Right. Just you mentioning, "Oh, yes, it's the Host header that matters, that's how Nginx matches up to websites so that it knows which virtual host to connect to and all that stuff." There's just so many levels and details that go into everything.

Matt Stauffer:
Yep. And I bet you that I don't know half the details that are piquing your brain, which is why you're the one here. Please, just ahead of time, go follow Chris on Twitter and buy all of his courses, because they all are freaking brilliant.

Matt Stauffer:
So, let's real quick talk about Envoy and Envoyer. Again, not to specifically talk about Laravel products, but those are two additional methodologies for thinking about what it looks like to deploy to your server. I think let's start with Envoyer. I want to talk less about Envoyer and a little bit more about just zero downtime deployment just really briefly. So, let's say I spun up my server on Forge, but instead of using Forge's auto-deploy script, like we were just talking about, I just let it sit there as an empty folder with nothing actually there, and I choose a tool like Envoyer or Capistrano or whatever else to do zero downtime deployments. Could you tell us a little bit about how a zero downtime deployment system works?

Chris Fidao:
Yeah. First and foremost, why Forge Quick Deploy is not zero downtime itself is because when a deployment is kicked off, the script is run, and it's going to do a few things. One is a Git pull, so the code is updated at that point. That's not really instant, but it's close enough to instant that it doesn't matter. But then also, it may build static assets, which means there might be a period of time, like 20 seconds, between when it starts building your assets and when it finishes, so you could have this mismatch where it's serving old assets, like old JavaScript from your previous commit, and it hasn't caught up to the latest because it hasn't finished building your static assets and that kind of thing.

Matt Stauffer:
It does a Composer install, too, right?

Chris Fidao:
Oh, God, you're right. We didn't talk about Composer.

Matt Stauffer:
We didn't mention it in the last one. Yeah.

Chris Fidao:
I totally forgot. Yeah. Of course. All that good stuff. Absolutely, if you have any packages and stuff, it has to do a Composer install, which means people might get errors if your newest code tries to use code that's not there yet. And then this isn't even getting into if you have multiple servers that you're deploying to, but let's just table that for now, because that's a lot.

Chris Fidao:
That's why it's not a zero downtime deployment, because there's stuff that has to happen, and it doesn't all happen instantaneously. So, a zero downtime deployment is this concept that... I think Capistrano is the first way I came across it, and then-

Matt Stauffer:
Me, too. Yeah.

Chris Fidao:
... I learned Envoyer's been using it, and it uses symlinks, and uses them in a funny way. Or not a funny way, but it uses them in a nice way where you can switch from one directory that your application is in to the other one. And it does it atomically, so it's quick and correct, and there aren't any weird situations where you end up serving from two places at once.

Chris Fidao:
That concept is that your application is actually in one or more folders, and every time you have a deployment, it creates a new folder, puts the application in there, does all the steps that it has to do. And once it's finished with all those steps, then it swaps over so web traffic is being served from the new directory with your latest code in it. And that's using a symlink. The symlink is like an alias, right? It points to one directory, but that's a fake directory. That fake directory is actually pointing to a different location, and you can swap what that different location is. So your symlink is in your home directory, /home/forge or whatever, for example at app.com/... what is it, "current" that it uses in Envoyer?

Matt Stauffer:
Yep. Yeah.

Chris Fidao:
So, the current directory is your symlink, and that symlink is pointing to a real directory that actually exists that has your application currently in it, and that's named based off the commit SHA or a date or something. I forget what the directory name is. And then it just switches what that current directory points to after your deployment is finished, and the new application code with the new static assets and the new Composer dependencies and all that stuff is present and ready to have web traffic pointed to it.
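A sketch of that releases-and-symlink flow, with illustrative paths; this shows the pattern, not Envoyer's actual internals:

```bash
# Each deploy gets its own timestamped release directory.
RELEASE=/home/forge/app.com/releases/$(date +%Y%m%d%H%M%S)
git clone --depth 1 git@github.com:example/app.git "$RELEASE"

cd "$RELEASE"
composer install --no-interaction --prefer-dist --optimize-autoloader
npm ci && npm run production

# Shared files and folders live outside the releases and get symlinked into each one.
rm -rf "$RELEASE/storage"
ln -nfs /home/forge/app.com/shared/.env "$RELEASE/.env"
ln -nfs /home/forge/app.com/shared/storage "$RELEASE/storage"

php artisan migrate --force

# Only now does "current" switch to the new release; Nginx serves out of current/public.
ln -nfs "$RELEASE" /home/forge/app.com/current
sudo service php8.1-fpm reload
```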

Matt Stauffer:
Yeah. That's perfect. You got it. And the only other little gotcha there just to mention is that if your Laravel application has certain folders or files, for example the .env file, that you want to be consistent and aren't in the Git repo, you can put them at the web root, and then tell Capistrano or Envoyer or whatever else to symlink them into each build. So, every time it deploys a new folder and it's a new copy of this app and it's running npm install and stuff, it'll also symlink in your .env, it'll symlink in your uploads folder or whatever else it ends up being.

Chris Fidao:
Yeah, your storage directory has a lot of stuff in it.

Matt Stauffer:
Exactly, so you have persistence between all of them. Great. All right. And then last, before we move on to the more complex ones like Docker and all that kind of stuff, let's talk about Envoy, which I don't think people talk about very often. And again, Envoy is just a particular instance of this pattern, which to me is having your deploy scripts written in something like Bash, although Envoy is written in Blade, where it's all managed locally. And I know that you did Vaprobash quite a while ago, which was not this, but it was the same idea, like building a whole bunch of Bash tooling around your deploy and your local environments. But let's talk more about using Bash or Envoy or something like that for deploys. If you were to choose today to not use something like Forge and instead to manually trigger every single one of your deploys to an existing server using something like Envoy, not to be mistaken with Envoyer, what does that process look like?

Chris Fidao:
Right. That is a class of tool that is basically, I think, usually referred to as a SSH task runner.

Matt Stauffer:
Oh. That makes sense.

Chris Fidao:
Python Fabric is very similar, and I think Envoy has taken a lot of ideas from that.

Matt Stauffer:
Cool. I know that.

Chris Fidao:
And it uses PHP and Blade, whereas Fabric, I think it's called, is Python, and probably uses Python for templating. I don't even know if it does templated stuff. So, it's an SSH task runner, so it'll SSH into a server and run tasks. So, very much like the Forge Quick Deploy script, it's just running scripts against a server. In this one, it's run locally, probably, although you could run it from some other places. But it'll SSH into one or more servers and just run the commands that you give it. It uses Blade, so you can actually template some of it, so you can have some elements that are more dynamic than-

Matt Stauffer:
Yeah. Like loops and stuff.

Chris Fidao:
Yeah, exactly. In that case, you can do anything you want. You could do a Capistrano-style deployment, which is actually something I did. I have a deploy PHP course, and I actually use Fabric in that instead of Envoy, but I built up the Capistrano-style things in that course, where it does the symlink thing and the zero downtime deployment. So, you could do something simpler, where it's just Git pull and Composer install, or you could orchestrate your own zero downtime deployment-type script. And in that case, you're just running it whenever you want, as opposed to whenever you push to Git.

Matt Stauffer:
Yeah, and you're saving yourself from having to integrate with GitHub or anything like that, and you can make it a manual thing that you run. But on the other hand, every time you want to run the thing, somebody has to go to their terminal and type php envoy, or whatever it is, run deploy or run whatever. But the cool thing about what you just said there is I love the name, because I've always tried to tell people Envoy is not just for deploys. It's for anything that you want to run on remote servers.

Matt Stauffer:
So, let's say you find yourself once a week, or every time X thing breaks, having to SSH into one server and run the same thing. You can build an Envoy script for that and just say, "Envoy, run whatever," and it's just going to SSH into one server or multiple servers, and just run those commands for you. Which is cool, because it now locks that stuff in the code, right? The Envoy configuration is code that can be committed into your repo.
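For a concrete feel, here is a minimal sketch of an Envoy file and how you might run it; the server address, paths, and task contents are illustrative:

```bash
# Envoy.blade.php lives in your repo; this just writes out a minimal example of one.
cat > Envoy.blade.php <<'EOF'
@servers(['web' => 'forge@203.0.113.10'])

@task('deploy', ['on' => 'web'])
    cd /home/forge/mygreatwebsite.com
    git pull origin main
    composer install --no-interaction --prefer-dist
    php artisan migrate --force
@endtask
EOF

# Run the task from your local machine whenever you decide to deploy
# (assuming Envoy is installed as a project dev dependency).
php vendor/bin/envoy run deploy
```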

Chris Fidao:
Right, yeah.

Matt Stauffer:
Yeah. Okay. Now, I'm ready to move into the wide, wide world of stuff outside of Laravel. It's a little terrifying. I've got serverless, I've got AWS, I've got Docker. Where do you want to start in these things? Where would your next description to people about... If you're going beyond that world, where do you go?

Chris Fidao:
All right. Where do you go? You don't have to containerize everything, in other words, use Docker. Docker is not simpler, although, after you do a ton of work, you get to a point where it feels simpler, which is... I don't know. Docker is... Okay, actually, let's talk about it this way.

Chris Fidao:
If you set up some automation so that you build artifacts out of your application and then you deploy the artifact, that's another way to go. What is an artifact? An artifact is like if you have some process that you do locally or automated, whatever, it bundles your application into something that you could just save somewhere, like Amazon S3. It's just a .zip file maybe, and that .zip file might have all your Composer dependencies, it might have your production, static assets, JavaScript, CSS, and your application code and all that stuff. And instead of putting it directly on a server by running Git pull or whatever, it's actually just a ZIP file that's in S3 or something. And then you have those assets stored and archived, and you can pull from any of them that you need, and your deployment script can pull from that, as opposed to running a Git pull. That's the idea of deploying from an artifact and building an artifact when you deploy. By that point, it's not even a deployment. That's just a build. You build up your application, and then you store it somewhere.
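A sketch of that build-an-artifact idea, assuming an S3 bucket (the name here is made up) and an already-configured AWS CLI:

```bash
# Build everything once, up front, and park the result somewhere durable.
SHA=$(git rev-parse --short HEAD)
composer install --no-dev --optimize-autoloader
npm ci && npm run production

# Bundle the application (excluding things the server doesn't need) into a single zip.
zip -rq "app-$SHA.zip" . -x '.git/*' -x 'node_modules/*' -x 'tests/*'
aws s3 cp "app-$SHA.zip" "s3://my-deploy-artifacts/app-$SHA.zip"

# A later deploy step downloads and unpacks this exact build instead of running git pull.
```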

Chris Fidao:
Docker is another kind of artifact. Your container, or your Docker container, can run processes in it. It can run Nginx and PHP-FPM and all that stuff, which is typically how you use Docker for local development. If you do that, your code is still on your local workstation, your Mac, your Windows, whatever, and Docker is just the process that's running it, which is just a way to segment server stuff between projects locally. You could run MariaDB in one project and MySQL in another, and they don't conflict.

Chris Fidao:
But Docker in production is... The idea that is usually espoused is to make it an artifact that has your code built into it, so it might have processes like PHP-FPM and Nginx and all that stuff, but it also has a specific build of your code inside of the container. And then you can have multiple versions of your Docker image. Your image is the thing you build that has that... How do you describe images? An image is like a class, and then your Docker container is the instance of the class, right? You're building Docker images. You're not building Docker containers.

Chris Fidao:
The artifact is your Docker image, and you have different tags and different versions of your image. One example is that people usually tag or name a Docker image based off of the commit SHA that was built, so you have a mapping of a commit SHA in Git or GitHub or whatever to a specific Docker image that has that version of the code in it. And then you can deploy your Docker image and run it as a container wherever: Kubernetes, Amazon ECS, a Docker Swarm thing, a Nomad cluster, all the millions of ways that exist to run Docker or containers in general, because Docker isn't even the only way people run containers anymore. Docker is just the runtime; what you're really running is a container.
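In Docker terms, producing that artifact might look roughly like this; the registry and image names are assumptions:

```bash
# Build an image containing this exact version of the code, tagged with the commit SHA.
SHA=$(git rev-parse --short HEAD)
docker build -t registry.example.com/myapp:"$SHA" .
docker push registry.example.com/myapp:"$SHA"

# Deploying is then telling whatever runs your containers (ECS, Kubernetes, Nomad, ...)
# to start serving this tag instead of the previous one.
```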

Matt Stauffer:
Yeah. So, I've used Docker locally, and I've got some friends who tell me about how Docker's the future, and you can have the exact same environment running locally and remotely. Should I jump right into Docker in production? Who should use Docker in production?

Chris Fidao:
There are different ways that Docker is nice, and whether you want to use it in production or not is whether you can capitalize on the ways that it is nice. One, it is not simpler, because you have to learn Docker on top of everything else, and Docker is basically learning servers again, because it's still an Ubuntu-based image, you still have to know how to install stuff and configure stuff. You don't get away from that.

Chris Fidao:
And then there's extra complexities, because you have to figure out how Docker networking works. Getting web traffic into a Docker container from the outside world is its whole other thing. And if you have multiple Docker containers, because the whole point oftentimes is to load balance between containers and different clusters, different servers inside of a server cluster, then you have load balancing involved. And then reverse proxies, ones that are very smart and use key-value stores that have their own cluster stored somewhere, and the key-value stores know where each instance of the Docker containers is running, and they can tell the load balancer dynamically how to balance traffic and what servers the instances of a container actually live on. All this stuff, all your networking, service mesh, whatever keywords you can throw at it, it's not simpler, except some services may make it simpler. Amazon ECS is fairly simple. You can use their serverless version of it and get a container up and running fairly quickly, except there's so many options there that even that gets confusing.

Chris Fidao:
So, the nice part about Docker is that it is a self-contained thing that you can run on anything that can run a container. The underlying server operating system matters less. The only thing that matters is that it has some kind of runtime that can run a container. And then deployment also becomes a little bit nicer, because you have to have a process that automates the building of the Docker image, the artifact, we were calling it. But then once you have that artifact, then deployment becomes a little bit nicer in many situations, because you just replace one Docker image with another, and you no longer have to worry about the orchestration of where the npm run production is happening, that kind of thing.

Matt Stauffer:
It's sort of like that Capistrano flow, but instead of for just your code, it's for your entire server ecosystem.

Chris Fidao:
Yeah, and it happens ahead of time somewhere, and it's not done... That step is already done, so it's not necessarily part of your deploy process. That's the building of your application, and that might be done separately from deployment. So, you have the option, then, of saying, "Okay, this has been ready and built for a week, and it's been tested, but now, we can actually deploy it."

Matt Stauffer:
Yeah. That's cool. Yeah, and like you said, you're not just testing the code. You didn't just build the code. It's actually the entire thing that has been tested, because you're running that same Docker image locally, that same image you built, that artifact, you ran an instance of it locally, tested it all locally, and when it runs in production, it's the same image running in the same architecture, basically. You probably have the closest local production parity of any of the systems, right?

Chris Fidao:
Yeah, it certainly is closer. Not always exact, but definitely usually closer. And the ways that I've seen it not being exactly the same is locally, I'll have Xdebug enabled or something, and in production, you don't. But you could also build your image in a way that it gets disabled in production, so it's still the exact same Docker image, if you want. But sometimes, there's differences.

Chris Fidao:
I think the dream of Docker is what Kubernetes is trying to be, which is you have some YAML files in your code, so you have infrastructure as code, and that describes everything, and then you push that up, and then that just magically happens. But between those two things happening is so much complication. It's ridiculous. And simplifying it is that there are services that do Kubernetes for you, like DigitalOcean or Google Cloud or EKS, which is Amazon's managed Kubernetes stuff. But even then, there's still a lot of knowledge needed about what Kubernetes is doing to totally understand what your YAML files are telling it to do.

Chris Fidao:
But, at the end of that, you could potentially have your own little Heroku, where you just push code, and it just runs somewhere, and it's magic. But that's very complicated to get to, and I've only really heard about that being done in team environments. Not small teams, but medium to large teams.

Matt Stauffer:
Yeah. Yeah, it seems to me like Kubernetes and even Docker in production in any shape makes a lot more sense when you've got teams, when you've got devops people, or ops people, where it's their sole job to think about these types of things. It's not the type of thing where I think, "Hey, we're a startup with three people or five people," or, "Hey, I'm a solo developer," or anything like that, that it's worth the work. The benefit is beautiful, and we can all see the benefit, but the cost seems to be something that's not necessarily going to pay off well on smaller projects, smaller teams, individuals, or anything like that.

Chris Fidao:
Right. If you're up to the point where making a Docker image isn't a scary thing and it makes sense to you, then maybe Amazon ECS with the serverless version of it is easier to get set up and up and running, because that's fairly easy. But that's all relative, because that also assumes knowledge of the AWS, which is a whole other-

Matt Stauffer:
Right, yeah. How easy is anything in AWS?

Chris Fidao:
Maybe that's a lot easier. Exactly.

Matt Stauffer:
Yeah. And there's probably at least one or two people listening to this who have learned all these things, and I think one of the things I would note, for anybody who is one of those people and who isn't, is that to people who have learned everything it takes for Docker, Docker seems really simple, because there's a lot less work that you're doing once you've learned it than you would in other systems. Right? It does automate certain things. However, I think a lot of folks who have learned Docker really well have forgotten the learning curve, or haven't actually run into the full learning curve. It's that same thing where somebody's new at something, and so they tell everybody about how amazing it is, but not actually having fully-

Chris Fidao:
Yeah, that's true.

Matt Stauffer:
... encountered all of the drawbacks.

Chris Fidao:
My best years of doing the Servers for Hackers-type content of teaching people are all behind me, because now, I have so much assumed knowledge that I just don't remember that I didn't know it.

Matt Stauffer:
When you didn't know. Uh-huh (affirmative).

Chris Fidao:
I could never teach programming to someone who hasn't done programming before, because I just can't get in the beginner's mind anymore. Programming and basic server stuff is just... I don't know.

Matt Stauffer:
It's so ingrained.

Chris Fidao:
I don't remember when I didn't know anymore.

Matt Stauffer:
Yeah. I've found that to be the case with a lot of Docker people. They just say, "I don't understand why you wouldn't recommend making your own Docker Compose file for this." And I just say Docker is wonderful. I built a tool around Docker. You built courses around Docker. But it is important to know that for newcomers, Docker in production... I'm just going to say my official opinion is if you are a newcomer, if you are not on a team, if you're a solo programmer, if you're a young startup or something like that, Docker in production's not for you. Sorry to anybody who that offends, but it's a wonderful tool that is a lot of cost. Chris didn't say that. He didn't give me a thumbs-up when I said that. But I'm going to say that's my official opinion.

Matt Stauffer:
But I do think that there's something that gives us a lot of this value that is becoming more approachable because of Vapor. So, can we talk a little bit about serverless? I don't know how much of a serverless guy you are, but I know that you at least understand the concept. Could you just give me a real quick introduction to what serverless is and what it's like to use it with Vapor?

Chris Fidao:
Mm-hmm (affirmative). So, the general idea of serverless, the joke people say is that it's someone else's servers and not yours, which is absolutely true.

Matt Stauffer:
Yeah, it's not serverless.

Chris Fidao:
It's technically correct. You are paying a premium to not have to worry about the server. Inside of AWS, which is Amazon Web Services, which is one of the larger serverless places, places that provide serverless services. I keep saying the word service and server and services all over again.

Chris Fidao:
But it looks like a few different things. The main thing that everyone is thinking of as serverless, especially in terms of Vapor, is Lambda, which is just running a function in the cloud, essentially, and it just runs inside of a thing, and you don't have to care about where that thing is. It's serverless, because that's not a server. The way it is presented to you is that you write code that all executes inside one function, and it just runs and finishes and tells you the result. But then you can stack on other tools to make it more complicated and more useful, so you stack on API Gateway. API Gateway in AWS is a thing that just accepts web requests, HTTP predominantly, but others as well, and then does something with that web request. That something might be... God, I don't know. Oh, it might just go right to a different EC2 server, a different server, right?

Matt Stauffer:
Yeah.

Chris Fidao:
It might go to just make a job in SQS, which is a queue job. It might save a file to S3, or it might fire off a Lambda function, a serverless Lambda function. What Laravel Vapor is doing is that it sets up an API Gateway for you. If you have a domain, like myexample.org, whatever, that gets pointed to AWS, to its API Gateway. So, API Gateway accepts the web request, API Gateway is going to spin up a Lambda function for every web request it gets, and then Laravel has done fancy things with Lambda so that it can run an entire Laravel application inside of that function call that gets run. I call it a function call, because it's literally given to you as... give us this bit of Python or JavaScript or Golang or something, and it's just going to call this one specific method, and then you do inside of that method whatever you want.

Chris Fidao:
Within Lambda, you can build your own custom runtime, so Taylor has built a PHP runtime within Lambda, so it's running actual PHP code in there. And that spins up a Lambda thing, and then when it's done, Lambda goes away, and then it will essentially handle "unlimited" traffic, because AWS's ability to handle all that traffic, and run a Lambda function every time it gets a web request, is vast. It has a lot of capacity. Not infinite, but certainly lots, which is why you also need to control your cost by limiting how many concurrent Lambda functions can get called at a time, that kind of thing. So, if someone decides to attempt to DDoS you and run your bill up, usually, they could, except for those concurrency controls that you can put in place.

Matt Stauffer:
Yeah. Yeah, because normally, your cost in DDoS is limited, because at some point, the DDoS just takes your server down. But in Lambda, it just keeps scaling and scaling and scaling, as do your costs.

Chris Fidao:
Right.

Matt Stauffer:
Okay. So, the magic of Vapor is even more magical than Ubuntu, because I don't think a lot of us are building our own PHP runtimes. A lot of people, for the longest time, just said it wasn't possible to run an application stack like Laravel in Lambda, in part because Lambda seems to be more around functions. Like you said, it's literally called a Lambda function. You are sending it. The idea would be like, "Hey, I'm going to go write a single function in JavaScript or something like that, and that single function in JavaScript has something that I want to do almost like a worker, right? I want to pass in an input, an upload, and then I want it to do some work on it and spit out the thing." But Taylor was actually getting to the point where your entire application is crammed down into that function, so this is not the type of thing that we're doing on our own, right? If you want to do something like this with Laravel, you're basically going to use Vapor or a similar tool.
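From the developer's side, a Vapor deploy ends up being roughly this; a sketch, assuming the Vapor CLI is installed as a project dependency and a vapor.yml already describes the environment:

```bash
# Install the Vapor CLI (here as a project dev dependency) and deploy an environment.
composer require --dev laravel/vapor-cli
php vendor/bin/vapor deploy production
# Vapor packages the app, updates the Lambda function and API Gateway wiring,
# and runs the build/deploy steps defined in your vapor.yml.
```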

Chris Fidao:
Yeah. Outside of Laravel Vapor, the other ways I've seen Lambda talked about the most in devops Slacks and stuff I'm in, it's just to use it as part of a tool chain to get some tasks done by processing some bit of data that comes in, or as workers for queue jobs, queue workers, as we said. I haven't seen many people talk about serving a real entire application out of it, except for maybe a few specific API endpoints. It's been done, but Laravel Vapor specifically is definitely doing this kind of special thing of getting an entire application into it.

Chris Fidao:
Although, Lambda just came out with the ability to run containers within it, which Vapor can do now. I don't know for sure, because I haven't done this whole process myself, but I think that makes it a little simpler. If you can build a Docker container, then you can get that PHP run time in place without having to go through the hoops that you had to before.

Matt Stauffer:
Yeah, okay. So, we are at 50 minutes already. We have talked about what deployments are, we've talked about Forge, we've talked about basic Ubuntu server and site setup, we've talked about SSH task runners, we've talked about zero downtime deploys, we've talked about Docker in production, we've talked about Kubernetes, and we've talked about serverless and Vapor. Before we actually get into what I hope to be a decent amount of just tips and gotchas, are there any other ways that you think we should address that people are often thinking about or doing deploy and hosting for Laravel that we haven't covered?

Chris Fidao:
Yep. Of course. There's too much to know.

Matt Stauffer:
Yeah. There must be something we missed. We probably missed seven different options.

Chris Fidao:
We talked about permissions. Permissions is the biggest thing. That still sticks out in my head as a huge thing. There's file permissions, like who owns the file and who can do what to the file. There's process permissions, so PHP-FPM needs to be able to write to your files. Also, your PHP code needs to write to files, like the log file that Laravel creates.

Chris Fidao:
Then there's other things like restarting services, which usually requires sudo. You can do sudo service php-fpm reload, which you might do if you have Zend Opcache enabled. That's something Forge actually does for you as well, but it sets up sudo in a way that it doesn't get prompted for a password so that you can actually automate that, using what's called a sudoers file. It defines who can run sudo and how. It's not just magic that it can do anything. The forge user can use sudo, but it needs a password. But you can tell it to not need a password for any sudo command, or you can say, "You don't need a password to use sudo for specific commands," which is what Forge sets up. So, you can say, "PHP-FPM reload, without needing sudo." But you might need sudo... I'm sorry, without needing a password.

Matt Stauffer:
Needing a password. Yeah.

Chris Fidao:
But you might need a password to do PHP-FPM stop. It gets pretty granular. The sudoers is the other thing I have in my notes here that we didn't really talk about there.
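For illustration, a sudoers rule along those lines might look like this; the file path, user, and service name are assumptions:

```bash
# Allow the forge user to reload PHP-FPM (and only that) without a password prompt.
sudo tee /etc/sudoers.d/php-fpm > /dev/null <<'EOF'
forge ALL=(root) NOPASSWD: /usr/sbin/service php8.1-fpm reload
EOF
```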

Chris Fidao:
We talked about queue workers, which is the thing you can set up. You need another program to monitor the worker processes so that they don't die and just stop. That's usually a program called Supervisor, although there are others. You can install Supervisor and configure it, which is again a thing Forge will do for you, and tell it to monitor your queue worker, so if your queue worker stops, it'll restart it. Cron is always a thing. I think everyone's kind of familiar with Cron, because that's always been given to you by cPanel-type hosting to run periodic tasks. We talked about making artifacts instead of just doing a Git pull immediately.
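A sketch of that Supervisor piece; the program name, paths, and options are illustrative, and Forge writes something similar for you when you add a worker:

```bash
# Define a supervised queue worker and load it.
sudo tee /etc/supervisor/conf.d/queue-worker.conf > /dev/null <<'EOF'
[program:queue-worker]
command=php /home/forge/mygreatwebsite.com/artisan queue:work --sleep=3 --tries=3
user=forge
numprocs=2
process_name=%(program_name)s_%(process_num)02d
autostart=true
autorestart=true
stdout_logfile=/home/forge/mygreatwebsite.com/storage/logs/worker.log
EOF

sudo supervisorctl reread
sudo supervisorctl update
```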

Chris Fidao:
We didn't really talk about deploying to multiple servers, which is a whole other huge topic, but the tl;dr of it is that you have to pick your trade-offs around downtime. You could spin up an entire new cluster of web servers, get your latest code onto that, and then switch traffic over from the old cluster to the new cluster. Or you could just run git pull on every server consecutively, but you can get into weird states with your code where one server has one bit of code and another server has the newer code. Or you could try to do it as concurrently as possible, although you can still get into weird conditions where they're serving different things.

Chris Fidao:
And any of those are valid. For example, one application I work on at work that's not Help Spot is Thermostat, which is an NPS survey app. Our deployment process is kicked off from continuous deployment, a CI pipeline, and it updates the servers one at a time, and we found that's acceptable at the level of traffic it gets. It hasn't been an issue that one application server had one version of the code and another didn't; it hasn't caused errors. So, it's totally valid to do that kind of thing, because it's simpler to reason about.
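
A bare-bones sketch of that "one server at a time" rolling deploy, driven from a CI job, might look like the following. The hostnames, paths, and PHP version are hypothetical, and the final reload assumes a passwordless sudoers rule like the one mentioned earlier:

```bash
#!/usr/bin/env bash
set -euo pipefail

SERVERS=("app1.example.com" "app2.example.com" "app3.example.com")

for host in "${SERVERS[@]}"; do
  echo "Deploying to $host"
  # Update one server fully before moving on to the next
  ssh "forge@$host" 'cd /home/forge/example.com \
    && git pull origin main \
    && composer install --no-dev --optimize-autoloader \
    && php artisan config:cache \
    && sudo service php8.2-fpm reload'
done
```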

Chris Fidao:
But when you have multiple servers, you have to care about a lot of other things. It's way more complex than one server. Typically, that means a load balancer and caring about all the different moving pieces. Your database can't be on just one web server anymore; it has to be centralized in its own separate location. Same thing with Redis, same thing with state. That's beside the point, but in terms of deployment, you have to figure out how you want to deploy and what's important to you, like zero downtime versus a quick deployment versus a slower deployment cycle. If it's slower, then do you deploy automatically, or does someone have to push a button to deploy it? If it takes 20 minutes to deploy, then you can't just deploy every time someone pushes to Git, because that could be happening multiple times a minute. So, multiple servers is a whole other ballgame that you have to figure out.

Matt Stauffer:
So, say I've got maybe a simpler multiple-server setup. One of the things Envoyer lets you do is say which of your servers receive deployments. Let's say I had three servers, they're all in DigitalOcean, they're all using DigitalOcean's load balancer, which makes things pretty simple. You don't have to be a super genius to do that, but it's still tough. But I could do it, and that says a lot. One of the things it lets you do is pick which servers are getting which commands. Are there any tips or gotchas around recognizing that most things you're going to want to run on every server, but you might not want to run php artisan migrate on every server, because then it might be trying to migrate the same database three times? Which, again, Laravel doesn't care about that much. But are there any other things we should be thinking about in that multi-server setup, things that should only be done once, or is most of it stuff that happens on every server?

Chris Fidao:
Yeah, that's interesting. Migrations especially will probably work out, because Laravel tracks which migrations have run, so it won't run the same one twice. Except if you're running them concurrently, then maybe you get into a weird condition where the database hasn't yet recorded that a migration ran and it gets run twice. So, running that on just one server is a good idea.
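
One simple way to act on that, sticking with the hypothetical hosts from the rolling-deploy sketch above, is to let every server pull the new code but only run the migration on a single designated server:

```bash
#!/usr/bin/env bash
set -euo pipefail

SERVERS=("app1.example.com" "app2.example.com" "app3.example.com")
MIGRATION_HOST="app1.example.com"   # the one server allowed to run migrations

for host in "${SERVERS[@]}"; do
  ssh "forge@$host" 'cd /home/forge/example.com && git pull origin main'

  if [ "$host" = "$MIGRATION_HOST" ]; then
    ssh "forge@$host" 'cd /home/forge/example.com && php artisan migrate --force'
  fi
done
```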

Chris Fidao:
But anything else there is very dependent on your setup, so I can't say for certain, but it's anything where concurrency matters. So, anything against the database, because your database is probably in the one separate location that's not on each server, because you want your data to be in one place. And anything that touches state has the potential to be an issue like that: anything pushed to S3, anything in your database, anything in your Redis or your cache, anything like that. State is where things get complicated, state being your data. If you have multiple servers and a load balancer, you're probably not saving any kind of state on your web servers or application servers. You're probably storing it in a centralized database, in a centralized Redis store, in a centralized S3-type file store. In that case, that's when you have to care about concurrency.

Matt Stauffer:
Which one's doing what or whatever. Okay. We've done so many things right now that both you and the listeners are like, "Oh my God, there's so much." And there is so much, right? This is why you have so many courses, this is why there's so many different things here. But let's recap real quick.

Matt Stauffer:
If you're just getting started, we would definitely recommend, I would recommend considering starting with Laravel Forge. I wouldn't recommend the more complex things like... To me, it's always about YAGNI, which is you ain't going to need it, and KISS, keep it simple, stupid. Those are so valuable for so many aspects of programming. I think it applies in devops as well. A lot of people are saying, "Hey, we're a brand-new startup, and we don't have any traffic right now. Do we need 17 auto-scaling blah, blah, blah?" I'm like, "No, throw up a thing on Forge."

Matt Stauffer:
Granted, every once in a while, we do get a client who says, "Because of a marketing campaign, we're going to go from zero to three million on day one." Okay. Well, we can talk about it then. But usually, when they do that, they've already got their own teams working on it, and if not, that might be a time for you to start saying, "We need some dedicated time purely to figure out this hosting situation."

Matt Stauffer:
But for the average Laravel project, throwing something up on a one or two or three or five or eight-gigabyte DigitalOcean server on Forge has been more than enough, because PHP is super performant. MySQL, especially with some caching, is super performant if you write your queries well. Laravel is super performant, so the likelihood you actually need a lot of these complexities for performance reasons is actually pretty low. It's more like it makes sense to do what makes the most sense for your team's makeup, right? Do you have a massively complex team where deploys are really difficult because you got 17 different people, and it's in six different environments, blah, blah, blah? Okay, maybe you need a complex deploy system. Are you one programmer pushing something up so that your 600 clients can use it? Don't do all that stuff. Just throw it up in Forge and get back to programming, right?

Chris Fidao:
It's all very driven by your needs, and if you're not sure what those are, then you might not have those needs, so the simplest-

Matt Stauffer:
That's great.

Chris Fidao:
... is the best. And really, if you have something where you're worried about getting all that web traffic, then you can pay $40 a month instead of $15 a month and go to Vapor. And then you have the extra cost; AWS costs extra and all that stuff. But if you're worried about having a campaign where millions of people come in and you're right now on a single DigitalOcean server, then moving to Vapor is the next easiest thing: you're still not managing stuff, but it can handle that traffic.

Matt Stauffer:
Yep. And if you want to upgrade from Forge but scale isn't the reason, and it's more along the lines of zero-downtime deploys and that kind of stuff, or multiple servers, Envoyer's a really nice upgrade to Forge. So, you do Forge, and then you move to Envoyer when you need zero-downtime deploys or when you need more than one server. That's the same idea to me.

Matt Stauffer:
So, the really big question for me about do I upgrade and start using Envoyer, do I upgrade and start using Vapor, really is just around that question of, "Well, why am I upgrading?" Just like you said, right? It's about your needs. Why am I upgrading? Am I upgrading because I want to run exactly the same way and I just have the need for completely flexible scale, or do I just need to bump it up to two servers? Am I getting complaints because the deploys are introducing four minutes of Composer errors, and we can't have that, and we need zero downtime deploys or whatever?

Matt Stauffer:
Okay. So, you are the deployment guru. If everybody walked away from this having heard all the things we've said, is there one thing you wish we had talked about, where you'd say, "Oh, man, everyone gets stuck on this," or, "If only everybody knew this one thing," that we haven't gotten a chance to talk about yet?

Chris Fidao:
No. The big thing that always sticks out in my head is permissions, just Linux permissions. Figure that out and understand that.

Matt Stauffer:
Is there any really good resource you know about learning about that, or should I just Google and find some good links?

Chris Fidao:
No. I have stuff on Servers for Hackers, specifically because I haven't found a good resource, so maybe some articles I have on there are a good place to start.

Matt Stauffer:
All right. I'll try to get those links in the show notes for you then.

Chris Fidao:
Cool.

Matt Stauffer:
All right. Normally, I end by asking if there's anything else we haven't covered, and then whether there are any other really good resources. So, I'm just going to start out with your resources. Servers for Hackers, and literally any course that Chris has ever done, is absolutely fantastic. We'll put a link to every single one of them in the show notes. It's all really good stuff. The people at Tighten all use it at various times, and our most server-minded people are the biggest fans of Chris, which tells you something. It's not the other way around. But other than your own resources, are there any places you would point people to for learning about this kind of stuff in general?

Chris Fidao:
I am in a few different devops Slacks that are public. You can find them and get into them. I ask people questions there all the time, and a lot of people are super helpful, so that's a very good place. And then there's the Laravel Discord that has a servers area and that kind of thing, too. Googling around is still your best bet. You'll often land on a DigitalOcean article; DigitalOcean's resources are really good.

Chris Fidao:
And I've just started in the last year doing that trick on Udemy and that kind of thing, where you go into private browsing mode, and they only charge you $11 for a course instead of $150. That's a good trick to get cheaper courses on Udemy and that kind of thing, and I've done that a few times to pick up knowledge about specific topics. I found a few good courses on gRPC. I'm not going to get into what it is.

Matt Stauffer:
Okay. Cool. You learn so we don't have to.

Chris Fidao:
It's not a server thing. It's more of a programmer thing than servers, so we don't have to get into it. But that was just one example. I also did that for Kubernetes, but I got two videos in and then realized it was old and Kubernetes had already moved beyond it. That was frustrating. That's as far as I got learning Kubernetes.

Matt Stauffer:
Got it. Okay. People almost always reference the Laravel docs, but I don't think the docs talk all that much about deploying, because there are just so many options here. Definitely check out the docs, but I don't think it's something they cover as much.

Matt Stauffer:
The note about DigitalOcean is really good. DigitalOcean, whether or not you're using them as a host, pays people to just write great posts about server stuff, whether or not it's actually about how to do it on DigitalOcean. So, if you end up seeing a DigitalOcean result in your search and you think, "Oh, well, I'm not on DigitalOcean," don't worry about it. It's usually just good stuff, period, not good stuff tailored for DigitalOcean. And of course, they're doing that to earn goodwill with us, which it ends up doing. But just know that you don't have to skip it just because you're not using them as a host.

Chris Fidao:
For sure.

Matt Stauffer:
All right, Chris. For the longest time, everybody in the entire world knew you by one picture, and it was the one picture of you looking down in a blue beanie. Was it blue, I think? Or blue or green?

Chris Fidao:
I don't remember.

Matt Stauffer:
Is that still you?

Chris Fidao:
Yeah.

Matt Stauffer:
Is that still you?

Chris Fidao:
That's not changing.

Matt Stauffer:
Oh, it's still you. Okay.

Chris Fidao:
I don't have the original image, so I can never go back to it. I don't have the hat anymore, so I can't make a new photo.

Matt Stauffer:
Oh, no, it's a great one. And you look like a completely different person. Every time somebody sees you in person, they're like, "Oh, that's not what I expected at all." Of course, all these fun moments are often about very meaningful and significant things about people's lives. I just want to know, is there a story behind that beanie? Do you love it? Do you still have it? Do you remember where you got it from? Do you know anything about that beanie?

Chris Fidao:
I lost that hat so long ago.

Matt Stauffer:
Oh, no.

Chris Fidao:
What I really hate is that I think I lost it in a coffee shop, because I saw the guy working at the coffee shop wearing the exact same hat. I was like, "Oh, I have that hat, too." But then, two weeks later, I realized I had lost the hat, so I think he-

Matt Stauffer:
And you think it was him.

Chris Fidao:
Maybe. I don't know. Who else has the hat? So, I've tried to find it-

Matt Stauffer:
Frigging barista.

Chris Fidao:
... on eBay. But I can't find it.

Matt Stauffer:
Do you know what brand it was?

Chris Fidao:
It was a Banana Republic-

Matt Stauffer:
Banana Republic.

Chris Fidao:
... random hat. I don't even know how I came across it.

Matt Stauffer:
You must've been in your 20s in that picture, right?

Chris Fidao:
Yeah.

Matt Stauffer:
That's a while ago. Okay.

Chris Fidao:
I was working at my first job. I'm only on my second job, which is funny, too.

Matt Stauffer:
It's awesome.

Chris Fidao:
Yeah. How old was I? 24, 25? Something like that.

Matt Stauffer:
Just a baby.

Chris Fidao:
Mm-hmm (affirmative).

Matt Stauffer:
Nice. Well, that was it. I wanted to hear if there was any story behind the hat or anything.

Chris Fidao:
No, there's no story. I took the picture, I added a filter on it, and that was around the time that I made the Fideloper account, too, so I had an old Twitter account that was my other name that I used to use for stuff. And it's been that picture, and I just kept with it. Mostly, I've kept with it because of Ian. Ian was like, "You had a brand," or whatever. It was a brand, and I was like, "You're right. I guess I can't change it now."

Matt Stauffer:
Which is funny. I also have never heard Ian call you Chris once. He always calls you Fideloper.

Chris Fidao:
It's true.

Matt Stauffer:
No matter what. I love it. Well, anything else you want to chat about before we're done for the day?

Chris Fidao:
I don't think so. This topic is broad and deep, and it just gets so complicated. With a lot of the stuff we talked about, I'm almost afraid we went through it way too quickly. Some of this stuff can get really deep.

Matt Stauffer:
Well, I think the good thing is that it was a really good intro to the entry-level stuff, and that's what I wanted, and then touching a toe on a whole bunch of other things. So, if anybody's sitting out there well-actually-ing Chris: first of all, I'm sure you can hit him up on Twitter, but second of all, he knows the things. He's trying to give us a nice, short version, and I think my target market here is people who are new. And I know the people who are new are going to appreciate both the robust introduction you gave us to things like Forge and servers and Linux permissions, and also the quick intro to all the other things, and hopefully the note to say, "Don't worry about those right now." I really appreciate everything you teach all of us all the time, and you spending time with me today to share with everybody.

Chris Fidao:
Right. Sounds good. Happy to.

Matt Stauffer:
Thanks, man. And see you all next time.
