Testing, with Adam Wathan

Matt Stauffer:
Welcome back to the Laravel podcast, season four. Today we're talking with Adam Wathan, who you probably know as the creator of Tailwind CSS and all sorts of other amazing things, but he got his start talking about testing in Laravel, so that's what we're going to do today. Stay tuned. Welcome back to the Laravel podcast, season four, where every single episode is about an individual topic. Today, we're talking about testing with the man, the myth, the legend, Adam Wathan.

Matt Stauffer:
Adam has done a lot of things. You've actually probably heard of him more recently because of things like Tailwind and all that. But one of the first things that Adam did in the Laravel world was introduce a lot of us to, I don't even know the best term for it, a better way of testing. I'm going to call it that. He did a lot of teaching about testing, through tweets and a book and a course and all that kind of stuff. Well, I guess the book wasn't about testing, but yeah, the course, all this stuff.

Matt Stauffer:
Adam, I just talked about you a little bit, but when you meet people, I love hearing... Because we all know who you are, everybody in the Laravel community does. But when you meet people who have no freaking clue that this whole section of the Internet thinks you're God basically, how do you introduce yourself? What do you say you do?

Adam Wathan:
Yeah. These days, I just tell people that I'm a computer programmer, and I run a small company that builds tools that help other people make websites.

Matt Stauffer:
Okay. That makes sense. Do you get people asking to make websites for them? They're like, "Oh yeah. My mom and pop needs a website," or something.

Adam Wathan:
No. I have not had that happen too often lately. But I think it has happened to all of us. Yeah.

Matt Stauffer:
Yeah. I'm curious if maybe it's because you're saying, "I build tools," instead of, "I build websites."

Adam Wathan:
I think it's mostly that I don't meet that many people.

Matt Stauffer:
That's true. Right. There's not much actual socialization is there, in your life?

Adam Wathan:
Yeah. My in-laws and stuff don't need websites, thankfully.

Matt Stauffer:
Yeah.

Adam Wathan:
Yeah.

Matt Stauffer:
Who else do you see?

Adam Wathan:
Exactly.

Matt Stauffer:
I love it. Well, so again, the reason we've got Adam here: there are a lot of different things he knows, but he did Test-Driven Laravel, which is an extraordinarily long and wonderful course teaching all about testing.

Matt Stauffer:
If anyone's not familiar with testing, we're talking primarily about automated testing versus tests that are scripted. I'm not going to go any further into that, because the first question we ask in every single episode is, if you were to describe this topic, and I'm just going to say automated testing, you don't need to worry about PHP and Laravel, to a five-year-old, how would you describe it?

Adam Wathan:
Yeah. I think the way I would describe it is that automated tests are a way to teach the computer to let you know when you've made a mistake, basically, so that you can fix your mistakes before it's too late.

Matt Stauffer:
I freaking love that. That is maybe the most concise answer of the entire season. That's nice. All right. So yeah. So you're teaching the computer how to know when something's not doing what it's supposed to. So the test basically makes sure that it's doing what it's supposed to, right?

Adam Wathan:
Yeah, totally.

Matt Stauffer:
Most often the next thing I ask here is, let's say you were speaking to a programmer who has a little bit of experience programming, but doesn't have experience with automated testing. Could you tell them a little bit more about what that process is like, of teaching the computer how things should function?

Adam Wathan:
I think if I was going to talk to someone who knows how to program, but doesn't have experience with automated testing, I'd probably talk to them a little bit about how they check that their code works right now.

Adam Wathan:
If you're not writing tests, then you're still testing your code, but you're doing it by hand usually, which means usually you make a change, then maybe you delete some stuff in the database, then refresh your site, go back to the page where you create the new product, create the new product, go to the edit page, click the dropdown, switch it to something else, and hope that you don't see the exception screen this time. You know what I mean?

Matt Stauffer:
Right, yeah.

Adam Wathan:
You keep doing that, and checking TablePlus or Sequel Pro until, "There's that value that I was hoping would show up now that I've made this little transformation to this API call," or whatever. We all test our code. I mean, most of us test our code in some way, shape or form.

Matt Stauffer:
Right.

Adam Wathan:
Some people write code that they never execute...

Matt Stauffer:
Set it and forget it.

Adam Wathan:
... and ship to production.

Matt Stauffer:
It's true.

Adam Wathan:
But generally, we do verify that our code works somehow. Automated testing, again, is a way to try and capture that process for verifying that something worked, in code in a way that you can just repeat it over and over again with a keystroke, and take it out of your brain and put it into a system.

Adam Wathan:
So you don't have to always remember every single thing that you have to test, and think about, "Okay, well, okay, I made this change here, and maybe this is... Where could that have caused something else to break that I have to think about? Could that have broken the import system? Maybe there's a way it could have broken the import system."

Adam Wathan:
But if you have a test suite, that to me is the nicest thing about it. All this pressure to just have an awareness of how everything in the system is connected to each other is pulled off of your shoulders and onto the computer's shoulders.

Matt Stauffer:
Yeah. I love that. Even further than you remembering the import system, it's more the ones that you don't ask the question about. It breaks and you're like, "Well, I never thought that might be connected," right?

Adam Wathan:
Yeah.

Matt Stauffer:
How has this broken on prod? I never imagined that this one line of code halfway across the app would have touched it. Turns out it did, and you didn't think about it.

Adam Wathan:
Yeah, especially if you don't realize or the bug doesn't appear until a few days later.

Matt Stauffer:
Yes.

Adam Wathan:
And now it's like, "When was this?"

Matt Stauffer:
"Which commit introduced this?"

Adam Wathan:
"What commit was this introduced in? I have no idea."

Matt Stauffer:
100%. Yeah, I love that.

Adam Wathan:
That's very common, especially if you're working on something with a lot of users in production, probably you catch things like that pretty easily. But for the sorts of apps that I used to work on when I was working with you guys at Tighten, there would be a lot of times where something is pre-production still.

Matt Stauffer:
Exactly, yeah.

Adam Wathan:
You know what I mean? So it's not getting hammered, and all the different end points aren't getting hit all the time. So it could be very easy for something to sneak by and have a hard time tracing it back. So yeah.

Matt Stauffer:
Or it touches a system that people don't use all that much, like the user onboarding. Maybe you've got thousands of users, but you only onboard a user once a week or something.

Adam Wathan:
Sure, yeah. Yeah, totally.

Matt Stauffer:
That's really good. I think one of the things that you mentioned there, and you didn't flesh it out the whole way, is the fact that it's not just about making sure that there are no bugs; it's about taking the responsibility of remembering what testing the app for bugs looks like out of your brain.

Matt Stauffer:
One of the things I love about that is the fact that that continues whether or not you're the person working on the project. The last person's knowledge of how to test it carries on.

Matt Stauffer:
We often get called into work with teams where they say, "Well, the person who wrote all this code left." We say, "Okay." They say, "Can you make some changes?" And we say, "Yeah. Are you going to test to make sure the app still works?" And they say, "We don't know how, because we don't remember all the things to check. We don't know the things that are connected." So unit testing would help you there.

Matt Stauffer:
Speaking of unit testing, I don't want to go into a deep, deep dive of the different terminologies, but could you give me a high-level overview of just a couple of the phrases or terms I'd want to understand in testing?

Adam Wathan:
Sure, yeah. People throw around terms like end-to-end testing, acceptance testing, integration testing, unit testing, isolated unit testing, feature testing, blah, blah, blah, blah, blah. A lot of these things mean different things to different people. I think probably at the top of the pyramid, not the top of the testing pyramid, mind you, but the top of people's mind...

Matt Stauffer:
Common, yeah.

Adam Wathan:
The most common term that people debate the meaning of is unit testing. A lot of people will say that a unit test is when you test a function or a method on a class: I give it these arguments, I get this output or this side effect.

Adam Wathan:
Isolated unit testing, which is what a lot of people mean when they say unit testing, is I want to be able to test that function on its own, without running any other functions, or without talking to an API, or without talking to a database.

Adam Wathan:
Then a lot of other people will argue that unit testing just means testing a unit of the system. Your definition of unit is sort of up to you as a developer and is context dependent. So it might mean a function that calls 10 other little functions. It could mean testing a whole import process. It could mean a lot of things. So I've always found the term unit testing to not be super useful as a categorization concept.

Adam Wathan:
Anyways, I think it's generally more useful to think of tests in, I don't know, two categories, maybe? I usually think of the first as testing features of a system. So, I have something that I know I want the app to do from an end user's perspective. It doesn't necessarily mean that you're testing it through the browser or a headless browser or something, but more just, I have something that I need this project to do in this way, and I want to test it as close to the perspective of the consumer as possible. Those are the sorts of tests that you use to catch regressions and make sure the system's working, the sorts of tests that live around forever. And then there are tests that some people call programmer tests.

Matt Stauffer:
Oh, yeah.

Adam Wathan:
Which is another word for what people often call unit tests. These are tests that you write because they're helping you solve a problem, but that at the end of the day could usually be deleted without losing test coverage. So, this is: I'm working on some little algorithm inside of some feature, and I really want to just focus on this one piece of it. So I'm going to write some tests so I don't have to keep testing it manually, even though maybe I have a bigger feature test that covers the whole import process already, one that's currently failing because I haven't actually implemented it.

Matt Stauffer:
Love that, yeah.

Adam Wathan:
But I'm working on this one little transformation. I write the tests for it, I get it working, and then maybe I keep the test around. But if I deleted it, I know that I wouldn't be losing test coverage, because if I broke the code that was being covered by this test, that bigger feature test would also fail. It might not give me as nice of an error, it might just say, "An exception was thrown, but I don't know why," whereas the programmer test might've said, "You passed a null" when it was expecting a number or whatever.

Adam Wathan:
But those are kind of the two ways I think about tests: tests that I want to keep running to verify that the system is working, and tests that I'm just kind of writing to offload things onto the computer temporarily. And yeah, often you will keep those tests, but the problem is, if you want to make some drastic refactoring that doesn't change the behavior of the system from an outside perspective, it could change the expectation of that test. And now that test could start failing even though the feature test is passing. That's not a good situation to be in, but tests like that, again, are for your benefit as a programmer for solving the problem, not for verifying that the system's working. So you shouldn't be afraid to trash them.

Matt Stauffer:
I freaking love that, because we've had these unit tests and feature tests directories in Laravel, and I was actually just about to ask you about those, but you got to them. So if you're new to Laravel, and you're new to unit testing or feature testing or automated testing in general, and you open those up, you might ask the question of which goes where. And I'll just be honest with you all: the vast majority of the time, I just throw everything in the unit tests directory, because the delineation between the two doesn't matter that much.

Matt Stauffer:
But, this is the most compelling reason I've ever heard to differentiate them because, so I'm going to give an example and please tell me if this is me following it right, because I think that this is more important than anything else we talked about here.

Adam Wathan:
Sure.

Matt Stauffer:
So, if I had an importer... I've built multiple apps where a user can import a CSV of all their contacts, right? So I would have a test that says, "Log in as user one, grab the CSV, and then send it as a POST to the import/create endpoint" or something. And then click a couple of buttons, and make sure at the end that these three users that were in my CSV are also in the database.

Adam Wathan:
Sure.

Matt Stauffer:
So, that's more like a feature test, right?

Adam Wathan:
Yeah.

Matt Stauffer:
A user can import a CSV. And, then the unit test there, one of them might be the CSV reader and there might be a class whose responsibility is taking a CSV and transforming it to an array.

Adam Wathan:
Yep.

Matt Stauffer:
So, the tests on that thing would be unit tests, because later, instead of having a CSV reader, we might have some other kind of system, and we might use a manager system that has multiple different types of... So, the architecture supporting this feature might be different. In that moment, we would change the architecture, and our unit tests around the CSV might break, but the feature of "can I still upload a CSV and see the records in the database at the end" still passes. Is that kind of what you're thinking about for a definition?

Adam Wathan:
Yeah. Yeah. And, even another way to sort of make the point that's using the same example is say you have these unit tests for this CSV reader that basically is testing that you can parse the CSV into an array and that is required for this import process. Those tests are all passing and stuff, but then you discover that someone just released an awesome CSV parsing package. "Okay, I'm going to install that package instead. I'm going to delete my CSV reader class." Now, all of a sudden my unit tests for my CSV reader are failing because the class doesn't exist anymore. But, the feature still works because I've replaced it with the library that did it. You know what I mean?
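To make that distinction concrete, here's a rough sketch of what the two kinds of tests from this CSV example might look like in a Laravel project. Everything here is hypothetical for illustration: the route, the `CsvReader` class, and the table and column names are all made up, not taken from a real app.

```php
<?php

use App\Support\CsvReader; // hypothetical class under test
use Illuminate\Foundation\Testing\RefreshDatabase;

// Feature test: exercises the import from the user's perspective.
// It keeps passing even if CsvReader is later replaced by a package.
class ImportContactsTest extends Tests\TestCase
{
    use RefreshDatabase;

    /** @test */
    public function a_user_can_import_contacts_from_a_csv()
    {
        $user = App\Models\User::factory()->create();

        $this->actingAs($user)
            ->post('/contacts/import', [
                'csv' => "name,email\nJane,jane@example.com",
            ]);

        $this->assertDatabaseHas('contacts', ['email' => 'jane@example.com']);
    }
}

// Programmer (unit) test: pinned to the CsvReader class itself.
// Deleting the class in favor of a package breaks this test,
// but the feature test above still proves the import works.
class CsvReaderTest extends PHPUnit\Framework\TestCase
{
    /** @test */
    public function it_parses_a_csv_string_into_an_array()
    {
        $rows = (new CsvReader)->parse("name,email\nJane,jane@example.com");

        $this->assertSame(
            [['name' => 'Jane', 'email' => 'jane@example.com']],
            $rows
        );
    }
}
```

The point of the split is visible in what each test touches: the feature test only knows about the URL and the database, so it survives refactoring; the programmer test knows the class's API, so it is disposable.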

Matt Stauffer:
Yeah.

Adam Wathan:
So, that's kind of the most heavy-handed example of why that test isn't really that important on its own, because again, the thing that it's in service of is still working and still passing. So you don't necessarily need to keep it around.

Matt Stauffer:
Whereas, on the other hand, if you use that same CSV reader in multiple places around your app and later delete the CSV importer, you may end up keeping the unit tests around. So it's valuable at times to have both.

Adam Wathan:
Yeah. It's complicated, right? One way that I like to think about it, too, is that sometimes your app's unit tests could be another project's feature tests. So, this is...

Matt Stauffer:
Interesting, yeah.

Adam Wathan:
The same way we're kind of talking about this whole pulling-in-a-library thing, say you wanted to take your CSV reader and use it on another project, so you wanted to extract it into a library yourself that you could reuse. Well, now those unit tests are the feature tests of that library.

Matt Stauffer:
Yeah, in your CSV library.

Adam Wathan:
So, you want to test the API of the thing that you're building, but what the API is sort of depends on...

Matt Stauffer:
Like, the end user?

Adam Wathan:
Sort of what's getting reused and what's not getting used. And not even necessarily reused, but yeah, what is being used by the outside world.

Matt Stauffer:
I love that.

Adam Wathan:
And, what's ultimately being used internally and that can change and that can change even throughout the course of a project that is a project that you're working on yourself. Another situation that I've run into where it makes sense to sort of keep unit tests or, I mean, I don't want to get too into the weeds because I feel...

Matt Stauffer:
I know, I'm nerd enough.

Adam Wathan:
...we're sort of talking about abstract advanced concepts maybe.

Matt Stauffer:
Yeah.

Adam Wathan:
Before getting into stuff.

Matt Stauffer:
It's my fault.

Adam Wathan:
But yeah, maybe we can kind of get grounded for a minute here and jump back in and get to some of that stuff. There's definitely this idea of what I like to think of as seams in your application, especially when it comes to wanting to introduce test doubles for things, which is very related to what we're talking about right now. So maybe we can shelve that for a minute.

Matt Stauffer:
Get back to another. Another day. I love that.

Adam Wathan:
And, get back to reality. We can get into that again in a bit.

Matt Stauffer:
Well, I appreciate that. And for those of you who are a little bit more advanced, if you're hearing these things and wish we kept going, well, hopefully I'm going to get to it at the end. But if not, you can hit both of us up on Twitter. I'm sure we'd be happy to nerd out. Well, Adam more than me, but I'll nerd out with you in any way I'm capable of.

Matt Stauffer:
Okay. So, we're going to get back into stuff. So everybody remember, the primary focus of this season is people who are new to the topic. One of the questions I often ask this season is, when's the last time you used this? I'm going to step away from how this works in PHP, because I think we should talk in a second about what this looks like in PHP and in Laravel, but just higher level: do you ever write anything without tests these days?

Adam Wathan:
Yes, but only because I still haven't made investments in certain areas to get fast at it. But, it's very rare that I work on anything without writing tests. So, even today, for example, I've written tests today. I was working on a new feature for Tailwind where we wanted to support using an array for a Tailwind config instead of an object and I had to write a bunch of tests to make sure that all that worked. So I do write tests every day that I program. I think the places where I don't are usually UI code. Do you know what I mean?

Matt Stauffer:
Yes. 100%.

Adam Wathan:
Like testing a view component and stuff like that. I still haven't found a workflow where I feel productive. I don't know. A lot of the time it feels like I want to test this in the browser anyway, because there's this intermingling of both behavior and design sort of happening at the same time, so you end up just playing with things. But I would still like to get better at that too, honestly. So, yeah.

Matt Stauffer:
Me too. And just to give a sense to everybody here about kind of what's normative in the Laravel community. I think it can be easy to hear about automated tests, especially hearing about the idea of design tests and browser tests, and think that you must be really good at those and I'm just going to be super transparent. Very few people in the Laravel community are doing really effective front end tests, other than probably those that are working in full-stack frameworks where they're writing tests of their front end logic in those frameworks, like if you write a component that has some calculations in it or a computed property that you want to make sure that given a certain set of data it computes one way.

Matt Stauffer:
Those things, people are testing. They're not testing nearly as extensively as we as a community test our Laravel code, but those things are getting tested. But especially for things more like clicking buttons, or positioning things, or whether things are hidden or not hidden when it's not just a computed state, kind of that stuff, I feel like we're all at this place of not being sure.

Matt Stauffer:
And I know we've done some Jest at Tighten and some other stuff, but it's definitely not an everybody-does-it thing in the same way that I think PHP tests are.

Adam Wathan:
Sure. For sure. And some of it is just so hard to test. Like, we're working on these UI libraries on our team right now, and Robin, who's building them, has had to invent testing harnesses to make some of this stuff possible. We had to build this React transition component, and writing tests for that, tests that could actually verify that when I click this button, this panel opens, and it takes this long, and it waits for this other stuff... that is some gnarly stuff to deal with. Right?

Adam Wathan:
So I think just the fact of how often you have to invent almost your own testing framework, not in the PHPUnit sense of the word, but your own custom assertions and all that sort of stuff... The amount of work that you have to invest in your own test tooling, I think, can expose how uncommon it actually is. Otherwise more of this stuff would just exist out in the wild.

Matt Stauffer:
So that's, I think, a really helpful transition. Before we talk about the common challenges and gotchas, let's say someone's never written a PHP test in Laravel before. Could you give us a quick walkthrough? Let's say that I wanted to write a feature test. Because, well, would you still recommend that if somebody has never tested before, they should start with higher-level, outside-in feature tests? Or would you start with unit tests?

Adam Wathan:
Good question. I think there are arguments to be made for both. I think it's useful to learn the mechanics of testing in a more unit-testing way. Just, like, okay, I have a function that's supposed to do something like remove the spaces from a string. You know what I mean? Something that's very isolated, a very uncomplicated problem. It doesn't have a lot of weird interactions going on with the outside world.

Adam Wathan:
If you just kind of want to learn, okay, how do I write a test function in PHPUnit? What are the three stages of testing, what types of assertions exist, how do I use PHPUnit data providers, and that sort of thing. Definitely, you're going to be better served by keeping the actual code and problem that you're trying to test as simple as possible, so you can devote your brain power to learning the tool, the features of the tool, and the overall workflow. Right.
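As a rough illustration of those mechanics, here's what a simple PHPUnit test for a string helper might look like, including the arrange/act/assert stages and a data provider. The `remove_spaces()` function is made up for this example; it isn't from the episode or any real library.

```php
<?php

use PHPUnit\Framework\TestCase;

// Hypothetical helper under test: strips all spaces from a string.
function remove_spaces(string $input): string
{
    return str_replace(' ', '', $input);
}

class RemoveSpacesTest extends TestCase
{
    /** @test */
    public function it_removes_all_spaces_from_a_string()
    {
        // Arrange: set up the input.
        $input = 'hello world my name is Adam';

        // Act: run the code under test.
        $result = remove_spaces($input);

        // Assert: check the output.
        $this->assertSame('helloworldmynameisAdam', $result);
    }

    /**
     * The same idea driven by a PHPUnit data provider,
     * so several cases share one test body.
     *
     * @test
     * @dataProvider stringProvider
     */
    public function it_handles_various_inputs(string $input, string $expected)
    {
        $this->assertSame($expected, remove_spaces($input));
    }

    public static function stringProvider(): array
    {
        return [
            ['a b c', 'abc'],
            ['   ', ''],
            ['nospaces', 'nospaces'],
        ];
    }
}
```

Because the problem is trivial, all of your attention goes to the tool: test naming, assertions, and providers, which is exactly the point Adam is making.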

Matt Stauffer:
I love that. Yes. Yeah.

Adam Wathan:
But I think once you get comfortable with that, I would quickly start trying to test some application logic because I think one of the biggest frustrations people run into when learning testing is that a lot of the information out there focuses on these very overly simplistic cases. And when you try to test something that needs to talk to a database or make an API call or queue a background job or send an email, which is 99% of the real work that you're doing, like most of the work we're doing as application developers is making data move from this to this and kind of wiring together other stuff.

Adam Wathan:
Yeah. Testing that stuff can be a little trickier, especially if you try to apply the ideas that are evangelized in very unit testy environments. You know? So I think it's good to start practicing with that right away. As soon as you start to understand the mechanics, best to start trying to apply it to the real work that you do.

Matt Stauffer:
I love that. Yeah. And I was tempted to start asking you questions about the really rudimentary basics of how to do it. But I think that's not time well spent, because you can learn those rudimentary basics in a million different places: assertTrue and assertFalse, the steps, and stuff like that.

Adam Wathan:
Sure.

Matt Stauffer:
I don't think that that's worth your time. So I would say to anybody, if you've never actually seen a line of PHPUnit code, if you wouldn't mind, just hit pause real quick and go Google something like "introduction to PHPUnit syntax." If I find a really great one, I'll throw it in the show notes, so check the show notes too. There might even be a free Laracasts video on that. If so, I will link it in the show notes too.

Adam Wathan:
For sure.

Matt Stauffer:
Just get that in your brain so that I'm not going to spend Adam's time right now doing that. So let's say everyone we're talking to understands basic PHPUnit syntax. They understand assertions, and they understand the stages of testing and everything like that.

Matt Stauffer:
But one last question before we get into it. Could you give me the simplest or most commonly useful application test, as a good starting place for somebody who's now comfortable with unit testing and understands the mechanics of PHPUnit? Where do you think the average Laravel app would benefit from having its simplest and easiest-to-understand-and-step-into application test written?

Adam Wathan:
Yeah, sure. So the example that I've given a lot in the past, when I've done testing workshops and stuff, is testing a Twitter clone, and how to start with testing an application like that. The example that I've always started with... I think there are a couple of different ones that we could start with, but they're all similar in a sense. Maybe something like viewing a tweet. You know what I mean?

Matt Stauffer:
Right.

Adam Wathan:
Like being able to go to a URL, like twitter.com/mattstauffer/abc123, and testing that the right tweet comes up. Right?

Matt Stauffer:
Yep.

Adam Wathan:
So testing that will force you to touch a bunch of different things in the Laravel testing ecosystem, which will help you get familiar with some of that stuff really quickly. So we can touch on some of the testing basics while we talk about this, because it is all relevant. The three phases of a test that people talk about generally are arrange, act, assert. At the beginning of the test, you set up the world the way that you want it. And in, like, a hyper unit test, usually that's just, okay, the input string that I want to strip all the spaces from is "hello world my name is Adam." You know what I mean?

Matt Stauffer:
Right.

Adam Wathan:
That's like your arrange. All you're doing is creating like, okay, what's our starting point? Then your act phase is where you take that input. You do what you need to do with it to get the output. So usually that's calling the function that you passed the parameter into. And then there's the assert phase, where you take the output of that or the state of the world, more likely.

Matt Stauffer:
Right.

Adam Wathan:
In the case of an application test, you ask some questions about it to make sure that the answers are what you expect. Like, is this string now the same thing but with the spaces removed? Or, in an application world, does the tweets table have one row in it where the tweet's content is this? From a Laravel application testing perspective, if we're testing that we want to be able to see a tweet, then we have to create all the stuff that would be necessary for that to happen, especially if you're doing this from a TDD perspective, which is a little bit different than testing after the fact. I mean, we could talk about both perspectives, but from a TDD perspective, often here you're setting up a world that doesn't even exist yet.
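As a hedged sketch of how arrange/act/assert maps onto an application test that asserts on the state of the world (the database) rather than a return value, something like this is plausible for the Twitter-clone example. The route, factory, and column names here are assumptions for illustration, not from the episode:

```php
<?php

use App\Models\User;
use Illuminate\Foundation\Testing\RefreshDatabase;
use Tests\TestCase;

class CreateTweetTest extends TestCase
{
    use RefreshDatabase;

    /** @test */
    public function posting_a_tweet_stores_it_in_the_database()
    {
        // Arrange: set up the world this feature needs.
        $user = User::factory()->create();

        // Act: do the thing a real user would do.
        $this->actingAs($user)->post('/tweets', [
            'content' => 'Hello world',
        ]);

        // Assert: check the state of the world, not a return value.
        $this->assertDatabaseHas('tweets', [
            'user_id' => $user->id,
            'content' => 'Hello world',
        ]);
    }
}
```

The shape is the same as the string example; only the arrange step (models in a database) and the assert step (rows in a table) have grown to match the application world.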

Matt Stauffer:
Yeah. I love that.

Adam Wathan:
Say you already had all the database tables and stuff for your little Twitter clone, and you were just going to try to arrange things. What you might do is first say, "Okay, well, I know I want to hit this URL." It's mattstauffer/the tweet ID. "I know I'm going to need a tweet, so in my arrange phase I'd better say $tweet = Tweet::create and pass in some arguments. But what does it need? Well, it's going to need to belong to Matt Stauffer."

Matt Stauffer:
Yep.

Adam Wathan:
I'm going to need to say user_id = Matt's ID. It's like, "Okay, well, I don't have a Matt variable, so I need to create Matt now." Above that, I'm going to step back and say, "Okay, well, $matt = User::create, email matt@tighten.co or whatever." That's kind of all you need for this simple test. Those are the two things that you need. Now, in this case, for that to work, those actually have to get saved to a database somewhere. Right? Laravel has a bunch of different ways of doing this. You can use a SQLite in-memory database, which is a really great way to get started, because it basically requires no external configuration and everything just kind of happens in this clean, immutable way.

Adam Wathan:
Sometimes, though, if you start leveraging more complex features of whatever database you're using, you'll have to sort of graduate to running a real test database, in TablePlus, in your MySQL instance, that you have to sort of refresh. I don't know if this has changed since the last time I set up a brand new Laravel app, but there's a trait called RefreshDatabase or something.

Matt Stauffer:
I think that's still what it is, yeah.

Adam Wathan:
I think it's smart enough to sort of know whether you're using SQLite or MySQL and if it's using MySQL, it'll truncate all the tables at the beginning of each test. If it's using SQLite, it doesn't need to do anything. It'll just destroy it and recreate it every single time, but that'll just kind of give you a nice clean database environment. Yeah, we run these two lines of code to create a user, create a tweet, and then we have to basically visit the URL where we expect to be able to get the information about the tweet.
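For reference, a minimal sketch of what that setup tends to look like, assuming a standard Laravel test case. The exact phpunit.xml keys and the trait's internal behavior can vary by Laravel version, so treat this as an approximation:

```php
<?php

use Illuminate\Foundation\Testing\RefreshDatabase;
use Tests\TestCase;

// Pointing the test environment at an in-memory SQLite database
// is typically done in phpunit.xml with entries roughly like:
//   <env name="DB_CONNECTION" value="sqlite"/>
//   <env name="DB_DATABASE" value=":memory:"/>
class ExampleDatabaseTest extends TestCase
{
    // RefreshDatabase gives each test a clean database: with
    // in-memory SQLite the schema is simply rebuilt, while with
    // a real MySQL test database it migrates and then resets
    // state between tests.
    use RefreshDatabase;

    /** @test */
    public function the_database_starts_empty()
    {
        $this->assertDatabaseCount('users', 0);
    }
}
```

The payoff is what Adam describes: every test begins from a known, clean database without you manually truncating tables between runs.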

Adam Wathan:
In Laravel, I think it's just $this->get and then the URL, or you can use the route helper or something to expand the URL. Pass in the URL and you'll get a test response back. Usually what I would do in this case... there are a couple of things you could do. You can get the whole string body of the HTML and make some assertions that the string contains the tweet text and stuff like that, which is a great way to start.

Adam Wathan:
Usually I personally don't assert against HTML responses, because they can be a little bit fragile. It's also just hard to make assertions about them, because the data could get manipulated, or you're dealing with all these HTML tags, and it can be a little bit nasty. The approach I generally take instead is to try to make sure that my templates are as logic-less as possible. Maybe there's an if statement for a thing or two here and there, but generally I'm not doing real work in the template.

Adam Wathan:
The benefit of that to me is that you can use this helper on the test response. I think it's $response->data. I sometimes have to remember which things are macros that I create in projects and which ship with Laravel, but there's a way to ask the response for the data that the controller passed to the view, and just get it raw.

Matt Stauffer:
Oh.

Adam Wathan:
I'll use that to make assertions about, "Okay, the tweet that was sent to the view had this ID, which matches the ID of the tweet that I created and the content matches what I created."

Matt Stauffer:
Oh, I like that.

Adam Wathan:
As long as you feel comfortable that you're not doing anything very weird in your views that reduces your confidence in this accurately testing the system, then this is a nice way to make those sorts of assertions in a way that's very clean and sort of data-driven instead of kind of scraping HTML pages and stuff.

Matt Stauffer:
Yeah. I love that.

Adam Wathan:
I'll usually write an assertion where first I'll check, "Okay." I think you can do $response->view() or something and get the template name that was used, that was returned from the controller.

Matt Stauffer:
Yeah, but there's also assertions for these. I didn't mean to interrupt you, but do you know about assertView?

Adam Wathan:
Oh, yeah. There's assertViewIs, assertViewHas. Yeah. You're right.

Matt Stauffer:
Exactly.

Adam Wathan:
My recommendations here are coming from the pre-custom-assertions era.

Matt Stauffer:
Exactly.

Adam Wathan:
That's even easier. You can just say $response->assertViewIs('tweets.show') and $response->assertViewHas(). There's a couple of ways I think you can do that. You can just pass in an array that has the string 'tweet' and then the tweet, but that'll probably fail because it's not the same object instance as the one that's in your test.

Matt Stauffer:
Yeah, exactly.

Adam Wathan:
It fetched it from the database fresh.

Matt Stauffer:
Yeah. Good point.

Adam Wathan:
What you can do instead is pass a callback. You can say assertViewHas, pass in a callback, and as long as the callback returns true, the assertion will pass. Usually what I'll do there is I'll say, "Return $viewTweet->id === $tweet->id." As long as the IDs are the same, check the content, then you're good to go.

Matt Stauffer:
Nice.

Adam Wathan:
That would be a very simple sort of six-line test you could write that hits the system sort of end-to-end, touches the database as well as the HTTP layer and stuff like that. Yeah, that's kind of a great place to get started. Then there's a lot of other stuff we can get into, like testing queues, and email, and all that sort of stuff. That's sort of our read test, and then you might want to test creating a tweet, right?
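
A sketch of the read test Adam just walked through might look something like this (names are illustrative; this assumes Laravel 8+-style factories, a Tweet model, and a tweets.show route):

```php
<?php

namespace Tests\Feature;

use App\Models\Tweet;
use App\Models\User;
use Illuminate\Foundation\Testing\RefreshDatabase;
use Tests\TestCase;

class ViewTweetTest extends TestCase
{
    use RefreshDatabase;

    /** @test */
    public function a_tweet_can_be_viewed()
    {
        // Arrange: a user with one tweet
        $user = User::factory()->create();
        $tweet = Tweet::factory()->create([
            'user_id' => $user->id,
            'content' => 'Hello, world',
        ]);

        // Act: visit the page for that tweet
        $response = $this->get('/tweets/' . $tweet->id);

        // Assert: right view, right data -- no HTML scraping.
        // The callback passes as long as it returns true.
        $response->assertOk();
        $response->assertViewIs('tweets.show');
        $response->assertViewHas('tweet', function ($viewTweet) use ($tweet) {
            return $viewTweet->id === $tweet->id
                && $viewTweet->content === 'Hello, world';
        });
    }
}
```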

Matt Stauffer:
Yeah.

Adam Wathan:
Often this is a little bit different, because instead of asserting on the content of the HTTP response, you might only be able to check something small about the HTTP response, right, depending on how you're building it. Maybe you're building it as an Ajax call that doesn't refresh the page.

Matt Stauffer:
Oh, yeah.

Adam Wathan:
You might just want to check that you get a 204 response code back, right?

Matt Stauffer:
Yeah.

Adam Wathan:
That's not enough to really know that you actually saved the tweet to the database. Usually what I'll do is a pre-assertion. In your arrange phase, you might say, "Before we run anything, let's just test to be sure that right now the tweets table is empty." We'll say $this->assertTrue(Tweet::count() === 0), or whatever.

Matt Stauffer:
Zero, yeah.

Adam Wathan:
Assert count zero? No, I think you'd have to use assertEquals or assertTrue. I mean, with assertTrue you can test anything, really. That's the only assertion you really need at the end of the day.

Matt Stauffer:
Eventually, yeah. They all boil down to that at some point.

Adam Wathan:
I'll often do that at the very beginning of a test to just say, "Okay, well, let's just make sure that the world is in the state that I think it's in." Then, "Okay, well, to create a tweet, we need to have a user. I'll say $matt = User::create()." Then, to make the request, you can say $this->actingAs($matt)->post('/tweets') and then the content, or postJson, depending on how you're doing it. Then you'll get a response back, but even if that response comes back as a 204, you don't know for sure that a tweet was saved. Right?

Matt Stauffer:
Yeah.

Adam Wathan:
Because you could just write an empty controller action that returns a 204 response and doesn't actually do anything. How do you test that you actually got anything into the database? Well, this is where you have to start asserting about side effects in addition to just the response. Instead of just $response->assertOk(), or assertStatus(204), or whatever, you'd also want to query the tweets table again and see if the tweet is there. It's very simple and it's easy. But if you listen to some of the advice that you see out there, people will tell you this is not a good way to test, which I struggle to comprehend, because it's such a simple and easy way to test that gives you so much confidence in your code. It is absolutely how I would recommend getting started with testing. So you can just make an assertion directly about the contents of the database: $tweet = Tweet::first(), which you can reliably do now that you've made that pre-assertion to check that the tweets table was empty before.

Adam Wathan:
Then $this->assertEquals('Hello, world', $tweet->content). Maybe assert that there's only one tweet in the tweets table, so you didn't actually save two copies of it, or something like that, and you're good to go. So that kind of covers the two most straightforward situations, for how I test Laravel applications anyways. I always think of the routes as really the UI for the app itself. So if you're testing features of the application, that usually means interacting with the routes and seeing what you get back and what it does to the database, at its most simple level. So yeah, that's where I would start.
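
The write test Adam describes, with the pre-assertion and the side-effect assertions, might be sketched like this (again with illustrative names, assuming a POST /tweets route that returns a 204):

```php
<?php

namespace Tests\Feature;

use App\Models\Tweet;
use App\Models\User;
use Illuminate\Foundation\Testing\RefreshDatabase;
use Tests\TestCase;

class CreateTweetTest extends TestCase
{
    use RefreshDatabase;

    /** @test */
    public function a_user_can_create_a_tweet()
    {
        // Pre-assertion: make sure the world starts the way we think it does
        $this->assertTrue(Tweet::count() === 0);

        $matt = User::factory()->create();

        // Act: post the new tweet as that user
        $response = $this->actingAs($matt)->postJson('/tweets', [
            'content' => 'Hello, world',
        ]);

        // Assert on the response *and* on the side effect in the database.
        // An empty controller action could return a 204; only the database
        // assertions prove the tweet was actually saved, exactly once.
        $response->assertStatus(204);
        $this->assertEquals(1, Tweet::count());
        $this->assertEquals('Hello, world', Tweet::first()->content);
    }
}
```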

Matt Stauffer:
I love that. And there's so many good nuggets in there, including lots of ways where I think that this pushes back on some of the criticisms people have had of application testing. And I don't want to spend this whole podcast with you and me just kind of saying, "Here's why some criticisms are wrong." But I feel like people are going to hear those criticisms some time and need to hear at least a baseline response. So, two things that Adam did there that I think are... There's a lot of great things, but there's two that are really great that both have to do with addressing the fragility of application testing that is actually interacting with views.

Matt Stauffer:
A lot of the criticisms of this full application testing come from people saying, "No, you shouldn't do that. All your tests should be in isolation." One of the things that they'll often say is, "How can you test HTML when everything's super fragile?" And there's a really good example in terms of why Adam said don't just test the full HTML string. Before I learned that, I would often have a false positive, because I'd say, "Make sure something shows up on this page," and that same string would be in the sidebar of the page, but it wouldn't be where I wanted it to be in the body of the page. Right? And so, there's ways around that, like assertSeeIn.

Matt Stauffer:
But in general, it is more fragile to do it that way. But this whole assert view has concept allows you to say, "I don't care what the view renders out to be. I just want to make sure that I'm passing the right data into this view and I'm passing it to the correct view." And so, it's both fully encompassing, as long as your templates aren't broken, which you can handle separately and do testing if you want. Then you're getting that right data there, but you're avoiding that whole kind of cruft of testing HTML, right?

Adam Wathan:
Yeah.

Matt Stauffer:
I think it's the same thing on the way in where one of the things that really scared me about this integration application testing was, what happens when I need to test user login? Well, I'm going to have to figure out how to find the email address thing, and then type text into there, and then hit tab, and then find the other thing, and then type text in there, and then click the name of the button. But what happens if the button name changes? And we actually did used to do some of that stuff. And that was super overwhelming because it seems very fragile. Things could break as soon as your user interface changes.

Matt Stauffer:
And so, with Adam's way, you're bypassing that, right? And that doesn't mean that you get exactly the same level of coverage, right? If somebody breaks the form that it's pointing to...

Adam Wathan:
Yeah, if someone breaks the form, if they mess up the URL on the form. Maybe they got a little carried away with multi-cursor in Sublime or whatever, and accidentally changed part of the action in the form. Yeah, you're in trouble. So, it's all about trade-offs. Right?

Matt Stauffer:
Exactly.

Adam Wathan:
But that doesn't mean that you can't do both. So another sort of category of tests that we haven't really talked about that I think is helpful, but I wouldn't use in a TDD sort of way necessarily, is Dusk tests, right? Like browser tests. But I don't think of these as the same as automated tests. They are automated tests, but I don't think of them as the sort of thing that I'm constantly running with a keystroke every time I'm making changes. I think of them as a substitute for a QA department. And so, instead of having a checklist of things that you manually go through the app and test, you write browser tests that automate all that stuff. I think of them as, if you had a QA department, they would probably be writing these Dusk tests themselves.

Matt Stauffer:
Yeah, exactly.

Adam Wathan:
You know what I mean?

Matt Stauffer:
Yep.

Adam Wathan:
But the programmers are probably not. They're writing the stuff that helps them be-

Matt Stauffer:
I love that.

Adam Wathan:
... productive and do the stuff that they want to do. That feels like a whole different category of stuff to me. It's a checklist of stuff you want to make sure is working in the application at a very, very high level, because the reality is too, there's always mistakes that you're going to make or things you forget to test or edge cases you forget to test, and you can only test the things that occur to you to test. You know what I mean?

Matt Stauffer:
Yeah.

Adam Wathan:
So often doing some real manual QA stuff is going to unearth stuff that you might not have written a test case for, or might not have thought of. And if those sorts of tests were very fast to run and fast to write, you probably could just use those as your main tool for testing everything, but it is just a little bit slower. And it also is a little bit indirect, which can make it hard to see why something is broken, if it's broken. Hard to get 100% confidence that you're getting exactly the outcome that you want.

Adam Wathan:
Being able to check that there's three records in the database feels a lot more like, "Okay, I know for sure that it is exactly what I want," than being able to check that there's three LIs in the HTML with a DomCrawler from the browser. It's sort of the right balance, in my opinion, of testing the system from the outside and executing all of it, but also having enough direct insight into exactly the data that you're working with that you can react to failures and stuff more quickly. So yeah, that's how I think about it anyways.

Matt Stauffer:
I love that, because you're bringing up the idea that there's a little bit of a continuum, from how "pure" your unit tests are on one end, to, on the other end, how perfectly you cover every single potential interactive use case. And things like Dusk and JavaScript tests do a great job of handling the one far end, and these hyper-isolated unit tests get you the other end. But I think this functional application test that Adam was talking about gives us more of a middle place, where we're getting broader coverage even if we're not ensuring we get everything.

Matt Stauffer:
And that's why I said, if somebody pointed that form to the wrong place, it could be fixed. But to me, I agree with you completely. I would rather put that in a Dusk test because what I want these PHP-based tests to be testing is that the PHP functions the way I want it. Right? And that's more about once the PHP gets that HTTP post, what does it do? And then what is the state of the backend after that? And I'm a little bit less concerned about the interactions, which feel more safe in Dusk. So I love that, because you're not saying everybody should do the most absolute, crazy, interactive front end tests because there's downsides of that too. And you named some of them.

Matt Stauffer:
But it's just that, purely isolated unit tests are not the only way. Right? And sometimes they also result in people writing really crappy code. Somebody might say, "I want to be able to test this tweet process." And the only way to do that in isolation is to build a class whose responsibility is posting tweets. And then you build a tweet poster and then a tweet repository.

Matt Stauffer:
Exactly. And you just build all this complexity that's not necessary. I would encourage you all, if you're new to this, to try it this way, because it's elegant. It lines up with writing tests in sync with the code that you just wrote. You're not stepping into a very different environment and a different kind of space of thinking. You're writing code in line with the thing that you just wrote. You're writing that assertion about the HTTP post right before you write the controller that handles that HTTP post and does some stuff with it. You're writing that database assertion right before or after you wrote the code that interacts with the database. Right? So it's very in line with what you're writing. So it's a really easy place to start.

Adam Wathan:
Yeah, for sure.

Matt Stauffer:
Okay, so Adam, that was brilliant. I love it. So, because we're as far in as we are, let's step forward a little bit and probably some more things will come up, but what are some things where you think people who are new to testing, whether that is automated testing in general, whether that's testing in PHP or testing on Laravel. What do you think are some common things that trip people up or some challenges they often run into or some ways that you wish that they would think differently when they first got started? What message would you have for people who are just getting started?

Adam Wathan:
Sure. A couple things. The first one is the whole isolated unit testing thing. We've sort of touched on it a little bit. There's a lot of information out there that'll basically spread the idea that if you can't test your code in isolation, then it's poorly designed. It's this sort of guilt-shaming thing that you'll see, and I think that's totally wrong. I think testing your code in isolation leads to hard-to-refactor code a lot of the time. And a lot of the time, if you isolate too much, you get to a point where you're not actually testing anything. You're not testing that the code actually works. You're only testing that the code is written the way you expected it to be written for the tests to pass.

Adam Wathan:
If you're saying, "Assert that the name method was called on user with this argument"... those tools are useful in some situations, but a lot of the stuff out there will make you think that you should be doing that for everything. When you're writing that way, you get to a point where your tests are almost like: assert file_get_contents('User.php') === this string that starts with a PHP opening tag and then has a bunch of namespace imports, you know what I mean? You're just asserting that you wrote specific code; you're not asserting that the outcome of the code is what you want.

Adam Wathan:
That's the extreme, basically, of what happens if you go too far in that direction. So you have to be careful not to mirror your implementation with your tests, no matter what you read from a lot of blogs out there. The thing about educational content on the internet, in general, is that the people who are most excited and passionate to teach topics often are people who don't have a lot of experience in those topics, just people who are really excited about learning them. And what'll happen is, someone who gets really, really good at testing often stops talking about it. It feels solved in their head now, and it's not an interesting problem anymore.

Adam Wathan:
So when I was really getting into testing, I used to talk to Dave Marshall a lot. He's actually the maintainer of Mockery, which is a mocking library, and he's really smart and knows a lot about testing, but doesn't talk about it as much as you'd think, given the wealth of knowledge that he has. He's the most experienced tester that I've basically ever had the chance to talk to about this stuff. And he would basically tell me all the same things that I'm saying now. But if you read blog posts from people who just read the GOOS book for the first time, they're going to be saying other stuff. So you have to be careful where you're getting your information from, and don't automatically assume that because someone wrote something down, it's necessarily good advice. So that would be the first thing.

Adam Wathan:
I think another thing is to be willing to write your own tooling, to make it possible to test the things that you want to test. An example of this is back in the early Laravel 5.0 days, I think. There wasn't really any good way to test email. So you had an endpoint that queued a job that sent an email, and you wanted to test the queued job, and you wanted to be able to assert that somehow an email got sent. The best you could do was trying to mock Laravel's mailer and stuff, and that got pretty nasty. And it led to writing assertions like, "Make sure this method is called with these arguments."

Adam Wathan:
Whereas really, if you take a step back and ask yourself, "Okay, well, what do I actually want to be able to verify? I want to be able to verify that an email was sent to this email address with this subject and this content. So what do I have to build to be able to express that in my tests?" So when I ran into that problem, I built a Laravel mail testing library called MailThief at the time. And I had to do a bunch of work to create a fake implementation of the mailer that had the same API and exposed a bunch of methods for inspecting what mail objects it had received during the life cycle of the request. But at the end of the day, it let my tests be really, really expressive.

Adam Wathan:
And I had to do stuff like that on almost every project. Not relying on mocking and stuff like that, but instead, figuring out, okay, how can I replace this mailer with sort of like a test-friendly mailer, like a mailer that is willing to let me in the back door and say like, come see what came in while the request is happening, you know what I mean? Laravel ships with a lot of stuff that works that way now, but it didn't at the time. And the reason it does now, is because we were working on libraries like that, that kind of exposed that, wow, there's a better way to do this sort of thing.

Adam Wathan:
So it's been very common in projects that I've worked on to have to write tools to be able to make the assertions that I want. And that has always paid off. So never assume that the reason something is hard to test is because it's just supposed to be hard to test and you have to figure out a way to mock it and do all this crazy stuff. Develop a mindset of assuming that the reason this is hard to test is because no one has made it easy to test yet, and "I can be that person, because I'm just as capable as anybody else, and I can do that myself, and it'll take a day. But now I can write the tests that I want and feel kind of confident about things working the way that I want."

Adam Wathan:
So those are probably the two biggest takeaways that I like to tell people for sure. There's lots of other stuff we can get into too, but those are kind of two good places to start, I think.

Matt Stauffer:
That's good. And one note about those: a lot of times, when people have done that, they're responsible for the testing framework existing as it does today. So, you wrote MailThief, and I don't know the story with Taylor, but I know that at some point Taylor saw that, and he also saw the log one. I can't remember who wrote the log one, but someone had written something similar for logging. And those things are now in the core. And Jeffrey wrote the one where you click around on things, and then that got moved into the core. And the thing with using, what's it called, the SQLite in-memory databases for a clean test database. That was something you were teaching a lot of people, and now it's in the core.

Matt Stauffer:
And so when you find yourselves there... you know, Taylor and Adam have often mentioned that what helps them most as programmers is a low tolerance for frustration or something?

Adam Wathan:
Yeah, just like a low pain tolerance, basically.

Matt Stauffer:
Yeah. And so they're like, "I don't want to have to do this crap every time I write a test, so I'm going to write something," you know? And so I love the encouragement to be sensitive to your own pain tolerance in testing. And if you're doing something that's miserable, that might be an opportunity to make it less miserable for you this time around, and then maybe next time, and then maybe for other people.

Adam Wathan:
That's great.

Matt Stauffer:
So since we're coming a little bit close to the end here, is there anything else you'd really like to talk about on this topic?

Adam Wathan:
I don't think so, honestly. I think we've covered some of the big ideas, for sure. I'm trying to look through my Test-Driven Laravel course outline just to see if there's any fun topics, you know what I mean, that are worth looking at. I think maybe the only other one would be testing things that integrate with third-party APIs and stuff like that.

Matt Stauffer:
Yeah. You were the king of that for a long time, and you were asking people really good questions around this.

Adam Wathan:
I mean, that was just the one topic where I felt like no one was giving me a good answer.

Matt Stauffer:
Yeah, you had a good answer.

Adam Wathan:
But I feel like I have a good answer for it now, which is basically if you need to test talking to Stripe or talking to the GitHub API or that sort of thing, mocking that is just not going to prove anything. Saying that, "Assuming that Stripe sends me this webhook... " Well, webhooks are hard. Here's the other thing that we'll get into, is that some of this stuff is literally impossible to write automated tests for that can actually give you any confidence.

Adam Wathan:
But best-case scenario, in a case like Stripe, say you want to test taking a payment. Stripe gives you a sandbox account that you can use for testing, with test API keys and stuff like that. And if you want to make sure that your code is actually talking to Stripe properly, the absolute only way to do that is to write tests that make HTTP requests to Stripe using your test tokens, and then make more HTTP requests to Stripe after that to fetch the data back and make sure that you get the data that you want. So, a lot of people will say, "You should mock these APIs," or, "You should use... " I can't even remember what the tools are anymore, like VCR and stuff like that-

Matt Stauffer:
To replay your...

Adam Wathan:
... which can help, because that can let you sort of do it once and then be able to make those same test assertions offline. But generally, what I find the best approach to be is take the code that's supposed to talk to a service like Stripe, and isolate it into one place, and make the API to it very simple, and basically try to avoid conditionals and stuff in that code as much as possible. Basically, just put as few things in there that can break as possible, and try to extract anything that's complex or fragile out of it, and write unit tests for the class that talks to Stripe. And then in your own code, like your HTTP tests, your feature tests, maybe you want to test a payment form. You don't necessarily want to test that that talks to Stripe correctly; you just want to test that it talks to your sort of adapter correctly.

Adam Wathan:
So, I'll usually replace the Stripe class in the IoC container with a fake, kind of like the mailing thing. That'll let me make assertions about that. And that'll make sure I can run that test offline and stuff like that. And then I can do it fast. But then I'll still have these tests for the actual Stripe adapter, the real one that talks to Stripe, that I can run whenever I want to. And those are what I usually call integration tests, things that integrate with external things.

Adam Wathan:
And there's a risk though when you write stuff this way, that your fake implementation can get out of sync with the real one, or might not behave exactly the same way. So, say, you have a test for... one with Stripe is you might want to be able to test that you can make a payment and that you can retrieve that payment, right?

Matt Stauffer:
Right.

Adam Wathan:
So, you can write tests for that on its own, but there's no guarantee that the fake version that you've built, that uses an in-memory array for storing those payments, works the same way. So, what I do to avoid that issue is I write what I... basically, I call them... I can't even remember what I called them when I came up with a term for them. Interface tests or something. Let's see if I can find these now. Interface... oh, contract tests. That's the word I was looking for. So, basically, I'll write a test that has a bunch of assertions that says, "Okay, when I pay Stripe and retrieve it, I get the thing back." And what I'll do is I'll run that same test against both the fake and the real thing to make sure that I get the same result for both.

Matt Stauffer:
Yeah, love that.

Adam Wathan:
And I wrote a blog post about this back in 2016 called "Preventing API Drift With Contract Tests." So, this is a good way to be able to test stuff that talks to Stripe in isolation, without having to make all your other tests talk over the network, and use fakes for that stuff, but not introduce that risk of them drifting away from each other because you have this test suite in the middle that links to both of them.

Adam Wathan:
And the way that I've usually done it is extracting all the tests into a trait, and then I'll write a test for the fake that pulls in the trait and a test for the real one that pulls in the trait, and then you just add a method on each one of those tests, like getStripeAdapter(). And the fake one just says return new FakeStripe, and the other one says return a real Stripe with these keys and stuff like that. And, yeah, that's been really helpful.
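
The contract-test pattern Adam describes might be sketched like this (all the class and method names here are hypothetical stand-ins for whatever adapter pair you've built; the shared trait runs identical assertions against both implementations):

```php
<?php

namespace Tests;

// Shared assertions: any gateway, fake or real, must pass these.
trait PaymentGatewayContractTests
{
    // Each test class supplies its own gateway.
    abstract protected function getGateway();

    /** @test */
    public function a_charge_can_be_retrieved_after_it_is_made()
    {
        $gateway = $this->getGateway();

        // "When I pay and retrieve it, I get the thing back."
        $charge = $gateway->charge(2500, 'matt@example.com');
        $found = $gateway->retrieve($charge->id);

        $this->assertEquals(2500, $found->amount);
    }
}

// Runs offline against the in-memory fake.
class FakeStripeTest extends TestCase
{
    use PaymentGatewayContractTests;

    protected function getGateway()
    {
        return new FakeStripe();
    }
}

// Talks to the real sandbox over the network; run whenever you want.
class RealStripeTest extends TestCase
{
    use PaymentGatewayContractTests;

    protected function getGateway()
    {
        return new RealStripe(config('services.stripe.test_key'));
    }
}
```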

Adam Wathan:
Now, there's situations where you're integrating with some who-knows API from nowhere, with no sandbox environment, that has not taken any consideration of your developer happiness into account. And in those situations, all you can do is... You only have one real, live integration you can talk to. You know what I mean? And you can't write tests against it, because maybe there's no way to delete the data that you send to it. So, you're just loading up a production database with test data. So, that's no good.

Adam Wathan:
So, in those cases, the most important thing to recognize is that that code is basically untestable. And the best thing you can do is, like I said before, minimize the surface area of that code to be as simple as possible, with no conditionals. Write it. Manually test it. Get it working. Make sure that there's basically nothing that can go wrong with it. And then fake your class that does all that throughout the rest of your application. But you have to just acknowledge and respect the fact that that code is untested. If they make a change to their API, your code can break without your tests failing. But there's literally nothing else that you could've done in that situation.

Adam Wathan:
So, that was an important realization for me when it came to this question of like, "How do you test stuff that integrates with third-party services?" Everyone would just say, "Mock the network. Mock the network. Mock the network." But I was like, "What happens if they change the response format, and my code is expecting this Format A and they gave me back Format B?" And no one would give me a straight answer. But the reality is just if that happens, your code will break, and there's nothing you can do. And the important thing is to just accept that as fact. And knowing that fact, what can you do to basically feel as at peace with that risk as possible? You know what I mean?

Matt Stauffer:
Mm-hmm (affirmative).

Adam Wathan:
Because pretending that you can get around it by mocking the network and stuff like that, is just going to lead you to make sort of worse decisions than just knowing that that literally is the most fragile piece of the code. So, how can we put that up on the top of the bookcase, surrounded by pillows so that if the toddler comes into the kitchen... You know what I mean?

Matt Stauffer:
Yeah. I love that.

Adam Wathan:
Yeah. So, it's important to just, when something's untestable, recognize it as untestable and arrange things accordingly, basically. And that would basically be my advice on testing third-party APIs.

Matt Stauffer:
I love that. And I really do remember the time when you just asked everybody about that. One thing I hadn't heard you say before was this idea that in this class that you're building directly in interface with that, you should keep it as simple as possible. And I'm making assumptions about why you're saying that. Would you actually, just kind of real quick, talk about what is the benefit from removing conditionals and all that kind of stuff to make that class as simple as possible?

Adam Wathan:
Yeah. So, the fewer conditionals there are, the fewer code paths there are, right?

Matt Stauffer:
Mm-hmm (affirmative).

Adam Wathan:
... so the fewer things that can possibly happen, essentially. So, every time you introduce a conditional or a nested conditional, you double the number of code paths in the application, which is just more places where things could go wrong. So you extract all that stuff as much as possible, so that all that can happen is: the data goes in, it makes the network request, the response comes back, and maybe you have some failure handling for if the network request fails or whatever.

Adam Wathan:
But if you can keep it to just that, then there's fewer reasons to ever go and change that code, which means there's less chance of that code breaking. Say you have important business logic in there about, "Oh, we've decided that now when someone signs up, the trial period should be this long instead of that long, as long as they were referred by someone who created their account before this date." If that logic lives in your Stripe adapter and you're touching it all the time... And Stripe is not as bad of a situation because it's tested, but say it's your Authorize.net adapter, and you're in there changing stuff all the time, and there's no way to test that it actually worked, you're just increasing the likelihood of it breaking, right?

Matt Stauffer:
I love that.

Adam Wathan:
You just want as few reasons to ever have to edit code in that file as possible.

Matt Stauffer:
That's great. And yeah, for anyone who didn't totally follow that, imagine that there's an HTTP call that gets made by a method in a Stripe adapter, and then there's some business logic that says, if X, Y, and Z, make this HTTP call with these parameters; if A, B, and C, make that HTTP call. What he's saying here is, keep it so there's only one method for each of those and have something else doing those conditionals. Because then when you test the something else, let's say it's a class responsible for subscriptions or something, you're now mocking this Stripe adapter, and you can just say, make sure that this particular method was called, which is a simple method with no conditionals in it. So this thing stays, not a perfect one-to-one, but like a simple layer between the HTTP calls, with Guzzle or their SDK or whatever, and your consuming code. I love that.

Adam Wathan:
Put the conditionals in the queue job or in the controller or the layer that the controller calls or whatever, but basically, imagine there's a line between the code I can test and code I can not test. And on that side that can't be tested, you want there to be as little code there as possible.

Matt Stauffer:
I love that.

Adam Wathan:
So conditionals, try and throw them over the fence as much as you can. Yeah, instead of having an accept-payment method on the Stripe adapter with a bunch of conditionals in it, just have, maybe there's five methods that all make different requests to the Stripe API for different things. Again, I can't think of exactly what you might do in that case, but even if it's just, okay, instead of having it take a user signup as the argument to paying Stripe or paying Authorize.net, you really just want it to take an amount and an email address and that's it. You don't want it to take an amount and a discount percentage and an email address, because now you have to calculate the discount in the Stripe thing, and there's an opportunity to make a mistake. So if you can just calculate the discount before you pass it in and only pass in the final amount, well, now, there's one less piece of code that you can make a mistake in, right?
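The "fence" Adam describes can be sketched in a few lines. This is a Python sketch with hypothetical names (a `StripeGateway` adapter and a `sign_up` function), not code from the episode: the adapter takes only the final amount, the discount math lives on the testable side, and a test can mock the adapter and just assert what it was called with.

```python
from unittest.mock import Mock

class StripeGateway:
    """Untestable side of the line: relay the request, no conditionals."""
    def charge(self, amount_cents: int, email: str) -> None:
        # In a real adapter this is the only place an HTTP call happens.
        ...

def final_amount(amount_cents: int, discount_percent: int) -> int:
    """Testable side: all the business logic, including discount math."""
    return amount_cents - (amount_cents * discount_percent) // 100

def sign_up(gateway, amount_cents: int, discount_percent: int, email: str) -> None:
    # Conditionals and calculations stay here, "over the fence";
    # only the final amount ever crosses into the adapter.
    gateway.charge(final_amount(amount_cents, discount_percent), email)

# Because the gateway is thin, it's trivial to mock in a test:
gateway = Mock(spec=StripeGateway)
sign_up(gateway, 10_000, 25, "jane@example.com")
gateway.charge.assert_called_once_with(7_500, "jane@example.com")
```

The test only has to assert that one simple method was called with the right final number; it never needs to reach through the adapter into the network.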

Matt Stauffer:
I like that.

Adam Wathan:
It's just pushing that principle as far as you can, basically.

Matt Stauffer:
Yeah. That's awesome. I like the idea of the line: let's just keep the stuff on the side of the line that you can't test as simple, with as few lines of code and as little conditional logic as possible. And having talked, I promise I'll be done at the end of this, but having talked through Xdebug with Derick, the guy who created Xdebug, if anybody didn't see me doing that, I was basically saying the hardest thing to debug is when I've got a whole bunch of loops and conditionals and something breaks, and I don't know if we got inside the if or we didn't get inside the if, which iteration of the loop had this break or whatever. Even when you do find yourself-

Adam Wathan:
If I equals 365-

Matt Stauffer:
Exactly. And so, let's say this stuff is the hardest to test, and therefore this stuff is the stuff you're most likely to get an exception on in prod. Well, you want that to be as easy to debug as possible as well. So if you've got an exception and there's only one method there and there's no conditionals leading to that exception, again, figuring out where it went wrong and the whole stack trace of that is also a lot easier. So I love that.

Adam Wathan:
Yep.

Matt Stauffer:
Okay. So if somebody wanted to learn about testing, I'm just going to say right off the bat, my favorite resource ever is your Test-Driven Laravel course. It's a couple of years old and I still think it's the absolute best resource. And I know it's a paid course and you're a humble guy. You're not just going to tell everybody to do it. So I will. You all should go pay for that. That's what I would say. However, outside of that, since I've already plugged that, are there any other resources, and especially, let's say somebody just wants to get started for free, where would you turn them to learn about testing in Laravel or testing in general?

Adam Wathan:
For sure. I think honestly, I will still recommend some of my own free content. If you go and look at conference talks that I've given: I gave an hour-and-a-half-long Test-Driven Laravel talk at Laracon in 2016, and I gave a talk at Laracon Online called Lies You've Been Told About Testing, which I think is one of my favorite talks that I've ever done.

Matt Stauffer:
I love it.

Adam Wathan:
That had a lot of good stuff in it. Honestly, I bet there's lots of good stuff on Laracasts too, but I haven't watched it because, like, I feel like I know it now. You know what I mean? So I haven't looked at it in a while. I can also recommend some of the resources that I kind of learned testing from, too. If I had to just pick a couple, there's a book called The RSpec Book, which, you'd think it's on using RSpec, the Ruby test framework, and it's an old book, but it's actually more on testing applications with BDD, using RSpec as the tool.

Adam Wathan:
I guess it's 10 years old. It's literally a ten-year-old book, maybe older, because who knows if this is a second edition or whatever. But that book is so good, and everything I learned about testing in Laravel basically came from reading books about testing in Rails, books about testing in Java, and trying to translate that information. So that's sort of the first place where I picked stuff up from.

Adam Wathan:
The Growing Object-Oriented Software, Guided by Tests book is good too. Although be careful not to interpret it as "you should mock everything," because it's not really about that. But they do get into a lot of sticky situations where they explain that you should do that, so I could see how that could be misinterpreted. And then there's Practical Object-Oriented Design in Ruby by Sandi Metz. That just has a chapter on testing, and that chapter in that book is one of the best testing resources I've ever read.

Matt Stauffer:
I totally agree.

Adam Wathan:
So that's what I would check out myself. Maybe you know what some of the more modern, kind of current Laravel testing resources would be.

Matt Stauffer:
I feel like Jason McCreary did a free one, but I could be wrong. If he did, and everything that Adam just said, I'll put up in the show notes. And I also think, yeah, he did basically a really long blog post called Start Testing Your Laravel Applications about a year ago. So I'll link that in the show notes. I'll also go through Laracasts and see if there's any free testing courses in there, and I'll make sure I link them because they're sure to be good. I'll also link to the Laravel docs about testing because they'll be good.

Adam Wathan:
Yeah definitely.

Matt Stauffer:
I want to agree with you, and I want you to correct me if I'm wrong, Adam, but if you're just getting started: all the written resources, all the books that he just mentioned, are really, really good. But your average person is probably not going to get started with The RSpec Book.

Adam Wathan:
No. Definitely, if you want to get deep into the origins of some of these ideas, then I would start there. But if you just want to know what to do, then you can just benefit from everyone else's information harvesting and check out some of the talks I've given or articles that I've written. My blog, in the archive section, is just loaded with testing content.

Matt Stauffer:
There's so much in there.

Adam Wathan:
Jason McCreary, definitely. I kind of forgot that he did this Confident Laravel series recently, too, which is specifically about how to add tests to an existing Laravel app. Whereas a lot of my stuff is more about test-driven development, and I know lots of people are in situations where they didn't write tests for their app because they were moving fast and trying to get things done. And then a couple of months later, all of a sudden it's like, "Oh crap. Now I wish I had tests." So that would definitely be something worth checking out, too.

Matt Stauffer:
Yeah, that's good. And I do want to say that, while I think Adam's course is absolutely freaking fantastic, you can learn everything you need to know for free on any of the things we've talked about. These courses just help you get through it a little bit faster and easier, and a little bit more robustly. So, if you are just looking at saying, "You know what? His course looks amazing. I can't afford that right now," totally understood. If you're thinking about it, think about it for sure. But, just go consume all of Adam's free stuff. Go consume all of Jason McCreary's free stuff. Go consume Laracasts' free stuff. Between those three, you're going to get a lot in there, but you can also go look at a lot of open source code bases that are in the Laravel world. We've linked to them previously. But, if you just go look at any of Adam's, I think you probably still have at least one or two open source projects out there?

Adam Wathan:
Most likely. Yeah.

Matt Stauffer:
Tighten has some. Tighten has some, and you wrote the code in some of them that we have. Just go take a look at the tests in any of the Tighten open source repos or anything like that. There are people out there doing testing in the open. So, you don't have to pay money. But especially if you've got a company and you're trying to get your whole company set up, well, the fastest way would be something like Adam's course. But there's options available for all of you out there.

Adam Wathan:
For sure.

Matt Stauffer:
All right. So, the last thing at the end of every single one is a personal fun moment specific to the person I'm talking to you. And I had a really hard time deciding, because there's 17 different things that I know that you're interested in, that I could really nerd with you about for a while or I wanted to have you teach me about. All this weightlifting stuff and this kind of stuff. I almost asked you about your food intake recently because it's super interesting, but I realized the one I want to know more than anything is I know that you're big into, and I'm sorry if I use the wrong word, but metal or hardcore. I don't know what the right genre is.

Matt Stauffer:
And I've done a little bit of screamo, some Norma Jean and some Underoath in my past, but it's not the same. So, I wanted to hear from you, now you don't have to say perfectly, but if you were to pick one intro song to introduce all of us, and you can take your time. We'll edit out the space, the time we need. What would be the best song, maybe not the best song ever, but at least the best song for people who aren't in that world.

Adam Wathan:
No, I think I have an answer.

Matt Stauffer:
Oh, all right. What is it?

Adam Wathan:
I think, well, I mean, it depends on if I'm trying to get people to... Oh, yeah, exactly. Right. It depends what the goal is. I would say if I had to just pick one song that is... If I had to be stuck on a desert island with one metal song for the rest of my life, it'd be Ghosts of War by Slayer. That song, there's just this riff in the middle that, to this day, even 20 years after hearing it the first time, every time it comes on, I get goosebumps on the back of my neck, you know what I mean? That is what I live for with music. That physical reaction to hearing something. I can feel it right now. Just imagining it, it's like, "Whoa, what a cool riff."

Adam Wathan:
So, that would be the one track that I would recommend. And yeah, I'm trying to think if I have anything else to really say on that front. This morning, for reference, I'll tell you what I was listening to, because I think metal, as a genre, is quite vast, and people's ideas of what it is... A lot of times, two people can both be metal fans who hate all the music that the other person listens to, right? So, I was never really into screamo bands or anything that was remotely radio-friendly, I guess.

Adam Wathan:
But, this morning I was listening to this band called The County Medical Examiners, which is, I love it, a band that's not even a real band. Everyone in the band uses a pseudonym, and their whole story is that they're all medical doctors who remotely started this band, a gore metal band basically, when really they're all just members of other metal bands. But, no one knows who they are still to this day.

Matt Stauffer:
Really? That's awesome.

Adam Wathan:
Yeah. But, yeah. That's the stuff that I listened to day-to-day. The Slayer stuff is, that's my classic. My favorite band of all time. But in a lot of ways, the least extreme music I listen to, so that's a good little sampling of my day-to-day listening.

Matt Stauffer:
So, is there any song that you can tell us that either would be the one that would totally wow other metal heads? Or the one that maybe is a little bit more extreme? Or is it hard to even really share-?

Adam Wathan:
It's so many. I mean-

Matt Stauffer:
It's... Okay.

Adam Wathan:
It's hard to pick just one.

Matt Stauffer:
Got it.

Adam Wathan:
But, if I think of one, maybe we can stick it in the show notes or something.

Matt Stauffer:
Yeah. Yeah. For sure. I've got a couple of weeks. So, if you think of anything else, message me, and I'll put it in the show notes. But I'll at least get the Slayer one in there for sure. I'll also link The County Medical Examiners for sure.

Adam Wathan:
Yeah, that'll probably be the first backlink to them on the internet in the last four years. Yeah.

Matt Stauffer:
That's awesome. Awesome. All right. So, the last thing of every single one is I want people to be able to follow you and pay you money and let's assume that not everybody already knows those things. So, tell us about Twitter. What projects are you on these days? What's up?

Adam Wathan:
Yeah. So, these days, my main focus is running a little company called Tailwind Labs. We created Tailwind CSS, and now we work on other stuff related to that. So, we have Tailwind UI, which is basically a Tailwind CSS component directory. That's a commercial product that people can check out if they're interested. But, just generally, if you want to keep up with what I'm working on, you can just follow me on Twitter, @adamwathan. I mean, I tweet all day, every day, so you'll see what I'm up to pretty quick, because I like to kind of work in public. If you want to listen to me talk more, I host a podcast called Full Stack Radio. Lately, I've been co-hosting it with my friend Jack McDade. We're kind of switching up the format a little bit, going from the interview format to just me and him catching up every week or two and talking about what we're working on, which has been fun. And so, that's a good way to get some behind-the-scenes details too. But, yeah. That's kind of it.

Matt Stauffer:
You guys still have a newsletter at Tailwind Labs, right? Or is it not active?

Adam Wathan:
In a sense, but it's so buried that no one signs up for it anymore. I need to figure out a way to solve that problem because I really would like to have a place to just be able to email people cool stuff like when a new Tailwind release comes out or when I opened a PR for something that I think is worth explaining some interesting ideas behind or something. And I think people would be interested in that too.

Adam Wathan:
I mean, it's hard to find a place on your website to put a newsletter thing that people are ever going to interact with. You know what I mean? Because it's in the footer, it's just like, oh, yeah, it's the token newsletter sign-up that's in every footer on every site that no one uses. But, it would be cool to figure out a way to get people to actually sign up for something. The perfect world is just RSS makes a comeback, and then I can just post stuff on a website and people can subscribe that way. Because I don't think... I don't really care about being able to force my way into people's inboxes or anything anyway, right? But it's nice to be able to have a real audience somewhere that's not on Twitter, where they could ban you or whatever.

Matt Stauffer:
Yeah. Exactly. Where you're totally beholden to some corporation.

Adam Wathan:
Yeah. Exactly. But, yeah.

Matt Stauffer:
So, RSS, we need to bring back RSS, is the thing.

Adam Wathan:
There's JSON Feed that never really took off.

Matt Stauffer:
I know. I mean, I still write code. I build little internal tools for Tighten that use RSS on a weekly basis. I literally just got off a call yesterday with somebody about: consume these seven RSS feeds, put them together in this thing, and then do this and expose and add them, whatever. So, I'm still on the RSS train, but we need to hype it up a little bit more. Need to make a t-shirt for it or something.

Adam Wathan:
There just needs to be a cool RSS reader. Someone needs to just like-

Matt Stauffer:
Yes. I wish I had the resources to... I mean, I feel like it must take incredible resources, right? Otherwise why would they all shut down?

Adam Wathan:
I don't know. I think there's just no money in it or something; that's probably the real reason, unfortunately. And even as someone authoring content, there's more money in having an email list than an RSS feed. Even though, ethical isn't quite the right word, but in the sense of empathizing with fellow developers, I love the idea of just being able to... I would love it if I never had to have an email list and could still reach all the people that I wanted to reach, just because they had a place where they looked to see if new articles were posted every day from the people that they were interested in hearing from. But email is currently the best way for that.

Matt Stauffer:
Proxy for that. Okay. You got me thinking a little bit. We'll see where we can go with this. Adam, you are amazing. Just for anybody who doesn't know, everything I know about testing I learned from Adam when he worked at Tighten. This dude leads up, he leads down, he leads left, he leads right, he teaches. I think that one of the most prominent forces behind people wanting to be someone who creates and teaches and shares, just for the sake of sharing, in the Laravel community has been Adam's prominence there. We all owe him a great debt.

Matt Stauffer:
In addition to the fact that most of us are using something he wrote or inspired at some point during our day. Adam, you are the freaking man. The amount of things that you have done is good for all of us. Even when, day-to-day, you're not writing Laravel right now, you're still writing Tailwind, which is in Laravel now. We're all still better for it. Thank you so much. Thank you for coming on today, making time out of your super busy schedule, and thank you for teaching us, man.

Adam Wathan:
Thank you for having me on, man. I really appreciate the kind words.

Matt Stauffer:
All right. We'll see y'all next time.
