I LOVE TLS
Last year,
we started building the simplest,
single-purpose, single-tenant CDN
for changelog.com.
It's based on open source Varnish,
and it runs globally
across 16 Fly.io regions.
And during the first
"Let's build a CDN" session
we ended on...
It's also only available
in Varnish Enterprise.
What, SSL?
Yeah.
Really?
Yeah.
Okay.
Okay, well this is not
going to work then, is it?
"This is not going to work."
The problem is that out of the box,
Varnish open source does not support
HTTPS backends; they have to be HTTP.
So enter Nabeel!
The same one that three years ago
introduced us to KCert,
a simpler alternative to cert-manager,
and it's so nice to get
together again with you, Nabeel!
Yeah thanks, great to be here, yeah.
You said something, this is not the first
time that we're doing this, right?
So just before we started,
you were saying...
"Third time is the charm."
So this is...
our third attempt
at wrapping this up.
We got really close
both times, but yeah...
Right.
And there was a huge outage last weekend,
so that just, like, derailed us a little bit.
And before that, we just weren't finished,
there were a couple of things left.
So whenever you think you're done
and you want to do it, like record
the actual thing, you realize,
ah... that one thing.
Tests, for example.
Yeah, tests, right? So I mean, you know,
we were trying
to finish it on Sunday last week, and
we didn't, and, you know, so
in the meantime, I had enough
time to write tests, and I found an edge
case in the code where it
wasn't working correctly:
it wasn't handling, I think,
query parameters on the URL correctly.
And I also cut the code in half,
so it went down from 100 lines to 50 lines.
So I think it was worth it then.
Yeah, I love that idea of keeping
the lines of config as few as possible,
or the lines of code as few as possible,
which is how we started
the whole CDN adventure.
First of all, I'm wondering what it is about
TLS and SSL that makes it interesting for you,
because you keep working
in this problem space.
Why? How? How come?
Like, what attracts you to it?
That's a really good question.
I think I'm fascinated by encryption
and security in that sense, right.
How do you secure your data?
How do you secure your connections?
Like, between the servers,
and I find the idea of
these asymmetric keys,
the public key and the private key,
I just find it a very interesting concept.
And so I kind of just
like playing with them.
I do also think that
the tools that exist today,
most of them just aren't easy to use
and they could be a lot easier.
And so I think those are the two factors
that attract me to that.
Well, now we will dig into that a little bit,
because I did a bit of research,
and we looked at it
in our previous attempts.
SSL in Varnish
is a fascinating topic,
and I just kept pulling on the thread,
and wow! Wow!
So we will circle back to that.
Yeah.
So...
How is KCert doing?
It's been some number of years.
Is it still going?
Is it still a thing?
Yeah, you know,
people do still use it
as far as I'm aware.
That's a good question, actually.
Why don't I take a quick look now at Docker Hub?
Yeah, it hasn't been updated in a year.
But it says over 50k downloads.
So not super popular, really.
Last year, someone came in
and made a lot of
suggested improvements.
And I really appreciate
the changes they made.
They really did improve several things.
But they submitted it
all in one giant PR.
I started to review it,
and there were just so many different
things that I needed to
change or wanted to, like,
you know, clean up first.
And I just ran out of time.
You know, it was maybe a couple months
before the baby was born.
And then the baby was born.
And yeah, no more time for anything.
Big pull requests,
not a good idea.
No, keep them small.
In hindsight, he could have
split it into like five
pull requests, you know,
and it would have been a lot easier.
Yeah.
Yeah, but thank you for the contribution
though. Life happened.
Yeah.
Okay, do you still use KCert?
I don't. So, you know, when I started
with KCert, my goal was to take
the cert-manager part of
my Kubernetes cluster and, you know,
replace it with KCert.
And I achieved that.
But then I was in a situation
where I was using nginx plus KCert,
you know, the nginx-ingress...
Or is it ingress-nginx?
I forget the name.
I think both exist actually.
But then I got to thinking:
Oh, you know...
I think I can write
my own proxy too, you know?
And then I could have
one piece of software
that is my reverse proxy
and does my certificate management.
So what I ended up doing was, I took,
I actually just copy-pasted
the code from KCert,
put it into a new project with...
Under the hood I'm using YARP,
which is a .NET library
for implementing reverse proxies.
YARP stands for "Yet Another Reverse Proxy"
Right.
And that tells you everything you
need to know about the project.
Yeah, exactly.
And so, yeah,
so my Kubernetes cluster...
Well, I'm actually off of Kubernetes now,
but for a very long time I was running...
I was running my Kubernetes cluster
without nginx, without cert-manager,
just with my custom piece of code.
And I do want to open source it,
but it's just not ready yet
for open sourcing, yeah.
You're not in a place
to accept big pull requests.
That's what it is.
Yeah, yeah, exactly.
It's just so fun to
work on it on my own too.
Like, I can just make changes.
I'm now rewriting it from C# to Go,
you know, like, so yeah, it's just like
you can just do anything, you know,
without having to worry about like
breaking a bunch of people.
Yeah.
It is nice, I have to say.
I forget which
project it was, but there
was one and I'll dig it up.
The author, the original author,
he just disabled issues and pull requests.
I forget which one it was. It was a popular one.
Again, I'll look for it.
But I really like that approach,
you know, like, and I'm wondering,
should we do the same for Pipely?
Maybe not, maybe not.
Let's just not get carried away.
But I think it's an interesting idea
where, you know, you have like
an idea for the project.
I don't know how others would contribute,
maybe email you patches?
I hear that's still a thing.
You know?
Like, I'll push back and say,
I think maybe you should, right?
Because I mean,
especially if you think about KCert,
you know, my personal
reverse proxy that I'm building,
Pipely, right?
They're very focused on solving
the needs of the author of that tool, right?
Yeah.
So I'm just writing this
to solve it for myself.
And maybe the right
thing is for, you know,
if someone wants to use it,
they should just fork it,
make their own little tweaks,
and it'll be a little bit of a pain
to, like, merge future updates
if they want...
But then it just gives them the control.
And if the project is small enough,
that's not really super painful.
Yeah, yeah.
You could leave like, you know,
contributions through,
you know, pull requests open, right?
Like people can propose
changes if they want to.
But no issues, maybe?
Interesting.
Well,
when we have that problem,
when there are too many issues
and we can't, you know, do
the maintenance that's required...
rather than,
you know, issues piling up,
that is an idea.
You know what?
Yeah,
if it's a super popular project,
I can imagine just closing
pull requests and issues, right,
just to avoid the bombardment.
And if I recall, I think it's SQLite,
actually, that's the one
that's closed to contributions.
Yeah, I think you may be right.
You know what?
Let's check it out.
They work for... okay.
And then the second paragraph
or second section of that.
Open-source, not Open-Contribution.
It's open-source.
Okay.
Let's talk about sql-exterminator.
And I would like to talk about...
Sorry, what did I say?
tls-exterminator, please!
Crazy, you know, I didn't mean that.
Okay, good thing this is not live.
It's fun, it's fun....
All right.
So, before we talk about tls-exterminator,
and the issue, and the pull request
that you've opened Nabeel,
let's just do a quick recap
of what Pipely is.
I mean, it's pretty much right here
in the about page.
20-line Varnish config
that we deploy around the world.
And it's like: Look at our CDN!
It has quite a bit of history here.
You can just like click
the links and see, you know,
what was there.
This is what we're doing now:
Support TLS origins.
This is the...
a.k.a. Roadmap step
that we're currently at.
Yeah.
So, remember this?
Oh yeah.
A simpler alternative to...
A little bit younger.
I had less hair there,
if that's possible.
And this was March 18th, 2022,
so, it's coming up to 3 years.
Yeah, yeah.
It's coming up to 3 years
crazy how time flies, cool.
It does!
So, this is the issue that you created.
Tell us a little bit about it.
Right, so...
I heard on one
of the podcast episodes,
I think it was probably
one of the Kaizen episodes,
where you all were
talking about this limitation
with TLS: it's not supported, as in
you can't use a TLS backend
with the open-source Varnish.
And I just got to thinking like,
wow, that's a silly limitation.
You know... why would they...
Why would they limit that?
It's almost like...
initially I kind of
assumed the worst intentions
and I was like:
Wow, so Varnish just wants to
move you to Enterprise for
a silly feature like TLS.
I think I reached out to you
and also just kind of confirmed
what the problem was exactly
because I think I did also
misunderstand it slightly.
But it really was
just that Varnish itself
can't talk to a TLS endpoint.
It can only talk to a
plain HTTP endpoint.
That's correct.
And, you know, I've been playing
around with a reverse proxy
and this is basically
something a reverse proxy can do.
I mean, usually your
reverse proxy serves TLS
and might go to an HTTP
backend in your local network.
But, like, why?
It shouldn't be a problem
to do the reverse.
I did do a couple
of experiments with YARP
and got it working and
I shared it with you.
It worked, but the .NET
framework is kind of heavy
and I think it was using
about 20MB of RAM,
which I think actually is
better than it used to be.
I think a lot of .NET apps will consume
like 100MB of memory quite easily.
So, I looked at Go,
the Go solution at startup
is like 1MB or 2MB,
or something like that.
Yeah.
It's a very simple program.
Like, all it does is listen on a port,
provide an HTTP endpoint,
and then it just takes that request,
sends it to the HTTPS endpoint
that you configure it with,
and then it just
returns that result back.
So very simple code.
Yeah, that's it: 55, 56 lines of code.
It's a very simple program.
It just takes two parameters.
Like, what port do you
want me to listen to?
What domain do you want me
to forward the request to?
And that's really it, yeah.
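For reference, a program of that shape, a plain-HTTP listener that forwards everything to a single HTTPS origin, can be sketched in Go roughly like this. A hedged illustration, not the actual tls-exterminator source; flag names and defaults are made up:

```go
package main

import (
	"flag"
	"fmt"
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// The two parameters described above: which port to listen on,
	// and which HTTPS domain to forward requests to.
	port := flag.Int("port", 5000, "plain HTTP port to listen on")
	domain := flag.String("domain", "origin.example.com", "HTTPS origin to forward to")
	flag.Parse()

	proxy := httputil.NewSingleHostReverseProxy(&url.URL{Scheme: "https", Host: *domain})
	director := proxy.Director
	proxy.Director = func(r *http.Request) {
		director(r)      // keeps the path and query parameters intact
		r.Host = *domain // so the origin's TLS virtual hosting works
	}

	log.Fatal(http.ListenAndServe(fmt.Sprintf(":%d", *port), proxy))
}
```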
I really like the simplicity!
I mean...
things should be this simple.
Memory consumption should be low,
CPU cycles should be low,
and if things are this simple,
they will be low.
But you mentioned something
really interesting.
The whole state of TLS in Varnish.
And...
initially, I thought,
this is an upsell, right?
Yeah, that's exactly what I assumed:
an upsell, yeah.
That was my first thought:
go to Varnish Enterprise
and then you'll have it.
So, I did a bit of digging, and...
this is one of the first results
that you find when you search for
Varnish, SSL or Varnish TLS.
And they say:
"First, install Hitch."
This is done by the
varnish-plus-addon-ssl package.
Okay, interesting.
So varnish-plus to me suggests
that you need to have some sort of a
Varnish extra repository.
I think this will go
into the Enterprise, right?
Because how else are you...?
I think... I'm not sure, because
I haven't used this.
And then you configure Hitch...
But looking at this configuration,
it just doesn't make sense.
And the reason why it doesn't make sense:
you listen on 443.
Why would you listen on 443?
Because what we need
is to go to 443,
which is where one of the backends is.
But why does Hitch
listen on port 443?
And then Hitch forwards
to port 80.
So in my mind, it's the reverse.
Okay, so I did like
a bit of digging, like let's
just see Hitch, Varnish Hitch.
A scalable TLS proxy by Varnish Software.
Okay, so far so good.
There's stars,
watchers.
Usually when there's a failure,
I'm thinking,
"Hmm, two months ago, okay."
Well, maybe, who knows, just a bad one.
Like a bad commit, whatever.
But that hasn't changed in two months.
I think that's my first red flag.
Exactly, last release,
that's the other thing, right?
You go down here,
Hitch TLS proxy failed. Okay,
well, let's click on that.
That's exactly what I did. Like, let's
see what's going on there.
I mean it's nice that they
have CI, but if it's broken...
Exactly, that's good.
"Something Unexpected Happened"
Maybe refresh, sure, let's refresh.
So when I looked at this,
it did load before,
but let's see if it loads now.
Nope.
Nope, so still down.
Okay, so that's not a good sign, right?
Right.
And then I come back.
Hitch is a network proxy
that terminates TLS/SSL connections
and forwards the unencrypted
traffic to some backend.
Well, we want the exact opposite.
Yeah, that's not what you want. Yeah.
Exactly! We want the reverse.
So, the second stop,
"Why no SSL?"
So, that's exactly what we're thinking.
Why doesn't Varnish have SSL?
First, I have yet to see an SSL library
where the source code is not a nightmare.
As I'm writing this,
the Varnish source code tree
contains 82,000 lines of .c and .h files,
including jemalloc and Zlib.
OpenSSL, as imported to FreeBSD,
is 340,000 lines of code.
Wow, yeah.
Not exactly the
best source in the world.
I hope that you know what you're doing,
but let us assume that
a good SSL library can
be found.
What would Varnish do with it?
I think this is where you're getting into
the really interesting stuff.
We would terminate SSL sessions
and we would burn CPU cycles doing that.
You can kiss the highly optimized
delivery path in
Varnish goodbye for SSL.
We cannot simply tell the kernel to put
the bytes on the socket.
Rather, we have to corkscrew the data
through the SSL library
and then write it to the socket.
This is the most important part.
This page.
Will that be significantly different,
performance-wise,
from running a SSL proxy
in a separate process?
Yeah.
This is it. This is the key.
No, it will not, because the way Varnish
would have to do it would be to...
start a separate process
to do the SSL handling.
So, that's our answer.
So, who wrote this article?
Poul-Henning
I didn't know who Poul-Henning was,
but we did our research.
Right? So, 2011.
So, this was 14 years ago.
So, this guy
wrote this 14 years ago.
So, who's Poul-Henning?
... is a Danish computer software developer
known for work on various projects
including FreeBSD and Varnish.
Interesting...
Interesting!
... has been committing to FreeBSD
for most of its duration, okay...
He's responsible for the widely used
md5crypt implementation
of the MD5 password hash.
Okay, GBDE, okay, FreeBSD Jails!
And the FreeBSD
and NTP timecounters code.
Okay, so this guy,
I think he knows what he's doing, okay?
When he says something, yeah, you can
probably take it seriously.
And yeah, and he actually... okay, so
he wrote Varnish Cache, right?
He is the lead architect.
And so I think he wrote most of Varnish Cache.
Let's just double check that theory.
So, Varnish cache.
Down here, contributors.
Boom.
Yeah, so,
like, four times more,
almost four times
more than the next person.
And he's been, like, there since 2006.
And I mean, he's still by far
the largest contributor, even though he
really slowed down his
contributions, like, after 2020.
Yeah, yeah, okay.
So, let's click on this.
Let's go to his website.
PHK's bikeshed.
Okay.
"I'm the author of Varnish."
I think that's... I like this.
Like, when people start like this,
you know exactly who they are.
Please buy a Varnish Moral License.
Oh, interesting.
Let's click on that.
Okay.
Very interesting.
Read more. Okay,
accounting.
Very transparent. I like, wow. Crazy.
Look at that.
Very good. A lot of transparency.
I like this. I like where this is going.
Wait, this is exactly where we started.
"Does that mean you're
the same Poul-Henning who...?"
Yes, FreeBSD, MD5crypt, jails, nanokernels
timecounters, scsi-ping.
Should we?
And the Bikeshed.
Should we?
So...
Poul-Henning Kamp is the guy
that invented the Bikeshed.
I've seen this exact post
like, a long time ago,
like, same color and everything.
Yeah.
Now it brings back memories,
like, years ago.
1999. There you go. Okay.
So thank you, Poul, for the Bikeshed.
We have bikeshed a lot
about Varnish and TLS,
so, thank you for that.
And we have a framework to do it in.
And do you know what's
the best part about this?
I mean, by the way,
besides capturing this,
which I'm such a huge fan of,
my favorite feature about this is...
The color changes.
When you click it, the color changes?
When you refresh, the color changes.
Oh.
How good is that?
So yeah, whoever came up
with this, thank you.
This was funny.
This was so fun to discover.
Cool.
So...
I think that we should listen to the guy.
And I think that we should start
a separate process to the SSL handling.
I think this is good.
By the way, I was also
doing a bit more research.
So, let me just do Varnish.
And another thing which I found
here, this was afterwards...
Hmm...
it was SSL again.
This is a hot topic.
So this is SSL, again,
two or three, whatever.
So 4 years ago I wrote a rant about
why Varnish has no SSL support
and the upcoming 4.1 release
is a good excuse to revisit the issue.
In 2011 I criticized OpenSSL's source code
as being a nightmare,
and as much as I hate
to say "I told you so",
I told you so.
See "HeartBleed".
Okay, so yeah, that was a good one.
Handling certificates, still a bad idea.
So yeah, no,
Varnish still won't add...
So, I'm very curious, Poul,
what are you thinking today?
2025,
which is 11 years,
actually 10 years later.
Ah
Still a bad idea?
Maybe.
Anyway.
So,
this was the issue
that got us started.
And this was the TLS Exterminator, right?
The repository
that contains the code.
And this is your pull request.
If we didn't have pull requests,
this would not be possible.
Where
you add support for TLS backend.
I just took the liberty
of changing the title
because that's what's happening.
So,
thank you for kicking this off.
Not that many lines of code
and we'll see in a minute what they do.
And I took it and I just
did a couple of changes,
but this is the one to watch,
pull request eight,
where we'll capture
as much of the history as we can.
All right, so.
I have it...
locally,
this is where it's at.
And if I look at status, I'm right here,
improve on Nabeel's contribution.
And I have some changes
which are not staged.
So, there are still a few things which,
no, this one and this one
that I would like to go in.
So, a few more changes coming up.
Okay.
All right, so.
With that in mind,
what do we have here?
So, besides the main project,
I think your main addition
was this Dagger directory?
Wait, oh, to the Pipely one.
To the Pipely.
I've lost track
of what I did there.
What I distinctly remember is,
I might've tried doing it in Dagger
and then given up.
Yeah.
Yeah, get it to work with plain docker.
Okay, so maybe we should
clean that up because I took it,
like I was like, okay, Dagger, very, very
close to my heart. So I took
it a bit further. Cool. So.
Let's just have a quick look at what it
means, functionality-wise.
Yeah, I am better at Dagger
since this week, so
yeah, like if I were to try to
redo it again, I might succeed.
Yeah, yeah.
Okay, well, let's see if this
is the moment where, you know,
I can give back some of the time
that you've put into this and
the effort and share some
of my Dagger knowledge. So...
The justfile is an
important file because I use just,
just like as a wrapper
around a bunch of things, right?
So we had tests, we had report before,
and if we just see what it contains.
Actually, let me just load it like
in an actual editor.
The default one, fmt,
these are called commands.
It has some default variables.
We install Hurl.
We still use Hurl for the tests.
And debug, this is like the new one,
which just basically makes it
easier to type commands.
Sometimes, I like to add
some various arguments
or variables, shall I say,
and also we have publish,
which is an interesting one.
Okay, so that's it,
like it's, again, 60 lines.
Not that much stuff,
most of it is just the package
and I think most of this will go away.
Anyway, so
We have that.
You've seen me type
single letters, I use this a lot.
I have aliases for pretty much everything.
So, you know, "J" is an alias for Just,
"D" is an alias for Dagger.
Do you want to guess
what my Docker alias is?
Capital "D"?
Look at that.
You did not see this.
We did not cover this
in the previous, oh yeah,
look, see, it's just like so natural
It was gonna be that or "do", maybe?
Yeah.
and "L" for Lvim, yeah.
Cool.
So.
Dagger functions
is what, see, I can't even type.
That's why I have those
shortcuts because I can't type.
So, Dagger functions shows us
everything that we can do
with the code that was added.
We can, for example,
get a container with
all the dependencies
wired up and ready for production.
You would be,
many would be tempted to
use a Dockerfile for this,
which is perfectly fine.
But then what happens is then
you need to make small changes.
Then you need to run tests.
Then you need to, for example,
publish the container.
Do you have a script?
Well, most do.
Most have a script.
I have Just,
and I could use Just for that,
but then exactly as you've seen me,
now I have to start installing Hurl.
Because I have dependencies
and then I have to install Hurl
in a platform specific way.
And it just like, it just
ends up in quite the mess.
I do want to add to this,
which is one of the
things that I've realized.
Actually, you said it before as well,
but replacing a Dockerfile,
just a plain Dockerfile,
especially a very simple one,
with a Dagger pipeline that does the same,
you don't really get too much out of it.
Like, it's actually a little more effort.
And, you know, again, you
just don't get that much,
anything extra out of it.
But where, you know,
on the TLS exterminator side,
building the integration test,
right, in Dagger
was amazing, right?
Because that just, it worked
with like a single command.
Like, if I were to do this...
I actually did it without
Dagger first, right?
I had to build, you know,
two new Dockerfiles just for testing.
And then I also had to build...
And the Dockerfiles
can't reuse each other, right?
You just have to copy-paste,
like, the entire code.
Yeah, Docker test and
Docker test server, right?
And then I had to build the
docker-compose.test.yml
to run like the,
all of the different containers.
And so then I have to have,
you know, basically
in two different, you know,
terminal windows, like, you know,
I'll start up Docker compose,
and then in a separate window,
I'll run, you know, go test, right?
And then all of that
happens on my local machine.
So if I happen to be using
that port somewhere else,
there's a conflict and it'll crash,
whereas like you build it
in a, in a Dagger, you know,
job or workflow,
whatever you want to call it,
it's all just self-contained.
It runs, like, beautifully.
So let's see what you've done here.
Build test server.
Okay, so these are the functions
and that's what we call them
on the on the command line.
Functions, yeah.
So, build test server...
I wonder if I started,
I might have done this a little...
...wrong actually.
Oh no, no, no, no.
Test server's right.
So, I built a very simple
Go application
for the test server;
it serves a TLS endpoint
with a self-signed certificate.
Yeah, I can see it here.
So, that's all it does.
It's just a, you know,
generated, you know,
certificate.
And all it does is serve
that certificate
and respond
with a JSON blob that tells you,
like, what it received in the request.
So, it'll tell you:
I got a POST with these query parameters.
It's a nice little
debugging application, basically.
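The kind of test server being described could look something like this minimal Go sketch; the certificate file names and the port are assumptions:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		// Echo back what was received, so a test can assert that the
		// proxy forwarded method, path, and query parameters faithfully.
		json.NewEncoder(w).Encode(map[string]any{
			"method": r.Method,
			"path":   r.URL.Path,
			"query":  r.URL.Query(),
		})
	})
	// cert.pem and key.pem stand in for the generated self-signed pair.
	log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
}
```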
Now, if you go down one more
to the next function,
there is a build...
te-, no, not just test, there's
build test TLS Exterminator.
So, this is where the reusability
in Dagger is just amazing, right?
So, I run m.build on line 74.
"m.build", that is the production version
of TLS Exterminator, right?
I bring the production version
and then I add
the custom certificate there, right?
And that's it.
I was able to reuse
the production build
and just add on
this thing that I only want in test.
And then when you go down to
the last function, the test function,
I'm creating two test server instances,
two TLS Exterminator instances,
running them all in parallel, right?
And then I start my Go test
to actually, you know,
hit and measure all of these things.
And so, I'm using Go test a little,
a little untraditionally, right?
It's not a unit test at all, right?
It actually hits like HTTP endpoints,
but, for this type of application,
I think the end-to-end style tests
are more valuable
than unit tests.
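In the Dagger Go SDK, that wiring, reuse the production build, add the test cert, bind everything as services, then run go test, could be sketched roughly like this; function and field names are assumptions, not the repository's actual code:

```go
// Test runs the end-to-end suite against services that exist only
// inside the Dagger network, so host ports can never conflict.
func (m *TlsExterminator) Test(ctx context.Context) (string, error) {
	server := m.BuildTestServer().AsService()

	// Reuse the production build; only layer the test certificate on top.
	proxy := m.Build().
		WithFile("/certs/cert.pem", m.TestCert).
		AsService()

	return dag.Container().
		From("golang:1.23").
		WithServiceBinding("server", server).
		WithServiceBinding("proxy", proxy).
		WithDirectory("/src", m.Source).
		WithWorkdir("/src").
		WithExec([]string{"go", "test", "./..."}).
		Stdout(ctx)
}
```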
Of course.
And you control everything.
I think that's...
That's really helpful in this case
because you don't have
external dependencies.
Everything is local, right?
Everything, like once you pull everything,
you control all these
different services or processes,
however you wanna call them.
And then because it's a service, right?
It's a process that runs a service.
It has a port,
it has all that stuff.
You just wired everything together.
Exactly.
Imagine doing this with
Docker compose and scripting
Yeah, no, I did it.
I did it. It's in there.
Right, let's see it.
What does that look like, by the way?
What does the other version look like?
Yeah, so first there's
dockerfile.test, right?
Yes.
Builds the, I think, TLS Exterminator
with the test version.
But again, I have to copy the entire...
Like, there's not really any ability to
reuse the production Dockerfile.
So, I have to just copy-paste everything.
Which is this one. Yeah, okay. Oh, I see
Yeah, it's one line difference, right?
It's just the extra
copy of the custom cert.
Okay. Okay. Yeah, I see that
Yeah, and then there's Docker test server.
That's building that
small test server application,
which again, same thing as
what we already saw there.
And then you have a
Docker compose file.
This one, yeah.
That's also just a giant, you know,
like, blob of YAML,
which, I'm also not a huge fan of.
To run this, first you
run docker compose up,
and you have to remember to do
docker compose down
and docker compose build
if you change your code,
because Docker Compose doesn't
really do that automatically.
And then once you have
everything running in Docker compose,
then, you go to a separate terminal
and run, the Go tests. Yeah.
Yeah, yeah.
Cool, well I'm glad that you've had
this experience, because guess what?
We have a lot of Docker files.
Oh yeah, I'm gonna delete them
from my project as well.
And these would need cleaning up
and a couple of other things,
but, this is amazing!
This is really, really amazing, cool.
We have, just to see
the top level here...
We'll start with the dagger.json.
Whenever you see this,
this is like the entry point
for Dagger, if you wish.
It has a name,
the name of the module,
because this is called a Dagger module.
The engine version that
this was developed against;
16.1 is currently the latest.
The SDK, which is Go,
and the source, which is where
the source code for this module lives.
In this case, what this means is it's
inside the dagger directory.
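Putting those fields together, a dagger.json along these lines is what is being read out; values are per the recording, and the exact shape may differ between Dagger versions:

```json
{
  "name": "pipely",
  "engineVersion": "v0.16.1",
  "sdk": "go",
  "source": "dagger"
}
```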
If you go in the Dagger directory,
main.go,
because this is a Go module.
By the way, you can use
TypeScript or Python or even
Java, even PHP.
It's crazy.
Elixir as well.
Anyway, there's quite a few options.
Rust! I forgot about Rust!
Anyway,
then you go to "main".
And you can start
seeing the struct, right?
We have the application,
so, the app container,
Golang, Varnish, and Source.
And then New, this is
called the constructor,
where we have
a couple of defaults.
For example, the source directory.
I know that you're looking at this.
This is like an,
almost like an annotation.
I'm thinking of it as metacode,
where if you don't provide
a value as a flag,
it will just default to
whatever you have there.
So, we have quite a few of those.
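In the Dagger Go SDK, that "metacode" is a comment pragma on the constructor arguments; a hedged sketch with placeholder names and values:

```go
// New is the module constructor; the pragmas below supply defaults
// whenever the matching flag is not passed on the command line.
func New(
	// +defaultPath="/"
	source *dagger.Directory,
	// Pinned tls-exterminator version; this SHA is a placeholder.
	// +default="0a1b2c3d"
	tlsExterminatorVersion string,
) *Pipely {
	return &Pipely{
		Source:                 source,
		TlsExterminatorVersion: tlsExterminatorVersion,
	}
}
```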
And then what we do here is,
we take Pipely, right?
Like, just create a new Pipely struct.
Golang, Varnish.
So, we put the Golang container
in the varnish container
because that gives us
some nice properties.
TLS Exterminator, as you can see, I...
go install directly from your repo.
So, with exec I just run Go,
I'm using the Golang container.
So, in that context,
I'm going to install
TLS Exterminator at a specific version.
And where's this version determined?
Well, there you go.
That's the default. This is the git sha.
By the way,
I would love to have some tagging,
some versioning going on;
we can loop back on that.
Oh yeah.
Some SemVer.
Okay, cool, so that's one.
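The install step just described, a go install inside the Golang container at the pinned version, might look roughly like this; the module path is an assumption:

```go
// Install tls-exterminator into the Golang container at the pinned SHA.
golang := dag.Container().
	From("golang:1.23").
	WithExec([]string{
		"go", "install",
		"github.com/nabsul/tls-exterminator@" + m.TlsExterminatorVersion,
	})
```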
What's next? So, Goreman,
that's something that you introduced,
where we can run multiple
processes in a single container;
in this case,
when we deploy to Fly, or
even if it runs anywhere else,
that's really helpful.
Yeah, maybe to just take a
step back and kind of explain that.
If this was being deployed to Kubernetes,
you could just deploy
two containers on one pod,
and then they could
just communicate directly
to each other.
But since you're deploying to Fly,
that's not really an option.
You need to only deploy one container.
And so we kind of have to stitch together
a custom container that
contains Varnish,
the TLS Exterminator,
and then we need to run
them together as well.
And so Goreman is the glue
that runs them both, yeah.
Yeah, we'll see the Procfile
and all of that.
Actually, this is the Procfile,
the Procfile, which is generated just in time.
The Procfile has two entries,
pipely
and tls-exterminator,
and it takes the tlsExterminatorProxy.
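Generating that Procfile just in time from the module can be as small as this sketch; the varnishd command line and paths are placeholders:

```go
// Two entries, exactly as described: Varnish itself, plus the proxy.
procfile := "pipely: varnishd -F -f /etc/varnish/default.vcl\n" +
	"tls-exterminator: tls-exterminator " + m.TlsExterminatorProxy + "\n"

app := container.
	WithNewFile("/app/Procfile", procfile).
	WithExec([]string{"goreman", "-f", "/app/Procfile", "start"})
```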
If we did have two containers,
you would have two images, right?
Or you'd have an image
and you'd write it
in two different ways.
So, that complexity would need
to be pushed elsewhere, but
you would still have
the same concern.
Because these are so closely coupled,
I really like the idea of
the TLS Exterminator
living in the same space
as Varnish,
because, it provides like a
direct service to Varnish
and it's like all exactly there
and we'll see some nice things
about that.
If we had multiple containers
this would be a little bit more
just involved to know
where you're talking,
where's the other container
and stuff like that.
I personally initially disagree
with that statement.
Sure!
Let's keep going!
Yeah, so okay.
If we were to deploy this
as two separate containers,
you would need to somehow manage...
there will be a point where
you need to do the Goreman equivalent.
Right?
And you would capture it
in a Kubernetes manifest, most likely.
Yes, yeah.
That is very Kubernetes specific.
Exactly, yeah, that is probably the
biggest problem because Kubernetes has
this weird concept where
two containers run in a pod
and they share the port space.
They're both on localhost.
And so you can just hit localhost
and hit the other container.
Yeah, that is a disadvantage,
'cause not many
platforms have that by default.
So on Fly, for example,
you could define these
as multiple processes.
You could do that,
like, TLS Exterminator
would be one
and Varnish would be another one.
They would create two separate machines.
That's how the implementation
behind it works.
And then the question is,
are they using
resources efficiently?
If you wanna scale them separately,
they have to go on separate machines.
I guess the concept of a pod
is you just, you know, you're just,
by definition saying that all of the
containers in this pod
should scale together.
This is where you have to deal with
different implementations.
If you were, for example, to try this
on Nomad or something else.
Whatever they do there,
you just need to account for that.
This couples them.
I'm very aware of
this coupling that happens,
but it happens in almost, like, in a very nice way.
It's almost, like,
TLS Exterminator was meant
to only exist with Varnish
in this context and not separately. So...
But yeah, health checks, again:
if TLS Exterminator is healthy,
that's interesting.
But really what I care about
is for the Varnish backend
to be healthy,
and the Varnish backend, in this case,
means something that hits TLS Exterminator.
But yeah, I know what you mean.
Both are valid.
I could argue for both.
Yeah, the thing is, I do wonder if using
Goreman is better
versus are you trying to...
Because this is the problem when you want
to try to have a
cross-platform solution, right?
You end up building something
that's the least common denominator, right,
across all of the ones you want.
And so it's kind of
like a suboptimal solution.
You almost always end up with a
suboptimal solution just because you have
to figure out something that
works on all of your options.
Agreed.
The other thing is the way this
is implemented now
means that we cannot use
Varnish without TLS Exterminator.
So, the two are coupled.
That's true, actually. That's a very good
argument for just putting
them all in one container.
Like, they literally, yeah...
Varnish has to work with TLS Exterminator.
Like, it assumes it's there, yeah.
Yeah, yeah.
So moving on we have a couple
of configurations.
This is the whole app, right?
We have the proc file, like
everything is put together.
But this is the Pipely,
this returns the Pipely struct,
in this case,
and a Pipely app is just one
of the properties of that struct.
Now this is where, again, you already
mentioned how nice it is
to start reusing things,
and that's exactly what we're doing here.
So here we're creating a debug container
which has some extra things.
And by the way, I just
realized I have to add one more thing.
tmux, curl, httpstat, plus gotop,
which is the thing which I just installed
just before we started recording.
And sasqwatch.
Oh, I don't know what sasqwatch is.
Right, well we'll find out.
See, I did tell you that we will learn
a bunch of things today.
I didn't know what sasqwatch was,
until I, like, had to install it.
The only reason why
I installed sasqwatch...
By the way, I use hwatch locally.
Like, literally, "hwatch".
"Watch", okay, I see, the sasqwatch.
But it's spelled "sasqwatch".
That's how you type it.
I don't have it installed locally, but
hwatch
then if I do an "ls"
That's what it does.
It's written in Rust.
You need Cargo
and it's slightly more
involved to install it.
So it watches file changes?
So, it can run any command,
and it shows you changes to that command's output.
So "ls" was not a good one,
so let me just exit.
Let me do "curl https://changelog.com"
And let me run it.
Let me just do it a little less because
there's too much information there.
So let me actually do "httpstat".
I usually wrap these.
httpstat
There we go.
So, now I can see what comes back,
and if you see, if I go back,
I can see the differences between them.
So, I can see,
it runs it every two seconds,
but it's a TUI, a terminal UI, to watch
where I can see the differences...
That is really cool!
...between the requests.
You have the history,
you can do coloring,
you can do a bunch of things.
It just shows you everything
and you can go through them.
It's just a nice utility.
That is really nice!
Now,
hwatch, or h-watch,
as I call it...
I had to install Rust and Cargo.
I didn't want to do that, so
that's why I use sasqwatch.
S A S Q W A T C H
gotop
Have you used "gotop" before?
Uh... no, but I...
it's a replacement for top,
I assume, right?
Yes.
It's written in Go, yeah.
But it's nice.
And it has graphs.
That's what it looks like just running locally.
Actually yeah...
that is nice!
The CPU charts are really
beautiful, actually.
Yeah, yeah, yeah, cool.
So, we installed that,
we have a bunch of things, right,
and then we do tmux,
Neovim! I also added Neovim,
that was like last addition
oha,
in this case, "oha" is also coming
from Rust, Cargo, like that world,
the Rust ecosystem,
but they distribute as a binary
and I can just pull it down directly.
So in the case of "oha",
as you can see...
where was it? oha...
It was there.
I can just call "dag.HTTP",
so I can do an HTTP request,
and I can pull it down.
I can use the platform.OS
and platform.Architecture,
and it's all bundled in.
So, you know, not all tools
are distributed as binaries.
If it was for different platforms,
I would have grabbed
hwatch as well.
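For context, the dag.HTTP pull described here looks roughly like this; the release URL pattern and variable names are illustrative:

```go
// Fetch the oha release binary for the build platform and put it on PATH.
ohaURL := fmt.Sprintf(
	"https://github.com/hatoo/oha/releases/latest/download/oha-%s-%s",
	platform.OS, platform.Architecture)

debug := container.WithFile(
	"/usr/local/bin/oha",
	dag.HTTP(ohaURL),
	dagger.ContainerWithFileOpts{Permissions: 0o755},
)
```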
Cool, so we have that.
I also installed "just", and look, "just" does
like even a different way, like to
install "just", you need to do "curl", but
anyway, just like to get
all those tools in.
So, now that I think of it,
my list is a bit unwieldy.
Debug container with...
I'll just stop listing them,
with various useful tools.
So, use "just" as the starting point.
You know, when it comes
to testing and debugging,
I don't think you should be penalized for
putting too much in there, right?
Like it's okay, you know...
tls-exterminator is 50 lines of code, but...
you know, but there might be 200 lines
of code of testing, you know test code,
and build stuff, which is fine.
I think that's acceptable.
My worry is that
this has changed so many times
and now I have a list
of all the tools which I'm using here.
Do I want to do that?
Do I want to create a list?
of this list, of these tools
that I have to maintain,
and I have to update
if I add another tool,
I have to remember it, if I remove it,
I have to remember it to
remove it from that list as well.
I would make a reusable
component that's like, you know,
Look at that.
You're already thinking like a nerd.
Gerhard's toolkit, right?
That just adds it
onto whatever container.
All right, so only because you asked.
I'm going to show you
this only because you asked.
Remember, "D" Docker.
D pull ghcr.io/gerhard/sysadmin
It exists.
It's called sysadmin and it
pulls out all these tools.
Okay, so it pulls all these layers.
And when it's finished pulling them down.
Let's see.
Okay, run,
obviously I want to run.
I didn't write the README,
but PR would be welcome.
Ah, it's not public! I just remembered.
This one is not going to work.
The reason why it's not going to work
is because it has a sleep in it.
So I need to do --entrypoint
It has sleep infinity.
--entrypoint...
bash... I think that will work.
Maybe I need to put it in a different place. No.
So, it's the sleep.
So, what I need to do there,
it's --entrypoint bash
Okay, so now I'm there.
So, it has "btop", for example.
Anyway, it has all the tops.
It has "atop".
It has "htop".
It has, does it have "gotop"?
No, it doesn't have gotop.
See, that's the one that I forgot to add,
but it has a lot of tools.
And if I do a...
So, let me not do a pull,
let me do an inspect... history?
Yeah, "history --no-trunc"....
We are riffing again.
You can see all the things
which it does here.
So, let me just go boom,
let me go there,
all the way to the bottom.
exec...
I just take that one.
So, "inxi"!
Do you know "inxi"?
No, I don't.
Oh man, it's such a cool tool.
I'm very much a low-tech user.
I just stick a lot of...
mostly I just stick with regular
whatever comes on the box.
So, there you go: "inxi", it gives you
various details about the actual system.
So, for example,
you can say, give me details
about the disks of the system.
Oh yeah, I mean
that's a lot nicer than...
Give me the weather.
What?!
Seriously, it's all there!
Look, broken clouds, 13 degrees,
56 Fahrenheit, conditions...
It's all there!
OpenWeatherMap
ghcr.io/gerhard/sysadmin
Okay, you need to open source that repo
because I'm gonna steal it all.
Yeah, but it's all there.
So, there we go.
And it has so many options,
so many options, seriously.
Like, so many flags, so
many things that you can run.
Serial number, ZFS raid,
it understands that.
Panel/tray/bar/dock
info and desktop output.
Crazy, location uses weather,
that's the W that I used.
Devices if present, serial number,
yeah, pretty much everything.
Extras,
look at the flags.
I'm still looking at the flags.
There's a lot of flags.
So, yeah.
Inxi is a really cool tool.
Anyway, coming back, coming back...
In here I can do "just bench-origin".
And if I do a dry run...
What it does, it's just an "oha"
with a couple of defaults,
HTTP version 1.1.
Okay, so let's just run this
and see how it behaves.
So, we're going to send
a thousand requests
over 50 connections
as fast as possible.
And we can see last second,
it's going around...
...requests, I mean
there's a bit more there.
Let's see what it does.
We've sent 70 requests per second,
and the total was...
a thousand, actually.
So, this one skips Varnish
and goes straight to the origin.
Like, you know, that's just
our baseline, essentially.
Correct. This is our baseline. Exactly.
The baseline to see what it does,
and...
we have like some traffic,
We can run it again
to see what it does.
I'm going to run it again
just to see network.
What are we pushing?
1.1 MB/s
1.2 MB/s
That seems to be the limit,
and it is the limit of this origin.
So, I'm going to let it run again.
Okay, we see the distribution, right,
most around 0.700 seconds.
And again, 68 requests per second.
Yeah.
All right.
So I'm wondering
what happens... so this is not production.
This is when you go
to the origin directly.
I mean, it is production,
but not what users connect to.
Right.
So, if I go to
changelog.com
Just to do a comparison
to see what it does,
we can do HTTP 1 or 2,
it doesn't make that much
of a difference really.
But, let's run that.
That's how fast that was.
Woah.
Exactly.
So, it's not, like,
a connection thing, right?
So, we can see that we got 1800
requests per second,
for those thinking,
let's just go, like, 10,000
to see how fast that is.
And that's so fast.
I mean, I just, boom, it's done.
That's 10,000 requests,
and this was actually 10,000
requests per second.
That's why, like, it finished in a second.
So, let's just go a 100,000.
This is now getting serious.
It's just a fast connection.
So, I'm pushing actually
2 GB/s
Alright, so...
people thinking, hey,
is it your internet slow?
It's not.
So yeah, so we just-
like 100,000 requests.
This was about 11,000,
that seems to be,
in this case, my limit.
And I'm hitting basically
the limits of my internet connection
with 2 GB
Cool.
So, this is what's possible.
We can see-- and this is why--
it's one of the reasons why
I'd want to put a fast CDN
in front, right?
We went from 60 to 10,000+.
So, this is nice, very, very nice, cool.
All right, so.
What else can we benchmark here?
So we benchmark the origin.
Let's bench TLS Exterminator,
Okay, same thing.
What do we expect to happen?
Well, it's kind of a
bit of a trade-off, right?
So, if Varnish is doing caching,
you'll get some performance
improvements there,
because it can respond locally.
If it's always forwarding to the...
...origin
you'll probably get
something a little worse
because you have an extra hop,
you have some overhead for that.
That's exactly what happened here.
So we're going to TLS Exterminator.
We're not going to Varnish.
We're getting about
63 requests/second, 64.
Right, so a little bit slower,
10% slower maybe,
but, not that much.
I mean some of this...
This is TLS Exterminator.
You're not hitting Varnish in this case.
No.
Just TLS Exterminator, exactly.
So, I'm going to local,
but we know that TLS Exterminator
has to go right to the origin.
Right.
Does a TLS handshaking,
the whole thing,
so, this basically shows that.
I mean, a few days ago, right?
We, well.
I think last week
or maybe two weeks ago,
I can't remember.
I think we're hitting about like 5,000,
6,000 requests/second.
That seems to have been the limit
of TLS Exterminator.
And the reason why
we got that is because, with
TLS Exterminator,
we were proxying to the CDN,
to changelog.com.
Yeah.
What we need to do
really is proxy to the origin
because this will replace
the CDN eventually.
At least that's the idea.
Now we can see here
that it was actually 66.
So there's some variability.
Honestly, I would say
there's not much difference,
because
there's no slowdown in TLS Exterminator.
That's not the bottleneck.
The bottleneck is the origin,
which can only push so much.
I say only...
I mean, I'm going cross-Atlantic,
there's a couple of other things:
speed of light,
the requests are big,
there's quite a few things there.
Yeah, yeah.
But really, it's
what's hitting the database,
a bunch of things.
Are you ready for this?
Yes.
Varnish, let's see.
Really?
That's it.
Wow, okay.
That's it.
All right, so let's see.
I think you're going to have to increase...
We have to slow it...
We have to, like, we got like 133,000!
Requests per second.
So, this is an order of
magnitude faster plus...
than the real CDN.
because when you
get a cache hit,
it's coming from localhost, right?
Correct! Exactly...
Like, you have no penalty at all to pay.
So let's see, let's just go,
I don't know, let's do that.
That's it.
That was a 100,000!
I think we have to go more.
What was the request
rate that you got there?
I don't know, like...
Okay, let's just go up
200...
200,000 requests/second!
This is unreal.
I mean, you're basically limited
by your hardware, you know?
The only way to improve this number
anymore would be just
to get a better server.
Yeah, pretty much. I mean, look at that.
It's 0.00, the response time
was basically instant.
Nanoseconds, yeah.
Yeah, 99.99%
They finished within like...
So, this is basically local, right?
So, let's just push a lot more.
Let's see what's going to happen.
So, we're pushing quite a bit.
It's still too fast.
It's incomprehensibly fast!
We're pushing...
233,000 and it's still going.
What about the CPU and memory
during that time, actually?
That'd be really interesting...
I'm just going to push, like, a billion,
and let's see what happens,
that will take a while.
So, this is the actual system,
but what I would like to do
is jump on the actual system.
So "h22",
let's do "htop".
Okay, so this is the actual system.
Yeah.
Let's just do "btop",
I think that will be better.
So, this is a Ryzen 7 5800X.
You can see that at this point,
the cores are fully maxed out.
Yeah.
Okay, and this is like all...
This is...
This is actually running in Dagger.
Yeah, yeah, it is.
It's running in Dagger,
and it's able to utilize
all of your hardware.
Everything.
That's also good.
Like, that's a good thing.
"oha" is using 5 cores
and Varnish is using 10 cores.
So, at this point
all my 16 cores
are fully maxed out.
And it's still going, right?
Like, it's still holding up.
I mean I did send a billion requests.
Right, so...
look, "Data" 23 GB/s
I think this is crazy.
I don't know if that's correct.
164 GB/s,
requests, this will take a while.
It's localhost to localhost,
so yeah, why not?
But this is like, again,
this is just literally pushing,
Let's just come back...
This is pushing what's possible
on the CPU.
At this point, the CPU is the bottleneck.
And of course,
everything's happening locally.
But if network was
infinitely fast,
which it isn't,
and if latency was...
I think that's what it means,
both throughput and latency,
this is what we would
expect to get from Varnish.
Yeah.
So...
Poul-Henning Kamp...
We agree.
Yeah, yeah, definitely.
We agree, no TLS!
Alright, well,
as we wait for this to finish,
what happens next?
What does happen next, yeah?
I don't know... ship it?
Yeah! Ship it! Exactly!
Exactly!
So yeah,
let's just finish this,
get everything merged.
So, pull request one,
pull request eight.
That will resolve the TLS issue.
If I go back like to just
keep popping back the stack,
we will have support for TLS origins.
Right?
Then the next one would be
really to add the "Feeds" backend.
So, start adding more backends,
and then send logs to Honeycomb.
Ah, yeah.
That'll be the next one.
Get the Varnish logs
and send them in
an efficient way to Honeycomb.
And then also to "S3" for stats.
And look at that,
we're almost at the end.
Ooh, "PURGE" is going to be interesting.
Yeah, that will be interesting.
Because across all instances we need to
have, like, a worker, a distribution,
a bunch of things, but...
Yeah.
But, yeah.
That'll be cool.
Nice, so, let's see here.
You might want to pin
to the latest version
of tls-exterminator.
I just merged your PR
that you added this morning.
Okay,
so let's just quit this, right?
We did two...
I know, a lot, basically 5...
50 million.
And it's 200,000.
So, that seems to be...
the max,
over a long period of time.
We have the distribution: 12 ms.
Right, so that's
basically, like, when it's
fully maxed out, it could go faster.
Cool.
So, let's do that.
Let's just like merge a couple of things
and let's just get out of this.
So, this is all running.
Boom.
I'm just back from the container.
That's the actual host.
We can see how
everything dropped. Cool.
So, you mentioned to merge...
the latest tls-exterminator.
So, we have the "tlsExterminatorVersion",
which is declared here
as a default that
takes basically the git SHA.
I could do "latest",
but, I'm not a big fan of that.
I always like pinning, version pinning.
Yeah, I agree as well.
Yeah, when you have something,
for example, like SemVer,
whatever it is, I think that
will be better.
But for now, this is good.
This works.
So, if we go to...
where was it?
tls-exterminator
1 hour ago, so, let's click on this one,
as we were recording.
You did this, cool.
So, that's the one.
So I'm just going to put it...
dropping it here.
Okay, there we go.
So, that's changed.
And let's run again.
If I run "debug", of course,
I have like a shortcut
to do that and I can tab it.
That's the other thing: auto-completion.
And there you go, it's just picking up
the latest version of
tls-exterminator,
it's doing a "go install"
from that version.
And let's see it
assemble everything else.
By the way, as a side note on the
philosophy of tls-exterminator as well,
a couple of weeks ago I decided to let
tls-exterminator
support multiple backends,
so, you could give it multiple ports
to listen to and multiple destinations.
But then I got to thinking...
I'm adding more complexity.
I'll have to add tests
for all of this as well.
And so I just stripped it back out.
You know...
I like that.
So essentially, if you want multiple...
if you need multiple TLS terminations,
just run another instance of it, you know?
And that's just another
line in your Goreman file,
in your Procfile.
Yeah.
How about we do that now?
Like, you know, it's there.
Yeah
Why don't we do that now?
So let's do that.
So, we have tls-exterminator.
So, this will be tls-exterminator.
So, rather than tls-exterminator,
I'm going to call it the backend,
or the origin.
So, I'll say "changelog",
for example.
Yeah.
You see?
And then that's the...
whatever the process needs to be,
the process that it runs.
That's the tls-exterminator proxy,
so, let me remind myself...
oh I see, okay, so this...
the idea was that it only supports one.
Yeah, so you just run a second instance
on a different port
with a different destination.
And I mean,
you don't have to
build anything new,
it's just that same binary
that you're just
running multiple times.
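In Procfile terms, the second instance really is just one more line. A hypothetical example, with the argument format assumed from the two-parameter description above:

```
changelog: tls-exterminator 5000 origin.example.com
feeds: tls-exterminator 5010 feeds.example.com
```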
Yeah, I see, I see.
So I think for this,
I would like to use
like something like enums.
I know they're supported.
I would just need to
look at the documentation.
This can actually be provided
as a comma-separated string,
and
I would take that,
I would loop over that,
and I would basically put it in here.
I mean, how many backends
do you expect to have?
3 backends.
I don't know.
Is it worth, like, adding a loop
that might have a bug in it?
Okay.
I see where we're going with this.
All right, so
Let's do this,
let's do the "changelogProxy".
This will be now
very specific to changelog.
Yeah, which I honestly think is fine.
Cool, "changelogProxy",
Let's go for that.
So, this is the changelogProxy.
The next one is the "feedsProxy".
And we'll do the same thing
we'll add a default.
Yeah.
So...
what would you recommend here?
1 or 10?
What type of person are you?
Huh, I usually just
increment by 1,
but, I don't know.
What's the advantage of...
I like to leave space,
just in case you want to
sneak one in between.
This was, like, an old trick
that we used to do...
Did you ever program in old school BASIC,
where you had to put in the
line number before every...
Not in BASIC, no, not in BASIC,
but I used to deal with configs
and config fragments.
When you,
for example,
number these incrementally
and then you want to
put a config in between
because of precedence,
you had to reorder everything.
So, you'd always leave yourself
a bit of space
just in case, like, there's one,
like a weird one
that needs to sneak in.
There's space to do that
and you don't have to change
anything else.
Yeah, yeah, that's definitely that.
That's the philosophy there for sure.
I've definitely... I don't know.
It's like one of the very first, like
when I was a kid, there was a program.
I think it was BASIC like the...
there's a version of it where
you actually prefix
like, every line of code
you write, you put a number.
And so you can write all of
your commands out of order.
And then when you click compile run,
it sort of reorders
your lines in whatever...
And then if you do by 10s
then you can sneak things into the middle
very easily without having
to edit the whole file.
Makes sense.
So, this is the one
that we want to go to.
Feeds.
Yeah.
I know there's another one.
I don't know what it is now,
but, this is the one
that we're going to use now.
So, this one...
There you go,
that's the feedsProxy.
Okay, okay, so tls-exterminator,
this is the "changelogProxy"
and the next one is the "feedsProxy".
So, we take this,
we duplicate.
feeds
tls-exterminator.
Boom!
There we go.
Done, yeah.
We have the backend...
Let me think about this,
because now we have multiples
and we need to carry this
through the Varnish config.
The Varnish file is going
to need another backend,
or unless it already has one, yeah.
Okay, so let's do "default.vcl"
So, we have the default one,
we have the new origin.
We have the BACKEND_FQDN.
Ah, I see.
So, this one is not going to work.
Okay, so this one starts
getting a bit complicated
because now what we need
to do is based on the route,
like, the request URL.
It needs to use different backends.
It can't use the same backend.
Oh, I see.
So, this is when we start,
now we need to start adding more
stuff into the "default.vcl"
to deal with the default backends,
because before we only had one,
now that we have multiple,
well, we have two, there's going
to be a third one,
they each need to...
based on the route,
it just needs to use a different backend.
Yeah, that sounds like it.
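For illustration, route-based backend selection in a default.vcl usually takes this shape; the backend names, ports, and matched path are assumptions, not Pipely's actual config:

```vcl
vcl 4.1;

backend changelog { .host = "127.0.0.1"; .port = "5000"; }
backend feeds     { .host = "127.0.0.1"; .port = "5010"; }

sub vcl_recv {
    if (req.url ~ "^/feed") {
        set req.backend_hint = feeds;
    } else {
        set req.backend_hint = changelog;
    }
}
```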
So I think this is a bigger refactoring.
That's probably not...
not something worth
doing live on video.
No, no, no, no,
we will leave this one for next time.
So, yeah.
We will leave this one for next time.
Nice.
This was incredibly helpful!
So I want to thank you very much,
you know, for giving your time,
with two little kids,
to a passion project
that I know is close to your heart.
TLS, SSL,
you just love living in that space.
Yeah, and I like building tools
for other developers as well,
in general.
And, so, that also fits in
definitely with my preferences and stuff.
We learned a bunch of things together.
We did!
Dagger.
SSL in Varnish.
That was quite the joyride.
Poul-Henning Kamp.
Poul-Henning?
Poul-Henning Kamp.
What I hope will happen
is that one day
we will get him to join
this little band of rebels.
And he will be able to correct us live.
So...
I will reach out to him,
see if he is curious,
just, like, to join the fun.
There's 4 of us now!
Yeah.
Really passionate about this.
You'll be contributor
number four to Pipely.
And I'm very curious
where it goes next.
Really, really curious.
Yeah, no, this has been really fun.
Like, it's nice to...
Yeah, to dabble in these...
Yeah, there's just a lot more...
less constraints, more flexibility
to just try and experiment
and things and yeah, it's really fun.
Thank you very much Nabeel,
I'm looking forward to the next one!
Yeah, yeah, me too!
