Keep Alert Chaos in Check


We last spoke in December 2021, which is

almost three years ago.

And the title of that conversation was

Keep on-call simple

And you check it out at

shipit.show/36

So I'm wondering what is new?

Actually, everything is new, except the word "Keep".

Because "Keep" is the name of a new

startup we are speaking today about.

And it's no more on call.

It's still trying to make

it simple, keep it simple.

And last time we talked about Grafana Labs, how Grafana Labs acquired the startup I built together with my co-founder, Ildar, back in the day. That startup was dedicated to helping on-call engineers set up on-call rotations and escalations.

And Keep, the new startup we're talking about today, is not focusing on who to notify and how to notify; it's focusing on what.

So we also have Tal joining us today.

I tell Matvey all the time that I think

he's super humble because he managed to

completely change the

whole market of IRM,

Incident Response Management tools.

I think since they built Amixr, which was acquired by Grafana Labs and is today known as Grafana OnCall, there has been a huge shift in that market. We saw tons of new players coming into the field. And I also think it somewhat laid the groundwork for what Keep is building to be.

My last job before I became a startup founder was as an on-call engineer. So I have a lot of empathy for everything happening during incidents, before incidents, after incidents, and especially for the people who are handling incidents.

And when I think about what I want to work on, this is the first thing that comes to my mind. And I'm lucky I met Tal and our third co-founder, Shahar, with whom I share this experience.

It's safe to say that every engineer, or anyone with an engineering background, shares at least the same perspective on this world.

Before I met Matvey, what brought me

into this world was the exact same thing

as Matvey just mentioned.

Being an engineer,

being an engineering manager,

facing incidents,

facing alert fatigue,

muting different Slack channels,

all of that.

So if you were to take us through an incident where everything about it went wrong: how it started, how it happened, how it unfolded, how it eventually ended, and the follow-through.

Do you have such an example in mind, Tal?

I have a few examples just

from the last couple of months.

CrowdStrike is a very large, let's say, antivirus company.

It has agents installed on endpoints in

different companies.

They released some update to their

software that caused

basically all the computers to crash.

Blue screen, if that means something to anybody.

And I think what's fascinating about this incident is that it actually caused hundreds of millions of dollars in losses for so many companies.

So that's one thing I have

in mind that was just recent.

And the other thing I have in mind is an incident with a major cloud provider, Google, where they basically made some mistake. I think the main reason isn't even public, but they made some mistake and erased a complete environment for a very large insurance company in the US, which also caused, I think, three days of downtime for that company.

And that was a big thing as well.

And it's the type of thing that, as an engineer, you think about: in today's cloud environment, can the major cloud providers fail? And you learn that they actually can.

So I guess those are my two examples.

Everyone makes mistakes, right? And systems that are able to handle mistakes are hard to design and hard to maintain. And the more complexity is added, the more difficult this becomes, for sure.

So CrowdStrike, I think that's a great example because, first of all, it was by far, I think, the biggest outage in the history of information technology.

It was recent. It was July. As you mentioned, it was just some number of months ago; it wasn't even a year ago that this happened.

It affected everyone.

Government, hospitals,

this was like really serious.

And not only did machines crash that shouldn't have crashed, they couldn't restart. I think that was the big one.

And a fix was rolled

out, you know, within hours.

Like the fix was out, but the fix

couldn't be applied fast enough.

Everyone was in damage control.

It was messy, really messy.

I was looking on Wikipedia.

I was looking this up because I knew that

we would talk about this.

And apparently, the financial damage was

closer to 10 billion dollars worldwide.

And it's an estimate by the financial

institutions, but that's huge.

This reminds me of something that Matvey mentioned when he first spoke. He mentioned black swans and black swan events, and I think this definitely qualifies as one.

So what are your thoughts there, Matvey?

By the way, I just want to throw in a fun fact. I think it was definitely a black swan for CrowdStrike.

Because I'm looking at their stock price from about six months ago, and it used to be almost 400 US dollars per share. And it's less than 300 right now.

So I think for them, it was a big thing.

Another fun fact about CrowdStrike, and it's a little bit closer to us. It's a small story.

Our startup is relatively young, and we are selling to enterprises, large enterprises. Practically all of them, I think all of them, were affected. And one of our customers, a prospect at the time, a customer now, wanted to onboard to Keep precisely to mitigate such things faster, such events, such black swans.

Going back to your question about black swans: the biggest black swan for me was a major outage of one of the cloud providers we used at Grafana.

And I can't disclose a lot about it, but it was a huge surprise for me. When you look at cloud providers' uptime pages, their status pages, you see a lot of items: SQL service, hosted SQL, something else, DNS. And you assume that if they go down, maybe one of them will go down, or maybe two of them. It's hard to imagine that the whole screen could actually be red.

I think that's the market we operate in. There is Murphy's Law, which says: everything that can go wrong will go wrong. You try to build walls around large-scale systems, but eventually people make mistakes. Software is never complete. There will always be something happening. It can be anything from the electricity going down in some server farm, to a line of code that somebody changes and breaks everything.

Things go wrong. That's one true phrase you can always rely on.

Yeah, things are not going to work. What are you going to do about it?

I like that take.

So if we think about CrowdStrike, and if we think about having Keep: let's assume the customer you mentioned, who is now a customer, had Keep at the time, before CrowdStrike happened. How would that have changed how things unfolded for them?

Customers of that scale, and I'm speaking about companies with probably hundreds or thousands of physical sites worldwide and tens of thousands of people working in those enterprises, have multiple solutions, monitoring systems, to keep track of what's going on in their infrastructure.

Actually, on one of our customer calls, we asked, as we usually do, which monitoring system they use, and the answer was: actually, anything you could think of. We have everything, somewhere in our org.

I think you have some counter for the

number of monitoring

tools out there, right?

I tried to count, like I found 370 plus,

and I kept finding new ones.

So those large enterprises, they monitor a lot of things, and all those monitoring systems generate signals, alerts, alarms, as some people call them.

And the problem is that if they collect

all those signals in one place,

they can't actually manage this scale.

So we talk with people who receive like

thousands and thousands an hour.

There is one particular customer who is

dealing with 70,000 alerts a day.

And the problem they are facing is the problem of finding the needle in the haystack: this is an actual incident, this is not noise, and the rest is noise. That is the problem.

We at Keep, we don't solve the problem of who to notify, how to notify, and what to actually do during the incident. That's the IRM space, the space for products such as incident.io, Grafana OnCall, PagerDuty.

But we're building the best software to help you look at this stream of events and say: those five events, they happened in the past, and most probably this is a real incident.
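To make the "needle in the haystack" idea concrete, here is a deliberately naive Python sketch of grouping alerts into candidate incidents by service and time proximity. It is only an illustration of the general concept; Keep's actual correlation is configurable and AI-assisted, and the field names used here (`service`, `fired_at`) are assumptions, not its schema.

```python
from collections import defaultdict
from datetime import timedelta

WINDOW = timedelta(minutes=5)  # alerts this close together are treated as one burst

def group_into_incidents(alerts):
    """Naive grouping: alerts for the same service within WINDOW form one candidate incident.

    `alerts` is a list of dicts with a 'service' string and a 'fired_at' datetime.
    """
    groups_by_service = defaultdict(list)  # service -> list of groups (each a list of alerts)
    for alert in sorted(alerts, key=lambda a: a["fired_at"]):
        groups = groups_by_service[alert["service"]]
        if groups and alert["fired_at"] - groups[-1][-1]["fired_at"] <= WINDOW:
            groups[-1].append(alert)   # continues the current burst
        else:
            groups.append([alert])     # starts a new candidate incident
    return [g for groups in groups_by_service.values() for g in groups]
```

Alerts that never group with anything are candidates for the noise bucket; bursts that keep recurring together are the needle worth paging on.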

I think people sometimes underestimate it. With the CrowdStrike event, what you as an end user get is the report that CrowdStrike publishes about what happened. But people underestimate the chaos that companies went through trying to understand what was happening, before CrowdStrike even realized they had released something that breaks everything.

So just think about it as a point in time where all the screens start to turn red, and there are people sitting in, usually, network operations centers, or individuals monitoring their systems, and everything becomes red.

And that's a point in time

where you are just in chaos.

And now you need to start figuring out...

Okay, what's happening?

Where does it start?

What's the single point in time where

everything started to collapse?

Why do you think that companies end up in

a situation where they

have a lot of alerts?

Like, how do you end up in that place?

First of all, how do you end up having so many monitoring solutions?

I understand two or three, right?

Because you want the monitoring to check

your other monitoring.

So you would have a

few, but more than that?

I have a lot of answers for that. But one of the things I wanted to check is the number of microservices, I think Netflix is famous for that, the number of microservices they have. I think it's more than a thousand.

They have this famous graph of Netflix

microservices and like

the connections between them

and the different frameworks they're

running, the different

database systems they're using,

the different queue

systems they're using.

It's a modern company,

but it started a while ago.

So just think, and at least from my perspective that's a good example, about a company that starts out with some infrastructure, some amount of complexity. People join the company. It's growing. It has major growth, like Netflix. The infrastructure gets more complex.

Team members who joined from different

companies, they like to work with the

tools that they're used to.

New technology comes in.

It has its own monitoring tool, or it

adopts 10 different monitoring tools.

A new guy comes in, he brings his own

methodology of how you do things.

And as the company grows, a few years later you find yourself with people's legacy: the things they brought in, the software they wrote, the tools they were using. Eventually it all sticks with the company. So you get to a point in larger enterprises where you just see everything of everything.

Technical debt is something that you always accumulate, and you try to find a balance: you have your company, it has its goals, it's a business, and eventually you need to make the business grow. It's very hard to show the return on investment when you need to fix those kinds of things, the things that don't necessarily push the business forward but actually carry hidden costs.

So I think that's how you find, at least

from my perspective, companies who use

tens of different tools.

Okay.

And does Keep, in this context, mean

keep everything you have,

we'll help you make sense of it?

You mean the name?

Yes.

No, the name actually has a cuter story.

Actually, Shahar, our co-founder who is not here, and I met each other 13 years ago. We served together in the Israel Defense Forces, and both of us used to play Age of Empires. A keep is something you have in Age of Empires, a building you can build there, and it was just a random story that we chose something from that game.

Oh wow, we can see it.

So that's a keep.

It's like a tower.

Okay.

Is that for archers?

Is that where archers would be?

I think so.

Yeah, I think so.

Okay.

I remember playing the game too.

That was a while back, but you're right,

that was a fun game.

Can you still play today, by the way?

I think you can already

play that in the browser.

So some people migrated the whole game

into the browser, which is kind of cool,

because you don't

have to install anything.

You remember there used to be CDs, and

you needed to have

license keys and everything

like that.

Now you can play Age of Empires in the

web browser, which is quite cool, but

yeah, you can play it.

What is the significance to, like, the day-to-day?

So you have this keep.

What is it protecting?

Or what is it watching over?

It's like a watchtower.

Is that what it's supposed to be?

It's protecting people who deal with alerts. As one example, we had a call with a small team of four who manage alerts in their internally built system. Those people are practically looking at alerts multiple hours a day, staring at those screens and trying to see, like in a Matrix movie, is something happening?

This is not a fun job, I guess.

We want those people to be busy with something maybe more interesting: applying some best practices, fixing some root causes, or building something for the future.

So I think that people can try and

imagine what this looks

like when it's all set up

and how it works.

Before we go into the demo part, and I'm very keen to see how this works out in practice, I'm wondering: why would someone pick this if, for example, they don't have that much volume?

What does that mean?

Even if you have maybe tens or hundreds,

you don't need to have

thousands for this to be useful.

Why would someone pick this?

When we started the journey of Keep, what we used to do to get feedback from users was to post stuff on Hacker News.

You usually get the truth

right to your face over there.

And one of the posts we wrote that worked very well for us, and brought in very meaningful feedback, was one that said: GitHub Actions for your monitoring tools.

So one of the main capabilities you have within Keep is to create workflows, automations whose triggers are based on the alerts you get, on the alarms, on the events that come into Keep.

I think that was super

interesting for a lot of companies.

We actually saw a lot of other monitoring

tools and even

companies from the IRM space

actually implementing

workflows pretty much the same day.

I'm not saying anybody copied or anything

like that, but we

actually saw them posting

like a few days later that they also

support this type of thing.

And I find that a lot of the smaller startups, smaller companies, find a lot of value in just writing automations.

So before they become this huge

monitoring beast with this sea of alerts

and sea of signals that

they get, they can actually leverage the

automation capabilities

that we have within Keep.

They can do anything from the smallest thing, creating a ticket when they get an alert so they can handle it and fix it later, to trying to automatically fix the issue; maybe it's just a matter of restarting something in their Kubernetes cluster.

That's the kind of thing you can do with Keep's workflow automation.

And I think that's what a lot of the

smaller companies find

interesting in what we're building

within Keep.
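As a rough picture of what such an automation does, here is a hedged Python sketch: an alert comes in and a ticket gets filed. In Keep this would be declared as a workflow with a ticketing provider rather than hand-written code, and the endpoint, fields, and token below are placeholders, not any vendor's real API.

```python
import requests

TICKET_API = "https://tickets.example.com/api/issues"  # placeholder ticketing endpoint

def handle_alert(alert: dict, token: str) -> str:
    """File a ticket for severe alerts so someone can handle and fix them later."""
    if alert.get("severity") not in ("critical", "high"):
        return "ignored"  # low-severity alerts don't open tickets in this sketch
    resp = requests.post(
        TICKET_API,
        headers={"Authorization": f"Bearer {token}"},
        json={
            "title": f"[{alert['severity']}] {alert['name']}",
            "body": alert.get("description", ""),
            "labels": ["auto-created", alert.get("source", "unknown")],
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["id"]  # ticket id, useful for linking back to the alert
```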

Okay, okay.

Now that sounds interesting.

Yeah, happy to share the

Hacker News post as well.

Yes, we'll put it in the show notes.

That's a good idea.

That's a good idea.

But again, there's nothing better, I think, than seeing the thing in action, and seeing its limits as well, where it maybe stops, where its edges are.

And also, yeah, those AHA moments

that you can only have

when you see something.

So I'm looking forward to that.

It's time for the demo.

I hope it's a live demo,

so I'm a little bit nervous.

Things that can go wrong will go wrong.

Exactly.

Tal is here to support you.

Everything will go wrong, Matvey, so don't worry.

To be honest, I prepared a little bit. If it goes wrong, it will be a surprise for me.

Well, we can help you fix it.

How about that?

Nice.

We can try.

Well, we can at least try helping.

I don't know if we will

succeed, but we're here for you.

I want to recap a little bit before I jump into the demo, because up to this moment we spoke about the possibilities of how Keep is able to find incidents in the lake of alerts.

And those AI features we have are actually pretty advanced features we sell to large enterprises. And those features are built on our open-source Keep, which is pretty well adopted by large enterprises and by small companies who don't have that amount of alerts.

And I will speak about this open-source part. So if you are interested in the AI, reach out to us, but we will not spend your time on advertising here.

And this part, the open-source part, is like a Swiss Army knife. As Tal mentioned, it has workflows, like GitHub Actions for alerts; it has deduplication; it has enrichment from CSVs; it has filtering.
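As an aside on the deduplication piece: the usual approach is to fingerprint each incoming alert and fold repeats into the first occurrence. The sketch below is a generic illustration of that idea, not Keep's actual rules; which fields go into the fingerprint is configurable in practice, and the field names here are placeholders.

```python
import hashlib
import json

def fingerprint(alert: dict) -> str:
    """Hash the fields that identify 'the same' alert; the chosen fields are illustrative."""
    key_fields = {k: alert.get(k) for k in ("source", "name", "service")}
    return hashlib.sha256(json.dumps(key_fields, sort_keys=True).encode()).hexdigest()

seen: dict[str, dict] = {}

def ingest(alert: dict) -> bool:
    """Return True if the alert is new, False if it deduplicates into an existing one."""
    fp = fingerprint(alert)
    if fp in seen:
        seen[fp]["count"] = seen[fp].get("count", 1) + 1  # fold the repeat into the original
        return False
    alert["count"] = 1
    seen[fp] = alert
    return True
```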

And I will show one specific use case covering workflows. I will try launching the whole software from the ground up, as a typical open-source user would, to bring more risk to this demo.

Right.

I see.

We love that, because this is closer to what anyone would experience when they start. This is the real deal.

Thank you.

So let's think about our use case.

Usually, based on my experience, when people bring Keep into their infrastructure, what they want as step one is to integrate it well with everything else they have.

And the exact way they want to build this integration is always different. I was thinking about a use case, and the one I want to cover today is receiving alerts from a monitoring system.

Once I receive an alert, I want to go to the actual application database and check something. And if, for example, my job didn't publish a result to the database in the last 15 minutes, I want to shoot this alert to the on-call engineer, to some IRM system.

So I will build this use case from the ground up. How does that sound?

Yeah, sounds great.

Let's try it!

To see the demo, find the YouTube video

link in the show notes.

And if you enjoy this content

and want to support it, go to

makeitwork.tv, join as a member

and watch the full

conversation in 4K straight off the

Jellyfin media server.

Yes, offline download

is enabled.

We now rejoin Matvey and Tal

just as they made it work.

Moment of truth.

Okay. It's spinning. Wow, 200. And we received our alert with the environment. Thank you, Tal, for the help. It worked.

And that's it.

Let's sum it up. So what did we do? We received an alert from the monitoring system. We built a workflow which automatically goes to the MySQL database and makes a query. And based on the results of this query, it republishes the alert to a third-party system.

That's actually it.
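For readers who skipped the video, here is a plain-Python sketch of the logic the demoed workflow encodes: on an incoming alert, check the application database, and escalate only if the job hasn't published a result in the last 15 minutes. In Keep itself this is declared as a workflow using database and notification providers rather than code; the table, connection details, and webhook URL below are made up for illustration.

```python
from datetime import datetime, timedelta
import pymysql
import requests

IRM_WEBHOOK = "https://irm.example.com/hooks/oncall"  # placeholder IRM endpoint

def on_alert(alert: dict) -> None:
    """Triggered per alert: query the app database, escalate if the job looks stale."""
    conn = pymysql.connect(host="db.internal", user="keep", password="secret",
                           database="app")
    try:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT MAX(published_at) FROM job_results WHERE environment = %s",
                (alert["environment"],),
            )
            (last_run,) = cur.fetchone()
    finally:
        conn.close()

    # Republish to the on-call system only if there was no result in the last 15 minutes.
    if last_run is None or datetime.utcnow() - last_run > timedelta(minutes=15):
        requests.post(IRM_WEBHOOK, json={
            "summary": f"Job stale in {alert['environment']}",
            "alert": alert,
        }, timeout=10)
```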

OK.

So does this need an alert

to be triggered for

the workflow to start?

So I know that we mentioned that

workflows can be triggered

by different types of events.

We mentioned an incident

can trigger a workflow.

We've seen an alert triggering a workflow.

Is there something else

that can trigger workflows?

There is one more option, to trigger them manually. But if I trigger them manually, I will not have the alert content here. So intentionally, when I was debugging, I wanted to debug it using an alert. Probably something for us to improve in the UI.

You can run it manually for the alert.

Right.

Yeah, it's already implemented, yeah.

One of the things we had in mind here is that another pain point with alerts is that you always find it hard to debug them, to understand whether they really work or not.

Just like what Matvey just did: sending the alert over and over again to see if the threshold is right. Maybe you need to adjust it. Maybe you need to change something. Maybe there is a typo in the query you just wrote.

And this was one of the things we wanted to implement within Keep: allowing somebody to test a workflow very easily. So instead of having to send the triggering event over and over again, you can very simply either manually execute the workflow and feed it everything it expects.

So for example, environment in this case,

you could just fill it in manually

and test if the workflow runs properly.

Or you can actually take a past alert

and make it trigger the workflow again

and fill in all the information.

One more thing that I think is interesting is that besides events, you can also have an interval trigger, which is also a very common use case we see from our users: just run this workflow every 30 seconds to check whether the event happened or not, and then decide whether to do something.
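The replay-style testing described above might look roughly like this from the outside: capture an alert payload once, then resend it whenever you want to exercise the workflow. The endpoint path and payload shape here are assumptions for illustration, not Keep's documented API.

```python
import json
import requests

KEEP_EVENT_URL = "http://localhost:8080/alerts/event"  # placeholder ingestion endpoint

def replay_alert(path: str) -> None:
    """Resend a previously captured alert so the workflow fires without waiting for monitoring."""
    with open(path) as f:
        alert = json.load(f)
    alert["environment"] = "staging"  # tweak fields to probe thresholds and conditions
    resp = requests.post(KEEP_EVENT_URL, json=alert, timeout=10)
    resp.raise_for_status()
    print("accepted:", resp.status_code)

replay_alert("captured-alert.json")
```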

Do you have any sort of reporting capability in terms of showing how many alerts were triggered, how many workflows? And I'm also wondering, off the back of this, we've seen workflows fail. So would you trigger an alert when a workflow fails?

I think it's a cool feature idea.

We actually send emails when workflows fail.

So if Matvey is the one

who uploaded the workflow,

he will get an email saying that the

execution of the workflow failed.

But now that you mentioned it, I think

it's super cool to

maybe have a workflow that

runs when other workflows are failing.

That's a cool idea.

Yeah, because that would be my worry when I set up a system like this.

What monitors the system

that's supposed to alert me?

And when there's a failure,

I would want to know about it.

I mean, the fact that you

have emails is a good idea,

because maybe you shouldn't use the same thing that might be failing, because then you end up in a constant, endless loop.

The thing is failing.

It's triggering other workflows.

Maybe there's like--

I can imagine that being a problem.

So having emails is important, again,

so that it gives you

assurance that when there's a problem,

you will know about it.

Actually, that's a really good question about metrics and about observability for Keep itself, how Keep works. And that's something we recently started to invest more in. One of the latest features is actually a metrics endpoint for Keep's primitives.

So you can export to Prometheus how many alerts you have, how many alerts per incident you have, and you can actually filter by labels from the alerts. This metrics endpoint seems small, but it's really powerful for getting some usage statistics back.
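A Prometheus-style metrics endpoint is plain text, so pulling a number out of it takes only a few lines of Python. The metric name below (`keep_alerts_total`) is an illustrative guess rather than Keep's actual metric name, and the sketch ignores timestamps and label-level breakdowns.

```python
import requests

def scrape_metric(base_url: str, metric: str) -> float:
    """Sum all samples of one metric from a Prometheus text-format /metrics endpoint."""
    text = requests.get(f"{base_url}/metrics", timeout=10).text
    total = 0.0
    for line in text.splitlines():
        # e.g. keep_alerts_total{source="datadog"} 42
        if line.startswith(f"{metric}{{") or line.startswith(f"{metric} "):
            total += float(line.rsplit(" ", 1)[1])
    return total

print(scrape_metric("http://localhost:8080", "keep_alerts_total"))
```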

Yeah.

So when it comes to deploying Keep,

we've seen like the

Docker Compose approach.

We've also seen that there

is a way to deploy it in Kubernetes.

I'm wondering what the requirements are for Keep to run.

So it's sending emails. I'm assuming there needs to be some sort of integration with an email provider to be able to send those emails, maybe SMTP, but also a database.

So does it use SQLite?

Do you need MySQL?

What does the Keep installation, like the simplest Keep installation, look like?

So yeah, we have a few

different deployment types.

Actually, Matvey, maybe you

can open the specification page

from the documentation once again.

I'm just discussing it with our team right now, and I think it's somewhat shooting ourselves in the foot, but we support various different database dialects: SQLite, MySQL, Postgres, and Microsoft SQL Server. Some of our customers have that. It's quite an engineering experience to do that. But this is what we support.

It very much depends on

the scale of the events

that you're sending into Keep.

So we have a page that describes what kind of infrastructure we expect at different scales.

Something else we have within the infrastructure is that at some scale you also need a queuing system so the events can be properly digested. So we also use Redis with something called ARQ, which is basically an implementation of queues over Redis.
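To picture the queueing piece, the sketch below shows a producer and consumer over a plain Redis list. Keep itself uses ARQ, an async job queue on top of Redis, so this is only the general shape of the idea, not the actual implementation.

```python
import json
import redis

r = redis.Redis(host="localhost", port=6379)

def enqueue(alert: dict) -> None:
    r.lpush("alerts:queue", json.dumps(alert))   # producer: push the raw event

def process(alert: dict) -> None:
    print("digesting", alert.get("name"))        # stand-in for real processing

def worker() -> None:
    while True:
        _, raw = r.brpop("alerts:queue")         # consumer: block until a job arrives
        process(json.loads(raw))
```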

But this page in our documentation

basically describes everything that we

need in terms of CPU

and memory.

And we support, again,

Kubernetes, OpenShift.

Somebody from our community wrote a guide

on how to deploy this to

AWS ECS, which is also cool.

OK, this is great.

That was a cool demo, by the way, Matvey.

Thank you very much.

Oh, thank you.

And the fact that it didn't fully work the first time, that's how we know it was real, right? It wasn't pre-recorded. We did it, you know, we entered all the commands and started from scratch. And there was a small little thing we had to figure out, but it was all very smooth, I have to say.

So thank you very much for sharing that.

Thank you.

As we are approaching the end, I'm wondering what you are thinking for next year in the context of Keep.

Are there some things that you would like

to get to, some maybe big

challenges that you see ahead?

How do you see 2025 for Keep?

I can share my thoughts about this; there are generally two paths that I see for Keep. Something we didn't really discuss, but which is a major part of what Keep is, is that we're open source.

And like we discussed before, there's a

lot of use cases for the smaller

startups, smaller

companies when they use Keep.

And I think we love open source beyond the fact that it opens a business world for us, because bigger companies, enterprises, look at open source as the next big thing, and they try to migrate some of the tools they're already using today to open source.

So one path, in my eyes, is that we want to keep on nurturing this community. We want to make Keep the go-to for engineers or for operations, this Swiss Army knife for everything that is alerts or events and the things you can do with them.

On the other hand, we have huge, huge,

huge things that we still need to figure

out with AI and everything that is

evolving practically every minute.

I guess there's a new GPT model that was just released while we were recording this podcast.

So there's a lot of things we need to

figure out there. There's a lot of work.

One of the things you mentioned a few

minutes ago without maybe even knowing

how important it is for

us is the reliability part.

So how can we make sure we are reliable, being the most crucial part of the reliability pipeline in companies?

And that's something we need to invest a lot in. And I think those are the two major things we are focusing on right now. Matvey, I don't know if you see things the same way, but I guess you can agree with that.

Absolutely.

Now you hash it out.

We are very, very much an engineering team, and all of us are engineering founders. And when we speak about technical challenges like reliability, I'm not worried about them, because technical challenges are not that challenging here.

What's actually challenging is to prove to the market that we can do what we claim in the AI space, which we almost didn't touch today: that we can correlate alerts into incidents at scale, in on-premises environments with no internet connection. We can summarize incidents, provide insights from past runbooks, and that's a huge scope of work we do.

And that's actually what we sell. And we

see that in the market nowadays, there is

a lot of skepticism about it.

There are a few products that tried to do it in the past and didn't do it that well. And people who tried them think, maybe it's not a real thing. They tried it many years ago, and it didn't work that well back then.

But nowadays, with what we see in the AI world, with LLMs, with local models, open-source models, it's all becoming possible.

And the challenge for us, a huge challenge for 2025, is to have a few good examples of how it works at scale for large customers, and to publish them as customer stories, to change the perception of AIOps in general, to show that AIOps can actually be AI.

Right. If people want to learn more about

this side of Keep, where should they go

to read or watch

something? What would you recommend?

We will publish more soon.

For now, the best way

is to go to our website.

There is a contact us form, and we

actually just do demos.

That's the best way.

It's a very similar call, but mostly focused on the AI part. We run a huge instance with a lot of alerts, and it's happening on the fly, so sometimes it doesn't work. Like today. But that's why we do demos.

Cool.

Matvey, thank you very much for the demo,

and thank you very much for reconnecting

after a couple of years.

I really enjoyed that.

Tal, it's been great knowing you and

talking a little bit

about what you're doing next.

I'm excited to see what you do.

You're right.

It is challenging.

The reliability part is challenging.

I deal with that on a daily basis,

and I know how far it goes.

It's all systems, and

people are part of those systems.

So even though the tech is the good part, it's the people that make mistakes, which is what you're trying to prevent, or at least make more obvious when they happen. So that's the hard part.

So I do have an appreciation for that.

But the AI, all the stuff

that's coming, you're right.

It's moving very fast, but the

capabilities are amazing.

So I'm excited about the

intersection of the reliability

part, the AI part, the complexity part,

how we're trying to just

make sense of the sea of alerts,

which I think is a

great way to think of it.

We're just getting

overloaded with signals

from all over the place.

Most of it is noise,

but in that haystack,

there's a needle that's really important.

And if you don't pay attention to it,

CrowdStrike 2 is just around the corner.

That's a placeholder for

whatever the next big thing is,

but we know it's going to happen.

Thank you for having us.

This was amazing.

And probably a sneak peek: maybe we meet again. I hope not in three years, but maybe sooner.

Yes.

It is a teaser.

Sooner, yes.

We have some plans.

We spoke about the AI today, and we intentionally didn't show it, because we find it valuable to show the open-source part in such podcasts, something you can jump into.

Indeed.

And if you're working in a large enterprise, you can reach out to us and we'll show you the AI. But as a sneak peek: follow us, because we have a strong belief in open source, and we really, really want to bring some of our AI features downstream to open source once they are polished enough.

So stay tuned.

Love that.

All right.

See you next time.

Bye bye.

Thank you.

If you enjoyed this episode, I would appreciate a rating and review in your favorite podcast app.

You may want to subscribe for free to

makeitwork.tv so that you don't miss

the one year anniversary newsletter post.

Thank you for tuning in, until next time.
