Episode 01 · Storytelling

The Day the Internet Almost Died

🕐 ~20 min · 📅 2025 · 🎯 Level: B1–C1 · 📝 10 vocab words
Listen on: Yandex Music · Spotify · Apple Podcasts

Intro

Welcome back to Real Stories, Real English. I'm Ryan.

Today's episode is about something that almost certainly affected your life — even if you had no idea it was happening. It happened on a completely ordinary Tuesday morning. And it was over in less than an hour. But for that one hour, something terrifying became clear: the internet — the thing billions of people depend on every single day — is far more fragile than most of us think.

fragile — easy to break or damage. Like a glass sculpture, or a spider's web. Something that looks strong from a distance, but falls apart the moment something goes wrong. You might say: "The peace between the two countries was fragile" — or: "Her confidence was still fragile after the failure."

This is the story of June 8th, 2021. The day the internet almost died.

Part 1 — An Ordinary Tuesday

It was 9:47 in the morning, UTC. Almost eleven o'clock in London.

In offices across the city, people were making their second cup of coffee, opening their laptops, clicking on bookmarks they visited every single day. The BBC. Reddit. Amazon. The New York Times.

And then โ€” nothing.

Not a slow page. Not a spinning wheel. Just a blank white screen, or a short, cold message: "Error 503. Service Unavailable."

Within minutes, the same thing was happening everywhere. In New York, journalists trying to file their morning stories found their own publication's website completely unreachable. In California, software engineers trying to push their morning code discovered that GitHub had simply vanished from the internet. Gamers trying to open Twitch saw nothing. Shoppers on Amazon got errors. People trying to check government services in the UK found official pages returning blank screens.

For a brief but deeply unsettling moment, it felt like the internet itself had broken.

unsettling — making you feel worried, uncomfortable, or uncertain. A little stronger than just "uncomfortable." You might say: "There was an unsettling silence in the room after the announcement" — or: "The results of the experiment were unsettling for the scientists."

Across the world, people did what they always do when the internet breaks: they went to Twitter to complain about it. Twitter, at least, still worked. And the complaints came fast.

"Is the internet down?" "Why can't I open Amazon?" "Reddit is gone, I have nothing to do."

But behind all of those confused users and frustrated jokes, a very small team of engineers at a company most people had never heard of was already staring at their screens in absolute horror.

The company was called Fastly. And the next 49 minutes would define their entire existence.

Part 2 — The Company Nobody Knew

Let me ask you something. When you open a website — any website — what do you think actually happens? You type an address, you press enter, and the page appears. Simple, right?

The reality is a little more complicated. And to understand what happened on June 8th, you need to understand one concept: the CDN — Content Delivery Network.

The easiest way to understand it is with an analogy. Imagine you run a bakery in Moscow. Every morning, thousands of people from all over Russia want fresh bread from your bakery. If every single person drove to Moscow to pick up their loaf, the roads would be gridlocked, and the bread would be two days old by the time it reached Vladivostok.

gridlocked — completely blocked, unable to move. You hear it most often about traffic: "The city centre was gridlocked for hours." But you can use it for any stuck situation: "The negotiations were gridlocked — neither side was willing to move."

So instead, you set up distribution points all over the country — small warehouses in Saint Petersburg, Novosibirsk, Yekaterinburg — all stocked with your bread, ready to deliver locally. That is exactly what a CDN does — but for the internet. Instead of every user connecting to one central server in San Francisco, a CDN creates copies of a website's content across hundreds of data centres worldwide.
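
If you're curious what that idea looks like in code, here is a tiny sketch. It is not Fastly's real system: the cities, coordinates, and function names are invented purely for illustration. Each request is answered by the nearest "edge" location, which keeps its own local copy of popular pages.

```python
# A toy sketch of the CDN idea (not Fastly's real system): serve each request
# from the nearest edge copy instead of going back to the origin every time.

EDGE_LOCATIONS = {
    "london": (51.5, -0.1),
    "new_york": (40.7, -74.0),
    "tokyo": (35.7, 139.7),
}

ORIGIN = "origin server in San Francisco"

# Each edge location keeps its own local copy (cache) of popular pages.
edge_cache = {city: {} for city in EDGE_LOCATIONS}

def nearest_edge(user_lat, user_lon):
    """Pick the edge location closest to the user (rough squared distance is enough here)."""
    return min(
        EDGE_LOCATIONS,
        key=lambda city: (EDGE_LOCATIONS[city][0] - user_lat) ** 2
        + (EDGE_LOCATIONS[city][1] - user_lon) ** 2,
    )

def fetch(url, user_lat, user_lon):
    """Serve from the nearest edge cache if possible; go to the origin only once."""
    edge = nearest_edge(user_lat, user_lon)
    if url in edge_cache[edge]:
        return f"{url} served from the {edge} cache"
    edge_cache[edge][url] = f"content of {url}"       # first request: fetch from the origin
    return f"{url} fetched from the {ORIGIN}, now cached in {edge}"

print(fetch("bbc.com/news", 51.6, -0.2))   # the first London request goes to the origin
print(fetch("bbc.com/news", 51.4, 0.0))    # the next one is answered locally
```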

Fastly was one of the world's biggest CDN providers. Their network sat quietly, invisibly, between you and some of the biggest websites on the internet. Amazon used them. Reddit used them. Twitch. The New York Times. GitHub. The British government.

Most people had never heard the name. But Fastly was, in a very real sense, part of the skeleton of the internet. And on the morning of June 8th, that skeleton cracked.

Part 3 — The Bug That Was Always There

Here's the thing about what happened. It wasn't a hacker. It wasn't a cyberattack. Nobody broke in. Nobody stole anything.

The cause of one of the largest internet outages in history was a software bug. A flaw in the code. And not even a new one.

Back in May 2021 — about three weeks before the outage — Fastly's engineers had released a software update. Inside that update, hidden quietly in thousands of lines of code, was a bug. A small logical error. A mistake. But the bug didn't cause any problems right away. It just sat there, dormant, like a tiny time bomb.

dormant — inactive, sleeping, not doing anything — but capable of becoming active again. You often hear it used for volcanoes: "The volcano had been dormant for three hundred years before it suddenly erupted." Or for diseases: "The virus can remain dormant in the body for months." In this case, the bug was dormant — it existed, but it was doing nothing. Until one specific moment.

For three weeks, nothing triggered it. Millions of web requests passed through Fastly's network every hour. The bug waited.

And then, on the morning of June 8th, at 9:47 AM UTC — one customer changed a setting. That's it. One customer. One setting. A perfectly normal, allowed configuration change. We don't know who they were — Fastly never released the name. But whatever they changed, it was exactly the combination the dormant bug had been waiting for.

Within seconds, the bug woke up. And it didn't wake up slowly.

Part 4 — The Cascade

In technology, there's a phenomenon called a cascade failure. The word cascade comes from the image of a waterfall — water flowing over one edge, then the next, then the next, each level triggering the one below it.

A cascade failure works the same way. One system fails. That failure puts pressure on the next system. Which then also fails. Which puts pressure on the next one. And so on, faster and faster, until what started as a small problem has become a catastrophe.

catastrophe — a sudden, terrible event that causes a lot of damage or suffering. Much worse than just a "problem." A natural disaster is a catastrophe. You might say: "The fire was an absolute catastrophe for the local community" — or: "Losing that contract would be a catastrophe for the business."

That is exactly what happened inside Fastly's network on June 8th. The bug activated. It started sending incorrect instructions to Fastly's servers. Those servers, confused by the bad instructions, began to crash. But Fastly's network was designed so that if one server crashes, others automatically take over its traffic. The problem? Those other servers were now receiving MORE traffic than they were designed to handle — and the same bad instructions. So they crashed too.
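
To make the mechanics a little more concrete, here is a toy simulation of a cascade failure. The capacities, loads, and server names are invented, and real networks are far more complicated, but the pattern is the same: one crash pushes the survivors over their capacity, and the failures snowball round after round.

```python
# A toy cascade-failure simulation. The numbers are invented; the point is the pattern.

servers = {
    f"server-{i}": {"capacity": 100, "load": 95 - 5 * i, "up": True}
    for i in range(10)
}

def crash(name):
    """Mark a server as down and spread its traffic evenly over the survivors."""
    load = servers[name]["load"]
    servers[name]["up"] = False
    servers[name]["load"] = 0
    survivors = [s for s in servers.values() if s["up"]]
    for s in survivors:
        s["load"] += load / len(survivors)

# The bug takes out the busiest server first...
crash("server-0")

# ...and then every server pushed past its capacity crashes too, round after round.
round_number = 1
while True:
    overloaded = [n for n, s in servers.items() if s["up"] and s["load"] > s["capacity"]]
    if not overloaded:
        break
    for name in overloaded:
        crash(name)
    still_up = sum(s["up"] for s in servers.values())
    print(f"round {round_number}: {still_up} servers still up")
    round_number += 1
```

With these made-up numbers, the whole cluster is gone after three rounds: exactly the kind of runaway collapse that took most of Fastly's network down in under a minute.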

It took less than sixty seconds for 85% of Fastly's entire global network to go down. Think about that for a moment. A network that had taken years and hundreds of millions of dollars to build. Thousands of servers, spread across every continent. Gone. In sixty seconds.

Back in the real world, users were staring at error screens, reaching for their phones. In newsrooms, journalists were calling their IT departments. On gaming platforms, streamers were losing their connections mid-broadcast. And somewhere, in Fastly's offices, the monitoring systems that watch the health of their network were screaming.

The engineers on call looked at their dashboards. And they saw numbers that should not have been possible.

Part 5 — 49 Minutes

Here is where the story gets, in a strange way, almost impressive.

Because while the outside world was in chaos — users confused, journalists writing emergency articles, company social media teams scrambling to explain what was happening — inside Fastly, something remarkable was taking place.

scrambling — moving or working in a rushed, disorganized way, trying urgently to deal with a difficult situation. You scramble when there's no time to be calm or prepared: "Rescue teams were scrambling to reach the survivors." Or: "The government was scrambling for answers after the scandal broke."

Within one minute of the outage beginning — just sixty seconds — Fastly's engineers had already identified the problem. They could see the pattern in their data: one configuration change, one customer, one setting — and then everything fell.

They knew the bug existed in a specific feature. So they did the simplest thing available to them: they disabled that feature. Across their entire network. Globally. Simultaneously.

That decision — to turn off one feature — was the fix. And it worked.

Within 49 minutes of the outage starting, most services were back online. Reddit returned. Amazon came back. The BBC was live again. GitHub reappeared. The internet, as billions of people knew it, quietly resumed.

The actual code change that fixed the problem? Three lines. Three lines of code, and the crisis was over.
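
Fastly never published those three lines, so we can only guess at their shape. But the general pattern their engineers used, switching one feature off everywhere at once, is something many large systems support through feature flags. Here is a generic, invented sketch of that idea; the flag name and functions are hypothetical, not Fastly's actual code.

```python
# A generic "kill switch" sketch: keep a risky feature behind a flag so it can be
# switched off everywhere without touching the rest of the system. Names are invented.

FEATURE_FLAGS = {"new_cache_rules": True}

def apply_new_cache_rules(request):
    # imagine this newer code path is the one containing the dormant bug
    return f"handled {request} with the new rules"

def apply_old_cache_rules(request):
    # the older, well-tested path
    return f"handled {request} with the old rules"

def handle_request(request):
    if FEATURE_FLAGS["new_cache_rules"]:
        return apply_new_cache_rules(request)
    return apply_old_cache_rules(request)

print(handle_request("GET /news"))   # runs the newer (buggy) path

# The emergency fix: flip one flag, and every request avoids the buggy path at once.
FEATURE_FLAGS["new_cache_rules"] = False
print(handle_request("GET /news"))   # now runs the safe path
```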

Later that day, Fastly's CEO published a detailed, transparent post explaining exactly what had happened. No excuses, no corporate language designed to hide responsibility. Just the facts: here is the bug, here is what triggered it, here is what we did, and here is what we are changing. In the world of tech crises, it was considered a model example of how a company should communicate when things go wrong.

Part 6 — The Lesson

Let me ask you a question I want you to actually think about.

How many companies do you think are responsible for keeping the internet running? Not using the internet — actually keeping it running. The infrastructure. The pipes and cables and servers and systems that make the whole thing work.

The answer might surprise you. A significant portion of the world's internet traffic passes through just a handful of CDN providers. Fastly. Cloudflare. Akamai. That's it. A tiny number of companies, largely invisible to most users, holding together something that billions of people treat as a basic utility — like water, or electricity.

This concentration of power in a few hands is something internet experts have been worried about for years.

concentration — having a lot of something gathered in one place. When we say "concentration of power," we mean too much control belongs to too few: "There's a dangerous concentration of media ownership in this country." In economics: "Concentration in an industry usually means a lack of competition."

And June 8th proved those experts right. Because the internet was not designed to be this centralised. In its earliest form, it was built specifically to be decentralised — spread out, with no single point that, if broken, would bring everything down. But as the internet grew, as businesses scaled, as speed became everything, the infrastructure quietly consolidated around a few giant players.

consolidated — combined or merged into one larger, stronger unit. Companies consolidate when they merge. Power consolidates when it moves from many small sources into one big one: "The industry had consolidated significantly — where there had once been twenty companies, now there were three."

Now, one bug in one company's software, triggered by one customer's configuration change, can take down Amazon, Reddit, Twitch, the BBC, and GitHub all at once. Is that a problem? Most experts say yes. It creates what engineers call a single point of failure — one place where, if something goes wrong, everything goes wrong.

But here is the other side. Fastly's engineers fixed a global internet crisis in 49 minutes. That is, frankly, extraordinary. The same concentration that made the failure so widespread also made the recovery incredibly fast. If the internet were truly decentralised, coordinating a fix across thousands of small companies would have taken hours. Maybe days.

So we are left with a paradox. The thing that makes the internet vulnerable is also the thing that makes it resilient.

resilient — able to recover quickly from difficulties. One of the most useful words in English for people, systems, or materials that can take a hit and bounce back: "She was remarkably resilient after everything she'd been through." Or: "The economy proved surprisingly resilient during the crisis." Resilience is not about never being damaged. It's about how quickly, and how completely, you recover.

Outro

On June 8th, 2021, the internet took a hit. For 49 minutes, a huge chunk of the digital world went dark. Millions of people were confused, frustrated, or just bored without Reddit. Businesses lost money. Journalists couldn't publish their stories.

And then it came back. Fixed by a small team of engineers working urgently through their most terrifying morning. Restored by three lines of code and one brave decision.

The internet survived. But it left a question hanging in the air — one that nobody has really answered yet.

How many more bugs are sitting dormant in the code that keeps the world running? How many more configuration changes, made innocently by unnamed customers, are waiting to find them?

We don't know. And that, perhaps, is the most unsettling thing of all.

That's it for Episode One of Real Stories, Real English. Every episode comes with a full transcript — you're reading it right now. Next episode: another real story. Another chapter of the world as it actually happened.

I'm Ryan. Thank you for listening. I'll see you next time.

Vocabulary Summary

fragile — easy to break or damage. "The peace was fragile — one wrong move could end it."
unsettling — making you feel uneasy or worried. "There was an unsettling silence after the news."
gridlocked — completely blocked, unable to move. "Traffic was gridlocked for hours."
dormant — inactive but capable of becoming active. "The volcano had been dormant for centuries."
catastrophe — a sudden, terrible, damaging event. "Losing the data was a catastrophe for the company."
scrambling — moving urgently in a rushed, disorganized way. "Teams were scrambling to find a solution."
concentration — power or control gathered in one place. "A dangerous concentration of power in few hands."
consolidated — combined into one larger unit. "The market had consolidated around three players."
resilient — able to recover quickly from difficulties. "She proved remarkably resilient after the setback."