FactOTD

The Internet Ran Out of Addresses: The IPv4 Exhaustion Crisis Explained

March 28, 2026 · 3 min read

The Fact

IPv4, which uses 32-bit addresses, was expected to provide more than enough address space when the internet adopted it in 1983. The global pool of unallocated IPv4 addresses ran dry in 2011.

Four Billion Addresses and a Catastrophic Miscalculation

In the early 1980s, the internet was a research network used by a small community of academics and military researchers. The idea that billions of ordinary people would one day carry internet-connected devices in their pockets, or that homes would contain dozens of connected appliances, was not yet even speculative science fiction. It was simply outside the frame of possibility.

IPv4 — Internet Protocol version 4 — was formalized in 1981 and defined the rules by which computers on the internet identify themselves and route data. Each device on an IPv4 network is assigned a numerical address: four numbers between 0 and 255, separated by dots, such as 192.168.1.1. With 32 bits allocated for these addresses, the system supports 2 to the 32nd power unique addresses — approximately 4.3 billion.
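The dotted-quad notation is just a human-readable rendering of a single 32-bit integer, which is easy to see with Python's standard `ipaddress` module:

```python
# Sketch of IPv4's 32-bit address space using Python's
# standard-library ipaddress module.
import ipaddress

addr = ipaddress.IPv4Address("192.168.1.1")
print(int(addr))                          # the dotted quad as one 32-bit integer
print(ipaddress.IPv4Address(int(addr)))   # and back again: 192.168.1.1

total = 2 ** 32                           # every possible 32-bit address
print(total)                              # 4294967296 — roughly 4.3 billion
```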

In 1983, when the internet officially adopted the TCP/IP protocols that included IPv4, 4.3 billion seemed more than enough. The entire internet at that time consisted of a few hundred machines. Even the most optimistic projections did not envision a network of that size for decades, if ever.

The Arithmetic of Exhaustion

The internet grew far faster than anyone predicted. The commercialization of the web in the early 1990s triggered explosive growth in the number of connected computers. By the late 1990s, with the dot-com boom underway and personal computers becoming household items, the eventual exhaustion of IPv4 addresses moved from theoretical concern to engineering emergency.

Compounding the problem was inefficiency in how addresses had been allocated. In the early internet, large blocks of addresses — entire /8 networks containing 16 million addresses each — had been assigned to universities, corporations, and government agencies on the assumption that they would need them. MIT, for example, was allocated a block of 16 million addresses for its campus network. Many of these large allocations were underused, effectively withdrawing addresses from circulation while demand elsewhere went unmet.
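The scale of those early allocations is easy to compute. A minimal sketch, using MIT's historical 18.0.0.0/8 block as the example:

```python
import ipaddress

# The /8 block historically assigned to MIT's campus network.
net = ipaddress.IPv4Network("18.0.0.0/8")
print(net.num_addresses)              # 16777216 — 2**24 addresses in a single /8

# One /8 is about 0.4% of the entire IPv4 address space:
print(net.num_addresses / 2 ** 32)    # 0.00390625
```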

Engineers developed techniques to extend IPv4's useful life. Network Address Translation, or NAT, allowed multiple devices on a local network — such as all the computers in a home or office — to share a single public IPv4 address. This dramatically reduced the number of public addresses needed and bought years of additional time, but it also introduced complexity and limitations that created problems for certain applications.
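The core idea behind NAT can be sketched in a few lines. This is a toy illustration, not a real translator: the public address and port range below are illustrative choices, and a real NAT also rewrites packets and tracks connection state.

```python
# Toy sketch of NAT's core trick: many private (address, port)
# pairs share one public address, distinguished by the public
# port the translator assigns to each outbound flow.
PUBLIC_IP = "203.0.113.5"  # illustrative address from a documentation range


class Nat:
    def __init__(self):
        self.next_port = 40000
        self.table = {}  # (private_ip, private_port) -> assigned public port

    def outbound(self, private_ip, private_port):
        """Map a private endpoint to the shared public address."""
        key = (private_ip, private_port)
        if key not in self.table:
            self.table[key] = self.next_port
            self.next_port += 1
        return (PUBLIC_IP, self.table[key])


nat = Nat()
print(nat.outbound("192.168.1.10", 51000))  # ('203.0.113.5', 40000)
print(nat.outbound("192.168.1.11", 51000))  # ('203.0.113.5', 40001)
```

Because the translator must hold per-connection state, inbound connections to devices behind the NAT are not straightforward, which is one source of the "complexity and limitations" noted above.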

The Day the Pool Ran Dry

The Internet Assigned Numbers Authority (IANA), the body responsible for managing the global pool of unallocated IPv4 addresses, formally exhausted its supply of addresses to distribute to regional registries on February 3, 2011. Individual regional registries continued distributing addresses from their own reserves for varying periods after that, but the fundamental supply had been consumed.

The transition to IPv6 — which uses 128-bit addresses, creating a theoretical address space of 340 undecillion unique addresses (a number so large it effectively cannot be exhausted by human civilization) — had been planned since the mid-1990s. But the transition proved far slower than hoped, complicated by the need to update enormous quantities of existing networking infrastructure, software, and devices to support the new protocol.
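The jump from 32 to 128 bits is hard to overstate; the numbers themselves make the point:

```python
import ipaddress

total_v6 = 2 ** 128
print(total_v6)   # 340282366920938463463374607431768211456 ≈ 3.4 × 10**38

# IPv6 addresses are written as eight colon-separated hex groups;
# this example uses the 2001:db8::/32 documentation range.
addr = ipaddress.IPv6Address("2001:db8::1")
print(addr.exploded)   # 2001:0db8:0000:0000:0000:0000:0000:0001
```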

Progress has been substantial: major internet service providers, operating systems, and websites have adopted IPv6, and global IPv6 traffic has grown significantly. Yet IPv4 and IPv6 continue to coexist, with complex translation mechanisms bridging devices that speak only one language. The miscalculation of 1983 left an infrastructure legacy that the internet is still working to fully overcome.



FactOTD Editorial Team


The FactOTD editorial team researches and verifies every fact before publication. Our mission is to make learning effortless and accurate.
