Internet Access Is Now A Basic Human Right


People may joke that others spend too much time on the internet, but this intricate series of tubes has become an important part of everyday life—so much so that it’s become a human rights violation to take it away.

That’s according to the United Nations Human Rights Council, which passed a non-binding resolution in June condemning countries that intentionally take away or disrupt their citizens’ internet access.

The resolution was passed last Friday over opposition from countries including Russia, China, Saudi Arabia, South Africa, and India. Their objection was to the passage that “condemns unequivocally measures to intentionally prevent or disrupt access to or dissemination of information online.” More than 70 states supported the resolution, according to a statement released by Article 19, a British organization that works to promote freedom of expression and information. Thomas Hughes, the executive director of Article 19, wrote:

“We are disappointed that democracies like South Africa, Indonesia, and India voted in favour of these hostile amendments to weaken protections for freedom of expression online...A human rights based approach to providing and expanding Internet access, based on states’ existing international human rights obligations, is essential to achieving the Agenda 2030 for Sustainable Development, and no state should be seeking to slow this down.”

The resolution notes what many of us already know: It’s important to increase access to the internet, as it “facilitates vast opportunities for affordable and inclusive education globally,” or provides other resources for education, especially across the digital divide. In accordance with the 2030 Agenda for Sustainable Development, the organization also recognized that the spread of technology has the “great potential to accelerate human progress.”

It’s all here: your news organizations, your job-hunting resources, and your credit card statements. It’s become impossible to live without basic internet access.

Leaders in other countries have already stressed the importance of open access, including President Barack Obama, who said in 2015 that “today, high speed broadband is not a luxury, it’s a necessity.”

The resolution also highlights a number of issues that need to be addressed, including freedom of expression on the internet. Among the points presented were statements:

  • Calling upon all states to address security concerns in “a way that ensures freedom and security on the Internet,”
  • Ensuring accountability for all human rights violations and abuses committed against persons for exercising their human rights,
  • Recognizing that privacy online is important,
  • Stressing the importance of education for women and girls in relevant technology fields.

The UN can’t enforce resolutions legally. Rather, they’re issued to provide guidelines for participating nations and to put pressure on any that may have dissenting views. These are just general statements on how governments should shape laws when it comes to the internet. It’s nice to see, even if it does little beyond filling a few pieces of digital paper.

The next step is for those countries to start actively addressing problems, including laws pertaining to freedom of expression and how those rights can be abused to spread violence, terrorist ideals, and harassment. The more we discuss the problems that come along with the free rein of the internet, the closer we’ll get to Valhalla (or so I’ve heard).

What is IoT?

The Internet of Things (IoT) is a scenario in which objects, animals or people are provided with unique identifiers and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction. IoT has evolved from the convergence of wireless technologies, micro-electromechanical systems (MEMS) and the Internet.


A thing, in the Internet of Things, can be a person with a heart monitor implant, a farm animal with a biochip transponder, an automobile that has built-in sensors to alert the driver when tire pressure is low -- or any other natural or man-made object that can be assigned an IP address and provided with the ability to transfer data over a network. So far, the Internet of Things has been most closely associated with machine-to-machine (M2M) communication in manufacturing and power, oil and gas utilities. Products built with M2M communication capabilities are often referred to as being smart. (See: smart label, smart meter, smart grid sensor)

IPv6’s huge increase in address space is an important factor in the development of the Internet of Things. According to Steve Leibson, who identifies himself as “occasional docent at the Computer History Museum,” the address space expansion means that we could “assign an IPV6 address to every atom on the surface of the earth, and still have enough addresses left to do another 100+ earths.” In other words, humans could easily assign an IP address to every "thing" on the planet. An increase in the number of smart nodes, as well as the amount of upstream data the nodes generate, is expected to raise new concerns about data privacy, data sovereignty and security.
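Leibson’s claim is easy to sanity-check with back-of-envelope arithmetic. The sketch below uses assumed figures for Earth’s surface area and a one-atom-thick surface layer (neither number is from the article) to compare the 2^128 IPv6 address space against a rough surface-atom count:

```python
# Back-of-envelope illustration of the IPv6 address space.
# The surface-area and atom-density figures are assumptions for
# illustration, not numbers from the article or from Leibson.
IPV6_ADDRESSES = 2 ** 128

EARTH_SURFACE_M2 = 5.1e14   # ~510 million km^2 of surface area
ATOMS_PER_M2 = 1e20         # ~one atomic monolayer (atoms ~1 angstrom across)

surface_atoms = EARTH_SURFACE_M2 * ATOMS_PER_M2
earths_coverable = IPV6_ADDRESSES / surface_atoms

print(f"IPv6 addresses:       {IPV6_ADDRESSES:.3e}")
print(f"Surface atoms (est.): {surface_atoms:.3e}")
print(f"Earths coverable:     {earths_coverable:.0f}")
```

Even with these crude estimates, the address space covers the surface atoms of well over 100 Earths, consistent with Leibson’s point that every “thing” on the planet could trivially get its own address.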

Although the concept wasn't named until 1999, the Internet of Things has been in development for decades. The first Internet appliance, for example, was a Coke machine at Carnegie Mellon University in the early 1980s. The programmers could connect to the machine over the Internet, check the status of the machine and determine whether or not there would be a cold drink awaiting them, should they decide to make the trip down to the machine.

Kevin Ashton, cofounder and executive director of the Auto-ID Center at MIT, first mentioned the Internet of Things in a presentation he made to Procter & Gamble. Here’s how Ashton explains the potential of the Internet of Things:

“Today computers -- and, therefore, the Internet -- are almost wholly dependent on human beings for information. Nearly all of the roughly 50 petabytes (a petabyte is 1,024 terabytes) of data available on the Internet were first captured and created by human beings by typing, pressing a record button, taking a digital picture or scanning a bar code.

The problem is, people have limited time, attention and accuracy -- all of which means they are not very good at capturing data about things in the real world. If we had computers that knew everything there was to know about things -- using data they gathered without any help from us -- we would be able to track and count everything and greatly reduce waste, loss and cost. We would know when things needed replacing, repairing or recalling and whether they were fresh or past their best.”

Dr. John Barrett explains the Internet of Things in his TED talk.

How microwaves could help you surf the Internet at the speed of light


In theory, under the very best conditions, data would be able to travel across the Internet at the speed of light. In reality, as we all know, that doesn’t happen for a variety of reasons such as the fact that we don’t live in a vacuum, bandwidth constraints create bottlenecks, and communication protocols slow things down. However, new research suggests that much of what’s keeping us from surfing at the speed of light is latency caused by the physical infrastructure of the Internet and that there’s a surprisingly cheap and realistic solution to the problem.

Researchers from the University of Illinois at Urbana-Champaign and Duke University recently looked at the main causes of Internet latency and what it would take to achieve speed-of-light performance in a paper titled “Towards a Speed of Light Internet.” Reducing latency on the Internet, the authors posit, could have many positive benefits, such as improved user experience, expanded use of thin clients, and better geolocation. “We want to push to the limits of that endeavor; speed-of-light is the only *fundamental* limit,” one of the paper’s authors, Ankit Singla, who will soon be joining the faculty at ETH Zürich, told me via email. “Our work is an examination of why this is worth doing, and what it might take.”

Infrastructure latency is the main culprit

To get a sense of just how much slower than the speed of light the Internet currently is, Singla and his colleagues measured the time it took to fetch the index-page HTML of 28,000 top websites from clients at 186 locations around the world in December 2014 (SSL sites were excluded from the study). Using the time it would take light to make the round trip between the client and the web server as a baseline, they found that the median fetch time was about 35 times as long as it would take light to travel the same distance, while the fetch time at the 80th percentile was more than 100 times as long.

To find out where the slowdowns were coming from, the researchers also broke down the fetch time by step: the median DNS lookup took 7.4 times as long as it would take light to travel the same distance, TCP handshakes took 3.4x, request-responses 6.6x, and TCP data transfers 10.2x. While it might seem that the overhead of these protocols causes the bulk of the delay, much of it actually comes from the latency of the underlying infrastructure, which works in a multiplicative way by affecting each step in the request. When the researchers adjusted for the median ping time from clients to servers (itself 3.2 times longer than it would take light to travel the same distance), the true protocol overheads dropped to 2.3x for the DNS lookup, 1.1x for the TCP handshake, 1.0x for the request-response, and 3.2x for the TCP transfer.
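The multiplicative effect can be seen with a toy calculation. The numbers below are illustrative (the c-latency and round-trip counts are hypothetical, not the paper's per-connection data): a protocol step that costs k round trips inherits the infrastructure's 3.2x inflation k times over.

```python
# Illustrative sketch of multiplicative latency inflation. The c-latency
# and round-trip counts are hypothetical; only the 3.2x median ping
# inflation figure comes from the study.
C_LATENCY_MS = 10.0   # hypothetical speed-of-light round-trip time for a path
RTT_INFLATION = 3.2   # median ping was 3.2x the c-latency in the study

rtt_ms = C_LATENCY_MS * RTT_INFLATION  # what a real round trip costs

# A step costing k round trips shows up as roughly k * 3.2x inflation:
for step, round_trips in [("TCP handshake", 1), ("TCP transfer", 3)]:
    observed_ms = round_trips * rtt_ms
    inflation = observed_ms / C_LATENCY_MS
    print(f"{step}: {round_trips} round trip(s) -> {inflation:.1f}x c-latency")
```

With the infrastructure inflation removed (RTT_INFLATION set to 1.0), the same steps would cost only 1x and 3x the baseline, which is roughly the drop the researchers saw after adjusting for ping time.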

In other words, if the underlying infrastructure latency could be removed, without making any improvements to protocol overhead, the speed of the Internet could be brought down from what is often more than two orders of magnitude slower than the speed of light to just one order of magnitude slower, or less. As the authors wrote in the paper, “inflation at the lower layers plays a big role in Internet latency inflation.”

A cheap and easy speed-of-light Internet

The second part of the paper proposes what turns out to be a relatively cheap and potentially doable solution to bring Internet speeds close to the speed of light for the vast majority of us. The authors propose creating a network that would connect major population centers using microwave networks. Why microwaves? Because microwave networks have already proven to be extremely fast and (somewhat) reliable. For example, microwaves are used to transfer data at nearly the speed of light between financial markets in Chicago and New York City for high frequency trading, where minimal latency is critical, with 95% reliability. Also, other potential solutions, such as hollow fiber and line-of-sight optics, aren’t yet mature enough (or cheap enough) for consideration.

The drawback with microwave is low bandwidth. To get around that, their solution would rely on the microwave network between cities for web and data traffic for which minimal latency is important. Other things for which latency isn't as critical, like video consumption (which is currently 78% of web traffic), could continue to use existing infrastructure, so congestion wouldn’t be an issue. Traditional fiber would be used to bring data to users up to 100km away from the microwave endpoints; even at that distance, the latency introduced by fiber would be minimal.
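The claim that a 100 km fiber tail adds little latency checks out with simple arithmetic. The sketch below assumes light in silica fiber travels at roughly two-thirds of c, i.e. a refractive index of about 1.5; these are standard physics constants, not figures from the paper:

```python
# Rough latency check for the fiber "last 100 km". The 1.5 refractive
# index for silica fiber is a standard assumption, not a figure from
# the paper.
C_VACUUM = 299_792_458        # speed of light in vacuum, m/s
FIBER_SPEED = C_VACUUM / 1.5  # ~2.0e8 m/s in fiber

TAIL_KM = 100                 # fiber distance from microwave endpoint to user
one_way_ms = TAIL_KM * 1000 / FIBER_SPEED * 1000

print(f"One-way latency over {TAIL_KM} km of fiber: {one_way_ms:.2f} ms")
```

At about half a millisecond each way, the fiber tail is negligible next to the tens of milliseconds typical of a cross-country path.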

The authors estimate that the cost of creating a network that would bring near speed-of-light Internet performance to 85% of the U.S. population using microwave repeaters on existing towers would be a mere $253 million in set-up costs and $96 million a year in operational expenses. That's a relatively small investment compared to the billions of dollars currently being spent to lay new fiber optic cables across the Arctic Ocean.

Of course, there are potential issues with such an implementation. For example, getting approval from the FCC to use existing towers for microwave is not a given. Also, some applications are both latency-sensitive and high-bandwidth, so this solution may not work for those at scale. Setting up microwave networks across oceans to expand beyond the U.S. wouldn’t be simple, either.

All in all, though, Singla and his colleagues feel that their proposed solution is not unrealistic.

“We think this setup with two parallel networks — the current fiber backbone which provides huge bandwidth, but higher latency; and a microwave-based network that provides nearly speed-of-light latency, but much lower bandwidth — is very interesting,” he said, “and a plausible way of getting a lot of the benefits of low-latency networking at very little cost.”

They also feel that, whatever the ultimate solution, a speed-of-light Internet isn’t just a pipe dream, but something that we will have someday. “I think this will eventually happen,” Singla told me. “The challenge for us is to make it happen *soon*, for example, getting really close to speed-of-light latencies within a decade, at least within certain geographies.”