Updated: July 5, 2010 (Initial draft: June 9, 2010)

Publications: I. Standalone Articles

I-1.9: Net Neutrality: an economic perspective

Net Neutrality first became news in the United States just after the year 2000. The issue arose when the Federal Communications Commission (FCC) took decisions regarding conflicts between Internet Service Providers (ISPs) and cable operators. The FCC's decisions focused on ensuring non-discriminatory conditions for ISPs' access to the Internet communications market over existing cable networks. These issues became the basis of the debate on Internet users' rights, namely their right to freely access the content and services of their choice. On September 25, 2005, the FCC published a policy statement recognizing these issues and clearly defining four main principles for Network Neutrality.


FRENCH

La Neutralité du Net : une perspective économique

La neutralité du Net est apparue aux États-Unis au tout début des années 2000. Les problèmes ont surgi lorsque la « Federal Communications Commission » (FCC, la commission fédérale en charge des communications) a dû prendre des décisions concernant les conflits entre les fournisseurs d'accès à Internet (ISP) et les opérateurs de réseaux câblés. Les décisions de la FCC étaient axées sur la garantie du caractère non discriminatoire des conditions d'accès des fournisseurs de services au marché des communications sur Internet via les réseaux câblés. Ces questions ont servi de base au débat sur les droits des utilisateurs d'Internet, à savoir leur droit d'accéder librement aux contenus et aux services de leur choix. Le 25 septembre 2005, la FCC a publié une déclaration de principes reconnaissant ces questions et définissant clairement quatre grands principes de la neutralité du Net.

 

GERMAN

Die Netzneutralität: eine wirtschaftliche Perspektive

Die Netzneutralität ist in den Vereinigten Staaten zu Beginn der 2000er Jahre entstanden. Die ersten Probleme traten auf, als die Federal Communications Commission (FCC, die amerikanische Bundeskommunikationsbehörde) Entscheidungen über Konflikte zwischen Internetdienstanbietern und Kabelnetzbetreibern treffen musste. Diese Entscheidungen zielten darauf ab, dass die Internetdienstanbieter über die Kabelnetze ohne Benachteiligung Zugang zum Markt der Internetkommunikation erhalten. Diese Fragen bildeten die Grundlage der Debatte über die Rechte der Internetnutzer, insbesondere ihr Recht auf freien Zugang zu Inhalten und Diensten ihrer Wahl. Am 25. September 2005 hat die FCC eine Grundsatzerklärung zu diesen Fragen veröffentlicht und vier Hauptprinzipien der Netzneutralität dargelegt.

 

GREEK

 

Άρθρο: Ουδετερότητα του Δικτύου: μια οικονομική προοπτική
 
Η ουδετερότητα των δικτύων έγινε για πρώτη φορά γνωστή στις ΗΠΑ αμέσως μετά το 2000. Το ζήτημα ανέκυψε όταν η Ομοσπονδιακή Επιτροπή Τηλεπικοινωνιών (FCC) έλαβε αποφάσεις σχετικά με διαφορές μεταξύ των Παρόχων Υπηρεσιών Internet και των φορέων καλωδιακής τηλεόρασης. Οι αποφάσεις της Ομοσπονδιακής Επιτροπής Τηλεπικοινωνιών έδωσαν έμφαση στη διασφάλιση ύπαρξης μη διακριτικών όρων πρόσβασης των παρόχων υπηρεσιών Διαδικτύου στην αγορά τηλεπικοινωνιών Internet, με τη χρήση των υπαρχόντων καλωδιακών δικτύων. Τα θέματα αυτά αποτέλεσαν τη βάση της συζήτησης αναφορικά με τα δικαιώματα των χρηστών Internet, και συγκεκριμένα του δικαιώματός τους για ελεύθερη πρόσβαση στο περιεχόμενο και τις υπηρεσίες της επιλογής τους. Στις 25 Σεπτεμβρίου 2005, η Ομοσπονδιακή Αρχή Τηλεπικοινωνιών δημοσίευσε μια δήλωση πολιτικής, αναγνωρίζοντας τα θέματα αυτά και ορίζοντας παράλληλα ξεκάθαρα τέσσερις γενικές αρχές σχετικά με την Ουδετερότητα του Δικτύου.


POLISH

 

Neutralność Internetu: perspektywa ekonomiczna

 

Neutralność Internetu pojawiła się jako nowy temat w Stanach Zjednoczonych w 2000 roku. Problemy dały o sobie znać w momencie, kiedy „Federal Communications Commission" (FCC, amerykańska federalna komisja do spraw komunikacji) musiała podjąć decyzje rozwiązujące spory pomiędzy dostawcami dostępu do Internetu (ISP) i operatorami sieci kablowych. Decyzje FCC dotyczyły zagwarantowania niedyskryminujących warunków dostępu dostawców usług do rynku komunikacji internetowej za pośrednictwem istniejących sieci kablowych. Te kwestie stały się podłożem debaty o prawach użytkowników Internetu, mianowicie o prawie do wolnego dostępu do wybranych treści i usług. W dniu 25 września 2005 roku FCC opublikowała deklarację zasad uznającą te kwestie i określającą przy tym wyraźnie cztery główne zasady neutralności Internetu.
 
 
SPANISH
 
La neutralidad de la red: una perspectiva económica

La neutralidad de la red pasó a primer plano por primera vez en los EE. UU. justo después del año 2000. El tema surgió cuando la “Federal Communications Commission” (FCC, la Comisión Federal de Comunicaciones estadounidense) tomó decisiones relativas a los conflictos entre los proveedores de servicios de Internet (ISP) y los operadores de cable. Las decisiones de la FCC tenían el objetivo de asegurar condiciones no discriminatorias para el acceso de los ISP al mercado de comunicaciones de Internet a través de las redes de cable existentes. Estos temas se convirtieron en la base del debate sobre los derechos de los usuarios de Internet, especialmente su derecho a acceder libremente a los contenidos y servicios de su elección. El 25 de septiembre de 2005, la FCC publicó una declaración de principios que reconoce estas cuestiones y que define claramente cuatro principios esenciales de la neutralidad de la red.

 

Attachments

 

In 2008, Comcast[1] blocked its subscribers' peer-to-peer file exchanges. Comcast's action targeted BitTorrent traffic and was carried out by injecting TCP RST (reset) packets. The traffic consisted primarily of file exchanges, for the most part pirated videos. Comcast felt that these bandwidth-greedy file exchanges were saturating its networks, with negative repercussions for all subscribers. To ensure better service quality, Comcast believed it was authorized to block the file exchanges without informing its subscribers. The decision by Comcast, America's leading ISP, to curtail user access to the Web, services and content without informing users placed network neutrality in the media limelight. The FCC consequently became the recognized guarantor of the net neutrality principles it had laid out in 2005. As such, the FCC told Comcast it had 30 days to put an end to BitTorrent filtering. In April 2010, the Federal Appeals Court of the District of Columbia annulled the FCC's decision, stating that the FCC did not have the legal authority to intervene in an operator's traffic management. Today, the situation probably calls for a law to govern Net Neutrality.
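For readers who want to see what this kind of interference looks like at the packet level, the following is a minimal sketch, not Comcast's actual tooling: it scans a packet capture and flags TCP flows that were terminated by a RST rather than by a normal FIN close. The capture file name and the scapy-based approach are illustrative assumptions.

```python
# Minimal sketch: spot TCP flows that end with RST (reset) packets instead of
# a normal FIN close. "capture.pcap" is an illustrative file name.
from collections import defaultdict
from scapy.all import rdpcap, IP, TCP

FIN, RST = 0x01, 0x04  # TCP flag bit masks

flows = defaultdict(lambda: {"packets": 0, "fin": 0, "rst": 0})

for pkt in rdpcap("capture.pcap"):
    if IP in pkt and TCP in pkt:
        # One key per connection, regardless of packet direction.
        key = tuple(sorted([(pkt[IP].src, pkt[TCP].sport),
                            (pkt[IP].dst, pkt[TCP].dport)]))
        flags = int(pkt[TCP].flags)
        flows[key]["packets"] += 1
        flows[key]["fin"] += bool(flags & FIN)
        flows[key]["rst"] += bool(flags & RST)

# A flow that only ever closes with resets is a candidate for injected RSTs,
# the mechanism reportedly used to cut off BitTorrent sessions.
for key, s in flows.items():
    if s["rst"] and not s["fin"]:
        print(f"{key}: {s['packets']} packets, {s['rst']} RST, 0 FIN -> possible forced reset")
```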

Europe will quite likely follow another road, but the end result will address the same issues as in the United States. Net neutrality was first referred to in Europe in 2009 with the adoption of the new electronic communications directives. While the directives do not explicitly use the term, the European Parliament, in adopting them, recognized Internet access as an essential right for European Union citizens and as a means to guarantee the preservation of public liberties such as the right to information. On November 25, 2009, the European Commission published a declaration stating that Net Neutrality was a political and regulatory objective to be fully integrated in the implementation of the new electronic communications directives by all member states.

Concomitantly, Europe's key telecommunications companies (Deutsche Telekom, Orange, Telefónica) declared that they would ask major Web service companies to contribute to infrastructure financing. The telecommunications companies felt that their networks were quickly becoming saturated because of the massive rise in video traffic. They therefore wanted to require Web service companies to contribute to financing infrastructure as compensation for the videos they offer over the Web. Moreover, as some Web service companies were posting higher profit margins than the telecommunications companies themselves, the request seemed all the more legitimate: in short, risk and value were not equally shared. Google was explicitly targeted.

In December 2009, an event in America further bolstered the European operators' position. AT&T, America's number two mobile telecommunications company, had to shut down its mobile network for more than one day in greater New York. The shutdown was due to congested mobile networks linked to an overwhelming rise in mobile Internet connections. The Apple iPhone, exclusively distributed in the United States by AT&T, was cited as the main culprit. The following observation can be made: saturated networks may be more of a mobile problem than a landline one.

Regardless of the outcome, asking Web-based companies (applications and services) to contribute financially to telecommunications companies' infrastructure in exchange for better-quality networks provoked an outcry. Many observers felt that the principle of offering equal access to all Web-based companies was not being respected, and that such arrangements would lead to greater discrimination between companies and services based on whether or not they were controlled by ISPs. This consequently called into question the Net Neutrality principle, and even the very essence of the Internet.

Moreover, the principles guaranteed by the FCC have neither clarified the concept nor brought players together on common ground, but have instead instigated controversy.

This article assesses Net Neutrality from an economic viewpoint, the crux being potential network congestion. To ensure that networks are not impacted, management priorities must be defined for traffic, services and/or access to networks. This also raises the question of financing the infrastructure required to enhance current capacity. To position the debate, we will start by reviewing the net neutrality principles. These principles can be viewed in the same vein as the tiered services which ISPs already implement, apparently problem-free.

The next section focuses on how closely the American and European Net Neutrality approaches parallel each other, despite different legal and regulatory frameworks.

How American and European outlooks converge

The Federal Communications Commission defined net neutrality as a set of six key principles, four of which were defined in 2005 and an additional two in 2009:

 

1- Freedom to access content: “consumers are entitled to access the lawful Internet content of their choice”.

2- Freedom to use applications: “consumers are entitled to run applications and use services of their choice, subject to the needs of law enforcement”;

3- Freedom to choose any device: “consumers are entitled to connect their choice of legal devices that do not harm the network”.

4- Freedom to benefit from competition: “consumers are entitled to competition among network providers, application and service providers, and content providers”;

5- Non-discrimination principle: lawful content, applications and services may not be discriminated against;

6- Transparency principle: consumers are entitled to clear information about their service plans and about the network management practices of their access providers.

 

 

 

As stated in the introduction, the Federal Appeals Court of the District of Columbia contested the FCC's legal authority regarding Comcast. With the Court deeming that the FCC's actions were legally inappropriate, Congress may consequently enact a law in 2010 to make net neutrality an FCC prerogative. Such legislative action would also outline net neutrality's legal concept and regulatory mechanisms.

 

Within the European Community, net neutrality was included in the electronic communications directive. Five directives were adopted in 2002[2] and reviewed in 2009[3], with a final revision on November 25, 2009. The European Commission published its views on net neutrality, and the passage is worth citing: “the Commission attaches high importance to preserving the open and neutral character of the Internet, taking full account of the will of the co-legislators now to enshrine net neutrality as a policy objective and regulatory principle to be promoted by national regulatory authorities (1) alongside the strengthening of related transparency requirements (2) and the creation of safeguard powers for national regulatory authorities to prevent the degradation of services and the hindering or slowing down of traffic over public networks (3)” (underlined by the author).

 

Member states must transpose the new directives into national law before May 25, 2011. Transposing the new directives will be the occasion for national debates on net neutrality across Europe and will, as usual, be a heroic feat of ensuring that the 27 member states' national policies converge, all the more so given the differences in their economies, development levels and consumption patterns. The Commission has committed to following this closely: “The Commission will monitor closely the implementation of these provisions in the Member States, introducing a particular focus on how the “net freedoms” of European citizens are being safeguarded in its annual Progress Report to the European Parliament and the Council. In the meantime, the Commission will monitor the impact of market and technological developments on “net freedoms” reporting to the European Parliament and Council before the end of 2010 on whether additional guidance is required, and will invoke its existing competition law powers to deal with any anti-competitive practices that may emerge.”

 

The legal calendar is consequently very similar on both sides of the Atlantic.

 

A closer look shows that the European and American stances converge on the main issues. Indeed, the new framework directive's Internet freedom provision states in Article 8: “Measures taken by Member States regarding end-users' access to, or use of, services and applications”. These points can very explicitly be linked to the FCC's first and second principles regarding content and applications.

 

Articles 20 and 21 of the Universal Service Directive ensure that consumer contracts must specify, among other things, which services subscribers buy and, in particular, what they can or cannot do with those communications services, including the traffic management techniques applied and their impact on service quality, so that subscribers are informed about the nature of the service to which they are subscribing. Where national frameworks allow it, national telecom authorities will also have the power to set minimum quality levels for network transmission services. Once again we see a mirroring between the European Union texts and the FCC's principles 3, 4 and 6.

 

Finally, when the Commission states that it will use its competition law powers against any anti-competitive practices, this generally refers to abusive discrimination practices. Once again, the European texts parallel the FCC's non-discrimination principle and also address Europe's Internet regulation issues.

 

So, both the FCC and the European texts converge.

 

From discrimination to differentiation 

 

It is clear that the principles as they have been written do not hinder reasonable network management practices. The FCC explicitly states this point in its texts, whereas the above-mentioned articles 20 and 21 acknowledge the potential existence of these practices and, at this stage, require that consumers receive better and more transparent information. Network operators may thus be deemed to be practicing reasonable network management when they seek to reduce network congestion, offer tiered service quality, limit traffic which can damage equipment, networks or other users' accounts, or forbid the transfer and exchange of illicit content.

 

As such, if these practices are acceptable, tiered service quality, traffic routing rules and the like are possible on the basis that they are objective, transparent and non-discriminatory. It is, therefore, important to talk about discrimination, since common language and legal usage differ from the economic meaning, which can be confusing.

 

In legal terms, and generally in common language, discrimination designates a practice which treats buyers of the same good or service differently even though they present the same usage and consumption characteristics. This type of practice is generally forbidden. In economic terms, this practice is referred to as first-degree discrimination and is considered harmful to both consumers and welfare.

 

Two buyers of the same good or service with different consumption characteristics can, however, be offered different conditions. A client, for example, can benefit from a lower unit price if the purchase involves large quantities. As a result, the unit sales price for the large-quantity buyer will be lower than for the client buying a smaller quantity. In a legal context, this differentiation between buyers amounts to a quantity-based discount, a recognized and accepted practice. In economics, the term second-degree price discrimination is often used to qualify this type of differentiation, and the term is, therefore, not necessarily negative. Second-degree discrimination, or differentiation, can enhance consumer welfare, which explains why these practices are not deemed illegal per se.

 

Nevertheless, in some circumstances, differentiation practices can have a negative impact on competition and/or consumers: differentiation then becomes discriminatory, and can be sanctioned (competition law) or forbidden (regulation). The difference between differentiation and discrimination does not lie in the nature of these practices, but in the degree to which they are implemented. If differentiation is authorized and discrimination is prohibited, when does a differentiation practice become discriminatory? Economic analysis views the answer as straightforward: it depends on the context, the intentions, the players and their positions, and so on.

 

Regarding the subject germane to us, net neutrality, the difficulty lies in the uncertainty of qualifying discriminatory practices ex ante when they stem from a differentiation framework. While telecommunications operators can be authorized to differentiate clients, services or applications to optimize network management, does such reasonable differentiation pave the way to discrimination? Of course, the fair economic “equilibrium” is the one which maximizes welfare. This equilibrium lies between total laissez-faire regarding differentiation, a position which a number of leading telecommunications companies have adopted[4], and a complete ban on differentiation, a position fully backed by the associations representing Web-based service companies and their users.

 

Congested networks, be they fixed or mobile, are the origin and therefore the crux of net neutrality. This problem surfaced recently and is likely to become more and more crucial. The core of the problem lies in the cost of additional network capacity: while this cost has tumbled over the last 20 years, the decline may no longer keep pace with the exponential rise in demand, driven mainly by video viewing. The alternative seems to be either rising network congestion (poor service) or a higher cost of network use to finance additional investment, passed on to consumers and/or application, service or content suppliers.

 

This raises a number of questions: how are networks currently managed? How can these management processes be changed? How can network congestion be evaluated, at the market level and at the operator level? Who assesses network congestion? Do all networks (fixed, mobile and cable) face the same congestion problems? What does reasonable network management mean in terms of reducing or absorbing congestion? What type of differentiation is allowed, and to what extent is differentiation acceptable in competitive terms? What about in terms of access to information and means of expression? What is the logic of, and scope for, having application and/or content providers contribute financially to the infrastructure as a means to alleviate network congestion?

 

The next part of this article will try to answer some of these questions.

 

Positive differentiation in quality and accessibility of Internet services

 

Overall, telecommunications companies and Internet Service Providers have always managed their own networks.

 

In layman's terms, and from the beginning of the Internet, or open Internet, network management has meant that operators reserve available network capacity for this service. This available capacity is shared among all users accessing the service on a non-discriminatory basis (devoid of priority access). The operator never guarantees access quality to a server or a subscriber on the Web. The operator can only commit to a best-effort network quality when connecting the subscriber to the requested service. This best-effort network management rule is satisfactory as long as available capacity can easily meet demand. Subscribers may, of course, occasionally notice slowdowns of varying degrees when receiving or sending information (at peak times), with no apparent impact on Internet use. But when network capacity cannot meet demand, service degradation becomes generalized (all subscribers are impacted) and may last long enough for Internet subscribers to reconsider using the Internet, especially those who are the most sensitive to consistent service quality.

 

To reduce the effects of network congestion for services or subscribers, operators manage their own networks.  

 

Network operators must respect network management rules which include obligations to provide specific services, such as emergency calls, security, etc. These calls or specific services are considered priority services and must therefore meet specific requirements. In the same way that a physical highway has emergency lanes reserved for priority vehicles, telecommunications networks have specific obligations for priority calls such as those to the police or fire stations. These obligations mean that such calls are ensured priority handling over all other network traffic, and they are consequently called special services. Beyond these obligations, telecommunications operators offer a large number of specialized services to meet clients' specific requirements in terms of network reliability, security, availability or quality of service. These special services are dedicated to banks, telemedicine, or to corporate virtual private networks. Many such services use Internet networks without following the open Internet's best-effort management rules. Obviously, maintaining this superior quality entails a cost, and these services are sold to subscribers or service providers at higher prices than the ordinary best-effort Internet service sold to the average customer. To differentiate these quality levels, whether required or desired, the telecommunications company must ensure tiered service handling, with different types of subscribers paying different rates.

Operators already implement this differentiation in their network management procedures, and for the time being, unless proven otherwise, this differentiation shows no characteristics of discrimination. Advocates of the open Internet do not directly contest these tiered service practices, since they quite logically fall outside the open Internet. What is contested is that these practices could be extended to applications, services or uses which are currently handled under open Internet rules. If this were the case, the open Internet would shrink, both in the network capacity devoted to it and, by extension, in its use. Moreover, services which until now were only available via the open Internet could be delivered via the capacity reserved for special services and would benefit from special treatment compared to equivalent services accessible only through the open Internet. Is this discrimination? Or an act of unfair competition, which in fine harms consumer welfare? It does not appear so.

A subscriber buying a broadband Internet plan, or triple play service, is essentially buying three distinct client services: telephone communication, open Internet access, and television broadcasting. Without really knowing it, the subscriber accesses three different tiered quality levels. In this instance, quality means that the operator has assigned a different management priority to each service in case of congestion, as illustrated by the sketch that follows this list.

  • In this regard, the telephone service offers the best quality since this service has priority over other services and across the entire network. The goal is to ensure that telephone conversations are not interrupted by network glitches. While the telephone continues to be the main service used by clients, operators must also guarantee the routing of special service numbers.

  • Customers' demand for continuous service also applies to TV, which therefore requires a certain quality level. To ensure that a client can watch TV without unsolicited interruptions, operators set aside network capacity for each channel broadcast to the subscriber. On the local loop (the copper wire between the last network node and the subscriber's outlet), a specific channel is reserved for broadcasting to the client while s/he watches TV.

  • The open Internet service uses the remaining available bandwidth. This is the only service for which information sent respects the “best effort” rule. Available capacity for the open Internet is maximized if other services are not used, and minimized if other services require bandwidth.
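The strict priority order described in this list can be pictured with a small scheduling sketch. The class names, capacities and demand figures below are illustrative assumptions, not an operator's actual configuration: the link is drained in priority order, and the open Internet simply receives whatever capacity is left.

```python
# Illustrative strict-priority allocation over one congested link (Mbps).
# All figures are assumptions chosen for the example.
LINK_CAPACITY = 20.0

demands = [
    ("voice (TOIP)",  0.5),   # highest priority: conversations must not drop
    ("TV broadcast", 12.0),   # capacity reserved while the subscriber watches
    ("open Internet", 15.0),  # best effort: gets only the remaining capacity
]

def allocate(capacity, demands):
    """Serve each class as fully as possible, in strict priority order."""
    remaining, result = capacity, []
    for name, wanted in demands:
        granted = min(wanted, remaining)
        remaining -= granted
        result.append((name, wanted, granted))
    return result

for name, wanted, granted in allocate(LINK_CAPACITY, demands):
    print(f"{name:14s} asked {wanted:5.1f} Mbps, got {granted:5.1f} Mbps")
# Voice and TV are served in full; the open Internet absorbs the shortfall
# (7.5 of the 15 Mbps requested here), which is exactly 'best effort' behaviour.
```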

 

Managing service priority is part of the operator's obligation to the triple play subscriber. In practice it means that there are different service qualities between the services “managed” by the operator and the services available over the open Internet. Open Internet platforms offer telephone services, such as Skype, referred to as VOIP (Voice over Internet Protocol). VOIP competes head-on with the telephone service included in the operator's triple play offer, called TOIP (Telephony over Internet Protocol) to distinguish it from VOIP. While VOIP and TOIP are sold at the same price, TOIP quality and usability are much better than VOIP. When TOIP is available, as is the case in France, clients prefer TOIP and VOIP is marginalized. Moreover, since TOIP uses the fixed line, fixed-line use rises: in the French market, fixed-line traffic has grown with the advent of triple play and TOIP. Inversely, in countries where TOIP is not included in broadband Internet plans, VOIP gains ground and mobile use overtakes fixed-line use, which continues to wane. TOIP consequently appears to favor consumer welfare and the economy at large. So, in economic terms, the existing quality differentiation between TOIP and VOIP in triple play offers is neither excessive nor discriminatory, since the positive effects of this practice outweigh the negative ones.

 

So far, service quality differentiation has concerned open Internet services versus other Internet services managed by operators. But even open Internet services have tiered service quality. A provider of services over the open Internet must buy access from a telecommunications operator so that the server hosting the service can be reached by Internet surfers worldwide. And the more bandwidth the service provider buys, the greater the number of simultaneous connections available and/or the faster the service's pages and files load on the user's computer. The capacity to access a site also depends on the provider's capacity to buy Content Delivery Network (CDN) services (Akamai, for example). These services keep content close to Internet subscribers by deploying dedicated servers that store content near them, thereby ensuring enhanced access quality. Web service providers, notably the Web's powerhouses like Google, rely heavily on CDNs. This use leads to differentiation in accessibility, and in the quality perceived by Web surfers, akin to premium services based on the provider's capacity to pay for specific network services. Of course, the open Internet service providers which can pay for these services ensure that their offerings are better delivered than those of direct competitors whose pockets are not as deep, and who therefore cannot buy these quality delivery services. Google services, such as Google Earth and Google Maps, are successful partly because of the speed at which they load for all Web surfers. This differentiation allows players to set themselves apart across the open Internet and, for some, to avoid the undesirable effects of congestion which may arise on a given network link.
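A rough back-of-the-envelope model makes the CDN effect concrete. The figures below are illustrative assumptions, not Akamai or Google measurements: the sketch compares the time needed to fetch a page made of many small objects from a distant origin server and from an edge cache located near the subscriber.

```python
# Toy model of the CDN benefit: fetch time grows with round-trip time (RTT).
# All figures are illustrative assumptions.
def fetch_time_ms(rtt_ms, objects, round_trips_per_object=2):
    """Very rough: each object costs a couple of RTTs (request plus response)."""
    return rtt_ms * round_trips_per_object * objects

ORIGIN_RTT_MS = 150   # distant origin server
EDGE_RTT_MS = 15      # CDN edge node close to the subscriber
OBJECTS = 40          # images, scripts and style sheets making up one page

print("from origin server:", fetch_time_ms(ORIGIN_RTT_MS, OBJECTS), "ms")
print("from edge cache   :", fetch_time_ms(EDGE_RTT_MS, OBJECTS), "ms")
# With these assumptions the edge cache is ten times faster, which is why
# providers able to pay for CDN capacity appear 'closer' to every surfer.
```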

 

Differentiation and tiered management practices are thus part of the Internet's daily operation. Net neutrality issues have surfaced in recent debates because differentiation can be exacerbated in a context of growing congestion.

 

Congestion: a structural problem when additional network investment has negative profitability

 

The possible prioritization of traffic flows across a network surfaces when demand is greater than the network's capacity, in short, when there is network congestion. When there are no congestion problems, the original open Internet management, based on best-effort logic, prospers without constraints or limits.

Generally speaking, demand for network capacity rises exponentially, due to ever-growing video consumption. For clients, greater available network capacity is mandatory to maintain current Internet quality. Enhanced capacity needs require additional investments.

 

The first question to be addressed concerns the return on additional investments. Over the last thirty years, telecommunications network investments have consistently paid off. Traffic volumes of all types (voice, text, sound, images) exploded, pushed by technical progress, while the unit price of transported information plummeted. Network capacity has kept rising, with investment providing excellent returns. Can this dynamic end? In short, will carrying information across telecommunications networks remain profitable as the need for capacity keeps growing? Another question surfaces: where in the network is profitability lowest, in the network's core or in the local loop? It is likely that it is along the network elements with negative profitability that structural trade-offs will arise, requiring a choice between additional capacity investment and exacerbated congestion.

 

First, we need to assess objectively whether this unrelenting demand for bandwidth capacity is accompanied by a higher average cost of access to, and use of, telecommunications networks over the long term. If costs rise, we need to estimate whether and how this will affect retail Internet plans, especially in light of competition among Internet Service Providers. At present, these questions have not been clearly addressed. Of course, telecommunications operators explain that greater bandwidth demand means higher capacity needs across their networks, which entails higher investment. While this position is clear, it does not state whether the profitability of additional bandwidth investments will be positive or negative. In economic terms, the problem only arises when the return on additional investment is negative. In that specific instance, network congestion becomes structural rather than circumstantial, and pertinent economic answers are required. The first of these concerns increased differentiation practices, in particular the search for additional revenues so far neglected by operators (one thinks here of possible contributions from providers of services over the Internet).

 

Given how critical it is to evaluate objectively the reality of congestion and the negative profitability of additional bandwidth capacity, only regulatory authorities can assess the situation in a transparent and contestable manner and provide the guarantees required for the diagnosis. Indeed, the asymmetry of information between operators and other Internet players regarding real network capacity and the costs to be borne is crucial, and the prospect of imbalances being revealed by operators alone underlines how sensitive this issue is.

 

It is important to note that the imbalances could differ between fixed and wireless networks. In wireless networks, the local loop is shared among users and spectrum, as a resource, is intrinsically limited. This implies that the wireless local loop may bear the brunt of the economic consequences of a surge in bandwidth needs stemming from growing Internet use on mobile phones. This point, however, must also be the subject of objective evaluation, especially in light of very fast technical evolution (the arrival of the fourth mobile generation, LTE, and the drastic drop in equipment costs).

 

Supposing that the profitability of additional capacity investments is negative, network operators have no incentive to invest spontaneously. When growing demand meets limited available capacity, structural congestion must be managed through notable differentiation, all the more so when congestion levels are high. When new differentiation practices are implemented, open Internet management principles can be overstepped and ISPs may be tempted to adopt practices with anti-competitive effects.

 

Congestion: how to manage it?

 

In this context, congestion means not being able to deliver the service quality that the end client expects. Service quality refers to the time required to deliver information relative to the client's needs. Since service quality depends on the end client's needs and perception, it is subjective. Depending on the type of service and its use, acceptable quality can be lower or higher. For example, an end client whose telephone conversation or live television show is interrupted will view the service as severely degraded. In the same vein, while end clients accept some slowdown in loading Internet pages or downloading videos from peer-to-peer sites, the accepted slowdown level varies from client to client. Overall, some services are more sensitive to congestion than others. This explains why an ISP will try to manage its own network traffic and favor services which are sensitive to congestion (telephony or broadcasting) over services which are less quality-sensitive (consulting sites over the open Internet). Differentiated service handling is therefore positive, since it increases collective welfare.

 

In economic terms, congestion can be managed in two very different ways. The ISP can manage traffic priorities based on what it believes end clients want: it gives priority to quality-sensitive services and either slows or blocks services which are not quality-sensitive. In short, congestion is managed by rationing quantities. This solution does not maximize collective welfare. Since the end of the 19th century, price rationing has been preferred over quantity rationing: the end client with the greatest need for quality pays the highest price, or the service provider sets higher prices for higher quality levels.
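A tiny numerical example, with invented willingness-to-pay figures, illustrates why price rationing tends to dominate quantity rationing when only part of the demand for priority treatment can be served.

```python
# Toy welfare comparison: 4 units of priority capacity, 6 users who each want 1.
# Willingness-to-pay values are invented for the illustration.
import random

wtp = [10, 8, 7, 5, 3, 2]   # value each user places on priority treatment
CAPACITY = 4

# Price rationing: the price settles so that only the 4 highest-value users buy.
price_rationing_welfare = sum(sorted(wtp, reverse=True)[:CAPACITY])

# Quantity rationing: the ISP (or chance) picks 4 users without knowing values.
random.seed(0)
draws = [sum(random.sample(wtp, CAPACITY)) for _ in range(10_000)]
quantity_rationing_welfare = sum(draws) / len(draws)

print("price rationing   :", price_rationing_welfare)                 # 30
print("quantity rationing:", round(quantity_rationing_welfare, 1))    # about 23.3
# Serving the users who value quality most yields the larger total surplus.
```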

 

When ISPs offer tariff plans which differ from client to client, they are merely applying tried and tested transportation tariff schemes. Plane or train tickets are not homogenously priced by date of purchase, and prices are higher when demand is high, before long weekends, vacations, etc. French highways practice the same time-based tariffs, set to alleviate heavy end-of-weekend traffic. The same is true of the city of London, which introduced a congestion charge for drivers entering the city centre. Other cities have opted for a different type of rationing by setting up alternate circulation based on even and odd license plates. Managing congestion is a very common issue in fields where the goal is to optimize capacity or to manage peaks in demand: electricity, postal services, hotels, etc.

 

In this perspective, consumers, who are the cause of network congestion, take responsibility for their consumption and purchasing habits. We can take the analogy further: neither the vacuum cleaner manufacturer nor the washing machine manufacturer finances electricity grids or power plants, and in the automotive field, neither oil companies nor insurance companies finance highways. Clients determine infrastructure use and bear the financial consequences of congestion management, even though goods and service suppliers benefit directly from the infrastructure use their products entail. Using infrastructure whose cost is mutualized across clients, service suppliers constantly differentiate the quality levels they make available: speed, comfort, delays, guaranteed always-on access and so on. Clients, therefore, pay for quality.

 

Bearing in mind the above-mentioned conditions for objectively measuring congestion and the incremental cost of traffic on electronic communications networks, the same logic could be applied to Internet services. Regulators must, therefore, ensure that ISP offers are transparent and respect the European directives, especially given the infinite variety of possible tariff plans and the boundless marketing creativity of ISPs. Could tariff plans be differentiated to guarantee access quality to some services and not others (video streaming, for example)? While the idea seems feasible for managed services, is it true for access to services in an open Internet environment? Charging for Internet access by the hour, or with prices based on downloaded volumes, could also become an element of Internet tariff plans (volumes which the user already has a hard time controlling: the “bill shock” problem).

 

Instead of focusing on their end clients, ISPs could offer different service qualities with variable access tariffs to content and application providers, especially since the latter would be in a position to pay to reach Internet prospects. In this view, the ISP is like a shopping mall: it considers that its own network provides the infrastructure (the mall's commercial space) that attracts the prospects, while Web content and application suppliers benefit from this environment and pay to do so (rent). This position is upheld by Europe's key ISPs, as stated in the introduction. The vision is short-term and not necessarily optimal, for two reasons.

 

First, Web-based services and applications have always paid to access networks. The rise in the amount of bandwidth used by a service or an application is due to greater demand from Web surfers. This means that service or application suppliers must invest in additional capacity and pay network operators; otherwise, service quality will be degraded, Web surfers will not be able to use the service or application as expected, and the very livelihood of the service will be threatened.

Second, the history of tariff policy and network quality shows that Web-based service providers have enjoyed easy Web access with very low barriers to entry. Any actor, no matter how tiny, can obtain quasi-equal access to the World Wide Web's surfer population. Today's global Web stars took their first steps with very limited network means. The ease and equality of access to network resources mean that all players, big or small, are under constant pressure to innovate and compete, especially application providers (including powerhouses like Google, eBay, Facebook, etc.). In short, Web surfers push Web-based service and application providers to innovate constantly, which in turn benefits the network operators.

If network access becomes a paying service for providers, it is likely to become more limited and traffic to servers will drop, regardless of the type of payment adopted (even if billed as data call termination). Such tariff barriers not only encourage players to sign exclusivity deals between network operators, service providers, content providers and content delivery providers; they also encourage vertical integration.

Implementing regulation based on positive tariff differentiation could counter these negative effects by making service providers pay in proportion to the services which are the most congested. Is this feasible? Can it be implemented? We do not think so. Unfortunately, and contrary to what is all too often advanced as a plausible solution, service-based congestion cannot be measured via average bandwidth (YouTube is an oft-cited example). The bandwidth which needs to be accounted for is the bandwidth used at the time of congestion (at peak times). In other terms, positive tariff differentiation does not mean charging a steep price to a passenger who usually takes the train at off-peak times; it means making the passenger who takes the crowded peak-time train pay for it, even if only once a year. On the World Wide Web, this means making the service or application provider pay for the share of congestion it is responsible for at the time of congestion or peak load. Given the constantly shifting patterns of service use and consumption, we feel this is difficult to implement objectively and transparently. If these two conditions, objectivity and transparency, are not met, the positive effects of differentiation (improvement of collective welfare) may very well become negative (anti-competitive effects).
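The difference between average and peak-time usage can be made concrete with a short sketch. The hourly traffic figures below are invented for illustration only: provider A dominates the day's total traffic, yet provider B accounts for most of the load at the congested peak hour, which is the share that would matter under the positive tariff differentiation logic described above.

```python
# Hourly traffic (arbitrary units) for two hypothetical providers over one day.
# The figures are invented purely to contrast average and peak-time shares.
traffic_a = [8] * 24                        # steady all day (background traffic)
traffic_b = [1] * 20 + [30, 30, 30, 30]     # quiet, then a large evening spike

total = [a + b for a, b in zip(traffic_a, traffic_b)]
peak_hour = max(range(24), key=lambda h: total[h])

share_avg_a = sum(traffic_a) / (sum(traffic_a) + sum(traffic_b))
share_peak_a = traffic_a[peak_hour] / total[peak_hour]

print(f"provider A: {share_avg_a:.0%} of daily traffic, "
      f"{share_peak_a:.0%} of traffic at the peak hour")
# A carries most of the day's bytes but only a small share of the peak load,
# so charging by average volume would target the wrong party.
```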

 

The remaining solution, the one Europe's key ISPs are de facto considering, is to reduce congestion by making both responsible parties pay: Web surfers on one side, and service or application providers on the other. An operator's network can be viewed as a platform which simultaneously operates across two distinct and competitive markets: one represented by end clients, the other by service and application providers. Equilibrium prices on two such interdependent markets, or two-sided markets, can be set to maximize public welfare, and consequently to develop network capacity and service diversity at a minimal cost for the end client. In short, the market segment where demand is less price-sensitive can finance the market segment where demand is price-sensitive.
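A stylized two-sided pricing exercise, under entirely assumed demand curves and costs, shows how this logic plays out: the platform ends up charging the less price-sensitive side (service and application providers) most of the bill while keeping the price-sensitive side (subscribers) almost free.

```python
# Toy two-sided platform: an ISP sets a price for subscribers (side S) and a
# price for content/application providers (side C). Demand curves, the cost
# figure and the profit rule are all assumptions made for this illustration.
def q_subscribers(p):   # very price-sensitive side
    return max(0.0, 100 - 2.0 * p)

def q_providers(p):     # much less price-sensitive side
    return max(0.0, 100 - 0.5 * p)

COST = 10.0  # assumed platform cost per unit of interaction

def profit(p_s, p_c):
    # Interactions grow with participation on both sides (usage externality).
    return (p_s + p_c - COST) * q_subscribers(p_s) * q_providers(p_c)

best = max(((p_s, p_c) for p_s in range(201) for p_c in range(201)),
           key=lambda prices: profit(*prices))
print("subscriber price:", best[0], "  provider price:", best[1])
# Under these assumptions the search lands at (0, 105): the price-sensitive
# subscriber side is served for free while the less sensitive provider side
# finances the platform, the logic invoked in the paragraph above.
```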

 

Here too, controls will have to be implemented. On two-sided markets, the frontier between optimal and abusive behavior is both narrow and difficult to discern, all the more so when the assessment is carried out ex post by a competition authority. Does this mean we should intervene ex ante?

 

Is regulation required?

 

Although the ISP manages congestion over its own network, it is not necessarily responsible for end-to-end quality. Information moves from one point of the network to another, and the speed at which it is transmitted is determined by the smallest bandwidth of all the networks crossed. The emitter of the information can be located on a network other than the client's ISP, meaning that quality for the end client may be degraded without this being the ISP's responsibility. Quality may be impacted by upstream networks, or more directly by a site which has not rented enough bandwidth capacity to support a sufficient number of simultaneous connections (many sites fail to provision enough simultaneous connections at launch, especially when there has been much pre-launch buzz; this seems to apply particularly to institutional sites). Designating the network responsible for the congestion is therefore difficult, as is absorbing that congestion for any single ISP.
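The "smallest bandwidth of all the networks crossed" point reduces to a one-line calculation. The link capacities below are invented for illustration; here the bottleneck sits upstream, at the content server, and not on the subscriber's ISP network.

```python
# End-to-end throughput is bounded by the narrowest link on the path.
# Capacities (Mbps) are invented for the illustration.
path = {
    "content server uplink": 50,        # the site rented limited capacity
    "hosting provider backbone": 10_000,
    "transit network": 2_500,
    "ISP core network": 1_000,
    "subscriber local loop": 100,
}
bottleneck = min(path, key=path.get)
print(f"end-to-end throughput <= {path[bottleneck]} Mbps, limited by the {bottleneck}")
# The degraded quality perceived by the subscriber originates upstream here,
# even though the subscriber will most likely blame his or her ISP.
```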

 

The need for highly differentiated quality, which varies with services and clients, together with the unknown origin of degraded quality, creates asymmetric information between the ISP and its clients. Given these circumstances, economic analysis shows that laissez-faire does not maximize welfare: quality degrades at the clients' expense, and the offer itself is eventually affected. Only standardized information and transparency imposed by a third-party regulator could restore an equilibrium that maximizes welfare. The recent European directives on electronic communications uphold this viewpoint, especially regarding universal service and contracts, potentially granting national regulatory authorities all the powers and means to impose this transparency on quality, a crucial point for establishing effective competition between ISPs.

 

To uphold the quality offered to clients in the context described above, ISPs could be induced to manage their networks more tightly. This could also lead them to differentiate service quality further, depending on whether or not services are managed. An ISP, for example, can offer a quality VOD service with a high-definition film catalogue, which may soon include 3D movies as well, with fast and guaranteed download. The ISP will treat this download as a priority service, thereby guaranteeing subscribers the promised quality of service levels. To guarantee service quality, the ISP must have end-to-end control, which means an agreement with the server which emits the signal. Multiplying such agreements is not feasible, since it would mean reserving major bandwidth capacity for managed services, which can only be done at the expense of the bandwidth available for the open Internet. Encroaching on open Internet bandwidth means that subscribers to other services would see a drop in the quality they also expect. Finally, requiring perfect transparency in terms of quality could become a means to legitimize exclusive agreements between infrastructure operators and content suppliers. What would this mean in terms of competition between a VOD service included in an ISP's managed services with guaranteed quality, and the same VOD service accessed via the open Internet under a mere best-effort rule? This scenario would only be plausible if bandwidth were limited and incremental bandwidth yielded decreasing returns.

 

Conclusion

The Net Neutrality debate has at least the merit of shedding light on ISPs' current network management rules and of bringing more transparency to what constitutes good practice and what behavior should be sanctioned. With a few rare exceptions, today's self-regulated practices have worked thus far and have not produced any major dysfunction, at least in Europe. There are two reasons for this. First, additional network capacity has so far absorbed demand at falling cost. Second, the dynamics of the Internet community provide a strong safeguard against major deviance. Looking forward, where will problems arise? They will arise if the growth of available bandwidth does not keep up with demand. The scarcity of a good, in this instance bandwidth, will give infrastructure providers more clout over the global community of network users (Web surfers, content and application providers). Today's debate is about avoiding such a power struggle between demand and supply, which would give ISPs the opportunity to entrench detrimental behavior. And even if bad behavior can be identified, as we have shown in answering the questions raised in this article, controlling the ISPs will be a true challenge unless very strict and methodical regulation is implemented, severely sanctioning bad behavior. And yet such an approach runs counter to the Internet's very nature.

 

 


 


[1] America’s leading ISP and cable provider

[2] The five directives comprising the existing EU regulatory framework for electronic communications networks and services: Directive 2002/21/EC (Framework Directive), Directive 2002/19/EC (Access Directive), Directive 2002/20/EC (Authorisation Directive), Directive 2002/22/EC of the European Parliament and of the Council of 7 March 2002 on universal service and users' rights relating to electronic communications networks and services (Universal Service Directive), and Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector (Directive on privacy and electronic communications).

 

[3] Directive 2009/140/EC of the European Parliament and of the Council of 25 November 2009 (Better Regulation Directive) and Directive 2009/136/EC of the European Parliament and of the Council of 25 November 2009 (Citizens' Rights Directive).

[4] David Young of Verizon later added a familiar lobbyist refrain that he "doesn't understand what the problem is that we are trying to solve" with openness rules. Verizon deployed 194 lobbyists at a cost of more than $13 million in 2009 to fight Net Neutrality both at the FCC and in Congress.

Related articles

http://www.regulatorylawreview.com/spip.php?article254
