PRICING MODELS FOR ISPs IN INDIA

By

Vaibhav Kumar

Arun Unni

Tasneen Padiath

Anshu Anand

Rupali Bhandari

TABLE OF CONTENTS

INTRODUCTION
BRIEF HISTORY OF THE INTERNET
THE COMPONENTS OF NETWORK TECHNOLOGY
  The economic model
  Issues involved in these interactions
  Network Access Providers
  Pricing for Services
  Liability for the cost of communication
  Network construction costs
  Maintenance and upgrade costs
THE INDIAN SCENARIO
  National backbone
  International bandwidth
  Satellite or fibre connectivity
  Economic Bottlenecks
  Legal and regulatory issues
DEMAND FOR INTERNET ACCESS AND USAGE
  Components of Usage Demand
  The Network Effect and the Marginal Benefit of Size
  The Network Effects and Changes in Demand
PRICING BASICS
  Background on Network Pricing
  Pricing Alternatives
    Flat Pricing Scheme
    Usage based
    Priority
    Tiered usage
    Congestion pricing
    Two-part tariff
  Future pricing schemes
    The Precedence Model
    The Smart Market Mechanism
    Usage-based
CONGESTION
  The Costs of Congestion
  The Causal Model of Internet Congestion
    Incompatibility issues
    Privatization, Commercialization, and Massification
    Implications & Key Issues
  The Congestion Externality, Demand and the Marginal Social Benefit
    The Application Data Unit (ADU)
CASE STUDIES
  AOL Pricing History
    Until December 1994
    January 1995 to July 1996
    July 1996 to December 1996
    Since December 1996
    Lessons learned
  New Zealand and Chilean Internet Experience
    Background
    Pricing schemes
    Pros and Cons of pricing methods in New Zealand and Chile
  Conclusion
  Interaction with Other Groups
    Network and Path Dependent Effects
    Human Factors
    Collaborative Design
    Legal and Regulatory
    Industrial Organization
    Inter-Organizational Design
    Standards
IMPACT ON INDIAN INFRASTRUCTURE DEVELOPMENT
BIBLIOGRAPHY

Introduction

In the past decade, we have all witnessed the Internet's rapid expansion, which has outpaced the growth of any other industry. We have entered an era dominated by network technology. Advances in networking are bringing about the convergence of computing and communication technologies. This convergence, spanning television, telephony, and computing, has in turn extended the reach of the Internet's innovations. Digital video, audio, and interactive multimedia are growing in popularity and increasing the demand for Internet bandwidth. However, there has been no comparable convergence on the economics of the Internet. While advanced information and communication technologies make the network work, economic issues must also be addressed to sustain the growth cited above and expand the scope of the network.

Brief history of the Internet1

In the 1960s, in response to the nuclear threat during the Cold War, the Advanced Research Projects Agency (ARPA) of the USA engaged in a project to build a reliable communication network. The network deployed as a result of this research, ARPANet, was based on packet-switching protocols, which could dynamically reroute messages in such a way that they could be delivered even if parts of the network were destroyed. ARPANet demonstrated the advantages of packet-switching protocols, and it facilitated communication among the research institutes involved in the project. As more universities were connected to the network, ARPANet grew quickly and soon spanned the United States. In the mid-1970s the existing protocols were replaced by the TCP/IP protocols, a transition that was facilitated by their integration into Berkeley UNIX.

In the 1980s the American National Science Foundation (NSF) created several supercomputer centers around the country. The NSF also deployed a high-speed network based on Internet protocols to provide universities with remote access to the supercomputer centers. Since connection to the NSFNet was not restricted to universities with Department of Defense (DoD) contracts, the network grew dramatically as all kinds of non-profit entities, as well as universities and research groups, connected to it. A nonprofit Michigan-based consortium, the Michigan Educational Research and Industrial Triad (MERIT), managed NSFNet. Since Internet access was subsidized by the NSF and by the non-profit entities connected to the network, economic issues such as accounting, pricing and settlements were for the most part ignored.

1 http://www-inst.eecs.berkeley.edu/~eecsba1/s97/reports/eecsba1b/Final/final.html

As NSFNet grew and its potential became obvious, many for-profit entities wanted access to the Internet. Since the NSF did not want to subsidize Internet access for these private groups, it gave control of NSFNet to the nonprofit corporation Advanced Networks and Services (ANS). ANS was created from resources provided by MERIT, MCI and IBM, and was free to sell Internet access to all users, both nonprofit and for-profit. Meanwhile, some for-profit backbone providers such as PSI and UUNET started selling Internet interconnection services. As the Internet became more commercialized, people began studying and experimenting with Internet economics.

In 1995, ANSNet was sold to America Online, and a new commercial Internet replaced the NSFNet-based Internet. The new Internet consists of a series of network backbones interconnected at Network Access Points (NAPs). The NSF is phasing out its subsidies of the backbones but still subsidizes four NAPs: San Francisco (PacBell), Chicago (Ameritech), Washington DC (MFS) and New Jersey (Sprint). The popularization of the Internet and the perception of an imminent convergence of voice, video and data networks provided impetus to the telecommunications deregulation in 1996. At the same time, it became even more obvious that such a network convergence would require a coherent system of settlements and pricing. With the different networks able to provide the same (or similar) services, the old telephone and cable pricing structures may become inadequate, and new structures must be created to replace the old ones.

The components of network technology2

Network services include not only the products provided over the Internet but any kind of service that is provided over, or cannot be produced without, a network. For example, in addition to the many information goods and

electronic commerce activities on the Internet, phone calls and cable TV are other

examples of network services. Wherever there are users (buyers or consumers) to purchase the services, there will be sellers (or producers) to provide them.

There are three main kinds of Services available for consumption on the network, namely electronic commerce, information goods, and software applications distributed over the network. Users are defined as the individuals who consume Services via the network. Network Access Providers (ISPs) are defined as the companies that provide network access to Users and Services so that they can communicate. Finally, Infrastructure is defined as the physical network infrastructure and its protocols that allow information exchange in the network.

2 http://www-inst.eecs.berkeley.edu/~eecsba1/s97/reports/eecsba1b/Final/final.html

Therefore the four main components in the network services market are the Users, Network Access Providers, Infrastructure, and Services.

The economic model

Three parts of network technology - User, Network Access Provider and Services - interact with each other. The Infrastructure is "embedded" in the Network Access Providers because it has no direct interaction with the Users and the Services. Access to the network is "retailed" by Network Access Providers. For example, in the context of the telephone industry, the telephone companies are Network Access Providers which utilize the Infrastructure (i.e. the telephone lines) to provide Services (i.e. phone calls) to the Users (i.e. telephone customers).

There are four main kinds of economic interactions in the world of network technology, which can be illustrated by the example of a User buying a toy online.

Essentially, the User can browse the web from home because of the connection service provided by a Network Access Provider - more specifically, an Internet Service Provider, or ISP. The ISP in turn charges him or her a price for gaining access to the Internet. The ISP can provide this service because it rents a part of the Internet Infrastructure in order to provide network access service to Users. The ISP has to pay the company that provides the infrastructure (most likely a telephone company in this example). The homepage of the toy seller is on the web because the company pays another (or possibly the same) ISP for the connection to the Internet, in order to provide this electronic commerce as a Service. Finally, the User pays for this Service to buy the toy.

Thus, there are economic interactions between User and Network Access Provider,

Network Access Provider and Infrastructure, Network Access Provider and Services,

and lastly, User and Services.
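To make these flows concrete, here is a minimal Python sketch of the four interactions for the toy-purchase example; all party names and amounts are hypothetical, chosen only for illustration:

```python
# Illustrative sketch of the four economic interactions (hypothetical parties and amounts).

def monthly_flows():
    # Each tuple is (payer, payee, amount_usd, reason).
    return [
        ("User", "ISP", 20.00, "dial-up Internet access"),             # User <-> Network Access Provider
        ("ISP", "Infrastructure provider", 5000.00, "leased capacity"),# Network Access Provider <-> Infrastructure
        ("Toy store (Service)", "ISP", 300.00, "server connectivity"), # Network Access Provider <-> Services
        ("User", "Toy store (Service)", 15.00, "toy purchase"),        # User <-> Services
    ]

for payer, payee, amount, reason in monthly_flows():
    print(f"{payer} pays {payee} US${amount:,.2f} for {reason}")
```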

Besides the economic interactions between different components, there may also be, especially in the case of the Internet, economic interactions within a component. For example, there are settlement issues between different providers of the infrastructure for "pass through" traffic.

Issues involved in these interactions3

In the economic interactions between the Network Access Providers and Users, the issues are:

• An efficient, feasible pricing strategy
• Pricing schemes that have been suggested

In the economic interactions between Network Access Providers and Services, the issues are:

• Responsibility for collecting the tariff for network usage

In the economic interactions between Network Access Providers, the issues are:

• Is there a need for settlements between Network Access Providers?

In the economic interactions between Infrastructure and Network Access Providers, the issues are:

• How Infrastructure should be priced in order to recover the substantial sunk cost
• The problem of dial-in access to the Internet
• Should there be settlements between Infrastructure and Network Access Providers?

In the economic interactions between Infrastructure Providers, the issues are:

• Is there a need for settlements on "pass through" traffic?
• How interconnection agreements should be made

In the economic interactions between User and Services, the issues are:

• Pricing of the service
• Impact of the service on the way people live

Network Access Providers

A Network Access Provider, or Internet Service Provider (ISP), is defined as a company that provides network connections to Users and Services. ISPs can be thought of as providing access to end-users. For Network Access Providers, four different relationships involving interactions with different network entities can be seen:

1. Network Access Providers and Users
2. Network Access Providers and Services
3. Network Access Providers and Network Access Providers
4. Network Access Providers and Infrastructure

3 Electronic Commerce -- An Introduction, May 1996

Another important interaction that affects the Network Access Providers is that between the Infrastructure providers.

1. Network Access Providers and Users

ISPs usually suffer from diseconomies of scale when dealing with users. Customer support, accounting, billing and hardware maintenance all increase disproportionately with the number of users. Furthermore, anything that inconveniences the user will not be tolerated4. Pricing, therefore, must recover the fixed and growing marginal costs without inconveniencing the users. A pricing scheme should also provide incentives for both the Network Access Provider and the Users to act in a socially responsible way.

Costs that ISPs incur are:

• Hardware and software: An ISP must recover the costs of hardware, software and customer support. The hardware and software costs will vary depending upon the type of access the ISP supports (which in turn depends upon the customer's preference). Customers can choose between dialup or leased line access. Dialup service requires that the ISP purchase a terminal server, modem pool and dial-up lines. The software support costs of providing dialup service are negligible. Occasionally, the hardware must be upgraded; these upgrade costs tend to be incurred in large chunks rather than incrementally over time. ISPs providing leased line access are required to provide a router at either end of the leased line (one at the ISP site and one at the customer site), but terminal servers and modems are not necessary5. The software required for leased line service is more complicated than that required for dialup service, as configuration in the former case may take considerably more time.

• Customer support: Customer support costs can be categorised into three types that occur over the life of the ISP/customer relationship: costs of acquiring a customer, costs of supporting an ongoing customer, and costs of terminating a customer relationship.

2. Network Access Providers and Services

Pricing for Services

Many Services need access to the Internet before being able to market their goods on the Information Superhighway. In this capacity, the Services are much like the Users above in that they need to purchase Internet access. Hence, the pricing schemes for Users can also be applied to Services in their capacity as network users. The "advertising alternative" to pricing mentioned above would not be applicable, however, since the Services are the targets of that cost recovery model rather than its beneficiaries.

4 Hal Varian, "Economic Issues Facing the Internet", June, 1996

5 Padmanabhan Srinagesh, "Internet Cost Structures and Interconnection Agreements", presented at the MIT Workshop on Internet Economics, March, 1995.

Liability for the cost of communication

It is unclear which entity, Users or Services, should be liable for the communication charges. For example, if a user pays for some Service's software, who pays for the communication cost of downloading the software to the user (assuming in this example that downloading is the method of delivery)? It might be more efficient to have the ISP collect charges from the Service. This would certainly be the case if the User and the ISP did not have an existing relationship. However, the User and the ISP do have a pre-existing relationship in which the User pays the ISP for network access, so the accounting and collection methods are already in place at the ISP/User level. It would seem, then, that there is no benefit from imposing the liability for communication costs onto the Services.

However, charging for actual usage is difficult from a practical standpoint because of the processing power that would be necessary to measure the usage. Imposing the liability for the cost of communication onto the Service would greatly simplify the accounting procedure for usage-based accounting: the server knows a priori exactly how much bandwidth is necessary to transmit each product and would simply need to add the cost to the customer's bill. Although the Service would have to measure the cost before selling the product, this is a one-time calculation. Further, because it knows the cost beforehand, it could simply include a line item for communication cost in the User's bill for the software product. This imposes no inconvenience on the User.
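A minimal sketch of this Service-side accounting, assuming a hypothetical per-megabyte rate agreed between the Service and its ISP:

```python
RATE_PER_MB = 0.02  # assumed tariff between the Service and its ISP ($/MB)

# The Service measures each product's size once; the communication cost is
# therefore a one-time calculation per product, known a priori.
CATALOGUE_MB = {"photo_editor": 45.0, "antivirus": 12.5}

def invoice(product, price):
    comm_cost = CATALOGUE_MB[product] * RATE_PER_MB
    # The known cost appears as a separate line on the User's bill;
    # no per-packet metering is needed anywhere in the network.
    return {"product": product, "price": price,
            "communication": round(comm_cost, 2),
            "total": round(price + comm_cost, 2)}

print(invoice("photo_editor", 49.99))
```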

3. Network Access Providers and Network Access Providers

Because ISPs tend to agree that providing users with full Internet connectivity is a basic requirement, interconnection settlements between ISPs covering the case when two users with different ISPs are communicating are not necessary. The rationale is that when an ISP1 user communicates with an ISP2 user, both ISPs get paid by their respective customers, so no settlement is necessary. The present practice is to sign either a multi-lateral agreement, which allows all foreign traffic to be accepted by an ISP, or several bi-lateral agreements, which are agreements between two specific ISPs. Currently, 70% of ISPs sign multi-lateral agreements; the remaining 30% sign bi-lateral agreements. The rationale above does not apply, however, to the case of transit traffic (more commonly referred to as "pass through" traffic).

4. Infrastructure and Network Access Providers

Network construction costs

The major network construction costs are buying and installing the links and nodes.

Currently, most long haul infrastructure providers use optical fibres for their

transmission links. The costs of constructing fiber optic links include the cost of the fiber itself, of trenching, and of installation labor. Since the cost of the fiber is relatively

small compared to the total cost of installation, excess fiber is typically installed.

Between 40% and 50% of the fiber installed by the typical interexchange carriers is

"dark", i.e. the lasers and electronics required for transmission are not in place.

Private lines can be provided out of this surplus capacity. The costs for connecting a

private line include lighting up the fiber with lasers and electronics (if it is originally

"dark") and customer acquisition.

Maintenance and upgrade costs

Although the sunk cost of network construction is substantial, once the physical infrastructure is established, the incremental cost of carrying packets is negligible. However, maintenance and upgrade costs have recently become a serious burden. Heavy telephone usage of the local loops by Internet users has created significant problems for the telephone companies. In order to accommodate the ever-increasing network traffic, larger and faster switches are constantly replacing the old ones. This cost has been huge, but the telephone companies are not receiving any compensation for carrying the extra Internet traffic.

Capacity

Although there are dark fibers and upcoming new technologies (such as xDSL and ATM) to increase network capacity, in the short term and on a regional scale congestion is a very real problem. It is not prudent to rely on the belief that capacity can be increased indefinitely in the long run, from both the technological and economic points of view. The key, therefore, is to build enough infrastructure to satisfy statistical demand, and to use economic methods to manage the actual demand.

The Indian Scenario

Like most developing countries, India is faced with the problem of being able to provide only limited access to Internet services. The main cause is the limited infrastructure currently available in terms of telephone access. While phenomenal growth is expected, ISPs may face challenges in getting enough telephone lines in the four big Indian cities - Bombay, Delhi, Bangalore and Madras - from where up to 70 per cent of new ISP connection demand is expected to come.

India has fewer than 25 million telephones and 0.7 million Internet connections for 1,000 million people, while it needs 150 to 200 million telecom and Internet connections to meet the expected demand. With the new ISP policy not permitting last mile connectivity for dial-up access, this requirement will need to be met by the operators of basic telecommunications services. Experience over the last three years has shown the availability and quality of access lines to be the most significant limiting factor for the growth of the Internet in India. It remains to be seen how this will be overcome under the new liberalised scenario by merely increasing the number of ISPs.

The forecast of online access (source: Financial Times) shows that dial-up access will remain by far the most common method of access, outnumbering cable or ISDN access by a factor of over 10:1. Under these circumstances the policy announcement of opening up Internet access via Cable TV is not believed to provide a radical solution to the issue of access to Internet services. It will require a concerted and planned effort to meet this demand over a short time frame of less than six months.

National backbone6

Another major issue in the provision of Internet services is that of providing a national backbone for India-wide connectivity as well as inter-connection between the multiple ISPs. Under the ISP policy dispensation, Statewide access has been provided under the dialling scheme of "17222" access, which connects a subscriber to the nearest ISP node. With more than 800 cities in India available on STD/ISD and having a high potential for growth of Internet services, the above type of access is likely to load the Indian trunk network, which is designed for "high tariff, low holding time" traffic, with "low paying, high holding time" traffic. This will undoubtedly put considerable load on already scarce resources and is not a long term solution. Moreover, merely providing access to the nearest ISP node does not solve the issue of connecting to the more than 50 ISPs who may exist within a State. There is, thus, an urgent need to provide a common access backbone to which customers from any part of the State can dial in to access any ISP Internet node. The need is, therefore, to isolate the access service from the ISP service and the content services. A national backbone can be provided by private operators in addition to those provided by DOT and VSNL, and this should form an indispensable part of the National Information Infrastructure. No ISP policy would be complete without defining a national infrastructure for India-wide Internet access by multiple ISPs.

International bandwidth

With the operation of six gateways by VSNL, and in addition the use of optical fibre submarine cables, the bandwidth already provided by VSNL is over 80 Mbps and is adequate for meeting India's requirements at the current level of subscribers. India is also well connected with optical fibre cable systems, with FLAG (5 Gbps per fibre), SEA-ME-WE-3 (10 Gbps per fibre) and other cables in the pipeline. Complemented by multiple satellite connectivity, no shortage is envisaged, and the requirements of all ISPs can be fully met from "day one" so far as international Internet connectivity is concerned. VSNL has already prepared itself for this scenario, whereby its leased lines to the ISPs can be increased to accommodate all the new ISPs requiring connectivity via VSNL. It may be mentioned that VSNL today has over 350 leased line circuits operating for Internet alone and is, thus, well versed in this business.

Satellite or fibre connectivity

It is now well known that Internet growth is not simply growth in the number of users, web sites or customers. The real growth of the Internet is now being driven by the increasing use of bandwidth-hungry applications. According to Sprint, the average message travelling over its networks is today ten times larger than it was a year ago. Users are downloading larger and larger web pages and making increasing use of data-hungry applications such as video conferencing, LAN connectivity and enterprise networks. Within the next three years the bandwidth requirement of the Internet could grow by a factor of 50 to 100, overtaking telephone network bandwidth by a significant margin. On the Internet backbone, traffic today is doubling every 100 days, and major backbones at peak times suffer packet losses that can go up to 40%. The question is: how will such large bandwidths be provided for India?

6 http://www.bnetindia.com

Fortunately, VSNL's advanced planning in cable systems comes to the rescue on this urgent and pressing issue. VSNL, through its acquisition of capacity in FLAG and SEA-ME-WE-3, can ensure that the country's requirements are met for the next five years, with over 30 Gbps of capacity available. VSNL has already signed an MOU for Project Oxygen, which will be a 300 Gbps system operational in the year 2000-1. The increasing number of ISPs in India will drive up bandwidth demand. This, coupled with larger bandwidth per user through the use of bandwidth-hungry applications, will make it possible to order large bandwidths. This will in turn provide economies of scale and make bandwidth available at lower and lower costs.

VSNL is already negotiating for much larger capacities on the optical fibre systems for Internet. India-US connectivity for a DS3 (45 Mbps capacity) can today be had at US$ 150,000 per month. This is roughly the equivalent of US$ 7,000 per month per 2 Mbps circuit, as against an average figure of US$ 21,000 per month payable via satellite circuits to the USA. It is, thus, evident that even at relatively low levels of bandwidth utilisation, e.g. 45 Mbps, the cable systems provide a price advantage of 3:1 over satellite circuits. VSNL believes that this will be a significant factor in gradually lowering the cost of international Internet connectivity. As the bandwidth requirements increase beyond DS3 to ATM levels, e.g. 155 Mbps, the cost could come down further by a factor of 2 or 3, thus reaching levels at which such bandwidths are available in developed countries.
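The 3:1 figure can be reconstructed if the comparison is read per 2 Mbps (E1) circuit equivalent, which is our assumption rather than the source's explicit statement:

```latex
% A DS3 carries about 45 Mbps, i.e. roughly 22.5 E1 (2 Mbps) circuits:
\frac{45 \text{ Mbps}}{2 \text{ Mbps}} \approx 22.5,
\qquad
\frac{\$150{,}000}{22.5} \approx \$6{,}700 \approx \$7{,}000 \text{ per E1 per month}.
% Against $21,000 per month for a comparable satellite circuit:
\frac{\$21{,}000}{\$7{,}000} = 3:1 \text{ in favour of cable.}
```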

With the arena now clear for a deregulated and open playing field for ISPs, the stage is set for the rapid growth of the Internet in India. However, that growth will be critically dependent on how factors such as the requirement of access lines and national backbone connectivity are addressed. Given the wisdom that has gone into the formulation of the new ISP policy, it is believed that these issues requiring urgent resolution will be addressed on priority and resolved, opening up a path for multi-fold growth in Internet services in India.

Economic Bottlenecks7

Economic bottlenecks that limit access in developing countries are a matter of service affordability. The average expenditure per month on communications in the USA is $30, which almost 90% of households can afford.

In India the situation is shown in the table below:

India (in 2001)

  Annual Household Income   % of households   Expense on communication*
  > $5000                   1.6%              > $350
  $2500 - $5000             6.3%              $175 - $350
  $1000 - $2500             23.3%             $70 - $175
  $500 - $1000              31.8%             $35 - $70

  *Assuming 7% of family income for communications

The above table clearly shows that even at the lower US price of $30 per month, only 1.6% of households can afford the service; affordability thus acts as a severe constraint on improving access to these services in the country.
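The expense bands in the table follow directly from the 7% assumption; for example:

```latex
% Annual communication budget = 7% of household income:
0.07 \times \$5{,}000 = \$350 \text{ per year}.
% The US benchmark of $30 per month is an annual outlay of
12 \times \$30 = \$360 > \$350,
% which only the top income band (1.6% of households) can cover.
```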

Also, in the USA a $360 per year revenue may justify a $1,000 per line network cost and is affordable to almost 90% of households. This implies that there is little incentive to reduce cost; the focus of R&D is not to reduce costs but to enhance the basket of services and features while keeping cost constant.

In developing countries, even at $125 per year of revenue, the service is affordable to only the top 30% of households. Here, therefore, the emphasis continues to be on reducing network costs, given a market size of hundreds of millions.

7 From a paper discussed at Commsphere 2000, IIT Madras, by Prof. Ashok Jhunjhunwala

The graph of cumulative households against network cost per line (see the Network Effect figure below) shows the importance of improving service affordability. Access would increase if affordability could be increased. As more households get access, the network cost per line would come down, and this virtuous circle would continue. Thus it is imperative that service costs be brought down.

Given low affordability, even providing access may not be enough, since the user would still have to pay call charges, which might be prohibitive. Some steps to rectify this situation are being considered: for example, the government telecom service provider and soon-to-be ISP Mahanagar Telephone Nigam Ltd. (MTNL) is considering waiving local phone call charges for Internet users, or bundling calling charges into its own Internet service.

Legal and regulatory issues

Legal and regulatory challenges still arise in areas like the setting of access tariffs for private ISPs, international gateways, Internet telephony, and the opening up of the last-mile telecom market.

The most revolutionary aspect of India's Internet policy is letting ISPs provide the last mile connection, and this could well be a source of litigation from basic service licence holders who worry about voice over IP.

On 6th November 1998 a new ISP policy was unveiled. The policy permitted an unlimited number of Internet players with no licence fees for the first five years, thus setting the stage for a completely deregulated operating environment.

A deregulated environment requires the most disciplined set of regulations to oversee the growth and to protect the interests of the customers and the country. It will be important to ensure that no anti-competitive practices are indulged in by any of the operators, particularly those responsible for providing infrastructural facilities.

[Figure: Network Effect - % of households (cumulative) plotted against network cost per line]

Demand for Internet Access and Usage

The first issue that needs to be highlighted is the difference between demand for Internet access and demand for usage. This is an important distinction since "the important characteristic of Internet demand for access is that it is binary. An end user either has access or he does not." By contrast, Internet usage refers to an individual's utilisation of Internet resources once access has been obtained. The rate of data/traffic transferred in a given period will be used as the measure of Internet usage. Other methods of measuring usage could be the total amount of data transferred or the hours of time connected to a service.

Components of Usage Demand

There are five factors which influence demand for Internet usage: the price of usage, cross-price effects, income effects, user preferences and, lastly, the network effect. The network effect is a particularly important component of the demand for Internet usage and will therefore be considered separately.

1. The price of usage

This can also refer to the type of pricing scheme: that is, whether a flat access fee is charged - in which case the usage price is zero - or whether a usage sensitive pricing scheme is used.

2. Cross-price effects

It is important to recognise the impact of both substitutes and complements on demand for usage. In relation to Internet usage it is important to recognise that the substitutes tend not to be substitutes for Internet usage per se, but rather substitutes for different components or uses of the Internet. For example, the substitutes for email might include telephone and fax calls as well as postal mail. The substitute for purchasing CDs online is buying them through mail order, over the phone or via the home shopping television channels.

If we therefore assume that substitutes tend to be for components of Internet usage, rather than Internet usage as a whole, the cross-price elasticity for Internet usage will then be related to the proportion of total usage made up by the substitutable component. For example, the cross-price effect on Internet usage of a rise in fax or postal mail prices will be influenced by the proportion of total Internet usage the substituted component (e.g. email) makes up. From this it may be possible to argue that the size of the cross-price elasticity between a particular substitute and total Internet usage is likely to be low, although this would need to be confirmed through empirical study.
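One stylized way to state this argument (our formalisation, not the source's):

```latex
% If component i (e.g. email) accounts for a share s_i of total Internet
% usage Q, the cross-price elasticity of Q with respect to the price p_j
% of a substitute for that component is approximately
\varepsilon_{Q,p_j} \;\approx\; s_i \,\varepsilon_{i,p_j},
% so a small share s_i implies a small cross-price elasticity for total usage.
```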

The price of complements also has an impact on the demand for Internet usage. The complements to Internet usage are not just the computer equipment required, but also the cost of the user's time. An increase in time-saving computer applications will increase a user's demand for usage, since for any given amount of time they will be able to get more valuable usage from that time.

3. The Income Effects

Income also affects demand for Internet usage. It is suggested that Internet usage is a normal good; that is, demand rises with income. But if the rise in income is associated with a rise in the user's wage rate or hours of employment, this increases the value the user places on time. They may therefore be more prepared to pay for a substitute (such as a conventional telephone call) than to wait for the level of congestion on the Internet to decline to a point where they can use real time audio applications without degradation in quality. Thus under some circumstances Internet usage may be considered an inferior good.

4. User Preferences and other miscellaneous factors

Consumer preferences are used to account for differences not attributable to the above factors or to the network effect. Changing demographics are one factor affecting the demand for usage. However, it is possible to imagine a wide range of other factors which could alter consumer preferences and have an impact on the demand for usage. For example, changes in the weather may induce users to stay indoors and use the Internet rather than undertake activities outdoors, whilst a popular television show may reduce the demand for Internet usage at that time.

5. The Network Effect

The final component of demand is the network effect. The network effect refers to the situation where "the value of the good increases with the expected number of units to be sold." Network effects can be either direct or indirect; it is the direct network effect which has an impact on the demand for Internet usage. Direct network effects arise as the result of the effect a new user has on the benefits received by the existing network users. The positive network effect will be focussed on, although the impact of negative network effects will also be considered.

The operation of the network effect is best explained by the simple example of the telephone or fax networks. In the extreme case where there are zero users of the network, there is no benefit in being connected, since there is no-one to talk to! As the network grows, the benefits to an individual of becoming connected increase, since there are more people to communicate with. As the network size increases, so do the benefits to participants, since "a new user joining the network increases the range of choice open to other members."

The same may be argued to exist in relation to use of the Internet. The more WWW sites, or users on the Internet Relay Chat (IRC) network, the greater the benefit a user can expect to receive.

The Network Effect and the Marginal Benefit of Size8

Figure 1 below is a graphical representation of a possible relationship between network size, Marginal Private Benefit (MPB) and Marginal Social Benefit (MSB). In Figure 1, Marginal Benefit (MB) is drawn on the vertical axis and network size on the horizontal.

In the context of the Internet, network size could refer to the number of users connected to the Internet or the number of WWW sites accessible by Internet users, the choice depending on the particular problem one wishes to analyse. It is important to remember that in this analysis network size does not refer to the physical capacity of the network. The model developed here also assumes that increases in network size do not result in changes in the physical capacity of the network.

For all network sizes up to N* the marginal benefit of increased size is positive. This reflects the earlier explanation of larger network size creating more 'goods'. Furthermore, the MSB curve is drawn above the MPB for all network sizes less than N* due to the presence of the external benefits which existing network users receive from new users joining the network.

8 http://users.hunterlink.net.au/~ddhrg/econ/honours/demand3.html

Examining the shape of the marginal benefit curves, it can be seen that initially the marginal benefit is positive and increasing, due to the increasing benefits from expanding the network. Eventually, however, the marginal benefit reaches a maximum, after which the marginal benefit of increased size begins to diminish. The diminishing marginal benefit observed in Figure 1 may be due to a number of influences. Although some activities may require a certain critical mass to exist, increased size beyond this level may not contribute significant gains. Internet shopping is a possible example: although a certain number of Internet users may be necessary for on-line shopping to become viable, the gains once this critical point has been reached may begin to fall.

Increased network size may also lead to higher levels of 'spamming' and other anti-social behaviour, which may reduce the marginal benefit of increased size. Similarly, increased network size can produce the Internet equivalent of highway traffic jams, again acting to reduce the benefits of increased size. The problem of congestion is significant enough that it will be examined in more detail later. Eventually the effects of these negative factors may actually turn the network externality from a positive into a negative.
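A compact restatement of Figure 1, in our own notation rather than the source's:

```latex
% The marginal social benefit of a larger network is the marginal private
% benefit plus the external benefit X(N) conferred on existing users:
MSB(N) = MPB(N) + X(N), \qquad X(N) > 0 \text{ for } N < N^{*}.
% Both curves rise, peak, and then decline as spamming and congestion grow;
% at N^{*} the marginal benefit of further size reaches zero, beyond which
% the network externality turns negative.
```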

The Network Effects and Changes in Demand9

We now examine the relationship between network size and the demand for Internet usage. In developing this explanation it is necessary to use a simplifying assumption about the relationship between the network effect and demand: that all users value the network effect equally, and therefore that the increased willingness to pay due to the presence of the network effect "is the same for each unit sold, irrespective of its position on the demand curve." This is significant since it means that the network effect shifts the demand curve without altering its slope.

Since the demand for Internet usage is based on the benefits received, where the increase in network size produces a positive marginal benefit, this will result in an increase in demand. When the increase in size produces a negative marginal benefit, this will result in a decrease in demand. Figure 2 illustrates this relationship.

9 http://users.hunterlink.net.au/~ddhrg/econ/honours/demand3.html

At network size N+, the corresponding demand curve is DN+. At a price of P* the quantity demanded is Q+ megabytes per hour. As the network size increases to N*, the demand curve shifts outwards to the right and is represented in Figure 2 by the demand curve DN*. At price P* a greater quantity, Q* megabytes per hour, is demanded. However, as the network grows past N* and moves to N-, the marginal benefit becomes negative due to the previously mentioned influences. The demand curve thus shifts inwards to the left. At price P*, quantity Q- megabytes per hour is demanded, with Q- < Q*. Demand for Internet usage is therefore maximised at the network size where the marginal benefit of increased size is zero.
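The same relationship in symbols (again our notation): writing the quantity demanded at price P* as a function of network size,

```latex
Q = D(P^{*}, N), \qquad
\frac{\partial D}{\partial N} \gtrless 0 \ \text{ as }\ MB(N) \gtrless 0,
% so the quantity demanded at P* is maximised at the size N* where MB(N*) = 0,
% matching the outward shift from D_{N+} to D_{N*} and the inward shift to D_{N-}.
```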

Pricing Basics

Background on Network Pricing

In industries that exhibit perfect competition, economic theory dictates that firms will end up pricing at marginal cost. In a perfectly competitive market structure there is a large number of suppliers, none of which is too large relative to the overall market; the outputs of these suppliers are homogeneous10; and there are no barriers to entry. It is assumed that the industry exhibits diminishing returns to scale and that fixed costs are relatively small.

However, the telecommunications and information services industries require huge fixed costs in the deployment of their infrastructure, and they exhibit increasing returns to scale. Therefore, it is not efficient for them to apply the classic economic practice of pricing at marginal cost (which is close to zero).

Since uniform pricing at marginal cost is not efficient in this industry, suppliers devise

other pricing strategies.

One such strategy is to employ differential pricing schemes. Different consumers of a

certain product usually place a different value on that product and, therefore, the

willingness to pay for the product varies across the consumer population. Firms try to

extract as much of this value from the consumers as possible by using differential

pricing schemes. The amount extracted is limited by the consumers' willingness to

pay for the product.

Differential pricing schemes can be divided into two-part tariff schemes and price

discrimination schemes.

1. In a two-part tariff scheme, users are charged an attachment fee to connect to the network, and a usage fee for their incremental use of the network. The entry (attachment) fee should be set to cover the fixed costs of the network infrastructure, plus any consumer surplus derived from the attachment. The usage fee may be metered by time, packets, bandwidth used etc., and should also capture the marginal consumer surplus derived from that usage. (A standard formalisation is sketched after the pricing algorithm below.)

2. In a price discrimination scheme, consumers are divided into segments and are

charged according to the segment to which they belong. There are three types of

price discrimination schemes:

• First-degree price discrimination: Each consumer is charged his or her individual willingness to pay. This scheme extracts the maximum consumer surplus from each individual consumer. However, it is usually very difficult or impossible to implement, and sometimes its implementation may be illegal.

• Second-degree price discrimination: Consumers are divided into segments based on some attribute that the consumers are induced to reveal. An example of second-degree price discrimination in a network context would be the versioning of a network service: regular e-mail may be free, but with no delivery time guarantees, while urgent e-mail may carry a fee, with certain delivery guarantees attached.

• Third-degree price discrimination: Consumers are divided into segments based on some verifiable attribute, such as being students or senior citizens.

10 Michael L. Katz and Harvey S. Rosen, "Microeconomics", 2nd Edition, IRWIN, Inc.

In price discrimination schemes, profit-seeking firms try to extract as much consumer surplus from each segment as possible. Each segment is charged an optimal price based on the estimated willingness to pay of that segment. For example, businesses that rely on telecommunication services are willing to pay (and therefore are charged) higher rates than individuals and households.

The established algorithm for determining prices is:

1. If the firm can use a two-part tariff, then the per-unit price is found by intersecting the demand and marginal cost curves.
2. An entry fee is set at the individual consumer's surplus when she can buy as many units as wanted at the set price.
3. If consumers cannot be divided on the basis of a verifiable attribute, then second-degree price discrimination is used (e.g. versioning or quantity discounts).
4. Otherwise, the firm segments the market and prices each individual segment.

Lastly, when considering a network pricing structure, it is important to distinguish between the types of charges that the above (and other) pricing schemes can apply11: charges for network access, capacity, usage and priority of service.
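The two-part tariff steps of this algorithm admit a standard textbook formalisation, sketched here in our own notation under the usual assumptions:

```latex
% Total charge for q units: entry fee A plus per-unit price p.
T(q) = A + p\,q.
% Step 1: p is set where the demand curve meets marginal cost,
D(p) = q^{*} \text{ with } p = MC(q^{*}).
% Step 2: A is set to the consumer surplus remaining at that price,
A = \int_{p}^{\bar{p}} D(x)\,dx,
% where \bar{p} is the choke price at which demand falls to zero.
```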

Pricing Alternatives

Given below are some pricing schemes that may be implemented:

The case for public subsidy12

11 J. Walrand and P. Varaiya, "High Performance Communication Networks", chapter 8.

12 Sandra Schickele, "The Internet and the Market System: Externalities, Marginal Cost and Public Interest", August,

1993

Before considering any one pricing scheme, it is useful to ask whether it is technically, economically and socially feasible to charge for Internet service at all. There are some schools of thought which believe that the answer is "no". Some units of pricing, such as the number of packets (units of communication) sent, require more computing resources to do the packet accounting than to send the packet, thus rendering those pricing schemes infeasible. As far as economic and social feasibility is concerned, there is a very strong argument that the Internet access market cannot succeed and that, therefore, the prices charged will be neither economically nor socially optimal. From the trend of ISP insecurity and price flexibility, one can conclude that the Internet access market is currently competitive. There is a strong belief, however, that a market "shakeout" will occur, from which only a few ISPs will survive. If this is the case, then those firms will be able to charge prices much higher than marginal cost. Market failure is said to occur at that point - when the market is incapable of producing an economically efficient and socially optimal allocation of resources. When a market fails, economic theory says that government intervention is required, especially for a quasi-public good such as the Internet.

Flat Pricing Scheme13

Some economists support the argument that a flat price for Internet access, regardless of the amount of resource use, is the only feasible pricing scheme. The argument is based on the fact that we are building a general purpose network whose uses will be very diverse, and therefore the resulting dynamic allocation of resources (usually bandwidth) will become increasingly difficult to meter and expensive to track. This argument is most compelling, however, at the Infrastructure level, in regard to charging ISPs for resources. It is less compelling at the ISP level, where the amount of resources used is more easily monitored, as there is a known and finite set of destinations (the customers) that need to be tracked. Lastly, flat rate pricing does not provide the user with any incentive to avoid causing congestion.

The Arguments against Flat-rate Pricing

13 Loretta Anania and Richard J. Solomon, "Flat: The Minimalist B-ISDN Rate",

presented at the MIT Workshop on Internet Economics, March, 1995.

Flat-rate pricing in the current context of the Internet is likely to run into severe problems. The continuance of flat rate pricing is likely to severely impair the current discursive nature of the Internet.

The basic role of a pricing mechanism is to lead to an optimal allocation of scarce

resources, and to give proper signals for future investments. The mechanism in place

should lead to the optimization of social benefits by ensuring that scarce resources

are utilized in such a manner as to maximize productivity in ways society thinks fit.

One critical issue, however, is the basis on which an appropriate pricing scheme can be designed.

Given that the marginal cost of sending an additional packet of information over the network is virtually zero once the transmission and switching infrastructures are in place, marginal cost pricing in its simplistic form is inapplicable. Cost-based return on investment (ROI) pricing is both infeasible, given the multiplicity of providers who would have to chip in to bring about an end-to-end service, and inefficient, given the chronic problem of allocating joint costs. A "what the market can bear" policy would be likely to have unforeseen implications, especially if the markets are not competitive in each and every segment of the network.

The principle that is most likely to be effective in this scenario is a modified version of the marginal cost approach, in which the social costs imposed by the scarcity of bandwidth - the bottleneck resource - are taken into consideration. Bandwidth being the rate at which data is transmitted through the network, its scarcity implies delays due to network congestion. This, then, is the social cost that needs to be incorporated into any efficient pricing scheme.

The major fear in some quarters is that the present system of flat-rate, predictable pricing for a fixed bandwidth connection will be replaced by some form of vendor-preferred, usage-based metered pricing. Users feel that the Internet should continue to function primarily as a vast, on-line public library from which they can retrieve virtually any kind of information at minimal cost.

In addition to the fear that a popular discussion list would have to pay enormous amounts to send messages to its members, it is feared that usage based pricing would introduce a wide range of problems regarding the use of ftp, gopher, and Mosaic servers, since the providers of "free" information would be liable to pay, at a metered rate, the costs of sending the data to those who request it. This would have a negative effect on such information sites, and would eliminate many sources of free information.

In essence, the argument is that usage based pricing would imply severe economic

disincentives to both users and providers of "free" information, and would therefore

destroy the essentially democratic nature of the Internet.

Usage based

Usage based charges are determined by the quantity of use and can theoretically be measured in a number of different ways: the speed of the connection (i.e. the modem speed), connection time, the number of packets sent, the length of the connection to the ISP in minutes, and so on. Pricing based on the number of packets actually sent has the advantage of being fair, in the sense that users are charged for exactly what they use. Pricing based on the number of minutes of connection is unfair, however, because it does not distinguish the length of the connection from the number of packets actually downloaded, although there may be no correlation between the two. It is certainly feasible, for example, that one user reads information downloaded at the beginning of a session for an hour, while another user downloads a new page of information every five minutes. Usage based pricing does provide a disincentive for users to be wasteful of network resources, since they must pay for the resources they use. In practice, however, setting rates and measuring the usage is very difficult. It could take more computing power to compute the resources used by sending a packet than to actually send the packet; therefore, usage pricing based on the number of packets is economically infeasible. When other accounting measures such as connection speed or connection time are used, however, users will complain that these are unfair, because people (with the same connection speed or connection time) receiving and sending different amounts of traffic would be charged the same. Also, there can be long "idle" periods in which no network traffic flows but the connection is still maintained. Finally, usage based pricing is very controversial because it endangers the vitality of the Internet. Users would undoubtedly not "surf" the web as freely with a virtual meter ticking in the background. Where usage based pricing has been tried, growth has slowed down.
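The fairness point about per-minute versus per-packet charging can be illustrated with the two users described above; all rates here are hypothetical:

```python
PER_MINUTE = 0.05  # assumed $/minute of connect time
PER_MB = 0.02      # assumed $/MB of traffic actually carried

users = {
    "reader":     {"minutes": 60, "mb": 2.0},   # downloads once, reads for an hour
    "downloader": {"minutes": 60, "mb": 120.0}, # fetches a new page every five minutes
}

for name, u in users.items():
    by_time = u["minutes"] * PER_MINUTE
    by_traffic = u["mb"] * PER_MB
    print(f"{name}: per-minute ${by_time:.2f}, per-MB ${by_traffic:.2f}")

# Per-minute billing charges both users the same $3.00 despite a 60x
# difference in traffic; per-MB billing separates them, but requires the
# costly packet accounting discussed above.
```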

The Telephone Pricing Model

One form of usage based pricing would be to use a system of posted prices, as in telephony. One way to do this would be to adopt the telephone model, where the cost of Internet usage is based on the distance between the sender and the receiver, and on the number of nodes through which data must travel before reaching their destination. This, however, would be difficult to implement given the inherent nature of the connectionless net technology, which is based on redundancy and reliability, where packets are routed by a dynamic process through an algorithm that balances load on the network while giving each packet alternative routes should some links fail14. The associated accounting problems are also enormous. In addition, the sender would prefer that packets be routed through a minimum number of nodes in order to minimize costs, while the routing algorithm in the Internet bases its calculations on the concept of redundancy and reliability, and not necessarily on the fewest links or the lowest costs.

The telephone model of pricing is not likely to work for another reason. Posted prices are not flexible enough to indicate the state of congestion of the network at any given moment15. As we have seen earlier, congestion in the network can peak very quickly from an average load, depending on the kind of application being used. Also, time-of-day pricing means that unused capacity at any given moment cannot be made available at a lower price at which it would be beneficial to some other users. Conversely, at moments of congestion, the network stands to lose revenue because users who are willing to pay more than the posted rates are being crowded out of the network through the randomized first-in-first-out (FIFO) process of network resource allocation.

In essence, the system of posted fixed prices implies multiple problems: it does not allow for revenue maximization or lead to optimal capacity utilization, and it also does not address the social costs of congestion because it cannot allow for prioritization of packets. It is thus clear that the answer to the Internet's pricing problem does not lie at either end of the pricing spectrum defined by flat-rate pricing and pure usage based pricing, but possibly in an innovative approach.

Priority

In a priority scheme (also sometimes called a Quality of Service scheme), the user chooses the quality of service that they want and pays a flat fee for that quality of service. A user could choose between high or low priority connections, for example. Another form of priority pricing is to allow the user to actually choose the priority of their packets (both sent and received) on the Internet. This latter type of priority pricing is not currently available because the underlying infrastructure does not differentiate between different packets' priorities. However, this type of pricing might provide better quality of service than a faster line: although the faster line could provide better service at the endpoint of the user's connection, it does not provide the end-to-end guarantee that packet priorities would. The idea behind priority pricing is that the user pays for what they get but does not have to deal with that "ticking meter" feeling. Priority charges also have the advantage that they allow ISPs to charge for "luxury items" and, therefore, to attempt to charge a price closer to the user's willingness to pay. However, priority based schemes may not provide enough granularity to allow ISPs to charge the highest possible price for each customer.

14 (Varian & MacKie-Mason, 1993, p. 3)

Tiered usage

In a tiered usage pricing scheme, the user is charged a certain amount for the first X units of use, then a higher amount for the next Y units of use, and so on. The advantage of tiered pricing is that it might allow whimsical browsing without encouraging excessive use. The disadvantage is that the user would be inconvenienced by having to keep track of their usage - and, as stated previously, user inconvenience is not acceptable. This could be remedied in a number of ways, however: perhaps by sending a message to the user once they have crossed the threshold of a new tier, or by allowing the user access to their account records thus far.
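A minimal sketch of a tiered bill, with hypothetical tier boundaries and rates:

```python
# (size_of_tier_in_MB, rate_per_MB); the final tier is unbounded.
TIERS = [(100, 0.00),            # first 100 MB free
         (400, 0.01),            # next 400 MB at 1 cent/MB
         (float("inf"), 0.03)]   # everything beyond at 3 cents/MB

def tiered_charge(usage_mb):
    total, remaining = 0.0, usage_mb
    for size, rate in TIERS:
        used = min(remaining, size)
        total += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return total

# The provider, not the user, tracks usage, and could notify the user at
# each tier boundary instead of making them keep count themselves.
print(tiered_charge(650))  # 100*0.00 + 400*0.01 + 150*0.03 = 8.50
```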

Congestion pricing

One reason to introduce pricing schemes into the Internet is to make users understand the value of what they are gaining (an ability to communicate and to access information) and to give them an incentive to act in a socially conscious way which reduces the harm to others. For example, everyone is accustomed to higher daytime rates for long-distance telephone service. The rates are higher during the day because phone lines are congested during that time. Higher prices serve to inform the customer of the extra value of calling during periods of congestion. The customer, then, will meter their daytime use according to their willingness to pay for that telephone call: if the call is relatively urgent, they will phone during the daytime; if not, they will wait until the evening. On the Internet, something similar can be done by charging according to the state of congestion of the network. However, the drawback of a congestion-pricing scheme is that it gives the ISP an incentive to cause congestion by restricting its capacity. There are several ways in which congestion can be spuriously introduced. For example, an ISP can:

• Withhold capacity: An ISP can purposefully not build capacity to match demand. Besides introducing congestion, this saves on management costs, since smaller systems are cheaper to maintain than larger systems.

• Hide capacity: An ISP can simply shut down a portion of its modems for dial-up service. A further benefit is that, if and when it turns the modems back on, it can take credit for an innovative upgrade.

• Augment demand: An ISP can cause congestion by "demand pseudo-augmentation," in which the apparent demand is increased by some kind of supplier self-dealing. The ISP, or a collaborator, can use bandwidth wastefully just to drive up the price.

15 (Varian & MacKie-Mason, 1993, p. 19)

Two-part tariff

A two-part tariff is comprised of a fixed (f) portion and a variable (v) portion. The fixed

portion includes charges for network access and capacity (a capacity charge is

based on the network's maximum possible bandwidth). This is determined by the

fixed costs, the willingness to pay of the customer population, and the size of the

population. The variable portion would be based on the actual usage of the users and

the priority of their service. The fact that the variable portion extracts the consumer

surplus means that the two-part tariff scheme maximizes the consumer surplus

extracted from customers, and therefore provides the ISP with a disincentive to

induce congestion, which would reduce the number of connections and the

network usage.

Future pricing schemes

In the long run, there is the possibility of implementing new pricing schemes to

alleviate the growing congestion, although at present it is clear that the overhead cost

is too high. The new pricing schemes, if implemented successfully, will be an

effective alternative of doing settlements on top on flat-rate pricing.

Two innovative pricing schemes have been suggested. Bohn et al. have proposed

the "Precedence" model, while Varian & MacKie-Mason have developed the "Smart

Market" mechanism.

The Precedence Model

The Precedence model proposes "a strategy for the existing Internet, not to support

new real-time multi-media applications, but rather to shield the existing environment

from applications and users whose behaviour conflicts with the nature of resource

sharing". Criteria should be set to determine the priority of different applications,

which will then be reflected in the IP precedence field of the different data packets.

Packets would receive network priority based on their precedence numbers. In the

event of congestion, rather than rely on the current randomized decision, the

Precedence model presents a logical basis for deciding which packets to send first

and which to hold up or drop. While noting that their proposed system is vulnerable to

users tinkering with precedence fields, it is felt that this approach would "gear the

community toward the use of multiple service levels, which is the essential

architectural objective"

However, this model has some inherent weaknesses. Given that the Precedence

model rests on priority allocation of packets, the central issue is how these priorities

will be set and who will set them. There seems to be an inherent assumption of an

increased governmental role in regulating content, and as Varian and MacKie-Mason

point out, "Soviet experience shows that allowing bureaucrats to decide whether work

shoes or designer jeans are more valuable is a deeply flawed mechanism"

The system would also require continuous updating of the priority schemes as newer

products and applications become available. Real time video may be assigned a

lower priority than ftp, but it is possible that the video transfer of data is concerned

with an emergent medical situation. Application- based priority will be limiting, and it

would not be possible to define each and every usage situation in a dynamic

environment.

Also, the model relies heavily on the altruism of net users, and the correct reporting

and non-tinkering with precedence fields by computer-savvy netters. The continuing

survival of such a system is at odds with current social trends.

The Smart Market Mechanism

Proposing the Smart Market mechanism as a possible model to price Internet

usage16, suggest a dynamic bidding system whereby the price of sending a packet

varies minute-by-minute to reflect the current degree of network congestion. Each

packet would have a "bid" field in its header wherein the user would indicate how

much he is willing to pay. Packets with higher bids would gain access to the network

sooner than those with lower bids, in the event of congestion. This mechanism is

preliminary and tentative and is only one approach to implementing efficient

congestion control; moreover, it would only ensure relative priority without being an

absolute promise of service.

The Smart Market mechanism has great theoretical potential as a basis for

implementing usage-based pricing. By charging for priority routing during times of

congestion, traffic that does not claim priority status, such as a large Internet mailing

list of a listserv conference. would travel for free during off-peak hours. During

congestion, users would bid for access and routers would give priority to packets with

16 Varian & MacKie-Mason (1994)

the highest bids. A great deal of consensus will be required along the network for

smooth functioning and to ensure that priority packets are not held up .

Users will be billed the lowest price acceptable under the routing "auction," and not

necessarily the price that they have indicated as their bid. A user would thus pay the

lower amount between his bid and the bid of the marginal user, which will be

necessarily lower than the bids of all admitted packets. As a result, the Varian and

MacKie-Mason model ensures that while everyone would have the incentive to reveal

his or her true willingness to pay, there are systemic incentives to conserve on

scarce bandwidth while simultaneously allowing effectively free services to continue.

Smart Market proposal

The Smart Market proposal provides an intelligent way to price the variable portion

(v) of the two-part tariff mentioned above based on network congestion. In an ideal

world, the price charged for network use would be a continuous function of the

congestion. The congestion level would determine the price charged to the user at

the time the packet was transmitted. However, this would be inconvenient for the

user and the ISP as the ISP would constantly have to monitor congestion and the

user would have to constantly monitor the price to determine if the price has

surpassed the user's willingness to pay.

The Smart-Market proposal suggests that users specify a bid for each packet sent.

That bid should reflect the user's willingness to pay. In times of congestion, packets

are prioritised according to their bids. Packets are charged at the bid of the highest

priority packet that is dropped not the bid on each packet. This provides an incentive

for the users to bid based on their true willingness to pay.

� Selling advertising and marketing information

Perhaps it is not necessary for the user to pay at all. Rather, the ISP could bring in

profits just by selling advertising space. ISPs are often in the Service industry as well

and, as such, might set up charge accounts for their customers. By gathering the

personal information (such as taste) of their customers, they can sell this kind

information to other companies. Although people tend to regard their privacy as

sacred, they are surprisingly willing to give up that privacy for a very small amount.

Selling advertising space or username lists are alternatives for all ISPs, not just for

those that provides information services.

Usage-based

Considerations for the implementation of usage-based pricing at the ISP level also

apply to the infrastructure. In addition, a usage-based pricing for the infrastructure

can provide extra revenue for the development of more efficient and increased

capacity networks. Provided that an environment is available which makes the

adoption of a usage-based pricing attainable, charges based on the volume of traffic

is a relatively simple and cost-effective scheme. Costs for providing this service

include accounting hardware, software, and a business unit to bill the users. In an

environment that is hooked on flat-rates, such as in the United States, attractive

features of usage-based pricing must exist before the customers (ISPs) will accept

the switch.

Building a Case for Regulation

Although the dynamic bidding mechanism is very attractive as a theoretical basis for

pricing usage, it renders the system wide open to potential abuse by those who

control the system bottlenecks. A case is therefore made for establishing some form

of regulatory oversight to ensure against anti-competitive activities and abuse of

market- power. In essence a usage-based pricing scheme needs to be combined

with some form of regulatory oversight that aims at making the access of emerging

networks to the Internet open and nondiscriminatory, and that the firms which control

the bottleneck facilities in the emerging structure do not indulge in anti-competitive

behavior. Dynamic pricing would enable higher overall use of network capacity, while

allowing price-sensitive users to access telephone services at lower prices on a

dynamic and daily basis.

The Weakness of the Dynamic Bidding Model

The essential weakness of the Smart Market proposal as a stand-alone, free market

pricing system that does not need any regulatory oversight for its proper

implementation lies in its assumptions, summarized below.

Perceived Homogeneity

First, the model proposes to price the scarce network resource based on the

perceived network load. It seems that a uniform load factor is presumed across all

points of the network on which basis bandwidth is priced. However, this is simply not

true. The Internet is not a homogeneous network. The load factor and the resultant

level of congestion is going to be very different along the different

nodes/switches/lines between the sender and the receiver.

It may be argued that the price of sending a message can be based on the most

congested point of the network. However, the path that a packet will take cannot be

predicted with any degree of certainty. It is thus close to impossible to base pricing

on an algorithm related to the network load at the most congested point of the

network along the path that the packets have to traverse in order to be able to reach

their destination.

Also, network load is unpredictable, and is prone to sudden peaks and troughs. It is

entirely possible that the load at a particular node changes rapidly and the bid is

simply not good enough to receive priority from that node at that moment, even

though it might have been so earlier. It may be argued that through consensus a

system could evolve where "regional" congestion is calculable, and the price

determined on the basis of an algorithm that considers all possible routings and all

possible levels of network loads. However, given the diversity of the Internet and the

multiple levels of players, this sounds extremely far-fetched and difficult to achieve

without any neutral, oversight agency.

Manipulation of network load

Second, and more importantly, a pricing system based on network load opens itself

up to potential abuse by those who control the facilities at the system bottlenecks. It

may be argued that any system would be vulnerable to abuse, but the anonymity of

data transferred along the Internet would make this system especially vulnerable: for

example, unscrupulous firms in control of the various nodes would have both the

incentive and ability to manipulate the network load to keep it artificially high so as to

create an upward pressure on the price of network usage. Given that marginal costs

are almost zero, the firm would attempt to maximize revenue. It can do this by

tracking network usage and artificially keeping the network load at a point where

overall revenue realization is maximized.

The system is therefore open to abuse by bottleneck- controlling firms who peg the

network load at high levels in order to maximize revenue, thereby manipulating the

price of network usage upwards. For the system to operate fairly and efficiently, there

would either have to be no motivation for exploitation of market power, or a strict

system of controls against abuse.

Internet Pricing: A Case for Regulation

These two issues--the perceived homogeneity and the possibility of manipulation--

are the fundamental reasons why the Smart Market mechanism, or any variation of it,

needs to be combined with an institutional form that is responsible for (a) consensus-

building, and (b) ensuring against manipulation, anti-competitive behavior, and abuse

of market- power. Given the experience of the telecommunication industries, it should

be amply clear that there is an essential contradiction in free market operations. The

greater the degree of freedom, the greater becomes the role for regulation.

It is important to address the control of bottlenecks and their role in influencing the

pricing mechanism. Although an oversight agency could, hypothetically, ensure that

the consumer surplus generated is not collected as excess profits by the firms and is

returned to consumers (MacKie-Mason, 1994), it is more desirable to design a

system wherein the transfer of excess funds does not happen in the first place. While

it is true that competition is the best form of regulation, the privatization of the

Internet's facilities and the emergence of the NAPs indicate that the owners of the

underlying trunks and access paths (the Regional Bell Operating Companies, the

Inter Exchange Carriers, and the CAPs) are likely to have more market power than

any private organization has had over the Internet to date.

Whether one envisions Internet carriage emerging as a competitive industry or one

that is effectively oligopolistic, there seems to be a role for regulatory agencies.

There is a need to regulate pricing and control anti-competitive behavior in the event

that the industry is less than competitive. On the other hand, even if the system is

highly competitive, the dynamics of network pricing need to be implemented by some

form of nonprofit consortium or by a public agency to ensure consumer protection on

the one hand, and coordination and consensus among the different service providers

on the other. In the absence of such consensus building activities and an imperfect

market situation, dynamic pricing is likely to have a chaotic effect where the cost of

accounting and regulatory oversight is extremely high. This might have an

undesirable effect on the implementation of such a scheme in the first place.

An added factor that needs to be assessed is how technology is expected to develop

over time. Similar to pricing schemes, every technology also has its own bias. Since

technological development is likely to be unbalanced, and breakthroughs can be

expected to be sporadic both in terms of time and space, the pricing schemes that

are implemented need to be accordingly tailored to reflect or obviate the effects of

technological imbalances.

For example, transmission technology, which is dependent on fiber-optics, is slated

to develop much faster than switching technology, which is currently electronic

based. Should the expectation be that switching technology will develop quickly and

fiber-optic technology implemented, the fear of congestion at the nodes will no longer

be a valid one. The bottleneck will then change back to the transmission lines, not in

terms of the physical capacity of the fiber optic trunk lines, but in the costs associated

with overlaying all user lines, especially the last loop that connects the customers

premises to the nearest switch.

In all likelihood, the market is going to be transformed in an incremental manner.

Initially, some form of usage-based pricing, possibly dynamic pricing may be

combined with flat- rate pricing. For applications that require resource reservation,

usage-based pricing would be necessary to control their proliferation and to ensure

network performance. For more traditional forms of net usage, such as email, flat-

rate access would continue to be the norm. In other words, the pricing system that is

likely to evolve would move the industry towards multiple service levels. While it

would be difficult to predict the exact form of pricing that will emerge, it seems clear

that there will be a role for oversight agencies and regulators as the Internet evolves.

Congestion

The Costs of Congestion

The packet-switching technology of the TCP/IP protocol embedded in the Internet

has an essential vulnerability to congestion. A single user, overloading a sub-regional

line that connects to the regional level network, can overload several nodes and

trunks, and cause delays or even data loss due to cell or frame discarding for other

users. The specific manner in which the problem manifests itself depends on the

protocols used, and on whether the network is simply delaying or actually discarding

the information (Campbell, 1994). Since backbone services are currently allocated on

the basis of randomization and first-come-first-served principle, users now pay the

costs of congestion through delays and lost packets (Varian & MacKie-Mason, 1994).

The cost of congestion on the Internet is therefore a tangible problem, and not merely

the pessimistic outpourings of a band of dystopians. The Internet is not designed to

allow most users to fill their lines at the same time. Also, as new applications such as

desktop videoconferencing and new transport services such as virtual circuit

resource reservation come in, it will become more and more necessary for the

network to provide dedicated and guaranteed resources for these applications to

operate effectively (England, telecomreg, 7 May, 1994). In the Internet system, which

is essentially designed for connectionless network services, the requirement of

bandwidth reservation implies that an incompatible class of service needs to be

provided over it, thus necessitating costs in developing added functionality to its

edges, and in decreasing its overall efficiency.

There is a social cost imposed by those who are making unlimited use of the newer

bandwidth-hungry, incompatible applications. This cost is being borne by others in

the form of delays and data dropouts while making use of the more traditional

applications such as email, ftp, and gopher. The flat-rate pricing mechanism is

therefore inefficient in sending out corrective signals to minimize social costs and as

a resource allocator since it can hardly be argued that the social benefits of a

democratic discourse are less beneficial to society than an undergraduate sending

out real-time video to his friends.

There is a potential danger here. Continuance of the current pricing system may

result in a situation where the new applications drive out traditional uses. The

inherent bias of flat-rate pricing, whereby heavy users are subsidized by light users,

is a threat to the more traditional forms of net usage as applications requiring heavy

bandwidth are coming of age. It is however clear that a new form of pricing scheme

needs to be developed in order to ensure that the net retains part of its original

character as it evolves into a more potent and futuristic medium of communication.

At the far end of the spectrum is pure usage-based pricing. Given the shortfalls of the

flat-rate based scheme, it seems certain that there will eventually be prices for

Internet usage, and the only real uncertainty will be which pricing system is used.

The Causal Model of Internet Congestion

It is argued that three main factors--incompatibility of the newer applications with the

Internet's architecture, massification of the Internet, and privatization and

concomitant commercialization of the Internet--are responsible for an inherent

change in the Internet's dynamics, thus mandating a reexamination of the economic

system that surrounds the Internet.

Incompatibility issues

New network applications are all tending to require heavy bandwidth in near-real

time. Their high bandwidth- duration requirements are so fundamentally at odds with

the Internet architecture, that attempting to adapt the Internet service model to their

needs may be a sure way to doom the infrastructure. Their technical characteristics

and, consequently, their demand on the network are very different from the more

conventional, traditional electronic communication and data transfer applications for

which the Internet has been designed. While conventional electronic communication

is typically spread across a large number of users, each with small network resource

requirements, newer applications such as those with real-time video and audio

require data transfers involving a continuous bit stream for an extended period of

time, along with network guarantees regarding end-to-end reliability. Even though the

data-carrying capacity of the networks is constantly being enhanced through

upgrades in transmission capacity and switching technology, current developments in

communication software, especially those related to multimedia, are creating network

applications that can consume as much bandwidth as network providers can supply

(Bohn, Braun, Claffy, & Wolff, 1994).

In essence, the trend is towards applications that are, first, heavy bandwidth

consumers and second, require near real-time transmission--both characteristics that

are essentially incompatible with the inherent architecture of the Internet.

Privatization, Commercialization, and Massification

Simultaneously, we are witnessing a privatization of the Internet's facilities,

increasing commercialization of the net, and a political agenda promoting the rapid

deployment of the NII. All these are resulting in a massification of the Internet, as it

becomes easier to get "wired" in. The bottom line implication is that the demand for

bandwidth is possibly rising beyond current levels of supply.

All these players, the telephone, software, and cable companies are in a position to

strongly affect one critical aspect of market: accessibility. User-friendly software,

enhanced services, and marketing skills are together likely to have a dual effect: one,

allow computer literate users who have been to date outside the periphery of the net

the opportunity to connect, and two, drive the development of user-friendly tools of

navigation, which would have a multiplier effect on both network usage and the

number of people who would be able to navigate through the Internet effectively and

access desired information bases productively .

The bottom line implication is that the number of Internet users is going to increase

manifold, as opportunities to interconnect with the network become ubiquitous

through the efforts of the telephone, software, and cable companies, and as user-

friendliness and utility of the applications develop further.

Implications & Key Issues

The implication of these forces--the incompatibility of the new bandwidth hungry

applications, infusion of new users, and the privatized and commercialized nature of

the Internet- -is that the demand on network resources will increase exponentially,

and will possibly be much more than the supply of bandwidth. As network resources

become scarcer and as the system is driven towards a free-market model, resource

rationing through a change in the pricing system is inevitable.

The key issue is that the pricing mechanism should be able to (a) preserve the

inherent discursive nature of the net, (b) send the right signals to the marketplace,

and also (c) be flexible and adaptive to changes brought about through technology,

political initiatives, and software development.

Points of Congestion

The flow of data/traffic on the Internet can suffer from congestion at a number of

points. Possible points of congestion include the server's Internet connection, the link

that the data travels over, the regional network which delivers the traffic, the

connection to the Internet and finally the LAN which the user is located on. Any or all

of these links can suffer from congestion as a result of the traffic being carried

exceeding the capacity of the connection.

The Congestion Externality

As mentioned earlier, increased network size may eventually produce negative

effects which reduce the benefits of increased network size and in some cases the

marginal benefit of increased size can become negative. This essay will therefore

now examine the existence and impact of congestion of Internet data/traffic.

The congestion externality arises as "during congested times, one user imposes a

cost in the form of a degradation of service or delays on other users when the first

uses the network" An analogy used to explain Internet congestion is that of traffic

jams and delays on highways. When choosing the highway, road users only consider

their marginal private cost, not the marginal social cost of their actions. At rush hour,

the addition of each extra car imposes a cost, in terms of extra delay, on all other

road users. This means the marginal social cost due to delays and congestion,

exceeds the marginal private cost to the individual road user giving rise to a negative

congestion externality.

A similar situation can be argued to exist in relation to Internet data/traffic. Consider

Figure 3 below.

Figure 3 illustrates the congestion externality for Internet data/traffic. In Figure 3 there

is an Internet connection with a fixed capacity of Q# megabytes per hour. For all

quantities of traffic up to Q# , the marginal cost of additional traffic is constant at P0

and no delays are experienced. Indeed, it has been suggested that on an

uncongested network the marginal cost of additional traffic is close to zero. However

once the quantity of traffic demanded exceeds Q# , all traffic is delayed. Thus, as a

user generates additional traffic past Q# they experience delays. The Marginal

Private Cost MPC increases since "time spent by users waiting for a file transfer is a

social cost, and should be recognised as such in any economic accounting”.

Furthermore, since the additional traffic generated creates delays for all users, not

just the user generating the marginal traffic, the Marginal Social Cost MSC lies above

the MPC. In Figure 3, this is represented by both the MPC and MSC rising once the

capacity of the link Q# is exceeded.

The negative congestion externality arises because of this divergence between MPC

and MSC. Consider the demand curve D0 in Figure 3. Let us assume that the

consumer pays a price equal to the marginal private cost of their additional traffic.

With demand curve D0 the quantity demanded is Q0 and the consumer pays a price

of P0. Now, let us assume that there is a positive relationship between network size

and network traffic4. As the network size grows the quantity of data/traffic demanded

will increase and eventually exceed the capacity of the network.5 In Figure 3 this is

represented by an increase in demand from D0 to D1.

With demand curve D1 consumers now demand quantity Q1 megabytes per hour,

where the price P2 equals the MPC of that quantity of traffic. However at Q1 the

traffic of all users has been slowed and the MSC is equal to P1. The socially optimal

level of traffic occurs where price = marginal social cost. In Figure 3 this is at price P3

and quantity Q2. Therefore at quantity Q1 a negative congestion externality of P1-P2

exists.

It is important to remember however that the increase in demand from D0 to D1

could also be bought about by other factors besides the network effect. For example,

the increase in demand could be the result of changes in the price of substitutes and

compliments, changes in income or changes in consumer preferences. These factors

can produce variations in demand even though the network size may be constant.

The existence of the congestion externality has been demonstrated. Once the

capacity of the network is exceeded, all users experience delays, not just the user

creating the additional traffic. A divergence between MPC and MSC thus occurs,

resulting in a negative congestion externality.

The Congestion Externality, Demand and the Marginal Social Benefit

The Application Data Unit (ADU)

In estimating the size of the congestion externality it is necessary to determine the

best measure of the impact of congestion on Internet users. One method used to

measure the impact of delay is to measure the time delay for sending a 'packet' of

data to a destination. This was the method used by MacKie-Mason and Varian

(1994), who measured the round trip time delay of single packets to several major

cities. This approach however has been criticised on the grounds that because of the

operation of TCP/IP, single packet delay does not accurately capture the impact of

congestion on Internet users.

Since a communications network is as good, or as bad, as its users perceive it, a

measure which takes in to account the users evaluation of network performance is

thus necessary. This supports the view that "the criterion a user has for evaluating

network performance is the total elapsed time to transfer the typical data unit of the

current application, rather than the delay for each packet’. An ADU is simply the

typical size of a data element for an application. Thus the ADU for someone using a

web browser is the size of a typical web page whilst for someone using email it is the

size of a typical email message.

This means that the impact of delay will be related to the size of the ADU for the

application. Applications with larger ADU's will be more adversely affected by any

reduction in data throughput. Furthermore, some applications such as real-time video

and audio applications experience significant degradations in quality as congestion

increases. Users' assessment of congestion will therefore be related to the type of

application they are using as well as the ADU transfer time. Table 1 provides a

suggested ranking of the relationship between application type and the impact of

congestion.

Table 1: Application Type and Impact of Congestion

____________________________________________________

Likely Impact of Congestion

Application

____________________________________________________

Very Severe Real time audio and video

Severe Web Browsers - Graphics On

Large FTP transfers

Moderate Web Browsers - Graphics Off

Slight IRC

Telnet

Minor Email, Usenet

____________________________________________________

Now, consider Figure 4 below.

In order to simplify our analysis let us assume that a users evaluation of the cost of

delay is related to the type of application they are using. In Figure 4, three different

MPC curves have been drawn representing the cost of delay for different application

types. The impact of the congestion at Q0 will be different depending on the

application being used. For example MPC0 could represent email, MPC1 no-

graphics web browsing and MPC2 web browsing with graphics on. These have

corresponding marginal costs at Q0 of P1, P2 and P3. This illustrates the point that

an application with a large ADU or poor tolerance of delay will be more adversely

effected by congestion.

What then determines the MSC of congestion? The impact of congestion on Internet

users as a whole will depend on the mix of applications in use at the time the

congestion occurs. For example, if there is a large number of individuals utilising web

browsers with graphics switched on, then the impact of congestion, and hence the

MSC, will be larger than if the dominant use of the Internet at that time is for email.

The size of the MSC thus depends on the application mix at the time the congestion

occurs.

A similar diagram to Figure 4 can thus be constructed to show the difference in MSC

for different application mixes. Figure 5 shows the MSC for two different application

mixes. Firstly, MSC1 illustrates the marginal social cost where the application mix is

predominantly applications with small ADU's. MSC2 illustrates the marginal social

cost where the application mix is predominantly applications with large ADU's. It can

be seen from this that as with the MPC in Figure 4, for any given congestion level

such as Q0, the MSC is higher where the application mix is dominated by

applications with larger ADU's and poor tolerance of delay.

Since the congestion externality is equal to the gap between the MPC and MSC, the

size of the congestion externality will thus depend on the application mix of Internet

users when the congestion occurs. Where the application mix is dominated by larger

ADU applications, the congestion externality is greater.

Capacity Expansion as a Response to Congestion

Introduction

A model incorporating network size, congestion, and demand for Internet usage, has

been shown in the last section. This model will now be utilised to examine alternate

strategies for reducing the congestion externality as it applies to Internet traffic. The

first alternative to be evaluated involves evaluating the impact of expanding network

capacity as a means of alleviating congestion.

Figure 5 presents the typical situation of a network connection of fixed capacity Q#

with an initial demand curve D0. Again, assuming that price is set equal to MPC, a

price of P2 results in quantity Q0 of traffic being demanded. A congestion externality

of P1-P2 exists at this quantity of traffic.

Figure 5: Capacity Expansion

Now let us consider the impact of a decision to reduce the congestion externality by

increasing the capacity of the network. In Figure 5, this is represented by extending

the horizontal portion of the marginal cost curves from Q# to Q##. The MSC and

MPC curves are thus shifted outwards to the right by the amount of the capacity

expansion. The new marginal cost curves MSC1 and MPC1 are drawn parallel to the

original curves. With the new curves MSC1 and MPC1 , a higher quantity of traffic Q0

is demanded at the lower price P3. The congestion externality at this quantity of

traffic is now a lower amount, P2-P3.

However it has been shown previously that because lowering congestion raises the

marginal benefit of usage at that network size, that lowering congestion would lead to

an increase in demand for usage. Another explanation which supports this

hypothesis is based on the argument that time is complimentary to Internet usage

and that congestion reduces the quality of service of the Internet. Thus a reduction in

congestion will increase demand because it reduces the time taken to complete a

given task and is seen by most Internet users as a quality improvement.

By reducing the congestion at the network size associated with demand curve D0,

this will result in an increase in the marginal benefit of network size at that particular

network size. As a consequence of this, assuming no other changes, demand will

increase until a price/quantity equilibrium is reached which restores the original

congestion externality. The new demand curve is given by D1 with a new

price/quantity equilibrium at price P2 and quantity Q1. At this combination the

congestion externality is again equal to P1-P2. Thus, whilst a temporary reduction in

the congestion externality was achieved by expansion of the physical capacity of the

network, the original congestion externality will be restored ceteris paribus.

This situation, where a lowering of congestion leads to an increase of demand and a

return to the original problem of congestion, is similar to the situation with respect to

highways and motor vehicle traffic. Adding more roads to alleviate traffic congestion

may not provide relief from congestion as suppressed demand soon soaks up the

expanded capacity.

However let us now consider the impact of the policy if the reduction in congestion

leads to more companies and individuals deciding to connect to the Internet,

increasing the network size. In Figure 5, the capacity expansion initially reduced the

congestion externality however the externality was eventually restored at the original

size. Now, if the reduction in congestion produces an increase in network size, what

will be the impact on the congestion externality?

Fig 6:Capacity expansion with increased network size

In Figure 6, demand curve D1 represents the demand curve for the original network

size with expanded capacity. (i.e. The final position of the demand curve after the

expansion and subsequent increase in demand.) However if the reduction in

congestion increases the network size, then so long as the marginal private benefit of

increased network size is positive, the demand curve for the larger network size will

sit to the right of the original network size. In Figure 6 this is represented by the

demand curve D2. The higher level of demand D2 produces a new equilibrium at a

higher quantity of traffic Q2 and higher price P4. However what is more significant is

that P5-P4 > P1-P2 . That is, the congestion externality is larger than under the

original network size. Thus capacity expansion, where it leads to increased network

size, may increase the size of the congestion externality.

In a competitive environment with excess capacity, there is a tension between the

large sunk costs of physical networks and very low incremental costs of usage. On

the one hand, the need to recover sunk costs suggests using price structures with

high up-front charges and low (or zero) usage rates. On the other hand, with

significant excess capacity present, short-run profits can be increased by selling at

any price above incremental cost. Economic theory would suggest that the pricing

outcome in this situation might be unstable, unless regulatory forces or other

influences inhibiting competition were present.

Expansion of capacity is closely related to the concept of overprovisioning.

Overprovisioning means maintaining sufficient network capacity to support the peak

demands without noticeable service degradation Overprovisioning is not without its

problems. Firstly, it results in excess capacity during non-peak times. The extent of

excess capacity required depends on the nature of the traffic. For traffic, which flows

at a constant rate or can tolerate delay, an average utilisation rate of 90% may

provide enough excess capacity to accommodate peak periods. For traffic or tasks

which are 'bursty' or are intolerant of delay, the average utilisation may need to be

kept below 10%. This means that during off-peak times there can be large amounts

of capacity not being utilised.

Keeping ahead of the growth in Internet traffic is also a problem for strategies based

around expansion of capacity. Although the cost of expanding capacity is falling by

30% per year , increased use of multimedia applications and the rapid growth of the

Internet means this capacity is quickly soaked up . Indeed, "experience suggests that

application developers will have little difficulty in designing new services that use up

all available resources".

Expansion of capacity, by itself, thus does not appear to be a sustainable strategy for

dealing with congestion. Expansion of capacity can produce an increase in demand

for usage which results in a restoration of the original congestion. If the capacity

expansion leads to an increase in network size, capacity expansion may,

paradoxically, increase the congestion externality. Attempting to reduce congestion

through overprovisioning is also hampered by the nature and rate of growth of

Internet traffic. Alternate strategies for alleviating Internet congestion must therefore

be evaluated.

Case Studies

The case studies presented include:

(1) AOL Pricing History, in which the focus is on the aspect of ISPs pricing users.

(2) The New Zealand and Chilean Internet Experience, where the issues of pricing at

both ISP and infrastructure level are discussed.

AOL Pricing History

America Online (AOL) is a proprietary network that provides online services to

consumers, including electronic mail, conferencing, news, sports, weather, stock

quotes, software, computing support online classes, Internet access and a broad

array of informative content. It develops and markets interactive services for

businesses including the design, development and operation of wide area networks.

AOL is at the same time a service, a network access provider and a part of the

infrastructure. AOL possesses its own proprietary infrastructure, called AOLnet,

which carries most of AOL's network traffic.

Since its creation, AOL emphasized the service part of its business. It developed its

own content and network access technology, and user connectivity was limited within

the AOL network - there was no interconnection with other networks. As the Internet

became more popular, AOL users began to demand Internet connectivity. AOL was

forced to interconnect with the Internet gradually, first by providing Internet email, and

later on World Wide Web browsing. But this interconnection to the Internet

threatened AOL's service technology and its role as a service provider. AOL was

forced to shift its focus from a

service provider to a network access provider. As competition from ISPs increased,

AOL became a major ISP itself. The Internet network externalities and its path effects

made AOL network access and content technologies obsolete, as users and content

providers favored TCP/IP, HTML and other established Internet technologies other

than the AOL proprietary ones.

Until December 1994

From its inception, AOL used a two-part tariff scheme, with a monthly access charge

of $9.95 and a usage charge of zero for up to five hours, and $3.50 per hour

thereafter. AOL was a self-contained network, and users had a high willingness to

pay for the unique services it offered.

At that time AOL had no interconnection with the Internet, which was still unknown to

most users, nor with other online services. This lack of interconnection and limited

competition allowed the other online services, such as CompuServe and Prodigy, to

use similar two-part tariff schemes.

AOL had its own proprietary network access and content technologies. Similarly,

other online services had developed their own proprietary technologies. Because the

content technologies of each online service were different, independent content

providers usually were forced to provide their content through only one of these

online services, thus limiting their audience to the users of that service.

January 1995 to July 1996

The $9.95 access charge was retained, but the usage charge after five hours

dropped to $2.95. At that time, the Internet was becoming more popular and the

number of ISPs was increasing rapidly. However, the $19.95 flat rate for Internet

access had not become universal yet. The increased competition from ISPs, the

popularization of the Internet, and the imminent introduction of the Microsoft Network

gave users many more options. The increased demand naturally brought down

customer's willingness to pay, and forced AOL to drop its hourly rate and to

interconnect with the Internet. The other online services were subject to

the same pressures and also interconnected with the Internet, and therefore with

AOL.

Network externalities became more important during this period. The popularization

of the Internet and the proprietary online services meant that more people were

connected to a network. Since people were subscribed to different networks, they

demanded interconnection to the Internet, and this forced online services to provide

Internet e-mail, and later on WWW browsing.

Companies realized that they could reach a large number of people in an economic

manner by developing WWW sites. Using the Internet technology was simpler and

more economical than contracting and using the proprietary technology of an online

service like AOL. As more WWW sites came into being, the network externalities

became more powerful, and the number of WWW sites exploded. Both network

externalities and economics were making AOL proprietary technologies obsolete.

July 1996 to December 1996

In order to retain customers while still extracting as much consumer surplus as

possible, AOL introduced second-degree price discrimination. The existing plan was

retained as the "standard" plan, with a monthly access charge of $9.95 and a usage

charge of $2.95 after five hours of use. Additionally, a new "Value" plan was

introduced, with a monthly access charge of $19.95 and a usage of charge of zero

for up to 20 hours, and $2.95 per hour thereafter.

With the two pricing plans, AOL tried to target two distinct groups of customers:

Those with a low willingness to pay, who were happy to have limited access to the

network.

Those who wanted Internet access, and who would be willing to switch to a local ISP

for a lower rate. At this time the flat $19.95 monthly rate for Internet access had

become almost universal. Apparently AOL market analyses estimated that most

Internet users staid online for an average of 20 hours a month or less. It was

becoming clear that the Internet was guiding AOL's strategy. Even though AOL was

clearly converging towards the Internet, AOL was still trying to extract maximum

customer surplus by using price discrimination and multi-part tariff pricing.

In October 1996, AOL introduced an Internet service, Global Network Navigator

(GNN) targeting users desiring a full-featured Internet-based service. It charged a

monthly access fee of $14.95, with a usage fee of zero for up to 20 hours of use, and

$1.95 per hour thereafter.

Since December 1996

AOL made further refinements in its second-degree price discrimination structure. It

included the following options:

� A flat monthly rate of $14.95 with a minimum two-year contract. This attracted

price-conscious users and prevented them from switching to other providers.

� A flat monthly rate of $19.95 without minimum contract. This was the same

universal rate that all ISPs used. Since this also provided unlimited access to

AOL's own content, AOL was not extracting any consumer surplus from the

additional service it provided.

� A "standard" plan with a monthly access charge of $9.95 and an hourly usage

charge of $2.95 after five hours of use. This targeted the shrinking segment of

users who made little use of online services.

� A "light" plan with a monthly access charge of $4.95 and an hourly usage charge

of $2.95 after three hours of use. This was the same pricing plan introduced by

the Microsoft Network (MSN), and could be a convenient way to have new

customers experience the service at a very low cost.

� A "bring your own access" flat rate of $9.95 for users accessing the AOL content

through their own ISP. It is interesting to compare this with the $19.95 flat rate.

Apparently AOL valued its content at $9.95 a month, but was giving it free to

users who used AOL as their own ISP.

GNN was now absorbed into AOL. The adoption of the $19.95 flat fee is important

because it signaled the absorption of AOL into the Internet. AOL had been

transformed from a service company, whose main product was its content and in

which the network was just a necessary means to access the service, to a network

company, whose main product was network access. The multitude of pricing

schemes may

indicate signs of desperation and loss of focus, since apparently AOL was trying to

match all existing pricing schemes from competitive networks (ISP's, MSN), and had

made AOL disregard what had been one of its main core competencies: its content.

A flat rate pricing scheme does not extract as much consumer surplus as a multiple-

part tariff scheme does. In fact, a flat rate pricing scheme may barely cover the

services' huge fixed costs. Therefore AOL's new emphasis is on expanding its

customer base and on developing alternative sources of income. Given the

knowledge that an ISP like AOL has about its customers (e.g. address, online

navigation habits), advertising and sales are obvious choices for alternative sources

of income.

However, the aggressive acquisition methods that AOL has used have had major

economic consequences - acquisition costs are from $50 to $300 per new user

(depending on the sources), and churn rates are very high. Acquisition costs are

deferred over several months, so the actual profitability of the company may not be

what is indicated by its financial statements.

The flat rate pricing scheme, together with the aggressive acquisition campaign,

attracted a huge number of customers, who remained connected for extended

periods of time. As a result, AOL's infrastructure became congested - users had a

very hard time accessing the system, and when they were successful, the system

was painfully slow. AOL miscalculated the impact of the introduction of a flat rate,

and as a result it alienated thousands of customers and faced many lawsuits. Since

one of the main features that differentiated AOL from other ISPs was the ease of

installation and connection, this lack of sufficient infrastructure put AOL in a very

dangerous position. AOL reacted by investing millions of dollars in additional

infrastructure.

Lessons learned

America Online (and other online services) initially positioned itself as a service

provider, and limited access to its services to users of its proprietary network. It did

not license its content technologies, so they remained proprietary and incompatible

with those of the competition. When an alternative technology (WWW) emerged in

the public domain, people had a big economic incentive to use the open technology.

As happens many times, when the company took notice of the new technology, there

was already a critical mass of people who had adopted the new technology. So AOL

had to abandon its proprietary technology in favour of the open one. A flat rate

scheme encourages network congestion, because users are not conscious of the

resources that they are consuming and the cost of those resources. As a result, the

quality of the service provided by the network is degraded. Investing more in

infrastructure may alleviate the problem somehow, but only temporarily. Furthermore,

eventually companies may stop further investments in infrastructure that the flat rate

will not be able to recover.

Multiple-part tariff schemes such as the access+usage scheme used originally by

AOL and other online services are easy to implement under monopolistic conditions.

However, under intense competition, services seem to gravitate toward flat-rate

schemes. Part of this phenomenon may be due to the characteristics of the TCP/IP

protocols, which were designed when the Internet was a subsidized, not-for-profit

network. New protocols that allow the implementation of different types of services,

such as those based on quality or congestion may allow services to implement

differential pricing strategies. Meanwhile, services may be forced to subsidize their

flat-rate pricing plans through other means of revenue, such as selling marketing

information or advertising.

New Zealand and Chilean Internet Experience

Background

New Zealand: The development of the New Zealand network (NZGate) began in

1990 when six New Zealand universities and NASA established a 9600 bps analog

cable link from New Zealand to Hawaii. In April 1991, the network expanded to link all

of the seven New Zealand universities to form the Kawaihiko network. Later, the Tuia

network was established. It linked Kawaihiko to two pre-existing government

managed networks - the Department of Scientific and Industrial Research (DSIR) and

Ministry of Agriculture and Fisheries (MAF) - on an informal basis.

In July 1992, the Tuia Society was created, which consisted of three major

management groups, i.e. Kawaihiko representing the universities; Industrial

Research Limited (IRL) which was the old DSIR; and AgResearch which was the old

MAF. Two smaller groups, the National Library and Ministry of Research, Science

and Technology (MoRST), also joined the Tuia Society. At that time, a Frame Relay

backbone was also set up to provide connectivity between the groups. The Frame

Relay backbone was provided by a private organization, Netway Communications,

which was a subsidiary of Telecom New Zealand.

Figure 5 and Figure 6 summarize the interconnections and the configuration of the

management groups and sites within the Tuia Society and Kawaihiko up to 1992,

respectively.

Chile: In 1991, a large government-funded project was proposed in Chile to create a

national TCP/IP backbone that would link all national universities and provide a

single international link to the Internet. The project was entitled REUNA. Government

support, however, would cover only costs for the initial set-up of the Internet.

Therefore, continued operation and development costs would have to be shared

among the member institutions. Unfortunately, as a result of disagreements between

members regarding the distribution of costs and the control of the network, a few

universities left REUNA to create their own national network, named Unired. Both

organizations quickly created their independent national networks and by 1992, two

international links were established separately linking REUNA and Unired to the

Internet. It is important to note that communication between members on different

infrastructures within Chile (i.e. REUNA and Unired) was difficult. The traffic had to

travel through the international link since there is virtually no connection between

REUNA and Unired.

Pricing schemes

New Zealand: The general principles followed by the New Zealand institutions for the

establishment, maintenance, and development of their network were:

(1) initially share the traffic costs and if possible, have each site pay for their own

access costs and

(2) once a proper accounting system was established, "pay for what you use" (both

access and traffic costs).

For the initial establishment of New Zealand's connection to the U.S. in 1990, NASA

provided the majority of the support for the costs of the U.S. end of the link, but no

subsidy was provided by the New Zealand government for the New Zealand end of

the link. As a result, all the costs had to be recovered by charging the users. An

agreement was made between the six universities that each site would pay for 1/6 of

the start-up and ongoing costs to get the project established. A similar pricing

scheme was used to establish the Kawaihiko network in 1991, where costs were

divided in fixed proportions with Lincoln University paying for 1/13 and each of the

other six sites paying 2/13 of the costs. (There are seven universities in the

Kawaihiko network.)

In April 1992, when the entire Tuia network went under re-engineering, sites within

the Kawaihiko were provided with the opportunity to pay for their own access costs.

Netway Communications (an infrastructure provider), which provided the Frame

Relay, charged a monthly fee for both the access and traffic costs. Sites within the

Kawaihiko management groups could select their own access rates (i.e. speed) at

different prices. Since some sites had more costly access fees than others, they

agreed that each site would pay its own access charges. Moreover, access costs for

sites providing common access for other sites were divided using a set of

percentages agreed locally at each site. Traffic costs were still shared among

participants

as they were initially, since an accounting system was not yet implemented to

monitor traffic volumes between sites.

The earlier success of usage-based pricing for international Internet traffic helped encourage the sites to share the start-up costs initially. They knew that once an accounting system was established, users would eventually only have to "pay for what they used".

Usage-based pricing was first implemented for international traffic, just after the NZGate connection was made in 1990. They adopted a volume-charge pricing scheme with the following characteristics (a sketch of the rate calculation follows the list):

• Measure traffic in both directions through NZGate for each participating site and charge for it by volume, i.e. by the number of megabytes moved in and out each month

• Charge enough to cover actual costs, plus a percentage for development

• Use the resulting development funds to buy more capacity as demand grows
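A minimal sketch of that rate-setting logic follows; the cost, margin, and traffic figures are assumed for illustration, since the source does not give the actual NZGate tariffs.

    # Illustrative NZGate-style volume charging: set the per-megabyte rate so
    # that actual costs are recovered plus a percentage reserved for development.

    monthly_costs_nzd = 40_000   # assumed operating cost of the international link
    development_margin = 0.20    # assumed surcharge that funds future capacity
    total_mb_moved = 250_000     # metered traffic, in and out, across all sites

    rate_per_mb = monthly_costs_nzd * (1 + development_margin) / total_mb_moved

    def site_charge(mb_in: float, mb_out: float) -> float:
        """Charge a site for megabytes moved in and out during the month."""
        return (mb_in + mb_out) * rate_per_mb

    print(round(rate_per_mb, 3))                # 0.192 NZ$/MB under these assumptions
    print(round(site_charge(8_000, 2_000), 2))  # 1920.0 for a site moving 10,000 MB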

"Committed traffic volume" pricing methodology

The notion of "committed traffic volume" gave users predictability about how much they would be charged per month. The pricing method was as follows: each site made an initial choice of its committed volume, and thus of its monthly charge. If a site's traffic fell into a different "charging step" for more than one month, that site's committed volume was updated to reflect the actual traffic. For the unusual month itself, however, the site still paid for its previous committed volume, whether or not its actual usage had changed. This gave a site at least a month's warning of a change in its monthly fee. Committed volumes were updated automatically by the NZGate management, which simplified the administrative work.
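The sketch below implements one plausible reading of this rule, with hypothetical charging steps (the source gives neither the step boundaries nor the fees): each month is billed at the committed step, and the commitment only follows actual traffic with a lag, so every fee change arrives with at least a month's notice.

    # Simplified "committed traffic volume" billing with hypothetical steps:
    # (upper bound in megabytes, monthly fee in NZ$).
    CHARGING_STEPS = [(100, 200), (500, 800), (2_000, 2_500), (10_000, 9_000)]

    def step_for(volume_mb: float) -> int:
        """Index of the charging step that a month's traffic falls into."""
        for i, (upper, _) in enumerate(CHARGING_STEPS):
            if volume_mb <= upper:
                return i
        return len(CHARGING_STEPS) - 1  # the top step catches anything larger

    def run_billing(initial_step: int, monthly_traffic_mb: list[float]) -> list[int]:
        """Bill each month at the committed step, then update the commitment."""
        committed = initial_step
        fees = []
        for actual_mb in monthly_traffic_mb:
            fees.append(CHARGING_STEPS[committed][1])  # pay the committed fee now
            committed = step_for(actual_mb)            # commitment moves next month
        return fees

    # A site committed to step 1 (<= 500 MB) whose traffic jumps to 1,200 MB:
    print(run_billing(1, [450, 1_200, 1_300]))  # [800, 800, 2500]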

Because of the success of this volume-based pricing, sites within the Kawaihiko group in particular were willing to divide the costs of the initial establishment of the network, in the expectation that a fair pricing scheme would later be implemented.

In summary, the key factors that brought about the success of usage-based pricing in New Zealand were:

• A unified group of major organizations that agreed on and encouraged the implementation of a usage-based pricing scheme

• A single, dominant infrastructure provider

• A cost-effective accounting system

• "Fair" and attractive pricing methodologies

The common pricing philosophy and mutual trust between and within the management groups were essential for both the initial establishment and the eventual adoption of the usage-based system. The availability of a cost-effective accounting system, as well as a simple and "predictable" pricing system, further encouraged the implementation of a cost-effective "pay-what-you-use" system. Moreover, the existence of a single, dominant infrastructure provider significantly simplified and reduced the accounting costs that otherwise would most likely have made usage-based pricing cost-ineffective.

Chile: After the establishment of both the REUNA and Unired networks in 1992, both organizations faced the problem of finding a proper pricing scheme to cover both maintenance and development costs. It proved quite difficult for the groups to come up with a solution; in fact, this difficulty led REUNA to adopt a very unreasonable one. The heads of REUNA's member institutions decided that all the network costs were to be split in proportion to the budgets of the institutions, with the exception that international traffic would be charged at a per-megabyte rate. This of course brought about serious disapproval, and eventually forced REUNA to implement a flat rate with unlimited access for national traffic, while retaining usage-based pricing for international traffic. Unired, on the other hand, implemented flat-rate pricing for both national and international usage for its academic and non-profit customers. To recover some costs, commercial customers were charged heavily for international traffic, but were still offered the option of flat fees for national traffic.

In contrast to the New Zealand experience, the networks in Chile found it difficult to implement usage-based pricing. The political competition and the unreasonable pricing solutions of the past left both REUNA and Unired with no realistic alternative but to charge flat fees with unlimited access. Any pricing other than flat-rate was discouraged, for fear that an "unfair" and expensive usage-based scheme would be implemented. It has been argued that it would be difficult for REUNA even to implement a volume-based charging system for international traffic, especially since its competitor, Unired, had implemented a flat-rate system for its non-profit customers.

If, however, by reducing costs to users, either REUNA or Unired could gain complete market share, then it could implement a usage-based pricing scheme more easily. Alternatively, within a competitive market, usage-based pricing might be encouraged if congestion became so heavy that people wanted improved quality of service for real-time applications such as video conferencing.

Pros and Cons of pricing methods in New Zealand and Chile

The benefits New Zealand customers experienced were virtually no congestion problems and having to pay only for their own traffic and access fees, not others'. Even now, New Zealand neither has nor foresees any congestion problem, primarily because users are conscientious about their use of the Internet. Moreover, since the network can monitor traffic, areas with heavy traffic can be readily identified and problems, ideally, quickly resolved. In addition, usage-based pricing may be more attractive to customers who do not use the Internet often, especially if their costs come to less than the flat rate; hence, usage-based pricing could encourage universal access. However, since Netway held a virtual monopoly over the New Zealand Internet infrastructure, the costs may not have been very cheap at all, and universal access may not have been encouraged.
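A small illustration of that break-even argument, with assumed prices (neither figure comes from the source): below the crossover volume, a light user pays less under per-megabyte pricing than under a flat rate.

    # Crossover volume at which usage-based pricing stops being the cheaper option.
    flat_rate = 20.0   # assumed flat monthly fee
    per_mb = 0.05      # assumed usage-based price per megabyte

    break_even_mb = flat_rate / per_mb
    print(break_even_mb)  # 400.0 MB per month

    def cheaper_scheme(monthly_mb: float) -> str:
        """Which scheme costs this user less for a given monthly volume?"""
        return "usage-based" if monthly_mb * per_mb < flat_rate else "flat-rate"

    print(cheaper_scheme(150))  # usage-based -- the light user benefits
    print(cheaper_scheme(900))  # flat-rate   -- the heavy user benefits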

Another issue regarding Netway's virtual monopoly is that New Zealand may suffer from less infrastructure development. A comprehensive 1995 study by the Organization for Economic Co-operation and Development (OECD) concluded that countries with less competition generally had higher fees for consumers and less infrastructure and system development. However, slow growth in development may or may not be a problem for New Zealand, since major institutions such as Kawaihiko have aggressively pushed for improved infrastructure. But it is generally true that upgrading the infrastructure and quality of service under a monopoly will cost more than it would in a competitive market; in this regard, development would be discouraged if costs were too high.

Also fortunate for Netway is that, in addition to New Zealand's philosophy of "pay for what you use", there is a consensus that members should "pay for what you want". As a result, Netway does not have to carry the full burden of investing in new infrastructure. If an organization wants a special service, it must commit to a monthly access fee. For example, Waikato and Victoria wanted a special 128 kbps connection between them; for this extra service, both groups were charged monthly for their access to that line. Hence, the infrastructure provider and its customers shared the burden of development costs.

In contrast, a market which is "too competitive", as in the Chilean experience, can be counterproductive. The extreme positions resulting from past disagreements left REUNA and Unired with unconnected local infrastructures, so that communication between sites on different infrastructures had to travel through the US, a complete waste of resources. Because of the unlimited-access option, the infrastructure also suffered from heavy congestion. Indeed, competition forced the flat rate so low that it could only barely recover costs. This resulted in less development and a decline in quality of service, since funding for both the infrastructure and the existing services was barely sufficient. Eventually, to recover costs, flat fees have to be increased; but this discourages universal access, since the service may become too costly for people who do not use it often.

The table below summarizes the pros and cons that must be considered when implementing a usage-based pricing system, as they played out in the New Zealand and Chilean networks.

                                   New Zealand                       Chile

Accounting system                  Simple and cheap                  Expected to be expensive

Possibility of universal access    Maybe; not very cheap,            Maybe; flat rates too
                                   given the monopoly                expensive

Ease of development                Possible                          Difficult

Traffic congestion                 None                              Heavy

General overview                   Usage-based pricing;              Cheap for the customer, but
                                   congestion minimized,             difficult to support the
                                   but a monopoly                    infrastructure; REUNA and
                                                                     Unired disjointed

Conclusion

In conclusion, implementing a usage-based pricing methodology is not difficult in a monopolistic, cooperative environment that desires usage-based pricing. In a competitive environment where disjointed services and infrastructures exist, a usage-based pricing system could be implemented by:

• Obtaining a monopoly (i.e. reducing costs to users in order to gain complete market share), OR

• Consolidating the disjointed organizations so that they agree to implement a usage-based pricing scheme, OR

• Convincing the users to demand it, i.e. that they should not pay for others' access and traffic costs (though an inexpensive accounting system must be available, or it may cost users the same or even more to use the Internet) and that they should enjoy better quality of service, OR

• Having the government enforce it.

Interaction with Other Groups

Network and Path Dependent Effects

The most obvious issue involving network effects in network pricing is the popularity of the low flat-rate pricing offered by several network access providers. As studied in the AOL case, a network access provider would need an average of six months to recover its network access costs by charging a flat rate of $19.95 a month. The motivation for flat-rate pricing, especially for AOL, is to capture market share in order to take advantage of the network effect.
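As a back-of-the-envelope check of that six-month figure (the per-subscriber cost below is inferred from the claim, not stated in the source):

    # Implied per-subscriber cost if $19.95/month pays back in about six months.
    flat_rate = 19.95
    payback_months = 6
    print(round(flat_rate * payback_months, 2))  # 119.7 -> roughly $120 per subscriber

    def payback(cost_per_subscriber: float, monthly_fee: float = flat_rate) -> float:
        """Months needed to recover a given per-subscriber cost at a flat fee."""
        return cost_per_subscriber / monthly_fee

    print(round(payback(120.0), 1))  # 6.0 months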

As the technology evolves, there will be a need to adopt new pricing schemes in order to better control network usage. Whenever there is a switch from an old system to a new one, a switching cost is involved. In the United States, however, flat-rate pricing has prevailed, and as it becomes more and more widespread it becomes relatively harder to change the popular pricing paradigm and, say, charge based on usage and congestion.

There is also a "lock-in" effect in the development of infrastructure. Currently, network access is mainly via dial-in. While the equipment is geared towards this mode of access and users are accustomed to paying a normal phone line's rate to reach the medium, the switching cost of going from dial-in to broadband home access could be huge. In fact, this is the main reason why infrastructure providers currently lack the incentive to invest in broadband home access and wireless infrastructure.

Human Factors

While pricing schemes should help control network flow, they should not cause too much inconvenience to users. Clearly, any pricing scheme other than flat-rate pricing brings some form of "extra" inconvenience. For instance, under usage-based pricing, users might find it inconvenient to keep track of the volume of their traffic; under congestion pricing, they might find it inconvenient to distinguish congested hours from non-congested hours and to learn the prices charged at each point. Unless these procedures can be made transparent to users, it is arguable that many users will be reluctant to adopt a new pricing scheme, even if they would pay a little less than under flat-rate pricing.

The human factor is also involved in getting used to new pricing schemes and settlement models once they are implemented. Productivity is expected to decrease and costs to increase because of this factor, which may slow the pace at which new pricing schemes and settlement models are adopted.

Collaborative Design

Collaborative technologies have brought exciting new network applications such as whiteboards and video conferencing. Along these lines, it is possible that the development of collaborative technologies could eventually aid the successful implementation of different pricing schemes and settlement models. For example, success in developing networks with symmetric links (today's links are mostly asymmetric, i.e. high-speed downstream but low-speed upstream) could make the accounting of packets much easier and more cost-effective.

Legal and Regulatory

The most controversial issue here is the FCC's exemption of Enhanced Service Providers (ESPs). Currently, Internet service providers do not compensate the telephone companies for their customers' dial-in access to the network. This has drawn considerable complaint from the telephone companies because of congestion at the "local loop": a call to a network access provider is on average much longer than a normal phone call, so the phone companies have to build larger switches and increase network capacity to keep normal traffic flowing. Worse still, if the phone companies do not receive settlements from the ISPs, the ISPs can continue offering low prices, and users will have no incentive to scale back the length of their access calls. Another negative result is that the phone companies have neither the money nor the incentive to further the development of broadband access to the home.

In addition, the government can play an important role in the implementation of new pricing schemes and settlement models. First, it can subsidize the network access providers and the infrastructure providers to defray costs at the initial stage of implementation. Second, it can enforce interoperability of networks, which in turn will affect the settlement models. Third, the government could in future mandate the development of content-aware or application-aware network architectures (to deal with indecency, for example). Were this last mandate in place, pricing schemes would be affected, because a content-aware or application-aware network can easily account for the type of packets being sent. Finally, it is possible that Internet access will become as "necessary" as telephone service; if the government were to impose legislation requiring "universal service" for Internet access, pricing structures would most certainly be affected.

Industrial Organization

While people are discussing which pricing scheme and settlement model should be adopted, an interesting question arises: should a new industrial organization be formed to facilitate the adoption of these new pricing schemes and settlement models?

There are several possibilities. First, an "external" organization could be formed to deal explicitly with all network settlements among network access providers and infrastructure providers. Second, some form of alliance between network access providers and infrastructure providers could be formed. For example, an alliance of infrastructure providers could standardize the next generation of infrastructure to facilitate new pricing schemes, or alliances of infrastructure providers and network access providers could be formed to ensure the smoothest possible transition.

Inter-Organizational Design

The issues related to inter-organizational design are very similar to those related to industrial organization. The difference lies in how we identify the organizations and in how companies will form them. The related issue here is: when the network access providers and infrastructure providers see a need to collaborate so that settlement issues can be resolved more easily and properly, how will they come together and form an organization or alliance: a standards body, a consortium, a joint venture, or a technology web?

Standards

An infrastructure standard fixing the degree of content awareness, should one be adopted either by legislation or de facto, would affect the feasibility of implementing different pricing schemes. And surely, if different infrastructure standards arise, it will be difficult to unify pricing schemes and settlement models.

Similarly, it is feasible that standards for pricing users and services could evolve in the future; whether and when they evolve will depend on government decisions and the economics of the network. For instance, since flat-rate pricing makes it difficult for network access providers to recover their costs, it is possible that usage-based or congestion pricing will become the standard through the driving force of economics.

Finally, one last interesting issue to address here is that the de facto standard of TCP/IP (IPv4) actually hinders the development of some pricing schemes, such as priority pricing, because it has no mechanism for differentiating packet priorities. With ATM and IPv6 under development, it remains an open question whether TCP/IP (IPv4) is the best protocol for network transport.

The ISP scenario in India

ISPs are frantically searching for new sources of revenue in India.17 Indian ISPs are increasingly pursuing the market for Internet-related corporate services, within which the largest source of revenue could come from running virtual private networks (VPNs). VPNs link together a company's offices (corporate intranets) as well as its suppliers and distributors (corporate extranets).

Other sources of revenue include activities like web hosting and e-commerce. Access revenues are still the major source of revenue for now, but in the long run ISPs will have to look for newer revenue sources. Even established ISPs like Satyam Infoway are still trying to work out viable revenue models.

New entrants to the market include HCL Infinet, the ISP arm of infotech major HCL Infosystems, and new ISP ventures from the Reliance and Tata groups. Caltiger, which has already shaken up the market with its free-access model, claims a subscriber base of two lakh (200,000) less than a year after its launch. Other major new players could be Wipronet, Wipro's ISP arm, and BPL.net.

These new entrants will pose a major challenge to what can be described as the original, or old, set of private ISPs: Sify, the Bharti group-controlled Mantra Online, the C Sivasankaran-controlled Dishnet DSL and, of course, the granddaddy among ISPs, Videsh Sanchar Nigam Limited (VSNL).

The subscriber base numbers do not as yet add up to any plausible revenue model. Consider advertising revenue, for instance. In calendar year 1999, adspend on the Internet was between Rs 25 and 30 crore. This is expected to double in 2000, to all of Rs 60 crore. Advertising-based revenue models are therefore not plausible in India right now.

Adspend on the Internet is bound to increase as the number of subscribers and users grows, but the base is as yet too small. Nasscom estimates that adspend on the Internet will reach Rs 250 crore by 2003.

The search for a viable revenue model has therefore become an imperative for Indian ISPs, all the more so since access charges have dropped dramatically. Dishnet DSL, for instance, offers unlimited access for Rs 250 per month, while those wishing to access the Internet for one hour per day need pay only Rs 99 per month. Satyam charges Rs 299 per month, and Mantra Online's tariffs are similar. In the context of falling access charges, much of the interest has focused on Caltiger, the Calcutta-based ISP that was the first to introduce free access.

17 __________, Drawing The Bottomline, CORPORATE DOSSIER, Economic Times, Aug 04 - Aug 09 2000

Whether free access is viable is still a matter of debate. According to Merrill Lynch:

"We believe that Caltiger will encounter challenges to its free access business model. That is because there is limited advertising in India.... In our opinion, it is only revenue sharing between ISPs and telcos that will bring about the true advent of free ISPs in India."

Other debates rage over whether free ISPs will be able to offer high-quality access. Caltiger officials, however, deny that their revenue model is advertising-based. "Only 20 per cent of our revenues will come from advertising. 40 per cent will be from running corporate networks while 30 per cent will be from e-commerce. We will also get 10 per cent from transferring our proprietary technology for delivering advertisements to viewers," says R Vishnu Kumar, vice-president North, Caltiger.com. Caltiger's e-commerce-related revenues will come from integrating customer relationship management and supply chain management systems. Its proprietary advertisement-delivery technology consists of an ad bar that stays in place for the whole time a customer is logged on to Caltiger's web site. Caltiger has transferred this technology to a Hungarian and a Sri Lankan company for $1 million, and hopes to earn $3-4 million per annum in royalty payments on it. "Because of our unique ad model we expect to capture a proportion of advertising revenues on the internet," says Kumar.

However, it is the battle to grab a chunk of the market for Internet-related corporate services that could decide who emerges the winner in the Internet sweepstakes. Corporate services include access provision, VPNs, and application service provision. Of these, the provision of VPN services could be the most lucrative part of the corporate services market. According to the Merrill Lynch report, as many as 10,000 corporates in India may wish to set up VPNs. Merrill Lynch assumes five locations per corporate and a cost of Rs 5,00,000 per location. Given these assumptions, total spending on VPNs could be as much as Rs 2,500 crore, or $582 million at an exchange rate of Rs 43 to a dollar. Even if the cost per location is halved to Rs 2.5 lakh, the potential VPN pie adds up to Rs 1,250 crore. All these are potential numbers, because Merrill Lynch estimates that only 500 corporates have so far initiated the process of setting up VPNs.
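The following sketch restates the Merrill Lynch arithmetic; all inputs are the report's own assumptions as cited above.

    # Merrill Lynch's VPN market sizing, worked through.
    CRORE = 10_000_000  # 1 crore = 10 million rupees
    LAKH = 100_000      # 1 lakh = 100,000 rupees

    corporates = 10_000            # corporates that may wish to set up VPNs
    locations_per_corporate = 5
    cost_per_location = 5 * LAKH   # Rs 5,00,000
    rs_per_usd = 43

    total_rs = corporates * locations_per_corporate * cost_per_location
    print(total_rs / CRORE)       # 2500.0 -> Rs 2,500 crore
    print(total_rs / rs_per_usd)  # 581395348.8... -> roughly $582 million

    # Halving the per-location cost to Rs 2.5 lakh halves the pie:
    print(corporates * locations_per_corporate * 2.5 * LAKH / CRORE)  # 1250.0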

The major contenders for the VPN market are likely to be Satyam Infoway, HCL Infinet, Wipro, and BPL. Dishnet DSL, which has been a pioneer in slashing access rates, is also pitching for the corporate market. Right now, pure access provision accounts for a major portion of its revenue (the company is unwilling to discuss exact numbers), but corporate services are expected to gain in importance.

"We are well positioned to serve the corporate market by providing value added

services such as VPN, collocation and web hosting. DSL technology provides access

to broadband", says Bill Crawley, senior vice-president sales and marketing, Dishnet

DSL. Direct Subscriber Line technology helps boost capacity of ordinary copper lines.

Mumbai-based ISP Pacific Internet expects 60 to 70 per cent of its revenues to come from access fees, running VPNs, advertising and web hosting. Despite falling rates, access continues to be the predominant source of revenue for most ISPs. Mantra Online, a joint venture between the Bharti group and British Telecom, presently derives 80 per cent of its revenues from access; the remaining 20 per cent comes from advertising on its portal. Like all ISPs, Bharti-BT also plans to offer corporate services. VSNL, the country's largest ISP, currently derives all its revenues from pure access; in fiscal 1999-2000, the revenue accruing to its Internet business was Rs 275 crore. VSNL also makes a lot of money from providing leased lines and bandwidth to other ISPs. Among the new entrants, Reliance plans to emerge as an ISP's ISP by leasing or selling bandwidth to ISPs. While RIL declined to comment for this story, sources familiar with the company's plans say that its revenues would come mainly from hiring out bandwidth.

Conclusion

The pricing model in India does seem to be following the Chilean path, if only in the way prices are crashing. Very soon, pure access pricing is unlikely to bring in viable revenues. Pricing is going to focus more on usage and on quality of usage. As of now, precedence and smart market pricing seem far away.

Impact on Indian Infrastructure development

We live in a time when the flow of information, and access to it, is as important as the flow of and access to goods and services. This project tries to show how access can be increased in a country like India through the appropriate pricing of Internet services.

In the wake of the increased demand for bandwidth, which is imminent as users graduate to higher-level, bandwidth-hungry applications, proper pricing will act as an allocating mechanism. It is this issue that we have explored in this project, by looking at various pricing models and the contexts in which their use may be warranted. Given the comfortable bandwidth position in which we currently find ourselves, thanks to VSNL's planning, it makes sense from a social point of view to adopt free-access models in order to increase access levels in the country. But as the project has discussed, this model has its drawbacks, especially in India, where alternative sources of revenue for ISPs are few.

When bandwidth requirements do catch up with supply, ensuring access may not be good enough, as access alone does not guarantee usage. As more bandwidth-hungry applications like Internet telephony take off, users may find it difficult to log on to the net. In such a scenario, the role of pricing becomes all the more important in ensuring proper usage of the service. It is in this situation that usage-based systems could be explored; these would ensure an efficient and fair allocation of charges to users.

Why is this issue relevant in the context of overall development? The answer is best explained through the possible impact of the Internet on health and education in a developing country.

Internet for Health

Poor information leads to poor health. The net provides an opportunity (for the government or for private parties) to offer consultation online and to make medical publications and databases accessible. Medical records can be maintained online, and users can be reminded about check-ups. News of imminent disasters and epidemics, and of how to deal with them, can help save millions of lives. Other issues include the regulation and financing of online health. Improved health would translate into higher productivity and worker efficiency, which in turn would translate into higher output.

Internet for Education

The net could be a medium for providing quality primary and secondary education in an environment of falling educational standards. Access to information from around the world would give a boost to tertiary education and research, which have their own positive effects on society. The net could also be a source of training and continuing education, especially adult education. Improved education levels would increase the options available to users and possibly translate into better living standards.

The above assumes that issues like localization of content, regular and consistent service standards, and access are taken care of; these are issues that need to be addressed in the Indian context. It is the access aspect that we have explored in this project.

Bibliography

1. David G. Messerschmitt, "The Convergence of Telecommunications and Computing: What Are the Implications Today?", May 1996

2. Hal Varian, "Pricing Information Goods" (postscript), June 1995

3. Hal Varian, "Versioning Information Goods" (pdf), January 1997

4. Hal Varian, "Pricing Electronic Journals", June 1996

5. Michael L. Katz and Harvey S. Rosen, "Microeconomics", 2nd Edition, IRWIN, Inc.

6. J. Walrand and P. Varaiya, "High Performance Communication Networks", chapter 8

7. Jeffrey K. MacKie-Mason and Hal R. Varian, "Some Economics of the Internet" (postscript), February 1994

8. Hal R. Varian, "Differential Pricing and Efficiency", June 1996

9. Language Service International (LSI), International Internet Statistics

10. "Electronic Commerce -- An Introduction", May 1996

11. Hal Varian, "Economic Aspects of Personal Privacy", December 1996

12. Milton Mueller, Joseph Hui, and Che-hoo Cheng, "The Hong Kong Internet Exchange"

13. Hal Varian, "Economic Issues Facing the Internet", June 1996

14. "The Accidental Superhighway", Economist, July 1995

15. Sandra Schickele, "The Internet and the Market System: Externalities, Marginal Cost and Public Interest", August 1993

16. David W. Crawford, "Pricing Network Usage: A Market for Bandwidth or Market for Communications?", presented at the MIT Workshop on Internet Economics, March 1995

17. Padmanabhan Srinagesh, "Internet Cost Structures and Interconnection Agreements", presented at the MIT Workshop on Internet Economics, March 1995

18. Loretta Anania and Richard J. Solomon, "Flat: The Minimalist B-ISDN Rate", presented at the MIT Workshop on Internet Economics, March 1995

19. Bell Atlantic, "Superhighway Traffic Taxes Current LEC Networks", Telephony Magazine, July 29, 1996

20. Report of Bell Atlantic on Internet traffic

21. MacKie-Mason, Shenker, and Varian, "Network Architecture and Content Provision: An Economic Analysis" (pdf), June 29, 1996

22. Hal Varian, "Some FAQs about Usage-based Pricing", September 1994

23. Pacific Bell Network Access Point features

24. Information obtained from enquiring personnel at the Chicago NAP

25. Butler, "US Plans for Virtual Laboratories by Internet", March 14, 1996

26. INTER@CTIVE Week, May 20, 1996

27. AOL financial statements, 1996

28. Dow Jones News Service

29. America Online homepage

30. Nevil Brownlee, "New Zealand Experiences with Network Traffic Charging", December 1994

31. Baeza-Yates, R., et al., "The Chilean Internet Connection or I Never Promised You a Rose Garden", Proceedings of INET 1993, pp. GFC 4-9

32. Personal conversation with Rex Croft, University of Waikato, New Zealand

33. OECD, "Information Infrastructure Convergence and Pricing: The Internet", Paris, 1996