Conception & Realization Of A Dynamic Web Site For The University Of Kasdi Merbah Ouargla


People's Democratic Republic of Algeria Ministry Of Higher Education And Scientific Research

Dissertation submitted in partial fulfillment of the requirements for the Degree of "Academic Applied Studies Diplomas" in Computing Sciences

Supervised by : Mr. A. HERROUZ

Realized by : GUERGAB Toufik & KHEDRANE Atallah

Academic Year: 2007/2008

Kasdi Merbah-Ouargla University Faculty Of Engineering Sciences

Computing Department


The Conception & Realization Of A Dynamic Web Site For The University Of

Kasdi Merbah Ouargla

By KHEDRANE ATALLAH &

GUERGAB TOUFIK


To our God, our parents, professors & all our friends for making

all this possible


Table Of Contents

General Introduction
Chapter 01 : Web & Internet Technology & Components
I. 1. Introduction
I. 2. The Internet
I. 2. 1. Internet History
I. 2. 2. Internet Management
I. 3. Internet Protocols
I. 3. 1. Architecture model TCP/IP
I. 3. 1. 1. The TCP/IP Model
I. 3. 1. 2. TCP/IP Model Layers
I. 3. 2. IP Protocol
I. 3. 2. 1. IP Addressing
I. 3. 2. 2. IPv6 (IPng)
I. 3. 2. 3. Domain Name System (DNS)
I. 3. 3. The TCP protocol
I. 3. 4. The HTTP protocol
I. 3. 5. The FTP protocol
I. 3. 6. The SMTP protocol
I. 3. 6. 1. POP3
I. 3. 6. 2. IMAP
I. 4. Internet Services
I. 4. 1. Electronic Mail (E-mail)
I. 4. 2. World wide web
I. 4. 3. Internet forums
I. 4. 4. Internet Relay Chat (IRC)
I. 4. 5. Telnet
I. 5. The World Wide Web
I. 5. 1. History of the world wide web
I. 5. 2. The Web
I. 5. 3. How the web works?
I. 5. 4. Web Architecture
I. 5. 5. Weaknesses of the WWW
I. 5. 6. Success of the WWW
I. 6. Hyper Text Markup Language (HTML)
I. 6. 1. What is HTML?
I. 6. 2. HTML history
I. 7. Addressing system URL
I. 7. 1. Dynamic URLs vs. Static URLs
I. 8. Web Databases
I. 8. 1. Scripting language CGI
I. 9. Client / Server architecture
I. 9. 1. The Web server
I. 9. 2. The Web client


I. 10. Internet security
I. 10. 1. Principal Defense Technologies
I. 10. 1. 1. Firewalls
I. 10. 1. 2. Cryptography
I. 10. 1. 3. Virus
I. 10. 1. 4. Proxy Servers
I. 10. 1. 5. VPN
I. 10. 1. 6. Intrusion Detection System (IDS)
I. 11. Conclusion
Chapter 02 : Web Site Security
II. 1. Cross Site Scripting XSS
II. 1. 1. What is Cross Site Scripting?
II. 1. 2. Site owners are always confident, but so are hackers!
II. 1. 3. The repercussions of XSS
II. 1. 4. A practical example of XSS on an Acunetix test site
II. 1. 5. Why wait to be hacked?
II. 2. SQL Injection
II. 2. 1. SQL Injection: What is it?
II. 2. 2. SQL Injection: An In-depth Explanation
II. 2. 3. SQL Injection: A Simple Example
II. 2. 4. The impact of SQL Injection
II. 2. 5. Preventing SQL Injection attacks
II. 4. Conclusion
Chapter 03 : Kasdi Merbah University Presentation
III. 1. Introduction
III. 2. Genesis and evolution of the University of Kasdi Merbah Ouargla
III. 3. Ouargla University Administrative management
III. 3. 1. Rector
III. 3. 2. The Governing Council
III. 3. 3. Scientific Council
III. 3. 4. The Directorate
III. 4. Conclusion
Chapter 04 : Analysis & Conception
IV. 1. Introduction
IV. 2. Scope statements
IV. 2. 1. Context
IV. 2. 2. General description
IV. 2. 3. Data Description
IV. 2. 4. Operations and Processing
IV. 4. Conclusion
Chapter 05 : Implementation
V. 1. Introduction
V. 2. The workspace components
V. 3. Implementation Tools


V. 4. What is PHP?
V. 4. 1. How PHP works
V. 4. 2. The key difference between PHP and JavaScript
V. 5. MySQL
V. 5. 1. An Overview of MySQL Architecture
V. 5. 2. MySQL Characteristics
V. 6. How do PHP and MySQL work together?
V. 7. PhpMyAdmin
V. 8. The Web server software
V. 8. 1. Internet Information Server
V. 8. 2. Apache Server
V. 8. 3. Apache & IIS comparison
V. 9. Other Scripting Languages
V. 9. 1. JavaScript
V. 9. 2. XML
V. 10. AJAX
V. 11. The system presentation
V. 11. 1. Administrator space
V. 11. 2. Professor space
V. 11. 3. Student space
V. 11. 4. The guest
V. 12. Conclusion
General Conclusion
Bibliography & webography


General Introduction


General Introduction :

The Internet is a worldwide system of computer networks, a network of

networks in which users at any one computer can, if they have permission, get

information from any other computer (and sometimes talk directly to users at other

computers). Today, the Internet is a public, cooperative, and self-sustaining facility

accessible to hundreds of millions of people worldwide. Physically, the Internet

uses a portion of the total resources of the currently existing public

telecommunication networks.

In this project we'll present the process of the realization of a dynamic web

site and share the University of Kasdi Merbah Ouargla database on the net, in order to maintain

the interconnection between the administration, the professors and the students.

This dissertation includes five chapters, which are:

In the first chapter we describe the Web and Internet technology and its

components, which are considered the base of our system.

In the second chapter we discuss web site security, giving an overview

of the most common hacking techniques used on the web, along with some

practical examples and some proposed solutions.

In the third chapter we present the University of Kasdi Merbah Ouargla.

The fourth chapter is reserved for the analysis and conception phase of

our project, describing in a detailed and complete manner the different functions

supported by our system, as well as the description of the data in the database.

The last chapter completes the previous one by exposing the different parts of

the system and presenting the workspace components and the site features.

Finally, we end this dissertation with a general conclusion.


1. Web & Internet Technology

& Components


I. 1. Introduction :

The Internet and the World Wide Web are not one and the same. The Internet is

a collection of interconnected computer networks, linked by copper wires, fiber-

optic cables, wireless connections, etc. In contrast, the Web is a collection of

interconnected documents and other resources, linked by hyperlinks and URLs.

The World Wide Web is one of the services accessible via the Internet, along with

various others, including e-mail, file sharing and online gaming.

Today, the Internet is a public, cooperative, and self-sustaining facility

accessible to hundreds of millions of people worldwide. Physically, the Internet

uses a portion of the total resources of the currently existing public

telecommunication networks.

I. 2. The Internet :

The Internet is named after the Internet Protocol, the standard communications

protocol used by every computer on the Internet. The Internet can powerfully

leverage the ability to find, manage, and share information. Never before in human

history has such a valuable resource been available to so many people at such little

cost.

I. 2. 1. Internet History :

The conceptual foundation for the creation of the Internet was significantly

developed by three individuals and a research conference, each of which changed

the way we thought about technology by accurately predicting its future:

• Vannevar Bush1 wrote the first visionary description of the potential uses for

information technology with his description of the "memex" automated

library system.

• Norbert Wiener1 invented the field of Cybernetics, inspiring future

researchers to focus on the use of technology to extend human capabilities.

1 Vannevar Bush established the U.S. military / university research partnership that later developed the ARPANET, and wrote the first visionary description of the potential use for information technology, inspiring many of the Internet's creators.


• The 1956 Dartmouth Artificial Intelligence conference2 crystallized the

concept that technology was improving at an exponential rate, and provided

the first serious consideration of the consequences.

• Marshall McLuhan3 made the idea of a global village interconnected by an

electronic nervous system part of our popular culture.

In 1957, the Soviet Union launched the first satellite, Sputnik I, triggering US

President Dwight Eisenhower to create the ARPA4 agency to regain the

technological lead in the arms race. ARPA appointed J.C.R. Licklider5 to head the

new IPTO6 organization with a mandate to further the research of the SAGE7

program and help protect the US against a space-based nuclear attack. Licklider

evangelized within the IPTO about the potential benefits of a country-wide

communications network, influencing his successors to hire Lawrence Roberts8 to

implement his vision.

Roberts led development of the network, based on the new idea of packet

switching discovered by Paul Baran9 at RAND, and a few years later by Donald

Davies10 at the UK National Physical Laboratory. A special computer called an

Interface Message Processor11 was developed to realize the design, and the

ARPANET12 went live in early October, 1969. The first communications were

1 Norbert Wiener developed the field of cybernetics, inspiring a generation of scientists to think of computer technology as a means to extend human capabilities.
2 The 1956 Dartmouth Artificial Intelligence (AI) conference gave birth to the field of AI, and gave succeeding generations of scientists their first sense of the potential for information technology to be of benefit to human beings in a profound way.
3 Marshall McLuhan was the first person to popularize the concept of a global village and to consider its social effects.
4 Advanced Research Project Agency : the innovative Research & Defense organization that funded the development of the ARPANET.
5 Joseph Carl Robnett "Lick" Licklider developed the idea of a universal network, spread his vision throughout the IPTO, and inspired his successors to realize his dream by creation of the ARPANET, which then led to the Internet. He also developed the concepts that led to the idea of the Netizen.
6 Information Processing Techniques Office : funded the research that led to the development of the ARPANET.
7 Semi-Automatic Ground Environment : the SAGE program significantly advanced the state of the art in human-computer interaction, influenced the thinking of J.C.R. Licklider, caused the establishment of the MIT Lincoln Laboratory where Lawrence Roberts later worked, and established one of the first wide-area networks.
8 Lawrence (Larry) Roberts was the ARPANET program manager, and led the overall system design.
9 Paul Baran developed the field of packet switching networks while conducting research at the historic RAND organization, a concept embedded in the design of the ARPANET and the standard TCP/IP protocol used on the Internet today.
10 Donald Davies and his colleagues at the UK National Physical Laboratory independently discovered the idea of packet switching, and later developed a smaller scale packet-switched version of the ARPANET.
11 The Interface Message Processor provided a system-independent interface to the ARPANET that could be used by any computer system, thereby opening the Internet network architecture from the very beginning.
12 The ARPANET was the first wide area packet switching network, the "Eve" network of what has evolved into the Internet we know and love today.


between Leonard Kleinrock's research center at the University of California at Los

Angeles, and Douglas Engelbart's center at the Stanford Research Institute.

The first networking protocol used on the ARPANET was the Network

Control Program1. In 1983, it was replaced with the TCP/IP2 protocol developed

by Robert Kahn, Vinton Cerf, and others, which quickly became the most widely

used network protocol in the world.

In 1990, the ARPANET was retired and transferred to the NSFNET3. The

NSFNET was soon connected to the CSNET4, which linked Universities around

North America, and then to the EUnet5, which connected research facilities in

Europe. Thanks in part to the NSF's enlightened management, and fueled by the

popularity of the web, the use of the Internet exploded after 1990, causing the US

Government to transfer management to independent organizations starting in 1995.

I. 2. 2. Internet Management:

It is often said that there is no central control, administration, or

management of the Internet. While this is generally true, there are several well-

known organizations that work together in a relatively well structured and roughly

democratic environment to collectively participate in the research, development,

and management of the Internet, shown with inter-relationships in the chart below :

1 The Network Control Protocol (NCP) was the first standard networking protocol on the ARPANET. NCP was finalized and deployed in December 1970 by the Network Working Group (NWG), led by Steve Crocker.
2 The Internet's open and efficient TCP/IP protocol is the foundation of an inter-networking design that has made it the most widely used network protocol in the world (we'll talk more about this protocol later).
3 National Science Foundation Network : enlightened management of the NSFNET facilitated the Internet's first period of explosive public growth.
4 The Computer Science Network : helped introduce what was fast becoming the Internet to universities around the world, and laid the groundwork for development of the NSFNET.
5 The European Network : spread the ARPANET throughout the research community in Europe, and connected universities and research centers in a similar way to how the CSNET worked in the United States.


Figure 01 : Internet Management

I. 3. Internet Protocols :

In information technology, a protocol (from the Greek protocollon, which

was a leaf of paper glued to a manuscript volume, describing its contents) is the

special set of rules that end points in a telecommunication connection use when

they communicate. Protocols exist at several levels in a telecommunication

connection. For example, there are protocols for the data interchange at the

hardware device level and protocols for data interchange at the application

program level. In the standard model known as Open Systems Interconnection

(OSI)1, there are one or more protocols at each layer in the telecommunication

exchange that both ends of the exchange must recognize and observe. Protocols are

often described in an industry or international standard.

1 Open Systems Interconnection : is a standard description or "reference model" for how messages should be transmitted between any two points in a telecommunication network.


I. 3. 1. Architecture model TCP/IP :

The OSI reference model consists of seven layers that represent a functional

division of the tasks required to implement a network. It is a conceptual tool that is

often used to show how various protocols and technologies fit together to implement

networks. However, it's not the only networking model that attempts to divide

tasks into layers and components. The TCP/IP protocol suite was in fact created

before the OSI Reference Model; as such, its inventors didn't use the OSI model to

explain TCP/IP architecture (even though the OSI model is often used in TCP/IP

discussions today).

I. 3. 1. 1. The TCP/IP Model :

The developers of the TCP/IP protocol suite created their own architectural

model to help describe its components and functions. This model goes by different

names, including the TCP/IP model, the DARPA model and the DOD model (after

the United States Department of Defense, the “D” in “DARPA”). It was called the

TCP/IP model since this seems the simplest designation for modern times.

Regardless of the model we use to represent the function of a network—and

regardless of what we call that model!—the functions that the model represents are

pretty much the same. This means that the TCP/IP and the OSI models are really

quite similar in nature even if they don't carve up the network functionality pie in

precisely the same way. There is a fairly natural correspondence between the

TCP/IP and OSI layers, it just isn't always a “one-to-one” relationship. Since the

OSI model is used so widely, it is common to explain the TCP/IP architecture both

in terms of the TCP/IP layers and the corresponding OSI layers.

I. 3. 1. 2. TCP/IP Model Layers :

The TCP/IP model uses four layers that logically span the equivalent of the

top six layers of the OSI reference model; this is shown in Figure 02. (The physical

layer is not covered by the TCP/IP model because the data link layer is considered


the point at which the interface occurs between the TCP/IP stack and the

underlying networking hardware.) The following are the TCP/IP model layers,

starting from the bottom.

Figure 02: OSI Reference Model and TCP/IP Model Layers

The TCP/IP architectural model has four layers that approximately match six

of the seven layers in the OSI Reference Model. The TCP/IP model does not

address the physical layer, which is where hardware devices reside. The next three

layers—network interface, internet and (host-to-host) transport—correspond to

layers 2, 3 and 4 of the OSI model. The TCP/IP application layer conceptually

“blurs” the top three OSI layers. It’s also worth noting that some people consider

certain aspects of the OSI session layer to be arguably part of the TCP/IP host-to-

host transport layer.

• Network Interface Layer :

As its name suggests, this layer represents the place where the actual TCP/IP

protocols running at higher layers interface to the local network. This layer is


somewhat “controversial” in that some people don't even consider it a “legitimate”

part of TCP/IP. This is usually because none of the core IP protocols run at this

layer. Despite this, the network interface layer is part of the architecture. It is

equivalent to the data link layer (layer two) in the OSI Reference Model and is also

sometimes called the link layer. You may also see the name network access layer.

On many TCP/IP networks, there is no TCP/IP protocol running at all on

this layer, because it is simply not needed. For example, if you run TCP/IP over an

Ethernet, then Ethernet handles layer two (and layer one) functions. However, the

TCP/IP standards do define protocols for TCP/IP networks that do not have their

own layer two implementation. These protocols, the Serial Line Internet Protocol

(SLIP) and the Point-to-Point Protocol (PPP), serve to fill the gap between the

network layer and the physical layer. They are commonly used to facilitate TCP/IP

over direct serial line connections (such as dial-up telephone networking) and other

technologies that operate directly at the physical layer.

• Internet Layer :

This layer corresponds to the network layer in the OSI Reference Model

(and for that reason is sometimes called the network layer even in TCP/IP model

discussions). It is responsible for typical layer three jobs, such as logical device

addressing, data packaging, manipulation and delivery, and last but not least,

routing. At this layer we find the Internet Protocol (IP), arguably the heart of

TCP/IP, as well as support protocols such as ICMP (the Internet Control Message Protocol) and

the routing protocols (RIP, OSPF, BGP, etc.). The new version of IP, called IP version 6

(discussed further below), will be used for the Internet of the future and is of course also at this layer.

• (Host-to-Host) Transport Layer :

The primary job of this layer is to facilitate end-to-end communication over

an internetwork. It is in charge of allowing logical connections to be made between


devices to allow data to be sent either unreliably (with no guarantee that it gets

there) or reliably (where the protocol keeps track of the data sent and received to

make sure it arrives, and re-sends it if necessary). It is also here that identification

of the specific source and destination application process is accomplished.

The formal name of this layer is often shortened to just the transport layer;

the key TCP/IP protocols at this layer are the Transmission Control Protocol (TCP)

and User Datagram Protocol (UDP). The TCP/IP transport layer corresponds to the

layer of the same name in the OSI model (layer four) but includes certain elements

that are arguably part of the OSI session layer. For example, TCP establishes a

connection that can persist for a long period of time, which some people say makes

a TCP connection more like a session.

• Application Layer :

This is the highest layer in the TCP/IP model. It is a rather broad layer,

encompassing layers five through seven in the OSI model. While this seems to

represent a loss of detail compared to the OSI model, scientists think this is

probably a good thing! The TCP/IP model better reflects the “blurry” nature of the

divisions between the functions of the higher layers in the OSI model, which in

practical terms often seem rather arbitrary. It really is hard to separate some

protocols in terms of which of layers five, six or seven they encompass. (we didn't

even bother to try in this Guide which is why the higher-level protocols are all in

the same chapter, while layers one through four have their protocols listed

separately.)

Numerous protocols reside at the application layer. These include

application protocols such as HTTP, FTP and SMTP for providing end-user

services, as well as administrative protocols like SNMP1, DHCP2 and DNS3.

1 Simple Network Management Protocol (SNMP).
2 Dynamic Host Configuration Protocol (DHCP).
3 Domain Name System or Service (DNS).


I. 3. 2. IP Protocol :

Even though the name seems to imply that it's the fourth iteration of the key

Internet Protocol, version 4 of IP was the first that was widely used in modern

TCP/IP. IPv4, as it is sometimes called to differentiate it from the newer IPv6, is

the Internet Protocol version in use on the Internet today, and an implementation of

the protocol is running on hundreds of millions of computers. It provides the basic

datagram delivery capabilities upon which all of TCP/IP functions, and it has

proven its quality in use over a period of more than two decades.

In this section we provide extensive detail on the operation of the current

version of the Internet Protocol, IPv4. There are four main subsections, which

represent the four main functions of IP. The first subsection provides a

comprehensive discussion of IP addressing. The second discusses how data is

encoded and formatted into IP datagrams for transmission. The third describes

datagram size issues and how fragmentation and reassembly are used to convey

large datagrams over networks designed to carry small frames. The last subsection

covers matters related to the delivery and routing of IP datagrams. After the four

main subsections we conclude our look at IPv4 with an overview of IP multicasting,

which is used for delivering a single datagram to more than one recipient.

I. 3. 2. 1. IP Addressing :

This definition is based on Internet Protocol Version 4. Note that the

system of IP address classes described here, while forming the basis for IP address

assignment, is generally bypassed today by use of Classless Inter-Domain Routing

(CIDR) addressing.

In the most widely installed level of the Internet Protocol (IP) today, an IP

address is a 32-bit number that identifies each sender or receiver of information

that is sent in packets across the Internet. When the user requests an HTML page or

sends e-mail, the Internet Protocol part of TCP/IP includes his IP address in the

message (actually, in each of the packets if more than one is required) and sends it


to the IP address that is obtained by looking up the domain name in the Uniform

Resource Locator the user requested or in the e-mail address he is sending a note

to. At the other end, the recipient can see the IP address of the Web page requestor

or the e-mail sender and can respond by sending another message using the IP

address it received.

An IP address has two parts: the identifier of a particular network on the

Internet and an identifier of the particular device (which can be a server or a

workstation) within that network. On the Internet itself - that is, between the routers

that move packets from one point to another along the route - only the network part

of the address is looked at.
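
As a rough illustration of this two-part structure, the short PHP sketch below separates a sample IPv4 address into its network and host identifiers with a netmask (the address, mask and variable names are invented for the example, not taken from our system):

<?php
// Split a sample IPv4 address into its network and host identifiers.
$address = ip2long("192.168.10.25");   // 32-bit integer form of the address
$netmask = ip2long("255.255.255.0");   // mask for a /24 network

$network = $address & $netmask;        // network identifier part
$host    = $address & ~$netmask;       // device (host) identifier part

echo "Network part: " . long2ip($network) . "\n";   // 192.168.10.0
echo "Host part   : " . $host . "\n";                // 25
?>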

I. 3. 2. 2. IPv6 (IPng):

IPv6 (Internet Protocol Version 6) is the latest level of the Internet Protocol

(IP) and is now included as part of IP support in many products including the

major computer operating systems. IPv6 has also been called "IPng" (IP Next

Generation). Formally, IPv6 is a set of specifications from the Internet Engineering

Task Force (IETF). IPv6 was designed as an evolutionary set of improvements to

the current IP Version 4. Network hosts and intermediate nodes with either IPv4 or

IPv6 can handle packets formatted for either level of the Internet Protocol. Users

and service providers can update to IPv6 independently without having to

coordinate with each other.

The most obvious improvement in IPv6 over the IPv4 is that IP addresses

are lengthened from 32 bits to 128 bits. This extension anticipates considerable

future growth of the Internet and provides relief for what was perceived as an

impending shortage of network addresses.
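
The jump from 32 to 128 bits can be made concrete with a small PHP sketch (the two addresses are arbitrary documentation examples): inet_pton() packs an address into its binary form, so the length of the result shows the difference in size.

<?php
$v4 = inet_pton("192.0.2.1");        // 4 bytes  = 32 bits (IPv4)
$v6 = inet_pton("2001:db8::1");      // 16 bytes = 128 bits (IPv6)

echo strlen($v4) * 8 . " bits\n";    // prints "32 bits"
echo strlen($v6) * 8 . " bits\n";    // prints "128 bits"
echo inet_ntop($v6) . "\n";          // back to text form: 2001:db8::1
?>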

IPv6 describes rules for three types of addressing: unicast (one host to one

other host), anycast (one host to the nearest of multiple hosts), and multicast (one

host to multiple hosts). Additional advantages of IPv6 are:


• Options are specified in an extension to the header that is examined

only at the destination, thus speeding up overall network performance.

• The introduction of an "anycast" address provides the possibility of

sending a message to the nearest of several possible gateway hosts with the idea

that any one of them can manage the forwarding of the packet to others. Anycast

messages can be used to update routing tables along the line.

• Packets can be identified as belonging to a particular "flow" so that

packets that are part of a multimedia presentation that needs to arrive in "real time"

can be provided a higher quality-of-service relative to other customers.

• The IPv6 header now includes extensions that allow a packet to

specify a mechanism for authenticating its origin, for ensuring data integrity, and

for ensuring privacy.

I. 3. 2. 3. Domain Name System (DNS) :

The domain name system (DNS) is the way that Internet domain names are

located and translated into Internet Protocol addresses. A domain name is a

meaningful and easy-to-remember "handle" for an Internet address.

Because maintaining a central list of domain name/IP address

correspondences would be impractical, the lists of domain names and IP addresses

are distributed throughout the Internet in a hierarchy of authority. There is

probably a DNS server within close geographic proximity to the user's access

provider that maps the domain names in the user's Internet requests or forwards them to

other servers in the Internet.
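
From PHP, such a lookup can be triggered with the resolver functions below (a sketch; the domain name is only an example). The local DNS server answers from its own data or forwards the request as described above.

<?php
$name = "www.example.com";

// Forward lookup: domain name -> IPv4 address.
$ip = gethostbyname($name);
echo "$name resolves to $ip\n";

// dns_get_record() exposes the raw resource records (A, MX, NS, ...).
foreach (dns_get_record($name, DNS_A) as $record) {
    echo $record['host'] . " has address " . $record['ip'] . "\n";
}
?>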

I. 3. 3. The TCP protocol :

TCP (Transmission Control Protocol) is a set of rules (protocol) used along

with the Internet Protocol (IP) to send data in the form of message units between

computers over the Internet. While IP takes care of handling the actual delivery of


the data, TCP takes care of keeping track of the individual units of data (called

packets) that a message is divided into for efficient routing through the Internet.

For example, when an HTML file is sent to you from a Web server, the

Transmission Control Protocol (TCP) program layer in that server divides the file

into one or more packets, numbers the packets, and then forwards them

individually to the IP program layer. Although each packet has the same

destination IP address, it may get routed differently through the network. At the

other end (the client program in your computer), TCP reassembles the individual

packets and waits until they have arrived to forward them to you as a single file.

TCP is known as a connection-oriented protocol, which means that a

connection is established and maintained until such time as the message or

messages to be exchanged by the application programs at each end have been

exchanged. TCP is responsible for ensuring that a message is divided into the

packets that IP manages and for reassembling the packets back into the complete

message at the other end. In the Open Systems Interconnection (OSI)

communication model, TCP is in layer 4, the Transport Layer.
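
The connection-oriented behaviour can be seen from PHP with a stream socket (a minimal sketch; the host and port are illustrative): the call below only returns a handle once the TCP connection with the remote host has actually been established.

<?php
$socket = stream_socket_client("tcp://www.example.com:80", $errno, $errstr, 10);

if ($socket === false) {
    echo "Connection failed: $errstr ($errno)\n";
} else {
    echo "TCP connection established\n";
    // Anything written here is carried in numbered TCP segments that the
    // receiving stack reassembles in order.
    fclose($socket);   // tear the connection down cleanly
}
?>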

I. 3. 4. The HTTP protocol :

HTTP (Hypertext Transfer Protocol) is the set of rules for transferring files

(text, graphic images, sound, video, and other multimedia files) on the World Wide

Web. As soon as a Web user opens their Web browser, the user is indirectly

making use of HTTP. HTTP is an application protocol that runs on top of the

TCP/IP suite of protocols (the foundation protocols for the Internet).

HTTP concepts include (as the Hypertext part of the name implies) the idea

that files can contain references to other files whose selection will elicit additional

transfer requests. Any Web server machine contains, in addition to the Web page

files it can serve, an HTTP daemon, a program that is designed to wait for HTTP

requests and handle them when they arrive. The Web browser is an HTTP client,


sending requests to server machines. When the browser user enters file requests by

either "opening" a Web file (typing in a Uniform Resource Locator or URL) or

clicking on a hypertext link, the browser builds an HTTP request and sends it to

the Internet Protocol address (IP address) indicated by the URL. The HTTP

daemon in the destination server machine receives the request and sends back the

requested file or files associated with the request. (A Web page often consists of

more than one file.)

The latest version of HTTP is HTTP 1.1.
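
The exchange can be sketched in a few lines of PHP (the host and path are examples, and a real browser adds more headers): the client builds a small textual request, sends it over a TCP connection, and reads back whatever the HTTP daemon returns.

<?php
$host = "www.example.com";
$fp = fsockopen($host, 80, $errno, $errstr, 10);

if ($fp) {
    // A minimal HTTP request, like the one a browser builds from a URL.
    $request = "GET /index.html HTTP/1.1\r\n"
             . "Host: $host\r\n"
             . "Connection: close\r\n\r\n";
    fwrite($fp, $request);

    // The response: status line, headers, then the requested file.
    while (!feof($fp)) {
        echo fgets($fp, 1024);
    }
    fclose($fp);
}
?>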

I. 3. 5. The FTP protocol:

File Transfer Protocol (FTP), a standard Internet protocol, is the simplest

way to exchange files between computers on the Internet. Like the Hypertext

Transfer Protocol (HTTP), which transfers displayable Web pages and related

files, and the Simple Mail Transfer Protocol (SMTP), which transfers e-mail, FTP

is an application protocol that uses the Internet's TCP/IP protocols. FTP is

commonly used to transfer Web page files from their creator to the computer that

acts as their server for everyone on the Internet. It's also commonly used to

download programs and other files to your computer from other servers.

The user can use FTP with a simple command line interface (for example,

from the Windows MS-DOS Prompt window) or with a commercial program that

offers a graphical user interface. The Web browser can also make FTP requests to

download programs selected from a Web page. Using FTP, we can also update

(delete, rename, move, and copy) files at a server. We need to log on to an FTP

server. However, publicly available files are easily accessed using anonymous

FTP.

Basic FTP support is usually provided as part of a suite of programs that

come with TCP/IP. However, any FTP client program with a graphical user

interface usually must be downloaded from the company that makes it.
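
PHP also ships FTP functions that follow the same pattern of logging on and transferring files (a sketch using an anonymous login; the server and file names are placeholders):

<?php
$conn = ftp_connect("ftp.example.com");

if ($conn && ftp_login($conn, "anonymous", "guest@example.com")) {
    ftp_pasv($conn, true);   // use passive mode for the data transfers

    // Download the remote file "readme.txt" into a local copy.
    if (ftp_get($conn, "local-readme.txt", "readme.txt", FTP_ASCII)) {
        echo "File downloaded\n";
    }
    ftp_close($conn);
}
?>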


I. 3. 6. The SMTP protocol :

SMTP (Simple Mail Transfer Protocol) is a TCP/IP protocol used in sending

and receiving e-mail. However, since it is limited in its ability to queue messages

at the receiving end, it is usually used with one of two other protocols, POP31 or

IMAP2, that let the user save messages in a server mailbox and download them

periodically from the server. In other words, users typically use a program that uses

SMTP for sending e-mail and either POP3 or IMAP for receiving e-mail. On Unix-

based systems, sendmail is the most widely-used SMTP server for e-mail. A

commercial package, Sendmail, includes a POP3 server. Microsoft Exchange

includes an SMTP server and can also be set up to include POP3 support.

SMTP usually is implemented to operate over Internet port 25. An

alternative to SMTP that is widely used in Europe is X.400. Many mail servers

now support Extended Simple Mail Transfer Protocol (ESMTP), which allows

multimedia files to be delivered as e-mail.
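
From a PHP page, a message is usually handed to the local mailer with mail(), which then delivers it over SMTP through whatever mail server the host is configured to use (a sketch; the addresses and subject are made up):

<?php
$to      = "student@example.com";
$subject = "Exam timetable";
$body    = "The new timetable is available on the university web site.";
$headers = "From: webmaster@example.com\r\n";

// mail() queues the message with the configured SMTP server / sendmail.
if (mail($to, $subject, $body, $headers)) {
    echo "Message accepted for delivery\n";
}
?>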

I. 3. 6. 1. POP3 :

POP3 (Post Office Protocol 3) is the most recent version of a standard

protocol for receiving e-mail. POP3 is a client/server protocol in which e-mail is

received and held by the Internet server. Periodically, the user (or the client e-mail

receiver) checks his mail-box on the server and downloads any mail, probably using

POP3. This standard protocol is built into most popular e-mail products, such as

Eudora and Outlook Express. It's also built into the Netscape and Microsoft

Internet Explorer browsers.

POP3 is designed to delete mail on the server as soon as the user has

downloaded it. However, some implementations allow users or an administrator to

specify that mail be saved for some period of time. POP can be thought of as a

"store-and-forward" service.

1 Post Office Protocol 3.
2 Internet Message Access Protocol.


The conventional port number for POP3 is 110.

I. 3. 6. 2. IMAP :

IMAP (Internet Message Access Protocol) is a standard protocol for

accessing e-mail from a local server. IMAP (the latest version is IMAP Version 4)

is a client/server protocol in which e-mail is received and held for the user by the Internet

server. The user (or the e-mail client) can view just the heading and the sender of

the letter and then decide whether to download the mail. He can also create and

manipulate multiple folders or mailboxes on the server, delete messages, or search

for certain parts or an entire note. IMAP requires continual access to the server

during the time that he is working with his mail.

• POP and IMAP deal with the receiving of e-mail and are not to be

confused with the Simple Mail Transfer Protocol (SMTP), a protocol for

transferring e-mail across the Internet. The user sends an e-mail with SMTP and a

mail handler receives it on the recipient's behalf. Then the mail is read using POP

or IMAP.

IMAP can be thought of as a remote file server. POP3 can be thought of as a

"store-and-forward" service.

I. 4. Internet Services :

I. 4. 1. Electronic Mail (E-mail) :

Electronic mail, often abbreviated to e-mail, email or

simply mail, is a store-and-forward method of composing, sending, receiving and

storing messages over electronic communication systems. The term "e-mail" (as a

noun or verb) applies both to the Internet e-mail system based on the Simple Mail

Transfer Protocol (SMTP) and to X.400 systems, and to intranet systems allowing

users within one organization to e-mail each other. Intranets may use the Internet

protocols or X.400 protocols for internal e-mail service supporting workgroup

collaboration. E-mail is often used to deliver bulk unsolicited messages, or "spam",


but filter programs exist which can automatically delete some or most of these,

depending on the situation.

I. 4. 2. World wide web :

All the resources and users on the Internet that are using the Hypertext

Transfer Protocol (HTTP).

I. 4. 3. Internet forums :

An Internet forum is a web application for holding discussions and posting user-generated

content. Internet forums are also commonly referred to as Web forums, message

boards, discussion boards, (electronic) discussion groups, discussion forums,

bulletin boards, fora (the Latin plural) or simply forums.

I. 4. 4. Internet Relay Chat (IRC) :

Internet Relay Chat (IRC) enables people all over the world to talk together

over the Internet in real-time sessions in virtual rooms.

I. 4. 5. Telnet :

Telnet is a user command and an underlying TCP/IP protocol for accessing

remote computers. Through Telnet, an administrator or another user can access

someone else's computer remotely. On the Web, HTTP and FTP protocols allow

the user to request specific files from remote computers, but not to actually be

logged on as a user of that computer. With Telnet, we log on as a regular user with

whatever privileges we may have been granted to the specific application and data

on that computer.

Telnet is most likely to be used by program developers and anyone who has

a need to use specific applications or data located at a particular host computer.


I. 5. The World Wide Web :

The World Wide Web is a way of exchanging information between

computers on the Internet, tying them together into a vast collection of interactive

multimedia resources. Thousands upon thousands of computers around the world

are now connected to the web and offer a tremendous variety of information and

services to visitors.

I. 5. 1. History of the world wide web:

The World-Wide Web began in March 1989 at CERN1. "CERN is a meeting

place for physicists from all over the world, who collaborate on complex physics,

engineering and information handling projects." Thus, the need for the

WWW system arose from the geographical dispersion of large collaborations, and

the fast turnover of fellows, students, and visiting scientists, who had to get up to

speed on projects and leave a lasting contribution before leaving.

CERN possessed both the financial and computing resources necessary to

start the project. In the original proposal, Berners-Lee outlined two phases of

the project :

• First, CERN would make use of existing software and hardware as

well as implementing simple browsers for the user's workstations, based on an

analysis of the requirements for information access needs by experiments.

• Second, they would extend the application area by also allowing the

users to add new material.

Berners-Lee2 expected each phase to take three months with the full

manpower complement: he was asking for four software engineers and a

programmer. The proposal talked about a simple scheme to incorporate several

different servers of machine-stored information already available at CERN. This

"scheme" was to use hypertext to provide a single user-interface to many large

classes of stored information such as reports, notes, data-bases, computer

documentation and on-line systems help.

1 CERN was originally named after its founding body, the "Conseil Europeen pour la Recherche Nucleaire," and is now called the "European Laboratory for Particle Physics."
2 Sir Timothy John Berners-Lee, who, with the help of Robert Cailliau and a young student staff at CERN, implemented on December 25, 1990, the first successful communication between an HTTP client and server via the Internet.

Launched in 1989, the WWW quickly gained great popularity among Internet

users. For instance, at 11:22 am of April 12, 1995, the WWW server at the SEAS

of the University of Pennsylvania responded to 128 requests in one minute.

Between 10:00 and 11:00, it responded to 5086 requests in one hour, or about 84

per minute. Even years after its creation, the Web is constantly maturing: in

December 1994 the WWW was growing at roughly 1 per cent a day, a doubling

period of less than 10 weeks.

As popular as it is at the moment, the WWW is not the only possible

implementation of the hypertext concept. In fact, the theory behind the WWW was

based on a more general project "Xanadu1," that is being developed by Ted

Nelson2.

I. 5. 2. The Web:

The web is built around hypertext and hypermedia. A hypertext document

has certain keywords or phrases linked to other online documents. A person

reading a hypertext document about dogs, for example, might be able to select the

highlighted word beagle and call up another document for more information about

that particular breed. With documents intertwined by links into a web of

information, we can select paths to browse online resources, a process often

referred to as surfing.

Hypermedia extends the concept of hypertext to other forms of information,

including images, sounds, and video clips. A person reading a hypermedia

document about dogs, for example, might select a picture of a beagle and watch a

video clip about beagles.

1 Xanadu is the original hypertext and interactive multimedia system.
2 Ted Nelson : an American sociologist, philosopher, and pioneer of information technology. He coined the term "hypertext" in 1963 and published it in 1965. He also is credited with the first use of the words hypermedia, transclusion, and virtuality.

The web subsumes previous Internet information systems such as Gopher1

and FTP. These resources can still be accessed through the web, but the web

provides a wealth of additional capabilities not previously offered by these more

restricted connection methods.

The Web consists of :

• The users' personal computers.

• Web browser software to access the Web.

• A connection to an Internet service provider (ISP).

• Servers to host the data.

• Routers and switches to direct the flow of data.

I. 5. 3. How the web works?

• Web pages are stored on web servers located around the globe.

• Entering the Uniform Resource Locator or URL of a web page in the

web browser or clicking a link sends a request to the server that hosts the page.

• The server transmits the web page data to the user's computer and the

web browser displays it on the screen.

1 Gopher : a distributed document search and retrieval network protocol designed for the Internet. Its goal is to function as an improved form of Anonymous FTP.


I. 5. 4. Web Architecture:

The WWW project is based on the principle of universal readership: "if

information is available, then any (authorized) person should be able to access it

from anywhere in the world." The Web's implementation follows a standard client-

server model. In this model, a user relies on a program (the client) to connect to a

remote machine (the server), where the data is stored. The architecture of the

WWW (see Figure 3) is one of clients, such as Firefox, Internet Explorer, or

Lynx, which know how to present data but not what its origin is, and servers,

which know how to extract data but are ignorant of how it will be presented to

the user.

Figure 3: Architecture of the WWW

One of the main features of the WWW documents is their hypertext

structure. On a graphic terminal, for instance, a particular reference can be

represented by underlined text, or an icon. The user clicks on it with the mouse,

and the referenced document appears. This method makes copying of information


unnecessary: data needs only to be stored once, and all references to it can be

linked to the original document.

I. 5. 5. Weaknesses of the WWW:

The World-Wide Web began as a set of simple protocols and formats. As

time passed, the Web began to be used as a testbed for various sophisticated

hypermedia and information retrieval concepts. Unfortunately, these concepts were

quickly absorbed by the general WWW community. This means that experimental

extensions of dubious use are now established parts of the Web.

Another flaw in the current structure of the WWW is the presence of many

hypertext links that point to no longer existent documents. These occur when

authors rename or delete their works from the Web. Since the system has no way

of registering links to one's document, an author can not notify his readers of the

reorganization. The Xanadu system, on the other hand, does not have this problem

since it does not allow users to delete documents from the system.

I. 5. 6. Success of the WWW:

What is the reason for the immense success of the World-Wide Web?

Perhaps, it can be explained by CERN's attitude towards the development of the

project. As soon as the basic outline of the WWW was complete, CERN made the

source code for its software publicly available. CERN has been encouraging

collaboration by academic and commercial parties since the onset of the project,

and by doing so it got millions of people involved in the growth of the Web.

The system requirements for running a WWW server are minimal, so even

administrators with limited funds had a chance to become information providers.

Because of the intuitive nature of hypertext, many inexperienced computer users

were able to connect to the network. Furthermore, the simplicity of the HyperText

Markup Language, used for creating interactive documents, allowed these users to

contribute to the expanding database of documents on the Web. Also, the nature of


the World-Wide Web provided a way to interconnect computers running different

operating systems, and display information created in a variety of existing media

formats.

I. 6. Hyper Text Markup Language (HTML) :

I. 6. 1. What is HTML?

HTML (Hypertext Markup Language) is the set of markup symbols or codes

inserted in a file intended for display on a World Wide Web browser page. The

markup tells the Web browser how to display a Web page's words and images for

the user. Each individual markup code is referred to as an element (but many

people also refer to it as a tag). Some elements come in pairs that indicate when

some display effect is to begin and when it is to end.

HTML is a formal Recommendation by the World Wide Web Consortium1

(W3C) and is generally adhered to by the major browsers, Microsoft's Internet

Explorer and Netscape's Navigator, which also provide some additional non-

standard codes. The current version of HTML is HTML 4.0. However, both

Internet Explorer and Netscape implement some features differently and provide

non-standard extensions. Web developers using the more advanced features of

HTML 4 may have to design pages for both browsers and send out the appropriate

version to a user. Significant features in HTML 4 are sometimes described in

general as dynamic HTML. What is sometimes referred to as HTML 5 is an

extensible form of HTML called Extensible Hypertext Markup Language

(XHTML).

I. 6. 2. HTML history:

HTML is a subset of the Standard Generalized Markup Language (SGML).

SGML is an international standard (ISO 8879) published in 1986 as a format for

structuring and marking up documents. HTML adopts a simplified set of SGML's

1 The World Wide Web Consortium (W3C) is the main international standards organization for the World Wide Web.


structural, semantic, and formatting tags, keeping the emphasis on the content

rather than on the document itself. An important addition to HTML was the

inclusion of support for hypertext, which enabled authors to define a semantic

network of linked information.

The evolution of HTML :

HTML is an evolving language. It doesn’t stay the same for long before a

revised set of standards and specifications are brought in to allow easier creation of

prettier and more efficient sites.

• Level 0 HTML : At level 0, HTML offered a platform-independent

means of marking data for interchange. The concept was that servers would store

and supply data and clients would retrieve and display it.

• Level 1 HTML : The idea of an HTML container was added, with a

HEAD element separate from the BODY element. Opening and closing tags were

required for some elements.

• HTML+ : HTML+ incorporated graphical and display elements into

HTML. It added elements for superscripts and subscripts, footnotes, margins, and inserted and

deleted text.

• HTML 2 : Level 2 added the FORM element with INPUT, SELECT,

OPTION, and TEXTAREA plus the BR element for line breaks. It also added the

META element for detailed document description, which also provided an avenue

for indexing and cataloging the contents; also changed the description of the head

and body section.

• HTML 3 : HTML 3 included a FIG element that supported text flow

around figures; ALIGN attributes that enabled left, right or center justification;

additional attributes for background images, tabs, footnotes and banners.

The HTML 3 proposal started fading away with the advent of Microsoft's Internet Explorer

and Netscape Navigator.


• HTML 3.2 : It added the SCRIPT and Styles tags. It offered new

elements and attributes that enlivened Web pages with animation, colors and

sound.

• HTML 4 : The latest version of HTML enables separating physical

styles from the content markup by relying on style sheets; it introduces the

OBJECT element and includes the STYLE, DIV and SPAN elements for

incorporating style sheets.

• XHTML 1.0:

Close to the beginning of the 21st century the W3C issued their

specifications of XHTML 1.0 as a recommendation. Since January 26, 2000 it

stands as a joint standard with HTML 4.01. XHTML marks a departure from the

way new specs have worked; it is an entirely new branch of HTML, taking in ideas

from XML, which is a far more complicated markup language than HTML. There

aren’t many new or deprecated tags and attributes in this version of HTML, but

there are things that have changed with a view to increased accessibility and

functionality. It’s mainly just a new set of coding rules.

I. 7. Addressing system URL :

URL is an abbreviation of Uniform Resource Locator, the global address of

documents and other resources on the World Wide Web.

The first part of the address is called a protocol identifier and it indicates

what protocol to use, and the second part is called a resource name and it specifies

the IP address or the domain name where the resource is located. The protocol

identifier and the resource name are separated by a colon and two forward slashes.

For example, the two URLs below point to two different files at the domain

pcwebopedia.com. The first specifies an executable file that should be fetched

using the FTP protocol; the second specifies a Web page that should be fetched

using the HTTP protocol:


• ftp://www.pcwebopedia.com/stuff.exe

• http://www.pcwebopedia.com/index.html
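
In PHP the same decomposition is available through parse_url(), shown here on the second URL above (a small sketch):

<?php
$url   = "http://www.pcwebopedia.com/index.html";
$parts = parse_url($url);

echo $parts['scheme'] . "\n";   // "http"  -> the protocol identifier
echo $parts['host']   . "\n";   // "www.pcwebopedia.com" -> the resource name
echo $parts['path']   . "\n";   // "/index.html" -> the file on that host
?>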

I. 7. 1. Dynamic URLs vs. Static URLs :

Websites that utilize databases which can insert content into a webpage by

way of a dynamic script like PHP or JavaScript are increasingly popular. This type

of site is considered dynamic. Many websites choose dynamic content over static

content. This is because if a website has thousands of products or pages, writing or

updating each static page by hand is a monumental task.

There are two types of URLs: dynamic and static. A dynamic URL is a page

address that results from the search of a database-driven web site or the URL of a

web site that runs a script. In contrast to static URLs, in which the contents of the

web page stay the same unless the changes are hard-coded into the HTML,

dynamic URLs are generated from specific queries to a site's database. The

dynamic page is basically only a template in which to display the results of the

database query. Instead of changing information in the HTML code, the data is

changed in the database.
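
A minimal PHP sketch of this template idea is shown below (the table, column and parameter names, such as forum_threads and threadid, are hypothetical, chosen to match the forum URL used later in this section): the page itself never changes, only the row fetched from the database does.

<?php
// The identifier comes from the dynamic URL, e.g. thread.php?threadid=12345
$threadId = isset($_GET['threadid']) ? (int) $_GET['threadid'] : 0;

$db   = new mysqli("localhost", "user", "password", "forum");
$stmt = $db->prepare("SELECT title, body FROM forum_threads WHERE id = ?");
$stmt->bind_param("i", $threadId);
$stmt->execute();
$stmt->bind_result($title, $body);

// The same template displays whichever thread the query string asked for.
while ($stmt->fetch()) {
    echo "<h1>" . htmlspecialchars($title) . "</h1>";
    echo "<p>"  . htmlspecialchars($body)  . "</p>";
}
?>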

But there is a risk when using dynamic URLs: search engines don't like

them. Those at most risk of losing search engine positioning due to dynamic

URLs are e-commerce stores, forums, sites utilizing content management systems

and blogs like Mambo or WordPress, or any other database-driven website. Many

times the URL that is generated for the content in a dynamic site looks something

like this:

http://www.somesites.com/forums/thread.php?threadid=12345&sort=date

A static URL on the other hand, is a URL that doesn't change, and doesn't

have variable strings. It looks like this:

http://www.somesites.com/forums/the-challenges-of-dynamic-urls.htm


Static URLs are typically ranked better in search engine results pages, and

they are indexed more quickly than dynamic URLs, if dynamic URLs get indexed

at all. Static URLs also make it easier for the end-user to see what the page is about. If a user sees a URL in a search engine query that matches the title

and description, they are more likely to click on that URL than one that doesn't

make sense to them.

A search engine wants to list only unique pages in its index. Search engines combat this issue by cutting off URLs after a specific number of variable strings (e.g. ?, &, =).

For example, let's look at three URLs:

http://www.somesites.com/forums/thread.php?threadid=12345&sort=date

http://www.somesites.com/forums/thread.php?threadid=67890&sort=date

http://www.somesites.com/forums/thread.php?threadid=13579&sort=date

All three of these URLs point to three different pages. But if the search

engine purges the information after the first offending character, the question mark

(?), now all three pages look the same:

http://www.somesites.com/forums/thread.php

http://www.somesites.com/forums/thread.php

http://www.somesites.com/forums/thread.php

Now, you don't have unique pages, and consequently, the duplicate URLs

won't be indexed.

Another issue is that dynamic pages generally do not have any keywords in

the URL. It is very important to have keyword rich URLs. Highly relevant


keywords should appear in the domain name or the page URL. This became clear

in a recent study on how the top three search engines, Google, Yahoo, and MSN,

rank websites.

The study involved taking hundreds of highly competitive keyword queries,

like travel, cars, and computer software, and comparing factors involving the top

ten results. The statistics show that of those top ten, Google has 40-50% of those

with the keyword either in the URL or the domain; Yahoo shows 60%; and MSN

has an astonishing 85%! What that means is that to these search engines, having

your keywords in your URL or domain name could mean the difference between a

top ten ranking, and a ranking far down in the results pages.

• The Solution :

So what can we do about this difficult problem? We certainly don't want to have to go back and recode every single dynamic URL into a static URL; this would be too much work for any website owner.

If we are hosted on a Linux server, then we will want to make the most of the Apache Mod Rewrite Rule (the mod_rewrite module), which gives us the ability to inconspicuously redirect one URL to another, without the user's (or a search engine's) knowledge. We'll need to have this module installed in Apache. This module saves us from having to rewrite our static URLs manually.

How does this module work? When a request comes in to a server for the

new static URL, the Apache module redirects the URL internally to the old,

dynamic URL, while still looking like the new static URL. The web server

compares the URL requested by the client with the search pattern in the individual

rules.

For example, when someone requests this URL:

http://www.somesites.com/forums/the-challenges-of-dynamic-urls.html


The server looks for and compares this static-looking URL to what

information is listed in the .htaccess file, such as:

RewriteEngine on

RewriteRule thread-threadid-(.*)\.htm$ thread.php?threadid=$1

It then converts the static URL to the old dynamic URL that looks like this,

with no one the wiser:

http://www.somesites.com/forums/thread.php?threadid=12345

We now have a URL that will not only rank better in the search engines, but from which end-users can tell at a glance what the page will be about, while allowing Apache's Mod Rewrite Rule to handle the conversion for us and still keeping the dynamic URL.

If we are not particularly technical, we may not wish to attempt to figure out

the complex Mod Rewrite code and how to use it, or we simply may not have the

time to embark upon a new learning curve. Therefore, it would be extremely

beneficial to have something do it for us. A URL rewriting tool can definitely help here: what such a tool does is implement the Mod Rewrite Rule in our .htaccess file to transparently convert one URL to another, such as a dynamic URL to a static one.

We have multiple reasons to utilize static URLs in our website whenever

possible. When it's not possible, and we need to keep our database-driven content

as those old dynamic URLs, we can still give end-users and search engines a static URL to navigate, and all the while, they are still our dynamic URLs in disguise.

When a search engine engineer was asked if this method was considered cloaking,

he responded that it indeed was not, and that in fact, search engines prefer you do it

this way. The URL rewrite tool not only saves us time and energy by helping us


use static URLs by converting them transparently to our dynamic URLs, but it also helps preserve our rankings in the search engines.

I. 8. Web Databases :

The power of the WWW comes not simply from static HTML pages - which

can be very attractive, and the important first step into the WWW - but especially

from the ability to support those pages with powerful software, especially when

interfacing to databases. The combination of attractive screen displays,

exceptionally easy to use controls and navigational aids, and powerful underlying

software, has opened up the potential for people everywhere to tap into the vast

global information resources of the Internet.

I. 8. 1. Scripting language CGI :

The Common Gateway Interface (CGI) is a standard for interfacing external

applications with information servers, such as HTTP or Web servers. A plain

HTML document that the Web daemon retrieves is static, which means it exists in

a constant state: a text file that doesn't change. A CGI program, on the other hand,

is executed in real-time, so that it can output dynamic information.

Since a CGI program is executable, it is basically the equivalent of letting the

world run a program on your system, which isn't the safest thing to do. Therefore,

there are some security precautions that need to be implemented when it comes to

using CGI programs. Probably the one that will affect the typical Web user the

most is the fact that CGI programs need to reside in a special directory, so that the

Web server knows to execute the program rather than just display it to the browser.

This directory is usually under direct control of the webmaster, prohibiting the

average user from creating CGI programs. There are other ways to allow access to

CGI scripts, but it is up to the webmaster to set these up for his clients.

A CGI program can be written in any language that allows it to be executed

on the system, such as:

• C/C++


• Fortran

• PERL

• TCL

• Any Unix shell

• Visual Basic

• AppleScript
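Although PHP does not appear in the list above, it too can be run as a CGI program (as noted later in this document), and it illustrates the basic CGI contract well: the script prints an HTTP header block, then a blank line, then the generated page, and the Web server relays that output to the browser. The following is only an illustrative sketch; the interpreter path on the first line is an assumption that depends on the server.

#!/usr/bin/php
<?php
// Minimal CGI-style sketch (illustrative only).
// A CGI program prints its HTTP headers, a blank line, then the page body;
// because it is executed on every request, the page can differ on every request.
echo "Content-Type: text/html\r\n\r\n";
echo "<html><body>\n";
echo "<p>Page generated at " . date("Y-m-d H:i:s") . "</p>\n";
echo "</body></html>\n";
?>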

I. 9. Client / Server architecture :

Client/server describes the relationship between two computer programs in

which one program, the client, makes a service request from another program, the

server, which fulfills the request. Although the client/server idea can be used by

programs within a single computer, it is a more important idea in a network. In a

network, the client/server model provides a convenient way to interconnect

programs that are distributed efficiently across different locations. Computer

transactions using the client/server model are very common.

The client/server model has become one of the central ideas of network

computing. Most business applications being written today use the client/server model. So does the Internet's main protocol suite, TCP/IP. In marketing, the term has

been used to distinguish distributed computing by smaller dispersed computers

from the "monolithic" centralized computing of mainframe computers. But this

distinction has largely disappeared as mainframes and their applications have also

turned to the client/server model and become part of network computing.

In the usual client/server model, one server, sometimes called a daemon, is

activated and awaits client requests. Typically, multiple client programs share the

services of a common server program. Both client programs and server programs

are often part of a larger program or application. Relative to the Internet, your Web

browser is a client program that requests services (the sending of Web pages or

files) from a Web server (which technically is called a Hypertext Transfer Protocol or HTTP server) in another computer somewhere on the Internet.

Similarly, your computer with TCP/IP installed allows you to make client requests


for files from File Transfer Protocol (FTP) servers in other computers on the

Internet.

Other program relationship models include master/slave, with one program being in charge of all other programs, and peer-to-peer, with either of two programs able to initiate a transaction.

I. 9. 1. The Web server :

A computer that delivers (serves up) Web pages. Every Web server has an IP

address and possibly a domain name. For example, if we enter the URL

http://www.ouargla-univ.dz/index.html in the browser, this sends a request to the

server whose domain name is ouargla-univ.dz. The server then fetches the page named index.html and sends it to our browser.

Any computer can be turned into a Web server by installing server software

and connecting the machine to the Internet. There are many Web server software

applications, including public domain software from NCSA and Apache, and

commercial packages from Microsoft, Netscape and others.

I. 9. 2. The Web client :

The client, or user, side of the Web. It typically refers to the Web browser in

the user's machine. It may also refer to plug-ins and helper applications that

enhance the browser to support special services from the site. The term may imply

the entire user machine or refer to a handheld device that provides Web access.

Contrast with Web server.

I. 10. Internet security :

In the computer industry, Internet security refers to techniques for ensuring that data stored in a computer cannot be read or compromised by any individual without authorization.

Most security measures involve data encryption and passwords. Data encryption is

the translation of data into a form that is unintelligible without a deciphering

mechanism. A password is a secret word or phrase that gives a user access to a

particular program or system.


I. 10. 1. Principal Defense Technologies :

I. 10. 1. 1. Firewalls :

A firewall is a system designed to prevent unauthorized access to or from a private network. Firewalls can be implemented in both hardware and software, or a

combination of both. Firewalls are frequently used to prevent unauthorized Internet

users from accessing private networks connected to the Internet, especially

intranets. All messages entering or leaving the intranet pass through the firewall,

which examines each message and blocks those that do not meet the specified

security criteria.

There are several types of firewall techniques :

• Packet filter:

Looks at each packet entering or leaving the network and accepts or rejects it

based on user-defined rules. Packet filtering is fairly effective and transparent to

users, but it is difficult to configure. In addition, it is susceptible to IP spoofing.

• Application gateway:

Applies security mechanisms to specific applications, such as FTP and Telnet

servers. This is very effective, but can impose a performance degradation.

• Circuit-level gateway:

Applies security mechanisms when a TCP or UDP connection is established.

Once the connection has been made, packets can flow between the hosts without

further checking.

In practice, many firewalls use two or more of these techniques in concert.

A firewall is considered a first line of defense in protecting private

information. For greater security, data can be encrypted.

I. 10. 1. 2. Cryptography :

The art of protecting information by transforming it (encrypting it) into an

unreadable format, called cipher text. Only those who possess a secret key can

decipher (or decrypt) the message into plain text. Encrypted messages can

sometimes be broken by cryptanalysis, also called codebreaking, although modern

cryptography techniques are virtually unbreakable.


As the Internet and other forms of electronic communication become more

prevalent, electronic security is becoming increasingly important. Cryptography is

used to protect e-mail messages, credit card information, and corporate data. One

of the most popular cryptography systems used on the Internet is Pretty Good

Privacy1 because it's effective and free.

Cryptography systems can be broadly classified into symmetric-key systems

that use a single key that both the sender and recipient have, and public-key

systems (asymmetric) that use two keys, a public key known to everyone and a

private key that only the recipient of messages uses.

I. 10. 1. 3. Virus :

A virus is a program or piece of code that is loaded onto the user's computer without his

knowledge and runs against his wishes. Viruses can also replicate themselves. All

computer viruses are manmade. A simple virus that can make a copy of itself over

and over again is relatively easy to produce. Even such a simple virus is dangerous

because it will quickly use all available memory and bring the system to a halt. An

even more dangerous type of virus is one capable of transmitting itself across

networks and bypassing security systems.

Since 1987, when a virus infected ARPANET, a large network used by the

Defense Department and many universities, many antivirus programs have become

available. These programs periodically check the computer system for the best-

known types of viruses.

Some people distinguish between general viruses and worms. A worm is a

special type of virus that can replicate itself and use memory, but cannot attach

itself to other programs.

I. 10. 1. 4 Proxy Servers :

A server that sits between a client application, such as a Web browser, and a

real server. It intercepts all requests to the real server to see if it can fulfill the

requests itself. If not, it forwards the request to the real server.

1 Pretty Good Privacy (PGP) is a computer program that provides cryptographic privacy and authentication.


Proxy servers have two main purposes:

• Improve Performance:

Proxy servers can dramatically improve performance for groups of users.

This is because it saves the results of all requests for a certain amount of time.

Consider the case where both user X and user Y access the World Wide Web

through a proxy server. First user X requests a certain Web page, which we'll call

Page 1. Sometime later, user Y requests the same page. Instead of forwarding the

request to the Web server where Page 1 resides, which can be a time-consuming

operation, the proxy server simply returns the Page 1 that it already fetched for

user X. Since the proxy server is often on the same network as the user, this is a

much faster operation. Real proxy servers support hundreds or thousands of users.

The major online services such as America Online, MSN and Yahoo, for example,

employ an array of proxy servers.

• Filter Requests:

Proxy servers can also be used to filter requests. For example, a company

might use a proxy server to prevent its employees from accessing a specific set of

Web sites.

I. 10. 1. 5. VPN :

VPN is Short for Virtual Private Network, a network that is constructed by

using public wires to connect nodes. For example, there are a number of systems

that enable you to create networks using the Internet as the medium for

transporting data. These systems use encryption and other security mechanisms to

ensure that only authorized users can access the network and that the data cannot

be intercepted.

I. 10. 1. 6. Intrusion Detection System (IDS) :

An intrusion detection system (IDS) inspects all inbound and outbound

network activity and identifies suspicious patterns that may indicate a network or

system attack from someone attempting to break into or compromise a system.


There are several ways to categorize an IDS:

• Misuse detection vs. Anomaly detection:

In misuse detection, the IDS analyzes the information it gathers and

compares it to large databases of attack signatures. Essentially, the IDS looks for a

specific attack that has already been documented. Like a virus detection system,

misuse detection software is only as good as the database of attack signatures that

it uses to compare packets against. In anomaly detection, the system administrator

defines the baseline, or normal, state of the network’s traffic load, breakdown,

protocol, and typical packet size. The anomaly detector monitors network

segments to compare their state to the normal baseline and look for anomalies.

• Network-based vs. Host-based systems:

In a network-based system, or NIDS, the individual packets flowing through

a network are analyzed. The NIDS can detect malicious packets that are designed

to be overlooked by a firewall’s simplistic filtering rules. In a host-based system,

the IDS examines the activity on each individual computer or host.

• Passive system vs. Reactive system:

In a passive system, the IDS detects a potential security breach, logs the

information and signals an alert. In a reactive system, the IDS responds to the

suspicious activity by logging off a user or by reprogramming the firewall to block

network traffic from the suspected malicious source.

Though they both relate to network security, an IDS differs from a firewall

in that a firewall looks out for intrusions in order to stop them from happening. The

firewall limits the access between networks in order to prevent intrusion and does

not signal an attack from inside the network. An IDS evaluates a suspected

intrusion once it has taken place and signals an alarm. An IDS also watches for

attacks that originate from within a system.


I. 11. Conclusion :

« As with any new facility, there will be a period of very light usage until the

community of users experiments with the network and begins to depend upon it.

One of our goals must be to stimulate the immediate and easy use by a wide class

of users. » - Steve Crocker1; Host Software; 7 April 1969.

Words that were written in 1969, and that we are living today…

As of March 31, 2008, 1.407 billion people use the Internet according to

Internet World Stats2.

1 Steve Crocker : the inventor of the Request for Comments series, authoring the very first RFC and many more. 2 World Internet Users and Population Stats : www.internetworldstats.com.


2. Web Site Security


II. 1. Introduction :

Web site security is possibly today's most overlooked aspect of securing data. This chapter provides information about the most important web attacks, such as SQL injection and Cross-site scripting. Besides explaining how they work, we’ll also provide information on finding and fixing these web vulnerabilities.

II. 1. Cross Site Scripting XSS:

II. 1. 1. What is Cross Site Scripting?

Cross Site Scripting (or XSS) is one of the most common application-layer

web attacks. XSS commonly targets scripts embedded in a page which are

executed on the client-side (in the user’s web browser) rather than on the server-

side. XSS in itself is a threat which is brought about by the internet security weaknesses of client-side scripting languages, with HTML and JavaScript (others being VBScript1, ActiveX2, or Flash) as the prime culprits for this exploit.

The concept of XSS is to manipulate client-side scripts of a web application to

execute in the manner desired by the malicious user. Such a manipulation can

embed a script in a page which can be executed every time the page is loaded, or

whenever an associated event is performed.

A basic example of XSS is when a malicious user injects a script in a

legitimate shopping site URL which in turn redirects a user to a fake but identical

page. The malicious page would run a script to capture the cookie of the user

browsing the shopping site, and that cookie gets sent to the malicious user who can

now hijack the legitimate user’s session. Although no real hack has been

performed against the shopping site, XSS has still exploited a scripting weakness

in the page to snare a user and take command of his session. A trick which often is

used to make malicious URLs less obvious is to have the XSS part of the URL

encoded in HEX (or other encoding methods). This will look harmless to the user

who recognizes the URL he is familiar with, and simply disregards any following ‘tricked’ code, which would be encoded and therefore inconspicuous.

1 VBScript is an interpreted script language from Microsoft that is a subset of its Visual Basic programming language designed for interpretation by Web browsers. 2 ActiveX is the name Microsoft has given to a set of "strategic" object-oriented programming technologies and tools.


II. 1. 2. Site owners are always confident, but so are hackers!

Without going into complicated technical details, one must be aware of the

various cases which have shown that XSS can have serious consequences when

exploited on a vulnerable web application. Many site owners dismiss XSS on the

grounds that it cannot be used to steal sensitive data from a back-end database.

This is a common mistake because the consequences of XSS against a web

application and its customers have been proven to be very serious, both in terms of

application functionality and business operation. An online business project cannot

afford to lose the trust of its present and future customers simply because nobody

has ever stepped forward to prove that their site is really vulnerable to XSS

exploits. Ironically, there are stories of site owners who have boldly claimed that

XSS is not really a high-risk exploit. This has often resulted in a public challenge

which hackers are always itching to accept, with the site owner having to later deal

with a defaced application and public embarrassment.

II. 1. 3. The repercussions of XSS:

Analysis of different cases which detail XSS exploits teaches us how the

constantly changing web technology is nowhere close to making applications more

secure. A thorough web search will reveal many stories of large-scale corporation

web sites being hacked through XSS exploits, and the reports of such cases always

show the same recurring consequences as being of the severe kind.

Exploited XSS is commonly used to achieve the following malicious results:

• Identity theft

• Accessing sensitive or restricted information

• Gaining free access to otherwise paid for content

• Spying on user’s web browsing habits

• Altering browser functionality

• Public defamation of an individual or corporation

• Web application defacement

• Denial of Service attacks


II. 1. 4. A Practical example of XSS on an Acunetix test site:

The following example is not a hacking tutorial. It is just a basic way to

demonstrate how XSS can be used to control and modify the functionality of a web

page and to re-design the way the page processes its output. The practical use of

the example may be freely debated; however anyone may see the regular reports

which describe how advanced XSS is used to achieve very complex results, most

commonly without being noticed by the user.

1) Load the following link in your browser:

http://testasp.acunetix.com/Search.asp, you will notice that the page is a simple

page with an input field for running a search.

2) Try to insert the following code into the search field, and notice how a

login form will be displayed on the page:

<br><br>Please login with the form below before proceeding:<form action="destination.asp"><table><tr><td>Login:</td><td><input type=text length=20 name=login></td></tr><tr><td>Password:</td><td><input type=text length=20 name=password></td></tr></table><input type=submit value=LOGIN></form>

Then simply hit the search button after inserting the code.


Through the XSS flaw on the page, it has been possible to create a FAKE login form which can convincingly gather a user’s credentials. As seen in step 2, the

code contains a section which mentions “destination.asp”. That is where a hacker

can decide where the FAKE login form will send the user’s log-in details for them

to be retrieved and used maliciously.

A hacker can also inject this code by passing it around via the browser’s address bar as follows:

http://testasp.acunetix.com/Search.asp?tfSearch=%3Cbr%3E%3Cbr%3EPlease+login+with+the+form+below+before+proceeding%3A%3Cform+action%3D%22test.asp%22%3E%3Ctable%3E%3Ctr%3E%3Ctd%3ELogin%3A%3C%2Ftd%3E%3Ctd%3E%3Cinput+type%3Dtext+length%3D20+name%3Dlogin%3E%3C%2Ftd%3E%3C%2Ftr%3E%3Ctr%3E%3Ctd%3EPassword%3A%3C%2Ftd%3E%3Ctd%3E%3Cinput+type%3Dtext+length%3D20+name%3Dpassword%3E%3C%2Ftd%3E%3C%2Ftr%3E%3C%2Ftable%3E%3Cinput+type%3Dsubmit+value%3DLOGIN%3E%3C%2Fform%3E


This will create the same result on the page, showing how XSS can be used in

several different ways to achieve the same result. After the hacker retrieves the

user’s log-in credentials, he can easily cause the browser to display the search page

as it was originally and the user would not even realize that he has just been fooled.

This example may also be seen in use in all those spam emails we all receive. It is

very common to find an email in your inbox saying how a certain auctioning site

suspects that another individual is using your account maliciously, and it then asks

you to click a link to validate your identity. This is a similar method which directs

the unsuspecting user to a FAKE version of the auctioning site, and captures the

user’s log-in credentials to then send them to the hacker.
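On the server side, the usual fix for this kind of reflected XSS is to escape user-supplied input before echoing it back into the page. The sketch below is only illustrative (the parameter name tfSearch is taken from the example URL above; the file name and page layout are assumptions):

<?php
// search.php - illustrative sketch of escaping user input before output.
$query = isset($_GET['tfSearch']) ? $_GET['tfSearch'] : '';

// htmlspecialchars() turns characters such as < > " ' into HTML entities, so
// any injected <form> or <script> markup is displayed as harmless text instead
// of being executed by the visitor's browser.
$safe = htmlspecialchars($query, ENT_QUOTES);

echo "<html><body>";
echo "<p>You searched for: " . $safe . "</p>";
echo "</body></html>";
?>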

II. 1. 5. Why wait to be hacked?

The observation which can be made when new stories of the latest hacks are

published is that the sites which belong to the large brands and corporations are

hacked in exactly the same way as those sites owned by businesses on a much

smaller budget. This clearly shows that lack of security is not a matter of resources, but is directly dependent on the lack of awareness among businesses of all sizes. Statistically, 42% of web applications which request security audits are vulnerable to XSS, which is clearly the most recurring high-risk exploit among all


the applications tested. The effort to raise awareness about how easy it is for an expert hacker to exploit a vulnerable application does not yet seem to be going far enough. It is still very common to see the “We’ll see when I get hacked” mentality lingering among site owners, who ultimately risk losing a lot of money and also the trust of their customers. Anybody with the interest to research this matter will see how even individuals claiming to be security experts feel comfortable stating that XSS is over-rated and cannot really be used to achieve serious results on a web application. However, further research will also prove that the statistical figures speak for themselves, and those same statistics keep growing at a rate which will eventually overshadow the claims of those incredulous “experts”.

II. 2. SQL Injection:

II. 2. 1. SQL Injection: What is it?

SQL Injection is one of the many web attack mechanisms used by hackers to

steal data from organizations. It is perhaps one of the most common application

layer attack techniques used today. It is the type of attack that takes advantage of

improper coding of your web applications that allows a hacker to inject SQL commands into, say, a login form in order to gain access to the data held within your database.

In essence, SQL Injection arises because the fields available for user input

allow SQL statements to pass through and query the database directly.

II. 2. 2. SQL Injection: An In-depth Explanation

Web applications allow legitimate website visitors to submit and retrieve data

to/from a database over the Internet using their preferred web browser. Databases

are central to modern websites – they store data needed for websites to deliver

specific content to visitors and render information to customers, suppliers,

employees and a host of stakeholders. User credentials, financial and payment

information, company statistics may all be resident within a database and accessed

by legitimate users through off-the-shelf and custom web applications. Web

applications and databases allow us to regularly run our business.


SQL Injection is the hacking technique which attempts to pass SQL

commands (statements) through a web application for execution by the backend

database. If user input is not sanitized properly, web applications may be subject to SQL Injection attacks that allow hackers to view information from the database and/or even wipe it out.

Such features as login pages, support and product request forms, feedback

forms, search pages, shopping carts and the general delivery of dynamic content,

shape modern websites and provide businesses with the means necessary to

communicate with prospects and customers. These website features are all

examples of web applications which may be either purchased off-the-shelf or

developed as bespoke programs.

These website features are all susceptible to SQL Injection attacks which arise

because the fields available for user input allow SQL statements to pass through

and query the database directly.

II. 2. 3. SQL Injection: A Simple Example

Take a simple login page where a legitimate user would enter his username

and password combination to enter a secure area to view his personal details or

upload his comments in a forum.

When the legitimate user submits his details, an SQL query is generated from

these details and submitted to the database for verification. If valid, the user is

allowed access. In other words, the web application that controls the login page

will communicate with the database through a series of planned commands so as to

verify the username and password combination. On verification, the legitimate user

is granted appropriate access.

Through SQL Injection, the hacker may input specifically crafted SQL

commands with the intent of bypassing the login form barrier and seeing what lies

behind it. This is only possible if the inputs are not properly sanitised (i.e., made

invulnerable) and sent directly with the SQL query to the database. SQL Injection

vulnerabilities provide the means for a hacker to communicate directly to the

database.


The technologies vulnerable to this attack are dynamic script languages

including ASP, ASP.NET, PHP, JSP, and CGI. All an attacker needs to perform an

SQL Injection hacking attack is a web browser, knowledge of SQL queries and

creative guesswork as to important table and field names. The sheer simplicity of

SQL Injection has fuelled its popularity.
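As a concrete illustration of the scenario above (a sketch only, not the code of the application described later in this document; the connection details, table and column names are all assumptions), a naive PHP login check might paste the submitted values straight into the SQL string:

<?php
// Illustrative sketch of a VULNERABLE login check.
$db = mysqli_connect("localhost", "webuser", "secret", "sitedb");

$user = $_POST['username'];
$pass = $_POST['password'];

// The submitted values are concatenated directly into the SQL statement:
$sql = "SELECT * FROM users WHERE username = '$user' AND password = '$pass'";
$result = mysqli_query($db, $sql);

// If an attacker types  ' OR '1'='1  into the password field, the WHERE clause
// ends with  ... AND password = '' OR '1'='1'  which is true for every row,
// so access is granted without any valid credentials.
if ($result && mysqli_num_rows($result) > 0) {
    echo "Access granted";
} else {
    echo "Access denied";
}
?>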

II. 2. 4. The impact of SQL Injection:

Once an attacker realizes that a system is vulnerable to SQL Injection, he is

able to inject SQL Query / Commands through an input form field. This is

equivalent to handing the attacker our database and allowing him to execute any

SQL command including DROP TABLE to the database!

An attacker may execute arbitrary SQL statements on the vulnerable system.

This may compromise the integrity of our database and/or expose sensitive

information. Depending on the back-end database in use, SQL injection

vulnerabilities lead to varying levels of data/system access for the attacker. It may

be possible to manipulate existing queries, to UNION (used to select related

information from two tables) arbitrary data, use subselects, or append additional

queries.

In some cases, it may be possible to read in or write out to files, or to execute

shell commands on the underlying operating system. Certain SQL Servers such as

Microsoft SQL Server contain stored and extended procedures (database server

functions). If an attacker can obtain access to these procedures, it could spell

disaster.

Unfortunately the impact of SQL Injection is only uncovered when the theft is

discovered. Data is being unwittingly stolen through various hack attacks all the

time. The most expert hackers rarely get caught.

II. 2. 5. Preventing SQL Injection attacks:

Firewalls and similar intrusion detection mechanisms provide little defense

against full-scale web attacks. Since our website needs to be public, security

mechanisms will allow public web traffic to communicate with our databases

servers through web applications. Isn’t this what they have been designed to do?


Patching our servers, databases, programming languages and operating systems is critical, but it is by no means enough on its own to prevent SQL Injection attacks.
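Inside the application code itself, the usual remedy is to keep SQL and user data strictly separate, either by escaping the input or, better, by using parameterized queries. Below is a minimal sketch of the naive login check from the earlier example rewritten with a mysqli prepared statement (the connection details, table and column names are, again, assumptions):

<?php
// Sketch of the same login check using a prepared statement: the user input is
// bound as data and can never be interpreted as SQL code.
$db = mysqli_connect("localhost", "webuser", "secret", "sitedb");

$stmt = mysqli_prepare($db,
    "SELECT 1 FROM users WHERE username = ? AND password = ?");
mysqli_stmt_bind_param($stmt, "ss", $_POST['username'], $_POST['password']);
mysqli_stmt_execute($stmt);
mysqli_stmt_store_result($stmt);

if (mysqli_stmt_num_rows($stmt) > 0) {
    echo "Access granted";
} else {
    echo "Access denied";
}
?>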

II. 4. Conclusion:

In this chapter we have presented the most common and popular hacking techniques used on the web, given some practical examples and proposed some useful solutions.


3. Kasdi Merbah University

Presentation


III. 1. Introduction :

In this chapter we’ll present Kasdi Merbah University Ouargla, which is a regional institution of higher education and research that grants academic degrees at various levels (associate, bachelor, and master) in a variety of subjects.

III. 2. Genesis and evolution of the University of Kasdi Merbah Ouargla:

The first nucleus of the University of Kasdi Merbah Ouargla was established in September 1987, and the institution has since undergone many rapid changes in its organizational and pedagogical structure: from a teachers' high school in 1987 to a university centre in 1997, and then to the University of Ouargla in 2001.

The teachers' High School was established under Decree 88/65 of 22/03/1988, and work began with bachelor's degree specializations in the exact sciences (physics, chemistry and mathematics). The school then experienced significant and rapid development of its base structures and pedagogy: under Executive Decree 91/119 of 27/04/1991 and the convention concluded between the Ministry of Higher Education and the Ministry of Education, the training centre and the Technical school were annexed to the High School, increasing its capacity considerably. The 1990/1991 academic year marked the opening of four new branches: Desert Irrigation; Automation and Information Management; a common core in Science and Technology and Exact Sciences; and a License Degree in English. Meanwhile, the number of students, which did not exceed 139 during the 1987/1988 season, rose to more than 600 in 1990/1991.

In 1997 the school became a university centre under Executive Decree No. 07/159 of 10/03/1997, and the National High Institute of Desert Farming was annexed to the Centre under Decree No. 97/337 of 10/09/1997. Five institutes were then established: the Institute of Industrial Chemistry, the Institute of Arts and Languages, the Institute of Irrigation and Desert Farming, the Exact Sciences Institute and the Institute of Social and Human Sciences.

The University of Ouargla itself was established under Decree No. 01-210 dated 23/07/2001; at that time the number of professors was 477, spread across various grades (Professor of Higher Education,


Lecturer, Assistant Professor charged with lessons, Assistant Professor, Assistant

Professor engineer).

The number of students enrolled at the university during 2005/2006 reached 17055 (333 of them in first post-graduation – Masters), spread across three faculties:

1. Faculty of Science and Science Engineering.

2. College of arts and Human Sciences.

3. Faculty of Law and Economic Sciences.

The university's structures consist of: 26 scalars, 156 lecture halls and practical-work classrooms, 28 pedagogical labs, 06 university libraries, 08 reading halls, 12 Internet halls, 01 Academy of Technology, and 01 cultivated agricultural area of 32 hectares.

The capacity of the university's structures is estimated at 15609 pedagogical seats, together with six (06) university residences, four of which are reserved for males and two for females.

III. 3. Ouargla University Administrative Management :

III. 3. 1. Rector:

The Rector is primarily responsible for the general administration of the university, has authority over all its staff and users, and represents the university in all civil life acts.

III. 3. 2. The Governing Council:

Consists of a representative of the Ministry of Higher Education and Scientific Research (Chairman), a representative of each of the related ministries, a representative of the Wali (Elwali), representatives of the key mandated sectors, representatives of the professors (one from each faculty), representatives of the administrative workers and representatives of the students; external figures may also participate in a consultative capacity.

Among the tasks of the Governing Council are:

• The deliberation of the university development schemes.

• Propose exchange programs and scientific cooperation.


• Ratification of the annual report on the activities of the university.

• The ratification of the draft of the next budget.

III. 3. 3. Scientific Council:

Consists of the Rector (Chairman), the Vice-Rectors and deans of faculties, the heads of the scientific councils of the faculties, the administrators of research units, the head librarian, representatives of the professors (two from each faculty), and two university professors belonging to another university.

One of the tasks of the Scientific Council of the University is expressing its views

and recommendations on :

• The annual schemes of formation (training) and scientific research.

• National and international scientific exchange and cooperation programs.

• The orientation of research and of scientific and technical documentation policy.

III. 3. 4. The Directorate:

Its mission is to assist the Rector in carrying out his functions, and it consists of:

• Rector of the university: Prof. Dr. / Bouterfaya Ahmed (Chairman).

• Vice Rector in charge of the higher Formation and the continuous Formation

and Diplomas : Professor Saouli Saleh.

• Vice Rector in charge of the revitalization and promotion of scientific

research and external relations and cooperation: Professor Dahou Foudile.

• Vice Rector in charge of development and Psychology and Guidance: Dr. /

Messitfa Ammar.

• In charge of communication and scientific demonstrations: Professor Khalifa

Abdelkader.

• Dean of Science & Engineer Science Faculty: Dr. Dadda Moussa Belkheir.


• Dean of Law and Economic Sciences Faculty : Dr. Bengrina Mohammed

Hamza.

• Dean of literature and Human Sciences Faculty: Dr. / Qureishi AbdelKarim.

• The University General Secretary : Mr. Botahraoui AbdelHalim.

III. 4. Conclusion:

In this chapter we have presented Kasdi Merbah University: its creation, its evolution and its administrative management, in a general manner.


4. Analysis & Conception


IV. 1. Introduction :

Conception is the most important step in a computing project. It consists in fixing the information choices and the operations manipulated in the system (it is a global view of the system).

In this chapter we’ll present the analysis and conception phases.

IV. 2. Scope statements :

IV. 2. 1. Context:

The project consists of a dynamic Web site providing an online portal for the University of Ouargla.

The most important feature of this application is to remove the distance problem, both for professors entering notes (marks) and for students consulting them, in addition to many other operations and documents.

IV. 2. 2. General description:

The work we are supposed to do is a dynamic web site that automates the note-delivering operation by the professors and preserves each student's privacy when viewing his notes; this site will be associated with a database that stores and delivers this information.

The people who will interact with our site and benefit from this automation are:

• The site administrator: the person who has full access to the database and can perform all operations, including site management.

• The Professor: has a unique user name and password that give him the ability to modify only the notes of the modules he is responsible for.

• The Student: has a unique user name and password that give him the ability to view only the notes of the modules he is studying.

• The Guest: can navigate the site without any access to the database.


IV. 2. 3. Data Description:

The entities stored in the database and their fields are listed below (an illustrative SQL definition of one of these tables is sketched after the list).

Students (students):

• Student Id (sid): integer (10).

• Student name (sname): varchar (30).

• Student birthday (sbirthday): varchar (11).

• Student inscription date (sdateins): varchar (11).

• Student gender (sgender): varchar(1)

• Student branch (sbranch): integer (5).

• Student grade (sgrade): integer (1).

• Student email (semail): varchar (100).

• Student photo (sphoto): text.

• Student password (spass): varchar (20).

• Student login counter (slogin): integer (4).

Professors (profs):

• Professor Id (pid): integer (10).

• Professor name (pname): varchar (30).

• Professor gender (gender): varchar(1)

• Professor diploma (diplome): varchar (100).

• Professor email (email): varchar (100).

• Professor password (pass): varchar (20).

• Professor login counter (login): integer (4).

Modules (modules):

• Module Id (mid): integer (10).

• Module name (mname): varchar (50).

• Module professor Id (pid): integer (10).

• Module branch (mbranch): integer (5).

Notes (notes):

• Note Id (nid): integer (20).


• Student Id (sid): integer (10).

• Module Id (mid): integer (10).

• Note (note): float.

• Exam kind (exam): varchar (10).

• Year (year): integer (4).

Branches (branches):

• Branch Id (bid): integer (5).

• Branch name (bname): varchar (100).

Classes (classes):

• Class Id (cid): integer (3).

• Class name (cname): varchar (100).

• Faculty Id (cfaculty): integer(2).

Faculties (faculties):

• Faculty Id (fid): integer (1).

• Faculty name (fname): varchar (100).
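As an illustration of how these descriptions map onto the database, the students entity above could be created in MySQL from PHP roughly as follows (a sketch only; the connection details are assumptions and the real schema may differ):

<?php
// Illustrative sketch: creating the "students" table described above.
$db = mysqli_connect("localhost", "webuser", "secret", "sitedb");

$sql = "CREATE TABLE students (
          sid       INT(10)      NOT NULL PRIMARY KEY,
          sname     VARCHAR(30)  NOT NULL,
          sbirthday VARCHAR(11),
          sdateins  VARCHAR(11),
          sgender   VARCHAR(1),
          sbranch   INT(5),
          sgrade    INT(1),
          semail    VARCHAR(100),
          sphoto    TEXT,
          spass     VARCHAR(20),
          slogin    INT(4)
        )";

if (mysqli_query($db, $sql)) {
    echo "Table students created";
}
?>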

IV. 2. 4. Operations and Treatments

Our application allows the following operations:

• The site administrator :

- Full access to all site resources.

- Add new professor.

- Add new student.

- Add new modules.

- Change professor data.

- Change student data.

- Change existing modules.

- Delete existing professor.

- Delete existing student.

- Delete existing module.

• The Professor:


- Add his students' notes.

- Change his students' notes.

- Change his personal Email.

- Change his personal password.

• The Student:

- Change his personal Email.

- Change his personal password.

IV. 4. Conclusion:

In this chapter we have described in a detailed and complete manner the different functions supported by our system, as well as the description of the data in the database. Only the implementation phase now remains in order to obtain an operational system.


5. Implementation


V. 1. Introduction :

In this chapter we’ll present our system, starting with the environment required for establishing the system, then a detailed presentation of the application.

V. 2. The workspace components :

The workspace components required for developing the system in the realisation phase can be picked from the list below:

• Operating system: Windows 9x, Windows XP, Windows NT, Windows Vista, Linux, Mac OS.

• Web server: Apache, IIS.

• MySQL database server: MySQL for Windows, MySQL for Linux.

• Administration tool for MySQL: PhpMyAdmin (web client), MyAdmin (Windows client).

• Scripting language: PHP, JavaScript.

• Web browser: Firefox, Opera, IE.

• Other tools: DSV PHP Editor, Notepad++, Yukon Dev SQL Editor.

V. 3. Implementation Tools :

For the system realisation we have used a configured PC as a local development server, then hosted the site on the net.

• PC : P4 Intel 3.6Ghz, 1Gb RAM, 80Gb Hard drive space.

• OS : Windows Xp.

• Web browser : Firefox, Opera, IE.

• XAMPP 1.6.6a which includes :

- Apache HTTPD 2.2.8 + Openssl 0.9.8g

- MySQL 5.0.51a

- PHP 5.2.5

- PHP 4.4.8


- phpMyAdmin 2.11.4

- FileZilla FTP Server 0.9.25

- Mercury Mail Transport System 4.52

• NotePad++ V4.9.1.

• DSV PHP Editor V1.0.0

• Yukon Dev SQL Editor V.1.8.16.39

• Photoshop CS3.

V. 4. What is PHP ?

PHP originally stood for Personal Home Page. It began in 1994 as a set of

Common Gateway Interface binaries written in the C programming language by

the Danish/Greenlandic programmer Rasmus Lerdorf1. Lerdorf initially created

these Personal Home Page Tools to replace a small set of Perl2 scripts he had been

using to maintain his personal homepage. The tools were used to perform tasks

such as displaying his résumé and recording how much traffic his page was

receiving. He combined these binaries with his Form Interpreter to create PHP/FI,

which had more functionality. PHP/FI included a larger C implementation and

could communicate with databases enabling the building of simple, dynamic web

applications. He released PHP publicly on June 8, 1995 to speed up the finding of

bugs and improving the code. This release was named PHP version 2 and already

had the basic functionality that PHP has today. This included Perl-like variables,

form handling, and the ability to embed HTML. The syntax was similar to Perl but

was more limited, simpler, and less consistent.

Zeev Suraski and Andi Gutmans, two Israeli developers at the Technion IIT3,

rewrote the parser in 1997 and formed the base of PHP 3, changing the language's

name to the recursive initialism PHP: Hypertext Preprocessor. The development

team officially released PHP/FI 2 in November 1997 after months of beta testing.

1 Rasmus Lerdorf (born November 22, 1968 in Qeqertarsuaq, Greenland) is a Danish-Greenlandic programmer and the creator of the PHP programming language. 2 Perl is a dynamic programming language. Perl borrows features from a variety of other languages including C, shell scripting (sh), AWK, sed and Lisp. 3 The Technion – Israel Institute of Technology is an institute of higher education in Haifa.


PHP, which stands for "Hypertext Preprocessor", is a server-side, HTML

embedded scripting language used to create dynamic Web pages. Much of its

syntax is borrowed from C, Java and Perl with some unique features thrown in.

The goal of the language is to allow Web developers to write dynamically

generated pages quickly.

In an HTML page, PHP code is enclosed within special PHP tags. When a

visitor opens the page, the server processes the PHP code and then sends the output

(not the PHP code itself) to the visitor's browser. It means that, unlike JavaScript,

you don't have to worry that someone can steal your PHP script.
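For instance, a .php page can mix static HTML with a small block of PHP between <?php and ?> tags; the visitor only ever receives the resulting HTML. A minimal illustrative sketch:

<html>
  <body>
    <h1>Welcome</h1>
    <?php
      // Executed on the server; only the echoed text reaches the browser.
      echo "<p>Today is " . date("l, d F Y") . ".</p>";
    ?>
  </body>
</html>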

PHP offers excellent connectivity to many databases including MySQL,

Informix, Oracle, Sybase, Solid, PostgreSQL, and Generic ODBC. The popular

PHP-MySQL combination (both are open-source products) is available on almost

every UNIX host. Being web-oriented, PHP also contains all the functions to do

things on the Internet - connecting to remote servers, checking email via POP3 or

IMAP, url encoding, setting cookies, redirecting, etc.

The principal PHP competitors are Perl, Microsoft Active Server Pages (ASP) and Java Server Pages (JSP). Among PHP's principal advantages:

• It is easy to understand and learn, especially for those with backgrounds in

programming such as C, javascript and HTML.

• PHP doesn’t use a lot of system resources, so it runs fast and doesn’t tend to slow other processes down.

• PHP offers many levels of security to prevent malicious attacks.

• PHP has strong connectivity abilities: it uses a modular system of extensions to interface with a variety of libraries such as graphics, XML, encryption, etc.

• PHP has tons of server interfaces, database interfaces and other modules

available. Of the server interfaces, PHP can load into Apache, IIS, Roxen,

THTTPD and AOLserver. It can also be run as a CGI module. Database interfaces

are available for MySQL, MS SQL, Informix, Oracle and plenty of others.


V. 4. 1. How PHP works:

When we go to a Web site (we’ll use http://www.ouargla-univ.dz/php/index.php as our example here), our Internet Service Provider directs our request to the server that holds the http://www.ouargla-univ.dz/php/index.php information. Because this site was designed in PHP, the server reads the PHP and processes it according to its scripted directions. In this example, the PHP code tells the server to send the appropriate Web page data to our browser. This data is in the form of HTML that the browser can display as it would a standard HTML page. In short, PHP creates an HTML page on the fly based on parameters of our choosing; the server contains no static HTML pages.

Figure 1 This graphic demonstrates how the process works between a Client,

the Server, and a PHP module (an application added to the server to increase its

functionality) to send HTML back to the browser (albeit in very simplistic terms).

All server-side technologies (ASP, for example) use some sort of third-party

module on the server to process the data that gets sent back to the client.

Figure 01

With a purely HTML-generated site, the server merely sends the HTML data to

the Web browser; there is no server-side interpretation.

Figure 2 Compare this direct relationship of how a server works with basic

HTML to that of Figure 1. This is also why HTML pages can be viewed in our

browser from our own computer since they do not need to be "served," but

dynamically generated pages need to be accessed through a server which handles

the processing.


Figure 02

To the end user and their browser, there may not be an obvious difference between what http://www.ouargla-univ.dz/php/index.php and http://www.ouargla-univ.dz/php/index.html look like, but how the pages arrived at that point is critically different. The major difference: by using PHP, we can have the server

dynamically generate the HTML code. In this example, the index.php page

referenced above displays news items that it retrieves chronologically from a

database.

V. 4. 2. The key difference between PHP and JavaScript:

The key difference between JavaScript and PHP is simple. JavaScript is

interpreted by the Web browser once the Web page that contains the script has

been downloaded. Conversely, server-side scripting languages such as PHP are

interpreted by the Web server before the page is even sent to the browser. And,

once it’s interpreted, the results of the script replace the PHP code in the Web page

itself; all the browser sees is a standard HTML file. The script is processed entirely

by the server, hence the designation: server-side scripting language.

Figure 03 : How PHP works (the client browser sends a request as input; the Web server (httpd) passes the embedded PHP code to the PHP engine running as a CGI program, which may query the MySQL database; the output returned to the browser is plain HTML).


V. 5. MySQL :

MySQL is a relational database management system (RDBMS) based on SQL

(Structured Query Language). First released in January, 1998, MySQL is now one

component of parent company MySQL AB's product line of database servers and

development tools.

Many Internet startups became interested in the original open source version of

MySQL as an alternative to the proprietary database systems from Oracle1, IBM,

and Informix. MySQL is currently available under two different licensing

agreements: free of charge, under the GNU2 General Public License (GPL) open

source system or through subscription to MySQL Network for business

applications.

MySQL runs on virtually all platforms, including Linux, Unix, and Windows. It

is fully multi-threaded using kernel3 threads4, and provides application program

interfaces (APIs) for many programming languages, including C, C++, Eiffel,

Java, Perl, PHP, Python5, and Tcl6.

MySQL is used in a wide range of applications, including data warehousing, e-

commerce, Web databases, logging applications and distributed applications. It is

also increasingly embedded in third-party software and other technologies.

According to MySQL AB, their flagship product has over six million active

MySQL installations worldwide. Customers include Cisco, Dun & Bradstreet,

Google, NASA, Lufthansa, Hyperion, and Suzuki.

1Oracle is the world's leading supplier of software for information management but it is best known for its sophisticated relational database products. 2 GNU is a Unix-like operating system that comes with source code that can be copied, modified, and redistributed. 3 The kernel is the essential center of a computer operating system, the core that provides basic services for all other parts of the operating system. 4 A thread is a sequence of responses to an initial message posting. This enables you to follow or join an individual discussion in a newsgroup from among the many that may be there. 5 Python is an interpreted, object-oriented programming language similar to Perl, that has gained popularity because of its clear syntax and readability. 6 Tcl is an interpreted script language developed by Dr. John Ousterhout at the University of California, Berkeley, and now developed and maintained by Sun Laboratories.


V. 5. 1. An Overview of MySQL Architecture:

MySQL is based on a tiered architecture, consisting of both primary subsystems

and support components that interact with each other to read, parse, and execute

queries, and to cache and return query results.

• Primary Subsystems :

The MySQL architecture consists of five primary subsystems that work

together to respond to a request made to the MySQL database server:

- The Query Engine
- The Storage Manager
- The Buffer Manager
- The Transaction Manager
- The Recovery Manager

The organization of these features is shown in Figure 4.

Figure 04 : MySQL subsystems

V. 5. 2. MySQL Characteristics:

• Atomicity :

A transaction is defined as an action, or a series of actions, that can access or

change the contents of a database. In SQL terminology, a transaction occurs when

one or more SQL statements operate as one unit. Each SQL statement in such a

unit is dependent on the others; in other words, if one statement does not complete,

the entire unit will be rolled back, and all the affected data will be returned to the

state it was in before the transaction was started. Grouping the SQL statements as

part of a single unit (or transaction) tells MySQL that the entire unit should be

executed atomically.
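To illustrate this grouping, here is a hedged sketch using PHP's mysqli extension (the connection details and the accounts table are invented for the example, and a transactional storage engine such as InnoDB is assumed):

<?php
// Sketch: run two dependent UPDATE statements as one atomic unit.
$db = new mysqli('localhost', 'user', 'password', 'university');   // hypothetical credentials
$db->autocommit(false);                 // treat the following statements as one transaction
$ok = $db->query("UPDATE accounts SET balance = balance - 10 WHERE id = 1")
   && $db->query("UPDATE accounts SET balance = balance + 10 WHERE id = 2");
if ($ok) {
    $db->commit();                      // both statements succeeded: make the changes permanent
} else {
    $db->rollback();                    // one failed: return all affected data to its previous state
}
$db->close();
?>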


• Consistency :

Consistency exists when every transaction leaves the system in a consistent

state, regardless of whether the transaction completes successfully or fails midway.

• Isolation :

Isolation implies that every transaction occurs in its own space, isolated from

other transactions that may be occurring in the system, and that the results of a

transaction are visible only once the entire sequence of events making up the

transaction has been fully executed. Even though multiple transactions may be

occurring simultaneously in such a system, the isolation principle ensures that the

effects of a particular transaction are not visible until the transaction is fully

complete.

• Durability :

Durability, which means that changes from a committed transaction persist

even if the system crashes, comes into play when a transaction has completed and

the logs have been updated in the database. Most RDBMS products ensure data

consistency by keeping a log of all activity that alters data in the database in any

way. This database log keeps track of any and all updates made to tables, queries,

reports, and so on. If the database log is turned on, using it will slow down the performance of our database when it comes to writing data. (It will not, however, affect the speed of our queries.)

V. 6. How do PHP and MySQL work together?

PHP and MySQL complement each other to do what neither can do alone. PHP can collect data, and MySQL can in turn store the information. PHP can create

dynamic calculations, and MySQL can provide it with the variables it uses. PHP

can create a shopping cart for our web store, but MySQL can then keep the data in

a format PHP can use to create receipts on demand, show current order status, or

even suggest other related products.

Although PHP and MySQL can each be used independently, when we put them

together it opens up countless possibilities for our site. As the internet progresses,

it becomes more and more necessary to deliver dynamic content to keep up with


the demands of web surfers and their desire to have information instantly delivered

to them online. By learning to use PHP and MySQL we can deliver this

information to them on demand.
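As a hedged sketch of this division of labour (the connection details and the news table with its title and body columns are assumptions made for the example, not our actual schema), PHP asks MySQL for the stored rows and formats them for the browser on demand:

<?php
// Sketch: MySQL stores the data, PHP turns it into HTML on demand.
$db = new mysqli('localhost', 'user', 'password', 'university');
if ($db->connect_error) {
    die('Could not connect to MySQL: ' . $db->connect_error);
}
$result = $db->query("SELECT title, body FROM news ORDER BY id DESC LIMIT 5");
while ($row = $result->fetch_assoc()) {
    echo '<h3>' . htmlspecialchars($row['title']) . '</h3>';
    echo '<p>'  . htmlspecialchars($row['body'])  . '</p>';
}
$db->close();
?>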

V. 7. PhpMyAdmin :

PhpMyAdmin is a set of PHP scripts for administering MySQL over the WWW through a web interface, without requiring deep knowledge of SQL queries. Currently it can :

- Create and drop databases.
- Create, copy, drop and alter tables.
- Delete, edit and add fields.
- Execute any SQL statement, even batch queries.
- Manage keys on fields.
- Load text files into tables.
- Create and read dumps of tables.
- Export and import CSV data.
- Administer one single database.

V. 8. The web Server software:

A web server is a computer program responsible for accepting HTTP requests from clients, known as web browsers, and serving them HTTP responses along with optional data content, usually web pages such as HTML documents and linked objects (images, etc.).

There are many varieties of World Wide Web server software serving different forms of data; the most popular include the Apache server, Internet Information Server, the CERN server1, the NCSA server2, GoServe3, MacHTTP4 and others.

1 CERN server : The World Wide Web daemon program, full featured, with access authorization and research tools. This daemon is also used as a basis for many other types of server and gateways. Platforms: unix, VMS.
2 NCSA server : A server for files, written in C, public domain. Many features as CERN's httpd. Platforms: unix.
3 GoServe : A server for OS/2 supporting both HTTP and Gopher, from Mike Cowlishaw of IBM UK Laboratories.
4 MacHTTP : Server for the Macintosh.


The most popular and effective Web server packages are the Apache server and Internet Information Server (IIS).

V. 8. 1. Internet Information Server:

IIS (Internet Information Server) is a group of Internet servers (including a Web

or Hypertext Transfer Protocol server and a File Transfer Protocol server) with

additional capabilities for Microsoft's Windows NT and Windows 2000 Server

operating systems. IIS is Microsoft's entry to compete in the Internet server market

that is also addressed by Apache, Sun Microsystems, O'Reilly, and others. With

IIS, Microsoft includes a set of programs for building and administering Web sites,

a search engine, and support for writing Web-based applications that access

databases. Microsoft points out that IIS is tightly integrated with the Windows NT

and 2000 Servers in a number of ways, resulting in faster Web page serving.

V. 8. 2. Apache Server:

Apache is a freely available Web server that is distributed under an "open source" license (the Apache License)1. Version 2.0 runs on most Unix-based operating systems (such as

Linux, Solaris2, Digital UNIX, and AIX3), on other UNIX/POSIX4-derived

systems (such as Rhapsody, BeOS5, and BS2000/OSD), on AmigaOS, and on

Windows 2000. According to the Netcraft (www.netcraft.com) Web server survey

in February, 2001, 60% of all Web sites on the Internet are using Apache (62%

including Apache derivatives), making Apache more widely used than all other

Web servers combined.

1 The Apache License is a free software license published by the Apache Software Foundation; it lists terms and conditions for copying, modifying and distributing the software.
2 Solaris is the computer operating system that Sun Microsystems provides for its family of Scalable Processor Architecture-based processors as well as for Intel-based processors.
3 AIX is an open operating system from IBM that is based on a version of Unix.
4 POSIX (Portable Operating System Interface) is a set of standard operating system interfaces based on the Unix operating system.
5 BeOS is a personal computer operating system that its makers describe as designed for the multimedia applications of the future.


V. 8. 3. Apache & IIS comparison:

Microsoft Internet Information Services (IIS) vs. Apache

• Server type : Web server (both).

• Latest version : IIS 6.0 ; Apache 2.2.8.

• Price details : IIS is included with all Windows Server 2003 versions ; Apache is free (0).

• Vendor : Microsoft Corp. ; Apache Software Foundation.

• Description : IIS is a Web server that works in conjunction with the Windows Server operating systems ; Apache is the predominant open source Web server.

• Feature criteria compared for both servers :

- Administration : GUI configuration, GUI setup, remote administration, SNMP configurable/monitorable.

- Future-proofing / scalability : .NET compliant, 64-bit port, cluster support, IPv6 support, J2EE 1.4 certified, J2EE 1.4 compliant.

- Other features : multiple logs, support for Microsoft ISAPI, virtual servers, web-based user interface.

- Programming / scripting : includes source, own API, own scripting/batch language, supports external scripting/batch languages.

- Security : Active Directory authentication, antispam features, antivirus features, built-in firewall capabilities, built-in proxy capabilities, internal user access scheme, LDAP authentication, other/system authentication, SSL (hardware), SSL (software).

- Support : commercial support available, forum support, free telephone support, GSA scheduled, mailing list support, service-level agreement offerings available.


V. 9. Other Scripting Languages :

V. 9. 1. JavaScript:

Javascript is a scripting language developed by Netscape1 to enable Web

authors to design interactive sites. Although it shares many of the features and

structures of the full Java language, it was developed independently. Javascript can

interact with HTML source code, enabling Web authors to spice up their sites with

dynamic content. JavaScript is endorsed by a number of software companies and is

an open language that anyone can use without purchasing a license. It is supported

by recent browsers from Mozilla, Netscape and Microsoft, though Internet Explorer implements its own dialect, which Microsoft calls JScript.

V. 9. 2. XML:

XML (eXtensible Markup Language) is a flexible way to create common

information formats and share both the format and the data on the World Wide

Web, intranets, and elsewhere. For example, computer makers might agree on a

standard or common way to describe the information about a computer product

(processor speed, memory size, and so forth) and then describe the product

information format with XML. Such a standard way of describing data would

enable a user to send an intelligent agent (a program) to each computer maker's

Web site, gather data, and then make a valid comparison. XML can be used by any

individual or group of individuals or companies that wants to share information in

a consistent way.
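To make the idea concrete, here is a hedged sketch (the XML layout and the product fields are invented for illustration) of how a PHP agent could read such a shared product description with SimpleXML:

<?php
// Sketch: parse a hypothetical XML product description.
$xml = '<product>
          <name>Example PC</name>
          <processorSpeed unit="GHz">2.4</processorSpeed>
          <memorySize unit="MB">2048</memorySize>
        </product>';
$product = simplexml_load_string($xml);       // build an object tree from the XML text
echo $product->name . "\n";                                                     // Example PC
echo $product->processorSpeed . ' ' . $product->processorSpeed['unit'] . "\n";  // 2.4 GHz
echo $product->memorySize . ' ' . $product->memorySize['unit'] . "\n";          // 2048 MB
?>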

V. 10. AJAX:

Ajax (Asynchronous JavaScript and XML) is a method of building interactive

applications for the Web that process user requests immediately. Ajax combines

several programming tools including JavaScript, dynamic HTML (DHTML),

XML, eXtensible Style Language Transformation (XSLT)2, cascading style sheets

(CSS)3, the Document Object Model (DOM)1, and the Microsoft object,

1 Netscape Communications Corporation revolutionized the computer software market by giving away its popular Navigator Web browser for free until it had acquired an overwhelming market share for this category of software.
2 XSLT : the language used in XSL style sheets to transform XML documents into other XML documents.
3 CSS : a feature added to HTML that gives both Web site developers and users more control over how pages are displayed. With CSS, designers and users can create style sheets that define how different elements, such as headers and links, appear. These style sheets can then be applied to any Web page.


XMLHttpRequest. Ajax allows content on Web pages to update immediately when

a user performs an action, unlike an ordinary HTTP request, during which users must wait

for a whole new page to load. For example, a weather forecasting site could

display local conditions on one side of the page without delay after a user types in

a zip code.
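The browser-side XMLHttpRequest call is not reproduced here; as a hedged sketch, the server-side half of such an exchange could be a small PHP script (the file name, the parameter and the data are hypothetical) that returns only the fragment to be inserted into the page:

<?php
// weather.php (hypothetical): returns just the text the page needs,
// so the browser can update one region without reloading the whole page.
$zip = isset($_GET['zip']) ? $_GET['zip'] : '';
// In a real application the conditions would come from a database or a web service.
$conditions = array('30000' => 'Sunny, 34 °C', '16000' => 'Clear, 28 °C');
header('Content-Type: text/plain; charset=utf-8');
echo isset($conditions[$zip]) ? $conditions[$zip] : 'No data for this zip code';
?>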

Ajax is not a proprietary technology or a packaged product. Web developers

have been using JavaScript and XML in combination for several years. Jesse

James Garrett2 of the consultancy firm Adaptive Path is credited with coining the

name "Ajax" as a shorthand way to refer to the specific technologies involved in a

current approach.

V. 11. The system presentation :

Now we’ll present the system and explain its principal functions.

First, the Index page allows easy navigation of our site and also gives access to the operations according to the Log-In type of each user.

Figure 05 : Index page

1 DOM, the specification for how objects in a Web page (text, images, headers, links, etc.) are represented.
2 Jesse James Garrett is an experience designer and founder of Adaptive Path, a user experience strategy and design firm. Garrett co-founded the Information Architecture Institute, and his essays have appeared in New Architect, Boxes and Arrows, and Digital Web Magazine.


V. 11. 1. Administrator space:

The Administrator is the person in charge of the whole site, including the database and its management. We’ll list the key operations that the Administrator can perform on our site.

After clicking on the Log-In link and entering the correct Administrator username and password, the following operations become available.
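As a hedged sketch of how such a Log-In check might work (the users table, its columns and the role values are illustrative assumptions, not our exact implementation), the script verifies the credentials and stores the user type in the session:

<?php
// login.php (hypothetical): check the credentials and remember the user's type.
session_start();
$db = new mysqli('localhost', 'user', 'password', 'university');
$username = $db->real_escape_string($_POST['username']);
$password = $db->real_escape_string($_POST['password']);   // in practice the password would be stored hashed
$result = $db->query("SELECT type FROM users
                      WHERE username = '$username' AND password = '$password'");
if ($row = $result->fetch_assoc()) {
    $_SESSION['type'] = $row['type'];      // e.g. 'administrator', 'professor' or 'student'
    header('Location: index.php');         // back to the Index page with the new rights
} else {
    echo 'Wrong username or password.';
}
$db->close();
?>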

• Operations on professors :

- View professors list :

This action can be performed by clicking on the Professor link in the Administrator space.

Figure 06 : View Professors list
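A hedged sketch of how such a list page could be produced follows (the professor table and its columns are assumptions made for illustration):

<?php
// professors.php (hypothetical): list all professors in an HTML table.
$db = new mysqli('localhost', 'user', 'password', 'university');
$result = $db->query("SELECT id, name, grade, email FROM professor ORDER BY name");
echo '<table>';
echo '<tr><th>Name</th><th>Grade</th><th>E-mail</th></tr>';
while ($row = $result->fetch_assoc()) {
    // Each name links to the single-professor page described below.
    echo '<tr>';
    echo '<td><a href="professor.php?id=' . (int)$row['id'] . '">'
         . htmlspecialchars($row['name']) . '</a></td>';
    echo '<td>' . htmlspecialchars($row['grade']) . '</td>';
    echo '<td>' . htmlspecialchars($row['email']) . '</td>';
    echo '</tr>';
}
echo '</table>';
$db->close();
?>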


- Add new professor :

This action can be performed by clicking on the Add link under the professors list table.

Figure 07 : Add a new Professor

- View single professor data :

This action can be performed by clicking on the professor name in the list

table.


Figure 08 : View single Professor data

- Change existing professor data :

This action can be performed by clicking on the Edit Icon on the top of the

table.


Figure 09 : Edit existing Professor data

- Delete existing professor :

This action can be performed by clicking on the Delete icon.

Figure 10 : Delete existing professor


• Operations on students :

The operations on students are the same as the ones performed on the professors.

V. 11. 2. Professor space:

- View professor personal data :

This action can be performed by clicking on the My space link after the Log-In operation.

Figure 11 : View professor personal data

- Edit professor personal data :

This action can be performed by clicking on the Edit Icon on the top of the

table.


Figure 12 : Edit professor personal data

- View student notes list :

This action can be performed by clicking on the Module name in the table.


Figure 13 : View student notes list

- Add student notes :

This action can be performed by clicking on the Add link on the bottom of

the table.


Figure 14 : Add student notes
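A hedged sketch of what the underlying insertion could look like (the note table and its columns are illustrative assumptions, not our exact schema):

<?php
// add_note.php (hypothetical): record a student's note for a module.
$db = new mysqli('localhost', 'user', 'password', 'university');
$student = (int) $_POST['student_id'];
$module  = (int) $_POST['module_id'];
$note    = (float) $_POST['note'];
$db->query("INSERT INTO note (student_id, module_id, value)
            VALUES ($student, $module, $note)");
echo ($db->affected_rows == 1) ? 'Note saved.' : 'Error: ' . $db->error;
$db->close();
?>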

- Edit student notes :

This action can be performed by clicking on the Edit Icon on the right of the

table.


Figure 15 : Edit student notes

V. 11. 3. Student space:

- View student personal data & notes :

This action can be performed by clicking on the My space link after the Log-

In operation.


Figure 16 : View student personal data & notes

- Edit student personal data :

This action can be performed by clicking on the Edit Icon on the top of the

table.


Figure 17 : Edit student personal data

V. 11. 4. The guest:

The guest is a person who can navigate through the site without any permission to modify the database resources, but he can make use of the site contents, from news and documents to participating in the polls that are made to interact with the visitors.


Figure 18 : Guest news page

• The Site test :

The site was successfully hosted on the servers of Freehostia, which is a free web hosting provider with interesting features.

V. 12. Conclusion:

In this chapter, we have described in a detailed and complete manner the components and features of the site environment and the implementation tools, and introduced the different interfaces of our application.


General Conclusion


General Conclusion :

The realization of this project was beneficial for us, as it allowed us to gain experience and competence in many web development tools such as PHP, JavaScript, HTML, and MySQL…

We’re also hoping that our system will be integrated into the University of Kasdi Merbah web site, and we note that our site is online and hosted on a free hosting provider under the URL: http://ouargla-univ.freehostia.com.


Bibliography & webography


Bibliography & webography

1. Build Your Own Database Driven Website Using PHP & MySQL, 3rd Edition, Kevin Yank, SitePoint, 2006.

2. PHP and MySQL Web Development For Dummies, Janet Valade, Wiley Publishing, 2008.

3. PHP Manual, PHP development team, PHP Documentation Group, 2007.

4. http://www.acunetix.com/

5. http://www.buildwebsite4u.com

6. http://www.developpez.com

7. http://www.devshed.com

8. http://kb.iu.edu/

9. http://www.learnthenet.com

10. http://www.livinginternet.com

11. http://www.searchwinit.com

12. http://www.serverwatch.com

13. SitePoint forums, http://www.sitepoint.com/forums/

14. http://www.tcpipguide.com

15. http://www.webopedia.com

16. http://www.zeltser.com
