What Is HTTP?
In this post I will explain what HTTP is.
HTTP (Hypertext Transfer Protocol) is the network protocol standard that web browsers and servers use to communicate. It's easy to recognize when visiting a website because it's written right at the start of the URL (e.g. https://eztechnicalbots.blogspot.com/ ).
This protocol is similar to others like FTP in that it's used by a client program to request files from a remote server. In the case of HTTP, it's usually a web browser that requests HTML files from a web server, which are then displayed in the browser with text, images, hyperlinks, etc.
HTTP is what's called a "stateless" protocol. What this means is that, unlike a file transfer protocol such as FTP that keeps a session open while you work, each HTTP request is self-contained and the server retains no memory of previous requests. In the simplest case, your web browser sends the request, the server responds with the page, and the exchange is complete.
Since most web browsers default to HTTP, you can type just the domain name and have the browser auto-fill the "http://" portion.
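To make the idea concrete, here is a minimal sketch of the kind of exchange a browser performs, written with the Fetch API; it assumes a modern browser or Node.js 18+ runtime, and the URL is only an example.

// Minimal sketch: the client sends an HTTP request and the server answers
// with the HTML document. Assumes the global fetch API is available.
async function loadPage(url: string): Promise<void> {
  const response = await fetch(url);          // the HTTP request goes out
  const html = await response.text();         // the HTTP response body comes back
  console.log(response.status, html.length);  // e.g. 200 and the page size
}

loadPage("https://example.com/").catch(console.error);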
Components of HTTP-based systems
HTTP is a client-server protocol: requests are sent by one entity, the user-agent (or a proxy on behalf of it). Most of the time the user-agent is a Web browser, but it can be anything, for example a robot that crawls the Web to populate and maintain a search engine index.
Each individual request is sent to a server, which will handle it and provide an answer, called the response. Between this request and response there are numerous entities, collectively designated as proxies, which perform different operations and act as gateways or caches, for example.
In reality, there are more computers between a browser and the server handling the request: there are routers, modems, and more. Thanks to the layered design of the Web, these are hidden in the network and transport layers. HTTP is on top at the application layer. Although important to diagnose network problems, the underlying layers are mostly irrelevant to the description of HTTP.
Client: the user-agent
The user-agent is any tool that acts on behalf of the user. This role is primarily performed by the Web browser, with a few exceptions such as programs used by engineers and Web developers to debug their applications.
The browser is always the entity initiating the request. It is never the server (though some mechanisms have been added over the years to simulate server-initiated messages).
To present a Web page, the browser sends an original request to fetch the HTML document of the page. It then parses this file, making additional requests corresponding to execution scripts, layout information (CSS) to display, and sub-resources contained within the page (usually images and videos). The Web browser then combines these resources to present the user with a complete document, the Web page. Scripts executed by the browser can fetch more resources in later phases, and the browser updates the Web page accordingly.
A Web page is a hypertext document. This means some parts of the displayed text are links, which can be activated (usually by a click of the mouse) to fetch a new Web page, allowing the user to direct their user-agent and navigate through the Web. The browser translates these directions into HTTP requests, and further interprets the HTTP responses to present the user with a clear response.
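Here is a rough sketch of that fetch-parse-fetch cycle; it is not a real browser engine (the regex extraction of image URLs is a deliberate simplification), and it assumes the global fetch API is available.

// Fetch the HTML document, then request the sub-resources it references.
async function renderLikeABrowser(pageUrl: string): Promise<void> {
  const html = await (await fetch(pageUrl)).text();  // initial request for the document

  // Find <img src="..."> references (simplified; a real browser parses the DOM).
  const imageUrls = [...html.matchAll(/<img[^>]+src="([^"]+)"/g)]
    .map((match) => new URL(match[1], pageUrl).href);

  // Each sub-resource is fetched with its own HTTP request.
  await Promise.all(imageUrls.map((src) => fetch(src)));
}

renderLikeABrowser("https://example.com/").catch(console.error);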
The Web server
On the opposite side of the communication channel is the server, which serves the document as requested by the client. A server appears, virtually, as only a single machine: it may actually be a collection of servers sharing the load (load balancing), or a complex piece of software interrogating other computers (such as a cache, a DB server, or e-commerce servers), totally or partially generating the document on demand.
A server is not necessarily a single machine, but several servers can be hosted on the same machine. With HTTP/1.1 and the Host header, they may even share the same IP address.
Proxies
Between the Web browser and the server, numerous computers and machines relay the HTTP messages. Due to the layered structure of the Web stack, most of these operate at the transport, network or physical levels, becoming transparent at the HTTP layer and potentially having a significant impact on performance. Those operating at the application layer are generally called proxies. These can be transparent, or not (changing requests going through them), and may perform numerous functions (a minimal proxy sketch follows the list):
- caching (the cache can be public or private, like the browser cache)
- filtering (like an antivirus scan, parental controls, …)
- load balancing (to allow multiple servers to serve the different requests)
- authentication (to control access to different resources)
- logging (allowing the storage of historical information)
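As a sketch of what an application-layer proxy does, here is a minimal forward proxy built on Node's http module; the port is arbitrary, and real proxies layer caching, filtering, authentication and logging on top of this basic relay loop.

import * as http from "node:http";

// A forward proxy receives requests whose request line carries an absolute URL
// (e.g. "GET http://example.com/ HTTP/1.1"), relays them upstream, and streams
// the response back to the client.
const proxy = http.createServer((clientReq, clientRes) => {
  const target = new URL(clientReq.url ?? "http://localhost/");
  console.log(`${clientReq.method} ${target.href}`); // logging, one typical proxy duty

  const upstream = http.request(
    {
      hostname: target.hostname,
      port: target.port || 80,
      path: target.pathname + target.search,
      method: clientReq.method,
      headers: clientReq.headers,
    },
    (upstreamRes) => {
      // Relay status, headers, and body back to the original client.
      clientRes.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
      upstreamRes.pipe(clientRes);
    }
  );
  clientReq.pipe(upstream); // forward any request body
});

proxy.listen(8080); // illustrative port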
Basic aspects of HTTP
HTTP is simple
Even with the extra complexity introduced in HTTP/2 by encapsulating HTTP messages into frames, HTTP is generally designed to be simple and human-readable. HTTP messages can be read and understood by humans, providing easier testing for developers and reduced complexity for newcomers.
HTTP is extensible
Introduced in HTTP/1.0, HTTP headers made this protocol easy to extend and experiment with. New functionality can even be introduced by a simple agreement between a client and a server about a new header's semantics.
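For instance, a client and server that agree on the meaning of a made-up header can start using it straight away; in the sketch below the header name X-Example-Trace and the URL are hypothetical.

// The custom header only has meaning because this client and server agreed on
// it; HTTP itself just carries it alongside the standard headers.
async function fetchWithTrace(): Promise<void> {
  const response = await fetch("https://example.com/api/items", {
    headers: {
      "X-Example-Trace": "checkout-flow-42", // custom, mutually agreed semantics
      Accept: "application/json",            // standard header
    },
  });
  console.log(response.headers.get("X-Example-Trace")); // whatever the server echoed back
}

fetchWithTrace().catch(console.error);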
HTTP is stateless, but not sessionless
HTTP is stateless: there is no link between two requests being successively carried out on the same connection. This immediately has the prospect of being problematic for users attempting to interact with certain pages coherently, for example, using e-commerce shopping baskets. But while the core of HTTP itself is stateless, HTTP cookies allow the use of stateful sessions. Using header extensibility, HTTP Cookies are added to the workflow, allowing session creation on each HTTP request to share the same context, or the same state.
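A minimal sketch of that workflow, assuming a Node.js server: the first response carries a Set-Cookie header, and the cookie the client sends back on later requests lets the server tie them to the same session. The cookie name and in-memory store are illustrative.

import * as http from "node:http";
import * as crypto from "node:crypto";

const sessions = new Map<string, { visits: number }>(); // per-session state

http
  .createServer((req, res) => {
    // Look for a session cookie sent by the client with this request.
    const id = /sessionId=([^;]+)/.exec(req.headers.cookie ?? "")?.[1];

    if (id && sessions.has(id)) {
      const session = sessions.get(id)!;
      session.visits += 1;
      res.end(`Welcome back, visit number ${session.visits}`);
    } else {
      // First request: create a session and ask the client to remember its id.
      const newId = crypto.randomUUID();
      sessions.set(newId, { visits: 1 });
      res.setHeader("Set-Cookie", `sessionId=${newId}; HttpOnly`);
      res.end("Hello, new session created");
    }
  })
  .listen(8080); // illustrative port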
HTTP and connections
A connection is controlled at the transport layer, and is therefore fundamentally out of scope for HTTP. HTTP doesn't require the underlying transport protocol to be connection-based; it only requires it to be reliable, or not lose messages (at minimum, presenting an error). Among the two most common transport protocols on the Internet, TCP is reliable and UDP isn't. HTTP therefore relies on the TCP standard, which is connection-based, even though HTTP itself does not strictly require a connection-based transport.
HTTP/1.0 opened a TCP connection for each request/response exchange, introducing two major flaws: opening a connection needs several round-trips of messages and is therefore slow, and a connection only becomes more efficient when several messages are sent over it, and sent regularly: warm connections are more efficient than cold ones.
In order to mitigate these flaws, HTTP/1.1 introduced pipelining (which proved difficult to implement) and persistent connections: the underlying TCP connection can be partially controlled using the Connection header. HTTP/2 went a step further by multiplexing messages over a single connection, helping keep the connection warm and more efficient.
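A small sketch of connection reuse with Node's http module: a keep-alive agent holds the underlying TCP connection open so successive requests share it. The host and paths are illustrative.

import * as http from "node:http";

const agent = new http.Agent({ keepAlive: true, maxSockets: 1 }); // reuse one warm connection

function get(path: string): Promise<number> {
  return new Promise((resolve, reject) => {
    http
      .get({ host: "example.com", path, agent }, (res) => {
        res.resume();                    // drain the body
        resolve(res.statusCode ?? 0);
      })
      .on("error", reject);
  });
}

async function main(): Promise<void> {
  console.log(await get("/"));       // opens the TCP connection
  console.log(await get("/about"));  // reuses the same, already-warm connection
  agent.destroy();                   // close the idle connection when done
}

main().catch(console.error);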
Experiments are in progress to design a better transport protocol more suited to HTTP. For example, Google is experimenting with QUIC which builds on UDP to provide a more reliable and efficient transport protocol.
What can be controlled by HTTP
This extensible nature of HTTP has, over time, allowed for more control and functionality of the Web. Cache or authentication methods were functions handled early in HTTP history. The ability to relax the origin constraint, by contrast, has only been added in the 2010s.
Here is a list of common features controllable with HTTP; a sketch of the corresponding headers follows the list.
- Cache
How documents are cached can be controlled by HTTP. The server can instruct proxies and clients what to cache and for how long. The client can instruct intermediate cache proxies to ignore the stored document.
- Relaxing the origin constraint
To prevent snooping and other privacy invasions, Web browsers enforce strict separation between Web sites. Only pages from the same origin can access all the information of a Web page. Though such a constraint is a burden to the server, HTTP headers can relax this strict separation server-side, allowing a document to become a patchwork of information sourced from different domains (there could even be security-related reasons to do so).
- Authentication
Some pages may be protected so only specific users can access them. Basic authentication may be provided by HTTP, either using the WWW-Authenticate and similar headers, or by setting a specific session using HTTP cookies.
- Proxy and tunneling
Servers and/or clients are often located on intranets and hide their true IP address from others. HTTP requests then go through proxies to cross this network barrier. Not all proxies are HTTP proxies: the SOCKS protocol, for example, operates at a lower level. Other protocols, like FTP, can be handled by these proxies.
- Sessions
Using HTTP cookies allows you to link requests with the state of the server. This creates sessions, despite basic HTTP being a stateless protocol. This is useful not only for e-commerce shopping baskets, but also for any site allowing user configuration of the output.
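As a sketch of how these features surface on the wire, the following Node.js server sets representative headers for caching, cross-origin access, sessions, and a basic authentication challenge; the values and port are illustrative rather than recommendations.

import * as http from "node:http";

http
  .createServer((req, res) => {
    res.setHeader("Cache-Control", "public, max-age=3600");               // cache
    res.setHeader("Access-Control-Allow-Origin", "https://app.example");  // relaxed origin constraint
    res.setHeader("Set-Cookie", "sessionId=abc123; HttpOnly");            // session

    if (!req.headers.authorization) {
      // Authentication: challenge the client to supply credentials.
      res.statusCode = 401;
      res.setHeader("WWW-Authenticate", 'Basic realm="example"');
      res.end("Authentication required");
      return;
    }

    res.end("Hello, authenticated user");
  })
  .listen(8080);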
HTTP flow
When the client wants to communicate with a server, either the final server or an intermediate proxy, it performs the following steps (a socket-level sketch follows):
- Open a TCP connection: The TCP connection will be used to send a request, or several, and receive an answer. The client may open a new connection, reuse an existing connection, or open several TCP connections to the servers.
- Send an HTTP message: HTTP messages (before HTTP/2) are human-readable. With HTTP/2, these simple messages are encapsulated in frames, making them impossible to read directly, but the principle remains the same.
GET / HTTP/1.1
Host: developer.mozilla.org
Accept-Language: fr
- Read the response sent by the server:
HTTP/1.1 200 OK
Date: Sat, 09 Oct 2010 14:28:02 GMT
Server: Apache
Last-Modified: Tue, 01 Dec 2009 20:18:22 GMT
ETag: "51142bc1-7449-479b075b2891b"
Accept-Ranges: bytes
Content-Length: 29769
Content-Type: text/html

<!DOCTYPE html... (here come the 29769 bytes of the requested web page)
- Close or reuse the connection for further requests.
If HTTP pipelining is activated, several requests can be sent without waiting for the first response to be fully received. HTTP pipelining has proven difficult to implement in existing networks, where old pieces of software coexist with modern versions. HTTP pipelining has been superseded in HTTP/2 by more robust multiplexing of requests within a frame.
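To make the flow concrete, here is a socket-level sketch in Node.js that performs the steps by hand: it opens a TCP connection, writes the raw request shown above, prints the response, and lets the connection close. The Connection: close header asks the server to end the exchange; real clients normally keep the connection open for reuse, and this particular server may well answer with a redirect rather than the page itself.

import * as net from "node:net";

// Step 1: open a TCP connection to the server on port 80.
const socket = net.connect(80, "developer.mozilla.org", () => {
  // Step 2: send a human-readable HTTP/1.1 request.
  socket.write(
    "GET / HTTP/1.1\r\n" +
      "Host: developer.mozilla.org\r\n" +
      "Accept-Language: fr\r\n" +
      "Connection: close\r\n" + // ask the server to close after responding
      "\r\n"                    // blank line marks the end of the headers
  );
});

// Step 3: read the response (status line, headers, then the body).
socket.on("data", (chunk) => process.stdout.write(chunk));

// Step 4: the connection is closed rather than reused.
socket.on("end", () => console.log("\n-- connection closed --"));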
HTTP Messages
HTTP/1.1 and earlier HTTP messages are human-readable. In HTTP/2, these messages are embedded into a new binary structure, a frame, allowing optimizations like compression of headers and multiplexing. Even if only part of the original HTTP message is sent in this version of HTTP, the semantics of each message is unchanged and the client reconstitutes (virtually) the original HTTP/1.1 request. It is therefore useful to comprehend HTTP/2 messages in the HTTP/1.1 format.
There are two types of HTTP messages, requests and responses, each with its own format.
Requests
An example HTTP request is the GET message shown in the HTTP flow section above. Requests consist of the following elements (a sketch follows the list):
- An HTTP method, usually a verb like GET or POST, or a noun like OPTIONS or HEAD, that defines the operation the client wants to perform. Typically, a client wants to fetch a resource (using GET) or post the value of an HTML form (using POST), though more operations may be needed in other cases.
- The path of the resource to fetch; the URL of the resource stripped of elements that are obvious from the context, for example without the protocol (http://), the domain (here, developer.mozilla.org), or the TCP port (here, 80).
- The version of the HTTP protocol.
- Optional headers that convey additional information for the servers.
- Or a body, for some methods like POST, similar to those in responses, which contains the resource sent.
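The sketch below maps these elements onto a request made with the Fetch API; the URL, path and form fields are illustrative.

async function submitForm(): Promise<void> {
  const response = await fetch("https://example.com/contact", {
    method: "POST",                           // the HTTP method
    // The path sent on the wire is "/contact"; the protocol, domain and port
    // come from the URL and end up in the request line and Host header.
    headers: {
      "Content-Type": "application/x-www-form-urlencoded", // optional headers
    },
    body: "name=Alice&message=hello",         // the body carrying the sent resource
  });
  console.log(response.status);
}

submitForm().catch(console.error);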
Responses
An example response is the 200 OK message shown in the HTTP flow section above. Responses consist of the following elements (a sketch follows the list):
- The version of the HTTP protocol they follow.
- A status code, indicating if the request has been successful, or not, and why.
- A status message, a non-authoritative short description of the status code.
- HTTP headers, like those for requests.
- Optionally, a body containing the fetched resource.
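The sketch below reads each of these elements from a response using the Fetch API; the URL is the one used in the flow example above.

async function inspectResponse(): Promise<void> {
  const res = await fetch("https://developer.mozilla.org/");

  console.log(res.status);                      // status code, e.g. 200
  console.log(res.statusText);                  // status message, e.g. "OK"
  console.log(res.headers.get("content-type")); // one of the HTTP headers
  const body = await res.text();                // the fetched resource (the body)
  console.log(body.length);
  // The protocol version is negotiated by the browser or runtime and is not
  // exposed through the Fetch API.
}

inspectResponse().catch(console.error);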
APIs based on HTTP
The most commonly used API based on top of HTTP is the XMLHttpRequest API, which can be used to exchange data between a user agent and a server.
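A minimal sketch of using XMLHttpRequest in a browser context; the URL is illustrative, and modern code often reaches for fetch instead.

const xhr = new XMLHttpRequest();
xhr.open("GET", "https://example.com/api/data"); // asynchronous by default
xhr.responseType = "json";

xhr.onload = () => {
  if (xhr.status === 200) {
    console.log(xhr.response);                   // parsed JSON body
  } else {
    console.error(`Request failed with status ${xhr.status}`);
  }
};
xhr.onerror = () => console.error("Network error");

xhr.send(); // the exchange happens over HTTP behind the scenes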
Another API, server-sent events, is a one-way service that allows a server to send events to the client, using HTTP as a transport mechanism. Using the EventSource interface, the client opens a connection and establishes event handlers. The client browser automatically converts the messages that arrive on the HTTP stream into appropriate Event objects, delivering them to the event handlers that have been registered for the event's type if known, or to the onmessage event handler if no type-specific event handler was established.
How HTTP Works
HTTP is an application layer protocol built on top of TCP that uses a client-server communication model. HTTP clients and servers communicate via HTTP request and response messages. Three of the most common HTTP request methods are GET, POST, and HEAD (a sketch of all three follows the list).
- HTTP GET messages sent to a server contain only a URL. Zero or more optional data parameters may be appended to the end of the URL. The server processes the optional data portion of the URL, if present, and returns the result (a web page or element of a web page) to the browser.
- HTTP POST messages place any optional data parameters in the body of the request message rather than adding them to the end of the URL.
- An HTTP HEAD request works the same way as a GET request, except that instead of replying with the full contents of the URL, the server sends back only the header information, not the body of the resource.
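The sketch below exercises all three methods with the Fetch API; the endpoint and parameters are illustrative.

async function demonstrateMethods(): Promise<void> {
  const base = "https://example.com/search"; // hypothetical endpoint

  // GET: optional data travels in the URL as query parameters.
  const getRes = await fetch(`${base}?q=http&page=1`);
  console.log(getRes.status);

  // POST: data travels in the body of the request instead of the URL.
  const postRes = await fetch(base, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: "q=http&page=1",
  });
  console.log(postRes.status);

  // HEAD: same as GET, but the server returns headers only, no body.
  const headRes = await fetch(base, { method: "HEAD" });
  console.log(headRes.headers.get("content-length")); // metadata without the body
}

demonstrateMethods().catch(console.error);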
The browser initiates communication with an HTTP server by opening a TCP connection to the server. Web browsing sessions use server port 80 by default, although other ports such as 8080 are sometimes used instead.
Once a session is established, the user triggers the sending and receiving of HTTP messages by visiting the web page.