web development – What is the need for methods such as GET and POST in the HTTP protocol?

Please note that the question has changed / been clarified since this reply was first written. Another answer, to the latest iteration of the question, follows the second horizontal rule.

What is the need for methods such as GET and POST in the HTTP protocol?

Together with a few other elements, such as header formats and the rules for separating headers from bodies, they form the basis of the HTTP protocol.

Can't we implement the HTTP protocol by simply using a request body and a response body?

No, because what you created would not be HTTP.

For example, the URL would contain a query, which would be mapped to a function in whatever server-side language is used, for example a servlet. A response consisting of HTML and JavaScript would then be transmitted back.

Congratulations, you have come up with a new protocol! Now, if you want to set up a standards body to manage it, maintain it, develop it, and so on, it might even overtake HTTP one day.

Not to be too provocative, but there is nothing magical about the internet, TCP/IP, or communication between servers and clients. You open a connection and send words down the wire, forming a conversation. The conversation really must adhere to some specification ratified at both ends if the requests are to be understood and reasonable answers delivered. This is no different from any dialogue in the world. You speak English; your neighbor speaks Chinese. Hopefully your hand waving, pointing, and fist shaking will be enough to convey the message that you do not want him to park his car in front of your house.

Back on the Internet, if you open a socket on an HTTP-enabled Web server and send the following:

EHLO
AUTH LOGIN

(the beginning of an SMTP email transmission), you will not get a reasonable answer. You could create the most perfectly compliant SMTP client, but your web server will not talk to it, because conversation is all about the shared protocol – no shared protocol, no joy.

This is why you cannot implement the HTTP protocol without implementing the HTTP protocol; if what you write does not comply with the protocol, then it is simply not the protocol – it is something else, and it will not work as the protocol specifies.

If we run with your example for a moment: the client connects and simply states something that looks like a URL, and the server understands it and just sends back something that looks like HTML/JS (a web page). Of course that could work. But what did you save? A few bytes by not saying GET? The server saved a few as well – but what if you cannot work out what it sent you? What if you requested a URL ending in .jpeg and were sent bytes that make up an image, but a PNG? An incomplete PNG at that. If only we had a header that said how many bytes it was going to send, we would know whether the number of bytes we received corresponds to the whole file or not. What if the server compressed the response to save bandwidth but did not tell you? You are going to burn a lot of computing power trying to figure out what it sent.
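To make that concrete, here is a minimal TypeScript sketch (using the standard fetch API, available in modern browsers and Node 18+; the URL is made up) of a client reading exactly that metadata from the response headers instead of guessing from the bytes:

const res = await fetch("https://example.com/picture"); // hypothetical URL
console.log(res.headers.get("content-type"));     // e.g. "image/png" – no guessing from the bytes
console.log(res.headers.get("content-length"));   // compare against bytes received to detect truncation
console.log(res.headers.get("content-encoding")); // e.g. "gzip" – if present, the body was compressed, and the server said so up front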

At the end of the day, we need meta-information – information about information; we need headers; files need names, extensions, and creation dates. We need people to have birthdays, to say please and thank you, and so on – the world is full of protocols and contextual information, so that we do not have to sit down and negotiate everything from scratch every time. It costs a little storage space, but it is well worth it.


Is the implementation of different HTTP methods really necessary?

It can be argued that it is not necessary to implement the entire specified protocol, and that is usually true of anything. I do not know every word in the English language. My Chinese neighbor is also a software developer, but in a different industry, and he does not even know the Chinese for certain terms used in my industry, let alone the English. The good thing is, though, that we can both take a document on implementing HTTP, he can write the server and I can write the client, in different programming languages on different architectures, and they will still work, because they adhere to the protocol.

It is possible that no client of yours will ever send anything other than a GET request, never use persistent connections, never send anything other than JSON as a body, and never accept anything other than text/plain in response, so you could write a really cut-down web server that serves only the very limited requests your client browsers will make. But you cannot arbitrarily decide to strip out the basic rules that turn "text over a socket" into "HTTP"; you cannot abandon the basic notion that the request will be a string like:

VERB URL VERSION
header: value

maybe_body

And the response will have a version, a status code, and maybe some headers. If you change any of that, it is no longer HTTP – it is something else, and it will only work with something designed to understand it. HTTP is what it is by these definitions, so if you want to implement it, you have to follow them.
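If you want to watch that conversation happen by hand, here is a minimal TypeScript sketch (Node's built-in net module; example.com stands in for any HTTP server) that speaks exactly the shape described above over a raw socket:

import * as net from "net";

// Open a TCP connection and speak HTTP by hand.
const socket = net.connect(80, "example.com", () => {
  socket.write(
    "GET / HTTP/1.1\r\n" +    // VERB URL VERSION
    "Host: example.com\r\n" + // header: value
    "Connection: close\r\n" +
    "\r\n"                    // blank line; then maybe_body (none for a GET)
  );
});

// The response follows the same discipline: VERSION STATUS REASON, then headers, blank line, body.
socket.on("data", (chunk) => process.stdout.write(chunk.toString()));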


Update

Your question has evolved a little, so here is an answer to what you are asking now:

Why does the HTTP protocol have a notion of methods?

Historically, you have to understand that design and implementation were much more rigid – to the extent that scripting did not exist, and even the idea that pages could be dynamic, generated on the fly in memory and pushed into the socket rather than being static files on disk that the client requested and the server read and pushed into the socket, did not exist. As such, the early web was centered on the concept of static pages containing links to other pages. All the pages already existed on disk, and navigation was performed mostly by the client issuing GET requests for pages at a given URL; the server would map the URL to a file on disk and send it. There was also the notion that the web of documents linking to one another should be able to evolve and grow. It therefore made sense to create a set of methods that allowed authorized users with the appropriate credentials to update the web without necessarily having access to the server's file system – hence the use cases behind PUT and DELETE. Other methods, such as HEAD, returned only meta-information about a document, so that the client could decide whether to retrieve it (remember, these were the days of dial-up modems, really slow and primitive technology). It could be a great saving to fetch the metadata of a half-megabyte file, find that it had not changed, and use the locally cached copy instead of downloading it again.

This gives the methods their historical context: in the past, the URL was the inflexible bit – it simply referred to a page on disk – so the method was useful because it let the client describe its intent for that file, and the server could process the file differently depending on the method. There was no real notion of virtual URLs, or of URLs being used for switching or mapping, in the original vision of a hypertext web (and it was text only).

I do not want this answer to become a documented history with dates and cited references showing exactly when things started to change – for that you can probably read Wikipedia – but suffice it to say that, over time, the desire for the web to be more dynamic grew, and at each end of the server-client connection the opportunities to create a rich multimedia experience developed. Browsers supported a huge proliferation of tags for formatting content, with each vendor seeking to implement compelling multimedia features and new ways to make things flashier.

Scripting came to the client side, along with plug-ins and browser extensions, all intended to make the browser a powerful engine for everything. On the server side, actively generating content based on algorithms or database data became essential, and it continues to grow; these days there are probably few plain files on disk – sure, we still keep an image or a script as a file on the web server and the browser fetches it, but increasingly the images the browser displays and the scripts it runs are not files you can open in your file explorer; they are generated as the result of a compilation process performed on demand: SVG describing how to draw pixels rather than an array of pixels, or JavaScript emitted from a higher-level language such as TypeScript.

When modern multi-megabyte pages are created, only a fraction of that content is now stored on disk; database data is formatted into the HTML the browser consumes, produced by the server by several different programming routines that are referenced in one way or another by the URL.

I mentioned in the comments on the question that it has come a bit full circle. Back when computers cost hundreds of thousands of dollars and filled rooms, it was common to let multiple users share one very powerful central computer through hundreds of dumb terminals – a keyboard and a green screen: send text in, get text out. As computing power grew and prices fell, people began to buy desktops more powerful than the early mainframes, able to run powerful applications locally, and the mainframe model became obsolete. It never disappeared, however, because things simply evolved and swung back the other way: to a central server providing most of the useful features of the application, and a hundred or so client computers that do little more than draw on the screen and send input to, and receive output from, the server. That interim period, when your computer was smart enough to run its own copy of Word and Outlook, has given way to Office Online, where your browser is a device for drawing pictures on the screen, and the document or email you are writing is a thing that lives on the server: it is saved there, sent and shared with other users from there, and the browser is just a shell providing a partial view, at any given moment, of this thing that lives elsewhere.

From the answers, I understand a little better why the concept of methods is there. This leads to another related question:

For example, when the Gmail compose link is used, a PUT/POST request with the data will be sent. How does the browser come to know which method to use?

It uses GET by default, by convention/specification, because that is what is expected to happen when you type a URL and press Enter.

Does the Gmail page sent by the server include the name of the method to use when making a Gmail compose request?

This is one of the key things I talked about in the comments above. On the modern web, it is not even really about pages anymore. Once, pages were files on disk that the browser would fetch. Then they became pages mostly generated dynamically by pouring data into a template. But it was still a "request a new page from the server, get a page, display the page" process. The swapping of pages just became really smooth; you did not see them load, resize, and shuffle their layout, so it looked seamless, but the browser was still replacing one page, or part of a page, with another.

The modern way of doing things is the single-page application: the browser has a document in memory that is rendered in some way; script makes calls to the server, retrieves a nugget of information, and manipulates the document so that part of the page visually changes to show the new information – all without the browser ever loading a new page. It has simply become a user interface in which parts are updated, like a classic client application such as Word or Outlook. New items appear on top of other items, simulated dialog windows can be dragged around, and so on. All of this is the browser's scripting engine sending requests using whatever HTTP method the developer chose, retrieving the data, and poking at the document the browser has drawn. You can think of the modern browser as a brilliant device, something like a complete operating system or virtual machine: a programmable device with a fairly standardized way of drawing things on the screen, playing sound, capturing user input, and sending it off for processing. All you have to do to draw your user interface is give it HTML/CSS that renders as a UI, then keep adjusting that HTML so the browser changes what it shows. Heck, people are so used to seeing the address bar change as they navigate that single-page applications change the URL programmatically even when no navigation (no request for a new page) is actually carried out.
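To illustrate – a hedged sketch, not Gmail's actual code; the /api/drafts/1234 endpoint is invented – this is the kind of call a page's script makes. The method is whatever the developer wrote, not something the user chooses:

// Running inside a page's script: the script, not the user, picks the verb and the endpoint.
await fetch("/api/drafts/1234", {
  method: "PUT", // the developer's chosen HTTP method
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ subject: "Hello", text: "Draft body" }),
});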

When we call www.gmail.com, we must use the GET method – how does the browser know that this is the method to use?

True. Because it is specified. The first request is as it always was – GET the initial HTML that draws a user interface, then poke at it and manipulate it forever, or GET another page with another script that pokes, manipulates, and builds a responsive interface.

As some answers say, we could even create new users with the DELETE method, which raises the question of the intent behind the notion of a method in the HTTP protocol, because ultimately what the server does depends on whichever function its developers choose to associate with a URL. Why should the client tell the server which method to use for a URL?

History. Legacy. We could theoretically throw out all the HTTP methods tomorrow. We are at a level of programming sophistication where methods are obsolete in the sense that the URL itself can be the switching mechanism that tells the server you want to save the data in the body as a draft email, or delete a draft – there is no file /emails/draft/save/1234 on the server; the server is programmed to pick this URL apart and know what to do with the body data: save it as the draft email with ID 1234.
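As a sketch of that idea (the routes and port are invented, and the handler is deliberately method-agnostic), a server can switch on the URL alone:

import { createServer } from "http";

// The URL alone acts as the switch; no such files exist on disk.
createServer((req, res) => {
  if (req.url === "/emails/draft/save/1234") {
    // consume the request body here and store it as draft 1234
    res.end("saved");
  } else if (req.url === "/emails/draft/delete/1234") {
    res.end("deleted");
  } else {
    res.statusCode = 404;
    res.end("unknown action");
  }
}).listen(8080);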

It would certainly be possible to remove the methods, were it not for the enormous weight of legacy compatibility surrounding them. Best to use the ones you need, ignore the rest for the most part, and use whatever makes your thing work. We still need the methods as specified, because you have to remember that they have meaning to the browser and the server on which we build our applications. If a client-side script wants to use the underlying browser to send data, it must use a method that will make the browser do what it asks – probably POST, because GET wraps all its variable information into the URL, which has a limited length on a lot of servers. If the client wants a long response from the server, it should not use HEAD, because HEAD responses are not supposed to have a body. The browser and server you have chosen may not enforce these restrictions, but perhaps one day each of them will encounter a different implementation at the other end, and in a spirit of interoperability, sticking to the specification improves the chances of things working.
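For instance, here is a sketch of the HEAD use case mentioned earlier (the URL is invented, and it assumes the server sends an ETag header): fetch only the metadata, and download the body only if the cached copy is stale:

const previouslySeenEtag: string | null = null; // would come from a local cache in real code
// HEAD returns the same headers as GET would, but no body.
const head = await fetch("https://example.com/big-file.bin", { method: "HEAD" });
if (head.headers.get("etag") !== previouslySeenEtag) {
  // the file changed – only now is it worth downloading the whole thing
}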

AJAX error (HTTP 200) in Views Field View

I have just installed the Views Field View module (https://www.drupal.org/project/views_field_view), which allows adding an existing view as a field in another view.

But when I want to add the view field, I get an AJAX error (HTTP 200).

And the field configuration form that appears is unusable.

Web Development – How does the browser know which HTTP method, such as GET, POST, etc., to use?

I had asked a related question below and, from the answers, I understood why there is a notion of methods such as GET, POST, and so on in the HTTP protocol:

What is the need for methods such as GET and POST in the HTTP protocol?

This raises another question: how does the browser know which HTTP method, such as GET or POST, to use? For example, when the Gmail compose link is used, a PUT/POST request with the data will be sent. How does the browser come to know which method to use? Does the Gmail page sent by the server include the name of the method to use when calling the Gmail compose request URL? When we call www.gmail.com, we must use the GET method – how does the browser know that this is the method to use?

http://localhost/img.png or ./img.png – what is the best way to write the HTML img tag?

<img src="http://localhost/img.png">

or

<img src="./img.png">

Which is the best way to do it?

exchange – Autodiscover with HTTP redirection works with all clients except iOS

I have an Exchange 2016 server with Autodiscover.
Receiving and sending mail works like a charm, and Autodiscover works too – except on iOS clients.

If I add the mailbox in Outlook 2019 or 2016, the settings are found through Autodiscover.
I can add the same mailbox in Apple Mail and Autodiscover works as well; I just need to fill in the username/email address and password.
But as soon as I try to add this mailbox on an iOS device (12.4.1), whether an iPhone or an iPad, it cannot find the settings automatically.

Is there a difference between Autodiscover for macOS and iOS?

linux – Unable to access HTTP inside another LXC container on the same host

I use LXC to host some of my services. One of them – container A, say – is nginx running as a reverse proxy. Container B runs another service that needs to access container A through the host's domain (example.com).

Now, for nginx to work, I set up an iptables PREROUTING rule to forward traffic from host ports 80 and 443 to container A. From the outside world everything works fine, but container B fails with "Connection refused" when I try to access example.com.

j & # 39; uses lxc-net to set up networking in both containers (they are both unprivileged, by the way). I also do not use IPv6 to facilitate this initial setup.

Here is my iptables config; 10.0.3.10 is my container A:

*filter
:INPUT ACCEPT (0:0)
:FORWARD ACCEPT (0:0)
:OUTPUT ACCEPT (0:0)
-A INPUT -i lxcbr0 -p tcp -m tcp --dport 53 -j ACCEPT
-A INPUT -i lxcbr0 -p udp -m udp --dport 53 -j ACCEPT
-A INPUT -i lxcbr0 -p tcp -m tcp --dport 67 -j ACCEPT
-A INPUT -i lxcbr0 -p udp -m udp --dport 67 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 443 -j ACCEPT
-A FORWARD -o lxcbr0 -j ACCEPT
-A FORWARD -i lxcbr0 -j ACCEPT
-A FORWARD -i eth0 -p tcp -m tcp --dport 80 -j ACCEPT
-A FORWARD -i eth0 -p tcp -m tcp --dport 443 -j ACCEPT
-A FORWARD -d 10.0.3.10/32 -i lxcbr0 -p tcp -m tcp --dport 80 -j ACCEPT
-A FORWARD -d 10.0.3.10/32 -i lxcbr0 -p tcp -m tcp --dport 443 -j ACCEPT
COMMIT

*nat
:PREROUTING ACCEPT (0:0)
:INPUT ACCEPT (0:0)
:POSTROUTING ACCEPT (0:0)
:OUTPUT ACCEPT (0:0)
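# note: these DNAT rules match only packets arriving on eth0 (from outside), not on lxcbr0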
-A PREROUTING -i eth0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.0.3.10:80
-A PREROUTING -i eth0 -p tcp -m tcp --dport 443 -j DNAT --to-destination 10.0.3.10:443
-A POSTROUTING -s 10.0.3.0/24 ! -d 10.0.3.0/24 -j MASQUERADE
COMMIT

*mangle
:PREROUTING ACCEPT (0:0)
:INPUT ACCEPT (0:0)
:FORWARD ACCEPT (0:0)
:OUTPUT ACCEPT (0:0)
:POSTROUTING ACCEPT (0:0)
-A POSTROUTING -o lxcbr0 -p udp -m udp --dport 68 -j CHECKSUM --checksum-fill
COMMIT

I've tried adding -A PREROUTING -i lxcbr0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.0.3.10:80, but this only caused container B's network clients to hang when attempting to connect to the host / container A via HTTP.

What should I change in my iptables rules to allow access to the HTTP server on container A from other containers via the host's domain?

Kerberos over HTTP documentation

The NTLM over HTTP documentation is available here https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-ntht/f09cf6e1-529e-403b-a8a5-7368ee096a6a

Where is the official documentation for Kerberos over HTTP?

I want to install Drupal 8 and access it at http://<site>/drupal rather than at the root

I want to install Drupal 8 and access it at http://<site>/drupal rather than at the root.
But it does not respect the "/drupal/" prefix for most of the admin section.
Can anyone please help?

Support for partial file downloads via HTTP/REST

I have a client/server architecture in which the client has to periodically download large files from the server. Suppose the client downloads 9 GB of a 10 GB file and temporarily loses its internet connection. When the client regains the connection, I want it to download the remaining 1 GB of the file rather than having to download the whole thing again.

Is there a platform, library, framework, etc. that handles this? If not, does anyone have an idea of the best way to tackle this problem?

For reference, my current configuration is:

Server – Python / Django / Django REST Framework

Client – Android / iOS / C++
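From what I have read, HTTP Range requests (RFC 7233) look like the standard building block here: the client asks for bytes from its current offset onward, and a compliant server replies 206 Partial Content with only the missing tail. A minimal TypeScript sketch of the idea (URL and path are placeholders; a real 10 GB transfer would stream rather than buffer in memory, and this only works if whatever serves the file actually honors the Range header):

import { promises as fs } from "fs";

// Resume a download by asking only for the bytes we do not have yet.
async function resumeDownload(url: string, path: string): Promise<void> {
  const have = (await fs.stat(path).catch(() => ({ size: 0 }))).size;
  const res = await fetch(url, { headers: { Range: `bytes=${have}-` } });
  if (res.status === 206) {
    // 206 Partial Content: append just the missing tail
    await fs.appendFile(path, Buffer.from(await res.arrayBuffer()));
  } else if (res.status === 200) {
    // the server ignored Range: fall back to rewriting the whole file
    await fs.writeFile(path, Buffer.from(await res.arrayBuffer()));
  } else {
    throw new Error(`Unexpected status ${res.status}`);
  }
}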

xss – how to make an HTTP request in a JSONP callback?

I'm attempting an XSS challenge.
I found an exploit that bypasses the CSP using a JSONP callback.
I can get an alert to appear by injecting something like:


But I cannot get it to send an HTTP request.

I've tried injecting functions that change window.location, but that does not seem to execute any of my anonymous functions.

Thank you