What would be a good way for a server to collect data sent by clients as URL requests?

Situation:

I manage a set of Linux-based computers: laptops and desktops running mostly Ubuntu 16.04 LTS and Debian 9.x. (This is for about 120 users, and the number is increasing.)

I use /etc/rc.local and crontab to have curl retrieve scripts from a web server; those scripts contain commands to download and execute additional scripts, install packages, modify configurations, etc.
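
For context, the cron side currently looks roughly like this (the URL, the schedule and the script name below are just placeholders, not the real ones):

    # /etc/cron.d/fetch-maintenance (illustrative entry only)
    */30 * * * *  root  curl -fsS http://webserver.example.com/maintenance.sh | sh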

I modify the script files on the web server, and the next time a machine boots or the cron job runs, it retrieves the current script file and performs the tasks specified in it; the tasks are written to be idempotent (see the example below).
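
For example, a fetched script might contain steps like this (the package name is only an example), so running it repeatedly does no harm:

    # install htop only if it is not already installed (idempotent)
    dpkg -s htop >/dev/null 2>&1 || apt-get install -y htop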

(I also use Ansible for ad hoc changes.)

Question: I've heard that a web client can inform a web server just by requesting a specially crafted URL, and that the server can populate a queryable database from those requests, so that information about all the URLs requested by the web clients over HTTP becomes available.

For example:

  • curl http://webserver.example.com/$VARIABLE

or

  • curl http://webserver.example.com/$OUTPUTOFSCRIPT
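
To make these examples concrete, on the client side I imagine something like this (the /report path and the parameter names are made up; curl's -G and --data-urlencode simply turn the values into a URL-encoded query string appended to the URL):

    # illustrative only: report the hostname and some script output via the URL
    VALUE="$(uname -r)"                      # whatever output should be reported
    curl -fsS -G "http://webserver.example.com/report" \
         --data-urlencode "host=$(hostname)" \
         --data-urlencode "value=${VALUE}"

Nothing would need to change on the clients beyond the URL they fetch; the server only has to record the request.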

What would it take for the web server to store these requests and make them easily and orderly retrievable / searchable, like a database? In other words, fill a database with:

  • hostname / web client IP
  • timestamp of URL retrieval
  • the variable that was in the URL (for example: $VARIABLE, $OUTPUTOFSCRIPT)
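
To illustrate the kind of thing I have in mind (not necessarily the best way), I could imagine periodically importing the web server's access log into SQLite, since the log already contains the client IP, the timestamp and the requested path. The paths and table layout below are examples only:

    #!/bin/sh
    # Rough sketch: pull client IP, timestamp and requested path out of a
    # standard "combined" access log and load them into SQLite for querying.
    LOG=/var/log/nginx/access.log
    DB=/var/lib/urlreports/requests.db
    TMP=$(mktemp)

    # combined log line: IP - - [timestamp] "METHOD /path HTTP/1.x" status size ...
    awk '{ gsub(/\[|\]/, "", $4); print $1 "|" $4 "|" $7 }' "$LOG" > "$TMP"

    # create the table if needed, then bulk-import the pipe-separated rows
    sqlite3 "$DB" 'CREATE TABLE IF NOT EXISTS requests (client_ip TEXT, ts TEXT, path TEXT);'
    printf '.mode list\n.separator "|"\n.import %s requests\n' "$TMP" | sqlite3 "$DB"

    rm -f "$TMP"

A query such as SELECT * FROM requests WHERE path LIKE '/report%'; would then list everything the clients reported, but maybe there is a more suitable tool for this.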

(Maybe there are better methods; those are also welcome, open source only please.)