16-bit is only used for internal calculations. What you actually see on your monitor is 8-bit. Since an 8-bit monitor can only display 256 levels (0-255) and cannot show the difference between two 16-bit values that both convert to the same 8-bit value, the GUI uses 8-bit values. (Even if you use a 10-bit monitor and graphics card, Photoshop only uses 8 bits per channel to display colors.)
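As a rough illustration, here is a minimal Python sketch (assuming the common shift-by-8 conversion from 16-bit to 8-bit, not any particular application's exact pipeline) of how an entire block of 256 distinct 16-bit levels collapses into a single 8-bit display level:

```python
def to_display_8bit(v16):
    """Convert a 16-bit channel value (0-65535) to an 8-bit value (0-255)."""
    return v16 >> 8            # same as integer division by 256

a, b = 32768, 32900            # two different 16-bit levels
print(to_display_8bit(a), to_display_8bit(b))   # 128 128 -> indistinguishable on screen
print(65536 // 256)            # 256 -> sixteen-bit levels hidden behind each 8-bit step
```

However finely the edit is computed internally, the monitor only ever receives the coarser value on the left of that conversion.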
If your monitor could differentiate all 65,536 levels, it still couldn't draw that detailed a curve on your screen with only 1080 (FHD), 1440 (QHD), or even 4320 (8K) vertical lines of resolution to work with. Not to mention that your input device is also limited to far less detail than that, and even if it weren't, your hands and fingers operating a mouse, stylus, scroll wheel, etc. can't move precisely enough to take advantage of anywhere near that many unique positions.
The main purpose of using 16-bit values internally is to reduce the effect of rounding errors when multiple operations are performed on the raw data. This reduces artifacts such as banding and posterization when the response curves are stretched or compressed. The actual output will always end up at a much lower bit depth, because our display systems are far from being able to make such fine distinctions between colors.
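A toy demonstration of that rounding effect (a minimal sketch using NumPy and an invented round-trip edit, not any real editor's code): compress a smooth gradient to a quarter of its tonal range, quantize the intermediate result at either 8-bit or 16-bit precision, stretch it back, and count how many distinct output levels survive. Fewer surviving levels means visible banding.

```python
import numpy as np

gradient = np.linspace(0.0, 1.0, 256)   # an ideal smooth ramp with 256 steps

def round_trip(ramp, bits):
    """Darken to 25% of the range, quantize the intermediate to `bits`,
    then brighten back and render as 8-bit output."""
    levels = 2 ** bits - 1
    compressed = np.round(ramp * 0.25 * levels) / levels        # edit + rounding
    restored = np.clip(np.round(compressed * 4.0 * 255), 0, 255)  # final 8-bit output
    return np.unique(restored).size

print("8-bit intermediate :", round_trip(gradient, 8), "distinct output levels")
print("16-bit intermediate:", round_trip(gradient, 16), "distinct output levels")
# Roughly 65 levels vs 256: the 16-bit pipeline hands the full smooth ramp
# back to the 8-bit display, while the 8-bit pipeline has already thrown
# most of the distinctions away.
```

The display still only ever shows 8-bit values; the extra internal precision just ensures that what reaches those 8 bits hasn't been degraded along the way.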
The two highest-voted answers to "What is the point of capturing 14-bit images and editing them on 8-bit monitors?" delve deeper into the underlying principles of why it makes sense to keep the finest increments until the image is rendered on a screen. There is no need to repeat it all here.