Azure SQL Data Warehouse – Databricks table inserts: how does it write out the files?

I have reviewed the links below

https://docs.databricks.com/spark/latest/spark-sql/language-manual/sql-ref-syntax-ddl-create-table-datasource.html

https://docs.databricks.com/spark/latest/spark-sql/language-manual/sql-ref-syntax-dml-insert-into.html

So I am aware of the syntax options for CREATE TABLE:

CREATE TABLE [ IF NOT EXISTS ] table_identifier
    [ ( col_name1 col_type1 [ COMMENT col_comment1 ], ... ) ]
    [ USING data_source ]
    [ OPTIONS ( key1 [ = ] val1, key2 [ = ] val2, ... ) ]
    [ PARTITIONED BY ( col_name1, col_name2, ... ) ]
    [ LOCATION path ]
    [ COMMENT table_comment ]
    [ TBLPROPERTIES ( key1 [ = ] val1, key2 [ = ] val2, ... ) ]
    [ AS select_statement ]

It is my understanding that Databricks tables are essentially just “pointers” to some other source, such as a folder of CSV or Parquet files.
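
For example, this is the kind of “pointer” table I have in mind. A minimal sketch, with a made-up mount path, of an unmanaged/external table where the data files already live in a folder and the table is just metadata on top of them:

CREATE TABLE students_external (name VARCHAR(64), address VARCHAR(64), student_id INT)
    USING PARQUET
    -- hypothetical path; the table simply reads whatever Parquet files are already in this folder
    LOCATION '/mnt/mydata/students/';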

So I cannot work out how Databricks would actually “write” the data in the example below, where no LOCATION is given:

CREATE TABLE students (name VARCHAR(64), address VARCHAR(64), student_id INT)
    USING PARQUET PARTITIONED BY (student_id);

INSERT INTO students VALUES
    ('Amy Smith', '123 Park Ave, San Jose', 111111);
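
If it helps to explain where I am stuck: I assume I could run something like the commands below after the insert to see where the data ended up (the table name is from the example above), but that still would not tell me how that location gets chosen or how the files inside it are laid out.

-- Shows table metadata, including a Location row pointing at the folder where the
-- Parquet files were written (presumably somewhere under the default warehouse
-- directory, since no LOCATION was specified in the CREATE TABLE)
DESCRIBE TABLE EXTENDED students;

-- I assume each partition value gets its own sub-folder, e.g. student_id=111111/
SHOW PARTITIONS students;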

Would it create a bunch of Parquet files?

How would it know where to put them?

How would it know how many Parquet files to create?