## education – Why is something always missing from my understanding of a subject, leading me to solve problems incorrectly?

I am a master's student in computer science, but I come from an engineering background rather than CS. My problem is that whenever I face a problem, a programming task, or an exam question, I struggle to understand the question and reason toward the right answer; I usually get stuck or arrive at a wrong answer. When I seek help, I find that I had not completely understood the subject of the question itself: I was missing some of the information, or had misunderstood certain parts.
So my question is: how should I approach a CS subject (operating systems, for example) and build an understanding with the right depth, so that I can do better on programming tasks and exams?

## java – Understanding gradle.properties and JAVA_OPTS

I want to understand how gradle.properties and JAVA_OPTS are used when building an Android project.

I know what `-Xmx`, `-Xms`, and `-Xss` are:

- `-Xmx` specifies the maximum heap size
- `-Xms` specifies the initial Java heap size
- `-Xss` sets the size of the Java thread stack

I have a Dockerfile for the build. I run it with a low memory limit:

```
docker run -m=256M -it android-image
```

The contents of the gradle.properties file:

```
org.gradle.jvmargs=-Xmx1536m
```

I want to trigger a `java.lang.OutOfMemoryError`. Then I will change the contents of the gradle.properties file and the JAVA_OPTS variable, to understand how gradle.properties and JAVA_OPTS work.

But the build succeeds (peak RAM usage reaches 99% during the build).

How can I produce an `OutOfMemoryError` during the build?
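A sketch of one way to set this up (my own addition, not from the post; flag placement is an assumption). Note that `org.gradle.jvmargs` configures the Gradle daemon JVM, while `JAVA_OPTS` affects the client JVM that launches Gradle. Also note that with `-m=256M` and `-Xmx1536m`, the heap limit sits far above the container limit, so when memory runs out the kernel's OOM killer typically terminates the process (exit code 137) rather than the JVM throwing `java.lang.OutOfMemoryError`. To see the Java-level error, make the JVM heap the binding constraint:

```
# Sketch: pass a deliberately small Gradle heap from outside the image,
# so the heap fills up before the container's 256M cgroup limit is hit.
# System properties passed this way should override gradle.properties
# (image name and exact values are assumptions).
docker run -m=256M -it \
  -e GRADLE_OPTS="-Dorg.gradle.jvmargs=-Xmx64m" \
  android-image
```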

## mysql – Understanding an INSERT … ON DUPLICATE KEY UPDATE deadlock scenario

I am trying to understand a scenario where an `INSERT ... ON DUPLICATE KEY UPDATE` statement causes deadlocks under high concurrency.

The two tables in question:

hosts:

```
CREATE TABLE `hosts` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `osquery_host_id` varchar(255) NOT NULL,
  `created_at` timestamp DEFAULT CURRENT_TIMESTAMP,
  `updated_at` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `deleted_at` timestamp NULL DEFAULT NULL,
  `deleted` tinyint(1) NOT NULL DEFAULT FALSE,
  `detail_update_time` timestamp NULL DEFAULT NULL,
  `node_key` varchar(255) DEFAULT NULL,
  `host_name` varchar(255) NOT NULL DEFAULT '',
  `uuid` varchar(255) NOT NULL DEFAULT '',
  `platform` varchar(255) NOT NULL DEFAULT '',
  `osquery_version` varchar(255) NOT NULL DEFAULT '',
  `os_version` varchar(255) NOT NULL DEFAULT '',
  `build` varchar(255) NOT NULL DEFAULT '',
  `platform_like` varchar(255) NOT NULL DEFAULT '',
  `code_name` varchar(255) NOT NULL DEFAULT '',
  `uptime` bigint(20) NOT NULL DEFAULT 0,
  `physical_memory` bigint(20) NOT NULL DEFAULT 0,
  `cpu_type` varchar(255) NOT NULL DEFAULT '',
  `cpu_subtype` varchar(255) NOT NULL DEFAULT '',
  `cpu_brand` varchar(255) NOT NULL DEFAULT '',
  `cpu_physical_cores` int NOT NULL DEFAULT 0,
  `cpu_logical_cores` int NOT NULL DEFAULT 0,
  `hardware_vendor` varchar(255) NOT NULL DEFAULT '',
  `hardware_model` varchar(255) NOT NULL DEFAULT '',
  `hardware_version` varchar(255) NOT NULL DEFAULT '',
  `hardware_serial` varchar(255) NOT NULL DEFAULT '',
  `computer_name` varchar(255) NOT NULL DEFAULT '',
  `primary_ip_id` INT(10) UNSIGNED DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `idx_host_unique_nodekey` (`node_key`),
  UNIQUE KEY `idx_osquery_host_id` (`osquery_host_id`),
  FULLTEXT KEY `hosts_search` (`host_name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
```

network_interfaces:

```
CREATE TABLE `network_interfaces` (
  `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
  `host_id` INT(10) UNSIGNED NOT NULL,
  `mac` varchar(255) NOT NULL DEFAULT '',
  `ip_address` varchar(255) NOT NULL DEFAULT '',
  `broadcast` varchar(255) NOT NULL DEFAULT '',
  `ibytes` BIGINT NOT NULL DEFAULT 0,
  `interface` VARCHAR(255) NOT NULL DEFAULT '',
  `ipackets` BIGINT NOT NULL DEFAULT 0,
  `last_change` BIGINT NOT NULL DEFAULT 0,
  `mask` varchar(255) NOT NULL DEFAULT '',
  `metric` INT NOT NULL DEFAULT 0,
  `mtu` INT NOT NULL DEFAULT 0,
  `obytes` BIGINT NOT NULL DEFAULT 0,
  `ierrors` BIGINT NOT NULL DEFAULT 0,
  `oerrors` BIGINT NOT NULL DEFAULT 0,
  `opackets` BIGINT NOT NULL DEFAULT 0,
  `point_to_point` varchar(255) NOT NULL DEFAULT '',
  `type` INT NOT NULL DEFAULT 0,
  PRIMARY KEY (`id`),
  FOREIGN KEY `idx_network_interfaces_hosts_fk` (`host_id`)
    REFERENCES hosts(id)
    ON DELETE CASCADE,
  FULLTEXT KEY `ip_address_search` (`ip_address`),
  UNIQUE KEY `idx_network_interfaces_unique_ip_host_intf` (`ip_address`, `host_id`, `interface`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
```

The latest detected deadlock:

```
------------------------
LATEST DETECTED DEADLOCK
------------------------
2020-01-20 00:09:06 0x2b033abd2700
*** (1) TRANSACTION:
TRANSACTION 78516922, ACTIVE 0 sec inserting
mysql tables in use 1, locked 1
LOCK WAIT 5 lock struct(s), heap size 1136, 3 row lock(s), undo log entries 2
MySQL thread id 286926, OS thread handle 47297573750528, query id 1045761878 10.107.51.236 username update
INSERT INTO network_interfaces (
host_id,
mac,
ip_address,
broadcast,
ibytes,
interface,
ipackets,
last_change,
mask,
metric,
mtu,
obytes,
ierrors,
oerrors,
opackets,
point_to_point,
type
) VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
ON DUPLICATE KEY UPDATE
id = LAST_INSERT_ID(id),
mac = VALUES(mac),
broadcast = VALUES(broadcast),
ibytes = VALUES(ibytes),
ipackets = VALUES(ipackets),
last_change = VALUES(last_change),
mask = VALUES(mask),
metric = VALUES(metric),
mtu = VALUES(mtu),
obytes = VALUES(obytes),
ierrors = VALUES(ierrors),
oerrors = VALUES(oerrors),
opackets = VALUES(opackets),
point_to_point = VALUES(point_to_point),
type = VALUES(type)
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 258 page no 2729 n bits 408 index FTS_DOC_ID_INDEX of table `kolide`.`network_interfaces` trx id 78516922 lock_mode X insert intention waiting
Record lock, heap no 1 PHYSICAL RECORD: n_fields 1; compact format; info bits 0
0: len 8; hex 73757072656d756d; asc supremum;;

*** (2) TRANSACTION:
TRANSACTION 78516915, ACTIVE 0 sec inserting
mysql tables in use 1, locked 1
18 lock struct(s), heap size 1136, 33 row lock(s), undo log entries 12
MySQL thread id 281276, OS thread handle 47292870371072, query id 1045761879 10.107.78.241 username update
INSERT INTO network_interfaces (
host_id,
mac,
ip_address,
broadcast,
ibytes,
interface,
ipackets,
last_change,
mask,
metric,
mtu,
obytes,
ierrors,
oerrors,
opackets,
point_to_point,
type
) VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
ON DUPLICATE KEY UPDATE
id = LAST_INSERT_ID(id),
mac = VALUES(mac),
broadcast = VALUES(broadcast),
ibytes = VALUES(ibytes),
ipackets = VALUES(ipackets),
last_change = VALUES(last_change),
mask = VALUES(mask),
metric = VALUES(metric),
mtu = VALUES(mtu),
obytes = VALUES(obytes),
ierrors = VALUES(ierrors),
oerrors = VALUES(oerrors),
opackets = VALUES(opackets),
point_to_point = VALUES(point_to_point),
type = VALUES(type)
*** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 258 page no 2729 n bits 408 index FTS_DOC_ID_INDEX of table `kolide`.`network_interfaces` trx id 78516915 lock_mode X
Record lock, heap no 1 PHYSICAL RECORD: n_fields 1; compact format; info bits 0
0: len 8; hex 73757072656d756d; asc supremum;;

*** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 258 page no 2729 n bits 408 index FTS_DOC_ID_INDEX of table `kolide`.`network_interfaces` trx id 78516915 lock_mode X insert intention waiting
Record lock, heap no 1 PHYSICAL RECORD: n_fields 1; compact format; info bits 0
0: len 8; hex 73757072656d756d; asc supremum;;

*** WE ROLL BACK TRANSACTION (1)
``````

The program starts a transaction, updates a host row, and, within that same transaction, loops through all of the host's interfaces and issues an INSERT … ON DUPLICATE KEY UPDATE statement for each interface. As I understand it, because the transaction begins with an UPDATE (exclusive) lock on the hosts table, another transaction cannot update the same host. So I don't think this is a scenario where two connections are trying to update the same set of host interfaces (which could easily deadlock).

I think it could be due to different host updates competing on the AUTO_INCREMENT index of network_interfaces? I just don't understand how, even after reading the MySQL documentation on locking. I understand that transaction 1 is waiting for an exclusive insert-intention lock, and transaction 2 holds an exclusive lock and is also waiting for an exclusive insert-intention lock. What I don't understand is why transaction 2 holds the exclusive lock (`lock_mode X`) to begin with.
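Two hedged observations, going beyond the question text. First, the log shows both transactions contending on `FTS_DOC_ID_INDEX`, the hidden index that backs the `FULLTEXT KEY ip_address_search`; InnoDB FULLTEXT indexes are known to serialize concurrent inserts, so that index may be the real source of the contention. Second, whatever the root cause, InnoDB resolves a deadlock by rolling back one transaction (here, transaction 1), so the standard mitigation is for the application to retry the rolled-back transaction. A generic, hypothetical retry sketch (the real application code is not shown in the question, and `DeadlockError` stands in for whatever exception the MySQL driver raises for error 1213):

```python
import random
import time

class DeadlockError(Exception):
    """Placeholder for the driver's ER_LOCK_DEADLOCK (1213) error."""
    pass

def run_with_retry(txn_fn, max_attempts=3):
    """Run txn_fn, retrying with a short randomized backoff on deadlock."""
    for attempt in range(1, max_attempts + 1):
        try:
            return txn_fn()
        except DeadlockError:
            if attempt == max_attempts:
                raise
            time.sleep(random.uniform(0, 0.05 * attempt))

# Toy usage: a "transaction" that deadlocks once, then succeeds on retry.
calls = {"n": 0}
def flaky_txn():
    calls["n"] += 1
    if calls["n"] == 1:
        raise DeadlockError("Deadlock found when trying to get lock")
    return "committed"

print(run_with_retry(flaky_txn))  # committed
```

The backoff randomization matters: if both competing transactions retry immediately, they can simply deadlock again.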

## exploit – Understanding the Windows CRYPT32.DLL vulnerability PoC (CVE-2020-0601)

Kudelski Security published an interesting explanation of the current CVE-2020-0601 vulnerability and how it can potentially be exploited.

After reading this, I understand the basics of what was wrong with the Windows implementation and how the PoC is supposed to work. The site also provides a PoC setup in which they generate a certificate signed with a rogue private key for a known certificate authority (produced by manipulating the curve parameter `G`, given the CA's known public key).

I downloaded the generated certificate and used OpenSSL to view its details:

```
$ openssl x509 -inform der -in cert.crt -text
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
13:96:a7:9a:d9:71:d8:47:c3:fe:89:b2:b7:b6:57:40:28:9b:38:01
Signature Algorithm: ecdsa-with-SHA256
Issuer: C=CH, ST=Vaud, L=Lausanne, O=Kudelski Security PoC, OU=Research Team, CN=github.com
Validity
Not Before: Jan 16 00:03:54 2018 GMT
Not After : Oct 12 00:03:54 2020 GMT
Subject: C=CH, ST=Vaud, L=Lausanne, O=Kudelski Security, CN=github.com
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
04:c6:54:aa:2c:11:14:b6:f5:c4:39:ea:80:95:7b:
2c:b3:76:b0:90:f5:17:ec:7d:d6:48:6e:cd:63:58:
cb:80:71:6b:bc:97:f5:26:4d:d0:7f:7b:cf:cb:05:
0c:24:f3:29:55:5d:52:1d:74:2d:89:78:d9:9d:91:
96:12:c5:cb:be
ASN1 OID: prime256v1
NIST CURVE: P-256
X509v3 extensions:
X509v3 Subject Alternative Name:
DNS:*.kudelskisecurity.com, DNS:*.microsoft.com, DNS:*.google.com, DNS:*.wouaib.ch
Signature Algorithm: ecdsa-with-SHA256
30:65:02:31:00:f9:1b:4a:7b:d5:01:4d:f4:e3:42:5a:17:8c:
45:6f:39:ce:fd:ec:38:04:f0:78:93:84:5d:db:9c:db:41:07:
a3:97:cf:f3:6d:f6:8b:7b:38:5b:95:4e:a7:1f:9e:4a:0e:02:
30:08:29:0e:f2:d8:9c:e3:e4:15:67:b7:22:f6:de:80:56:18:
01:a0:d8:3e:28:ec:6c:bf:2a:28:a2:8f:fb:8a:b7:1e:c7:8f:
25:36:22:cd:86:1d:bf:6d:fa:fd:0f:a0:6f
-----BEGIN CERTIFICATE-----
MIICTzCCAdWgAwIBAgIUE5anmtlx2EfD/omyt7ZXQCibOAEwCgYIKoZIzj0EAwIw
fDELMAkGA1UEBhMCQ0gxDTALBgNVBAgMBFZhdWQxETAPBgNVBAcMCExhdXNhbm5l
MR4wHAYDVQQKDBVLdWRlbHNraSBTZWN1cml0eSBQb0MxFjAUBgNVBAsMDVJlc2Vh
cmNoIFRlYW0xEzARBgNVBAMMCmdpdGh1Yi5jb20wHhcNMTgwMTE2MDAwMzU0WhcN
MjAxMDEyMDAwMzU0WjBgMQswCQYDVQQGEwJDSDENMAsGA1UECAwEVmF1ZDERMA8G
A1UEBwwITGF1c2FubmUxGjAYBgNVBAoMEUt1ZGVsc2tpIFNlY3VyaXR5MRMwEQYD
VQQDDApnaXRodWIuY29tMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAExlSqLBEU
tvXEOeqAlXsss3awkPUX7H3WSG7NY1jLgHFrvJf1Jk3Qf3vPywUMJPMpVV1SHXQt
iXjZnZGWEsXLvqNRME8wTQYDVR0RBEYwRIIWKi5rdWRlbHNraXNlY3VyaXR5LmNv
bYIPKi5taWNyb3NvZnQuY29tggwqLmdvb2dsZS5jb22CCyoud291YWliLmNoMAoG
CCqGSM49BAMCA2gAMGUCMQD5G0p71QFN9ONCWheMRW85zv3sOATweJOEXduc20EH
o5fP8232i3s4W5VOpx+eSg4CMAgpDvLYnOPkFWe3IvbegFYYAaDYPijsbL8qKKKP
+4q3HsePJTYizYYdv236/Q+gbw==
-----END CERTIFICATE-----
``````

The certificate appears to use the valid EC curve `P-256`. How can a person or process inspecting the certificate verify whether the EC parameters were handled correctly, and so detect that it is a fake?

## Quantum computing: understanding the state vector

I just started learning quantum computing. It is said that:

> The quantum state of N qubits can be expressed as a vector in a space of dimension 2^N.

If there is 1 qubit, then we have two possible state vectors |0> and |1>, i.e. (1,0) and (0,1) respectively. Moving to 2 qubits, we have 4 possible state vectors `(1,0,0,0), (0,1,0,0), (0,0,1,0) and (0,0,0,1)`. Note that in each case, all entries are zero except one. The point I'm trying to get to is:

1. 2^N seems to be a large space, but given a vector in this space, all components will be zero except one. There are therefore only 2^N possible values that the state vector can take. Isn't that correct? If not, why?

2. Why don't we say that the space is N-dimensional? A string of N bits has 2^N possible values.
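A small numerical sketch (my addition, not from the question) may clarify point 1: the state vector is not restricted to the 2^N basis vectors. Any complex vector of unit length in the 2^N-dimensional space is a valid state, so the set of possible states is continuous, not a list of 2^N values.

```python
# Sketch: a 2-qubit state is any unit vector in C^4, not just a basis vector.
import math

# Basis states |00>, |01>, |10>, |11>
basis = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]

# A Bell state (|00> + |11>)/sqrt(2): two nonzero components.
bell = (1 / math.sqrt(2), 0, 0, 1 / math.sqrt(2))

# It is normalized (sum of squared amplitude magnitudes is 1)...
norm = math.sqrt(sum(abs(a) ** 2 for a in bell))

# ...yet it is not any of the four basis vectors.
print(norm, bell in basis)
```

The Bell state above has two nonzero components, so it is a perfectly valid 2-qubit state that is not among the 2^N basis vectors; that is why the space must be 2^N-dimensional rather than N-dimensional.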

## Understanding the proof of $\mathbb{E}[X^n] = \int_0^\infty n x^{n-1} (1 - F(x))\,dx$

For a non-negative random variable $X$ with CDF $F$ and $n \geq 1$,

$$
\begin{align*}
\mathbb{E}(X^n) &= \int_0^\infty x^n \, dF(x) \\
&= \int_0^\infty \left( \int_0^x n y^{n-1} \, dy \right) dF(x) \\
&= \int_0^\infty \int_y^\infty n y^{n-1} \, dF(x) \, dy \\
&= \int_0^\infty n y^{n-1} \left(1 - F(y)\right) dy
\end{align*}
$$

1. I have never seen the notation in the first equality before. All I know is $\mathbb{E}(X^n) = \int_0^\infty x^n f_X(x)\,dx$, but I know the pdf $f_X$ does not necessarily exist. What does $dF(x)$ in the first equality mean?
2. How do we go from the second to the third equality? Exchanging the order of the integrals should give $\int_0^x \int_0^\infty$; how do we understand the new limits?
3. The fourth equality apparently uses the result $1 - F(y) = \int_y^\infty dF(x)$, which I don't understand, because I don't know what $dF(x)$ means in the first place.
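A quick numerical sanity check (my own addition, not from the question): for $X \sim \mathrm{Exp}(1)$ we have $1 - F(y) = e^{-y}$ and $\mathbb{E}(X^n) = n!$, so for $n = 2$ the right-hand side $\int_0^\infty n y^{n-1}(1-F(y))\,dy = \int_0^\infty 2y e^{-y}\,dy$ should come out close to $2$:

```python
# Check E[X^n] = ∫_0^∞ n y^(n-1) (1 - F(y)) dy numerically
# for X ~ Exp(1), where 1 - F(y) = e^(-y) and E[X^2] = 2.
import math

n = 2
dy = 1e-4
total = 0.0
y = dy / 2  # midpoint Riemann sum
while y < 50:  # the e^(-y) tail beyond y = 50 is negligible
    total += n * y ** (n - 1) * math.exp(-y) * dy
    y += dy

print(total)  # should be close to 2
```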

## I'm having trouble understanding OS-dependent and OS-independent languages, using C# and .NET as an example

**NOTE: all of the following assumes that the processor architecture is the same for each operating system.**

**Another NOTE: please, if you can, make it simple so that someone like me can get it, and simplify your English, because I'm not a native speaker.**

"This is the first time I have come here to ask a question, so please, if you see any bad practices on my part, forgive me and teach me. I am also not a native English speaker, so excuse any typos."

So basically, I was digging into the .NET Framework, and all the resources say that the languages covered by .NET are platform independent (independent of the operating system) because .NET languages have a layer between them and the operating system, which is what makes them independent. Meanwhile, a language like C is platform (OS) dependent, even if the processor architecture is the same, and even though there is a C compiler, or interpreter (I don't really know which C uses), for other operating systems. Don't the independent languages also have an interpreter or a compiler?

I searched for why, and I found an answer that went like this: every program communicates with the operating system before the processor, and uses the libraries provided by the operating system, like the libraries for input and output, threads, memory, and so forth, which are specific to the operating system. That is what makes a program platform dependent, and this is why the C language depends on the operating system even if other operating systems have a compiler for it. Languages that are independent of the operating system have a layer between them and the operating system, so they do not communicate directly with the OS; instead, they communicate with the layer, and then the layer communicates with the OS.

But I think that even with the layer, this is not enough to make the language platform independent, because even if the language does not communicate directly with the OS, the layer still communicates with the OS directly, and therefore the layer must be platform dependent. The proof is that for each OS there is a separate .NET version. They also said that the problem is that the dependent languages often use the libraries provided by the operating system itself, like threads and I/O systems, etc., but that shouldn't be the deciding factor, because even the independent languages use libraries like that, such as threads and IO.

So please help me; I think I have a bad understanding. Can you clarify this for me?

## unity – Understanding SendMessage

I'm trying to understand the SendMessage function, but I'm a little confused. Currently, I have a character game object, and I want it to send a message to the parent object if one of its children hits something.

On the Child object, I have:

```
void OnTriggerEnter2D(Collider2D col) {
    Destroy(col.gameObject);
    Debug.Log(this.name + " Hit!");
    gameObject.SendMessage("PlayerHitting", this.name);
}
```

On the parent object, I have:

```
public void PlayerHitting(string Move) {
    Debug.Log("Player hits with " + Move);
}
```

The Destroy and Debug.Log calls in the child's code do run, so the collision is detected, but I still get an error that there is no receiver:

```
SendMessage PlayerHitting has no receiver!
UnityEngine.GameObject:SendMessage(String, Object, SendMessageOptions)
```

I don't quite understand how to use SendMessage to send messages and notifications between objects. Can anyone explain?
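One likely explanation (my assumption; the question doesn't confirm the object setup): `SendMessage` only invokes methods on components attached to the *same* GameObject it is called on, so `gameObject.SendMessage(...)` in the child script looks for `PlayerHitting` on the child itself, not on the parent, hence the "no receiver" error. Unity's `SendMessageUpwards` also searches every ancestor in the hierarchy; a sketch of the child script under that assumption:

```
using UnityEngine;

public class ChildCollider : MonoBehaviour {
    void OnTriggerEnter2D(Collider2D col) {
        Destroy(col.gameObject);
        Debug.Log(this.name + " Hit!");
        // SendMessageUpwards checks this object and then walks up through
        // its parents, so the parent's PlayerHitting(string) will be found.
        SendMessageUpwards("PlayerHitting", this.name);
    }
}
```

Alternatively, `transform.parent.SendMessage("PlayerHitting", this.name)` targets the parent object directly.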

## reward program – Understanding bitwise shift logic to determine nSubsidy

I'm trying to figure out how the `>>=` operation in `nSubsidy >>= (nBestHeight / 210000);` actually cuts the subsidy in half every 4 years in Bitcoin v0.01.

It is clear that 210,000 blocks take around 4 years on average, and that integer-dividing the latest block height by 210,000 returns the number of times the subsidy has to be divided by two. What's confusing is how bit-shifting `nSubsidy` gives us the desired subsidy.

I wrote this loop to help visualize the bit shift, but I still don't see how, in binary, a simple 1-bit shift gives us the desired result.

```
#include <cstdint>
#include <string>
#include <bitset>
#include <iostream>

int main()
{
    uint64_t COIN = 100000000;
    uint64_t nSubsidy;
    std::string binary;

    // halving and non-halving block height examples
    uint64_t nBestHeight[] = {0, 1, 210000, 210001, 420000, 420001, 630000, 630001, 840000, 840001, 1050000, 1050001};

    for (unsigned int a = 0; a < sizeof(nBestHeight) / sizeof(nBestHeight[0]); a = a + 1)
    {
        nSubsidy = 50 * COIN;

        // Shift bits
        nSubsidy >>= (nBestHeight[a] / 210000);

        // Convert to binary to visualize the bit shift
        binary = std::bitset<64>(nSubsidy).to_string();

        std::cout << "nSubsidy (64 bit binary) = " << binary << "    |    (nBestHeight/210000) = " << (nBestHeight[a] / 210000) << "    |    nSubsidy = " << nSubsidy << "    |    nBestHeight = " << nBestHeight[a] << "\n";
    }
}

/*

OUTPUT

BINARY TO VISUALIZE BIT SHIFT                                                                       HALVING ERA                      MINING REWARD                 CURRENT BLOCK HEIGHT
nSubsidy (64 bit binary) = 0000000000000000000000000000000100101010000001011111001000000000    |    (nBestHeight/210000) = 0    |    nSubsidy = 5000000000    |    nBestHeight = 0
nSubsidy (64 bit binary) = 0000000000000000000000000000000100101010000001011111001000000000    |    (nBestHeight/210000) = 0    |    nSubsidy = 5000000000    |    nBestHeight = 1
nSubsidy (64 bit binary) = 0000000000000000000000000000000010010101000000101111100100000000    |    (nBestHeight/210000) = 1    |    nSubsidy = 2500000000    |    nBestHeight = 210000
nSubsidy (64 bit binary) = 0000000000000000000000000000000010010101000000101111100100000000    |    (nBestHeight/210000) = 1    |    nSubsidy = 2500000000    |    nBestHeight = 210001
nSubsidy (64 bit binary) = 0000000000000000000000000000000001001010100000010111110010000000    |    (nBestHeight/210000) = 2    |    nSubsidy = 1250000000    |    nBestHeight = 420000
nSubsidy (64 bit binary) = 0000000000000000000000000000000001001010100000010111110010000000    |    (nBestHeight/210000) = 2    |    nSubsidy = 1250000000    |    nBestHeight = 420001
nSubsidy (64 bit binary) = 0000000000000000000000000000000000100101010000001011111001000000    |    (nBestHeight/210000) = 3    |    nSubsidy = 625000000     |    nBestHeight = 630000
nSubsidy (64 bit binary) = 0000000000000000000000000000000000100101010000001011111001000000    |    (nBestHeight/210000) = 3    |    nSubsidy = 625000000     |    nBestHeight = 630001
nSubsidy (64 bit binary) = 0000000000000000000000000000000000010010101000000101111100100000    |    (nBestHeight/210000) = 4    |    nSubsidy = 312500000     |    nBestHeight = 840000
nSubsidy (64 bit binary) = 0000000000000000000000000000000000010010101000000101111100100000    |    (nBestHeight/210000) = 4    |    nSubsidy = 312500000     |    nBestHeight = 840001
nSubsidy (64 bit binary) = 0000000000000000000000000000000000001001010100000010111110010000    |    (nBestHeight/210000) = 5    |    nSubsidy = 156250000     |    nBestHeight = 1050000
nSubsidy (64 bit binary) = 0000000000000000000000000000000000001001010100000010111110010000    |    (nBestHeight/210000) = 5    |    nSubsidy = 156250000     |    nBestHeight = 1050001

*/
``````

Is it really the case that shifting the bits one position to the right or to the left halves or doubles the value? Are there resources, or other terminology, that would help verify this? Are the above assumptions even correct?
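To the last question: yes, for a non-negative integer, a right shift by `k` bits is exactly integer division by `2^k` (each bit moves to a position worth half as much), and a left shift by one bit is multiplication by two, barring overflow. A quick check, written as a Python sketch rather than in the C++ above:

```python
# Sketch: for non-negative integers, x >> k equals floor(x / 2**k).
COIN = 100_000_000
subsidy = 50 * COIN  # 5_000_000_000, the initial 50 BTC reward in satoshis

assert subsidy >> 1 == subsidy // 2      # one halving: 25 BTC
assert subsidy >> 2 == subsidy // 4      # two halvings: 12.5 BTC
assert subsidy >> 3 == 625_000_000       # 6.25 BTC, matching the output table
assert (subsidy >> 1) << 1 == subsidy    # left shift by one doubles it back

print(subsidy >> 3)  # 625000000
```

The relevant search terms are "arithmetic shift" and "logical shift"; for unsigned values the two coincide, which is why the Bitcoin code can halve the subsidy this way.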

## SQL Server – Understanding Aggregate Window Functions

Consider the following table:

```
CREATE TABLE T1
(
  keycol INT         NOT NULL CONSTRAINT PK_T1 PRIMARY KEY,
  col1   VARCHAR(10) NOT NULL
);

INSERT INTO T1 VALUES
  (2, 'A'),(3, 'A'),
  (5, 'B'),(7, 'B'),(11, 'B'),
  (13, 'C'),(17, 'C'),(19, 'C'),(23, 'C');
```

Currently, I'm looking at window functions and trying out the aggregate window functions. Although I feel that I understand how windows are defined with the `OVER` and `PARTITION BY` clauses, I don't know how the aggregate window functions, such as `AVG() OVER ()`, are calculated.

I am trying to understand the following three queries.

```
-- Query 1
SELECT keycol, col1, AVG(keycol) OVER (PARTITION BY col1) AS RowAvg
FROM T1
```
| keycol | col1 | RowAvg |
| -----: | :--- | -----: |
|      2 | A    |      2 |
|      3 | A    |      2 |
|      5 | B    |      7 |
|      7 | B    |      7 |
|     11 | B    |      7 |
|     13 | C    |     18 |
|     17 | C    |     18 |
|     19 | C    |     18 |
|     23 | C    |     18 |
```
-- Query 2
SELECT keycol, col1, AVG(keycol) OVER (ORDER BY keycol) AS RowAvg
FROM T1
```
| keycol | col1 | RowAvg |
| -----: | :--- | -----: |
|      2 | A    |      2 |
|      3 | A    |      2 |
|      5 | B    |      3 |
|      7 | B    |      4 |
|     11 | B    |      5 |
|     13 | C    |      6 |
|     17 | C    |      8 |
|     19 | C    |      9 |
|     23 | C    |     11 |
```
-- Query 3
SELECT keycol, col1, AVG(keycol) OVER (PARTITION BY col1 ORDER BY keycol) AS RowAvg
FROM T1
```
| keycol | col1 | RowAvg |
| -----: | :--- | -----: |
|      2 | A    |      2 |
|      3 | A    |      2 |
|      5 | B    |      5 |
|      7 | B    |      6 |
|     11 | B    |      7 |
|     13 | C    |     13 |
|     17 | C    |     15 |
|     19 | C    |     16 |
|     23 | C    |     18 |

Query 1: I believe RowAvg should be the average of the rows for each col1 level. Are the numbers 2 and 7 the FLOOR of the average, or is my understanding incorrect?

Query 2: I'm not too sure what is being done to produce RowAvg here. Since no PARTITION BY or framing is used here, I believe the window should be the entire table; is this correct? Also, how is the RowAvg found?

Query 3: Does it find the (floored) average for each partition, but incrementally? In other words, for row 1 of the first partition ('A'), we find the average of that row. Then, for row 2 of the first partition, we find the average of the first 2 rows.

General question: Does the presence of `ORDER BY` in the window function make the aggregate run "cumulatively", as in queries 2 and 3? It's interesting to see that in query 1, `AVG` is computed for each partition as a whole, while in queries 2 and 3 the RowAvg differs almost from row to row.
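Not part of the original question, but the behavior in Query 2 can be reproduced outside SQL. Two known facts explain it: with `ORDER BY` and no explicit frame, the default frame is `RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW`, i.e. a running aggregate, and in SQL Server `AVG` over an `INT` column returns an `INT`, so the division truncates. A small sketch:

```python
# Sketch: reproduce Query 2's RowAvg as a running average with integer
# division, mimicking SQL Server's AVG over an INT column.
keycols = [2, 3, 5, 7, 11, 13, 17, 19, 23]  # T1.keycol in ORDER BY order

running = []
total = 0
for i, k in enumerate(keycols, start=1):
    total += k
    # Integer division like AVG(INT); for positive values, Python's floor
    # division and SQL Server's truncation agree.
    running.append(total // i)

print(running)  # [2, 2, 3, 4, 5, 6, 8, 9, 11]: matches Query 2's RowAvg
```

Query 3 is the same computation restarted at each `col1` partition boundary, and Query 1 is the partition-wide average with the same integer division, which answers the FLOOR question for Query 1 as well.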