sql server – Calendar Event table – best practice setup for range queries and individual retrieval

This seems like a generic problem that should have been solved already, but I can't find anything about it. In general the question is: given a table where data is read by date range, what is the best, most efficient setup?

We have a calendar event table that will quickly grow to millions of records.

The schema is something like:

CREATE TABLE [dbo].[CalendarEvent](
[Id] [uniqueidentifier] NOT NULL,
[DtStart] [datetime] NULL,
[DtEnd] [datetime] NULL,
[Created] [datetime] NULL,
[LastModified] [datetime] NULL,
[CalendarEventType] [nvarchar](255) NULL,
[CalendarId] [uniqueidentifier] NULL,
PRIMARY KEY CLUSTERED 
(
    [Id] ASC
) WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]

Forget about recurring events, etc. as that doesn’t bear on our problem.

Most queries will be of the type:

select * from CalendarEvent where CalendarId = 'b5d6338f-805f-4717-9c0a-4600f95ac515' AND dtStart > '01/01/2020' AND dtStart < '10/22/2020'

Notice no joins, etc.

But we will also have some that select for individual events, and include joins:

select * from CalendarEvent ce join tags t on ce.Id = t.CalendarEventId where ce.Id = '17606330-5486-496a-a91c-f5d0e123bfff'

Questions and ideas:

  1. Should we keep the Id as the PK, but make the start date the clustered index?
  2. Should we just make an index on dtStart?
  3. Should we partition by month?
  4. Should we denormalize a little and duplicate the dtStart data into year and month columns that we can index and use in our range queries?

In general, when you do your querying on a table by date range, what is the best setup for this type of table?
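For illustration, options 1 and 2 above could look something like this (a sketch against the schema shown earlier; the index name and INCLUDE list are assumptions, not a tested recommendation):

-- Option 2: a nonclustered index shaped like the typical query
-- (equality on CalendarId, range on DtStart); INCLUDE covers the other
-- returned columns so the range scan avoids key lookups.
CREATE NONCLUSTERED INDEX IX_CalendarEvent_CalendarId_DtStart
ON dbo.CalendarEvent (CalendarId, DtStart)
INCLUDE (DtEnd, CalendarEventType);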

Note: if you think this question could be improved to help more people by making it more generic and widely applicable, such as removing references to a Calendar Event table specifically and making it just about date-range querying in any type of table, please help me do that.

MongoDB: Queries running twice slow on NEW server compared to OLD server

I transferred a currently running DB from the old server into a new standalone server for MongoDB. To do this, I performed the following:

  1. Took dump of data from OLD server
  2. Restored data from the generated dump into NEW server
  3. Configured the server for authentication

Issue:
I noticed that after performing the above, a few queries on the NEW server were taking almost twice as long as they did on the OLD server.

Configurations:
The configurations of both servers are the same; however, the NEW server has 32 GB RAM while the OLD server had 28 GB. The OLD server also had other applications and services running, while the NEW server is dedicated to only this DB.

CPU consumption is similar; however, RAM is heavily occupied on the OLD server while it is comparatively less occupied on the NEW server.

Therefore, the NEW server is better equipped in hardware and RAM headroom, and it is a standalone server dedicated to only this DB.

Question:
Why could my NEW server, even though it is standalone, be slower than the OLD one? How can I correct this?

performance – What can cause higher CPU time and duration for a given set of queries in trace(s) ran on two separate environments?

I'm troubleshooting a performance issue in a SQL Server DR environment for a customer. They are running queries that consistently take longer in their environment than in our QA environment. After analyzing traces captured in both environments with the same parameters/filters, the same version of SQL Server (2016 SP2), and the exact same database, we observed that both environments were picking the same execution plan(s) for the queries in question, and the numbers of reads/writes were close in both environments; however, the total duration of the process in question and the CPU time logged in the trace were significantly higher in the customer environment. The duration of all processes in our QA environment was around 18 seconds and CPU time was close to 10 seconds; the customer's duration and CPU time were both over 80 seconds. Also worth mentioning: both environments are currently configured with MAXDOP 1.

The customer has less memory (~100GB vs 120GB) and slower disks (10k HDD vs SSD) than our QA environment, but more CPUs. Both environments are dedicated to this activity and should have little/no external load that wouldn't match. I don't have all the details on the CPU architecture they are using; I am waiting for some of that information now. The customer has confirmed they have excluded SQL Server and the data/log files from their virus scanning. Obviously there could be a ton of issues in the hardware configuration.

I'm currently waiting to see a recent snapshot of their wait stats and system DMVs; the data we originally received didn't appear to show any major CPU, memory, or disk latency pressure. I recently asked them to check whether the Windows power setting was in performance or balanced mode, though I'm not certain whether CPU throttling alone would have the impact we're seeing.

My question is: what factors can affect CPU time and, ultimately, total duration? Is CPU time, as shown in a SQL trace, based primarily on the speed of the processors, or are there other factors I should be taking into consideration? The fact that both environments generate the same query plans, with all other things as close to equal as possible, makes me think it's related to the hardware SQL Server is installed on.
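One way to take the wait-stats snapshot mentioned above is with sys.dm_os_wait_stats (a sketch; run it in both environments under load and compare, keeping in mind the counters are cumulative since startup):

-- Top waits by accumulated wait time; signal_wait_time_ms that is high
-- relative to wait_time_ms points at CPU scheduling pressure.
SELECT TOP (10)
    wait_type,
    waiting_tasks_count,
    wait_time_ms,
    signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT LIKE 'SLEEP%'  -- skip idle/housekeeping waits
ORDER BY wait_time_ms DESC;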

drupal 8 – Extracting user field values from dynamic SQL queries

Aim

I have successfully written a fairly long dynamic SQL query; however, I am struggling with a seemingly simple part at the end.

Although I am able to successfully extract mail and name from the users table, when I try to extract field_first_name it returns the error below.

The users table has a column with the machine name: field_first_name

Code

    $database = Drupal::service('database');

    $select = $database->select('flagging', 'f');
    $select->fields('f', array('uid', 'entity_id'));
    $select->leftJoin('node__field_start_datetime', 'nfds', 'nfds.entity_id = f.entity_id');
    $select->fields('nfds', array('field_start_datetime_value'));
    $select->leftJoin('node_field_data', 'nfd', 'nfd.nid = f.entity_id');
    $select->fields('nfd', array('title'));
    $select->leftJoin('users_field_data', 'ufd', 'ufd.uid = f.uid');
    // TODO extract first name
    $select->fields('ufd', ['mail', 'name', 'field_first_name']);

    $executed = $select->execute();
    $results = $executed->fetchAll(PDO::FETCH_ASSOC);

    foreach ($results as $result) {
      $username = $result['name'];
      $email = $result['mail'];
      $first_name = $result['field_first_name'];
    }

Error

Drupal\Core\Database\DatabaseExceptionWrapper: SQLSTATE[42S22]: Column not found: 1054 Unknown column 'ufd.field_first_name' in 'field list': SELECT f.uid AS uid, f.entity_id AS entity_id, nfds.field_start_datetime_value AS field_start_datetime_value, nfd.title AS title, ufd.mail AS mail, ufd.name AS name, ufd.field_first_name AS field_first_name FROM {flagging} f LEFT OUTER JOIN {node__field_start_datetime} nfds ON nfds.entity_id = f.entity_id LEFT OUTER JOIN {node_field_data} nfd ON nfd.nid = f.entity_id LEFT OUTER JOIN {users_field_data} ufd ON ufd.uid = f.uid; Array ( ) in event_notification_cron() (line 63 of /app/modules/custom/event_notification/event_notification.module).
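For what it is worth, in Drupal 8 a configurable field is normally stored in its own table rather than as a column on users_field_data, which matches the error above. The generated SQL would need an extra join along these lines (a sketch; the table and column names follow the standard field-storage naming convention and are assumptions here):

-- field_first_name lives in user__field_first_name (one row per user),
-- not as a column on users_field_data.
SELECT ufd.mail, ufd.name, ufn.field_first_name_value
FROM users_field_data ufd
LEFT JOIN user__field_first_name ufn ON ufn.entity_id = ufd.uid;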


domain name system – External DNS server (centos named) inconsistently resolving user queries

I have a cluster of DNS servers which sit behind a public IP.

These servers resolve some of the time, but other times they just return a ServFail error code for any query.

The setup I have is not typical (this was inherited).

Basically, on the server there is a network namespace called gi; this is where the named service is run by a new service called srv-gi:

#!/bin/sh

start_service() {
        ip netns exec gi /usr/sbin/zebra -d -A 127.0.0.1 -f /etc/quagga/zebra.conf
        ip netns exec gi /usr/sbin/bgpd -d -A 127.0.0.1 -f /etc/quagga/bgpd.conf 
        #DNS service
        ip netns exec gi  /usr/sbin/named -u named -c /etc/gi-named.conf
}

start_service


The named.conf file has also been renamed to gi-named.conf.

//
// named.conf
//
// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
// server as a caching only nameserver (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//
// See the BIND Administrator’s Reference Manual (ARM) for details about the
// configuration located in /usr/share/doc/bind-{version}/Bv9ARM.html

options {
        listen-on port 53 { Public IP; };
        #listen-on-v6 port 53 { ::1; };
        directory       "/var/named";
        dump-file       "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
        recursing-file  "/var/named/data/named.recursing";
        secroots-file   "/var/named/data/named.secroots";
        allow-query     { any; };
        allow-query-on  { PublicIP; };

        /*
         - If you are building an AUTHORITATIVE DNS server, do NOT enable recursion.
         - If you are building a RECURSIVE (caching) DNS server, you need to enable
           recursion.
         - If your recursive DNS server has a public IP address, you MUST enable access
           control to limit queries to your legitimate users. Failing to do so will
           cause your server to become part of large scale DNS amplification
           attacks. Implementing BCP38 within your network would greatly
           reduce such attack surface
        */
        recursion yes;
        allow-query-cache { Internal Range; };
        allow-query-cache-on  { PublicIP; };



        query-source address Public IP ;

        dnssec-enable yes;
        dnssec-validation yes;

        /* Path to ISC DLV key */
        bindkeys-file "/etc/named.iscdlv.key";

        managed-keys-directory "/var/named/dynamic";

        pid-file "/run/named/named.pid";
        session-keyfile "/run/named/session.key";
};


logging
{
/*      If you want to enable debugging, eg. using the 'rndc trace' command,
 *      named will try to write the 'named.run' file in the $directory (/var/named).
 *      By default, SELinux policy does not allow named to modify the /var/named directory,
 *      so put the default debug log file in data/ :
 */
        /*channel default_debug {
                print-time yes;
                print-category yes;
                print-severity yes;
                file "data/named.run";
                severity dynamic;
        };*/
        channel queries_log {
                file "/var/log/queries" versions 1 size 20m;
                print-time yes;
                print-category yes;
                print-severity yes;
                severity debug 3;
        };

        category queries { queries_log; };
        category client { queries_log;  };
};

zone "." IN {
        type hint;
        file "named.ca";
};

include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";

Also note that I have a Quagga router configured to allow DNS resolution via the public IP.

/etc/quagga/bgpd.conf

!
! Zebra configuration saved from vty
!   2019/10/11 10:11:45
!
!
router bgp AS
 bgp router-id PublicIP
 network PublicIP/32
 network CoreIP/32
 neighbor DUB1-WGW peer-group
 neighbor DUB1-WGW remote-as AS
 neighbor DUB1-WGW soft-reconfiguration inbound
 neighbor DUB1-WGW route-map XXXXX out
 neighbor CoreBGPIP peer-group DUB1-WGW
 neighbor CoreBGPIP peer-group DUB1-WGW
!
ip prefix-list XXXX seq 5 permit PublicIP/32
ip prefix-list XXXX seq 10 permit PrivateIP/32
!
route-map DNS_TO_GI permit 10
 match ip address prefix-list XXXXX
!
line vty
!

/etc/quagga/zebra.conf

!
! Zebra configuration saved from vty
!   2019/10/11 10:11:45
!
hostname hostname
!
interface ens160
 ipv6 nd suppress-ra
!
interface ens192
 ipv6 nd suppress-ra
!
interface ens192.890
 ipv6 nd suppress-ra
!
interface ens192.892
 ipv6 nd suppress-ra
!
interface XX
 ipv6 nd suppress-ra
!
interface lo
!
ip prefix-list XX seq 5 permit PublicIP3/32
ip prefix-list XX seq 10 permit PrivateIP/32
!
route-map XXXX permit 10
 match ip address prefix-list XXX
!
!
!
line vty
!

# show ip route
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, A - Babel,
       > - selected route, * - FIB route

B>* 0.0.0.0/0 [20/10] via neighbor IP, ens192.892, 00:02:18
C>* 127.0.0.0/8 is directly connected, lo
C>* Public IP/32 is directly connected, lo
C>* NeighborSubnet/30 is directly connected, ens192.890
C>* NeighborIP/30 is directly connected, ens192.892
C>* LocalIP/32 is directly connected, lo

I am testing resolution using a test APN. While I can get resolution on one APN, as soon as I introduce a second APN I just encounter the following errors in a tcpdump:

11:29:38.065284 IP PublicIP.domain > internal IP.p2pcommunity: 30622 ServFail 0/0/0 (44)
11:29:38.265736 IP PublicIP.domain > internal IP.32209: 12606 ServFail 0/0/0 (37)
11:29:38.266037 IP PublicIP.domain > internal IP.10793: 26678 ServFail 0/0/0 (37)
11:29:38.295727 IP PublicIP.domain > internal IP.ibm_wrless_lan: 23483 ServFail 0/0/0 (33)
11:29:38.296038 IP PublicIP.domain > internal IP.22097: 8347 ServFail 0/0/0 (33)
11:29:38.297532 IP PublicIP.domain > internal IP.31026: 23400 ServFail 0/0/0 (38)
11:29:38.298117 IP PublicIP.domain > internal IP.23707: 26481 ServFail 0/0/0 (38)

and from /var/log/queries

22-Sep-2020 11:31:07.552 client: debug 3: client InternalIP#61793 (www.facebook.com): error
22-Sep-2020 11:31:07.552 client: debug 3: client InternalIP#61793 (www.facebook.com): send
22-Sep-2020 11:31:07.552 client: debug 3: client InternalIP#61793 (www.facebook.com): sendto
22-Sep-2020 11:31:07.552 client: debug 3: client InternalIP#48008 (2.android.pool.ntp.org): error
22-Sep-2020 11:31:07.552 client: debug 3: client InternalIP#61793 (www.facebook.com): senddone
22-Sep-2020 11:31:07.552 client: debug 3: client InternalIP#61793 (www.facebook.com): next
22-Sep-2020 11:31:07.552 client: debug 3: client InternalIP#61793 (www.facebook.com): endrequest
22-Sep-2020 11:31:07.553 client: debug 3: client InternalIP#48008 (2.android.pool.ntp.org): send
22-Sep-2020 11:31:07.553 client: debug 3: client InternalIP#48008 (2.android.pool.ntp.org): sendto
22-Sep-2020 11:31:07.553 client: debug 3: client InternalIP#48008 (2.android.pool.ntp.org): senddone
22-Sep-2020 11:31:07.553 client: debug 3: client InternalIP#48008 (2.android.pool.ntp.org): next
22-Sep-2020 11:31:07.553 client: debug 3: client InternalIP#48008 (2.android.pool.ntp.org): endrequest

I am really unsure of how to resolve this issue; any pointers or advice would be greatly appreciated.

Outputs of the dig commands

dig facebook.com

; <<>> DiG 9.9.4-RedHat-9.9.4-74.el7_6.1 <<>> facebook.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 7204
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4000
;; QUESTION SECTION:
;facebook.com.          IN  A

;; ANSWER SECTION:
facebook.com.       93  IN  A   31.13.86.36

;; Query time: 2 msec
;; SERVER: internal DNS#53(Internal DNS)
;; WHEN: Tue Sep 22 19:38:58 UTC 2020
;; MSG SIZE  rcvd: 57


dig @PublicIP facebook.com

; <<>> DiG 9.9.4-RedHat-9.9.4-74.el7_6.1 <<>> @PublicIP facebook.com
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached

dig @208.67.222.222 facebook.com

; <<>> DiG 9.9.4-RedHat-9.9.4-74.el7_6.1 <<>> @208.67.222.222 facebook.com
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached

ip netns exec gi tcpdump -n -f 'port 53' -i any
09:55:35.676645 IP PublicIP.domain > InternalIP.46571: 36451 ServFail 0/0/0 (32)
09:55:35.676939 IP PublicIP.domain > InternalIP.37817: 52592 ServFail 0/0/0 (32)
09:55:35.677865 IP PublicIP.domain > InternalIP.41737: 52624 ServFail 0/0/0 (32)
09:55:35.713870 IP PublicIP.34042 > 193.0.14.129.domain: 11264 (1au) A? mtalk.google.com. (45)
09:55:35.713914 IP PublicIP.11218 > 193.0.14.129.domain: 3623 (1au) NS? . (28)
09:55:35.768649 IP 193.0.14.129.domain > PublicIP.11218: 3623*-| 0/0/1 (28)
09:55:35.784456 IP 193.0.14.129.domain > PublicIP.34042: 11264-| 0/0/1 (45)
09:55:36.045130 IP PublicIP.wcbackup > 192.112.36.4.domain: 28368 A? update.googleapis.com. (39)
09:55:36.063323 IP InternalIP.49382 > PublicIP.domain: 57145+ A? accounts.google.com. (37)
09:55:36.064459 IP PublicIP.48169 > 193.0.14.129.domain: 15825 (1au) A? accounts.google.com. (48)
09:55:36.065883 IP APNIP.54312 > PublicIP.domain: 53585+ A? accounts.google.com. (37)
09:55:36.080202 IP 192.112.36.4.domain > PublicIP.wcbackup: 28368- 0/13/14 (499)
09:55:36.120905 IP 193.0.14.129.domain > PublicIP.48169: 15825- 0/15/27 (1182)
09:55:36.170289 IP InternalIP.59759 > PublicIP.domain: 52061+ A? www.google.com. (32)
09:55:36.224316 IP PublicIP.5346 > 192.112.36.4.domain: 40438 A? www.facebook.com. (34)
09:55:36.257993 IP 192.112.36.4.domain > PublicIP.5346: 40438- 0/13/14 (494)
09:55:36.441576 IP PublicIP.domain > InternalIP.65408: 45517 ServFail 0/0/0 (39)
09:55:36.441666 IP PublicIP.domain > InternalIP.60664: 54663 ServFail 0/0/0 (39)
09:55:36.442994 IP PublicIP.domain > InternalIP.48634: 56799 ServFail 0/0/0 (39)
09:55:36.443474 IP PublicIP.domain > InternalIP.36045: 34980 ServFail 0/0/0 (39)

sql server – How should I structure my SQL database to make search Queries fast

  • My computer has 64 cores
  • Microsoft SQL Server Data Tools 16.0.62007.23150 installed

One initial question: which SQL Server version would be best for 64 cores?

I am new to SQL databases and have understood that how you structure the database matters, so that it is faster later to search for and extract the information needed (queries).

I believe I have also understood that using data types that take up less memory is also good for speed later on, like using a smallint instead of an int if a smallint will do, etc.

I would like to ask whether the structure I am thinking of is well designed for extracting information later, or whether I should do this a bit differently. The database will hold stock symbol data, and as I will show, it will become extremely big, which is the purpose of this question.

This is the whole structure that I have in mind (Example comes after explanation):

  1. I will use 4 columns. (DateTime|Symbol|FeatureNr|Value)
  2. DateTime has format down to the minute: 201012051545
  3. Symbol and FeatureNr are smallint. For example: MSFT = 1, IBM = 2, AAPL = 3. As you can see, instead of using strings in the columns, I use smallint values that represent those symbols/feature numbers, so that search queries go faster later.
  4. The database will for example have 50 symbols where each symbol has 5000 features.
  5. The database will have 15 years of data.

Now I have a few big questions here:
If we fill this database with data for just 1 symbol, it will contain this many rows:

1440 minutes(1 day) * 365 days * 15 years * 5000 features = 39,420,000,000

Question 1:
39,420,000,000 rows in a database seems like a lot; or is this no problem?

Question 2:
The above was just for 1 symbol. Now, I have 50 symbols, which would mean:
39,420,000,000 * 50 = 1,971,000,000,000 rows. I don't know what to say about this. Is this too many rows, or is it okay? Should I have 1 database per symbol, for example, and not all 50 symbols in one database?

Question 3:
Setting aside how many rows are in the database: do you think the database is well structured for fast search queries? What I will ALWAYS search for, every time, is this (it will return 5000 lines, one per feature). Notice that I search for ONE symbol only and a specific datetime.
I will always do this exact search, and never any other type of search, in case you have any idea how I should best structure the database with those 50 stock symbols.

As in Question 2: should I have one database per symbol? Will this result in faster searches, for example?
(symbol = 2, smalldatetime = 1546), where I want to return the featureNr and value, which would be the lines below (I will ALWAYS ONLY do this exact search):

201012051546 | 2 | 1 | 76.123456789
201012051546 | 2 | 2 | 76.123456789
201012051546 | 2 | 3 | 76.123456789

Question 4:
Wouldn't it be most optimal to have 1 table for each symbol and datetime?
In other words: 1 table for symbol = 2 and smalldatetime 1546, which holds 5000 rows of features, and then one like it for each symbol and datetime?
This will result in very many tables per symbol; or is this not good in some other way?

1440 minutes(1 day) * 365 days * 15 years = 7,884,000 tables


My idea for the database/table structure:
smalldatetime | symbol (smallint) | featureNr (smallint) | value (float(53))

201012051545 | 1 | 1 | 65.123456789
201012051546 | 1 | 1 | 66.123456789
201012051547 | 1 | 1 | 67.123456789
201012051545 | 1 | 2 | 65.123456789
201012051546 | 1 | 2 | 66.123456789
201012051547 | 1 | 2 | 67.123456789
201012051545 | 1 | 3 | 65.123456789
201012051546 | 1 | 3 | 66.123456789
201012051547 | 1 | 3 | 67.123456789

201012051545 | 2 | 1 | 75.123456789
201012051546 | 2 | 1 | 76.123456789
201012051547 | 2 | 1 | 77.123456789
201012051545 | 2 | 2 | 75.123456789
201012051546 | 2 | 2 | 76.123456789
201012051547 | 2 | 2 | 77.123456789
201012051545 | 2 | 3 | 75.123456789
201012051546 | 2 | 3 | 76.123456789
201012051547 | 2 | 3 | 77.123456789

201012051545 | 3 | 1 | 85.123456789
201012051546 | 3 | 1 | 86.123456789
201012051547 | 3 | 1 | 87.123456789
201012051545 | 3 | 2 | 85.123456789
201012051546 | 3 | 2 | 86.123456789
201012051547 | 3 | 2 | 87.123456789
201012051545 | 3 | 3 | 85.123456789
201012051546 | 3 | 3 | 86.123456789
201012051547 | 3 | 3 | 87.123456789
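For concreteness, the single-table structure above could be declared roughly like this (a sketch, not a recommendation; the table name, NOT NULL choices, and the clustered key order are assumptions):

-- One row per (symbol, minute, feature).
CREATE TABLE dbo.SymbolFeature (
    DT        smalldatetime NOT NULL,  -- minute resolution
    Symbol    smallint      NOT NULL,  -- 1 = MSFT, 2 = IBM, 3 = AAPL, ...
    FeatureNr smallint      NOT NULL,  -- 1..5000
    Value     float(53)     NOT NULL,
    CONSTRAINT PK_SymbolFeature PRIMARY KEY CLUSTERED (Symbol, DT, FeatureNr)
);

-- The single query pattern described above: all 5000 features for one
-- symbol at one minute is then a single contiguous range seek.
SELECT FeatureNr, Value
FROM dbo.SymbolFeature
WHERE Symbol = 2 AND DT = '2010-12-05 15:46';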

collision detection – Optimizing a quadtree for circles and circular queries

I can’t recommend a structure that actually implements what you’re asking for, but it definitely won’t work like a quad tree. It may not be a tree at all and it might not even exist…

A quad tree has a tree structure because each node represents the node above it divided into four quadrants. At any level these four quadrants cover all of the space covered by their parent. Every single point in the 2D space can be inserted at exactly one leaf node somewhere on the tree.

If you try to divide space into circles, you’re not going to be able to find a set of circles that cover their parent circle completely and evenly. This geometric problem rules out the possibility that you can store these circles in a tree that will be useful to traverse for collision detection. There will always be points that are either not covered by a circle or covered by more than one circle. (That’s bad because it means a point in 2D space either has nowhere to go in the tree or it has more than one place to go!)

Testing whether two circles intersect is simple, but I am not aware of anything simpler than checking whether the distance between their centers is less than the sum of their radii. You can save some CPU power across so many circle-circle tests by comparing squared values and skipping the sqrt calculation: test (x1 - x2)^2 + (y1 - y2)^2 <= (r1 + r2)^2 instead of computing the actual distance.

sql server – How to capture execution plan from sql profiler for filtered queries only

Using SQL Profiler, we trace all slow queries (filtering on duration/reads) to see where we can optimise.
Events used:

  • RPC:Completed
  • SQL:BatchCompleted

Filter on Duration.

If I add the ShowPlan XML event, then I cannot filter on the duration of the underlying query, which creates a huge load as hundreds or thousands of queries arrive each second.

How can I capture the execution plan only for the entries that match the duration/reads filter applied to the queries captured by the other events?
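For comparison, an Extended Events session can put a duration predicate on the plan-capture event itself, which the Profiler ShowPlan XML event cannot do; note that plan-capture events still carry measurable overhead even when filtered (a sketch; the session name, one-second threshold, and file target are assumptions):

-- Capture post-execution plans only for statements slower than 1 second
-- (Extended Events durations are in microseconds).
CREATE EVENT SESSION SlowPlans ON SERVER
ADD EVENT sqlserver.query_post_execution_showplan (
    WHERE duration > 1000000
)
ADD TARGET package0.event_file (SET filename = N'SlowPlans.xel');

ALTER EVENT SESSION SlowPlans ON SERVER STATE = START;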

magento2 – Magento 2.3.4 – Server temp folder gets full because of SQL queries

I am using Magento 2.3.4, and the server temp folder gets full because of SQL queries. I am looking for someone who can optimize the SQL server and the queries coming from Magento 2. The error below appears when the site crashes.


SQLSTATE[HY000]: General error: 1021 Disk full (/dev/shm/#sql_2e1b28_7.MAI); waiting for someone to free some space... (errno: 28 "No space left on device"), query was: SELECT main_table.* FROM eav_attribute AS main_table
INNER JOIN eav_entity_type AS entity_type ON main_table.entity_type_id = entity_type.entity_type_id
LEFT JOIN eav_entity_attribute ON main_table.attribute_id = eav_entity_attribute.attribute_id
INNER JOIN catalog_eav_attribute AS additional_table ON main_table.attribute_id = additional_table.attribute_id WHERE (entity_type_code = 'catalog_product') AND ((additional_table.is_used_in_grid = 1)) GROUP BY main_table.attribute_id


Can someone help me?
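As a first diagnostic step, it may help to confirm where MySQL/MariaDB spills internal temporary tables and how often (a sketch; these are standard server variables and status counters, but the right remedy depends on the host):

-- Where on-disk temp tables are written (here apparently /dev/shm).
SHOW VARIABLES LIKE 'tmpdir';

-- In-memory temp table ceiling; results larger than this spill to tmpdir.
SHOW GLOBAL VARIABLES LIKE 'tmp_table_size';
SHOW GLOBAL VARIABLES LIKE 'max_heap_table_size';

-- How many queries have spilled to disk since startup.
SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';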

Queries to Google from different countries?

Check geolocation:
Greetings to all. I have a question about how geolocation affects positioning in Google; does anyone know how it works?