plotting – Y axis not showing when graphing large x-values

When I run ParametricPlot[{g, r}, {x, 0, 250000000}], where the output values of the function r should always be extremely small ($< 100$), I get the following graph, in which the curve is stuck on the x-axis rather than scaled appropriately. I am struggling to understand why.

error graph
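
For reference, here is the call written out in full with an explicit plot range, so nothing is silently clipped (a sketch; it assumes g and r are expressions in x, and only uses the standard built-ins PlotRange and Table):

    (* same range as above, but forcing all computed y-values to be shown *)
    ParametricPlot[{g, r}, {x, 0, 250000000}, PlotRange -> All]

    (* spot-check that r really does stay small over the range *)
    Table[r /. x -> t, {t, 0, 250000000, 50000000}]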

The same equations over a smaller $x$ range (ParametricPlot[{g, r}, {x, 0, 100}]) give a graph much more like what I am expecting.

good graph

I am new to Mathematica, so sorry if this is a trivial question. Any help would be appreciated.

sql server – Is there any way I can speed up this large full-table query?

I have a query that selects from only one table and has a single WHERE filter. However, it takes a very long time to execute and even times out occasionally. This is likely because it returns about 4 million rows from a table of 13 million rows (the other 9 million records are older than 2019), and it returns all of the columns, of which there are 101 (a mix of datetime, varchar, and int columns). The table has two indexes: a clustered one on its primary key interaction_id, and a nonclustered index on interaction_date, a datetime column that is the main filter. This is the query:

SELECT *
FROM [Sales].[dbo].[Interaction]
WHERE year(Interaction_date) >= 2019;
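
For reference, here are the two indexes as described above, sketched as DDL (the column lists come from the description; the constraint name PK_Interaction and any index options are assumptions):

-- clustered index, created via the primary key on interaction_id
ALTER TABLE [Sales].[dbo].[Interaction]
    ADD CONSTRAINT PK_Interaction PRIMARY KEY CLUSTERED (interaction_id);

-- nonclustered index on the datetime column used in the filter
CREATE NONCLUSTERED INDEX IX_Interaction_Interaction_Date
    ON [Sales].[dbo].[Interaction] (interaction_date);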

Is there anything obvious I can do to improve this query’s performance by adding/tweaking indexes or tweaking the query itself? Before I go into an ETL process or push back on the group that needs this query (they are a Hadoop team who insist they need to Sqoop all of these records, all the time, with all of the columns), I want to see if I can make it easier on people by doing something on my end as the DBA.

The query plan by default ignores my nonclustered index on the interaction_date column and still does a full clustered index scan. So I then tried forcing the plan to use that index by including WITH (INDEX(IX_Interaction_Interaction_Date)) in the SELECT.
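
For clarity, the hinted version looked roughly like this (a sketch of the hint placement, using the table and index names above):

SELECT *
FROM [Sales].[dbo].[Interaction] WITH (INDEX(IX_Interaction_Interaction_Date))
WHERE year(Interaction_date) >= 2019;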

This forces a query plan that starts with an index scan of the nonclustered index, with an estimated 4 million rows but an estimated 13 million rows to be read. Then, after a short time, it spends the rest of the execution on the key lookup against the clustered primary key index.

But ultimately, it doesn’t seem to speed up the query at all.
Any thoughts on how I can handle this? Thanks.

untagged – Can a large corporation make a believable promise

First off, welcome to the IS Stack Exchange!

However, if the third party makes a secret copy, then they can covertly sell it for large amounts of money.

The thing is, when it comes to a pivotal and highly valued asset such as said key, it should never be

  • created
  • transmitted
  • kept
  • deleted

by a third-party vendor. These processes if possible should be done internally.

If you do want to task a third-party vendor with deleting said assets, it should be done with automation, limiting human/employee interaction or keeping it to the bare minimum.

Another solution would be another layer of encryption on this already-encrypted key, so that even if the vendor were to lay eyes on the key, it wouldn’t mean much to them; only your organization can read it. But this makes the purpose of the third-party vendor a tad redundant.

How does having a large mempool and allowing a greater transaction ancestry set change the interaction with your peers?

Bitcoin Core by default allows up to 300 MiB of mempool data, and restricts unconfirmed transaction trees to an ancestry set of at most 25 transactions and 101 kvB of transaction data. Since these are config values, you can obviously use other values than these. How does this change your node’s interaction with its peers? Do peers send data that exceeds your node’s preferences and your node drops that data upon arrival or does your node inform its peers what to send? Is this the same for mempool data generally and unconfirmed chains specifically? If you allow more data via higher values, does your node forward previously unacceptable data to its peers once their backlog clears enough for the data to be accepted by them?
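
For concreteness, the defaults mentioned above correspond to these bitcoin.conf settings (a sketch; the option names are Bitcoin Core’s, and restating the defaults explicitly changes nothing, it just shows where the knobs live):

maxmempool=300          # mempool size limit, in MB
limitancestorcount=25   # max in-mempool ancestors for an unconfirmed transaction
limitancestorsize=101   # max combined size of those ancestors, in kvB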

SkyNet Not Suitable For Large Forums


Before I go any further I should preface this post by saying that on the whole I have been quite happy with SkyNet. Customer service has been generally good, apart from my last ticket, which was answered by someone who had obviously only read the subject and not the content! But given I have around 50 sites with them, not too bad.

However, a few weeks ago I moved a fairly large forum from AWS to my SkyNet VIP Reseller account and then the problems began.

When I say “large” I mean 16,000 registered members and approx 2000 visitors per day. So not massive but not small either.

After I moved the forum and updated it to the latest version of phpBB, everything seemed OK until I started enabling the extensions that I had disabled before the forum update. I should mention that all the extensions had also been updated to be compatible with phpBB 3.3.

I got as far as re-enabling 5 extensions but could do no more. When I tried, for instance, to enable the ‘Stop Forum Spam’ extension, which I was using previously, the forum went down and I could see in cPanel that CPU and memory usage was at 100% and processes were running at 87%.

I finally managed to disable the extension and everything returned to normal, until I tried another extension I had been using for ages, again with the same result.

I then thought that disabling one or more of the enabled extensions might give me some headroom, but again, just trying to disable an extension hit the same issue of maxing out CPU and memory usage.

The forum was running fine on AWS with 10 extensions enabled. The reason I moved was that I could not update to the latest version of phpBB without paying extra to have the server instance also updated to PHP 7.3. The site generates no revenue, so that is not something I can afford right now.

So I can only conclude that a SkyNet VIP reseller account is just not suitable for this type of site. I am not blaming SkyNet. All hosting has limitations of one sort or another. I just want to save others time if they are looking for hosting for a site similar to mine.

Look for something with more than 2 GB of allocated memory and as many CPU cores as you can afford.

database – How to improve WordPress MySQL performance on large tables?

I’ve installed WordPress 5.4.1 on Ubuntu 20.04 LTS on AWS EC2 (free tier as I’m starting).

My instance has 30 GB of disk space and 1 GB of RAM.

My website has about 9,000 pages and I’ve imported 7,800 so far with the “HTML Import 2” plugin.

The wp_posts table has 7,800 rows and is 66 MB in size, and since this table has grown, WordPress has become super slow. Any change I make to the database is super slow as well.

While trying to make changes, I keep getting this error:

Warning: mysqli_real_connect(): (HY000/2002): No such file or directory in
/var/www/wordpress/wp-includes/wp-db.php on line 1626 No such file or
directory

Error reconnecting to the database This means that we lost contact
with the database server at localhost. This could mean your host’s
database server is down.
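
For context, error HY000/2002 with “No such file or directory” means PHP could not find the local MySQL socket, i.e. the MySQL server was not reachable at that moment; on a 1 GB instance that is often because the server was stopped or killed under memory pressure. A few ways to check (assuming a systemd-based Ubuntu install with the default mysql service name):

systemctl status mysql                       # is the server currently running?
journalctl -u mysql --since "1 hour ago"     # recent restarts or crash messages
dmesg -T | grep -i "out of memory"           # kernel OOM-killer activity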

What could I do in order to achieve a better speed and make it usable?

dnd 5e – Can a Large creature move past a Small and a Medium creature simultaneously?

It is a question of interpretation of the squeezing rules

Concerning your first question, the rules are very clear if you don’t use squeezing; see PHB, p. 191:

A creature’s space is the area in feet that it effectively controls in combat, not an expression of its physical dimensions.

If a Medium hobgoblin stands in a 5-foot-wide doorway, other creatures can’t get through unless the hobgoblin lets them.

These rules suggest that two halflings/humans can control a 10-foot ledge and that the wolf cannot pass without a special strategy unless the blockers let it. The wolf could pass through the halfling’s space but not the human’s.

The rules on squeezing (PHB p. 192) state:

Thus, a Large creature can squeeze through a passage that’s only 5 feet wide.

It is not entirely clear if this applies only to actual physical width of spaces or if the wolf can willingly squeeze itself together to avoid the human’s zone of control. This is up to the GM.

The speed for moving through would be 1/3.
Moving through another creature’s space always counts as difficult terrain. Both the rules on difficult terrain and on squeezing state that movement costs one extra foot per foot moved. Therefore, each foot would cost two extra feet and the total would be three feet, making the speed one third of normal.
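
As a worked example (the 50-foot speed is just an illustrative number, not taken from any stat block): each foot of movement costs 1 foot base + 1 foot for difficult terrain + 1 foot for squeezing = 3 feet, so a creature with 50 feet of speed could cover roughly 16 feet of actual distance this way (three 5-foot squares on a grid).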

The human needs to be overrun, which has advantage.
As stated above, the wolf would need to move through the space of both defenders. Successfully overrunning a creature (DMG p. 272) allows moving through that creature’s space once on the same turn. Since the wolf is allowed to move through the halfling’s space anyway, only the human needs to be overrun for the wolf to be allowed to move through their space as well. Doing so has advantage, since a human is Medium, which is smaller than Large. Moving through the defenders works at half speed. Their spaces still count as difficult terrain (even if the wolf were also to overrun the halfling, which is unnecessary), but squeezing is not necessary in this case.

Large Creature Overrun of a Medium and Small Creature

A halfling and a human are blocking a group of winter wolves (Large) on a 10′ wide ledge. Can the wolves just squeeze through the halfling’s space, which is two sizes smaller, at 1/2 speed or 1/4 speed? Do the wolves need/get to overrun both at advantage, since the defenders only take up two squares while a winter wolf takes up 4?
We are playing G2 from Tales from the Yawning Portal.

RackNerd – Large Storage KVM VPS PROMOS! 2.5GB RAM, 40 GB SSD Cached, 6TB Bandwidth for $27.80/yr & more in Los Angeles!

RackNerd, a favorite among the Low End Box community (and a Resident Host), is back with another exclusive offer for our users, this time with a variety of “High Storage” and “High CPU” offers on their KVM VPS product line hosted at their Los Angeles, CA based datacenter. RackNerd was last featured in April, and their listings always seem to generate tremendous interest from our users, routinely ranking as the most viewed and most commented offers in our monthly round-up. It is also worth noting that their Los Angeles network is “Asia Optimized”, utilizing peering and connectivity from the likes of China Telecom and China Unicom for improved performance and reduced latency to Asia/China.

Their WHOIS is public and you can find their ToS/Legal Docs here. They accept PayPal, Credit Cards, Alipay, Bitcoin, WeChat Pay, UnionPay, Amazon Pay and Boleto as payment methods.

Here’s what they had to say about their products, in their own words:

“RackNerd LLC introduces infrastructure stability and provides Dedicated Servers, Private Cloud solutions, DRaaS (Disaster-Recovery-as-a-Service), flexible Colocation, Virtual Private Servers, Shared Hosting, Reseller Hosting and advanced DDoS Mitigation services — maintained by a team with decades of experience in managed services, datacenter operations, and Infrastructure-as-a-Service. With an intrinsic focus on client success and growth, RackNerd has grown steadily while continuing to provide high-quality hosting services at competitive rates. For more information please visit RackNerd at: https://www.racknerd.com/

The LowEndBox and LowEndTalk community is important to RackNerd. We’ve seen increased demand in our Los Angeles KVM VPS, and a huge part of that is due to the community members and tech enthusiasts that visit the community! We’ve received tons of positive feedback here, and most recently RackNerd was featured as the #1 provider with the most comments, reflective of our excellent service! We’d like to sincerely thank the community, and at the same time give back! RackNerd would like to present the community with competitive shared hosting, reseller hosting, and KVM VPS specials!”

What makes this offer special, in their own words: “We’ve seen increased demand in our Los Angeles KVM VPS, and a huge part of that is thanks to the LowEndBox community. In light of the positive feedback we received here, as well as LEB’s new specialization section for high storage offerings, we are introducing some KVM VPS specials with large/healthy storage allocations. These are powered by the same SSD-cached configurations we use across all of our KVM VPS nodes, providing you with the best performance possible. Customers interested in additional storage can always upgrade!”

Here are their offers: 

LARGE STORAGE – 2GB KVM

  • 2x vCPU Core

  • 50GB SSD Cached RAID-10 Storage

  • 2GB RAM

  • 4000GB Monthly Bandwidth

  • 1Gbps Network Port

  • Full Root Admin Access

  • 1 Dedicated IPv4 Address

  • KVM / SolusVM Control Panel

  • LOCATION: Los Angeles

  • JUST $36/YEAR!

  • (ORDER HERE)

LARGE STORAGE – 4GB KVM

  • 3x vCPU Core

  • 80GB SSD Cached RAID-10 Storage

  • 4GB RAM

  • 5000GB Monthly Bandwidth

  • 1Gbps Network Port

  • Full Root Admin Access

  • 1 Dedicated IPv4 Address

  • KVM / SolusVM Control Panel

  • LOCATION: Los Angeles

  • JUST $59/YEAR!

  • (ORDER HERE)

LARGE STORAGE – 6GB KVM

  • 4x vCPU Core

  • 140GB SSD Cached RAID-10 Storage

  • 6GB RAM

  • 6000GB Monthly Bandwidth

  • 1Gbps Network Port

  • Full Root Admin Access

  • 1 Dedicated IPv4 Address

  • KVM / SolusVM Control Panel

  • LOCATION: Los Angeles

  • JUST $95/YEAR!

  • (ORDER HERE)

2.5 GB KVM Flash Sale

  • 3x vCPU Core

  • 40GB SSD Cached RAID-10 Storage

  • 2.5GB RAM

  • 6000GB Monthly Bandwidth

  • 1Gbps Network Port

  • Full Root Admin Access

  • 1 Dedicated IPv4 Address

  • KVM / SolusVM Control Panel

  • LOCATION: Los Angeles

  • JUST $27.80/YEAR!

  • (ORDER HERE)

Reseller-LEB-40GB

  • 40GB SSD Disk Space
  • 1TB Monthly Transfer
  • 15 cPanel Accounts
  • Free SSL Certificates
  • CloudLinux Powered
  • Softaculous Script Installer
  • LiteSpeed Web Server
  • Free Daily Backups Included
  • cPanel & WHM Control Panel
  • Pricing: $27.99/year
  • (ORDER HERE)

Network information and host node specification details available after the break.

RackNerd offers usually draw a lot of comments from community members, so please keep sharing your experiences and opinions.

NETWORK INFO:

Test IPv4: 204.13.154.3

Host Node Specifications:

KVM VPS Nodes:

– 2x Intel Xeon E5-2690

– 256 GB RAM

– 4x 4TB Enterprise HDDs

– LSI Hardware RAID10 w/ SSD Caching

Reseller Hosting Nodes:

– Intel Xeon E3

– 32 GB RAM

– 4x 1TB Enterprise SSDs

– LSI Hardware RAID10

– Dual 1Gbps Uplinks

Please let us know if you have any questions/comments and enjoy!

Jon Biloh

I’m Jon Biloh and I own LowEndBox and LowEndTalk. I’ve spent my nearly 20-year career in IT building companies, and now I’m excited to focus on building and enhancing the community at LowEndBox and LowEndTalk.

postgresql – Pre-caching an index on a large table in PostgreSQL

I have a table with about 10 million rows in it, with a primary key and an index defined on it:

    create table test.test_table(
        date_info date not null,
        string_data varchar(64) not null,
        data bigint,
        primary key(date_info, string_data));

    create index test_table_idx
        on test.test_table(string_data);

I have a query that makes use of test_table_idx:

select distinct date_info from test.test_table where string_data = 'some_val';

The issue is that the first time around it can take up to 20 seconds to run the query, but less than 2 seconds on any subsequent run.

Is there a way to preload the entire index into memory, rather than have the DB load it on first access?
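
For reference, one way to warm the cache explicitly is the pg_prewarm extension that ships with PostgreSQL (a minimal sketch; it assumes the extension is installable on your instance and that shared_buffers is large enough to hold the index):

    -- one-time setup
    create extension if not exists pg_prewarm;

    -- read the whole index into shared_buffers
    select pg_prewarm('test.test_table_idx');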