dnd 5e – How do I adjudicate PCs attempting to tie up a significantly stronger enemy?

It might help you to think about the steps involved with achieving the end result and how the relevant characters/creatures would react. In your situation, the end result is something like a PC standing triumphantly with one foot resting atop a hogtied zombie, Captain Morgan style.

Obviously, the zombie (to the extent that a mindless undead has any sort of thought process) doesn’t want this and so it will resist the PC’s efforts. The steps in the process would fall under the rules for contests or contests in combat and might look something like this:

How to hogtie a zombie

1. Get the zombie on the ground.

The basic rules cover grappling and shoving a creature. The PC could attempt to shove the zombie down using an opposed check. This would involve the player using their action to start an opposed check between the character’s Strength (Athletics) and the zombie’s Strength (Athletics) or Dexterity (Acrobatics).

…Except that zombies, surprise surprise, don’t have any skills. This means they just use their Str or Dex modifier for the check. The reason for this is that skills are just specific aspects of an ability score; that’s why the rulebook always reads “Strength (Athletics) check”. If the creature doesn’t have the specific supplemental skill, it just falls back on its general ability modifier.
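
If you want a feel for the odds of that contest, here is a rough Monte Carlo sketch (the modifiers are made-up examples, not pulled from any stat block; in a contest a tie leaves the situation unchanged, so the shove simply fails):

import java.util.Random;

// Rough sketch only: estimate how often a shove attempt wins the opposed check.
public class ShoveContest {
    public static void main(String[] args) {
        Random rng = new Random();
        int pcAthletics = 5;  // example PC bonus (ability modifier + proficiency)
        int zombieStr = 1;    // no skills, so the zombie falls back on its bare Str modifier
        int trials = 100_000;
        int wins = 0;

        for (int i = 0; i < trials; i++) {
            int pcRoll = rng.nextInt(20) + 1 + pcAthletics;
            int zombieRoll = rng.nextInt(20) + 1 + zombieStr;
            if (pcRoll > zombieRoll) {  // a tie means no change, i.e. the shove fails
                wins++;
            }
        }
        System.out.printf("Shove succeeds about %.0f%% of the time%n", 100.0 * wins / trials);
    }
}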

If the zombie is knocked to the ground by the shove, move on to step two. If not, the character has revealed her presence (if she was hiding) and resolves the rest of her turn.

2. Hold it still

Once the zombie is on the ground, that doesn’t mean it’s going to stay there, content to be tied up. So the questions become: can the party keep the zombie down, and can they tie it up in such a way that it can’t somehow escape?

This is likely a team effort. Somebody needs to keep the zombie pinned while a second character whips out the rope and makes an effort to tie it around the zombie, clap manacles on its limbs, etc.

Once again, you have some opposed checks to roll. I, personally, would have one character make a grapple check against the zombie. This works in the same way as a shove except that the character is attempting to keep hold of the zombie, not shove it.

Note that, even if the zombie is successfully grappled, that doesn’t mean it can’t try to bend its head down to bite, or kick or scratch in some way. That is, it can still attack even while being held down. In a rules sense, the zombie isn’t deprived of its action (i.e. it is not incapacitated) by a grapple.

3. Tie ’em up.

Assuming that the zombie has been knocked prone and pinned by a character, somebody needs to tie it up. This character might need to make a Sleight of Hand check to slip the rope around the pinned zombie.

One thing I like to promote at my table is creative application of skills. For example, a player might suggest to me that, since we’re talking about hogtying a zombie, might the animal handling skill apply? In situations like this, I like to let the player explain how a skill crosses over into a unique application. The point is to empower the players and allow for some fun storytelling opportunities.

But, of course, the zombie isn’t going to make things easy for the player. One option would be to roll some sort of opposed check for the zombie to prevent the bonds from being applied well enough to actually restrain the zombie.

Another might be to just refer to the table of typical difficulty classes and pick out a value that you think adequately captures the general difficulty of the task. Tying a rope around a zombie that is kicking and flailing might be a bit tricky, so I would likely peg that somewhere in the DC 14-16 range (whereas trying to tie up an unconscious prisoner might be a few points easier).


Remember, ability checks are the way D&D determines how successful the players are at doing things. The more complicated the activity, the more checks might be involved. That, in and of itself, is one way of setting the difficulty, simply because more checks means more chances of failing one of the component steps of the larger task. (For example, three checks that each succeed about 65% of the time will all succeed only about 27% of the time.)

One thing that I’ve really found helpful is to talk through the process with the players and explain your thought process so that they know why they’re having to make more than just one check. I’ve found that players will often point out something I’ve overlooked (“Can two of us hold the zombie down to make it easier to tie up?”).

Talking through the checks also gives them hard data that they can use to gauge how challenging a task will be (and whether they want to attempt it or try something else).

For example, they may not know how strong a zombie is, but they do know how strong their character is and how likely they are to succeed on a Strength-based check. You can help them understand what they’re dealing with even further by giving subtle, narrative clues as to the zombie’s strength: “You think that whoever this zombie was before it became undead, he was a fairly strapping fellow, judging by the size of the muscles that haven’t yet rotted away.”

blockchain – Balance in bitcoin-qt suddenly significantly diminished after a small transaction?

I had my wallet stored on a storage provider for a while, and recently got a new computer and installed bitcoin-qt and imported my wallet.

It said I had about 2 BTC and showed incoming transactions from 2013, which I had not touched since.

I recently transferred 0.33 BTC to another recipient, and the balance immediately dropped to 0.07 BTC.

I have no recollection of whether I used my wallet on other computers between 2013 and now, but I thought the blockchain had a perfect copy of all transactions from my wallet.

Any idea what could have happened? Could I have spent bitcoin from my wallet on another computer in, say, 2015 that never got reflected in the wallet stored on cloud storage? Are the transactions stored in the wallet.dat and not only on the blockchain?

Sorry, I am confused.

8 – Migrating Translated User Entities slowing migration down significantly

I have a very large site I’m attempting to migrate from D7 to D8, but am experiencing significant performance issues. We have around 35,000 users, so I don’t expect this to be fast, but if I don’t make any changes the migration will take several weeks to complete, which is obviously not an option considering we can’t shut down our site for multiple weeks. Here’s my experience so far.

I ran a test migration of the site via the UI, which stalled out when processing the d7_entity_reference_translation:user__user migration. I ended up having to use drush to finish the migration via drush migrate-import.

After cleaning everything up post-migration (took about a week), we decided that it would be better to just keep this instance up and running and only re-migrate users and their data before going live considering that’s the only data that will have changed since the initial migration. Specifically, we plan on running these user-data related migrations using drush: d7_user, d7_entity_reference_translation:user__user, d7_google_analytics_user_settings, and d7_legal_accepted.

  • The d7_user migration took about 14 hours, not great, but it was consistent and worked.
  • The d7_entity_reference_translation:user__user migration is where we ran into performance issues. During the first hour, it processed about 3000 users (on par with the rate at which d7_user migrated). However, it has slowly become slower and slower. Currently, I am sitting at the 24-hour mark and it has only processed 25K users; in the last 3.5 hours it processed only 2000 users. In other words, it started at a rate of about 3000/hr, and after 24 hours it’s down to only about 500/hr.

I’ve attempted the following and seen some improvements, but nothing significant:

  • Increased the max_execution_time in php.ini settings to around 600 seconds instead of 60 (little difference)
  • Increased the memory_limit from 128M to a few GB (note: this instance is isolated to just this task at the moment, so it’s okay for us to use more memory). This seemed to help a little, but obviously not enough.

What other adjustments can be made, or what tips might anyone have, to mitigate this performance issue?

mysql – Queries are significantly slower on a VPS than a dedicated server. Is CPU the sole bottleneck?

I moved from a dedicated server to a VPS, and queries that used to take less than a second are taking up to seven seconds now. The dedicated server had MySQL 5.6; the new one has MySQL 5.7. Both servers have 32 GB of RAM, and MySQL was using default settings on the dedicated server. Tables are all InnoDB and the data + indexes make up ~1.7 GB. innodb_buffer_pool_size is set to 3 GB (it’s also hosting websites; it could be increased if needed, but I don’t think it’d make a difference at this point).

Dedicated CPU info:

# grep -E "model name|processor" /proc/cpuinfo
processor : 0
model name  : Intel(R) Xeon(R) CPU E3-1220 v3 @ 3.10GHz
processor : 1
model name  : Intel(R) Xeon(R) CPU E3-1220 v3 @ 3.10GHz
processor : 2
model name  : Intel(R) Xeon(R) CPU E3-1220 v3 @ 3.10GHz
processor : 3
model name  : Intel(R) Xeon(R) CPU E3-1220 v3 @ 3.10GHz

VPS CPU info:

# grep -E "model name|processor" /proc/cpuinfo
processor       : 0
model name      : Intel Core Processor (Haswell, no TSX, IBRS)
processor       : 1
model name      : Intel Core Processor (Haswell, no TSX, IBRS)
processor       : 2
model name      : Intel Core Processor (Haswell, no TSX, IBRS)
processor       : 3
model name      : Intel Core Processor (Haswell, no TSX, IBRS)
processor       : 4
model name      : Intel Core Processor (Haswell, no TSX, IBRS)
processor       : 5
model name      : Intel Core Processor (Haswell, no TSX, IBRS)
processor       : 6
model name      : Intel Core Processor (Haswell, no TSX, IBRS)
processor       : 7
model name      : Intel Core Processor (Haswell, no TSX, IBRS)

A lot of the queries being run have subqueries and JOINs. EXPLAIN output for one query can be found here (it’s pretty long, so I didn’t want to paste it here): https://pastebin.com/Za4pX25h

The query cache helps, but the problem is that the tables are updated pretty regularly, so the cache gets flushed a lot.

MySQL CPU usage when the query runs on the dedicated server is 6.0%, whereas on the VPS it goes up to 98-105%. If it’s not strictly a CPU problem, is there something else I could look at? Thanks in advance.

Has Facebook changed algorithms over time significantly to lower the reach of page posts?

I’m a part-time pencil drawing artist. 2-3 years ago, one simple sketch on my page would easily get a lot of views and around 100 likes. I’m not saying that my sketch was good or bad based on these stats. The point is, it felt pretty normal.

Over time, I improved my sketching and started drawing more and more, yet the reach of my page decreased heavily, and so did the likes.

And now the situation is this: I posted a new artwork today and it got no likes (it may get 1 or 2 likes later, though).

I often see notifications from Facebook suggesting I advertise my posts instead. And one more thing: whenever I don’t post on my page for a few months, I start getting page followers automatically.

So does this mean that Facebook knows my page has some value and assumes I must have some money, so I should boost (spend money to promote) every post? And hence the algorithms are designed to give my posts very low reach?

If I plotted a graph of post performance over the last 2 years, it would fall exponentially.

I really don’t think my work quality has decreased exponentially.

Is it just my imagination? Or is there some truth to it, and the algorithms have something to do with it? Does Facebook actually want us to spend money on it?

I will be really thankful to you.

PS: I really want to know this, because if that’s the truth, I’m wasting my energy on Facebook and would rather quit.

sql server – Generating an execution plan for SELECT INTO from a view takes significantly more time than for SELECT

I have a view, vw_example, that is quite complicated and has multiple joins across multiple databases. This view has been causing significant delays when used as part of other queries, and we’ve narrowed down the problem to execution plan generation.

One thing we’ve noticed that has vexed us is that this query takes a few seconds to run (and display all the data, ~44k rows):

SELECT * FROM vw_example

whereas this query takes minutes to run (tbl_example is created by the query):

SELECT * INTO tbl_example FROM vw_example

We’ve additionally compared the SQL execution plans between the two in Microsoft SQL Server Management Studio (2014). The former plan took 3 seconds to generate. The latter plan took 37 minutes.

Comparing the two plans, the only difference is that the latter plan has a “Table Insert” node at the start. Everything else is identical.

Does anyone have any idea why it would take significantly longer to generate a plan for the SELECT INTO statement vs a SELECT statement?

php – Are static websites significantly faster than dynamic?

For a typical site on a server which is not under heavy use, the difference should be negligible – certainly nowhere near 500 ms. That said, generalization is difficult and very site/application dependent.

Most of the overhead is not the reading/writing to/from the database, but the additional overhead of parsing the code through PHP – especially when the database contents are manipulated/parsed – e.g. adding a header and footer, searching the returned page for placeholders (e.g. shortcodes in WordPress), complex designs which require multiple database queries, access permissions, etc.

That said, computers are unbelievably fast at these kinds of operations, and a filesystem is just a kind of non-SQL database – in fact, I’d go for an optimised database-based solution on an SSD over an unoptimised static HTML solution on an HDD on a system which is memory constrained.

It’s worth noting that, on a lightweight database or static-page setup, overheads relating to Internet traffic – e.g. DNS lookup and HTTPS negotiation – will be a far bigger component of how long it takes to load a page than actually getting the page.

It’s also worth commenting that many caching modules/plugins are a hybrid system: they render the page to a static file on disk and, whenever the content is changed in the database, recreate that static page, which is then given to users on request.
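
As a rough sketch of that hybrid idea (illustrative only – written in Java rather than as any particular PHP plugin, and PageCache, renderPage and the cache directory are invented names for the example):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Serve a pre-rendered file if one exists; otherwise render once and save it to disk.
public class PageCache {
    private final Path cacheDir = Paths.get("cache");

    public String getPage(String slug) throws IOException {
        Path cached = cacheDir.resolve(slug + ".html");
        if (Files.exists(cached)) {
            return Files.readString(cached);   // cheap "static" path
        }
        String html = renderPage(slug);        // expensive "dynamic" path
        Files.createDirectories(cacheDir);
        Files.writeString(cached, html);       // persisted for the next request
        return html;
    }

    // Call this whenever the content changes in the database, so the page gets rebuilt.
    public void invalidate(String slug) throws IOException {
        Files.deleteIfExists(cacheDir.resolve(slug + ".html"));
    }

    private String renderPage(String slug) {
        // stand-in for the real template + database work
        return "<html><body>" + slug + "</body></html>";
    }
}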

java – How can I speed up this code? I get the correct value, but for numbers that are large the time taken to complete increases significantly

The code below builds a string and finds the number (the element) at the given index.

if 1 is passed as an argument the output will be 1
if 2 is passed as an argument the output will be 1
if 3 is passed as an argument the output will be 2

The string that is built will look something like this:

112123123412345123456123456712345678…

The output should be the element at the index given by the argument.

public class Solution {
    public static int solve(long n) {
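        // buildString(n) returns the digit sequence split into single characters; take the one at index n-1 (0-based)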
        return Integer.parseInt(buildString(n)[(int) n - 1]);
    }

    public static String[] buildString(long n) {
        StringBuilder entireNumber = new StringBuilder();
        String startString = "";

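        // Append the growing prefix "1", "12", "123", ... until the combined string covers index n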
        for (int i = 1; i <= n; i++) {
            startString = startString + i;
            entireNumber.append(startString);
            if (entireNumber.length() > n) {
                break;
            }
        }
        return entireNumber.toString().split("");
    }
}

If sensor resolution numbers increase significantly from 12.1 MP to 50.6 MP, why is the actual difference in horizontal width much less pronounced?

I don’t understand the bolded phrase below from Camera Resolution Explained. Please explain like I’m 10; I’m unschooled in photography and physics.

How is “the actual difference in horizontal width” “much less pronounced”? I don’t understand what that first collage below (showing 12.1 MP to 50.6 MP) is trying to prove.

In order to yield twice larger prints at the same PPI, you would need to multiply sensor resolution by 4. For example, if you own a D700 and you are wondering what kind of sensor resolution you would need to print 2x larger, you multiply 12.1 MP (sensor resolution) x 4, which translates to a 48.4 MP sensor. So if you were to move up to say the latest Canon 5DS DSLR that has a 50.6 MP sensor, you would get prints a bit larger than 2x in comparison. To understand these differences in resolution, it is best to take a look at the below comparison of different popular sensor resolutions of modern digital cameras from 12.1 MP to 50.6 MP:

Image Resolution Comparison

As you can see, despite the fact that sensor resolution numbers increase significantly when going from something like 12.1 MP to 50.6 MP, the actual difference in horizontal width is much less pronounced. But if you were to look at the total area differences, then the differences are indeed significant – you could take 4 prints from the D700, stack them together and still be short when compared to a 50.6 MP image, as shown below:

12.1 MP vs 50.6 MP Resolution

Keep all this in mind when comparing cameras and thinking about differences in resolution.
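
For what it’s worth, here is a small sketch of the square-root relationship I think the article is describing (my own numbers and code, not from the article): multiplying the pixel count by roughly 4 only multiplies the width (and height) by roughly 2.

// Linear image size grows with the square root of the megapixel count.
public class MegapixelScale {
    public static void main(String[] args) {
        double d700 = 12.1;    // Nikon D700, megapixels
        double eos5ds = 50.6;  // Canon 5DS, megapixels

        double areaRatio = eos5ds / d700;           // about 4.2x as many pixels in total
        double linearRatio = Math.sqrt(areaRatio);  // only about 2.05x the horizontal width

        System.out.printf("Area ratio: %.2f, linear ratio: %.2f%n", areaRatio, linearRatio);
    }
}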