web application – Has ProtonMail implemented a mechanism to prevent cookie stealing?

I want to check whether I can steal my own ProtonMail cookies. I log in to my account and delete the cookie named AUTH-x12334xxxaazzzrf6567788ddd (the cookie name is randomized). I refresh the page and, as expected, I am logged out. That means the AUTH-xxxx... cookie is the session cookie.

I log in again and get a new cookie. I copy its name, its content, and its path (/api/), and recreate the cookie in a new private Firefox window, but I am not logged in there.

How is this possible? Has ProtonMail implemented a mechanism that invalidates a cookie after its first use?

theming – CSS styling breaks field hide mechanism

I have some jQuery set up to hide a radio list field when the user clicks on an adjacent field. I'm trying to style the radio list to display horizontally, but doing so breaks the jQuery. I believe this happens because I am declaring the display value for the radio list in CSS, which overrides the hide function in the code.

The jQuery looks like this:

(function($){
  $(document).ready(function(){
    // apply event handler to radio inputs
    $('.main-field input').change(function(){
      // if 1 or more YES selections are found
      if($('.main-field input:checked').length > 0){
        // hide the radio list field
        $('.radio-list-field').hide();
      }
      // otherwise
      else{
        // show the radio list field
        $('.radio-list-field').show();
      }
    });
  });
})(jQuery);

The CSS portion to style the radio list currently includes this:

.radio-list-item-wrapper {
    display: inline;
    margin: 10px 1em;
    padding: 0;
    list-style: none;
    position: relative;
}

I have tried using flex instead of inline, but that doesn't work either. None of the other properties affect the functionality.

Is there a way to display a radio list horizontally without declaring the display: inline/flex property, or is there a different approach that gets me the layout I want without breaking the jQuery code?

This is for a Drupal 7 site.

design – Designing a task scheduler and processing mechanism in a single thread, with the ability to pause/disable/resume it

Hello, I have a problem in which I need to schedule some “tasks” to run at a certain point in time (using the system clock; the time is saved in the Task::processingTime_ member).

The tasks can come from other threads, but the task processing logic must be done in a single thread.

My current design:
The processing thread sits in this while loop:

while (shouldRun_)
{
    if (auto task = taskProvider_.provide())
    {
        taskProcessor_.process(std::move(*task));
    }
}

Originally I had the provider (a.k.a. the scheduler; it was never renamed, so it is really a provider/scheduler hybrid :P) and the processor inlined in this while loop. Splitting the processing into TaskProcessor seemed to work well, but splitting the scheduling logic into TaskProvider doesn’t show any direct benefits. Perhaps the design isn’t the best.

Code for the Provider:

class TaskProvider
{
public:
    std::optional<Task> provide()
    {
        std::unique_lock lock{mutex_};
        while (true)
        {
            cv_.wait(lock, [this]{ return shouldTryReturnTask(); });
            if (scheduleState_ == ScheduleState::Disabled)
            {
                return {};
            }
            const auto status = cv_.wait_until(lock, tasks_.top().processingTime_);
            if (status == std::cv_status::timeout && scheduleState_ == ScheduleState::Running)
            {
                auto ret = tasks_.top();
                tasks_.pop();
                return ret;
            }
        }
    }

    void add(Task task)
    {
        if (scheduleState_ == ScheduleState::Running)
        {
            const std::scoped_lock lock{mutex_};
            tasks_.push(std::move(task));
            cv_.notify_one();
        }
    }

    void resume()
    {
        const std::scoped_lock lock{mutex_};
        scheduleState_ = ScheduleState::Running;
        cv_.notify_one();
    }

    void pause()
    {
        const std::scoped_lock lock{mutex_};
        scheduleState_ = ScheduleState::Paused;
        cv_.notify_one();
    }

    void disable()
    {
        const std::scoped_lock lock{mutex_};
        scheduleState_ = ScheduleState::Disabled;
        cv_.notify_one();
    }

    void clear()
    {
        const std::scoped_lock lock{mutex_};
        scheduleState_ = ScheduleState::Clearing;
        tasks_ = decltype(tasks_){};
    }
private:
    bool shouldTryReturnTask() const
    {
        return scheduleState_ != ScheduleState::Paused && (!tasks_.empty() || scheduleState_ == ScheduleState::Disabled);
    }

    struct TaskComp
    {
        bool operator()(const Task& lhs, const Task& rhs) const
        {
            return lhs.processingTime_ > rhs.processingTime_;
        }
    };

    enum class ScheduleState
    {
        Running,
        Paused,
        Disabled,
        Clearing,
    };

    std::atomic<ScheduleState> scheduleState_{ScheduleState::Running};
    mutable std::mutex mutex_;
    mutable std::condition_variable cv_;
    std::priority_queue<Task, std::vector<Task>, TaskComp> tasks_;
};

This design threw a few problems at me. I think my solution isn’t the best, and I am trying to explore alternatives but can’t figure them out by myself.

How the system is used:
a) processing the next available task

This whole thing must happen in one thread. The application already uses too many of them, and an additional thread isn’t needed to satisfy this use case.

The processing thread calls taskProvider_.provide() and is then blocked inside the task provider. provide() returns the task once its processing time has come, and the task is forwarded to the TaskProcessor for the actual processing logic.

New tasks can also be scheduled while taskProvider_.provide() is blocked, and such a task might have to be processed sooner than any task already queued. This is why I used cv_.wait_until: I want to block the thread until the soonest task is due. When a sooner task is added, I notify the condition variable, and since the wait did not run for its full duration, the loop re-evaluates the sleep deadline against the new soonest task.

b) pause

I want to pause the running thread, and all of the queued tasks should be cleared.
From the outside this is done by calling TaskProvider::pause and then TaskProvider::clear.

Adding new tasks should also be ignored after pause has been called.

Since the processing thread will be blocked in the TaskProvider, I thought the pause should be implemented by making the condition variable wait until it is resumed by the resume function.
If I implemented the pause in the working thread instead, I would still need a way to signal the provider to exit provide().

c) resume
Resume is quite simple: we change the state variable, which allows the first condition-variable wait to proceed if a task is added in the meantime.

d) Disabling

This is needed when I want to destroy the working thread.
The working thread might be blocked in the provide() call, so I created this mechanism of returning std::optional. The call to disable() prevents the cv_.wait_until branch from returning a task, and the first cv_.wait is interrupted by the call to TaskProvider::disable.

Thanks to that, when I want to destroy the working thread, I would do:

taskProvider_.disable();
workingThread.shouldRun_ = false;
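
For completeness, here is a hedged sketch of what the full teardown could look like, assuming shouldRun_ is a std::atomic<bool> read by the worker loop and workerThread_ is a std::thread member (both names illustrative, not from the code above):

void stop()
{
    shouldRun_ = false;        // the worker loop exits on its next iteration
    taskProvider_.disable();   // wakes a blocked provide() so the flag is observed
    workerThread_.join();      // wait for the worker to finish
}

Flipping shouldRun_ before calling disable() also avoids the loop briefly re-entering provide() after it has already been woken for shutdown.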

Please let me know if you can find a simpler solution, especially one I could unit test 🙂
What I dislike most is having these two condition-variable wait calls and the std::optional return just to handle the exit from the running thread (maybe an exception would be better; the user won’t be pausing it very frequently, so I find it a little more elegant than std::optional).
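
For illustration, here is a minimal sketch of one way to collapse the two waits into a single loop, reusing the members of the TaskProvider above and assuming Task::processingTime_ is a std::chrono::system_clock::time_point. Treat it as a sketch of the idea, not a drop-in replacement:

std::optional<Task> provide()
{
    std::unique_lock lock{mutex_};
    while (true)
    {
        if (scheduleState_ == ScheduleState::Disabled)
        {
            return std::nullopt;  // unblock the worker so it can shut down
        }
        if (scheduleState_ == ScheduleState::Running && !tasks_.empty())
        {
            const auto due = tasks_.top().processingTime_;
            if (std::chrono::system_clock::now() >= due)
            {
                auto ret = tasks_.top();
                tasks_.pop();
                return ret;
            }
            // Sleep until the soonest task is due; add()/pause()/disable()
            // notify cv_, and every wake-up falls through to the checks above.
            cv_.wait_until(lock, due);
        }
        else
        {
            // Paused, clearing, or empty queue: wait for a state change or a new task.
            cv_.wait(lock);
        }
    }
}

With a single blocking point, every public call (add, pause, resume, disable) just changes state and notifies, and a unit test only has to drive those calls and observe what provide() returns or when it unblocks.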

bip9 version bits – Should block height or MTP or a mixture of both be used in a soft fork activation mechanism?

Using block heights for the start and timeout parameters has the advantage of giving miners a known number of signaling periods. Loss of hashpower doesn’t reduce the number of retarget periods available for activation. Especially for an activation mechanism over a shorter time horizon (e.g. the Speedy Trial proposal) it may be important to ensure miners have the maximum number of signaling periods.

Block heights are also arguably easier to communicate and easier to reason about as blockchain developers are used to working with them. In addition, BIP 8 (which uses exclusively block heights) has been updated to incorporate the Speedy Trial proposal but at the time of writing (March 2021) BIP 9 (which uses MTP) has not been updated to incorporate the Speedy Trial proposal.

Using MTP (median time past) has the advantage of being able to schedule an activation at a specific time, to avoid activation occurring in the middle of the night for some region of the world. A codebase that has previously implemented MTP will obviously have to make code changes to implement block heights instead, which will need to be reviewed.

It is also subject to debate whether using block heights consistently, or a mixture of block heights and MTP, is preferable for making the implementation and release of an alternative competing compatible activation mechanism (e.g. a UASF release) more difficult, or for in some way denying it a scalp for marketing purposes.

In summary, there appears to be consensus that block heights should be used exclusively in activation mechanisms for future soft forks, but it is less clear whether there is consensus to use them exclusively for the proposed Taproot activation mechanism, Speedy Trial.

This answer was taken from comments on GitHub and the mailing list from David Harding, Andrew Chow, AJ Towns, Jeremy Rubin and Sjors Provoost.

authentication – Standard Protocols or secure mechanism to authenticate users offline

We have a use case wherein a mobile app that can be used by multiple users on the same device needs to authenticate those users in some offline scenarios. We have been using the OAuth2 password grant to check the password when the user is online, and storing a PBKDF2 hash of the password on the device for the offline checks. I know this is less secure; is there a better alternative?
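
For reference, here is a minimal sketch (in C++ for concreteness) of the offline check described above, assuming OpenSSL is available and the device stores a per-user record of salt, iteration count, and PBKDF2-HMAC hash; the SHA-256/32-byte output and the function name verifyOffline are illustrative assumptions, not details from the setup above.

#include <openssl/evp.h>
#include <openssl/crypto.h>
#include <array>
#include <string>
#include <vector>

// Returns true when the entered password re-derives the stored PBKDF2 hash.
bool verifyOffline(const std::string& password,
                   const std::vector<unsigned char>& storedSalt,
                   int iterations,
                   const std::array<unsigned char, 32>& storedHash)
{
    std::array<unsigned char, 32> derived{};
    if (PKCS5_PBKDF2_HMAC(password.c_str(), static_cast<int>(password.size()),
                          storedSalt.data(), static_cast<int>(storedSalt.size()),
                          iterations, EVP_sha256(),
                          static_cast<int>(derived.size()), derived.data()) != 1)
    {
        return false;
    }
    // Constant-time comparison so timing doesn't reveal matching prefixes.
    return CRYPTO_memcmp(derived.data(), storedHash.data(), derived.size()) == 0;
}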

magento2.4 – Magento 2: why do we use a cache mechanism at the block level?

I have a question about some cache settings I see in layout XML, like below:

<referenceContainer name="store.menu">
    <block class="Vendor\Megamenu\Block\Megamenu"
           name="megamenu"
           template="Vendor_Megamenu::Vendor/Megamenu/megamenu.phtml"
           before="-" ttl="3600">
        <arguments>
            <argument name="cache_lifetime" xsi:type="number">3600</argument>
        </arguments>
    </block>
</referenceContainer>

If I do not define a cache at the block level, the Magento 2 full-page cache mechanism will still cache that block automatically. So my question is: if Magento 2 already caches blocks via the full-page cache, why do we define a block-level cache at all?

8 – Is there a database lock mechanism?

I have a parent child entity relationship which is driven by a REST API. The payload contains data for both the parent and the child. The parent can have many children.

When the (child) payload is posted, the API first checks whether the parent exists (based solely on an ID value in the payload). If it does exist, that parent entity is assigned to an entityref field when the child entity is created. If the parent doesn’t exist, it is created and the newly created parent is assigned to the child.

This works great… until it doesn’t. It seems the API user is now publishing multiple children with the same parent at the same time (I assume via some sort of multi-threaded process). The breakdown occurs when the first two children are created (at the “same time”): they both determine there is no existing parent and then create duplicate parents.

Is there some Drupal DB method that allows me to lock creation of a certain entity type, triggered at the first spot in the API code where the parent is to be created? I assume the calls are not exactly simultaneous, but they are within one second of each other, and more precisely within the time it takes to create the first parent. Perhaps there is even a REST function to handle threaded calls?

taproot – What is the point of miner signaling in a soft fork activation mechanism? What should the threshold be on miner signaling?

What is the point of miner signaling in a soft fork activation mechanism?

Miners are signaling readiness for a soft fork activation; they are not signaling support for (or voting on) a soft fork. It seems generally accepted at the time of writing (February 2021) that any opposition towards a soft fork should be raised and discussed before an activation mechanism is proposed. If arguments are raised that haven’t previously been considered, or if opposition is material and sustained across the wider community, activation for that soft fork shouldn’t be considered.

Ideally you want as many miners enforcing the Taproot rules at the point it activates as possible. Otherwise an invalid Taproot spend could creep into a block and some miners wouldn’t reject it as they would be treating it as an anyone-can-spend. It would need a small, naturally occurring re-org to get that invalid Taproot spend out of the blockchain. This wouldn’t be a disaster but ideally you don’t want a greater frequency or greater magnitude of re-orgs than usually occur. Ideally you want all miners enforcing the Taproot rules from the point of activation.

As devrandom pointed out on IRC, there is motivation for presumably unready miners who haven’t yet signaled to get ready urgently if the soft fork is activating soon. They don’t want to expend resources mining blocks (with an invalid Taproot spend in them) that will ultimately be rejected by a proportion of the network. But that doesn’t guarantee they will be ready, and it may lead to miners rushing to get ready for activation, which again is not optimal.

There is also the consideration of miners deliberately or inadvertently producing blocks with invalid Taproot spends in them, fooling SPV and non-upgraded clients, which is discussed here.

What should the threshold be on miner signaling?

This is harder to answer, as the chosen threshold appears to be a trade-off between ensuring as many miners as possible are ready to enforce the soft fork and preventing a small minority of miners from unnecessarily delaying it for political or strategic reasons. With the SegWit soft fork in 2017, a BIP 148 user-activated soft fork had to be proposed because miners were assessed to be deliberately blocking activation in an attempt to force through an additional block size increase.

In this developer survey conducted by AJ Towns, which asked “What do you consider a reasonable threshold for activation by hashpower supermajority?”, it appears 90 percent or 95 percent would be the preferred threshold. The threshold for the SegWit activation was initially set at 95 percent.

taproot – What is the benefit of forced signaling in a soft fork activation mechanism?

David Harding answered this on IRC. The MUST_SIGNAL forced signaling is there to activate the soft fork for all nodes setting LOT=false when the rest of the network is setting LOT=true.

I run a node with LOT=false; everyone else runs a node with LOT=true. At block xxxxxx, y’all start enforcing taproot’s rules, but I never saw any signal, so I continue treating taproot transactions as anyone-can-spend, which is bad for me personally. If there are a lot of people with LOT=false, it also makes it unclear whether taproot is really being enforced, increasing the risk that miners may try to steal funds sent to taproot outputs.

In addition David Harding argues that forced signaling is not particularly dangerous.

What makes forced signaling so dangerous? We had that with the BIP34, BIP66, and BIP65 forks. Except for a hiccup with BIP66 due to spy mining, I don’t think there was any problem. BIP8 forced signaling is also only required for a brief period, so any disruption should be short.

wpa2 – What are the most secure and the weakest wireless security mechanisms?

I have a question regarding wireless security mechanisms:

  • What are WEP, WPA and WPA2?
  • What are TKIP, EAP, LEAP, PEAP, EAP-TLS, EAP-TTLS and CCMP?
  • What are the different modes of WPA2 (enterprise mode, etc.)?

How do we implement these security mechanisms effectively on an organizational network? I have good theoretical knowledge about each point, but I still don’t understand the difference between point one and point two, or how to combine and use these technologies in a real-world implementation.