architecture – How to maintain a maintainable authorization model for calls between different microservices?

In our environment (as in many others), it is common for one microservice to call another to perform a task.

In our environment, authentication is clear enough: we have a signed JWT containing a list of permissions and roles, as well as a user ID, a client ID, and so on.

What we understand less well is authorization – ensuring that the authenticated client can (or cannot) do the right things, while the underlying services still have all the access they need to do their work (even if the client would not be able to do the same things directly).

We examined different options:

  1. Each service performs its own authorization and, if an elevation of privilege is required, generates a "God mode" token with an otherwise unchanged payload but a different key pair, and makes the call with it. The main concern here is copy/pasted permission code, plus the fact that there will be a strong incentive to always enable God mode on inter-service calls (which makes it somewhat redundant).
  2. Each service performs its own authorization and simply forwards the user's token when it has to make a call. The concern here is code duplication as in option 1, as well as the fact that it may create a complex interdependent web of permissions that imply other permissions that imply other permissions that… (ad nauseam), creating a maintenance headache as the number of services increases.
  3. A lightweight API gateway service that performs "simple" authorization (nothing more advanced than "is this client allowed to use this endpoint?"), attaches a user object to the payload, and leaves more fine-grained behavior to the underlying services, which treat any call that reaches them as already authorized at the door. Performance and stability are the main problems with this option: the API gateway is a single point of failure that can render the entire system inaccessible if it malfunctions, in addition to being a frequently modified dependency of every service.
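For illustration, the kind of per-service check we would be duplicating under options 1 and 2 looks roughly like this (a minimal sketch; all names, claims, and permission strings here are invented):

```typescript
// Hypothetical shape of the (already signature-verified) JWT claims
// described above: permissions and roles plus user and client IDs.
interface TokenClaims {
  sub: string;            // user ID
  clientId: string;
  permissions: string[];  // e.g. ["orders:read", "orders:write"]
}

// The check each service would repeat: does the caller hold the
// permission this endpoint declares it needs?
function isAuthorized(claims: TokenClaims, required: string): boolean {
  return claims.permissions.includes(required);
}

// Example: a hypothetical "orders" service guarding its endpoints.
const claims: TokenClaims = {
  sub: "user-42",
  clientId: "web-app",
  permissions: ["orders:read"],
};

console.log(isAuthorized(claims, "orders:read"));  // true
console.log(isAuthorized(claims, "orders:write")); // false
```

The duplication concern is that this function, and the mapping from endpoints to required permissions, ends up copy/pasted into every service rather than living in one place.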

The question here is twofold:

  1. Are there any additional pitfalls in the three models described above that we did not take into account?
  2. Which of them is the most common in the wild?

Note that this question is not about service-mesh offerings like Istio, since we consider them somewhat orthogonal to it.

Pathfinder – Greater Command versus Hold Person – save on a willing target?

My party was hit by Greater Command at the table last night, and the Fighter and the Oracle succumbed. The Cleric, hoping to prevent some of his teammates from fleeing, attempted a Hold Person on the Fighter. The Fighter unfortunately failed the save and was held.

I have a few questions about this scenario.

  1. a) Should the fighter make the saving throw, or could he fail it voluntarily? My feeling is that he should make the saving throw, because he does not have much knowledge of spells or spellcasters and does not know where the spell comes from, let alone its effects. b) If he is under compulsion, however, is the fighter's will even his own?

  2. Would the fighter even get a saving throw, or would the Hold Person act more like a counterspell, and if so, how would that work?

distributed systems – What is a good design for maintaining consistency in a last-write-wins database without generating events?

We have an application that creates a message on each resource change (update, creation, deletion) and places it in a queue. The message is picked up by a worker process, converted into a format that the receiving application (let's call it a hub) understands, and sent. The hub implements a last-write-wins policy (implemented with MongoDB). Whenever the sending application fails to send a message, we place it in a retry queue, from which a job picks it up and sends it again a few minutes later. The problem is how to ensure consistency when messages are re-sent, since a retry may overwrite a newer write (because of the last-write-wins policy).
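To make the retry problem concrete, here is a minimal sketch of guarding the write with the resource's update timestamp, so a redelivered older message can never clobber a newer document (all names invented; in MongoDB the same guard would be a conditional filter on `updateOne`, e.g. matching only documents whose stored timestamp is older than the incoming one):

```typescript
// Hypothetical message shape: the timestamp is attached at creation time,
// not at send time, so retries carry the original ordering information.
interface ResourceMessage {
  id: string;
  updatedAt: number;
  body: string;
}

// In-memory stand-in for the hub's collection.
const store = new Map<string, ResourceMessage>();

// Apply the message only if it is newer than what is already stored;
// a stale redelivery is simply dropped.
function applyIfNewer(msg: ResourceMessage): boolean {
  const current = store.get(msg.id);
  if (current && current.updatedAt >= msg.updatedAt) {
    return false; // stale retry: ignore
  }
  store.set(msg.id, msg);
  return true;
}

applyIfNewer({ id: "r1", updatedAt: 2, body: "new" }); // applied
applyIfNewer({ id: "r1", updatedAt: 1, body: "old" }); // dropped (retry of an older update)
```

The key design point is that "last write wins" is then decided by the source resource's version, not by arrival order at the hub.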

We thought of using timestamps, but it is also possible for multiple workers to process queued messages concurrently, which can likewise leave an inconsistent state depending on the order and speed of the processes.

I have read several implementations of Lamport clocks and vector clocks. Can anyone suggest a better solution for managing consistency under a last-write-wins policy, especially in .NET?
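For reference, the core Lamport clock mechanics I have been reading about fit in a few lines (sketched here in TypeScript for illustration; the same logic ports directly to .NET):

```typescript
// Minimal Lamport clock: tick on local events, merge on receive.
// The resulting counters give an ordering that can break
// last-write-wins ties deterministically.
class LamportClock {
  private time = 0;

  // Local event (e.g. a resource update): advance the clock.
  tick(): number {
    return ++this.time;
  }

  // On receiving a message stamped with the sender's clock value.
  receive(remote: number): number {
    this.time = Math.max(this.time, remote) + 1;
    return this.time;
  }

  now(): number {
    return this.time;
  }
}

const a = new LamportClock();
const b = new LamportClock();
const stamp = a.tick(); // a is now at 1
b.receive(stamp);       // b jumps to max(0, 1) + 1 = 2
console.log(b.now());   // 2
```

A Lamport clock only gives a total order, not causality detection; if you need to know whether two updates were truly concurrent (rather than just ordered), that is where vector clocks come in, at the cost of one counter per writer.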

dnd 5e – Do I have to roll to maintain concentration if a target other than me who is affected by my concentration spell takes damage?


Your ruling is correct. Invisibility sets out the circumstances that end the spell, and the target taking damage is not one of them.

Invisibility (PHB 243):

A creature you touch becomes invisible until the spell ends. Anything
the target is wearing or carrying is invisible as long as it is on
the target's person. The spell ends for a target that attacks or casts
a spell.

See also Invisible (PHB 291).

Concentration (PHB 203):

Normal activity, such as moving and attacking, doesn't interfere with
concentration. The following factors can break concentration:

Taking damage. Whenever you take damage while you are concentrating on
a spell, you must make a Constitution saving throw to maintain your
concentration. The DC equals 10 or half the damage you take,
whichever number is higher. If you take damage from multiple sources, such as an
arrow and a dragon's breath, you make a separate saving throw for each
source of damage.

You would need to take damage yourself for your concentration to be at risk. You took no damage (and triggered none of the other conditions that break concentration), so your concentration holds and you do not have to roll anything to maintain it.
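For concreteness, the DC rule quoted above reduces to a one-line calculation (illustrative only):

```typescript
// Concentration save DC: 10 or half the damage taken, whichever is higher.
function concentrationDC(damage: number): number {
  return Math.max(10, Math.floor(damage / 2));
}

console.log(concentrationDC(7));  // 10 (half of 7 rounds down to 3, so the floor of 10 applies)
console.log(concentrationDC(26)); // 13
```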

reactjs – React subcomponent (TypeScript) does not maintain its state until the third render

I have an outer component with a render method like this:

    public render(): JSX.Element {
        return (
            <Query query={GET_CUSTOMERS}
                   variables={{ offset: this.state.offset }}
                   onCompleted={() => console.log('data loading completed!')}>
                {({ loading, error, data, startPolling, stopPolling }) => {
                    if (loading)
                        return 'Loading...'
                    if (error)
                        return `Error: ${error.message}`
                    if (data) {
                        // Pass data to private properties
                        this.customers = data.getCustomers.customers
                        this.totalRecords = data.getCustomers.metadata.totalRecords

                        // Build the UI
                        return (
                            <div>
                                {/* PAGE TITLE */}
                                <h1>List of customers</h1>

                                {/* Customer list */}
                                {this.customers.map(customer => (
                                    <Customer key={customer.id} customer={customer} />
                                ))}

                                {/* Pagination */}
                                <Paginator maxRangeSize={3}
                                           pageSize={3}
                                           totalRecords={this.totalRecords}
                                           currentPage={this.state.currentPage}
                                           onPageChange={(newOffset, newPage) => this.setPageFor(newOffset, newPage)} />
                            </div>
                        )
                    }
                }}
            </Query>
        )
    }

I also have an event handler for the subcomponent's (Paginator's) onPageChanged event, like this:

    private setPageFor(offset: number, page: number) {
        this.setState({
            offset: offset,
            currentPage: page
        })
    }

and finally, the subcomponent looks like this:

    import React, { SyntheticEvent } from 'react'

    //---------------------------------------------------------------------------------
    // Component class
    //---------------------------------------------------------------------------------
    export class Paginator extends React.Component<IPaginatorProps, IPaginatorState> {

        constructor(props: IPaginatorProps) {
            // Call super
            super(props)

            // Initialize state
            this.state = {
                initialPageInRange: 1,
                currentPage: 1
            }
        }

        public render(): JSX.Element {
            return (
                <ul>
                    {this.renderPageItems()}
                </ul>
            )
        }

        //---------------------------------------------------------------------------------
        private renderPageItems(): JSX.Element[] {
            // Return value
            const items: JSX.Element[] = []
            for (let iCounter: number = 0; iCounter < this.getMaxPossibleRange(); iCounter++) {
                items.push(
                    <li key={iCounter}
                        className={this.state.currentPage === this.state.initialPageInRange + iCounter ? 'active' : ''}>
                        <a href="#" onClick={(e) => this.goToPage(this.state.initialPageInRange + iCounter, e)}>
                            {this.state.initialPageInRange + iCounter}
                        </a>
                    </li>
                )
            }
            return items
        }

        private goToPage(page: number, e: SyntheticEvent) {
            e.preventDefault()
            const newOffset: number = this.getOffset(page)
            const newPage: number = this.getCurrentPage(newOffset)
            this.setState({
                currentPage: newPage
            })
            this.props.onPageChange(newOffset, newPage)
        }

        //---------------------------------------------------------------------------------
        private getOffset(currentPage: number): number {
            return ((currentPage - 1) * this.props.pageSize)
        }

        //---------------------------------------------------------------------------------
        private getCurrentPage(offset: number): number {
            return ((Math.ceil(offset / this.props.pageSize)) + 1)
        }

        //---------------------------------------------------------------------------------
        private getTotalPages(): number {
            return (Math.ceil(this.props.totalRecords / this.props.pageSize))
        }

        //---------------------------------------------------------------------------------
        private getMaxPossibleRange(): number {
            return (this.props.maxRangeSize <= this.getTotalPages())
                ? this.props.maxRangeSize
                : this.getTotalPages()
        }
    }

    //---------------------------------------------------------------------------------
    // Component interfaces
    //---------------------------------------------------------------------------------
    interface IPaginatorProps {
        maxRangeSize : number   // 3
        pageSize     : number   // 3
        totalRecords : number   // 19
        currentPage  : number   // 1
        onPageChange : (newOffset: number, newPage: number) => void
    }

    //---------------------------------------------------------------------------------
    interface IPaginatorState {
        initialPageInRange: number
        currentPage: number
    }

As you can see, the main component issues an Apollo query with an offset parameter (kept in state) that is updated whenever setPageFor(offset) is called, which updates the state and re-renders the component.

Now, in the subcomponent, I have two ways of getting the currentPage value: one using the props passed from the parent component (Customers in this case), and the other using the subcomponent's own state, setting an initial value of 1 and updating the state on each click of a page link.

When a page link is clicked, it calls goToPage(), which sets the local state and raises the page-changed event back to the caller (the customer component).

The current behavior of this component is that it does not change the <li> tag's className to 'active' until the third time a link is clicked.

I tried using shouldComponentUpdate() and extending React.PureComponent; I even tried setting timeouts (I had never written that before…) to see if it made a difference, but the result is still the same.

Now, if I comment out the line that raises the onPageChanged() event in the goToPage() method, the pagination subcomponent renders perfectly; but if I don't, and the subcomponent is allowed to send the event up to the parent, the whole tree re-renders and it is as if the subcomponent had been remounted, wiping out its state.

The strange part is that the third time I click a link, everything works as expected: it fails twice and works on the third.

I am totally puzzled at this point, after hours over the weekend stepping through this code with the debugger.

I apologize for copying and pasting all this code, but I think it was necessary to give the full picture of what I'm trying to achieve here.

P.S. At this point the problem can be worked around by using props to pass in the currentPage property from the customer component that calls the (pagination) subcomponent.

I've been working with React for a few months now, and this is the first time I've hit such a weird bug. So I would really like to know why this behavior occurs, and why the state is lost the first two times the component renders. Also, for reusability, I think it's best to keep these two variables in the subcomponent's state. Any thoughts from someone who knows React better will be greatly appreciated. Or is this a problem with the Apollo library?

    Thank you.

proof of work – How does the blockchain on each node stay consistent?

Most full nodes run on ordinary processors, which are by now fundamentally inefficient for mining; mining is handled by dedicated mining pools. So, since full nodes all see essentially the same blockchain (there may be inconsistencies at the tip if two blocks are mined simultaneously), they have the same UTXO set. Indeed, the UTXO set is built from the transactions included in the blocks. Whenever a full node receives a block, it deletes the UTXOs consumed by the transaction inputs and adds the UTXOs created by the outputs. For a mining node it is similar. If two miners mined the latest block at the same time, the first miner would build the next block on a different UTXO set than the other. However, after a block or two, when the network converges (they cannot keep generating competing blocks indefinitely), the whole network ends up with the same UTXO set again.
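The UTXO bookkeeping described above can be sketched roughly like this (simplified, illustrative types only, not actual node code):

```typescript
// Simplified transaction model: inputs reference earlier outputs by
// (txid, output index); outputs create new spendable coins.
interface TxInput  { txid: string; vout: number }
interface TxOutput { value: number }
interface Tx { txid: string; inputs: TxInput[]; outputs: TxOutput[] }

// The UTXO set, keyed by "txid:vout".
type UtxoSet = Map<string, TxOutput>;

// Applying a block: remove the outputs each transaction spends,
// then add the outputs it creates.
function applyBlock(utxos: UtxoSet, txs: Tx[]): void {
  for (const tx of txs) {
    for (const input of tx.inputs) {
      utxos.delete(`${input.txid}:${input.vout}`);
    }
    tx.outputs.forEach((out, vout) => {
      utxos.set(`${tx.txid}:${vout}`, out);
    });
  }
}

const utxos: UtxoSet = new Map([["coinbase0:0", { value: 50 }]]);
applyBlock(utxos, [{
  txid: "tx1",
  inputs: [{ txid: "coinbase0", vout: 0 }],
  outputs: [{ value: 20 }, { value: 30 }],
}]);
// coinbase0:0 is now spent; tx1:0 and tx1:1 remain spendable.
```

Since every node applies the same deterministic update for the same blocks, nodes that have seen the same chain necessarily hold the same UTXO set.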

For the mempool, the story is different. Bitcoin transactions are relayed across the network on a best-effort basis, so some nodes may not see certain transactions until they are finally included in blocks. So yes, there are inconsistencies in the mempool.

How can a node resume mining after a new block arrives?

As mentioned above, if two blocks are mined at the same time, some miners will be working on a different version than the others. The accepted principle is to build on the block that was received first, although this is not always what happens. So, when a mining node that is still grinding through header hashes notices that a new block has been mined, it realizes that it has lost the "race" for that particular block height, and switches to mining on top of the most recently received block.
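The tip-selection behavior described above can be sketched roughly as follows (illustrative structures, not actual Bitcoin Core code; real nodes compare cumulative chain work):

```typescript
// A candidate chain tip: its cumulative work and when we first saw it.
interface Tip { hash: string; work: number; receivedAt: number }

// Prefer the tip with the most cumulative work; among equal-work tips,
// keep the one that was received first.
function chooseTip(tips: Tip[]): Tip {
  return tips.reduce((best, t) =>
    t.work > best.work ||
    (t.work === best.work && t.receivedAt < best.receivedAt)
      ? t
      : best
  );
}

const tip = chooseTip([
  { hash: "A", work: 100, receivedAt: 5 }, // seen first at this height
  { hash: "B", work: 100, receivedAt: 9 }, // same work, seen later
]);
console.log(tip.hash); // "A"
```

This is why a temporary fork resolves on its own: as soon as one branch gains a block, its cumulative work wins and every node converges onto it.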
