office 365 – Microsoft 365 architecture and the interdependencies between SharePoint Online, Microsoft Teams, and other services, from a security and functionality perspective

We are migrating to Microsoft 365 from SharePoint 2010. I am new to this and was asked to prepare a proper architecture for the migration, but as I worked I realized that Microsoft Teams and other groups create their own site collections. So now I want to understand the Microsoft 365 architecture and how SharePoint Online, Teams, and the other services interrelate, from a security and functionality perspective.

This is because I don't want Teams to create site collections, and I don't want people to share anything with anybody; I want a well-administered, well-controlled environment.

Please advise on any links, courses, etc.

architecture – What is the right way of using an event aggregator/message bus?

Recently I had a conversation with a colleague, who proposed that the whole app could be driven through an event aggregator (or message bus).

I think this is a really good pattern when you want to decouple publishers and subscribers, especially where there are multiple publishers/subscribers. If there is just one of each, I think it is perfectly fine to request the service from DI and call a method on it; there is no real need for an event aggregator (though using one is fine, IMO). Also, if the interaction is request-response, there is not really an event, and other patterns may fit better (e.g. the observer pattern?).

However, my colleague proposed something much more “drastic”: tunneling all app communication through the event aggregator. For instance, when the UI wants to know something from a DomainEntity, it would publish an event RequestSomethingFromDomainEntity(id: 37), and the domain entity would respond with SomethingFromDomainEntity(id: 37, data: data()).
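The request/response-over-events idea can be sketched with a toy synchronous aggregator. This is only an illustration of the proposal, not any particular library; the `EventAggregator` class and event names are made up to match the example above:

```python
from collections import defaultdict
from dataclasses import dataclass

class EventAggregator:
    """Minimal synchronous event aggregator: handlers subscribe by event type."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._handlers[event_type].append(handler)

    def publish(self, event):
        for handler in self._handlers[type(event)]:
            handler(event)

@dataclass
class RequestSomethingFromDomainEntity:
    id: int

@dataclass
class SomethingFromDomainEntity:
    id: int
    data: str

def wire_domain(bus, store):
    # Domain side: answers request events by publishing a response event.
    def on_request(event):
        bus.publish(SomethingFromDomainEntity(id=event.id, data=store[event.id]))
    bus.subscribe(RequestSomethingFromDomainEntity, on_request)

bus = EventAggregator()
wire_domain(bus, {37: "payload"})

# UI side: collects responses, never touches the domain entity directly.
received = []
bus.subscribe(SomethingFromDomainEntity, received.append)
bus.publish(RequestSomethingFromDomainEntity(id=37))
```

Note how the "who talks to whom" question already shows up even in this toy: nothing in the UI code says where the response comes from.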

I see following pros:

  • Such classes are super easy to test: you just publish events
  • Basically everything is decoupled from everything else (at least semantically)
  • Implementing business rules is super easy, because interactor classes only need access to the event aggregator

And following cons:

  • You do not really know what events a class may consume or produce
  • You can no longer tell who talks to whom in the app, because everything depends on the event aggregator and anything can handle any message
  • I imagine it is really hard to debug

My questions are:

  • Can someone who has worked in this style share their thoughts?
  • Are there any other pros/cons?
  • Do the pros outweigh the cons?

microservices – Bug triage in a microservices architecture

We have a microservices architecture with a team assigned to each microservice, treating it as a product in itself.

The “real product” is a front end that uses multiple microservices to offer functionality to the end user (no micro front ends). That means all bugs are reported at the front-end level, as that is what the end user experiences.

One normal flow would be:

  1. The front-end team investigates what is happening and sees a 500 returned by service A. They send the bug to the team of service A.
  2. The service A team investigates and sees that when service A authenticates against service B, it gets a 401 despite valid credentials. They send the bug to the team of service B.
  3. The service B team investigates the 401 error and discovers that service C returned an empty list of valid users. The bug is assigned to the team of service C.
  4. The service C team detects a bug in their code and fixes it.

This means the following:

  1. All bugs are reported to the front-end team, which means that team ends up triaging every single bug and carrying an extra workload.
  2. Bugs are reassigned between teams in a “cascade” until the issue is found, increasing the time required to solve them because of the delays between something being assigned and being investigated.
  3. When there is no clear root cause because the bug arises from complex interactions, it is difficult to get someone to lead the diagnostics, as there is no clear owner.

How do you manage such situations with a similar organization?

architecture – Is Dedicating A Thread To Inputs A Good Idea In Game Design?

You don’t gain any benefit from polling for input faster than you act on it.

Even if you read the input early on your input thread, it is just going to sit in a queue until the next game update step picks it up, accomplishing nothing in the meantime.

So the situation is equivalent to the game update thread just reading all the input since the last update, and applying it directly to the current update step.

Keeping input on the game update thread is architecturally simpler, and avoids the risk of variations in thread timing making your controls feel inconsistent. (e.g. with a 200 Hz input loop and a 120 Hz game update loop, sometimes your update loop gets two input samples and sometimes one, creating a beat frequency in your control responsiveness)
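The recommended shape, "read all input since the last update inside the update step itself", can be sketched as follows. This is a minimal illustration in Python; the event strings and `update` function are made up, and in a real engine the queue would be filled by the OS input callback:

```python
import queue

# Filled by whatever delivers raw input (OS callback, window message pump, ...).
input_events = queue.Queue()

def update(state):
    """One game update step: drain everything that arrived since the last
    update and apply it now. Polling faster than this on a separate thread
    would only make events wait in the queue."""
    while True:
        try:
            event = input_events.get_nowait()
        except queue.Empty:
            break
        state.append(event)  # stand-in for "apply input to game state"

state = []
for e in ("key_down:W", "key_up:W"):
    input_events.put(e)
update(state)
```

The point of the sketch is that the update step sees exactly the same events a dedicated input thread would have handed it, just without the extra thread.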

architecture – TransactionScope in DAO or in BLL (Business Logic Layer)?

I’ve been working with Entity Framework, Repositories, Unit of Work, DDD, CQRS…

But I have a different challenge now: my company works with the architecture below:

Controllers -> BLL (Business Logic Layer) -> DAO with raw SQL connections…

I have to insert a Customer with many Dependents. I have a DAO for Customer and another for Dependent, so I will insert into two tables…

I know how to use TransactionScope and how it works… But what is the best practice? Using TransactionScope in the business layer? In the DAO?

I have CustomerDAO and CustomerBO… Where should I put the TransactionScope? Some people say to create a service class, but is that the same as CustomerBO?
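The common answer is that the business layer owns the transaction boundary (that is where "a customer and their dependents are one unit of work" is known), while DAOs just execute statements on whatever connection they are handed. Here is that shape as an analogue in Python, with sqlite3 standing in for TransactionScope; the table and function names are illustrative, not your actual stack:

```python
import sqlite3

# DAOs: do the SQL, but never open or commit the transaction themselves.
def insert_customer(conn, name):
    cur = conn.execute("INSERT INTO customer(name) VALUES (?)", (name,))
    return cur.lastrowid

def insert_dependent(conn, customer_id, name):
    conn.execute("INSERT INTO dependent(customer_id, name) VALUES (?, ?)",
                 (customer_id, name))

# Business layer: owns the unit of work, so both inserts commit or roll
# back together (the sqlite3 connection context manager wraps one
# transaction, committing on success and rolling back on exception).
def register_customer(conn, name, dependents):
    with conn:
        customer_id = insert_customer(conn, name)
        for dep in dependents:
            insert_dependent(conn, customer_id, dep)
    return customer_id

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer(id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE dependent(id INTEGER PRIMARY KEY, customer_id INTEGER, name TEXT)")
cid = register_customer(conn, "Alice", ["Bob", "Carol"])
```

In .NET terms, `register_customer` is the BO/service method that opens the TransactionScope; the two insert functions play the role of CustomerDAO and DependentDAO.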

computer architecture – How would I go about calculating the index field / tag field?

For the index field I got 9, because 2^9 = 512 words. But I'm stuck on the formula for calculating the tag field… any ideas?

Given a cache that holds 512 words with a block size of one word. Assume 32-bit addresses. The index field is ____ bits. The tag field is ____ bits.
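The tag is simply whatever address bits remain after the index and any block/byte offset have been used. A quick worked calculation, where the one assumption you must make explicit is whether the 32-bit addresses are word addresses (no offset bits, since the block is one word) or byte addresses (a 2-bit byte offset for 4-byte words):

```python
ADDRESS_BITS = 32
CACHE_WORDS = 512
BLOCK_WORDS = 1

# log2(number of blocks) = log2(512 / 1) = 9 index bits
index_bits = (CACHE_WORDS // BLOCK_WORDS).bit_length() - 1

byte_offset_bits = 2  # only applies if addresses are byte addresses (4-byte words)

# Remaining high-order bits are the tag:
tag_word_addressed = ADDRESS_BITS - index_bits                    # 32 - 9 = 23
tag_byte_addressed = ADDRESS_BITS - index_bits - byte_offset_bits  # 32 - 9 - 2 = 21
```

So the general formula is tag = address bits - index bits - offset bits; whether the answer here is 23 or 21 depends on the addressing convention your course uses.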

architecture – TaskManager class design

We implemented a TaskManager class for this purpose: executing a number of tasks asynchronously, one after the other. Basically, the class contains a queue of tasks and the functions to run/execute them.

Considering that the class is a hybrid between a collection and a manager, the question is: should it have a separate queue inside it or not? What is the best architecture in this scenario?

public class TaskManager
{
    public List<Task> Queue;

    public void Execute(Task t) {...}
    public void ExecuteAll() {...}
}


public class TaskManager
{
    private List<Task> _queue;

    public void AppendToQueue(Task t) {...}
    public void RemoveFromQueue(Task t) {...}

    public void Execute(Task t) {...}
    public void ExecuteAll() {...}
}

computer architecture – How to compute the cycles in a pipelined single-cycle processor

I’m an undergrad studying computer engineering, in my first of many courses on computer organization/architecture. In the lectures and online I see diagrams like the one pictured below from the University of Washington. In general, it seems that the number of cycles in a single-cycle pipelined architecture can be derived from the equation

$C = \text{stages} + \text{instruction count} - 1$

That seems to fit the diagram: each instruction must pass through the same 5 stages. The first instruction has no idle resources available and so needs a number of clock cycles equivalent to the number of stages. Every other instruction can take advantage of the idle resources and so adds only a single cycle each.

My first question is whether my understanding is correct. My second question is what would happen if one of these instructions encountered a data hazard and was forced to stall. How would that affect the number of cycles required? I assume it would go up by one cycle per stall, but I don’t know for certain.
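Both the formula and the stall assumption can be checked numerically. A sketch, under the usual textbook assumption that each stall inserts a one-cycle bubble into the pipeline:

```python
def pipeline_cycles(stages, instructions, stall_cycles=0):
    """Ideal pipeline: the first instruction takes `stages` cycles to drain
    through, each subsequent instruction completes one cycle later, and
    every stall (bubble) pushes everything back by one more cycle."""
    return stages + instructions - 1 + stall_cycles

five_stage_four_instr = pipeline_cycles(5, 4)            # 5 + 4 - 1 = 8
with_one_stall = pipeline_cycles(5, 4, stall_cycles=1)   # one bubble -> 9
```

So yes: with no hazards the formula gives stages + N - 1, and each stall cycle adds exactly one to the total.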

architecture – What is the most common approach for microservices to access data managed by other micro services?

Both are viable. They have their tradeoffs.

The first one is easier, but if the Order service is down, so is the Fulfilment service. It also becomes a problem for overall stability: if a service depends on 3 other services, each with 99.5% availability, then the service itself would have 0.995^3 = 0.985 = 98.5%. This might be unacceptable, and when you have dozens of services with many instances, these numbers add up quickly.

The second is more difficult, but the Fulfilment service can work even when the Order service is down. It also allows the Fulfilment service to store the data in a way that is easy for it to consume, which might not be true of whatever API the other services provide.

A third option is “delayed” creation. In this scenario, the Fulfilment service tracks the state of its request in persistent storage and starts it in an “is being fulfilled” state. It then sends an asynchronous message to the Order service, which the Order service processes when it is up and ready. Once done, the Order service sends a response back to the Fulfilment service, which continues processing. This is a different form of complication than keeping a copy of the data: you need to keep track of the fulfilment state and revert it if something goes wrong (e.g. with a saga), and clients of the Fulfilment service must know that their request can be in a “to be finished” state. But it gives you the advantage of high stability with no need to keep duplicate data.
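The availability arithmetic behind the first option generalizes. A sketch, assuming independent failures and a strictly synchronous call chain (every request must traverse every dependency):

```python
def chain_availability(*availabilities):
    """Availability of a service that synchronously depends on all of the
    given services: with independent failures, the probabilities multiply."""
    product = 1.0
    for a in availabilities:
        product *= a
    return product

# Three synchronous dependencies at 99.5% each, as in the example above:
combined = chain_availability(0.995, 0.995, 0.995)  # 0.995^3, roughly 98.5%
```

This is why deep synchronous call chains are so costly: every extra hop multiplies in another factor below 1.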

computer architecture – Two-port vs dual-port RAM

Regarding the difference between two-port and dual-port RAM, here is what I understand:

The first can read and write at the same time, but can’t read twice or write twice at the same time, while the second can read and write, read twice, or write twice.

Is this correct?

Also, is this a software or a hardware property? Suppose I have 2 × 8 GB = 16 GB of RAM; how could I convert it from two-port to dual-port?