architecture – TransactionScope in DAO or in BLL (Business Logic Layer)?

I’ve been working with Entity Framework, Repositories, Unit of Work, DDD, CQRS…

But I have a different challenge now… my company works with the architecture below:

Controllers -> BLL (Business Logic Layer) -> DAO with raw SQL connections…

I have to insert a Customer with many dependents. I have a DAO for Customer and another for Dependent, so I will insert into two tables…

I know how to use TransactionScope and how it works… But what’s the best practice? Using TransactionScope in the business layer? In the DAO?

I have CustomerDAO and CustomerBO… where should I put the TransactionScope? Some people say to create a service class, but is that the same as CustomerBO?
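As a minimal sketch of the usual answer (keep the transaction in the BLL so each DAO stays a single-table writer), assuming your existing CustomerDAO and DependentDAO expose Insert methods — the names here mirror the question, not any real code:

```csharp
using System.Transactions;

public class CustomerBO
{
    private readonly CustomerDAO _customerDao = new CustomerDAO();
    private readonly DependentDAO _dependentDao = new DependentDAO();

    // The BLL owns the unit of work: both inserts commit or roll back together.
    public void RegisterCustomer(Customer customer)
    {
        using (var scope = new TransactionScope())
        {
            _customerDao.Insert(customer);           // first table
            foreach (var dependent in customer.Dependents)
                _dependentDao.Insert(dependent);     // second table

            // Any exception thrown before this line rolls everything back.
            scope.Complete();
        }
    }
}
```

Each DAO can open its own SqlConnection inside the ambient transaction; as long as both connections target the same database, the transaction normally stays local rather than escalating to a distributed (MSDTC) transaction.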

database design – Necessity of a DB-replication as extra security layer

If the application (publicly accessible) is attacked / DDoS’d, only the connected replica DB will be affected and the main DB can operate normally.

That’s the reason.

If the Application running against the replica database is publicly accessible, then it can be attacked. If it is attacked then, as you say, the replica database will bear the brunt of that attack. The remainder of the company, running against the primary database, can continue operating (largely) as normal.

As long as the web application is of secondary importance to your company, then this arrangement makes sense. If you work for eBay or Amazon? Not so much!

That’s not to say that the primary database is completely unaffected by this! The replication processes, trying to write to the over-loaded replica, may be impacted too, which may feed back into poorer performance on the primary or, depending on how the replication system works, potential disk space problems with logs not being shipped and cleared down as quickly as they should be.

unity – How can I manipulate a layer with click and drag functionality?

This works well for clicking objects on the layer, but not for click-and-dragging them.

public class FishClicker : MonoBehaviour {

    public LayerMask whatIsFish;
    public float clickRadius = 0.1f;

    void Update() {
        if (Input.GetMouseButtonDown(0)) {
            var mousePos = Camera.main.ScreenToWorldPoint(Input.mousePosition);
            var hit = Physics2D.OverlapCircle(mousePos, clickRadius, whatIsFish);

            if (hit != null) {
                var fish = hit.GetComponentInParent<Fish>();

                if (fish != null) { fish.DoClickThing(); }
            }
        }
    }
}
For DoClickThing(), this works:

public void ControlFishClick() {
    screenPos = myCam.WorldToScreenPoint(transform.position);
    Vector3 v3 = Input.mousePosition - screenPos;
    angleOffset = (Mathf.Atan2(transform.right.y, transform.right.x) - Mathf.Atan2(v3.y, v3.x)) * Mathf.Rad2Deg;
}


For DoDragThing(), I want this to work:

public void ControlFishDrag() {
    stopMovement = true;
    rigid2D.velocity = new Vector2(0, 0);
    Vector3 v3 = Input.mousePosition - screenPos;
    float angle = Mathf.Atan2(v3.y, v3.x) * Mathf.Rad2Deg;
    transform.eulerAngles = new Vector3(0, 0, angle + angleOffset);
}

For DoReleaseClick(), I want this:

public void ControlFishUp() {
    stopMovement = false;
}
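A likely reason dragging never happens: OverlapCircle only runs inside GetMouseButtonDown, which is true for a single frame. As a sketch (assuming Fish exposes the DoClickThing/DoDragThing/DoReleaseClick methods named above), a revised FishClicker can remember which fish was grabbed and keep calling the drag handler while the button is held:

```csharp
using UnityEngine;

public class FishClicker : MonoBehaviour {

    public LayerMask whatIsFish;
    public float clickRadius = 0.1f;

    private Fish grabbedFish;   // the fish picked up on mouse-down, if any

    void Update() {
        if (Input.GetMouseButtonDown(0)) {
            var mousePos = Camera.main.ScreenToWorldPoint(Input.mousePosition);
            var hit = Physics2D.OverlapCircle(mousePos, clickRadius, whatIsFish);
            if (hit != null) {
                grabbedFish = hit.GetComponentInParent<Fish>();
                if (grabbedFish != null) { grabbedFish.DoClickThing(); }
            }
        }
        else if (Input.GetMouseButton(0) && grabbedFish != null) {
            grabbedFish.DoDragThing();      // runs every frame while held
        }
        else if (Input.GetMouseButtonUp(0) && grabbedFish != null) {
            grabbedFish.DoReleaseClick();   // fires once on release
            grabbedFish = null;
        }
    }
}
```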

Does it make sense to use microservices on top of a monolithic data layer that’s unlikely to change?

If you have a very large monolithic database that is not partitioned or sharded, with strong relationships between the tables/data, does it still make sense to use microservices as the application layer?

python – ValueError: Input 0 of layer conv1d is incompatible with the layer: : expected min_ndim=3, found ndim=2. Full shape received: [32, 380]

I am trying to classify the inputs into categories. For the sake of reproducibility, the training and validation datasets I am using are shared here.

A little background about the dataset: these CSV files are satellite images (shared here for convenience) converted from .TIF to NumPy arrays. The images have 3 channels.

What I am trying to do below is feed the datasets into a simple CNN layer that extracts the useful features of the images and then feeds them as a 1D sequence into the LSTM network for classification.

from keras.models import Sequential
from keras.layers import Dense, Flatten
from keras.layers.convolutional import Conv1D
from keras.layers import LSTM
from keras.layers.convolutional import MaxPooling1D
from keras.layers import TimeDistributed
from keras import optimizers
from keras.callbacks import EarlyStopping
import pandas as pd
from sklearn.model_selection import train_test_split

df_train = pd.read_csv("data/train/training_dataset.csv", header=None, sep=',')
df_validation = pd.read_csv("data/validation/validation_dataset.csv", header=None, sep=',')

#train,test = train_test_split(df, test_size=0.20, random_state=0)

model = Sequential()

model.add(TimeDistributed(Conv1D(filters=5, kernel_size=2, activation='relu', padding='same'), batch_input_shape=(32, 1, 380)))
model.add(LSTM(50, return_sequences=True))

adam = optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0)

model.compile(optimizer=adam, loss='mse', metrics=('mae', 'mape', 'acc'))
callbacks = [EarlyStopping('val_loss', patience=3)]
model.fit(…, validation_data=df_validation, batch_size=batch_size, callbacks=callbacks)


The shapes are:

df_train.shape: (17980, 380)
df_validation.shape: (17980, 380)

However, when I run my code, I get the following error:

ValueError: Input 0 of layer conv1d is incompatible with the layer: : expected min_ndim=3, found ndim=2. Full shape received: (32, 380)

How can we fix this error?
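Not part of the original question, but a sketch of the usual cause: Conv1D expects at least 3-D input (batch, timesteps, channels), while each CSV row here is a flat 380-vector, so the model sees a 2-D (batch, features) array. Adding an explicit channel axis (and, for the TimeDistributed wrapper used above, an outer time axis) before calling fit is the standard fix:

```python
import numpy as np

# A (32, 380) batch is 2-D; Conv1D needs at least 3-D:
# (batch, timesteps, channels).
x = np.zeros((32, 380))

# Treat the 380 values as 380 timesteps of one channel:
x3d = x.reshape(32, 380, 1)          # equivalently: np.expand_dims(x, -1)
print(x3d.shape)                     # (32, 380, 1)

# With the TimeDistributed(Conv1D(...)) wrapper in the question, the input
# needs a fourth axis: (batch, outer_steps, timesteps, channels).
x4d = x.reshape(32, 1, 380, 1)
print(x4d.shape)                     # (32, 1, 380, 1)
```

The 380-as-timesteps interpretation is an assumption; if the rows really encode 3-channel pixels, the reshape should reflect that layout instead.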

architecture – Repository Pattern with Services Layer – Good Practice?

This is the first time I am using the repository pattern and applying a layered architecture in a project. I have followed the article found here. The complete code from the article can also be found on the writer’s GitHub.

I read the whole article from beginning to end before getting my hands on the keyboard to start coding. It all made sense (still does).

I am not sure if I have to summarise the whole article here, but in simple terms, he divided his solution into 4 projects: Core, Data, Services and WebApi. The Repository Pattern and UnitOfWork are used to abstract data access.

After reading the article I decided to apply the very same logic to my project. After doing so, I realised it took me the whole day, and I found myself writing a lot of code which seemed unnecessary and duplicated. For example, in the article he only has two entities, so it is a very simple application. I currently have 28, and that’s only the beginning! This meant that, apart from my actual model, I ended up with 28 separate repository interfaces, 28 separate service interfaces, 28 concrete repositories, 28 concrete service classes and a similar number of DTOs (Resource classes) for mapping purposes.

So, I am happy to do the extra work if it will prove to be a solid foundation for my project and contribute positively to my learning progress. However, something tells me I’ve done a lot of unnecessary work which could either be simplified or done in a better way. Hence, I am here seeking advice on whether this article is guiding me in the right direction.

I am using the latest versions (including pre-release) of DotNet Core and Entity Framework Core.
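Not from the linked article, but one common way to cut that duplication is a single generic repository shared by all 28 entity types, adding a concrete repository only where an entity needs extra queries. A minimal sketch (names are illustrative, not from the article’s code):

```csharp
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public interface IRepository<T> where T : class
{
    Task<T?> GetByIdAsync(int id);
    Task AddAsync(T entity);
    // ...other shared members (Update, Remove, List) go here
}

public class EfRepository<T> : IRepository<T> where T : class
{
    private readonly DbContext _context;
    public EfRepository(DbContext context) => _context = context;

    public async Task<T?> GetByIdAsync(int id) => await _context.Set<T>().FindAsync(id);
    public async Task AddAsync(T entity) => await _context.Set<T>().AddAsync(entity);
}

// Registered once as an open generic, this covers every entity type:
// services.AddScoped(typeof(IRepository<>), typeof(EfRepository<>));
```

The same idea applies to the service layer: a generic base service can carry the pass-through CRUD, leaving per-entity services only for real business logic.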

Clean Architecture Gateway layer depends on outer layer

Looking at the clean architecture layers and flow diagrams, and having implemented them myself in my applications, I’ve always wondered which layer is supposed to contain the DB, or any 3rd-party service or SDK.

Looking at both of these images raises the question of whether there is a violation in the outer layers.

(image: clean architecture layers diagram)

(image: clean architecture flow diagram)

I’ve imagined the layers division like this:

(image: how I’ve imagined the layer division)

But this means there is a violation of the dependency rule, since the gateway always knows about both the external service and the application itself (the entities).

Is there a correct way to draw these layers? I’ve read a couple of resources asking this question but didn’t really get a full answer to what I need. For example: Doesn’t repository pattern in clean architecture violate Dependency inversion principle?

I get that the meaning of clean architecture is kept, and that the inner layers (the entities and the use cases) aren’t affected by a change in the DB or the gateway, but I was just wondering if maybe this is more accurate:

(image: alternative layer division)


From the book:

Recall that we do not allow SQL in the use cases layer; instead, we use gateway interfaces that have appropriate methods. Those gateways are implemented by classes in the database layer.

So I guess this means that the data access is really in the most outer layer:

(image: layer division with data access in the outermost layer)

Maybe, for this specific example, there is no real use for the interface adapters layer?

But also from the book about interface adapters layer:

Similarly, data is converted, in this layer, from the form most convenient for entities and use cases, to the form most convenient for whatever persistence framework is being used (i.e., the database). No code inward of this circle should know anything at all about the database. If the database is a SQL database, then all SQL should be restricted to this layer—and in particular to the parts of this layer that have to do with the database.

Also in this layer is any other adapter necessary to convert data from some external form, such as an external service, to the internal form used by the use cases and entities.

So this kind of contradicts the idea that data access sits in the database layer, since converting from the DB form (for example, SQL rows) into the application’s entities is exactly what the interface adapters layer does. Are these layers not really separated? I’m confused.
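Not from the book, but the usual resolution can be shown in code: the use-case layer owns the gateway interface, and an outer layer implements it, so the source-code dependency points inward even though data flows outward. A sketch with illustrative names:

```csharp
// Use-case layer: knows only the interface and the entity.
public record Customer(int Id, string Name);

public interface ICustomerGateway           // owned by the inner layer
{
    Customer? FindById(int id);
}

public class GetCustomerUseCase
{
    private readonly ICustomerGateway _gateway;
    public GetCustomerUseCase(ICustomerGateway gateway) => _gateway = gateway;
    public Customer? Execute(int id) => _gateway.FindById(id);
}

// Database layer (outermost): implements the interface and holds all SQL.
public class SqlCustomerGateway : ICustomerGateway
{
    public Customer? FindById(int id)
    {
        // SQL lives only here; rows are converted to entities before
        // crossing back into the inner layers, e.g.:
        //   SELECT id, name FROM customers WHERE id = @id
        return null; // placeholder for the mapped entity
    }
}
```

On this reading the gateway *interface* sits inward and only its *implementation* sits in the outer ring, which is why the diagrams can look contradictory.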

Alternatives to Cisco 3750G Layer 3 Switches


Over the last 10+ years I’ve heavily used Cisco 3750Gs in colocated racks, as they were perfect for our needs.

They were solid swit…

networking – How does the data link layer detect loss and apply flow control?

I’m using the Internet over mobile or ADSL. Thinking about the network structure, we know that the data link layer is in charge of flow control, and that the transport layer is responsible for both flow control and congestion control.

So I have three questions:

  • In smartphones, we have an extra small piece of hardware, a Wi-Fi module. Does my phone do all the TCP/IP stack work, such as the data link layer responsibilities and even the transport layer duties, via that small Wi-Fi chip?
  • As I’m using ADSL at home, and it’s just a simple some_brand_router, how does flow control happen through my network card? (Note that I don’t have Cisco or any other switches on which I could activate flow control per interface through a terminal.)
  • Given that the transport and network layers are implemented in software (kernel/firmware), how does the data link layer use my hardware to maintain a sliding window and detect that a loss has occurred, in order to provide reliability?

Thanks for your help and for getting me out of my ignorance :v
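Not an answer from the thread, but the “sliding window + loss detection” mechanics the last bullet asks about can be sketched in a few lines. This toy model (go-back-N ARQ, with losses injected deterministically) shows the idea: the sender keeps a window of unacknowledged frames, and a missing cumulative ACK makes it resend from the first unacknowledged frame:

```python
def go_back_n(frames, window, lost_once):
    """Simulate go-back-N ARQ: deliver `frames` over a link that drops
    each sequence number listed in `lost_once` exactly one time."""
    delivered, base = [], 0
    lost = set(lost_once)
    while base < len(frames):
        # Send the whole window; the receiver accepts only in-order frames.
        for seq in range(base, min(base + window, len(frames))):
            if seq in lost:
                lost.discard(seq)   # dropped this time; the resend succeeds
                break               # everything after the gap is discarded
            delivered.append(frames[seq])
        # The cumulative ACK advances the window base; a timeout on the gap
        # triggers retransmission starting at the first unacked frame.
        base = len(delivered)
    return delivered

print(go_back_n(list("ABCDE"), window=3, lost_once=[2]))  # ['A', 'B', 'C', 'D', 'E']
```

Real NICs implement only parts of this in hardware (checksums, and in 802.11 per-frame ACK/retry); TCP’s window lives in the kernel, which is part of why the layering feels blurry in practice.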