domain driven design – Obtaining an application-generated ID from the repository or from the entity constructor?

In the book “Implementing Domain-Driven Design”, the author suggests implementing a repository method that provides the next application-generated (not database-generated) ID, like so:

class MyRepo {
  public MyId nextId() {
    // application-assigned identity, independent of the database
    return new MyId(UUID.randomUUID());
  }
}

That would lead to code like this:

var id = repository.nextId();
var entity = new MyEntity(id, ...);

Although I can see the point that providing IDs is, in some sense, the responsibility of a repository, I don’t see an actual benefit to this implementation. Why not assign the ID directly on object construction?

@Entity
public class MyEntity {
  @Id
  private MyId id = new MyId(UUID.randomUUID());
}

One could also argue that the identity is a central part of the entity itself.

But apart from this philosophical difference, I see a concrete benefit to creating the ID directly in the entity: no additional call to the repository is needed.

Do you see any advantage in having the repository provide the ID?
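One commonly cited advantage, sketched below purely as a hypothetical illustration (it is not claimed in the question): when the repository owns nextId(), the ID strategy can be swapped, for instance to deterministic IDs in a test, without touching the entity.

import java.util.UUID;

// Hypothetical test double: overrides the repository's ID strategy with
// deterministic, repeatable IDs so tests can make exact assertions.
class FixedIdRepo extends MyRepo {
  private long counter = 0;

  @Override
  public MyId nextId() {
    return new MyId(new UUID(0, ++counter)); // ...-0001, ...-0002, ...
  }
}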

tables – Stuck with my UI design

I am facing a sort of mental block while redesigning a UI at my job. The previous UI was not user-friendly, and we received a lot of feedback asking us to make it better. Now, finally, after two years, our team has decided to revisit the UI and improve it.

I mocked up a few proposals, but all were heavily influenced by our existing UI, so I never showed them to my team. I am stuck and cannot think past the existing UI design we have. I feel like I am not getting anywhere and can’t come up with anything new.

So any suggestion or advice would be helpful.

Problem: I am trying to create a UI that allows users to map nodes (define a mapping).
I have x source buckets, each containing more than one node, and one target bucket that also contains more than one node. The goal is to map nodes in the target bucket to nodes in the source buckets.

For example,

Source Buckets: A = {'node1', 'node2', 'node3'}, B = {'node4', 'node5'}

Target Bucket: C = {'node6', 'node7', 'node8', 'node9'}

Mapping (which the user should be able to define):

C::node6 -> A::node2

C::node7 -> B::node5

C::node8 -> B::node4

C::node9 -> A::node1

Note how all the nodes of bucket C are eventually mapped to some node from the source buckets.


What would be a nice and simple way to allow users to define this mapping?

I do not want to dwell on the current design, except to say that we use tables to display this mapping in the UI; the buttons a user presses to define individual mappings sit outside the tables.

Is there a better way to define and visualize these mappings? Are there any resources I can look at?

design – What is the most suitable data structure for IPv4 addresses with intersecting ranges?

In most routers and operating systems, the Longest Prefix Match algorithm is used to search a trie of IPv4 addresses.
Packet filters like iptables don’t need a special data structure for finding IPv4 addresses.

I need to implement a data structure in which I can efficiently (O(log N)) find the IPv4 addresses specified in firewall rules. But there can also be IPv4 address ranges. These ranges can intersect and have exclusions (one range or IP excluded from another), though I think exclusions can be resolved while constructing (deploying) the data structure.
So Longest Prefix Match doesn’t fit.
I was considering an interval tree, but I’m not sure it is the most efficient approach.

P.S. Inserting and deleting are not a big problem, but it would be good for them to be O(log N) as well.
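A minimal sketch of one possibility, under the question’s own assumption that intersections and exclusions are flattened into disjoint ranges at construction time: keep the ranges in a sorted map keyed by start address, so a lookup is a single floor search, i.e. O(log N). Class and method names here are placeholders.

import java.util.Map;
import java.util.TreeMap;

class Ipv4RangeSet {
  // start address -> inclusive end address; ranges must be disjoint,
  // i.e. intersections/exclusions already resolved at build time
  private final TreeMap<Long, Long> ranges = new TreeMap<>();

  void add(long start, long end) {
    ranges.put(start, end);
  }

  boolean contains(long ip) {
    Map.Entry<Long, Long> e = ranges.floorEntry(ip); // greatest start <= ip
    return e != null && ip <= e.getValue();
  }

  // IPv4 address as an unsigned 32-bit value held in a long
  static long toLong(String dotted) {
    long value = 0;
    for (String octet : dotted.split("\\.")) {
      value = (value << 8) | Integer.parseInt(octet);
    }
    return value;
  }
}

Insertion and deletion of a disjoint range are also O(log N) here; an interval tree only becomes necessary if overlapping ranges must be kept and queried as-is at lookup time.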

postgresql – How can I design an efficient DB for fund transactions?

I am thinking about simple fund transfers from one user to another. All users will be in one table and transactions in another table. But I could not figure out an efficient way to show all of a user’s (sent & received) transactions in one row.

How can I join them to get something like this:

  {
    "user_id": "1",
    "user_name": "aaa",
    "transactions": [
      {
        "tx_id": 1,
        "tx_amount": 5.00,
        "tx_to_user": 3
      },
      {
        "tx_id": 3,
        "tx_amount": 1.00,
        "tx_from_user": 2
      }
    ]
  }

Users Table

+---------+-----------+--------------+
| user_id | user_name | user_balance |
+---------+-----------+--------------+
| 1       | aaa       | 10.00        |
+---------+-----------+--------------+
| 2       | bbb       | 50.00        |
+---------+-----------+--------------+
| 3       | ccc       | 40.00        |
+---------+-----------+--------------+

Transactions Table

+-------+--------------+------------+-----------+
| tx_id | tx_from_user | tx_to_user | tx_amount |
+-------+--------------+------------+-----------+
| 1     | 1            | 3          | 5.00      |
+-------+--------------+------------+-----------+
| 2     | 3            | 2          | 1.00      |
+-------+--------------+------------+-----------+
| 3     | 2            | 1          | 1.00      |
+-------+--------------+------------+-----------+

I am learning with Express and Postgres. Thanks ❤
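A minimal sketch of one possible shape for this, assuming Postgres’s json_agg/json_build_object and the table and column names above (the connection string and surrounding class are placeholders): let the database aggregate each user’s sent and received transactions into a JSON array, then read one row per user.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class UserTransactions {
  // One row per user; Postgres builds the JSON array of that user's
  // sent and received transactions itself.
  static final String QUERY =
      "SELECT u.user_id, u.user_name, " +
      "       json_agg(json_build_object(" +
      "           'tx_id', t.tx_id, " +
      "           'tx_amount', t.tx_amount, " +
      "           'tx_from_user', t.tx_from_user, " +
      "           'tx_to_user', t.tx_to_user)) AS transactions " +
      "FROM users u " +
      "JOIN transactions t " +
      "  ON t.tx_from_user = u.user_id OR t.tx_to_user = u.user_id " +
      "GROUP BY u.user_id, u.user_name";

  public static void main(String[] args) throws Exception {
    // placeholder connection string
    try (Connection c = DriverManager.getConnection("jdbc:postgresql://localhost/funds");
         Statement s = c.createStatement();
         ResultSet rs = s.executeQuery(QUERY)) {
      while (rs.next()) {
        System.out.printf("%d %s %s%n",
            rs.getInt("user_id"),
            rs.getString("user_name"),
            rs.getString("transactions")); // JSON array as text
      }
    }
  }
}

A LEFT JOIN would keep users with no transactions, and whether the OR join performs well enough depends on indexes on tx_from_user and tx_to_user.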

api design – API for an input stream “read until” operation

I want to create a stream class. The input stream should read/parse a contiguous range from left to right, providing convenience methods. The implementation isn’t the problem; choosing a consistent API is.

A basic example would be:

input_stream is("Hello! 42");

is.peek(); // 'H'
is.read_n(7); // "Hello! "
is.read<int>(); // 42

I also want to support an operation known as “read until”, which reads the input stream until a specific string occurs. At first glance this doesn’t sound very problematic, but on closer look there are three possible outcomes:

input_stream is("Hello! 42");

is.read_until("!"); // ?
  1. the function returns "Hello!", and the input stream points to " 42"
  2. the function returns "Hello", and the input stream points to " 42"
  3. the function returns "Hello", and the input stream points to "! 42"

I want to support all three variants with three different functions, each unambiguous to the reader, but it is really hard to convey what actually happens. Here are my ideas:

• read_until_include(cmp) – consumes cmp and returns the part before it plus cmp

• read_until_exclude(cmp) – consumes cmp but returns only the part before it

• read_up_to(cmp) – does not consume cmp and returns only the part before it (leaves cmp in the input stream)

I am not a big fan of my own ideas. Could anyone please give me advice on where to go from here?
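To make the three semantics concrete, here is a minimal sketch over a simple string cursor (in Java rather than C++ purely for brevity; the names and the missing-delimiter behavior are placeholder choices, not a recommendation):

class StringCursor {
  private final String data;
  private int pos = 0;

  StringCursor(String data) { this.data = data; }

  // 1. consume the delimiter and return it as part of the result
  String readUntilInclude(String delim) {
    int i = indexOf(delim);
    String out = data.substring(pos, i + delim.length());
    pos = i + delim.length();
    return out;
  }

  // 2. consume the delimiter but return only the part before it
  String readUntilExclude(String delim) {
    int i = indexOf(delim);
    String out = data.substring(pos, i);
    pos = i + delim.length();
    return out;
  }

  // 3. stop in front of the delimiter, leaving it in the stream
  String readUpTo(String delim) {
    int i = indexOf(delim);
    String out = data.substring(pos, i);
    pos = i;
    return out;
  }

  private int indexOf(String delim) {
    int i = data.indexOf(delim, pos);
    if (i < 0) throw new IllegalStateException("delimiter not found");
    return i;
  }
}

With new StringCursor("Hello! 42"), variant 1 yields "Hello!" and variant 2 yields "Hello" (both leaving " 42"), while variant 3 yields "Hello" and leaves "! 42".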

design – How to properly implement REST controllers to handle overlapping entities?

I have:

  • A User entity.
  • A Poll entity.

Relationship: User creates polls.

Use-case:
When an arbitrary user is clicked, his/her profile is loaded and shown.
The profile includes a list of polls created by the clicked user.

Which of the following API calls is the proper one for this use-case?

  1. website.com/api/users/{username}/polls
  2. website.com/api/polls?username=xxx

Background:
I currently have a UserController and a PollController.

PollController has:

  • getPolls()
  • createPoll()
  • getPollById()

UserController has functions related to the user and handles API calls starting with /api/users/…

I am trying to figure out which controller should handle the request to get polls by a user.
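To make the two options concrete, here is a hypothetical sketch assuming Spring MVC (the question does not name a framework; Poll and PollService are placeholders). Both routes can live in PollController, since the resource being returned is a list of polls:

import java.util.List;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/api")
class PollController {

  record Poll(long id, String question, String creator) {}

  interface PollService { // hypothetical service
    List<Poll> findAll();
    List<Poll> findByCreator(String username);
  }

  private final PollService pollService;

  PollController(PollService pollService) { this.pollService = pollService; }

  // Option 1: polls exposed as a sub-resource of a user
  @GetMapping("/users/{username}/polls")
  List<Poll> getPollsByUser(@PathVariable String username) {
    return pollService.findByCreator(username);
  }

  // Option 2: the same data exposed as a filter on the polls collection
  @GetMapping("/polls")
  List<Poll> getPolls(@RequestParam(required = false) String username) {
    return username == null ? pollService.findAll()
                            : pollService.findByCreator(username);
  }
}

Either way, the handler returns polls, which is why this sketch places both routes in PollController.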

java – Are experienced developers and software architects able to describe an entire software application in terms of design patterns?

A common misconception among developers is that you can describe an application, or build an application, entirely by bolting together design patterns. Writing software is not a matter of choosing design patterns, arranging them properly, and then shipping a product. Software development has not become quite that modularized. If developers cannot build software entirely composed of design patterns, then software architects cannot have conceptual discussions about software entirely composed of design patterns.

A design pattern is a specific problem coupled with a general description of how to solve that problem. It is primarily a tool for communication, so I can see how someone might think architects can speak in terms of design patterns. They do speak in terms of design patterns, but there is no design pattern to solve every problem. Instead, architects speak and think in terms of the bigger-picture elements of software design. Design patterns are certainly part of an architect’s vocabulary, but that vocabulary extends far beyond them. Design patterns, architectural patterns (onion architecture, clean architecture, microservices architecture), design philosophies (domain-driven design), and design techniques (separation of concerns, the interface segregation principle, polymorphism, encapsulation, data hiding, etc.) can all be the main tools of communication.

So, no. Architects cannot describe an application entirely using design patterns. They need architectural patterns, design philosophies and techniques as well.

design patterns – Hover on a card with clickable links

I am working on a card that has clickable links/options, but the card itself is not clickable. I want to add a shadow to the card when someone hovers over or clicks a link. I have read in many posts that we should put a hover effect only on elements that are clickable. Does this seem like a good interaction pattern?
(Images: the “No Hover” and “Hoverable Card” states.)

design – What are the benefits of having the database on a separate instance from the main application?

Database systems generally don’t have fixed overheads – for instance, a table or index can be loaded from disk on demand or stored in RAM, with the same result but different performance. As such, many database systems are designed to make maximum use of available resources – they will claim as much RAM as allowed from the OS in advance and manage it internally, and assume they can use whatever CPU cores exist.

That strategy means they don’t “play nicely” with other applications on the same instance, particularly applications with unpredictable resource needs. You can set a limit on what the database preallocates, but set it too low and you waste resources; set it too high and your other applications can’t handle short bursts in demand.

Running two instances frees you from worrying about that conflict, and lets you pick appropriate resources for each part.