I'm trying to model the data and its flow in a software system where values need to be tracked across different versions over time.
The problem is that there are several different types of entities that need
to be tracked over time (versioned), so I thought about it and
ended up defining a concept of global time in the system.
A reference to a single entity is defined as follows:
Ref(EntityType, UniversalKey, TimeTick), where TimeTick
is an incremental value per isolated business problem (for example, in a project management system, two projects have independent
TimeTick values, because each project is effectively its own universe!). A processing chain then looks
something like this:
Ref(ImportedFile, "Tasks-A", 1) | Ref(TaskDef, "XYZ", 2) | Ref(Process, "XYZ", 3) | Ref(Result, "XYZ", 4)
Tasks-A actually contains a number of TaskDefs, but I'm only showing a single value here.
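To make the model concrete, here is a minimal sketch of the Ref concept and the chain above (the field names and the `Ref` type are my own illustration, not an existing API):

```python
from typing import NamedTuple

class Ref(NamedTuple):
    """A reference to one version of one entity."""
    entity_type: str    # e.g. "ImportedFile", "TaskDef", "Process", "Result"
    universal_key: str  # identity that stays stable across versions
    time_tick: int      # global incremental tick within one business problem

# The processing chain from the example above:
chain = [
    Ref("ImportedFile", "Tasks-A", 1),
    Ref("TaskDef", "XYZ", 2),
    Ref("Process", "XYZ", 3),
    Ref("Result", "XYZ", 4),
]
```

The TimeTick values are strictly increasing along the chain, which is what lets later steps be ordered relative to the data they were derived from.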
In the case above, when a new version of
Tasks-A becomes available, it is possible that some
TaskDefs have changed:
Ref(ImportedFile, "Tasks-A", 5) | Ref(TaskDef, "XYZ", 6)
In response, the model should be able to match the previously computed
Process and Result with the updated TaskDef.
I'm trying to solve that by matching on the
UniversalKey and tracking the differences via the TimeTick.
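The matching step I have in mind looks roughly like this (again a sketch with a hypothetical `Ref` tuple; "latest version wins" and "stale if the input changed later" are my assumptions about the semantics):

```python
from typing import NamedTuple

class Ref(NamedTuple):
    entity_type: str
    universal_key: str
    time_tick: int

refs = [
    Ref("TaskDef", "XYZ", 2),
    Ref("Result", "XYZ", 4),
    Ref("TaskDef", "XYZ", 6),  # updated TaskDef from the re-imported file
]

def latest(refs, entity_type, universal_key):
    """Pick the most recent version of an entity by TimeTick."""
    matching = [r for r in refs
                if r.entity_type == entity_type
                and r.universal_key == universal_key]
    return max(matching, key=lambda r: r.time_tick)

def is_stale(result, task_def):
    """A Result is stale if its TaskDef changed after the Result was made."""
    return task_def.time_tick > result.time_tick

new_task = latest(refs, "TaskDef", "XYZ")   # tick 6
old_result = latest(refs, "Result", "XYZ")  # tick 4
# is_stale(old_result, new_task) is True, so the Result must be recomputed
```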
Right now, I am looking for similar experiences and models to better understand the possible downsides of this solution.
- What are the possible downsides of this approach? Where can I find more details on this type of pattern?
- When the data granularity changes, what is a good way to handle the situation? For example, consider a single
Process built from several TaskDefs: obviously, you need a way to group
them and still be able to match each Process back to its
TaskDefs. Should I split the UniversalKey or add a GroupingKey? Any suggestions?
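To make the second question concrete, this is roughly what the GroupingKey option would look like; the extra field and all names here are hypothetical, just illustrating the shape of the idea:

```python
from typing import NamedTuple

class Ref(NamedTuple):
    entity_type: str
    universal_key: str
    grouping_key: str  # hypothetical extra field tying grouped items together
    time_tick: int

# Several TaskDefs feeding one grouped Process:
task_a = Ref("TaskDef", "XYZ-1", "GRP-1", 2)
task_b = Ref("TaskDef", "XYZ-2", "GRP-1", 3)
process = Ref("Process", "GRP-1", "GRP-1", 4)
refs = [task_a, task_b, process]

def task_defs_of(refs, grouping_key):
    """All TaskDef members of a group, for matching a Process back."""
    return [r for r in refs
            if r.entity_type == "TaskDef" and r.grouping_key == grouping_key]
```

The alternative (splitting the UniversalKey into something like "GRP-1/XYZ-1") would encode the same relationship inside the key itself, at the cost of making the key structured rather than opaque.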