malware – Is it possible to achieve persistence in Windows through Winlogon without touching the Userinit, Notify, or Shell keys?

I am interested in finding out whether it is possible to achieve persistence through Winlogon without using one of those three keys. I am trying to determine whether it's safe to ignore registry entries made under the Winlogon parent key. I've never seen an instance of malware achieving persistence through Winlogon without using any of those values. Does anyone know of any techniques?

database design – Event sourced persistence and data model refactoring

I've been working on a system that uses event-sourced persistence for its data model. It uses the standard model of a sequence of events, with aggregates rehydrated from those events and projections used for more complex queries.

One thing that has been bothering me lately is the question of refactoring. As we follow an agile, incremental way of building the system, it is inevitable that new features will result in changes to the data model; in this case, that means the events need to change. One specific example is an improved domain model splitting one event into two separate ones, with the properties of the original split between the two events.

Currently, the system uses the sub-optimal single-event model. This makes usage of the aggregate inconsistent, as it is forced to create the event late, instead of creating one event first and a second event with the additional data later. I'm thinking of refactoring this, but because we keep the history of all events, it would leave both the single-event model and the double-event model in the history, and both would need to be supported in aggregate rehydration and in projections. This would not reduce complexity, but merely move it from the point where the events are created to the point where they are consumed.

So I'm thinking about changing the event history, but as we use sequence numbers and version history, it is impossible to insert events into the past, between two existing events of an aggregate. The only option that comes to mind is to create a full copy of the event store and transform the events from the old format into the new one. But that feels like a really complex operation, as it requires two persistence stores running at the same time, and software that supports reading from both.
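One middle ground I've seen described is an "upcaster": a translation step in the event-deserialization pipeline that converts old event shapes into new ones at read time, so the stored history is never rewritten and rehydration and projections only ever see the new model. A minimal sketch, with all event names hypothetical (Python purely for illustration):

```python
from dataclasses import dataclass

# Hypothetical event shapes: the legacy combined event, and the two
# events it is being split into.
@dataclass
class LegacyItemAddedWithPrice:
    item_id: str
    price: int

@dataclass
class ItemAdded:
    item_id: str

@dataclass
class ItemPriced:
    item_id: str
    price: int

def upcast(event):
    """Translate a stored event into one or more current-model events.

    The event store itself is never modified; only this read-time
    translation knows the old shape ever existed."""
    if isinstance(event, LegacyItemAddedWithPrice):
        return [ItemAdded(event.item_id), ItemPriced(event.item_id, event.price)]
    return [event]

def load_stream(stored_events):
    # Flatten, since one stored event may expand into several.
    return [e for stored in stored_events for e in upcast(stored)]
```

One caveat: if anything downstream depends on sequence numbers, both upcast events occupy the position of the single stored event, so position bookkeeping must tolerate one stored event expanding into several.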

So I wonder: is there a simpler way of doing this kind of refactoring?

domain driven design – DDD – Object Life Cycles, Multiple Roots/Aggregates in a context, and Persistence

I have an application in which I’ve identified the need for an Authorization Context. There are some really basic invariants:

An action can only be performed by a user if they have the associated function
Functions can be grouped into roles for easier management.
Role names must be unique so they aren’t confusing to a user
Users can’t have duplicate functions
Users can’t be in a role more than once

Functions are a pre-defined, immutable list; each function basically consists of a name and a description.
I get the feeling that these are value objects. Something like:
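As a sketch of what I mean (Python purely for illustration; the fields are just the name and description mentioned above):

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # immutable: compared by value, safe to share
class Function:
    name: str
    description: str
```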


Role seems like it should be an entity, so it has an id, name, description, and list of functions
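Roughly like this (Python for illustration; the UUID id, and referencing functions by name rather than holding whole objects, are my assumptions):

```python
from dataclasses import dataclass, field
from uuid import UUID, uuid4

@dataclass(eq=False)  # an entity: identity, not attribute values, defines equality
class Role:
    name: str
    description: str
    id: UUID = field(default_factory=uuid4)
    function_names: set = field(default_factory=set)  # references, not whole Functions

    def add_function(self, function_name: str) -> None:
        # set semantics give the "no duplicate functions" invariant for free
        self.function_names.add(function_name)

    def __eq__(self, other):
        return isinstance(other, Role) and self.id == other.id
```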


And a User, which can have functions or roles. (I’ve finally come to terms with a user being able to have different representations in different contexts, which is why I only need an Id here)


First question: is this even designed correctly? While I feel like the functions could be value objects, the fact that they can be added to and removed from a role or user makes them feel like entities to me. Additionally, I feel like the user and role only need references to the other entities' IDs, not the entire objects, to satisfy the invariants.

Second problem: what is my aggregate? My first inclination was an AuthorizationAggregate holding the list of function value objects and a list of the currently defined roles. But that seems weird to me, because the AuthorizationAggregate wouldn't have an ID of its own. And what do I do with the user? I don't think holding a reference to every single user in the application when the aggregate is retrieved from the repository is a great idea, especially if all that's taking place is a role being created. However, wouldn't those users need to be updated if a role had a new function added (which is why I made the point earlier about only needing IDs)?

So that leads me to think: do I have an AuthorizationRolesAggregate for the creation/modification/deletion of roles, and for adding/removing functions from the roles?

Then do I have a separate AuthorizedUserAggregate for adding/removing roles and functions? I feel like this aggregate is within the same context, but the representations of the roles/functions may be different (just IDs, for example). Also, when adding a role to a user, should I be taking a Role as the parameter of the AddRole method (e.g. AddRole(Role role))? And should I be using a domain service here to pull the valid role from the AuthorizationRolesAggregate and pass it to the user's AddRole method?

Third problem: this is more technical than theoretical, but what is the proper way of creating an aggregate? A static Create method on the aggregate class? A Create method on the repository? What about deleting an entity or an aggregate? In the above case, if I used the AuthorizationRolesAggregate to delete a role, what's the best strategy for updating all of the affected users? Raise a RoleDeleted domain event and update the N affected AuthorizedUserAggregates to remove the associated role, becoming eventually consistent?
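To make that eventually-consistent option concrete, a minimal sketch (Python for illustration; the repository and its methods are hypothetical stand-ins, and role IDs are strings just to keep it short; a real handler would load each AuthorizedUserAggregate, call its remove-role behaviour, and save it):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RoleDeleted:  # domain event raised when a role is deleted
    role_id: str    # a UUID in a real system

class InMemoryUserRepository:
    """Stand-in for the real user repository: users are just user_id -> set of role IDs."""

    def __init__(self, users):
        self._users = users

    def find_ids_by_role(self, role_id):
        return [uid for uid, roles in self._users.items() if role_id in roles]

    def remove_role(self, user_id, role_id):
        self._users[user_id].discard(role_id)

def on_role_deleted(event, repository):
    # Each affected user is updated separately; consistency with the
    # deleted role is eventual, not part of the deleting transaction.
    for user_id in repository.find_ids_by_role(event.role_id):
        repository.remove_role(user_id, event.role_id)
```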

I think part of why I'm really struggling with this is that right now it's backed by a 3NF RDBMS, and I know that if I remove the role, the next time I load an affected user it will load properly without that associated role. However, that's incredibly lazy, and it doesn't address any currently loaded members. I also know that if I were to move to a document DB, or to event sourcing, that would not be the case.

I'm also fighting the idea that if I were to create a new role using the AuthorizationRolesAggregate, I would potentially have to check the entirety of the aggregate (all of the loaded roles and their functions) when I go to persist it. That again makes me think this isn't the right design.

Any pointers in the right direction would be very appreciated.

database design – How are the data lake and polyglot persistence different?


magento2 – Persisting data in the same form for editing and adding a new element in Magento 2

I got this structure for my form:

  • route: admin/partnerportal/partner/edit
  • ui_component: partnerportal_partner_edit.xml
  • layout: partnerportal_partner_edit.xml

I'm using the same form to edit existing items and to add new items, and it works great. However, when something goes wrong in validation (in the save controller), I want to return to the edit form with all the data/modifications I made, so I used the dataPersistor. There is a problem, though: for editing it works well, but when I create a new element it does not work; the fields come back empty.

I found the reason: the URL must be admin/partnerportal/partner/edit/entity_id/value (e.g. value = 5), but in reality it is admin/partnerportal/edit. How can I fix it?

vue.js – Problem persisting data to an API with an association, using Vue.js, Vuetify and Axios

I have a problem: when using the POST method with axios, I need to send the data in the following JSON format:

  "lastName":"Mendes de Jesus",

However, on save the gender is only stored on the front end and is not persisted to the database. My console log shows the following: in config: object the data is correct, with the gender association, but in config: data the association no longer appears, indicating that something is missing for the data to persist, and I'm going crazy with it. See the console image:


In the saved row, shown in JSON format, gender becomes null; here is the image of the data table with the gender field empty:



And my .vue file:

Could someone save me? Thanks in advance.

persistence – How to correctly translate UML association, aggregation and composition into a Hibernate mapping?

There are a number of questions about the differences between UML association, aggregation and composition, and many answers, some practical and others philosophical. Here I'm asking specifically about the practical differences!

In some answers, I found claims like:

  1. Reference-based languages like Java can't really implement composition, because the life cycle of an instance is controlled by garbage collection;
  2. Associations and aggregations have no practical difference, so we should just drop aggregations and work with associations and compositions; yet both relationship types exist;
  3. These three concepts only make sense in languages like C++, which have an instance-based (rather than reference-based) object model;
  4. Aggregation allows multiple owners, unlike composition; though sources differ on this.

However, no answer I have found so far has addressed these concepts in the context of persistent objects. No example has been given regarding persistence, even though it is a very common development scenario.

When an object is persisted in a database system, we have a life-cycle model free from garbage collection, because deletion of an instance (or table row, if you prefer) occurs in response to a deliberate act by the software, implementing a product requirement.

The difference between association and composition is indeed very clear: they produce different annotations in the code. A very noticeable difference is that with a composition, cascading deletion is enabled, so when the owner is deleted, its items are deleted as well. With an association, no cascading deletion is enabled.
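The real mapping would of course be Hibernate/JPA cascade settings on the owning side; as a language-neutral illustration of just the life-cycle semantics (deliberately not the Hibernate API, just a toy in-memory model in Python):

```python
class Store:
    """Toy store of owners and items, to contrast the two deletion semantics."""

    def __init__(self):
        self.owners = {}  # owner_id -> set of item IDs it is linked to
        self.items = {}   # item_id -> item data

    def delete_owner_composition(self, owner_id):
        # Composition: the owner controls the parts' life cycle,
        # so deleting the owner cascades to its items.
        for item_id in self.owners.pop(owner_id, set()):
            self.items.pop(item_id, None)

    def delete_owner_association(self, owner_id):
        # Association/aggregation: independent life cycles;
        # only the link disappears, the items survive.
        self.owners.pop(owner_id, None)
```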

However, what differences will we find when annotating an association versus an aggregation, particularly when in both cases the cardinality is greater than 1?

Domain-driven design – Updating and persisting aggregates

I am trying to understand the best possible solution in the following situation:

When updating part of an aggregate (it could be any part, the root or any other entity), how should these changes be persisted in the database layer?

There have been a lot of solutions on Stack Exchange advising something like using your ORM models as domain objects, so you can change an attribute on the aggregate and let the ORM layer diff it and flush the changes to the database; most of the examples reference Entity Framework, if I'm not mistaken.

My partial understanding of DDD is that you should not put persistence-layer logic in your domain models. The persistence logic belongs in your repository, allowing for a variety of persistence mechanisms (Postgres, MongoDB, S3, etc.). In addition, domain models strewn with persistence logic and/or raw SQL make it much more difficult to test my domain objects/aggregates.

I'm having a hard time figuring out an easy and simple solution (maybe there isn't one) for mapping these changes to my ORM layer; in my case I'm using Postgres. Unless the answer is a solid mapping between your ORM model and the domain model in your repository, it seems really difficult and verbose to do.

I understood from reading other solutions that there are a few other possibilities (each with its own drawbacks):

  1. Generate a ModelAttributeChanged event every time you change something in your domain model. You can either re-dispatch this event or store it somewhere on the domain model. When you persist this model in the repository, first query the ORM model and apply these changes to it before committing.

    changed_name_event = person_aggregate.set_name('Henk')
    repository.update(person_aggregate, changed_name_event)
  2. After changing something on the aggregate, explicitly call an update method on the repository to update the attribute as well. You have to update everything on both the aggregate and the repository, and you need to know in advance which attributes of your aggregate will change so you can call the correct method on your repository.

    repository.change_person_name(person_aggregate, 'Henk')

Ideally, I just want to be able to update my aggregate and save it via my repository. However, because I map my ORM model to an aggregate root (AR), I "lose" the mapping back to the ORM model. I could of course keep track of all the changes I make to the aggregate and, each time I call the repository, apply those changes to the ORM model and commit it to the database. My "problem" with this solution is that it needs a solid mapping, and tracking changes for nested entities is a difficult, complex and error-prone process.
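For concreteness, here is the kind of change tracking I mean, as a minimal sketch (Python for illustration; all names are hypothetical, and the "ORM" is faked as a dict of rows):

```python
class PersonAggregate:
    """Domain object that records its own attribute changes, persistence-free."""

    def __init__(self, person_id, name):
        self.id = person_id
        self.name = name
        self._changes = []  # (attribute, new_value) pairs since the last save

    def set_name(self, name):
        self.name = name
        self._changes.append(("name", name))

    def collect_changes(self):
        changes, self._changes = self._changes, []
        return changes

class PersonRepository:
    """Replays the recorded changes onto the ORM row before committing."""

    def __init__(self, rows):
        self._rows = rows  # fake ORM: person_id -> row dict

    def save(self, aggregate):
        row = self._rows[aggregate.id]
        for attribute, value in aggregate.collect_changes():
            row[attribute] = value
        # a real implementation would commit the ORM session here
```

This keeps the domain model persistence-free, but extending the change list to nested entities is exactly where it gets complex.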

If this is the evil necessary to completely isolate the domain logic, so be it, but it seems to me that a lot of machinery must be built to make this abstraction work.

android – Firebase persistence: priority of offline updates?

I have persistence enabled in my app, to keep the DB updated after the device goes offline:


I performed the following test:

  1. I disabled the device's mobile data and Wi-Fi.
  2. I modified a DB record from the Firebase console.
  3. I modified the same record on the device, with data different from what I had set in the DB.
  4. I enabled Wi-Fi and, on entering the application, the device's information was updated with the data from the Firebase DB.

So far everything is as expected, because persistence has worked.

But then I ran the same test, except that I first updated the information on the device and then the DB. I assumed that this time the DB's information would be updated with the device's, but that is not the case: the device was again updated with the DB's modification. I had assumed Firebase would use a timestamp, or something similar, to determine which data is the most recent and therefore which side should be updated.

So what criteria does Firebase follow to know which data is the most recent?

logging – Database vs. log file – evaluating a persistence approach

I have a Node.js application in which I need to run background jobs. My plan is to use a data structure that holds all the pending work. My design also requires that this data structure persist across application restarts. To achieve this, I can think of two options for storing the data structure:

a) use a database

b) use a file

Solutions like external queues would probably be overkill for my application (the data structure will hold 5 to 10 tasks at any time).

Currently, I am inclined to use a log file, which I think will be faster; besides, I do not need to query this information (so there is no need for a database).

How would you evaluate this approach? Is using a log file for persisting the queue good practice? What other advantages/disadvantages are there? Is there an alternative worth considering in my scenario?