e-commerce – Data-driven design vs natural behavior

I've seen this before, and I personally hate it. In my own experience, I tend to just abandon the journey or leave the page. But that may be just me.

However, whatever the reasons, you are tricking users into doing something they did not ask for. Essentially, you are violating one of Shneiderman's Eight Golden Rules of Interface Design:

7. Support internal locus of control.

Experienced operators strongly desire the sense that they are in charge of the system and that the system responds to their actions. Design the system to make users the initiators of actions rather than the responders.

(I would say you also break rules 1, 4 and 6 to some extent.)

Also keep in mind that your assumption has a flaw: your users see 12 products because the flow you currently have is designed for that. Change the flow and that behavior will change too.

In short, I think you should offer those 12 (or 15, or 20, or whatever) results while the user is still searching for something. Once the user clicks on a product, they are expressing their will, their INTENT, to see that product, and not whatever you think they might want. Then, and only then, can you suggest other options using different strategies (e.g. related by location, similar features, product comparison, etc.).

Of course, it's always a good idea to test things like this. Who knows, you may find a better way. But if you want to follow this path, I advise you to do extensive usability testing on a prototype (and forget "5 users is enough", because here it is not; test with as many users as you can), and maybe run a private beta. Or, if you're feeling adventurous, run a multivariate test on your live site with both versions, but that is pretty risky and personally I wouldn't do it.

magento2.2 – Change the default behavior of clicking on any element in Pages, Blocks, Products, etc. in Admin

I would like to change the default behavior that occurs when you click on any item in any page that shows a list. For example, when you go to Content > Pages, Magento renders a list of all of your pages.


When you click on any of the items in the list, it opens an inline editable form where you can edit the page's fields directly, instead of opening the edit page.


It does this for the block and product lists as well. I really don't like this, because I think it slows down development and opens the door to accidental edits of the page title and identifier by less experienced users. Is there a way to change this back to the Magento 1 behavior, where clicking a page immediately took you to that item's edit page?

Thank you

Which user actions deserve to be logged, to get a better perspective on their behavior?

If you are asking yourself this question then, to be honest, I suggest you do not make this kind of decision up front, but instead build in an adaptive logging solution that lets you control from the server side what gets logged (and when).

Since you do not yet know what you want to record, you need to record everything in order to find out what you want to record. Keep logging, and above all learn what you do not want to keep logging.
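To make the "adaptive logging" idea concrete, here is a minimal sketch (all class and method names are hypothetical, not from any particular library): the set of event types actually recorded lives in a mutable set that a server-side config endpoint could update at runtime.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical adaptive logger: which event types get recorded is
// controlled by a runtime-mutable set, not hard-coded decisions.
public class AdaptiveLogger {
    // Updated at runtime, e.g. from a server-side config endpoint (not shown).
    private final Set<String> enabledEvents = ConcurrentHashMap.newKeySet();

    public void enable(String eventType)  { enabledEvents.add(eventType); }
    public void disable(String eventType) { enabledEvents.remove(eventType); }

    // Returns true if the event was recorded. Here the default is to drop
    // unknown events; flipping that default (record everything until told
    // otherwise) is a one-line change.
    public boolean log(String eventType, String payload) {
        if (!enabledEvents.contains(eventType)) {
            return false; // dropped: the server has not asked for this event
        }
        System.out.println(eventType + ": " + payload);
        return true;
    }
}
```

The point is only that the "what to log" decision is deferred and reversible, rather than baked into the client.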

Otherwise, you run the risk of not recording something you do not realize could be interesting. It's very easy to form an idea of what might be interesting and record just that, but in reality that will only tell you something you already thought you knew, when the fact is that you do not know what you do not know!

I do not want to be harsh, but if you cannot answer the question with your own knowledge of the domain, it seems to me that you certainly cannot expect someone who knows nothing about your system to answer it for you.

That said, in the same way that white space is an important part of content and pauses are an important part of speech, you may find it useful to get a sense of the periods of inactivity between actions. Obviously, you cannot attribute a single user's inactivity to a particular problem; I mean once you have enough data to form a trend or a pattern.

reference request – Behavior of the principal eigenfunction of the fractional Laplacian

How does the first eigenfunction $\phi_{1}$ behave near the boundary of $\Omega$, where $$(-\Delta)^{s} \phi_{1} = \lambda_{1} \phi_{1} \ \text{ in } \Omega; \quad \phi_{1} = 0 \ \text{ in } \Omega^{c},$$ on a smooth bounded domain $\Omega \subset \mathbb{R}^{N}$ with $N > 2s$?

I believe that the behavior should be
$$c_{1}\, d(x, \partial\Omega)^{s} \leq \phi_{1}(x) \leq c_{2}\, d(x, \partial\Omega)^{s}$$ for some constants $c_{1} > 0$ and $c_{2} > 0$. Is this true?

User behavior – The psychology of "none of the above"

I am currently working on a design for a formal registration service that asks users to categorize themselves from a closed list of options. The list is not exhaustive, and some users will not see an option that fits them. However, user testing shows some reluctance to use the "None of the above" option.
I wonder whether the negative framing of the phrase "none of the above" has been explored anywhere? The time spent hesitating at this step suggests that users are not inclined to leave the question unresolved. Can anyone point me in a useful direction? Thank you!

postgresql – Understanding composite BTREE + GIN_TRGM_OPS index behavior and odd lower() behavior

Hoping someone can help me decipher some index behavior. I am working on allowing simple contains-type searches over various columns of user data (varchar ≤ 255) and trying to understand the index behavior, as well as whether there is a better approach (full-text search, maybe? We will need broader search capabilities at some point, but moving to that is prohibitively time-consuming for our application right now).

Anyhoo, in my case we mostly look up the users in this table starting with the category/type tuple (due to single-table inheritance with Rails).

Sample table and indexes, using Postgres 11:

CREATE TABLE people (
    id SERIAL,
    email character varying(255) not null,
    first_name character varying(255) not null,
    last_name character varying(255) not null,
    user_category integer not null,
    user_type character varying(255) not null
);

-- Dummy Data
INSERT INTO people (email, first_name, last_name, user_category, user_type)
SELECT
  concat(md5(random()::text), '@not-real-email-plz.com'),
  md5(random()::text),
  md5(random()::text),
  ceil(random() * 3),
  ('{Grunt,Peon,Ogre}'::text[])[ceil(random() * 3)]
FROM
  (SELECT * FROM generate_series(1, 1000000) AS id) AS x;

-- Standard, existing lookup
CREATE INDEX index_people_category_type ON people USING btree (user_category, user_type);

-- taken from https://niallburkley.com/blog/index-columns-for-like-in-postgres/
-- (note: these composite GIN indexes need the pg_trgm extension for
-- gin_trgm_ops, and btree_gin for the plain scalar columns)
CREATE INDEX idx_people_gin_user_category_and_user_type_and_full_name 
ON people
USING GIN(user_category, user_type, (first_name || ' ' || last_name) gin_trgm_ops);    

-- first name
CREATE INDEX idx_people_gin_user_category_and_user_type_and_first_name 
ON people
USING GIN(user_category, user_type, first_name gin_trgm_ops);

-- last name
CREATE INDEX idx_people_gin_user_category_and_user_type_and_last_name 
ON people
USING GIN(user_category, user_type, last_name gin_trgm_ops);

-- email
CREATE INDEX idx_people_gin_user_category_and_user_type_and_email 
ON people
USING GIN(user_category, user_type, email gin_trgm_ops);

-- non-composite email (had for testing and raised more questions)
CREATE INDEX idx_people_gin_email 
ON people
USING GIN(email gin_trgm_ops);

I have read that column order does not matter in GIN indexes. So I guess my first question is whether it is possible to create a single index covering multiple columns that serves any combination of them. My instinct says no, since the index sizes vary considerably, but I was not sure what the implications of column order were.

Anyway, on to what I have observed!

One of the first things I noticed is that the GIN index seems to immediately take over from the first b-tree index when simply searching by category and type.

EXPLAIN ANALYZE VERBOSE

SELECT DISTINCT id
FROM people
WHERE user_category = 2
  AND (user_type != 'Ogre')

Results:

Unique  (cost=52220.05..53334.71 rows=222932 width=4) (actual time=251.070..339.769 rows=222408 loops=1)
  Output: id
  ->  Sort  (cost=52220.05..52777.38 rows=222932 width=4) (actual time=251.069..285.652 rows=222408 loops=1)
        Output: id
        Sort Key: people.id
        Sort Method: external merge  Disk: 3064kB
        ->  Bitmap Heap Scan on public.people  (cost=3070.23..29368.23 rows=222932 width=4) (actual time=35.156..198.549 rows=222408 loops=1)
              Output: id
              Recheck Cond: (people.user_category = 2)
              Filter: ((people.user_type)::text <> 'Ogre'::text)
              Rows Removed by Filter: 111278
              Heap Blocks: exact=21277
              ->  Bitmap Index Scan on idx_people_gin_user_category_and_user_type_and_email  (cost=0.00..3014.50 rows=334733 width=0) (actual time=32.017..32.017 rows=333686 loops=1)
                    Index Cond: (people.user_category = 2)
Planning Time: 0.293 ms
Execution Time: 359.247 ms

Is the original b-tree totally redundant at this point? I expected the planner to still choose it when only these two columns were used, if the b-tree were faster for these data types, but it seems that is not the case.

Then I noticed that our existing queries depend on lower() and seem to completely ignore the GIN trigram matching, or rather they use the last index that was created, even though that column is not in the query:

EXPLAIN ANALYZE VERBOSE

SELECT DISTINCT id
FROM people
WHERE user_category = 2
  AND (user_type != 'Ogre')
  AND (LOWER(last_name) LIKE LOWER('%a62%'))

Results (filtering on last_name but using the email index):

HashAggregate  (cost=28997.16..29086.33 rows=8917 width=4) (actual time=175.204..175.554 rows=1677 loops=1)
  Output: id
  Group Key: people.id
  ->  Gather  (cost=4016.73..28974.87 rows=8917 width=4) (actual time=39.947..181.936 rows=1677 loops=1)
        Output: id
        Workers Planned: 2
        Workers Launched: 2
        ->  Parallel Bitmap Heap Scan on public.people  (cost=3016.73..27083.17 rows=3715 width=4) (actual time=22.037..156.233 rows=559 loops=3)
              Output: id
              Recheck Cond: (people.user_category = 2)
              Filter: (((people.user_type)::text <> 'Ogre'::text) AND (lower((people.last_name)::text) ~~ '%a62%'::text))
              Rows Removed by Filter: 110670
              Heap Blocks: exact=7011
              Worker 0: actual time=13.573..147.844 rows=527 loops=1
              Worker 1: actual time=13.138..147.867 rows=584 loops=1
              ->  Bitmap Index Scan on idx_people_gin_user_category_and_user_type_and_email  (cost=0.00..3014.50 rows=334733 width=0) (actual time=35.445..35.445 rows=333686 loops=1)
                    Index Cond: (people.user_category = 2)
Planning Time: 7.546 ms
Execution Time: 189.186 ms

Whereas switching to ILIKE

EXPLAIN ANALYZE VERBOSE

SELECT DISTINCT id
FROM people
WHERE user_category = 2
  AND (user_type != 'Ogre')
  AND (last_name ILIKE '%A62%')

gives results that are much faster and use the expected index. What is it about the lower() call that seems to trip up the planner?

Unique  (cost=161.51..161.62 rows=22 width=4) (actual time=27.144..27.570 rows=1677 loops=1)
  Output: id
  ->  Sort  (cost=161.51..161.56 rows=22 width=4) (actual time=27.137..27.256 rows=1677 loops=1)
        Output: id
        Sort Key: people.id
        Sort Method: quicksort  Memory: 127kB
        ->  Bitmap Heap Scan on public.people  (cost=32.34..161.02 rows=22 width=4) (actual time=16.470..26.798 rows=1677 loops=1)
              Output: id
              Recheck Cond: ((people.user_category = 2) AND ((people.last_name)::text ~~* '%A62%'::text))
              Filter: ((people.user_type)::text <> 'Ogre'::text)
              Rows Removed by Filter: 766
              Heap Blocks: exact=2291
              ->  Bitmap Index Scan on idx_people_gin_user_category_and_user_type_and_last_name  (cost=0.00..32.33 rows=33 width=0) (actual time=16.058..16.058 rows=2443 loops=1)
                    Index Cond: ((people.user_category = 2) AND ((people.last_name)::text ~~* '%A62%'::text))
Planning Time: 10.577 ms
Execution Time: 27.746 ms
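My working guess, which I have not fully verified: the trigram index is built on last_name itself, so a predicate on lower(last_name) is a different expression and cannot match it, whereas ILIKE matches the raw column. Presumably an expression index on the lowercased value would serve the LOWER() queries, something like:

```sql
-- Hypothetical expression index so that LOWER(last_name) LIKE '...' can
-- match it (requires pg_trgm; the composite form also needs btree_gin)
CREATE INDEX idx_people_gin_lower_last_name
ON people
USING GIN (user_category, user_type, lower(last_name) gin_trgm_ops);
```

I have not benchmarked this variant, though.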

Then, throwing another field into the mix …

EXPLAIN ANALYZE VERBOSE

SELECT DISTINCT id
FROM people
WHERE user_category = 2
  AND (user_type != 'Ogre')
  AND (last_name ILIKE '%A62%')
  AND (first_name ILIKE '%EAD%')

It is still pretty fast all the way through:

Unique  (cost=161.11..161.11 rows=1 width=4) (actual time=10.854..10.860 rows=12 loops=1)
  Output: id
  ->  Sort  (cost=161.11..161.11 rows=1 width=4) (actual time=10.853..10.854 rows=12 loops=1)
        Output: id
        Sort Key: people.id
        Sort Method: quicksort  Memory: 25kB
        ->  Bitmap Heap Scan on public.people  (cost=32.33..161.10 rows=1 width=4) (actual time=3.895..10.831 rows=12 loops=1)
              Output: id
              Recheck Cond: ((people.user_category = 2) AND ((people.last_name)::text ~~* '%A62%'::text))
              Filter: (((people.user_type)::text <> 'Ogre'::text) AND ((people.first_name)::text ~~* '%EAD%'::text))
              Rows Removed by Filter: 2431
              Heap Blocks: exact=2291
              ->  Bitmap Index Scan on idx_people_gin_user_category_and_user_type_and_last_name  (cost=0.00..32.33 rows=33 width=0) (actual time=3.173..3.173 rows=2443 loops=1)
                    Index Cond: ((people.user_category = 2) AND ((people.last_name)::text ~~* '%A62%'::text))
Planning Time: 0.257 ms
Execution Time: 10.897 ms

But back to that extra, non-composite index created at the bottom: filtering by email as well seems to pull another index into the picture?

EXPLAIN ANALYZE VERBOSE

SELECT DISTINCT id
FROM people
WHERE user_category = 2
  AND (user_type != 'Ogre')
  AND (last_name ILIKE '%A62%')
  AND (email ILIKE '%0F9%')

It takes a different path:

Unique  (cost=140.37..140.38 rows=1 width=4) (actual time=4.180..4.184 rows=7 loops=1)
  Output: id
  ->  Sort  (cost=140.37..140.38 rows=1 width=4) (actual time=4.180..4.180 rows=7 loops=1)
        Output: id
        Sort Key: people.id
        Sort Method: quicksort  Memory: 25kB
        ->  Bitmap Heap Scan on public.people  (cost=136.34..140.36 rows=1 width=4) (actual time=4.145..4.174 rows=7 loops=1)
              Output: id
              Recheck Cond: ((people.user_category = 2) AND ((people.last_name)::text ~~* '%A62%'::text) AND ((people.email)::text ~~* '%0F9%'::text))
              Filter: ((people.user_type)::text <> 'Ogre'::text)
              Rows Removed by Filter: 4
              Heap Blocks: exact=11
              ->  BitmapAnd  (cost=136.34..136.34 rows=1 width=0) (actual time=4.125..4.125 rows=0 loops=1)
                    ->  Bitmap Index Scan on idx_people_gin_user_category_and_user_type_and_last_name  (cost=0.00..32.33 rows=33 width=0) (actual time=3.089..3.089 rows=2443 loops=1)
                          Index Cond: ((people.user_category = 2) AND ((people.last_name)::text ~~* '%A62%'::text))
                    ->  Bitmap Index Scan on idx_people_gin_email  (cost=0.00..103.76 rows=10101 width=0) (actual time=0.879..0.879 rows=7138 loops=1)
                          Index Cond: ((people.email)::text ~~* '%0F9%'::text)
Planning Time: 0.291 ms
Execution Time: 4.217 ms

The cost seems negligible, but I wonder what this means for a fairly dynamic set of filterable columns? Would it be ideal to also create a non-composite index for every field?

Sorry for the length; I have been spinning my wheels for a while trying to figure this all out, but any insight would be great (there does not seem to be a ton written about composite GIN indexes like this, although maybe I am missing something more fundamental).

User behavior – Is the Gmail undo-send pattern sufficient for bulk mailing? What are the best alternatives?

We are building an email client that sends email campaigns to thousands of users (Mailchimp would be a comparable product). There are various ideas and opinions, one of which is to add an undo option to the send action.

The owner of that approach is convinced it would increase users' trust; I am defending the opposite idea of adding an extra step to review the mail content and recipients and ask for final approval.

Although my solution adds extra friction to the flow, it also adds an increased level of confidence in users' actions.

Now, all of the above are assumptions that we will test with users, but I would like to learn more about the subject, especially personal opinions.

I use Gmail every day and I personally find that the undo feature adds more stress to my life. Once or twice it has let me undo an e-mail and change some details, but if I had not been able to, it would not have been a disaster either.

On the other hand, I think that when a user sends a bulk mail, they need to better understand what they are about to do.

What are your thoughts? Are there other alternatives?

PS: I've already read this question about the same feature; while I understand the assumptions, I'd like to hear more about it.

User behavior – What should be implemented first in a redesign? The application's colors or its layout?

If you were planning to redesign an entire mobile application, including the core experience, the layout and even the main brand colors, which approach would you take?

  1. Introduce the new brand colors into the existing design/layout first.
  2. Implement the new layout/experience, test it, and then introduce the colors.

The main reason for splitting the implementation is to give the user an impression of continuity between the existing screens and the new screens, so that users do not feel lost when they have to navigate a brand-new application (with new brand colors and a modified core experience) all at once.

In C#, is there a way to enforce behavior coupling in interface methods? Is this question a design smell?

You are asking too much of interfaces.

Interfaces are contracts. They declare which set of methods a given class implements, and they guarantee that those methods will be there if someone calls them.

That said, that is also the only thing interfaces do.

They are completely unaware of what the methods do. Implementations are free to do whatever they want.

A given class can implement isEnabled to always return true. Another may tie Disable to a database call and refuse to disable anything if something specific happens.

You cannot control that. It is not your interface's job.


How can I enforce this behavior, then?

Use tests.

Create a suite of unit tests that can accept any object of the type in question and test the behavior.

If the tests pass, you are good to go.
If they fail, something is wrong and you should check your code.

That said, you have no elegant way of forcing this on a third party if you are developing an API, and you should not try to. That is not what interfaces are for.
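To make the "use tests" advice concrete, here is a minimal sketch (in Java rather than C#, but the idea is identical, and all names here are made up): a reusable contract check that any implementation can be run through.

```java
// Hypothetical interface from this kind of question: the behavioral rule
// "after disable(), isEnabled() must return false" cannot be expressed
// by the interface itself.
interface Switchable {
    boolean isEnabled();
    void disable();
}

public class SwitchableContract {
    // Reusable contract check: accepts ANY implementation and verifies
    // the behavior the interface cannot enforce.
    public static boolean obeysDisableContract(Switchable s) {
        s.disable();
        return !s.isEnabled();
    }
}

// A well-behaved implementation...
class Lamp implements Switchable {
    private boolean enabled = true;
    public boolean isEnabled() { return enabled; }
    public void disable() { enabled = false; }
}

// ...and one that silently ignores disable(). It compiles fine:
// the interface is satisfied; only the contract test catches it.
class StubbornLamp implements Switchable {
    public boolean isEnabled() { return true; }
    public void disable() { /* refuses */ }
}
```

Running every implementation through the same contract check is the closest you get to enforcing behavior across implementations you control.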

Programming Practices – Separating Data and Behavior in Java

The inclination I had was that changing an animal's behavior and changing its properties would be two distinct reasons to change, so I thought I needed to use encapsulation to separate them.

Things that can change do not by themselves constitute reasons to change. As a reductio ad absurdum: a change to the naming convention would be a reason to modify virtually all the code. Do you want to isolate your code from its names? No.

Instead, your reasons to change are always external to the system. If you do not know them, you will not find them in the code. Instead, you will find them in what the code is trying to do.

For example, if you are modeling reality and implementing actions that an animal can perform in real life, one reason to change might be discovering that animals can do something you did not know about.

See also my answer to the question on whether having more than one method violates the Single Responsibility Principle.


The problem I see is that now, instead of just accessing the variables themselves, the Animal has to call its AnimalInfo class to retrieve the information, as if it were no longer the owner of its own information.

Not necessarily a problem. Moving part or all of the state into a separate type is a valid refactoring. However, it does not buy you anything. From an SRP perspective in particular, consider that a change to AnimalInfo implies a change to Animal. I think that in your case, separating out AnimalInfo is useless, even counterproductive.


Instead, I think your problem is with getters and setters. You are basically writing a struct (except that it's Java). Are all possible combinations of fields valid? You should check that! You should not allow the object to be put into an invalid state! That is what encapsulation is for. For example, can the animal hunt while sleeping?

You will probably want to implement this as a state machine. You could have an enum type AnimalState that holds the possible states of the animal (hunting, sleeping, etc.). Then Animal has a getter for the state, and methods that change the state (at least a setter for the state).

Done correctly, you should be able to change the list of states for the Animal without changing the Animal class. That is the separation of data and behavior, without breaking encapsulation.

In fact, done properly, AnimalState could become a class, with each possible state being an instance that has a name. This would allow you to load the list of states from a configuration file, a database, user input, and so on.

Another benefit of AnimalState being a class is that you can create derived types. For example, you could have a type that holds a FoodSupply and use it for the eating state. Although I am not sure that is the direction you want to take this in.

※: There might be rules governing transitions from one state to another, so a bool TrySetState(AnimalState newState) could be useful. In addition, depending on your needs, you may find bool TransitionState(AnimalState expectedState, AnimalState newState) or similar useful.
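A minimal sketch of the enum-based state machine described above (the states and the transition rules are just illustrations, not something from the question):

```java
import java.util.EnumSet;
import java.util.Set;

enum AnimalState { SLEEPING, HUNTING, EATING }

public class Animal {
    private AnimalState state = AnimalState.SLEEPING;

    public AnimalState getState() { return state; }

    // Allowed transitions live in one place; e.g. in this made-up rule
    // set an animal cannot go straight from SLEEPING to EATING.
    private static Set<AnimalState> allowedFrom(AnimalState from) {
        switch (from) {
            case SLEEPING: return EnumSet.of(AnimalState.HUNTING);
            case HUNTING:  return EnumSet.of(AnimalState.EATING, AnimalState.SLEEPING);
            case EATING:   return EnumSet.of(AnimalState.SLEEPING);
            default:       return EnumSet.noneOf(AnimalState.class);
        }
    }

    // The TrySetState idea: reject invalid transitions instead of
    // letting the object reach an invalid state.
    public boolean trySetState(AnimalState newState) {
        if (!allowedFrom(state).contains(newState)) {
            return false;
        }
        state = newState;
        return true;
    }
}
```

Changing the list of states or the transition rules touches only the enum and allowedFrom, not the callers of Animal.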


In the end, it depends on the requirements of the system. It is useful to think about how they might change, and to make the code easy to change in the future. For example, in this case we know there is a list of things the animal can do, and we can imagine that the list may change (that is, a change of requirements), so it makes sense to write code that makes this change easy (e.g. by using an enum type). Likewise, the requirements could change to say that the states come from a database.

Note that you do not have to write database adapters just because the client might ask for them. Separating out AnimalState is enough, because it allows you to change AnimalState without changing Animal.

I could be totally off the mark. Maybe animals can hunt while sleeping in this system. If in doubt, ask the customer. The requirements are paramount. You need to understand where the requirements come from, and why they might change; that will dictate how you separate your code.


I want to mention one more possible design. Instead of (or in addition to) separating out the state, you can separate mutation of the state. In other words, you can make your class immutable, and then use a method that returns a new instance of Animal with the state changed (or the same instance if no change was necessary).
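A minimal sketch of that immutable variant (all names hypothetical; the state is a plain string here just to keep it short):

```java
// Immutable alternative: instead of mutating the state, return a new
// instance carrying the new state (or this same instance if unchanged).
public final class ImmutableAnimal {
    private final String name;
    private final String state;

    public ImmutableAnimal(String name, String state) {
        this.name = name;
        this.state = state;
    }

    public String getState() { return state; }

    public ImmutableAnimal withState(String newState) {
        if (state.equals(newState)) {
            return this; // no change was necessary: reuse this instance
        }
        return new ImmutableAnimal(name, newState);
    }
}
```

Invalid-transition checks like the state-machine version's could be added inside withState, with the same benefit: callers can never observe a half-updated Animal.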