lo.logic – can you then define every finite transitive pointed model? (and why?)

We say a modal formula $\varphi$ defines a pointed transitive model $\mathcal{M},s$ if for all pointed transitive models $\mathcal{N},t$: $\mathcal{N},t \models \varphi \Longleftrightarrow \mathcal{M},s \mathop{\leftrightarrow} \mathcal{N},t$. Now suppose $|\mathbf{P}|$ is finite; can you then define every finite transitive pointed model? (And why?)

Maya 3d Model. 2+ Materials on a face? Snooker Table felt with lines overlay

I’ve made a snooker table in Maya for use in a game in Unity.
The felt texture on the table is a tiled texture; I used a Blinn shader and a ‘place2dTexture’ node to do the tiling.

It all looked great, except that I had hoped to overlay another texture on top of the felt: the straight line and semi-circle drawn across the table.

I couldn’t use the same texture with UV mapping, because the felt texture is tiled and the lines texture is not (i.e. it is intended to fill the whole plane at 1:1).

Is there a solution to this? Basically I need a way to overlay a non-tiled, partly transparent texture on top of a tiled one.
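One idea I have been considering is a layeredTexture node: put the non-tiled lines texture (masked by its own alpha) in the top layer, the tiled felt below it, and feed the composite into the Blinn’s color. Would something like this rough maya.cmds sketch be the right direction? (feltFile, linesFile and blinn1 are placeholder node names.)

import maya.cmds as cmds

# Composite the line markings over the tiled felt before the shader.
layered = cmds.shadingNode('layeredTexture', asTexture=True)

# Top layer (inputs[0]): the non-tiled lines texture, masked by its own alpha.
cmds.connectAttr('linesFile.outColor', layered + '.inputs[0].color')
cmds.connectAttr('linesFile.outAlpha', layered + '.inputs[0].alpha')

# Bottom layer (inputs[1]): the tiled felt, keeping its place2dTexture tiling.
cmds.connectAttr('feltFile.outColor', layered + '.inputs[1].color')

# Drive the material with the composite instead of the felt directly.
cmds.connectAttr(layered + '.outColor', 'blinn1.color', force=True)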

Thanks for any help.

c# – How are changes propagated from the ViewModel to the Model and how often in MVVM?

I am learning the MVVM paradigm and I have seen a few different implementations surrounding the Model and how it is updated which I want to understand.

My understanding of the View and ViewModel interaction is clear (I think) and is as follows:

  • Properties in a View are bound to properties in a ViewModel
  • ViewModels do not know about a View but a View does know about a ViewModel
  • Changes in a View set property values in the ViewModel through bindings
  • Changes to a ViewModel property are received by Views as they are bound to properties in the ViewModel*

Typically I have seen INotifyPropertyChanged being implemented in the ViewModel to notify Views of changes.

The question I have is how changes are propagated to the model.

Example

Let’s say that I am making an application to control an audio player (“AudioApp”) and I am focusing just on AudioApp’s volume for simplicity.

I would have (pseudocode):

// ApplicationModel.cs
public class ApplicationModel
{
    public float Volume { get; set; }
}

And

// ApplicationViewModel.cs
public class ApplicationViewModel : INotifyPropertyChanged
{
    private float volume;

    public float Volume
    {
        get => volume;

        set
        {
            // Let's pretend the slider goes 0 -> 1 and this needs 0 -> 100
            if (volume == value * 100) return;

            volume = value * 100;
            RaisePropertyChanged(nameof(Volume));
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void RaisePropertyChanged(string name) =>
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(name));
}

And

<!-- VolumeView.xaml -->
<Rectangle Fill="Blue" HorizontalAlignment="Left" Margin="50,20,0,0" Height="{Binding Path=Volume, Mode=TwoWay}"/>

Question Part 1.
If a View sets a property directly on the ViewModel, and the ViewModel updates a View via bound properties and INotifyPropertyChanged, how do property changes ever reach the Model?

This tutorial actually implements INotifyPropertyChanged in the Model and uses an ObservableCollection in the ViewModel, but I thought the idea was to set property values in the ViewModel, not in the Model directly. Additionally, this doesn’t work for my example, where there are no collections of instances of the model.

Similarly, this answer recommends the following flow (sketched in code after the list):

 1. Viewmodel is created and wraps model
 2. Viewmodel subscribes to model's `PropertyChanged` event
 3. Viewmodel is set as view's `DataContext`, properties are bound etc
 4. View triggers action on viewmodel
 5. Viewmodel calls method on model
 6. Model updates itself
 7. Viewmodel handles model's `PropertyChanged` and raises its own `PropertyChanged` in response
 8. View reflects the changes in its bindings, closing the feedback loop
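
For concreteness, here is how I picture steps 1–7 in code. This is only a sketch of my understanding; AudioAppModel, its SetVolume method, and the handler names are hypothetical.

using System.ComponentModel;

public class AudioAppViewModel : INotifyPropertyChanged
{
    private readonly AudioAppModel model;             // hypothetical model type

    public AudioAppViewModel(AudioAppModel model)     // step 1: wrap the model
    {
        this.model = model;
        this.model.PropertyChanged += OnModelChanged; // step 2: subscribe
    }

    // Steps 4-5: the view's binding sets this, and the setter delegates to the model.
    public float Volume
    {
        get => model.Volume / 100f;                   // model stores 0-100, view uses 0-1
        set => model.SetVolume(value * 100f);
    }

    // Step 7: relay the model's notification as the viewmodel's own.
    private void OnModelChanged(object sender, PropertyChangedEventArgs e)
    {
        if (e.PropertyName == nameof(model.Volume))
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Volume)));
    }

    public event PropertyChangedEventHandler PropertyChanged;
}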

but other answers say:

typically this is only needed if more than one object will be making changes to the Model’s data, which is not usually the case.

Question Part 2. That quote does not make sense to me. Even if only one object is making changes to the Model’s data, how does the object change the data (and how does that object receive its update)?

Question Part 3. (Most important.) The reason this matters is that my app ultimately needs to update AudioApp’s volume through a network request. Am I correct that the network communication would be considered “business logic” and therefore be handled by the model when its volume property changes (assuming I can figure out the canonical way to do that)?

*Bonus: Does this cause a feedback loop where changing a slider (View) from .5 -> .6 -> .7 would change the value in the ViewModel from .5 -> .6 which would then update the View (which is now at .7) back to .6?

python – A simple attention based text prediction model from scratch using pytorch

I have created a simple self-attention based text prediction model using PyTorch. The attention formula used for the attention layer is:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^\top}{\sqrt{d_k}}\right)V$$

I want to validate whether the whole code is implemented correctly, particularly my custom implementation of Attention layer.

The whole code

import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F

import random
random.seed(0)
torch.manual_seed(0)

# Sample text for Training
test_sentence = """Thomas Edison. The famed American inventor rose to prominence in the late
19th century because of his successes, yes, but even he felt that these successes
were the result of his many failures. He did not succeed in his work on one of his
most famous inventions, the lightbulb, on his first try nor even on his hundred and
first try. In fact, it took him more than 1,000 attempts to make the first incandescent
bulb but, along the way, he learned quite a deal. As he himself said,
"I did not fail a thousand times but instead succeeded in finding a thousand ways it would not work." 
Thus Edison demonstrated both in thought and action how instructive mistakes can be. 
""".lower().split()

# Build a list of tuples.  Each tuple is (( word_i-2, word_i-1 ), target word)
trigrams = [((test_sentence[i], test_sentence[i + 1]), test_sentence[i + 2])
            for i in range(len(test_sentence) - 2)]

# print the first 3, just so you can see what they look like
print(trigrams[:3])

vocab = list(set(test_sentence))
word_to_ix2 = {word: i for i, word in enumerate(vocab)}

# Number of Epochs
EPOCHS = 25

# SEQ_SIZE is the number of words we are using as a context for the next word we want to predict
SEQ_SIZE = 2

# Embedding dimension is the size of the embedding vector
EMBEDDING_DIM = 10

# Size of the hidden layer
HIDDEN_DIM = 256

class Attention(nn.Module):
    """
    A custom self attention layer
    """
    def __init__(self, in_feat,out_feat):
        super().__init__()             
        self.Q = nn.Linear(in_feat,out_feat) # Query
        self.K = nn.Linear(in_feat,out_feat) # Key
        self.V = nn.Linear(in_feat,out_feat) # Value
        self.softmax = nn.Softmax(dim=1)

    def forward(self,x):
        Q = self.Q(x)
        K = self.K(x)
        V = self.V(x)
        d = K.shape[-1] # dimension of the key vectors
        QK_d = (Q @ K.T)/(d)**0.5
        prob = self.softmax(QK_d)
        attention = prob @ V
        return attention

class Model(nn.Module):
    def __init__(self,vocab_size,embed_size,seq_size,hidden):
        super().__init__()
        self.embed = nn.Embedding(vocab_size,embed_size)
        self.attention = Attention(embed_size,hidden)
        self.fc1 = nn.Linear(hidden*seq_size,vocab_size) # converting n rows to 1
        self.softmax = nn.Softmax(dim=1)

    def forward(self,x):
        x = self.embed(x)
        x = self.attention(x).view(1,-1)
        x = self.fc1(x)
        log_probs = F.log_softmax(x,dim=1)
        return log_probs

learning_rate = 0.001
loss_function = nn.NLLLoss()  # negative log likelihood

model = Model(len(vocab),EMBEDDING_DIM,SEQ_SIZE,HIDDEN_DIM)
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)

# Training
for i in range(EPOCHS):
    total_loss = 0
    for context, target in trigrams:
        # e.g. context = ('thomas', 'edison.'), target = 'the'
        
        # step 1: context id generation
        context_idxs = torch.tensor([word_to_ix2[w] for w in context], dtype=torch.long)

        # step 2: setting zero gradient for models
        model.zero_grad()

        # step 3: forward propagation for calculating log probs
        log_probs = model(context_idxs)

        # step 4: calculating loss
        loss = loss_function(log_probs, torch.tensor([word_to_ix2[target]], dtype=torch.long))

        # step 5: finding the gradients
        loss.backward()

        #step 6: updating the weights
        optimizer.step()

        total_loss += loss.item()
    if i%2==0:
        print("Epoch: ",str(i)," Loss: ",str(total_loss))

# Prediction
with torch.no_grad():
    # Fetching a random context and target 
    rand_val = trigrams[random.randrange(len(trigrams))]
    print(rand_val)
    context = rand_val[0]
    target = rand_val[1]

    # Getting context and target indices
    context_idxs = torch.tensor([word_to_ix2[w] for w in context], dtype=torch.long)
    target_idxs = torch.tensor([word_to_ix2[target]], dtype=torch.long)
    print("Actual indices: ", context_idxs, target_idxs)
    log_preds = model(context_idxs)
    print("Predicted indices: ", torch.argmax(log_preds))

Find the file size when uploading a document to a SharePoint library using the JavaScript Object Model

I am uploading documents to a SharePoint library, and while uploading I need to find the file size; if the file size is more than 100 MB, I need to skip one piece of functionality. This needs to be written in a JS file, and the JS file added to the master page.
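
Would something along these lines work for the size check itself? The HTML5 File API exposes the size client-side before the upload starts (fileUploadInput is a placeholder element id):

var MAX_BYTES = 100 * 1024 * 1024; // 100 MB

function isFileTooLarge() {
    var input = document.getElementById('fileUploadInput'); // hypothetical id
    if (!input || !input.files || input.files.length === 0) {
        return false; // nothing selected yet
    }
    return input.files[0].size > MAX_BYTES;
}

// e.g. before the upload: if (isFileTooLarge()) { /* skip the functionality */ }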

Need help.

Thanks,

statistical inference – Partial likelihood in Cox’s proportional hazards model

I’m reading about Cox’s proportional hazards approach to (continuous) survival analysis and I’m finding it difficult to understand his argument for the derivation of the partial likelihood in his 1972 paper.

He states on page 6 of the pdf in the link that the probability to observe a failure at time $t_{(i)}$ on the individual $i$ (given that there is exactly one failure at $t_{(i)}$) is equal to $$\frac{\exp(z_{(i)}\beta)}{\sum_{l\in R(t_{(i)})}\exp(z_l\beta)}.$$

Now I understand that the time-dependent part in the denominator is supposed to cancel against the time-dependent part in the numerator. However, the denominator itself surprises me, because it seems that this denominator is associated to the expected number of failures, rather than the probability to observe exactly one failure (which is what I would expect from the conditional probability that this partial likelihood is supposed to describe).
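
Spelling out the heuristic as I understand it: under proportional hazards, $\lambda(t;z)=\lambda_0(t)\exp(z\beta)$, so the probability that it is individual $i$ who fails, given exactly one failure among the risk set $R(t_{(i)})$, would be $$\frac{\lambda_0(t_{(i)})\exp(z_{(i)}\beta)}{\sum_{l\in R(t_{(i)})}\lambda_0(t_{(i)})\exp(z_l\beta)}=\frac{\exp(z_{(i)}\beta)}{\sum_{l\in R(t_{(i)})}\exp(z_l\beta)},$$ and this is exactly where the baseline hazard cancels.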

I’m wondering if, instead, I should view this partial likelihood as some approximation to the exact problem (and if so, why this approximation is justified).

Also, I’m curious if it’s still necessary with modern computers to actually split the full likelihood into partial likelihoods or if this is something that was mostly useful in the 70s.

python – pytorch LSTM model with unequal hidden layer

I have tuned an LSTM model in Keras as follows, but I don’t know how to write that code in PyTorch. I’ve put my PyTorch code here, but I don’t think it’s right, because it does not give the right answer. However much I searched, I could not find a sample code in PyTorch for more than one LSTM layer with unequal hidden sizes. My input shape is (None, (60, 10)) with output shape (None, 15). Please show a similar example for my Keras model in PyTorch; a sketch of what I think it should look like follows my attempt below. Thanks.

my_Keras_model:

model_input = keras.Input(shape=(60, 10))
x_1 = layers.LSTM(160,return_sequences=True)(model_input)
x_1 = layers.LSTM(190)(x_1)
x_1 = layers.Dense(200)(x_1)
x_1 = layers.Dense(15)(x_1)
model = keras.models.Model(model_input, x_1)

my_pytorch_model:

input_dim = 10
hidden_dim_1 = 160
hidden_dim_2 = 190
hidden_dim_3 = 200
num_layers = 1
output_dim = 15

class LSTM(nn.Module):
    def __init__(self, input_dim, hidden_dim_1, hidden_dim_2, hidden_dim_3 ,num_layers, output_dim):
        super(LSTM, self).__init__()
        self.hidden_dim_1 = hidden_dim_1
        self.hidden_dim_2 = hidden_dim_2
        self.hidden_dim_3 = hidden_dim_3
        self.num_layers = num_layers
        
        self.lstm_1 = nn.LSTM(input_dim, hidden_dim_1, num_layers, batch_first=True)
        self.lstm_2 = nn.LSTM(hidden_dim_1, hidden_dim_2, num_layers, batch_first=True)
        self.fc_1 = nn.Linear(hidden_dim_2, hidden_dim_3)
        self.fc_out = nn.Linear(hidden_dim_3, output_dim)

    def forward(self, x):
        input_X = x
        h_1 = torch.zeros(num_layers, 1 , self.hidden_dim_1).requires_grad_()
        c_1 = torch.zeros(num_layers, 1 , self.hidden_dim_1).requires_grad_()
        h_2 = torch.zeros(num_layers, 1 , self.hidden_dim_2).requires_grad_()
        c_2 = torch.zeros(num_layers, 1 , self.hidden_dim_2).requires_grad_()
        out_put = []

        for i, input_t in enumerate(input_X.chunk(input_X.size(0))):
          out_lstm_1 , (h_1, c_1) = self.lstm_1(input_t, (h_1.detach(), c_1.detach()))
          out_lstm_2 , (h_2, c_2) = self.lstm_2(out_lstm_1, (h_2.detach(), c_2.detach()))
          out_Dense_1 = self.fc_1(out_lstm_2[:, -1, :])
          out_Dense_out = self.fc_out(out_Dense_1)
          out_put.append(out_Dense_out)
        out_put = torch.stack(out_put, 0).squeeze(1)
        return out_put
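
For comparison, here is a minimal sketch of what I think the Keras stack corresponds to in idiomatic PyTorch, assuming batch-first inputs of shape (batch, 60, 10) and letting the LSTMs default to zero initial states:

import torch
import torch.nn as nn

class KerasLikeLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm_1 = nn.LSTM(10, 160, batch_first=True)   # LSTM(160, return_sequences=True)
        self.lstm_2 = nn.LSTM(160, 190, batch_first=True)  # LSTM(190)
        self.fc_1 = nn.Linear(190, 200)                    # Dense(200)
        self.fc_out = nn.Linear(200, 15)                   # Dense(15)

    def forward(self, x):            # x: (batch, 60, 10)
        out, _ = self.lstm_1(x)      # (batch, 60, 160)
        out, _ = self.lstm_2(out)    # (batch, 60, 190)
        out = out[:, -1, :]          # last time step, like return_sequences=False
        return self.fc_out(self.fc_1(out))  # (batch, 15)

# model = KerasLikeLSTM()
# y = model(torch.randn(32, 60, 10))  # y.shape == (32, 15)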

sql server – How to model a course > section > lesson structure where lessons can belong to either sections or courses?

I have three entities:

  1. Course
  2. Section
  3. Lesson

Each “course” is made up of several “lessons”.
The “lessons” inside a course can either be categorized into “sections”, or not.

So, the contents of a course could look either like this:

Foo Course:
    Lesson 1
    Lesson 2
    Lesson 3
    Lesson 4

Or like this:

Bar Course:
    Section 1:
        Lesson 1
        Lesson 2
    Section 2:
        Lesson 3
        Lesson 4

So, in other words, a course can either directly have “lessons”, or it can have “sections” that in turn have “lessons”.

From the other perspective, a “lesson” can either directly belong to a “course”, or belong to a “section” that in turn belongs to a “course”.

I’m struggling with how to implement this structure in a relational database.

If every “lesson” had to necessarily belong to a “section”, it would be easy: I could simply have a “Course” table, a “Section” table with a “CourseId” column, and a “Lesson” table with a “SectionId” column.
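
For concreteness, that straightforward version would look something like this (names and types are hypothetical):

CREATE TABLE Course (
    CourseId INT IDENTITY PRIMARY KEY,
    Name     NVARCHAR(200) NOT NULL
);

CREATE TABLE Section (
    SectionId INT IDENTITY PRIMARY KEY,
    CourseId  INT NOT NULL REFERENCES Course (CourseId),
    Name      NVARCHAR(200) NOT NULL
);

CREATE TABLE Lesson (
    LessonId  INT IDENTITY PRIMARY KEY,
    SectionId INT NOT NULL REFERENCES Section (SectionId),
    Title     NVARCHAR(200) NOT NULL
);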

But my scenario is not as straightforward as that. A “section” can potentially exist as a middleman between a “course” and several “lessons”, but it can also be absent, in which case a “course” directly has the lessons and there are no “sections”.

I’d appreciate any suggestions regarding how such a structure can ideally be implemented in the context of relational databases.

Thanks.

database design – Please review my data model

First off, I appreciate anyone who takes the time to review this post and give me feedback. Not only is it going to help me learn, but it will help my overall database knowledge.

The flow of the frontend application that this data model will support is as follows:

Tournament organizers create tournaments

  • tournaments have organizer contact info
  • tournaments occur at a location
  • tournaments have participants

People register via a registration code

[ER diagram of the tournament data model]

I believe the data model is fairly well normalized, at least to 3NF.

sql – Write a query to display the mobile model name

Write a query to display the mobile model name which has made the maximum sales. Give an alias for the maximum sales as "MAX_SALES". Sort the records based on model name in descending order.
(Hint: Use mobile_master and sales_info tables to retrieve records)
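
A sketch of one possible answer, assuming hypothetical columns mobile_master(model_no, model_name) and sales_info(model_no, sales):

SELECT m.model_name, MAX(s.sales) AS MAX_SALES
FROM mobile_master m
JOIN sales_info s ON s.model_no = m.model_no
GROUP BY m.model_name
HAVING MAX(s.sales) = (SELECT MAX(sales) FROM sales_info)
ORDER BY m.model_name DESC;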
