ultrafilters – Transforming large size kernel into equivalent small size kernels

I have a problem where I want to convert a kernel of large size, i.e., 64×64, into a number of 3×3 kernels such that convolving them with the same image yields an equivalent output. (This does not mean simply splitting the large kernel into small kernels.)
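For reference, here is a minimal sketch (assuming NumPy/SciPy and full linear convolution; the array sizes and random kernels are arbitrary) of the identity such a decomposition relies on: convolving with two 3×3 kernels in sequence is exactly the same as convolving once with their 5×5 composition, so an equivalent chain of 3×3 kernels exists only if the large kernel can be written as such a composition.

import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
image = rng.standard_normal((32, 32))
k1 = rng.standard_normal((3, 3))
k2 = rng.standard_normal((3, 3))

# Convolving with k1 and then with k2 ...
chained = convolve2d(convolve2d(image, k1, mode="full"), k2, mode="full")

# ... equals a single convolution with the 5x5 kernel k1 * k2.
composed = convolve2d(image, convolve2d(k1, k2, mode="full"), mode="full")

print(np.allclose(chained, composed))   # True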

nt.number theory – Can factorization of very large numbers be aided by associating them with a series (described below) of quadratic polynomials?

My name is J. Calvin Smith. I graduated in 1979 with a Bachelor of Arts in Mathematics from Georgia College in Milledgeville, Georgia. My Federal career (1979-2012) in the US Department of Defense led me to learn, explore, and take courses in cryptologic mathematics and number theory for cryptology, which in turn led me to the problem – which I assume is still difficult – of factoring very large numbers. I came up with a series of quadratic polynomials associated with any number one is trying to factor – an example of the technique follows – and I want to find out whether this is a technique with promise, one that would help make factorization algorithmic and fast, reducing the order of magnitude of the problem.

It is most easily described by way of example. Let p = 1009 and q = 2003.

n = pq = 2021027.

In this situation, we can factorize n effortlessly: we know what p and q are. What I want to do here is take what we know and use it as a stepping stone: a way to find a general method for factoring numbers of arbitrary size straightforwardly.

The largest integer less than the square root of n is 1421. Let us set a
new variable m to 1421.

1009 = m – 412.
2003 = m + 582.

n = m^2 + 170*m – 239784.

The discriminant in the quadratic formula for this polynomial would be B^2 – 4AC = 28900 + 959136 = 988036, the square of 994. Thus the quadratic formula will give us integer roots. This follows directly from how we constructed the quadratic: as the product of two known factors.
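As a quick sanity check on that arithmetic (a short Python snippet; the variable names are mine, not part of the construction):

from math import isqrt

n, m = 2021027, 1421
B, C = 170, -239784
D = B * B - 4 * C                       # 988036
s = isqrt(D)                            # 994, and 994**2 == 988036
r1, r2 = (-B + s) // 2, (-B - s) // 2   # integer roots 412 and -582
print(s * s == D, m - r1, m - r2)       # True 1009 2003 -- recovering p and q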

But since m^2 = 2019241, we also have as a polynomial for n:

n = m^2 + 1786, which cannot be factored using the quadratic formula.

But 1786 = m + 365 = 2m – 1056 = 3m – 2477 and so on. Adding each of these expressions to m^2 produces a series of quadratic polynomials, almost all of which cannot be factored into integer roots.

Let us now look at m^2 + 1786, along with the next few polynomials in the series constructed this way, and see whether we can determine or calculate ahead of time that we would hit the jackpot, so to speak, at the polynomial with the 170*m term (i.e., the 171st quadratic in the series).

B        C      B^2 - 4*C
0     1786     -7144   (no real square root)
1      365     -1459   (no real square root)
2    -1056      4228 = 65^2 + 3 = 66^2 - 128
3    -2477      9917 = 99^2 + 116 = 100^2 - 83

In general here, the discriminant is (B + 2842)^2 - 8084108; before completing the square, this is B^2 + 5684*B - 7144. What I have not yet figured out, but I suspect might be easy, is for which values of B the discriminant becomes a perfect square, thus causing the quadratic formula to produce integer roots and leading us to the answer we want – the factorization. Further, will this approach scale nicely? I am hoping that discovering the perfect-square values of the discriminant’s quadratic, regardless of the n = pq chosen (with p and q still unknown in that case), can be done algorithmically and easily.
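For concreteness, here is a minimal brute-force sketch in Python of the search described above (the function name and the max_B cutoff are my own choices): it walks through B = 0, 1, 2, ... exactly as in the table, stops at the first B whose discriminant is a perfect square, and reads the factors off the integer roots. Note also that 2842 = 2m and 8084108 = 4n, so the completed-square form of the discriminant is (B + 2m)^2 - 4n.

from math import isqrt

def factor_via_quadratic_series(n, max_B=10**7):
    """Sketch of the search: scan B = 0, 1, 2, ... until the discriminant
    of x^2 + B*x + C is a perfect square, where C = r - B*m."""
    m = isqrt(n)              # largest integer not exceeding sqrt(n); 1421 for the example
    r = n - m * m             # n = m^2 + r; r = 1786 for n = 2021027
    for B in range(max_B):
        C = r - B * m                  # e.g. 1786, 365, -1056, -2477, ...
        D = B * B - 4 * C              # discriminant B^2 - 4C = B^2 + 4*B*m - 4*r
        if D >= 0:
            s = isqrt(D)
            if s * s == D:             # perfect square => integer roots
                r1, r2 = (-B + s) // 2, (-B - s) // 2
                return m - r1, m - r2  # n = (m - r1)(m - r2)
    return None

print(factor_via_quadratic_series(2021027))   # (1009, 2003), found at B = 170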

performance tuning – Large difference in the time it takes to compute the transpose of a matrix

This is a simplified version of the program I’m working on. It involves some large vectors that I have to transpose at the end in order to get the result I’m looking for. I have two versions of the program, both giving the same result, but one takes far longer than the other:

n = 200.;
m = 300000;
p = Table[RandomReal[{-1, 1}], m];

(* First program *)

AbsoluteTiming[Q = Reap[Do[Sow[({p, p}*{i/50, i^2/100}), 1];
   Sow[({p, p}*{i/50, i^2/100}) // Transpose, 2];, {i, n}]][[2]];]
AbsoluteTiming[v11 = Q[[1]]; v21 = Q[[2]] // Transpose;]

{1.79722, Null} (* Time it takes to compute the two vectors, Q[[1]] and Q[[2]] *)
{1.02865, Null} (* Mainly the time it takes to transpose Q[[2]], as setting v11 = Q[[1]] takes about 10^-6 seconds *)

(* Second program *)

v12 = v22 = Table[0, {i, n}];

AbsoluteTiming[Do[v12[[i]] = ({p, p}*{i/50, i^2/100});
  v22[[i]] = ({p, p}*{i/50, i^2/100}) // Transpose;, {i, n}]]
AbsoluteTiming[vec1 = v22 // Transpose;]

{1.78438, Null} (* Time it takes to compute the two vectors, v12 and v22 *)
{14.5686, Null} (* Time it takes to transpose v22 *)

As you can see, the computation time is the same in both programs, but there’s a huge difference when transposing the matrix at the end. When m is larger, the second program even crashes due to memory issues when trying to transpose the matrix, while the first one takes only a few seconds. When checking at the end, both vectors are identical:

v22 == Q[[2]]
vec1 == v21

True
True

How can there be such a huge difference in the time it takes to transpose two identical matrices?

forms – Can a 2-column grid of info be acceptable if it mitigates an awkwardly large amount of white space?


I’m in the middle of a pseudo-overhaul of my company’s platform, with an emphasis on “object-level” pages (i.e., the page for an individual task or an individual appointment).

A problem I’m running into is that the info for many of these pages is laid out in a two-column format (see pic), which I know any competent UX designer will tell you is bad for scanning. But while combining both columns into one single left-aligned column might improve scannability, it would also leave a gaping void of whitespace on the right portion of the page. I can’t really articulate a UX guideline this violates beyond it just looking ridiculous and stark, but it still seems like an issue worth surfacing.

Any thoughts on this? I want to follow best practices but I also would like to preserve a sense of visual balance and appeal.

Thanks!

Always On Synchronous Commit Mode With Large Delays

For SQL Server Enterprise’s Always On availability groups in Synchronous Commit mode, is there some amount of latency that can’t be exceeded? For example, can it work between very distant data centers, such as between the US and Europe?

uploads – 413 Request Entity Too Large nginx/1.18.0 (Ubuntu)

This message occurs whenever I attempt to upload a file larger than 2 MB in WordPress. I have made the following changes in order to increase the allowed file upload size:

In nginx.conf, I added

client_max_body_size 200M;

In php.ini, I modified

upload_max_filesize = 200M
max_file_uploads = 20
post_max_size = 256M

In wp-config.php, I added

@ini_set( 'upload_max_filesize' , '200M' );
@ini_set( 'post_max_size' , '256M' );
@ini_set( 'memory_limit' , '256M' );

Even with these parameters set in all three configuration files, I am still getting the message:
413 Request Entity Too Large nginx/1.18.0 (Ubuntu)

Can anyone help?

microservices – Service integration with large amounts of data

I am trying to assess the viability of microservices/DDD for an application I am writing, in which a particular context/service needs to respond to an action completing in another context. Previously I would handle this via integration events published to a message queue, but I haven’t had to deal with events that could contain large amounts of data.

As a generic example, let’s say we have an Orders context and an Invoicing context. When an order is placed, an invoice needs to be generated and sent out.

With those bits of information I would raise an OrderPlaced event containing the order information, for example:

public class OrderPlacedEvent
{
    public Guid Id { get; }
    public List<OrderItem> Items { get; }
    public DateTime PlacedOn { get; }
}

from the Orders context, and the Invoicing context would consume this event to generate the required invoice. This seems fairly standard, but all the examples I have found are fairly small and don’t address what happens if the order has 1000+ items, which leads me to believe that maybe integration events are only intended for small pieces of information.

The ‘easiest’ way would be to just use an order ID and query the orders service to get the rest of the information, but this would add coupling between the two services which the approach is trying to remove.

Is my assumption that event data should be minimal correct? If it is, how would I correctly handle (is it even possible to handle?) a scenario where there are large pieces of data that another context/service needs to respond to?

exiftool – Changing Date on large numbers of scanned files

I have thousands of scanned photos and would like to change the EXIF data to have the proper dates.
I can do this all with exiftool, but am looking for a way to process the files after I have organized the files into groups by date.

I created subfolders with the format mmddyy and placed each photo for that date within.

I would like a script to do the following (Windows-based computer, so DOS, PowerShell, or VBScript would work):

Step through each folder, take the date from the folder name, and apply the following commands (a rough sketch of the loop I have in mind follows below):

exiftool -datetimeoriginal="19yy:mm:dd 12:00:00" directory\*.jpg
exiftool "-datetimeoriginal+<0:0:${filesequence;$_*=3}" directory\*.jpg
move *.jpg e:\Fixed
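Very roughly, the per-folder loop I have in mind would look like the sketch below (written in Python only to show the structure, even though I would prefer DOS, PowerShell, or VBScript; the root path e:\scans and the six-digit folder-name check are placeholders/assumptions, and exiftool is assumed to be on the PATH):

import re
import shutil
import subprocess
from pathlib import Path

SCAN_ROOT = Path(r"e:\scans")   # placeholder: parent folder holding the mmddyy subfolders
DEST = Path(r"e:\Fixed")

for folder in sorted(SCAN_ROOT.iterdir()):
    # Only process subfolders named like mmddyy, e.g. 070462
    if not (folder.is_dir() and re.fullmatch(r"\d{6}", folder.name)):
        continue
    mm, dd, yy = folder.name[:2], folder.name[2:4], folder.name[4:6]

    # Set the base date for every scan in the folder
    # (exiftool keeps its usual *_original backup copies)
    subprocess.run(
        ["exiftool", f"-datetimeoriginal=19{yy}:{mm}:{dd} 12:00:00", str(folder)],
        check=True)

    # Offset each file by a multiple of its sequence number,
    # using the same advanced-formatting expression as in the commands above
    subprocess.run(
        ["exiftool", "-datetimeoriginal+<0:0:${filesequence;$_*=3}", str(folder)],
        check=True)

    # Move the adjusted JPEGs out to the "fixed" folder
    DEST.mkdir(parents=True, exist_ok=True)
    for jpg in folder.glob("*.jpg"):
        shutil.move(str(jpg), str(DEST / jpg.name))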

Any suggestions? I haven’t scripted in years and am definitely rusty.

True CDN cost for a large traffic site?

I have received some great help from this amazing forum regarding finding a good storage hosting provider and I’m currently negotiating with… | Read the rest of https://www.webhostingtalk.com/showthread.php?t=1825563&goto=newpost

printing – How to develop large dark room prints?

I want to make some large darkroom prints (40×50 or larger). However, the darkroom I have access to does not have room for 40×50 trays. So before I go and make a whole bunch of mistakes working out how to do this, I would like to know if there are any tried-and-true methods that someone can point me toward.

I am happy with how I am going to set up an enlarger sideways to make the exposure; it is just how to develop the print that I need to work out.

Half-baked ideas I have:

  1. Use a 20 L ‘poly-pail’ (bucket with a sealing lid) and use it like a rotary tank
  2. Make a thin vertical tank out of acrylic sheet
  3. Hold the exposed print vertically above the sink and use an aquarium pump to hose the print down with developer, fixer, etc.
  4. Use a wallpaper-wetting bucket and carefully dunk the paper in and out (under the wire).

Any suggestions?