## recovery mode – Analyzing boot-loop root cause from console-ramoops-0 (and logcat)

The device boot-loops and gets stuck at the "Powered by Android" logo. `logcat` hasn't been much use. I want to know what is causing the hang.
All I have is this from `/sys/fs/pstore/console-ramoops-0`

```
'android.frameworks.sensorservice@1.0::ISensorManager/default': No such file or directory
```

I am attaching the entire `/sys/fs/pstore/console-ramoops-0`.

What seems to be the issue?
I have TWRP backups of `vendor`, `system`, and `boot` that I tinkered with restoring, with no luck. This is a stock MIUI ROM with TWRP as the recovery base, so after flashing a dm-verity/no-encrypt zip (can't recall the exact name) plus certification.zip and a permissiver zip, I get past the MIUI logo but now loop at the "Powered by Android" logo.

## transactions – Help analyzing this color-coded hexdump of the Bitcoin blockchain

A sample of the genesis region, the growth period, and the current height.

A sample of the latter part and the end of blk02655.dat. This probably covers parts of the last 10,000 blocks.

```shell
$ pip install pixd
$ python -m pixd path/blk00000.dat
```

https://pypi.org/project/pixd/

Trying to understand:

• How many bytes per line and how 1MB looks
• Recognizing block headers and transaction data
• Categorizing the patterns found
• Confirm order of blocks
• Locate special blocks and transactions (first segwit block, first transaction, batch transactions, multisig, coinjoins, LN channel opens)
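For orientation when reading the raw bytes: each block in a `blk*.dat` file is prefixed by the 4-byte mainnet magic `f9beb4d9` and a 4-byte little-endian length, followed by the block itself, which starts with the 80-byte header. A minimal Python sketch (independent of pixd) that parses the well-known genesis block header:

```python
import hashlib
import struct

# The 80-byte genesis block header (a well-known public constant).
GENESIS_HEADER = bytes.fromhex(
    "0100000000000000000000000000000000000000000000000000000000000000"
    "000000003ba3edfd7a7b12b27ac72c3e67768f617fc81bc3888a51323a9fb8aa"
    "4b1e5e4a29ab5f49ffff001d1dac2b7c"
)

def parse_header(h):
    """Split an 80-byte block header into its six fields."""
    version = struct.unpack_from("<I", h, 0)[0]
    prev_hash = h[4:36][::-1].hex()        # stored little-endian, displayed reversed
    merkle_root = h[36:68][::-1].hex()
    timestamp, bits, nonce = struct.unpack_from("<III", h, 68)
    return version, prev_hash, merkle_root, timestamp, bits, nonce

def block_hash(h):
    """Double SHA-256 of the header, byte-reversed for display."""
    return hashlib.sha256(hashlib.sha256(h).digest()).digest()[::-1].hex()

version, prev_hash, merkle_root, timestamp, bits, nonce = parse_header(GENESIS_HEADER)
print(version, timestamp, hex(bits))
print(block_hash(GENESIS_HEADER))
```

Spotting these 80-byte headers (and the magic prefix) in the colored output is probably the easiest anchor for confirming block order.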

Anyone else curious?

Thanks!

## performance tuning – Optimize molecule distance analyzing code

I have a very large dataset (31,552,000 lines) of xyz coordinates in the following format:

```
1 2 3
4 5 6
7 8 9
. . .
```

I have to compute distances using the special method below (it wraps differences larger than half the box, for a periodic box of side 40).

```
Distance[{a_, b_, c_}, {d_, e_, f_}] :=
 Sqrt[If[Abs[a - d] >= 40/2, Abs[a - d] - 40, Abs[a - d]]^2 +
  If[Abs[b - e] >= 40/2, Abs[b - e] - 40, Abs[b - e]]^2 +
  If[Abs[c - f] >= 40/2, Abs[c - f] - 40, Abs[c - f]]^2]
```

Then I import the data.

```
data = Partition[
   Partition[ReadList["input.txt", {Real, Real, Real}], 16], 38];
```

The formatting is somewhat unusual: every 16 rows is one molecule, and every 38 molecules is one timestep. I take the distance between the 16th atom of each molecule and the 5th atom of each molecule. Then I select the distances that are less than 5.55 and determine the length of the resulting list. This is repeated for each of the 29,000 timesteps.

```
analysis =
 Flatten[
  Table[
   Table[
    Length[
     Select[
      Table[
       Distance[data[[r, y, 16]], data[[r, x, 5]]],
       {x, 1, 38}],
      # <= 5.55 &]
     ],
    {y, 1, 38}],
   {r, 1, 29000}]
  ];
```
``````

This last section is the most computationally intensive part. For 29,000 timesteps and 38 molecules, it takes 40 minutes to process fully. It also takes too much memory (16+ GB per kernel) to parallelize. Is there another method that will improve the performance? I have tried using Compile, but I realized that Table, the biggest bottleneck, is already compiled to machine code.

Below is an example dataset that takes my computer 2 minutes to complete with the analysis code. It can be scaled to more timesteps by changing 4000 to a larger number.

```
data = Partition[
   Partition[Partition[Table[RandomReal[{0, 40}], 3*16*38*4000], 3],
    16], 38]
```
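As a cross-check outside Mathematica: the same minimum-image distance and per-timestep counting vectorize naturally. A NumPy sketch with a small synthetic dataset (the shapes, atom indices, box size, and 5.55 cutoff are taken from the question; the toy sizes are illustrative):

```python
import numpy as np

BOX = 40.0      # periodic box length, from the question
CUTOFF = 5.55   # distance cutoff, from the question

def min_image_distances(p, q):
    """All pairwise distances using the question's wrap rule.

    p: (n, 3) positions, q: (m, 3) positions -> (n, m) distance matrix.
    """
    d = np.abs(p[:, None, :] - q[None, :, :])   # (n, m, 3) componentwise |delta|
    d = np.where(d >= BOX / 2, d - BOX, d)      # fold deltas of half a box or more
    return np.sqrt((d ** 2).sum(axis=-1))

# Toy data: 10 timesteps, 38 molecules, 16 atoms, 3 coordinates each.
rng = np.random.default_rng(0)
data = rng.uniform(0, BOX, size=(10, 38, 16, 3))

# Per timestep: distances from atom 16 (index 15) of every molecule to atom 5
# (index 4) of every molecule, then count distances at or below the cutoff.
counts = np.array([
    (min_image_distances(frame[:, 15, :], frame[:, 4, :]) <= CUTOFF).sum(axis=1)
    for frame in data
]).ravel()

print(counts.shape)
```

The point is the shape of the computation: one (38, 38) distance matrix per timestep replaces the two inner `Table` loops, which is the same restructuring you would aim for with packed arrays in Mathematica.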

## sharepoint enterprise – Analyzing timer job performance

Good morning. I have a SharePoint farm with 3 servers, and since last week I have been noticing an issue. Any job executed on server 2 is very slow (it takes roughly 10 minutes for a job to change status; even after restarting SPTimerV4 it takes the same time to go from Paused to Running), while jobs executed on the other servers update in real time. What can I do to debug the timer jobs on server 2 and find the cause of this slowness?

## python – Reading and analyzing budget and transaction data from user input and .csv

Good first effort!

On `currentBudget`:

One of the easiest things to tell you about is Python's context managers (see the docs).

The context manager lets you write code in a given context and takes care of preparing that context and cleaning it up. In your case, there's a context for dealing with opened files 🙂 Instead of

```python
def currentBudget():
    fileTransaction = open("transactions.csv", "r")
    # ...
    fileTransaction.close()
```

it is recommended that you do:

```python
def currentBudget():
    with open("transactions.csv", "r") as f:
        # ...
```

The context manager makes sure the file is properly closed even if the code raises an error.
Generally, inside the `with` statement you write the least amount of code possible, so that the context can get cleaned up ASAP.

Also, if you are dealing with `csv` files you might want to take a look at the `csv` library Python provides: `csv` docs.
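As an illustrative sketch of that library (the column names here are hypothetical; your transactions.csv may differ), `csv.DictReader` gives you one dict per row, keyed by the header:

```python
import csv
import io

# Hypothetical file contents standing in for transactions.csv.
sample = "name,amount\ncoffee,3.50\nrent,800\n"

with io.StringIO(sample) as f:          # in real code: open("transactions.csv")
    rows = list(csv.DictReader(f))      # one dict per row, keyed by the header

total = sum(float(row["amount"]) for row in rows)
print(total)
```

That spares you from splitting lines on commas by hand, and it handles quoting edge cases for free.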

On organisation:

I really liked that you grouped related `print`s in functions to call repeatedly! One thing you might consider is factoring the different actions you take, depending on the `userInput` variable, into separate functions. That way, the body of your main function is cleaner and easier to read, and then you have a separate function for each functionality.
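One way to do that factoring (the function and menu names below are hypothetical, not from your code) is a dispatch dictionary, so the main loop stays tiny:

```python
def show_budget():
    print("current budget ...")        # placeholder for your existing prints

def add_transaction():
    print("adding a transaction ...")  # placeholder for your existing logic

# Map each menu letter to the function handling that action.
ACTIONS = {"a": show_budget, "b": add_transaction}

def handle(user_input):
    """Dispatch one menu choice; returns True if it was recognized."""
    action = ACTIONS.get(user_input.strip().lower()[:1])
    if action is None:
        print("Unknown choice")
        return False
    action()
    return True

handle("A ")   # calls show_budget()
```

Adding a new action then means writing one function and one dictionary entry, instead of growing an if/elif chain.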

On user input:

Clever usage of `.lower()` to handle capitalised input 😉

You are doing relatively basic I/O, with the user just typing a single letter to pick an action. One thing you might want to do is

```python
userInput = input("\nEnter your choice: ").strip().lower()
```

This removes leading/trailing whitespace (with `.strip()`).
Additionally, you may want to add `[0]` at the end of the line if it is acceptable for the user to type a whole word when you only need the first letter to distinguish choices. This doesn't make much sense while the actions are `a)`, `b)`, etc., but would make sense if you renamed the options to reflect what they actually do.

Good luck 🙂

## Choosing unsupervised learning algorithm for analyzing the spectrum of a linear operator

I am a theoretical physicist, new to CS.stackexchange, with a little knowledge of CS and of Machine Learning (only some general ideas). In physics we often analyze the spectrum of bounded linear operators in finite dimensions, or simply try to understand the eigenspace of some matrix $$M: v_j, \lambda_j$$, where $$M$$ is a general square matrix of size $$N\times N$$ over the complex field and $$v_j$$ is a right eigenvector with the corresponding eigenvalue $$\lambda_j$$.

Basically, in many cases we have a certain intuition (that comes with experience) about how to categorize the $$v_j, \lambda_j$$ pairs into certain groups; however, as $$N$$ grows, and as one tunes certain parameters of the problem (affecting the matrix elements of $$M$$), it becomes really hard to categorize all the eigenstates. The question is: can I use unsupervised learning algorithms to analyze the eigenspace and obtain some information about it? For instance: how many distinct, well-defined groups are there, and what set of numbers can describe each group uniquely?

What looks like a problem to me is that the number of data points here is the same as the dimensionality of the problem, so there is very little data. Maybe some ML specialists who are enthusiastic about math have dealt with this a lot and have a ready-to-use recipe. Standard tools like k-means do not look like something that will work here.

Thank you in advance, and hope the overall question makes sense.

UPDATE: A small remark: the statement "the number of data points here is the same as the dimensionality of the problem" is true for the formulation of the problem "as is". What I mean is that in principle we can think of every eigenvector as a complex-valued distribution function in a $$k$$-dimensional space, and then convert each eigenvector $$v_j$$'s $$N$$ complex-valued components into a smaller number of meaningful characteristics such as, for example, "moments" of the distribution, or some correlations between the components of $$v_j$$, or something else… So the problem can actually be transformed into a different one, if required.
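To make the reduce-then-cluster idea in the update concrete, here is a minimal NumPy sketch under toy assumptions (the two-group spectrum and the choice of features, Re λ, Im λ, and the inverse participation ratio, are illustrative, not a recipe):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy matrix whose spectrum has two well-separated groups: eigenvalues near
# 0 and near 10 on the diagonal, plus a small non-symmetric perturbation.
eigs = np.concatenate([rng.normal(0.0, 0.1, 20), rng.normal(10.0, 0.1, 20)])
M = np.diag(eigs) + 0.01 * rng.normal(size=(40, 40))

lam, V = np.linalg.eig(M)

# Illustrative features per eigenpair: Re/Im of the eigenvalue and the
# inverse participation ratio (localization measure) of the eigenvector.
Vn = V / np.linalg.norm(V, axis=0)
ipr = (np.abs(Vn) ** 4).sum(axis=0)
features = np.column_stack([lam.real, lam.imag, ipr])

# Crude 1-D grouping on Re(lambda): threshold at the midpoint of the range.
threshold = (features[:, 0].min() + features[:, 0].max()) / 2
labels = (features[:, 0] > threshold).astype(int)
print(np.bincount(labels))
```

The essential move is the middle step: each eigenpair becomes a short feature vector, and any clustering (even the crude threshold here, or k-means on `features`) operates on those features rather than on the raw $$N$$-component vectors.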

## algorithms – Maximum Subarray Problem – Analyzing best-case, worst-case, and average-case time complexity (big-O)

New to the board; if this is the wrong section I apologize and will delete the question. It would be helpful to be pointed to the correct exchange to guide me through this process of learning.

Given an array `A[1...n]` of numeric values (positive, zero, or negative), how do you determine the subarray `A[i...j]` (1 ≤ i ≤ j ≤ n) whose sum of elements is maximum over all subarrays? Regarding the brute-force algorithm below, how do you go about analyzing its best-case, worst-case, and average-case time complexity as a polynomial in n and in asymptotic Θ-notation? How would you even show the steps without implementing the algorithm?

```
// PSEUDOCODE
// BRUTE-FORCE-FIND-MAXIMUM-SUBARRAY(A)
n = A.length
max-sum = -∞
for l = 1 to n
    sum = 0
    for h = l to n
        sum = sum + A[h]
        if sum > max-sum
            max-sum = sum
            low = l
            high = h
return (low, high, max-sum)
```
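One way to see why all three cases coincide is to run the same brute force and count inner-loop executions: the count is n(n+1)/2 regardless of the values in A, so best, worst, and average case are all Θ(n²). A Python sketch (the sample array is the CLRS Chapter 4 example):

```python
def brute_force_max_subarray(A):
    """CLRS-style brute force; returns 0-based inclusive (low, high, max_sum, steps)."""
    n = len(A)
    max_sum = float("-inf")
    low = high = 0
    steps = 0                      # counts executions of the inner-loop body
    for l in range(n):
        s = 0
        for h in range(l, n):
            s += A[h]
            steps += 1
            if s > max_sum:
                max_sum, low, high = s, l, h
    return low, high, max_sum, steps

# The CLRS Chapter 4 example array; its maximum subarray is A[7..10] with sum 43.
A = [13, -3, -25, 20, -3, -16, -23, 18, 20, -7, 12, -5, -22, 15, -4, 7]
print(brute_force_max_subarray(A))
```

Note that `steps` depends only on n, not on the array's contents, which is the formal reason the best and worst cases have the same Θ bound.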

Note: I am referring to and referencing problems from https://walkccc.me/CLRS/Chap04/4.1/.

## plotting – Importing a huge set of data and analyzing it properly

I have this dataset in .csv or .xlsx format and I want to import it into Mathematica and analyze it. Here I have only shown part of the dataset, just to give an example of how it looks.

Basically there are DrainI(i), DrainV(i), GateI(i), and GateV(i), where GateV(i) is constant for $$i = 1, 2, 3, 4, \ldots$$

Now I want to import this into a Mathematica notebook and essentially plot DrainI(i) vs. DrainV(i) for different GateV(i) in the same plot. Is there an efficient way to do this without having to store all the data in different lists? Tools like Origin or IgorPlot are best suited for such plots, but I also want to do some data manipulation, such as finding the slope near $$DrainV=0$$ (as mentioned below), hence I want to do everything in Mathematica.

Then I want to approximate DrainI(i) as a linear function of DrainV(i) near $$DrainV(i)=0$$. What is the best way to do this?

Also, I am not sure how to upload the data, so I just put in a screenshot from the Excel file.
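For the slope near DrainV = 0, the grouping-and-fitting logic looks like this in Python/NumPy as a point of comparison (the synthetic columns below are hypothetical stand-ins for the .csv; in Mathematica, `Import` plus `GatherBy` and `Fit` would play the same roles):

```python
import numpy as np

# Hypothetical synthetic I-V data standing in for the .csv columns
# (DrainV, DrainI, GateV); the real file layout may differ.
drain_v = np.tile(np.linspace(-1, 1, 21), 3)
gate_v = np.repeat([0.0, 0.5, 1.0], 21)
slopes_true = {0.0: 1.0, 0.5: 2.0, 1.0: 3.0}
drain_i = np.array([slopes_true[g] * v for v, g in zip(drain_v, gate_v)])

# Group by GateV and fit DrainI as a linear function of DrainV near DrainV = 0.
fitted = {}
for g in np.unique(gate_v):
    mask = (gate_v == g) & (np.abs(drain_v) < 0.3)   # points near DrainV = 0
    slope, intercept = np.polyfit(drain_v[mask], drain_i[mask], 1)
    fitted[g] = slope

print(fitted)
```

The key design choice is keeping everything in one flat table and selecting by the GateV column, rather than maintaining a separate list per gate voltage.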

## A graph database suitable for analyzing a heap snapshot?

It looks like recommendation questions aren’t explicitly off-topic, so here goes:

I’m working on some tooling for analyzing a dump of the heap of a running program. The dump is just a list of nodes with associated metadata and references to other nodes (possibly cyclic).

I don’t have any experience with graph databases, and I’m wondering if I would save myself a lot of time by building tooling around a graph DB. So I’m looking for recommendations and pointers to resources, and advice.

Some specific questions:

• are there any graph databases with built-in functionality for computing a dominator tree? (Googling this didn’t turn up any results.)
• are there any DBs that have tooling for visualizing a huge graph?
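If nothing turns up with dominators built in, computing them yourself on a modest snapshot is feasible. A deliberately naive sketch (O(V·(V+E)), fine for small graphs; real tools use Lengauer–Tarjan or the Cooper–Harvey–Kennedy iteration), assuming a single GC root:

```python
from collections import deque

def reachable(graph, root, excluded=None):
    """Nodes reachable from root via BFS, optionally pretending `excluded` is gone."""
    seen = set()
    queue = deque([root])
    while queue:
        node = queue.popleft()
        if node in seen or node == excluded:
            continue
        seen.add(node)
        queue.extend(graph.get(node, ()))
    return seen

def immediate_dominators(graph, root):
    """Naive dominators: d dominates n iff removing d cuts n off from root."""
    base = reachable(graph, root)
    doms = {n: {d for d in base
                if d != n and n not in reachable(graph, root, excluded=d)}
            for n in base if n != root}
    # The immediate dominator of n is its dominator with the most dominators
    # of its own (dominators of a node form a chain from the root).
    idom = {}
    for n, ds in doms.items():
        idom[n] = max(ds, key=lambda d: len(doms.get(d, set())))
    return idom

# Toy heap graph: root -> a -> c, root -> b -> c, c -> d (d retained only via c).
g = {"root": ["a", "b"], "a": ["c"], "b": ["c"], "c": ["d"]}
print(immediate_dominators(g, "root"))
```

The dominator tree is exactly the "retainer" structure you want from a heap snapshot: here `d` is retained solely by `c`, while `c` is retained only by the root because it is reachable through both `a` and `b`.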

## Analyzing the Runtime of Shuffling Algorithm

The following is pseudocode used to shuffle the contents of an array A of length n. As a subroutine, Shuffle calls Random(m), which takes O(m) time to run given input m. Determine the runtime of the following algorithms and justify your answer. Which of the two is faster?

```
function Shuffle(A)
    Split A into two equal pieces Al and Ar (this takes constant time)
    Al = Shuffle(Al)
    Ar = Shuffle(Ar)
    for i = 0 to len(A)/2 − 1:
        for j = 0 to i − 1:
            Al[j] = Al[j] − Ar[i] + Random(6)
            Ar[i] = Ar[j] − Al[i] + Random(6)
    Al = Shuffle(Al)
    Ar = Shuffle(Ar)
    Combine A = Al + Ar (this takes constant time)
    return A = Al + Ar

function Shuffle1(A)
    for i = 0 to len(A) − 1:
        for j = 0 to i − 1:
            for k = 0 to j − 1:
                A[k] = A[k] + A[j] + A[i] + Random(k)
    return A
```

After a good amount of work, I believe the runtime of Shuffle1(A) is O(n³k), as the three loops run a total of about n³ times with constant-time array access and O(k) work done per iteration, but I am not quite sure. I am having a lot of difficulty tackling the runtime of Shuffle(A). Should I be using the Master Theorem in some way? Any advice and help would be greatly appreciated!
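One way to sanity-check the n³ part empirically: count the innermost-body executions of Shuffle1 directly. The count is C(n, 3) = n(n−1)(n−2)/6 independent of A's contents, and each body execution additionally pays O(k) for the Random(k) call, which is where any extra factor comes from. A quick Python sketch:

```python
def shuffle1_iterations(n):
    """Count innermost-body executions of Shuffle1 for an array of length n."""
    count = 0
    for i in range(n):          # i = 0 .. n-1
        for j in range(i):      # j = 0 .. i-1
            for k in range(j):  # k = 0 .. j-1
                count += 1
    return count                # equals C(n, 3) = n(n-1)(n-2)/6

print(shuffle1_iterations(10), shuffle1_iterations(20))
```

For Shuffle, counting calls the same way (four recursive calls on halves plus the double loop) is a good first step before reaching for the Master Theorem or the general recursion-tree method.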