What is the ideal button size for iOS and Android?

I have designed a mobile app, but I want to know the best button size for both iOS and Android so that I don't run into problems with either operating system or with the user experience.

twrp – How to reduce the abnormal size of internal storage on ALE-L21

I have a rooted phone with TWRP and a custom ROM. The operating system was installed recently, and even before the custom ROM I could clearly see that the reported size of my internal storage is abnormally huge: Titanium Backup doesn't show a huge size for internal storage or for the system partition, and my phone's specifications list about 16 GB. What is the problem? I need to fix it because some apps need to be installed in internal storage.
[Screenshot: internal storage information reported by the operating system]

Pad a number with zeros until the total length is 9 digits (JavaScript)

I receive a number as a parameter, e.g. 18, and I must pad it with zeros until the total length is 9 digits; the result would be 180000000.

If the received value is, for example, 678, the result would be 678000000.

let number = 18;
let count = number.toString().length;

for (let i = count; i < 9; i++) {
  number = number.toString() + '0';
}

number = parseInt(number);

console.log(number);

I managed to solve it in the following way; is there a simpler way to do the same thing? Thank you!
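
A shorter alternative, assuming the input is a non-negative whole number of at most 9 digits, is String.prototype.padEnd, which right-pads a string to a target length (the helper name padToNine is just illustrative):

// Minimal sketch using padEnd (assumes a non-negative integer of at most 9 digits)
function padToNine(n) {
  return parseInt(String(n).padEnd(9, '0'), 10); // "18" -> "180000000" -> 180000000
}

console.log(padToNine(18));  // 180000000
console.log(padToNine(678)); // 678000000

For whole numbers, number * 10 ** (9 - String(number).length) gives the same result without the string round trip.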

SQL Server – Should I enable trace flag 1117 for data files of equal size?

I was reading about the proportional fill algorithm in SQL Server, which reminded me of TF1117. Books Online states:

When a file in the filegroup meets the autogrow threshold, all files
in the filegroup grow. This trace flag affects all databases and is
recommended only if it is safe for every database to grow all files in a
filegroup by the same amount.

What I can't understand is: if the data files fill up proportionally, won't they also autogrow in proportion anyway? In that case, couldn't we just omit TF1117?

python – File size when parsing XML

The code below takes a directory of XML files and parses them into a single CSV file.
Currently, parsing around 60 XML files is fast, and the output is a CSV file of around 250 MB.

That is a really big file, and the reason is that the columns are repeated. I repeat the columns because every row must carry that information: as the screenshot below shows, when Z048 had multiple SetData lines, the other columns (marked in red) had to be repeated for each of those lines.

[Screenshot of the CSV output showing the repeated columns]

I plan to increase the number of XML files to roughly 5,000, which means the CSV file will become considerably larger.

I am asking this question in the hope of finding out whether the size of my CSV file can be reduced in some way. While writing the code, I did try to keep it fast and to avoid producing large CSV files.

from xml.etree import ElementTree as ET
from collections import defaultdict
import csv
from pathlib import Path

directory = 'path to a folder with xml files'

with open('output.csv', 'w', newline='') as f:
    writer = csv.writer(f)

    headers = ('id', 'service_code', 'rational', 'qualify', 'description_num', 'description_txt', 'set_data_xin', 'set_data_xax', 'set_data_value', 'set_data_x')
    writer.writerow(headers)

    xml_files_list = list(map(str, Path(directory).glob('**/*.xml')))
    print(xml_files_list)
    for xml_file in xml_files_list:
        tree = ET.parse(xml_file)
        root = tree.getroot()

        start_nodes = root.findall('.//START')
        for sn in start_nodes:
            row = defaultdict(str)

            # Values shared by every SetData row under this START node
            repeated_values = dict()
            for k, v in sn.attrib.items():
                repeated_values[k] = v

            for rn in sn.findall('.//Rational'):
                repeated_values['rational'] = rn.text

            for qu in sn.findall('.//Qualify'):
                repeated_values['qualify'] = qu.text

            for ds in sn.findall('.//Description'):
                repeated_values['description_txt'] = ds.text
                repeated_values['description_num'] = ds.attrib['num']

            # One CSV row per SetData element, repeating the shared values each time
            for st in sn.findall('.//SetData'):
                for k, v in st.attrib.items():
                    row['set_data_' + str(k)] = v
                for key in repeated_values.keys():
                    row[key] = repeated_values[key]
                row_data = [row[i] for i in headers]
                writer.writerow(row_data)
                row = defaultdict(str)
Time complexity of an algorithm whose output does not scale linearly with the size of the input

Suppose I have a CalculateOutput(n) function which creates an array of size n and repeatedly modifies this array by iterating through each element from 0 to n - 1 (let's say one pass takes linear time). When the array reaches a particular order, the function returns the number of times it has traversed the array. The point is that when n increases, the output does not necessarily increase (e.g. CalculateOutput(4) = 5 while CalculateOutput(5) = 2). How can I determine the time complexity of this algorithm? Or what other information would I need to determine its running time?

I believe that if there were another way to determine the number of passes over the array (call it m) for a given n, then CalculateOutput would be O(m * n). But I don't know what m is for the algorithm described above.
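
For concreteness, here is a sketch of the kind of procedure described (the function name, the initial array contents, and the choice of "sorted" as the particular order are illustrative assumptions, not from the question); its total work is the number of passes m times the linear cost n of each pass:

// Illustrative sketch only: sweep the array (linear work per sweep) until it
// reaches some target order (here: ascending), then return the number of sweeps.
function calculateOutput(n) {
  const a = Array.from({ length: n }, (_, i) => (i * 7 + 3) % n); // arbitrary initial contents
  const inOrder = () => a.every((v, i) => i === 0 || a[i - 1] <= v);
  let passes = 0;
  while (!inOrder()) {
    for (let i = 0; i < n - 1; i++) {
      if (a[i] > a[i + 1]) {
        [a[i], a[i + 1]] = [a[i + 1], a[i]]; // one linear modification pass
      }
    }
    passes++;
  }
  return passes; // total work is O(passes * n), i.e. O(m * n)
}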

Size of a function space

I am interested in measuring the size of a function space. Is there a concrete measure for such a case?

turing machines – Which model of computation supports integers of arbitrary size?

I suspect you are looking for the RAM model or the transdichotomous model. They differ mainly in how they account for the size of the integers and its effect on the cost of the various operations. See What is the difference between the transdichotomous model and the RAM?.

If you want your results to have any applicability, the key is to choose a model that avoids counting arithmetic on integers of exponential length as polynomial time; otherwise you may end up with absurd results. If you count each arithmetic operation as $O(1)$ time, no matter the size of the numbers, then you end up with nonsensical results such as polynomial-time algorithms for factoring and, in fact, for everything in PSPACE. See https://cs.stackexchange.com/a/53885/755.
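
As a small added illustration (a sketch of my own, not part of the original answer), repeated squaring with JavaScript's BigInt shows how quickly operands grow when every multiplication is charged as a single step:

// 20 multiplications, each counted as one step in the unit-cost RAM model,
// already produce a number with 2^20 + 1 bits (about a million bits).
let x = 2n;
for (let i = 0; i < 20; i++) {
  x = x * x; // the operand's bit length doubles on every "O(1)" step
}
console.log(x.toString(2).length); // 1048577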

Note that proving that a problem is in P is not the same as proving that it runs in polynomial time on some exotic model of computation. P is defined as the class of problems that have an algorithm running in polynomial time on a Turing machine. As mentioned in the previous paragraph, if you replace "on a Turing machine" with another model of computation, that statement may no longer hold.

Whether factoring is in P is a famous open problem, and it is likely to be very difficult given our current state of understanding. I would not recommend taking on such an ambitious goal.

html – Irregular font size in browsers of some Android devices

In the following HTML document:

http://fiddle.jshell.net/v0yqL8xp/show/

When running it on an Android device (version 8 or higher) with the Chrome browser (79.0.3945.116), I see the following problem related to the font size (font-size: 16px;):
The two divs with the .medium class render the text smaller than the div with the .large class.
If I increase the width of the .medium class to 100%, the text returns to the correct size.

Has anyone had similar problems?

HTML and CSS: http://jsfiddle.net/v0yqL8xp/

Extract:

*
{
	padding: 0;
	margin: 0;
}


.medium
{
	float: left;
	width: 50%;
}

.large
{
	width: 100%;
}

p
{
	text-align: justify;
	font-size: 16px;
	padding: 14px;
}

Fugiat sit et tempor magna adipisicing excepteur veniam. Velit reprehenderit amet do et sint officia Lorem esse irure eiusmod consectetur. Laboris esse duis enim dolore et ex aliqua proident ullamco id ullamco nostrud amet. Qui culpa ad sint adipisicing do cillum velit proident exercitation ipsum. Pariatur reprehenderit reprehenderit elit irure ea cupidatat labore minim. Labore qui est minim occaecat commodo minim consectetur id aliquip. Sit veniam excepteur cupidatat eiusmod.

Fugiat sit et tempor magna adipisicing excepteur veniam. Velit reprehenderit amet do et sint officia Lorem esse irure eiusmod consectetur. Laboris esse duis enim dolore et ex aliqua proident ullamco id ullamco nostrud amet. Qui culpa ad sint adipisicing do cillum velit proident exercitation ipsum. Pariatur reprehenderit reprehenderit elit irure ea cupidatat labore minim. Labore qui est minim occaecat commodo minim consectetur id aliquip. Sit veniam excepteur cupidatat eiusmod.

Fugiat sit et tempor magna adipisicing excepteur veniam. Velit reprehenderit amet do et sint officia Lorem esse irure eiusmod consectetur. Laboris esse duis enim dolore et ex aliqua proident ullamco id ullamco nostrud amet. Qui culpa ad sint adipisicing do cillum velit proident exercitation ipsum. Pariatur reprehenderit reprehenderit elit irure ea cupidatat labore minim. Labore qui est minim occaecat commodo minim consectetur id aliquip. Sit veniam excepteur cupidatat eiusmod.

I want to automatically convert the text to uppercase and change the font and font size with a Google Sheets script; can anyone help?

function onEdit(e) {
  if (typeof e.value != 'object') {
    e.range.setValue(e.value.toUpperCase());
  }
}
function onEdit() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var sheet = ss.getActiveSheet();

  var cell = sheet.getRange('A:J');
  cell.setFontFamily('Calibri').setFontSize(12).setFontWeight('normal'); // setFontSize takes a number; '' is not a valid font weight
}
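
Both functions above are named onEdit, so only one of them will actually run. A minimal merged sketch, assuming you want the edited cell uppercased and the Calibri/12 styling from the question applied to it (the 'normal' font weight is my assumption), could look like this:

// Sketch: one onEdit simple trigger that uppercases the edited value
// and applies the font styling from the question to the edited range.
function onEdit(e) {
  if (typeof e.value === 'string') {            // single-cell text edits expose the new value as a string
    e.range.setValue(e.value.toUpperCase());    // uppercase the newly entered value
  }
  e.range.setFontFamily('Calibri')
         .setFontSize(12)                       // setFontSize expects a number
         .setFontWeight('normal');
}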