## Operating system scheduling doubt – Computer Science Stack Exchange

Consider a uniprocessor system executing three tasks T1, T2 and T3, each of which is composed of an infinite sequence of jobs (or instances) that arrive periodically at intervals of 3, 7 and 20 milliseconds, respectively. The priority of each task is the inverse of its period, and the available tasks are scheduled in order of priority, with the highest-priority task scheduled first. Each instance of T1, T2 and T3 requires an execution time of 1, 2 and 4 milliseconds, respectively. Given that all tasks initially arrive at the beginning of the 1st millisecond and task preemptions are allowed, the first instance of T3 completes its execution at the end of _____________________ milliseconds.

> The priority of each task is the inverse of its period, and the available tasks are scheduled in order of priority.

I’m confused by this line. Do they mean the burst time by “period”, or the interval at which the task’s jobs arrive periodically?

BTW, for this question it doesn’t matter which we take; in the end T1 will always have the highest priority. But can you help me here?
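For what it’s worth, “period” here is the standard rate-monotonic term for the arrival interval (3, 7 and 20 ms), not the burst time, and priority is its inverse, so T1 > T2 > T3. A quick millisecond-by-millisecond simulation sketch under that reading (the function name is mine):

```python
# Preemptive fixed-priority (rate-monotonic) schedule, 1 ms time steps.
# All three tasks release their first job at t = 0.
def first_t3_completion(periods=(3, 7, 20), exec_times=(1, 2, 4), horizon=100):
    remaining = [0, 0, 0]              # work left in the current job of each task
    for t in range(horizon):
        for i, p in enumerate(periods):
            if t % p == 0:             # a new job of task i is released
                remaining[i] = exec_times[i]
        for i in range(3):             # shortest period = highest priority
            if remaining[i] > 0:
                remaining[i] -= 1      # run task i for this millisecond
                if i == 2 and remaining[2] == 0:
                    return t + 1       # T3's first instance finishes here
                break
    return None

print(first_t3_completion())  # 12
```

Under that reading the first instance of T3 completes at the end of 12 ms. Note that taking “period” as the burst time (1, 2, 4 ms) would give the same priority order, which is why the interpretation doesn’t change the schedule here.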

## performance tuning – Generating and operating with random numbers

I’m basically working with the creation of random numbers that represent the positions of $$N$$ electrons within a box. I then sum all the individual electric fields generated by the electrons, yielding the total electric field at one specific point in space (outside the box). After doing that for $$N_{re}$$ realizations of the random electron positions, I build a histogram of the electric field at that specific point.

When I choose $$N=10000$$ electrons and $$N_{re}=10000$$ realizations, my computer takes a long time to perform these calculations, even using the Parallel* commands.

Would you have any suggestions for improving the calculation speed? I’d appreciate anything that might help.

Here is the code; it’s actually very short. I should note that I’m also fitting the histogram with a well-known distribution.

```mathematica
MonteCarlo[k_, Nt_, R_, Nre_, z_] := Module[{el, \[Epsilon]d, nI, fitparams},

  \[Epsilon]d = 9.6*8.8542*10^-12; (* permittivity of the medium *)
  el = 1.6*10^-19;                 (* elementary charge *)
  nI = Nt/R^3;                     (* electron density *)

  (* one uniformly random position inside the box *)
  rdip[i_] := RandomReal[{-R/2, R/2}, 3];

  (* Nre realizations of Nt random positions *)
  AbsoluteTiming[
   Do[Rlistdip[j] = ParallelTable[rdip[i], {i, 1, Nt}], {j, 1, Nre}]];

  rNV = {0, 0, -R/2 - z}; (* observation point outside the box *)

  (* field of electron i of realization j at rNV *)
  El[j_, i_] := el/(4 Pi \[Epsilon]d)*(Rlistdip[j][[i]] - rNV)/
     Norm[Rlistdip[j][[i]] - rNV]^3;

  Et[j_] := Sum[El[j, i], {i, 1, Nt}];          (* total field *)
  Elmean[j_] := 1/Nt Sum[El[j, i], {i, 1, Nt}]; (* mean per-electron field *)

  (* z-component samples, discarding outliers beyond 10^8 *)
  dataEz[k] = ParallelTable[
    If[Abs[Et[j][[3]]] < 1.*10^8, Et[j][[3]], Nothing], {j, 1, Nre}];

  pEz[k] = Histogram[{dataEz[k]}, {-2*10^7, 2*10^7, 10^4}, "PDF",
    PlotRange -> {{-2*10^7, 2*10^7}, All}, PlotTheme -> "Scientific",
    ChartLayout -> "Overlapped",
    LabelStyle -> Directive[Black,
      {FontFamily -> "Latin Modern Roman", FontSize -> 17}]];

  (* fit a Student t distribution; one call instead of three *)
  fitparams = FindDistributionParameters[dataEz[k],
    StudentTDistribution[\[Mu][k], bba[k], cca[k]]];
  \[Mu]0[k] = \[Mu][k] /. fitparams;
  b0[k] = bba[k] /. fitparams;
  c0[k] = cca[k] /. fitparams;

  \[ScriptCapitalD][k] = StudentTDistribution[\[Mu]0[k], b0[k], c0[k]];
  \[Mu]0[k] = Median[\[ScriptCapitalD][k]];
  fitd[k][x_] := PDF[\[ScriptCapitalD][k], x];

  (* full width at half maximum of the fitted PDF *)
  xhalf1[k] = x /. Solve[PDF[\[ScriptCapitalD][k], \[Mu]0[k]]/2 ==
       PDF[\[ScriptCapitalD][k], x], x][[1]];
  xhalf2[k] = x /. Solve[PDF[\[ScriptCapitalD][k], \[Mu]0[k]]/2 ==
       PDF[\[ScriptCapitalD][k], x], x][[2]];
  Ehalf[k] = Abs[xhalf1[k]] + Abs[xhalf2[k]];
  Ehalfa[k, z] = Ehalf[k];
  ]
```

Using the module above, my situation corresponds to

```mathematica
Nt = 10000;
R = 1.*10^-7;
Nre = 10000;

MonteCarlo[1, Nt, R, Nre, 2 R]
```
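As a cross-check on the algorithmic cost: the per-electron field sum vectorizes completely, which is usually where the speedup comes from (in Mathematica, that means generating all positions as one array and using listable arithmetic instead of `Sum` over a definition that re-indexes the list, which re-evaluates `El` from scratch for every term). A minimal NumPy sketch of the same calculation, with the constants taken from the post; the function name is my own:

```python
import numpy as np

def monte_carlo_ez(Nt, R, Nre, z, seed=0):
    """z-component of the total field at rNV = (0, 0, -R/2 - z) for Nre
    independent realizations of Nt uniform random electron positions."""
    rng = np.random.default_rng(seed)
    eps_d = 9.6 * 8.8542e-12          # permittivity, as in the post
    el = 1.6e-19                      # elementary charge
    # positions: shape (Nre, Nt, 3), uniform in the box [-R/2, R/2]^3
    pos = rng.uniform(-R / 2, R / 2, size=(Nre, Nt, 3))
    rNV = np.array([0.0, 0.0, -R / 2 - z])
    d = pos - rNV                            # displacements to the field point
    r3 = np.linalg.norm(d, axis=-1) ** 3     # |d|^3, shape (Nre, Nt)
    # z-component of each electron's field, summed over the Nt electrons
    return el / (4 * np.pi * eps_d) * (d[..., 2] / r3).sum(axis=-1)

# Ez = monte_carlo_ez(10000, 1e-7, 10000, 2e-7)  # one sample per realization
```

For $$N = N_{re} = 10^4$$ the full position array is roughly 2.4 GB of doubles, so in practice you would run this in chunks over the realizations; the per-chunk work stays fully vectorized either way.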

Thanks!
Best,
Denis

## operating systems – Does the PSH flag in the TCP header send a signal to the receiver process?

We know that when TCP data is received with PSH set, it will immediately transfer the received data to the application. Now suppose pushed data arrives at the receiver side while the receiving application is not reading any data (it is busy with other tasks). How does the receiver pass the data to its application process if that process is not even reading data? Is a signal, something like SIGXXX, triggered and sent to the receiver process to notify it that pushed data has arrived and needs to be processed immediately?
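For what it’s worth, the default behaviour is easy to observe: the kernel simply keeps the arriving data in the socket’s receive buffer until the application calls `recv`; no signal is delivered unless the process explicitly requests asynchronous notification. A small sketch over loopback, assuming ordinary blocking sockets:

```python
import socket, threading, time

# Shows that data arriving while the receiver is "busy" just waits in the
# kernel's socket receive buffer: no signal is delivered, and a later
# recv() still returns it.
def run_demo():
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port = srv.getsockname()[1]

    def sender():
        c = socket.socket()
        c.connect(("127.0.0.1", port))
        c.sendall(b"pushed data")   # TCP may set PSH on this segment
        c.close()

    t = threading.Thread(target=sender)
    t.start()
    conn, _ = srv.accept()
    time.sleep(0.5)          # receiver "busy with other tasks"
    data = conn.recv(1024)   # the data was buffered all along
    conn.close(); srv.close(); t.join()
    return data

print(run_demo())  # b'pushed data'
```

The PSH flag tells the *receiving TCP* not to delay handing buffered data to the application once it does read; it does not wake the application up.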

## Operating system, synchronization

Consider the methods used by processes P1 and P2 for accessing their critical sections whenever needed, as given below. The initial values of shared boolean variables S1 and S2 are randomly assigned.
Method used by P1:

```
while (S1 == S2);
/* Critical Section */
S1 = S2;
```

Method used by P2:

```
while (S1 != S2);
/* Critical Section */
S2 = not(S1);
```

Which one of the following statements describes the properties achieved?

Mutual exclusion but not progress

Progress but not mutual exclusion

Neither mutual exclusion nor progress

Both mutual exclusion and progress
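Since the state space here is tiny, the claims can be checked mechanically: enumerate every interleaving of the two processes from all four initial values of S1 and S2 and look for a reachable state with both processes in their critical sections. A sketch (the state encoding is my own):

```python
from itertools import product

# Each process: pc 0 = at the while-test, 1 = in critical section, 2 = exited.
# P1: waits while S1 == S2; on exit sets S1 = S2.
# P2: waits while S1 != S2; on exit sets S2 = not S1.

def step(pid, pc, s1, s2):
    """Advance one process by one atomic step; return (new_pc, s1, s2)."""
    if pid == 0:                       # P1
        if pc == 0:
            return (1, s1, s2) if s1 != s2 else (0, s1, s2)
        if pc == 1:
            return (2, s2, s2)         # exit CS: S1 = S2
    else:                              # P2
        if pc == 0:
            return (1, s1, s2) if s1 == s2 else (0, s1, s2)
        if pc == 1:
            return (2, s1, not s1)     # exit CS: S2 = not S1
    return (pc, s1, s2)                # pc == 2: process is done

def check():
    """True iff some interleaving puts both processes in the CS at once."""
    both_in_cs = False
    for s1, s2 in product([False, True], repeat=2):
        seen, frontier = set(), [((0, 0), s1, s2)]
        while frontier:
            (pc1, pc2), a, b = frontier.pop()
            if ((pc1, pc2), a, b) in seen:
                continue
            seen.add(((pc1, pc2), a, b))
            if pc1 == 1 and pc2 == 1:
                both_in_cs = True
            for pid in (0, 1):
                pc = pc1 if pid == 0 else pc2
                npc, na, nb = step(pid, pc, a, b)
                state = ((npc, pc2), na, nb) if pid == 0 else ((pc1, npc), na, nb)
                frontier.append(state)
    return both_in_cs

print("mutual exclusion violated:", check())  # False: ME holds
```

The search never finds both processes in the critical section, so mutual exclusion holds. But when S1 == S2 initially, P1 can enter only after P2 has run (and vice versa when S1 != S2), so entry strictly alternates and progress is not guaranteed.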

## operating systems – How does an OS limit a program’s capabilities, if the program works directly with the CPU?

Modern CPUs have privilege modes that are used by the operating system to lock out certain instructions.  For example, in `user` mode, instructions that modify (raise) the privilege mode or access system resources like the currently configured page tables cause exceptions.  This allows the operating system to decide whether to abort your user process or to emulate the operation.

Does the OS look into my code before loading it, and sees if I’m using there certain instructions that I’m not allowed to use and if so simply won’t execute it?

No, it doesn’t have to: the operating system puts the processor into “user” mode whenever it runs user code.  That activates the hardware’s exception mechanism should a privileged instruction be encountered; in user mode, the exception happens instead of the privileged instruction executing (raising the exception is all that “executing” such an instruction amounts to, rather than any privileged operation).

Btw, the system-call type of instruction used by user-mode code to request services from the operating system also activates the hardware exception mechanism.

How is it being managed?

The operating system always runs user code in user mode, aka user-level privilege, and generally runs its own code at higher privilege.  The mode informs the processor how to handle privileged instructions.  Most user-mode code won’t even bother to try to execute privileged instructions, as they are useless to it, but if it does, the hardware exception mechanism kicks in, effectively tells the operating system that this has happened, and lets it handle the situation.

In order to run user-mode code, the operating system might use a “return from interrupt” instruction to (re)start the user code (whether it technically has or hasn’t been previously started doesn’t matter).  A return from interrupt is one way to change the privilege level while also changing the instruction stream (aka branching); such an instruction is itself privileged, meaning the processor won’t allow it in user mode.

When the processor gets an interrupt, it notes some of the critical CPU state: the state that it necessarily has to modify to service the interrupt.  Servicing an interrupt transfers control of the instruction stream fed into the CPU by modifying the program counter, aka instruction pointer; on interrupt, the processor effectively makes a sudden branch to the interrupt service routine.  It also makes a sudden change of mode to higher privilege, which gives the ISR access to more instructions.  Because these two sudden changes are needed in order to activate the ISR, the hardware records the prior values for software to use later when resuming the interrupted user-mode code.  Thus, the hardware and operating system conspire to run the OS at high privilege and the user code at low privilege.

When a user-mode program uses a syscall type of instruction (requesting operating system services, like I/O), the same hardware exception mechanism transfers control to the ISR.  When the operating system wants to resume the user-mode process, depending on the hardware, it may have to manually advance the saved program counter past the syscall instruction before resuming; to the user-mode process, it is as if the operating system simulated/emulated the system call.

## operating systems – Multi-level paging where the inner level page tables are split into pages with entries occupying half the page size

A processor uses $$36$$-bit physical addresses and $$32$$-bit virtual addresses, with a page frame size of $$4$$ Kbytes. Each page table entry is of size $$4$$ bytes. A three-level page table is used for virtual-to-physical address translation, where the virtual address is used as follows:

• Bits $$30-31$$ are used to index into the first level page table.
• Bits $$21-29$$ are used to index into the 2nd level page table.
• Bits $$12-20$$ are used to index into the 3rd level page table.
• Bits $$0-11$$ are used as offset within the page.

The number of bits required for addressing the next-level page table (or page frame) in the page table entries of the first, second and third level page tables are, respectively:

(a) $$\text{20, 20, 20}$$

(b) $$\text{24, 24, 24}$$

(c) $$\text{24, 24, 20}$$

(d) $$\text{25, 25, 24}$$

I got the answer as (b), since each page table entry is, after all, required to point to a frame number in main memory as the base address.

But the site linked here says that the answer is (d), and the logic they use, working in chunks of $$2^{11}$$ B, feels like it ruins, or at least does not fit, the entire concept of paging. Why should the system suddenly start storing data in main memory in chunks other than the granularity defined by the page size or frame size? I do not get it.
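For reference, here is the arithmetic behind the two candidate answers; this only restates the numbers given in the question:

```python
# Address arithmetic from the question: 36-bit physical addresses, 4 KB
# frames, 4-byte PTEs, and a 2/9/9/12 split of the 32-bit virtual address.
phys_bits = 36
offset_bits = 12                           # 4 KB pages/frames
frame_bits = phys_bits - offset_bits       # 24: option (b)'s logic, all levels

level_index_bits = [2, 9, 9]               # bits 30-31, 21-29, 12-20
pte_size = 4
table_bytes = [2**b * pte_size for b in level_index_bits]
# [16, 2048, 2048]: the 2nd- and 3rd-level tables fill only half a 4 KB page.

# Option (d)'s logic: address those half-page (2^11 B) tables at 2 KB
# granularity, so a PTE pointing at a next-level table needs 36 - 11 = 25
# bits, while the last level still points at a full 4 KB frame (24 bits).
d_logic = [phys_bits - 11, phys_bits - 11, frame_bits]

print(frame_bits, table_bytes, d_logic)  # 24 [16, 2048, 2048] [25, 25, 24]
```

So the disagreement between (b) and (d) is exactly whether next-level tables must sit on frame boundaries (24-bit frame numbers) or may be packed at the 2 KB granularity of the half-page tables (25-bit pointers).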

As far as I read in an OS textbook (*Operating Systems* by Silberschatz), kernel mode is for privileged tasks, so is it true to claim that “user-level threads can read/write kernel threads”?

Generally speaking, is there any kind of protection between user-level and kernel-level threads?

## uninstall – Erasing all operating systems on a dual boot

I recently bought a new laptop and installed both Ubuntu and Windows. It has been super buggy; I’ll spare you the details, but I think a full reinstall of both operating systems might help. However, I have never uninstalled/erased a dual-boot system and I’m afraid I’m gonna screw something up. Does Windows diskpart’s `clean` command just clear the entire disk and remove all partitions? I’m kind of clueless, so any help is appreciated.

Thanks!

## Why won’t my SanDisk memory card work on my iMac desktop? My operating system is macOS Catalina. It works on my MacBook Pro [closed]

Why won’t my SanDisk memory card work on my iMac desktop? My operating system is macOS Catalina. It works on my MacBook Pro.

## operating systems – Can application DNS requests bypass a firewall on Windows and Linux?

Assuming I have a firewall set to block an application from accessing the Internet, would a DNS request from it go through?

I tried using C (`gethostbyname`) on Windows, and the answer is no: it seems the application itself sends the DNS request to the public DNS server. I wonder if this is the same under Linux.

In particular, could it be that a local DNS server (on the loopback interface) is running on the OS, such that `gethostbyname` would use the local DNS for its requests instead? A firewall may block packets to public addresses, but traffic on private subnets, especially loopback, may be allowed to applications depending on how one configures the firewall. This way a blocked app would place a request with an unblocked app (a local DNS relay server) that can communicate with the remote system.
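One way to check the Linux side of this: glibc’s `gethostbyname`/`getaddrinfo` pick their nameservers from `/etc/resolv.conf`, and a local stub resolver such as systemd-resolved shows up there as a loopback entry (typically `127.0.0.53`). A small sketch that detects that case:

```python
import ipaddress

def local_stub_resolvers(resolv_conf="/etc/resolv.conf"):
    """Return the 'nameserver' entries in resolv.conf that are loopback
    addresses, i.e. evidence that lookups go through a local relay."""
    loopback = []
    try:
        with open(resolv_conf) as f:
            for line in f:
                parts = line.split()
                if len(parts) >= 2 and parts[0] == "nameserver":
                    try:
                        addr = ipaddress.ip_address(parts[1])
                    except ValueError:
                        continue            # malformed entry, skip it
                    if addr.is_loopback:
                        loopback.append(parts[1])
    except FileNotFoundError:
        pass                                # no resolv.conf on this system
    return loopback

print(local_stub_resolvers())  # e.g. ['127.0.0.53'] under systemd-resolved
```

If a loopback resolver is present, a per-application firewall rule that only blocks public destinations would indeed let the blocked application’s DNS query reach the local relay, which then forwards it upstream.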