Find the formulas of the midpoint algorithm for drawing a parabola (Computer Graphics)
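A minimal sketch of the idea for a right-opening parabola y² = 4ax traced from the vertex (this is an illustrative version of my own that evaluates the decision variable directly at each midpoint instead of deriving incremental update formulas): in region 1, where the slope exceeds 1 (y < 2a), we take unit steps in y; in region 2 we take unit steps in x.

```python
def midpoint_parabola(a, x_max):
    """Integer points approximating the upper half of y^2 = 4*a*x
    for 0 <= x <= x_max, using midpoint decisions in two regions."""
    points = []
    x, y = 0, 0
    # Region 1: slope > 1 (y < 2a), unit steps in y.
    while y < 2 * a and x <= x_max:
        points.append((x, y))
        # Midpoint between candidates (x, y+1) and (x+1, y+1):
        d = (y + 1) ** 2 - 4 * a * (x + 0.5)
        if d >= 0:        # curve passes right of the midpoint
            x += 1
        y += 1
    # Region 2: slope <= 1, unit steps in x.
    while x <= x_max:
        points.append((x, y))
        # Midpoint between candidates (x+1, y) and (x+1, y+1):
        d = (y + 0.5) ** 2 - 4 * a * (x + 1)
        if d <= 0:        # curve passes above the midpoint
            y += 1
        x += 1
    return points
```

The incremental formulas usually quoted in textbooks follow from expanding these two decision variables between consecutive steps (initial value 1 − 2a in region 1).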

Thanks for contributing an answer to Computer Science Stack Exchange!

  • Please be sure to answer the question. Provide details and share your research!

But avoid

  • Asking for help, clarification, or responding to other answers.
  • Making statements based on opinion; back them up with references or personal experience.

Use MathJax to format equations. MathJax reference.

To learn more, see our tips on writing great answers.

algorithm analysis – Space complexity of using a pairwise independent hash family

I’m trying to analyze the space complexity of using the coloring function $f$ which appears in “Colorful Triangle Counting and a MapReduce Implementation”, Pagh and Tsourakakis, 2011.

As far as I understand, $f: [n] \rightarrow [N]$ is a hash function that should be picked uniformly at random out of a pairwise independent hash family $H$. I have a few general questions:

  1. Is the space complexity required by $f$ affected by the fact that $H$ is $k$-wise independent? Why? (And if it is, how?)
  2. What do we know about $|H|$? What if $H$ is $k$-wise independent?
  3. Is there a more space-efficient way to store $f$ than an $N \times m$ matrix that maps each vertex to its color, using $O(Nm)$ storage words?
  4. Is the total space complexity required in order to use $f$ as described in the paper $|H| \cdot O(\text{space complexity of } f)$?
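For what it’s worth, the textbook pairwise independent family $h_{a,b}(x) = ((ax + b) \bmod p) \bmod N$ needs only the pair $(a, b)$ and the prime $p$, i.e. $O(\log n + \log N)$ bits per function, rather than any explicit vertex-to-color table. A sketch (my own illustration; I am assuming this family satisfies the paper’s requirements):

```python
import random

def make_pairwise_hash(n, N, p=None):
    """Sample f: [n] -> [N] from the pairwise independent family
    h(x) = ((a*x + b) mod p) mod N.

    Only a, b and p are stored, so one function costs
    O(log n + log N) bits; no n-by-N table is needed."""
    if p is None:
        p = 2_147_483_647  # a Mersenne prime; assumption: n <= p
    a = random.randrange(1, p)
    b = random.randrange(0, p)
    return lambda x: ((a * x + b) % p) % N
```

Note that $|H| = p(p-1)$ for this family, so indexing one member costs about $2 \log p$ bits, and the final reduction mod $N$ makes the colors only approximately uniform unless $p \gg N$.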

Best regards

logic – Algorithm for cutting rods with minimum waste

Given a set of cuts and their lengths, we need to find the minimum number of rods (of constant length) and the cuts required that lead to minimum wastage.

Here we bundle the rods and cut them all at once. So we can have a bundle with any number of rods.

For example:

Input data – Consider a rod of length 120 inches

( Quantity of Cuts Required, Length (in inches) ) = (5,16″) , (5,30″) , (24,36″) , (4,18″) , (4,28″) , (6,20″)

So here we require cuts such that we get 5 rods of 16 inches, 5 rods of 30 inches, and so on.


Imagine each row (in the image) is a rod of 120 inches and each table is a bundle, with the rows being the rods in that bundle. So the first table is a bundle of 5 rods with cuts (16,30,36,36) and the second table is a bundle of 4 rods with cuts (18,28,36,36), and so on. We can see that we have satisfied the input data: we get (5,16″), five rods of sixteen inches, and so on.


Given input (just like above) with the number of cuts and their lengths, how do we find the bundles of rods and their cuts with the minimum amount of wastage?
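Without claiming optimality (this is the NP-hard cutting-stock / bin-packing problem), a first-fit-decreasing heuristic gives a quick baseline. The sketch below is my own illustration; it ignores the bundling constraint and just packs pieces into as few rods as possible:

```python
def first_fit_decreasing(rod_length, cuts):
    """Greedy first-fit-decreasing heuristic (not guaranteed optimal).

    cuts: list of (quantity, length) pairs, e.g. [(5, 16), (5, 30)].
    Returns a list of rods, each a list of the cut lengths on it."""
    pieces = sorted(
        (length for qty, length in cuts for _ in range(qty)),
        reverse=True,
    )
    rods = []  # each rod: [remaining capacity, [cuts so far]]
    for piece in pieces:
        for rod in rods:
            if rod[0] >= piece:     # first rod with room
                rod[0] -= piece
                rod[1].append(piece)
                break
        else:                        # no rod fits: open a new one
            rods.append([rod_length - piece, [piece]])
    return [rod_cuts for _, rod_cuts in rods]
```

Rods that end up with an identical cut pattern can then be grouped into one bundle and cut together. An exact answer would need an integer-programming or column-generation formulation (the classical cutting-stock approach).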

algorithm – Worst-Case Scenario – Code Review Stack Exchange

Estimate the worst-case complexity for the auto-completion of a query if you use the following data structures:

  • your Trie;
  • a linked list in which each node contains a training query with its number.

In this case, also describe how an autocompletion should proceed on this data structure. Also estimate the worst-case complexity to create a Trie out of a number of training queries. Specify the complexity depending on the following parameters:

B: the number of letters of the alphabet.
Q: the maximum length of a query.
N: the number of training queries.

In this task, generally assume complete search trees within the trie nodes. Please provide detailed and understandable reasons, e.g. based on a sketch.

This is the code I wrote with my partner, and we are struggling with the worst-case analysis.

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Scanner;

    public class Trie {

        // Node of the Trie
        static class TrieNode {
            TrieNode highestCount;
            SearchTree searchTree;

            public TrieNode() {
                searchTree = new SearchTree();
                highestCount = null;
            }

            // inserts the letter c into the search tree of the TrieNode
            public void add(char c, int count) {
                searchTree.inc(c, count);
                if (searchTree.maxNode != null) {
                    highestCount = searchTree.maxNode.trieNode;
                }
            }

            // returns the TrieNode belonging to c
            public TrieNode get(char c) {
                return searchTree.getNode(c);
            }
        }

        // internal SearchTree class
        public static class SearchTree {

            Node root;
            // child with the highest counter
            Node maxNode;
            long highestCount;

            class Node {
                // 2 children
                Node left;
                Node right;
                // key
                char key;
                // counter
                long count;
                TrieNode trieNode;

                public Node(char key, long count) {
                    this.key = key;
                    this.count = count;
                    trieNode = new TrieNode();
                }
            }

            public SearchTree() {
                root = null;
                maxNode = null;
                highestCount = -1;
            }

            // Increments the key's counter by count;
            // if the key does not yet exist, a new node with counter count is created.
            public void inc(char key, int count) {
                root = insert(root, key, count);
            }

            // returns the counter of the key 'key'
            public long getCount(char key) {
                Node temp = search(root, key);
                if (temp != null) {
                    return temp.count;
                }
                return -1;
            }

            // delivers the (TrieNode) child for the key 'key'
            public TrieNode getNode(char key) {
                Node temp = search(root, key);
                if (temp != null) {
                    return temp.trieNode;
                }
                return null;
            }

            // returns the node of the key 'key'
            private Node search(Node node, char key) {
                if (node == null) {
                    return null;
                }
                if (node.key == key) {
                    return node;
                }
                if (key < node.key) {
                    return search(node.left, key);
                }
                return search(node.right, key);
            }

            private Node insert(Node node, char key, int count) {
                if (node == null) {
                    node = new Node(key, count);
                    if (count > highestCount) {
                        maxNode = node;
                        highestCount = count;
                    }
                    return node;
                } else if (node.key == key) {
                    node.count += count;
                    if (node.count > highestCount) {
                        maxNode = node;
                        highestCount = node.count;
                    }
                } else if (key < node.key) {
                    node.left = insert(node.left, key, count);
                } else {
                    node.right = insert(node.right, key, count);
                }
                return node;
            }
        }

        // Trie class
        TrieNode root;

        public Trie() {
            root = new TrieNode();
        }

        // Adds a training query to the trie
        public void add(String s, int count) {
            TrieNode current = root;
            for (int i = 0; i < s.length(); i++) {
                current.add(s.charAt(i), count);
                current = current.get(s.charAt(i));
            }
        }

        // Predicts the complete query for the prefix of a query.
        // If the prefix does not exist in the tree, null is returned.
        // You can assume that prefix does not contain the special letter '*'.
        public String predict(String prefix) {
            String r = "";
            TrieNode current = root;
            for (int i = 0; i < prefix.length(); i++) {
                if (current == null) {
                    return null;
                }
                current = current.get(prefix.charAt(i));
            }
            if (current == null) {
                return null;
            }
            while (current.searchTree.maxNode != null) {
                r += current.searchTree.maxNode.key;
                current = current.highestCount;
            }
            return prefix + r;
        }

        // Processes the file keyphrases.txt
        public static void eval() {
            Trie test = new Trie();
            try {
                FileInputStream fis = new FileInputStream("C:\\Users\\....\\...\\Desktop\\keyphrases.txt");
                Scanner sc = new Scanner(fis);
                String[] parts;
                while (sc.hasNext()) {
                    parts = sc.nextLine().split(";");
                    test.add(parts[0], Integer.parseInt(parts[1]));
                }
                sc.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }

        public static void main(String[] args) {
        }
    }
I think that the worst case would be the longest word in the search tree.
That is why we have a complexity of O(Q) (Q: the maximum length of a query).

Also, all these little if statements have a complexity of O(1):
if (searchTree.maxNode != null) = O(1)
if (temp != null) = O(1)
if (temp != null) = O(1)
if (node == null) = O(1)
if (node.key == key) = O(1)
if (key < node.key) = O(1)
else if (node.key == key) = O(1)
if (count > highestCount) = O(1)
if (current != null) = O(1)
for (int i = 0; i < s.length(); i++) = O(n)
for (int i = 0; i < prefix.length(); i++) = O(n)
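Putting these pieces together (a sketch, under the task's own assumption of complete, i.e. balanced, search trees of at most B children inside each trie node; with degenerate BSTs every log B below becomes B):

```latex
% One trie level: descend a complete BST over at most B children
T_{\text{node}} = O(\log B)
% Follow a prefix of length at most Q, then complete it greedily
% (the maxNode pointers make each completion step O(1)):
T_{\text{predict}} = O(Q \log B)
% Building the trie: N training queries, each inserting at most Q letters
T_{\text{build}} = O(N \, Q \log B)
```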

algorithm – How to make Print() method memory & CPU efficient?

You are receiving n objects in a random order, and you need to print them to stdout correctly ordered by sequence number.

The sequence numbers start from 0 (zero) and you have to wait until you get a complete, unbroken sequence batch of j objects before you output them.

You have to process all objects without loss. The program should exit once it completes outputting the first 50000 objects. Batch size j = 100.

The object is defined as such:

    "id" : "object_id", // object ID (string)
    "seq" : 0, // object sequence number (int64, 0-49999)
    "data" : "" // ()bytes
    Step                Input Value                Output State j = 1                  Output state j = 3
    0                       6
    1                       0                           0
    2                       4                           0
    3                       2                           0
    4                       1                           0,1,2                               0,1,2
    5                       3                           0,1,2,3,4                           0,1,2
    6                       9                           0,1,2,3,4                           0,1,2
    7                       5                           0,1,2,3,4,5,6                       0,1,2,3,4,5

func (receiver *Receiver) Print(seqNumber uint64, batchSize uint64, outputFile io.Writer) (error, bool) {

    fmt.Fprintf(outputFile, "[ ")
    if seqNumber >= receiver.outputSequence.length {
        // grow receiver.outputSequence (body elided in the original excerpt)
    }
    receiver.outputSequence.sequence[seqNumber] = true

    printedCount := uint64(0) // check for MAX_OBJECTS_TO_PRINT
    var nthBatchStartingIndex uint64
    MaxObjectsToPrint := config.GetMaxPrintSize()
Loop:
    for nthBatchStartingIndex < receiver.outputSequence.length { // check unbroken sequence
        var assessIndex = nthBatchStartingIndex
        for j := assessIndex; j < nthBatchStartingIndex+batchSize; j++ { // assess nth batch
            if j >= receiver.outputSequence.length { // index out of range - edge case
                break Loop
            }
            if !receiver.outputSequence.sequence[j] {
                break Loop
            }
        }

        count, printThresholdReached := receiver.printAssessedBatchIndexes(assessIndex, printedCount, batchSize, MaxObjectsToPrint, outputFile)
        if printThresholdReached { // print sequence threshold reached MAX_OBJECTS_TO_PRINT
            fmt.Fprintf(outputFile, " ]  ")
            fmt.Fprintf(outputFile, " ----for input value %d\n", seqNumber)
            return nil, false
        }
        printedCount += count
        if printedCount >= MaxObjectsToPrint { // print sequence threshold reached MAX_OBJECTS_TO_PRINT
            fmt.Fprintf(outputFile, " ]  ")
            fmt.Fprintf(outputFile, " ----for input value %d\n", seqNumber)
            receiver.Log.Printf("****MaxObjectsToPrint threshold(%d) reached \n", MaxObjectsToPrint)
            return nil, false
        }
        nthBatchStartingIndex = assessIndex + batchSize // next batch
    }
    fmt.Fprintf(outputFile, " ]  ")
    fmt.Fprintf(outputFile, " ----for input value %d\n", seqNumber)
    return nil, true
}

Here is the complete solution, written for this problem.

Print() is the method that does heavy lifting in this code, with varying size of memory & heavy CPU usage:

  1. How can receiver.outputSequence be made memory-efficient by using a data structure other than an array? Because newBufferSize := 2 * seqNumber keeps doubling memory…

  2. How can the Print method be made CPU-efficient?
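One way to address both questions at once (a hedged sketch in Python rather than Go, with hypothetical names; BatchPrinter is not the original Receiver) is to buffer out-of-order sequence numbers in a min-heap: memory is then proportional to the number of pending items instead of the largest sequence number seen, and each receive does O(log k) work with no rescanning of the whole buffer.

```python
import heapq

class BatchPrinter:
    """Buffer out-of-order sequence numbers in a min-heap and emit
    them only in complete batches of size j (illustrative sketch)."""
    def __init__(self, batch_size):
        self.batch_size = batch_size
        self.heap = []           # pending, out-of-order seq numbers
        self.next_expected = 0   # first seq not yet contiguous
        self.ready = []          # contiguous run awaiting a full batch
        self.out = []            # everything emitted so far

    def receive(self, seq):
        heapq.heappush(self.heap, seq)
        # Drain the contiguous prefix off the heap.
        while self.heap and self.heap[0] == self.next_expected:
            self.ready.append(heapq.heappop(self.heap))
            self.next_expected += 1
        # Emit full batches only.
        while len(self.ready) >= self.batch_size:
            batch = self.ready[:self.batch_size]
            self.ready = self.ready[self.batch_size:]
            self.out.extend(batch)
```

Replaying the table's example (inputs 6, 0, 4, 2, 1, 3, 9, 5 with j = 3) reproduces the j = 3 output column. The same idea carries over to Go with container/heap.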

algorithm analysis – Analyzing space complexity of passing data to function by reference

I have some difficulties with understanding the space complexity of the following algorithm.
I’ve solved the problem Subsets on LeetCode. I understand why the solutions’ space complexity would be O(N · 2^N), where N is the length of the initial vector: in all those cases the subsets (vectors) are passed by value, so we keep every subset on the recursion stack. But I passed everything by reference. This is my code:

class Solution {
    vector<vector<int>> result;

    void rec(vector<int>& nums, int &position, vector<int> &currentSubset) {
        if (position == nums.size()) {
            result.push_back(currentSubset);
            return;
        }
        // exclude nums[position]
        position++;
        rec(nums, position, currentSubset);
        position--;
        // include nums[position]
        currentSubset.push_back(nums[position]);
        position++;
        rec(nums, position, currentSubset);
        position--;
        currentSubset.pop_back();
    }

public:
    vector<vector<int>> subsets(vector<int>& nums) {
        vector<int> currentSubset;
        int position = 0;
        rec(nums, position, currentSubset);
        return result;
    }
};

Would the space complexity be O(N)? As far as I know, passing by reference doesn’t allocate new memory, so every possible subset would be built in the same vector, which was created before the recursion calls.

I would also appreciate it if you told me how to estimate the space complexity when working with references in general. Those are the only cases where I hesitate about the correctness of my reasoning.

Thank you.
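For comparison, here is the same backtracking expressed in Python with a single buffer shared across all calls (mirroring pass-by-reference). The auxiliary space is O(N): one shared buffer of at most N elements plus recursion depth N. The result, however, still needs O(N · 2^N) space, because each complete subset must be copied into the output at the base case:

```python
def subsets(nums):
    """All subsets via backtracking with one shared buffer.
    Auxiliary space: O(N). Output space: O(N * 2^N)."""
    result = []
    current = []  # shared across all recursive calls ("by reference")

    def rec(position):
        if position == len(nums):
            result.append(current[:])   # the only copy happens here
            return
        rec(position + 1)               # exclude nums[position]
        current.append(nums[position])  # include nums[position]
        rec(position + 1)
        current.pop()                   # backtrack

    rec(0)
    return result
```

So the distinction worth making is between auxiliary space (O(N) here) and total space including the output (O(N · 2^N) either way).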

Can this SelectionSort Algorithm using Python be improved?

Seems like you already incorporated a few of the suggestions from back when this was on StackOverflow, e.g. changing the inner loop to max. However, you do so in a very inefficient way:

  • you create a slice of the list, O(n) time and space
  • you get the max of that slice, O(n) time
  • you get the index of that element, O(n) time

Instead, you can get the max of the range of indices, using a key function for comparing the actual values at those indices:

largest = max(range(0, lastUnsortedInteger+1), key=arr.__getitem__)

This way, this step has only O(n) time (for Python 3).

Some other points:

  • the an parameter (the length of the array/list) is not necessary; you can use len
  • in my opinion, it is a bit simpler to loop from the first to the last index, and to use min instead of max accordingly
  • since the swap is a single line now and only used once, we could inline it directly into the sorting function
  • the function modifies the list in-place, so no return is needed; returning the list might lead users to expect that the function does not modify it but creates a sorted copy instead
  • technically, arr is not an array but a list, and you might prefer snake_case to camelCase (it’s “Python” after all)

My version:

def selection_sort(lst):
    for i in range(len(lst) - 1):
        k = min(range(i, len(lst)), key=lst.__getitem__)
        lst[i], lst[k] = lst[k], lst[i]

Needless to say, for all practical purposes you should just use sorted or sort.

Latest Google Algorithm Update 2020

Can anybody share the latest information related to the Google algorithm for SEO?

Write an algorithm with module and draw a flowchart with module


mesh – Which Terrain LOD algorithm should I use for super large terrain?

My game needs a terrain, the requirements are:

  1. Freely zoom in and out, like Google Earth. Max resolution when zoomed in: ~100 meters; max range when zoomed out: ~2000 km (a whole-country scale).
  2. Freely fly in any direction at any height; freely rotate the camera. Framerate should not be a bottleneck for any of these basic camera movements.
  3. Support large heightmap data. I got my real-world elevation data from NASA at 7.5 arc resolution, around 30k × 15k for a whole country.
  4. I also need to account for the spherical curvature of the earth's surface rather than using a planar map. But this should be easy if I just map each vertex into spherical coordinates. I am not building a whole planetary LOD, so it's not even a problem.

I noticed there are many different LOD algorithms out there. For example:

  • ROAM: a very classic algorithm, mostly done on the CPU;
  • Geomipmapping: store the whole heightmap at full resolution as a vertex buffer, with different LODs encoded in the index buffers; then determine the LOD and draw range from the index buffers at runtime.
  • CDLOD:
  • Geometry clipmapping: I am excited about this one, since it claims to load the whole 200k * 100k US heightmap into video card memory, compressed to ~300M; it seems to meet all my requirements (see the source paper). And this is a paper from 2004: think about the video cards of that time!

That’s pretty much all I know about terrain LOD. I am not familiar with recent study and research in this area.
Is geometry clipmapping my best option? What other algorithms are also worth considering?
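Whichever scheme is chosen, the core decision is the same: pick a coarser mesh level when its projected screen-space error is below some pixel tolerance. A geomipmapping-style sketch of that test (the function name, the doubling error model, and all parameters are my own illustrative assumptions, not taken from any of the papers above):

```python
import math

def geomip_level(camera_pos, patch_center, base_error, screen_height_px,
                 fov_y_rad, max_level, pixel_tolerance=2.0):
    """Pick a LOD level for a terrain patch (illustrative sketch).

    Assumes each coarser level roughly doubles the geometric error
    (base_error at level 0). Returns the coarsest level whose
    projected error stays under pixel_tolerance."""
    dist = math.dist(camera_pos, patch_center)
    if dist <= 0:
        return 0
    # pixels per world unit of error at this viewing distance
    k = screen_height_px / (2.0 * dist * math.tan(fov_y_rad / 2.0))
    level = 0
    err = base_error
    while level < max_level and (err * 2.0) * k < pixel_tolerance:
        err *= 2.0
        level += 1
    return level
```

Nearby patches get fine levels and distant ones coarse levels, which is the behaviour all four algorithms above implement in their own way (ROAM per triangle, geomipmapping and CDLOD per patch, clipmaps per concentric ring).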