INA bearings that make machines more efficient

Several KOYO bearing http://www.noksbearings.com/KOYO-Bearing manufacturers are working with the agriculture sector to create bearings that make machines more efficient, including companies such as noksbearings.com. The bearings used in agriculture are to some extent determined by the machines in question, but in general they must be highly engineered, well sealed, and able to withstand extreme conditions. For example, machines such as tractors must operate with precision and speed while enduring harsh environments and weather, and must be protected as far as possible from dirt and corrosion, so the bearings must be up to the challenge.
In general, the bearings used in agricultural machinery from noksbearings.com mainly include ball bearings, such as deep groove ball bearings, angular contact bearings and thrust bearings. Roller bearings are also used in agricultural machinery and include types such as tapered roller bearings and needle bearings.
Bearings play a crucial role in the performance of agricultural machinery. What are bearings? As noksbearings.com explains: "A bearing is a device that allows relative movement between two or more parts, usually rotary or linear motion, and bearings can be classified broadly according to the motions they allow and their operating principle." As a result of their shape and small size, bearings help reduce friction between the moving parts of a machine, which is a determining factor in the machine's efficiency.

sql – The most efficient way to join two tables

Of the two queries below, which is the more efficient way to join the two tables?

Query 1

select dv.vehicleId, v.vehicalBrand
from DRIVER_VEHICLE dv
join VEHICLE v
on dv.vehicleId = v.vehicleId
where dv.driverId = 100;

Query 2

select v.vehicleId, v.vehicalBrand
from VEHICLE v
join DRIVER_VEHICLE dv
on v.vehicleId = dv.vehicleId
where dv.driverId = 100;

Here is the data model (image omitted).
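For what it's worth, the two forms can be compared empirically. A small sketch using Python's sqlite3 module (table and column names taken from the question; the sample data is invented) shows that both orderings of the inner join return the same rows; most planners also produce the same execution plan, since inner-join order is chosen by the optimizer rather than by the order the tables are written in:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE VEHICLE (vehicleId INTEGER PRIMARY KEY, vehicalBrand TEXT);
    CREATE TABLE DRIVER_VEHICLE (driverId INTEGER, vehicleId INTEGER);
    INSERT INTO VEHICLE VALUES (1, 'Toyota'), (2, 'Ford');
    INSERT INTO DRIVER_VEHICLE VALUES (100, 1), (100, 2), (200, 1);
""")

q1 = """SELECT dv.vehicleId, v.vehicalBrand
        FROM DRIVER_VEHICLE dv JOIN VEHICLE v ON dv.vehicleId = v.vehicleId
        WHERE dv.driverId = 100"""
q2 = """SELECT v.vehicleId, v.vehicalBrand
        FROM VEHICLE v JOIN DRIVER_VEHICLE dv ON v.vehicleId = dv.vehicleId
        WHERE dv.driverId = 100"""

# Both orderings of an inner join return the same result set
rows1 = sorted(con.execute(q1).fetchall())
rows2 = sorted(con.execute(q2).fetchall())
```

Prefixing either query with EXPLAIN QUERY PLAN lets you confirm whether the planner chose the same join order for both.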

c# – How can I make my code more efficient to store user details in an SQLite database

I am creating an application that allows users to register, with their information stored in an SQLite database. The code below works, but I want to know if I can make it more efficient.

I am writing this application for my computing coursework, and the focus is on the efficiency of our code. That's why I want to know if I can make my code more efficient. The database is called users, and I want to store each user's name, username and password in it.

if (File.Exists("users.sqlite"))
{
    SQLiteConnection con = new SQLiteConnection("Data Source=users.sqlite;Version=3;");

    try
    {
        con.Open();
        string query = "insert into user_info (name, username, password) values ('" + fullName + "', '" + username + "', '" + password + "')";

        SQLiteCommand cmd = new SQLiteCommand(query, con);
        cmd.ExecuteNonQuery();
        con.Close();
    }
    catch (Exception)
    {
        MessageBox.Show("Error saving user", "Please try again");
    }
}
else
{
    SQLiteConnection.CreateFile("users.sqlite");

    SQLiteConnection con = new SQLiteConnection("Data Source=users.sqlite;Version=3;");
    con.Open();

    string sql = "create table user_info (name varchar(20), username varchar(20), password varchar(20))";

    SQLiteCommand command = new SQLiteCommand(sql, con);
    command.ExecuteNonQuery();

    sql = "insert into user_info (name, username, password) values ('" + fullName + "', '" + username + "', '" + password + "')";

    command = new SQLiteCommand(sql, con);
    command.ExecuteNonQuery();

    con.Close();
}
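By way of comparison, the same flow in Python's sqlite3 module illustrates two improvements that carry over to System.Data.SQLite: use CREATE TABLE IF NOT EXISTS instead of branching on whether the file exists, and pass values as parameters instead of concatenating them into the SQL string (the user values here are placeholders):

```python
import sqlite3

con = sqlite3.connect("users.sqlite")  # creates the file if it does not exist

# One statement replaces the whole File.Exists branch
con.execute(
    "CREATE TABLE IF NOT EXISTS user_info "
    "(name TEXT, username TEXT, password TEXT)"
)

def save_user(con, name, username, password):
    # Parameterized insert: no quoting bugs, no SQL injection
    con.execute(
        "INSERT INTO user_info (name, username, password) VALUES (?, ?, ?)",
        (name, username, password),
    )
    con.commit()

save_user(con, "Jo Bloggs", "jo", "s3cret")  # placeholder values
```

The same two ideas in C# are CREATE TABLE IF NOT EXISTS in the SQL text and cmd.Parameters.AddWithValue(...) on the SQLiteCommand.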

python – How can I make this code more efficient?

See contact details

The name, cell phone number and email ID are the details I am interested in. When I use soup.find_all('a', {'class': 'tip'}), it only gives me "See contact details".

import requests
from bs4 import BeautifulSoup

headers = {'User-Agent': 'Mozilla/5.0'}

# form fields sent to the server
params = {
    'SchoolType': '',
    'Dist1': '',
    'Sch1': '',
    'SearchString': ''
}

r = requests.post('http://www.registration.pseb.ac.in/School/Schoollist',
                  headers=headers, data=params)

soup = BeautifulSoup(r.text, 'html.parser')

all_a = soup.find_all('a', {'class': 'tip'})

for item in all_a:
    print('text:', item.text)
    print('rel:', item['rel'])
    print('rel:', ''.join(item['rel']))
    print('-----')
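As a sanity check on the class filter and the rel access, the same calls can be run against a small static snippet with no network involved. The markup below is invented, not taken from the real site:

```python
from bs4 import BeautifulSoup

# Hypothetical stand-in for one row of the school list
html = '<a class="tip" rel="9876543210 someone@example.com">See contact details</a>'
soup = BeautifulSoup(html, 'html.parser')

link = soup.find('a', {'class': 'tip'})
text = link.text          # the visible link text only
rel_tokens = link['rel']  # rel is a multi-valued attribute, so this is a list
```

If the real page only renders "See contact details" as text, the contact data is likely either in attributes like this or loaded by a second request that the scraper must reproduce.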

html – Is there a more efficient use of Golang templates?

I wonder if my use of templates in https://github.com/kaihendry/ltabus/blob/master/main.go could be better. For example, in other code I notice:

var views = template.Must(template.ParseGlob("static/*.tmpl"))

Defined outside of main, the templates are parsed once when the process starts. And then in the handler:

views.ExecuteTemplate(w, "passwordprompt.tmpl", map[string]interface{}{})

In my own code, I wonder if it is worth applying the same practice. Is that possible with the functions and the code below?

func handleIndex(w http.ResponseWriter, r *http.Request) {
    funcs := template.FuncMap{
        "nameBusStopID": func(s string) string { return bs.nameBusStopID(s) },
        "getenv":        os.Getenv,
    }

    t, err := template.New("").Funcs(funcs).ParseFiles("templates/index.html")
    if err != nil {
        log.WithError(err).Error("failed to parse template")
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }

    arriving, err := busArrivals(r.URL.Query().Get("id"))
    if err != nil {
        log.WithError(err).Error("failed to retrieve bus timings")
    }

    t.ExecuteTemplate(w, "index.html", arriving)
}

Feel free to critique any of the code.

I will do very accurate and efficient data entry to meet your expectations for $15

  • I will do very accurate and efficient data entry to meet your expectations

I provide services that have satisfied my previous clients and will satisfy potential clients.
I also offer data entry and proofreading services.
High accuracy in data entry.
High efficiency in proofreading.
I have two years of experience in these related jobs.



This service has no reviews yet – order and be the first to leave one!

$15 · In stock

Data Structures – Efficient Storage of Functions?

Let $f$ be a binary function on the set $S$, which contains the numbers from $1$ to $n$. We are also given the number $|\text{Img}(f)| = k$. I want to store the function $f$ so that the space required is minimal.

I tried storing the function as a graph, but the number of edges is quadratic in $n$.

Question: How can I store the function $f$ space-efficiently?
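The question is terse, but if $f$ really takes only $k$ distinct values, one baseline is to store an index into $\text{Img}(f)$ per entry, at $\lceil\log_2 k\rceil$ bits each, instead of the values themselves. A sketch; the concrete $f$ below is invented purely for illustration:

```python
from math import ceil, log2

# Hypothetical small instance: f maps pairs from S = {1..n} to one of k values
n, k = 4, 3
image = ['a', 'b', 'c']            # Img(f): the k distinct output values
f = {(i, j): image[(i + j) % k]    # an arbitrary binary function, for illustration
     for i in range(1, n + 1) for j in range(1, n + 1)}

# Store an index into Img(f) per entry instead of the value itself
index_of = {v: i for i, v in enumerate(image)}
codes = [index_of[f[(i, j)]]
         for i in range(1, n + 1) for j in range(1, n + 1)]

bits_per_entry = ceil(log2(k))     # 2 bits suffice when k = 3
total_bits = n * n * bits_per_entry
```

This stores all $n^2$ entries in $n^2\lceil\log_2 k\rceil$ bits plus the image list itself; whether anything smaller is possible depends on structure in $f$ that the question does not describe.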

entity framework – The LINQ statement will result in the most efficient SQL query for counting grandchildren

What is the most correct / best LINQ statement to generate the most efficient SQL query for counting the grandchild rows of a single parent? Also, is it feasible in lambda syntax, or does it require query syntax? Details:

1) Parent table -> one-to-many relationship with the Child table

2) Child table -> one-to-many relationship with the Grandchild table

3) Count grandchild rows using a parent ID that may or may not be valid

4) From a DbContext

I guess the syntax of the query would look like this (?):

var parentID = 1;
var count = (from gc in context.Grandchild
             join c in context.Child on gc.ChildID equals c.ChildID
             where c.ParentID == parentID
             select gc).Count();

If this is correct, what is the lambda equivalent? Thank you
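The intended result can at least be pinned down outside EF. Here is a plain-Python stand-in for the join-then-count over the Child and Grandchild tables; the table contents and column names are invented for illustration:

```python
# Hypothetical in-memory stand-ins for the Child and Grandchild tables
children = [
    {"ChildID": 10, "ParentID": 1},
    {"ChildID": 11, "ParentID": 1},
    {"ChildID": 12, "ParentID": 2},
]
grandchildren = [
    {"GrandchildID": 100, "ChildID": 10},
    {"GrandchildID": 101, "ChildID": 10},
    {"GrandchildID": 102, "ChildID": 11},
    {"GrandchildID": 103, "ChildID": 12},
]

parent_id = 1  # an ID that may or may not exist; a missing ID just yields 0

# Join Child on ParentID, then count matching Grandchild rows
child_ids = {c["ChildID"] for c in children if c["ParentID"] == parent_id}
count = sum(1 for gc in grandchildren if gc["ChildID"] in child_ids)
```

In EF the lambda form would typically be a single Count with a predicate reaching through a navigation property, e.g. context.Grandchild.Count(gc => gc.Child.ParentID == parentID) — assuming a Child navigation property exists, which the question does not state.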

Runtime Analysis – Is there a more efficient algorithm for this problem?

I have an array of sorted numbers:

arr = [-0.1, 0.0, 0.5, 0.8, 1.2]

I want the difference (dist below) between consecutive numbers in this array to be greater than or equal to a given threshold. For example, if the threshold is 0.25:

dist = [0.1, 0.5, 0.3, 0.4]  # must be >= 0.25 for all elements

arr[0] and arr[1] are too close to each other, so one of them must be modified. In this case, the desired array would be:

valid_array = [-0.25, 0.0, 0.5, 0.8, 1.2]  # all consecutive distances >= threshold

To obtain valid_array, I want to change as few elements of arr as possible. So I subtract 0.15 from arr[0] rather than, for example, subtracting 0.1 from arr[0] and adding 0.05 to arr[1]:

[-0.2, 0.05, 0.5, 0.8, 1.2]

The above array is also valid, but it modifies 2 elements rather than one.
I already have a brute-force solution that obtains valid_array and works fine, but it is quite slow for large arrays. My questions are:

What is the temporal complexity of this brute force solution?

Is there a more efficient algorithm?
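Before comparing algorithms, it helps to pin down the validity check itself. A direct translation of the constraint, using the arrays from the question:

```python
def is_valid(arr, threshold):
    # Every consecutive difference must be >= threshold
    return all(b - a >= threshold for a, b in zip(arr, arr[1:]))

arr = [-0.1, 0.0, 0.5, 0.8, 1.2]
valid_array = [-0.25, 0.0, 0.5, 0.8, 1.2]
```

This check is O(n) per candidate array; the overall cost of the brute force depends on how many candidate modifications it tries, which the question does not show.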

Find a data structure and / or an efficient algorithm

I am tasked with creating a timeline-based animated chart, and I cannot find an appropriate data structure and/or algorithm that is reasonably efficient. My working version suffers from performance issues, and my data set is large.

Let's say I have a group of people in an array:

[Bob, Jane, Kevin, Alice, Bob, Alice, Jane, Bob, Fred]

My end result will be a series of columns corresponding to the number of people appearing X times in the array.

Bob - 3 times
Jane, Alice - 2 times
Kevin, Fred - 1 time

This would translate in my chart to:

Column 1 - 2 people who appear 1 time
Column 2 - 2 people who appear 2 times
Column 3 - 1 person who appears 3 times

Normally, building this data is a single, very fast pass. However, I have been instructed to build it as a timeline of the people in the array, in array order.

Bob is the first person in the array. We take him and apply him to our chart; he is currently the only person counted. So after processing him, our columns are:

Column 1 - 1 person who appears 1 time
Column 2 - 0 people who appear 2 times
Column 3 - 0 people who appear 3 times

Then we add Jane, Kevin and Alice, so we have:

Column 1 - 4 people who appear 1 time
Column 2 - 0 people who appear 2 times
Column 3 - 0 people who appear 3 times

Now we add Bob again and our chart changes: Bob no longer counts in the first column.

Column 1 - 3 people who appear 1 time (Jane, Alice and Kevin)
Column 2 - 1 person who appears 2 times (Bob)
Column 3 - 0 people who appear 3 times

This continues until the end of the array. Performance currently degrades once there are more than a few hundred data points.

I also need to know the final state of the chart so I can calculate the sizing.

Can anybody suggest a structure/algorithm to store/manage the data so that performance remains linear?
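For what it's worth, the per-step update described above can be made O(1) by tracking two maps: each person's running count, and, for each count, how many people currently have it. A sketch under that reading of the problem:

```python
from collections import defaultdict

def timeline_histogram(people):
    # seen[name]  -> how many times this name has appeared so far
    # columns[c]  -> how many people currently appear exactly c times
    seen = defaultdict(int)
    columns = defaultdict(int)
    snapshots = []
    for name in people:
        old = seen[name]
        if old:
            columns[old] -= 1      # the person leaves their old column...
        seen[name] = old + 1
        columns[old + 1] += 1      # ...and joins the next one: O(1) per step
        snapshots.append(dict(columns))
    return snapshots

people = ["Bob", "Jane", "Kevin", "Alice", "Bob", "Alice", "Jane", "Bob", "Fred"]
steps = timeline_histogram(people)
```

Each snapshot matches the walkthrough above (after the second Bob the columns hold 3, 1 and 0 people), and the last snapshot is the final state needed for sizing. Columns whose count drops to 0 remain in the map with value 0, which is harmless for charting.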

Apologies if this is the wrong exchange to ask on; please recommend a more appropriate one if so.

Jason