Sorting - standard algorithms?

Topic: Best case scenario for insertion sort time
July 19, 2019 / By Chanelle
Question: I have a big list to sort in VB. Which method is the fastest, or the best overall? Bubble sort, selection sort, insertion sort, quicksort, merge sort, heap sort, comb sort, shell sort, shaker sort, or radix sort? I had a look and radix seems to be the fastest; I just want to know which type is best. Any help? Cheers. The big list I am sorting is strings, to be sorted A-to-Z style. Haha, I wasn't sure, thanks. I knew radix was fast; it's the only one I have used in the past. I will use that for my program.
Best Answer

Anora | 2 days ago
There is no single fastest sort method; for a specific sample of data, one algorithm will always beat another. Generally, though:

Bubble sort is only good for a small amount of data that is already nearly sorted. Its advantage is that it is simple to describe and understand, which is why it is often taught in Algorithms 101.

Quicksort is considered one of the fastest sorting algorithms, with O(n log n) average performance; however, it has an O(n^2) worst case, and certain initial arrangements of the data can make it run very slowly.

Heap sort is also considered one of the fastest sorting algorithms, and it has no O(n^2) worst case. However, it has a larger constant overhead, so quicksort is often faster in practice.

Radix/bucket sort is a lexicographic sort, which avoids the O(n log n) lower bound of comparison-based sorts (such as quicksort). However, because it is not a comparison sort, it cannot sort certain sorts of data (pun not intended), and it also has relatively high overhead.

In general, though, you should just use the built-in sorting library of whatever programming language you use. Most of the time, writing your own sorting function is not worth the slight speed gain, and a half-baked sorting function will often be slower than the generic built-in one. Using the built-in sort is also less bug-prone.

The fastest practical sorts mix several algorithms. Quicksort and radix sort are fast for sorting large amounts of data, but they are recursive, and as the partitions/buckets get smaller, a lower-overhead method is often used instead.

Also, sometimes your problem really isn't the algorithm. Sometimes it is easier, faster, and better to use the correct data structure for the problem. With the correct data structure (such as a sorted binary tree), sorting often becomes unnecessary.
👍 252 | 👎 2
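A minimal sketch of the "just use the built-in sort" advice. The asker's list is in VB, but the idea is the same in any language; here it is in Python, whose built-in `sorted` is a tuned hybrid (Timsort). The names in the list are made up for illustration.

```python
# Sorting a list of strings "A to Z style" with the built-in sort,
# rather than hand-rolling a quicksort or radix sort.
names = ["delta", "Alpha", "charlie", "bravo"]

# Case-sensitive A-to-Z sort (uppercase letters sort before lowercase):
print(sorted(names))                  # ['Alpha', 'bravo', 'charlie', 'delta']

# Case-insensitive A-to-Z sort, usually what "A to Z style" means:
print(sorted(names, key=str.lower))   # ['Alpha', 'bravo', 'charlie', 'delta']
```

For the asker's case of a large list of strings, the library sort is almost always the right first choice; only profile and switch to a specialized sort (such as radix) if it proves too slow.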

We found more questions related to the topic: Best case scenario for insertion sort time

Anora Originally Answered: Algorithms: weighing task?
Dear Jimbo,

(a) Using weights w[1] = 1g, w[2] = 2g, and w[3] = 4g, you can combine them to balance items on the other side of the scale as follows: a 1g item versus w[1]; 2g versus w[2]; 3g versus w[2] and w[1]; 4g versus w[3]; 5g versus w[3] and w[1]; 6g versus w[3] and w[2]; and 7g versus w[3], w[2], and w[1].

(b) Notice that the numbers in part (a) are using base-2 (i.e., binary) integer representation. To see this, form a binary number B having three bits ("binary digits"), B = [b[3], b[2], b[1]], where each b[i] is 1 or 0 according to whether the corresponding w[i] is on the scale or off it. Then the decimal weight on the balance is D = 2^2 b[3] + 2^1 b[2] + 2^0 b[1], or more compactly, D = ∑ 2^(i - 1) b[i] for i in {1, 2, 3}, which is equivalent to ∑ w[i] b[i]. Observe how the table below matches up with part (a):

D, B
1, 001
2, 010
3, 011
4, 100
5, 101
6, 110
7, 111

The binary-based weightings can be extended to weigh arbitrarily heavy items by including additional weights that are successive powers of 2 (i.e., 8g, 16g, 32g, etc.). With eight such weights, the heaviest article that could be weighed is ∑ 2^(i - 1) for i in {1, 2, 3, . . . , 8} = 128 + 64 + 32 + 16 + 8 + 4 + 2 + 1 = 255g.

So how can eight weights be used to weigh items up to 3280g? The key idea is to realize that the scale can be balanced by putting weights onto both sides, so that weights can be added or subtracted. This makes for an unusual base-3 integer representation known as balanced (or signed) ternary, where each "trit" (ternary digit) is -1, 0, or +1. In this case, form a balanced ternary number T having eight trits, T = [t[8], t[7], t[6], t[5], t[4], t[3], t[2], t[1]], where each t[i] is +1 when the corresponding w[i] is on the scale opposite the item being weighed, -1 when it is on the same side as the item, and 0 when it is off the scale entirely.
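The binary decomposition in part (b) can be sketched in a few lines of code. This is an illustrative Python helper, not part of the original answer; the function name is made up.

```python
# Decompose an integer weight D (1..255) into the binary weights
# 1g, 2g, 4g, ..., 128g that are placed opposite the item.
def binary_weights(D, n_weights=8):
    """Return the subset of weights {2^0, ..., 2^(n-1)} summing to D."""
    weights = []
    for i in range(n_weights):
        if D & (1 << i):            # bit b[i+1] is 1 -> weight goes on the scale
            weights.append(1 << i)  # w[i+1] = 2^i grams
    return weights

print(binary_weights(7))    # [1, 2, 4]
print(binary_weights(201))  # [1, 8, 64, 128]
```

Each weight simply corresponds to one bit of D, which is why eight weights cover everything up to 255g.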
The weights themselves are successive powers of 3, with w[i] = 3^(i - 1) for i in {1, 2, 3, . . . , 8}. That is, the weights are 1g, 3g, 9g, 27g, 81g, 243g, 729g, and 2187g.

(c) Notice that this part doesn't require writing code, nor that the algorithm be efficient, although that could be done. In the present situation, descriptive language is easier to understand.

Step 1. Input an integer N, with 1 ≤ N ≤ 3280, and initialize w[i] = 3^(i - 1) for i in {1, 2, 3, . . . , 8}. For convenience, define cumulative weights W[i] = ∑ w[k], for k in {1, . . . , i} and i in {1, 2, 3, . . . , 8}. That is, W[i] is the sum of the lightest i weights.

Step 2. Search W to find the smallest m such that N ≤ W[m]. This m indicates the most significant trit of T, meaning that t[i] = 0 for all i > m, and that t[m] = 1. Practically speaking, this is equivalent to placing weights opposite the article, one at a time starting with the lightest, until the weights on the scale total at least N.

Step 3. Systematically check all combinations of w[i] and t[i], for all i < m, until finding N = w[m] + ∑ w[i] t[i]. One way to systematically examine all combinations is nested loops: the outermost loop varies t[m - 1] over {-1, 0, 1}, within which the next loop varies t[m - 2] over {-1, 0, 1}, and so forth, with the innermost loop varying t[1] over {-1, 0, 1}. If the weights are placed on the scale in the order and positions matching these loops, then the sum w[m] + ∑ w[i] t[i] increments by 1 at each step through the loops. As a consequence, to find any particular N, start with m = 1 to find N = 1, then move on to m = 2 and loop t[1] over -1, 0, and +1 to get the weights needed for N = 2, N = 3, and N = 4, respectively. Continue this process with m = 3, m = 4, m = 5, . . . , and m = 8 in sequence, which generates the rest of the integers in order, finally reaching ∑ w[i] = ∑ 3^(i - 1) = 3280g.
Note that the ternary-based weightings can be extended to weigh arbitrarily heavy items, just as observed for the binary-based weightings, by adding weights that are successively higher powers of 3. Also, as stated above, this algorithm isn't intended to minimize the number of weighings needed to find an article's weight. It should be clear that a cleverer approach would choose the combination of weights on each weighing so as to eliminate multiple possibilities at once, rather than checking sequentially. That might be preferred for a practical implementation, but the complexity would likely interfere with understanding the basic principles of the algorithm above. As a practical issue, real items will usually not weigh precise integer amounts, so a real scale could include a small sliding weight, allowing a shift of up to half an ounce in either direction to bring the scale into balance.
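The balanced ternary representation behind parts (b) and (c) can also be computed directly, rather than by looping through combinations. This is an illustrative Python sketch (not part of the original answer): trit +1 means the weight 3^(i-1) goes opposite the item, -1 means it goes beside the item, 0 means it stays off the scale.

```python
def balanced_ternary(N):
    """Return trits t[1..k] in {-1, 0, 1} with N = sum of t[i] * 3^(i-1)."""
    trits = []
    while N:
        r = N % 3
        if r == 2:        # rewrite 2 * 3^k as 3^(k+1) - 3^k
            r = -1
            N += 1
        trits.append(r)
        N //= 3
    return trits

# A 5g item balances with 9g opposite it and 3g + 1g beside it (9 - 3 - 1 = 5):
print(balanced_ternary(5))     # [-1, -1, 1]

# The heaviest item the eight weights can handle uses all of them opposite:
print(balanced_ternary(3280))  # [1, 1, 1, 1, 1, 1, 1, 1]
```

The trick in the loop is the identity 2·3^k = 3^(k+1) - 3^k, which is exactly the "move a weight to the item's side" step on the physical scale.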

Youkahainen
Bubble sort is definitely NOT the fastest; it runs in O(n^2) time. Radix sort is the fastest, but I believe it only works in certain scenarios and has some drawbacks (I believe it runs in O(n) time). Quicksort is the fastest overall in the typical case (it runs in O(n log n) on average, though its worst case is O(n^2)). Merge sort has the same O(n log n) complexity as quicksort, so it's about as fast, but it eats up a lot more memory, and the memory allocation and deallocation slow it down. Of course, I'm not 100% familiar with all these sorting methods, but whatever.
👍 110 | 👎 -2
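The O(n^2) worst case of quicksort mentioned in both answers is easy to demonstrate. Below is an illustrative Python sketch (not anyone's production code): a naive quicksort that always picks the first element as the pivot, instrumented to count comparisons. On already-sorted input, every partition is maximally unbalanced and the comparison count blows up toward n^2/2, while shuffled input stays near n log n.

```python
import random

def quicksort(a):
    """Naive first-element-pivot quicksort.
    Returns (sorted_list, number_of_comparisons)."""
    if len(a) <= 1:
        return a, 0
    pivot, rest = a[0], a[1:]
    lo = [x for x in rest if x < pivot]
    hi = [x for x in rest if x >= pivot]
    slo, clo = quicksort(lo)
    shi, chi = quicksort(hi)
    # Each element of `rest` is compared against the pivot twice above.
    return slo + [pivot] + shi, clo + chi + 2 * len(rest)

random.seed(1)
shuffled = random.sample(range(500), 500)
_, c_avg = quicksort(shuffled)            # average case: ~n log n comparisons
_, c_worst = quicksort(sorted(shuffled))  # sorted input: ~n^2 comparisons
print(c_avg, c_worst)
```

Library implementations avoid this by choosing better pivots (median-of-three, random) or by switching algorithms entirely, which is why the built-in sort is the safe default.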

Youkahainen Originally Answered: Why is standard change in enthalpy of reaction independent of choice of standard pressure?
In the general gas-phase case, enthalpy depends on pressure and temperature, or any other pair of parameters; at least two parameters are needed. In the specific case of ideal-gas behaviour, i.e., low pressure and sufficiently high temperature, the enthalpy depends solely on temperature. Your reactants and products must be assumed to behave as ideal gases, so their contribution to the enthalpy depends solely on their temperature.

Refer to the enthalpy-vs-entropy ("Mollier") diagram. For gas phases, especially in the higher-temperature region, the constant-enthalpy lines lie close to the constant-temperature lines; hence the pressure can be varied, but the enthalpy remains the same once the temperature is fixed. This means you can pick any standard pressure without changing the enthalpy, because the enthalpy is fixed by the temperature:

∆h(T, P) = hf + [h(T, P) - h(Tref, Pref)] = hf + [h(T) - h(Tref)]
∆h(T, P) ≈ hf + [h(T) - h(Tref)]

Gibbs free energy is defined as

∆g = ∆h - ∆(Ts)

Let us refer back to the Mollier diagram. For a gas phase we need at least two parameters, so to determine entropy we also need two parameters. If we pick temperature and pressure, it can be seen that entropy ALWAYS depends on both. So, for the entropy term we must choose a specific pressure as well as a temperature:

∆(Ts) = T*s(T, P) - Tref*s(Tref, Pref)

Then:

∆g = [h(T, P) - h(Tref, Pref)] - [T*s(T, P) - Tref*s(Tref, Pref)]

Because the enthalpy of an ideal gas depends only on temperature,

∆g = [h(T) - h(Tref)] - [T*s(T, P) - Tref*s(Tref, Pref)]

That the energy of an ideal gas depends solely on temperature can be traced back to the kinetic theory of (ideal) gases: the average kinetic energy is KE = c*R*T, where c is a constant depending on the degrees of freedom of the molecules.
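The claim above can be checked numerically: for an ideal gas with constant heat capacity, the enthalpy change is ∆h = cp·∆T, so any pressure arguments simply drop out. This is an illustrative Python sketch; the diatomic value cp = (7/2)R is an assumption for the example, not from the original answer.

```python
# For an ideal gas, enthalpy change depends only on temperature,
# so the choice of standard pressure cannot affect it.
R = 8.314        # gas constant, J/(mol*K)
cp = 3.5 * R     # molar heat capacity of a diatomic ideal gas, J/(mol*K)

def dh(T1, T2, P1=None, P2=None):
    """Ideal-gas enthalpy change in J/mol; P1 and P2 are accepted but unused."""
    return cp * (T2 - T1)

# Identical result whether "standard pressure" is 1 bar or 1 atm:
print(dh(298.15, 500.0, P1=1.0e5))      # ~5874 J/mol
print(dh(298.15, 500.0, P1=1.01325e5))  # same value
```

The unused pressure parameters make the point of the answer concrete: for the ideal-gas enthalpy they are free choices, whereas the entropy term would genuinely need them.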
Youkahainen Originally Answered: Why is standard change in enthalpy of reaction independent of choice of standard pressure?
Here is what's going on: the change in heat, whether a gain or a loss, is always relative to the process and its components. Changing a pressure value, especially the standard pressure, say from Torr to atmospheres, is nothing more than a manipulation of a constant; it is a relative evaluation of pressure. Shoot, let's make up a unit: how about "fidgets", where one fidget equals 3.9 atmospheres. Did the ∆Hr change? Heck no; we just used something that wasn't accepted as standard. Now, if you actually raise the pressure and the process takes place as usual, what does it take? More energy, so your ∆G responds accordingly. This relationship is pretty simple, except in adiabatic processes, where no heat is transferred. Have a good one from the E...
