Thursday, October 29, 2015

Thinq - A Linq Experiment
   View/Download the source code from the project's GitHub



   So I wrote a program, just an experiment, where I was making a range class using IEnumerables (C#). Each element doesn't have to increment by one; it can step by any amount. So I was creating ranges like 7 to 10 million with an increment of 7, so that upon enumeration it would yield the multiples of 7. This is also called an arithmetic progression.
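
   In sketch form, the idea looks something like this (the name ArithmeticRange and its signature are my illustration, not necessarily the project's actual API):

using System.Collections.Generic;

// Illustrative sketch: yields start, start + step, start + 2*step, ...
// up to and including max. Values are produced lazily, one per iteration,
// so enumeration itself costs constant memory.
static IEnumerable<long> ArithmeticRange(long start, long max, long step)
{
    for (long n = start; n <= max; n += step)
    {
        yield return n;
    }
}

// Multiples of 7 from 7 to 10 million:
// IEnumerable<long> multiples7 = ArithmeticRange(7, 10000000, 7);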

   Then I started combining different sequences of multiples with query operators like Where and Intersect, as in IEnumerable<long> result = multiples7.Intersect(multiples13); essentially creating a sequence that keeps only those numbers that are multiples of both 7 and 13, starting with their least common multiple.
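
   Using the ArithmeticRange sketch from above, that combination looks something like this:

using System.Linq;

// The intersection of the multiples of 7 and the multiples of 13 is the
// set of multiples of lcm(7, 13) = 91: 91, 182, 273, ...
var multiples7  = ArithmeticRange(7,  10000000, 7);
var multiples13 = ArithmeticRange(13, 10000000, 13);
var result = multiples7.Intersect(multiples13);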

   So I began testing. After some playing, I decided to take the first 7 odd primes (3 through 19) and find their common multiples between 1 and 10 million. Much to my surprise, it found all the common multiples of those 7 primes under 10 million (there are only two: 4,849,845 and 9,699,690), and it did it in 500 milliseconds on some very modest hardware (1 core, 2.16 GHz, 4 GB RAM).
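
   A rough sketch of that test, chaining Intersect across the ranges (the prime list matches the results quoted here):

using System.Collections.Generic;
using System.Linq;

long limit = 10000000;
long[] primes = { 3, 5, 7, 11, 13, 17, 19 };

// Start with the multiples of the first prime, then intersect in the rest.
IEnumerable<long> common = ArithmeticRange(primes[0], limit, primes[0]);
foreach (long p in primes.Skip(1))
{
    common = common.Intersect(ArithmeticRange(p, limit, p));
}

foreach (long n in common)
{
    System.Console.WriteLine(n);   // prints 4849845 and 9699690
}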

   I bumped the ceiling up to 50 million and got an OutOfMemoryException, because the intersection holds on to every value it pulls from MoveNext(). I threw in some metrics and discovered that it took about 3 seconds and some 32 million 64-bit integers for my computer to declare 'out of memory'.

   Well, at least it was fast. Even if it did eat up all my RAM in 3 seconds, it was still promising.


   The solution was to create an IEnumerator that is aware of the arithmetic sequences that constrain the result set. As MoveNext() is called repeatedly during enumeration, I avoid the unbounded memory requirement by restricting what MoveNext() returns: the next whole number that is divisible by every arithmetic sequence's 'common difference', or increment value. In this way, you have created an enumerable sequence that is the _intersection_ of all of the sequences.
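
   Stripped down to the core idea (and leaving out the full IEnumerator<long> plumbing for brevity; the names here are mine, not the project's), the enumerator looks something like this:

using System.Linq;

// Illustrative sketch: walk upward in steps of the largest common
// difference and surface only the values divisible by every difference.
// Nothing is buffered; only the current candidate is held in memory.
class IntersectionEnumerator
{
    private readonly long[] _differences;
    private readonly long _step;

    public long Current { get; private set; }

    public IntersectionEnumerator(params long[] commonDifferences)
    {
        _differences = commonDifferences;
        _step = commonDifferences.Max();
        Current = 0;
    }

    public bool MoveNext()
    {
        long candidate = Current;
        do
        {
            candidate += _step;
        }
        while (_differences.Any(d => candidate % d != 0));
        Current = candidate;
        return true;   // unbounded here; the limits come next
    }
}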

   The enumerator is prevented from running to infinity by obeying two limits: a maximum numeric value (cardinal) that GetNext() will return ("results less than 50 million") and a maximum quantity of results (ordinal) that GetNext() will yield ("the one-millionth result"). If either of these limits is exceeded, the while loop fails to evaluate to true. It is very common for my processor-intensive, long-running, or 'mathy' applications to employ a temporal limit (a maximum time-to-live) or to support cancellation, but this little experiment has been so performant that I have been able to get by without either.
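
   Folding both limits into the sketch (maxValue and maxCount are illustrative names, standing in for the cardinal and ordinal limits):

using System.Collections.Generic;
using System.Linq;

// Illustrative sketch: the same divisibility walk, guarded by a maximum
// value (cardinal limit) and a maximum number of results (ordinal limit).
// When either limit trips, the while condition evaluates to false and
// enumeration simply ends.
static IEnumerable<long> CommonMultiples(long[] differences, long maxValue, long maxCount)
{
    long step = differences.Max();
    long count = 0;
    long n = step;
    while (n <= maxValue && count < maxCount)
    {
        if (differences.All(d => n % d == 0))
        {
            yield return n;
            count++;
        }
        n += step;
    }
}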

   So what kind of improvement did we get out of our custom enumerable? I can now find all the common multiples of 8 primes (3 through 23) up to 1 billion in 25 seconds! I was impressed; the application used to max out around 50 million and run out of memory, and now it can investigate up to one billion in a reasonable amount of time, using little more memory than the 8 or so integers in the result set. One billion, however, seems to be the sweet spot for my single-core 2.13 GHz laptop. I ran the same 8 primes up to 2 billion and it took 1 minute, 12 seconds:



TIME ELAPSED: 01:12.38
LCM[3,5,7,11,13,17,19,23] (max 2,000,000,000)

17 FACTORS:

111546435 
223092870 
334639305 
446185740 
557732175 
669278610 
780825045 
892371480 
1003917915 
1115464350 
1227010785 
1338557220 
1450103655 
1561650090 
1673196525 
1784742960 
1896289395

