<p><em>Polymath Collector: Medical student working towards polymathy. Current and former World Record Holder of the Euler–Mascheroni Constant, Apéry’s Constant, Catalan's Constant, the Lemniscate Constant, Log(2), and Log(10) as of September 2020. Contact: ehf@users.sourceforge.net (<a href="https://github.com/ehfd">GitHub</a>, <a href="https://hive.blog/@ehf">Hive.Blog</a>)</em></p>
<h2 id="the-natural-logarithm-of-10-log10">The Natural Logarithm of 10 (Log(10))</h2>
<p>Please cite:<br />
Kim, S. Normality Analysis of Current World Record Computations for Catalan’s Constant and Arc Length of a Lemniscate with a=1. arXiv Preprint <a href="https://arxiv.org/abs/1908.08925">arXiv:1908.08925</a><br />
if this article or the calculated digits were useful.</p>
<p>This world record computation of 1,200,000,000,100 digits by Seungmin Kim ran from Fri Jul 31 18:13:07 2020 to Tue Aug 18 10:02:59 2020 using the Primary Machin-like Formula (4 terms) algorithm. As with most of my computations, I then verified the result with the Secondary Machin-like Formula (4 terms) algorithm, from Wed Aug 19 10:48:52 2020 to Sun Sep 6 12:14:40 2020.<br />
Validation files generated by y-cruncher v0.7.8 Build 9506 for both the computation and the verification run:<br />
Computation: <a href="https://web.archive.org/web/20200915115810/http://www.numberworld.org/y-cruncher/records/2020_8_18_log10.txt">https://web.archive.org/web/20200915115810/http://www.numberworld.org/y-cruncher/records/2020_8_18_log10.txt</a><br />
Verification: <a href="https://web.archive.org/web/20200915075933/http://www.numberworld.org/y-cruncher/records/2020_9_6_log10.txt">https://web.archive.org/web/20200915075933/http://www.numberworld.org/y-cruncher/records/2020_9_6_log10.txt</a></p>
\[\ln(a\cdot 10^n) = \ln a + n \ln 10\]
<p>An important identity involving \(\ln 10\), enabling efficient computation of the natural logarithm of any number written in <a href="https://en.wikipedia.org/wiki/Scientific_notation">scientific notation</a> (<a href="https://en.wikipedia.org/wiki/Natural_logarithm#Natural_logarithm_of_10">Wikipedia</a>)</p>
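<p>A quick numerical sanity check of this identity (a minimal Python sketch; the values 3.2 and 5 are arbitrary example inputs, not anything from the record computation):</p>

```python
import math

# ln(a * 10^n) = ln(a) + n*ln(10): split a number given in scientific
# notation into mantissa and exponent, then reassemble its logarithm.
a, n = 3.2, 5  # arbitrary example: represents 3.2e5
lhs = math.log(a * 10 ** n)
rhs = math.log(a) + n * math.log(10)
assert abs(lhs - rhs) < 1e-12
```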
<p>The <a href="https://en.wikipedia.org/wiki/Natural_logarithm#Natural_logarithm_of_10">natural logarithm of 10</a> is the logarithm of 10 to base <a href="https://en.wikipedia.org/wiki/E_(mathematical_constant)">\(e\)</a>; the natural logarithm is the <a href="https://en.wikipedia.org/wiki/Inverse_function">inverse function</a> of the <a href="https://en.wikipedia.org/wiki/Exponential_function">exponential function</a>. The natural logarithm of every natural number larger than 1 is proved irrational and transcendental by the <a href="https://en.wikipedia.org/wiki/Lindemann%E2%80%93Weierstrass_theorem">Lindemann–Weierstrass theorem</a>. It also has the <a href="https://en.wikipedia.org/wiki/Bailey%E2%80%93Borwein%E2%80%93Plouffe_formula">BBP-type representation</a> \(\ln 10 = \frac{1}{16} \sum_{k = 0}^\infty \left(\frac{24}{4k+1}+\frac{20}{4k+2}+\frac{6}{4k+3}+\frac{1}{4k+4}\right) \frac{1}{16^k}\) (<a href="https://mathworld.wolfram.com/NaturalLogarithmof10.html">Source</a>), which could be useful for verifying future record computations, since a BBP-type formula can check digits at arbitrary positions without recomputing all the digits from the beginning.</p>
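<p>To illustrate how this BBP-type series behaves numerically, here is a double-precision sketch of its partial sums (a real verification would instead extract hexadecimal digits at a chosen position using modular arithmetic):</p>

```python
import math

def ln10_bbp(terms):
    """Partial sum of the BBP-type series for ln(10) quoted above."""
    s = 0.0
    for k in range(terms):
        s += (24 / (4 * k + 1) + 20 / (4 * k + 2)
              + 6 / (4 * k + 3) + 1 / (4 * k + 4)) / 16 ** k
    return s / 16

# Each term shrinks by a factor of 16, so a dozen terms already
# exhaust double precision.
assert abs(ln10_bbp(12) - math.log(10)) < 1e-13
```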
<p>The Machin-like formulas are based on the <a href="https://en.wikipedia.org/wiki/Inverse_hyperbolic_functions">inverse hyperbolic cotangent</a> identity \(\coth^{-1} x = \frac{1}{2}\left[\ln\left(1 + \frac{1}{x}\right) - \ln\left(1 - \frac{1}{x}\right)\right]\), since the inverse hyperbolic cotangent has the simple series expansion \(\coth^{-1} x = \sum_{k=0}^\infty \frac{1}{(2k+1)\,x^{2k+1}}\) for \(\lvert x \rvert &gt; 1\). This is straightforward to derive and is covered in calculus textbooks. Adding and subtracting such identities so that only one logarithm remains on the right-hand side yields the quickly converging formulas used for the computations.</p>
\[\ln 10 = 239 \coth^{-1} 99 - 59 \coth^{-1} 449 + 113 \coth^{-1} 4801 - 33 \coth^{-1} 8749 \\ = 478 \coth^{-1} 251 + 180 \coth^{-1} 449 - 126 \coth^{-1} 4801 + 206 \coth^{-1} 8749\]
<p>Primary (4 terms) and secondary (4 terms) Machin-like formulas for \(\ln 10\)</p>
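<p>Both formulas can be checked directly from the \(\coth^{-1}\) series in ordinary double precision (a toy sketch only; y-cruncher evaluates these series to over a trillion digits with arbitrary-precision binary splitting):</p>

```python
import math

def arcoth(x, terms=30):
    """coth^-1(x) = sum_{k>=0} 1 / ((2k+1) * x^(2k+1)), valid for |x| > 1."""
    return sum(1 / ((2 * k + 1) * x ** (2 * k + 1)) for k in range(terms))

# Primary 4-term Machin-like formula for ln(10)
primary = (239 * arcoth(99) - 59 * arcoth(449)
           + 113 * arcoth(4801) - 33 * arcoth(8749))
# Secondary 4-term formula, used for the independent verification run
secondary = (478 * arcoth(251) + 180 * arcoth(449)
             - 126 * arcoth(4801) + 206 * arcoth(8749))

assert abs(primary - math.log(10)) < 1e-13
assert abs(primary - secondary) < 1e-13
```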
<p>As with all the other mathematical constants, I used y-cruncher by Mr. Alexander J. Yee for this computation. The program is commonly used for stress testing and benchmarking overclocked PC builds (it obviously performs a very rigorous computation), alongside the mathematical computing program Prime95 and the linear algebra benchmark Linpack.<br />
The computation was theoretically about twice as intensive as Pi and half as intensive as <a href="/world-record/aperys-constant/">Apéry’s constant</a>. Unlike other series, the natural logarithms combine several separate inverse hyperbolic cotangent series into the final result, so each individual series is less intensive than usual, but there are multiple series computations and accumulations. The primary and secondary formulas took about the same time, since both have 4 terms.</p>
<p>Computation:<br />
System information:<br />
Operating System: Linux 5.5.6-1.el7.x86_64 x86_64<br />
Processor(s):<br />
Name: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz<br />
Logical Cores: 72<br />
Physical Cores: 36<br />
Sockets: 2<br />
NUMA Nodes: 2<br />
Base Frequency: 2,299,987,856 Hz</p>
<p>The main computation used two Xeon CPU sockets from the Haswell era (which therefore support the AVX2 SIMD instructions that are crucial to vector-heavy programs like y-cruncher), as I judged that the Xeon Scalable Skylake Purley processors supporting AVX-512 would be overkill, since I was going to run into I/O bottlenecks anyway. The time would not have differed much had I used just one CPU socket. I further optimized I/O throughput by tuning the Bytes/Seek parameter and allocating a larger I/O buffer to reach RAID-level performance in my file systems.</p>
<p>Start Date: Fri Jul 31 18:13:07 2020<br />
End Date: Tue Aug 18 10:02:59 2020<br />
Total Computation Time: 1470494.849 seconds<br />
Start-to-End Wall Time: 1525791.855 seconds<br />
CPU Utilization: 1070.37 % + 15.05 % kernel overhead<br />
Multi-core Efficiency: 14.87 % + 0.21 % kernel overhead</p>
<p>The multi-core efficiency improved slightly over the <a href="/world-record/natural-logarithm-of-2-log2/">Log(2)</a> computation thanks to good Bytes/Seek tuning, but of course it did not reach the utilization the CPUs are capable of, since I was using swap storage (Dr. Ian Cutress’s <a href="/world-record/lemniscate-constant/">Lemniscate Constant</a> calculation had a multi-core efficiency of 94.04 % and CPU utilization of 9027.61 %, meaning that the CPU was not bottlenecked by other factors). I also used the Cilk Plus work-stealing multiprocessing framework with the dynamic version of y-cruncher.</p>
<p>Memory:<br />
Working Memory: 696,086,196,992 ( 648 GiB)<br />
Total Memory: 697,932,185,600 ( 650 GiB)<br />
Logical Largest Checkpoint: 2,242,654,353,976 (2.04 TiB)<br />
Logical Peak Disk Usage: 6,516,333,392,080 (5.93 TiB)<br />
Logical Disk Bytes Read: 420,720,903,373,472 ( 383 TiB)<br />
Logical Disk Bytes Written: 368,315,890,119,480 ( 335 TiB)</p>
<p>Disk operation was similar to the verification run of <a href="/world-record/natural-logarithm-of-2-log2/">Log(2)</a> and, likewise, less than half of that of <a href="/world-record/aperys-constant/">Apéry’s constant</a>, because the algorithm is simpler and splits into smaller computations; this, along with more RAM, contributed to a faster computation.<br />
One caveat is that HDD I/O speed is again a severe bottleneck relative to virtually every other component, and Optane DIMMs, or simply more ordinary RAM, could speed up the computation greatly.</p>
<p>Verification:<br />
System information:<br />
Operating System: Linux 3.10.0-693.21.1.el7.x86_64 x86_64<br />
Processor(s):<br />
Name: Intel(R) Xeon(R) Gold 5220 CPU @ 2.20GHz<br />
Logical Cores: 72<br />
Physical Cores: 36<br />
Sockets: 2<br />
NUMA Nodes: 2<br />
Base Frequency: 2,194,728,160 Hz</p>
<p>Start Date: Wed Aug 19 10:48:52 2020<br />
End Date: Sun Sep 6 12:14:40 2020<br />
Total Computation Time: 1520754.941 seconds<br />
Start-to-End Wall Time: 1560348.114 seconds<br />
CPU Utilization: 932.46 % + 37.40 % kernel overhead<br />
Multi-core Efficiency: 12.95 % + 0.52 % kernel overhead</p>
<p>Memory:<br />
Working Memory: 748,959,060,352 ( 698 GiB)<br />
Total Memory: 751,290,873,600 ( 700 GiB)<br />
Logical Largest Checkpoint: 2,252,980,350,032 (2.05 TiB)<br />
Logical Peak Disk Usage: 6,533,333,252,760 (5.94 TiB)<br />
Logical Disk Bytes Read: 426,298,248,606,512 ( 388 TiB)<br />
Logical Disk Bytes Written: 373,070,460,848,488 ( 339 TiB)</p>
<p>This build has a slower filesystem than the system used for the main computation, which caused the verification to take more time even with two Cascade Lake Xeon CPU sockets that support AVX-512. CPU utilization suffered because of the slower disks even though the CPUs were more recent. Total Computation Time and disk R/W were slightly higher.</p>
<p>I hope that computing and sharing these results will provide insight that mathematicians can use to create new mathematical knowledge.</p>
<p>If you want to take a look at the digits of the <a href="https://mathworld.wolfram.com/NaturalLogarithmof10.html">Natural Logarithm of 10 (Log(10))</a>, you can download them from <a href="https://archive.org/details/log10_200818">This Link</a> (almost 2 TB in total, but don’t worry: the link leads to a registry page with download links).</p>
<p><strong>Note that the digits are released under an <a href="https://creativecommons.org/licenses/by-nc-nd/4.0/">Attribution-NonCommercial-NoDerivatives 4.0 International</a> license, meaning no commercial use, and you cannot distribute a remixed, transformed, or built-upon version without my consent. You must also give appropriate credit, provide a link to the license, and indicate if changes were made, even for permitted uses.</strong></p>
<p>Archive for computation results in the y-cruncher website: <a href="https://web.archive.org/web/20200915070251/http://www.numberworld.org/y-cruncher/">https://web.archive.org/web/20200915070251/http://www.numberworld.org/y-cruncher/</a><br />
Special thanks to Mr. Alexander J. Yee for developing and releasing y-cruncher and providing advice, and the <a href="https://archive.org/">Internet Archive</a> for hosting the computed digits.</p>
<h2 id="the-natural-logarithm-of-2-log2">The Natural Logarithm of 2 (Log(2))</h2>
<p>Please cite:<br />
Kim, S. Normality Analysis of Current World Record Computations for Catalan’s Constant and Arc Length of a Lemniscate with a=1. arXiv Preprint <a href="https://arxiv.org/abs/1908.08925">arXiv:1908.08925</a><br />
if this article or the calculated digits were useful.</p>
<p>This world record computation of 1,200,000,000,100 digits by Seungmin Kim ran from Tue Jul 14 14:02:26 2020 to Wed Jul 29 01:09:58 2020 using the Primary Machin-like Formula (3 terms) algorithm. As with most of my computations, I then verified the result with the Secondary Machin-like Formula (4 terms) algorithm, from Thu Jul 30 16:47:49 2020 to Wed Aug 19 03:00:23 2020.<br />
Validation files generated by y-cruncher v0.7.8 Build 9506 for both the computation and the verification run:<br />
Computation: <a href="https://web.archive.org/web/20200810062417/http://www.numberworld.org/y-cruncher/records/2020_7_29_log2.txt">https://web.archive.org/web/20200810062417/http://www.numberworld.org/y-cruncher/records/2020_7_29_log2.txt</a><br />
Verification: <a href="https://web.archive.org/web/20200915095126/http://www.numberworld.org/y-cruncher/records/2020_8_19_log2.txt">https://web.archive.org/web/20200915095126/http://www.numberworld.org/y-cruncher/records/2020_8_19_log2.txt</a></p>
\[\ln 2 = \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n}=1-\frac12+\frac13-\frac14+\frac15-\frac16+\cdots\]
<p>An identity of \(\ln 2\) called the <a href="https://en.wikipedia.org/wiki/Harmonic_series_(mathematics)#Alternating_harmonic_series">alternating harmonic series</a> (<a href="https://en.wikipedia.org/wiki/Natural_logarithm_of_2">Wikipedia</a>)</p>
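<p>This series is famous but converges far too slowly for digit hunting: the error after \(N\) terms is roughly \(\frac{1}{2N}\), which is why Machin-like formulas are used instead. A small Python illustration:</p>

```python
import math

def alt_harmonic(n):
    """Partial sum of the alternating harmonic series 1 - 1/2 + 1/3 - ..."""
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

# After 1000 terms the error is about 1/(2*1000) = 5e-4:
# only three correct decimal digits.
err = abs(alt_harmonic(1000) - math.log(2))
assert 1e-4 < err < 1e-3
```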
<p>The <a href="https://en.wikipedia.org/wiki/Natural_logarithm_of_2">natural logarithm of 2</a> is the logarithm of 2 to base <a href="https://en.wikipedia.org/wiki/E_(mathematical_constant)">\(e\)</a>; the natural logarithm is the <a href="https://en.wikipedia.org/wiki/Inverse_function">inverse function</a> of the <a href="https://en.wikipedia.org/wiki/Exponential_function">exponential function</a>. The natural logarithm of every natural number larger than 1 is proved irrational and transcendental by the <a href="https://en.wikipedia.org/wiki/Lindemann%E2%80%93Weierstrass_theorem">Lindemann–Weierstrass theorem</a>. It is an important value in radioactive and other decay problems, and underlies the <a href="https://en.wikipedia.org/wiki/Rule_of_72">rule of 72</a> for continuous compounding in investment and banking. Many identities are known for Log(2) in particular because of its simplicity compared to other logarithmic values. It also has the <a href="https://en.wikipedia.org/wiki/Bailey%E2%80%93Borwein%E2%80%93Plouffe_formula">BBP-type representation</a> \(\ln 2 = \frac{2}{3} + \frac{1}{2} \sum_{k = 1}^\infty \left(\frac{1}{2k}+\frac{1}{4k+1}+\frac{1}{8k+4}+\frac{1}{16k+12}\right) \frac{1}{16^k}\), which could be useful for verifying future record computations, since a BBP-type formula can check digits at arbitrary positions without recomputing all the digits from the beginning.</p>
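<p>As with Log(10), a double-precision sketch of the partial sums of this BBP-type series shows how quickly it converges (a real verification would work at a chosen hex position with modular arithmetic):</p>

```python
import math

def ln2_bbp(terms):
    """Partial sum of the BBP-type series for ln(2) quoted above."""
    s = 0.0
    for k in range(1, terms + 1):
        s += (1 / (2 * k) + 1 / (4 * k + 1)
              + 1 / (8 * k + 4) + 1 / (16 * k + 12)) / 16 ** k
    return 2 / 3 + s / 2

# Terms shrink by a factor of 16, so a dozen terms exhaust double precision.
assert abs(ln2_bbp(12) - math.log(2)) < 1e-13
```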
<p>The Machin-like formulas are based on the <a href="https://en.wikipedia.org/wiki/Inverse_hyperbolic_functions">inverse hyperbolic cotangent</a> identity \(\coth^{-1} x = \frac{1}{2}\left[\ln\left(1 + \frac{1}{x}\right) - \ln\left(1 - \frac{1}{x}\right)\right]\), since the inverse hyperbolic cotangent has the simple series expansion \(\coth^{-1} x = \sum_{k=0}^\infty \frac{1}{(2k+1)\,x^{2k+1}}\) for \(\lvert x \rvert &gt; 1\). This is straightforward to derive and is covered in calculus textbooks. Adding and subtracting such identities so that only one logarithm remains on the right-hand side yields the quickly converging formulas used for the computations.</p>
\[\ln 2 = 18 \coth^{-1} 26 - 2 \coth^{-1} 4801 + 8 \coth^{-1} 8749 \\ = 72 \coth^{-1} 99 - 18 \coth^{-1} 449 + 34 \coth^{-1} 4801 - 10 \coth^{-1} 8749\]
<p>Primary (3 terms) and secondary (4 terms) Machin-like formulas for \(\ln 2\)</p>
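<p>The agreement of the primary and secondary formulas can be sketched in a few lines of double-precision Python, mirroring the compute-then-verify methodology at toy scale:</p>

```python
import math

def arcoth(x, terms=30):
    """coth^-1(x) = sum_{k>=0} 1 / ((2k+1) * x^(2k+1)), valid for |x| > 1."""
    return sum(1 / ((2 * k + 1) * x ** (2 * k + 1)) for k in range(terms))

# Primary 3-term and secondary 4-term Machin-like formulas for ln(2)
primary = 18 * arcoth(26) - 2 * arcoth(4801) + 8 * arcoth(8749)
secondary = (72 * arcoth(99) - 18 * arcoth(449)
             + 34 * arcoth(4801) - 10 * arcoth(8749))

assert abs(primary - math.log(2)) < 1e-13
assert abs(primary - secondary) < 1e-13
```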
<p>As with all the other mathematical constants, I used y-cruncher by Mr. Alexander J. Yee for this computation. The program is commonly used for stress testing and benchmarking overclocked PC builds (it obviously performs a very rigorous computation), alongside the mathematical computing program Prime95 and the linear algebra benchmark Linpack.<br />
The computation was theoretically about twice as intensive as Pi and half as intensive as <a href="/world-record/aperys-constant/">Apéry’s constant</a>. Unlike other series, the natural logarithms combine several separate inverse hyperbolic cotangent series into the final result, so each individual series is less intensive than usual, but there are multiple series computations and accumulations. Because of this, the primary formula (3 terms) took less than three-quarters as long as the secondary formula (4 terms).</p>
<p>Computation:<br />
System information:<br />
Operating System: Linux 5.5.6-1.el7.x86_64 x86_64<br />
Processor(s):<br />
Name: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz<br />
Logical Cores: 72<br />
Physical Cores: 36<br />
Sockets: 2<br />
NUMA Nodes: 2<br />
Base Frequency: 2,299,973,744 Hz</p>
<p>The main computation used two Xeon CPU sockets from the Haswell era (which therefore support the AVX2 SIMD instructions that are crucial to vector-heavy programs like y-cruncher), as I judged that the Xeon Scalable Skylake Purley processors supporting AVX-512 would be overkill, since I was going to run into I/O bottlenecks anyway. The time would not have differed much had I used just one CPU socket. I further optimized I/O throughput by tuning the Bytes/Seek parameter and allocating a larger I/O buffer to reach RAID-level performance in my file systems.</p>
<p>Start Date: Tue Jul 14 14:02:26 2020<br />
End Date: Wed Jul 29 01:09:58 2020<br />
Total Computation Time: 1169940.880 seconds<br />
Start-to-End Wall Time: 1249652.423 seconds<br />
CPU Utilization: 907.02 % + 7.10 % kernel overhead<br />
Multi-core Efficiency: 12.60 % + 0.10 % kernel overhead</p>
<p>The multi-core efficiency was similar to that of <a href="/world-record/aperys-constant/">Apéry’s constant</a>, and still did not reach the utilization the CPUs are capable of (Dr. Ian Cutress’s <a href="/world-record/lemniscate-constant/">Lemniscate Constant</a> calculation had a multi-core efficiency of 94.04 % and CPU utilization of 9027.61 %, meaning that the CPU was not bottlenecked by other factors). I also used the Cilk Plus work-stealing multiprocessing framework with the dynamic version of y-cruncher.</p>
<p>Memory:<br />
Working Memory: 685,885,657,856 ( 639 GiB)<br />
Total Memory: 687,194,767,360 ( 640 GiB)<br />
Logical Largest Checkpoint: 2,118,600,314,392 (1.93 TiB)<br />
Logical Peak Disk Usage: 6,578,365,008,416 (5.98 TiB)<br />
Logical Disk Bytes Read: 366,084,546,862,600 ( 333 TiB)<br />
Logical Disk Bytes Written: 320,267,770,209,808 ( 291 TiB)</p>
<p>Disk operation decreased to less than half of that of <a href="/world-record/aperys-constant/">Apéry’s constant</a>, because the algorithm is simpler and splits into smaller computations; this, along with more RAM, contributed to a faster computation.<br />
One caveat is that HDD I/O speed is again a severe bottleneck relative to virtually every other component, and Optane DIMMs, or simply more ordinary RAM, could speed up the computation greatly.</p>
<p>Verification:<br />
System information:<br />
Operating System: Linux 3.10.0-693.21.1.el7.x86_64 x86_64<br />
Processor(s):<br />
Name: Intel(R) Xeon(R) Gold 5220 CPU @ 2.20GHz<br />
Logical Cores: 72<br />
Physical Cores: 36<br />
Sockets: 2<br />
NUMA Nodes: 2<br />
Base Frequency: 2,194,720,736 Hz</p>
<p>Start Date: Thu Jul 30 16:47:49 2020<br />
End Date: Wed Aug 19 03:00:23 2020<br />
Total Computation Time: 1636391.663 seconds<br />
Start-to-End Wall Time: 1678354.249 seconds<br />
CPU Utilization: 672.88 % + 34.22 % kernel overhead<br />
Multi-core Efficiency: 9.35 % + 0.48 % kernel overhead</p>
<p>Memory:<br />
Working Memory: 565,140,748,928 ( 526 GiB)<br />
Total Memory: 566,935,683,072 ( 528 GiB)<br />
Logical Largest Checkpoint: 2,242,654,353,960 (2.04 TiB)<br />
Logical Peak Disk Usage: 6,516,333,392,080 (5.93 TiB)<br />
Logical Disk Bytes Read: 476,756,678,694,304 ( 434 TiB)<br />
Logical Disk Bytes Written: 417,615,844,506,104 ( 380 TiB)</p>
<p>Less RAM, one extra series expansion in the secondary formula, and this build’s slower filesystem caused the verification computation to take more time even with two Cascade Lake Xeon CPU sockets that support AVX-512. CPU utilization suffered because of the slower disks even though the CPUs were more recent. Total Computation Time and disk R/W were over a third higher.</p>
<p>I hope that computing and sharing these results will provide insight that mathematicians can use to create new mathematical knowledge.</p>
<p>If you want to take a look at the digits of the <a href="https://mathworld.wolfram.com/NaturalLogarithmof2.html">Natural Logarithm of 2 (Log(2))</a>, you can download them from <a href="https://archive.org/details/log2_200729">This Link</a> (almost 2 TB in total, but don’t worry: the link leads to a registry page with download links).</p>
<p><strong>Note that the digits are released under an <a href="https://creativecommons.org/licenses/by-nc-nd/4.0/">Attribution-NonCommercial-NoDerivatives 4.0 International</a> license, meaning no commercial use, and you cannot distribute a remixed, transformed, or built-upon version without my consent. You must also give appropriate credit, provide a link to the license, and indicate if changes were made, even for permitted uses.</strong></p>
<p>Archive for computation results in the y-cruncher website: <a href="https://web.archive.org/web/20200915070251/http://www.numberworld.org/y-cruncher/">https://web.archive.org/web/20200915070251/http://www.numberworld.org/y-cruncher/</a><br />
Special thanks to Mr. Alexander J. Yee for developing and releasing y-cruncher and providing advice, and the <a href="https://archive.org/">Internet Archive</a> for hosting the computed digits.</p>
<h2 id="breaking-the-pi-world-record">Breaking the Pi World Record?</h2>
<p>Note: this post should hopefully be understandable by anyone who is a computer power user.
Also, this post is meant to supplement Mr. Alexander Yee’s <a href="http://www.numberworld.org/y-cruncher/">y-cruncher</a> website, not replace it. You have to read his website for the crucial technical details required to set world records.</p>
<p>Difficulty of mathematical constants (excerpt from <a href="http://www.numberworld.org/y-cruncher/">y-cruncher</a> v0.7.8.9506; Mr. Yee thankfully let me post this):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Compute a Constant: (in ascending order of difficulty to compute)
# Constant Value Approximate Difficulty*
Fast Constants:
0 Sqrt(n) 1.46
1 Golden Ratio = 1.618034... 1.46
2 e = 2.718281... 3.88 / 3.88
Moderate Constants:
3 Pi = 3.141592... 13.2 / 19.9
4 Log(n) > 35.7
5 Zeta(3) (Apery's Constant) = 1.202056... 62.8 / 65.7
6 Catalan's Constant = 0.915965... 78.0 / 105.
7 Lemniscate = 5.244115... 60.4 / 124. / 154.
Slow Constants:
8 Euler-Mascheroni Constant = 0.577215... 383. / 574.
Other:
9 Euler-Mascheroni Constant (parameter override)
10 Custom constant with user-defined formula.
*Actual numbers will vary. Radix conversion = 1.00
</code></pre></div></div>
<p>If you have not read the <a href="/world-record/algorithms-and-significance-of-major-mathematical-constants-part-1/">First Post</a> and <a href="/world-record/algorithms-and-significance-of-major-mathematical-constants-part-2/">Second Post</a> on the significance and algorithms of mathematical constants, read them first to understand the algorithms and significance of all the main mathematical constants.</p>
<p>Note that more mathematical constants can be computed with the custom formula files available alongside the executable, but they involve more complicated mathematics, so if you really want to set custom-formula records, you will need to learn more as you research them.</p>
<h2 id="introduction">Introduction</h2>
<p>Of the people I have contacted in relation to my ongoing world record project, all of them have either already set a world record for Pi or will almost certainly set one, given their professionalism and optimization skills.</p>
<!-- Courtesy of embedresponsively.com //-->
<div class="responsive-video-container">
<iframe src="https://www.youtube-nocookie.com/embed/DXX823edcGo" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe>
</div>
<p><a href="https://twitter.com/iancutress">Dr. Ian Cutress</a> also did not hide his intent to pursue a world record for Pi, perhaps in collaboration with the gaming PC reviewers <a href="https://www.youtube.com/user/LinusTechTips">LinusTechTips</a> and <a href="https://www.youtube.com/channel/UChIs72whgZI9w6d6FhwGGHA">Gamers Nexus</a>. But a world record is a world record: it takes resources to set one. If it didn’t, it wouldn’t be called a record, and everyone would set one every day. So what is required to set these records?</p>
<h2 id="whats-required">What’s Required</h2>
<p>We are going to consider 100 trillion digits of Pi with the Chudnovsky (1988) (reduced-memory) algorithm, a sensible next step beyond the current record of 50 trillion digits set by <a href="https://blog.timothymullican.com/calculating-pi-my-attempt-breaking-pi-record">Timothy Mullican</a> in January 2020.</p>
<p>Verification of the digits is done using the <a href="https://en.wikipedia.org/wiki/Bailey%E2%80%93Borwein%E2%80%93Plouffe_formula">BBP spigot algorithm</a>, so the computing resources needed for verification are negligible (under a week on a desktop CPU).</p>
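<p>To sketch why verification is so cheap: a BBP spigot yields hexadecimal digits of Pi at an arbitrary position without computing any of the preceding digits. Below is a minimal double-precision Python version of the standard algorithm (good for a few hex digits per call; a record verification uses a hardened, higher-precision variant):</p>

```python
def pi_hex_digits(pos, count=4):
    """Hex digits of Pi starting `pos` places after the point (BBP)."""
    def s(j):
        # fractional part of sum_k 16^(pos-k) / (8k + j)
        total = 0.0
        for k in range(pos + 1):                 # head: modular exponentiation
            total = (total + pow(16, pos - k, 8 * k + j) / (8 * k + j)) % 1.0
        k = pos + 1
        while True:                              # tail: plain floating point
            term = 16.0 ** (pos - k) / (8 * k + j)
            if term < 1e-17:
                return total
            total = (total + term) % 1.0
            k += 1

    frac = (4 * s(1) - 2 * s(4) - s(5) - s(6)) % 1.0
    digits = ""
    for _ in range(count):
        frac *= 16
        digits += "0123456789ABCDEF"[int(frac)]
        frac %= 1
    return digits

# Pi = 3.243F6A88... in hexadecimal
assert pi_hex_digits(0) == "243F"
```

<p>The work per position grows only slightly with the position, so spot-checking digits near the end of a multi-trillion-digit computation remains tractable on a single desktop CPU.</p>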
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>1. Relatively recent mid-range server CPUs with around 28-36 cores (still a lot, but not high-end; the cores won't be saturated anyway because of the I/O bottleneck). Raw CPU performance is not what matters; the maximum RAM and the PCIe lanes available for secondary storage are. It is therefore usually better to use multiple sockets of lower-core-count CPUs.
2. At least 768 GB of RAM at the very minimum; at least 2-3 times that if the objective is finishing in under 6 months rather than over a year.
3. 440 TiB = 483.785 TB of high-speed RAID storage, plus 166 TiB = 182.519 TB to store the generated digits (this doesn't need to be additional storage).
</code></pre></div></div>
<h2 id="hardware-costs">Hardware Costs</h2>
<p>Let’s start by simply summing up the cost of the hardware making up this specification, which is the cost of starting from scratch. This was how the 2010 Pi world record by biologist <a href="http://www.fbs.osaka-u.ac.jp/eng2/scientist/shigeru_kondo/index.php">Shigeru Kondo</a> and y-cruncher developer Alexander J. Yee was done: assembling computers and connecting many hard drives manually.</p>
<p>RAM: One 64 GB DDR4 PC4-23400 ECC REG Samsung RAM stick costs about $400 on Amazon. 32 of them (if the server motherboard can even take that many) cost $12800, totaling 2.048 TB of RAM. One 128 GB DDR4 PC4-21300 ECC REG RAM stick costs around $1100; 16 of them cost $17600, also totaling 2.048 TB. Intel Optane DIMMs (only for Intel Xeon Scalable CPUs) cost a minimum of $2100 per 256 GB DIMM; 8 of them plus 8 64 GB DDR4 sticks cost $20000 and total 2.56 TB, so this can also be an option when RAM sockets are scarce.</p>
<p>CPU: <a href="https://blog.timothymullican.com/calculating-pi-my-attempt-breaking-pi-record">Timothy Mullican</a> used 4x Xeon E7-4880 v2 CPU sockets for his record and saw around 50% multi-core efficiency. The closest benchmark I could find for such a system is <a href="https://www.cpubenchmark.net/cpu.php?cpu=Intel+Xeon+E7-4890+v2+%40+2.80GHz&amp;id=2685&amp;cpuCount=4">this one (Passmark score 34472)</a>. Since the E7-4890 v2 is a higher-clocked version, similar or slightly lower performance can be expected.<br />
For single-processor options, the Intel Xeon W-3275 (28 cores with AVX-512 and 6-channel DDR4-2933, max 1 TB across 12 DIMMs, 64 PCIe lanes) costs $4537.90, while the AMD EPYC 7502P (32 cores with AVX2 and 8-channel DDR4-3200, max 2 TB, 128 PCIe lanes) costs $2300. The EPYC 7502P works in theory, so let’s keep it.<br />
For multi-processor options, two Xeon Gold 6238 CPUs at $2612 each sum to $5224 and 44 cores, and two EPYC 7352 CPUs at $1350 each sum to $2700 and 48 cores. Using older-generation used CPUs would cost less (but this doesn’t greatly affect the total compared to other components).</p>
<p>Storage: If the requirement is 440 TiB, we need more than that in order to run RAID 5/6, configurations that tolerate one or more disk failures. One WD 8 TB shuckable external hard drive costs $140 but is slower than a 7200 RPM HDD; 70 of them cost $9800. A normal 8 TB HDD with higher I/O speed costs $200; 70 of them cost $14000. We may add NVMe SSDs for faster buffering, and a 4 TB SSD costs $800, so prepare to spend some more. This configuration can (hopefully) reach parallel I/O speeds of around 5-6 GB/s, and more if the NVMe buffers are configured correctly and meaningfully.</p>
<p>Cost for the motherboard and RAID connectivity: maybe around a couple thousand dollars, plus another few thousand for motherboards and cooling, summing to perhaps $4000 or more. Add another $1500 per additional socket.</p>
<p>Total for pure hardware: minimum $33700 with the EPYC 7502P and 128 GB DDR4 ECC DIMMs; minimum $30800 with 2x EPYC 7352 and 64 GB DDR4 ECC DIMMs; minimum $40524 with 2x Xeon Gold 6238 and Optane mixed with DDR4. Note that this is the bare minimum, and real-world parts costs will certainly be higher.</p>
<p>This does not include electricity. Two CPUs draw under 400 W, closer to 300 W; one draws under 200 W. 70 HDDs draw around 560 W if we assume 8 W per HDD. With no GPU, the system will draw around 900 W in total. Assuming $0.2/kWh, running for 6 months will cost roughly $800. So the whole summed bare minimum is expected to be around $32000; I would advise preparing around $50000 if starting from scratch. If you have no substantial experience with computer hardware, you will also have to hire someone to set everything up.</p>
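<p>Spelling out the electricity arithmetic (all inputs are the rough assumptions from this paragraph, not measurements):</p>

```python
# Rough assumptions from the text: ~900 W system, ~6 months, $0.2/kWh.
system_watts = 900            # two CPUs + 70 HDDs, no GPU
price_per_kwh = 0.20          # USD
hours = 24 * 182.5            # roughly 6 months

cost = system_watts / 1000 * hours * price_per_kwh
assert round(cost) == 788     # i.e. "roughly $800"
```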
<p>I don’t think many people would be willing to do this from scratch, apart from extremely wealthy people who happen to be interested in mathematics; it is more realistic for people who do computing work professionally and thus already have access to the hardware.</p>
<h2 id="cloud-computing">Cloud Computing?</h2>
<p>I also thought about cloud computing, since <a href="https://cloud.google.com/blog/products/compute/calculating-31-4-trillion-digits-of-archimedes-constant-on-google-cloud">Google</a>’s <a href="https://en.wikipedia.org/wiki/Emma_Haruka_Iwao">Emma Haruka Iwao</a> achieved the <a href="http://www.numberworld.org/blogs/2019_3_14_pi_record/">world record of Pi</a> using the Google Cloud Platform <code class="language-plaintext highlighter-rouge">m1-megamem-96</code> (formerly <code class="language-plaintext highlighter-rouge">n1-megamem-96</code>) instance. The upside of cloud computing is that it is scalable, and the required man-hours are far fewer if the time of whoever is doing this is expensive. The downside is that we must use the provided compute nodes as-is and thus cannot optimize as much as when building our own system.</p>
<p>We can use Ubuntu 18.04 or CentOS 8, so we don’t have to pay for the OS. For GCP, the new <code class="language-plaintext highlighter-rouge">m1-ultramem-80</code> looks perfect: around $6,430 per month with the sustained-usage discount. The sole-tenant node <code class="language-plaintext highlighter-rouge">m1-node-96-1433</code>, which has less RAM, is around $500 cheaper at roughly $5,900 per month. If the computation lasts 6 months, that alone is already comparable to the whole cost of buying the hardware, except you don’t get to keep or sell anything.
The real problem is not the compute node. Around 460 TiB of storage a month costs $31,260. Even if actual usage averages roughly 2/3 of that because the peak isn’t persistent, it still costs $20,840 a month on average.</p>
<p>For AWS, the <code class="language-plaintext highlighter-rouge">x1e.16xlarge</code> costs around $9,800 a month. Dedicated hosts also exist but have inadequate CPU-core-to-RAM ratios. An Amazon EBS volume has a maximum size of 16 TB (around $700 per month each), so we require an average of 15 of them. That adds up to over $20,000 a month.</p>
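Tallying the AWS figures above (all prices are this post’s rough estimates at the time of writing, not current AWS rates):

```python
# Monthly AWS estimate: one x1e.16xlarge instance plus ~15 EBS volumes,
# using the per-item prices quoted in the text above.
instance_per_month = 9_800    # x1e.16xlarge, per month
volumes = 15                  # average number of 16 TB EBS volumes
per_volume = 700              # per volume, per month

total = instance_per_month + volumes * per_volume
print(total)  # 20300, i.e. "over $20,000 a month"
```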
<p>Both of these fees could be reduced by using a different storage service, but it would still cost far more than assembling a system from scratch.</p>
<p>You can now see that setting a Pi world record with cloud providers is a huge expenditure and is impractical unless the resources are sponsored by AWS or Google themselves. Hiring people to set up a system is clearly cheaper.</p>
<h2 id="hpc-clusters">HPC Clusters</h2>
<p>This is now the most plausible form of computing resource that could work. Distributed file systems are already deployed and designed to handle large data at high throughput, and there are so many nodes that occupying one for a month has little impact. This was the case for the world record by <a href="https://pi2e.ch/blog">Dr. Peter Trueb</a>, with an HPC cluster supported by the high-energy physics research instrument manufacturer <a href="https://www.dectris.com">Dectris</a>. This is efficient in the sense that many HPC clusters sit idle and can easily be utilized for a few months, and they also house distributed file systems designed for parallelization. File systems like the <a href="https://docs.ceph.com/docs/master/rbd/">Ceph RADOS Block Device</a> or <a href="https://www.beegfs.io/content/">BeeGFS</a> can provide a very easy path to RAID-level parallel file I/O once deployed.</p>
<p>A new kind of computing framework for Pi could also work. Instead of relying on secondary storage, connecting supercomputer nodes over recent high-speed <a href="https://en.wikipedia.org/wiki/InfiniBand">Mellanox InfiniBand</a> and spreading the required 500 TB of RAM across hundreds to thousands of nodes could work out, since InfiniBand is definitely faster than SSDs or HDDs. Because supercomputers have high CPU-core-to-RAM ratios, this would be inefficient for y-cruncher workloads and leave many CPU cores idle, but it could reduce the total compute time dramatically.</p>
<p>Academic HPC clusters are expected to be very efficient, as long as someone is up for the task. Cost estimates are (obviously) unavailable, since every cluster and supercomputer is different.</p>
<h2 id="conclusion">Conclusion</h2>
<p>A world record is a world record; it wouldn’t be called one if everyone could easily do it. I have introduced three main ways the world record of Pi could be attempted, and all of them share the same problem: the I/O wall. The 4 GHz power wall was once the bottleneck of numerical computation, and it was solved by multiprocessing. The I/O bottleneck now stands in the way of any reasonable method of breaking the current barrier, and it will keep getting harder to set records as long as the disk speeds of economical media such as HDDs stay this way. SSDs are still expensive, and their write endurance is insufficient for the extreme read/write load y-cruncher demands, so SSDs can corrupt during the computation or end up literally single-use. Storage technology must improve for the pursuit of the world record of Pi to continue as rapidly as before.</p>
<p><em>Summary:</em> Here I make a qualitative cost/time estimate on the feasibility and what is required for computing the next world record for Pi and the necessary other steps to achieve this.</p>
<hr />
<p><strong>Apéry’s Constant</strong> (2020-07-28): <a href="https://ehfd.github.io/world-record/aperys-constant">https://ehfd.github.io/world-record/aperys-constant</a></p>
<h2 id="the-apérys-constant">The Apéry’s Constant</h2>
<p>Please cite:<br />
Kim, S. Normality Analysis of Current World Record Computations for Catalan’s Constant and Arc Length of a Lemniscate with a=1. arXiv Preprint <a href="https://arxiv.org/abs/1908.08925">arXiv:1908.08925</a><br />
if this article or the calculated digits were useful.</p>
<p>This world record computation of 1,200,000,000,100 digits by Seungmin Kim was done from Thu May 21 15:11:49 2020 to Mon Jun 22 08:38:33 2020 using the Wedeniwski (1998) algorithm. As with Catalan’s Constant, I then verified the calculation using the Amdeberhan-Zeilberger (1997) algorithm, from Wed Jun 24 09:02:14 2020 to Sun Jul 26 22:22:36 2020.<br />
Validation file generated by y-cruncher v0.7.8 Build 9506 for computation, and y-cruncher v0.7.8 Build 9506 for the verification run:<br />
Computation: <a href="https://web.archive.org/web/20200810061235/http://www.numberworld.org/y-cruncher/records/2020_6_22_zeta3.txt">https://web.archive.org/web/20200810061235/http://www.numberworld.org/y-cruncher/records/2020_6_22_zeta3.txt</a><br />
Verification: <a href="https://web.archive.org/web/20200810062529/http://www.numberworld.org/y-cruncher/records/2020_7_26_zeta3.txt">https://web.archive.org/web/20200810062529/http://www.numberworld.org/y-cruncher/records/2020_7_26_zeta3.txt</a></p>
\[\zeta(3) = \sum_{n=1}^\infty\frac{1}{n^3} = \lim_{n \to \infty}\left(\frac{1}{1^3} + \frac{1}{2^3} + \cdots + \frac{1}{n^3}\right)\]
<p>The definition of Apéry’s Constant (<a href="https://en.wikipedia.org/wiki/Ap%C3%A9ry%27s_constant">Wikipedia</a>)</p>
<p><a href="https://en.wikipedia.org/wiki/Ap%C3%A9ry%27s_constant">Apéry’s Constant</a> is defined by the equation above, where ζ is the <a href="https://en.wikipedia.org/wiki/Riemann_zeta_function">Riemann zeta function</a>. The zeta function is widely regarded as the key to many unsolved problems in mathematics, physics, and chemistry, and is typically taught to junior and senior mathematics undergraduates. The constant is named after <a href="https://en.wikipedia.org/wiki/Roger_Ap%C3%A9ry">Roger Apéry</a>, the mathematician who proved that \(\zeta(3)\) is irrational and generated great insight into the zeta function itself. Because Apéry’s Constant is irrational, we know its digits continue indefinitely. Take a look at the <a href="https://mathworld.wolfram.com/AperysConstant.html">Wolfram MathWorld</a> entry for the mathematical details.</p>
<p>Interesting fact: <a href="https://sg.linkedin.com/in/sebastian-wedeniwski">Dr. Sebastian Wedeniwski</a>, the discoverer of the Wedeniwski (1998) algorithm, was the person behind <a href="https://en.wikipedia.org/wiki/ZetaGrid">ZetaGrid</a>, which was one of the largest <a href="https://en.wikipedia.org/wiki/Distributed_computing">distributed computing</a> projects of the early 2000s and had the purpose of finding roots of the <a href="https://en.wikipedia.org/wiki/Riemann_zeta_function">zeta function</a> to test whether there are any counterexamples to the <a href="https://en.wikipedia.org/wiki/Riemann_hypothesis">Riemann hypothesis</a>. He is now the Chief Information Officer (an executive position) at Standard Chartered Bank in Singapore after 18 years at IBM, currently in charge of all information management of the multinational banking group. <a href="http://www.numberworld.org/">Mr. Alexander Yee</a> (the person who created <a href="http://www.numberworld.org/y-cruncher/">y-cruncher</a>) also works at Citadel Securities, a huge hedge fund located in Chicago, after his time at Google. I guess people from mathematical computing disciplines meet in the financial industry.</p>
<p>As with all other mathematical constants, I used y-cruncher by Mr. Alexander J. Yee for this computation. The program is commonly used for stress testing and benchmarking overclocked PC builds (it obviously performs a very rigorous computation), alongside the fellow mathematical computing program Prime95 and the linear algebra program Linpack.<br />
It was a demanding constant to compute, given that I doubled the number of digits from all my previous computations, but it was pretty easy compared to the messy I/O bottleneck of the <a href="/world-record/euler-mascheroni-constant/">Euler-Mascheroni Constant</a>.</p>
<p>Computation:<br />
System information:<br />
Operating System: Linux 3.10.0-327.36.1.el7.x86_64 x86_64<br />
Processor(s):<br />
Name: Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz<br />
Logical Cores: 48<br />
Physical Cores: 24<br />
Sockets: 2<br />
NUMA Nodes: 2<br />
Base Frequency: 2,299,961,280 Hz</p>
<p>My first computation used two Xeon CPU sockets from the Haswell era (which support the AVX2 SIMD vector operations that programs like y-cruncher depend on), as I decided the Xeon Scalable Skylake Purley processors supporting AVX-512 would be overkill, since I was going to hit I/O bottlenecks anyway. Using just one CPU socket would not have made much difference in time. I further optimized I/O throughput by changing the Bytes/Seek parameter and allocating a larger I/O buffer to reach RAID-level performance in my file systems.</p>
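To see why tuning Bytes/Seek matters, here is a toy throughput model, not y-cruncher’s actual I/O code; the 8 ms seek time and 150 MB/s sequential rate are illustrative assumptions, not measurements of my drives. Each request pays a fixed seek penalty, so larger sequential chunks amortize it.

```python
def effective_throughput(bytes_per_seek: float,
                         seek_s: float = 0.008,
                         seq_bps: float = 150e6) -> float:
    """Effective bytes/second when every chunk of I/O costs one seek."""
    transfer_s = bytes_per_seek / seq_bps
    return bytes_per_seek / (seek_s + transfer_s)

for mib in (1, 16, 64):
    bps = effective_throughput(mib * 2**20)
    print(f"{mib:3d} MiB/seek -> {bps / 1e6:6.1f} MB/s")
```

Larger chunks approach the drive’s sequential limit, which is the intuition behind raising Bytes/Seek on HDD arrays.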
<p>Start Date: Thu May 21 15:11:49 2020<br />
End Date: Mon Jun 22 08:38:33 2020<br />
Total Computation Time: 2702135.464 seconds<br />
Start-to-End Wall Time: 2741203.782 seconds<br />
CPU Utilization: 997.73 % + 33.91 % kernel overhead<br />
Multi-core Efficiency: 20.79 % + 0.71 % kernel overhead</p>
<p>The multi-core efficiency did improve compared to the previous constants (to roughly the level of a high-end desktop CPU, because there was more RAM than before), but it still did not reach the utilization the CPUs are capable of (Dr. Ian Cutress’s <a href="/world-record/lemniscate-constant/">Lemniscate Constant</a> calculation had a multi-core efficiency of 94.04 % and CPU utilization of 9027.61 %, meaning the CPU was not bottlenecked by other factors). This is also the first time I used the Cilk Plus work-stealing multiprocessing framework along with the dynamic version of y-cruncher.</p>
<p>Memory:<br />
Working Memory: 499,483,876,096 ( 465 GiB)<br />
Total Memory: 499,786,337,280 ( 465 GiB)<br />
Logical Largest Checkpoint: 2,295,791,263,000 (2.09 TiB)<br />
Logical Peak Disk Usage: 7,799,479,894,560 (7.09 TiB)<br />
Logical Disk Bytes Read: 795,796,887,749,864 ( 724 TiB)<br />
Logical Disk Bytes Written: 696,627,866,292,616 ( 634 TiB)</p>
<p>Disk I/O decreased by about a fourth compared to the <a href="/world-record/euler-mascheroni-constant/">Euler-Mascheroni Constant</a> because the algorithm is easier, which, along with more RAM, contributed to a faster computation. Disk writes were overall similar to <a href="/world-record/catalans-constant/">Catalan’s Constant</a>, since there were twice the digits at half the algorithm difficulty.<br />
One caveat is that HDD I/O speeds remain a severe bottleneck relative to virtually every other component; having Optane DIMMs, or simply more ordinary RAM, could speed up the computation greatly.</p>
<p>Verification:<br />
System information:<br />
Operating System: Linux 3.10.0-693.21.1.el7.x86_64 x86_64<br />
Processor(s):<br />
Name: Intel(R) Xeon(R) Gold 5220 CPU @ 2.20GHz<br />
Logical Cores: 72<br />
Physical Cores: 36<br />
Sockets: 2<br />
NUMA Nodes: 2<br />
Base Frequency: 2,194,831,008 Hz</p>
<p>Start Date: Wed Jun 24 09:02:14 2020<br />
End Date: Sun Jul 26 22:22:36 2020<br />
Total Computation Time: 2620847.662 seconds<br />
Start-to-End Wall Time: 2812821.324 seconds<br />
CPU Utilization: 934.98 % + 33.72 % kernel overhead<br />
Multi-core Efficiency: 12.99 % + 0.47 % kernel overhead</p>
<p>Memory:<br />
Working Memory: 536,619,505,280 ( 500 GiB)<br />
Total Memory: 536,870,912,000 ( 500 GiB)<br />
Logical Largest Checkpoint: 2,288,826,179,160 (2.08 TiB)<br />
Logical Peak Disk Usage: 7,793,874,082,416 (7.09 TiB)<br />
Logical Disk Bytes Read: 784,068,383,106,800 ( 713 TiB)<br />
Logical Disk Bytes Written: 687,145,764,017,680 ( 625 TiB)</p>
<p>I changed things around for the verification computation: slightly more RAM and two Cascade Lake Xeon CPU sockets that support AVX-512. CPU utilization stayed similar even though the number of cores increased, which is explained by the slightly larger RAM. Total computation time and disk R/W were slightly lower despite a less efficient algorithm.</p>
<p>Overall, more RAM and better optimization, based on insight into both the hardware and the software, made for a better computing experience than before. Apéry’s Constant is a very important mathematical constant in expanding the horizon of human knowledge, and I hope computing and sharing the results gives mathematicians material to build further insight on.</p>
<p>If you want to take a look at the digits for the <a href="https://mathworld.wolfram.com/AperysConstant.html">Apéry’s Constant</a>, you can download it from <a href="https://archive.org/details/apery_200726">This Link</a> (Almost 2 TB total but don’t worry, it will just redirect to a registry with a link to download).</p>
<p><strong>Note that digits are released as an <a href="https://creativecommons.org/licenses/by-nc-nd/4.0/">Attribution-NonCommercial-NoDerivatives 4.0 International</a> License, meaning no commercial purposes and you cannot distribute a remixed, transformed, or built upon version without my consent. You must also give appropriate credit, provide a link to the license, and indicate if changes were made even if it is not a prohibited use case.</strong></p>
<p>Archive for computation results in the y-cruncher website: <a href="https://web.archive.org/web/20200810060943/http://www.numberworld.org/y-cruncher/">https://web.archive.org/web/20200810060943/http://www.numberworld.org/y-cruncher/</a><br />
Special thanks to Mr. Alexander J. Yee for developing and releasing y-cruncher and providing advice, and the <a href="https://archive.org/">Internet Archive</a> for hosting the computed digits.</p>
<p><em>Summary:</em> How I set the world record for Apéry’s Constant.</p>
<hr />
<p><strong>ISC High Performance 2020 Digital</strong> (2020-06-26): <a href="https://ehfd.github.io/computing/isc-high-performance-2020-digital">https://ehfd.github.io/computing/isc-high-performance-2020-digital</a></p>
<p><a href="https://www.isc-hpc.com/">ISC High Performance</a> <a href="https://www.isc-hpc.com/isc-2020-preview.html">2020 Digital</a> was held virtually online because of COVID-19, from June 22nd to June 25th. The conference is one of the most popular and highly regarded high-performance computing conferences in the world, and you can get insight and experience from both academic and industry technologies related to HPC.</p>
<p>Some conclusions I want to draw are the following. With <a href="https://en.wikipedia.org/wiki/Fugaku_(supercomputer)">Fugaku</a>, an ARM-architecture <a href="https://en.wikipedia.org/wiki/Supercomputer">supercomputer</a>, taking first place on the <a href="https://www.top500.org/">TOP500</a>, the energy-efficient ARM architecture shows great potential for high-performance systems. Package managers now exist for non-x86-64 supercomputers, reducing the time wasted debugging compilations, and an architecture once thought fit only for smartphones and tablets now has what it takes for servers and desktops. This is strongly supported by Amazon AWS deploying ARM CPU cloud instances at a more effective price point than Intel and AMD x86-64 CPUs. Intel is aiming for an era beyond the distinction between CPUs and GPUs, merging the two, which can lower latency in the current computing environment. AMD is catching up to Intel in the server CPU industry with the new EPYC series after the success of the Ryzen desktop CPUs, offering a great advantage in cores per socket and an innovative unified memory access design. NVIDIA, after acquiring the industry leader Mellanox, is promoting the <a href="https://en.wikipedia.org/wiki/InfiniBand">InfiniBand</a> interconnect for clusters and <a href="https://en.wikipedia.org/wiki/Supercomputer">supercomputers</a>, stressing the importance of I/O in HPC architectures, while its new Tesla A100 GPU accelerator focuses on tensor processing mainly for machine learning. Fugaku was deployed earlier than scheduled, and supercomputers worldwide (mostly assisted by GPU accelerators) are running molecular dynamics simulations and machine learning to find candidate COVID-19 medications; the speed gained from parallel computing in HPC will be a game changer for sure.
I have experience with protein structural bioinformatics, and I will introduce the field at a later time.</p>
<p>Publicly available videos of the conference:</p>
<!-- Courtesy of embedresponsively.com //-->
<div class="responsive-video-container">
<iframe src="https://www.youtube-nocookie.com/embed/TPb2-sLgL8g" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe>
</div>
<!-- Courtesy of embedresponsively.com //-->
<div class="responsive-video-container">
<iframe src="https://www.youtube-nocookie.com/embed/KFlR4EwSUlc" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe>
</div>
<!-- Courtesy of embedresponsively.com //-->
<div class="responsive-video-container">
<iframe src="https://www.youtube-nocookie.com/embed/maaJDPtp-Kk" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe>
</div>
<p>This is from 2019 but it is a very interesting keynote video:</p>
<!-- Courtesy of embedresponsively.com //-->
<div class="responsive-video-container">
<iframe src="https://www.youtube-nocookie.com/embed/Hkx7uRd0WW8" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe>
</div>
<p><em>Summary:</em> ISC High Performance 2020 Digital was held virtually online because of COVID-19.</p>
<hr />
<p><strong>Algorithms and Significance of Major Mathematical Constants: Part 2</strong> (2020-05-22): <a href="https://ehfd.github.io/world-record/algorithms-and-significance-of-major-mathematical-constants-part-2">https://ehfd.github.io/world-record/algorithms-and-significance-of-major-mathematical-constants-part-2</a></p>
<p>Note: this post is appropriate for people with at least a high school mathematics or calculus background, although anyone enthusiastic about mathematics can understand it if Google and Wikipedia are your friends.
I am also not a professional mathematician, so this post may include inaccuracies.
Also, this post is meant to supplement Mr. Alexander Yee’s <a href="http://www.numberworld.org/y-cruncher/">y-cruncher</a> website in plainer words, not replace it; you will need his website for the crucial technical details required to set world records.</p>
<p>If you are interested in actually setting a world record with y-cruncher, read <a href="/world-record/optimizing-y-cruncher-to-actually-set-world-records/">This Post</a> for an in-depth explanation of optimizing the configurations of y-cruncher as well.</p>
<p>If you did not read the <a href="/world-record/algorithms-and-significance-of-major-mathematical-constants-part-1/">First Post</a>, it would help to read it first.</p>
<p>Difficulties of mathematical constants: (Excerpt from <a href="http://www.numberworld.org/y-cruncher/">y-cruncher</a> v0.7.8.9506, Mr. Yee thankfully let me post this)</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Compute a Constant: (in ascending order of difficulty to compute)

  #   Constant                     Value          Approximate Difficulty*

Fast Constants:
  0   Sqrt(n)                                     1.46
  1   Golden Ratio               = 1.618034...    1.46
  2   e                          = 2.718281...    3.88 / 3.88

Moderate Constants:
  3   Pi                         = 3.141592...    13.2 / 19.9
  4   Log(n)                                      > 35.7
  5   Zeta(3) (Apery's Constant) = 1.202056...    62.8 / 65.7
  6   Catalan's Constant         = 0.915965...    78.0 / 105.
  7   Lemniscate                 = 5.244115...    60.4 / 124. / 154.

Slow Constants:
  8   Euler-Mascheroni Constant  = 0.577215...    383. / 574.

Other:
  9   Euler-Mascheroni Constant (parameter override)
 10   Custom constant with user-defined formula.

*Actual numbers will vary. Radix conversion = 1.00
</code></pre></div></div>
<p>Note that there are more mathematical constants defined in the custom formula files shipped with the executable, but they involve more complicated math; if you really want to set custom-formula records, you will eventually need to learn more as you research them.</p>
<p>I will note whether your name can go on Wikipedia if you set a record for each constant, but <strong>please don’t start this from scratch for the sake of becoming famous or irrelevantly adding a line to your CV, because it isn’t worth it</strong>. I did it to test the long-term stability of a high-performance system that I use for actual production purposes in research, where nothing must go wrong, and to assay my own system administration competence. In case you are looking at this for fame: having nice hardware and a bunch of hard drives won’t make you famous. I simply had spare time when I set the records and wanted to do something slightly more meaningful than the default stress test <a href="http://www.numberworld.org/y-cruncher/">y-cruncher</a> provides. This is also why I upload every world record computation I do: to help any mathematicians in the future.</p>
<h4 id="zeta3-aperys-constant">Zeta(3) (Apery’s Constant)</h4>
<p><a href="https://en.wikipedia.org/wiki/Ap%C3%A9ry%27s_constant">Wikipedia</a>
<a href="/world-record/aperys-constant/">My Post on the World Record</a></p>
<p>First, the <a href="https://en.wikipedia.org/wiki/Riemann_zeta_function">Riemann zeta function</a> \(\zeta(s) =\sum_{n=1}^\infty\frac{1}{n^s}\).
This expression converges when the real part of the <a href="https://en.wikipedia.org/wiki/Complex_number">complex number</a> \(s\) is larger than 1, but can be expanded beyond this range with <a href="https://en.wikipedia.org/wiki/Riemann_zeta_function#Representations">other series and integral representations</a>.
\(\zeta(2) =\sum_{n=1}^\infty\frac{1}{n^2} = \frac{1}{1^2} + \frac{1}{2^2} + \frac{1}{3^2} + \cdots\) was called the <a href="https://en.wikipedia.org/wiki/Basel_problem">Basel problem</a>, and was proved by <a href="https://en.wikipedia.org/wiki/Leonhard_Euler">Leonhard Euler</a> to equal \(\frac{\pi^{2}}{6}\).</p>
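Euler’s result is easy to check numerically. A minimal sketch: the `1/N` tail correction below is a standard truncation estimate (my addition for faster convergence), not part of any record algorithm.

```python
import math

# Partial sum of 1/n^2 up to N, plus ~1/N to approximate the
# truncated tail sum_{n>N} 1/n^2 (whose leading term is 1/N).
N = 100_000
partial = sum(1 / n**2 for n in range(1, N + 1))
estimate = partial + 1 / N

print(estimate, math.pi**2 / 6)  # the two values agree to ~10 digits
```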
<p>The values of the <a href="https://en.wikipedia.org/wiki/Riemann_zeta_function">Riemann zeta function</a> have been involved in all kinds of unsolved problems in the physical (especially dynamics and quantum), mathematical, and chemical sciences, and thus their approximate values are used in industry and academia all the time.</p>
<p>Now, Apery’s Constant \(\zeta(3) = \sum_{n=1}^\infty\frac{1}{n^3} = \lim_{n \to \infty}\left(\frac{1}{1^3} + \frac{1}{2^3} + \cdots + \frac{1}{n^3}\right)\) was proved irrational (though it is unknown whether it is transcendental) by <a href="https://en.wikipedia.org/wiki/Roger_Ap%C3%A9ry">Roger Apéry</a> in 1978.
As much as the <a href="https://en.wikipedia.org/wiki/Riemann_zeta_function">Riemann zeta function</a> is involved in unsolved problems of science, the value of Apery’s Constant has provided insight into many physical and mathematical problems.
For example, \(\frac{1}{\zeta(3)}\) is the probability that three random positive integers are <a href="https://en.wikipedia.org/wiki/Coprime_integers">relatively prime</a>, and the constant appears in the electron’s gyromagnetic ratio via quantum electrodynamics, in random minimum spanning trees in data structures, and in the <a href="https://en.wikipedia.org/wiki/Debye_model">Debye model</a> and the <a href="https://en.wikipedia.org/wiki/Stefan%E2%80%93Boltzmann_law">Stefan–Boltzmann law</a> in thermodynamics.</p>
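The coprimality fact can be checked with a quick Monte Carlo sketch; the sampling bound of \(10^6\) and the sample count are arbitrary choices of mine, and the result only approximates \(1/\zeta(3) \approx 0.8319\).

```python
import math
import random

random.seed(0)  # fixed seed for reproducibility

def coprime_fraction(samples: int = 100_000, bound: int = 10**6) -> float:
    """Fraction of random triples (a, b, c) with gcd(a, b, c) == 1."""
    hits = 0
    for _ in range(samples):
        a, b, c = (random.randint(1, bound) for _ in range(3))
        if math.gcd(math.gcd(a, b), c) == 1:
            hits += 1
    return hits / samples

p = coprime_fraction()
print(p)  # close to 1 / zeta(3) = 1 / 1.2020569... ~ 0.832
```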
<p>It is a very interesting and significant constant for which to set a world record, and <a href="https://twitter.com/iancutress">Dr. Ian Cutress</a> holds the record as of May 2020.</p>
<p>Rapidly converging series were found by <a href="http://www.gutenberg.org/cache/epub/2583/pg2583.html">Dr. Sebastian Wedeniwski</a> in 1998. The series representation is less trivial than Pi’s, demanding more computation and making this one of the more intensive constants. You will get your name on Wikipedia, but the difficulty rises rapidly from here, because these constants have more unknown properties than known ones and the algorithms are less efficient.</p>
<p>Interesting fact: <a href="https://sg.linkedin.com/in/sebastian-wedeniwski">Dr. Sebastian Wedeniwski</a>, the discoverer of the Wedeniwski (1998) algorithm, was the person behind <a href="https://en.wikipedia.org/wiki/ZetaGrid">ZetaGrid</a>, which was one of the largest <a href="https://en.wikipedia.org/wiki/Distributed_computing">distributed computing</a> projects of the early 2000s and had the purpose of finding roots of the <a href="https://en.wikipedia.org/wiki/Riemann_zeta_function">zeta function</a> to test whether there are any counterexamples to the <a href="https://en.wikipedia.org/wiki/Riemann_hypothesis">Riemann hypothesis</a>. He is now the Chief Information Officer (an executive position) at Standard Chartered Bank in Singapore after 18 years at IBM, currently in charge of all information management of the multinational banking group. <a href="http://www.numberworld.org/">Mr. Alexander Yee</a> (the person who created <a href="http://www.numberworld.org/y-cruncher/">y-cruncher</a>) also works at Citadel Securities, a huge hedge fund located in Chicago, after his time at Google. I guess people from mathematical computing disciplines meet in the financial industry.</p>
<h4 id="catalans-constant">Catalan’s Constant</h4>
<p><a href="https://en.wikipedia.org/wiki/Catalan%27s_constant">Wikipedia</a></p>
<p><a href="/world-record/catalans-constant/">My Post on the World Record</a></p>
\[G = \beta(2) = \sum_{n=0}^{\infty} \frac{(-1)^{n}}{(2n+1)^2} = \frac{1}{1^2} - \frac{1}{3^2} + \frac{1}{5^2} - \frac{1}{7^2} + \frac{1}{9^2} - \cdots\]
<p>Catalan’s constant is related to many identities in integral calculus and to special functions that matter in combinatorics, such as the <a href="https://en.wikipedia.org/wiki/Trigamma_function">trigamma function</a>. Many integral identities evaluate to values involving Catalan’s constant, and they are all listed in the <a href="https://en.wikipedia.org/wiki/Catalan%27s_constant">Wikipedia</a> entry. It is a simple series, yet we know almost nothing about its properties, including irrationality or transcendence.</p>
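A minimal sketch of summing the defining series directly. It converges slowly (the error of a truncated alternating series is bounded by the first omitted term, here about \(1/(2N)^2\)), which is why record computations rely on much faster accelerated series instead.

```python
def catalan_partial(terms: int) -> float:
    """Partial sum of sum_{n>=0} (-1)^n / (2n+1)^2."""
    return sum((-1) ** n / (2 * n + 1) ** 2 for n in range(terms))

# A million terms of the defining series only buys ~12 correct digits:
G = catalan_partial(1_000_000)
print(G)  # 0.9159655941... (the known value of Catalan's constant)
```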
<p>Since the definition itself is an intuitive series expansion, derived rapidly converging series are abundant. The most efficient algorithms were discovered after 2010, one by Khodabakhsh Hessami Pilehrood and Tatiana Hessami Pilehrood and another by Jesús Guillera. It is slightly harder to compute than Apéry’s constant; the main extra wrinkle is the binomial coefficients inside the series. You can get your name on Wikipedia, as it is important.</p>
<h4 id="lemniscate-constant">Lemniscate Constant</h4>
<p><a href="https://en.wikipedia.org/wiki/Gauss%27s_constant#Lemniscate_constants">Wikipedia</a></p>
<p><a href="/world-record/lemniscate-constant/">My Post on the World Record</a></p>
<p><img src="https://user-images.githubusercontent.com/8457324/82198834-d9875f00-9937-11ea-949a-09cc2d85a78a.png" alt="Lemniscate" />
This is a <a href="https://en.wikipedia.org/wiki/Lemniscate_of_Bernoulli">lemniscate</a> (image: https://commons.wikimedia.org/wiki/File:Lemniscate_of_Booth.png). It looks like a dumbbell, was defined by <a href="https://en.wikipedia.org/wiki/Jacob_Bernoulli">Jakob Bernoulli</a>, and first appears in precalculus.</p>
<p>Cartesian Coordinates: \((x^2 + y^2)^2 = 2a^2 (x^2 - y^2)\)</p>
<p>Polar Coordinates: \(r^2 = 2a^2 \cos 2\theta\)</p>
<p>Let’s start with Gauss’s constant to define the constant related to the lemniscate.</p>
\[G = \frac{1}{\operatorname{agm}\left(1, \sqrt{2}\right)} = 0.8346268\dots\]
<p>or</p>
\[G = \frac{2}{\pi}\int_0^1\frac{dx}{\sqrt{1 - x^4}}\]
<p>If you don’t know what AGM is, see the <a href="/world-record/algorithms-and-significance-of-major-mathematical-constants-part-1/#logn-logarithm">Logarithm section of this link</a>.</p>
<p>This can also be expressed with the <a href="https://en.wikipedia.org/wiki/Gamma_function">gamma function</a>, a continuous extension of the factorial (\(n! = n \cdot (n-1) \cdot \cdots \cdot 2 \cdot 1\)), which is originally a function on the natural numbers.</p>
<p>\(\Gamma(n) = (n-1)!\) for any positive integer \(n\), and \(\Gamma(z) = \int_0^\infty x^{z-1} e^{-x}\,dx, \ \qquad \Re(z) > 0\)</p>
\[G = \frac{\left[\Gamma\left( \tfrac{1}{4}\right)\right]^2}{2\sqrt{ 2\pi^3}}\]
<p>Gauss’s constant is transcendental because of this property. It is also related to the <a href="https://en.wikipedia.org/wiki/Theta_function#Jacobi_theta_function">Jacobi theta function</a>.</p>
<p>Then the first and second lemniscate constants are defined as the following.</p>
<p>\(L_1 = \pi G\), \(L_2 = \frac{1}{2G}\)</p>
<p>The lemniscate constant <a href="http://www.numberworld.org/y-cruncher">y-cruncher</a> calculates is, by its full name, the arc length of a lemniscate with a=1 (OEIS <a href="https://oeis.org/A064853">A064853</a>). It is the length of a lemniscate whose \(a\) in the Cartesian or polar coordinate equations is 1.</p>
<p>It is defined as \(s =4\int_0^1\frac{dt}{\sqrt{1-t^4}} = 2L_1 = \frac{\pi}{L_2} = 2 \cdot \pi \cdot G = \frac{\left[\Gamma\left( \tfrac{1}{4}\right)\right]^2}{\sqrt{ 2\pi}}\) and it is the arc length of a lemniscate with a=1, where \(a\) is the constant you see above in the polar coordinates notation.</p>
<p>Overall, this is an interesting mathematical constant in geometry, but expect no name on Wikipedia.</p>
<p>The algorithms from here on get more complicated than the simple series you have seen so far.</p>
<p>The first and most recent algorithm is the AGM-Pi algorithm. Remember \(G = \frac{1}{\operatorname{agm}\left(1, \sqrt{2}\right)}\). Then \(s = \frac{2\pi}{\operatorname{agm}\left(1, \sqrt{2}\right)}\). So if we calculate \(\pi\) and the AGM value, we get the arc length of a lemniscate with a=1. It looks simple, and it is actually the easiest one to calculate <strong>provided you calculate everything in RAM</strong>. Remember that the AGM algorithm has bad <a href="https://en.wikipedia.org/wiki/Locality_of_reference">memory locality</a>. If we use HDD swap, this problem becomes more serious and the algorithm becomes slower than the traditional ones. I overlooked the note that swap computations could be slower with it than with traditional algorithms and ended up spending more time by using this algorithm.</p>
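A double-precision sketch of the AGM-Pi idea (y-cruncher of course does this with arbitrary-precision arithmetic; this only shows the shape of the computation):

```python
import math

def agm(a, b):
    # Arithmetic-geometric mean; the iteration converges quadratically,
    # roughly doubling the number of correct digits each step.
    while abs(a - b) > 1e-15:
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return a

# s = 2*pi / agm(1, sqrt(2)) = arc length of a lemniscate with a = 1
s = 2.0 * math.pi / agm(1.0, math.sqrt(2.0))

# Cross-check against the gamma-function identity Gamma(1/4)^2 / sqrt(2*pi)
check = math.gamma(0.25) ** 2 / math.sqrt(2.0 * math.pi)
```

Only a handful of AGM iterations are needed at double precision, which is why this route is so attractive when everything fits in RAM.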
<p>A more traditional algorithm transforms the integral definition of Gauss’s constant. Recall \(G = \frac{2}{\pi}\int_0^1\frac{dx}{\sqrt{1 - x^4}}\). We can change this into a series expansion via the definition of the definite integral as a <a href="https://en.wikipedia.org/wiki/Riemann_sum">Riemann sum</a>. We can use the series expansion of the inverse (arc) function of the <a href="https://en.wikipedia.org/wiki/Lemniscatic_elliptic_function#Lemniscate_sine_and_cosine_functions">lemniscate sine function</a> derived from the definition of Gauss’s constant.</p>
<p>Gauss Formula: \(\text{Lemniscate} = 8 \operatorname{ArcSinlemn}\left(\tfrac{1}{2}\right) + 4 \operatorname{ArcSinlemn}\left(\tfrac{7}{23}\right)\)</p>
<p>Sebah’s Formula: \(\text{Lemniscate} = 8 \operatorname{ArcSinlemn}\left(\tfrac{2}{3}\right) - 4 \operatorname{ArcSinlemn}\left(\tfrac{7}{137}\right)\)</p>
<p>The first formula is simpler, and the second is used for verification when the AGM-Pi algorithm is not used. Because these are not simple series expansions, the difficulty increases.</p>
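The inverse lemniscate sine has a simple Maclaurin series, \(\operatorname{arcsl}(x) = \sum_{n \ge 0} \binom{2n}{n} 4^{-n} \frac{x^{4n+1}}{4n+1}\), which lets us check both formulae numerically in double precision (a sketch only; record computations use binary-splitting forms of such series):

```python
import math

def arcsl(x, terms=40):
    # Maclaurin series of the inverse lemniscate sine:
    # arcsl(x) = sum_{n>=0} C(2n, n) / 4^n * x^(4n+1) / (4n+1)
    return sum(
        math.comb(2 * n, n) / 4.0 ** n * x ** (4 * n + 1) / (4 * n + 1)
        for n in range(terms)
    )

gauss = 8 * arcsl(1 / 2) + 4 * arcsl(7 / 23)    # Gauss Formula
sebah = 8 * arcsl(2 / 3) - 4 * arcsl(7 / 137)   # Sebah's Formula
# Both should agree on the arc length of a lemniscate with a = 1.
```

The smaller the arguments, the faster the series converges per term, which is the same trade-off Machin-like formulas for \(\pi\) exploit.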
<h4 id="euler-mascheroni-constant">Euler-Mascheroni Constant</h4>
<p><a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant">Wikipedia</a></p>
<p><a href="/world-record/euler-mascheroni-constant/">My Post on the World Record</a></p>
<p>Dreadful. Just dreadful. Three times harder than the next hardest mathematical constant.</p>
\[\gamma = \lim_{n\to\infty}\left(-\ln n + \sum_{k=1}^n \frac1{k}\right) = \int_1^\infty\left(-\frac1x+\frac1{\lfloor x\rfloor}\right) dx\]
<p>where \({\lfloor x\rfloor}\) is the floor function.</p>
<p><img src="https://user-images.githubusercontent.com/8457324/82203832-ce83fd00-993e-11ea-9034-a97a618429ee.png" alt="" /></p>
<p>The area of the blue region converges to the Euler–Mascheroni constant. (<a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant">Wikipedia</a>)</p>
<p>This comes out predominantly while learning the <a href="https://en.wikipedia.org/wiki/Integral_test_for_convergence">integral test for convergence</a>.</p>
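For intuition on the difficulty: the defining limit converges very slowly, since the error of the partial expression \(H_n - \ln n\) is roughly \(\frac{1}{2n}\), so each additional correct digit costs ten times more terms. A quick double-precision sketch (my own naming):

```python
import math

def gamma_naive(n):
    # H(n) - ln(n); the error behaves like 1/(2n), so even a million
    # terms yield only about six correct digits.
    return sum(1.0 / k for k in range(1, n + 1)) - math.log(n)
```

This is why serious computations use the Brent-McMillan algorithm described below rather than the definition itself.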
<p>This is arguably the third most important mathematical constant, and it is the hardest to compute here. The \(\gamma\) in <a href="http://www.numberworld.org/y-cruncher">y-cruncher</a> is the Euler-Mascheroni constant, and Mr. Alexander Yee first made the program to compute this very constant.</p>
<p>There is no single AGM or series-expansion equation to compute this, and it is even more complicated than the Gauss Formula or Sebah’s Formula for the lemniscate. I think this was easily harder than Google’s Pi world record (50 times more digits than my Euler-Mascheroni constant record) if we rule out the fact that they needed more hard drives. It took as long as theirs, and it is as picky to manage. I asked <a href="https://twitter.com/iancutress">Dr. Ian Cutress</a> to verify this, and he got an ECC-corrected error, something I never saw in any of my own computations. Disk R/W is 1/3 of <a href="https://cloud.google.com/blog/products/compute/calculating-31-4-trillion-digits-of-archimedes-constant-on-google-cloud">Google’s record</a> while the digits are 1/50.</p>
<p>There are many papers that attempt to make the Brent-McMillan computation easier, both through new algorithms and through optimization of existing ones, but it remains hard.</p>
<p>The lemniscate constant already started to become a hard computation because it involves rational arithmetic on two different series expansions. Guess what: people have failed to find an algorithm better than <a href="http://www.ams.org/journals/mcom/1980-34-149/S0025-5718-1980-0551307-4/S0025-5718-1980-0551307-4.pdf">the one developed in 1980</a>. Because the constant is the limit of the difference between the harmonic series and the natural logarithm, we literally have to subtract the natural logarithm from the corresponding value of the harmonic series. This makes the computation three times harder than even the arc length of a lemniscate with a=1.</p>
<p>Brent-McMillan Algorithm:</p>
\[\gamma = \frac{A}{B} - \ln(n) + O(e^{-4n})\]
<p>where \(O(e^{-4n})\) is the <a href="https://en.wikipedia.org/wiki/Rate_of_convergence#Convergence_speed_for_discretization_methods">rate of convergence</a> in <a href="https://en.wikipedia.org/wiki/Big_O_notation">Big O notation</a>, \(A = \sum_{k=0}^{\infty} \left(\frac{n^k}{k!}\right)^2 H(k)\), \(B = \sum_{k=0}^{\infty} \left(\frac{n^k}{k!}\right)^2\), and \(H(n) = \sum_{k=1}^{n} \frac{1}{k}\) is the \(n\)-th harmonic number (a partial sum of the divergent series \(\zeta(1)\))</p>
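A toy double-precision rendition of the basic formula (real records run this via arbitrary-precision binary splitting; the choices of \(n\) and the term count here are only illustrative):

```python
import math

def euler_gamma_bm(n=10, terms=80):
    # Basic Brent-McMillan: gamma = A/B - ln(n) + O(e^(-4n)),
    # with A = sum (n^k/k!)^2 * H(k) and B = sum (n^k/k!)^2.
    A = B = H = 0.0
    t = 1.0                 # current value of n^k / k!
    for k in range(terms):
        if k > 0:
            H += 1.0 / k    # harmonic number H(k)
            t *= n / k
        A += t * t * H
        B += t * t
    return A / B - math.log(n)
```

With \(n = 10\) the truncation error \(O(e^{-40})\) is already below double precision; for trillions of digits, \(n\) must scale with the target precision, which is what makes the series so enormous.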
<p>Further refined Brent-McMillan algorithm with an increased rate of convergence (note that the rate of convergence is not squared, because the equation becomes more complicated):</p>
\[\gamma = \frac{A}{B} - \frac{C}{B^2} - \ln(n) + O(e^{-8n})\]
<p>where \(O(e^{-8n})\) is the <a href="https://en.wikipedia.org/wiki/Rate_of_convergence#Convergence_speed_for_discretization_methods">rate of convergence</a> in <a href="https://en.wikipedia.org/wiki/Big_O_notation">Big O notation</a>, \(A = \sum_{k=0}^{\infty} \left(\frac{n^k}{k!}\right)^2 H(k)\), \(B = \sum_{k=0}^{\infty} \left(\frac{n^k}{k!}\right)^2\), \(H(n) = \sum_{k=1}^{n} \frac{1}{k}\) is the \(n\)-th harmonic number, and \(C = \frac{1}{4n} \sum_{k=0}^{2n} \frac{((2k)!)^3}{(k!)^4 \cdot (16n)^{2k}}\).</p>
<p>It already looks hard, but it becomes even harder: this is a class of formulae that give successively better approximations, so the series changes with the choice of \(n\) for the expected precision. Because series \(A\) includes the harmonic numbers, it is a double summation, which makes a Binary Splitting recursion very hard to find and significantly more complicated than the others once found. Series \(A\), \(B\), and \(C\) have non-linear, irregular convergence behaviour that makes the computation trickier, and series \(C\) and \(H\) are divergent. The only other viable algorithm is Sweeney’s Method, which is easier but slower.</p>
<p>This constant has a bigger presence in mathematics than Catalan’s constant and appears in as many unsolved problems as Apéry’s constant, but while Apéry’s constant has been proved irrational, we know nothing about whether it is algebraic/transcendental or even irrational. <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant#Appearances">This</a> is a short list of some notable problems the Euler-Mascheroni constant appears in; it is also related to various definite integrals, the <a href="https://en.wikipedia.org/wiki/Digamma_function">digamma function</a> \(\psi\), which is the logarithmic <a href="https://en.wikipedia.org/wiki/Derivative">derivative</a> of the <a href="https://en.wikipedia.org/wiki/Gamma_function">gamma function</a> \(\Gamma\) (remember we saw this in Gauss’s constant and the lemniscate), and the <a href="https://en.wikipedia.org/wiki/Riemann_zeta_function">Riemann zeta function</a> \(\zeta\) (remember Apéry’s constant is \(\zeta(3)\)). So this constant is a very important intersection in mathematics research, yet we do not know much about it. There is lots of potential in researching it, but it is too hard to compute. That is why I think the endeavour of <a href="https://twitter.com/IanCutress">Dr. Ian Cutress</a> and myself counts toward mathematical advances: there are other constants on which people can set records more cheaply, and we still did it.</p>
<p>This record will get your name on Wikipedia as part of the history of known digits.</p>
<h4 id="series-backlinks">Series Backlinks</h4>
<p><a href="/world-record/algorithms-and-significance-of-major-mathematical-constants-part-1/">First Post</a></p>
<p>If you are interested in actually setting a world record with y-cruncher, read <a href="/world-record/optimizing-y-cruncher-to-actually-set-world-records/">This Post</a> for an in-depth explanation of optimizing the configurations of y-cruncher as well.</p>
<p>More information on the algorithms used by y-cruncher: <a href="http://www.numberworld.org/y-cruncher/internals/formulas.html">Link</a>.
This site has the equations and time complexity of all algorithms.</p>

Optimizing y-cruncher to Actually Set World Records (2020-05-22) https://ehfd.github.io/world-record/optimizing-y-cruncher-to-actually-set-world-records

<p>Note: this post should hopefully be understandable by anyone who is a computer power user.
Also, this post is meant to supplement Mr. Alexander Yee’s <a href="http://www.numberworld.org/y-cruncher/">y-cruncher</a> in easier words, not replace it. You have to see his website for the crucial technical details required to make world records.</p>
<p>Difficulties of mathematical constants: (Excerpt from <a href="http://www.numberworld.org/y-cruncher/">y-cruncher</a> v0.7.8.9506, Mr. Yee thankfully let me post this)</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Compute a Constant: (in ascending order of difficulty to compute)
# Constant Value Approximate Difficulty*
Fast Constants:
0 Sqrt(n) 1.46
1 Golden Ratio = 1.618034... 1.46
2 e = 2.718281... 3.88 / 3.88
Moderate Constants:
3 Pi = 3.141592... 13.2 / 19.9
4 Log(n) > 35.7
5 Zeta(3) (Apery's Constant) = 1.202056... 62.8 / 65.7
6 Catalan's Constant = 0.915965... 78.0 / 105.
7 Lemniscate = 5.244115... 60.4 / 124. / 154.
Slow Constants:
8 Euler-Mascheroni Constant = 0.577215... 383. / 574.
Other:
9 Euler-Mascheroni Constant (parameter override)
10 Custom constant with user-defined formula.
*Actual numbers will vary. Radix conversion = 1.00
</code></pre></div></div>
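One practical use of these difficulty numbers: since they roughly track compute cost at a fixed digit count, you can ballpark how long another constant would take from a Pi benchmark on your own machine. A hypothetical sketch (the 12-hour Pi time is made up; the difficulty values are the second figures from the menu above, and strict proportionality is an assumption):

```python
# Difficulty figures copied from the y-cruncher menu (second value shown).
difficulty = {
    "Pi": 19.9,
    "Zeta(3)": 65.7,
    "Catalan": 105.0,
    "Lemniscate": 154.0,
    "Euler-Mascheroni": 574.0,
}

pi_hours = 12.0  # hypothetical measured Pi runtime at some digit count

# Rough proportional scaling: assumes the same digit count and the same
# kind of bottleneck (CPU vs. disk) for every constant.
estimated_hours = {
    name: pi_hours * d / difficulty["Pi"] for name, d in difficulty.items()
}
```

In practice disk-bound runs deviate from this, but it explains at a glance why the Euler-Mascheroni constant is in the "Slow Constants" category.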
<p>If you did not read the <a href="/world-record/algorithms-and-significance-of-major-mathematical-constants-part-1/">First Post</a> and <a href="/world-record/algorithms-and-significance-of-major-mathematical-constants-part-2/">Second Post</a> of the significance and algorithms of mathematical constants, read them to understand algorithms and the significance of all main mathematical constants.</p>
<p>Note that more mathematical constants are defined via the custom formula files available with the executable, but they involve more complicated mathematics, so if you really want to set custom-formula records, you will need to learn more as you research them.</p>
<h3 id="overview">Overview</h3>
<p>Now we get real. How do we compute actual world records with y-cruncher?
The first thing to keep in mind is hardware.
The CPU does not matter much as long as it is a reasonably high-end desktop or a mediocre server CPU; disk R/W speed has been the bottleneck, so you have to care more about the latter.</p>
<p>The way to reduce the bottleneck is (excerpt from <a href="http://www.numberworld.org/y-cruncher/faq.html">y-cruncher</a>):</p>
<blockquote>
<p>The “fastest storage configuration” because that’s the bottleneck.<br />
The “largest memory configuration” because it minimizes the amount of disk I/O that is needed.<br />
A “mediocre CPU” because that’s all you need before you hit the disk bottleneck.</p>
</blockquote>
<h3 id="storage">Storage</h3>
<p>As I wrote in the <a href="/world-record/algorithms-and-significance-of-major-mathematical-constants-part-1/">First Post</a>, what is bottlenecking you is not the CPU. An 8-core 16-thread AMD Ryzen 3700X was sufficient to set a record for the lightest mathematical constants, as long as you have swap RAID storage amounting to terabytes. HDDs are durable, so they can withstand massive writes, but they are slow. SSDs are fast, so there is less bottleneck, but such an intensive amount of writes makes them almost single-use: the disk will likely fail the next time it does intensive operations. Optane SSDs are fairly durable and fast in random R/W (though not as much as RAM), but they are expensive, though not as expensive as more RAM. Because of this, people mostly stick to HDDs, accelerating the overall speed with multiple arrays in RAID, and hardware reviewers like Dr. Ian Cutress have attempted a twist such as Optane DIMMs, which hold 512 GB per DIMM, unlike the normal 32 GB maximum for ECC RAM modules.</p>
<h3 id="ram">RAM</h3>
<p>Memory allocation is currently worse in Linux than in Windows. So use Windows (preferably a server version, and turn off automatic updates…) for a meaningful increase in speed. However, some CPUs, such as certain generations of AMD Ryzen Threadripper, favor the Linux CPU scheduler, so consult Mr. Alexander Yee about which OS to use if you can choose. There are also features called “Locked Pages” and “Large Pages” that increase throughput and prevent time wasted on allocating and de-allocating memory by confining it to the program. Thus, having full superuser permission to enable them improves I/O time, although it is not crucial. I know this program is very frequently used to stress test overclocked computers, but unlike what Mr. Alexander Yee says on his webpage, I recommend against overclocking the CPU or RAM for any large computation, because overclocking causes loads of silent errors that do not matter much in casual workloads like games. This computation is not casual at all.</p>
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Reasons why you might need ECC <a href="https://t.co/vCKI3r5vUC">pic.twitter.com/vCKI3r5vUC</a></p>— 𝐷𝑟. 𝐼𝑎𝑛 𝐶𝑢𝑡𝑟𝑒𝑠𝑠 (@IanCutress) <a href="https://twitter.com/IanCutress/status/1262338620570185728?ref_src=twsrc%5Etfw">May 18, 2020</a></blockquote>
<script async="" src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
<p>Also, ECC is pretty important as a last line of defense to absorb errors, although I never got an ECC-corrected error in any of my computations. Overclocking ECC RAM is also very pointless.</p>
<h3 id="cpu">CPU</h3>
<p><img src="https://user-images.githubusercontent.com/8457324/87635402-eed30b00-c779-11ea-85f9-3136f855e859.png" alt="" /></p>
<p>This is the pattern of CPU utilization in a typical swap computation with 32 threads with AVX2 for a world record on y-cruncher v0.7.8.9506. I captured this from a computation of mine. You can see that at the start of the computation, the CPU is almost fully utilized, since the initial steps run in RAM, but after the program offloads to swap secondary storage, there is a clear difference between when the CPU is utilized fully and when it is utilized below 25% because of I/O bottlenecks. As a computation extends in time to store more digits than before, the zone where the CPU is underutilized stretches more and more. Since the time the CPU is fully utilized is significantly shorter than the I/O-bottlenecked time, the number of cores is not very important in conserving computation time.</p>
<p>An 8-core 16-thread AMD Ryzen 3700X was sufficient to set a world record for the lightest mathematical constants, but the problem with desktop (including HEDT) CPUs is that total RAM is capped, normally around 128 GB. This contributes more to the I/O bottleneck, making them impractical for more complicated constants that access memory far more often at the same number of digits. The reason people use multi-socket server/workstation CPUs is that they can house more RAM, decreasing the bottleneck; more cores are not what drastically increases speed. I would be very interested in the results should this program ever be run on supercomputer or mainframe builds connected to each other using Mellanox <a href="https://en.wikipedia.org/wiki/InfiniBand">InfiniBand</a> I/O fabric.</p>
<h3 id="configuration-in-linux-for-using-the-optimized-y-cruncher">Configuration in Linux for Using the Optimized y-cruncher</h3>
<p>First, if you are running on Linux (the Windows version has all features embedded by default and requires no additional installation), I recommend the dynamic version (especially in multi-socket environments). As of y-cruncher v0.7.8.9506 and earlier versions, it requires the system dependencies of Ubuntu 18.04 (or distros based on that version; Ubuntu 18.04 prevents unexpected errors) plus an installation of <code class="language-plaintext highlighter-rouge">numactl</code>. CentOS 8 is also reported to work without any other tweaks: as long as you have installed <code class="language-plaintext highlighter-rouge">numactl</code>, you should generally have no problems. This does not mean your host must run Ubuntu 18.04 or CentOS 8 to use the dynamic version; the static version runs on any recent Linux distro, but you have to be careful if you want to use the dynamic version.</p>
<p>If you do not have root permissions and thus cannot use the default package repositories, you can use <a href="http://conda.io">Miniconda</a> to install <a href="https://anaconda.org/conda-forge/numactl-devel-cos6-x86_64">numactl-devel-cos6-x86_64</a>, which provides the <code class="language-plaintext highlighter-rouge">libnuma.so.1</code> library required for running the dynamic version, and add the directory containing <code class="language-plaintext highlighter-rouge">libnuma.so.1</code> to the system variable <code class="language-plaintext highlighter-rouge">LD_LIBRARY_PATH</code> (<code class="language-plaintext highlighter-rouge">export LD_LIBRARY_PATH="/path/to/lib:$LD_LIBRARY_PATH"</code>; note the double quotes, since single quotes would prevent <code class="language-plaintext highlighter-rouge">$LD_LIBRARY_PATH</code> from expanding. Check the full path with <code class="language-plaintext highlighter-rouge">find . -name "libnuma.so.1"</code> from the directory containing the conda environment, but use the absolute path, not the relative path, in the system variable).</p>
<p>CentOS 7 (or any Linux distro with a tested kernel version >= 3.10.0) also works (but also watch out for unexpected errors), but requires extra work, as the default <code class="language-plaintext highlighter-rouge">libstdc++</code> is an incompatible version. To fix this, you can install the Red Hat Developer Toolset (<code class="language-plaintext highlighter-rouge">devtoolset-9</code>) from the <code class="language-plaintext highlighter-rouge">CentOS SCLo RH x86_64</code> repository and activate its environment (this is untested, so I cannot ensure it works well). More preferably, and without requiring root permissions, install <a href="https://anaconda.org/conda-forge/libstdcxx-ng">libstdcxx-ng</a> in the same way as the <a href="http://conda.io">Miniconda</a> installation of <a href="https://anaconda.org/conda-forge/numactl-devel-cos6-x86_64">numactl-devel-cos6-x86_64</a> (you can install both at the same time), and likewise add the directory containing <code class="language-plaintext highlighter-rouge">libstdc++.so.6</code> to the system variable <code class="language-plaintext highlighter-rouge">LD_LIBRARY_PATH</code> (<code class="language-plaintext highlighter-rouge">export LD_LIBRARY_PATH="/conda/path/to/cpp:/conda/path/to/numa:$LD_LIBRARY_PATH"</code>, again with double quotes; check the full path with <code class="language-plaintext highlighter-rouge">find . -name "libstdc++.so.6"</code> from the directory containing the conda environment, but use absolute paths, not relative paths, in the system variable).</p>
<p>If you get errors related to <code class="language-plaintext highlighter-rouge">libcilkrts.so.5</code> and/or <code class="language-plaintext highlighter-rouge">libtbb.so.2</code> when executing y-cruncher after this configuration (common if you run it as a remote command or with <code class="language-plaintext highlighter-rouge">bash -c</code>), add the full path of the <code class="language-plaintext highlighter-rouge">Binaries</code> directory of the y-cruncher download to the <code class="language-plaintext highlighter-rouge">LD_LIBRARY_PATH</code>, delimiting each directory with a colon also.</p>
<p>Using recent OS containers with light <a href="https://en.wikipedia.org/wiki/OS-level_virtualization">OS-level virtualization</a> such as Docker, LXC, or Singularity also works, and this was tested for installing the correct dependencies from package repositories without much virtualization overhead in performance. However, you do not really have to use OS-level virtualization, even without root permissions, as long as your host OS has a recent kernel version. OS-level virtualization does not change your kernel version even if you use a newly released OS container, so whether y-cruncher works on your system depends entirely on your host OS.</p>
<p>If you use the static version instead of the recommended dynamic version, large workloads using one or more CPU sockets summing to over 64 threads have to use the custom-coded Push Pool multiprocessing framework (the preferred framework for desktop-level CPUs of around 16 threads), which is less efficient than Intel’s <a href="https://en.wikipedia.org/wiki/Cilk">Cilk Plus</a> or <a href="https://en.wikipedia.org/wiki/Threading_Building_Blocks">Threading Building Blocks</a>. Threading Building Blocks is Intel’s replacement for Cilk Plus, but its performance in a past computation of the world record of Pi has been underwhelming so far, thus the better-performing Cilk Plus is used for now (this could change later).</p>
<p>The y-cruncher program will then automatically choose the recommended frameworks for each component based on the number of cores, and you will not be restricted in the selections as long as you have the dynamic version.</p>
<h3 id="the-y-cruncher-program">The y-cruncher Program</h3>
<p>Every computer is different, so each must be tuned for maximum throughput to decrease time dramatically. The y-cruncher program has tools to check this. The first is the stress-testing application, which proves that your build can withstand certain <a href="https://en.wikipedia.org/wiki/Fast_Fourier_transform">Fast Fourier Transform</a> operations and other heavy computations. The second, and perhaps the most important, is the I/O Performance Analysis if you are (likely) computing with swap secondary storage like SSDs or HDDs. After running this benchmark, you have to tweak the Far Memory Tuning configuration based on the results; an explanation of how to do this comes later.</p>
<p>For running real world-record-level computations, go to the <code class="language-plaintext highlighter-rouge">Custom Compute a Constant</code> menu. Note that you should run Test Swap Multiply in Advanced Options if you are running computations with more digits than the current record for Pi, as advised by Mr. Yee. First choose the constant and understand the algorithms and their performance. Then choose whether to use swap storage and set how much RAM the program should use. It is automatically set to 90-95% of all available RAM, and this is the appropriate guideline to keep if you are tweaking the configuration, since the remaining 5-10% should be left for the OS and other background tasks. You then choose the path where the digits will be saved and, for swap computations, also where the swap files are stored during the computation. If you already have hardware <a href="https://en.wikipedia.org/wiki/RAID">RAID</a> available, you can just specify the path; if not, the program can set up a custom software <a href="https://en.wikipedia.org/wiki/RAID">RAID</a> configuration if you list the paths in the program. You can optionally set a backup command to be run automatically using the Post-Checkpoint Command in the Checkpoint menu. You can also set the I/O Memory Buffer Allocator here; if you use the dynamic version on Linux or the Windows version and have multiple sockets, you will see it is automatically set to the <code class="language-plaintext highlighter-rouge">libnuma</code> version of Node Interleaving, and to the custom-coded version of Node Interleaving otherwise. The Affinity configuration designates the cores or threads that are more accessible to the secondary storage than others; it is normally left unchanged, with the OS handling everything.</p>
<p>Now the last configuration left is the I/O Buffer Size and the Bytes/Seek parameter. Here is where the I/O Performance Analysis benchmark comes in. I will only talk about HDDs, as SSDs have wholly different figures (I have seen Bytes/Seek parameters of over 80 MB for mixed NVMe + HDD clustered-filesystem configurations and, at the opposite end, microscopic Bytes/Seek values around 128 KB for a pure NVMe RAID configuration); you should run the I/O Performance Analysis and conclude what Bytes/Seek parameter is appropriate. The rule of thumb for the I/O Buffer Size is 64 MiB times the number of hard drives (or divide the sequential read rate shown in the I/O Performance Analysis results by the sequential read speed of one hard drive or SSD to infer how many drives are in the array). For the Bytes/Seek parameter, you first have to know the logic. This is the number of bytes the hard drives can read sequentially in the time equivalent to the disk seek time. A normal hard drive has a seek time of 10 ms and a sequential read rate of 100-200 MB/s, so the Bytes/Seek parameter per drive is around 1-2 MiB; we first assume around 2 MiB, because setting it too small changes the computation speed more dramatically than setting it too large. Multiply this by the number of hard drives and treat it as the starting value. Then take the I/O Performance Analysis sequential read results directly, multiply by the 10 ms disk seek time, and fine-tune further in the direction the displayed analysis results suggest. If you see a red-texted result for one or more of the benchmark results, you should definitely increase Bytes/Seek dramatically, as this can cause a big bottleneck. The Sequential Read (Write) throughput should be about <strong>three times</strong> the throughput of Threshold Strided Read (Write).
You have to experiment with this multiple times to achieve optimized speed for world-record-sized computations and utilize your CPU as much as possible. The time invested here really helps the computation, and the result can be reused in another computation with the same system configuration. If there is a big difference between Threshold Strided Read and Threshold Strided Write speeds, there unfortunately is not much left to do, and it is not possible to tune for optimization. This happens commonly in distributed file systems; if it occurs, tune Bytes/Seek so that the lower of the two is not less than 1/4 of the sequential speed, and we cannot do much more than that. Check <a href="http://www.numberworld.org/y-cruncher/guides/swapmode.html">This Page</a> for a more in-depth guide overall.</p>
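The rule-of-thumb arithmetic above, written out as a quick calculation (the drive count and per-drive figures are hypothetical; substitute the numbers from your own I/O Performance Analysis):

```python
# Hypothetical array: 8 HDDs, each ~200 MB/s sequential, ~10 ms seek.
drives = 8
seq_rate_per_drive = 200e6   # bytes per second (assumed)
seek_time = 0.010            # seconds (typical HDD seek time)

# I/O Buffer Size rule of thumb: 64 MiB per hard drive in the array.
io_buffer_size = 64 * 2**20 * drives

# Bytes/Seek starting value: bytes readable sequentially in one seek time,
# i.e. array sequential rate x seek time (about 2 MiB per drive here).
bytes_per_seek = seq_rate_per_drive * drives * seek_time
```

These are only starting values; the fine-tuning against the benchmark's red/green feedback still has to be done by hand.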
<h3 id="miscellaneous">Miscellaneous</h3>
<p>Now that we hopefully got a stable system with the fastest I/O throughput possible ready, we can go on and actually try setting a world record. If you set a record and your record digit output size is reasonable, please consider uploading to an open repository such as Google Drive if you have unlimited storage given to G Suite for Education/Business accounts (check <a href="https://www.reddit.com/r/DataHoarder/wiki/backups">This Link</a> out too) or the <a href="https://archive.org">Internet Archive</a> for other researchers to utilize them when they need the digits.</p>
<p>If you did not read the <a href="/world-record/algorithms-and-significance-of-major-mathematical-constants-part-1/">First Post</a> and <a href="/world-record/algorithms-and-significance-of-major-mathematical-constants-part-2/">Second Post</a> of the significance and algorithms of mathematical constants, read them to understand algorithms and the significance of all main mathematical constants.</p>
<p>If you are using your own build and have to manage heat, I can tell you that more heat makes the parts more likely to cause silent defects in the digits. Use very good CPU coolers and case fans. I think liquid cooling is not plausible for professional builds that stay on for a long time because of leaks; it may work only if you mainly game and do not want noisy fans. Otherwise, air cooling is normally more stable. Passive cooling in server racks is also very solid.</p>
<p>For more information on speed optimization and on management methods that prevent silent corruption in <a href="http://www.numberworld.org/y-cruncher/">y-cruncher</a>, read everything in the <a href="http://www.numberworld.org/y-cruncher/#PerformanceTips">Performance Tips</a> section of the <a href="http://www.numberworld.org/y-cruncher/">y-cruncher</a> website and every link and passage under it <strong>thoroughly</strong>: <a href="http://www.numberworld.org/y-cruncher/algorithms.html">Algorithms and Internals</a>, the <a href="http://www.numberworld.org/y-cruncher/faq.html">FAQ</a>, <a href="http://www.numberworld.org/y-cruncher/guides/multithreading.html">Multi-Threading</a>, <a href="http://www.numberworld.org/y-cruncher/guides/memory.html">Memory Allocation</a>, <a href="http://www.numberworld.org/y-cruncher/guides/swapmode.html">Swap Mode</a>, and, for those who need the feature, <a href="http://www.numberworld.org/y-cruncher/guides/custom_formulas.html">Custom Formulas</a>.</p>

<p><em>Algorithms and Significance of Major Mathematical Constants: Part 1, published 2020-05-21 at <a href="https://ehfd.github.io/world-record/algorithms-and-significance-of-major-mathematical-constants-part-1">https://ehfd.github.io/world-record/algorithms-and-significance-of-major-mathematical-constants-part-1</a></em></p>

<p>Note: this post is appropriate for people with at least a high school mathematics or calculus background, although anyone enthusiastic about mathematics can understand it if Google and Wikipedia are your friends.
I am also not a professional mathematician, so this post may include inaccuracies.
Also, this post is meant to supplement Mr. Alexander Yee’s <a href="http://www.numberworld.org/y-cruncher/">y-cruncher</a> website in simpler words, not replace it. You should still read his website for the crucial technical details required to set world records.</p>
<p>If you are interested in actually setting a world record with y-cruncher, read <a href="/world-record/optimizing-y-cruncher-to-actually-set-world-records/">This Post</a> for an in-depth explanation of optimizing the configurations of y-cruncher as well.</p>
<p>I have been following <a href="http://www.numberworld.org/y-cruncher/">y-cruncher</a> since around 2011, about two years after its release. I calculated a few hundred million to a billion digits of Pi on my laptop back then.
That was nowhere close to the <a href="http://www.numberworld.org/digits/Pi/">world record</a> even at the time, but I got the hang of operating the system as a whole instead of leaving it alone to rapidly overheat and doing nothing about it.</p>
<p>It was almost 8 years later that I set a series of world records, and I see the <a href="/world-record/euler-mascheroni-constant/">Euler-Mascheroni Constant</a> as my most significant result.
Next to Pi and \(e\), it is arguably the most important mathematical constant that students first meet in calculus, via the integral test; Pi first appears in elementary school, and the definition \(e = \lim_{n\to\infty} \left( 1 + \frac{1}{n} \right)^n\) appears in an earlier chapter of high school calculus.</p>
<p>Here, I overview the mathematical constants for which <strong><a href="http://www.numberworld.org/y-cruncher/">y-cruncher</a> by Alexander Yee</strong> has been used to set various world records.</p>
<p>Difficulties of mathematical constants: (Excerpt from <a href="http://www.numberworld.org/y-cruncher/">y-cruncher</a> v0.7.8.9506, Mr. Yee thankfully let me post this)</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Compute a Constant: (in ascending order of difficulty to compute)

  #  Constant                     Value          Approximate Difficulty*

 Fast Constants:
  0  Sqrt(n)                                     1.46
  1  Golden Ratio               = 1.618034...    1.46
  2  e                          = 2.718281...    3.88 / 3.88

 Moderate Constants:
  3  Pi                         = 3.141592...    13.2 / 19.9
  4  Log(n)                                      > 35.7
  5  Zeta(3) (Apery's Constant) = 1.202056...    62.8 / 65.7
  6  Catalan's Constant         = 0.915965...    78.0 / 105.
  7  Lemniscate                 = 5.244115...    60.4 / 124. / 154.

 Slow Constants:
  8  Euler-Mascheroni Constant  = 0.577215...    383. / 574.

 Other:
  9  Euler-Mascheroni Constant (parameter override)
 10  Custom constant with user-defined formula.

*Actual numbers will vary. Radix conversion = 1.00
</code></pre></div></div>
<p>Note that more mathematical constants are available as custom formula files bundled with the executable, but they involve more complicated mathematics, so if you really want to set custom-formula records, you will need to learn more as you research them.</p>
<p>I will note for each constant whether setting a record puts your name on Wikipedia, but <strong>please don’t start this from scratch for the sake of becoming famous or of irrelevantly adding a line to your CV, because it isn’t worth it</strong>. I did this to test the long-term stability of a high-performance system that I use for actual production purposes in research, where nothing must go wrong, and to assess my own system administration competence. In case you are looking at this for fame: it won’t make you famous just because you had nice hardware and a bunch of hard drives. I simply had spare time when I set my world records and wanted to do something slightly more meaningful than the default stress test <a href="http://www.numberworld.org/y-cruncher/">y-cruncher</a> provides. This is also why I upload every world record computation I complete: to help any mathematicians who may need the digits in the future.</p>
<h4 id="sqrtn">Sqrt(n)</h4>
<p><a href="https://en.wikipedia.org/wiki/Square_root">Wikipedia</a></p>
<p>It’s easy to compute. The algorithm is trivial: it is simply <a href="https://en.wikipedia.org/wiki/Newton%27s_method">Newton’s Method</a> (\(x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}\)), which roughly doubles the number of correct digits per step. It is not much of a burden even compared to the radix conversion between decimal and hexadecimal.
Mostly, the values used to set records are \(\sqrt{2}\) or \(\sqrt{3}\), the former being more significant.
However, the definition is not complex, and many properties of square root values, such as irrationality, were proved long ago; \(\sqrt{2}\) is simply \(2^{1/2}\), and the related calculations can be done by any high school student.
Because these values are so well understood, approximating them adds little research value, and the digits are unlikely to contribute to mathematical advances. Only Sqrt(2) will add your name to Wikipedia, and the record is a competition of how many hard drives people have.</p>
<h4 id="golden-ratio">Golden Ratio</h4>
<p><a href="https://en.wikipedia.org/wiki/Golden_ratio">Wikipedia</a></p>
\[\varphi = \frac{1+\sqrt{5}}{2}\]
<p>Culturally significant, mathematically not so much. It is \(\sqrt{5}\) divided by 2, plus 0.5; I won’t repeat what I said under <strong>Sqrt(n)</strong>.
The record values are obviously trivial to compute as long as you have hard drives to store the swap, so the record is soaring as fast as Pi’s. This too is a competition of how many hard drives people have, and your name won’t be on Wikipedia unless you make your own table on the article and the moderators approve it.</p>
<h4 id="e-base-of-the-natural-logarithm">e (Base of the Natural Logarithm)</h4>
<p><a href="https://en.wikipedia.org/wiki/E_(mathematical_constant)">Wikipedia</a></p>
\[e = \sum\limits_{n = 0}^{\infty} \frac{1}{n!} = \frac{1}{1} + \frac{1}{1} + \frac{1}{1\cdot 2} + \frac{1}{1\cdot 2\cdot 3} + \cdots = \lim_{n\to\infty} \left( 1 + \frac{1}{n} \right)^n\]
<p>As I said above, \(e\) is widely regarded as the second-most important mathematical constant.
In addition to its universal significance in mathematics, it has vast applications in economics (compound interest), statistics (the normal distribution and probability theory), and physics (complex exponentials for RLC circuits and countless uses in modern physics).
The downside is that it is very well understood: it is defined in many equivalent ways and has been proved irrational and transcendental, so few mathematicians attach much significance to its approximate value. Nevertheless, it has great mathematical value.</p>
<p>The <a href="https://en.wikipedia.org/wiki/Taylor_series">Taylor series</a> definition is used to set records, as it is easy to calculate how many terms are required to guarantee the decimal and hexadecimal digits.
The record values are easy to compute as long as you have hard drives to store the swap, so the record count is soaring past the Golden Ratio’s. It is also a competition of how many hard drives people have; since \(e\) is a genuinely important constant, your name can be mentioned on Wikipedia, but judging by how the moderators maintain the tables, it will likely be erased when someone else renews the record.</p>
<h4 id="pi-π">Pi (π)</h4>
<p><a href="https://en.wikipedia.org/wiki/Pi">Wikipedia</a></p>
<p>Most of you reading this are probably most interested in Pi.
Former Pi world record holder <a href="https://pi2e.ch/">Dr. Peter Trueb</a> runs a <a href="https://pi2e.ch/blog/2016/07/30/the-chudnovsky-formula/">blog</a> where he explains in depth the Chudnovsky formula used to calculate Pi.
I will just add that Pi has been proved irrational and transcendental, and that the Chudnovsky formula is a rapidly converging series that made Pi computation easier after the <a href="https://en.wikipedia.org/wiki/Ramanujan%E2%80%93Sato_series">Ramanujan–Sato series</a>.
Unlike \(e\), which is essentially the simplest form of a Taylor series, it requires more computing power, although the bottleneck for world records has already shifted from the CPU to swap disk I/O.
For all constants, a world record requires verifying the computation with a second, mathematically independent method. For Pi, a <a href="https://en.wikipedia.org/wiki/Spigot_algorithm">spigot algorithm</a> developed under 30 years ago, the <a href="https://en.wikipedia.org/wiki/Bailey%E2%80%93Borwein%E2%80%93Plouffe_formula">Bailey–Borwein–Plouffe (BBP)</a> formula, calculates the last few digits of Pi in hexadecimal and confirms that the computation contains no errors.
It is computationally impractical to calculate all the digits in between this way, but since the last few digits suffice to verify that the whole series is intact, it is a good, short verification method. Because it is fairly simple, Mr. Alexander Yee normally runs it for you when someone sets a record.</p>
<p>I suppose this is the whole reason the “world record” concept exists, and your name and face will become more famous when you set a Pi record. Most people who have recently set Pi records are respected professionals in their own fields anyway. <a href="https://cloud.google.com/blog/products/compute/calculating-31-4-trillion-digits-of-archimedes-constant-on-google-cloud">Google</a> previously set a Pi world record to promote their Cloud Platform, paying Mr. Yee for proprietary use of the program. The number of digits has soared to an astronomical figure, so it is both a challenge of keeping your system intact, preventing silent errors for almost a year, and of preparing a stack of hard drives.</p>
<h4 id="logn-logarithm">Log(n) (Logarithm)</h4>
<p><a href="https://en.wikipedia.org/wiki/Logarithm">Wikipedia</a></p>
<p>It has applications nearly as vast as those of \(e\).
\(\log_n(x)\), with a positive base not equal to 1 and a positive domain, is the inverse function of \(n^{x}\), so \(\ln(x)\) is the inverse of \(e^{x}\). The range of any such logarithm function is proved to be the real numbers.</p>
<p>The computation uses a <a href="https://en.wikipedia.org/wiki/Logarithm#Calculation">Primary Machin-like Formula</a> that can be auto-generated as a sum of fractions multiplied by inverse hyperbolic tangent values (also evaluated with a Taylor series, but a simpler one), as it converges faster than the plain Taylor series of the logarithm. Proving the identity between the logarithm and the inverse hyperbolic tangent first comes up in calculus; I remember solving this problem in my calculus course. The difficulty varies from one logarithm value to another, and each is considered several times harder than Pi for the same number of digits.</p>
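<p>For a minimal, hedged taste of the idea (my own Python sketch, not the multi-term formulas y-cruncher auto-generates): the one-term identity \(\ln 2 = 2\,\mathrm{arctanh}(1/3)\), which follows from \(\mathrm{arctanh}(x) = \frac{1}{2}\ln\frac{1+x}{1-x}\), lets us compute Log(2) from the rapidly converging Taylor series of the inverse hyperbolic tangent:</p>

```python
# arctanh(1/q) = sum_{k>=0} 1 / ((2k+1) * q^(2k+1)) for integer q > 1;
# the series converges geometrically with ratio 1/q^2.
from decimal import Decimal, getcontext

def atanh_inv(q, digits=30):
    getcontext().prec = digits + 10
    term = Decimal(1) / q       # (1/q)^(2k+1), starting at k = 0
    total, k = Decimal(0), 0
    tol = Decimal(10) ** -(digits + 5)
    while term > tol:
        total += term / (2 * k + 1)
        term /= q * q
        k += 1
    return total

ln2 = 2 * atanh_inv(3)          # ln 2 = 2 * arctanh(1/3)
```

<p>Real Machin-like formulas combine several such arctanh terms with larger arguments \(q\) so that every series converges even faster.</p>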
<p>It can also be approximated by the <a href="https://en.wikipedia.org/wiki/Arithmetic%E2%80%93geometric_mean">arithmetic–geometric mean (AGM)</a>, although that is too inefficient to be used for record computations. The AGM itself appears in precalculus algebra and olympiads for bounding ranges. However, an algorithm based on the same principle is actually used for the record computations of the <a href="/world-record/lemniscate-constant/">Lemniscate Constant</a>, coming up in my <a href="/world-record/algorithms-and-significance-of-major-mathematical-constants-part-2/">Second Post</a>.</p>
\[0<\frac{n}{1/x_1+1/x_2+\cdots+1/x_n}\leq\sqrt[n]{x_1x_2\cdots x_n}\leq\frac{x_1+x_2+\cdots+x_n}{n} \leq\sqrt{\frac{x_1^2+x_2^2+\cdots+x_n^2}{n}}\]
<p>This is the Root-Mean Square-Arithmetic Mean-Geometric Mean-Harmonic Mean Inequality (RMS-AM-GM-HM) relation (largest to smallest), and we will take the middle 2 relations only.</p>
\[\ln (x) \approx \frac{\pi}{2 M(1,2^{2-m}/x)} - m \ln (2)\]
<p>\(M(a,b)\) here or \(agm(a, b)\) generally denotes the arithmetic–geometric mean of \(a\) and \(b\) and is repeatedly calculated by computing the <a href="https://en.wikipedia.org/wiki/Arithmetic_mean">arithmetic mean</a> \(\frac{a+b}{2}\) and <a href="https://en.wikipedia.org/wiki/Geometric_mean">geometric mean</a> \(\sqrt{ab}\) and making them the new \(a\) and \(b\). This rapidly converges as there are more steps \(m\) to make more digits.</p>
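<p>The AGM iteration and the formula above can be sketched in a few lines of Python (double precision only, for illustration; note that it consumes \(\pi\) and \(\ln 2\) as known inputs, and the iteration count and the choice \(m = 30\) are my own assumptions, not tuned values):</p>

```python
import math

def agm(a, b):
    # repeatedly replace (a, b) with their arithmetic and geometric means
    for _ in range(60):          # quadratic convergence; 60 steps is plenty
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

def ln_agm(x, m=30):
    # ln(x) ~= pi / (2 M(1, 2^(2-m)/x)) - m ln(2), good once x * 2^m is large
    return math.pi / (2 * agm(1.0, 2.0 ** (2 - m) / x)) - m * math.log(2)
```

<p>With \(m = 30\), <code>ln_agm(10)</code> agrees with <code>math.log(10)</code> to near double precision, since the approximation error shrinks roughly like \(1/(x\,2^m)^2\).</p>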
<p>This method may be presumed faster than series expansion because AGM has a <a href="https://en.wikipedia.org/wiki/Time_complexity">time complexity</a> of \(O(n \log(n)^2)\) instead of the \(O(n \log(n)^3)\) of most series in <a href="https://en.wikipedia.org/wiki/Big_O_notation">Big O notation</a>. However, each AGM step is many times more expensive (the hidden constant in <a href="https://en.wikipedia.org/wiki/Big_O_notation">Big O notation</a>) than a series term, and AGM has poor <a href="https://en.wikipedia.org/wiki/Locality_of_reference">memory locality</a>, so a series expansion incorporating <a href="https://en.wikipedia.org/wiki/Binary_splitting">Binary Splitting</a> is faster in practice. This is the same reason AGM algorithms for Pi are not used. Note also that the bottleneck of these computations is no longer the CPU but storage: if there is not enough RAM to hold all the intermediate data, external swap storage must be used, and it is slow enough to throttle what the CPU and RAM can process. Refer <a href="http://www.numberworld.org/y-cruncher/faq.html#series_vs_agm">Here</a> for more.</p>
<p>I wish I could go deeper into the binary splitting method, but explaining it from the basics requires working through the expansion of long expressions, so I instead refer you to the <a href="http://numbers.computation.free.fr/Constants/Algorithms/splitting.html">First Link</a>, <a href="http://www.numberworld.org/y-cruncher/internals/binary-splitting.html">Second Link</a>, and <a href="http://www.numberworld.org/y-cruncher/internals/binary-splitting-library.html">Third Link</a>.</p>
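<p>Still, as a hedged taste of the technique (my own minimal Python sketch, using the series for \(e\) for simplicity rather than a logarithm): binary splitting keeps the whole partial sum as one exact rational \(P/Q\), combining the two halves of each range with big-integer multiplications so that only a single high-precision division is needed at the very end:</p>

```python
from decimal import Decimal, getcontext

def split(a, b):
    # returns (P, Q) with sum_{k=a+1..b} a!/k! = P/Q, where Q = b!/a!
    if b - a == 1:
        return 1, b
    m = (a + b) // 2
    p1, q1 = split(a, m)
    p2, q2 = split(m, b)
    return p1 * q2 + p2, q1 * q2           # combine the two halves exactly

def e_binary_splitting(terms=30, digits=30):
    p, q = split(0, terms)                 # sum_{k=1..terms} 1/k!
    getcontext().prec = digits
    return 1 + Decimal(p) / Decimal(q)     # one big division at the end
```

<p>The recursion turns many small divisions into balanced big-integer multiplications, which is exactly the shape fast multiplication algorithms (and disk-friendly access patterns) want.</p>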
<p>Like Sqrt(n), the approximate values themselves are mainly symbolic rather than mathematically significant, although the algorithms are under active research. You will only get your name on Wikipedia for Log(2).</p>
<h4 id="series-backlinks">Series Backlinks</h4>
<p>I will also overview the hardest mathematical constants, <a href="/world-record/algorithms-and-significance-of-major-mathematical-constants-part-2/#zeta3-aperys-constant"><strong>Apery’s Constant</strong></a>, <a href="/world-record/catalans-constant/"><strong>Catalan’s Constant</strong></a>, the <a href="/world-record/lemniscate-constant/"><strong>Lemniscate</strong></a>, and the <a href="/world-record/euler-mascheroni-constant/"><strong>Euler-Mascheroni Constant</strong></a>, in more detail, as all of these are constants I set world records for.</p>
<p><a href="/world-record/algorithms-and-significance-of-major-mathematical-constants-part-2/">Second Post</a></p>
<p>If you are interested in actually setting a world record with y-cruncher, read <a href="/world-record/optimizing-y-cruncher-to-actually-set-world-records/">This Post</a> for an in-depth explanation of optimizing the configurations of y-cruncher as well.</p>
<p>More information of algorithms used by y-cruncher: <a href="http://www.numberworld.org/y-cruncher/internals/formulas.html">Link</a>.
This site has the equations and time complexity of all algorithms.</p>

<p><em>Euler-Mascheroni Constant, published 2020-02-15 at <a href="https://ehfd.github.io/world-record/euler-mascheroni-constant">https://ehfd.github.io/world-record/euler-mascheroni-constant</a></em></p>

<h2 id="the-euler-mascheroni-constant">The Euler-Mascheroni Constant</h2>
<p>Update: <strong>To people who came here from Dr. Ian Cutress’s <a href="https://twitter.com/iancutress">Twitter</a> or the <a href="https://www.youtube.com/channel/UC1r0DG-KEPyqOeW6o79PByw">TechTechPotato YouTube channel</a>:</strong> Welcome. I am the very person who asked Dr. Ian to compute this dreadful mathematical constant for over 3 months to verify my initial computation. But I guess we did it. I give big credit to Dr. Ian Cutress, because without him my computation would never have become official. This website is meant to provide additional information about computing world records beyond what is on the <a href="http://www.numberworld.org/y-cruncher/">y-cruncher</a> website, so have a look around. Thank you.</p>
<p>Another update:</p>
<!-- Courtesy of embedresponsively.com //-->
<div class="responsive-video-container">
<iframe src="https://www.youtube-nocookie.com/embed/DXX823edcGo" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe>
</div>
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Start-to-End Wall Time: 8994062.448 seconds (104.1 days)<br /><br />Finally have the record! 🥳🎉🎉<br /><br />But<br /><br />Logical Disk Bytes Read: 2,548,639,747,841,704 (2.26 PiB)<br /><br />Logical Disk Bytes Written: 2,163,049,114,475,792 (1.92 PiB)<br /><br />I need to check if these SSDs are working</p>— 𝐷𝑟. 𝐼𝑎𝑛 𝐶𝑢𝑡𝑟𝑒𝑠𝑠 (@IanCutress) <a href="https://twitter.com/IanCutress/status/1265394384901754881?ref_src=twsrc%5Etfw">May 26, 2020</a></blockquote>
<script async="" src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Reasons why you might need ECC <a href="https://t.co/vCKI3r5vUC">pic.twitter.com/vCKI3r5vUC</a></p>— 𝐷𝑟. 𝐼𝑎𝑛 𝐶𝑢𝑡𝑟𝑒𝑠𝑠 (@IanCutress) <a href="https://twitter.com/IanCutress/status/1262338620570185728?ref_src=twsrc%5Etfw">May 18, 2020</a></blockquote>
<script async="" src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
<p>And yes, ECC memory is quite important, although I never observed a corrected ECC error in any of my computations.</p>
<p>Please cite this webpage if my world record of the Euler-Mascheroni Constant were useful and also the below citation if you used computations from my other posts or the digit analysis methodologies:<br />
Kim, S. Normality Analysis of Current World Record Computations for Catalan’s Constant and Arc Length of a Lemniscate with a=1. arXiv Preprint <a href="https://arxiv.org/abs/1908.08925">arXiv:1908.08925</a></p>
<p>Out of all the mathematical constants that I have set a world record computation for until now, the Euler-Mascheroni Constant is so far the most mathematically significant constant (the only ones more significant are probably π and e).</p>
<p>This world record computation of 600,000,000,100 digits by Seungmin Kim ran from Mon Aug 19 17:21:44 2019 to Sat Jan 11 18:06:11 2020 using the Brent-McMillan with Refinement ( n = 2^38 ) algorithm. The verification was done by <a href="https://twitter.com/iancutress">Dr. Ian Cutress</a> using the Brent-McMillan ( n = 2^39 ) algorithm from Wed Feb 12 09:34:27 2020 to Tue May 26 12:55:30 2020.</p>
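<p>As a rough double-precision illustration of the basic Brent-McMillan idea (my own Python sketch; the record computation uses the refined variant at \(n = 2^{38}\) with arbitrary-precision arithmetic, far beyond this toy): with \(A(n) = \sum_k H_k \left(n^k/k!\right)^2\) and \(B(n) = \sum_k \left(n^k/k!\right)^2\), one has \(\gamma \approx A(n)/B(n) - \ln n\), with an error shrinking roughly like \(e^{-4n}\):</p>

```python
import math

def gamma_brent_mcmillan(n=15, kmax=120):
    # A = sum H_k * (n^k/k!)^2, B = sum (n^k/k!)^2; gamma ~= A/B - ln(n)
    A = B = H = 0.0        # H tracks the harmonic number H_k (H_0 = 0)
    term = 1.0             # (n^k/k!)^2, starting at k = 0
    for k in range(kmax):
        A += term * H
        B += term
        H += 1.0 / (k + 1)
        term *= (n / (k + 1)) ** 2
    return A / B - math.log(n)
```

<p>Note that even this sketch needs two interleaved series plus a logarithm, hinting at why \(\gamma\) sits so far above the single-series constants in the difficulty table.</p>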
<p>Validation file generated by y-cruncher v0.7.7 Build 9501 for computation, and y-cruncher v0.7.8 Build 9503 for the verification run:<br />
Computation: <a href="https://web.archive.org/web/20200215075907/http://www.numberworld.org/y-cruncher/records/2020_1_27_gamma.txt">https://web.archive.org/web/20200215075907/http://www.numberworld.org/y-cruncher/records/2020_1_27_gamma.txt</a><br />
Verification by <a href="https://twitter.com/iancutress">Dr. Ian Cutress</a>: <a href="https://web.archive.org/web/20200528081821/http://www.numberworld.org/y-cruncher/records/2020_5_26_gamma.txt">https://web.archive.org/web/20200528081821/http://www.numberworld.org/y-cruncher/records/2020_5_26_gamma.txt</a></p>
\[\gamma = \lim_{n\to\infty}\left(-\ln n + \sum_{k=1}^n \frac1{k}\right) = \int_1^\infty\left(-\frac1x+\frac1{\lfloor x\rfloor}\right) dx\]
<p>The definition of Euler–Mascheroni constant (<a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant">Wikipedia</a>)</p>
<p>The <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant">Euler–Mascheroni constant</a> is defined by the above equation and denoted by the symbol γ. The definition will be familiar to many freshmen studying calculus, as it is the limit of the difference between the harmonic series and the natural logarithm, and it can be converted to the area of the blue region in the figure below. I am fairly sure this figure appears in calculus textbooks alongside the integral test for convergence. Even though it looks irrational numerically, it is unproven whether it is transcendental, or even irrational. Take a look at this <a href="http://mathworld.wolfram.com/Euler-MascheroniConstant.html">Wolfram Mathworld</a> entry for the mathematical details.</p>
<p><img src="https://user-images.githubusercontent.com/8457324/82203832-ce83fd00-993e-11ea-9034-a97a618429ee.png" alt="" /></p>
<p>The area of the blue region converges to the Euler–Mascheroni constant. (<a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant">Wikipedia</a>)</p>
<p>As with all the other constants, I used y-cruncher by Mr. Alexander J. Yee, basically the only program that can perform this computation. It is commonly used for stress testing and benchmarking overclocked PC builds (obviously, it performs a very rigorous computation), along with the fellow mathematical computing program Prime95. Compared to the earlier constants, this one is very intensive to compute, since it is not just a single series expansion; the computation time and disk writes grew essentially out of bounds.</p>
<p>Computation:<br />
System information:<br />
Operating System: Linux 3.10.0-693.21.1.el7.x86_64 x86_64 (CentOS 7)<br />
Processor(s):<br />
Name: Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz<br />
Logical Cores: 72<br />
Physical Cores: 36<br />
Sockets: 2<br />
NUMA Nodes: 2<br />
Base Frequency: 2,294,552,832 Hz</p>
<p>I used two CPUs that support the AVX-512 instructions crucial to vectorized programs like y-cruncher. However, y-cruncher hit an even more severe I/O bottleneck in this computation, as the required reads and writes were heavier than in any of my previous computations, so the core count did not help much. It was better than the <a href="/world-record/lemniscate-constant/">Lemniscate Constant</a> but worse than <a href="/world-record/catalans-constant/">Catalan’s Constant</a>. I used RAID scratch storage for my operations, but the disks were very slow compared to the required read/write operations.</p>
<p>Start Date: Mon Aug 19 17:21:44 2019<br />
End Date: Sat Jan 11 18:06:11 2020<br />
Total Computation Time: 11899422.659 seconds<br />
Start-to-End Wall Time: 12534266.770 seconds<br />
CPU Utilization: 451.31 % + 191.22 % kernel overhead<br />
Multi-core Efficiency: 6.27 % + 2.66 % kernel overhead</p>
<p>The computation took roughly 5 times longer than <a href="/world-record/catalans-constant/">Catalan’s Constant</a> at about 80-90% of its efficiency, so disk writes were the fundamental issue. I definitely cannot do this computation one more time.</p>
<p>Memory:<br />
Usable Memory: 201,226,489,856 ( 187 GiB)<br />
Logical Peak Disk Usage: 4,380,033,959,120 (3.98 TiB)<br />
Logical Disk Bytes Read: 3,146,383,900,360,116 (2.79 PiB)<br />
Logical Disk Bytes Written: 2,755,641,530,520,684 (2.45 PiB)</p>
<p>Looks like we have a new unit called PiB. This is the first computation I have done that exceeds 1 pebibyte of disk I/O. HDD speeds are still a great bottleneck relative to virtually every other component, so Optane DIMMs, SSDs with high write endurance, or simply more RAM can speed up the computation greatly.</p>
<p>For verification results, check the link at the start of the post.</p>
<p>If you want to take a look at the digits for the Euler–Mascheroni constant, you can download it from <a href="https://archive.org/details/euler_200111">This Link</a> (Over 1 TB total but don’t worry, it will just redirect to a registry with a link to download).</p>
<p><strong>Note that digits are released as an <a href="https://creativecommons.org/licenses/by-nc-nd/4.0/">Attribution-NonCommercial-NoDerivatives 4.0 International</a> License, meaning no commercial purposes and you cannot distribute a remixed, transformed, or built upon version without my consent. You must also give appropriate credit, provide a link to the license, and indicate if changes were made even if it is not a prohibited use case.</strong></p>
<p>Archive for computation results in the y-cruncher website:<br />
<a href="https://web.archive.org/web/20200528093605/http://www.numberworld.org/y-cruncher/">https://web.archive.org/web/20200528093605/http://www.numberworld.org/y-cruncher/</a><br />
Special thanks to Mr. Alexander J. Yee for developing and releasing y-cruncher and providing advice, <a href="https://www.anandtech.com/">AnandTech.com</a> Senior Editor <a href="https://twitter.com/iancutress">Dr. Ian Cutress</a> for verifying my computation, and the <a href="https://archive.org/">Internet Archive</a> for hosting the computed digits.</p>

<p><em>Catalan’s Constant, published 2019-07-23 at <a href="https://ehfd.github.io/world-record/catalans-constant">https://ehfd.github.io/world-record/catalans-constant</a></em></p>

<h2 id="the-catalans-constant">Catalan’s Constant</h2>
<p>Please cite:<br />
Kim, S. Normality Analysis of Current World Record Computations for Catalan’s Constant and Arc Length of a Lemniscate with a=1. arXiv Preprint <a href="https://arxiv.org/abs/1908.08925">arXiv:1908.08925</a><br />
if this article or the calculated digits were useful.</p>
<p>This world record computation of 600,000,000,100 digits by Seungmin Kim was done from Sat May 25 22:37:01 2019 to Tue Jun 18 18:59:44 2019 using the Pilehrood (2010-short) algorithm. This time, I have also verified the calculation using the Guillera (2008) algorithm from Fri Jun 7 11:13:58 2019 to Tue Jul 16 10:29:12 2019.
Validation file generated by y-cruncher v0.7.7 Build 9501 for computation, and y-cruncher v0.7.7 Build 9499 for the verification run:<br />
Computation: <a href="https://web.archive.org/web/20190724102605/http://www.numberworld.org/y-cruncher/records/2019_6_18_catalan.txt">https://web.archive.org/web/20190724102605/http://www.numberworld.org/y-cruncher/records/2019_6_18_catalan.txt</a><br />
Verification: <a href="https://web.archive.org/web/20190724102625/http://www.numberworld.org/y-cruncher/records/2019_7_16_catalan.txt">https://web.archive.org/web/20190724102625/http://www.numberworld.org/y-cruncher/records/2019_7_16_catalan.txt</a></p>
\[G = \beta(2) = \sum_{n=0}^{\infty} \frac{(-1)^{n}}{(2n+1)^2} = \frac{1}{1^2} - \frac{1}{3^2} + \frac{1}{5^2} - \frac{1}{7^2} + \frac{1}{9^2} - \cdots\]
<p>The definition of Catalan’s constant (<a href="https://en.wikipedia.org/wiki/Catalan%27s_constant">Wikipedia</a>)</p>
<p><a href="https://en.wikipedia.org/wiki/Catalan%27s_constant">Catalan’s constant</a> is defined by the above equation, where β is the <a href="https://en.wikipedia.org/wiki/Dirichlet_beta_function">Dirichlet beta function</a>, which is closely related to the Riemann zeta function (both are typically covered by mathematics undergraduates). Even though it looks irrational numerically, it is unproven whether it is transcendental, or even irrational. Take a look at this <a href="http://mathworld.wolfram.com/CatalansConstant.html">Wolfram Mathworld</a> entry for the mathematical details.</p>
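<p>For illustration only, here is a naive Python partial sum of the defining series (my own sketch; it converges far too slowly to be usable for records, which is why rapidly converging formulas such as Pilehrood’s are used instead):</p>

```python
def catalan_partial(terms=100_000):
    # direct alternating series; error after N terms is about 1/(2N)^2
    s = 0.0
    for n in range(terms):
        s += (-1) ** n / (2 * n + 1) ** 2
    return s
```

<p>With 100,000 terms this yields only around ten correct digits (G = 0.9159655941…), while the record computations produce hundreds of billions.</p>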
<p>As with the <a href="/world-record/lemniscate-constant/">Lemniscate Constant</a>, I used y-cruncher by Mr. Alexander J. Yee for this computation. The program is commonly used for stress testing and benchmarking overclocked PC builds (obviously, it performs a very rigorous computation), along with the fellow mathematical computing program Prime95.<br />
It was also very hard to keep this server stable, as the workload pushes every component of the computer to the extreme.</p>
<p>Computation:<br />
System information:<br />
Operating System: Linux 3.10.0-693.21.1.el7.x86_64 x86_64 (CentOS 7)<br />
Processor(s):<br />
Name: Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz<br />
Logical Cores: 72<br />
Physical Cores: 36<br />
Sockets: 2<br />
NUMA Nodes: 2<br />
Base Frequency: 2,294,522,799 Hz</p>
<p>I used two CPUs that support the AVX-512 instructions on which y-cruncher's vector operations rely heavily. However, y-cruncher suffers from severe I/O bottlenecks (although I selected a better algorithm than for the <a href="/world-record/lemniscate-constant/">Lemniscate Constant</a> and optimized some important aspects of the computation), so the core count did not help as much as it could have. I used RAID scratch storage for the computation, but the disk speed was very slow compared to the required R/W throughput.</p>
<p>Start Date: Sat May 25 22:37:01 2019<br />
End Date: Tue Jun 18 18:59:44 2019<br />
Total Computation Time: 2028121.582 seconds<br />
Start-to-End Wall Time: 2060562.370 seconds<br />
CPU Utilization: 564.31 % + 151.19 % kernel overhead<br />
Multi-core Efficiency: 7.84 % + 2.10 % kernel overhead</p>
<p>The multi-core efficiency improved slightly over the <a href="/world-record/lemniscate-constant/">Lemniscate Constant</a>, but still fell far short of optimal (Dr. Ian Cutress’s <a href="/world-record/lemniscate-constant/">Lemniscate Constant</a> calculation reached an efficiency of 94.04 % and a CPU utilization of 9027.61 %, which means the CPU was not bottlenecked by other factors).</p>
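<p>For readers unfamiliar with y-cruncher's log format, the "multi-core efficiency" figure is simply the reported CPU utilization divided by 100 % per logical core. A quick check against the numbers above (the helper function name is my own):</p>

```python
# Multi-core efficiency = CPU utilization / (100% per logical core * logical cores).
# Checked against the computation log above: 72 logical cores across two sockets.

def multicore_efficiency(cpu_utilization_pct: float, logical_cores: int) -> float:
    """Percentage of total logical-core capacity actually used."""
    return cpu_utilization_pct / logical_cores

print(f"{multicore_efficiency(564.31, 72):.2f} %")  # -> 7.84 %
print(f"{multicore_efficiency(151.19, 72):.2f} %")  # -> 2.10 % (kernel overhead)
```

<p>In other words, on average fewer than 6 of the 72 logical cores were doing useful work at any moment; the rest of the time was spent waiting on disk.</p>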
<p>Memory:<br />
Usable Memory: 201,159,380,992 ( 187 GiB)<br />
Logical Peak Disk Usage: 3,962,541,437,992 (3.60 TiB)<br />
Logical Disk Bytes Read: 543,482,162,425,752 ( 494 TiB)<br />
Logical Disk Bytes Written: 474,688,611,298,000 ( 432 TiB)</p>
<p>Disk I/O was roughly halved compared to the <a href="/world-record/lemniscate-constant/">Lemniscate Constant</a> thanks to the more efficient algorithm, and this contributed to a faster computation. <br />
One caveat is that HDD I/O speed is a severe bottleneck relative to virtually every other component; Optane DIMMs or simply more conventional RAM could speed up the computation greatly.</p>
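<p>To see why the disks dominated, we can estimate the sustained logical throughput implied by the log above: total logical bytes transferred divided by the start-to-end wall time. This is an upper bound on how much the array could have been asked to do on average, not a measured disk benchmark:</p>

```python
# Rough sustained disk throughput implied by the computation log:
# logical bytes transferred divided by start-to-end wall time (decimal MB/s).

def avg_throughput_mb_s(total_bytes: int, wall_seconds: float) -> float:
    """Average logical transfer rate over the whole run, in MB/s."""
    return total_bytes / wall_seconds / 1e6

WALL_SECONDS = 2_060_562.370          # start-to-end wall time of the computation
read_mb_s = avg_throughput_mb_s(543_482_162_425_752, WALL_SECONDS)   # 494 TiB read
write_mb_s = avg_throughput_mb_s(474_688_611_298_000, WALL_SECONDS)  # 432 TiB written
print(f"read ~{read_mb_s:.0f} MB/s, write ~{write_mb_s:.0f} MB/s")
```

<p>Roughly 260 MB/s of reads plus 230 MB/s of writes, sustained for over three weeks, is near the practical limit of an HDD RAID scratch array, which is consistent with the low multi-core efficiency seen above.</p>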
<p>Verification:<br />
System information:<br />
Operating System: Linux 3.10.0-693.21.1.el7.x86_64 x86_64 (CentOS 7)<br />
Processor(s):<br />
Name: Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz<br />
Logical Cores: 72<br />
Physical Cores: 36<br />
Sockets: 2<br />
NUMA Nodes: 2<br />
Base Frequency: 2,294,527,776 Hz</p>
<p>Start Date: Fri Jun 7 11:13:58 2019<br />
End Date: Tue Jul 16 10:29:12 2019<br />
Total Computation Time: 3218837.554 seconds<br />
Start-to-End Wall Time: 3366914.430 seconds<br />
CPU Utilization: 639.34 % + 150.50 % kernel overhead<br />
Multi-core Efficiency: 8.88 % + 2.09 % kernel overhead</p>
<p>Memory:<br />
Usable Memory: 201,226,489,856 ( 187 GiB)<br />
Logical Peak Disk Usage: 3,986,470,844,768 (3.63 TiB)<br />
Logical Disk Bytes Read: 947,069,197,181,784 ( 861 TiB)<br />
Logical Disk Bytes Written: 829,531,225,016,496 ( 754 TiB)</p>
<p>A less efficient algorithm led to much more disk I/O, resulting in R/W volumes similar to the <a href="/world-record/lemniscate-constant/">Lemniscate Constant</a>.</p>
<p>Overall, because Catalan’s constant is computed from a comparatively simple series, unlike the AGM-Pi algorithm used for the <a href="/world-record/lemniscate-constant/">Lemniscate Constant</a>, the computation was faster and less I/O-bound.</p>
<p>If you want to take a look at the digits of <a href="http://mathworld.wolfram.com/CatalansConstant.html">Catalan’s constant</a>, you can download them from <a href="https://archive.org/details/catalan_190618">this link</a> (over 1 TB in total, but don’t worry: it just redirects to a registry with a download link).</p>
<p><strong>Note that the digits are released under an <a href="https://creativecommons.org/licenses/by-nc-nd/4.0/">Attribution-NonCommercial-NoDerivatives 4.0 International</a> License, meaning no commercial use, and you cannot distribute a remixed, transformed, or built-upon version without my consent. You must also give appropriate credit, provide a link to the license, and indicate if changes were made, even for permitted uses.</strong></p>
<p>Archive for computation results in the y-cruncher website:<br />
<a href="https://web.archive.org/web/20190722034426/http://www.numberworld.org/y-cruncher/">https://web.archive.org/web/20190722034426/http://www.numberworld.org/y-cruncher/</a><br />
Special thanks to Mr. Alexander J. Yee for developing and releasing y-cruncher and providing advice, and the <a href="https://archive.org/">Internet Archive</a> for hosting the computed digits.</p>