The binary GCD algorithm, also known as Stein's algorithm or the binary Euclidean algorithm,[1][2] is an algorithm that computes the greatest common divisor (GCD) of two nonnegative integers. Stein's algorithm uses simpler arithmetic operations than the conventional Euclidean algorithm; it replaces division with arithmetic shifts, comparisons, and subtraction.
Although the algorithm in its contemporary form was first published by the physicist and programmer Josef Stein in 1967,[3] it was known by the 2nd century BCE, in ancient China.[4]
The algorithm finds the GCD of two nonnegative numbers u and v by repeatedly applying these identities:
1. gcd(u, 0) = u: everything divides zero, and u is the largest number that divides u.
2. gcd(2u, 2v) = 2·gcd(u, v): 2 is a common divisor of two even numbers.
3. gcd(u, 2v) = gcd(u, v), if u is odd: 2 is then not a common divisor.
4. gcd(u, v) = gcd(u, v − u), if u and v are both odd and u ≤ v: the difference of two odd numbers is even.
As GCD is commutative (gcd(u, v) = gcd(v, u)), those identities still apply if the operands are swapped: gcd(0, v) = v, gcd(2u, v) = gcd(u, v) if v is odd, etc.
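For instance, the identities reduce gcd(48, 18) as follows: gcd(48, 18) = 2·gcd(24, 9) (identity 2) = 2·gcd(3, 9) (identity 3 with the operands swapped, applied three times) = 2·gcd(3, 6) (identity 4) = 2·gcd(3, 3) (identity 3) = 2·gcd(3, 0) (identity 4) = 2·3 = 6 (identity 1).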
While the above description of the algorithm is mathematically correct, performant software implementations typically differ from it in a few notable ways:
- they eschew trial division by 2 in favour of a single bitshift and the count trailing zeros primitive, which is functionally equivalent to repeatedly applying identity 3 but much faster;
- they express the algorithm iteratively rather than recursively: identity 2 is applied once at the start, and the loop maintains as an invariant that both operands are odd on entry, so the loop body only needs to implement identities 3 and 4;
- they keep the loop body simple apart from the exit test (v == 0): the operands are swapped when necessary so that the subtraction of identity 4 never underflows, a comparison that compilers typically turn into conditional moves rather than a hard-to-predict branch.
The following is an implementation of the algorithm in Rust exemplifying those differences, adapted from uutils:
use std::cmp::min;
use std::mem::swap;

pub fn gcd(mut u: u64, mut v: u64) -> u64 {
    // Base cases: gcd(n, 0) = gcd(0, n) = n
    if u == 0 {
        return v;
    } else if v == 0 {
        return u;
    }

    // Using identities 2 and 3:
    // gcd(2ⁱ u, 2ʲ v) = 2ᵏ gcd(u, v) with u, v odd and k = min(i, j)
    // 2ᵏ is the greatest power of two that divides both 2ⁱ u and 2ʲ v
    let i = u.trailing_zeros();
    u >>= i;
    let j = v.trailing_zeros();
    v >>= j;
    let k = min(i, j);

    loop {
        // u and v are odd at the start of the loop
        debug_assert!(u % 2 == 1, "u = {} should be odd", u);
        debug_assert!(v % 2 == 1, "v = {} should be odd", v);

        // Swap if necessary so u ≤ v
        if u > v {
            swap(&mut u, &mut v);
        }

        // Identity 4: gcd(u, v) = gcd(u, v-u) as u ≤ v and u, v are both odd
        v -= u;
        // v is now even

        if v == 0 {
            // Identity 1: gcd(u, 0) = u
            // The shift by k is necessary to add back the 2ᵏ factor that was removed before the loop
            return u << k;
        }

        // Identity 3: gcd(u, 2ʲ v) = gcd(u, v) as u is odd
        v >>= v.trailing_zeros();
    }
}
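For instance, a small test (a hypothetical check, not taken from uutils) exercising the function above:

#[test]
fn gcd_small_cases() {
    assert_eq!(gcd(48, 18), 6); // matches the worked example above
    assert_eq!(gcd(0, 7), 7);   // base case: gcd(0, n) = n
    assert_eq!(gcd(7, 0), 7);   // base case: gcd(n, 0) = n
}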
Note: The implementation above accepts unsigned (non-negative) integers; given that gcd(u, v) = gcd(|u|, |v|), the signed case can be handled as follows:
/// Computes the GCD of two signed 64-bit integers
/// The result is unsigned and not always representable as i64: gcd(i64::MIN, i64::MIN) == 1 << 63
pub fn signed_gcd(u: i64, v: i64) -> u64 {
    gcd(u.unsigned_abs(), v.unsigned_abs())
}
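A couple of illustrative checks (again hypothetical, not from uutils):

#[test]
fn signed_gcd_cases() {
    assert_eq!(signed_gcd(-48, 18), 6);                     // signs are discarded
    assert_eq!(signed_gcd(i64::MIN, i64::MIN), 1u64 << 63); // result exceeds i64::MAX
}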
Asymptotically, the algorithm requires O(n) steps, where n is the number of bits in the larger of the two numbers, as every two steps reduce at least one of the operands by at least a factor of 2. Each step involves only a few arithmetic operations (O(1) with a small constant); when working with word-sized numbers, each arithmetic operation translates to a single machine operation, so the number of machine operations is on the order of n, i.e. log₂(max(u, v)).
For arbitrarily large numbers, the asymptotic complexity of this algorithm is O(n²),[8] as each arithmetic operation (subtract and shift) involves a linear number of machine operations (one per word in the numbers' binary representation). If the numbers can be represented in the machine's memory, i.e. each number's size can be represented by a single machine word, this bound is reduced to O(n²/log n).
This is the same as for the Euclidean algorithm, though a more precise analysis by Akhavi and Vallée proved that binary GCD uses about 60% fewer bit operations.[9]
The binary GCD algorithm can be extended in several ways, either to output additional information, deal with arbitrarily large integers more efficiently, or to compute GCDs in domains other than the integers.
The extended binary GCD algorithm, analogous to the extended Euclidean algorithm, fits in the first kind of extension, as it provides the Bézout coefficients in addition to the GCD: integers a and b such that a·u + b·v = gcd(u, v).[10][11][12]
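As a rough illustration of how such an extension can work, the following is a minimal sketch of one classical formulation of the extended binary GCD (the function name ext_binary_gcd, the restriction to positive inputs, and the use of i64 throughout are assumptions made here for brevity; it is not the specific implementation described in the references above):

pub fn ext_binary_gcd(x: i64, y: i64) -> (i64, i64, i64) {
    // Sketch only: assumes x, y > 0 and ignores overflow for inputs near i64::MAX.
    assert!(x > 0 && y > 0);
    // Identity 2: factor out the common power of two.
    let shift = (x | y).trailing_zeros();
    let (xs, ys) = (x >> shift, y >> shift);
    // Invariants: a*xs + b*ys == u and c*xs + d*ys == v.
    let (mut u, mut v) = (xs, ys);
    let (mut a, mut b, mut c, mut d) = (1i64, 0i64, 0i64, 1i64);
    while u != 0 {
        // Remove factors of two from u (identity 3, operands swapped), keeping the
        // invariant: if a, b are not both even, adding (ys, -xs) leaves a*xs + b*ys
        // unchanged and makes both coefficients even.
        while u % 2 == 0 {
            u /= 2;
            if a % 2 != 0 || b % 2 != 0 {
                a += ys;
                b -= xs;
            }
            a /= 2;
            b /= 2;
        }
        // Same reduction for v, with the same bookkeeping on c and d.
        while v % 2 == 0 {
            v /= 2;
            if c % 2 != 0 || d % 2 != 0 {
                c += ys;
                d -= xs;
            }
            c /= 2;
            d /= 2;
        }
        // Identity 4 on whichever operand is larger.
        if u >= v {
            u -= v;
            a -= c;
            b -= d;
        } else {
            v -= u;
            c -= a;
            d -= b;
        }
    }
    // gcd(x, y) = 2^shift * v, and c*x + d*y equals that same value.
    (v << shift, c, d)
}

For example, ext_binary_gcd(48, 18) returns (6, -4, 11), and indeed -4·48 + 11·18 = 6.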
In the case of large integers, the best asymptotic complexity is O(M(n) log n), with M(n) the cost of n-bit multiplication; this is near-linear and vastly smaller than the binary GCD algorithm's O(n²), though concrete implementations only outperform older algorithms for numbers larger than about 64 kilobits (i.e. greater than 8×10^19265). This is achieved by extending the binary GCD algorithm using ideas from the Schönhage–Strassen algorithm for fast integer multiplication.[13]
The binary GCD algorithm has also been extended to domains other than natural numbers, such as Gaussian integers,[14] Eisenstein integers,[15] quadratic rings,[16][17] and integer rings of number fields.[18]
An algorithm for computing the GCD of two numbers was known in ancient China, under the Han dynasty, as a method to reduce fractions:
If possible halve it; otherwise, take the denominator and the numerator, subtract the lesser from the greater, and do that alternately to make them the same. Reduce by the same number.
— Fangtian – Land surveying, The Nine Chapters on the Mathematical Art
The phrase "if possible halve it" is ambiguous:[4] if this applies whenever either of the numbers becomes even, the algorithm is the binary GCD algorithm; if it applies only when both numbers are even, the algorithm is closer to the Euclidean algorithm.