An integer is either a perfect square or its square root is irrational. Essentially: when you compute the square root of an integer, either there are no digits to the right of the decimal point, or there are infinitely many digits to the right of the decimal point and they don’t repeat. There’s no middle ground: you can’t hope, for example, that the decimal expansion might stop or start repeating after a hundred or so digits.
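To see the dichotomy numerically, here is a minimal Python sketch (my illustration, not part of the original post) that separates the two cases for a few integers; `math.isqrt` is the standard-library integer square root.

```python
import math

for n in [16, 17, 25, 26]:
    r = math.isqrt(n)  # floor of the exact square root
    if r * r == n:
        print(f"sqrt({n}) = {r} exactly (perfect square)")
    else:
        # By the theorem, sqrt(n) is irrational: the decimal
        # expansion below never terminates and never repeats.
        print(f"sqrt({n}) ~ {math.sqrt(n):.12f} (irrational)")
```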
The proof of this theorem is surprisingly simple, not much harder than the familiar proof that the square root of 2 is irrational.
Suppose $p/q$ is a fraction in lowest terms, i.e. $p$ and $q$ are co-prime (i.e. their gcd is 1), and $p/q$ is a solution to

$$x^k = n$$

where $n$ is an integer and $k$ is a positive integer. Then:

$$\left(\frac{p}{q}\right)^k = n$$

and so:

$$\frac{p^k}{q^k} = n.$$

Now the right side of the equation above is an integer, so the left side must be an integer as well. But $p$ is relatively prime to $q$, and so $p^k$ is relatively prime to $q^k$. The only way $p^k/q^k$ could be an integer is for $q$ to equal 1 or -1. And so $p/q$ must be an integer.
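To make the argument concrete, here is the same chain of steps with the specific values $n = 5$ and $k = 2$ (an added illustration). If $\sqrt{5} = p/q$ in lowest terms, then

$$\frac{p^2}{q^2} = 5,$$

and since $\gcd(p, q) = 1$ implies $\gcd(p^2, q^2) = 1$, the left side can be an integer only if $q = \pm 1$. That would make $\sqrt{5} = \pm p$ an integer, which is impossible since $2^2 < 5 < 3^2$; hence $\sqrt{5}$ is irrational.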
Another way to get the same result is to assume $p/q$ is an irreducible fraction and is not an integer (i.e. $q > 1$, taking $q$ positive), and consider

$$\left(\frac{p}{q}\right)^k = \frac{p^k}{q^k}.$$

Clearly $p^k$ and $q^k$ are co-prime and the denominator $q^k > 1$, so $p^k/q^k$ is not an integer.
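For instance (again an added example): $3/2$ is irreducible with denominator $2 > 1$, and

$$\left(\frac{3}{2}\right)^2 = \frac{9}{4},$$

where $9$ and $4$ are still co-prime and the denominator $4 > 1$, so no power of $3/2$ can be an integer.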
So what we said about square roots extends to cube roots and in fact to all integer roots (for example, the fifth root of an integer is either an integer or an irrational number). In other words: no non-integer fraction, when raised to a positive integer power, can produce an integer.
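A small Python sketch of the general statement (my addition; the helper `is_perfect_kth_power` is made up for illustration): the $k$-th root of $n$ is an integer exactly when $n$ is a perfect $k$-th power, and by the theorem it is irrational in every other case.

```python
def is_perfect_kth_power(n: int, k: int) -> bool:
    """Return True iff n >= 0 is a perfect k-th power (k >= 1)."""
    r = round(n ** (1 / k))  # floating-point estimate of the k-th root
    # Test neighbors of the estimate to absorb floating-point error.
    return any((r + d) ** k == n for d in (-1, 0, 1) if r + d >= 0)

for n, k in [(32, 5), (33, 5), (27, 3), (28, 3)]:
    kind = "an integer" if is_perfect_kth_power(n, k) else "irrational"
    print(f"The {k}-th root of {n} is {kind}")
```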
(reblogged from John D. Cook’s blog)