**Assumptions of Algebra**

INTRODUCTION

The word “algebra” came to Europe from the Arabic “al-jabr” in the title of a book by the Persian mathematician Muhammad ibn Musa al-Khwarizmi, written in about 830 A.D. and titled “The Compendious Book on Calculation by Completion and Balancing” (al-Kitab al-mukhtasar fi hisab al-jabr wal-muqabala). Al-Khwarizmi’s family apparently practiced the old Zoroastrian religion.

But this was largely a rehash of earlier developments in Mesopotamia, Egypt, and Greece. One of the earliest texts describing the algebraic method is the “Rhind Papyrus”, written about 1650 B.C. by the Egyptian scribe Ahmes, who transcribed it from an earlier work that he dated to between 2000 and 1800 B.C. The Greek philosopher Iamblichus, in his commentary on the “Introductio Arithmetica”, mentions an earlier work by Thymaridas (c. 400 B.C. to 350 B.C.) on the algebra of solving simultaneous linear equations.

In modern times, a formal development of algebra begins with fundamental axioms, which are assumed to be true without proof and which can then be used to develop the entire structure of algebra. These relations concern only addition and multiplication; they do not hold for subtraction or division. The axioms fall into four groups, as follows:

Inverse Law of Addition

Inverse Law of Multiplication

Commutative Law of Addition

Commutative Law of Multiplication

Associative Law of Addition

Associative Law of Multiplication

Distributive Law
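As a quick numerical spot-check, the laws above can be written out in Python. This is only a sketch with a few sample values; a handful of examples illustrates what each equation asserts but of course proves nothing.

```python
# Spot-check of the axioms on sample real numbers.
a, b, c = 4.0, -5.0, 2.0

assert a + (-a) == 0                  # inverse law of addition
assert a * (1 / a) == 1               # inverse law of multiplication
assert a + b == b + a                 # commutative law of addition
assert a * b == b * a                 # commutative law of multiplication
assert (a + b) + c == a + (b + c)     # associative law of addition
assert (a * b) * c == a * (b * c)     # associative law of multiplication
assert a * (b + c) == a * b + a * c   # distributive law
```

Note that with floating-point arithmetic some of these identities can fail to hold exactly for arbitrary values; the sample values here were chosen to be exactly representable.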

SUBTRACTION AND DIVISION

Unlike addition and multiplication, the operations of subtraction and division are not so well behaved in the real number system.

To start, the subtraction operator is defined by changing the sign of the second argument “b” and then adding: a - b = a + (-b). The commutative law of subtraction does not hold; in general a - b ≠ b - a.

Neither is the associative law of subtraction valid: in general (a - b) - c ≠ a - (b - c). Note that the sloppy notation a - b - c is sometimes allowed by assuming subtraction progresses from left to right, so that a - b - c means (a - b) - c.

Strictly speaking, division is defined by inverting the second argument “b” (which must be nonzero) and then multiplying: a / b = a * (1/b). And the commutative law of division does not hold; in general a / b ≠ b / a.

The associative law of division is not valid either: in general (a / b) / c ≠ a / (b / c). And the sloppy notation a / b / c is barely meaningful, and only with the added stipulation that operations progress from left to right. This is especially important to remember when writing computer programs, which adopt this convention.

Rather we should write (a / b) / c or a / (b / c) explicitly.
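A short Python sketch makes the point concrete (Python, like most programming languages, evaluates chained - and / from left to right):

```python
a, b, c = 100.0, 10.0, 2.0

# Subtraction and division are not commutative.
assert a - b != b - a               # 90 vs -90
assert a / b != b / a               # 10 vs 0.1

# Nor associative: grouping changes the result.
assert (a - b) - c != a - (b - c)   # 88 vs 92
assert (a / b) / c != a / (b / c)   # 5 vs 20

# Unparenthesized chains are evaluated left to right.
assert a - b - c == (a - b) - c     # 88
assert a / b / c == (a / b) / c     # 5.0
```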

SIGNED MULTIPLICATION OF INTEGERS

Suppose we have the positive integers “a”, “b”, and “c”, so that a > 0, b > 0, and c > 0.

Then, using the distributive law, we can write

a * (b + (-b)) = a * 0 = 0

or

a * b + a * (-b) = 0, so that a * (-b) = -(a * b)

So the multiplication of a positive and a negative is a negative number.

And in a similar manner we can write

(-a) * (b + (-b)) = (-a) * 0 = 0

or

(-a) * b + (-a) * (-b) = 0, so that (-a) * (-b) = -((-a) * b) = a * b

And so the multiplication of two negatives is a positive number.
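These sign rules are easy to spot-check with a trivial Python sketch:

```python
a, b = 3, 5

# A positive times a negative is negative.
assert a * (-b) == -(a * b)       # 3 * -5 == -15

# A negative times a negative is positive.
assert (-a) * (-b) == a * b       # -3 * -5 == 15

# The key step of the derivation: the terms collapse to zero.
assert a * b + a * (-b) == 0
```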

A perhaps amazing implication of this is that the square root of a negative number does not exist in the real number system: whether positive or negative, any real number multiplied by itself is non-negative. To get around this shortcoming, imaginary numbers were defined. They were immediately useful as values for the roots of polynomials. But because only real numbers are measurements of real things, imaginary numbers are necessarily non-physical. Rather, they must be converted to a real number by some simple algorithm, such as taking the magnitude, before they can serve as the measurement of any real quantity.
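Python makes the distinction visible: the real-valued math.sqrt rejects negative input, while cmath.sqrt returns a complex (imaginary) result, from which a real quantity such as the magnitude can then be extracted.

```python
import cmath
import math

# No real square root of a negative number exists.
try:
    math.sqrt(-4)
except ValueError:
    print("no real square root of -4")

# The complex square root does exist: (2j) * (2j) = -4.
z = cmath.sqrt(-4)
assert z * z == -4

# Converting back to a real number, e.g. via the magnitude.
assert abs(z) == 2.0
```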

MINUS SIGNS

Unfortunately there is often some misunderstanding concerning the dreaded minus sign “-”. This is because this sign has three separate meanings depending on context. What saves us is that all three yield identical results. These three separate usages are:

1. The minus sign, when preceding a number, is simply part of the name. Some examples might be “-3” or “-17.2”. In this case the minus sign simply designates a specific number which happens to be negative. Note that positive numbers such as “+2” are often written without the leading plus sign “+”.

2. When applied as a unary operator to a variable or an expression, it is shorthand for multiplication by “-1”. So for example we might have

-x = (-1) * x

or in an expression as

-(a + b) = (-1) * (a + b) = -a - b

3. When used as a binary operator, a minus sign means the formal operation of subtraction, as follows:

a - b = a + (-b)
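All three usages, and their consistency with one another, can be seen in a short Python sketch:

```python
x = 5

n = -3        # 1. part of the number's name: the literal -3
u = -x        # 2. unary operator applied to a variable
d = 2 - x     # 3. binary operator: subtraction

assert u == (-1) * x     # unary minus is multiplication by -1
assert d == 2 + (-x)     # subtraction is adding the negation
assert n == (-1) * 3     # the three readings agree
```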