1. Number System

The concept of number systems is fundamental to Computer Architecture and Organization because computers operate using numerical representations of data and instructions. A number system defines how numbers are represented using a set of digits and a base (also called radix). In computing, different number systems are used to simplify hardware design, data processing, and communication between humans and machines.

A number system is made up of two key components: digits and base. The base determines the number of unique digits used to represent numbers. For example, the decimal number system has a base of 10 and uses digits from 0 to 9, while the binary number system has a base of 2 and uses only two digits, 0 and 1. The value of a number in any system depends on the position of its digits, which is referred to as positional notation. Each position represents a power of the base.
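
To make positional notation concrete, the short Python sketch below evaluates a digit string in an arbitrary base by summing digit × base^position; the function name positional_value is purely illustrative.

    def positional_value(digits: str, base: int) -> int:
        """Evaluate a digit string using positional notation (illustrative sketch)."""
        value = 0
        for position, digit_char in enumerate(reversed(digits)):
            digit = int(digit_char, base)    # '0'-'9', plus letters for bases above 10
            value += digit * base ** position
        return value

    print(positional_value("472", 10))   # 4×100 + 7×10 + 2×1 = 472
    print(positional_value("1011", 2))   # 1×8 + 0×4 + 1×2 + 1×1 = 11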

Number systems are essential in computer architecture because they define how data and instructions are represented and processed in a computer. While humans naturally use the decimal system (base 10), computers operate using the binary system (base 2), which consists only of 0s and 1s. This is because digital hardware relies on two-state electronic components. Other number systems like octal (base 8) and hexadecimal (base 16) are used to make binary numbers easier for humans to read and work with, especially in programming and memory representation.

The binary system forms the foundation of all computer operations, where each digit (bit) represents a power of 2. Larger units like bytes (8 bits) are used to store data. To simplify long binary sequences, octal groups bits into sets of three, while hexadecimal groups them into sets of four, making them more compact and readable. Conversion between these number systems is a key skill, allowing users to switch between human-friendly formats and machine-level representations. Additionally, binary arithmetic, such as addition and subtraction, is fundamental to how the CPU performs calculations.

Another important aspect of number systems is how they represent different types of data, including negative numbers and characters. Signed numbers are commonly represented using two’s complement, which simplifies arithmetic operations in computers. Beyond numbers, all forms of data, including text, images, and instructions, are ultimately encoded in binary. Understanding number systems therefore provides a strong foundation for learning more advanced topics like assembly language, memory organization, and overall computer system design.

In computing, the most important number systems are decimal, binary, octal, and hexadecimal. The decimal system, which has a base of 10, is the standard system used in everyday human activities such as counting, measuring, and financial transactions. It is easy to understand because it uses ten digits ranging from 0 to 9. However, despite its convenience for humans, the decimal system is not suitable for direct implementation in computers because digital electronic systems are built using components that operate in two distinct states, typically represented as on and off.

As a result, computers rely on the binary number system, which has a base of 2 and uses only two digits: 0 and 1. These two digits correspond directly to the two states of digital circuits, making binary the most natural and efficient system for computer operations. Each binary digit is known as a bit, and it represents the smallest unit of data in a computer. Binary forms the basis for all types of data processing, storage, and communication within a computer system.

A collection of 8 bits is called a byte, and it is commonly used to represent a single character or a small unit of data, such as a letter, number, or symbol. In binary representation, each position in a number corresponds to a power of 2, starting from the rightmost digit. For example, the binary number 1011 can be expanded as 1×2³ + 0×2² + 1×2¹ + 1×2⁰, which equals 11 in decimal. This positional system allows computers to represent and manipulate numerical values efficiently using simple electrical signals.
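
The same expansion can be verified in a few lines of Python; the snippet below is a minimal sketch that lists each positional term of 1011₂ and checks the sum against Python's built-in base conversion.

    bits = "1011"
    terms = [int(b) * 2 ** p for p, b in enumerate(reversed(bits))]
    print(terms)          # [1, 2, 0, 8]  (positions 0 to 3, right to left)
    print(sum(terms))     # 11
    print(int(bits, 2))   # 11, using Python's built-in conversion from base 2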

Although binary is ideal for machine-level operations, it can be lengthy and difficult for humans to read and interpret. To address this challenge, octal (base 8) and hexadecimal (base 16) number systems are used as more compact representations of binary numbers. These systems simplify binary by grouping bits into sets of three (for octal) or four (for hexadecimal), making it easier for programmers and engineers to work with large binary values. Together, these number systems provide a bridge between human understanding and computer processing.

The octal system has a base of 8 and uses digits from 0 to 7. It simplifies binary representation by grouping binary digits into sets of three, starting from the rightmost bit. For example, the binary number 110101 can be grouped as 110 and 101, which correspond to 6 and 5 in octal, giving 65₈.
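
The grouping can be made explicit with a small Python sketch; binary_to_octal is an illustrative helper that assumes an unsigned binary string as input.

    def binary_to_octal(bits: str) -> str:
        """Group bits in threes from the right and convert each group (sketch)."""
        bits = bits.zfill((len(bits) + 2) // 3 * 3)   # pad on the left to a multiple of 3
        groups = [bits[i:i + 3] for i in range(0, len(bits), 3)]
        return "".join(str(int(group, 2)) for group in groups)

    print(binary_to_octal("110101"))   # '65' -> 110₂ = 6 and 101₂ = 5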

The hexadecimal number system is a base-16 system that uses sixteen symbols to represent values. These include the digits 0 to 9 and the letters A to F, where A represents 10, B represents 11, and so on up to F, which represents 15. This extension beyond the decimal digits allows hexadecimal to express larger values in fewer digits. As a result, it is particularly useful in computing environments where compact representation of large binary values is required.

One of the key advantages of hexadecimal is its close relationship with the binary number system. Since 16 is a power of 2 (2⁴), each hexadecimal digit corresponds exactly to four binary digits (bits). This makes conversion between binary and hexadecimal straightforward and efficient. For example, the binary number 1111 is equivalent to F in hexadecimal, while 1010 corresponds to A. By grouping binary digits into sets of four, long and complex binary numbers can be simplified into shorter and more readable hexadecimal forms.
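
The same grouping idea can be sketched for hexadecimal; binary_to_hex is again an illustrative helper that pads the input to a multiple of four bits before converting each group.

    def binary_to_hex(bits: str) -> str:
        """Group bits in fours from the right and convert each group (sketch)."""
        bits = bits.zfill((len(bits) + 3) // 4 * 4)   # pad on the left to a multiple of 4
        groups = [bits[i:i + 4] for i in range(0, len(bits), 4)]
        return "".join(format(int(group, 2), "X") for group in groups)

    print(binary_to_hex("1111"))       # 'F'
    print(binary_to_hex("1010"))       # 'A'
    print(binary_to_hex("11010111"))   # 'D7' -> 1101₂ = D and 0111₂ = 7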

Hexadecimal is widely used in various areas of computing, particularly in memory addressing, debugging, and low-level programming. Memory addresses in computer systems are often represented in hexadecimal because it provides a concise way to display large binary values. Programmers and system engineers also use hexadecimal when analyzing machine-level data, such as inspecting registers, writing assembly language programs, or debugging code. Its readability and compactness make it an essential tool in understanding how computers operate internally.

An important aspect of working with number systems is the ability to convert between them. Converting from binary to decimal involves multiplying each bit by its corresponding power of 2 and summing the results. On the other hand, converting from decimal to binary requires repeated division by 2 while keeping track of remainders. Similarly, conversions between binary and hexadecimal (or octal) rely on grouping bits into sets of four (or three for octal). Mastering these conversion techniques is crucial for students of computer architecture, as it enables them to move seamlessly between human-readable and machine-level representations of data.
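
Both conversion procedures described above can be sketched in Python; the function names are illustrative and the code assumes non-negative integers.

    def decimal_to_binary(n: int) -> str:
        """Repeatedly divide by 2, collecting remainders (read back in reverse)."""
        if n == 0:
            return "0"
        remainders = []
        while n > 0:
            remainders.append(str(n % 2))   # remainder of this division step
            n //= 2
        return "".join(reversed(remainders))

    def binary_to_decimal(bits: str) -> int:
        """Multiply each bit by its power of 2 and sum the results."""
        return sum(int(b) * 2 ** p for p, b in enumerate(reversed(bits)))

    print(decimal_to_binary(11))       # '1011'
    print(binary_to_decimal("1011"))   # 11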

Arithmetic operations can be performed in the binary number system just as they are in the decimal system, including addition, subtraction, multiplication, and division. However, these operations follow simpler rules because binary uses only two digits: 0 and 1. Among these operations, binary addition is the most fundamental and forms the basis for all other arithmetic processes in a computer. The basic rules are straightforward: 0 + 0 equals 0, 0 + 1 equals 1, 1 + 0 equals 1, and 1 + 1 equals 10, which produces a sum of 0 and a carry of 1 to the next higher bit position.

When performing binary addition involving multiple bits, the concept of carrying becomes very important. For example, adding 1011₂ and 0110₂ involves adding each column from right to left, carrying over when necessary, and gives 10001₂ (11 + 6 = 17). This process is similar to decimal addition but simpler due to the limited number of digits. Binary subtraction, multiplication, and division follow similar logical principles, often relying on repeated addition or shifting operations. These arithmetic processes are essential because they reflect how actual computations are carried out within the computer hardware.
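
A minimal Python sketch of this column-by-column addition, using illustrative names, is shown below.

    def binary_add(a: str, b: str) -> str:
        """Add two binary strings column by column, propagating the carry (sketch)."""
        width = max(len(a), len(b))
        a, b = a.zfill(width), b.zfill(width)
        carry, result = 0, []
        for bit_a, bit_b in zip(reversed(a), reversed(b)):
            total = int(bit_a) + int(bit_b) + carry
            result.append(str(total % 2))   # sum bit for this column
            carry = total // 2              # carry into the next column
        if carry:
            result.append("1")
        return "".join(reversed(result))

    print(binary_add("1011", "0110"))   # '10001' -> 11 + 6 = 17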

Understanding binary arithmetic is crucial for appreciating the role of the Arithmetic Logic Unit (ALU) in the Central Processing Unit (CPU). The ALU is responsible for executing all arithmetic and logical operations in a computer system. It performs binary calculations at very high speeds using digital circuits. Every operation, whether simple or complex, is ultimately broken down into basic binary operations handled by the ALU. Therefore, mastering binary arithmetic provides insight into how computers process data at the hardware level.

Another important concept in number systems is the distinction between signed and unsigned numbers. Unsigned numbers represent only non-negative values, meaning they start from 0 and increase up to a maximum value based on the number of bits available. For example, with 4 bits, an unsigned number can represent values from 0 to 15 (0000₂ = 0 and 1111₂ = 15). However, this range does not include negative numbers. In contrast, signed numbers allow both positive and negative values, which is essential in many real-world computations such as temperature measurement or financial calculations. For instance, in a 4-bit signed system using two’s complement, the range becomes -8 to +7, where 0111₂ represents +7 and 1000₂ represents -8.
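
The two interpretations can be compared by decoding 4-bit patterns both ways; signed_4bit below is an illustrative sketch of the two’s-complement reading.

    def signed_4bit(pattern: int) -> int:
        """Read a 4-bit pattern as two's complement: subtract 16 if the sign bit is set."""
        return pattern - 16 if pattern & 0b1000 else pattern

    for pattern in (0b0000, 0b0111, 0b1000, 0b1111):
        print(f"{pattern:04b}  unsigned = {pattern:2d}  signed = {signed_4bit(pattern):2d}")
    # 0000  unsigned =  0  signed =  0
    # 0111  unsigned =  7  signed =  7
    # 1000  unsigned =  8  signed = -8
    # 1111  unsigned = 15  signed = -1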

Computers commonly use two’s complement representation to handle negative numbers because it simplifies arithmetic operations. To find the two’s complement of a binary number, all bits are first inverted and then 1 is added to the result. For example, to represent -5 in binary using 4 bits, we start with +5 (0101₂), invert the bits to get 1010₂, and then add 1 to obtain 1011₂, which represents -5. This method allows subtraction to be performed as addition; for example, 7 − 5 can be computed as 7 + (−5). In binary, 7 (0111₂) plus -5 (1011₂) equals 1 0010₂; discarding the carry out of the 4-bit word gives 0010₂, which is 2 in decimal. This efficiency is why two’s complement is widely used in modern computer systems.
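
The invert-and-add-1 rule, and subtraction performed as addition, can be sketched for a 4-bit word in Python; BITS, MASK, and twos_complement are illustrative names.

    BITS = 4
    MASK = (1 << BITS) - 1   # 0b1111 keeps results within 4 bits

    def twos_complement(n: int) -> int:
        """Negate a 4-bit value: invert all bits, then add 1 (sketch)."""
        return (~n + 1) & MASK

    neg5 = twos_complement(0b0101)      # 0101 -> invert -> 1010 -> add 1 -> 1011
    print(f"{neg5:04b}")                # 1011 represents -5

    # Subtraction as addition: 7 - 5 becomes 7 + (-5); the carry out of the word is discarded.
    result = (0b0111 + neg5) & MASK
    print(f"{result:04b} = {result}")   # 0010 = 2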

Number systems are also closely related to data representation. Characters, images, audio, and instructions are all encoded using binary numbers. For example, character encoding schemes like ASCII assign numerical values to letters and symbols. Similarly, machine instructions are represented in binary form for execution by the CPU.
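
As a brief illustration, Python's built-in ord() returns the ASCII code assigned to a character, which can then be displayed in binary or hexadecimal.

    for ch in "A", "a", "0":
        code = ord(ch)   # ASCII code of the character
        print(f"{ch!r} -> decimal {code}, binary {code:08b}, hex {code:02X}")
    # 'A' -> decimal 65, binary 01000001, hex 41
    # 'a' -> decimal 97, binary 01100001, hex 61
    # '0' -> decimal 48, binary 00110000, hex 30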

Therefore, number systems form the backbone of computer architecture. While humans interact primarily with decimal numbers, computers rely on binary for all internal operations. Octal and hexadecimal systems serve as convenient representations that bridge the gap between human understanding and machine processing. A solid understanding of number systems enables students to grasp more advanced topics such as assembly language, memory organization, and digital logic design.