In computer science, an integer is stored as a group of binary digits (bits), and the size of that group is not the same on every system. Because of this variability, the set of integer sizes available depends on the type of computer, and the hardware, together with the compiler, determines how an integer is laid out in memory. This scheme is known as integer representation. This article offers an in-depth classification and analysis of integer representation in computer science.
Integer Value and Representation
The value of an integer is the mathematical integer it stands for. In the source code of a program it typically appears as a sequence of decimal digits, optionally prefixed with a – or + sign. Some programming languages also allow alternative notations, most commonly hexadecimal or octal literals. The internal representation, by contrast, is the bit pattern actually stored in the computer’s memory.
Unlike mathematical integers, which have no minimum or maximum value, an internal representation uses a fixed number of bits and can therefore hold only a bounded range of values. The most common representation of a non-negative integer is simply a string of bits, using the binary numeral system.
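For a concrete illustration, here is a minimal C sketch (the variable names are purely illustrative) showing the same value written as a decimal, hexadecimal, and octal literal; all three produce the identical bit pattern in memory.

#include <stdio.h>

int main(void) {
    int decimal = 26;     /* decimal literal     */
    int hex     = 0x1A;   /* hexadecimal literal */
    int octal   = 032;    /* octal literal       */

    /* All three literals denote the same mathematical value. */
    printf("%d %d %d\n", decimal, hex, octal);   /* prints: 26 26 26 */
    return 0;
}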
The order of the memory bytes that store those bits (the byte order, or endianness) can differ between machines, but the precision of an integral type, that is, the number of bits it uses, stays the same. Value and representation are therefore distinct notions: the same value may be stored as different byte sequences on different systems.
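As a minimal sketch of how byte order can vary, the following C snippet stores one 32-bit value and prints its individual bytes; the output depends on whether the machine is little-endian or big-endian.

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t value = 0x11223344;
    unsigned char *bytes = (unsigned char *)&value;

    /* Little-endian machines print: 44 33 22 11
       Big-endian machines print:    11 22 33 44 */
    for (int i = 0; i < 4; i++)
        printf("%02x ", bytes[i]);
    printf("\n");
    return 0;
}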
Unsigned Integers
An unsigned integer in computer programming is a variable that can hold only non-negative numbers. The unsigned qualifier can be applied to most of the integral data types, including char, short, int, and long. Unsigned types hold zero and positive numbers, while signed types hold zero, positive, and negative numbers.
For example, a 32-bit unsigned integer ranges from 0 to 2^32 - 1 (4,294,967,295), while the signed version ranges from -2^31 (-2,147,483,648) to 2^31 - 1 (2,147,483,647). Both types can represent the same number of distinct values, 2^32; only the position of that range on the number line differs.
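These bounds can be checked with a short C sketch using the limits defined in <stdint.h>; the figures assume the usual 32-bit two's-complement layout.

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    /* Smallest and largest values representable in 32 bits. */
    printf("unsigned 32-bit: 0 to %" PRIu32 "\n", UINT32_MAX);
    printf("signed 32-bit: %" PRId32 " to %" PRId32 "\n", INT32_MIN, INT32_MAX);
    return 0;
}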
Unsigned integer overflow means the resulting value is out of this range. In that case the stored result wraps around: the value is divided by one greater than the largest number the type can hold, and only the remainder is kept.
For example, suppose we try to store the number 280 in a 1-byte unsigned integer, whose range is 0 to 255. One greater than the largest value of that type is 256.
Dividing 280 by 256 leaves a remainder of 24, so 24 is the value actually stored in the computer system.
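The same wrap-around can be reproduced in a short C sketch: storing 280 in an 8-bit unsigned variable keeps only the remainder modulo 256.

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t stored = (uint8_t)280;      /* 280 does not fit in 8 bits */
    printf("%u\n", (unsigned)stored);   /* prints: 24                 */
    printf("%u\n", 280u % 256u);        /* same result by modulo      */
    return 0;
}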
A few real-life examples of unsigned quantities include an entry in a multiplication table and the number of members in a family. Values such as 10 and 5 are stored in the computer system purely as binary notation, or bits, and they are stored in a fixed size, such as 4, 8, 16, or 32 bits. If a system represents these numbers in 8 bits, we can say it uses an 8-bit word size for them. This defines the relationship between a number, how it is represented in the computer system, and its value.
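As an illustrative C sketch, the helper function below (a made-up name, print_bits8) prints the 8-bit patterns that store the values 10 and 5.

#include <stdio.h>
#include <stdint.h>

/* Print the 8-bit pattern used to store a small unsigned value. */
static void print_bits8(uint8_t value) {
    for (int bit = 7; bit >= 0; bit--)
        putchar((value >> bit) & 1 ? '1' : '0');
    putchar('\n');
}

int main(void) {
    print_bits8(10);   /* prints: 00001010 */
    print_bits8(5);    /* prints: 00000101 */
    return 0;
}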
Signed Integers
Whenever we use negative numbers in day-to-day life, we write a negative sign with the number; for example, -8 is negative 8. We can likewise write +8 for positive 8, although the commonly accepted convention is to omit the positive prefix. A number’s sign is an attribute indicating whether it is positive, negative, or zero.
Keeping this definition in mind, any integer type that stores the sign as part of the number is called a signed integer. One bit, the sign bit, is reserved for this purpose, which makes it possible to hold positive and negative numbers as well as zero. In languages such as C and C++, the signed integer types other than int can also be written with an optional int suffix, for example short int or long long int.
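As a rough C sketch, assuming the usual two's-complement layout of int32_t, the snippet below extracts the sign bit of a few values.

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    int32_t values[] = { 8, -8, 0 };

    for (int i = 0; i < 3; i++) {
        /* The most significant bit acts as the sign bit:
           0 for zero or positive, 1 for negative.        */
        uint32_t bits = (uint32_t)values[i];
        printf("%" PRId32 " has sign bit %u\n", values[i], (unsigned)(bits >> 31));
    }
    return 0;
}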
There are three main ways to represent signed integers in a computer. These are as follows –
Sign and magnitude
One's complement
Two's complement
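To make the three schemes concrete, here is a sketch in C that builds the 8-bit pattern for -5 under each of them; the helper show and the choice of example value are illustrative assumptions, not part of any particular standard.

#include <stdio.h>
#include <stdint.h>

/* Print a labelled 8-bit pattern. */
static void show(const char *label, uint8_t v) {
    printf("%-18s", label);
    for (int bit = 7; bit >= 0; bit--)
        putchar((v >> bit) & 1 ? '1' : '0');
    putchar('\n');
}

int main(void) {
    uint8_t magnitude = 5;                                    /* |-5| = 00000101 */

    show("sign-magnitude:",   (uint8_t)(0x80 | magnitude));   /* 10000101 */
    show("one's complement:", (uint8_t)~magnitude);           /* 11111010 */
    show("two's complement:", (uint8_t)(~magnitude + 1));     /* 11111011 */
    return 0;
}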
Of the three methods above, the sign-magnitude format is conceptually the simplest. In this format the most significant bit is reserved as the sign bit, and the remaining bits hold the magnitude of the number. While the sign bit is 0, the number is treated as positive; once the sign bit becomes 1, the number is negative. The magnitude bits are interpreted the same way in both cases, so, for example, in an 8-bit sign-magnitude system 0000 1000 represents +8 and 1000 1000 represents -8.
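A short C sketch, using a hypothetical helper decode_sign_magnitude, shows how such an 8-bit sign-magnitude pattern can be read back into a value.

#include <stdio.h>
#include <stdint.h>

/* Interpret an 8-bit pattern as a sign-magnitude number. */
static int decode_sign_magnitude(uint8_t bits) {
    int magnitude = bits & 0x7F;                     /* lower 7 bits: magnitude */
    return (bits & 0x80) ? -magnitude : magnitude;   /* top bit: the sign       */
}

int main(void) {
    printf("%d\n", decode_sign_magnitude(0x08));   /* 0000 1000 -> prints  8 */
    printf("%d\n", decode_sign_magnitude(0x88));   /* 1000 1000 -> prints -8 */
    return 0;
}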
Conclusion
Concerning the representation of integers in computer programming, there are two main classifications: signed integers and unsigned integers. A signed integer reserves one bit for the sign and can hold positive and negative numbers as well as zero, while an unsigned integer uses all of its bits for the magnitude and can hold only zero and positive numbers. There are therefore striking differences between the two classifications and how they are represented in a computer system.