The SQL92 standard defines fundamental data types that mold the types you'll find in the various SQL-based database implementations. In the article “SQL basics: String data types,” we gave you a detailed description of the standard SQL92 definition of string types. Now, let's take a closer look at number data types.

By understanding the underlying constraints that data type definitions follow, you'll be better equipped to use and migrate between various database implementations. This article will give you an overview that you can use in conjunction with your database manufacturer's documentation.

Of the four data types—string, number, datetime, and interval—numbers have the most available data types and the greatest constraints on implementation. Numbers also pose the most serious risk of degraded data during ports between differing database implementations. Design variances between implementations—say the same type has different default size limits in Oracle and SQL Server—can cause the porting process to truncate your numbers, altering their value. For this reason, you must explore implementation-specific differences before porting, or risk compromising the accuracy of your data.
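To make the risk concrete, here's a small Python sketch using the standard decimal module; the column sizes are hypothetical, but the effect is exactly what a lossy port produces:

```python
from decimal import Decimal, ROUND_DOWN

# A value stored as NUMERIC(7,4) in the source database...
price = Decimal("123.4567")

# ...ported into a column that allows only two decimal places.
ported = price.quantize(Decimal("0.01"), rounding=ROUND_DOWN)
print(ported)  # 123.45 -- the trailing digits are silently lost
```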

With this caveat in mind, let's look at the SQL92-standard numeric data types.

The basics

Numeric types collectively are called *numbers*. All numbers have a precision value, which indicates the total number of significant digits. Numbers can also have an optional scale value, which denotes the position of the least significant digit to the right of the decimal point. For example, the number 1234.56 has a precision of 6, a scale of 2, and would be defined as NUMERIC(6,2).
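To see precision and scale concretely, here's a short Python sketch using the standard decimal module, which models exact decimal values much as SQL's exact numeric types do:

```python
from decimal import Decimal

n = Decimal("1234.56")
sign, digits, exponent = n.as_tuple()

precision = len(digits)  # total number of significant digits: 6
scale = -exponent        # digits to the right of the decimal point: 2
print(precision, scale)  # 6 2
```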

Each database implementation has rules that determine how numbers are rounded or truncated. SQL92 provides built-in operations, such as plus, minus, times, and divide, as well as functions to determine length and other attributes required for value handling. All number data types are mutually comparable and mutually assignable. Implementations vary, but the result of an arithmetic operation on mixed number types generally takes the data type with the greatest precision.
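A rough Python analogy to this behavior, again with the decimal module (the mapping of Decimal to NUMERIC and int to INTEGER is our illustration, not part of the standard):

```python
from decimal import Decimal

a = Decimal("1234.56")  # think NUMERIC(6,2)
b = 100                 # think INTEGER

# Mixed numeric types are mutually comparable and assignable...
print(a > b)  # True

# ...and the result of an arithmetic operation keeps the greater precision.
print(a + b)  # Decimal('1334.56')
```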

NUMERIC

Usage: NUMERIC(precision, scale)

- Is an exact numeric type, meaning that it stores a literal representation of the number’s value (Rounding or truncating of this number to meet the specified precision and scale is dictated by a predefined formula.)
- Uses decimal precision; rounding and truncation occur at the position of the least significant digit, as fixed by the scale value
- Has a total length equal to the defined precision, plus 1 if the scale is greater than 0 (for the decimal point)
- Has a decimal portion exactly the size dictated by the scale
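The rounding behavior can be sketched in Python with the decimal module; the to_numeric helper below is our own illustration, and half-up rounding is only one of the rules an implementation might choose:

```python
from decimal import Decimal, ROUND_HALF_UP

def to_numeric(value, precision, scale):
    """Round a value to the given scale and check it fits NUMERIC(precision, scale)."""
    quantum = Decimal(1).scaleb(-scale)  # e.g. scale 2 -> Decimal('0.01')
    q = Decimal(value).quantize(quantum, rounding=ROUND_HALF_UP)
    if len(q.as_tuple().digits) > precision:
        raise ValueError(f"{value} does not fit NUMERIC({precision},{scale})")
    return q

print(to_numeric("1234.567", 6, 2))  # 1234.57 -- rounded to scale 2
```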

DECIMAL | DEC

Usage: DECIMAL(precision, scale) | DEC(precision, scale)

- Is an exact numeric type
- Uses decimal precision
- Has a total length equal to the defined precision, plus 1 if the scale is greater than 0
- Has a decimal portion at least the size indicated by the scale but expandable up to a limit set by the manufacturer

INTEGER | INT

Usage: INTEGER | INT

- Is an exact numeric type
- Uses decimal precision or binary precision, which is based on the total number of binary bits used to represent the value (The choice is implementation-defined but must match the choice made for SMALLINT.)
- Has a scale of 0 always
- Has a minimum and maximum precision, defined by the manufacturer
- May have a vendor-supplied default value for the precision, if no explicit value is specified
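If the implementation uses binary precision, the type's range follows directly from the number of bits. A quick Python sketch (the 16- and 32-bit widths are typical vendor choices, not SQL92 requirements):

```python
# Range of a signed integer stored in a given number of binary bits.
def signed_range(bits):
    return -(2 ** (bits - 1)), 2 ** (bits - 1) - 1

print(signed_range(16))  # (-32768, 32767) -- a typical SMALLINT
print(signed_range(32))  # (-2147483648, 2147483647) -- a typical INTEGER
```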

SMALLINT

Usage: SMALLINT

- Is an exact numeric type
- Has the same rounding implementation as INTEGER (decimal or binary precision)
- Has a scale of 0 always
- Has a maximum allowed precision that's less than or equal to the maximum precision of INTEGER

FLOAT

Usage: FLOAT(precision)

- Is an approximate numeric type, meaning that it stores the value in exponential (mantissa-and-exponent) format, for example, 1.23e-45 (Rounding and truncating for this type are defined largely by the manufacturer.)
- Uses binary precision when rounding
- The precision value specifies the minimum binary precision to use, up to the manufacturer’s maximum
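The practical consequence of binary (approximate) precision is that many decimal fractions cannot be stored exactly. Python's built-in float is a binary floating-point type, so it shows the same effect:

```python
from decimal import Decimal

# Approximate (binary) arithmetic accumulates representation error...
x = 0.1 + 0.2
print(x)         # 0.30000000000000004
print(x == 0.3)  # False

# ...while an exact decimal type keeps the literal value.
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```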

REAL

Usage: REAL

- Is an approximate numeric type
- Uses binary precision, based on the manufacturer’s maximum limit
- Has a default precision that must be less than that of DOUBLE PRECISION

DOUBLE PRECISION

Usage: DOUBLE PRECISION

- Is an approximate numeric type
- Uses binary precision, based on the manufacturer’s maximum limit
- Has a default precision that must be greater than that of REAL
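The REAL/DOUBLE PRECISION relationship mirrors the familiar single- versus double-precision floating-point formats. Here's a Python sketch using the struct module (mapping REAL to a 32-bit float and DOUBLE PRECISION to a 64-bit float is typical, not mandated by SQL92):

```python
import struct

value = 1.1

# Round-trip the value through 32-bit ("f") and 64-bit ("d") storage.
as_single = struct.unpack("f", struct.pack("f", value))[0]
as_double = struct.unpack("d", struct.pack("d", value))[0]

print(as_single)  # 1.100000023841858 -- roughly 7 decimal digits survive
print(as_double)  # 1.1               -- roughly 15-16 decimal digits survive
```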

Number theory

Database manufacturers build upon these foundation data types to create the data types that you actually use. For numbers, this can include types of the same name, such as INT or REAL, or new types that provide particular performance or functionality that meets a database deployment’s specific needs.

In our next article, we’ll take a look at the datetime and interval data types. In the meantime, please post your comments, questions, or feedback in the discussion below, or send our editors an e-mail if you have a topic idea for the SQL basics series.