In Oracle-speak, this would be a NUMBER(9, 0)
or anything that maps to a 32-bit (or even a 16- or 64-bit) integer.
CREATE TABLE T(C1 NUMBER, C2 NUMBER(9, 0) ); -- Unconstrained and constrained number
INSERT INTO T(C1, C2) VALUES (10, 10);
SELECT C1, C2 FROM T; -- C1 is NUMBER, C2 is NUMBER(9,0)
For the unconstrained column, Oracle assumes a precision and scale, and there appears to be no way to set them in a query short of modifying the table structure itself:
SELECT
CAST(C1 AS INTEGER) FAILURE1, -- Unconstrained number
TRUNC(C1) FAILURE2, -- Unconstrained number
ROUND(C1, 0) FAILURE3, -- Unconstrained number
CAST(ROUND(C1, 0) AS NUMBER(9, 0)) FAILURE4, -- Unconstrained number
TO_NUMBER(TO_CHAR(C1, '99')) FAILURE5 -- Overcomplicated and still fails
FROM T
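A quick sanity check against the data dictionary (assuming the table lives in your own schema) confirms that the unconstrained column simply has no declared precision or scale for anything downstream to work with:
SELECT COLUMN_NAME, DATA_TYPE, DATA_PRECISION, DATA_SCALE
FROM USER_TAB_COLUMNS
WHERE TABLE_NAME = 'T';
-- C1: NUMBER, DATA_PRECISION NULL, DATA_SCALE NULL
-- C2: NUMBER, DATA_PRECISION 9,    DATA_SCALE 0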
This makes authoring applications that work with Oracle more difficult, and it also makes them slower and less memory-efficient. ORMs like Stack Overflow's own Dapper, and the underlying drivers, have no idea what an unconstrained NUMBER might contain, so they map these columns to a type like System.Decimal rather than a simple integral type.
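One possible client-side workaround (not what I am after, but shown for completeness) is to accept the System.Decimal the driver hands back and narrow it yourself. A minimal sketch, assuming the column really only ever holds 32-bit-safe integers; ReadInt32FromUnconstrainedNumber is a hypothetical helper name:
// Hypothetical helper: narrow the Decimal that the driver returns for an
// unconstrained NUMBER down to an Int32 on the client side.
// decimal.ToInt32 throws OverflowException if the value does not fit,
// which is safer than silently truncating.
static int ReadInt32FromUnconstrainedNumber(System.Data.IDataReader reader, int ordinal)
{
    return decimal.ToInt32(reader.GetDecimal(ordinal));
}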
Oracle does have TO_BINARY_FLOAT() and TO_BINARY_DOUBLE(), which map to float and double respectively, but how can this be done with an integer?
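For comparison, this is what I mean for the floating-point case; a sketch, with the .NET types being what the managed driver reports for BINARY_FLOAT and BINARY_DOUBLE results in my environment:
SELECT
TO_BINARY_FLOAT(C1) F1, -- BINARY_FLOAT, surfaces as System.Single
TO_BINARY_DOUBLE(C1) D1 -- BINARY_DOUBLE, surfaces as System.Double
FROM T;
-- There is no TO_BINARY_INT()-style equivalent for integers.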
I put together a minimal example to show the problem.
In Oracle: CREATE TABLE my_schema.T(c1 NUMBER(9,0) NOT NULL, c2 NUMBER NOT NULL)
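(For the output shown below, assume a single row was inserted, for example:)
INSERT INTO my_schema.T(c1, c2) VALUES (100, 100);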
The following complete C# console app can be pasted over a new project's Program.cs to demonstrate:
using System.Data;
using Oracle.ManagedDataAccess.Client;
// Normally you want to use Dapper or EF and not the raw driver calls.
// The presence of the SQL and cstr in this file are for demonstration purposes only.
var connection = new OracleConnection();
connection.ConnectionString = "SOME CONNECTION STRING";
using var cmd = new OracleCommand();
cmd.CommandText = """
SELECT
c1,
c2,
CAST(C1 AS INTEGER) i1,
CAST(C2 AS INTEGER) i2,
TRUNC(C1) t1,
TRUNC(C2) t2,
ROUND(C1, 0) r1,
ROUND(C2, 0) r2,
CAST(ROUND(C1, 0) AS NUMBER(9, 0)) rc1, -- constrained to NUMBER(9,0)
CAST(ROUND(C2, 0) AS NUMBER(9, 0)) rc2, -- constrained to NUMBER(9,0)
CAST(C1 AS NUMBER(18, 0)) rc1, -- constrained to NUMBER(18,0); note the rc1/rc2 aliases repeat
CAST(C2 AS NUMBER(18, 0)) rc2 -- constrained to NUMBER(18,0); note the rc1/rc2 aliases repeat
FROM my_schema.T
""";
cmd.Connection = connection;
connection.Open();
using IDataReader reader = cmd.ExecuteReader();
while (reader.Read()) {
    for (var columnIndex = 0; columnIndex < reader.FieldCount; columnIndex++) {
        Type type = reader.GetFieldType(columnIndex);
        var val = reader.GetValue(columnIndex);
        var fieldName = reader.GetName(columnIndex);
        Console.WriteLine($"{fieldName}:{val}:{type}");
    }
}
This yields:
C1:100:System.Int32
C2:100:System.Decimal <-- Want this to map to Int32, as a NUMBER(9,0) would
I1:100:System.Decimal
I2:100:System.Decimal
T1:100:System.Decimal
T2:100:System.Decimal
R1:100:System.Decimal
R2:100:System.Decimal
RC1:100:System.Int32 <-- ✅
RC2:100:System.Int32 <-- ✅
RC1:100:System.Int64 <-- ✅
RC2:100:System.Int64 <-- ✅
Those last four rows, with Int32 and Int64, are what I need. Decimal is bad, unless we really do need a 128-bit struct type to store the value.
CAST() changes the outgoing data type. My original post showed it returning an unconstrained number, but apparently I was mistaken: my original test had a ROUND wrapping the CAST, which I did not expect to drop all information about precision and scale.
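In other words, something like this is what tripped me up; a sketch, with the .NET mappings consistent with what the driver reported in the test above:
SELECT
CAST(c2 AS NUMBER(9, 0)) ok, -- constrained: comes back as System.Int32
ROUND(CAST(c2 AS NUMBER(9, 0)), 0) oops -- ROUND strips the constraint: back to System.Decimal
FROM my_schema.T;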