I have encountered a problem with the dbWriteTable function from the DBI package (using the ROracle driver).
I am trying to export a data.frame to an Oracle server, where the data.frame has a character column with mixed encoding marks ("unknown" and "UTF-8"). If I export all rows, the accented (UTF-8) characters end up corrupted on the Oracle server, whereas if I export only the rows whose strings are marked UTF-8, the characters are stored correctly.
It seems to me that dbWriteTable downgrades the encoding to the lowest level found in the column. Is this a bug in dbWriteTable, or have I missed setting the encoding properly? Does anybody know a workaround?
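To illustrate what I mean by mixed encoding marks within one vector (base R leaves pure-ASCII strings marked "unknown", while strings containing \u escapes are always marked "UTF-8"):
x <- c("Urszula", "Ur\u00e1nia")
Encoding(x)
# [1] "unknown" "UTF-8"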
Here is code that reproduces the problem:
# Library
library("ROracle")
# Parameters
oracle_username <- "USER"
oracle_password <- "PASSWORD"
oracle_table_name <- "PLEASE_GIVE_A_NAME"
# Oracle connection
connection_string <- "THIS_IS_COMPANY_SPECIFIC"
drv <- DBI::dbDriver("Oracle")
# Setting up the connection
con <- DBI::dbConnect(drv, username = oracle_username, password = oracle_password, dbname = connection_string)
# Table to be exported to Oracle
dt_example <- data.frame(STRING = c("Unknown Urszula", "UTF-8 Uránia"),
                         stringsAsFactors = FALSE)  # keep STRING a character vector, not a factor
# Checking the encoding
Encoding(dt_example$STRING)
# Exporting all rows
try(DBI::dbExecute(con, paste("drop table", toupper(oracle_table_name))), silent = TRUE)  # ignore the error if the table does not exist yet
DBI::dbWriteTable(con, name = toupper(oracle_table_name), value = dt_example, overwrite = TRUE, append = FALSE)
# --> The result on the server is Unknown Urszula and UTF-8 Ur??nia
# Exporting only the row with UTF-8 encoding
try(DBI::dbExecute(con, paste("drop table", toupper(oracle_table_name))), silent = TRUE)  # ignore the error if the table does not exist yet
DBI::dbWriteTable(con, name = toupper(oracle_table_name), value = dt_example[2, ], overwrite = TRUE, append = FALSE)
# --> The result on the server is UTF-8 Uránia
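One workaround I can think of is to force a single declared encoding on the whole column before exporting, along these lines (a sketch, not verified against the server), but I would still like to understand why the mixed marks break the export:
# Sketch: normalize the column to UTF-8 before the export. Note that
# pure-ASCII strings keep the "unknown" mark even after enc2utf8(),
# which should be harmless because ASCII bytes are identical either way.
dt_example$STRING <- enc2utf8(dt_example$STRING)
DBI::dbWriteTable(con, name = toupper(oracle_table_name), value = dt_example,
                  overwrite = TRUE, append = FALSE)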
I have the following system / environment:
> R.version
_
platform x86_64-suse-linux-gnu
arch x86_64
os linux-gnu
system x86_64, linux-gnu
status
major 3
minor 5.0
year 2018
month 04
day 23
svn rev 74626
language R
version.string R version 3.5.0 (2018-04-23)
> Sys.getlocale()
[1] "LC_CTYPE=en_US.UTF-8;LC_NUMERIC=C;LC_TIME=en_US.UTF-8;LC_COLLATE=en_US.UTF-8;LC_MONETARY=en_US.UTF-8;LC_MESSAGES=en_US.UTF-8;LC_PAPER=en_US.UTF-8;LC_NAME=C;LC_ADDRESS=C;LC_TELEPHONE=C;LC_MEASUREMENT=en_US.UTF-8;LC_IDENTIFICATION=C"
> Sys.getenv(c("LANG", "ORACLE_HOME"))
LANG ORACLE_HOME
"en_US.UTF-8" "/usr/lib/oracle/12.1/client64/lib"
The NLS_LANG parameter is set to AMERICAN_AMERICA.EE8ISO8859P2 on the operating system.
The Oracle server has the following NLS setup:
select * from V$NLS_PARAMETERS;
NLS_LANGUAGE AMERICAN
NLS_TERRITORY AMERICA
NLS_CURRENCY $
NLS_ISO_CURRENCY AMERICA
NLS_NUMERIC_CHARACTERS .,
NLS_CALENDAR GREGORIAN
NLS_DATE_FORMAT DD-MON-RR
NLS_DATE_LANGUAGE AMERICAN
NLS_CHARACTERSET EE8ISO8859P2
NLS_SORT BINARY
NLS_TIME_FORMAT HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
NLS_DUAL_CURRENCY $
NLS_NCHAR_CHARACTERSET AL16UTF16
NLS_COMP BINARY
NLS_LENGTH_SEMANTICS BYTE
NLS_NCHAR_CONV_EXCP FALSE
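I also notice that the client-side NLS_LANG (EE8ISO8859P2) does not match the UTF-8 locale of the R session. If that mismatch is the culprit, I assume NLS_LANG would have to be changed before ROracle initializes the Oracle client, i.e. at the top of a fresh session, roughly like this (AL32UTF8 is my guess at a matching character set):
# Assumption: the Oracle client reads NLS_LANG when it is initialized,
# so this must run before the driver is loaded in a fresh R session.
Sys.setenv(NLS_LANG = "AMERICAN_AMERICA.AL32UTF8")  # match the UTF-8 locale
library("ROracle")
drv <- DBI::dbDriver("Oracle")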
Any help is appreciated! Thank you in advance!
I had a similar problem: a data.frame with Spanish letters like the ñ character. When I used dbWriteTable and ROracle to create a table, the encoding of these characters was not correct.
If I set Encoding(myData[[col]]) <- "UTF-8", the data looks fine in R but wrong in Oracle, so I transformed the character columns of the data.frame to the native encoding and then to latin1, and now they look correct in Oracle.
for (col in colnames(myData)) {
  # only character columns need re-encoding; skip numeric and other columns
  if (!is.character(myData[[col]])) next
  Encoding(myData[[col]]) <- "UTF-8"           # declare the bytes as UTF-8
  myData[[col]] <- enc2native(myData[[col]])   # convert to the native encoding
  Encoding(myData[[col]]) <- "latin1"          # re-mark as latin1 for Oracle
}
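After the loop I write the re-encoded data.frame as usual (the table name here is just a placeholder):
# Write the re-encoded data.frame; "MY_TABLE" is a placeholder name.
DBI::dbWriteTable(con, name = "MY_TABLE", value = myData, overwrite = TRUE)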
For context, my Oracle database encoding is:
SELECT * FROM NLS_DATABASE_PARAMETERS;
NLS_CHARACTERSET = WE8ISO8859P1
And I have a .Renviron file with this entry:
NLS_LANG=".WE8ISO8859P1"
So, when I use dbGetQuery I get the correct characters.
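For example, a quick round trip to confirm (again with the placeholder table name):
# Read the table back and inspect the accented characters.
res <- DBI::dbGetQuery(con, "SELECT * FROM MY_TABLE")
print(res)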