Title: | Tools for working with SQLite in R |
---|---|
Description: | Tools for working with SQLite in R, in particular support for simple YAML schemas. |
Authors: | Sebastian Kranz |
Maintainer: | Sebastian Kranz <[email protected]> |
License: | GPL >= 2.0 |
Version: | 0.3 |
Built: | 2024-11-03 04:04:42 UTC |
Source: | https://github.com/skranz/dbmisc |
Convert data from a database table to R format
convert.db.to.r( vals, rclass = schema$rclass, schema = NULL, as.data.frame = is.data.frame(vals), null.as.na = TRUE, origin = "1970-01-01" )
vals |
the values loaded from the database table |
rclass |
the R classes of the table columns; by default extracted from the schema |
schema |
a table schema that can be used to convert values |
null.as.na |
shall NULL values be converted to NA values? |
origin |
the origin date for DATE and DATETIME conversion |
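The conversion can be sketched as follows; the column names and the shape of the rclass vector are hypothetical assumptions about the schema$rclass format:

```r
library(dbmisc)  # assumed to be installed

# Values as they come out of SQLite: a DATE stored as integer day count
vals <- list(userid = 1L, born = 17897L)

# Hypothetical rclass specification mapping column names to R classes
rclass <- c(userid = "integer", born = "Date")

r.vals <- convert.db.to.r(vals, rclass = rclass, origin = "1970-01-01")
# r.vals$born should now be a Date counted from the origin "1970-01-01"
```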
Convert data from R format to a database table format
convert.r.to.db( vals, rclass = schema$rclass, schema = NULL, null.as.na = TRUE, origin = "1970-01-01", add.missing = TRUE )
vals |
the values loaded from the database table |
rclass |
the R classes of the table columns; by default extracted from the schema |
schema |
a table schema that can be used to convert values |
null.as.na |
shall NULL values be converted to NA values? |
origin |
the origin date for DATE and DATETIME conversion |
The schema is added as an attribute to the connection object and is automatically used by dbInsert, dbGet, and dbUpdate.
dbConnectSQLiteWithSchema( dbname, schema.file = paste0(tools::file_path_sans_ext(dbname), ".yaml"), schema = load.and.init.schemas(schema.file) )
dbname |
Filename of the SQLite database |
schema.file |
YAML file that contains the database schema. By default it is assumed to be in the same folder as the database with the same name but the extension ".yaml" |
schema |
If you have already loaded and initialized the schemas manually, e.g. via load.and.init.schemas, you can pass them here directly |
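A minimal sketch of opening a database with its schema attached; the file name and table name are hypothetical:

```r
library(dbmisc)

# Assumes "mydb.yaml" lies next to "mydb.sqlite" (the default schema.file)
db <- dbConnectSQLiteWithSchema("mydb.sqlite")

# The attached schema is now used automatically, e.g. by dbGet
dat <- dbGet(db, "user")
```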
Create database table and possible indices from a simple yaml schema
dbCreateSchemaTable( db, table, schema = schemas[[table]], schemas = get.db.schemas(db), schema.yaml = NULL, schema.file = NULL, overwrite = update, silent = FALSE, update = TRUE, verbose = 1 )
db |
dbi database connection |
table |
name of the table |
schema |
the table schema as an R list |
schemas |
schemas as R list |
schema.yaml |
alternatively a schema as yaml text |
schema.file |
alternatively a file name of a schema yaml file |
overwrite |
shall existing tables be overwritten? |
silent |
if TRUE don't show messages |
update |
shall old data be copied from existing tables? |
Create or update database tables and possible indices from a simple yaml schema
dbCreateSchemaTables( db, schemas = get.db.schemas(db), schema.yaml = NULL, schema.file = NULL, overwrite = update, silent = FALSE, update = TRUE, verbose = 1 )
db |
dbi database connection |
schemas |
schemas as R list |
schema.yaml |
alternatively a schema as yaml text |
schema.file |
alternatively a file name of a schema yaml file |
overwrite |
shall existing tables be overwritten? |
silent |
if TRUE don't show messages |
update |
if TRUE (default) copy old data from existing tables. |
Create or update a SQLite database from a schema file
dbCreateSQLiteFromSchema( schema.file, schema.dir = dirname(schema.file), db.name = NULL, db.dir = schema.dir, update = TRUE, verbose = 1 )
schema.file |
the dbmisc schema file in yaml format |
schema.dir |
the directory of the schema file (if schema.file does not contain a path) |
db.name |
the name of the database file |
db.dir |
the directory of the database file, by default the schema directory |
update |
if TRUE copy and update the existing data in the tables. If FALSE just generate empty tables. |
verbose |
if 0, don't show what is done; if 1 or larger, show most of the executed SQL commands. |
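A sketch of creating a database from a yaml schema. The yaml layout below (table name, then a table block of column types plus optional indices) is an assumption about the dbmisc schema format, and all names are hypothetical:

```r
library(dbmisc)

# Hypothetical schema: one table "user" with two columns and an index
schema.yaml <- '
user:
  table:
    userid: INTEGER
    name: TEXT
  index:
    - userid
'
writeLines(schema.yaml, "user.yaml")

# Creates (or updates) user.sqlite in the same directory as the schema file
dbCreateSQLiteFromSchema("user.yaml", db.name = "user.sqlite", update = TRUE)
```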
Delete row(s) from table
dbDelete( db, table, params, sql = NULL, run = TRUE, log.dir = NULL, do.log = !is.null(log.dir), user = NA, where.in = FALSE )
db |
dbi database connection |
table |
name of the table |
params |
named list of values for key fields that identify the rows to be deleted |
sql |
optionally, a parameterized SQL string |
run |
if FALSE, only return the parameterized SQL string |
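A sketch of deleting rows by key values; the connection db, the table, and the field name are hypothetical:

```r
# Delete all rows of "user" whose userid equals 5
dbDelete(db, "user", params = list(userid = 5))

# With run = FALSE only the parameterized SQL string is returned
sql <- dbDelete(db, "user", params = list(userid = 5), run = FALSE)
```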
Get rows from a table
dbGet( db, table = NULL, params = NULL, sql = NULL, fields = NULL, joinby = NULL, jointype = c("inner", "left", "right")[1], run = TRUE, schema = if (length(table) == 1) schemas[[table]] else NULL, schemas = get.db.schemas(db), rclass = schema$rclass, convert = !is.null(rclass), convert.param = FALSE, orderby = NULL, null.as.na = TRUE, origin = "1970-01-01", where.in = FALSE, where.sql = NULL, empty.as.null = FALSE, n = -1 )
db |
dbi database connection |
table |
name of the table. If you specify more than one table, the later tables will be joined. You should then also specify the joinby argument. |
params |
named list of values for key fields. If you don't use a custom SQL statement, the list will be used to construct a WHERE clause. |
sql |
optionally, a parameterized custom SQL string. It can contain named parameters that are passed via params, e.g. select * from mytable where name = :myname. To avoid SQL injection you should pass all values that can be entered by a user as such parameters, or make sure that you escape them. |
fields |
If not NULL, can be used to specify the fields that shall be selected, as a character vector. For joined tables, you must enter fields in the format "tablename.field". |
joinby |
If you specify more than one table, the later tables will be joined by the variables named here, given as a character vector. |
jointype |
The type of the join if you specify more than one table: "inner" (default), "left", or "right". |
run |
if FALSE only return parametrized SQL string |
schema |
a table schema that can be used to convert values |
rclass |
the R classes of the table columns; by default extracted from the schema |
convert |
if rclass is given shall results automatically be converted to these classes? |
orderby |
names of the columns the results shall be ordered by, as a character vector. Add "DESC" or "ASC" after a column name to sort in descending or ascending order. |
null.as.na |
shall NULL values be converted to NA values? |
origin |
the origin date for DATE and DATETIME conversion |
where.in |
Set TRUE if your params contain sets and therefore a WHERE IN clause shall be generated. |
where.sql |
An optional SQL code just for the WHERE clause. Can be used if some parameters will be checked with inequality. |
empty.as.null |
if TRUE return just NULL if the query returns zero rows. |
n |
The maximum number of rows that shall be fetched. If n = -1 (the default), all rows are fetched. |
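Typical calls can be sketched as follows; the connection db, the table, and all field names are hypothetical:

```r
# params builds a WHERE clause: here WHERE city = :city
dat <- dbGet(db, "user",
  params  = list(city = "Berlin"),
  orderby = c("name ASC"))

# Alternatively pass a custom parameterized SQL statement
dat <- dbGet(db,
  sql    = "select * from user where age >= :minage",
  params = list(minage = 18))
```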
If the function is called again with the same parameters, it checks whether the database was changed in between by looking at the time stamp of the log file. If there were no changes, the values are restored from memory; otherwise the data is loaded again from the database.
dbGetMemoise( db, table, params = NULL, schema = schemas[[table]], schemas = get.db.schemas(db), log.dir = NULL, refetch.if.changed = !is.null(log.dir), empty.as.null = FALSE, ... )
If refetch.if.changed = FALSE (default if no log.dir is provided), always use the data from memory.
Insert row(s) into table
dbInsert( db, table = NULL, vals, schema = schemas[[table]], schemas = get.db.schemas(db), sql = NULL, run = TRUE, mode = c("insert", "replace")[1], add.missing.cols = TRUE, rclass = schema$rclass, convert = !is.null(rclass), primary.key = schema$primary_key, get.key = FALSE, null.as.na = TRUE, log.dir = NULL, do.log = !is.null(log.dir), user = NA )
db |
dbi database connection |
table |
name of the table |
vals |
named list of values to be inserted |
schema |
a table schema that can be used to convert values |
sql |
optionally, a parameterized SQL string |
run |
if FALSE, only return the parameterized SQL string |
mode |
"insert" or "replace", should have no effect so far |
add.missing.cols |
if TRUE (default) and a schema is provided, then automatically add database columns that are missing in vals |
rclass |
the R classes of the table columns; by default extracted from the schema |
convert |
if rclass is given shall results automatically be converted to these classes? |
primary.key |
name of the primary key column (if the table has one) |
get.key |
if TRUE return the created primary key value |
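A sketch of an insert; names are hypothetical and db is assumed to be a connection with an attached schema:

```r
# Insert one row; with get.key = TRUE the created primary key is returned
new.id <- dbInsert(db, "user",
  vals    = list(name = "Smith", city = "Berlin"),
  get.key = TRUE)
```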
Get a data frame with column information for a database table
dbTableCols(db, table)
db |
dbi database connection |
table |
name of the table |
Update a row in a database table
dbUpdate( db, table, vals, where = NULL, schema = schemas[[table]], schemas = get.db.schemas(db), sql = NULL, run = TRUE, rclass = schema$rclass, convert = !is.null(rclass), null.as.na = TRUE, log.dir = NULL, do.log = !is.null(log.dir), user = NA, where.in = FALSE )
db |
dbi database connection |
table |
name of the table |
vals |
named list of values to be inserted |
where |
named list of key values that specify which rows to update |
schema |
a schema as R list, can be used to automatically convert types |
sql |
optionally, a parameterized SQL string |
run |
if FALSE, only return the parameterized SQL string |
rclass |
the R classes of the table columns; by default extracted from the schema |
convert |
if rclass is given shall results automatically be converted to these classes? |
null.as.na |
shall NULL values be converted to NA values? |
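A sketch of an update; the connection db, the table, and the field names are hypothetical:

```r
# Set the city of the user with userid 5
dbUpdate(db, "user",
  vals  = list(city = "Hamburg"),
  where = list(userid = 5))
```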
Creates an example data frame from a database schema table using provided column values and default values specified in schema
empty.df.from.schema(.schema, .nrows = 1, ..., .use.defaults = TRUE)
Creates an example row from a database schema table using provided column values and default values specified in schema
empty.row.from.schema(.schema, ..., .use.defaults = TRUE)
Extract schemas from a database connection
get.db.schema(db, warn.null = TRUE)
Create rclasses of each column and primary keys
init.schema(schema, name = NULL)
schema |
the table schema as an R list |
name |
of the table |
Load and init database table schemas from yaml file
load.and.init.schemas(file = NULL, yaml = readLines(file, warn = FALSE))
file |
file name |
yaml |
yaml as text |
Log a command that changes a database
logDBcommand( type, sql = "", user = "NA", log.dir = NULL, table = NULL, do.log = TRUE )
Get a vector of R classes of the database columns described in a schema
schema.r.classes(schema)
schema |
the schema |
The output is printed via cat and copied to the clipboard. It can be used as a template for the .yaml schema file
schema.template(li, name = "mytable", toClipboard = TRUE)
li |
The R list for which the schema shall be created |
name |
optionally, a name for the table |
toClipboard |
shall the created text be copied to the clipboard? |
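A sketch of generating a schema template from an example list; the column values are hypothetical:

```r
library(dbmisc)

li <- list(userid = 1L, name = "Smith", born = as.Date("1980-01-01"))

# Prints a yaml template via cat; toClipboard = FALSE avoids clipboard access
schema.template(li, name = "user", toClipboard = FALSE)
```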
Set schemas as hidden attribute to a data base connection db
set.db.schema(db, schemas = NULL, schema.file = NULL)
Create a parametrized or escaped SQL WHERE clause from the provided parameters.
sql.where.code( db = NULL, params, where.in = FALSE, parametrized = !where.in, add.where = TRUE )
db |
a database connection needed for correct escaping via glue_sql |
params |
a named list of parameters; it is assumed that the database fields have the same names and must equal the provided values. |
where.in |
Set to TRUE if a member of params can be a vector; then all rows that match some element are returned. FALSE by default to generate more compact code. |
add.where |
if TRUE, the generated clause starts with WHERE |
parametrized |
shall the generated code use SQL parameters? |
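A sketch of generating a WHERE clause; the exact SQL shown in the comments is an assumption about the generated output:

```r
library(dbmisc)

# Parameterized code: no db connection needed since nothing is escaped
sql.where.code(params = list(userid = 5, city = "Berlin"))
# presumably something like: WHERE userid = :userid AND city = :city

# With where.in = TRUE, vector parameters produce a WHERE ... IN clause;
# a db connection is then needed for escaping via glue_sql
# sql.where.code(db, params = list(userid = c(1, 2, 3)), where.in = TRUE)
```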
Convert an R object to a date object that can be used in a WHERE clause.
to.db.date(val, origin = "1970-01-01")
Convert an R object to a datetime object that can be used in a WHERE clause.
to.db.datetime(val, origin = "1970-01-01")
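Both helpers can be sketched together; this is useful when building a manual WHERE clause against DATE or DATETIME columns:

```r
library(dbmisc)

d  <- to.db.date(as.Date("2020-05-01"))
dt <- to.db.datetime(as.POSIXct("2020-05-01 12:30:00", tz = "UTC"))
```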