Files
Database Design
Many dictionaries have a long and storied history, and began their
days with ISAM files. As object based drivers are primarily SQL based
it's worth stating some design tips which should be applied to
existing dictionaries for best functionality and performance;
| Tip | Context |
| All Tables need a Primary Key | Everywhere. Modern systems require primary keys, and the object based drivers assume you have one. |
| Ideally (but not required), the Primary key should have one component. | Where possible the primary key should have a single component. This is not a requirement, but will improve performance. |
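For example, a minimal sketch of a table declaration that follows both tips (the table and field names here are illustrative; the MSSQL2 driver is just one of the object based drivers);
Customers            FILE,DRIVER('MSSQL2'),PRE(Cus),NAME('Customers')
PrimaryKey             KEY(Cus:Id),PRIMARY        ! single-component primary key
Record                 RECORD
Id                       LONG                     ! surrogate identifier
Name                     STRING(60)
                       END
                     END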
Dictionaries coming from ISAM roots also tend to make use of features
which are common in ISAM files, but generally have been frowned upon,
or have not been available in the SQL context. Some of these
limitations have now been removed.
| Tip | Context |
| Arrays | Object based drivers support arrays, storing the data either in the native array format (if the database supports it) or in JSON if not. |
| Exclude Null Keys | Previously not supported in SQL databases, now generally supported. |
| Giving keys unique external names | Used to be a requirement, but now not necessary (and possibly hurtful) in object based drivers. |
| STRING versus CSTRING | There was a perception that STRINGs in the database needed to be converted to CSTRING. This is not true for object based drivers. |
| MEMOs and BLOBs | Memos are supported by object based drivers. BLOBs are supported by all SQL drivers. |
| Record Length |
In TopSpeed the maximum length of a single record is 15K, not including MEMO and BLOB fields.
In MS SQL the maximum length is 8K, however VARCHAR and TEXT fields can be excluded from this calculation.
In PostgreSQL the maximum length is 8K[1], however VARCHAR, BYTEA and TEXT fields can be excluded from this calculation.
In SQLite (and therefore also in InMemory) the default size is 4K, but is configurable up to 64K. BLOB and TEXT are not included in the calculation.
Firebird defaults to 8K, but can be extended to 16K at the time of database creation. As with the others, BLOB and TEXT fields are not included in this limit.
In summary, keeping the record under 8K in size is generally a good idea. However it's not a hard limit, as large strings (VARCHAR fields) are usually not included in this calculation. MEMO and BLOB fields are also not included.
[1] Technically the PostgreSQL value can be changed, but only by recompiling the actual server. |
| ULONG | In traditional drivers the ULONG data type mapped to an INT in the database. Since an INT is a signed integer, this isn't a terribly good mapping, but will suffice for "storing" a value. (The value will appear signed if viewed using an external tool.) This leads into a larger discussion about data types below. |
Data Types
Clarion has a very rich set of data types. SQL also has a very rich
set of types. Problems occur when Clarion has a datatype which is
not directly supported in SQL, and vice versa. In truth, everything
will work well. The only time an issue occurs is when other tools
want to read and write the same database. Where "good" types have
been chosen there is no room for confusion; however when "bad"
types are used there can easily be a mismatch between the
program and the data in the database.
For example, in SQLite there are no boundaries on an INTEGER type.
So when a Clarion type of say SHORT is used (which has a range of
-32768 to + 32767) the program is by implication limited to those
values. However another program may input a number outside that
range into the database. To guard against that, CHECK constraints
can be added to the field, but these in turn have a slight performance
implication.
With a database like SQLite there are very few mitigations you can
use, but fortunately SQLite data is the least likely to be shared
with other programs. Other database engines (like MS SQL etc.) are
much more likely to be shared, and data-range mismatches are much
more likely to be a problem.
Ideally, your Dictionary (and hence your program) would limit itself
to using types that are universally supported by SQL engines, and
which have an exact matching counterpart in SQL. This set of types
includes;
| Clarion Type | SQL Type |
| Byte | TinyInt |
| Long | Integer |
| String(n) | CHAR(n) |
| CString(n) | VARCHAR(n) |
| Decimal(p,s), PDecimal(p,s) | Numeric(p,s) |
| Real | Double |
| Sreal | Real |
| Date | Date |
| Time | Time |
| Blob(Binary) | VARBINARY(max) |
| Blob(Text) | VARCHAR(max) |
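For example, a sketch of a record declaration restricted to these well-matched types (the field names are illustrative; the SQL equivalents from the table above are shown as comments);
Record                 RECORD
Id                       LONG            ! Integer
Code                     STRING(10)      ! CHAR(10)
Name                     CSTRING(61)     ! VARCHAR
Price                    DECIMAL(11,2)   ! Numeric(11,2)
Weight                   REAL            ! Double
OrderDate                DATE            ! Date
OrderTime                TIME            ! Time
                       END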
Once you move outside this list, then the driver will need to make a
conversion to map the Clarion type onto the most suitable SQL type.
If your program is creating, and maintaining the database then this
is not a problem (for your program). It only becomes a problem if
other programs also write to the database. Some data types present
more of a problem than others;
| Clarion Type | SQL Type | Comment |
| Short | Integer | A Short is limited to 16 bits, but an Integer can handle 32 bit numbers. |
| UShort | Integer | A UShort is limited to 16 bits (0-65535) but an Integer can handle the full range of 32 bit numbers. |
| ULong | | |
Arrays
None of the traditional SQL drivers support arrays. Object based
drivers however do support arrays.
There are some limitations;
- Arrays cannot be used in Keys, and (generally speaking) are not
easy to use in FILTER or ORDER
attributes.
- Arrays of the GROUP, MEMO
and BLOB data types are not
supported.
- Individual elements in the array cannot be set to NULL.
(The array itself can be null, but no element inside the array can
be null.)
Changing the size of an array in the dictionary may, or may not,
trigger a table update (ie Error 47). Some databases set arrays to
specific sizes, others do not.
Storage of array data in the actual database can vary from one
database to another. How the data is stored in the database however is
of little concern to the program, and won't matter when writing
Clarion code. The rest of the section discusses different possible
storage approaches, but this is for background information only and is
not important when using
FILE or
VIEW structures.
Some databases (like PostgreSQL and Oracle) have direct support for
arrays, some (Firebird) have more limited support, and some (SQLite,
MySQL, MariaDb, MS SQL Server) make use of JSON functionality, storing
the array as a JSON text string, and using JSON commands to interact
with specific array values.
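For example, a field declared with the DIM attribute becomes an array; a minimal sketch (the table, field and driver names are illustrative);
Readings             FILE,DRIVER('MSSQL2'),PRE(Rdg),NAME('Readings')
PrimaryKey             KEY(Rdg:Id),PRIMARY
Record                 RECORD
Id                       LONG
Hourly                   SREAL,DIM(24)   ! stored natively, or as JSON, depending on the database
                       END
                     END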
Auto Numbering
One of the joys of working with Clarion is that creating a
Primary
Key Identifier for each row in the table has always been
trivial. By creating a field (a number, usually a
LONG)
in the table, and then creating a key on this field, and setting the
key to
auto-number, the underlying code
would be generated to create unique values in this row. This approach,
where the program creates the value, is known as
Client
Side Auto Numbering.
There are quite a few structural problems associated with this
approach, and these became evident as time passed and bigger, and more
complicated, systems were designed. One of these problems is the case
where multiple programs are created (some of which may not be in the
Clarion language) which are adding data. Relying on each of them to
number the field correctly is not ideal.
Most SQL databases support the idea of an
IDENTITY[1] column - that is one which
is allocated
by the server, when a record
is added. Using that feature, the task of doing auto-increments is
moved onto the server, and the client need not worry about the field
at all.
Server Side Auto Numbering is
born. Since this option exists it is the work of but a moment to
identify the field, and let the server increment the number for you.
Unfortunately it's not always quite that simple. Auto Numbering has
other issues and while this solves one problem, it opens the door to
others. Thus configuring your dictionary to use Server Side Auto
Numbering can be tedious.
The primary problem is that the program may need to know what the
auto-incremented value of the record just added is. In many cases the
ADD, of a parent record (for example
Invoice) generates a number which will
immediately be used when adding child records (for example
LineItems).
(Happily the driver can sort this out for you.)
Secondly though, the
templates still need
to know that auto-numbering exists because when forms are encountered
the parent record is added when the form opens. This then gets that
unique ID value, so that child records can be added while the form is
open (again, think of the
Invoice /
LineItems situation.) As long as rows are
being added this works well, but it gets messy when an
ADD
is cancelled.
Thirdly, your SQL server may recycle numbers which have been used (but
discarded). Or it may not. Consult your SQL database
documentation to determine what yours does. Hopefully you are happy
with the default approach. Some object based drivers may offer a way
to explicitly reuse or not reuse numbers.
Fourthly a
LONG field type can hold a
maximum value of just over 2.1 billion. So you will encounter problems
with tables that accumulate data quickly, and may accumulate more than
2.1 billion rows over their lifetime (assuming row numbers are not
reused.) Ideally you would just use an
INT64
integer instead, but we don't (yet) have that as a natural data type
in Clarion. (A
REAL would be a good
alternative in this case though.)
Fifthly, some databases allow you to override the server-side
generation (by passing in a specific value); others do not. If a field
should not be set by the program you can add the
|READONLY
attribute to the
External
Name. When this attribute is present, the field will not be
included in
ADD statements. It will
automatically be set though when reading the table, and also after the
ADD command is called.
Lastly it should be noted that gaps in the sequence are not only
possible, they are very likely. This is because the IDENTITY is
incremented
even if the INSERT ultimately
fails. In almost all cases the server allocates the number
before the
INSERT occurs, and it does not
return the number to the pool if the
INSERT
fails. If you need incrementing numbers, which do not have gaps, then
server side autonumbering is NOT for you.
All of the above serves as a warning that Server Side Auto
Incrementing should be used deliberately, and with care. It works
fine, but the boundaries of the usage need to be understood.
Note 1:
Different database engines have different terminology for an auto
numbered field. MS SQL Server calls it an
Identity.
PostgreSQL refers to it as a
Serial,
SQLite uses
AutoIncrement, MySQL and
MariaDB call it
Auto_Increment, Firebird
calls it a
Sequence and so on. Regardless
of the database terminology the setting(s) in the dictionary refer to
it as
ISIDENTITY.
Implementing Client Side Auto Numbering
- Create a KEY on the field, and set
Auto-Numbering on the key to ON.
Typically this is a single component key, and is the primary key
for the table, but there could be more components in the KEY
(as long as this field is the last one.)
- For SQL drivers you can make use of the \AutoInc
and \PreAutoInc Driver Strings to
create a SELECT statement which will
most optimally prime the auto incrementing value for you. If
these are not used then generic code will be generated by the
templates, which may not be optimal for your SQL database. Unless
you are adding lots of records in a short space of time, though,
this performance factor will not be important.
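For example, the generic priming logic is similar in spirit to the following sketch (the table, key and field names are illustrative; the \AutoInc and \PreAutoInc driver strings let the driver replace this with a single, more efficient SELECT);
Clear(Cus:Record,1)             ! fill the key fields with high values
Set(Cus:IdKey,Cus:IdKey)        ! position at the end of the key
Previous(Customers)             ! read the record with the highest Id
If ErrorCode()
  Cus:Id = 1                    ! empty table, start at 1
Else
  Cus:Id += 1                   ! next available value
End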
Implementing Server Side Auto Numbering
- Make sure Auto numbering is OFF for all keys
containing this field. For server-side auto-numbering the key MUST NOT have auto-numbering option
turned on.
- Instead, identify the field in question by setting the Field
User Option
ISIDENTITY=1
- In code, sometime before the record is added, tell the driver
that the field exists. (If you are using an APP this is done for
you by the template.) The code consists of two lines; (It is
documented here for hand-coded programs.)
SomeFile{PROP:ServerAutoInc}
SomeFile{PROP:ServerAutoInc,n} = m
Where n is the field number and m is the column number in the
returning SQL clause.
OR
- (Use this step in place of step #3). Object based drivers can
automatically set this property if the |
ISIDENTITY attribute is added to the field's external
name. This is somewhat less work than step #3, and also
doesn't require adding code to the program. Thus it
works equally well in App-generated or hand-coded programs. This
step is in addition to, not in replacement of, step #2 above.
(See the field declaration sketch after this list.)
- If you want the templates to treat the file as
server-side-auto-numbered, specifically if you want forms to ADD the record on form open, then you need
to add a File User Option to the
table in the dictionary;
EmulateAutoNumKey=1
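For example, a sketch of a field carrying the attribute in its external name (the field name is illustrative);
Id                       LONG,NAME('Id | ISIDENTITY')
With this in place the driver sets PROP:ServerAutoInc for you, so the two lines of code in step #3 are not needed.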
The mechanism for populating the assigned value, and retrieving it by
the program, has been different for each database engine over the
years. However recently the
RETURNING (or
OUTPUT) SQL clause has been added to most common
databases and so this is the suggested and preferred way for object
based drivers to get the value (and the current drivers use these
features.) This does limit access to earlier versions of the database
engine though.
| Database | Version | Year |
| MS SQL Server | SQL Server 2005 | 2005 |
| PostgreSQL | 8.2 | 2006 |
| SQLite | 3.35.0 | 2021 |
| Firebird | 2.0 | 2006 |
| MariaDB | 10.5 | 2020 |
| MySQL | 8.0.27 | 2021 |
Using the
RETURNING clause incurs a slight
(and the emphasis is on slight) performance penalty when doing an
ADD. For this reason the
APPEND
command does not include the
RETURNING
clause and thus maintains its status as
A Faster
ADD. This makes
APPEND suitable
for things like bulk adds where the identity value is not needed in
the procedure. Tables that do not have an auto-numbered field do not
incur this performance penalty on
ADD.
After a call to
ADD or
PUT,
the identity field in the table record is updated to match the value
in the database. (In other words, after the write, the value in the
file record matches the database, regardless of its value before the
write.)
You cannot change the value of an IDENTITY field by using
PUT.
Values in any IDENTITY (or READONLY) fields are not sent to the
server, but are rather updated from the server value after the read.
UUID's
Auto Numbering (server or client side) presents a number of other
issues as well. This is not the place to discuss them in depth, but in
no particular order they include;
- They "leak" information from the system where they are used.The
most well-known example of this is the German Tank Problem. This introduces
security concerns, especially as databases move from the local LAN
to wider systems on the web or on mobile.
- They require a "single database" for adding records. It's
difficult (and prone to failure) to use auto-numbered records in
distributed environments. That might be as simple as exporting to
Excel, editing, and re-importing. Or it might be as complex as
data being sent to phones, new data captured on the phone, and
then sent back to the server.
- Merging data sets becomes hard work, and irreversible. So when
the data from two databases (say branch offices) needs to be
consolidated into a single database, everything has to be remapped
- which takes significant time and effort.
- In order to add child records to the database, the parent record
must be created (in order to get the parent ID) first. This
complicates a lot of UI (especially when it comes to managing the
CANCEL button).
Switching from an Auto Numbered field to a
Client-Side
UUID field overcomes these problems.
Aside: You may have seen GUID, or UUID,
values represented like this;
75163EA6-76C8-473D-A619-E9F30B489912.
While this is a common way of
displaying
a UUID, it's
not actually stored like
this. The stored value is a binary string; this display is the binary
value hex-encoded for viewing.
Client Side Generated UUID values
While there are variations, you can consider a Universally Unique
Identifier (
UUID) field to be a random
string field, which is 16 characters long. You add it to your record
structure in the dictionary as a
String(16).
Because the value in it is random, or at least a significant part of
it is random, data can be easily added by multiple different systems
at the same time, and all the issues described above simply go away.
Object based drivers support setting a field in the dictionary to be
a UUID. The driver will automatically populate the field (if it is
blank) or use the value in the field (if one is pre-set). In other
words, other than the file declaration, no program code is needed to
support UUID's.
There are four primary arguments
not
to use UUID's but to use
Auto Numbering
instead;
- Since the UUID consumes 16 bytes[1]
instead of 8 bytes (for a BigInt)
or 4 Bytes (for a Long), and
because primary keys propagate into child tables, and many keys,
a database based on UUID's will consume more disk space than one
based on numbers. While this is a valid point, your context will
matter. For example wide tables (tables with many columns) will
likely take up space anyway, and an extra few bytes per row will
not make a significant difference. Also, disk space is cheap, so
how much this size difference matters overall to a system is
doubtful. In most cases the percentage increase in size use is
likely to be minimal.
- Since UUID's are not numeric they require more CPU time to
process (comparing, sorting etc). True, but again, whether it is
significant or not depends on your context. Most CPU's are good
at dealing with (short) strings, and this is seldom the
bottleneck in database operations. Using UUID's is a common
practice, so databases are optimizing for this which in turn
reduces this performance penalty.
- Some databases (notably MS Sql Server) use (by default) the
Primary Key as the Clustered Key.
Meaning that records are physically laid out on disk in this
order. This can be more efficient when processing records in
that order. Since UUIDs are random, they tend to be added
out-of-order. Which can make ADD
slower. This can be mitigated by choosing another key to be the
clustered key[3].
This may be advantageous if supported by your database - not all
of them support this. If changing the clustered key is not an option
then a specific form of UUID (version 7 - see below) deals with
this by pseudo-ordering IDs. So this is a non-issue when using
that form of the UUID.
- Collisions: Since randomness is involved, and multiple
databases could be adding data, it's possible to get two records
added with the same UUID, which in turn could be a problem.
However while this is possible,
it's not probable. Regardless of
the UUID version chosen the random numbers generated are huge.
(Think of it as rolling 32 dice, and having them all come up 6
at the same time.) Probability is a complex subject, and humans
are not good at comparing probabilities, but it's fair to say
you have more risk of being hit by a small rock falling from the
sky (ie a meteorite) than of encountering a collision in UUID
space.
There are 3 different kinds of UUID supported by the driver;
| Attribute | Behavior |
| UUID4, UUID | The UUID conforms to the standard UUID format, version 4. This contains the most randomness (122 bits worth.) The value in the string is a binary value, so cannot be displayed on the screen. This is fine for most systems, but if you are exporting the data to a text output you will need to encode the value. [2] |
| UUID7 | The UUID conforms to the standard UUID format, version 7. This contains significant randomness (74 bits worth per millisecond.) The value in the string is a binary value, so cannot be displayed on the screen. The primary advantage of this format is that the high bytes are an increasing value, based on the time. So data will be added to the end of the table, thus avoiding the clustering problem. |
| UUID8 | Version 8 of the UUID standard allows programs to create the UUID in (almost) any form they like. It only mandates that two nibbles in the UUID are set to specific values. The UUID8 values implemented in the driver are NOT standard UUID8 values, because (for backward compatibility reasons) they do not implement these two nibbles. Historically Clarion programs have generated text GUIDs as 16 alpha-numeric characters. To maintain compatibility, the UUID8 implemented in the new driver classes simply populates the string with an alpha-numeric random text value. This results in around 80 bits of randomness. Since it is alphanumeric it can be exported and imported (to JSON, Excel etc) without the need for additional encoding or decoding. In addition, since it is not binary, it can be stored in a simple Char or VarChar field in the database, which makes sorting and searching easier for the database engine. Use the UUID8 attribute on fields in existing dictionaries that used a UUID or GUID field in this way. |
IMPORTANT NOTE: In versions 1.15 and
earlier of the drivers the attribute
UUID
was synonymous with
UUID8. From
version 1.16 it is synonymous with
UUID4.
To set a field in the dictionary;
- Create a String(16) (or String(36)), typically (but not
necessarily) called GUID. (Do not use a CSTRING
field type here.)
- Set the External Name of the field, adding one of the
following attributes; UUID4, UUID7 or UUID8.
Setting the attribute to UUID4 or
UUID7 implicitly makes the field BINARY (you do not need to add
the BINARY attribute yourself.)
- IMPORTANT: If you are using UUID4
or UUID7 then any related
field, in another table, needs to have the same attributes.
Generally speaking the SQL Type of a field in a parent table
should match the SQL Type of related fields in child tables.
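For example, a sketch of how such fields might appear in the resulting FILE structure (the field names are illustrative; see also the SQLTYPE examples later in this section);
Guid                     STRING(16),NAME('Guid | UUID7')
Guid36                   STRING(36),NAME('Guid36 | UUID4')   ! 36 chars for display; stored as 16 binary bytes (see Windows GUIDs below)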
When adding new records simply clear the
GUID
field. The driver will populate a value in the field for you when
ADD or
APPEND
is called. If you wish you can pre-populate the field with a value,
and that value will then be used during the
ADD
or
APPEND. When adding child records
(say LineItems) that reference back to a parent table uuid (say an
InvoiceGuid) then you would necessarily populate the child field
with the parent value.
For example, to create a new Customer record, the following code
generates a
UUID version 7, and saves
it in the
Cus:Guid field.
Cus:Guid = Customers{prop:uuid7}
If you do not populate the field yourself, in other words if the
Cus:Guid field is blank when the record is
added or appended, then it will be populated automatically by the
driver to a new random value.
Note 1: Uuids
(aka Guids) are often displayed as a 36 character Hex-Encoded
string. However this is just a display convention - the actual data
is stored in 16 bytes.
Note 2: StringTheory has several methods for encoding
Binary values as text. Including
ToHex,
Base64Encode and
UUEncode
.
Note 3:
PostgreSQL, SQLite and Firebird don't have clustered indexes at all
(data is stored on the disk in an ad-hoc manner). MS Sql does, but
allows any index to be the clustered index. Oracle has
Index-Ordered-Tables which is similar to a clustered index. MySQL
and MariaDB can only cluster on the Primary key. To mark a key as
the clustered index in the dictionary use the
CLUSTER
attribute in the external key name.
Server Side UUID Types
This section is specifically to explain what happens when other
programs add rows to the database.
Some servers, although not all, support a server-side Unique
Identifier type. This has various names depending on the database
engine being used. Where a UUID type is not directly available, a
more generic (usually binary) type can be used.
| Database | Default Type for UUID8 and UUID (SqlType(STRING)) | Default Type for UUID4 and UUID7 (SqlType(UUID)) | Default Value UUID4 | Default Value UUID7 |
| SQLite / InMemory | TEXT | BLOB | randomblob(16)[5] | Not Supported[1] |
| MS SQL Server | CHAR(n), VARCHAR(n) | UNIQUEIDENTIFIER | NEWID() | NEWSEQUENTIALID() |
| PostgreSQL | CHAR(16) | uuid | gen_random_uuid() | gen_random_uuid()[3] |
| MySQL / MariaDB | CHAR(16) | BINARY(16) | UUID_TO_BIN(UUID(), 1)[2] | UUID_TO_BIN(UUID(), 1) |
| Firebird | CHAR(16) | CHAR(16) CHARACTER SET OCTETS | Not Supported[1] | Not Supported[1] |
| Oracle | CHAR(16) | RAW(16) | SYS_GUID() | SYS_GUID()[3] |
If the field is marked as
UUID8 (in the
Extended Name
Attributes) then the driver will store the value as text
as-is.
If the field is marked as
UUID4 or
UUID7 then the driver will convert the
field to a binary type appropriate to the driver being used.
For example the MSSQL2 driver translates
UUID4
into
UNIQUEIDENTIFIER. This is equivalent
to setting
SQLTYPE(UUID) or
SQLTYPE(UNIQUEIDENTIFIER).
These SQL types are not needed for
UUID4
and
UUID7 as they are done
automatically.
In the rare case (and it's not recommended) that you need to
store the uuid in the database as a 36 character, hex encoded,
text value, you can do so by setting
SQLTYPE(STRING).
For example;
Uid7    STRING(16),NAME('Uid7 | UUID7')
Uid7    STRING(36),NAME('Uid7 | UUID7 | SQL')
Uid4    STRING(36),NAME('Uid4 | UUID4 | SQLTYPE(STRING)')
When the data is marked as a
UUID4 or
UUID7 then an appropriate
server-side
default value will also be set when the table is created.
This default is
only applied when the field itself
is blank. So it only comes into effect when records are written to
the database
by other programs, since the drivers
automatically populate
UUID4 and
UUID7 (and
UUID8)
fields.
Note
1: Default values only apply when other programs write new
records to the database. All the object based drivers automatically
populate the field on the client side, if the field is marked as
UUID4,
UUID7
or
UUID8.
Note
2: MySql and MariaDB only generate a
UUID7
type, so that default is used for
UUID4
as well.
Note
3: PostgreSQL and Oracle only generate a
UUID4
type, so that default is used for
UUID7
as well (meaning that server-generated UUID7's are not sequential.)
Note 4: To suppress the automatic default
the
DEFAULT() attribute can be used to
explicitly set the default to blank.
For example
Uid
STRING(16),NAME('Uid4 | UUID7| DEFAULT()')
! portable
Note
5: Technically
randomblob(16) does
not return a UUID that is conformant to RFC 4122. However it suffices
as a backstop for other programs adding data to the database.
Windows GUIDs
As mentioned above
UUID values are
actually a 16 character binary string. This format is compact and
efficient for databases, but less efficient when displayed on the
screen since the
UUID value is binary,
and binary strings do not usually display well. For this reason, on
the Windows platform,
UUID's are
usually displayed as a 36 character hex-encoded string, with four
hyphen separators.
The object based drivers are aware of this, and contain a built-in
transformation - allowing the string declared in the dictionary to
be a
string(36) and the value in the
database to
remain as a 16 byte binary value.
This behavior is automatic IF the field declared in the dictionary
(and marked with one of the
UUID
attributes) is a
String(36). If you
don't want, or need, this functionality then declare it as a
String(16) instead.
ADD and APPEND
Using traditional drivers, the whole record is inserted when an
ADD or
APPEND is
done. The object based, SQL drivers support
prop:Project. This means that only the fields
listed in
prop:Project will be written
to the database when an
ADD or
APPEND
is called. This can be faster if only a few fields in the row are
being set during the write.
In the context of an
ADD and
APPEND
the Projected fields are the fields being projected
into
the server from your program.
If supported by the driver, when an
ADD is
done, all server-side fields (identity, and
read
only fields as well as any not-projected fields which have
default values) in the record are reset to their current server-side
value.
APPEND does not read the server
side values, so is slightly faster if the server-side values are not
needed.
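For example, a sketch of projecting only a few fields on a write (the table and field names are illustrative);
Customers{prop:Project} = 'Cus:Name,Cus:Email'   ! only these fields are sent to the server
Cus:Name  = 'Acme Ltd'
Cus:Email = 'info@acme.example'
Add(Customers)                                   ! identity, read only and defaulted fields are refreshed from the server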
See also
PUT.
ADDRELATION
AddRelation(File pChildFile, File pParentFile, Long
pOnUpdate, Long pOnDelete, *? pChildField1, *? pParentField1, <*?
pChildField2>, <*? pParentField2>,...)
The
ADDRELATION command informs
the
FILE of a relationship to a parent
file. Relationships are always defined in the child file, not in the
parent file. In other words all relationships are defined as
Many-To-One (Many child rows to one parent row.)
| Parameter | Description |
| pChildFile | The relationship is always defined from the child table's point of view. The child requires that the parent row exists. Many child rows can point to the same parent row. (In some cases only one, or indeed no, child rows point to a specific parent row.) |
| pParentFile | The parent table in the relationship. |
| pOnUpdate | The action to take if the parent field value changes. In other words if the parent field (say CustomerId) changes, then all the CustomerId fields in the children can have an action. This action is one of driver:cascade, driver:clear, driver:restrict, driver:NoAction. |
| pOnDelete | The action to take if the parent row is deleted. In other words if the parent row is deleted then all the children rows can have an action. This action is one of driver:cascade, driver:clear, driver:restrict, driver:NoAction. |
| pChildField1 | The field in the child, which matches the field in the parent (defined in the next parameter). These fields must be of the same type. |
| pParentField1 | The field in the parent which matches the field in the child (defined in the previous parameter.) |
| ... | Up to 10 pairs of fields are supported. |
Some databases (specifically SQLite) require that for best performance
the call to
ADDRELATION is done before
the call to
CREATE. In all cases
it is ideal to add relationships early in the design life cycle,
before the tables have been created, or data has been added. Adding
relationships to existing databases can fail if the data in the tables
contains values that would violate the relationships.
If relationships have been defined in this way, and
CREATE
is called on the child table, then the existence of the parent table
will be checked, and it will also (if necessary) be created. Databases
require that parent tables are created (or exist) before child tables
can be created.
When the
pOnUpdate or
pOnDelete
action is set to
driver:Clear, this does
not mean that strings are set blank, or that numbers go to zero. It
means the value of the field will be set to
NULL.
If the field itself is set as
NOT
NULL then an error might be generated when the table is created,
or (more likely) you'll get an error when deleting the parent record.
Deleting the parent record, would result in
NULL
being assigned to the field, but since the field is defined as
NOT NULL, this fails, which in turn causes the
DELETE to fail.
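For example, a sketch of declaring an Invoice / LineItems relationship before creating the child table (the table and field names are illustrative);
AddRelation(LineItems,Invoices,driver:cascade,driver:restrict,Lin:InvoiceGuid,Inv:Guid)
Create(LineItems)    ! checks for, and if necessary creates, the parent table first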
For more on relationships in general see
Relationships.
BINDFIELD
BindField(FILE pFile, String pName, *Long pField)
BindField(FILE pFile, String pName, *Real pField)
BindField(FILE pFile, String pName, *String pField)
BindField(FILE pFile, String pName, *Cstring pField)
BindField(VIEW pView, String pName, *Long pField)
BindField(VIEW pView, String pName, *Real pField)
BindField(VIEW pView, String pName, *String pField)
BindField(VIEW pView, String pName, *Cstring pField)
| Parameter | Type | Description |
| pFile / pView | File / View | The label of the FILE or VIEW to which the field is being bound. |
| pName | String | The name of the field being bound. This should be the name as it will appear in the SQL result set. |
| pField | Long, Real, String, CString | A local variable, one of a Long, Real, String or Cstring, to hold the result. When the record buffer is updated (on a call to NEXT, PREVIOUS) the local variable will also be updated. |
Using traditional drivers, when looping through a
FILE
or
VIEW, fields being returned are
placed into the file record buffer.
However when accessing SQL databases it's possible to
SELECT
values which are not in the file record. These might be calculated
values, or totals or whatever.
For example using
prop:Project
a specific set of fields could be set to be retrieved by a
SET
/
NEXT loop. For example;
Customers{prop:Project} =
'Cus:CustomerCode,COUNT(*) AS CustomerRank'
In this case two fields are being projected, the normal field
Cus:CustomerCode
and also a calculated value which is placed into a field called
CustomerRank.
If the table contains a field (in this example
Cus:CustomerRank)
which can hold the calculated value then that works well. However the
table may not contain such a (spare) field. In this situation using
BindField to add an external value to hold the
result is necessary.
For example
CustomerRank Long
Code
BindField(Customers,'CustomerRank',CustomerRank)
Then that field can be used to hold the return value in the result
set. In this case, when a record is read (by the
NEXT
or
PREVIOUS command) the local variable
CustomerRank will be populated with the value
in the result set. In effect the local variable has be "bound" into
the record for the duration of the loop.
Only fields of type
Long,
Real,
String or
Cstring
can be bound in this way.
Fields bound in this way can be used in the
prop:Order
property. They are not applicable to the
prop:Filter
property.
Local variables used in
prop:Filter
should be bound, as normal, using the Clarion
BIND
command. Conversely local variables bound using
BIND
have no meaning in a
prop:Project or
prop:Order property. In other words
BIND
and
BINDFIELD are different, and should
not be conflated together.
The binding is cleared when the
FILE or
VIEW is closed, or when
CLEARPROPS
is called.
BINDFIELD is supported in the commands
NEXT,
PREVIOUS,
GET ,
GETRANDOM
and
REGET.
Blobs
Blobs are a source of common problems in Clarion, because they don't
behave like regular fields and variables. The goal of this section in
the document is to clarify behavior, and to highlight places where the
behavior may vary from the behavior of traditional file drivers.
In the table below
aBlob is a blob
reference, assigned as
aBlob &= table{prop:blob,-n}
fil:blob is the blob as declared in the
dictionary
| Code | Traditional Driver | Object Based Driver |
| s=aBlob{prop:Size}, s=fil:Blob{prop:size} | The current size of the blob. | (Same.) |
| s=table{prop:size,-n} | blank | The current size of the blob. |
| {prop:Size} = x | Prop:Size does not need to be called before setting the blob contents. This behavior is not changing, however it deviates from the blob documentation. | (Same.) |
| {prop:Size} = x | When setting the blob size, new space is not cleared. | New space is cleared to nulls. |
| s=aBlob{prop:value}, s=fil:Blob{prop:value}, s=table{prop:value,-n} | The current contents of the blob are returned. | (Same.) |
| aBlob[0:3] = 'abcd', fil:Blob[0:3] = 'abcd' | Assigns text into the blob. Makes the blob bigger if necessary. | (Same.) |
| aBlob{prop:value} = 'abcd', fil:Blob{prop:value} = 'abcd', table{prop:value,-n} = 'abcd' | Assigns the text into the blob. Replaces the existing value completely and adjusts the size up, or down, as necessary. | (Same.) |
| CLEAR(FILE) | Does not clear BLOBs or MEMOs. | Clears BLOBs and MEMOs along with the record (see CLEAR below). |
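For example, a sketch of using the property syntax (the table and variable names are illustrative; aBlob is a blob reference, as described above);
aBlob   &BLOB
s       LONG
  CODE
  aBlob &= Customers{prop:blob,-1}        ! reference the first blob in the table
  aBlob{prop:value} = 'some long text'    ! replaces the contents; the size adjusts automatically
  s = aBlob{prop:Size}                    ! current size of the blob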
BUFFER
Buffer(FILE entity, UNSIGNED pageSize, UNSIGNED
behind=0, UNSIGNED ahead=0, UNSIGNED timeout=0)
The traditional drivers make use of a
BUFFER
command which allows the
FILE or
VIEW to buffer some number of records, either
some already read or some number of future records. This can have a
dramatic impact on performance, although the command is only used in
the ABC ViewManager, in the shipping classes and templates.
The object-based drivers do not currently make much use of the
BUFFER command. They already automatically do
page-based (or indeed file-based) access, so there's no need for an
explicit call to
BUFFER. Setting
prop:FetchSize
is typically all that is required to control the buffer size.
BUFFER can be used to set the fetchsize.
The value is in rows, and is either the
pageSize
(if
ahead is 0) or
pageSize
* ahead if
ahead is greater than 0.
All the other parameters are ignored.
For example
BUFFER(Customers,50)
is equivalent to
Customers{prop:FetchSize} = 50
is equivalent to
BUFFER(Customers,10, ,5)
For more information on the use of
BUFFER
with the DOS2 driver (which behaves differently) see
Buffer.
BYTES64
Bytes64(FILE pFile),BIGINT
The
BYTES command returns a
LONG,
which limits the number to around 2.1 billion. This may not be enough
(for example the DOS2 driver supports files greater than 2 Gigabytes).
The
BYTES64 command is a new command
which returns a
BIGINT variable (at the
time of writing this, this is a
REAL.)
If this command is called against a traditional driver then the
regular
BYTES command is called, and a
value less than 2.148 billion is returned.
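For example, a minimal sketch (the table name is illustrative);
TotalBytes    REAL                      ! a BIGINT is currently a REAL
  CODE
  TotalBytes = Bytes64(Archive)         ! size in bytes, not limited to 2.1 billion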
See also
Pointer64 and
Records64.
CALL / CALLOUT
Call(FILE pFile, String pStoredProcedure, <?
Parm1>, ... , <? Parm20>)
CallOut(FILE pFile, String pStoredProcedure,
<*? Parm1>, ... , <*? Parm20>)
SQL databases have the ability to contain
Stored
Procedures. These are blocks of code (procedures) that are
written in the Database's programming language and stored in the
database itself. These procedures may, or may not, return a
result set.
This section discusses different forms of Stored
Procedures, and how they might be used. Different database engines
use different terminology, and different SQL syntax. The goal here
is to show the use of CALL and CALLOUT.
With the traditional drivers, the way to call stored procedures was to
use the
prop:Sql or
prop:SqlRowset
properties. The text for this SQL has to be carefully constructed
according to the syntax of the database being called.
The object based drivers introduce two new commands,
CALL
and
CALLOUT, which simplify calling a
stored procedure, and make it more portable.
| Command | Use When | Requires |
| CALL | Calling a Stored Procedure. | Parameters can be variables or fixed values. |
| CALLOUT | Calling a Stored Procedure which has OUTPUT parameters. Can also be used to call any other Stored Procedure. | All parameters have to be variables. Use BINDFIELD to bind output variables. DO NOT call BINDFIELD for input variables (ie variables being passed into the stored procedure.) |
Simple call, Nothing Returned
Assume a Stored procedure which is called AddProduct,
and which takes 3 parameters. It can be called like this;
Call(SomeFile,'AddProduct',107,'Store
7','107')
The table referred to here (SomeFile)
is not actually accessed - it's simply used to identify the database
connection (and hence database) that contains the stored procedure.
The second parameter is the name of the stored procedure.
The following parameters (up to 20 of them) contain the parameters
to the procedure. In this example the AddProduct
stored procedure takes 3 parameters.
Since this procedure returns nothing, this single line of code is
all that is required.
Result Set Returned
If the procedure returns a result set, then the
CALL
command takes the place of the
SET
command. The
NEXT command is then used
to iterate through the result set. In this example a stored
procedure called
GetSomeProducts() returns
some subset of the
Product table. As
usual, the result set will be copied into the Product table, based
on column name. Should you be retrieving a result set that does not
match any table in the dictionary, then a
Dummy
Table could be used.
Call(Product,'GetSomeProducts')
Loop
Next(Product)
! Etc
Output Parameters
Some stored procedures don't return a result set, rather they return
values as "output" values in the parameter list. They might be
declared something like this
[1];
CREATE PROCEDURE [dbo].[Math]
@N1 INT,
@N2 INT,
@R3 INT OUTPUT,
@R4 INT OUTPUT
AS
BEGIN
SET @R3 = @N1 + @N2;
SET @R4 = @N1 * @N2;
END;
Because this updates parameter values, it's not possible to use
CALL in this case. It is necessary to use
CALLOUT with
BINDFIELD
instead.
All the parameters need to
be variables, however
only the output
variables need to be bound.
p1 = 3
p2 = 7
BindField(Product,'r1',r1)
BindField(Product,'r2',r2)
CallOut(Product,'dbo.Math',p1,p2,r1,r2)
Stored Functions
Stored Functions are called as part of a SQL statement, not using CALL
or CALLOUT.
Stored Functions (sometimes called Scalar-Valued Functions,
or Aggregate Functions) are a special kind of stored procedure
(actually not a stored
procedure at all, but a stored
function)
which
RETURN a single value. For example,
here is a simple function which takes two parameters and returns the
result ;
[1]
CREATE FUNCTION [dbo].[MultiplyNumbers]
(
@Number1 INT,
@Number2 INT
)
RETURNS INT
AS
BEGIN
RETURN @Number1 * @Number2;
END
The advantage of stored functions is that they can be used in
SELECT statements. Indeed they MUST be used in
SELECT statements (in the
SELECT
list,
WHERE Clause,
ORDER
BY or
UPDATE SET.) So you could
call this function as;
result Long
BindField(Product,'result',result)
SQL(Product,'Select dbo.MultiplyNumbers(10,10) As result')
Next(Product)
! result = 100
Stored Functions
CANNOT be called
with the
CALL command.
Table Valued Functions
Table Valued functions are similar to Stored Functions, but they
apply to the FROM clause of a query. They
return a result set, and can be used in the query wherever a result
set is applicable.
CREATE FUNCTION dbo.GetOrderSummary (@CustomerID
INT)
RETURNS @Summary TABLE
(
OrderYear INT,
OrderCount INT,
TotalSpent DECIMAL(18,2)
)
AS
BEGIN
INSERT INTO @Summary
SELECT YEAR(OrderDate), COUNT(*),
SUM(TotalAmount)
FROM dbo.Orders
WHERE CustomerID = @CustomerID
GROUP BY YEAR(OrderDate);
RETURN;
END;
In this example the function takes in a customer id, and returns a
multi-row result set. This set in turn can be used as the source for
a different query.
SQL(Dummy,'SELECT * FROM dbo.GetOrderSummary(42) WHERE TotalSpent > 100;')
Loop
Next(Dummy)
! Etc
Table Valued Functions CANNOT be
called with the CALL command.
Note
1: This function is using the MSSQL Server Syntax. Different
databases will have a different syntax.
CLAIM
Approach 1
CLAIM(KEY pKey, ? pFIELD, ? pValue, ? pNewValue)
CLAIM(FILE pFile, STRING pOrder,
STRING pFilter, ?
pFIELD, ? pNewValue)
Approach 2
CLAIM(KEY pKey, ? pFIELD, ? pValue)
CLAIM(FILE pFile, STRING pOrder, STRING pFilter)
The
CLAIM command is a new command which
allows the developer to lay claim to a record for a period of time.
During this time no other user will be able to claim the same
record. The record will be visible to other users (so included in
browses and reports etc) but will be locked.
Ultimately
CLAIM is a form of the
GET command - after the call a record is loaded
in the record buffer, and can be written back with a
PUT.
The goal of
CLAIM is to fetch a record
from a table, and at the same time locking the record into a read-only
mode. It guarantees that only one client program can claim a record.
If multiple clients claim a record from the same table at the same
time, then each will receive a unique row, no row will be given to two
clients. This allows multiple client programs to share a "task" table,
ensuring that each row in the table is only allocated to a single
client.
If no record matches the filter (ie no records are waiting to be
claimed) then
Error 33 Record Not Available
is set.
If another client attempts to write to a record that has been claimed,
then they will get an
Error 5 Access Denied.
If the pField parameter is not in the pFile table then
Error
126 Field Does Not Belong To File is set.
In Approach 2 (described below), only a single record can be claimed
from any one table at a time.
If
CLAIM is called a second time (before
the call to
COMMIT or
ROLLBACK) then an
Error 139
Claim Already Active will be set.
If the table is closed before a
ROLLBACK
or
COMMIT is called, then the
CLOSE
will set
Error 56 Logout Already Active.
In approach 2 (described below)
CLAIM
automatically starts a
transaction, if
one is not already started.
ROLLBACK(table)
or
COMMIT(table) MUST
be called to end the claim.
If the connection to the database ends unexpectedly, or the thread
ends or the program terminates unexpectedly, the transaction is ended
with a
ROLLBACK, and ALL
writes to the database made by the thread during the transaction are
removed.
| Parameter | Type | Description |
| pFile | FILE | The table to read. |
| pKey | KEY | A key to determine the sort order of the table. The first available record, matching the filter, in this order, will be claimed. |
| pOrder | STRING | Instead of specifying a key for the order, a string containing the order can be used. For example 'Tas:Priority,-Tas:Time' |
| pField, pValue | ANY, ANY | A filter on the table. Only records where this field equals this value will be retrieved. Equivalent to a pFilter of 'field = value' |
| pFilter | STRING | A custom filter for determining which records to read. Can be used when the filter is more complex than a single field equals a single value. If pNewValue is passed then this custom filter MUST contain the equivalent of pField = pValue as one of its terms. |
| pNewValue | ANY | The pField field will be updated with this new value, and the record will immediately be unclaimed. In other words the lock on the table is immediately released. Some databases (SQLite, Firebird) which do not support FOR UPDATE SKIP LOCKED must make use of the pNewValue parameter. |
There are fundamentally two approaches when it comes to allocating
records. The first is short and sharp;
Approach 1
- Claim a record, with a field set to some value. (Status
= 0)
- Immediately update the field to a new value (Status
= 1)
- End the claim, ie Unlock the record
- Continue processing as desired
- Write back to the record as desired
The approach above is supported in all databases, including SQLite and
Firebird. The key attribute here is that the record is locked (so only
one user can
CLAIM it) but the duration
for the lock is very short, just long enough to update the filter
field (
status in the above example).
If a NewValue is passed to
CLAIM then the
first three steps are done automatically. So the process in the
program code becomes
- CLAIM a record, with a field set to
some value (pField = pValue). CLAIM will automatically update the field
to a new value (pField = pNewValue).
The row is automatically unlocked; COMMIT
is not required (calling it anyway sets errorcode 91,
No Logout Active.)
- Continue processing as desired
- Write back to the record as desired
For maximum portability, or if the second approach is not supported by
your database, then the above approach is recommended. However this
approach can result in records that have been claimed, but not
completed (because the user lost interest, turned off the machine
etc.) So for this pattern a timestamp field is usually added to the
row so that records that are "claimed but not completed" for too long
can be returned to the pool by some background process.
This approach is also recommended for tasks which take a long time to
accomplish (hours, or days, or more).
In this approach, since
pField is
immediately set to
pNewValue (and the record
is written) other users can immediately see that the record is taken.
Example
Claim(Tas:PriorityKey,Tas:Status,0,1) ! there is
no transaction in play here
! do whatever
! write whatever
Example
Claim(Tasks,'tas:date, tas:time','tas:Status = 0
and tas:Level=4',tas:status,0,1) ! there is no transaction in play
here
! do whatever
! write whatever
Approach 2
A second approach claims the record (which locks it) but no implicit
field update is done. Hence the record remains locked to other users
(they can read the record but not write it) until the transaction is
ended with a
ROLLBACK or
COMMIT.
- CLAIM a record, where a pField set
to pValue. (Status = 0)
If a transaction is not currently in progress then one will be
started using the LOGOUTCONNECTION
method.
- Continue processing as desired
- Write back to the record as desired (presumably changing the
pField value, or deleting the record.)
- A call to COMMIT or ROLLBACK
will end the transaction (as normal.) The claimed record will
remain locked for writing until the transaction ends.
If CLAIM started the transaction,
then a call to COMMIT(table) or ROLLBACK(table) (passing the table) is
necessary.
This approach is supported by most (but not all) database engines, and
has the benefit of not changing the status field. Thus if the
connection is lost unexpectedly the record will be returned to the
pool immediately to await another client for processing. However
because the status field is not changed (and written back to the
database), other users cannot immediately see that the record is being
processed.
Example
Claim(Tas:PriorityKey,Tas:Status,0) ! a
transaction commences here
! do some work
Tas:Status = something not 0
! write whatever
Commit(Tasks) ! this ends the transaction
Example
LogoutConnection(Employees)
! do some stuff
Claim(Tas:PriorityKey,Tas:Status,0)
! do some work
Tas:Status = 1
! do some more stuff, transaction is still active
Commit(Employees) ! ends the transaction
Example
Claim(Tasks,'tas:date, tas:time','tas:Status = 0
and tas:Level=4') ! a transaction commences here
! do some work
Tas:Status = something not 0
! write whatever
Commit(Tasks) ! this ends the transaction
Compatibility
The
CLAIM command makes use of a feature
in the database engine called
FOR UPDATE SKIP
LOCKED. While widely supported (although with different
syntax), it is not universal so consult the documentation for your
specific driver to check if there are any restrictions.
| Database | Version | Year |
| MS SQL Server | SQL Server 2005 | 2005 |
| PostgreSQL | 9.5 | 2016 |
| SQLite / InMemory[3] | 3.35 [1] | 2021 |
| Firebird | 2.0 [1] | 2006 |
| MariaDB | 10.6 [2] | 2021 |
| MySQL | 8.0.0 [2] | 2018 |
| Oracle | 9i | 2001 |
Note 1:
Only supports approach 1
Note 2:
Individual row locking in MySQL and MariaDB can lock more than a
single row. Both approaches will work, however approach 1 is
recommended.
Note 3:
InMemory tables cannot be shared between different programs, so at
first glance this feature is not useful for the InMemory2 driver.
However the tables are shared between threads, so if building a task
queue within a program, with multiple threads processing the tasks,
then this feature can be useful in this context.
CLEAR
The
CLEAR(FILE) command clears the record
buffer.
With the object based driver it is recommended that this form of
CLEAR is called rather than the
CLEAR(fil:record)
form, because then BLOBs and MEMOs as well as fields are cleared. With
the traditional driver neither form clears BLOBs or MEMOs.
Using
CLEAR(FILE) clears the record and the
blobs. (In build 1.15 and earlier it also cleared all the properties,
prop:Filter et al, by calling ClearProps;
see the note below.)
Note: Calling
CLEAR
does not reset the
NULL status of the
fields. A field that is
SETNULL before
the
CLEAR, remains set to
NULL
after the
CLEAR. To clear the null status
of all fields in the record use
SetNoNull(File,fil:record)
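For example, a sketch of clearing the buffer and the null flags together, following the pattern above (the table name is illustrative);
Clear(Customers)                      ! clears the record, blobs and memos
SetNoNull(Customers,Cus:Record)       ! resets the null status of all fields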
Note: Calling
CLEAR(FILE,1)
or
CLEAR(FILE,-1) are not supported by
the compiler.
Note: In build 1.15 and earlier, this call
cleared the properties as well as the record. This was removed in
build 1.16 because it is common to clear the record while inside an
ADD loop.
Note: If calling the method in the object
directly then use the method name
ClearFile
instead of the word
Clear.
CLEARPROPS
CLEARPROPS(FILE pFile)
The
CLEARPROPS command clears all
the properties (
prop:Filter
et al) which are supported for many of the methods.
It does not clear the record buffer.
It does not clear
prop:Sql.
It is recommended to call this command before
COPY,
MERGE,
SET,
RESET,
EXPORT,
IMPORT.
This command also sets the
error code back to
No
Error (0).
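For example, a sketch of resetting the properties before starting a new filtered loop (the table, key and field names are illustrative; the filter property usage follows the properties discussed above);
ClearProps(Customers)                       ! clear any leftover prop:Filter, prop:Order etc.
Customers{prop:Filter} = 'Cus:Active = 1'
Set(Cus:NameKey)
Loop
  Next(Customers)
  If ErrorCode() Then Break.
  ! Etc
End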
CLEARRELATIONS
CLEARRELATIONS(FILE pFile)
The
ADDRELATION command
adds an entry into the internal relationships queue for the FILE
object.
The
CLEARRELATIONS command frees
this queue, removing all existing entries.
It does not touch the database itself, and does not remove any
relationships created in the database. (See
SUSPENDRELATIONS for that.)
Typically the
ADDRELATION
command is called before creating a table. This allows the
CREATE command to create parent tables, and also
create server-side relationships between the tables. The relationship
exists for the life of the thread. Occasionally it is desirable to
CREATE tables with, or without,
the relationships defined. By calling
CLEARRELATIONS
the internal queue will be freed and so it is possible to
CREATE a table, without relationships, even after
an earlier call to
ADDRELATION
on the thread.
CLONE
CLONE(FILE
pFile, String pNewTableName, Long pOptions)
The
CLONE command allows you to create a
table in the database, where the structure is based on the SQL
definition of an existing table.
In other words, unlike
CREATE
which works from the Clarion
FILE
declaration, the
CLONE command uses an
existing table in the database, and creates a new table, based on that
table. Exactly which parts of the table are cloned depends on the
pOptions parameter (and the capabilities of the
database).
You do not need to call this command directly if you are using the
COPY command.
COPY
automatically drops a table, and calls
CLONE
internally whenever it is called. The options passed to the
COPY
command are passed on (internally) to the
CLONE
command.
You do not need to call this command directly if you are using the
MERGE command.
MERGE
automatically clones the table, if the destination table does not
already exist.
| Parameter | Type | Description |
| pFile | FILE | The file structure which points to the SQL table to clone. |
| pNewTableName | String | The name of the table to create. The Primary Key is automatically created as well. |
| pOptions | Long | Any, or all, of the following options added together;
Driver:UniqueKeys - Unique keys (other than the Primary Key).
Driver:DefaultValues - The default values for the various fields, as set in the source table, are set the same in the cloned table.
Driver:Constraints - The constraints for the individual fields.
Driver:ForeignKeys - The Foreign Keys for the table, as used in relationships.
Driver:Indexes - Any additional indexes (keys) for the table. This excludes the Unique keys which are dealt with above.
Driver:Triggers - Any triggers for the table. If you copy triggers then those triggers should be inspected to make sure they are doing what was intended. It's possible to create triggers that cannot be converted, especially if the trigger itself is generating SQL on the fly from string values. |
Databases
Not all databases support all the above options. See the clone command
in
specific
database documentation for more specific information.
Example
Clone(Customers,'CustomersArchive',Driver:DefaultValues + Driver:Constraints)
See also
COPY ,
MERGE ,
CREATE.
prop:Collation
This property is used to get the collation for a column in the
database, as set in the dictionary.
Prop:Collation
retrieves the value as set in the
Extended Name
Attribute of the field.
s = customers{prop:collation,3}
Returns the collation for the 3rd field in the table, as set in the
dictionary.
To get the collation of a field, as set in the database, use
prop:CollationDB.
prop:CollationDB
This property is used to get the collation from the database, or set
the database collation prior to a
CREATEDB
command.
s = customers{prop:collationDB}
This returns the default collation for the database. It
typically results in a string value, which is specific to that
database. For example SQL Server might return something like
SQL_Latin1_General_CP1_CI_AS,
PostgreSQL might return
English_United States.1252
and so on. Not all databases support collation at the database level.
For example the reply from SQLite will always be
BINARY.
Some databases allow the default collation to be set before calling
the
CREATEDB command. This can
be a very important step when creating a database, and care should be
taken to set the default collation correctly. The specifics for each
database are different, so checking the
documentation
for the driver is important. Changing the default collation after
creating the database can be very tricky, if it is allowed at all.
Example
customers{prop:collationDB} =
'de-DE-u-ks-level2'
CreateDB(Customers)
When called with an index, it retrieves the collation for a specific
(string) field in the database.
s = customers{prop:collationDB,5}
Returns the collation for the 5th field in the table, as set in the
database.
To get the collation for a field, as set in the dictionary, use
prop:Collation.
COMMIT
COMMIT(FILE pFile)
If
LOGOUTCONNECTION is
used to start a transaction, then
COMMIT
MUST be called with the primary table. Calling
COMMIT
with no parameters in this situation will cause a GPF.
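For example, a sketch (the table names are illustrative);
LogoutConnection(Invoices)     ! start a transaction on this connection
Add(Invoices)
Add(LineItems)
Commit(Invoices)               ! pass the primary table; COMMIT() with no parameters would GPF here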
For more information on commit see
Transactions.
Context
There's a popular saying;
"If you get get five database administrators in a
room, and ask a SQL question, you'll get ten opinions."
The primary reason for this is that SQL databases operate in many
different contexts. And almost all SQL advice is context specific.
Things that are optimal in one context may be sub-optimal in another.
When reading, or getting, advice, it is important for the expert to
understand your context, and to be offering advice that matches your
context. Be careful of implementing advice which may be good in one
context, but is not good in yours. Here are some broad, common,
contexts. Your context may overlap some of these, or may be quite
different.
- You make a commercial program, distributed to many customers,
who each make use of a SQL database to store the data your program
creates. They don't have a Database Administrator (DBA) on staff -
you are expected to install, and maintain, the database for them.
- You work internally at an organization. They have one, or many,
databases internally. Your program interacts with those databases
adding, and retrieving information. You have little, or no, input
to the organization and maintenance of the data.
- You make a web application where data is stored locally (or "in
the cloud") and is administered by your staff. The database may
support one, or many, web servers at the same time.
- Most of the work the program does is reading data and/or
generating reports. Performance should be optimized for speedy
reads. Or perhaps most of the work is gathering data, perhaps from
automated IoT devices. Performance should be optimized for speedy
writes. Or perhaps it's a combination of both, some tables should
have fast writes, others fast reads.
- Your tables are small (thousands of rows), or your tables are
large (millions, or billions, of rows).
These are just some of the context considerations. How you design your
database, and how you deploy your database, will depend largely on
your context. When advice is taken out of context then "myths" evolve.
For example, it's a common Clarion SQL myth that "tables should not
have keys declared in the dictionary. The database will determine the
best keys based on usage, and will create them as necessary." This
advice varies between sort-of-true, and not-true depending on your
context. Firstly it's important to understand that keys improve read
performance, at the cost of some write performance, and some disk
space. Regardless of context, keys are important for fast read
performance. What can vary greatly is how those keys are determined,
created, and maintained. If a DBA is overseeing the database this job
generally falls to them. If there is no DBA this job generally falls
to the program. The SQL database does not do it "by itself".
When it comes to database maintenance, your program runs in one of two
contexts;
- Your program "owns" the data. It creates the tables, looks after
them, updates them, alters them, and so on. In this scenario the
dictionary is the source of "truth" - as you update the dictionary
so the database updates as well. If you are coming from an ISAM
(TPS) background this is the context you are used to operating in.
In this context if you add a table in the dictionary it can be
added to the database. If you add keys to your FILE
structures, they can be created in the database and so on.
- Your program accesses the database, but does not own it. This is
a common situation when the program is merely accessing a database
that is updated and maintained by either a human (a DBA) or
another program. In this context the program adapts to the
database, not the other way around. The database itself is the
source of truth.
In this context the FILE structure
and keys can be a subset of what actually exists in
the database. You can define additional keys in the dictionary,
but these are not propagated into the database itself.
It is useful for the program, and by extension the driver, to know
which of these two contexts is in play. The driver can do work to
ensure that the database structure is kept up to date, however for
backwards compatibility reasons this feature is OFF by default. To
turn this feature on, set the
Driver Option
at the global level or at the table level; a global setting can be
overridden at the table level if desired.
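For example, a sketch using the system-level driver option (see
Driver Options String, and the /MASTER option under Common Driver
Options, for the values your driver actually supports);
System{'Driver:SQLite2'} = '/MASTER=dict'   ! dictionary is the source of truth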
The
OPEN command checks that the
Dictionary field set is a subset of the Database field set (ie fields
can exist in the database that are not in your dictionary.) It
checks that Primary and Unique keys declared in the dictionary are
declared in the database. Other than that, extra keys in the
dictionary or in the database are ignored.
CONNECT
Connect(FILE pFile, LONG pOptions=0)
The
CONNECT command is not supported by
traditional SQL drivers. It is new to object based drivers.
The object based driver typically connects to the database when a
command requires it. So the
CONNECT
command is not usually called in code. However there are situations
where a connection to the database is desirable, even if the code is
not ready yet to perform a command. In this case the
CONNECT
command can be called.
The
CONNECT command can take an
Options
parameter, which is File Driver dependent. See the specific driver
documentation on what can be used here.
Clarion supports a number of database connection string formats. The
driver will alter the connection string, as necessary, to a format
acceptable to the database engine. After (attempting to) connect to
the database, the connection string as used by the driver, not the one
passed to the driver, can be read using
c = pFile{prop:TryConnectedAs}
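Example
A sketch, assuming a Customers table and a string variable c;
Connect(Customers)
If ErrorCode()
  Message('Connect failed: ' & FileError())
Else
  c = Customers{prop:TryConnectedAs}   ! the connection string the driver actually used
End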
See also
DISCONNECT.
Connection Pooling
Inside the driver, at a fairly low level, the driver has to "connect"
to the database. This is known as a connection. Typically the
programmer does not need to understand how this happens, or how it
works, it just happens.
For various performance and resource reasons multiple driver objects
can share the same connection. Again this is done automatically
by the driver, and in most cases there's no reason to modify this
behavior. In some cases though it can be useful to prevent objects
from sharing the same connection, and so some control over the pool is
offered.
Firstly it's worth understanding when connections will be pooled, and
when new connections will be created;
Connections are only pooled if the objects are on the same thread, and
if their owner strings are identical. Even a small change in the owner
string will trigger a separate connection, so that is one way to
control whether tables will share a connection or not. In addition to
the owner, a new property,
prop:Pool,
also exists. Connections will only be shared if the tables have the
same
prop:Pool number. The default value
for
prop:Pool is 0.
Therefore, if there are two tables, and it is desirable for them not
to share a connection, then the
prop:Pool
for one of them can be set to a non-zero number.
prop:Pool is automatically set back to 0 when
CLOSE
is called.
Pooling also comes into play when transactions are involved. After a
transaction is started, then all writes, to all tables sharing that
connection, are part of the transaction (whether the file was listed
in the
LOGOUT command or not.) Therefore
by setting
prop:Pool to a common
(non-zero) value before the tables are opened, the tables can be
"isolated" from the other tables. In most cases there will be no gain
from this change, but in some cases, where some writes must occur
outside of the transaction (while a transaction is underway) this
property can be useful.
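For example, a sketch (using hypothetical Customers and AuditLog tables)
which keeps AuditLog writes outside a transaction on the Customers
connection;
AuditLog{prop:Pool} = 1    ! non-zero pool number, set before OPEN
Open(AuditLog)
Open(Customers)            ! remains on the default pool (0)
Logout(1,Customers)
Add(Customers)             ! inside the transaction
Add(AuditLog)              ! written outside the transaction
Commit()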
If a table is open, and you wish it to change to a new connection,
then set
prop:Pool, and call
CONNECT.
It should be noted that if the file is open when this happens, it will
be closed. For this reason it is best done before the file is OPENed.
If for some reason it's desirable to get the actual connection handle,
this can be retrieved using
prop:Hdbc.
COPY
COPY(FILE
pFile, STRING pNewFileName, Long pOptions=0)
The
COPY command is not supported by
traditional SQL drivers. Object based drivers allow the
COPY
command to be implemented by the driver.
The
COPY statement duplicates a table in
the database. The table to copy can be open or closed. If the new
table name is the same as the current table then the
COPY statement is ignored.
If the destination table already exists, then
it
is DROPPED first (and all existing information in the table
is lost). If you want to merge (not copy) the table, then see the
MERGE command.
Note that this only copies the table structure,
and data. It does not copy indexes,
relationships, triggers and so on.
The pOptions parameter is passed through to the
CLONE
command, which in turn determines which parts of the table
structure
are copied. For more details on the pOptions parameter see the
documentation for
CLONE.
The option
Driver:DefaultValues is
automatically set when
COPY is called.
All supported properties should be set (or
cleared) before the call to COPY.
Calling CLEAR(file) or CLEARPROPS(file)
will clear all the properties.
The
COPY command supports
prop:Project ,
prop:Filter
,
prop:SQLFilter,
prop:Limit,
prop:Offset. If a subset of the fields are
projected, then the destination table will set the remainder of the
fields to be their default value, which may be
NULL.
The destination table when created will include all the columns
present in the source table, regardless of whether all the columns are
projected or not.
All the properties are cleared at the end of the call to
COPY,
regardless of success or failure.
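Example
A sketch, using hypothetical field and table names;
Customers{prop:Filter} = 'Cus:Active = 1'              ! copy only active customers
Copy(Customers,'CustomersBackup',Driver:Constraints)   ! Driver:DefaultValues is added automatically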
See also
CLONE,
MERGE ,
RENAME.
CREATE
The
CREATE command is seldom used with
traditional SQL drivers because it lacks the flexibility and power
most people require when creating SQL tables in their database. In an
effort to make creating tables simpler, the
CREATE
in object based drivers is more powerful, and safer to use.
| Traditional SQL Driver |
Object Based SQL Driver |
| Does an implicit DROP TABLE before
the CREATE TABLE statement. This
conforms to the behavior of ISAM drivers, however is
problematic if the OPEN command
incorrectly returns a File Not Found (02)
error. This problem has led to many tables in databases being
inadvertently dropped. |
Does not do a DROP TABLE . If the
table already exists then CREATE
fails with a File System Error (90)
.
If the table should be dropped then use the REMOVE
command before calling CREATE. |
| No control over how the field is created in SQL. |
Supports Extended Name Attributes
including SQLTYPE, COLLATE,
BINARY, NOTNULL,
DEFAULT, ISIDENTITY,
UUID and more. |
| Does not support Server Side constraints |
Supports server-side constraints using Extended
Name Attributes. |
| Does not support Relationships |
Relationships can be defined using the ADDRELATION command. If this is done
before CREATE then foreign keys,
and foreign key constraints (and actions) are created. |
| Create in specific order if Server-Side Relations exist.
(although since there's no support for server-side
relationships, you don't use CREATE
at all.) |
If there are server-side relations declared for a table (via
ADDRELATION ) then
all parent tables must exist before a child table can be
created. The driver will automatically cascade the CREATE
to make sure all parent tables exist. For example, when
creating LineItems (which has a
parent Invoice), then Invoice
is also created (if it does not yet exist).
Thus tables can be created in any order - the driver will
cascade the CREATE as necessary. |
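Example
Since CREATE no longer drops an existing table, drop it explicitly first
if that is the intent (a sketch);
If Exists(Customers)
  Remove(Customers)
End
Create(Customers)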
CREATEDB
CREATEDB(FILE
pFile, STRING pNewDatabaseName, <String pOptions>)
CREATEDB(FILE
pFile)
| Parameter |
Type |
Description |
| pFile |
FILE |
A file handle that is connected to the database. Any file
handle will do. |
| pNewDatabaseName |
String
(optional) |
The name of the new database to create. If no name is
supplied then the name from the current owner attribute is
used. If no name is supplied then the pOptions parameter
cannot be used. |
| pOptions |
String (optional) |
SQLite ignores this parameter. In SQLite there are no
options supported, however options can be specified after
creation using SEND and
pragmas.
In other databases this is the (optional) additional part
which follows the CREATE DATABASE name.
This string is added as-is to the SQL command that is being
generated. It will vary greatly from one database engine to
the next. |
Creating a new database from inside a program can be tricky because of
user permissions. In most cases, since creating the database is a
one-time task, it is done manually by the database administrator. The
administrator will then give appropriate users appropriate permissions
to the database. One of those users is typically a generic user that
the program will use.
That said, if one is connected with a user of sufficient power, it is
possible to create a new database within the server using the
CREATEDB
command.
In order to create a database, a valid connection (including a
valid database name) to the database engine is required. The driver
automatically uses the "master" database for this purpose. In order
for this command to work, permissions to access the master database
are necessary.
In MSSQL this database is called
master.
In PostgreSQL this database is called
postgres.
This command may fail if the user used in the connect string does not
have sufficient rights to read this "master database".
The new database name can be set in one of two ways;
The first approach is to use an existing owner string which contains a
suitable user name and password for connecting to the server.
glo:owner =
'DSN=DriverKit;UID=sa;PWD=xxx;Trusted_Connection=No;WSID=yyy;DATABASE=filedriverkit;Encrypt=no;'
If ExistsDB(Customers,'newDatabase')
= false
CreateDB(Customers,'newDatabase')
End
The second approach is to set the new
database name as part of the Owner, and then call
CREATEDB
without a database name. For example;
glo:owner =
'DSN=DriverKit;UID=sa;PWD=xxx;Trusted_Connection=No;WSID=yyy;DATABASE=newDatabase;Encrypt=no;'
If ExistsDB(Customers) = false
CreateDB(Customers)
End
If the create fails (typically because the database already exists)
then an
Error 90 FileSystemErr is
generated. The exact error reason, as returned by the server, is
available using the
FileError() command.
Note : All tables on this connection should
be closed before this command is called. This command will close the
connection, and open a new one to the master database (which will then
be closed as well.) While other tables should recover from a closed
connection, presumably they could not be opened in the first place if
the database does not exist. Connections are thread based, so
connections on other threads should not be affected.
See Also :
EXISTSDB
Custom Properties
Object based drivers support the setting, and retrieval, of custom
properties. This allows you to name and set a property, and
associate it with the table. The property is valid for the life of the
object (ie is limited to the thread if the file is threaded.)
To set a property the syntax is;
file{'someproperty',index} = 'whatever'
The
index parameter is optional, if
omitted it defaults to 0.
To retrieve a property the syntax is
something = file{'someproperty',index}
again, the index parameter is optional and defaults to 0.
The property name is not case sensitive. If the name is a number then
care should be taken not to overlap with property numbers defined in
property.clw or
CWDriver.inc.
(For best future-compatibility do not use a number as the name.)
Custom properties are useful because they are attached to the object,
meaning that they are in scope in any place where the FILE is in
scope. So they reduce the need for parameters (attached to the file)
to be passed.
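Example
A sketch, using an arbitrary property name;
Customers{'LastSyncStamp'} = Today() & ',' & Clock()   ! set a custom property
s = Customers{'LastSyncStamp'}                         ! retrieve it later, wherever the FILE is in scope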
Database Version
A number of properties exist which allow the program to query the
version number of the Database server, and if applicable, the version
number of the database client layer.
| Property |
Description |
| Prop:DBMSVer
Prop:DbVersion
|
A string containing the version number of the database,
including the Major, Minor and Build numbers. |
| Prop:DbVersionMajor |
The Major version number of the database server. |
| Prop:DbVersionMinor |
The Minor version number of the database server. |
| Prop:DbVersionBuild |
The Build number of the database server. |
| PROP:ClientVersion |
A string containing the version number of the connector,
including the Major, Minor and Build numbers. For MSSQL this
is the ODBC version, for Postgres this is the LibPQ version
and so on. |
| Prop:ClientVersionMajor |
The Major version number of the connector. |
| Prop:ClientVersionMinor |
The Minor version number of the connector. |
| Prop:ClientVersionBuild |
The Build number of the connector. |
| Prop:DriverVersion |
Returns the version number for the DriverKit version (in
other words the version of this library.) |
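Example
A sketch; the variables (and the value shown in the comment) are
illustrative only;
s      = Customers{prop:DbVersion}        ! eg '15.4.0'
major  = Customers{prop:DbVersionMajor}
client = Customers{prop:ClientVersion}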
DATE or LONG
Summary: Use a
DATE
field in your dictionary for dates. Works perfectly for dates after 1
Jan 1801.
Clarion supports two data types suitable for storing a date.
The original type used for dates is a
LONG.
This counts the number of days since December 28 1800. The first valid
number is 4 (1 Jan 1801) and the highest supported value is 2994626
(31 Dec 9999).
A newer type,
DATE, was added in Clarion
for Windows 2.0. This type allowed the compiler to identify actual
DATE data, and also mapped better to the
DATE data type available in most SQL engines.
Conceptually the
DATE type supports
dates from 1 Jan 0000 to 31 Dec 9999. However all the Clarion
functions that apply to dates (
DAY,
MONTH,
YEAR,
FORMAT and so on) work on a
LONG,
so the
DATE is automatically converted
to a
LONG when these functions are
called. This limits the useful range of the
DATE
field to the limits of the
LONG (ie from
1 Jan 1801).
Collectively this means that handling a
DATE
value in Clarion is simple, as long as the date is after Jan 1 1801.
If dates are after this value then you can use a
LONG
(or
DATE) field in the Clarion program,
and this matches to either an
INT or
DATE field in the SQL database. In most cases the
SQL type will be
DATE (because this makes
more sense to other programs accessing the database.)
If the matching field is declared in Clarion as a
LONG,
then the
Extended Name Attributes can specify
that the SQL type is
DATE.
SomeDate Long,name('Somedate |
SQLTYPE(Date)')
If, however, the date range in the SQL database contains dates earlier
than 1801, then quite a lot of care has to be taken when working with
these dates. In the code, the compiler will automatically convert the
value to a
LONG whenever the date field
is used. Functions which accept a
DATE
as a parameter will convert the date to a
LONG,
EVEN IF the value being passed in is a
DATE.
To preserve the
DATE while passing as a
parameter it must be passed by reference, ie
*DATE.
While going into detail on the use of this type for dates earlier
than 1801 is outside the scope of this document, it is something
programmers should be aware of.
prop:DatePicture
prop:DatePicture is used when a
DATE,
TIMESTAMP or
TIMESTAMPOFFSET
field in the SQL database is converted to text in the program.
Used in
prop:TextValue,
EXPORT and so on.
Extended Date Pictures are supported. Default
value is
@D010- (using hyphens,
not slashes, as per the
ISO 8601 specification).
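For example, a sketch which sets the default ISO-style picture explicitly
before an export;
Customers{prop:DatePicture} = '@D010-'
Set(Cus:NameKey)
Export(Customers,'customers.csv',driver:header)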
See also
prop:TimePicture
prop:DateTimeSeparator
When dates and times are formatted, or deformatted from string values,
then there is typically a separator between the date and time. In most
cases this is a single space, however it may be necessary to support
either T or ' T '. The property
prop:DateTimeSeparator
allows you to define the separator.
The default value for this property is a single space. This is the
most common separator used by databases.
Example
Customers{prop:DateTimeSeparator} = 'T'
Export(Customers,'customers.csv')
This property is reset to the default value (a single space) when the
CLEARPROPS command is called.
DELETE
Delete(FILE pFile)
Delete(FILE pFile,String pFilter)
The traditional command
DELETE(File)
works as it always has. It deletes the most-recently-read record
(which has been retrieved using
GET,
REGET,
NEXT or
PREVIOUS).
Object based drivers extend the command allowing a filter statement to
be passed to the command. This allows multiple records to be deleted
at the same time.
If supported by the driver,
prop:Affected
is set with the number of rows which were deleted.
To delete all the records in a table see the
EMPTY
command.
Note: OVER and ARRAY fields cannot be used
in the filter.
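Example
A sketch, using a hypothetical field name;
Delete(Customers,'Cus:Active = 0')   ! delete all inactive customers in one statement
rows = Customers{prop:Affected}      ! number of rows deleted, if the driver supports it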
DISCONNECT
Disconnect(FILE pFile)
The
DISCONNECT command is not supported
by traditional SQL drivers. Traditional drivers do support
file{prop:Disconnect} however in
traditional drivers that command applies to
all
FILE structures, and across all threads,
for that driver. The
DISCONNECT command
does not behave like that.
In object based drivers, calling
CLOSE on
a table disconnects the
FILE from the
connection manager. This won't necessarily close the connection; that
will only be done when the last table using the connection is closed.
Also, some commands can be executed when the file is in a closed state
- and these will necessarily make a connection. Therefore, if you use
a command, and the table is not open, you may want to call
CLOSE
or
DISCONNECT to remove the table
from the connection.
It is not absolutely necessary to call either
CLOSE
or
DISCONNECT. If a File Object goes out
of scope
[1], then it will
automatically be CLOSEd and DISCONNECTed for you.
Note 1: The
file object goes out of scope when the FILE structure goes out of
scope. The FILE structure goes out of scope as follows;
a) If the FILE is declared locally in a procedure, then it goes out of
scope when the procedure ends.
b) If the FILE is declared locally in a MODULE then it behaves as if
it is global.
c) If the FILE is declared globally, with the ,THREAD attribute, then
it'll go out of scope when the thread ends.
d) If the FILE is declared globally, without the ,THREAD attribute,
then it'll go out of scope when the program ends.
Prop:Disconnect
Traditional drivers support using
prop:Disconnect
to close the connections to a database. This is a fairly dangerous
command as it applies across all threads, and to all tables using the
same database. It is primarily used when the developer is sure all
tables are closed (on all threads) and is used to preempt program
hangs caused by server-side disconnections
[1].
In object based drivers, connections are handled differently.
[2]
In this situation connections are threaded (ie each thread makes use
of its own connection) and the
DISCONNECT
command allows a specific table to disconnect from the database,
however the actual database connection is not disconnected until the
last table object has disconnected.
For this reason, in object based drivers, the call to
prop:Disconnect
has been deprecated. If a call to
prop:Disconnect
is made, then no action will be taken.
Note 1: The
main use of
prop:Disconnect is to
prevent program hangs caused by server-side disconnections. These
occasional hangs are a serious inconvenience to programs using
traditional SQL drivers. Since object based drivers use a different
connection management approach, which do not suffer from these hangs,
this solution is not necessary anymore.
Note 2:
This next paragraph is technical, and of no interest to most
programmers. Feel free to ignore.
There is a new class inside the class called the
DriverConnectionPoolClass.
A threaded instance of this class exists inside the driver, so each
thread has its own connections. Different tables connecting to the
same database, with the same database options, share the same
connection. When the last table disconnects from the database then the
connection is automatically closed.
Prop:Distinct
Before calling the
SET command, the
prop:Distinct property can be set. This will
add the
DISTINCT clause to the SQL
SELECT
statement. This tells the server to only return unique records,
duplicate records are not included in the result set. Object based
ISAM drivers may, or may not, implement this property so check driver
documentation.
customers{prop:Distinct} = true
Set(Cus:NameKey)
Loop ...
This property
is not reset to false
except on an
OPEN ,
CLOSE
,
CLEAR and
CLEARPROPS.
If you wish to re-use the
FILE with this
property off, then you must set it to
false
(if you previously set it to
true).
It is not necessary to set
prop:Paging
to false when
prop:Distinct or
prop:NotDistinct are used; paging is
automatically disabled.
Rows in the table are unique. So retrieving all the fields in a row
will mean that
prop:Distinct will have
no effect (since all the rows will be fetched.) Therefore, in the
FILE context,
prop:Distinct
is only useful if you are using
prop:Project
to retrieve only a subset of columns, and those columns do not include
all the fields in any unique index.
Since there is no row-identifying field in the result set, the results
of this request are automatically set to be read only (so no
PUT
calls are allowed.)
Some databases (notably MS SQL Server, Postgres, MySQL and MariaDB)
require the
ORDER BY clause to ONLY include
fields which are included in the
prop:Project
list. Other databases (notably SQLite, Firebird, and Oracle) allow the
ORDER BY clause to include fields not
included in the
prop:Project list. For
maximum portability set the
prop:Order
to be fields included in the
prop:Project
list.
Expanding the example above;
customers{prop:Distinct} = true
customers{prop:Project} = 'Cus:Lastname,Cus:Age'
customers{prop:Order} = 'Cus:Lastname'
Set(Cus:NameKey)
Loop ...
This will return a list of names and ages, but without any duplicates.
If the
prop:Distinct is set, but the
prop:Project is not set, then the
prop:Distinct
is ignored and the implicit
prop:Readonly
is not applied.
See also
prop:NotDistinct.
Driver Options String
Clarion traditional drivers support two ways to send commands to the
driver. The first is via the
Driver Options
String - which is typically set in the dictionary. While this
can be a variable (as it is with the Owner and Full Path Names) it is
usually coded to a fixed value by developers.
In addition to this is the
SEND
command, which allows commands to be passed on an ad-hoc basis.
One disadvantage of the dictionary driver string is that it is set on
a table-by-table basis. Which means that for some settings the driver
string has to be duplicated on every table (or, at the very least, a
global variable has to be set for each table.)
Object based drivers (in Clarion 10 and later) introduce an additional
way
[1] to set driver
options; by setting a user-defined system property. The system
property is named
Driver:xxx where
xxx is the name of the driver. So, for
example, the SQLite2 driver uses the property
system{'Driver:SQLite2'}.
Example
System{'Driver:SQLite2'} = '/master=Dict'
System properties are
not threaded.
Thus you can set the property once, presumably when the program
starts, and it will then apply for all tables, and on all threads,
using that file driver.
Of course the regular file driver string remains. It is evaluated
after the system property. If the same setting is in the system
property and the file driver string, then the value in the file driver
string will be used. The file driver string can still be set on a
table by table basis, and if a variable is used, then the variable can
be threaded.
The
SEND command can be used to
send additional commands and instructions to the driver (as has always
been the case). This naturally overrides the existing value in the
setting, however care should be taken with regard to when the
SEND
command is used. The system property and file driver string are
evaluated at specific points in the code
[2],
and this effectively "resets" the properties mentioned in the system
property and file driver string back to their original setting. Any
SEND commands on these properties will be
reset. Other
SEND commands will not be
reset.
The exact options available to the developer depend on the driver. See
the driver documentation for specific options and values.
While the driver strings and
SEND command
are useful to set behavior in the driver, it's also desirable to set
properties in the database engine itself. These options are very
database specific, and may even vary between one version of a database
and the next. For the options available to your database see the
database documentation. (For example, in SQLite the documentation is
at
https://www.sqlite.org/pragma.html
.)
The object based drivers support these options at all three levels -
in the system property, the file driver string, and using the
SEND
command.
Example - system property;
System{'Driver:SQLite2'} = '/AUTOINC=TRUE
/pragma_cell_size_check=true /UNSAFE=Stop'
Example - file driver string;
Customer
FILE,DRIVER('SQLite2','/AUTOINC=TRUE
/pragma_cell_size_check=true')
Example,
SEND command;
SEND(Customer,'/AUTOINC=TRUE
/pragma_cell_size_check=true')
Some database commands are not instructions to set a setting, but
rather a request to get the current value of a setting. For example in
SQLite the command
PRAGMA data_version; is a
request to get the current value in
data_version.
Including this in a driver string doesn't make a lot of sense (since
it would happen on every
OPEN) however it
is useful to use with the
SEND command.
The results of the call are returned by the
SEND
command.
d = SEND(Customers,'/pragma_data_version')
Common Driver Options
| Option |
Default Value |
Description |
| /ALLOWDETAILS |
|
|
| /BUSYTIMEOUT=value |
5
(SQLite 53) |
Timeouts for driver. Value is in seconds. |
| /DUMMY |
|
See /DUMMY. |
| s = SEND('/DUMMY') |
|
Returns the current value of the /DUMMY
option for this table. |
| /EXPLAIN=true |
false |
When this is on then ALL SET
/ NEXT loops for the FILE,
and all VIEW structures, where
this is the primary file, will trigger a Query
Plan Explanation. Use this while developing to
determine where additional keys are needed to improve
performance. |
| /IGNORETRUNCATION=true |
false |
Fields are declared in the database, and fields are
declared in the FILE structure. Sometimes these field types
line up nicely, and sometimes they do not. For example
Clarion has a USHORT type, whereas SQL does not. So, in SQL
a USHORT turns into a LONG. Or your FILE may contain a
STRING(100), but in SQLITE (for example) Strings can be any
length.
If the driver reads a field, and it is too large to fit into
the assigned Clarion data type, then a DataTruncatedError
(132) is generated. This is desirable because
retrieving partial content could lead to bugs in the
program, or worse (if the program writes the data back)
unexpected changes in the database values.
In some cases though it is desirable to read the record,
even though some of the data is being truncated. For
example, a report may not care if long strings are
truncated. If this is the case then this option can be set
to true. |
| s = SEND('/IGNORETRUNCATION') |
|
Returns the current value of the /IGNORETRUNCATION
option for this table. |
| /LOGFILENAME |
|
|
| /LOGMESSAGE |
|
|
| /MASTER=n
|
database
|
n is one of ;
database : the database is the
source of truth. Field and key changes in the dictionary
will not be propagated to the database. (The program will
Error 47 if a field is in the FILE
structure, but not in the database)
dict : The dictionary is the
source of truth. The database will be updated with new
fields and keys as they are added to the dictionary. Old
fields and keys will not be removed.
dictfield : Fields added to the
dictionary will be added to the database table. Keys added
in the dictionary will not be added to the table.
dictkey : Keys added to the
dictionary will be added to the database table. Fields added
to the dictionary will not be added to the table. (The
program will Error 47 if a field is in the FILE
structure, but not in the database).
strictdict : As for dict,
but fields and keys removed from the dictionary will be
removed from the database.
NOTE: This option is not yet
supported directly by the drivers.
|
| s = SEND('/MASTER') |
|
Returns the current value of the /MASTER
option for this table. |
| /PROPSQLMATCHBYNUMBER |
false |
In some cases programmers did not use a DUMMY
table, but rather fed prop:Sql
statements directly into real table structures. Since these
prop:Sql statements usually
lack AS clauses, a match-by-number
facility is needed, but only when prop:Sql
is called.
This switch is designed to be set globally
if your application contains this pattern. Note that this
switch ONLY applies to prop:Sql.
It does not apply to the SQL
command. Therefore when converting prop:Sql
to SQL be sure to
add an AS clause to the SQL to
identify which field in the record should receive the
result. For example
LineItems{prop:Sql} = 'SELECT Count(*)
AS Quantity' ! Where Quantity is an arbitrary
field in the record to receive the result. |
| s = SEND('/PROPSQLMATCHBYNUMBER') |
|
Returns the current value of the /PROPSQLMATCHBYNUMBER
option for this table. |
| /TURBOSQL=n |
false |
n is true or false.
When true no file structure
check is done when the table opens or is used. |
| s = SEND('/TURBOSQL') |
|
Returns the current value of the /TURBOSQL
option for this table. |
| /UNSAFE=n |
false |
n is one of;
false : Unsafe use of prop:Sql
is not allowed.
true : Unsafe use of prop:Sql
is allowed
stop : Unsafe use of prop:Sql
will trigger a STOP, and then
the prop:Sql command will
proceed. |
| s = SEND('/UNSAFE') |
|
Returns the current value of the /UNSAFE
option for this table. |
| /ZERODATE=n |
null (if the field is
nullable)
1 (if the field is NOT NULL)
|
Most databases[3]
cannot store 0 in a date field. This option allows the
developer to control what will happen when a DATE
field in the program contains a blank value (ie 0) and is
being written to a database that does not accept 0 date
values. The options are
/ZERODATE=null : The value is
written as a NULL. If the column is NOT NULL then ErrorCode
will be set to 90. FileError() will contain
additional information.
/ZERODATE=0 : If a date is 0,
the write will fail. The Error code will be set to 90 File System Error. FileError()
will contain additional information.
/ZERODATE=1 : The date will be
set to 01/01/0001 .
/ZERODATE=20250531: The date
will be set as 31 May 2025. |
| s = SEND('/ZERODATE') |
|
Returns the current value of the /ZERODATE
option for this table. |
Note 1:
This is only available in Clarion 10.11975 or later. User system
properties were introduced at this time.
Note 2: The
system property and driver string are parsed and processed at two
points in the life-cycle of the
FILE
structure;
- When the object is initialized. This happens when the file is
"touched" in any way (by any command or property) for the first
time (on the thread, if the FILE is
threaded.) In other words, this can be thought of as happening
once per thread, when the FILE is
first used on the thread.
- When the file is opened. Indeed every time it is opened. This
has the potential to reset options set with SEND
which occurred before the OPEN.
Note 3:
SQLite allows a DATE field to be stored as 0. It is treated as
1970/01/01 when used in SQLite internal date functions.
MySQL allows zero date.
Dummy Tables
It was not uncommon to create a so-called "Dummy Table" in an
application, and then use ad-hoc
prop:Sql
statements to populate a result set, using this dummy table as a
container for the result set. For example, a file structure might look
like this;
Dummy       FILE,DRIVER('SQLite'),OWNER(glo:Owner),NAME(glo:filename),PRE(Dum),CREATE,BINDABLE,THREAD
IDKey         KEY(Dum:ID),NOCASE,PRIMARY
Record        RECORD,PRE()
F1              STRING(255)
F2              STRING(255)
F3              STRING(255)
F4              STRING(255)
F5              STRING(255)
ID              STRING(255)
              END
            END
The idea is that by having some quantity of generic fields, code like
this could be used;
Open(Dummy,42h)
Dummy{prop:sql} = 'select * from Countries Order By CountryCode'
Loop
Next(Dummy)
If Errorcode() then break.
! dum:F1 is the first column in the result set, dum:F2 is the second and so on.
End
This code works primarily because the traditional drivers match the
result set to the file structure based on the order of the fields. (By
contrast, the object based drivers match on name.)
With the new drivers this pattern is largely unnecessary. The addition
of support for various properties, the addition of new commands, like
PROJECTFIELD and
JOINTABLE make this pattern obsolete, at least in
most cases. Nevertheless for existing code to continue working, the
new drivers support the concept of a dummy table used in this way.
There are however some steps which are important.
Firstly, dummy tables should be identified using a driver string,
/DUMMY. As in;
Dummy
FILE,DRIVER('SQLite2','/DUMMY'),...
This tells the driver that the file is not actually a table in the
backend, just a structure used for ad-hoc queries. Specifically it
means that;
- Fields are matched based on field-number and not on field-name.
GROUP and OVER
fields are skipped over when assigning data into fields.
- The table is not actually created in the
database. You can call CREATE
on it (nothing happens) and when calling OPEN
it does not check the back end to see if the table exists. As far
as the back end is concerned it does not exist, it is purely a
client-side entity.
It should be noted that the
FILE
structure still requires a primary key. This key is a fiction; since
there is no matching table there is no key, but under the object based
driver structure rules a primary key is required.
Since the table does not exist on the back end, the
EXISTS command returns
false.
[1] Other commands (like
RENAME,
COPY,
MERGE etc) have no meaning and
calling these on a dummy table will not work.
Note 1: More
accurately it'll return
true if the
table exists,
false if it does not.
Although the new driver will not create the table, it may already be
there if the table was created by a traditional driver at some point.
Note 2: In some cases programmers did not
use a DUMMY table, but rather fed
prop:Sql
statements directly into real table structures. Since these
prop:Sql
statements usually lack
AS clauses, a
match-by-number facility is needed, but only when
prop:Sql
is called. For this see
/PROPSQLMATCHBYNUMBER.
DUPLICATE
The
DUPLICATE command has been
extended to support non-unique keys.
From the Clarion documentation;
The DUPLICATE procedure returns a non-zero value
(true) if writing the current record to the data file would post the
"Creates Duplicate Key" error.
The current behavior remains, but with a small extension;
If the
DUPLICATE(key) form of the command
is called
with a non-unique key (ie a key
with the
DUP attribute) the command will
return
true if a "matching" record
already exists. However writing to the disk in this case will not
cause an Error.
Calling it in the form
Duplicate(FILE) only tests unique keys. The above extension
to the behavior only applies when called with a non-unique-key as the
parameter.
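Example
A sketch, assuming Cus:PhoneKey is a non-unique key (declared with the
DUP attribute) on a hypothetical phone number field;
Cus:Phone = '555-0100'
If Duplicate(Cus:PhoneKey)
  ! a matching record already exists; writing the record would still succeed
End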
EMPTY
Traditional drivers require that the file be open in Exclusive mode to
use the
EMPTY command. Object based
drivers do not have the same limitation. You can empty the table
regardless of the open mode (assuming that the table is indeed
open, and that the open-mode is not read-only.)
EMPTY removes all the records from the
table. To remove one, or more than one record, see the
DELETE command.
If the table contains Server-Side auto-numbered fields (Identity
fields) then the driver will set the sequence numbers back to 0, so
that the next ADD on the table will set the IDENTITY field to 1.
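Example
A sketch; the table only needs to be open in a writable mode, not
exclusively;
Open(Customers)
Empty(Customers)   ! removes all rows and resets any IDENTITY sequence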
Note: If you are using
server-side
relationships then the use of
EMPTY
can be damaging. If called when relationships are active then the
defined
On Delete behavior may be executed
by the database. This includes cascading deletes, clearing of child
fields, and so on. Indeed the call may fail for one, or more, rows if
there is a
Restrict limitation placed on
deleting the row. This may not be intended behavior.
If the call to
EMPTY is part of a wider
process (to drop the data, and then re-populate it) then strong
consideration should be given to suspending relational updates (using
SUSPENDRELATIONS). For
more information see the section on
Relationships.
A similar issue exists when using the
REMOVE
command.
Exclude Empty Keys
A lot of programs get to SQL via an ISAM route. They start off as
Topspeed files, and at some point the system changes to being SQL.
Converting programs like this can be challenging because some features
of Topspeed are not supported by the traditional SQL drivers. One of
these features is the creation of Keys that make use of the option
Exclude Empty Keys.
Unlike
Arrays and
Memos
(two other common issues when converting from Topspeed) understanding
the functionality provided by the key, in your application, is
critical in duplicating that functionality in SQL. Different programs
use the key to provide different functionality.
Compact sort ordering
The original use of the key was to create compact sort orders on the
disk. By removing the blank values from the key, the key itself would
take up less space on the disk. In the modern era, disk space became
plentiful, and cheap, so this use case is not used much anymore (other
than as an historical artifact in the program.) This use-case is less
applicable to SQL.
If this was the reason for the option being set in your dictionary,
then it can simply be removed from the dictionary. The key can then be
created as normal, and can include all values. The program can
simply be used as-is, however it may be necessary to add, or alter,
the
prop:Filter when using the key to
exclude rows with zero or blank values.
Not-Blank-Unique
A second common use case pairs the
Exclude Empty
Keys option with the
Require Unique
Value option. In this case the Topspeed driver enforces
uniqueness on the field IF a value is entered.
[1]
So, for example, a phone number must be unique (if one is entered) but
can be left blank if unknown (and no uniqueness check is done if the
field is blank.)
This matches the SQL concept of a
Partial Index.
Most databases
[2]
support Partial Indexes, although some drivers may use an alternative
strategy to achieve the effect. If the program is using the key in
this way, then it can simply carry on using it like this, as long as
the index is created correctly. (using the
CREATE
statement will create it correctly.)
Note 1:
It's worth explicitly pointing out that if the
KEY
is not Case Sensitive, then the uniqueness is not case sensitive.
Meaning that if a product
a/123 exists
then you cannot add
A/123.
Note 2:
Partial Indexes are not supported in MySQL or MariaDB. Firebird
requires version 4.0 or later. SQLite requires version 3.8.0 or later.
MS SQL Server supports them (since SQL Server 2008), but they are
called Filtered Indexes. Where not supported alternative techniques
can be employed by the driver to achieve the same effect.
Note 3: Things get somewhat more complex
(and inconsistent) when the key has multiple columns.
SQLite and PostgreSQL behave the same as Topspeed; If ANY field in the
key is not blank or null then the row is added to the key.
MS SQL will include a row into the key only if ALL fields are not
blank or null.
EXISTS
Exists(FILE pFile),LONG
Exists(FILE pFile,STRING pTableName),LONG
Exists(FILE pFile,Long pWhat, STRING
pTableName),LONG
The Clarion
EXISTS command allows the
program to query the disk system to see if a file exists on the disk.
The object based drivers extend this command
to check
the database to see if the table, key etc exists in the database or
not.
| Parameter |
Type |
Description |
| pFile |
FILE |
A file handle that is connected to the database. If the
other parameters are omitted then the search is for a table,
with the name as set in the Full Path Name of this table. |
| pTableName |
String |
The name of the table to search for. |
| pWhat |
Long |
One of the following;
driver:Table
driver:StoredProcedure
driver:Trigger
driver:Index |
The function returns
true or
false.
If the database request fails then
Errorcode()
is set to one of
BadNameErr (45),
NoDriverSupport (80), or
BadOwnerErr
(122) and the reply value is
false.
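Example
A sketch; the second table name is hypothetical;
If Exists(Customers)                      ! the table named in the Full Path Name
  ! the Customers table exists in the database
End
If Exists(Customers,'CustomersArchive')   ! another table, checked by name
  ! a table called CustomersArchive exists
End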
EXISTSDB
ExistsDB(FILE pFile,STRING pDatabaseName),LONG
ExistsDB(FILE pFile),LONG
The
EXISTSDB command allows you to query
the database to see if a specific database exists.
The SQLite2 (and by extension the Memory2) drivers do not support this
command since SQLite has no concept of a Database Name.
In order to check if a database exists, a valid connection
(including a valid database name) to the database engine is required.
The driver automatically uses the "master" database for this purpose.
In order for this command to work, permissions to access the master
database are necessary.
In MSSQL this database is called
master.
In PostgreSQL this database is called
postgres.
This command may fail if the user used in the connect string does not
have sufficient rights to read this "master database".
A database name can be passed into this function. It will return
true if the database exists, and
false
if it does not.
glo:owner =
'DSN=DriverKit;UID=sa;PWD=xxx;Trusted_Connection=No;WSID=yyy;DATABASE=whatever;Encrypt=no;'
r = ExistsDB(Customers,'someDatabase')
If the
EXISTSDB command is called with no
second parameter, (or the second parameter is a blank string) then the
database name from the connection string will be checked.
glo:owner =
'DSN=DriverKit;UID=sa;PWD=xxx;Trusted_Connection=No;WSID=yyy;DATABASE=whatever;Encrypt=no;'
r = ExistsDB(Customers)
In the above example,
EXISTSDB will check
for the
whatever database.
See Also
CREATEDB.
EXCEPT
Except(FILE pFile)
Except(FILE pFile,STRING pFilter)
The
EXCEPT command is used when creating
Subsets. It allows results to be removed from
the current result set.
It typically (although does not have to) follows a call to
SUBSET.
Example
Create a list of customer email addresses, but exclude addresses
listed in the Blocked table.
Set(Cus:NameKey)
Subset(Customer,Cus:Email)
Subset(Blocked,Blo:Email)
Except(Blocked)
Example
Create a list of Invoices for this month, for which a Credit Note has
not been issued
Set(Inv:NumberKey)
Subset(CreditNote,'Cre:InvoiceId')
Union(Invoices,'Month(inv:date) = ' & Month(Today()) & ' AND Year(inv:date) = ' & Year(Today()))
Except(Creditnote)
Other related commands are
UNION,
UNIONALL and
INTERSECT.
EXPORT
Export(FILE pFile,String pDosFileName,Long pOptions=0,Long pFormat=driver:csv)
The
EXPORT command is used to
create a text file from the database records. To import data from a
text file see the
IMPORT
command.
The original goal of the EXPORT
and IMPORT commands was to implement
server-side import and export, as provided for by the database. As
such the file was created, or consumed, on the server, not the
client.
From version 1.11 the EXPORT and IMPORT commands have changed from being
server-side to client-side. While future server-side commands may be
added, the server-side features are not sufficiently powerful to be
useful. Making the functions client-side allows for more
consistency, more control, and allows data sets to be easily moved
between databases.
| Parameter |
Type |
Description |
| pFile |
FILE |
The label of the FILE to
export. |
| pDosFileName |
STRING |
The full path and name of the file to save the export to. If
this parameter is set to driver:clipboard
(no quotes) then the result is set to the clipboard, not a
file. |
| pOptions |
LONG |
driver:Append - The data is
appended to an existing file.
driver:Header - A header line is
added as the first line in the CSV file. If driver:append
is set, and the file is not blank, then the header is not
added. In other words, the header will only be added as the
first line in the file.
driver:Bin2Base64 - BLOB fields
will be encoded as Base 64, and prefixed with base64:
. If this option is not set then BLOB fields are HEX encoded,
with a prefix set to hex:. |
| pFormat |
LONG |
Defaults to Driver:CSV. This
parameter determines the format of the output. Other supported
options are driver:TSV and driver:SV. |
The formats and options supported may depend on the specific database
driver. The individual driver documentation should be checked for any
deviations from this general information.
| Parameter |
Supported Values |
Description |
| pFormat |
driver:CSV
driver:TSV
driver:SV
|
The format of the file being loaded is a CSV (comma
separated value) file, TSV (tab separated value) or SV (custom
Separated Value).
|
| pOptions |
driver:Header |
The first row in the text file is a header row, and should
not be imported. |
|
driver:Upsert |
The Import will be done using the upsert methodology. This
means that if the row already exists in the database it is
updated to the values in the CSV file. If this is not used,
then the add methodology command will be used - which in turn
means that rows that already exist in the database will be
left alone, and the imported row will be ignored. |
The
EXPORT is primed using a
SET
command. After the
SET, instead of doing
a
NEXT,
EXPORT
is called instead. All the properties relevant to the
SET
command (including filtering and ordering) are relevant here.
For the
driver:CSV
format the following additional properties are also in play;
| Property |
Default Value |
Description |
| prop:Separator |
',' |
The separator to use between fields in the row. The property
can be up to 10 characters long. Null characters are not
permitted. |
| prop:LineEndings |
'<13,10>' |
The separator to use between rows. Default value is
Carriage-Return Line-Feed. The property can be up to 10
characters long. Null characters are not permitted. |
| prop:AlwaysQuote |
false |
If set to true, then string fields are always quoted, even
if they do not contain reserved characters.
If false then only strings that contain reserved characters
(the separator, line endings, and double quote) are quoted. |
| prop:NullString |
'/N' |
Null values are written to the text file using this
placeholder. The property can be up to 10 characters long.
Null characters are not permitted. Null strings are NOT
affected by the prop:AlwaysQuote
property. |
| prop:DatePicture |
@D010/ or
@D010- (SQLite2) |
Date fields are written to the export file using this
picture. |
| prop:TimePicture |
@T093 |
Time fields are written to the export file using this
picture. Extended Pictures are supported. |
String fields (including non-binary Memos and Blobs) are clipped
before being written to the export file.
Binary fields in the database (ie BLOB fields, Binary Memos and Binary
Strings) which are included in the export are Base 64 encoded before
being saved to the file. The value in the file starts with
base64:
- Programs reading the file should Base 64 decode these fields in
order to retrieve the original value.
For example a CSV line might look like this;
1,abc,2,0,base64:ICAgICAgICAgIA==,101.01
In this line the 5th column value is base 64 encoded.
Example
Export the Customers table in Name order
Set(Cus:NameKey)
Export(Customers,'Customers.Csv',driver:header)
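Example
A sketch exporting with a custom separator (assuming prop:Separator
applies when the driver:SV format is used);
Customers{prop:Separator}   = ';'
Customers{prop:AlwaysQuote} = true
Set(Cus:NameKey)
Export(Customers,'Customers.txt',driver:header,driver:SV)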
Notes:
Decimal fields include all decimal places, even if zero. So 11.00, not 11.
Memo fields are clipped.
See also
IMPORT .
prop:Explain (SQL)
The
SET command allows you to create a
result set from the data in the database, and then loop through that
result set using the
NEXT and
PREVIOUS
commands.
Typically a
KEY (or
INDEX)
is used to define the sort order in the
SET.
However with the addition of the
prop:filter
and
prop:order
properties to the
SET command, it is
possible for the
SET command to become
quite complex. Ultimately the server will decide which indexes to use,
or whether a full table scan is required. It can be useful to know
which (if any) index it ended up using, and if a full table scan was
needed, perhaps another index on the table could improve performance.
Explain mode is triggered by setting
prop:Explain
Customers{prop:Explain}
or
Customers{prop:Explain} = true
This property is ONLY applied to the next
SET
command, and is then automatically cleared. The
SET
command will still function, so the use of this property should not
affect the behavior of the actual code.
Obviously this property does some work, and so will slow down the
query a little bit. Thus it should be used for development only, and
not included in general code.
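Example
A sketch, for development use only; the filter and key names are
hypothetical;
Customers{prop:Explain} = true
Customers{prop:Filter}  = 'Cus:Age > 65'
Set(Cus:NameKey)
Next(Customers)   ! the query plan for this SET is reported by the driver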
The format and destination of the output depends on the specific file
driver. Consult the driver documentation for information on the
expected results, where the results go, and the formatting of results.
See also:
Performance Tips
Extended Name Attributes
The External Name attribute of a field, as declared in the code (ie,
as set in the dictionary) is "visible" to the driver. For this reason
extra desired functionality for a specific field can be passed to the
driver in this way. The External name becomes a Pipe (
| ) separated list, which contains both the External Name
and also
Extended Attributes.
Clarion traditional drivers support some extended name attributes
already. These can be applied at the field level to specific fields in
the table declaration. This allows the developer more control over the
way the driver treats individual fields. It's especially useful for
SQL tables, although is used by some of the other drivers as well,
where it is appropriate to do so. For a more in-depth discussion of
Extended Name Attributes in general see
ClarionHub.
Note that for traditional drivers, the Attribute name is Case
Sensitive, and the spacing in the name attribute matters. In object
based drivers the attribute is not case sensitive, and extra
white-space can be included, or excluded, as you desire. Object based
drivers support all the traditional attributes, as well as some
additional attributes.
For more information on the traditional attributes, the Clarion
Documentation is recommended.
| Attribute |
Support |
Applies |
Description |
| BINARY |
Traditional & Object Based |
FIELD, BLOB |
Specifies that the field will hold binary data. Can be used
on STRING and PSTRING
fields (Specifically not CSTRING).
The driver will take this into account when calling CREATE
to create the table. |
| BOOLEAN |
Object Based |
FIELD |
Note that if the field is a string type then the value MUST
still be one of '0' or '1'.
'True' and 'False'
are not considered to be boolean values. See also Validation.
Fields that are of SQLTYPE=BOOLEAN
or SQLTYPE=BIT do not need this
attribute added as well. |
| CASE |
Object Based |
FIELD |
Field should be created with a case sensitive collation. If
the database has a Nocase collation as the default for
strings, then an appropriate Case Sensitive collation will be
used for this string. If the database is case sensitive
already, then this option is ignored. Only applicable when the
table is created. See also NOCASE
below. Ignored if COLLATE or BINARY is set. |
| CHECK |
Object Based |
FIELD |
See VALIDATE below. See also Validation. |
| CHECKFORNULL |
Traditional |
FIELD, BLOB |
In SQL, records with the NULL
value in the WHERE clause are not
included in the result set.
This switch alters WHERE clauses, so
that records with NULL fields are
included in the result set. |
| COLLATE |
Object Based |
FIELD, BLOB |
IN SQL when creating a table the COLLATE
attribute determines how the field is sorted and compared to
other fields. Are trailing spaces clipped? Is the field Case
Sensitive? and so on. |
| DEFAULT=xxx
DEFAULT(xxx)
|
Object Based |
FIELD, BLOB |
When creating the table, specify that this field should be
set to this default value. Do not use quotes. Usually used in
conjunction with NOTNULL. Can be
set to a blank string using just DEFAULT=
or DEFAULT() with no value.
If the default value is a server-side function (like NEWSEQUENTIALID()
or gen_random_uuid() or whatever)
then it must be preceded by a !.
For example;
| DEFAULT=!gen_random_uuid() |
| ISIDENTITY |
Object Based |
FIELD
|
Specifies that the field is a server-side auto-incrementing
field. There may be some restrictions on this based on the
specific driver, so check the driver documentation. The field
will not be included in ADD, APPEND, PUT,
UPSERT statements. |
| NOCASE |
Object Based |
FIELD |
Field should be created with a case insensitive collation.
If the database has a case sensitive collation as the default
for strings, then an appropriate Case Insensitive collation
will be used for this string. If the database is case
insensitive already, then this option is ignored. Only
applicable when the table is created. See also CASE
above. Ignored if COLLATE or BINARY is set. |
| NOTNULL
|
Object Based |
FIELD, BLOB |
When creating the table, specify that this field should be
set to NOT NULL as a constraint. |
| NOWHERE |
Traditional |
FIELD, BLOB |
Some back ends do not allow certain data types in a WHERE clause, but they fail to advise the
ODBC Accelerator Driver of this restriction. The NOWHERE
switch lets you manually advise of the restriction when WATCH causes the driver to generate
an error. |
| READONLY |
Traditional & Object Based |
FIELD, BLOB |
The field will not be included in ADD,
APPEND, PUT,
UPSERT statements. When ADD,
PUT or UPSERT
are called then readonly fields in the buffer are
automatically updated to reflect the database value. |
| REQ |
Object Based |
FIELD |
Equivalent to NOTNULL and <> 0 or <>
'' |
| SELECTNAME=xxx
UPDATENAME=yyy
INSERTNAME=zzz |
Traditional |
FIELD, BLOB |
When Selecting, Updating, or Inserting, the specified name
will be used as the field name instead of the External Name,
or Label. Do not use quotes. |
| SQLTYPE=xxx
SQLType(xxx) |
Object Based |
FIELD, BLOB |
When creating the table, specify that this field should be
set to this type, regardless of the type Clarion would
traditionally use. Do not use quotes.
(Each driver also has a custom form of this equate, see
documentation for each driver.)
The following generic types can be used, and they will be
translated into the appropriate type for the database engine
by the driver;
UUID
STRING
For example in MSSQL a UUID
becomes a UNIQUEIDENTIFIER and STRING becomes a VARCHAR. |
| UUID4, UUID7,
UUID8 |
Object Based |
FIELD
|
These fields should contain a UUID
value when being added to the database. If the field is blank
on an ADD or APPEND
then a UUID value will be generated into the field for you.
UUIDs are an alternative to Auto Numbering. |
| > n, >= n
< n, <= n
<> n |
Object Based |
FIELD |
Used for server-side validation
of the field. Validation will be added when creating the
table.
Ignored if the field is BOOLEAN.
See also Validation. |
| VALIDATE(xxx) |
Object Based |
FIELD |
xxx is a SQL CHECK
expression which is included in the table declaration when CREATE is called. See also Validation. |
| WATCH |
Traditional |
FIELD, BLOB |
If set at the field level, then only
the fields with WATCH will be
checked when writing a record with a PUT. |
External Key Names
Most databases store key names external to the tables in the database.
In other words all the key names are stored internally in a table,
database wide, and thus the key names need to be unique.
Traditionally this means that Clarion developers have got into the
habit of putting unique external names against keys in the dictionary.
Typically (but not necessarily) by merging the prefix with the key
name. This, however, would be the wrong thing to do. It is wrong for
two reasons;
Firstly, it's unnecessary. The drivers automatically create a unique
name if one is not supplied. For traditional drivers it consists of
prefix_keylabel. So adding this as the
external name is redundant. Object based drivers use
tablename_keyname.
[1]
Secondly, and more importantly, it breaks
ALIAS
functionality when the database contains two tables of the same
structure.
An ALIAS is a dictionary approach to
duplicating a table in the Clarion dictionary. It is primarily used
for two use cases: when multiple record buffers into the same table
are required, or when multiple tables with the same structure (or
the same table in multiple databases) need to be accessed at the same
time.
If the external key name is "fixed" then the
ALIAS
cannot
CREATE a clone of the
table because the key names will then be non-unique. Leaving the
External Key Name blank will avoid this
problem because then the prefix of the
ALIAS
will come into play (which will then result in a unique key name.)
The
External Key Name is not in fact
used in program code anywhere. Code makes use of the key label, so the
name is irrelevant. Thus leaving the driver to generate unique names
is the best approach.
Setting the
External Key Name
explicitly in the dictionary has implications for the
CREATE command (as mentioned above) as well as
the
COPY and
MERGE
commands, which may create "clones" of the table.
Note 1:
Object based drivers use the longer name because this allows the same
FILE in the program to create multiple
tables in the database which have the same structure, but different
table names. Since the same
FILE is used
the prefix would be common, thus if the prefix was used in the key
name, the key name would not be unique.
FROMQUEUE / TOQUEUE
FromQueue(FILE pFile, QUEUE pQueue)
ToQueue(FILE pFile, QUEUE pQueue)
There are situations where cloning a file structure as a local queue
can be advantageous. These new commands exist as a convenience to make
it somewhat easier to move data between queues and tables. While it is
possible to write short loops to move data between a queue and a
table, the loop can be as simple, or as complicated as you need it to
be. These built-in commands are very simple, limited in functionality,
and do minimal error checking. If nothing else though their code
serves as a foundation for more complex code which you may need in
more complicated situations.
First, the Queue must be created in a very specific way;
SomeQueueName
Queue(pre:Record),Prefix(SomePrefix)
! additional fields go here if desired
End
The prefix is required since the field names are the same as the
table. Declaring it using the
pre:Record
means that the Queue is declared with the same data structure as the
File record. If the file record changes, then the queue will as well.
Note that since
BLOB and
MEMO
fields are outside the Record structure, these commands cannot be used
with
BLOB and
MEMO
fields.
Using
TOQUEUE primes the contents of the
queue from the table using a
SET / NEXT
loop. All the properties which can be set for use by the
SET command are therefore in scope, and can be
set before the call to
TOQUEUE. These
include
prop:Project
and
prop:Filter. Since
(short of running out of RAM) Queues just work, this command is not
expected to return any errors. However if the file is closed, or the
SET command generates an error, then that
error will be set, and the command will fail.
Calling
FROMQUEUE writes the data in the
queue to the table. It uses the
UPSERT
command, thus adding new records, or updating existing records. Again,
prop:Project is in scope and can be used
to do partial-record writes. BLOB and MEMO fields are left unchanged
if the record is being updated.
It is not necessary to call
TOQUEUE
before
FROMQUEUE. When doing bulk adds to
a table doing that by filling the queue, then calling
FROMQUEUE
is a fast way to populate the table. If not already inside a
transaction then a transaction around the bulk insert (or update) will
be done automatically.
The queue can be declared as above, with additional fields added to
the queue. These additional fields will not be used by the FROMQUEUE or
TOQUEUE commands.
The moving of data between the file record, and the queue buffer, is a
simple memory copy. No attempt is made to assign the data on a
field-level basis. Declaring the queue without using the pre:record,
but using just fields, will need to be done very carefully to avoid
field mismatches.
Example
CustomerQueue
Queue(Cus:Record),Prefix(CusQ)
End
Code
Open(Customers,42h)
! Load the queue with any number of records
FromQueue(Customers,CustomerQueue)  ! the queue records are merged into the table with an UPSERT
Another Example
EmployeeQueue
Queue(Emp:Record),Prefix(EmpQ)
End
Code
Open(Employees,42h)
Set(Emp:NameKey)
Employees{prop:filter} = 'Emp:Age > 50'
ToQueue(Employees,EmployeeQueue)
! do what you like on the queue here
FromQueue(Employees,EmployeeQueue)
FLUSH (SQL)
For SQL drivers, when doing a SET/NEXT style loop, a
SELECT
statement is sent to the server and the results are fetched with the
NEXT and
PREVIOUS
commands.
When breaking out of the loop, in most cases, that result set is no
longer needed. In traditional drivers this information is only passed
through to the driver when the
FILE (or
VIEW) is closed. Object based drivers
support the
FLUSH command. This tells the
driver (and, if necessary, the database) that the result set is no
longer needed, and can be flushed from memory.
Along with the use of the
prop:Limit
property, this is the most useful thing you can do after a SET/NEXT
loop in order to improve performance, not just of the server, but of
the program as well.
FLUSH cannot be called if a transaction is
active. Doing so will set the error code to
56
Logout Already Active.
But wait, there's more.
If
STREAM has been called to
mark the start of a block of writes, then
FLUSH
closes the stream and allows the database to continue with normal
operations. This means that the performance gains from using
STREAM
and
FLUSH with ISAM files is translated
to similar performance gains in SQL.
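For example, a minimal sketch (the Customers table and field names are assumptions); the loop is abandoned early and FLUSH releases the result set:
Set(Cus:NameKey)
Loop
  Next(Customers)
  If Errorcode() then Break.
  If Cus:Name >= 'N' then Break.   ! done with the result set early
  ! process the record here
End
Flush(Customers)                   ! the server-side result set is no longer needed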
GETRANDOM
The object based drivers introduce
GETRANDOM
as a new command.
GetRandom(FILE pFile)
GetRandom(FILE pFile, String pFilter)
The
GETRANDOM function works the same as
a
GET, except that it doesn't get a
specific record from the table, but rather a random record. This is
useful when say allocating an employee (randomly) to a bug, selecting
a customer (randomly) to win a prize, displaying a random "tip of the
day", doing random audits and those sorts of places where "just get a
record by random" is a good solution.
If passed without a filter, then the command will prime the record
buffer with any record from the table. If passed with a filter then
only those records which pass the filter will be available to be
chosen.
Example
Select any random customer; or select an employee, who has the status
"available", to be allocated to this task;
GetRandom(Customer)
GetRandom(Employee,'emp:Available = 1')
If no record is available then the error
33
Record Not Available is set.
IMPORT
Import(File pFile, String pDosFileName, Long pOptions=0,Long pFormat)
The
IMPORT command is used to import text
files into the database. For exporting a table see the
EXPORT command.
| Parameter |
Type |
Description |
| pFile |
FILE |
The label of the FILE to
import into. |
| pDosFileName |
STRING |
The full path and name of the file to import from. |
| pOptions |
LONG |
Defaults to 0. |
| pFormat |
LONG |
Defaults to Driver:CSV. This
parameter determines the format of the input. |
The formats and options supported may depend on the specific database
driver. The individual driver documentation should be checked for any
deviations from this general information.
| Parameter |
Supported Values |
Description |
| pFormat |
driver:CSV
|
The format of the file being loaded is a CSV (comma
separated value) file. |
| pOptions |
driver:Header |
The first row in the text file is a header row, and should
not be imported. |
|
driver:Upsert |
The Import will be done using the upsert methodology. This
means that if the row already exists in the database it is
updated to the values in the CSV file. If this option is not used,
then the add methodology is used - which in turn
means that rows which already exist in the database are
left alone, and the imported row is ignored. |
CSV
The import file can be in CSV format. When parsing this format the
following properties may be used.
| Property |
Default |
Description |
| prop:LineEndings |
CRLF
'<13,10>' |
The separator which separates each line in the CSV file |
prop:Separator
|
comma
','
|
The separator between each field in the line.
|
SQLite Note: The use of
STREAM
and
FLUSH can provide a hugely significant
performance increase. Use them around the call to
IMPORT
where possible.
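For example, a minimal sketch (the table and file name are assumptions), wrapped in STREAM / FLUSH as suggested above:
Stream(Customers)
Import(Customers,'c:\temp\customers.csv',driver:Header + driver:Upsert,driver:CSV)
If Errorcode() then Message('Import failed: ' & Error()).
Flush(Customers)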
Encoding
Incoming fields in the text file may be prefixed to specify the
text encoding used.
| Prefix |
Encoding |
Description |
| base64: |
base64 |
A common binary-as-text format. More efficient than hex
encoding |
hex:
|
hex
|
Each byte in the binary is encoded as two
hexadecimal characters.
|
| json: |
json |
JSON encoding only encodes some of the characters (for
more see json.org). This can be efficient space-wise. |
INTERSECT
Intersect(FILE pFile)
Intersect(FILE pFile,STRING pFilter)
The
INTERSECT command is used when
creating
Subsets. It allows results to be
removed from the current result set if they do not appear in this set
as well.
It typically (although it does not have to) follows a call to
SUBSET.
Example
Find all customers that purchased NetTalk and StringTheory
Set(Cus:NameKey)
Subset(Orders,ord:CustomerId)
Intersect(Orders,'Ord:ProductId = ''NetTalk''')
Intersect(Orders,'Ord:ProductId = ''StringTheory''')
Example
Find all Products that have at least one sale every year for the last
five years.
Set(Products)
Subset(Orders,ord:ProductId)
Loop y = Year(Today()) - 5 to Year(Today()) - 1
Intersect(Orders,'YEAR(ord:date) = ' & y)
End
Other related commands are
UNION,
UNIONALL and
EXCEPT.
prop:InTransaction
Traditional drivers do not support
prop:InTransaction
.
This queries the driver to determine if the driver believes it is
inside a transaction or not.
If a
transaction is started with
LOGOUT or
LOGOUTCONNECTION
then the property returns the connection handle (ie a LONG number).
If no transaction is happening on the connection then it returns
0 (not a blank string).
Remember, transactions in the new drivers take place at the connection
level - all files using that connection are automatically part of the
transaction. Multiple connections are supported, and hence
prop:InTransaction
can be used to determine if two different tables are currently in the
same transaction, or different transactions.
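For example, a minimal sketch (the Customers table is an assumption):
If Customers{prop:InTransaction} = 0    ! no transaction is active on this connection
  LogoutConnection(Customers)           ! so start one
End
! writes happen here
Commit(Customers)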
This is similar to
prop:TransactionStatus,
however
prop:TransactionStatus queries
the database, not the driver. This can return more information on the
status of the transaction (On, Off, Error etc) depending on the
driver. See the driver's documentation for possible return values. If
the database terminates the transaction then
prop:TransactionStatus
returns
driver:TransactionOff or
driver:TransactionError even when
prop:InTransaction
is returning a non-zero value. In that situation a
ROLLBACK should be called to terminate the
transaction.
prop:Json
Traditional drivers do not support
prop:Json
.
Object based drivers return the text version of the field from the
current record buffer, as a json encoded value. (See
prop:Value if you want the raw memory value and
prop:Text if you want the
unadorned text value). This property is especially useful if the
field is an array field. If the field is not an array then a
single-element-JSON-array is returned.
If the field is an array then a JSON array is returned, with each
element in the array corresponding to an array value.
DATE and
TIME
fields (but not
LONG fields masquerading
as date and time) are formatted into a human readable date (@D010- by
default) and time (@t04) and returned as JSON strings. Time
values are formatted as "+1" (meaning that 1 is added to the value
before formatting.) This behavior can be altered by the driver so see
your specific driver docs for non-standard behavior. (For example, the
SQLite driver formats the time with added milliseconds - ie
hh:mm:ss.ccc. )
For Example;
j = customers{prop:Json,3}
Keys, Indexes, Run time Indexes and Sort Orders
Clarion supports flat-file (ASCII, BASIC, DOS) drivers, ISAM (TPS,
DAT, BTrieve) drivers and SQL drivers. Clarion itself defines four
different key types in the dictionary. Understanding how these four
types are used in your program, and how they map to a SQL engine is
important.
| Clarion Term |
ISAM |
SQL Term |
SQL |
| Key |
A sort order that is kept up to date as records are added to
the table. |
Primary Key, Unique Key, Foreign Key, Index |
These are kept up to date as the data is altered. |
| Static Index |
A sort order that is not kept up to date as records are
added, but is constructed on an ad-hoc basis using the BUILD command before it is used. |
SQL has no direct equivalent of this
(although it can be mimicked in some database engines.) |
For purposes of the SQL File Drivers, Static Indexes in the
FILE structure are treated the
same as Keys. In other words, from the SQL point of view, A
Static Index is just another Key.
|
| Run time Index |
A sort order that is undefined at compile time, but can be
defined at run time. This allows for different users to define
different sort orders. |
Run time Indexes have no real place in SQL. |
If you wish for additional indexes (keys) then you can
simply add them at the database level (or via a CREATEKEY command.)
|
| Order |
A sort order that is defined in the dictionary,
but which is not included in the FILE
structure at all, and so does not appear in the data file. It
is used only at the template level as a simple way of
collecting relevant information for the templates. |
While keys are always discussed in terms of the
Order
of records, they are far more important when it comes to
Filtering
records. For optimal read performance it is important to have a key in
your table which matches the filter you are applying to the table. The
order of the result set, after the filter is applied, is of secondary
importance.
It is not necessary for the list of keys and indexes in your
dictionary and in the SQL database to match. It is possible to define
keys in your dictionary that are not actually in the database, and
it's possible to have indexes in your database which improve
performance, even if those indexes are not declared in your
FILE
structure.
prop:Label
File{prop:Label,n} can be used to get the
label of a field, memo, blob or key in a table. Traditional drivers
however have no way to get the label of the
FILE
itself. (The name of the table can be returned by the
NAME
command, but there is no way to get the label.)
Object based drivers attempt to close this hole a bit by allowing
File{prop:Label,0} to return something other
than a blank string.
If not explicitly set (see below) it returns an upper case label for
the table. This is the Full Path Name of the table, as it was set when
the file structure was first touched. If the table has no path name
then it will be the first 8 characters of the Label.
The best way to get a valid value here is to set this property in
code, and then it can be retrieved later on. It can be set with;
table{prop:Label} = 'Whatever'
Note: Setting prop:Label
affects ALL instances of this structure, on all threads, for the
life of the program
Labels are always converted to uppercase by the driver. So after the
above code;
s = table{prop:Label,0} or
s
= table{prop:Label} will return
'WHATEVER'.
prop:Limit, prop:Offset & prop:Paging
When querying a database, the server builds a result set, and then
supplies that result set to the program. The result set is typically
constrained by using the
SET command,
along with the
prop:Filter.
Under the hood, the drivers (by default) make use of a paging system
which allows the server to build the results one page at a time. This
reduces load on the server. The size of the page is stored in
prop:FetchSize
and can be set by the
BUFFER command or
by directly setting the property. If not set the default value is used
(at the time of writing this value was 30, but it might change.)
All properties including prop:Limit
and prop:Offset should be set before
the call to SET .
For some queries using pages is inefficient. If you know the result
set will be a reasonable size (ie will fit into memory) and you know
you will be using all (or most of) the records in the set, or if the
combination of the filter and order forces a full table scan, then it
can be more efficient to query the database once, and get the whole
result set. In that case you can manually override the paging by
setting
file{prop:Paging} = false
If you would prefer to do your own paging, then you can bypass the
SET/NEXT paging mechanism by setting your own
prop:Limit
and
prop:Offset[1]
values. The driver will treat this as a single request, and apply
your values rather than the built-in paging values. Setting the
prop:Offset property only has effect if you
set the
prop:Limit property as well.
Setting
prop:Limit to
-1
allows the
prop:Offset to be used with
"no limit".
file{prop:Limit} = 1000
file{prop:Offset} = 2000
In other situations you may have a firm idea of the number of records
you want to retrieve. For example you may be selecting the top ten
sales performers for the year, or the 10 biggest invoices and so on.
In that case setting a specific limit allows the server to optimize
the request for just the data you are looking for.
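For example, fetching just the ten largest invoices - a minimal sketch where the Invoices table and a (descending) Inv:TotalKey are assumptions:
Invoices{prop:Limit} = 10     ! the server only needs to return ten rows
Set(Inv:TotalKey)             ! assumed to be a descending key on the invoice total
Loop 10 Times
  Next(Invoices)
  If Errorcode() then Break.
  ! process the invoice here
End
Flush(Invoices)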
Note 1: Paging using prop:Offset has limitations
on tables which are dynamic - ie where records are being added or
deleted quickly. In a dynamic situation you may find better results
using
POSITION and
RESET
to move from one page to the next.
prop:Offset
though is particularly useful when multiple pages are handled at the
same time, and the next page is required before the end of the current
page is reached.
Logging
A key technique to debugging data access, or understanding how (and
when) the driver is being used, is to enable logging. When logging is
turned on the flow of commands and data through the driver becomes
visible, and can lead to a greater understanding of what is happening.
Not surprisingly, over the years, this feature has expanded in the
traditional drivers and it can be difficult to know what to do and
when.
Trace.Exe
Any discussion of driver logging starts with the
Trace.Exe utility, which is located in the
\Clarion\Bin
folder. This program sets logging settings at the driver level (in
other words it applies to all tables using a specific driver.) It
also applies to ALL programs using that driver.
It works by storing settings in an INI file. The default INI file
is %AppData%\SoftVelocity\Clarion\Drivers.ini.
It supports both traditional drivers, and object based drivers.
Since it is run outside your program, logging can be turned on and
off without changing, or recompiling your code. While useful, this
can have security (and performance) implications. For this reason,
in object based drivers, support for the INI file settings, and
hence for
Trace.Exe is
OFF
by default. It can be turned on by setting
system{prop:TraceFile}
= 'default' , or by setting
system{prop:TraceFile}
to a specific other INI name. A good place for this is in the
Global Embed point
Dictionary Construct.
If supported, the settings are read by the program when the driver
is first accessed. So if changes are made to the settings in
Trace.Exe, your program will likely need to
be restarted in order for the new settings to take effect.
It is possible for your program to specify an alternate INI file
location (
system{prop:TraceFile}) or to
set the settings to be read from the Windows Registry instead (
System{prop:TraceKey}). If these are set
then the
Trace.Exe program has no
effect, and the settings have to be set manually. These settings
apply to
ALL drivers.
It is also possible to suppress global logging by setting
System{prop:DriverTracing} = 0
This overrides the settings in
system{prop:TraceFile}
and
System{prop:TraceKey} and turns
off all driver-wide logging. However this does not affect specific
FILE or
VIEW
tracing as discussed later. To take effect this property MUST be set
before any file driver is initialized. In other words it only
applies at the point where the INI, or Registry, settings are read.
A good place for this is in the Global Embed point
Dictionary
Construct.
Driver-Wide Settings
As discussed above, global settings can be set using the
Trace.Exe
program. However that only works if the
Drivers.ini
file is in the specified location, and has that name. It's possible
to set the name, and location of the INI file using
system{prop:TraceFile}.
Alternatively the settings can be stored in the Windows Registry
using
system{prop:TraceKey}. For
example;
system{prop:TraceFile} = 'c:\log\Settings.Ini'
system{prop:TraceKey} =
'HKEY_LOCAL_MACHINE\SOFTWARE\DriverKit'
If both
system{prop:TraceFile} and
system{prop:TraceKey} are set then the INI
file is ignored, and the registry settings are used.
For object-based drivers either
system{prop:TraceFile}
or
system{prop:TraceKey} MUST
be set to something for global driver-wide logging to take place.
Frankly, this approach to logging is
not
recommended, for reasons of security, performance, and the sheer size
of the log file.
The supported setting names are as follows;
| Setting |
Type |
Matching Code Property below |
| BindData |
Boolean/REG_DWORD |
PROP:LogBind |
| Details |
Boolean/REG_DWORD |
prop:Details |
| Profile |
Boolean/REG_DWORD |
prop:LogClarion |
| Trace |
Boolean/REG_DWORD |
prop:LogSQL
|
| TraceFile |
String/REG_SZ |
prop:Profile
|
File Settings
A better, and safer, way to log driver interactions is to log
information for a specific file, for a specific block of code. This
reduces the impact on performance, limits the security risk to the
area you are logging, and results in a smaller (and hence easier to
understand) log file. To simplify things even more, two new commands
have been added;
LOGGINGON
and
LOGGINGOFF. Using
these it's possible to quickly enable, and disable logging for a
file. In addition to these commands, the following properties are
also supported;
| Property |
Behavior |
| prop:LogBind |
In SQL the commands and the data are separate (most of the
time). This is known as parameterization.
If set to true then the
values of fields which are "bound" to the SQL command are
shown in the log. |
| prop:LogClarion |
If set to true then the
Clarion commands which triggered the SQL are included in the
log. |
| prop:LogData
prop:Details |
If set to true then the
contents of the record buffer, at the time of the Driver
command, are included in the log. |
| prop:LogExplain |
If set to true then the SQL
engine will be asked to provide a Query Plan for SET
/ NEXT loops. This plan is then
included in the log. |
| prop:Log |
Allows a custom line to be injected into the log at this
point. Recommend using LOG
command instead. |
| prop:LogSQL |
If set to true then this includes SQL statements in the
log file. |
| Prop:Profile
Prop:Logfile |
If set (to a file name), this toggles logging on. All file
calls and errors returned by the file driver are logged.
Assigning a filename to PROP:Profile turns on logging, while
assigning an empty string ('') turns off logging. Querying
this property returns the name of the current logfile, and
an empty string ('') if profiling is turned off. |
For ad-hoc logging another two commands have been added.
LOG allows you to inject custom
text directly into the log file (or, if no log file is set, it sends
the text to
Debugview++).
LOGBUFFER does the same
thing, but logs the current contents of the buffer. If no log file
is set this is also sent to
Debugview++. This means, that at any point in
the code, the current contents of the buffer can be observed simply
by calling
LOGBUFFER. For example;
LogBuffer(Customers)
Because the log file can be appended by multiple users, on multiple
workstations, and by multiple threads, if the log is being sent to a
file then a large amount of information is included in each
line of the log.
A typical line in the log looks something like this;
[SQLite2][607][HULK][Gerald][4840][1][
4/03/25][ 7:21:37][][sqlput2.sqlite][Ident91][66][DATA]
IDE91:NAME | Brian
More Generally, the information is as follows;
[DriverName][Object ID][Computer Name][Windows
User][Process ID][Thread][Date][Time][Server][Database][Table
name][Open Status][type] data
Type is one of
[CLAR] : Clarion Code statement
[SQL ] : SQL Statement sent to the
database
[BIND] : A binding of a SQL parameter
to a specific value
[DATA] : The contents of a field in the
record buffer.
[EXPN] : The Query Plan as returned by
the server.
LOG
Log(FILE pFile, String pText)
Log(VIEW pView, String pText)
This performs the same function as setting
prop:Log
(but with tidier syntax). It writes to the currently open log file (as
opened by
LOGGINGON, or
prop:profile). If no file is currently open,
then it writes it to
Debugview++.
Example
Log(Customers,'Calculating Tax Advantages here')
See Also:
Logging
LOGBUFFER
LogBuffer(FILE pFile)
LogBuffer(VIEW pView)
If called on a
FILE then all the fields
(including blobs and memos) for the file structure are sent to the
log.
When called on a
VIEW then all the
fields (including blobs and memos) for all the
FILE
structures in the
VIEW, are sent to the
log.
Neither form of this command applies
prop:Project
- meaning that all the fields are included in the output, regardless
of whether those fields are in the
VIEW
or not.
If no log file is active then the contents of the structure are sent
to
Debugview++.
This means that at any point in your code, while you are debugging,
you can inspect the current contents of the
FILE
or
VIEW structure simply by calling
LOGBUFFER.
Example
LogBuffer(Customers)
LogBuffer(CustomersView)
See Also:
Logging
LOGGINGOFF
LoggingOff(FILE pFile)
LoggingOff(VIEW pView)
Ends logging which was started with a call to
LOGGINGON
(or by a call to
prop:Profile).
Example
LoggingOn(Products)
! some code goes here
LoggingOff(Products)
See Also:
Logging
LOGGINGON
LoggingOn(FILE pFile,<String pFileName>,Long
pOptions=0)
LoggingOn(FILE pFile,Long
pOptions=0)
LoggingOn(VIEW pView,<String pFileName>,Long
pOptions=0)
LoggingOn(VIEW pView,Long pOptions=0)
This command turns on Logging for a file. This logs commands into a
text file, or
Debugview++.
| Parameter |
Type |
Description |
| pFile / pView |
File |
The label of the FILE or VIEW to log. All commands related to
this FILE or VIEW
will be added to the log file. Multiple FILE
and VIEW structures can write to
the same log file. |
| pFileName |
String |
The name of the text file to log to. If the file already
exists then the new log will be added to it.
If the parameter is omitted, or is a blank string, then the
output will be sent to Debugview++.
The filename cannot be a numeric value. |
| pOptions |
Long |
One or more of the following equates, added together. If
this parameter is omitted, or 0, then ALL the options, except
driver:LogExplain, default
to true.
driver:LogSQL : SQL commands, as
sent to the database, are included in the log. They are
identified with rows including [SQL ].
driver:LogClarion : Clarion
commands, as written in the code, are sent to the log. They
are identified as rows including [CLAR].
driver:LogData : The contents of
the record buffer are sent to the log. These are identified as
rows including [DATA].
driver:LogBind :
Parameterized values bound to a SQL statement are included in
the log. These are identified with [BIND].
driver:LogExplain : When doing a
SELECT statement, caused by a SET/NEXT
loop, the Query Plan (as explained by the database engine) is
included in the log. These lines are identified with [EXPN].
driver:LogConnections :
Information about the connections being created and destroyed
is included in the log. |
For example;
LoggingOn(Customers)
Get(Customers,cus:PrimaryKey)
LoggingOff(Customers)
Or
LoggingOn(Customers,'Customers.Log',driver:LogSQL
+ driver:LogExplain)
Set(cus:PrimaryKey)
Loop
Next(Customers)
Break
End
LoggingOff(Customers)
See Also:
Logging
LOGOUT
For technical reasons, if an error occurs during a call to
LOGOUT,
the program may GPF. To avoid this, and get an
Errorcode
instead, use the
LOGOUTCONNECTION
command.
prop:InTransaction
can be used to query if a transaction on a connection is currently
active, and how it was started.
LOGOUTCONNECTION
LogoutConnection(FILE pFile)
For more information on
LOGOUTCONNECTION
see
Transactions.
Note that if
LOGOUTCONNECTION is used,
then
ROLLBACK and
COMMIT
MUST take the primary table as a
parameter.
Example
LogoutConnection(Customers)
! some writes happen here
Commit(Customers)
prop:InTransaction
can be used to query if a transaction on a connection is currently
active, and how it was started.
Note 1: A call to
CLAIM,
approach 2, will result in a transaction being started, if one is not
already running.
Memos
In Clarion each data field has a type;
LONG,
STRING and so on. These types are all
fixed-length, the string has a declared length, and so on. Most
databases also support
BLOB (Binary
Large Object) fields. These are fields of indeterminate length, and
which can hold binary values (and which can grow very large.) In
between these two extremes lies a hybrid field, the
MEMO.
The
MEMO originated in the Clarion DAT
format (in Clarion Professional Developer, a DOS product). The DAT
format supports a single
MEMO field in
the record, and does not support
BLOB.
To make it easier to port DOS programs to Windows the TOPSPEED file
driver supported
MEMO (as well as
BLOB.) None of the traditional SQL drivers
support the
MEMO type, although all
support the
BLOB type.
That's a pity though because the
MEMO
can be used directly on a Window, as the source to an
ENTRY
or
TEXT control, or as a column in a
browse. The templates, and the language, mostly treat the
MEMO
as a large string.
When converting TOPSPEED programs to a SQL database it's common
practice to change the
MEMO into a
CSTRING. While this makes the behavior of the
type consistent (and hence the conversion simpler) it can lead to
issues where the chosen database has a maximum record size. Converting
to a
BLOB is also possible, however a
BLOB cannot be used as the source to Window
controls, and so extra code has to be added there. It also cannot be
added to the
VIEW by the templates -
again creating more work.
The object based drivers
do support the
MEMO type. How this is mapped to the
database is dependent on the specific driver. This will make
conversions easier, but more importantly makes it simpler to make use
of the
MEMO benefits in SQL-based
applications.
There is however a catch. The Clarion Documentation states the
following;
Memory is allocated for a MEMO field's buffer
when the file is opened, and is de-allocated when the file is
closed.
In practice that's not true. Both the TOPSPEED and CLARION drivers
support the use of the
MEMO field even
if the table is closed. In other words memory is set aside for the
MEMO when the thread starts, regardless of
whether the table is ever opened or not. This can have a
significant impact on the memory assigned to each thread, especially
if several large
MEMO fields are in the
dictionary.
With regard to blobs; the documentation for
BLOB
is equally clear - a
BLOB is not
available if the file is closed. In practice this is enforced, and if
you access a
BLOB field when the file is
not open, the program will likely GPF.
Given that
MEMO support has been added
to the object based SQL drivers, there is a choice to be made. Either
follow the documentation (which would be ideal for memory reasons) or
follow the behavior of the DOS and CLARION driver (which would be
ideal for compatibility reasons.) As is usually the case,
compatibility wins and so the current behavior is preserved.
On the database side, text
MEMOs are
stored as a variable-length text field. Trailing spaces are clipped
before storing. Binary memos are stored as full length.
MERGE
Merge(FILE pFile, FILE pDestination, Long pOptions=0)
Merge(FILE pFile, STRING pDestination, Long pOptions=0)
MERGE is currently under review, and not all
parts are working yet.
MERGE is a new command. It allows the
contents of one table to be pushed into another table. The destination
table becomes a super-set of the source table. If a source record
exists which matches any unique key in the destination table then the
destination record is updated, not duplicated.
The
FILE structure of the Source File is
the list of fields that will be merged.
If there are fields that exist in the SQL table, but do NOT appear in
the
FILE declaration, then they will not
be merged.
If the second parameter is a
FILE then
the parameter MUST be a
FILE structure
with an object based driver. The Destination table does not need to be
open. If the Destination table has a Full Path Name variable or Owner
variable then those MUST be set correctly before this call. If the
destination table does not exist then it will be created making use of
the
pOptions parameter. For more on the
possible options see the
CLONE
command.
If the second parameter is a
STRING then
it should name a table (in the same database) with a similar, or the
same structure. If the table does not exist it will be created for
you.
The tables are assumed to have the same, or similar, table
declarations. The fields are matched by the column name in the
database. Fields which are in the destination file, but not the source
file are set to their (SQL) default value (which may be a
NULL).
Fields which are in the source file, but not the destination file, are
not copied.
If the second parameter is a
STRING it
should be the fully qualified table name (ie of the form
database.schema.tablename). If the destination table is in the same
database then the name can be just the destination tablename. If this
form of the function is used then the destination table MUST include
all the fields, with the same names, as exist in the source
FILE
declaration.
If the destination table is in the same database, or (if supported by
the database) a table in a different database on the same server, then
the process is fast and no data transfer to the client is required.
However if the destination table is a different driver or different
server, then the process is somewhat slower as data has to flow first
from the source database to the program, and from there to the
destination server. The destination driver MUST support the
UPSERT
method.
This command is a one-way, upsert, merge. No deletes are performed on
records from either table.
All supported properties should be set (or
cleared) before the call to MERGE
.
Calling CLEAR(file) or CLEARPROPS(file)
will clear all the properties.
The
MERGE command supports
prop:Project ,
prop:Filter
,
prop:SQLFilter,
prop:Limit,
prop:Offset. If a subset of the fields are
projected, then the destination table will set the remainder of the
fields to be their default value, or Null.
All the properties are cleared at the end of the call to
MERGE,
regardless of success or failure.
If the
MERGE command is called against a
traditional driver then the
Errorcode()
is set to
NoDriverSupport (80).
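For example, a minimal sketch (the table labels, destination name and filter are assumptions):
Customers{prop:Filter} = 'Cus:Active = 0'   ! only merge inactive customers
Merge(Customers,'CustomersArchive')         ! upsert into a table in the same database
If Errorcode() then Message('Merge failed: ' & Error()).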
See Also
COPY ,
CLONE .
NOMEMO
From the Clarion Documentation;
In Clarion the NOMEMO statement arms "memoless"
record retrieval for the next GET, REGET, NEXT, or PREVIOUS
statement encountered. The following GET, REGET, NEXT, or PREVIOUS
gets the record but does not get any associated MEMO or BLOB
field(s) for the record. Generally, this speeds up access to the
record when the contents of the MEMO or BLOB field(s) are not needed
by the procedure.
None of the traditional SQL drivers support this command, and so this
occasionally significant optimization is not available to SQL users.
Object based drivers though do support
NOMEMO,
even for SQL databases. With one slight modification to the original
specification;
- For Object based drivers the NOMEMO
works for GET , GETRANDOM and REGET
as normal. It retrieves one record without memo or blobs, and the
property is reset for subsequent reads.
- However for NEXT and PREVIOUS
loops, the NOMEMO call applies to the
whole result set as defined by the call to the SET
or RESET command, and is only cleared
by the next SET or RESET
command. In other words all NEXT and
PREVIOUS calls inherit the NOMEMO
option from the most recent SET or RESET command. Therefore if NOMEMO
functionality is desired it must be called before SET
is called in order for it to apply.
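For example, a minimal sketch (the Customers table and key are assumptions):
NoMemo(Customers)            ! applies to the whole result set of the following SET
Set(Cus:NameKey)
Loop
  Next(Customers)            ! records arrive without MEMO or BLOB contents
  If Errorcode() then Break.
End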
NOMEMO also works on the
COPY and
MERGE
commands. It is cleared after these calls regardless of success or
failure. Calling
NOMEMO(file) before
calling
COPY or
MERGE
will result in the BLOB and MEMO fields not being copied, or merged.
In the case of a
MERGE existing BLOB
fields in the destination table are unaltered.
If a read is followed by a write, then the write will inherit the
NoMemo state of the read. In other words, if a record is read with
NOMEMO active, and then it is
PUT,
then the BLOB and MEMO fields are not included in the
PUT.
If called before the
SEARCH
command, then the Blob and Memo fields are included in the search,
however the result set returned by the
NEXT
loop following the
SEARCH will not
include MEMOs or BLOBs. As above, the setting applies to the result set
described by the SET / NEXT loop, and is not cleared automatically.
NOMEMO will be cleared by a call to
CLEAR(File)
or
CLEARPROPS(File).
Prop:NotDistinct
The opposite of
DISTINCT result sets are
sets where
only records that are
duplicated are included.
The value in this property represents the number of times the row is
duplicated. So, to find all rows with 2 or more instances set this
property to 2. For all rows with 3 or more instances set it to 3, and
so on. To turn the property off, set it to 0. Setting it to 1 has no
meaning, and has the same effect as setting it to 2.
This property
is not reset to 0 except
on an OPEN, CLOSE, CLEAR and CLEARPROPS.
If you wish to re-use the
FILE with this
property off, then you must set it to 0 (if you previously set it to a
value).
Rows in the table are unique. So retrieving all the fields in a row
will mean that
prop:NotDistinct will
have no meaning (no rows will be fetched.) Therefore, in the
FILE context,
prop:NotDistinct
is only useful if you are using
prop:Project
to retrieve only a subset of columns, and those columns do not include
all the fields in any unique index.
Some databases (notably MS SQL Server, Postgres, MySQL and MariaDB)
require the
ORDER BY clause to ONLY include
fields which are included in the
prop:Project
list. Other databases (notably SQLite, Firebird, and Oracle) allow the
ORDER BY clause to include fields not
included in the
prop:Project list. For
maximum portability set the
prop:Order
to be fields included in the
prop:Project
list.
It is not necessary to set
prop:Paging
to false - when
prop:Distinct or
prop:NotDistinct are used, then paging is
automatically disabled.
Since there is no row-identifying field in the result set, the results
of this request are automatically set to be read only.
customers{prop:NotDistinct} = 2
customers{prop:Project} = 'Cus:Lastname,Cus:Age'
Set(Cus:NameKey)
Loop ...
The result set will contain one of each record (it doesn't return
duplicated rows) but all the rows in the result set exist more than
once in the table. As with
prop:Distinct,
the result set does not include row identifiers, however from this
result set you can do a select to fetch the rows with those values.
If the
prop:NotDistinct is set, but the
prop:Project is not set, then the
prop:NotDistinct is ignored and the implicit
prop:ReadOnly is not applied.
See also
prop:Distinct.
Nulls
In the object based drivers most of the Nulls functionality is handled
by the base class.
Known differences between the traditional driver handling of nulls,
and the object based driver handling of nulls are discussed below.
| Code |
Traditional Drivers |
Object Based Drivers |
| GetNulls |
Returned a string with length (4 * number of fields).
|
Returns a string with length (1 * number of fields).
|
| GetNulls |
String could not be manipulated directly |
String contains a + character
for non-null fields, and a .
character for null fields. This makes it possible to inspect
the string to see all the null fields, and also to construct a
string suitable for passing to SetNulls. |
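For example, with an object based driver the returned string can be inspected directly - a minimal sketch where NullString is assumed to be a STRING large enough to hold one character per field:
NullString = GetNulls(Customers)   ! for example '++.+' : the third field is null
Loop i# = 1 to Len(Clip(NullString))
  If NullString[i#] = '.'
    ! field number i# is currently NULL
  End
End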
POINTER64
Pointer64(FILE pFile),BIGINT
The
POINTER command returns a
LONG,
which limits the number to around 2.1 billion. This may not be enough
(for example the DOS2 driver supports files greater than 2 Gigabytes).
The
POINTER64 command is a new command
which returns a
BIGINT variable (at the
time of writing this, this is a
REAL.)
If this command is called against a traditional driver then the
regular
POINTER command is called, and a
value less than 2.148 billion is returned.
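For example, a minimal sketch (the Archive table is an assumption; the receiving variable is declared as a REAL because BIGINT currently maps to a REAL):
RecPointer   Real
  Code
  Set(Archive)
  Next(Archive)
  RecPointer = Pointer64(Archive)   ! position of the record just read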
See also
Bytes64 and
Records64.
POSITION
The
POSITION command returns a string,
and that string contains enough information to "mark" where the record
is. It can then be retrieved later, as a single record using
REGET, or a
SET /
NEXT sequence can be restored, using
RESET, to this position for further processing.
The actual string returned by
POSITION
varies by use-case and driver. No information about the contents, or
length of the position string should be assumed. Generally speaking,
when using
Position(file) or
Position(key)
the string should be at least large enough to hold the Primary Key
fields.
There are two forms of the command,
POSITION(file)
and
POSITION(key). Traditional SQL
drivers do not support
POSITION(file)
since most SQL engines do not support any kind of record numbering.
However the object based drivers treat this case as
POSITION(PrimaryKey).
So the
POSITION(file) form is supported
- it's the same as
POSITION(PrimaryKey).
Since object based drivers require a primary key, and since primary
keys are always unique,
POSITION(AnyKey)
is always equivalent to
POSITION(PrimaryKey).
In other words,
POSITION always returns
the position of the primary key. This implies that all forms of
POSITION when called on a
FILE,
return the same value.
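For example, a minimal sketch (the Customers table and key names are assumptions; the string length is arbitrary, but it must be a STRING large enough to hold the primary key fields):
SavedPos   String(512)
  Code
  Next(Customers)
  SavedPos = Position(Customers)   ! same value as Position(Cus:PrimaryKey)
  ! other processing happens here, moving the record buffer
  Reget(Customers,SavedPos)        ! restore that exact record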
If
prop:Project is set,
and the read did not include the primary key fields, then POSITION will
fail, returning an empty string. An error code
121
Invalid Project List will be set.
Note: Because the position string returned
can contain null characters (
chr(0)) the
return value MUST be stored in a
STRING,
not a
CSTRING.
Note: Although
POSITION
returns a common value, regardless of the key used, the key passed
into
RESET still matters. In traditional
drivers this means that the
Position(key)
must match the
Reset(key). In object
based drivers
POSITION returns a common
value, so any key can be used in the
RESET.
Note: This section specifically applies to
using
POSITION with a
FILE.
For use with a
VIEW see
here.
prop:Project
When reading a record from a file (using
GET,
NEXT or
PREVIOUS),
typically all the fields are read from the file into the record
structure. In some cases though only a small number of fields are
necessary. Reducing the number of fields fetched from the server can
improve performance as less network traffic needs to flow over the
wire.
prop:Project is introduced by
object based drivers to offer the programmer this level of control.
prop:Project is a comma separated list
of field names. Either the Clarion Label (including the prefix) or the
database column name can be used.
prop:Project is set in the usual way;
Tablename{prop:project} = 'cus:id, cus:name,
cus:phone'
If the Primary Key fields are NOT included in the projected list then
the read is automatically set into Read-Only mode, and the results
read by the SET/NEXT loop cannot be
PUT
back into the database. Doing a
PUT after
a read-only read results in an
Error Access
Denied (5). The use of
POSITION,
and by implication
REGET, is also
unavailable if the primary key fields are not projected.
If a
GROUP is projected, then all the
fields inside the group are automatically projected.
If an
OVER field is projected, then the
over'd field is projected.
In addition to fields already in the table, it's possible to project
calculated fields into local variables outside of the record structure
using
BindField.
Read
From a read perspective,
prop:Project
is supported by
SET (which then affects
the following
NEXT and
PREVIOUS
calls), and
GET commands. Notably
REGET is not affected by
prop:Project -
REGET
always reads the whole record into the buffer.
Write
From a write perspective, prop:Project
is supported by the ADD, APPEND,
COPY, MERGE,
PUT and UPSERT
commands. For ADD, APPEND,
COPY and MERGE
when doing a partial write the other fields are set to the (SQL
Side) default values, or null.
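For example, a partial read - a minimal sketch where the table, key and field names are assumptions, and Cus:Id is the primary key (so the result set remains writable):
Customers{prop:Project} = 'Cus:Id,Cus:Name,Cus:Phone'
Set(Cus:NameKey)
Loop
  Next(Customers)
  If Errorcode() then Break.
  ! only the projected fields are fetched from the server
End
Flush(Customers)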
prop:Prefix
In traditional drivers prop:Prefix (aka prop:Pre) is a write-only
property, used when constructing tables using the dynamic file driver.
In object based drivers it can be read and returns the PREFIX
attribute of the table.
Primary Key
All tables in your dictionary should have a Primary Key. This is
simply good database design, and is a requirement for most database
engines. A Primary Key should be identified in the FILE structure by
use of the
PRIMARY attribute. This is
typically set in the Dictionary Editor, for one of your keys. While
the use of a Primary Key is not enforced by Clarion itself, it should
be considered a basic requirement especially when using a SQL engine.
In the absence of a formally identified Primary Key, the object based
drivers will treat the first key marked as Unique as the Primary key.
However this approach may end up selecting a key that you do not
expect. Since most engines fix the primary key when the table is
CREATEd in the database this can have long-term consequences.
prop:Name
Unfortunately the Traditional driver behavior for
prop:name
is not consistent. Therefore the behavior of both traditional and
object based drivers should be considered as driver dependent. And to
make things more exciting, the values change once the FILE structure
has been initialized
[3] (on this thread,
assuming the FILE is Threaded.)
The different known behaviors are documented here;
Before Initialization;
| Type |
Driver |
Applies To |
Behavior |
Attributes |
| Traditional |
SQLite (maybe?) |
Field and Blob |
If the NAME is not set, then the LABEL (no prefix) is
returned. |
Not included [1] |
| Traditional |
All Other SQL |
Field, Memo and Blob |
If the NAME is not set then a blank string is returned. |
Not Included [1] |
| Traditional |
All ISAM |
Field and Blob |
If the NAME is not set then a blank string is returned. |
Included |
| Object Based |
All |
Field and Blob |
If the NAME is not set then a blank string is returned. |
Not Included [2] |
After Initialization;
| Type |
Driver |
Applies To |
Behavior |
Attributes |
| Traditional |
CLARION |
Field (not memo) |
If the NAME is not set then a blank string is returned. If
the name is set it is UPPER cased. |
Included (all spaces removed) |
| Traditional |
CLARION |
Memo |
If the NAME is not set then a blank string is returned. |
Included |
| Traditional |
All SQL |
Field and Blob |
If the NAME is not set, then the LABEL (no prefix) is
returned. |
Not Included [1] |
| Traditional |
All ISAM |
Field, Memo and Blob |
If the NAME is not set then a blank string is returned. |
Included |
| Object Based |
All |
Field, Memo and Blob |
If the NAME is not set, then the LABEL (no prefix) is
returned. |
Not Included [2] |
Note 1: To get the attributes for
Fields use the
WHO command.
Note 2: To get the attributes for
fields and blobs, prop:Attributes can be used. For fields, WHO can also be used.
Note 3: The file is initialized on a call to
OPEN
(successful or not.) The file remains initialized after a
CLOSE,
so this is not the same as "The file must be Open". Other commands may
also trigger initialization, but if there are any, they are not known at
this time.
Note 4: Bonus tip - when called as just
File{prop:name},
The CLARION driver returns the full path name once the table has been
opened, even if the table has been closed. All the other ISAM drivers
return the table name after it has been closed, not including the
path.
[todo] discuss setprop:name, and the behavior in the
new apps.
New Properties (and new uses for old
properties)
Properties are a useful way to extend the functionality of the driver,
without having to make use of the Object methods directly. For this
reason a number of new properties are supported, and can be accessed using
the regular property syntax. These properties only apply to object
based drivers; if used against traditional drivers they are ignored.
Some of these properties exist already and are supported by the VIEW
driver when applied to VIEWs. However traditional drivers may not
support these properties when used in a FILE context.
| Property |
Driver |
Applies To |
Commands |
Description |
| prop:AddOnly |
All |
FILE |
ADD, APPEND |
When ADDing records to a database, the primary key position
of the record is stored so that you can follow up the ADD
with a DELETE or PUT
if necessary. In cases where you are adding a lot of records,
and you know you won't be doing a PUT
or DELETE, you can set file{prop:AddOnly}
= true. This bypasses the record tracking, which
saves some time. This property is reset by any read (GET,
NEXT, PREVIOUS,
RESET, as well as CLEAR
and CLEARPROPS). If a PUT or DELETE
is done after an ADD (with this
property on) then a Record Not Available
(33) error is raised. See also prop:ReadOnly
below. |
| prop:Affected |
SQL |
FILE / VIEW |
DELETE, PUT |
When changing, or deleting rows, the number of rows affected
by the write will be stored in prop:affected. This does not
include rows updated internally by the server via cascading
relational writes. See prop:ChildrenAffected. |
| prop:Arrays |
All |
FILE |
|
Returns 1 if the driver supports arrays. Returns a blank
string if not. |
| prop:Attributes |
All |
FILE |
File Structure |
Returns the Extended Name Attributes
for a field or blob. Attributes are included in the Name of a
field in the file structure, after a Pipe ( | ) character. |
| prop:Bytes |
All |
FILE |
|
This behaves the same as the BYTES
command, but is not limited to returning a LONG.
Use this in cases where the returned value may be greater than
a LONG can hold (ie greater than
approximately 2.1 billion). See, for example, the DOS2
drivers' support of files greater than 2 gigabytes in size.
See also BYTES64. |
| prop:ChildrenAffected |
SQL |
FILE / VIEW |
DELETE, PUT |
Some databases return the number of related rows in other
tables also affected by the write. This property may not be
available in all drivers. Review driver documentation. |
| prop:ClearProps |
All |
SET / NEXT |
|
Clears all properties mentioned in this list (unless
otherwise noted.) Is equivalent to calling the CLEARPROPS command. Automatically called
when doing a CLEAR(File). |
| prop:Database |
|
|
|
Returns the DATABASE part of
the current SQL connection string. See also prop:Server
and prop:User. |
| prop:DatabaseName |
ALL |
FILE |
|
Returns the database engine name. In other words something
like SQLite, PostgreSQL, MySQL or whatever. |
| prop:Distinct |
SQL
|
FILE / VIEW |
SET
|
Causes the result set to only contain distinct records. For
more see Prop:Distinct.
|
| prop:DriverVersion |
ALL |
FILE |
|
Returns the driver version/build number. |
| prop:Explain |
SQL |
FILE / VIEW |
SET
|
Gets feedback from the SQL database for a SQL command.
For more see prop:Explain.
Unlike the other properties this one is
reset to false immediately the SET
command is called. |
| prop:FastPut |
SQL |
FILE / VIEW |
PUT |
Default value is false. If set
to true, then a PUT
statement will not reset
read-only fields in the record buffer (from their
corresponding database values in the database.) This does
for a PUT what APPEND does for ADD. |
| prop:Filter |
SQL |
FILE / VIEW |
COPY, MERGE,
SET
|
Causes the result set to be filtered. In the FILE context it is
especially useful to set the end of the SET/NEXT
loop. |
prop:Interface
|
All
|
FILE
|
All READs and WRITEs.
|
Allows custom field handling to be added to a table, without
altering the shipping file driver. This is equivalent to
calling SetCustomType. (Not
cleared by ClearProps). |
| prop:GroupBy |
SQL |
FILE |
SET |
|
| prop:Having |
SQL |
FILE |
SET |
|
| prop:Json,FieldNumber |
ALL |
FILE |
|
Returns the field as a JSON array. If the field is not an
array, then the JSON is still a JSON array, just with a single
element. Where necessary JSON encoding is used to encode JSON
reserved characters. The field number is passed along with the
property to determine which field is returned. Use negative
field numbers to indicate Blobs and Memos.
somestring =
Customers{prop:json,fieldnumber} |
prop:Limit
|
SQL |
FILE / VIEW
|
COPY, MERGE,
SET |
Allows a FILE or VIEW loop to be limited to a specific
number of records. Can be used in a SET /
NEXT loop. Setting this to -1 tells the driver to fetch
all records. This property must be set to something for the
Offset property to apply.
|
| prop:NotDistinct |
|
|
|
|
| prop:OBD |
All |
FILE |
|
Returns 1 if the file is using an Object Based Driver.
Returns a blank string for traditional drivers. |
prop:Object
|
All
|
FILE /
KEY / BLOB / VIEW
|
|
As the name suggests, object based drivers are built using
classes, and each FILE or VIEW structure gets an
associated object assigned to it. It can be useful to access
this object directly. This property returns a pointer to the
DriverFileClass object. See Getting the
Object for the FILE Structure for more.If used against a
KEY or BLOB then the DriverFileClass object of the FILE is
returned. (Not cleared by ClearProps). |
prop:Offset
|
SQL |
FILE / VIEW |
COPY, MERGE,
SET |
Allows the initial records in the result set to be skipped.
This property is only applied if the LIMIT property is set.
The LIMIT property can be set to -1 to return all records.
|
| prop:Order |
SQL |
FILE / VIEW |
SET
|
Causes the result set to be ordered. In the FILE context it is
especially useful for ad-hoc ordering of a SET/NEXT
loop. |
| prop:OrderBy |
SQL |
FILE / VIEW |
SET |
|
| prop:Paging |
SQL |
FILE / VIEW |
SET |
When set to true (which is the default) then Files and Views
fetch the records from the database one page at a time. In
some cases though it's faster to fetch the whole result set in
one request. Set prop:paging to false if this is the case. |
| prop:Pointer |
All
|
FILE
|
|
Functions the same as the POINTER
command, but is not limited to returning a LONG. Use this in
cases where the returned value may be greater than a LONG can
hold (ie greater than approximately 2.1 billion). See, for
example, the DOS2 drivers' support of files greater than 2
gigabytes in size. See also POINTER64.
(Not cleared by ClearProps). |
| prop:Pool |
SQL |
FILE |
OPEN |
Allows more control over the pooling of database
connections. See Transactions. |
| prop:Project |
SQL |
FILE |
COPY, MERGE,
SET |
Allows for partial record reads and writes. |
| prop:ReadOnly |
All |
FILE |
SET |
When reading records (like in a SET/NEXT
loop) the driver needs to keep track of the record just loaded
in case you want to do a PUT or DELETE on the record. This tracking
takes up some time. To improve performance you can set file{prop:ReadOnly} = true in which
case this tracking is bypassed. If an ADD,
APPEND, PUT
or DELETE is done after a read
(with this property on) then an Access
Denied (5) error is raised. See also prop:AddOnly
above.
Prop:ReadOnly is not reset to
false except on an OPEN, CLOSE,
CLEAR or CLEARPROPS.
So if you set it for a file, then you are responsible for unsetting
it if you switch to using the file in read/write mode. |
| prop:Records |
All
|
FILE
|
|
Functions the same as the RECORDS
command, but is not limited to returning a LONG. Use this in
cases where the returned value may be greater than a LONG can
hold (ie greater than approximately 2.1 billion). See also RECORDS64.
|
| prop:RelationsOff |
SQL |
FILE |
SUSPENDRELATIONS
RESUMERELATIONS |
If server side relationships have been suspended, for this
connection, on this thread, then this property will return 1.
Otherwise it will return a blank string. |
| prop:Search |
All |
FILE / VIEW |
SEARCH |
Returns the search term for the most recent call to SEARCH. |
| prop:SearchRank |
All |
FILE / VIEW |
SEARCH, NEXT |
While looping through the NEXT
or PREVIOUS records, after doing
a SET then SEARCH,
then the rank of the current record is accessible using prop:SearchRank. Note that this value
is a REAL, and is not comparable
between databases.
|
| prop:Server |
|
|
|
Returns the SERVER part of the
current SQL connection string. See also prop:User
and prop:Database. |
| prop:SQLFilter |
SQL |
FILE / VIEW |
COPY, MERGE,
SET |
|
| prop:SQLOrder |
SQL |
FILE / VIEW |
COPY, MERGE,
SET |
|
| prop:TryConnectedAs |
SQL |
FILE |
|
Clarion supports a number of connection string formats. The
driver necessarily translates these into the form required by
the database engine. This translated form of the owner string
can be retrieved (regardless of the success or failure of the
connection) using prop:TryConnectedAs. |
| prop:User |
SQL |
FILE / VIEW |
|
Returns the USER part of the
current SQL connection string. See also prop:Server
and prop:Database. |
| prop:Where |
SQL |
FILE / VIEW |
COPY, MERGE,
SET |
This property is not set by the programmer, but is available
to the programmer to inspect after a call to SET. This contains the combined WHERE
clause as constructed by the class. |
| prop:WherePage |
SQL |
FILE / VIEW |
|
This is the same as the prop:where statement, but includes
the additional WHERE clauses used
for the paging. |
PUT
Using traditional SQL drivers, only the changed fields are sent to the
server. In other words, not all the fields in the record are updated.
This is more efficient, and it also protects other fields, that may
have been changed in the row, by other programs (or threads) between
the
GET and
PUT
calls. The object based SQL drivers mimic this behavior.
This is worth repeating, because it is not widely understood. If you
PUT a record,
ONLY the
fields that changed are written. Other fields in the record, in the
database, are unaltered
EVEN IF they changed between
when you read the record, and updated it. In other words, the
PUT
statement is a
PARTIAL WRITE.
In addition to this, the object based SQL drivers support
prop:Project.
When reading the record,
prop:Project
lets you limit the fields that are being read from the table. This
property is used again in the
PUT. So not
all the fields are tested for changes, only the fields that are in
prop:Project, which typically means the
fields which were read in the first place.
In the context of a
PUT, the projected
fields are the fields being projected
into
the server from your program.
It is possible to adjust
prop:Project
between the read and the write, which in turn affords the programmer
even greater control over which fields are written back to the
database. If you write back a field which was not read, the
write will only happen if the value of the field is
different to its value before the record was read. So adding fields to
prop:Project should be done with care.
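For example, here is a minimal sketch of a partial read and write using
prop:Project. The field and key names are illustrative, and the assumption
that prop:Project takes a comma-separated list of field labels should be
checked against the prop:Project documentation;
Customers{prop:Project} = 'Cus:Id,Cus:Name,Cus:Status' ! only these fields are fetched
Cus:Id = 42
Get(Customers,Cus:IdKey)
Cus:Status = 'Closed'
Put(Customers)               ! only the changed, projected fields are written back
Customers{prop:Project} = '' ! clear it before subsequent reads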
Some care should be taken when using
GET,
REGET,
NEXT
and
PREVIOUS. All these commands also
support
prop:Project. Changing the value
of
prop:Project between the read and the
PUT should be done with caution. Typically
you will not want to write back more fields than were read (although
it's possible to do so.) The drivers do not alter the value in
prop:Project after a read.
Equally,
prop:Project is not cleared, or
reset, after a
PUT. For subsequent reads
the value should be cleared or reset or it will be used again.
By default, if supported by the driver, when a
PUT
is done, all server-side fields (identity, and readonly fields) in the
record are reset to their current server-side value. Use the
prop:FastPut property to suppress this behavior.
See also:
Add.
prop:Readonly
When a table is opened, access rights for the table are set.
Performance and security are improved if the table is opened in
read-only mode. However since tables are
opened generically, this is seldom used.
Even if the table is opened in the default
read-write
mode, there are times when telling the driver that a block of code
will be
read-only has performance
implications. For this reason
prop:ReadOnly
has been introduced in object based drivers.
Set the property to true or false. For example, setting it to true;
customers{prop:ReadOnly} = true
has the effect of telling the object that it will only be used in a
read-only context (for now.) While in this state calling
ADD or
APPEND
will result in an
Access Denied (5) error.
Calling
PUT or
DELETE
will result in a
Record Not Available (33)
error. Once the
read-only moment has
passed the property can be set back to
false.
The property is set automatically on
OPEN
or
SHARE if the file is opened in
read-only access mode. If the file is opened
in
read-only mode (via the
OPEN
or
SHARE commands) then attempting to set
the property to
false has no effect. If
it's desirable to change the access mode after the table has been
opened then see
REOPEN.
RECORDS
Records(FILE pFile)
Records(FILE pFile, String pFilter),BIGINT
Records(FILE pFile, *? pField1, <*? pField2>,
<*? pField3>, ..., <*? pField15>),BIGINT
| Parameter |
Type |
Description |
| pFile |
File |
The label of the file to count. |
| pFilter |
String |
A filter expression which expresses the records to count. |
| pField1..pField15 |
Any |
Fields in the FILE record. Up to 15 fields can be entered
here. The count will be filtered using the current values in
these fields. These cannot be OVER or ARRAY fields. |
The
RECORDS command has been extended to
allow the counting of more things than just the records in the
table. Two possible approaches can be used; a filter expression, or a
list of 1 to 15 fields with fixed values.
Example
Return the number of records with a date in January.
r = Records(Invoice,'Inv:Date >= Date(1,1,2024) and Inv:Date <= Date(1,31,2024)')
or
r = Records(Invoice,'Month(Inv:Date) = 1 and Year(Inv:Date) = 2024')
Return the number of Invoices allocated to Customer 5 on the 12th of
January
Inv:Date = Date(1,12,2024)
Inv:Customer = 5
r = Records(Invoice,Inv:Date,Inv:Customer)
Note: OVER and ARRAY fields cannot be used
in either the filter or as pField parameters.
RECORDS64
Records64(FILE pFile),BIGINT
The
RECORDS command returns a
LONG,
which limits the number to around 2.147 billion. This may not be
enough - many databases support more than 2 billion rows in a
table. The
RECORDS64 command is a
new command which returns a
BIGINT
variable (at the time of writing this, this is a
REAL.)
If this command is called against a traditional driver then the
regular
RECORDS command is called, and a
value less than 2.148 billion is returned. So this new command is
backward compatible with existing drivers.
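For example (a minimal sketch; the table name is illustrative);
count BIGINT
code
count = Records64(Customers) ! safe even if the table holds more than 2.1 billion rows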
See also
Bytes64 and
Pointer64.
REGET
Reget(FILE pFile, String pPosition)
Reget(KEY pKey, String pPosition)
The
REGET command reloads a row from the
FILE based on the
Position
parameter. The
Position is a string that has
previously been populated with a call to the
POSITION
command.
The
REGET command does
NOT
observe the
prop:Project
property. In other words it always loads all the fields in the record
buffer. Calling
REGET will clear the
prop:Project property.
The
REGET command does
NOT
support the length parameter. See
DOS2 for
more.
Traditional SQL drivers do not support
Reget(File).
Object based drivers treat this as
Reget(PrimaryKey)
so it is supported. Indeed since all calls to
REGET
(regardless of the key specified) are treated as
Reget(PrimaryKey),
they are effectively all equivalent.
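For example (a minimal sketch; the file, key and buffer names are
illustrative, and the Position string is assumed to be declared large
enough to hold the returned position);
pos STRING(1024)
code
Next(Customers)
pos = Position(Cus:PrimaryKey)
! ... other work which may change the record buffer ...
Reget(Customers,pos) ! reloads the whole row; prop:Project is cleared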
See also :
RESET .
Relationships
In many aspects good SQL advice is limited to the specific context
where that advice works. It is critical to understand your own context
well in order to make good database design decisions. And perhaps no
area in database design is as contentious as relationships. They are
enormously useful in some cases, but equally unhelpful in others.
From a database perspective relationships allow the developer to
define connections between tables. This is primarily used for two
reasons;
- To maintain data integrity (ie preventing "bad" data from being
added to the database and
- To improve performance when updating primary key values, or
deleting associated records.
For example, say there is an
Invoices table,
and each invoice contains multiple line items. If an invoice is
deleted from the system, then it makes sense to cascade the delete to
the child (
LineItems) records. In some
relationships this is desired. In others (for example, deleting a
customer) you do not want to cascade the delete (deleting
Invoices
when deleting
Customer would be bad.) And in
some cases you would want to prevent a row (say the
Customer)
from being deleted if child records (
Invoices)
exist.
With traditional (and object based) drivers all of this can be done in
the client program, and indeed Clarion is set up to support
client-side data integrity in this way. But doing it on the client can
be slow, especially if a lot of rows in different tables are updated.
For example changing the Unique ID value for a
Product
might result in millions of
LineItems rows
needing to be updated. Object based drivers add the possibility of
doing this work on the server, not the client.
Of course all magic comes at a price, and database relationships are
no different. There are two areas where having a relationship comes at
a cost;
- There's a performance penalty when adding or updating records.
The fields in the relationship need to be checked, and naturally
that comes at a cost.
- In distributed-data systems (think data on phones being
synchronized to a SQL database) the order of synchronization
suddenly becomes important. Parent tables have to be synchronized
before child tables.
So choosing to use relationships, on the server side, is a question of
balancing benefits in some cases with costs in others. There is no one
right answer which will work in every situation.
Traditional drivers make no attempt to support server-side
relationships, those are left to the developer to manage outside of
the program. Object based drivers add a command
ADDRELATION which allows the developer (or
template) to inform the file driver of the desired relationships.
For some database drivers (most notably SQLite) this call
MUST
be made before the table is created. The relationship is baked into
the table creation, and changing that afterwards can be "expensive".
For most other databases, relationships can be created (and updated)
on the fly.
Relationships consist of a parent table, a child table, action on
update, action on delete, and the list of one or more linking field
pairs. Up to 10 field pairs are supported.
Relationships make some table maintenance more difficult. For example,
dropping a table is problematic if other tables are related to this
table, and contain child data. The common technique of using
REMOVE
or
EMPTY to empty a table, then
repopulating it with some data fails because you may not be allowed to
delete parent records, or worse, the delete cascades to the children
(or the children are cleared.)
So in some cases, especially when table maintenance is being done,
it's necessary to suspend and later on resume relationship checks. The
new commands
SUSPENDRELATIONS
and
RESUMERELATIONS
have been added to give the programmer more control. Some databases do
this globally, on a connection or session basis, or on a table by
table basis. Be aware that if it's done globally then this could
affect other threads in the program or other programs. Check your
driver specific documentation to fully understand the implications of
these commands. And, of course, use them only when necessary.
REOPEN
Reopen(FILE pFile,LONG pNewMode)
If the file is already open, then it can be reopened with different
access rights.
This is an alternative to calling
CLOSE
and then
OPEN with a new access mode
parameter. This bypasses all the clean-up that usually occurs when
doing a
CLOSE or
OPEN.
Use this to change the access mode, but without affecting other
properties or record buffers that may be set.
If the reopen is changing to a read-only access mode, then
prop:readonly
is set to true. If the reopen is changing to a read/write access mode
then
prop:readonly is left
unchanged.
If the file is currently closed, then
REOPEN
behaves like a normal
OPEN, except that
the File Name property is not updated (if the File Name property is a
variable.)
Example
Reopen(Customers,42h)
Customers{prop:readonly} = false
If the
REOPEN command is called against a
traditional driver then the
Errorcode()
is set to
NoDriverSupport (80).
REMOVE
REMOVE(FILE pFile)
The
REMOVE command for object based
drivers is the same as for traditional drivers. It deletes the table
from the database (in SQL terms it does a
DROP
TABLE).
There is however an additional consideration when using
REMOVE
in a SQL context, where server-side relationships are active. If you
are using
server-side relationships
then the use of
REMOVE can be damaging.
If called when relationships are active then the defined
ON
DELETE behavior may be executed by the database. This
includes cascading deletes, clearing of child fields, and so on.
Indeed the call may fail for one, or more, rows if there is a
RESTRICT
limitation placed on deleting the row. This may not be intended
behavior.
If the call to
REMOVE is part of a wider
process (to drop the table, and then recreate it) then strong
consideration should be given to suspending relational updates (using
SUSPENDRELATIONS).
See also the
EMPTY command which
has similar implications, and the section on Server Side
Relationships
in general.
RENAME
RENAME(FILE pFile, STRING pNewFileName)
The
RENAME command is not supported by
traditional SQL drivers. However object based drivers allow the
RENAME command to be implemented by the
driver. The following is the expected behavior when used by a SQL
driver;
This command renames the table in the database. Note that only the
table may be renamed. The indexes may or may not be renamed by the
driver
[1], depending on what the
database requires.
Warning: Renaming a table, and other items
related to that table (like indexes) can disrupt server-side code like
server-side triggers and stored procedures that use the old names.
Using
RENAME on tables which are included
in other triggers, or in stored procedures could result in those
triggers or stored procedures failing.
The table can be renamed while it is open, however note that the Full Path
Name setting for the table is not changed. Thus subsequent operations
on the table may fail if the Full Path Name is not updated at the same
time. (Unless the next command is a
CREATE.)
To avoid possible issues it is recommended to only
RENAME
a table when it is closed.
Obviously renaming a table could cause other programs connected to the
database to encounter an error. So this command should be used with
some caution.
If the destination table already exists, then an error
Access
Denied (5) is generated.
Note 1: The
SQLite2 driver renames the indexes, as long as the old index name
contains the old table name followed by an underscore.
The MSSQL2 driver renames the keys, indexes and constraints if the old
table name forms part of those names.
See Also
COPY
RESULTGET
RESULTGET(FILE pFile, long pRow)
RESULTGET(FILE pFile, BIGINT pRow, long pColumn)
RESULTGET(FILE pFile, BIGINT pRow, long pColumn,
*Byte pNull)
RESULTGET is one of the functions that
lets the program access the result set directly.
RESUMERELATIONS
RESUMERELATIONS(FILE pFile)
This ends a matching
SUSPENDRELATIONS
command. Calling
CLOSE also automatically
triggers the end of the
SUSPENDRELATIONS
command. If
SUSPENDRELATIONS has not been
called then this does nothing. If
SUSPENDRELATIONS
has been called multiple times, then this resumes all of them.
(In other words there is not a counter counting Suspends and Resumes.)
Note that most
SUSPENDRELATIONS commands
apply to a connection, or the entire database (consult your driver
documentation for specifics.) However to work correctly it is
necessary to call
RESUMERELATIONSON for
the same table as
SUSPENDRELATIONS.
Calling
RESUMERELATIONS without a
preceding
SUSPENDRELATIONS is meaningless
and harmless. No error will be generated.
See also
SUSPENDRELATIONS
and
Relationships.
ROLLBACK
ROLLBACK(FILE pFile, STRING pSavePoint)
ROLLBACK(FILE pFile)
If
LOGOUTCONNECTION is
used to start a transaction, then
ROLLBACK
MUST be called with the primary table. Calling
ROLLBACK
with no parameters in this situation will cause a GPF.
For more information on rolling back to a save point, see
SAVEPOINT .
If the server has gone into an Error state, or has terminated the
transaction internally, then this can be detected using
prop:TransactionStatus and
prop:InTransaction. If these two properties do
not align then a
ROLLBACK should be
performed to terminate the transaction.
Saving State
The templates (both ABC and Clarion) generate FILE structures as
Global, usually threaded, structures. This has a number of advantages,
but also means that there can be side-effects when a procedure is
using the FILE structure, and calls another procedure that uses it as
well.
To help with this problem Clarion has three commands (
GETSTATE,
RESTORESTATE,
FREESTATE)
used to save the state of a
FILE so that
it can be restored later. This allows procedures (either the caller or
the callee) to snapshot the state of the structure, and then return it
to that state, thus ensuring that there are no side-effects.
For ISAM drivers this works well, and can be "perfect". However for
SQL drivers there are limitations, precisely because another program
is involved (the SQL server) and it does not necessarily have the
ability to do the same. Fortunately these limitations seldom matter to
the program.
It's also worth noting that saving a state is "expensive" in RAM
terms. There's quite a lot of information (properties, result set and
so on) which are stored. When you no longer need the state
FREESTATE
should be called.
FREESTATE is not called
automatically when
CLOSE is called
(because the state transcends the
FILE
open status.) It will be freed when the thread ends, but doing so when
it is no longer needed is recommended.
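As a rough sketch of the pattern (assuming GETSTATE returns a handle which
is later passed to RESTORESTATE and FREESTATE - check the Clarion
documentation for the exact prototypes);
state LONG
code
state = GetState(Customers)   ! snapshot the FILE structure
! ... call code which also uses the Customers FILE ...
RestoreState(Customers,state) ! put the structure back the way it was
FreeState(Customers,state)    ! release the saved state as soon as it is no longer needed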
SAVEPOINT
SAVEPOINT(FILE pFile, STRING pSavePoint)
SAVEPOINT is a new command that can be
used inside a transaction (ie between a
LOGOUT
and a
COMMIT). It places a "marker" (the
pSavePoint parameter) into the
transaction. This allows a
ROLLBACK
just to that save point, rather than all the way back to the beginning
of the transaction.
For Example (this code is obviously highly abbreviated);
Logout(0,Invoices,LineItems) ! [1]
Loop
  Add(Invoices)
  SavePoint(Invoices,'Invoice' & Inv:id)
  Loop
    Add(LineItems)
    If Errorcode()
      Rollback(Invoices,'Invoice' & Inv:id)
      Break
    End
  End
End
Commit()
Note 1: For SQL
drivers actually listing all the tables is obsolete. See
Transactions
for more information on that.
SEARCH
SEARCH(FILE pFile, STRING pSearchString)
SEARCH(VIEW pView, STRING pSearchString)
Object based drivers introduce ranked
Full
Text Search features to the driver. This is done with the
SEARCH command. This command is effectively a
complex filter, and is applied between the
SET
command and the
NEXT and
PREVIOUS
commands.
Result sets are ordered by rank - in other words the most relevant
results are returned first. (Note that most databases treat higher
rank as more important, however SQLite regards lower as better. This
doesn't matter unless the program explicitly exposes the rank value.)
Note that Full Text Search is not like
INSTRING
or
LIKE. It's not matching a string of
characters, but rather one or more whole words. Searching for the word
data does not return a match for
BackupData or
DataManager
- only for columns that contain the whole word
data.
The search is not case sensitive.
By default the full text search is created across all the
string,
pstring,
cstring,
memo and
blob
fields in the table structure. Fields marked as
BINARY,
UUID or
BASE64
are excluded. Fields with a
Custom Field
Interface are excluded.
The syntax for the search string can be simple, but it can also be
made more specific using a well-formed query. The following operators
are supported;
| Operator |
Description |
| OR , | |
The OR operator (or
alternatively a pipe symbol can be used) returns a result set
where either of the terms is present. |
| AND, +, & |
The AND operator (or
alternatively a plus or ampersand symbol can be used) returns
a result set where both of the terms are present. |
| NOT, -, !, ~ |
The NOT (or alternatively a
minus, tilde or exclamation symbol) removes any row with this
term from the result set. |
| "text" |
Double quotes are used to indicate phrases which need to be
matched in the row. |
| ( ) |
Parentheses are used to indicate operator precedence. |
All of the above is a formal way to say that the search string behaves
the way you would expect.
The search term cannot start with the
OR
or
NOT operator. If present, the leading operator is
dropped. Thus the search term
OR LOVE
or
NOT LOVE
becomes just
LOVE
The search string of the most recent search can be retrieved using
s = someTable{prop:search}
Examples
Return result sets containing the word
RED or the
word
BLUE.
'red | blue'
'red or blue'
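Return result sets containing the phrase NEW YORK, but not the word BRANCH.
'"new york" -branch'
'"new york" not branch'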
File Example
Set(Pro:NameKey) ! [1]
Products{prop:filter} = 'Pro:Active = 1' ! [2]
Search(Products,'NetTalk')
Loop
Next(Products)
If Errorcode() then break.
! whatever
End
[1] : The result set will be ordered by relevance (known as the
rank)
first, and then the sort order specified by the
SET
will be applied after that.
[2] : The
SEARCH is in addition to, not
in place of, all the regular
SET
behaviors. So all the other properties that apply to the
SET
are still in play. These properties should be set BEFORE the call to
the
SEARCH command.
View Example
Find all the customers with the word
INC
in the record. The most relevant records are returned first.
Set(CustomersView)
Search(CustomersView,'inc')
Loop
Next(CustomersView)
If Errorcode() then break.
! whatever
End
Field Types
Fields in the row that are searched have to be text, not binary
fields.
If you have a BLOB
or MEMO in your dictionary, and that
field is not marked as BINARY
then the driver will create it as a VARCHAR
type field (or similar) when the table is created in the database.
If the field is declared as a VARCHAR (or
similar) then it can be searched using full text search.
BLOB (or STRING,
PSTRING) fields that are marked as BINARY in the dictionary, and therefore
created as a BLOB (or similar) in the database cannot be used in a
full text search.
SEND
SEND(FILE pFile, STRING pCommandList)
The
SEND command is used to send
instructions, and set values to the file driver which control the way
the driver behaves. For more information see
Driver
Options.
The command list is a space-separated list of commands, which may, or
may not, need associated values.
If the
SEND command is used without
specifying a value, then the current value of the setting is returned
and the value is unchanged. If a value is passed then the value, after
attempting to set it, is returned. Some databases will not
accept a setting, so the setting, as used by the database, is
returned. (In other words, if the value returned does not equal the
value passed, then the option failed to set for some reason.)
The
SEND command can accept a
space-separated list of settings. If multiple settings are set at the
same time then only the value of the last setting is returned.
Note: Because it accepts a space separated
list care should be taken not to include extra spaces, such as before
or after the = sign. If spaces in the name or value are necessary,
then enclose the term in double quotes. For example;
Send(Customers,'/pragma_foreign_keys=ON')     ! good
Send(Customers,'/pragma_foreign_keys = ON')   ! bad
Send(Customers,'"/pragma_foreign_keys = ON"') ! good
SET, RESET, POSITION
SET(FILE pFile)
SET(FILE pFile, KEY pKey)
SET(FILE pFile, LONG pOffset)
SET(KEY pKey)
SET(KEY pKey, LONG pOffset)
SET(KEY pKey, KEY pKey)
SET(KEY pKey, KEY pKey, LONG pOffset)
The
SET command in Clarion comes in seven
different flavors. Fundamentally it breaks down into combinations of 3
parameters;
- The order to move through the table
- The start position
- The offset in the table to start from
For traditional drivers the
OFFSET
parameter is a
LONG, and thus has a
maximum value of 2.1 billion. If your
OFFSET
is, or may be, greater than this number then create a variable of type
BIGINT, populate that with the larger
number, and then call
SET with that
variable as the
OFFSET parameter. Using
this form will ensure that the whole
BIGINT
value is passed to the driver. This is especially important when
using the DOS2 driver on files larger than two gigabytes.
For example;
ptr BIGINT
code
ptr = ! some number, perhaps > 2 gigs
SET(Customers,Ptr)
The traditional SQL Drivers do not support all seven forms of the
SET command. Typically they only support
SET(key) and
SET(key,key).
The other forms of the
SET command,
especially setting any kind of
OFFSET
are not supported. Object based drivers however do support all these
forms.
Traditional drivers do not support
prop:Order
or
prop:SQLOrder when doing a
SET
on a File, object based drivers do.
The following points explain the interactions, and limitations of the
object based drivers new additional features;
- When no key (and no prop:Order or
prop:SQLOrder) is specified (ie SET(File)), then the Primary Key is used.
So this is equivalent to calling SET(PrimaryKey)
. However if SET(File) is used and
the prop:Order or
prop:SqlOrder are set then
those properties are used for the ORDER BY
clause.
- When an
OFFSET parameter is
used then this translates into an offset of OFFSET-1
in the SQL result set.
This is different to setting prop:Offset,
as that requires prop:Limit to be
set as well, whereas the OFFSET
parameter applies to the result set even if prop:Limit
is not set. If prop:Limit is set,
and prop:Offset is set then prop:Offset
has precedence over the OFFSET
parameter.
Note that the OFFSET reflects the
row number that will be selected first. In other words Offset-1
rows are discarded. So if OFFSET is
set to 5 then the first record read by the NEXT
command will be the fifth record in the sequence.
By contrast, prop:Offset indicates
the number of rows that are skipped. If Prop:Offset
is set to 5, then the first row read by the NEXT
command would be the sixth row in the result set.
- If the SET(key,key) form of SET is
used, and prop:Order or prop:SQLOrder
are used, then those fields identified in prop:Order
and prop:SqlOrder will behave the
same as fields in the key. In other words the current value of the
fields will be taken into account when constructing the result
set, and the current values will be considered as the "starting
position".
- If prop:SQLOrder is set, and it
contains a SQL calculation, then the built-in paging on the result
set is not possible, and the whole result set will be returned in
a single request. Consider using Prop:Limit
in this situation.
- The following properties are used by calls to SET
and RESET. They are NOT CLEARED after
the call, they must be manually cleared if necessary before future
calls to SET. They ARE cleared by a
call to CLOSE or CLEARPROPS
prop:Distinct, prop:Filter, prop:Limit,
prop:NotDistinct, prop:Offset, prop:Project, Prop:SqlFilter,
Prop:Order, Prop:SqlOrder.
Reset
RESET(FILE pFile, STRING pPosition)
RESET(KEY pKey, STRING pPosition)
Despite the name, it's best to think of
RESET
as just another form of the
SET
command. It is possible to call
RESET
without calling
SET, so using
RESET
does not require a
SET . Rather
RESET simply adds another way of doing a
SET, and in all respects behaves in exactly
the same manner.
The initial values used by the
RESET
command are not in the record buffer (as they are with a call to
SET) but rather in the
Position parameter passed into the
RESET command. The
Position parameter is the result of an
earlier call to
POSITION.
See also
REGET.
Position
POSITION(FILE pFile)
POSITION(KEY pKey)
Reset is based on a value returned by
POSITION.
In traditional drivers,
POSITION in
turn is based on the contents of the record buffer when it is
called, and importantly on a specific key when it is called. In
other words a key is chosen when calling
POSITION
and that same key needs to be used when calling the
RESET
command.
For example;
GET(file,5)              ! it doesn't matter how the record is fetched
p = POSITION(file:PrimaryKey)
RESET(file:PrimaryKey,p) ! works
RESET(file:OtherKey,p)   ! fails
With SQL object based drivers,
SET supports
prop:Order and
prop:SQLOrder.
Since
RESET should be considered as a
form of
SET, it makes sense that
POSITION should also support
prop:Order
and
prop:SQLOrder[1].
However the only way to do this is to have
POSITION
return the "whole contents of the record buffer" (which is what it
does.)
In other words, the length of the string returned by POSITION
is now the size of the record, not some size determined by
the Primary Key and selected key. While this requires more memory
for the saved position, it has some beneficial side effects;
POSITION
and
RESET are faster, and since the
POSITION contains the whole position, the
RESET can be done on any key, and is not
limited to the key, or
prop:Order set
at the time of the call to
POSITION.
Note 1: prop:SQLOrder allows for calculated fields
to be included into the
ORDER BY clause.
Since these clauses are outside the record buffer they cannot be
captured by
POSITION. Hence if
calculated fields are used in the
prop:SQLOrder
property then the
RESET position may
not match the actual record position, and thus in this case
RESET
cannot be used.
SET / NEXT Loop
From the earliest days of Clarion, embed code (and hand-code
generally) has used loops to walk through a FILE, in a pattern known
as the SET / NEXT Loop.
The basic pattern looks something like this;
Clear(Invoice)
Set(Inv:DateKey)
Loop
Next(Invoice)
If Errorcode() then break.
! do something
End
This code is ubiquitous and works well with ISAM files, but is not
very SQL friendly. A slightly expanded form of this code follows the
same approach;
Clear(Invoice)
Inv:Date = Date(1,1,2024)
Set(Inv:DateKey,Inv:DateKey)
Loop
Next(Invoice)
If Errorcode() then break.
If inv:Date > Date(1,31,2024) then break.
! do something
End
In the above code all the invoices for January are fetched. Again this
code is very fast in ISAM but also very inefficient in SQL. When
converting a program to SQL it is usually recommended to change this
code to use a VIEW. However with object based drivers it is possible
to make a small tweak to make this a lot more efficient for SQL back
ends.
To understand why, it's important to understand that VIEWs have two
major efficiency benefits over a SET/ NEXT loop. Firstly they allow
you to limit the number of fields populated with each iteration
through the loop. If you are only going to use a few fields out of a
lot of fields, then this is good to do. Secondly they allow the result
set to be bounded by using a Filter.
Object based drivers add support for a number of old, and new,
properties to the SET/NEXT loop which better informs the engine on the
goals of the loop, and this results in code which is more efficient in
the database.
All properties should be set (or cleared) before the call to SET
. Unless otherwise specified properties are not
cleared by the call to SET and must be manually cleared (if desired)
before the next set. Changing these properties during a loop is not
recommended (except where documented below.)
To ensure that properties set by previous calls do not affect the
SET / NEXT loop, start the loop by calling either Clear(file)
or ClearProps(File).
Prop:ClearProps
Calling
prop:clearprops sets all these
properties back to their default value (null or false.) This is the
same as calling
ClearProps.
customers{prop:clearprops}
Prop:Explain
A
SET command, along with all the
properties described here, results in a SELECT statement being sent
to the database.
If your loop runs slowly, it can be useful to see the way in which
the database uses the statement. And specifically whether it is able to
make use of keys on the table to best generate the result.
prop:explain can be set, before the call to SET, to get information
from the database regarding the query.
Prop:Explain is cleared automatically by a call to
SET.
The
SET still functions, and the code
runs as normal. The output is dependent on the driver.
For more information on this see
Explain.
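For example (where the feedback is sent is driver specific - see Explain);
Invoice{prop:Explain} = true ! cleared automatically by the SET
Set(Inv:DateKey,Inv:DateKey)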
Prop:Filter
In the example code at the top of this section, a
SELECT
statement is generated and a
Result Set
is constructed by the server. Using the
SET(key,key)
line sets the starting point of the result set, but crucially
there's no end point defined. So the server builds a result set of
all the invoices since Jan 1, 2024 - whereas we are really only
interested in the invoices actually in January. Letting the server
in on that little bit of information allows the server to build a
smaller result set.
Object based drivers allow for this by adding support for the
prop:filter statement. So a small tweak to
the code becomes;
Clear(Invoice)
Inv:Date = Date(1,1,2024)
Invoice{prop:filter} = 'Inv:Date <= Date(1,31,2024)'
Set(Inv:DateKey,Inv:DateKey)
Loop
Next(Invoice)
If Errorcode() then break.
If inv:Date > Date(1,31,2024) then break. ! it doesn't
matter if this line is here or not
! do something
End
This is a lot simpler, and quicker to do than refactoring the code
completely to use a VIEW.
The code could also have been translated to this;
Clear(Invoice)
Invoice{prop:filter} = 'Inv:Date =>
Date(1,1,2024) AND Inv:Date <= Date(1,31,2024)'
Set(Inv:DateKey)
Loop
Next(Invoice)
If Errorcode() then break.
! do something
End
All the same rules that apply to
View{prop:filter}
apply here, and indeed
File{prop:SQLFilter}
is also supported as well.
If you use a
SET(key,key) form of the
SET statement, and also a Prop:Filter then the Prop:Filter will be
appended to the key filter. So if the key
is on say
Inv:Date, Inv:Time, and you
set
Prop:Filter = 'Inv:Total > 0'
then the result is filtered by
Date, Time,
Total.
If you use the
SET(file) or
Set(key)
form of the SET statement then the prop:filter will be used alone
(as there are no key fields.)
Various Clarion and SQL functions can be used in a filter. See
View
Filter Functions for more.
Prop:Limit
Another way to limit the result set generated by the SET / NEXT loop
is to use the
Prop:Limit setting. This
is useful in those cases where the number of records you want is
limited. For example, here's code to get the top ten invoices for
the year;
Clear(Invoice)
Num = 0
Clear(Inv:Total,1)
Set(Inv:TotalKey,Inv:TotalKey) ! Total Key is descending highest
to lowest
Loop
Next(Invoice)
If Errorcode() then break.
If Num > 10 then break.
If Inv:Date < Date(1,1,2024) or Inv:Date > Date(12,31,2024) then break.
Num += 1
! do something
End
Clearly we can use
prop:Filter to
optimize this, but
prop:Limit will also
be helpful;
ClearProps(Invoice)
Invoice{prop:filter} = 'Inv:Date => Date(1,1,2024) AND Inv:Date <= Date(12,31,2024)'
Invoice{prop:Limit} = 10
Set(Inv:TotalKey,Inv:TotalKey)
Loop
Next(Invoice)
If Errorcode() then break.
! do something
End
Prop:Limit is used to limit the result
set to a specific number of records. Use of
Prop:Limit
implies the same behavior as
prop:Paging =
false (the whole result set is loaded into RAM in one go.)
Prop:Limit works for FILEs and VIEWs. It must be set (or cleared)
before the call to
SET. It is not reset
automatically (except on an
OPEN ,
CLOSE ,
CLEAR,
CLEARPROPS, where it is reset to false.)
Changing the value of this property to not-zero while inside a SET /
NEXT loop will have the effect of terminating the SET/NEXT loop at
the end of the current page. Changing the value of this property to
zero inside the LOOP has no effect.
Prop:Offset
If you wish to skip a number of records at the start of the result
set, then you can use
prop:offset to do
so. All the FilePointer forms of
SET
also implicitly set the
prop:offset to
the passed-in
FilePointer value.
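For example, fetching the third page of ten rows (remembering that
prop:Offset only applies when prop:Limit is also set);
ClearProps(Invoice)
Invoice{prop:Limit} = 10
Invoice{prop:Offset} = 20 ! skip the first 20 rows, so rows 21 to 30 are returned
Set(Inv:DateKey)
Loop
Next(Invoice)
If Errorcode() then break.
! do something
End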
Prop:Order
In the example at the top there was helpfully a key (
TotalKey)
which was on the field we wanted to sort by, and also helpfully it
was a descending key. In real-life we probably aren't that lucky. In
which case you likely have already converted this code to using a
VIEW (or worse, you are loading the whole thing into a Queue and
then sorting the Queue.)
Fortunately though object based drivers support
Prop:Order.
So you can use an ad-hoc ordering scheme where you don't have a key.
ClearProps(Invoice)
Invoice{prop:filter} = 'Inv:Date => Date(1,1,2024) AND
Inv:Date <= Date(1,31,2024)'
Invoice{prop:Limit} = 10
Invoice{prop:Order} = '-Inv:Total'
Set(Invoice)
Loop
Next(Invoice)
If Errorcode() then break.
! do something
End
If you use a
SET(key) form of the SET
statement, and also a
Prop:Order then
the Prop:Order will be appended to the Key Order. So if the key is
on say
Inv:Date, Inv:Time and you set
Prop:order = 'Inv:Total' then the result
is ordered by
Date, Time, Total.
If you use the
SET(file) form of the
SET statement then the prop:order will be used first (as there are
no key fields.)
Internally a unique order is required. If necessary the engine will
add the primary key fields to the end of the order, to make it
unique.
Prop:Order (and prop:SQLOrder) work for FILEs and VIEWs. It must be
set (or cleared) before the call to
SET.
It is not reset automatically (except on an
OPEN
or
CLOSE, where it is reset to a blank
string.) Changing the value of this property while inside a SET /
NEXT loop will have no effect on the loop.
prop:Paging
Traditional drivers make use of a SQL Cursor to build a result set
in the database. The driver can then access that cursor to easily
move forward and backwards in the result set, update records and so
on. This is a simple and effective approach.
There are however disadvantages with this approach. It requires the
server to build the whole result set on the server, even if only a
small number of records will actually be retrieved. For example, a
Browse on a Customers table will trigger a (potentially large)
cursor on the back end, even if the user only views the first page
of results. And since that user may sit on that browse for some
time, the whole cursor is maintained in the database for that time,
consuming server resources.
Object based drivers take a different approach. Result sets are
constructed as "page requests". So a limited number of rows are
requested from the server, these are fetched, and then the server is
free to carry on as normal. After the request is made (and
fulfilled) no resources are consumed on the server.
However all magic comes at a price. There is some overhead in both
creating these pages, and in calculating the next page to fetch. If
the result set is known to be "small" (ie will fit into client-side
memory) then
prop:Paging can be set to
false. This allows the engine to bypass
some code and so slightly improve performance. It also results in
one less query to the database (since there's no "empty last page"
to fetch.)
Prop:Paging works for
FILE
and
VIEW structures, and is set to
true by default. It must be set (or cleared) before the
call to
SET. It is not reset
automatically (except on an
OPEN ,
CLOSE ,
CLEAR,
CLEARPROPS, where it is reset to true.)
Changing the value of this property to
true
or
false while inside a SET / NEXT
loop has no effect.
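For example, when the result set is known to be small;
ClearProps(Invoice)
Invoice{prop:filter} = 'Inv:Customer = 5'
Invoice{prop:Paging} = false ! fetch the whole (small) result set in one request
Set(Inv:DateKey)
Loop
Next(Invoice)
If Errorcode() then break.
! do something
End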
Prop:SqlFilter
Prop:SQLFilter is supported.
Prop:SqlOrder
Prop:SQLOrder is also supported.
However note that in the
FILE context,
there is no need to (and you should not) use a SQL Alias (or table
name) when identifying the field(s). This is different to the
traditional drivers which
require the
use of the SQL Alias. The use of SQL Aliases in the property is
allowed with object based drivers, and it will work, but this
technique is brittle.
There is a catch though.
Prop:SQLOrder
allows for the result set to be sorted by a calculated value. For
example;
ORDER BY YEAR(CURDATE()) - YEAR(birthdate)
ASC;
However it should be noted that since these values are opaque to the
driver, it's not possible for the driver to reliably "page" though
the file when this property is used. For this reason when making use
of the
Prop:SqlOrder field the entire
result set is returned. Consider using
prop:Limit
in cases like this to put some upper bound on the number of records
returned. This is only an issue when using a calculated value in the
order.
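For example (a sketch only; the calculated order is the one shown above,
and prop:Limit is used to bound the single result set that will be
returned);
ClearProps(Customers)
Customers{prop:SqlOrder} = 'YEAR(CURDATE()) - YEAR(birthdate)'
Customers{prop:Limit} = 100
Set(Customers)
Loop
Next(Customers)
If Errorcode() then break.
! do something
End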
Prop:ReadOnly
When a record is read from the databases, some work is done to
ensure that a subsequent
PUT or
DELETE command will be able to update that
same record in the database. If
prop:ReadOnly
is set then this work can be avoided.
Since this property is used at the Table level, it is not cleared by
prop:Clear, nor is it reset in any way.
So if you use this to improve performance in a SET / NEXT context
you
must clear it again after the loop is
complete.
If you attempt to do a
DELETE or
PUT on a row loaded when this property was
set then a
Record Not Available (33)
error is returned.
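For example (the key name is illustrative);
Customers{prop:ReadOnly} = true ! skip the tracking needed for PUT and DELETE
Set(Cus:NameKey)
Loop
Next(Customers)
If Errorcode() then break.
! read-only work here
End
Customers{prop:ReadOnly} = false ! must be cleared manually afterwards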
NOMEMO
Doing a SET/NEXT loop fetches all the fields in the table. If the
table includes
MEMO and/or
BLOB
fields, and you are not planning to use those fields in your loop,
then you can improve performance by using the
NOMEMO
command. Traditional SQL drivers do not support
NOMEMO,
but object based drivers do.
If you do a
PUT inside the loop, and
NOMEMO
is active, then the
MEMO
and/or
BLOB fields are not updated.
NOMEMO is automatically cleared by a call to
GET
or
REGET. It is also cleared by a call
to
SET, however it will remain "active"
for the duration of the SET / NEXT loop. Changing this property
inside the loop has no effect on the results returned by the loop.
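For example (assuming Inv:Notes is a MEMO field in the Invoice table);
NoMemo(Invoice) ! MEMO and BLOB fields will not be fetched by this loop
Set(Inv:DateKey)
Loop
Next(Invoice)
If Errorcode() then break.
! Inv:Notes is not populated here
End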
Strings
Nothing in life is simple, it seems, and Strings are no exception.
Consistency however is a virtue. So let's talk about Strings and
Cstrings
[2],
how they differ in SQL, and how you should use them in your database
design.
First, the basics. Strings and CStrings are both native Clarion types,
and most developers fall into the camp of using one or the other in
their programs. One enduring myth is that "SQL
requires
Strings to be CStrings". The truth though is somewhat more nuanced.
For most databases, as long as the declaration is consistent (in other
words as long as you use
STRING
everywhere, or
CSTRING everywhere)
there's no real problem. All your code should continue to "just work".
Under the hood though, there are a few differences, and these can be
summarized as follows;
Cstrings consume less space on the disk (and in memory), but may be
marginally (and I mean
marginally)
slower than Strings. In general Cstrings are considered preferable (to
save space), and performance differences can be ignored.
Strings cannot (really) contain meaningful trailing space characters
(since they're clipped off when doing comparisons and so on). Cstrings
cannot contain nul (
Chr(0) )
characters. Neither are common in data anyway, so it would be rare for
this to be important. Of course it may be important in a specific
case, in which case care should be taken to use the correct one.
Aside: For some cases
your program needs to store both possible trailing spaces, and null
characters. In these cases consider the
BLOB
data type, or encoding the string before storing it (as, for
example, Base64). If neither of these are suitable then use a
STRING type, and an extra field containing
the length may be appropriate. Or use the
STRING
type and append a known suffix to the string when storing it (like
say adding a ] character to the end of the string.) These cases
though are unusual and must be determined on a case by case basis.
Using the
BINARY extended
name attribute with a
STRING
changes the declaration to
BINARY, and for
a
CSTRING to
VARBINARY.
This is an alternative when using an object based driver.
Don't use a
STRING in one table (as say
the ID field) and a
CSTRING in another
(as the foreign key field.) JOINS which match Strings to Cstrings are
very slow, and demand lots of server resources.
Some databases (like SQLite) don't really care, since they treat the
types as the same (in SQLite's case, they map to the same underlying
type,
TEXT, which has no specified length).
The object based drivers (by default) typically map a
STRING(n)
to a
CHAR(n), and a
CSTRING(n)
to a
VARCHAR(n-1)[1].
This is consistent with the behavior in traditional drivers. If your
program has come from say TOPSPEED, and has typically used the
STRING datatype, then things will work
perfectly well. The database will likely be somewhat larger than using
Cstrings, but performance will likely be on par, and any difference in
performance between
STRING and
CSTRING
will likely not be noticed.
Individual drivers (like FireBird) may supplement this information
though, so be sure to check the documentation for the specific driver
to see if this topic is raised. Some databases may have a strong
preference for either
STRING or
CSTRING.
Changing from
STRING to
CSTRING
in the dictionary will have little to no impact on program code -
changing from
CSTRING to
STRING
though may be significant.
Note 1: In
Clarion a Cstring(n) can only hold n-1 characters, as the last
character has to be a nul (
Chr(0) ).
SQL engines however do not store the null character, so only need n-1
bytes to store the string.
Note 2: Another String type in Clarion is
the
PSTRING. Traditional SQL drivers do
not support this field type. Object based drivers treat it the same as
a
STRING. This makes it possible for
TOPSPEED files to be more easily converted to SQL. While use of
a
PSTRING in programs is rare, one
possible use case was for storing variable length, binary,
information. If your program is using this type for this reason then
that code will fail. Consider using an alternate method for storing
such information, as discussed above, or add the
BINARY
name attribute to the field.
Note 3: Some databases may not be able to
store a nul character in a
STRING. If
you are going to allow nul characters in the string, then the
BINARY
attribute is recommended. For example both the traditional and object
based drivers for MS SQL will terminate
STRING
fields if they encounter a nul character. While this is obviously true
for
CSTRING, it is less obviously also
true for
STRING.
Subsets
The easiest way to think of a subset is as a complex filter. A filter
(
prop:Filter) is great at getting a
subset of records into a result set. They work well when the decision
of whether to include a row or not can be determined simply by looking
at the row. So to get all the records in a specific date range, you
need to look no further than the date column.
There are a group of queries though that need to go deeper. Once the
condition lies outside the row itself, then
subsets
can become useful. SQL allows multiple result sets to be created, and
then those results combined (in various ways) to make a single result
set. The four operations are
UNION (and
related
UNIONALL),
EXCEPT
and
INTERSECT.
| Command |
Description |
| EXCEPT |
Subtracts sets (A - B). |
| INTERSECT |
Finds common elements (A ∩ B) (removing duplicates) |
| UNION |
Combines sets (A + B) (removing duplicates) |
| UNIONALL |
Combines sets (A + B) (keeps duplicates).
If you know your data is unlikely (or unable) to contain
duplicates then UNIONALL is somewhat faster on the server
because the server then does not check for duplicates. |
Each operation produces a result set, and the next
operation acts on that set. So it's possible to start with a result
set (
UNION command), then add another set (
UNION), remove two sets (
EXCEPT
,
EXCEPT), and follow that up with another
UNION and an
INTERSECT.
A subset contains just enough fields to identify the rows that are
included (or excluded.) In practice this mostly means that the primary
key fields of the primary table are used. (However some other
combination of fields in the primary table could be used). Each
command in the subset (
UNION,
EXCEPT,
INTERSECT,
UNIONALL)
MUST return the same set of fields (same number, same type). The field
names might be different, but they are ultimately affecting a single
list, so they must match.
For example, say there is a list of customers. The goal is to find out
which good customers (customers who purchased something in 2022 and
2023) did not purchase anything in 2024.
To do this the program has to look at the customer list as a whole,
including customers who purchased in 2022, and 2023, but excluding
those who purchased in 2024.
Here's another example; You have a list of employees and need to know
which employees took no sick leave this year. In other words, take the
list of employees, and remove from that list any employee that took
sick leave.
A third example involves just the primary table. Each employee has a
manager (which is denoted by the
ManagerId
field). Each employee has a
role (for
some it's "Manager") and each employee has a
location
(for some it's "New Jersey".) The goal is to find all the managers who
do not manage someone located in New Jersey.
All three of these examples will be fleshed out below.
A Subset starts with a usual
SET command
on the table of interest (this is known as the
Primary
Table). In the first example it was
customers,
in the second and third it was
employees.
The subset is created using the (explicit, or implicit)
SUBSET command. This command indicates the key
identifiers of the parent table which constitute the list. In the
first example this would be the
CustomerId,
in the second and third the
EmployeeId.
This command might be implicit, meaning that you don't need it at all.
If it's not used then the primary key fields of the primary table are
assumed. In that case the subset is created by using one of the
commands.
Then the subset of identifiers can be built using any number (and
combination) of the
UNION,
UNIONALL
,
EXCEPT and
INTERSECT
commands.
UNION adds rows,
EXCEPT
removes them, and
INTERSECT keeps only the rows present in both.
All these commands only add or remove unique values. If you wish to
add non-unique values then you can use
UNIONALL
.
Example 1
Which customers purchased something in 2022 and 2023 but did not purchase
anything in 2024?
Set(Cus:NameKey)
Subset(Customers,Cus:Id)
! Optional since the Cus:Id is the Primary Key
Field.
Subset(Orders,Ord:CustomerId)
Union(Orders,'Year(Ord:date) = 2022')
Intersect(Orders,'Year(Ord:date) = 2023')
Except(Orders,'Year(Ord:date) = 2024')
Loop
Next(Customers)
! etc
End
Let's work through this example;
Firstly the
SET command behaves as it
always does. It sets the starting point, sort order, filter, project
list and so on. That part is unrelated to the subset part that
follows.
The
SUBSET command binds a table, and
identifying columns to the commands that follow. In this example
two tables are involved. The first is the
Customers
table (consider this to be the
primary table).
The field(s) are the identifying fields which are in play. If the
SUBSET for the primary table is not included,
then the primary key fields for the primary table are assumed.
(In this example it is specifically included for clarity, but it could
be omitted.)
The second
SUBSET command adds the
Orders table, and indicates the foreign key
fields(s) which match the primary table. In this case
Ord:CustomerId
matches the
Cus:Id field. Note that the
fields don't need to be the same name, but they do need to be the same
type. They also need to be the same number of fields, no more and no
less. If the primary table has three fields in the primary key, then
three fields are needed here as well. The
SUBSET
command allows for up to 10 fields to be added.
Aside: In this example the subset is
working on a single foreign table, the
Orders
table. However this is not a limit - any number of foreign tables can
be used when creating the subset. The
SUBSET
command for each table only has to be called once for each table
regardless of the number of
UNION,
UNIONALL,
EXCEPT
and
INTERSECT commands used.
The next three lines determine which records are added and removed
from the subset. The
UNION commands add
customers that ordered something in the first year, the
INTERSECT
removes any that don't exist in the second year as well, and the
EXCEPT removes any which exist in the third
year.
The
LOOP through the resulting records then continues as
normal. Fundamentally all the subset does is act as a filter on the
SET /
NEXT loop.
Example 2
Which employees took no sick leave this year?
Set(Emp:NameKey)
Subset(Leave,Lea:EmpId)
Except(Leave,'Year(Lea:Date) = ' & Year(TODAY()) & ' AND
Lea:Type='''Sick''')
Since the first filtering command is
EXCEPT,
and not
UNION, the list starts with all
employees included. (This would also be true if the first command was
an INTERSECT, but if the first command is an
INTERSECT
then that's the equivalent of starting with
UNION
since again it starts with everyone, and removes anyone not in the
INTERSECT.)
This is in contrast to the earlier example which did start with
UNION, and hence does not include anyone in
the initial list.
So in this example the list starts with all employees, and then
removes any who took sick leave this year.
Example 3
Find all the managers who do not manage someone located in New Jersey.
Set(Emp:NameKey)
Union(Employees,'emp:role = ''Manager''')
Subset(Employees,Emp:ManagerId)
Except(Employees,'emp:location = ''NJ''')
In this case the list starts with a
UNION
to get a list of the employees who are Managers.
Because it did not start with a
SUBSET,
the
SUBSET is implied. Since the primary
key field is
Emp:Id, that is the implied
SUBSET field on the primary table.
After the call to Union there is a
SUBSET
command, which is on the primary table, but indicates a field other
than the primary key field. This indicates that for the next
commands, the
Emp:ManagerId (which is
the same type as
Emp:Id, and ultimately
contains an
Emp:Id value) is the subset
field. (In this case the employee has a
Emp:ManagerId
field indicating his manager.)
So any Manager in the original list who manages an employee located in
New Jersey is removed from the list.
Limitations
There are some limitations to using subsets;
Firstly, all the tables involved need to be in the same database. At
this stage cross-database subsets (and by implication cross-driver
subsets) are not supported.
Secondly, if at all possible, either set prop:Paging
for the request to be false, or set prop:Limit as high as possible given the
RAM constraints of the program. Doing subset queries is (relatively)
expensive on the server side, so you do not want to do it, and then
only get a single page back (if the intention is to get the
following pages.)
Implied
Two statements in a Subset are implied in specific cases. These
simplify the calling code;
- If no SUBSET call is made, on the Primary table, then the
Primary Key fields are used. For example this line;
SUBSET(Customers,Cus:Id) ! This line is
not necessary
is not necessary (if Customers is
the primary file, and Cus:Id is
the primary key field.)
- If the first command (UNION, UNIONALL, EXCEPT
or INTERSECT) is a EXCEPT
or INTERSECT then the subset starts
with all the rows in the primary table. For example in this
sequence
UNION(Customers) ! This line is not
necessary
EXCEPT(Customers,'Cus:Status = 4')
it is not necessary to call the UNION
command, that is implied.
It should be noted that if the first command is UNION
or UNIONALL (on any table, not just
the primary table) then the inclusion of all the rows first is NOT implied.
Also implied is that a subset requires at least one command
otherwise it is not a subset. So calling
SUBSET
, and then not calling any of
EXCEPT ,
INTERSECT ,
UNION
or
UNIONALL before calling the
NEXT
(or
PREVIOUS) commands simply results
in a normal
SET /
NEXT
loop. The
SUBSET commands are
effectively ignored.
SUBSET
SUBSET(FILE pFILE,? pParm1,<? pParm2>,...,<? pParm20>)
The
SUBSET command binds a table, and one
or more fields, to the current subset. If not called at the start of
the subset, on the primary table, then the primary key fields of the
primary table are assumed. If
EXCEPT,
UNION,
UNIONALL,
or
INTERSECT are used to
limit the set, on a different table to the primary table, then
SUBSET must be called first to bind that
table, and related fields, to the subset.
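For example, here is a minimal sketch of binding a foreign table before filtering on it. The Orders table, and the Ord:CustomerId and Ord:Date fields, are illustrative names only;
Set(Cus:NameKey)
Subset(Orders,Ord:CustomerId)          ! bind Orders to the subset via its customer id field
Union(Orders,'Year(Ord:Date) = 2023')  ! add customers who placed an order in 2023
Loop
  Next(Customers)
  If ErrorCode() then Break.
  ! process the customer record here
End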
For more information on subsets see
subsets.
SUSPENDRELATIONS
SuspendRelations(FILE pFile)
The
SUSPENDRELATIONS command disables
Foreign Key integrity checking for the table. This potentially
cascades to other tables using this connection, or other threads in
the program. Consult the driver documentation for specific
information.
Warning: Suspending Relationships isn't
necessarily limited to a specific table or program. This
function should be considered as dangerous, and should be used with
caution. Should the program fail to call
RESUMERELATIONS then the database may be left
in a state where the relationships are disabled.
Some drivers are able to suspend relationships on a per-table basis,
others can only suspend all the relations in the database.
This suspension is terminated by a call to
RESUMERELATIONS, or a call to
CLOSE.
Use this command in cases where data in the table needs to be
modified, but relational updates should not be used. For example when
using
REMOVE or
EMPTY
(where the intention is to put the existing data back) or where for
some other reason it's necessary to temporarily suspend relationships.
This command is independent of any (or no) calls to
ADDRELATION. It applies to the database,
regardless of when, how, or by whom, the relationships were created.
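For example, here is a sketch of suspending relationships around an EMPTY, where the intention is to reload the same data afterwards. The LineItems table is illustrative, and it is assumed that RESUMERELATIONS takes the same FILE parameter as SUSPENDRELATIONS;
SuspendRelations(LineItems)  ! disable foreign key checking (may affect the whole connection)
Empty(LineItems)             ! clear the table without triggering relational deletes
! ... reload the rows here ...
ResumeRelations(LineItems)   ! re-enable foreign key checking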
See also
RESUMERELATIONS
and
Relationships.
SQL
SQL(FILE pFile, STRING pSql,<*?
pParm1>,<*? pParm2>,...,<*? pParm20>)
A new command,
SQL, has been introduced
as a safer approach to calling embedded SQL statements. This replaces
the need for the
prop:Sql command, which
is inherently vulnerable to SQL Injection attacks.
To understand the solution, it's best to first cover the problem.
Consider this statement;
Products{prop:sql} = 'SELECT * from Products
where name like ''%' & search & '%'';'
The problem here is that the statement is constructed from variables
(in this case
search) which may contain
harmful text. For example if this came from a web site search bar, and
contained the text
a%'; drop all tables;
then the second command (
drop all tables;)
would be executed as well. Of course this second command could be
anything (none of it good.) This is known as a
SQL
Injection attack.
Think of this SQL
SELECT statement as a
really tiny program. It's written out as text, so the database
"compiles" it, and then executes it. Since the
drop
all tables; command is part of the program it gets compiled
and executed.
The solution to this problem is to split the program part from the
data part. The program part is compiled, and then that compiled
program is passed, with the data, to the database for execution. Since
the data part is not compiled, it is not executed, and is safe
(regardless of what it contains.) This process is known as
parameterization.
The traditional
prop:Sql command is
incapable of performing parameterization because there is no way to
separate the "program" part and the "data" part of the statement.
The object based drivers offer a new command,
SQL,
which takes in the program part, and data part(s) as separate
parameters. In the program part, the parameters are represented as ?
n[1]
where
n is the parameter number. The
example above translated to the SQL command is as follows;
SQL(Products,'Select * from Products where name
like ''%'' + ?1 + ''%'';' , search)
Here's another example using two parameters;
SQL(Products,'Select * from Products where
name like ''%'' + ?1 + ''%'' and price >= ?2;' , search ,
minprice)
The SQL command supports up to 20 data values in the call.
If the same parameter is used more than once then the command can
contain the same
?n value, for example;
SQL(Customers,'Select * from Customers where
firstname like ''%'' + ?1 + ''%'' or lastname like ''%'' + ?1 + ''%''
;' , search )
That example may be hard to follow because the
SQL
being passed contains single quotes. That's a function of the database
SQL statement, not of the driver
SQL
command. Here is another example showing a simple string; (Courtesy of
Jon Waterhouse and Mark Sarson).
The original
prop:Sql looked like this;
mform = rptyear & '-' &
format(month,@n02) & '-01'
month_limit{prop:Sql} = 'insert into month_limit values(''' &
mform & ''')'
Because the
SQL statement needed quotes, it
was necessary to do the extra quotes work around
mform.
The parameterised
SQL statement though does
not use quotes. The parameterised version of the
SQL
is
insert into month_limit values(?1)
Even though the
?1 parameter is a string, no
quotes are used.
So the
SQL command becomes
mform = rptyear & '-' &
format(month,@n02) & '-01'
SQL(month_limit, 'INSERT INTO month_limit VALUES (?1);', mform)
Note 1: The exact
syntax of the SQL statement will vary from one database to another.
Note 2: The SQL statement can be called with
just the SQL string (ie without additional parameters), and so can be
used to execute any existing
prop:Sql
statement. (It's not ideal to do that, because it's then still unsafe,
but it can be a first step in the migration process.)
Note 3: The following is from the Clarion
documentation on
prop:Sql;
When you issue a SELECT statement using PROP:SQL,
the selected fields must match the fields declared in the named file
or view. In addition, if you use VIEW{Prop:SQL} to issue a SELECT
statement, the fields in the SELECT must be ordered based on the
field order in the file definition, not the PROJECT sequence. [ie
The traditional drivers match the field in the result to the Clarion
structure by number.]
In object based drivers
no such care needs to be
taken. The fields returned can be in any order, whether
working on a
FILE or a
VIEW.
The results are matched up to the appropriate fields by
name.
Indeed any result set can be retrieved, and processed, even if it does
not access the file at all. See
Result Sets
for more on this.
If you wish to match by number (for example in the case of Dummy
files) see
/DUMMY.
You may also wish to match by number if the
prop:Sql
statement contains calculations, and AS is not set. In order to assist
with programs converting from a traditional driver to an object based
driver, where there is extensive use of prop:Sql and the receiving
table is a genuine table, not a dummy table, see
/PROPSQLMATCHBYNUMBER.
Note 4: Some SQL engines allow for numbered
parameters, where a parameter can be used multiple times in the same
SQL statement. For example;
SQL(Customers,'Select * from Customers where
Cus:Active = ?1 and Cus:Special = ?1',true)
Not all engines however support this pattern, so it's more portable to
write this as;
SQL(Customers,'Select * from Customers where
Cus:Active = ?1 and Cus:Special = ?2',true,true)
Note 5: When converting from old code, it
can be useful to make use of the
/UNSAFE
driver string. You can also use the
SEND command. For example;
SEND(Customers,'/UNSAFE=true')
Note 6: The first example above uses
+ for server-side string concatenation. Different
SQL engines use different symbols, some use
||
, some use
CONCAT() and so on. Tailor your
SQL code to match the database you are sending to.
Note 7: If your
prop:Sql
starts with
CALL or
NORESULTCALL
then you likely won't convert it to a
SQL
command at all. Rather use the
CALL
or
CALLOUT commands.
Note 8: The parameters
are all passed as variables. You cannot put an expression into a
parameter. Rather create a variable, prime the variable, and pass
that. The type of the variable will inform the type passed to the
SQL server.
SQL Reserved Words
In the beginning we had ISAM files, and life was good. We labeled the
fields in the table as anything we liked. Since there was no server
between the program and the data, there was no need for the concept of
limiting the names of tables or columns.
Along came SQL databases, and along with those the SQL language. SQL
has words which are used in the language, and so cannot be used as
column, or table, names. Dealing with this can be simple, or
complicated - it all depends on your context.
The first, and most important thing to understand is that the
NAME
and the
LABEL of a field are different
things. The label is the thing you use in your code. And the label can
be (almost) anything you like. Code that works against ISAM tables can
be the exact same code that works against SQL tables. SQL reserved
words do not affect your code. Do not read this section in the context
of the label - that is not applicable here - this section is the
context of the Column
name as stored in
the SQL database.
In the dictionary this is represented by the
External
Name setting for the column. Again, this is independent of
the label. If the external name setting is blank, then internally the
name is set to be the (upper case version of the) label, not including
the prefix.
There are two basic strategies that can be employed when you have a
SQL reserved word as the name of a column. Which one you choose will
depend on your context. Let's deal with the easy context first;
If the table is new to the database then simply changing the External
Name (or indeed, if it's really new, the label) to a non-reserved word
is the ideal. Figure out a different name for the column, and move on.
In the long run this is the best option because it makes ad-hoc
queries against the database (especially from non-Clarion programs)
easier. If you can do this, do it.
If, for whatever reason, the name of the column in the database is
fixed and cannot be changed, then the driver will wrap the word in
quotes
[1][3]
when it is used in a SQL statement. You don't need to do anything,
this happens automatically. The only time it becomes a problem is when
you start using the column name in properties, SQL statements and so
on. And of course when making ad-hoc queries from other programs.
But wait, there's more. When you wrap the name in quotes like this, it
(often but not always) becomes case sensitive. In other words
"User"
is different to
"user" is different to
"USER". Again this isn't a train-smash, it
just makes using the name in ad-hoc SQL more precise.
If you have a reserved word as a label in your dictionary, and you do
not set an external name, then the driver has to choose a case for the
word. All the drivers (traditional, and object based) choose to make
the word uppercase in this situation. If you want to keep the name of
the column as-is, but at the same time make it not-upper-case, then
you can set the external name in the dictionary (with or without
quotes
[2]) and the case
of that setting will be maintained.
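For example, here is a sketch of a dictionary field whose label is the reserved word User. The field size is illustrative; the point is that the case of the external name is preserved, and the driver adds the quoting when needed;
User        STRING(50),NAME('user')   ! stored in the database as "user", not "USER"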
If after reading all of this, your conclusion is to avoid reserved
words as column names (either via the label, or via the external name)
then you have come to the right conclusion. If however it's too late
for that, then welcome to a pretty large club.
Note 1: Most
databases use double quotes but not all. Some use square brackets, or
back ticks, or whatever. This is mostly handled for you by the driver
so not something you need to worry about.
Note 2:
Traditional drivers
require that when
reserved words are used in External Names that the necessary quotes,
or brackets, or whatever are included. Object Based drivers do not
require this, and will use the appropriate delimiter when necessary.
Object based drivers don't mind the quotes or brackets being added in
the dictionary, but leaving them off allows the driver to use the
appropriate one as needed.
Note 3: The
traditional SQLite driver has a bug - it can
CREATE
a table using quotes around the external name, but it can't
OPEN
a table that has an external name with quotes. So with the traditional
SQLite driver it's necessary to set the External name to something
that's not reserved.
prop:SQLFilter
/ prop:SQLOrder
prop:SQLFilter and
prop:SQLOrder
allow you to inject absolute SQL code into the statement that is being
sent to the server. The code is sent as-is, and as such is not parsed,
processed, optimized or altered in any way. This is both a strength
and a weakness.
The strength is that it gives you very explicit control over
these clauses in the
SELECT statement.
They can either override, or append, the regular clauses.
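For example, here is a minimal sketch shown on a FILE structure. The ILIKE clause is PostgreSQL-flavored and purely illustrative, and is sent to the server exactly as written;
Customers{prop:SQLFilter} = 'cus_lastname ILIKE ''smi%'''
Set(Cus:NameKey)
Loop
  Next(Customers)
  If ErrorCode() then Break.
End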
The weaknesses though are significant, and so their use should be both
sparing and careful. You should use these only when all other options
have been exhausted. In no particular order;
-
They do not go through the parsing or binding engine, so care must
be taken to avoid security issues, especially the possibility of
SQL Injection. This becomes the developer's responsibility, not the
driver's.
- They may be very dependent on the rest of the SELECT
statement. Since the SELECT may change
(as the driver evolves and is updated) this is a source of
potential future compatibility issues
- They don't change to take advantage of improvements in the SQL
engine. While the driver can be updated, bypassing the driver in
this way doesn't take advantage of the driver updates
- They are often bound to a specific database engine. This can
make changing engines harder. (While that's not a common task,
it's an important one when you need it.)
STREAM
While traditional SQL drivers did not implement
STREAM
(or
FLUSH), the object based
drivers do implement them to alert the database that a batch of writes
is about to occur. This allows the database to "wrap" the writes in a
block to improve performance. When upgrading programs that originated
in ISAM files (like Topspeed), which used
STREAM
and
FLUSH for performance gains, those
performance gains are now present in SQL as well.
Exactly how this wrapping occurs (if it occurs at all) is up to the
individual driver. Each database has different mechanisms for speeding
up bulk-write performance.
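For example, here is a sketch of a bulk import wrapped in STREAM and FLUSH. The ImportQueue and its fields are illustrative;
Stream(Customers)                    ! tell the driver a batch of writes is about to start
Loop x = 1 to Records(ImportQueue)
  Get(ImportQueue,x)
  Cus:Id = 0
  Cus:Name = impq:Name
  Add(Customers)
End
Flush(Customers)                     ! end of the batch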
Unlike ISAM drivers, in SQL
STREAM does
not generally lock the table, however this is database dependent, so
consult your driver documentation to see if it has any locking effect.
You cannot call
STREAM while inside a
transaction. Doing so will result in the error code being set to
56 Logout Already Active.
For more information see the
Performance
section of this document.
Note: In practice, for most drivers, it's
likely that calling
STREAM starts a
transaction. Since transactions are based on the connection (not the
file) this also implies that any other files on the same connection
will also effectively be part of the
STREAM.
However
STREAM may do more than just
start a transaction, so for bulk writes it is recommended to
use it.
prop:Text
From the Clarion Documentation;
Prop:Text is an array property of a FILE that
sets or returns the specified MEMO or BLOB field's data. MEMO and
BLOB controls are negatively numbered, therefore the array element
number must be a negative value. MEMO and BLOB declarations begin
with -1 and decrement by 1 for each subsequent MEMO and BLOB, in the
order in which they appear within the FILE structure.
In other words,
prop:Text allows access
to set or get the value of a field, based on the field number in the
structure, and not based on the label.
Traditional drivers do not support
prop:Text
on fields other than memos and blobs. If used on any other field type
it returns a blank string.
Object based drivers extend the support for
prop:Text
to all field types. It returns the text version of the field from the
current record buffer, as a string. (See
prop:Value
if you want the raw memory value). Behavior for memos and blobs is
unchanged.
If the field is an array then the "whole array" is returned, as a
single string value surrounded by square brackets
[ ] (usually null padded). The array is CSV encoded, and
prop:AlwaysQuote,
prop:Separator and
prop:LineEndings
will apply.
Fields of type
DATE will be formatted
using
prop:DatePicture.
Fields of type
TIME will be formatted
using
prop:TimePicture.
Fields of type
DECIMAL and
PDECIMAL
pad the places with Zeros. In other words, if decimal places is set to
2, then 1 is represented as 1.00.
Numbers are not comma separated. In other words 87654 is represented
as 87654 not 87,654.
For Example;
s = customers{prop:Text,3} ! gets the text
version of the 3rd field in the Customers File
s = customers{prop:Text,-1} ! gets the
text version of the first memo or blob
customers{prop:Text,-1} = 'I saw a bear
in the wood' ! Sets the text of the first memo or blob
customers{prop:Text,4} = 1234 ! sets the value in the fourth field
to 1234.
prop:TransactionStatus
Traditional drivers do not support
prop:TransactionStatus
.
This queries the server to determine if the server believes it is
inside a transaction or not.
If a
transaction is started with
LOGOUT or
LOGOUTCONNECTION
then the property returns
driver:TransactionOn
.
If no transaction is happening on the connection then it returns
driver:TransactionOff
(not a blank string).
Some drivers (notably PostgreSQL) may also return
driver:TransactionError
to indicate that a transaction is active, but an error has
occurred, and all commands (including
COMMIT)
are being ignored. A
ROLLBACK
is required to reset the database to accept new commands.
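For example, here is a sketch of checking the server-side status before deciding how to end a transaction (error handling trimmed for brevity);
Logout(1,Customers)
Add(Customers)
Case Customers{prop:TransactionStatus}
Of driver:TransactionError
  Rollback()   ! the server requires a ROLLBACK before it will accept new commands
Of driver:TransactionOn
  Commit()
End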
This is similar to
prop:InTransaction
however
prop:InTransaction queries the
driver, not the database.
Transactions
Clarion offers three commands which facilitate transactions;
LOGOUT,
ROLLBACK
and
COMMIT.
The
LOGOUT command takes a
Seconds
parameter and a list of 1 or more
FILE
structures. The origins of this go back to the ISAM file drivers in
Clarion's early days.
The first, and most important thing to understand when dealing with
transactions in SQL is to understand that in SQL transactions do not
work across a subset of tables. Rather the transaction belongs to a
connection, and everything on
that
connection (and
only on that connection
[1]) is included in the
transaction - whether the file is listed in the
LOGOUT
command or not.
In object based drivers, connections are pooled on a THREAD basis -
thus transactions include commands performed on the thread, but do not
include commands performed on other threads
[2].
Since only commands executed on the same connection can be part of the
transaction, it follows that only commands to the same driver will be
included in that transaction.
Object based drivers introduce the concept of
Save
Points. This is done with the
SAVEPOINT
command, and with an optional parameter to the
ROLLBACK
command. Savepoints allow a partial rollback, back to a known good
location. None of the transaction is committed, but it allows part of
the transaction to be abandoned, and then the transaction can
continue.
Traditional drivers will not allow multiple transactions (on the same
thread) to be active at the same time. This is self evident from the
nature of the
ROLLBACK and
COMMIT
commands - since they take no parameters it is not possible to
distinguish between transactions, and hence only one transaction is
allowed to be open.
Since the object based SQL drivers make use of server-side
transactions, where the transaction is connection based not table
based, it is possible to allow multiple connections, and hence
possible to allow for multiple transactions to be running at the same
time. This is especially valuable if two different databases are being
updated at the same time. Note though that these multiple transactions
are not related. It's possible for one to commit, and another to
rollback. It should not be assumed that all the open transactions end
in the same way.
In order to support this behavior, a new
LOGOUT
command is used, called
LOGOUTCONNECTION.
A single table is passed into this command - this is referred to as
the
Primary Table of the transaction. As
above ALL tables on this connection will automatically become part of
the transaction.
Equally, if a transaction is started with this command the
ROLLBACK
and
COMMIT commands MUST pass the same
primary table to the
ROLLBACK or
COMMIT
command. Failure to pass this parameter will result in a GPF.
If a
LOGOUT command fails and returns an
error code, it is no longer safe to then call
ROLLBACK
or
COMMIT. Calling either will result in
a GPF.
Open(Customers,42h) ! this will open
on the (default) connection pool 0
Products{prop:pool} = 1 ! this will open on connection
pool 1, not connection 0
Open(Products,42h)
LogoutConnection(Customers)
LogoutConnection(Products)
! some writes happen here
Commit(Customers) !
if Using LogoutConnection you MUST use Rollback(file) or
Commit(file)
Rollback(Products) ! if
Using LogoutConnection you MUST use Rollback(file) or Commit(file)
Note 1: Connections
are pooled based on thread number, and owner string. If tables have
the same driver, and the same owner string, then typically they share
a connection. In some circumstances you may want to separate tables
onto different connections. You can do this by setting
prop:Pool
to a number other than 0. The number doesn't mean anything - but
tables with different
prop:Pool values
will not share a connection. Conversely tables with the same
prop:Pool
value will share a connection (if they're on the same thread, and with
the same owner string.)
For example;
glo:owner = 'some database connection string'
Customers{prop:pool} = 1234
Open(Customers,42h)
Products{prop:pool} = 5678
Open(Products,42h)
In the above code, the Customers and Products table will not share a
connection. By implication a
LOGOUT on
one will not include the other in a transaction.
Note 2: This
is assuming the file is THREADed. If the FILE structure is not
THREADed then [to be determined].
Note 3: CLAIM
also creates an implicit transaction if one is not already active.
Note 4: prop:InTransaction
can be used to query the driver to see if a transaction on a
connection is currently active.
prop:TransactionStatus can be used to query the database to get
the status of the transaction.
UNION
Union(FILE pFile)
Union(FILE pFile,STRING pFilter)
The
UNION command is used when creating
Subsets. It allows results to be added to the
current result set (not including duplicates).
It typically (although it does not have to) follows a call to
SUBSET.
Example
Find all employees who took study leave, or who also participate in
the bursary program.
Set(Emp:NameKey)
Subset(Leave,lea:EmployeeId)
Union(Leave,'Type = ''study''')
Subset(Bursary,bur:EmployeeId)
Union(Bursary)
Other related commands are
EXCEPT,
UNIONALL and
INTERSECT.
UNIONALL
UnionAll(FILE pFile)
UnionAll(FILE pFile,STRING pFilter)
The
UNIONALL command is used when
creating
Subsets. It allows results to be
added to the current result set, but includes a duplicate if the
record is already there.
It is a little faster than
UNION, so it
can be used in cases where the result is known to be unique. It can
also be used in cases where duplicates are explicitly desired.
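For example, here is a sketch mirroring the UNION example above, where duplicate rows are either acceptable or known not to occur;
Set(Emp:NameKey)
Subset(Leave,Lea:EmployeeId)
UnionAll(Leave,'Type = ''study''')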
Other related commands are
EXCEPT,
UNION and
INTERSECT.
UPDATE
Update(File pFile,String pFilter,?* pField1,String
pValue1,...,?*
pField10,String pValue10)
Update(Key pKey,?* pField1,String
pValue1,...,?*
pField10,String pValue10)
UPDATE is a new command, not present in
Traditional drivers. It is similar to a
PUT,
except that the record can be updated without the need to read it
first. Where the update does not require client-side code it is an
efficient way of bulk updating records on the server. Multiple fields
(up to 10) can be updated at the same time.
| Parameter |
Type |
Description |
| pFile |
File |
The label of the file to update. |
| pFilter |
String |
A filter expression
which selects the records to update. If this is a blank
string then NO records are updated. If ALL the records should
be updated then this parameter should be set to driver:all. |
| pKey |
Key |
The set of rows to be updated is constructed from the key
fields, and their current values. If this form is used then
the pFile and pFilter
parameters are not used. If the key is not unique then all the
records matching the current key field values will be updated. |
| pField1..pField10 |
Any |
A field in the table. |
| pValue1..pValue10 |
String |
An expression
containing the new value. This expression needs to be capable
of being executed on the server. It can be a fixed value, or a
calculation based on the current value, or on other values in
the row.
Note that this value is considered to be an expression. It
will be evaluated. So a string that contains say a math value
('4+4+4') will be evaluated to 12. If you want to actually
set the field to be a string, then you must use
quotes. |
Since the update is done completely on the server, the new value is
either a constant value, or an expression which can be executed on the
server.
This command is dangerous. Because it updates a number of records at
the same time, it's very easy to get the expression wrong, and thus
affect a large number of records with a single bad command. Some care
should be taken to ensure that the command is correct before
unleashing it on production databases. And, of course, as always, good
backups of the database are recommended.
Fields in the file record are NOT updated with the values in the
database.
Some examples;
Update a field for all rows in the table to a fixed value.
Update(Customers,driver:all, Cus:Notify, 0)
Update a field for a single record, without reading the record first.
Update(Customers,'Cus:Id = 4',Cus:Overdue,true)
OR
Cus:Id = 4
Update(Cus:IdKey,Cus:Overdue,true)
Update a field, based on other fields in the row.
Update(Invoices,'Inv:date < ' &
today()-30 & ' AND Inv:Paid = 0',Inv:Overdue, true)
Split a field, populating it into multiple new fields.
Update(Event,driver:all,Eve:Month,'Month(Eve:Date)',Eve:Year,'Year(eve:Date)')
Set the value of a string field to a constant value
Inv:Number = 6
Update(Inv:Key,Inv:Status,'''Paid''')
UPSERT
Upsert(File pFile)
UPSERT is a new command, not present in
Traditional drivers. It is similar to an
ADD,
except that if the primary key value(s) already exist in the target
table, then the existing record is updated, and no error is generated.
UPSERT only matches the Primary Key
values. If other columns are unique, and the update would cause one of
those other unique keys to generate a
Creates
Duplicate Key (40) error, then that error will still
occur.
For example this code;
Clear(file)
! set a bunch of fields here
Get(File,PrimaryKey)
If Errorcode()
Add(File)
Else
! reset all the fields, except the primary key
Put(File)
End
Is now replaced by;
Clear(File)
! set a bunch of fields here
Upsert(File)
Validation
There's a lot to like about SQL, but generic advice for SQL is hard.
That's because context matters enormously when it comes to making
decisions. Every context is different and so the decisions you make
will depend a lot on what your context is. Validation is one of these
areas where there's no universally right answer - the correct approach
depends on your context.
Clarion templates are set up to do client-side validation. They check
the data before sending it to the database. This can be helpful as it
saves unnecessary calls to the database. It also allows the client to
impose restrictions which may be program related, not necessarily
database related.
(For example, the program may be limited to adding sales transactions,
but not be able to add expense transactions.)
If the program "owns" the database, and all the programs adding data
to the database are coded well, then this approach works well. It also
saves work for the database to do (and in some cases this is important
because the database CPU may be the constraining resource.)
However if the database is more open, or if other programs can write
to the database (including general access programs like SSMS, pgAdmin
and so on) then server-side validation may be a good thing to do as
well. Traditional drivers do not support server-side validation, but
object based drivers do.
In this context, to maintain data integrity, data validation is done
on the server before data is written to the database. While the
dictionary editor has options for
Field
Validation (on the
Validity Checks
tab for a field), these are not necessarily exhaustive possibilities
from the SQL point of view. Additionally, these options are not part
of the file structure, so extra code is needed to apply them to a
FILE structure. These are therefore
client-side
validation rules.
Object based drivers allow for server-side validation rules to be
placed in the
Extended Name. This supplements
(but does not replace) the
Validity Checks
tab (which generates Client-Side code), by allowing for Server-Side
validation (in addition to client-side validation.) Possible options
are;
| Option |
Meaning |
> n
>= n
< n
<= n
|
The value must be greater than, less than, or in a range of
values. Two terms can be used.
Price | > 10 | < 1000
The above creates a range from 10 to 1000 (exclusive). |
<> n
<>
|
The value must not equal n.
If n is omitted then a blank
string (or 0 for numeric fields) is assumed. |
| BOOLEAN |
The value must be 1 or 0. If this is set, then the above < and >
values (if they exist) are ignored. This can be used with, or
without the NOTNULL option. Note
that if the field is a string type then the value MUST still
be one of '0' or '1'.
'True' and 'False'
are not considered to be boolean values. |
DEFAULT(n)
DEFAULT=n
|
When doing an ADD or APPEND,
if the field is not supplied, then default the value to n. Do not use quotes around n. |
| INLIST(a,...,z) |
A comma separated list of allowed values. Do not use quotes
on string values. |
| NOTNULL |
Null values are not allowed |
| REQ |
The value cannot be null, and cannot contain a 0 or blank.
This is the same as NOTNULL | <>
| |
VALIDATE or
CHECK |
A validation expression which is used as the SQL CHECK
clause when creating the table in the database.
Validate(price % 100 = 0) |
In this table
n
can be a number, or a string, depending on the type of the field.
Quotes should NOT be used around string values.
Examples
Age         Long,Name('Age | > 0 | <= 130')
LastName    String(100),Name('LastName | <> ')
Paid        Byte,Name('Paid | Boolean')
Score       Byte,Name('Score | Default(100) | NotNull')
Color       String(10),Name('Color | InList(Red,Blue,Green)')
Evaluation  String(20),Name('Evaluation | Req')
Price       Long,Name('Price | Validate(price % 100 = 0)')
prop:Value
Traditional drivers do not support
prop:Value
on fields other than memos. If used on any other field type it returns
a blank string.
Object based drivers extend the support for
prop:Value
to all field types. It returns the "raw memory" value of the field
(not the text version of the field) from the current record buffer, as
a string. See also
prop:text
. Behavior for memos and blobs is unchanged.
If the field is an array then the "whole array" is returned, as a
single string value (usually null padded).
Note that Cstring and Pstring values are "full length", not
null-terminated. The first char in the Pstring is the number of bytes
the string uses, however the full string length is returned.
For Example;
s = customers{prop:value,3}
Detecting an Object Based Driver
To determine if a table is using an object based driver, or not, at
run time, use
prop:OBD;
If table{prop:OBD}
! object based driver
End
Getting the Object for the FILE Structure
The first time the FILE is touched (on a thread), an object for that
file is created. Assuming the FILE is
,THREAD
then each thread will have its own instance of the File Record, and
each thread will have its own instance of the Object. In most
situations you will not want to access the object directly - that's
what the File Driver commands are for. However if the driver
class has extra methods, or perhaps extends what a method returns,
then you may want to call the object directly. This is exactly the
same as calling the object via the File Driver commands, but can be
desirable in some situations.
For example, the DOS 2 Driver supports files larger than 2 (or 4)
gigabytes. Since the regular
BYTES command returns a
LONG,
to get the actual size of the file you need to bypass the
BYTES
command. Since the BYTES
method
returns a
BIGINT (Currently a
REAL)
if you call the method directly you can get a larger number returned.
Doing this is very straight-forward. In this example
ImportTable
is the name of the FILE structure.
ImportObject &DriverFileDos2Class
r real
code
ImportObject &= ImportTable{prop:object}
r = ImportObject.Bytes()
Note that threaded FILE structures, by extension imply that each
thread has its own object. So the object reference should be declared
locally (as above) or, in the case of a global declaration, that
declaration should be threaded as well. For example;
glo:ImportObject
&DriverFileDos2Class,Thread
Note: This example is somewhat contrived.
The
BYTES64 command would work
just as well.
prop:Object can also be used for keys and
blobs. When used in this way it returns the
DriverFileClass object for the
FILE
which contains that
KEY or
BLOB.
Example
SomeProcedure (Key pKey)
somefile &FILE
code
somefile
&= pKey{prop:object} ! somefile now points to the
FileDriver object for the table that belongs to that key.
Views
BINDFIELD
BindField(VIEW pView,String pName, *Long pLong)
BindField(VIEW pView,String pName, *Real pReal)
BindField(VIEW pView,String pName, *String pString)
BindField(VIEW pView,String pName, *Cstring pCstring)
BINDFIELD works for
VIEWs
as well as
FILEs. For more
information see
BindField.
CLEARPROPS
This new command resets all the properties associated with the
VIEW back to the default values. It is
automatically called when the
VIEW is
OPENed or
CLOSEd.
DELETE(View)
When
DELETE is called on a
VIEW,
then the record in the Primary file is deleted. This does not invoke
client-side relational deletes or updates. It effectively bypasses
any client-side code which does relational updates or deletes.
Of course, if there is server-side code (created by
Relationships,
or otherwise) then server-side actions will occur.
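For example, here is a sketch of deleting primary-file rows while looping through a view. The view and field names are illustrative;
Loop
  Next(CustomerView)
  If ErrorCode() then Break.
  If Cus:Inactive
    Delete(CustomerView)   ! deletes only the row in the primary file (Customers)
  End
End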
JOINTABLE
JoinTable(VIEW pView, FILE pParentFile, KEY pKey,
Byte pJoinType, <*? pField1>, ... <*? pField10>)
JoinTable(VIEW pView, FILE pParentFile, FILE pJoinFile, Byte
pJoinType, String pExpression)
| Parameter |
Type |
Description |
| pView |
View |
The View label of the View to attach the new Join to. |
| pParentFile |
File |
A FILE structure already in the view, that this join will
join to. |
| pKey |
KEY |
A key in the table being added to the view as a Join. In
other words, a key in the new FILE being joined into the
View. |
| pJoinFile |
FILE |
The new FILE being joined into the View. |
| pJoinType |
Byte |
One of driver:LeftJoin, driver:InnerJoin, driver:RightJoin,
driver:FullJoin, driver:CrossJoin,
driver:SelfJoin. |
| pField1..pFieldn |
Any |
Fields in the parent table, which match fields in the
pKey. |
| pExpression |
String |
An expression detailing which columns in the parent table
and child table match. For example 'Lin:Invoice
= Inv:InvoiceID'
These expressions can include local variables bound using
the clarion BIND command. |
Object based drivers allow a VIEW to be constructed, or altered at
run time. The
JoinTable method allows
another table to be Joined to a VIEW. Either the KEY,FIELD,..,FIELD
approach can be used, or a custom join expression can be used.
The ParentFile parameter determines the Table, that is in the view
already, to which the new table will be joined.
When declaring a VIEW in code, only two join types are possible;
driver:InnerJoin (if the
INNER
attribute is used) or
driver:LeftJoin
(if the
INNER attribute is not
used). However object based drivers support the additional
join types, and these can be set using either the
JoinTable
command, or by setting
prop:inner
on an existing Join.
Examples;
JoinTable(InvoiceView,Invoice,Cus:IdKey,driver:leftJoin,Inv:CustomerId)
JoinTable(InvoiceView,LineItems,Product,driver:RightJoin,'Lin:ProductId
= Pro:ProductId')
Bind('somevar',somevar)
somevar = 'Sup:Id'
JoinTable(InvoiceView,Product,Supplier,driver:RightJoin,
'Pro:Supplier = somevar')
Bind('SomeExpression',SomeExpression)
SomeExpression = 'Con:Email = Cus:Email'
JoinTable(InvoiceView,Customer,Contacts,driver:LeftJoin,SomeExpression)
See also
ProjectField.
prop:AutoProject
When the primary file in the
VIEW, or
a
JOIN in the
VIEW,
is declared with no PROJECTed fields, then the view engine simply
includes all the fields (excluding MEMOs and BLOBS) in the
VIEW.
Should you need finer control, you can set the
prop:AutoProject
property as follows;
VIEW{prop:AutoProject,fileindex} = Driver:ProjectFields ! default
VIEW{prop:AutoProject,fileindex} = Driver:ProjectBlobs
VIEW{prop:AutoProject,fileindex} = Driver:ProjectAll
VIEW{prop:AutoProject,fileindex} = Driver:ProjectNone
This allows the developer to control which combination of fields and
blobs are automatically included.
The
fileindex is the number of
the file in the view to set the property on. If that parameter is
omitted, or 0, then the setting is applied to the all files in the
VIEW.
Note that even if
prop:AutoProject is
set to
driver:ProjectNone, primary key
fields will still be projected to support paging,
POSITION and
REGET.
Setting
prop:Distinct
or
prop:NotDistinct
will actively suppress the auto-generation of primary key fields,
because their inclusion would result in all rows being distinct, and
no rows being not Distinct.
Setting
prop:NoKeyProject
allows fine control over which key fields are automatically
included. Setting
prop:AutoProject to
Driver:ProjectFields, or
Driver:ProjectAll
renders the
prop:NoKeyProject property
moot.
The prop:AutoProject property only applies to cases where no fields
for a file, or join, are included in the view. It does NOT apply if
any fields (in the primary file or join) are projected. It can be
set for any file in the view, however it will only be used if no
fields are manually projected.
Prop:Distinct
Before calling the
SET(table / key /
view) command, the
prop:Distinct
property can be set. This will add the
DISTINCT
clause to the SQL
SELECT statement.
This tells the server to only return unique records, duplicate
records are not included in the result set.
customers{prop:distinct} = true
Set(Cus:NameKey)
Loop ...
This property
is not reset to false
except on a
OPEN ,
CLOSE
,
CLEAR and
CLEARPROPS.
If you wish to re-use the VIEW with this property off, then you must
set it to false (if you previously set it to true).
For example;
Say you wanted a list of all customers who made a purchase in
January. Your VIEW would look something like this;
VIEW(Invoice)
JOIN(Cus:PrimaryKey,Inv:CustomerId)
Project(Cus:Name)
End
End
This would return the customers for all invoices, and the same
customer would appear multiple times if they made multiple
purchases.
However if
View{prop:Distinct} = true
is set before the call to
SET(View)
then only
DISTINCT records will be
returned by the NEXT loop, and so each customer would only appear
once in the list.
There is one important point to note;
In the above example no fields are projected from the INVOICE table
into the VIEW. In traditional Clarion this means ALL the fields
would be projected (resulting in the DISTINCT clause "failing" since
each invoice row is unique.) For this reason, when
prop:Distinct
is set to true ONLY explicitly projected fields are included in the
VIEW.
Equally, because the primary key values cannot be included in the
result set (since they would break DISTINCT) paging on this result
set is not possible. The entire result set will be fetched from the
server.
See also
prop:NotDistinct .
Prop:Name
After the View has been OPENed and before it has been SET you can use
the
{prop:name,n} of a field to set
the field value to be a SQL function. For example;
InvoiceView VIEW(Invoices)
PROJECT(Inv:Number)
PROJECT(Inv:Total)
END
InvoiceView{PROP:Name,2} = 'SUM(lin:Quantity * lin:price) AS
InvoiceTotal'
This expression is then passed through to the SQL engine, and the
result is determined by the engine.
Traditional drivers require the field names to be carefully set to
their aliased SQL value.
In object based drivers the expression (the part before the
AS
separator) is passed through the
Expression
Translator , so you can use Clarion field names in the
expression (as in the example above.)
Common SQL functions that can be used here are
SUM,
COUNT,
AVG,
MAX and
MIN.
However there are a lot of Aggregate and Scalar functions which are
supported by the SQL database, and they can all be used here.
An alternative syntax for the prop:Name setting is as follows;
view{'field_label',PROP:Name} = string
This works the same as the
{prop:name,n}
approach, but is more flexible in that the exact position in the
view is not the determining factor.
Dummy Table Join
This next part is new to object based drivers.
In the above example the Invoices table contains a field called
TOTAL. Hence it can be PROJECTed.
In some cases though there may not be a suitable field in the
table to receive the calculated value, and the language does not
allow for other variables to be included in the View structure.
In this situation a "Dummy Table" can be used as a placeholder for
the local variables. This Dummy table can be declared using the
DOS (or DOS2) file driver, and the file itself does not need to be
Opened. Consider the case where this DummyTable is declared, and
contains a field called Field1.
This Dummy Table can be JOINed in the View. This join will be
ignored by the View Driver (and not passed to the SQL engine)
because Joins are required to be on a single file driver. However,
as a special case, fields that have a name attribute, and include
the
AS clause,
will
be Projected, and mapped onto this dummy field. So, for example;
InvoiceView VIEW(Invoices)
PROJECT(Inv:Number)
JOIN(DummyTable,'')
PROJECT(Dum:Field1)
END
JOIN(LineItems,Lin:InvoiceKey,Inv:InvoiceNumber)
PROJECT(Lin:Id)
END
END
InvoiceView{PROP:Name,2} = 'SUM(lin:Quantity
* lin:price) AS InvoiceTotal'
Because the DummyTable has a different driver it is
"ignored" from the
SELECT statement.
However the
Dum:Field1 will be
populated by the
NEXT or
PREVIOUS
command to the value returned by the server (ie the SUM)
as
if it belonged to the parent file.
Note that in this example the calculation is not on the view table
itself, but on a joined table (LineItems). If you do this you will
need to add the child table to the View as well. If you do not
want to get a row returned for each line item, then make use of
the
Group By property to limit the view to
the fields in the parent table.
InvoiceView{prop:GroupBy} = 'Inv:Number'
Prop:NotDistinct
The opposite of distinct result sets, are sets where only records
that are duplicated are included.
This property
is not reset to false
except on a
OPEN ,
CLOSE
,
CLEAR and
CLEARPROPS.
If you wish to re-use the VIEW with this property off, then you must
set it to false (if you previously set it to true).
For example;
Say you wanted a list of all customers who made more than one
purchase in January. Your VIEW would look something like this;
VIEW(Invoice)
JOIN(Cus:PrimaryKey,Inv:CustomerId)
Project(Cus:Name)
End
End
This would return the customers for all invoices.
However if
View{prop:NotDistinct} = true
is set before the call to
SET(View)
then only records which would appear multiple times will be returned
by the NEXT loop. Each customer will only be returned once, but this
loop is telling you all the customers who have got multiple invoices
in the period.
There is one important point to note;
In the above example no fields are projected from the INVOICE table
into the VIEW. In traditional Clarion this means ALL the fields
would be projected (resulting in the NOTDISTINCT clause "failing"
since each invoice row is unique.) For this reason, when
prop:NotDistinct
is set to true ONLY explicitly projected fields are included in the
VIEW.
Equally, because the primary key values cannot be included in the
result set (since they would break NOTDISTINCT), paging on this
result set is not possible. The entire result set will be fetched
from the server.
See also
prop:Distinct.
Prop:Inner
In the View declaration you can use the attribute
,INNER
to signify that the Join should be an INNER join. Traditional
drivers also allow the property to be set in code, using
View{prop:inner,n} = 1
Object based drivers allow you to set the Join type in code using
the same syntax, however more join types are supported;
driver:LeftJoin   Equate(0) ! this is the current clarion default when not ,INNER
driver:InnerJoin  Equate(1) ! this is the Clarion ,INNER property
driver:RightJoin  Equate(2)
driver:FullJoin   Equate(3)
driver:CrossJoin  Equate(4)
driver:SelfJoin   Equate(5)
So, to set the join type to be a
Right Outer
Join you can set
View{prop:inner,n} = driver:RightJoin
Prop:GroupBy and Prop:Having
In SQL it's possible to group the rows of a result set together,
typically doing some calculation on them to extract some interesting
information. The view engine allows you to set these clauses in the
view's
SELECT statement.
In traditional drivers these have to use the aliased form of the
field name.
In object based drivers these are passed through the
Expression
Translator , so you can use Clarion field names or function
names in the properties. This makes writing them in code a lot
easier as the Clarion label for the fields can be used.
While a in-depth discussion of
GROUP BY
and
HAVING are beyond the scope of this
guide, it is well worth the effort to get familiar with them,
especially if you are generating reports. They are especially
effective at consolidating data in preparation for reporting.
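For example, here is a sketch that consolidates invoices per customer, returning only customers whose invoices total more than 1000. The view, field positions and names are illustrative, and prop:Having is assumed to be set the same way as prop:GroupBy;
InvoiceView{prop:Name,2}  = 'SUM(Inv:Total) AS CustomerTotal'
InvoiceView{prop:GroupBy} = 'Inv:CustomerId'
InvoiceView{prop:Having}  = 'SUM(Inv:Total) > 1000'
Set(InvoiceView)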
(Note: The construction of the VIEW when using GROUP BY can be
complicated by server-side restrictions. More documentation and
explanation on this topic needs to be added here.)
prop:NoKeyProject
VIEW{prop:NoKeyProject} = Driver:ProjectAllKeys     ! 0 ! default
VIEW{prop:NoKeyProject} = Driver:ProjectNoKeys      ! 1
VIEW{prop:NoKeyProject} = Driver:ProjectPrimaryKeys ! 2 ! recommended
When a traditional driver generates the SQL
SELECT
statement for a
VIEW, it automatically
includes
all key fields across
all keys across
all
files in the
VIEW.
This is equivalent to
prop:NoKeyProject=0
or
prop:NoKeyProject=driver:ProjectAllKeys
.
Since this can be somewhat wasteful if all those values are not
necessary, the traditional driver has the ability to suppress this by setting
prop:NoKeyProject=1.
If this property is set like this then
only
fields expressly added into the
VIEW
using
PROJECT will be included in the
output.
Setting this property to 1 has knock-on effects. Without primary key
values being projected (assuming they're not manually projected)
it's not possible to
PUT or
DELETE
VIEW records. It also breaks the
SET(key) /
NEXT(View)
pattern. And it breaks the use of
POSITION
and hence
REGET to load the
FILE records. This makes it useful for
report-type procedures that are read-only, but not useful for
browses and some processes, which can turn into writes (unless very
specific care is taken when constructing the
VIEW.)
This all or nothing approach is somewhat restrictive, and a middle
ground is likely more useful in most cases. In most cases including
all the key fields (for
all keys) is
unnecessary
[1] . But
Primary Key fields
are necessary.
Including just these fields allows the
VIEW
to behave normally in all respects, and also allows for paging,
updates,
POSITION,
REGET
and so on.
Object based drivers add this interim option by allowing
prop:NoKeyProject=driver:ProjectPrimaryKeys
. When set in this way primary key fields are automatically
projected, but fields from the other keys are not included
automatically. (They can of course still be included in the view as
an explicit
PROJECT )
Choosing a default approach for the object based drivers here is
difficult. The most compatible default is to do the same as the
traditional drivers, but that's also the least performant. Setting
it either way is a trivial driver setting, but that still leaves the
question of which to use if the driver setting is not set. On
balance, compatibility wins over performance, so it defaults to the
same behavior as the traditional drivers. However, changing the
setting to be
/NoKeyProject=2
is strongly recommended, especially with tables that have large
numbers of key fields.
Note
1: Projecting all the fields from all the keys prevents the
server from making use of
Covering Indexes, so in some cases this
performance hit can be large.
prop:Order
A
VIEW declaration can contain an
ORDER attribute. For example;
CustomerView
VIEW(Customer) ,ORDER('CUS:ID')
PROJECT(CUS:ID)
PROJECT(CUS:FirstName)
PROJECT(CUS:LastName)
END
The order can also be set at run time. For example;
Open(CustomerView)
CustomerView{prop:Order} = 'Cus:Lastname, Cus:Firstname'
However if there is no ORDER attribute, and no run time
prop:Order is set, then the
VIEW
makes use of the default order, as determined when the
VIEW
is opened. And that default order is inherited from the last
SET command issued on the primary file in
the
VIEW. For example;
Open(Customer)
Set(Cus:NameKey)
Open(CustomerView)
The Clarion template chain (known as the Legacy templates) follows
this pattern. The ABC templates make use of
prop:Order
to set the order as needed.
Note 1: This pattern does not extend to
prop:Filter . Using
SET(key,key)
before the
VIEW is opened does not set
the
prop:Filter for the
VIEW.
prop:OrderAllTables
Traditional drivers only generate the
primary
file's fields into the
ORDER BY clause
unless
prop:OrderAllTables has been
set to
true. However object based
drivers ignore this property (by treating it as permanently on). The
ORDER BY clause generated by the object
based view automatically orders
all the
tables in the view to avoid any inconsistent output.
PROJECTFIELD
ProjectField(VIEW pView, String pExpression,
String pAs)
ProjectField(VIEW pView, File pFile,Long pIndex)
ProjectField(VIEW pView, *? pField)
| Parameter |
Type |
Description |
| pView |
View |
The View label of the View to attach the field to. |
| pExpression |
String |
A SQL Expression. This could be just a string containing a
field label like 'Lin:CustomerID' |
| pAs |
String |
A name to use for the result column. If blank, and the
expression is just a label, then the SQL column name will be
used for the result column name. |
| pFile |
File |
The File in the View to project the field from |
| pIndex |
Long |
The field number of the field in the file |
| pField |
Any |
A field in any table in the view. This is not a string
containing the field name, but the field itself. |
The Clarion
VIEW structure consists of
the primary table, plus a collection of
JOIN
and
PROJECT statements. With
traditional drivers the fields projected in the
VIEW
are set at compile time. The object based drivers extend this to
allow the fields to be set at run time.
With traditional drivers only fields listed in the
FILE
structure can be included in the
VIEW
. That can be limiting because SQL allows calculated fields in the
SELECT statement. Object based drivers allow an
expression
AS name to be added
into the
VIEW .
A new command called
PROJECTFIELD
(Pronounced
Pro Ject Field) has been
added to allow the developer more flexibility in adding fields to
the
VIEW, and in controlling the
contents of these fields.
The field to project can be set in one of three ways; as a field, as
a field number for a specific file, and as a SQL expression and AS
name.
For Example;
InvoiceView VIEW(Invoice)
PROJECT(Inv:Terms)
JOIN(LIN:InvKey,SQ2Inv:Guid)
END
END
Open(InvoiceView)
BindField(InvoiceView,'Total',total)
ProjectField(InvoiceView,'Lin:Quantity *
Lin:Price','Total') ![1]
ProjectField(InvoiceView,'Inv:Date','')
![2]
ProjectField(InvoiceView,Invoice,3) ! 3rd field in Invoice
table ![3]
ProjectField(InvoiceView,Lin:ProductID)
![4]
ProjectField(InvoiceView,'AGE(Inv:Date)','Inv:Terms')
![5]
In the above example;
[1] : A calculation is done on each line item, and the result put
into Total (note the necessary
BINDFIELD
.)
[2] The Inv:Date field is simply added to the view (using the name
in a string.)
[3] The 3rd field in the Invoice table is added to the view.
[4] The Lin:ProductID field is added to the view (using the field
directly).
[5] The Inv:Terms field is already in the view - the expression for
that field is being set.
Calls to
PROJECTFIELD must be done
after the call to
OPEN and before
the view is
SET. Fields added via
PROJECTFIELD are cleared on
CLOSE
or
CLEARPROPS. Once added there is no
command to remove the projected field, other than by closing the
view and opening it again.
PUT(View)
The
VIEW structure supports a
PUT
command. It is not often used in code, however it is a powerful
feature, so it's worth understanding exactly what it does.
For both traditional and object based drivers, when you do a
PUT on a
VIEW
then only the changed fields are written to the database. In other
words, the driver optimizes the SQL
UPDATE
statement, limiting the fields being updated to the ones that were
changed. This makes for a very efficient way of writing changes to
multiple records, say in a Process Template procedure.
For fields which are projected (using
PROJECT)
in the
VIEW this makes sense. A record
is read, the program inspects the value, and then, if necessary,
changes the value and writes it back. There is a catch though. For
traditional drivers, if you set a field that's
not
in the
VIEW structure then it might,
or might not, be written to the database.
For example;
InvoicesView View(Invoices)
Project(Inv:Date)
End
Loop
Next(InvoicesView)
If Today() - Inv:Date > 30
Inv:Overdue = true
Put(InvoicesView)
End
End
At first glance this code is simple - check the age of each invoice,
and set it to overdue if it's older than 30 days. However this code
will fail if 2 invoices in a row are overdue.
Since
Inv:Overdue is not projected in
the
VIEW, the current value of this
field is not refreshed as it loops through the
VIEW.
Thus for the second invoice (which is overdue) the record is NOT
written, because the value of
Inv:Overdue
did not change.
Because the write may, or may not happen, depending on some earlier
value, this code is problematic. It appears to work, and will likely
work during development, but will fail (some of the time) in
production. This is not ideal.
For this reason object based drivers
ONLY
write fields which are specifically projected in the
VIEW.
Using this approach the code will either always work, or, if the
field is not in the
VIEW, always fail.
This means that programs with this sort of code, which could
potentially sometimes fail, will now always fail. Places where a
PUT on a
VIEW
is done should be inspected, and/or tested for this change. While it
is uncommon to find a
PUT on a
VIEW
in embed code (uncommon, but possible) the
Process Template
generates this code if the
Action For Process is
set to
PUT record. A simple utility template is
provided to assist in locating any procedures of this type in the
app.
For traditional drivers the Clarion help says;
PUT only writes to the primary file in the VIEW
because the VIEW structure performs both relational Project and
Join operations at the same time. Therefore, it is possible to
create a VIEW structure that, if all its component files were
updated, would violate the Referential Integrity rules set for the
database. The common solution to this problem in SQL-based
database products is to write only to the Primary file. Therefore,
Clarion has adopted this same industry standard solution.[1]
Object based drivers remove this limitation. If you do a
PUT
on a
VIEW, then all the tables in the
view are updated, if any of the fields for that record have been
updated. For example say we had a VIEW on Invoices and LineItems,
then
Lin:Quantity = 0
Put(InvoiceView)
would only update the LineItems record, not the Invoice record. Only
the Quantity field in the LineItems table would be updated, all the
other fields will be left as-is.
If fields in multiple tables are updated, then the write is done
inside a transaction. In this way either all the tables will be
updated, or none will be updated. A write could fail because of a
validity check failure.
A point worth highlighting is that writing a new value will not
change existing values in the received result set. In other words,
if looping through a view, and say a DATE field is updated, this
update will not be reflected while inside this (page of) the result
set. Put another way; the write to the database will not affect the
results of the previous
SELECT which is
currently being processed by the
NEXT
loop.
Finally, it is also worth noting that changing Primary Key values of
a row using a
VIEW PUT
is not recommended. Updating primary key values while looping
through a result set will likely lead to unexpected behaviors.
Note 1: It is worth noting that traditional SQL drivers do
partial updates. Only the changed fields are written on a
PUT.
So if you want to do child-file updates with traditional SQL drivers
then you can simply do a
PUT(File)
instead of a
PUT(View). In other
words;
Inv:Date = today()
Lin:QuantityDelivered = 5
Put(InvoiceView) ! Writes the Invoice record
Put(LineItems) ! Writes the LineItems record.
RECORDS
Records(VIEW pView),LONG
Records64(VIEW
pView),BIGINT
Records(VIEW pView, Long pNumber),BIGINT
Like the File
RECORDS
command, the View
RECORDS command has
been extended somewhat
[1].
All the properties supported by the
SET
command (including
Distinct,
NotDistinct,
Limit,
Offset,
Having,
GroupBy,
Filter and
Order)
are applied when a call to
RECORDS is
made. Thus the
RECORDS command should
return the number of rows that a following
SET
NEXT loop will return.
While
RECORDS can be called before the
SET command, if any parameters are
passed to the
SET command then those
are not included when
RECORDS is called
before
SET. So for maximum reliability
it is recommended that
RECORDS be
called after
SET.
It's worth noting that this call to the server is relatively
expensive, as the server has to calculate this value - it isn't
stored - so it should be called only when necessary.
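For example, a minimal sketch (InvoicesView and the Inv:Paid column
are illustrative names);
Open(InvoicesView)
InvoicesView{prop:Filter} = 'Inv:Paid = 0'
Set(InvoicesView)
count = Records64(InvoicesView)   ! the number of rows a following NEXT loop will return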
The
RECORDS function in Clarion is
declared as returning a
LONG. Since
databases can contain more than 2 billion rows, this may not be
sufficient space.
RECORDS64 returns a
BIGINT (which is a
REAL
at the time of writing this). The new variant of
RECORDS
(where a number is passed) also returns a
BIGINT.
If the Number parameter is passed, then the call behaves the same as
calling
SET(View,number). The number
specifies the number of fields in the ORDER which will act as a
starting position (or fixed position) for the result set. See the
SET command for more details
about this number.
The
VIEW must be open for the command
to work. If the
VIEW is closed an
error
NotOpenErr (37) is set.
Note 1: In
truth the
RECORDS(View) command is
mostly not supported in the traditional drivers. It is supported in
the object based drivers.
REGET
REGET(VIEW View, String Position)
REGET(VIEW View)
The
REGET command, when called on a
VIEW, does a
GET
on all the tables in the
VIEW. It thus
loads all the fields for that table record. This allows the program
to quickly load the
FILE record for
each table associated with the
VIEW.
The
Position parameter is a string
which was retrieved when the view record was loaded. In a browse
this position is stored in the queue, and this stored position is
used with the
REGET before, say, going
to a form.
The object based drivers extend the function call slightly by
allowing the Position parameter to be
omitted. If omitted then the current primary key values in the
FILE record are used.
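For example, a browse-like sketch (CustomerView and que:Position are
illustrative names);
que:Position = Position(CustomerView)    ! saved when the row was read into the browse queue
! ... later, before calling the update form ...
Reget(CustomerView, que:Position)        ! loads the full FILE record for each table in the VIEW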
SEARCH
SEARCH(VIEW View, String pSearchTerm)
The
SEARCH command supports the
VIEW structure as well as the
FILE
structure. For more general information on using it see
SEARCH.
Currently the search only applies to the Primary File of the
VIEW - ie the first file.
SET
Set(VIEW View, <Long Number>)
The Clarion documentation for the
SET(View)
command, when a number parameter is passed, is ambiguous. First it
says;
Number: An integer constant, variable or
expression that specifies the start position based on the first
number of components of the ORDER attribute.
Later it says;
The optional number parameter limits the SET to
assume that the values in the first specified number of
expressions in the ORDER attribute are fixed.
Testing shows that the first one is the one that is implemented
[1];
the number indeed acts as a starting position not a fixed value.
In the first statement though it goes on to say;
If omitted, all ORDER attribute components are
used.
This is inaccurate. If the number is omitted then the order is
simply the order, and none of the fields act as a starting position.
With traditional drivers the time when the value is used is also
inconsistent. For ISAM files (Topspeed et al) the value at the time
of the
SET command is used as the
starting position. For SQL files the value at the time of the first
NEXT or
PREVIOUS
command is used. Since in regular code this value is unlikely to
change between the
SET and
NEXT
this will not usually cause any difference in code outcome.
With all that uncertainty in mind, the following describes how the
object based drivers implement the
SET
command.
- If the Number parameter is
used and the value is greater than zero, then the Number
acts as a starting position for the view. (ie This number of
fields in the prop:Order act as a
starting position.)
In other words setting Number to 2,
means the first two components of the prop:Order
property contain starting-position
values.
This is consistent with how the
traditional drivers actually behave.
- If the Number parameter is
used and the value is less than zero, then the number acts
as a fixed position for the view. (ie The ABS(Number) of
fields in the prop:Order act as a
fixed position.)
In other words setting the Number to
-2, means the first two components
of the prop:Order property have fixed values.
This option is new - the traditional
drivers don't support negative numbers here.
- If the Number parameter is
used, then the values of the specified number of prop:Order
fields, at the time of the NEXT, are
stored and used when limiting the result set.
This matches traditional SQL drivers,
but not traditional ISAM drivers.
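For example, a minimal sketch assuming a CustomerView whose
prop:Order is 'Cus:Region,Cus:Name' (illustrative names);
Cus:Region = 'West'
Cus:Name   = 'Jones'
Set(CustomerView, 2)     ! starting position: begin at West/Jones, then continue through the order
! or
Set(CustomerView, -2)    ! fixed position: only rows where Region is West and Name is Jones
As described above, the values are taken at the time of the first NEXT.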
FILE Set
Hidden in the Clarion documentation is this small nugget;
Either a SET statement on the VIEW's
primary file before the OPEN(view), or a SET(view)
statement after the OPEN(view), must be issued to set the VIEW's
processing order and starting point, then NEXT(view) or
PREVIOUS(view) allow sequential access to the VIEW.
This means that instead of using the pattern;
Open(View)
Set(View)
Next(View)
you can use the pattern
Set(Key)
! Or Set(key,key) etc. Key is in View's Primary File
Open(View)
Next(View)
This pattern is only valid if there is no
SET(View)
command.
Set(Key)     ! this is completely ignored
Open(View)
Set(View)    ! ...because this SET exists
Next(View)
In this pattern setting of
prop:Filter
and
prop:Order is ignored.
Set(Key)     ! Or Set(key,key) etc. Key is in View's Primary File
Open(View)
View{prop:order} = 'whatever'    ! Ignored because there's no SET(view) command
View{prop:filter} = 'whatever'   ! Ignored because there's no SET(view) command
Next(View)
However if the VIEW has a
declared
ORDER and/or FILTER then this overrides the corresponding part of
the SET(key) or SET(key,key).
ProductView View(Product),FILTER('SQPro:Id > 40')
            End

Set(Key,Key)        ! The sequence part (the second key) is ignored because the View Declaration has a FILTER
Open(ProductView)
Next(ProductView)
ProductView View(Product),ORDER('Pro:Id')
            End

Set(Pro:NameKey)    ! This is ignored because the View Declaration has an ORDER set
Open(ProductView)
Next(ProductView)
Note 1: Not mentioned in the documentation is
that for traditional drivers,
BIND has
to be called on the field, or record, in order for this feature to
work. By contrast the object based drivers do not require the field
or record to be bound. (It doesn't matter if they are bound, it's
just not required.)
Subsets
Views support subsets, using the same commands and techniques as for
FILE subsets.
The subset is assumed to operate on the primary file of the VIEW.
prop:Sql
As mentioned earlier,
prop:SQL
can be used to return the last SQL statement executed. This can be
useful during debugging and performance testing.
In traditional drivers using
viewname{prop:Sql}
returns nothing. In order to see the most recent SQL for a
VIEW,
you must use
prop:Sql on the primary
FILE instead. So instead of
CustomerView{prop:Sql}
you would use
Customer{prop:Sql}.
With object based drivers this is not necessary. Use
viewname{prop:Sql}
to return the SQL for the
VIEW. The
prop:Sql for the primary
FILE
is not changed by the
VIEW.
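For example (CustomerView is an illustrative name);
Next(CustomerView)
lastSql = CustomerView{prop:Sql}   ! the SELECT most recently generated for the view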
View Performance
Views are used in different ways in the program. These differences
can radically change how the view engine processes the data.
Traditional drivers do not "know" how the view will be used, so it
has to cater for all eventualities. By contrast the object based
drivers introduce properties, which tell the view engine more about
how the data will be used. This allows it to optimize behaviour
internally to suit the use case.
Consider these use cases;
- Browses
- Browses typically only view a small part of the result set.
You may have a "browse on Customers" but only a small number
of customers are displayed at once.
- While the user might just scroll down, it's more likely the
user will adjust the filter, or locator, to find the record
they are looking for.
- From any position the browse can be scrolled up or down.
- The view is typically read-only (writes are done via the
file driver on a Form or via Edit In Place).
- Reports
- Reports tend to read through the whole result set, from
first to last, usually without interruption.
- The position of individual records is not important, the
report engine will not need to return to an earlier row in the
result set.
- The view is read-only, no writes are done.
- Processes
- Processes read through the entire result set, and then
typically do something with each row. That potentially
includes writing something back to the database.
- The process will not alter direction as the process runs.
These different use cases impact on the resources needed by the
server and the client. This in turn affects the performance of the
system, both of the program itself, and the database server (which
may be serving other people at the same time.)
Naturally the view engine defaults to the same approach that the
traditional drivers use - that of keeping all options open so that
it works regardless of the use case. That said, it's possible to
send some properties to the view to give it an indication of how it
will be used, and hence allow it to optimize internally for that
behavior. This can result in faster performance, and reduce resource
load on the server.
The properties, and their effects are listed below;
| Property |
Default Value |
View Behavior |
Use case |
Effect |
| prop:Paging |
true |
If true, then data is only
collected from the view a page at a time.
If false, the view will access
the whole result set (just once). Only set it to false where
the size of the result set is "small". Consider using |
Browses |
When true, this speeds up opening a browse as only a few
records are needed to fill the browse page. |
| prop:FetchSize aka prop:PageSize |
30 |
The size of the page fetched at a time. 30 is a good
number for browses (the number of lines on the initial
opening page). For reports and processes this number can be
higher; 300 or so is a recommended value. |
Reports, Processes |
There is some overhead in fetching a page from the server.
Fewer pages mean less overhead. But bigger pages mean more
time transferring data that might not be used. |
| prop:NoPrevious |
false |
The view will not use the PREVIOUS
command. |
Reports, Processes |
|
| prop:ReadOnly |
false |
The view will not use the PUT
command, nor will any file in the view use the PUT
or DELETE command. |
Reports, Some processes |
When true, the engine does not set the internal position
for each file.
|
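As an example, a report or process view might be primed like this
before processing (ReportView is an illustrative name);
ReportView{prop:ReadOnly}   = true     ! no PUT or DELETE will be done via this view
ReportView{prop:NoPrevious} = true     ! forwards-only processing
ReportView{prop:FetchSize}  = 300      ! larger pages suit reports and processes
Open(ReportView)
Set(ReportView)
Loop
  Next(ReportView)
  If Errorcode() then break.
  ! ... print or process the row ...
End
Close(ReportView)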
Filters
With traditional drivers, filters can only be applied to
VIEW
structures. Object based drivers extend this support to
FILE
structure
SET NEXT
loops. The information in this section applies to filters for either
FILE or
VIEW
structures.
Client-Side versus Server-Side Filters
Traditional Clarion drivers, have the ability to pass the filter to
the server for processing. This is good, you want as much filtering
as possible to be done for you by the server. With traditional
drivers if the filter is not server-compatible then the whole filter
is ignored, and the whole result set is passed back to the program
for client-side filtering and this includes the
Range.
Aside: There is only one filter sent to
the server. There is no actual
Range
(as suggested by the templates) in the
VIEW
syntax. The template
Range is a simple
way for the user to build up a filter, but that's all it is, a
filter construction mechanism.
If any part of the filter, including the range, is not compatible
with the server then the
whole filter
(including the range) is passed back to the client for processing.
This happens silently, and without warning. This can have
substantial performance implications and often leads to the common
myth that views are slower than files at retrieving data.
The correct way to deal with this situation (regardless of the
driver) is to separate out the incompatible parts of the filter, and
move those into client-side code (ie an embed point). Then the
compatible parts of the filter (and range) can be used by the server
and a smaller result set constructed. However in order to do this
developers need to be made aware of where it is happening.
The above applies to traditional drivers. Object based drivers
behave differently.
With object based drivers if the filter is not compatible with the
server then an error is generated (and optionally displayed.) It
will
not drop back to client-side
processing, and hence the result set will be empty. This will allow
the programmer to quickly spot the error and fix it, rather than
have the program "just run slow" consuming unnecessary client-side
AND server-side resources.
Important:
This is a significant change when it comes to VIEWs. In
traditional drivers if the filter is invalid (ie it cannot be
processed) then
ALL the records in
the View are returned in the result set. In object based drivers
this is reversed; if the filter cannot be processed then
NO
records are returned. As programs move to wider usage, especially
on the web (AnyScreen et al), this is an important security
feature.
If there is one major lesson here, it's that Server Side filters and
Client side filters must not be in the same place. Server side
filters belong in the template FILTER option, or in code, in the
prop:Filter (or
prop:SqlFilter).
Client side filters should be coded by hand in the appropriate embed
point inside the loop.
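As a sketch of that split (the field and function names are
illustrative), the server-compatible part goes into the filter, and
the rest is hand-coded inside the loop;
CustomerView{prop:Filter} = 'Cus:Region = ''WEST'' AND Cus:Active = 1'   ! server side
Set(CustomerView)
Loop
  Next(CustomerView)
  If Errorcode() then break.
  If ~MyComplexClarionTest(Cus:Notes) Then Cycle.   ! client side, in the embed inside the loop
  ! ... process the record ...
End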
Filter Functions
When writing filters in Clarion it is common to use Clarion built-in
functions in the filter. The object based drivers support this using
the
Expression Translator, and
include support for some additional built-in functions. Additionally
SQL users can make use of any Server Side functions supported by
their specific database engine. A list of supported Clarion
functions is listed in the section
Expression
Translator below.
Server-Side functions
In addition to the built-in Clarion commands (which are translated
for you) in object based drivers it is possible to use other SQL
commands directly when appropriate. For example the SQL function
REVERSE exists, and it has no direct
Clarion comparison. Nevertheless if you use
REVERSE
in a filter then it will be included in the filter passed to the
server.
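For example (the field names are illustrative, and REVERSE must be
supported by your database engine);
CustomerView{prop:Filter} = 'Cus:Active = 1 AND REVERSE(Cus:Code) = ''CBA'''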
If you use a SQL specific function in the filter, and some other
part of the filter is not server-compatible (with your database
engine), then the result set returned will be empty.
Make sure the filter is completely SQL compatible. Move
client-side filtering code into an embed point.
Additionally, the use of specific SQL Server functions in the
filter may be specific to that SQL database. If you port your
application from one database to another then the new database
may, or may not, support this function. So when using server-side
functions be aware that some potential future portability is lost.
Expression Translator
A number of commands (
DELETE,
UPDATE etc) take a Filter
parameter, or other parameters that support Expressions. Some
properties (
prop:Project,
prop:Filter
etc) can contain a mix of Clarion code and SQL code. As noted above
these expressions are passed through the Expression Translator
before being included in the SQL statement. This section of the
documentation explores these expressions in more detail and
documents the Clarion functions which are available to be used.
Functions
The following table lists the default support and translation for the
drivers. The specific syntax can differ between databases, and can be
overridden at the driver level, so consult the driver docs for any
deviations from this.
| Function |
Support |
SQL |
Comment |
| Abs(expression) |
OBD
|
ABS(expression)
|
|
| BeginsWith(substring,string) |
OBD |
string LIKE(substring%) |
ISAM drivers will need to Add and BIND a function called
BeginsWith(substring,field) for this to work client-side.
|
| Chr(expression) |
OBD |
varies |
Common translations;
CHR(expression)
CHAR(expression)
ASCII_CHAR(expression) |
| Clip(expression) |
All
|
RTRIM(expression) |
|
expression & expression
|
OBD |
CONCAT(expression,expression)
expression || expression |
MySQL,MSSQL,MariaDB use CONCAT
Oracle and SQLite use ||
PostgreSQL, DB2 support both.
|
| Date(month,day,year) |
OBD
|
varies |
|
| Day(date field) |
OBD
|
varies
|
From a field in the database extract the
Day-of-the-Month number. [1]
|
| EndsWith(substring,string) |
OBD |
string LIKE(%substring) |
ISAM drivers will need to Add and BIND a function called
EndsWith(substring,field) for this to work client-side. |
| Inlist(expression,string1,string2,...) |
OBD
|
expression IN (string1,string2,...)
|
|
| Inrange(expression,low,high) |
OBD
|
expression BETWEEN low AND high
|
SQL BETWEEN is inclusive of
low and high values.
|
| Int(expression) |
OBD
|
FLOOR(Expression)
|
|
| Instring(substring,string,step,start) |
All |
LIKE(%x%)
|
The step and start parameters to INSTRING are ignored,
and treated as 1,1. |
| Left(expression) |
OBD
|
LTRIM(Expression)
|
|
| Lower(expression) |
All |
LOWER(expression) |
|
| Match(first,second,mode) |
All |
Various depending on Match Mode, and Database engine.
|
Regular Expression matches are Driver dependent
|
| Month(date field) |
OBD
|
varies
|
From a field in the database extract the Month number. [1] |
| Null(field) |
OBD
|
field IS NULL
|
|
| expression ^ expression |
OBD |
POWER(expression,expression) |
|
| Round(expression,order) |
OBD
|
ROUND(expression,Order)
|
|
| Sqrt(expression) |
OBD
|
SQRT(expression) |
|
| Sub(expression,start,length) |
All
|
SUBSTRING(expression,start,length) |
Some databases are case sensitive, some are not.
|
| Today() |
OBD |
|
|
| Trim(expression) |
OBD
|
TRIM(Expression)
|
|
| Upper(expression) |
All
|
UPPER(expression) |
|
| Year(date field) |
OBD
|
Varies
|
From a field in the database extract the Year number. [1] |
Note [1]:
These functions can operate on SQL
DATE
fields, but also on
LONG fields
which contain a Clarion Standard Date. In addition they also work
on
TimeStamp fields and
TimeStamp_Date
fields.
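Putting a couple of these together, a filter like the following (the
field names are illustrative) is translated into the equivalent SQL
for the target database;
Customers{prop:Filter} = 'Upper(Cus:LastName) = ''SMITH'' AND Inrange(Cus:Balance,100,500)'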
Fields
Fields are used when constructing filters or other properties.
In traditional drivers these fields have to be specified using the
Clarion field name in the Clarion part of the property, and the
field Alias name in the SQL part of
the property (for example {prop:SqlFilter}).
Extending this behavior, object based drivers allow you to use the
Clarion field name, or the SQL field name (table.column) or the
View Alias name (a.field) in any part of the property. You can
also use a plain name without any prefix, and the engine will
match that first against column names, and if that fails then
it'll look for a field whose label matches that name.
Using the Clarion name is the most portable since then the
translation to the alias name is done for you by the Expression
Translator.
Using the Alias name is the least robust because it is susceptible
to changes in the VIEW declaration.
An extra JOIN added (or removed) to
the view can break the alias. Thus using the alias name is not
recommended, however it is preserved for backwards compatibility.
These names can be used regardless of the position in the property
- they can be used as parameters to Clarion or SQL functions,
used in expressions, and you can use both forms of the name in
different parts of the property if desired.
Note that GROUP fields do not exist
in the database, their "grouping together" is purely a Clarion
construction. Therefore you cannot use GROUP
fields in a filter expression. The fields in
a group exist in the database, and can be used in filters.
If you include a group in a prop:Project
then all the fields in the group are projected.
You can also use local variables in a filter. These will be
evaluated and the current value sent to the server when the SET function is called. These variables MUST be bound using BIND
in order for them to be available. If they are not bound, or the EVALUATE triggers an error, then the
property will be discarded and an error generated.
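A minimal sketch of using a bound local variable in a filter (the
names are illustrative);
minBalance = 1000
Bind('minBalance', minBalance)
Customers{prop:Filter} = 'Cus:Balance >= minBalance'
Set(Customers)    ! the current value of minBalance is evaluated and sent to the server here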
Dates
Clarion stores Dates in three different ways; using the
DATE
datatype, using a Clarion Standard Date value in a
LONG
datatype, and using a STRING(8) with a GROUP declared OVER it, for example;
TimeStamp
String(8)
TimeStamp_Group Group,Over(TimeStamp)
TimeStamp_Date Date
TimeStamp_Time Time
End
The object based drivers do a lot of work under the hood to ensure
that you can use any date column in the database (regardless of
whether it is a
DATE,
LONG
or over a Timestamp) in a filter. Equally it allows the
DATE,
DAY,
MONTH
and
YEAR functions to be used on
DATE,
LONG
and TimeStamp-String columns. The functions will also work on the
TimeStamp_Date column.
When comparing the
TimeStamp_Date
field to something, it is necessary to use the
TimeStamp_Date
name, not just
TimeStamp. So
Timestamp_Date = 'yyyy-mm-dd'
is preferable to
Timestamp = 'yyyy-mm-dd'
.
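For example, the date functions can be used in a filter regardless of
how the column is stored (Log:Date is an illustrative field; it could
be a DATE, a LONG holding a Clarion Standard Date, or a
TimeStamp_Date);
Log{prop:Filter} = 'Year(Log:Date) = 2024 AND Month(Log:Date) = 1'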
Performance Tips
The underlying driver performance is part of the equation to your
program performing well. However another layer of performance exists -
perhaps giving the driver more context, or making use of features and
behaviors available in object based drivers that are not in
traditional drivers. The goal of this section is to highlight changes
which, if implemented in your program, could improve performance.
Beta Testing
Comparing performance between two drivers is hard. Building
benchmarks to measure performance is harder.
For example when converting an application from the TOPSPEED driver
to a SQL driver it's often just assumed that the application will go
faster. In some places it will, but not in all places. In some
places code may need to be changed to allow the SQL driver to be
performant. Certainly SQL can go faster than Topspeed, and out of
the box it does so in many situations. But equally to maximize the
performance gains from the SQL driver some changes to the code may
be desirable.
Equally when comparing a traditional SQL driver with an object based
SQL driver it's hard to talk about performance as a single entity.
There are things the traditional driver will do faster, but the
object based driver may do more checking, be more reliable, or allow
for more variety in functionality. And there are some tasks the
object based driver will do faster.
Performance also involves the template set, the class code,
hand-code and the driver itself. Over the next period of time,
changes will be made to the template layer, and the class layer, so
that generated applications can best make use of the functionality,
and abilities of the new drivers.
Server performance is also a consideration. Requests made to a
server consume server resources, and that resource usage is seldom
factored in when building benchmark tests. The object based drivers
have explicitly set out to limit server-side resource usage, which
in the real world allows servers to perform better for all users,
even if the speed for lab-based, client-side, benchmark tests may be
slower.
Benchmarks should take into account the size of the database.
Generally speaking things slow down as the database gets larger. So
benchmarks on small data sets can deliver different results to the
same benchmarks run against large data sets.
Network performance also plays a huge role. Network performance is
made up of two factors; the size of the connection (aka the
bandwidth, which determines the raw speed of moving the data) and
the distance between the client and the server (aka the latency,
which determines the overhead time of each request.) Networks with
high latency benefit from fewer, larger, requests. Networks with low
bandwidth benefit from more frequent, smaller, requests.
Lastly it should be pointed out that performance is a process, not
an event. First it must be made to work right. Once that phase has
been achieved then it must be made to work fast. Since these drivers
are very new, they have not yet been optimized for performance. In
ad-hoc testing here they are faster in some tasks, slower in others.
Currently, the drivers therefore make no claims as to overall
performance. Naturally they should be "fast enough" for normal
usage, and users should not notice a dramatic speed gain, or loss.
In the longer run the goal will be to ensure that they perform as
quickly as traditional drivers for most tasks, and faster for some
tasks.
Performance Levels
From a File Driver perspective, there are fundamentally three
ways to improve performance in your application.
- L1 Using the correct File Driver,
Driver Settings, Database Configuration, templates and classes
to apply globally to the application.
- L2 The addition of code to the existing
code, which gives the driver more information about the task and
context at hand. For example by setting properties, or using
commands like BUFFER, STREAM,
or LOGOUT.
- L3 The refactoring of existing code to
make use of better commands, or new features, to
accomplish the root task.
In the documentation below suggestions will be marked with a
L1,
L2
or
L3 to indicate the nature of the change.
Level 1 changes are simple to apply, and can make application-wide
performance improvements.
Level 2 changes require some code review, but slow parts of the
program can be easy to identify, and with small changes can be
improved.
Level 3 changes require changing the existing pattern of code, and
identifying new patterns that are more suited to the database (this
is especially true for programs and programmers raised on ISAM but
newer converts to SQL.)
DELETE
The traditional way to delete a record in Clarion is to load it
first into the record buffer and then call the
DELETE
command. This obviously still works.
L2 However object based drivers allow you to
skip the READ and simply delete the record with an appropriate
filter. For example, if the primary key field is
id,
and the record id to delete is
10,
then instead of
cus:id = 10
Get(Customer,Cus:IdKey)
If errorcode() = NoError
Delete(Customer)
End
you can just write
Delete(Customer,'cus:id = 10')
If you wish to know the number of records deleted (in this case 1)
you can test
prop:affected
Delete(Customer,'cus:id = 10')
affected = Customer{prop:affected}
If no records were affected then the
Record Not
Found (35) error code is set. So this code would also work;
Delete(Customer,'cus:id = 10')
If Errorcode() = NoRecErr
! record not found
End
This (slightly) speeds up the delete of individual records.
L3 However if you are deleting records
in a loop then a more dramatic performance improvement is possible.
For example, say you're clearing out the logs, and so deleting all
records from January. Your code might look like this;
Log:Date = Date(1,1,2024)
Set(Log:DateKey,Log:DateKey)
Loop
  Next(Log)
  If Errorcode() then break.
  If Log:Date > Date(1,31,2024) then break.
  Delete(Log)
End
this can be replaced with
Delete(Log,'Date >= Date(1,1,2024) and Date <= Date(1,31,2024)')
Assuming there are a lot of log records, this speed improvement
could be very substantial.
Of course getting the filter wrong could be pretty bad, so you might
want to check, and test, this sort of statement well.
Explain
In Clarion, a
SET /
NEXT
loop works on a Result Set. This result set is constructed by the
database engine when the first
NEXT (or
PREVIOUS) after a
SET
command is encountered. This translates into a
SELECT
statement sent to the database engine.
When a
SELECT request, is sent to the
server, it is processed by the server, and the server then decides
on the best way to build the result set for the request.
Understanding what it is doing, and when it is doing badly, can be
critical to getting the best performance from a request.
Fortunately the database is willing to show the plan it decides is
optimal. If the request is slow, then being able to see the plan
gives the developer vital information as to how to improve the plan,
and in doing so, improve the performance of the request.
There are three different ways to trigger this explanation;
- Use LOGGINGON with
either no options, or the driver:explain
option
- Use prop:Explain to
turn explaining on. Remember to turn it off when done.
- Use the Driver Option /EXPLAIN
to set system-wide explaining on (or off).
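For example, a minimal sketch wrapping a single request in the
property (Movies and Mov:TitleKey echo the sample output below);
Movies{prop:Explain} = true
Set(Mov:TitleKey)
Next(Movies)                   ! the plan for the generated SELECT is produced here
Movies{prop:Explain} = false   ! remember to turn it off when done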
The result of the explain (the explanation) is sent to
DebugView++. The format will vary from one
database engine to the next. The magic words to look out for are
Table Scan or just Scan. Table Scans, especially on larger tables,
are slow.
For example, here is the output from SQLite, for a
SELECT
where a Key is available;
-- EXPLAINING:
----------------------------------------------
SELECT A.TITLE,A.YEAR,A.EXTRACT,A.GUID FROM Movies A WHERE
(A.TITLE > ?1 COLLATE NOCASE) OR (A.TITLE = ?1 COLLATE NOCASE
AND A.YEAR >= ?2) ORDER BY A.TITLE COLLATE NOCASE ASC,A.YEAR
ASC LIMIT 30 OFFSET ?3 ;
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-- ID | Parent | Description
-- 08 | 0 | SEARCH A USING INDEX
Movies_TITLEKEY (TITLE>?)
-------------------------------------------------------------
The explanation includes the
SELECT
statement and then the plan for executing it. In this case a
suitable key (
Movies_TITLEKEY) is readily
available.
Here's the same browse though, sorted by a field not in a key;
-- EXPLAINING:
----------------------------------------------
SELECT A.TITLE,A.YEAR,A.EXTRACT,A.GUID FROM Movies A WHERE
(A.EXTRACT > ?1 COLLATE NOCASE) OR (A.EXTRACT = ?1 COLLATE
NOCASE AND A.GUID >= ?2) ORDER BY A.EXTRACT COLLATE NOCASE
ASC,A.GUID ASC LIMIT 30 OFFSET ?3 ;
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-- ID | Parent | Description
-- 07 | 0 | SCAN A
-- 28 | 0 | USE TEMP B-TREE FOR
ORDER BY
-------------------------------------------------------------
In this case there is no suitable key, so a table scan is necessary.
(As evidenced by the line
SCAN A). So this
SELECT will be somewhat slower (depending
on the number of rows in the table.)
Adding a key here would make this request faster.
Adding more keys though does come with a cost. The size of the
database will get bigger, because that new key needs to be stored
somewhere. This is less of an issue than it used to be, since disk
space is cheap. In addition adding, or updating records, can be
slower since the additional key needs to be maintained at the same
time. So while adding more keys makes reads faster, at the same time
it makes writes slower. Usually this is a fair trade-off for most
tables since reading activity usually greatly exceeds writing
activity. However for large tables, or tables accumulating records
at very high speed (think IoT logging devices etc) fewer keys may be
preferable.
After generating the explanation the driver then processes the
command as normal, so the underlying command still works. Naturally
there is some performance overhead in this, but the debugging output
can be a critical link in understanding where performance is
sub-optimal.
Keys
The role of additional Keys (or Indexes in SQL parlance) is one
which draws many opinions. This variety is a side effect of several
factors - firstly that they behave quite differently in SQL compared
to ISAM, and secondly because the context of the table matters.
These factors combine to mean there is no "one true answer" to any
question about keys.
In ISAM the View engine, as coded in the file driver, is responsible
for selecting the key to use to fetch records from the disk. It
primarily uses the
ORDER of the view
to choose which key to use. Once it has selected the key, it then
applies the
FILTER to return the
result set. The Clarion templates push the programmer gently in the
direction of building browses and reports around a defined key, and
also offer filter construction assistance (ie a RANGE) around that
same key.
In SQL the task of filtering the records, and then ordering the
result set is passed to the database server. The database engine has
a query planner which takes the
ORDER BY
and
WHERE into account when selecting a
key. Primarily it uses the
WHERE clause,
not the
ORDER BY to determine which key to
use. Therefore in SQL it's important to add keys to match the
filtering the program uses.
A common SQL belief among Clarion programmers is that "SQL does not
need keys". There's a perception that a SQL database will generate
its own indexes as and when it needs them. It will automatically
identify common patterns and optimize itself to the needs of the
client program. That perception isn't entirely accurate. While some
databases do offer automatic indexing features to a limited extent,
SQL databases generally do not automatically create indexes based on
query patterns. Most databases collect statistics on table data and
use them for query optimization, but they don’t auto-create indexes.
For example PostgreSQL’s
ANALYZE command
updates statistics for the planner, but it won’t create new indexes.
Some databases suggest indexes based on workload analysis; for
example SQL Server's Database Engine Tuning Advisor or MySQL’s
EXPLAIN ANALYZE can help identify missing
indexes. But ultimately the Database Administrator (DBA) needs to
manually create indexes based on this information.
Again, context is important. If your database has a DBA (and the DBA
is willing to do the work) then you should leave the index creation
to the DBA. However if this is not the case, then the program is
responsible for creating its own indexes. And indexes are very
necessary to performance. The correct index at the right time
will vastly improve read performance of the data. This is especially
true for browses and reports. Use the EXPLAIN command to understand
how your database is using keys and when it is doing a table scan
(which likely means another key would be useful.)
Set / Next Loops
The simplest form of the SET/NEXT loop gives very little information
to the database about the result set you are expecting. By being
aware of the nature of your SET/NEXT loop, and using appropriate
properties, you can help the driver, and the database, optimize the
request for speed, and also for consuming fewer resources. This
section explores the thought behind which properties to set, and
when to set them.
| Question |
Action |
Reasoning |
| Are all the records from this position in the file going
to be used, and the total number of records in the table is
say under 5000? |
Set prop:Paging to false. |
As long as the result set will fit into memory, and as
long as all the records (or "most" of them) will be used,
this approach is the fastest. |
| Are you going to use the first records, up to some range
limit and then ignore all the records after that? |
Leave prop:Paging as true. Set a "reasonable" value for
the number of records you are expecting in prop:Limit |
Using prop:Limit reduces the
resources used by the server fetching information you won't
use. Fetching all the data in one page is ideal, but smaller
pages reduce wasted rows at the cost of more requests to the
database. Getting everything in 1 or 2 pages is a good target. |
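For example, a small reference table that is read in full might be
handled like this (Products and Pro:IdKey are illustrative names);
Products{prop:Paging} = false    ! fetch the whole (small) result set in one round trip
Set(Pro:IdKey)
Loop
  Next(Products)
  If Errorcode() then break.
  ! ... use the record ...
End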
STREAM / FLUSH
ISAM files make use of
STREAM and
FLUSH to speed up multiple writes. So for
actions like generating a number of records, or importing data from
another source these can have a large impact on performance. These
functions revolve around the way data is written first to disk
cache, and then ultimately to disk. By default ISAM drivers wait
until the disk is physically written before returning - this is best
for "knowing" that the write is permanent before continuing.
By using
STREAM the program would not
wait for this disk-write, and that improved performance by a LOT.
The cost of this speed-up was that
STREAM
required exclusive access to the file, which in turn blocked other
programs on the LAN from accessing the data. This was often a
problem which prevented
STREAM from
being used.
Since SQL drivers do not write to the disk (that's the job of the
SQL server) the traditional SQL drivers did not implement
STREAM
or
FLUSH.
Conceptually though, SQL databases do have facilities for improving
write performance. If they know that a group of writes will happen
in quick succession, they can make allowances, thus allowing the
performance to be better. While the exact mechanism may differ from
database to database the important thing is letting it know when the
batch write will be started, and when it is complete.
The object based drivers use
STREAM and
FLUSH for this purpose. Before doing a
batch update (ADD, APPEND, PUT, DELETE, UPSERT etc) call
STREAM.
Once it is complete call
FLUSH.
Stream(Invoices)
! do a bunch of writes to the Invoices table
Flush(Invoices)
There may be locks that the database creates during the update, so
it's a good idea to remember to call
FLUSH
once it's done. If not done then the
FLUSH
will be done when the table closes.
Result Sets
In some situations a common pattern is to generate a
SELECT
statement (either via a
SET or
SQL command, or via a (
now
deprecated)
Prop:Sql
statement) and get the results back into a queue. Code can then
easily move around the queue to perform functions and so on.
The object based SQL drivers have built-in support for this pattern.
There are a number of new commands which allow direct access to the
Results buffer. These commands are supported for both
FILE
and
VIEW structures.
While a result set is "linked" to the
FILE
or
VIEW that sent the command, the
command itself might not match the
FILE
or
VIEW structure. Attempting to read
the result via the
FILE record buffers
would likely result in type mismatches etc. For this reason it may
be advantageous to access the results directly using;
| Command |
Description |
FLUSH(File)
FLUSH(View) |
Clears the memory used by the result set. |
RESULTGET(File,Row)
RESULTGET(View,Row) |
Copies a row from the result set into the file record.
Use this to explicitly set the row number to a specific
value in preparation for calls to RESULTPREVIOUS
and RESULTNEXT. |
RESULTGET(File,Row,<Column>,<Null>)
RESULTGET(View,Row,<Column>,<Null>)
|
These commands allow direct access to a cell in the result
set. They return the contents of the cell. If the cell in
the result set is NULL then an
empty string is returned and the Null
parameter (if supplied, as a *Byte)
is set to true.
|
RESULTNEXT(File)
RESULTNEXT(View) |
Increments the Row number, and moves that row of data from
the result set into the record. If there are no more records
the ErrorCode() is set to BadRecErr (33). |
RESULTPREVIOUS(File)
RESULTPREVIOUS(View) |
Decrements the Row number, and moves that row from the
result set into the record. If the Row number is less than 1
then the ErrorCode() is set to
BadRecErr (33). |
RESULTROW(File)
RESULTROW(View)
File/View{prop:ResultRow} |
Returns the current Row number which was last copied to
the record. |
RESULTRECORDS(File)
RESULTRECORDS(View)
File/View{prop:ResultRecords} |
Returns the number of rows in the result set. |
The results buffer is populated by a number of commands, but the
most common usage would be with a SET / NEXT or SET / PREVIOUS
pattern. Setting
prop:Paging = false
would allow a single SET / NEXT to retrieve the whole result set.
The commands
ResultGet,
ResultNext,
ResultPrevious move a row from the
result set into the file buffer(s). Since at this point the result
set is already in memory, it behaves at memory related speeds.
The result set is cleared on a
SET
command. It may also be cleared on a
NEXT
or
PREVIOUS command (if the
SET
is in paging mode, and a new page is about to be fetched.)
Typically it's "somewhat faster" to let the driver clean up the
memory as it needs to, however if you retrieve a large result set,
and you have finished it, and wish to free the memory it used, then
call
FLUSH.
The number of records in the result set (or in this page of the
larger result set) can be obtained using the
RESULTRECORDS
command. The current position in the result set (or in this page of
the larger result set) can be obtained using the
RESULTROW
command.
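A minimal sketch of the pattern (Customers and Cus:NameKey are
illustrative names);
Customers{prop:Paging} = false
Set(Cus:NameKey)
Next(Customers)                             ! fetches the whole result set into memory
Loop r# = 1 To ResultRecords(Customers)
  ResultGet(Customers, r#)                  ! copy row r# into the file record buffer
  ! ... work with the Cus: fields at memory speed ...
End
Flush(Customers)                            ! free the memory used by the result set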
Unstructured Data
Unstructured Data is an overloaded term, and can mean different
things in different contexts. The following contexts have an impact
on the way you use unstructured data, and the performance
implications of each approach.
Client making an adhoc request
that does not match a FILE or VIEW structure
In this case the program makes use of the
SQL
function. It passes a custom SQL command to the server and gets a
response. This might be an adhoc SELECT, or a call to a Stored
Procedure, or whatever. The important thing though is that this
request does not match up to any
FILE
or
VIEW structure in the program.
In the past a generic structure was used to hold the results from
such a call. Typically it was a
QUEUE
with some fixed number of fields, where the number and size of
those fields were (arbitrarily) large enough to hold the returned
result set.
Object based drivers offer two alternatives to this approach;
Firstly, the result set is automatically retrieved and stored in a
Result Set object for you. This object can then be queried
directly using the
RESULTGET
functions which allow you to specify a specific cell in the
result. For example;
SQL(Invoice,'select count(*) from Invoice where Paid=1;')
ans = ResultGet(Invoice,1,1)
A second approach is to turn the result set into a single JSON
object, or array, and return that to the program as a large
string. The programmer can then feed that string into their
favourite JSON parsing class, and work directly on the JSON tree.
Debugging
Introduction
When your code does not give you the result set you wanted, it
usually means you've done something wrong. However discovering what
that mistake is can be frustrating because your code goes through
the driver, and morphs into something (perhaps not what you
intended) in the request to the back end. When that database
is SQL then debugging the issue is not as simple as say firing up
TopScan to look at the table.
The drivers provide a number of ways to give clarity as to what
exactly is happening with the command. Understanding these features
is a key part to quickly, and effectively, identifying your mistake
and moving forward.
Properties
A number of properties exist, which can be queried immediately after
the call. These are the easiest things to check first, and usually
show the problem immediately. In the case of a
SET
these can be examined after the
NEXT or
PREVIOUS command, and apply to both a
FILE and a
VIEW.
| Property |
Description |
| prop:GroupBy |
|
| prop:Having |
|
| prop:OrderBy |
This is the ORDER BY Clause in the SELECT statement that's
sent to the database. It is constructed of the key fields
(if SET(key) is used), the ORDER attribute of the view (aka
Prop:Order), and Prop:SqlOrder. The order is also
supplemented with the Primary Key fields of all tables in
the View, or the Primary key fields of the File. |
| prop:Project |
|
| prop:Where |
|
| prop:Sql |
This is the entire SQL statement. Assigning this into a
string allows you to immediately examine the SQL statement
generated by the driver. Note that some file driver commands
(like CREATE) end up generating
multiple SQL commands. In this case the prop:Sql
returned is a semi-colon separated list of commands.
str.SetValue(Customers{prop:Sql})
or
s = Customers{prop:Sql}
|
Explain
SQL databases support a command called
EXPLAIN
which allows the database to pass information back to the Clarion
program which "explains" the method the server used to build the
result set.
The output varies wildly from one database engine to another, and
some familiarity with the output from your preferred database engine
is necessary in interpreting the result.
When
prop:Explain is set
before a command, then the driver will ask the server for an
explanation, but then will also go ahead and execute the command. So
the program will continue working.
Logging
File Drivers allow
logging. This allows the
communication between the program, and the database to be logged,
and inspected, which in turn can help isolate inefficiencies and
errors. If you are used to using
TRACE.EXE
then this is a good section to read.
Alert: Logging data flowing between the
program and the server can be a security risk, because that data is
logged in plain text. Therefore if at all possible, do not log data.
Logging commands is not a problem, but logging data "leaks" that
data out of the database.
Traditionally, the output of the logging can go to one of two
places; either to
DebugView++ (or equivalent) or a TEXT file on
the disk.
Traditionally the option to turn logging on and off can be done
either through an INI file interface, or via code in the program.
The INI interface is highly convenient (and can be set and unset
using the TRACE.EXE program.) However
because the INI interface is a security risk, this feature has to be
enabled in your program. See
logging for
more details.
Custom Field Types
Introduction
Object based drivers allow a custom class to be assigned to
a field. This allows for ad-hoc support for database field types
which may not exist in Clarion, or be natively supported by the
driver.
For example, MsSQL databases can make use of the
GUID
data type. This is a 16 byte binary string type. You can just import
it into your dictionary as a 16 byte string, but since it is binary
it's not terribly useful like that. It would be more useful in the
dictionary defined as a 36 byte text string, consisting of the
binary value hex-encoded, plus some separator hyphens. Custom Fields
allows you to create this alternative representation of the field,
and acts as the "encoder / decoder" between the Clarion program, and
the database.
Of course this technique is not limited to GUID fields. That's just
an example. This technique can be used to translate from any field
type in the database to a suitable field type in the dictionary.
Importantly these custom fields exist in your program,
not
in the driver itself. So they act as a way for the program to
"extend" the driver, without having to adjust the driver code. If
the driver supports this field type natively, then there's no need
to make use of this custom field interface. (As it happens, the
drivers
support GUID fields natively, thus
rendering this example moot.)
A custom field class implements the
iDriverField
interface, which is declared in
CWDriver.Int
. Examples of custom field types exist in
DriverFields.Inc
and
DriverFields.Clw. As mentioned
above these files are not compiled into any driver, they are
compiled into your program.
Using a
Custom Class
- In the dictionary declare the variable (typically a string)
large enough to contain the program version of the field. In the
GUID example, the dictionary string would be a String(36)
or a Cstring(37).
- Add an Include to the global embed point, After Global
Includes, to include your class.
Include('DriverFields.Inc'),Once
- If safe to do so, declare a global, threaded, object of the
custom class. In some cases (like in the example DriverFields.Inc)
these classes are re-entrant, and thread safe, so you can
declare a singleton of them in the INC file. In the DriverFields.Inc
file an object (DriverGuid) of the
class DriverGuidClass is created
for you.
- Assign the interface, from the object, to the field.
Note: This
affects ALL instances of this FILE
structure, on all threads, for the life of the program
Use the new command SETCUSTOMTYPE
SetCustomType(Customers,'Cus:Guid',DriverGuid.iDriverField)
The second parameter can either be the field name (in quotes) or
the field number.
The third parameter is the iDriverField
interface, as implemented in your custom class.
Creating
a Custom Class
- Create an INC file, and declare a class that (at least)
implements the iDriverField
interface;
map
end
include('CWDriver.Int'),once
DriverGUIDClass Class, Implements(iDriverField), Type,
MODULE('DriverFields.Clw'), LINK('DriverFields.Clw',1), DLL(0)
End
! if the class has no properties, then a single global object (singleton) can be used for multiple fields
DriverGuid DriverGuidClass
- Create a matching CLW which fleshes out the interface methods;
!-------
DriverGUIDClass.iDriverField.GetFieldType
Procedure()!
! return a unique text identifier for the field type
code
return 'GUID'
!-------
DriverGUIDClass.iDriverField.FromRecordField
Procedure(Byte pFieldType, Long pFieldAddress, Long
pFieldLength, Long pWriteAddress, Long pWriteLength)
code
! your code goes here. This moves the field from the Clarion record structure into the format as it will be stored in the database
return 16 ! the number of bytes in the output
!-------
DriverGUIDClass.iDriverField.ToRecordField
Procedure(Byte pFieldType, Long pFieldAddress,Long
pFieldLength, Long pSrcAddr, Long pSrcLength)
code
! your code goes here. This moves the data from the database format into the format declared in the FILE structure (ie in the Clarion Dictionary.)
return self.iDriverField.GetSourceSize()
!-------
DriverGUIDClass.iDriverField.ClearRecordField
Procedure(Byte pFieldType, Long pFieldAddress,Long
pFieldSize,String pValue)
str string(16)
code
! your code to clear the field goes here - eg str=''
Return
self.iDriverField.ToRecordField(pFieldType,pFieldAddress,pFieldSize,address(str),size(str))
!-------
DriverGUIDClass.iDriverField.IsRecordFieldClear
Procedure(Byte pFieldType, Long pFieldAddress,Long
pFieldSize,String pValue)
str string(16)
code
! your code to check if the field is clear goes here
Return True ! or False
!-------
DriverGUIDClass.iDriverField.GetSourceSize
Procedure()
code
Return 16 ! The byte length of the data in the database
! ---------
DriverGUIDClass.iDriverField.GetSQLType
Procedure() !,STRING
code
return 'CHAR(16)'
The complexity of the code in these methods depends on the
complexity of the field structure in the database, and the
difficulty (or simplicity) of converting that to and from the
Clarion record structure.
Considerations
Calling
CLEAR(File) is a file driver
function. This is good. This call goes to the file driver, which in
turn can then clear the file record. It does this on a
field-by-field basis - meaning that custom fields can be cleared
with the interface
ClearRecordField
method.
By contrast calling
Clear(Fil:record)
is a simple call to
CLEAR a GROUP. This
call does
not go through the driver.
In this case the Record fields will be cleared as if they are simple
Strings, or Longs etc.
Also note that (despite the Clarion documentation) in Clarion it is
not possible to
CLEAR(Table,n). The
CLEAR interface method includes this
parameter in the
iDriverField
interface in case this changes in the future, but for now the
parameter is always
0.
Writing a Driver
Drivers
Included
-
DOS2
An example of a flat-file driver. This driver mimics the DOS
file driver, with some additional functionality.
Experimenting with this driver is a good starting point for
understanding how to build, register, update and deploy a
driver.
-
SQLite2
An example of a SQL driver. This implements the SQLite database
using the SQLITE3.DLL.
-
InMemory2
This is an example of a Derived driver. This driver derives from
the SQLite2 driver.
Build
Utilities
There are two parts of the build process which are somewhat
different to a regular Clarion project build. This section explains
them, and also explains the utilities that are used to overcome
them.
Export Files / Pre-Build Command
The first issue is simple, and easily solved. When creating
classes in hand-code, and then pre-compiled into a DLL, an EXP
file needs to be maintained so that those classes correctly export
the methods in the class. In an APP system these EXP files are
generated for you (and you may not even be aware of them) but when
there's no APP they have to be maintained manually.
Since maintaining these files by hand is a pain, a utility is
provided (in
\clarion\accessory\driver\obdexport)
called
ObdExport.Exe. This is a
command line program which reads in an INC file and creates a EXP
file for you. As it is a command line program it can be called
automatically as part of the build process. For the included
drivers a (different) BAT file containing a call to this utility
is included in the
\clarion\accessory\driver
folder.
For example, for the DOS2 driver, the file
ExportDos2.Bat
exists. This has been added to the
CLADOS2.cwproj
file as a
Pre-Build Event Command Line.
Because pre-build command lines start in the
\Clarion\bin
folder (not the BAT file location) the Pre-Build Command Line is
set to
$(ProjectDir)ExportDos2.Bat
Inside the BAT file are two lines of code;
cd /d "%~dp0"
..\bin\obdexport.exe -i=.\Source\Dos2DriverClass.Inc
-e=.\Source\ClaDos2.Exp -l=CLADOS2 -b=7E0000H -d=DOS2
The first line changes the "current drive and directory" to the
BAT folder (typically
\clarion\accessory\driver).
The second line calls the ObdExport program, which is in the
\clarion\accessory\bin folder.
The options for the ObdExport program are as follows
| Command Line Switch |
Meaning |
| -i |
The name of the INC file to parse |
| -e |
The name of the EXP file to create. |
| -l |
The Library name. This is used in the first line of the
EXP file as
LIBRARY 'whatever' GUI |
| -b |
The Image_Base of the library. This parameter is
optional. It should be unique for each DLL you create. |
| -d |
If the DLL being created is a File Driver, then it needs
to export two functions first (for the IDE to use.) This
option specifies the name of the driver. It is not used
for the general DLL's (CLASTR and CLAOBD). |
When ObdExport runs it sends information to
DebugView++, so you can see it is working
correctly. This is also helpful to understand where it may be
going wrong if there is a problem.
The source code for ObdExport is in
\driver\obdexport,
however you should not need to recompile this utility.
Patching / Post-Build
Command
The second difference is very technical
[1],
and again a utility is provided to resolve it. The utility is
called
ObdPatch.exe, and again a BAT
file is in the driver folder to call it for each driver. The BAT
file is called from the project, as a
Post-Build event
Command Line.
PatchSQLite2.Bat
Inside the file are two lines of code
..\bin\obdPatch.exe -d=clalit2.dll
-c=.\source\SQLite2Driver.Clw -w=1
copy .\clalit2.dll ..\..\bin
The first line calls the ObdPatch.Exe and the second copies the
DLL to the
\clarion\bin folder.
The options for the ObdPatch program are as follows
| Command Line Switch |
Meaning |
| -d |
The name of the DLL file to edit |
| -w |
If set to 1 then the new address is written directly
into the DLL file |
| -c |
The name of the CLW file containing the TDescAddress
equate. If this option is set then the CLW file will be
updated. |
When ObdPatch runs it sends information to
DebugView++, so you can see it is working
correctly. This is also helpful to understand where it may be
going wrong if there is a problem.
The source code for ObdPatch is in
\driver\obdpatch,
however you should not need to recompile this utility.
Note
1: The technical part is not really interesting. But
essentially the DLL needs to have an address hard-coded into it at
compile time. The compiler cannot do this, so it's done by the
utility after the DLL is compiled. And the source code is updated
with the address. In the source code the address is an equate;
TDescAddress Equate(007e7140h)
GPF's
Due to some cleverness by the compiler, some code in the driver will
not GPF in the usual way
[1].
Rather, if the code in the driver GPF's then the call will terminate
at that point, and the code will continue to run from the next line
in your program. The
ErrorCode() will
be set to 90 (
File System Error)
and
FileError() will return the stack
trace.
This is a fantastic feature because it isolates your program from
GPF's caused in external libraries which are used by the driver (ie
libraries supplied by the database supplier.) However it can make
writing drivers challenging, because useful GPF's are suppressed.
For some kinds of errors, while constructing the driver, it may be
advantageous to call the method directly (thus bypassing the GPF
protection) in order to get an appropriate GPF, which will then lead
to the source of the error.
Note 1: If the call
is a new command, one not present for traditional drivers, then it
will likely GPF as normal. However if it's property related, or a
command used by traditional drivers, then it won't GPF.
Object Based Drivers
Drivers based on these classes are colloquially known as object
based drivers. This is because they are built using a set of base
classes which in turn makes building a driver for a specific
database easier, requiring only the code that makes that database
unique. The following base classes are provided in
DriverClass.Clw,
and defined in
DriverClass.Inc.
Interfaces are declared in
CWDriver.Int.
| Class |
Purpose |
| DriverBaseClass |
Used to share functionality between the FILE classes and
the VIEW classes. Not usually used directly. |
| DriverFileClass |
A class which contains common driver functionality. Derive
from this class if you are making an ISAM (TPS-Like) or
Flat-File (DOS-Like) driver. |
| DriverFileSQLClass |
Based on the DriverFileClass, but with SQL specific code.
Derive from this class if you are making a SQL driver. |
| DriverResultSetClass |
SQL drivers have the ability to hold a result set (client
side) which can then be iterated through using NEXT
and PREVIOUS. This result set
is stored by a separate class, and access to this result set
is accessed through the iResultSet interface. The DriverResultSetClass
is a default implementation of this interface, however it's
possible to replace this class if necessary (perhaps to
improve performance, or whatever.) |
| Interface |
Purpose |
iDriverField
|
This interface allows for custom field
types to be added at the application level, external
to the driver.
|
iResultSet
|
To avoid the need for cursors on the SQL side, result sets
can be moved to the client side and stored in a ResultSet.
This result set can then be iterated on using NEXT
and PREVIOUS. Note that the
size of this result is usually limited to the prop:FetchSize.
The prop:FetchSize can also be
set by the first parameter to the BUFFER
command.
As with the custom field types, the Result
property in the class can be set externally to the driver to
any implementation of the interface.
|
Deriving a File Driver, and applying it
to a FILE Structure
In most situations the supplied object based driver will be
sufficient for your programming needs. This driver has been
registered and you use it like any other driver. However
since the driver is supplied as source code, and as a class, it's
possible to inspect the code the driver is using, and, if you
like, derive that class with your own class, and set one (or
more) tables to use your derived class.
For example, say you wanted to add functionality to the DOS2
driver, overriding the ADD method(s). This is a straight-forward
process.
1. Create a Derived Class
Derived classes consist of two files, an INC file and a CLW
file. These files belong in your \Clarion\Accessory\Drivers
folder.
While the nuances of creating classes are beyond the scope of this
document, a simple INC file looks like this;
  include('Dos2DriverClass.Inc'),once

DosReadOnlyClass    CLASS(DriverFileDos2Class), Type, MODULE('DosReadOnlyClass.Clw'), LINK('DosReadOnlyClass.Clw', YourLinkMode), DLL(YourDLLMode)
ADD                   PROCEDURE(),DERIVED
ADD                   PROCEDURE(LONG pLength),DERIVED
                    END
Then in the CLW you flesh out the methods;
member()
map
end
include('DosReadOnlyClass.Inc'),once
include('errors.clw'),once
!-----------------------------------------------------------------
DosReadOnlyClass.ADD
PROCEDURE()
Code
Self.SetError(NoDriverSupport)
!-----------------------------------------------------------------
DosReadOnlyClass.ADD
PROCEDURE(LONG pLength)
Code
Self.SetError(NoDriverSupport)
2. Use the derived class in your program
In global Includes
include('DosReadOnlyClass.Inc'),once
In your project settings
YourLinkMode and YourDLLMode need to be set
appropriately.
Declare an object. Put it in the same scope as your FILE
structure. (So usually GLOBAL).
ImportReadOnly DosReadOnlyClass
At a suitable point in your code (presumably before you start
using the file, or at the moment when you want to switch it to the
new derived class)
Import{prop:Object} &= ImportReadOnly
In the above line Import is the label of the FILE
structure.
If the FILE is threaded (,THREAD)
then your object should also be ,THREAD
and this assignment needs to be done on a per-thread (when
desired) basis.
Creating a New Driver
The following are a set of tips for writing a new driver.
- Your driver class will derive from either an existing driver,
or the supplied DriverFileClass and DriverViewClass classes.
- Starting from a "blank" driver is useful; the DoNothingDriver
is recommended. It contains the basic boilerplate code (calls to
clear the error, call the parent, and so on). Todo comments
assist in identifying the places where you will need to write
code. Edit these files - give them a new name, new class name,
new LinkMode and DLL Mode.
- Copy in the Driver.Clw file, and rename and edit
appropriately.
- If you derive any of these methods you MUST call the parent
method AFTER your code (regardless of whether they worked or
not);
CLOSE, GET,
NEXT, PREVIOUS,
REGET.
Note: Even if you don't support some
of these operations, or some flavors of the operations, then you
likely DO want to derive these methods, calling the parent and
then setting
Self.SetError(NoDriverSupport),
since these parent methods do not set this error.
- If you derive any of the following methods you MUST call the Self.PreRead(OpCode) method BEFORE your
code:
GET, REGET,
NEXT, PREVIOUS. If
ErrorCode() is set after this call then you should abort the
read.
- If you derive any of the following methods you MUST call the
parent method BEFORE your code;
OPEN, PUT,
ADD, APPEND,
DELETE, SET,
RESET. You should abort the call if
the parent method sets the ErrorCode().
Note: Even if you don't support some
of these operations, or some flavors of the operations,
then you likely DO want to derive these methods, calling
the parent first, and then setting
Self.SetError(NoDriverSupport),
since these parent methods do not set this error.
- If you derive any of the following methods you MUST call the Self.AfterWrite(OpCode) method AFTER
your code, even if the write is not successful and an error is
generated;
PUT, ADD,
APPEND, DELETE. (A sketch illustrating this pattern appears
after this list.)
- Your new class needs a copy of the ReplaceObject
method. Copy this from DriverClass.Clw.
Change the type of the old field so that it uses your class
name, not the DriverFileClass name. That's the only change
required. Do not call the parent method.
Dos2DriverClass.REPLACEOBJECT   PROCEDURE(Long pOldObjectAddress)
old   &DriverFileDos2Class
  Code
  If pOldObjectAddress <> 0
    old &= (pOldObjectAddress)
    self :=: old
    dispose(old)
  End
  self.ClaFCB.rblock = address(self)
  Return
- Derive the GetPROP:Driver method
to return the name of your driver.
DriverFileDos2Class.GETPROP:Driver   PROCEDURE()!,STRING
  code
  return 'DOS2'
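As an illustration of the write rules above, here is a minimal sketch of a
derived ADD method (the class name is hypothetical, and the exact AfterWrite
parameter may differ in your driver);
MyDriverClass.ADD   PROCEDURE()
  Code
  Parent.ADD()                      ! call the parent BEFORE the derived code
  If ErrorCode() then Return.       ! abort if the parent set an error
  ! ... driver specific ADD code goes here ...
  Self.AfterWrite(Opcode:ADD)       ! call AfterWrite AFTER the derived code, even if it set an error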
Compiling a Driver
Compiling the drivers is fairly straightforward, but there is a
wrinkle.
The IDE, and specifically the Dictionary Editor, reads the DLL to
determine the supported data types and so on. To do this it doesn't
"Load" the DLL in the usual fashion, but rather reads the DLL
directly. One of the values it reads is the
DriverGroup
structure, and specifically the
tDesc
property in that group, which is initialized to an address.
The compiler though cannot prime groups with addresses. To solve
this problem a small utility program (
ObdPatch.Exe)
exists. This utility inspects the compiled DLL and tweaks the file so
that the address is set correctly. It is therefore called
at the end of the compile, as a Post-Build event, to set the correct
value into the DLL file.
Additionally the utility will update the CLW file, by setting the
TDescAddress equate. Although this doesn't
make much difference for this compile it means that the value is set
correctly for the next compile. It's worth noting that the value can
potentially change from one compile to the next (if the source code
changes.)
Example;
c:\clarion111\bin\obdPatch.exe -d=cladnd.dll -c=DoNothingDriver.Clw -w=1
Commands Summary
In the context of Deriving classes, and deriving methods, there is
some parent code which is attached to each method. This table acts
as a reference for deriving these methods.
| Method |
Required
File
State [1] |
When is Parent called |
Notes |
| ADD |
Open |
Before derived code |
Call parent first and abort if error.
Derived class must call AfterWrite method after derived
code.
|
| ADDRELATION |
Either |
|
Handled in DriverFileClass, so no need to derive. |
| APPEND |
Open |
Before derived code |
Call parent first and abort if error.
Derived class must call AfterWrite method after derived
code. |
| BINDFIELD |
|
|
Handled in DriverViewClass, so no need to derive. |
| BOF |
Open |
Don't call parent |
No code in base class. Defaults to NoDriverSupport
(80) Error.
Derived class should call _checkopen, and abort if error.
|
| BUFFER |
Open |
Before derived code |
Call parent first, and abort if error. |
| BUILD |
Either |
Don't call parent |
No code in base class. Defaults to NoDriverSupport
(80) Error.
|
| BYTES |
Either |
Before Derived Code |
SQL Drivers don't need to derive this method, the
DriverSQLClass contains the code.
|
| CLEARFILE |
Either |
|
Handled in DriverFileClass, so no need to derive. |
| CLEARPROPS_ |
Either |
As Appropriate |
|
| CLOSE |
Either |
After derived Code
|
Derived code should Self.SetError(NoError)
at start. If called when the FILE is already closed, then it
has the effect of clearing all the object properties and
buffers.
|
| CONNECT |
Closed |
Don't call parent |
No code in base class. Defaults to NoDriverSupport
(80) Error. |
| COMMIT |
Open |
|
|
| COPY |
Either |
Don't call parent |
Derived code should Self.SetError(NoError)
at start.
No code in base class. Defaults to NoDriverSupport
(80) Error.
|
| CREATE[2] |
Closed |
Before derived code |
Call parent and abort if error. |
| CREATEDB |
Closed |
Before derived code |
Call parent and abort if error. |
| DELETE |
Open |
Before derived code |
Call parent first and abort if error.
Derived class must call AfterWrite method after derived
code. |
| DISCONNECT |
Either |
Don't call parent |
No code in base class. Defaults to NoDriverSupport
(80) Error. |
| DUPLICATE() |
Open |
|
No need to derive. Calls Duplicate(key) for each key in
turn.
|
| DUPLICATE(key) |
Open |
Don't call parent |
No code in base class. Defaults to NoDriverSupport
(80) Error. |
| EMPTY |
Open |
Before derived code |
Call parent and abort if error. |
| EOF |
Open |
Don't call parent
|
No code in base class. Defaults to NoDriverSupport
(80) Error. Defaults to returning false. |
| EXCEPT |
|
|
|
| EXISTS |
|
|
|
| EXPORT |
|
|
|
| FROMQUEUE |
|
|
|
| FLUSH |
Either |
Before derived code |
Called by CLOSE. |
| FREESTATE |
Either |
|
Drivers don't need to derive this method, the
DriverFileClass contains the code. |
| GET |
Open |
After derived code |
Derived class must call BeforeRead method before derived
code, and abort if error.For forms of GET which are not
supported do not derive, the parent class will generate a NoDriverSupport (80) Error. |
| GETNULLS |
Either |
|
Drivers don't need to derive this method, the
DriverFileClass contains the code. |
| GETRANDOM |
|
|
|
| GETSTATE |
Either |
|
Drivers don't need to derive this method, the
DriverFileClass contains the code. |
| HOLD |
Open |
Don't call parent |
No code in base class. Defaults to NoDriverSupport
(80) Error. |
| INTERSECT |
|
|
|
| JOINTABLE |
|
|
|
| LOCK |
Open |
Don't call parent |
No code in base class. Defaults to NoDriverSupport
(80) Error. |
| LOG |
|
|
|
LOGGINGON
LOGGINGOFF |
|
|
|
| LOGOUT |
Open |
|
|
| LOGOUTCONNECTION |
|
|
|
| MERGE |
Open |
Don't call Parent |
Derived code should Self.SetError(NoError) at start.
No code in base class. Defaults to NoDriverSupport
(80) Error. |
| NAME |
Either |
|
Drivers don't need to derive this method, the
DriverFileClass contains the code. |
| NEXT |
Open |
After derived code |
Driver must derive this method, if only to call BeforeRead.
Derived class must call BeforeRead
method before derived code, and abort if error. |
| NOMEMO |
Open |
|
Drivers don't need to derive this method, the
DriverFileClass contains the code. |
| NULL |
Either |
|
Drivers don't need to derive this method, the
DriverFileClass contains the code. |
| OPEN |
Closed |
Before derived code
|
Call parent and abort if error.
If successful derived code must set self._opened
= true .
|
| PACK |
Either |
Don't call parent |
No code in base class. Defaults to NoDriverSupport
(80) Error.
SQL drivers should implement VACUUM here. |
POINTER
POINTER64 |
Open |
Don't call parent |
No code in base class. Defaults to NoDriverSupport
(80) Error.
If driver supports this, should call If
self._CheckOpen() = 0 then Return 0.
Note that POINTER does not clear the errorcode().
|
| POSITION |
Open |
|
Drivers don't need to derive this method, the
DriverFileClass contains the code. |
| PREVIOUS |
Open |
After derived code |
Driver must derive this method, if only to call BeforeRead.
Derived class must call BeforeRead method before derived
code, and abort if error. |
| PROJECTFIELD |
|
|
|
| PUT |
Open |
Before derived code |
Call parent first and abort if error.
Derived class must call AfterWrite method after derived
code.
|
RECORDS
RECORDS64 |
Either |
Don't call parent |
No code in base class. Defaults to NoDriverSupport
(80) Error. |
| REGET |
Open |
After derived code |
SQL Drivers do not need to derive these methods, code is
in DriverSQLClass.
Derived class must call BeforeRead method before derived
code, and abort if error.
If driver does not support keys, then derive REGET(key,str)
to set NoDriverSupport (80)
Error. |
| RELEASE |
Open |
Don't call parent |
No code in base class. Defaults to NoDriverSupport
(80) Error. |
| REMOVE |
Closed |
Don't call parent |
No code in base class. Defaults to NoDriverSupport
(80) Error. |
| RENAME |
Closed |
Don't call parent |
No code in base class. Defaults to NoDriverSupport
(80) Error. |
| REOPEN |
Either
|
Examine and decide
|
Examine the parent to decide if it is applicable or not. |
| RESET |
Open |
Before derived code
|
Code in base classes may be sufficient as it routes
through _SET.
|
| RESTORESTATE |
Either |
|
Drivers don't need to derive this method, the
DriverFileClass contains the code. |
| ROLLBACK |
Open |
|
|
| SAVEPOINT |
|
|
|
| SEARCH |
|
|
|
| SEND |
Either |
|
|
| SET |
Open |
Before derived code |
Class probably doesn't need to derive this method. Every
form flows through _Set.
Derived class will likely just derive the _Set method. Call
the parent._set first.
|
| SETNONNULL |
Either |
|
Drivers don't need to derive this method, the
DriverFileClass contains the code. |
| SETNULL |
Either |
|
Drivers don't need to derive this method, the
DriverFileClass contains the code. |
| SETNULLS |
Either |
|
Drivers don't need to derive this method, the
DriverFileClass contains the code. |
| SHARE |
Closed |
|
Drivers don't need to derive this method, the
DriverFileClass contains the code.
SHARE is the same as OPEN,
but with a different default access mode.
|
| SKIP |
Open |
Before derived code |
SQL Drivers don't need to derive this method, the
DriverFileClass contains the code. |
| SQL |
|
|
|
| STATUS |
Either |
|
Drivers don't need to derive this method, the
DriverFileClass contains the code. |
| STREAM |
Open |
Don't call parent |
No code in base class. Defaults to NoDriverSupport
(80) Error. |
| SUBSET |
|
|
|
| TOQUEUE |
|
|
|
| UNLOCK |
Open |
Don't call parent |
No code in base class. Defaults to NoDriverSupport
(80) Error. |
| UPSERT |
Open |
Before derived code |
Call parent first and abort if error.
Derived class must call AfterWrite method after derived
code. |
UNION
UNIONALL |
|
|
|
| WATCH |
Open |
|
Drivers don't need to derive this method, the
DriverFileClass contains the code. |
Notes
[1] : Some of these deviate from
documented requirements for Traditional Drivers. And different
object based drivers may impose different requirements. See driver
documentation for more details.
[2] : Command is CREATE. Method name
in the class is CreateTable.
DriverName.Clw
The source to the file driver consists of 3 files. The two class
files (.INC and .CLW) derived from DriverFileClass, and the "glue"
module which acts as the link between the file driver commands (as
compiled by the compiler) and the class. It also contains the entry
points called by the IDE when it is making use of the driver.
The name of this CLW file is unique to each driver. For example for
the DOS2 driver, this file is called
Dos2Driver.Clw.
Each driver has its own module, and it has to be constructed very
carefully. Mistakes in this module can be difficult to debug, and
will also make the DLL behave strangely when used by the IDE.
Carefully work through the following steps to create your own
DriverName.Clw file.
- Copy an existing file to your new driver folder. Name the file
appropriately (to match the driver you are making.) In these
instructions the assumption is that you copied from the Dos2Driver.Clw
file.
- Adjust the first set of Equates. More details about the
components of this set are described below.
- Replace the DriverFileDos2Class name with your driver class
name. So, for example, if your class is say DriverFileRedisClass
then you can do a search-and-replace replacing
DriverFileDos2Class with DriverFileRedisClass. There will likely
be 5 or 6 instances of this name (depending on the comments.)
- Determine Driver Functionality.
This step is the most critical because it requires changes in
two places that MUST match. The DRIVER_HAS equates need to be
set to YES or NO. To simplify things there are a set of equates
which broadly-speaking cover functionality. For example
DRIVER_HAS_KEYS can be set to either YES or NO, which in turn
then cascades to all the equates that are set to equal
DRIVER_HAS_KEYS.
However this is only one half of the puzzle. The TypeDescriptor
field has to be set very carefully to match the equates. Items
that are NO should have a pipe ( | ) symbol at the start of the
line. Items which are YES must not. So, for example, in the DOS2
driver BLOBS are not supported, and so
DRIVER_HAS_BLOBS   Equate(NO)
The beginning of the TypeDescriptor then becomes;
TypeDescriptor   String( ''                           |
                   & chr(NUM_DRIVER_OPS)              |
                   & chr(Opcode:ADD)                  |
                   & chr(Opcode:ADDfilelen)           |
                   & chr(Opcode:APPEND)               |
                   & chr(Opcode:APPENDlen)            |
                  |& chr(Opcode:BLOB_DOPROPERTY)      |
                  |& chr(Opcode:_BLOB_SETPROPERTY)    |
                  |& chr(Opcode:_BLOB_GETPROPERTY)    |
Notice that the equates which are set to NO, also have a leading
| at the front of the line. Items which are YES (like say Opcode:ADD) do not.
TIP: The DoNothingDriver files have
most of the functionality turned on. You can leave it as-is to
start with. Then later on, when you decide specific commands are
not supported you can return here and turn them off.
- Determine Driver Types. In the file are a list of equates,
each set to YES or NO, which determine if a specific Data Type
is valid for the driver or not. This list starts with;
DRIVER_HAS_ANY       Equate(NO)  !
DRIVER_HAS_BFLOAT4   Equate(NO)  ! /* IBM Basic 4 byte float    */
DRIVER_HAS_BFLOAT8   Equate(NO)  ! /* IBM Basic 8 byte float    */
DRIVER_HAS_BLOB      Equate(YES) ! /* Blob                      */
DRIVER_HAS_BYTE      Equate(YES) ! /* Signed 1 byte binary      */
DRIVER_HAS_CSTRING   Equate(YES) ! /* C string (trailing null)  */
For each type determine if you want the driver to support it or
not. Then set each one to YES or NO.
As before, this list MUST match the list inside the
TypeDescriptor. For each type supported make sure the line
does not start with a pipe symbol. For every type which is NOT
supported the line must start with a pipe symbol. So, in the
TypeDescriptor the first few Type lines are;
                   & chr(NUM_DRIVER_TYPES)   |
                  | & chr(ClaANY)            |
                  | & chr(ClaBFLOAT4)        |
                  | & chr(ClaBFLOAT8)        |
                   & chr(ClaBLOB)            |
                   & chr(ClaBYTE)            |
                   & chr(ClaCSTRING)         |
and so on.
- The DriverGroup should be named, and populated for your
driver. The fields in this group are explained below.
Name Equates
The first block of equates (between the PROGRAM and MAP
commands) deal with the name of your driver. These equates are
discussed here.
- ShortName Equate('DOS2')
The ShortName is used in various places, such as part of the DLL
name, and is typically just 3 or 4 characters long. The
length comes into play because the DllName and DsiDllName
equates are fixed length (13 bytes) and include the ShortName.
The easiest approach is to use a 4 character name; otherwise you
need to adjust those equates as well.
- LongName
Equate('DOS2Driver')
A longer form of the name if desired. Spaces are not allowed,
and the name must start with a letter. The long name is the
name used in the Dictionary droplist when selecting a driver.
It's also the name of the Pipe function, as exported by the
DLL. Changing the LongName requires that you change the EXP
file as well to match.
- DriverName Equate(LongName &
'<0>{10}')
The DriverName is used in the code when declaring a FILE
structure. This equate must be exactly 20 characters long.
- Copyright Equate('(c) 2024
by CapeSoft<0>{20}')
A copyright message. This must be exactly 40 characters long.
- DriverDesc Equate('DOS2
(Binary)<0>{17}')
The driver description. This must be exactly 30 characters
long. This appears in the Registered File Drivers list.
- DLLName Equate('CLA'&ShortName&'.DLL<0>')
The conventional DLL name for a file driver, consisting of CLA,
then the ShortName equate, followed by .DLL. This length must
be exactly 12 characters.
- DSIDLLNAME
Equate('CLA'&ShortName&'S.DLL')
The conventional DSI (Data Source Import) name for a file
driver, consisting of the DLLName with a trailing S. This is a
separate DLL for importing the file structure from a database
into the dictionary. DSI Dll's are optional, and are not
covered by the classes at this point.
- TDescAddress Equate(007EC7B4h)
This is the address of the TDescAddress structure in this DLL,
after it has been compiled. This has to be set at compile time
because it's used to register the DLL, and the DLL is not
"loaded" during that registration - but rather read directly
as a DOS Binary file. This address can be obtained using the
CHECKDRIVER.EXE utility (yet to be
written; PE Explorer can be used in the interim.)
Driver Group
This group is a data structure which is set at compile time, and
exported from the DLL. It is used when the driver is registered
into the IDE (a process that happens without loading the DLL). For
this reason it has to be set very precisely, and care has to be
taken not to change the length, or order, of any fields in the
group.
- raw_name
- dll_name
- copywrite
- desc
- pipeFunctionAddress
- drvattr
- clastrlen
- cstrlen
- reclen
- clastrlen32
- cstrlen32
- reclen32
- maxdims
- maxkeys
- maxmemos
- memosize
- memosize32
- fldsize
- kcbsize
- mcbsize
- fcbsize
- tdesc
- dsi_name
- driverMeta
Pipe Functions
When a file driver command is executed, the compiler makes a call
to the Pipe function passing various parameters. The pipe function
is named uniquely for each driver, and contains only a small
amount of code. Mostly the function passes the call through to the
underlying object (which has a Pipe method.)
A second Pipe function is provided for View calls.
Finally a function is provided which creates, and binds an object
to the FILE structure.
Apart from the global naming replacements (described above) no
changes to this code are required.
Driver Properties
The Clarion language defines a number of commands which are passed
through to the file driver for processing. Commands like
ADD,
SET,
NEXT
and so on are a database independent way for the code to access the
data. These commands are "known" by the compiler, and compiled in a
specific way, which in turn means the commands are routed to the
driver Pipe method, and from there into the driver object itself.
In addition to the commands a property syntax was introduced. This
allows a generic way to get information from the driver, to push
information into the driver, and to get the driver to execute code.
Property syntax exists in three forms, which in turn translate into
3 methods in the driver;
| Code |
Method |
something = file{prop:someprop}
something = file{prop:someprop,parameter} |
GetProperty(String pProperty, Long parameter=0) |
file{prop:someprop} = something
file{prop:someprop,parameter} = something |
SetProperty(String pProperty, Long parameter=0, String pValue) |
file{prop:someprop}
file{prop:someprop,parameter} |
DoProperty(String pProperty, Long parameter=0) |
In all cases the
parameter is
optional, and it is a LONG value. This is usually (but does not have
to be) the field number in the file structure.
Not immediately obvious though is that the
prop:someprop
value is a STRING. All the declared properties are numeric equates,
however this is not a limitation. String properties can just as
easily be passed;
something = file{'sometext'}
Naturally it is up to the driver to add code for each property, or
text, that it wishes to support. Most of the properties are driver
independent, and so are handled in the base classes, however new
drivers can use this property syntax to implement functionality that
may not be exposed in other drivers, or which are custom to a
specific driver.
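For example, a minimal sketch of supporting a custom text property in a
derived driver (the class name, the property text, and the assumption that
GetProperty can simply be derived and returns a STRING are all illustrative);
MyDriverClass.GetProperty   PROCEDURE(String pProperty, Long pParameter)!,STRING
  Code
  Case upper(clip(pProperty))
  Of 'SERVERVERSION'
    Return clip(Self.ServerVersion)   ! hypothetical property of the derived class
  End
  Return Parent.GetProperty(pProperty, pParameter)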
DOS2 Driver
The goal of the DOS2 driver is twofold. Firstly it acts as an example
flat-file driver. It provides a good starting point for other
flat-file drivers, and can be derived from. For example the ASCII2 and
BASIC2 drivers could derive from this driver.
It is however also a full-featured drop-in replacement for the DOS
driver, which contains a few enhancements over the current DOS driver.
It is designed to be backward compatible with the existing driver,
except for the items mentioned below.
Headline Differences
| Code |
Old Behavior |
New Behavior |
| On-Disk Files > 2 gigs in size |
Inconsistent support up to 4 gigs, then no support. |
Supported |
| BOF |
Command not Supported |
Supported |
BUFFER
BUFFER(-1,MaxMemoryInBytes) |
Command not Supported |
Supported |
BYTES(table)
table{prop:Filesize}
|
Not supported if file is closed.
Error set to File Not Open (37). |
Supported |
| EXISTS |
|
Can call Exists(File)
|
| POSITION |
Returns a 4-byte string |
Returns an 8 byte string |
| RECORDS |
Requires the FILE to be Open. Error set to File
Not Open (37). |
Works if the file is Open or Closed. |
| SKIP |
Command not Supported |
Supported. Note that SKIP
jumps by Disk Record Size NOT Bytes. To Jump by Bytes use SET(FilePointer) instead. |
BIGINT
BIGINT is a new data type to hold a
large integer. It is declared as an
EQUATE
in
CwDriver.Inc.
By using
BIGINT, source code will be
backwards compatible with older Clarion versions (ie Clarion 11, 10,
9 etc). It can be used in code, however should be avoided for
on-disk storage.
Currently
BIGINT equates to a
REAL
which means it is a 52 bit integer. While this is smaller than the
64 bits offered by the
Int64 type,
it's big enough for disk files up to about 4.5 petabytes (about 4500
terabytes).
Example
i   BIGINT
  Code
  i = ImportFile{prop:Filesize}
BUFFER
The traditional DOS driver does not support the
BUFFER
command. The object based DOS driver does support the command, and
in some cases can speed up access to the file a lot. The key to
using the
BUFFER command effectively is
to understand the way in which your program will be using the disk
file.
The file must be
OPEN for this command
to work.
When a buffer size is set, bigger blocks of the file are read from
the disk into RAM. This improves performance in two use cases;
If you are sequentially processing a file from start to finish, then
the performance increases as the number of reads decreases. In other
words the ideal approach is to read the whole file into RAM, and then
process it from there. While this may not be possible (due to RAM
constraints), setting the buffer size high is still beneficial.
If you are reasonably sure the file will fit into ram then set
BUFFER(somefile,Driver:BufferWholeFile)
If the attempt to allocate enough RAM for the file fails, then the
buffer will default to
DRIVER:MINRAM ,
currently 64k, and the attempt to buffer the whole file will be
abandoned.
If you want to allocate as large a buffer as possible, up to a
maximum size that you determine, then use
BUFFER(somefile,Driver:BufferWholeFile,MaximumSizeInBytes)
This sets a cap on how large the buffer can be. This size cannot be
greater than
DRIVER:MAXRAM, which is
currently set as 1 gigabyte.
Alternatively, if not buffering the whole file, then set the buffer
size to some (large) number of records;
BUFFER(somefile,1000)
If you are doing random reads on the file, jumping around forwards
and backwards then having the file completely in ram will make a
dramatic improvement.
If you can only buffer a portion of the file, and if the record you
are reading is not in the buffer then a new buffer will need to be
loaded. If the buffer size is large then this read takes longer than
reading just a single record. Whether this is worth it or not
depends on the number of times your following reads are already
loaded (a cache "hit") versus the number of times it goes back to
the disk (a cache "miss").
To free up all memory allocated to the buffer call
BUFFER(somefile,Driver:FreeBuffer)
Driver Options
| Option |
Default Value |
Description |
| /BUSYRETRIES=n |
5 |
The DOS2 driver uses Windows API functions to read and
write files on the disk. Sometimes these operations are not
possible because some other part of the computer "locks" the
file when an event occurs. For example doing a CLOSE
and then a REMOVE could fail if
say an anti-virus triggered a scan when the file is closed.
For this reason a number of the commands will implement a
"wait and try again" approach if an ACCESS_DENIED
error is encountered. The time between tries is a second.
The number of tries is set in the BusyRetries setting. The
default is 5, meaning 5 tries, one second apart.
Commands affected: CREATE, COPY, REMOVE,
RENAME |
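For example, a minimal sketch of raising the retry count via the driver
options string in a FILE declaration (the file name and layout are
hypothetical, and the driver name should match the name your driver
registers);
ImportFile   FILE,DRIVER('DOS2','/BUSYRETRIES=10'),NAME('import.dat'),PRE(Imp)
Record         RECORD
Buffer           STRING(256)
               END
             END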
Large File Support
The DOS2 driver fully supports large files, specifically files
larger than 2 gigabytes, and indeed larger than 4 gigabytes. However
some of the file commands make use of a
LONG
either as a parameter, or return value. For these commands an
alternative command exists, which can either accept, or return, a
larger number.
| Traditional Command |
New Command |
Description |
| BYTES |
BYTES64 |
The BYTES command is limited
to returning a LONG value. The
BYTES64 command returns a BIGINT. |
GET
|
GET64
|
The second parameter to the GET
command is the position in the file to read from. The GET command limits this to a LONG value. The GET64
command accepts a BIGINT as
the parameter.
|
| POINTER |
POINTER64 |
The POINTER command is
limited to returning a LONG
value. The POINTER64 command
returns a BIGINT. |
| PUT |
PUT64 |
The second parameter to the PUT
command is the position in the file to write to. The PUT command limits this to a LONG value. The PUT64
command accepts a BIGINT as
the parameter. |
| RECORDS |
RECORDS64 |
The RECORDS command is
limited to returning a LONG
value. The RECORDS64 command
returns a BIGINT. |
| SET(file,pointer) |
SET64(file,pointer) |
The second parameter to the SET
command is the position in the file to read from. The SET command limits this to a LONG value. The SET64
command accepts a BIGINT as
the parameter. |
| SKIP |
SKIP64 |
The SKIP command is limited
to skipping a LONG value. The
SKIP64 command can skip a BIGINT value. |
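For example, a minimal sketch, assuming the 64 bit commands take their
parameters in the same order as their traditional counterparts (the file name
is hypothetical);
pos    BIGINT
size   BIGINT
  Code
  size = BYTES64(ImportFile)      ! file size as a BIGINT
  pos  = POINTER64(ImportFile)    ! current position as a BIGINT
  GET64(ImportFile,pos)           ! read the record at that position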
LOCK / prop:Locked
Lock(FILE File, Long pSeconds)
Lock(FILE File)
In the traditional Clarion DOS driver, calling
LOCK
without a seconds parameter will cause the program to "wait forever"
for access to the file. Since that may never happen, this can be the
source of programs hanging. To avoid this issue the default value
(if it's not passed) is now 300. This means the function will now
wait 5 minutes not "forever". To wait for a "long long time" simply
set the
pSeconds parameter to a high
number.
The current status of the file (locked or not locked) can be
retrieved using
prop:Locked .
Lock(Customers) ! equivalent to
Lock(Customers,300)
lockedstate = Customers{prop:Locked}
REGET
Reget(FILE File, String Position)
When doing a
REGET using the
DOS2 driver, a record from a previous position is retrieved from the
disk file. The
Position parameter is the
value returned by an earlier call to
POSITION.
Note however that
REGET does not repeat
the
Length parameter which may have been
passed to an earlier
GET. In other
words, when using
REGET the whole
record is read from the position identified by the
Position
parameter.
RENAME
Rename(FILE File, String NewFileName)
The Clarion documentation for the
RENAME
command, for the traditional DOS driver says;
Files cannot be renamed to a new drive.
This is inaccurate. With current Clarion versions, and on current
Windows versions, rename can rename a file to a different drive.
SQLite2 Driver
The goal of the SQLite2 driver is twofold. Firstly it acts as an
example SQL file driver. It provides a good starting point for other
SQL file drivers.
It is however also a full-featured drop-in replacement for the SQLite
driver, and contains several enhancements over the current driver. It
is designed to be backward compatible with the existing driver, except
for the items mentioned below. For a full list of new functionality
and behavior see the
Files Index.
SQLite3.dll
SQLite does not make use of a database server, as other SQL
databases do, but rather a single DLL file (
SQLite3.dll)
encapsulates all the SQLite functionality. This file is originally
sourced from
https://sqlite.org/download.html
.
This file is very backwards compatible, and using a later version
instead of an earlier version should not cause any complications.
As of writing this, the version required by the SQLite2 driver is
3.38.0 or later. If the program attempts to use an earlier DLL a
MESSAGE will be displayed on program startup.
Headline Differences
| Code |
Traditional Behavior |
Object Based Behavior |
| CREATE |
Supported, with limits. |
Supported with smarter defaults, and allows the developer
more control. |
| NULLS |
Supported |
Supported. However the STRING
returned by GETNULLS is a
different format. |
Database Design
This section discusses database design decisions that can impact
space requirements or performance when designing for the SQLite
database.
| Page Size |
The length of the record in the dictionary needs to fit
into the SQLite page size. SQL BLOB and TEXT fields are not
included in this number.
The page size is set at the database level, and cannot be
directly changed in an existing database. (A new database
needs to be created, and the data copied.)
The default length is 4k, but a variety of options up to 64
K exist (8K, 16K, 32K, 64K).
SEND(anytable,'/pragma_page_size=16384')
SEND(anytable,'/pragma_page_size=8192') |
ADD
When adding rows to the table,
STRING
and
MEMO fields (which do not have the
BINARY attribute) are
clipped before being stored.
CSTRING fields are not clipped.
PSTRING fields are stored "as set" - in
other words they can have trailing spaces, but do not include
padding spaces.
This behavior applies to all writes including
PUT,
APPEND and
UPSERT.
This behavior can cause problems with the traditional SQLite driver,
which expects strings to be a fixed length, and padded with spaces.
Specifically doing a
NEXT after a
RESET may fail. This would only be an issue
where the traditional driver, and object based driver are accessing
the same tables in the same database file, which is not recommended.
CONNECT
For the SQLite driver, the
CONNECT
command can take an optional Options parameter.
This is passed to
sqlite3_open_v2 as the flags parameter. It
defaults to
SQLITE_OPEN_READWRITE +
SQLITE_OPEN_CREATE .
CREATE
One major difference in the SQLite2 driver over the traditional
SQLite driver is the
CREATE command.
The traditional driver has a very simple
CREATE
command, which does not allow for much control over creating the
table. The documentation goes so far as to say;
Although CREATE is supported, only basic use is
recommended, because it cannot handle all of the features (i.e.,
constraints and relationships) that an actual SQL CREATE TABLE
statement can support. Use of PROP:SQL is preferred in this case.
In contrast to this the object based SQL drivers implement
CREATE,
and by default create a table correctly. It also gives the developer
a mechanism to control the specifics of the table creation.
| Traditional Behavior |
Object Based Behavior |
| If the table exists in the database, and CREATE
is called, then the table is dropped and a new one is
created. |
If the table already exists in the database, then a File System Error (90) Error is set.
If replacing the table is desired do a REMOVE(table)
first. |
| CREATE INDEX could generate an
error if the index already exists |
Create code for Indexes includes IF NOT
EXISTS clause. |
| After call to CREATE, prop:sql returns only the SQL for
creating the last index. |
The prop:SQL returned
contains the complete CREATE TABLE,
as well as any CREATE INDEX
statements. |
| COLLATE NOCASE added to numeric
columns in keys. |
COLLATE NOCASE only added to
string type fields in keys (where key is not case
sensitive). |
| Creating tables with TIME
fields would fail. |
TIME data type supported. |
COLLATE
When a
STRING, or
CSTRING,
or
PSTRING, or
MEMO
field is created in the database, then it can have a default
Collation. This collation can be set
specifically using the
COLLATE(something)
Extended Name Attribute. Determining the
collation for a field (when the table is created) follows a
specific set of rules.
First look at the
Extended Name Attributes.
- If the field, memo or blob, has a COLLATE
setting then that is used.
- If the field, memo or blob, is set as BINARY
then COLLATE BINARY is used.
- If the field is a DATETIME String(8), DATETIMEOFFSET
String(12) or UUID8
then COLLATE BINARY is used.
- If the field, memo or blob, is set as NOCASE
then COLLATE NOCASE is used.
- If the field, memo or blob, is set as CASE
then COLLATE RTRIM is used for STRING, PSTRING,
and MEMO, and COLLATE
BINARY for CSTRING.
- If used in both a Case-Sensitive key, and also a
Case-Insensitive key, then COLLATE RTRIM
is used for STRING and PSTRING,
and COLLATE BINARY for CSTRING.
- If used in a Case-Sensitive key, then COLLATE
RTRIM is used for STRING
and PSTRING, and COLLATE
BINARY for CSTRING.
- If used in a Case Insensitive key then COLLATE
NOCASE is used.
- Otherwise, for a field, COLLATE NOCASE is used.
- Otherwise, for a MEMO or BLOB
(not set as Binary), COLLATE RTRIM is used.
If you are using the traditional SQLite driver then you may notice
that it fails to apply the
NOCASE
collation, even if the key is set to
NOCASE.
Using the new driver solves this problem.
Aside: In traditional Clarion, and say
the TopSpeed driver, the intent of the programmer is set for a
specific key (Case or NoCase) and then also manually determined in
the
prop:filter (using
UPPER
and so on.) This is a bit clumsy. Mistakes in the filter mean that
a key is not used and a table scan is required. This leads to the
perception, even with ISAM drivers like Topspeed, that the
VIEW structure is slow.
A far better approach is to set the attributes at column level,
and then the keys, and filters, will follow the column collation.
It results in simpler code, simpler filters, and code which is far
more likely to use the key you think it's going to use. In most
cases strings are inherently case insensitive, and this is the
default for the SQLite2 driver.
If a string should be treated as case sensitive then this can be
implied
by creating a case-sensitive key, but this approach is not
recommended.
A better approach is to set the field as
CASE
using the
Extended Name Attributes.
Then if a key is created, it does not need the Case-Sensitive
switch to be on as the field is already case sensitive.
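For example, a minimal sketch of declaring the collation intent at column
level using Extended Name Attributes (the field names are hypothetical, and
the resulting collations follow the rules listed above);
Surname   STRING(40),NAME('Surname | CASE')             ! case sensitive; COLLATE RTRIM
Notes     STRING(200),NAME('Notes | COLLATE(BINARY)')   ! explicit collation
Email     STRING(100),NAME('Email')                     ! default; COLLATE NOCASE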
Driver Options
See Also
Common Driver Options.
| Option |
Default Value |
Description |
| /AUTOINC=true |
false |
If set to true that changes the automatic ROWID
assignment algorithm to prevent the reuse of row ids over
the lifetime of the database. In other words, the purpose of
AUTOINC is to prevent the reuse
of row id values from previously deleted rows. The AUTOINC setting imposes extra CPU,
memory, disk space, and disk I/O overhead and should be
avoided if not strictly needed. It is usually not needed. |
| /NOPRIMARY=true |
false |
Adds WITHOUT ROWID to the CREATE
statement |
| /PRAGMA |
|
see Pragma. |
| /WAL |
|
Equivalent to /pragma_journal_mode=WAL
/pragma_synchronous=NORMAL
Note that WAL mode is persistent for a database - if set
once it is in force for all future accesses of the database.
However pragma synchronous is set
on a connection level basis, so is a good candidate for the
file driver string. |
| /WHERE |
|
Used with SEND. Same as
setting {prop:Where}. |
Field Options
The driver supports
Extended Name Attributes
for fields.
To set a field to a specific data type, unique to this driver, use
SQLite2Type(whatever) where
whatever
is a data type unique to SQLite.
This will override the type set using
SQLType(something)
if it exists.
Example;
paid    Byte,name('paid | sqltype(tinyint) | SQLite2Type(Integer)')
FLUSH
If the database is in
WAL mode, then a
FLUSH commits (as much as is possible
without disturbing other processes) the journal file. It is not
necessary to call this; the journal will be committed
automatically when it gets large, but manually committing (ie
calling
FLUSH) after a lot of writes
can marginally improve read performance.
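For example, after a batch of writes (the table name is hypothetical);
Add(Customers)
! ... many more writes ...
Flush(Customers)    ! commits the WAL journal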
PRAGMA
SQLite allows the program to control many aspects of how the
database behaves. These settings, known in SQLite as PRAGMAs can be
set by the application. Some are transitory (they only apply to the
connection and have to be re-applied) others are permanent and apply
to the database itself. Some care should be taken when using PRAGMAs
and consultation with the SQLite documentation (
https://www.sqlite.org/pragma.html) is
recommended.
The
SEND command can be used to set
pragmas, and also to retrieve the current pragma values. They can
also be set using the
Driver Options String.
To identify that it is a pragma append the prefix
PRAGMA_
to the option being set.
Example
jsl = SEND(customers,'/PRAGMA_journal_size_limit=1000')
jsl = SEND(customers,'/PRAGMA_journal_size_limit')
Properties
| Property |
Comment |
| prop:Hdbc |
Returns the current connection handle. As used by the
sqlite3 API. |
Relationships
Server-side relationships in SQLite
HAVE
to be set before the tables are created. SQLite does not allow them
to be changed after the tables have been created.
The template includes a CODE template which generates an
ADDRELATION call for every server-side
relationship in your dictionary. This only needs to be included in
places where tables can be created. It thus makes sense to have a
single procedure in the app which uses this code template, and
creates all the tables in a new database.
SEARCH
The SQLite2 driver supports the
SEARCH
command. Internally it makes use of the SQLite extension known as
FTS5.
All the necessary overhead to support this feature in the database
will be created automatically by the driver as needed. Triggers are
automatically added to the table to ensure that the FTS indexes are
maintained.
Date, Time, DateTime
DateTime
In Clarion a common way to represent a DateTime field is to use a
STRING(8) with a DATE and TIME OVER. For example;
aDateTime         STRING(8)
aDateTime_Group   GROUP,OVER(aDateTime)
aDateTime_Date      DATE
aDateTime_Time      TIME
                  END
The SQLite2 driver automatically translates this to the
ISO 8601 form. A space is used between the
date and time parts because this is more compatible with the way
other databases behave.
It's also possible to use a STRING(29) or CSTRING(30) and store
the date-time yourself as a string (usually in ISO 8601 format.)
Sorting
Ideally international characters would sort "alphabetically" - in
other words something like n, ñ, m.
Unfortunately SQLite does not have builtin support for this. It is
possible to hook custom collation functions into SQLite, but that
would make the database file unavailable for other programs (ie
programs not using the SQLite2 driver) to open the database.
Therefore, for now, sorting strings like this using SQLite is not
supported by the driver. This may change in the future if an
acceptable approach can be found.
Storage
In SQLite the type of a field is not set at the Column level, but
rather at the Cell level. In other words although a column may be
designated as say a
STRING (
TEXT)
column, the fields inside the rows can be set to something else,
like a
BLOB etc.
This does not affect your application at all, whatever data you
write out is read back. The actual underlying storage is not
important. However it can become important if other (non Clarion)
applications are reading, or writing, the data as well. For that
situation a specific consideration comes into play.
SQLite treats text as being encoded as utf-8. If your application
only makes use of ASCII text (in other words characters <= 127)
then this is not a problem since utf-8 and ASCII overlap in this
range. So an ASCII string is implicitly a utf-8 string as well. If
you are only dealing with English letters, then this is no issue at
all.
If you have non-english letters in your database (for example
Spanish characters like ñ or á, é, í, ó, ú and so on) then these
characters, if left as-is, will cause the field in SQLite to be
stored as
BLOB. Again, this is not a
problem for the Clarion program, since the driver can read the
BLOB and place it in your Clarion
STRING,
and everything will be back to normal. However, other programs may
not be so forgiving. Ideally a
STRING
should be stored as a
TEXT in
SQLite, in utf-8 format, so that the Clarion program behaves like
any other program, and the database is readable, and writable, by
all.
The SQLite2 driver does this for you. It automatically converts all
STRING,
CSTRING
,
PSTRING and
MEMO
(not binary) fields into utf-8 encoding before writing, and decodes
them after reading. There is nothing for you to do.
If there is an error when encoding or decoding the data, then an
Error 140 Unable to convert to, or from, unicode
is generated.
It is also safe to use on existing databases since server-side
BLOB fields are put in the
STRING
as is, so old, ANSI encoded strings will be read and placed into the
STRING as is.
When converting from ANSI to utf-8 a CodePage is used. The CodePage
for the file is automatically set (on each thread) based on the
Clarion
system{PROP:CharSet}[1]
setting. For example if your program has set;
System{prop:Charset} = CHARSET:GREEK
Then the code page used will be
Windows 1253.
A list of Clarion Charsets can be found in
Equates.Clw.
If you have not set a
System{prop:Charset}
(or better yet, in Clarion 11 or later,
System{prop:CodePage})
in your program then the code page used will be
CP_ACP.
This is the code page which matches the Windows Locale of the
current machine.
If the automatic code page is not correct for your situation, then
you can set
tablename{prop:CodePage} = whatever
In this case you are not setting the charset, but the code page.
This is a value defined by Windows - for example
CP_ISO_8859_1
equate(28591)
The
StringTheory.Inc file contains a
number of code page equates, but any Windows Code Page value
[2]
can be used here.
If the code page value is not valid when setting
tablename{prop:CodePage}
then ErrorCode will be set to
141 Invalid Code
Page.
Aside: Some programmers have solved this
problem already by adding code to the ABC write methods,
effectively converting to utf-8 manually on each write, and
converting back to ANSI on the read. This code should be removed
for the SQLite2 driver. To keep the code, but only have it apply
to the SQLite driver, and not the SQLite2 driver, you can wrap the
code with the following;
If self.file{prop:obd} = false
! your conversion code
End
The above is for generating into an ABC FileManager object. The
generic form of the code is;
If tablename{prop:obd} = false
  ! your conversion code
End
Note 1:
In Clarion 11 a new system property,
System{prop:codePage}
was introduced. If set, this code page is used. If not set then the
System{prop:CharSet} is used.
StringTheory 3.84 or later is required for
System{prop:codePage}
to be used.
Note 2:
A List of Windows Code Page Values can be found at
https://learn.microsoft.com/en-us/windows/win32/intl/code-page-identifiers.
Write Ahead Logging (WAL)
IF (and it's a
BIG IF) all the
programs that access the SQLite database file are on the same PC,
THEN the program can make use of
Write Ahead
Logging (known as WAL
[1]).
While WAL is dramatically faster than other journal modes, it cannot
be used when multiple processes on different hardware (or virtual
machines) are accessing the file. There are also some (less
problematic) disadvantages, so be sure to check the SQLite
documentation before turning on WAL.
Note that the journal_mode is a
persistent
setting. It can only be set if no other processes are
currently connected to the database. Once set it will be remembered,
and
it will remain in force for future
connections to this database by this, and other processes.
When switching out of WAL mode (if you decide to do so), a call to
PACK is recommended to clean up the file.
To activate WAL use the
SEND command.
It can also be set using the
Driver Options
String.
glo:Owner = 'data.sqlite'
mode = SEND(customers,'/WAL')
This can be called after the owner attribute for the file (customers
in this case) has been set, but before the file is opened.
This is a database-wide setting. You can set if using any table in
the database, however the setting then applies to every table in the
database, and, as mentioned earlier, it does not need to be reset.
It is physically stored in the database file itself.
Note 1: SQLite
docs for WAL :
https://www.sqlite.org/wal.html
Note 2: SQLite Docs for Journal Modes :
https://www.sqlite.org/pragma.html#pragma_journal_mode
Tip: If WAL is not appropriate for your
program, then the recommended journal mode is
TRUNCATE.
It can be set in code using the
SEND
command.
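For example, a minimal sketch using the pragma prefix described in the PRAGMA
section;
SEND(customers,'/pragma_journal_mode=TRUNCATE')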
InMemory2 Driver
The InMemory 2 driver is based on the
SQLite2
driver, and makes use of SQLite's
In Memory functionality. As such it is a SQL
driver, and all SQL functionality is available. This section of the
document serves to cover any explicit difference to the traditional
MEMORY driver, as well as any difference to the SQLite2 driver.
Headline Differences
| Code |
Traditional MEMORY |
SQLite2 |
InMemory2 Behavior |
| OWNER |
Not needed |
Required internally |
If not supplied the name memdb
will be used. |
Driver Options
All the
SQLite2 Driver Options
are applicable. The ones listed below apply only to the InMemory2
Driver.
| Option |
Default Value |
Description |
| /THREADEDCONTENT=true |
False |
If this is set to true then each thread has its own
"database". Data is not shared across threads. This is in
contrast to the normal behavior which is that the same data
is available and visible to all threads. |
Field Options
The driver supports
Extended Name Attributes
for fields.
To set a field to a specific data type, unique to this driver, use
InMem2Type(whatever) where
whatever
is a data type unique to SQLite.
This will override the type set using
SQLType(something)
if it exists.
Example;
paid    Byte,name('paid | sqltype(tinyint) | InMem2Type(Integer)')
Release History
Version 1.27 - 29 April 2026
If updating from before 1.22 then reimport
Data in Examples. CSV files from 1.21 and earlier were
incomplete.
Remember to Recompile the Drivers.
- All
- Add: IsRecordFieldClear method
to iDriverField
Interface.
- Add: Support for NULL and SETNULL for custom field types (like UNIQUEIDENTIFIER).
- Add: If UUID field is set as NULL
then field is not populated automatically to new random value.
- Add: If SetNoNull called on
file record, then all fields are set to not-null.
- Improve: Extended max length of File{prop:name} to 16k.
- Improve: _WithFieldoffset can return ClaGROUP
fields.
- Add: version resource to CLAMSS2, CLAPGS2 and CLASTR DLL's
- MSSQL
- Fix: If a table has server-side Triggers on INSERT,
then an alternate syntax for returning server-side identities
and read-only fields is implemented.
- Fix: Allow binding of custom fields as null.
Version 1.26 - 23 April 2026
- All
- Implemented SQL
command for VIEWS.
- Extended max length of View field, prop:name, to 16K.
- MSSQL
- Fix: Use of Cstrings in SQL
table, used as parameters.
Version 1.25 - 21 April 2026
- All
- Fix: Parsing prop:Order did
not properly handle + and - (as in +Mov:Guid)
when they appeared late in a multi-termed order.
- Increased size of tokenstring buffer
- Fix: Guids in MSSQL and PostgreSQL cwproj files.
Version 1.24 - 21 April 2026
- MSSQL
- Add: Support for CHAR into Clarion
DATE or TIME.
- Add Support for TIMESTAMP into TIME.
- Fix: Strings > 1024 chars long could prevent the form
from loading.
- Fix: Tidy-up of Length checks if IgnoreTruncation is true.
- PostgreSQL
- Fix: Tidy-up of Length checks if IgnoreTruncation is true.
- Add Support for TIMESTAMP into TIME. (However 0 date is 1/1/2001 and
this will likely change soon.)
Version 1.23 - 20 April 2026
- All
- Added support for backend DateTime field being read directly
into a DATE field.
- Fix: Scrolling in browse could lead to the first record in
the result set being dropped
- Fix: Scrolling up and down in Browse was broken
- PostgreSQL
- Date could be off by 1 if Time = 0
Version 1.22 - 17 April 2026
Re import Data in Examples. Earlier CSV files
were incomplete.
Remember to Recompile the Drivers.
- All
- Added support for complicated SQL in view{'field',prop:name}
(Created by Browse Template "Advanced SQL" tab).
- New set of CSV files for the examples. Reimport data.
Version 1.21 - 16 April 2026
- All
- Better detection of sufficiently-unique ORDER
BY fields.
- Added new driver switch /PROPSQLMATCHBYNUMBER.
- Improved MatchByNumber to skip over GROUP and OVER fields.
- Fixes to DriverClass.Clw
- Support in base class to better deal with collations, and
developer use of "case Sensitive" and "Not Case Sensitive"
keys.
- Performance improvements to _memchk
- Fixes to i64 support
- Fix to CheckVersionClient (Wasn't comparing to
ClientVersionMinor)
- Improvements to Search base methods to support
parameterizing of search terms.
- Consistent return value from SEND command when setting
Driver Options.
- Fix: SetProp:Order and SetProp:Filter could fail when making
properties bigger.
- Various code optimizations
- DOS
- Many small fixes and improvements
- Dos Class triggers NoMemErr if a NEW or ADD(q) fails.
- SQLite
- Change: CREATE
command. The default collation for string fields is now
COLLATE NOCASE. For more see Collate.
- Fix: The CREATE
command would create a separate INDEX
that matched the Primary Key.
- Fix: CSTRING utf-8 strings could get trailing garbage chars.
- MSSQL
- Implemented SQLBIND optimizations
- Added support for TimeZoneOffset field type
- Expanded SQL Type / Clarion Type mapping matrix
- PostgreSQL
- Deleting inside a Set/Next loop didn't work.
- Expanded SQL Type / Clarion Type mapping matrix
- Various optimizations and fixes
Version 1.20 - 8 April 2026
- SQLite
- Fix: If the field was a CSTRING and the string contained
non-ansi characters, then the string was not properly cleared
in the space after the terminating null.
Version 1.19 - 2 April 2026
- All
- Support Mapping INTEGER and SHORTINT in the database to DECIMAL
field type in the table declaration.
- MSSQL
- Added support for NVARCHAR and NCHAR
data types mapping to CSTRING
and STRING field types.
Version 1.18 - 1 April 2026
- MSSQL
- Added support for BIT SQL data type.
- PostgreSQL
- Tested against PostgreSQL version 9.5.17. Differences to
PostgreSQL 15 (and later) are documented at Support
for old PostgreSQL versions.
- When doing a Set(File) an
explicit ORDER BY clause, on the
Primary key fields, is included.
- SQLite
- When doing a Set(File) an
explicit ORDER BY clause, on the
Primary key fields, is included.
Version 1.17 - 26 March 2026
- All
- Default property prop:DatePicture changed from @D010/
to @D010-
- Optimize by adding Server, Database, and User to the
Connection queue.
- MSSQL
- Add: Support for TIMEZONEOFFSET data type
- Fixed Output directory in project, so should override
Redirection file when creating DLL and LIB for the driver.
- Cache detection of whether ODBC17 and ODBC18 are installed.
- PostgreSQL
- Local mode should be working
- Fixed Output directory in project, so it should override the
Redirection file when creating the DLL and LIB for the driver.
- Replaced SHOW lc_collate; which
was removed in PostgreSQL 16.
Version 1.16 - 25 March 2026
- All
- IMPORTANT NOTE: In versions 1.15
and earlier of the drivers the attribute UUID
was synonymous with UUID8. From
version 1.16 it is synonymous with UUID4.
See UUID's for more.
- Change: CLEAR does NOT
call CLEARPROPS.
- Add: CLEARRELATIONS
- Add: New Error 141 "Invalid Code Page".
Set when setting table{prop:codepage} =
something, where something is not a valid
code page.
- CLEAR supports clearing high or low; see the sketch after this
version's notes.
- UUID8 fields are created as CHAR not UNIQUEIDENTIFIER
/ UUID etc.
- IMPORT clears record
before importing the next record.
- Fix: WithFieldNamed on a View could fail if the name
contained a fully qualified field name.
- PostgreSQL
- First PostgreSQL release. After compiling DriverKitSQL.SLN,
remember to REGISTER
the new file driver and restart the IDE.
- MSSQL
- Fix: Reading TEXT fields into a BLOB could drop the last character if
the data was > 1024 bytes.
- Some performance improvements for calling ADD,
APPEND, GET,
UPSERT or DELETE
in a Loop.
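A minimal sketch of the CLEAR high/low support noted above, assuming
the driver's CLEAR follows the standard Clarion form where a second
parameter of 1 clears to the highest possible values and -1 to the
lowest:

  CLEAR(Cus:Record)        ! clear to blanks / zeros
  CLEAR(Cus:Record, 1)     ! clear to high values
  CLEAR(Cus:Record, -1)    ! clear to low values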
Version 1.15 - 15 March 2026
- All
- Optionally update your Builtins.Clw
to allow the ERROR() function to
return descriptions for new error codes.
- New command CLAIM.
- New properties: prop:InTransaction,
prop:TransactionStatus,
prop:Collation, prop:CollationDB (see the transaction sketch
after this version's notes).
- New Errors: ClaimAlreadyActiveErr (139),
UnicodeConversionError (140).
- New Extended Name Attributes CASE and NOCASE.
- New Properties: prop:DatePicture,
prop:TimePicture
- TimeStamp support now allows for LONG
as date and time parts, not just DATE
and TIME.
- CREATEDB command now
has optional Options parameter.
- SQL command can now
take constant values as parameters.
- prop:Text
documentation expanded and behaviors clarified.
- Improved: Better support for DATE
fields which contain dates before Jan 1801.
- Calls to LOG preserve
existing error code values.
- Some tweaks to example DriverFields.Clw, notably support for
MSSQL's BigEndian format for the first 3 parts of the GUID.
- The "naming" of some error equates were inconsistent. Some
ended in Error, or nothing. All now end in Err.
- Refactor of some of the Transactions code, now moved into
the base classes.
- Internal: Some refactoring of _ExecuteSQL method names to
make the naming in different drivers the same.
- Internal: TempFormat and TempDeformat objects added.
- Internal: Various fixes and optimizations
- SQLite
- Updated SQLite3.DLL to version 3.51.3
- Text stored in the SQLite database is now converted to utf-8
before writing, and back to ANSI after reading. This is in
line with what other SQLite programs will expect, thus making
the database visible to other programs.
- Text written to CSV via EXPORT
is converted to utf-8.
- Fix: Support ARRAY fields
being set to NULL.
- Fix: prop:Sql was not completely set after call to EMPTY
- Fix: OPEN did not check the
table exists if this was not the first open on the thread.
- MSSQL
- EXPORT command working.
- ParameterBuffer moved to base classes
- Fix: OPEN did not check the
table exists if this was not the first open on the thread.
- Fix: DUPLICATE did
not check if the file was open. Could also fail with a unique,
non-primary key.
- Fix: EXISTS could fail
if schema name was blank.
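A minimal sketch of the new transaction properties listed above. The
property names come from this release; the table label and the
assumption that prop:InTransaction returns a true/false value are
illustrative only:

  IF NOT Customers{prop:InTransaction}
    LOGOUT(1, Customers)           ! start a transaction frame
    ADD(Customers)
    IF ERRORCODE()
      ROLLBACK()
    ELSE
      COMMIT()
    END
  END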
Version 1.14 - 13 November 2025
- MSSQL
- Add: Support for detecting existing UNIQUEIDENTIFIER fields
already in the database, and treating them as if
SQLTYPE(UNIQUEIDENTIFIER) had been added to the dictionary.
- Add: Support for CSTRING(37) fields in the dict to map
to UNIQUEIDENTIFIER in the backend.
Version 1.13 - 29 October 2025
Read Preparing an App to Use a New File Driver; there's a new Step 5.
- Logging to disk could cause a hang if the disk handle was
invalid
- Updated projects to include cwdriver.clw
- Updated projects to better support LIB mode compiles.
- MSSQL
- Handle case where connection object can be destructed before
File object
Version 1.12 - 22 October 2025
- Fix: Date related functions in VIEW filters.
- SQLite2
- Add: Support for reading TEXT fields into BYTE, SHORT,
USHORT, LONG, ULONG, REAL, SREAL record types.
Version 1.11 - 6 October 2025
- Template updated to support Multi-Proj. Also minor updates to
make the Project more accurate.
- Prevent CwDriver.Clw from being compiled multiple times in the
same multi-dll system
- Allow Owner to be set using prop:Owner
- Fix: Parsing maximum dimension of array fields did not work
for high values.
- Fix: Prevent Index-Out-Of-Range when iterating View Files.
Version 1.10 - 22 September 2025
- IMPORT and EXPORT commands changed concept from
server-side to client-side
- Change: When exporting BINARY fields to CSV, the default
prefix is hex:.
- Added Prop:DateTimeSeparator,
Prop:DriverVersion.
- _DectToString and _StringToDec refactored, lots of tweaks to
_morph (courtesy of Carl Barnes)
- MSSQL2
- Better support for creating the database itself
- Added support for STRING and CSTRING fields in the record
structure to map to DATETIME and DATETIME2 data types in the database
- Support for schemas included in the tablename
(dbo.whatever).
- Fixes to missing PadString
Version 1.09 - 4 September 2025
- SQL, SUBSET commands extended to 20 parameters
- New property prop:TryConnectedAs
- The value in Owner when the last connection was attempted.
- New driver option: /IgnoreTruncation;
also prop:IgnoreTruncation (a sketch follows this version's notes).
- Add: Support for TRACE.EXE
- Fix: Limit setting of fields to NULL
to times when record buffer is being used.
- Fix: SetCustomProperty does a PushErrors,
PopErrors.
- Fix: period (.) allowed in sanitized SQL table names
- MSSQL2
- Add: Support for driver option /TrustedConnection
- Add: Better support for TIME
fields.
- Add: More support for massaging Owner field. Support for
different server port numbers.
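A minimal sketch of the two ways the truncation behaviour above can be
set. The table label is an assumption, as is the use of 1 to switch
the property on:

  ! driver option form, in the dictionary / FILE driver string:
  !   DRIVER('MSSQL2', '/IgnoreTruncation')
  ! runtime property form:
  Customers{prop:IgnoreTruncation} = 1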
Version 1.08 - 18 August 2025
- MSSQL2 Driver
released (included in DriverKitSQL).
- Add CALL and CALLOUT commands.
- Add: Ability to call prop:SQL,
or SQL without doing a
SET.
- Extend EXISTS to check
for existence of other things.
- Internal: _WithFieldNamed
handles a zero-length parameter.
- Internal: Fixes
- SQLITE2
- Fix: Test for null column name in _ToRecord
Version 1.07 - 7 August 2025
- Examples: External Name attribute added to GUID fields in the
dictionary.
- Add: /DUMMY file driver
option. Support for DUMMY tables
added.
- Add: Support for calls to SQL
/ prop:Sql then NEXT
without first calling SET (see the sketch after this version's notes)
- Add: Support for RESET(View,File)
- Change: ClearProps
resets Prop:FetchSize to default
value
- Change: When creating a CONSTRAINT the constraint name now
includes the table name, not just the field name.
- Fix: Calls to NEXT and PREVIOUS
did not start by resetting ErrorCode()
to NoError
- Update: Template : Limit ADDRELATION
calls to object based drivers.
- Internal: Improvements to BuildWhere
- Internal: Fix: _BuildProjectListWalker call to
ExpandExpression
- DOS:
- Added DLL(1) to all Windows
API Map declarations
- Added support for dos:ERROR_SHARING_VIOLATION
- Fix: Support for Longpath()
that may or may not have a trailing \
- Tidy up some Error handling
- SQLITE2:
- Internal: _ExecuteToRecord
- Fix: Not all SEND
commands returned the current value
- Fix: IMPORT - Removes
BOM from CSV file if it exists
- Fix: IMPORT - Header
fields in CSV are now Case Insensitive
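A minimal sketch of issuing a statement via prop:Sql and then reading
rows with NEXT, without a prior SET, per the note above. The table
label, column names and WHERE clause are assumptions:

  Customers{prop:Sql} = 'SELECT id, name FROM customers WHERE active = 1'
  LOOP
    NEXT(Customers)
    IF ERRORCODE() THEN BREAK.
    ! the record buffer now holds the current row
  END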
Version 1.06 - 24 July 2025
- New commands CREATEDB,
EXISTSDB, CLONE.
- New: The COPY and MERGE commands now take an optional pOptions
parameter, which determines what parts of the table structure
are copied to the destination table.
- New: prop:Search, prop:SearchRank
- Update: SEARCH parameter
was not being parameterized.
- Update: SQL command
has been increased from 15 parameters to 20 parameters.
- Optimize: Improved internal token processing for variadic
functions (like CONCAT).
- DOS2 Driver: Implemented LOCK
and UNLOCK commands. Added prop:locked (a sketch follows this
version's notes).
- DOS2 Driver: Implemented /BUSYTIMEOUT
(from version 1.15 now /BUSYRETRIES).
Defaults to 5 seconds of retries.
Affects COPY, CREATE,
REMOVE, RENAME
- InMemory2 Driver: Override DISCONNECT
command as "meaningless".
- SQLite2 Driver (and InMemory2 Driver): Various improvements
and optimizations.
- Null string in CSV export changed from null:null
to \N. This is the default for
PostgreSQL, MySQL and many other databases. MsSQL defaults to a
blank string.
- Fix: Template: when copying the Memory Driver DLL, CLALIT2.DLL
needs to be copied as well (for programs using the InMemory2
driver, but not the SQLite2 driver).
- Fix: Format Date would return a blank string ( / / ) for
formatted zero dates. Updated to
instead return 0000/00/00.
- Fix: Multiple GETs on a non-primary key on the same table
could fail.
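A minimal sketch of the DOS2 LOCK / UNLOCK support noted above. LOCK
and UNLOCK are the standard Clarion file commands; the file label and
the 5-second wait are assumptions:

  LOCK(DataFile, 5)                ! wait up to 5 seconds for the lock
  IF ERRORCODE() = 0
    ! exclusive access to the file here
    UNLOCK(DataFile)
  END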
Version 1.05 - May 31, 2025
- Undid the template change stricken out below
Version 1.04 - May 30, 2025
- Added Troubleshooting section to docs
to cover common mistakes.
- Clarion 12 Install
- Add: Support for /ZERODATE to
base class
- Add: SQLite2 : If there is no Primary Key when creating a table
then the error is set to MissingPrimaryKeyErr
- Change: SQLite Driver - increased default BusyTimeout to 53
seconds
- Change: Template links in ClaObd.Lib to the app. This now
contains the CwDriver functions, so they are not recompiled in
each app.
- Change : Template links in ClaStr.Lib to
the app, so StringTheory is now always external. (removed in
1.05)
- Fix: Set default prop:QuoteString to "
- Fix: _WithPrimaryKey could leave self.keys.primary set to true
when returning driver:notok
- Internal : _AfterParseFields method
Version 1.03 - May 23, 2025
- A new Incompatibility has been discovered and documented; Enclose RI code in transaction frame
- Add: Driver:LogConnections as an option to LOGGINGON.
- Add: SetColumnType and GetColumnType
and SetCellAddr methods to
iResultSet interface.
- Fix: DOS2: NAME function did not
behave consistently with the DOS driver if the name is set via
prop:Name.
- Fix: DOS2: Functions called before OPEN
(like BYTES, COPY,
RENAME etc) did not expand the
filename to include the path()
- Fix: SQLite2, InMemory2 : Exclude Null keys, with multiple
columns, just repeated the first column multiple times.
- Fix: Potential (although rare) GPF when file structures are
being destructed.
- Template: Copies SQLite3.DLL to app folder if SQLITE2 driver
is used in dictionary
Version 1.02 - May 5, 2025
- Documentation for BUFFER
expanded.
- Fix: DOS2 : BUFFER could fail if
BufferWholeFile was set and the required RAM was too high.
- Fix: SQLite2 : _executeSQL did not return Driver:notok on
failure.
- Fix: SQLite2 : _ExecuteSqlWork could clear all parameters if
Opcode was set to 0.
- Improvement: SQLite2 : BindBlobs can Bind_Static
Version 1.01 - May 5, 2025
- Change: IMPORT and EXPORT parameter order changed.
- PeekRam method removed (Not required since StringTheory is
available.)
- Better FILEERROR information from _morph method
- Push and Pop Errors in GetCustomProperty method.
- When exporting CSV, binary fields are saved as base64 and
identified with the prefix base64:. Import CSV sees this prefix
and base64-decodes the value before writing into the database.
- Null string in CSV export changed from ^null^
to null:null. This better aligns with the new base64: prefix.
- Fix: SQLite2 : Log [BIND] (real) for SQL() command did not
show the right value.
- Fix: SQLite2: Cleared self.stmt_set in PUT command
- Example: Source for the database changed from JSON to CSV.
jFiles extension removed from the app.
Version 1.00 - April 25, 2025