A much faster Class TDataBase !!!

Maurizio
Posts: 705
Joined: Mon Oct 10, 2005 1:29 pm

Post by Maurizio »

Hello Frank

This is from hash.txt

Regards, Maurizio

Examples:
*
* LOCAL hHash := {=>} // Builds an empty hash
* LOCAL hAnother := { 'key a' => 'first value', 12 => { 'a', 'b', 'c'} }
*
* ---------------------------------------------------------------------
* * NOTE: Until now, the { => } construct was used by the TAssociativeArray()
* pseudo-object declaration. Most of the TAssociativeArray() system
* has been emulated in hashes, so older programs should not need
* to use it; but if you find any incompatibility when running an older
* program, including the "assocarr.ch" file will remap {=>} to an
* associative array class.
* ----------------------------------------------------------------------

//-----------------------------------------------------------------------------------
This is a typical usage of the colon operator:
*
* LOCAL hHash := { 1 => 10, 2 => 20 }
*
* hHash:NewKey := 'newval'
* ? hHash:NewKey // newval
*
* hHash:NewKey := 15
* ? hHash:NewKey // 15
*
* ? hHash['NEWKEY' ] // 15
* HSetCaseMatch( hHash, .F.)
* ? hHash[ 'newkey' ] // 15
*
* ? hHash:unexisting // BOUND ERROR
*
*
* A program written using only the ':' operator does not have to care
* about the hash case-match mode.
James Bott
Posts: 4654
Joined: Fri Nov 18, 2005 4:52 pm
Location: San Diego, California, USA

Post by James Bott »

Maurizio,

>Working directly on the DBF is 4 to 5 times faster than working with TDatabase.
>I use this function (with xHarbour) to load and save the data in a hash.

I admit I don't really know anything about hashes, but your code example doesn't seem to be using hashes, nor does it seem to be much different from the TDatabase load and save methods. Please explain.

The biggest problem with using functions is that you cannot inherit from them.

Regards,
James
demont frank
Posts: 167
Joined: Thu Mar 22, 2007 11:24 am

Post by demont frank »

James,

The great benefit of hashes is that they use a mechanism similar to index files to locate an element without scanning the whole array (I suppose; I am not a specialist).

Also, the syntax is much easier. In Clipper I scatter the elements with:

LOCAL h[ FCOUNT() ]
AEVAL( h, {| x, n | h[ n ] := FieldGet( n ) } )
RETURN h

To use it, I have to replace field names with field numbers, but when the structure of the DBF changes it is a disaster. (To cope with this I try to only add new fields; unused fields are kept.)
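One way to keep field numbers out of the calling code is to resolve the name once with FieldPos(). This is only a sketch; the helper names are made up, and it assumes a DBF is open in the current work area:

// Hypothetical helpers, assuming a DBF is open in the current work area.
// Scatter the current record into a plain array, one element per field.
FUNCTION RecScatter()
   LOCAL h[ FCOUNT() ]
   AEVAL( h, {| x, n | h[ n ] := FieldGet( n ) } )
RETURN h

// Fetch one value from the scattered array by field name,
// so the caller never hard-codes a field number.
FUNCTION RecGetByName( aRec, cField )
RETURN aRec[ FieldPos( cField ) ]

FieldPos() still has to look the name up, but at least the lookup is centralized instead of being hard-coded at every call site.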

TDatabase uses the field name (I suppose), but when these variables are accessed there will be a scan.


With hashes we can write:

LOCAL h := Hash(), i

FOR i := 1 TO FCOUNT()
   h[ FieldName( i ) ] := FieldGet( i )
NEXT

RETURN h

Now we can also use the field name, but the value will be found without a scan.
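Frank's idea can be rounded out into a scatter/gather pair. This is a sketch; HScatter() and HGather() are hypothetical names, and it assumes a DBF open in the current work area:

// Scatter the current record into a hash keyed by field name.
FUNCTION HScatter()
   LOCAL h := {=>}, i
   FOR i := 1 TO FCount()
      h[ FieldName( i ) ] := FieldGet( i )
   NEXT
RETURN h

// Gather the hash back into the current record. Keys that no
// longer exist as fields in the DBF are simply skipped, so a
// structure change does not break the code.
FUNCTION HGather( h )
   LOCAL cKey
   FOR EACH cKey IN HGetKeys( h )
      IF FieldPos( cKey ) > 0
         FieldPut( FieldPos( cKey ), h[ cKey ] )
      ENDIF
   NEXT
RETURN NIL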

With associative arrays the numeric index can also be used; both access methods work:

h[ 1 ], h[ FieldName( 1 ) ] and h:FieldName( 1 ) are all the same
James Bott
Posts: 4654
Joined: Fri Nov 18, 2005 4:52 pm
Location: San Diego, California, USA

Post by James Bott »

Frank and everyone,

OK, I didn't notice the use of some kind of hash object in the code before.

I am all for speed, but I wonder how much it matters in this case. According to Jose's numbers, assigning a value to a TDatabase object field 50,000 times took 35 seconds, which is 35/50000 = 0.0007 seconds per assignment. So if we have a database with 200 fields, it would still only take 0.0007 * 200 = 0.14 seconds to assign values to all the fields.
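Anyone who wants to check figures like these on their own machine can use a small loop like the one below. It is only a sketch (the key name is just a stand-in, and actual timings will vary by machine and build):

// Rough timing sketch for 50,000 hash assignments.
PROCEDURE TimeAssigns()
   LOCAL nStart := Seconds(), i, h := {=>}
   FOR i := 1 TO 50000
      h[ "CUSTNAME" ] := "test value"   // "CUSTNAME" is a stand-in key
   NEXT
   ? "50,000 assignments:", Seconds() - nStart, "seconds"
RETURN

The same loop body can be swapped for a TDatabase field assignment to get a like-for-like comparison.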

I am still in favor of speeding up TDatabase, but I am not sure a user will notice any speed improvement unless hundreds of records are being updated at one time.

Has anyone noticed a speed increase using this?

Regards,
James