Re: Perl hash query
by "Darrell King" <darrell(at)webctr.com>
Date: Sat, 12 Feb 2000 06:45:15 -0500
To: <hwg-techniques(at)hwg.org>
References: fourelephants
There are a couple of flatfile management scripts in the scripting section
of my site that you can download: http://www.webctr.com/scripting. Nothing
great, but you're welcome to them if they help.
The thing you are trying to do is quite possible (there's a rough sketch
after the steps):
1) decide how you want the data stored (e.g. id|vote|number of votes is
just fine), one record per line.
2) create the flatfile and set its permissions so the script can read it
3) have your script step through the file line-by-line with a while loop
4) when it finds the correct id, it can use split() to break the record
into some variables
5) do your math with the variables
6) reassemble the record string with the new values
7) open the file for reading again, and a new temp file for writing
8) read the old file line by line, and write it to the new one
line by line, checking the id number of each
9) when you get to the active id, write the new record instead of the old
one
10) when done with all records, delete the old file and rename the new one
to that name
11) continue with your exit routine.
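Roughly, steps 7 through 10 come out like this in Perl -- just a sketch,
untested, and the file name data.txt, the field layout and the variable
names are only examples:

#!/usr/bin/perl
# Rough sketch of steps 7-10 -- assumes a file data.txt with records like
# "725|2876|652" (id|vote total|number of votes), one per line.
use strict;

my $id   = 725;    # these two really come in from the form
my $vote = 4;

open(OLD, "<data.txt")     or die "Can't read data.txt: $!";
open(NEW, ">data.txt.tmp") or die "Can't write data.txt.tmp: $!";

while (my $line = <OLD>) {
    chomp $line;
    my ($rec_id, $score, $num) = split /\|/, $line;
    if ($rec_id == $id) {
        $score += $vote;                                    # do the math
        $num++;
        print NEW join('|', $rec_id, $score, $num), "\n";   # new record
    }
    else {
        print NEW "$line\n";   # every other record passes straight through
    }
}

close(OLD);
close(NEW);

# only swap the temp file into place once everything above worked
rename("data.txt.tmp", "data.txt") or die "Can't rename temp file: $!";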
Make sure you do the proper error checking during this process, to ensure
that all files open correctly. It would be less than ideal to delete the
old flatfile when the new one was never created...
Also be sure to set the permissions. Some systems have the script running
as a different user than you, and you won't be able to manipulate the file
using telnet or ftp if it belongs to that user and you didn't plan the
permissions...
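If the script does end up creating data.txt under its own user, one
workaround (assuming your host allows it) is to have the script loosen the
mode itself right after it creates the file -- 0666 here is just an example:

chmod(0666, "data.txt") or die "Can't chmod data.txt: $!";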
Finally, you can either open once and do the read/math/write sequence all
together, or in three separate steps...get info, do math, make new
flatfile...it's up to you. I'd say it depends upon how much manipulation you
need to do...if it's a lot, then it might be best to close the file so it's
available to others (which reminds me...don't forget to lock it while you
are rebuilding it!)...
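For the locking part, flock() is the usual tool -- a bare-bones sketch,
assuming every script that touches data.txt grabs the same lock first (the
lock file name is made up):

use Fcntl qw(:flock);

open(LOCK, ">data.lock") or die "Can't open lock file: $!";
flock(LOCK, LOCK_EX)     or die "Can't get exclusive lock: $!";

# ... read data.txt, do the math, write and rename the temp file here ...

close(LOCK);   # closing the handle releases the lock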
Darrell
----- Original Message -----
Subject: Re : Perl hash query
Looking back at my question it was a bit vague, so here are some more
details (please excuse any incorrect language, I'm still learning):
I am writing a routine within a script that is a type of ranking system. The
users will have the opportunity throughout the site to rank items with a
score of one to five, and then depending on the score a number of stars is
shown (you guessed it - one to five stars!). Now the script works by taking
two values from the user (submitted by a form): the ID of the item, e.g. 725
(in a hidden field), and their vote, e.g. 4. The script then uses the ID
number to open two separate files, 725score.txt and 725num.txt (these two
files are created automatically the first time someone votes), adds the
score of 4 to the value in 725score.txt, increments 725num.txt, and then
divides the new score by the incremented value of 725num.txt. This result is
then passed to the graphic routine which displays the number of stars.
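Boiled down, what the script does now is something like this (simplified,
not the real code):

my ($id, $vote) = (725, 4);      # really come from the form

open(SCORE, "<${id}score.txt") or die $!;
my $score = <SCORE>; close(SCORE);

open(NUM, "<${id}num.txt")     or die $!;
my $num = <NUM>;     close(NUM);

$score += $vote;     # add the new vote to the running total
$num++;              # one more vote cast

open(SCORE, ">${id}score.txt") or die $!;
print SCORE $score;  close(SCORE);

open(NUM, ">${id}num.txt")     or die $!;
print NUM $num;      close(NUM);

my $average = $score / $num;     # this goes off to the star graphic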
Now this all works fine (and I'm actually pretty chuffed with it as it's the
first script I ever wrote!!) but as the site will allow people to rank
hundreds and hundreds of items, I'm not thrilled about having hundreds and
hundreds of files each with a single number in them.
What I would like to do is simply use one file, data.txt, which would hold
all of the above with the ID number being the key, but I just cannot figure
out how to do it. I think what I want is for the script to open data.txt,
search for the ID number and, if it is there, pull out the details, say
"id|vote|number of votes", perform the calculation and return the new values
to the file, where id is a unique value. I'm not sure if hashes are what I am
after or not - I'm really confused now. So, that's my question: how do I do
this!
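The nearest I can get to picturing it is reading the whole file into a hash
keyed on the ID, something like the lines below, but I can't see how to get
the new values back out to the file:

my %items;
open(IN, "<data.txt") or die $!;
while (<IN>) {
    chomp;
    my ($id, $score, $num) = split /\|/;
    $items{$id} = "$score|$num";   # key is the ID, value is the rest
}
close(IN);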
If I am missing something really obvious or am treating the data from the
wrong angle, please let me know. I've been using the Dummies book and
Osborne's "The Complete Reference Perl" but am now cursing not getting the
O'Reilly book, which is prohibitively expensive here in Bangkok.
Many Thanks
Stuart