Platforms
©2000-2008 Tibbo Technology Inc.
writing data to one of the physical sectors. In the process, the fd. object will take
the previously used FRT sector and "release it" into the pool of spare FRT sectors.
At the same time, one of the spare FRT sectors will become active and store
changed data. The FAT operates in the same manner. While being fully transparent
to your application, the process greatly prolongs useful flash memory life.
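The spare-sector rotation described above can be illustrated with a toy simulation. This is a hedged sketch in Python, not the actual fd. object internals: every update of the record is written to a spare physical sector, and the previously active sector is released back into the pool, so erase/write cycles are spread across the whole group.

```python
class SectorRotator:
    """Toy model of the FRT spare-sector rotation described above.

    One logical record lives in one of several physical sectors;
    every update moves the record to a spare sector, so erase/write
    cycles are shared by the whole pool instead of hitting one sector.
    """

    def __init__(self, num_sectors=4):
        self.writes = [0] * num_sectors          # per-sector wear counters
        self.active = 0                          # sector currently holding the data
        self.spares = list(range(1, num_sectors))

    def update(self, _data):
        new = self.spares.pop(0)                 # take a spare sector
        self.writes[new] += 1                    # the erase + write happens here
        self.spares.append(self.active)          # release the old sector into the pool
        self.active = new

rot = SectorRotator(num_sectors=4)
for _ in range(100):
    rot.update("new FRT contents")

# 100 updates are spread over 4 sectors: 25 erase/write cycles each
print(rot.writes)   # → [25, 25, 25, 25]
```

Without the rotation, all 100 updates would have hit a single physical sector; with it, each sector absorbs only a quarter of the wear.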
You can, and are advised to, further reduce the wear of the FAT and FRT by
decreasing the number of writes that will be required. One way to do so is to
create all necessary files and allocate space for them once -- typically when your
application "initializes" your device.
Say you have a log file that stores events registered by your application. An obvious approach would be to simply append each new event's data to the file; this way, the file grows with each event added. But wait a second: this means that the FRT area, which keeps the current file size, will be changed each time you add to the file! The FAT area will be stressed, too!
An alternative approach would have us create a file of the desired maximum size once and fill it up with "blank" data (such as &hFF codes). We would then overwrite this blank data with actual event data as events are generated. This time around, our actions will cause no changes in the FRT and FAT areas, thus prolonging the
life of the flash IC. Incidentally, this approach is also more fault-tolerant.
The second method is, of course, more complicated. For example, you will need to remember, or be able to detect, where in the file the new event should go, rather than simply appending the event to the end of the file. The benefits, however, are plentiful, and the effort is worthwhile.
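The preallocate-once, overwrite-in-place method can be sketched as follows. This is an illustrative Python model using an in-memory buffer as a stand-in for the flash file; the record size, file size, and the &hFF blank marker mirror the description above, but none of the names here are fd. object API.

```python
# Hypothetical layout: a fixed-size log of fixed-size records,
# created once and filled with blank (0xFF) data.
RECORD_SIZE = 16
MAX_RECORDS = 64

log_file = bytearray(b"\xFF" * (RECORD_SIZE * MAX_RECORDS))

def find_free_slot(buf):
    """Detect where the next event goes: the first record still all-blank."""
    for i in range(MAX_RECORDS):
        rec = buf[i * RECORD_SIZE:(i + 1) * RECORD_SIZE]
        if all(b == 0xFF for b in rec):
            return i
    return None  # log is full; the caller decides what to do

def append_event(buf, data):
    """Overwrite blank data in place -- the file size never changes."""
    slot = find_free_slot(buf)
    if slot is None:
        raise RuntimeError("log full")
    # Pad with 0x00 so a written record is never mistaken for a blank one.
    rec = data[:RECORD_SIZE].ljust(RECORD_SIZE, b"\x00")
    buf[slot * RECORD_SIZE:(slot + 1) * RECORD_SIZE] = rec
    return slot

append_event(log_file, b"boot ok")
append_event(log_file, b"sensor high")
print(find_free_slot(log_file))   # → 2
```

Because the file never grows, nothing about it changes in the FRT or FAT; only the data sectors themselves are written.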
The data area of the disk has a limited form of leveling that spreads utilization across unused sectors. The fd. object makes sure that when your file needs a new data sector, this sector is selected at random from the pool of available data sectors. Once a data sector has been allocated to a file, however, it stays with that file for as long as necessary. So, if you are writing to a certain file offset over and over again, you are stressing the same physical sector of the flash IC.
With large files, you rarely write at the same offset all the time. For example, if you have a log file that occupies 1000 data sectors, it is unlikely that you will be writing to the same sector over and over again. For smaller files, the probability is higher. The solution is to erase the file and recreate it from time to time (not too often). This allocates a fresh set of randomly chosen sectors to the file.
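The effect of periodic recreation can be illustrated with a toy simulation. This is plain Python with hypothetical disk geometry (256 data sectors, a 4-sector file, daily rewrites); it is not fd. object behavior, only a model of the random re-allocation described above.

```python
import random

DISK_SECTORS = 256
FILE_SECTORS = 4          # a small file stresses only a few physical sectors

wear = [0] * DISK_SECTORS

def allocate_file():
    """Model of random allocation: data sectors are picked from the pool."""
    return random.sample(range(DISK_SECTORS), FILE_SECTORS)

sectors = allocate_file()
for day in range(365):
    if day % 30 == 0:              # from time to time (not too often)...
        sectors = allocate_file()  # ...erase and recreate the file
    for s in sectors:              # daily rewrite of the whole file
        wear[s] += 1

# Without recreation, the same 4 sectors would absorb all 365 writes each;
# with monthly recreation, the load is spread over many more sectors.
print(max(wear), sum(1 for w in wear if w))
```

The total number of writes is the same either way; what changes is how many physical sectors share them, which is exactly what prolongs the life of the flash IC.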
Direct sector access is a low-level form of working with the flash. You are your own master: the fd. object does not help you with anything, and it is up to you to make sure that the flash IC does not wear out unevenly. Generally speaking, limit the number of times you write to the flash and/or implement some form of leveling in which a large number of sectors share the same task and each sector gets its fair share of work.
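One common leveling scheme for a frequently updated record is sketched below. This is a hedged illustration in Python, not part of the fd. object: writes rotate through a group of sectors, and each copy carries a sequence number so that the newest copy can be found again at startup.

```python
GROUP = 8                       # sectors sharing the same task
sectors = [None] * GROUP        # simulated flash: (seq, payload) or None

def save(seq, payload):
    """Each write lands on the next sector in turn, so wear is shared evenly."""
    sectors[seq % GROUP] = (seq, payload)

def load_latest():
    """Scan the group; the entry with the highest seq is the current one."""
    valid = [s for s in sectors if s is not None]
    return max(valid)[1] if valid else None

seq = 0
for value in ("a", "b", "c", "d"):
    save(seq, value)
    seq += 1

print(load_latest())   # → d
```

With this scheme, N sectors reduce the wear on any one sector by a factor of N, at the cost of a short scan when the record is read back.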
Ensuring Disk Data Integrity
Maintaining data integrity is a very important task, and one where the flash memory needs a lot of help from your smart Tibbo Basic application. The biggest source of potential trouble is a sudden loss of power right in the middle of a write to the flash IC. This can cause devastation on two levels:
The data in flash sectors is changed by first erasing the sector (a process in
which all sector locations return to the value of &hFF), and then writing the new
data. Should the power fail right in the middle of this process, you may end up