Diffstat (limited to 'src/gperf-to-do')
1 files changed, 0 insertions, 22 deletions
diff --git a/src/gperf-to-do b/src/gperf-to-do
deleted file mode 100644
index 05caecca9dbd..000000000000
--- a/src/gperf-to-do
+++ /dev/null
@@ -1,22 +0,0 @@
-1. provide output diagnostics that report how many input keys there are
- in total, how many remain after dealing with static links, and,
- finally, how many dynamic duplicates remain once the algorithm
- is complete.
-2. fix up GATHER_STATISTICS for all instrumentation.
-3. Useful idea:
- a. Generate the wordlist as a contiguous block of keywords, as before.
- This wordlist *must* be sorted by hash value.
- b. Generate the lookup_array, which is an array of signed {chars,shorts,ints},
- whichever allows full coverage of the wordlist dimensions. If the
- value v, where v = lookup_array[hash(str,len)], is >= 0, then we
- simply use this result as a direct access into wordlist to snag
- the keyword for comparison.
- c. Otherwise, if v is < 0, this is an indication that we'll need to
- search through some number of duplicate hash values. Using a
- hash linking scheme we'd then index into a duplicate_address
- table that would provide the starting index and total length of
- the duplicate entries to consider sequentially.