My first concrete idea for a "reverse tree" for filtering suggestions sent to me personally regarding my in-progress NewEco:
Each person who has an account on my system and at least 11 (eleven) seconds of credit on his/her account may spend 11 seconds of credit to post a brief (maximum 140 characters, US-ASCII only) text message, which must not contain any "naughty" words (see George Carlin's list of the seven words you can't use), nor any abusive or threatening language, nor anything which is illegal for transport/export/import over the InterNet.
(Anyone with less than 11 seconds of credit may at any time volunteer to answer a Turing question to gain additional credit.)
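A rough Python sketch of these posting and credit rules, just to make them concrete; the names here (POST_COST_SECONDS, FORBIDDEN_WORDS, etc.) are placeholders, not identifiers from my actual code, and the real forbidden-word list would of course contain the actual seven words plus the other disallowed categories:

# Sketch of the rules above: at least 11 seconds of credit to post, and a
# 140-character US-ASCII message with no forbidden words.

POST_COST_SECONDS = 11
MAX_LENGTH = 140
FORBIDDEN_WORDS = {"example-banned-word"}   # placeholder for the real list

def may_post(credit_seconds):
    """An account may post only if it holds at least 11 seconds of credit."""
    return credit_seconds >= POST_COST_SECONDS

def message_problems(text):
    """Return the reasons a message is unacceptable (empty list means OK)."""
    problems = []
    if len(text) > MAX_LENGTH:
        problems.append("longer than 140 characters")
    if any(ord(ch) > 127 for ch in text):
        problems.append("contains non-US-ASCII characters")
    # whole-word match only; a real check would be more thorough
    if set(text.lower().split()) & FORBIDDEN_WORDS:
        problems.append("contains a forbidden word")
    return problems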
Each person with an account may volunteer to act as first-round filter and/or as higher-round filter. People who volunteer as first-round filters are immediately put into the pool to receive and evaluate new "suggestions" that have been posted.
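A minimal sketch of how the first-round volunteer pool might be kept, assuming a simple round-robin over whoever has volunteered (the real system would presumably also track who is actually on-line and available right now):

from collections import deque

class FirstRoundPool:
    def __init__(self):
        self._available = deque()   # account-numbers of volunteer filters

    def volunteer(self, account_number):
        """A user volunteers as a first-round filter and joins the pool."""
        if account_number not in self._available:
            self._available.append(account_number)

    def retire(self, account_number):
        """A volunteer withdraws from first-round filtering."""
        if account_number in self._available:
            self._available.remove(account_number)

    def next_filter(self):
        """Return the next available first-round filter, rotating the pool."""
        if not self._available:
            return None
        account = self._available.popleft()
        self._available.append(account)   # stays in the pool for later messages
        return account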
Each such posted message is sent to the next available first-round filter to be evaluated for decency of language, but not for content. If the language is rejected, it must be for some specific reason, such as an obscene word or phrase, abusive/derogatory or threatening language/content, or some specific violation of the law of the USA or of the ICC (International Criminal Court). Each rejected message is then put in a queue for me to personally inspect for possible criminal prosecution. Also, the person who posted the offensive message can see the account-number of the person who judged it offensive and the category of the reason it was rejected, and optionally a text explanation if the first-round filter feels it is worth the time/energy.
Each accepted (non-offensive) message is marked to be public-viewable, and is passed to the second round of filtering.
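Here is a rough sketch of a first-round verdict and the routing it implies; the category names and field names are my placeholders, not the actual implementation:

from dataclasses import dataclass
from enum import Enum
from typing import Optional

class RejectReason(Enum):
    OBSCENE = "obscene word or phrase"
    ABUSIVE = "abusive/derogatory or threatening language/content"
    ILLEGAL = "specific violation of the law of the USA or of the ICC"

@dataclass
class FirstRoundVerdict:
    filter_account: int                      # visible to the poster on rejection
    reason: Optional[RejectReason] = None    # None means the language was accepted
    explanation: str = ""                    # optional note from the filter

inspection_queue = []    # rejected messages, queued for my personal inspection
second_round_queue = []  # accepted messages, now public-viewable

def apply_first_round(message, verdict):
    message["first_round"] = verdict
    if verdict.reason is None:
        message["public_viewable"] = True
        second_round_queue.append(message)   # passed on to quality filtering
    else:
        inspection_queue.append(message)     # for possible criminal prosecution
        # The poster can see verdict.filter_account, verdict.reason, and
        # verdict.explanation (if the filter bothered to write one).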
Initially, I'm the only person who performs second-round filtering, but as people volunteer for second-round filtering some of the workload gets passed to them, depending on availability. Second-round filtering is per quality of suggestion: whether it would help the system, whether it would be feasible, whether it is coherently written, etc. Each "suggestion" can be rated as definitely good, to pass up to the next level; definitely worthless and totally rejected; needing some re-write before re-consideration; or on a topic the filter can't judge, hence put back in the queue for some other second-level filter to consider. The person who submitted the suggestion directly to the person doing the filtering (either the original submitter, not the first-level offensive-language filter, or a second-or-higher-level filter) gets an alert as to the classification, and in the case of needing a re-write will now be allowed to submit an edited version. The public record also shows the filter result and the user-number of the filter.
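A sketch of those four second-round outcomes and their handling; alert() is a stand-in for whatever notification mechanism the real system would use, and the field names are placeholders:

from enum import Enum

class SecondRoundResult(Enum):
    PASS_UP    = "definitely good; pass up to the next level"
    REJECT     = "definitely worthless; totally rejected"
    REWRITE    = "needs a re-write before re-consideration"
    CANT_JUDGE = "topic this filter can't judge; re-queue for another filter"

def alert(account_number, result):
    """Placeholder for the alert sent to whoever submitted the suggestion."""
    print(f"alert to account {account_number}: {result.value}")

def handle_second_round(suggestion, result, filter_account,
                        same_level_queue, next_level_queue):
    # The public record shows the filter result and the filter's user-number.
    suggestion.setdefault("history", []).append((filter_account, result))
    alert(suggestion["submitter"], result)
    if result is SecondRoundResult.PASS_UP:
        next_level_queue.append(suggestion)
    elif result is SecondRoundResult.CANT_JUDGE:
        same_level_queue.append(suggestion)   # some other filter at this level takes it
    elif result is SecondRoundResult.REWRITE:
        suggestion["may_submit_edit"] = True  # submitter may send an edited version
    # REJECT: the suggestion goes no further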
Anyone whose "suggestion" has been approved at both first and second level will then be allowed to post one more "suggestion". Per user, at most one "suggestion" may be pending at either first or second level at any time. Each user must wait until his/her "suggestion" has passed the first and second level before posting another.
Anyone whose "suggestion" has been rejected at the first level, will be be disalloed from any further posting until I have a chance to decide whether to overrule the first filter and allow the text, in which case it goes to second level, or delete the text and warn the user, in which case the text is expunged and the offending user may try again with a less offensive text, or deactivate the poster's account and possibly report the user's IP+timestamp history to law enforcement agencies.
Anyone at second level or higher can recommend anyone especially good at filtering at the first level, who has volunteered for above-first-level filtering but who is not yet assigned above the first level, to be promoted to also be a second-level filter, in a fixed location immediately below the person who made the recommendation. Anyone so promoted may at any later time retire from first-level filtering while retaining the second-level-filter position. This is how the *tree* grows into non-trivial form when I'm initially the only person at second level. The person who did this promotion is now both a second-level filter (getting input directly from first-level filters) and a third-level filter (getting input indirectly through the new second-level filter), with third-level filtering carrying a higher alert level. The person may at any time de-activate his/her status as available for second-level filtering while retaining his/her role as third-level filter, or may simply ignore second-level requests most of the time (so somebody else gets them instead, depending on who is available to service them).
Anyone (A) at least one level above second-level filtering, who has at least two second-or-higher filters (B,C) immediately under him/her, can demote anyone directly under them (C) to instead be under somebody else who is under them (B), thus delegating some of the workload from the now-lower person (C) to that now-middle person (B). Normally this would be done when one of the next-lower filters (C) is doing a really bad job at content-filtering, recommending too much stuff that is crap.
Anyone (A) at least two levels above second-level filtering can see not only his/her immediately lower filters but also the tree down a second level. If this person (A) sees somebody two levels down (C) under somebody one level down (B), such that B is passing most/all traffic from C, and A agrees this traffic is very good, A may promote C to be directly under A.
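The three tree-restructuring moves just described (promoting a first-level filter to second level, delegating C under B, and pulling C up past B) could be sketched like this; the class and function names are my placeholders, not the actual implementation:

class FilterNode:
    """One filter in the tree; children feed accepted traffic up to the parent."""
    def __init__(self, account_number):
        self.account = account_number
        self.parent = None
        self.children = []

    def attach(self, child):
        if child.parent is not None:
            child.parent.children.remove(child)
        child.parent = self
        self.children.append(child)

def promote_to_second_level(recommender, account_number):
    """A second-or-higher filter promotes a good first-level filter to sit
    directly under him/her; the recommender thereby also becomes a
    third-level filter for traffic coming up through the new node."""
    new_node = FilterNode(account_number)
    recommender.attach(new_node)
    return new_node

def delegate(a, b, c):
    """A demotes C to sit under B instead, delegating C's workload to B.
    Both B and C must currently be immediately under A."""
    if b is c or b.parent is not a or c.parent is not a:
        raise ValueError("B and C must be distinct and both immediately under A")
    b.attach(c)

def grandparent_promote(a, c):
    """A, seeing that C's traffic (two levels down) is very good and is being
    passed along almost untouched, pulls C up to sit directly under A."""
    b = c.parent
    if b is None or b.parent is not a:
        raise ValueError("C must be exactly two levels below A")
    a.attach(c)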
Any user may, for a fee, browse the entire tree, and see the status of each message at each node within the tree.
Any text that reaches the very top, which I personally like enough, will get moved to a regular HTML WebPage, which anyone anywhere will be able to view for free. I might comment on some of the suggestions, such as linking to my plans to implement the suggestions, soliciting help, etc. Suggestions which are actually questions may be directed to a FAQ WebPage viewable by anyone for free, or to a FAQ search engine which only logged-in users may use for a small fee (perhaps one second of labor-time per query, but only allowed for users who have at least 11 (eleven) seconds in their account, with Turing questions always available as usual to restore credit whenever it drops below 11).
Aside: I'm thinking that *every* significant use of my system, such as submitting RevTre suggestions as above, and also posting RFB (Request For Bid) for new contracts, submitting bids on such contracts, using my FAQ search engine, etc., should require at least 11 (eleven) seconds of credit in the user's account, even if the actual cost of the service is (much) smaller, even if the actual cost is just 1 (one) second as it'd be for RevTre and FAQ searches. This would effectively require users to pass at least one randomly-selected Turing question in addition to the fixed Turing questions required to get an account and to restore 10 (ten) seconds of credit if it dropped to 0 (zero).

This would protect against anyone building a spambot that knew the canned responses needed for getting an account and for re-establishing 10 (ten) seconds of credit after it has dropped to zero. Such a spambot, in order to make serious use of the system, would require a human to answer one or more questions randomly selected from the 114 (one hundred fourteen) different Turing questions (eventually many more will be added), and it would take an inordinate amount of work for a spambot-maker to collect enough of these randomly-selected questions that the spambot would know them all. My code for adjusting the value of each question, if people spend much longer answering it than originally estimated or can answer it more quickly than originally estimated, will make it impossible for spambot makers to pick a single value, program their spambot with answers to all questions of that particular value, and ignore the questions of other values. In fact, any particular value that is hit too many times will gradually lose questions, as their values get changed to others, until that value category is empty.
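A rough sketch of the question re-valuation idea, assuming a simple average of observed answer times; the details (rounding to whole seconds, the TuringQuestion name, etc.) are placeholders, not my actual code:

from dataclasses import dataclass, field

@dataclass
class TuringQuestion:
    text: str
    value_seconds: int            # current credit awarded for a correct answer
    observed_times: list = field(default_factory=list)

    def record_answer(self, seconds_taken):
        """Record how long a human actually took to answer this question."""
        self.observed_times.append(seconds_taken)

    def revalue(self):
        """Drift the question's value toward the average observed answer time:
        questions answered faster than estimated lose value, slower ones gain."""
        if self.observed_times:
            average = sum(self.observed_times) / len(self.observed_times)
            self.value_seconds = max(1, round(average))
        return self.value_seconds

Because values drift this way, the set of questions sitting at any one value keeps changing, which is exactly what defeats a spambot programmed only for the questions at a single value.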
Update 2011.Jan.09: A few months ago I changed account-creation and zero-credit restoration so that in such cases the account has only a tiny amount of credit, and an additional randomly-selected Turing question is needed to achieve even the 6 seconds needed to qualify for surveys. Accordingly, a higher threshold (to qualify for bidding on contracts) as suggested above is no longer needed.