WikiTrue   (+7, -2)
A wiki all of whose content is necessarily true

Wikipedia is a flawed but useful resource. However, when I look at it I realise there are several logical categories of content on it: content which is generally regarded as true, content which is popularly regarded as true, controversial content, and content whose truth value is probably not very important. Then there's a category with a much higher standard of truth. It could still be wrong, of course, but its wrongness can be clearly demonstrated using only the data on the site: for instance, if it contradicts itself, or says different things in different entries which can't both be true. This would include articles on mathematics and logic, arguments and proofs, and an entirely different category concerning cultural artifacts; for example, Buffy is necessarily a fictional character in 'Buffy The Vampire Slayer' in a sense, i.e. there is canon and non-canon.

So my idea is this. As well as having a site like Wikipedia, whose content can be contentious, depends on dirty, muddy, confusing physical reality for a lot of its articles, and can be argued over endlessly, produce a website whose policy is only to present necessary truths - facts which cannot be rationally doubted and are true in all possible worlds. This could still contain a lot of Wikipedia-like content: articles on calculus, on fictional entities and phenomena which are nonetheless consistent within their fictional universe, and on what is true given the context of a contentious issue. It could include articles on mythology, unicorns, Star Trek and the like as well. It could even include articles about things which are necessarily false, perhaps with the introduction "The following article is false" in order to make it true.

Clearly people could and would argue about the necessary truth or otherwise of the content, leading to deletions. But in addition, software could parse the text so that any statement, or conjunction of statements, in an article would automatically be deleted if its truth table wasn't entirely consistent - or, in fact, couldn't even be submitted in the first place.
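
If the statements could be parsed into propositional form (a big "if" - the parsing is assumed here, not solved), the consistency check itself is mechanical. A minimal sketch in Python:

# Sketch: reject a submission if the conjunction of its statements
# is not satisfiable under any assignment of truth values.
# Statements are assumed already parsed into propositional functions.
from itertools import product

def consistent(statements, variables):
    # statements: functions mapping {variable: bool} to True/False
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(s(assignment) for s in statements):
            return True   # some possible world makes them all true
    return False          # contradiction: delete or refuse the edit

# "Unicorn u has blue eyes" vs. "unicorn u does not have blue eyes"
s1 = lambda v: v["u_blue"]
s2 = lambda v: not v["u_blue"]
print(consistent([s1], ["u_blue"]))      # True
print(consistent([s1, s2], ["u_blue"]))  # False: rejected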

The value of this would be that it would be easier to trust the information in this Wiki without ploughing one's way through the whole lot to verify it. The scope would of course be limited, but the standard of reliability would be much higher.
-- nineteenthly, Mar 11 2011

Real Time Google Earth http://www.popsci.c...-people-cars-clouds
[JesusHChrist, Mar 12 2011]

Does "consistent with canon" imply "internally consistent? http://books.google...0Musketeers&f=false
[mouseposture, Mar 13 2011]

W3C: Semantic Web https://www.w3.org/...ds/semanticweb/data
[zen_tom, Jul 06 2016]

W3C: OWL Web Ontology Language https://www.w3.org/...semantics-20040210/
This is the specification by which you can define your logical constructs that can be applied to information to measure consistency. [zen_tom, Jul 06 2016]

http://wiki.dbpedia.org/ http://wiki.dbpedia...about/facts-figures
The English version of the DBpedia knowledge base currently describes 4.58 million things, out of which 4.22 million are classified in a consistent ontology (http://wiki.dbpedia.org/Ontology2014), including 1,445,000 persons, 735,000 places (including 478,000 populated places), 411,000 creative works (including 123,000 music albums, 87,000 films and 19,000 video games), 241,000 organizations (including 58,000 companies and 49,000 educational institutions), 251,000 species and 6,000 diseases. [zen_tom, Jul 06 2016]

What is truth?
-- RayfordSteele, Mar 11 2011


Well yes.
-- nineteenthly, Mar 11 2011


In context perhaps. No quarrel with internal consistency even if the entire system turns out to be false.
-- nineteenthly, Mar 12 2011


Why would it be easier to trust the articles on this Wiki more than the same articles on any other Wiki?
-- ldischler, Mar 12 2011


// Why would it be easier to trust the articles on this Wiki more than the same articles on any other Wiki? //

Because it would be impossible to compose and submit an article which wasn't true. For instance, suppose you wanted to submit the text "two plus three equals one". The software running the site would turn that into a bit of arithmetic, note that it was false, and reject it. If there were already an article consisting of the text "two plus three equals five" and you edited the last word to read "six", you would be unable to submit that edit. Anything you did to make the article inconsistent would also be rejected, provided the software could detect it. For example, suppose there is an article containing the sentence "all unicorns always have blue eyes", and you want to add - somewhere in the article, or perhaps anywhere in the wiki - the comment "all unicorns always have brown eyes and never have blue eyes", or words which logically imply that. That edit would be rejected.
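
As a toy sketch of the arithmetic case (Python; the English-to-expression parsing is the hard part and is only gestured at here):

# Sketch: evaluate a parsed arithmetic claim; false claims are rejected.
WORDS = {"one": 1, "two": 2, "three": 3, "five": 5, "six": 6}

def check_claim(text):
    # handles only "X plus Y equals Z" - a stand-in for a real parser
    lhs, rhs = text.split(" equals ")
    a, _, b = lhs.split()
    return WORDS[a] + WORDS[b] == WORDS[rhs]

print(check_claim("two plus three equals five"))  # True: accepted
print(check_claim("two plus three equals six"))   # False: edit refused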
-- nineteenthly, Mar 12 2011


WikiConsistent, perhaps. Also, mfd truth algorithm magic?
-- daseva, Mar 12 2011


Do I understand correctly that, if I post "unicorns always have blue eyes" then your subsequent attempt to post "some unicorns have brown eyes" will fail? But that if we had posted in opposite temporal order, mine would have been the rejected post?

Assuming this idea were even possible, why not vary the rules by which edits are rejected, and use it as a testbed for epistemological theories? For example, suppose it contained a theory, and some observations mostly consistent with the theory. Suppose more observations are posted, not so consistent with the theory. Epicycles are added to the theory, keeping it consistent with new observations. Could you implement a system such that, eventually someone posts a theory contradicting the old one, but so much more consistent with the accumulated observations that the new theory is accepted and the *old* one rejected?
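
Very roughly, and with everything hand-waved (a Python sketch; the theories and observations are stand-ins):

# Sketch: accept whichever theory is consistent with more of the
# accumulated observations, so a better theory can displace the old one.
def best_theory(theories, observations):
    def score(theory):
        return sum(1 for obs in observations if theory(obs))
    return max(theories, key=score)

circles  = lambda obs: obs["fits_circle"]    # old theory
ellipses = lambda obs: obs["fits_ellipse"]   # new, contradicting theory
observations = [
    {"fits_circle": True,  "fits_ellipse": True},
    {"fits_circle": False, "fits_ellipse": True},  # the "epicycle" data
    {"fits_circle": False, "fits_ellipse": True},
]
print(best_theory([circles, ellipses], observations) is ellipses)  # True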
-- mouseposture, Mar 12 2011


Yes, you do understand me correctly. However, that would only be one genre of article on the wiki. They would simply be castles in the air, as it were; they wouldn't need to be true, only consistent. On the other hand, if there were entirely non-S-intensional subjects such as mathematics on there, they would have to be true, because it would be impossible to post contingent truths on them at all. It would be theoretically possible to "parse" the whole lot without any empirically verifiable information, and therefore also possible to rule out the submission of any false statements whatsoever in such subjects.

I suppose there are at least two categories here: the necessarily true and the internally consistent. The necessarily true can be handled; that's not magic. It simply requires each statement, and the conjunction of every statement in an article, to be true under all truth assignments to the statements' components.
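
To put it another way (again a sketch only, with the statements assumed parsed): the consistency test asks whether *some* truth assignment satisfies everything; the "necessarily true" test asks whether *every* one does.

# Sketch: an article is necessarily true only if its conjunction
# holds under every assignment of truth values to its components.
from itertools import product

def necessarily_true(statements, variables):
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if not all(s(assignment) for s in statements):
            return False
    return True

print(necessarily_true([lambda v: v["p"] or not v["p"]], ["p"]))  # True
print(necessarily_true([lambda v: v["p"]], ["p"]))                # False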

[Mouseposture], yes, I like that idea. In fact, I suppose what I might have done here is mix up two types of content, which I nonetheless feel sort of belong together. The thing is, when I look at Wikipedia, I often doubt its validity, but there are at least two types of article where that doesn't apply in the same way. One is mathematical articles, whose truth it's possible to check for oneself: I haven't looked, but I can imagine an article on prime numbers which would be necessarily true and checkable simply using computing power, or even your own brain power. The other is more contentious, but consists of the likes of popular culture and myth, where the settings and characters do not exist or are fictionalised, but cannot behave logically inconsistently without violating canon.

Your idea is akin to the creation of a detailed fictional setting, but more rigorous, and it's really good!
-- nineteenthly, Mar 12 2011


//cannot behave logically inconsistently without violating canon// Not so fast.

Have you read Umberto Eco's essay on this topic? <link> (I recommend it: it's light and amusing, which is remarkable, considering the subject.)
-- mouseposture, Mar 13 2011


//Because it would be impossible to compose and submit an article which wasn't true.//

So there wouldn't be much content on this Wiki, it seems.
-- ldischler, Mar 14 2011


I was reading this page, thinking about annotating or voting on this idea, and just then my phone chimed and I got my thank-you e-mail from Wikipedia for my monthly donation.

On topic, for this wiki to work, it seems like it would need a branching category tree, where the validity of an article is only checked against its parent article, grand-parent, etc. Otherwise you can't have, say, an article on Buffy the Vampire Slayer and an article on Twilight, because vampires are defined differently in those fictional worlds.

So instead of having a wiki where you know everything is true, you'll always know in which context something is true by following the lineage of the article. If it's attached straight to the root or orphaned, you can probably disregard it. If it has a long and venerable lineage of articles, it's probably solid.
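
Something like this, perhaps (Python sketch; the articles and the pairwise consistency test are stand-ins):

# Sketch: validate an article only against its lineage in the
# category tree, so Buffy-vampires and Twilight-vampires can coexist.
parent = {
    "Buffy": "Vampire fiction",
    "Twilight": "Vampire fiction",
    "Vampire fiction": "Fiction",
    "Fiction": None,  # root
}

def lineage(article):
    while article is not None:
        yield article
        article = parent.get(article)

def acceptable(article, consistent_with):
    # consistent_with(a, b): hypothetical pairwise consistency check
    ancestors = list(lineage(article))[1:]
    return all(consistent_with(article, a) for a in ancestors)

# Buffy is never checked against Twilight - only against its ancestors:
print(list(lineage("Buffy")))  # ['Buffy', 'Vampire fiction', 'Fiction']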
-- iaoth, Mar 16 2011


Yes, I like that. It could probably be covered by a graphical representation of the world in which the article is true.

[ldischler], on the contrary. It's feasible to write entire books which can be mathematically proven to be true; Principia Mathematica (Russell and Whitehead, not Newton) appears to be one of these. I suspect there are articles on Wikipedia which have quite large necessarily true blocks of text in them.
-- nineteenthly, Mar 16 2011


I was going to link to wikitruth.org, as it's kind of relevant and shows some of the pitfalls in the idea of "truth" in this context.

The wikitruth website is gone. The "wikitruth" article on wikipedia has been deleted, along with its entire revision history and discussion page. Pitfalls indeed!

The original dialogue basically went:
Wikitruth: Wikipedia is false.
Wikipedia: Wikitruth is false,
which was bound to cause problems.

I'm almost afraid to make this annotation, lest Jimmy's wrath descend on the Halfbakery in a Gödelian firestorm of paradoxical fury.
-- spidermother, Apr 17 2011


Now that is inspired!

It's almost the paradox of the preface: the conjunction of all statements made on Wikipedia is false, but probably the majority of users believe that the edits they make result in statements whose conjunction is true.
-- nineteenthly, Apr 17 2011


I love it! But I think it needs to have a proof for each article, and if necessary, a list of axioms that the encyclopedia accepts. The proofs can be checked and edited by the editors for accuracy. Maybe there should be a draft status for articles that are probably true but have errors in their proofs. This is to allow their authors to repair them, and it should be clear that they are not asserted by the wiki to be true. The axiom list can be set to the common basis that people use for mathematics, logic, and possibly science - preferably intuitive stuff that laymen and mathematicians can agree on, rather than something like using ZFC and modelling numbers as sets of sets. But at the same time, the axiom list needs to be carefully managed by the editors - the quality of their wiki depends on it.
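
As a very rough sketch of what a machine-checked article might look like - here in Lean, with made-up theorem names - the claim and its proof would travel together, and the proof checker plays the editor's role:

-- An article whose claim carries a machine-checkable proof:
theorem two_plus_three_eq_five : 2 + 3 = 5 := rfl

-- An article leaning only on the agreed axiom/lemma base:
theorem add_comm_example (a b : Nat) : a + b = b + a := Nat.add_comm a b

-- A "draft" article: claim stated, proof still missing, so the
-- wiki would flag it rather than assert it as true.
theorem draft_claim (a : Nat) : a + 0 = a := sorry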

Secondly, mouseposture's idea of a wiki "testbed for epistemological theories" should also exist. It could be a sister project to WikiTrue, sharing editors and using WikiTrue articles as references where applicable. In fact, I can't overstate how good an idea I think the theory wiki is. It could start by dumping loads of commonly accepted scientific findings onto itself, but with each one treated as an unfounded hypothesis until the observations start coming in.

[ldischler] says "So there wouldn't be much content on this Wiki, it seems." That is true, but everything on it will carry a heavy weight of reliability!

Even if the project is impossible because of Gödel's Incompleteness Theorem, it will be great, because we'll all get to see the effects of that theorem in action.
-- loggor, Apr 17 2012


The truth is you'll have to work it out for yourself.
-- pashute, Jul 06 2016


Aha!

I'm currently working on a project that does exactly this!

In addition to the WWW, Tim Berners-Lee also came up with the idea of "Linked Data" and/or "Semantic Web" which aims to provide a framework not of a linked web of documents (as we all know and love), but a linked web of data points.

It starts with a specification for describing knowledge, called Resource Description Framework (RDF). RDF breaks down any element of data into a "triple". A triple is an atomic unit of information, and consists of a subject, a predicate, and an object.
Examples might be:
Luke isA Person
R2D2 isA Droid
Vader isA Person
Tatooine isA Planet
Luke isChildOf Anakin

Each element of the triple is referenced by URI. So in actual fact, the first triple expressed in RDF might look like:

http://gfa.ents/Luke_Skywalker http://gfa.preds/basic/isA http://gfa.ents/Class/Person

Using URIs like this means that different people can generate data about the same thing, from different locations, but can still cross-reference their federated information. The tricky part is agreeing on standards - but just as with WWW URLs, it's possible to dereference, apply routing, aliasing etc. where necessary to map from one space to another.
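
To make that concrete, here's how the toy graph might be built with rdflib, a Python library for RDF (the gfa URIs are, of course, fictional):

# Sketch using rdflib (pip install rdflib); the gfa.* URIs are fictional.
from rdflib import Graph, Namespace

ENT = Namespace("http://gfa.ents/")
PRED = Namespace("http://gfa.preds/basic/")

g = Graph()
g.add((ENT.Luke, PRED.isA, ENT.Person))
g.add((ENT.R2D2, PRED.isA, ENT.Droid))
g.add((ENT.Vader, PRED.isA, ENT.Person))
g.add((ENT.Tatooine, PRED.isA, ENT.Planet))
g.add((ENT.Luke, PRED.isChildOf, ENT.Anakin))

for s, p, o in g:
    print(s, p, o)  # each triple as three full URIs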

In addition to these factual statements, you can also define something called an ontology (itself expressed in RDF terms) in a language called the Web Ontology Language (OWL). As well as describing the types of relationship you'd expect a particular entity to have in a particular context, an ontology allows you to describe logical constraints and connections between objects, such as:

isChildOf applicableTo Person
isChildOf NotApplicableTo Droid
isChildOf owl:inverseOf isParentOf

Applying an ontology can help identify inconsistent additions to the data, such as:

R2D2 isChildOf JarJar

which would be identified as incorrect, or at the very least inconsistent, when held up against the combination of this data-set and this particular ontology.

Also interesting is that, by applying the ontology to a dataset, you can generate a whole new set of triples based on logical inferences at the intersection of fact and logic.

So with the examples, the ontology above and some additional information like:

Anakin isSameAs Vader

You can generate the inference that:

Vader isParentOf Luke

This means you can not only classify truthiness or untruthiness, but also discover new truths for which no explicit data already existed!
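
That inference step can be sketched with rdflib plus the owlrl reasoner (pip install owlrl); again, the gfa URIs are fictional:

# Sketch: materialise OWL inferences over the toy graph.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL
from owlrl import DeductiveClosure, OWLRL_Semantics

ENT = Namespace("http://gfa.ents/")
PRED = Namespace("http://gfa.preds/basic/")

g = Graph()
g.add((ENT.Luke, PRED.isChildOf, ENT.Anakin))
g.add((PRED.isChildOf, OWL.inverseOf, PRED.isParentOf))
g.add((ENT.Anakin, OWL.sameAs, ENT.Vader))

DeductiveClosure(OWLRL_Semantics).expand(g)  # add inferred triples

print((ENT.Vader, PRED.isParentOf, ENT.Luke) in g)  # True: new truth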

I'll post some links, but it is very much a thing, and a fascinating one to boot.
-- zen_tom, Jul 06 2016


That's just excellent! Thanks!
-- nineteenthly, Jul 06 2016


You're very welcome - the standard for querying such data-stores is called SPARQL (pronounced Sparkle) - other options include Tinkerpop, Gremlin and Giraph.
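
For example, via rdflib again (toy graph, fictional URIs):

# Sketch: a SPARQL query over a small rdflib graph.
from rdflib import Graph, Namespace

ENT = Namespace("http://gfa.ents/")
PRED = Namespace("http://gfa.preds/basic/")

g = Graph()
g.add((ENT.Vader, PRED.isParentOf, ENT.Luke))

results = g.query("""
    PREFIX pred: <http://gfa.preds/basic/>
    SELECT ?parent ?child
    WHERE { ?parent pred:isParentOf ?child . }
""")
for row in results:
    print(row.parent, row.child)  # http://gfa.ents/Vader ...Luke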

This can be quite entertaining as uninitiated members from outside the team are exposed to all these seemingly whimsical terms in the context of apparently hard-nosed delivery conversations.
-- zen_tom, Jul 07 2016


Can I just point out that a Web Ontology Language would be a WOL, not an OWL.
-- MaxwellBuchanan, Jul 07 2016


[zen_tom], what's gfa? And your description reminds me strongly of Evi (formerly True Knowledge), which, back when it was called True Knowledge, did that in a way that let anyone contribute entities and facts.

I would also like to point out:

- Microformats: A way to make data on normal web pages semantic

- Semantic MediaWiki: A set of extensions to MediaWiki (the software that runs Wikipedia and loads of other wikis) that make the wiki's content database-driven and computable
-- notexactly, Jul 19 2016


[notexactly], gfa is the www equivalent they use all the way over there in a Galaxy Far (Far) Away; its usage is now obsolete (on account of it having been a long time ago).

The relevance is less as something that gets embedded within HTML documents (which is how many of these ideas were first couched), and more to do with sourcing data from open data portals.

If you're a large institution and you're fed up of writing reports that everyone ends up hating, a nice way of sorting out the mess is to open up your data via a SPARQL endpoint, and let people query the hell out of it in their own time.

Using a standard specification like RDF means less time fiddling about with defining schemas and upsetting people when you decide to change your attribute naming convention from underscore_spaces to CamelCase - RDF doesn't care because they've already specified that you use the most horrendous naming convention ever, the URI.
-- zen_tom, Jul 19 2016


"Hitler was evil" aaaaaand go!
-- Voice, Jul 19 2016


I truly disagree.
-- pashute, Jul 01 2020


