I was recently forced to shuffle around material on my many bookshelves (it's turned into an apartment-wide game of Tetris) and I came across a book I had completely forgotten about. When I was younger, it completely changed my outlook on the world.
The Thinker's Toolkit: 14 Powerful Techniques for Problem Solving
How I came to own it is a happy story, at least for me. At UofT, one of the mandatory second-year courses was something called Analysis for Decision Making and Control, which was really a course about problem solving (it's now estimated that 3.5% of all tuition fees go to the committee for the invention of pretentious course names).
The typical textbook for courses in this program cost $120-150, and you usually had to buy four or five a year. This one was quite a shock because it was the only book for the course and you could get it for less than $25. You could even get it at Indigo (Amazon had not yet taken over the book world - but never mind that; I'm giving away how old I am).
The book, written by a CIA analyst, discusses different techniques for... well... solving problems. While those were interesting, I'm a hardcore math nerd, and systematic ways of breaking down problems were nothing new to me.
The revelation came when the book digressed into evaluating explanations. Think of a murder mystery: five suspects, a pile of clues (evidence), and you're trying to reason out whodunnit. It's no surprise to learn that most people have no idea how to evaluate evidence properly.
Look at the evidence...
...comes up often in discussion. Whether it's psychic phenomena, ghosts, conspiracy theories, or theism - there are lots of people who hold some very silly and untenable beliefs. And most people would claim they hold these beliefs because of some sort of evidence. But if your tools for evaluating evidence are rubbish, then where does that get you?
Here's the method in brief. It's misleading to think of evidence which "supports" a hypothesis. I'll explain why in a moment. It's better to separate the evidence into two categories: consistent and inconsistent.
Inconsistent means it's implausible [1] for both your hypothesis to be true and this piece of evidence to exist. For example, if you can't find your iPod and you think your son borrowed it, but your son has been away at university for the past four months, then the hypothesis and the evidence can't plausibly both hold: the evidence is inconsistent. Consistent evidence is simply the opposite.
What's crucial is that one piece of evidence can be consistent with multiple hypotheses. If you see something strange in the sky at night, it could be a UFO, a weather balloon, a plane, or Iron Man. The evidence (you saw something strange) is consistent with multiple interpretations and doesn't push you towards any one of them. For this reason, when determining which explanation is the "best", consistent evidence doesn't count. You can all but throw it out.
The most likely explanation is not the one with the most consistent evidence. That is probably the most counterintuitive notion in problem solving. Instead, the most likely explanation is the one with the least inconsistent evidence. You don't pick the one with the most support; you pick the one with the fewest problems. [2]
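The scoring rule above is simple enough to sketch in a few lines of Python. This is my own toy illustration, not anything from the book: the suspects, the clues, and the consistency judgements are all made up. Each piece of evidence gets marked consistent ("C") or inconsistent ("I") with each hypothesis, and the hypothesis with the fewest "I" marks wins.

```python
# Toy example: rank murder-mystery suspects by counting how much
# evidence is *inconsistent* with each one. All data is invented.

# For each clue, a judgement per hypothesis:
# "C" = consistent, "I" = inconsistent.
evidence_matrix = {
    "muddy footprints": {"butler": "C", "gardener": "C", "chef": "I"},
    "alibi for 9pm":    {"butler": "I", "gardener": "C", "chef": "C"},
    "knew the victim":  {"butler": "C", "gardener": "C", "chef": "C"},
}

def rank_hypotheses(matrix):
    """Count inconsistencies per hypothesis; fewest wins."""
    counts = {}
    for judgements in matrix.values():
        for hypothesis, verdict in judgements.items():
            counts.setdefault(hypothesis, 0)
            if verdict == "I":
                counts[hypothesis] += 1
    return sorted(counts.items(), key=lambda kv: kv[1])

print(rank_hypotheses(evidence_matrix))
# The gardener has zero inconsistencies, so they come out on top.
```

Note that "knew the victim" is consistent with every suspect, so it changes nothing in the ranking, which is exactly why consistent evidence can all but be thrown out.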
You can (and should) take this a step further and actually go looking for inconsistent evidence. Think "what would prove me wrong?" and see if you can find it. Naturally, no one likes to do this. Who could be eager to be wrong? Apparently Lawrence Krauss:
The two most exciting states to be in are confused and wrong. Because then you know there's a chance you might learn something.
That has to be one of the great mind-opening experiences of my life (although I've never done drugs, so I don't know what I'm missing). It was so powerful that I can explain all of this from memory; I didn't even need to crack open the book again ten years later to remember what it said. I'm also fortunate to have come across it at the right point in my life, where I could actually make use of it.
Looking back, the book also makes a great primer for magic. All the little pitfalls and traps that people are prone to stumble into when solving a problem (like, say, trying to figure out how a piece of magic works) seem to be universal, and understanding them makes it even easier to lead people astray and perform some really incredible things.
[1] If you want to work out with some rigour what "plausible" means, you'd need to do some kind of Bayesian analysis. The best resource for this I've come across is Proving History by Richard Carrier.
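As a rough sketch of what that Bayesian treatment looks like, here is the iPod example from earlier run through Bayes' theorem. The numbers are entirely invented for illustration; the point is only the shape of the calculation.

```python
# Toy Bayesian "plausibility" check, with made-up numbers.
# H = "my son borrowed the iPod"
# E = "he's been away at university for four months"
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)

p_h = 0.5              # prior belief in the hypothesis
p_e_given_h = 0.01     # the evidence is very unlikely if he borrowed it
p_e_given_not_h = 0.6  # but quite likely if he didn't

# Total probability of the evidence (law of total probability).
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

posterior = p_e_given_h * p_h / p_e
print(round(posterior, 3))
# The posterior collapses to under 2%: the evidence is "inconsistent"
# with the hypothesis in exactly the sense used above.
```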
[2] An even cooler explanation of this concept is given by Richard Feynman in the Messenger Lectures, generously made available for free online by Bill Gates. In particular, see the final lecture, "Seeking New Laws".