
Notes | Tutorial 12 | Yes/No 2/2

Date: 2020-07-10

Notes from 8th July

  • persistent error in prev epistemology: ‘amounts of goodness’
    • “degrees”
    • “for/against” arguments positive adds, negative subtracts
  • vagueness: main issue: when the problem is vague
    • insufficient to decide yes/no
  • alt to degree arguments: decisive arguments (they have priority)
    • only negative
    • some ppl think decisive arguments contradict fallibility (only positive decisive arguments tho, not negative)
  • example of decisive args: contradicts {evidence,itself,arithmetic}

we can always come up with decisive arguments?

  • correctness vs usefulness vs truth
    • relative to goal!

getting decisive arguments

  1. better goals (clear, precise)

premature work on arguments is wasted if we don’t have good goals

focus - too many things at once?

TOC (Theory of Constraints) - chain w/ 50+ links; 48 have excess capacity / buffer so are easy.

primary goals: low number, specific and picky; aiming for better than pass

secondary goals: low focus, not bottlenecks; just need to pass

limit # of bottlenecks

too many things => unstable (balanced plant)

subproblems -> only some factors

how do you get excess capacity with binary factors? must be reasonably easy in context of one’s life

divide goals into many factors with binary evaluation

analogue factors

ways to convert analogue factors to binary (all reasonable ones are valid – and they should be)


pet: >$30 is too much -> binary factor; even if lower => better by degree
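The pet-cost example can be sketched as a breakpoint conversion. The $30 figure is from the notes; the function name and the idea that cost is per purchase are illustrative assumptions:

```python
# Convert an analogue factor (pet cost) into a binary factor using a
# breakpoint. Below the breakpoint, cheaper is still "more better" by
# degree, but the yes/no evaluation only cares about pass/fail.
def pet_cost_ok(cost: float, breakpoint_: float = 30.0) -> bool:
    """Pass (True) iff the cost is at or under the breakpoint."""
    return cost <= breakpoint_

print(pet_cost_ok(25.0))  # True: passes, even though $20 would be cheaper
print(pet_cost_ok(45.0))  # False: over the breakpoint, fails regardless of degree
```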

discontinuity; notable difference at a point, a jump / cusp

reasons certain values are important: we cannot think about infinite things, but continuums have infinite points; wasteful to think about early/mid/late too specifically


  • look at every integer value; pseudo breakpoints; not important conceptually but is easier to think about / less needed in head – better than nothing, arbitrary breakpoints, space them evenly or appropriately (could space linearly on inputs)
    • can avoid being misled by too much precision, which obscures details
    • percentages imply continuous
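The evenly spaced pseudo-breakpoints idea can be sketched as bucketing: coarsen a continuum into a few labelled ranges instead of reasoning about every point. The 0–100 range and the early/mid/late labels follow the notes; the bucket count is an illustrative assumption:

```python
# Map a continuous value onto a small number of evenly spaced
# pseudo-breakpoint buckets - arbitrary, but better than nothing.
def bucket(value: float, low: float, high: float, n_buckets: int) -> int:
    """Return a bucket index 0..n_buckets-1 for value clamped to [low, high]."""
    value = max(low, min(high, value))
    if value == high:
        return n_buckets - 1
    width = (high - low) / n_buckets
    return int((value - low) // width)

# e.g. judge progress 0-100 as early/mid/late rather than an exact percentage
labels = ["early", "mid", "late"]
print(labels[bucket(10, 0, 100, 3)])  # early
print(labels[bucket(55, 0, 100, 3)])  # mid
```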

maximisation problems get to binary (q: re optimisation)

maximising many factors with conversion factors vs. compound goals -> don’t convert to a single score

degree epist -> converts ALL factors to one score

binary epist -> can have orthogonal factors -> never convert

conversion factors are hard: easy to come up with factors, hard to come up with a score; money and time often easier to agree on conversion factors -> policy in democracy

compound goals avoids need for conversion (can avoid putting a price on some things, for example)

compound goals

might need to be less picky, e.g. 4 of 5 of these goals
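The “4 of 5 of these goals” idea can be sketched directly: a compound goal over binary criteria, with pickiness relaxed to a pass count. The criteria list is a placeholder:

```python
# Compound goal with reduced pickiness: pass iff at least `required`
# of the binary criteria pass. No conversion to a single score is
# needed - each criterion stays an orthogonal yes/no factor.
def compound_goal_met(criteria: list[bool], required: int) -> bool:
    return sum(criteria) >= required

print(compound_goal_met([True, True, True, True, False], 4))   # True: 4 of 5
print(compound_goal_met([True, True, False, True, False], 4))  # False: only 3
```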

many solns

  • pick any, it’s fine
  • consider more goal criteria, more ambitious compound goal
    • could inspire better solns

similar to cycle of conjecture and crit

in general we calibrate goals to abilities

no solns

  • brainstorm more solns
    • brainstorm for smaller sets of criteria
  • brainstorm workarounds for particular criteria / goals
  • be less picky

(when brainstorming) direct/indirect soln:

  • direct answers problem
  • indirect does something else / changes conditions / criteria / workarounds

Notes from 10th July

yes/no starts at about (0:30:00)

converting via yes/no qs - different to breakpoints

‘will this dog fit in my car?’ -> related to breakpoint on size

‘will my friends like it?’ -> no clear breakpoint but good q to ask, still


like TOC: excess capacity for most goals means we don’t have the ‘balanced factory’, so those goals become easy to handle. then we can focus on key goals

decisive arguments have priority over degree, there are some techniques to always make enough decisive arguments. no need for degree

avoiding coercion (1:07:00)

we have time limits/resources

ideas A,B contradict, real implications

how to act when we can’t tell if A/B is right?

-> “given I don’t know the answer to A vs B, what should I do right now?”

the conflict -> is a given both sides accept

persuasion is fine -> consent

reasonable from both A and B side

come up with something new:

  • side with the option with low side effects, maybe B has not many bad side effects if wrong, so could test
  • find a new idea that’s low risk
  • can stall for time, like prevent problem becoming worse but not solve it yet
  • maybe a full solution (new idea)

what if you fail coming up with new option?

We Can Always Act on Non-Criticized Ideas (1:22:00)

disagreeing ideas -> if they can’t guide you to an answer / or q to ask to get answer -> none are good enough (pending some improvement) -> new ideas required

converting pos args to neg (1:27:00)

pos args say something good, neg -> crit rivals for lacking merit

point of args is to differentiate ideas

negative arguments more rigorous

positive degree arguments claim to have reach -> error prone

binary - converting 99min (1:39:00)

thinking in more binary way means less need for conversions

breakpoints - many->one mappings

move judgments from solns -> goals / decision charts (103m) (1:43:00)

how to do that judgement well

  • decision chart
    • yes/no procedure
  • recurse the problem - no limit
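A decision chart built from yes/no questions can be sketched as a small tree walk. The questions and verdicts here are illustrative placeholders, not from the tutorial:

```python
# A decision chart as nested yes/no questions. A node is either a
# verdict (plain str) or a (question, yes_branch, no_branch) tuple.
chart = (
    "does it contradict the evidence?",
    "reject",
    ("does it meet every primary goal?", "accept", "revise goals or idea"),
)

def decide(node, answers: dict) -> str:
    """Walk the chart using a dict mapping question -> bool answer."""
    while not isinstance(node, str):
        question, yes_branch, no_branch = node
        node = yes_branch if answers[question] else no_branch
    return node

print(decide(chart, {"does it contradict the evidence?": False,
                     "does it meet every primary goal?": True}))  # accept
```

Recursing the problem corresponds to replacing any verdict leaf with a further sub-chart; there’s no limit to how deep that can go.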

look for decisive factors -> which issues to be picky about

  • key downsides

look for things to eliminate, (many ideas you need to test -> test first the one that’s easiest to eliminate if false)

infinite recursion -> need techniques to resolve critical arguments back and forth -> trivial to continue chain in bad faith

margins of errors breakpoints (2:06:00)

continuums: harder to find breakpoints; mostly can’t find the exact breakpoint, e.g. 100 ± 20; qualitative change around there

use solns not in grey area

if no solns outside grey -> ask questions via techniques above to reduce error bars, but mostly we don’t need to do this.

don’t optimise unless important

Libraries of crit / patterns of error (113 min) (1:53:00)

  • criticisms with reach, generality (save in library)
    • find patterns - do not refute one by one
      • refute categories of ideas; match certain patterns
      • e.g. hard-to-vary
      • is there always a criticism with reach for any bad faith arg? -> coming up with new ways of doing bad faith is a hard problem -> bad faith actors aren’t great at solving problems
  • everything is either
    • already covered
    • or worth considering
  • bigger library -> more things covered
    • new things become more special

big picture (early in tutorial)

system for organising thinking; still a need for judgement and creativity

  • erroneously applying “judge ideas on content”

  • bad political idea: “flip-flopping” / punish politicians for changing their mind
  • moving goals are okay! Just be explicit
