itaibn · 114 karma · Joined Feb 2016
I reached this article through a link that had already revealed it was about self-care, but I didn't notice the "self-care" in the title. I expected the rhetoric to be a bait-and-switch: starting by arguing that aiming for the minimum in directly impact-related things is bad, then arguing that the same reasoning applies to self-care.

Answer by itaibn · Jan 08, 2021

Gwern argues here against supporting the American revolution.

I'm glad you agree! For the sake of controversy, I'll add that I'm not entirely sure that scenario is out of consideration from an EV point of view: first, because the exhaust will carry a lot of energy and I'm not sure what will happen to it; and second, because I'm open to a "diminishing returns" model of population ethics in which the forgone computational capacity does not have overwhelmingly higher value.
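One toy way to make the "diminishing returns" point concrete (my own illustration, using an arbitrary logarithmic functional form, not something from the thread): under a linear model of population ethics, losing half of the reachable computational capacity loses half the value, whereas under a concave model the loss is only a small constant.

```python
import math

def linear_value(capacity):
    # Total-view model: value scales linearly with computational capacity.
    return capacity

def diminishing_value(capacity):
    # Diminishing-returns model: value grows only logarithmically.
    return math.log(capacity)

# Halving captured capacity halves the value under the linear model,
# but costs only a constant log(2) under the logarithmic model.
loss_linear = linear_value(1e6) - linear_value(5e5)
loss_log = diminishing_value(1e6) - diminishing_value(5e5)
```

Under the log model, whether the exhaust energy is captured or lost barely changes the total, which is why the scenario isn't obviously ruled out on EV grounds.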

On singletons, I think the distinction between "single agent" and "multiple agents" is more of a difference in how we imagine a system than an actual difference. Human civilization is divided into minds with a high level of communication and coordination within each mind and a significantly lower level between minds. This pattern is an accident of evolutionary history and if technological progress continues I doubt it will remain in the distant future, but I also don't think there will be perfect communication and coordination between the parts of a future civilization either. Even within a single human mind the communication and coordination is imperfect.

I guess. I don't like the concept of a singleton. I prefer to think that by describing a specific failure mode, this gives a more precise model of exactly what kind of coordination is needed to prevent it. Also, we definitely shouldn't assume a coordinated colonization will follow the Armstrong-Sandberg method. I'm also motivated by a "lamppost approach" to prediction: this model of the future has a lot of details that I think could be worked out to a great deal of mathematical precision, which makes it a good case study. Finally, if the necessary kind of coordination is rare, then even if it's not worth it from an EV view to plan for our civilization to end up like this, we should still anticipate alien civilizations looking like this.

  1. It's true that making use of resources while matching the probe's speed requires a huge expenditure of energy, by the transformation law of energy-momentum if for no other reason. If the remaining energy is insufficient then the probe won't be able to go any faster. Even if there's no more efficient way to extract resources than full deceleration/re-acceleration I expect this could be done infrequently enough that the probe still maintains an average speed of >0.9c. In that case the main competitive pressure among probes would be minimizing the number of stop-overs.
  2. The highest speed considered in the Armstrong/Sandberg paper is 0.99c, which is high enough for my qualitative picture to be relevant. Re-skimming the paper, I don't see an explicitly stated reason why the limit is there, although I note that any higher speed won't affect their conclusion about the Fermi paradox and potential past colonizers visible from Earth. The most significant technological reasons for this limit I see them address are the energy costs of deceleration and damage from collisions with dust particles, and neither seems to entirely exclude faster speeds.
  3. Yes, at such high speeds optimizing lateral motion becomes very important and the locations of concentrated sources of energy can affect the geometry of the expansion frontier. For a typical target I'm not sure if the optimal route would involve swerving to a star or galaxy or whether the interstellar dust and dark matter in the direct path would be sufficient. For any particular route I expect a probe to compete with other probes taking a similar route so there will still be competitive pressure to optimize speed over 0.99c if technologically feasible.
  4. A lot of what I'm saying remains the same if the maximal technologically achievable speed is subrelativistic. In other ways such a picture would be different, and in particular the coordination problems would be substantially easier if there is time for substantial two-way communication between all the probes and all the colonized areas.
  5. Again, I see a lot of potential follow-up work in precisely delineating how different assumptions on what is technologically possible affect my picture.
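To make the energy claim in point 1 concrete, here is a minimal back-of-the-envelope sketch (my own illustration; the 0.99c figure comes from the discussion above, everything else is textbook special relativity):

```python
import math

C = 299_792_458.0  # speed of light in m/s

def lorentz_factor(beta):
    """Lorentz factor gamma for speed v = beta * c."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

def kinetic_energy_per_kg(beta):
    """Relativistic kinetic energy per kilogram of rest mass, in joules:
    KE = (gamma - 1) * c**2."""
    return (lorentz_factor(beta) - 1.0) * C ** 2

# At 0.99c, gamma is about 7.09, so a full stop-and-restart cycle costs at
# least 2 * (gamma - 1), roughly 12 times the probe's rest-mass energy, even
# with perfectly efficient engines (real propellant budgets are far worse).
gamma = lorentz_factor(0.99)
stopover_cost_per_kg = 2.0 * kinetic_energy_per_kg(0.99)  # joules per kg
```

This is why minimizing the number of stop-overs dominates: each one costs an order of magnitude more energy than the probe's own mass-energy.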
> historical cases are earlier than would be relevant directly

Practically all previous pandemics were far enough back in history that their applicability is unclear. I think it's unfair to discount your example because of that, because every other positive or negative example can be discounted the same way.

I've just examined the two Wikipedia articles you link to, and I don't think this is an independent discovery. The race between Einstein and Hilbert was for finding the Einstein field equations, which put general relativity in a finalized form. However, the original impetus for developing general relativity was Einstein's proposed equivalence principle in 1907, and in 1913 he and Grossmann published the proposal that it would involve spacetime being curved (with a pseudo-Riemannian metric). Certainly after 1913 general relativity was inevitable, and perhaps it was inevitable after 1907; but it still all depended on Einstein's first ideas.

That's a far cry from saying that these ideas wouldn't have been discovered until the 1970s, a claim I'm basing mainly on hearsay and which I confess is much more dubious.

Answer by itaibn · May 16, 2019

I don't recall the source, but I remember hearing from a physicist that if Einstein hadn't discovered the theory of special relativity, it would surely have been discovered by other scientists at the time, but if he hadn't discovered the theory of general relativity, it wouldn't have been discovered until the 1970s. More specifically, general relativity has an approximation known as linearized gravity, which suffices to explain most of the experimental anomalies of Newtonian gravity but doesn't contain the concept that spacetime is curved, and that could have been discovered instead.
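For readers unfamiliar with it, linearized gravity treats the metric as flat spacetime plus a small perturbation (this is standard textbook material, independent of the hearsay claim above):

```latex
g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}, \qquad |h_{\mu\nu}| \ll 1 .
```

In the Lorenz gauge, the field equations reduce to a wave equation on flat spacetime sourced by stress-energy,

```latex
\Box \bar{h}_{\mu\nu} = -\frac{16\pi G}{c^4} T_{\mu\nu},
\qquad \bar{h}_{\mu\nu} = h_{\mu\nu} - \tfrac{1}{2}\eta_{\mu\nu} h ,
```

which can be formulated and solved without ever treating curvature as a fundamental concept.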

I'm puzzled by Mallatt's response to the last question, about consciousness in computer systems. It appears to me that he and Feinberg are applying a double standard when judging the consciousness of computer programs. I don't know what he has in mind when he talks about the enormous complexity of consciousness, but based on other parts of the interview we can see some of the diagnostic criteria Mallatt uses to judge consciousness in practice. These include behavioral tests, such as going back to places an animal saw food before, tending wounds, and hiding when injured, as well as structural tests, such as multiple levels of intermediate processing between sensory input and motor output. Existing AIs already pass the structural test I listed, and I believe they could pass the behavioral tests given a simple virtual environment and reward function. I don't see a principled way of including the simplest types of animal consciousness while excluding any form of computer consciousness.

On the second paragraph, making your point succinctly is a valuable skill that is also important for anti-debates. A key part of this skill is understanding which parts of your argument are crucial for your conclusion and which merit less attention. The bias towards quick arguments and the bandwagon effect also exist in natural conversation and I'm not sure if it's any worse in competitive debating. I have little experience with competitive debating so I cannot make the comparison and am just arguing from how this should work in principle.

On the other hand, in natural conversation you want to minimize use of both the audience's time and its cognitive resources, whereas competitive debate weights minimizing time more heavily, which distorts how people learn succinctness from it. Also, the time constraint in competitive debate might be much more severe than the mental-resource constraint in the most productive natural conversations, so many important skills that are only applied in long-form conversation are not practiced at all.
