A Google DeepMind researcher published a paper arguing that AI can never be conscious. Not a matter of time or scale. A matter of category.
The argument is that computation is a description of a process, not the process itself. For a physical system to count as "computing," a conscious agent has to first carve reality into symbols and assign them meaning. Without that agent, there are only voltage gradients. Not symbols. Not experience. Computation presupposes consciousness. It cannot be what produces it. The paper calls this mistake, treating the abstract description as if it were the physical thing described, the "Abstraction Fallacy."
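A toy illustration of that observer-relativity (my sketch, not an example from the paper): the very same 32 bits yield different "results" depending on which encoding a reader chooses to apply to them. The names and the constant here are invented for the demonstration.

```python
import struct

# One physical bit pattern. Intrinsically it is just 32 bits;
# nothing about it says "integer" or "float."
bits = 0x40490FDB

# Read under one interpretation: an unsigned 32-bit integer.
as_int = bits  # 1078530011

# Read under another: the same bits as an IEEE-754 single-precision float.
as_float = struct.unpack('<f', struct.pack('<I', bits))[0]  # ~3.1415927, i.e. pi

print(as_int, as_float)
```

Which "computation" the bits perform is settled only once an interpreter picks the encoding. That is the paper's point in miniature: the symbols live in the reader, not in the voltages.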
The analogy that makes it click: a GPU simulating photosynthesis can model every reaction perfectly. It will never produce a single molecule of glucose. Simulation is not instantiation.
The paper doesn't say artificial consciousness is impossible. It says that if a system were ever conscious, it would be because of its physical constitution, not because it ran the right algorithm. No amount of scaling changes that.
This comes from inside the house. Not a philosopher. A researcher at the lab building some of the most advanced AI on the planet, arguing that the entire framework connecting computation to consciousness is logically broken.

