TL;DR - This paper has a major flaw
This paper's calculation of the 10 bits/s rate rests on a flawed premise: "In fact, the entropy of English is only 1 bit per character". That figure is used throughout the paper.
The source it cites for the 1 bit per character figure measures something else: how predictable the next letter is when the preceding text is known. E.g., if I type "The quick brown fox jump", your knowledge of English tells you the next letter is almost certainly "s".
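To see how far 1 bit/char sits from the context-free case, here's a quick back-of-the-envelope sketch (mine, not the paper's) that computes the zero-order entropy of English from approximate published single-letter frequencies (Lewand's rounded table). The exact numbers don't matter; what matters is that the context-free figure lands around 4 bits, not 1:

```python
import math

# Approximate English letter frequencies in percent (Lewand's rounded
# table); used here only for a rough context-free entropy estimate.
freq = {
    'e': 12.70, 't': 9.06, 'a': 8.17, 'o': 7.51, 'i': 6.97, 'n': 6.75,
    's': 6.33, 'h': 6.09, 'r': 5.99, 'd': 4.25, 'l': 4.03, 'c': 2.78,
    'u': 2.76, 'm': 2.41, 'w': 2.36, 'f': 2.23, 'g': 2.02, 'y': 1.97,
    'p': 1.93, 'b': 1.49, 'v': 0.98, 'k': 0.77, 'j': 0.15, 'x': 0.15,
    'q': 0.10, 'z': 0.07,
}

total = sum(freq.values())
probs = [v / total for v in freq.values()]  # normalize to probabilities

# Zero-order (context-free) entropy: H = -sum(p * log2(p))
h0 = -sum(p * math.log2(p) for p in probs)
print(f"zero-order entropy:  {h0:.2f} bits/char")          # ~4.18
print(f"uniform over 26:     {math.log2(26):.2f} bits/char")  # ~4.70
```

Shannon's ~1 bit/char only emerges once a reader's full predictive model of English is brought to bear on the preceding text; strip that context away and you're back near 4 bits for letters alone.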
That is far different from choosing among a character set with no context. Granted, we don't need to express text in UTF-8 or even 8-bit ASCII. Seven bits per character covers upper and lower case letters, digits, and common symbols, and we could get down to 6 bits by dropping case sensitivity. Further encoding schemes could compress this more.
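A minimal sketch of that arithmetic, assuming fixed-width encodings (the set sizes are illustrative, not taken from the paper):

```python
import math

# Fixed-width bits needed to index a symbol set of a given size.
for label, size in [
    ("printable ASCII (mixed case, digits, symbols)", 95),
    ("case-insensitive 64-symbol set", 64),
    ("lowercase letters only", 26),
]:
    bits = math.ceil(math.log2(size))
    print(f"{label}: {size} symbols -> {bits} bits/char")
```

Variable-length codes (e.g., Huffman coding on letter frequencies) would shave this toward the ~4.2 bits above, but no context-free code can go below the zero-order entropy.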
But we'd never get to 1 bit per character.
So a conservative estimate of this expressiveness should be higher than 10 bits/s. The paper's figure implies roughly 10 characters per second (10 bits/s at 1 bit per character); re-price those same characters at 6 bits each and you get at least 60 bits/s.
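A one-liner version of that arithmetic, where the ~10 characters/second is an assumption derived from the paper's own numbers rather than a measured value:

```python
# Assumption: 10 bits/s at the paper's claimed 1 bit/char implies
# roughly 10 characters per second of output.
chars_per_sec = 10 / 1.0

# Re-pricing the same character stream at 6 bits/char (64-symbol set):
print(chars_per_sec * 6)  # 60.0 bits/s -- the conservative floor argued above
```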
That's still slow, but the figure is constrained by arbitrary limitations. For starters, it doesn't account for the fact that computers (and people) have multiple input channels, each with its own bandwidth.
Right now you can see this text, smell the area, and hear the environment, all at once. When you convey ideas, sometimes it's one-on-one; other times it's broadcast to many people at once, delegating or informing tasks that those people then carry out in parallel. In essence, we use assorted enrichment methods as force multipliers that raise effective communication rates well above even 60 bits/s.