I could draw some analogies to overly optimistic predictions from AI companies and the opposite from skeptics 😀
Say you develop a robot driver with 1/10th the fatal accident rate of human drivers; of course it (eventually) runs someone over and kills them.
What happens? Do we encourage deploying more of those bots, or do they get restricted?
Meanwhile, getting @-ed in random issues provides some occasional comic relief.
😱
You can train smaller ones and pray the improvements scale up too.
Not entirely sure why? Maybe the cost and time of training the networks?
Not particularly interesting from my perspective. There are quite a few of them, all slightly weaker than the original. I like designs that are intended to leapfrog - but people don't make videos about those until after they succeed 😉
I'm not kidding:
bash -c "$(wget -O - https://apt.llvm.org/llvm.sh)"
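If piping a script straight into bash makes you nervous, the rough equivalent (as I understand it, same script, just saved to disk so you can read it before running; the version argument is optional) would be something like:

wget https://apt.llvm.org/llvm.sh   # fetch the installer script from apt.llvm.org
less llvm.sh                        # inspect what you're about to run as root
chmod +x llvm.sh
sudo ./llvm.sh                      # should install a recent LLVM toolchain via apt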