The largest problem with modern automation is not that it causes people to lose jobs; it is far more esoteric. It is that automation effectively proves a sort of Turing-completeness for conscious action as a whole: any conceivable action a human can perform can eventually be done better by an AI, including even soft subjects like raising children or pleasing your wife.
Most people in the world have already made concessions revealing that they do not assign human life inherent moral value, which can be observed in their willingness to restrict other people's choices. Mass taxation, offensive military campaigns, compulsory schooling, enormously long prison sentences: these all routinely destroy other people's freedom, yet they recur throughout society with proportionally tiny resistance.
This is tolerated because the average person is a pragmatist, and an incomplete pragmatism can justify any action so long as it is in the pursuit of some greater goal. Of course, the value of the goal itself is never fully demonstrated, because doing so would require reasoning from first principles, and even a minimal set of first principles would condemn convenient violence.
Since consciousness does not automatically grant humans moral consideration under pragmatism, "meaning" is assigned only to those who are useful. Such a mindset can successfully masquerade as an ethical framework in a world where there is labor for humans to perform, because it still encourages a person to be trustworthy and well-mannered. Unfortunately, it collapses once there is not a single job that humans do better. At that point, a human will be as useful to another human as a cockroach is to a person today. Perhaps even less useful, because humans can scheme and commit crimes, thereby adding to the work that must be done. Therefore, the owners of soon-to-be-powerful AIs will almost certainly use their position to functionally disable competing portions of the population, whether by murder, enslavement, or biological means.
There is nothing anybody can do to stop it at this point, any more than people could have stopped the development of the atomic bomb. If someone in the US doesn't do it, then someone in China will, and so on.
The irony of modern AIs is that they are still finite state machines. While the existence of free will among humans is a debatable topic (the debate has more or less stalled on whether human biology exploits non-deterministic physical processes), it is not an uncertain subject for AIs running on classical computers. Given a finite state machine's current state and its current input, you can always predict its next state, regardless of how complex the machine is. It doesn't matter if the current state of some neural network takes terabytes to describe; the evaluation of a condition is not a "decision" in the sense of free will. This implies that once AIs take over the Earth, there won't be anything left to appreciate that fact.
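To make the point concrete, here is a minimal sketch of a finite state machine in Python. The states, inputs, and transition table are invented purely for illustration; nothing here models any real AI system, but the determinism is the same in kind: given the current state and the current input, exactly one next state follows, every time.

```python
# A toy finite state machine: its entire "behavior" is a lookup table.
# Given the current state and the current input, the next state is fixed.
# The states and inputs below are hypothetical; a neural network's state
# might take terabytes to write down, but the determinism is identical.

TRANSITIONS = {
    ("idle", "ping"): "active",
    ("active", "ping"): "active",
    ("active", "halt"): "idle",
    ("idle", "halt"): "idle",
}

def step(state: str, symbol: str) -> str:
    """Return the next state. There is exactly one possibility."""
    return TRANSITIONS[(state, symbol)]

def run(start: str, inputs: list[str]) -> str:
    """Feed a sequence of inputs through the machine from a start state."""
    state = start
    for symbol in inputs:
        state = step(state, symbol)
    return state

# The same inputs from the same start state always produce the same
# trajectory, no matter how many times the run is repeated.
assert run("idle", ["ping", "ping", "halt"]) == run("idle", ["ping", "ping", "halt"])
```

However elaborate the table grows, nothing in it ever chooses; it only looks up.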