What if you tell the robots that the loss of the workforce would endanger humans and get their “Human Safe” stuff into action?
That … could turn very bad. There's a tremendous loss of free will where human safety is concerned.
The goal becomes: save humans first, property damage and self-preservation second. The fastest way to stop the program from being distributed is to take out the planetary communication system, which could also harm humans, which could in turn lead to robots taking out the robots trying to disable the communication system.
Now toss direct human orders into the mix, when you have millions of robots trying to enact a solution under safety protocols and not thinking clearly.
There wouldn't be a single person on the planet keeping an eye on me. If it weren't so messy, that could be fun.