Imagine you train an algorithm to influence voters in order to win an election. What would the objective be?
Maximize the probability that your candidate wins.
This does not imply winning the popular vote, nor maximizing the margin of the electoral vote.
There are results showing that machine learning models can be completely fooled by images that are modified so slightly that the human eye cannot perceive any difference. Have a look at the images in this paper on "adversarial examples": https://arxiv.org/abs/1412.6572
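The linked paper introduces the fast gradient sign method (FGSM): nudge each input feature by a tiny amount eps in the direction that increases the model's loss. As a minimal sketch (not the paper's setup), here is FGSM applied to a toy logistic-regression model; the weights, input, and eps value are made-up illustrations:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x, y, w, b):
    # Cross-entropy loss of a logistic-regression model on input x.
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(x, y, w, b, eps):
    # FGSM: step each feature by eps in the sign of the loss gradient.
    # For this model, d(loss)/dx = (p - y) * w.
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=8)        # toy "trained" weights (assumption)
b = 0.1
x = rng.normal(size=8)        # a clean input
y = 1.0                       # its true label
x_adv = fgsm(x, y, w, b, eps=0.1)

# Each feature moves by at most eps, yet the loss strictly increases.
print(np.max(np.abs(x_adv - x)))
print(loss(x, y, w, b), loss(x_adv, y, w, b))
```

The key point mirrors the paper's images: the perturbation is bounded elementwise by eps (imperceptible when x is pixel values), but it is chosen adversarially, so even this tiny step reliably degrades the model.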