More deep thoughts from early in the morning...
I've come to the conclusion that all computers are by definition sentient, as I'd define sentience as the ability to take physical inputs and create abstractions. That doesn't necessarily mean it's immoral to destroy a computer, of course, as sentience comes in varying degrees, depending on the range of abstractions someone or something is capable of creating. It does mean, however, that software is not sentient: software, being an abstract entity rather than a physical one, cannot receive physical inputs. Software can, though, be used to increase the sentience of hardware...
I wonder, then... what about distributed computing? Obviously it is immoral in most cases to harm an entity meeting some threshold of sentience - e.g. a human being. Has the Internet as a whole yet (yes, I say yet - it will happen if it has not already) reached a human level of sentience? Will this in turn mean that writing computer viruses will be treated in a similar manner to developing biological weapons? I think it will, even if my definition of sentience is not the standard one... as computers become more powerful, they will play a greater and greater role in people's lives, and so harming a computer will cause greater indirect harm to a human being than ever before. (In the extreme case, imagine humanity divided into two "races" or "factions", one which uses cybernetic implants and one which does not. Now imagine the latter faction developing a virus which infects the former faction's cybernetic components, causing fatal malfunctions, but which has no biological component at all. Is that genocidal computer virus not a wolf in sheep's clothing?)