I'd say you're misrepresenting (or likely had it misrepresented to you by someone else) Kantian ethics. Kantian ethics, like it or not, basically boils down to two rules: never treat people merely as means rather than as ends in themselves, and act only in such a way that you'd be okay with your action being made a universal moral law. Any other rule, within the Kantian system, has to be justified by that framework.
Virtue ethics, in general, is less a theory of what is right or wrong than a personal guide to being virtuous. It's not intended to give moral rules; it says that if you cultivate certain virtues, you will tend to act in a way that furthers the good (the good being defined however the virtue ethicist likes). It's not necessarily a great system, and I'm not a virtue ethicist.
I'm not a utilitarian, or a consequentialist more generally, for three reasons: it requires more foreknowledge than we usually have; any action whatsoever can be justified given an extreme enough scenario; and the demands it makes on individuals are nearly infinite. Most ethical systems allow for supererogatory actions that go beyond the standards of baseline morality and are extra-good. In utilitarianism the standard is absolute and there's no way to exceed it, which, when I tried to practice utilitarianism, led me to believe that everything I did that did not actively contribute to others' wellbeing was evil. That last point is more of a scrupulosity problem on my part, but it is consistent with the standards of a consequentialist ethic.