Robert James M. Boyles
ABSTRACT: This paper further cashes out the notion that particular types of intelligent systems are susceptible to the is-ought problem, which holds that no evaluative conclusions may be inferred from factual premises alone. Specifically, it focuses on top-down artificial moral agents, providing ancillary support to the view that these kinds of artifacts are incapable of producing genuine moral judgements. Such is the case given that machines built via the classical programming approach are always composed of two parts, namely: a world model and a utility function. In principle, any attempt to bridge the gap between these two would fail, since their reconciliation necessitates the derivation of evaluative claims from factual premises.