tl;dr: Yes, consciousness in AI is possible in principle. Let's not repeat the mistake of treating a being as an object before we know whether it deserves better.
Epistemic state: my own thoughts, written down without having looked much into others' work on this topic.
Due to recent advances in artificial intelligence (AI), it is becoming increasingly relevant to discuss whether AI merits moral concern, and if so, which types of AI. By "moral concern" for some entity I mean that it is ethically relevant to care about the quality of its subjective experience. One important precondition for anything being of moral concern is whether it can have subjective experience in the first place, or in other words, whether it is conscious. That is, something can only be included in the moral circle − can only become someone − if it has any ability to experience, especially to suffer or to be happy.
In this post, I want to clear up a question preliminary to this discussion: whether AIs can be conscious in principle, rather than whether the AIs we actually create are conscious or whether we should be concerned about their wellbeing. This needs to be answered first, because if AIs cannot be conscious at all, there would be no point in ever caring about their wellbeing.
I think the clearest way to look at it is the following. I can tell that I, as a human, have subjective experience; that is, I am conscious. In fact, a lot of people report this insight about themselves. Furthermore, the physical Church-Turing thesis, which is well supported by physics and the theory of computation, holds that any physical system is computable: anything that happens in physical reality can, in principle, be described as a mathematical function and replicated in a computer. Since humans are built of elementary particles and are therefore physical systems, consciousness is a computable process. Finally, since the space of possible AIs spans all possible computational processes, AIs can be conscious, too. The most obvious way to achieve this is to emulate a human body, though I suspect that there are easier ways to implement conscious AIs − and it is not impossible that this has already happened. Which physical processes should actually be classified as "conscious", and therefore might merit moral concern, is an open question. I hope answers to that question will be guided by neuroscience rather than drawn arbitrarily.
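To separate the logic of this argument from its premises, here is a toy formalization in Lean 4. The predicate names are mine, and everything load-bearing is taken as a hypothesis rather than proven; the sketch only certifies that the conclusion follows if the premises hold.

```lean
-- Toy formalization of the argument's skeleton. All premises are hypotheses;
-- the theorem only shows that the conclusion follows from them.
theorem conscious_ai_possible
    {System : Type}
    (Physical Computable Conscious PossibleAI : System → Prop)
    -- Premise 1 (physical Church-Turing thesis): physical systems are computable.
    (physical_computable : ∀ s, Physical s → Computable s)
    -- Premise 2: at least one physical system (a human) is conscious.
    (human : ∃ s, Physical s ∧ Conscious s)
    -- Premise 3: possible AIs span all computable processes.
    (computable_ai : ∀ s, Computable s → PossibleAI s) :
    ∃ s, PossibleAI s ∧ Conscious s :=
  let ⟨s, hphys, hcons⟩ := human
  ⟨s, computable_ai s (physical_computable s hphys), hcons⟩
```

The formalization makes plain where the real debate is: each of the three premises can be questioned, but given all three, the conclusion is not optional.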
You might question whether consciousness is indeed a physical process. If consciousness is non-physical (sometimes also called transcendental), it cannot interact with the physical world, since such an interaction would violate well-established physical laws, like energy conservation. In particular, it could not cause our introspective reports about it, so it would not really be accessible to introspection and would instead be an abstract property that can be assigned arbitrarily. This allows one to define which entities are included in the moral circle simply by claiming that consciousness is or is not associated with them. However, this flexibility comes at a huge cost: it allows political leaders and societal narratives to define beings as outside the moral circle, thus treating them as objects to be traded and used. There are more than enough horrific examples in human history where large groups of beings were (or still are) excluded and abused by those in power. One way to avoid this while maintaining the view of consciousness as non-physical is panpsychism, which assigns some type or degree of consciousness to everything in the universe. This would then include AIs as well.
In the end, I would like to caution that the cost of accidentally excluding an actually conscious being from the moral circle is far higher than the cost of including an actually non-conscious thing. Thus, unless we are highly certain that a given entity is not conscious, it may be the right choice to be careful until we know more. Let us not repeat our mistakes.
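To put this asymmetry in expected-cost terms (the notation is mine and purely illustrative): let $p$ be the probability that an entity is conscious, $C_{\text{exclude}}$ the moral cost of treating a conscious being as an object, and $C_{\text{include}}$ the practical cost of extending care to a non-conscious thing. Excluding the entity then costs $p \cdot C_{\text{exclude}}$ in expectation, while including it costs $(1-p) \cdot C_{\text{include}}$, so exclusion is only justified when

$$p < \frac{C_{\text{include}}}{C_{\text{include}} + C_{\text{exclude}}},$$

which is a very small threshold whenever $C_{\text{exclude}} \gg C_{\text{include}}$. Under that assumption, caution wins even at low probabilities of consciousness.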