Will AI dolls for young children interfere with their understanding of the
difference between people and things?
I am not convinced, a priori, that this will do harm. Perhaps it
will have a subtle effect that is only statistically detectable.
However, the crucial question is not whether AI dolls, inherently,
are likely to confuse children. It is, rather, whether AI dolls could be
designed to manipulate children in a particular way: to make them
more susceptible to addictive technology.
Because, if there is a way to do that, companies will find it.
Companies such as Facebook are searching madly for ways to make their
technology more addictive. They have lots of money to invest in
research. If a certain kind of robot doll could predispose people to
be more addicted to Facebook, Facebook is likely to discover that.
Then it might push those dolls on children under some pretext —
perhaps "They are educational", or perhaps "They make up for the lack
of teachers in our austerity-hit schools". Facebook-funded research
could substantiate these claims.
Perhaps some other kind of AI doll might be entirely harmless, or even
beneficial, but that's not the direction that Facebook et al. would find
profitable to promote.