It comes down to expectations. If AI is expected to replace humans, and hence cause massive deflation, then it may turn out to be a white elephant.
AI methods search for a mathematical function that transforms an input space into an output space (e.g., XGBoost, neural nets). This search can yield an analytic function drawn from an uncountable hypothesis space of functions (e.g., XGBoost), or a neural net trained through supervised learning, in which case the function is hard to specify analytically and is better viewed as a collection of weights and non-linear nodes.
Additionally, the function is selected according to a specific loss-function criterion, AND the "labelling" depends on a notion of truth. Further, the function search is premised on convergence from an arbitrary bootstrap (initialization).
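To make the setup concrete, here is a minimal sketch of that framing, with toy data, a hand-picked learning rate, and a one-parameter hypothesis space chosen purely for illustration: a function f_w(x) = w·x is searched for by minimizing a squared-error loss on labelled data, starting from an arbitrary initialization.

```python
import numpy as np

# Toy "labelled" data: the labels encode the notion of truth (true w = 3.0 plus noise).
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + rng.normal(scale=0.1, size=100)

# Function search: pick f_w(x) = w * x from a hypothesis space parameterized
# by w, by gradient descent on the mean-squared-error loss criterion.
w = 0.0   # arbitrary bootstrap (initialization)
lr = 0.1  # illustrative learning rate
for _ in range(200):
    grad = np.mean(2 * (w * x - y) * x)  # gradient of the MSE loss w.r.t. w
    w -= lr * grad                       # converges toward w ≈ 3 on this data

print(w)
```

Whether this iteration converges at all depends on the loss surface, the learning rate, and the starting point, which is exactly the convergence premise flagged above.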
So my question is: can the original question be reduced to the Turing machine halting problem?

On the other hand, beyond the mathematical and analytic reasoning about the science behind it, I would look at the opportunity set unleashed by AI. Advances in humanoids, self-driving, and the automation of many other tasks will free up human capital to explore more advanced applications and push frontiers in anything AI touches.
It may not have a positive ROI in the short term, but it should be positive in the long term (5+ years).

https://bloomberg.github.io/foml/#lectures